Particle Physics Planet


July 24, 2017

Christian P. Robert - xi'an's og

Jeffreys priors for mixtures [or not]

Clara Grazian and I have just arXived [and submitted] a paper on the properties of Jeffreys priors for mixtures of distributions. (An earlier version had not been deemed of sufficient interest by Bayesian Analysis.) In this paper, we consider the formal Jeffreys prior for a mixture of Gaussian distributions and examine whether or not it leads to a proper posterior with a sufficient number of observations. In general, it does not, and hence it cannot be used as a reference prior. While this is a negative result (and this is why Bayesian Analysis did not deem it of sufficient importance), I find it definitely relevant because it shows that the default reference prior [in the sense that the Jeffreys prior is the primary choice in noninformative settings] does not operate in this wide class of distributions. What is surprising is that the use of a Jeffreys-like prior on a global location-scale parameter (as in our 1996 paper with Kerrie Mengersen or our recent work with Kaniav Kamary and Kate Lee) remains legit if proper priors are used on all the other parameters. (This may be yet another illustration of the tequila-like toxicity of mixtures!)

Francisco Rubio and Mark Steel already exhibited this difficulty of the Jeffreys prior for mixtures of densities with disjoint supports [which reveal the mixture latent variable and hence turn the problem into something different]. Which relates to another point of interest in the paper, derived from a 1988 [Valencià Conference!] paper by José Bernardo and Javier Girón, where they show that the posterior associated with a Jeffreys prior on a mixture is proper when (a) only estimating the weights p and (b) using densities with disjoint supports. José and Javier use in this paper an astounding argument that I had not seen before and which took me a while to ingest and accept. Namely, the Jeffreys prior on an observed model with latent variables is bounded from above by the Jeffreys prior on the corresponding completed model. Hence, if the latter leads to a proper posterior for the observed data, so does the former. Very smooth, indeed!!!
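
As a quick numerical illustration of the weights-only case (with two arbitrary, well-separated Gaussian components, nothing to do with the examples in the paper), the Fisher information of the weight can be integrated numerically and the resulting Jeffreys prior checked for properness:

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    # Two fixed, illustrative Gaussian components; only the weight p is unknown,
    # as in the weights-only case discussed above.
    f1 = norm(loc=-2, scale=1).pdf
    f2 = norm(loc=3, scale=1).pdf

    def fisher_info(p):
        """Fisher information of the weight p in the mixture p*f1 + (1-p)*f2:
        I(p) = E[(d/dp log f)^2] = integral of (f1 - f2)^2 / f, done numerically."""
        integrand = lambda x: (f1(x) - f2(x)) ** 2 / (p * f1(x) + (1 - p) * f2(x))
        return quad(integrand, -np.inf, np.inf)[0]

    # The (unnormalised) Jeffreys prior on the weight is sqrt(I(p)).
    ps = np.linspace(0.01, 0.99, 25)
    jeffreys = np.sqrt([fisher_info(p) for p in ps])
    print(np.trapz(jeffreys, ps))   # finite, consistent with the properness result above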

Actually, we still support the use of the Jeffreys prior, but only on the mixture weights, because it has the property established by Judith and Kerrie of acting as a conservative prior on the number of components. Obviously, we cannot advocate its use over all the parameters of the mixture since it then leads to an improper posterior.


Filed under: Books, Statistics, University life Tagged: Bayesian Analysis, improper posteriors, Jeffreys priors, mixtures of distributions, noninformative priors, reference priors

by xi'an at July 24, 2017 10:17 PM

Peter Coles - In the Dark

Deep Time and Doggerland

One of the bonuses on offer during the BBC Proms season on Radio 3 is the opportunity to listen to the fascinating discussions recorded over the road from the Albert Hall at Imperial College and broadcast during the intervals under the title of Proms Extra. Last week (at Prom Number 4) there was a discussion with the title Deep Time, taking its theme from the UK premiere of a fascinating composition of the same name by Sir Harrison Birtwistle.

The Proms Extra programme focussed on `Deep Time’ in the sense in which it is used in geology, i.e. time as inferred from rock strata and the fossil record. In the course of the discussion mention was made of Doggerland, which is not, as you might imagine, a theme park devoted to outdoor sexual activities, but an area now submerged beneath the North Sea that connected Great Britain to continental Europe during and after the last glacial period. About 12,000 years ago, at the start of the Holocene Epoch, it is thought that the area now covered by the North Sea looked something like this:

(Picture credit: this website). Obviously the cities marked on the map were not there at the time! Britain was connected to the mainland, although much of the land mass was still under glaciers. At the end of the last ice age the glaciers retreated, sea levels rose and the area once covered by Doggerland was submerged. It is thought that this happened around 8500 years ago, so Great Britain has been separated from the continent for less than 10,000 years.

Doggerland gets its name from the Dogger Bank, a huge sandbank off the North-Eastern coast of England which is thought to be a glacial moraine left behind by the retreating ice sheet. The Dogger Bank lies about 60 miles from the coast, and is about 60 miles wide by 100 miles long. The water is quite shallow – typically 20 metres deep – and it is a well-known fishing area. Its name derives from the old Dutch fishing vessels called doggers which specialised in catching cod. Here’s a map (from here) showing the Dogger Bank:

When I was a teenager I had the opportunity, with a few friends from school, to go out from Newcastle in a trawler to the Dogger Bank. The skipper insisted that the Dogger Bank was, in places, so shallow that you could paddle around on it with your trousers rolled up. We all believed him, but he was clearly having us on!

The other thing I remember about that trip in a trawler – apart from the all-pervasive smell of fish – was that a bit of a storm brewed up on the way home. All my school friends got sea-sick, but I didn’t. That was the first time I realised that I don’t suffer from seasickness. I can enjoy travelling on ships and boats without having to worry about it.

Dogger is of course also the name of one of the sea areas used in the Shipping Forecast: it is East of the coastal area Tyne, South of Forties, North of Humber and West of German Bight. Whenever the shipping forecast comes on the radio, I feel a bit of nostalgia hearing the names of these areas read out.

Anyway, trawlers operating at the Dogger Bank frequently bring up bits of ancient animals (including mammoth and rhinoceros) as well as prehistoric human artefacts, showing that the area was at one time inhabited. I don’t think anybody knows exactly how long it took Doggerland to become submerged, but it may well have involved one or more catastrophic flooding events. If there were people living on Doggerland then, they obviously had to migrate one way or the other.



by telescoper at July 24, 2017 02:37 PM

John Baez - Azimuth

Correlated Equilibria in Game Theory

Erica Klarreich is one of the few science journalists who explains interesting things I don’t already know, clearly enough that I can understand them. I recommend her latest article:

• Erica Klarreich, In game theory, no clear path to equilibrium, Quanta, 18 July 2017.

Economists like the concept of ‘Nash equilibrium’, but it’s problematic in some ways. This matters for society at large.

In a Nash equilibrium for a multi-player game, no player can improve their payoff by unilaterally changing their strategy. This doesn’t mean everyone is happy: it’s possible to be trapped in a Nash equilibrium where everyone is miserable, because anyone changing their strategy unilaterally would be even more miserable. (Think ‘global warming’.)
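
To make this concrete, here's a little sketch (my own toy payoff numbers, not anything from Klarreich's article) that finds pure-strategy Nash equilibria by brute force, in a 2-player game set up so that the only equilibrium is the one where everyone is miserable:

    import numpy as np
    from itertools import product

    # Toy payoff matrices for a two-player game (made-up numbers).
    # Strategy 0 = "abate", strategy 1 = "pollute"; A is the row player's
    # payoff, B the column player's.
    A = np.array([[3, 0],
                  [4, 1]])
    B = np.array([[3, 4],
                  [0, 1]])

    def pure_nash_equilibria(A, B):
        """Return all pure-strategy profiles where neither player can gain
        by a unilateral deviation."""
        eqs = []
        for i, j in product(range(A.shape[0]), range(B.shape[1])):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
        return eqs

    print(pure_nash_equilibria(A, B))   # [(1, 1)]: both pollute, both worse off

Both players would prefer the (abate, abate) outcome, worth 3 to each, but it isn't an equilibrium: either player can grab 4 by unilaterally switching to 'pollute', so the only stable profile is the bad one.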

The great thing about Nash equilibria is that their meaning is easy to fathom, and they exist. John Nash won a Nobel prize for a paper proving that they exist. His paper was less than one page long. But he proved the existence of Nash equilibria for arbitrary multi-player games using a nonconstructive method: a fixed point theorem that doesn’t actually tell you how to find the equilibrium!

Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes the Nash equilibrium prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:

• Wikipedia, Correlated equilibrium.

The precise mathematical definition near the start of this article is a pretty good example of how you shouldn’t explain something: it contains a big fat equation containing symbols not mentioned previously, and so on. By thinking about it for a while, I was able to fight my way through it. Someday I should improve it—and someday I should explain the idea here! But for now, I’ll just quote this passage, which roughly explains the idea in words:

The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy (assuming the others don’t deviate), the distribution is called a correlated equilibrium.
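
To unpack that definition with a toy example (mine, not from the article or from Wikipedia), here is a sketch that checks the incentive constraints for the classic game of Chicken, with a mediator who recommends (Dare, Chicken), (Chicken, Dare) and (Chicken, Chicken) each with probability 1/3; the payoff numbers are just illustrative:

    import numpy as np

    # Game of Chicken (illustrative payoffs). Actions: 0 = Dare, 1 = Chicken.
    # A is the row player's payoff, B the column player's.
    A = np.array([[0, 7],
                  [2, 6]])
    B = np.array([[0, 2],
                  [7, 6]])

    # The mediator draws a joint recommendation from this distribution:
    # never (Dare, Dare); each of the other three profiles with probability 1/3.
    P = np.array([[0, 1/3],
                  [1/3, 1/3]])

    def is_correlated_equilibrium(P, A, B, tol=1e-12):
        """Check the incentive constraints: a player told to play an action
        must not gain by switching to another action, given the conditional
        distribution of the opponent's recommendation."""
        for a in range(2):               # row player's recommendation
            for dev in range(2):
                if P[a, :] @ A[a, :] < P[a, :] @ A[dev, :] - tol:
                    return False
        for b in range(2):               # column player's recommendation
            for dev in range(2):
                if P[:, b] @ B[:, b] < P[:, b] @ B[:, dev] - tol:
                    return False
        return True

    print(is_correlated_equilibrium(P, A, B))    # True
    print((P * A).sum(), (P * B).sum())          # each player averages 5.0

With these payoffs each player averages 5, better than the roughly 4.7 they would get in the symmetric mixed-strategy Nash equilibrium of the same game, a small instance of the phenomenon Aumann discovered.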

According to Erica Klarreich it’s a useful notion. She even makes it sound revolutionary:

This might at first sound like an arcane construct, but in fact we use correlated equilibria all the time—whenever, for example, we let a coin toss decide whether we’ll go out for Chinese or Italian, or allow a traffic light to dictate which of us will go through an intersection first.

In [some] examples, each player knows exactly what advice the “mediator” is giving to the other player, and the mediator’s advice essentially helps the players coordinate which Nash equilibrium they will play. But when the players don’t know exactly what advice the others are getting—only how the different kinds of advice are correlated with each other—Aumann showed that the set of correlated equilibria can contain more than just combinations of Nash equilibria: it can include forms of play that aren’t Nash equilibria at all, but that sometimes result in a more positive societal outcome than any of the Nash equilibria. For example, in some games in which cooperating would yield a higher total payoff for the players than acting selfishly, the mediator can sometimes beguile players into cooperating by withholding just what advice she’s giving the other players. This finding, Myerson said, was “a bolt from the blue.”

(Roger Myerson is an economics professor at the University of Chicago who won a Nobel prize for his work on game theory.)

And even though a mediator can give many different kinds of advice, the set of correlated equilibria of a game, which is represented by a collection of linear equations and inequalities, is more mathematically tractable than the set of Nash equilibria. “This other way of thinking about it, the mathematics is so much more beautiful,” Myerson said.

While Myerson has called Nash’s vision of game theory “one of the outstanding intellectual advances of the 20th century,” he sees correlated equilibrium as perhaps an even more natural concept than Nash equilibrium. He has opined on numerous occasions that “if there is intelligent life on other planets, in a majority of them they would have discovered correlated equilibrium before Nash equilibrium.”

When it comes to repeated rounds of play, many of the most natural ways that players could choose to adapt their strategies converge, in a particular sense, to correlated equilibria. Take, for example, “regret minimization” approaches, in which before each round, players increase the probability of using a given strategy if they regret not having played it more in the past. Regret minimization is a method “which does bear some resemblance to real life — paying attention to what’s worked well in the past, combined with occasionally experimenting a bit,” Roughgarden said.

(Tim Roughgarden is a theoretical computer scientist at Stanford University.)

For many regret-minimizing approaches, researchers have shown that play will rapidly converge to a correlated equilibrium in the following surprising sense: after maybe 100 rounds have been played, the game history will look essentially the same as if a mediator had been advising the players all along. It’s as if “the [correlating] device was somehow implicitly found, through the interaction,” said Constantinos Daskalakis, a theoretical computer scientist at the Massachusetts Institute of Technology.

As play continues, the players won’t necessarily stay at the same correlated equilibrium — after 1,000 rounds, for instance, they may have drifted to a new equilibrium, so that now their 1,000-game history looks as if it had been guided by a different mediator than before. The process is reminiscent of what happens in real life, Roughgarden said, as societal norms about which equilibrium should be played gradually evolve.

In the kinds of complex games for which Nash equilibrium is hard to reach, correlated equilibrium is “the natural leading contender” for a replacement solution concept, Nisan said.
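
The regret-minimizing dynamics described in the quoted passage are simple enough to sketch in a few lines. Here is a hedged toy implementation, in the style of Hart and Mas-Colell's regret matching, run on the same Chicken payoffs as above (it is not the specific algorithms studied in the papers Klarreich discusses):

    import numpy as np

    rng = np.random.default_rng(0)

    # Game of Chicken once more, written as payoff[own_action, opponent_action];
    # the game is symmetric so one matrix serves both players (0 = Dare, 1 = Chicken).
    U = np.array([[0, 7],
                  [2, 6]])

    regrets = [np.zeros(2), np.zeros(2)]     # cumulative regrets, one array per player
    joint_counts = np.zeros((2, 2))          # empirical distribution of joint play
    n_rounds = 20000

    def regret_matching(r):
        """Play each action with probability proportional to its positive regret."""
        pos = np.maximum(r, 0.0)
        return pos / pos.sum() if pos.sum() > 0 else np.full(2, 0.5)

    for _ in range(n_rounds):
        acts = [rng.choice(2, p=regret_matching(r)) for r in regrets]
        for p in range(2):
            own, other = acts[p], acts[1 - p]
            # regret: what each alternative action would have paid, minus what we got
            regrets[p] += U[:, other] - U[own, other]
        joint_counts[acts[0], acts[1]] += 1

    # the empirical joint distribution approaches the set of correlated equilibria
    print(joint_counts / n_rounds)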

As Klarreich hints, you can find correlated equilibria using a technique called linear programming. That was proved here, I think:

• Christos H. Papadimitriou and Tim Roughgarden, Computing correlated equilibria in multi-player games, J. ACM 55 (2008), 14:1-14:29.
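
Here is a hedged sketch of what that linear-programming formulation looks like for the same toy game: the incentive constraints are linear in the joint distribution, so maximizing total payoff over correlated equilibria is a small LP (scipy's linprog is used for convenience; nothing here is taken from the paper itself):

    import numpy as np
    from scipy.optimize import linprog

    # Game of Chicken payoffs once more (row player A, column player B).
    A = np.array([[0, 7], [2, 6]])
    B = np.array([[0, 2], [7, 6]])
    n = 2
    idx = lambda i, j: i * n + j     # flatten the joint distribution p[i, j]

    A_ub, b_ub = [], []
    # Row player: obeying recommendation a must beat deviating to a_dev.
    for a in range(n):
        for a_dev in range(n):
            if a_dev != a:
                row = np.zeros(n * n)
                for j in range(n):
                    row[idx(a, j)] = A[a_dev, j] - A[a, j]
                A_ub.append(row)
                b_ub.append(0.0)
    # Column player: same incentive constraints for its recommendations.
    for b in range(n):
        for b_dev in range(n):
            if b_dev != b:
                row = np.zeros(n * n)
                for i in range(n):
                    row[idx(i, b)] = B[i, b_dev] - B[i, b]
                A_ub.append(row)
                b_ub.append(0.0)

    # Probabilities sum to one; maximize total payoff (minimize its negative).
    res = linprog(c=-(A + B).flatten(),
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n * n)), b_eq=np.array([1.0]),
                  bounds=(0, 1))
    print(res.x.reshape(n, n))       # a total-payoff-maximizing correlated equilibrium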

Do you know something about correlated equilibria that I should know? If so, please tell me!


by John Baez at July 24, 2017 02:54 AM

July 23, 2017

Christian P. Robert - xi'an's og

forward event-chain Monte Carlo

One of the authors of this paper contacted me to point out their results arXived last February [and revised last month] as being related to our bouncy particle paper arXived two weeks ago. And to an earlier paper by Michel et al. (2014) published in the Journal of Chemical Physics. (The authors actually happen to work quite nearby, on a suburban road I take every time I bike to Dauphine!) I think one reason we missed this paper in our literature survey is the use of a vocabulary taken from Physics rather than from our Monte Carlo community, as in, e.g., using “event chain” instead of “bouncy particle”… The paper indeed contains schemes similar to ours, as does the ongoing work by Chris Sherlock and co-authors that Chris presented last week at the Isaac Newton Institute workshop on scalability. (Although I had trouble reading its physics style, in particular the justification for stationarity or “global balance” and the use of “infinitesimals”.)

“…we would like to find the optimal set of directions {e} necessary for the ergodicity and  allowing for an efficient exploration of the target distribution.”

The improvement sought concerns the choice of the chain direction at each direction change, in order to avoid random walk behaviour. The proposal is to favour directions close to the gradient of the log-likelihood, keeping the component orthogonal to this gradient constant in direction (as in our paper) if not in scale. (As indicated above, I have trouble understanding the ergodicity proof, if not the irreducibility. I also do not see how solving (11), which should be (12), is feasible in general. And why (29) amounts to simulating from (27)…)
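
For readers who have not met these samplers, here is a minimal sketch of a plain bouncy particle sampler on a standard Gaussian target. It is my own illustration, not the scheme of the paper under discussion, which additionally biases the direction choice towards the gradient of the log-likelihood:

    import numpy as np

    rng = np.random.default_rng(0)

    def grad_U(x):
        """Gradient of U(x) = ||x||^2 / 2, i.e. a standard Gaussian target exp(-U)."""
        return x

    def bps_gaussian(n_samples, dim=2, lam_ref=1.0, dt=0.5):
        """Minimal bouncy particle sampler for a standard Gaussian target.
        Positions are recorded every dt units of continuous time along the
        piecewise linear trajectory; refreshment keeps the sampler ergodic."""
        x = np.zeros(dim)
        v = rng.standard_normal(dim)
        samples, t, t_next = [], 0.0, dt
        while len(samples) < n_samples:
            # exact first bounce time: the event rate along the ray is max(0, a + b*s)
            a, b = v @ x, v @ v
            e = rng.exponential()
            if a >= 0:
                tau_bounce = (-a + np.sqrt(a * a + 2 * b * e)) / b
            else:
                tau_bounce = -a / b + np.sqrt(2 * e / b)
            tau_ref = rng.exponential(1.0 / lam_ref)      # velocity refreshment time
            tau = min(tau_bounce, tau_ref)

            # record the positions passed along the straight segment [t, t + tau]
            while t_next <= t + tau and len(samples) < n_samples:
                samples.append(x + (t_next - t) * v)
                t_next += dt

            x, t = x + tau * v, t + tau
            if tau_ref < tau_bounce:
                v = rng.standard_normal(dim)              # refresh the velocity
            else:
                g = grad_U(x)
                v = v - 2 * (v @ g) / (g @ g) * g         # bounce off the gradient

        return np.array(samples)

    draws = bps_gaussian(5000)
    print(draws.mean(axis=0), draws.var(axis=0))          # should be close to 0 and 1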


Filed under: Statistics, Travel, University life Tagged: bouncy particle sampler

by xi'an at July 23, 2017 10:17 PM

Peter Coles - In the Dark

Women’s Cricket World Cup Winners!

There wasn’t any cricket in Cardiff today because  Glamorgan’s T20 Blast match was abandoned without a ball being bowled. However, that meant I was able to follow the thrilling final of the Women’s World Cup at Lord’s, which was won by England by 9 runs.

I didn’t think England’s total of 228/7 off 50 overs was going to be enough, and India seemed to be set for a comfortable win, but England’s bowlers stuck to their task magnificently and India crumbled in the last five overs to be bowled out for 219, having lost their last 6 wickets for just 24 runs.

A great performance by England and a magnificent advertisement for Women’s Cricket in front of a sellout crowd at Lord’s. I think this may herald a huge surge in popularity for the women’s game. Congratulations to England and commiserations to India.

Now, is there anything to stop England fielding an all-female team against South Africa on Thursday? England women played with a lot more determination today than the men did against South Africa at Trent Bridge!


by telescoper at July 23, 2017 05:00 PM

Peter Coles - In the Dark

The Brexit HBR Business Case

I think the Government has picked option C!

Voice of TREASON


Today we’re going to work through a strategic business case to evaluate how you’re likely to perform in role.

Investment Case

You have an initial investment of £50-60B to make that will have an impact in £100s of Billions over decades. The transformation will completely distract your Executive Team and all your senior managers, leaving you unable to do anything else except the project. Once initiated, the project cost will be sunk and the company irreversibly committed to the course.

All of your consultants have advised you against initiating the project. Your competitors, sensing a misstep, have started to hire your most trusted staff. You have a tenuous grip on your board and e-team and expect to lose some critical board votes that would secure the project.

You’re certain you don’t have the staff to manage the initial analysis, let alone the deployment of the project.

A year ago…



by telescoper at July 23, 2017 10:56 AM

July 22, 2017

Christian P. Robert - xi'an's og

Takaisin helsinkiin

I am off tomorrow morning to Helsinki for the European Meeting of Statisticians (EMS 2017). Where I will talk on how to handle multiple estimators in Monte Carlo settings (although I have not made enough progress in this direction to include anything truly novel in the talk!) Here are the slides:

I look forward to this meeting, as I remember quite fondly the previous one I attended in Budapest. Which was of the highest quality in terms of talks and interactions. (I also remember working hard with Randal Douc on a yet-unfinished project!)


Filed under: pictures, Statistics, Travel Tagged: ABCruise, conference, EMS 2017, Europe, ferry harbour, Finland, folded Markov chain, Helsinki, North, Randal Douc, Scandinavia

by xi'an at July 22, 2017 10:17 PM

Peter Coles - In the Dark

Natwest T20 Blast: Glamorgan v Sussex

Last night’s Twenty20 match in Cardiff was planned as a staff social outing for members of the School of Physics & Astronomy at Cardiff University. I had to do some things at home before the 6.30 start so didn’t join the group that went to a pub first but went straight to the ground.

It had rained much of the day, but stopped around 6pm. When I got to the ground the covers were still on:

The umpires inspected the pitch at 7pm, and during their deliberations it started drizzling. They decided to have another look at 7.30.

I stayed in the ground, updating the rest of the staff group who happily stayed in the pub while I sat in the gloom of a sparsely populated SWALEC.

Eventually the ground staff started to remove the covers

The toss finally took place at 8pm. Glamorgan won and decided to field. Play would start at 8.30, with 9 overs per side.

Play did get under way at 8.30.

It was predictably knockabout stuff, with Sussex slogging from the word go. They reached 87 for 2 off 8 overs, but then the rain returned. A little after 9pm the game was abandoned. As fewer than 10 overs had been bowled, tickets were refunded.

It was a shame that we didn’t get a full game, not only because the social event was a damp squib, but also because Glamorgan really wanted a win. Their previous match at the SWALEC (against Somerset last Saturday) was also rained off but their match  the following day against Essex in Chelmsford led to a victory with a six off the last ball as Glamorgan chased 220 to win off 20 overs.

Anyway, it’s the return match against Essex in Cardiff on Sunday so let’s hope for a full game then.


by telescoper at July 22, 2017 11:09 AM

July 21, 2017

Lubos Motl - string vacua and pheno

Does weak gravity conjecture predict neutrino type, masses and cosmological constant?
String cosmologist Gary Shiu and his junior collaborator Yuta Hamada (Wisconsin) released a rather fascinating hep-th preprint today
Weak Gravity Conjecture, Multiple Point Principle and the Standard Model Landscape
They are combining some of the principles that are seemingly most abstract, most stringy, and use them in such a way that they seem to deduce an estimate for utterly observable quantities such as a realistic magnitude of neutrino masses, their being Dirac, and a sensible estimate for the cosmological constant, too.



What have they done?

In 2005, when I watched him happily, Cumrun Vafa coined the term swampland for the "lore" that was out there but wasn't clearly articulated before that. Namely the lore that even in the absence of the precise identified vacuum of string theory, string theory seems to make some general predictions and ban certain things that would be allowed in effective quantum field theories. According to Vafa, the landscape may be large but it is still just an infinitely tiny, precious fraction embedded in a much larger and less prestigious region, the swampland, the space of possible effective field theories which is full of mud, feces, and stinking putrefying corpses of critics of string theory such as Mr Šmoits. Vafa's paper is less colorful but be sure that this is what he meant. ;-)

The weak gravity conjecture – the hypothesis (justified by numerous very different and complementary pieces of evidence) that consistency of quantum gravity really demands gravity among elementary particles to be weaker than other forces – became the most well-known example of the swampland reasoning. But Cumrun and his followers have pointed out several other general predictions that may be made in string theory but not without it.




Aside from the weak gravity conjecture, Shiu and Hamada use one particular observation: that theories of quantum gravity (=string/M-theory in the most general sense) should be consistent not only in their original spacetime but it should also be possible to compactify them while preserving the consistency.




Shiu and Hamada use this principle for the Core Theory, as Frank Wilczek calls the Standard Model combined with gravity. Well, it's only the Standard Model part that is "really" exploited by Shiu and Hamada. However, the fact that the actual theory also contains quantum gravity is needed to justify the application of the quantum gravity anti-swampland principle. Their point is highly creative. When the surrounding Universe including the Standard Model is a vacuum of string/M-theory, some additional operations – such as extra compactification – should be possible with this vacuum.

On top of these swampland things, Shiu and Hamada also adopt another principle, Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle. The principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something special seems to happen over there. This principle has been used to argue that the fine-structure constant should be around \(\alpha\approx 1/(136.8\pm 9)\), the top quark mass should be \(m_t\approx 173\pm 5 \GeV\), the Higgs mass should be \(m_h\approx 135\pm 9 \GeV\), and so on. The track record of this principle looks rather impressive to me. In some sense, this principle isn't just inequivalent to naturalness; it is close to its opposite. Naturalness could favor points in the bulk of a "single phase"; the multiple criticality principle favors points in the parameter space that are of "measure zero" to a maximal power, in fact.

Fine. So Shiu and Hamada take our good old Standard Model and compactify one or two spatial dimensions on a circle \(S^1\) or the torus \(T^2\) because you shouldn't be afraid of doing such things with the string theoretical vacua, and our Universe is one of them. When they compactify it, they find out that aside from the well-known modest Higgs vev, there is also a stationary point where the Higgs vev is Planckian.

So they analyze the potential as the function of the scalar fields and find out that depending on the unknown facts about the neutrinos, these extra stationary points may be unstable because of various new instabilities. Now, they also impose the multiple point criticality principle and demand our 4-dimensional vacuum to be degenerate with the 3-dimensional compactification – where one extra spatial dimension becomes a short circle. This degeneracy is an unusual, novel, stringy application of the multiple criticality principle that was previously used for boring quantum field theories only.

This degeneracy basically implies that the neutrino masses must be of order \(1-10\meV\). Obviously, they knew in advance that they wanted to get a similar conclusion because this conclusion seems to be most consistent with our knowledge about neutrinos. And neutrinos should be Dirac fermions, not Majorana fermions. Dirac neutrinos are needed for the spin structure to disable a decay by Witten's bubble of nothing. On top of that, the required vacua only exist if the cosmological constant is small enough, so they have a new justification for the smallness of the cosmological constant that must be comparable to the fourth power of these neutrino masses, too – and as you may know, this is a good approximate estimate of the cosmological constant, too.
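
To see that the numbers work out, here is a back-of-the-envelope check with standard cosmological parameters (my illustrative numbers, not taken from the Shiu-Hamada paper): the fourth root of the observed dark energy density indeed sits in the meV ballpark.

    import numpy as np

    # Standard values assumed for this estimate (not from the paper).
    hbar = 1.054571817e-34        # J s
    c = 2.99792458e8              # m / s
    G = 6.67430e-11               # m^3 / (kg s^2)
    H0 = 67.7 * 1e3 / 3.0857e22   # Hubble constant, 67.7 km/s/Mpc in 1/s
    Omega_Lambda = 0.69

    rho_crit = 3 * H0**2 / (8 * np.pi * G)          # kg / m^3
    rho_Lambda = Omega_Lambda * rho_crit * c**2     # J / m^3

    # express the density as the fourth power of an energy scale:
    # rho = E^4 / (hbar c)^3  =>  E = (rho * (hbar c)^3)^(1/4)
    E_joule = (rho_Lambda * (hbar * c) ** 3) ** 0.25
    E_meV = E_joule / 1.602176634e-19 * 1e3
    print(f"Lambda^(1/4) ~ {E_meV:.1f} meV")        # roughly 2 meV, inside the 1-10 meV window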

Note that back in 1994, Witten still believed that the cosmological constant had to be zero and he used a compactification of our 4D spacetime down to 3D to get an argument. In some sense, Shiu and Hamada are doing something similar – they don't cite that paper by Witten, however – except that their setup is more advanced and it produces a conclusion that is compatible with the observed nonzero cosmological constant.



Jožin from the Swampland mainly eats the inhabitants of Prague. And who could have thought? He can only be dealt with effectively with the help of a crop duster.

So although these principles are abstract and at least some of them seem unproven or even "not sufficiently justified", there seems to be something correct about them because Shiu and Hamada may extract rather realistic conclusions out of these principles. But if they are right, I think that they did much more than an application of existing principles. They applied them in truly novel, creative ways.

If their apparent success were more than just a coincidence, I would love to understand the deeper reasons why the multiple criticality principle is right and many other things that are needed for a satisfactory explanation why this "had to work".

by Luboš Motl (noreply@blogger.com) at July 21, 2017 02:30 PM

Symmetrybreaking - Fermilab/SLAC

Watch the underground groundbreaking

This afternoon, watch a livestream of the start of excavation for the future home of the Deep Underground Neutrino Experiment.

Photo of the Yates surface facilities at Sanford Lab, a white building surrounded by tree-covered mountains

Today in South Dakota, dignitaries, scientists and engineers will mark the start of construction of the future home of America's flagship neutrino experiment with a groundbreaking ceremony.

Participants will hold shovels and give speeches. But this will be no ordinary groundbreaking. It will take place a mile under the earth at Sanford Underground Research Facility, the deepest underground physics lab in the United States.

The groundbreaking will celebrate the beginning of excavation for the Long-Baseline Neutrino Facility, which will house the Deep Underground Neutrino Experiment. When complete, LBNF/DUNE will be the largest experiment ever built in the US to study the properties of mysterious particles called neutrinos. Unlocking the mysteries of these particles could help explain more about how the universe works and why matter exists at all.

Watch the underground groundbreaking at 2:20 p.m. Mountain Time (3:20 p.m. Central) via livestream.

by Kathryn Jepsen at July 21, 2017 01:00 PM

Peter Coles - In the Dark

The Dead Statesman

I could not dig; I dared not rob:
Therefore I lied to please the mob.
Now all my lies are proved untrue
And I must face the men I slew.
What tale shall serve me here among
Mine angry and defrauded young?

by Rudyard Kipling (1865-1936)


by telescoper at July 21, 2017 12:24 PM

Christian P. Robert - xi'an's og

Boots is deliberately overcharging for the morning-after pill!

“Boots charges £28.25 for Levonelle emergency contraceptive (the leading brand) and £26.75 for its own generic version. Tesco now charges £13.50 for Levonelle and Superdrug £13.49 for a generic version. In France, the tablet costs £5.50.” The Guardian, July 20, 2017


Filed under: Kids, pictures Tagged: #justsaynon, boots, boycott, morning-after pill, United Kingdom

by xi'an at July 21, 2017 12:18 PM

Emily Lakdawalla - The Planetary Society Blog

LightSail 2 updates: Prox-1 mission changes, new launch date
LightSail 2 and Prox-1 are expected to launch aboard a SpaceX Falcon Heavy no earlier than April 30, 2018.

July 21, 2017 11:00 AM

July 20, 2017

Axel Maas - Looking Inside the Standard Model

Getting better
One of the main tools in our research is numerical simulation. E.g. the research of the previous entry would have been impossible without it.

Numerical simulations require computers to run them. And even though computers become continuously more powerful, they are limited in the end. Not to mention that they cost money to buy and to use. Yes, also using them is expensive. Think of the electricity bill or even having space available for them.

So, to reduce the costs, we need to use them efficiently. That is good for us, because we can do more research in the same time. And that means that we as a society can make scientific progress faster. But it also reduces financial costs, which in fundamental research almost always means the taxpayer's money. And it reduces the environmental impact of owning and running the computers. That is also something which should not be forgotten.

So what does efficiently mean?

Well, we need to write our own computer programs. What we do, nobody did before us. Most of what we do is really at the edge of what we understand. So nobody was here before us who could have provided us with computer programs. We write them ourselves.

For that to be efficient, we need three important ingredients.

The first seems to be quite obvious. The programs should be correct before we use them to make a large scale computation. It would be very wasteful to run on a hundred computers for several months, just to figure out it was all for naught, because there was an error. Of course, we need to test them somewhere, but this can be done with much less effort. But this actually takes quite some time. And it is very annoying. But it needs to be done.

The next two issues seem to be the same, but are actually subtly different. We need to have fast and optimized algorithms. The important difference is: the quality of the algorithm decides how fast it can be in principle. The actual optimization decides to which extent it uses this potential.

The latter point is something which requires a substantial amount of experience with programming. It is not something which can be learned theoretically. And it is more of a craftsmanship than anything else. Being good in optimization can make a program a thousand times faster. So, this is one reason why we try to teach students programming early, so that they can acquire the necessary experience before they enter research in their thesis work. Though there is still today research work which can be done without computers, it has become markedly less over the decades. It will never completely vanish, though. But it may well become a comparatively small fraction.
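
To make the distinction concrete, here is a toy illustration (unrelated to our actual codes): counting duplicate entries either by comparing all pairs or by remembering what has already been seen. Low-level optimization can shave constant factors off either version, but only the better algorithm changes how the cost grows with the size of the problem.

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.integers(0, 10_000, size=5_000)

    def count_duplicates_quadratic(values):
        """Straightforward but O(n^2): compare every pair of entries."""
        n = 0
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if values[i] == values[j]:
                    n += 1
                    break
        return n

    def count_duplicates_hashed(values):
        """Better algorithm, roughly O(n): remember what has been seen."""
        seen, n = set(), 0
        for v in values:
            if v in seen:
                n += 1
            seen.add(v)
        return n

    for f in (count_duplicates_quadratic, count_duplicates_hashed):
        start = time.perf_counter()
        result = f(data)
        print(f.__name__, result, f"{time.perf_counter() - start:.3f}s")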

But whatever optimization can do, it can do only so much without good algorithms. And now we enter the main topic of this entry.

It is not only the code which we develop by ourselves. It is also the algorithms. Because again, they are new. Nobody did this before. So it is also up to us to make them efficient. But to really write a good algorithm requires knowledge about its background. This is called domain-specific knowledge: knowing the scientific background. One reason more why you cannot get it off-the-shelf. Thus, if you want to calculate something new in research using computer simulations, that usually means sitting down and writing a new algorithm.

But even once an algorithm is written down, this does not mean that it is necessarily already the fastest possible one. This again requires experience on the one hand, but even more so it is something new. And it is thus research as well to make it fast. So algorithms can, and need to be, made better.

Right now I am supervising two bachelor theses where exactly this is done. The algorithms are indeed directly those which are involved with the research mentioned in the beginning. While both are working on the same algorithm, they do it with quite different emphasis.

The aim in one project is to make the algorithm faster, without changing its results. It is a classical case of improving an algorithm. If successful, it will make it possible to push the boundaries of what projects can be done. Thus, it makes computer simulations more efficient, and thus allows us to do more research. One goal reached. Unfortunately the 'if' already tells you that, as always with research, there is never a guarantee that it is possible. But if this kind of research should continue, it is necessary. The only alternative is waiting for a decade for the computers to become faster, and doing something different in the time in between. Not a very interesting option.

The other one is a little bit different. Here, the algorithm should be modified to serve a slightly different goal. It is not a fundamentally different goal, but it is subtly different. Thus, while it does not create a fundamentally new algorithm, it still does create something new. Something which will make a different kind of research possible. Without the modification, the other kind of research may not be possible for some time to come. But just as it is not possible to guarantee that an algorithm can be made more efficient, it is also not guaranteed that an algorithm with any reasonable amount of potential can be created at all. So this is true research as well.

Thus, it remains exciting to see what both theses will ultimately lead to.

So, as you see, behind the scenes research is quite full of the small things which make the big things possible. Both of these projects are probably closer to our everyday work than most of the things I have been posting about before. The everyday work in research is quite often a grind. But, as always, this is what makes the big things ultimately possible. Without projects such as these two theses, our progress would slow to a snail's pace.

by Axel Maas (noreply@blogger.com) at July 20, 2017 03:38 PM

Andrew Jaffe - Leaves on the Line

Python Bug Hunting

This is a technical, nerdy post, mostly so I can find the information if I need it later, but possibly of interest to others using a Mac with the Python programming language, and also since I am looking for excuses to write more here. (See also updates below.)

It seems that there is a bug in the latest (mid-May 2017) release of Apple’s macOS Sierra 10.12.5 (ok, there are plenty of bugs, as there are in any sufficiently complex piece of software).

It first manifested itself (to me) as an error when I tried to load the jupyter notebook, a web-based graphical front end to Python (and other languages). When the command is run, it opens up a browser window. However, after updating macOS from 10.12.4 to 10.12.5, the browser didn’t open. Instead, I saw an error message:

    0:97: execution error: "http://localhost:8888/tree?token=<removed>" doesn't understand the "open location" message. (-1708)

A little googling found that other people had seen this error, too. I was able to figure out a workaround pretty quickly: this behaviour only happens when I wanted to use the “default” browser, which is set in the “General” tab of the “System Preferences” app on the Mac (I have it set to Apple’s own “Safari” browser, but you can use Firefox or Chrome or something else). Instead, there’s a text file you can edit to explicitly set the browser that you want jupyter to use, located at ~/.jupyter/jupyter_notebook_config.py, by including the line

c.NotebookApp.browser = u'Safari'

(although an unrelated bug in Python means that you can’t currently use “Chrome” in this slot).

But it turns out this isn’t the real problem. I went and looked at the code in jupyter that is run here, and it uses a Python module called webbrowser. Even outside of jupyter, trying to use this module to open the default browser fails, with exactly the same error message (though I’m picking a simpler URL at http://python.org instead of the jupyter-related one above):

>>> import webbrowser
>>> br = webbrowser.get()
>>> br.open("http://python.org")
0:33: execution error: "http://python.org" doesn't understand the "open location" message. (-1708)
False

So I reported this as an error in the Python bug-reporting system, and hoped that someone with more experience would look at it.

But it nagged at me, so I went and looked at the source code for the webbrowser module. There, it turns out that the programmers use a macOS command called “osascript” (which is a command-line interface to Apple’s macOS automation language “AppleScript”) to launch the browser, with a slightly different syntax for the default browser compared to explicitly picking one. Basically, the command is osascript -e 'open location "http://www.python.org/"'. And this fails with exactly the same error message. (The similar code osascript -e 'tell application "Safari" to open location "http://www.python.org/"' which picks a specific browser runs just fine, which is why explicitly setting “Safari” back in the jupyter file works.)

But there is another way to run the exact same AppleScript command. Open the Mac app called “Script Editor”, type open location "http://python.org" into the window, and press the “run” button. From the experience with “osascript”, I expected it to fail, but it didn’t: it runs just fine.

So the bug is very specific, and very obscure: it depends on exactly how the offending command is run, so appears to be a proper bug, and not some sort of security patch from Apple (and it certainly doesn’t appear in the 10.12.5 release notes). I have filed a bug report with Apple, but these are not publicly accessible, and are purported to be something of a black hole, with little feedback from the still-secretive Apple development team.

Updates:

by Andrew at July 20, 2017 08:45 AM

July 19, 2017

Emily Lakdawalla - The Planetary Society Blog

In total eclipse of a star, New Horizons' future flyby target makes its presence known
The team reported two weeks ago that the first attempts at observing 2014 MU69 were unsuccessful. But in their third try, on July 17, astronomers in Argentina saw the telltale sign of MU69's presence: a stellar wink.

July 19, 2017 09:04 PM

Emily Lakdawalla - The Planetary Society Blog

Congress gives NASA's planetary science division some love (and a Mars orbiter)
The House of Representatives proposed $2.1 billion for NASA's planetary science budget, which would be an all-time high. Part of the increase would be used to start work on a new reconnaissance and communications orbiter.

July 19, 2017 11:00 AM

Axel Maas - Looking Inside the Standard Model

Tackling ambiguities
I have recently published a paper with a rather lengthy and abstract title. In this entry I want to shed a little light on what is going on.

The paper is actually on a problem which has occupied me for more than a decade by now: the problem of how to really define what we mean when we talk about gluons. The reason for this problem is a certain ambiguity. This ambiguity arises because it is often much more convenient to have auxiliary additional stuff around to make calculations simple. But then you have to deal with this additional stuff. In a paper last year I noted that the amount of stuff is much larger than originally anticipated. So you have to deal with more stuff.

The aim of the research leading to the paper was to make progress with that.

So what did I do? To understand this, it is first necessary to say a few words about how we describe gluons. We describe them by mathematical functions. The simplest such mathematical function makes, loosely speaking, a statement about how probable it is that a gluon moves from one point to another. Since a fancy word for moving is propagating, this function is called a propagator.

So the first question I posed was whether the ambiguity in dealing with the stuff affects this. You may ask whether this should happen at all. Is a gluon not a particle? Should this not be free of ambiguities? Well, yes and no. A particle which we actually detect should be free of ambiguities. But gluons are not detected. Gluons are, in fact, never seen directly. They are confined. This is a very peculiar feature of the strong force. And one which is not satisfactorily fully understood. But it is experimentally well established.

Since something therefore happens to gluons before we can observe them, there is a way out. If the gluon is ambiguous, then this ambiguity has to be canceled by whatever happens to it. Then whatever we detect is not ambiguous. But cancellations are fickle things. If you are not careful in your calculations, something is left uncanceled. And then your results become ambiguous. This has to be avoided. Of course, this is purely a problem for us theoreticians. The experimentalists never have this problem. A long time ago I actually wrote a paper on this together with a few other people, showing how it may proceed.

So, the natural first step is to figure out what you have to cancel. And therefore to map the ambiguity in its full extent. The possibilities discussed for decades look roughly like this:

As you see, at short distances there is (essentially) no ambiguity. This is actually quite well understood. It is a feature very deeply embedded in the strong interaction. It has to do with the fact that, despite its name, the strong interaction makes itself less known the shorter the distance. But for weak effects we have very precise tools, and we therefore understand it.

On the other hand, at long distances – well, there we did not know for a long time even qualitatively what is going on for sure. But, finally, over the decades, we were able to constrain the behavior at least partly. Now, I tested a large part of the remaining range of ambiguities. In the end, it indeed mattered little. There is almost no effect left of the ambiguity on the behavior of the gluon. So, it seems we have this under control.

Or do we? One of the important things in research is that it is never sufficient to confirm your result by looking at just a single thing. Either your explanation fits everything we see and measure, or it cannot be the full story. Or it may even be wrong, and the agreement with part of the observations is just a lucky coincidence. Well, actually not lucky. Rather terrible, since this misguides you.

Of course, doing it all in one go is a horrendous amount of work, and so you work on a few things at a time. Preferably, you first work on those where the most problems are expected. It is just that ultimately you need to have covered everything. But you cannot stop and claim victory before you did.

So I did, and looked in the paper at a handful of other quantities. And indeed, in some of them there remain effects. Especially if you look at how strong the strong interaction is, depending on the distance at which you measure it, something remains:

The effects of the ambiguity are thus not qualitative. So it does not change our qualitative understanding of how the strong force works. But there remains some quantitative effect, which we need to take into account.

There is one more important side effect. When I calculated the effects of the ambiguity, I learned also to control how the ambiguity manifests. This does not alter that there is an ambiguity, nor that it has consequences. But it allows others to reproduce how I controlled the ambiguity. This is important because now two results from different sources can be put together, and when using the same control they will fit such that for experimental observables the ambiguity cancels. And thus we have achieved the goal.

To be fair, however, this is currently at the level of operative control. It is not yet a mathematically well-defined and proven procedure. As in so many cases, this still needs to be developed. But having operative control allows one to develop the rigorous control more easily than starting without it. So, progress has been made.

by Axel Maas (noreply@blogger.com) at July 19, 2017 08:00 AM

Axel Maas - Looking Inside the Standard Model

Using evolution for particle physics
(I will start to illustrate the entries with some simple sketches. I am not very experienced with this, and thus they will be quite basic. But with making more of them I should gain experience, and they should become better eventually.)

This entry will be on the recently started bachelor thesis of Raphael Wagner.

He is addressing the following problem. One of the mainstays of our research is computer simulations. But our computer simulations are not exact. They work by simulating a physical system many times with different starts. The final result is then an average over all the simulations. There is an (almost) infinite number of starts. Thus, we cannot include them all. As a consequence, our average is not the exact value we are looking for. Rather, it is an estimate. We can also estimate the range around our result in which the real value should be.

This is sketched in the following picture

The black line is our estimate and the red lines give the range where the true value should be. From left to right some parameter runs. In the case of the thesis, the parameter is the time. The value is roughly the probability for a particle to survive this time. So we have an estimate for the survival probability.

Fortunately, we know a little more. From quite basic principles we know that this survival probability cannot depend on the time in an arbitrary way. Rather, it has a particular mathematical form. This function depends only on a very small set of numbers. The most important one is the mass of the particle.

What we then do is to start with some theory. We simulate it. And then we extract from such a survival probability the masses of the particles. Yes, we do not know them beforehand. This is because the masses of particles are changed in a quantum theory by quantum effects. These are what we simulate, to get a final value of the masses.

Up to now, we have tried to determine the mass in a very simple-minded way: we determine it by just looking for the numbers in the mathematical function which bring it closest to the data. That seems reasonable. Unfortunately, the function is not so simple. Thus, you can mathematically show that this does not necessarily give the best result. You can imagine this in the following way: imagine you want to find the deepest valley in an area. Surely, walking downhill will get you into a valley. But just by walking downhill, this will usually not be the deepest one:

But this is the way we determine the numbers so far. So there may be other options.

There is a different possibility. In the picture of the hills, you could instead deploy a number of ants, of which some prefer to walk up, some down, and some sometimes one way and sometimes the other. The ants live, die, and reproduce. Now, if you give the ants more to eat when they live in a deeper valley, at some point evolution will bring the population to live in the deepest valley:

And then you have what you want.

This is called a genetic algorithm. It is used in many areas of engineering. The processor of the computer or smartphone you use to read this has likely been optimized using such algorithms.

The bachelor thesis is now to apply the same idea to find better estimates for the masses of the particles in our simulations. This requires understanding what would be the equivalent of the depth of the valley and of the food for the ants. And how long we let evolution run its course. Then we only have to monitor the (virtual) ants to find our prize.
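
To give a flavour of what such a genetic algorithm looks like in this setting (a toy sketch with made-up data, not the actual thesis code), one can fit the mass in an exponentially decaying survival probability by letting a population of candidate masses evolve, where deeper valleys (better agreement with the data) mean more food:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: a survival probability decaying with a hypothetical mass m_true,
    # plus statistical noise, mimicking the output of a simulation.
    m_true = 0.4
    t = np.arange(1, 20)
    data = np.exp(-m_true * t) * (1 + 0.05 * rng.standard_normal(t.size))
    sigma = 0.05 * np.exp(-m_true * t)          # rough error estimate

    def fitness(m):
        """Negative chi^2: larger is better (a deeper valley means more food)."""
        return -np.sum(((np.exp(-m * t) - data) / sigma) ** 2)

    # A minimal genetic algorithm: a population of candidate masses reproduces
    # with mutation, and the fitter half survives each generation.
    pop = rng.uniform(0.01, 2.0, size=50)       # initial random "ants"
    for generation in range(200):
        fit = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(fit)][-25:]    # selection: keep the better half
        children = parents + 0.02 * rng.standard_normal(parents.size)
        pop = np.concatenate([parents, np.abs(children)])

    best = max(pop, key=fitness)
    print(f"true mass {m_true}, fitted mass {best:.3f}")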

by Axel Maas (noreply@blogger.com) at July 19, 2017 07:53 AM

Lubos Motl - string vacua and pheno

When research has no beef, it's impossible to divide credit
A few days ago, would-be researcher Sabine Hossenfelder had a headache. Why? She saw an article in Nature that was written by Chiara Marletto and Vlatko Vedral and that speculated about some ways to test the phenomena involving both quantum interference and gravity. And Hossenfelder thinks that Marletto and Vedral should "cite" Hossenfelder because:
For about 15 years, I have worked on quantum gravity phenomenology, ... my research area has its own conference series ... I have never seen nor heard anything of Chiara Marletto and Vlatko Vedral ... If they think such meetings are a good idea, I recommend they attend them. There’s no shortage. ...
And so on. Later, Hossenfelder also notices that a "Bose-Einstein condensate" has appeared in one of the – not really plausible – experiments proposed by Marletto and Vedral. So some outsiders must be stealing her work, right?

No.

The problem with Hossenfelder's claims is that regardless of those "15 years" and the bogus "conferences" for which she has misappropriated all the sponsors' money, her "field" doesn't really exist because no results that could be considered "beef of this field" have ever been found. All the texts that the likes of Hossenfelder pretend to be "research papers" are just worthless piles of junk whose only purpose is to fool the most gullible laymen and make them pay.

But she has made no impact on science (yet?). Marletto and Vedral don't mention people like Hossenfelder because people like Hossenfelder have never found a damn thing in physics.




She began to participate in this fraudulent activity as a junior collaborator to Lee Smolin, probably the most successful crackpot when it comes to deceiving the low-IQ laymen and pocketing their money.

For decades, Smolin has been saying that he can do important cutting-edge research in quantum gravity without even learning string theory. Stupid laymen who have no clue about the state of physics may buy such statements. They can even become excited when they rebrand their utter stupidity as a sort of faith or a political cause. It usually takes less than a minute to persuade them. However, actual physicists know that all these claims are ludicrously and demonstrably wrong. They know that Smolin hasn't ever found a damn thing and nothing changes about it after 15 or 30 years. The only thing that changes after 15 or 30 years is that Lee Smolin can no longer be interpreted as an occasional trickster; he is undoubtedly a life-long scammer.

And so is Sabine Hossenfelder and many people in that fraudulent movement to steal the sponsors' money – who would like "their field", started as an appendix to Smolin's tricks, to become a standalone industry. They have done these things for 15 or 30 years but there are still no results that could be understood, acknowledged, and appreciated by genuine physicists, no results that would explain something or that could make a physicist excited about an explanation that actually seems to make sense. It's really cute how Hossenfelder promotes her meetings and those of her fellow fraudsters. But meetings aren't the purpose of the scientific research. Results – explanations and predictions – are the purpose of the scientific research. And she doesn't have any.




Marletto and Vedral have speculatively described some experiments involving quantum superpositions and Bose-Einstein condensates. Those are rather elementary words so it's not shocking that you find other papers that accidentally mention these buzzwords in somewhat similar combinations. But there's still no clear experiment that should yield some interesting results and even more certainly, there are no actual results. There is no evidence that the things that they're saying are on the right track. That's why it makes no sense to cite the authors of such speculations. If they have combined the words in similar ways, it may always be considered a coincidence because each of these people has combined the buzzwords in many different random ways.

They're like a rather big group of people who try to search for gold in a forest and they're being paid for this search by some sponsors. They scream that their activity is important and it's a very promising forest but they haven't found any gold. And actual competent physicists who are outside the forest think that they know a basically rigorous proof that there's no gold in that forest: that's why no gold has been found yet. Most likely, no gold will ever be found over there. Meanwhile, Hossenfelder wants to get the credit because she's been employed – and paid – as a searcher for gold in that forest for 15 years. Sorry but your research has been worthless so far. You haven't found anything. And your conferences etc. are therefore just costs, not benefits.

When someone actually finds something new that changes the physics research, he knows it – and so do his colleagues in the subdiscipline. A random example to be specific: When Vafa discovered F-theory, a version or formulation of string theory that makes a 12-dimensional spacetime geometry enter string theory research, there's no doubt that it's a new and important thing. F-theory has some very specific properties that may be demonstrated to be what they are and not something else; it comes equipped with all the evidence that is needed to become persuaded that it exists and has these properties; and on top of that, this package of ideas and mathematical structures automatically explains some previous properties – such as S-duality of type IIB string theory – by some independent, e.g. geometric, arguments. It simply makes sense and may be built upon. So people may search for compactifications of F-theory which are equally well-defined and specific and see that they unavoidably have additional virtues when it comes to the explanation of mathematical facts, to the construction of particle physics, to the explanation of the smallness of the gravitational constant or the cosmological constant, and other things.

One may fool billions of stupid laymen and tell them that F–theory is no good or it is not science for some weird metaphysical reasons except that everyone who is not a complete idiot immediately sees that F-theory is a very specific, highly predictive theory; the arguments behind it make sense; the arguments found by Vafa are very particular and make sense as well; and one may separate them from additional followup papers that amplify the value of Vafa's finding. F-theory will clearly stay with us, it's a real structure that was unknown before the 1990s but that was found at that moment and can't be undiscovered. It's analogous to a new species of apes; the \(E_8\) Lie group; or any other tangible discovery in science. Vafa has found some gold in a mine – the string theory mine is a better place to search for gold than a forest – and others could have found gold and platinum near Vafa's place, too. And they actually did.

Hossenfelder's and similar "research of quantum gravity" pretends to be analogous except that they haven't found anything yet. There are no results. There are no consistent mathematical structures whose properties may be deduced in any way. They don't have any counterpart of type I, IIA, IIB string theory, heterotic string theory, M-theory, F-theory, Myers effect, noncommutative geometry from B-fields, precision microscopic counting of black hole entropy, Matrix theory, matrix string theory, giant gravitons, ABJM, flux vacua, monodromy inflation, AdS/CFT, ER=EPR, SYK, and I could speak for a long time. All these stringy results make sense and may be turned to chapters of a very careful textbook. They're gold – sometimes bigger amounts of gold, sometimes smaller ones, but they clearly have a positive value. Hossenfelder et al. just don't have a counterpart of a single thing. Their activity is purely sociological – an effort to deceive the laymen who simply can't distinguish gold from feces when it comes to theoretical physics. Their counterpart of heterotic string theory is some "maybe Bose-Einstein condensate blah blah I promise blah blah string theory isn't science blah blah experimentally test everything" – incoherent sequences of verbal junk.

She can't get credit for gold because she hasn't found any yet. And she can't get credit for šit because every human being produces this material at some moments, roughly once a day. So her relationship to the feces isn't exclusive or special in any way. There are millions of people who just don't understand the state of theoretical physics but who want to make important statements about it, anyway. So they end up saying similar things as Sabine Hossenfelder. But it's just the generic junk. There's no value in it, there's no exclusivity in it, and there's no way to assign credit for it because it's still random junk.

Every competent quantum gravity expert will tell you that there don't exist any viable non-stringy theories of quantum gravity on the market of physics ideas; and there don't exist any promising new (newer than 20 years) experiments to test quantum gravity in the foreseeable future. So if someone has been working in these two categories, whatever she was doing, it was clearly a waste of time and money.

It's not just about Hossenfelder and her fake field of "phenomenology of quantum gravity" where the beef is non-existent, and battles for credit are therefore unavoidable.

Two days earlier, Physics Today published an interview with Gerard 't Hooft, Q&A: Gerard ’t Hooft on the future of quantum mechanics. 't Hooft said several of the usual vague words about his crackpot assertion that quantum mechanics should be replaced with cellular automata, as dictated by his "cellular automaton interpretation of quantum mechanics". Several crackpots have offered their own theories – which only differ from 't Hooft's crackpot theory in that 't Hooft was more famous in the past. But Rick Roesler asked a very good question about all this 't Hooft stuff:
How is this different from Wolfram's cellular-automata-based approach?
A nice question, indeed. The only difference is that "credit for Wolfram" is a part of Stephen Wolfram's theory while "credit for 't Hooft" is a part of 't Hooft's theory. Otherwise these two men – and many others – are saying exactly the same, and exactly equally preposterous, thing. The whole Universe is a classical deterministic cellular automaton which can produce a theory of everything, and all the trivial proofs that this assertion is rubbish may be ignored and should be ignored because we, the inventors of this paradigm, are so smart.

Why don't they cooperate or build on the other man's books and papers? Well, let me tell you why. Because both men know – correctly know – that there is nothing valuable to be found in the texts of the other man. They don't really want to find gold because they subconsciously know that there's none. They want to get credit for the quasi-gold – for the feces.

Now, from a scientific viewpoint, there is nothing to discuss here. The idea that one should return to classical, let alone deterministic, physics and re-explain all the things that have needed quantum mechanics for 90+ years is ludicrous. The idea that the right classical system will be as simple, in the human sense, and therefore as anthropomorphic and childish as a particular cellular automaton adds further layers of farce on top of it. From a scientific viewpoint, the value of all these claims is exactly zero, of course.

But these claims are looking for "value" that is only seen from non-scientific perspectives. Simple-minded laymen may be led to say "Why not?". And they may decide that these theories should be trusted and assigned to Wolfram – or to 't Hooft – because Wolfram – or 't Hooft – is great. Needless to say, people who are paying attention to these ad hominem "arguments" aren't scientifically evaluating any evidence. They're just being manipulated.

The main point of this blog post is that when there's no actual beef that may be described scientifically, there's no objective way to divide credit, either. Who should be credited with the idea that the Universe is a cellular automaton? Wolfram and Fredkin – both of whom I know from Greater Boston – were surely making similar claims decades before 't Hooft. But so were many others. Every other layman who learns something about algorithms and who has some ambitions to understand the physical Universe makes a similar statement. Our Universe is like a computer with a simple algorithm of type A or B – he picks some simple enough algorithm that produces some "complex data". Producing "complex data" is extremely far from producing "something useful for physics" – but they don't want to get this far. ;-) Anyway, their words are still enough to impress millions of stupid laymen.
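For concreteness, here is a minimal sketch – assuming nothing beyond textbook elementary cellular automata, and not taken from Wolfram's or 't Hooft's actual papers – of the kind of "simple algorithm producing complex data" being referred to. Rule 30 is the standard example of a trivial update rule whose output looks complicated, which is the whole extent of what such demos establish.

```python
# A minimal elementary cellular automaton (rule 30 by default): a trivial
# deterministic update rule whose output looks "complex", which is all that
# the universe-as-automaton demos ever actually show.
def step(cells, rule=30):
    """Apply one update with periodic boundary conditions."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32, rule=30):
    cells = [0] * width
    cells[width // 2] = 1          # start from a single live cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running it prints the familiar triangular, seemingly irregular pattern – "complex data" from a one-line rule, and nothing more.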

So it is really thousands of people – a whole political movement – who have made similar statements explicitly and millions of people who made them informally. All of them did more or less the same thing. And they haven't found anything that has some measurable value so far. It's hard to divide credit – there is no credit to be divided and the number of people who would like to get some credit is very high.

The laymen are being brainwashed by all this pseudoscience and by fraudsters of Hossenfelder's type. All of them have wooden earphones on their ears and expect the luxurious airplanes to land. But something must be wrong because the airplanes are not landing. This whole pseudoscientific movement would be merely ludicrous if it weren't so tragic and if it weren't consuming an increasing fraction of human society. Fake scientists such as Ms Hossenfelder are becoming not just more numerous; they are becoming more arrogant, too. Instead of trying to carefully hide that they have been deceiving the public and the sponsors for years, Ms Hossenfelder is using this fact as an argument why she should have a monopoly over this activity and others, in this case Marletto and Vedral, shouldn't be allowed to do the same thing.

Has the intelligence of the laymen deteriorated so much that they are unable to see through these cheap tricks anymore? Hossenfelder's complaint against Marletto and Vedral is that these two authors of the piece in Nature haven't attended conferences organized by Hossenfelder. Are you really willing to believe that before you make an important discovery, you have the duty to attend conferences, run by a German would-be scientist, that only attract other fake researchers who have never found a damn thing? Apologies: science recognizes no such duties. Future discoveries will almost certainly be made by people who haven't attended conferences of some arrogant pseudo-scientists, by luminaries who know that the Hossenfelder-style papers and conferences are a pure waste of time and money. Those who can't even figure out this simple point have virtually no chance to contribute to physics.

by Luboš Motl (noreply@blogger.com) at July 19, 2017 05:30 AM

The n-Category Café

What is the comprehension construction?

Dominic Verity and I have just posted a paper on the arXiv entitled “The comprehension construction.” This post is meant to explain what we mean by the name.

The comprehension construction is somehow analogous to both the straightening and the unstraightening constructions introduced by Lurie in his development of the theory of quasi-categories. Most people use the term \(\infty\)-categories as a rough synonym for quasi-categories, but we reserve this term for something more general: the objects in any \(\infty\)-cosmos. There is an \(\infty\)-cosmos whose objects are quasi-categories and another whose objects are complete Segal spaces. But there are also more exotic \(\infty\)-cosmoi whose objects model \((\infty,n)\)-categories or fibered \((\infty,1)\)-categories, and our comprehension construction applies to any of these contexts.

The input to the comprehension construction is any cocartesian fibration between \(\infty\)-categories together with a third \(\infty\)-category \(A\). The output is then a particular homotopy coherent diagram that we refer to as the comprehension functor. In the case \(A=1\), the comprehension functor defines a “straightening” of the cocartesian fibration. In the case where the cocartesian fibration is the universal one over the quasi-category of small \(\infty\)-categories, the comprehension functor converts a homotopy coherent diagram of shape \(A\) into its “unstraightening,” a cocartesian fibration over \(A\).

The fact that the comprehension construction can be applied in any \(\infty\)-cosmos has an immediate benefit. The codomain projection functor associated to an \(\infty\)-category \(A\) defines a cocartesian fibration in the slice \(\infty\)-cosmos over \(A\), in which case the comprehension functor specializes to define the Yoneda embedding.

Classical comprehension

The comprehension scheme in ZF set theory asserts that for any proposition \(\phi\) involving a variable \(x\) whose values range over some set \(A\) there exists a subset

\[ \{\, x \in A \mid \phi(x) \,\} \]

comprised of those elements for which the formula is satisfied. If the proposition \(\phi\) is represented by its characteristic function \(\chi_\phi \colon A \to 2\), then this subset is defined by the following pullback

\[
\begin{array}{ccc}
\{\, x \in A \mid \phi(x) \,\} & \longrightarrow & 1 \\
\big\downarrow & & \big\downarrow {\scriptstyle\top} \\
A & \xrightarrow{\;\chi_\phi\;} & 2
\end{array}
\]

of the canonical monomorphism \(\top \colon 1 \to 2\). For that reason, \(2\) is often called the subobject classifier of the category \(\text{Set}\) and the morphism \(\top\colon 1 \to 2\) is regarded as being its generic subobject. On abstracting this point of view, we obtain the theory of elementary toposes.
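As a down-to-earth illustration (a sketch in ordinary Python, not part of the paper), the comprehension scheme is just the familiar passage between a predicate on \(A\), its characteristic function \(\chi_\phi\colon A\to 2\), and the subset it carves out:

```python
# Toy illustration of classical comprehension: a predicate phi on a set A
# corresponds to a characteristic function chi_phi : A -> {0, 1}, and the
# subset { x in A | phi(x) } is the preimage of 1, i.e. the "pullback" of
# the point 1 -> {0, 1} along chi_phi.

A = set(range(10))

def phi(x):
    return x % 3 == 0                          # a proposition about elements of A

chi_phi = {x: int(phi(x)) for x in A}          # characteristic function A -> 2

comprehension = {x for x in A if phi(x)}       # { x in A | phi(x) }
pullback = {x for x in A if chi_phi[x] == 1}   # preimage of the generic element

assert comprehension == pullback == {0, 3, 6, 9}
```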

The Grothendieck construction as comprehension

What happens to the comprehension scheme when we pass from the 1-categorical context just discussed to the world of 2-categories?

A key early observation in this regard, due to Ross Street I believe, is that we might usefully regard the Grothendieck construction as an instance of a generalised form of comprehension for the category of categories. This analogy becomes clear when we observe that the category of elements of a functor \(F \colon \mathcal{C} \to \text{Set}\) may be formed by taking the pullback:

\[
\begin{array}{ccc}
\textstyle\int F & \longrightarrow & {}^{\ast/}\text{Set} \\
\big\downarrow & & \big\downarrow \\
\mathcal{C} & \xrightarrow{\;F\;} & \text{Set}
\end{array}
\]

Here the projection functor on the right, from the slice \({}^{\ast/}\text{Set}\) of the category of sets under the one point set, is a discrete cocartesian fibration. It follows, therefore, that this pullback is also a 2-pullback and that its left-hand vertical is a discrete cocartesian fibration.
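For readers who like to see the small discrete case spelled out, here is a toy sketch (in Python, purely illustrative and not from the paper) of the object and arrow part of the Grothendieck construction for a tiny Set-valued functor: the objects of the category of elements are pairs \((c, x)\) with \(x \in F(c)\), exactly what the pullback along the projection from pointed sets produces.

```python
# Toy sketch of the category of elements of a Set-valued functor F on a
# small category C. Objects are pairs (c, x) with x in F(c); a morphism
# (c, x) -> (d, y) is an arrow f : c -> d in C with F(f)(x) = y.

# A tiny category C with two objects and one non-identity arrow f : a -> b.
objects = ["a", "b"]
arrows = [("a", "b", "f")]                     # (source, target, name)

# A functor F : C -> Set, given on objects and on the arrow f.
F_ob = {"a": [0, 1], "b": ["x", "y", "z"]}
F_ar = {"f": {0: "x", 1: "z"}}                 # F(f) : F(a) -> F(b)

# Objects of the category of elements: pairs (object of C, element of F(c)).
el_objects = [(c, x) for c in objects for x in F_ob[c]]

# Non-identity morphisms: an arrow f : a -> b together with a point of F(a),
# sending (a, x) to (b, F(f)(x)).
el_arrows = [((s, x), (t, F_ar[name][x]), name)
             for (s, t, name) in arrows
             for x in F_ob[s]]

print(el_objects)  # [('a', 0), ('a', 1), ('b', 'x'), ('b', 'y'), ('b', 'z')]
print(el_arrows)   # [(('a', 0), ('b', 'x'), 'f'), (('a', 1), ('b', 'z'), 'f')]
```

The projection that forgets the second component of each pair is the discrete cocartesian fibration appearing as the left-hand vertical above.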

Street’s point of view is (roughly) that in a 2-category \(\mathcal{K}\) it is the (suitably defined) discrete cocartesian fibrations that play the role that the sub-objects inhabit in topos theory. Then the generic sub-object \(\top\colon 1\to \Omega\) becomes a discrete cocartesian fibration \(\top\colon S_\ast\to S\) in \(\mathcal{K}\) with the property that pullback of \(\top\) along 1-cells \(a\colon A\to S\) provides us with equivalences between each hom-category \(\text{Fun}_{\mathcal{K}}(A,S)\) and the category \(\text{dCoCart}(\mathcal{K})_{/A}\) of discrete cocartesian fibrations over \(A\) in \(\mathcal{K}\).

This account, however, glosses over one important point; thus far we have only specified that each comparison functor \(\text{Fun}_{\mathcal{K}}(A,S) \to \text{dCoCart}(\mathcal{K})_{/A}\) should act by pulling back \(\top\colon S_{\ast}\to S\) along each 1-cell \(a\colon A\to S\). We have said nothing about how, or whether, this action might extend in any reasonable way to 2-cells \(\phi\colon a\Rightarrow b\) in \(\text{Fun}_{\mathcal{K}}(A,S)\)!

The key observation in that regard is that for any fixed “representably defined” cocartesian fibration \(p\colon E\to B\) in a (finitely complete) 2-category \(\mathcal{K}\), we may extend pullback to define a pseudo-functor \(\text{Fun}_{\mathcal{K}}(A,B)\to\mathcal{K}/A\). This carries each 1-cell \(a\colon A\to B\) to the pullback \(p_a\colon E_a\to A\) of \(p\) along \(a\), and its action on a 2-cell \(\phi\colon a\Rightarrow b\) is constructed in the manner depicted in the following diagram:

[Diagram: construction of the action of the pseudo-functor on a 2-cell \(\phi\colon a\Rightarrow b\), comparing the pullbacks \(E_a\) and \(E_b\) over \(A\).]
1.1875 -1 L 1.265625 -1.046875 C 1.203125 -1.125 1.1875 -1.171875 1.1875 -1.234375 C 1.1875 -1.359375 1.234375 -1.421875 1.53125 -1.6875 C 1.859375 -1.984375 1.9375 -2.125 1.9375 -2.34375 C 1.9375 -2.671875 1.65625 -2.921875 1.265625 -2.921875 C 1.09375 -2.921875 0.96875 -2.890625 0.765625 -2.78125 L 0.625 -2.375 L 0.734375 -2.34375 L 0.8125 -2.515625 C 0.859375 -2.609375 0.96875 -2.671875 1.140625 -2.671875 C 1.40625 -2.671875 1.59375 -2.5 1.59375 -2.265625 C 1.59375 -1.875 1.015625 -1.609375 1.015625 -1.28125 Z M 1.203125 -0.734375 C 1.09375 -0.734375 1 -0.640625 1 -0.515625 C 1 -0.40625 1.09375 -0.296875 1.203125 -0.296875 C 1.328125 -0.296875 1.421875 -0.40625 1.421875 -0.515625 C 1.421875 -0.640625 1.328125 -0.734375 1.203125 -0.734375 Z M 2.171875 -3.078125 L 2.171875 -0.21875 L 0.4375 -0.21875 L 0.4375 -3.078125 Z M 2.390625 -3.125 C 2.390625 -3.28125 2.375 -3.296875 2.21875 -3.296875 L 0.390625 -3.296875 C 0.25 -3.296875 0.21875 -3.28125 0.21875 -3.125 L 0.21875 -0.171875 C 0.21875 -0.03125 0.25 0 0.390625 0 L 2.21875 0 C 2.375 0 2.390625 -0.03125 2.390625 -0.171875 Z M 2.390625 -3.125 "/> </symbol> <symbol overflow="visible" id="glyphc2-1"> <path style="stroke:none;" d="M 1.34375 -1.015625 L 1.203125 -0.390625 C 1.1875 -0.296875 1.171875 -0.203125 1.171875 -0.125 C 1.171875 -0.015625 1.21875 0.046875 1.296875 0.046875 C 1.40625 0.046875 1.609375 -0.078125 2.03125 -0.421875 L 1.984375 -0.53125 C 1.875 -0.421875 1.71875 -0.296875 1.609375 -0.296875 C 1.5625 -0.296875 1.546875 -0.34375 1.546875 -0.40625 C 1.546875 -0.4375 1.546875 -0.453125 1.546875 -0.46875 L 2 -2.359375 L 1.953125 -2.390625 L 1.796875 -2.3125 C 1.578125 -2.375 1.5 -2.40625 1.359375 -2.40625 C 1.21875 -2.40625 1.125 -2.375 0.984375 -2.3125 C 0.6875 -2.15625 0.515625 -2.015625 0.390625 -1.765625 C 0.171875 -1.328125 0.015625 -0.71875 0.015625 -0.328125 C 0.015625 -0.109375 0.09375 0.0625 0.1875 0.0625 C 0.375 0.0625 0.765625 -0.203125 1.34375 -1.015625 Z M 1.59375 -2.0625 C 1.484375 -1.515625 1.390625 -1.265625 1.21875 -1 C 0.9375 -0.578125 0.625 -0.296875 0.46875 -0.296875 C 0.40625 -0.296875 0.375 -0.359375 0.375 -0.5 C 0.375 -0.8125 0.515625 -1.390625 0.6875 -1.796875 C 0.8125 -2.0625 0.921875 -2.15625 1.171875 -2.15625 C 1.28125 -2.15625 1.375 -2.140625 1.59375 -2.0625 Z M 1.59375 -2.0625 "/> </symbol> <symbol overflow="visible" id="glyphc2-2"> <path style="stroke:none;" d="M 1.171875 -3.59375 L 1.109375 -3.65625 C 0.859375 -3.53125 0.671875 -3.484375 0.3125 -3.4375 L 0.296875 -3.34375 L 0.53125 -3.34375 C 0.65625 -3.34375 0.703125 -3.296875 0.703125 -3.21875 C 0.703125 -3.1875 0.703125 -3.125 0.6875 -3.09375 L 0.1875 -0.359375 C 0.1875 -0.34375 0.1875 -0.3125 0.1875 -0.296875 C 0.1875 -0.109375 0.421875 0.0625 0.703125 0.0625 C 0.875 0.0625 1.140625 -0.046875 1.34375 -0.1875 C 1.828125 -0.53125 2.15625 -1.21875 2.15625 -1.875 C 2.15625 -2.0625 2.109375 -2.265625 2.046875 -2.328125 C 2.015625 -2.375 1.953125 -2.40625 1.875 -2.40625 C 1.765625 -2.40625 1.609375 -2.359375 1.46875 -2.296875 C 1.21875 -2.15625 1.046875 -2.015625 0.75 -1.609375 Z M 1.609375 -2.109375 C 1.734375 -2.109375 1.796875 -2 1.796875 -1.75 C 1.796875 -1.4375 1.6875 -1 1.546875 -0.6875 C 1.375 -0.34375 1.1875 -0.171875 0.90625 -0.171875 C 0.6875 -0.171875 0.5625 -0.296875 0.5625 -0.5 C 0.5625 -0.671875 0.640625 -1.375 1.03125 -1.796875 C 1.203125 -1.96875 1.453125 -2.109375 1.609375 -2.109375 Z M 1.609375 -2.109375 "/> </symbol> <symbol overflow="visible" id="glyphc2-3"> <path style="stroke:none;" d="M 3.09375 -1.421875 C 3.109375 -2.140625 
2.515625 -2.359375 1.90625 -2.359375 C 1.984375 -2.734375 2.03125 -3.109375 2.15625 -3.484375 L 2.15625 -3.515625 C 2.078125 -3.484375 2.015625 -3.453125 1.9375 -3.40625 L 1.765625 -2.359375 C 0.921875 -2.359375 0.15625 -1.828125 0.140625 -0.921875 C 0.125 -0.1875 0.703125 0.046875 1.328125 0.078125 L 1.140625 1.4375 C 1.234375 1.40625 1.328125 1.359375 1.40625 1.296875 C 1.40625 0.890625 1.4375 0.484375 1.484375 0.078125 C 2.390625 0.015625 3.078125 -0.4375 3.09375 -1.421875 Z M 2.6875 -1.359375 C 2.671875 -0.640625 2.25 -0.0625 1.5 -0.03125 L 1.875 -2.234375 C 2.421875 -2.234375 2.703125 -1.890625 2.6875 -1.359375 Z M 1.734375 -2.234375 L 1.34375 -0.03125 C 0.765625 -0.0625 0.53125 -0.359375 0.53125 -0.9375 C 0.546875 -1.671875 0.984375 -2.21875 1.734375 -2.234375 Z M 1.734375 -2.234375 "/> </symbol> </g> <clipPath id="clipc1"> <path d="M 169 99 L 188.628906 99 L 188.628906 119 L 169 119 Z M 169 99 "/> </clipPath> <clipPath id="clipc2"> <path d="M 169 102 L 188.628906 102 L 188.628906 123 L 169 123 Z M 169 102 "/> </clipPath> <clipPath id="clipc3"> <path d="M 66 116 L 182 116 L 182 156.433594 L 66 156.433594 Z M 66 116 "/> </clipPath> <clipPath id="clipc4"> <path d="M 0 64 L 96 64 L 96 156.433594 L 0 156.433594 Z M 0 64 "/> </clipPath> <clipPath id="clipc5"> <path d="M 0 81 L 79 81 L 79 156.433594 L 0 156.433594 Z M 0 81 "/> </clipPath> <clipPath id="clipc6"> <path d="M 44 69 L 130 69 L 130 156.433594 L 44 156.433594 Z M 44 69 "/> </clipPath> </defs> <g id="surfacec11"> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc0-1" x="5.733" y="10.215"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-1" x="11.81" y="12.307"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc0-1" x="62.36" y="66.908"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-2" x="68.437" y="69"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc0-1" x="177.561" y="39.631"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc0-2" x="6.78" y="96.389"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc0-2" x="63.472" y="153.082"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc0-3" x="177.561" y="124.655"/> </g> <path style="fill:none;stroke-width:0.59776;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5.670406 127.561563 L 14.174313 127.561563 L 14.174313 136.065469 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.59776;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 62.361812 70.86625 L 70.865719 70.86625 L 70.865719 79.370156 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.00146875 133.581094 L -0.00146875 64.549844 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 0.00133 2.670406 L 2.669299 -0.00146875 L 0.00133 -2.669437 " transform="matrix(0,1,1,0,10.521,79.27992)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 0.00069625 2.670406 L 2.668665 
-0.00146875 L 0.00069625 -2.669437 " transform="matrix(0,1,1,0,10.521,82.65946)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-3" x="2.127" y="51.593"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc2-1" x="5.607" y="53.057"/> </g> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 7.361813 56.760781 C 69.248531 57.323281 105.07275 51.335 162.752438 30.932656 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.000459353 2.668619 L 2.669316 0.0000329516 L 0.000729418 -2.669742 " transform="matrix(0.94386,0.33388,0.33388,-0.94386,171.03131,117.8588)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-1" x="99.66" y="95.408"/> </g> <path style="fill:none;stroke-width:6.38115;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 56.693844 76.889688 L 56.693844 7.1475 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 56.693844 76.889688 L 56.693844 7.854531 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.00087375 2.668193 L 2.671001 0.00022375 L -0.00087375 -2.671651 " transform="matrix(0,1,1,0,67.21462,135.97353)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.0015175 2.668193 L 2.670357 0.00022375 L -0.0015175 -2.671651 " transform="matrix(0,1,1,0,67.21462,139.35308)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-3" x="58.725" y="108.286"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc2-2" x="62.205" y="109.75"/> </g> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 170.080563 106.307656 L 170.080563 36.151406 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <g clip-path="url(#clipc1)" clip-rule="nonzero"> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.00075875 2.671578 L 2.671116 -0.0002975 L -0.00075875 -2.668266 " transform="matrix(0,1,1,0,180.60186,107.67654)"/> </g> <g clip-path="url(#clipc2)" clip-rule="nonzero"> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.0014125 2.671578 L 2.670462 -0.0002975 L -0.0014125 -2.668266 " transform="matrix(0,1,1,0,180.60186,111.0561)"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-3" x="183.02" y="79.429"/> </g> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 8.408688 141.811563 C 69.893062 142.295938 105.479 136.311563 162.752438 115.975625 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path 
style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.000789323 2.671251 L 2.669944 -0.00173669 L 0.000631738 -2.671165 " transform="matrix(0.94386,0.33513,0.33513,-0.94386,171.03131,32.81452)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-4" x="100.183" y="8.975"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc2-1" x="103.461" y="10.439"/> </g> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 65.1665 84.8975 C 104.732906 84.319375 128.869625 90.413125 162.799313 109.350625 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.0011896 2.669647 L 2.669631 0.00224822 L 0.00033564 -2.671972 " transform="matrix(0.87622,-0.48898,-0.48898,-0.87622,171.24004,41.39721)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-4" x="129.114" y="67.129"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc2-2" x="132.392" y="68.594"/> </g> <g clip-path="url(#clipc3)" clip-rule="nonzero"> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 64.053219 -0.122031 C 104.131344 -0.82125 128.365719 5.221719 162.799313 24.311563 " transform="matrix(1,0,0,-1,10.521,149.585)"/> </g> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.00228129 2.669346 L 2.669635 0.00226955 L -0.000851717 -2.667756 " transform="matrix(0.87622,-0.48575,-0.48575,-0.87622,171.24004,126.42767)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-2" x="128.561" y="152.401"/> </g> <g clip-path="url(#clipc4)" clip-rule="nonzero"> <path style="fill:none;stroke-width:2.50212;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 7.143063 49.545938 L 49.553219 7.1475 " transform="matrix(1,0,0,-1,10.521,149.585)"/> </g> <g clip-path="url(#clipc5)" clip-rule="nonzero"> <path style="fill:none;stroke-width:1.32652;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 7.143063 49.545938 L 49.553219 7.1475 " transform="matrix(1,0,0,-1,10.521,149.585)"/> </g> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-dasharray:0.5878,1.99255;stroke-miterlimit:10;" d="M 8.150875 133.581094 L 48.045406 93.694375 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -0.000365622 2.669032 L 2.670645 -0.00201649 L 0.0023588 -2.670265 " transform="matrix(0.7071,0.70709,0.70709,-0.7071,56.88645,54.21175)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-5" x="28.596" y="43.446"/> </g> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc2-3" x="32.85" y="44.91"/> </g> <path 
style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-dasharray:0.5878,1.99255;stroke-miterlimit:10;" d="M 8.408688 135.967813 C 59.201656 101.764688 104.57275 94.612344 162.740719 111.338906 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 0.00145937 2.669797 L 2.669449 0.000003605 L -0.000344101 -2.667986 " transform="matrix(0.96297,-0.27695,-0.27695,-0.96297,170.97237,38.90337)"/> <path style=" stroke:none;fill-rule:nonzero;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 94.132812 19.882812 L 98.386719 19.882812 L 98.386719 15.628906 L 94.132812 15.628906 Z M 94.132812 19.882812 "/> <path style=" stroke:none;fill-rule:nonzero;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 82.140625 39.203125 L 86.394531 39.203125 L 86.394531 34.949219 L 82.140625 34.949219 Z M 82.140625 39.203125 "/> <path style="fill:none;stroke-width:2.50212;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-dasharray:2.98883,2.98883;stroke-miterlimit:10;" d="M 84.236812 129.409219 L 76.443844 116.858438 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:1.32652;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-dasharray:2.98883,2.98883;stroke-miterlimit:10;" d="M 84.236812 129.409219 L 76.443844 116.858438 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:round;stroke-linejoin:round;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -1.282237 2.535841 C -0.662401 1.197191 1.016703 0.0475058 1.969744 0.000451193 C 1.014777 -0.0456128 -0.660663 -1.196313 -1.28042 -2.537157 " transform="matrix(-0.52776,0.8499,0.8499,0.52776,86.96495,32.72802)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-6" x="92.683" y="33.338"/> </g> <path style=" stroke:none;fill-rule:nonzero;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 93.636719 104.875 L 97.890625 104.875 L 97.890625 100.621094 L 93.636719 100.621094 Z M 93.636719 104.875 "/> <path style=" stroke:none;fill-rule:nonzero;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 75.492188 134.644531 L 79.746094 134.644531 L 79.746094 130.390625 L 75.492188 130.390625 Z M 75.492188 134.644531 "/> <g clip-path="url(#clipc6)" clip-rule="nonzero"> <path style="fill:none;stroke-width:2.50212;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 83.768062 44.417031 L 69.756344 21.432656 " transform="matrix(1,0,0,-1,10.521,149.585)"/> </g> <path style="fill:none;stroke-width:1.32652;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 83.768062 44.417031 L 69.756344 21.432656 " transform="matrix(1,0,0,-1,10.521,149.585)"/> <path style="fill:none;stroke-width:0.5878;stroke-linecap:round;stroke-linejoin:round;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M -1.282995 2.538152 C -0.660468 1.195687 1.014021 0.0480966 1.972973 -0.000506815 C 1.013364 -0.0479691 -0.660274 -1.195832 -1.282551 -2.536405 " transform="matrix(-0.52275,0.85748,0.85748,0.52275,80.2779,128.15223)"/> <g style="fill:rgb(0%,0%,0%);fill-opacity:1;"> <use xlink:href="#glyphc1-7" x="89.111" y="124.969"/> </g> </g> </svg> \end{svg} </annotation></semantics>

Here we make use of the fact that $p\colon E\to B$ is a cocartesian fibration in order to lift the whiskered 2-cell $\phi p_a$ to a cocartesian 2-cell $\chi$. Its codomain 1-cell may then be factored through $E_b$, using the pullback property of the front square, to give a 1-cell $E_{\phi}\colon E_a\to E_b$ over $A$ as required. Standard (essential) uniqueness properties of cocartesian lifts may now be deployed to provide canonical isomorphisms $E_{\psi\cdot\phi}\cong E_{\psi}\circ E_{\phi}$ and $E_{\mathrm{id}_a}\cong\mathrm{id}_{E_a}$ and to prove that these satisfy the required coherence conditions.
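
In summary, and only as a schematic restatement of the data just described (not the precise statement from our paper), the comprehension construction assembles into a pseudo-functor

$$c_{p,A}\colon \text{Fun}_{\mathcal{K}}(A,B)\longrightarrow \mathcal{K}_{/A},\qquad a\mapsto (p_a\colon E_a\twoheadrightarrow A),\qquad (\phi\colon a\Rightarrow b)\mapsto (E_{\phi}\colon E_a\to E_b),$$

with the coherence isomorphisms $E_{\psi\cdot\phi}\cong E_{\psi}\circ E_{\phi}$ and $E_{\mathrm{id}_a}\cong\mathrm{id}_{E_a}$ noted above.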

It is this 2-categorical comprehension construction that motivates the key construction of our paper.

Comprehension and 2-fibrations

In passing, we might quickly observe that the 2-categorical comprehension construction may be regarded as being but one aspect of the theory of 2-fibrations. Specifically, the totality of all cocartesian fibrations and cartesian functors between them in $\mathcal{K}$ is a 2-category whose codomain projection $\text{coCart}(\mathcal{K})\to\mathcal{K}$ is a cartesian 2-fibration; it is indeed the archetypal such gadget. Under this interpretation, the lifting construction used to define the pseudo-functor $\text{Fun}_{\mathcal{K}}(A,B) \to \mathcal{K}_{/A}$ is quite simply the typical cartesian 2-cell lifting property characteristic of a 2-fibration.

In an early draft of our paper, our narrative followed just this kind of route. There we showed that the totality of cocartesian fibrations in an $\infty$-cosmos could be assembled to give the total space of a kind of cartesian fibration of (weak) 2-complicial sets. In the end, however, we abandoned this presentation in favour of one that was more explicitly to the point for current purposes. Watch this space, however, because we are currently preparing a paper on the complicial version of this theory which will return to this point of view. For us this has become a key component of our work on the foundations of the complicial approach to $(\infty,\infty)$-category theory.

An $\infty$-categorical comprehension construction

In an $\infty$-cosmos $\mathcal{K}$, by which we mean a category enriched over quasi-categories that admits a specified class of isofibrations and certain simplicially enriched limits, we may again define $p \colon E \twoheadrightarrow B$ to be a cocartesian fibration representably. That is to say, $p$ is a cocartesian fibration if it is an isofibration in the specified class and if $\text{Fun}_{\mathcal{K}}(X,p) \colon \text{Fun}_{\mathcal{K}}(X,E) \to \text{Fun}_{\mathcal{K}}(X,B)$ is a cocartesian fibration of quasi-categories for every $\infty$-category $X$. Then a direct “homotopy coherent” generalisation of the 2-categorical construction discussed above allows us to define an associated comprehension functor:

$$c_{p,A} \colon \mathfrak{C}\text{Fun}_{\mathcal{K}}(A,B)\to \text{coCart}(\mathcal{K})_{/A}.$$

The image lands in the maximal Kan complex enriched subcategory of the quasi-categorically enriched category of cocartesian fibrations and cartesian functors over $A$, so the comprehension functor transposes to define a map of quasi-categories

$$c_{p,A} \colon \text{Fun}_{\mathcal{K}}(A,B) \to \mathbb{N}(\text{coCart}(\mathcal{K})_{/A})$$

whose codomain is defined by applying the homotopy coherent nerve.

Straightening as comprehension

The “straightening” of a cocartesian fibration into a homotopy coherent diagram is certainly one of the early highlights in Lurie’s account of quasi-category theory. Such functors are intrinsically tricky to construct, since that process embroils us in specifying an infinite hierarchy of homotopy coherent data.

We may deploy the $\infty$-categorical comprehension to provide an alternative approach to straightening. To that end we work in the $\infty$-cosmos of quasi-categories $\text{qCat}$ and let $A=1$, and observe that the comprehension functor $c_{p,1}\colon \mathfrak{C}B \to \text{qCat}$ is itself the straightening of $p$. Indeed, it is possible to use the constructions in our paper to extend this variant of straightening to give a functor of quasi-categories:

$$\mathbb{N}(\text{coCart}_{/B}) \to \text{Fun}(B,Q)$$

Here $Q$ is the (large) quasi-category constructed by taking the homotopy coherent nerve of (the maximal Kan complex enriched subcategory of) $\text{qCat}$. So the objects of $\text{Fun}(B,Q)$ correspond bijectively to “straight” simplicial functors $\mathfrak{C}B\to\text{qCat}$. We should confess, however, that we do not explicitly pursue the full construction of this straightening functor there.
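
Concretely, and only as a sketch of what is implicit above rather than a statement from the paper: with $A=1$, the pullback of $p$ along an element $b\colon 1\to B$ is just the fibre $E_b$, so on objects the comprehension functor acts by taking fibres,

$$c_{p,1}\colon \mathfrak{C}B\to \text{qCat},\qquad b\mapsto E_b,$$

which is exactly the behaviour one expects of a straightening of $p$.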

Unstraightening as comprehension

In the $\infty$-categorical context, the Grothendieck construction is christened unstraightening by Lurie. It is inverse to the straightening construction discussed above.

We may also realise unstraightening as comprehension. To that end we follow Ross Street’s lead by taking $Q_{\ast}$ to be a quasi-category of pointed quasi-categories and apply the comprehension construction to the “forget the point” projection $Q_{\ast}\to Q$. The comprehension functor thus derived

$$c_{p,A} \colon \text{Fun}(A,Q) \to \mathbb{N}\left(\text{dCoCart}_{/A}\right)$$

defines a quasi-categorical analogue of Lurie’s unstraightening construction. In an upcoming paper we use the quasi-categorical variant of Beck’s monadicity theorem to prove that this functor is an equivalence. We also extend this result to certain other $\infty$-cosmoi, such as the $\infty$-cosmos of (co)cartesian fibrations over a fixed quasi-category.

Constructing the Yoneda embedding

Applying the comprehension construction to the cocartesian fibration $\text{cod} \colon A^{2} \to A$ in the slice $\infty$-cosmos $\mathcal{K}_{/A}$, we obtain a map

$$y \colon \text{Fun}_{\mathcal{K}}(1,A) \to \mathbb{N}\text{Cart}(\mathcal{K})_{/A}$$

that carries an element $a \colon 1 \to A$ to the groupoidal cartesian fibration $\text{dom} \colon A \downarrow a \to A$. This provides us with a particularly explicit model of the Yoneda embedding, whose action on hom-spaces is easily computed. In particular, this allows us to easily demonstrate that the Yoneda embedding is fully faithful and thus that every quasi-category is equivalent to the homotopy coherent nerve of some Kan complex enriched category.

by riehl (eriehl@math.jhu.edu) at July 19, 2017 01:39 AM

July 18, 2017

Symmetrybreaking - Fermilab/SLAC

Shaking the dark matter paradigm

A theory about gravity challenges our understanding of the universe.

[Illustration: “Gravity vs. Dark Matter”, showing alternatives with dark matter on the left and no dark matter on the right.]

For millennia, humans held a beautiful belief. Our planet, Earth, was at the center of a vast universe, and all of the planets and stars and celestial bodies revolved around us. This geocentric model, though it had floated around since the 6th century BCE, was written in its most elegant form by Claudius Ptolemy in 140 AD.

When this model encountered problems, such as the retrograde motions of planets, scientists reworked the data to fit the model by coming up with phenomena such as epicycles (mini orbits).

It wasn’t until 1543, 1400 years later, that Nicolaus Copernicus set in motion a paradigm shift that would pave the way for centuries of new discoveries. According to Copernicus’ radical theory, Earth was not the center of the universe but simply one of a number of planets orbiting the sun.

But even as evidence that we lived in a heliocentric system piled up and scientists such as Galileo Galilei perfected the model, society held onto the belief that the entire universe orbited around Earth until the early 19th century.

To Erik Verlinde, a theoretical physicist at the University of Amsterdam, the idea of dark matter is the geocentric model of the 21st century. 

“What people are doing now is allowing themselves free parameters to sort of fit the data,” Verlinde says. “You end up with a theory that has so many free parameters it's hard to disprove.”

Dark matter, an as-yet-undetected form of matter that scientists believe makes up more than a quarter of the mass and energy of the universe, was first theorized when scientists noticed that stars at the outer edges of galaxies and galaxy clusters were moving much faster than Newton’s theory of gravity said they should. Up until this point, scientists have assumed that the best explanation for this is that there must be missing mass in the universe holding those fast-moving stars in place in the form of dark matter. 
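
The tension can be seen from a one-line Newtonian estimate (a back-of-the-envelope sketch, not taken from the article): for a star on a roughly circular orbit of radius $r$,

$$v(r)\;\approx\;\sqrt{\frac{G\,M(r)}{r}},$$

so if essentially all of the mass were the visible matter concentrated toward a galaxy’s centre, orbital speeds should fall off roughly as $1/\sqrt{r}$ at large radii. The observed flat rotation curves instead require the enclosed mass $M(r)$ to keep growing with radius, which is the “missing mass” attributed to dark matter.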

But Verlinde has come up with a set of equations that explains these galactic rotation curves by viewing gravity as an emergent force — a result of the quantum structure of space.

The idea is related to dark energy, which scientists think is the cause of the accelerating expansion of our universe. Verlinde thinks that what we see as dark matter is actually just interactions between galaxies and the sea of dark energy in which they’re embedded.

“Before I started working on this I never had any doubts about dark matter,” Verlinde says. “But then I started thinking about this link with quantum information and I had the idea that dark energy is carrying more of the dynamics of reality than we realize.”

Verlinde is not the first theorist to come up with an alternative to dark matter. Many feel that his theory echoes the sentiment of physicist Mordehai Milgrom’s equations of “modified Newtonian dynamics,” or MOND. Just as Einstein modified Newton’s laws of gravity to fit the scale of planets and solar systems, MOND modifies Einstein’s laws of gravity to fit the scale of galaxies and galaxy clusters.
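
For reference, and not spelled out in the article, the standard form of Milgrom’s relation makes the acceleration dependence explicit:

$$\mu\!\left(\frac{a}{a_0}\right)\,a \;=\; a_{\mathrm{N}},\qquad \mu(x)\to 1 \ \ (x\gg 1),\qquad \mu(x)\to x \ \ (x\ll 1),$$

where $a_{\mathrm{N}}$ is the Newtonian acceleration and $a_0\approx 1.2\times 10^{-10}\ \mathrm{m\,s^{-2}}$; in the low-acceleration regime this gives $a\approx\sqrt{a_{\mathrm{N}}\,a_0}$, which is what produces flat rotation curves without extra mass.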

Verlinde, however, makes the distinction that he’s not deriving the equations of MOND; rather, he’s deriving what he calls a “scaling relation,” or a volume effect of space-time that only becomes important at large distances. 

Stacy McGaugh, an astrophysicist at Case Western Reserve University, says that while MOND is primarily the notion that the effective force of gravity changes with acceleration, Verlinde’s ideas are more of a ground-up theoretical work.

“He's trying to look at the structure of space-time and see if what we call gravity is a property that emerges from that quantum structure, hence the name emergent gravity,” McGaugh says. “In principle, it's a very different approach that doesn't necessarily know about MOND or have anything to do with it.”

One of the appealing things about Verlinde’s theory, McGaugh says, is that it naturally produces evidence of MOND in a way that “just happens.” 

“That's the sort of thing that one looks for,” McGaugh says. “There needs to be some basis of why MOND happens, and this theory might provide it.”

Verlinde’s ideas have been greeted with a fair amount of skepticism in the scientific community, in part because, according to Kathryn Zurek, a theoretical physicist at the US Department of Energy’s Lawrence Berkeley National Laboratory, his theory leaves a lot unexplained. 

“Theories of modified gravity only attempt to explain galactic rotation curves [those fast-moving stars],” Zurek says. “As evidence for dark matter, that's only one very small part of the puzzle. Dark matter explains a whole host of observations from the time of the cosmic microwave background when the universe was just a few hundred thousand years old through structure formation all the way until today.”

 

[Illustration by Ana Kova]

Zurek says that in order for scientists to start lending weight to his claims, Verlinde needs to build the case around his theory and show that it accommodates a wider range of observations. But, she says, this doesn’t mean that his ideas should be written off.

“One should always poke at the paradigm,” Zurek says, “even though the cold dark matter paradigm has been hugely successful, you always want to check your assumptions and make sure that you're not missing something that could be the tip of the iceberg.”

McGaugh had a similar crisis of faith in dark matter when he was working on an experiment wherein MOND’s predictions were the only ones that came true in his data. He had been making observations of low-surface-brightness galaxies, in which stars are spread more thinly than in galaxies such as the Milky Way, where the stars are crowded relatively close together.

McGaugh says his results did not make sense to him in the standard dark matter context, and it turned out that the properties that were confusing to him had already been predicted by Milgrom’s MOND equations in 1983, before people had even begun to take seriously the idea of low-surface-brightness galaxies.

Although McGaugh’s experience caused him to question the existence of dark matter and instead argue for MOND, others have not been so quick to join the cause.

“We subscribe to a particular paradigm and most of our thinking is constrained within the boundaries of that paradigm, and so if we encounter a situation in which there is a need for a paradigm shift, it's really hard to think outside that box,” McGaugh says. “Even though we have rules for the game as to when you're supposed to change your mind and we all in principle try to follow that, in practice there are some changes of mind that are so big that we just can't overcome our human nature.”

McGaugh says that many of his colleagues believe that there’s so much evidence for dark matter that it’s a waste of time to consider any alternatives. But he believes that all of the evidence for dark matter might instead be an indication that there is something wrong with our theories of gravity. 

“I kind of worry that we are headed into another thousand years of dark epicycles,” McGaugh says.

But according to Zurek, if MOND came up with anywhere near the evidence that has been amassed for the dark matter paradigm, people would be flocking to it. The problem, she says, is that at the moment MOND just does not come anywhere near to passing the number of tests that cold dark matter has. She adds that there are some physicists who argue that the cold dark matter paradigm can, in fact, explain those observations about low-surface-brightness galaxies.

Recently, Case Western held a workshop wherein they gathered together representatives from different communities, including those working on dark matter models, to discuss dwarf galaxies and the external field effect, which is the notion that very low-density objects will be affected by what’s around them. MOND predicts that the dynamics of a small satellite galaxy will depend on its proximity to its giant host in a way that doesn't happen with dark matter.

McGaugh says that in attendance at the workshop were a group of more philosophically inclined people who use a set of rules to judge theories, which they’ve put together by looking back at how theories have developed in the past. 

“One of the interesting things that came out of that was that MOND is doing better on that score card,” he says. “It’s more progressive in the sense that it's making successful predictions for new phenomena whereas in the case of dark matter we've had to repeatedly invoke ad hoc fixes to patch things up.”

Verlinde’s ideas, however, didn’t come up much within the workshop. While McGaugh says that the two theories are closely enough related that he would hope the same people pursuing MOND would be interested in Verlinde’s theory, he added that not everyone shares that attitude. Many are waiting for more theoretical development and further observational tests.

“The theory needs to make a clear prediction so that we can then devise a program to go out and test it,” he says. “It needs to be further worked out to get beyond where we are now.”

Verlinde says he realizes that he still needs to develop his ideas further and extend them to explain things such as the formation of galaxies and galaxy clusters. Although he has mostly been working on this theory on his own, he recognizes the importance of building a community around his ideas.

Over the past few months, he has been giving presentations at different universities, including Princeton, Harvard, Berkeley, Stanford, and Caltech. There is currently a large community of people working on ideas of quantum information and gravity, he says, and his main goal is to get more people, in particular string theorists, to start thinking about his ideas to help him improve them.

“I think that when we understand gravity better and we use those equations to describe the evolution of the universe, we may be able to answer questions more precisely about how the universe started,” Verlinde says. “I really think that the current description is only part of the story and there's a much deeper way of understanding it—maybe an even more beautiful way.”

 

by Ali Sundermier at July 18, 2017 01:00 PM

July 17, 2017

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, Monday through Friday from 12.30 to 15.30.

July 17, 2017 02:07 PM

CERN Bulletin

EVE et École

Places are still available!

The Espace de vie enfantine (EVE) and School of the CERN Staff Association informs you that a few places are still available for the 2017-2018 school year:

  • at the crèche (2-3 years old), with care on 2, 3 or 5 days per week;
  • at the kindergarten (2-4 years old), mornings only;
  • in the first year of primary school (1P) (4-5 years old).

Please do not hesitate to contact us quickly if you are interested; we are at your disposal to answer all your questions: Staff.Kindergarten@cern.ch.

The EVE and School of the CERN Staff Association is open not only to children of CERN personnel (MPE, MPA), but also to children of those who do not work on the CERN site.

Further information: http://nurseryschool.web.cern.ch/

July 17, 2017 02:07 PM

CERN Bulletin

Petanque club

For the 21st year in a row, the Challenge in memory of our late lamented "Claude Carteret" (who passed away on 9 June 2017) was contested as part of the internal competitions of our CERN petanque club.

With the weather more than rainy, we had to play at the Saint-Genis-Pouilly boulodrome, which had been lent to us for the occasion.

Twenty-two participants met for three games, with teams drawn at random.

After some closely fought games, our referee Claude Jouve declared the winner to be his brother Christian, imperious with three games won.

2nd: Roland Dunand, also with three games won, but beaten on points difference.

3rd: André Domeniconi, also with three games won, but with a lower points difference.

Leading lady: Mireille, cousin of Roland Dunand.

The evening ended with an excellent cold buffet prepared by Jennifer and Sylvie, whom we thank most warmly.

See you on Thursday 27 July 2017 for the "Patrick Durand" Challenge.

July 17, 2017 02:07 PM

CERN Bulletin

Cine club

Wednesday 19 July 2017 at 20:00
CERN Council Chamber

The Adventures of Baron Munchausen

Directed by Terry Gilliam
UK, 1988, 126 min.

The fantastic tale of an 18th century aristocrat, his talented henchmen and a little girl in their efforts to save a town from defeat by the Turks. Being swallowed by a giant sea-monster, a trip to the moon, a dance with Venus and an escape from the Grim Reaper are only some of the improbable adventures.

Original version English; French subtitles

 

Wednesday 26 July 2017 at 20:00
CERN Council Chamber

Twelve Monkeys

Directed by Terry Gilliam
UK, 1995, 129 min.

In a future world devastated by disease, a convict is sent back in time to gather information about the man-made virus that wiped out most of the human population on the planet.

Original version English; French subtitles

July 17, 2017 02:07 PM

CERN Bulletin

Health Insurance – Affiliation to LAMal insurance for families of CERN personnel

On May 16, the HR department published in the CERN Bulletin an article concerning cross-border workers (“frontaliers”) and the exercise of the right of choice in health insurance:

« In view of the Agreement concluded on 7 July 2016 between Switzerland and France regarding the choice of health insurance system* for persons resident in France and working in Switzerland ("frontaliers"), the Swiss authorities have indicated that those persons who have not “formally exercised their right to choose a health insurance system before 30 September 2017 risk automatically becoming members of the Swiss LAMal system” and having to “pay penalties to their insurers that may amount to several years’ worth of contributions”. Among others, this applies to spouses of members of the CERN personnel who live in France and work in Switzerland. »

But the CERN Health Insurance Scheme (CHIS) provides insurance not only to all CERN employees and, under certain conditions, to a few associates, but also to their families, namely their spouses, registered partners and dependent children.

In addition, in April 2015, the HR Department published the following information concerning the issue of health insurance for “frontaliers” who are dependents of members of the CHIS:

« After extensive exchanges, we finally obtained a response a few days ago from the Swiss authorities, with which we are fully satisfied and which we can summarise as follows:

  1. Frontalier workers who are currently using the CHIS as their basic health insurance can continue to do so.
  2. Family members who become frontalier workers, or those who have not yet exercised their “right to choose” (droit d’option) can opt to use the CHIS as their basic health insurance. To this end, they must complete the form regarding the health insurance of frontaliers, ticking the LAMal box and submitting their certificate of CHIS membership (available from UNIQA).
  3. For family members who joined the LAMal system since June 2014, CERN is in contact with the Swiss authorities and the Geneva Health Insurance Service with a view to securing an exceptional arrangement allowing them to leave the LAMal system and use the CHIS as their basic health insurance.
  4. People who exercised their “right to choose” and opted into the French Sécurité sociale or the Swiss LAMal system before June 2014 can no longer change, as the decision is irreversible. As family members, however, they remain beneficiaries of the CHIS, which then serves as their complementary insurance.
  5. If a frontalier family member uses the CHIS as his or her basic health insurance and the main member concerned ceases to be a member of the CHIS or the relationship between the two ends (divorce or dissolution of a civil partnership), the frontalier must join LAMal. »

Since spring 2017, the Staff Association has been contacted by several staff members who reported that their spouses had been summoned by the Canton of Geneva Health Insurance Service (SAM), or by the corresponding Health Insurance Service of the Canton where they work, to join the Swiss health insurance scheme (LAMal) even though they are affiliated to the CHIS. Previously, no such obligation had ever been mentioned, let alone implemented!

But this injunction applies not only to “frontalier” workers, but also to spouses, registered partners or children who have opted for Swiss nationality, and to Swiss members of the personnel with short-term contracts, such as Swiss administrative students.

The Staff Association consulted several people and various documents, including the Swiss law and its implementing regulations, in particular the Ordinance on health insurance (OAMal) of 27 June 1995 (as at 1 July 2017).

The conclusion is that, for several months now, various Swiss services have had different interpretations of legal articles, but also that Swiss authorities, at the highest level, wish to solve this problem.

Indeed, a revision of the Ordinance on health insurance (OAMal) is being undertaken to explicitly mention the case of the beneficiaries of members of international organizations.

This new version of the OAMal should be published by the beginning of 2018 at the latest. It should allow a return to the initial situation, and also allow all those who have been automatically affiliated to LAMal to leave their LAMal insurer and go back to using only the CHIS.

In the meantime, what can you do, what should you do?

If you have already exercised your right to opt-in and the CHIS has been recognized as your basic insurance, we recommend that you do not contact the Health Insurance Service (or the equivalent service in another Canton) if the latter does not contact you. Also, and to the maximum possible extent, avoid any change in administrative situation that may cause this service to reopen your file.

If you have not yet exercised your right to opt-in, please comply with the instructions given by HR in its communication of April 2015, which instruct you to fill out the health insurance form for “frontaliers”, ticking the LAMal box and providing your certificate of affiliation to the CHIS (to be requested from UNIQA). However, if the SAM, or the equivalent service, does not recognize the CHIS as your basic health insurance and forces you to join a LAMal insurer, we advise you:

  • to protest against this decision (the most important thing is not to join a LAMal insurer voluntarily), and
  • to demand, in the same message, the right to renounce this forced affiliation, leave LAMal and return to the CHIS once the situation has been clarified at the Swiss federal level, notably following the publication of the new Ordinance on Health Insurance.

We, of course, continue to follow this major issue which is being actively managed by the Legal Service and the Host States Relations Service. We will keep you informed of any development.

July 17, 2017 11:07 AM

Emily Lakdawalla - The Planetary Society Blog

Your Guide to the Great American Eclipse of 2017
On August 21, 2017, the Moon will totally eclipse the Sun as seen from the continental United States for the first time in more than 40 years. What are eclipses, and what's special about this one?

July 17, 2017 11:00 AM

Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs: When US Labs Competed For Leadership In HEP

The clip below, together with the following few which will be published every few days in the coming weeks, is extracted from the third chapter of my book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". It recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989.


by Tommaso Dorigo at July 17, 2017 09:35 AM

July 16, 2017

Matt Strassler - Of Particular Significance

Ongoing Chance of Northern (or Southern) Lights

As forecast, the cloud of particles from Friday’s solar flare (the “coronal mass ejection”, or “CME”) arrived at our planet a few hours after my last post, early in the morning New York time. If you’d like to know how I knew that it had reached Earth, and how I know what’s going on now, scroll down to the end of this post and I’ll show you the data I was following, which is publicly available at all times.

So far the resulting auroras have stayed fairly far north, and so I haven’t seen any — though they were apparently seen last night in Washington and Wyoming, and presumably easily seen in Canada and Alaska. [Caution: sometimes when people say they’ve been “seen”, they don’t quite mean that; I often see lovely photos of aurora that were only visible to a medium-exposure camera shot, not to the naked eye.]  Or rather, I should say that the auroras have stayed fairly close to the Earth’s poles; they were also seen in New Zealand.

Russia and Europe have a good opportunity this evening. As for the U.S.? The storm in the Earth’s magnetic field is still going on, so tonight is still a definite possibility for northern states. Keep an eye out! Look for what is usually a white or green-hued glow, often in swathes or in stripes pointing up from the northern horizon, or even overhead if you’re lucky.  The stripes can move around quite rapidly.

Now, here’s how I knew all this.  I’m no expert on auroras; that’s not my scientific field at all.   But the U.S. Space Weather Prediction Center at the National Oceanic and Atmospheric Administration, which needs to monitor conditions in space in case they should threaten civilian and military satellites or even installations on the ground, provides a wonderful website with lots of relevant data.

The first image on the site provides the space weather overview; a screenshot from the present is shown below, with my annotations.  The upper graph indicates a blast of x-rays (a form of light not visible to the human eye) which is generated when the solar flare, the magnetically-driven explosion on the sun, first occurs.  Then the slower cloud of particles (protons, electrons, and other atomic nuclei, all of which have mass and therefore can’t travel at light’s speed) takes a couple of days to reach Earth.  Its arrival is shown by the sudden jump in the middle graph.  Finally, the lower graph measures how active the Earth’s magnetic field is.  The only problem with that plot is that it tends to be three hours out of date, so beware of that! A “Kp index” of 5 shows significant activity; 6 means that auroras are likely to be moving away from the poles, and 7 or 8 mean that the chances in a place like the north half of the United States are pretty good.  So far, 6 has been the maximum generated by the current flare, but things can fluctuate a little, so 6 or 7 might occur tonight.  Keep an eye on that lower plot; if it drops back down to 4, forget it, but if it’s up at 7, take a look for sure!
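
If you like to automate this sort of check, here is a minimal Python sketch (my own illustration, not part of the post) that applies the Kp thresholds described above to a few sample readings; the sample numbers are invented, and the live values are on the NOAA SWPC site.

    # Toy aurora-chance check based on the Kp thresholds quoted above:
    #   Kp >= 5: significant geomagnetic activity
    #   Kp >= 6: auroras likely moving away from the poles
    #   Kp >= 7: decent chances for the northern half of the United States
    # The sample readings below are invented; the live numbers are on the NOAA SWPC site.

    def aurora_outlook(kp):
        if kp >= 7:
            return "good chance at mid-latitudes (e.g. northern US states)"
        if kp >= 6:
            return "auroras likely spreading away from the poles"
        if kp >= 5:
            return "significant activity, but probably still far north"
        return "quiet -- forget it for tonight"

    sample_readings = [("18:00 UTC", 4), ("21:00 UTC", 6), ("00:00 UTC", 7)]
    for time_utc, kp in sample_readings:
        print(f"{time_utc}: Kp = {kp} -> {aurora_outlook(kp)}")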

Space weather overview plot, July 16, 2017 (screenshot from the NOAA SWPC site, with annotations)

Also on the site is data from the ACE satellite.  This satellite sits 950 thousand miles [1.5 million kilometers] from Earth, between Earth and the Sun, which is 93 million miles [150 million kilometers] away.  At that vantage point, it gives us (and our other satellites) a little early warning, of up to an hour, before the cloud of slow particles from a solar flare arrives.  That provides enough lead-time to turn off critical equipment that might otherwise be damaged.  And you can see, in the plot below, how at a certain time in the last twenty-four hours the readings from the satellite, which had been tepid before, suddenly started fluctuating wildly.  That was the signal that the flare had struck the satellite, and would arrive shortly at our location.
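
As a rough, back-of-the-envelope check on that “up to an hour” figure, here is a small Python sketch of my own; the cloud’s speed is not quoted in the post, so it is inferred from the “couple of days” Sun-to-Earth travel time mentioned above.

    # Back-of-the-envelope warning time from the ACE satellite's vantage point.
    # Distances are the ones given in the post; the cloud's speed is inferred from its travel time.
    ace_distance_km = 1.5e6      # ACE sits about 1.5 million km sunward of Earth
    sun_distance_km = 150e6      # Sun-Earth distance
    travel_days = 2.5            # "a couple of days" for the cloud to reach Earth (my reading)

    cme_speed_km_s = sun_distance_km / (travel_days * 86400)        # roughly 700 km/s
    lead_time_min = ace_distance_km / cme_speed_km_s / 60

    print(f"Implied cloud speed: {cme_speed_km_s:.0f} km/s")
    print(f"Warning time from ACE: about {lead_time_min:.0f} minutes")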

ACE satellite data, July 16, 2017

It’s a wonderful feature of the information revolution that you can get all this scientific data yourself, and not wait around hoping for a reporter or blogger to process it for you.  None of this was available when I was a child, and I missed many a sky show.  A big thank you to NOAA, and to the U.S. taxpayers who make their work possible.

 

 


Filed under: Astronomy Tagged: astronomy, auroras, space

by Matt Strassler at July 16, 2017 09:09 PM

July 15, 2017

Matt Strassler - Of Particular Significance

Lights in the Sky (maybe…)

The Sun is busy this summer. The upcoming eclipse on August 21 will turn day into deep twilight and transfix millions across the United States.  But before we get there, we may, if we’re lucky, see darkness transformed into color and light.

On Friday July 14th, a giant sunspot in our Sun’s upper regions, easily visible if you project the Sun’s image onto a wall, generated a powerful flare.  A solar flare is a sort of magnetically powered explosion; it produces powerful electromagnetic waves and often, as in this case, blows a large quantity of subatomic particles from the Sun’s corona. The latter is called a “coronal mass ejection.” It appears that the cloud of particles from Friday’s flare is large, and headed more or less straight for the Earth.

Light, visible and otherwise, is an electromagnetic wave, and so the electromagnetic waves generated in the flare — mostly ultraviolet light and X-rays — travel through space at the speed of light, arriving at the Earth in eight and a half minutes. They cause effects in the Earth’s upper atmosphere that can disrupt radio communications, or worse.  That’s another story.

But the cloud of subatomic particles from the coronal mass ejection travels a few hundred times slower than light, and it takes it about two or three days to reach the Earth.  The wait is on.
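
A quick sanity check of that timescale, as a sketch of my own, reading “a few hundred times slower than light” as roughly c/300:

    # Rough travel-time estimate for the particle cloud from the Sun to the Earth.
    # "A few hundred times slower than light" is read here as roughly c/300 (my assumption).
    c_km_s = 3.0e5                       # speed of light
    v_km_s = c_km_s / 300                # ~1000 km/s for the cloud
    distance_km = 1.5e8                  # Sun-Earth distance, about 150 million km

    print(f"Travel time: about {distance_km / v_km_s / 86400:.1f} days")   # ~1.7 days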

Bottom line: a huge number of high-energy subatomic particles may arrive in the next 24 to 48 hours. If and when they do, the electrically charged particles among them will be trapped in, and shepherded by, the Earth’s magnetic field, which will drive them spiraling into the atmosphere close to the Earth’s polar regions. And when they hit the atmosphere, they’ll strike atoms of nitrogen and oxygen, which in turn will glow. Aurora Borealis, Northern Lights.

So if you live in the upper northern hemisphere, including Europe, Canada and much of the United States, keep your eyes turned to the north (and to the south if you’re in Australia or southern South America) over the next couple of nights. Dark skies may be crucial; the glow may be very faint.

You can also keep abreast of the situation, as I will, using NOAA data, available for instance at

http://www.swpc.noaa.gov/communities/space-weather-enthusiasts

The plot on the upper left of that website, an example of which is reproduced below, shows three types of data. The top graph shows the amount of X-rays impacting the atmosphere; the big jump on the 14th is Friday’s flare. And if and when the Earth’s magnetic field goes nuts and auroras begin, the bottom plot will show the so-called “Kp Index” climbing to 5, 6, or hopefully 7 or 8. When the index gets that high, there’s a much greater chance of seeing auroras much further away from the poles than usual.

The latest space weather overview plot

Keep an eye also on the data from the ACE satellite, lower down on the website; it’s placed to give Earth an early warning, so when its data gets busy, you’ll know the cloud of particles is not far away.

Wishing you all a great sky show!


Filed under: LHC News

by Matt Strassler at July 15, 2017 09:54 PM

Tommaso Dorigo - Scientificblogging

Muon G-2: The Anomaly That Could Change Physics, And A New Exciting Theoretical Development
Do you remember the infamous "g-2" measurement? The anomalous magnetic moment of the muon has been on the agenda of HEP physicists for over a decade, both as a puzzle and as a hope for good things to come. 

Ever since the Brookhaven laboratories estimated the quantity at a value over 3 standard deviations away from the equally precise theoretical predictions, the topic (could the discrepancy be due to new physics??) has been commonplace in dinner table conversations among HEP physicists. 

read more

by Tommaso Dorigo at July 15, 2017 02:01 PM

July 14, 2017

Emily Lakdawalla - The Planetary Society Blog

Meet Scott Pace, the National Space Council's new executive secretary
Pace will help develop policies that affect the future of NASA. Here's a guide to this influential new member of the Trump administration.

July 14, 2017 03:51 PM

The n-Category Cafe

Laws of Mathematics "Commendable"

Australia’s Prime Minister Malcolm Turnbull, today:

The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.

The context: Turnbull wants Australia to undermine encryption by compelling backdoors by law. The argument is that governments should have the right to read all their citizens’ communications.

Technologists have explained over and over again why this won’t work, but politicians like Turnbull know better. The recent, enormous, Petya and WannaCry malware attacks (hitting British hospitals, for instance) show what can happen when intelligence agencies such as the NSA treat vulnerabilities in software as opportunities to be exploited rather than problems to be fixed.

Thanks to David Roberts for sending me the link.

by leinster (Tom.Leinster@gmx.com) at July 14, 2017 02:37 PM

July 13, 2017

Clifford V. Johnson - Asymptotia

It Can be Done

For those interested in giving more people access to science, and especially those who act as gate-keepers, please pause to note that* a primetime drama featuring tons of real science in nearly every episode can get 10 Emmy nominations. Congratulations National Geographic’s Genius! (Full list here. See an earlier post … Click to continue reading this post

The post It Can be Done appeared first on Asymptotia.

by Clifford at July 13, 2017 09:17 PM

Symmetrybreaking - Fermilab/SLAC

SLAC accelerator plans appear in Smithsonian art exhibit

The late artist June Schwarcz found inspiration in some unusual wrapping paper her husband brought home from the lab.

 

Photograph of June Schwarcz at home

Leroy Schwarcz, one of the first engineers hired to build SLAC National Accelerator Laboratory’s original 2-mile-long linear accelerator, thought his wife might like to use old mechanical drawings of the project as wrapping paper. So, he brought them home.

His wife, acclaimed enamelist June Schwarcz, had other ideas.

Today, works called SLAC Drawing III, VII and VIII, created in 1974 and 1975 from electroplated copper and enamel, form a unique part of a retrospective at the Smithsonian’s Renwick Gallery in Washington, D.C.

Among the richly formed and boldly textured and colored vessels that make up the majority of June’s oeuvre, the SLAC-inspired panels stand out for their fidelity to the mechanical design of their inspiration. 

The description next to the display at the gallery describes the “SLAC Blueprints” as resembling “ancient pictographs drawn on walls of a cave or glyphs carved in stone.” The designs appear to depict accelerator components, such as electromagnets and radio frequency structures.

According to Harold B. Nelson, who curated the exhibit with Bernard N. Jazzar, “The panels are quite unusual in the subtle color palette she chose; in her use of predominantly opaque enamels; in her reliance on a rectilinear, geometric format for her compositions; and in her reference in the work to machines, plans, numbers, and mechanical parts. 

“We included them because they are extremely beautiful and visually powerful. Together they form an important group within her body of work.”

Making history

June and Leroy Schwarcz met in the late 1930s and were married in 1943. Two years later they moved to Chicago where Leroy would become chief mechanical engineer for the University of Chicago’s synchrocyclotron, which was at the time the highest-energy proton accelerator in the world.

Having studied art and design at the Pratt Institute in Brooklyn several years earlier, June found her way into a circle of notable artists in Chicago, including Bauhaus legend László Moholy-Nagy, founder of Chicago’s Institute of Design.

Around 1954, June was introduced to enameling and shortly thereafter began to exhibit her art. She and her husband had two children and relocated several times during the 1950s for Leroy’s work. In 1958 they settled in Sausalito, California, where June set up her studio in the lower level of their hillside home. 

In 1961, Leroy became the first mechanical engineer hired by Stanford University to work on “Project M,” which would become the famous 2-mile-long linear accelerator at SLAC. He oversaw the engineers during early design and construction of the linac, which eventually enabled Nobel-winning particle physics research.

June and Leroy’s daughter, Kim Schwarcz, who made a living as a glass blower and textile artist until the mid 1980s and occasionally exhibited with her mother, remembers those early days at the future lab.

“Before SLAC was built, the offices were in Quonset huts, and my father used to bring me down, and I would bicycle all over the campus,” she recalled. “Pief was a family friend and so was Bob Mozley. Mom introduced Bob to his future wife…It was a small community and a really nice community.” 

W.K.H. “Pief” Panofsky was the first director of SLAC; he and Mozley were renowned SLAC physicists and national arms control experts.

Finding beauty

Kim was not surprised that her mother made art based on the SLAC drawings. She remembers June photographing the foggy view outside their home and getting inspiration from nature, ethnic art and Japanese clothing.

“She would take anything and make something out of it,” Kim said. “She did an enamel of an olive oil can once and a series called Adam’s Pants that were based on the droopy pants my son wore as a teen.”

But the fifteen SLAC-inspired compositions were unique and a family favorite; Kim and her brother Carl both own some of them, and others are at museums.

In a 2001 oral history interview with the Smithsonian Institution's Archives of American Art, June explained the detailed work involved in creating the SLAC drawings by varnishing, scribing, electroplating and enameling a copper sheet: “I'm primarily interested in having things that are beautiful, and of course, beauty is a complicated thing to devise, to find.”

Engineering art

Besides providing inspiration in the form of technical drawings, Leroy was influential in June’s career in other ways.

Around 1962 he introduced her to Jimmy Pope at the SLAC machine shop, who showed June how to do electroplating, a signature technique of her work. Electroplating involves using an electric current to deposit a coating of metal onto another material. She used it to create raised surfaces and to transform thin sheets of copper—which she stitched together using copper wire—into substantial, free-standing vessel-like forms. She then embellished these sculptures with colored enamel.

Leroy built a 30-gallon plating bath and other tools for June’s art-making at their shared workshop. 

“Mom was tiny, 5 feet tall, and she had these wobbly pieces on the end of a fork that she would put into a hot kiln. It was really heavy. Dad made a stand so she could rest her arm and slide the piece in,” Kim recalls.

“He was very inventive in that way, and very creative himself,” she said. “He did macramé in the 1960s, made wooden spoons and did scrimshaw carvings on bone that were really good.”

Kim remembers the lower-level workshop as a chaotic and inventive space. “For the longest time, there was a wooden beam in the middle of the workshop we would trip over. It was meant for a boat dad wanted to build—and eventually did build after he retired,” she said.

At SLAC Leroy’s work was driven by his “amazingly good intuition,” according to a tribute written by Mozley upon his colleague’s death in 1993. Even when he favored crude drawings to exact math, “his intuitive designs were almost invariably right,” he wrote. 

After the accelerator was built, Leroy turned his attention to the design, construction and installation of a streamer chamber scientists at SLAC used as a particle detector. In 1971 he took a leave of absence from the California lab to go back to Chicago and move the synchrocyclotron’s 2000-ton magnet from the university to Fermi National Accelerator Laboratory. 

“[Leroy] was the only person who could have done this because, although drawings existed, knowledge of the assembly procedures existed only in the minds of Leroy and those who had helped him put the cyclotron together,” Mozley wrote.

Beauty on display

June continued making art at her Sausalito home studio up until two weeks before her death in 2015 at the age of 97. A 2007 video shows the artist at work there 10 years prior to her passing. 

After Leroy died, her own art collection expanded on the shelves and walls of her home.

“As a kid, the art was just what mom did, and it never changed,” Kim remembers. “She couldn’t wait for us to go to school so she could get to work, and she worked through health challenges in later years.”

The Smithsonian exhibit is a unique collection of June’s celebrated work, with its traces of a shared history with SLAC and one of the lab’s first mechanical engineers.

“June had an exceptionally inquisitive mind, and we think you get a sense of the rich breadth of her vision in this wonderful body of work,” says curator Jazzar.

June Schwarcz: Invention and Variation is the first retrospective of the artist’s work in 15 years and includes almost 60 works. The exhibit runs through August 27 at the Smithsonian American Art Museum Renwick Gallery. 

Editor's note: Some of the information from this article was derived from an essay written by Jazzar and Nelson that appears in a book based on the exhibition with the same title.

by Angela Anderson at July 13, 2017 01:00 PM

July 12, 2017

Marco Frasca - The Gauge Connection

Something to say but not yet…

Last week I was in Montpellier to attend the QCD 17 conference, hosted at the CNRS and organized mainly by Stephan Narison. Many people from CERN attended, presenting new results just ahead of the main summer conferences. This year, QCD 17 was held in conjunction with EPS-HEP 2017, where the new results coming from the LHC were first presented. This means that the contents of the talks at the two conferences overlapped within a matter of a few hours.

On Friday, the last day of the conference, I posted the following tweet after attending the talk given by Shunsuke Honda on behalf of ATLAS at QCD 17:

and the reason was this slide

The title of the talk was “Cross sections and couplings of the Higgs Boson from ATLAS”. As you can read from it, there is a deviation of about 2 sigma from the Standard Model for the Higgs decaying to ZZ(4l) in VBF production. Indeed, they can still claim agreement, but it is interesting anyway (maybe we are missing something?). The previous day at EPS-HEP 2017, Ruchi Gupta, on behalf of ATLAS, presented an identical talk with the title “Measurement of the Higgs boson couplings and properties in the diphoton, ZZ and WW decay channels using the ATLAS detector”, and the slide was the following:

The result is still there, but with a somewhat more sober presentation. What does this mean? Presently, not much. We are still within the Standard Model, even if something seems to be peeping out. In order to claim a discovery, this effect would have to be seen with a smaller error, and at CMS too. The implication would be a more complex spectrum for the Higgs sector, with a possible new understanding of naturalness if such a spectrum had no formal upper bound. People at CERN have promised more data in the coming weeks. Let us see what happens to this small effect.


Filed under: Conference, Particle Physics, Physics Tagged: ATLAS, CERN, Higgs decay

by mfrasca at July 12, 2017 12:56 PM

Clifford V. Johnson - Asymptotia

A Street Scene Materializing

Well, I finished all the line art on that SF short story I was asked to write and draw. And the good news is that the editor of the anthology it will be part of is extremely pleased with the story. So that's good news since I put a lot of work into it and it would be hard to change anything significant at this stage! So all I have to do is paint the 20 pages, which should be fun. The line art is in a pencil style, and so I might do some colour that is in a loose style to match. In any case, below is a video capture (2 mins long) of the complete process of me drawing a panel for part of a page of the story (unpainted panel is at top of this post). I did this on the plane back from Europe a short while ago. It's an [...] Click to continue reading this post

The post A Street Scene Materializing appeared first on Asymptotia.

by Clifford at July 12, 2017 02:58 AM

July 11, 2017

ZapperZ - Physics and Physicists

The Higgs - Five Years In
In case you've been asleep the past 5 years or so and want to catch up on our lovable Higgs, here is a quick, condensed version of the saga so far.

Where were you on 4 July 2012, the day the Higgs boson discovery was announced? Many people will be able to answer without referring to their diary. Perhaps you were among the few who had managed to secure a seat in CERN’s main auditorium, or who joined colleagues in universities and laboratories around the world to watch the webcast.

This story promises to have lots of sequels, just like the movies released so far this year.

Zz.

by ZapperZ (noreply@blogger.com) at July 11, 2017 08:56 PM

ZapperZ - Physics and Physicists

The Universe's First Atoms Verify Big Bang Theory
The Big Bang theory makes many predictions and has many consequences, all of which are being thoroughly tested (unlike Intelligent Design or Creationism). These predictions and consequences are quantitative in nature, i.e. the theory predicts actual numbers.

Many of these "numbers" have been verified by experiments and observations, and they are continually being measured to higher precision. This latest one comes about from the prediction of the amount of certain gases during the early evolution of our universe.

But more data has just come in! Two new measurements, in a paper just coming out now by Signe Riemer-Sørensen and Espen Sem Jenssen, of different gas clouds lines up with a different quasar have given us our best determination of deuterium's abundance right after the Big Bang: 0.00255%. This is to be compared with the theoretical prediction from the Big Bang: 0.00246%, with an uncertainty of ±0.00006%. To within the errors, the agreement is spectacular. In fact, if you sum up all the data from deuterium measurements taken in this fashion, the agreement is indisputable.
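
As a quick consistency check of the numbers quoted above (a sketch of my own, ignoring the observational error bar, which the quote does not give):

    # How far is the measured primordial deuterium abundance from the Big Bang prediction,
    # in units of the quoted theoretical uncertainty? (Numbers taken from the quote above.)
    measured  = 0.00255   # percent
    predicted = 0.00246   # percent
    sigma     = 0.00006   # percent, quoted uncertainty on the prediction

    print(f"Deviation: {(measured - predicted) / sigma:.1f} sigma")   # about 1.5 sigma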

The more they test it, the more convincing it becomes.

Zz.

by ZapperZ (noreply@blogger.com) at July 11, 2017 04:21 PM

Symmetrybreaking - Fermilab/SLAC

A new model for standards

In an upcoming refresh, particle physics will define units of measurement such as the meter, the kilogram and the second.

Image of yellow ruler background with Moon and Planck graphics

While America remains obstinate about using Imperial units such as miles, pounds and degrees Fahrenheit, most of the world has agreed that using units that are actually divisible by 10 is a better idea. The metric system, also known as the International System of Units (SI), is the most comprehensive and precise system for measuring the universe that humans have developed. 

In 2018, the 26th General Conference on Weights and Measures will convene and likely adopt revised definitions for the seven base metric system units for measuring: length, mass, time, temperature, electric current, luminosity and quantity.

The modern metric system owes its precision to particle physics, which has the tools to investigate the universe more precisely than any microscope. Measurements made by particle physicists can be used to refine the definitions of metric units. In May, a team of German physicists at the Physikalisch-Technische Bundesanstalt made the most precise measurements yet of the Boltzmann constant, which will be used to define units of temperature.

Since the metric system was established in the 1790s, scientists have attempted to give increasingly precise definitions to these units. The next update will define every base unit using fundamental constants of the universe that have been derived by particle physics.

meter (distance): 

Starting in 1799, the meter was defined by a prototype meter bar, which was just a platinum bar. Physicists eventually realized that distance could be defined by the speed of light, which has been measured with an accuracy to one part in a billion using an interferometer (interestingly, the same type of detector the LIGO collaboration used to discover gravitational waves). The meter is currently defined as the distance traveled by light (in a vacuum) for 1/299,792,458 of a second, and will remain effectively unchanged in 2018.

kilogram (mass):

For over a century, the standard kilogram has been a small platinum-iridium cylinder housed at the International Bureau of Weights and Measures in France. But even its precise mass fluctuates due to factors such as accumulation of microscopic dust. Scientists hope to redefine the kilogram in 2018 by setting the value of Planck’s constant to exactly 6.626070040×10⁻³⁴ kilograms times meters squared per second. Planck’s constant is the smallest amount of quantized energy possible. This fundamental value, which is represented with the letter h, is integral to calculating energies in particle physics.
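
To get a feel for how small Planck’s constant is, here is a tiny sketch (my own illustration) computing the energy of a single photon of green light at 540×10¹² Hz, the same frequency that appears in the candela definition below:

    # Energy of one photon, E = h * f, using the exact value of h proposed for the 2018 redefinition.
    h = 6.626070040e-34          # J*s, i.e. kg m^2 / s
    f = 540e12                   # Hz; green light, the frequency also used in the candela definition

    print(f"E = {h * f:.3e} J")  # about 3.6e-19 joules per photon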

second (time):

The earliest seconds were defined as divisions of time between full moons. Later, seconds were defined by solar days, and eventually the time it took Earth to revolve around the sun. Today, seconds are defined by atomic time, which is precise to 1 part in 10 billion. Atomic time is calculated by periods of radiation by atoms, a measurement that relies heavily on particle physics techniques. One second is currently defined as 9,192,631,770 periods of the radiation for a Cesium-133 atom and will remain effectively unchanged. 

kelvin (temperature):

Kelvin is the temperature scale that starts at the coldest possible state of matter. Currently, a kelvin is defined by the triple point of water—where water can exist as a solid, liquid and gas. The triple point is 273.16 Kelvin, so a single kelvin is 1/273.16 of the triple point. But because water can never be completely pure, impurities can influence the triple point. In 2018 scientists hope to redefine kelvin by setting the value of Boltzmann’s constant to exactly 1.38064852×10⁻²³ joules per kelvin. Boltzmann’s constant links the movement of particles in a gas (the average kinetic energy) to the temperature of the gas. Denoted by the symbol k, the Boltzmann constant is ubiquitous throughout physics calculations that involve temperature and entropy.  
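
As a small illustration of that link (my own example, not from the article): the average translational kinetic energy of a gas particle is (3/2)kT, which at room temperature works out to a few zeptojoules.

    # Average translational kinetic energy of a gas particle, <E> = (3/2) k T,
    # using the exact Boltzmann constant proposed for the 2018 redefinition.
    k = 1.38064852e-23    # J/K
    T = 300.0             # K, roughly room temperature

    print(f"<E> = {1.5 * k * T:.2e} J")   # about 6.2e-21 joules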

ampere (electric current):

André-Marie Ampère, who is often considered the father of electrodynamics, has the honor of having the basic unit of electric current named after him. Right now, the ampere is defined by the amount of current required to produce a force of 2×10⁻⁷ newtons for each meter between two parallel conductors of infinite length. Naturally, it’s a bit hard to come by things of infinite length, so the proposed definition is instead to define amperes by the fundamental charge of a particle. This new definition would rely on the charge of the electron, which will be set to 1.6021766208×10⁻¹⁹ amperes times seconds.
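
A small illustrative consequence of the proposed definition (my own arithmetic): a one-ampere current corresponds to a definite number of elementary charges passing per second.

    # Number of elementary charges per second in a one-ampere current,
    # using the exact electron charge proposed for the 2018 redefinition.
    e = 1.6021766208e-19   # coulombs, i.e. ampere-seconds

    print(f"1 A = {1 / e:.3e} elementary charges per second")   # about 6.24e18 per second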

candela (luminosity):

The last of the base SI units to be established, the candela measures luminosity—what we typically refer to as brightness. Early standards for the candela used a phenomenon from quantum mechanics called “black body radiation.” This is the light that all objects radiate as a function of their heat. Currently, the candela is defined more fundamentally as 1/683 watt per steradian at a frequency of 540×10¹² hertz in a given direction, a definition which will remain effectively unchanged. Hard to picture? A candle, conveniently, emits about one candela of luminous intensity.

mole (quantity):

Different from all the other base units, the mole measures quantity alone. Over hundreds of years, scientists starting from Amedeo Avogadro worked to better understand how the number of atoms was related to mass, leading to the current definition of the mole: the number of atoms in 12 grams of carbon-12. This number, which is known as Avogadro’s constant and used in many calculations of mass in particle physics, is about 6×10²³. To make the mole more precise, the new definition would set Avogadro’s constant to exactly 6.022140857×10²³, decoupling it from the kilogram.
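
One way to picture what fixing Avogadro’s constant buys (my own arithmetic, using the 12-grams-of-carbon-12 figure above): the mass of a single carbon-12 atom follows immediately.

    # Mass of a single carbon-12 atom, using the exact Avogadro constant proposed for the 2018 redefinition.
    N_A = 6.022140857e23        # atoms per mole
    molar_mass_c12 = 12.0       # grams of carbon-12 per mole (the figure quoted above)

    print(f"One carbon-12 atom weighs about {molar_mass_c12 / N_A:.2e} g")   # about 2.0e-23 grams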

by Daniel Garisto at July 11, 2017 03:42 PM

July 10, 2017

The n-Category Cafe

A Bicategory of Decorated Cospans

My students are trying to piece together general theory of networks, inspired by many examples. A good general theory should clarify and unify these examples. What some people call network theory, I’d just call ‘applied graph invariant theory’: they come up with a way to calculate numbers from graphs, they calculate these numbers for graphs that show up in nature, and then they try to draw conclusions about this. That’s fine as far as it goes, but there’s a lot more to network theory!

There are many kinds of networks. You can usually create big networks of a given kind by sticking together smaller networks of this kind. The networks usually do something, and the behavior of the whole is usually determined by the behavior of the parts and how the parts are stuck together.

So, we should think of networks of a given kind as morphisms in a category, or more generally elements of an algebra of some operad, and define a map sending each such network to its behavior. Then we can study this map mathematically!

All these insights (and many more) are made precise in Fong’s theory of ‘decorated cospans’:

• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, University of Oxford, 2016.

Kenny Courser is starting to look at the next thing: how one network can turn into another. For example, a network might change over time, or we might want to simplify a complicated network somehow. If a network is morphism, a process where one network turns into another could be a ‘2-morphism’: that is, a morphism between morphisms. Just as categories have objects and morphisms, bicategories have objects, morphisms and 2-morphisms.

So, Kenny is looking at bicategories. As a first step, Kenny took Brendan’s setup and souped it up to define ‘decorated cospan bicategories’:

• Kenny Courser, Decorated cospan bicategories, to appear in Theory and Applications of Categories.

In this paper, he showed that these bicategories are often ‘symmetric monoidal’. This means that you can not only stick networks together end to end, you can also set them side by side or cross one over the other—and similarly for processes that turn one network into another! A symmetric monoidal bicategory is a somewhat fearsome structure, so Kenny used some clever machinery developed by Mike Shulman to get the job done:

• Mike Shulman, Constructing symmetric monoidal bicategories.

I would love to talk about the details, but they’re a bit technical so I think I’d better talk about something more basic. Namely: what’s a decorated cospan category and what’s a decorated cospan bicategory?

First: what’s a decorated cospan? A cospan in some category C is a diagram like this:

where the objects and morphisms are all in C. For example, if C is the category of sets, we’ve got two sets X and Y mapped to a set \Gamma.

In a ‘decorated’ cospan, the object \Gamma is equipped or, as we like to say, ‘decorated’ with extra structure. For example:

Here the set \Gamma consists of 3 points—but it’s decorated with a graph whose edges are labelled by numbers! You could use this to describe an electrical circuit made of resistors. The set X would then be the set of ‘input terminals’, and Y the set of ‘output terminals’.

In this example, and indeed in many others, there’s no serious difference between inputs and outputs. We could reflect the picture, switching the roles of X and Y, and the inputs would become outputs and vice versa. One reason for distinguishing them is that we can then attach the outputs of one circuit to the inputs of another and build a larger circuit. If we think of our circuit as a morphism from the input set X to the output set Y, this process of attaching circuits to form larger ones can be seen as composing morphisms in a category.

In other words, if we get the math set up right, we can compose a decorated cospan from X to Y and a decorated cospan from Y to Z and get a decorated cospan from X to Z. So with luck, we get a category with objects of C as objects, and decorated cospans between these guys as morphisms!

For example, we can compose this:

and this:

to get this:

What did I mean by saying ‘with luck’? Well, there’s not really any luck involved, but we need some assumptions for all this to work. Before we even get to the decorations, we need to be able to compose cospans. We can do this whenever our cospans live in a category with pushouts. In category theory, a pushout is how we glue two things together.

So, suppose our category C has pushouts. If we then have two cospans in C, one from X to Y and one from Y to Z:

we can take a pushout:

and get a cospan from X to Z:

All this is fine and dandy, but there’s a slight catch: the pushout is only defined up to isomorphism, so we can’t expect this process of composing cospans to be associative: it will only be associative up to isomorphism.

What does that mean? What’s an isomorphism of cospans?

I’m glad you asked. A map of cospans is a diagram like this:

where the two triangles commute. You can see two cospans in this picture; the morphism f provides the map from one to the other. If f is an isomorphism, then this is an isomorphism of cospans.

To get around this problem, we can work with a category where the morphisms aren’t cospans, but isomorphism classes of cospans. That’s what Brendan did, and it’s fine for many purposes.

But back around 1972, when Bénabou was first inventing bicategories, he noticed that you could also create a bicategory with

  • objects of C as objects,
  • spans in C as morphisms, and
  • maps of spans in C as 2-morphisms.

Bicategories are perfectly happy for composition of 1-morphisms to be associative only up to isomorphism, so this solves the problem in a somewhat nicer way. (Taking equivalence classes of things when you don’t absolutely need to is regarded with some disdain in category theory, because it often means you’re throwing out information—and when you throw out information, you often regret it later.)

So, if you’re interested in decorated cospan categories, and you’re willing to work with bicategories, you should consider thinking about decorated cospan bicategories. And now, thanks to Kenny Courser’s work, you can!

He showed how the decorations work in the bicategorical approach: for example, he proved that whenever C has finite colimits and

F : (C,+) \to (\mathrm{Set}, \times)

is a lax symmetric monoidal functor, you get a symmetric monoidal bicategory where a morphism is a cospan in C:

with the object \Gamma decorated by an element of F(\Gamma).

Proving this took some virtuosic work in category theory. The key turns out to be this glorious diagram:

For the explanation, check out Proposition 4.5 in his paper.

I’ll talk more about applications of cospan bicategories when I blog about some other papers Kenny Courser and Daniel Cicala are writing.

by john (baez@math.ucr.edu) at July 10, 2017 03:08 PM

Tommaso Dorigo - Scientificblogging

600 Attend To Outreach Event In Venice
On Saturday, July 8th, the "Sala Perla" of the Palazzo del Casinò was crowded by 600 attendees, who filled all seats and then some. The event, titled "Universo: tempo zero - breve storia dell'inizio", was organized in conjunction with the international EPS conference, which takes place until this Wednesday at the Lido of Venice. It featured a discussion between the anchor, Silvia Rosa Brusin, and a few guests: Fabiola Gianotti, director-general of CERN; Antonio Masiero, vice-president of INFN; and Mirko Pojer, responsible for operations of the LHC collider. The program was enriched by a few videos, by readings by Sonia Bergamasco, and by jazz music by Umberto Petrin.

read more

by Tommaso Dorigo at July 10, 2017 02:56 PM

July 08, 2017

John Baez - Azimuth

A Bicategory of Decorated Cospans

My students are trying to piece together general theory of networks, inspired by many examples. A good general theory should clarify and unify these examples. What some people call network theory, I’d just call ‘applied graph invariant theory’: they come up with a way to calculate numbers from graphs, they calculate these numbers for graphs that show up in nature, and then they try to draw conclusions about this. That’s fine as far as it goes, but there’s a lot more to network theory!

There are many kinds of networks. You can usually create big networks of a given kind by sticking together smaller networks of this kind. The networks usually do something, and the behavior of the whole is usually determined by the behavior of the parts and how the parts are stuck together.

So, we should think of networks of a given kind as morphisms in a category, or more generally elements of an algebra of some operad, and define a map sending each such network to its behavior. Then we can study this map mathematically!

All these insights (and many more) are made precise in Fong’s theory of ‘decorated cospans’:

• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, University of Oxford, 2016. (Blog article here.)

Kenny Courser is starting to look at the next thing: how one network can turn into another. For example, a network might change over time, or we might want to simplify a complicated network somehow. If a network is morphism, a process where one network turns into another could be a ‘2-morphism’: that is, a morphism between morphisms. Just as categories have objects and morphisms, bicategories have objects, morphisms and 2-morphisms.

So, Kenny is looking at bicategories. As a first step, Kenny took Brendan’s setup and souped it up to define ‘decorated cospan bicategories’:

• Kenny Courser, Decorated cospan bicategories, to appear in Theory and Applications of Categories.

In this paper, he showed that these bicategories are often ‘symmetric monoidal’. This means that you can not only stick networks together end to end, you can also set them side by side or cross one over the other—and similarly for processes that turn one network into another! A symmetric monoidal bicategory is a somewhat fearsome structure, so Kenny used some clever machinery developed by Mike Shulman to get the job done:

• Mike Shulman, Constructing symmetric monoidal bicategories.

I would love to talk about the details, but they’re a bit technical so I think I’d better talk about something more basic. Namely: what’s a decorated cospan category and what’s a decorated cospan bicategory?

First: what’s a decorated cospan? A cospan in some category C is a diagram like this:

where the objects and morphisms are all in C. For example, if C is the category of sets, we’ve got two sets X and Y mapped to a set \Gamma.

In a ‘decorated’ cospan, the object \Gamma is equipped or, as we like to say, ‘decorated’ with extra structure. For example:

Here the set \Gamma consists of 3 points—but it’s decorated with a graph whose edges are labelled by numbers! You could use this to describe an electrical circuit made of resistors. The set X would then be the set of ‘input terminals’, and Y the set of ‘output terminals’.

In this example, and indeed in many others, there’s no serious difference between inputs and outputs. We could reflect the picture, switching the roles of X and Y, and the inputs would become outputs and vice versa. One reason for distinguishing them is that we can then attach the outputs of one circuit to the inputs of another and build a larger circuit. If we think of our circuit as a morphism from the input set X to the output set Y, this process of attaching circuits to form larger ones can be seen as composing morphisms in a category.

In other words, if we get the math set up right, we can compose a decorated cospan from X to Y and a decorated cospan from Y to Z and get a decorated cospan from X to Z. So with luck, we get a category with objects of C as objects, and decorated cospans between these guys as morphisms!

For example, we can compose this:

and this:

to get this:

What did I mean by saying ‘with luck’? Well, there’s not really any luck involved, but we need some assumptions for all this to work. Before we even get to the decorations, we need to be able to compose cospans. We can do this whenever our cospans live in a category with pushouts. In category theory, a pushout is how we glue two things together.

So, suppose our category C has pushouts. If we then have two cospans in C, one from X to Y and one from Y to Z:

we can take a pushout:

and get a cospan from X to Z:

All this is fine and dandy, but there’s a slight catch: the pushout is only defined up to isomorphism, so we can’t expect this process of composing cospans to be associative: it will only be associative up to isomorphism.
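
If a concrete case helps, here is a minimal Python sketch (entirely my own illustration, not from the post) that composes two cospans of finite sets by gluing their apexes along the shared foot Y with a pushout, implemented with a little union-find. Because the glued elements end up with arbitrary representative names, the composite is only canonical up to isomorphism, which is exactly the issue just mentioned.

    # Compose two cospans of finite sets, X -> G1 <- Y and Y -> G2 <- Z,
    # by taking the pushout of G1 <- Y -> G2: glue g1(y) with f2(y) for every y in Y.
    # A toy illustration only; all names below are made up.

    def compose_cospans(f1, g1, f2, g2, X, Y, Z, G1, G2):
        # Tag the apex elements so their disjoint union really is disjoint.
        elements = [(1, a) for a in G1] + [(2, b) for b in G2]
        parent = {e: e for e in elements}

        def find(e):                      # union-find with path halving
            while parent[e] != e:
                parent[e] = parent[parent[e]]
                e = parent[e]
            return e

        for y in Y:                       # glue along the shared foot Y
            parent[find((1, g1[y]))] = find((2, f2[y]))

        apex = {find(e) for e in elements}               # the pushout object
        left = {x: find((1, f1[x])) for x in X}          # composite leg X -> apex
        right = {z: find((2, g2[z])) for z in Z}         # composite leg Z -> apex
        return apex, left, right

    # Two toy "circuits" sharing the terminal set Y = {"y"}.
    X, Y, Z = {"x"}, {"y"}, {"z"}
    G1, G2 = {"a", "b"}, {"c", "d"}
    f1, g1 = {"x": "a"}, {"y": "b"}       # first cospan:  X -> G1 <- Y
    f2, g2 = {"y": "c"}, {"z": "d"}       # second cospan: Y -> G2 <- Z

    apex, left, right = compose_cospans(f1, g1, f2, g2, X, Y, Z, G1, G2)
    print(len(apex), left, right)         # apex has 3 elements: a, the glued b~c, and d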

What does that mean? What’s an isomorphism of cospans?

I’m glad you asked. A map of cospans is a diagram like this:

where the two triangles commute. You can see two cospans in this picture; the morphism f provides the map from one to the other. If f is an isomorphism, then this is an isomorphism of cospans.

To get around this problem, we can work with a category where the morphisms aren’t cospans, but isomorphism classes of cospans. That’s what Brendan did, and it’s fine for many purposes.

But back around 1972, when Bénabou was first inventing bicategories, he noticed that you could also create a bicategory with

• objects of C as objects,
• spans in C as morphisms, and
• maps of spans in C as 2-morphisms.

Bicategories are perfectly happy for composition of 1-morphisms to be associative only up to isomorphism, so this solves the problem in a somewhat nicer way. (Taking equivalence classes of things when you don’t absolutely need to is regarded with some disdain in category theory, because it often means you’re throwing out information—and when you throw out information, you often regret it later.)

So, if you’re interested in decorated cospan categories, and you’re willing to work with bicategories, you should consider thinking about decorated cospan bicategories. And now, thanks to Kenny Courser’s work, you can!

He showed how the decorations work in the bicategorical approach: for example, he proved that whenever C has finite colimits and

F : (C,+) \to (\mathrm{Set}, \times)

is a lax symmetric monoidal functor, you get a symmetric monoidal bicategory where a morphism is a cospan in C:

with the object \Gamma decorated by an element of F(\Gamma).

Proving this took some virtuosic work in category theory. The key turns out to be this glorious diagram:

For the explanation, check out Proposition 4.1 in his paper.

I’ll talk more about applications of cospan bicategories when I blog about some other papers Kenny Courser and Daniel Cicala are writing.


by John Baez at July 08, 2017 12:07 AM

July 07, 2017

Symmetrybreaking - Fermilab/SLAC

Quirks of the arXiv

Sometimes, physics papers turn funny.


Since it went up in 1991, the arXiv (pronounced like the word “archive”) has been a hub for scientific papers in quantitative fields such as physics, math and computer science. Many of its million-plus papers are serious products of intense academic work that are later published in peer-reviewed journals. Still, some manage to have a little more character than the rest. For your consideration, we’ve gathered seven of the quirkiest physics papers on the arXiv.

Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?

In 2011, an experiment appeared to find particles traveling faster than the speed of light. To spare readers uninterested in lengthy calculations demonstrating the unlikeliness of this probably impossible phenomenon, the abstract for this analysis cut to the chase.


Quantum Tokens for Digital Signatures

Sometimes the best way to explain something is to think about how you might explain it to a child—for example, as a fairy tale.


A dialog on quantum gravity

Unless you're intimately familiar with string theory and loop quantum gravity, this Socratic dialogue is like Plato's Republic: It's all Greek to you.


The Proof of Innocence

Pulled over after he was apparently observed failing to halt at a stop sign, the author of this paper, Dmitri Krioukov, was determined to prove his innocence—as only a scientist would.

Using math, he demonstrated that, to a police officer measuring the angular speed of Krioukov’s car, a brief obstruction from view could cause an illusion that the car did not stop. Krioukov submitted his proof to the arXiv; the judge ruled in his favor.
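
To see why such an argument is even plausible, here is a toy numerical sketch (my own simplification, not Krioukov's actual calculation): the angular speed measured by an observer standing a perpendicular distance r0 from the road, for a car that briefly stops compared with one that cruises past at constant speed.

    # Toy model: angular speed seen by a roadside observer a perpendicular distance r0 from the road,
    # omega(t) = v(t) * r0 / (r0**2 + x(t)**2), where x(t) is the car's position along the road.
    # The motion profiles and numbers below are invented purely for illustration.

    r0, dt = 10.0, 0.01                  # observer 10 m from the road; time step in seconds

    def angular_speeds(velocity):
        x, omegas = -30.0, []            # the car starts 30 m before the observer
        for step in range(int(6.0 / dt)):
            v = velocity(step * dt)
            x += v * dt
            omegas.append(v * r0 / (r0**2 + x**2))
        return omegas

    cruise = lambda t: 10.0                                  # constant 10 m/s, never stopping
    stop_and_go = lambda t: abs(t - 3.0) * 8.0               # decelerates, stops at t = 3 s, speeds up

    for name, profile in [("constant speed", cruise), ("stop and go", stop_and_go)]:
        print(f"{name}: peak angular speed ~ {max(angular_speeds(profile)):.2f} rad/s")
    # The two peaks come out comparable, which is the heart of the claimed illusion.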


Quantum weak coin flipping with arbitrarily small bias

Not many papers in the arXiv illustrate their point with a tale involving human sacrifice. There’s something about quantum informatics that brings out the weird side of physicists.


10 = 6 + 4

A theorist calculated an alternative decomposition of 10 dimensions into 6 spacetime dimensions with local Conformal symmetry and 4-dimensional compact Internal Symmetry Space. For the title of his paper, he decided to go with something a little simpler.


Would Bohr be born if Bohm were born before Born?

This tricky tongue-twisting treatise theorizes a tangential timeline to testify that taking up quantum theories turns on timeliness.


by Daniel Garisto at July 07, 2017 01:00 PM

Tommaso Dorigo - Scientificblogging

LHCb Unearths New Doubly-Charmed Hadron Where Marek Karliner And Jonathan Rosner Ordered It
[UPDATE: see at the bottom for some additional commentary following a post on the matter by our friend Lubos Motl in his blog, where he quotes this piece and disagrees on the interest of finding the Xi mass in perfect agreement with an a priori calculation.]

It is always nice to learn that a new hadron is discovered - this broadens our understanding of the extremely complicated fabric of Quantum Chromodynamics (QCD), the theory of strong interactions that govern nuclear matter and are responsible for its stability. 

read more

by Tommaso Dorigo at July 07, 2017 07:17 AM

July 06, 2017

John Baez - Azimuth

Entropy 2018

The editors of the journal Entropy are organizing this conference:

Entropy 2018 — From Physics to Information Sciences and Geometry, 14–16 May 2018, Auditorium Enric Casassas, Faculty of Chemistry, University of Barcelona, Barcelona, Spain.

They write:

One of the most frequently used scientific words is the word “entropy”. The reason is that it is related to two main scientific domains: physics and information theory. Its origin goes back to the start of physics (thermodynamics), but since Shannon, it has become related to information theory. This conference is an opportunity to bring researchers of these two communities together and create a synergy. The main topics and sessions of the conference cover:

• Physics: classical and quantum thermodynamics
• Statistical physics and Bayesian computation
• Geometrical science of information, topology and metrics
• Maximum entropy principle and inference
• Kullback and Bayes or information theory and Bayesian inference
• Entropy in action (applications)

Inter-disciplinary contributions from both theoretical and applied perspectives are very welcome, including papers addressing conceptual and methodological developments, as well as new applications of entropy and information theory.

All accepted papers will be published in the proceedings of the conference. A selection of invited and contributed talks presented during the conference will be invited to submit an extended version of their paper for a special issue of the open access journal Entropy.


by John Baez at July 06, 2017 02:59 PM

July 03, 2017

The n-Category Cafe

The Geometric McKay Correspondence (Part 2)

Last time I sketched how the E_8 Dynkin diagram arises from the icosahedron. This time I’ll fill in some details. I won’t fill in all the details, because I don’t know how! Working them out is the goal of this series, and I’d like to enlist your help.

As Kennedy said: ask not what your n-Café can do for you. Ask what you can do for your n-Café!

Remember the basic idea. We start with the rotational symmetry group of the icosahedron and take its double cover, getting a 120-element group \Gamma called the binary icosahedral group. Since this is naturally a subgroup of \mathrm{SU}(2) it acts on \mathbb{C}^2, and we can form the quotient space

S = \mathbb{C}^2/\Gamma

This is a smooth manifold except at the origin — by which I mean the point coming from 0 \in \mathbb{C}^2. Luckily we can ‘resolve’ this singularity! This implies that we can find a smooth manifold \widetilde{S} and a smooth map

\pi \colon \widetilde{S} \to S

that’s one-to-one and onto except at the origin. There may be various ways to do this, but there’s one best way, the ‘minimal’ resolution, and that’s what I’ll be talking about.

The origin is where all the fun happens. The map \pi sends 8 spheres to the origin in \mathbb{C}^2/\Gamma, one for each dot in the \mathrm{E}_8 Dynkin diagram:

Two of these spheres intersect in a point if their dots are connected by an edge; otherwise they’re disjoint.

This is wonderful! So, the question is just how do we really see it? For starters, how do we get our hands on this manifold \widetilde{S} and this map \pi \colon \widetilde{S} \to S?

For this we need some algebraic geometry. Indeed, the whole subject of ‘resolving singularities’ is part of algebraic geometry! However, since I still remember my ignorant youth, I want to avoid flinging around the vocabulary of this subject until we actually need it. So, experts will have to pardon my baby-talk. Nonexperts can repay me in cash, chocolate, bitcoins or beer.

What’s \widetilde{S} like? First I’ll come out and tell you, and then I’ll start explaining what the heck I just said.

Theorem. \widetilde{S} is the space of all \Gamma-invariant ideals I \subseteq \mathbb{C}[x,y] such that \mathbb{C}[x,y]/I is isomorphic, as a representation of \Gamma, to the regular representation of \Gamma.

If you want a proof, this is Corollary 12.8 in Kirillov’s Quiver Representations and Quiver Varieties. It’s on page 245, so you’ll need to start by reading lots of other stuff. It’s a great book! But it’s not completely self-contained: for example, right before Corollary 12.8 he brings in a crucial fact without proof: “it can be shown that in dimension 2, if a crepant resolution exists, it is minimal”.

I will not try to prove this theorem; instead I will start explaining what it means.

Suppose you have a bunch of points <semantics>p 1,,p n 2<annotation encoding="application/x-tex">p_1, \dots, p_n \in \mathbb{C}^2</annotation></semantics>. We can look at all the polynomials on <semantics> 2<annotation encoding="application/x-tex">\mathbb{C}^2</annotation></semantics> that vanish at these points. What is this collection of polynomials like?

Let’s use <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> and <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics> as names for the standard coordinates on <semantics> 2<annotation encoding="application/x-tex">\mathbb{C}^2</annotation></semantics>, so polynomials on <semantics> 2<annotation encoding="application/x-tex">\mathbb{C}^2</annotation></semantics> are just polynomials in these variables. Let’s call the ring of all such polynomials <semantics>[x,y]<annotation encoding="application/x-tex">\mathbb{C}[x,y]</annotation></semantics>. And let’s use <semantics>I<annotation encoding="application/x-tex">I</annotation></semantics> to stand for the collection of such polynomials that vanish at our points <semantics>p 1,,p n<annotation encoding="application/x-tex">p_1, \dots, p_n</annotation></semantics>.

Here are two obvious facts about <semantics>I<annotation encoding="application/x-tex">I</annotation></semantics>:

A. If <semantics>fI<annotation encoding="application/x-tex">f \in I</annotation></semantics> and <semantics>gI<annotation encoding="application/x-tex">g \in I</annotation></semantics> then <semantics>f+gI<annotation encoding="application/x-tex">f + g \in I</annotation></semantics>.

B. If <semantics>fI<annotation encoding="application/x-tex">f \in I</annotation></semantics> and <semantics>g[x,y]<annotation encoding="application/x-tex">g \in \mathbb{C}[x,y]</annotation></semantics> then <semantics>fgI<annotation encoding="application/x-tex">f g \in I</annotation></semantics>.

We summarize these by saying <semantics>I<annotation encoding="application/x-tex">I</annotation></semantics> is an ideal, and this is why we called it <semantics>I<annotation encoding="application/x-tex">I</annotation></semantics>. (So clever!)

Here’s a slightly less obvious fact about <semantics>I<annotation encoding="application/x-tex">I</annotation></semantics>:

C. If the points <semantics>p 1,,p n<annotation encoding="application/x-tex">p_1, \dots, p_n</annotation></semantics> are all distinct, then <semantics>[x,y]/I<annotation encoding="application/x-tex">\mathbb{C}[x,y]/I</annotation></semantics> has dimension <semantics>n<annotation encoding="application/x-tex">n</annotation></semantics>.

The point is that the value of a function <semantics>f[x,y]<annotation encoding="application/x-tex">f \in \mathbb{C}[x,y]</annotation></semantics> at a point <semantics>p i<annotation encoding="application/x-tex">p_i</annotation></semantics> doesn’t change if we add an element of <semantics>I<annotation encoding="application/x-tex">I</annotation></semantics> to <semantics>f<annotation encoding="application/x-tex">f</annotation></semantics>, so this value defines a linear functional on <semantics>[x,y]/I<annotation encoding="application/x-tex">\mathbb{C}[x,y]/I</annotation></semantics> . Guys like this form a basis of linear functionals on <semantics>[x,y]/I<annotation encoding="application/x-tex">\mathbb{C}[x,y]/I</annotation></semantics>, so it’s <semantics>n<annotation encoding="application/x-tex">n</annotation></semantics>-dimensional.

All this should make you interested in the set of ideals I with \mathrm{dim}(\mathbb{C}[x,y]/I) = n. This set is called the Hilbert scheme \mathrm{Hilb}^n(\mathbb{C}^2).

Why is it called a scheme? Well, Hilbert had a bunch of crazy schemes and this was one. Just kidding: actually Hilbert schemes were invented by Grothendieck in 1961. I don’t know why he named them after Hilbert. The kind of Hilbert scheme I’m using is a very basic one, more precisely called the ‘punctual’ Hilbert scheme.

The Hilbert scheme \mathrm{Hilb}^n(\mathbb{C}^2) is a whole lot like the set of unordered n-tuples of distinct points in \mathbb{C}^2. Indeed, we’ve seen that every such n-tuple gives a point in the Hilbert scheme. But there are also other points in the Hilbert scheme! And this is where the fun starts!

Imagine n particles moving in \mathbb{C}^2, with their motion described by polynomial functions of time. As long as these particles don’t collide, they define a curve in the Hilbert scheme. But it still works when they collide! When they collide, this curve will hit a point in the Hilbert scheme that doesn’t come from an unordered n-tuple of distinct points in \mathbb{C}^2. This point describes a ‘type of collision’.

More precisely: n-tuples of distinct points in \mathbb{C}^2 give an open dense set in the Hilbert scheme, but there are other points in the Hilbert scheme which can be reached as limits of those in this open dense set! The topology here is very subtle, so let’s look at an example.

Let’s look at the Hilbert scheme \mathrm{Hilb}^2(\mathbb{C}^2). Given two distinct points p_1, p_2 \in \mathbb{C}^2, we get an ideal

\{ f \in \mathbb{C}[x,y] \, : \; f(p_1) = f(p_2) = 0 \}

This ideal is a point in our Hilbert scheme, since \mathrm{dim}(\mathbb{C}[x,y]/I) = 2.

But there are other points in our Hilbert scheme! For example, if we take any point p \in \mathbb{C}^2 and any vector v \in \mathbb{C}^2, there’s an ideal consisting of polynomials that vanish at p and whose directional derivative in the v direction also vanishes at p:

I = \{ f \in \mathbb{C}[x,y] \, : \; f(p) = \lim_{t \to 0} \frac{f(p+t v) - f(p)}{t} = 0 \}

It’s pretty easy to check that this is an ideal and that \mathrm{dim}(\mathbb{C}[x,y]/I) = 2. We can think of this ideal as describing ‘two particles in \mathbb{C}^2 that have collided at p with relative velocity some multiple of v’.

For example you could have one particle sitting at p while another particle smacks into it while moving with velocity v; as they collide the corresponding curve in the Hilbert scheme would hit I.

This would also work if the velocity were any multiple of v, since we also have

I = \{ f \in \mathbb{C}[x,y] \, : \; f(p) = \lim_{t \to 0} \frac{f(p+ c t v) - f(p)}{t} = 0 \}

for any constant c \ne 0. And note, this constant can be complex. I’m trying to appeal to your inner physicist, but we’re really doing algebraic geometry over the complex numbers, so we can do weird stuff like multiply velocities by complex numbers.

Or, both particles could be moving and collide at p while their relative velocity was some complex multiple of v. As they collide, the corresponding point in the Hilbert scheme would still hit I.

But here’s the cool part: such ‘2-particle collisions with specified position and relative velocity’ give all the points in the Hilbert scheme \mathrm{Hilb}^2(\mathbb{C}^2), except of course for those points coming from 2 particles with distinct positions.
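
Here is a small SymPy sketch of the simplest such collision (my own illustration, with p = (0,0) and v = (1,0) chosen for convenience, and assuming SymPy’s groebner is happy carrying the parameter t along in the coefficients, as it normally is). The ideal of the two distinct points (0,0) and (t,0) is generated by y and x(x - t); letting t go to 0 lands exactly on the ideal generated by y and x^2, which is the collision ideal above, and the quotient keeps the basis {1, x}, hence dimension 2, throughout:

    # Ideal of two points on the x-axis colliding at the origin (illustration).
    from sympy import symbols, groebner

    x, y, t = symbols('x y t')

    # Ideal of the two distinct points (0,0) and (t,0), for t != 0:
    moving = groebner([y, x*(x - t)], x, y, order='grevlex')
    # Ideal of polynomials vanishing at (0,0) together with their x-derivative:
    collided = groebner([y, x**2], x, y, order='grevlex')

    print(moving.exprs)     # basis containing x**2 - t*x and y
    print(collided.exprs)   # basis containing x**2 and y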

What happens when we go to the next Hilbert scheme, \mathrm{Hilb}^3(\mathbb{C}^2)? This Hilbert scheme has an open dense set corresponding to triples of particles with distinct positions. It has other points coming from situations where two particles collide with some specified position and relative velocity while a third ‘bystander’ particle sits somewhere else. But it also has points coming from triple collisions. And these are more fancy! Not only velocities but accelerations play a role!

I could delve into this further, but for now I’ll just point you here:

• John Baez, The Hilbert scheme for 3 points on a surface, MathOverflow, June 7, 2017.

The main thing to keep in mind is this. As n increases, there are more and more ways we can dream up ideals I with \mathrm{dim}(\mathbb{C}[x,y]/I) = n. But all these ideals consist of functions that vanish at n or fewer points and also obey other equations saying that various linear combinations of their first, second, and higher derivatives vanish. We can think of these ideals as ways for n particles to collide, with conditions on their positions, velocities, accelerations, etc. The total number of conditions needs to be n.

Now let’s revisit that description of the wonderful space we’re seeking to understand, \widetilde{S}:

Theorem. \widetilde{S} is the space of all \Gamma-invariant ideals I \subseteq \mathbb{C}[x,y] such that \mathbb{C}[x,y]/I is isomorphic, as a representation of \Gamma, to the regular representation of \Gamma.

Since \Gamma has 120 elements, its regular representation — the obvious representation of this group on the space of complex functions on this group — is 120-dimensional. So, points in \widetilde{S} are ideals I with \mathrm{dim}(\mathbb{C}[x,y]/I) = 120. So, they’re points in the Hilbert scheme \mathrm{Hilb}^{120}(\mathbb{C}^2).

But they’re not just any old points in this Hilbert scheme! The binary icosahedral group \Gamma acts on \mathbb{C}^2 and thus anything associated with it. In particular, it acts on the Hilbert scheme \mathrm{Hilb}^{120}(\mathbb{C}^2). A point in this Hilbert scheme can lie in \widetilde{S} only if it’s invariant under the action of \Gamma. And given this, it’s in \widetilde{S} if and only if \mathbb{C}[x,y]/I is isomorphic to the regular representation of \Gamma.

Given all this, there’s an easy way to get your hands on a point I \in \widetilde{S}. Just take any nonzero element of \mathbb{C}^2 and act on it by \Gamma. You’ll get 120 distinct points in \mathbb{C}^2 — I promise. Do you see why? Then let I be the set of polynomials that vanish on all these points.

If you don’t see why this works, please ask me.

In fact, we saw last time that your 120 points will be the vertices of a 600-cell centered at the origin of \mathbb{C}^2:

By this construction we get enough points to form an open dense subset of \widetilde{S}. These are the points that aren’t mapped to the origin by

\pi \colon \widetilde{S} \to S

Alas, it’s the other points in \widetilde{S} that I’m really interested in. As I hope you see, these are certain ‘limits’ of 600-cells that have ‘shrunk to the origin’… or in other words, highly symmetrical ways for 120 points in \mathbb{C}^2 to collide at the origin, with some highly symmetrical conditions on their velocities, accelerations, etc.

That’s what I need to understand.

by john (baez@math.ucr.edu) at July 03, 2017 10:50 PM

Symmetrybreaking - Fermilab/SLAC

When was the Higgs actually discovered?

The announcement on July 4 was just one part of the story. Take a peek behind the scenes of the discovery of the Higgs boson.

Photo from the back of a crowded conference room on the day of the Higgs announcement

Joe Incandela sat in a conference room at CERN and watched with his arms folded as his colleagues presented the latest results on the hunt for the Higgs boson. It was December 2011, and they had begun to see the very thing they were looking for—an unexplained bump emerging from the data.

“I was far from convinced,” says Incandela, a professor at the University of California, Santa Barbara and the former spokesperson of the CMS experiment at the Large Hadron Collider.

For decades, scientists had searched for the elusive Higgs boson: the holy grail of modern physics and the only piece of the robust and time-tested Standard Model that had yet to be found.

The construction of the LHC was motivated in large part by the absence of this fundamental component from our picture of the universe. Without it, physicists couldn’t explain the origin of mass or the divergent strengths of the fundamental forces.

“Without the Higgs boson, the Standard Model falls apart,” says Matthew McCullough, a theorist at CERN. “The Standard Model was fitting the experimental data so well that most of the theory community was convinced that something playing the role of Higgs boson would be discovered by the LHC.”

The Standard Model predicted the existence of the Higgs but did not predict what the particle’s mass would be. Over the years, scientists had searched for it across a wide range of possible masses. By 2011, there was only a tiny region left to search; everything else had been excluded by previous generations of experimentation. If the predicted Higgs boson were anywhere, it had to be there, right where the LHC scientists were looking.

But Incandela says he was skeptical about these preliminary results. He knew that the Higgs could manifest itself in many different forms, and this particular channel was extremely delicate.

“A tiny mistake or an unfortunate distribution of the background events could make it look like a new particle is emerging from the data when in reality, it’s nothing,” Incandela says.

A common mantra in science is that extraordinary claims require extraordinary evidence. The challenge isn’t just collecting the data and performing the analysis; it’s deciding if every part of the analysis is trustworthy. If the analysis is bulletproof, the next question is whether the evidence is substantial enough to claim a discovery. And if a discovery can be claimed, the final question is what, exactly, has been discovered? Scientists can have complete confidence in their results but remain uncertain about how to interpret them.

In physics, it’s easy to say what something is not but nearly impossible to say what it is. A single piece of corroborated, contradictory evidence can discredit an entire theory and destroy an organization’s credibility.

“We’ll never be able to definitively say if something is exactly what we think it is, because there’s always something we don’t know and cannot test or measure,” Incandela says. “There could always be a very subtle new property or characteristic found in a high-precision experiment that revolutionizes our understanding.”

With all of that in mind, Incandela and his team made a decision: From that point on, everyone would refine their scientific analyses using special data samples and a patch of fake data generated by computer simulations covering the interesting areas of their analyses. Then, when they were sure about their methodology and had enough data to make a significant observation, they would remove the patch and use their algorithms on all the real data in a process called unblinding.

“This is a nice way of providing an unbiased view of the data and helps us build confidence in any unexpected signals that may be appearing, particularly if the same unexpected signal is seen in different types of analyses,” Incandela says.
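
For readers curious what that looks like in practice, here is a toy sketch of the blinding idea (invented numbers and a deliberately simplified setup, not CMS software): events in the interesting mass window are swapped out for a patch of simulated background, the analysis is tuned on the blinded sample, and the real window is only opened once the procedure is frozen.

    # Toy sketch of blinding: hide the real data in the signal window behind a
    # patch of simulated background until the analysis is frozen.
    import numpy as np

    rng = np.random.default_rng(1)

    def background_model(n):
        """Stand-in for the simulated background shape (a falling spectrum)."""
        return 100.0 + rng.exponential(scale=60.0, size=n)

    data = background_model(20000)          # stand-in for the recorded data
    window = (120.0, 130.0)                 # mass region kept blind

    in_window = (data > window[0]) & (data < window[1])

    # Draw simulated events until the window can be filled, then swap them in.
    patch = np.empty(0)
    while patch.size < in_window.sum():
        draw = background_model(20000)
        patch = np.concatenate([patch, draw[(draw > window[0]) & (draw < window[1])]])

    blinded = data.copy()
    blinded[in_window] = patch[:in_window.sum()]

    # ... selections, fits and cross-checks are developed on `blinded` ...
    unblinded = data                        # looked at only after the methods are frozen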

A few weeks before July 4, all the different analysis groups met with Incandela to present a first look at their unblinded results. This time the bump was very significant and showing up at the same mass in two independent channels.

“At that point, I knew we had something,” Incandela says. “That afternoon we presented the results to the rest of the collaboration. The next few weeks were among the most intense I have ever experienced.”

Meanwhile, the other general-purpose experiment at the LHC, ATLAS, was hot on the trail of the same mysterious bump.

Andrew Hard was a graduate student at The University of Wisconsin, Madison working on the ATLAS Higgs analysis with his PhD thesis advisor Sau Lan Wu.

“Originally, my plan had been to return home to Tennessee and visit my parents over the winter holidays,” Hard says. “Instead, I came to CERN every day for five months—even on Christmas. There were a few days when I didn't see anyone else at CERN. One time I thought some colleagues had come into the office, but it turned out to be two stray cats fighting in the corridor.”

Hard was responsible for writing the code that selected and calibrated the particles of light the ATLAS detector recorded during the LHC’s high-energy collisions. According to predictions from the Standard Model, the Higgs can transform into two of these particles when it decays, so scientists on both experiments knew that this project would be key to the discovery process.

“We all worked harder than we thought we could,” Hard says. “People collaborated well and everyone was excited about what would come next. All in all, it was the most exciting time in my career. I think the best qualities of the community came out during the discovery.”

At the end of June, Hard and his colleagues synthesized all of their work into a single analysis to see what it revealed. And there it was again—that same bump, this time surpassing the statistical threshold the particle physics community generally requires to claim a discovery.
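
The threshold in question is the conventional five-sigma standard: the chance that background alone fluctuates up to (or beyond) the observed yield must be below roughly 3 in 10 million. Here is a toy version of that arithmetic for a single counting experiment (illustrative numbers only; the real analyses rely on full likelihood fits across many channels):

    # Toy significance calculation for a counting experiment (illustrative only).
    from scipy.stats import norm, poisson

    expected_background = 100.0   # assumed background yield in the search window
    observed = 160                # assumed observed event count

    p_value = poisson.sf(observed - 1, expected_background)   # P(N >= observed | background only)
    significance = norm.isf(p_value)                           # one-sided Gaussian sigmas

    print(f"p-value = {p_value:.2e}, significance = {significance:.1f} sigma")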

“Soon everyone in the group started running into the office to see the number for the first time,” Hard says. “The Wisconsin group took a bunch of photos with the discovery plot.”

Hard had no idea whether CMS scientists were looking at the same thing. At this point, the experiments were keeping their latest results secret—with the exception of Incandela, Fabiola Gianotti (then ATLAS spokesperson) and a handful of CERN’s senior management, who regularly met to discuss their progress and results.

“I told the collaboration that the most important thing was for each experiment to work independently and not worry about what the other experiment was seeing,” Incandela says. “I did not tell anyone what I knew about ATLAS. It was not relevant to the tasks at hand.”

Still, rumors were circulating around theoretical physics groups both at CERN and abroad. McCullough, then a postdoc at the Massachusetts Institute of Technology, was avidly following the progress of the two experiments.

“We had an update in December 2011 and then another one a few months later in March, so we knew that both experiments were seeing something,” he says. “When this big excess showed up in July 2012, we were all convinced that it was the guy responsible for curing the ails of the Standard Model, but not necessarily precisely that guy predicted by the Standard Model. It could have properties mostly consistent with the Higgs boson but still be not absolutely identical.”

The week before announcing what they’d found, Hard’s analysis group had daily meetings to discuss their results. He says they were excited but also nervous and stressed: Extraordinary claims require extraordinary confidence.

“One of our meetings lasted over 10 hours, not including the dinner break halfway through,” Hard says. “I remember getting in a heated exchange with a colleague who accused me of having a bug in my code.”

After both groups had independently and intensely scrutinized their Higgs-like bump through a series of checks, cross-checks and internal reviews, Incandela and Gianotti decided it was time to tell the world.

“Some people asked me if I was sure we should say something,” Incandela says. “I remember saying that this train has left the station. This is what we’ve been working for, and we need to stand behind our results.”

On July 4, 2012, Incandela and Gianotti stood before an expectant crowd and, one at a time, announced that decades of searching and generations of experiments had finally culminated in the discovery of a particle “compatible with the Higgs boson.”

Science journalists rejoiced and rushed to publish their stories. But was this new particle the long-awaited Higgs boson? Or not?

Discoveries in science rarely happen all at once; rather, they build slowly over time. And even when the evidence overwhelmingly points in a clear direction, scientists will rarely speak with superlatives or make definitive claims.

“There is always a risk of overlooking the details,” Incandela says, “and major revolutions in science are often born in the details.”

Immediately after the July 4 announcement, theorists from around the world issued a flurry of theoretical papers presenting alternative explanations and possible tests to see if this excess really was the Higgs boson predicted by the Standard Model or just something similar.

“A lot of theory papers explored exotic ideas,” McCullough says. “It’s all part of the exercise. These papers act as a straw man so that we can see just how well we understand the particle and what additional tests need to be run.”

For the next several months, scientists continued to examine the particle and its properties. The more data they collected and the more tests they ran, the more the discovery looked like the long-awaited Higgs boson. By March, both experiments had twice as much data and twice as much evidence.

“Amongst ourselves, we called it the Higgs,” Incandela says, “but to the public, we were more careful.”

It was increasingly difficult to keep qualifying their statements about it, though. “It was just getting too complicated,” Incandela says. “We didn’t want to always be in this position where we had to talk about this particle like we didn’t know what it was.”

On March 14, 2013—nine months and 10 days after the original announcement—CERN issued a press release quoting Incandela as saying, “to me, it is clear that we are dealing with a Higgs boson, though we still have a long way to go to know what kind of Higgs boson it is.”

To this day, scientists are open to the possibility that the Higgs they found is not exactly the Higgs they expected.

“We are definitely, 100 percent sure that this is a Standard-Model-like Higgs boson,” Incandela says. “But we’re hoping that there’s a chink in that armor somewhere. The Higgs is a sign post, and we’re hoping for a slight discrepancy which will point us in the direction of new physics.”

by Sarah Charley at July 03, 2017 09:50 PM

The n-Category Cafe

The Geometric McKay Correspondence (Part 1)

The ‘geometric McKay correspondence’, actually discovered by Patrick du Val in 1934, is a wonderful relation between the Platonic solids and the ADE Dynkin diagrams. In particular, it sets up a connection between two of my favorite things, the icosahedron:

and the \mathrm{E}_8 Dynkin diagram:

When I recently gave a talk on this topic, I realized I didn’t understand it as well as I’d like. Since then I’ve been making progress with the help of this book:

  • Alexander Kirillov Jr., Quiver Representations and Quiver Varieties, AMS, Providence, Rhode Island, 2016.

I now think I glimpse a way forward to a very concrete and vivid understanding of the relation between the icosahedron and E8. It’s really just a matter of taking the ideas in this book and working them out concretely in this case. But it takes some thought, at least for me. I’d like to enlist your help.

The rotational symmetry group of the icosahedron is a subgroup of \mathrm{SO}(3) with 60 elements, so its double cover up in \mathrm{SU}(2) has 120. This double cover is called the binary icosahedral group, but I’ll call it \Gamma for short.

This group \Gamma is the star of the show, the link between the icosahedron and E8. To visualize this group, it’s good to think of \mathrm{SU}(2) as the unit quaternions. This lets us think of the elements of \Gamma as 120 points in the unit sphere in 4 dimensions. They are in fact the vertices of a 4-dimensional regular polytope, which looks like this:

It’s called the 600-cell.
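
If you want to get your hands on \Gamma concretely, here is a rough Python sketch (my own, not from Kirillov’s book; the two generators below are one standard choice of unit icosians). It multiplies quaternions together until the set closes up, checks that there are 120 of them, and then checks that they give only 60 distinct rotations of 3d space, which is the double cover at work:

    # Generate the binary icosahedral group as unit quaternions and check that
    # it double-covers the 60-element icosahedral rotation group.
    import numpy as np

    def qmul(a, b):
        """Multiply quaternions written as (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def key(q, nd=9):
        return tuple(round(c, nd) for c in q)

    phi = (1 + 5**0.5) / 2
    gens = [(0.5, 0.5, 0.5, 0.5),                  # (1 + i + j + k)/2
            (phi/2, 0.5, 1/(2*phi), 0.0)]          # (phi + i + j/phi)/2

    group = {key(g): g for g in gens}
    frontier = list(gens)
    while frontier:
        new = []
        for a in frontier:
            for b in gens:
                c = qmul(a, b)
                if key(c) not in group:
                    group[key(c)] = c
                    new.append(c)
        frontier = new

    print(len(group))    # 120

    def rotation(q):
        """The rotation of R^3 represented by the unit quaternion q."""
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

    rotations = {tuple(np.round(rotation(q), 9).ravel()) for q in group.values()}
    print(len(rotations))   # 60, since q and -q give the same rotation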

Since \Gamma is a subgroup of \mathrm{SU}(2) it acts on \mathbb{C}^2, and we can form the quotient space

S = \mathbb{C}^2/\Gamma

This is a smooth manifold except at the origin—that is, the point coming from 0 \in \mathbb{C}^2. There’s a singularity at the origin, and this is where \mathrm{E}_8 is hiding! The reason is that there’s a smooth manifold \widetilde{S} and a map

\pi : \widetilde{S} \to S

that’s one-to-one and onto except at the origin. It maps 8 spheres to the origin! There’s one of these spheres for each dot here:

Two of these spheres intersect in a point if their dots are connected by an edge; otherwise they’re disjoint.

The challenge is to find a nice concrete description of \widetilde{S}, the map \pi : \widetilde{S} \to S, and these 8 spheres.

But first it’s good to get a mental image of S. Each point in this space is a \Gamma orbit in \mathbb{C}^2, meaning a set like this:

\{g x : \; g \in \Gamma \}

for some x \in \mathbb{C}^2. For x = 0 this set is a single point, and that’s what I’ve been calling the ‘origin’. In all other cases it’s 120 points, the vertices of a 600-cell in \mathbb{C}^2. This 600-cell is centered at the point 0 \in \mathbb{C}^2, but it can be big or small, depending on the magnitude of x.

So, as we take a journey starting at the origin in S, we see a point explode into a 600-cell, which grows and perhaps also rotates as we go. The origin, the singularity in S, is a bit like the Big Bang.

Unfortunately not every 600-cell centered at the origin is of the form I’ve shown:

\{g x : \; g \in \Gamma \}

It’s easiest to see this by thinking of points in 4d space as quaternions rather than elements of \mathbb{C}^2. Then the points g \in \Gamma are unit quaternions forming the vertices of a 600-cell, and multiplying g on the right by x dilates this 600-cell and also rotates it… but we don’t get arbitrary rotations this way. To get an arbitrarily rotated 600-cell we’d have to use both a left and right multiplication, and consider

\{x g y : \; g \in \Gamma \}

for a pair of quaternions x, y.

Luckily, there’s a simpler picture of the space S. It’s the space of all regular icosahedra centered at the origin in 3d space!

To see this, we start by switching to the quaternion description, which says

S = \mathbb{H}/\Gamma

Specifying a point x \in \mathbb{H} amounts to specifying the magnitude \|x\| together with x/\|x\|, which is a unit quaternion, or equivalently an element of \mathrm{SU}(2). So, specifying a point in

\{g x : \; g \in \Gamma \} \in \mathbb{H}/\Gamma

amounts to specifying the magnitude \|x\| together with a point in \mathrm{SU}(2)/\Gamma. But \mathrm{SU}(2) modulo the binary icosahedral group \Gamma is the same as \mathrm{SO}(3) modulo the icosahedral group (the rotational symmetry group of an icosahedron). Furthermore, \mathrm{SO}(3) modulo the icosahedral group is just the space of unit-sized icosahedra centered at the origin of \mathbb{R}^3.

So, specifying a point

\{g x : \; g \in \Gamma \} \in \mathbb{H}/\Gamma

amounts to specifying a nonnegative number \|x\| together with a unit-sized icosahedron centered at the origin of \mathbb{R}^3. But this is the same as specifying an icosahedron of arbitrary size centered at the origin of \mathbb{R}^3. There’s just one subtlety: we allow the size of this icosahedron to be zero, but then the way it’s rotated no longer matters.

So, S is the space of icosahedra centered at the origin, with the ‘icosahedron of zero size’ being a singularity in this space. When we pass to the smooth manifold \widetilde{S}, we replace this singularity with 8 spheres, intersecting in a pattern described by the \mathrm{E}_8 Dynkin diagram.

Points on these spheres are limiting cases of icosahedra centered at the origin. We can approach these points by letting an icosahedron centered at the origin shrink to zero size in a clever way, perhaps spinning about wildly as it does.

I don’t understand this last paragraph nearly as well as I’d like! I’m quite sure it’s true, and I know a lot of relevant information, but I don’t see it. There should be a vivid picture of how this works, not just an abstract argument. Next time I’ll start trying to assemble the material that I think needs to go into building this vivid picture.

by john (baez@math.ucr.edu) at July 03, 2017 05:26 PM

Tommaso Dorigo - Scientificblogging

EPS 2017: 1000 Physicists In Venice
The 2017 edition of the European Physical Society conference will take place in the Lido of Venice this week, from July 5th to 12th. For the first time in many years - 30, as of now - a big international conference in HEP is being organized in Italy, a fact I found surprising at first. When I first learned of it, the gap was 26 years, and I was on a local organizing committee that tried to propose another conference in the same location. Although excellent, our proposal was turned down, and from that episode I learned that I should not be too surprised by the hiatus.


by Tommaso Dorigo at July 03, 2017 01:57 PM

July 02, 2017

John Baez - Azimuth

The Geometric McKay Correspondence (Part 2)

Last time I sketched how the E_8 Dynkin diagram arises from the icosahedron. This time I’ll fill in some details. I won’t fill in all the details, because I don’t know how! Working them out is the goal of this series, and I’d like to enlist your help.

(In fact, I’m running this series of posts both here and at the n-Category Café. So far I’m getting many more comments over there. So, to keep the conversation in one place, I’ll disable comments here and urge you to comment over there.)

Remember the basic idea. We start with the rotational symmetry group of the icosahedron and take its double cover, getting a 120-element group \Gamma called the binary icosahedral group. Since this is naturally a subgroup of \mathrm{SU}(2) it acts on \mathbb{C}^2, and we can form the quotient space

S = \mathbb{C}^2/\Gamma

This is a smooth manifold except at the origin—by which I mean the point coming from 0 \in \mathbb{C}^2. Luckily we can ‘resolve’ this singularity! This implies that we can find a smooth manifold \widetilde{S} and a smooth map

\pi \colon \widetilde{S} \to S

that’s one-to-one and onto except at the origin. There may be various ways to do this, but there’s one best way, the ‘minimal’ resolution, and that’s what I’ll be talking about.

The origin is where all the fun happens. The map \pi sends 8 spheres to the origin in \mathbb{C}^2/\Gamma, one for each dot in the \mathrm{E}_8 Dynkin diagram:

Two of these spheres intersect in a point if their dots are connected by an edge; otherwise they’re disjoint.

This is wonderful! So, the question is just how do we really see it? For starters, how do we get our hands on this manifold \widetilde{S} and this map \pi \colon \widetilde{S} \to S?

For this we need some algebraic geometry. Indeed, the whole subject of ‘resolving singularities’ is part of algebraic geometry! However, since I still remember my ignorant youth, I want to avoid flinging around the vocabulary of this subject until we actually need it. So, experts will have to pardon my baby-talk. Nonexperts can repay me in cash, chocolate, bitcoins or beer.

What’s \widetilde{S} like? First I’ll come out and tell you, and then I’ll start explaining what the heck I just said.

Theorem. \widetilde{S} is the space of all \Gamma-invariant ideals I \subseteq \mathbb{C}[x,y] such that \mathbb{C}[x,y]/I is isomorphic, as a representation of \Gamma, to the regular representation of \Gamma.

If you want a proof, this is Corollary 12.8 in Kirillov’s Quiver Representations and Quiver Varieties. It’s on page 245, so you’ll need to start by reading lots of other stuff. It’s a great book! But it’s not completely self-contained: for example, right before Corollary 12.8 he brings in a crucial fact without proof: “it can be shown that in dimension 2, if a crepant resolution exists, it is minimal”.

I will not try to prove the theorem; instead I will start explaining what it means.

Suppose you have a bunch of points p_1, \dots, p_n \in \mathbb{C}^2. We can look at all the polynomials on \mathbb{C}^2 that vanish at these points. What is this collection of polynomials like?

Let’s use x and y as names for the standard coordinates on \mathbb{C}^2, so polynomials on \mathbb{C}^2 are just polynomials in these variables. Let’s call the ring of all such polynomials \mathbb{C}[x,y]. And let’s use I to stand for the collection of such polynomials that vanish at our points p_1, \dots, p_n.

Here are two obvious facts about I:

A. If f \in I and g \in I then f + g \in I.

B. If f \in I and g \in \mathbb{C}[x,y] then fg \in I.

We summarize these by saying I is an ideal, and this is why we called it I. (So clever!)

Here’s a slightly less obvious fact about I:

C. If the points p_1, \dots, p_n are all distinct, then \mathbb{C}[x,y]/I has dimension n.

The point is that the value of a function f \in \mathbb{C}[x,y] at a point p_i doesn’t change if we add an element of I to f, so this value defines a linear functional on \mathbb{C}[x,y]/I. Guys like this form a basis of linear functionals on \mathbb{C}[x,y]/I, so it’s n-dimensional.

All this should make you interested in the set of ideals I with \mathrm{dim}(\mathbb{C}[x,y]/I) = n. This set is called the Hilbert scheme \mathrm{Hilb}^n(\mathbb{C}^2).

Why is it called a scheme? Well, Hilbert had a bunch of crazy schemes and this was one. Just kidding: actually Hilbert schemes were invented by Grothendieck in 1961. I don’t know why he named them after Hilbert. The kind of Hilbert scheme I’m using is a very basic one, more precisely called the ‘punctual’ Hilbert scheme.

The Hilbert scheme \mathrm{Hilb}^n(\mathbb{C}^2) is a whole lot like the set of unordered n-tuples of distinct points in \mathbb{C}^2. Indeed, we’ve seen that every such n-tuple gives a point in the Hilbert scheme. But there are also other points in the Hilbert scheme! And this is where the fun starts!

Imagine n particles moving in \mathbb{C}^2, with their motion described by polynomial functions of time. As long as these particles don’t collide, they define a curve in the Hilbert scheme. But it still works when they collide! When they collide, this curve will hit a point in the Hilbert scheme that doesn’t come from an unordered n-tuple of distinct points in \mathbb{C}^2. This point describes a ‘type of collision’.

More precisely: n-tuples of distinct points in \mathbb{C}^2 give an open dense set in the Hilbert scheme, but there are other points in the Hilbert scheme which can be reached as limits of those in this open dense set! The topology here is very subtle, so let’s look at an example.

Let’s look at the Hilbert scheme \mathrm{Hilb}^2(\mathbb{C}^2). Given two distinct points p_1, p_2 \in \mathbb{C}^2, we get an ideal

\{ f \in \mathbb{C}[x,y] \, : \; f(p_1) = f(p_2) = 0 \}

This ideal is a point in our Hilbert scheme, since \mathrm{dim}(\mathbb{C}[x,y]/I) = 2 .

But there are other points in our Hilbert scheme! For example, if we take any point p \in \mathbb{C}^2 and any vector v \in \mathbb{C}^2, there’s an ideal consisting of polynomials that vanish at p and whose directional derivative in the v direction also vanishes at p:

\displaystyle{ I = \{ f \in \mathbb{C}[x,y] \, : \; f(p) = \lim_{t \to 0} \frac{f(p+t v) - f(p)}{t} = 0 \} }

It’s pretty easy to check that this is an ideal and that \mathrm{dim}(\mathbb{C}[x,y]/I) = 2 . We can think of this ideal as describing two particles in \mathbb{C}^2 that have collided at p with relative velocity some multiple of v.

For example you could have one particle sitting at p while another particle smacks into it while moving with velocity v; as they collide the corresponding curve in the Hilbert scheme would hit I.

This would also work if the velocity were any multiple of v, since we also have

\displaystyle{ I = \{ f \in \mathbb{C}[x,y] \, : \; f(p) = \lim_{t \to 0} \frac{f(p+ c t v) - f(p)}{t} = 0 \} }

for any constant c \ne 0. And note, this constant can be complex. I’m trying to appeal to your inner physicist, but we’re really doing algebraic geometry over the complex numbers, so we can do weird stuff like multiply velocities by complex numbers.

Or, both particles could be moving and collide at p while their relative velocity was some complex multiple of v. As they collide, the corresponding point in the Hilbert scheme would still hit I.

But here’s the cool part: such ‘2-particle collisions with specified position and relative velocity’ give all the points in the Hilbert scheme \mathrm{Hilb}^2(\mathbb{C}^2), except of course for those points coming from 2 particles with distinct positions.

What happens when we go to the next Hilbert scheme, \mathrm{Hilb}^3(\mathbb{C}^2)? This Hilbert scheme has an open dense set corresponding to triples of particles with distinct positions. It has other points coming from situations where two particles collide with some specified position and relative velocity while a third ‘bystander’ particle sits somewhere else. But it also has points coming from triple collisions. And these are more fancy! Not only velocities but accelerations play a role!

I could delve into this further, but for now I’ll just point you here:

• John Baez, The Hilbert scheme for 3 points on a surface, MathOverflow, June 7, 2017.

The main thing to keep in mind is this. As n increases, there are more and more ways we can dream up ideals I with \mathrm{dim}(\mathbb{C}[x,y]/I) = n. But all these ideals consist of functions that vanish at n or fewer points and also obey other equations saying that various linear combinations of their first, second, and higher derivatives vanish. We can think of these ideals as ways for n particles to collide, with conditions on their positions, velocities, accelerations, etc. The total number of conditions needs to be n.

Now let’s revisit that description of the wonderful space we’re seeking to understand, \widetilde{S}:

Theorem. \widetilde{S} is the space of all \Gamma-invariant ideals I \subseteq \mathbb{C}[x,y] such that \mathbb{C}[x,y]/I is isomorphic, as a representation of \Gamma, to the regular representation of \Gamma.

Since \Gamma has 120 elements, its regular representation—the obvious representation of this group on the space of complex functions on this group—is 120-dimensional. So, points in \widetilde{S} are ideals I with \mathrm{dim}(\mathbb{C}[x,y]/I) = 120 . So, they’re points in the Hilbert scheme \mathrm{Hilb}^{120}(\mathbb{C}^2).

But they’re not just any old points in this Hilbert scheme! The binary icosahedral group \Gamma acts on \mathbb{C}^2 and thus anything associated with it. In particular, it acts on the Hilbert scheme \mathrm{Hilb}^{120}(\mathbb{C}^2). A point in this Hilbert scheme can lie in \widetilde{S} only if it’s invariant under the action of \Gamma. And given this, it’s in \widetilde{S} if and only if \mathbb{C}[x,y]/I is isomorphic to the regular representation of \Gamma.

Given all this, there’s an easy way to get your hands on a point I \in \widetilde{S}. Just take any nonzero element of \mathbb{C}^2 and act on it by \Gamma. You’ll get 120 distinct points in \mathbb{C}^2 — I promise. Do you see why? Then let I be the set of polynomials that vanish on all these points.

If you don’t see why this works, please ask me.
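
In case you’d like to see those 120 points explicitly, here is a rough Python sketch (my own illustration; the starting quaternion x below is an arbitrary choice). It lists the 120 unit icosians, which are the elements of \Gamma viewed as unit quaternions, left-multiplies a generic x by each of them, and checks that the 120 images really are distinct:

    # The 120 unit icosians acting on a generic quaternion x give 120 distinct
    # points, the common zero set of an ideal I in Hilb^120(C^2).
    from itertools import permutations, product

    def qmul(a, b):
        """Multiply quaternions written as (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def is_even(perm):
        inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
        return inversions % 2 == 0

    phi = (1 + 5**0.5) / 2
    gamma = set()

    # 8 elements: +-1, +-i, +-j, +-k
    for i in range(4):
        for s in (1.0, -1.0):
            q = [0.0, 0.0, 0.0, 0.0]
            q[i] = s
            gamma.add(tuple(q))

    # 16 elements: (+-1 +- i +- j +- k)/2
    for signs in product((0.5, -0.5), repeat=4):
        gamma.add(signs)

    # 96 elements: even permutations of the coordinates of (phi, 1, 1/phi, 0)/2, any signs
    base = (phi/2, 0.5, 1/(2*phi), 0.0)
    for perm in permutations(range(4)):
        if is_even(perm):
            for signs in product((1.0, -1.0), repeat=4):
                gamma.add(tuple(signs[i] * base[perm[i]] for i in range(4)))

    print(len(gamma))   # 120

    x = (0.3, 1.1, -0.7, 0.2)    # any nonzero quaternion, i.e. a nonzero point of C^2
    orbit = {tuple(round(c, 9) for c in qmul(g, x)) for g in gamma}
    print(len(orbit))   # 120 distinct points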

In fact, we saw last time that your 120 points will be the vertices of a 600-cell centered at the origin of \mathbb{C}^2:

By this construction we get enough points to form an open dense subset of \widetilde{S}. These are the points that aren’t mapped to the origin by

\pi \colon \widetilde{S} \to S

Alas, it’s the other points in \widetilde{S} that I’m really interested in. As I hope you see, these are certain ‘limits’ of 600-cells that have ‘shrunk to the origin’… or in other words, highly symmetrical ways for 120 points in \mathbb{C}^2 to collide at the origin, with some highly symmetrical conditions on their velocities, accelerations, etc.

That’s what I need to understand.


by John Baez at July 02, 2017 09:54 PM

June 30, 2017

Symmetrybreaking - Fermilab/SLAC

What’s really happening during an LHC collision?

It’s less of a collision and more of a symphony.

Illustration of a particle collision inside the Large Hadron Collider

The Large Hadron Collider is definitely large. With a 17-mile circumference, it is the biggest collider on the planet. But the latter part of its name is a little misleading. That’s because what collides in the LHC are the tiny pieces inside the hadrons, not the hadrons themselves.

Hadrons are composite particles made up of quarks and gluons. The gluons carry the strong force, which enables the quarks to stick together and binds them into a single particle. The main fodder for the LHC are hadrons called protons. Protons are made up of three quarks and an indefinite number of gluons. (Protons in turn make up atoms, which are the building blocks of everything around us.)

If a proton were enlarged to the size of a basketball, it would look empty. Just like atoms, protons are mostly empty space. The individual quarks and gluons inside are known to be extremely small, less than 1/10,000th the size of the entire proton.

“The inside of a proton would look like the atmosphere around you,” says Richard Ruiz, a theorist at Durham University. “It’s a mixture of empty space and microscopic particles that, for all intents and purposes, have no physical volume.

“But if you put those particles inside a balloon, you’ll see the balloon expand. Even though the internal particles are microscopic, they interact with each other and exert a force on their surroundings, inevitably producing something which does have an observable volume.”

So how do you collide two objects that are effectively empty space? You can’t. But luckily, you don’t need a classical collision to unleash a particle’s full potential.

In particle physics, the term “collide” can mean that two protons glide through each other, and their fundamental components pass so close together that they can talk to each other. If their voices are loud enough and resonate in just the right way, they can pluck deep hidden fields that will sing their own tune in response—by producing new particles.

“It’s a lot like music,” Ruiz says. “The entire universe is a symphony of complex harmonies which call and respond to each other. We can easily produce the mid-range tones, which would be like photons and muons, but some of these notes are so high that they require a huge amount of energy and very precise conditions to resonate.”

Space is permeated with dormant fields that can briefly pop a particle into existence when vibrated with the right amount of energy. These fields play important roles but almost always work behind the scenes. The Higgs field, for instance, is always interacting with other particles to help them gain mass. But a Higgs particle will only appear if the field is plucked with the right resonance.

When protons meet during an LHC collision, they break apart and the quarks and gluons come spilling out. They interact and pull more quarks and gluons out of space, eventually forming a shower of fast-moving hadrons.

This subatomic symbiosis is facilitated by the LHC and recorded by the experiment, but it’s not restricted to the laboratory environment; particles are also accelerated by cosmic sources such as supernova remnants. “This happens everywhere in the universe,” Ruiz says. “The LHC and its experiments are not special in that sense. They’re more like a big concert hall that provides the energy to pop open and record the symphony inside each proton.”

by Sarah Charley at June 30, 2017 04:40 PM

Lubos Motl - string vacua and pheno

Strings 2017 is over, next meeting in Okinawa
The annual Strings 2017 conference (videos) is over. The Friday afternoon was dedicated to an optional visit of the participants to the actual capital of Israel, i.e. Jerusalem. Well, you know, I don't want to be controversial. Instead, I am being loyal. Last month, the Czech Parliament recognized Jerusalem as the capital of Israel and urged the government to stop funding UNESCO because of its anti-Israel activities.



Meanwhile, TRF reader Roy Weinberg, who lives in Tel Aviv, had an easier job visiting the conference than I will have tomorrow when I go to give a talk about the continuum and discontinuum in Moravia. He attended several big shots' real technical talks, just like most people in cities hosting string conferences should, and he picked the nicest person. Ladies and Gentlemen, the winner is... Edward Witten. Congratulations.

I hope that it's OK to repost Roy's half-refreshed selfie with Weinberg and Witten. Roy, I hope that you're working on an updated version of your theorem. ;-)




I won't embed Gross-Weinberg and Maldacena-Weinberg so as not to become too ludicrous.

Juan Maldacena was surprised by the number of Roy's tattoos. Most readers will surely be surprised that Juan was surprised – his countryman Pope Francis has 39 tattoos on his back.




I do think that it's weird that there are so few people who are clever enough to find out that such famous scientists are in town and who take selfies with them. Imagine how many people would be taking selfies with Katy Perry or someone like that. And Tel Aviv is almost certainly above the average.

Because Strings 2018 takes place in Okinawa, Japan and I wasn't able to find any TRF visitors from Okinawa at all – correct me if I were too sloppy (this compares with some 600 users in Tel Aviv who visit TRF each month) – Japanese readers should already start to work on their plans for June 2018.

Czech PM Sobotka is just visiting Japan. He paid tribute to the memory of the Czech architect who built the A-dome in Hiroshima, the only building that survived the blast, and noted that the Japanese were rather enthusiastic not only about Czech classical music but also about Alfons Mucha's "Slav Epic", an impressive piece of Art Nouveau kitsch. If one thinks about it, this genre of painting is rather similar to some of the cute, infantile looking Japanese genres, so why not.

But back to the main point. Not that fame is important but I still think that it's somewhat depressing that even if you become one of the 10 most famous physicists in the world, you will find approximately one bold physics fan with a good taste who wants to take a selfie with you. And that the videos with the conference talks only have at most a few hundred views. But I am afraid that these numbers do faithfully represent the degree of the general public's interest in theoretical physics.

by Luboš Motl (noreply@blogger.com) at June 30, 2017 03:44 PM

June 29, 2017

Georg von Hippel - Life on the lattice

Lattice 2017, Day Six
On the last day of the 2017 lattice conference, there were plenary sessions only. The first plenary session opened with a talk by Antonio Rago, who gave a "community review" of lattice QCD on new chips. New chips in the case of lattice QCD mean mostly Intel's new Knights Landing architecture, to whose efficient use significant effort is devoted by the community. Different groups pursue very different approaches, from purely OpenMP-based C codes to mixed MPI/OpenMP-based codes maximizing the efficiency of the SIMD pieces using assembler code. The new NVidia Tesla Volta and Intel's OmniPath fabric also featured in the review.

The next speaker was Zohreh Davoudi, who reviewed lattice inputs for nuclear physics. While simulating heavier nuclei directly on the lattice is still infeasible, nuclear phenomenologists appear to be very excited about the first-principles lattice QCD simulations of multi-baryon systems now reaching maturity, because these can be used to tune and validate nuclear models and effective field theories, from which predictions for heavier nuclei can then be derived so as to be based ultimately on QCD. The biggest controversy in the multi-baryon sector at the moment is due to HALQCD's claim that the multi-baryon mass plateaux seen by everyone except HALQCD (who use their own method based on Bethe-Salpeter amplitudes) are probably fakes or "mirages", and that using the Lüscher method to determine multi-baryon binding would require totally unrealistic source-sink separations of over 10 fm. The volume independence of the bound-state energies determined from the allegedly fake plateaux, as contrasted to the volume dependence of the scattering-state energies so extracted, provides a fairly strong defence against this claim, however. There are also new methods to improve the signal-to-noise ratio for multi-baryon correlation functions, such as phase reweighting.

This was followed by a talk on the tetraquark candidate Zc(3900) by Yoichi Ikeda, who spent a large part of his talk on reiterating the HALQCD claim that the Lüscher method requires unrealistically large time separations. During the questions, William Detmold raised the important point that there would be no excited-state contamination at all if the interpolating operator created an eigenstate of the QCD Hamiltonian, and that for improved interpolating operators (such as those generated by the variational method) one can get rather close to this situation, so that the HALQCD criticism seems hardly applicable. As for the Zc(3900), HALQCD find it to be not a resonance, but a kinematic cusp, although this conclusion is based on simulations at rather heavy pion masses (mπ > 400 MeV).

The final plenary session was devoted to the anomalous magnetic moment of the muon, which is perhaps the most pressing topic for the lattice community, since the new (g-2) experiment is now running, and theoretical predictions matching the improved experimental precision will be needed soon. The first speaker was Christoph Lehner, who presented RBC/UKQCD's efforts to determine the hadronic vacuum polarization contribution to aμ with high precision. The strategy for this consists of two main ingredients: one is to minimize the statistical and systematic errors of the lattice calculation by using a full-volume low-mode average via a multigrid Lanczos method, explicitly including the leading effects of strong isospin breaking and QED, and the contribution from disconnected diagrams, and the other is to combine lattice and phenomenology to take maximum advantage of their respective strengths. This is achieved by using the time-momentum representation with a continuum correlator reconstructed from the R-ratio, which turns out to be quite precise at large times, but more uncertain at shorter times, which is exactly the opposite of the situation for the lattice correlator. Using a window which continuously switches over from the lattice to the continuum at time separations around 1.2 fm then minimizes the overall error on aμ.
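
As a cartoon of that window construction (my own toy sketch with made-up inputs, not RBC/UKQCD's analysis code), one can weight each time slice with a smooth step centred near 1.2 fm, take the lattice correlator on the short-distance side and the R-ratio reconstruction on the long-distance side, and sum the combination against a (here purely schematic) time-momentum-representation kernel:

    # Toy sketch of the lattice/R-ratio window (schematic inputs only).
    import numpy as np

    a = 0.1                                  # assumed lattice spacing in fm
    t = np.arange(1, 64) * a                 # time separations in fm
    t0, delta = 1.2, 0.15                    # assumed switch-over point and width in fm

    theta = 0.5 * (1.0 - np.tanh((t - t0) / delta))   # ~1 at short t, ~0 at large t

    # Placeholder inputs; in the real analysis these come from the lattice
    # simulation, the experimental R-ratio and the exact QED kernel.
    C_lattice = np.exp(-0.7 * t / a)
    C_rratio = 1.01 * np.exp(-0.7 * t / a)
    kernel = t**2

    C_combined = theta * C_lattice + (1.0 - theta) * C_rratio
    a_mu_hvp = np.sum(kernel * C_combined)
    print(a_mu_hvp)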

The last plenary talk was given by Gilberto Colangelo, who discussed the new dispersive approach to the hadronic light-by-light scattering contribution to aμ. Up to now the theory results for this small, but important, contribution have been based on models, which will always have an a priori unknown and irreducible systematic error, although lattice efforts are beginning to catch up. For a dispersive approach based on general principles such as analyticity and unitarity, the hadronic light-by-light tensor first needs to be Lorentz decomposed, which gives 138 tensors, of which 136 are independent, and of which gauge invariance permits only 54, of which 7 are distinct, with the rest related by crossing symmetry; care has to be taken to choose the tensor basis such that there are no kinematic singularities. A master formula in terms of 12 linear combinations of these components has been derived by Gilberto and collaborators, and using one- and two-pion intermediate states (and neglecting the rest) in a systematic fashion, they have been able to produce a model-independent theory result with small uncertainties based on experimental data for pion form factors and scattering amplitudes.

The closing remarks were delivered by Elvira Gamiz, who advised participants that the proceedings deadline of 18 October will be strict, because this year's proceedings will not be published in PoS, but in EPJ Web of Conferences, who operate a much stricter deadline policy. Many thanks to Elvira for organizing such a splendid lattice conference! (I can appreciate how much work that is, and I think you should have received far more applause.)

Huey-Wen Lin invited the community to East Lansing, Michigan, USA, for the Lattice 2018 conference, which will take place 22-28 July 2018 on the campus of Michigan State University.

The IAC announced that Lattice 2019 will take place in Wuhan, China.

And with that the conference ended. I stayed in Granada for a couple more days of sightseeing and relaxation, but the details thereof will be of legitimate interest only to a very small subset of my readership (whom I keep updated via different channels), and I therefore conclude my coverage and return the blog to its accustomed semi-hiatus state.


by Georg v. Hippel (noreply@blogger.com) at June 29, 2017 01:47 PM

June 27, 2017

Lubos Motl - string vacua and pheno

Jafferis' and other talks at Strings 2017
Strings 2017 is taking place in Tel Aviv, Israel this week. The talks may be watched at
The Strings 2017 YouTube channel
I hope that the TRF readers will increase the number of views. The most watched video so far has fewer than 140 views, somewhat less than the 3 billion views of Gangnam Style. ;-)




On Wednesday 5 pm, there will be a talk by Andy Strominger and Marika Taylor titled "How you can write a talk promoting feminism, reverse racism, and similar garbage, present it at the annual Strings 2017 conference, and make everyone pretend that everything is alright even though the talk has obviously nothing to do with the topic of the conference". I kindly urge the participants to scream and whistle during the talk if the talk will really take place.

Incidentally, there have been lots of female speakers. Out of the 27 talks I see, at least 5 are by women (update: 6 of 29). I am about 90% certain that this overrepresentation of women – almost (update: over) 20% – is due to some design by the organizers. But I've watched all these women's talks (5 times 6 minutes: only short ones) and they mostly seem very smart and competent.




A French TRF reader has chosen the following most interesting talk so far:



It is based on Daniel Jafferis' March 2017 paper. I have just watched it and while it's a little bit more qualitative and a little bit less rich in equations and nice structures than one might want, it's both cool and closely related not only to the questions that I am thinking about but also to many of the answers.

Jafferis gives his own presentation of the reasons why bulk operators in a theory of quantum gravity can't be given by state-independent linear operators. There are numerous examples, some of them were given on this blog, but his simplest one is the "number of components of the spacetime \(N\)". A non-entangled state of two copies of a CFT has the eigenvalue \(N=2\) while the maximally entangled state of the two CFTs – where the entanglement glues the two boundaries – has \(N=1\). Funnily enough, however, these two states (entangled and unentangled) are not orthogonal to each other. That contradicts the assumption that \(N\) is a state-independent Hermitian linear operator on the Hilbert space of the two CFTs. A simple example, indeed.
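
To spell out the linear-algebra step doing the work here: if \(N\) were such a state-independent Hermitian operator, with \(N|\psi_{\rm prod}\rangle = 2\,|\psi_{\rm prod}\rangle\) for the product state and \(N|\psi_{\rm ent}\rangle = 1\cdot|\psi_{\rm ent}\rangle\) for the maximally entangled one, then hermiticity would force
\[ 2\,\langle\psi_{\rm ent}|\psi_{\rm prod}\rangle = \langle\psi_{\rm ent}|N|\psi_{\rm prod}\rangle = 1\cdot\langle\psi_{\rm ent}|\psi_{\rm prod}\rangle \quad\Rightarrow\quad \langle\psi_{\rm ent}|\psi_{\rm prod}\rangle = 0\,, \]
so the nonzero overlap of the two states is already enough to kill such an operator.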

However, Daniel went further than to just point out that Raju and Papadodimas (and also your humble correspondent, Berenstein, Miller, and others) were right. He put in quite some work to demystify the point, too. In particular, he says that the non-existence of similar operators on the CFT(s) space representing the bulk observables isn't any evidence that the AdS/CFT duality breaks down for similar questions. It doesn't break down because the right operator doesn't exist on the bulk side, either. In particular, he argues that the would-be bulk operator isn't physical because it isn't diffeomorphism-invariant, at least not when you allow nonperturbative diffeomorphisms.

To show these points, Jafferis employs an underused yet exciting concept, the Hartle-Hawking wave function, the "initial state of the Universe" calculated with the help of a smooth Euclidean path integral, which solves the Wheeler-DeWitt equation. My understanding is that Daniel only picked the Hartle-Hawking wave function in order to isolate the problem as much as possible – to choose a path integral without any detailed boundary information etc. I am not sure whether he actually claims that there is an interesting and well-defined Hartle-Hawking wave function in examples of AdS/CFT and what predictions it is actually making. He finds that the states \(\ket h\) for different metrics aren't really orthogonal to each other – a point I made some years ago. Daniel has a new explanation of why this overcompleteness of the basis exists: it's because there are many ways to slice a spacetime. I don't quite see that these two effects are "exactly" mapped to each other but maybe Daniel does.

Jafferis also proposes an explanation for the nonperturbatively small overlaps (nonzero inner products). They arise because the complexified geometries may be connected. If I understand well, he says that spacetimes of two different topologies may be considered the same spacetime with different slicings where the difference in slicings is allowed to be complexified. OK, I don't quite understand a single example too well, at the technical level, but I feel that the qualitative statement he makes is true, anyway. In my opinion, one should be more quantitative and careful here because while we "want" to show that some inner products are surprisingly nonzero, there are still lots of inner products that should better still be zero and your cubist treatment of quantum gravity shouldn't invalidate this orthogonality.

The Marolf-Wall paradox disappears because the operators are delegitimized on both sides of the AdS/CFT correspondence. OK, that's a destructive solution to the problem. I would still prefer a solution where fixes are made so that the operators are legitimized on both sides ;-) so that one doesn't throw the baby out with the bath water. To some extent, it seems that Daniel has started to do things like that but it seems very far from a completion. In particular, he made some related steps on a slide he jumped over, one about the local gauge-invariant Hamiltonians that are relational and describe a measurement process. On that slide, he mentions that there's no canonical way to separate the diffeomorphism dressing.

Because, as I mentioned, his solution to the paradox is destructive – he threw away some structures along with the contradictions – he needs a replacement for the delegitimized operators and the physics they used to clarify. So if you agree with his resolution, your new task is to find a framework that describes the outcomes of measurements in the bulk correctly.

Session chair Veronika Hubený asked whether it's right that one can't define a local gauge-invariant operator without specifying a slice. Jafferis basically answers Yes, but the problem only appears at the quantum level when you have to consider wave functions that don't pick a preferred slice – such as the wave functions that are candidates to solve the Wheeler-DeWitt equation. Some geodesics may be used to define something and operators only act as expected if the geodesics go through that slice. I am being vague because I haven't understood his statement with full precision.

Suvrat Raju tried to rephrase Jafferis' idea about the non-existence of the operator in a slightly different way. He more or less said that there are two ways to quantize observables, through the semiclassical or Hartle-Hawking methods. Jafferis said he would sympathize with the interpretation but this picture is irrelevant here because the semiclassical side quickly runs into spacetime singularities. According to Jafferis, one should therefore be "agnostic" about the existence of the operator in that case. The Jafferis-Raju discussion grew hardcore – I think that fewer than 12 people in the world could follow it – and Veronika Hubený brought the discussion to an end with thanks.

A funny terminological detail: I must also praise Daniel for having used the term "ER-EPR correspondence" which I have used at least since July 2013, as an obvious counterpart of the AdS/CFT correspondence. You may check that the first courageous folks joined me in 2014, but only in the last two years has this phrase begun to spread. ;-)

by Luboš Motl (noreply@blogger.com) at June 27, 2017 05:39 PM

Symmetrybreaking - Fermilab/SLAC

The rise of LIGO’s space-studying super-team

The era of multi-messenger astronomy promises rich rewards—and a steep learning curve.

Two women dancing in space

Sometimes you need more than one perspective to get the full story.

Scientists including astronomers working with the Fermi Large Area Telescope have recorded brief bursts of high-energy photons called gamma rays coming from distant reaches of space. They suspect such eruptions result from the merging of two neutron stars—the collapsed cores of dying stars—or from the collision of a neutron star and a black hole. 

But gamma rays alone can’t tell them that. The story of the dense, crashing cores would be more convincing if astronomers saw a second signal coming from the same event—for example, the release of ripples in space-time called gravitational waves.

“The Fermi Large Area Telescope detects a few short gamma ray bursts per year already, but detecting one in correspondence to a gravitational-wave event would be the first direct confirmation of this scenario,” says postdoctoral researcher Giacomo Vianello of the Kavli Institute for Particle Astrophysics and Cosmology, a joint institution of SLAC National Accelerator Laboratory and Stanford University.

Scientists discovered gravitational waves in 2015 (announced in 2016). Using the Laser Interferometer Gravitational-Wave Observatory, or LIGO, they detected the coalescence of two massive black holes.

LIGO scientists are now sharing their data with a network of fellow space watchers to see if any of their signals match up. Combining multiple signals to create a more complete picture of astronomical events is called multi-messenger astronomy.​

Looking for a match

“We had this dream of finding astronomical events to match up with our gravitational wave triggers,” says LIGO scientist Peter Shawhan of the University of Maryland. ​

But LIGO can only narrow down the source of its signals to a region large enough to contain roughly 100,000 galaxies. 

Searching for contemporaneous signals within that gigantic volume of space is extremely challenging, especially since most telescopes only view a small part of the sky at a time. So Shawhan and his colleagues developed a plan to send out an automatic alert to other observatories whenever LIGO detected an interesting signal of its own. The alert would contain preliminary calculations and the estimated location of the source of the potential gravitational waves.

“Our early efforts were pretty crude and only involved a small number of partners with telescopes, but it kind of got this idea started,” Shawhan says. The LIGO Collaboration and the Virgo Collaboration, its European partner, revamped and expanded the program while upgrading their detectors. Since 2014, 92 groups have signed up to receive alerts from LIGO, and the number is growing. 

LIGO is not alone in latching onto the promise of multi-messenger astronomy. The Supernova Early Warning System (SNEWS) also unites multiple experiments to look at the same event in different ways. Neutral, rarely interacting particles called neutrinos escape more quickly from collapsing stars than optical light, so a network of neutrino experiments is prepared to alert optical observatories as soon as they get the first warning of a nearby supernova in the form of a burst of neutrinos. 

National Science Foundation Director France Córdova has lauded multi-messenger astronomy, calling it in 2016 a bold research idea that would lead to transformative discoveries.​

The learning curve

Catching gamma ray bursts alongside gravitational waves is no simple feat. 

The Fermi Large Area Telescope orbits the earth as the primary instrument on the Fermi Gamma-ray Space Telescope. The telescope is constantly in motion and has a large field of view that surveys the entire sky multiple times per day. 

But a gamma-ray burst lasts just a few seconds, and it takes about three hours for LAT to complete its sweep. So even if an event that releases gravitational waves also produces a gamma-ray burst, LAT might not be looking in the right direction at the right time. It would need to catch the afterglow of the event. 

Fermi LAT scientist Nicola Omodei of Stanford University acknowledges another challenge: The window to see the burst alongside gravitational waves might not line up with the theoretical predictions. It’s never been done before, so the signal could look different or come at a different time than expected. 

That doesn’t stop him and his colleagues from trying, though. “We want to cover all bases, and we adopt different strategies,” he says. “To make sure we are not missing any preceding or delayed signal, we also look on much longer time scales, analyzing the days before and after the trigger.”

Scientists using the second instrument on the Fermi Gamma-ray Space Telescope have already found an unconfirmed signal that aligned with the first gravitational waves LIGO detected, says scientist Valerie Connaughton of the Universities Space Research Association, who works on the Gamma-Ray Burst Monitor. “We were surprised to find a transient event 0.4 seconds after the first GW seen by LIGO.”

While the event is theoretically unlikely to be connected to the gravitational wave, she says the timing and location “are enough for us to be interested and to challenge the theorists to explain how something that was not expected to produce gamma rays might have done so.”

From the ground up

It’s not just space-based experiments looking for signals that align with LIGO alerts. A working group called DESgw, made up of members of the Dark Energy Survey and independent collaborators, has found a way to use the Dark Energy Camera, a 570-Megapixel digital camera mounted on a telescope in the Chilean Andes, to follow up on gravitational wave detections.

“We have developed a rapid response system to interrupt the planned observations when a trigger occurs,” says DES scientist Marcelle Soares-Santos of Fermi National Accelerator Laboratory. “The DES is a cosmological survey; following up gravitational wave sources was not originally part of the DES scientific program.” 

Once they receive a signal, the DESgw collaborators meet to evaluate the alert and weigh the cost of changing the planned telescope observations against what scientific data they could expect to see—most often how much of the LIGO source location could be covered by DECam observations.

“We could, in principle, put the telescope onto the sky for every event as soon as night falls,” says DES scientist Jim Annis, also of Fermilab. “In practice, our telescope is large and the demand for its time is high, so we wait for the right events in the right part of the sky before we open up and start imaging.”

At an even lower elevation, scientists at the IceCube neutrino experiment—made up of detectors drilled down into Antarctic ice—are following LIGO’s exploits as well.

“The neutrinos IceCube is looking for originate from the most extreme environment in the cosmos,” says IceCube scientist Imre Bartos of Columbia University. “We don't know what these environments are for sure, but we strongly suspect that they are related to black holes.”

LIGO and IceCube are natural partners. Both gravitational waves and neutrinos travel for the most part unimpeded through space. Thus, they carry pure information about where they originate, and the two signals can be monitored together nearly in real time to help refine the calculated location of the source.

The ability to do this is new, Bartos says. Neither gravitational waves nor high-energy neutrinos had been detected from the cosmos when he started working on IceCube in 2008. “During the past few years, both of them were discovered, putting the field on a whole new footing.”

Shawhan and the LIGO collaboration are similarly optimistic about the future of their program and multi-messenger astronomy. More gravitational wave detectors are planned or under construction, including an upgrade to the European detector Virgo, the KAGRA detector in Japan, and a third LIGO detector in India, and that means scientists will home in closer and closer on their targets.​

by Troy Rummler at June 27, 2017 02:27 PM

June 25, 2017

Georg von Hippel - Life on the lattice

Lattice 2017, Day Five
The programme for today took account of the late end of the conference dinner in the early hours of the day by moving the plenary sessions by half an hour. The first plenary talk of the day was given by Ben Svetitsky, who reviewed the status of BSM investigations using lattice field theory. An interesting point Ben raised was that these studies go not so much "beyond" the Standard Model (like SUSY, dark matter, or quantum gravity would), but "behind" or "beneath" it by seeking a deeper explanation of the seemingly unnaturally small Higgs mass, flavour hierarchies, and other unreasonable-looking features of the SM. The original technicolour theory is quite dead, being Higgsless, but "walking" technicolour models are an area of active investigation. These models have a β-function that comes close to zero at some large coupling, leading to an almost conformal behaviour near the corresponding IR almost-fixed point. In such almost conformal theories, a light scalar (i.e. the Higgs) could arise naturally as the pseudo-Nambu-Goldstone boson of the approximate dilatation symmetry of the theory. A range of different gauge groups, numbers of flavours, and fermion representations are being investigated, with the conformal or quasi-conformal status of some of these being apparently controversial. An alternative approach to Higgs compositeness has the Higgs appear as the exact Nambu-Goldstone boson of some spontaneous symmetry breaking which keeps SU(2)L⨯U(1) intact, with the Higgs potential being generated at the loop level by the coupling to the SM sector. There are also some models of this type being actively investigated.

The next plenary speaker was Stefano Forte, who reviewed the status and prospects of determining the strong coupling αs from sources other than the lattice. The PDG average for αs is a weighted average of six values, four of which are the pre-averages of the determinations from the lattice, from τ decays, from jet rates and shapes, and from parton distribution functions, and two of which are the determinations from the global electroweak fit and from top production at the LHC. Each of these channels has its own systematic issues, and one problem can be that overaggressive error estimates give too much weight to the corresponding determination, leading to statistically implausible scatter of results in some channels. It should be noted, however, that the lattice results are all quite compatible, with the most precise results by ALPHA and by HPQCD (which use different lattice formulations and completely different analysis methods) sitting right on top of each other.
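
To illustrate the point about overaggressive errors (a toy with invented numbers, not the actual PDG inputs): in an inverse-variance weighted average each determination enters with weight 1/σ², so halving a quoted error quadruples its pull on the average.

```python
import numpy as np

# Toy inverse-variance weighted average with invented numbers (not the PDG inputs):
# the last determination quotes an error that is much too small, and consequently
# dominates both the central value and the quoted uncertainty of the average.
values = np.array([0.1185, 0.1179, 0.1192, 0.1175])
errors = np.array([0.0008, 0.0012, 0.0010, 0.0002])

weights = 1.0 / errors**2
mean = np.sum(weights * values) / np.sum(weights)
err = 1.0 / np.sqrt(np.sum(weights))
print(f"weighted average: {mean:.5f} +/- {err:.5f}")   # pulled almost entirely to 0.1175
```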

This was followed by a presentation by Thomas Korzec of the determination of αs by the ALPHA collaboration. I cannot really attempt to do justice to this work in a blog post, so I encourage you to look at their paper. By making use of both the Schrödinger functional and the gradient flow coupling in finite volume, they are able to non-perturbatively run αs between hadronic and perturbative scales with high accuracy.

After the coffee break, Erhard Seiler reviewed the status of the complex Langevin method, which is one of the leading methods for simulating actions with a sign problem, e.g. at finite chemical potential or with a θ term. Unfortunately, it is known that the complex Langevin method can sometimes converge to wrong results, and this can be traced to the complexification violating the conditions under which the (real) Langevin method is justified; the development of zeros in exp(-S) seems to be the most important case, since these give rise to poles in the drift force, which violate ergodicity. There seems to be a lack of general theorems for situations like this, although the complex Langevin method has apparently been shown to be correct under certain difficult-to-check conditions. One of the best hopes for simulating with complex Langevin seems to be the dynamical stabilization proposed by Benjamin Jäger and collaborators.
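
As a minimal illustration of what the method does (a one-variable toy, nothing like a lattice code): complexify the variable, drift it with -dS/dz, and add real Gaussian noise. For a Gaussian action with Re σ > 0 this reproduces the exact result ⟨z²⟩ = 1/σ; the pathologies mentioned above arise precisely when exp(-S) has zeros and the drift develops poles.

```python
import numpy as np

# One-variable complex Langevin toy: S(z) = 0.5 * sigma * z^2 with complex sigma.
# The exact answer is <z^2> = 1/sigma; this is a case where the method is known to
# work.  When exp(-S) has zeros, the drift -dS/dz acquires poles and convergence to
# the right answer is no longer guaranteed (the problem discussed in the talk).
rng = np.random.default_rng(1)
sigma = 1.0 + 1.0j                 # complex coupling => sign problem for real-variable methods
dt, nsteps, ntherm = 1.0e-3, 200_000, 10_000

z, samples = 0.0 + 0.0j, []
for n in range(nsteps):
    z += -sigma * z * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    if n >= ntherm:
        samples.append(z * z)

print("complex Langevin <z^2>:", np.mean(samples), "  exact:", 1.0 / sigma)
```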

This was followed by Paulo Bedaque discussing the prospects of solving the sign problem using the method of thimbles and related ideas. As far as I understand, thimbles are permissible integration regions in complexified configuration space on which the imaginary part of the action is constant, and which can thus be integrated over without a sign problem. A holomorphic flow that is related both to the gradient flow and the Hamiltonian flow can be constructed so as to flow from the real integration region to the thimbles, and based on this it appears to have become possible to solve some toy models with a sign problem, even going so far as to perform real-time simulations in the Keldysh-Schwinger formalism in Euclidean space (if I understood correctly).
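
And, to make the thimble idea a little more concrete (again a one-variable toy with an arbitrarily chosen action, reflecting my own reading of the method): the holomorphic flow dz/dτ = conj(∂S/∂z) keeps Im S constant while Re S grows, which is exactly why the manifolds it traces out can be integrated over without a sign problem.

```python
import numpy as np

# Toy holomorphic flow for S(z) = 0.5*z^2 + i*z: along dz/dtau = conj(dS/dz) one has
# dS/dtau = |dS/dz|^2, which is real and non-negative, so Im(S) stays constant while
# Re(S) increases -- the defining property of the thimble construction.
def S(z):
    return 0.5 * z**2 + 1.0j * z

def dS(z):
    return z + 1.0j

z, dtau = 0.3 + 0.2j, 1.0e-3
im_start = S(z).imag
for _ in range(5000):
    z += np.conjugate(dS(z)) * dtau        # Euler step of the holomorphic flow

print("Im S before flow:", im_start, "  Im S after flow:", S(z).imag)
```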

In the afternoon, there was a final round of parallel sessions, one of which was again dedicated to the anomalous magnetic moment of the muon, this time focusing on the very difficult hadronic light-by-light contribution, for which the Mainz group has some very encouraging first results.

by Georg v. Hippel (noreply@blogger.com) at June 25, 2017 06:30 AM

June 23, 2017

Symmetrybreaking - Fermilab/SLAC

World’s biggest neutrino experiment moves one step closer

The startup of a 25-ton test detector at CERN advances technology for the Deep Underground Neutrino Experiment.

People in hard hats install the 311 detector

In a lab at CERN sits a very important box. It covers about three parking spaces and is more than a story tall. Sitting inside is a metal device that tracks energetic cosmic particles.

This is a prototype detector, a stepping-stone on the way to the future Deep Underground Neutrino Experiment (DUNE). On June 21, it recorded its first particle tracks.

So begins the largest ever test of an extremely precise method for measuring elusive particles called neutrinos, which may hold the key to why our universe looks the way it does and how it came into being.

A two-phase detector

The prototype detector is named WA105 3x1x1 (its dimensions in meters) and holds five active tons—3000 liters—of liquid argon. Argon is well suited to interacting with neutrinos then transmitting the subsequent light and electrons for collection. Previous liquid argon neutrino detectors, such as ICARUS and MicroBooNE, detected signals from neutrinos using wires in the liquid argon. But crucially, this new test detector also holds a small amount of gaseous argon, earning it the special status of a two-phase detector.

As particles pass through the detector, they interact with the argon atoms inside. Electrons are stripped off of atoms and drift through the liquid toward an “extraction grid,” which kicks them into the gas. There, large electron multipliers create a cascade of electrons, leading to a stronger signal that scientists can use to reconstruct the particle track in 3D. Previous tests of this method were conducted in small detectors using about 250 active liters of liquid argon.

“This is the first time anyone will demonstrate this technology at this scale,” says Sebastien Murphy, who led the construction of the detector at CERN.

The 3x1x1 test detector represents a big jump in size compared to previous experiments, but it’s small compared to the end goal of DUNE, which will hold 40,000 active tons of liquid argon. Scientists say they will take what they learn and apply it (and some of the actual electronic components) to next-generation single- and dual-phase prototypes, called ProtoDUNE.

The technology used for both types of detectors is a time projection chamber, or TPC. DUNE will stack many large modules snugly together like LEGO blocks to create enormous DUNE detectors, which will catch neutrinos a mile underground at Sanford Underground Research Facility in South Dakota. Overall development for liquid argon TPCs has been going on for close to 40 years, and research and development for the dual-phase for more than a decade. The idea for this particular dual-phase test detector came in 2013.

“The main goal [with WA105 3x1x1] is to demonstrate that we can amplify charges in liquid argon detectors on the same large scale as we do in standard gaseous TPCs,” Murphy says.

By studying neutrinos and antineutrinos that travel 800 miles through the Earth from the US Department of Energy’s Fermi National Accelerator Laboratory to the DUNE detectors, scientists aim to discover differences in the behavior of matter and antimatter. This could point the way toward explaining the abundance of matter over antimatter in the universe. The supersensitive detectors will also be able to capture neutrinos from exploding stars (supernovae), unveiling the formation of neutron stars and black holes. In addition, they allow scientists to hunt for a rare phenomenon called proton decay.

“All the R&D we did for so many years and now want to do with ProtoDUNE is the homework we have to do,” says André Rubbia, the spokesperson for the WA105 3x1x1 experiment and former co-spokesperson for DUNE. “Ultimately, we are all extremely excited by the discovery potential of DUNE itself.”

Image of particle tracks

One of the first tracks in the prototype detector, caused by a cosmic ray.

André Rubbia

Testing, testing, 3-1-1, check, check

Making sure a dual-phase detector and its electronics work at cryogenic temperatures of minus 184 degrees Celsius (minus 300 degrees Fahrenheit) on a large scale is the primary duty of the prototype detector—but certainly not its only one. The membrane that surrounds the liquid argon and keeps it from spilling out will also undergo a rigorous test. Special cryogenic cameras look for any hot spots where the liquid argon is predisposed to boiling away and might cause voltage breakdowns near electronics.

After many months of hard work, the cryogenic team and those working on the CERN neutrino platform have already successfully corrected issues with the cryostat, resulting in a stable level of incredibly pure liquid argon. The liquid argon has to be pristine and its level just below the large electron multipliers so that the electrons from the liquid will make it into the gaseous argon.

“Adding components to a detector is never trivial, because you’re adding impurities such as water molecules and even dust,” says Laura Manenti, a research associate at University College London in the UK. “That is why the liquid argon in the 311—and soon to come ProtoDUNEs—has to be recirculated and purified constantly.”

While ultimately the full-scale DUNE detectors will sit in the most intense neutrino beam in the world, scientists are testing the WA105 3x1x1 components using muons from cosmic rays, high-energy particles arriving from space. These efforts are supported by many groups, including the Department of Energy’s Office of Science.

The plan is now to run the experiment, gather as much data as possible, and then move on to even bigger territory.

“The prospect of starting DUNE is very exciting, and we have to deliver the best possible detector,” Rubbia says. “One step at a time, we’re climbing a large mountain. We’re not at the top of Everest yet, but we’re reaching the first chalet.”

by Lauren Biron at June 23, 2017 04:57 PM

Symmetrybreaking - Fermilab/SLAC

Howie Day records love song to physics

After the musician learned that grad students at CERN had created a parody of his 2004 single “Collide,” he flew to Switzerland to sing it at the LHC.

Howie Day plays a song at CERN

Singer-songwriter Howie Day was sitting in a coffee shop in Denver one morning while on tour when he saw the Twitter notifications: CERN had shared a parody video of his hit song “Collide,” sung from the perspective of a proton in the Large Hadron Collider.

Sarah Charley, US communications manager for the LHC experiments, had come up with the idea for the video. She created it with the help of graduate students Jesse Heilman of the University of California, Riverside and Tom Perry and Laser Seymour Kaplan of the University of Wisconsin, Madison.

They spent lunches and coffee breaks workshopping their new version of the lyrics, which were originally about two people falling in love despite their differences. They spent a combined 20 hours in CERN’s editing studio recording the vocals and instrumentation of the track. Then they wandered around the laboratory for a full Saturday, filming at various sites. Charley edited the footage together.

“I was flattered, and it was quite funny, too,” Day says of seeing the video for the first time. “I immediately retweeted it and then sent a direct message inquiring about a visit. I figured it was a long shot, but why not?”

That started a conversation that led to Day planning a visit to CERN and booking time in his studio to re-record the song from the ground up with the new lyrics. “It was about the most fun I've ever had in the studio,” Day says. “We literally laughed all day long. I sent the track off to CERN with the note, ‘Should we make another music video?’”

The answer was yes.

While at CERN, Day spent two days visiting the ATLAS and CMS experiments, the CERN Data Centre and the SM18 magnet-testing facility. He also was given the rare opportunity to travel down into the LHC tunnel. CERN’s video crew tagged along to film him at the various sites.

“Going down into the LHC tunnel was a once-in-a lifetime opportunity, and it felt that way. It was like seeing the northern lights, or playing The Tonight Show, or bringing a new puppy home.”

Day, who says he has always been fascinated by the “why” of things, had been aware of CERN before this project, but he had only a rough idea of what went on there. He says that it wasn’t until he got there that things started to make sense.

“Obviously nothing can prepare you for the sheer scale of the place, but also the people who worked there were amazing,” Day says. “I felt completely overwhelmed and humbled the entire time. It was truly great to be working at the site where humans may make the most important scientific discoveries of our lifetime.”

Heilman, now a postdoctoral researcher at Carleton University, says that he saw the song as a way to reach out to people outside the culture of academia.

“All of us have been steeped in the science for so long that we sort of forget how to speak a language,” he says. “It's always important for academics and researchers to learn different ways to communicate what we’re doing because we’re doing it for people and for society.”

There’s a point in the original song where there’s an emotional build, he says, and Day sings, “I’ve found I’m scared to know, I’m always on your mind.”

The parody uses that part of the song to express the hopes and fears of experimentalists looking for evidence that might not ever appear.

“We're all experimentalists, so we will all spend our careers searching for something,” Heilman says. “The feeling is that [the theory of] supersymmetry, while it's this thing that everybody's been so excited about for a long time, really doesn’t seem that likely to a lot of us anymore because we’re eliminating a lot of the phase space. It's sort of like this white whale hunt. And so our lyrics, ‘Can SUSY still be found?’ is this emotional cry to the physics.”

Charley says she hopes that, through the video, they’re able to “reach and touch people with the science who we normally can't talk to.”

“I think you can appreciate something without fully understanding it,” she says. “As someone who is a professional science communicator, that's always the line I'm walking: trying to find ways that people can appreciate and understand and value something without needing to get a PhD. You can't devote your life to everything, but you can still have an appreciation for things in the world outside your own specific field.”

by Ali Sundermier at June 23, 2017 12:42 PM

Georg von Hippel - Life on the lattice

Lattice 2017, Days Three and Four
Wednesday was the customary short day, with parallel sessions in the morning, and time for excursions in the afternoon. I took the "Historic Granada" walking tour, which included visits to the Capilla Real and the very impressive Cathedral of Granada.

The first plenary session of today had a slightly unusual format in that it was a kind of panel discussion on the topic of axions and QCD topology at finite temperature.

After a brief outline by Mikko Laine, the session chair, the session started off with a talk by Guy Moore on the role of axions in cosmology and the role of lattice simulations in this context. Axions arise in the Peccei-Quinn solution to the strong CP problem and are a potential dark matter candidate. Guy presented some of his own real-time lattice simulations in classical field theory for axion fields, which exhibit the annihilation of cosmic-string-like vortex defects and associated axion production, and pointed out the need for accurate lattice QCD determinations of the topological susceptibility in the temperature range of 500-1200 MeV in order to fix the mass of the axion more precisely from the dark matter density (assuming that dark matter consists of axions).
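
The reason the topological susceptibility is the crucial lattice input here is the leading-order relation between it and the temperature-dependent axion mass,
\[ m_a^2(T)\, f_a^2 = \chi_t(T)\,, \]
so that pinning down χt(T) in the temperature range where the axion field starts to oscillate directly fixes the mass entering the relic-density calculation (fa being the axion decay constant).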

The following talks were all fairly short. Claudio Bonati presented algorithmic developments for simulations of the topological properties of high-temperature QCD. The long autocorrelations of the topological charge at small lattice spacing are a problem. Metadynamics, which biases the Monte Carlo evolution in a non-Markovian manner so as to sample the configuration space more efficiently, appears to be of help.
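
A cartoon of how metadynamics does this (a one-dimensional toy standing in for the topological charge, with invented parameters): the simulation deposits a small repulsive Gaussian at the current value of the slowly-moving variable every few updates, so the accumulated bias gradually fills the wells in which the Markov chain would otherwise get stuck.

```python
import numpy as np

rng = np.random.default_rng(2)
U = lambda x: 5.0 * (x**2 - 1.0)**2          # double well standing in for the topological sectors
centers, height, width = [], 0.05, 0.2       # deposited Gaussians: toy choices, not tuned values

def bias(x):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float(np.sum(height * np.exp(-(x - c)**2 / (2.0 * width**2))))

x, step, visits = -1.0, 0.1, []
for n in range(20_000):
    x_new = x + step * rng.standard_normal()
    # Metropolis accept/reject with the history-dependent bias added to the action
    if rng.random() < np.exp((U(x) + bias(x)) - (U(x_new) + bias(x_new))):
        x = x_new
    if n % 10 == 0:
        centers.append(x)                    # deposit a repulsive Gaussian at the current point
    visits.append(x)

print("fraction of time spent in the right-hand well:", np.mean(np.asarray(visits) > 0.0))
```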

Hidenori Fukaya reviewed the question of whether U(1)A remains anomalous at high temperature; he claimed (both on theoretical grounds and based on numerical simulation results) that it does not. I didn't quite understand this, since as far as I understand the axial anomaly, it is an operator identity, which will remain true even if both sides of the identity were to happen to vanish at high enough temperature, which is all that seemed to be shown; but this may just be my ignorance showing.

Tamas Kovacs showed recent results on the temperature-dependence of the topological susceptibility of QCD. By a careful choice of algorithms based on physical considerations, he could measure the topological susceptibility over a wide range of temperatures, showing that it becomes tiny at large temperature.

Then the speakers all sat on the stage as a panel and fielded questions from the audience. Perhaps it might have been a good idea to somehow force the speakers to engage each other; as it was, the advantage of this format over simply giving each speaker a longer time for answering questions didn't immediately become apparent to me.

After the coffee break, things returned to the normal format. Boram Yoon gave a review of lattice determinations of the neutron electric dipole moment. Almost any BSM source of CP violation must show up as a contribution to the neutron EDM, which is therefore a very sensitive probe of new physics. The very strong experimental limits on any possible neutron EDM imply e.g. |θ| < 10^-10 in QCD through lattice measurements of the effects of a θ term on the neutron EDM. Similarly, limits can be put on any quark EDMs or quark chromoelectric dipole moments. The corresponding lattice simulations have to deal with sign problems, and the usual techniques (Taylor expansions, simulations at complex θ) are employed to get past this, and seem to be working very well.

The next plenary speaker was Phiala Shanahan, who showed recent results regarding the gluon structure of hadrons and nuclei. This line of research is motivated by the prospect of an electron-ion collider that would be particularly sensitive to the gluon content of nuclei. For gluonic contributions to the momentum and spin decomposition of the nucleon, there are some fresh results from different groups. For the gluonic transversity, Phiala and her collaborators have performed first studies in the φ system. The gluonic radii of small nuclei have also been looked at, with no deviation from the single-nucleon case visible at the present level of accuracy.

The 2017 Kenneth Wilson Award was awarded to Raúl Briceño for his groundbreaking contributions to the study of resonances in lattice QCD. Raúl has been deeply involved both in the theoretical developments behind extending the reach of the Lüscher formalism to more and more complicated situations, and in the numerical investigations of resonance properties rendered possible by those developments.

After the lunch break, there were once again parallel sessions, two of which were dedicated entirely to the topic of the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon, which has become one of the big topics in lattice QCD.

In the evening, the conference dinner took place. The food was excellent, and the Flamenco dancers who arrived at midnight (we are in Spain after all, where it seems dinner never starts before 9pm) were quite impressive.

by Georg v. Hippel (noreply@blogger.com) at June 23, 2017 12:20 PM

June 22, 2017

Symmetrybreaking - Fermilab/SLAC

African School works to develop local expertise

Universities in Africa are teaming up to offer free training to students interested in fundamental physics.


Last Feremenga was born in a small town in Zimbabwe. As a high school student in a specialized school in the capital, Harare, he was drawn to the study of physics.

“Physics was at the top of my list of potential academic fields to pursue,” he says.

But with limited opportunities nearby, that was going to require a lot of travel.

With help from the US Education Assistance Center at the American Embassy in Harare, Feremenga was accepted at the University of Chicago in 2007. As an undergraduate, he conducted research for a year at the nearby US Department of Energy’s Fermi National Accelerator Laboratory.

Then, through the University of Texas at Arlington, he became one of just a handful of African nationals to conduct research as a user at European research center CERN. Feremenga joined the ATLAS experiment at the Large Hadron Collider. He spent his grad-school years traveling between CERN and Argonne National Laboratory near Chicago, analyzing hundreds of terabytes of ATLAS data.

“I became interested in solving problems across diverse disciplines, not just physics,” he says.

“At CERN and Argonne, I assisted in developing a system that filters interesting events from large data-sets. I also analyzed these large datasets to find interesting physics patterns.”

Group photo of African students wearing coats and standing in front of a sign indicating the distance to different cities
The African School of Fundamental Physics and Applications

In December 2016, he received his PhD. In February 2017, he accepted a job at technology firm Digital Reasoning in Nashville, Tennessee.

To pursue particle physics, Feremenga needed to spend the entirety of his higher education outside Zimbabwe. Only one activity brought him even within the same continent as his home: the African School of Fundamental Physics and Applications. Feremenga attended the school in the program’s inaugural year at South Africa’s Stellenbosch University.

The ASP received funding for a year from France’s Centre National de la Recherche Scientifique (CNRS) in 2008. Since then, major supporters among 20 funding institutions have included the International Center for Theoretical Physics (ICTP) in Trieste, Italy; the South African National Research Foundation and Department of Science and Technology; and the South African Institute of Physics. Other major supporters have included CERN, the US National Science Foundation and the University of Rwanda.

The free, three-week ASP has been held every two years since 2010. Targeting students in sub-Saharan Africa, the school has been held in South Africa, Ghana, Senegal and Rwanda. The 2018 School is slated to take place in Namibia. Thanks to outreach efforts, applications have risen from 125 in 2010 to 439 in 2016.

The African School of Fundamental Physics and Applications

The 50 to 80 students selected for the school must have a minimum of a 3-year university education in math, physics, engineering and/or computer science. The first week of the school focuses on theoretical physics; the second week, experimental physics; the third week, physics applications and high-performance computing.

School organizers stay in touch to support alumni in pursuing higher education, says organizer Ketevi Assamagan. “We maintain contact with the students and help them as much as we can,” Assamagan says. “ASP alumni are pursuing higher education in Africa, Asia, Europe and the US.”

Assamagan, originally from Togo but now a US citizen, worked on the Higgs hunt with the ATLAS experiment. He is currently at Brookhaven National Lab in New York, which supports him devoting 10 percent of his time to the ASP.

While sub-Saharan countries are just beginning to close the gap in physics, there is one well-established accelerator complex in South Africa, operated by the iThemba LABS of Cape Town and Johannesburg. The 30-year-old Separated-Sector Cyclotron, which primarily produces particle beams for nuclear research and for training at the postdoc level, is the largest accelerator of its kind in the southern hemisphere.

Jonathan Dorfan, former Director of SLAC National Accelerator Laboratory and a native of South Africa, attended the University of Cape Town. Dorfan recalls that after his Bachelor’s and Master’s degrees, the best PhD opportunities were in the US or Britain. He says he’s hopeful that that outlook could one day change.

Organizers of the African School of Fundamental Physics and Applications continue reaching out to students on the continent in the hopes that one day, someone like Feremenga won’t have to travel across the world to pursue particle physics.

by Mike Perricone at June 22, 2017 02:40 PM

Lubos Motl - string vacua and pheno

Dwarf galaxies: gravity really, really is not entropic
Verlinde has already joined the community of fraudulent pseudoscientists who keep on "working" on something they must know to be complete rubbish

In the text Researchers Check Space-Time to See if It’s Made of Quantum Bits, the Quanta Magazine describes a fresh paper by Kris Pardo (Princeton U.)
Testing Emergent Gravity with Isolated Dwarf Galaxies
which tested some 2016 dark matter "application" of Erik Verlinde's completely wrong "entropic gravity" meme. Verlinde has irrationally linked his "entropic gravity" meme with some phenomenological, parameter-free fit for the behavior of galaxies. What a surprise: when this formula is compared to dwarf galaxies, which are, you know, a bit smaller, it doesn't seem to work.

The maximum circular velocities are observed to reach up to 280 km/s but the predicted ones are at most 165 km/s. So it doesn't work, the model is falsified. This moment of the death of the model is where the discussion of the model should end and this is indeed where my discussion of the model ends.




But what I want to discuss is how much this branch of physics has been filled with garbage in a recent decade or two. I don't actually believe that Erik Verlinde believes that his formulae have any reason to produce nontrivially good predictions.




What he did was just to find some approximate fit for some data about a class of galaxies – which is only good up to a factor of a few (perhaps two, perhaps ten). It's not so shocking that such a rough fit may exist because by construction, all the galaxies in his class were qualitatively similar to each other. That's why only one or a small number of parameters describes the important enough characteristics of each galaxy and everything important we observe must be roughly a function of it. When you think about it, the functional dependence simply has to be close enough to a linear or power law function for such a limited class of similar objects.

And this fit was "justified" by some extremely sloppy arguments as being connected with his older "entropic gravity" meme. It claims that gravity is an entropic force – resulting from the desire of physical systems to increase their entropy. This is obviously wrong because the gravitational motion would be unavoidably irreversible; and because all the quantum interference phenomena (e.g. with neutrons in the gravitational field) would be rendered impossible if there were a high entropy underlying the gravitational potential even in the absence of the event horizons.

So his original meme is wrong and it contradicts absolutely basic observed facts about the gravitational force between celestial bodies, such as the fact that planetary orbits are approximately elliptical. But the idea of this Verlinde style of work is not to care, and to increase the ambitions instead. While his theory has no chance to correctly deduce even the Kepler laws, he just remains silent about it and starts to claim that it can do everything and may even replace dark matter, if not both dark matter and dark energy.

An even stinkier package of bogus claims and would-be equations is offered to "justify" this ambitious claim in the eyes of the truly gullible people, if I avoid the scientific term "imbeciles" for a while. Astrophysicists at Princeton feel the urge to spend their time with this junk. Needless to say, the "theory" is based on wrong assumptions, stupidities, and deception about connections between all these wrong claims and the actual, observed, correct claims. It has no reason to predict anything else correctly and it doesn't.

Verlinde's statement after his newest theory was basically killed is truly Smolinesque:
This is interesting and good work. [But] emergent gravity hasn’t been developed to the point where it can make specific predictions about all dwarf galaxies. I think more work needs to be done on both the observational and the theory side.
Holy cow. It's not too interesting work. It's just a straightforward paper showing that Mr Verlinde's "theory" contradicts well-known dwarf galaxy data. But even if it were interesting, it is absolutely ludicrous for Mr Verlinde to present himself as some kind of a superior judge of Pardo's work. He is just a student who tried a very stupid thing and was demolished by his professor who showed him the actual correct data.

By the way, the excuse involving the word "predictions" is cute, too. Verlinde emits fog whose purpose is to create the impression that the falsification of his delusions doesn't matter because his theory hasn't been developed to predict properties of "all dwarf galaxies". But a key point of Pardo's paper is that it doesn't matter. One may predict certain things statistically and the predicted speeds are generally too low. The mean value of the distribution is low and so are the extremes. One doesn't need to predict and test every single individual dwarf galaxy. Verlinde just wants the imbeciles to think that a test hasn't really been done yet – except that it has. And he suggests that some fix or loophole exists – except that it doesn't.

But what I hate most about this piece of crackpot work and hundreds of others is this Smolinesque sentence:
I think more work needs to be done on both the observational and the theory side.
Promises and begging for others to support this kind of junk in the future, perhaps even more so than so far.

Should more work be done on both sides? No, on the contrary, less work or no work should be done on this "theory" because it was killed; it is, on the contrary, the promising ideas (those that have predicted something to agree with something else we know) that deserve more work and elaboration in the future. Further research will surely kill it even more, but Mr Verlinde will care even less.

If Mr Verlinde were pursuing the scientific method, he would understand that his theory doesn't work, he would abandon it, stop working on it, stop trying to make others work on it, and, last but not least, he would stop receiving funding that is justified by this garbage. He should be the first man who points out that the value of this garbage is zero. But sadly enough, he doesn't have the integrity for that and the people around him don't have the intelligence to behave in the way that would actually be compatible with the rules of the scientific method.

And it's not just Smolin and Verlinde. The field has gotten crowded by dozens of sociologically high-profile fraudsters who pompously keep on working on various kinds of crackpottery that have been known to be absolutely wrong, worthless piece of junk for many years and sometimes decades. Entropic gravity, loop quantum gravity, spin foam, causal dynamical triangulation and dozens of other cousins like that, Bohmian theory, many world theory and a dozen of other "interpretations", various nonsensical claims about reversibility of the macroscopic phenomena, Boltzmann brains, and the list would go on and on and on.

Lots of these people are keeping themselves influential by some connections with the media or other politically powerful interests. Imagine what it does to the students who are deciding about their research specialization. Many if not most of them are exposed to a department where a high-profile fraudster like that overshadows all the meaningful and credible research. Many of these students join this bogus research and they quickly learn what really matters in their kind of "science". And they are very different skills from those that their honest classmates need. In particular, they learn to do the P.R. and they learn to say "how much they care about testing" and also they learn to talk about "future work" whenever they are proved wrong in a test, which happens after every single test, and so on. There is an intense struggle going on between genuine science and bogus science and genuine science is losing the battle.

by Luboš Motl (noreply@blogger.com) at June 22, 2017 06:15 AM

June 20, 2017

Georg von Hippel - Life on the lattice

Lattice 2017, Day Two
Welcome back to our blog coverage of the Lattice 2017 conference in Granada.

Today's first plenary session started with an experimental talk by Arantza Oyanguren of the LHCb collaboration on B decay anomalies at LHCb. LHCb have amassed a huge number of b-bbar pairs, which allow them to search for and study in some detail even the rarest of decay modes, and they are of course still collecting more integrated luminosity. Readers of this blog will likely recall the Bs → μ+μ- branching ratio result from LHCb, which agreed with the Standard Model prediction. In the meantime, there are many similar results for branching ratios that do not agree with Standard Model predictions at the 2-3σ level, e.g. the ratios of branching fractions like Br(B+→K+μ+μ-)/Br(B+→K+e+e-), in which lepton flavour universality appears to be violated. Global fits to data in these channels appear to favour the new physics hypothesis, but one should be cautious because of the "look-elsewhere" effect: when studying a very large number of channels, some will show an apparently significant deviation simply by statistical chance. On the other hand, it is very interesting that all the evidence indicating potential new physics (including the anomalous magnetic moment of the muon and the discrepancy between the muonic and electronic determinations of the proton electric charge radius) involve differences between processes involving muons and analogous processes involving electrons, an observation I'm sure model-builders have made a long time ago.

This was followed by a talk on flavour physics anomalies by Damir Bečirević. Expanding on the theoretical interpretation of the anomalies discussed in the previous talk, he explained how the data seem to indicate a violation of lepton flavour universality at the level where the Wilson coefficient C9 in the effective Hamiltonian is around zero for electrons, and around -1 for muons. Experimental data seem to favour the situation where C10=-C9, which can be accommodated in certain models with a Z' boson coupling preferentially to muons, or in certain special leptoquark models with corrections at the loop level only. Since I have little (or rather no) expertise in phenomenological model-building, I have no idea how likely these explanations are.
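
For orientation (normalization conventions vary from paper to paper, so this is only schematic), the operators multiplied by these Wilson coefficients are
\[ \mathcal{H}_{\rm eff} \supset -\frac{4 G_F}{\sqrt{2}} V_{tb} V_{ts}^{*}\,\frac{\alpha}{4\pi}\left[ C_9\,(\bar s \gamma_\mu P_L b)(\bar\ell \gamma^\mu \ell) + C_{10}\,(\bar s \gamma_\mu P_L b)(\bar\ell \gamma^\mu \gamma_5 \ell)\right], \]
so a lepton-flavour-universality-violating fit of the kind described corresponds to the muonic C9 differing from the electronic one.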

The next speaker was Xu Feng, who presented recent progress in kaon physics simulations on the lattice. The "standard" kaon quantities, such as the kaon decay constant or f+(0), are by now very well-determined from the lattice, with overall errors at the sub-percent level, but beyond that there are many important quantities, such as the CP-violating amplitudes in K → ππ decays, that are still poorly known and very challenging. RBC/UKQCD have been leading the attack on many of these observables, and have presented a possible solution to the ΔI=1/2 rule, which consists in non-perturbative effects making the amplitude A0 much larger relative to A2 than what would be expected from naive colour counting. Making further progress on long-distance contributions to the KL-KS mass difference or εK will require working at the physical pion mass and treating the charm quark with good control of discretization effects. For some processes, such as KL→π0ℓ+ℓ-, even determining the sign of the relevant coefficient would be desirable.

After the coffee break, Luigi Del Debbio talked about parton distributions in the LHC era. The LHC data reduce the error on the NNLO PDFs by around a factor of two in the intermediate-x region. Conversely, the theory errors coming from the PDFs are a significant part of the total error from the LHC on Higgs physics and BSM searches. In particular the small-x and large-x regions remain quite uncertain. On the lattice, PDFs can be determined via quasi-PDFs, in which the Wilson line inside the non-local bilinear is along a spatial direction rather than in a light-like direction. However, there are still theoretical issues to be settled in order to ensure that the renormalization and matching to the continuum really lead to the determination of continuum PDFs in the end.
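
Schematically (conventions differ between papers), the quasi-PDF is defined from an equal-time, spatially separated quark bilinear in a nucleon moving with large momentum Pz,
\[ \tilde q(x, P_z) = \int \frac{dz}{4\pi}\; e^{\,i x P_z z}\, \langle P |\, \bar\psi(z)\, \gamma^{z}\, W(z,0)\, \psi(0)\, | P \rangle\,, \]
where W(z,0) is the spatial Wilson line; the renormalization and matching issues mentioned above concern precisely how this object is connected to the light-cone PDF in the large-Pz limit.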

Next was a talk about chiral perturbation theory results on the multi-hadron state contamination of nucleon observables by Oliver Bär. It is well known that until very recently, lattice calculations of the nucleon axial charge underestimated its value relative to experiment, and this has been widely attributed to excited-state effects. Now, Oliver has calculated the corrections from nucleon-pion states on the extraction of the axial charge in chiral perturbation theory, and has found that they actually should lead to an overestimation of the axial charge from the plateau method, at least for source-sink separations above 2 fm, where ChPT is applicable. Similarly, other nucleon charges should be overestimated by 5-10%. Of course, nobody is currently measuring in that distance regime, and so it is quite possible that higher-order corrections or effects not captured by ChPT overcompensate this and lead to an underestimation, which would however mean that there is some intermediate source-sink separation for which one gets the experimental result by accident, as it were.

The final plenary speaker of the morning was Chia-Cheng Chang, who discussed progress towards a precise lattice determination of the nucleon axial charge, presenting the results of the CalLAT collaboration from using what they refer to as the Feynman-Hellmann method, a novel way of implementing what is essentially the summation method through ideas based on the Feynman-Hellmann theorem (but which doesn't involve simulating with a modified action, as a straightforward application of the Feynman-Hellmann theorem would demand).
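
For context, the Feynman-Hellmann theorem states that the linear response of an energy level to a source coupled to the Hamiltonian is given by the matrix element of the corresponding operator,
\[ \left.\frac{\partial E_\lambda}{\partial \lambda}\right|_{\lambda=0} = \langle \psi |\, \frac{\partial H_\lambda}{\partial \lambda}\, | \psi \rangle\,, \]
so that, schematically, coupling a source to the axial current and reading off the slope of the nucleon energy with respect to the source strength gives access to gA without an explicit three-point function; as noted above, the CalLAT implementation obtains the equivalent derivative directly from correlation functions without actually modifying the action.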

After the lunch break, there were parallel sessions, and in the evening, the poster session took place. A particularly interesting and entertaining contribution was a quiz about women's contributions to physics and computer science, the winner of which will receive a bottle of wine and a book.

by Georg v. Hippel (noreply@blogger.com) at June 20, 2017 08:27 PM

Georg von Hippel - Life on the lattice

Lattice 2017, Day One
Hello from Granada and welcome to our coverage of the 2017 lattice conference.

After welcome addresses by the conference chair, a representative of the government agency in charge of fundamental research, and the rector of the university, the conference started off in a somewhat sombre mood with a commemoration of Roberto Petronzio, a pioneer of lattice QCD, who passed away last year. Giorgio Parisi gave a memorial talk summarizing Roberto's many contributions to the development of the field, from his early work on perturbative QCD and the parton model, through his pioneering contributions to lattice QCD back in the days of small quenched lattices, to his recent work on partially twisted boundary conditions and on isospin breaking effects, which is very much at the forefront of the field at the moment, not to omit Roberto's role as director of the Italian INFN in politically turbulent times.

This was followed by a talk by Martin Lüscher on stochastic locality and master-field simulations of very large lattices. The idea of a master-field simulation is based on the observation of volume self-averaging, i.e. that the variance of volume-averaged quantities is much smaller on large lattices (intuitively, this would be because an infinitely-extended properly thermalized lattice configuration would have to contain any possible finite sub-configuration with a frequency corresponding to its weight in the path integral, and that thus a large enough typical lattice configuration is itself a sort of ensemble). A master field is then a huge (e.g. 256^4) lattice configuration, on which volume averages of quantities are computed, which have an expectation value equal to the QCD expectation value of the quantity in question, and a variance which can be estimated using a double volume sum that is doable using an FFT. To generate such huge lattices, algorithms with global accept-reject steps (like HMC) are unsuitable, because ΔH grows with the square root of the volume, but stochastic molecular dynamics (SMD) can be used, and it has been rigorously shown that for short-enough trajectory lengths SMD converges to a unique stationary state even without an accept-reject step.
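To make the "double volume sum doable using an FFT" a little more concrete, here is a minimal, purely illustrative numpy sketch in one dimension (all names and the toy data are made up, and this is not Lüscher's actual estimator): the translation-averaged connected correlator of an observable is computed from a single "master field" with an FFT, and the variance of the volume average is approximated by summing that correlator over a window |z| <= R, assuming the correlations have decayed beyond R.

import numpy as np

def master_field_variance(field, R):
    """Estimate the variance of the volume average of a 1d observable
    by summing its translation-averaged connected correlator over a
    window |z| <= R (assumes correlations decay within R)."""
    V = field.size
    delta = field - field.mean()
    # Translation-averaged connected correlator via FFT (periodic b.c.):
    # C(z) = (1/V) sum_x delta(x) delta(x+z)
    ft = np.fft.fft(delta)
    corr = np.fft.ifft(ft * np.conj(ft)).real / V
    # Variance of the volume average: (1/V) * sum_{|z|<=R} C(z)
    window = np.concatenate([corr[:R + 1], corr[-R:]])
    return window.sum() / V

# Toy "master field": correlated Gaussian noise on a large 1d lattice.
rng = np.random.default_rng(1)
noise = rng.normal(size=2**20)
field = np.convolve(noise, np.exp(-np.arange(50) / 5.0), mode="same")

vol_avg = field.mean()
err = np.sqrt(master_field_variance(field, R=200))
print(f"volume average = {vol_avg:.4f} +/- {err:.4f}")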

After the coffee break, yet another novel simulation method was discussed by Ignacio Cirac, who presented techniques to perform quantum simulations of QED and QCD on a lattice. While quantum computers of the kind that would render RSA-based public-key cryptography irrelevant remain elusive at the moment, the idea of a quantum simulator (which is essentially an analogue quantum computer), which goes back to Richard Feynman, can already be realized in practice: optical lattices allow trapping atoms on lattice sites while fine-tuning their interactions so as to model the couplings of some other physical system, which can thus be simulated. The models that are typically simulated in this way are solid-state models such as the Hubbard model, but it is of course also possible to set up a quantum simulator for a lattice field theory that has been formulated in the Hamiltonian framework. In order to model a gauge theory, it is necessary to model the gauge symmetry by some atomic symmetry such as angular momentum conservation, and this has been done at least in theory for QED and QCD. The Schwinger model has been studied in some detail. The plaquette action for d>1+1 additionally requires a four-point interaction between the atoms modelling the link variables, which can be realized using additional auxiliary variables, and non-abelian gauge groups can be encoded using multiple species of bosonic atoms. A related theoretical tool that is still in its infancy, but shows significant promise, is the use of tensor networks. This is based on the observation that for local Hamiltonians the entanglement between a region and its complement grows only as the surface of the region, not its volume, so only a small corner of the total Hilbert space is relevant; this allows one to write the coefficients of the wavefunction in a basis of local states as a contraction of tensors, from which classical algorithms can be derived that scale much better than the naively expected exponential growth in the number of variables. Again, the method has been successfully applied to the Schwinger model, but higher dimensions are still challenging, because the scaling, while not exponential, still becomes very bad.
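As a toy illustration of the tensor-network idea (nothing to do with the actual Schwinger-model calculations mentioned above), here is a minimal numpy sketch that writes the N-qubit GHZ state as a matrix product state with bond dimension 2 and checks the resulting coefficients against the full 2^N-component state vector; for states obeying an area law this kind of local-tensor representation is exponentially more economical than the full vector.

import itertools
import numpy as np

N = 6  # number of sites

# Bond-dimension-2 MPS tensors for the GHZ state (|00...0> + |11...1>)/sqrt(2):
# boundary tensors are vectors, bulk tensors are 2x2 matrices, indexed by the
# physical spin s in {0, 1}.
left = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
bulk = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}
right = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def mps_coefficient(spins):
    """Coefficient <spins|GHZ> obtained by contracting the local tensors."""
    vec = left[spins[0]]
    for s in spins[1:-1]:
        vec = vec @ bulk[s]
    return vec @ right[spins[-1]] / np.sqrt(2.0)

# Compare with the explicit state vector.
ghz = np.zeros(2**N)
ghz[0] = ghz[-1] = 1.0 / np.sqrt(2.0)

coeffs = np.array([mps_coefficient(s) for s in itertools.product([0, 1], repeat=N)])
print("maximum deviation from the exact state vector:", np.abs(coeffs - ghz).max())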

Staying with the topic of advanced simulation techniques, the next talk was Leonardo Giusti speaking about the block factorization of fermion determinants into local actions for multi-boson fields. By decomposing the lattice into three pieces, of which the middle one separates the other two by a distance Δ large enough to render e^(-MπΔ) small, and by applying a domain decomposition similar to the one used in Lüscher's DD-HMC algorithm to the Dirac operator, Leonardo and collaborators have been able to derive a multi-boson algorithm that allows one to perform multilevel integration with dynamical fermions. For hadronic observables, the quark propagator also needs to be factorized, which Leonardo et al. have also achieved, making a significant decrease in statistical error possible.

After the lunch break there were parallel sessions, in one of which I gave my own talk and another one of which I chaired, thus finishing all of my duties other than listening (and blogging) on day one.

In the evening, there was a reception followed by a special guided tour of the truly stunning Alhambra (which incidentally contains a great many colourful - and very tasteful - lattices in the form of ornamental patterns).

by Georg v. Hippel (noreply@blogger.com) at June 20, 2017 08:26 PM

John Baez - Azimuth

The Theory of Devices

I’m visiting the University of Genoa and talking to two category theorists: Marco Grandis and Giuseppe Rosolini. Grandis works on algebraic topology and higher categories, while Rosolini works on the categorical semantics of programming languages.

Yesterday, Marco Grandis showed me a fascinating paper by his thesis advisor:

• Gabriele Darbo, Aspetti algebrico-categoriali della teoria dei dispositivi, Symposia Mathematica IV (1970), Istituto Nazionale di Alta Matematica, 303–336.

It’s closely connected to Brendan Fong’s thesis, but also different—and, of course, much older. According to Grandis, Darbo was the first person to work on category theory in Italy. He’s better known for other things, like ‘Darbo’s fixed point theorem’, but this piece of work is elegant, and, it seems to me, strangely ahead of its time.

The paper’s title translates as ‘Algebraic-categorical aspects of the theory of devices’, and its main concept is that of a ‘universe of devices’: a collection of devices of some kind that can be hooked up using wires to form more devices of this kind. Nowadays we might study this concept using operads—but operads didn’t exist in 1970, and Darbo did quite fine without them.

The key is the category \mathrm{FinCorel}, which has finite sets as objects and ‘corelations’ as morphisms. I explained corelations here:

• Corelations in network theory, 2 February 2016.

Briefly, a corelation from a finite set X to a finite set Y is a partition of the disjoint union of X and Y. We can get such a partition from a bunch of wires connecting points of X and Y. The idea is that two points lie in the same part of the partition iff they’re connected, directly or indirectly, by a path of wires. So, if we have some wires like this:

they determine a corelation like this:

There’s an obvious way to compose corelations, giving a category \mathrm{FinCorel}.
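For readers who like to see this concretely, here is a small Python sketch (my own illustration, not anything from Darbo or Fong, and all names in it are made up): a corelation from X to Y is stored as a partition of the disjoint union of X and Y, and two corelations are composed with a union-find, so that elements of X and Z end up in the same part of the composite exactly when they are connected by a path of wires running through Y.

class UnionFind:
    def __init__(self, elements):
        self.parent = {e: e for e in elements}

    def find(self, e):
        while self.parent[e] != e:
            self.parent[e] = self.parent[self.parent[e]]  # path halving
            e = self.parent[e]
        return e

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def compose(f, g, X, Y, Z):
    """Compose corelations f: X -> Y and g: Y -> Z.

    A corelation is a list of parts, each part a set of tagged elements
    ('X', x), ('Y', y) or ('Z', z).  The composite is the partition of
    X + Z induced by connectivity through Y."""
    uf = UnionFind([('X', x) for x in X] + [('Y', y) for y in Y]
                   + [('Z', z) for z in Z])
    for part in f + g:
        part = list(part)
        for e in part[1:]:
            uf.union(part[0], e)
    classes = {}
    for e in [('X', x) for x in X] + [('Z', z) for z in Z]:
        classes.setdefault(uf.find(e), set()).add(e)
    return list(classes.values())

# Example: X = {1, 2}, Y = {1, 2}, Z = {1}.
X, Y, Z = [1, 2], [1, 2], [1]
f = [{('X', 1), ('Y', 1)}, {('X', 2), ('Y', 2)}]   # wires 1-1 and 2-2
g = [{('Y', 1), ('Y', 2), ('Z', 1)}]               # both Y's wired to Z
print(compose(f, g, X, Y, Z))   # everything ends up in a single part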

Gabriele Darbo doesn’t call them ‘corelations’: he calls them ‘trasduttori’. A literal translation might be ‘transducers’. But he’s definitely talking about corelations, and like Fong he thinks they are basic for studying ways to connect systems.

Darbo wants a ‘universe of devices’ to assign to each finite set X a set D(X) of devices having X as their set of ‘terminals’. Given a device in D(X) and a corelation f \colon X \to Y, thought of as a bunch of wires, he wants to be able to attach these wires to the terminals in X and get a new device with Y as its set of terminals. Thus he wants a map D(f): D(X) \to D(Y). If you draw some pictures, you’ll see this should give a functor

D : \mathrm{FinCorel} \to \mathrm{Set}

Moreover, if we have a device with a set X of terminals and a device with a set Y of terminals, we should be able to set them side by side and get a device whose set of terminals forms the set X + Y, meaning the disjoint union of X and Y. So, Darbo wants to have maps

\delta_{X,Y} : D(X) \times D(Y) \to D(X + Y)

If you draw some more pictures you can convince yourself that \delta should be a lax symmetric monoidal functor… if you’re one of the lucky few who knows what that means. If you’re not, you can look it up in many places, such as Section 1.2 here:

• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, University of Oxford, 2016. (Blog article here.)

Darbo does not mention lax symmetric monoidal functors, perhaps because such concepts were first introduced by Mac Lane only in 1968. But as far as I can tell, Darbo’s definition is almost equivalent to this:

Definition. A universe of devices is a lax symmetric monoidal functor D \colon \mathrm{FinCorel} \to \mathrm{Set}.

One difference is that Darbo wants there to be exactly one device with no terminals. Thus, he assumes D(\emptyset) is a one-element set, say 1, while the definition above would only demand the existence of a map \delta \colon 1 \to D(\emptyset) obeying a couple of axioms. That gives a particular ‘favorite’ device with no terminals. I believe we get Darbo’s definition from the above one if we further assume \delta is the identity map. This makes sense if we take the attitude that ‘a device is determined by its observable behavior’, but not otherwise. This attitude is called ‘black-boxing’.

Darbo does various things in his paper, but the most exciting to me is his example connected to linear electrical circuits. He defines, for any pair of objects V and I in an abelian category C, a particular universe of devices. He calls this the universe of linear devices having V as the object of potentials and I as the object of currents.

If you don’t like abelian categories, think of C as the category of finite-dimensional real vector spaces, and let V = I = \mathbb{R}. Electric potential and electric current are described by real numbers so this makes sense.

The basic idea will be familiar to Fong fans. In an electrical circuit made of purely conductive wires, when two wires merge into one we add the currents to get the current on the wire going out. When one wire splits into two we duplicate the potential to get the potentials on the wires going out. Working this out further, any corelation f : X \to Y between finite sets determines two linear relations, one

f_* : I^X \rightharpoonup I^Y

relating the currents on the wires coming in to the currents on the wires going out, and one

f^* : V^Y \rightharpoonup V^X

relating the potentials on the wires going out to the potentials on the wires coming in. Here I^X is the direct sum of X copies of I, and so on; the funky arrow indicates that we have a linear relation rather than a linear map. Note that f_* goes forward while f^* goes backward; this is mainly just conventional, since you can turn linear relations around, but we’ll see it’s sort of nice.

If we let \mathrm{Rel}(A,B) be the set of linear relations between two objects A, B \in C, we can use the above technology to get a universe of devices where

D(X) = \mathrm{Rel}(V^X, I^X)

In other words, a device of this kind is simply a linear relation between the potentials and currents at its terminals!

How does D get to be a functor D : \mathrm{FinCorel} \to \mathrm{Set}? That’s pretty easy. We’ve defined it on objects (that is, finite sets) by the above formula. So, suppose we have a morphism (that is, a corelation) f \colon X \to Y. How do we define D(f) : D(X) \to D(Y)?

To answer this question, we need a function

D(f) : \mathrm{Rel}(V^X, I^X) \to \mathrm{Rel}(V^Y, I^Y)

Luckily, we’ve got linear relations

f_* : I^X \rightharpoonup I^Y

and

f^* : V^Y \rightharpoonup V^X

So, given any linear relation R \in \mathrm{Rel}(V^X, I^X), we just define

D(f)(R) = f_* \circ R \circ f^*

Voilà!
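To see the ingredient of that formula, namely composition of linear relations, at work numerically, here is a small sketch of my own (with V = I = \mathbb{R} and made-up example relations; the representation by generator matrices and the use of scipy's null_space are choices for illustration only): a linear relation A ⇀ B is stored as a matrix whose columns span a subspace of A ⊕ B, and two relations are composed by matching the middle components with a null-space calculation.

import numpy as np
from scipy.linalg import null_space

def compose(S, R, dims):
    """Compose linear relations R: A -> B and S: B -> C, giving S o R: A -> C.

    R is a matrix whose columns span a subspace of A (+) B (first dim_A rows
    are the A-components, the remaining dim_B rows the B-components), and
    similarly for S with B and C.  dims = (dim_A, dim_B, dim_C)."""
    dA, dB, dC = dims
    A_part, B_of_R = R[:dA, :], R[dA:, :]
    B_of_S, C_part = S[:dB, :], S[dB:, :]
    # Find all coefficient vectors (x, y) with  B_of_R x = B_of_S y.
    K = null_space(np.hstack([B_of_R, -B_of_S]))
    x, y = K[:R.shape[1], :], K[R.shape[1]:, :]
    return np.vstack([A_part @ x, C_part @ y])

# Example with A = B = C = R^1: R is "b = 2a", S is "c = 3b".
R = np.array([[1.0], [2.0]])   # spans {(a, 2a)}
S = np.array([[1.0], [3.0]])   # spans {(b, 3b)}
print(compose(S, R, (1, 1, 1)))   # spans {(a, 6a)}, i.e. c = 6a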

People who have read Fong’s thesis, or my paper with Blake Pollard on reaction networks:

• John Baez and Blake Pollard, A compositional framework for reaction networks.

will find many of Darbo’s ideas eerily similar. In particular, the formula

D(f)(R) = f_* \circ R \circ f^*

appears in Lemma 16 of my paper with Blake, where we are defining a category of open dynamical systems. We prove that D is a lax symmetric monoidal functor, which is just what Darbo proved—though in a different context, since our R is not linear like his, and for a different purpose, since he’s trying to show D is a ‘universe of devices’, while we’re trying to construct the category of open dynamical systems as a ‘decorated cospan category’.

In short: if this work of Darbo had become more widely known, the development of network theory could have been sped up by three decades! But there was less interest in a general theory of networks at the time, lax monoidal functors were brand new, operads unknown… and, sadly, few mathematicians read Italian.

Darbo has other papers, and so do his students. We should read them and learn from them! Here are a few open-access ones:

• Franco Parodi, Costruzione di un universo di dispositivi non lineari su una coppia di gruppi abeliani, Rendiconti del Seminario Matematico della Università di Padova 58 (1977), 45–54.

• Franco Parodi, Categoria degli universi di dispositivi e categoria delle T-algebre, Rendiconti del Seminario Matematico della Università di Padova 62 (1980), 1–15.

• Stefano Testa, Su un universo di dispositivi monotoni, Rendiconti del Seminario Matematico della Università di Padova 65 (1981), 53–57.

At some point I will scan in G. Darbo’s paper and make it available here.


by John Baez at June 20, 2017 02:45 PM

Symmetrybreaking - Fermilab/SLAC

A speed trap for dark matter, revisited

A NASA rocket experiment could use the Doppler effect to look for signs of dark matter in mysterious X-ray emissions from space.

Image of stars and reddish, glowing clouds of dust at the center of the Milky Way Galaxy

Researchers who hoped to look for signs of dark matter particles in data from the Japanese ASTRO-H/Hitomi satellite suffered a setback last year when the satellite malfunctioned and died just a month after launch.

Now the idea may get a second chance.

In a new paper, published in Physical Review D, scientists from the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory, suggest that their novel search method could work just as well with the future NASA-funded Micro-X rocket experiment—an X-ray space telescope attached to a research rocket.

The search method looks for a difference in the Doppler shifts produced by movements of dark matter and regular matter, says Devon Powell, a graduate student at KIPAC and lead author on the paper with co-authors Ranjan Laha, Kenny Ng and Tom Abel.

The Doppler effect is a shift in the frequency of sound or light as its source moves toward or away from an observer. The rising and falling pitch of a passing train whistle is a familiar example, and the radar guns that cops use to catch speeders also work on this principle.

The dark matter search technique, called Dark Matter Velocity Spectroscopy, is like setting up a speed trap to “catch” dark matter.

“We think that dark matter has zero averaged velocity, while our solar system is moving,” says Laha, who is a postdoc at KIPAC.  “Due to this relative motion, the dark matter signal would experience a Doppler shift. However, it would be completely different than the Doppler shifts from signals coming from astrophysical objects because those objects typically co-rotate around the center of the galaxy with the sun, and dark matter doesn’t. This means we should be able to distinguish the Doppler signatures from dark and regular matter.”

Researchers would look for subtle frequency shifts in measurements of a mysterious X-ray emission. This 3500-electronvolt (3.5 keV) emission line, observed in data from the European XMM-Newton spacecraft and NASA’s Chandra X-ray Observatory, is hard to explain with known astrophysical processes. Some say it could be a sign of hypothetical dark matter particles called sterile neutrinos decaying in space.

“The challenge is to find out whether the X-ray line is due to dark matter or other astrophysical sources,” Powell says. “We’re looking for ways to tell the difference.”

The idea for this approach is not new: Laha and others described the method in a research paper last year, in which they suggested using X-ray data from Hitomi to do the Doppler shift comparison. Although the spacecraft sent some data home before it disintegrated, it did not see any sign of the 3.5-keV signal, casting doubt on the interpretation that it might be produced by the decay of dark matter particles. The Dark Matter Velocity Spectroscopy method was never applied, and the issue was never settled.  

In the future Micro-X experiment, a rocket will catapult a small telescope above Earth’s atmosphere for about five minutes to collect X-ray signals from a specific direction in the sky. The experiment will then parachute back to the ground to be recovered. The researchers hope that Micro-X will do several flights to set up a speed trap for dark matter.

Illustration of a research rocket catapulting an experiment above Earth’s atmosphere
Jeremy Stoller, NASA

“We expect the energy shifts of dark matter signals to be very small because our solar system moves relatively slowly,” Laha says. “That’s why we need cutting-edge instruments with superb energy resolution. Our study shows that Dark Matter Velocity Spectroscopy could be successfully done with Micro-X, and we propose six different pointing directions away from the center of the Milky Way.”
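To get a rough feel for how small these shifts are (the numbers below are generic textbook values, not figures from the paper): the Sun orbits the galactic centre at roughly 220 km/s, so to first order a 3.5 keV line is shifted by a fraction v/c of order 10^-3.

# Rough size of the Doppler shift of a 3.5 keV line due to the Sun's motion
# (illustrative numbers only; the actual expected shifts depend on the
# pointing direction and on the dark matter velocity distribution).
E_line = 3500.0        # eV, the candidate X-ray line
v_sun = 220.0e3        # m/s, approximate solar orbital speed about the galactic centre
c = 299_792_458.0      # m/s

shift = E_line * v_sun / c
print(f"first-order Doppler shift: ~{shift:.1f} eV")   # of order a few eV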

Esra Bulbul from the MIT Kavli Institute for Astrophysics and Space Research, who wasn’t involved in the study, says, “In the absence of Hitomi observations, the technique outlined for Micro-X provides a promising alternative for testing the dark matter origin of the 3.5-keV line.” But Bulbul, who was the lead author of the paper that first reported the mystery X-ray signal in superimposed data of 73 galaxy clusters, also points out that the Micro-X analysis would be limited to our own galaxy.

The feasibility study for Micro-X is more detailed than the prior analysis for Hitomi. “The earlier paper used certain approximations—for instance, that the dark matter halos of galaxies are spherical, which we know isn’t true,” Powell says. “This time we ran computer simulations without this approximation and predicted very precisely what Micro-X would actually see.”

The authors say their method is not restricted to the 3.5-keV line and can be applied to any sharp signal potentially associated with dark matter. They hope that Micro-X will do the first practice test. Their wish might soon come true.

“We really like the idea presented in the paper,” says Enectali Figueroa-Feliciano, the principal investigator for Micro-X at Northwestern University, who was not involved in the study. “We would look at the center of the Milky Way first, where dark matter is most concentrated. If we saw an unidentified line and it were strong enough, looking for Doppler shifts away from the center would be the next step.”  

by Manuel Gnida at June 20, 2017 01:00 PM

Clifford V. Johnson - Asymptotia


Random Machinery

Making up random (ish) bits of machinery can be lots of fun!

(This is for a short story I was asked to write, to appear next year.)

-cvj

The post Random Machinery appeared first on Asymptotia.

by Clifford at June 20, 2017 04:45 AM

June 18, 2017

Sean Carroll - Preposterous Universe

A Response to “On the time lags of the LIGO signals” (Guest Post)

This is a special guest post by Ian Harry, postdoctoral physicist at the Max Planck Institute for Gravitational Physics, Potsdam-Golm. You may have seen stories about a paper that recently appeared, which called into question whether the LIGO gravitational-wave observatory had actually detected signals from inspiralling black holes, as they had claimed. Ian’s post is an informal response to these claims, on behalf of the LIGO Scientific Collaboration. He argues that there are data-analysis issues that render the new paper, by James Creswell et al., incorrect. Happily, there are sufficient online tools that this is a question that interested parties can investigate for themselves. Here’s Ian:


On 13 Jun 2017 a paper appeared on the arXiv titled “On the time lags of the LIGO signals” by Creswell et al. This paper calls into question the 5-sigma detection claim of GW150914 and following detections. In this short response I will refute these claims.

Who am I? I am a member of the LIGO collaboration. I work on the analysis of LIGO data, and for 10 years have been developing searches for compact binary mergers. The conclusions I draw here have been checked by a number of colleagues within the LIGO and Virgo collaborations. We are also in touch with the authors of the article to raise these concerns directly, and plan to write a more formal short paper for submission to the arXiv explaining in more detail the issues I mention below. In the interest of timeliness, and in response to numerous requests from outside of the collaboration, I am sharing these notes in the hope that they will clarify the situation.

In this article I will go into some detail to try to refute the claims of Creswell et al. Let me start though by trying to give a brief overview. In Creswell et al. the authors take LIGO data made available through the LIGO Open Science Center for the Hanford and Livingston observatories and perform a simple Fourier analysis on that data. They find the noise to be correlated as a function of frequency. They also perform a time-domain analysis and claim that there are correlations between the noise in the two observatories, which are present even after removing the GW150914 signal from the data. These results are used to cast doubt on the reliability of the GW150914 observation. There are a number of reasons why this conclusion is incorrect:

1. The frequency-domain correlations they are seeing arise from the way they do their FFT on the filtered data. We have managed to demonstrate the same effect with simulated Gaussian noise.

2. LIGO analyses use whitened data when searching for compact binary mergers such as GW150914. When repeating the analysis of Creswell et al. on whitened data these effects are completely absent.

3. Our 5-sigma significance comes from a procedure of repeatedly time-shifting the data, which is not invalidated if correlations of the type described in Creswell et al. are present.

Section II: The curious case of the Fourier phase correlations?

The main result (in my opinion) from section II of Creswell et al. is Figure 3, which shows that, when one takes the Fourier transform of the LIGO data containing GW150914, and plots the Fourier phases as a function of frequency, one can see a clear correlation (i.e. all the points line up, especially for the Hanford data). I was able to reproduce this with the LIGO Open Science Center data and a small ipython notebook. I make the ipython notebook available so that the reader can see this and some additional plots, and reproduce the analysis.

For Gaussian noise we would expect the Fourier phases to be distributed randomly (between -pi and pi). Clearly in the plot shown above, and in Creswell et al., this is not the case. However, the authors overlooked one critical detail here. When you take a Fourier transform of a time series you are implicitly assuming that the data are cyclical (i.e. that the first point is adjacent to the last point). For colored Gaussian noise this assumption will lead to a discontinuity in the data at the two end points, because these data are not causally connected. This discontinuity can be responsible for misleading plots like the one above.

To try to demonstrate this I perform two tests. First I whiten the colored LIGO noise by measuring the power spectral density (see the LOSC example, which I use directly in my ipython notebook, for some background on colored noise and noise power spectral density), then dividing the data in the Fourier domain by the power spectral density, and finally converting back to the time domain. This process will corrupt some data at the edges so after whitening we only consider the middle half of the data. Then we can make the same plot:

And we can see that there are now no correlations visible in the data. For white Gaussian noise there is no correlation between adjacent points, so no discontinuity is introduced when treating the data as cyclical. I therefore assert that Figure 3 of Creswell et al. actually has no meaning when generated using anything other than whitened data.

I would also like to mention that measuring the noise power spectral density of LIGO data can be challenging when the data are non-stationary and include spectral lines (as Creswell et al. point out). Therefore it can be difficult to whiten data in many circumstances. For the Livingston data some of the spectral lines are still clearly present after whitening (using the methods described in the LOSC example), and then mild correlations are present in the resulting plot (see ipython notebook). This is not indicative of any type of non-Gaussianity, but demonstrates that measuring the noise power-spectral density of LIGO noise is difficult, and, especially for parameter inference, a lot of work has been spent on answering this question.

To further illustrate that features like those seen in Figure 3 of Creswell et al. can be seen in known Gaussian noise I perform an additional check (suggested by my colleague Vivien Raymond). I generate a 128 second stretch of white Gaussian noise (using numpy.random.normal) and invert the whitening procedure employed on the LIGO data above to produce 128 seconds of colored Gaussian noise. Now the data, previously random, are ‘colored’. Coloring the data in the manner I did makes the full data set cyclical (the last point is correlated with the first), so taking the Fourier transform of the complete data set, I see the expected random distribution of phases (again, see the ipython notebook). However, if I select 32s from the middle of this data, introducing a discontinuity as I mention above, I can produce the following plot:

In other words, I can produce an even more extremely correlated example than on the real data, with actual Gaussian noise.
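The essence of this check can be condensed into a few lines of numpy. The following is a self-contained toy version (not the actual notebook, and the spectral shape and the summary statistic are inventions of mine): generate white Gaussian noise, colour it, cut a segment out of the middle so that its end points are no longer causally connected, and compare how strongly the Fourier phases cluster in the full (cyclical) data versus the cut segment.

import numpy as np

rng = np.random.default_rng(0)
n = 4096
white = rng.normal(size=n)

# Colour the noise by shaping its Fourier amplitudes (a steeply falling toy
# spectrum, loosely mimicking the way LIGO data are dominated by low frequencies).
freqs = np.fft.rfftfreq(n, d=1.0)
shape = np.zeros_like(freqs)
shape[1:] = 1.0 / freqs[1:] ** 2      # drop the DC component
colored = np.fft.irfft(np.fft.rfft(white) * shape, n)

def phase_alignment(data):
    """Circular mean length of the Fourier phases: near 0 for random phases,
    noticeably larger when the phases tend to cluster."""
    phases = np.angle(np.fft.rfft(data)[1:])
    return np.abs(np.exp(1j * phases).mean())

print("full (cyclical) coloured data:", phase_alignment(colored))
print("segment cut from the middle:  ", phase_alignment(colored[n // 4: 3 * n // 4]))
# The first number should be close to zero, the second one much larger,
# because the cut introduces a discontinuity between the segment's end points.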

Section III: The data is strongly correlated even after removing the signal

The second half of Creswell et al. explores correlations between the data taken from Hanford and Livingston around GW150914. For me, the main conclusion here is communicated in Figure 7, where Creswell et al. claim that even after removal of the GW150914 best-fit waveform there is still correlation in the data between the two observatories. This is a result I have not been able to reproduce. Nevertheless, if such a correlation were present it would suggest that we have not perfectly subtracted the real signal from the data, which would not invalidate any detection claim. There could be any number of reasons for this, for example the fact that our best-fit waveform will not exactly match what is in the data as we cannot measure all parameters with infinite precision. There might also be some deviations because the waveform models we used, while very good, are only approximations to the real signal (LIGO put out a paper quantifying this possibility). Such deviations might also be indicative of a subtle deviation from general relativity. These are of course things that LIGO is very interested in pursuing, and we have published a paper exploring potential deviations from general relativity (finding no evidence for that), which includes looking for a residual signal in the data after subtraction of the waveform (and again finding no evidence for that).

Finally, LIGO runs “unmodelled” searches, which do not search for specific signals, but instead look for any coherent non-Gaussian behaviour in the observatories. These searches were actually the first to find GW150914, and did so with parameters remarkably consistent with those from the modelled searches, something which we would not expect to be true if the modelled searches were “missing” some parts of the signal.

With that all said I try to reproduce Figure 7. First I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following:

There is a clear spike here at 7ms (which is GW150914), with some expected “ringing” behaviour around this point. This is a much less powerful method to extract the signal than matched-filtering, but it is completely signal independent, and illustrates how loud GW150914 is. Creswell et al., however, do not discuss their normalization of this cross-correlation, or how likely a deviation like this is to occur from noise alone. Such a study would be needed before stating that this is significant; in this case we know this signal is significant from other, more powerful, tests of the data. Then I repeat this but after having removed the best-fit waveform from the data in both observatories (using only products made available in the LOSC example notebooks). This gives:

This shows nothing interesting at all.
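For readers who want to play with this themselves, the cross-correlation being described is essentially the following recipe (a schematic sketch of mine, not code from the notebook; windowed_crosscorrelation, h1, l1, fs and t_center are all placeholder names, and the data would have to be whitened and band-passed strain downloaded from the LIGO Open Science Center):

import numpy as np

def windowed_crosscorrelation(h1, l1, fs, t_center, half_window=0.01, max_lag_ms=10):
    """Normalised cross-correlation between two whitened strain series in a
    short window around t_center (seconds), for lags up to +/- max_lag_ms."""
    i0 = int((t_center - half_window) * fs)
    i1 = int((t_center + half_window) * fs)
    a = h1[i0:i1] - h1[i0:i1].mean()
    b = l1[i0:i1] - l1[i0:i1].mean()
    max_lag = int(max_lag_ms * 1e-3 * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = []
    for lag in lags:
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        cc.append(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
    return lags / fs, np.array(cc)

# Usage sketch (fs and t_center are placeholders for the sample rate and the
# time of the event within your data segment):
# lag, cc = windowed_crosscorrelation(h1, l1, fs=4096, t_center=t_event)
# print(lag[np.argmax(np.abs(cc))])   # for GW150914 one expects a peak near ~7 ms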

Section IV: Why would such correlations not invalidate the LIGO detections?

Creswell et al. claim that correlations between the Hanford and Livingston data, which in their results appear to be maximized around the time delay reported for GW150914, raised questions on the integrity of the detection. They do not. The authors claim early on in their article that LIGO data analysis assumes that the data are Gaussian, independent and stationary. In fact, we know that LIGO data are neither Gaussian nor stationary and if one reads through the technical paper accompanying the detection PRL, you can read about the many tests we run to try to distinguish between non-Gaussianities in our data and real signals. But in doing such tests, we raise an important question: “If you see something loud, how can you be sure it is not some chance instrumental artifact, which somehow was missed in the various tests that you do”. Because of this we have to be very careful when assessing the significance (in terms of sigmas—or the p-value, to use the correct term). We assess the significance using a process called time-shifts. We first look through all our data to look for loud events within the 10ms time-window corresponding to the light travel time between the two observatories. Then we look again. Except the second time we look we shift ALL of the data from Livingston by 0.1s. This delay is much larger than the light travel time so if we see any interesting “events” now they cannot be genuine astrophysical events, but must be some noise transient. We then repeat this process with a 0.2s delay, 0.3s delay and so on up to time delays on the order of weeks long. In this way we’ve conducted of order 10 million experiments. For the case of GW150914 the signal in the non-time shifted data was louder than any event we saw in any of the time-shifted runs—all 10 million of them. In fact, it was still a lot louder than any event in the time-shifted runs as well. Therefore we can say that this is a 1-in-10-million event, without making any assumptions at all about our noise. Except one. The assumption is that the analysis with Livingston data shifted by e.g. 8s (or any of the other time shifts) is equivalent to the analysis with the Livingston data not shifted at all. Or, in other words, we assume that there is nothing special about the non-time shifted analysis (other than it might contain real signals!). As well as the technical papers, this is also described in the science summary that accompanied the GW150914 PRL.
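The time-shift (or "time-slide") procedure itself is easy to demonstrate with a toy model. The sketch below is entirely illustrative (nothing like the real PyCBC pipeline, and every name and number in it is invented): it injects a loud coincident "event" into two independent noise streams and then estimates how often pure noise alone produces something as loud, by repeating the coincidence analysis on relatively time-shifted copies of the data.

import numpy as np

rng = np.random.default_rng(42)
fs, T = 256, 2000                       # toy sample rate (Hz) and duration (s)
n = fs * T
det1, det2 = rng.normal(size=n), rng.normal(size=n)

# Inject a loud coincident "event" into both streams at the same time.
t_event = n // 2
det1[t_event] += 8.0
det2[t_event] += 8.0

def loudest_coincidence(a, b, shift):
    """Loudest coincident statistic (here simply |a*b| sample by sample)
    after cyclically shifting the second data stream by `shift` samples."""
    return np.max(np.abs(a * np.roll(b, shift)))

observed = loudest_coincidence(det1, det2, 0)

# Background: repeat with unphysically large relative time shifts, so that
# any coincidence found there must be due to noise alone.
n_slides = 1000
background = [loudest_coincidence(det1, det2, (k + 1) * fs) for k in range(n_slides)]
louder = sum(bg >= observed for bg in background)
print(f"observed statistic: {observed:.1f}")
print(f"time slides louder than observed: {louder} / {n_slides}  "
      f"(p-value estimate < {1 / n_slides:.0e} if none are louder)")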

Nothing in the paper “On the time lags of the LIGO signals” suggests that the non-time shifted analysis is special. The claimed correlations between the two detectors due to resonance and calibration lines in the data would also be present in the time-shifted analyses: the calibration lines are repetitive, so if they are correlated in the non-time-shifted analysis, they will also be correlated in the time-shifted analyses. I should also note that potential correlated noise sources were explored in another of the companion papers to the GW150914 PRL. Therefore, taking the results of this paper at face value, I see nothing that calls into question the “integrity” of the GW150914 detection.

Section V: Wrapping up

I have tried to reproduce the results quoted in “On the time lags of the LIGO signals”. I find that the claims of section 2 are due to an issue in how the data are Fourier transformed, and I have not been able to reproduce the correlations claimed in section 3. Even taking the results at face value, they would not affect the 5-sigma confidence associated with GW150914. Nevertheless I am in contact with the authors and we will try to understand these discrepancies.

For people interested in trying to explore LIGO data, check out the LIGO Open Science Center tutorials. As someone who was involved in the internal review of the LOSC data products it is rewarding to see these materials being used. It is true that these tutorials are intended as an introduction to LIGO data analysis, and do not accurately reflect many of the intricacies of these studies. For the interested reader a number of technical papers, for example this one, accompany the main PRL and within this paper and its references you can find all the nitty-gritty about how our analyses work. Finally, the PyCBC analysis toolkit, which was used to obtain the 5-sigma confidence, and of which I am one of the primary developers, is available open-source on git-hub. There are instructions here and also a number of examples that illustrate a number of aspects of our data analysis methods.

This article was circulated in the LIGO-Virgo Public Outreach and Education mailing list before being made public, and I am grateful to comments and feedback from: Christopher Berry, Ofek Birnholtz, Alessandra Buonanno, Gregg Harry, Martin Hendry, Daniel Hoak, Daniel Holz, David Keitel, Andrew Lundgren, Harald Pfeiffer, Vivien Raymond, Jocelyn Read and David Shoemaker.

by Sean Carroll at June 18, 2017 09:18 PM

June 16, 2017

Robert Helling - atdotde

I got this wrong
In yesterday's post, I totally screwed up when identifying the middle part of the spectrum as low frequency. It is not. Please ignore what I said or better take it as a warning what happens when you don't double check.

Apologies to everybody that I stirred up!

by Robert Helling (noreply@blogger.com) at June 16, 2017 02:55 PM

Robert Helling - atdotde

Some DIY LIGO data analysis
UPDATE: After some more thinking about this, I have very serious doubts about my previous conclusions. From looking at the power spectrum, I (wrongly) assumed that the middle part of the spectrum is the low frequency part (my original idea was that the frequencies should be symmetric around zero, but the periodicity of the Bloch cell bit me). So, quite the opposite: when taking into account the wrapping, this is the high frequency part (at almost the sample rate). So this is neither physics nor noise but the sample rate. For documentation, I do not delete the original post but leave it with this comment.


Recently, in the Arnold Sommerfeld Colloquium, we had Andrew Jackson of NBI talk about his take on the LIGO gravitational wave data; see this announcement with a link to a video recording. He encouraged the audience to download the freely available raw data and play with it a little bit. This sounded like fun, so I had my go at it. Now that his paper is out, I would like to share what I did with you and ask for your comments.

I used Mathematica for my experiments, so I guess the way to proceed is to guide you to an HTML export of my (admittedly cleaned up) notebook (source for your own experiments here).

The executive summary is that apparently, you can eliminate most of the "noise" at the interesting low frequency part by adding to the signal its time reversal, casting some doubt on the stochasticity of this "noise".


I would love to hear what this is supposed to mean or what I am doing wrong, in particular from my friends in the gravitational wave community.



by Robert Helling (noreply@blogger.com) at June 16, 2017 02:47 PM

June 15, 2017

Symmetrybreaking - Fermilab/SLAC

From the cornfield to the cosmos

Fermilab celebrates 50 years of discovery.

Collage: 50 years of Fermilab

Imagine how it must have felt to be Robert Wilson in the spring of 1967. The Atomic Energy Commission had hired him as the founding director of the planned National Accelerator Laboratory. Before him was the opportunity to build the most powerful particle accelerator in the world—and to create a great new American laboratory dedicated to giving scientists extraordinary new capabilities to explore the universe. 

Fifty years later, we marvel at the boldness and scope of the project, and at the freedom, the leadership, the confidence and the vision that it took to conceive and build it. If anyone was up for the challenge, it was Wilson. 

By the early 1960s, the science of particle physics had outgrown its birthplace in university laboratories. The accelerators and detectors for advancing research had grown too big, complex and costly for any university to build and operate alone. Particle physics required a new model: national laboratories where the resources of the federal government would bring together the intellectual, scientific, engineering, technical and management capabilities to give collaborations of scientists the ability to explore scientific questions that could no longer be addressed at individual universities. 

The NAL, later renamed Fermi National Accelerator Laboratory, would be a national facility where university physicists—“users”—would be “at home and loved,” in the words of physicist Leon Lederman, who eventually succeeded Wilson as Fermilab director. The NAL would be a truly national laboratory rising from the cornfields west of Chicago, open to scientists from across the country and around the world. 

The Manhattan Project in the 1940s had shown the young Wilson—had shown the entire nation—what teams of physicists and engineers could achieve when, with the federal government’s support, they devoted their energy and capability to a common goal. Now, Wilson could use his skills as an accelerator designer and builder, along with his ability to lead and inspire others, to beat the sword of his Manhattan Project experience into the plowshare of a laboratory devoted to peacetime physics research.  

When the Atomic Energy Commission chose Wilson as NAL’s director, they may have been unaware that they had hired not only a gifted accelerator physicist but also a sculptor, an architect, an environmentalist, a penny-pincher (that they would have liked), an iconoclast, an advocate for human rights, a Wyoming cowboy and a visionary. 

Over the dozen years of his tenure Wilson would not only oversee the construction of the world’s most powerful particle accelerator, on time and under budget, and set the stage for the next generation of accelerators. He would also shape the laboratory with a vision that included erecting a high-rise building inspired by a French cathedral, painting other buildings to look like children’s building blocks, restoring a tall-grass prairie, fostering a herd of bison, designing an 847-seat auditorium (a venue for culture in the outskirts of Chicago), and adorning the site with sculptures he created himself. 

Fermilab physicist Roger Dixon tells of a student who worked for him in the lab’s early days.

“One night,” Dixon remembers, “I had Chris working overtime in a basement machine shop. He noticed someone across the way grinding and welding. When the guy tipped back his helmet to examine his work, Chris walked over and asked, ‘What’ve they got you doin’ in here tonight?’ The man said that he was working on a sculpture to go into the reflecting pond in front of the high rise. ‘Boy,’ Chris said, ‘they can think of more ways for you to waste your time around here, can’t they?’ To which Robert Wilson, welder, sculptor and laboratory director, responded with remarks Chris will never forget on the relationship of science, technology and art.”

Wilson believed a great physics laboratory should look beautiful. “It seemed to me,” he wrote, “that the conditions of its being a beautiful laboratory were the same conditions as its being a successful laboratory.”

With the passage of years, Wilson’s outsize personality and gift for eloquence have given his role in Fermilab’s genesis a near-mythic stature. In reality, of course, he had help. He used his genius for bringing together the right people with the right skills and knowledge at the right time to recruit and inspire scientists, engineers, technicians, administrators (and an artist) not only to build the laboratory but also to stick around and operate it. Later, these Fermilab pioneers recalled the laboratory’s early days as a golden age, when they worked all hours of the day and night and everyone felt like family. 

By 1972, the Main Ring of the laboratory’s accelerator complex was sending protons to the first university users, and experiments proliferated in the laboratory’s particle beams. In July 1977, Experiment E-288, a collaboration Lederman led, discovered the bottom quark. 

Physicist Patty McBride, who heads Fermilab’s Particle Physics Division, came to Fermilab in 1979 as a Yale graduate student. McBride’s most vivid memory of her early days at the laboratory is meeting people with a wide variety of life experiences. 

“True, there were almost no women,” she says. “But out in this lab on the prairie were people from far more diverse backgrounds than I had ever encountered before. Some, including many of the skilled technicians, had returned from serving in the Vietnam War. Most of the administrative staff were at least bilingual. We always had Russian colleagues; in fact the first Fermilab experiment, E-36, at the height of the Cold War, was a collaboration between Russian and American physicists. I worked with a couple of guest scientists who came to Fermilab from China. They were part of a group who were preparing to build a new accelerator at the Institute of High Energy Physics there.” 

The diversity McBride found was another manifestation of Wilson’s concept of a great laboratory.

“Prejudice has no place in the pursuit of knowledge,” he wrote. “In any conflict between technical expediency and human rights, we shall stand firmly on the side of human rights. Our support of the rights of the members of minority groups in our laboratory and its environs is inextricably intertwined with our goal of creating a new center of technical and scientific excellence.”

Designing the future

Advances in particle physics depend on parallel advances in accelerator technology. Part of an accelerator laboratory’s charge is to develop better accelerators—at least that’s how Wilson saw it. With the Main Ring delivering beam, it was time to turn to the next challenge. This time, he had a working laboratory to help.  

The designers of Fermilab’s first accelerator had hoped to use superconducting magnets for the Main Ring, but they soon realized that in 1967 it was not yet technically feasible. Nevertheless, they left room in the Main Ring tunnel for a next-generation accelerator. 

Wilson applied his teambuilding gifts to developing this new machine, christened the Energy Doubler (and later renamed the Tevatron). 

In 1972, he brought together an informal working group of metallurgists, magnet builders, materials scientists, physicists and engineers to begin investigating superconductivity, with the goal of putting this exotic phenomenon to work in accelerator magnets. 

No one had more to do with the success of the superconducting magnets than Fermilab physicist Alvin Tollestrup. Times were different then, he recalls.

“Bob had scraped up enough money from here and there to get started on pursuing the Doubler before it was officially approved,” Tollestrup says. “We had to fight tooth and nail for approval. But in those days, Bob could point the whole machine shop to do what we needed. They could build a model magnet in a week.”

It took a decade of strenuous effort to develop the superconducting wire, the cable configuration, the magnet design and the manufacturing processes to bring the world’s first large-scale superconducting accelerator magnets into production, establishing Fermilab’s leadership in accelerator technology. Those involved say they remember it as an exhilarating experience. 

By March 1983, the Tevatron magnets were installed underneath the Main Ring, and in July the proton beam in the Tevatron reached a world-record energy of 512 billion electronvolts. In 1985, a new Antiproton Source enabled proton-antiproton collisions that further expanded the horizons of the subatomic world. 

Two particle detectors—called the Collider Detector at Fermilab, or CDF, and DZero—gave hundreds of collaborating physicists the means to explore this new scientific territory. Design for CDF began in 1978, construction in 1982, and CDF physicists detected particle collisions in 1985. Fermilab’s current director, Nigel Lockyer, first came to work at Fermilab on CDF in 1984. 

“The sheer ambition of the CDF detector was enough to keep everyone excited,” he says. 

The DZero detector came online in 1992. A primary goal for both experiments was the discovery of the top quark, the heavier partner of the bottom quark and the last undiscovered quark of the six that theory predicted. Both collaborations worked feverishly to be the first to accumulate enough evidence for a discovery. 

In March 1995, CDF and DZero jointly announced that they had found the top. To spread the news, Fermilab communicators tried out a fledgling new medium called the World Wide Web.

Five decades of particle physics

Reaching new frontiers

Meanwhile, in the 1980s, growing recognition of the links between subatomic interactions and cosmology—between the inner space of particle physics and the outer space of astrophysics—led to the formation of the Fermilab Theoretical Astrophysics Group, pioneered by cosmologists Rocky Kolb and Michael Turner. Cosmology’s rapid evolution from theoretical endeavor to experimental science demanded large collaborations and instruments of increasing complexity and scale, beyond the resources of universities—a problem that particle physics knew how to solve. 

In the mid-1990s, the Sloan Digital Sky Survey turned to Fermilab for help. Under the leadership of former Fermilab Director John Peoples, who became SDSS director in 1998, the Sky Survey carried out the largest astronomical survey ever conducted and transformed the science of astrophysics.  

The discovery of cosmological evidence of dark matter and dark energy had profound implications for particle physics, revealing a mysterious new layer to the universe and raising critical scientific questions. What are the particles of dark matter? What is dark energy? In 2004, in recognition of Fermilab’s role in particle astrophysics, the laboratory established the Center for Particle Astrophysics. 

As the twentieth century ended and the twenty-first began, Fermilab’s Tevatron experiments defined the frontier of high-energy physics research. Theory had long predicted the existence of a heavy particle associated with particle mass, the Higgs boson, but no one had yet seen it. In the quest for the Higgs, Fermilab scientists and experimenters made a relentless effort to wring every ounce of performance from accelerator and detectors. 

The Tevatron had reached maximum energy, but in 1999 a new accelerator in the Fermilab complex, the Main Injector, began giving an additional boost to particles before they entered the Tevatron ring, significantly increasing the rate of particle collisions. The experiments continuously re-invented themselves using advances in detector and computing technology to squeeze out every last drop of data. They were under pressure, because the clock was ticking.  

A new accelerator with seven times the Tevatron’s energy was under construction at CERN, the European laboratory for particle physics in Geneva, Switzerland. When Large Hadron Collider operations began, its higher-energy collisions and state-of-the-art detectors would eclipse Fermilab’s experiments and mark the end of the Tevatron’s long run.

In the early 1990s, the Tevatron had survived what many viewed as a near-death experience with the cancellation of the Superconducting Super Collider, planned as a 54-mile ring that would surpass Fermilab’s accelerator, generating beams with 20 times as much energy. Construction began on the SSC’s Texas site in 1991, but in 1993 Congress canceled funding for the multibillion-dollar project. Its demise meant that, for the time being, the high-energy frontier would remain in Illinois. 

While the SSC drama unfolded, in Geneva the construction of the LHC went steadily onward—helped and supported by US physicists and engineers and by US funding. 

Among the more puzzling aspects of particle physics for those outside the field is the simultaneous competition and collaboration of scientists and laboratories. It makes perfect sense to physicists, however, because science is the goal. The pursuit of discovery drives the advancement of technology. Particle physicists have decades of experience in working collaboratively to develop the tools for the next generation of experiments, wherever in the world that takes them. 

Thus, even as the Tevatron experiments threw everything they had into the search for the Higgs, scientists and engineers at Fermilab—literally across the street from the CDF detector—were building advanced components for the CERN accelerator that would ultimately shut the Tevatron down.  

Going global

Just as in the 1960s particle accelerators had outgrown the resources of any university, by the end of the century they had outgrown the resources of any one country to build and operate. Detectors had long been international construction projects; now accelerators were, too, as attested by the superconducting magnets accumulating at Fermilab, ready for shipment to Switzerland.

As the US host for CERN’s CMS experiment, Fermilab built an LHC Remote Operations Center so that the growing number of US collaborating physicists could work on the experiment remotely. In the early morning hours of September 10, 2008, a crowd of observers watched on screens in the ROC as the first particle beam circulated in the LHC. Four years later, the CMS and ATLAS experiments announced the discovery of the Higgs boson. One era had ended, and a new one had begun. 

The future of twenty-first century particle physics, and Fermilab’s future, will unfold in a completely global context. More than half of US particle physicists carry out their research at LHC experiments. Now, the same model of international collaboration will create another pathway to discovery, through the physics of neutrinos. Fermilab is hosting the international Deep Underground Neutrino Experiment, powered by the Long-Baseline Neutrino Facility that will send the world’s most powerful beam of neutrinos through the earth to a detector more than a kilometer underground and 1300 kilometers away in the Sanford Underground Research Facility in South Dakota. 

“We are following the CERN model,” Lockyer says. “We have split the DUNE project into an accelerator facility and an experiment. Seventy-five percent of the facility will be built by the US, and 25 percent by international collaborators. For the experiment, the percentages will be reversed.” 

The DUNE collaboration now comprises more than 950 scientists from 162 institutions in 30 countries. “To design the project,” Lockyer says, “we started with a clean piece of paper and all of our international collaborators and their funding agencies in the room. They have been involved since t=0.”

In Lockyer’s model for Fermilab, the laboratory will keep its historic academic focus, giving scientists the tools to address the most compelling scientific questions. He envisions a diverse science portfolio with a flagship neutrino program and layers of smaller programs, including particle astrophysics. 

At the same time, he says, Fermilab feels mounting pressure to demonstrate value beyond creating knowledge. One potential additional pursuit involves using the laboratory’s unequaled capability in accelerator design and construction to build accelerators for other laboratories. Lockyer says he also sees opportunities to contribute the computing capabilities developed from decades of processing massive amounts of particle physics data to groundbreaking next-generation computing projects. “We have to dig deeper and reach out in new ways.”

In the five decades since Fermilab began, knowledge of the universe has flowered beyond anything we could have imagined in 1967. Particles and forces then unknown have become familiar, like old friends. Whole realms of inner space have opened up to us, and outer space has revealed a new dark universe to explore. Across the globe, collaborators have joined forces to extend our reach into the unknown beyond anything we can achieve separately. 

Times have changed, but Wilson would still recognize his laboratory. As it did then, Fermilab holds the same deep commitment to the science of the universe that brought it into being 50 years ago. 

by Judith Jackson at June 15, 2017 09:15 PM
