Particle Physics Planet


May 28, 2015

Christian P. Robert - xi'an's og

discussions on Gerber and Chopin

By coincidence, I received my copy of JRSS Series B with the Read Paper by Mathieu Gerber and Nicolas Chopin on sequential quasi Monte Carlo just as I was preparing an arXival of a few discussions on the paper! Among the [numerous and diverse] discussions, a few were of particular interest to me [I highlighted members of the University of Warwick and of Université Paris-Dauphine to suggest potential biases!]:

  1. Mike Pitt (Warwick), Murray Pollock et al. (Warwick) and Finke et al. (Warwick) all suggested combining quasi Monte Carlo with pseudomarginal Metropolis-Hastings, pMCMC (Pitt) and Rao-Blackwellisation (Finke et al.);
  2. Arnaud Doucet pointed out that John Skilling had used the Hilbert (ordering) curve in a 2004 paper;
  3. Chris Oates, Dan Simpson and Mark Girolami (Warwick) suggested combining quasi Monte Carlo with their functional control variate idea;
  4. Richard Everitt wondered about the dimension barrier of d=6 and about possible slice extensions;
  5. Zhijian He and Art Owen pointed out simple solutions to handle a random number of uniforms (for simulating each step in sequential Monte Carlo), namely to start with quasi Monte Carlo and end up with regular Monte Carlo, in a hybrid manner;
  6. Hans Künsch points out the connection with systematic resampling à la Carpenter, Clifford and Fearnhead (1999) and wonders about separating the impact of quasi Monte Carlo between resampling and propagating [which vaguely links to one of my comments];
  7. Pierre L’Ecuyer points out a possible improvement over the Hilbert curve by a preliminary sorting;
  8. Frederik Lindsten and Sumeet Singh propose using ABC to extend the backward smoother to intractable cases [but still with a fixed number of uniforms to use at each step], as well as Mateu and Ryder (Paris-Dauphine) for a more general class of intractable models;
  9. Omiros Papaspiliopoulos wonders at the possibility of a quasi Markov chain with “low discrepancy paths”;
  10. Daniel Rudolf suggests linking the error rate of sequential quasi Monte Carlo with the bounds of Vapnik and Chervonenkis (1977).

 The arXiv document also includes the discussions by Julyan Arbel and Igor Prünster (Torino) on the Bayesian nonparametric side of sqMC and by Robin Ryder (Dauphine) on the potential of sqMC for ABC.


Filed under: Books, Kids, Statistics, University life Tagged: ABC, discussion paper, doubly intractable problems, Hilbert, Igor Prünster, Julyan Arbel, Mathieu Gerber, Nicolas Chopin, quasi-Monte Carlo methods, Read paper, Royal Statistical Society, Series B, systematic resampling, Turino, University of Warwick, Vapnik-Chervonenkis

by xi'an at May 28, 2015 10:15 PM

Christian P. Robert - xi'an's og

Symmetrybreaking - Fermilab/SLAC

Inside particle detectors: trackers

Fermilab physicist Jim Pivarski explains how particle detectors tell us about the smallest constituents of matter.

Much of the complexity of particle physics experiments can be boiled down to two basic types of detectors: trackers and calorimeters. They each have strengths and weaknesses, and most modern experiments use both. 

The first tracker started out as an experiment to study clouds, not particles. In the early 1900s, Charles Wilson built an enclosed sphere of moist air to study cloud formation. Dust particles were known to seed cloud formation—water vapor condenses on the dust to make clouds of tiny droplets. But no matter how clean Wilson made his chamber, clouds still formed.

Moreover, they formed in streaks, especially near radioactive sources. It turned out that subatomic particles were ionizing the air, and droplets condensed along these trails like dew on a spider web.

This cloud chamber was phenomenally useful to particle physicists—finally, they could see what they were doing! It's much easier to find strange, new particles when you have photos of them acting strangely. In some cases, they were caught in the act of decaying—the kaon was discovered as a V-shaped intersection of two pion tracks, since kaons decay into pairs of pions in flight.

In addition to turning vapor into droplets, ionization trails can cause bubbles to form in a near-boiling liquid. Bubble chambers could be made much larger than cloud chambers, and they produced clear, crisp tracks in photographs. Spark chambers used electric discharges along the ionization trails to collect data digitally.

More recently, time projection chambers measure the drift time of ions between the track and a high-voltage plate for more spatial precision, and silicon detectors achieve even higher resolution by collecting ions on microscopic wires printed on silicon microchips. Today, trackers can reconstruct millions of three-dimensional images per second.

The disadvantage of tracking is that neutral particles do not produce ionization trails and hence are invisible. The kaon that decays into two pions is neutral, so you only see the pions. Neutral particles that never or rarely decay are even more of a nuisance. Fortunately, calorimeters fill in this gap, since they are sensitive to any particle that interacts with matter.

Interestingly, the Higgs boson was discovered in two decay modes at once. One of these, Higgs to four muons, uses tracking exclusively, since the muons are all charged and deposit minimal energy in a calorimeter. The other, Higgs to two (neutral) photons, uses calorimetry exclusively.


A version of this article was published in Fermilab Today.

 


by Jim Pivarski, Fermilab at May 28, 2015 03:50 PM

The n-Category Cafe

A 2-Categorical Approach to the Pi Calculus

guest post by Mike Stay

Greg Meredith and I have a short paper that’s been accepted for Higher-Dimensional Rewriting and Applications (HDRA) 2015 on modeling the asynchronous polyadic pi calculus with 2-categories. We avoid domain theory entirely and model the operational semantics directly; full abstraction is almost trivial. As a nice side-effect, we get a new tool for reasoning about consumption of resources during a computation.

It’s a small piece of a much larger project, which I’d like to describe here in a series of posts. This post will talk about lambda calculus for a few reasons. First, lambda calculus is simpler, but complex enough to illustrate one of our fundamental insights. Lambda calculus is to serial computation what pi calculus is to concurrent computation; lambda calculus talks about a single machine doing a computation, while pi calculus talks about a network of machines communicating with potentially random delays. There is at most one possible outcome for a computation in the lambda calculus, while there are many possible outcomes in a computation in the pi calculus. Both the lazy lambda calculus and the pi calculus, however, have as an integral part of their semantics the notion of waiting for a sub-computation to complete before moving on to another one. Second, the denotational semantics of lambda calculus in Set is well understood, as is its generalization to cartesian closed categories; this semantics is far simpler than the denotational semantics of pi calculus and serves as a good introduction. The operational semantics of lambda calculus is also simpler than that of pi calculus, and there is previous work on modeling it using higher categories.

History

Alonzo Church invented the lambda calculus as part of his attack on Hilbert’s Entscheidungsproblem (the “decision problem”), which asked for an algorithm to solve any mathematical problem. Church published his proof that no such algorithm exists in 1936. Turing invented his eponymous machines, also to solve the Entscheidungsproblem, and published his independent proof a few months after Church. When he discovered that Church had beaten him to it, Turing proved in 1937 that the two approaches were equivalent in power. Since Turing machines were much more “mechanical” than the lambda calculus, the development of computing machines relied far more on Turing’s approach, and it was only decades later that people started writing compilers for more friendly programming languages. I’ve heard it quipped that “the history of programming languages is the piecemeal rediscovery of the lambda calculus by computer scientists.”

The lambda calculus consists of a set of “terms” together with some relations on the terms that tell how to “run the program”. Terms are built up out of “term constructors”; in the lambda calculus there are three: one for variables, one for defining functions (Church denoted this operation with the Greek letter lambda, hence the name of the calculus), and one for applying those functions to inputs. I’ll talk about these constructors and the relations more below.

Church introduced the notion of “types” to avoid programs that never stop. Modern programming languages also use types to avoid programmer mistakes and encode properties about the program, like proving that secret data is inaccessible outside certain parts of the program. The “simply-typed” lambda calculus starts with a set of base types and takes the closure under the binary operation \(\to\) to get a set of types. Each term is assigned a type; from this one can deduce the types of the variables used in the term. An assignment of types to variables is called a typing context.

The search for a semantics for variants of the lambda calculus has typically been concerned with finding sets or “domains” such that the interpretation of each lambda term is a function between domains. Scott worked out a domain \(D\) such that the continuous functions from \(D\) to itself are precisely the computable ones. Lambek and Scott generalized the category where we look for semantics from Set to arbitrary cartesian closed categories (CCCs).

Lambek and Scott constructed a CCC out of lambda terms; we call this category the syntactical category. Then a structure-preserving functor from the syntactical category to Set or some other CCC would provide the semantics. The syntactical category has types as objects and equivalence classes of certain terms as morphisms. A morphism in the syntactical category goes from a typing context to the type of the term.

John Baez has a set of lecture notes from Fall 2006 through Spring 2007 describing Lambek and Scott’s approach to the category theory of lambda calculus and generalizing it from cartesian closed categories to symmetric monoidal closed categories so it can apply to quantum computation as well: rather than taking a functor from the syntactical category into Set, we can take a functor into Hilb instead. He and I also have a “Rosetta stone” paper summarizing the ideas and connecting them with the corresponding generalization of the Curry-Howard isomorphism.

The Curry-Howard isomorphism says that types are to propositions as programs are to proofs. In practice, types are used in two different ways: one as propositions about data and the other as propositions about code. Programming languages like C, Java, Haskell, and even dynamically typed languages like JavaScript and Python use types to talk about propositions that data satisfies: is it a date or a name? In these languages, equivalence classes of programs constitute constructive proofs. Concurrent calculi are far more concerned about propositions that the code satisfies: can it reach a deadlocked state? In these languages, it is the rewrite rules taking one term to another that behave like proofs. Melliès and Zeilberger’s excellent paper “Functors are Type Refinement Systems” relates these two approaches to typing to each other.

Note that Lambek and Scott’s approach does not have the sets of terms or variables as objects! The algebra that defines the set of terms plays only a minor role in the category; there’s no morphism in the CCC, for instance, that takes a term \(t\) and a variable \(x\) to produce the term \(\lambda x.t\). This failure to capture the structure of the term in the morphism wasn’t a big deal for lambda calculus because of “confluence” (see below), but it turns out to matter a lot more in calculi like Milner’s pi calculus that describe communicating over a network, where messages can be delayed and arrival times matter for the end result (consider, for instance, two people trying to buy online the last ticket to a concert).

The last few decades have seen domains becoming more and more complicated in order to try to “unerase” the information about the structure of terms that gets lost in the domain theory approach and recover the operational semantics. Fiore, Moggi, and Sangiorgi, Stark and Cattani, Stark, and Winskel all present domain models of the pi calculus that recursively involve the power set in order to talk about all the possible futures for a term. Industry has never cared much about denotational semantics: the Java Virtual Machine is an operational semantics for the Java language.

What we did

Greg Meredith and I set out to model the operational semantics of the pi calculus directly in a higher category rather than using domain theory. An obvious first question is, “What about types?” I was particularly worried about how to relate this approach to the kind of thing Scott and Lambek did. Though it didn’t make it into the HDRA paper and the details won’t make it into this post, we found that we’re able to use the “type-refinement-as-a-functor” idea of Melliès and Zeilberger to show how the algebraic term-constructor functions relate to the morphisms in the syntactical category.

We’re hoping that this categorical approach to modeling process calculi will help with reasoning about practical situations where we want to compose calculi; for instance, we’d like to put a hundred pi calculus engines around the edges of a chip and some ambient calculus engines, which have nice features for managing the location of data, in the middle to distribute work among them.

Lambda calculus

The lambda calculus consists of a set of “terms” together with some relations on the terms. The set \(T\) of terms is defined recursively, parametric in a countably infinite set \(V\) of variables. The base terms are the variables: if \(x\) is an element of \(V\), then \(x\) is a term in \(T\). Next, given any two terms \(t, t' \in T\), we can apply one to the other to get \(t(t')\). We say that \(t\) is in the head position of the application and \(t'\) in the tail position. (When the associativity of application is unclear, we’ll also use parentheses around subterms.) Finally, we can abstract out a variable from a term: given a variable \(x\) and a term \(t\), we get a term \(\lambda x.t\).

The term constructors define an algebra, a functor \(LC\) from Set to Set that takes any set of variables \(V\) to the set of terms \(T = LC(V)\). The term constructors themselves become functions:
\[
\begin{array}{rll}
-\colon & V \to T & \mbox{variable}\\
-(-)\colon & T \times T \to T & \mbox{application}\\
\lambda\colon & V \times T \to T & \mbox{abstraction}
\end{array}
\]
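
To make the algebra of term constructors concrete, here is a minimal sketch in Python (my own illustration, not code from the paper), encoding terms as nested tuples; the names `var`, `app` and `lam` are hypothetical stand-ins for the three constructors.

```python
# A minimal sketch (not from the paper): the three term constructors of the
# algebra LC, acting on terms encoded as nested Python tuples.

def var(x):          # variable:     V -> T
    return ("var", x)

def app(t, u):       # application:  T x T -> T
    return ("app", t, u)

def lam(x, t):       # abstraction:  V x T -> T
    return ("lam", x, t)

# Example: build the term  λx.x(x)
print(lam("x", app(var("x"), var("x"))))
# ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
```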

Church described three relations on terms. The first relation, alpha, relates any two lambda abstractions that differ only in the variable name. This is exactly the same as when we consider the function \(f(x) = x^2\) to be identical to the function \(f(y) = y^2\). The third relation, eta, says that there’s no difference between a function \(f\) and a “middle-man” function that gets an input \(x\) and applies the function \(f\) to it: \(\lambda x.f(x) = f\). Both alpha and eta are equivalences.

The really important relation is the second one, “beta reduction”. In order to define beta reduction, we have to define the free variables of a term: a variable occurring by itself is free; the set of free variables in an application is the union of the free variables in its subterms; and the free variables in a lambda abstraction are the free variables of the subterm except for the abstracted variable.
\[
\begin{array}{rl}
\mathrm{FV}(x) = & \{x\} \\
\mathrm{FV}(t(t')) = & \mathrm{FV}(t) \cup \mathrm{FV}(t') \\
\mathrm{FV}(\lambda x.t) = & \mathrm{FV}(t) \setminus \{x\}
\end{array}
\]
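
The recursion above translates directly into code; the following sketch (again my own, reusing the tuple encoding from the previous snippet) computes the free variables of a term.

```python
# Sketch of the FV recursion, on terms encoded as ("var", x), ("app", t, u),
# ("lam", x, t).

def free_vars(t):
    tag = t[0]
    if tag == "var":                       # FV(x) = {x}
        return {t[1]}
    if tag == "app":                       # FV(t(t')) = FV(t) union FV(t')
        return free_vars(t[1]) | free_vars(t[2])
    if tag == "lam":                       # FV(lambda x.t) = FV(t) minus {x}
        return free_vars(t[2]) - {t[1]}
    raise ValueError(f"unknown term: {t!r}")

# FV(λx.x(y)) = {'y'}
print(free_vars(("lam", "x", ("app", ("var", "x"), ("var", "y")))))
```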

Beta reduction says that when we have a lambda abstraction \(\lambda x.t\) applied to a term \(t'\), then we replace every free occurrence of \(x\) in \(t\) by \(t'\):
\[
(\lambda x.t)(t')\ \downarrow_\beta\ t\{t'/x\},
\]
where we read the right hand side as “\(t\) with \(t'\) replacing \(x\).” We see a similar replacement of \(y\) in action when we compose the following functions:
\[
\begin{array}{rl}
f(x) = & x + 1 \\
g(y) = & y^2 \\
g(f(x)) = & (x + 1)^2
\end{array}
\]
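
As a sketch of how the rule operates (mine, not the paper's), here is a naive substitution and a single top-level beta step on the same tuple encoding; for brevity the substitution is not capture-avoiding, so it assumes the free variables of the argument are not bound inside the body.

```python
# Naive substitution t{s/x} and one top-level beta step, on the tuple encoding.
# Caveat: no alpha-renaming, so capture of free variables of s is not handled.

def subst(t, x, s):
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    raise ValueError(f"unknown term: {t!r}")

def beta(redex):
    """(λx.t)(t')  ↓β  t{t'/x}, applied at the top of the term only."""
    assert redex[0] == "app" and redex[1][0] == "lam"
    _, (_, x, body), arg = redex
    return subst(body, x, arg)

# (λx.x)(y)  ↓β  y
print(beta(("app", ("lam", "x", ("var", "x")), ("var", "y"))))   # ('var', 'y')
```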

We say a term has a normal form if there’s some sequence of beta reductions that leads to a term where no beta reduction is possible. When the beta rule applies in more than one place in a term, it doesn’t matter which one you choose to do first: any sequence of betas that leads to a normal form will lead to the same normal form. This property of beta reduction is called confluence. Confluence means that the order of performing various subcomputations doesn’t matter so long as they all finish: in the expression \((2 + 5) * (3 + 6)\) it doesn’t matter which addition you do first or whether you distribute the expressions over each other; the answer is the same.

“Running” a program in the lambda calculus is the process of computing the normal form by repeated application of beta reduction, and the normal form itself is the result of the computation. Confluence, however, does not mean that when there is more than one place we could apply beta reduction, we can choose any beta reduction and be guaranteed to reach a normal form. The following lambda term, customarily denoted \(\omega\), takes an input and applies it to itself:
\[
\omega = \lambda x.x(x)
\]
If we apply \(\omega\) to itself, then beta reduction produces the same term, customarily called \(\Omega\):
\[
\Omega = \omega(\omega), \qquad \Omega \downarrow_\beta \Omega.
\]
It’s an infinite loop! Now consider this lambda term that has \(\Omega\) as a subterm:
\[
(\lambda x.\lambda y.x)(\lambda x.x)(\Omega)
\]
It says, “Return the first element of the pair (identity function, \(\Omega\))”. If it has an answer at all, the answer should be “the identity function”. The question of whether it has an answer becomes, “Do we try to calculate the elements of the pair before applying the projection to it?”

Lazy lambda calculus

Many programming languages, like Java, C, JavaScript, Perl, Python, and Lisp are “eager”: they calculate the normal form of inputs to a function before calculating the result of the function on the inputs; the expression above, implemented in any of these languages, would be an infinite loop. Other languages, like Miranda, Lispkit, Lazy ML, and Haskell and its predecessor Orwell are “lazy” and only apply beta reduction to inputs when they are needed to complete the computation; in these languages, the result is the identity function. Abramsky wrote a 48-page paper about constructing a domain that captures the operational semantics of lazy lambda calculus.

The idea of representing operational semantics directly with higher categories originated with R. A. G. Seely, who suggested that beta reduction should be a 2-morphism; Barney Hilken and Tom Hirschowitz have also contributed to looking at lambda calculus from this perspective. In the “Rosetta stone” paper that John Baez and I wrote, we made an analogy between programs and Feynman diagrams. The analogy is precise as far as it goes, but it’s unsatisfactory in the sense that Feynman diagrams describe processes happening over time, while Lambek and Scott mod out by the process of computation that occurs over time. If we use 2-categories that explicitly model rewrites between terms, we get something that could potentially be interpreted with concepts from physics: types would become analogous to strings, terms would become analogous to space, and rewrites would happen over time. The idea from the “algebra of terms” perspective is that we have objects \(V\) and \(T\) for variables and terms, term constructors as 1-morphisms, and the nontrivial 2-morphisms generated by beta reduction. Seely showed that this approach works fine when you’re unconcerned with the context in which reduction can occur.

This approach, however, doesn’t work for lazy lambda calculus! Horizontal composition in a 2-category is a functor, so if a term \(t\) reduces to a term \(t'\), then by functoriality, \(\lambda x.t\) must reduce to \(\lambda x.t'\)—but this is forbidden in the lazy lambda calculus! Functoriality of horizontal composition is a “relativity principle” in the sense that reductions in one context are the same as reductions in any other context. In lazy programming languages, on the other hand, the “head” context is privileged: reductions only happen here. It’s somewhat like believing that measuring differences in temperature is like measuring differences in space, that only the difference is meaningful—and then discovering absolute zero. When beta reduction can happen anywhere in a term, there are too many 2-morphisms to model lazy lambda calculus.

In order to model this special context, we reify it: we add a special unary term constructor \([-]\colon T \to T\) that marks contexts where reduction is allowed, then redefine beta reduction so that the term constructor \([-]\) behaves like a catalyst that enables the beta reduction to occur. This lets us cut down the set of 2-morphisms to exactly those that are allowed in the lazy lambda calculus; Greg and I did essentially the same thing in the pi calculus.

More concretely, we have two generating rewrite rules. The first propagates the reduction context to the head position of the term; the second is beta reduction restricted to a reduction context.
\[
[t(t')]\ \downarrow_{ctx}\ [[t](t')]
\]
\[
[[\lambda x.t](t')]\ \downarrow_\beta\ [t]\{t'/x\}
\]
When we surround the example term from the previous section with a reduction context marker, we get the following sequence of reductions:
\[
\begin{array}{rl}
& [(\lambda x.\lambda y.x)(\lambda x.x)(\Omega)] \\
\downarrow_{ctx} & [[(\lambda x.\lambda y.x)(\lambda x.x)](\Omega)] \\
\downarrow_{ctx} & [[[\lambda x.\lambda y.x](\lambda x.x)](\Omega)] \\
\downarrow_{\beta} & [[\lambda y.(\lambda x.x)](\Omega)] \\
\downarrow_{\beta} & [\lambda x.x]
\end{array}
\]
At the start, none of the subterms were of the right shape for beta reduction to apply. The first two reductions propagated the reduction context down to the projection in head position. At that point, the only reduction that could occur was at the application of the projection to the first element of the pair, and after that to the second element. At no point was \(\Omega\) ever in a reduction context.
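
The following sketch (my own, not the paper's code) encodes the reduction-context marker as a `("box", t)` node on top of the tuple terms from the earlier snippets and always rewrites in head position, so it reproduces exactly the reduction sequence shown above; the substitution is again the naive, non-capture-avoiding one.

```python
# Hedged sketch of the two rules:  [t(t')] ↓ctx [[t](t')]  and
# [[λx.b](t')] ↓β [b]{t'/x}, with [-] encoded as a ("box", t) node.

def subst(t, x, s):
    """Naive substitution t{s/x} (alpha-renaming omitted), extended to boxes."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    if tag == "box":
        return ("box", subst(t[1], x, s))
    raise ValueError(f"unknown term: {t!r}")

def step(t):
    """One rewrite in head position, or None if no rule applies."""
    if t[0] != "box":
        return None
    inner = t[1]
    if inner[0] == "app":
        head, arg = inner[1], inner[2]
        # beta rule:  [[λx.b](t')]  ↓β  [b]{t'/x}
        if head[0] == "box" and head[1][0] == "lam":
            _, x, body = head[1]
            return ("box", subst(body, x, arg))
        # keep reducing inside an already-boxed head ...
        if head[0] == "box":
            reduced = step(head)
            return None if reduced is None else ("box", ("app", reduced, arg))
        # ... otherwise propagate the context:  [t(t')]  ↓ctx  [[t](t')]
        return ("box", ("app", ("box", head), arg))
    return None

# The example from the text: [(λx.λy.x)(λx.x)(Ω)]
omega = ("lam", "x", ("app", ("var", "x"), ("var", "x")))
Omega = ("app", omega, omega)
K = ("lam", "x", ("lam", "y", ("var", "x")))
I = ("lam", "x", ("var", "x"))

t = ("box", ("app", ("app", K, I), Omega))
print(t)
nxt = step(t)
while nxt is not None:
    t = nxt
    print(t)
    nxt = step(t)
# The sequence ends at ('box', ('lam', 'x', ('var', 'x'))), i.e. [λx.x];
# Ω is never placed in a reduction context, so it is never reduced.
```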

Compute resources

In order to run a program that does anything practical, you need a processor, time, memory, and perhaps disk space or a network connection or a display. All of these resources have a cost, and it would be nice to keep track of them. One side-effect of reifying the context is that we can use it as a resource.

The rewrite rule \(\downarrow_{ctx}\) increases the number of occurrences of \([-]\) in a term while \(\downarrow_\beta\) decreases the number. If we replace \(\downarrow_{ctx}\) by the rule
\[
[t(t')]\ \downarrow_{ctx'}\ [t](t')
\]
then the number of occurrences of \([-]\) can never increase. By forming the term \([[\cdots[t]\cdots]]\), we can bound the number of beta reductions that can occur in the computation of \(t\).

If we have a nullary constructor \(c\colon 1 \to T\), then we can define \([t] = c(t)\) and let the program dynamically decide whether to evaluate an expression eagerly or lazily.

In the pi calculus, we have the ability to run multiple processes at the same time; each \([-]\) in that situation represents a core in a processor or computer in a network.

These are just the first things that come to mind; we’re experimenting with variations.

Conclusion

We figured out how to model the operational semantics of a term calculus directly in a 2-category by requiring a catalyst to carry out a rewrite, which gave us full abstraction without needing a domain based on representing all the possible futures of a term. As a side-effect, it also gave us a new tool for modeling resource consumption in the process of computation. Though I haven’t explained how yet, there’s a nice connection between the “algebra-of-terms” approach that uses \(V\) and \(T\) as objects and Lambek and Scott’s approach that uses types as objects, based on Melliès and Zeilberger’s ideas about type refinement. Next time, I’ll talk about the pi calculus and types.

by john (baez@math.ucr.edu) at May 28, 2015 03:39 PM

arXiv blog

Is This the First Computational Imagination?

The ability to read a description of a scene and then picture it has always been uniquely human. Not anymore.


Imagine an oak tree in a field of wheat, silhouetted against a cloudless blue sky on a dreamy sunny afternoon. The chances are that most people reading this sentence can easily picture a bucolic scene in their mind’s eye. This ability to read a description of a scene and then imagine it has always been uniquely human. But this precious skill may no longer be ours alone.

May 28, 2015 12:10 PM

Emily Lakdawalla - The Planetary Society Blog

Four mission assembly progress reports: ExoMars TGO, InSight, OSIRIS-REx, and BepiColombo
2015 has seen few deep-space launches, but 2016 is shaping up to be a banner year with three launches, followed quickly by a fourth in early 2017. All of the missions under development have reported significant milestones recently.

May 28, 2015 11:12 AM

CERN Bulletin

CERN Bulletin Issue No. 22-23/2015
Link to e-Bulletin Issue No. 22-23/2015. Link to all articles in this issue.

May 28, 2015 09:19 AM

Peter Coles - In the Dark

Jazz and Quantum Entanglement

As regular readers of this blog (Sid and Doris Bonkers) will know, among the various things I write about apart from The Universe and Stuff is my love of Jazz. I don’t often get the chance to combine music with physics in a post so I’m indebted to George Ellis for drawing my attention to this fascinating little video showing a visualisation of the effects of quantum entanglement:

The experiment shown involves pairs of entangled photons. Here is an excerpt from the blurb on Youtube:

The video shows images of single photon patterns, recorded with a triggered intensified CCD camera, where the influence of a measurement of one photon on its entangled partner photon is imaged in real time. In our experiment the immediate change of the monitored mode pattern is a result of the polarization measurement on the distant partner photon.

You can find out more by clicking through to the Youtube page.

While most of my colleagues were completely absorbed by the pictures, I was fascinated by the choice of musical accompaniment. It is in fact Blue Piano Stomp, a wonderful example of classic Jazz from the 1920s featuring the great Johnny Dodds on clarinet (who also wrote the tune) and the great Lil Armstrong (née Hardin) on piano, who just happened to be the first wife of a trumpet player by the name of Louis Armstrong.

So at last I’ve found an example of Jazz entangled with Physics!

P.S. We often bemoan the shortage of female physicists, but Jazz is another field in which women are under-represented and insufficiently celebrated. Lil Hardin was a great piano player and deserves to be much more widely appreciated for her contribution to Jazz history.

 


by telescoper at May 28, 2015 08:55 AM

Lubos Motl - string vacua and pheno

Relaxion, a new paradigm explaining the Higgs lightness
The Quanta Magazine published a report
A New Theory to Explain the Higgs Mass
by Natalie Wolchover that promotes a one-month-old preprint
Cosmological Relaxation of the Electroweak Scale
by Graham, Kaplan, and Rajendran. So far, the paper has 1 (conference-related) citation but has already received great appraisals e.g. from Giudice, Craig, and Dine – and less great ones e.g. from Arkani-Hamed.

The Higgs mass, \(125\GeV\) or so (and the electroweak scale), is about \(10^{16}\) times lighter than the Planck mass, the characteristic scale of quantum gravity. Where does this large number come from? The usual wisdom, with a correction I add, is that the large number may be explained by one of the three basic ideas:
  1. naturalness, with new physics (SUSY, compositeness) near the Higgs mass
  2. anthropic principle, i.e. lots of vacua with different values of the Higgs mass, mostly comparable to the Planck mass; the light Higgs vacua are chosen because they admit life like ours
  3. Dirac's large number hypothesis: similar large dimensionless numbers are actually functions of the "age of the Universe" which is also large (but not a universal constant) and therefore evolve, or have evolved, as the Universe was expanding; see TRF
Too bad that the third option is often completely denied. Well, we sort of know that similar constants haven't been evolving in recent billions of years, at least not by \(O(1)\), but it's a shame that the Graham et al. paper doesn't refer to Dirac's 1937 paper at all because this new proposal is a hybrid of all three paradigms above, I think.




What is the proposal and why is it a hybrid? Well, it needs inflation and an axion, possibly the QCD axion, whose role is to drive the Higgs field to a region where its mass is low.




The axion \(\phi\) is coupled to the QCD instanton density \(G\wedge G\) and the theory dynamically creates the potential for this axion that may be schematically written as\[

V(\phi) = a(\phi/f) \cos (\phi / f) + b(\phi/f)

\] where \(a,b\) are very, very slowly changing functions of the axion \(\phi\); whether this slowness may be natural is debatable. So the field \(\phi\) has many very similar minima – like in the usual anthropic explanations. Around each minimum, the Higgs mass is different. But the right one isn't chosen anthropically, by metaphysical references to the need to produce intelligent observers. Instead, the right minimum is selected by cosmology in a calculable way.
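
As a toy numerical illustration of the claim that \(\phi\) has many very similar minima (my own sketch with made-up numbers, not the parameters of the actual paper), a slow linear drift added to a cosine already produces a long washboard of nearly degenerate local minima:

```python
# Toy washboard potential V(phi) = -g*phi + A*cos(phi/f): a slow drift plus an
# oscillation gives many nearly degenerate local minima.  The values below are
# arbitrary illustration numbers, not Graham-Kaplan-Rajendran parameters.
import numpy as np

g, A, f = 1e-3, 1.0, 1.0
phi = np.linspace(0.0, 200.0, 200001)
V = -g * phi + A * np.cos(phi / f)

# Interior points lower than both neighbours are local minima
is_min = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
print(int(is_min.sum()), "local minima; their depths span only about",
      round(float(np.ptp(V[1:-1][is_min])), 3), "in these units")
```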

As inflation continues, the Universe is trying numerous minima of the axion. Because it's the axion that drives the relaxation, it may be called the "relaxion". At some moment, this testing period stops and the Universe sits near one of the minima where the Higgs mass happens to be much lower than the Planck mass. At that time, we're left with the Standard Model physics around the minimum that looks like ours. No anthropic selection is needed in their picture and they also claim – controversially – that all the required coefficients in their model have technically natural values.

Their specific model should probably be viewed as a guess. I don't believe it's unique and even if it were unique, it could be modified in various ways. What's more sensible is to treat it as a paradigm. As I said, it's a mixture of the naturalness explanation – because the coefficients are said to be natural – with the anthropic explanations – because there are tons of minima to choose from – and with the Dirac's large number hypothesis – because the large numbers are linked to the duration of the cosmological eras, although it's only the inflation era (which already ended) which is relevant here.

Arkani-Hamed says that for the right minima to be chosen, the inflation has to take billions of years – much, much longer than the tiny split second we usually expect. There are other features that you could consider big disadvantages of the model. For example, the extremely long range of the axion \(\phi\)'s inequivalent values conflicts at least with the simple versions of the "weak gravity conjecture" arguments. The authors quote axion monodromy inflation as another example of a model in the literature that seems to circumvent this principle.

But where does the (huge) ratio of the Planck mass and the Higgs mass come from in their model? They need to get some large numbers from somewhere, right? Well, my understanding is that the particular values ultimately come from parameters that they need to insert mainly through \(g\ll m_{\rm Pl}\) and \(\Lambda \ll m_{\rm Pl}\). These hierarchies are said to be technically natural because the parameters \(g,\Lambda\) "break symmetries".

I tend to think that if you're satisfied with this narrow form of technical naturalness, you could find other, conceptually different, solutions. At the end, a complete microscopic model should allow you to calculate the ratio \(10^{16}\) in some way – perhaps as the exponential of some more natural expressions – and as far as I can see, they haven't done so.

When it comes to the details of the model, I think that it's at most a guess, a "proof of a concept", that I wouldn't take too seriously. On the other hand, the idea that models may exist that explain the large numbers in ways that are neither "full-fledged, metaphysical, anthropic explanation" nor "naturalness with new physics around the electroweak scale" is a correct one. There are other possibilities, possibilities that could make even the large dimensionless numbers "totally calculable" sometime in the future.

by Luboš Motl (noreply@blogger.com) at May 28, 2015 07:03 AM

astrobites - astro-ph reader's digest

Could Mars have experienced a colossal global warming?

Paper Title: Warming Early Mars with CO2 and H2

Authors: Ramses M. Ramirez, Ravi Kopparapu, Michael E. Zugger, Tyler D. Robinson, Richard Freedman and James F. Kasting

First Author’s Institution: at the time of writing, Pennsylvania State University

Paper status: accepted by Nature Geoscience

I recently wrote about new observations of Mars that reveal it once had a liquid ocean (post here) and it piqued a lot of interest. So, I thought it would be nice to follow up that story by offering one possibility for how a young Mars could have sustained a liquid water ocean. Just as a refresher though, the idea is that although Mars is cold and dry today, there is pervasive evidence that there was once liquid water. How is this possible though? Mars teeters on the edge of the canonical “Habitable Zone” (check out this bite on HZ definitions), where liquid water can exist on the surface, so there must’ve existed a period of time when the conditions on Mars allowed for a warm surface. Ramirez et al. model the climate of Mars 3.8 billion years ago, in an attempt to settle this 30-year-old mystery.

Problems with Previous Hypotheses

One possibility is that Mars was wet and warm because of the Late Heavy Bombardment. The Late Heavy Bombardment was a period of time about 4.1-3.8 billion years ago when the inner solar system was getting pummeled with a large number of asteroids. When an asteroid collided with the surface, the surface would temporarily heat up, due to the huge explosions. Each time there was a collision, the surface was warmed for brief intervals, allowing liquid water to flow. Although this seems reasonable, the valley networks on Mars are incredibly complex, analogous to the Grand Canyon. You need about three orders of magnitude more water to carve out the valley networks on Mars than the Late Heavy Bombardment predicts you would get. So unless geology on early Mars worked entirely differently than geology on Earth, this hypothesis doesn’t really work.

Another possibility is that greenhouse gases warmed Mars with a thick CO2 and H2O atmosphere. In other words: the most extreme form of global warming. Every time we introduce more greenhouse gases, such as CO2, into our own atmosphere, we raise the mean surface temperature. On Earth we have about 0.04% CO2, so one can only imagine the temperature of an atmosphere that was, let’s say, 95% CO2. As it turns out, these greenhouse gases alone have not been able to produce above-freezing temperatures. The problem is that not only was Mars different 3.8 billion years ago, so was the Sun. The Sun was only 75% as luminous as it is today. The figure below shows the surface temperatures on Mars for a predominantly CO2 atmosphere at different solar luminosities. Between 70-80% present day solar luminosity (S0), Mars cannot be warmed with just CO2. Something is missing. Ramirez et al. try to fill in that gap with their 1-D climate models.

 

This figure shows the mean surface temperature of early Mars as a function of surface pressure for a 95% CO2, 5% N2 atmosphere. Different colors correspond to different fractions of present day solar luminosity, S/S0. The curves of interest are the orange and blue, which show that for 70-80% present day solar luminosity you cannot attain above-freezing temperatures. When water was present on the surface, the Sun was roughly 75% its present day luminosity. Take away: 95% CO2, 5% N2 alone does not provide enough warming for early Mars.
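
As a quick sanity check on why the faint young Sun is such a problem (a back-of-the-envelope sketch of my own, using rounded illustrative values rather than numbers from the paper), the no-greenhouse equilibrium temperature of early Mars already sits far below freezing:

```python
# Back-of-the-envelope equilibrium temperature of early Mars (no greenhouse):
# T_eq = [S (1 - albedo) / (4 sigma)]^(1/4).  Numbers are rounded, illustrative
# values, not inputs taken from Ramirez et al.
sigma = 5.67e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
S_mars_today = 586.0             # approx. present-day solar flux at Mars, W m^-2
S_early = 0.75 * S_mars_today    # faint young Sun, ~75% of today's luminosity
albedo = 0.25                    # assumed bond albedo

T_eq = (S_early * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"Equilibrium temperature of early Mars: {T_eq:.0f} K (water freezes at 273 K)")
```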

New Climate Model with H2

It seems counterintuitive that hydrogen would be an effective greenhouse gas because it is not a strong absorber in the infrared region of the electromagnetic spectrum. However, when H2 collides with CO2 and N2 molecules, the collisions bump H2 into a higher excited state, letting it absorb in the 8-12 μm region of the electromagnetic spectrum (smack dab in the middle of the mid-infrared region). Introducing this effect into the climate models, Ramirez et al. were able to get above-freezing temperatures, with one caveat: they need at least 5% H2 to do so. The second figure shows this nicely, since the magenta curve (5% H2, 95% CO2 atmosphere) just barely skims the dashed line representing the freezing point of water, at a pressure of 3 bars. The conclusion: in order for Mars to be warm via this process, you need an atmosphere that was at least 5% H2, 95% CO2, with a surface pressure of 3 bars.

This figure is similar to the figure above but now the authors have added in warming effects from H2 and all these curves correspond to early Mars conditions (S/S0 = 0.75). Each curve corresponds to different amounts of H2. Take away: For this mechanism to warm early Mars, you need at least 5% H2 and a surface pressure of 3 bars.

Can Mars retain 5% H2?

Because hydrogen is so light, it gets flung out of the Martian atmosphere (for those in intro physics, the kinetic energy of H2 overcomes the gravitational energy of the planet). Therefore, the discussion leaves the realm of climate studies and enters the realm of geology. In order to know what was going on in the atmosphere 3.8 billion years ago, we need to know what was going on at the surface of Mars. If you can argue that there was a constant supply of hydrogen getting pumped into the atmosphere through geologic processes on the surface (mostly volcanism), then the question of whether or not Mars can retain 5% H2 becomes irrelevant. Ramirez et al. argue that we are unsure of what was going on 3.8 billion years ago (seems understandable), but that if we use what we know about Martian meteorites and the Martian surface today, it is entirely possible to get enough H2 solely from volcanoes.

Hopefully new data from the Mars Science Laboratory will be able to test these hypotheses by looking at the ancient rock record on Mars. Until then, the 30-year-old Martian climate mystery will remain unsolved.

by Natasha Batalha at May 28, 2015 03:30 AM

May 27, 2015

Christian P. Robert - xi'an's og

simulating correlated random variables [cont’ed]

Following a recent post on the topic, and comments ‘Og’s readers kindly provided on that post, the picture is not as clear as I wished it was… Indeed, on the one hand, non-parametric measures of correlation based on ranks are, as pointed out by Clara Grazian and others, invariant under monotonic transforms and hence producing a Gaussian pair or a Uniform pair with the intended rank correlation is sufficient to return a correlated sample for any pair of marginal distributions by the (monotonic) inverse cdf transform. On the other hand, if correlation is understood as Pearson linear correlation, (a) it is not always defined and (b) there does not seem to be a generic approach to simulate from an arbitrary triplet (F,G,ρ) [assuming the three entries are compatible]. When Kees pointed out Pascal van Kooten‘s solution by permutation, I thought this was a terrific resolution, but after thinking about it a wee bit more, I am afraid it is only an approximation, i.e., a way to return a bivariate sample with a given empirical correlation. Not the theoretical correlation. Obviously, when the sample is very large, this comes as a good approximation. But when facing a request to simulate a single pair (X,Y), this gets inefficient [and still approximate].

Now, if we aim at exact simulation from a bivariate distribution with the arbitrary triplet (F,G,ρ), why can’t we find a generic method?! I think one fundamental if obvious reason is that the question is just ill-posed. Indeed, there are many ways of defining a joint distribution with marginals F and G and with (linear) correlation ρ. One for each copula. The joint could thus be associated with a Gaussian copula, i.e., (X,Y)=(F⁻¹(Φ(A)),G⁻¹(Φ(B))) when (A,B) is a standardised bivariate normal with the proper correlation ρ’. Or it can be associated with the Archimedean copula

C(u, v) = (u^(-θ) + v^(-θ) − 1)^(-1/θ),

with θ>0 defined by a (linear) correlation of ρ. Or yet with any other copula… Were the joint distribution perfectly well-defined, it would then mean that ρ’ or θ (or whatever natural parameter is used for that copula) do perfectly parametrise this distribution instead of the correlation coefficient ρ. All that remains then is to simulate directly from the copula, maybe a theme for a future post…
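
To make the Gaussian-copula option concrete, here is a sketch of my own (not code from the ‘Og), with `gaussian_copula_sample` a hypothetical helper name: push a correlated standard normal pair through Φ and then through the inverse cdfs of the target marginals. Note that `rho_z` plays the role of ρ’ above, the correlation of the latent normals; the resulting Pearson correlation of (X,Y) is generally different and would have to be calibrated to hit a target ρ.

```python
# Sketch (not from the post): simulate (X, Y) with marginals F and G via a
# Gaussian copula.  rho_z is the correlation of the latent normals (the rho'
# of the post), not the Pearson correlation of the returned pair.
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, rho_z, F_inv, G_inv, seed=None):
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_z], [rho_z, 1.0]], size=n)
    u = stats.norm.cdf(z)                   # uniforms carrying the Gaussian copula
    return F_inv(u[:, 0]), G_inv(u[:, 1])   # inverse-cdf transform to the marginals

# Example with exponential and gamma marginals (arbitrary choices)
X, Y = gaussian_copula_sample(100_000, 0.7,
                              stats.expon(scale=2.0).ppf,
                              stats.gamma(a=3.0).ppf, seed=1)
print("Pearson:", np.corrcoef(X, Y)[0, 1], " Spearman:", stats.spearmanr(X, Y)[0])
```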


Filed under: Books, Kids, Statistics Tagged: copulas, correlation, Monte Carlo methods, Monte Carlo Statistical Methods, simulating copulas

by xi'an at May 27, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

Why We Don't Know When the Europa Mission Will Launch
NASA has been vague about when the new mission to Europa will launch. There's a reason for that, and it's not just orbital mechanics.

May 27, 2015 06:32 PM

Quantum Diaries

Building an instrument to map the universe in 3-D

This article appeared in Fermilab Today on May 27, 2015.

The future Dark Energy Spectroscopic Instrument will be mounted on the Mayall 4-meter telescope. It will be used to create a 3-D map of the universe for studies of dark energy. Photo courtesy of NOAO

Dark energy makes up about 70 percent of the universe and is causing its accelerating expansion. But what it is or how it works remains a mystery.

The Dark Energy Spectroscopic Instrument (DESI) will study the origins and effects of dark energy by creating the largest 3-D map of the universe to date. It will produce a map of the northern sky that will span 11 billion light-years and measure around 25 million galaxies and quasars, extending back to when the universe was a mere 3 billion years old.

Once construction is complete, DESI will sit atop the Mayall 4-Meter Telescope in Arizona and take data for five years.

DESI will work by collecting light using optical fibers that look through the instrument’s lenses and can be wiggled around to point precisely at galaxies. With 5,000 fibers, it can collect light from 5,000 galaxies at a time. These fibers will pass the galaxy light to a spectrograph, and researchers will use this information to precisely determine each galaxy’s three-dimensional position in the universe.

Lawrence Berkeley National Laboratory is managing the DESI experiment, and Fermilab is making four main contributions: building the instrument’s barrel, packaging and testing charge-coupled devices, or CCDs, developing an online database and building the software that will tell the fibers exactly where to point.

The barrel is a structure that will hold DESI’s six lenses. Once complete, it will be around 2.5 meters tall and a meter wide, about the size of a telephone booth. Fermilab is assembling both the barrel and the structures that will hold it on the telescope.

“It’s a big object that needs to be built very precisely,” said Gaston Gutierrez, a Fermilab scientist managing the barrel construction. “It’s very important to position the lenses very accurately, otherwise the image will be blurred.”

DESI’s spectrograph will use CCDs, sensors that work by converting light collected from distant galaxies into electrons, then to digital values for analysis. Fermilab is responsible for packaging and testing these CCDs before they can be assembled into the spectrograph.

Fermilab is also creating a database that will store information required to operate DESI’s online systems, which direct the position of the telescope, control and read the CCDs, and ensure proper functioning of the spectrograph.

Lastly, Fermilab is developing the software that will convert the known positions of interesting galaxies and quasars to coordinates for the fiber positioning system.

Fermilab completed these same tasks when it built the Dark Energy Camera (DECam), an instrument that currently sits on the Victor Blanco Telescope in Chile, imaging the universe. Many of these scientists and engineers are bringing this expertise to DESI.

“DESI is the next step. DECam is going to precisely measure the sky in 2-D, and getting to the third dimension is a natural progression,” said Fermilab’s Brenna Flaugher, project manager for DECam and one of the leading scientists on DESI.

These four contributions are set to be completed by 2018, and DESI is expected to see first light in 2019.

“This is a great opportunity for students to learn the technology and participate in a nice instrumentation project,” said Juan Estrada, a Fermilab scientist leading the DESI CCD effort.

DESI is funded largely by the Department of Energy with significant contributions from non-U.S. and private funding sources. It is currently undergoing the DOE CD-2 review and approval process.

“We’re really appreciative of the strong technical and scientific support from Fermilab,” said Berkeley Lab’s Michael Levi, DESI project director.

Diana Kwon

by Fermilab at May 27, 2015 06:29 PM

Emily Lakdawalla - The Planetary Society Blog

Pretty pictures of the Cosmos: Special Qualities
Award-winning astrophotographer Adam Block shares some images of nebulae and a galaxy with some special qualities to each of them.

May 27, 2015 04:42 PM

ZapperZ - Physics and Physicists

Wheeler's "Delayed Choice" Experiment Done With Single Atoms
Looks like we now have the first "Delayed Choice" experiment done with single atoms, this one with single He atoms.

Indeed, the results of both Truscott's and Aspect's experiments show that a particle's wave or particle nature is most likely undefined until a measurement is made. The other, less likely option would be that of backward causation – that the particle somehow has information from the future – but this involves sending a message faster than light, which is forbidden by the rules of relativity.

There are now many experiments that support QM's non-realism and quantum contextuality. This latest experiment adds to the body of evidence.

Zz.

by ZapperZ (noreply@blogger.com) at May 27, 2015 01:29 PM

Peter Coles - In the Dark

Still Not Significant

telescoper:

I just couldn’t resist reblogging this post because of the wonderful list of meaningless convoluted phrases people use when they don’t get a “statistically significant” result. I particularly like:

“a robust trend toward significance”.

It’s scary to think that these were all taken from peer-reviewed scientific journals…

Originally posted on Probable Error:


What to do if your p-value is just over the arbitrary threshold for ‘significance’ of p=0.05?

You don’t need to play the significance testing game – there are better methods, like quoting the effect size with a confidence interval – but if you do, the rules are simple: the result is either significant or it isn’t.

So if your p-value remains stubbornly higher than 0.05, you should call it ‘non-significant’ and write it up as such. The problem for many authors is that this just isn’t the answer they were looking for: publishing so-called ‘negative results’ is harder than ‘positive results’.

The solution is to apply the time-honoured tactic of circumlocution to disguise the non-significant result as something more interesting. The following list is culled from peer-reviewed journal articles in which (a) the authors set themselves the threshold of 0.05 for significance, (b) failed to achieve that threshold value for…



by telescoper at May 27, 2015 07:40 AM

Sean Carroll - Preposterous Universe

Warp Drives and Scientific Reasoning

A bit ago, the news streams were once again abuzz with claims that NASA was investigating amazing space drives that violate the laws of physics. And it’s true! If we grant that “NASA” includes “any person employed by NASA,” and “investigating” is defined as “wasting time and money thinking about.”

I say “again” because it was only a few years ago that news spread about a NASA effort aimed at a warp drive, a way to truly break the speed-of-light limit. Of course there are no realistic scenarios along those lines, so the investigators didn’t have any tangible results to present. Instead, they did the next best thing, releasing an artist’s conception of what a space ship powered by their (wholly imaginary) warp drive would look like. (What remains unclear is how the warpiness of the drive affected the design of their fantasy vessel.)


The more recent “news” is not actually about warp drive at all. It’s about propellantless space drives — which are, if anything, even less believable than the warp drives. (There is a whole zoo of nomenclature devoted to categorizing all of the non-existent technologies of this general ilk, which I won’t bother to keep straight.) Warp drives were at least inspired by some respectable science — Miguel Alcubierre’s energy-condition-violating spacetime. The “propellantless” stuff, on the other hand, just says “Laws of physics? Screw em.”

You may have heard of a little thing called Newton’s Third Law of Motion — for every action there is an equal and opposite reaction. If you want to go forward, you have to push on something or propel something backwards. The plucky NASA engineers in question aren’t hampered by such musty old ideas. As others have pointed out, what they’re proposing is very much like saying that you can sit in your car and start it moving by pushing on the steering wheel.

I’m not going to go through the various claims and attempt to sort out why they’re wrong. I’m not even an engineer! My point is a higher-level one: there is no reason whatsoever why these claims should be given the slightest bit of credence, even by complete non-experts. The fact that so many media outlets (with some happy exceptions) have credulously reported on it is extraordinarily depressing.

Now, this might sound like a shockingly anti-scientific attitude. After all, I certainly haven’t gone through the experimental results carefully. And it’s a bedrock principle of science that all of our theories are fundamentally up for grabs if we collect reliable evidence against them — even one so well-established as conservation of momentum. So isn’t the proper scientific attitude to take a careful look at the data, and wait until more conclusive experiments have been done before passing judgment? (And in the meantime make some artist’s impressions of what our eventual spaceships might look like?)

No. That is not the proper scientific attitude. For a very scientific reason: life is too short.

There is a more important lesson here than any fever dreams about warp drives: how we evaluate scientific claims, especially ones we encounter in the popular media. Not all claims are created equal. This is elementary Bayesian reasoning about beliefs. The probability you should ascribe to a claim is not determined only by the chance that certain evidence would be gathered if that claim were true; it depends also on your prior, the probability you would have attached to the claim before you got the evidence. (I don’t think I’ve ever written a specific explanation of Bayesian reasoning, but it’s being discussed quite a bit in the comments to Don Page’s guest post.)

Think of it this way. A friend says, “I saw a woman riding a bicycle earlier today.” No reason to disbelieve them — probably they did see that. Now imagine the same friend instead had said, “I saw a real live Tyrannosaurus Rex riding a bicycle today.” Are you equally likely to believe them? After all, the evidence you’ve been given in either case is pretty equivalent. But in reality, you’re much more skeptical in the second case, and for good reason — the prior probability you would attach to a T-Rex riding a bicycle in your town is much lower than that for an ordinary human woman riding a bicycle.
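A toy calculation makes this concrete: with the same report and the same assumed reliability for your friend, wildly different priors give wildly different posteriors. The numbers in this little R snippet are invented purely for illustration; it just applies Bayes' rule.

# Bayes' rule with toy numbers (all of them invented for illustration)
posterior <- function(prior, p_report_if_true, p_report_if_false) {
  evidence <- p_report_if_true * prior + p_report_if_false * (1 - prior)
  p_report_if_true * prior / evidence
}
p_true  <- 0.9    # chance your friend reports the sighting if it really happened
p_false <- 0.01   # chance of such a report if it did not happen
posterior(prior = 0.5,   p_true, p_false)   # woman on a bicycle: roughly 0.99
posterior(prior = 1e-12, p_true, p_false)   # T-Rex on a bicycle: roughly 1e-10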

The same thing is true for claims about new technology. If someone says, “NASA scientists are planning on sending a mission to Jupiter’s moon Europa,” you would have no reason to disbelieve them — that’s just the kind of thing NASA does. If, on the other hand, someone says “NASA scientists are building a space drive that violates Newton’s laws of motion” — you should be rather more skeptical.

Which is not to say you should be absolutely skeptical. It’s worth spending five seconds asking about what kind of evidence for this outlandish claim we have actually been given. I could certainly imagine getting enough evidence to think that momentum wasn’t conserved after all. The kind of thing I would like to see is highly respected scientists, working under exquisitely controlled conditions, doing everything they can to be hard on their own work, subjecting their experiments to intensive peer review, published in refereed journals, and ideally replicated by competing groups that would love to prove them wrong. That’s the kind of thing we got, for example, when the Higgs boson was discovered.

And what do we have for our propellantless space drive? Hmm — not quite that. No refereed publications — indeed, no publications at all. What started the hoopla was an article on a web forum called NASAspaceflight.com. Which sounds kind of respectable, until you notice it isn’t affiliated with NASA in any way. And the evidence that the article points to is — wait for it — a comment on a post on a forum on that very same web site. Admittedly, the comment was written by someone who actually does work for NASA. But, not to put too fine a point on it, lots of people work for NASA. The folks in this particular “Eagleworks” group at Johnson Space Center are a group of enthusiasts who feel that gumption and a bit of elbow grease might possibly enable them to build spaceships that do things beyond what the laws of physics might naively let you do.

And good for them! Enthusiasm is a virtue. Less virtuous is taking people’s enthusiasm at face value, rather than evaluating claims soberly. The Eagleworks group has succeeded in producing, essentially, nothing at all. Their primary mode of communication seems to be on Facebook. NASA officials, when asked by journalists for comment on the claims they leave on websites, remain silent — they don’t want to have anything to do with the whole mess.

So what we have is a situation where there’s a claim being made that is as extraordinary as it gets — conservation of momentum is being violated. And the evidence adduced for that claim is, how shall we put it, non-extraordinary. Utterly unconvincing. Not worth a minute’s thought. Let’s get on with our lives.

by Sean Carroll at May 27, 2015 12:54 AM

May 26, 2015

ZapperZ - Physics and Physicists

The NSLS II
CERN Courier has a rather informative article on the start-up of NSLS II and its capabilities. It certainly is the newest "from scratch" light source facility (rather than just an upgrade of an existing facility).

I hope they save some parts of the original NSLS and commemorate it with some sort of a marker. After more than 30 years of service, that facility certainly was worth every penny spent on it.

Zz.

by ZapperZ (noreply@blogger.com) at May 26, 2015 10:18 PM

Christian P. Robert - xi'an's og

the Flatland paradox [#2]

Another trip in the métro today (to work with Pierre Jacob and Lawrence Murray in a Paris Anticafé, as the University was closed) led me to infer—warning, this is not the exact distribution!—the distribution of x, namely

f(x|N) = \frac{4^p}{4^{\ell+2p}} {\ell+p \choose p}\,\mathbb{I}_{N=\ell+2p}

since a path x of length l(x) will correspond to N draws if N-l(x) is an even integer 2p and p indistinguishable annihilations in 4 possible directions have to be distributed over l(x)+1 possible locations, with Feller’s number of distinguishable distributions as a result. With a prior π(N)=1/N on N, hence on p, the posterior on p is given by

\pi(p|x) \propto 4^{-p} {\ell+p \choose p} \frac{1}{\ell+2p}

Now, given N and x, the probability of no annihilation on the last round is 1 when l(x)=N and in general

\frac{4^p}{4^{\ell+2p}}{\ell-1+p \choose p}\big/\frac{4^p}{4^{\ell+2p}}{\ell+p \choose p}=\frac{\ell}{\ell+p}=\frac{2\ell}{N+\ell}

which can be integrated against the posterior. The numerical expectation is represented for a range of values of l(x) in the above graph. Interestingly, the posterior probability is constant for large l(x) and equal to 0.8125 under a flat prior over N.
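For readers who want to reproduce that integration, here is one possible R sketch based only on the two displays above; it is a reconstruction rather than the code behind the graph, so treat it as a cross-check:

# posterior weights pi(p|x) proportional to 4^(-p) choose(l+p,p) / (l+2p),
# and conditional probability of no annihilation on the last round l/(l+p)
no_annihil <- function(l, pmax = 5000) {
  p <- 0:pmax
  logw <- -p * log(4) + lchoose(l + p, p) - log(l + 2 * p)  # log posterior weights
  w <- exp(logw - max(logw))                                # normalise for stability
  sum((l / (l + p)) * w) / sum(w)
}
sapply(c(10, 50, 195, 1000), no_annihil)  # settles to a constant for large l(x)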

Getting back to Pierre Druilhet’s approach, he sets a flat prior on the length of the path θ and from there derives that the probability of annihilation is about 3/4. However, “the uniform prior on the paths of lengths lower or equal to M” used for this derivation, which gives a probability of length l proportional to 3^l, is quite different from the distribution of l(θ) given a number of draws N, which, as shown above, looks much more like a Binomial B(N,1/2).

However, being not quite certain about the reasoning involving Feller’s trick, I ran an ABC experiment under a flat prior restricted to (l(x),4l(x)) and got the above, where the histogram is for a posterior sample associated with l(x)=195 and the gold curve is the potential posterior. Since ABC is exact in this case (i.e., I only picked N’s for which l(x)=195), ABC is not to blame for the discrepancy! I asked about the distribution on the Stack Exchange maths forum (and a few colleagues here as well) but got no reply so far… Here is the R code that goes with the ABC implementation:

#observation:
elo=195
#ABC version
T=1e6
el=rep(NA,T)
#draw N uniformly over (l(x),4l(x))
N=sample(elo:(4*elo),T,rep=TRUE)
for (t in 1:T){
  #generate a path of N[t] unit steps in the four directions
  paz=sample(c(-(1:2),1:2),N[t],rep=TRUE)
  #flag U-turns, i.e. immediate backtracks
  uturn=paz[-N[t]]==-paz[-1]
  while (sum(uturn)>0){
    #drop flags overlapping the previous U-turn (caught on a later pass)
    uturn[-1]=uturn[-1]*(1-uturn[-(length(paz)-1)])
    #indices of both legs of each remaining U-turn, then remove them
    uturn=c((1:(length(paz)-1))[uturn==1],
            (2:length(paz))[uturn==1])
    paz=paz[-uturn]
    uturn=paz[-length(paz)]==-paz[-1]
    }
  #record the length of the reduced path
  el[t]=length(paz)}
#subsample to get exact posterior
poster=N[abs(el-elo)==0]

Filed under: Books, Kids, R, Statistics, University life Tagged: ABC, combinatorics, exact ABC, Flatland, improper priors, Larry Wasserman, marginalisation paradoxes, paradox, Pierre Druilhet, random walk, subjective versus objective Bayes, William Feller

by xi'an at May 26, 2015 10:15 PM

astrobites - astro-ph reader's digest

Production of the building blocks of life

Title: The complex chemistry of outflow cavity walls exposed: the case of low-mass protostars
Authors: M. N. Drozdovskaya, C. Walsh, R. Visser, D. Harsono, E. F. van Dishoeck
First Author’s institution: Leiden Observatory, The Netherlands
Status: Accepted for publication in MNRAS

(De-)motivating the authors’ model

As a frequent reader of Astrobites, chances are high that you are a passionate physicist. Then you might also agree with Nobel Prize winner Rutherford, who said: “All of science is either physics or stamp collecting.” Well, I hope you like stamps. Today’s Astrobite deals with chemistry, especially with the production of complex organic molecules – here’s the savior for all of you ASTROwhatsoever – in outflows around low-mass stars.

Figure 1: Physical structure of the two-dimensional model around the protostar (located at the origin). Top: Density of the gas; Middle: Temperature of the dust; Bottom: Visual extinction. The three plots clearly illustrate the outflows and envelope as well as the cavity wall in between.

To put it a little bit more into perspective: a star forms due to gravitational collapse. While the star accretes, it ejects part of its mass in the form of bipolar outflows back into the lower-density environment. Astronomers use the term ‘envelope’ when talking about the surroundings within a distance of about 10,000 AU from the protostar. In the last few years astronomers have observed many stars together with their envelopes and found signs of complex organic molecules (COMs). These COMs are particularly interesting for astrobiologists because they are considered the building blocks of life. However, we don’t know where and how they are formed. This is the point where chemistry, modeling and the authors of today’s paper come into play. The authors use a static two-dimensional model. Static means that the physical conditions such as density or temperature do not change in time; two-dimensional means that the authors consider a slice as an approximation for the more complicated (and computationally more expensive) spherical structure. Apart from that, the authors implement a cavity between the outflow and the rest of the envelope. You can see an illustration of their physical model in Figure 1 (also Figure 1 in the article). For the chemistry, the authors take into account thousands of chemical reactions to model the distribution of COMs in the envelope of a protostar comparable to the young Sun. Additionally, the authors account for the fact that young stars undergo episodes of high luminosity (‘luminosity bursts’) by varying the luminosity of the star in their model.

Production and distribution of complex organics around protostar

The last aspect is critical for the production of COMs. In order to form complex molecules, you need to have molecules with at least one unpaired valence electron, so-called free radicals. But how to produce them? Free radicals can be produced by breaking up the simpler molecules that are in the solid phase by a process called photodissociation. However, photodissociation requires energetic photons, and these photons have to come from the star itself. It turns out that in order to produce a significant amount of COMs, one needs a fair amount of free radicals and thus an – at least temporarily – strong energy source, such as could be provided by young stars undergoing ‘luminosity bursts’.

Assuming that such periods of high luminosity exist, the authors find that there are three different zones around the young star and its bipolar outflow (see Figure 2, which is Figure 10 in the paper):

  • The cavity wall layer, which grows during the evolution of the star,
  • a torus with many complex organic molecules in ice form and
  • an outer envelope with a high abundance of simple molecules, such as water (H2O).


Figure 2: Three distinct zones for different types of molecules around the protostar with its outflow. Solid complex molecules are most abundant in the COM torus (red), while simple molecules are highly predominant in the outer envelope (blue). Note that the envelope can be enriched through infall of molecules in the cavity wall layer (green).

‘Fresh’ molecules can fall from larger distances into the cavity wall and potentially enrich the envelope in COMs. In general, the authors stress that the COM abundance is not the same in the solid and the gas phases. For instance, ices in the torus are rich in COMs, while the gas is very poor in COMs. Another interesting result of their work is that different COMs in the torus have varying peak abundances. The authors conclude that this hints at different lifetimes of the molecules. This aspect is particularly interesting for astronomers since COMs could be a potential tracer of age for young stars.

Sort of a disclaimer

If you’re thinking, ‘Great! That’s what it looks like around a young star’, I have to warn you. The authors do a great job in combining the complexity of chemical reactions and the physics of a young star, but as you probably know from your own life, it is impossible to account for everything. Considering a dynamical, three-dimensional surrounding around the evolving protostar might affect parts of the results significantly. Nevertheless, this work provides new, helpful constraints on the formation of ‘the building blocks of life’ by accounting for the chemistry involved during star formation. By the way, do you remember the Rutherford statement from above? He won his Nobel Prize in chemistry.

by Michael Küffmeier at May 26, 2015 09:54 PM

Emily Lakdawalla - The Planetary Society Blog

Software Glitch Pauses LightSail Test Mission
The Planetary Society’s LightSail test mission has been paused while engineers wait out a suspected software glitch that has silenced the solar sailing spacecraft.

May 26, 2015 09:35 PM

The n-Category Cafe

The Origin of the Word "Quandle"

A quandle is a set equipped with a binary operation with a number of properties, the most important being that it distributes over itself:

a \triangleright (b \triangleright c) = (a \triangleright b)\triangleright (a \triangleright c)
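A quick sanity check: the dihedral quandle on Z/5, with a ▷ b = 2a − b mod 5, is a standard example satisfying this law, and a brute-force verification in R (an illustrative sketch, not anything from the post) takes only a few lines:

n <- 5
tri <- function(a, b) (2 * a - b) %% n   # the quandle operation a ▷ b on Z/5
ok <- TRUE
for (a in 0:(n - 1)) for (b in 0:(n - 1)) for (c in 0:(n - 1))
  if (tri(a, tri(b, c)) != tri(tri(a, b), tri(a, c))) ok <- FALSE
ok   # TRUE: self-distributivity holds for every triple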

They show up in knot theory, where they capture the essence of how the strands of a knot cross over each other… yet they manage to give an invariant of a knot, independent of the way you draw it. Even better, the quandle is a complete invariant of knots: if two knots have isomorphic quandles, there’s a diffeomorphism of $\mathbb{R}^3$ mapping one knot to the other.

I’ve always wondered where the name ‘quandle’ came from. So I decided to ask their inventor, David Joyce—who also proved the theorem I just mentioned.

He replied:

I needed a usable word. “Distributive algebra” had too many syllables. Piffle was already taken. I tried trindle and quagle, but they didn’t seem right, so I went with quandle.

So there you go! Another mystery unraveled.

by john (baez@math.ucr.edu) at May 26, 2015 08:28 PM

The n-Category Cafe

PROPs for Linear Systems

PROPs were developed in topology, along with operads, to describe spaces with lots of operations on them. But now some of us are using them to think about ‘signal-flow diagrams’ in control theory—an important branch of engineering. I talked about that here on the n-Café a while ago, but it’s time for an update.

Eric Drexler likes to say: engineering is dual to science, because science tries to understand what the world does, while engineering is about getting the world to do what you want. I think we need a slightly less ‘coercive’, more ‘cooperative’ approach to the world in order to develop ‘ecotechnology’, but it’s still a useful distinction.

For example, classical mechanics is the study of what things do when they follow Newton’s laws. Control theory is the study of what you can get them to do.

Say you have an upside-down pendulum on a cart. Classical mechanics says what it will do. But control theory says: if you watch the pendulum and use what you see to move the cart back and forth correctly, you can make sure the pendulum doesn’t fall over!

Control theorists do their work with the help of ‘signal-flow diagrams’. For example, here is the signal-flow diagram for an inverted pendulum on a cart:

When I take a look at a diagram like this, I say to myself: that’s a string diagram for a morphism in a monoidal category! And it’s true. Jason Erbele wrote a paper explaining this. Independently, Bonchi, Sobociński and Zanasi did some closely related work:

• John Baez and Jason Erbele, Categories in control.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, Interacting Hopf algebras.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, A categorical semantics of signal flow graphs.

Next week I’ll explain some of the ideas at the Turin meeting on the categorical foundations of network theory. But I also want to talk about this new paper that Simon Wadsley of Cambridge University wrote with my student Nick Woods:

• Simon Wadsley and Nick Woods, PROPs for linear systems.

This makes the picture neater and more general!

You see, Jason and I used signal flow diagrams to give a new description of the category of finite-dimensional vector spaces and linear maps. This category plays a big role in the control theory of linear systems. Bonchi, Sobociński and Zanasi gave a closely related description of an equivalent category, $\mathrm{Mat}(k)$, where:

• objects are natural numbers, and

• a morphism $f : m \to n$ is an $n \times m$ matrix with entries in the field $k$,

and composition is given by matrix multiplication.

But Wadsley and Woods generalized all this work to cover $\mathrm{Mat}(R)$ whenever $R$ is a commutative rig. A rig is a ‘ring without negatives’—like the natural numbers. We can multiply matrices valued in any rig, and this includes some very useful examples… as I’ll explain later.

Wadsley and Woods proved:

Theorem. Whenever $R$ is a commutative rig, $\mathrm{Mat}(R)$ is the PROP for bicommutative bimonoids over $R$.

This result is quick to state, but it takes a bit of explaining! So, let me start by bringing in some definitions.

Bicommutative bimonoids

We will work in any symmetric monoidal category, and draw morphisms as string diagrams.

A commutative monoid is an object equipped with a multiplication:

and a unit:

obeying these laws:

For example, suppose $\mathrm{FinVect}_k$ is the symmetric monoidal category of finite-dimensional vector spaces over a field $k$, with direct sum as its tensor product. Then any object $V \in \mathrm{FinVect}_k$ is a commutative monoid where the multiplication is addition:

(x,y) \mapsto x + y

and the unit is zero: that is, the unique map from the zero-dimensional vector space to $V$.

Turning all this upside down, a cocommutative comonoid has a comultiplication:

and a counit:

obeying these laws:

For example, consider our vector space $V \in \mathrm{FinVect}_k$ again. It’s a cocommutative comonoid where the comultiplication is duplication:

x \mapsto (x,x)

and the counit is deletion: that is, the unique map from $V$ to the zero-dimensional vector space.

Given an object that’s both a commutative monoid and a cocommutative comonoid, we say it’s a bicommutative bimonoid if these extra axioms hold:

You can check that these are true for our running example of a finite-dimensional vector space $V$. The most exciting one is the top one, which says that adding two vectors and then duplicating the result is the same as duplicating each one, then adding them appropriately.
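That compatibility law is easy to verify in coordinates; the following R sketch (illustrative only, my own rather than anything from the paper) checks it for a pair of vectors:

#duplication (comultiplication) and addition (multiplication) on V, acting on
#concatenated coordinate vectors
dup <- function(v) c(v, v)                                          # x -> (x, x)
add <- function(w) { n <- length(w)/2; w[1:n] + w[(n+1):(2*n)] }    # (x, y) -> x + y
x <- c(1, 2, 3); y <- c(10, 20, 30); n <- length(x)
#left-hand side: add, then duplicate
lhs <- dup(add(c(x, y)))
#right-hand side: duplicate both, swap the two middle blocks, then add blockwise
w <- c(dup(x), dup(y))                                              # (x, x, y, y)
w <- c(w[1:n], w[(2*n+1):(3*n)], w[(n+1):(2*n)], w[(3*n+1):(4*n)])  # (x, y, x, y)
rhs <- c(add(w[1:(2*n)]), add(w[(2*n+1):(4*n)]))                    # (x+y, x+y)
all.equal(lhs, rhs)   # TRUE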

Our example has some other properties, too! Each element $c \in k$ defines a morphism from $V$ to itself, namely scalar multiplication by $c$:

x \mapsto c x

We draw this as follows:

These morphisms are compatible with the ones so far:

Moreover, all the ‘rig operations’ in $k$—that is, addition, multiplication, 0 and 1, but not subtraction or division—can be recovered from what we have so far:

We summarize this by saying our vector space $V$ is a bicommutative bimonoid ‘over $k$’.

More generally, suppose we have a bicommutative bimonoid $A$ in a symmetric monoidal category. Let $\mathrm{End}(A)$ be the set of bicommutative bimonoid homomorphisms from $A$ to itself. This is actually a rig: there’s a way to add these homomorphisms, and also a way to ‘multiply’ them (namely, compose them).

Suppose $R$ is any commutative rig. Then we say $A$ is a bicommutative bimonoid over $R$ if it’s equipped with a rig homomorphism

\Phi : R \to \mathrm{End}(A)

This is a way of summarizing the diagrams I just showed you! You see, each $c \in R$ gives a morphism from $A$ to itself, which we write as

The fact that this is a bicommutative bimonoid endomorphism says precisely this:

And the fact that $\Phi$ is a rig homomorphism says precisely this:

So sometimes the right word is worth a dozen pictures!

What Jason and I showed is that for any field $k$, $\mathrm{FinVect}_k$ is the free symmetric monoidal category on a bicommutative bimonoid over $k$. This means that the above rules, which are rules for manipulating signal flow diagrams, completely characterize the world of linear algebra!

Bonchi, Sobociński and Zanasi used ‘PROPs’ to prove a similar result where the field is replaced by a sufficiently nice commutative ring. And Wadsley and Woods used PROPs to generalize even further to the case of an arbitrary commutative rig!

But what are PROPs?

PROPs

A PROP is a particularly tractable sort of symmetric monoidal category: a strict symmetric monoidal category where the objects are natural numbers and the tensor product of objects is given by ordinary addition. The symmetric monoidal category $\mathrm{FinVect}_k$ is equivalent to the PROP $\mathrm{Mat}(k)$, where a morphism $f : m \to n$ is an $n \times m$ matrix with entries in $k$, composition of morphisms is given by matrix multiplication, and the tensor product of morphisms is the direct sum of matrices.
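In coordinates this structure is easy to play with; the following R sketch (again illustrative only) spells out composition and the tensor product of morphisms in $\mathrm{Mat}(k)$ with $k$ the real numbers:

#morphisms m -> n in Mat(k) are n x m matrices; composition is the matrix product
f <- matrix(c(1, 0, 2, 1), nrow = 2)          # a morphism 2 -> 2
g <- matrix(c(1, 1, 0, 1, 2, 3), nrow = 3)    # a morphism 2 -> 3
g %*% f                                       # the composite 2 -> 3
#the tensor product of morphisms is the direct sum, i.e. a block-diagonal matrix
direct_sum <- function(a, b) {
  out <- matrix(0, nrow(a) + nrow(b), ncol(a) + ncol(b))
  out[1:nrow(a), 1:ncol(a)] <- a
  out[nrow(a) + 1:nrow(b), ncol(a) + 1:ncol(b)] <- b
  out
}
direct_sum(f, g)                              # a morphism 2 + 2 -> 2 + 3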

We can define a similar PROP $\mathrm{Mat}(R)$ whenever $R$ is a commutative rig, and Wadsley and Woods gave an elegant description of the ‘algebras’ of $\mathrm{Mat}(R)$. Suppose $C$ is a PROP and $D$ is a strict symmetric monoidal category. Then the category of algebras of $C$ in $D$ is the category of strict symmetric monoidal functors $F : C \to D$ and natural transformations between these.

If for every choice of $D$ the category of algebras of $C$ in $D$ is equivalent to the category of algebraic structures of some kind in $D$, we say $C$ is the PROP for structures of that kind. This explains the theorem Wadsley and Woods proved:

Theorem. Whenever $R$ is a commutative rig, $\mathrm{Mat}(R)$ is the PROP for bicommutative bimonoids over $R$.

The fact that an algebra of $\mathrm{Mat}(R)$ is a bicommutative bimonoid is equivalent to all this stuff:

The fact that $\Phi(c)$ is a bimonoid homomorphism for all $c \in R$ is equivalent to this stuff:

And the fact that $\Phi$ is a rig homomorphism is equivalent to this stuff:

This is a great result because it includes some nice new examples.

First, the commutative rig of natural numbers gives a PROP $\mathrm{Mat}(\mathbb{N})$. This is equivalent to the symmetric monoidal category $\mathrm{FinSpan}$, where morphisms are isomorphism classes of spans of finite sets, with disjoint union as the tensor product. Steve Lack had already shown that $\mathrm{FinSpan}$ is the PROP for bicommutative bimonoids. But this also follows from the result of Wadsley and Woods, since every bicommutative bimonoid $V$ is automatically equipped with a unique rig homomorphism

\Phi : \mathbb{N} \to \mathrm{End}(V)

Second, the commutative rig of booleans

\mathbb{B} = \{F,T\}

with ‘or’ as addition and ‘and’ as multiplication gives a PROP $\mathrm{Mat}(\mathbb{B})$. This is equivalent to the symmetric monoidal category $\mathrm{FinRel}$ where morphisms are relations between finite sets, with disjoint union as the tensor product. Samuel Mimram had already shown that this is the PROP for special bicommutative bimonoids, meaning those where comultiplication followed by multiplication is the identity:

But again, this follows from the general result of Wadsley and Woods!
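To see $\mathrm{Mat}(\mathbb{B})$ concretely, composing two relations is just matrix multiplication over the booleans, with ‘or’ playing the role of addition and ‘and’ that of multiplication; here is an illustrative R sketch:

#boolean matrix 'multiplication': entries live in B = {FALSE, TRUE}
bool_compose <- function(S, R) {
  out <- matrix(FALSE, nrow(S), ncol(R))
  for (i in 1:nrow(S)) for (j in 1:ncol(R))
    out[i, j] <- any(S[i, ] & R[, j])
  out
}
rel_f <- matrix(c(TRUE, FALSE,  FALSE, TRUE,  TRUE, TRUE), nrow = 3, byrow = TRUE)  # a morphism 2 -> 3
rel_g <- matrix(c(TRUE, FALSE, FALSE,  FALSE, TRUE, TRUE), nrow = 2, byrow = TRUE)  # a morphism 3 -> 2
bool_compose(rel_g, rel_f)   # the composite relation 2 -> 2 in Mat(B) = FinRel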

Finally, taking the commutative ring of integers $\mathbb{Z}$, Wadsley and Woods showed that $\mathrm{Mat}(\mathbb{Z})$ is the PROP for bicommutative Hopf monoids. The key here is that scalar multiplication by $-1$ obeys the axioms for an antipode—the extra morphism that makes a bimonoid into a Hopf monoid. Here are those axioms:

More generally, whenever $R$ is a commutative ring, the presence of $-1 \in R$ guarantees that a bimonoid over $R$ is automatically a Hopf monoid over $R$. So, when $R$ is a commutative ring, Wadsley and Woods’ result implies that $\mathrm{Mat}(R)$ is the PROP for Hopf monoids over $R$.
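In the vector-space example the antipode axiom can be checked in coordinates as well; a tiny illustrative R snippet in the same spirit as the sketches above:

#duplicate, multiply one copy by -1 (the antipode), then add: you land on zero,
#which is exactly what the counit followed by the unit gives
antipode_check <- function(x) {
  copies <- c(x, x)                        # comultiplication
  n <- length(x)
  copies[1:n] <- -copies[1:n]              # antipode on the first copy
  copies[1:n] + copies[(n + 1):(2 * n)]    # multiplication
}
antipode_check(c(3, -1, 4))   # 0 0 0, as the Hopf axiom requires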

Earlier, in their paper on ‘interacting Hopf algebras’, Bonchi, Sobociński and Zanasi had given an elegant and very different proof that $\mathrm{Mat}(R)$ is the PROP for Hopf monoids over $R$ whenever $R$ is a principal ideal domain. The advantage of their argument is that they build up the PROP for Hopf monoids over $R$ from smaller pieces, using some ideas developed by Steve Lack. But the new argument by Wadsley and Woods has its own charm.

In short, we’re getting the diagrammatics of linear algebra worked out very nicely, providing a solid mathematical foundation for signal flow diagrams in control theory!

by john (baez@math.ucr.edu) at May 26, 2015 08:28 PM

Emily Lakdawalla - The Planetary Society Blog

Here Are the Science Instruments NASA Will Use to Explore Europa
NASA just announced the science instruments that will be used to understand the enigmatic ocean moon of Europa. The mission is planned to launch sometime in the early 2020s.

May 26, 2015 06:01 PM

Lubos Motl - string vacua and pheno

Clifford Johnson's flawed ideas about string theory's raison d’être
String theory's founding fathers have been a heroic group of solitaires who were developing a remarkable theory whose X Factor only became self-evident to most of the competent theoretical high-energy physicists in the mid 1980s. At that time, during the First Superstring Revolution, string theory became a mainstream subject, the generally appreciated "only game in town" when it comes to the unification of gravity with the rest of fundamental physics.

A subject that becomes mainstream runs the same risk as a stable corporation that has already grown big: it absorbs too many people who are too ordinary, too opportunistic, and too unaware of the reasons why they're in that subject and not another subject. It has too many "followers", which may also be a ticket to stagnation.

As a guy who came to string theory from an unfriendly environment – the post-communist Academia which was mostly hostile towards string theory because its own scientific results were basically zero, much like the scientific contributions of the critics of string theory elsewhere – I always felt the gap between the "real people" who know what they're doing and the "other people". And I had the worrying feeling that the younger generation contains way too many "other people", too many "followers". The "real people" among the younger generations have been dangerously rare for decades.




Clifford Johnson praises an article by Frank Close on the testability wars in the Prospect Magazine. Close tries to be more impartial than other inkspillers who love to write about this non-topic but it's still a redundant article.




I've proven many times that the authors of similar "testability" sermons are worthless piles of pseudointellectual crap and I won't do it again. Everyone who hasn't understood the emptiness of that religion is one of its "believers". What I find new and worse is the rather lame would-be defense by Clifford Johnson. Here are his "three key points":
(1) Many important ideas in physics started out as purely mathematical digressions inspired by physics… You can’t find those ideas and make them work without exploring where they lead for a while, perhaps long before you even know how to test them. So we need that component of physics research as much as we need some of the other aspects, like designing and performing new experiments, etc.
It's true that the character of some theories gets transformed as time goes by but this insight has no relevance for the "string wars" because string theory did not start as a purely mathematical digression and it was not a purely mathematical digression at any point of its history. String theory started as an extremely specific technical attempt to explain some features of extremely tangible experimental observations of the properties and collisions of hadrons.

It was a new paradigm to describe a set of totally empirical facts about a discipline in physics. Some of its predictions were nontrivially right for that purpose, others were wrong, and within a few years, a less revolutionary competitor – QCD – emerged as an explanation of the hadrons. All the string theorists understood that. In 1974, they also noticed that string theory was actually a consistent theory of quantum gravity (plus other forces and particle species). It had the technical properties needed to account for the empirical facts associated with general relativity as well as those linked to quantum mechanics. Some aspects of quantum gravity are similar to those of the strong interaction, others are different. But what one gets from string theory always coincides with what one needs in quantum gravity.

If this insight hadn't been made, string theory should have disappeared from the list of topics studied by physicists. But it was made, quantum gravity had been a puzzle for theorists par excellence for a few decades, and that string success had been the reason why physicists had and have a reason to investigate it.
(2) You never know where a good idea will ultimately find its applications… It is often not where we think it might be initially.
Again, it's true but the relevance of this ignorance for our evaluation of the scientific theories – string theory, its competitors, and all other theories in all scientific disciplines – is zero. Or it should be zero. If we don't know something, we can't use this something to make conclusions! In particular, it is not legitimate for a scientist to judge research directions according to promises.

If an idea, a theory etc. seems uninteresting, unsuccessful, or useless and you believe that it will actually be interesting, successful, or useful, it is up to you to develop the idea or the theory to a form that may be recognized as a sufficiently interesting, successful, or useful one. You may work on that idea or theory at home – like giants of science a few centuries ago. But if you simply don't get to the point where an impartial competent observer may see that there's something interesting, successful, or useful about the idea, your opinion about the virtues of the idea or theory is just faith, and such faith shouldn't play any role in science, much like promises that are not backed by anything.

String theory is considered the most profound theory in contemporary fundamental physics exactly because it's the only one (beyond local quantum field theories) that has delivered something else than just faith and promises.
(3) Of course, testability (confronting an idea with experiment/observation) is key to the enterprise of doing science, there is no doubt in my mind. I do not think we need to start considering whether testability is something we can abandon or not. That’s clearly silly. We just need to be careful about rushing in to declare something testable or not testable before it has had a chance to develop into something useful. Unfortunately, everyone has a different take on just when it is time to make that declaration… and that’s what causes all the shouting and political arguments that generate a lot of heat and precious little light.
I agree that the debates about abandoning testability are silly. All string theorists are only interested in the theory because it actually says something physically nontrivial about Nature – something that is in principle testable.

But I completely disagree with Johnson's "Unfortunately, everyone has a different take on just when it is time". Sorry, I don't have any take on that idea of "deadlines". Instead, I realize that everyone who believes in such an idea of "deadlines" is completely deluded, whatever his precise "dates" are.

A textbook example of these morons is Lee Sm*lin, a babbler who once suggested a five-year plan, like those of Yosip St*lin, in which a theory of everything has to be completed. This is totally silly because no one can know how much time it takes to develop a new paradigm in physics, what will happen in 5 or 20 or 50 years, or how big a currently unknown body of knowledge is going to be in the distant future.

You can't fix Sm*lin's childish criterion by changing the timescale. If you replace 5 years by 4 years, you will be more nationalist – the main difference between St*lin and H*tler was that the former had 5-year plans while the latter had 4-year plans – but the logical, qualitative reasons why this argument is completely irrational will be unchanged.

A sensible physicist – and a sane scientist or person – evaluates theories or "research programs" according to the evidence that is already available at this very moment. Speculations and promises about the future just cannot count! String theory is the only candidate theory of quantum gravity systematically studied by competent physicists simply because it's the only theory with the required properties that is known to science as of 2015. This has nothing to do with its being 5 year old, 30 years old, or 47 years old.

We don't need to speculate about the moment when Planckian physics will be experimentally tested. I have never believed that it would be tested in any foreseeable future, and such testing is in no way needed for questions to be scientific. What's important is that we already know now that the questions string theory addresses are testable in principle. There are various possible answers and it just turns out that the existing empirical evidence combined with careful calculations and reasoning is enough to say a lot about the laws of physics because the things we already know, or can calculate, are extremely constraining.

Asymptotia's comment section

In the comments, Moshe Rozali says that not even philosophers consider Popper's 1934 remarks as the final truth. He also says that string theory became understood to be "a method rather than a model". I don't agree with the sentence in this form. Like quantum field theory or even quantum mechanics in general, string theory is a theory or a theoretical framework. That means that there may be many "models" studied within this framework. But it is something else than saying that string theory is a "method".

String theory has taught us tons of methods and the only reason why we "clump them" is that all these methods are needed to study a particular physical theory, string theory. From a purely predictive viewpoint, string theory is a theoretical framework much like quantum field theory. There are different "versions" of string theory much like there are different "quantum field theories". One of them may be exactly right (quantum field theories in the spacetime may only be approximately right because they don't describe quantum gravity in its characteristic regime) and the other ones are its cousins.

However, if you're capable of looking beyond superficial questions of predictivity, there's a big difference: different quantum field theories impose different laws of physics on Nature. But the different "versions" of string theory are actually solutions to the same laws of physics, the same underlying equations. They are different vacua – different vacuum-like solutions or states – that you may find within the same theory. String theory is primarily one theory. It is a very rich theory with lots of solutions and aspects but it is one theory, nevertheless.

To say that string theory is "one method" means to deny that there are hundreds of very different "methods" used to investigate string theory. And to say that string theory is "many methods" means to completely deny the reasons why these methods are being clumped into one group. The reason is that the methods are just servants to something much more important, namely the one unique theory unifying fundamental physics. Clifford Johnson:
Popular level articles tend not to care much about string theory as a powerful toolbox, presumably because they are aimed at audiences who have been bombarded only with discussions about theories of everything and the holy grail of physics, and the like….
Sorry but there is a very good reason why string theory is presented as the holy grail of physics or a theory of everything rather than "a powerful toolbox". It's the string theory's status as the "holy grail of physics" or a "theory of everything" that really justifies the application of the "tools". Tools are great but they are not the final justification of what's being done. A toilet brush in a composer's villa may be a powerful tool but it's only a tool, not something primary, and that's why it's not being emphasized. The composer may also have a detergent in his house plus many other tools. Some of them are dirty, like some tools used in physics. But none of these tools is the point and none of these tools is the source of controversy, either.

If someone is just using some tools, like the toilet brush, then he is a worker, like a janitor in the composer's house. But not surprisingly, the sensible popular books are about the composer, not the janitor or his toilet brush. It's the musical compositions that justify (and pay for) the hiring of the janitor. The janitor's job doesn't justify the compositions.
Pragmatic discussions of what is calculable in various fields of physics and useful tools for doing so are just not as sexy.
They're not only un-sexy. They're also genuinely secondary, less important. People interested in physics don't want to read about some boring technical stuff and they're right. They want to read about the big picture. It's also the big picture that determines – or should determine – which corners of the theories attract more "hard work with tools".

Unlike the would-be competitors, string theory allows physicists to calculate lots of particular things. String theorists have a good idea what may be computed in principle (where we have a complete enough definition, for some purposes and levels of precision), what may be computed in practice, and what has actually already been computed by someone, but that still doesn't mean that they calculate everything that can be computed because it's not necessarily possible or important or interesting given the required hard work. And people are similarly interested in the "why" questions, the bosses, the justifications of the hard work, simply because those are primary. They mostly get completely wrong answers from the media and popular books these days but that doesn't mean that all of the questions they are asking are wrong. Well, some of the questions are wrong as well, but the question whether string theory is the right theory of all interactions etc. is surely a very good and important one.

Clifford may find string theory's raison d’être unimportant and focus on some technical details of some "hard work with tools". But that doesn't mean that there are no organizing principles that determine "when it makes sense to use some tools or others".
Occasionally you do get the toolbox discussion, but then it is most couched as a separate issue on its own (the “shocking news – string theory may be useful for something!” Type articles….)….
These titles (and the majority of the articles beneath them) are dramatically distorting the status and achievements of string theory. No person who is at least slightly informed about string theory would have any doubts that string theory is useful and important for tons of things. These titles and articles keep on repeating themselves partly because they're being tolerated by opportunist cowards who know better.

But Clifford offers something worse, the C-word ("consensus"):
And on point 1… I tend to look to practicing scientists as the ones who really, as a group, carve out what science really is and isn’t.
Sorry, one can't define science as "whatever is being done by a group calling themselves the scientists". Lots of people – individuals or organized groups – may call themselves scientists but what they do is not science. These people's being numerous can't make their enterprise any better. I will avoid obvious examples from many corners of would-be science because every entry would make this paragraph more controversial than it should be. However, it is very important for a scientist to have non-sociological instincts about what science is and what it is not.

There are many exchanges between Clifford Johnson and Moshe Rozali. I would call them a waste of time. They're mostly politically correct clichés about the importance of communities for science etc., the kind of stuff that insults almost no one (except for people like me who find this PC stuff truly offensive). What's missing in their picture is the point that science is the ultimate meritocratic human activity.

To emphasize that his delusions about the consensus science weren't a typo, Clifford Johnson added:
I am curious though as to whether you [Moshe] have a historical precedent in mind for the “deep study of the subject” aspect. Can you point to a time where the physics community was all adrift and people from outside the community came and, after careful study, pointed the way?
It depends on whom you consider a person from "outside". They were rarely idiots but they were often not considered insiders. Most of the top 19th century fundamental physicists were obsessed with the aether. An outsider, a patent clerk named Albert Einstein, concluded there had to be no aether and he discovered rock-solid evidence – a new theory of spacetime.

The even deeper revolution, quantum mechanics, was ignited by folks like Werner Heisenberg. He barely got his PhD because he wasn't considered a good enough insider – a chap obsessed with all the details of experimental physics and interferometers and similar physics of his time. But he just knew everything he needed to know to build completely new foundations for all of physics. He focused on the "new physics" – ideas as new as relativity or newer – and that worked extremely well for him. It has worked great for many string theorists, too. (There are also lots of string theorists who were or are incredibly good researchers in the older physics, too.)

Heisenberg's groundbreaking discoveries were quickly understood (and elaborated upon) by a bunch of similarly competent physicists but my point is that they always had to evaluate the ideas according to their beef, not according to the author's being an insider or an outsider, if science was or is supposed to be systematically making progress.

The research of quantum gravity had existed for decades – and was done mainly by the "relativistic" community – but up to the early 1970s, it was completely wrong, meaningless, or content-free. The subject needed relative "outsiders" with a much better training in quantum mechanics and particle physics – such as Hawking and string theorists – to acquire beef. Well, I would actually start with Feynman who developed the Feynman rules for general relativity, including the first appearance of the Faddeev-Popov ghosts (for diffeomorphisms). That's perhaps when quantum gravity (in the broader sense) began as a quantitative discipline.

But Clifford Johnson is clearly a general celebrator of the "consensus science" and its track record. Even though there's some "medical bias" in it, Michael Crichton has provided us with the best list of the failing track record of "consensus science" (taken from the Aliens Cause Global Warming speech in 2003):
In addition, let me remind you that the track record of the consensus is nothing to be proud of. Let’s review a few cases.

In past centuries, the greatest killer of women was fever following childbirth. One woman in six died of this fever.

In 1795, Alexander Gordon of Aberdeen suggested that the fevers were infectious processes, and he was able to cure them. The consensus said no.

In 1843, Oliver Wendell Holmes claimed puerperal fever was contagious, and presented compelling evidence. The consensus said no.

In 1849, Semmelweiss demonstrated that sanitary techniques virtually eliminated puerperal fever in hospitals under his management. The consensus said he was a Jew, ignored him, and dismissed him from his post. There was in fact no agreement on puerperal fever until the start of the twentieth century. Thus the consensus took one hundred and twenty five years to arrive at the right conclusion despite the efforts of the prominent “skeptics” around the world, skeptics who were demeaned and ignored. And despite the constant ongoing deaths of women.

There is no shortage of other examples. In the 1920s in America, tens of thousands of people, mostly poor, were dying of a disease called pellagra. The consensus of scientists said it was infectious, and what was necessary was to find the “pellagra germ.” The US government asked a brilliant young investigator, Dr. Joseph Goldberger, to find the cause. Goldberger concluded that diet was the crucial factor. The consensus remained wedded to the germ theory.

Goldberger demonstrated that he could induce the disease through diet. He demonstrated that the disease was not infectious by injecting the blood of a pellagra patient into himself, and his assistant. They and other volunteers swabbed their noses with swabs from pellagra patients, and swallowed capsules containing scabs from pellagra rashes in what were called “Goldberger’s filth parties.” Nobody contracted pellagra.

The consensus continued to disagree with him. There was, in addition, a social factor-southern States disliked the idea of poor diet as the cause, because it meant that social reform was required. They continued to deny it until the 1920s. Result – despite a twentieth century epidemic, the consensus took years to see the light.

Probably every schoolchild notices that South America and Africa seem to fit together rather snugly, and Alfred Wegener proposed, in 1912, that the continents had in fact drifted apart. The consensus sneered at continental drift for fifty years. The theory was most vigorously denied by the great names of geology – until 1961, when it began to seem as if the sea floors were spreading. The result: it took the consensus fifty years to acknowledge what any schoolchild sees.

And shall we go on? The examples can be multiplied endlessly. Jenner and smallpox, Pasteur and germ theory. Saccharine, margarine, repressed memory, fiber and colon cancer, hormone replacement therapy. The list of consensus errors goes on and on.

Finally, I would remind you to notice where the claim of consensus is invoked. Consensus is invoked only in situations where the science is not solid enough.

Nobody says the consensus of scientists agrees that E=mc2. Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way.
The people who claim that it's necessary for a field to be flooded by outsiders if it wants to be on the right track are wrong. But so are the people who say that as long as there are insiders, things will be on the right track. None of these sociological recipes works. People doing some research, whether they are considered insiders or outsiders, simply have to do the work correctly, carefully, honestly. They have to learn about the relevant results by others and they have to systematically eliminate ideas that have been falsified.

They have to maintain standards and that's the best thing they can do for their field to avoid dead ends and wrong tracks. Musings about mysterious abilities of outsiders or insiders can't replace the tough, rational, mathematically solid, empirically rooted arguments that science demands.

David Bailin:
Equally, particular superstring theories are obviously false, heterotic \(E_8\times E_8\) for example.
The heterotic \(E_8\times E_8\) string (on Calabi-Yaus or closely related manifolds) remains one of the most viable – if not the most viable – categories of models to describe all interactions and matter in Nature. See some recent heterotic pheno papers.

Sabine Hossenfelder:
Physicists are of course biased when it comes to judging their own theories. That is a priori not a problem. The problem is that they’re not educated to become aware of and account for their biases.
A good scientist may be excited, overexcited, or underexcited about some ideas. Science actually has tons of examples in which the big discoverers underestimated the importance of their discoveries – Max Planck and his black-body curve derivation is a great example. Albert Einstein and general relativity's prediction of a non-static Universe is another one. Great scientists are actually more likely to underestimate their theories than to overestimate them – but it's still a mistake one should avoid!

Better scientists are generally able to divide ideas into good ones and bad ones more accurately than worse scientists and non-scientists. And a good scientific community is able to divide scientists into good ones and bad ones! The meritocracy at these two levels is what is needed for scientific progress. There will always be biased people and incompetent people. They're not a problem for the progress in science as long as they have not conquered science.

Sabine Hossenfelder:
My pet peeve is that it’s extremely hard to change topics after PhD.
A normal intelligent person has his formative years up to a certain age – and he learns the majority of the framework while young enough. What he learns later is usually "incremental" and what he does is mostly "applications". But in general, good enough people may learn very new things throughout their lives and they may achieve great things in them.

The actual reason why "some people find it hard to change topics after their PhD" is that they are not really good at anything, not even the topic of their PhD which they would like to abandon. But they realize that they're not treated seriously in other topics. Here, the most typical problem is that they are treated seriously when it comes to the topic of their PhD thesis even though this shouldn't be the case, either. The actual problem is that some (and probably many) PhDs are handed out to the wrong people.

But if you think that the specialization of your PhD thesis is the only barrier that prevents you from successfully working on other topics, well, maybe you should get the PhD in the new field, too! Individual people and institutions may err in individual examples – when they don't trust people talking about a subject in which they don't have a PhD (even though these non-holders of the degree may be right and very wise) – but there is also a very good reason why much of this work on "out of expertise" topics is being ignored: Most of it is bad-quality work or downright rubbish. Most people are just laymen when it comes to different subjects. Even if they are self-confident, they wouldn't really get the PhDs from the other subjects. They honestly wouldn't deserve it. This is the reason why it's usually totally right that it's not trivial for generic people to "change their topics after their PhD". There don't seem to be reasons to think that they would be good at it and in most cases, they are actually not good at it.

Hossenfelder:
And while I am here, let me also ask you a question. I have the vague impression that there are not so many people left working on string theory as ‘the theory of everything’ and instead most are now doing AdS/CFT and extensions thereof (dS, time-dependent, etc), dualities in general and applications. Do you share this impression?
This comment is a mixture of truths that are unfortunate – while she is happy about them – as well as some untruths and deep misunderstandings. To mention an example of the latter, it's complete nonsense to suggest that the research into string/M-theoretical dualities is not a "research of string theory as a theory of everything". It is arguably the key part of the foundational, big-picture work on string theory. Dualities (and the foundational parts of the AdS/CFT research, too) are important for our understanding of what string theory is; and they are crucial for a mapping and understanding of realistic vacua, too.

When it comes to the atmosphere in the broader public, I surely do share her impression, however. Opportunist cowards such as Clifford Johnson and, to a lesser extent, Moshe Rozali prefer good relationships with aggressive subpar scientists such as Sabine Hossenfelder. So in many cases, it's dishonest zeroes similar to her who determine the discourse while the likes of Rozali and Johnson shut their mouths about the actual status of string theory, minimize their research into far-reaching aspects of the theory because they saw that there are people who are hostile towards it, and reduce themselves to the masters of the "toolbox" whose main purpose is not to offend anyone.

Every competent high-energy theoretical physicist knows that the evidence supports string theory's being a "theory of everything" at least as much as it did 20 or 30 years ago. But most of these people are unfortunately not courageous so most people never learn that the Internet and newspapers have been flooded by false demagogy produced by subpar pseudointellectuals such as Hossenfelder.

Clifford Johnson replied with a tirade basically against string theory's being a unifying theory of all interactions. Random posters mix random ideas from random preprints into this conceptual debate. At least Moshe Rozali writes something sensible about the complex operations that tests of a theory typically demand.

There are lots of points in these discussions; some of them are valid, most of them are invalid. But the overall impression – and the actual, at least apparent goal – of all these texts and debates is very clear, namely to put string theory on trial. A universally overlooked key point is that a person willing not to celebrate mankind's most viable, deepest, most unifying description of Nature and to put it on trial instead is an uncultural savage, a wild animal that should be treated seriously by no decent and educated human being.

Unfortunately, this point is not being made even by folks like Moshe Rozali and especially Clifford Johnson. Similar guys contribute to the proliferation of anti-science demagogues pretending to be scientists, such as Sabine Hossenfelder. In the environment defined by the likes of Clifford Johnson, the likes of Hossenfelder have no natural enemies. What a surprise that their percentage is growing.

by Luboš Motl (noreply@blogger.com) at May 26, 2015 05:05 PM

The n-Category Cafe

SoTFoM III and The Hyperuniverse Programme

Following SoTFoM II, which managed to feature three talks on Homotopy Type Theory, there is now a call for papers announced for SoTFoM III and The Hyperuniverse Programme, to be held in Vienna, September 21-23, 2015.

Here are the details:

The Hyperuniverse Programme, launched in 2012, and currently pursued within a Templeton-funded research project at the Kurt Gödel Research Center in Vienna, aims to identify and philosophically motivate the adoption of new set-theoretic axioms.

The programme intersects several topics in the philosophy of set theory and of mathematics, such as the nature of mathematical (set-theoretic) truth, the universe/multiverse dichotomy, the alternative conceptions of the set-theoretic multiverse, the conceptual and epistemological status of new axioms and their alternative justificatory frameworks.

The aim of SotFoM III+The Hyperuniverse Programme Joint Conference is to bring together scholars who, over the last years, have contributed mathematically and philosophically to the ongoing work and debate on the foundations and the philosophy of set theory, in particular, to the understanding and the elucidation of the aforementioned topics. The three-day conference, taking place September 21-23 at the KGRC in Vienna, will feature invited and contributed speakers.

I wonder if anyone will bring some category theory along to the meeting. Perhaps they can answer my question here.

Further details:

Invited Speakers

  • T. Arrigoni (Bruno Kessler Foundation)
  • G. Hellman (Minnesota)
  • P. Koellner (Harvard)
  • M. Leng (York)
  • Ø. Linnebo (Oslo)
  • W.H. Woodin (Harvard) and
  • I. Jané (Barcelona) [TBC]

Call for papers: We invite (especially young) scholars to send their papers/abstracts, addressing one of the following topical strands:

  • new set-theoretic axioms
  • forms of justification of the axioms and their status within the philosophy of mathematics
  • conceptions of the universe of sets
  • conceptions of the set-theoretic multiverse
  • the role and importance of new axioms for non-set-theoretic mathematics
  • the Hyperuniverse Programme and its features
  • alternative axiomatisations and their role for the foundations of mathematics

Papers should be prepared for blind review and submitted through EasyChair on the following page:

https://easychair.org/conferences/?conf=sotfom3hyp

We especially encourage female scholars to send us their contributions. Accommodation expenses for contributed speakers will be covered by the KGRC.

Key Dates:

  • Submission deadline: 15 June 2015
  • Notification of acceptance: 15 July 2015

For further information, please contact:

sotfom [at] gmail [dot] com

or alternatively one of: Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at); Neil Barton (bartonna [at] gmail [dot] com); Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at); John Wigglesworth (jmwigglesworth [at] gmail [dot] com)

by david (d.corfield@kent.ac.uk) at May 26, 2015 02:02 PM

Peter Coles - In the Dark

League Table Positions

Among the things I didn’t have time to blog about over a very busy Bank Holiday Weekend was the finish of the English Premiership season. I haven’t posted much about my own team, Newcastle United, this season because I haven’t been able to think of anything particularly positive to say. Since Alan Pardew quit in January to join Crystal Palace, Newcastle slumped to such an alarming extent that they went into their last game of the season (against West Ham) just two points above the drop zone. Had they lost their game, which did not seem unlikely on the basis of their recent form, and had Hull won against Manchester United, which did not seem unlikely on the grounds that Man Utd would finish in 4th place whatever happened in that game, then Newcastle would have been relegated to the Championship. In the event, however, Newcastle won 2-0 which made them safe while Hull could only draw 0-0 which meant that Newcastle would have survived even if they had lost against West Ham. Moreover, Sunderland also lost their last game, which meant that the final Premier League Table looked like this:

[Image: the final Premier League table]

(courtesy of the BBC Website). The important places are 15 and 16, obviously. The natural order of things has been restored….

Another League Table came out over the Bank Holiday. This was the annual Guardian University Guide. I’m deeply sceptical of the value of these league tables, but there’s no question that they’re very important to potential students so we have to take them seriously. This year was pretty good for Sussex as far as the Guardian Table is concerned: the University of Sussex rose to 19th place overall and the two departments of the School of Mathematical and Physical Sciences both improved: Physics & Astronomy is back in the top 10 (at number 9, up from 11th place last year) and Mathematics rose 22 places to take 21st place. Gratifyingly, both finished well above Sunderland.

While these results are good news in themselves, at least around my neck of the woods, as they will probably lead to increased applications to Sussex from students next year, it is important to look behind the simplistic narrative of “improvements”. Since last year there have been several substantial changes to the Guardian’s methodology. The weighting given to “spend-per-student” has been reduced from 15% to 10% of the overall score and the method of calculating “value added” has excluded specific predictions based on “non-tariff” students (i.e. those without UK entry qualifications, especially A-levels). What the Guardian consistently fails to do is explain the relative size of the effect of arbitrary methodological changes on its tables compared to actual changes in, e.g., cash spent per student.
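To get a feel for how much a tweak like that can matter on its own, here is a toy calculation (my own illustration, with entirely invented scores for two hypothetical universities, not the Guardian's actual data or formula): simply moving the spend-per-student weight from 15% to 10% can swap the order of two institutions whose underlying numbers have not changed at all.

# Toy illustration (hypothetical numbers, not the Guardian's actual data or
# methodology): two universities with unchanged underlying scores can swap
# rank purely because the spend-per-student weight drops from 15% to 10%.
scores = {                      # component scores out of 100
    "Uni A": {"spend": 90, "other": 60},
    "Uni B": {"spend": 55, "other": 66},
}

def total(s, spend_weight):
    return spend_weight * s["spend"] + (1 - spend_weight) * s["other"]

for w in (0.15, 0.10):
    ranking = sorted(scores, key=lambda u: total(scores[u], w), reverse=True)
    print(f"spend weight {w:.0%}: " +
          ", ".join(f"{u}={total(scores[u], w):.1f}" for u in ranking))
# Uni A comes out ahead at 15% weight, Uni B at 10%, with identical raw data.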

Imagine the outrage there would be if football teams were not told until the end of a Premier League season how many points would be awarded for a win….


by telescoper at May 26, 2015 11:53 AM

ATLAS Experiment

The oldest observer state of CERN is no longer just observing!

If you have ever been to a bazaar in Turkey, you would know that (1) you have to bargain hard; (2) you have to carefully examine what you buy. But sometimes this attitude goes way too far. In our case about half a century…

Turkey had been an observer state of CERN since 1961 but, as of 6 May 2015, we are associate members! And it seems that we like being the beta tester of whatever status is available around. In ’61 this was observer status, and now it is associate membership[1].

Jokes aside, of course we have not just been watching from a distance in all those years. Turkish teams have been involved in a number of past experiments, such as CHORUS, SMC, CHARM-II in the 80s and 90s and even going back to NA31/2, PS160 and WA17, the days when collaborations did not try to find fancy backronyms such as our beloved ATLAS! Nowadays Turkish teams are involved in AMS, CAST, CLIC, ISOLDE and OPERA in addition to the four major LHC experiments. Around 150 Turkish nationals are users of CERN, three-fourths of whom are from Turkish institutes.

So if you are in ATLAS you should know quite a few of us; after all, we have been collaborating since 1994, when the late Professor Engin Arık[2] joined ATLAS with her team from Boğaziçi University of İstanbul.


Turkish teams meet at the national ATLAS workshop at Bogazici University on 23 June 2014. IMAGE: Serkant Çetin

Currently we are about 30 people from six different institutes clustered into two teams: Boğaziçi and Ankara, one in Europe, one in Asia. We wonder if Engin could have imagined that we would one day be writing this blog article while sitting right under the bridge that connects Europe and Asia.


<Serkant|Bosphorus|Erkcan> IMAGE: selfie by Erkcan Özcan

 

Confident that our associate membership will add another bridge – a scientific and cultural one – between Europe and Asia, we now proceed to have our drinks on the shore of the Bosphorus on this beautiful summer night to celebrate. You are all welcome to join.

 

 

[1] Turkey becomes associate member of CERN: http://home.web.cern.ch/about/updates/2015/05/turkey-becomes-associate-member-state-cern

[2] Turkish air crash is a great loss for physics: http://atlas-service-enews.web.cern.ch/atlas-service-enews/Turk_special/index.html; http://icpp-istanbul.dogus.edu.tr/in_memoriam.htm


Serkant Çetin is the chair of the Physics Department at Doğuş University of İstanbul. He has been a member of the ATLAS Collaboration since 1997 and is currently acting as the national contact physicist whilst running the national funding project for ATLAS. Serkant is also participating in the CAST experiment at CERN, the BESIII experiment at IHEP and is a member of the Turkish Accelerator Center project.
Erkcan Özcan is the current leader of the ATLAS Boğaziçi team. He is happy that a whole lot of the group’s administrative workload is being borne by the deputy team leader (Serkant), and most of the analysis code is being written by the capable graduate students in the team. When he finds time to do actual physics himself, he is happier and can be considered a likeable chap.

by serkant at May 26, 2015 08:57 AM

May 25, 2015

Clifford V. Johnson - Asymptotia

Because…
...going more than a week without baking something felt just. plain. wrong. Walnut bread. (Slightly dark in finish, but tasty. Click for larger view.) "...store overnight before serving." Let's pretend (munch, munch) that I did not see that instruction (munch, munch...) -cvj Click to continue reading this post

by Clifford at May 25, 2015 03:30 PM

Peter Coles - In the Dark

Laurie Anderson – All the Animals

I’m taking a short break from the combination of marking examinations and listening to cricket which has been my Bank Holiday Monday so far, so I thought I’d post a brief report on the show I went to last night, which happened to be the last night of this year’s Brighton Festival.

All the Animals was a show put together especially for this year’s Brighton Festival by renowned performance artist Laurie Anderson. She is most famous (at least in the UK) for the amazing record O Superman which was a smash hit in 1981; I posted about that on this blog here. A large number of last night’s audience members were clearly devout Laurie Anderson fans, but I had never seen one of her live shows so I wasn’t sure what to expect.

It turned out to be very much a one-woman show, with Laurie Anderson alone on stage. The show consisted of her telling stories about various animals, including her own pet terrier, Lula Belle, who is now sadly deceased. In between the stories there were musical interludes in which she performed on an electric violin with various digital effects thrown in, and sometimes she accompanied herself as she performed the stories. The show was shot through with a wry humour and Laurie Anderson herself came across as a very engaging personality.

I had been told that her performances were often dazzling multimedia events, but this turned out not to be like that at all. The big screen at the back of the stage was only used a couple of times, once to show excerpts from a list of extinct animal species and once to show a couple of Youtube clips of Lula Belle. There were no dramatic lighting or other effects either. It was all very low key really. Far from the multimedia extravaganza I had anticipated.

There was enthusiastic applause at the end of the show, but to be honest I felt a little disappointed. Don’t get me wrong: I enjoyed the show, and still think Laurie Anderson is a really interesting artist, but I suppose I had just built up an expectation of something with a more exciting visual element.

So that’s the end of this year’s Brighton Festival. Still, yesterday I posted the following tweet:

I guess all three predictions proved false. England didn’t lose on Sunday and indeed are very much favourites to win the Test match as I write this. Newcastle United won their game against West Ham and avoided relegation to the Championship. And Laurie Anderson, though definitely interesting, didn’t quite qualify as “fabulous”…


by telescoper at May 25, 2015 01:59 PM

Lubos Motl - string vacua and pheno

Possible particle discoveries at LHC
Guest blog by Paul Frampton

Dear Luboš, here is my guest blog associated with my recent paper entitled
"Lepton Number Conservation, Long-Lived Quarks and Superweak Bileptonic Decays"
posted at 1504.05877 [hep-ph], which suggests that the LHC seek three additional quarks; but, as promised, I shall also include a general overview of what new particles might show up in Run II.

As is well known, discovery of the Higgs Boson in Run I completed the content of the standard model. Run II at \(13\TeV\), later expected to reach \(14\TeV\), is just beginning and what additional particle, if any, will be discovered is surely the central issue of particle phenomenology.




About the possible particle discoveries at LHC, there follow five subsections: (i) No new particle; (ii) A surprise particle; (iii) A super-partner and/or WIMP; (iv) Three additional quarks; (v) Discussion.

As a disclaimer, I shall assume as seems extremely likely that effects of extra dimensions are inaccessible at LHC energies.




(i) No new particle. It is a logical possibility that, running at its maximum \(14\TeV\) center-of-mass energy and even after accumulating an integrated luminosity of a few inverse attobarns, no additional particle will be detected beyond the standard model. I believe everybody would agree it is more agreeable if Nature is not like this, although one must keep an open mind.

(ii) A surprise particle. The LHC may discover a new particle that nobody has predicted. By definition, it is hard to imagine what this could be and there is nothing further to discuss, except that in some ways this would be the most exciting possibility for stimulating theory.

(iii) A super-partner and/or WIMP. The squark, slepton and gaugino super-partners predicted by supersymmetry (Susy) were expected to show up during Run I. Susy theory goes back to the 1970s and three pieces of indirect empirical evidence supporting Susy have arisen from
  1. canceling the quadratic divergence associated with the Higgs scalar (1970s);
  2. a dark matter WIMP candidate (H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983)) arising from R-parity as a mixture of gaugino and higgsino;
  3. improved accuracy in grand unification of the three gauge couplings (U. Amaldi, W. de Boer and H. Furstenau, Phys. Lett. B260, 447 (1991)) when super-partners are included.

In my view, all three of these empirical motivations for Susy were somewhat eroded by their all appearing in nonsupersymmetric quiver theories during the 2000s. Nevertheless, the NMSSM is the most popular candidate to go beyond the standard model. Regarding dark matter (point 2), independently of Susy, there is reason to think there might be a WIMP in the Run II energy region because of the WIMP miracle, which leads naturally to the correct relic density for dark matter (see the rough estimate sketched below). If such a WIMP is produced at the LHC, it will be essential to confirm its cosmological role by direct terrestrial detection and/or indirect astrophysical detection. More generally, however, masses considered for dark matter range over almost a hundred orders of magnitude, from a particle whose Bohr radius is the galactic size to an intermediate-mass black hole with a hundred thousand solar masses.
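As a reminder of why the WIMP miracle is suggestive, here is the rough estimate mentioned above (my own sketch, using the textbook freeze-out relation rather than anything computed in 1504.05877):

# Standard freeze-out estimate (textbook relation, not from the paper):
# Omega_chi h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.  A weak-scale annihilation
# cross section of ~3e-26 cm^3/s lands close to the observed ~0.12.
sigma_v = 3e-26                       # cm^3/s, typical weak-scale value
omega_chi_h2 = 3e-27 / sigma_v
print(f"Omega_chi h^2 ~ {omega_chi_h2:.2f}")   # prints ~0.10
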
(iv) Three additional quarks. I finally come to my paper 1504.05877 which is an example of motivated gauge theory model building. In choosing the gauge group and chiral matter representations, a crucial constraint is the absence of triangle anomalies, see S.L. Adler, Phys. Rev. 177, 2426 (1969). This constraint led to the confident prediction of the charmed quark (C. Bouchiat, J. Iliopoulos and P. Meyer, Phys. Lett. B38, 519 (1972); D.J. Gross and R. Jackiw, Phys. Rev. D6, 477 (1972)) and, after the bottom quark was discovered, an equally confident prediction of the top quark.

In the standard model each family separately cancels the triangle anomaly in a nontrivial way but this does not constrain the number of families. The simplest extension which addresses this is the 331-model, which was invented in Phys. Rev. Lett. 69, 2889 (1992) and independently by F. Pisano and V. Pleitez, Phys. Rev. D46, 410 (1992). The electroweak \(SU(2)\) is enlarged to \(SU(3)\) and each family acquires one additional quark: \(D\) and \(S\) have \(Q=-4/3\) and \(T\) has \(Q=5/3\). There are five triangle anomalies in \(SU(3)_C \times SU(3)_L \times U(1)_X\) which are potentially troublesome:\[

(3_C)^3 \quad (3_C)^2 X \quad (3_L)^3 \quad (3_L)^2 X \quad X^3

\] in a self-explanatory notation. Each individual family possesses non-vanishing values for the 3rd, 4th and 5th anomalies which cancel among families only when the number of families equals three. Aside from this motivation of explaining the number of families, the 331 model contains a scale \(4\TeV\) below which the 331 symmetry must be broken to the SM, and thus the new physics is expected to be within reach of Run II.
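For readers who want to see what this bookkeeping looks like, here is a small sketch (my own illustration, not taken from the paper) of the per-family cancellation in the standard model, using the usual hypercharge assignments; the analogous 331 sums involve the \(X\) charges and, as stated above, the last three of them only vanish when summed over three families.

# Per-family hypercharge anomaly check for the standard model (my own sketch,
# not the 331-model sums of the paper, which use the X charges instead).
# Each left-handed Weyl multiplet of one family: (Y, dim_SU3, dim_SU2).
from fractions import Fraction as F

family = [
    (F(1, 6), 3, 2),    # quark doublet Q
    (F(-2, 3), 3, 1),   # u_R^c
    (F(1, 3), 3, 1),    # d_R^c
    (F(-1, 2), 1, 2),   # lepton doublet L
    (F(1), 1, 1),       # e_R^c
]

anomalies = {
    "[SU(3)]^2 U(1)": sum(Y * d2 for Y, d3, d2 in family if d3 == 3),
    "[SU(2)]^2 U(1)": sum(Y * d3 for Y, d3, d2 in family if d2 == 2),
    "[U(1)]^3":       sum(Y**3 * d3 * d2 for Y, d3, d2 in family),
    "grav^2 U(1)":    sum(Y * d3 * d2 for Y, d3, d2 in family),
}
print(anomalies)   # every coefficient comes out exactly zero for one family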

There are five additional gauge bosons which include a \(Z\) prime and two pairs of bileptons \((Y^{++}, Y^+)\) and \((Y^{--}, Y^-)\) which carry lepton number \(L=-2\) and \(L=+2\) respectively. The three extra quarks carry \(L=+2\) (\(D\), \(S\)) and \(L=-2\) (\(T\)). This implies that the decays of these quarks are mediated by bileptons, with a rate suppressed by the heavy mass of these gauge bosons. The decays are given explicitly in 1504.05877 but a key signature will be the displaced vertices caused by their long lifetime. Perhaps the silicon vertex detectors should have a radius bigger than the one meter selected.

It should be added that there exist variants of the 331-model achieved by changing the definition of the electric charge operator, see e.g. D. L. Anderson and M. Sher, Phys. Rev. D72, 095014 (2005), although this becomes non-minimal by adding more leptons.

Long-lived quarks have been studied for a fourth family where the longevity is due to the very small mixing allowed with the first three families, see P.H. Frampton and P.Q. Hung, Phys. Rev. D58, 057704 (1998) and H. Murayama, V. Rentala, J. Shu and T. Yanagida, Phys. Lett. B705, 208 (2011). Here the long lifetime has a different cause. Discovery of the 331-model would provide a complementary explanation of three families to that provided in M. Kobayashi and T. Maskawa, Prog. Theor. Phys. 49, 652 (1973). It might suggest that the gauge structure be further extended, for example so that the weak hypercharge U(1) is subsumed into a further SU(3) group.

(v) Discussion.
I happen to be writing this in the mezzanine of the Andrew Wiles Building, which houses the Oxford Mathematics Department, so, especially as this is for Luboš, I would like to be able to write about string theory predictions for the LHC; but string theory cannot now make any specific prediction. Further research on Calabi-Yau and other compactifications may succeed. Four-dimensional string compactifications generally aim at a Susy model of the type (iii) and hence string theorists hope that Susy shows up. Top-down compactifications do not, however, yield anything like (iv), which is an example of bottom-up model building that seems old-fashioned but has been successful in the past.

The data which will emerge from Run II will be enlightening and tell us about how Nature really works.

So Luboš, that is my guest blog.

Best regards,
Paul Frampton

by Luboš Motl (noreply@blogger.com) at May 25, 2015 04:33 AM

May 24, 2015

Lubos Motl - string vacua and pheno

Far-reaching features of physical theories are not sociological questions
Backreaction has reviewed a book on the "philosophy of string theory" written by the trained physicist and philosopher Richard Dawid, who may appear as a guest blogger here at some point.

Many of the statements sound reasonable – perhaps because they have a kind of a boringly neutral flavor. But somewhere in the middle, a reader must be shocked by this sentence – whose content is then repeated many times:
Look at the arguments [in favor of string theory] that he raises: The No Alternatives Argument and the Unexpected Explanatory Coherence are explicitly sociological.
Oh, really?




These two properties – or, if you want to be a skeptic, claimed properties – of string theory are self-evidently (claimed) intrinsic mathematical properties of string theory. String theory seems to have no mathematically possible alternatives; and its ideas fit together much more seamlessly than what you would expect for a generic man-made theory of this complexity a priori.




If you're not familiar with the recent 4 decades in theoretical physics, you may have doubts whether string theory actually has these properties. But why would you think that these very questions are sociological in character?

If real-world humans want to answer such questions, they have to rely on the findings that have been made by themselves or other humans (unless some kittens turn out to be really clever), and only those that have been done by now. But the same self-evident limitations apply to every other question in science. We only know things about Nature that followed from the experience of humans, and only those in the past (and present), not those in the future. Does it mean that we should declare all questions that scientists are interested in to be "sociological questions"?

Postmodern and feminist philosophers (mocked by Alan Sokal's hoax) surely want to believe such things. All of science is just a manifestation of sociology. But can the rest of us agree that these postmodern opinions are pure šit? And if we can, can we please recognize that statements about string theory don't "conceptually" differ from other propositions in science and mathematics, so they are obviously non-sociological, too?

Alternatives of string theory – non-stringy consistent theories of quantum gravity in \(d\geq 4\) – either exist or they don't exist. What does it have to do with the society? Ideas in string theory either fit together, are unified, and point to universal mechanisms, or they don't. What is the role of the society here?

If you study what Sabine Hossenfelder actually means by the claim that these propositions are sociological, you will see an answer: She wants these questions to be studied as sociological questions because that's where she has something to offer. What she has to offer are lame and insulting conspiracy theories. String theory can't have any good properties because some string theorists are well-funded, or something like that.

This kind of assertion may impress the low quality human material that reads her blog but it won't influence a rational person. A rational person knows that whether a theory is funded has nothing to do with its particular mathematical properties. And if someone uses the argument about funding – in one way or another – as an argument to establish a proposition about a mathematical property of the theory, he or she is simply not playing the game of science. He or she – in this case Sabine Hossenfelder – is simply producing cheap propaganda.

Cheap propaganda may use various strategies. Global warming alarmists claim that the huge funding they are getting – really stealing – from the taxpayers' wallets proves that their alarming predictions are justified. They are attempting to intimidate everyone else. Sabine Hossenfelder uses the opposite strategy. Those who occasionally get a $3 million prize must be wrong – because that's what the jealous readers of Backreaction want to be true. None of these – opposite – kinds of propaganda has any scientific merit, however.

Needless to say, she is not the only one who would love to "establish" certain answers by sociological observations. It just can't be done. It can't be done by the supporters of a theory and it can't be done by its foes, either. To settle technical questions – even far-reaching, seemingly "philosophical" questions – about a theory, you simply need to study the theory technically, whether you like it or not. Hossenfelder doesn't have the capacity to do so in the case of string theory but that doesn't mean that she may meaningfully replace her non-existing expertise by something she knows how to do, namely by sociological conspiracy theories.

There is no rigorous yet universal proof but there are lots of non-rigorous arguments as well as context-dependent proofs that seem to imply that string theory is the only game in town. Also, thousands of papers about string theory are full of "unexpectedly coherent explanatory surprises" that physicists were "forced" to learn about when they investigated many issues.

I understand that you don't have to believe me that it's the case if you're actually unfamiliar with these "surprises". But you should still be able to understand that their existence is not a sociological question. And if they exist, those who know that they exist aren't affected and can't be affected by "sociological arguments" that would try to "deduce" something else. You should also be able to understand that those who have not mastered string theory can't actually deduce the answer to the question from any solid starting point. In the better case, they believe that string theory fails to have those important virtues. In the worse case, they force themselves to believe that string theory doesn't have these virtues because they are motivated to spread this opinion and they usually start with themselves.

At any rate, their opinion is nothing else than faith or noise – or something worse than that. There is nothing of scientific value to back it.

Now, while the review is basically a positive one, Backreaction ultimately denies all these arguments, anyway. Hossenfelder doesn't understand that the "only game in town" and "surprising explanatory coherence" are actually arguments that do affect a researcher's confidence that the theory is on the right track. And be sure that they do.

If string theory is the only game in town, well, then it obviously doesn't make sense to try to play any other games simply because there aren't any.

If string theory boasts this "surprising explanatory coherence", it means that the probability of its being right is (much) higher than it would be otherwise. Why?

Take dualities. They say that two theories constructed from significantly different starting points and looking different when studied too sloppily are actually exactly equivalent when you incorporate all conceivable corrections and compare the full lists of objects and phenomena. What does it imply for the probability that such a theory is correct?

A priori, \(A_i\) and \(B_j\) were thought to be different, mutually exclusive hypotheses. If you prove that \(A_i\equiv B_j\), they are no longer mutually exclusive. You should add up their prior probabilities. Both of them will be assigned the sum. The duality allowed you to cover a larger (twice as large) territory on the "landscape of candidate theories".
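A trivial way to phrase this bookkeeping (my own illustration, not Motl's): once a duality identifies two hypotheses, the merged hypothesis simply inherits the sum of their priors.

# Toy version of the quasi-Bayesian point above (my illustration): once a
# duality shows that hypotheses A_i and B_j are the same theory, they stop
# competing and the merged hypothesis gets the sum of the two priors.
priors = {"A_i": 0.05, "B_j": 0.05, "C_k": 0.90}   # toy prior probabilities
merged = {"A_i = B_j": priors.pop("A_i") + priors.pop("B_j"), **priors}
print(merged)   # e.g. {'A_i = B_j': 0.1, 'C_k': 0.9}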

You may view this quasi-Bayesian argument to be an explanation why important theories in physics almost always admit numerous "pictures" or "descriptions". They allow you to begin from various starting points. Quantum mechanics may be formulated in the Schrödinger picture, in the Heisenberg picture, or using Feynman path integrals. And there are lots of representations or bases of the Hilbert space you may pick, too. It didn't have to be like that. But important theories simply tend to have this property and while it seems impossible to calculate the probabilities accurately, the argument above explains why it's sensible to expect that important theories have many dual descriptions.

Return to the year 100 AD and ask which city is the largest in the world. There may be many candidates. Some candidate towns sit on several roads. There is one candidate to which all roads lead. I am sure you understand where I am going: Rome was obviously the most important city in the world and the fact that all roads led to Rome was a legitimate argument to think that Rome was more likely to be the winner. The roads play the same role as the dualities and unexpected mathematical relationships discovered during the research of string theory. The analogy is in no way exact but it is good enough.

There is another, refreshingly different way to understand why the dualities and mathematical relationships make string theory more likely. They reduce the number of independent assumptions, axioms, concepts, and building blocks of the theory. In this way, the theory becomes more natural and less contrived. If you apply Occam's razor correctly, this reduction of the number of the independent building blocks, concepts, axioms, and assumptions occurs for string theory and makes its alternatives look contrived in comparison.

For example, strings may move, but because they're extended, they may also wind around a circle in the spacetime. T-duality allows you to exactly interchange these two quantum numbers. They're fundamentally "the same kind of information", which means that you shouldn't double-count it. The theory is actually much simpler, fundamentally speaking, than a theory in which "an object may move as well as wind" because these two verbs are just two different interpretations of the same thing.
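To make the statement concrete, here is a minimal numerical check (my own sketch, using the standard closed-string momentum/winding formula rather than anything specific to this post): the spectrum is unchanged if the momentum and winding numbers are swapped while the radius is inverted.

# Minimal numerical check (my sketch) of the statement above: the
# momentum/winding contribution to the closed-string mass squared,
# (n/R)^2 + (w R/alpha')^2, is unchanged if the momentum and winding
# numbers are swapped while the radius is inverted, R -> alpha'/R.
import math

alpha_p = 1.0   # work in units where alpha' = 1

def kk_winding_mass_sq(n, w, R):
    return (n / R) ** 2 + (w * R / alpha_p) ** 2

R = 3.7
for n, w in [(2, 5), (1, 0), (0, 3), (4, 4)]:
    assert math.isclose(kk_winding_mass_sq(n, w, R),
                        kk_winding_mass_sq(w, n, alpha_p / R))
print("momentum/winding spectrum is invariant under n <-> w, R -> alpha'/R")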

In quantum field theory, solitons are objects such as magnetic monopoles that, in the weak coupling limit, may be identified with a classical solution of the field theory. If the theory has an S-duality – which may be the case of both string theory and quantum field theory – such a soliton may be interchanged with the fundamental string (or electric charge). Again, they're fundamentally the same thing in two limiting descriptions or interpretations. If you count how many independent building blocks (as described by Occam's razor) a theory has, and if you do so in some fundamentally robust way, a theory with an S-duality will have fewer independent building blocks or concepts than a generic theory without any S-duality where the elementary electric excitations and the classical field-theoretical solutions would be completely unrelated! Not only are all particle species made of the same string in weakly coupled string theory; the objects that seem more heavy or extended than a vibrating string are secretly "equivalent" to a vibrating string, too.

Similar remarks apply to all dualities and similar relationships in string theory, including S-duality, T-duality, U-duality, mirror symmetry, equivalence of Gepner models (conglomerates of minimal models) and particular Calabi-Yau shapes, string-string duality, IIA-M and HE-M duality, the existence of matrix models, AdS/CFT correspondence, conceptually different but agreeing calculations of the black hole entropy, ER-EPR correspondence, and others. All these insights are roads leading to Rome, arguments that the city at the end of several roads is actually the same one and it is therefore more interesting.



None of these properties of string theory proves that it's the right theory of quantum gravity. But they do make it meaningful for a rational theoretical physicist to spend much more time with the structure than with alternative structures that don't have these properties. People simply spend more time in a city with many roads and on roads close to this city. The reasons are completely natural and rationally justified. These reasons have something to do with a separation of the prior probabilities – and of the researchers' time.

I understand that a vast majority of people, even physicists with a general PhD, can't understand these matters because their genuine understanding depends on a specialized expertise. But I am just so incredibly tired of all those low quality people who try to "reduce" all these important physics questions to sociological memes and ad hominem attacks. You just can't do that, you shouldn't do that, and it's always the people who "reduce" the discourse in this lame sociological direction who suck.

Sabine Hossenfelder is one of the people who badly suck.

by Luboš Motl (noreply@blogger.com) at May 24, 2015 07:54 PM

Clifford V. Johnson - Asymptotia

On Testability…
Here’s some interesting Sunday reading: Frank Close wrote a very nice article for Prospect Magazine on the business of testing scientific theories in Physics. Ideas about multiverses and also string theory are the main subjects under consideration. I recommend it. My own thoughts on the matter? Well, I think most … Click to continue reading this post

by Clifford at May 24, 2015 05:56 PM

Peter Coles - In the Dark

Another Lord’s Day

Just time for a quick post to record the fact that yesterday I made my annual pilgrimage to Lord’s Cricket Ground to watch the third day’s play of the First  Test between England and New Zealand.  On previous occasions I’ve had to make the trip from Cardiff to Paddington and back to take in a day at the Test, so had to get up at the crack of dawn, but this time I was travelling from Brighton which is a significantly shorter trip, so I only had to get up at 7 or so. Anyway, I got to the ground in time to have a bacon sandwich and a coffee before play started, with the added pleasure of listening to the jazz band as I consumed both items.

England had batted first in this game, and were on the brink of disaster at 30 for 4 at one stage, but recovered well to finish on 389 all out. Joe Root, Ben Stokes and Moeen Ali all made valuable runs in the middle order. Their performance was put into perspective by New Zealand, however, who had reached 303 for 2 at the end of the first day. It’s hard to say whether it was New Zealand’s strength in batting or England’s lacklustre bowling that was primarily responsible. I suspect it was a bit of both. Talk around the ground was about whether and when New Zealand might declare. I didn’t think I would declare on a score less than 600, even if tempted to have a go at the England batsmen for 30 minutes in the evening, but that speculation turned out to be irrelevant.

Anyway on a cool and overcast morning, New Zealand resumed with Taylor and Williamson at the crease and England desperately needing to take quick wickets. The first breakthrough came after about 40 minutes, with Taylor well caught by wicketkeeper Buttler off the bowling of Stuart Broad. That served to bring in dangerman Brendon McCullum, who promptly hit his first ball for four through the covers. He continued to play his shots but never looked really convincing, eventually getting out to a wild shot off England’s debutant bowler Mark Wood, but not before he’d scored 42 runs at a brisk pace while Williamson at the other end progressed to his century in much more sedate fashion.

Light drizzle had started to fall early on in the morning and shortly after McCullum was out it became much heavier. The players took an early lunch and play did not resume until 2.45pm, meaning that over an hour was lost. During the extended lunch interval I took a stroll around the ground, bought an expensive burger, and noted the large number of representatives of the Brigade of Gurkhas, who were collecting money for the Nepal Earthquake Appeal. Here are some of them making use of their vouchers in the Food Village:

[Image: Gurkhas in the Lord’s Food Village]

When play resumed, England quickly took another wicket, that of Anderson, at which point New Zealand were 420 for 5. Wicketkeeper Watling (who had an injury from the first innings) came to the crease and looked all at sea, frequently playing and missing and surviving two umpire reviews. He led a charmed life however and ended up 61 not out when the New Zealand innings closed at 523 all out.

One interesting fact about this innings was that “Extras” scored 67. Quite a lot of those were leg-byes, but the number of wides and byes was quite embarrassing. Wicket-keeper Buttler did take a couple of fine catches, but he wasn’t as tidy as one would expect at Test level. England also dropped three catches in the field. New Zealand only added 212 runs for their last 8 wickets, which wasn’t as bad as it could have been for England but it could have been better too. I wasn’t impressed with their bowling, either. Neither Anderson nor Broad looked particularly dangerous, although both took wickets. Wood was erratic too, straying down the legside far too often, but he did improve in his second spell and managed to take three wickets. I think Moeen was the steadiest and most impressive bowler, actually. He also took three, including that of Williamson whose excellent innings ended on  132.

I took this picture from my vantage point in the Warner Stand  just a few minutes before the last New Zealand wicket fell:

[Image: the view from the Warner Stand]

You can see it was still quite gloomy and dark.

Incidentally, the Warner Stand is to be knocked down at the end of this season (in September 2015) and rebuilt much bigger and snazzier. I’ve got used to watching cricket from there during my occasional trips to Lord’s so I feel a little bit sad about its impending demise. On the other hand, it does need a bit of modernisation so perhaps it’s all for the best. The first phase of the rebuild should be ready for next season so I look forward to seeing what the new stand looks like in a year or so’s time.

England came out to bat with play extended until 7.30 to make up for the time lost for rain. Lyth faced the first ball, which was short. He played a hook shot which he mistimed. It went uppishly past the fielder at short midwicket for four, but it was a very risky shot to play at the very start of the innings given England’s situation and it made me worry about his temperament. He hit another couple of boundaries and then departed for 12, caught behind. Ballance  came in, faced twelve deliveries and departed, clean bowled, without troubling the scorers. At that point England were in deep trouble at 25-2, still needing over a hundred runs to make New Zealand bat again. With the weather brightening up considerably, Bell and Cook steadied the ship a little and no more wickets were lost before the close of play. I had to leave before the close in order to get the train back to Brighton but the day ended with England on 75-2.

I think New Zealand will win this game, for the simple reason that their bowling, fielding and batting are all better than England’s.  The biggest worry for England is their batting at the top of the order, which is far too fragile, but the bowling lacks penetration and the fielding is sloppy.  It doesn’t bode well for the forthcoming Ashes series but more immediately it doesn’t bode well for Alastair Cook’s position as England captain. But who could replace him?

UPDATE, 7pm Sunday. Contrary to my pessimistic assessment, England played very well on Day 4. Cook batted all day, ending on 153 not out but the star of the show was Ben Stokes who scored the fastest century ever in a test at Lord’s (85 balls). With England on 429 for 6, a lead of 295, any result is possible. England need to bat until about lunch to make the game safe, and only then think about winning it.

UPDATE, 5.38pm Monday. The morning didn’t go entirely England’s way. They only reached 478 all out, a lead of 344. However, New Zealand were in deep trouble straight away, losing both openers without a run on the board. They were in even deeper trouble a bit later when they slumped to 12-3 but then staged a mini-recovery only for two quick wickets to fall taking them to 61-5. There then followed an excellent partnership of 107 between Anderson and Watling who at one point looked like wresting the initiative away from England. Then both fell in quick succession and were soon followed by Craig and Southee. As I write this, New Zealand are 200 for 9. England need one more wicket and have 15 overs left to get it, with two tailenders at the crease.

UPDATE, 6.03pm Monday. It seemed to take forever to come, but Moeen has just caught last man Boult off the bowling of Broad. New Zealand all out for 220 and England win by 124 runs, a victory I simply could not have imagined when I left Lord’s on Saturday. I’ve never been happier to be proved wrong!

This has been one of the great Test matches and I’m really happy I was there for part of it – even if it was only one day! Well played both teams for making such an excellent game of it. Long live Test cricket. There’s nothing like it!


by telescoper at May 24, 2015 02:27 PM

May 23, 2015

John Baez - Azimuth

Network Theory in Turin

Here are the slides of the talk I’m giving on Monday to kick off the Categorical Foundations of Network Theory workshop in Turin:

Network theory.

This is a long talk, starting with the reasons I care about this subject, and working into the details of one particular project: the categorical foundations of networks as applied to electrical engineering and control theory. There are lots of links in blue; click on them for more details!


by John Baez at May 23, 2015 02:34 AM

May 22, 2015

Symmetrybreaking - Fermilab/SLAC

LHC restart timeline

Physics is just around the corner for the LHC. Follow this timeline through the most exciting moments of the past few months.

May 22, 2015 10:11 PM

Tommaso Dorigo - Scientificblogging

Highest Energy Collisions ? Not In My Book
Yesterday I posed a question: are the first collisions recorded by the LHC running at 13 TeV the highest-energy ones ever produced by mankind with subatomic particles? It was a tricky one, as usual, meant to make you think about the matter.

I received several tentative answers in the comments thread, and I answered there. I paste the text here as it is of some interest to some of you and I hope it does not get overlooked.

---

Dear all, 

read more

by Tommaso Dorigo at May 22, 2015 07:57 PM

The n-Category Cafe

How to Acknowledge Your Funder

A comment today by Stefan Forcey points out ways in which US citizens can try to place legal limits on the surveillance powers of the National Security Agency, which we were discussing in the context of its links with the American Mathematical Society. If you want to act, time is of the essence!

But Stefan also tells us how he resolved a dilemma. Back here, he asked Café patrons what he should do about the fact that the NSA was offering him a grant (for non-classified work). Take their money and contribute to the normalization of the NSA’s presence within the math community, or refuse it and cause less mathematics to be created?

What he decided was to accept the funding and — in this paper at least — include a kind of protesting acknowledgement, citing his previous article for the Notices of the AMS.

I admire Stefan for openly discussing his dilemma, and I think there’s a lot to be said for how he’s handled it.

by leinster (tom.leinster@ed.ac.uk) at May 22, 2015 01:45 PM

astrobites - astro-ph reader's digest

Cosmic Reionization of Hydrogen and Helium

In a time long long ago…

The story we’re hearing today requires us to go back to the beginning of the Universe and explore briefly its rich yet mysterious history. The Big Bang marks the creation of the Universe 13.8 billion years ago. Seconds after the Big Bang, fundamental particles came into existence. They smashed together to form protons and neutrons, which collided to form the nuclei of hydrogen, helium, and lithium. Electrons, at this time, were whizzing by at velocities too high to be captured by the surrounding atomic nuclei. The Universe was ionized and there were no stable atoms.

The Universe coming of age: Recombination and Reionization

Some 300,000 years later, the Universe had cooled down a little bit. The electrons weren’t moving as fast as before and could be captured by atomic nuclei to form neutral atoms. This ushered in the era of recombination and propelled the Universe toward a neutral state. Structure formation happened next; some of the first structures to form are thought to have been quasars (actively accreting supermassive black holes), massive galaxies, and the first generation of stars (population III stars). The intense radiation from incipient quasars and stars started to ionize the neutral hydrogen in their surroundings, marking the second milestone of the Universe known as the epoch of reionization (EoR). Recent cosmological studies suggest that the reionization epoch began no later than redshift (z) ~ 10.6, corresponding to ~350 Myr after the Big Bang.

To probe when reionization ended, we can look at the spectra of high-redshift quasars and compare them with those of low-redshift quasars. Figure 1 shows this comparison. The spectrum of a quasar at z ~ 6 shows almost zero flux in the region of wavelengths shorter than the quasar’s redshifted Lyman-alpha line. This feature is known as the Gunn-Peterson trough and is caused by absorption of the quasar light by neutral hydrogen as it travels through the still-neutral regions of space. Low-redshift quasars do not show this feature as the hydrogen along the path of the quasar light is already ionized. Quasar light does not get absorbed and can travel unobstructed to our view. The difference in the spectra of low- and high-redshift quasars suggests that the Universe approached the end of reionization around z ~ 6, corresponding to ~1 Gyr after the Big Bang. (This astrobite provides a good review of reionization and its relation to quasar spectra.)
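For a quick sense of the numbers (my own back-of-the-envelope addition, not from the astrobite): the rest-frame Lyman-alpha wavelength of 121.6 nm is stretched by a factor of (1+z), which is what pushes the Gunn-Peterson trough toward the near-infrared for the highest-redshift quasars.

# Where the Lyman-alpha line lands for a quasar at redshift z (my sketch):
# lambda_observed = lambda_rest * (1 + z), with lambda_rest = 121.6 nm.
LYMAN_ALPHA_REST_NM = 121.6

def observed_lya_nm(z):
    return LYMAN_ALPHA_REST_NM * (1 + z)

for z in (0, 6, 7.1):
    print(f"z = {z}: Lyman-alpha observed at ~{observed_lya_nm(z):.0f} nm")
# z = 6 gives ~851 nm and z = 7.1 gives ~985 nm; light blueward of these
# observed wavelengths is what neutral hydrogen along the line of sight absorbs.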


Fig 1 – The top panel is a synthetic quasar spectrum at z = 0, compared with the bottom panel showing the spectrum of the highest-redshift quasar currently known, ULAS J112001.48+064124.3 (hereafter ULAS J1120+0641) at z ~ 7.1. While the Lyman-alpha line of the top spectrum is located at its rest-frame wavelength of 121.6 nm, it is very redshifted in the spectrum of ULAS J1120+0641 (note the scale of the wavelengths). Compared to the low-redshift spectrum, there is a rapid drop in flux at wavelengths shorter than the Lyman-alpha line for ULAS J1120+0641, signifying the Gunn-Peterson trough. [Top figure from P J. Francis et al. 1991 and bottom figure from Mortlock et al. 2011]

 

Problems with Reionization, and a Mini Solution

The topic of today’s paper concerns possible ionizing sources during the epoch of reionization, which also happens to be one of the actively-researched questions in astronomy. Quasars and stars in galaxies are the most probable ionizing sources, since they are emitters of the Universe’s most intense radiation (see this astrobite for how galaxies might ionize the early Universe). This intense radiation falls in the UV and X-ray regimes and can ionize neutral hydrogen (and potentially also neutral helium, which requires roughly twice as much ionizing energy). But there are problems with this picture.

First of all, the ionizing radiation from high-redshift galaxies is found to be insufficient to maintain the Universe’s immense bath of hydrogen in an ionized state. To make up for this, the fraction of ionizing photons that escape the galaxies (and contribute to reionization) – known as the escape fraction – has to be higher than what we see observationally. Second, we believe that the contribution of quasars to the ionizing radiation becomes less important at higher and higher redshifts and is negligible at z >~ 6. So we have a conundrum here. If we can’t solve the problem of reionization with quasars and galaxies, we need other ionizing sources. The paper today investigates one particular ionizing source: mini-quasars.

What are mini-quasars? Before that, what do I mean when I say quasars? Quasars in the normal sense of the word usually refer to the central accreting engines of supermassive black holes (~10⁹ Msun) where powerful radiation escapes in the form of a jet. A mini-quasar is the dwarf version of a quasar. More quantitatively, it is the central engine of an intermediate-mass black hole (IMBH) that has a mass of ~10² – 10⁵ Msun. Previous studies hinted at the role of mini-quasars toward the reionization of hydrogen; the authors in this paper went an extra mile and studied the combined impact of mini-quasars and stars not only on the reionization of hydrogen, but also on the reionization of helium. Looking into the reionization of helium allows us to investigate the properties of mini-quasars. Much like solving a set of simultaneous equations, getting the correct answer to the problem of hydrogen reionization requires that we also simultaneously constrain the reionization of helium.
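For reference (my own addition, standard atomic-physics numbers rather than anything computed in the paper), the ionization thresholds show why helium reionization probes harder sources: H I needs 13.6 eV, He I about 24.6 eV, and fully ionizing helium (He II to He III) takes 54.4 eV.

# Ionization thresholds and the corresponding photon wavelengths (my sketch;
# standard textbook values, not results from the paper).
H_C_EV_NM = 1239.84   # h*c in eV*nm, so lambda[nm] = 1239.84 / E[eV]

thresholds_ev = {"H I": 13.6, "He I": 24.6, "He II": 54.4}
for species, energy in thresholds_ev.items():
    print(f"{species}: {energy:.1f} eV  ->  lambda < {H_C_EV_NM / energy:.1f} nm")
# Only sources with appreciably harder spectra than ordinary stars produce
# many photons above the 54.4 eV He II edge, which is why helium reionization
# constrains (mini-)quasar-like sources.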

The authors calculated the number of ionizing photons from mini-quasars and stars analytically. They considered only the most optimistic case for mini-quasars where all ionizing photons contribute to reionization, i.e. the escape fraction f_esc,BH = 1. Since the escape fraction of ionizing photons from stars is still poorly constrained, three escape fractions f_esc are considered. Figure 2 shows the relative contributions of mini-quasars and stars in churning out hydrogen ionizing photons as a function of redshift for different escape fractions from stars. As long as f_esc is small enough, mini-quasars are able to produce more hydrogen ionizing photons than stars.


Fig 2 – Ratio of the number of ionizing photons produced by mini-quasars relative to stars (y-axis) as a function of redshift (x-axis). Three escape fractions of ionizing photons f_esc from stars are considered. [Figure 2 of the paper]

Figure 3 shows the contributions of mini-quasars, and of mini-quasars plus (normal) quasars, toward the reionization of hydrogen and helium. Mini-quasars alone are found to contribute non-negligibly (~20%) toward hydrogen reionization at z ~ 6, while the contribution from quasars starts to become more important at low redshifts. The combined contribution from mini-quasars and quasars is observationally consistent with when helium reionization ended. Figure 4 shows the combined contribution of mini-quasars and stars to hydrogen and helium reionization. The escape fraction of ionizing photons from stars significantly affects both, i.e. it influences whether hydrogen and helium reionization end earlier or later than current theory predicts.


Fig 3 – Volume of space filled by ionized hydrogen and helium, Qi(z), as a function of redshift z. The different colored lines signify the contributions of mini-quasars (IMBH) and quasars (SMBH) to hydrogen and helium reionizations. [Figure 3 of the paper]


Fig 4 – Volume of space filled by ionized hydrogen and helium, Qi(z), as a function of redshift z. The two panels refer to the different assumptions on the mini-quasar spectrum, where the plot on the bottom is the more favorable of the two. The different lines refer to the different escape fractions of ionizing photons from stars that contribute to hydrogen and helium reionizations. [Figure 4 of the paper]

The authors point out a couple of caveats. Although they demonstrate that the contribution from mini-quasars is not negligible, this holds only for the most optimistic case, in which all photons from the mini-quasars contribute to reionization. The authors also did not address the important issue of feedback from accretion onto IMBHs, which regulates black hole growth and consequently determines how common mini-quasars are. The escape fraction from stars also needs to be better constrained in order to place a tighter limit on the joint contribution of mini-quasars and stars to reionization. Improved measurements of helium reionization would also help constrain the properties of mini-quasars. Phew… sounds like we still have a lot of work to do. This paper presents some interesting results, but we are definitely still treading on muddy ground, and the business of cosmic reionization is no less tricky than we hoped.

 

by Suk Sien Tie at May 22, 2015 12:24 PM

arXiv blog

Computational Aesthetics Algorithm Spots Beauty That Humans Overlook

Beautiful images are not always popular ones, which is where the CrowdBeauty algorithm can help, say computer scientists.

One of the depressing truths about social media is that the popularity of an image is not necessarily an indication of its quality. It’s easy to find hugely popular content of dubious quality. But it’s much harder to find unpopular content of high quality.

May 22, 2015 05:00 AM

May 21, 2015

Tommaso Dorigo - Scientificblogging

Bang !! 13 TeV - The Highest Energy Ever Achieved By Mankind ?!
The LHC has finally started to produce 13-TeV proton-proton collisions!

The picture below shows one such collision, as recorded by the CMS experiment today. The blue boxes show the energy recorded in the calorimeter, which measures particle energies by "destroying" the particles as they interact with the dense layers of matter the device is made of; the yellow curves show tracks reconstructed from the ionization deposits left by charged particles in the silicon layers of the inner tracker.

read more

by Tommaso Dorigo at May 21, 2015 08:01 PM

John Baez - Azimuth

Information and Entropy in Biological Systems (Part 4)

I kicked off the workshop on Information and Entropy in Biological Systems with a broad overview of the many ways information theory and entropy get used in biology:

• John Baez, Information and entropy in biological systems.

Abstract. Information and entropy are being used in biology in many different ways: for example, to study biological communication systems, the ‘action-perception loop’, the thermodynamic foundations of biology, the structure of ecosystems, measures of biodiversity, and evolution. Can we unify these? To do this, we must learn to talk to each other. This will be easier if we share some basic concepts which I’ll sketch here.

The talk is full of links, in blue. If you click on these you can get more details. You can also watch a video of my talk:


by John Baez at May 21, 2015 05:26 PM

Jester - Resonaances

How long until it's interesting?
Last night, for the first time, the LHC collided particles at a center-of-mass energy of 13 TeV. Routine collisions should follow early in June. The plan is to collect 5-10 inverse femtobarns (fb-1) of data before winter comes, adding to the 25 fb-1 from Run-1. It's high time to dust off your MadGraph and tool up for what may be the most exciting time in particle physics in this century. But when exactly should we start getting excited? When should we start friending LHC experimentalists on Facebook? When is the time to look over their shoulders for a glimpse of gluinos popping out of the detectors? One simple way to estimate the answer is to calculate the luminosity at which the number of particles produced at 13 TeV exceeds the number produced during the whole of Run-1. This depends on the ratio of the production cross sections at 13 and 8 TeV, which is of course strongly dependent on the particle's mass and production mechanism. Moreover, the LHC discovery potential will also depend on how the background processes change, and on a host of other experimental issues. Nevertheless, let us forget for a moment about the fine print, and calculate the ratio of 13 and 8 TeV cross sections for a few particles popular among the general public. This will give us a rough estimate of the threshold luminosity when things should get interesting.

  • Higgs boson: Ratio≈2.3; Luminosity≈10 fb-1.
    Higgs physics will not be terribly exciting this year, with only a modest improvement in the coupling measurements expected. 
  • ttH: Ratio≈4; Luminosity≈6 fb-1.
    Nevertheless, for certain processes involving the Higgs boson the improvement may be a bit faster. In particular, the theoretically very important process of Higgs production in association with top quarks (ttH) was on the verge of being detected in Run-1. If we're lucky, this year's data may tip the scale and provide evidence for a non-zero top Yukawa coupling. 
  • 300 GeV Higgs partner:  Ratio≈2.7; Luminosity≈9 fb-1.
    Not much hope for new scalars in the Higgs family this year.  
  • 800 GeV stops: Ratio≈10; Luminosity≈2 fb-1.
    800 GeV is close to the current lower limit on the mass of a scalar top partner decaying to a top quark and a massless neutralino. In this case, one should remember that backgrounds also increase at 13 TeV, so the progress will be a bit slower than what the above number suggests. Nevertheless,  this year we will certainly explore new parameter space and make the naturalness problem even more severe. Similar conclusions hold for a fermionic top partner. 
  • 3 TeV Z' boson: Ratio≈18; Luminosity≈1.2 fb-1.
    Getting interesting! Limits on Z' bosons decaying to leptons will be improved very soon; moreover, in this case background is not an issue.  
  • 1.4 TeV gluino: Ratio≈30; Luminosity≈0.7 fb-1.
    If all goes well, better limits on gluinos can be delivered by the end of the summer! 

In summary, the progress will be very fast for new heavy particles. In particular, for gluon-initiated production of TeV-scale particles, the first inverse femtobarn may already bring us into new territory. For lighter particles the progress will be slower, especially when backgrounds are difficult. On the other hand, precision physics, such as the Higgs coupling measurements, is unlikely to be in the spotlight this year.
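For what it's worth, the thresholds quoted above follow from nothing more than dividing the Run-1 luminosity by each cross-section ratio; the short script below reproduces them to within the rounding of the post (backgrounds and analysis improvements are ignored, as in the text).

# The 13 TeV run starts "beating" Run-1 for a given particle roughly when
#   L_13 * sigma_13 > L_Run1 * sigma_8,  i.e.  L_13 > 25 fb^-1 / (sigma_13/sigma_8).
run1_lumi = 25.0  # fb^-1 collected in Run-1

ratios = {                      # sigma(13 TeV) / sigma(8 TeV), as quoted in the post
    "Higgs boson":            2.3,
    "ttH":                    4,
    "300 GeV Higgs partner":  2.7,
    "800 GeV stop":           10,
    "3 TeV Z' boson":         18,
    "1.4 TeV gluino":         30,
}

for particle, r in ratios.items():
    print(f"{particle:22s}  threshold luminosity ≈ {run1_lumi / r:.1f} fb^-1")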

by Jester (noreply@blogger.com) at May 21, 2015 05:20 PM

Clifford V. Johnson - Asymptotia

So the equations are not…
Working on rough layouts of one of the stories for the book. One rough panel ended up not looking so rough, and after Monday's ink dalliances I was itching to fiddle with brushes again, and then I thought I'd share. So... slightly less rough, shall we say? A more careful version would point the eyes a bit better, for example... (Much of the conversation, filling a bit more of the white space, has been suppressed - spoilers.) -cvj Click to continue reading this post

by Clifford at May 21, 2015 03:49 PM

CERN Bulletin

First 13 TeV collisions: reporting from the CCC

On Wednesday 20 May at around 10.30 p.m., protons collided in the LHC at the record-breaking energy of 13 TeV for the first time. These test collisions were to set up various systems and, in particular, the collimators. The tests and the technical adjustments will continue in the coming days.

 

The CCC was abuzz as the LHC experiments saw 13 TeV collisions.
 

Preparation for the first physics run at 6.5 TeV per beam has continued in the LHC. This included the set-up and verification of the machine protection systems. In addition, precise measurements of the overall focusing properties of the ring – the so-called “optics” – were performed by inducing oscillations of the bunches, and observing the response over many turns with the beam position monitors (BPM).

The transverse beam size in the accelerator ranges from the order of a millimetre around most of the circumference down to some tens of microns at the centre of the experiments where the beams collide. Reducing the beam size at the interaction points to the micrometre level while at top energy is called "squeezing". Quadrupole magnets shape the beam, and small imperfections in magnetic field strength can mean that the actual beam sizes don't exactly match the model. After an in-depth analysis of the BPM measurements and after simulating the results with correction models, the operators made small corrections to the magnetic fields. As a result, the beam sizes fit the model to within a few percent. This is remarkable for a 27 km machine!
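To get a feel for these numbers, here is a rough back-of-the-envelope version of the beam-size estimate, using typical textbook LHC parameters (an assumed normalized emittance of 3.5 μm and illustrative beta functions, not the values quoted by the operations team): the transverse size is sigma = sqrt(beta × epsilon), where epsilon is the geometric emittance.

import math

# Rough illustration of "squeezing" with assumed typical values (not from the article):
#   sigma = sqrt(eps_N / gamma * beta)
# beta is large in the arcs and squeezed to well below a metre at the interaction points.
eps_N = 3.5e-6                    # m rad, assumed normalized emittance
gamma = 6.5e12 / 938.272e6        # E / (m_p c^2) for 6.5 TeV protons
eps   = eps_N / gamma             # geometric emittance

for name, beta in [("typical arc (beta ~ 180 m)", 180.0),
                   ("squeezed IP (beta* ~ 0.4 m)", 0.4)]:
    sigma = math.sqrt(eps * beta)
    print(f"{name:30s} sigma ≈ {sigma*1e6:7.1f} micrometres")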

The preparation for first collisions at beam energies of 6.5 TeV started Wednesday, 20 May in the late evening. Soon after, the first record-breaking collisions were seen in the LHC experiments. On Thursday, 21 May, the operators went on to test the whole machine in collision mode with beams that were "de-squeezed" at the interaction points. During the “de-squeeze”, the beams are made larger at the experiment collision points than in standard operation. These large beams are interesting for calibration measurements at the experiments, during which the beams are scanned across each other – the so-called "Van der Meer scans".

The two spots are beam 1 (clockwise) and beam 2 (anti-clockwise) traveling inside the LHC in opposite directions. The images are derived from data from the synchrotron light monitors. The beam sizes aren't exactly the same in the B1 and B2 telescopes, as the beam intensity as well as the beam optics setup can differ.

Progress was also made on the beam intensity front. In fact, last week the LHC also broke the intensity record for 2015 by circulating 40 nominal bunches in each of the rings, giving a beam intensity of 4×10^12 protons per beam. There were some concerns that the unidentified obstacle in the beam-pipe of a Sector 8-1 dipole could be affected by the higher beam currents. The good news is that this is not the case. No beam losses occurred at the location of the obstacle and, after two hours, the operators dumped the beams in the standard way. Commissioning continues and the LHC is on track for the start of its first high-energy physics run in a couple of weeks.

May 21, 2015 03:05 PM

Symmetrybreaking - Fermilab/SLAC

LHC achieves record-energy collisions

The Large Hadron Collider broke its own record again in 13-trillion-electronvolt test collisions.

Today engineers at the Large Hadron Collider successfully collided several tightly packed bunches of particles at 13 trillion electronvolts. This is one of the last important steps on the way toward data collection, which is scheduled for early June.

As engineers ramp up the energy of the collider, the positions of the beams of particles change. The protons are also focused into much tighter packets, so getting two bunches to actually intersect requires very precise tuning.

“Colliding protons inside the LHC is equivalent to firing two needles 6 miles apart with such precision that they collide halfway,” says Syracuse University physicist Sheldon Stone, a senior researcher on the LHCb experiment. “It takes a lot of testing to make sure the two bunches meet at the right spot and do not miss each other.”

Engineers spent the last two years outfitting the LHC to collide protons at a higher energy and faster rate than ever before. Last month they successfully circulated low-energy protons around the LHC for the first time since the shutdown. Five days later, they broke their own energy record by ramping up the energy of a single proton beam to 6.5 trillion electronvolts.

High-energy test collisions allow engineers to practice steering beams in the LHC.

“We have to find the positions where the two beams cross, so what we do is steer the beams up and down and left and right until we get the optimal collision rate,” says CERN engineer Ronaldus SuykerBuyk of the operations team.

In addition to finding the collision sweet spots, engineers will also use these tests to finish calibrating the machine components and positioning the collimators, which protect the accelerator and detectors from stray particles.

The design of the LHC allows more than 2800 bunches of protons to circulate in the machine at a time. But the LHC operations team is testing the machine with just one or two bunches per beam to ensure all is running smoothly.

The next important milestone will be preparing the LHC to consistently and safely ramp, steer and collide proton beams for up to eight consecutive hours.

Declaring stable beams will be only the beginning for the LHC operations team.

"The machine evolves around you," says CERN engineer Jorg Wenninger. "There are little changes over the months. There’s the reproducibility of the magnets. And the alignment of the machine moves a little with the slow-changing geology of the area. So we keep adjusting every day."

First 13 TeV collisions in the ALICE detector

Courtesy of: ALICE collaboration

First 13 TeV collisions in the ATLAS detector

Courtesy of: ATLAS collaboration

First 13 TeV collisions in the CMS detector

Courtesy of: CMS collaboration

First 13 TeV collisions in the LHCb detector

Courtesy of: LHCb collaboration

 

LHC restart timeline

February 2015

  • LHC filled with liquid helium: The Large Hadron Collider is now cooled to nearly its operational temperature. Read more…
  • First LHC magnets prepped for restart: A first set of superconducting magnets has passed the test and is ready for the Large Hadron Collider to restart in spring. Read more…
  • LHC experiments prep for restart: Engineers and technicians have begun to close experiments in preparation for the next run. Read more…

March 2015

  • LHC restart back on track: The Large Hadron Collider has overcome a technical hurdle and could restart as early as next week. Read more…

April 2015

  • LHC sees first beams: The Large Hadron Collider has circulated the first protons, ending a two-year shutdown. Read more…
  • LHC breaks energy record: The Large Hadron Collider accelerated protons to the fastest speed ever attained on Earth. Read more…

May 2015

  • LHC sees first low-energy collisions: The Large Hadron Collider is back in the business of colliding particles. Read more…
  • LHC achieves record-energy collisions: The Large Hadron Collider broke its own record again in 13-trillion-electronvolt test collisions. Read more…

Info-graphics by Sandbox Studio, Chicago.

 

Like what you see? Sign up for a free subscription to symmetry!

by Sarah Charley at May 21, 2015 02:21 PM

CERN Bulletin

2015, the year of all dangers
On Thursday, 7 May, many of you attended the crisis meeting organized by the Staff Association in the packed Main Amphitheatre. The main aim of this public meeting was to lift the veil on the intentions of certain CERN Council delegates who would like to: attack again and again our pensions; reduce the budget of CERN in the medium term; and, more generally, revise downward our employment and working conditions.

Since the beginning of 2014 some disturbing rumours have circulated about our pensions. Several CERN Council delegates would like to re-open the balanced package of measures that they accepted in 2010 to ensure the full capitalization of the CERN Pension Fund on a 30-year horizon. This constitutes not only a non-respect of their commitments, but also a non-respect of the applicable rules and procedures. Indeed, the governance principles stipulate that the Pension Fund Governing Board ensures the health of the Fund and, as such, alone has the authority to propose stabilization measures to the CERN Council, if necessary. It should be noted that, to date, there is no indication that the measures in question do not meet expectations; the interference of the CERN Council is thus unjustified.

As if this were not enough, the package of measures proposed by the CERN Management, intended to mitigate in 2015 the increase in the contributions of the Member States expressed in their national currencies following the Swiss National Bank's decision on 15 January 2015 to discontinue the minimum exchange rate of CHF 1.20 per euro, found no consensus among the Member States. The Management had to withdraw its proposal, but the difficulty for the Member States of facing this increase remains.

Finally, these attacks come at the worst time for us, since we are in the final phase of a five-yearly review exercise. This review is intended to verify that the financial and social conditions offered by the Organization are able to guarantee the attractiveness of CERN. In this atmosphere of attacks and threats, we fear the worst.

Not only for us, the Staff Association, but for all of you, employees and users of the Organization, the absolute priority is the optimal running of current projects and the realization of future scientific projects. Investments to launch these projects, in particular the HL-LHC, must be made now. A decrease in the budget, as mentioned by some delegates, would be catastrophic for the long-term future of the Organization.

Some delegates arrive at CERN with economic viewpoints eyeing only the very short term. We know that austerity does not work, a fortiori in basic research, where one should take a long-term approach to enable discoveries tomorrow and create high value-added jobs the day after tomorrow. It is thus more than worrying that some delegations become agitated without reason and contaminate others. This must stop! We ask for respect for commitments and procedures and, above all, respect for the CERN personnel, who have brought Nobel prizes and many discoveries to European and global science.

All together, we must act with determination before adverse decisions are taken. We ask you to inform as many of your colleagues as possible so that all the staff (employed and associated members of the personnel) can say NO to jeopardizing our Organization. The video and the slides of the crisis meeting are available at https://indico.cern.ch/event/392832/.

by Staff Association at May 21, 2015 02:19 PM

ZapperZ - Physics and Physicists

What Is Really "Real" In Quantum Physics
This is an excellent article from this week's Nature. It gives you a summary of some of the outstanding issues in quantum physics that are actively being looked into. Many of these are fundamental questions about the interpretation of quantum physics, and they are being addressed not simply via philosophical discussion, but via experimental investigation. I do not know how long this article will be available to the public, so read it quickly.

One of the best parts of this article is that it clearly defines some of the philosophical terminology in terms of how it is used in physics. You get to understand the meanings of "psi-epistemic models" and "psi-ontic models", the differences between them, and how they can be distinguished in experiments.

But this is where the debate gets stuck. Which of quantum theory's many interpretations — if any — is correct? That is a tough question to answer experimentally, because the differences between the models are subtle: to be viable, they have to predict essentially the same quantum phenomena as the very successful Copenhagen interpretation. Andrew White, a physicist at the University of Queensland, says that for most of his 20-year career in quantum technologies “the problem was like a giant smooth mountain with no footholds, no way to attack it”.

That changed in 2011, with the publication of a theorem about quantum measurements that seemed to rule out the wavefunction-as-ignorance models. On closer inspection, however, the theorem turned out to leave enough wiggle room for them to survive. Nonetheless, it inspired physicists to think seriously about ways to settle the debate by actually testing the reality of the wavefunction. Maroney had already devised an experiment that should work in principle, and he and others soon found ways to make it work in practice. The experiment was carried out last year by Fedrizzi, White and others.
There is even a discussion of devising a test for the pilot wave model, after the astounding demonstration of the concept using a simple classical wave experiment.

Zz.

by ZapperZ (noreply@blogger.com) at May 21, 2015 01:24 PM

astrobites - astro-ph reader's digest

Super starbursts at high redshifts

Title: A higher efficiency of converting gas to stars push galaxies at z~1.6 well above the star forming main sequence
Authors: Silverman et al. (2015)
First author institution: Kavli Institute for the Physics and Mathematics of the Universe, Todai Institutes for Advanced Study, the University of Tokyo, Kashiwa, Japan
Status: Submitted to Astrophysical Journal Letters

In the past couple of years there has been some observational evidence for a bimodal nature of the star formation efficiency (SFE) in galaxies. Whilst most galaxies lie on the typical relationship between mass and star formation rate (the star forming “main sequence”), slowly converting gas into stars, some form stars at a much higher rate. These “starburst” galaxies are much rarer than the typical galaxy, making up only ~2% of the population and yet ~10% of the total star formation. This disparity in the populations has only been studied for local galaxies and therefore more evidence is needed to back up these claims.

Figure 1: Hubble i band (and one IR K band) images of the seven galaxies studied by Silverman et al. (2015). Overlaid are the blue contours showing CO emission and red contours showing IR emission. Note that the centre of CO emission doesn’t always line up with the light seen in the Hubble image. Figure 2 in Silverman et al (2015).

In this recent paper, Silverman et al. (2015) observed seven high-redshift (i.e. very distant) galaxies at z ~ 1.6, shown in Figure 1, with ALMA (Atacama Large Millimeter Array, northern Chile) and IRAM (Institut de Radioastronomie Millimétrique, Spain), measuring the luminosity of the emission lines from the J=2-1 and J=3-2 rotational transitions of carbon monoxide in each galaxy's spectrum. The luminosity of these lines allows the authors to estimate the molecular hydrogen (H2) gas mass of each galaxy.

Observations of each galaxy in the infrared (IR; 24-500 μm) with Herschel (the SPIRE instrument) allow an estimation of the star formation rate (SFR) from the total luminosity integrated across this range, L_IR. The CO and IR observations are shown by the blue and red contours respectively, overlaid on Hubble UV (i band) images in Figure 1. Notice how the CO/IR emission doesn't always coincide with the UV light, suggesting that a lot of the star formation is obscured in some of these galaxies.

Figure 2. The gas depletion timescale (1/SFE) against the SFR in Figure 2 for the 7 high redshift starburst galaxies in this study (red circles), local starburst galaxies (red crosses) and normal galaxies at 0 < z< 0.25 (grey points) with the star forming main sequence shown by the solid black line. Figure 3c in Silverman et al. (2015).

With these measurements of the gas mass and the SFR, the SFE can be calculated, and in turn the gas depletion timescale, which is the reciprocal of the SFE. This is plotted against the SFR in Figure 2 for the 7 high-redshift starburst galaxies in this study (red circles), local starburst galaxies (red crosses) and normal galaxies at 0 < z < 0.25 (grey points), with the star-forming main sequence shown by the solid black line. These results show that the efficiency of star formation in starburst galaxies is highly elevated compared to that in galaxies residing on the main sequence, but not as high as in starburst galaxies in the local universe.
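For concreteness, the quantities plotted in Figure 2 amount to the following one-line arithmetic; the gas mass and SFR below are hypothetical round numbers, not values from the paper.

# Minimal sketch of the quantities in Figure 2 (hypothetical inputs):
#   SFE   = SFR / M_gas            (star formation efficiency)
#   t_dep = 1 / SFE = M_gas / SFR  (gas depletion timescale)
M_gas = 5e10     # molecular gas mass in solar masses (hypothetical)
SFR   = 300.0    # star formation rate in solar masses per year (hypothetical)

SFE   = SFR / M_gas          # per year
t_dep = M_gas / SFR          # years

print(f"SFE   = {SFE:.2e} per year")
print(f"t_dep = {t_dep/1e9:.2f} Gyr")   # starbursts sit at short depletion times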

These observations therefore dilute the theory of a bimodal star formation efficiency, and point instead to a continuous distribution of SFE as a function of distance from the star-forming main sequence. The authors consider the idea that the mechanism producing such a continuum of elevated SFEs (i.e. shortened gas depletion timescales) could be major mergers between galaxies, which lead to rapid gas compression, boosting star formation. This is also supported by the images in Figure 1, which show multiple clumps of UV-emitting regions, as seen by the Hubble Space Telescope.

To really put some weight behind this theory, though, the authors conclude, like most astrophysical studies, that they need a much larger sample of starburst galaxies at these high redshifts (z ~ 1.6) to determine what the heck is going on.

by Becky Smethurst at May 21, 2015 12:58 PM

Tommaso Dorigo - Scientificblogging

EU Grants Submitted And Won: Some Statistics
The European Union has released some data on the latest call for applications for ITN grants. These are "training networks" where academic and non-academic institutions pool together to provide innovative training to doctoral students, in the meantime producing excellent research outputs.

read more

by Tommaso Dorigo at May 21, 2015 11:16 AM

CERN Bulletin

Cine club
Wednesday 27 May 2015 at 20:00, CERN Council Chamber
Wait Until Dark
Directed by Terence Young
USA, 1967, 108 minutes

When Sam Hendrix carries a doll across the Canada-US border, he sets off a chain of events that will lead to a terrifying ordeal for his blind wife, Susy. The doll was stuffed with heroin and when it cannot be located, its owner, a Mr. Roat, stages a piece of theatre in an attempt to recover it. He arranges for Sam to be away from the house for a day and then has two con men, Mike Talman and a Mr. Carlito, alternately encourage or scare Susy into telling them where the doll is hidden. Talman pretends to be an old friend of Sam's while Carlito pretends to be a police officer. Despite their best efforts they make little headway, as Susy has no idea where the doll might be, leading Mr. Roat to take a somewhat more violent approach to getting the information from her.
Original version English; French subtitles

Wednesday 3 June 2015 at 20:00, CERN Council Chamber
Sogni d'oro
Directed by Nanni Moretti
Italy, 1981, 105 minutes

Michele Apicella is a young film and theatre director who lives through his troubles as an artist. Italy has reached the Eighties, and Michele, who was a protester in the Sixties, now finds himself in a new era full of crises of values and ignorance. With his works, Michele sets out to portray the outcast, indifferent intellectual who opens up a breach between himself and the world of ordinary people.
Original version Italian; English subtitles

by Cine club at May 21, 2015 07:02 AM

May 20, 2015

John Baez - Azimuth

Information and Entropy in Biological Systems (Part 3)

We had a great workshop on information and entropy in biological systems, and now you can see what it was like. I think I'll post these talks one at a time, or maybe a few at a time, because they'd be overwhelming taken all at once.

So, let’s dive into Chris Lee’s exciting ideas about organisms as ‘information evolving machines’ that may provide ‘disinformation’ to their competitors. Near the end of his talk, he discusses some new results on an ever-popular topic: the Prisoner’s Dilemma. You may know about this classic book:

• Robert Axelrod, The Evolution of Cooperation, Basic Books, New York, 1984. Some passages available free online.

If you don't, read it now! He showed that the simple 'tit for tat' strategy did very well in some experiments where the game was played repeatedly and strategies that did well got to 'reproduce' themselves. This result was very exciting, so a lot of people have done research on it. More recently a paper on this subject by William Press and Freeman Dyson received a lot of hype. I think this is a good place to learn about that:

• Mike Shulman, Zero determinant strategies in the iterated Prisoner’s Dilemma, The n-Category Café, 19 July 2012.

Chris Lee’s new work on the Prisoner’s Dilemma is here, cowritten with two other people who attended the workshop:

The art of war: beyond memory-one strategies in population games, PLOS One, 24 March 2015.

Abstract. We show that the history of play in a population game contains exploitable information that can be successfully used by sophisticated strategies to defeat memory-one opponents, including zero determinant strategies. The history allows a player to label opponents by their strategies, enabling a player to determine the population distribution and to act differentially based on the opponent’s strategy in each pairwise interaction. For the Prisoner’s Dilemma, these advantages lead to the natural formation of cooperative coalitions among similarly behaving players and eventually to unilateral defection against opposing player types. We show analytically and empirically that optimal play in population games depends strongly on the population distribution. For example, the optimal strategy for a minority player type against a resident tit-for-tat (TFT) population is ‘always cooperate’ (ALLC), while for a majority player type the optimal strategy versus TFT players is ‘always defect’ (ALLD). Such behaviors are not accessible to memory-one strategies. Drawing inspiration from Sun Tzu’s the Art of War, we implemented a non-memory-one strategy for population games based on techniques from machine learning and statistical inference that can exploit the history of play in this manner. Via simulation we find that this strategy is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the Prisoner’s Dilemma, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies.
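For readers meeting these strategies for the first time, here is a minimal toy implementation of an iterated Prisoner's Dilemma with a few of the memory-one strategies named in the abstract, using the standard Axelrod payoff values; it is only meant to illustrate the definitions, not the population-game analysis of the paper.

# A toy iterated Prisoner's Dilemma with some memory-one strategies
# (ALLC, ALLD, TFT, WSLS).  Payoffs are the standard Axelrod values.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def ALLC(my_last, opp_last): return 'C'              # always cooperate
def ALLD(my_last, opp_last): return 'D'              # always defect
def TFT(my_last, opp_last):  return opp_last or 'C'  # copy opponent's last move
def WSLS(my_last, opp_last):                         # win-stay, lose-shift
    if my_last is None:
        return 'C'
    # "win" means the opponent cooperated (payoff R or T): keep the same move
    return my_last if opp_last == 'C' else ('C' if my_last == 'D' else 'D')

def play(strat_a, strat_b, rounds=200):
    """Total payoffs of two strategies over an iterated game."""
    score_a = score_b = 0
    a_last = b_last = None
    for _ in range(rounds):
        a = strat_a(a_last, b_last)
        b = strat_b(b_last, a_last)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        a_last, b_last = a, b
    return score_a, score_b

for opponent in (ALLC, ALLD, WSLS):
    print(f"TFT vs {opponent.__name__}: {play(TFT, opponent)}")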

And now for the talk! Click on the talk title here for Chris Lee’s slides, or go down and watch the video:

• Chris Lee, Empirical information, potential information and disinformation as signatures of distinct classes of information evolving machines.

Abstract. Information theory is an intuitively attractive way of thinking about biological evolution, because it seems to capture a core aspect of biology—life as a solution to “information problems”—in a fundamental way. However, there are non-trivial questions about how to apply that idea, and whether it has actual predictive value. For example, should we think of biological systems as being actually driven by an information metric? One idea that can draw useful links between information theory, evolution and statistical inference is the definition of an information evolving machine (IEM) as a system whose elements represent distinct predictions, and whose weights represent an information (prediction power) metric, typically as a function of sampling some iterative observation process. I first show how this idea provides useful results for describing a statistical inference process, including its maximum entropy bound for optimal inference, and how its sampling-based metrics (“empirical information”, Ie, for prediction power; and “potential information”, Ip, for latent prediction power) relate to classical definitions such as mutual information and relative entropy. These results suggest classification of IEMs into several distinct types:

1. Ie machine: e.g. a population of competing genotypes evolving under selection and mutation is an IEM that computes an Ie equivalent to fitness, and whose gradient (Ip) acts strictly locally, on mutations that it actually samples. Its transition rates between steady states will decrease exponentially as a function of evolutionary distance.

2. “Ip tunneling” machine: a statistical inference process summing over a population of models to compute both Ie, Ip can directly detect “latent” information in the observations (not captured by its model), which it can follow to “tunnel” rapidly to a new steady state.

3. disinformation machine (multiscale IEM): an ecosystem of species is an IEM whose elements (species) are themselves IEMs that can interact. When an attacker IEM can reduce a target IEM’s prediction power (Ie) by sending it a misleading signal, this “disinformation dynamic” can alter the evolutionary landscape in interesting ways, by opening up paths for rapid co-evolution to distant steady-states. This is especially true when the disinformation attack targets a feature of high fitness value, yielding a combination of strong negative selection for retention of the target feature, plus strong positive selection for escaping the disinformation attack. I will illustrate with examples from statistical inference and evolutionary game theory. These concepts, though basic, may provide useful connections between diverse themes in the workshop.


by John Baez at May 20, 2015 07:58 PM

arXiv blog

Machine-Learning Algorithm Mines Rap Lyrics, Then Writes Its Own

An automated rap-generating algorithm pushes the boundaries of machine creativity, say computer scientists.

The ancient skill of creating and performing spoken rhyme is thriving today because of the inexorable rise in the popularity of rapping. This art form is distinct from ordinary spoken poetry because it is performed to a beat, often with background music.

May 20, 2015 05:31 PM

astrobites - astro-ph reader's digest

Merging White Dwarfs with Magnetic Fields

The Problem

White dwarfs, the final evolutionary state of most stars, will sometimes find themselves with another white dwarf nearby. In some of these binaries, gravitational radiation will bring the two white dwarfs closer together. When they get close enough, one of the white dwarfs will start transferring matter to the other white dwarf before they merge. These mergers are thought to produce a number of interesting phenomena. Rapid mass transfer from one white dwarf to the other could cause a collapse into a neutron star. The two white dwarfs could undergo a nuclear explosion as a Type 1a supernova. Least dramatically, these merging white dwarfs could also form into one massive, rapidly rotating white dwarf.

There have been many simulations of merging white dwarfs over the last 35 years as astronomers try to figure out the conditions that lead to each of these outcomes. However, none of these simulations have included magnetic fields during the merging process, even though it is well known that many white dwarfs have magnetic fields. This is mostly because other astronomers have simply been interested in different properties and outcomes of mergers. Today's paper simulates the merging of two white dwarfs with magnetic fields to see how these fields change and influence the merger.

The Method

The authors choose to simulate the merger of two fairly typical white dwarfs. They have carbon-oxygen cores and masses of 0.625 and 0.65 solar masses. The magnetic fields are 2 x 10^7 Gauss in the core and 10^3 Gauss at the surface. Recall that the Earth has a magnetic field strength of about 0.5 Gauss. The temperature on the surface of each white dwarf is 5,000,000 K. The authors start the white dwarfs close to each other (about 2 x 10^9 cm apart, with an orbital period of 49.5 seconds) to simulate the merger.
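As a quick sanity check on these initial conditions, Kepler's third law for two point masses on a circular orbit gives a period of roughly 43 seconds for a separation of exactly 2 x 10^9 cm, consistent at the ten-percent level with the quoted 49.5 seconds (the quoted separation is only approximate).

import math

# Rough consistency check of the quoted initial orbit, assuming two point
# masses on a circular orbit (Kepler's third law).
G     = 6.674e-8                  # cm^3 g^-1 s^-2
M_sun = 1.989e33                  # g
M_tot = (0.625 + 0.65) * M_sun    # combined white dwarf mass
a     = 2.0e9                     # cm, approximate initial separation

P = 2 * math.pi * math.sqrt(a**3 / (G * M_tot))
print(f"Keplerian period ≈ {P:.1f} s (the paper quotes 49.5 s)")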

To keep track of what is happening, the authors use a code called AREPO. AREPO works as a moving mesh code – the highest resolution is kept where interesting things are happening. There have been a number of past Astrobites that have covered how AREPO works and some of the applications to planetary disks and galaxy evolution.

The Findings

Figure 1: Result from the simulation showing how the temperature (left) and magnetic field strength (right) change over time (top to bottom). We are looking down on the merger from above.

Figure 1 shows the main result from the paper. The left column is the temperature and the right column in the magnetic field strength at various times during the simulation. By 20 seconds, just a little mass is starting to transfer between the two white dwarfs.  Around 180 seconds, tidal forces finally tear the less massive white dwarf apart. Streams of material are wrapping around the system. These streams form Kelvin-Helmholtz instabilities that amplify the magnetic field. Note how in the second row of Figure 1, the streams with the highest temperatures also correspond to the largest magnetic field strengths. The strength of the magnetic field is changing quickly and increasing during this process. By 250 seconds, many of the streams have merged into a disk around the remaining white dwarf.

By 400 seconds (not shown in the figure), the simulations show a dense core surrounded by a hot envelope. A disk of material surrounds this white dwarf. The magnetic field structure is complex. In the core, the field strength is around 10^10 Gauss, significantly stronger than at the start of the simulation. The field strength is about 10^9 Gauss at the interface of the hot envelope and the disk. The total magnetic energy grows by a factor of 10^9 from the start of the simulation to the end.

These results indicate that most of the magnetic field growth occurs from the Kelvin-Helmholtz instabilities during the merger. The field strength increases slowly at first, then very rapidly before plateauing out. The majority of the field growth occurs during the tidal disruption phase (between about 100 and 200 seconds in the simulation). Since accretion streams are a common feature of white dwarf mergers, these strong magnetic fields should be created in most white dwarf mergers. As this paper is the first to simulate the merging of two white dwarfs with magnetic fields, future work should continue to refine our understanding of this process and observational implications.

 

by Josh Fuchs at May 20, 2015 03:18 PM

Symmetrybreaking - Fermilab/SLAC

Small teams, big dreams

A small group of determined scientists can make big contributions to physics.

Particle physics is the realm of billion-dollar machines and teams of thousands of scientists, all working together to explore the smallest components of the universe.

But not all physics experiments are huge, as the scientists of DAMIC, Project 8, SPIDER and ATRAP can attest. Each of their groups could fit in a single Greyhound bus, with seats to spare.

Don’t let their size fool you; their numbers may be small, but their ambitions are not.

Smaller machines

Small detectors play an important role in searching for difficult-to-find particles.

Take dark matter, for example. Because no one knows what exactly dark matter is or what the mass of a dark matter particle might be, detection experiments need to cover all the bases.

DAMIC is an experiment that aims to observe dark matter particles that larger detectors can’t see. 

The standard strategy used in most experiments is scaling up the size of the detector to increase the number of potential targets for dark matter particles to hit. DAMIC takes another approach: eliminating all sources of background noise to allow the detector to see potential dark matter particle interactions of lower and lower energies.

The detector sits in a dust-free room 2 kilometers below ground at SNOLAB in Sudbury, Canada. To eliminate as much noise as possible, it is held in 10 tons of lead at around minus 240 degrees Fahrenheit. Its small size allows scientists to shield it more easily than they could a larger instrument.

DAMIC is currently the smallest dark matter detection experiment—both in the size of apparatus and the number of people on the team. While many dark matter detectors use more than a hundred thousand grams of active material, the current version of DAMIC runs on a mere five grams and the full detector will have 100 grams. Its team is made up of around ten scientists and students.

“What’s really nice is that even though this is a small experiment, it has the potential of making a huge contribution and having a big impact,” says DAMIC member Javier Tiffenberg, a postdoctoral fellow at Fermilab.

Top to bottom engagement

In collaborations larger than 100 people, specialized teams usually work on different parts of an experiment. In smaller groups, all members work together and engage in everything from machine construction to data analysis.

The 20 or so members of the Project 8 experiment are developing a new technique to measure the mass of neutrinos. On this experiment, moving quickly between designing, testing and analyzing an apparatus is of great importance, says Martin Fertl, a postdoctoral researcher at the University of Washington. Immediate access to hardware and analysis tools helps these projects move forward quickly and allows changes to be implemented with ease.

“A single person can install a new piece of hardware and within a day or so, test the equipment, take new data, analyze that data and then decide whether or not the system requires any additional modification,” he says.

Project 8 aims to determine the mass of neutrinos indirectly using tritium. Tritium decays to Helium-3, releasing an electron and a neutrino. Scientists can measure the energy emitted by these electrons to help them determine the neutrino mass.
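The idea can be made concrete with the textbook form of the beta spectrum near its endpoint, where the rate is proportional to (Q - E) * sqrt((Q - E)^2 - m_nu^2); the sketch below (with an approximate Q of 18.6 keV and a hypothetical 1 eV neutrino mass, not Project 8 numbers) shows how a nonzero mass suppresses the rate in the last couple of electronvolts below the endpoint.

import numpy as np

# Toy sketch of the tritium beta-spectrum endpoint (not the Project 8 analysis).
# Near the endpoint the differential rate is approximately
#   dN/dE ∝ (Q - E) * sqrt((Q - E)**2 - m_nu**2)   for E < Q - m_nu,
# so a nonzero neutrino mass both lowers the endpoint and distorts the shape.
Q = 18_600.0          # approximate tritium endpoint energy in eV

def rate_near_endpoint(E, m_nu):
    """Relative decay rate near the endpoint for neutrino mass m_nu (in eV)."""
    eps = Q - E
    allowed = eps > m_nu
    r = np.zeros_like(E)
    r[allowed] = eps[allowed] * np.sqrt(eps[allowed]**2 - m_nu**2)
    return r

E = np.linspace(Q - 20.0, Q, 400)        # last 20 eV below the endpoint
massless = rate_near_endpoint(E, 0.0)
massive  = rate_near_endpoint(E, 1.0)    # hypothetical 1 eV neutrino mass

last = E > Q - 2.0                       # the final 2 eV of the spectrum
print("fraction of the last-2-eV rate surviving for m_nu = 1 eV:",
      massive[last].sum() / massless[last].sum())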

“It was satisfying for us all when the first data came out and we were seeing electrons,” says UW postdoc Matthew Sternberg. “We basically all took a crack at the data to see what we could pull off of it.”

A fertile training ground

Small collaborations can be especially beneficial to fledgling scientists entering the field.

Space-based projects carry a high cost and risk that can prevent students from being very involved. Balloon-borne experiments, however, are the next best thing. By getting above the atmosphere, balloons provide many of the same benefits for a fraction of the price.

In the roughly 30-member collaboration of the balloon-borne SPIDER experiment, graduate students played a role in designing, engineering, building and launching the instrument, and are now working on analysis.

“It’s great training for graduate students who end up working on large satellite experiments,” says Sasha Rahlin, a graduate student at Princeton University.

SPIDER is composed of six large cameras tethered to a balloon and was launched 110,000 feet above Antarctica to circle the Earth for about 20 days in search of information about the early universe. Using measurements from this flight, researchers are looking for fluctuations in the polarization of the cosmic microwave background radiation, the light left over from the big bang.

“When the balloon went up, all of us were in the control room watching each sub-system turn on and do exactly what it was supposed to,” Rahlin says. “There was a huge moment of ‘Wow, this actually works.’ And every component from start to finish had grad student blood, sweat and tears.”

Around 20 people went down to McMurdo Station in Antarctica to launch SPIDER with the help of a team from NASA that launches balloon experiments in several locations around the world. According to Zigmund Kermish, a postdoctoral fellow at Princeton University, being a small group sometimes means having to optimize time and manpower to get tasks done.

“It’s been really inspiring to see what we do with limited resources,” says Kermish. “It’s amazing what motivated graduate students can make happen.”

Big ambitions

Scientists on small collaborations are working toward big scientific goals. The ATRAP experiment is no exception; it will help answer some fundamental questions about why our universe exists.

Four members of the collaboration are based at CERN, where the apparatus is located, while only 15 people are involved overall.

ATRAP creates antihydrogen by confining positrons and antiprotons in a trap, cooling them to near absolute zero until they can combine to form atoms. ATRAP holds these atoms while physicists make precise measurements of their properties to compare with hydrogen atoms, their matter counterparts.

This can help determine whether nature treats matter and antimatter alike, says Eric Tardiff, a Harvard University postdoc at CERN. If researchers find evidence for violation of this symmetry, they will have a potential explanation for one of physics’ largest mysteries—why the universe contains unequal amounts of antimatter and matter particles. “No experiment has explained [this asymmetry] yet,” he says.

Think small

Small experiments play an important role in particle physics. They help train researchers early in their career by giving them experience across many parts of the scientific process. And despite their size, they hold enormous potential to make game-changing scientific discoveries. As Margaret Mead once said, “Never doubt that a small group of thoughtful, committed citizens can change the world.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Diana Kwon at May 20, 2015 01:00 PM

Jester - Resonaances

Antiprotons from AMS
This week the AMS collaboration released the long expected measurement of the cosmic ray antiproton spectrum.  Antiprotons are produced in our galaxy in collisions of high-energy cosmic rays with interstellar matter, the so-called secondary production.  Annihilation of dark matter could add more antiprotons on top of that background, which would modify the shape of the spectrum with respect to the prediction from the secondary production. Unlike for cosmic ray positrons, in this case there should be no significant primary production in astrophysical sources such as pulsars or supernovae. Thanks to this, antiprotons could in principle be a smoking gun of dark matter annihilation, or at least a powerful tool to constrain models of WIMP dark matter.

The new data from the AMS-02 detector extend the previous measurements from PAMELA up to 450 GeV and significantly reduce the experimental errors at high energies. Now, if you look at the promotional material, you may get the impression that a clear signal of dark matter has been observed. However, experts unanimously agree that the brown smudge in the plot above is just shit, rather than a range of predictions from the secondary production. At this point, there are certainly no serious hints of a dark matter contribution to the antiproton flux. A quantitative analysis of this issue appeared in a paper today. Predicting the antiproton spectrum is subject to large experimental uncertainties about the flux of cosmic ray protons and about the nuclear cross sections, as well as theoretical uncertainties inherent in models of cosmic ray propagation. The data and the predictions are compared in this Jamaican band plot. Apparently, the new AMS-02 data are situated near the upper end of the predicted range.

Thus, there is currently no hint of dark matter detection. However, the new data are extremely useful for constraining models of dark matter. New constraints on the annihilation cross section of dark matter are shown in the plot to the right. The most stringent limits apply to annihilation into b-quarks or into W bosons, which yield many antiprotons after decay and hadronization. The thermal production cross section - theoretically preferred in a large class of WIMP dark matter models - is, in the case of b-quarks, excluded for dark matter masses below 150 GeV. These results provide further constraints on models addressing the hooperon excess in the gamma-ray emission from the galactic center.

More experimental input will allow us to tune the models of cosmic ray propagation to better predict the background. That, in turn, should lead to  more stringent limits on dark matter. Who knows... maybe a hint for dark matter annihilation will emerge one day from this data; although, given the uncertainties,  it's unlikely to ever be a smoking gun.

Thanks to Marco for comments and plots. 

by Jester (noreply@blogger.com) at May 20, 2015 08:40 AM

Jester - Resonaances

What If, Part 1
This is the do-or-die year, so Résonaances will be dead serious. This year, no stupid jokes on April Fools' day: no Higgs in jail, no loose cables, no discovery of supersymmetry, or such. Instead, I'm starting with a new series "What If" inspired  by XKCD.  In this series I will answer questions that everyone is dying to know the answer to. The first of these questions is

If HEP bloggers were Muppets,
which Muppet would they be? 

Here is  the answer.

  • Gonzo the Great: Lubos@Reference Frame (on odd-numbered days)
The one true uncompromising artist. Not treated seriously by other Muppets, but adored by chickens.
  • Animal: Lubos@Reference Frame (on even-numbered days)
    My favorite Muppet. Pure mayhem and destruction. Has only two modes: beat it, or eat it.
  • Swedish Chef: Tommaso@Quantum Diaries Survivor
    The Muppet with a penchant for experiment. No one understands what he says but it's always amusing nonetheless.
  • Kermit the Frog: Matt@Of Particular Significance
    Born Muppet leader, though not clear if he really wants the job.
  • Miss Piggy: Sabine@Backreaction
    Not the only female Muppet, but certainly the best known. Admired for her stage talents but most of all for her punch.
  • Rowlf: Sean@Preposterous Universe
The real star and one-Muppet orchestra. Impressive as an artist and as a comedian, though some complain he's gone to the dogs.

  • Statler and Waldorf: Peter@Not Even Wrong
    Constantly heckling other Muppets from the balcony, yet every week back for more.
  • Fozzie Bear:  Jester@Résonaances
    Failed stand-up comedian. Always stressed that he may not be funny after all.
     
If you have a match for  Bunsen, Beaker, or Dr Strangepork, let me know in the comments.

In preparation:
-If theoretical physicists were smurfs... 

-If LHC experimentalists were Game of Thrones characters...
-If particle physicists lived in Middle-earth... 

-If physicists were cast for Hobbit's dwarves... 
and more. 


by Jester (noreply@blogger.com) at May 20, 2015 08:39 AM

May 19, 2015

arXiv blog

Quantum Life Spreads Entanglement Across Generations

The way creatures evolve in a quantum environment throws new light on the nature of life.

May 19, 2015 06:52 PM

ATLAS Experiment

From ATLAS Around the World: First Blog From Hong Kong

Guess who ATLAS’s youngest member is? It’s Hong Kong! We will be celebrating our first birthday in June, 2015. The Hong Kong ATLAS team comprises members from The Chinese University of Hong Kong (CUHK), The University of Hong Kong (HKU) and The Hong Kong University of Science and Technology (HKUST), operating under the umbrella of the Joint Consortium for Fundamental Physics formed in 2013 by physicists in the three universities. We have grown quite a bit since 2014. There are now four faculty members, two postdocs, two research assistants, and six graduate students in our team. In addition, five undergraduates from Hong Kong will spend a summer in Geneva at the CERN Summer Program. You can’t miss us if you are at CERN this summer (smile and say hi to us please)!

While half of our team is stationed at CERN, taking shifts and working on Higgs property analysis, SUSY searches, and muon track reconstruction software, the other half is working in Hong Kong on functionality, thermal, and radiation tests on some components of the muon system readout electronics, in collaboration with the University of Michigan group. We have recently secured funds to set up a Tier-2 computing center for ATLAS in Hong Kong, and we may work on ATLAS software upgrade tasks as well.

I have also been actively participating in education and outreach activities in Hong Kong. In October last year, I invited two representatives from the Hong Kong Science Museum to visit CERN, so that they could obtain first-hand information on its operation and on the lives and work of students and scientists. This will help them to plan an exhibition on CERN and the LHC in 2016. The timing is just right to bring the excitement of the LHC restart to Hong Kong. I have been giving talks on particle physics and cosmology for students and the general public. The latest one was just two weeks ago, for the 60th anniversary of Queen Elizabeth School, where I was a student myself many years ago. So many memories came back to me! I was an active member of the astronomy club and a frequent user of the very modest telescope we had. I knew back then that a telescope is a time machine that brings images of the past to our eyes. How fortunate I am now, to be a user of the LHC and ATLAS, the ultimate time machine, and a member of the ATLAS community studying the most fundamental questions about the universe. Even though the young students in the audience might find it difficult to understand everything we do, they can certainly feel our excitement in our quest for scientific truth.

 


Ming-chung Chu is a professor in the Department of Physics at The Chinese University of Hong Kong. He did both his undergraduate and graduate studies at Caltech. After some years as a postdoc at MIT and Caltech, he returned in 1995 to Hong Kong, where he was born and grew up. He is proud to have helped bring particle physics to Hong Kong.

 

Part of the Hong Kong team in a group meeting at CERN. Photo courtesy Prof. Luis Flores Castillo.

The humble telescope I used at high school pointed me both to the past and to the future. Photo courtesy Tat Hung Wan.

Secondary school students in Hong Kong after a popular science talk on particle physics at the Chinese University of Hong Kong. Photo courtesy Florence Cheung.

by mchu at May 19, 2015 06:14 PM

Clifford V. Johnson - Asymptotia

Ready for the day…
I have prepared the Tools of the Office of Dad*: -cvj *At least until lunchtime. Then, another set to prep... Click to continue reading this post

by Clifford at May 19, 2015 04:45 PM

Clifford V. Johnson - Asymptotia

‘t Hooft on Scale Invariance…
Worth a read: This is 't Hooft's summary (link is a pdf) of a very interesting idea/suggestion about scale invariance and its possible role in finding an answer to a number of puzzles in physics. (It is quite short, but I think I'll need to read it several times and mull over it a lot.) It won the top Gravity Research Foundation essay prize this year, and there were several other interesting essays in the final list too. See here. -cvj Click to continue reading this post

by Clifford at May 19, 2015 04:27 PM

ZapperZ - Physics and Physicists

Review of Leonard Mlodinow's "The Upright Thinkers"
This is a review of physicist Leonard Mlodinow's new book "The Upright Thinkers: The Human Journey from Living in Trees to Understanding the Cosmos."

In it, he debunks the myths about famous scientists and how major discoveries and ideas came about.

With it, he hopes to correct the record on a number of counts. For instance, in order to hash out his theory of evolution, Darwin spent years post-Galapagos sifting through research and churning out nearly 700 pages on barnacles before his big idea began to emerge. Rather than divine inspiration, Mlodinow says, achieving real innovation takes true grit, and a willingness to court failure, a lesson we’d all be wise to heed.

“People use science in their daily lives all the time, whether or not it's what we think of as ‘science,’” he continues. “Data comes in that you have to understand. Life’s not simple. It requires patience to solve problems, and I think science can teach you that if you know what it really is.”

Scientists would agree. Recently, psychologist Angela Duckworth has begun overturning fundamental conventional wisdom about the role intelligence plays in our life trajectories with research illustrating that, no matter the arena, it’s often not the smartest kids in the room who become the most successful; it’s the most determined ones.

As I've said many times on here, there is a lot of value in learning science, even for non-scientists, IF there is a conscious effort to reveal and convey the process of analytic, systematic thinking. We all live in a world where we try to find correlations among many things, and then try to figure out the cause and effect. This is the only way we make sense of our surroundings and acquire knowledge of things. Science allows us to teach this skill to students and to make them aware of how we decide that something is valid.

This is what is sadly lacking today, especially in the world of politics and social policies.

Zz.

by ZapperZ (noreply@blogger.com) at May 19, 2015 04:18 PM

ZapperZ - Physics and Physicists

Record Number of Authors In Physics Paper
I don't know why this has been making the news reports a lot since last week. I suppose it must be a landmark event or something.

The latest paper on the Higgs is making the news, not for its results, but for setting the record for the largest number of authors on a paper, 5154 of them.

Only the first nine pages in the 33-page article, published on 14 May in Physical Review Letters, describe the research itself — including references. The other 24 pages list the authors and their institutions.

The article is the first joint paper from the two teams that operate ATLAS and CMS, two massive detectors at the Large Hadron Collider (LHC) at CERN, Europe’s particle-physics lab near Geneva, Switzerland. Each team is a sprawling collaboration involving researchers from dozens of institutions and countries.

And oh yeah, they reduced the uncertainty in the Higgs mass to 0.25%, but who cares about that!

This is neither interesting nor surprising to me. The number of collaborators on each of the ATLAS and CMS detectors is already huge by itself. So when they pool together their results and analysis, it isn't surprising that this happens.

Call me silly, but what surprised me more, and is more unexpected, is that the research article itself is "nine pages". I thought PRL always limited its papers to only 4 pages!

BTW, this paper is available for free under the Creative Commons License, you may read it for yourself.

Zz.

by ZapperZ (noreply@blogger.com) at May 19, 2015 03:47 PM

Symmetrybreaking - Fermilab/SLAC

Looking to the heavens for neutrino masses

Scientists are using studies of the skies to solve a neutrino mystery.

Neutrinos may be the lightest of all the particles with mass, weighing in at a tiny fraction of the mass of an electron. And yet, because they are so abundant, they played a significant role in the evolution and growth of the biggest things in the universe: galaxy clusters, made up of hundreds or thousands of galaxies bound together by mutual gravity.

Thanks to this deep connection, scientists are using these giants to study the tiny particles that helped form them. In doing so, they may find out more about the fundamental forces that govern the universe.

Curiously light

When neutrinos were first discovered, scientists didn’t know right away if they had any mass. They thought they might be like photons, which carry energy but are intrinsically weightless.

But then they discovered that neutrinos came in three different types and that they can switch from one type to another, something only particles with mass could do.

Scientists know that the masses of neutrinos are extremely light, so light that they wonder whether they come from a source other than the Higgs field, which gives mass to the other fundamental particles we know. But scientists have yet to pin down the exact size of these masses.

It’s hard to measure the mass of such a tiny particle with precision.

In fact, it’s hard to measure anything about neutrinos. They are electrically neutral, so they are immune to the effects of magnetic fields and related methods physicists use to detect particles. They barely interact with other particles at all: Only a more-or-less direct hit with an atomic nucleus can stop a neutrino, and that doesn’t happen often.

Roughly a thousand trillion neutrinos pass through your body each second from the sun alone, and almost none of those end up striking any of your atoms. Even the densest matter is nearly transparent to neutrinos. However, by creating beams of neutrinos and by building large, sensitive targets to catch neutrinos from nuclear reactors and the sun, scientists have been able to detect a small portion of the particles as they pass through.

In experiments so far, scientists have estimated that the total mass of the three types of neutrinos together is roughly between 0.06 electronvolts and 0.2 electronvolts. For comparison, an electron’s mass is 511 thousand electronvolts and a proton weighs in at 938 million electronvolts.

Because the Standard Model—the theory describing particles and the interactions governing them—predicts massless neutrinos, finding the exact neutrino mass value will help physicists modify their models, yielding new insights into the fundamental forces of nature.

Studying galaxy clusters could provide a more precise answer.

Footprints of a neutrino

One way to study galaxy clusters is to measure the cosmic microwave background, the light traveling to us from 380,000 years after the big bang. During its 13.8-billion-year journey, this light passed through and near all the galaxies and galaxy clusters that formed. For the most part, these obstacles didn’t have a big effect, but taken cumulatively, they filtered the CMB light in a unique way, given the galaxies’ number, size and distribution.

The filtering affected the polarization—the orientation of the electric part of light—and originated in the gravitational field of galaxies. As CMB light traveled through the gravitational field, its path curved and its polarization twisted very slightly, an effect known as gravitational lensing. (This is a less dramatic version of lensing familiar from the beautiful Hubble Space Telescope images.)

The effect is similar to the one that got everyone excited in 2014, when researchers with the BICEP2 telescope announced they had measured the polarization of CMB light due to primordial gravitational waves, which subsequent study showed to be more ambiguous.

That ambiguity won’t be a problem here, says Oxford University cosmologist Erminia Calabrese, who studies the CMB on the Atacama Cosmology Telescope Polarization project. “There is one pattern of CMB polarization that is generated only by the deflection of the CMB radiation.” That means we won’t easily mistake gravitational lensing for anything else.

Small and mighty

Manoj Kaplinghat, a physicist at the University of California at Irvine, was one of the first to work out how neutrino mass could be estimated from CMB data alone. Neutrinos move very quickly relative to stuff like atoms and the invisible dark matter that binds galaxies together. That means they don’t clump up like other forms of matter, but their small mass still contributes to the gravitational field.

Enough neutrinos, even fairly low-mass ones, can deprive a newborn galaxy of a noticeable amount of mass as they stream away, possibly throttling the growth of galaxies that can form in the early universe. It’s nearly as simple as that: Heavier neutrinos mean galaxies must grow more slowly, while lighter neutrinos mean  faster galaxy growth.

Kaplinghat and colleagues realized that the polarization of the CMB provides a measure of the total amount of gravity from galaxies in the form of gravitational lensing, which, working backward, constrains the mass of neutrinos. “When you put all that together, what you realize is you can do a lot of cool neutrino physics,” he says.

Of course the CMB doesn’t provide a direct measurement of the neutrino mass. From the point of view of cosmology, the three types of neutrinos are indistinguishable. As a result, what CMB polarization gives us is the total mass of all three types together.

However, other projects are working on the other end of this puzzle. Experiments such as the Main Injector Neutrino Oscillation Search, managed by Fermilab, have measured the differences between the squares of the masses of the different neutrino types.

Depending on which neutrino is heaviest, we know how the masses of the other two types of neutrinos relate. If we can figure out the total mass, we can figure out the masses of each one. Together, cosmological and terrestrial measurements will get us the individual neutrino masses that neither is able to alone.
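
To see how the two kinds of measurement combine, here is a minimal sketch (my own illustration, not from the article). It assumes the "normal" mass ordering and representative oscillation splittings of roughly 7.5×10⁻⁵ eV² and 2.5×10⁻³ eV²; given a hypothetical cosmological value for the sum of the masses, it solves for the individual masses by bisection.

    import math

    # Approximate oscillation mass-squared splittings, in eV^2 (assumed
    # representative values, not taken from the article).
    DM21_SQ = 7.5e-5   # "solar" splitting
    DM31_SQ = 2.5e-3   # "atmospheric" splitting

    def mass_spectrum(m1):
        """Masses (m1, m2, m3) in eV for a given lightest mass, normal ordering."""
        m2 = math.sqrt(m1**2 + DM21_SQ)
        m3 = math.sqrt(m1**2 + DM31_SQ)
        return m1, m2, m3

    def solve_for_lightest(total, tol=1e-6):
        """Bisection: find the lightest mass so that the three masses sum to 'total'."""
        lo, hi = 0.0, total
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if sum(mass_spectrum(mid)) < total:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Hypothetical cosmological totals (eV), spanning the range quoted above.
    for total in (0.06, 0.12, 0.20):
        m1 = solve_for_lightest(total)
        print(total, [round(m, 4) for m in mass_spectrum(m1)])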

The space-based Planck observatory and the POLARBEAR project in northern Chile have already yielded preliminary results in this search. And scientists at ACTPol, located at high elevation in Chile’s Atacama Desert, are working on this as well. Once the experiments are running at their highest precision, they will determine the neutrino mass at least as well as the best estimates we have, down to the lowest possible values allowed, Calabrese says.

Progress is necessarily slow: The gravitational lensing pattern comes from seeing small patterns emerging from light captured across a large swath of the sky, much like the image in an Impressionist painting arises from abstract brushstrokes that look like very little by themselves.

In more scientific terms, it’s a cumulative, statistical effect, and the more data we have, the better chance we have to measure the lensing effect—and the mass of a neutrino.

 

Like what you see? Sign up for a free subscription to symmetry!

by Matthew R. Francis at May 19, 2015 01:00 PM

May 18, 2015

Tommaso Dorigo - Scientificblogging

The Challenges Of Scientific Publishing: A Conference By Elsevier
I spent the last weekend in Berlin, attending a conference for editors organized by Elsevier. And I learnt quite a bit during two very busy days. As a newbie - I have been a handling editor for the journal "Reviews in Physics" since January this year - I did expect to learn a lot from the event; but I will admit that I decided to accept the invitation more out of curiosity about a world that is at least in part new to me than out of a professional sense of duty.

read more

by Tommaso Dorigo at May 18, 2015 03:28 PM

Quantum Diaries

Drell-Yan, Drell-Yan with Jets, Drell-Yan with all the Jets

All those super low energy jets that the LHC cannot see? LHC can still see them.

Hi Folks,

Particle colliders like the Large Hadron Collider (LHC) are, in a sense, very powerful microscopes. The higher the collision energy, the smaller distances we can study. Using less than 0.01% of the total LHC energy (13 TeV), we see that the proton is really just a bag of smaller objects called quarks and gluons.

[Image: the proton as a bag of quarks and gluons (illustration via Prof. Matt Strassler).]

This means that when two protons collide things are sprayed about and get very messy.

[Image: ATLAS event display of a 2009 proton-proton collision.]

One of the most important processes that occurs in proton collisions is the Drell-Yan process. When a quark, e.g., a down quark d, from one proton and an antiquark, e.g., a down antiquark d, from an oncoming proton collide, they can annihilate into a virtual photon (γ) or Z boson if the net electric charge is zero (or a W boson if the net electric charge is one). After briefly propagating, the photon/Z can split into a lepton and its antiparticle partner, for example into a muon and antimuon or an electron-positron pair! In pictures, quark-antiquark annihilation into a lepton-antilepton pair (the Drell-Yan process) looks like this

[Figure: Feynman diagram of the Drell-Yan process, quark-antiquark annihilation into a lepton pair.]

By the conservation of momentum, the sum of the muon and antimuon momenta will add up to the photon/Z boson  momentum. In experiments like ATLAS and CMS, this gives a very cool-looking distribution

[Figure: CMS dimuon invariant mass spectrum at 7 TeV.]

Plotted is the invariant mass distribution for any muon-antimuon pair produced in proton collisions at the 7 TeV LHC. The rightmost peak at about 90 GeV (about 90 times the proton’s mass!) corresponds to the production of Z bosons. The other peaks represent the production of similarly well-known particles in the particle zoo that have decayed into a muon-antimuon pair. The clarity of each peak, and the fact that this plot uses only about 0.2% of the total data collected during the first LHC data collection period (Run I), means that the Drell-Yan process is very useful for calibrating the experiments. If the experiments are able to see the Z boson, the rho meson, etc., at their correct energies, then we have confidence that the experiments are working well enough to study nature at energies never before explored in a laboratory.
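
To make the kinematics concrete, here is a minimal sketch (my own illustration, not the experiments' analysis code) of how the dimuon invariant mass is computed from two measured momenta. The muon kinematics below are made-up numbers, chosen so that the pair reconstructs close to the Z boson mass.

    import math

    MUON_MASS = 0.10566  # GeV

    def four_momentum(pt, eta, phi, m=MUON_MASS):
        """(E, px, py, pz) in GeV from transverse momentum, pseudorapidity, azimuth."""
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        E = math.sqrt(m**2 + px**2 + py**2 + pz**2)
        return E, px, py, pz

    def invariant_mass(p1, p2):
        """Invariant mass of the summed four-momentum, sqrt(E^2 - |p|^2)."""
        E, px, py, pz = (a + b for a, b in zip(p1, p2))
        return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

    mu_plus = four_momentum(pt=40.0, eta=0.5, phi=0.0)    # hypothetical measurement
    mu_minus = four_momentum(pt=38.0, eta=-0.6, phi=2.9)  # hypothetical measurement
    print(invariant_mass(mu_plus, mu_minus))  # roughly 90 GeV, i.e. near the Z peak

Filling a histogram with this quantity for every muon-antimuon pair in the data is what produces the peaks in the plot above.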

However, in real life, the Drell-Yan process is not as simple as drawn above. Real collisions include the remnants of the scattered protons. Remember: the proton is a bag filled with lots of quarks and gluons.

[Figure: Feynman diagram of the Drell-Yan process with radiated gluons.]

Gluons are what holds quarks together to make protons; they mediate the strong nuclear force, also known as quantum chromodynamics (QCD). The strong force is accordingly named because it requires a lot of energy and effort to overcome. Before annihilating, the quark and antiquark pair that participate in the Drell-Yan process will have radiated lots of gluons. It is very easy for objects that experience the strong force to radiate gluons. In fact, the antiquark in the Drell-Yan process originates from an energetic gluon that split into a quark-antiquark pair. Though less common, every once in a while two or even three energetic quarks or gluons (collectively called jets) will be produced alongside a Z boson.

[Figure: Feynman diagram of the Drell-Yan process accompanied by three energetic jets.]

Here is a real life Drell-Yan (Z boson) event with three very energetic jets. The blue lines are the muons. The red, orange and green “sprays” of particles are jets.

[Image: ATLAS event display of a Z → μμ candidate with three jets.]

 

However likely or unlikely it may be for a Drell-Yan process to occur with additional energetic jets, the frequency at which such events do occur appears to match our theoretical predictions very well. The plot below shows the likelihood ("production cross section") of producing a W or Z boson with at least 0, 1, 2, 3, or 4(!) very energetic jets. The blue bars are the theoretical predictions and the red circles are data. Producing a W or Z boson with more energetic jets is less likely than producing one with fewer jets. The more jets identified, the smaller the production rate ("cross section").

[Figure: CMS measurement of W/Z + n jets production cross sections versus jet multiplicity (2014).]

How about low energy jets? These are difficult to observe because experiments have high thresholds for any part of a collision to be recorded. The ATLAS and CMS experiments, for example, are insensitive to very low energy objects, so not every piece of an LHC proton collision will be recorded. In short: sometimes a jet or a photon is too "dim" for us to detect it. But unlike high energy jets, it is very, very easy for Drell-Yan processes to be accompanied by low energy jets.

[Figure: Feynman diagram of the Drell-Yan process with many low energy gluon emissions.]

There is a subtlety here. Our standard tools and tricks for calculating the probability of something happening in a proton collision (perturbation theory) assume that we are studying objects with much higher energies than the proton at rest. Radiation of very low energy gluons is a special situation where our usual calculation methods do not work. The solution is rather cool.

As we said, the Z boson produced in the quark-antiquark annihilation has much more energy than any of the low energy gluons that are radiated, so emitting a low energy gluon should not affect the system much. This is like a massive freight train pulling coal and dropping one or two pieces of coal. The train carries so much momentum and the coal is so light that dropping even a dozen pieces of coal will have only a negligible effect on the train’s motion. (Dropping all the coal, on the other hand, would not only drastically change the train’s motion but likely also be a terrible environmental hazard.) We can now make certain approximations in our calculation of radiating a low energy gluon, called “soft gluon factorization“. The result is remarkably simple, so simple we can generalize it to an arbitrary number of gluon emissions. This process is called “soft gluon resummation” and was formulated in 1985 by Collins, Soper, and Sterman.

Low energy gluons, even if they cannot be individually identified, still have an effect. They carry away energy, and by momentum conservation this will slightly push and kick the system in different directions.

[Figure: Feynman diagram of many soft gluon emissions, each giving the Z boson a small recoil.]

 

If we look at Z bosons with low momentum from the CDF and DZero experiments, we see that the data and theory agree very well! In fact, in the DZero (lower) plot, the “pQCD” (perturbative QCD) prediction curve, which does not include resummation, disagrees with data. Thus, soft gluon resummation, which accounts for the emission of an arbitrary number of low energy radiations, is important and observable.

[Figures: Z boson transverse momentum spectra from CDF (upper) and DZero (lower), compared to resummed predictions.]

In summary, Drell-Yan processes are very important at high energy proton colliders like the Large Hadron Collider. They serve as a standard candle for the experiments as well as a test of high precision predictions. The LHC Run II program has just begun, and you can count on lots of rich physics in need of studying.

Happy Colliding,

Richard (@bravelittlemuon)

 

by Richard Ruiz at May 18, 2015 03:00 PM

John Baez - Azimuth

PROPs for Linear Systems

Eric Drexler likes to say: engineering is dual to science, because science tries to understand what the world does, while engineering is about getting the world to do what you want. I think we need a slightly less ‘coercive’, more ‘cooperative’ approach to the world in order to develop ‘ecotechnology’, but it’s still a useful distinction.

For example, classical mechanics is the study of what things do when they follow Newton’s laws. Control theory is the study of what you can get them to do.

Say you have an upside-down pendulum on a cart. Classical mechanics says what it will do. But control theory says: if you watch the pendulum and use what you see to move the cart back and forth correctly, you can make sure the pendulum doesn’t fall over!

Control theorists do their work with the help of ‘signal-flow diagrams’. For example, here is the signal-flow diagram for an inverted pendulum on a cart:

When I take a look at a diagram like this, I say to myself: that’s a string diagram for a morphism in a monoidal category! And it’s true. Jason Erbele wrote a paper explaining this. Independently, Bonchi, Sobociński and Zanasi did some closely related work:

• John Baez and Jason Erbele, Categories in control.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, Interacting Hopf algebras.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, A categorical semantics of signal flow graphs.

I’ll explain some of the ideas at the Turin meeting on the categorical foundations of network theory. But I also want to talk about this new paper that Simon Wadsley of Cambridge University wrote with my student Nick Woods:

• Simon Wadsley and Nick Woods, PROPs for linear systems.

This makes the picture neater and more general!

You see, Jason and I used signal flow diagrams to give a new description of the category of finite-dimensional vector spaces and linear maps. This category plays a big role in the control theory of linear systems. Bonchi, Sobociński and Zanasi gave a closely related description of an equivalent category, \mathrm{Mat}(k), where:

• objects are natural numbers, and

• a morphism f : m \to n is an n \times m matrix with entries in the field k,

and composition is given by matrix multiplication.

But Wadsley and Woods generalized all this work to cover \mathrm{Mat}(R) whenever R is a commutative rig. A rig is a ‘ring without negatives’—like the natural numbers. We can multiply matrices valued in any rig, and this includes some very useful examples… as I’ll explain later.
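
To make the role of the rig concrete, here is a small sketch (my own illustration, not code from any of these papers) of composition in \mathrm{Mat}(R): ordinary matrix multiplication, but with the sum and product borrowed from R. Instantiating it with the booleans gives composition of relations between finite sets, anticipating the \mathrm{FinRel} example further below.

    # A commutative rig packaged as its zero, one, addition and multiplication.
    class Rig:
        def __init__(self, zero, one, add, mul):
            self.zero, self.one, self.add, self.mul = zero, one, add, mul

    BOOLEANS = Rig(False, True, lambda a, b: a or b, lambda a, b: a and b)
    NATURALS = Rig(0, 1, lambda a, b: a + b, lambda a, b: a * b)

    def compose(R, g, f):
        """Matrix product of g (p x n) with f (n x m) over the rig R."""
        n, m, p = len(f), len(f[0]), len(g)
        assert len(g[0]) == n, "middle dimensions must match"
        out = [[R.zero] * m for _ in range(p)]
        for i in range(p):
            for j in range(m):
                acc = R.zero
                for k in range(n):
                    acc = R.add(acc, R.mul(g[i][k], f[k][j]))
                out[i][j] = acc
        return out

    # Boolean matrices are relations between finite sets:
    # f is a morphism 2 -> 3 (a 3 x 2 matrix), g is a morphism 3 -> 2 (a 2 x 3 matrix).
    f = [[True, False], [False, True], [True, True]]
    g = [[True, False, True], [False, False, True]]
    print(compose(BOOLEANS, g, f))  # the composite 2 -> 2, as a 2 x 2 boolean matrix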

Wadsley and Woods proved:

Theorem. Whenever R is a commutative rig, \mathrm{Mat}(R) is the PROP for bicommutative bimonoids over R.

This result is quick to state, but it takes a bit of explaining! So, let me start by bringing in some definitions.

Bicommutative bimonoids

We will work in any symmetric monoidal category, and draw morphisms as string diagrams.

A commutative monoid is an object equipped with a multiplication:

and a unit:

obeying these laws:

For example, suppose \mathrm{FinVect}_k is the symmetric monoidal category of finite-dimensional vector spaces over a field k, with direct sum as its tensor product. Then any object V \in \mathrm{FinVect}_k is a commutative monoid where the multiplication is addition:

(x,y) \mapsto x + y

and the unit is zero: that is, the unique map from the zero-dimensional vector space to V.

Turning all this upside down, a cocommutative comonoid has a comultiplication:

and a counit:

obeying these laws:

For example, consider our vector space V \in \mathrm{FinVect}_k again. It’s a cocommutative comonoid where the comultiplication is duplication:

x \mapsto (x,x)

and the counit is deletion: that is, the unique map from V to the zero-dimensional vector space.

Given an object that’s both a commutative monoid and a cocommutative comonoid, we say it’s a bicommutative bimonoid if these extra axioms hold:

You can check that these are true for our running example of a finite-dimensional vector space V. The most exciting one is the top one, which says that adding two vectors and then duplicating the result is the same as duplicating each one, then adding them appropriately.
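
Since the diagram itself is missing here, a rendering of that top axiom in symbols may help (my own notation, not from the original post): writing \mu for the multiplication, \Delta for the comultiplication and \sigma for the symmetry that swaps the two middle factors, it says

\Delta \circ \mu = (\mu \otimes \mu) \circ (1 \otimes \sigma \otimes 1) \circ (\Delta \otimes \Delta)

For our vector space V this is exactly the statement that duplicating x + y gives the same result as duplicating x and y separately and then adding the copies in pairs.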

Our example has some other properties, too! Each element c \in k defines a morphism from V to itself, namely scalar multiplication by c:

x \mapsto c x

We draw this as follows:

These morphisms are compatible with the ones so far:

Moreover, all the ‘rig operations’ in k—that is, addition, multiplication, 0 and 1, but not subtraction or division—can be recovered from what we have so far:

We summarize this by saying our vector space V is a bicommutative bimonoid ‘over k‘.

More generally, suppose we have a bicommutative bimonoid A in a symmetric monoidal category. Let \mathrm{End}(A) be the set of bicommutative bimonoid homomorphisms from A to itself. This is actually a rig: there’s a way to add these homomorphisms, and also a way to ‘multiply’ them (namely, compose them).

Suppose R is any commutative rig. Then we say A is a bicommutative bimonoid over R if it’s equipped with a rig homomorphism

\Phi : R \to \mathrm{End}(A)

This is a way of summarizing the diagrams I just showed you! You see, each c \in R gives a morphism from A to itself, which we write as

The fact that this is a bicommutative bimonoid endomorphism says precisely this:

And the fact that \Phi is a rig homomorphism says precisely this:

So sometimes the right word is worth a dozen pictures!

What Jason and I showed is that for any field k, \mathrm{FinVect}_k is the free symmetric monoidal category on a bicommutative bimonoid over k. This means that the above rules, which are rules for manipulating signal flow diagrams, completely characterize the world of linear algebra!

Bonchi, Sobociński and Zanasi used ‘PROPs’ to prove a similar result where the field is replaced by a sufficiently nice commutative ring. And Wadsley and Woods used PROPs to generalize even further to the case of an arbitrary commutative rig!

But what are PROPs?

PROPs

A PROP is a particularly tractable sort of symmetric monoidal category: a strict symmetric monoidal category where the objects are natural numbers and the tensor product of objects is given by ordinary addition. The symmetric monoidal category \mathrm{FinVect}_k is equivalent to the PROP \mathrm{Mat}(k), where a morphism f : m \to n is an n \times m matrix with entries in k, composition of morphisms is given by matrix multiplication, and the tensor product of morphisms is the direct sum of matrices.

We can define a similar PROP \mathrm{Mat}(R) whenever R is a commutative rig, and Wadsley and Woods gave an elegant description of the ‘algebras’ of \mathrm{Mat}(R). Suppose C is a PROP and D is a strict symmetric monoidal category. Then the category of algebras of C in D is the category of strict symmetric monoidal functors F : C \to D and natural transformations between these.

If for every choice of D the category of algebras of C in D is equivalent to the category of algebraic structures of some kind in D, we say C is the PROP for structures of that kind. This explains the theorem Wadsley and Woods proved:

Theorem. Whenever R is a commutative rig, \mathrm{Mat}(R) is the PROP for bicommutative bimonoids over R.

The fact that an algebra of \mathrm{Mat}(R) is a bicommutative bimonoid is equivalent to all this stuff:

The fact that \Phi(c) is a bimonoid homomorphism for all c \in R is equivalent to this stuff:

And the fact that \Phi is a rig homomorphism is equivalent to this stuff:

This is a great result because it includes some nice new examples.

First, the commutative rig of natural numbers gives a PROP \mathrm{Mat}(\mathbb{N}). This is equivalent to the symmetric monoidal category \mathrm{FinSpan}, where morphisms are isomorphism classes of spans of finite sets, with disjoint union as the tensor product. Steve Lack had already shown that \mathrm{FinSpan} is the PROP for bicommutative bimonoids. But this also follows from the result of Wadsley and Woods, since every bicommutative bimonoid V is automatically equipped with a unique rig homomorphism

\Phi : \mathbb{N} \to \mathrm{End}(V)

Second, the commutative rig of booleans

\mathbb{B} = \{F,T\}

with ‘or’ as addition and ‘and’ as multiplication gives a PROP \mathrm{Mat}(\mathbb{B}). This is equivalent to the symmetric monoidal category \mathrm{FinRel} where morphisms are relations between finite sets, with disjoint union as the tensor product. Samuel Mimram had already shown that this is the PROP for special bicommutative bimonoids, meaning those where comultiplication followed by multiplication is the identity:

But again, this follows from the general result of Wadsley and Woods!

Finally, taking the commutative ring of integers \mathbb{Z}, Wadsley and Woods showed that \mathrm{Mat}(\mathbb{Z}) is the PROP for bicommutative Hopf monoids. The key here is that scalar multiplication by -1 obeys the axioms for an antipode—the extra morphism that makes a bimonoid into a Hopf monoid. Here are those axioms:

More generally, whenever R is a commutative ring, the presence of -1 \in R guarantees that a bimonoid over R is automatically a Hopf monoid over R. So, when R is a commutative ring, Wadsley and Woods’ result implies that \mathrm{Mat}(R) is the PROP for Hopf monoids over R.

Earlier, in their paper on ‘interacting Hopf algebras’, Bonchi, Sobociński and Zanasi had given an elegant and very different proof that \mathrm{Mat}(R) is the PROP for Hopf monoids over R whenever R is a principal ideal domain. The advantage of their argument is that they build up the PROP for Hopf monoids over R from smaller pieces, using some ideas developed by Steve Lack. But the new argument by Wadsley and Woods has its own charm.

In short, we’re getting the diagrammatics of linear algebra worked out very nicely, providing a solid mathematical foundation for signal flow diagrams in control theory!


by John Baez at May 18, 2015 02:31 AM

May 15, 2015

John Baez - Azimuth

Carbon Emissions Stopped Growing?

In 2014, global carbon dioxide emissions from energy production stopped growing!

At least, that’s what preliminary data from the International Energy Agency say. It seems the big difference is China. The Chinese made more electricity from renewable sources, such as hydropower, solar and wind, and burned less coal.

In fact, a report by Greenpeace says that from April 2014 to April 2015, China’s carbon emissions dropped by an amount equal to the entire carbon emissions of the United Kingdom!

I want to check this, because it would be wonderful if true: a 5% drop. They say that if this trend continues, China will close out 2015 with the biggest reduction in CO2 emissions ever recorded by a single country.

The International Energy Agency also credits Europe’s improved attempts to cut carbon emissions for the turnaround. In the US, carbon emissions have basically been dropping since 2006—with a big drop in 2009 due to the economic collapse, a partial bounce-back in 2010, but a general downward trend.

In the last 40 years, there have only been 3 times in which emissions stood still or fell compared to the previous year, all during global economic crises: the early 1980’s, 1992, and 2009. In 2014, however, the global economy expanded by 3%.

So, the tide may be turning! But please remember: while carbon emissions may start dropping, they’re still huge. The amount of CO2 in the air shot above 400 parts per million in March this year. As Erika Podest of NASA put it:

CO2 concentrations haven’t been this high in millions of years. Even more alarming is the rate of increase in the last five decades and the fact that CO2 stays in the atmosphere for hundreds or thousands of years. This milestone is a wake up call that our actions in response to climate change need to match the persistent rise in CO2. Climate change is a threat to life on Earth and we can no longer afford to be spectators.

Here is the announcement by the International Energy Agency:

Global energy-related emissions of carbon dioxide stalled in 2014, IEA, 13 March 2015.

Their full report on this subject will come out on 15 June 2015. Here is the report by Greenpeace EnergyDesk:

China coal use falls: CO2 reduction this year could equal UK total emissions over same period, Greenpeace EnergyDesk.

I trust them less than the IEA when it comes to using statistics correctly, but someone should be able to verify their claims if true.


by John Baez at May 15, 2015 04:46 PM

May 14, 2015

Symmetrybreaking - Fermilab/SLAC

The accelerator in the Louvre

The Accélérateur Grand Louvre d’analyse élémentaire solves ancient mysteries with powerful particle beams.

In a basement 15 meters below the towering glass pyramid of the Louvre Museum in Paris sits a piece of work the curators have no plans to display: the museum’s particle accelerator.

This isn’t a Dan Brown novel. The Accélérateur Grand Louvre d’analyse élémentaire is real and has been a part of the museum since 1988.

Researchers use AGLAE’s beams of protons and alpha particles to find out what artifacts are made of and to verify their authenticity. The amounts and combinations of elements an object contains can serve as a fingerprint hinting at where minerals were mined and when an item was made.

Scientists have used AGLAE to check whether a saber scabbard gifted to Napoleon Bonaparte by the French government was actually cast in solid gold (it was) and to identify the minerals in the hauntingly lifelike eyes of a 4500-year-old Egyptian sculpture known as The Seated Scribe (black rock crystal and white magnesium carbonate veined with thin red lines of iron oxide).

“What makes the AGLAE facility unique is that our activities are 100 percent dedicated to cultural heritage,” says Claire Pacheco, who leads the team that operates the machine. It is the only particle accelerator that has been used solely for this field of research.

Pacheco began working with ion-beam analysis at AGLAE while pursuing a doctorate degree in ancient materials at France’s University of Bordeaux. She took over as its lead scientist in 2011 and now operates the particle accelerator with a team of three engineers.

Jean-Claude Dran, a scientist who worked with AGLAE during its early days and served for several years as a scientific advisor, says the study methods pioneered for AGLAE are uniquely suited to art and archaeological artifacts. “These techniques are very powerful, very accurate and very sensitive to trace elements.”

Photo by: V. Fournier, C2RMF

Crucially, they are also non-destructive in most cases, Pacheco says.

“Of course, AGLAE is non-invasive, which is priority No. 1 for cultural heritage,” she says. The techniques used at AGLAE include particle-induced X-ray and gamma-ray emission spectrometries, which can identify the slightest traces of elements ranging from lithium to uranium.

Before AGLAE, research facilities typically required samples to be placed in a potentially damaging vacuum for similar materials analysis. Researchers hoping to study pieces too large for a vacuum chamber were out of luck. AGLAE, because its beams work outside the vacuum, allows researchers to study objects of any size and shape.

The physicists and engineers who conduct AGLAE experiments typically work hand-in-hand with curators and art historians.

While AGLAE frequently studies items from the local collection, it has a larger mission to study art and relics from museums all around France. It is also available to outside researchers, who have used it on pieces from museums such as the J. Paul Getty Museum in Los Angeles and the Metropolitan Museum of Art in New York.

AGLAE has been used to study glasses, metals and ceramics. In one case, Pacheco’s team wanted to know the origins of pieces of lusterware, a type of ceramic that takes on a metallic shine when kiln-fired. The technique emerged in ninth-century Mesopotamia and was spread all around the Mediterranean during the Muslim conquests. It had mostly faded by the 17th century, but some potters in Spain still carry on the tradition.

Pacheco’s team used AGLAE to pinpoint the elements in the lusterware, and then they mixed up batches of raw materials from different locations. “What we have tried to do is make a kind of ‘identity card’ for every production center at every period in time,” Pacheco says.

Another, recently published study details how AGLAE was also used to analyze the chemical signature of traces of decorative paint on ivory tusks. Pacheco’s team determined that the tusks were likely painted during the seventh century B.C.

A limitation of the AGLAE particle analysis techniques is that they are not very effective for studying paintings because of a slight risk of damage. But Pacheco says that an upgrade now in progress aims to produce a lower-power beam that, coupled with more sensitive detectors, could solve this problem.

Dubbed NEW AGLAE, the upgraded setup could boost automation to allow the accelerator to operate around the clock—it now operates only during the day.

While public tours of AGLAE are not permitted, Pacheco says there are frequent visits by researchers working in cultural heritage.

“It’s so marvelous,” she says. “We are very, very lucky to work in this environment, to study these objects.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Glenn Roberts Jr. and Kelen Tuttle at May 14, 2015 01:00 PM

Tommaso Dorigo - Scientificblogging

Burton Richter Advocates Electron-Positron Colliders, For A Change
Burton Richter, winner of the 1976 Nobel Prize in Physics for the discovery of the J/ψ meson, speaks about the need for a new linear collider to measure Higgs boson branching fractions in a video on Facebook (as soon as I figure out how to embed it here, I will!)

Richter has been a fervent advocate of electron-positron machines over hadronic accelerators throughout his life. So you really could not expect anything different from him - but he still does it with all his might. At one point he says, talking of the hadron collider scientists who discovered the Higgs boson:

read more

by Tommaso Dorigo at May 14, 2015 12:06 PM

May 13, 2015

Symmetrybreaking - Fermilab/SLAC

LHC experiments first to observe rare process

A joint result from the CMS and LHCb experiments precludes or limits several theories of new particles or forces.

Two experiments at the Large Hadron Collider at CERN have combined their results and observed a previously unseen subatomic process.

As published in the journal Nature this week, a joint analysis by the CMS and LHCb collaborations has established a new and extremely rare decay of the Bs particle—a heavy composite particle consisting of a bottom antiquark and a strange quark—into two muons. Theorists had predicted that this decay would only occur about four times out of a billion, and that is roughly what the two experiments observed.

“It’s amazing that this theoretical prediction is so accurate and even more amazing that we can actually observe it at all,” says Syracuse University Professor Sheldon Stone, a member of the LHCb collaboration. “This is a great triumph for the LHC and both experiments.”

LHCb and CMS both study the properties of particles to search for cracks in the Standard Model, our best description so far of the behavior of all directly observable matter in the universe. The Standard Model is known to be incomplete since it does not address issues such as the presence of dark matter or the abundance of matter over antimatter in our universe. Any deviations from this model could be evidence of new physics at play, such as new particles or forces that could provide answers to these mysteries.

“Many theories that propose to extend the Standard Model also predict an increase in this Bs decay rate,” says Fermilab’s Joel Butler of the CMS experiment. “This new result allows us to discount or severely limit the parameters of most of these theories. Any viable theory must predict a change small enough to be accommodated by the remaining uncertainty.”

Courtesy of: LHCb collaboration

Researchers at the LHC are particularly interested in particles containing bottom quarks because they are easy to detect, abundantly produced and have a relatively long lifespan, according to Stone.

“We also know that Bs mesons oscillate between their matter and their antimatter counterparts, a process first discovered at Fermilab in 2006,” Stone says. “Studying the properties of B mesons will help us understand the imbalance of matter and antimatter in the universe.”

That imbalance is a mystery scientists are working to unravel. The big bang that created the universe should have resulted in equal amounts of matter and antimatter, annihilating each other on contact. But matter prevails, and scientists have not yet discovered the mechanism that made that possible.

“The LHC will soon begin a new run at higher energy and intensity,” Butler says. “The precision with which this decay is measured will improve, further limiting the viable Standard Model extensions. And of course, we always hope to see the new physics directly in the form of new particles or forces.”

Courtesy of: CMS collaboration


Fermilab published a version of this article as a press release.

 

Like what you see? Sign up for a free subscription to symmetry!

May 13, 2015 01:00 PM

Subscriptions

Feeds

[RSS 2.0 Feed] [Atom Feed]


Last updated:
May 29, 2015 01:51 AM
All times are UTC.

Suggest a blog:
planet@teilchen.at