Particle Physics Planet


February 28, 2015

Geraint Lewis - Cosmic Horizons

Shooting relativistic fish in a rational barrel
I need to take a breather from grant writing, which is consuming almost every waking hour in between all of the other things that I still need to do. So see this post as a cathartic exercise.

What makes a scientist? Is it the qualification? What you do day-to-day? The association and societies to which you belong? I think a unique definition may be impossible as there is a continuum of properties of scientists. This makes it a little tricky for the lay-person to identify "real science" from "fringe science" (but, in all honesty, the distinction between these two is often not particularly clear cut).

One thing that science (and many other fields) do is have meetings, conferences and workshops to discuss their latest results. Some people seem to spend their lives flitting between exotic locations essentially presenting the same talk to almost the same audience, but all scientists probably attend a conference or two per year.

In one of my own fields, namely cosmology, there are lots of conferences per year. But accompanying these there is another set of conferences going on, also on cosmology and often including discussions of gravity, particle physics, and the power of electricity in the Universe. At these meetings, the words "rational" and "logical" are bandied about, and it is clear that the people attending think that the great mass of astronomers and physicists have gotten it all wrong, are deluded, are colluding to keep the truth from the public for some bizarre agenda - some sort of worship of Einstein and "mathemagics" (I snorted with laughter when I heard this).

If I am being paid to lie to the public, I would like to point out that my cheque has not arrived and unless it does shortly I will go to the papers with a "tell all"!!

These are not a new phenomenon, but they were often in the shadows. Now, of course, with the internet, anyone can see these conferences in action through lots of youtube clips and lectures.

Is there any use for such videos? I think so, as, for the student of physics, they present an excellent opportunity to test one's knowledge by identifying just where the presenters are straying off the path.

A brief search of youtube will turn up talks that point out that black holes cannot exist because

$$ G_{\mu\nu} = 0 $$

is the starting point for the derivation of the Schwarzschild solution.

Now, if you are not really familiar with the mathematics of relativity, this might look quite convincing. The key point is this equation:

$$ G_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu} $$

Roughly speaking, this says that the space-time geometry (left-hand side) is related to the matter and energy density (right-hand side), and you calculate the Schwarzschild geometry for a black hole by setting the right-hand side equal to zero.

Now, with the right-hand side equal to zero, that means there is no energy and mass, and the conclusion in the video is that there is no source, nothing to produce the bending of space-time and hence the effects of gravity. So, have the physicists been pulling the wool over everyone's eyes for almost 100 years?

Now, a university level student may not have done relativity yet, but it should be simple to see the flaw in this argument. And, to do this, we can use the wonderful world of classical mechanics.

In classical physics, where gravity is a force and we deal with potentials, we have a similar equation to the relativistic equation above. It's known as Poisson's equation:

$$ \nabla^2 \Phi = 4 \pi G \rho $$

The left-hand side is related to derivatives of the gravitational potential, whereas the right-hand side is some constants (including Newton's gravitational constant \(G\)) and the density, given by \(\rho\).

I think everyone is happy with this equation. Now, one thing you calculate early on in gravitational physics is that the gravitational potential outside of a massive spherical object is given by

$$ V = -\frac{GM}{r} $$

Note that we are talking about the potential outside of the spherical body (the simple \(V\) and the \(\Phi\) above are meant to be the same thing). So, if we plug this potential into Poisson's equation, does it give us a mass distribution which is spherical?

Now, Poisson's equation can look a little intimidating, but let's recast the potential in Cartesian coordinates. Then it looks like this:

$$ V = -\frac{GM}{\sqrt{x^2 + y^2 + z^2}} $$

Ugh! Does that make it any easier? Yes, let's just plug it into Wolfram Alpha to do the hard work. So, the derivatives have an x-part, a y-part and a z-part - here's the x-part:

$$ \frac{\partial^2 V}{\partial x^2} = GM \left( \frac{1}{(x^2+y^2+z^2)^{3/2}} - \frac{3x^2}{(x^2+y^2+z^2)^{5/2}} \right) $$

Again, if you are a mathphobe, this is not much better, but let's add the y- and z-parts.

After all that, the result is zero! Zilch! Nothing! This must mean that Poisson's equation for this potential is

$$ \nabla^2 V = 4 \pi G \rho = 0 $$

So, the density is equal to zero. Where's the mass that produces the gravitational field? This is the same as the apparent problem with relativity. What Poisson's equation tells us is that the derivatives of the potential AT A POINT are related to the density AT THAT POINT!
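If you'd rather check this with a few lines of code than with Wolfram Alpha, here is a minimal sketch using Python's SymPy library (an aside, not part of the original calculation): it confirms that the Laplacian of the point-mass potential vanishes everywhere away from the origin.

    from sympy import symbols, sqrt, diff, simplify

    # Point-mass potential V = -G*M / sqrt(x^2 + y^2 + z^2), valid outside the body.
    x, y, z, G, M = symbols('x y z G M', positive=True)
    V = -G * M / sqrt(x**2 + y**2 + z**2)

    # Laplacian = sum of the three second derivatives (the x-, y- and z-parts above).
    laplacian = diff(V, x, 2) + diff(V, y, 2) + diff(V, z, 2)

    print(simplify(laplacian))  # prints 0: the vacuum region carries zero density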

Now, remember these are derivatives, and so the potential can have a whole bunch of shapes at that point, as long as the derivatives still hold. One of these, of course, is there being no mass there and so no gravitational potential at all, but any vacuum, with no mass, will obey the above \(\nabla^2 V = 0\) equation, including the potential outside of any body (the one used in this example relied on a spherical source).

So, the relativistic version is that the properties of the space-time curvature AT A POINT are related to the mass and energy AT THAT POINT. A flat space-time is produced when there is no mass and energy anywhere, and so has \(G_{\mu\nu} = 0\), but so does any point in a vacuum; that does not mean that the space-time at that point is not curved (and so that there is no gravity).

Anyway, I got that off my chest, and my Discovery Project submitted, but now it's time to get on with a LIEF application! 

by Cusp (noreply@blogger.com) at February 28, 2015 03:30 AM

Emily Lakdawalla - The Planetary Society Blog

Highlights from our reddit Space Policy AMA
The space policy and advocacy team at The Planetary Society held an AMA (ask me anything) on reddit; here are some of the highlights.

February 28, 2015 12:20 AM

February 27, 2015

Christian P. Robert - xi'an's og

Ubuntu issues

[screen shot with ubuntu 10.10]

It may be that weekends are the wrong time to tamper with a computer OS… Last Sunday, I noticed my Bluetooth icon had a “turn off” option and, since I only use Bluetooth for my remote keyboard and mouse when in Warwick, I turned it off, thinking I would turn it on again next week. This alas led to a series of problems, maybe as a coincidence, since I also updated the Kubuntu 14.04 system over the weekend.

  1. I cannot turn Bluetooth on again! My keyboard and mouse are no longer recognised or detected. No Bluetooth adapter is found by the system settings. Similarly, sudo modprobe bluetooth shows nothing. I have installed a new interface called Blueman, but to no avail. The fix suggested on forums to run rfkill unblock bluetooth does not work either… Actually rfkill list all only returns the wireless device, which is working fine.
  2. My webcam vanished as well. It was working fine before the weekend.
  3. Accessing some webpages, including all New York Times articles, now takes forever on Firefox, though less so on Chrome!

Is this a curse of sorts?!

As an aside, I also found this week that I cannot update Adobe reader from version 9 to version 11, as Adobe does not support Linux versions any more… Another bummer.


Filed under: Kids, Linux Tagged: Bluetooth, Kubuntu, Linux, Ubuntu 14.04

by xi'an at February 27, 2015 11:15 PM

Peter Coles - In the Dark

Farewell, Mr Spock


I was very sad to hear this afternoon of the death at the age of 83 of actor Leonard Nimoy. Although he did a great many other things in a long and varied career, Leonard Nimoy will of course be remembered most fondly for his role as Mr Spock, Science Officer of the USS Enterprise, in Star Trek.

I was both fascinated and inspired by Mr Spock when I was young, and Leonard Nimoy’s death is like the loss of an old friend. I’m sure I’m not the only scientist of my generation who is feeling that way today.

“Of all the souls I have encountered in my travels, his was the most…. human.”

R.I.P. Leonard Nimoy (1931-2015)


by telescoper at February 27, 2015 09:56 PM

arXiv blog

The Emerging Challenge of Augmenting Virtual Worlds With Physical Reality

If you want to interact with real world objects while immersed in a virtual reality, how do you do it?

February 27, 2015 08:56 PM

Emily Lakdawalla - The Planetary Society Blog

Pluto Science, on the Surface
New Horizons' Principal Investigator Alan Stern gives an update on the mission's progress toward Pluto.

February 27, 2015 08:39 PM

The n-Category Cafe

Concepts of Sameness (Part 4)

This time I’d like to think about three different approaches to ‘defining equality’, or more generally, introducing equality in formal systems of mathematics.

These will be taken from old-fashioned logic — before computer science, category theory or homotopy theory started exerting their influence. Eventually I want to compare these to more modern treatments.

If you know other interesting ‘old-fashioned’ approaches to equality, please tell me!

The equals sign is surprisingly new. It was never used by the ancient Babylonians, Egyptians or Greeks. It seems to originate in 1557, in Robert Recorde’s book The Whetstone of Witte. If so, we actually know what the first equation looked like:

As you can see, the equals sign was much longer back then! He used parallel lines “because no two things can be more equal.”

Formalizing the concept of equality has raised many questions. Bertrand Russell published The Principles of Mathematics [R] in 1903. Not to be confused with the Principia Mathematica, this is where he introduced Russell’s paradox. In it, he wrote:

identity, an objector may urge, cannot be anything at all: two terms plainly are not identical, and one term cannot be, for what is it identical with?

In his Tractatus, Wittgenstein [W] voiced a similar concern:

Roughly speaking: to say of two things that they are identical is nonsense, and to say of one thing that it is identical with itself is to say nothing.

These may seem like silly objections, since equations obviously do something useful. The question is: precisely what?

Instead of tackling that head-on, I’ll start by recalling three related approaches to equality in the pre-categorical mathematical literature.

The indiscernibility of identicals

The principle of indiscernibility of identicals says that equal things have the same properties. We can formulate it as an axiom in second-order logic, where we’re allowed to quantify over predicates \(P\):

$$ \forall x \forall y [x = y \; \implies \; \forall P \, [P(x) \; \iff \; P(y)] ] $$

We can also formulate it as an axiom schema in first-order logic, where it’s sometimes called substitution for formulas. This can be written as follows:

For any variables \(x, y\) and any formula \(\phi\), if \(\phi'\) is obtained by replacing any number of free occurrences of \(x\) in \(\phi\) with \(y\), such that these remain free occurrences of \(y\), then

$$ x = y \;\implies\; [\phi \;\implies\; \phi' ] $$

I think we can replace this with the prettier

$$ x = y \;\implies\; [\phi \;\iff\; \phi'] $$

without changing the strength of the schema. Right?

We cannot derive reflexivity, symmetry and transitivity of equality from the indiscernibility of identicals. So, this principle does not capture all our usual ideas about equality. However, as shown last time, we can derive symmetry and transitivity from this principle together with reflexivity. This uses an interesting form of argument where we take “being equal to \(z\)” as one of the predicates (or formulas) to which we apply the principle. There’s something curiously self-referential about this. It’s not illegitimate, but it’s curious.
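To see this argument in a formal setting, here is a minimal sketch in Lean 4 (an illustration added here, with hypothetical theorem names symm' and trans'): the substitution principle appears as Eq.subst, and the curiously self-referential predicates are the motives fun t => t = x and fun t => x = t.

    -- Indiscernibility of identicals appears here as `Eq.subst`:
    -- from h : x = y and a proof of P x, conclude P y.

    theorem symm' {α : Type} {x y : α} (h : x = y) : y = x :=
      Eq.subst (motive := fun t => t = x) h rfl    -- predicate: "is equal to x"

    theorem trans' {α : Type} {x y z : α} (hxy : x = y) (hyz : y = z) : x = z :=
      Eq.subst (motive := fun t => x = t) hyz hxy  -- predicate: "x is equal to it"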

The identity of indiscernibles

Leibniz [L] is often credited with formulating a converse principle, the identity of indiscernibles. This says that things with all the same properties are equal. Again we can write it as a second-order axiom:

$$ \forall x \forall y [ \forall P [ P(x) \; \iff \; P(y)] \; \implies \; x = y ] $$

or a first-order axiom schema.

We can go further if we take the indiscernibility of identicals and identity of indiscernibles together as a package:

$$ \forall x \forall y [ \forall P [ P(x) \; \iff \; P(y)] \; \iff \; x = y ] $$

This is often called the Leibniz law. It says an entity is determined by the collection of predicates that hold of that entity. Entities don’t have mysterious ‘essences’ that determine their individuality: they are completely known by their properties, so if two entities have all the same properties they must be the same.

This principle does imply reflexivity, symmetry and transitivity of equality. They follow from the corresponding properties of \(\iff\) in a satisfying way. Of course, if we were wondering why equality has these three properties, we are now led to wonder the same thing about the biconditional \(\iff\). But this counts as progress: it’s a step toward ‘logicizing’ mathematics, or at least connecting \(=\) firmly to \(\iff\).
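As a small illustration of that last point (again a Lean 4 sketch added here, with a hypothetical definition leibnizEq standing in for the Leibniz law), define ‘Leibniz equality’ as agreement on all predicates and watch reflexivity, symmetry and transitivity drop out of the corresponding properties of the biconditional:

    -- "Leibniz equality": two things are equal when every predicate agrees on them.
    def leibnizEq {α : Type} (x y : α) : Prop := ∀ P : α → Prop, P x ↔ P y

    theorem leibniz_refl {α : Type} (x : α) : leibnizEq x x :=
      fun P => Iff.rfl                         -- from reflexivity of ↔

    theorem leibniz_symm {α : Type} {x y : α} (h : leibnizEq x y) : leibnizEq y x :=
      fun P => (h P).symm                      -- from symmetry of ↔

    theorem leibniz_trans {α : Type} {x y z : α}
        (h₁ : leibnizEq x y) (h₂ : leibnizEq y z) : leibnizEq x z :=
      fun P => (h₁ P).trans (h₂ P)             -- from transitivity of ↔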

Apparently Russell and Whitehead used a second-order version of the Leibniz law to define equality in the Principia Mathematica [RW], while Kalish and Montague [KL] present it as a first-order schema. I don’t know the whole history of such attempts.

When you actually look to see where Leibniz formulated this principle, it’s a bit surprising. He formulated it in the contrapositive form, he described it as a ‘paradox’, and most surprisingly, it’s embedded as a brief remark in a passage that would be hair-curling for many contemporary rationalists. It’s in his Discourse on Metaphysics, a treatise written in 1686:

Thus Alexander the Great’s kinghood is an abstraction from the subject, and so is not determinate enough to pick out an individual, and doesn’t involve the other qualities of Alexander or everything that the notion of that prince includes; whereas God, who sees the individual notion or ‘thisness’ of Alexander, sees in it at the same time the basis and the reason for all the predicates that can truly be said to belong to him, such as for example that he would conquer Darius and Porus, even to the extent of knowing a priori (and not by experience) whether he died a natural death or by poison — which we can know only from history. Furthermore, if we bear in mind the interconnectedness of things, we can say that Alexander’s soul contains for all time traces of everything that did and signs of everything that will happen to him — and even marks of everything that happens in the universe, although it is only God who can recognise them all.

Several considerable paradoxes follow from this, amongst others that it is never true that two substances are entirely alike, differing only in being two rather than one. It also follows that a substance cannot begin except by creation, nor come to an end except by annihilation; and because one substance can’t be destroyed by being split up, or brought into existence by the assembling of parts, in the natural course of events the number of substances remains the same, although substances are often transformed. Moreover, each substance is like a whole world, and like a mirror of God, or indeed of the whole universe, which each substance expresses in its own fashion — rather as the same town looks different according to the position from which it is viewed. In a way, then, the universe is multiplied as many times as there are substances, and in the same way the glory of God is magnified by so many quite different representations of his work.

(Emphasis mine — you have to look closely to find the principle of identity of indiscernibles, because it goes by so quickly!)

There have been a number of objections to the Leibniz law over the years. I want to mention one that might best be handled using some category theory. In 1952, Max Black [B] claimed that in a symmetrical universe with empty space containing only two symmetrical spheres of the same size, the two spheres are two distinct objects even though they have all their properties in common.

As Black admits, this problem only shows up in a ‘relational’ theory of geometry, where we can’t say that the spheres have different positions — e.g., one centered at the point \((x,y,z)\), the other centered at \((-x,-y,-z)\) — but only speak of their position relative to one another. This sort of theory is certainly possible, and it seems to be important in physics. But I believe it can be adequately formulated only with the help of some category theory. In the situation described by Black, I think we should say the spheres are not equal but isomorphic.

As widely noted, general relativity also pushes for a relational approach to geometry. Gauge theory, also, raises the issue of whether indistinguishable physical situations should be treated as equal or merely isomorphic. I believe the mathematics points us strongly in the latter direction.

A related issue shows up in quantum mechanics, where electrons are considered indistinguishable (in a certain sense), yet there can be a number of electrons in a box — not just one.

But I will discuss such issues later.

Extensionality

In traditional set theory we try to use sets as a substitute for predicates, saying \(x \in S\) as a substitute for \(P(x)\). This lets us keep our logic first-order and quantify over sets — often in a universe where everything is a set — as a substitute for quantifying over predicates. Of course there’s a glitch: Russell’s paradox shows we get in trouble if we try to treat every predicate as defining a set! Nonetheless it is a powerful strategy.

If we apply this strategy to reformulate the Leibniz law in a universe where everything is a set, we obtain:

$$ \forall S \forall T [ S = T \; \iff \; \forall R [ S \in R \; \iff \; T \in R]] $$

While this is true in Zermelo-Fraenkel set theory, it is not taken as an axiom. Instead, people turn the idea around and use the axiom of extensionality:

$$ \forall S \forall T [ S = T \; \iff \; \forall R [ R \in S \; \iff \; R \in T]] $$

Instead of saying two sets are equal if they’re in all the same sets, this says two sets are equal if all the same sets are in them. This leads to a view where the ‘contents’ of an entity are its defining feature, rather than the predicates that hold of it.

We could, in fact, send this idea back to second-order logic and say that predicates are equal if and only if they hold for the same entities:

$$ \forall P \forall Q [\forall x [P(x) \; \iff \; Q(x)] \; \iff \; P = Q ] $$

as a kind of ‘dual’ of the Leibniz law:

$$ \forall x \forall y [ \forall P [ P(x) \; \iff \; P(y)] \; \iff \; x = y ] $$

I don’t know if this has been remarked on in the foundational literature, but it’s a close relative of a phenomenon that occurs in other forms of duality. For example, continuous real-valued functions \(F, G\) on a topological space obey

$$ \forall F \forall G [\forall x [F(x) = G(x)] \; \iff \; F = G ] $$

but if the space is nice enough, continuous functions ‘separate points’, which means we also have

$$ \forall x \forall y [ \forall F [ F(x) = F(y)] \; \iff \; x = y ] $$

Notes

by john (baez@math.ucr.edu) at February 27, 2015 04:26 PM

ZapperZ - Physics and Physicists

Much Ado About Dress Color
Have you been following this ridiculous debate about the color of this dress? People are going nuts all over different social media about what the color of this dress is based on the photo that has exploded all over the internet.

I'm calling it ridiculous because people are actually arguing with each other, disagreeing about what they see, and then finding it rather odd that other people do not see the same thing as they do, as if this were highly unusual and unexpected. Is it not a well-known fact that different people see colors differently? Seriously?

I've already mentioned the limitations of the human eye, and why it is really not a very good light detector in many respects. So using your eyes to determine the color of this dress is already suspect. Not only that, but given such uncertainty, one should not be too stubborn about what one sees, as if what you are seeing must be the ONLY way to see it.

But how would science solve this? Easy. A device such as a UV-VIS spectrophotometer can easily be used to measure the spectrum of the reflected light and the intensity of its spectral peaks. It tells you unambiguously which wavelengths are reflected off the source, and how much of each is reflected. So to settle this debate, cut pieces of the dress (corresponding to all the different colors on it) and stick them into one of these devices. Voila! You have killed the debate over the "color".
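Just to illustrate the kind of output such a measurement gives, here is a toy Python sketch with entirely made-up numbers (a fake reflectance curve, not data from any real dress or spectrometer): it reports the wavelength at which the reflected light peaks, which is the objective number the instrument hands you.

    import numpy as np

    # Fake reflectance spectrum over the visible range (made-up Gaussian peak).
    wavelengths_nm = np.arange(380, 701, 5)
    reflectance = np.exp(-0.5 * ((wavelengths_nm - 470.0) / 30.0) ** 2)

    # The objective answer: where the reflected intensity peaks.
    peak_wavelength = wavelengths_nm[np.argmax(reflectance)]
    print(f"Reflectance peaks at {peak_wavelength} nm")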

This is something that can be determined objectively, without any subjective opinion of "color", and without the use of a poor light detector such as one's eyes. So, if someone can tell me where I can get a piece of this fabric, I'll test it out!

Zz.

by ZapperZ (noreply@blogger.com) at February 27, 2015 04:15 PM

Peter Coles - In the Dark

How Labour’s Tuition Fee Proposals Should Be Implemented

The big news today is Ed Miliband’s announcement that, if elected, the Labour Party would cut the maximum tuition fee payable by students in English universities from £9K to £6K. That will of course be broadly welcomed by prospective students (and indeed current ones, whose fees will be reduced from 2016 onwards). There is however considerable nervousness around the university sector about whether and how the cut of 33% in fee income will be made good. The proposal seems to be that the shortfall of around £3bn will be made up by grants from government to universities, funded by a reduction in tax relief on pension contributions made by high earners. I have yet to see any concrete proposals on how these grants would be allocated.

I would like here to make a proposal on how this allocation should be done, in such a way that it corrects a serious anomaly in how the current funding arrangements from the Higher Education Funding Council for England (HEFCE) affect Science, Technology, Engineering and Mathematics (STEM) disciplines. For the record, I’ll declare my interest in this: I work in a STEM area and am therefore biased.

I’ll explain my reasoning by going back a few years. Before the introduction of the £9K tuition fees in 2012 (i.e. in the `old regime’), a University would receive income from tuition fees of up to £3375 per student and from a `unit of resource’ or `teaching grant’ that depends on the subject, as shown in the upper part of Table C below, which is taken from a HEFCE document:

[Table C from the HEFCE document: fee income and teaching grant per student by subject band, under the old and new regimes]

In the old regime, the  maximum income per student in Physics was thus £8,269 whereas for a typical Arts/Humanities student the maximum was £5,700. That means there was a 45% difference in funding between these two types of subject. The reason for this difference is that subjects such as physics are much more expensive to teach. Not only do disciplines like physics require expensive laboratory facilities (and associated support staff), they also involve many more contact hours between students and academic staff than in, e.g. an Arts subject.  However, the differential is not as large as you might think: there’s only a factor two difference in teaching grant between the lowest band (D, including Sociology, Economics, Business Studies, Law and Education) and the STEM band B (including my own subject, Physics). The real difference in cost is much larger than that, and not just because science subjects need laboratories and the like.

To give an example, I was talking recently to a student from a Humanities department at a leading University (not my employer). Each week she gets 3 lectures and one two-hour seminar, the latter  usually run by a research student. That’s it for her contact with the department. That meagre level of contact is by no means unusual, and some universities offer even less tuition than that. A recent report states that the real cost of teaching for Law and Sociology is less than £6000 per student, consistent with the level of funding under the “old” fee regime; teaching in STEM disciplines on the other hand actually costs over £11k. What this means, in effect, is that Arts and Humanities students are cross-subsidising STEM students. That’s neither fair nor transparent.

In my School, the School of Mathematical and Physical Sciences at the University of Sussex, a typical student can expect around 20 contact hours per week including lectures, exercise classes, laboratory sessions, and a tutorial (usually in a group of four). The vast majority of these sessions are done by full-time academic staff, not PDRAs or PhD students, although we do employ such folks in laboratory sessions and for a very small number of lectures. It doesn’t take Albert Einstein to work out that 20 hours of staff time costs a lot more than 3, and that’s even before you include the cost of the laboratories and equipment needed to teach physics.

Now look at what happens in the `new regime’, as displayed in the lower table in the figure. In the current system, students still pay the same fee for STEM and non-STEM subjects (£9K in most HEIs) but the teaching grant is now £1483 for Physics and nothing at all for Bands C and D. The difference in income is thus just £1,483, a percentage difference of just 16.4%. Worse than this, there’s no requirement that this extra resource be spent on the disciplines with which it is associated. In most universities, though gladly not mine, all the tuition income goes into central coffers and is dispersed to Schools and Departments according to the whims of the University Management.
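For what it's worth, here is the arithmetic behind those two percentages as a small Python sketch, using only the figures quoted above (amounts in GBP per student per year):

    # Old regime: 3375 fee plus a subject-dependent teaching grant.
    old_income_physics = 8269   # band B maximum quoted above
    old_income_arts = 5700      # band D maximum quoted above

    # New regime: 9000 fee for everyone, plus a 1483 grant for Physics only.
    new_income_physics = 9000 + 1483
    new_income_arts = 9000 + 0

    old_gap = (old_income_physics - old_income_arts) / old_income_arts
    new_gap = (new_income_physics - new_income_arts) / new_income_arts

    print(f"Old regime differential: {old_gap:.0%}")  # roughly 45%
    print(f"New regime differential: {new_gap:.0%}")  # roughly 16%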

Of course the higher fee levels have led to an increase in income to Universities across all disciplines, which is welcome because it should allow institutions to improve the quality of their teaching by purchasing better equipment, etc. But the current arrangements act as a powerful disincentive for a university to invest in expensive subjects, such as Physics, relative to Arts & Humanities subjects such as English or History. They also rip off staff and students in those disciplines, the students because they are given very little teaching in return for their fee, and the staff because we have to work far harder than our colleagues in other disciplines, who fob off most of what little teaching they're supposed to do onto PhD students badged as Teaching Assistants. It is fortunate for this country that scientists working in its universities show such immense dedication to teaching as well as research that they're prepared to carry on working in a University environment that is so clearly biased against STEM disciplines.

To get another angle on this argument, consider the comments made by senior members of the legal profession who are concerned about the drastic overproduction of law graduates. Only about half those doing the Bar Professional Training Course after a law degree stand any chance of getting a job as a lawyer in the UK. Contrast this with the situation in science subjects, where we don’t even produce enough graduates to ensure that schools have an adequate supply of science teachers. The system is completely out of balance. Here at Sussex, only about a quarter of students take courses in STEM subjects; nationally the figure is even lower, around 20%.

Now there’s a chance to reverse this bias and provide an incentive for universities to support STEM subjects. My proposal is simple: the government grants proposed to offset the loss of tuition fee income should be focussed on STEM disciplines. Income to universities from students, especially in laboratory-based subjects, could then be raised to about £12K, adequate to cover the real cost of teaching, whereas that in the less onerous Arts and Humanities could be fixed at about £6K, again sufficient to cover the actual cost of teaching but funded by fees only.

I want to make it very clear that I am not saying that non-STEM subjects are of lower value, just that they cost less to teach.

Anyway, I thought I’d add a totally unscientific poll to see what readers of this blog make of the Labour proposals:

Take Our Poll: http://polldaddy.com/poll/8687744


by telescoper at February 27, 2015 03:21 PM

Christian P. Robert - xi'an's og

je suis Avijit Roy

আমরা শোকাহত
কিন্তু আমরা অপরাজিত

[“We mourn but we are not out”]


Filed under: Uncategorized Tagged: atheism, Bangladesh, blogging, Mukto-Mona

by xi'an at February 27, 2015 02:18 PM

CERN Bulletin


Qminder, the Registration Service's application

by Journalist, Student at February 27, 2015 10:13 AM

CERN Bulletin


Klaus Winter (1930 - 2015)

We learned with great sadness that Klaus Winter passed away on 9 February 2015, after a long illness.

 

Klaus was born in 1930 in Hamburg, where he obtained his diploma in physics in 1955. From 1955 to 1958 he held a scholarship at the Collège de France, where he received his doctorate in nuclear physics under the guidance of Francis Perrin. Klaus joined CERN in 1958, where he first participated in experiments on π+ and K0 decay properties at the PS, and later became the spokesperson of the CHOV Collaboration at the ISR.

Starting in 1976, his work focused on experiments with the SPS neutrino beam. In 1984 he joined Ugo Amaldi to head the CHARM experiment, designed for detailed studies of the neutral current interactions of high-energy neutrinos, which had been discovered in 1973 using the Gargamelle bubble chamber at the PS. The unique feature of the detector was its target calorimeter, which used large Carrara marble plates as an absorber material.

From 1984 to 1991, Klaus headed up the CHARM II Collaboration. The huge detector, which weighed 700 tonnes and was principally a sandwich structure of large glass plates and planes of streamer tubes, was primarily designed to study high-energy neutrino-electron scattering through neutral currents.

In recognition of the fundamental results obtained by these experiments, Klaus was awarded the Stern-Gerlach Medal in 1993, the highest distinction of the German Physical Society for exceptional achievements in experimental physics. In 1997, he was awarded the prestigious Bruno Pontecorvo Prize for his major contributions to neutrino physics by the Joint Institute for Nuclear Research in Dubna.

The last experiment under his leadership, from 1991 until his retirement, was CHORUS, which used a hybrid emulsion-electronic detector primarily designed to search for νμ → ντ oscillations in the then-favoured region of large mass differences and small mixing angle.

Among other responsibilities, Klaus served for many years as editor of Physics Letters B and on the Advisory Committee of the International Conference on Neutrino Physics and Astrophysics. He was also the editor of two renowned books, Neutrino Physics (1991 and 2000) and Neutrino Mass with Guido Altarelli (2003).

An exceptional researcher, he also lectured physics at the University of Hamburg and – after the reunification of Germany – at the Humboldt University of Berlin, supervising 25 PhD theses and seven Habilitationen.

Klaus was an outstanding and successful leader, dedicated to his work, which he pursued with vision and determination. His intellectual horizons were by no means limited to science, extending far into culture and the arts, notably modern painting.

We have lost an exceptional colleague and friend.
 

His friends and colleagues from CHARM, CHARM II and CHORUS

February 27, 2015 10:02 AM

Georg von Hippel - Life on the lattice

Back from Mumbai
On Saturday, my last day in Mumbai, a group of colleagues rented a car with a driver to take a trip to Sanjay Gandhi National Park and visit the Kanheri caves, a Buddhist site consisting of a large number of rather simple monastic cells and some worship and assembly halls with ornate reliefs and inscriptions, all carved out of solid rock (some of the cell entrances seem to have been restored using steel-reinforced concrete, though).

On the way back, we stopped at Mani Bhavan, where Mahatma Gandhi lived from 1917 to 1934, and which is now a museum dedicated to his life and legacy.

In the night, I flew back to Frankfurt, where the temperature was much lower than in Mumbai; in fact, on Monday there was snow.

by Georg v. Hippel (noreply@blogger.com) at February 27, 2015 10:01 AM

Lubos Motl - string vacua and pheno

Nature is subtle
Caltech has created its new Walter Burke Institute for Theoretical Physics. It's named after Walter Burke – but it is neither the actor nor the purser nor the hurler; it's Walter Burke the trustee, so no one seems to give a damn about him.



Walter Burke, the actor

That's why John Preskill's speech [URL fixed, tx] focused on a different topic, namely his three principles of creating the environment for good physics.




His principles are, using my words,
  1. the best way to learn is to teach
  2. two-trick ponies (people working at the collision point of two disciplines) are great
  3. Nature is subtle
Let me say a few words about these principles.




Teaching as a way of learning

First, for many of us, teaching is indeed a great way to learn. If you are passionate about teaching, you are passionate about making things so clear to the "student" that he or she just can't object. But to achieve this clarity, you must clarify all the potentially murky points that you may be willing to overlook if the goal were just for you to learn the truth.

You "know" what the truth is, perhaps because you have a good intuition or you have solved similar or very closely related things in the past, and it's therefore tempting – and often useful, if you want to save your time – not to get distracted by every doubt. But a curious, critical student will get distracted and he or she will interrupt you and ask the inconvenient questions.

If you are a competent teacher, you must be able to answer pretty much all questions related to what you are saying, and by getting ready for this deep questioning, you learn the topic really properly.

I guess that John Preskill would agree that I am interpreting his logic in different words and I am probably thinking about these matters similarly to himself. Many famous physicists have agreed. For example, Richard Feynman has said that it was important for him to be hired as a teacher because if the research isn't moving forward, and it often isn't, he still knows that he is doing something useful.

But I still think it's fair to say that many great researchers don't think in this way – and many great researchers aren't even good teachers. Bell Labs have employed numerous great non-teacher researchers. And on the contrary, many good teachers are not able to become great researchers. For those reasons, I think that Preskill's implicit point about the link between teaching and finding new results isn't true in general.

Two-trick ponies

Preskill praises the concept of two-trick ponies – people who learn (at least) two disciplines and benefit from the interplay between them. He is an example of a two-trick pony. And it's great if it works.

On the other hand, I still think that a clear majority of the important results occurs within one discipline. And most combinations of disciplines end up being low-quality science. People often market themselves as interdisciplinary researchers because they're not too good in either discipline – and whenever their deficit in one discipline is unmasked, they may suggest that they're better in another one. Except that it often fails to be the case in all disciplines.

So interdisciplinary research is often just a euphemism for bad research hiding its low quality. Moreover, even if one doesn't talk about imperfect people at all, I think that random pairs of disciplines (or subdisciplines) of science (or physics) are unlikely to lead to fantastic offspring, at least not after a limited effort.

Combinations of two disciplines have led and will probably lead to several important breakthroughs – but they are very rare.

There is another point related to the two-trick ponies. Many breakthroughs in physics resulted from the solution to a paradox. The apparent paradox arose from two different perspectives on a problem. These perspectives may usually be associated with two subdisciplines of physics.

Einstein's special relativity is the resolution of disagreements between classical mechanics and classical field theory (electromagnetism) concerning the question how objects behave when you approach the speed of light. String theory is the reconciliation of the laws of the small (quantum field theory) and the laws of the large (general relativity), and there are other examples.

Even though the two perspectives that are being reconciled correspond to different parts of physics research and the physics community, they are often rather close sociologically. So theoretical physicists tend to know both. The very question whether two classes of questions in physics should be classified as "one pony" or "two ponies" (or "more than two ponies") is a matter of convention. After all, there is just one science and the precise separation of science into disciplines is a human invention.

This ambiguous status of the term "two-trick pony" seriously weakens John Preskill's second principle. When we say that someone is a "two-trick pony", we may only define this proposition relatively to others. A "two-trick pony" is more versatile than others – he knows stuff from subdisciplines that are further from each other than the subdisciplines mastered by other typical ponies.

But versatility isn't really the general key to progress, either. Focus and concentration may often be more important. So I don't really believe that John Preskill's second principle may be reformulated as a general rule with universal validity.

Nature is subtle

However, I fully agree with Preskill's third principle that says that Nature is subtle. Subtle is Nature but malicious She is not. ;-) Preskill quotes the holographic principle in quantum gravity as our best example of Nature's subtle character. That's a great (but not the greatest) choice of an example, I think. Preskill adds a few more words explaining what he means by the adjective "subtle":
Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.
Nature isn't a prostitute. She is hiding many of Her secrets. That's why the self-confidence of a man who declares himself to be the naturally born expert in Nature's intimate organs may often be unjustified and foolish. The appearances are often misleading. The men often confuse the pubic hair with the swimming suit, the \(\bra{\rm bras}\) with the \(\ket{\rm cats}\) beneath the \(\bra{\rm bras}\), and so on. We aren't born with the accurate knowledge of the most important principles of Nature.

We have to learn them by carefully studying Nature and we should always understand that any partial insight we make may be an illusion. To say the least, every generalization or extrapolation of an insight may turn out to be wrong.

And it may be wrong not just in the way we can easily imagine – a type of wrongness of our theories that we're ready to expect from the beginning. Our provisional theories may be wrong for much more profound reasons.

Of course, I consider the postulates of quantum mechanics to be the most important example of Nature's subtle character. A century ago, physicists were ready to generalize the state-of-the-art laws of classical physics in many "understandable" ways: to add new particles, new classical fields, new terms in the equations that govern them, higher derivatives, and so on. And Lord Kelvin thought that even those "relatively modest" steps had already been completed, so all that remained was to measure the parameters of Nature more accurately than before.

But quantum mechanics forced us to change the whole paradigm. Even though the class of classical (and usually deterministic) theories seemed rather large and tolerant, quantum mechanics showed that it's an extremely special, \(\hbar=0\) limit of more general theories of Nature (quantum mechanical theories) that we must use instead of the classical ones. The objective reality doesn't exist at the fundamental level.

(The \(\hbar=0\) classical theories may look like a "measure zero" subset of the quantum+classical ones with a general value of \(\hbar\). But because \(\hbar\) is dimensionful in usual units and its numerical value may therefore be changed by any positive factor by switching to different units, we may only qualitatively distinguish \(\hbar=0\) and \(\hbar\neq 0\). That means that the classical and quantum theories are pretty much "two comparably large classes" of theories. The classical theories are a "contraction" or a "limit" of the quantum ones; some of the quantum ones are "deformations" of the classical ones. Because of these relationships, it was possible for the physicists to think that the world obeys classical laws although for 90 years, we have known very clearly that it only obeys the quantum laws.)

Quantum mechanics demonstrated that people were way too restrictive when it came to the freedoms they were "generously" willing to grant to Nature. Nature just found the straitjacket to be unacceptably suffocating. It simply doesn't work like that. Quantum mechanics is the most important example of a previously unexpected difficulty but there are many other examples.

At the end, the exact theory of Nature – and our best approximations of the exact theory we may explain these days – are consistent. But the very consistency may sometimes look surprising to a person who doesn't have a sufficient background in mathematics, who hasn't studied the topic enough, or who is simply not sufficiently open-minded or honest.

The lay people – and some of the self-styled (or highly paid!) physicists as well – often incorrectly assume that the right theory must belong to a class of theories (classical theories, those with the objective reality of some kind, were my most important example) they believe is sufficiently broad and surely containing all viable contenders. They believe that all candidates not belonging to this class are crazy or inconsistent. They violate common sense, don't they?

But this instinctive expectation is often wrong. In reality, they have some evidence that their constraint on the theory is a sufficient condition for a theory to be consistent. But they often incorrectly claim that their restriction is actually a necessary condition for the consistency, even though it is not. In most cases, when this error takes place, the condition they were willing to assume is not only unnecessary; it is actually demonstrably wrong when some other, more important evidence or arguments are taken into account.

A physicist simply cannot ignore the possibility that assumptions are wrong, even if he used to consider these assumptions as "obvious facts" or "common sense". Nature is subtle and not obliged to pay lip service to sensibilities that are common. The more dramatic differences between the theories obeying the assumption and those violating the assumption are, the more attention a physicist must pay to the question whether his assumption is actually correct.

Physicists are supposed to find some important or fundamental answers – to construct the big picture. That's why they unavoidably structure their knowledge hierarchically to the "key things" and the "details", and they prefer to care about the former (leaving the latter to the engineers and others). However, separating ideas into "key things" and "details" mindlessly is very risky because the things you consider "details" may very well show that your "key things" are actually wrong, the right "key things" are completely different, and many of the things you consider "details" are not only different than you assumed, but they may actually be some of the "key things" (or the most important "key things"), too!

Of course, I was thinking about very particular examples when I was writing the previous, contrived paragraph. I was thinking about bad or excessively stubborn physicists (if you want me to ignore full-fledged crackpots) and their misconceptions. Those who believe that Nature must have a "realist" description – effectively one from the class of classical theories – may consider all the problems (of the "many worlds interpretation" or any other "realist interpretation" of quantum mechanics) pointed out by others to be just "details". If something doesn't work about these "details", these people believe, those defects will be fixed by some "engineers" in the future.

But most of these objections aren't details at all and it may be seen that no "fix" will ever be possible. They are valid and almost rock-solid proofs that the "key assumptions" of the realists are actually wrong. And if someone or something may overthrow a "key player", then he or it must be a "key player", too. He or it can't be just a "detail"! So if there seems to be some evidence – even if it looks like technical evidence composed of "details" – that actually challenges or disproves your "key assumptions", you simply have to care about it because all your opinions about the truth, along with your separation of questions to the "big ones" and "details", may be completely wrong.

If you don't care about these things, it's too bad and you're very likely to end up in the cesspool of religious fanaticism and pseudoscience together with assorted religious bigots and Sean Carrolls.

by Luboš Motl (noreply@blogger.com) at February 27, 2015 06:34 AM

astrobites - astro-ph reader's digest

Corpse too Bright? Make it Bigger!

Title: “Circularization” vs. Accretion — What Powers Tidal Disruption Events?
Authors: T. Piran, G. Svirski, J. Krolik, R. M. Cheng, H. Shiokawa
First Author’s Institution: Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem

 

Our day-to-day experiences with gravity are fairly tame. It keeps our GPS satellites close and ready for last-minute changes to an evening outing, brings us the weekend snow and rain that beg for a cozy afternoon curled up warm and dry under covers with a book and a steaming mug, anchors our morning cereal to its rightful place in our bowls (or in our tummies, for that matter), and keeps the Sun in view day after day for millennia on end, nourishing the plants that feed us and radiating upon us its cheering light. Combined with a patch of slippery ice, gravity may produce a few lingering bruises, and occasionally we'll hear about the brave adventurers who, in search of higher vistas, slip tragically off an icy slope or an unforgiving cliff. But all in all, gravity in our everyday lives is a largely unnoticed, unheralded hero that works continually behind the scenes to maintain life as we know it.

Park yourself outside a relatively small but massive object such as the supermassive black hole lurking at the center of our galaxy, and you'll discover sly gravity's more feral side. Gravity's inverse square law dependence on your distance from your massive object of choice dictates that as you get closer and closer to said object, the strength of gravity will increase drastically: if you halve your distance to the massive object, it will pull four times as hard on you; if you quarter your distance, it'll pull sixteen times as hard; and, well, hang on tight to your shoes, because you may start to feel them tugging away from your feet. At this point, though, you should be high-tailing it as fast as you can away from the massive object rather than attending to your footwear, for if you're sufficiently close, the difference in the gravitational pull between your head and your feet can be large enough that you'll stretch and deform into a long string—or "spaghettify", as astronomers have officially termed this painful and gruesome path of no return.

piran2015-shocks

Figure 1. A schematic of the accretion disk created when a star passes too close to a supermassive black hole. The star is ripped up by the black hole, and its remnants form the disk. Shocks (red) generated as stellar material falls onto the disk produce the light we observe. [Figure taken from today’s paper.]

While it doesn’t look like there’ll be a chance for the daredevils among us to visit such an object and test these ideas any time soon, there are other things that have the unfortunate privilege of doing so: stars. If a star passes closely enough to a supermassive black hole that the star’s self-gravity—which holds it together in one piece—is dwarfed by the difference in the gravitational pull of the black hole from one side of the star to the other, the tides the black hole raises on the star (much like the oceanic tides produced by the Moon and the Sun on Earth) can become so large that the star deforms until it rips apart. The star spaghettifies in what astronomers call a tidal disruption event, or TDE for short. The star-black hole separation below which the star succumbs to such a fate is called its tidal radius (see Nathan’s post for more details on the importance of the tidal radius in TDEs). A star that passes within this distance sprays out large quantities of its hot gas as it spirals to its eventual death in the black hole. But the star doesn’t die silently. The stream of hot gas it sheds can produce a spectacular light show that can last for months. The gas, too, is eventually swallowed by the black hole, but it first forms an accretion disk around the black hole that extends up to the tidal radius. The gas violently releases its kinetic energy in shocks that form near what would have been the original star’s point of closest approach (its periapsis) and where the gas that wraps around the black hole collides with the stream of newly infalling stellar gas at the edge of the disk (see Figure 1). It is the energy radiated by these shocks that eventually escapes and makes its way to our telescopes, where we can observe it—a distant flare at the heart of a neighboring galaxy.

Or so we thought.

 

TDEs, once just a theorist’s whimsy, have catapulted in standing to an observational reality as TDE-esque flares have been observed near our neighborly supermassive black holes. An increasing number of these have been discovered through UV/optical observations (the alternate method being X-rays), which have yielded some disturbing trends that contradict the predictions of the classic TDE picture. These UV/optical TDEs aren’t as luminous as we expect. They aren’t as hot as we thought they would be, and many of them stay at the same temperature rather than cooling with time. The light we do see seems to come from a region much larger than we expected, and the gas producing the light is moving more slowly than our classic picture suggested. Haven’t you thrown in the towel already?

But hang on to your terrycloth—and cue the authors of today’s paper. Inspired by new detailed simulations of TDEs, they suggested that what we’re seeing in the optical is not the light from shocks in an accretion disk that extends up to the tidal radius, but from a disk that extends out to about 100 times that distance. Again, shocks from interacting streams of gas—but this time extending out to the larger radius—produce the light we observe. The larger disk automatically solves the size problem, and conveniently solves the velocity problem with it, since Kepler’s laws predict that material would be moving more slowly at the larger radius. This in turn reduces the luminosity of the TDE, which is powered by the loss of kinetic energy (which, of course, scales with the velocity) at the edge of the disk. A larger radius and lower luminosity also work to reduce the blackbody temperature of the gas. The authors predicted how each of the observations inconsistent with the classic TDE model would change under the new model, and found that the predictions agreed well with the measured peak luminosity, temperature, line width (a proxy for the speed of the gas), and estimated size of the emitting region for seven TDEs that had been discovered in the UV/optical.
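To get a feel for why a bigger disk helps, here is a rough scaling sketch in Python (an illustration of the direction of the changes, not a calculation from the paper), assuming Keplerian speeds, shock power scaling with the kinetic energy per unit mass, and blackbody emission:

    # Move the emitting region from the tidal radius r_t out to ~100 r_t and see
    # which way the observables shift (pure scalings; prefactors are ignored).
    factor_r = 100.0

    factor_v = factor_r ** -0.5                    # Kepler: v ~ r^(-1/2), so ~10x slower
    factor_L = factor_v ** 2                       # shock power per unit mass ~ v^2, so fainter
    factor_T = (factor_L / factor_r ** 2) ** 0.25  # blackbody: T ~ (L / R^2)^(1/4), so cooler

    print(f"velocity x{factor_v:.2f}, luminosity x{factor_L:.2f}, temperature x{factor_T:.3f}")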

But as most theories are wont to do, while this model solves many observational puzzles, it opens another one: these lower luminosity TDEs radiate only 1% of the energy the stellar remains should lose as they are accreted onto the black hole.  So where does the rest of the energy go?  The authors suggest a few different means (photon trapping? outflows? winds? emission at other wavelengths?), but all of them appear unsatisfying for various reasons.  It appears that these stellar corpses will live on in astronomers’ deciphering minds.

by Stacy Kim at February 27, 2015 02:18 AM

February 26, 2015

Christian P. Robert - xi'an's og

Unbiased Bayes for Big Data: Path of partial posteriors [a reply from the authors]

[Here is a reply by Heiko Strathmann to my post of yesterday. Along with the slides of a talk in Oxford mentioned in the discussion.]

Thanks for putting this up, and thanks for the discussion. Christian, as already exchanged via email, here are some answers to the points you make.

First of all, we don’t claim a free lunch — and are honest about the limitations of the method (see the negative examples). Rather, we make the point that we can achieve computational savings in certain situations — essentially exploiting redundancy (what Michael called “tall” data in his note on subsampling & HMC) leading to fast convergence of posterior statistics.

Dan is of course correct in noticing that if the posterior statistic does not converge nicely (i.e. all data counts), then the truncation time is “mammoth”. It is also correct that it might be questionable to aim for an unbiased Bayesian method in the presence of such redundancies. However, these are the two extreme perspectives on the topic. The message that we want to get across is that there is a trade-off in between these extremes. In particular, the GP examples illustrate this nicely, as we are able to reduce MSE in a regime where the posterior statistics have *not* yet stabilised; see e.g. figure 6.
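For readers unfamiliar with the debiasing device, here is a generic Rhee & Glynn-style sketch in Python of a randomly truncated telescoping sum over a path of partial-posterior estimates (a schematic reconstruction, not the authors' exact scheme; phi_hat is a hypothetical callable returning the estimate computed from the t-th partial posterior, e.g. built from the first 2**t observations):

    import numpy as np

    def debiased_estimate(phi_hat, p_geom=0.5, rng=np.random.default_rng(0)):
        """Randomly truncated telescoping sum over partial-posterior estimates."""
        T = int(rng.geometric(p_geom))          # truncation level, P(T >= t) = (1 - p)^(t - 1)
        total = phi_hat(0)
        for t in range(1, T + 1):
            tail_prob = (1.0 - p_geom) ** (t - 1)
            total += (phi_hat(t) - phi_hat(t - 1)) / tail_prob
        return total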

“And the following paragraph is further confusing me as it seems to imply that convergence is not that important thanks to the de-biasing equation.”

To clarify, the paragraph refers to the additional convergence issues induced by alternative Markov transition kernels of mini-batch-based full posterior sampling methods by Welling, Bardenet, Dougal & co. For example, Firefly MC’s mixing time is increased by a factor of 1/q where q*N is the mini-batch size. Mixing of stochastic gradient Langevin gets worse over time. This is not true for our scheme as we can use standard transition kernels. It is still essential for the partial posterior Markov chains to converge (if MCMC is used). However, as this is a well studied problem, we omit the topic in our paper and refer to standard tools for diagnosis. All this is independent of the debiasing device.

About MCMC convergence.
Yesterday in Oxford, Pierre Jacob pointed out that if MCMC is used for estimating partial posterior statistics, the overall result is not unbiased. We had a nice discussion about how this bias could be addressed via a two-stage debiasing procedure: debiasing the MC estimates as described in the “Unbiased Monte Carlo” paper by Agapiou et al., and then plugging those into the path estimators — though it is not (yet) so clear how (and whether) this would work in our case.
In the current version of the paper, we do not address the bias present due to MCMC. We have a paragraph on this in section 3.2. Rather, we start from a premise that full posterior MCMC samples are a gold standard. Furthermore, the framework we study is not necessarily linked to MCMC – it could be that the posterior expectation is available in closed form, but simply costly in N. In this case, we can still unbiasedly estimate this posterior expectation – see GP regression.

“The choice of the tail rate is thus quite delicate to validate against the variance constraints (2) and (3).”

It is true that the choice is crucial in order to control the variance. However, provided that partial posterior expectations converge at a rate n^{-β}, with n the size of a mini-batch, computational complexity can be reduced to N^{1-α} (for α<β) without the variance exploding. There is a trade-off: the faster the posterior expectations converge, the more computation can be saved; β is in general unknown, but can be roughly estimated with the “direct approach”, as we describe in the appendix.

About the “direct approach”
It is true that for certain classes of models and φ functionals, the direct averaging of expectations for increasing data sizes yields good results (see the log-normal example), and we state this. However, the GP regression experiments show that the direct averaging gives a larger MSE than with debiasing applied. This is exactly the trade-off mentioned earlier.

I also wonder what people think about the comparison to stochastic variational inference (GP for Big Data), as this hasn’t appeared in discussions yet. It is the comparison to “non-unbiased” schemes that Christian and Dan asked for.


Filed under: Statistics, University life Tagged: arXiv, bias vs. variance, big data, convergence assessment, de-biasing, Firefly MC, MCMC, Monte Carlo Statistical Methods, telescoping estimator, unbiased estimation

by xi'an at February 26, 2015 11:15 PM

Quantum Diaries

Twitter, Planck and supernovae

Matthieu Roman is a young CNRS researcher in Paris and a complete newcomer to the twittersphere. He tells us how he nevertheless came to tweet “live from his lab” for a week. On the programme: free-ranging exchanges about the Planck experiment, supernovae and dark energy, with a passionate and devoted audience. Perhaps the beginning of a vocation in science outreach?

But how did I get here? It all began during my PhD in cosmology at the Laboratoire Astroparticule et Cosmologie (APC, CNRS/Paris Diderot), under the supervision of Jacques Delabrouille, between 2011 and 2014. That thesis led me to join the large scientific collaboration around the Planck satellite, and in particular its high-frequency instrument, better known by its English acronym HFI. Over those three years I worked on the cosmological study of the galaxy clusters detected by Planck via the Sunyaev-Zel’dovich effect (the interaction of cosmic microwave background photons with the electrons trapped inside galaxy clusters). In March 2013 I therefore had a front-row seat for the release of the Planck temperature data, which triggered an impressive media frenzy. The results demonstrated the robustness of the current cosmological model, made up of cold dark matter and dark energy.

Have we discovered primordial gravitational waves?
Then, a few months later, the Americans of the BICEP2 experiment at the South Pole summoned the world’s media to announce the discovery of primordial gravitational waves in their polarization data. They were simply handing us the cosmologists’ Holy Grail! Another wave of excitement, with experts of every kind invited onto TV sets and into newspapers to explain that we had detected what Einstein predicted a century earlier.

Within the Planck collaboration, however, there were many sceptics. We did not yet have the means to answer BICEP2, since our polarization data had not yet been analysed, but we sensed that a significant part of the polarized signal from Galactic dust had not been taken into account.

The latest results include a map of Galactic dust on which the direction of the Galactic magnetic field has been overlaid. I find it particularly beautiful! Credit: ESA – Planck collaboration

And now, as of a few days ago, it is official! Planck, in a joint study with BICEP2 and Keck, sets an upper limit on the amount of primordial gravitational waves, and therefore reports no detection. In short, back to square one, but with a great deal of additional information. Future space missions, and ground-based or balloon-borne experiments aiming to measure the large-scale polarization of the cosmic microwave background with high precision, whose relevance might have been called into question had BICEP2 been right, have just regained their full purpose. Because we will have to go after them, these primordial gravitational waves, with ever larger numbers of on-board detectors to increase the sensitivity, and the ability to confirm beyond doubt the cosmological origin of any detected signal!

From Galactic dust to exploding stars
In the meantime, I had the opportunity to extend my research for three more years with a postdoc at the Laboratoire de physique nucléaire et des hautes énergies (CNRS, Université Pierre et Marie Curie and Université Paris Diderot) on a subject that was completely new to me: supernovae, stars at the end of their lives whose explosions are extremely luminous. We study them with the ultimate goal of pinning down the nature of dark energy, held responsible for the accelerated expansion of the Universe. At the time of the proof of the existence of dark energy obtained with supernovae (1999), their light curves were thought to vary rather little, which is why they came to be called “standard candles”.

In this image of the galaxy M101 a supernova that exploded in 2011 is clearly visible: it is the large white dot at the upper right. It lies in one of the spiral arms, and would not shine in the same way if it were at the centre. Credit: T.A. Rector (University of Alaska Anchorage), H. Schweiker & S. Pakzad NOAO/AURA/NSF

As detection methods have been refined, we have realised that supernovae are not quite the standard candles we believed them to be, which completely revives the interest of the field. In particular, the type of galaxy in which a supernova explodes can create variations in luminosity, and thus affect the measurement of the parameter describing the nature of dark energy. That is the project I have taken on within the (small) Supernova Legacy Survey (SNLS) collaboration, hoping one day to study these objects in other scientific projects, with even more powerful instruments such as Subaru or LSST.

Tweeting live from my lab…
In fact it was a friend, Agnès, who introduced me to Twitter and encouraged me to describe my work day by day for a week via the @EnDirectDuLabo account. It was a new world for me, as I had not been at all active on what is called the twittersphere, which is unfortunately the case for many researchers in France. It was a very enriching experience, as it seemed to attract the interest of many Twitter users and helped bring the number of followers to more than 2000. It allowed me, for example, to explain the basics of electromagnetism needed in astronomy, more technical details about the performance of the experiment I work on, and even my day-to-day life in the laboratory.

Sharing my daily work with the general public was great fun, but also very time-consuming! I have always been convinced of the importance of science outreach, without ever daring to take the plunge. Perhaps it was time…

Matthieu Roman is currently a postdoc at the Laboratoire de physique nucléaire et de hautes énergies (CNRS, Université Pierre et Marie Curie and Université Paris Diderot).

by CNRS-IN2P3 at February 26, 2015 11:01 PM

The n-Category Cafe

Introduction to Synthetic Mathematics (part 1)

John is writing about “concepts of sameness” for Elaine Landry’s book Category Theory for the Working Philosopher, and has been posting some of his thoughts and drafts. I’m writing for the same book about homotopy type theory / univalent foundations; but since HoTT/UF will also make a guest appearance in John’s and David Corfield’s chapters, and one aspect of it (univalence) is central to Steve Awodey’s chapter, I had to decide what aspect of it to emphasize in my chapter.

My current plan is to focus on HoTT/UF as a synthetic theory of ∞-groupoids. But in order to say what that even means, I felt that I needed to start with a brief introduction about the phrase “synthetic theory”, which may not be familiar. Right now, my current draft of that “introduction” is more than half the allotted length of my chapter; so clearly it’ll need to be trimmed! But I thought I would go ahead and post some parts of it in its current form; so here goes.

In general, mathematical theories can be classified as analytic or synthetic. An analytic theory is one that analyzes, or breaks down, its objects of study, revealing them as put together out of simpler things, just as complex molecules are put together out of protons, neutrons, and electrons. For example, analytic geometry analyzes the plane geometry of points, lines, etc. in terms of real numbers: points are ordered pairs of real numbers, lines are sets of points, etc. Mathematically, the basic objects of an analytic theory are defined in terms of those of some other theory.

By contrast, a synthetic theory is one that synthesizes, or puts together, a conception of its basic objects based on their expected relationships and behavior. For example, synthetic geometry is more like the geometry of Euclid: points and lines are essentially undefined terms, given meaning by the axioms that specify what we can do with them (e.g. two points determine a unique line). (Although Euclid himself attempted to define “point” and “line”, modern mathematicians generally consider this a mistake, and regard Euclid’s “definitions” (like “a point is that which has no part”) as fairly meaningless.) Mathematically, a synthetic theory is a formal system governed by rules or axioms. Synthetic mathematics can be regarded as analogous to foundational physics, where a concept like the electromagnetic field is not “put together” out of anything simpler: it just is, and behaves in a certain way.

The distinction between analytic and synthetic dates back at least to Hilbert, who used the words “genetic” and “axiomatic” respectively. At one level, we can say that modern mathematics is characterized by a rich interplay between analytic and synthetic — although most mathematicians would speak instead of definitions and examples. For instance, a modern geometer might define “a geometry” to satisfy Euclid’s axioms, and then work synthetically with those axioms; but she would also construct examples of such “geometries” analytically, such as with ordered pairs of real numbers. This approach was pioneered by Hilbert himself, who emphasized in particular that constructing an analytic example (or model) proves the consistency of the synthetic theory.

However, at a deeper level, almost all of modern mathematics is analytic, because it is all analyzed into set theory. Our modern geometer would not actually state her axioms the way that Euclid did; she would instead define a geometry to be a set P of points together with a set L of lines and a subset of P×L representing the “incidence” relation, etc. From this perspective, the only truly undefined term in mathematics is “set”, and the only truly synthetic theory is Zermelo–Fraenkel set theory (ZFC).

This use of set theory as the common foundation for mathematics is, of course, of 20th century vintage, and overall it has been a tremendous step forwards. Practically, it provides a common language and a powerful basic toolset for all mathematicians. Foundationally, it ensures that all of mathematics is consistent relative to set theory. (Hilbert’s dream of an absolute consistency proof is generally considered to have been demolished by Gödel’s incompleteness theorem.) And philosophically, it supplies a consistent ontology for mathematics, and a context in which to ask metamathematical questions.

However, ZFC is not the only theory that can be used in this way. While not every synthetic theory is rich enough to allow all of mathematics to be encoded in it, set theory is by no means unique in possessing such richness. One possible variation is to use a different sort of set theory like ETCS, in which the elements of a set are “featureless points” that are merely distinguished from each other, rather than labeled individually by the elaborate hierarchical membership structures of ZFC. Either sort of “set” suffices just as well for foundational purposes, and moreover each can be interpreted into the other.

However, we are now concerned with more radical possibilities. A paradigmatic example is topology. In modern “analytic topology”, a “space” is defined to be a set of points equipped with a collection of subsets called open, which describe how the points vary continuously into each other. (Most analytic topologists, being unaware of synthetic topology, would call their subject simply “topology.”) By contrast, in synthetic topology we postulate instead an axiomatic theory, on the same ontological level as ZFC, whose basic objects are spaces rather than sets.

Of course, by saying that the basic objects “are” spaces we do not mean that they are sets equipped with open subsets. Instead we mean that “space” is an undefined word, and the rules of the theory cause these “spaces” to behave more or less like we expect spaces to behave. In particular, synthetic spaces have open subsets (or, more accurately, open subspaces), but they are not defined by specifying a set together with a collection of open subsets.

It turns out that synthetic topology, like synthetic set theory (ZFC), is rich enough to encode all of mathematics. There is one trivial sense in which this is true: among all analytic spaces we find the subclass of indiscrete ones, in which the only open subsets are the empty set and the whole space. A notion of “indiscrete space” can also be defined in synthetic topology, and the collection of such spaces forms a universe of ETCS-like sets (we’ll come back to these in later installments). Thus we could use them to encode mathematics, entirely ignoring the rest of the synthetic theory of spaces. (The same could be said about the discrete spaces, in which every subset is open; but these are harder (though not impossible) to define and work with synthetically. The relation between the discrete and indiscrete spaces, and how they sit inside the synthetic theory of spaces, is central to the synthetic theory of cohesion, which I believe David is going to mention in his chapter about the philosophy of geometry.)

However, a less boring approach is to construct the objects of mathematics directly as spaces. How does this work? It turns out that the basic constructions on sets that we use to build (say) the set of real numbers have close analogues that act on spaces. Thus, in synthetic topology we can use these constructions to build the space of real numbers directly. If our system of synthetic topology is set up well, then the resulting space will behave like the analytic space of real numbers (the one that is defined by first constructing the mere set of real numbers and then equipping it with the unions of open intervals as its topology).

The next question is, why would we want to do mathematics this way? There are a lot of reasons, but right now I believe they can be classified into three sorts: modularity, philosophy, and pragmatism. (If you can think of other reasons that I’m forgetting, please mention them in the comments!)

By “modularity” I mean the same thing as does a programmer: even if we believe that spaces are ultimately built analytically out of sets, it is often useful to isolate their fundamental properties and work with those abstractly. One advantage of this is generality. For instance, any theorem proven in Euclid’s “neutral geometry” (i.e. without using the parallel postulate) is true not only in the model of ordered pairs of real numbers, but also in the various non-Euclidean geometries. Similarly, a theorem proven in synthetic topology may be true not only about ordinary topological spaces, but also about other variant theories such as topological sheaves, smooth spaces, etc. As always in mathematics, if we state only the assumptions we need, our theorems become more general.

Even if we only care about one model of our synthetic theory, modularity can still make our lives easier, because a synthetic theory can formally encapsulate common lemmas or styles of argument that in an analytic theory we would have to be constantly proving by hand. For example, just as every object in synthetic topology is “topological”, every “function” between them automatically preserves this topology (is “continuous”). Thus, in synthetic topology every function ℝ → ℝ is automatically continuous; all proofs of continuity have been “packaged up” into the single proof that analytic topology is a model of synthetic topology. (We can still speak about discontinuous functions too, if we want to; we just have to re-topologize ℝ indiscretely first. Thus, synthetic topology reverses the situation of analytic topology: discontinuous functions are harder to talk about than continuous ones.)

By contrast to the argument from modularity, an argument from philosophy is a claim that the basic objects of mathematics really are, or really should be, those of some particular synthetic theory. Nowadays it is hard to find mathematicians who hold such opinions (except with respect to set theory), but historically we can find them taking part in the great foundational debates of the early 20th century. It is admittedly dangerous to make any precise claims in modern mathematical language about the beliefs of mathematicians 100 years ago, but I think it is justified to say that in hindsight, one of the points of contention in the great foundational debates was which synthetic theory should be used as the foundation for mathematics, or in other words what kind of thing the basic objects of mathematics should be. Of course, this was not visible to the participants, among other reasons because many of them used the same words (such as “set”) for the basic objects of their theories. (Another reason is that among the points at issue was the very idea that a foundation of mathematics should be built on precisely defined rules or axioms, which today most mathematicians take for granted.) But from a modern perspective, we can see that (for instance) Brouwer’s intuitionism is actually a form of synthetic topology, while Markov’s constructive recursive mathematics is a form of “synthetic computability theory”.

In these cases, the motivation for choosing such synthetic theories was clearly largely philosophical. The Russian constructivists designed their theory the way they did because they believed that everything should be computable. Similarly, Brouwer’s intuitionism can be said to be motivated by a philosophical belief that everything in mathematics should be continuous.

(I wish I could write more about the latter, because it’s really interesting. The main thing that makes Brouwerian intuitionism non-classical is choice sequences: infinite sequences in which each element can be “freely chosen” by a “creating subject” rather than being supplied by a rule. The concrete conclusion Brouwer drew from this is that any operation on such sequences must be calculable, at least in stages, using only finite initial segments, since we can’t ask the creating subject to make an infinite number of choices all at once. But this means exactly that any such operation must be continuous with respect to a suitable topology on the space of sequences. It also connects nicely with the idea of open sets as “observations” or “verifiable statements” that was mentioned in another thread. However, from the perspective of my chapter for the book, the purpose of this introduction is to lay the groundwork for discussing HoTT/UF as a synthetic theory of ∞-groupoids, and Brouwerian intuitionism would be a substantial digression.)

Finally, there are arguments from pragmatism. Whereas the modularist believes that the basic objects of mathematics are actually sets, and the philosophist believes that they are actually spaces (or whatever), the pragmatist says that they could be anything: there’s no need to commit to a single choice. Why do we do mathematics, anyway? One reason is because we find it interesting or beautiful. But all synthetic theories may be equally interesting and beautiful (at least to someone), so we may as well study them as long as we enjoy it.

Another reason we study mathematics is because it has some application outside of itself, e.g. to theories of the physical world. Now it may happen that all the mathematical objects that arise in some application happen to be (say) spaces. (This is arguably true of fundamental physics. Similarly, in applications to computer science, all objects that arise may happen to be computable.) In this case, why not just base our application on a synthetic theory that is good enough for the purpose, thereby gaining many of the advantages of modularity, but without caring about how or whether our theory can be modeled in set theory?

It is interesting to consider applying this perspective to other application domains. For instance, we also speak of sets outside of a purely mathematical framework, to describe collections of physical objects and mental acts of categorization; could we use spaces in the same way? Might collections of objects and thoughts automatically come with a topological structure by virtue of how they are constructed, like the real numbers do? I think this also starts to seem quite natural when we imagine topology in terms of “observations” or “verifiable statements”. Again, saying any more about that in my chapter would be a substantial digression; but I’d be interested to hear any thoughts about it in the comments here!

by shulman (viritrilbia@gmail.com) at February 26, 2015 08:26 PM

Clifford V. Johnson - Asymptotia

Ceiba Speciosa
Saw all the fluffy stuff on the ground. Took me a while to "cotton on" and look up: ceiba speciosa, the "silk floss" tree. (Click for larger view.) -cvj Click to continue reading this post

by Clifford at February 26, 2015 05:29 PM

Emily Lakdawalla - The Planetary Society Blog

Russia Moves to Support ISS through 2024, Create New Space Station
The future of the International Space Station is a little clearer this week, following a statement from Russia supporting an extension of the orbiting complex through 2024.

February 26, 2015 05:03 PM

astrobites - astro-ph reader's digest

Black Holes Grow First in Mergers

Title: Following Black Hole Scaling Relations Through Gas-Rich Mergers
Authors: Anne M. Medling, Vivian U, Claire E. Max, David B. Sanders, Lee Armus, Bradford Holden, Etsuko Mieda, Shelley A. Wright, James E. Larkin
First Author’s Institution: Research School of Astronomy & Astrophysics, Mount Stromlo Observatory, Australian National University, Cotter Road, Weston, ACT 2611, Australia
Status: Accepted to ApJ

It is currently accepted theory that every galaxy has a supermassive black hole (SMBH) at its center. The masses of these black holes have been observed to correlate strongly with the galaxy’s bulge mass, total stellar mass and velocity dispersion.
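The best known of these is the M–σ relation between black hole mass and bulge stellar velocity dispersion. In the McConnell & Ma (2013) calibration used for comparison later in this post, it takes approximately the form (coefficients quoted as indicative values):

\log_{10}\left(\frac{M_{\rm BH}}{M_{\odot}}\right) \approx 8.3 + 5.6\,\log_{10}\left(\frac{\sigma}{200\ \mathrm{km\,s^{-1}}}\right)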

Figure 1: NGC 2623 – one of the merging galaxies observed in this study. Image credit: NASA.

The mechanism driving this has long been thought to be mergers (although there are recent findings showing that bulgeless galaxies which have not undergone a merger also host high-mass SMBHs), which funnel gas into the center of a galaxy, where it is either consumed in a burst of star formation or accreted by the black hole. The black hole itself can regulate both its own growth and the galaxy’s if it becomes active and throws off huge winds, which expel the gas needed for star formation and black hole growth on short timescales.

To understand the interplay between these effects, the authors of this paper study 9 nearby ultraluminous infrared galaxies at a range of stages through a merger and measure the mass of the black hole at the center of each. They calculated these masses from spectra taken with the Keck telescopes on Mauna Kea, Hawaii, by measuring the stellar kinematics (the movement of the stars around the black hole) as shown by the Doppler broadening of the emission lines in the spectra. Doppler broadening occurs when gas is emitting light at a very specific wavelength but is also moving either towards or away from us (or both if it is rotating around a central object). Some of this emission is Doppler shifted to larger or smaller wavelengths, effectively smearing (or broadening) a narrow emission line into a broad one.
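Quantitatively, the non-relativistic Doppler relation ties the width of a line to the spread of line-of-sight velocities in the emitting material (a standard textbook expression, not specific to this paper):

\frac{\Delta\lambda}{\lambda_{0}} \approx \frac{v_{\rm los}}{c}

so, for example, a velocity spread of ~200 km/s smears a line at 500 nm by roughly 0.3 nm.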

Figure 2: The mass of the black hole against the stellar velocity dispersion, sigma, of the 9 galaxies observed in this study. Also shown are galaxies from McConnell & Ma (2013) and the best fit line to that data as a comparison to typical galaxies. Originally Figure 2 in Medling et al. (2015).

From this estimate of the rotational velocities of the stars around the centre of the galaxy, the mass of the black hole and the velocity dispersion can be calculated. These measurements for the 9 galaxies in this study are plotted in Figure 2 (originally Fig. 2 in Medling et al. 2015) and are shown to be either within the scatter of or above the typical relationship between black hole mass and velocity dispersion.

The authors run a Kolmogorov-Smirnov statistical test on the data to confirm that these merging galaxies are drawn from a different population than those that lie on the relation, with a p-value of 0.003, i.e. the likelihood that these merging galaxies are drawn from the same population as the typical galaxies is only 0.3%.
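For readers who want to try the same kind of test, here is a minimal Python sketch using scipy’s two-sample Kolmogorov-Smirnov test. The numbers below are randomly generated placeholders standing in for offsets from the best-fit relation, not the values measured in the paper:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical offsets (in dex) of measured black hole masses from the
# best-fit M-sigma relation: a comparison sample of typical galaxies and
# a small sample of mergers sitting systematically above the relation.
typical = rng.normal(loc=0.0, scale=0.4, size=70)
mergers = rng.normal(loc=0.8, scale=0.4, size=9)

statistic, p_value = ks_2samp(mergers, typical)
print(f"KS statistic = {statistic:.2f}, p-value = {p_value:.4f}")
# A small p-value means the two samples are unlikely to be drawn from
# the same parent population.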

The black holes therefore have a larger mass than they should for the stellar velocity dispersion of the galaxy. This suggests that the black hole grows first in a merger, before the bulges of the two galaxies have fully merged and settled into a gravitationally stable (virialized) structure. Although measuring the velocity dispersion in a bulge that is really two bulges still merging is difficult and can produce errors in the measurement, simulations have shown that the velocity dispersion will only be underestimated by about 50%, an amount which is not significant enough to change these results.

The authors also consider whether there has been enough time since the merger began for these black holes to grow so massive. Assuming that both galaxies used to lie on typical scaling relations for the black hole mass, and that the black holes accreted at the typical (Eddington) rate, they find that it should have taken somewhere in the range of a few tens to a hundred million years – a time much less than the simulated duration of a merger.
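As a rough cross-check of that timescale (standard Eddington-limited growth arithmetic, not a calculation taken from the paper): a black hole accreting at the Eddington rate with radiative efficiency ε ≈ 0.1 grows exponentially on the Salpeter timescale,

M(t) = M_{0}\,e^{t/t_{\rm Sal}}, \qquad t_{\rm Sal} = \frac{\epsilon\,\sigma_{T}\,c}{4\pi G m_{p}(1-\epsilon)} \approx 5\times10^{7}\ \mathrm{yr}

so growing the black hole mass by a factor of a few takes of order 50-100 million years, consistent with the range quoted above.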

A second consideration is how long it will take for these galaxies to virialize and for their velocity dispersion to increase to bring each one back onto the typical scaling relation with the black hole mass. They consider how many more stars are needed to form in order for the velocity dispersion in the bulge to reach the required value. Taking measured star formation rates of these galaxies gives a range of timescales of about 1-2 billion years which is consistent with simulated merger timescales. It is therefore plausible that these galaxies can return to the black hole mass-velocity dispersion relation by the time they have finished merging.

The authors conclude therefore that black hole fueling and growth begins in the early stages of a merger and can outpace the formation of the bulge and any bursts of star formation. To confirm this result, measurements of a much larger sample of currently merging galaxies need to be taken – the question is, where do we look?

by Becky Smethurst at February 26, 2015 02:11 PM

Symmetrybreaking - Fermilab/SLAC

From the Standard Model to space

A group of scientists who started at particle physics experiments move their careers to the final frontier.

As a member of the ATLAS experiment at the Large Hadron Collider, Ryan Rios spent 2007 to 2012 surrounded by fellow physicists.

Now, as a senior research engineer for Lockheed Martin at NASA’s Johnson Space Center, he still sees his fair share.

He’s not the only scientist to have made the leap from experimenting on Earth to keeping astronauts safe in space. Rios works on a small team that includes colleagues with backgrounds in physics, biology, radiation health, engineering, information technology and statistics.

“I didn’t really leave particle physics, I just kind of changed venues,” Rios says. “A lot of the skillsets I developed on ATLAS I was able to transfer over pretty easily.”

The group at Johnson Space Center supports current and planned crewed space missions by designing, testing and monitoring particle detectors that measure radiation levels in space.

Massive solar flares and other solar events that accelerate particles, other sources of cosmic radiation, and weak spots in Earth’s magnetic field can all pose radiation threats to astronauts. Members of the radiation group provide advisories on such sources. This makes it possible to warn astronauts, who can then seek shelter in heavier-shielded areas of the spacecraft.

Johnson Space Center has a focus on training and supporting astronauts and planning for future crewed missions. Rios has done work for the International Space Station and the robotic Orion mission that launched in December as a test for future crewed missions. His group recently developed a new radiation detector for the space station crew.

Rios worked at CERN for four years as a graduate student and postdoc at Southern Methodist University in Dallas. At CERN he was introduced to a physics analysis platform called ROOT, which is also used at NASA. Some of the particle detectors he works with now were developed by a CERN-based collaboration.

Fellow Johnson Space Center worker Kerry Lee wound up as group lead for radiation operations after using ROOT during his three years as a summer student on the Collider Detector at Fermilab (CDF) experiment.

“As a kid, I just knew I wanted to work at NASA,” says Lee, who grew up in rural Wyoming. He pursued an education in engineering physics and “enjoyed the physics part more than the engineering.” He received a master’s degree in particle physics at Texas Tech University.

A professor there helped him attain his current position. “He asked me what I really wanted to do in life,” Lee says, “and I told him, ‘NASA.’”

He worked on data analysis for a detector aboard the robotic Mars Odyssey mission, which flew in 2001. “The tools I learned at Fermilab for data analysis were perfectly applicable for the analysis on this detector,” he says.

One of his most enjoyable roles was training astronauts to use radiation-monitoring equipment in space.

“Every one of the crew members would come through [for training],” he says. “Meeting the astronauts is very exciting—it is always a diverse and interesting group of people. I really enjoy that part of the job.”

Physics was also the starting point for Martin Leitgab, a senior research engineer who joined the Johnson Space Center group in 2013. As a PhD student, Leitgab worked at the PHENIX detector at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider. He also took part in the Belle Collaboration at the KEK B-factory in Japan.

A native of Austria who had attended the University of Illinois at Urbana-Champaign, Leitgab says his path to NASA was fairly roundabout.

“When I finished my PhD work I was at a crossroads—I did not have a master plan,” he says.

He says he became interested in aerospace and wrote some papers related to solar power in space. His wife is from Texas, so Johnson Space Center seemed to be a good fit.

“My job is to make sure that the detector built for the International Space Station works as it should, and to get data out of it,” he says. “It’s very similar to what I did before… The hardware is very different, but the experimental approach in testing and debugging detectors, debugging the software that reads out the data from the detectors and determining the system efficiency and calibration—that’s pretty much a one-to-one comparison with high-energy physics detectors work.”

Leitgab, Lee and Rios all say the small teams and tight, product-driven deadlines at NASA represent a departure from the typically massive collaborations for major particle physics experiments. But other things are very familiar: For example, NASA’s extensive collection of acronyms.

Rios says he relishes his new role but is glad to have worked on one of the experiments that in 2012 discovered the Higgs boson. “At the end of the day, I had the opportunity to work on a very huge discovery—probably the biggest one of the 21st century we’ll see,” he says.

 

Like what you see? Sign up for a free subscription to symmetry!

by Glenn Roberts Jr. at February 26, 2015 02:00 PM

Peter Coles - In the Dark

Cosmology at NAM 2015

Just a quick post to plug this year’s forthcoming Royal Astronomical Society National Astronomy Meeting, which will be taking place at the splendid Venue Cymru conference centre, Llandudno, North Wales, from Sunday 5th July to Thursday 9th July 2015.

To whet your appetite, here are some pictures of lovely Llandudno  I took at the last National Astronomy Meeting there, back in 2011.

The draft science programme has  been posted and you can also find a full list of parallel sessions here.  The NAM 2015 website is now accepting proposals for contributed talks and posters relating to this and other sessions.

 

If you’re on Twitter you can keep up-to-date with developments by following their Twitter feed:

I’m actually on the Scientific Organizing Committee for NAM 2015 and as such I’ll be organizing a part of this meeting, namely a couple of sessions on Cosmology under the title Cosmology Beyond the Standard Model, with the following description.

Recent observations, particularly those from the Planck satellite, have provided strong empirical foundations for a standard cosmological model that is based on Einstein’s general theory of relativity and which describes a universe which is homogeneous and isotropic on large scales and which is dominated by dark energy and matter components. This session will explore theoretical and observational challenges to this standard picture, including modified gravity theories, models with large-scale inhomogeneity and/or anisotropy, and alternative forms of matter-energy. The aim will be to both take stock of the evidence for, and stimulate further investigation of, physics beyond the standard model.

It’s obviously quite a broad remit so I hope that there will be plenty of contributed talks and posters. The NAM 2015 website is now accepting proposals for contributed talks and posters relating to this and other sessions.

NAM is a particularly good opportunity for younger researchers – PhD students and postdocs – to present their work to a big audience so I particularly encourage such persons to submit abstracts. Would more senior readers please pass this message on to anyone they think might want to give a talk?

If you have any questions please feel free to use the comments box (or contact me privately).


by telescoper at February 26, 2015 12:12 PM

Emily Lakdawalla - The Planetary Society Blog

At last, Ceres is a geological world
I've been resisting all urges to speculate on what kinds of geological features are present on Ceres, until now. Finally, Dawn has gotten close enough that the pictures it has returned show geology: bright spots, flat-floored craters, and enigmatic grooves.

February 26, 2015 03:42 AM

February 25, 2015

Christian P. Robert - xi'an's og

Unbiased Bayes for Big Data: Path of partial posteriors

“Data complexity is sub-linear in N, no bias is introduced, variance is finite.”

Heiko Strathmann, Dino Sejdinovic and Mark Girolami arXived a paper a few weeks ago on the use of a telescoping estimator to achieve an unbiased estimator of a Bayes estimator relying on the entire dataset, while using only a small proportion of the dataset. The idea is that a sequence of estimators φt converging to the quantity of interest can be turned into an unbiased estimator by a random stopping rule T:

\sum_{t=1}^T \dfrac{\varphi_t-\varphi_{t-1}}{\mathbb{P}(T\ge t)}

is indeed unbiased. In a “Big Data” framework, the components φt are MCMC versions of posterior expectations based on a proportion αt of the data. And the stopping rule cannot go beyond αt=1, i.e. the full dataset. The authors further propose to replicate this unbiased estimator R times on R parallel processors. They further claim a reduction in the computing cost of

\mathcal{O}(N^{1-\alpha})\qquad\text{if}\qquad\mathbb{P}(T=t)\approx e^{-\alpha t}

which means that a sub-linear cost can be achieved. However, the gain in computing time means higher variance than for the full MCMC solution:

“It is clear that running an MCMC chain on the full posterior, for any statistic, produces more accurate estimates than the debiasing approach, which by construction has an additional intrinsic source of variance. This means that if it is possible to produce even only a single MCMC sample (…), the resulting posterior expectation can be estimated with less expected error. It is therefore not instructive to compare approaches in that region. “
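To make the telescoping construction above concrete, here is a minimal Python sketch of a single debiased replicate, assuming a geometric stopping rule (so that P(T≥t) decays geometrically) and a black-box partial estimator; it illustrates the mechanics only and is not the authors’ implementation:

import numpy as np

def debiased_estimate(partial_estimator, p=0.4, t_max=20, rng=None):
    """One replicate of the telescoping (debiasing) estimator.

    partial_estimator(t) returns an estimate of the posterior statistic
    based on the data used up to stage t (e.g. an MCMC average over a
    growing subset).  T is geometric with success probability p,
    truncated at t_max, so P(T >= t) = (1 - p)**(t - 1) for t <= t_max.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = min(rng.geometric(p), t_max)
    estimate, previous = 0.0, 0.0
    for t in range(1, T + 1):
        current = partial_estimator(t)
        estimate += (current - previous) / (1.0 - p) ** (t - 1)
        previous = current
    return estimate

# Toy usage: phi_t is the mean of the first 2**t points of a data set,
# so the estimator targets the full-data mean (here 2**20 points).
data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=2**20)
phi = lambda t: data[: 2**t].mean()
replicates = [debiased_estimate(phi) for _ in range(200)]
print(np.mean(replicates), data.mean())

Averaging the replicates recovers the full-data statistic without bias, at the price of the extra variance discussed in the quote above.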

I first got a “free lunch” impression when reading the paper, namely that it sounded as if using a random stopping rule was enough to deliver unbiasedness and overcome large-data jams. This is not the message of the paper, but I remain both intrigued by the possibilities the unbiasedness offers and bemused by the claims therein, for several reasons:

  • the above estimator requires computing T MCMC (partial) estimators φt in parallel. All of those estimators have to be associated with Markov chains in a stationary regime, and they are all associated with independent chains. While addressing the convergence of a single chain, the paper does not truly cover the simultaneous convergence assessment on a group of T parallel MCMC sequences. And the paragraph below is further confusing me as it seems to imply that convergence is not that important thanks to the de-biasing equation. In fact, further discussion with the authors (!) led me to understand this relates to the existing alternatives for handling large data, like firefly Monte Carlo: convergence to the stationary regime remains essential (and somewhat problematic) for all the partial estimators.

“If a Markov chain is, in line with above considerations, used for computing partial posterior expectations 𝔼πt[ϕ(θ)], it need not be induced by any form of approximation, noise injection, or state-space augmentation of the transition kernel. As a result, the notorious difficulties of ensuring acceptable mixing and problems of stickiness are conveniently side-stepped –which is in sharp contrast to all existing approaches.”

  • the impact of the distribution of the stopping time T on the performance of the estimator is uncanny! Its tail should simply decrease more slowly than the squared difference between the partial estimators. This requirement is both hard to achieve [given that the variances of the—partial—MCMC estimators are hard to assess] and comes with negative consequences for the overall computing time. The choice of the tail rate is thus quite delicate to validate against the variance constraints (2) and (3).
  • the stopping time T must have a positive probability of taking the largest possible value, corresponding to using the whole sample, in which case the approach does worse than directly running the original MCMC algorithm. This shows (as stated in the first quote above) that the approach cannot uniformly improve upon the standard MCMC.
  • the comparison in the (log-)normal toy example is difficult to calibrate. (And why differentiate a log-normal from a normal sample? And why are the tails extremely wide for 2²⁶ datapoints?) The number of likelihood evaluations is about 5×10⁻⁴ times that of the full version, which means a humongous gain in computing time, but how does this partial exploration of the information contained in the data impact the final variability of the estimate? Judging from Figure 4 (top) after 300 replications, one still observes a range of 0.1 at the end of the 300 iterations. If I could produce the Bayes estimate for the whole data, the variability would be of order 2×10⁻⁴… If the goal is to find an estimate of the parameter of interest with a predetermined error, this is fine. But then the comparison with the genuine Bayesian answer is not very meaningful.
  • when averaging the unbiased estimators R times, it is unclear whether or not the same subset of  nt datapoints is used to compute the partial estimator φt. On the one hand, using different subsets should improve the connection with the genuine Bayes estimator. On the other hand, this may induce higher storage and computing costs.
  • cases when the likelihood does not factorise are not always favourable to the de-biasing approach (Section 5.1) in that the availability of the joint density of a subset of the whole data may prove an issue. Take for instance a time series or a graphical network. Computing the joint density of a subset requires stringent conditions on the way the subset is selected.

Overall, I think I understand the purpose of the paper better now I have read it a few times. The comparison is only relevant against other limited information solutions. Not against the full Monty MCMC. However, in the formal examples processed in the paper, a more direct approach would be to compute (in parallel) MCMC estimates for increasing portions of the data, add a dose of bootstrap to reduce bias, and check for stabilisation of the averages.


Filed under: Statistics, University life Tagged: arXiv, bag of little bootstraps, bias vs. variance, big data, convergence assessment, de-biasing, MCMC, Monte Carlo Statistical Methods, telescoping estimator, unbiased estimation

by xi'an at February 25, 2015 11:15 PM

astrobites - astro-ph reader's digest

Open Educational Resources for Astronomy

Last Spring I taught an introductory astronomy course for non-science majors. It was difficult and fun. One of the most difficult parts was inventing activities and homework to teach specific concepts. Sometimes my activities fell flat. Thankfully, I had access to a number of astronomy education resources: the textbook, a workbook full of in-class tutorials, and the professors in my department who had previously taught introductory astronomy.

Open educational resources are meant to serve in this capacity, especially for teachers and students without the money to buy expensive texts. Like open source software, open educational resources are publicly-licensed and distributed in a format that encourages improvement and evolution. For examples, check out the resources hosted by Wikiversity. This is a sister project of Wikipedia’s, providing learning materials and a wiki forum to edit and remix those materials. It’s great for teachers in all disciplines! And it hosts a lot of astronomy material. But like Wikipedia, it’s deep and wide. It’s easy to get lost. (“Wait, why am I reading about R2D2?”) And like articles on Wikipedia, the learning materials on Wikiversity vary in quality.

Today’s paper introduces a project called astroEDU. They’re aiming to make astronomy learning resources, like those you can find on Wikiversity, easier to find and of higher quality. To do this, the authors introduce a peer-review structure for education materials modeled on the one widely accepted for scholarly research. Educators may submit a learning activity to the astroEDU website. The project is evaluated by two blind reviewers, an educator and an astronomer. It may go through revision, or it may be scrapped. If it’s not scrapped, it’s published on the website and on sister sites like Open Educational Resources Commons. The result is a simple, excellent lesson plan describing the learning goals, any objects you need to complete the activity, step-by-step instructions, and ideas to find out what your students learned.

Screenshot from "Star in a Box", an educational activity freely available at astroEDU.

To the right is a screenshot from an example activity, “Star in a Box“, which won an award last year from the Community for Science Education in Europe. It uses a web-based simulation tool developed by the Las Cumbres Observatory. Students are directed to vary the initial mass of a model star and explore its evolution in the Hertzsprung-Russell plane. This is the kind of thing I could have used to supplement the textbook in my introductory astronomy course. And so could a high school teacher struggling along without any textbooks.

AstroEDU is targeted at primary and secondary school teachers. It was launched only a year ago, supported by the International Astronomical Union’s Office of Astronomy for Development. It may grow into a powerful tool for open educational resources, something like a peer-reviewed Wikiversity. If you are a professional astronomer or an educator, it looks like you can help by signing up as a volunteer reviewer.

by Brett Deaton at February 25, 2015 08:35 PM

Jester - Resonaances

Persistent trouble with bees
No, I still have nothing to say about colony collapse disorder... this blog will stick to physics for at least 2 more years. This is an update on the anomalies in B decays reported by the LHCbee experiment. The two most important ones are:

  1. The  3.7 sigma deviation from standard model predictions in the differential distribution of the B➝K*μ+μ- decay products.
  2.  The 2.6 sigma violation of lepton flavor universality in B+→K+l+l- decays. 

The first anomaly is statistically more significant. However, the theoretical error of the standard model prediction is not trivial to estimate and the significance of the anomaly is subject to fierce discussions. Estimates in the literature range from 4.5 sigma to 1 sigma, depending on what is assumed about QCD uncertainties. For this reason, the second anomaly made this story much more intriguing. In that case, LHCb measures the ratio of the decay with muons and with electrons: B+→K+μ+μ- vs B+→K+e+e-. This observable is theoretically clean, as large QCD uncertainties cancel in the ratio. Of course, 2.6 sigma significance is not too impressive; LHCb once had a bigger anomaly (remember CP violation in D meson decays?) that is now long gone. But it's fair to say that the two anomalies together are marginally interesting.
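In the standard notation, the lepton-flavor-universality observable in question is the ratio of branching fractions

R_K = \frac{\mathcal{B}(B^+\to K^+\mu^+\mu^-)}{\mathcal{B}(B^+\to K^+ e^+ e^-)}

which the standard model predicts to be very close to 1, so any clear departure from unity would signal new physics.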

One nice thing is that both anomalies can be explained at the same time by a simple modification of the standard model. Namely, one needs to add the 4-fermion coupling between a b-quark, an s-quark, and two muons:
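(The operator appears as a figure in the original post; one commonly considered form of such a b-s-μ-μ contact interaction, given here purely as an illustrative reconstruction since the exact chirality structure may differ, is)

\frac{1}{\Lambda^2}\,(\bar{b}\,\gamma_{\mu}P_{L}\,s)(\bar{\mu}\,\gamma^{\mu}\mu) + \mathrm{h.c.}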

with Λ of order 30 TeV. Just this one extra coupling greatly improves the fit to the data, though other similar couplings could be simultaneously present. The 4-fermion operators can be an effective description of new heavy particles coupled to quarks and leptons. For example, a leptoquark (a scalar particle with non-zero color charge and lepton number) or a Z' (a neutral U(1) vector boson) with mass in the few-TeV range has been proposed. These are of course simple models created ad hoc. Attempts to put these particles in a bigger picture of physics beyond the standard model have not been very convincing so far, which may be one reason why the anomalies are viewed a bit skeptically. The flip side is that, if the anomalies turn out to be real, this will point to unexpected symmetry structures around the corner.

Another nice element of this story is that it will be possible to acquire additional relevant information in the near future. The first anomaly is based on just 1 fb⁻¹ of LHCb data, and it will be updated to the full 3 fb⁻¹ some time this year. Furthermore, there are literally dozens of other B decays where the 4-fermion operators responsible for the anomalies could also show up. In fact, there may already be some hints that this is happening. In the table borrowed from this paper we can see that there are several other 2-sigmish anomalies in B decays that may possibly have the same origin. More data and measurements in more decay channels should clarify the picture. In particular, violation of lepton flavor universality may come together with lepton flavor violation. Observation of decays forbidden in the standard model, such as B→Keμ or B→Kμτ, would be a spectacular and unequivocal signal of new physics.

by Jester (noreply@blogger.com) at February 25, 2015 08:32 PM

Emily Lakdawalla - The Planetary Society Blog

Dawn Journal: Ceres' Deepening Mysteries
Even as we discover more about Ceres, some mysteries only deepen. Mission Director Marc Rayman gives an update on Dawn as it moves ever closer to its next target.

February 25, 2015 08:00 PM

arXiv blog

Data Mining Indian Recipes Reveals New Food Pairing Phenomenon

By studying the network of links between Indian recipes, computer scientists have discovered that the presence of certain spices makes a meal much less likely to contain ingredients with flavors in common.


The food pairing hypothesis is the idea that ingredients that share the same flavors ought to combine well in recipes. For example, the English chef Heston Blumenthal discovered that white chocolate and caviar share many flavors and turn out to be a good combination. Other unusual combinations that seem to confirm the hypothesis include strawberries and peas, asparagus and butter, and chocolate and blue cheese.

February 25, 2015 06:05 PM

The n-Category Cafe

Concepts of Sameness (Part 3)

Now I’d like to switch to pondering different approaches to equality. (Eventually I’ll have put all these pieces together into a coherent essay, but not yet.)

We tend to think of x = x as a fundamental property of equality, perhaps the most fundamental of all. But what is it actually used for? I don’t really know. I sometimes joke that equations of the form x = x are the only really true ones — since any other equation says that different things are equal — but they’re also completely useless.

But maybe I’m wrong. Maybe equations of the form x = x are useful in some way. I can imagine one coming in handy at the end of a proof by contradiction where you show some assumptions imply x ≠ x. But I don’t remember ever doing such a proof… and I have trouble imagining that you ever need to use a proof of this style.

If you’ve used the equation x = x in your own work, please let me know.

To explain my question a bit more precisely, it will help to choose a specific formalism: first-order classical logic with equality. We can get the rules for this system by taking first-order classical logic with function symbols and adding a binary predicate “=” together with three axiom schemas:

1. Reflexivity: for each variable x,

x = x

2. Substitution for functions: for any variables x, y and any function symbol f,

x = y \;\implies\; f(\dots, x, \dots) = f(\dots, y, \dots)

3. Substitution for formulas: For any variables x, y and any formula ϕ, if ϕ′ is obtained by replacing any number of free occurrences of x in ϕ with y, such that these remain free occurrences of y, then

x = y \;\implies\; (\phi \;\implies\; \phi')

Where did symmetry and transitivity of equality go? They can actually be derived!

For transitivity, use ‘substitution for formulas’ and take ϕ to be x = z, so that ϕ′ is y = z. Then we get

x = y \;\implies\; (x = z \;\implies\; y = z)

This is almost transitivity. From this we can derive

(x = y \;\&\; x = z) \;\implies\; y = z

and from this we can derive the usual statement of transitivity

(x = y \;\&\; y = z) \;\implies\; x = z

by choosing different names of variables and using symmetry of equality.

But how do we get symmetry? We can derive this using reflexivity and substitution for formulas. Take $\phi$ to be $x = x$ and take $\phi'$ to be the result of substituting the first instance of $x$ with $y$: that is, $y = x$. Then we get

$$ x = y \;\implies\; (x = x \;\implies\; y = x) $$

Using $x = x$, we can derive

$$ x = y \;\implies\; y = x $$

This is the only time I remember using $x = x$ to derive something! So maybe this equation is good for something. But if proving symmetry and transitivity of equality is the only thing it’s good for, I’m not very impressed. I would have been happy to take both of these as axioms, if necessary. After all, people often do.
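For readers who want to see the two derivations above machine-checked, here is a minimal sketch in Lean 4. It uses Lean’s built-in reflexivity proof `rfl` and its rewriting operator `▸`, which play the roles of reflexivity and substitution for formulas; it illustrates the argument rather than formalizing the exact first-order axiom schemas.

```lean
-- A minimal sketch in Lean 4 (an illustration of the argument above, not a
-- formalization of the precise first-order axiom schemas).

-- Symmetry: take φ to be `x = x`; rewriting an occurrence of x to y, as
-- justified by h, turns the reflexivity proof `rfl : x = x` into `y = x`.
theorem symmFromRefl {α : Type} {x y : α} (h : x = y) : y = x :=
  h ▸ rfl

-- Transitivity: take φ to be `x = y` (proved by h₁); rewriting y to z,
-- as justified by h₂, gives `x = z`.
theorem transFromSubst {α : Type} {x y z : α} (h₁ : x = y) (h₂ : y = z) : x = z :=
  h₂ ▸ h₁
```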

So, just to get the conversation started, I’ll conjecture that reflexivity of equality is completely useless if we include symmetry of equality in our axioms. Namely:

Conjecture. Any theorem in classical first-order logic with equality that does not include a subformula of the form $x = x$ for any variable $x$ can also be derived from a variant where we drop reflexivity, keep substitution for functions and substitution for formulas, and add this axiom schema:

1’. Symmetry: for any variables $x$ and $y$,

$$ x = y \;\implies\; y = x $$

Proof theorists: can you show this is true, or find a counterexample? We’ve seen that we can get transitivity from this setup, and then I don’t really see how it hurts to omit $x = x$. I may be forgetting something, though!

by john (baez@math.ucr.edu) at February 25, 2015 04:10 PM

Peter Coles - In the Dark

What is the Scientific Method?

Twitter sent me this video about the scientific method yesterday, so I thought I’d share it via this blog.

The term Scientific Method is one that I find difficult to define satisfactorily, despite having worked in science for over 25 years. The Oxford English Dictionary defines Scientific Method as

…a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.

This is obviously a very general description, and the balance between the different aspects described is very different in different disciplines. For this reason when people try to define what the Scientific Method is for their own field, it doesn’t always work for others even within the same general area. It’s fairly obvious that zoology is very different from nuclear physics, but that doesn’t mean that either has to be unscientific. Moreover, the approach used in laboratory-based experimental physics can be very different from that used in astrophysics, for example. What I like about this video, though, is that it emphasizes the role of uncertainty in how the process works. I think that’s extremely valuable, as the one thing that I think should define the scientific method across all disciplines is a proper consideration of the assumptions made, the possibility of experimental error, and the limitations of what has been done. I wish this aspect of science had more prominence in media reports of scientific breakthroughs. Unfortunately these are almost always presented as certainties, so if they later turn out to be incorrect it looks like science itself has gone wrong. I don’t blame the media entirely about this, as there are regrettably many scientists willing to portray their own findings in this way.

When I give popular talks about my own field, Cosmology,  I often  look for appropriate analogies or metaphors in television programmes about forensic science, such as CSI: Crime Scene Investigation which I used to watch quite regularly (to the disdain of many of my colleagues and friends). Cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe;  forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens. Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish “the truth” about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works.

 

 


by telescoper at February 25, 2015 11:59 AM

The n-Category Cafe

Concepts of Sameness (Part 2)

I’m writing about ‘concepts of sameness’ for Elaine Landry’s book Categories for the Working Philosopher. After an initial section on a passage by Heraclitus, I had planned to write a bit about Gongsun Long’s white horse paradox — or more precisely, his dialog When a White Horse is Not a Horse.

However, this is turning out to be harder than I thought, and more of a digression than I want. So I’ll probably drop this plan. But I have a few preliminary notes, and I might as well share them.

Gongsun Long

Gongsun Long was a Chinese philosopher who lived from around 325 to 250 BC. Besides the better-known Confucian and Taoist schools of Chinese philosophy, another important school at this time was the Mohists, who were more interested in science and logic. Gongsun Long is considered a member of the Mohist-influenced ‘School of Names’: a loose group of logicians, not a school in any organized sense. They are remembered largely for their paradoxes: for example, they independently invented a version of Zeno’s paradox.

As with Heraclitus, most of Gongsun Long’s writings are lost. Joseph Needham [N] has written that this is one of the worst losses of ancient Chinese texts, which in general have survived much better than the Greek ones. The Gongsun Longzi is a text that originally contained 14 of his essays. Now only six survive. The second essay discusses the question “when is a white horse not a horse?”

The White Horse Paradox

When I first heard this ‘paradox’ I didn’t get it: it just seemed strange and silly, not a real paradox. I’m still not sure I get it. But I’ve decided that’s what makes it interesting: it seems to rely on modes of thought, or speech, that are quite alien to me. What counts as a ‘paradox’ is more culturally specific than you might realize.

If a few weeks ago you’d asked me how the paradox goes, I might have said something like this:

A white horse is not a horse, because where there is whiteness, there cannot be horseness, and where there is horseness there cannot be whiteness.

However this is inaccurate because there was no word like ‘whiteness’ (let alone ‘horseness’) in classical Chinese.

Realizing that classical Chinese does not have nouns and adjectives as separate parts of speech may help explain what’s going on here. To get into the mood for this paradox, we shouldn’t think of a horse as a thing to which the predicate ‘whiteness’ applies. We shouldn’t think of the world as consisting of things $x$ and, separately, predicates $P$, which combine to form assertions $P(x)$. Instead, both ‘white’ and ‘horse’ are on more of an equal footing.

I like this idea because it suggests that predicate logic arose in the West thanks to peculiarities of Indo-European grammar that aren’t shared by all languages. This could free us up to have some new ideas.

Here’s how the dialog actually goes. I’ll use Angus Graham’s translation because it tries hard not to wash away the peculiar qualities of classical Chinese:

Is it admissible that white horse is not-horse?

S. It is admissible.

O. Why?

S. ‘Horse’ is used to name the shape; ‘white’ is used to name the color. What names the color is not what names the shape. Therefore I say white horse is not horse.

O. If we take horses having color as nonhorse, since there is no colorless horse in the world, can we say there is no horse in the world?

S. Horse obviously has color, which is why there is white horse. Suppose horse had no color, then there would just be horse, and where would you find white horse. The white is not horse. White horse is white and horse combined. Horse and white is horse, therefore I say white horse is non-horse.

(Chad Hansen writes: “Most commentaries have trouble with the sentence before the conclusion in F-8, “horse and white is horse,” since it appears to contradict the sophist’s intended conclusion. But recall the Mohists asserted that ox-horse both is and is not ox.” I’m not sure if that helps me, but anyway….)

O. If it is horse not yet combined with white which you deem horse, and white not yet combined with horse which you deem white, to compound the name ‘white horse’ for horse and white combined together is to give them when combined their names when uncombined, which is inadmissible. Therefore, I say, it is inadmissible that white horse is not horse.

S. ‘White’ does not fix anything as white; that may be left out of account. ‘White horse’ has ‘white’ fixing something as white; what fixes something as white is not ‘white’. ‘Horse’ neither selects nor excludes any colors, and therefore it can be answered with either yellow or black. ‘White horse’ selects some color and excludes others, and the yellow and the black are both excluded on grounds of color; therefore one may answer it only with white horse. What excludes none is not what excludes some. Therefore I say: white horse is not horse.

One possible anachronistic interpretation of the last passage is

The set of white horses is not equal to the set of horses, so “white horse” is not “horse”.

This makes sense, but it seems like a way of saying we can have $S \subseteq T$ while also $S \ne T$. That would be a worthwhile observation around 300 BC — and it would even be worth trying to get people upset about this, back then! But it doesn’t seem very interesting today.

A more interesting interpretation of the overall dialog is given by Chad Hansen [H]. He argues that to understand it, we should think of both ‘white’ and ‘horse’ as mass nouns or ‘kinds of stuff’.

The issue of how two kinds of stuff can be present in the same place at the same time is a bit challenging — we see Plato battling with it in the Parmenides — and in some sense western mathematics deals with it by switching to a different setup, where we have a universe of entities $x$ of which predicates $P$ can be asserted. If $x$ is a horse and $P$ is ‘being white’, then $P(x)$ says the horse is white.

However, then we get Leibniz’s principle of the ‘identity of indiscernibles’, which is a way of defining equality. This says that $x = y$ if and only if $P(x) \iff P(y)$ for all predicates $P$. By this account, an entity really amounts to nothing more than the predicates it satisfies!
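As a formal aside (my own gloss, not part of the essay), this Leibniz-style definition is easy to state in a proof assistant, and even the one-directional version gives back symmetry by a cute choice of predicate:

```lean
-- A sketch (Lean 4) of Leibniz equality: two things are "equal" when every
-- predicate true of one is true of the other.
def LeibnizEq {α : Type} (x y : α) : Prop :=
  ∀ P : α → Prop, P x → P y

-- Reflexivity is immediate.
theorem leibnizRefl {α : Type} (x : α) : LeibnizEq x x :=
  fun _ h => h

-- Symmetry: instantiate P with `fun z => LeibnizEq z x` and feed it reflexivity.
theorem leibnizSymm {α : Type} {x y : α} (h : LeibnizEq x y) : LeibnizEq y x :=
  h (fun z => LeibnizEq z x) (leibnizRefl x)
```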

This is where equality comes in — but as I said, all of this is seeming like too much of a distraction from my overall goals for this essay right now.

Notes

  • [N] Joseph Needham, Science and Civilisation in China vol. 2: History of Scientific Thought, Cambridge U. Press, Cambridge, 1956, p. 185.

  • [H] Chad Hansen, Mass nouns and “A white horse is not a horse”, Philosophy East and West 26 (1976), 189–209.

by john (baez@math.ucr.edu) at February 25, 2015 04:06 AM

February 24, 2015

Christian P. Robert - xi'an's og

Bayesian filtering and smoothing [book review]

When in Warwick last October, I met Simo Särkkä, who told me he had published an IMS monograph on Bayesian filtering and smoothing the year before. I thought it would be an appropriate book to review for CHANCE and tried to get a copy from Oxford University Press, unsuccessfully. I thus bought my own copy, which I received two weeks ago, and took the opportunity of my Czech vacations to read it… [A warning pre-empting accusations of self-plagiarism: this is a preliminary draft for a review to appear in CHANCE under my true name!]

“From the Bayesian estimation point of view both the states and the static parameters are unknown (random) parameters of the system.” (p.20)

Bayesian filtering and smoothing is an introduction to the topic that essentially starts from ground zero. Chapter 1 motivates the use of filtering and smoothing through examples and highlights the naturally Bayesian approach to the problem(s). Two graphs illustrate the difference between filtering and smoothing by plotting for the same series of observations the successive confidence bands. The performances are obviously poorer with filtering, but it should be noted that those intervals are point-wise rather than joint, i.e., that the graphs do not provide a joint confidence band. (The exercise section of that chapter is superfluous in that it suggests re-reading Kalman’s original paper and rephrases the Monty Hall paradox in a story unconnected with filtering!) Chapter 2 gives an introduction to Bayesian statistics in general, with a few pages on Bayesian computational methods. A first remark is that the above quote is both correct and mildly confusing in that the parameters can be consistently estimated, while the latent states cannot. A second remark is that justifying the MAP as associated with the 0-1 loss is incorrect in continuous settings. The third chapter deals with the batch updating of the posterior distribution, i.e., the fact that the posterior at time t is the prior at time t+1. With applications to state-space systems including the Kalman filter. The fourth to sixth chapters concentrate on this Kalman filter and its extensions, and I find this part somewhat unsatisfactory in that the collection of such filters is overwhelming for a neophyte. And no assessment of the estimation error when the model is misspecified appears at this stage. And, as usual, I find the unscented Kalman filter hard to fathom! The same feeling applies to the smoothing chapters, from Chapter 8 to Chapter 10. Which mimic the earlier ones.
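For readers new to the area, here is a minimal sketch of my own (not code from the book) of the linear-Gaussian predict/update cycle at the heart of the Kalman-filter chapters; the matrices A, Q, H, R are my notation for the transition, process-noise, observation and measurement-noise models.

```python
# A minimal sketch of one Kalman filter predict/update step for the
# linear-Gaussian model x_t = A x_{t-1} + q_t, y_t = H x_t + r_t.
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """m, P: current filtering mean and covariance; y: the new observation."""
    # Prediction (time update)
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Correction (measurement update)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```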

“The degeneracy problem can be solved by a resampling procedure.” (p.123)

By comparison, the seventh chapter on particle filters appears too introductory from my biased perspective. For instance, the above motivation for resampling in sequential importance (re)sampling is not clear enough. As stated it sounds too much like a trick, not mentioning the fast decrease in the number of first-generation ancestors as the number of generations grows. And thus the need for either increasing the number of particles fast enough or checking for quick-forgetting. Chapter 11 is the equivalent of the above for particle smoothing. I would have liked more details on the full posterior smoothing distribution, instead of the marginal posterior smoothing distribution at a given time t. And more of a discussion on the comparative merits of the different algorithms.
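To make the resampling step a bit more concrete (again a sketch of my own, not taken from the book): multinomial resampling simply redraws the particle set in proportion to the weights, and a common guard against resampling too often is the effective sample size.

```python
# Minimal multinomial resampling sketch for a particle filter
# (assumes the weights are normalized to sum to one).
import numpy as np

def effective_sample_size(weights):
    """Common resampling trigger: resample only when this drops below, say, N/2."""
    w = np.asarray(weights)
    return 1.0 / np.sum(w ** 2)

def resample(particles, weights, rng=None):
    """Redraw N particles with probability proportional to their weights, then
    reset the weights to uniform: heavy particles get duplicated, light ones vanish."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    idx = rng.choice(n, size=n, p=weights)
    return np.asarray(particles)[idx], np.full(n, 1.0 / n)
```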

Chapter 12 is much longer than the other chapters as it caters to the much more realistic issue of parameter estimation. The chapter borrows at times from Cappé, Moulines and Rydén (2007), where I contributed to the Bayesian estimation chapter. This is actually the first time in Bayesian filtering and smoothing when MCMC is mentioned. Including reference to adaptive MCMC and HMC. The chapter also covers some EM versions. And pMCMC à la Andrieu et al. (2010). Although a picture like Fig. 12.2 seems to convey the message that this particle MCMC approach is actually quite inefficient.

“An important question (…) which of the numerous methods should I choose?”

The book ends up with an Epilogue (Chapter 13), suggesting to use (Monte Carlo) sampling only after all other methods have failed. Which implies assessing that those methods have indeed failed. Maybe the suggestion of running what seems like the most appropriate method first with synthetic data (rather than the real data) could be included. For one thing, it does not add much to the computing cost. All in all, and despite some criticisms voiced above, I find the book quite a handy and compact introduction to the field, albeit slightly terse for an undergraduate audience.



by xi'an at February 24, 2015 11:15 PM

The n-Category Cafe

Concepts of Sameness (Part 1)

Elaine Landry is a philosopher at U. C. Davis, and she’s editing a book called Categories for the Working Philosopher. Tentatively, at least, it’s supposed to have chapters by these folks:

  • Colin McLarty (on set theory)
  • David Corfield (on geometry)
  • Michael Shulman (on univalent foundations)
  • Steve Awodey (on structuralism, invariance, and univalence)
  • Michael Ernst (on foundations)
  • Jean-Pierre Marquis (on first-order logic with dependent sorts)
  • John Bell (on logic and model theory)
  • Kohei Kishida (on modal logic)
  • Robin Cockett and Robert Seely (on proof theory and linear logic)
  • Samson Abramsky (on computer science)
  • Michael Moortgat (on linguistics and computational semantics)
  • Bob Coecke and Aleks Kissinger (on quantum mechanics and ontology)
  • James Weatherall (on spacetime theories)
  • Jim Lambek (on special relativity)
  • John Baez (on concepts of sameness)
  • David Spivak (on mathematical modeling)
  • Hans Halvorson (on the structure of physical theories)
  • Elaine Landry (on structural realism)
  • Andrée Ehresmann (on a topic to be announced)

We’re supposed to have our chapters done by April. To make writing my part more fun, I thought I’d draft some portions here on the $n$-Café.

Looking at the heavy emphasis on topics connected to logic, I’m sort of wishing I’d gone off in some other direction, like David Corfield, or Bob Coecke and Aleks Kissinger. I’d originally been going to write about my current love: category theory in applied mathematics and electrical engineering. But I decided that’s still research in progress, not something that’s ready to offer for the delectation of philosophers.

Anyway, my chosen topic is ‘concepts of sameness’ — meaning equality, isomorphism, equivalence and other related notions — and how they get re-examined in $n$-category theory, homotopy theory, and homotopy type theory. But I don’t want to merely explain piles of mathematics: I also want to think about the question of what it means to be the same, in a somewhat philosophical way.

So, I might start with a bit of ancient philosophy. Something like this:

In classical Greece and China, philosophers were very concerned about the concept of sameness — and its flip side, the concept of change. Their questions may seem naive today, because we’ve developed ways of talking to sidestep the issues they found puzzling. We’ve certainly made progress over the centuries. But we’re not done understanding these issues — indeed, mathematics is in the middle of a big shift in its attitude toward ‘equality’. So it pays to look back at the history.

Indeed, progress in mathematics and philosophy often starts by revisiting issues that seemed settled. When we regain a sense of childlike wonder at things we’d learned to take for granted, a space for new thoughts opens up.

With this in mind, and no pretense at good classical scholarship, let us look at a fragment of Heraclitus and Gongsun Long’s “white horse paradox”.

Heraclitus

Heraclitus lived roughly from 535 to 475 BC. Only fragments of his writings remain. Most of what we know about him comes from Diogenes Laertius, a notoriously unreliable biographer who lived six hundred years later, and Aristotle, who was concerned not with explaining Heraclitus but demolishing his ideas on physics. Among later Greeks Heraclitus was famous for his obscurity, nicknamed “the riddler” and “the dark one”. Nonetheless a certain remark of his has always excited people interested in the concepts of sameness and change.

In a famous passage of the Cratylus (402d), Plato has Socrates say:

Heraclitus is supposed to say that all things are in motion and nothing at rest; he compares them to the stream of a river, and says that you cannot go into the same water twice.

This is often read as saying that all is in flux; nothing stays the same. But a somewhat more reliable quote passed down through Cleanthes says:

On those stepping into rivers staying the same other and other waters flow.

Here it seems that while the river stays the same, the water does not. To me, this poses the great mystery of time: we can only say an entity changes if it is also the same in some way — because if it were completely different, we could not speak of an entity. Of course we can mentally separate the aspect that stays the same and the aspect that changes. But these two aspects must be bound together, if we are to say that ‘the same thing is changing’.

In category theory, we try to negotiate these deep waters using the concept of ‘isomorphism’. If we have an isomorphism $f \colon x \to y$, the objects $x$ and $y$ can be unequal and yet ‘the same in a way’. Alternatively, we can have an isomorphism from an object to itself, $f \colon x \to x$, where clearly $x$ is the same as $x$ yet $f$ describes some sort of change. So, isomorphisms exhibit a subtle interplay between sameness and difference that may begin to do justice to Heraclitus’ thoughts.

In mathematical physics, the passage of time is often described using isomorphisms: most simply, a one-parameter family of automorphisms $f_t \colon x \to x$, one for each time $t \in \mathbb{R}$. The automorphisms describe how a physical system is the same yet changing. The same idea generalizes to situations where time is not merely a line of real numbers.
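To spell out the simplest case (my gloss, not in the original post): such a family is a one-parameter group,

```latex
f_0 = \mathrm{id}_x, \qquad f_s \circ f_t = f_{s+t} \quad \text{for all } s, t \in \mathbb{R},
```

so evolving for a time $t$ and then a time $s$ is the same as evolving for a time $s + t$.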

In general, given an object $x$ in a category, the automorphisms $f \colon x \to x$ form a group called the ‘automorphism group’ or ‘symmetry group’ of that object. The automorphisms can be seen as ‘ways to change the object without changing it’. For example, a square has symmetries, which are ways you can rotate and/or reflect it that don’t change its appearance at all. Symmetry is very important in physics, and it is worth thinking about why, because this takes us back to some of the questions Heraclitus raised.

Notes

I’ll continue next time. I may write more than I wind up using, but that’s okay. Here are some notes from the Stanford Encyclopedia of Philosophy article on Heraclitus:

There are three alleged “river fragments”:

B12. potamoisi toisin autoisin embainousin hetera kai hetera hudata epirrei.

“On those stepping into rivers staying the same other and other waters flow.” (Cleanthes from Arius Didymus from Eusebius)

B49a. potamois tois autois…

“Into the same rivers we step and do not step, we are and are not.” (Heraclitus Homericus)

B91[a]. potamôi… tôi autôi…

“It is not possible to step twice into the same river according to Heraclitus, or to come into contact twice with a mortal being in the same state.” (Plutarch)

Of these only the first has the linguistic density characteristic of Heraclitus’ words. The second starts out with the same three words as B12, but in Attic, not in Heraclitus’ Ionic dialect, and the second clause has no grammatical connection to the first. The third is patently a paraphrase by an author famous for quoting from memory rather than from books. Even it starts out in Greek with the word ‘river,’ but in the singular. There is no evidence that repetitions of phrases with variations are part of Heraclitus’ style (as they are of Empedocles’). To start with the word ‘river(s)’ goes against normal Greek prose style, and on the plausible assumption that all sources are trying to imitate Heraclitus, who does not repeat himself, we would be led to choose B12 as the one and only river fragment, the only actual quotation from Heraclitus’ book. This is the conclusion of Kirk (1954) and Marcovich (1967), based on an interpretation that goes back to Reinhardt (1916). That B12 is genuine is suggested by the features it shares with Heraclitean fragments: syntactic ambiguity (toisin autoisin ‘the same’ [in the dative] can be construed either with ‘rivers’ [“the same rivers”] or with ‘those stepping in’ [“the same people”], with what comes before or after), chiasmus, sound-painting (the first phrase creates the sound of rushing water with its diphthongs and sibilants), rhyme and alliteration.[1]

If B12 is accepted as genuine, it tends to disqualify the other two alleged fragments. The major theoretical connection in the fragment is that between ‘same rivers’ and ‘other waters.’ B12 is, among other things, a statement of the coincidence of opposites. But it specifies the rivers as the same. The statement is, on the surface, paradoxical, but there is no reason to take it as false or contradictory. It makes perfectly good sense: we call a body of water a river precisely because it consists of changing waters; if the waters should cease to flow it would not be a river, but a lake or a dry streambed. There is a sense, then, in which a river is a remarkable kind of existent, one that remains what it is by changing what it contains (cf. Hume Treatise 1.4.6, p. 258 Selby-Bigge). Heraclitus derives a striking insight from an everyday encounter. Further, he supplies, via the ambiguity in the first clause, another reading: on the same people stepping into rivers, other and other waters flow. With this reading it is people who remain the same in contrast to changing waters, as if the encounter with a flowing environment helped to constitute the perceiving subject as the same. (See Kahn 1979.) B49a, by contrast, contradicts the claim that one can step into the same rivers (and also asserts that claim), and B91[a], like Plato in the Cratylus, denies that one can step in twice. Yet if the rivers remain the same, one surely can step in twice—not into the same waters, to be sure, but into the same rivers. Thus the other alleged fragments are incompatible with the one certifiably genuine fragment.

by john (baez@math.ucr.edu) at February 24, 2015 10:30 PM

Symmetrybreaking - Fermilab/SLAC

Physics in fast-forward

During their first run, experiments at the Large Hadron Collider rediscovered 50 years' worth of physics research in a single month.

In 2010, the brand-spanking-new CMS and ATLAS detectors started taking data for the first time. But the question physicists asked was not, “Where is the Higgs boson?” but rather “Do these things actually work?”

“Each detector is its own prototype,” says UCLA physicist Greg Rakness, run coordinator for the CMS experiment. “We don’t get trial runs with the LHC. As soon as the accelerator fires up, we’re collecting data.”

So LHC physicists searched for a few old friends: previously discovered particles.

“We can’t say we found a new particle unless we find all the old ones first,” says Fermilab senior scientist Dan Green. “Well, you can, but you would be wrong.”

Rediscovering 50 years' worth of particle physics research allowed LHC scientists to calibrate their rookie detectors and appraise their experiments’ reliability.

The CMS collaboration produced this graph using data from the first million LHC particle collisions identified as interesting by the experiment's trigger. It represents the instances in which the detector saw a pair of muons.

Muons are heavier versions of electrons. The LHC can produce muons in its particle collisions. It can also produce heavier particles that decay into muon pairs.

On the x-axis of the graph is the combined mass of two muons that appeared simultaneously in the aftermath of a high-energy LHC collision. On the y-axis is the number of times scientists saw each muon+muon mass combination.
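In case it helps to see the x-axis quantity spelled out: the “combined mass” is the invariant mass of the muon pair, which (in natural units with c = 1) can be computed from the two muons’ measured energies and momenta roughly as follows. This is a generic illustration, not CMS code.

```python
# Generic sketch of the dimuon invariant mass: m^2 = (E1 + E2)^2 - |p1 + p2|^2.
import math

def invariant_mass(E1, p1, E2, p2):
    """E1, E2 in GeV; p1, p2 are (px, py, pz) momentum tuples in GeV."""
    E = E1 + E2
    px, py, pz = (a + b for a, b in zip(p1, p2))
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))  # guard against tiny negative values from measurement error
```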

On top of a large and raggedy-looking half-parabola, six sharp peaks emerge.

“Each peak represents a parent particle, which was produced during the collision and then spat out two muons during its decay,” Green says.

When muon pairs appear at a particular mass more often than random chance can explain, scientists can deduce that there must be some other process tipping the scale. This is how scientists find new particles and processes—by looking for an imbalance in the data and then teasing out the reason why.

Each of the six peaks on this graph can be traced back to a well-known particle that decays to two muons.

Courtesy of: Dan Green

 

  • The rho [ρ] was discovered in 1961.
  • The J-psi [J/ Ψ] was discovered in 1974 (and earned a Nobel Prize for experimenters at the Massachusetts Institute of Technology and SLAC National Accelerator Laboratory).
  • The upsilon [Y] was discovered in 1977.
  • The Z was discovered in 1983 (and earned a Nobel Prize for experimenters at CERN).

What originally took years of work and multiple experiments to untangle, the CMS and ATLAS collaborations rediscovered after only about a month.

“The LHC is higher energy and produces a lot more data than earlier accelerators,” Green says. “It’s like going from a garden hose to a fire hose. The data comes in amazingly fast.”

But even the LHC has its limitations. On the far-right side, the graph stops looking like a half-parabola and starts looking like a series of short, jutting lines.

“It looks chaotic because we just didn’t have enough data for events at higher masses,” Green says. “Eventually, we would expect to see a peak representing the Higgs decaying to two muons popping up at around 125 GeV. But we just hadn’t produced enough high-mass muons to see it yet.”

Over the summer, the CMS and ATLAS detectors will resume taking data—this time with collisions containing 60 percent more energy. Green says he and his colleagues are excited to push the boundaries of this graph to see what lies just out of reach.

 


 

by Sarah Charley at February 24, 2015 09:42 PM

astrobites - astro-ph reader's digest

A New Type of Stellar Feedback

Title: Stellar feedback from high-mass X-ray binaries in cosmological hydrodynamical simulations

Authors: M. C. Artale, P. B. Tissera, & L. J. Pellizza

First Author’s Institution: Instituto de Astronomia y Fisica del Espacio, Ciudad de Buenos Aires, Argentina

Paper Status: Accepted for publication in MNRAS

The commonly accepted theory of dark matter (called Lambda-CDM) provides us with a good general understanding of how structure (filaments, galaxy clusters, galaxies, etc.) forms in our Universe. This is shown again and again in simulations of our Universe in a box, made by following the motion of dark matter particles and their clustering as controlled by gravity. However, problems arise in trying to reproduce the detailed baryonic properties (namely stars and gas) of real galaxies. One of the biggest problems is faithfully reproducing the star formation history of real galaxies, both in terms of when they form (early in the Universe or more recently) and how many form. As it turns out, “feedback” processes are essential in reproducing properties of galaxies, including supernova explosions which can heat, ionize, and remove gas from galaxies (shutting off star formation). In many cases supernovae may be the dominant form of feedback, but, as the authors of today’s paper explore, it may not be the only important form of feedback, especially in the early Universe.

High-mass X-ray binaries (HMXB’s) are binary star systems consisting of a massive star orbiting either a neutron star or a black hole. These systems produce a significant amount of X-ray emission, originating from the accretion of material from the massive star onto its more compact companion; this is shown in an artist’s drawing in Fig. 1. In some cases, these systems can produce fast-moving jets, dumping kinetic energy into the surrounding gas. Both of these can heat, ionize, and blow away gas within a galaxy, and in turn play a significant role in how stars form within the galaxy. The authors indicate that recent work has shown that these systems are more powerful in the early Universe, and may play a role in controlling star formation in early galaxies that will have a lasting impact for their entire evolution. For the first time, the authors implement a HMXB feedback method into a cosmological hydrodynamics code, and explore how it affects galaxy evolution in the early Universe.

Fig. 1: An artist’s conception of a black hole binary, where a massive star (right) is orbiting a black hole (left). The massive star is losing gas to the black hole, forming an accretion disk and jets. (Image Credit: NASA)

Modelling the Feedback from HMXB’s

The authors use a version of the GADGET-3 smooth particle hydrodynamics (SPH) code that includes a chemical evolution model, radiative heating and cooling (allowing the gas to heat and cool by radiating away or absorbing energy from photons), and methods for star formation and supernova feedback from Type II and Type Ia supernovae. New in this work, however, is a model to simulate the feedback from HMXB’s. Due to the finite resolution of the simulations, they construct a model that accounts for a population of HMXB’s in a galaxy, rather than individual HMXB’s. Using a known estimate for the number distribution of stars (the initial mass function, or IMF), the authors estimate that about 20% of massive stars that form black holes in a given galaxy form in binary systems. In their simulations, these systems deposit 10^52 erg of kinetic energy into their surroundings (about 10 times that of a single supernova explosion), and spend about 3 million years radiating X-rays at a luminosity of about 10^38 erg s^-1. These numbers are not well constrained, but the authors tested a few different values and found these to produce the most realistic amount of feedback (as compared to observations).

The authors produce two primary cosmological simulations, evolving from high redshift down to a redshift of about 2 (roughly 3 billion years after the Big Bang): one using only supernova feedback (designated S230-SN) and one using supernova feedback + HMXB feedback (designated S230-BHX). The authors use these two simulations to compare how a HMXB feedback model affects the total rate of star formation in their simulations, and the properties of star formation of individual galaxies in their simulations. Some of these results are discussed below.

Controlling Star Formation

Fig. 2: The star formation rate density in the entire simulation (i.e. how much mass is converted into stars per year per unit volume, computed for the entire simulation box) as a function of redshift for the SN only (blue, dashed) and SN + HMXB (red, solid) simulations. The horizontal axis is redshift, with lower redshift (z=2) closer to present day than higher redshift (z=10). The black data points are observations from Behroozi et al. 2013. (Source: Fig. 1 of Artale et al.)

Fig. 2 compares the effect of the SN (blue) vs. SN + HMXB (red) feedback models on the star formation of the entire simulation box as a function of redshift, from the early Universe (right) to z = 2 (left). The vertical axis gives the star formation rate (SFR) density in the entire simulation. The SFR density is computed as the total mass of gas converted into stars per unit time per unit volume. At high redshift (early times), the HMXB feedback suppresses the total SFR density and delays star formation towards lower redshift (later times). The higher SFR density in the HMXB model at lower redshifts is due to the fact that there is much more gas left over at lower redshifts to be transformed into stars, since it wasn’t used up as dramatically at high redshifts as in the SN-feedback-only model.
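In symbols (my shorthand, not necessarily the paper’s notation), the quantity on the vertical axis of Fig. 2 is

```latex
\rho_{\mathrm{SFR}}(z) \;=\; \frac{1}{V_{\mathrm{box}}}\,\frac{\mathrm{d}M_{\star}}{\mathrm{d}t}
\qquad \left[\, M_{\odot}\ \mathrm{yr}^{-1}\ \mathrm{Mpc}^{-3} \,\right]
```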

Fig. 3: Ratio of galaxy gas mass to dark matter halo mass for galaxies in the simulations as a function of their dark matter halo mass. This is plotted for three redshifts, z = 7, z = 6, and z = 4, for the SN only model (dashed) and the SN + HMXB model (solid). (Source: Artale et al. 2015)

The three most significant (in terms of mass) components in a galaxy are its dark matter halo, gas, and stars. The two feedback models have a significantly different effect on the amount of gas contained within galaxies over time. Fig. 3 shows the ratio of galaxy gas mass to galaxy dark matter mass as a function of the galaxy’s dark matter halo mass at three redshifts, z = 7 (green triangles), z = 6 (blue squares), and z = 4 (violet circles). The SN only simulation is shown with dashed lines, and SN + HMXB with solid lines. As shown, the SN only simulations have less gas at all redshifts and at almost all halo masses. In the SN only model, the higher SFR at early times uses up more gas and drives bigger outflows, causing overall less gas to remain within the galaxy. The SN + HMXB model, however, reduces the SFR at early times without driving large gas outflows, allowing for gassier galaxies that have gradually increasing SFR (Fig. 2) towards low redshifts, as the effectiveness of HMXB’s decreases.

Finding the Right Balance

Getting feedback and star formation right in simulations (in part) means reproducing star formation history over cosmic time. The authors used HMXB with SN feedback to reduce star formation in galaxies at high redshift (in better agreement with observations than SN feedback alone), while maintaining it at low redshift (i.e. delaying star formation). This is a promising step in developing a complete model of feedback in galaxies.

by Andrew Emerick at February 24, 2015 08:57 PM

Peter Coles - In the Dark

The Welsh University Funding Debacle Continues…

Although I no longer work in Wales, I still try to keep up with developments in the Welsh Higher Education sector as they might affect friends and former colleagues who do. I noticed yet another news item on the BBC a week or so ago as a kind of update to another one published a few years ago about the effect of the Welsh Government’s policy of giving Welsh students bursaries to study at English universities. The gist of the argument is that:

For every Welsh student that goes to university across the border the fee subsidy costs the Welsh government around £4,500.

It means this year’s 7,370 first-year students from Wales who study in other parts of the UK could take more than £33m with them. Including last year’s students, the total figure is over £50m.

According to the latest news story on this, the initial estimate of £50M grew first to £77M and is now put at a figure closer to £90M.
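A quick back-of-envelope check of the headline figures quoted above (my own arithmetic, not the BBC’s):

```python
# Rough check of the figures quoted above.
first_years = 7_370            # Welsh first-year students studying elsewhere in the UK
subsidy_per_student = 4_500    # approximate fee subsidy per student, in pounds
print(f"£{first_years * subsidy_per_student / 1e6:.1f}m")  # ≈ £33.2m, i.e. "more than £33m"
```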

I did in fact make exactly the same point about five years ago on this blog, when former Welsh Education Minister Leighton Andrews announced that students domiciled in Wales would be protected from the (then) impending tuition fee rises by a new system of grants. In effect the Welsh Assembly Government would pick up the tab for Welsh students; they would still have to pay the existing fee level of £3290 per annum, but the WAG would pay the extra £6K. I wrote in May 2010:

This is good news for the students of course, but the grants will be available to Welsh students not just for Welsh universities but wherever they choose to study. Since about 16,000 Welsh students are currently at university in England, this means that the WAG is handing over a great big chunk (at least 16,000 × £3000 = £48 million) of its hard-earned budget straight back to England. It’s a very strange thing to do when the WAG is constantly complaining that the Barnett formula doesn’t give them enough money in the first place.

What’s more, the Welsh Assembly grants for Welsh students will be paid for by top-slicing the teaching grants that HECFW makes to Welsh universities. So further funding cuts for universities in Wales are going to be imposed precisely in order to subsidise English universities. This is hardly in the spirit of devolution either!

English students wanting to study in Wales will have to pay full whack, but will be paying to attend universities whose overall level of state funding is even lower than in England (at least for STEM subjects whose subsidy is protected in England). Currently about 25,000 English students study in Wales compared with the 16,000 Welsh students who study in England. If the new measures go ahead I can see fewer English students coming to Wales, and more Welsh students going to England. This will have deeply damaging consequences for the Welsh Higher Education system.

It’s very surprising that the Welsh Nationalists, Plaid Cymru, who form part of the governing coalition in the Welsh Assembly, have gone along with this strange move. It’s good for Welsh students, but not good for Welsh universities. I would have thought that the best plan for Welsh students would be to keep up the bursaries but apply them only for study in Wales. That way both students and institutions will benefit and the Welsh Assembly’s budget will actually be spent in Wales, which is surely what is supposed to happen…

Well, the changes did go ahead, and now the consequences are becoming clearer. The Chief Executive of Welsh university funding agency HEFCW, Dr David Blaney, is quoted as saying

“…in England, English students have to get a loan, so the top universities there have £9,000 coming from each student and also funding from the funding council.

In Wales, a lot of the funding council funding is now spent on the tuition fee grant and that means there’s less money available to invest in the Welsh sector than is the case in England,” he told BBC Wales in an exclusive interview.

This mirrors a concern I’ve also discussed in a blog post, which is that the Welsh Government policy might actually increase the number of Welsh students deciding to study in England, while also decreasing the number of other students deciding to study in Wales. Why would this happen? Well, it’s because, at least in STEM subjects, the tuition fee paid in England attracts additional central funding from HEFCE. This additional resource is nowhere near as much as it should be, but is still better than in Wales. Indeed it was precisely by cutting the central teaching grant that the Welsh Government was able to fund its bursaries in the first place. So why should an English student decide to forego additional government support by choosing to study in Wales, and why should a Welsh student decide to do likewise by not going to England?

I really hope the Welsh Government decides to change its policy, though whether an imminent General Election makes that more or less likely is hard to say.


by telescoper at February 24, 2015 02:44 PM

Clifford V. Johnson - Asymptotia

Simulated meets Real!
Here's a freshly minted Oscar winner who played a scientist surrounded by... scientists! I'm with fellow physicists Erik Verlinde, Maria Spiropulu, and David Saltzberg at an event last month. Front centre are of course actors Eddie Redmayne (Best Actor winner 2015 for Theory of Everything) and Felicity Jones (Best Actress - nominee) along with the screenwriter of the film, Anthony McCarten. The British Consul-General Chris O'Connor is on the right. (Photo was courtesy of Getty Images.) [...] Click to continue reading this post

by Clifford at February 24, 2015 02:06 AM

astrobites - astro-ph reader's digest

No Need for De-trending: Finding Exoplanets in K2

Title: A systematic search for transiting planets in the K2 data

Authors: Daniel Foreman-Mackey et al.

First Author’s Institution: New York University

In May 2013, the Kepler mission came to an end with the failure of a critical reaction wheel. This reaction wheel was one of four that was responsible for keeping Kepler focused on a fixed field of view so that it could perform its mission: to continuously monitor the brightness of the same 150,000 stars, in order to detect the periodic dimming caused by extrasolar planets crossing in front of their host star.

A year later, in May 2014, NASA announced the approval of the K2 “Second Light” proposal,  a follow-up to the primary mission (described in Astrobites before: here, here), allowing Kepler to observe —although in a crippled state— a dozen fields, each for around 80 days at a time, extending the lifetime of humanity’s most prolific planet hunter.

The degraded stabilization system induces, however, severe pointing variations; the spacecraft does not stay locked-on target as well as it could previously. This leads to increased systematic drifts and thus larger uncertainties in the measured stellar brightnesses, degrading the transit-detection precision — especially problematic for traditional Kepler analysis techniques. The authors of today’s paper present a new analysis technique for the K2 data-set, a technique specifically designed to be insensitive to Kepler’s current pointing variations.

Fig 1: The K2 mission is Kepler’s second chance to get back into the planet-hunting game. Kepler’s pointing precision has however degraded, but novel pointing-insensitive analysis techniques aim to make up for that. Image credit: NASA/JPL.

Traditional Kepler Analysis Techniques

Most of the previous analysis techniques developed for the original Kepler mission included a “de-trending” or a “correction” step —where long-term systematic trends are removed from a star’s light curve— before the search for a transiting planet even begins. Examples of light-curves are shown in Figure 2. Foreman-Mackey et al. argue that such a step is statistically dangerous: “de-trending” is prone to over-fitting. Over-fitting generally reduces the amplitude of true exoplanet signals, making their transits appear shallower — smaller planets might be lost in the noise. In other words, de-trending might be throwing away precious transits, and real planets might be missed in the “correction” step!

Fig 2: Upper left panel: An illustration of the maximum likelihood fit (green lines) to a K2 light-curve (black dots) obtained from the authors’ new data-driven model. Bottom left panel: The residual scatter i.e. the “de-trended” light-curve, for a given star in the K2 field. The residuals are very small, indicating a good fit. The right panel shows a de-trended light-curve for another star (EPIC 201613023) where the transit events are more evident (marked with green vertical lines). De-trended light-curves like the one on the right were commonly used with past analysis techniques. Foreman-Mackey’s et al. method, however, never uses de-trended light-curves in the search for transits, only for qualitative visualization and hand-vetting purposes (see further description of their method below). Figures 2 (left), and 4 (right) from the paper.

A new analysis method

In light of these issues, the authors therefore sat down to revise the traditional analysis process. They propose a new method that simultaneously fits for a) the transit signals and b) the systematic trends, effectively bypassing the “de-trending” step.

Their transit model is rigid: each transit is described by a specific phase, period, and duration. Their method for modelling the systematic trends, on the other hand, is much more flexible than this rigid parametrization of the transits. The authors assume that the dominant source of light-curve systematics is Kepler’s pointing variability. Other factors play a role too, like stellar variability and detector thermal variations, but the authors focus specifically on modelling the spacecraft-induced pointing variability.

Fig 3: An illustration of the top 10 basis light-curves. A set of 150 make up the authors’ linear data-driven analysis technique. Having a linear model has enormous computational advantages.

Equipped with large amounts of computer power, Foreman-Mackey et al. create a flexible systematics model, consisting of a set of 150 linearly independent basis light-curves (see Figure 3). These are essentially a set of light-curves that one can add together—with different amplitudes and signs— to effectively recreate any observed light-curve in the K2 data-set. The basis light-curves themselves are found by a statistical method called Principal Component Analysis (PCA) —a method to find linearly uncorrelated variables that describe an observed data-set— which will not be described further in this astrobite, but the interested reader can read more here. The choice to model the systematics linearly has enormous computational advantages, as the computations reduce to using familiar linear algebra techniques. The authors note that the choice to use exactly 150 of them was not strictly optimized, but that number allowed for enough model-flexibility to effectively model the non-linear pointing-systematics, while keeping computational-costs reasonable.
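For concreteness, here is a rough sketch (my own illustration, not the authors’ pipeline) of how one could build such basis light-curves with PCA via a singular value decomposition and then fit a single star’s systematics as a linear combination of them; `flux` is assumed to hold many stars’ raw light curves, one per row.

```python
# Rough sketch: PCA basis light-curves via SVD, plus a linear systematics fit.
import numpy as np

def eigen_light_curves(flux, n_basis=150):
    """flux: array of shape (n_stars, n_times) holding raw light curves."""
    X = flux - flux.mean(axis=1, keepdims=True)     # remove each star's mean level
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_basis]                             # (n_basis, n_times) basis light-curves

def systematics_model(lc, basis):
    """Linear least-squares fit of one light curve as a combination of the basis."""
    coeffs, *_ = np.linalg.lstsq(basis.T, lc, rcond=None)
    return basis.T @ coeffs
```

(The actual pipeline fits this kind of systematics model jointly with the transit model, rather than as a separate de-trending pass.)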

With a set of basis light-curves in hand, the analysis pipeline then systematically evaluates how well the joint transit-and-systematics model describes each raw light-curve, and searches it for transits (see again Figure 2). Finally, the pipeline returns the signals that pass the systematic cuts: the light-curves with the highest probability of containing transits.

Results, Performance & Evaluation

The authors apply their analysis pipeline to the full K2 Campaign 1 data-set (including 21,703 light-curves, all publicly available here). The systematic vetting step returned 741 signals, which were further hand-vetted by the authors, throwing out, for example, obvious eclipsing binaries. The authors end with a list of 36 planet candidates transiting 31 stars — effectively multiplying the current known yield from K2 by a factor of five! Figure 4 (left) summarizes the distribution of the reported candidates in the radius-period plane.

Fig 4: Left: The fractional radii of the reported 36 planet candidates as a function of their period. Right: Detection efficiency in the radius-period plane, calculated by injecting synthetic transit signals into real K2 light curves, and calculating the fraction that are successfully recovered. Figures 11 (left) and 9 (right) from the paper.

The authors also discuss the performance and detection efficiency of their method. Injecting synthetic transit signals into real K2 light curves and measuring the fraction that are correctly identified and recovered by their pipeline gives an estimate of the expected performance under real analysis conditions. From Figure 4 (right) we see that their analysis technique performs best for short-period, large-radius planets. This makes sense: larger planets have larger transit signals, and shorter orbital periods increase the number of observed transits, so such planets are more likely to be successfully found.
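Schematically, the injection/recovery test works like this (my own sketch; `inject_transit` and `search_pipeline` are hypothetical stand-ins for the authors’ actual tools):

```python
# Sketch of an injection/recovery completeness test.
import numpy as np

def detection_efficiency(raw_light_curves, inject_transit, search_pipeline,
                         n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    recovered = 0
    for _ in range(n_trials):
        lc = raw_light_curves[rng.integers(len(raw_light_curves))].copy()
        period = rng.uniform(1.0, 70.0)        # days; a K2 campaign lasts ~80 days
        ror = rng.uniform(0.01, 0.1)           # planet-to-star radius ratio
        lc = inject_transit(lc, period, ror)   # add the synthetic signal to real data
        if search_pipeline(lc, true_period=period):   # recovered at roughly the right period?
            recovered += 1
    return recovered / n_trials                # the fraction plotted in Fig. 4 (right)
```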

Lastly, the authors share their data products online, along with their pipeline implementation under the MIT open-source software license. This means that anyone can take a stab at reproducing their findings, or perhaps even find new transit signals! So, now I want to ask you, dear reader, will you lengthen the list of known planet candidates today?

by Gudmundur Stefansson at February 24, 2015 12:25 AM

February 23, 2015

Sean Carroll - Preposterous Universe

I Wanna Live Forever

If you’re one of those people who look the universe in the eyeball without flinching, choosing to accept uncomfortable truths when they are supported by the implacable judgment of Science, then you’ve probably acknowledged that sitting is bad for you. Like, really bad. If you’re not convinced, the conclusions are available in helpful infographic form; here’s an excerpt.

[Excerpt from the sitting infographic]

And, you know, I sit down an awful lot. Doing science, writing, eating, playing poker — my favorite activities are remarkably sitting-based.

So I’ve finally broken down and done something about it. On the good advice of Carl Zimmer, I’ve augmented my desk at work with a Varidesk on top. The desk itself was formerly used by Richard Feynman, so I wasn’t exactly going to give that up and replace it with a standing desk. But this little gizmo lets me spend most of my time at work on my feet instead of sitting on my butt, while preserving the previous furniture.


It’s a pretty nifty device, actually. Room enough for my laptop, monitor, keyboard, mouse pad, and the requisite few cups for coffee. Most importantly for a lazybones like me, it doesn’t force you to stand up absolutely all the time; pull some handles and the whole thing gently settles down to desktop level, ready for your normal chair-bound routine.


We’ll see how the whole thing goes. It’s one thing to buy something that allows you to stand while working, it’s another to actually do it. But at least I feel like I’m trying to be healthier. I should go have a sundae to celebrate.

by Sean Carroll at February 23, 2015 09:08 PM

ZapperZ - Physics and Physicists

Which Famous Physicist Should Be Depicted In The Movie Next?
Eddie Redmayne won the Oscar last night for his portrayal of Stephen Hawking in the movie "The Theory of Everything". So this got me thinking about which famous physicist should be portrayed next in a movie biography. Hollywood won't choose someone who isn't eccentric, famous, or in the news. So that rules out a lot.

I would think that Richard Feynman would make a rather compelling biographical movie. He certainly was a very complex person, and definitely not boring. They could give the movie the title "Sure You Must Be Joking", or "Six Easy Pieces", or "Shut Up And Calculate", although the last may not be entirely attributable to Feynman.

Hollywood, I'm available for consultation!

Zz.

by ZapperZ (noreply@blogger.com) at February 23, 2015 08:10 PM

Symmetrybreaking - Fermilab/SLAC

Video: LHC experiments prep for restart

Engineers and technicians have begun to close experiments in preparation for the next run.

The LHC is preparing to restart at almost double the collision energy of its previous run. The new energy will allow physicists to check previously untestable theories and explore new frontiers in particle physics.

When the LHC is on, counter-rotating beams of particles will be made to collide at four interaction points 100 meters underground, around which sit the huge detectors ALICE, ATLAS, CMS and LHCb.

In the video below, engineers and technicians prepare these four detectors to receive the showers of particles that will be created in collisions at energies of 13 trillion electronvolts.

The giant endcaps of the ATLAS detector are back in position and the wheels of the CMS detector are moving it back into its “closed” configuration. The huge red door of the ALICE experiment is closed up ready for restart, and the access door to the LHC tunnel is sealed with concrete blocks.


A version of this article was published by CERN.

 


by Cian O'Luanaigh at February 23, 2015 04:50 PM

arXiv blog

Computational Anthropology Reveals How the Most Important People in History Vary by Culture

Data mining Wikipedia people reveals some surprising differences in the way eastern and western cultures identify important figures in history, say computational anthropologists.

 

February 23, 2015 04:18 PM

Tommaso Dorigo - Scientificblogging

New CP-Odd Higgs Boson Results By ATLAS
The paper to read today is one from the ATLAS collaboration at the CERN Large Hadron Collider (my competitors, as I work for the other experiment across the ring, CMS). ATLAS has just produced a new article which describes the search for the CP-odd A boson, a particle which arises in Supersymmetry as well as in more generic extensions of the Standard Model called "two-Higgs-doublet models". What are these?

read more

by Tommaso Dorigo at February 23, 2015 03:00 PM

February 21, 2015

arXiv blog

How Malware Can Track Your Smartphone Without Using Location Data

The way your smartphone uses power provides a simple way to track it, say computer scientists who have developed an app to prove it.

February 21, 2015 08:04 PM

Jester - Resonaances

Weekend plot: rare decays of B mesons, once again
This weekend's plot shows the measurement of the branching fractions for neutral B and Bs meson decays into muon pairs:
This is not exactly a new thing. LHCb and CMS separately announced evidence for the B0s→μ+μ- decay in summer 2013, and a preliminary combination of their results appeared a few days later. The plot above comes from the recent paper where a more careful combination is performed, though the results change only slightly.

A neutral B meson is a  bound state of an anti-b-quark and a d-quark (B0) or an s-quark (B0s), while for an anti-B meson the quark and the antiquark are interchanged. Their decays to μ+μ- are interesting because they are very suppressed in the standard model. At the parton level, the quark-antiquark pair annihilates into a μ+μ- pair. As for all flavor changing neutral current processes, the lowest order diagrams mediating these decays occur at the 1-loop level. On top of that, there is the helicity suppression by the small muon mass, and the CKM suppression by the small Vts (B0s) or Vtd (B0) matrix elements. Beyond the standard model one or more of these suppression factors may be absent and the contribution could in principle exceed that of the standard model even if the new particles are as heavy as ~100 TeV. We already know this is not the case for the B0s→μ+μ- decay. The measured branching fraction (2.8 ± 0.7)x10^-9  agrees within 1 sigma with the standard model prediction (3.66±0.23)x10^-9. Further reducing the experimental error will be very interesting in view of observed anomalies in some other b-to-s-quark transitions. On the other hand, the room for new physics to show up  is limited,  as the theoretical error may soon become a showstopper.

The situation is a bit different for the B0→μ+μ- decay, where there is still relatively more room for new physics. This process has been less in the spotlight. This is partly due to a theoretical prejudice: in most popular new physics models it is impossible to generate a large effect in this decay without generating a corresponding excess in B0s→μ+μ-. Moreover, B0→μ+μ- is experimentally more difficult: the branching fraction predicted by the standard model is (1.06±0.09)x10^-10, which is 30 times smaller than that for B0s→μ+μ-. In fact, 3σ evidence for the B0→μ+μ- decay appears only after combining LHCb and CMS data. More interestingly, the measured branching fraction, (3.9±1.4)x10^-10, is some 2 sigma above the standard model value. Of course, this is most likely a statistical fluke, but nevertheless it will be interesting to see an update once the 13-TeV LHC run collects enough data.
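
As a quick sanity check of the "some 2 sigma" statement (a back-of-the-envelope estimate only, naively adding the experimental and theoretical errors in quadrature and ignoring the asymmetric likelihood of the actual combination):

```python
import math

# B0 -> mu+ mu- branching fractions, in units of 1e-10
measured, sigma_exp = 3.9, 1.4     # LHCb+CMS combination
predicted, sigma_th = 1.06, 0.09   # standard model prediction

pull = (measured - predicted) / math.hypot(sigma_exp, sigma_th)
print(f"B0 -> mu mu tension: {pull:.1f} sigma")   # roughly 2 sigma, as quoted
```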

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Jester - Resonaances

Do-or-die year
The year 2015 began as any other year... I mean the hangover situation in particle physics. We have a theory of fundamental interactions - the Standard Model - that we know is certainly not the final theory, because it cannot account for dark matter, matter-antimatter asymmetry, and cosmic inflation. At the same time, the Standard Model perfectly describes any experiment we have performed here on Earth (up to a few outliers that can be explained as statistical fluctuations). This is puzzling, because some of these experiments are in principle sensitive to very heavy particles, sometimes well beyond the reach of the LHC or any future colliders. Theorists cannot offer much help at this point. Until recently, naturalness was the guiding principle in constructing new theories, but few have retained confidence in it. No other serious paradigm has appeared to replace naturalness. In short, we know for sure there is new physics beyond the Standard Model, but we have absolutely no clue what it is or how much energy is needed to access it.

Yet 2015 is different, because it is the year when the LHC restarts at 13 TeV energy. We should expect high-energy collisions sometime in the summer, and around 10 inverse femtobarns of data by the end of the year. This is the last significant energy jump most of us may experience before retirement, so this year is going to be absolutely crucial for the future of particle physics. If, by next Christmas, we don't hear any whispers of anomalies in LHC data, we will have to brace for tough times ahead. With no energy increase in sight, slow experimental progress, and no theoretical hints for a better theory, particle physics as we know it will be in deep merde.

You may protest this is too pessimistic. In principle, new physics may show up at the LHC anytime between this fall and the year 2030 when 3 inverse attobarns of data will have been accumulated. So the hope will not die completely anytime soon. However, the subjective probability of making a discovery will decrease exponentially as time goes on, as you can see in the attached plot. Without a discovery, the mood will soon plummet, resembling something of the late Tevatron, rather than the thrill of pushing the energy frontier that we're experiencing now.

But for now, anything may yet happen. Cross your fingers.

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Jester - Resonaances

Weekend plot: spin-dependent dark matter
This weekend plot is borrowed from a nice recent review on dark matter detection:
It shows experimental limits on the spin-dependent scattering cross section of dark matter on protons. This observable is not where the most spectacular race is happening, but it is important for constraining more exotic models of dark matter. Typically, a scattering cross section in the non-relativistic limit is independent of spin or velocity of the colliding particles. However, there exist reasonable models of dark matter where the low-energy cross section is more complicated. One possibility is that the interaction strength is proportional to the scalar product of spin vectors of a dark matter particle and a nucleon (proton or neutron). This is usually referred to as the spin-dependent scattering, although other kinds of spin-dependent forces that also depend on the relative velocity are possible.

In all existing direct detection experiments, the target contains nuclei rather than single nucleons. Unlike in the spin-independent case, for spin-dependent scattering the cross section is not enhanced by coherent scattering over many nucleons. Instead, the interaction strength is proportional to the expectation values of the proton and neutron spin operators in the nucleus. One can, very roughly, think of this process as scattering off an odd, unpaired nucleon. For this reason, xenon-target experiments such as Xenon100 or LUX are less sensitive to spin-dependent scattering on protons, because xenon nuclei have an even number of protons. For protons, experiments that contain fluorine in their target molecules have the best sensitivity. This is the case for the COUPP, Picasso, and SIMPLE experiments, which currently set the strongest limits on the spin-dependent scattering cross section of dark matter on protons. Still, in absolute numbers, the limits are many orders of magnitude weaker than in the spin-independent case, where LUX has crossed the 10^-45 cm^2 line. The IceCube experiment can set stronger limits in some cases by measuring the high-energy neutrino flux from the Sun. But these limits depend on what dark matter annihilates into, and are therefore much more model-dependent than the direct detection limits.
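
To see roughly why the spin-dependent limits lag so far behind in absolute terms, here is a toy comparison (ignoring nuclear form factors, the separate proton and neutron couplings, and reduced-mass factors): for equal couplings to all nucleons, the spin-independent cross section on a nucleus is coherently enhanced by roughly a factor of A squared, while the spin-dependent one is set by the spin of the odd nucleon and gets no such boost.

```python
# Toy illustration of the coherent A^2 enhancement that spin-independent (SI)
# scattering enjoys and spin-dependent (SD) scattering lacks.
targets = {"fluorine-19": 19, "xenon-131": 131}

for name, A in targets.items():
    print(f"{name:12s}: SI enhancement ~ A^2 = {A**2:6d},  SD enhancement ~ O(1)")

# Xenon's ~17,000-fold coherent boost for SI scattering, versus an order-one
# factor for SD scattering, is a large part of why the SD limits are orders of
# magnitude weaker when quoted as absolute cross sections.
```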

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Clifford V. Johnson - Asymptotia

Pre-Oscar Bash: Hurrah for Science at the Movies?
It is hard to not get caught up each year in the Oscar business if you live in this town and care about film. If you care about film, you're probably just mostly annoyed about the whole thing because the slate of nominations and eventual winners hardly represents the outcome of careful thought about relative merits and so forth. The trick is to forget being annoyed and either hide from the whole thing or embrace it as a fun silly thing that does not mean too much. This year, since there have been a number of high profile films that help raise awareness of and interest in science and scientists, I have definitely not chosen the "hide away" option. Whatever one thinks of how good or bad "The Theory of Everything", "The Imitation Game" and "Interstellar" might be, I think it is simply silly to ignore the fact that it is a net positive thing that they've got millions of people talking about science and science-related things while out on their movie night. That's a good thing, and as I've been saying for the last several months (see e.g. here and here), good enough reason for people interested in science engagement to be at least broadly supportive of the films, because that'll encourage more to be made, and the more such films are made, the better the chances are that even better ones get made. This is all a preface to admitting that I went to one of those fancy pre-Oscar parties last night. It was put on by the British Consul-General in Los Angeles (sort of a followup to the one I went to last month mentioned here) in celebration of the British Film industry and the large number of British Oscar [...] Click to continue reading this post

by Clifford at February 21, 2015 06:15 PM

February 20, 2015

Lubos Motl - string vacua and pheno

Barry Kripke wrote a paper on light-cone-quantized string theory
In the S08E15 episode of The Big Bang Theory, Ms Wolowitz died. The characters were sad and Sheldon was the first one who said something touching. I think it was a decent way to deal with the real-world death of Carol Ann Susi who provided Ms Wolowitz with her voice.

The departure of Ms Wolowitz abruptly solved a jealousy-ignited argument between Stewart and Howard revolving around the furniture from the Wolowitz house.

Also, if you missed that, Penny learned that she's been getting tests from Amy who was comparing her intelligence to the intelligence of the chimps. Penny did pretty well, probably more so than Leonard.




But the episode began with the twist described in the title. Barry brought a bottle to Amy because she had previously helped him with a paper that Kripke wrote and that was apparently very successful.




Kripke revealed that the paper was on the wight-cone quantization (I suppose he meant light-cone quantization) of string theory.

It's funny because some of my most well-known papers were about the light-cone quantization (in particular, Matrix theory is a light-cone-quantized description of string/M-theory), and I've been a big fan of this (not terribly widely studied) approach to string theory since 1994 when I began to learn string theory at the technical level. There are no bad ghosts (negative-norm states) or local redundancies in that description (well, except for the \(U(N)\) gauge symmetry if we use the matrix model description) which is "very physical" from a certain perspective.

Throughout the episode, Sheldon was jealous and unhappy that he had left string theory. Penny was trying to help him to "let it go"; this effort turned against her later. Recall that in April 2014, the writers turned Sheldon into a complete idiot who had only been doing string theory because some classmates had been beating him with a string theory textbook and who suddenly decided that he no longer considered string theory a vital branch of the scientific research.

Yesterday's episode fixed that harm to string theory – but it hasn't really fixed the harm done to Sheldon's image. Nothing against Kripke but the path that led him to write papers on string theory seems rather bizarre to me. When he appeared in the sitcom for the first time, I was convinced that we had quite some data that he was just some low-energy, low-brow, and perhaps experimental physicist. Those usually can't write papers on light-cone-quantized string theory.

But the writers have gradually transformed the subdiscipline of physics that Kripke is good at (this was not the first episode in which Kripke looked like a theoretical physicist). Of course, this is a twist that I find rather strange and unlikely but what can we do? Despite his speech disorder and somewhat obnoxious behavior, we should praise a new string theorist. Welcome, Barry Kripke. ;-)

An ad:
Adopt your own Greek for €500!

He will do everything that you don't have time to do:

* sleep until 11 am
* regularly have coffee
* honor siesta after the lunch
* spend evenings by sitting in the bar

You will finally have the time to work from dawn to dusk.

by Luboš Motl (noreply@blogger.com) at February 20, 2015 06:42 PM

ZapperZ - Physics and Physicists

Unfortunate Superfish
I hope this doesn't taint the name "Superfish".

In case you missed it, this week came the news that Lenovo, the Chinese computer company (to whom IBM sold their ThinkPad laptop series), had been installing on some of their computers a rather nasty piece of tracking software called Superfish Visual Discovery.

I wouldn't have paid that much attention to stuff like this were it not for two reasons: (i) I own a rather new Lenovo laptop and (ii) I am familiar with the name "Superfish", but in a different context.

Luckily, after doing a thorough check of my system, I found no sign of this intrusive software. As for the second reason, there is a rather popular piece of software called "Superfish" out of Los Alamos National Lab that we use quite often. It is a Poisson EM solver often used to solve for EM fields in complex geometries and boundary conditions. I'm guessing that they gave it that name because "poisson" is French for "fish", and it really is a super piece of software! :)

It is unfortunate that in the context of computer technology, the name Superfish is "poison".

Zz.

by ZapperZ (noreply@blogger.com) at February 20, 2015 06:37 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Five
Today's programme started with a talk by Santanu Mondal on baryons in the sextet gauge model, which is a technicolor-style SU(3) gauge theory with a doublet of technifermions in the sextet (two index symmetric) representation, and a minimal candidate for a technicolor-like model with an IR almost-fixed point. Using staggered fermions, he found that when setting the scale by putting the technipion's decay constant to the value derived from identifying the Higgs vacuum expectation value as the technicondensate, the baryons had masses in excess of 3 TeV, heavy enough to not yet have been discovered by the LHC, but to be within reach of the next run. However, the anomaly cancellation condition when embedding the theory into the Standard Model of the electroweak interactions requires charge assignments such that the lightest technibaryon (which would be a stable particle) would have a fractional electrical charge of 1/2, and while the cosmological relic density can be made small enough to evade detection, the technibaryons produced by the cosmic rays in the Earth's atmosphere should have been able to accumulate (there currently appear to be no specific experimental exclusions for charge-1/2 particles though).

Next was Nilmani Mathur speaking about mixed action simulations using overlap valence quarks on the MILC HISQ ensembles (which include the radiative corrections to the lattice gluon action from the quarks). Tuning the charm quark mass via the kinetic rather than rest mass of charmonium, the right charmonium hyperfine splitting is found, as well as generally correct charmonium spectra. Heavy-quark baryons (up to and including the Ωccc) have also been simulated, with results in good agreement with experimental ones where the latter exist. The mixed-action effects appear to be small in mixed-action χPT, and only half as large as those for domain-wall valence fermions on an asqtad sea.

In a brief note, Gunnar Bali encouraged the participants of the workshop to seek out opportunities for Indo-German research collaboration, of which there are still only a limited number of instances.

After the tea break, there were two more theoretical talks, both of them set in the framework of Hamiltonian lattice gauge theory: Indrakshi Raychowdhury presented a loop formulation of SU(2) lattice gauge theory based on the prepotential formalism, where both the gauge links and their conjugate electrical fields are constructed from harmonic oscillator variables living on the sites using the Schwinger construction. By some ingenious rearrangements in terms of "fusion variables", a representation of the perturbative series for Hamiltonian lattice gauge theory purely in terms of integer-valued quantum numbers in a geometric-combinatorial construction was derived.
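
For readers who have not met the Schwinger construction before, the underlying identity (standard textbook material, quoted here only as background to the talk) is that a doublet of harmonic oscillators \(a=(a_1,a_2)\) furnishes SU(2) generators,
\[ J^a = \tfrac{1}{2}\, a^\dagger_i\, \sigma^a_{ij}\, a_j , \]
so that states with total occupation number \(2j\) carry spin \(j\).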

Lastly, Sreeraj T.P. presented an analogy between the Gauss constraint in Hamiltonian lattice gauge theory and the condition of equal "angular impulses" in the SU(2) x SU(2) description of the SO(4) symmetry of the Coulomb potential. This analogy was used to derive a description of the Hilbert space of SU(2) lattice gauge theory in terms of hydrogen-atom (n,l,m) variables located on the plaquettes, subject only to the global constraint of vanishing total angular momentum, from which a variational ansatz for the ground state can be constructed.

The workshop closed with some well-deserved applause for the organisers and all of the supporting technical and administrative staff, who have ensured that this workshop ran very smoothly indeed. Another excellent lunch (I understand that our lunches have been a kind of culinary journey through India, starting out in the north on Monday and ending in Kerala today) concluded the very interesting workshop.

I will keep the small subset of my readers whom it may interest updated about my impressions from an excursion planned for tomorrow and my trip back.

by Georg v. Hippel (noreply@blogger.com) at February 20, 2015 12:02 PM

February 19, 2015

Lubos Motl - string vacua and pheno

A good story on proofs of inevitability of string theory
Natalie Wolchover is one of the best popular physics writers in the world, having written insightful stories especially for the Simons Foundation and the Quanta Magazine (her Bc degree in nonlinear optics from Tufts helps). Yesterday, she added
In Fake Universes, Evidence for String Theory
It is a wonderful article about the history of string theory (Veneziano-related history; thunderstorms by which God unsuccessfully tried to kill Green and Schwarz in Aspen, Colorado, which would postpone the First Superstring Revolution by a century; dualities; AdS/CFT etc.) with a modern focus on the research attempting to prove the uniqueness of string theory.



At least since the 1980s, we have been saying that "string theory is the only game in town". This slogan was almost universally understood as a statement about sociology or comparative literature: if you look at proposals for a quantum theory of gravity aside from string theory, you won't find any that work.




However, one may adopt a different, bolder, and non-sociological perspective on the slogan. One may actually view it as a proposition in a theorem (or theorems) that could be proved. "The proof" that would settle all these questions isn't available yet but lots of exciting ideas and papers with partial proofs are already out there.

I've been promoting the efforts to prove that string theory is the only game in town for a decade. On this blog, you may look e.g. at Two Roads from \(\mathcal{N}=8\) SUGRA to String Theory (2008) arguing that the extra dimensions and stringy and brane-like objects are unavoidable additions to supergravity if you want to preserve consistency.




Such partial proofs are usually limited to "subclasses" of vacua of quantum gravity. However, in these subclasses, they show that "a consistent theory of quantum gravity" and "string theory" are exactly the same thing described by two seemingly different phrases.

Wolchover mentions a pretty large number of recent papers that extended the case supporting the claim that string theory is the only consistent theory of quantum gravity. Her list of preprints includes
Maldacena+Žibojedov 2011
Maldacena+Žibojedov+2 2014
Rangamani+Haehl 2014
Maloney+2 2014
Veneziano+3 2015
Some of them were discussed in my blog post String theory cleverly escapes acausality traps two weeks ago.

She communicated with some of the authors of the papers as well as with Tom Hartman whom I remember as a very smart student at Harvard. ;-)

Wolchover also gave some room to "uninterested, neutral" voices such as Matt Strassler and Steve Carlip; as well as to two full-fledged crackpots, Carlo Rovelli and Lee Smolin. The latter two just deny that mathematics works or that it can teach us anything. They have nothing to say – except for some pure fog to emit.
String-related: Supreme found a newly posted 2-part lecture by Witten, 50 minutes plus 90 minutes.
Skeptics' concerns

The uninterested folks ask the obvious skeptical questions: Can these results be generalized to all classes of quantum gravity vacua? And can these sketches of proofs be made rigorous?

They're of course legitimate questions – and the physicists working on these ideas are surely trying to be more rigorous and more general than ever before – but these questions are tendentious, too. Even if the proofs of string theory's uniqueness only cover some classes of (unrealistic) vacua of quantum gravity, they still contain some information. Why?

First, I think that it seems rather awkward to believe that in several very different classes of vacua of quantum gravity, string theory is the only consistent description, while in other classes not yet studied, the conclusion will be very different. But of course, it's always possible that your results simply do not generalize. But imagine that you prove that a game is the only game in Boston, Beijing, and San Francisco. Will you think that there are completely different games in Moscow? Well, you will tend to think that the answer is No, I guess. What about Las Vegas? Maybe there are different games there – but it would still be strange that not a single one among these games may be found in the 3 cities above. So a proof in "seemingly random" towns or subsets of the vacua is always nontrivial evidence increasing the odds that the statement holds in general – or at least, that the statement holds in the other random "town" which happens to be relevant for our observations.

(By the way, Wolchover's term "fake Universes" for the "other towns" is perhaps making them sound more funny and unreal than they are. If string theory is right, these other vacua are as genuine as other towns where you just don't happen to live.)

Second, while the proofs that "the consistent theory must be string/M-theory" are not rigorous at this moment (they cannot be, because we can't even define string/M-theory in a way that a mathematician could call rigorous, and the same comment mostly applies to the phrase "quantum gravity", too), one "subset" of the results is much more rock-solid, and it's the elimination of specific enough candidate alternatives.

What do I mean?

The papers are usually not formulated in this way, but I do think that the people who write them have a very robust understanding of why random cheap ideas that even a not-terribly-bright kid could invent in a few minutes, like loop quantum gravity, cannot be a consistent description of any vacuum of quantum gravity that is studied in these papers.

So even if the "positive proof" isn't rigorous at all, the "negative proofs" may be rigorous because these researchers have appreciated many properties that a consistent theory of quantum gravity simply has to have, and it's easy to see that particular classes of alternative theories – pretty much all of them in the literature – simply don't have these properties.

Third, some of the specific reasons that lead the skeptics to their skepticism may very well be demonstrably wrong. For example, Matt Strassler discusses some of the papers above that conclude that a consistent theory of quantum gravity has to have a stringy density of states. And he comments on this result dismissively:
And just finding a stringy density of states — I don’t know if there’s a proof in that. This is just one property.
It's one property but it may be a sufficient property to locate the theory, too. If you focus on perturbative string theory, it seems that all of its vacua may be described in terms of a world sheet CFT. A consistent perturbative string theory (which is classified as a "vacuum or solution of string theory" nonperturbatively, but let's think that they're different theories) is given by a two-dimensional conformal field theory that obeys modular invariance and perhaps a finite number of extra conditions, and that's it.

If you get the stringy density of states, you may see that the density grows more quickly with energy than the density in point-like particle field theories. So you may determine that these "particles" have to have infinitely many internal degrees of freedom. If you assume that these are organized as fields in a world volume, the parameters of the string-like density are enough to determine that there is 1 internal spatial dimension inside the particles – they have to be strings.
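
To make that statement about growth rates slightly more explicit (schematically, with \(a\) and \(c\) as placeholder constants and \(\beta_H\) the inverse Hagedorn temperature; the precise coefficients depend on the theory): a string-like spectrum has a density of states growing like
\[ \rho_{\rm string}(E) \sim E^{-a}\, e^{\beta_H E}, \]
i.e. exponentially in the energy itself, whereas a local field theory with finitely many point-like fields (in finite volume, in \(d\) spacetime dimensions) only manages
\[ \rho_{\rm QFT}(E) \sim e^{\,c\, E^{(d-1)/d}}, \]
an exponential of a fractional power of the energy. Roughly speaking, the way the exponent scales with \(E\) is what encodes how many internal dimensions the "particles" have; a linear exponent is the string-like case.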

Once you know that the particles are strings, you may be able to determine the other defining properties of string theory (such as modular invariance) by a careful analysis of other consistency conditions. Again, I can't immediately show you all of these proofs in a rigorous way but I am pretty much confident that the statements are true, at least morally true. Again, I am able to much more easily prove that sufficiently "particular" or "[constructively] well-defined" alternative theories or deformations of string theory that no longer obey the strict rules of string theory will be inconsistent. They will either violate the conditions determined in the recent papers, or some older conditions known to be essential for the consistency since the 1970s, 1980s, or 1990s.

In the previous paragraph, an argument of mine only talked about the elimination of sufficiently "particular" or "[constructively] well-defined" theories. What about some hypothetical vague theories that are not clearly well-defined, at least not by a constructive set of definitions? What about some "exceptional" solutions to the string-theoretical or quantum-gravitational "bootstrap" conditions?

Indeed, for those hypothetical theories, I am much less certain that they're impossible. It would be very interesting – even for most string theorists – to know some of these "seemingly non-stringy" theories that manage to obey the consistency conditions as well. However, for these "not really well-defined" theories, it is also much less easy to argue that they are not string theory. They could be unusual solutions of string theory. And their belonging to string theory could depend on your exact definition of string theory.

For example, pure \(AdS_3\) gravity at the minimum radius was shown (by Witten) to include a hidden monster group symmetry. Is that theory a part of string theory? I think it is, even though I don't know what the right way is to get it as a compactification of a critical, \(d=10\) or \(d=26\) string theory (or whether it's right to demand this kind of construction at all). But I actually do think that such a compactification (perhaps bosonic M-theory on the Leech 24-torus) is a possible way to define it. Even if it is not, there may be reasons to call the theory "a part of string/M-theory". The AdS/CFT correspondence works for this AdS vacuum much like it does for many "typical" stringy AdS spaces.

But you may see that there's some uncertainty here. On the other hand, I think that there is no ambiguity about the claim that the \(AdS_3\) gravity with the monster group is not a solution to loop quantum gravity. ;-) (I have actually spent many, many hours trying to connect these sets of ideas as tightly as possible, but that's a story for another blog post.)



Again, I would like to stress that this whole line of research is powerful primarily because it may "immediately" eliminate some (huge) classes of possible alternative theories that are actually hopeless for reasons that used to be overlooked. If you can eliminate a theory by showing it's inconsistent, you simply don't need any real experiments! This is a trivial point that all those anti-string crackpots seem to completely misunderstand or just dishonestly deny.

It's like considering competing theories that also have their ideas about the value of \(d=3\times 3^2-1\). Some theories say that the result is \(d=1917\), others prefer \(d=-1/\pi\). And the string haters scream that without experiments, all these theories with different values of \(d\) are equally unlikely. I apologize but they are not. Even without experiments, it is legitimate to only consider theories which say \(d=26\). Sorry for having used bosonic string theory as my example. ;-) All the theories with other values of \(d\) may simply be killed before you even start!

In some sense, I am doing an experiment when I am eliminating all the wrong theories. What's special about this experiment is that the number of gadgets I have to buy and arrange, and the number of physical moves of my arms that I have to make, is zero. It's still an experiment – the simplest one, which requires nothing aside from mathematics to be done. But it is totally enough to eliminate all known alternatives to string theory (at least in the classes of vacua described by the papers), and the people who don't understand why this reasoning is perfectly kosher and perfectly scientific are just hopeless imbeciles.

And that's the memo.

by Luboš Motl (noreply@blogger.com) at February 19, 2015 07:27 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Four
Today was dedicated to topics and issues related to finite temperature and density. The first speaker of the morning was Prasad Hegde, who talked about the QCD phase diagram. While the general shape of the Columbia plot seems to be fairly well-established, there is now a lot of controversy over the details. For example, the two-flavour chiral limit seems to be well-described by either the O(4) or O(2) universality class (though it isn't currently possible to exclude that it might be Z(2)), and while the three-flavour transition appears to be Z(2), simulations with staggered and Wilson quarks give disagreeing results for its features. Another topic that gets a lot of attention is the question of U(1)A restoration; of course, U(1)A is broken by the axial anomaly, which arises from the path integral measure and is present at all temperatures, so it cannot be expected to be restored in the same sense that chiral symmetry is, but it might be that as the temperature gets larger, the influence of the anomaly on the Dirac eigenvalue spectrum gets outvoted by the temporal boundary conditions, so that the symmetry violation might disappear from the correlation functions of interest. However, numerical studies using domain-wall fermions suggest that this is not the case. Finally, the equation of state can be obtained from stout or HISQ smearing with very similar results, and appears well-described by a hadron resonance gas at low T and to match reasonably well to perturbation theory at high T.

The next speaker was Saumen Datta speaking on studies of the QCD plasma using lattice correlators. While the short time extent of finite-temperature lattices makes it hard to say much about the spectrum without the use of techniques such as the Maximum Entropy Method, correlators in the spatial directions can be readily used to obtain screening masses. Studies of the spectral function of bottomonium in the Fermilab formalism suggest that the Y(1S) survives up to at least twice the critical temperature.

Sorendu Gupta spoke next about the equation of state in dense QCD. Using the Taylor expansion method (apparently first invented in the 14th-15th century by the Indian mathematician Madhava) together with Padé approximants to reconstruct the function from the truncated series, it is found that the statistical errors on the reconstruction blow up as one nears the suspected critical point. This can be understood as a specific instance of the "no-free-lunch theorem", because a direct simulation (were it possible) would suffer from critical slowing down as the critical point is approached, which would likewise lead to large statistical errors from a fixed number of configurations.
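
As a toy illustration of the Taylor-plus-Padé idea (completely unrelated to the actual lattice data: a test function with a pole plays the role of a critical point, and it is reconstructed from a handful of Taylor coefficients; all names and numbers here are made up for the illustration):

```python
import math
import numpy as np

def pade(c, m, n):
    """Build the [m/n] Pade approximant P(x)/Q(x), Q(0)=1, from Taylor
    coefficients c[k] of sum_k c[k] x^k (needs len(c) >= m+n+1)."""
    c = np.asarray(c, dtype=float)
    # Denominator coefficients b_1..b_n from the linear system
    # sum_{j=1..n} c[m+k-j] b_j = -c[m+k],  k = 1..n.
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator coefficients a_k = sum_j b_j c[k-j].
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b                       # polynomial coefficients, ascending powers

# Test function f(x) = exp(x)/(1-x): a smooth factor times a pole at x = 1.
taylor = [sum(1.0 / math.factorial(i) for i in range(k + 1)) for k in range(6)]
a, b = pade(taylor, 2, 2)

x = 0.9                               # close to the "critical point"
truncated = sum(ck * x**k for k, ck in enumerate(taylor))
pade_value = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
exact = math.exp(x) / (1.0 - x)
print(truncated, pade_value, exact)   # the Pade tracks the pole far better
```

The lattice problem is of course much harder, because the Taylor coefficients themselves carry statistical errors that the Padé reconstruction amplifies near the singularity, which is precisely the blow-up described above.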

The last talk before lunch was an investigation of an alternative formulation of pure gauge theory using auxiliary bosonic fields, in an attempt to render the QCD action amenable to a dual description that might allow one to avoid the sign problem at finite baryon chemical potential. The alternative formulation appears to describe exactly the same physics as the standard Wilson gauge action at least for SU(2) in 3D, and in 2D and/or in certain limits its continuum limit is in fact known to be Yang-Mills theory. However, when fermions are introduced, the dual formulation still suffers from a sign problem, but it is hoped that any trick that might avoid this sign problem would then also avoid the finite-μ one.

After lunch, there were two non-lattice talks. The first one was given by Gautam Mandal, who spoke about thermalisation in integrable models and conformal field theories. In CFTs, it can be shown that for certain initial states, the expectation value of an operator equilibrates to a certain "thermal" expectation value, and a generalisation to integrable models, where the "thermal" density operator includes chemical potentials for all (infinitely many) conserved charges, can also be given.

The last talk of the day was a very lively presentation of the fluid-gravity correspondence by Shiraz Minwalla, who described how gravity in Anti-deSitter space asymptotically goes over to Navier-Stokes hydrodynamics in some sense.

In the evening, the conference banquet took place on the roof terrace of a very nice restaurant serving very good European-inspired cuisine and Indian red wine (also rather nice -- apparently the art of winemaking has recently been adapted to the Indian climate, e.g. the growing season is during the cool season, and this seems to work quite well).

by Georg v. Hippel (noreply@blogger.com) at February 19, 2015 06:32 PM

John Baez - Azimuth

Scholz’s Star

100,000 years ago, some of my ancestors came out of Africa and arrived in the Middle East. 50,000 years ago, some of them reached Asia. But between those dates, about 70,000 years ago, two stars passed through the outer reaches of the Solar System, where icy comets float in dark space!

One was a tiny red dwarf called Scholz’s star. It’s only 90 times as heavy as Jupiter. Right now it’s 20 light years from us, so faint that it was only discovered in 2013, by Ralf-Dieter Scholz—an expert on nearby stars, high-velocity stars, and dwarf stars.

The other was a brown dwarf: a star so small that it doesn’t produce energy by fusion. This one is only 65 times the mass of Jupiter, and it orbits its companion at a distance of 80 AU.

(An AU, or astronomical unit, is the distance between the Earth and the Sun.)

A team of scientists has just computed that while some of my ancestors were making their way to Asia, these stars passed about 0.8 light years from our Sun. That’s not very close. But it’s close enough to penetrate the large cloud of comets surrounding the Sun: the Oort cloud.

They say this event didn’t affect the comets very much. But if it shook some comets loose from the Oort cloud, they would take about 2 million years to get here! So, they won’t arrive for a long time.
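
If you enjoy checking such numbers, here is a quick back-of-the-envelope sketch (assuming a comet starts essentially at rest at the flyby distance and falls Sunward on a Keplerian orbit, and taking a commonly quoted ~100,000 AU for the outer edge of the Oort cloud; the real dynamics of perturbed comets is far messier):

```python
AU_PER_LIGHT_YEAR = 63_241           # one light year in astronomical units

# The flyby distance in AU: comfortably inside an Oort cloud that extends
# out to roughly 100,000 AU.
flyby_au = 0.8 * AU_PER_LIGHT_YEAR
print(f"0.8 light years ~ {flyby_au:,.0f} AU")

# Kepler's third law in solar units: P[years] = a[AU]**1.5.
# A comet dropped from rest at distance r falls along half of a degenerate
# orbit with semi-major axis a = r/2, so the infall time is P/2.
a = flyby_au / 2.0
infall_time_yr = 0.5 * a**1.5
print(f"Infall time ~ {infall_time_yr / 1e6:.1f} million years")
```

which indeed comes out at about 2 million years.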

At its closest approach, Scholz’s star would have had an apparent magnitude of about 11.4. This is a bit too faint to see, even with binoculars. So, don’t look for it in myths and legends!

As usual, the paper that made this discovery is expensive in journals but free on the arXiv:

• Eric E. Mamajek, Scott A. Barenfeld, Valentin D. Ivanov, Alexei Y. Kniazev, Petri Vaisanen, Yuri Beletsky, Henri M. J. Boffin, The closest known flyby of a star to the Solar System.

It must be tough being a scientist named ‘Boffin’, especially in England! Here’s a nice account of how the discovery was made:

• University of Rochester, A close call of 0.8 light years, 16 February 2015.

The brown dwarf companion to Scholz’s star is a ‘class T’ star. What does that mean? It’s pretty interesting. Let’s look at an example just 7 light years from Earth!

Brown dwarfs

 

Thanks to some great new telescopes, astronomers have been learning about weather on brown dwarfs! It may look like this artist’s picture. (It may not.)

Luhman 16 is a pair of brown dwarfs orbiting each other just 7 light years from us. The smaller one, Luhman 16B, is half covered by huge clouds. These clouds are hot—1200 °C—so they’re probably made of sand, iron or salts. Some of them have been seen to disappear! Why? Maybe ‘rain’ is carrying this stuff further down into the star, where it melts.

So, we’re learning more about something cool: the ‘L/T transition’.

Brown dwarfs can’t fuse ordinary hydrogen, but a lot of them fuse the isotope of hydrogen called deuterium that people use in H-bombs—at least until this runs out. The atmosphere of a hot brown dwarf is similar to that of a sunspot: it contains molecular hydrogen, carbon monoxide and water vapor. This is called a class M brown dwarf.

But as they run out of fuel, they cool down. The cooler class L brown dwarfs have clouds! But the even cooler class T brown dwarfs do not. Why not?

This is the mystery we may be starting to understand: the clouds may rain down, with material moving deeper into the star! Luhman 16B is right near the L/T transition, and we seem to be watching how the clouds can disappear as a brown dwarf cools. (Its larger companion, Luhman 16A, is firmly in class L.)

Finally, as brown dwarfs cool below 300 °C, astronomers expect that ice clouds start to form: first water ice, and eventually ammonia ice. These are the class Y brown dwarfs. Wouldn’t that be neat to see? A star with icy clouds!

Could there be life on some of these stars?

Caroline Morley regularly blogs about astronomy. If you want to know more about weather on Luhman 16B, try this:

• Caroline Morley, Swirling, patchy clouds on a teenage brown dwarf, 28 February 2012.

She doesn’t like how people call brown dwarfs “failed stars”. I agree! It’s like calling a horse a “failed giraffe”.

For more, try:

Brown dwarfs, Scholarpedia.


by John Baez at February 19, 2015 05:26 PM

Sean Carroll - Preposterous Universe

The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics

Longtime readers know that I’ve made a bit of an effort to help people understand, and perhaps even grow to respect, the Everett or Many-Worlds Interpretation of Quantum Mechanics (MWI). I’ve even written papers about it. It’s a controversial idea and far from firmly established, but it’s a serious one, and deserves serious discussion.

Which is why I become sad when people continue to misunderstand it. And even sadder when they misunderstand it for what are — let’s face it — obviously wrong reasons. The particular objection I’m thinking of is:

MWI is not a good theory because it’s not testable.

It has appeared recently in this article by Philip Ball — an essay whose snidely aggressive tone is matched only by the consistency with which it is off-base. Worst of all, the piece actually quotes me, explaining why the objection is wrong. So clearly I am either being too obscure, or too polite.

I suspect that almost everyone who makes this objection doesn’t understand MWI at all. This is me trying to be generous, because that’s the only reason I can think of why one would make it. In particular, if you were under the impression that MWI postulated a huge number of unobservable worlds, then you would be perfectly in your rights to make that objection. So I have to think that the objectors actually are under that impression.

An impression that is completely incorrect. The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”)

Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:

  1. The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
  2. The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
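
(For concreteness, the second postulate is nothing more exotic than the usual Schrödinger equation, \( i\hbar\, \partial_t |\psi\rangle = \hat{H}\, |\psi\rangle \), for whatever Hamiltonian \(\hat{H}\) describes the system.)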

That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.

Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.

Ah, but the MWI-naysayers say (as Ball actually does say), but every version of quantum mechanics has those two postulates or something like them, so testing them doesn’t really test MWI. So what? If you have a different version of QM (perhaps what Ted Bunn has called a “disappearing-world” interpretation), it must somehow differ from MWI, presumably by either changing the above postulates or adding to them. And in that case, if your theory is well-posed, we can very readily test those proposed changes. In a dynamical-collapse theory, for example, the wave function does not simply evolve according to the Schrödinger equation; it occasionally collapses (duh) in a nonlinear and possibly stochastic fashion. And we can absolutely look for experimental signatures of that deviation, thereby testing the relative adequacy of MWI vs. your collapse theory. Likewise in hidden-variable theories, one could actually experimentally determine the existence of the new variables. Now, it’s true, any such competitor to MWI probably has a limit in which the deviations are very hard to discern — it had better, because so far every experiment is completely compatible with the above two axioms. But that’s hardly the MWI’s fault; just the opposite.

The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics. Hilbert space is big, regardless of one’s personal feelings on the matter.

Which saddens me, as an MWI proponent, because I am very quick to admit that there are potentially quite good objections to MWI, and I would much rather spend my time discussing those, rather than the silly ones. Despite my efforts and those of others, it’s certainly possible that we don’t have the right understanding of probability in the theory, or why it’s a theory of probability at all. Similarly, despite the efforts of Zurek and others, we don’t have an absolutely airtight understanding of why we see apparent collapses into certain states and not others. Heck, you might be unconvinced that the above postulates really do lead to the existence of distinct worlds, despite the standard decoherence analysis; that would be great, I’d love to see the argument, it might lead to a productive scientific conversation. Should we be worried that decoherence is only an approximate process? How do we pick out quasi-classical realms and histories? Do we, in fact, need a bit more structure than the bare-bones axioms listed above, perhaps something that picks out a preferred set of observables?

All good questions to talk about! Maybe someday the public discourse about MWI will catch up with the discussion that experts have among themselves, evolve past self-congratulatory sneering about all those unobservable worlds, and share in the real pleasure of talking about the issues that matter.

by Sean Carroll at February 19, 2015 04:59 PM

Clifford V. Johnson - Asymptotia

Ahead of Myself…
(In which I talk about script work on the graphic book, and a useful writer's tool for you writers out there of all kinds.) I've been easing my brain back into thinking regularly about the book project and getting momentum on it again. [As you recall, I've been distracted by family things, and before that, focussed on finding a publisher for it.] The momentum part is not easy because... newborn. (I've been saying that a lot: "because ...newborn." I am tempted to make a (drool-covered) t-shirt with that as a slogan, but the trouble with that idea is that I do not wear t-shirts with things written on them if I can help it. Uh-huh, I'm weird.) My plan is to finish writing the scripts for the book, including storyboarding/thumbnailing the whole thing out to get the page designs right. In essence, flesh out the book with enough of the main stuff of it so that I can then work on tinkering with structure, etc. This involves not just moving words around as you would a prose book, but planning how the words work on the page in concert with the drawings. I've often done this by just scribbling in a notebook, but ultimately one wants to be able to have everything in a form one can refer to easily, revise, cut and paste, etc. That's where this marvellous tool called a computer comes in. A lot of writers in comics use the same sorts of software that is used for plays or screenplays (Final Draft and the like). People have even written comics script templates for such programs. They allow for page descriptions, panel descriptions, etc. (At this point I should acknowledge that the typical reader probably did not know that comics and graphic books had scripts. Well, they do. There's a lot more to say about that, but I won't do that here. Google it.) Over the years I've been slowly putting my scribblings into a piece of software [...] Click to continue reading this post

by Clifford at February 19, 2015 04:37 PM

Symmetrybreaking - Fermilab/SLAC

Physics for the people

Citizen scientists dive into particle physics and astrophysics research.

Citizen science, scientific work done by the general public, is having a moment.

In June 2014, the term “citizen science” was added to the Oxford English Dictionary. This month, the American Association for the Advancement of Science—one of the world’s largest general scientific societies—dedicated several sessions at its annual meeting to the topic. A two-day preconference organized by the year-old Citizen Science Association attracted an estimated 700 participants.

Citizen scientists interested in taking part in particle physics research have few options at the moment, but they may have a new opportunity on the horizon with the Large Synoptic Survey Telescope.

Hunting the Higgs

Citizen science projects have helped researchers predict the structure of proteins, transcribe letters from Albert Einstein, and monitor populations of bees and invasive crabs. The citizen science portal “Zooniverse,” launched in 2007, has attracted 1.3 million users from around the world. According to a report by Oxford University astronomer Brooke Simmons, the first Zooniverse project, “Galaxy Zoo,” has so far published 57 scientific papers with the help of citizen scientists.

Of the 27 projects on the Zooniverse portal, just one allows volunteers to help with the analysis of real data from a particle physics experiment. “Higgs Hunters,” launched in November 2014, invites citizen scientists to help physicists find evidence of strange particle behavior in images of collisions from the Large Hadron Collider.

When protons collide in the LHC, their energy transfers briefly into matter, forming different types of particles, which then decay into less massive particles and eventually dissipate back into energy. Some particle collisions create Higgs bosons, particles discovered in 2012 at the LHC.

“We don’t yet know much about how the Higgs boson decays,” says particle physicist Alan Barr at Oxford University in the UK, one of the leads of the Higgs Hunters project. “One hypothesis is that the Higgs decays into new, lighter Higgs particles, which would travel some distance from the center of our detector where LHC’s protons collide. We wouldn’t see these new particles until they decayed themselves into known particles, generating tracks that emerge ‘out of thin air,’ away from the center.”

So far, almost 5,000 volunteers have participated in the Higgs Hunters project. Over the past three months, they have classified 600,000 particle tracks.

Why turn to citizen science for this task?

“It turns out that our current algorithms aren’t trained well enough to identify the tracks we’re interested in,” Barr says. “The human eye can do much better. We hope that we can use the information from our volunteers to train our algorithms and make them better for the second run of LHC.”

Humans are also good at finding problems an algorithm might miss. Many participants flagged as “weird” an image showing what looked like a shower of particles called muons passing through the detector, Barr says. “When we looked at it in more detail, it turned out that it was a very rare detector artifact, falsely identified as a real event by the algorithms.”

Volunteers interested in Higgs Hunters have only a couple of months left to participate. Barr estimates that by April, the project will have collected enough data for researchers to proceed with an in-depth analysis.

Distortions in space

Armchair astrophysicists can find their own project in the Zooniverse. “SpaceWarps” asks volunteers to look for distortions in images of faraway galaxies—evidence of gravitational lensing.

Gravitational lensing occurs when the gravitational force of massive galaxies or galaxy clusters bends the space around them so that light rays traveling near them follow curved paths.

Einstein predicted this effect in his Theory of General Relativity. You can see an approximation of it by looking at a light through the bottom of a wine glass. Gravitational lensing is used to determine distances in the universe—key information in measuring the expansion of the universe and understanding dark energy.
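
For a sense of the scale involved, the standard general-relativistic deflection formula, alpha = 4GM/(c^2 b) for a light ray passing a mass M with impact parameter b, already reproduces the famous 1.75 arcseconds for light grazing the Sun. A quick check (not part of the SpaceWarps analysis, just the textbook number):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m (a ray grazing the Sun's limb)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"Deflection at the solar limb: {alpha_arcsec:.2f} arcseconds")   # ~1.75
```

Galaxy-scale lenses typically deflect light by a similar few-arcsecond amount; it is the enormous distances involved that turn this into the visible arcs and multiple images that SpaceWarps volunteers look for.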

Recognizing gravitational lensing is a difficult task for a computer program, but a relatively easy one for a human, says Phil Marshall, a scientist at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University and SLAC National Accelerator Laboratory.

Marshall, one of three principal investigators for SpaceWarps, says he sees a lot of potential in the interface between humans and machines. “They both have different skills that complement each other.”

According to the SpaceWarps website, more than 51,000 volunteers have made more than 8 million classifications to date and have discovered dozens of candidates for gravitational lenses that were not detected by algorithms. The project is currently adding new data for people to analyze.

The Large Synoptic Survey Telescope

Citizen science may become particularly important for another project Marshall is interested in: the Large Synoptic Survey Telescope, to be built on a mountaintop in Chile.

Technicians recently completed a giant double mirror for the project, and its groundbreaking will take place this spring. Beginning in 2022, LSST will take a complete image of the entire southern sky every few nights. It is scheduled to run for a decade, collecting 6 million gigabytes of data each year. The information collected may help scientists unravel cosmic mysteries such as dark matter and dark energy.

“Nobody really knows what citizen science will look like for LSST,” Marshall says. “However, a good approach would be to make use of the fact that humans are very good at understanding confusing things. They could help us inspect images for odd features, potentially spotting new things or pointing out problems with the data.”

Citizen scientists could also help with the LSST budget.

Henry Sauermann at the Georgia Institute of Technology and Chiara Franzoni at the Politecnico di Milano in Italy recently studied seven Zooniverse projects started in 2010. They estimated that the efforts of unpaid volunteers over just the first 180 days were worth $1.5 million.
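As a rough illustration of how such a dollar figure can be assembled, here is a minimal sketch with made-up inputs; it is not Sauermann and Franzoni's methodology or data, just one plausible way to price volunteer time.

```python
# Illustrative sketch of pricing volunteer effort: classifications x time x wage.
# The inputs are made up for illustration, not taken from the study.
def volunteer_value(n_classifications, minutes_per_classification, hourly_wage):
    hours = n_classifications * minutes_per_classification / 60.0
    return hours * hourly_wage

# e.g. 5 million classifications at ~1 minute each, priced at $18/hour
print(f"${volunteer_value(5_000_000, 1.0, 18.0):,.0f}")  # $1,500,000
```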

But the value of citizen science to LSST may depend on whether it can attract a dedicated group of amateur researchers.

Sauermann and Franzoni’s study showed that 10 percent of contributors to the citizen science projects they studied completed an average of almost 80 percent of all of the work.

“We also see that with SpaceWarps,” Marshall says. “Most Internet users have a very short attention span.”

It’s all about how well the researchers design the project, he says.

“It must be easy to get started and, at the same time, empower the participant enough to make serious contributions to science,” Marshall says. “It’s on us to provide volunteers with interesting things to do.”

 

Editor's note: Reader Richard Mitnick points out that there are additional ways volunteers can contribute to particle physics. You can contribute computing power through the long-running lhc@home project and, if you have the expertise, you can also analyze data provided through CERN's Open Data portal.

 


by Manuel Gnida and Kathryn Jepsen at February 19, 2015 02:00 PM

February 18, 2015

John Baez - Azimuth

Higher-Dimensional Rewriting in Warsaw

This summer there will be a conference on higher-dimensional algebra and rewrite rules in Warsaw. They want people to submit papers! I’ll give a talk about presentations of symmetric monoidal categories that arise in electrical engineering and control theory. This is part of the network theory program, which we talk about so often here on Azimuth.

There should also be interesting talks about combinatorial algebra, homotopical aspects of rewriting theory, and more:

Higher-Dimensional Rewriting and Applications, 28-29 June 2015, Warsaw, Poland. Co-located with the RDP, RTA and TLCA conferences. Organized by Yves Guiraud, Philippe Malbos and Samuel Mimram.

Description

Over recent years, rewriting methods have been generalized from strings and terms to richer algebraic structures such as operads, monoidal categories, and more generally higher dimensional categories. These extensions of rewriting fit in the general scope of higher-dimensional rewriting theory, which has emerged as a unifying algebraic framework. This approach allows one to perform homotopical and homological analysis of rewriting systems (Squier theory). It also provides new computational methods in combinatorial algebra (Artin-Tits monoids, Coxeter and Garside structures), in homotopical and homological algebra (construction of cofibrant replacements, Koszulness property). The workshop is open to all topics concerning higher-dimensional generalizations and applications of rewriting theory, including

• higher-dimensional rewriting: polygraphs / computads, higher-dimensional generalizations of string/term/graph rewriting systems, etc.

• homotopical invariants of rewriting systems: homotopical and homological finiteness properties, Squier theory, algebraic Morse theory, coherence results in algebra and higher-dimensional category theory, etc.

• linear rewriting: presentations and resolutions of algebras and operads, Gröbner bases and generalizations, homotopy and homology of algebras and operads, Koszul duality theory, etc.

• applications of higher-dimensional and linear rewriting and their interactions with other fields: calculi for quantum computations, algebraic lambda-calculi, proof nets, topological models for concurrency, homotopy type theory, combinatorial group theory, etc.

• implementations: the workshop will also be interested in implementation issues in higher-dimensional rewriting and will allow demonstrations of prototypes of existing and new tools in higher-dimensional rewriting.

Submitting

Important dates:

• Submission: April 15, 2015

• Notification: May 6, 2015

• Final version: May 20, 2015

• Conference: 28-29 June, 2015

Submissions should consist of an extended abstract, approximately 5 pages long, in standard article format, in PDF. The page for uploading those is here. The accepted extended abstracts will be made available electronically before the workshop.

Organizers

Program committee:

• Vladimir Dotsenko (Trinity College, Dublin)

• Yves Guiraud (INRIA / Université Paris 7)

• Jean-Pierre Jouannaud (École Polytechnique)

• Philippe Malbos (Université Claude Bernard Lyon 1)

• Paul-André Melliès (Université Paris 7)

• Samuel Mimram (École Polytechnique)

• Tim Porter (University of Wales, Bangor)

• Femke van Raamsdonk (VU University, Amsterdam)


by John Baez at February 18, 2015 09:51 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Three
Today's first talk was given by Rainer Sommer, who presented two effective field theories for heavy quarks. The first one was non-perturbatively matched HQET, which has been the subject of a long-running effort by the ALPHA collaboration. This programme is now reaping its first dividends in the form of very reliable, fully non-perturbative results for B physics observables. Currently, the form factors for B->πlν decays, which are very important for determining the CKM matrix element Vub (at present subject to some significant tension between inclusive and exclusive determinations), are in the final stages of analysis. The other effective theory was QCD with Nf<6 flavours -- which is of course technically an effective theory where the heavy quarks have been integrated out! Rainer presented a new factorisation formula that relates the mass of a light hadron in the theory with a heavy quark to that of the same hadron in a theory in which the heavy quark is massless by a factor dependent on the hadron and a universal perturbative factor. The factorisation formula has been tested for gluonic observables in the pure gauge theory matched to the two-flavour theory.

After tea, we had a session focussed on algorithms and machines. The first speaker was Andreas Frommer, who spoke about multigrid solvers for the Dirac equation in lattice QCD. A multigrid solver consists of a smoother and a coarse-grid correction. For the smoother for the Dirac equation, the Schwarz Alternating Procedure (SAP) is a natural choice, whereas for the coarse-grid correction, aggregate-based interpolation (essentially the same idea as Lüscher-style inexact deflation) can be used. The resulting multigrid algorithm is very similar to the domain-decomposed algorithm used in the DD-HMC and openQCD codes, but generalises to more than two levels, which may lead to better performance. Applications to the overlap operator were presented.
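For readers who have not met the idea, here is a schematic two-grid cycle showing the smoother-plus-coarse-grid-correction structure on a toy 1-D Laplacian. It is only a structural sketch: damped Jacobi stands in for the SAP smoother, and a fixed piecewise-constant interpolation stands in for the aggregation built from the Dirac operator's near-null space.

```python
# Schematic two-grid cycle on a 1-D Laplacian: smoother + coarse-grid correction.
# A structural sketch only, not the SAP-smoothed, aggregation-based multigrid
# used for the lattice Dirac operator.
import numpy as np

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stand-in fine-grid operator

# interpolation P: piecewise-constant aggregates of two fine points each
P = np.zeros((n, n // 2))
for j in range(n // 2):
    P[2 * j, j] = P[2 * j + 1, j] = 1.0
A_coarse = P.T @ A @ P                                   # Galerkin coarse operator

def smooth(x, b, sweeps=3, omega=0.6):
    """Damped Jacobi sweeps playing the role of the smoother."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid_cycle(x, b):
    x = smooth(x, b)                            # pre-smoothing
    r = b - A @ x                               # fine-grid residual
    e = np.linalg.solve(A_coarse, P.T @ r)      # coarse-grid correction
    x = x + P @ e                               # interpolate correction back
    return smooth(x, b)                         # post-smoothing

b = np.random.default_rng(0).standard_normal(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid_cycle(x, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```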

Next, Stephan Solbrig presented the QPACE2 project, which aims to build a supercomputer based on Intel Knights Corner (Xeon Phi) cards as processors, where each node consists of four Xeon Phis linked to each other, to a weak host CPU used only for booting, and to an InfiniBand card via a PCIe switch. The whole system uses hot water cooling, building on experience gathered in the iDataCool project. The 512-bit wide registers of the Xeon Phi necessitate several programming tricks such as site fusing to make optimal use of computing resources; the resulting code seems to scale almost perfectly as long as there are sufficient numbers of domains to keep all nodes busy. An interesting side note was that apparently there are extremophile bacteria that thrive in the copper pipes of water-cooled computer clusters.

Pushan Majumdar rounded off the session with a talk about QCD on GPUs. The special programming model of GPUs (small amount of memory per core, restrictions on branching, CPU/GPU data transfer as a bottleneck) makes programming GPUs challenging. The OpenACC compiler standard, which aims to offload the burden of dealing with GPU particulars onto the compiler vendor, may offer a possibility to easily port OpenMP-based code written for CPUs to GPUs, and Pushan showed some worked examples of Fortran 90 OpenMP code adapted for OpenACC.

After lunch, I had to retire to my room for a little while (let me hasten to add that the truly excellent lunch provided by the extremely hospitable TIFR is definitely absolutely blameless in this), and thus missed the afternoon's first two talks, catching only the end of Jyotirmoy Maiti's talk about exploring the spectrum of the pure SU(3) gauge theory using the Wilson flow.

Gunnar Bali closed the day's proceedings with a very nice colloquium talk for a larger scientific audience, summarising the Standard Model and lattice QCD in an accessible manner for non-experts before proceeding to present recent results on the sea quark content and spin structure of the proton.

by Georg v. Hippel (noreply@blogger.com) at February 18, 2015 04:09 PM

ZapperZ - Physics and Physicists

2015 International Year of Light
We had the 2005 International Year of Physics and the 2009 International Year of Astronomy. Now, in 2015, UNESCO has declared that this will be the International Year of Light.

In case you missed it, the APS has made available a series of significant articles and papers related to this topic. Check them out.

Personally, as someone who has performed work at a synchrotron light source and done studies using the photoemission phenomenon, I can truly appreciate "light" beyond just what we normally do every day.

Zz.

by ZapperZ (noreply@blogger.com) at February 18, 2015 01:44 PM

Tommaso Dorigo - Scientificblogging

The Quote Of The Week: Resolving The Mass Hierarchy With A Little Help From A Supernova
"1. Interaction with matter changes the neutrino mixing and effective mass splitting in a way that depends on the mass hierarchy. Consequently, results of oscillations and flavor conversion are different for the two hierarchies.
2. Sensitivity to the mass hierarchy appears whenever the matter effect on the 1-3 mixing and mass splitting becomes substantial. This happens in supernovae in large energy range, and in the matter of the Earth.[...] 
4. Multi-megaton scale under ice (water) atmospheric neutrino detectors with low energy threshold (2-3 GeV) may establish mass hierarchy with (3-10)σ confidence level in few years. [...]

read more

by Tommaso Dorigo at February 18, 2015 10:45 AM

February 17, 2015

Symmetrybreaking - Fermilab/SLAC

10 unusual detector materials

The past century has generated some creative ideas for tracking particles.

Hans had been waiting in the darkened room for 45 minutes. It was a dull part of his day, but acclimating his eyes was a necessary part of his experiment—counting faint sparkles of light caused by alpha particles deflecting off a thin metal foil.

The experiment was part of a series organized by Ernest Rutherford in 1908, and it led to the discovery of the atomic nucleus. Rutherford’s assistant, physicist Hans Geiger, would share credit for the discovery.

Their experiment was particle physics in its infancy.

Studying particle physics requires revealing the smallest bits of matter. This work might involve hurling billions of accelerated particles at a target and watching for the flash of energy that results from the crash. It might involve setting up a detector to wait for particles created in nature to pass through.

Over the years, electronics and mainframe computers have taken over Rutherford and Geiger’s painstaking particle-counting duties. And physicists have used a host of materials other than foil to lure those particles—including hard-to-catch neutrinos—into view.

1. Dry cleaning fluid.

Physicist Ray Davis had either a terrific idea for a particle detector or a tremendous load of laundry. In a few years leading up to 1966, he obtained 600 tons of a common dry cleaning solvent, perchloroethylene, and deposited it nearly a mile beneath the Black Hills of South Dakota in a detector stationed in the Homestake gold mine. He hoped to count solar neutrinos, which trigger a detectable chemical reaction when they pass through this fluid. Davis’ perchloroethylene-filled particle detector was a success, even though he tallied only a third of the neutrinos he was expecting. Revelations that neutrinos change form as they travel were soon to follow.

2. Soviet-era artillery shells.

In the 1940s, the Russian navy armed its vessels with a grade of brass specifically designed to hold its shape under extreme stress and for long periods of time. More than 50 years later, the CMS particle detector under construction at the Large Hadron Collider at CERN required brass with the same high standards. It needed to be able to withstand a bombardment of particles with unflinching consistency over its lifetime. The lab struck a deal with Russian officials to melt down old, unused shells for the CMS hadron calorimeter, a part of the detector that measures the energy of particles produced in collisions in the LHC.

3. 2.5 million gallons of mineral oil.

Fermilab’s 14,000-ton NOvA neutrino detector in northern Minnesota, possibly the largest freestanding plastic structure in the world, is filled with a liquid substance that is 95 percent mineral oil. That single raw material took up 108 rail cars and a barge as it left a refinery in southwest Louisiana for a facility 1000 miles away near Chicago, where it was blended with the remaining ingredients 110,000 gallons at a time. The result was a liquid scintillator, which releases measurable light as a result of collisions between neutrinos and particles in the liquid.

4. Lead bricks wrapped in foil by robots.

The OPERA detector at Gran Sasso National Laboratory catches neutrinos with something a bit more, as they say in Italy, duro—a wall of 150,000 18-pound bricks. The bricks themselves are stacks of lead sheets and radiation-sensitive film, wrapped in reflective aluminum tape and sealed in an airtight container. When neutrinos collide with the lead, they create other particles that streak across the film and leave tracks that can be analyzed after the film is developed. The 11 robots of Gran Sasso’s brick-assembly machine, otherwise known as BAM, cranked out 750 bricks per day, faster and with much less complaining than an army of graduate students.

5. Smartphones. Yep, there’s an app for that.

Actually, there are at least two. A physicist at the University of Wisconsin, Madison, and a director of citizen science at the LA Makerspace are working on one called DECO, an educational app that records speedy cosmic-ray particles that your phone’s camera accidentally detects. Two more physicists, one from University of California, Irvine, and the other from University of California, Davis, are at work on a similar app called CRAYFIS. Their objective: gather enough users to create a functional cosmic ray detector from a massive network of devices.

6. A crystal ball.

No, SLAC National Accelerator Laboratory did not enlist a psychic medium to locate subatomic particles when they built the Crystal Ball detector in the late ’70s. They did, however, arrange more than 600 sodium iodide crystals into a sphere 13 ½ feet around to detect neutral particles at the SPEAR particle collider. The crystals work in similar fashion to the liquid inside the NOvA detector (see No. 3 in this list), converting energy from particle collisions to measurable light. The detector is still in use, currently at Johannes Gutenberg University in Mainz, Germany. Its future, ironically, is uncertain.

7. Antarctica, from below.

When penguins look down, there’s a chance they might discover one of 86 holes drilled more than a mile deep into the Antarctic ice for the IceCube experiment. When turbocharged cosmic neutrinos collide with ice, the resulting particle shrapnel creates a blue flash of light otherwise known as Cherenkov light. Scientists survey the ice sheet for that light with an array of more than 5000 separate, bauble-like detectors strung on wires running down each hole.

8. Antarctica, from above.

Should penguins look up instead, they may spot the Antarctic Impulsive Transient Antenna, or ANITA, floating above them, suspended from a massive scientific balloon. ANITA listens for radio waves emanating from the ice below. The pure, polar ice makes an unbelievably clear medium for the Askaryan effect, discovered only in 2000, in which cosmic neutrinos similar to the ones that produce light for the IceCube experiment generate a signature radio signal. The floating antenna is so sensitive that it can detect a handheld radio up to 400 miles away.

9. A breath of fresh Martian air.

Our descendants may well enjoy a beautiful sunset on Mars—if we can engineer its atmosphere to warm the planet from its current average temperature of about minus 60 degrees Celsius to something more friendly to vacationing humans. For such a project, some researchers have singled out the compound octofluoropropane as the greenhouse gas of choice. In the meantime, researchers on the PICO experiment at underground Canadian laboratory SNOLAB are using octofluoropropane in its liquid state to detect dark matter. If a particle of dark matter can knock one fluorine nucleus hard enough, it will cause the superheated liquid to boil and form a telltale bubble in the chamber.

10. Dry ice, alcohol and a fish tank.

This one you can build yourself. The cloud chamber earned its inventor the 1927 Nobel Prize in physics, and variations of it—including No. 9 on this list—have a long history of use in particle physics labs. But many DIY varieties exist online, too. The gist is usually to create a thick vapor (of alcohol) that is cooled (by dry ice). Be patient, and you’ll catch a passing particle such as a cosmic muon as it bumps into vapor molecules and triggers a cloudy streak of condensation through the chamber (a.k.a. fish tank).

 


by Troy Rummler at February 17, 2015 07:54 PM

Quantum Diaries

Ten unusual detector materials

This article appeared in symmetry on Feb. 17, 2015.

The past century has generated some creative ideas for tracking particles. Image: Sandbox Studio


 

Troy Rummler

by Fermilab at February 17, 2015 06:49 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Two
Today's first session started with a talk by Wolfgang Söldner, who reviewed the new CLS simulations using 2+1 flavours of dynamical fermions with open boundary conditions in the time direction to avoid the freezing of topology at small lattice spacing. Besides the new kind of boundary conditions, these simulations use a number of novel tricks, such as twisted mass reweighting, to make the simulations more stable at light pion masses. First studies of the topology and of the scale setting look promising, and there will likely be some interesting first physics results at the lattice conference in Kobe.

After the tea break, Asit Kumar De talked about lattice gauge theory with equivariant gauge fixing. This is an attempt to evade the Neuberger 0/0 problem with BRST invariance on a lattice by leaving a subgroup of the gauge group unfixed. As a result, one gets four-ghost interactions in the gauge fixed action (this seems to be a general feature of theories trying to extend BRST symmetry; the Curci-Ferrari model for massive gauge fields also has such an interaction).

This was followed by Mughda Sarkar speaking about simulations of the gauge-fixed compact U(1) gauge theory. Apparently, the added parameters of the gauge fixing part appear to allow for changing the nature of the phase transition between strong and weak coupling from first to second order, although I didn't quite understand how that is compatible with the idea of having all gauge-invariant quantities be unaffected by the gauge fixing.

After lunch, we had an excursion to the island of Elephanta, where there are some great temples carved out of the rock. Today was a festival of Shiva, so admission was free (otherwise the price structure is quite interesting: र10 for Indians, र250 for foreigners), and there were many people on the island and in the caves. The site is certainly well worth the visit, although many of the statues have been damaged quite severely in the past.

by Georg v. Hippel (noreply@blogger.com) at February 17, 2015 03:44 PM

Lubos Motl - string vacua and pheno

ATLAS, CMS: small SUSY deviations
Both ATLAS and CMS, the two main detectors at the Large Hadron Collider, published some preprints about the search for SUSY or new SUSY-like Higgs bosons. No formidable deviation from the Standard Model was found. However...

ATLAS was looking for a CP-odd Higgs boson, \(A\), in decays to \(Zh\). It turned out that there is approximately a 2.5-sigma excess for \(m_A=220\GeV\): look in the conclusions. I won't seriously mention the below-2-sigma excess for \(m_A=260\GeV\) at all.




If you search through the whole Internet and all YouTube videos for a prediction of new Higgs bosons near \(220\GeV\), or below \(230\GeV\) ;-), you will be led to this 2012 blog post about a talk by Nima Arkani-Hamed. After 59:00, he talks about some possible looming proofs of naturalness. In his special NMSSM scenario, he says that some data would imply that the new Higgs doublet below \(230\GeV\) is unavoidable.




CMS was employing the MT2 variable to look for superpartners. In the PDF file, find "Figure 12" on page 23 (the number shown on the paper: it's 25 according to PDF viewers).

Among the three exclusion charts over there, the stop-neutralino plot at the bottom indicates that they expected the exclusion of stops up to \(m_{\tilde t}=(700\pm 100)\GeV\) or so but they only got an exclusion \(m_{\tilde t}\lt 500\GeV\) in that lower part of the graph which is about 2 experimental standard deviations below the expectations.

They say that you should ignore this apparent excess altogether because, as they think, it's caused by a downward fluke in the control sample, not an upward fluke in the signal region. I wonder whether they couldn't use more accurate predictions for the background (not necessarily a control sample) to avoid similar explanations that sound like excuses.

Nothing to be enthusiastic about, but 2-sigmaish anomalies keep occurring at a rate that is, so far, comparable to what statistics predicts.
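A quick back-of-the-envelope check of that last sentence: with many independent searches being reported, a steady trickle of local 2-sigma deviations is exactly what pure statistics delivers. The count of 200 searches below is an arbitrary illustrative choice, not a tally of actual ATLAS and CMS analyses.

```python
# Expected number of >= 2 sigma (local, two-sided) flukes among many independent
# results. The number of searches is an arbitrary illustrative assumption.
from scipy.stats import norm

p_two_sided = 2 * norm.sf(2.0)   # probability of a |deviation| >= 2 sigma
n_searches = 200
print(f"P(|z| >= 2) = {p_two_sided:.3f}")
print(f"expected 2-sigma flukes among {n_searches} searches: {p_two_sided * n_searches:.0f}")
```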

It is not guaranteed that the LHC will be enough to discover supersymmetry but there is no crisis for SUSY. If you doubt that this assertion is right while the critics of SUSY who contradict it are wrong, check the first hep-ph paper today which contains these two assertions in the very title ;-), along with the information that supergravity gauge theories strike back.

by Luboš Motl (noreply@blogger.com) at February 17, 2015 09:06 AM

February 16, 2015

Tommaso Dorigo - Scientificblogging

Neutrinos From An Atomic Bomb
Less than three weeks separate us from the XVI Neutrino Telescopes, a very interesting conference held in Venice every two years. The physics of neutrinos is a very special niche in the realm of particle physics, one not devoid of cunning experimental techniques and brilliant theoretical ideas, and one offering possible avenues to discovering new physics. Hence I am quite happy to be attending the event, from where I will also be blogging (hopefully with the help of a few students in Padova). (NB: this article, like others on neutrinos over the next month or so, also appears in the conference blog.)

read more

by Tommaso Dorigo at February 16, 2015 03:47 PM

Lubos Motl - string vacua and pheno

BBC friendly towards gluinos at LHC
After Two Years' Vacation, the Large Hadron Collider will be restarted next month. At least since the discovery of the Higgs boson, most readers of the mainstream media have been overwhelmed by tirades against modern particle physics – especially supersymmetry and similar things. The writers of such stories have often emulated assorted Shwolins and Shmoits in effect, if not in intent.



Gluino vampire alchemist. The doll costs only $690, well below the several billion dollars needed for a chance to see the much smaller gluino at the LHC.

Well, ATLAS' new (deputy) spokeswoman Beate Heinemann of UC Berkeley (Gianotti is superseding Heuer as CERN's director general) made a difference today, and several stories that she inspired at visible places have conveyed the excitement in the particle physics community and the nonzero chance that a bigger discovery than the Higgs boson may be made in 2015 – and perhaps announced at the SUSY-related conferences in August and September.




The three main stories involving her and the gluino that I am talking about are
BBC: Collider hopes for a 'super' restart

NBC: After the Higgs, LHC Rounds Up the Unusual Suspects in Particle Physics

Wall Street Hedge: What’s next for Large Hadron Collider? May be a supersymmetric particle

Other sources via Google News
The titles look more upbeat than the stuff that dominated in recent years, don't they? ;-)




Amos, Boyle, and Baits communicate some of the emotions as well as the detailed expectations about "what the LHC may tell us soon". The official authority of one of the ATLAS spokespersons, combined with his or her politically correct sex and gender, was enough to amplify the message.

So of course supersymmetry remains the most important "class of phenomena" that may occur – but isn't quite obliged to occur – at the LHC during the looming run. Physicists are waiting for it and will be carefully looking for its signs.

Heinemann says that we may be at the threshold – and the discovery of the first superpartner would be comparably important to the discovery of antimatter a century ago. I think that this comparison is more or less fair, although I can think of reasons to believe either that antimatter is more important or that supermatter is more revolutionary.

Equally interestingly, she offered one detailed piece of quasi-information. It's the gluino, the fermionic superpartner of the gauge boson called the gluon (which communicates the strong force between the colored quarks), that could be the first supersymmetric partner particle to be discovered at the LHC.

The question which particle should be the first one – if we don't know the exact model or "version" of supersymmetry incorporated in Nature – is rather complicated and lots of experience may be needed for a qualified opinion. And lots of experience may also be useless because no one really knows. ;-)

But to say the least, I do think that Heinemann correctly reproduces the expectation of top particle phenomenologists. I think that if you asked 20 most cited beyond-the-Standard-Model phenomenologists what is their idea about the most likely first particle that will be observed in the superworld, the largest fraction of them if not the majority would answer "gluino".

If the masses etc. were equal, the gluino would be the easiest particle to produce, largely because very high energy proton-proton collisions are microscopically "mostly" gluon-gluon collisions and the gluino is the easiest superpartner to obtain from the gluons. This is a reason why the gluino could be the first one to be created.

However, the very same argument has a negative side. Because the gluino should be so easy to produce and it hasn't been produced yet, the lower limit on the gluino mass is also higher (more constraining) than the lower limit on other superpartner masses. Except for some loopholes that may exist in some models, the gluino should be heavier than about \(0.9\)-\(1.4\TeV\) (depending especially on the LSP mass) according to results from the previous LHC run. But up to \(1.6\TeV\) or perhaps \(2\TeV\) as the gluino mass, the models still look natural.

Which of the arguments for/against the gluino is stronger? They would be about equally strong if the energy were unchanged. But I think that the higher center-of-mass energy, \(13\TeV\) to start with, makes the dominance of gluon-gluon collisions even more pronounced, and the higher energy brings us into a new realm. These two observations strengthen the first, positive argument and weaken the second, negative argument. So there are reasons to think that the gluino could be the first superpartner to be observed.
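To illustrate (and only to illustrate) how the gluon-gluon luminosity argument scales with energy, here is a toy comparison of the 13 TeV and 8 TeV gluon-gluon parton luminosities for producing a heavy system. The gluon density used is a crude made-up shape, not a fitted PDF, so only the qualitative growth of the ratio with mass should be taken seriously.

```python
# Toy estimate of how the gluon-gluon parton luminosity for producing a heavy
# system grows in going from 8 TeV to 13 TeV. The gluon density is a crude toy
# shape, NOT a fitted PDF; only the qualitative trend is meaningful.
from scipy import integrate

def toy_gluon_pdf(x):
    # toy shape: steeply rising at small x, vanishing as x -> 1
    return x**-1.5 * (1.0 - x)**5

def gg_luminosity(m_hat_gev, sqrt_s_gev):
    """Toy gg luminosity for producing an invariant mass m_hat at energy sqrt(s)."""
    tau = (m_hat_gev / sqrt_s_gev)**2
    integrand = lambda x: toy_gluon_pdf(x) * toy_gluon_pdf(tau / x) / x
    val, _ = integrate.quad(integrand, tau, 1.0)
    return val

for m in (1000.0, 2000.0, 3000.0):
    ratio = gg_luminosity(m, 13000.0) / gg_luminosity(m, 8000.0)
    print(f"m = {m/1000:.0f} TeV: 13 TeV / 8 TeV gg luminosity ratio ~ {ratio:.1f}")
```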



ATLAS' new spokeswoman

Of course, supersymmetry doesn't have to apply to low-energy physics at all. The naturalness problems may be either just self-inflicted injuries or Nature may solve the problem very differently than many people were thinking.

But even if the conventional ideas about naturalness are mostly right, gluino may still be much heavier. I admit that I am quite fond of the growing literature of papers with new models involving Dirac gluinos – effectively gluinos resulting from a theory that allows extended, \(\NNN=2\) supersymmetry for the gauge bosons. These models may be natural even if the gluino mass is something like \(5\TeV\).

One should also appreciate that a light gluino is somewhat less needed for naturalness – for the supersymmetric explanation why the Higgs boson is so light – than other particles. It's especially the stop and the higgsino that should be very light if this explanation of the hierarchy problem is right. But the stop and the higgsino have various ways to hide. The gluino is produced more easily and can't hide too well.

We will see – or we will see nothing. And we will see which part of the previous sentence is more relevant – and maybe we won't see even that. We will see. And so on. ;-)

Stay tuned.

by Luboš Motl (noreply@blogger.com) at February 16, 2015 10:05 AM

February 15, 2015

Clifford V. Johnson - Asymptotia

Coming Along Nicely, Broadly Speaking
Coming soon next to one of my favourite buildings... Probably a new favourite building, the Broad Museum for Contemporary Art. (Click for larger image.) It has been a while since I've been down there during the day (mostly been at Disney Concert Hall (on the right) at nights the last few months, for concerts) and so I was happy to pass by it yesterday on a [...] Click to continue reading this post

by Clifford at February 15, 2015 09:02 PM

Tommaso Dorigo - Scientificblogging

Future Physicists In Belluno
On Friday I traveled to Belluno, a town just south of the north-eastern Italian Alps, to give a lecture on particle physics to high-school students for the "International Masterclasses". This was the umpteenth time that I gave more or less the same talk in the last decade or so; but it's not my fault, as particle physics has changed very little in the meantime. Yes, we discovered the Higgs boson, and yes, we excluded many possible extensions of the standard model. But the one-line summary remains the same: we continue to seek, but are not quite sure we'll find, a hint of what lies beyond.

read more

by Tommaso Dorigo at February 15, 2015 11:26 AM