Particle Physics Planet


January 17, 2017

Christian P. Robert - xi'an's og

Bayesian methods in cosmology

A rather massive document was arXived a few days ago by Roberto Trotta on Bayesian methods for cosmology, in conjunction with an earlier winter school, the 44th Saas Fee Advanced Course on Astronomy and Astrophysics, “Cosmology with wide-field surveys”. While I never had the opportunity to give a winter school in Saas Fee, I will give next month a course on ABC to statistics graduates in another Swiss dream location, Les Diablerets.  And next Fall a course on ABC again but to astronomers and cosmologists, in Autrans, near Grenoble.

The course document is an 80-page introduction to probability and statistics, in particular Bayesian inference and Bayesian model choice. Including exercises and references. As such, it is rather standard in that the material could be found as well in textbooks. Statistics textbooks.

When introducing the Bayesian perspective, Roberto Trotta advances several arguments in favour of this approach. The first one is that it is generally easier to follow a Bayesian approach than to seek a non-Bayesian one that recovers long-term frequentist properties. (Although there are inconsistent Bayesian settings.) The second one is that Bayesian modelling handles nuisance parameters naturally, because there are essentially no nuisance parameters. (Even though preventing small world modelling may lead to difficulties as in the Robbins-Wasserman paradox.) The following two reasons are the incorporation of prior information and the appeal of conditioning on the actual data.

The document also includes a nice illustration of the concentration of measure as the dimension of the parameter increases. (Although one should not over-interpret it. The concentration does not occur in the same way for a normal distribution for instance.) It further spends quite some space on the Bayes factor, its scaling as a natural Occam’s razor, and the comparison with p-values, before (unsurprisingly) introducing nested sampling. And the Savage-Dickey ratio. The conclusion of this model choice section proposes some open problems, with a rather unorthodox—in the Bayesian sense—line on the justification of priors and the notion of a “correct” prior (yeech!), plus a musing about adopting a loss function, with which I quite agree.
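For readers who have not met this phenomenon before, here is a minimal numerical sketch (in Python, illustrative only and not a reproduction of the figure in the document): the mass of a uniform distribution on a ball piles up near the boundary as the dimension grows, whereas the norm of a standard normal vector concentrates around √d instead, which is the caveat mentioned above.

```python
# Minimal sketch (not the document's figure): two generic concentration effects
# as the dimension grows. Assumes only numpy; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

for d in (1, 2, 10, 50, 100):
    # Uniform on the unit d-ball: the fraction of mass within radius 0.9
    # is exactly 0.9**d, so the mass piles up near the boundary.
    inner_fraction = 0.9 ** d

    # Standard d-dimensional normal: the norm concentrates around sqrt(d),
    # a different behaviour from the uniform-ball picture.
    norms = np.linalg.norm(rng.standard_normal((100_000, d)), axis=1)
    print(f"d={d:3d}  ball mass within 0.9R: {inner_fraction:.3f}  "
          f"normal norm / sqrt(d): {norms.mean() / np.sqrt(d):.3f}")
```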


Filed under: Statistics Tagged: ABC, astrostatistics, Autrans, Bayesian Methods in Cosmology, Grenoble, Les Diablerets, nested sampling, Ockham's razor, Pierre Simon de Laplace, Roberto Trotta, Saas Fee, Switzerland, Vercors

by xi'an at January 17, 2017 11:17 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A new year, a new semester

I always enjoy the start of the second semester. There’s usually a great atmosphere around the college – after long weeks of quiet, it’s great to see the students back and all the restaurants, shops and canteens back open. The students themselves always seem to be in good form too. I suspect it’s the prospect of starting afresh with new modules, one of the benefits of semesterisation.

I’m particularly enjoying the start of term this year as I managed to finish a hefty piece of research before the teaching semester got under way. I’ve been working steadily on the project, a review of a key paper published by Einstein in 1917, since June 1st, so it’s nice to have it off my desk for a while. Of course, the paper will come back in due course with corrections and suggestions from the referees, but I usually enjoy that part of the process.

In the meantime, I’d forgotten how much I enjoy teaching, especially in the absence of a great cloud of research to be done in the evenings. One of the courses I’m teaching this semester is a history of the atomic hypothesis. It’s fascinating to study how the idea emerged from different roots: philosophical considerations in ancient Greece, considerations of chemical reactions in the 18th and 19th centuries, and considerations of statistical mechanics in the 19th century. The big problem was how to test the hypothesis: at least until a brilliant young patent clerk suggested that the motion of small particles suspended in water might betray the presence of millions of water molecules. Einstein’s formula was put to the test by the French physicist Jean Perrin in 1908, and it is one of Einstein’s great triumphs that by 1910, most scientists no longer talked of the ‘atomic hypothesis’, but of ‘atoms’.


In 1905, a young Albert Einstein developed a formula describing the motion of particles  suspended in a liquid, based on the hypothesis that the liquid was made up of millions of molecules. In 1908, the French physicist Jean Perrin demonstrated that the motion of such particles matched Einstein’s formula, giving strong support for the atomic hypothesis.  

For more on Perrin’s experiment, see here.

 


by cormac at January 17, 2017 05:47 PM

Symmetrybreaking - Fermilab/SLAC

The value of basic research

How can we measure the worth of scientific knowledge? Economic analysts give it a shot.


Before building any large piece of infrastructure, potential investors or representatives from funding agencies or governments have to decide whether it’s worth it. Teams of economists perform a cost-benefit analysis to help them determine how a project will affect a region and whether it makes sense to back and build it. 

But when it comes to building infrastructure for basic science, the process gets a little more complicated. It’s not so easy to pin an exact value on the benefits of something like the Large Hadron Collider.

“The main goal is priceless and therefore has no price attached,” says Stefano Forte, a professor of theoretical physics at the University of Milan and part of a team that developed a new method of economic analysis for fundamental science. “We give no value to discovering the Higgs boson in the past or supersymmetry or extra dimensions in the future, because we wouldn’t be able to say what the value of the discovery of extra dimensions is.”

Forte’s team was co-led by two economists, academic Massimo Florio, also of the University of Milan, and private business consultant Silvia Vignetti. They answered a 2012 call by the European Investment Bank’s University Sponsorship Program, which provides grants to university research centers, for assistance with this issue. The bank funded their research into a new way to evaluate proposed investments in science.

Before anyone can start evaluating any sort of impact, they have to define what they’re measuring. Generally, economic impact analyses are highly local, measuring exclusively money flowing in and out of a particular area. 

Because of the complicated nature of financing any project, the biggest difficulty for economists performing an analysis is usually coming up with an appropriate counterfactual: If the project isn’t built, what will happen? As Forte asks, “If you hadn’t spent the money there, where else would you have spent it, and are you sure that by spending it there rather than somewhere else you actually gain something?” 

Based on detailed information about where a scientific collaboration intends to spend their money, economists can take the first step in painting a picture of how that funding will affect the region. The next step is accounting for the secondary spending that this brings.

Companies are paid to do construction work for a scientific project, “and then it sort of cascades throughout the region,” says Jason Horwitz of Anderson Economic Group, which regularly performs economic analyses for universities and physics collaborations. “As they hire more people, the employees themselves are probably going to local grocery stores, going to local restaurants, they might go to a movie now and then—there’s just more local spending.”  

These first parts of the analysis account only for the tangible, concrete-and-steel process of building and maintaining an experiment, though. 

“If you build a bridge, the main benefit is from people who use the bridge—transportation of goods over the bridge and whatnot,” Forte says. But the benefit of constructing a telescope array or a huge laser interferometer is knowledge-formation, “which is measured in papers and publications, references and so on,” he says.

One way researchers like Horwitz and Forte have begun to assign value to such projects is by measuring the effect of the project on the people who run it. Like attending university, working on a scientific collaboration gives you an education—and an education changes your earning capabilities. 

“Fundamental research has a huge added value in terms of human capital formation, even if you work there for two years and then you go and work in a company on Wall Street,” Forte says. Using the same methods used by universities, they found doing research at the LHC would raise students’ earning potential by about 11 percent over a 40-year career.

This method of measuring the value of scientific projects still has limitations. In it, the immeasurable, grander purpose of a fundamental science experiment is still assigned no value at all. When it comes down to it, Forte says, if all we cared about were a big construction project, technology spinoffs and the earning potential of students, we wouldn’t have fundamental physics research. 

“The actual purpose of this is not a big construction project,” Horwitz says. “It’s to do this great research which obviously has other benefits of its own, and we really don’t capture any of that.” Instead, his group appends qualitative explanations of the knowledge to be gained to their economic reports. 

Forte explains, “The fact that this kind of enterprise exists is comparable and evaluated in the same way as, say, the value of the panda not being extinct. If the panda is extinct, there is no one who’s actually going to lose money or make money—but many taxpayers would be willing to pay money for the panda not to be extinct.” 

Forte and his colleagues found a 90 percent chance of the LHC’s benefits exceeding its costs (by 2.9 billion euros, they estimate). But even in the 10 percent chance that its economics aren’t quite so Earth-shaking, its discoveries could change the way we understand our universe.

by Leah Crane at January 17, 2017 04:53 PM

Peter Coles - In the Dark

Hard BrExit Reality Bites UK Science

Before lunch today I listened to the Prime Minister’s much-heralded speech (full text here) at Lancaster House giving a bit more detail about the UK government’s approach to forthcoming negotiations to leave the European Union. As I had expected the speech was mainly concerned with stating the obvious – especially about the UK leaving the so-called Single Market – though there was an interesting, if rather muddled, discussion of some kind of associate membership of the Customs Union.

As I said when I blogged about the EU Referendum result back in June last year

For example, there will be no access to the single market post-BrExit without free movement of people.

The EU has made it perfectly clear all along that it will not compromise on the “four freedoms” that represent the principles on which the Single Market (correct name; “Internal Market”) is based. The UK government has also made it clear that it is running scared of the anti-immigration lobby in the Conservative Party and UKIP, despite the mountain of evidence (e.g. here) that immigration actually benefits the UK economy rather than harming it. A so-called “hard BrExit” approach has therefore been inevitable from the outset.

In any case, it always seemed to me that leaving the EU (and therefore giving up democratic representation on the bodies that govern the single market) but remaining in the Single Market would be completely illogical to anyone motivated by the issue of “sovereignty” (whatever that means).  So I think it always was – and still is – a choice between a hard BrExit and no BrExit at all. There’s no question in my mind – and Theresa May’s speech has hardened my views considerably – that remaining in the EU is by far the best option for the UK. That outcome is looking unlikely now, but there is still a long way to go and many questions have still to be answered, including whether the Article 50 notification can be revoked and whether the devolved assemblies in Scotland and Northern Ireland have to give separate consent. Interestingly, the Conservative Party manifesto for the 2015 General Election included a commitment to work within the Single Market, so it would be within the constitutional limits on the House of Lords to vote down any attempt to leave it.

Overall, I felt the speech was worthwhile insofar as it gave a bit of clarity on some issues, but it was also full of contradictions on others. For example, early on the PM stated:

Parliamentary sovereignty is the basis of our constitution.

Correct, but in that case why did the UK government appeal the High Court’s decision that this was the case (i.e. that Parliamentary consent was needed to invoke Article 50)? Moreover, why, if she thinks Parliament is so important, did she not give today’s speech in the House of Commons?

This brings me to what the speech might imply for British science in a post-BrExit era. Here’s what I said in June 2016:

It’s all very uncertain, of course, but it seems to me that as things stand, any deal that involves free movement within Europe would be unacceptable to the powerful  UK anti-immigration lobby. This rules out a “Norway” type deal, among others, and almost certainly means there will be no access to any science EU funding schemes post 2020. Free movement is essential to the way most of these schemes operate anyway.

I’m by no means always right, but I think I was right about that. It is now clear that UK scientists will not be eligible for EU funding under the Horizon 2020 programme.  Switzerland (which is in the Single Market) wasn’t allowed to remain in Horizon 2020 without freedom of movement, and neither will the UK. If the PM does indeed trigger Article 50 by the end of March 2017 then we will leave the EU by April 2019. That means that existing EU projects and funding will probably be stopped at that point, although the UK government has pledged to provide short-term replacement funding for grants already awarded. From now on it seems likely that EU teams will seek to exclude UK scientists.

This exclusion is not an unexpected outcome, but still disappointing. The PM’s speech states:

One of our great strengths as a nation is the breadth and depth of our academic and scientific communities, backed up by some of the world’s best universities. And we have a proud history of leading and supporting cutting-edge research and innovation.

So we will also welcome agreement to continue to collaborate with our European partners on major science, research, and technology initiatives.

From space exploration to clean energy to medical technologies, Britain will remain at the forefront of collective endeavours to better understand, and make better, the world in which we live.

Warm words, but it’s hard to reconcile them with reality.  We used to be “leading” EU collaborative teams. In a few years we’ll  be left standing on the touchlines. The future looks very challenging for science, and especially for fundamental science, in the UK.

But the politics around EU science programmes pales into insignificance compared with the toxic atmosphere of xenophobia that has engulfed much of the UK. The overt policy of the government to treat EU citizens in the UK as bargaining chips will cause untold stress, as will the Home Office’s heavy-handed approach to those who seek to confirm the permanent residence they will otherwise lose when the UK leaves the EU. Why should anyone – scientist or otherwise – stay in this country to be treated in such a way?

All of this makes me think those scientists I know who have already left the UK for EU institutions probably made the right decision. The question is how many more will follow?


by telescoper at January 17, 2017 02:29 PM


John Baez - Azimuth

The Irreversible Momentum of Clean Energy

The president of the US recently came out with an article in Science. It’s about climate change and clean energy:

• Barack Obama, The irreversible momentum of clean energy, Science, 13 January 2017.

Since it’s open-access, I’m going to take the liberty of quoting the whole thing, minus the references, which provide support for a lot of his facts and figures.

The irreversible momentum of clean energy

The release of carbon dioxide (CO2) and other greenhouse gases (GHGs) due to human activity is increasing global average surface air temperatures, disrupting weather patterns, and acidifying the ocean. Left unchecked, the continued growth of GHG emissions could cause global average temperatures to increase by another 4°C or more by 2100 and by 1.5 to 2 times as much in many midcontinent and far northern locations. Although our understanding of the impacts of climate change is increasingly and disturbingly clear, there is still debate about the proper course for U.S. policy — a debate that is very much on display during the current presidential transition. But putting near-term politics aside, the mounting economic and scientific evidence leave me confident that trends toward a clean-energy economy that have emerged during my presidency will continue and that the economic opportunity for our country to harness that trend will only grow. This Policy Forum will focus on the four reasons I believe the trend toward clean energy is irreversible.

ECONOMIES GROW, EMISSIONS FALL

The United States is showing that GHG mitigation need not conflict with economic growth. Rather, it can boost efficiency, productivity, and innovation. Since 2008, the United States has experienced the first sustained period of rapid GHG emissions reductions and simultaneous economic growth on record. Specifically, CO2 emissions from the energy sector fell by 9.5% from 2008 to 2015, while the economy grew by more than 10%. In this same period, the amount of energy consumed per dollar of real gross domestic product (GDP) fell by almost 11%, the amount of CO2 emitted per unit of energy consumed declined by 8%, and CO2 emitted per dollar of GDP declined by 18%.

The importance of this trend cannot be overstated. This “decoupling” of energy sector emissions and economic growth should put to rest the argument that combatting climate change requires accepting lower growth or a lower standard of living. In fact, although this decoupling is most pronounced in the United States, evidence that economies can grow while emissions do not is emerging around the world. The International Energy Agency’s (IEA’s) preliminary estimate of energy related CO2 emissions in 2015 reveals that emissions stayed flat compared with the year before, whereas the global economy grew. The IEA noted that “There have been only four periods in the past 40 years in which CO2 emission levels were flat or fell compared with the previous year, with three of those — the early 1980s, 1992, and 2009 — being associated with global economic weakness. By contrast, the recent halt in emissions growth comes in a period of economic growth.”

At the same time, evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term. Estimates of the economic damages from warming of 4°C over preindustrial levels range from 1% to 5% of global GDP each year by 2100. One of the most frequently cited economic models pins the estimate of annual damages from warming of 4°C at ~4% of global GDP, which could lead to lost U.S. federal revenue of roughly $340 billion to $690 billion annually.

Moreover, these estimates do not include the possibility of GHG increases triggering catastrophic events, such as the accelerated shrinkage of the Greenland and Antarctic ice sheets, drastic changes in ocean currents, or sizable releases of GHGs from previously frozen soils and sediments that rapidly accelerate warming. In addition, these estimates factor in economic damages but do not address the critical question of whether the underlying rate of economic growth (rather than just the level of GDP) is affected by climate change, so these studies could substantially understate the potential damage of climate change on the global macroeconomy.

As a result, it is becoming increasingly clear that, regardless of the inherent uncertainties in predicting future climate and weather patterns, the investments needed to reduce emissions — and to increase resilience and preparedness for the changes in climate that can no longer be avoided — will be modest in comparison with the benefits from avoided climate-change damages. This means, in the coming years, states, localities, and businesses will need to continue making these critical investments, in addition to taking common-sense steps to disclose climate risk to taxpayers, homeowners, shareholders, and customers. Global insurance and reinsurance businesses are already taking such steps as their analytical models reveal growing climate risk.

PRIVATE-SECTOR EMISSIONS REDUCTIONS

Beyond the macroeconomic case, businesses are coming to the conclusion that reducing emissions is not just good for the environment — it can also boost bottom lines, cut costs for consumers, and deliver returns for shareholders.

Perhaps the most compelling example is energy efficiency. Government has played a role in encouraging this kind of investment and innovation. My Administration has put in place (i) fuel economy standards that are net beneficial and are projected to cut more than 8 billion tons of carbon pollution over the lifetime of new vehicles sold between 2012 and 2029 and (ii) 44 appliance standards and new building codes that are projected to cut 2.4 billion tons of carbon pollution and save $550 billion for consumers by 2030.

But ultimately, these investments are being made by firms that decide to cut their energy waste in order to save money and invest in other areas of their businesses. For example, Alcoa has set a goal of reducing its GHG intensity 30% by 2020 from its 2005 baseline, and General Motors is working to reduce its energy intensity from facilities by 20% from its 2011 baseline over the same timeframe. Investments like these are contributing to what we are seeing take place across the economy: Total energy consumption in 2015 was 2.5% lower than it was in 2008, whereas the economy was 10% larger.

This kind of corporate decision-making can save money, but it also has the potential to create jobs that pay well. A U.S. Department of Energy report released this week found that ~2.2 million Americans are currently employed in the design, installation, and manufacture of energy-efficiency products and services. This compares with the roughly 1.1 million Americans who are employed in the production of fossil fuels and their use for electric power generation. Policies that continue to encourage businesses to save money by cutting energy waste could pay a major employment dividend and are based on stronger economic logic than continuing the nearly $5 billion per year in federal fossil-fuel subsidies, a market distortion that should be corrected on its own or in the context of corporate tax reform.

MARKET FORCES IN THE POWER SECTOR

The American electric-power sector — the largest source of GHG emissions in our economy — is being transformed, in large part, because of market dynamics. In 2008, natural gas made up ~21% of U.S. electricity generation. Today, it makes up ~33%, an increase due almost entirely to the shift from higher-emitting coal to lower-emitting natural gas, brought about primarily by the increased availability of low-cost gas due to new production techniques. Because the cost of new electricity generation using natural gas is projected to remain low relative to coal, it is unlikely that utilities will change course and choose to build coal-fired power plants, which would be more expensive than natural gas plants, regardless of any near-term changes in federal policy. Although methane emissions from natural gas production are a serious concern, firms have an economic incentive over the long term to put in place waste-reducing measures consistent with standards my Administration has put in place, and states will continue making important progress toward addressing this issue, irrespective of near-term federal policy.

Renewable electricity costs also fell dramatically between 2008 and 2015: the cost of electricity fell 41% for wind, 54% for rooftop solar photovoltaic (PV) installations, and 64% for utility-scale PV. According to Bloomberg New Energy Finance, 2015 was a record year for clean energy investment, with those energy sources attracting twice as much global capital as fossil fuels.

Public policy — ranging from Recovery Act investments to recent tax credit extensions — has played a crucial role, but technology advances and market forces will continue to drive renewable deployment. The levelized cost of electricity from new renewables like wind and solar in some parts of the United States is already lower than that for new coal generation, without counting subsidies for renewables.

That is why American businesses are making the move toward renewable energy sources. Google, for example, announced last month that, in 2017, it plans to power 100% of its operations using renewable energy — in large part through large-scale, long-term contracts to buy renewable energy directly. Walmart, the nation’s largest retailer, has set a goal of getting 100% of its energy from renewables in the coming years. And economy-wide, solar and wind firms now employ more than 360,000 Americans, compared with around 160,000 Americans who work in coal electric generation and support.

Beyond market forces, state-level policy will continue to drive clean-energy momentum. States representing 40% of the U.S. population are continuing to move ahead with clean-energy plans, and even outside of those states, clean energy is expanding. For example, wind power alone made up 12% of Texas’s electricity production in 2015 and, at certain points in 2015, that number was >40%, and wind provided 32% of Iowa’s total electricity generation in 2015, up from 8% in 2008 (a higher fraction than in any other state).

GLOBAL MOMENTUM

Outside the United States, countries and their businesses are moving forward, seeking to reap benefits for their countries by being at the front of the clean-energy race. This has not always been the case. A short time ago, many believed that only a small number of advanced economies should be responsible for reducing GHG emissions and contributing to the fight against climate change. But nations agreed in Paris that all countries should put forward increasingly ambitious climate policies and be subject to consistent transparency and accountability requirements. This was a fundamental shift in the diplomatic landscape, which has already yielded substantial dividends. The Paris Agreement entered into force in less than a year, and, at the follow-up meeting this fall in Marrakesh, countries agreed that, with more than 110 countries representing more than 75% of global emissions having already joined the Paris Agreement, climate action “momentum is irreversible”. Although substantive action over decades will be required to realize the vision of Paris, analysis of countries’ individual contributions suggests that meeting medium-term respective targets and increasing their ambition in the years ahead — coupled with scaled-up investment in clean-energy technologies — could increase the international community’s probability of limiting warming to 2°C by as much as 50%.

Were the United States to step away from Paris, it would lose its seat at the table to hold other countries to their commitments, demand transparency, and encourage ambition. This does not mean the next Administration needs to follow identical domestic policies to my Administration’s. There are multiple paths and mechanisms by which this country can achieve — efficiently and economically — the targets we embraced in the Paris Agreement. The Paris Agreement itself is based on a nationally determined structure whereby each country sets and updates its own commitments. Regardless of U.S. domestic policies, it would undermine our economic interests to walk away from the opportunity to hold countries representing two-thirds of global emissions — including China, India, Mexico, European Union members, and others — accountable. This should not be a partisan issue. It is good business and good economics to lead a technological revolution and define market trends. And it is smart planning to set long term emission-reduction targets and give American companies, entrepreneurs, and investors certainty so they can invest and manufacture the emission-reducing technologies that we can use domestically and export to the rest of the world. That is why hundreds of major companies — including energy-related companies from ExxonMobil and Shell, to DuPont and Rio Tinto, to Berkshire Hathaway Energy, Calpine, and Pacific Gas and Electric Company — have supported the Paris process, and leading investors have committed $1 billion in patient, private capital to support clean-energy breakthroughs that could make even greater climate ambition possible.

CONCLUSION

We have long known, on the basis of a massive scientific record, that the urgency of acting to mitigate climate change is real and cannot be ignored. In recent years, we have also seen that the economic case for action — and against inaction — is just as clear, the business case for clean energy is growing, and the trend toward a cleaner power sector can be sustained regardless of near-term federal policies.

Despite the policy uncertainty that we face, I remain convinced that no country is better suited to confront the climate challenge and reap the economic benefits of a low-carbon future than the United States and that continued participation in the Paris process will yield great benefit for the American people, as well as the international community. Prudent U.S. policy over the next several decades would prioritize, among other actions, decarbonizing the U.S. energy system, storing carbon and reducing emissions within U.S. lands, and reducing non-CO2 emissions.

Of course, one of the great advantages of our system of government is that each president is able to chart his or her own policy course. And President-elect Donald Trump will have the opportunity to do so. The latest science and economics provide a helpful guide for what the future may bring, in many cases independent of near-term policy choices, when it comes to combatting climate change and transitioning to a clean energy economy.


by John Baez at January 17, 2017 01:00 AM

January 16, 2017

Christian P. Robert - xi'an's og

optimal Bernoulli factory

One of the last arXivals of the year was this paper by Luis Mendo on an optimal algorithm for Bernoulli factory (or Lovász’s, or indeed Basu’s) problems, i.e., for producing an unbiased estimate of f(p), 0<p<1, from an unrestricted number of Bernoulli trials with probability p of heads. (See, e.g., Mark Huber’s recent book for background.) This paper drove me to read an older 1999 unpublished document by Wästlund, unpublished because of the overlap with Keane and O’Brien (1994). One interesting gem in this document is that Wästlund produces a Bernoulli factory for the function f(p)=√p, which is not of considerable interest per se, but which was proposed to me as a puzzle by Professor Sinha during my visit to the Department of Statistics at the University of Calcutta, based on his 1979 paper with P.K. Banerjee. The algorithm is based on a stopping rule N: throw a fair coin until the number of heads n+1 exceeds the number of tails n. The event N=2n+1 occurs with probability

{2n \choose n} \big/ \left[(n+1)\,2^{2n+1}\right]

[Using a biased coin with probability p to simulate a fair coin is straightforward.] Then flip the original coin n+1 times and produce a result of 1 if at least one toss gives heads. Overall, the procedure returns 1 with probability √p.
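To make the mechanism concrete, here is a minimal simulation sketch of this √p factory (illustrative code, not Wästlund’s or Mendo’s; the only primitive used is the p-coin, with the von Neumann trick supplying the fair coin):

```python
# Minimal sketch of the sqrt(p) Bernoulli factory described above.
# Illustrative code only; the sole primitive is a Bernoulli(p) coin.
import random

def p_coin(p):
    """One Bernoulli(p) draw."""
    return random.random() < p

def fair_coin(p):
    """von Neumann trick: a fair coin built from the p-coin."""
    while True:
        a, b = p_coin(p), p_coin(p)
        if a != b:
            return a

def sqrt_p_factory(p):
    """Return 1 with probability sqrt(p)."""
    heads = tails = 0
    while heads <= tails:      # stop the first time heads exceed tails
        if fair_coin(p):
            heads += 1
        else:
            tails += 1
    n = tails                  # stopping time N = 2n + 1, with n + 1 heads
    # flip the original p-coin n + 1 times; return 1 if at least one head shows
    return int(any(p_coin(p) for _ in range(n + 1)))

# quick check: the empirical mean should be close to sqrt(0.25) = 0.5
# (the stopping time has a heavy tail, so individual runs vary in length)
print(sum(sqrt_p_factory(0.25) for _ in range(20_000)) / 20_000)
```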

Mendo generalises Wästlund’s algorithm to functions expressed as a power series in (1-p)

f(p)=1-\sum_{i=1}^\infty c_i(1-p)^i

with the sum of the weights being equal to one. This means proceeding through Bernoulli B(p) generations until one realisation is one or a probability

c_i\Big/\left(1-\sum_{j=1}^{i-1}c_j\right)

event occurs [which can be derived from a Bernoulli B(p) sequence]. Furthermore, this version achieves asymptotic optimality in the number of tosses, thanks to a form of the Cramér-Rao lower bound. (Which makes yet another connection with Kolkata!)
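The general scheme admits a similarly short sketch (again illustrative code; for simplicity the stop event is drawn from an auxiliary uniform, whereas the paper derives it from the Bernoulli B(p) sequence itself, and the weights c_i are assumed nonnegative and summing to one):

```python
# Sketch of the series-based factory: f(p) = 1 - sum_i c_i (1 - p)^i,
# with nonnegative weights c_i summing to one. Illustrative code, not Mendo's.
import random

def series_factory(p, c):
    """Return 1 with probability 1 - sum_i c[i-1] * (1 - p)**i."""
    remaining = 1.0                            # 1 - sum_{j<i} c_j
    for c_i in c:
        if random.random() < p:                # the Bernoulli(p) generation is one
            return 1
        if random.random() < c_i / remaining:  # stop event, probability c_i / (1 - sum_{j<i} c_j)
            return 0
        remaining -= c_i
    return 1                                   # unreachable when the c_i sum exactly to one

# toy check with f(p) = 1 - 0.5(1-p) - 0.5(1-p)^2 at p = 0.3
p, c = 0.3, [0.5, 0.5]
estimate = sum(series_factory(p, c) for _ in range(50_000)) / 50_000
print(estimate, 1 - 0.5 * (1 - p) - 0.5 * (1 - p) ** 2)
```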


Filed under: Statistics Tagged: Bernoulli factory, Cramer-Rao lower bound, Darjeeling, Debabrata Basu, Himalayas, India, Kangchenjunga, Kolkata, Lovàsz, Mark Huber, University of Calcutta

by xi'an at January 16, 2017 11:17 PM

ZapperZ - Physics and Physicists

Fermions and Bosons
Fermilab's Don Lincoln describes what bosons and fermions are, for those who don't know.



Zz.

by ZapperZ (noreply@blogger.com) at January 16, 2017 09:15 PM

Peter Coles - In the Dark

Cotton Tail

It’s been a very busy and rather trying day so I’m in need of a bit of a pick-me-up. This will do nicely! It’s the great Duke Ellington band of 1940 playing Cotton Tail. This tune – yet another constructed on the chord changes to George Gershwin’s I Got Rhythm – was written by Ben Webster and arranged by Duke Ellington for his orchestra in a characteristically imaginative and inventive way. Webster’s “heavy” tenor saxophone dominates the first half of the track, but the real star of the show (for me) is the superb brass section of the Ellington Orchestra whose tight discipline allows it to punch out a series of complicated riffs with a power and precision that would terrify most classical orchestras. And no wonder! The Ellington band of this era was jam-packed with talent, including: Rex Stewart (cornet); Wallace Jones, Ray Nance, and Cootie Williams (trumpet); Juan Tizol, “Tricky” Sam Nanton, and Lawrence Brown (trombones). Listen particularly to the two sequences from 1.33-1.49 and 2.35-2.59, which are just brilliant! Enjoy!

P.S. The drummer is the great Sonny Greer.


by telescoper at January 16, 2017 05:44 PM

Peter Coles - In the Dark

Cardiff Brewery Tap wins Beard Friendly Pub of the Year title (UK)

Another important accolade for Cardiff, winner of this year’s Beard Friendly Pub of the Year in the “Outside London” category!

There’s a news item about this prestigious award in the local media here.

Kmflett's Blog

Beard Liberation Front

January 15th

Contact Keith flett                                            07803 167266

CARDIFF BREWERY TAP WINS BEARD FRIENDLY PUB OF THE YEAR TITLE


The Beard Liberation Front, the informal network of beard wearers, has said that the contest for the Beard Friendly Pub of the Year has concluded with the Crafty Devil Beer Cellar in Cardiff bearding the Cloudwater brewery tap in Manchester for the UK (outside of London) title.

The Cock Tavern in Hackney won the overall poll, but the result of the on-line vote saw a major new development with Brewery Taps – where drinkers socialise at the breweries themselves – coming second and third in the overall national vote.

The winners in 2016 included the Jolly Butchers in Stoke Newington, the Cock Tavern in central Hackney and the Bag of Nails in Bristol.

Beard Friendly Pub, Bar, Tap 2017

UK

1 Crafty Devil Beer Cellar, Cardiff

2 Cloudwater Brewery Tap, Manchester

View original post 308 more words


by telescoper at January 16, 2017 05:05 PM

Clifford V. Johnson - Asymptotia

Just When You’re Settling…

You know how this goes: He's not going to let the matter drop. He's thinking of a comeback. Yeah, don't expect to finish that chapter any time soon...

-cvj Click to continue reading this post

The post Just When You’re Settling… appeared first on Asymptotia.

by Clifford at January 16, 2017 04:53 PM

CERN Bulletin

As every year, 2016 ended with the Council Week

The Finance Committee met on 14 December 2016. This Committee comprises delegates representing national administrations and deals with all questions related to the financial contributions of the Member States, the budget of the Organization and the expenditure of the Laboratory.

The main decisions with a direct or indirect impact on the financial and social conditions of the personnel were:

  • non-indexation of salaries as of 1st January 2017, and for six consecutive years, with a negative memory of -0.4 %;
  • negative cost-variation index of -4.93 %;
  • acceptance of proposed changes to the CHIS (CERN Health Insurance Scheme) Rules, primarily on CERN Health Insurance membership conditions.

However, the highlight of this Finance Committee was the Director-General’s announcement of her decision to recruit 80 additional staff on limited-duration contracts.

The Staff Association supported this decision with a declaration by its President at the Finance Committee. It was emphasized that the current personnel is under considerable pressure to achieve the objectives of the Organization and to respond to the growing demands of the Laboratory.

The Staff Association would like to recall that it has on numerous occasions expressed the concerns of the CERN staff regarding the shortage of personnel to ensure the supervision of students, the implementation of increasingly complicated projects and other activities. Indeed, the ratio of other members of the personnel (fellows and associated members) to staff members has more than doubled over the last ten years. Lastly, the Staff Association stated that the long-term viability of the Laboratory’s activities relies heavily on a stable and very experienced workforce, which requires more than just the 80 limited-duration positions announced by the Management, and should also include positions with indefinite contracts.

Evolution of various categories of personnel with time

 

Challenges for 2017: burning issues and finalizing the implementation of the 2015 Five-Yearly Review

Now, at the beginning of 2017, the issues that require our full attention are related to the annual MERIT exercise and the Promotion exercise. These two issues are highly sensitive and of great importance to the staff members because they concern your advancement and your career development prospects. We will come back to you shortly with a detailed article on these issues.

Moreover, in 2017, we will work with the Management on internal mobility, career development interviews and the internal validation of skills acquired through experience (VAE, validation des acquis de l’expérience). All of these issues are part of the 2015 Five-Yearly Review package. These processes must be implemented as soon as possible and by the end of 2017 at the latest. Again, we will keep you informed of the progress.

At the beginning of this New Year, we would like to remind you that the Staff Association represents all members of the personnel (MPE and MPA) in discussions with the Management and the Member States. Please do not hesitate to contact your staff delegates and enrich the discussion by sharing your views on the issues we are currently working on or any other topic. Your contributions are invaluable to us. It is also very important to support the Staff Association by joining and engaging in the activities of the Association that will renew its Staff Council at the end of 2017.

Lastly, as a year comes to an end and a new one begins, we wish you and your loved ones all the best: health, happiness and success for 2017.

 

 

Declaration at Finance Committee by Staff Association on paper CERN/FC/6065/RA

Thank you, Madam Chair!

The Staff Association would like to first state its strong support for the position presented by the Management to recruit some 80 additional staff on Limited Duration posts. The analysis provided by the Management matches and exemplifies many of our own concerns and our current knowledge of the situation across the Organization.

The CERN Management comes first to state that the “current staff complement is under significant strain to continue meeting the objectives of the Organization […] and to satisfy the increasing demands being made on the Laboratory.” Further the paper mentions “concerns […] expressed by Member State delegates, by the SPC and the CERN Directorate at meetings of TREF, the SPC, the FC and the Council about the shortage of staff in key fields…” The Staff Association would like to recall that it has on numerous occasions also expressed the concerns of the CERN staff regarding staff complement reductions that lead to such shortage in many important fields.

The analysis presented by the Management further gives very valuable information in section 2, titled “History and motivations”. The second paragraph in particular shows that the initial intention to reduce the staff complement by one thousand FTE was actually achieved by reducing the number of Indefinite Contracts by one thousand, from about 2700 in 1994 to about 1700 today. The efforts by the Management to maintain a viable staff complement materialized in an increase of the fraction of Limited Duration contracts, from 10% to now 30%.

This leads inevitably to two effects. The first effect is that we observe a loss of expertise in some areas of the Organization. When Limited Duration posts succeed one another, without any overlap, there is no longer an opportunity to transmit knowledge and know-how. This was pointed out by the Review Committee for the Cost and Schedule review for LIU and HL-LHC: “In many cases the expertise required does/will not exist (anymore) at CERN and must be recruited as soon as possible, be trained and brought into the projects.” The analysis is correct and the remedy of providing a number of Limited Duration contracts appears to be correct as well, because this concerns a couple of large-scale projects which are time limited. However, there is a significant extra cost in the hiring and training processes that come along.

And this brings us to the second effect, which is the supervision effort required of the remaining staff on Indefinite Contract. With a higher fraction of staff on Limited Duration contracts, the investment to hire and train new LD staff falls upon the IC staff to a large extent. Beyond this extra cost, it also represents lost productivity for the experienced long-term staff. However, we do recognize that training young professionals and giving them an opportunity for a work experience at CERN is one of our missions, and rest assured that all staff at CERN take this to heart and are eager to share their knowledge. We know that such training is essential for the Member States and constitutes a very valuable return to their own laboratories and industry. It remains that there is a significant shift of activities, for many IC staff, from production work towards training and supervision.

In Section 3 the Management points to another but similar effect: Figure 1 shows that the ratio of all members of the personnel (employed and associated) to staff has more than doubled in the last ten years. Associated members of personnel and fellows also require supervision and services, which are provided, to a large extent, by the stable workforce of the Organization.

Figure 2 gives further insight concerning users on the one hand and fellows and students on the other hand. Users expect a certain level of service to be provided by the Laboratory, but the ratio of users to staff has been multiplied by 2.5 in 20 years! Fellows and students require active and significant supervision – they should never be thought of as replacing missing staff, as is sometimes the temptation – and the ratio of fellows to staff has been multiplied by 4.5 in 20 years!

The CERN staff fulfils its missions in a very enthusiastic way, be it to directly participate in forefront research in physics, provide and operate the accelerators and high technology equipment and tools required to carry out this research, provide all site services to support these activities, but also to supervise and train young scientists coming from all our member states. The trends regarding the workloads are however worrying, and we see little short-term relief. With new member states and associated member states, we look forward to welcoming on site more users, more students and young scientists. We are very happy and excited to work with our new colleagues and we really want to give them the service and attention that they deserve. But we are anxious to know whether we have the means.

Further, we would like to point out that, while the transfer of funds from “budget lines that have shown a slower time profile than planned” is certainly a sound decision, the technical infrastructure consolidation budget is also directly affected by the lack of personnel in key areas. The consolidation of the laboratory’s infrastructure must be considered with priority as well, and appropriate staff complement allocated in order to carry out the work and meet the timeline.

In conclusion, we would like to again state our strong support for the initiative of the CERN management to relieve the strain that has been placed on CERN staff over the last decades. However, this strain relief should be extended, beyond the scope of the high priority projects identified by the Management, to other key activities of the Laboratory, which in turn requires more posts than the number of 80 quoted by the Management. We also want to point out that the long-term viability of the activities of the Laboratory strongly relies on a stable and very experienced work force, which means also the need for an increase in the number of Indefinite Contracts in the staff complement.

Thank you, Madam Chair.

January 16, 2017 04:01 PM

Axel Maas - Looking Inside the Standard Model

Writing a review
As I have mentioned recently on Twitter, I have been given the opportunity, and the mandate, to write a review on Higgs physics. In particular, I should describe how the connection is established from the formal basics to what we see in experiment. While I will, in the time ahead, be writing a lot about the insights I gain and the connections I make during writing, this time I want to talk about something different. About what this means, and what the purpose of reviews is.

So what is a review good for? Physics is not static. Physics is about our understanding of the world around us. It is about making things we experience calculable. This is done by phrasing so-called laws of nature as mathematical statements. Then making predictions (or explaining something that happens) is, essentially, just evaluating equations. At least in principle, because this may be technically extremely complicated and involved. There are cases in which our current abilities do not yet allow us to do so. But this is a matter of technology and, often, of resources in the form of computing time. Not some conceptual problem.

But there is also a conceptual problem. Our mathematical statements encode what we know. One of their most powerful features is that they tell us themselves that they are incomplete. That our mathematical formulation of nature only reaches this far. That there are things which we cannot describe, and of which we do not even yet know what they are. Physics is at the edge of knowledge. But we are not lazy. Every day, thousands of physicists all around the world work together to push this edge a little bit farther out. Thus, day by day, we know more. And, in a global world, this knowledge is shared almost instantaneously.

A consequence of this progress is that the textbooks at the edge become outdated. Because we get a better understanding. Or we figure out that something is different than we thought. Or because we find a way to solve a problem which withstood solution for decades. However, what we find today or tomorrow is not yet confirmed. Every insight we gain needs to be checked. Has to be investigated from all sides. And has to be fitted into our existing knowledge. More often than not, some of these insights turn out to be false hopes. We thought we understood something, but there is still that one little hook, this one tiny loophole, which in the end lets our insight crumble. This can take a day or a month or a year, or even decades. Thus, insights should not directly become part of textbooks, which we use to teach the next generation of students.

To deal with this, a hierarchy of establishing knowledge has formed.

In the beginning, there are ideas and first results. These we tell our colleagues at conferences. We document the ideas and first results in write-ups of our talks. We visit other scientists, and discuss our ideas. By this we find many loopholes and inadequacies already, and can drop things which do not work.

Results which survive this stage then become research papers. If we write such a paper, it is usually about something which we personally believe to be well founded. Which we have analyzed from various angles, and bounced off the wisdom and experience of our colleagues. We are pretty sure that it is solid. By making these papers accessible to the rest of the world, we put this conviction to the test of a whole community, rather than just the scientists who see our talks or whom we talk to in person.

Not all such results remain. In fact, many of them are later found to be only partly right, or to still have an overlooked loophole, or are invalidated by other results. But already at this stage a considerable number of insights survive.

Over years, and sometimes decades, insights in papers on a topic accumulate. With every paper which survives the scrutiny of the world, another piece of the puzzle falls into place. Thus, slowly, a knowledge base emerges on a topic, carried by many papers. And then, at some point, the amount of knowledge provides a reasonably good understanding of the topic. This understanding is still frayed at the edges towards the unknown. There are still, here and there, some holes to be filled. But overall, the topic is in fairly good condition. That is the point where a review is written on the topic. Which summarizes the findings of the various papers, often hundreds of them. And which draws the big picture, and fits all the pieces into it. Its duty is also to point out all remaining problems, and where the ends are still frayed. But at this point the things are usually well established. They often will not change substantially in the future. Of course, no rule without exception.

Over time, multiple reviews will evolve the big picture, close all holes, and connect the frayed edges to neighboring topics. By this, another patch in the tapestry of a field is formed. It becomes a stable part of the fabric of our understanding of physics. When this process is finished, it is time to write textbooks. To make even non-specialist students of physics aware of the topic, its big picture, and how it fits into our view of the world.

Those things which are of particular relevance, since they form the fabric of our most basic understanding of the world, will eventually filter further down. At some point, they may become part of the textbooks at school, rather than at university. And ultimately, they will become part of common knowledge.

This has happened many times in physics. Mechanics, classical electrodynamics, thermodynamics, quantum and nuclear physics, solid state physics, particle physics, and many other fields have undergone these levels of the hierarchy. Of course, the transitions which lead from the first inspiration to the final revelation of our understanding can often only be seen with hindsight. But in this way our physics view of the world evolves.

by Axel Maas (noreply@blogger.com) at January 16, 2017 10:34 AM

CERN Bulletin

Exhibition

Recent works

Fabienne Wyler

From 6 to 17 February 2017
CERN Meyrin, Main Building

L'escalier du diable B (The Devil's Staircase B) - watercolour and Indian ink, XLV - Fabienne Wyler.

In relation to certain contemporary compositional techniques (e.g. Webern, or certain computer-generated music), Fabienne Wyler’s pictorial compositions are built up from “modules” (groups of quadrangles) which she reproduces while subjecting them to all sorts of transformations and displacements: stretchings, reversals, rotations, mirror effects, transpositions, phase shifts, superpositions, etc., and this at every scale.

Over the course of her works, series have appeared entitled Bifurcations, Intermittences, Attracteurs étranges (strange attractors) and Polyrythmies (polyrhythms). These titles are closely linked to modern science; she is interested in deterministic chaos and “fractals”.

Her sources of inspiration refer to the research carried out at the Bauhaus (especially that of Klee and Kandinsky). She is also influenced by Mondrian, Escher and, above all where colour is concerned, by Japanese painting of the Heian period (Kyoto, 11th and 12th centuries).

For more information: staff.association@cern.ch | Tel.: 022 766 37 38

January 16, 2017 10:01 AM

CERN Bulletin

Cine club

Wednesday 18 January 2017 at 20:00
CERN Council Chamber

35 Up


Directed by Michael Apted
UK, 1991, 123 minutes

Director Michael Apted revisits the same group of British-born adults after a seven-year wait. The subjects are interviewed as to the changes that have occurred in their lives during the last seven years.

Original version English; French subtitles


Wednesday 25 January 2017 at 20:00
CERN Council Chamber

56 Up

Directed by Michael Apted, Paul Almond
UK, 2012, 144 minutes

Director Michael Apted revisits the same group of British-born adults after a seven-year wait. The subjects are interviewed as to the changes that have occurred in their lives during the last seven years.

Original version English; French subtitles

January 16, 2017 10:01 AM

CERN Bulletin

GAC-EPA

The GAC holds regular drop-in sessions with individual interviews on the last Tuesday of each month, except in June, July and December.

The next session will take place on:
Tuesday 31 January, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesday 28 February, 28 March, 25 April, 30 May, 29 August, 26 September, 31 October and 28 November 2017.

The sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/.
E-mail: gac-epa@gac-epa.org.

January 16, 2017 09:01 AM

CERN Bulletin

January 15, 2017

Christian P. Robert - xi'an's og

a new Editor for Series B

As in every odd year, the Royal Statistical Society is seeking a new joint editor for Series B! After four years of dedication to the (The!) journal, Piotr Fryzlewicz is indeed going to retire from this duty by the end of 2017. Many thanks to Piotr for his unfailing involvement in Series B and the preservation of its uncompromising selection of papers! The call is thus open for candidates for the next round of editorship, from 2018 to 2021, with a deadline of 31 January 2017. Interested candidates should contact Martin Owen, at the Society’s address or by email at rss.org.uk with journal as recipient (local-part). The new editor will work with the current joint editor, David Dunson, whose term runs till December 2019. (I am also looking forward to working with Piotr’s successor in developing the Series B blog, Series’ Blog!)


Filed under: Statistics Tagged: blog, JRSSB, Royal Statistical Society, Series B

by xi'an at January 15, 2017 11:17 PM

January 14, 2017

John Baez - Azimuth

Solar Irradiance Measurements

guest post by Nadja Kutz

This blog post is based on a thread in the Azimuth Forum.

The current theories about the Sun’s life-time indicate that the Sun will turn into a red giant in about 5 billion years. How and when this process is going to be destructive to the Earth is still debated. Apparently, according to more or less current theories, there has been a quasilinear increase in luminosity. On page 3 of

• K.-P. Schröder and Robert Connon Smith, Distant future of the Sun and Earth revisited, 2008.

we read:

The present Sun is increasing its average luminosity at a rate of 1% in every 110 million years, or 10% over the next billion years.

Unfortunately I feel a bit doubtful about this, in particular after I looked at some irradiation measurements. But let’s recap a bit.

In the Azimuth Forum I asked for information about solar irradiance measurements. Why I was originally interested in how bright the Sun is shining is a longer story, which includes discussions about the global warming potential of methane. For this post I prefer to omit this lengthy historical survey about my original motivations (maybe I’ll come back to this later). Meanwhile there is also a newer reason why I am interested in solar irradiance measurements, which I want to talk about here.

Strictly speaking I was not only interested in knowing more about how bright the sun is shining, but how bright each of its ‘components’ is shining. That is, I wanted to see spectrally resolved solar irradiance measurements—and in particular, measurements in the range between the wavelengths of roughly 650 and 950 nanometers.

This led me to the SORCE mission, a NASA-sponsored satellite mission whose website is located at the University of Colorado. The website very nicely provides a fairly clear and intuitive interactive app, LISIRD, with which the spectral measurements of the Sun can be studied.

As a side remark, I should mention that this mission belongs to NASA's Earth Science program, which is currently under threat of being scrapped.

By using this app, I found in the 650–950 nanometer range a very strange rise in radiation between 2003 and 2016, which happened mainly in the last 2-3 years. You can see this rise here (click to enlarge):

[Figure: spectral line at 774.5 nm from day 132 to day 5073; day 132 starts 24 January 2003, day 5073 is the end of 2016]

Now, fluctuations within certain spectral ranges within the Sun’s spectrum are not news. Here, however, it looked as if a rather stable range suddenly started to change rather “dramatically”.

I put the word “dramatically” in quotes for a couple of reasons.

Spectral measurements are complicated and prone to measurement errors. Subtle issues of dirty lenses and the like are already enough to suggest that this is no easy feat, so that this strange rise might easily be due to a measurement failure. Moreover, as I said, it looked as if this had been a fairly stable range over the course of ten years. But maybe this new rise in irradiation is part of the 11-year solar cycle, i.e., a common phenomenon. In addition, although the rise looks big, it may overall still be rather subtle.

So: how subtle or non-subtle is it then?

In order to assess that, I made a quick estimate (see the Forum discussion) and found that if all the additional radiation were to reach the ground (which of course it doesn't, due to absorption), then on 1000 square meters you could easily power a lawn mower with that subtle change! That is, my estimate was 1200 watts for that patch of lawn. Whoa!

That was disconcerting enough to make me download the data, linearly interpolate it and calculate the power of that change. I wrote a program in JavaScript to do that. The computation gave an answer of 1000 watts, i.e., my estimate was fairly close. Whoa again!
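For readers who want to reproduce the flavour of this estimate, here is a minimal Python sketch (not the author's JavaScript program) of the same kind of calculation: integrate the change in spectral irradiance over the band and scale it to a patch of ground. The spectra below are placeholder numbers, not SORCE data.

```python
# Minimal sketch of the band-power estimate described above.
# The spectra here are illustrative placeholders, not SORCE data.
import numpy as np

def band_power(wavelength_nm, ssi_early, ssi_late, area_m2=1000.0):
    """Extra power (W) on `area_m2` if the whole change in spectral
    irradiance (W m^-2 nm^-1) between the two dates reached the ground."""
    delta = np.asarray(ssi_late) - np.asarray(ssi_early)   # W m^-2 nm^-1
    extra_irradiance = np.trapz(delta, wavelength_nm)      # integrate over the band -> W m^-2
    return extra_irradiance * area_m2                       # W on the patch of ground

# Placeholder spectra on a 650-950 nm grid:
wl = np.linspace(650.0, 950.0, 301)          # 1 nm spacing
ssi_2003 = np.full_like(wl, 1.0)             # rough order of magnitude, W m^-2 nm^-1
ssi_2016 = ssi_2003 * 1.004                  # a hypothetical 0.4% rise across the band

print(f"extra power on 1000 m^2: {band_power(wl, ssi_2003, ssi_2016):.0f} W")
```

With these made-up inputs the script prints roughly 1200 W, the same order as the back-of-the-envelope estimate above.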

How does this translate to overall changes in solar irradiance? Some increase had already been noticed. NASA wrote in 2003 on its webpage:

Although the inferred increase of solar irradiance in 24 years, about 0.1 percent, is not enough to cause notable climate change, the trend would be important if maintained for a century or more.

That was 13 years ago.

I now used my program to calculate the irradiance for one day in 2016 between the wavelengths of 180.5 nm and 1797.62 nm, a quite big part of the solar spectrum, and got the value 627 W/m2. I computed the difference between this and one day in 2003, approximately one solar cycle earlier. I got 0.61 W/m2, which is 0.1% in 13 years rather than 24 years. Of course this is not an average value, it is not really well adjusted to the solar cycle, and fluctuations play a big role in some parts of the spectrum, but even so, this might indicate that the overall rate of rise in solar radiation may have doubled. Likewise concerning the question of the Sun's luminosity: to assess luminosity one would need to take into account the actual satellite-Earth orbit on the day of measurement, as the distance to the Sun varies. But at first glance this all still appears disconcerting.

Given that this spectral range has for example an overlap with the absorption of water (clouds!), this should at least be discussed.

See how the spectrum splits into a purple and dark red line in the lower circle? (Click to enlarge.)

[Figure: difference in spectrum between day 132 and day 5073]

The upper circle displays another rise, which is discussed in the forum.

So, concluding, all this looks as if it needs to be monitored a bit more closely. It is important to see whether these rises in irradiance also show up in other measurements, so I asked in the Azimuth Forum, but so far have gotten no answer.

The Russian Wikipedia page about solar irradiance unfortunately contains no links to Russian satellite missions (unless I have overlooked something), and there is no Chinese or Indian Wikipedia article about solar irradiance. I also couldn't find any publicly accessible spectral irradiance measurements on the ESA website (although they have some satellites out there). In December I wrote an email to Wolfgang Finsterle, head of the solar radiometry section of the World Radiation Center (WRC), but I've had no answer yet.

In short: if you know about publicly available solar spectral irradiance measurements other than the LISIRD ones, then please let me know.


by John Baez at January 14, 2017 11:24 PM

Christian P. Robert - xi'an's og

incredible India

[The following is a long and fairly naïve rant about India and its contradiction, without pretence at anything else than writing down some impressions from my last trip. JATP: Just another tourist post!]

Incredible India (or Incredible !ndia) is the slogan chosen by the Indian Ministry of Tourism to promote India. And it is indeed an incredible country, from its incredibly diverse landscapes [and not only the Himalayas!] and eco-systems, to its incredibly huge range of languages [although I found out during this trip that the differences between Urdu and Hindi are more communitarian and religious than linguistic, as they both derive from Hindustani, although the alphabets completely differ] and religions [a mixed blessing], to its incredibly rich history and culture, to its incredibly wide offer of local cuisines [as shown by the Bengali sample below, where the mustard seed fish cooked in banana leaves and the fried banana flowers are not visible!] and even wines [like Sula Vineyards, which offers a pretty nice Viognier]. Not to mention incredibly savoury teas from Darjeeling and Assam.

But India is also in-credible in that it is fairly hard to believe it can function at all, and still function it does! Despite or due to a massive bureaucracy, the federal and local states do not seem to operate with much or any efficiency [or such is the impression I gathered from my few trips there]. At least at the level of doing little against extreme poverty and extreme inequalities, or against massive air and water pollution [which puts India signing the Paris COP21 agreement under a bleak light, like this sun in the haze of a Kolkata highway], or towards urban planning, from garbage collection to traffic regulations, or women's and children's conditions. And the current BJP government seems more intent on encouraging Hindu nationalism and religion [despite India's secular constitution] than on rationalising Indian bureaucracy and politics. Although a side effect of the sudden demonetisation of 500 and 1000 rupee notes [which means one can only withdraw 2000 rupees at once, a slight nuisance when visiting India for a few days] may induce a massive jump into a cash-free economy. In Kolkata I noticed the smallest street food stalls posting about pay-by-phone abilities. Since about everyone has a mobile phone, if phones can be used as virtual wallets, this may represent an incredible move towards that cash-free market. (But also a risk of massive fraud targeting those with no other means of payment.)

The country is thus incredible in its numerous ways of bypassing the State's inaction, not all of them to be commended of course, and with extreme consequences for the poorest fraction of the population. But far from being a dystopia, it may open a window on the future metropolises all around the World, when environmental and migration pressures will see the collapse of our welfare states.

 

 

 

 


Filed under: Kids, Mountains, pictures, Running, Travel Tagged: air pollution, Bengali food, cash-free economy, cellphone, child labour, Darjeeling, ghee, India, Kolkata, panipuri, pollution, puri, Ravi Shankar, street food, traffic

by xi'an at January 14, 2017 11:17 PM

Peter Coles - In the Dark

Paddy’s Market

When I was a kid my Mum would use the expression “Paddy’s Market” quite often, to describe a messy, chaotic place e.g.

Tidy up your bedroom! It’s like Paddy’s Market!

Actually, that’s not so much an “e.g.” as an “invariably”.

Anyway, I always assumed that “Paddy’s Market” was a well-known term, but later began to think it wasn’t used very much at all in the Big Wide World.

The name “Paddy’s Market” clearly derives from the name of a place in Glasgow, which is perhaps testament to my family’s Scottish connections, but it may be commonplace on Tyneside (where I was born) and even elsewhere. I just don’t know how widespread its use is.

Anyone out there in the blogosphere care to comment?


by telescoper at January 14, 2017 10:00 PM

January 13, 2017

Christian P. Robert - xi'an's og

la maison des mathématiques

When I worked with Jean-Michel Marin at Institut Henri Poincaré the week before Xmas, there was this framed picture standing on the ground, possibly in preparation for exhibition in the Institute. I found this superposition of the lady cleaning the blackboard of its maths formulas and of the seemingly unaware mathematician both visually compelling, in the sheer geometric aesthetics of the act, and somewhat appalling in its message. Especially when considering the initiatives taken by IHP towards reducing the gender gap in maths. After inquiring into the issue, I found that this picture was part of a whole photography exhibit on IHP by Vincent Moncorgé, now published as a book, La Maison des Mathématiques by Villani, Uzan, and Moncorgé. Most pictures are on-line and I found them quite appealing. Except again for the above.


Filed under: Books, Kids, pictures, Statistics, University life Tagged: Akashic Books, book review, exhibit, IHP, Institut Henri Poincaré, la maison des mathématiques, Paris, photograph, Vincent Moncorgé

by xi'an at January 13, 2017 11:17 PM

Clifford V. Johnson - Asymptotia

Handy

Hands have become almost as important as faces for helping communicate both ideas and emotions in the book. I've become a fan of constructing hands in various positions. In fact, I like it to an almost perverse degree, some might think, especially given how little people might even look at them. But then again, I'm known for actually enjoying -even looking forward to- dentist visits, so maybe this was predictable.

-cvj

The post Handy appeared first on Asymptotia.

by Clifford at January 13, 2017 07:41 PM

Symmetrybreaking - Fermilab/SLAC

STOMP visits CERN

A group known for making music with everyday objects recently got their hands on some extraordinary props.

STOMP performers drum on a retired LHC cavity

CERN, home to the Large Hadron Collider, is known for high-speed, high-energy feats of coordination, so it’s only fitting that the touring percussion group STOMP would stop by for a visit.

After taking a tour of the research center, STOMP performers were game to share their talent by turning three pieces of retired scientific equipment into a gigantic drum set. Check out the video below to hear the beat of an LHC dipole magnet, the Gargamelle bubble chamber and a radiofrequency cavity from the former Large Electron-Positron Collider.

As CERN notes, these are trained professionals who were briefed on how to avoid damaging the equipment they used. Lab visitors are generally discouraged from hitting the experiments.

[Embedded video: dJdYXX1VVVI]

by Kathryn Jepsen at January 13, 2017 07:40 PM

Emily Lakdawalla - The Planetary Society Blog

Want to build on our LightSail work? Here are some resources to get started
The Planetary Society is launching a new webpage showcasing LightSail academic papers, schematics, parts and imagery.

January 13, 2017 04:30 PM

January 12, 2017

ZapperZ - Physics and Physicists

Imaging Fukushima Reactor Core Using Muons
If you are in the US, did you see the NOVA episode on PBS last night titled "The Nuclear Option"? If you did, did you miss, or not miss, the technique of imaging the Fukushima reactor core using muon tomography developed at Los Alamos?

You see, whenever I see something like this, I want to shout out loud to the public on another example where our knowledge from high energy physics/elementary particle physics can produce a direct practical benefit. A lot of people still question whether our efforts in these so-called esoteric areas are worth funding. So whenever I see something like this, there should be a conscious and precise effort to point out that:

1. We had to first understand the physics of muons from our knowledge of the Standard Model of elementary particles.

2. Then those who understand this will often start to figure out, frequently in collaboration with those in other areas of physics, what could possibly be done with such knowledge.

3. And finally, they come up with a practical application of that knowledge, which originated out of an area that often produces no immediate and obvious application.

Things like this must be pointed out in SIMPLE TERMS to both the public and the politicians, because that is the only level that they can comprehend. I've pointed out previously many examples of the benefits that we get, directly or indirectly, from such field of study. It should be a requirement that any practical application should present a short "knowledge genealogy" of where the idea came from. It will be an eye-opener to many people.

Zz.

by ZapperZ (noreply@blogger.com) at January 12, 2017 11:25 PM

Symmetrybreaking - Fermilab/SLAC

Twinkle, twinkle, little supernova

Using Twinkles, the new simulation of images of our night sky, scientists get ready for a gigantic cosmological survey unlike any before.

A simulation of stars against a black background

Almost every worthwhile performance is preceded by a rehearsal, and scientific performances are no exception. Engineers test a car’s airbag deployment using crash test dummies before incorporating them into the newest model. Space scientists fire a rocket booster in a test environment before attaching it to a spacecraft in flight.

One of the newest “training grounds” for astrophysicists is called Twinkles. The Twinkles dataset, which has not yet been released, consists of thousands of simulated, highly realistic images of the night sky, full of supernovae and quasars. The simulated-image database will help scientists rehearse a future giant cosmological survey called LSST.

LSST, short for the Large Synoptic Survey Telescope, is under construction in Chile and will conduct a 10-year survey of our universe, covering the entire southern sky once a year. Scientists will use LSST images to explore our galaxy to learn more about supernovae and to shine a light on the mysterious dark energy that is responsible for the expansion of our universe.

It’s a tall order, and it needs a well prepared team. Scientists designed LSST using simulations and predictions for its scientific capabilities. But Twinkles’ thousands of images will give them an even better chance to see how accurately their LSST analysis tools can measure the changing brightness of supernovae and quasars. That’s the advantage of using simulated data. Scientists don’t know about all the objects in the sky above our heads, but they do know their simulated sky— there, they already know the answers. If the analysis tools make a calculation error, they’ll see it.
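To make that idea concrete, here is a toy closed-loop test in Python. This is my own illustration rather than anything from the Twinkles pipeline: inject objects with known brightnesses, add noise, run a stand-in "analysis", and compare the recovered values against the truth.

```python
# Toy closed-loop test: recover injected "true" brightnesses from noisy
# simulated measurements and check for bias (illustration only).
import numpy as np

rng = np.random.default_rng(42)

true_flux = rng.uniform(10.0, 100.0, size=1000)              # known injected brightnesses
n_visits = 50                                                # repeated observations per object
noise = rng.normal(0.0, 2.0, size=(n_visits, true_flux.size))
observed = true_flux + noise                                 # simulated noisy measurements

recovered = observed.mean(axis=0)                            # stand-in for the analysis pipeline
bias = np.mean(recovered - true_flux)
scatter = np.std(recovered - true_flux)
print(f"mean bias = {bias:+.3f}, scatter = {scatter:.3f}")   # expect ~0 and ~2/sqrt(50)
```

Because the truth is known by construction, any systematic offset in the recovered values immediately flags a problem in the measurement code, which is exactly the logic behind testing on a simulated sky.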

The findings will be a critical addition to LSST’s measurements of certain cosmological parameters, where a small deviation can have a huge impact on the outcome.

“We want to understand the whole path of the light: From other galaxies through space to our solar system and our planet, then through our atmosphere to the telescope – and from there through our data-taking system and image processing,” says Phil Marshall, a scientist at the US Department of Energy's SLAC National Accelerator Laboratory who leads the Twinkles project. “Twinkles is our way to go all the way back and study the whole picture instead of one single aspect.”

Scientists simulate the images as realistically as possible to figure out if some systematic errors add up or intertwine with each other. If they do, it could create unforeseen problems, and scientists of course want to deal with them before LSST starts.

Twinkles also lets scientists practice sorting out a different kind of problem: A large collaboration spread across the whole globe that will perform numerous scientific searches simultaneously on the same massive amounts of data.

Richard Dubois, senior scientist at SLAC and co-leader of the software infrastructure team, works with his team of computing experts to create methods and plans to deal with the data coherently across the whole collaboration and advise the scientists to choose specific tools to make their life easier.

“Chaos is a real danger; so we need to keep it in check,” Dubois says. “So with Twinkles, we test software solutions and databases that help us to keep our heads above water.”

The first test analysis using Twinkles images will start toward the end of the year. During the first go, scientists will extract Type Ia supernovae and quasars and learn how to interpret the automated LSST measurements.

“We hid both types of objects in the Twinkles data,” Marshall says. “Now we can see whether they look the way they’re supposed to.”

LSST will start up in 2022, and the first LSST data will be released at the end of 2023.

“High accuracy cosmology will be hard,” Marshall says. “So we want to be ready to start learning more about our universe right away!”

by Ricarda Laasch at January 12, 2017 08:15 PM

Emily Lakdawalla - The Planetary Society Blog

Blitzing Congress for NASA
Last February, a group called the Space Exploration Alliance held their annual "legislative blitz," walking the halls of Congress to sway lawmakers toward increased support for NASA's 2017 budget.

January 12, 2017 05:40 PM

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
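To give a flavour of the simplest of those linear algebra ingredients, here is a bare-bones conjugate gradient iteration in Python. This is the generic textbook algorithm for a Hermitian positive-definite system, not code from the book; production lattice codes add preconditioning, mixed precision and much more.

```python
# Generic textbook conjugate gradient (CG) for A x = b with A Hermitian
# positive definite, as used (in far more sophisticated form) to invert
# fermion matrices in lattice QCD. Illustration only.
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    p = r.copy()                        # initial search direction
    rr = np.vdot(r, r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / np.vdot(p, Ap)     # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = np.vdot(r, r)
        if np.sqrt(rr_new.real) < tol:  # converged
            break
        p = r + (rr_new / rr) * p       # new conjugate search direction
        rr = rr_new
    return x

# Tiny check on a random symmetric positive-definite system:
n = 50
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)
b = np.random.rand(n)
x = cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```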

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at January 12, 2017 04:38 PM

January 11, 2017

Emily Lakdawalla - The Planetary Society Blog

CYGNSS Launch: The Human Side
What is it like behind the scenes before, during, and after the launch of a spacecraft?

January 11, 2017 05:36 PM

The n-Category Cafe

Category Theory in Barcelona

I’m excited to be in Barcelona to help Joachim Kock teach an introductory course on category theory. (That’s a link to bgsmath.cat — categorical activities in Catalonia have the added charm of a .cat web address.) We have a wide audience of PhD and masters students, specializing in subjects from topology to operator algebras to number theory, and representing three Barcelona universities.

We’re taking it at a brisk pace. First of all we’re working through my textbook, at a rate of one chapter a day, for six days spread over two weeks. Then we’re going to spend a week on more advanced topics. Today Joachim did Chapter 1 (categories, functors and natural transformations), and tomorrow I’ll do Chapter 2 (adjunctions).

I’d like to use this post for two things: to invite questions and participation from the audience, and to collect slogans. Let me explain…

Joachim pointed out today that category theory is full of slogans. Here’s the first one:

It’s more important how things interact than what they “are”.

As he observed, the question of what things “are” is slippery. Let me quote a bit from my book:

In his excellent book Mathematics: A Very Short Introduction, Timothy Gowers considers the question: “What is the black king in chess?”. He swiftly points out that this question is rather peculiar. It is not important that the black king is a small piece of wood, painted a certain colour and carved into a certain shape. We could equally well use a scrap of paper with “BK” written on it. What matters is what the black king does: it can move in certain ways but not others, according to the rules of chess.

In a categorical context, what an object “does” means how it interacts with the world around it — the category in which it lives.

Tomorrow I’ll proclaim some more slogans — I have some in mind. But I’d like to hear from you too. What are the most important slogans in category theory? And what do they mean to you?

I’d also like to try an experiment. The classes move rather quickly, so there’s not a huge amount of time in them for discussion or questions. But I’d like to invite students in the class to ask questions here. You can post anonymously — no one will know it’s you — and with any luck, you’ll get interesting answers from multiple points of view. So please, don’t be inhibited: ask whatever’s on your mind. You can even include LaTeX, in more or less the usual way: just put stuff between dollar signs. No tinguis por! (Don’t be afraid!)

by leinster (Tom.Leinster@ed.ac.uk) at January 11, 2017 02:11 PM

Emily Lakdawalla - The Planetary Society Blog

Hidden Figures: Triumphant in the theater, sobering after
Go see Hidden Figures, and bring your kids. Despite its serious subject matter, the movie is joyful, often funny, and, in the end, triumphant.

January 11, 2017 12:48 AM

January 10, 2017

Lubos Motl - string vacua and pheno

CMS: a small Higgs to \(\mu\mu\tau\tau\) decay hint of a \(19\GeV\) boson
Statistics hasn't ceased to hold in 2017, even though the latter is a prime integer. So excesses keep on appearing in the LHC experiments, including the newly published CMS preprint about an analysis based on 20 inverse femtobarns of 2012 data, i.e. data at a center-of-mass energy of \(8\TeV\):
Search for light bosons in decays of the \(125\GeV\) Higgs boson in proton-proton collisions at \(\sqrt{s} = 8\TeV\) (by Aaallah and 2000+ co-authors)
They only look at events in which the Higgs boson discovered in 2012 is produced – the number of collisions of this type (which were not known at all before late 2011) is so high that the experimenters may look at small special subsets and still say something interesting about these subsets.



Off-topic but fun chart of the day. Source.

So they focus on events in which the \(125\GeV\) Higgs decays to four fermions, as if it were first decaying to two lighter bosons, \(h\to aa\). The final states they probe include "four taus", "two muons plus two taus", and "two muons and two bottom quarks". It's not quite clear to me why they omit the other combinations, e.g. "two taus and two bottom quarks" etc. (except that I know that "four muons" was focused on in a special paper), but there may be some mysterious explanation.




They say that there's no statistical excess anywhere. But what this statement means should be interpreted a bit carefully because it potentially understates the deviations from the Standard Model they are seeing. By "no statistical excesses", they mean that there's no excess whose global significance, i.e. significance reduced by the look-elsewhere correction, exceeds 2 sigma.




In other words, the statement "nothing can be seen here" is compatible with the existence of more than 2-sigma – and perhaps a bit higher – excesses if evaluated locally, i.e. without any look-elsewhere reduction of the confidence level. And yes, those are seen.
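For readers unfamiliar with the jargon, here is a crude Python illustration of how a local excess gets diluted into a global one. The "effective number of independent mass hypotheses" below is an assumption of mine, not a number from the CMS paper, and the real look-elsewhere correction is more sophisticated.

```python
# Crude trials-factor illustration of the look-elsewhere effect
# (assumptions mine, not the CMS procedure).
from scipy.stats import norm

def local_to_global(local_sigma, n_trials):
    p_local = norm.sf(local_sigma)                  # one-sided local p-value
    p_global = 1.0 - (1.0 - p_local) ** n_trials    # chance of an excess this large anywhere
    return p_global, norm.isf(p_global)             # global p-value and its sigma equivalent

# e.g. a 3-sigma local excess scanned over ~30 effectively independent mass points:
p_glob, sigma_glob = local_to_global(3.0, 30)
print(f"global p = {p_glob:.3f}, i.e. about {sigma_glob:.1f} sigma")
```

With these assumed numbers a roughly 3-sigma local bump corresponds to well under 2 sigma globally, which is the kind of dilution the preprint's statement reflects.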




This chart – Figure 6 on Page 18 (page 20 of 48 according to the PDF file) – shows the Brazil bands for the final state with \(\mu\mu\tau\tau\). The tau leptons quickly decay and they split the final channels according to the decay products of these \(\tau\) as well – although, even in this case, it doesn't quite seem to me that they have listed all the options. ;-)

You see that the black, observed curves are sometimes smooth, sometimes very wiggly. The wiggles are sometimes unusually periodic – like in the upper left channel. But the most remarkable excess is seen in the upper right channel in which the two \(\tau\) leptons decay to one electron and one muon, respectively (plus neutrinos – missing energy).

You see that the distance from the Brazil band – for the mass \(m_a\) of the new light bosons depicted on the \(x\)-axis that is around \(20\GeV\) – is substantial. If a Brazilian soccer player deviated from the Brazilian land this severely, he would surely get drowned in the Atlantic Ocean. It looks like a "many sigma" deviation locally and I am a bit surprised that it doesn't make it to 2 sigma globally.

Four other channels show nothing interesting around \(m_a\sim 20\GeV\) but the last one, the lower left channel with both \(\tau\) decaying hadronically – shows a small local (and in this case, much narrower – the energy is measured accurately because no energy is lost to ghostly neutrinos in hadronic decays) excess for \(m_a\sim 19\GeV\). When these two excesses (and the flat graphs from the other channels) are added, we see the combined graph in the lower right corner which shows something like a locally 3-sigma excess for \(m_a\sim 19\GeV\).

It's almost certainly a fluctuation. If it weren't one, it should be interpreted as the "second Higgs boson" in a general 2HDM (two-Higgs-doublet model) which is ugly and unmotivated by itself. But such models may be typically represented as the Higgs part of the NMSSM (next-to-minimal supersymmetric standard model) which is very nice and explains the hierarchy problem more satisfactorily than MSSM. Even though it also has two Higgs doublets and therefore two CP-even neutral Higgs bosons in them, MSSM itself cannot reproduce these rather general 2HDM models.

It would be of course exciting if the LHC could suddenly discover a new \(20\GeV\) Higgs-like boson and potentially open the gates to truly new physics like supersymmetry but like in so many cases, I would bet on "probably not" when it comes to this modest excess.

Don't you find it a bit surprising that now, in early 2017, we are still getting preprints based on the evaluation of the 2012 LHC data? The year was called "now" some five years ago. Are they hiding something? And when they complete an analysis like that, why don't they directly publish the same analysis including all the 2015+2016 = 4031 data as well? Surely the analysis of the same channel applied to the newer, \(13\TeV\) data is basically the same work.

Maybe they're trying to pretend that they're writing more papers, and therefore doing more work? I don't buy it and neither should the sponsors and others. Things that may be done efficiently should be done efficiently. If it leads to the people's having more time to enjoy their lives instead of writing very similar long papers that almost no one reads, they should have more time to enjoy their life – and to collect energy needed to make their work better and more happily.

Another new CMS paper searching for SUSY with top tagging shows no excess, not even 2-sigma excess locally, but there's a nice more than 1-sigma repulsion from the point with a \(600\GeV\) top squark and a \(300\GeV\) neutralino or so.

by Luboš Motl (noreply@blogger.com) at January 10, 2017 06:01 PM

Symmetrybreaking - Fermilab/SLAC

How heavy is a neutrino?

The question is more complicated than it seems.


Neutrinos are elementary particles first discovered six decades ago. 

Over the years, scientists have learned several surprising things about them. But they have yet to answer what might sound like a basic question: How much do neutrinos weigh? The answer could be key to understanding the nature of the strange particles and of our universe.

To understand why figuring out the mass of neutrinos is such a challenge, first you must understand that there’s more than one way to picture a neutrino.

Neutrinos come in three flavors: electron, muon and tau. When a neutrino hits a neutrino detector, a muon, electron or tau particle is produced. When you catch a neutrino accompanied by an electron, you call it an electron neutrino, and so on. 

Knowing this, you might be forgiven for thinking that there are three types of neutrinos: electron neutrinos, muon neutrinos and tau neutrinos. But that’s not quite right. 

That’s because every neutrino is actually a quantum superposition of all three flavors. Depending on the energy of a neutrino and where you catch it on its journey, it has a different likelihood of appearing as electron-flavored, muon-flavored or tau-flavored.

Armed with this additional insight, you might be forgiven for thinking that, when all is said and done, there is actually just one type of neutrino. But that’s even less right. 

Scientists count three types of neutrino after all. Each one has a different mass and is a different mixture of the three neutrino flavors. These neutrino types are called the three neutrino mass states.
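The flavor that shows up in a detector depends on the neutrino's energy and how far it has travelled. A minimal way to see this dependence is the standard two-flavor vacuum oscillation formula, sketched below in Python; it ignores the third flavor and the matter effects discussed later in this article, and the parameter values are only rough, illustrative numbers.

```python
# Two-flavor vacuum oscillation probability (a simplification of the full
# three-flavor, matter-affected case). Parameter values are illustrative.
import numpy as np

def p_oscillation(L_km, E_GeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
    """Probability of detecting a muon neutrino as an electron neutrino,
    in the two-flavor approximation with an effective mixing angle."""
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Roughly NOvA-like numbers: an ~810 km baseline and a ~2 GeV beam.
print(f"P(nu_mu -> nu_e) ~ {p_oscillation(810, 2.0):.3f}")
```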

[Illustration by Sandbox Studio, Chicago, with Corinne Mucha]

A weighty problem

We know that the masses of these three types of neutrinos are small. We know that the flavor mixture of the first neutrino mass state is heavy on electron flavor. We know that the second is more of an even blend of electron, muon and tau. And we know that the third is mostly muon and tau.

We know that the masses of the first two neutrinos are close together and that the third is the odd one out. What we don’t know is whether the third one is lighter or heavier than the others. 

The question of whether this third mass state is the heaviest or the lightest mass state is called the neutrino mass hierarchy (or neutrino mass ordering) problem.

[Illustration by Sandbox Studio, Chicago, with Corinne Mucha]

Easy as 1,2,3—or 3,1,2?

Some models that unify the different forces in the Standard Model of particle physics predict that the neutrino mass ordering will follow the pattern 1, 2, 3—what they call a normal hierarchy. Other models predict that the mass ordering will follow the pattern 3, 1, 2—an inverted hierarchy. Knowing whether the hierarchy is normal or inverted can help theorists answer other questions.

For example, four forces—the strong, weak, electromagnetic and gravitational forces—govern the interactions of the smallest building blocks of matter. Some theorists think that, in the early universe, these four forces were united into a single force. Most theories about the unification of forces predict a normal neutrino mass hierarchy. 

Scientists’ current best tools for figuring out the neutrino mass hierarchy are long-baseline neutrino experiments, most notably one called NOvA.

[Illustration by Sandbox Studio, Chicago, with Corinne Mucha]

Electron drag

The NOvA detector, located in Minnesota near the border of Canada, studies a beam of neutrinos that originates at Fermi National Accelerator Laboratory in Illinois.

Neutrinos very rarely interact with other matter. That means they can travel 500 miles straight through the Earth from the source to the detector. In fact, it’s important that they do so, because as they travel, they pass through trillions of electrons.

This affects the electron-flavor neutrinos—and only the electron-flavor neutrinos—making them seem more massive. Since the first and second mass states contain more electron flavor than the third, those two experience the strongest electron interactions as they move through the Earth. 

This interaction has different effects on neutrinos and antineutrinos—and the effects depend on the mass hierarchy. If the hierarchy is normal, muon neutrinos will be more likely to turn into electron neutrinos, and muon antineutrinos will be less likely to turn into electron antineutrinos. If the hierarchy is inverted, the opposite will happen. 

So if NOvA scientists see that, after traveling through miles of rock and dirt, more muon neutrinos and fewer muon antineutrinos than expected have shifted flavors, it will be a sign the mass hierarchy is normal. If they see fewer muon neutrinos and more muon antineutrinos have shifted flavors, it will be a sign that the mass hierarchy is inverted. 

The change is subtle. It will take years of data collection to get the first hint of an answer. Another, shorter long-baseline neutrino experiment, T2K, is taking related measurements. The JUNO experiment under construction in China aims to measure the mass hierarchy in a different way. The definitive measurement likely won’t come until the next generation of long-baseline experiments, DUNE in the US and the proposed Hyper-Kamiokande experiment in Japan.

Neutrinos are some of the most abundant particles in the universe. As we slowly uncover their secrets, they give us more clues about how our universe works.

by Kathryn Jepsen at January 10, 2017 04:13 PM

Emily Lakdawalla - The Planetary Society Blog

SpaceX is ready to fly rockets again. An expert talks about the reason a Falcon 9 blew up last year
SpaceX says they fixed a problem with the helium pressurization system that destroyed a Falcon 9 rocket last year. The company pushes the boundaries of rocket science, creating an occasional jaw-dropping fireball in the process. But will the risk-reward equation change when SpaceX starts flying astronauts?

January 10, 2017 12:02 PM

January 09, 2017

ZapperZ - Physics and Physicists

Mpemba Effect Is Still Hot After All These Years
OK, maybe not hot, but it is certainly at least lukewarm.

If you don't know anything about this, I've made several posts on the Mpemba effect before (read here, here, here, and here). Briefly, this is the effect where hot water is seen to freeze faster than cold water. Even after its purported discovery many years ago, the validity of this effect, and the possible explanation for it are still being debated.

Add this report to the body of discussion. It seems that there are new papers that are using molecular bonds in water as the possible explanation for this effect.

Now researchers from the Southern Methodist University in Dallas and Nanjing University in China think they might have a solution - strange properties of bonds formed between hydrogen and oxygen atoms in water molecules could be the key to explaining the elusive Mpemba effect.

Simulations of water molecule clusters revealed that the strength of hydrogen bonds (H-bonds) in a given water molecule depends on the arrangements of neighbouring water molecules.

"As water is heated, weaker bonds break, and groups of molecules form into fragments that can realign to form the crystalline structure of ice, serving as a starting point for the freezing process," Emily Conover reports for Science News.

"For cold water to rearrange in this way, weak hydrogen bonds first have to be broken."
I'm sure this will not be the last time we hear about this.

Zz.

by ZapperZ (noreply@blogger.com) at January 09, 2017 04:18 PM

Tommaso Dorigo - Scientificblogging

Getting Married
I am happy to report, with this rather unconventional blog posting, that I am getting married on January 12. My companion is Kalliopi Petrou, a lyrical singer. There will be no huge party involved in the event, as Kalliopi and I have lived together for some time already and the ceremony will be minimalistic. None the less, we do give importance to this common decision, so much so that I thought it would be a good thing to broadcast in public - here.


by Tommaso Dorigo at January 09, 2017 03:25 PM

Lubos Motl - string vacua and pheno

Disappointing composition of top-cited 2016 HEP papers
Stephen Hawking celebrated his 75th birthday yesterday, congratulations! Lots of other websites remind you of the basic facts. He's well-known to the physicists primarily for the Hawking radiation of black holes and related insights about black hole thermodynamics; but also for his and Penrose's singularity theorems and other things. He's also revolutionized the popular physics book market. As Hawking mentioned, he has sold more books about physics than Madonna has about sex.

The experimental counterpart of this statement isn't quite true. We have observed fewer evaporating black holes than Madonna's sex scenes, however, namely zero.

I found it interesting to look at the 2016 data papers on high energy physics that already have over 100 citations according to INSPIRE, the database of particle physics papers. This particular search finds 126 papers right now.




The beginning of the list of the papers looks like "almost everything would be experimental papers". But after a few dozens, you must change your mind. A majority of the papers is about the \(750\GeV\) diphoton excess that was exciting many particle physicists a year ago. Recall that it was announced in December 2015 and before the more than 4-sigma excess seen both by CMS and ATLAS was buried by the new data published in Summer 2016, hundreds of papers – often interesting papers – were written to offer possible explanations of this possible new phenomenon.




Many of these papers rightfully cited their counterparts, which is why dozens of them have surpassed the threshold of 100 citations by now. Needless to say, now it looks rather clear that there are no new elementary particles of mass close to \(750\GeV\) – to say the least, no new particles of this mass are as easily visible as we could imagine one year ago.

So this tremendous activity was a kind of a bet on an event that couldn't have been predicted. Those who invested their time and energy have basically lost the bet. The particle isn't there, after all. But just because they lost the bet doesn't mean that the time and energy were completely wasted or that it was torture for the physicists to do the work.

Instead, the physicists were genuinely excited by the chance for a new discovery and when a physicist (or someone else) is excited, the work is much easier. And even though the particle isn't there, the papers have clearly articulated and sharpened the details with which various theoretically intriguing models could explain a similar new effect if one turned out to be real.

But aside from the papers on the \(750\GeV\) diphoton excess, most of the top-cited papers are experimental, indeed. Something like 5 papers in the list are analyses by the LIGO collaboration of their exciting direct detection of the gravitational waves, especially GW150914, which allowed us to hear the Universe. Many physics pundits have identified this discovery as the most important development in physics in 2016 and I would probably agree.

Additional top-cited experimental papers include the new Review of Particle Physics and articles by the teams at the LHC, Fermi (the \({\rm GeV}\) galactic excess is an intriguing topic of one of the papers), and the terrestrial searches for dark matter, especially LUX and XENON. None of these experimental papers has made a clear experimental discovery but that's not necessarily the fault of the experimenters.

Formal theoretical papers – or, almost equivalently, theoretical papers after the diphoton models are subtracted – are rare. One successful paper is about tetraquarks and pentaquarks. I find these QCD bound states rather messy yet boring but they represent serious work on messy and boring topics.

One paper by Strominger and two junior colleagues is about dS/CFT applied to Vasiliev's higher-spin theory. The holographic model living on the "far future" dS boundary includes anticommuting scalars with a symplectic symmetry. I haven't discussed that paper but the degree of detail they may deduce for this dS holographic duality makes the whole work persuasive and intriguing.

Hawking, Perry, Strominger wrote about their "soft hair" solution of the black hole information paradox. I've discussed that paper many times and unfortunately I don't think that the far-reaching conceptual claims are correct and I am convinced that most true quantum gravity experts find them flawed, too.

Maldacena, Shenker, and Stanford have proposed a very interesting general bound on chaos.

Several top-cited papers in the list are older and only got there because they were published in paper journals in 2016. If you think that I should have discussed some paper in the list, or some paper is missing for some undeserved reasons, let me know.

I find it unfortunate that for a few years, there hasn't been a too specific "fad" or concentrated activity in formal theoretical particle physics – string theory etc. – that would make it to such lists. Well, just to be sure, several papers above could be viewed as representatives of the "information in quantum gravity" subindustry but I think that this topic is too broad to be called a "fad". Apparently, there aren't any realistic problems that could be solved – or looming discoveries that could be made – that are eagerly expected by a significant fraction of the world's elite theoretical physicists right now. So I think that if there are some ingenious undergraduate seniors at a university anywhere in the world, they have a much harder time to turn into stars than in other periods of the history of physics.

This negative situation may be partly due to historical coincidences, partly due to the decreased funding of the theorists in recent years, and partly due to the hostility towards theoretical physics that became rather widespread in the same years. I think that it's obvious by now that the jihadists who have fought against string theory and supersymmetry, among related key disciplines, have fought against theoretical physics as a whole – simply because there aren't any solid yet exciting ideas in the field that would be quite independent of string theory – and I think that they have harmed the field, indeed – well, at least sufficiently for them to deserve a severe punishment.

by Luboš Motl (noreply@blogger.com) at January 09, 2017 10:32 AM

January 08, 2017

Clifford V. Johnson - Asymptotia

Of Course You Knew…

Of course you knew that I had to do this... Let me explain, perhaps for your Sunday reading pleasure.

The prevailing culture is surprisingly and frustratingly simplistic when it comes to graphic books and comics. As late as 2017, we're still at the stage that most people in the USA (and the UK), if asked, will associate the form with the superhero genre: people in capes and/or masks fighting crime and/or saving the world. The other association is with the Sunday funnies. This is unfortunate, and, in case you don't know, far from the case in other places such as various European and Asian countries where the boundaries between written and visual literature are less rigid.

The confusion of form (visual narrative on the page) with genre (the subject of the narrative itself) drives me nuts, as it makes it very hard to get people to [...]

The post Of Course You Knew… appeared first on Asymptotia.

by Clifford at January 08, 2017 06:52 PM

January 07, 2017

Tommaso Dorigo - Scientificblogging

The Three Cubes Problem
Two days ago, before returning from Israel, my fiancee Kalliopi and I had a very nice dinner in a kosher restaurant near Rehovot in the company of Eilam Gross, Zohar Komargodski, and Zohar's wife Olga. 
The name of Eilam should be familiar to regulars of this blog, as he wrote a couple of guest posts here on similar occasions (in the first case it was shortly before the Higgs discovery was announced, when the signal was intriguing but not yet decisive; and in the second case it was about the 750 GeV resonance, which unfortunately did not materialize into a discovery). As for Zohar, he is a brilliant theorist working on applications of quantum field theory. He is young but has already won several awards, among them the prestigious New Horizons in Physics prize.


by Tommaso Dorigo at January 07, 2017 10:27 AM

January 06, 2017

ZapperZ - Physics and Physicists

The Brachistochrone Problem
There are many sources that describe this problem. Mary Boas also devoted a substantial portion of her classic text "Mathematical Methods in the Physical Sciences" to it. Here, Rhett Allain describes it once more in his Wired article.

Laymen might find it fascinating just to know the shape of the path, while physics students might find it useful, especially if they are just about to take a class on the least action principle.
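For the curious, the answer is an inverted cycloid, and a few lines of Python suffice to tabulate its standard parametrization (the radius chosen below is arbitrary; it just sets the scale of the curve).

```python
# The brachistochrone is an inverted cycloid, traced by a point on a rolling
# circle; this just tabulates the standard parametrization as an illustration.
import numpy as np

def cycloid(a=1.0, theta_max=np.pi, n=200):
    """Points (x, y) on the curve starting at the origin, y measured downward."""
    theta = np.linspace(0.0, theta_max, n)
    x = a * (theta - np.sin(theta))
    y = a * (1.0 - np.cos(theta))   # depth below the starting point
    return x, y

x, y = cycloid()
print(f"curve runs from (0, 0) down to ({x[-1]:.2f}, {y[-1]:.2f})")
```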

Zz.

by ZapperZ (noreply@blogger.com) at January 06, 2017 08:45 PM

Symmetrybreaking - Fermilab/SLAC

CERN ramps up neutrino program

The research center aims to test two large prototype detectors for the DUNE experiment.

Image: DUNE prototype at CERN

In the midst of the verdant French countryside is a workshop the size of an aircraft hangar bustling with activity. In a well lit new extension, technicians cut through thick slices of steel with electric saws and blast metal joints with welding torches.

Inside this building sits its newest occupant: a two-story-tall cube with thick steel walls that resemble castle turrets. This cube will eventually hold a prototype detector for the Deep Underground Neutrino Experiment, or DUNE, the flagship research program hosted at the Department of Energy’s Fermi National Accelerator Laboratory to better understand the weird properties of neutrinos.

Neutrinos are the second-most abundant fundamental particle in the visible universe, but because they rarely interact with atoms, little is known about them. The little that is known presents a daunting challenge for physicists since neutrinos are exceptionally elusive and incredibly lightweight.

They’re so light that scientists are still working to pin down the masses of their three different types. They also continually morph from one of their three types into another—a behavior known as oscillation, one that keeps scientists on their toes.

“We don’t know what these masses are or have a clear understanding of the flavor oscillation,” says Stefania Bordoni, a CERN researcher working on neutrino detector development. “Learning more about neutrinos could help us better understand how the early universe evolved and why the world is made of matter and not antimatter.”

In 2015 CERN and the United States signed a new cooperation agreement that affirmed the United States’ continued participation in the Large Hadron Collider research program and CERN's commitment to serve as the European base for the US-hosted neutrino program. Since this agreement, CERN has been chugging full-speed ahead to build and refurbish neutrino detectors.

“Our past and continued partnerships have always shown the United States and CERN are stronger together,” says Marzio Nessi, the head of CERN’s neutrino platform. “Our big science project works only because of international collaboration.”

The primary goal of CERN’s neutrino platform is to provide the infrastructure to test two large prototypes for DUNE’s far detectors. The final detectors will be constructed at Sanford Lab in South Dakota. Eventually they will sit 1.5 kilometers underground, recording data from neutrinos generated 1300 kilometers away at Fermilab.

Two 8-meter-tall cubes, currently under construction at CERN, will each contain 770 metric tons of liquid argon permeated with a strong electric field. The international DUNE collaboration will construct two smaller, but still large, versions of the DUNE detector to be tested inside these cubes.

In the first version of the DUNE detector design, particles traveling through the liquid knock out a trail of electrons from argon atoms. This chain of electrons is sucked toward the 16,000 sensors lining the inside of the container. From this data, physicists can derive the trajectory and energy of the original particle.

In the second version, the DUNE collaboration is working on a new type of technology that introduces a thin layer of argon gas hovering above the liquid argon. The idea is that the additional gas will amplify the signal of these passing particles and give scientists a higher sensitivity to low-energy neutrinos. Scientists based at CERN are currently developing a 3-cubic-meter model, which they plan to scale up into the much larger prototype in 2017.

In addition to these DUNE prototypes, CERN is also refurbishing a neutrino detector, called ICARUS, which was used in a previous experiment at the Italian Institute for Nuclear Physics’ Gran Sasso National Laboratory in Italy. ICARUS will be shipped to Fermilab in March 2017 and incorporated into a separate experiment.

CERN plans to serve as a resource for neutrino programs hosted elsewhere in the world as scientists delve deeper into this enigmatic niche of particle physics.

A version of this article was published by Fermilab.

by Sarah Charley at January 06, 2017 06:09 PM

January 05, 2017

Symmetrybreaking - Fermilab/SLAC

Anything to declare?

Sometimes being a physicist means giving detector parts the window seat.

Image: Detector by air

John Conway knows the exact width of airplane aisles (15 inches). He also personally knows the Transportation Security Administration operations manager at Chicago’s O’Hare Airport. That’s because Conway has spent the last decade transporting extremely sensitive detector equipment in commercial airline cabins.

“We have a long history of shipping particle detectors through commercial carriers and having them arrive broken,” says Conway, who is a physicist at the University of California, Davis. “So in 2007 we decided to start carrying them ourselves. Our equipment is our baby, so who better to transport it than the people whose work depends on it?”

Their instrument isn’t musical, but it’s just as fragile and irreplaceable as a vintage Italian cello, and it travels the same way. Members of the collaboration for the CMS experiment at CERN research center tested different approaches for shipping the instrument by embedding accelerometers in the packages. Their best method for safety and cost-effectiveness? Reserving a seat on the plane for the delicate cargo.

In November Conway accompanied parts of the new CMS pixel detector from the Department of Energy's Fermi National Accelerator Laboratory in Chicago to CERN in Geneva. The pixels are very thin silicon chips mounted inside a long cylindrical tube. This new part will sit in the heart of the CMS experiment and record data from the high-energy particle collisions generated by the Large Hadron Collider.

“It functions like the sensor inside a digital camera,” Conway said, “except it has 45 megapixels and takes 40 million pictures every second.”

Scientists and engineers assembled and tested these delicate silicon disks at Fermilab before Conway and two colleagues escorted them to Geneva. The development and construction of the component pieces took place at Fermilab and universities around the United States.

Conway and his colleagues reserved each custom-made container its own economy seat and then accompanied these precious packages through check-in, security and all the way to their final destination at CERN. And although these packages did not leave Fermilab through the shipping department, each carried its own official paperwork.

“We’d get a lot of weird looks when rolling them onto the airplane,” Conway says. “One time the flight crew kept joking that we were transporting dinosaur eggs.”

After four trips by three people across the Atlantic, all 12 components of the US-built pixel detectors are at CERN and ready for integration with their European counterparts. This winter the completed new pixel detector will replace its time-worn predecessor currently inside the CMS detector.

A version of this article was published by Fermilab.

by Sarah Charley at January 05, 2017 04:56 PM

ZapperZ - Physics and Physicists

Happy New Year!
A belated Happy New Year to everyone. I hope you all had a great holiday season.

Those of us in the US are facing a rather uncertain next few months. With the new administration taking office and the issue of science and science funding being trivialized during this last presidential election, no one knows where things are going. With Rick Perry slated to be nominated as the Secretary of the Dept. of Energy, it is like having the wolf looking after the sheep, since he has stated on more than one occasion that he would abolish this part of the US govt. Sorry, but I don't think he has a clue what the DOE actually does.

This is not the first time someone who has no expertise in STEM is heading a dept. that deals with STEM. I've always wondered about the logic and rationale of doing that. You never see someone who is not an expert in finance or economics heading, say, the Treasury! So why is the DOE, which has been a significant engine of research, science, and technology, and which has been credited with significant growth in our economy, being treated as an ugly stepchild? Is it because STEM and STEM funding do not have a built-in constituency that will make public and political noise?

At this point, I have very low expectations for a lot of things during these next few years.

Zz.

by ZapperZ (noreply@blogger.com) at January 05, 2017 03:24 PM

Tommaso Dorigo - Scientificblogging

Anomaly! At 35% Discount For Ten More Days
I thought it would be good to let you readers of this column know that, in case you wish to order the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" (or any other title published by World Scientific, for that matter), you have 10 more days to benefit from a 35% discount off the cover price. Just visit the World Scientific site of the book and use the discount code WS16XMAS35.


by Tommaso Dorigo at January 05, 2017 02:39 PM

Lubos Motl - string vacua and pheno

XENON100 rejects DAMA/LIBRA dark matter modulation at 5.7 sigma
DAMA/LIBRA is an Italian dark matter experiment that most colleagues apparently don't take too seriously. In recent years, it has claimed to detect some clear signatures of dark matter, especially through the dark matter seasonal modulation. Some effects are different in the summer and in the winter, and so on. This is what you would expect from a dark matter counterpart of the "aether wind", if I dare to borrow from a debunked concept. ;-)



DAMA/LIBRA is some microwave oven with some sensitive pieces within 1 meter of concrete in all directions. A theorist must have a similar idea about it as I have about a laser printer, a Canon LBP7018C, three of whose four cartridges I attempted to replace yesterday, but the non-original compatible ones got stuck in it and the printer now seems broken. Probably not my fault but I can't be sure. ;-) The Rutgers+Harvard experience taught me that a dedicated professional is needed to maintain a laser printer.

The most recent DAMA/LIBRA paper is this 2013 update which says that the statistical significance of their observed – nominally discovered – dark matter modulation signal is 9.3 sigma. If true, it's a discovery on steroids.

When the pro-dark matter side seemed to be winning in the dark matter wars, there were other reasons to think that this modulation exists. CoGeNT confirmed some modulation in 2011, too.

The atmosphere is different now and the anti-dark matter side seems to be on the offensive. It's particularly clear from today's new preprint by the XENON collaboration:
Search for Electronic Recoil Event Rate Modulation with 4 Years of XENON100 Data
XENON, one of the most formidable dark matter detectors in the world (I think that LUX and XENON are upgrading and fighting for the leadership), investigated the annual modulation as well. Using the 2010-2014 data, they have evaluated the theory with dark matter modulation and parameters suggested by the claimed DAMA/LIBRA signal. And they have excluded this theory. Experimenters normally tend to disprove theories with newly proposed effects at 2 sigma or 95% confidence level – because the "null hypothesis" to which they return isn't extraordinary and doesn't need extraordinary evidence.

But in this case, they could take the DAMA/LIBRA claim to be a "natural hypothesis" and XENON excluded it at 5.7 sigma – because it's more than five, you may say that XENON has made a discovery that DAMA/LIBRA is wrong.




Now, an alternative explanation is, of course, that XENON is wrong and DAMA/LIBRA (and perhaps CoGeNT) are still right and the dark matter modulation exists. If you want to defend this viewpoint, you may point out that the number 9.3 (the significance level of DAMA/LIBRA) is greater than the number 5.7 (XENON's negative significance level). Not too many people in the field will take this attitude – but needless to say, they may be wrong and their attitude may mainly reflect groupthink.




At any rate, if there's some groupthink in the discipline, it's complex and softly self-contradictory. The researchers in that field significantly favor dark matter. But they also favor the idea that the Italians are too messy to discover it. ;-) (Just to be sure, I know that XENON is Italian, too.)

I don't know who is right, at least not with any significant certainty. It would be painful if XENON missed this discovery despite their vastly more expensive experiment. But it may happen. For example, I still do tend to think that the positive claim of a discovery of primordial gravitational waves by BICEP* has a reasonable chance (20%?) of being right, despite the mission of their competitors at Planck to kill anything of the sort.

The situation of XENON and DAMA/LIBRA has one more subtlety. In the new paper, XENON does see some modulation but it is weak – 1.9 sigma – and its period is 431 ± 15 days, rather safely away from the expected 365 days. So everyone is gonna assume that this weak "signal" is just noise.
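For readers curious what such a modulation search boils down to in practice, here is a toy sketch in Python (my own illustration, emphatically not the XENON100 analysis): scan over trial periods, fit a constant plus a cosine at each period by linear least squares, and ask where the fit improves the most and whether that period is compatible with one year. The synthetic data below have a weak modulation with a 431-day period injected, just to echo the numbers above.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 4 * 365.25, 300))        # four years of toy exposure, in days
    true_period, true_amp = 431.0, 0.05                    # toy signal echoing the numbers above
    signal = true_amp * np.cos(2 * np.pi * t / true_period)
    rate = 5.0 + signal + 0.10 * rng.standard_normal(t.size)   # toy event rate plus Gaussian noise

    def fit_at_period(period):
        """Linear least squares for r0 + a*cos + b*sin at a fixed trial period."""
        w = 2 * np.pi / period
        design = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        coeffs, *_ = np.linalg.lstsq(design, rate, rcond=None)
        chi2 = np.sum((rate - design @ coeffs) ** 2)
        return chi2, np.hypot(coeffs[1], coeffs[2])        # residual sum of squares, fitted amplitude

    periods = np.linspace(200.0, 600.0, 801)               # trial periods in days
    fits = [fit_at_period(p) for p in periods]
    best = int(np.argmin([f[0] for f in fits]))
    print(f"best-fit period ~ {periods[best]:.0f} days, amplitude ~ {fits[best][1]:.3f}")

A best-fit period that lands far from 365 days, as in the 431 ± 15 days quoted above, is naturally read as a fluctuation rather than as the expected annual dark-matter signature.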

Yesterday in The New York Times, Lisa Randall argued, under the title "Why Vera Rubin Deserved a Nobel", that Rubin should have won the prize. (Vera Rubin died on Christmas Day.) Well, maybe, yes, no. I think that Lisa's article is highly incomplete and tendentious.

You would think that an article titled "Why Vera Rubin Deserved a Nobel" (because of her contributions to the research of dark matter) would contain at least most of the basic data about the discoverers of dark matter. I don't think it does. Lisa writes that Rubin is "most often attributed" with establishing dark matter's existence. The NYT article doesn't even mention Fritz Zwicky who deduced dark matter (dunkle Materie) in 1933, using the virial theorem. I think that Zwicky is actually the scientist most often attributed with the discovery of dark matter, and rightfully so, and Randall's claim to the contrary is a part of the feminist propaganda.



XENON100 seems larger than DAMA/LIBRA.

Well, people may equally say that my perspective is biased because Zwicky was also a male, like me, who was also surrounded by many spherical bastards (they are equally bastards from any direction) and whose mother was Františka Vrček, an ethnic Czech – so Zwicky was Czech to the same extent as Ivanka Trump. But I think that it's ludicrously obvious that the priority would have to belong to Zwicky.

Another reason is that, if dark matter is ever really proven, Zwicky was the one more certain of its existence while Rubin was hesitant; see this comment of mine about the (somewhat overstated) article "Vera Rubin Didn't Discover Dark Matter" by Richard Panek, a big Zwicky fan.

But these Zwicky-vs-Rubin disputes aren't too relevant for one reason: We are not terribly certain that dark matter is the right explanation of the anomalies. Given the not quite negligible "risk" that the right explanation is completely different, something like MOND, it would be very strange to give the Nobel prize for "it". What would "it" even mean? Look at the list of the Nobel prize winners. No one has ever received the Nobel prize for discovering "something" when no one knew what it actually was – a new particle? Black holes everywhere? A new term in Newton's gravitational law? The normal contribution rewarded by Nobel prizes is a clearcut theory that was experimentally proven, or the experimental proof of a clear theory. Even though most cosmologists and particle physicists etc. tend to assume dark matter, the dark-matter-suggesting observations don't really belong to this class yet.

And I think that this is the actual main reason why Vera Rubin hasn't gotten the prize for dark matter – and no one else has received it, either.

by Luboš Motl (noreply@blogger.com) at January 05, 2017 12:32 PM

January 04, 2017

John Baez - Azimuth

Information Processing in Chemical Networks

There’s a workshop this summer:

• Dynamics, Thermodynamics and Information Processing in Chemical Networks, 13-16 June 2017, Complex Systems and Statistical Mechanics Group, University of Luxembourg. Organized by Massimiliano Esposito and Matteo Polettini.

They write, “The idea of the workshop is to bring in contact a small number of high-profile research groups working at the frontier between physics and biochemistry, with particular emphasis on the role of Chemical Networks.”

Some invited speakers include Vassily Hatzimanikatis, John Baez, Christoff Flamm, Hong Qian, Joshua D. Rabinowitz, Luca Cardelli, Erik Winfree, David Soloveichik, Stefan Schuster, David Fell and Arren Bar-Even. There will also be a session of shorter seminars by researchers from the local institutions such as Luxembourg Center for System Biomedicine. I believe attendance is by invitation only, so I’ll endeavor to make some of the ideas presented available here at this blog.

Some of the people involved

I’m looking forward to this, in part because there will be a mix of speakers I’ve met, speakers I know but haven’t met, and speakers I don’t know yet. I feel like reminiscing a bit, and I hope you’ll forgive me these reminiscences, since if you try the links you’ll get an introduction to the interface between computation and chemical reaction networks.

In part 25 of the network theory series here, I imagined an arbitrary chemical reaction network and said:

We could try to use these reactions to build a ‘chemical computer’. But how powerful can such a computer be? I don’t know the answer.

Luca Cardelli answered my question in part 26. This was just my first introduction to the wonderful world of chemical computing. Erik Winfree has a DNA and Natural Algorithms Group at Caltech, practically next door to Riverside, and the people there do a lot of great work on this subject. David Soloveichik, now at U. T. Austin, is an alumnus of this group.
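As a tiny, self-contained illustration of what "computing with chemistry" can mean (my own toy sketch in Python, not taken from any of the papers or talks mentioned here): the single reaction X + Y → Z, run to completion with stochastic mass-action kinetics via Gillespie's algorithm, always ends with exactly min(#X, #Y) molecules of Z, so the network computes the minimum of its two inputs.

    import random

    def gillespie_min(x, y, rate=1.0, seed=0):
        """Simulate X + Y -> Z by Gillespie's direct method; return final counts and time."""
        rng = random.Random(seed)
        z, t = 0, 0.0
        while x > 0 and y > 0:
            propensity = rate * x * y              # mass-action propensity of X + Y -> Z
            t += rng.expovariate(propensity)       # exponential waiting time to the next event
            x, y, z = x - 1, y - 1, z + 1          # fire the (only) reaction
        return x, y, z, t

    print(gillespie_min(30, 47))    # -> (0, 17, 30, ...): Z ends at min(30, 47) = 30

Only the timing is random here; the final answer is deterministic. If I recall the results of this literature correctly, the functions that chemical reaction networks can compute without any error probability are exactly the semilinear ones, which is part of what makes the question "how powerful can such a computer be?" precise.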

In 2014 I met all three of these folks, and many other cool people working on these themes, at a workshop I tried to summarize here:

Programming with chemical reaction networks, Azimuth, 23 March 2014.

The computational power of chemical reaction networks, 10 June 2014.

Chemical reaction network talks, 26 June 2014.

I met Matteo Polettini about a year later, at a really big workshop on chemical reaction networks run by Elisenda Feliu and Carsten Wiuf:

Trends in reaction network theory (part 1), Azimuth, 27 January 2015.

Trends in reaction network theory (part 2), Azimuth, 1 July 2015.

Polettini has his own blog, very much worth visiting. For example, you can see his view of the same workshop here:

• Matteo Polettini, Mathematical trends in reaction network theory: part 1 and part 2, Out of Equilibrium, 1 July 2015.

Finally, I met Massimiliano Esposito and Christoph Flamm recently at the Santa Fe Institute, at a workshop summarized here:

Information processing and biology, Azimuth, 7 November 2016.

So, I’ve gradually become educated in this area, and I hope that by June I’ll be ready to say something interesting about the semantics of chemical reaction networks. Blake Pollard and I are writing a paper about this now.


by John Baez at January 04, 2017 05:16 PM

The n-Category Cafe

Globular for Higher-Dimensional Knottings (Part 3)

guest post by Scott Carter

This is my 3rd post about Jamie Vicary’s program Globular. And here I want to give you an exercise in manipulating a sphere in 4-dimensional space until it is demonstrably unknotted. But first I’ll need to remind you a lot about knotting phenomena. By the way, I lied. In the previous post, I said that the next one would be about braiding. I will write the surface braid post soon, but first I want to give you a fun exercise.

This post, then, will describe a 2-sphere embedded in 4-space, and we’ll learn to try and unknot it.

Loops of string can be knotted in 3-dimensional space. For example, go out to your tool shed and get out your orange heavy-duty 25 foot long extension cord. Plug the male end into the female and tape them together so that the plug cannot become undone. I would wager that as you try to unravel this on your living room floor, or your front lawn, you’ll discover that it is knotted.

Rather than using a physical model such as an extension cord, we can also create knots using the classical knot template of which I wrote in the first post. There you create knots by beginning with as many cups as you like in whatever nesting pattern that you like. For example:

And yes, these nestings are associated to elements in the Temperley-Lieb algebra. Then you can click and swipe left or right at the top endpoints and thereby entangle strings as you choose:

Close the result with a collection of caps, and a link results:

It is possible that the resulting link can be disentangled. To play with Globular, keep your link in the workspace, click on the identity menu item on the right, and then start trying to apply Reidemeister moves to it. For example, when I finished simplifying my diagram, I got the trefoil. I didn’t expect this!

If you want to see how I got the trefoil, you can look at my sequence of isotopy moves within globular.

By clicking the identity button on the right, you preserved the moves you used. The graphic immediately above indicates an annulus embedded in 3-space times an interval [0,1]. At the bottom is the knot that I drew, at the top is the result of the isotopy.

You can go into that isotopy, click the identity button, and modify it further to find a more efficient path between the knots!
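Before moving on to surfaces, here is a small sketch of the Temperley-Lieb bookkeeping mentioned above; it is my own illustration in Python, not Globular's internals. A diagram on n strands is a planar matching of n bottom points and n top points; stacking one diagram on top of another glues the middle boundary, and every closed loop that gets trapped contributes a factor of delta.

    def identity(n):
        """n vertical strands: bottom point i joined to top point n + i."""
        d = {}
        for i in range(n):
            d[i], d[n + i] = n + i, i
        return d

    def e(n, i):
        """The Temperley-Lieb generator e_i: a cup joining bottom i, i+1 and a cap joining top i, i+1."""
        d = identity(n)
        d[i], d[i + 1] = i + 1, i
        d[n + i], d[n + i + 1] = n + i + 1, n + i
        return d

    def compose(d1, d2, n):
        """Stack d2 on top of d1; return the composite matching and the number of closed loops."""
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(x, y):
            parent[find(x)] = find(y)
        for p, q in d1.items():
            union(('lower', p), ('lower', q))      # strands inside the lower diagram
        for p, q in d2.items():
            union(('upper', p), ('upper', q))      # strands inside the upper diagram
        for i in range(n):
            union(('lower', n + i), ('upper', i))  # glue the top of d1 to the bottom of d2
        # Boundary points of the composite: bottom of d1 and top of d2.
        external = [('lower', i) for i in range(n)] + [('upper', n + i) for i in range(n)]
        groups = {}
        for idx, pt in enumerate(external):
            groups.setdefault(find(pt), []).append(idx)
        matching = {}
        for a, b in groups.values():               # every through-strand has two boundary endpoints
            matching[a], matching[b] = b, a
        loops = len({find(x) for x in parent}) - len(groups)   # components with no boundary point
        return matching, loops

    # e_0 stacked on e_0 in TL_3 gives e_0 back, together with one closed loop:
    print(compose(e(3, 0), e(3, 0), 3))

The print statement illustrates the relation e_i e_i = delta e_i: the composite matching is e_0 again, and the one closed loop carries the factor of delta.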

Just as circles can be linked and knotted in 3-space, surfaces can be knotted and linked in 4-space. The knotting of higher dimensional spheres was observed by Emil Artin in a 1925 paper. Most progress about higher-dimensional knots occurred in the era circa 1960 through 1975. At that time, new algebraic topological techniques, particularly homological studies of covering spaces, were developed. Some authors, Yajima in particular, also initiated a diagrammatic theory. The diagram of a knotted surface is its projection from 4-space into 3-space with crossing information indicated. I like to think of the diagram as representing the knotted surface in a thin neighborhood of 3-space. The bits of surface that are indicated by breaks protrude into 4-space in that thin neighborhood. Still this imagery does not help manipulate the surface. By analogy, if you think of a classical knot as being confined to a thin sheet of space, then you’ll feel constrained in pulling the under-crossing arc.

As sighted humans, we perceive only surface. We posit solid. So as I sit at my desk, I see its top and I presume that it is made of thick wood. The drawer in front of me defines a cavity in which paper clips, rubber bands, and old papers sit. But I can’t see through this. I only see the front of the drawer. When I look at the diagram of a knotted surface, I create visual tropes to help me understand. How many layers are there behind the visible layer? Where does the surface fold? Where does it interweave? Within the globular view (project 2) of a knotted surface, we see (1) the face of the surface that lies closest to us, and (2) the collection of double curves, triple points, folds, and cusps that induce the knotting. Globular is new — only a year old. So its depiction of these things is not as elegant as it might be, but all the information is there. Mouse-overs let us know the type and the levels of all the singular sets. Cusps and optimal points of double curves (these are double curves in the projection of the surface into 3-space, not double curves in 4-space) have the same shape. They should have different colors. Similarly, births, deaths, saddles, and crotches will all be cup- or cap-like. Hover the mouse over the critical point, and you’ll see what it is.

Here:

is the image of a sphere in 4-space that looks like it might be knotted. But in fact it is not. This is the image from a worksheet that I created specifically for the energetic readers of this blog. In the worksheet, I created a sphere embedded in 4-space that is constructed as Zeeman’s 1-twist spin of the figure-8 knot (4sub1) in the tables. At least I think I did! Zeeman’s general twist spinning theorem says that the n-twist-spin of a classical knot is fibered with its fibre being the (punctured) n-fold branched cover of the 3-sphere branched along the given knot. When n=1, this branched cover is the 3-ball, and so the embedded sphere bounds a ball, and therefore is unknotted.

The worksheet that I created here is a quebra-cabeça — a mind-bending puzzle for the reader. Can you use Globular to unknot this embedded sphere? By the way, I am not 100 percent sure that I constructed this example correctly ;-) But here is my advice for unknotting it. There are two critical points, one saddle and one crotch, that need to have their heights interchanged. To interchange these heights, add two swallow tails: a left (up or down) swallow tail (L ST (up or down)) on the interior red fold line, and a right (down or up) (R ST (down or up)) on the interior green fold. These folds are mouse-over named cap and cup, respectively. The swallow tails allow you to turn the surface on its side. Then pull the stuff (type I, ysp, and psy) that lies along these folds into the swallowtail regions. Meanwhile, interchange the heights of the crotch and saddle. When you get done with that, I’ll give another hint, and I may have done these operations myself.

by john (baez@math.ucr.edu) at January 04, 2017 07:05 AM

The n-Category Cafe

Field Notes on the Behaviour of a Large Assemblage of Ecologists

I’ve just come back from the annual conference of the British Ecological Society in Liverpool. For several years I’ve had a side-interest in ecology, but I’d never spent time with a really large group of ecologists before, and it taught me some things. Here goes:

  1. Size and scale. Michael Reed memorably observed that the American Mathematical Society is about the same size as the American Society for Nephrology, “and that’s just the kidney”. Simply put: not many people care about mathematics.

    The British Ecological Society (BES) meeting had 1200 participants, which is about ten times bigger than the annual international category theory meeting, and still only a fraction of the size of the conference run by the Ecological Society of America. You may reply that the US Joint Mathematics Meetings attract about 7000 participants; but as Reed pointed out (under the heading “Most of Science is Biology”), the Society for Neuroscience gets about 30,000. Even at the BES meeting in our small country, there were nearly 600 talks, 70 special sessions, and 220 posters. In the parallel sessions, you had a choice of 12 talks to go to at any given moment in time.

  2. Concision. Almost all talks were 12 minutes, with 3 minutes for questions. You cannot, of course, say much in that time.

    With so many people attending and wanting to speak, it’s understandable that the culture has evolved this way. And I have to say, it’s very nice that if you choose to attend a talk and swiftly discover that you chose badly, you’ve only lost 15 minutes.

    But there are many critiques of enforced brevity, including from some very distinguished academics. It’s traditionally held that the most prestigious journals in all of science are Nature and Science, and in both cases the standard length of an article is only about three pages. The style of such papers is ludicrously condensed, and from my outsider’s point of view I gather that there’s something of a backlash against Nature and Science, with less constipated publications gaining ground in people’s mental ranking systems. When science is condensed too much, it takes on the character of a sales pitch.

    This is part of a wider phenomenon of destructive competition for attention. For instance, almost all interviews on TV news programmes are under ten minutes, and most are under five, with much of that taken up by the interviewer talking. The very design favours sloganeering and excludes all points that are too novel or controversial to explain in a couple of sentences. (The link is to a video of Noam Chomsky, who makes this point very effectively.) Not all arguments can be expressed to a general audience in a few minutes, as every mathematician knows.

  3. The pleasure of introductions. Many ecologists study one particular natural system, and often the first few minutes of their talks are a delight. You learn something new and amazing about fungi or beavers or the weird relationships between beetles and ants. Did you know that orangutans spend 80% of the day resting in their nests? Or that if you give a young orangutan some branches, he or she will instinctively start to weave them together in a nest-like fashion, as an innate urge that exists whether or not they’ve been taught how to do it? I didn’t.

    Orangutan resting in nest

  4. Interdisciplinarity. I’ve written before about the amazing interdisciplinarity of biologists. It seems to be ingrained in the intellectual culture that you need people who know stuff you don’t know, obviously! And that culture just isn’t present within mathematics, at least not to anything like the same extent.

    For instance, this afternoon I went to a talk about the diversity of microbiomes. The speaker pointed out that for what she was doing, you needed expertise in biology, chemistry, and informatics. She was unusual in actually spelling it out and spending time talking about it. Most of the time, speakers moved seamlessly from ecology to statistics to computation (typically involving processing of large amounts of DNA sequence data), without making a big deal of it.

    But there’s a byproduct of interdisciplinarity that troubles my mathematical soul:

  5. The off-the-shelf culture. Some of the speakers bowled me over with their energy, vision, tenacity, and positive outlook. But no one’s superhuman, so it’s inevitable that if your work involves serious aspects of multiple disciplines, you’re probably not going to look into everything profoundly. Or more bluntly: if you need some technique from subject X and you know nothing about subject X, you’re probably just going to use whatever technique everybody else uses.

    The ultimate reason why I ended up at this conference is that I’m interested in the quantification of biological diversity. So, much of the time I chose to go to talks that had the word “diversity” in the title, just to see what measure of diversity was used by actual practising ecologists.

    It wasn’t very surprising that almost all the time, as far as I could tell, there was no apparent examination of what the measures actually measured. They simply used whatever measure was predominant in the field.

    Now, I need to temper that with the reminder that the talks are ultra-short, with no time for subtleties. But still, when I asked one speaker why he chose the measure that he chose, the answer was that it’s simply what everyone else uses. And I can’t really point a finger of blame. He wasn’t a mathematician, any more than I’m an ecologist.

  6. The lack of theory. If this conference was representative of ecology, the large majority of ecologists study some specific system. By “system” I mean something like European hedgerow ecology, or Andean fungal ecology, or the impact of heatwaves on certain types of seaweed.

    This is, let me be clear, not a bad thing. Orders of magnitude more people care about seaweed than n-categories. But still, I was surprised by the sheer niche-ness of general theory in the context of ecology as a whole. A group of us are working on a system of diversity measures that are general in a mathematician’s sense; they effortlessly take in such examples as human demography, tropical forestry, epidemiology, and resistance to antibiotics. This didn’t seem like that big a deal to me previously — it’s just the bog-standard generality of mathematics. But after this week, I can see that from many ecologists’ eyes, it may seem insanely general. (For the curious, a minimal sketch of one standard family of diversity measures appears after this list.)

    Actually, the most big-picture talks I saw were very unmathematical. They were, in fact, about policy and the future of humanity. I’m not being flippant:

  7. Unabashed politics. Mathematics is about an idealized world of imagination. Ecology is about our one and only natural world — one that we happen to be altering at an absolutely unprecedented rate. Words like “Brexit” and “Trump” came up dozens of times in the conference talks, and not in a tittery jocular way. The real decisions of people with real political power will have real, irreversible effect in the real world.

    Once again, this brought home to me that mathematics is not like (the rest of) science.

    It’s not just that we don’t have labs or experiments or hypothesis testing (at least, not in the same way). It’s that we can do mathematics in complete isolation from the realities of the world that human beings have made.

    We don’t have to think about deforestation or international greenhouse gas treaties or even local fishery byelaws. We might worry about the applications of mathematics — parasitic investment banks or deadly weapons or governments surveilling and controlling their citizens — but we can actually do mathematics in lamb-like innocence.

    On the other hand, for large parts of ecology, the political reality is an integral consideration.

    I saw some excellent talks, especially from Georgina Mace and Hugh Possingham, on policy and influencing governments. Possingham was talking about saving Portugal-sized areas of Australia from industrial destruction. (His advice for scientists engaging with governments: “Turn up. Have purpose. Maintain autonomy.”) Mace spoke on what are quite possibly the biggest threats to the entire planet: climate change, floods and heatwaves, population growth, and fragmentation and loss of habitats.

    It’s inspiring to see senior scientists being unafraid to repeat basic truths to those in power, to gather the available evidence and make broad estimates with much less than 100% of the data that one might wish for, in order to push changes that will actually improve human and other animal lives.
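Coming back to points 5 and 6: here is the promised minimal sketch of one standard family of diversity measures, the Hill numbers or "effective numbers of species" of order q. I am not claiming this is what any particular speaker used; it just shows how a single parameter q interpolates between counting every species equally and weighting heavily by dominance.

    import numpy as np

    def hill_number(abundances, q):
        """Effective number of species of order q for a vector of species abundances."""
        p = np.asarray(abundances, dtype=float)
        p = p[p > 0]
        p = p / p.sum()                             # relative abundances
        if np.isclose(q, 1.0):                      # q = 1 is the exponential of Shannon entropy
            return float(np.exp(-np.sum(p * np.log(p))))
        return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

    community = [50, 30, 10, 5, 3, 1, 1]            # species counts in a toy sample
    for q in (0, 1, 2):
        print(f"q = {q}: effective number of species = {hill_number(community, q):.2f}")
    # q = 0 is plain species richness (7 here); larger q discounts rare species more and more.

The measures "general in a mathematician's sense" mentioned in point 6 extend this family further, for instance by letting species be more or less similar to one another, but the basic shape of the computation is the same.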

by leinster (Tom.Leinster@ed.ac.uk) at January 04, 2017 06:08 AM

January 03, 2017

Jon Butterworth - Life and Physics

How the rainbow illuminates the enduring mystery of physics

At Aeon Magazine (who commissioned it) but also in The Atlantic and Quartz.


This summer I went on a family holiday to Cornwall, on the Helford River. The peninsula south of the river is, rather wonderfully, called The Lizard. Standing on its cliffs, you are at the southernmost point of mainland Britain. North of the river is the port of Falmouth, from where packet-ships kept the mail services of the British Empire running until 1851. The area is lush with the sort of half-tamed beauty that England does so well, and the boundary with the contrasting wildness of the Atlantic is never far away.

Walking home from a riverside pub one evening, I witnessed a stirring natural display. With the Sun setting at our backs, a fantastically bright and complete double rainbow framed the river-opening leading out to the sea. I have always loved rainbows, and this was the best one I have seen. It even came in front of the horizon, past the trees on the riverbank opposite, and merged with its reflection in the river. It was an invitation and a dare to scientific understanding.

The image hovering in front of me was formed by sunlight streaming from behind us, internally reflected back towards us by myriad tiny suspended raindrops. When light meets an interface between air and water – the surface of a raindrop – some of it is reflected and some of it passes through. The angle of refraction depends both on the light’s wavelength and on the angle at which it hits the surface. Parallel rays entering a spherical raindrop bounce off the inside and come back towards us. The reflection angle from the back of the raindrop, combined with the two refractions (one as it enters the drop, and one as it leaves) conspire to concentrate each wavelength of light at a certain return angle. Wavelength corresponds to colour, so the colours separate into the familiar bands.

That is really all the physics you need to understand the basics of a rainbow. The first rainbow calculations were carried out by René Descartes back in 1637, although the dependence on wavelength, and hence the colours, needed Isaac Newton and Thomas Young’s later contributions. Although I already knew this history in principle, it was fresh in my mind because of a lovely review article by Alexander Haussmann of the Technical University in Dresden that I’d come across the same week. Haussmann’s review is full of both science and charm, and, unlike John Keats’s poor Lamia, the rainbow unwoven shines brighter for it.
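Descartes' construction is compact enough to redo numerically in a few lines. Here is a minimal sketch of my own (the refractive indices below are only indicative values for water): for a given number of internal reflections, scan the incidence angle, apply Snell's law, and find where the total deviation of the ray is stationary; that is the angle at which the returning light piles up.

    import numpy as np

    def rainbow_angle(n, reflections=1):
        """Angular radius of the bow about the antisolar point, for refractive index n."""
        i = np.linspace(0.0, np.pi / 2, 100_000)            # incidence angles on the drop
        r = np.arcsin(np.sin(i) / n)                        # Snell's law at the air-water surface
        deviation = 2 * (i - r) + reflections * (np.pi - 2 * r)
        return abs(180.0 - np.degrees(deviation.min()))     # rays pile up at the minimum deviation

    # Indicative refractive indices of water at the red and violet ends of the spectrum.
    for colour, n in [("red (~700 nm)", 1.331), ("violet (~400 nm)", 1.344)]:
        primary = rainbow_angle(n)
        secondary = rainbow_angle(n, reflections=2)
        print(f"{colour}: primary bow near {primary:.1f} deg, secondary near {secondary:.1f} deg")

Running it puts the primary bow near 42 degrees with red on the outside and the secondary in the low fifties with the colour order reversed; the gap between the two is exactly the dark band discussed below.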


One thing adding to my wonder at the Cornwall rainbow was my realisation that the reflection I could see in the river was, despite appearances, not a reflection of the rainbow in the sky. A true reflection happens when light from the same object, but at a different angle, reaches the eye after bouncing off a surface. For instance, the clouds I could see mirrored in the surface of the river were true reflections. But a rainbow is made up from light reflected internally in raindrops and emitted at a specific angle; that is why we see the bow at all. It is not possible for the mirror image to be made from light reflected from the same drops, because it would have to have been emitted at a different angle in order to get to our eye. Instead, the mirror image comes from light from drops at a different position in the sky, radiated at the same angle.

Local changes in the rain shower can affect the mirror rainbow completely differently from the direct-view rainbow. There was more to that ‘reflection’ rainbow than met the eye, and there is more to a rainbow than the basic physics explanation, too.

As the theory of rainbows has developed over history, more odd or unusual features were explained. Not only was the separation of colours understood, but so was the presence of secondary rainbows that display their colours in the opposite order; these are caused by a double internal reflection within the raindrops. There is a darker band between the primary and secondary rainbows, known as ‘Alexander’s dark band’ after Alexander of Aphrodisias, who observed it around 200 AD. Alexander’s dark band lies at an angle forbidden by the optics of the simplest paths the light rays can travel along, making it darker than the rest of the sky. There are also rare ‘supernumerary arcs’ that can appear within the arc of the main bow, caused by interference between two light rays coming to the eye at the same angle but travelling slightly different distances.

All these features have been understood by developing Descartes’s original theory. The assumptions and approximations he made have been examined and refined, aided in recent years by increased computing power. Step by step, the details change and become more faithful to real life. You might think there is nothing left to learn about something so simple as a rainbow, but no – the process continues and has even picked up pace, as the ubiquity of high-quality digital cameras has yielded more and better images of startling, unusual rainbow features.

One recent example is a newfound appreciation of the importance of different drop sizes and shapes. The original theory of rainbows assumes perfectly spherical raindrops, and does not depend on the size of the drops. In reality, larger raindrops are squished flatter on the underside by air resistance as they fall, developing a shape that rainbow enthusiasts call the ‘hamburger bun’. Large drops usually make up a small fraction of a rain shower, but when they do contribute, calculations show that their oblateness leads to a change in the position of the top of the rainbow, while leaving the base unaffected. This is why the base of a rainbow often appears brighter than the top of the bow: all drop sizes contribute at the base, whereas near the top, light from the large, hamburger-bun drops is dispersed.

Using precise numerical calculations, performed on high-powered computers coded to describe the full scope of Maxwell’s equations of electromagnetism, increasingly esoteric features can be understood, and omissions and approximations in the theory can be fixed. That’s the way with science, whether we are talking about rainbows or high-energy particle collisions. Finding a flaw doesn’t mean the theory is worthless, but it does mean that the theory can be improved.

Scientists must avoid falling into the trap of defending all aspects of current thought because we feel the underlying truth needs protecting. Details of theory, or even broader aspects, can often be improved. Doing so strengthens the core of the theory – that is, if it is fundamentally correct. If it isn’t, then we need a better theory as soon as possible. The most effective way to find a deeper description of nature is to seek more observations and to push the existing conception to breaking point. All of those thoughts were hovering along with the rainbow over the Helford River.

Our understanding of rainbows is very robust now, but not complete. There are undoubtedly details that can still be improved. That walk home along the river, however, was definitely perfect.


Filed under: Philosophy, Physics, Rambling, Science, Travel Tagged: optics

by Jon Butterworth at January 03, 2017 04:47 PM

The n-Category Cafe

Basic Category Theory Free Online

My textbook Basic Category Theory, published by Cambridge University Press, is now also available free as arXiv:1612.09375.

Cover of Basic Category Theory

As I wrote when I first announced the book:

  • It doesn’t assume much.
  • It sticks to the basics.
  • It’s short.

I can now add a new property:

  • It’s free.

And it’s not only free, it’s freely editable. The book’s released under a Creative Commons licence that allows you to edit and redistribute it, just as long as you state the authorship accurately, don’t use it for commercial purposes, and preserve the licence. Click the link for details.

Why might you want to edit it?

Well, maybe you want to use it to teach a category theory course, but none of your students have taken topology, so you’d rather remove all the topological examples. That’s easy to do. Or maybe you want to add some examples, or remove whole sections. Or it could just be that you can’t stand some of the notation, in which case all you need to do is change some macros. All easy.

Alternatively, perhaps you’re not planning to teach from it — you just want to read it, but you want to change the formatting so that it’s comfortable to read on your favourite device. Again, this is very easy to do.

Emily recently announced the dead-tree debut of her own category theory textbook, published by Dover. She did it the other way round from me: the online edition came first, then the paper version. (I also did it that way round for my first book.) But the deal I had with Cambridge was that they’d publish first, then I could put it on the arXiv under a Creative Commons licence 18 months later.

We’ve talked a lot on this blog about parasitic academic publishers, so I’d like to emphasize here what a positive contribution Cambridge University Press has made, and is continuing to make, to the academic community. CUP is a part of Cambridge University, and I think I’m right in saying that it’s not allowed to make a profit. (Correction: I was wrong. However, maximizing profits is not CUP’s principal aim.) It has led the way in allowing mathematics authors to post free versions of their books online. For instance, apart from my own two books, you quite likely know of Allen Hatcher’s very successful book Algebraic Topology, also published in paper form by CUP and, with their permission, available free online.

Since a few people have asked me privately for opinions on publishers, I’ll also say that working with CUP for this book was extremely smooth. The contract (including the arXiv release) was easily arranged, and the whole production process was about as low-stress as I can imagine it being. This wasn’t the case for my first book in 2003, also with CUP, which because of editing/production problems was a nightmare of stress. That made me very reluctant to go with CUP again, but I’m really glad that I chose to do so.

The low stress this time was partly because of one key request that I made at the beginning: we agreed that I would not share the Latex files with anyone at CUP. Thus, all I ever sent CUP was the PDF, and no one except me had ever seen my Latex source until the arXiv release just now. What that meant was that all changes, down to the comma, had to go through me. For example, the way the proofreading worked was that the proofreader would send me corrections and suggestions and I’d implement them, rather than him making changes first and me approving or reverting them second.

For anyone with a perfectionist/pedantic/… streak like mine (insert your own word), that’s an enormous stress relief. I’d recommend it to any authors of a similar personality. Again, it’s to CUP’s credit that they agreed to doing things this way — I’m not sure that all publishers would.

So the book’s now free to all. If you make heavy use of it and can afford to do so, I hope you’ll reciprocate the support that CUP has shown the mathematical community by buying a copy. But in any case, I hope you enjoy it.

by leinster (Tom.Leinster@ed.ac.uk) at January 03, 2017 12:07 PM

Clifford V. Johnson - Asymptotia

Through a Glass….

Loving this looser, pencil-finish style. My only wish is that I'd discovered it in August. But of course I know that I needed to do what I was doing in August in order to get to where I am now. So there it is. Personal evolution is a wonderful thing, isn't it? (Click for larger view. More about the book here.)

-cvj

The post Through a Glass…. appeared first on Asymptotia.

by Clifford at January 03, 2017 12:52 AM

January 02, 2017

Tommaso Dorigo - Scientificblogging

A Visit To Israel
I am spending a week in Israel to visit three physics institutes for colloquia and seminars: Tel Aviv University (where I gave a colloquium yesterday), the Haifa Technion (where I am giving a seminar today), and the Weizmann Institute in Rehovot (where I'll speak next Wednesday).


by Tommaso Dorigo at January 02, 2017 12:32 PM

Lubos Motl - string vacua and pheno

Nautilus' disillusioned ex-physicist
Bob Henderson wrote an autobiography for Nautil.Us (via CIP):
What Does Any of This Have To Do with Physics?

Einstein and Feynman ushered me into grad school, reality ushered me out.
He's a Rochester theoretical physics PhD who came to grad school after reading some New Agey pop-science books and leaving a cushy engineering job. He grew disillusioned and, at the time of his PhD defense, decided to switch to Wall Street, which he left in 2012, i.e. 15 years later, to become a science writer. While the content of the article is annoying, I think that he is an excellent prospective novelist.

Henderson complains that his dreams were destroyed, he lost the faith that theoretical physics is meaningful or theoretical physicists are marching towards a holy grail. His reasons to leave the university world have nothing whatever to do with my reasons – in some sense, they are the opposite ones – but I am highly familiar with this kind of a frustrated talk because it's widespread among (especially young) physicists. Well, this frustrated talk about physics is less widespread among older physicists because before they reach the higher age, most of the young whiners get eliminated. It's that simple.

Before we look at Henderson's whining a bit more closely, I want to say a few more general things. First, it's probably not an accident that the "hero of 2016-2017" in such a popular article is someone who left theoretical physics and mostly began to hate it. Decades ago, such popular journals preferred to celebrate successful theoretical physicists but quitters are apparently more fashionable nowadays. This subtlety strengthens the claims that the science media have switched to a new mission, to hurt theoretical physics.

Second, there are surely lots of other fields in which most people remain relatively unsuccessful and disillusioned. I am sure that there are lots of boys who want to be the world's best athletes and tennis stars and anything of the sort (or actors, add your favorite famous occupation) but find out that not everyone becomes successful and the life of the unsuccessful ones may be hard. Djokovic's life may be comfortable (although I am not certain even about these statements) but for every Djokovic, there are thousands of would-be stars who remain broke and don't get rewarded for their efforts.

Nevertheless, many people play tennis in the afternoon even if they don't earn the same money as Djokovic. Most of those just accept that they're not as good as Djokovic – but tennis is still fun for them, anyway. For some reason, people (including Henderson) don't want to accept that they're less successful than the top theoretical physicists simply because they're not as good. Without the decent salaries and big prizes, these would-be "physicists" find out that they don't actually like physics at all.

Third, Henderson's personality is clearly not that of a theoretical physicist, a fact that the pop-science books have obscured to him. You may see that pop-science books often present physicists as some kind of magicians who are having a great time under shining lights all the time – like Harry Potter or at least the Hollywood stars. What a surprise that many people who actually try to do physics grow disillusioned.




OK, let's start to review his memoir. In 1993, he went to Rochester's graduate school to study theoretical physics. He had read a lot about Einstein and Feynman, they were great guys. But Henderson also mentions The Tao of Physics and Zen and the Art of Motorcycle Maintenance. I know these two titles but haven't read the books.

In spite of that, I feel almost certain that these are not the books that the people whom I consider physicists or prospective physicists are attracted to. Books like that may use the words borrowed from physics but their whole way of thinking is largely unscientific. If you are a physicist who has a friend who believes in mysterious stuff peppered with physics vocabulary (or vice versa), I don't have to explain to you what's the difference between you and your friend, do I?

There may be some similarities – some shared excitement about mental, spiritual, or non-practical questions – but the differences between science and religion/superstition are perhaps greater than the similarities.




It is imaginable that people attracted to New Agey books could do good physics. But in general, I think that it's safe to say that an overwhelming majority of readers of similar books are simply not equipped to do physics. You know, the "opinion" that these superstitious and religious approaches aren't the most sensible way to approach the fundamental laws of physics isn't something that people like me were adopting when they joined a graduate school.

With a debatable 1-week high school exception, I have never had any inclination to look into these superstitious and religious books claiming to be books about physics. Those books reflect a naive, unscientific approach to the truth. They propose easy solutions. Just believe in something, we're all united, God penetrates all of us and is spread to all our bodies, whatever (I am vaguely reproducing some excited lessons I received from a New Age friend LOL), and you get close to the deepest truths about the Universe.

Sorry, you can't. With these mysterious vague superstitious proclamations, you haven't learned a damn thing. The learning of the physical truth about the Universe obviously does require some calculations, often long ones, or careful argumentation and hours of mental work in which the brain often burns while producing nothing useful most of the time.

This is a sketch of the "path towards the deep laws of the Universe" that I already had in mind when I was 4 years old or so – and I think that other physicists who don't relate to Henderson's complaints would tell you something similar. Henderson is telling us that he was gradually discovering some of these things during his grad school years. One actually has to work hard at some moment, be materially modest, be confused much of the time, and try many paths that don't lead to interesting outcomes, while the greatest discovery in a century arrives relatively rarely (approximately once a century, if you want to know).

Those are shocking facts!

You should have known it before you entered graduate school. Quite generally, I would guess that people who read about "tao" and "zen" are likely to face some problems as grad students of theoretical physics – as far as I can say, those problems may be exactly as severe as the problems of those whose background is all about "Jesus" or "Mohammed". Those are not helpful prerequisites for the discipline. And if the readers are told that those are good prerequisites for the research in theoretical physics, I think that these readers, including Henderson, have been deceived by the writers of the superstitious books and they may demand compensation.

Another detail is that Henderson went to the University of Rochester, NY to study theoretical physics. It may be an OK school but it is in no way a university that is close to the top in the world's cutting-edge theoretical physics. Henderson's adviser Sarada G. Rajeev may be a local Rochester star in theoretical physics but that doesn't necessarily mean that he's a global star. Click the hyperlink to see his papers. It's a decent list for a career at such a university but it's not quite what you find if you look at e.g. Polchinski's record.

I am saying it because if Henderson wanted to search for a theory of everything, going to Rochester doesn't look like a straightforward, sensible path towards that goal. It's plausible that someone at Rochester – or someone with a degree from Rochester – would find a theory of everything. But if that's so, she will have to be repeatedly lucky. The starting point looks more troublesome when you combine all these strange details. If you want to professionally search for a theory of everything, read "tao" and "zen" and go to Rochester. Well, not really. ;-)

A minute ago, I mentioned tennis and the people's ability to understand that they're not as good tennis players as the world's best tennis players. In fact, I am absolutely convinced that the intellectual gap between the best theoretical physics groups in the world and those at Rochester (or worse) is far deeper than the difference between Djokovic and the average Portuguese players, to pick a random non-stellar tennis nation. It's questionable whether places at the level of Rochester (or Portugal) should claim to produce "researchers of a theory of everything" at all. A theory of everything could be too big a game for such places that simply don't belong to the elite. The very statement that they're doing something of the sort is deceptive for most of those non-elite places. These non-elite places should describe their work with a more humble language, otherwise they're deceiving prospective students and sponsors.

A big part of Henderson's story is about the modest material conditions that physics graduate students – and even postdocs etc. – sometimes experience. They're sometimes poor, sometimes they're not. But I do think that the folks who have no problem with such modest conditions – similar to those of monks – are more likely to be "natural theoretical physicists". Readers of "tao" and "zen" books may think that it's cool to search for the deep truths and be as materially undemanding as monks – when it comes to the housing, food, beverages, traveling, sex, whatever – but when the reality arrives, they may find out that they are not really this modest and the usual biological needs do play a big role for them.

Again, I can't relate to Henderson's story because I really don't have a problem with extremely modest material conditions but also solitude and other things. Some people earn bucks by writing about saints and then there are people who just shut up and they are saints. I must humbly admit that I am one of those ;-) while Henderson probably never was. Just to be sure, I am not saying that all theoretical physicists live like monks. You can earn lots of money (think of the Milner $3 million prizes), jobs like the Harvard Junior Fellowship bring some mandatory opulent life, and people who become career professors are materially insured for their life from most viewpoints.

But I want to spend more time with Henderson's disillusionment with the research projects. He had to read lots of papers and he really didn't know how much he had to learn. When you're thrown into research, it is different from a university course. At school, the instructor may have outlined the path for you and you are just following the plan. Many students have probably done "almost the same sequence of steps" before you. Locally, in each lecture, you may deviate a bit, you may calculate various things by different methods, learn something differently than others, but the big picture of the path is clear.

There's nothing of the sort when you're an independent researcher. Tens of thousands of papers (and thousands of books) have been written about theoretical physics. You can't – well, you shouldn't – read all of them. You must pick a subset that is useful for your goals or, to say the least, that is useful for a goal that you may pick as your own even though your expectations could have been different.

You should have a rough plan to get through this hopeless chaos. One aspect of the plan is the realization that most of the papers that have been written are redundant noise (or they are wrong). You want to do something more interesting than what the authors of average papers did. The second aspect is that even among the valuable papers, there's a lot of redundancy so you don't need to read everything – it gets repeated – and you may and you should rediscover many of the important things yourself, anyway. And the third aspect is some degree of specialization. You must admit that you won't understand absolutely everything that was written by the other physicists, even if it is correct, and you must live with this fact. Non-scientists live with it happily. As a physicist, you should still understand a vastly greater percentage of the physics wisdom than the non-physicists.

Some self-confidence is therefore highly desirable, much like some humility. On one hand, you must know that you will rely on the work of others, stand on the shoulders of giants from the past and present, use some textbooks or reviews or standard courses, and use the skills and comparative advantages of your collaborators. On the other hand, you must feel that you basically don't need those things. You don't need to read tens of thousands of papers most of the time. You may rediscover everything you need or at least find the right place where you may learn a known thing when you need it. The dominant theme should be that you are refining your own picture of the laws of physics and all the other people from the past and present are just helping you. Most of the time, you are thinking for yourself and you believe that you're smarter than almost everyone else. If this ambitious belief of yours is rubbish, you should get eliminated. But some people may survive and they really are using their brains independently, their intellectual self-confidence is justified (even though they sometimes hide it), and those have really created and are creating the skeleton of physics.

For a grad student or any "junior" member of a collaboration, it's quite normal – and logically justifiable – to do some brute force work whose broader importance isn't understandable to him. Professors sometimes abuse grad students as slaves or robots. And they love to repeat (true) jokes about it and to count the research work in kilo-graduate-student-hours. But this fact should be obvious to everyone who cares. It is not an exclusive feature of physics and it has an understandable justification, too.

You know, the professor who directs the "big picture" of the research project may be doing the seemingly "easier" part of the job – and he may work for much less than 15 hours a day, a figure that Henderson mentions – but he may still be doing the more important part, just like the boss of an innovative company. His skills to direct the "big picture" of the project are the most scarce resources. Imagine that you live 100 years ago and want to produce cars. To do so, you need some experience e.g. from Henry Ford's company. Well, you are probably going to do some more ordinary, boring work. That has an easy explanation: You're not (a) Henry Ford (yet). Of course you're not the one who is inventing the big strategy and giving the orders to lots of employees. You're not the damn Henry Ford. It is not even clear whether you're good at the things that Henry Ford is reasonably good at. So how could you be Henry Ford?

There's a simple recipe if you're dissatisfied with your place. If you want to do things like Ford and give orders to others, become a Henry Ford yourself, if you can. You must accumulate some capital – money, fame, and credibility, whatever you need – and then you will be able to employ your workers. Or your graduate students. These two examples – and many others – are obviously analogous.

Theoretical physics research may be among the occupations with the smallest role played by plans. One really has a lot of freedom in making his decisions – what he should read and study and calculate and focus on – and indeed, that's why one can get completely lost, too. The shape of the final product (theories of Nature) is almost completely unpredictable, too. But this freedom (which may lead to good or bad outcomes) and the unparalleled depth of the initially unknown wisdom is one of the features that makes theoretical physics so remarkable.

It's hard to give some recommendations that would help everyone escape the potential mess. No universal solutions like that exist. It's unavoidable that they don't exist and it's good that they don't exist. There are many decisions to make, so some people – and probably most people – will unavoidably get lost. What should you do if you don't want to get lost? Be smart, be hard-working, but don't be submissive, be stubborn, be successful, and don't be unsuccessful. These recipes are not too helpful, of course. Some people aren't that smart. They aren't independent enough. They get manipulated. And if they don't get manipulated, they really don't know what to do. Indeed, being an independent researcher – and especially a "principal investigator", if I put it in this way – means to be able to make many such decisions. So the whole idea of recommendations "what you should do" in such an occupation is an oxymoron. If someone else could tell you what to do, nothing would be left for your actual job. The decisions are your job. To ask "what to do" is basically equivalent to asking "do the job for me".

In college, but also in later years, I talked to lots of people who begged for recommendations like that. What should I do not to get lost? My answer was never so direct but yes, my current answer would be: If you need this leadership repeatedly, just quit it. If you don't know what you're doing, why you're doing it, and where you are going, and how you may roughly get there, then it's a bad idea to start or continue the journey. People who are picking an occupation should feel some "internal drive" and they should have at least a vague idea what they're doing, why, and how. Again, I don't think that this common sense only holds in theoretical physics. Theoretical physics only differs by the deeper caves in which one may get lost – because deeper caves are being discovered or built by theoretical physicists, too.

Another complaint by Henderson was that his adviser (who was 5 years older) "knew" what the result of their joint project was supposed to be and that's where they ultimately got, indeed. This finding was shocking and disappointing for Henderson, a junior collaborator. I don't understand why it's disappointing. It's common sense. Many projects work like that: One has a hope that there's a certain kind of an answer that can be found and sufficiently rigorously justified. The "senior", usually more experienced (and sometimes, indeed, "more talented") members of the collaborations have some hopefully correct vision about the "big picture" while the other members are expected to do much of the brute force calculations. How could it be otherwise? This story only says that some researchers should have some idea where they're roughly going. And then it's saying that some collaborators – well, the "senior ones" – have a better idea than others. Does one really need to torture himself for years in the graduate school to understand these common-sense tautologies?

In the previous paragraph, I've used some big words. But the actual project that Henderson discussed was his paper with Rajeev, Quantum gravity on a circle and the diffeomorphism invariance of the Schrödinger equation. Well, this paper from 1994 only has 3 citations at this moment. I know the rough content. The tiny number of citations after 22 years indicates that this was probably not a paper that finally found the theory of everything. Or anything else that was revolutionary. Well, it was a much weaker paper than the average paper in the field, too.

Some appraisals by Henderson are therefore correct. This paper couldn't have fulfilled Henderson's dreams about "tao" and "zen". Also, if you have this particular paper in mind, new light is shed on many other claims by Henderson. For example, he said that Rajeev's vision about the final result was finally confirmed, after difficult calculations. Was Rajeev a visionary? Well, a more accurate evaluation could be a bit different: Rajeev simply invented some kind of a paper, including the conclusions, and he employed his grad student Henderson to fill in some details so that the story looks at least somewhat convincing. This is the "Al Gore Rhythm" for writing papers that is used often if not predominantly in soft scientific disciplines such as climate science. The conclusion is decided in advance and all the seemingly complex, long, and technical language and formulae are only inserted to make the conclusion look more scientific! It's not real hard science, however. If you verify the argumentation really carefully, you usually find out that something important is wrong with the paper even though "local regions" of the paper may look kosher.

But the paper still doesn't look too convincing. You know, there are better physicists than Rajeev and most of them would probably agree that the paper hasn't found any important principle or mechanism in quantum gravity at all. 22 years after the paper appeared, most top theoretical physicists would almost certainly disagree with the conclusions by Rajeev and Henderson, e.g. that there's a canonical link between distance and the phase of a wave function in quantum gravity. It is one of the papers that try to study quantum gravity as if it were a local field theory. But quantum gravity isn't quite a local field theory. In spacetime dimensions lower than four, theories of quantum gravity may look almost indistinguishable from local field theories (and there exists e.g. a formal proof of the equivalence of 3D quantum gravity and 3D Chern-Simons theory) but I think it's right to say that even in the low dimensions, this similarity is deceitful and overlooks some delicate details that become very important in higher dimensions. At any rate, what they found couldn't have been meaningfully applied in the theories of quantum gravity that are really interesting and that we care about, in \(d\geq 4\).

It means that Henderson was a junior member of this collaboration, a status that understandably involves a shortage of independence. But 22 years after the paper was written, we may see that the shortage of independence was more severe than previously thought. Henderson still failed to understand that their "solution" to the problem of "quantum gravity on a circle" wasn't necessarily "the" right solution or "the" right approach to this kind of a problem – according to the truly best physicists in the world. While Henderson understands that "quantum gravity on a circle" is a special toy model that isn't likely to teach us much about the big problems of quantum gravity, he still doesn't see that even this toy model was probably solved in a way that is conceptually uninteresting if not strictly wrong. Henderson misunderstands his own paper to the extent of not being able to imagine that something could be problematic about it.

You know, only a small portion of physics PhDs get really close to the world's elite. But I think that after some years, even the other ones should be able to understand and see the difference between the top physicists and those who are not top physicists at all, at least in a fuzzy way. If they can't even see why top physicists are generally more influential than the mediocre ones, it shows that they really don't have the talent for the discipline.

We also learn that Henderson began to hate Rajeev because the latter didn't care about the suffering of the former and dashed his dreams. For a year, Henderson tried to work in isolation. It didn't work too well. He returned, Rajeev accepted him, but soon afterwards, Henderson was hurt when Rajeev asked "Do I have to explain the fiber bundles again?" Come on, is it so terrible to hear this question? Fiber bundles are a hard enough concept – used by people who really want to think like trained mathematicians – but if they're important enough for some project and Rajeev spends some time explaining them to someone else, it may be frustrating for him to see that he has wasted his time on the pedagogical effort. So why couldn't Rajeev ask "Do I have to explain the fiber bundles again?" Is it a question that one may really get offended by? Have you tried to think about the interaction from Rajeev's perspective, Mr Henderson? Again, I think that this situation is not specifically tied to theoretical physics. If a coach teaches something to a tennis player and it's completely ignored a day later, the coach may also get reasonably upset and make an irritated remark, can't he?

A theme underlying the story is the tough job market. The number of faculty (and postdoc) jobs is too small relative to the number of theoretical physics graduate students. I think it's true, the tension has gotten even more extreme in recent years, and the suffering that many brilliant young theoretical physicists I have known had to repeatedly go through was almost heartbreaking. On the other hand, I am pretty sure that the number of faculty jobs shouldn't grow enough to turn e.g. Mr Henderson into a theoretical physics professor. I think that his – nicely written – story makes it clear that he pretty much never had a clue about theoretical physics and he still doesn't have a clue. He isn't thinking as a physicist.

And it's not just about the Virasoro algebra and the Yamabe problem, phrases that Henderson used in his and Rajeev's 1994 paper but "couldn't define for us today", as he told us. He was clearly misunderstanding, and is still misunderstanding, some much more general issues about theoretical physics and what it really means to do research in it (and maybe in science in general). Years after he joined that field, he may still be shocked when he discovers that physicists sometimes have to make independent decisions and similar spectacularly profound wisdoms. ;-)

Again, his prose is impressive – and includes all the linguistically colorful, redundant, and emotional inserted details that make some writers famous and that guarantee that I have never been a reader of novels LOL :-) – but his opinions about physical concepts that are described in his prose are typical opinions held by the laymen, especially when it comes to the frustration caused by some features of physical theories that physicists actually love. A paragraph complains that there are at least three "pictures" to define the time evolution in quantum mechanical theories – the Heisenberg picture, the Feynman approach, and the Schrödinger picture. Henderson was apparently disappointed – and is still disgusted – by the huge number of the pictures (three) – it's not shocking that many crackpots display irrational, anxious reactions to theories with \(10^{500}\) solutions because many people find "three" to be a terrifyingly high integer, too – and he was and he still is repelled by the idea that the deeper theories of particle physics could suffer from the same "problem". He says that the Holy Grail could be a hall of mirrors. It's a great literary metaphor but what's not great is that the hall of mirrors clearly scares him.

Please, give me a break. The transition from the Heisenberg picture to the Schrödinger picture is a simple time-dependent unitary change of the coordinates on the Hilbert space. It's obvious that in every theory that has some time-dependent quantities (and every theory that we use deals with those), one may redefine them by field redefinitions and, when they carry Hilbert space vector indices, those include the unitary transformations of the Hilbert space. Of course this freedom will always exist as long as physics will be based on some quantities (undoubtedly) or on Hilbert spaces (almost certainly as well). Why would one be disappointed by the existence of the two pictures? How could someone possibly think about doing research on quantum gravity if he's frightened by the existence of the Schrödinger and Heisenberg pictures?
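To make the triviality explicit, here is the standard dictionary between the two pictures (a textbook-level sketch, assuming a time-independent Hamiltonian and writing \(U(t)=e^{-iHt/\hbar}\)):

\[
\ket{\psi_S(t)} = U(t)\,\ket{\psi_H},\qquad A_H(t) = U^\dagger(t)\, A_S\, U(t),
\]

so that every measurable matrix element agrees, \(\langle\psi_S(t)| A_S |\phi_S(t)\rangle = \langle\psi_H| A_H(t) |\phi_H\rangle\). The two pictures differ only by this time-dependent unitary relabeling of the Hilbert space.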

In a similar way, one may show the equivalence of these two pictures with the Feynman path integral approach whenever some quantities similar to those in classical physics – like \(x(t), \phi(x,y,z,t)\) – exist in the theory. The proof of the equivalence of the path integral with the operator approach indeed works (before Feynman, it was already sketched by Dirac) and is rather universally applicable. It's enough to learn it once and you're done. It's a cute piece of the puzzle that has been mastered and that a theoretical physicist happily learns and teaches. Yes, it's one of the mirrors in the hall surrounding the room with the Holy Grail. Why would one be disappointed by those? It makes absolutely no sense.
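For the record, the skeleton of that standard proof (a sketch for a single particle with \(H = p^2/2m + V(x)\)) is just time slicing: one inserts complete sets of position eigenstates between many short evolution operators,

\[
\langle x_f| e^{-iHT/\hbar} |x_i\rangle = \lim_{N\to\infty}\int \prod_{k=1}^{N-1} dx_k\; \prod_{k=0}^{N-1}\langle x_{k+1}| e^{-iH\epsilon/\hbar}|x_k\rangle,\qquad \epsilon = \frac{T}{N},
\]

evaluates each short-time factor by inserting momentum eigenstates, and watches the product of exponentials assemble itself into \(\int \mathcal{D}x(t)\,e^{iS[x]/\hbar}\). One derivation, learned once, covering a huge class of theories.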

In fact, these mirrors – rhetorically different but physically equivalent descriptions – became even more widespread, important, and omnipresent in the theoretical physics of recent decades when the string and field theory dualities were uncovered. And they're absolutely wonderful, not disappointing. It's surprising that a guy who claims to have been shaped by books about Feynman would think that this multitude of descriptions is disappointing. Feynman always emphasized his hobby of looking at problems from many different perspectives. It's so great. Even Apple had the slogan "think different" years before it turned its consumers into a brain-dead mass of sheep using the same boring, uninnovative smartphones and suffering from the maximum imaginable groupthink (not only when it comes to phones but even politics and other things). New perspectives – including new equivalent pictures in quantum mechanics and new descriptions of string or field theory related by dualities – enrich our minds and give us new abilities to solve certain problems or to see previously overlooked analogies and isomorphisms. A mirror is an object that a kid physicist likes and is intrigued by. There's just nothing wrong about the idea that a mature physicist who makes important steps towards an important theory has to master a hall of mirrors. Isn't it exactly the kind of activity that he was trained for as a kid and that he liked? Well, one may see that "tao" and "zen" books are encouraging the readers to do very different and less physical things than to investigate a network of mirrors and how it works.

If Mr Henderson doesn't like the physicist's ability to look at the phenomena through many perspectives or pictures, his thinking is clearly nothing like Feynman's. So maybe Mr Henderson was excited to hear that Feynman was picking locks but he must have understood that picking locks is not the most characteristic kind of work done by theoretical physicists, right? Looking at things with new eyes is what theoretical physicists often need to do – they must be good at it and they're happy and proud about it. If those things (looking at the Universe with new eyes) make you frustrated instead, theoretical physics just clearly isn't the occupation for you.

The final theory may indeed be a "hall of mirrors" in some literary metaphor but if it is so, it's great. A big part of the physicists' task will be to understand how the mirrors work, where they are located, and learn how to use their seemingly complex reflections to learn about the phenomena of Nature, including the phenomena that previously looked "trivial" but were hiding a complex game of mirrors. Again, this is a development that makes a true theoretical physicist happy. A theoretical physicist just wants to see under the surface. He wants to ask "why" even when the practically oriented laymen are "satisfied" and don't ask a damn thing. Many things look simple but this impression is misleading and something rather elaborate may be hiding behind the surface. Theoretical physicists naturally have the desire to remove the surface layer of illusions and see what's inside – and if the interior includes a hall of mirrors, then it's very interesting to know it and understand it in detail.

I could discuss other aspects of his opinions about physics. One implicit assumption at Rochester – and other schools that don't belong to the global elite – is that you may search for a theory of everything or a theory of quantum gravity while ignoring string theory. This is of course a lie, a lie that certain people maliciously try to spread, and if you combine this ignoring of string theory with the hatred towards the pictures of quantum mechanics, dualities, fiber bundles, and other things, your chances of contributing to the search for a theory of everything really drop close to zero.

At the end, even though this guy is a good writer and I would prefer if people were never emotionally frustrated or disappointed, it's hard for me to feel much sympathy for him. He may have been deceived by pop-science books which made him believe that theoretical physics is something entirely different from what it is. But he continued to lie to himself and to others and he's still searching for problems in the wrong places. Sorry, Mr Henderson, but the end of your love affair with theoretical physics wasn't the fault of theoretical physics.

by Luboš Motl (noreply@blogger.com) at January 02, 2017 10:41 AM

Geraint Lewis - Cosmic Horizons

Blog rebirth - a plan for 2017
It is now the twilight zone between Christmas and New Year. 2016 has been a difficult and busy year, and my recreational physics and blogging have suffered. But it is time for a rebirth and I plan to get back to writing about science and space here. But here are some things from 2016.

A Fortunate Universe: Life in a finely tuned cosmos was published. This has sucked up a huge amount of time and mental activity, and that continues. I will blog about the entire writing and publishing process at some point in the future, but it really is quite a complex process with many minefields to navigate. But it is done, and I am planning to write more in the future.
We also made a video to advertise the book!

I've done a lot of writing in other places, including Cosmos magazine on "A universe made for me? Physics, fine-tuning and life", and commentary in New Scientist and several articles in The Conversation including

Peering into the future: does science require predictions?

and

The cosmic crime-scene hunt for clues on how galaxies are formed

And one of my articles from last year, We are lucky to live in a universe made for us, was selected for inclusion in The Best Australian Science Writing 2016.
There have been a whole bunch of science papers as well, but I will write about those when the blog is up and running at full speed :)

by Cusp (noreply@blogger.com) at January 02, 2017 02:31 AM

December 31, 2016

The n-Category Cafe

NSA Axes Math Grants

Old news, but interesting: the US National Security Agency (NSA) announced some months ago that it was suspending funding to its Mathematical Sciences Program. The announcement begins by phrasing it as a temporary suspension—

…[we] will be unable to fund any new proposals during FY2017 (i.e. Oct. 1, 2016–Sept. 30, 2017)

—but by the end, sounds resigned to a more permanent fate:

We thank the mathematics community and especially the American Mathematical Society for its interest and support over the years.

We’ve discussed this grant programme before on this blog.

The NSA is said to be the largest employer of mathematicians in the world, and has been under political pressure for obvious reasons over the last few years, so it’s interesting that it cut this programme. Its British equivalent, GCHQ, is doing the opposite, expanding its mathematics grants aggressively. But still, GCHQ consistently refuses to engage in any kind of adult, evidence-based discussion with the mathematical community on what the effect of its actions on society might actually be.

by leinster (Tom.Leinster@ed.ac.uk) at December 31, 2016 03:59 AM

Clifford V. Johnson - Asymptotia

Another Signing!

Now here's an interesting coincidence! I came on to write a post about something I did earlier today - signing a contract for publishing The Book, with an exciting new publisher(!) - and then I was reminded of a post I did here exactly two years ago: it was about signing a contract with the previous publisher (who I later parted ways with - see this post).

Anyway, I had a picture in that post (have a look) of me signing the actual paper contract (in triplicate) that had been sent over the ocean on nice paper by pony and so forth, and then I sent it back over the ocean by return pony, and then a countersigned copy was sent over again by yet another pony... Instead, all I have to show you (above) is a screen shot of the electronic signing process I did this morning. Minutes later the countersigned version came back and all was done.

Anyway, in brief, because I should be working on the book (trying to finish a remarkable four pages of art today in one long 15 hour session in the office...), the back story is as follows: [...] Click to continue reading this post

The post Another Signing! appeared first on Asymptotia.

by Clifford at December 31, 2016 01:55 AM

December 30, 2016

Lubos Motl - string vacua and pheno

Gauge symmetry: its virtues and vices don't contradict each other
Three physicists affiliated with Princeton (now or recently) published an interesting preprint,
Locality and Unitarity from Singularities and Gauge Invariance
I know Nima from Harvard very well; he's brilliant and fun. Jaroslav Trnka is a big mind and my compatriot. Although I am a French writer (a month ago, I had to memorize sentences like "Je suis un écrivain français" for my sister's BF, one of the 21 cops who shot the terrorist in Nice), I only know that both Laurentia and Rodinia were supercontinents about 1 billion years ago.

Laurentiu Rodina is a particularly interesting hybrid name for an author, especially because the supercontinent Laurentia (basically the Eastern 2/3 of today's North America) was a portion of the supercontinent Rodinia. Laurentia was named after the St Lawrence River which was named after Lawrence of Rome. Rodinia is named after Rodina – a Slavic word meaning "the motherland" in Russian but "the family" in Czech. Yes, this "subtle difference" appears in the Czech-Russian edition of a Slavist's list of false friends.

At any rate, Rodinia was a motherland or a family of smaller supercontinents that included Laurentia. (Rodinia was a more ancient counterpart of Pangaea – a clumping of all continents into one – except that Pangaea existed between 300 and 200 million years ago, much more recently.) There's some redundancy in Laurentiu Rodina's name – and this redundancy and the subtleties linked to it may be similar to those of the gauge symmetry.

OK, after this silly geological introduction, we are finally getting to theoretical physics.




A nasty crackpot who was very influential 10 years ago recently claimed that by co-authoring the new paper, Nima Arkani-Hamed has made a big U-turn because he liked to dismiss the gauge symmetry.

Well, this criticism is inadequate for two reasons. First, as the current Czech president likes to say, only a moron never changes his opinions. Indeed, physicists are often fortunate to make great advances that prove their old opinions incorrect and open the path towards a much deeper or more accurate truth. Good scientists do care about the evidence and proving that they have been morons is actually one of their favorite sports, at least for some of them.

Second, no change of the opinions was needed in this case because the "old and seemingly dismissive" comments about the concept of gauge symmetry don't really contradict the "new and flattering" adjectives. Gauge symmetry is really a redundancy, not a physical symmetry – and it is very useful and may have deep implications for the existence of the spacetime and the form of laws of physics, too.




Needless to say, the Standard Model of particle physics – the last undisputed "theory of nearly everything" describing all non-gravitational phenomena – is tightly linked to the concept of the gauge symmetry. At every point \((x,y,z,t)\) of the spacetime, we may pick an element \(g(x,y,z,t)\) of the gauge group which is basically \(SU(3)\times SU(2)\times U(1)\), and perform a transformation of the fields. The fields generally change. The fields transform in various representations of the gauge group (products of singlets, doublets, or triplets under both \(SU(2)\) and \(SU(3)\), marked by particular values of the hypercharge \(Y\) generating the \(U(1)\) – all irreps of \(U(1)\) are one-dimensional). On top of the expected transformation rules, the gauge fields \(A_\mu(x,y,z,t)\) also get modified by terms proportional to \(g^{-1}(x,y,z,t) \partial_\mu g(x,y,z,t)\) – which take values in the Lie algebra of the gauge group, just like the gauge fields.
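Written out (a schematic sketch in one common convention, with the covariant derivative \(D_\mu = \partial_\mu - ieA_\mu\) and the coupling denoted \(e\)), the rules are

\[
\psi(x) \to g(x)\,\psi(x),\qquad A_\mu(x) \to g(x)\,A_\mu(x)\,g^{-1}(x) + \frac{i}{e}\,g(x)\,\partial_\mu g^{-1}(x),
\]

which reduce to the familiar \(\psi\to e^{i\alpha(x)}\psi\), \(A_\mu \to A_\mu + \frac{1}{e}\partial_\mu \alpha\) in the Abelian \(U(1)\) case. Whether the inhomogeneous derivative term is written with \(g\,\partial_\mu g^{-1}\) or with \(g^{-1}\partial_\mu g\) is a matter of conventions for how \(g\) acts on the fields.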

The gauge symmetry is the requirement that when the Standard Model fields obeyed the (field) equations of motion before the transformation, the transformed fields obey them, too. The previous sentence sounds like a standard physicist's definition of a symmetry. If something obeys the laws of physics before the transformation, it does obey them after the transformation, too.

There is a reason why we say that the gauge symmetry is not a real symmetry. The "something" before the transformation and the "something" after the transformation are actually objects that must be physically identified with each other. When you make any measurement by your apparatuses, they will produce the same results whether or not the transformation took place. The untransformed configuration of the fields and the transformed one only differ in "physically unmeasurable" quantities.

To be specific, in electromagnetism, the electric and magnetic vectors \(\vec E(x,y,z,t)\) and \(\vec B(x,y,z,t)\) may be measured by some gadgets. And they also happen to be gauge-invariant – their numerical values don't depend on the gauge transformation parameter \(g(x,y,z,t)\) I mentioned above. They're the same "before" and "after". On the other hand, the gauge potentials \(\phi(x,y,z,t)\) and \(\vec A(x,y,z,t)\) do change after the transformation. But they cannot be measured. The value of the potential depends on your "gauge choice", "calibration", i.e. basically on human conventions.
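Concretely (a standard illustration): for an arbitrary function \(\chi(x,y,z,t)\), the transformation acts as

\[
\vec A \to \vec A + \nabla\chi,\qquad \phi \to \phi - \frac{\partial\chi}{\partial t},
\]

while

\[
\vec E = -\nabla\phi - \frac{\partial \vec A}{\partial t},\qquad \vec B = \nabla\times\vec A
\]

remain exactly the same, because \(\nabla\times\nabla\chi = 0\) and the two terms with \(\partial_t\nabla\chi\) cancel. The measurable fields don't notice the transformation; only the convention-dependent potentials do.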

This situation differs from global symmetries. When you rotate an object (a spaceship with a laboratory) by the angle \(\alpha\), it may obey the laws of physics after the rotation if it obeyed them before it. The astronauts inside the spaceship may be unable to determine whether their spaceship is a rotated one or not – after all, the word "rotated" should be supplemented by "with respect to whom", they may object. However, it makes sense to distinguish the rotated and unrotated spaceship. An external observer really has to distinguish them because his relative orientation with respect to the spaceship may be measured. That's why the rotation of a spaceship is a true, "global" symmetry.

On the other hand, the relative orientations are equally unmeasurable in the case of the gauge symmetry. You may choose the value of \(g(x,y,z,t)\) in the gauge group that is nontrivial inside a spaceship but trivial outside the spaceship. But the observer outside the spaceship will still be unable to distinguish the untransformed and transformed state. They're physically identical.

In a quantum mechanical theory, if the states \(\ket\psi\) and \(\ket\psi + \epsilon G \ket\psi\) of quantum fields are physically identical, it means – by linearity – that the state \(\epsilon G\ket\psi\) is physically identical to the zero vector of the Hilbert space. Omit the coefficient: \(G\ket\psi\) must be physically identical to zero. Such "pure gauge" states (the variations of some states under infinitesimal gauge transformations) must behave as unphysical ghosts of a sort. Physics allows ghosts and angels on needles as long as they behave: They are not allowed to mess with the experiments (and with Texas). The condition I mentioned is equivalent to the requirement that the projection of \(G\ket\psi\) into the physical Hilbert space is zero. If \(G\ket\psi\) were not zero, it would mean that the state carries some "charges" (or "charge densities") under the gauge group i.e. that the state is non-invariant, a non-singlet. A physical state must be a singlet (invariant) under the gauge symmetry, however! That's different from a true, global symmetry: Physical states are allowed to be (and usually are) non-singlets i.e. non-invariant under the global symmetries. For example, planets carry a nonzero angular momentum even though \(\vec J\) generates a (global) rotational symmetry.
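In electromagnetism coupled to charged matter, a standard way to write the same condition (a sketch) uses the Gauss-law operator as the generator of the gauge transformations:

\[
G[\lambda] = \int d^3x\;\lambda(\vec x)\,\big(\nabla\cdot\vec E(\vec x) - \rho(\vec x)\big),\qquad G[\lambda]\,\ket{\psi_{\rm phys}} = 0\ \text{ for every }\lambda(\vec x),
\]

where \(\rho\) is the matter charge density. Physical states are exactly those obeying the quantum version of Gauss' law, i.e. the singlet (invariance) condition above and the non-dynamical constraint discussed below.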

One may show that it means that these states are null states (their norm is zero, as expected e.g. from the null polarizations of a photon, in this case \(\epsilon^\mu\sim k^\mu\)). But they must also be decoupled from the physical states we care about. If you guarantee that states not orthogonal to such pure gauge states are absent in the initial state, they must be absent in the final state, too. Gauge symmetry removes two polarizations out of the four that could exist for a photon field \(A_\mu(x,y,z,t)\). One of them has \(\epsilon^\mu\sim k^\mu\) and is the "pure gauge" state that is null and harmless. The other one is a state obeying \(k^\mu \epsilon_\mu \neq 0\); which of those vectors is chosen is irrelevant. This other unphysical state must be banned in the initial state by Gauss' law (a non-dynamical equation of motion such as Maxwell's equation \({\rm div}\,\vec D = \rho\) – "non-dynamical" means "not containing time derivatives") and the gauge symmetry (more or less equivalent to the conservation of the charge, thanks to Emmy Noether's theorem) basically guarantees that if there's no violation of Gauss' law in the initial state, there won't be any violation in the final state.

So you may consistently demand that the "evil unphysical" polarizations of the photon, those with \(\epsilon_\mu k^\mu \neq 0\) that could be sensitive to the "harmless unphysical" polarizations \(\epsilon^\mu \sim k^\mu\), are never produced by the scattering or other physical processes (in evolution). That's needed for your ability to consistently demand that they're absent in all (initial and final) states.

(In the BRST formalism, the BRST-exact states \(Q\ket\lambda\) explain why the polarizations created with \(\epsilon^\mu\sim k^\mu\) as well as those created by the \(c\)-ghost are unphysical, while the BRST-non-closed states \(Q\ket\psi \neq 0\) may be consistently forbidden, which is what eliminates the unphysical \(k^\mu \epsilon_\mu \neq 0\) states as well as those created with the \(b\)-antighost. The BRST formalism makes many loop calculations elegant if the symmetry is non-Abelian – but the whole BRST formalism and the new \(b,c\) fields added to it become pretty much worthless for an Abelian symmetry.)

Gauge symmetry kills negative-norm states

OK, I have implicitly explained why the gauge symmetry is "needed". It kills unphysical polarizations of the photon and quanta of other spin-1 (or higher) fields – and those could be harmful. The polarization with \(\epsilon^\mu \sim k^\mu\) is null (probabilities are equal to zero by Born's rule) and harmless by itself. But the polarizations with \(\epsilon^\mu k_\mu \neq 0\) are "sensitive" (not orthogonal) to the harmless null polarizations and that could be dangerous. These dangerous polarizations would behave as psychics who can feel the angels on the needle – and it's really the psychics, not the angels, who are dangerous because the angels and psychics change the probabilities by zero and nonzero, respectively. ;-)

In effect, that would force us to keep the time-like polarization with \(\epsilon^\mu = (1,0,0,0)\) in the spectrum, because it's one of the harmful non-orthogonal polarizations that cannot be consistently removed, and because gauge bosons with this time-like polarization possess a negative norm, some processes that include them would be predicted to occur with negative probabilities. That would be trouble for the LHC experimenters because most of them are unable to observe minus one million collisions of a certain kind. ;-)
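Where the negative norm comes from may be seen in one line (a sketch in Gupta-Bleuler-style covariant quantization, with the signature \((+,-,-,-)\)): the covariant oscillators obey \([a_\mu(\vec k), a^\dagger_\nu(\vec k')] = -\eta_{\mu\nu}\,\delta^3(\vec k - \vec k')\), so the one-photon state \(\epsilon^\mu a^\dagger_\mu(\vec k)\ket 0\) has a norm proportional to \(-\epsilon^\mu\epsilon^{*\nu}\eta_{\mu\nu}\). That is positive for the two transverse polarizations, zero for \(\epsilon^\mu\sim k^\mu\) (because \(k^2=0\)), and negative for the time-like choice \(\epsilon^\mu=(1,0,0,0)\), which is exactly the polarization whose probabilities would come out negative.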

The gauge symmetry removes the "harmless" null longitudinal polarization, \(\epsilon^\mu\sim k^\mu\), as well as the "harmful" polarizations such as the time-like one. That's great because only the polarizations in the \(x\)- and \(y\)-directions – with \(\vec k\) in the \(z\)-direction – are kept in the physical spectrum.

So if we need a consistent theory which predicts non-negative probabilities and we want to use a quantum field \(A_\mu(x,y,z,t)\) with a Lorentz index (i.e. fields of spin equal to one or greater than one), we simply need a gauge symmetry with the same number of generators to cure the potential diseases. In other words, the gauge symmetry is needed in every manifestly Lorentz-invariant quantum field theory with elementary particles of spin \(j\geq 1\).

The need for the gauge symmetry is fatally important if the conditions are met. In particular, a gauge anomaly (a one-loop quantum process that violates a symmetry that used to hold at the tree level) means an inconsistency of a theory with a gauge symmetry because such an anomaly would prevent us from consistently banning the psychics. (Only when I was editing the blog post, I realized that the terminology involving angels-and-psychics for the two unphysical polarizations should be used much more systematically, even in textbooks and courses.)

Note that the gauge symmetry is only "needed" if both conditions are satisfied. There must be some spin-1 or greater particles. And we want a formalism where the Lorentz symmetry is manifest. If you violate either condition, the need for the gauge symmetry goes away. If you have a quantum field theory with scalar and spinor fields only, \(j=0\) and \(j=1/2\), you won't need a gauge symmetry. And even if you include spin-1 particles, you may get rid of all the gauge fields by working with some gauge-fixing condition and eliminating the unphysical polarizations from scratch.

There exist more or less elegant methods to achieve it – to get rid of the unphysical polarizations of photons created by \(A_0\) and \(A_z\), if you wish, and the corresponding gauge symmetry that makes them harmless. In the twistor-based business, the gauge symmetry was basically non-existent from scratch because only the (two transverse) physical polarizations naturally arise in the twistor formalism. So while the removal of the unphysical polarizations and the corresponding gauge symmetry may look messy in the normal Lorentz-index-based formalisms, it's the other way around in the twistor formalism. Only the two transverse, \(x,y\) physical polarizations are natural in the twistor formalism. The remaining two are unnatural and unnecessary.

Arkani-Hamed et al. have learned lessons from their extensive work on the twistor formalism – where the gauge symmetry doesn't arise at all, which supports the dismissive claims about the gauge symmetry. But they mentally returned to the normal non-twistor spacetime with Lorentz indices that most of us are familiar with and imported some wisdom from the twistor world. And they claim that some amplitudes may be fully reconstructed from some singularities and the gauge symmetry. This is basically analogous to a claim in the twistor world that all amplitudes are obtained from some singular ones by recursive formulae. In the non-twistor world, there are new degrees of freedom – the unphysical polarizations of the fields – that could destroy the uniqueness of the answers. But if one adds the corresponding requirement of the gauge symmetry, this non-uniqueness goes away. That's pretty much why the paper they released has to work.

Most of us who have ever thought about standard particle physics consider gauge symmetry to be a "master" principle. Even though it is not a real, global symmetry, its choice determines a large portion of the physical content of a quantum field theory. Well, the choice of the gauge symmetry in the gauge-symmetric formalism basically determines the list of elementary spin-1 particles (and if we generalize the Yang-Mills symmetry to local supersymmetry in supergravity or local diffeomorphisms in general relativity, we may also talk about spin-3/2 and spin-2 particles) and their tree-level cubic and quartic interactions. Moreover, the gauge symmetry invites us to classify the remaining fields as representations of the gauge group which organizes the other fields and determines their interactions with the gauge fields, too. It's pretty.

It's pretty but it's not "absolutely needed". A formulation without an explicit Lorentz invariance may avoid the gauge symmetry. Moreover, such a formulation may be needed because the gauge invariance is only needed for a Lorentz-covariant treatment of elementary spin-1 particles and one might conjecture that all the spin-1 particles may be composite in some sense and there's no truly universal litmus test to distinguish elementary and composite particles. (Well, there are some theorems banning composite massless or other spin-1 bosons under certain assumptions but there surely exist spin-1 composite bosons in QCD, among other things, and those could emulate at least massive gauge bosons under certain circumstances.)

While it's indisputable that the gauge symmetry is useful as an organizing principle to build important theories, and that it's not quite necessary in physics, a pragmatic question remains: Will you miss anything if you choose one of the extreme viewpoints? You may view gauge symmetries as accidents that have helped us to find some theories but those can be formulated without gauge symmetries as well. In other words, you may downplay the importance of the gauge symmetries in physics.

Alternatively, you may say that gauge symmetries really are important and will keep their place in the future of physics. You may dismiss all the "gauge symmetry is just a redundancy" talk as some irrelevant babbling that only led to the awkward definition of theories that should "naturally" be defined with the gauge symmetries, anyway. After all, you may argue (and I often do argue) that a subset of gauge symmetries (which change the fields at infinity, typically by a "constant" transformation) does behave as a set of true global symmetries and non-singlet/charged states under these transformations should be considered physical – so at least if you use fields (degrees of freedom) that allow you to talk about both kinds of symmetries, you can't really throw away all the gauge symmetries without throwing away the truly physical global symmetries.

I am not sure about the answer – and I think that no one has a proof that settles the answer – and I guess that Nima and others are sort of open-minded, too. Depending on the available evidence and fresh new arguments, calculations, and ideas, one's opinions may drift from "gauge symmetry will survive in the future of physics" to "gauge symmetry will fade away" and back. In particular, Nima likes to change his perspective in rather extreme ways – it's perhaps a part of the personality. He may have made a U-turn concerning the anthropic principle (perhaps a few times) and he's happy about the freedom that science allows to passionate yet flexible physicists like himself. ;-) Just to be sure, I do think that true science allows this attitude – but it doesn't really require it. There exist perspectives from which Nima is a conservative physicist – every great physicist has to be conservative according to a suitable definition of conservativeness. On the other hand, there are perspectives from which Nima is a classic revolutionary if not "chaos maker". Both attitudes may be useful when one is smart and lucky, of course.

There are various indications that the gauge symmetry could be rather fundamental. For example, in a January 2015 paper, Mintun, Polchinski, and Rosenhaus have argued that the gauge symmetry plays a truly fundamental role in the "quantum error correction" mechanism that allows gravity to be holographically encoded. There exist other papers – whose authors mostly don't read each other – with a similar message. I really do think that they should talk to each other much more than they do.

I am actually often colliding with this topic of "gauge symmetry as a master principle of the spacetime" in the part of my research focusing on the emergence of the spacetime. One reason why this topic is omnipresent in my approach is that the pure gauge polarizations of gauge bosons ultimately arise from the Virasoro-exact states of strings in perturbative string theory, \(L_{-1}\ket\psi\), roughly speaking, so all spacetime gauge symmetries are "offspring" of the key gauge symmetry on the string world sheet, the conformal symmetry. I often tend to imagine that a "more abstract and complex" symmetry like the conformal one exists even nonperturbatively, \(U(N)\) in the BFSS matrix model is a close cousin, and the identification of the right symmetry basically "implies" the right identification of all the spacetime gauge symmetries as well as the difference between physical and unphysical polarizations, and therefore the causal structure of the spacetime and other things, too.

And the gauge symmetry is also important because Wilson lines may be nonlocal, and sometimes heavily nonlocal, degrees of freedom that determine the geometric relationships between regions that may be a priori connected or disconnected etc. To say the least, I believe that gauge theories with a manifest gauge-covariant formalism encourage (and maybe are needed for) the definition of truly natural non-local degrees of freedom in local theories – and those may be helpful to discuss wormholes and entanglement in quantum gravity and related issues.

So I think that there's a lot of potential for the gauge symmetries to become less important than they seem today – and lots of potential for them to be more important than they seem today. I am not sure about the future evolution of their status in world-class physics research. But what I am sure about is that Nima's statements weren't ever meant to say that 0% or 100% of the future papers in theoretical physics will allow an important role to be played by gauge symmetries. The percentage is unknown and almost certainly strictly in between 0% and 100%. The statement that "the gauge symmetry isn't a real symmetry" was always meant to say something very specific and something that has been settled by a proof, one that should be understandable to a good student.

People who don't understand any of these "nuances" should better use their opportunity to shut up. Theoretical physics is surely not a kindergarten pissing contest where one side mindlessly shouts "gauge symmetry akbar" and the other side shouts critical slogans about the gauge symmetry. To follow theoretical physics of 2016, you simply need to learn much more than the four words "gauge symmetry, yes, no".

by Luboš Motl (noreply@blogger.com) at December 30, 2016 05:59 PM

Marco Frasca - The Gauge Connection

Yang-Mills theory paper gets published!


Exact solutions of quantum field theories are very rare and, normally, refer to toy models and pathological cases. Quite recently, I put on arXiv a pair of papers presenting exact solutions both of the Higgs sector of the Standard Model and of the Yang-Mills theory made just of gluons. The former appeared a few months ago (see here) while the latter was accepted for publication a few days ago (see here). I have updated the latter just today and the accepted version will appear on arXiv on 2 January next year.

What does it mean to solve a quantum field theory exactly? A quantum field theory is exactly solved when we know all of its correlation functions. From them, thanks to the LSZ reduction formula, we are able in principle to compute any observable, be it a cross section or a decay time. The shortest route to the correlation functions is through the Dyson-Schwinger equations. These equations form an infinite tower in which each equation depends on higher-order correlators, so they are generally very difficult to solve. They have been widely used in studies of Yang-Mills theory, either with some truncation scheme or through numerical studies. Their exact solutions are generally not known and are expected to be too difficult to find.
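To see the structure just mentioned (a sketch for a toy scalar theory, not the Yang-Mills equations treated in the papers): for a single scalar field with a \(\frac{\lambda}{4!}\phi^4\) interaction, the lowest Dyson-Schwinger equation, written in coordinate space, reads schematically and up to sign conventions

\[
(\Box + m^2)\,G_1(x) + \frac{\lambda}{3!}\,\langle\phi(x)^3\rangle = 0,
\]

with \(G_1(x)=\langle\phi(x)\rangle\). The equation for the two-point function then involves the four-point function, and so on: every equation in the tower depends on higher correlators, which is why exact solutions are so rare.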

The problem can be faced when some solutions to the classical equations of motion of a theory are known. In this way there is a possibility to treat the Dyson-Schwinger set. Anyhow, before entering into their treatment, it should be emphasized that in the literature the Dyson-Schwinger equations were managed in just one way: using their integral form and expressing all the correlation functions in momentum space. It was an original view by Carl Bender that opened up another way (see here): write the Dyson-Schwinger equations in their differential form in coordinate space. So, when you have exact solutions of the classical theory, a possibility opens up to treat the quantum case as well!

This shows unequivocally that a Yang-Mills theory can display a mass gap and an infinite spectrum of excitations. Of course, if nature has chosen the particular ground state depicted by such classical solutions, we have hit the jackpot. This is a possibility, but the proof is strongly related to what is going on for the Higgs sector of the Standard Model, which I solved exactly but without other interacting matter. If the decay rates of the Higgs particle agree with our computations, we will be on the right track for Yang-Mills theory as well. Nature tends to repeat working mechanisms.

Marco Frasca (2015). A theorem on the Higgs sector of the Standard Model. Eur. Phys. J. Plus (2016) 131: 199. arXiv:1504.02299v3

Marco Frasca (2015). Quantum Yang-Mills field theory. arXiv:1509.05292v1

Carl M. Bender, Kimball A. Milton & Van M. Savage (1999). Solution of Schwinger-Dyson equations for \(\mathcal{PT}\)-symmetric quantum field theory. Phys. Rev. D 62: 085001 (2000). arXiv:hep-th/9907045v1


Filed under: Applied Mathematics, Mathematical Physics, Particle Physics, Physics, QCD Tagged: Correlation functions, Dyson-Schwinger equations, Exact solutions, Exact solutions of nonlinear PDEs, Mass Gap, Quantum Field Theory, Yang-Mills spectrum, Yang-Mills theory

by mfrasca at December 30, 2016 05:20 PM

December 29, 2016

Tommaso Dorigo - Scientificblogging

INFN Gives 73 Permanent Positions To Young Researchers In Physics
Today I am actually quite proud of my research institute, the Istituto Nazionale di Fisica Nucleare (INFN), which leads Italian research in fundamental physics. In fact, a selection to hire 73 new researchers with permanent positions has reached its successful conclusion. Rather than giving you my personal opinions (very positive!), I think it is better to let the INFN president Fernando Ferroni, and the numbers themselves, speak.


by Tommaso Dorigo at December 29, 2016 03:45 PM

December 28, 2016

John Baez - Azimuth

Give the Earth a Present: Help Us Save Climate Data

[Image: Getz Ice Shelf]

We’ve been busy backing up climate data before Trump becomes President. Now you can help too, with some money to pay for servers and storage space. Please give what you can at our Kickstarter campaign here:

Azimuth Climate Data Backup Project.

If we get $5000 by the end of January, we can save this data until we convince bigger organizations to take over. If we don’t get that much, we get nothing. That’s how Kickstarter works. Also, if you donate now, you won’t be billed until January 31st.

So, please help! It’s urgent.

I will make public how we spend this money. And if we get more than $5000, I’ll make sure it’s put to good use. There’s a lot of work we could do to make sure the data is authenticated, made easily accessible, and so on.

The idea

The safety of US government climate data is at risk. Trump plans to have climate change deniers running every agency concerned with climate change. So, scientists are rushing to back up the many climate databases held by US government agencies before he takes office.

We hope he won’t be rash enough to delete these precious records. But: better safe than sorry!

The Azimuth Climate Data Backup Project is part of this effort. So far our volunteers have backed up nearly 1 terabyte of climate data from NASA and other agencies. We’ll do a lot more! We just need some funds to pay for storage space and a server until larger institutions take over this task.

The team

• Jan Galkowski is a statistician with a strong interest in climate science. He works at Akamai Technologies, a company responsible for serving at least 15% of all web traffic. He began downloading climate data on the 11th of December.

• Shortly thereafter John Baez, a mathematician and science blogger at U. C. Riverside, joined in to publicize the project. He’d already founded an organization called the Azimuth Project, which helps scientists and engineers cooperate on environmental issues.

• When Jan started running out of storage space, Scott Maxwell jumped in. He used to work for NASA—driving a Mars rover among other things—and now he works for Google. He set up a 10-terabyte account on Google Drive and started backing up data himself.

• A couple of days later Sakari Maaranen joined the team. He’s a systems architect at Ubisecure, a Finnish firm, with access to a high-bandwidth connection. He set up a server, he’s downloading lots of data, he showed us how to authenticate it with SHA-256 hashes, and he’s managing many other technical aspects of this project.

There are other people involved too. You can watch the nitty-gritty details of our progress here:

Azimuth Backup Project – Issue Tracker.

and you can learn more here:

Azimuth Climate Data Backup Project.


by John Baez at December 28, 2016 07:18 PM

December 23, 2016

John Baez - Azimuth

Saving Climate Data (Part 3)

You can back up climate data, but how can anyone be sure your backups are accurate? Let’s suppose the databases you’ve backed up have been deleted, so that there’s no way to directly compare your backup with the original. And to make things really tough, let’s suppose that faked databases are being promoted as competitors with the real ones! What can you do?

One idea is ‘safety in numbers’. If a bunch of backups all match, and they were made independently, it’s less likely that they all suffer from the same errors.

Another is ‘safety in reputation’. If one bunch of backups of climate data is held by academic institutes of climate science, and another is held by climate change denying organizations (conveniently listed here), you probably know which one you trust more. (And this is true even if you’re a climate change denier, though your answer may be different from mine.)

But a third idea is to use a cryptographic hash function. In very simplified terms, this is a method of taking a database and computing a fairly short string from it, called a ‘digest’.

[Diagram: a cryptographic hash function mapping input data to a short digest]

A good hash function makes it hard to change the database and get a new one with the same digest. So, if the person owning a database computes and publishes the digest, anyone can check that your backup is correct by computing its digest and comparing it to the original.

It’s not foolproof, but it works well enough to be helpful.

Of course, it only works if we have some trustworthy record of the original digest. But the digest is much smaller than the original database: for example, in the popular method called SHA-256, the digest is 256 bits long. So it’s much easier to make copies of the digest than to back up the original database. These copies should be stored in trustworthy ways—for example, in the Internet Archive.
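As an illustration of how cheap the check is for anyone holding a copy, here is a minimal sketch in Python (the file name and the published digest below are hypothetical, purely for illustration):

import hashlib

def sha256_digest(path, chunk_size=1 << 20):
    # Compute the SHA-256 digest of a file, reading it in roughly 1 MB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values, for illustration only.
published_digest = "0123abcd..."  # the digest published by the data's custodian
if sha256_digest("my_backup_of_the_database.nc") == published_digest:
    print("The backup matches the published digest.")
else:
    print("Mismatch: this copy is not identical to the original.")

The sha256sum command quoted further down does the same job from the command line.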

When Sakari Maaranen made a backup of the University of Idaho Gridded Surface Meteorological Data, he asked the custodians of that data to publish a digest, or ‘hash file’. One of them responded:

Sakari and others,

I have made the checksums for the UofI METDATA/gridMET files (1979-2015) as both md5sums and sha256sums.

You can find these hash files here:

https://www.northwestknowledge.net/metdata/data/hash.md5

https://www.northwestknowledge.net/metdata/data/hash.sha256

After you download the files, you can check the sums with:

md5sum -c hash.md5

sha256sum -c hash.sha256

Please let me know if something is not ideal and we’ll fix it!

Thanks for suggesting we do this!

Sakari replied:

Thank you so much! This means everything to public mirroring efforts. If you’d like to help further promoting this Best Practice, consider getting it recognized as a standard when you do online publishing of key public information.

1. Publishing those hashes is already a major improvement on its own.

2. Publishing them on a secure website offers people further guarantees that there has not been any man-in-the-middle.

3. Digitally signing the checksum files offers the best easily achievable guarantees of data integrity by the person(s) who sign the checksum files.

Please consider having these three steps included in your science organisation’s online publishing training and standard Best Practices.

Feel free to forward this message to whom it may concern. Feel free to rephrase as necessary.

As a separate item, public mirroring instructions for how to best download your data and/or public websites would further guarantee permanence of all your uniquely valuable science data and public contributions.

Right now we should get this message viral through the government funded science publishing people. Please approach the key people directly – avoiding the delay of using official channels. We need to have all the uniquely valuable public data mirrored before possible changes in funding.

Again, thank you for your quick response!

There are probably lots of things to be careful about. Here’s one. Maybe you can think of more, and ways to deal with them.

What if the data keeps changing with time? This is especially true of climate records, where new temperatures and so on are added to a database every day, or month, or year. Then I think we need to ‘time-stamp’ everything. The owners of the original database need to keep a list of digests, with the time each one was made. And when you make a copy, you need to record the time it was made.
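A minimal sketch of what such a time-stamped record could look like, again in Python (the file names are hypothetical):

import hashlib, time

def record_digest(log_path, data_path):
    # Append one line of the form "UTC-timestamp <tab> file <tab> sha256" to a growing log.
    h = hashlib.sha256()
    with open(data_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    with open(log_path, "a") as log:
        log.write(stamp + "\t" + data_path + "\t" + h.hexdigest() + "\n")

record_digest("digests.log", "gridmet_2016.nc")  # hypothetical file names

The owners would publish the growing log, and anyone making a copy would record which time-stamped digest that copy corresponds to.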


by John Baez at December 23, 2016 05:28 PM

December 22, 2016

Quantum Diaries

The forgotten life of Einstein’s wife

19 December was the 141st anniversary of the birth of Mileva Marić Einstein. But who remembers this brilliant scientist? While her husband, Albert Einstein, is celebrated as perhaps the best physicist of the century, one question about his career remains: How much did his first wife contribute to his groundbreaking science? While nobody has been able to credit her with any specific part of his work, their letters and numerous testimonies presented in the books dedicated to her(1-5) provide substantial evidence of how they collaborated from the time they met in 1896 up to their separation in 1914. They depict a couple united by a shared passion for physics, music and for each other. So here is their story.

Mileva Marić was born in Titel in Serbia in 1875. Her parents, Marija Ruzić and Miloš Marić, a wealthy and respected member of his community, had two other children: Zorka and Miloš Jr. Mileva attended high school the last year girls were admitted in Serbia. In 1892, her father obtained the authorization of the Minister of Education to allow her to attend physics lectures reserved to boys. She completed her high school in Zurich in 1894 and her family then moved to Novi Sad. Mileva’s classmates described her as brilliant but not talkative. She liked to get to the bottom of things, was perseverant and worked towards her goals.

Albert Einstein was born in Ulm in Germany in 1879 and had one sister, Maja. His father, Hermann, was an industrialist. His mother, Pauline Koch, came from a rich family. Albert was inquisitive, bohemian and rebellious. Being undisciplined, he hated the rigor of German schools, so he too finished his high school in Switzerland and his family relocated to Milan.


Mileva Marić in 1896 when she entered the Polytechnic Institute in Zurich

Albert and Mileva were admitted to the physics-mathematics section of the Polytechnic Institute in Zurich (now ETH) in 1896 with three other students: Marcel Grossmann, Louis Kollros and Jakob Ehrat. Albert and Mileva became inseparable, spending countless hours studying together. He attended only a few lectures, preferring to study at home. Mileva was methodical and organized. She helped him channel his energy and guided his studies, as we learn from Albert’s letters, exchanged between 1899 and 1903 during school holidays: 43 letters from Albert to Mileva have been preserved but only 10 of hers remain(5). These letters provide a first-hand account of how they interacted at the time.

In August 1899, Albert wrote to Mileva: "When I read Helmholtz for the first time, it seemed so odd that you were not at my side and today, this is not getting better. I find the work we do together very good, healing and also easier." Then on 2 October 1899, he wrote from Milan: "… the climate here does not suit me at all, and while I miss work, I find myself filled with dark thoughts – in other words, I miss having you nearby to kindly keep me in check and prevent me from meandering".

Mileva boarded in a pension for women where she met her life-long friends Helene Kaufler-Savić and Milana Bota. Both spoke of Albert’s continuous presence at Mileva’s place, where he would come freely to borrow books in Mileva’s absence. Milan Popović, Helene’s grandson, published the letters Mileva exchanged with her throughout her life(4).

 By the end of their classes in 1900, Mileva and Albert had similar grades (4.7 and 4.6, respectively) except in applied physics where she got the top mark of 5 but he, only 1. She excelled at experimental work while he did not. But at the oral exam, Professor Minkowski gave 11 out of 12 to the four male students but only 5 to Mileva. Only Albert got his degree.

Meanwhile, Albert’s family strongly opposed their relationship. His mother was adamant. “By the time you’re 30, she’ll already be an old hag!” as Albert reported to Mileva in a letter dated 27 July 1900, as well as “She cannot enter a respectable family”. Mileva was neither Jewish, nor German. She had a limp and was too intellectual in his mother’s opinion, not to mention prejudices against foreign people. Moreover, Albert’s father insisted his son find work before getting married.

In September 1900, Albert wrote to Mileva: “I look forward to resume our new common work. You must now continue with your research – how proud I will be to have a doctor for my spouse when I’ll only be an ordinary man.“ They both came back to Zurich in October 1900 to start their thesis work. The other three students all received assistant positions at the Institute, but Albert did not. He suspected that professor Weber was blocking him. Without a job, he refused to marry her. They made ends meet by giving private lessons and “continue[d] to live and work as before.“ as Mileva wrote to her friend Helene Savić.

On 13 December 1900, they submitted a first article on capillarity signed only under Albert’s name. Nevertheless, both referred to this article in letters as their common article. Mileva wrote to Helene Savić on 20 December 1900: “We will send a private copy to Boltzmann to see what he thinks and I hope he will answer us.” Likewise, Albert wrote to Mileva on 4 April 1901, saying that his friend Michele Besso “visited his uncle on my behalf, Prof. Jung, one of the most influential physicists in Italy and gave him a copy of our article.”

The decision to publish only under his name seems to have been taken jointly. Why? Radmila Milentijević, a former history professor at City College in New York, published in 2015 the most comprehensive biography of Mileva to date(1). She suggests that Mileva probably wanted to help Albert make a name for himself, so that he could find a job and marry her. Dord Krstić, a former physics professor at Ljubljana University, spent 50 years researching Mileva's life. In his well-documented book(2), he suggests that, given the prevalent bias against women at the time, a publication co-signed with a woman might have carried less weight.

We will never know. But nobody made it clearer than Albert Einstein himself that they collaborated on special relativity when he wrote to Mileva on 27 March 1901: “How happy and proud I will be when the two of us together will have brought our work on relative motion to a victorious conclusion.”

 Then Mileva’s destiny changed abruptly. She became pregnant after a lovers’ escapade in Lake Como. Unemployed, Albert would still not marry her. With this uncertain future, Mileva took her second and last attempt at the oral exam in July 1901. This time, Prof. Weber, whom Albert suspected of blocking his career, failed her. Forced to abandon her studies, she went back to Serbia, but came back briefly to Zurich to try to persuade Albert to marry her. She gave birth to a girl named Liserl in January 1902. No one knows what happened to her. She was probably given to adoption. No birth or death certificates were ever found.

Earlier, in December 1901, the father of their classmate Marcel Grossmann had intervened to get Albert a post at the Patent Office in Bern. He started work there in June 1902. In October, shortly before dying, Albert's father granted him permission to marry. Albert and Mileva married on 6 January 1903. Albert worked 8 hours a day, 6 days a week at the Patent Office while Mileva took care of the household. In the evenings, they worked together, sometimes late into the night. Both mentioned this to friends, he to Hans Wohlwend, she to Helene Savić in a letter of 20 March 1903 in which she said how sorry she was to see Albert working so hard at the office. On 14 May 1904, their son Hans-Albert was born.


Mileva and Albert’s wedding picture in 1903

Despite this, 1905 is now known as Albert's "miracle year": he published five articles: one on the photoelectric effect (which led to the 1921 Nobel Prize), two on Brownian motion, one on special relativity and the famous E = mc². He also commented on 21 scientific papers for a fee and submitted his thesis on the dimensions of molecules. Much later, Albert told R. S. Shankland(6) that relativity had been his life for seven years and the photoelectric effect for five. Peter Michelmore, one of his biographers(7), wrote that after spending five weeks completing the article containing the basis of special relativity, Albert "went to bed for two weeks. Mileva checked the article again and again, and then mailed it". Exhausted, the couple made the first of three visits to Serbia, where they met numerous relatives and friends, whose testimonies provide a wealth of information on how Albert and Mileva collaborated.

Mileva’s brother, Miloš Jr, a person known for his integrity, stayed on several occasions with the Einstein family while studying medicine in Paris. Krstić(2) wrote: “[Miloš] described how during the evenings and at night, when silence fell upon the town, the young married couple would sit together at the table and at the light of a kerosene lantern, they would work together on physics problems. Miloš Jr. spoke of how they calculated, wrote, read and debated.” Krstić heard this directly from relatives of Mileva, Sidonija Gajin and Sofija Galić Golubović.

Zarko Marić, a cousin of Mileva's father, lived in the countryside property where the Einsteins stayed during their visit. He told Krstić how Mileva calculated, wrote and worked with Albert. The couple often sat in the garden to discuss physics. Harmony and mutual respect prevailed. Gajin and Zarko Marić also reported hearing from Mileva's father that during the Einsteins' visit to Novi Sad in 1905, Mileva confided to him: "Before our departure, we finished an important scientific work which will make my husband known around the world." Krstić got the same information in 1961 from Mileva's cousin, Sofija Galić Golubović, who was present when Mileva said it to her father.


Mileva, Albert and their son Hans-Albert in 1905

Desanka Trbuhović-Gjurić published Mileva's first biography in Serbian in 1969(3). It later appeared in German and French. She described how Mileva's brother often hosted gatherings of young intellectuals at his place. During one of these evenings, Albert reportedly declared: "I need my wife. She solves for me all my mathematical problems", something Mileva is said to have confirmed.

In 1908, together with Conrad Habicht, the couple built an ultra-sensitive voltmeter. Trbuhović-Gjurić attributes this experimental work to Mileva and Conrad, and wrote: "When they were both satisfied, they left to Albert the task of describing the apparatus, since he was a patent expert." It was registered under the Einstein-Habicht patent. When Habicht questioned Mileva's choice not to include her name, she replied with a pun in German: "Warum? Wir beide sind nur ein Stein." ("Why? The two of us are but one stone", meaning, we are one entity).

The first recognition came in 1908. Albert gave unpaid lectures in Bern, then was offered his first academic position in Zurich in 1909. Mileva was still assisting him. Eight pages of Albert’s first lecture notes are in her handwriting. So is a letter drafted in 1910 in reply to Max Planck who had sought Albert’s opinion. Both documents are kept in the Albert Einstein Archives (AEA) in Jerusalem. On 3 September 1909, Mileva confided to Helene Savić: “He is now regarded as the best of the German-speaking physicists, and they give him a lot of honours. I am very happy for his success, because he fully deserves it; I only hope and wish that fame does not have a harmful effect on his humanity.” Later, she added: “With all this fame, he has little time for his wife. […] What is there to say, with notoriety, one gets the pearl, the other the shell.”


Mileva and Albert in 1910.

Their second son, Eduard, was born on 28 July 1910. Up to 1911, Albert still sent affectionate postcards to Mileva. But in 1912, he started an affair with his cousin, Elsa Löwenthal, while visiting his family, who had moved to Berlin. They maintained a secret correspondence over two years. Elsa kept 21 of his letters, now in the Collected Papers of Albert Einstein. During this period, Albert held various faculty positions, first in Prague, then back in Zurich and finally in Berlin in 1914, to be closer to Elsa.

This caused their marriage’s collapse. Mileva moved back to Zurich with her two sons on 29 July 1914. In 1919, she agreed to divorce, with a clause stating that if Albert ever received the Nobel Prize, she would get the money. When she did, she bought two small apartment buildings and lived poorly from their income. Her son, Eduard stayed frequently in a sanatorium. He later developed schizophrenia and was eventually internalised. Due to these medical expenses, Mileva struggled financially all her life and eventually lost both buildings. She survived by giving private lessons and on the alimony Albert sent, albeit irregularly.

In 1925, Albert wrote in his will that the Nobel Prize money was his sons' inheritance. Mileva strongly objected, stating the money was hers, and considered revealing her contributions to his work. Radmila Milentijević quotes from a letter Albert sent her on 24 October 1925 (AEA 75-364): "You made me laugh when you started threatening me with your recollections. Have you ever considered, even just for a second, that nobody would ever pay attention to what you say if the man you talked about had not accomplished something important? When someone is completely insignificant, there is nothing else to say to this person but to remain modest and silent. This is what I advise you to do."

Mileva remained silent but her friend Milana Bota told a Serbian newspaper in 1929 that they should talk to Mileva to find out about the genesis of special relativity, since she was directly involved. On 13 June 1929, Mileva wrote to Helene Savić: ”Such publications in newspapers do not suit my nature at all, but I believe that all that was for Milana’s joy, and that she probably thought that this would also be a joy for me, as I can only suppose that she wanted to help me receive some public rights with regard to Einstein. She has written to me in that way, and I let it be accepted that way, for otherwise the whole thing would be nonsense.”


Mileva later on (unknown date)

According to Krstić(2), Mileva spoke of her contributions to her mother and sister. She also wrote to her godparents explaining how she had always collaborated with Albert and how he had ruined her life, but asked them to destroy the letter. Her son, Hans-Albert, told Krstić(2) how his parents' "scientific collaboration continued into their marriage, and that he remembered seeing [them] work together in the evenings at the same table." Hans-Albert's first wife, Frieda, tried to publish the letters Mileva and Albert had sent to their sons but was blocked in court by the executors of Einstein's estate, Helen Dukas and Otto Nathan, in an attempt to preserve the "Einstein myth". They prevented other publications, including one from Krstić(2) on his early findings in 1974. Krstić mentions that Nathan even "visited" Mileva's apartment after her death in 1948. In July 1947, Albert wrote to Dr Karl Zürcher, his divorce lawyer: "When Mileva is no longer there, I'll be able to die in peace."

Their letters and the numerous testimonies show that Mileva Marić and Albert Einstein collaborated closely from their school days up to 1914. Albert referred to it repeatedly in his letters, as when he wrote: "our work on relative motion". Their union was based on love and mutual respect, which allowed them to produce such uncommon work together. She was the first person to recognize his talent. Without her, he would never have succeeded. She abandoned her own aspirations, happy to work with him and contribute to his success, feeling they were one single entity. Once started, the practice of signing their work under his name alone became impossible to reverse. She probably agreed to it since her own happiness depended on his success. Why did Mileva remain silent? Being reserved and self-effacing, she did not seek honors or public attention. And as is always the case in close collaborations, the individual contributions are nearly impossible to disentangle.

Pauline Gagnon

This article first appeared in Scientific American as an Opinion piece

To find out more about particle physics and dark matter, check out my book "Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN".

To be notified of new blogs, follow me on Twitter: @GagnonPauline or sign up on this distribution list

References:

(1) Radmila Milentijević: Mileva Marić Einstein: Life with Albert Einstein, United World Press, 2015.

(2) Dord Krstić: Mileva & Albert Einstein: Their Love and Scientific Collaboration, Didakta, 2004.

(3) Desanka Trbuhović-Gjurić: Mileva Marić Einstein: In Albert Einstein’s shadow: in Serbian, 1969, German, 1982, and French, 1991.

(4) Milan Popović: In Albert's Shadow, the Life and Letters of Mileva Marić, Einstein's First Wife, The Johns Hopkins University Press, 2003.

(5) Renn and Schulmann, Albert Einstein / Mileva Marić, The Love Letters, Princeton University Press, 1992.

(6) R.S. Shankland, Conversation with Albert Einstein, Am. J. of Physics, 1962.

(7) Peter Michelmore, Einstein, Profile of the Man, Dodd, Mead & Company, 1962.

by Pauline Gagnon at December 22, 2016 02:09 PM

Quantum Diaries

The forgotten life of Einstein's wife

December 19 marked the 141st anniversary of the birth of Mileva Marić Einstein. But who remembers this brilliant physicist? While her husband, Albert Einstein, is celebrated as perhaps the best physicist of the century, a shadow remains over his career: what were his first wife's contributions to his scientific work? Even though no one has yet been able to determine her exact contributions to his work, their letters and the abundant evidence presented in the books devoted to Mileva Marić(1-5) leave no doubt about how they collaborated from the time they met in 1896 until their separation in 1914. Taken together, these documents paint the picture of a couple united by a shared passion for physics, for music and for each other. Here is their story.

Mileva Marić was born in Titel, Serbia, in 1875. Her parents, Marija Ruzić and Miloš Marić, a wealthy and respected man in his community, had two other children: Zorka and Miloš Jr. Mileva attended high school during the last year girls were still admitted there. In 1892, her father obtained an authorization from the Minister of Education so that she could attend the physics lectures then reserved for boys. She completed her secondary education in Zurich in 1894, by which time her family had moved to Novi Sad. Her classmates described Mileva as brilliant but not very talkative. She liked getting to the bottom of things, was persevering and went straight to the point.

Albert Einstein was born in Ulm, Germany, in 1879 and had only one sibling, his sister Maja. His father, Hermann, was an industrialist and his mother, Pauline Koch, came from a wealthy family. Albert was curious, bohemian and rebellious. Undisciplined by nature, he hated the rigid discipline of German schools and went to finish his secondary studies in Switzerland. His family had by then moved to Milan.

Mileva Marić in 1896 when she was admitted to the Polytechnic Institute in Zurich

In 1896, Albert and Mileva were admitted to the physics-mathematics section of the Polytechnic Institute in Zurich (now ETH) along with three other students: Marcel Grossmann, Louis Kollros and Jakob Ehrat. Albert and Mileva quickly became inseparable, studying together constantly. He attended only a few lectures, preferring to study on his own. Mileva was methodical and very organized. She helped him channel his energy and guided his reading, as their letters, exchanged between 1899 and 1903 during school holidays, reveal: 43 letters from Albert to Mileva have been preserved but only 10 of hers remain(5). These letters provide a first-hand account of how they interacted at the time.

In August 1899, Albert wrote to Mileva: "When I read Helmholtz for the first time, it seemed quite inconceivable that you were not at my side and today, this is not getting any better. I find the work we do together very good, healing and also less arduous." On 2 October 1899, he wrote to her from Milan: "… the climate here does not suit me at all and, missing a certain kind of work, I let myself brood over dark thoughts – in short, I see and feel that your benevolent rule no longer hovers over me to keep me from meandering."

Mileva boarded in a pension for young women, where she met her friends Helene Kaufler-Savić and Milana Bota. Both spoke of Albert's constant presence at Mileva's place, where he would come freely to borrow books even in her absence. Milan Popović, Helene's grandson, published the letters Mileva wrote to Helene throughout her life(4).

At the end of their classes in 1900, Mileva and Albert had similar results (averages of 4.7 and 4.6, respectively) except in applied physics, where she obtained the top mark of 5 but Albert only 1. She excelled at laboratory work while he had no talent for it. However, at the oral exam, Professor Minkowski gave a mark of 11 out of 12 to the four male students, but Mileva received only 5. Everyone obtained a degree except Mileva.

Meanwhile, Albert's family strongly opposed their relationship. His mother was adamant. "By the time you're 30, she'll already be an old hag!", as Albert reported to Mileva in a letter dated 27 July 1900, along with "She cannot enter a respectable family." Mileva was neither Jewish nor German. She had a limp and was too intellectual in his mother's opinion, not to mention the prejudices against foreigners. For his part, Albert's father insisted that his son find work before getting married.

In September 1900, Albert wrote to Mileva: "How I look forward to our new joint work. You must now continue with your investigation – how proud I will be to have a doctor for a companion while I will be merely an ordinary man." They both returned to Zurich in October 1900 to begin their thesis work. The other three students were all offered assistant positions at the Institute, but not Albert. He suspected Professor Weber of ill will. To make ends meet, they gave private lessons and "continue[d] to live and work as before," as Mileva wrote to her friend Helene Savić.

On 13 December 1900, they submitted a first article on capillarity, signed only under Albert's name. Nevertheless, both referred to this article in their letters as their joint article. Mileva wrote to Helene Savić on 20 December 1900: "We will send a private copy to Boltzmann to see what he thinks, and I hope he will answer us." Likewise, Albert wrote to Mileva on 4 April 1901, saying that his friend Michele Besso "visited his uncle on my behalf, Prof. Jung, one of the most influential physicists in Italy, and also gave him a copy of our article."

The decision to publish only under Albert's name seems to have been taken jointly. Why? Radmila Milentijević, a former history professor at City College in New York, published in 2014 the most complete biography of Mileva to date(1). She suggests that Mileva probably wanted to help Albert make a name for himself, so that he could find a job and marry her. Dord Krstić, a former physics professor at Ljubljana University, spent nearly 50 years investigating Mileva's life. In his well-documented book(2), he suggests that, given the sexist prejudices of the time, a publication co-signed with a woman might have carried less weight.

We will never know. But no one could be clearer than Albert Einstein himself about the existence of their collaboration on special relativity when he wrote to Mileva on 27 March 1901: "How happy and proud I will be when the two of us together will have brought our work on relative motion to a victorious conclusion!"

C’est à ce moment que le destin de Mileva bascula. Suite à une escapade amoureuse au Lac de Côme, elle tomba enceinte. Toujours sans emploi, Albert refuse toujours de l’épouser. C’est avec un avenir on ne peut plus incertain que Mileva tenta sa seconde et dernière chance à l’examen oral en juillet 1901. Cette fois, c’est le professeur Weber, celui qu’Albert soupçonnait de bloquer sa carrière, qui lui refuse la note de passage. Forcée d’abandonner ses études, elle retourna en Serbie, mais revint brièvement à Zurich pour essayer en vain de persuader Albert de l’épouser. Elle donna naissance à une petite fille nommée Liserl en janvier 1902. Personne ne sait ce qui lui est arrivé. Elle fut probablement donnée en adoption. Aucun acte de naissance ou de décès n’a été retrouvé.

Earlier, in December 1901, the father of their classmate Marcel Grossmann secured for Albert a post at the Patent Office in Bern, where he started in June 1902. In October, just before dying, his father gave him permission to marry. Albert married Mileva on 6 January 1903. Albert worked 8 hours a day, 6 days a week, while Mileva took care of the household. In the evenings, they worked together, sometimes late into the night. Both mentioned this to friends, he to Hans Wohlwend, she to Helene Savić on 20 March 1903, saying how sorry she was to see him work so hard at the office. Their son Hans-Albert was born on 14 May 1904.

Mileva and Albert's wedding photo in 1903

Despite this workload, 1905 became Albert's "miracle year", in which he published five articles: one on the photoelectric effect (which earned him the Nobel Prize in 1921), two on Brownian motion, one on special relativity and one containing the famous equation E = mc². He also submitted comments on 21 scientific papers for a fee, as well as his thesis on the dimensions of molecules.

Much later, Albert confided to R. S. Shankland(6) that relativity had been his life for seven years and the photoelectric effect for five. Peter Michelmore, one of his biographers(7), wrote that after spending five weeks completing the article on special relativity, Albert "went to bed for two weeks while Mileva tirelessly reread the article before mailing it". Exhausted, the couple left for Serbia on the first of three visits, during which they met numerous relatives and friends. Their testimonies abound with information on how Albert and Mileva collaborated at the time.

Mileva's brother, Miloš Jr, a person known for his integrity, stayed on several occasions with the Einsteins during his medical studies in Paris. Krstić(2) wrote: "[Miloš] described how, in the evenings and at night, when silence fell upon the town, the young couple would sit together at the table and, by the light of a kerosene lamp, work on physics problems. Miloš Jr. mentioned how they calculated, wrote, read and debated." Krstić gathered this testimony directly from Mileva's godmother, Sidonija Gajin, and from her cousin, Sofija Galić Golubović.

Zarko Marić, a cousin of Mileva's father, lived in the country house where the Einsteins stayed during their visits. He told Krstić how Mileva calculated, wrote and worked with Albert. The couple often sat in the garden to discuss physics. Harmony and mutual respect prevailed. Gajin and Zarko Marić also reported that Mileva's father had confided to them that, during the Einsteins' visit to Novi Sad in 1905, Mileva told him: "We have just finished a very important piece of scientific work that will make my husband famous." Krstić gathered the same account from Mileva's cousin, Sofija Galić Golubović, who was present when Mileva spoke to her father.

Desanka Trbuhović-Gjurić published the first biography of Mileva, in Serbian, in 1969(3). The book later appeared in German and then in French. In it she describes how Mileva's brother often hosted gatherings of young intellectuals at his home. During one of these evenings, Albert reportedly declared: "I need my wife. She solves all my mathematical problems for me", something Mileva is said to have confirmed.

Mileva and Albert with their son Hans-Albert in 1905

In 1908, together with Conrad Habicht, the couple built an ultra-sensitive voltmeter. Trbuhović-Gjurić attributes this experimental work to Mileva and Conrad. She writes: "When [Mileva and Conrad] were both satisfied, they left it to Albert to describe the apparatus, as a patent expert." It was registered under the name Einstein-Habicht. When Habicht asked Mileva why she had chosen not to include her own name, she replied with a pun in German: "Warum? Wir beide sind nur ein Stein." ("Why? The two of us are but one stone", meaning, we are one and the same.)

Recognition finally came in 1908. Albert was invited to give unpaid lectures in Bern, then was offered his first academic position in Zurich in 1909. Mileva was still assisting him. Eight pages of Albert's first lecture notes are in her handwriting, as is a letter written in 1910 in reply to Max Planck, who had sought Albert's opinion. Both documents are kept in the Albert Einstein Archives (AEA) in Jerusalem. On 3 September 1909, Mileva confided to Helene Savić: "My husband […] is now regarded as the best German-speaking physicist, and they shower him with honours. I am very happy for his success, because he fully deserves it; I only hope and wish that fame does not have a harmful effect on his humanity." Later, she added: "With all this fame, he has little time for his wife. […] What can one do? With notoriety, one person gets the pearl, the other the shell."

Mileva and Albert in 1910

Their second son, Eduard, was born on 28 July 1910. Up to 1911, Albert still sent affectionate postcards to Mileva. But in 1912, he began an affair with his cousin, Elsa Löwenthal, while visiting his family, who had moved to Berlin. They kept up a secret correspondence for more than two years. Elsa kept 21 of Albert's letters, now found in the Collected Papers of Albert Einstein. During this period, Albert held various professorships, first in Prague, then back in Zurich and finally in Berlin in 1914, so as to be closer to Elsa.

This caused the collapse of their marriage. Mileva returned to Zurich with her two sons on 29 July 1914. In 1919, she consented to divorce, demanding a clause in their divorce agreement stipulating that if Albert received the Nobel Prize, she alone would get the money. When she did receive it, she bought two small apartment buildings and lived meagrely off their income. Her son Eduard made repeated stays in a sanatorium. He later suffered from schizophrenia and eventually had to be institutionalized. Because of these medical expenses, Mileva had serious financial troubles all her life and eventually lost both buildings. She survived by giving private lessons and on the alimony Albert sent her, albeit irregularly.

In 1925, Albert wanted to state in his will that the Nobel Prize money was his sons' inheritance. Mileva objected strongly, reminding him that the money was hers alone, and considered revealing her contributions to his work. Radmila Milentijević quotes a letter Albert sent her on 24 October 1925 (AEA 75-364): "But you really made me laugh when you started threatening me with your memoirs. Has it ever crossed your mind, even for a second, that nobody would pay the slightest attention to your drivel if the man you are talking about had not accomplished something important? When a person is completely insignificant, there is nothing else to tell that person but to remain modest and to keep quiet. That is what I advise you to do."

Mileva remained silent, but her friend Milana Bota told a Serbian newspaper in 1929 that Mileva could tell them about the origin of special relativity, since she had contributed to it directly. On 13 June 1929, Mileva wrote to Helene Savić: "Such publications in newspapers do not suit my nature at all, but I believe it gave Milana pleasure, and she probably thought it would please me too and that, in a way, it would help me obtain certain rights with regard to Einstein in the eyes of the public. She wrote to me in that sense, and I accept it that way, for otherwise the whole thing would not make much sense."

Mileva Marić a few years later (date unknown)

According to Krstić(2), Mileva spoke of her contributions to her mother and her sister. She also wrote to her godfather and godmother, telling them how she had collaborated with Albert and how he had ruined her life, but asked them to destroy the letter. Her son, Hans-Albert, told Krstić how his parents' "scientific collaboration continued after their marriage, and that he remembered seeing them work together in the evenings at the same table." Hans-Albert's first wife, Frieda, tried to publish the letters Mileva and Albert had sent to their sons but was blocked in court by the executors of Einstein's estate, Helen Dukas and Otto Nathan, in an attempt to preserve the "Einstein myth". They also prevented other publications, including when Krstić(2) wanted to publish his first findings in 1974. Krstić mentions that Nathan even "visited" Mileva's apartment after her death in 1948. In July 1947, Albert wrote to Dr Karl Zürcher, the lawyer who had handled his divorce: "When Mileva is no longer of this world, I will be able to die in peace."

Their letters and the numerous testimonies attest that Mileva Marić and Albert Einstein collaborated closely from the time they met until 1914. Albert mentioned it repeatedly in his letters, as when he wrote: "our work on relative motion". Their union was built on love and mutual respect, and this is what allowed them to produce such extraordinary work together. She was the first to recognize his talent. Without her, he would never have succeeded. She abandoned her own aspirations, happy to work with him and to contribute to his success, feeling that they were one and the same. Once set in motion, the practice of signing their work under Albert's name alone became impossible to reverse. She probably accepted it, since her own happiness depended on his success. Why did Mileva remain silent? Discreet by nature, she did not seek honours or public attention. And as in all cases of close collaboration, the individual contributions of each partner are almost always impossible to disentangle.

Pauline Gagnon

This article was first published in English in Scientific American as an Opinion piece.

To find out more about particle physics and dark matter, check out my book "Qu'est-ce que le boson de Higgs mange en hiver et autres détails essentiels".

To be notified of new blogs, follow me on Twitter: @GagnonPauline or sign up on this distribution list

References:

(1) Radmila Milentijević: Mileva Marić Einstein: Vivre avec Albert Einstein, Éditions L'Age d'Homme, 2014.

(2) Dord Krstić: Mileva & Albert Einstein: Their Love and Scientific Collaboration, Didakta, 2004.

(3) Desanka Trbuhović-Gjurić: Mileva Marić Einstein: Dans l'ombre d'Albert Einstein: in Serbian, 1969, German, 1982, and French, 1991.

(4) Milan Popović: In Albert's Shadow, the Life and Letters of Mileva Marić, Einstein's First Wife, The Johns Hopkins University Press, 2003.

(5) Renn and Schulmann, Albert Einstein / Mileva Marić, The Love Letters, Princeton University Press, 1992.

(6) R.S. Shankland, Conversation with Albert Einstein, Am. J. of Physics, 1962.

(7) Peter Michelmore, Einstein, Profile of the Man, Dodd, Mead & Company, 1962.

by Pauline Gagnon at December 22, 2016 01:56 PM

December 21, 2016

Sean Carroll - Preposterous Universe

Memory-Driven Computing and The Machine

Back in November I received an unusual request: to take part in a conversation at the Discover expo in London, an event put on by Hewlett Packard Enterprise (HPE) to showcase their new technologies. The occasion was a project called simply The Machine — a step forward in what's known as "memory-driven computing." On the one hand, I am not in any sense an expert in high-performance computing technologies. On the other hand (full disclosure alert), they offered to pay me, which is always nice. What they were looking for was simply someone who could speak to the types of scientific research that would be aided by this kind of approach to large-scale computation. After looking into it, I thought that I could sensibly talk about some research projects that were relevant to the program, and the technology itself seemed very interesting, so I agreed to stop by London on the way from Los Angeles to a conference in Rome in honor of Georges Lemaître (who, coincidentally, was a pioneer in scientific computing).

Everyone knows about Moore’s Law: computer processing power doubles about every eighteen months. It’s that progress that has enabled the massive technological changes witnessed over the past few decades, from supercomputers to handheld devices. The problem is, exponential growth can’t go on forever, and indeed Moore’s Law seems to be ending. It’s a pretty fundamental problem — you can only make components so small, since atoms themselves have a fixed size. The best current technologies sport numbers like 30 atoms per gate and 6 atoms per insulator; we can’t squeeze things much smaller than that.

So how do we push computers to faster processing, in the face of such fundamental limits? HPE’s idea with The Machine (okay, the name could have been more descriptive) is memory-driven computing — change the focus from the processors themselves to the stored data they are manipulating. As I understand it (remember, not an expert), in practice this involves three aspects:

  1. Use “non-volatile” memory — a way to store data without actively using power.
  2. Wherever possible, use photonics rather than ordinary electronics. Photons move faster than electrons, and cost less energy to get moving.
  3. Switch the fundamental architecture, so that input/output and individual processors access the memory as directly as possible.

Here’s a promotional video, made by people who actually are experts.

The project is still in the development stage; you can't buy The Machine at your local Best Buy. But the developers have imagined a number of ways that the memory-driven approach might change how we do large-scale computational tasks. Back in the early days of electronic computers, processing speed was so slow that it was simplest to store large tables of special functions — sines, cosines, logarithms, etc. — and just look them up as needed. With the huge capacities and swift access of memory-driven computing, that kind of "pre-computation" strategy becomes effective for a wide variety of complex problems, from facial recognition to planning airline routes.
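
To make the pre-computation idea concrete, here is a minimal sketch in Python of the lookup-table strategy described above; the step size, function names and tiny sine table are illustrative assumptions of mine, not anything from HPE's actual design.

    import math

    # Pre-compute a table of sine values once, at a fixed granularity.
    # (Toy example: a memory-driven system would hold far larger tables,
    # for far more expensive functions, in fast non-volatile memory.)
    STEP = 0.001
    SINE_TABLE = [math.sin(i * STEP) for i in range(int(2 * math.pi / STEP) + 1)]

    def fast_sin(x):
        """Approximate sin(x) by looking up the nearest pre-computed entry."""
        x = x % (2 * math.pi)          # reduce to one period
        return SINE_TABLE[round(x / STEP)]

    print(fast_sin(1.0), math.sin(1.0))   # agree to roughly three decimal places

The trade is the same one the old special-function tables made: spend memory once so that each later query is a cheap lookup instead of a fresh computation.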

It’s not hard to imagine how physicists would find this useful, so that’s what I briefly talked about in London. Two aspects in particular are pretty obvious. One is searching for anomalies in data, especially in real time. We’re in a data-intensive era in modern science, where very often we have so much data that we can only find signals we know how to look for. Memory-driven computing could offer the prospect of greatly enhanced searches for generic “anomalies” — patterns in the data that nobody had anticipated. You can imagine how that might be useful for something like LIGO’s search for gravitational waves, or the real-time sweeps of the night sky we anticipate from the Large Synoptic Survey Telescope.

The other obvious application, of course, is on the theory side, to large-scale simulations. In my own bailiwick of cosmology, we’re doing better and better at including realistic physics (star formation, supernovae) in simulations of galaxy and large-scale structure formation. But there’s a long way to go, and improved simulations are crucial if we want to understand the interplay of dark matter and ordinary baryonic physics in accounting for the dynamics of galaxies. So if a dramatic new technology comes along that allows us to manipulate and access huge amounts of data (e.g. the current state of a cosmological simulation) rapidly, that would be extremely useful.

Like I said, HPE compensated me for my involvement. But I wouldn’t have gone along if I didn’t think the technology was intriguing. We take improvements in our computers for granted; keeping up with expectations is going to require some clever thinking on the part of engineers and computer scientists.

by Sean Carroll at December 21, 2016 03:11 AM

December 20, 2016

Symmetrybreaking - Fermilab/SLAC

2016 year in particle physics

Scientists furthered studies of the Higgs boson, neutrinos, dark matter, dark energy and cosmic inflation and continued the search for undiscovered particles, forces and principles.


Working together, particle physicists from the US and around the globe made exciting advances this year in our understanding of the universe at the smallest and largest scales. 

The LIGO experiment made the first detection of gravitational waves, originally predicted by Albert Einstein in 1916 in his general theory of relativity. And scientists have pushed closer to the next big discovery at experiments such as those at the Large Hadron Collider and at ultra-sensitive underground neutrino detectors.

The pursuit of particle physics is a truly international effort. It takes the combined resources and expertise of partnering nations to develop and use unique world-class facilities and advanced technology detectors. 

Efforts in particle physics can be divided into five intertwined lines of inquiry: explorations of the Higgs boson, neutrinos, dark matter, cosmic acceleration and the unknown. Following this community vision enabled physicists to make major scientific advances in 2016 and set the stage for a fascinating future.


Using the Higgs boson as a new tool for discovery

The discovery of the Higgs boson in 2012 at the Large Hadron Collider at CERN opened a new door to understanding the universe. In 2016, the LHC produced roughly the same number of particle collisions that it did during all of its previous years of operation combined. At its current collision rate, it produces a Higgs boson about once per second.

While it will take time for the ATLAS and CMS experiment collaborations to digest this deluge of data, early results are already probing for any signs of unexpected Higgs boson behavior. In August, the ATLAS and CMS collaborations used data from the highest energy LHC collisions to “rediscover” the Higgs boson and confirm that it agrees with the predictions of the Standard Model of particle physics—so far. Deviations from the predictions would signal new physics beyond the Standard Model.

Since the LHC aims to continue running at its record pace for the next two years and more than double the delivered particle collisions to the experiments, this window to the universe is only beginning to open. The latest theoretical calculations of all of the major ways a Higgs boson can be produced and decay will enable rigorous new tests of the Standard Model.

US scientists are also ramping up efforts with their international partners to develop future upgrades for a High-Luminosity LHC that would provide 10 times the collisions and launch an era of high-precision Higgs-boson physics. Scientists have made significant progress this year in the development of more powerful superconducting magnets for the HL-LHC, including the production of a successful prototype that is currently the strongest accelerator magnet ever created.


Illustration by Sandbox Studio, Chicago with Ana Kova

Pursuing the physics associated with neutrino mass

In 2016, several experiments continued to study ghostly neutrinos—particles so pervasive and aloof that 100 trillion of them pass through you each second. In the late ’90s and early ’00s, experiments in Japan and Canada found proof that these peculiar particles have some mass and that they can transform between types of neutrino as they travel.

A global program of experiments aims to address numerous remaining questions about neutrinos. Long-baseline experiments study the particles as they fly through the earth between Tokai and Kamioka in Japan or between Illinois and Minnesota in the US. These experiments aim to discern what masses neutrinos have and whether there are differences between the transformations of neutrinos and their antimatter partners, antineutrinos.

In July, the T2K experiment in Japan announced that their data showed a possible difference between the rate at which a muon neutrino turns into an electron neutrino and the rate at which a muon antineutrino turns into an electron antineutrino. The T2K data hint at a combination of neutrino properties that would also give the NOvA experiment in the US their most favorable chance of making a discovery about neutrinos in the next few years.

In China, construction is underway for the Jiangmen Underground Neutrino Observatory, which will investigate neutrino mass in an effort to determine which neutrino is the lightest.

In the longer term, particle physicists aim to definitively determine these answers by hosting the world-class Long-Baseline Neutrino Facility, which would send a high-intensity neutrino beam 800 miles from Illinois to South Dakota. There, the international Deep Underground Neutrino Experiment a mile beneath the surface would enable precision neutrino science.


Illustration by Sandbox Studio, Chicago with Ana Kova

Identifying the new physics of dark matter

Overwhelming indirect evidence indicates that more than a quarter of the mass and energy in the observable universe is made up of an invisible substance called dark matter. But the nature of dark matter remains a mystery. Little is known about it other than that it interacts through gravity. 

To guide the experimental search for dark matter, theorists have studied the possible interactions that known particles might have with a wide variety of potential dark matter candidates with possible masses ranging over more than a dozen orders of magnitude. 

Huge sensitive detectors, such as the Large Underground Xenon, or LUX, experiment located a mile beneath the Black Hills of South Dakota, directly search for the dark matter particles that may be continually passing through Earth. This year, LUX completed the world’s most sensitive search for direct evidence of dark matter, improving upon its own previous world’s best search by a factor of four and narrowing the hiding space for an important class of theoretical dark matter particles. 

In addition, data from the Fermi Gamma-ray Space Telescope and other facilities continued to tighten constraints on dark matter through indirect searches.

This sets the stage for a suite of complementary next-generation experiments—including LZ, SuperCDMS-SNOLAB and ADMX-G2 in the US—that aim to significantly improve sensitivity and reveal the nature of dark matter.


Illustration by Sandbox Studio, Chicago with Ana Kova

Understanding cosmic acceleration

Particle physicists turn to the sky in their efforts to investigate a different mystery: Our universe is expanding at an accelerating rate. Scientists seek to understand the nature of dark energy, responsible for overcoming the force of gravity and pushing our universe apart.

Large-scale, ground-based cosmic surveys aim to measure the long-term expansion history of the universe and improve our understanding of dark energy. This year, scientists on the Baryon Oscillation Spectroscopic Survey used their final data set, comprising 1.5 million galaxies and quasars, to make improved measurements of the cosmological scale of the universe and the rate of cosmic structure growth. These measurements will allow theorists to test and refine models that aim to explain the origin of the current era of cosmic acceleration.

Through efforts that include private sector partnerships and international collaborations, US physicists aim to rapidly usher in the era of precision cosmology—and shed light on dark energy—with the ongoing Dark Energy Survey and the upcoming Dark Energy Spectroscopic Instrument and Large Synoptic Survey Telescope. 

Community efforts are also underway to develop a next-generation cosmic microwave background experiment, CMB-S4. Precision measurements from CMB-S4 will not only advance dark energy studies and provide cosmic constraints on neutrino properties, but offer a way to probe the early era of cosmic acceleration known as inflation, which occurred at energies far greater than can be achieved in an accelerator on Earth.


Illustration by Sandbox Studio, Chicago with Ana Kova

Exploring the unknown

Oftentimes, results from an experiment show a hint of something new and unexpected, and scientists must design new technology to determine if what they’ve seen is real. But between 2015 and 2016, scientists at the LHC both raised and answered their own question. 

In late 2015, LHC scientists found an unexpected bump in their data, a possible first hint of a new particle. Theorists were on the case; early in 2016 they laid the framework for possible interpretations of the data and explored how it might impact the Standard Model of particle physics. But in August, experimentalists had gathered enough new data to deem the hint a statistical fluctuation. 

Stimulated by the discovery of pentaquark and tetraquark states, some theorists have predicted that bound states of four b quarks should soon be observable at the LHC.

Experimentalists continue to test theorists’ predictions against data by performing high-precision measurements or studying extremely rare particle decays at experiments such as the LHCb experiment at the LHC, the upcoming Belle II experiment in Japan and the Muon g-2 and Muon to Electron Conversion experiments at Fermi National Accelerator Laboratory.


Illustration by Sandbox Studio, Chicago with Ana Kova

Investing in the future of discovery science

The world-class facilities and experiments that enable the global program of particle physics are built on a foundation of advanced technology. Ongoing research and development of particle accelerator and detector technology seed the long-term future prospects for discovery. 

In 2016, scientists and engineers continued to make advances in particle accelerator technology to prepare to build next-generation machines and possible far-future facilities. 

Advances in the efficiency of superconducting radio-frequency cavities will lead to cost savings in building and operating machines such as the Linac Coherent Light Source II. In February, researchers at the Berkeley Lab Laser Accelerator, or BELLA, demonstrated the first multi-stage accelerator based on “tabletop” laser-plasma technology. This key step is necessary to push toward far-future particle colliders that could be thousands of times shorter than conventional accelerators.

These results reflect only a small portion of the total scientific output of the particle physics community in 2016. The stage is set for exciting discoveries that will advance our understanding of the universe.

by Jim Siegrist, US DOE Office of High Energy Physics at December 20, 2016 03:18 PM

December 16, 2016

Quantum Diaries

Latest news from outer space on dark matter

To celebrate the first five years of operation on board the International Space Station, Professor Sam Ting, the spokesperson for the Alpha Magnetic Spectrometer (AMS-02) Collaboration, just presented their latest results at a recent seminar held at CERN. With a sample of 90 million events collected in cosmic rays, they now have the most precise data on a wide range of particles found in outer space.


source: ©NASA

Many physicists wonder if the AMS Collaboration will resolve the enigma of the origin of the excess of positrons found in cosmic rays. Positrons are the antimatter of electrons. Given that we live in a world made almost exclusively of matter, scientists have been wondering for more than a decade where these positrons come from. It is well known that some positrons are produced when cosmic rays interact with the interstellar material. What is puzzling is that more positrons are observed than what is expected from this source alone.

Various hypotheses have been formulated to explain the origin of these extra positrons. One particularly exciting possibility is that these positrons could emanate from the annihilation of dark matter particles. Dark matter is some form of invisible matter that is observed in the Universe mostly through its gravitational effects. Regular matter, everything we know on Earth but also everything found in stars and galaxies, emits light when heated up, just like a piece of heated metal glows.

Dark matter emits no light, hence its name. It is five times more prevalent than regular matter. Although no one knows for sure, we suspect that dark matter, just like regular matter, is made of particles, but no one has yet been able to capture a particle of dark matter. However, if dark matter particles exist, they could annihilate with each other and produce an electron-positron pair, or a proton-antiproton pair. This would at long last establish that dark matter particles exist and reveal some clues about their characteristics.

An alternative but less exotic explanation would be that the observed excess of positrons comes from pulsars. Pulsars are neutron stars with a strong magnetic field that emit pulsed light. But light is made of photons, and photons can also convert into an electron and a positron. So both pulsars and dark matter annihilation provide a plausible explanation for the source of these positrons.

To tell the difference, one must measure the energy of all positrons found in cosmic rays and see how many are found at high energy. This is what AMS has done, and their data are shown on the left plot below, where we see the flux of positrons (vertical axis) found at different energies (horizontal axis). The plotted flux combines the number of positrons found with the cube of their energy. The green curve gives how many positrons are expected from cosmic rays hitting the interstellar material (ISM).

If the excess of positrons were to come from dark matter annihilation, no positron would be found with an energy exceeding the mass of the dark matter particle. They would have an energy distribution similar to the brown curve on the plot below, as expected for dark matter particles having a mass of 1 TeV, a thousand times heavier than a proton. In that case, the positron energy distribution curve would drop off sharply. The red dots represent the AMS data, with their experimental errors shown by the vertical bars. If, on the other hand, the positrons came from pulsars, the drop at high energy would be less pronounced.


source: AMS Collaboration
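
To make the comparison concrete, here is a rough numerical sketch of my own in Python, with a made-up spectral index and normalisation rather than AMS fit values: a smooth power law for pulsar-like sources, and the same power law cut off sharply at an assumed dark matter mass of 1 TeV. Weighting by the cube of the energy, as AMS does, is what makes the difference visible at the high-energy end.

    import numpy as np

    energies = np.logspace(0, 3, 50)     # positron energy in GeV, from 1 GeV to 1 TeV

    def pulsar_like(E, index=-3.0):
        """A generic positron flux falling as a smooth power law (toy values)."""
        return E ** index

    def dark_matter_like(E, index=-3.0, m_dm=1000.0):
        """Same power law, but with no positrons above the dark matter mass."""
        return np.where(E <= m_dm, E ** index, 0.0)

    # AMS plots E^3 times the flux, which flattens the steep fall-off and makes
    # the difference between the two shapes visible at the high-energy end.
    weighted_pulsar = energies ** 3 * pulsar_like(energies)
    weighted_dm = energies ** 3 * dark_matter_like(energies)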

The name of the game is therefore to figure out precisely what is happening at high energy. But there are far fewer positrons there, making it very difficult to see what is happening, as indicated by the large error bars attached to the data points at higher energy, which show the size of the experimental errors.

But by looking at the fraction of positrons in all the data collected for electrons and positrons (right plot above), some of the experimental errors cancel out. AMS has collected over a million positrons and 16 million electrons. The red dots on the right plot show the fraction of positrons found in their sample as a function of energy. Given the current precision of these measurements, it is still not completely clear whether this fraction is really falling off at higher energy or not.
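
To see why a ratio helps, here is a small numerical sketch (my own illustration in Python; the 10% shift is an invented common systematic, and the counts are just the rough sample sizes quoted above):

    # A common multiplicative systematic moves both counts the same way...
    n_pos, n_ele = 1.0e6, 16.0e6          # roughly the AMS sample sizes quoted above
    shift = 1.10                          # hypothetical 10% shared uncertainty

    fraction = n_pos / (n_pos + n_ele)
    fraction_shifted = (n_pos * shift) / (n_pos * shift + n_ele * shift)

    # ...but it cancels in the positron fraction, which stays at about 0.059,
    # whereas the individual fluxes would each move by the full 10%.
    print(fraction, fraction_shifted)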

The AMS Collaboration hopes, however, to have enough data to distinguish between the two hypotheses by 2024, when the ISS will cease operation. These projections are shown on the next two plots, both for the positron flux (left) and the positron fraction (right). As it stands today, both hypotheses are still possible given the size of the experimental errors.


source: AMS Collaboration

There is another way to test the dark matter hypothesis. By interacting with the interstellar material, cosmic rays produce not only positrons but also antiprotons. So would dark matter annihilations, but pulsars cannot produce antiprotons. If there were also an excess of antiprotons in outer space that could not be accounted for by cosmic rays, it would reinforce the dark matter hypothesis. But this entails knowing precisely how cosmic rays propagate and interact with the interstellar medium.

Using the large AMS sample of antiprotons, Prof. Sam Ting claimed that such an excess already exists. He showed the following plot giving the fraction of antiprotons found in the total sample of protons and antiprotons as a function of their energy. The red dots represent the AMS measurements, the brown band, a theoretical calculation for cosmic rays, and the blue band, what could be coming from dark matter.


source: AMS Collaboration

This plot clearly suggests that more antiprotons are found than expected from cosmic rays interacting with the interstellar material (ISM). But Dan Hooper and Ilias Cholis, two theorists and experts on this subject, strongly disagree, saying that the uncertainty on this calculation is much larger. They say that the following plot (from Cuoco et al.) is far more realistic. The pink dots represent the AMS data for the antiproton fraction. The data seem to be in good agreement with the theoretical prediction given by the black line and grey bands. So there is no sign of a large excess of antiprotons here. We need to wait a few more years before the AMS data and the theoretical estimates are precise enough to determine whether there is an excess or not.


source: Cuoco, Krämer and Korsmeier, arXiv:1610.03071v1

The AMS Collaboration could have another huge surprise in store: discovering the first antihelium atoms in outer space. Given that anything more complex than an antiproton is much more difficult to produce, they will need to analyze huge amounts of data and further reduce all their experimental errors before such a discovery could be established.

Will AMS discover antihelium atoms in cosmic rays, establish the presence of an excess of antiprotons or even solve the positron enigma? AMS has lots of exciting work on its agenda. It is well worth the wait!

Pauline Gagnon

To find out more about particle physics and dark matter, check out my book "Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN".

To be notified of new blogs, follow me on Twitter: @GagnonPauline or sign up on this distribution list

 

by Pauline Gagnon at December 16, 2016 05:54 PM

Quantum Diaries

Plenty of work ahead for dark matter

To celebrate the first five years of operation on board the International Space Station, Professor Sam Ting, spokesperson for the Alpha Magnetic Spectrometer (AMS-02) Collaboration, has just presented their latest results at a recent seminar held at CERN. With more than 90 million events collected in cosmic rays, this group has the most precise data on a wide range of particles found in space.


source: ©NASA

The question intriguing many scientists is whether they will be able to solve the enigma of the origin of the excess of positrons found in cosmic rays. Positrons are the antimatter of electrons. Given that we live in a world made almost exclusively of matter, scientists have been wondering for more than a decade where these positrons come from. It is well known that positrons are produced when cosmic rays interact with the interstellar material, but far more of them are observed than expected from this source alone.

Various hypotheses have been put forward to explain the origin of these excess positrons. One of the most fascinating suggests that these positrons could come from the annihilation of dark matter particles. Dark matter is a new form of invisible matter that is detected in the Universe through its gravitational effects. Regular matter, everything we see on Earth but also in stars and galaxies, emits light when heated, just as a piece of metal glows at high temperature.

La matière sombre n’émet aucune lumière, d’où son nom. Elle est cinq fois plus répandue que la matière régulière. Personne ne le sait encore mais on soupçonne que cette matière, tout comme la matière ordinaire, soit faite de particules, mais on n’a toujours pas capturé de particules de matière sombre. Mais si de telles particules existaient, elles pourraient s’annihiler entre elles, produisant des électrons et des positrons, ou des paires de protons et d’antiprotons. Si un tel processus était établi, cela confirmerait enfin l’existence de particules de matière sombre et révèlerait quelques indices sur leurs caractéristiques.

An alternative, less exotic explanation would be that the observed excess of positrons comes from pulsars. Pulsars are neutron stars with a strong magnetic field that emit pulsed light. Light is made of photons, and photons can in turn produce electron-positron pairs. So pulsars, just like dark matter annihilation, provide a plausible explanation for the source of these positrons.

To tell the two apart, one must measure the energy of the positrons captured in cosmic rays and see how many are found at high energy. This is what AMS did, and their results are shown in the left-hand plot below, where we see the positron flux (vertical axis) found at a given energy (horizontal axis). The flux combines the number of positrons found with their energy cubed. The green curve gives the number of positrons produced when cosmic rays strike the interstellar medium (ISM).
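
Concretely, the vertical axis shows the flux weighted by the cube of the energy; as a reminder (the precise units and normalisation used in the AMS plot are assumptions here), the plotted quantity is of the form

\[ \tilde{\Phi}_{e^+}(E) \;=\; E^{3}\,\Phi_{e^+}(E) , \]

a weighting that flattens the steeply falling spectrum and makes features at high energy easier to see.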

If the positron excess came from dark matter annihilation, no positrons would be found beyond the energy corresponding to the mass of the dark matter particles. Their energy distribution would look like the brown curve in the plot below, as predicted for dark matter particles with a mass of 1 TeV, about a thousand times heavier than a proton. In that case, the positron energy distribution would fall off sharply. The red points represent the AMS data, with their experimental errors indicated by the vertical bars. If, on the other hand, the positrons came from pulsars, the drop at high energy would be less pronounced.

ams-2016

source: AMS Collaboration

The whole difficulty lies in understanding precisely their behaviour at high energy. But since fewer positrons are found there, it is much harder to see what is going on, as indicated by the large error bars on the measurements made at higher energy.

But if one instead measures the fraction of positrons found in the data, combining positrons and electrons, some of the experimental errors cancel. AMS has collected more than one million positrons and 16 million electrons. The red points in the right-hand plot above show the positron fraction found in their sample as a function of energy. Despite the giant steps accomplished, the current precision of these measurements is still not enough to establish clearly whether or not this fraction drops off sharply at high energy.
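
For definiteness, the positron fraction referred to here is the standard ratio of the positron flux to the combined electron-plus-positron flux,

\[ f_{e^+}(E) \;=\; \frac{\Phi_{e^+}(E)}{\Phi_{e^+}(E) + \Phi_{e^-}(E)} , \]

which is why uncertainties common to electrons and positrons largely cancel in this quantity.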

The AMS Collaboration nevertheless hopes to have enough data to distinguish the two hypotheses by 2024, when the International Space Station will cease operations. These projections can be seen in the two plots below, both for the positron flux (left) and for the positron fraction (right). To date, both hypotheses remain viable given the size of the experimental errors.

ams-2024

source: AMS Collaboration

The dark matter hypothesis can also be tested in another way. When interacting with the interstellar medium, cosmic rays produce not only positrons but also antiprotons. Dark matter annihilations could produce them as well, but pulsars cannot. One must therefore determine whether or not there are more antiprotons in space than cosmic rays can produce. If that were established, it would be one more argument against the pulsar hypothesis. But to do so, one needs to know precisely how cosmic rays propagate and interact with the interstellar medium.

Relying on the vast sample of antiprotons collected by AMS, Prof. Sam Ting argued that such an excess exists, presenting the following plot in support. It shows the fraction of antiprotons found in the total sample of protons and antiprotons as a function of energy. The red points represent the AMS measurements, the brown band the theoretical calculations for cosmic rays, and the blue band what could come from dark matter.

antiproton-fraction

source: AMS Collaboration

This plot strongly suggests a surplus of antiprotons relative to what is expected from cosmic rays interacting with the interstellar medium (ISM). But both Dan Hooper and Ilias Cholis, two theorists and experts on the subject, object outright, saying that the uncertainty on the theoretical predictions is much larger than this plot suggests. They argue that the following plot (from Cuoco et al.) is far more realistic. The pink dots represent the AMS data for the antiproton fraction, and the black line the theoretical predictions with their error band. The two agree, or nearly so, suggesting the absence of any excess. We will have to wait a few more years before the AMS data and the theoretical predictions are precise enough to know whether or not there is an excess.

antiprotons-theorie

source: Cuoco, Krämer and Korsmeier, arXiv:1610.03071v1

The AMS Collaboration could have another great surprise in store for us: the discovery of antihelium atoms in space. Given the extreme difficulty of producing an antimatter particle more complex than an antiproton, the AMS scientists will have to sift through enormous amounts of data and further reduce all the experimental errors before such a discovery could be established.

Discovering antihelium, establishing an excess of antiprotons or even solving the positron puzzle: all of that is well worth waiting a few more years for. AMS has plenty of work on its plate!
Pauline Gagnon

To find out more about particle physics and dark matter, check out my book "Qu'est-ce que le boson de Higgs mange en hiver et autres détails essentiels".

To be notified of new blogs, follow me on Twitter: @GagnonPauline or sign up on this distribution list.

by Pauline Gagnon at December 16, 2016 05:46 PM

Symmetrybreaking - Fermilab/SLAC

The ABCs of Particle Physics board book

The ABCs of Particle Physics is available online and at public libraries and stores near Fermilab and SLAC.

Header: Board Book

For lovers of rhymes and anthropomorphic Higgs bosons, Symmetry presents its first published board book, The ABCs of Particle Physics. Use it as an illustrated guide to basic particle- and astrophysics terms, or read it to your infant at bedtime, if you don’t mind their first word being “quark.”

Find The ABCs of Particle Physics at Stanford Bookstore online and at these locations near Fermi National Accelerator Laboratory and SLAC National Accelerator Laboratory:

Symmetry is published by Fermilab and SLAC. The ABCs of Particle Physics is educational in nature and the national laboratories do not profit from its sale.

December 16, 2016 02:00 PM

December 15, 2016

Sean Carroll - Preposterous Universe

Quantum Is Calling

Hollywood celebrities are, in many important ways, different from the rest of us. But we are united by one crucial similarity: we are all fascinated by quantum mechanics.

This was demonstrated to great effect last year, when Paul Rudd and some of his friends starred with Stephen Hawking in the video Anyone Can Quantum, a very funny vignette put together by Spiros Michalakis and others at Caltech’s Institute for Quantum Information and Matter (and directed by Alex Winter, who was Bill in Bill & Ted’s Excellent Adventure). You might remember Spiros from our adventures in emerging space from quantum mechanics, but when he’s not working as a mathematical physicist he’s brought incredible energy to Caltech’s outreach programs.

Now the team is back again with a new video, this one titled Quantum is Calling. This one stars the amazing Zoe Saldana, with an appearance by John Cho and the voices of Simon Pegg and Keanu Reeves, and of course Stephen Hawking once again. (One thing about Caltech: we do not mess around with our celebrity cameos.)

If you’re interested in the behind-the-scenes story, Zoe and Spiros and others give it to you here:

If on the other hand you want all the quantum-mechanical jokes explained, that’s where I come in:

Jokes should never be explained, of course. But quantum mechanics always should be, so this time we made an exception.

by Sean Carroll at December 15, 2016 05:06 PM

Symmetrybreaking - Fermilab/SLAC

Physics books of 2016

As 2016 comes to a close, Symmetry writer Mike Perricone takes us through the latest additions to his collection of popular science books related to particle physics.

Header: Physics books of 2016

The year 2016 brought us books on topics such as gravitational waves, the “Pope” of physics, the history of science from the paper of record, and the concept of “now.”

Inline 1: Physics books of 2016

Black Hole Blues, and Other Songs From Outer Space, by Janna Levin

The oldest sound scientists have ever heard was the “chirp” of gravitational waves emanating from a billions-of-years-old collision of two black holes. The sound was intercepted by the Laser Interferometer Gravitational-Wave Observatory, 40 years after the proposal for the detector was rejected.

With the deft touch of a novelist (A Madman Dreams of Turing Machines, How the Universe Got its Spots), Janna Levin, professor of physics and astronomy at Columbia University, follows the struggles of the project’s original 1970s troika—Rai Weiss, Ron Drever and theorist Kip Thorne—and the eventual success of director Barry Barish, who spent 1994 to 2004 putting the project on solid footing.

Inline 2: Physics books of 2016

Seven Brief Lessons on Physics, by Carlo Rovelli

Carlo Rovelli, one of the founders of loop quantum gravity and head of the quantum gravity group at the Centre de Physique Théorique of Aix-Marseille Université, takes readers through a history of physics from Einstein and Bohr to Heisenberg to Hawking.

Special acclaim goes to his translators, Simon Carnell and Erica Segre, who bring us phrases such as these from Rovelli’s original Italian: “[B]efore experiments, measurements, mathematics and rigorous deductions, science is above all about visions. Science begins with a vision. Scientific thought is fed by the capacity to ‘see’ things differently than they have previously been seen.” You’ll want to memorize this poetic gem.

Inline 3: Physics books of 2016

The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age, by Bettina Hoerlin and Gino Segrè

Fermi method. Fermi questions. Fermi surface. Fermi sea. Fermions. Fermi Institute. Fermi Gamma-ray Space Telescope. Physicist Enrico Fermi, known in part for creating the world’s first nuclear reactor, definitely left his mark on physics.

Fermi won the Nobel Prize in 1938, and in the following years the prize went to no fewer than six of Fermi’s students. As a scientist, he was considered infallible: Colleagues and students in Rome dubbed him “the Pope.”

Co-authors Bettina Hoerlin and spouse Gino Segrè—the nephew of Nobel Laureate Emilio Segrè, Fermi’s student and lifelong friend—piece together a human picture of the brilliant scientist.

Inline 4: Physics books of 2016

A Very Short Introduction to . . .

A long-running and incredibly far-reaching series from Oxford University Press, Very Short Introductions combines sound science with brisk, accessible writing by eminent scientists. Averaging about 150 pages each, this year’s top physics-related offerings include:

  • Black Holes, by Katherine Blundell: What we know and don’t know about black holes; how they are created and discovered; separating fact from fiction. This title is especially timely this year with LIGO’s detection of gravitational waves from the collision of two black holes. Blundell is a Professor of Astronomy at Oxford. 
  • Astrophysics, by James Binney: The physics of supernovae, planetary systems, and the application of special and general relativity. Binney, an astronomer at Oxford University, has won the Maxwell and Dirac Medals.
  • Copernicus, by Owen Gingerich: Regarded as the major authority on Copernicus, Gingerich places Copernicus in the context of his time and his place in the scientific revolution. Gingerich is Senior Astronomer Emeritus at Smithsonian Astrophysical Observatory.
Inline 5: Physics books of 2016

The New York Times Book of Science: 150 Years of Science Reporting in the New York Times, Edited by David Corcoran, former editor of weekly Science Times

In this tour through a century and a half of science reporting by The New York Times, the sections on astronomy and physics are not to be missed. 

From the archives come headlines such as “Star Birth Sudden, Lemaitre Asserts,” from a 1933 conference in Britain (with quotes from early cosmology luminaries Willem de Sitter and Sir Arthur Eddington) and “Einstein Expounds His New Theory,” written in 1919. In the 1919 article, Einstein insists to the reporter endeavoring to explain his extraordinary concepts to lay readers, “I am trying to talk as plainly as possible.”

Inline 6: Physics books of 2016

NOW: The Physics of Time, by Richard A. Muller

Einstein was somewhat casual about time, saying “The only reason for time is so that everything doesn’t happen at once.” 

Richard Muller, experimental cosmologist, professor of physics at the University of California, Berkeley and author of Physics for Future Presidents, has more use for the concept. In this book, he explains that “the flow of time is the continual creation of new nows.” Muller takes on all comers and gets into plenty of arguments along the way.

Inline 7: Physics books of 2016

Who Cares About Particle Physics? Making Sense of the Higgs Boson, the Large Hadron Collider and CERN, by Pauline Gagnon

Pauline Gagnon, an experimenter on the LHC’s ATLAS experiment, cut her teeth writing a widely read blog during the final two years of the search for the Higgs boson. In her first book, Gagnon explains the experimental process to non-scientists.

Each chapter concludes with summaries of key points, and in the final chapter, she assures readers the LHC is still in its early stages. Don’t miss the appendix on the possible (and probable) contributions to Einstein’s stunning early work by his first wife, Mileva Maric Einstein.

Inline 8: Physics books of 2016

Welcome to the Universe: An Astrophysical Tour, by Neil deGrasse Tyson, Michael A. Strauss and J. Richard Gott

Looking like a cross between a textbook and a coffee-table book, Welcome to the Universe is an extremely readable compilation of introductory astronomy lectures for non-science students given by Neil deGrasse Tyson, Michael A. Strauss and J. Richard Gott at Princeton University. Their talks present physics with clarity and a little levity—with references to pop culture items such as Toy Story and Bill and Ted’s Excellent Adventure. Gott even tackles time travel. What’s not to like?

Inline 9: Physics books of 2016

The Cosmic Web: Mysterious Architecture of the Universe, by J. Richard Gott

J. Richard Gott was one of the first to describe the structure of the universe as being similar to a sponge, made up of holey surfaces divided into equal, interlocked parts. The concept may sound strange, but it has since been confirmed by numerous surveys of the sky.

A combination of anecdotes, physics and math, this one is a challenge. You’ll need your cosmic thinking cap.

Inline 10: Physics books of 2016

13.8: The Quest to Find the True Age of the Universe and the Theory of Everything, by John Gribbin

Visiting Fellow in Astronomy at the University of Sussex in the UK and veteran science author John Gribbin (best known for In Search of Schrödinger’s Cat) wants to synthesize the great theories of the 20th century—general relativity and quantum mechanics—into his own search for a Theory of Everything.

In his explanation, related to the estimated age of the universe—13.8 billion years—Gribbin pays special attention to often-overlooked women scientists Henrietta Swan Leavitt (who proposed using Cepheid variable stars as standard candles) and Cecilia Payne (who first showed that hydrogen is the most common element in the universe).

by Mike Perricone at December 15, 2016 02:00 PM

December 14, 2016

Jacques Distler - Musings

MathML Update

For a while now, Frédéric Wang has been urging me to enable native MathML rendering for Safari. He and his colleagues have made many improvements to Webkit’s MathML support. But there were at least two show-stopper bugs that prevented me from flipping the switch.

Fortunately:

  • The STIX Two fonts were released this week. They represent a big improvement on Version 1, and are finally definitively better than LatinModern for displaying MathML on the web. Most interestingly, they fix this bug. That means I can bundle these fonts¹, solving both that problem and the more generic problem of users not having a good set of Math fonts installed.
  • Thus inspired, I wrote a little Javascript polyfill to fix the other bug.

While there are still a lot of remaining issues (for instance this one, since fixed), I think Safari’s native MathML rendering is now good enough for everyday use (and, in enough respects, superior to MathJax’s) to enable it by default in Instiki, Heterotic Beast and on this blog.

Of course, you’ll need to be using² Safari 10.1 or Safari Technology Preview.

Update:

Another nice benefit of the STIX Two fonts is that itex can support both Chancery (\mathcal{}) and Roundhand (\mathscr{}) symbols:

\[
\begin{split}
\backslash\mathtt{mathcal}\{\}:&\;\mathcal{ABCDEFGHIJKLMNOPQRSTUVWXYZ}\\
\backslash\mathtt{mathscr}\{\}:&\;\mathscr{ABCDEFGHIJKLMNOPQRSTUVWXYZ}
\end{split}
\]

¹ In an ideal world, OS vendors would bundle the STIX Two fonts with their next release (as Apple previously bundled the STIX fonts with MacOSX ≥10.7) and motivated users would download and install them in the meantime.

² N.B.: We’re not browser-sniffing (anymore). We’re just checking for MathML support comparable to Webkit version 203640. If Google (for instance) decided to re-enable MathML support in Chrome, that would work too.
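
For readers curious what such a capability check can look like, here is a minimal, generic sketch in TypeScript (my own illustration, not the script actually used on this blog): it renders a MathML <mspace> of known width and verifies that the browser laid it out before deciding whether a fallback such as MathJax is needed. The loadMathJax call is a hypothetical, site-specific loader.

// Hypothetical illustration of MathML feature detection; not the blog's actual code.
const MATHML_NS = "http://www.w3.org/1998/Math/MathML";

function hasUsableMathML(): boolean {
  // Build a hidden <math><mspace width="23px"/></math> probe and measure it.
  const probe = document.createElement("div");
  probe.style.cssText = "position:absolute;visibility:hidden;white-space:nowrap";
  const math = document.createElementNS(MATHML_NS, "math");
  const mspace = document.createElementNS(MATHML_NS, "mspace");
  mspace.setAttribute("width", "23px");
  mspace.setAttribute("height", "23px");
  math.appendChild(mspace);
  probe.appendChild(math);
  document.body.appendChild(probe);
  // A browser that actually lays out MathML gives the probe a width close to 23px.
  const width = probe.getBoundingClientRect().width;
  document.body.removeChild(probe);
  return width > 20;
}

// Only pull in a JavaScript fallback when native MathML support is missing.
if (!hasUsableMathML()) {
  // loadMathJax();  // hypothetical loader supplied by the site
}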

by distler (distler@golem.ph.utexas.edu) at December 14, 2016 03:00 PM

December 13, 2016

Symmetrybreaking - Fermilab/SLAC

Science with sprinkles

Holiday guests will gravitate toward these physics cookies.

Header: Science with sprinkles

Want your holiday cookies to stand out this year among the usual snowflakes and Santa Clauses? Show your smarts with these scientific cookie decorations.

Inline 1: Science with sprinkles

Gravitational wave cookies

Cookies by Sandbox Studio, Chicago with Jill Preston

This winter, why not celebrate the recent discovery of gravitational waves? Albert Einstein first predicted them 100 years ago in his general theory of relativity. Now you can depict them in dessert form. 

Two dark brown M&Ms in the center of this physics cookie represent massive black holes that merged billions of years ago in a collision whose impact was, according to Caltech physicist Kip Thorne, “50 times greater than all the power put out by all the stars in the universe put together.” 

The swirl design of the pinwheel sugar cookie represents the resulting ripples in space-time, which eventually made their way to the twin detectors of the Laser Interferometer Gravitational-Wave Observatory. Sprinkles around the edge are just for show.

Inline 2: Science with sprinkles

Neutrino cookies

Cookies by Sandbox Studio, Chicago with Jill Preston

Neutrinos come in three types, appropriately called “flavors.” The symbol for neutrinos is the Greek letter “nu,” which resembles a lowercase “v.” Three nu’s, each drawn in a different flavor of icing, will fit perfectly on a snowflake- or flower-shaped cookie.

If you spin your cookie, you can observe a fascinating behavior of neutrinos: oscillation. Neutrinos change from one flavor into another as they travel, a phenomenon that might have influenced the evolution of our universe. 

Just like snowflakes, neutrinos are elusive; even if you catch them you can’t enjoy them for long. But they are also one of the most abundant particles in the universe, so don’t skimp on the sprinkles.

Inline 3: Science with sprinkles

Detector cookies

Cookies by Sandbox Studio, Chicago with Jill Preston

To learn more about the building blocks of our universe, scientists build particle accelerators such as the Large Hadron Collider and cause particles to collide at velocities close to the speed of light. Huge detectors are built around collision points to spot new particles, such as the Higgs boson, that are created out of the impact’s energy. 

What could be sweeter than a sugar cookie that depicts the beautiful layering of the cross-section of one of these gigantic detectors?

Inline 4: Science with sprinkles

Penguin diagram cookies

Cookies by Sandbox Studio, Chicago with Jill Preston

In 1977 John Ellis, a theoretical physicist, lost a bet in a pub to Melissa Franklin, same profession, and was compelled to use the word “penguin” in his next scientific publication. 

He decided a drawing called a Feynman diagram—a way to sketch a particle decay process—somewhat resembled the flightless Antarctic bird. He dubbed the diagram for a decay of the bottom quark a “penguin diagram.” It caught on, and now the term is well known in the particle physics community. 

If you happen to have a penguin cookie cutter, you’re in luck. Decorate it as you’d like (add a scarf if you want) and add the lines of the Feynman diagram in icing on top. See? It fits!

Inline 5: Science with sprinkles

Universe cookies

Cookies by Sandbox Studio, Chicago with Jill Preston

Now you can make a cookie that is not only delicious, but also shows how much we don’t know about the contents of our universe. 

To make a universe cookie, cover 73 percent of it with Oreo chunks representing dark energy. Dark energy is responsible for the accelerating expansion of our universe, but there's a lot we don't know about it. 

Cover another 23 percent of your cookie with glitter representing dark matter. Scientists have seen the gravitational effect of dark matter on galaxies and stars, but they’ve never seen it directly.

Cover the last 4 percent of your cookie with a tiny stripe of crushed peppermint representing the known matter in the universe. This includes all of the planets and stars that we can see.


These eye-catching physics cookies aren’t just delicious, they’re also great conversation-starters. So grab your mug of hot cocoa and be ready to talk about sprinkles, the universe and everything.

by Ricarda Laasch at December 13, 2016 03:32 PM
