Particle Physics Planet

September 02, 2014

Clifford V. Johnson - Asymptotia

Meanwhile, Somewhere Down South…
So while at a hotel somewhere down South for a few days (pen and watercolour pencil sketch to the right), I finally found time to sit and read Graham Farmelo's book "The Strangest Man", a biography of Dirac. (It has a longer subtitle as well, but the book is way over in the next room far from my cosy spot...) You may know from reading here (or maybe even have guessed) that if I were to list a few of my favourite 20th century physicists, in terms of the work they did and their approach and temperament, Dirac would be a strong contender for being at the top of the list. I am not a fan of the loudmouth and limelight-seeking school of doing physics that seems all so popular, and I much prefer the approach of quietly chipping away at interesting (not always fashionable) problems to see what might turn up, guided by a mixture of physical intuition, aesthetics, and a bit of pattern-spotting. It works, as Dirac showed time and again. I've read a lot about Dirac over the years, and was, especially in view of the title of the book, a little wary of reading it when I got it four years ago, since I am not a fan of the "weren't they weird?" approach to biographies of scientists, as they serve too [...]

by Clifford at September 02, 2014 03:16 PM

Matt Strassler - Of Particular Significance

Be Careful Waking Up a Sleeping Blog

After a very busy few months, in which a move to a new city forced me to curtail all work on this website, I'm looking to bring the blog gradually out of hibernation. [Wordsmiths and Latinists: what is the summer equivalent?] Even so, a host of responsibilities, requirements, grant applications, etc. will force me to ramp up the frequency of posts rather slowly. In the meantime I will be continuing for a second year as a Visiting Scholar at the Harvard physics department, where I am doing high-energy physics research, most of it related to the Large Hadron Collider [LHC].

Although the LHC won’t start again until sometime next year (at 60% more energy per proton-proton collision than in 2012), the LHC experimenters have not been sleeping through the summer of 2014… far from it.  The rich 2011-2012 LHC data set is still being used for new particle physics measurements by ATLAS, CMS, and LHCb. These new and impressive results are mostly aimed at answering a fundamental question that faces high-energy physics today: Is the Standard Model* the full description of particle physics at the energies accessible to the LHC?  Our understanding of nature at the smallest distances, and the future direction of high-energy physics, depend crucially on the answer.  But an answer can only be obtained by searching for every imaginable chink in the Standard Model’s armor, and thus requires a great diversity of measurements. Many more years of hard and clever work lie ahead, and — at least for the time being — this blog will help you follow the story.


*The “Standard Model” is the theory — i.e., the set of mathematical equations — used to describe and predict the behavior of all the known elementary particles and forces of nature, excepting gravity. We know the Standard Model doesn’t describe everything, not only because of gravity’s absence, but because dark matter and neutrino masses aren’t included; and also the Standard Model fails to explain lots of other things, such as the overall strengths of the elementary forces, and the pattern of elementary particle types and particle masses. But its equations might be sufficient, with those caveats, to describe everything the LHC experiments can measure.  There are profound reasons that many physicists will be surprised if it does… but so far the Standard Model is working just fine, thank you.

Filed under: Housekeeping

by Matt Strassler at September 02, 2014 02:38 PM

arXiv blog

Pitfalls Emerge In The Analysis of Mobile Phone Datasets

Mobile phone data is revolutionising the way researchers study human mobility. But these analyses are worryingly susceptible to hidden bias, say researchers

September 02, 2014 02:23 PM

Quantum Diaries

Finding tomorrow’s scientists

Last week I was at a family reunion where I had the chance to talk to one of my more distant relations, Calvin. At 10 years old he seems to know more about particle physics and cosmology than most adults I know. We spent a couple of hours talking about the LHC, the big bang, trying to solve the energy crisis, and even the role of women in science. It turns out that Calvin had wanted to speak with a real scientist for quite a while, so I agreed to have a chat next time I was in the area. To be honest when I first agreed I was rolling my eyes at the prospect. I’ve had so many parents tell me about their children who are “into science” only to find out that they merely watch Mythbusters, or enjoyed reading a book about dinosaurs. However when I spoke to Calvin I found he showed remarkable concentration and insight for someone of his age, and that he was enthusiastically curious about physics to the point where I felt he would never tire of the subject. Each question would lead to another, and in the meantime he’d wait patiently for the answer, giving the discussion his full attention. He seemed content with the idea that we don’t have answers to some of these questions yet, or that it can take decades for someone to understand just one of the answers properly. The road to being a scientist is a long one and you’ve got to really want it and work hard to get there, and Calvin has what it takes.

Real scientists don’t merely observe, they don’t merely interact, they create. (Child at the Science Museum London, studying an optical exhibit. Nevit Dilmen 2008)

Next month Calvin will start his final year in primary school and his teacher will be the same teacher I had at that age, Mark (a great name for a teacher!). From an early age I was fascinated by mathematics and computation, and without Mark I would not have discovered how much fun it was to play with numbers and shapes, something I’ve enjoyed ever since. Without his influence I probably would not have chosen to be a scientist. So once I found out Mark was going to teach Calvin I got in touch and told him that Calvin had the spark within him to get to university, but only if he had the right help along the way. In the area we are from, an industrial town in the North West of England, it is not common for children to go to university, and there’s often strong peer pressure not to study hard. In this kind of environment it’s important to give encouragement to the children who can do well in academia. (Of course it would be better to change the environment in schools, but changing attitudes and cultures takes decades.)

All this made me think about my own experiences on the way to university, and I’m sure everyone has their own memories of the teachers who inspired them, and of the frustrations of how much of high school focuses on learning facts instead of critical thinking. At primary school I had exhausted the mathematics textbooks very early on, under the guidance of Maggie Miller. From there Mark took over and taught me puzzles that went beyond anything I was taught in maths classes at high school. It was unfortunate that I was assigned a rather uninspiring maths teacher who would struggle to understand what I said at times, and it took the school about four years to organise classes that stretched its top students. This was more a matter of finding resources than anything else; the school was caught in the middle of a regional educational crisis, and five small schools were fighting to stay open in a region that could only support four larger schools. One of the schools had to close and that would mean a huge upheaval for everyone. Challenging the brightest students became one of the ways that the school could show its worth and boost its statistics, so the pupils and school worked together to improve both their prospects. Since then the school has encouraged pupils to take on extra subjects and exams if they want to, and I’m glad to say that not only has it stayed open but it’s now going from strength to strength, and I’m glad to have played a very small part in that success.

By the time I was at college there was a whole new level of possibilities, as they had teams dedicated to helping students get to university, and some classes were arranged to fit around the few students that needed them, rather than the other way around. Some of the support still depended on individuals putting in extra effort though, including staff pulling strings to arrange a visit to Oxford where we met with tutors and professors who could give us practice interviews. I realised there was quite a coincidence, because one of the people who gave a practice interview, Bobbie Miller, was the son of Maggie Miller, one of my primary school teachers. At the same time one of my older and more dedicated tutors, Lance, had to take time off for ill health. He invited me and two others over to his house in the evenings for extra maths lessons, some of which went far beyond the scope of the syllabus and instead explored critical and creative mathematical thinking to give us a much deeper understanding of what we were studying. After one of my exams I heard the sad news that he’d passed away, but we knew that he was confident of our success and all three of us got the university positions we wanted, largely thanks to his help.

Unable to thank Lance, I went to visit Maggie Miller and thanked her. It was a surreal experience to go into her classroom and see how small the tables and chairs were, but it brings me back to the main point. Finding tomorrow’s scientists means identifying and encouraging them from an early age. The journey from primary school to university is long, hard, and full of distractions, and it’s easy to become unmotivated. It’s only through the help of dozens of people putting in extra effort that I got to where I am today, and I’m going to do what I can to help Calvin have the same opportunities. Looking back I am of course very grateful for this, but I also shudder to think of all the pupils who weren’t so lucky, and never got a chance to stretch their intellectual muscles. It doesn’t benefit anyone to let these children fall through the cracks of the educational system simply because it’s difficult to identify those who have the drive to be scientists, or because it’s hard work to give them the support they need. Once we link them up to the right people, providing that support becomes a pleasure.

There have always been scientists who have come from impoverished or unlikely backgrounds, from Michael Faraday to Sophie Germain, who fought hard to find their own way, often educating themselves. Who knows how many more advances we would have today if more of their contemporaries had access to a university education? In many cases the knowledge of children quickly outpaces that of their parents, and since parents can’t be expected to find the right resources, the support must come from the schools. On the other hand there are many parents who desperately want their children to do well at school and encourage them to excel in as many subjects as possible (hence my initial skepticism when I first heard Calvin was “into science”.) This means that we also need to be wary of imposing our own biases on children. I can talk about particle physics with Calvin all day, but if he wants to study acoustic engineering then nobody should try to dissuade him from that. Nobody has a crystal ball that can tell them what path Calvin will choose to take, not even Calvin, so he needs the freedom to explore his interests in his own way.

Michael Faraday, a self-taught physicist from a poor background, giving a Royal Institution Christmas Lecture, perhaps inspiring aspiring scientists in the audience. (Alexander Blaikley)

So how can we encourage young scientists-in-the-making? It can be a daunting task, but from my own experience the key is to find the right people to help encourage the child. Finding someone who can share their joy and experiences of science is not easy, and it may mean going through second- or third-hand acquaintances. At the same time, there are many resources online you can use. Give a child a computer, a book of mathematical puzzles, and some very simple programming knowledge, and see them find their own solutions. Take them to museums, labs, and universities where they can meet real scientists who love to talk about their work. The key is to engage them and allow them to take part in the process. They can watch all the documentaries and read all the science books in the world, but that’s a passive exercise, and being a scientist is never passive. If a child wants to be an actor it’s not enough to ask them to read plays, they want to perform them. You’ll soon find out if your child is interested in science because they won’t be able to stop themselves being interested. The drive to solve problems and seek answers is not something that can be taught or taken away, but it can be encouraged or frustrated. Encouraging these interests is a long term investment, but one that is well worth the effort in every sense. Hopefully Calvin will be one of tomorrow’s scientists. He certainly has the ability, but more importantly he has the drive, and that means that, given the right support, he’ll do great things.
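In that spirit, here is the kind of tiny, self-contained puzzle program a beginner could be handed and left to explore (a hypothetical example of my own, not one from the post): which numbers equal the sum of their proper divisors?

```python
# Puzzle: which numbers are "perfect", i.e. equal to the sum of their
# proper divisors? (6 = 1 + 2 + 3.) A child can change the range,
# look for patterns in the answers, or try to make the search faster.
def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

perfect = [n for n in range(1, 1000) if sum(proper_divisors(n)) == n]
print(perfect)  # [6, 28, 496]
```

A handful of lines, no libraries, and plenty of follow-up questions ("are they all even?", "why do they look like 2 × 3, 4 × 7, 16 × 31?") make this exactly the sort of active exploration the paragraph above is arguing for.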

“Girls aren’t good at science!”, Calvin said. So I told him that some of the best physicists I know are women. I explained how Marie Curie migrated from Poland to France about a century ago to study the new science of radioactivity, how she faced fierce sexism, and despite all that still became the first person in history to win two Nobel Prizes, in physics and in chemistry. If a 10 year old thinks that only men can be good scientists then either the message isn’t getting through properly, or as science advocates we’re failing in our role to make it accessible to everyone. We need to move beyond Einstein, Feynman, Cox, and Tyson as the public face of science.

by Aidan Randle-Conde at September 02, 2014 01:44 PM

Christian P. Robert - xi'an's og


[An announcement from ISBA about sponsoring young researchers at NIPS that links with my earlier post that our ABC in Montréal proposal for a workshop had been accepted and a more global feeling that we (as a society) should do more to reach towards machine-learning.]

The International Society for Bayesian Analysis (ISBA) is pleased to announce its new initiative *ISBA@NIPS*, an initiative aimed at highlighting the importance and impact of Bayesian methods in the new era of data science.

Among the first actions of this initiative, ISBA is endorsing a number of *Bayesian satellite workshops* at the Neural Information Processing Systems (NIPS) Conference, which will be held in Montréal, Québec, Canada, December 8-13, 2014.

Furthermore, a special ISBA@NIPS Travel Award will be granted to the best Bayesian invited and contributed paper(s) among all the ISBA endorsed workshops.

ISBA endorsed workshops at NIPS

  1. ABC in Montréal. This workshop will include topics on: Applications of ABC to machine learning, e.g., computer vision, other inverse problems (RL); ABC Reinforcement Learning (other inverse problems); Machine learning models of simulations, e.g., NN models of simulation responses, GPs etc.; Selection of sufficient statistics and massive dimension reduction methods; Online and post-hoc error; ABC with very expensive simulations and acceleration methods (surrogate modelling, choice of design/simulation points).
  2.  Networks: From Graphs to Rich Data. This workshop aims to bring together a diverse and cross-disciplinary set of researchers to discuss recent advances and future directions for developing new network methods in statistics and machine learning.
  3. Advances in Variational Inference. This workshop aims at highlighting recent advancements in variational methods, including new methods for scalability using stochastic gradient methods, extensions to the streaming variational setting, improved local variational methods, inference in non-linear dynamical systems, principled regularisation in deep neural networks, and inference-based decision making in reinforcement learning, amongst others.
  4. Women in Machine Learning (WiML 2014). This is a day-long workshop that gives female faculty, research scientists, and graduate students in the machine learning community an opportunity to meet, exchange ideas and learn from each other. Under-represented minorities and undergraduates interested in machine learning research are encouraged to attend.
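For readers coming from machine learning, the "ABC" in the first workshop's title (approximate Bayesian computation) can be illustrated with a minimal rejection sampler; the Gaussian model, flat prior, summary statistic, and tolerance below are all illustrative assumptions of mine, not details from the announcement:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: in real ABC problems the likelihood is intractable,
# so we only get to run the simulator forward. (Toy Gaussian example.)
true_mu = 2.0
observed = rng.normal(true_mu, 1.0, size=100)
s_obs = observed.mean()  # chosen summary statistic

def simulate(mu, size=100):
    """Forward simulator: the only access we have to the model."""
    return rng.normal(mu, 1.0, size=size)

# ABC rejection sampling: draw a parameter from the prior, keep it if
# the simulated summary lands within a tolerance of the observed one.
tol = 0.05
accepted = []
while len(accepted) < 300:
    mu = rng.uniform(-10, 10)  # flat prior on the mean
    if abs(simulate(mu).mean() - s_obs) < tol:
        accepted.append(mu)

posterior_mean = np.mean(accepted)
print(round(posterior_mean, 1))  # close to the true mean of 2.0
```

Shrinking the tolerance trades acceptance rate for accuracy, which is exactly why the workshop topics above include sufficient-statistic selection and acceleration methods for very expensive simulations.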

ISBA@NIPS Travel Award

The ISBA Program Council will grant two special ISBA Travel Awards to two selected young participants, one in the category of Invited Paper and one in the category of Contributed Paper. Each Travel Award will be worth at most 1000 USD. Organisers of ISBA endorsed workshops at NIPS are all invited to propose candidates.


  • Only participants of ISBA-endorsed Workshops at NIPS will be considered.
  • The recipients should be graduate students or junior researchers (up to five years after graduation) presenting at the workshop.
  • The recipients should be ISBA members at the moment of receiving the award.

Application procedure

The organizers of ISBA-endorsed Workshops at NIPS who wish to apply should select one or two candidates and nominate them to the ISBA Program Council no later than:

  • September the 5th, 2014 (for the category of Invited Paper)
  • October the 29th, 2014 (for the category of Contributed Paper)

The ISBA Program Council selects the two winners among the candidates proposed by all ISBA-endorsed Workshops. The outcome of the above procedure will be communicated to the Workshop Organisers by no later than:

  • September the 9th, 2014 (for the category of Invited Paper)
  • November the 7th, 2014 (for the category of Contributed Paper)

The winners will present a special ISBA@NIPS Travel Award recipient’s seminar at the workshops at NIPS.

Filed under: Statistics, Travel, University life Tagged: ABC in Montréal, Canada, graphical models, ISBA, machine learning, Montréal, NIPS 2014, Québec, travel award, variational Bayes methods

by xi'an at September 02, 2014 12:18 PM

September 01, 2014

Emily Lakdawalla - The Planetary Society Blog

ESA invites amateurs to produce portraits of comet 67P
After a pause of about a week in daily image releases from Rosetta, ESA has begun sharing four-image sets of photos of comet Churyumov-Gerasimenko and invited the public to help produce pretty pictures from them.

September 01, 2014 05:01 PM

Christian P. Robert - xi'an's og

a day of travel

I had quite a special day today as I travelled through Birmingham, made a twenty-minute stop in Coventry to drop my bag in my office, went down to London to collect a most kindly loaned city-bike and took the train back to Coventry with the said bike… On my way from Bristol to Warwick, I decided to spend the night in downtown Birmingham as it was both easier and cheaper than to find accommodation on Warwick campus. However, while the studio I rented was well-designed and brand-new, my next door neighbours were not so well-designed in that I could hear them and the TV through the wall, despite top-quality ear-plugs! After a request of mine, they took the TV off but kept to the same decibel level for their uninteresting exchanges. In the morning I tried to go running in the centre of Birmingham but, as I could not find the canals, I quickly got bored and gave up. As Mark had proposed to lend me a city bike for my commuting in [and not to] Warwick, I then decided to take the opportunity of a free Sunday to travel down to London to pick up the bike, change the pedals in a nearby shop, add an anti-theft device, and head back to Coventry. Which gave me the opportunity to bike in London by Abbey Road, Regent Park, and Hampstead, before [easily] boarding a fast train back to Coventry and biking up to the University of Warwick campus. (Sadly to discover that all convenience stores had closed by then… )

Filed under: pictures, Running, Travel Tagged: biking, Birmingham, Coventry, England, London, Regent Park, University of Warwick

by xi'an at September 01, 2014 12:18 PM

astrobites - astro-ph reader's digest

Taking a Gap Year – Part 1

This is our first installment in a series of posts that will discuss the option of taking a gap year prior to starting graduate school. While many students choose to go to graduate school right after finishing up their bachelor’s degree, taking some time away from school before starting a PhD program is becoming an increasingly popular option. However, this is still a somewhat unconventional route, and those considering a gap year often face a great deal of uncertainty and doubt about what to do and what to expect.

Fortunately, several of our authors here at Astrobites have taken a gap year prior to starting graduate school, and we’d like to share our experiences and advice with our readers. Even though we are speaking from an astronomy background to other astronomers, our gap year experiences are diverse enough such that students in other fields might find this information helpful as well.

Several of our authors spent their gap year working on a research project at their undergraduate institutions. In this post, we will share our experiences with being a “pseudo-grad” student (i.e. having the research responsibilities of a graduate student, but without being formally enrolled at an institution and having to take classes, teach, etc.).

It Actually Is Rocket Science! (Anson Lam)

The CIBER group in front of our rocket at the White Sands Missile Range. (Anson at top center, and P.I. Jamie Bock at bottom left)

When I started my senior year at Caltech, I wasn’t terribly motivated to apply to grad school. Even though I wanted to get a PhD at some point, I also wanted a break from the endless cycle of classes and problem sets. I still enjoyed doing research though, so I sent a bunch of emails around the astronomy department asking if anyone would be willing to take me into their group for a year. It took a number of tries until I was successful, but I ultimately ended up working on the Cosmic Infrared Background ExpeRiment (CIBER) as a full-time research assistant. Even though I had a lot of experience doing other types of research as an undergraduate, this project was quite a unique experience. For one, I had an opportunity to work on instrumentation, which was something I had never done before, nor had I really considered as a research option. I also had the opportunity to go on a month-long field deployment at the White Sands Missile Range in New Mexico, where I was helping out with assembling and launching the CIBER rocket.

Even while doing research full time, it was still easier to finish up my graduate school applications and GREs without the usual craziness that I had to endure as an undergraduate. I don’t think I would have fared as well if I had applied during my senior year. My graduate school visits were more relaxed as well, since I didn’t have to worry about classes. In fact, a number of graduate students I met during my visits mentioned that they wished they had taken a gap year too, so I knew I wasn’t doing anything wrong.

My gap year wasn’t all work though, and I still had opportunities to do fun and interesting things outside of research. We had a number of visiting graduate students from Japan and Korea in our research group, and it was fun getting to socialize and mingle with collaborators from different cultures (I’m Chinese-Canadian-American myself). I even started learning Korean as a foreign language just for fun. I also enjoy endurance sports, and I spent a considerable amount of my free time doing a lot of distance running and racing in various triathlons. I didn’t always have the time to do these sorts of things as an undergraduate, so it was definitely a cool way to spend time before starting graduate school.

Other gap year tips:

  • It’s good to start looking for research opportunities early on, since space and funding can be limited.
  • Working on a project continuously for a year is very different from the summer-long research stints that a lot of undergraduates do. It’s hard to get a lot done in a single summer, and a gap year is a good opportunity to try something new before attaching yourself to a particular area in graduate school.

When Life Gives You Lemons, It’s Okay to Ask Around for Some Sugar Water (Korey Haynes)


Graduation is so much more awesome when you know what you’re doing next.

When I was finishing my senior year of undergrad, I had limited research experience and–I will own this–terrible Physics GRE scores. I got into one graduate program I regretted applying to at all (I had done an REU there and knew I could get in, but didn’t actually like any of their active research areas), and so I had a sit-down with my adviser to discuss my options. I was debating whether to enter a program I didn’t like just so I could keep moving forward, or move back in with my parents and either change career paths entirely (and I had no idea what to do with a B.S. in astronomy), or wait tables for a year and attempt the subject GRE again. My adviser came through for me in a huge way by offering funding for a year to do full-time research with him. My college didn’t have a graduate program, so the idea of getting this kind of position hadn’t even occurred to me. I wasn’t even aware this was a possibility without graduate experience, but I jumped at the chance.

That was the year I learned how to be an astronomer. I had limited programming experience up until that point, so I taught myself IDL that year, as well as finally getting comfortable with DS9, IRAF, and general Unix scripting. I learned a whole new portion of the electromagnetic spectrum (my experience so far had been visual or radio data, and I spent the year doing infrared spectroscopy), learned how to run an independent research project and collaborate with other scientists, presented my work at that year’s AAS, and by the time I left a year later, I had a first author paper in press and offer letters from multiple graduate institutions. It was the most scientifically productive year I’ve had yet.

My advice? Do talk honestly with your adviser. I still feel incredibly grateful to have had such supportive mentors, and my experience, time and again, has been that astronomers really do want to help each other. Talk to your adviser, talk to other professors. Mine was a bit of a special case, so if you’re planning on finding a research position, you should look around 9-10 months in advance. But don’t assume that just because it’s late in the year (this all fell into place around the end of April for me) that you’re out of options.

Work and play: the benefits of extra time (Elisabeth Newton)

In my senior year, I was faced with the endless circuit diagrams and oscilloscope drawings of my work-intensive electronics lab and the challenge of teaching for the first time. Midway through fall, I had given no thought to graduate school or the GREs and so I was quick to decide that grad school could wait another year. Many of my classmates were making similar decisions, so I never felt that taking a year off wasn’t an option. Not having to worry about applying to graduate school gave me the time to spend my fall semester learning electronics, teaching astronomy, and fencing with my club team.

Like Korey, my undergraduate thesis advisor offered to keep me on as a full-time research assistant after I graduated, which is what I eventually chose to do. I also had the option to teach full-time at our University’s tutoring center, continuing the teaching I’d been doing. Both opportunities opened up in March. I don’t remember why I chose research over teaching, but in retrospect I see both as having been wonderful opportunities. One thing I did learn from being a researcher is that I enjoy being a full-time astronomer; knowing this was a good source of motivation during the grad school application process.

For me, there were two very big benefits to taking a year off. First, I was able to devote a significant amount of time to my graduate school and NSF applications. Because my position was flexible, I could take the time I needed, and because I was immersed in a supportive academic environment, I also was never far from advice. Second, I was privileged enough that after working for part of the year, I was able to take time off. Encouraged by my advisors, I spent the remainder of the year really taking a break: I traveled both in the US and abroad and spent much-needed time with family and friends back home.

by Anson Lam at September 01, 2014 03:52 AM

August 31, 2014

Christian P. Robert - xi'an's og

efficient exploration of multi-modal posterior distributions

The title of this recent arXival had potential appeal; however, the proposal ends up being rather straightforward and hence anti-climactic! The paper by Hu, Hendry and Heng proposes to run a mixture of proposals centred at the various modes of the target for an efficient exploration. This is a correct MCMC algorithm, granted!, but the requirement to know beforehand all the modes to be explored is self-defeating, since the major issue with MCMC is modes that are omitted from the exploration and remain undetected throughout the simulation… As provided, this is a standard MCMC algorithm with no adaptive feature, and I would rather suggest our population Monte Carlo version, given the available information. Another connection with population Monte Carlo is that I think the performances would improve by Rao-Blackwellising the acceptance rate, i.e. removing the conditioning on the (ancillary) component of the index. For PMC we proved that using the mixture proposal in the ratio led to an ideally minimal variance estimate, and I do not see why randomising the acceptance ratio in the current case would bring any improvement.

Filed under: Books, Statistics, University life Tagged: acceptance probability, Metropolis-Hastings algorithms, multimodal target, population Monte Carlo, Rao-Blackwellisation

by xi'an at August 31, 2014 10:14 PM

Emily Lakdawalla - The Planetary Society Blog

Hayabusa 2 complete, ready to begin its journey to asteroid 1999 JU3
The excitement is building for Hayabusa 2! The spacecraft is now complete and ready to be shipped to its launch site. JAXA unveiled its next interplanetary traveler to the media in a special event on August 31.

August 31, 2014 09:09 PM

Tommaso Dorigo - Scientificblogging

How The Higgs Became The Target Of Run 2 At The Tevatron
Until the second half of the nineties, when the LEP collider started to be upgraded to investigate centre-of-mass energies of electron-positron collisions higher than those previously produced at the Z mass, the Higgs boson was not the main focus of experiments exploring the high-energy frontier. The reason is that the expected production cross section of that particle was prohibitively small for the comparatively low luminosities provided by the facilities available at the time. Of course, one could still look for anomalously high-rate production of final states possessing the characteristics of a Higgs boson decay; but those searches had a limited appeal.

read more

by Tommaso Dorigo at August 31, 2014 05:45 PM

Peter Coles - In the Dark

Writer’s Block

A few people have asked why I’ve had the sheer effrontery to take a week off and come to Cardiff. Well, it may surprise you to learn that even Heads of School have a holiday entitlement, and in the 18 months I’ve been in that position I’ve only managed to take a small fraction of mine!

But the real reason for this break is that I need some time without disturbance to finish off the long-awaited Second Edition of Cosmology: A Very Short Introduction. Dorothy made me a subtle sign for my office door, but it has proved largely ineffective at preventing distractions. So here I am, back in the Cardiff residence, blocking out as much as I can to get on with some writing.

I hope this clarifies the situation.

by telescoper at August 31, 2014 05:44 PM

The n-Category Cafe

Why It Matters

One interesting feature of the Category Theory conference in Cambridge last month was that lots of the other participants started conversations with me about the whole-population, suspicionless surveillance that several governments are now operating. All but one were enthusiastically supportive of the work I’ve been doing to try to get the mathematical community to take responsibility for its part in this, and I appreciated that very much.

The remaining one was a friend who wasn’t unsupportive, but said to me something like “I think I probably agree with you, but I’m not sure. I don’t see why it matters. Persuade me!”

Here’s what I replied.

“A lot of people know now that the intelligence agencies are keeping records of almost all their communications, but they can’t bring themselves to get worked up about it. And in a way, they might be right. If you, personally, keep your head down, if you never do anything that upsets anyone in power, it’s unlikely that your records will end up being used against you.

“But that’s a really self-centred attitude. What about people who don’t keep their heads down? What about protesters, campaigners, activists, people who challenge the establishment — people who exercise their full democratic rights? Freedom from harassment shouldn’t depend on you being a quiet little citizen.

“There’s a long history of intelligence agencies using their powers to disrupt legitimate activism. The FBI recorded some of Martin Luther King’s extramarital liaisons and sent the tape to his family home, accompanied by a letter attempting to blackmail him into suicide. And there have been many many examples since then (see below).

“Here’s the kind of situation that worries me today. In the UK, there’s a lot of debate at the moment about the oil extraction technique known as fracking. The government has just given permission for the oil industry to use it, and environmental groups have been protesting vigorously.

“I don’t have strong opinions on fracking myself, but I do think people should be free to organize and protest against it without state harassment. In fact, the state should be supporting people in the exercising of their democratic rights. But actually, any anti-fracking group would be sensible to assume that it’s the object of covert surveillance, and that the police are working against them, perhaps by employing infiltrators — because they’ve been doing that to other environmental groups for years.

“It’s the easiest thing in the world for politicians to portray anti-fracking activists as a danger to the UK’s economic well-being, as a threat to national energy security. That’s virtually terrorism! And once someone’s been labelled with the T word, it immediately becomes trivial to justify actually using all that surveillance data that the intelligence agencies are gathering routinely. And I’m not exaggerating — anti-terrorism laws really have been used against environmental campaigners in the recent past.

“Or think about gay rights. Less than fifty years ago, sex between men in England was illegal. This law was enforced, and it ruined people’s lives. For instance, my academic great-grandfather Alan Turing was arrested under this law and punished by chemical castration. He’s widely thought to have killed himself as a direct result. But today, two men in England can not only have sex legally, they can marry with the full endorsement of the state.

“How did this change so fast? Not by people writing polite letters to the Times, or by going through official parliamentary channels (at least, not only by those means). It was mainly through decades of tough, sometimes dangerous, agitation, campaigning and protest, by small groups and by courageous individual citizens.

“By definition, anyone campaigning for anything to be decriminalized is siding with criminals against the establishment. It’s the easiest thing in the world for politicians to portray campaigners like this as a menace to society — a threat to law and order. Any nation state with the ability to monitor, infiltrate, harass and disrupt such “menaces” will be very sorely tempted to use it. And again, that’s no exaggeration: in the US at least, this has happened to gay rights campaigners over and over again, from the 1950s to nearly the present day, even sometimes — ludicrously — under the banner of fighting terrorism (1, 2, 3, 4).

“So government surveillance should matter to you in a very direct way if you’re involved in any kind of activism or advocacy or protest or campaigning or dissent. It should also matter to you if you’re not, but you quietly support any of this activism — or if you reap its benefits. Even if not (which is unlikely), it matters if you simply want to live in a society where people can engage in peaceful activism without facing disruption or harassment by the state. And it matters more now than it ever did before, because we live so much of our lives on the internet, and government surveillance powers are orders of magnitude greater than they’ve ever been before.”

That’s roughly what I said. I think we then talked a bit about mathematicians’ role in enabling whole-population surveillance. Here’s Thomas Hales’s take on this:

If privacy disappears from the face of the Earth, mathematicians will be some of the primary culprits.

Of course, there are lots of other reasons why the activities of the NSA, GCHQ and their partners might matter to you. Maybe you object to industrial espionage being carried out in the name of national security, or the NSA supplying data to the CIA’s drone assassination programme (“we track ‘em, you whack ‘em”), or the raw content of communications between Americans being passed en masse to Israel, or the NSA hacking purely civilian infrastructure in China, or government agencies intercepting lawyer-client and journalist-source communications, or that the existence of mass surveillance leads inevitably to self-censorship. Or maybe you simply object to being watched, for the same reason you close the bathroom door: you’re not doing anything to be ashamed of, you just want your privacy. But the activism point is the one that resonates most deeply with me personally, and it seemed to resonate with my friend too.

You may think I’m exaggerating or scaremongering — that the enormous power wielded by the US and UK intelligence agencies (among others) could theoretically be used against legitimate citizen activism, but hasn’t been so far.

There’s certainly an abstract argument against this: it’s simply human nature that if you have a given surveillance power available to you, and the motive to use it, and the means to use it without it being known that you’ve done so, then you very likely will. Even if (for some reason) you believe that those currently wielding these powers have superhuman powers of self-restraint, there’s no guarantee that those who wield them in future will be equally saintly.

But much more importantly, there’s copious historical evidence that governments routinely use whatever surveillance powers they possess against whoever they see as troublemakers, even if this breaks the law. Without great effort, I found 50 examples in the US and UK alone — see below.

Six overviews

If you’re going to read just one thing on government surveillance of activists, I suggest you make it this:

Among many other interesting points, it reminds us that this isn’t only about “leftist” activism — three of the plaintiffs in this case are pro-gun organizations.

Here are some other good overviews:

And here’s a short but incisive comment from journalist Murtaza Hussain.

50 episodes of government surveillance of activists

Disclaimer: Journalism about the activities of highly secretive organizations is, by its nature, very difficult. Even obtaining the basic facts can be a major feat. Obviously, I can’t attest to the accuracy of all these articles — and the entries in the list below are summaries of the articles linked to, not claims I’m making myself. As ever, whether you believe what you read is a judgement you’ll have to make for yourself.


1. FBI surveillance of War Resisters League (1, 2), continuing in 2010 (1)


2. FBI surveillance of the National Association for the Advancement of Colored People (1)

3. FBI “surveillance program against homosexuals” (1)


4. FBI’s Sex Deviate programme (1)

5. FBI’s Cointelpro projects, aimed at “surveying, infiltrating, discrediting, and disrupting domestic political organizations”, and NSA’s Project Minaret, targeting leading critics of Vietnam war including senators, civil rights leaders and journalists (1)

6. FBI attempt to blackmail Martin Luther King into suicide with surveillance tape (1)

7. NSA interception of antiwar activists, including Jane Fonda and Dr Benjamin Spock (1)

8. Harassment of California student movement (including Stephen Smale’s free speech advocacy) by FBI, with support of Ronald Reagan (1, 2)


9. FBI surveillance and attempted deportation of John Lennon (1)

10. FBI burglary of the office of the psychiatrist of Pentagon Papers whistleblower Daniel Ellsberg (1)


11. Margaret Thatcher had the Canadian national intelligence agency CSEC surveil two of her own ministers (1, 2, 3)

12. MI5 tapped phone of founder of Women for World Disarmament (1)

13. Ronald Reagan had the NSA tap the phone of congressman Michael Barnes, who opposed Reagan’s Central America policy (1)


14. NSA surveillance of Greenpeace (1)

15. UK police’s “undercover work against political activists” and “subversives” — including ex-home secretary Jack Straw (1)

16. UK undercover policeman Peter Francis “undermined the campaign of a family who wanted justice over the death of a boxing instructor who was struck on the head by a police baton” (1)

17. UK undercover police secretly gathered intelligence on 18 grieving families fighting to get justice from police (1, 2)

18. UK undercover police spied on lawyer for family of murdered black teenager Stephen Lawrence; police also secretly recorded friend of Lawrence and his lawyer (1, 2)

19. UK undercover police spied on human rights lawyers Bindmans (1)

20. GCHQ accused of spying on Scottish trade unions (1)


21. US military spied on gay rights groups opposing “don’t ask, don’t tell” (1)

22. Maryland State Police monitored nonviolent gay rights groups as terrorist threat (1)

23. NSA monitoring email of American citizen Faisal Gill, including while he was running as Republican candidate for Virginia House of Delegates (1)

24. NSA surveillance of Rutgers professor Hooshang Amirahmadi and ex-California State professor Agha Saeed (1)

25. NSA tapped attorney-client conversations of American lawyer Asim Ghafoor (1)

26. NSA spied on American citizen Nihad Awad, executive director of the Council on American-Islamic Relations, the USA’s largest Muslim civil rights organization (1)

27. NSA analyst read personal email account of Bill Clinton (date unknown) (1)

28. Pentagon counterintelligence unit CIFA monitored peaceful antiwar activists (1)

29. Green party peer and London assembly member Jenny Jones was monitored and put on secret police database of “domestic extremists” (1, 2)

30. MI5 and UK police bugged member of parliament Sadiq Khan (1, 2)

31. Food Not Bombs (volunteer movement giving out free food and protesting against war and poverty) labelled as terrorist group and infiltrated by FBI (1, 2, 3)

32. Undercover London police infiltrated green activist groups (1)

33. Scottish police infiltrated climate change activist organizations, including anti-airport expansion group Plane Stupid (1)

34. UK undercover police had children with activists in groups they had infiltrated (1)

35. FBI infiltrated Muslim communities and pushed those with objections to terrorism (and often mental health problems) to commit terrorist acts (1, 2, 3)


36. California gun owners’ group Calguns complains of chilling effect of NSA surveillance on members’ activities (1, 2, 3)

37. GCHQ and NSA surveilled Unicef and head of Economic Community of West African States (1)

38. NSA spying on Amnesty International and Human Rights Watch (1)

39. CIA hacked into computers of Senate Intelligence Committee, whose job it is to oversee the CIA
(1, 2, 3, 4, 5, 6; bonus: watch CIA director John Brennan lie that it didn’t happen, months before apologizing)

40. CIA obtained legally protected, confidential email between whistleblower officials and members of congress, regarding CIA torture programme (1)

41. Investigation suggests that CIA “operates an email surveillance program targeting senate intelligence staffers” (1)

42. FBI raided homes and offices of Anti-War Committee and Freedom Road Socialist Organization, targeting solidarity activists working with Colombians and Palestinians (1)

43. Nearly half of US government’s terrorist watchlist consists of people with no recognized terrorist group affiliation (1)

44. FBI taught counterterrorism agents that mainstream Muslims are “violent” and “radical”, and used presentations about the “inherently violent nature of Islam” (1, 2, 3)

45. GCHQ has developed tools to manipulate online discourse and activism, including changing outcomes of online polls, censoring videos, and mounting distributed denial of service attacks (1, 2)

46. Green member of parliament Caroline Lucas complains that GCHQ is intercepting her communications (1)

47. GCHQ collected IP addresses of visitors to Wikileaks websites (1, 2)

48. The NSA tracks web searches related to privacy software such as Tor, as well as visitors to the website of the Linux Journal (calling it an “extremist forum”) (1, 2, 3)

49. UK police attempt to infiltrate anti-racism, anti-fascist and environmental groups, anti-tax-avoidance group UK Uncut, and politically active Cambridge University students (1, 2)

50. NSA surveillance impedes work of investigative journalists and lawyers (1, 2, 3, 4, 5).

Back to mathematics

As mathematicians, we spend much of our time studying objects that don’t exist anywhere in the world (perfect circles and so on). But we exist in the world. So, being a mathematician sometimes involves addressing real-world concerns.

For instance, Vancouver mathematician Izabella Laba has for years been writing thought-provoking posts on sexism in mathematics. That’s not mathematics, but it’s a problem that implicates every mathematician. On this blog, John Baez has written extensively on the exploitative practices of certain publishers of mathematics journals, the damage it does to the universities we work in, and what we can do about it.

I make no apology for bringing political considerations onto a mathematical blog. The NSA is a huge employer of mathematicians — over 1000 of us, it claims. Like it or not, it is part of our mathematical environment. Both the American Mathematical Society and London Mathematical Society are now regularly publishing articles on the role of mathematicians in enabling government surveillance, in recognition of our responsibility for it. As a recent New York Times article put it:

To say mathematics is political is not to diminish it, but rather to recognize its greater meaning, promise and responsibilities.

by leinster at August 31, 2014 04:59 PM

The n-Category Cafe

Uncountably Categorical Theories

Right now I’d love to understand something a logician at Oxford tried to explain to me over lunch a while back. His name is Boris Zilber. He’s studying what he informally calls ‘logically perfect’ theories — that is, lists of axioms that almost completely determine the structure they’re trying to describe. He thinks that we could understand physics better if we thought harder about these logically perfect theories:

His ways of thinking, rooted in model theory, are quite different from anything I’m used to. I feel a bit like Gollum here:

A zeroth approximation to Zilber’s notion of ‘logically perfect theory’ would be a theory in first-order logic that’s categorical, meaning all its models are isomorphic. In rough terms, such a theory gives a full description of the mathematical structure it’s talking about.

The theory of groups is not categorical, but we don’t mind that, since we all know there are lots of very different groups. Historically speaking, it was much more upsetting to discover that Peano’s axioms of arithmetic, when phrased in first-order logic, are not categorical. Indeed, Gödel’s first incompleteness theorem says there are many statements about natural numbers that can neither be proved nor disproved starting from Peano’s axioms. It follows that for any such statement we can find a model of the Peano axioms in which that statement holds, and also a model in which it does not. So while we may imagine the Peano axioms are talking about ‘the’ natural numbers, this is a false impression. There are many different ‘versions’ of the natural numbers, just as there are many different groups.

The situation is not so bad for the real numbers — at least if we are willing to think about them in a somewhat limited way. There’s a theory of a real closed field: a list of axioms governing the operations +, ×, 0 and 1 and the relation ≤. Tarski showed this theory is complete. In other words, any sentence phrased in this language can either be proved or disproved starting from the axioms.

Nonetheless, the theory of real closed fields is not categorical: besides the real numbers, there are many ‘nonstandard’ models, such as fields of hyperreal numbers where there are numbers bigger than 1, 1+1, 1+1+1, 1+1+1+1 and so on. These models are all elementarily equivalent: any sentence that holds in one holds in all the rest. That’s because the theory is complete. But these models are not all isomorphic: we can’t get a bijection between them that preserves +, ×, 0, 1 and ≤.

Indeed, only finite-sized mathematical structures can be ‘nailed down’ up to isomorphism by theories in first-order logic. After all, the Löwenheim–Skolem theorem says that if a first-order theory in a countable language has an infinite model, it has at least one model of each infinite cardinality. So, if we’re trying to use this kind of theory to describe an infinitely big mathematical structure, the most we can hope for is that after we specify its cardinality, the axioms completely determine it.

And this actually happens sometimes. It happens for the complex numbers! Zilber believes this has something to do with why the complex numbers show up so much in physics. This sounds very implausible at first, but there are some amazing results in logic that one needs to learn before dismissing the idea out of hand.

Say κ is some cardinal. A first-order theory describing structure on a single set is called κ-categorical if it has a unique model of cardinality κ, up to isomorphism. And in 1965, a logician named Michael Morley showed that if a list of axioms is κ-categorical for some uncountable κ, it’s κ-categorical for every uncountable κ. I have no idea why this is true. But such theories are called uncountably categorical.

A great example is the theory of an algebraically closed field of characteristic zero.

When you think of algebraically closed fields of characteristic zero, the first example that comes to mind is the complex numbers. These have the cardinality of the continuum. But because this theory is uncountably categorical, there is exactly one algebraically closed field of characteristic zero of each uncountable cardinality… up to isomorphism.

This implies some interesting things. For example, we can take the complex numbers, throw in an extra element, and let it freely generate a bigger algebraically closed field. It’s ‘bigger’ in the sense that it contains the complex numbers as a proper subset, indeed a subfield. But since it has the same cardinality as the complex numbers, it’s isomorphic to the complex numbers!

And then, because this ‘bigger’ field is isomorphic to the complex numbers, we can turn this argument around. We can take the complex numbers, remove a lot of carefully chosen elements, and get a subfield that’s isomorphic to the complex numbers.

Or, if we like, we can take the complex numbers, adjoin a really huge set of extra elements, and let them freely generate an algebraically closed field of characteristic zero. The cardinality of this field can be as big as we want. It will be determined up to isomorphism by its cardinality. But it will be elementarily equivalent to the ordinary complex numbers! In other words, all the same sentences written in the language of +, ×, 0 and 1 will hold. See why?
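Here is a sketch of one standard answer, via the Łoś–Vaught test (my gloss, not part of the original post):

```latex
\textbf{\L{}o\'s--Vaught test.} If a theory $T$ in a countable language has
no finite models and is $\kappa$-categorical for some infinite $\kappa$,
then $T$ is complete.

\emph{Sketch for $\mathrm{ACF}_0$} (algebraically closed fields of
characteristic zero):
(1) every algebraically closed field is infinite, so $\mathrm{ACF}_0$ has
no finite models;
(2) a model of $\mathrm{ACF}_0$ is determined up to isomorphism by its
transcendence degree over $\mathbb{Q}$, and an uncountable model has
transcendence degree equal to its cardinality, so $\mathrm{ACF}_0$ is
$\kappa$-categorical for every uncountable $\kappa$;
(3) if some sentence $\varphi$ were undecided, both $\varphi$ and
$\neg\varphi$ would have models, which by L\"owenheim--Skolem could both
be taken of cardinality $\kappa$, contradicting $\kappa$-categoricity.
Hence all models of $\mathrm{ACF}_0$ satisfy exactly the same first-order
sentences.
```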

The theory of a real closed field is not uncountably categorical. This implies something really strange. Besides the ‘usual’ real numbers ℝ there’s another real closed field ℝ′, not isomorphic to ℝ, with the same cardinality. We can build the complex numbers ℂ using pairs of real numbers. We can use the same trick to build a field ℂ′ using pairs of guys in ℝ′. But it’s easy to check that this funny field ℂ′ is algebraically closed and of characteristic zero. So, it’s isomorphic to ℂ.

In short, different ‘versions’ of the real numbers can give rise to the same version of the complex numbers! This is stuff they didn’t teach me in school.

All this is just background.

To a first approximation, Zilber considers uncountably categorical theories ‘logically perfect’. Let me paraphrase him:

There are purely mathematical arguments towards accepting the above for a definition of perfection. First, we note that the theory of the field of complex numbers (in fact any algebraically closed field) is uncountably categorical. So, the field of complex numbers is a perfect structure, and so are all objects of complex algebraic geometry by virtue of being definable in the field.

It is also remarkable that Morley’s theory of categoricity (and its extensions) exhibits strong regularities in models of categorical theories generally. First, the models have to be highly homogeneous, in a sense technically different from the one discussed for manifolds, but similar in spirit. Moreover, a notion of dimension (the Morley rank) is applicable to definable subsets in uncountably categorical structures, which gives one a strong sense of working with curves, surfaces and so on in this very abstract setting. A theorem of the present author states more precisely that an uncountably categorical structure M is either reducible to a 2-dimensional “pseudo-plane” with at least a 2-dimensional family of curves on it (so is non-linear), or is reducible to a linear structure like an (infinite dimensional) vector space, or to a simpler structure like a G-set for a discrete group G. This led to a Trichotomy Conjecture, which specifies that the non-linear case is reducible to algebraically closed fields, effectively implying that M in this case is an object of algebraic geometry over an algebraically closed field.

I don’t understand this, but I believe that in rough terms this would amount to getting ahold of algebraic geometry from purely ‘logical’ principles, not starting from ideas in algebra or geometry!

Ehud Hrushovski showed that the Trichotomy Conjecture is false. However, Zilber has bounced back with a new improved notion of logically perfect theory, namely a ‘Noetherian Zariski theory’. This sounds like something out of algebraic geometry, but it’s really a concept from logic that takes advantage of the eerie parallels between structures defined by uncountably categorical theories and algebraic geometry.

Models of Noetherian Zariski theories include not only structures from algebraic geometry, but also from noncommutative algebraic geometry, like quantum tori. So, Zilber is now trying to investigate the foundations of physics using ideas from model theory. It seems like a long hard project that’s just getting started.

Here’s a concrete conjecture that illustrates how people are hoping algebraic geometry will spring forth from purely logical principles:

The Algebraicity Conjecture. Suppose G is a simple group whose theory (consisting of all sentences in the first-order language of groups that hold for this group) is uncountably categorical. Then G = 𝔾(K) for some simple algebraic group 𝔾 and some algebraically closed field K.

Zilber has a book on these ideas:

But there are many prerequisites I’m missing, and Richard Elwes, who studied with Zilber, has offered me some useful pointers:

If you want to really understand the Geometric Stability Theory referred to in your last two paragraphs, there’s a good (but hard!) book by that name by Anand Pillay. But you don’t need to go anywhere near that far to get a good idea of Morley’s Theorem and why the complex numbers are uncountably categorical. These notes look reasonable:

Basically the idea is that a theory is uncountably categorical if and only if two things hold: firstly there is a sensible notion of dimension (Morley rank) which can be assigned to every formula quantifying its complexity. In the example of the complex numbers Morley rank comes out to be pretty much the same thing as Zariski dimension. Secondly, there are no ‘Vaughtian pairs’ meaning, roughly, two bits of the structure whose size can vary independently. (Example: take the structure consisting of two disjoint non-interacting copies of the complex numbers. This is not uncountably categorical because you could set the two cardinalities independently.)

It is not too hard to see that the complex numbers have these two properties once you have the key fact of ‘quantifier elimination’: any first-order formula is equivalent to one with no quantifiers, so the definable sets are exactly those determined by the vanishing or non-vanishing of various polynomials. (Hence the connection to algebraic geometry.) In one dimension, basic facts about complex numbers tell us that every definable subset of ℂ must therefore be either finite or co-finite. This is the definition of a strongly minimal structure, which automatically implies both of the above properties without too much difficulty. So the complex numbers are not merely ‘perfect’ (though I’ve not heard this term before) but are the very best type of structure even among the uncountably categorical.
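As a tiny numerical illustration of that finite-or-cofinite dichotomy (my own example, not from the correspondence): a quantifier-free condition in one variable is built from polynomial vanishing and non-vanishing, and a nonzero polynomial has only finitely many zeros in ℂ.

```python
import numpy as np

# The definable set {z in C : p(z) = 0} for p(z) = z^3 - 1 is finite;
# its complement {z : p(z) != 0} is therefore cofinite.
coeffs = [1.0, 0.0, 0.0, -1.0]        # p(z) = z^3 - 1
roots = np.roots(coeffs)              # the finite zero set: the cube roots of 1
assert len(roots) == len(coeffs) - 1  # at most degree-many points
```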

If you know anything else that could help me out, I’d love to hear it!

by john at August 31, 2014 07:31 AM

August 30, 2014

Peter Coles - In the Dark

Cold War Cardiff

Here I am, back in Cardiff. Though given the roadblocks, metal fences and hordes of armed police, you might be forgiven for thinking it was Berlin at the height of the Cold War.

All of beautiful Bute Park is fenced off, and there’s considerable disruption to the traffic through the city. All this will last over a month, and the NATO boondoggle isn’t even here. It actually takes place near Newport, twenty miles away. They’re just using Cardiff for two days.

Whoever made the decision to force this nonsense on the City should be held to account. There should be a full public inquiry into the gross abuse of power. And I hope the people of Cardiff remember this, and express their outrage at the Council when they vote at the next elections.

The photo, by the way, is at the bridge over the Taff at Cowbridge Road. Or is it Checkpoint Charlie?

by telescoper at August 30, 2014 04:57 PM

The Great Beyond - Nature blog

NASA extends Mars rover and Moon orbiter missions
A false-colour image of the Mars Opportunity rover, taken in March 2014.

A false-colour image of the Mars Opportunity rover, taken in March 2014.

NASA/JPL-Caltech/Cornell Univ./Arizona State Univ.

NASA is on the verge of releasing its long-awaited prioritization of planetary missions, meant to guide the agency if tight budgets force it to switch off an operating spacecraft. But two missions that had been considered at particular risk of closure — the Mars Opportunity rover and the Lunar Reconnaissance Orbiter (LRO) — have each received a reprieve of another two years of operations, scientists close to the projects have confirmed.

Although NASA officials had insisted otherwise, Opportunity and LRO were considered particularly vulnerable because funding for them was included in a supplement to the White House’s annual budget request to Congress, rather than as part of the main planetary sciences division budget.

In a decade of operation, Opportunity has rolled more than 40.6 kilometres across Mars, exploring areas including the most ancient habitable environment known on the planet. The rover is suffering from several mechanical issues as well as problems with its flash memory that have triggered computer resets in recent weeks. Opportunity, which costs on the order of US$13 million annually, is heading for a region called Marathon Valley where scientists think clay minerals formed in a watery environment.

The LRO finished its main task in 2010: mapping possible  locations for astronauts to return to the Moon. More recently it has focused on studying change on the lunar surface, such as from fresh meteorite impacts.

The complete ‘senior review’, encompassing five other planetary missions, will be released at a planetary sciences advisory group meeting in Washington DC on 3 September.

Of the five other missions, two of them are big-ticket items — on the order of $60 million annually — that are considered shoo-ins for approval. The Curiosity rover landed on Mars two years ago and is still heading for its ultimate goal, a mountain named Mount Sharp. (The harsh rocks of Mars have taken a toll on Curiosity, however, and the rover recently had to backtrack out of a sandy valley so as not to get stuck, as well as give up on drilling what would have been its fourth hole on Mars.)

The Cassini mission has been orbiting Saturn since 2004, but as seasons change it has been observing new phenomena on the planet. “In many ways it’s a brand-new mission,” project scientist Linda Spilker, of NASA’s Jet Propulsion Laboratory in Pasadena, California, said earlier this month. Cassini engineers are planning for a ‘grand finale’ in 2017, when the probe will repeatedly dive between the gaseous planet and its ring system to make unprecedented close-up measurements. “It will be seven seconds of terror every 22 days,” Spilker said.

The three remaining missions under scrutiny are the Mars Reconnaissance Orbiter, which costs around $30 million annually and plays a crucial communications relay role at Mars; the 13-year-old Mars Odyssey orbiter, at $12 million annually; and a $3 million contribution for an instrument aboard the European Space Agency’s Mars Express spacecraft, launched in 2003.

Jim Green, head of NASA’s planetary sciences division, has said repeatedly that the agency will work within its budgetary constraints to try to fulfill the recommendations of the senior review panel. The big unknown is how much money the agency will have to spend for each of the extended missions. NASA typically allocates around $1.3 billion annually to planetary sciences, but Congress has yet to decide the numbers for fiscal year 2015, which begins on 1 October.

by Alexandra Witze at August 30, 2014 03:44 PM

August 29, 2014

Emily Lakdawalla - The Planetary Society Blog

The Pivotal Discovery You’ve Probably Never Heard Of
Karl Battams highlights the historic discovery, by an Air Force satellite, of a sungrazing comet.

August 29, 2014 07:09 PM

Emily Lakdawalla - The Planetary Society Blog

The Rise and Fall (and Rise and Fall) of Planetary Exploration Funding
NASA has explored the solar system since the 1960s, but it has rarely been the top priority for the space agency. Jason Callahan breaks down how planetary science has been funded over the years within NASA's larger budget.

August 29, 2014 07:06 PM

Emily Lakdawalla - The Planetary Society Blog

The Birth of the Modern Universe
Amir Alexander reviews Alan Hirshfeld's newest book, "Starlight Detectives: How Astronomers, Inventors, and Eccentrics Discovered the Modern Universe."

August 29, 2014 05:29 PM

Peter Coles - In the Dark

NATO Cardiff: is this what democracy looks like?


I’m off to Cardiff this evening, and hope at some point over the weekend to take some pictures of the monstrous barrier described in this post by Keith Flett that has been put up all around town. One of the most important points about this month-long fiasco is that there was no consultation whatsoever with the people of Cardiff before the decision was taken to waste such a vast amount of money. No doubt that’s because if there had been a consultation the response would have been overwhelmingly negative. Who will be held to account? My guess is “nobody”…

Originally posted on Kmflett's Blog:

Nato Cardiff: is this what democracy looks like?
I live in North London and central Cardiff, something that seems to surprise some of my social media followers but is explained by the nature of my job as a union officer and the fact that my partner happens to live in Cardiff…

Next week on 4/5th September there is a Nato Summit meeting, not in Cardiff but at the Celtic Manor hotel outside Newport on the M4.

I’m no fan of Nato. It contains the word ‘treaty’ in its name and history suggests that treaties are an excellent way of starting wars. In addition it appears to be run largely by people who have more than a passing similarity to Dr Strangelove. Of course its opponents are mostly unlovely as well.

Anyway if you are going to have a Nato summit and lots of, at the least, self styled statesmen [&…


by telescoper at August 29, 2014 05:01 PM

Sean Carroll - Preposterous Universe

Should Scientific Progress Affect Religious Beliefs?

Sure it should. Here’s a new video from Closer to Truth, in which I’m chatting briefly with Robert Lawrence Kuhn about the question. “New” in the sense that it was just put on YouTube, although we taped it back in 2011. (Now my formulations would be considerably more sophisticated, given the wisdom that comes with age).

It’s interesting that the “religious beliefs are completely independent of evidence and empirical investigation” meme has enjoyed such success in certain quarters that people express surprise to learn of the existence of theologians and believers who still think we can find evidence for the existence of God in our experience of the world. In reality, there are committed believers (“sophisticated” and otherwise) who feel strongly that we have evidence for God in the same sense that we have evidence for gluons or dark matter — because it’s the best way to make sense of the data — just as there are others who think that our knowledge of God is of a completely different kind, and therefore escapes scientific critique. It’s part of the problem that theism is not well defined.

One can go further than I did in the brief clip above, to argue that any notion of God that can’t be judged on the basis of empirical evidence isn’t much of a notion at all. If God exists but has no effect on the world whatsoever — the actual world we experience could be precisely the same even without God — then there is no reason to believe in it, and indeed one can draw no conclusions whatsoever (about right and wrong, the meaning of life, etc.) from positing it. Many people recognize this, and fall back on the idea that God is in some sense necessary; there is no possible world in which he doesn’t exist. To which the answer is: “No he’s not.” Defenses of God’s status as necessary ultimately come down to some other assertion of a purportedly-inviolable metaphysical principle, which can always simply be denied. (The theist could win such an argument by demonstrating that the naturalist’s beliefs are incoherent in the absence of such principles, but that never actually happens.)

I have more sympathy for theists who do try to ground their belief in evidence, rather than those who insist that evidence is irrelevant. At least they are playing the game in the right way, even if I disagree with their conclusions. Despite what Robert suggests in the clip above, the existence of disagreement among smart people does not imply that there is not a uniquely right answer!

by Sean Carroll at August 29, 2014 04:45 PM

Quantum Diaries

Coffee and Code (Part Deux)

Further to my entry on the CERN Summer Student Webfest 2014 posted a few weeks ago (here), there is a short video about the event available to view HERE. Enjoy!


by James Doherty at August 29, 2014 04:29 PM

Symmetrybreaking - Fermilab/SLAC

Massive neutrino experiment proposed in China

China’s neutrino physics program could soon expand with a new experiment aimed at cracking a critical neutrino mystery.

Physicists have proposed building one of the largest-ever neutrino experiments in the city of Jiangmen, China, about 60 miles outside of Hong Kong. It could help answer a fundamental question about the nature of neutrinos.

The Jiangmen Underground Neutrino Observatory, or JUNO, gained official status in 2013 and established its collaboration this month.  Scientists are currently awaiting approval to start constructing JUNO’s laboratory near the Yangjiang and Taishan nuclear power plants. If it is built, current projections anticipate it will start taking data in 2020.

The plan is to bury the laboratory in a mountain under roughly half of a mile of rock and earth, a shield from distracting cosmic rays. From this subterranean seat, JUNO’s primary scientific goal would be to resolve the question of neutrino mass. There are three known neutrino types, or flavors: electron, muon and tau. Scientists know the difference between the masses of each neutrino, but not their specific values—so they don’t yet know which neutrino is heaviest or lightest.

“This is very important for our understanding of the neutrino picture,” says Yifang Wang, spokesperson for JUNO and director of the Institute of High Energy Physics of the Chinese Academy of Sciences. “For almost every neutrino model, you need to know which neutrino is heavier and which one is lighter. It has an impact on almost every other question about neutrinos.”

To reach this goal, JUNO needs to acquire a hoard of data, which requires two key elements: a large detector and a high influx of neutrinos.

The proposed detector design is called a liquid scintillator—the same basic set-up used to detect neutrinos for the first time in 1956. The detector consists primarily of an acrylic sphere 34.5 meters (or nearly 115 feet) in diameter, filled with fluid engineered specifically for detecting neutrinos. When a neutrino interacts with the fluid, a chain reaction creates two tiny flashes of light. An additional sphere, made of photomultiplier tubes, would surround the acrylic sphere and capture these light signals.

The more fluid the detector has, the more neutrino interactions the experiment can expect to see. Current liquid scintillator experiments include the Borexino experiment at the Gran Sasso Laboratory in Italy, which contains 300 tons of target liquid, and KamLAND in Japan, which contains a 1000-ton target. If plans go ahead, JUNO will be the largest liquid scintillator detector ever built, containing 20,000 tons of target liquid.
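Those numbers hang together, as a quick back-of-the-envelope check shows. The scintillator density used below (~0.86 tonnes per cubic metre, typical of organic liquid scintillators) is my assumption, not a figure from the collaboration:

```python
import math

# Back-of-envelope check of the JUNO detector figures quoted above.
# Assumed density (~0.86 t/m^3, typical of organic liquid scintillators)
# is not from the article.
DIAMETER_M = 34.5
DENSITY_T_PER_M3 = 0.86

volume_m3 = (4.0 / 3.0) * math.pi * (DIAMETER_M / 2.0) ** 3  # ~21,500 m^3
mass_t = volume_m3 * DENSITY_T_PER_M3                        # ~18,500 t
```

The result lands within about 10% of the quoted 20,000-ton target mass, which is as close as a two-input estimate can reasonably get.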

To discover the mass order of the three neutrino flavors, JUNO will look specifically at electron antineutrinos produced by the two nearby nuclear power plants.

“Only in Asia are there relatively new reactor power plants that can have four to six reactor cores in the same place,” Wang says. With the potential to run four to six cores each, the Chinese reactors would send a dense shower of neutrinos toward JUNO’s detector. Over time, a picture of the antineutrino energies would emerge. The order of the neutrino masses influences what that energy spectrum looks like.

Experiment representatives say JUNO could reach this goal by 2026.

It’s possible that the NOvA experiment in the United States or the T2K experiment in Japan, both of which are currently taking data, could make a measurement of the neutrino mass hierarchy before JUNO. At least four proposed experiments could also reach the same goal. But only JUNO would make the measurement via this particular approach.

The JUNO experiment would also tackle various other questions about the nature of neutrinos and refine some previously made measurements. If a supernova went off in our galaxy, JUNO would be able to observe the neutrinos it released. JUNO would also be the largest and most sensitive detector for geoneutrinos, which are produced by the decay of radioactive elements in the earth.

Six nations have officially joined China in the collaboration: the Czech Republic, France, Finland, Germany, Italy and Russia. US scientists are actively participating in JUNO, but the United States is not currently an official member of the collaboration. 



by Calla Cofield at August 29, 2014 04:01 PM

The Great Beyond - Nature blog

US government labs plan biohazard-safety sweep

The discovery of smallpox in a refrigerator at the US National Institutes of Health (NIH) in Bethesda, Maryland, on 9 July has apparently sparked some soul searching in the US government. On 27 August, the NIH designated September as National Biosafety Stewardship Month, encouraging researchers to take inventory of their freezers for potentially dangerous agents such as pathogens and toxins, and review their biosafety protocols. The White House Office of Science and Technology Policy (OSTP) did the same in a memo released to the public on 28 August, suggesting “a government-wide ‘safety stand-down,’” and “strongly urging” both federal agencies and independent labs to complete these steps within the month.

Although the OSTP does not have the regulatory power to enforce inspections, documents obtained exclusively by Nature show that some government agencies are already starting strict surveillance of their labs. In July, the NIH began scouring its own facilities for any misplaced hazards. Its rigorous strategy, obtained through a public-records request, requires laboratories at all of its campuses — whether they work with infectious diseases or not — to survey their vials and boxes for potentially dangerous pathogens, venoms, toxins and other agents. The scientific directors of each NIH institute have until 30 September to submit affidavits confirming that this has been completed by the laboratories in their institutes.

The protocols for this comprehensive sweep describe steps that the laboratory directors must take “including, but not limited to: a) randomly choosing several containers in the inventoried repository and confirming that their contents are as expected; b) if feasible, visually inspecting the contents of a substantial number of containers in the repository to be sure they hold vials of the expected type.” Anything unlabelled must be thrown away, and labs are instructed to pay specific attention not only to pathogens, but also to other hazardous materials such as poisons, venoms and explosive materials.

For extremely large collections with more than 10 million vials, such as the tissue sample repositories managed by the National Cancer Institute (NCI), researchers will not need to evaluate every single sample. Instead, they can apply a statistical algorithm to determine how many are likely to be misidentified. The NCI’s algorithm, for instance, would require examining 10,000 out of 10 million samples and matching them to existing electronic records of the inventory, and then extrapolating the rate of mismatches to the entire sample collection.
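To illustrate the idea (this is a hypothetical sketch in Python, not the NCI's actual algorithm), the procedure amounts to drawing a random sample of vials, counting how many disagree with the electronic inventory, and scaling that rate up to the whole collection:

```python
import random

def estimate_mismatches(physical, electronic, sample_size, seed=0):
    """Estimate how many vials in a large repository are mislabelled.

    physical:   dict mapping vial ID -> what the label actually says
    electronic: dict mapping vial ID -> what the inventory records say
    Checks a random sample and extrapolates the mismatch rate to the
    full collection, so only `sample_size` vials need handling.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    ids = rng.sample(sorted(physical), sample_size)
    mismatched = sum(1 for i in ids if physical[i] != electronic.get(i))
    rate = mismatched / sample_size
    return rate, round(rate * len(physical))
```

For a 10-million-vial repository, checking 10,000 vials this way means inspecting only 0.1% of the collection; the estimate is, of course, only as good as the randomness of the sample.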

Other government agencies that work with infectious diseases are also beginning laboratory sweeps. In testimony to Congress on 16 July, Thomas Frieden, director of the US Centers for Disease Control and Prevention (CDC), promised a sweeping inventory of all CDC labs in the wake of a pair of incidents in which scientists were exposed to anthrax and flu virus was accidentally shipped to another lab.

On 25 August, the US Department of Veterans Affairs (VA) sent out a memo to its staff announcing that it would be complying with a “government-wide safety stand-down” while the agency makes sure that none of its labs have unregistered dangerous biological agents or toxins and reviews its security practices. VA scientists have until 24 September to submit affidavits that they have complied with this order.

Carrie Wolinetz, the associate vice-president for federal relations at the Association of American Universities in Washington DC, says that a number of scientists were initially concerned that the vaguely worded VA requirement meant that research would be suspended for an unspecified amount of time. She sent out a memo to universities on 26 August, clarifying that the OSTP would not be enforcing any mandates. “It’s saying just take a day to take a look through your freezer,” she says. “It’s a good opportunity to do some reflection on what’s in your lab without it being burdensome or regulatory.”

But the lack of regulatory power is what worries epidemiologist Marc Lipsitch of the Harvard School of Public Health in Boston, Massachusetts. “Overall the White House memo is encouraging as the first, small step in a comprehensive approach to biosafety and biosecurity, but it will have little effect unless many other changes are put in place, which remain unspecified at this time,” he wrote in an e-mail to Nature.

Lipsitch is particularly concerned about regulation of experiments that make pathogens such as influenza virus more dangerous, and incidents such as those at the CDC. “The three incidents with [dangerous pathogens] in federal labs that spurred this action are among hundreds that happen each year in US laboratories,” Lipsitch writes. “Given the magnitude of the response that these three incidents have provoked, it is unsupportable to keep secret the details of [these] incidents in general. The poorly justified ‘security’ reason for keeping such incidents secret cannot outweigh the need to understand and learn from them.”

by Sara Reardon at August 29, 2014 03:13 PM

arXiv blog

Evidence Grows That Online Social Networks Have Insidious Negative Effects

A study of 50,000 people in Italy concludes that online social networks have a significant negative impact on individual welfare.

August 29, 2014 02:36 PM

Peter Coles - In the Dark

Pallas’s Cat

Too busy for a proper post today as I’ve got a lot to do before going off for a spot of annual leave. I’m therefore resorting to a standard ploy in such situations, posting a video of a cat. The short clip below features no ordinary cat, however. It’s an example of Pallas’s Cat, Otocolobus manul, a wonderful – but sadly endangered – creature which lives wild in the steppes of Central Asia. Here’s a fine specimen captured in a still photograph:


Although it appears very stocky because of its long fur, it’s actually no bigger than an average domestic cat.

The clip is a valuable reminder to us all that even the coolest and most dignified animals on Earth  can be hilarious when placed in an unfamiliar situation. This one has clearly just spotted a camera outside its lair….


by telescoper at August 29, 2014 12:26 PM

CERN Bulletin

CERN Bulletin Issue No. 35-36/2014
Link to e-Bulletin Issue No. 35-36/2014 | Link to all articles in this issue.

August 29, 2014 07:58 AM

Clifford V. Johnson - Asymptotia

And Back…
subway_sketches_27_08_2014It is a new semester, and a new academic year. So this means getting back into the routine of lots of various aspects of the standard professor gig. For me this also involves being back in LA and taking the subway, and so this means getting (when it is not too busy as it seems to get a lot now) to sketch people. The guy with the red sunglasses was on his way to USC as well, and while he was reading or playing a game on his phone (such things are a blessing to sketchers... they help people hold in a fixed position for good stretches of time) I was able to get a quick sketch done in a few stops on the Expo line before we all got off. The other guy with the oddly trimmed beard was just briefly seen on the Red line, and so I did not get to make much of him... I'm teaching electromagnetism at graduate level again this semester, and so it ought to be fun, given that it is such a fun topic. I hope that the group of [...] Click to continue reading this post

by Clifford at August 29, 2014 06:30 AM

August 28, 2014

astrobites - astro-ph reader's digest

Applying to grad school in the US: a timeline

I find that thinking about major undertakings and not knowing where to start can be extremely stressful. How am I supposed to know to be on top of something if I don’t even know I’m supposed to do it? In my experience, and maybe in yours as well, applying to grad school can be like that. This timeline is supposed to be a general outline for applying to astronomy graduate schools in the US generally from the perspective of a US-based student. Astrobites has a lot of other resources about graduate school as well. Check out our glossary on the application process or, if you’re interested in applying to PhDs in Europe, check out Yvette’s post on applying as a US undergraduate student. For full disclosure, in writing this I did not consult with any faculty who had served on an admissions committee, nor have I done so myself. As our commenting remains broken, ask us questions or offer your own thoughts on Twitter or Facebook.

[Image: Kansas banner (Wikimedia Commons / James Watkins)]


Now: Sort out your tests.

  • The physics GRE subject test is required for most, but not all, physics and astronomy graduate schools. Register for the physics GREs if you haven’t done so already. The late registration deadline for the September test is 8/29/2014 and the deadline for the October test is 9/19/2014. Note that if you choose to take the GREs multiple times, you will not be able to review your scores for the September test before sitting the October test. Both tests should get you your scores in time for your applications.
  • You will also need to take the regular GREs at some point. These are easier to schedule because the tests are computerized. There are many more locations and times available than for the subject GREs.
  • If you did your undergraduate degree at a non-English speaking institution, you may be required to demonstrate English proficiency with the TOEFL. Dates and locations vary by country.
  • Note that some departments impose a lower threshold on GRE scores, and say that they are unlikely to seriously consider applicants with scores below it, save for exceptional cases. You’re probably a bad judge of whether or not you are an exceptional applicant, so apply anyways. Do take the tests seriously, but don’t stress about getting a perfect score. Check out these Astrobites for some thoughts on the verbal GRE and the physics GRE.

Soon: Ask yourself serious questions.

  • Do you want to go to graduate school? What’s your purpose in going to grad school? Sometimes, graduate school can be viewed as a default option, but grad school is long and hard and there are no promises at the end, so make your decision thoughtfully.
  • Where do you want to apply? Among other things, you might consider where you want to live, what research interests you, and the resources available at different schools. Consider resources such as telescope access and conference funding, but you might also want to think about a university’s alumni network and non-academic career services.
  • What is the financial cost of graduate school for you? Most science PhD programs cover your tuition and pay you $20-30k per year as part of a typical teaching and research fellowship; question any that don’t. Especially relative to other jobs that an undergraduate science degree may qualify you for, your stipend may not afford you much extra room in your budget. Remember that graduate applications can also be very expensive, at the cost of $50-$100 per application, but that most schools will pay for you to come visit if they accept you.

[Image: Fall foliage Vermont banner (Wikimedia Commons / chensiyuan)]


September: Letters of recommendation.

  • Ask for letters of recommendation from those who know you well; you want people who can write in support of your application and whose letters will be meaningful to admissions committees. Past research supervisors are ideal people to ask; you might also consider asking your academic advisor or a professor you got to know particularly well. I bet they’ll be happy to do it. One tip for getting good letters of reference is to explicitly ask someone if they can write you a “strong” letter of recommendation (even if that feels awkward). You will probably need three letters.
  • Once you’ve decided where to apply, send your letter writers information: the list of places to which you need them to send letters, links providing information about what the letter should contain, and the application deadlines. Keep the list updated. They may also ask you to provide extra information.
  • Send your transcripts for fellowships. Fellowships and graduate schools both typically require transcripts with your application. Depending on your university, it can take a long time to get transcripts sent; get started in advance and avoid getting stuck with rush charges.
  • If you have the opportunity and research to present, consider attending the winter meeting of the American Astronomical Society (AAS). Some REUs support attending AAS, but you can also talk to your research advisor about presenting your work at AAS. Attending AAS can be rewarding, and as someone applying to graduate school you will get a chance to network with graduate students and professors. Going to AAS is definitely not required (I didn’t go), but departments do keep an eye out for potential applicants at AAS. Networking could be more important for you if your letter writers are not known by the faculty at the departments to which you are applying. September 11 is the early registration deadline for AAS.

Early October: Work on your fellowship applications.

  • The deadline to submit an abstract to AAS is October 1st.
  • Don’t think you’ll get that fellowship? Apply anyways. You might surprise yourself and it’s great practice. Plus, it’ll help with writing your graduate applications. Check out the AstroBetter list of fellowships and see if any might be right for you. The National Science Foundation Graduate Research Fellowship Program (NSF GRFP) is the big opportunity when it comes to astronomy. For more tips from Astrobites on applying for the NSF, check out this post on the program and application and this one for some of our own experiences.
  • Once you’ve got drafts of your personal statements and essays, send your essays to your letter writers so that they can write in support of what you’ve said.
  • Send your transcripts for graduate schools. See above. Note that some graduate applications only require unofficial transcripts at first, and allow you to send the official ones later if you are accepted.

Late October: Apply for fellowships.

  • Get help with your essays. Ask graduate students, post-docs, and/or professors at your university or that you’ve met elsewhere what a good essay looks like. Some might be willing to share their essays with you. Ask them to look over your statement, and get a friend to read for clarity and grammar. It probably goes without saying, but if you want people to read over your essays, you have to give them more than 24 hours to do so.
  • The NSF (see above) application is due October 30th. Note that the deadlines differ by field.
  • The Hertz Foundation Fellowship is due October 31st.
  • Check when the Department of Energy Computational Science Graduate Fellowship application is due. As of this Astrobite, the deadline wasn’t stated, but the application becomes available in October.

[Image: Flatirons Winter Sunrise banner (Wikimedia Commons / Jesse Varner)]


November: Work on your graduate school applications.

  • The application for the National Defense Science and Engineering Graduate Fellowship is due December 12th.
  • Make sure your letter writers get your recommendations in! Almost always, you will get a notification when a letter has been submitted on your behalf, or you can check online to see if your application is complete. If they are not in yet, remind your letter writers! They’re busy and probably forgot.
  • Keep an eye out for graduate-school specific fellowships. Some you’ll be automatically considered for, for others you might need to provide extra information.
  • If you choose to, November is a good time to email professors at the schools to which you are applying. This is definitely not necessary (I didn’t do this), but I do acknowledge that this could help your admissions prospects. Getting in contact with faculty could be particularly worthwhile if you don’t have the opportunity to go to AAS and network in person, and if your letter writers are not known by the department’s faculty. Admissions aside, you might personally find it useful to communicate with people. If you are interested in a non-standard graduate path, can it work here? Is Professor X around next year? While you would also have the opportunity to ask these questions if you were accepted and visited, knowing in advance that Professor X is not taking students next year could influence your decision on whether to apply. If you choose to send emails, put some thought into it: ask a good, insightful question that’s specific to the person with whom you are corresponding (i.e., not one that you could answer with a quick search), and write back if they respond.
  • The regular registration deadline for AAS is November 13th.

December: Send in your grad school applications.

  • Deadlines for international students may be earlier than for domestic students. The earliest deadline I have found is December 1st.
  • For US students, application deadlines are early December to mid-January. The earliest I have found is December 6th.
  • Remind your letter writers of upcoming deadlines.
  • The late registration deadline for AAS is December 18th.

January: Try not to stress out!

  • Some universities have application deadlines in January. The latest I have seen is January 15th.

February-March: Hear back from schools.

  • Some schools will contact you for additional information or for interviews. Skype interviews are not uncommon.
  • Decisions for astronomy graduate schools begin to come in early February, and continue to come throughout February and March.

March-April: School visits.

  • If you are accepted, someone from the University will be in touch about visiting the department. Some schools organize group visits, bringing together many prospective students at once, and others do individual visits.
  • If you are making multiple visits, try stringing a few of them together. It’s tiring but can save you a lot of travel. The schools you visit will usually be happy to split your travel costs as well.
  • Check out our tips for visiting graduate programs.

April 15th: Make your decision.

  • If you were wait-listed, or even if you were not accepted, and you receive a prestigious fellowship such as the NSF, you can politely let your contact at that school know that you received it.

by Elisabeth Newton at August 28, 2014 09:02 PM

Symmetrybreaking - Fermilab/SLAC

Particle physics to aid nuclear cleanup

Cosmic rays can help scientists do something no one else can: safely image the interior of the nuclear reactors at the Fukushima Daiichi plant.

A little after lunchtime on Friday, March 11, 2011, a magnitude 9.0 earthquake violently shook the Pacific Ocean off the northeast coast of Japan. The intense motion did more than jostle buildings and roads; it moved the entirety of Japan’s largest island, Honshu, a few meters east. At eastern Honshu’s Fukushima Daiichi power plant, one of the 25 largest nuclear power stations in the world, all operating units shut down automatically when the quake hit, showing no significant damage.

Then, almost an hour after the quake, a 50-foot tsunami wave traveling 70 miles per hour swept over the facility. The wave drenched Fukushima Daiichi and some of its sensitive electronics in seawater, dislodged large objects that pummeled buildings with the water’s ebb and flow, and—worst of all—cut its power supply and critically disabled the reactors’ power systems. Of the six reactor units at Fukushima Daiichi, three were in operation when the earthquake struck. Without power, operators had little ability to control or cool the reactor cores, resulting in hydrogen explosions that may have released radiation.

Today, radioactive water continues to leak from the damaged reactors at Fukushima Daiichi. It’s thought that the nuclear material in three reactor cores melted, but a full assessment of their condition is still too treacherous to carry out. Plant workers cannot safely enter the areas containing the remains of fuel rods, which are thought to have re-solidified wrapped around the floors and substructures of the reactor buildings. Robotic cameras can’t be sent inside the reactor cores because doing so could release a plume of radiation. And even if there were a safe way to get a camera inside, it wouldn’t last long in the high-radiation environment.

Without eyes on the inside, it’s difficult to know what exactly is inside each of the reactor cores.

Yet there is one thing that can—and regularly does—move safely through the reactor core: a particle called the muon. A heavy cousin of the electron, the muon is produced naturally when cosmic rays—mostly protons—from space charge through Earth’s atmosphere. Each collision generates a shower of other particles, including muons, that continually rain down over every square inch of Earth’s surface. In fact, more than 500 such muons have zipped through your body since you started reading this article.

An international team of physicists and engineers plans to use these particles to peek inside Fukushima Daiichi’s reactor cores. The team hopes that with muon-vision, the exact level of destruction inside—and consequently the best method of decommissioning the site—will become clear.

Los Alamos National Laboratory postdoc Elena Guardincerri, right, and undergraduate research assistant Shelby Fellows prepare a lead hemisphere inside a muon tomography machine.

Courtesy of: Los Alamos National Laboratory

An idea born from physics

Since the early 2000s, a small team at Los Alamos National Laboratory in New Mexico has developed technology that uses muons to examine fragile or otherwise inaccessible nuclear materials. Using much of the same technology employed by particle physicists and astrophysicists, they’ve successfully viewed the insides of aging nuclear weapons systems.

In the past decade, they have also worked with Decision Sciences Corporation, a technology company headquartered in Virginia, to commercialize a muon-detection system for scanning cargo crossing international borders, looking for smuggled nuclear material.

The process they use, called muon tomography, is similar to taking an X-ray, only it uses naturally produced muons. These particles don’t damage the imaged materials and, because they already stream through everything on Earth, they can be used to image even the most sensitive objects. Better yet, a huge amount of shielding is needed to stop muons from passing through an object, making it nearly impossible to hide from muon tomography.

“Everything around you is constantly being radiographed by muons,” says Christopher Morris, who leads the Los Alamos muon tomography team. “All you have to do is set some detectors above and below it, and measure the angles well enough to make a picture.”

By determining how muons scatter as they interact with electrons and nuclei within the item, the team’s software creates a three-dimensional picture of what’s inside.
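As an illustration only (not the team’s actual software), the core geometric step of muon scattering tomography can be sketched in a few lines of JavaScript: compare a muon’s direction before and after it traverses the object, and treat larger typical deflection angles as evidence of denser, higher-Z material along the path.

```javascript
// Toy sketch (not the Los Alamos code): the scattering angle between a
// muon's incoming and outgoing track segments. Dense, high-Z material
// such as reactor fuel produces larger typical angles, so accumulating
// these angles over many muon paths builds up a density map.
function scatteringAngle(dirIn, dirOut) {
  // dirIn, dirOut: 3-component unit direction vectors [dx, dy, dz]
  const dot = dirIn[0] * dirOut[0] + dirIn[1] * dirOut[1] + dirIn[2] * dirOut[2];
  // Clamp to guard against floating-point overshoot before acos.
  return Math.acos(Math.min(1, Math.max(-1, dot))); // radians
}

// A muon that passes straight through scatters by 0 radians:
const straight = scatteringAngle([0, 0, 1], [0, 0, 1]);
// One deflected by a dense object scatters by a measurably larger angle:
const deflected = scatteringAngle([0, 0, 1], [Math.sin(0.1), 0, Math.cos(0.1)]);
```

In a full reconstruction, many such angles are accumulated along the muons’ paths (for example, at each track pair’s point of closest approach) to build the three-dimensional density picture.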

And then came the tsunami

The idea of using the technology to image Fukushima Daiichi’s nuclear cores surfaced just days after the 2011 earthquake and tsunami. At the time, Morris was on vacation in Mexico. He recalls receiving urgent emails from Cas Milner and Haruo Miyadera, members of the Los Alamos muon team. Miyadera in particular was gravely concerned about the incident and suggested that their muon imaging techniques could be used to image the inside of the reactor cores.

“It was becoming clear that things were pretty dire over there, and that the cores had melted down,” Morris says. “While I was still in Mexico, we discussed the idea in great detail over email. We worked through the equations, and it became clear that our technique really could make a difference. We figured out that we would be able to see the reactor core.”

After concluding with Morris that it was feasible, Miyadera wrote a letter to Banri Kaieda, Japan’s then-Minister of Economy, Trade and Industry, suggesting the use of muon tomography at Fukushima Daiichi. The Minister replied positively, leading the Los Alamos group to begin feasibility studies that attracted the interest of Toshiba, which leads many of the recovery projects that require R&D efforts.

“Haruo has devoted his career to finding a way to make this happen,” Morris says. “Without him, it wouldn’t be happening at all—or at least not on this timescale.”

To prove the technology, the Los Alamos team shipped a demo detector system to a small, working nuclear reactor in a Toshiba facility in Kawasaki, Japan. There, they placed one detector on either side of the reactor core.

“When we analyzed our data we discovered that in addition to the fuel in the reactor core, they had put a few fuel bundles off to the side that we didn’t know about,” Morris says. “They were really impressed that not only could we image the core, but that we also found those bundles.”

Based on that successful test, Toshiba signed an agreement with Los Alamos and later with Decision Sciences to design and manufacture muon-detector components for use at Fukushima Daiichi. The Los Alamos team will develop the software for particle tracking and data analysis, while Decision Sciences will build major components of two large muon detectors and their associated structural elements. The Toshiba team, now led by Miyadera, will develop read-out electronics, manage the project and construct the detectors at the Fukushima Daiichi site.

“We’re confident in this technology,” says Konstantin Borozdin, a former member of Morris’ Los Alamos team who now manages Decision Sciences’ work on the Fukushima project. “We can absolutely make a difference.”

A test run at a small, working reactor in Kawasaki, Japan, successfully revealed the location of nuclear fuel. These images, compiled from muon data taken over the course of four weeks, show how the picture improves over time, as more muons pass through.

Courtesy of: Christopher Morris, Los Alamos National Laboratory

At Fukushima Daiichi

That said, the conditions at Fukushima Daiichi will be more challenging than any previous muon tomography experiment. Radiation levels are high, space is tight, and there are many unknowns.

“You’ve probably heard about Three Mile Island,” Miyadera says. “In that case, people were surprised at every turn because what they found inside was beyond their speculation. Many of their plans didn’t work because they didn’t understand what was inside.”

To get a better understanding of the area, the team plans to start by imaging the reactor pressure vessel—the innermost container that holds nuclear fuel—within the facility’s “Unit 2” reactor. Unit 2 was one of the three reactors in operation at the time of the earthquake. When the unit’s cooling systems failed after the tsunami, it’s thought that the fuel rods may have melted. At this point, the nuclear material may not even be fully contained within the pressure vessel, having leaked onto the floor below.

“Right now, what we know is based on speculation and indirect measurements,” Miyadera says. “We have no images at all inside the reactor pressure vessel.”

The team will begin by placing detectors on either side of the unit. At 50 square meters, the detectors will be larger than any previously built for muon tomography. One will need to be built on the second floor of a nearby building, transported through the building’s doorways piece by piece. The second will be installed in front of the reactor building’s exterior walls.

Those exterior walls, made of concrete 10 feet thick, offer their own challenge. Based on computer simulations run with the particle physics software GEANT4, the walls are expected to reduce the resolution to about 30 centimeters.

In addition, the team must also prepare for the high radiation levels present just outside of the reactor units.

Beginning next year, two detectors (shown here in green) on either side of Fukushima Daiichi’s Unit 2 will record the path of muons (represented by the orange line) that have passed through the reactor. By determining how the muons scatter between the detectors, scientists will compile the first picture of the damaged reactor’s interior.

Artwork by: Sandbox Studio, Chicago with Shawna X.

“The demonstration at Kawasaki wasn’t operating in a situation where the vessel had been breached and there were high levels of gamma radiation,” says Decision Sciences CEO Stanton Sloane. “That means that we’ll need to design different electronics that can withstand such conditions.”

To see through the gamma radiation and into the reactor, the team will need to develop new software logic and design a relay system that keeps the two detectors—which are farther apart than in any previous muon tomography experiment—synchronized.

“The biggest change will be adding a hardware trigger that downselects the amount of data”—essentially weeding out gamma background noise, Borozdin says. “This approach is similar to what’s used in high-energy physics experiments at places like CERN and SLAC.”

But unlike most experiments at CERN and SLAC, the muon detector will be installed in a radiation environment where workers need to severely limit their time.

“We’re still in the early stages,” Borozdin says. “We have a great concept and a clear path forward. But we still have a lot of work to do.”

Identifying a path forward

The system is expected to be in place by mid-2015, and the team hopes to have a good picture of Unit 2’s reactor pressure vessel by autumn of that year.

Full decommissioning of the plant is expected to take at least three decades. Even with that extended timeframe, everyone involved in the project agrees that seeing the interior of the reactor units as soon as possible is essential to an efficient and thorough decommissioning. Muons, with their ability to pass right through anything—even a melted reactor core—might just provide the insight needed to identify the best path forward.

“Once we make 3D images of what’s inside the reactor, others can design and build the special tools and robots that they’ll need to go in and take out what’s there,” Morris says. “If you know what’s there now, and how much is there, then you’re way ahead of the game.”


Like what you see? Sign up for a free subscription to symmetry!

by Kelen Tuttle at August 28, 2014 06:55 PM

Peter Coles - In the Dark

Round the Horn Antenna

The other day I was looking through my copy of Monthly Notices of the Royal Astronomical Society (which I buy for the dirty pictures).  Turning my attention to the personal columns, I discovered an advertisement for the Science & Technology Facilities Council which is, apparently, considering investing in new space missions related to astronomy and cosmology. Always eager to push back the frontiers of science, I hurried down to their address in Swindon to find out what was going on.


ME: (Knocks on door) Hello. Is there anyone there?

JULIAN: Oh hello! My name’s Julian, and this is my friend Sandy.

SANDY: Oooh hello! What can we do for you?

ME: Hello to you both. Is this Polaris House?

JULIAN: Not quite. Since we took over we changed the name…

ME: To?

SANDY: It’s now called Polari House…

JULIAN: ..on account of that’s the only language spoken around here.

ME: So you’re in charge of the British Space Programme then?

JULIAN:  Yes, owing to the budget, the national handbag isn’t as full as it used to be so now it’s just me and her.

SANDY: But never fear we’re both dab hands with thrusters.

JULIAN: Our motto is “You can vada about in any band, with a satellite run  by Jules and…

SANDY: …Sand.

ME: I heard that you’re looking for some input.

SANDY: Ooooh. He’s bold, in’e?

ME: I mean for your consultation exercise…

JULIAN: Oh yes. I forgot about that. Well I’m sure we’d welcome your contribution any time, ducky.

ME: Well I was wondering what you could tell me about Moonlite?

SANDY: You’ve come to the right place. She had an experience by Moonlight, didn’t you Jules?

JULIAN: Yes. Up the Acropolis…

ME: I mean the Space Mission “Moonlite”

SANDY: Oh, of course. Well, it’s only small but it’s very stimulating.


SANDY: Yes. It gets blasted off into space and whooshes off to the Moon…

JULIAN: …the backside thereof…

SANDY: ..and when it gets there it shoves these probes in to see what happens.

ME: Why?

SANDY: Why not?

ME: Seems a bit pointless to me.

JULIAN: There’s no pleasing some people is there?

ME: Haven’t you got anything more impressive?

SANDY: Like what?

ME:  Maybe something that goes a bit further out? Mars, perhaps?

JULIAN: Well the French have this plan to send some great butch omi to troll around on Mars but we haven’t got the metzas so we have to satisfy ourselves with something a bit more bijou…

SANDY: Hmm…You can say that again.

JULIAN: You don’t have to be big to be bona.

SANDY: Anyway, we had our shot at Mars and it went willets up.

ME: Oh yes, I remember that thing named after a dog.

JULIAN: That’s right. Poodle.

ME: Do you think a man will ever get as far as Uranus?


SANDY: Well I’ll tell you what. I’ll show you something that can vada out to the very edge of the Universe!

ME: That sounds exciting.

JULIAN: I’ll try to get it up right now.

ME: Well…er…

JULIAN: I mean on the computer

ME: I say, that’s an impressive piece of equipment

JULIAN: Thank you

SANDY: Oh don’t encourage her…

ME: I meant the computer.

JULIAN: Yes, it’s a 14″ console.

SANDY:  And, believe me, 14 inches will console anyone!

JULIAN; There you are. Look at that.

ME: It looks very impressive. What is it?

SANDY: This is an experiment designed to charper for the heat of the Big Bang.


SANDY: The Americans launched WMAP and the Europeans had PLANCK. We’ve merged the two ideas and have called it ….PLMAP.

ME: Wouldn’t it have been better if you’d made the name the other way around? I mean with the first bit of WMAP and the second bit of Planck. On second thoughts maybe not..

JULIAN: It’s a little down-market but we have high hopes.

SANDY: Yes, Planck had two instruments called HFI and LFI. We couldn’t afford two so we made do with one.

JULIAN: It’s called MFI. That’s why it’s a bit naff.

ME: I see. What are these two round things either side?

SANDY: They’re the bolometers…

ME: What is this this long thing in between pointing up? And why is it leaning to one side?

SANDY: Well that’s not unusual in my experience …

JULIAN:  Shush. It’s an off-axis Gregorian telescope if you must know.

ME: And what about this round the back?

SANDY: That’s your actual dish. It’s very receptive, if you know what I mean.

ME: What’s that inside?

JULIAN: That’s a horn antenna. We didn’t make that ourselves. We had to get it from elsewhere.

ME: So who gave you the horn?

SANDY: That’s for us to know and you to find out!

ME: So what does it all do?

JULIAN: It’s designed to make a map of what George Smoot called “The Eek of God”.

ME: Can it do polarization?

JULIAN: But of course! We polari-ize everything!


JULIAN: Cheeky!

SANDY: Of course. We’re partial to a nice lally too!

JULIAN: But seriously, it’s fabulosa…

SANDY: …Or it would be if someone hadn’t neglected to read the small print.

ME: Why? Is there a problem?

JULIAN: Well, frankly, yes. We ran out of money.

SANDY: It was only when we got it out the box we realised.

ME: What?

JULIAN & SANDY: Batteries Not Included!

With apologies to Barry Took and Marty Feldman, who wrote the original Julian and Sandy sketches performed by Hugh Paddick (Julian) and Kenneth Williams (Sandy) for the radio show Round the Horne. Here’s an example of the real thing:







by telescoper at August 28, 2014 08:26 AM

John Baez - Azimuth

The Stochastic Resonance Program (Part 2)

guest post by David Tanzer

Last time we introduced the concept of stochastic resonance. Briefly, it’s a way that noise can amplify a signal, by giving an extra nudge that helps a system receiving that signal make the jump from one state to another. Today we’ll describe a program that demonstrates this concept. But first, check it out:

Stochastic resonance.

No installation required! It runs as a web page which allows you to set the parameters of the model and observe the resulting output signal. It is responsive because it runs right in your browser, as JavaScript.

There are sliders for controlling the amounts of sine wave and noise involved in the mix. As explained in the previous article, when we set the wave to a level not quite sufficient to cause the system to oscillate between states, and we add in the right amount of noise, stochastic resonance should kick in:

The program implements a mathematical model that runs in discrete time. It has two stable states, and is driven by a combination of a sine forcing function and a noise source.

The code builds on top of a library called JSXGraph, which supports function plotting, interactive graphics, and data visualization.

Running the program

If you haven’t already, go try the program. On one plot it shows a sine wave, called the forcing signal, and a chaotic time-series, called the output signal.

There are four sliders, which we’ll call Amplitude, Frequency, Noise and Sample-Path.

• The Amplitude and Frequency sliders control the sine wave. Try them out.

• The output signal depends, in a complex way, on the sine wave. Vary Amplitude and Frequency to see how they affect the output signal.

• The amount of randomization involved in the process is controlled by the Noise slider. Verify this.

• Change the Sample-Path slider to alter the sequence of random numbers that are fed to the process. This will cause a different instance of the process to be displayed.

Now try to get stochastic resonance to kick in…

Going to the source

Time to look at the blueprints. It’s easy.

• Open the model web page. The code is now running in your browser.

• While there, run your browser’s view-source function. For Firefox on the Mac, click Apple-U. For Firefox on the PC, click Ctrl-U.

• You should see the html file for the web page itself.

• See the “script” directives at the head of this file. Each one refers to a JavaScript program on the internet. When the browser sees one, the program is fetched and loaded into the browser’s internal JavaScript interpreter. Here are the directives:

<script src=

<script src=

<script src="./StochasticResonanceEuler.js"></script>

<script src="./normals.js"></script>

The first one loads MathJax, which is a formula-rendering engine. Next comes JSXGraph, a library that provides support for plotting and interactive graphics. Next, StochasticResonanceEuler.js is the main code for the model, and finally, normals.js provides random numbers.

• In the source window, click on the link for StochasticResonanceEuler.js — and you’ve reached the source!

Anatomy of the program

The program implements a stochastic difference equation, which defines the changes in the output signal as a function of its current value and a random noise value.

It consists of the following components:

1. Interactive controls to set parameters

2. Plot of the forcing signal

3. Plot of the output signal

4. A function that defines a particular SDE

5. A simulation loop, which renders the output signal.

The program contains seven functions. The top-level function is initCharts. It dispatches to initControls, which builds the sliders, and initSrBoard, which builds the curve objects for the forcing function and the output signal (called “position curve” in the program). Each curve object is assigned a function that computes the (t,x) values for the time series, which gets called whenever the input parameters change. The function that is assigned to the forcing curve computes the sine wave, and reads the amplitude and frequency values from the sliders.

The calculation method for the output signal is set to the function mkSrPlot, which performs the simulation. It begins by defining a function for the deterministic part of the derivative:

deriv = Deriv(t,x) = SineCurve(t) + BiStable(x),

Then it constructs a “stepper” function, through the call Euler(deriv, tStep). A stepper function maps the current point (t,x) and a noise sample to the next point (t’,x’). The Euler stepper maps

((t,x), noiseSample) to (t + tStep, x + tStep * Deriv(t,x) + noiseSample).
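As a rough sketch (the actual StochasticResonanceEuler.js may differ in detail, and the exact form of the bistable polynomial is an assumption here), the stepper construction looks like this:

```javascript
// Minimal sketch of the stepper construction described above.
// Euler(deriv, tStep) returns a function mapping ([t, x], noiseSample)
// to the next point [t', x'] via a forward Euler step.
function Euler(deriv, tStep) {
  return function (point, noiseSample) {
    const t = point[0], x = point[1];
    return [t + tStep, x + tStep * deriv(t, x) + noiseSample];
  };
}

// The deterministic drift: a sine forcing term plus a double-well
// ("bistable") term; x - x^3 is the classic choice, assumed here.
const sineCurve = (t) => 0.3 * Math.sin(t);
const biStable = (x) => x - x * x * x;
const deriv = (t, x) => sineCurve(t) + biStable(x);

const step = Euler(deriv, 0.1);
const next = step([0, 1], 0); // one noiseless step from (t=0, x=1)
```

At x = 1, the bottom of one well, the drift vanishes, so a noiseless step advances t but leaves x alone; it takes either the forcing wave or a noise kick to push the system toward the other well.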

The simulation loop is then performed by the function sdeLoop, which is given:

• The stepper function

• The noise amplitude (“dither”)

• The initial point (t0,x0)

• A randomization offset

• The number of points to generate

The current point is initialized to (t0,x0), and then the stepper is repeatedly applied to the current point and the current noise sample. The output returned is the sequence of (t,x) values.

The noise samples are normally distributed random numbers stored in an array. They get scaled by the noise amplitude when they are used. The array contains more values than are needed. By changing the starting point in the array, different instances of the process are obtained.
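Putting the pieces together, a minimal sketch of the loop just described might look as follows; the function and parameter names follow the text above, but the real program’s signatures may differ, and passing the noise array in explicitly is an assumption of this sketch:

```javascript
// Sketch of the simulation loop described above: repeatedly apply the
// stepper to the current point and the current (scaled) noise sample.
function sdeLoop(stepper, dither, t0, x0, offset, numPoints, noiseArray) {
  let point = [t0, x0];
  const path = [point];
  for (let i = 0; i < numPoints; i++) {
    // Scale the precomputed unit-variance sample by the noise amplitude;
    // the offset selects a different instance of the process.
    const noiseSample = dither * noiseArray[(offset + i) % noiseArray.length];
    point = stepper(point, noiseSample);
    path.push(point);
  }
  return path; // the sequence of [t, x] values
}

// A trivial stepper for demonstration: t advances, x collects the noise.
const stepper = (p, n) => [p[0] + 0.1, p[1] + n];
const path = sdeLoop(stepper, 0, 0, 2, 0, 5, [0.5, -0.5]);
```

With dither set to zero the noise drops out entirely, which is a handy sanity check: the output should then be the purely deterministic trajectory.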

Making your own version of the program

Now let’s tweak the program to do new things.

First let’s make a local copy of the program on your machine, and get it to run there. Make a directory, say /Users/macbookpro/stochres. Open the html file in the view source window. Paste it into the file /Users/macbookpro/stochres/stochres.html. Next, in the view source window, click on the link to StochasticResonanceEuler.js. Paste the text into /Users/macbookpro/stochres/StochasticResonanceEuler.js.

Now point your browser to the file, with the URL file:///Users/macbookpro/stochres/stochres.html. To prove that you’re really executing the local copy, make a minor edit to the html text, and check that it shows up when you reload the page. Then make a minor edit to StochasticResonanceEuler.js, say by changing the label text on the slider from “forcing function” to “forcing signal.”

Programming exercises

Now let’s get warmed up with some bite-sized programming exercises.

1. Change the color of the sine wave.

2. Change the exponent in the bistable polynomial to values other than 2, to see how this affects the output.

3. Add an integer-valued slider to control this exponent.

4. Modify the program to perform two runs of the process, and show the output signals in different colors.

5. Modify it to perform ten runs, and change the output signal to display the point-wise average of these ten runs.

6. Add an input slider to control the number of runs.

7. Add another plot, which shows the standard deviation of the output signals, at each point in time.

8. Replace the precomputed array of normally distributed random numbers with a run-time computation that uses a random number generator. Use the Sample-Path slider to seed the random number generator.

9. When the sliders are moved, explain the flow of events that causes the recalculation to take place.

A small research project

What is the impact of the frequency of the forcing signal on its transmission through stochastic resonance?

• Make a hypothesis about the relationship.

• Check your hypothesis by varying the Frequency slider.

• Write a function to measure the strength of the output signal at the forcing frequency. Let sinwave be a discretely sampled sine wave at the forcing frequency, and coswave be a discretely sampled cosine wave. Let sindot = the dot product of sinwave and the output signal, and similarly for cosdot. Then the power measure is sindot² + cosdot².
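Here is one way the power measure just described could be written; the function name and the uniform-sampling assumption are mine, not part of the original program:

```javascript
// Sketch of the power measure described above: project the output signal
// onto sine and cosine waves at the forcing frequency and sum the
// squared dot products. Assumes a uniformly sampled signal.
function powerAtFrequency(signal, freq, tStep) {
  let sindot = 0, cosdot = 0;
  for (let i = 0; i < signal.length; i++) {
    const t = i * tStep;
    sindot += Math.sin(2 * Math.PI * freq * t) * signal[i];
    cosdot += Math.cos(2 * Math.PI * freq * t) * signal[i];
  }
  return sindot * sindot + cosdot * cosdot;
}

// A pure sine at the probed frequency yields a large power...
const n = 1000, tStep = 0.01, freq = 2;
const onSig = Array.from({length: n}, (_, i) => Math.sin(2 * Math.PI * freq * i * tStep));
const onPower = powerAtFrequency(onSig, freq, tStep);
// ...while a sine at an unrelated frequency yields nearly zero.
const offSig = Array.from({length: n}, (_, i) => Math.sin(2 * Math.PI * 7.5 * i * tStep));
const offPower = powerAtFrequency(offSig, freq, tStep);
```

Using both the sine and cosine projections makes the measure insensitive to the phase lag between the forcing and the response, which matters because the output of the stochastic process won’t be phase-locked to the input.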

• Modify the program to perform N trials at each frequency over some specified range of frequency, and measure the average power over all the N trials. Plot the power as a function of frequency.

• The above plot required you to fix a wave amplitude and noise level. Choose five different noise levels, and plot the five curves in one figure. Choose your noise levels in order to explore the range of qualitative behaviors.

• Produce several versions of this five-curve plot, one for each sine amplitude. Again, choose your amplitudes in order to explore the range of qualitative behaviors.

by John Baez at August 28, 2014 05:44 AM

August 27, 2014

Symmetrybreaking - Fermilab/SLAC

First measurement of sun’s real-time energy

The Borexino neutrino experiment in Italy found that the sun releases the same amount of energy today as it did 100,000 years ago.

For the first time, scientists have measured solar energy at the moment of its generation.

Physicists on the Borexino neutrino experiment at Italian physics laboratory INFN in Gran Sasso announced in Nature that they have detected neutrinos produced deep inside the sun.

Neutrinos, which constantly stream through us, interact very rarely with other matter. When created in nuclear reactions inside the sun, they fly through dense solar matter in seconds and can reach the Earth in eight minutes.

The sun produces energy in a sequence of nuclear reactions that convert hydrogen to helium. This is thought to begin with the fusion of two protons, a process that releases a neutrino. Neutrinos produced in this process have much lower energies than other neutrinos released by the sun, making them difficult to detect. This is the first time an experiment has measured neutrinos from this step in the process.

Scientists previously measured the energy released by the sun by studying photons, particles of light. But photons take a considerably longer time to escape our solar system’s main star—more than 100,000 years.

Borexino scientists found measurements using solar neutrinos matched previous measurements using photons, revealing that the sun releases as much energy today as it did 100,000 years ago.

“In short, this proves that the sun is an enormous nuclear fusion plant,” said Gianpaolo Bellini, one of the founders of the Borexino experiment, in a press release.

The Borexino experiment is a collaboration between Italy, Germany, France, Poland, the United States and Russia. It will continue to take data for at least four more years.


Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at August 27, 2014 10:29 PM

astrobites - astro-ph reader's digest

What’s in a Heartbeat?

The Kepler satellite is really good at finding things that go bump in the night. Even though the spacecraft was crippled and has been repurposed as K2, there are tons of data to analyze. While Kepler’s specialty is hunting for exoplanets, it also finds lots of interesting stars along the way. As a rule, the more we look at stars imaged by Kepler, the more oddballs we find. This paper by Thompson et al. describes a new category of eccentric binary stars discovered by Kepler and dubs them “heartbeat stars.”


Example light curves of heartbeat stars found in the Kepler data. The light curves look somewhat similar to an echocardiogram, which prompted the name.

As you can see, heartbeat star light curves somewhat resemble an echocardiogram, which is what got them their name. But what is physically causing this odd pattern?

In a traditional eclipsing binary star system, we see a dip in the light curve when one star passes in front of the other. Heartbeat stars, on the other hand, do not always eclipse. Instead, they have extremely elliptical orbits, which causes the two stars to spend a short amount of time very close together as they race past each other and a long amount of time farther apart. The main dip and subsequent peak of the “heartbeat” signature occurs when the stars are closest together and tidal forces are incredibly strong.

When tidal forces affect a star, they distort its shape and cause brightness variations. If you visualize two perfectly spherical stars orbiting each other, the out-of-eclipse luminosity should be constant. But if the two stars slightly distort each another with tidal forces, they become somewhat squished and more like ellipsoids than spheres. As a result, you see more star surface area during one part of the orbit than another, and that causes a slight increase in overall brightness.

Now, imagine dialing this effect up by drastically increasing the orbital eccentricity. Instead of happily orbiting in circles with constant velocity, the two stars spend most of their time far apart, and a few harrowing hours racing past each other. Or, to put it another way: hours and hours of boredom punctuated by moments of sheer terror. This is a heartbeat star.

Tidally distorted light curves were first theorized in 1995, but the first well-studied example was published in 2011 using Kepler light curves. This paper presents 17 new systems on top of four that were previously known, and successfully models how the tidal distortions contribute to the light curves. The authors also measure radial velocity points for some systems and show that these are generally consistent with the orbital solutions.

Observed light curves (red) and model radial velocity curves (blue) for four heartbeat stars. The black points are observed radial velocities. The x-axis is Orbital Phase, which means that both curves are “folded” on the orbital period of each system. (Phase = 0.5 is the same physical orientation as Phase = 1.5.) The blue radial velocity model is not fit to the black points; rather, it is calculated from a model fit to the light curve and shown to be generally consistent with observations.


The discovery of so many heartbeat stars raises an important question: how can they exist at all? Over time, the orbits of relatively short-period systems (it only takes these stars several days to fully orbit each other!) should become less eccentric and more circular. But if this always happened, we wouldn’t see any “heartbeats” at all. One possible explanation could be the presence of a so-called third body—a faint star or planet that perturbs the orbit and keeps it eccentric. But this is not fully understood.

A smaller effect seen in many heartbeat star light curves is pulsations. These are the little wiggles in the red light curves above—one or both stars are pulsing in and out, or “breathing,” as they orbit (see a similar situation explored in this astrobite). The authors find that the majority of the heartbeat star pulsations are some harmonic of the orbital period. This suggests that tidal distortions may be causing the pulsations, and is a very interesting topic for follow-up studies.

by Meredith Rawls at August 27, 2014 07:00 PM

Lubos Motl - string vacua and pheno

Kaggle Higgs contest: the solution file for everyone
Well, an approximate one

Have you ever searched for the solution file for the Higgs Kaggle contest? Have you ever asked why the organizers don't just publish it so that everyone is smarter? ;-)

Did you ever want to be able to estimate your submission's score without sending it to the Kaggle server? Have you ever been confused by the normalization of the weights?

Because I just got a permission from my teammate and it is allowed for the contestants to be generous and share their wisdom with the whole Internet and all the competitors, I just decided to help everyone – and, perhaps, to re-energize the contest a little bit.

The solution file you are going to get isn't quite the real one – it isn't the file that only the organizers possess (and hopefully protect with strict confidentiality and a strong enough password). But the file below is "rather close" to the right one.

You unzip the file and extract the CSV file that is inside the archive. This file will look as follows:
The silly unrounded figures are due to a Mathematica bug – they have appeared despite the command Round[x,0.0001] etc.

At any rate, the file tells you which of the 550,000 events are "signal" and which of them are "background", and it tells you the weights, too!

You are supposed to use the whole file to rate your submission.

Take your candidate submission and look at all the events you labeled "s". Compute \(s\) as the sum of the "fake weights" in my solution file over the events you labeled "s" that actually are "s" (true positives). Compute \(b\) as the sum of the "fake weights" over the events you labeled "s" that are identified as "b" in my solution file (false positives). You are supposed to use all 550,000 events in the solution file (even though only 100,000 are used for the preliminary leaderboard and the remaining 450,000 will be used to determine the winners at the end).

Now, compute\[

{\rm AMS}_{\rm approx} = \frac{s}{\sqrt{b}}

\] or, more precisely, \({\rm AMS}=\dots\)\[

= \sqrt{ 2\zav{ (s+b+10)\log\zav{ 1+\frac{s}{b+10} } -s } }

\] and you will get a pretty good estimate of the score that your submission will produce at the Kaggle server. For example, perform this procedure for a random submission file with 30% of entries identified as "s" and 70% labeled as "b", and you will get an AMS score slightly above 0.58, which you may find repeatedly in the leaderboard.
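To make the recipe concrete, here is a minimal Python sketch of the scoring procedure. The column names EventId, Class and Weight are my assumptions about the CSV layout, not guaranteed by the file above:

```python
import csv
import math

def ams(s, b, b_reg=10.0):
    """The AMS formula quoted above, with the b_reg = 10 regularization term."""
    return math.sqrt(2.0 * ((s + b + b_reg) * math.log(1.0 + s / (b + b_reg)) - s))

def estimate_score(solution_path, submission_path):
    """Sum the solution-file weights of true and false positives, then apply AMS."""
    truth = {}
    with open(solution_path, newline="") as f:
        for row in csv.DictReader(f):
            truth[row["EventId"]] = (row["Class"], float(row["Weight"]))
    s = b = 0.0
    with open(submission_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["Class"] != "s":
                continue  # only events you labeled "s" contribute
            true_class, weight = truth[row["EventId"]]
            if true_class == "s":
                s += weight  # true positive
            else:
                b += weight  # false positive
    return ams(s, b)
```

The simpler \(s/\sqrt{b}\) estimate is recovered from the full formula in the limit where \(b\) is much larger than both \(s\) and the regularization term 10.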

Invent a better submission and you may get closer to 3.8 – the ballpark of the scores achieved by the three or four semigods at the top of the leaderboard. ;-) I promise you that the file was created from a submission whose score earns the bronze medal on the preliminary leaderboard, which for 95+ percent of the competitors is almost equivalent to the "nearly perfect submission" they are dreaming about now, before their self-confidence reaches the heavens.

Finally, here is the OneDrive URL of the folder where you may download the zipped file:
Higgs Kaggle approximate solution file (CSV)
Happy kaggling and higgsing. Incidentally, Microsoft's OneDrive is a really handy place to get tens of gigabytes for free. I am particularly satisfied with how nicely and quickly it synchronizes photographs between my Lumia 520 that takes them and the laptop.

You may create your OneDrive account, too. You will need to register a Microsoft (Hotmail-like) account if you don't have one.

by Luboš Motl at August 27, 2014 05:54 PM

The Great Beyond - Nature blog

Japanese lab at centre of stem-cell scandal to be reformed

The Japanese research centre where one researcher was found guilty of scientific misconduct and another died in an apparent suicide this year will be renamed and reduced in size, the institute announced today.

The RIKEN Centre for Developmental Biology (CDB) in Kobe is renowned as a world-leading organization for studying stem cells. But its reputation has been severely damaged by this year’s scandal: CDB biochemist Haruko Obokata was found guilty of scientific misconduct in work that claimed an easy way to make embryonic-like stem cells, but which no-one has been able to replicate. In July, her two Nature papers published on the technique, called stimulus-triggered acquisition of pluripotency, or STAP, were retracted. In August, Yoshiki Sasai, a senior co-author of the papers and a pioneering researcher at the CDB, was found dead: he left a suicide note that blamed the storm of media attention around the retraction of the two papers.

An independent RIKEN reform committee had recommended in June that the CDB be entirely dismantled. But that call led to a groundswell of support for the centre from stem-cell researchers around the world. They argued that one case of research misconduct did not mean an entire institute should be closed, even if a new centre replaced it. The committee’s proposals for the CDB “may even be more damaging than the incident itself”, noted Maria Leptin, a molecular biologist and director of the European Molecular Biology Organization in Heidelberg, Germany.

On 27 August, RIKEN said that the centre would be renamed, and its number of laboratories cut. It was not clear how many of its 540 staff would lose their jobs, if any. Masatoshi Takeichi, who has led the CDB since it was founded 14 years ago, will step down.

RIKEN also revealed in an interim report that its attempt to replicate the stem-cell findings has been unsuccessful. Hitoshi Niwa, who is leading the replication effort and was a co-author on the original STAP papers, said he hadn't yet managed to generate embryonic-like stem cells after treating mouse spleen cells with acid. The final report is expected by March. Obokata is also working on a replication attempt.

by Richard Van Noorden at August 27, 2014 05:47 PM

arXiv blog

The Search For Extraterrestrial Civilizations' Waste Energy

If they’re out there, other advanced civilisations should be emitting waste energy like hot exhaust. And that provides a good way to spot them, argue SETI experts.

Back in 1974, the American astronomer Michael Hart published a paper in the Quarterly Journal of the Royal Astronomical Society entitled “An Explanation For The Absence Of Extraterrestrials On Earth”. In it, he pointed out that there are no intelligent beings from outer space on Earth now, a statement that he famously referred to as Fact A.

August 27, 2014 02:21 PM

August 26, 2014

Geraint Lewis - Cosmic Horizons

Sailing under the Magellanic Clouds: A DECam View of the Carina Dwarf

Where did that month go? Winter is almost over and spring will be breaking, and my backlog of papers to comment on is getting longer and longer.

So a quick post this morning on a recent cool paper by PhD student Brendan McMonigal, called "Sailing under the Magellanic Clouds: A DECam View of the Carina Dwarf". The title tells a lot of the story, but it all starts with a telescope with a big camera.

The camera is DECam, the Dark Energy Camera located on the 4m CTIO telescope in Chile. This is what it looks like:
It's not one CCD, but loads of them butted together, allowing us to image a large chunk of sky. Over the next few years, this amazing camera will enable the Dark Energy Survey, which will hopefully reveal what is going on in the dark sector of the Universe, a place where Australia will play a key role through OzDES.

But one of the cool things is that we can use this superb facility to look at other things, and this is precisely what Brendan did. The target was the Carina Dwarf Galaxy. Want to see this impressive beast? Here it is:
See it? It is there, but it's a dwarf galaxy, and so is quite faint. Still can't see it? Bring on DECam. We pointed DECam at Carina and took a picture. Well, a few. What did we see?
So, as you can see, we took 5 fields (in two colours) centred on the Carina dwarf. And with the superb properties of the camera, the dwarf galaxy nicely pops out.

But science is not simply about taking pictures, so we constructed colour-magnitude diagrams for each of the fields. Here's what we see (and thanks Brendan for constructing the handy key for the features in the bottom-right corner).
All that stuff labelled MW is stars in our own Milky Way, which is blooming contamination getting in our way. The blob at the bottom is where we are hitting the observational limits of the camera, and can't really tell faint stars from galaxies.

The other bits labelled Young, Intermediate and Old tell us that Carina has had several bursts of star-formation during its life, some recent, some a little while ago, and some long ago (to express it in scientific terms), while the RGB is the Red Giant Branch, RC is the Red Clump and HB is the Horizontal Branch.

We can make maps of each of the Young, Intermediate and Old population stars, and what we see is this;
The Young and Intermediate populations appear to be quite elliptical and smooth, but the Old population appears to be a little ragged. This suggests that long ago Carina was shaken up through gravitational shocks when it interacted with the larger galaxies of the Local Group, but the dynamics of these interactions are poorly understood.

But there is more. Look back up there at the Colour-Magnitude Diagram schematic and there is a little yellow wedge labelled LMC, the Large Magellanic Cloud; what's that doing there?

What do we see if we look at just those stars? Here's what we see.
So, they are not all over the place, but are located only in the southern field, overlapping with Carina itself (and making it difficult to separate the Old Carina population from the Magellanic Cloud stars).

But still, what are they doing there? Here's a rough map of the nearby galaxies.
As we can see, from the view inside the Milky Way, Carina and the LMC appear (very roughly) in the same patch of sky but are at completely different distances. This means that the Large Magellanic Cloud must have a large halo of stars surrounding it, possibly puffed up through interactions with the Small Magellanic Cloud as they orbit together, and through their mutual interaction with the Milky Way.

It's a mess, a glorious, horrendous, dynamically complicated mess. Wonderful!

Well done Brendan!

Sailing under the Magellanic Clouds: A DECam View of the Carina Dwarf

We present deep optical photometry from the DECam imager on the 4m Blanco telescope of over 12 deg² around the Carina dwarf spheroidal, with complete coverage out to 1 degree and partial coverage extending out to 2.6 degrees. Using a Poisson-based matched filter analysis to identify stars from each of the three main stellar populations, old, intermediate, and young, we confirm the previously identified radial age gradient, distance, tidal radius, stellar radial profiles, relative stellar population sizes, ellipticity, and position angle. We find an angular offset between the three main elliptical populations of Carina, and find only tentative evidence for tidal debris, suggesting that past tidal interactions could not have significantly influenced the Carina dwarf. We detect stars in the vicinity of, but distinct to, the Carina dwarf, and measure their distance to be 46 ± 2 kpc. We determine this population to be part of the halo of the Large Magellanic Cloud at an angular radius of over 20 degrees. Due to overlap in colour-magnitude space with Magellanic stars, previously detected tidal features in the old population of Carina are likely weaker than previously thought.

by Cusp at August 26, 2014 11:20 PM

Quantum Diaries

Do we live in a 2-D hologram?

This Fermilab press release was published on Aug. 26, 2014.

A Fermilab scientist works on the laser beams at the heart of the Holometer experiment. The Holometer will use twin laser interferometers to test whether the universe is a 2-D hologram. Photo: Fermilab

A unique experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory called the Holometer has started collecting data that will answer some mind-bending questions about our universe – including whether we live in a hologram.

Much like characters on a television show would not know that their seemingly 3-D world exists only on a 2-D screen, we could be clueless that our 3-D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions.

Get close enough to your TV screen and you’ll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe’s information may be contained in the same way and that the natural “pixel size” of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.
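That "10 trillion trillion" factor is easy to sanity-check; the atomic size below is an assumed round number for a typical atom, not a figure from the press release:

```python
planck_length = 1.616e-35  # metres (CODATA value, rounded)
atom_size = 1e-10          # metres -- assumed typical atomic diameter

ratio = atom_size / planck_length
print(f"{ratio:.1e}")  # ~6.2e+24, i.e. roughly ten trillion trillion
```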

“We want to find out whether space-time is a quantum system just like matter is,” said Craig Hogan, director of Fermilab’s Center for Particle Astrophysics and the developer of the holographic noise theory. “If we see something, it will completely change ideas about space we’ve used for thousands of years.”

Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2-D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. The same way that matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.

Essentially, the experiment probes the limits of the universe’s ability to store information. If there is a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location – even in principle. The instrument testing these limits is Fermilab’s Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.

Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way – being carried along on a jitter of space itself.
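As an idealized illustration of why recombined beams turn motion into brightness changes — a textbook two-beam interference model with an assumed wavelength, not the Holometer's actual analysis chain:

```python
import math

WAVELENGTH = 1064e-9  # metres; an assumed infrared laser wavelength

def output_brightness(arm_length_difference, i_max=1.0):
    """Two recombined beams interfere: brightness varies as cos^2 of the
    phase difference accumulated over the extra path length."""
    phase = 2.0 * math.pi * arm_length_difference / WAVELENGTH
    return i_max * math.cos(phase) ** 2

print(output_brightness(0.0))               # equal arms: fully bright
print(output_brightness(WAVELENGTH / 4.0))  # quarter-wave offset: dark
```

A tiny jitter in the beam splitter's position shifts the phase and therefore the output brightness, which is the fluctuation the researchers analyze.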

“Holographic noise” is expected to be present at all frequencies, but the scientists’ challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high – millions of cycles per second – that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.

“If we find a noise we can’t get rid of, we might be detecting something fundamental about nature – a noise that is intrinsic to space-time,” said Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. “It’s an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works.”

The Holometer experiment, funded by the U.S. Department of Energy Office of Science and other sources, is expected to gather data over the coming year.

The Holometer team comprises 21 scientists and students from Fermilab, the Massachusetts Institute of Technology, the University of Chicago and the University of Michigan. For more information about the experiment, visit

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit


by Fermilab at August 26, 2014 03:22 PM

Tommaso Dorigo - Scientificblogging

The Quote Of The Week - Neutrino Mass Hierarchy and Matter Effects
Interaction with matter changes the neutrino mixing and effective mass splitting in a way that depends on the mass hierarchy. Consequently, results of oscillations and flavor conversion are different for the two hierarchies.
Sensitivity to the mass hierarchy appears whenever the matter effect on the 1-3 mixing and mass splitting becomes substantial. This happens in supernovae in large energy range, and in the matter of the Earth.
The Earth density profile is a multi-layer medium where the resonance enhancement of oscillations as well as the parametric enhancement of oscillations occur. The enhancement is realized in neutrino (antineutrino) channels for normal (inverted) mass hierarchy.

read more

by Tommaso Dorigo at August 26, 2014 01:37 PM

Symmetrybreaking - Fermilab/SLAC

Holographic universe experiment begins

The Holometer experiment will test whether our universe is coded into 2-D packets many trillion times smaller than an atom.

A unique experiment at Fermi National Accelerator Laboratory has started collecting data that will answer some mind-bending questions about our universe—including whether we live in a hologram.

Much like characters on a television show would not know that their seemingly 3-D world exists only on a 2-D screen, we could be clueless that our 3-D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions.

Get close enough to your TV screen and you’ll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe’s information may be contained in the same way and that the natural “pixel size” of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.

“We want to find out whether space-time is a quantum system just like matter is,” says Craig Hogan, director of Fermilab’s Center for Particle Astrophysics and the developer of the holographic noise theory. “If we see something, it will completely change ideas about space we’ve used for thousands of years.”

Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2-D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. The same way that matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.

Essentially, the experiment probes the limits of the universe’s ability to store information. If there is a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location—even in principle. The instrument testing these limits is Fermilab’s Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.

Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a 1-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way, being carried along on a jitter of space itself.

“Holographic noise” is expected to be present at all frequencies, but the scientists’ challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high—millions of cycles per second—that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.

“If we find a noise we can’t get rid of, we might be detecting something fundamental about nature—a noise that is intrinsic to space-time,” says Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. “It’s an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works.”

The Holometer experiment, funded by the US Department of Energy Office of Science and other sources, is expected to gather data over the coming year.

Fermilab published a version of this article as a press release.



August 26, 2014 01:00 PM

August 25, 2014

astrobites - astro-ph reader's digest

Cosmic rays on the sky – where do they come from?

The Earth is constantly reached by highly energetic nuclei from our Galaxy and beyond that we call "cosmic rays". When these nuclei, mostly protons, interact with our atmosphere, they produce showers of particles that can be detected by balloon experiments or by experiments on the ground. The origin of these cosmic rays is not well understood. They span such a large range of energies (from 10⁸ eV to 10²⁰ eV, roughly) that it is hard to think that they could have a common origin. The lower energy cosmic rays (below ~10¹⁷ eV) are thought to arise from the remnants of supernova explosions, while the more energetic ones are suspected to come from active galactic nuclei, gamma-ray bursts and quasars in other galaxies.

In general, it is hard to pinpoint the direction of the sky from which a cosmic ray is coming. The typical distance (or gyroradius) that a cosmic ray can travel before changing its direction due to inhomogeneities in the magnetic field of our Galaxy is 1 light-day. Any source of cosmic rays that we can think of (like a supernova remnant) is much farther away. For example, the Vela supernova remnant is 800 light years away. Hence, the initial direction of the cosmic rays should be washed out before they reach us. However, scientists are puzzled: several experiments have reported an excess of TeV (10¹² eV) cosmic rays coming from certain directions in the sky.
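The light-day figure can be checked with a back-of-the-envelope gyroradius estimate; the proton energy and Galactic field strength below are assumed typical values, not numbers from the paper:

```python
# Gyroradius of a relativistic proton: r = E / (q * B * c), which in
# convenient units becomes r[cm] ~ E[eV] / (300 * B[gauss]).
energy_eV = 1e12       # a 1 TeV proton
B_gauss = 3e-6         # assumed ~3 microgauss Galactic magnetic field

r_cm = energy_eV / (300.0 * B_gauss)
light_day_cm = 2.59e15
print(r_cm / light_day_cm)  # ~0.4 light-days: TeV arrival directions scramble
```

Since every plausible source lies many light-years away, a TeV proton's direction is randomized thousands of times en route, which is what makes the observed anisotropies so puzzling.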

The High-Altitude Water Cherenkov Observatory (HAWC) is an experiment under construction near Sierra Negra, Mexico. It was originally built for the purpose of detecting gamma-rays, but highly energetic cosmic rays are also detected by the experiment. When a cosmic ray reaches the atmosphere, the secondary particles in its shower produce Cherenkov light as they traverse HAWC's water tanks, and it is this light that is detected. With this information, the direction of the cosmic ray can be inferred to within 1.2 degrees. On the one hand, Cherenkov light from cosmic ray showers is a nuisance for the gamma-ray observations that are the main aim of HAWC; on the other, it constitutes an interesting measurement in its own right. After roughly one year of gathering data, HAWC has measured variations in the cosmic ray intensity across the sky at the level of 0.0001.

The TeV cosmic ray sky as seen by HAWC.

Figure 1. The TeV cosmic ray sky as seen by HAWC. Large scale variations (on scales > 60 degrees), which are sensitive to incomplete sky coverage, have been subtracted from this map. The three excess regions are identified in the map, and these coincide with those found previously by other experiments. Figure 5 of Abeysekara et al.

The HAWC team has found an excess of cosmic rays coming from three different regions of the sky, as shown in Figure 1 above. All of these regions had previously been identified by other experiments (the Milagro experiment and ARGO-YBJ), and one of these regions is now detected more clearly in the HAWC data, confirming the previous results. The colors in the map indicate the significance level: a comparison of the level of detection of each feature to the noise in the measurement. The authors also explore the energy spectrum of the cosmic rays coming from Region A, the most significant region detected, and they find them to be more energetic than those that come from the whole sky, on average.

The team has also computed the power spectrum of the cosmic ray intensity. This is a function that tells us the relative abundance of intensity variations of a given scale in the map (commonly used in cosmology), and it is shown below in Figure 2. The blue points give the power spectrum of the whole map, while the red points correspond to a version of the map where the largest scale variations have been subtracted. The gray bands indicate the expected result if the cosmic rays came from random directions in the sky. Both the detection of the three excess regions and the structure in this plot can help elucidate the structure of the magnetic field in the neighborhood of the Earth, the physics of how cosmic rays propagate throughout the interstellar medium and the locations of Galactic sources of cosmic rays.
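The idea of a power spectrum — how much variation lives at each angular scale — can be illustrated with a one-dimensional toy. A real full-sky analysis would use spherical harmonics (e.g. a HEALPix tool); the "map" below is entirely made up:

```python
import numpy as np

n = 360  # one sample per degree around a ring of sky
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Fake relative-intensity map: a large-scale dipole plus a ~10-degree bump
intensity = 1e-3 * np.cos(theta) + 1e-4 * np.exp(-((theta - 1.0) / 0.09) ** 2)

coeffs = np.fft.rfft(intensity - intensity.mean())
power = np.abs(coeffs) ** 2 / n**2  # power carried by each Fourier mode

print(int(np.argmax(power[1:])) + 1)  # → 1: the large-scale dipole dominates
small_scale = power.copy()
small_scale[:4] = 0.0  # crude analogue of subtracting large-scale variations
print(int(np.argmax(small_scale)))   # a higher mode, set by the bump's width
```

Zeroing the lowest modes plays the role of the large-scale subtraction behind the red points in Figure 2: only then does the small-scale structure stand out.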

Power spectrum of cosmic ray intensity map from the HAWC measurements.

Figure 2. Power spectrum of the cosmic ray intensity map from the HAWC measurements. The blue points correspond to the power spectrum of the map of the whole sky, while for the red points, the largest scale variations have been subtracted. The authors are most interested in the red power spectrum in this work, which shows variations in the intensity of cosmic rays across the sky on scales smaller than 60 degrees. Figure 8 of Abeysekara et al.




by Elisa Chisari at August 25, 2014 04:46 PM

Clifford V. Johnson - Asymptotia

Coral Forest
crochet_forest_7Given that you read here at this blog, you may well like to keep your boundaries between art and science nicely blurred, in which case you might like to learn more about the coral reef forests made of crochet spearheaded by Margaret and Christine Wertheim. The pieces mix crochet (a hand-craft I know and love well from my childhood - I got to explore my love for symmetry, patterns, and problem-solving by making doilies) with mathematics - hyperbolic geometry in particular - as well as biology (mimicking and celebrating the forms of corals - and drawing attention to their destruction in the wild). You can read much more about the projects here. I've mentioned the work here before on the blog, but the other day I went along to see a new set [...] Click to continue reading this post

by Clifford at August 25, 2014 05:23 AM

The n-Category Cafe

Math and Mass Surveillance: A Roundup

The Notices of the AMS has just published the second in its series “Mathematicians discuss the Snowden revelations”. (The first was here.) The introduction to the second article cites this blog for “a discussion of these issues”, but I realized that the relevant posts might be hard for visitors to find, scattered as they are over the last eight months.

So here, especially for Notices readers, is a roundup of all the posts and discussions we’ve had on the subject. In reverse chronological order (and updated after the original appearance of this post):

by leinster at August 25, 2014 03:00 AM

August 23, 2014

The Great Beyond - Nature blog

Updated: Icelandic volcano erupts

Update, 23 August 23:27 BST: As of this evening, Icelandic experts are reconsidering whether an eruption has begun or not. With no surface changes visible, and no meltwater rushing downriver as of yet, the Icelandic Meteorological Office reports “there are no signs of ongoing volcanic activity”. The aviation alert remains red, “as an imminent eruption cannot be excluded”.

 A volcanic eruption has begun near the caldera of Bárðarbunga, the Icelandic Meteorological Office (IMO) announced on 23 August. Officials have raised the area’s aviation colour code to red, signifying that an “eruption is imminent or in progress”.

All is quiet on the surface above the Bárðarbunga caldera.

Halldór Björnsson/Icelandic Meteorological Office

The eruption is taking place beneath 150–400 metres of ice, north and east of the Bárðarbunga caldera. For the past week magma has been rising from the deep and forming a long underground sheet of freshly cooled rock, known as a dyke. The formation of the dyke has been marked by a series of intense earthquakes stretching from Bárðarbunga towards a glacier called Dyngjujökull (see ‘Icelandic volcano shakes ominously’).

Scientists from the IMO and the University of Iceland flew over the eruption today and reported no visible signs at the surface. The eruption was probably detected by seismic stations monitoring the region, as the shaking produced when water interacts with magma and turns to steam has a distinctive energy signature. Since the earthquake swarm began on 16 August, Icelandic scientists have been peppering the region with extra seismic and global-positioning instruments to capture just such an event.

Officials have also moved mobile radar observation stations into place around Bárðarbunga, to monitor any plumes if the volcano starts to emit ash. All airports in Iceland remain open, although airspace of approximately 140 by 100 nautical miles (260 by 185 kilometres) has been closed over the eruption site. If the eruption begins to produce ash, the volcanic ash advisory centre responsible for the region may issue an alert. Those alerts can be monitored here.

How the eruption proceeds will depend on how much magma is forcing its way upward and at what rate. The last eruption in Iceland happened in 2011 at the Grímsvötn volcano and was the most powerful in nearly a century. Like this new one, it took place under the Vatnajökull ice cap, and it broke through the ice to spew ash 20 kilometres high. So far, there is no indication that the new eruption will do anything like that, although the interaction of magma and ice is notoriously unpredictable.

Icelandic Meteorological Office

by Alexandra Witze at August 23, 2014 04:10 PM

Tommaso Dorigo - Scientificblogging

Will Do Peer Review - For Money
Preparing the documents needed for an exam for a career advancement is, to a scientist like me, something like putting order in a messy garage. Leave alone my desk, which is indeed in a horrific mess – papers stratified and thrown around with absolutely no ordering criterion, mixed with books I forgot I own and important documents I'd rather have reissued than search for myself. No, I am rather talking about my own scientific production – published articles that need to be put in ordered lists, conference talks that I forgot I have given and need to be cited in the curriculum vitae, refereeing work I also long forgot I'd done, internal documents of the collaborations I worked in, students I tutored, courses I gave.

read more

by Tommaso Dorigo at August 23, 2014 01:20 PM

astrobites - astro-ph reader's digest

UR #15: Colors of Quasars

astrobitesURlogoThe undergrad research series is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.

Did you just wrap up a senior thesis? Are you getting started on an astro research project? If you, too, have been working on a project that you want to share, we want to hear from you! Think you’re up to the challenge of describing your research carefully and clearly to a broad audience, in only one paragraph? Then send us a summary of it!

You can share what you’re doing by clicking on the “Your Research” tab above (or by clicking here) and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself.

We look forward to hearing from you!


Poruri Sai Rahul
Indian Institute of Technology Madras, Chennai, India

Rahul is pursuing an integrated BS and MS degree in physics at the Indian Institute of Technology Madras. He’s also the head of the amateur astronomy club, Astro IITM. He did the work below under the guidance of Prof. Anand Narayanan at the Indian Institute of Space Science and Technology, Trivandrum, India.

Colors of Quasars from the SDSS DR9

The practice of photometric redshift estimation through multi-band photometry was first put forth by Baum (1962) but only became popular and powerful at the turn of the century. The efficiency of these methods depends greatly on the amount of overlap between adjacent filters, which is why photometry using the SDSS u, g, r, i, z and 2MASS J, H, K bands has an advantage over conventional UBVRI photometry.

Colors of Quasars by Richards et al. 2001 is a study of the color-color and color-redshift relationships of 2625 quasars, mostly drawn from early SDSS commissioning data. As only part of the original data set was available online, I worked on extending the results rather than reproducing them, using quasar data from the SDSS DR9. Using an SQL query, I retrieved photometric data on 146,659 quasars and constructed the color-color and color-redshift relations. Variation in the color can be explained by the various emission and absorption lines that are characteristic of quasars. Refer to sec. 4.3 of Richards et al. 2001 for a detailed explanation.
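The color construction described above can be sketched in a few lines. This is a minimal illustration with made-up magnitudes, not the actual DR9 query or data; a real analysis would retrieve the magnitudes and redshifts via an SQL query to the SDSS database.

```python
import numpy as np

# Hypothetical SDSS u, g, r, i, z magnitudes for three quasars.
# These numbers are made up for illustration; real values would come
# from an SQL query against the SDSS DR9 catalog.
mags = np.array([
    [19.2, 18.9, 18.7, 18.6, 18.5],  # low-redshift quasar
    [20.1, 19.0, 18.8, 18.7, 18.6],  # red u-g, as when the Lyman-alpha forest enters u
    [18.8, 18.6, 18.5, 18.4, 18.3],
])
bands = ["u", "g", "r", "i", "z"]

# A color is the magnitude difference between adjacent bands: u-g, g-r, r-i, i-z.
colors = mags[:, :-1] - mags[:, 1:]

for (b1, b2), col in zip(zip(bands, bands[1:]), colors.T):
    print(f"{b1}-{b2}: {np.round(col, 2)}")
```

Plotting each of these colors against redshift for the full sample then reproduces the color-redshift relations discussed in the write-up.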


Looking at one of the interesting features, it can be seen that there’s a large scatter in u-g color at high redshifts. As mentioned in sec. 4.3, the rapid rise occurs as the Lyman-alpha forest and Lyman-limit systems enter the u band, leaving little to no flux!


by Astrobites at August 23, 2014 04:37 AM

August 22, 2014

Clifford V. Johnson - Asymptotia

Making and Baking….
Back in LA, I had an amusing day the other day going from this* in the TV studio... photo_laser_mirage_shoot_small involving a laser and liquid nitrogen (so, around -320 F, if you must use those units), to this in the kitchen: tasty_things_1 involving butter, flour, water and shortening... (and once in the oven, around +350 F) which ultimately resulted in this: [...] Click to continue reading this post

by Clifford at August 22, 2014 11:39 PM

Lubos Motl - string vacua and pheno

Ashoke Sen: elementary particles are small black holes
Three string theorists added as Dirac Medal winners

On August 8th of every year, the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy chooses up to three recipients of the Dirac Medal. (August 8th is the anniversary of Dirac's 1902 birth. There exist three other awards called the "Dirac Medal" which I will ignore because they're less relevant for this blog's audience.)

Of course, the medal tries to decorate deep minds doing the kind of profound research that Paul Dirac did, which is why dozens of string theorists have already won it. The Dirac Medal shows what the Nobel prize would look like if the committee weren't constrained by the required explicit, dynamite-like demonstration of the physical discoveries.

In 2014, i.e. two weeks ago, the Italian institute avoided all experiments and awarded just three string theorists:
Ashoke Sen, Andrew Strominger, Gabriele Veneziano
Congratulations! Of course, Veneziano is the forefather of the whole discipline (the intercourse that has led to the birth was Veneziano's encounter with the Euler Beta function), Andy Strominger is a lot of fun and a perfectly balanced top thinker in one package and I know him the best of all, of course ;-), and Ashoke Sen is among the most brilliant minds, too. He has previously won the Milner award, too.

The Hindu printed a short yet interesting interview with Ashoke Sen yesterday:
‘Elementary particles may be thought of as small black holes’
It's funny – the title is actually a sentence I have included in almost every general physics talk I have given in the last decade, perhaps 30 talks in total. Sometimes I talk about the panic over LHC-produced black holes and emphasize that only experts may distinguish a small black hole from an elementary particle such as the Higgs boson – and its evaporation from the Higgs decay, etc.

It's true that the Hawking radiation of a "larger than minimal" black hole has a higher number of decay products (particles) so it's more uniform but for the truly minimum-size black holes, there's no difference.

String theory makes this unification of particles and black holes very explicit and elegant and Ashoke Sen has contributed to these wonderful insights a lot. String theory generally predicts an infinite spectrum of particle species – the perturbative "Hagedorn" tower of excited string states is the first glimpse of it, and it gets transformed to black hole microstates for even higher masses where the spectrum gets even denser.

Or you may go in the opposite direction: as a black hole shrinks, quantum effects – which may be represented as the quantization of its mass eigenvalues – get increasingly important, and once you get to sub-Planckian masses, there are just a few black hole microstates and they are identical to the known elementary particles.
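The crossover sketched in the previous paragraphs can be made slightly more quantitative with two textbook entropy estimates (my addition, not from the interview). In units with c = ħ = 1, the perturbative string spectrum gives an entropy roughly linear in the mass, while a 4D Schwarzschild black hole obeys the Bekenstein–Hawking area law:

```latex
S_{\text{string}} \;\sim\; \sqrt{\alpha'}\, M ,
\qquad
S_{\text{BH}} \;=\; \frac{A}{4G} \;=\; 4\pi G M^2 .
```

The two counts of microstates match near the "correspondence point" M ∼ √α′/G, below which the would-be black hole is better described as a string state or an elementary particle.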

The interview mentions his work on S-duality and his research of the black hole microstates. Ashoke was also the #1 soul behind the tachyon minirevolution in the late 1990s but he remained modestly silent about it.

He is also asked about the criticisms directed against string theory that we may still occasionally hear. He reminds everyone of the fact that not just string theory but any theory claiming to clarify the quantum foundations of gravity deals with new phenomena at (at least superficially) experimentally inaccessible scales – the Planck length has been known and known to be ultratiny for more than 100 years (Planck defined the natural units in 1899). So we have two options: either to hang ourselves, or to try to get as deeply as we can with the available knowledge and tools.

String theory is choosing the second option, Ashoke states. Those who are not choosing the second option should at least follow the first option more rigorously so that we don't hear so much unnecessary yelling before they complete their alternative, non-stringy strategy.

by Luboš Motl at August 22, 2014 12:33 PM

John Baez - Azimuth

Information Aversion


Why do ostriches stick their heads under the sand when they’re scared?

They don’t. So why do people say they do? A Roman named Pliny the Elder might be partially to blame. He wrote that ostriches “imagine, when they have thrust their head and neck into a bush, that the whole of their body is concealed.”

That would be silly—birds aren’t that dumb. But people will actually pay to avoid learning unpleasant facts. It seems irrational to avoid information that could be useful. But people do it. It’s called information aversion.

Here’s a new experiment on information aversion:

In order to gauge how information aversion affects health care, one group of researchers decided to look at how college students react to being tested for a sexually transmitted disease.

That’s a subject a lot of students worry about, according to Josh Tasoff, an economist at Claremont Graduate University who led the study along with Ananda Ganguly, an associate professor of accounting at Claremont McKenna College.

The students were told they could get tested for the herpes simplex virus. It’s a common disease that spreads via contact. And it has two forms: HSV1 and HSV2.

The type 1 herpes virus produces cold sores. It’s unpleasant, but not as unpleasant as type 2, which targets the genitals. Ganguly says the college students were given information — graphic information — that made it clear which kind of HSV was worse.

“There were pictures of male and female genitalia with HSV2, guaranteed to kind of make them really not want to have the disease,” Ganguly says.

Once the students understood what herpes does, they were told a blood test could find out if they had either form of the virus.

Now, in previous studies on information aversion it wasn’t always clear why people declined information. So Tasoff and Ganguly designed the experiment to eliminate every extraneous reason someone might decline to get information.

First, they wanted to make sure that students weren’t declining the test because they didn’t want to have their blood drawn. Ganguly came up with a way to fix that: All of the students would have to get their blood drawn. If a student chose not to get tested, “we would draw 10 cc of their blood and in front of them have them pour it down the sink,” Ganguly says.

The researchers also assured the students that if they elected to get the blood tested for HSV1 and HSV2, they would receive the results confidentially.

And to make triply sure that volunteers who said they didn’t want the test were declining it to avoid the information, the researchers added one final catch. Those who didn’t want to know if they had a sexually transmitted disease had to pay $10 to not have their blood tested.

So what did the students choose? Quite a few declined a test.

And while only 5 percent avoided the HSV1 test, three times as many avoided testing for the nastier form of herpes.

For those who didn’t want to know, the most common explanation was that they felt the results might cause them unnecessary stress or anxiety.

Let’s try extrapolating from this. Global warming is pretty scary. What would people do to avoid learning more about it? You can’t exactly pay scientists to not tell you about it. But you can do lots of other things: not listen to them, pay people to contradict what they’re saying, and so on. And guess what? People do all these things.

So, don’t expect that scaring people about global warming will make them take action. If a problem seems scary and hard to solve, many people will just avoid thinking about it.

Maybe a better approach is to tell people things they can do about global warming. Even if these things aren’t big enough to solve the problem, they can keep people engaged.

There’s a tricky issue here. I don’t want people to think turning off the lights when they leave the room is enough to stop global warming. That’s a dangerous form of complacency. But it’s even worse if they decide global warming is such a big problem that there’s no point in doing anything about it.

There are also lots of subtleties worth exploring in further studies. What, exactly, are the situations where people seek to avoid unpleasant information? What are the situations where they will accept it? This is something we need to know.

The quote is from here:

• Shankar Vedantham, Why we think ignorance is bliss, even when it hurts our health, Morning Edition, National Public Radio, 28 July 2014.

Here’s the actual study:

• Ananda Ganguly and Joshua Tasoff, Fantasy and dread: the demand for information and the consumption utility of the future.

Abstract. Understanding the properties of intrinsic information preference is important for predicting behavior in many domains including finance and health. We present evidence that intrinsic demand for information about the future is increasing in expected future consumption utility. In the first experiment subjects may resolve a lottery now or later. The information is useless for decision making but the larger the reward, the more likely subjects are to pay to resolve the lottery early. In the second experiment subjects may pay to avoid being tested for HSV-1 and the more highly feared HSV-2. Subjects are three times more likely to avoid testing for HSV-2, suggesting that more aversive outcomes lead to more information avoidance. We also find that intrinsic information demand is negatively correlated with positive affect and ambiguity aversion.

Here’s an attempt by economists to explain information aversion:

• Marianne Andries and Valentin Haddad, Information aversion, 27 February 2014.

Abstract. We propose a theory of inattention solely based on preferences, absent any cognitive limitations and external costs of acquiring information. Under disappointment aversion, information decisions and risk attitude are intertwined, and agents are intrinsically information averse. We illustrate this link between attitude towards risk and information in a standard portfolio problem, in which agents balance the costs, endogenous in our framework, and benefits of information. We show agents never choose to receive information continuously in a diffusive environment: they optimally acquire information at infrequent intervals only. We highlight a novel channel through which the optimal frequency of information acquisition decreases when risk increases, consistent with empirical evidence. Our framework accommodates a broad range of applications, suggesting our approach can explain many observed features of decision under uncertainty.

The photo, probably fake, is from here.

by John Baez at August 22, 2014 08:00 AM

August 21, 2014

Sean Carroll - Preposterous Universe

Effective Field Theory MOOC from MIT

Faithful readers are well aware of the importance of effective field theory in modern physics. EFT provides, in a nutshell, the best way we have to think about the fundamental dynamics of the universe, from the physics underlying everyday life to structure formation in the universe.

And now you can learn about the real thing! MIT is one of the many colleges and universities that are doing a great job putting top-quality lecture courses online, such as the introduction to quantum mechanics I recently mentioned. (See the comments of that post for other goodies.) Now they’ve announced a course at a decidedly non-introductory level: a graduate course in effective field theory, taught by Caltech alumnus Iain Stewart. This is the real enchilada, the same stuff a second-year grad student in particle theory at MIT would be struggling with. If you want to learn how to really think about naturalness, or a good way of organizing what we learn from experiments at the LHC, this would be a great place to start. (Assuming you already know the basics of quantum field theory.)


Classes start Sept. 16. I would love to take it myself, but I have other things on my plate at the moment — anyone who does take it, chime in and let us know how it goes.

by Sean Carroll at August 21, 2014 05:50 PM

Symmetrybreaking - Fermilab/SLAC

LHC physicist takes on new type of collisions

A former Large Hadron Collider researcher brings his knowledge of high-energy collisions to a new hockey game.

After years of particle physics research—first for the D0 experiment at Fermilab near Chicago and later for the ATLAS experiment at CERN near Geneva—Michele Petteni faced a dilemma. He loved physics, but not academia.

“Academia is very competitive, and if you want to be successful, you have to be one-hundred-percent committed,” Petteni says. “After my postdoc, I realized going down the academic path was not for me.”

Petteni found the perfect compromise. He took a job as a software engineer with EA SPORTS and helped design the hockey game NHL 15, to be released on September 9 in the US and September 11 in Europe.

“It was a new opportunity to do physics, just not at a quantum level,” Petteni says. “We try to keep quantum mechanical effects out of video games.”

Petteni’s job was to help model the puck and players with more realistic geometries so that their interactions mirrored those of real hockey games.

“Previous versions of the game modeled the puck as a sphere, which has a perfectly symmetrical geometry and bounces in a very predictable way,” Petteni says. “But hockey pucks are cylinders, which move and interact completely differently than spheres. We wanted to develop new models in which the puck flicks and rotates in a way which is believable.”

Petteni and his team even went so far as to determine how the puck interacts with a player’s jersey and moves during multi-player pile-ups.

“It really changes everything and gives you a much more realistic hockey experience,” Petteni says.

Petteni says his previous experiences working at Fermilab and CERN were invaluable: The coding, computational analysis and problem-solving skills he learned while working as a physicist translate almost directly into game design.

“We knew that we wanted to redo our puck physics, and the fact that Michele had physics experience made it a perfect fit,” says Sean Ramjagsingh, the lead producer for the game NHL 15. “Physics has been a huge part of the success of our franchise.”

In fact, Petteni wasn’t the first physicist hired by EA SPORTS to help revamp the realism in their games. His team also contains a former string theorist and a former astrophysicist, as well as several engineers.


Like what you see? Sign up for a free subscription to symmetry!

by Sarah Charley at August 21, 2014 02:31 PM

Lubos Motl - string vacua and pheno

David Gross: why do we have faith in string theory
David Gross has given lots and lots of vision talks at various string conferences but this time, in June 2014, he focused on string theory and the scientific method in his 21-minute-long vision talk:

At the beginning, he enumerated five of his favorite talks, said that Andy Strominger's vision talk almost brought him to tears, and finally concentrated on the explanations of why the people in that Princeton room have faith in the theory despite some outsiders' opinions that they shouldn't.

(Paul Steinhardt, a speaker at Strings 2014 who has delivered some "strange" statements to the audience, was chosen as the only named prototype of the critics.)

String theory is a framework, not a specific theory making specific down-to-earth predictions about realistically doable or ongoing experiments that could decide about its fate, but David Gross recommended a very intelligent 2013 book by Richard Dawid, a trained physicist and a philosopher, that articulates rather nicely what the actual reasons for the competent physicists' faith in the theory are in the absence of those confirmed new down-to-earth predictions.

Incidentally, the rating and ranking of Dawid's book at Amazon are catastrophic, especially if you compare them with some of the anti-physics tirades – a big enough piece of evidence that most laymen are just way too stupid and superficial regardless of the time some people spend licking these laymen's rectums. The sales rank worse than 1 million also indicates that no listener of Gross' talk has bought the book via Amazon.

The three arguments that either instinctively or knowingly contribute to the competent physicists' faith and growing confidence in string theory are
  • UEA: unexpected explanatory coherence argument. If the theory weren't worth studying, it would probably almost never lead to unexpected answers, explanations, and ways to solve problems previously thought to be independent.
  • NAA: no alternative argument. There's no other game in town. The argument has existed in the case of the Standard Model – in recent decades, NAA was getting increasingly important.
  • MIA: meta inductive argument. String theory is a part of the same research program that includes theories whose success has already been established.
Gross has mentioned that these sentiments aren't really new. UEA played a role for him when he became sure that asymptotic freedom in QCD was right: the asymptotic freedom idea automatically produced a bonus insight, infrared slavery. String theory has produced dozens of similar "inevitable corollaries". He also sketched the developments that will occur soon. For example, supersymmetry is going to be observed in a few years at the LHC. (Laughter.) We're all waiting for this argument in favor of the MIA.

At the end, David Gross presented his theory of history, addressed especially to physicists who ever get depressed about anything. Things are always getting worse. But we know that in the long run, things are getting better. Why is it? The idea of Gross' explanation is more or less mathematically isomorphic to the "escalator", a meme about "how skeptics describe the temperature" versus "what the temperature is actually doing" promoted by the climate alarmist John Cook:

You see that the first derivative of the temperature is negative – the weather seems to be getting worse all the time. (The summer of 2014 in Czechia has been said to be over, too.) But between these smooth, monotonically decreasing pieces of the function there are discontinuities, often or mostly positive ones, so the weather is getting better in the long run (it's sometimes called global warming). Gross applied the same idea to all of history: things keep getting worse, but then suddenly there is a jump, a big improvement.

Of course, Gross has made some sign errors – he counted the 2008 U.S. elections as an improvement, for example – but his overall performance was pretty good.

Unlike his predecessor and his spouse, Barack Obama doesn't even know what to do with a bucket of ice water.

by Luboš Motl at August 21, 2014 11:03 AM

Jacques Distler - Musings

Golem V

For nearly 20 years, Golem has been the machine on my desk. It’s been my mail server, web server, file server, … ; it’s run Mathematica and TeX and compiled software for me. Of course, it hasn’t been the same physical machine all these years. Like Doctor Who, it’s gone through several reincarnations.

Alas, word came down from the Provost that all “servers” must move (physically or virtually) to the University Data Center. And, bewilderingly, the machine on my desk counted as a “server.”

Obviously, a 27” iMac wasn’t going to make such a move. And, equally obviously, it would have been rather difficult to replace/migrate all of the stuff I have running on the current Golem. So we had to go out shopping for Golem V. The iMac stayed on my desk; the machine that moved to the Data Center is a new Mac Mini.

The new Mac Mini
side view
Golem V, all labeled and ready to go
  • 2.3 GHz quad-core Intel Core i7 (8 logical cores, via hyperthreading)
  • 16 GB RAM
  • 480 GB SSD (main drive)
  • 1 TB HD (Time Machine backup)
  • 1 TB external HD (CCC clone of the main drive)
  • Dual 1 Gigabit Ethernet Adapters, bonded via LACP

In addition to the dual network interface, it (along with, I gather, a rack full of other Mac Minis) is plugged into an ATS (automatic transfer switch), to take advantage of the dual redundant power supplies at the Data Center.

Not as convenient, for me, as having it on my desk, but I’m sure the new Golem will enjoy the austere hum of the Data Center much better than the messy cacophony of my office.

I did get a tour of the Data Center out of the deal. Two things stood out for me.

  1. Most UPSs involve large banks of lead-acid batteries. The UPSs at the University Data Center use flywheels. They comprise a long row of refrigerator-sized cabinets which give off a persistent hum due to the humongous flywheels rotating in vacuum within.
  2. The server cabinets are painted the standard generic white. But, for the networking cabinets, the University went to some expense to get them custom-painted … burnt orange.
Custom paint job on the networking cabinets.

by distler at August 21, 2014 04:24 AM

August 20, 2014

The n-Category Cafe

Holy Crap, Do You Know What A Compact Ring Is?

You know how sometimes someone tells you a theorem, and it’s obviously false, and you reach for one of the many easy counterexamples only to realize that it’s not a counterexample after all, then you reach for another one and another one and find that they fail too, and you begin to concede the possibility that the theorem might not actually be false after all, and you feel your world start to shift on its axis, and you think to yourself: “Why did no one tell me this before?”

That’s what happened to me today, when my PhD student Barry Devlin — who’s currently writing what promises to be a rather nice thesis on codensity monads and topological algebras — showed me this theorem:

Every compact Hausdorff ring is totally disconnected.

I don’t know who it’s due to; Barry found it in the book Profinite Groups by Ribes and Zalesskii. And in fact, there’s also a result for rings analogous to a well-known one for groups: a ring is compact, Hausdorff and totally disconnected if and only if it can be expressed as a limit of finite discrete rings. Every compact Hausdorff ring is therefore “profinite”, that is, expressible as a limit of finite rings.

So the situation for compact rings is completely unlike the situation for compact groups. There are loads of compact groups (the circle, the torus, SO(n), U(n), E₈, …) and there’s a very substantial theory of them, from Haar measure through Lie theory and onwards. But compact rings are relatively few: it’s just the profinite ones.
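A concrete instance of the profinite description (a standard example, not taken from the post) is the ring of p-adic integers,

```latex
\mathbb{Z}_p \;=\; \varprojlim_{n}\; \mathbb{Z}/p^n\mathbb{Z}
\;\subseteq\; \prod_{n \ge 1} \mathbb{Z}/p^n\mathbb{Z},
```

which is compact and Hausdorff as a closed subspace of a product of finite discrete rings (Tychonoff), and totally disconnected because each factor is.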

I only laid eyes on the proof for five seconds, which was just long enough to see that it used Pontryagin duality. But how should I think about this theorem? How can I alter my worldview in such a way that it seems natural or even obvious?

by leinster at August 20, 2014 11:13 PM

ZapperZ - Physics and Physicists

How Long Can You Balance A Pencil
Minute Physics took up a topic that I had discussed previously: the time scale for how long a pencil can be balanced on its tip.

Note that in a previous post, I pointed out several papers that debunked the fallacy of using quantum mechanics and the HUP (Heisenberg uncertainty principle) to arrive at such a time scale. So it seems that this particular topic, like many others, keeps coming back every so often.
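For context, the classical (non-quantum) version of the estimate is easy to reproduce. Treating the pencil as a uniform rod of length L pivoting about its tip, the tilt angle obeys θ″ = (3g/2L) sin θ, and even a tiny initial tilt topples it in well under a second. The pencil length and initial tilts below are illustrative assumptions, not numbers from the video or the papers:

```python
import math

# Classical toppling time of a pencil balanced on its tip, modeled as a
# uniform rigid rod of length L pivoting about its bottom end:
#     theta'' = (3 g / (2 L)) * sin(theta)
# The length and initial tilts used here are illustrative assumptions.

def fall_time(theta0, L=0.15, g=9.81, dt=1e-5):
    """Seconds for the tilt to grow from theta0 (radians) to horizontal."""
    k = 3.0 * g / (2.0 * L)
    theta, omega, t = theta0, 0.0, 0.0
    while theta < math.pi / 2:
        omega += k * math.sin(theta) * dt  # symplectic Euler step
        theta += omega * dt
        t += dt
    return t

# A very carefully balanced pencil (initial tilt 0.01 rad) still falls
# in a fraction of a second; a larger tilt falls faster still.
print(f"fall time from 0.01 rad: {fall_time(0.01):.2f} s")
print(f"fall time from 0.10 rad: {fall_time(0.10):.2f} s")
```

The fall time grows only logarithmically as the initial tilt shrinks, which is why no classical (let alone quantum) preparation buys you more than a few seconds.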


by ZapperZ at August 20, 2014 04:01 PM

CERN Bulletin

"Science and Peace" symposium to celebrate the 60th anniversary of the first Council session | 19 September
Friday, 19 September 2014, in the tent behind the Globe of Science and Innovation. The Convention for the Establishment of a European Organization for Nuclear Research entered into force on 29 September 1954, 60 years ago. This marks CERN's official birthday. The first session of the CERN Council, the governing body of CERN, was held in Geneva on 7 and 8 October 1954, just one week later. The symposium "Science and Peace" is being held to celebrate the 60th anniversary of the first Council session. Speakers from all generations will present highlights from 60 years of the Council and various views from their own perspectives.

Programme:
  • 15:00 – 15:10: Welcome address – Agnieszka Zalewska
  • 15:10 – 15:25: The history of the Council – a brief selection of highlights – Jens Vigen
  • 15:25 – 15:40: The Council as seen by a Member State – Sijbrand De Jong
  • 15:40 – 15:55: The Council as seen by an outreach specialist – Steven Goldfarb
  • 15:55 – 16:10: The Council as seen by a young scientist – Laura Grob
  • 16:10 – 16:25: The Council as seen by a pensioner – Edith Deluermoz
  • 16:25 – 16:40: The Council on the world stage – Jonathan Ellis
  • 16:40 – 16:55: The Council and accelerators, medical applications, technology transfer, and the World Wide Web – Horst Wenninger
  • 16:55 – 17:00: Closing remarks – Rolf Heuer

August 20, 2014 02:23 PM

CERN Bulletin

Internal conference in the framework of CERN’s 60th anniversary | 8 September
Dieter Schlatter and Hans Specht discuss deep inelastic lepton-nucleon scattering experiments at CERN and SPS heavy ion physics.

8 September 2014, Main Auditorium

3.45 p.m. – 4.00 p.m.: Coffee

4.00 p.m. – 5.00 p.m.: Dieter Schlatter – Deep Inelastic Lepton-Nucleon Scattering Experiments at CERN

Abstract: Several deep inelastic scattering experiments using neutrino and muon beams were done at the SPS during the period 1977 to 1985. The experiments, their physics results and the importance of these early tests of the Standard Model will be described.

Biography: Born 1944 in Germany. 1973, PhD in experimental particle physics, Hamburg University. From 1976, a CERN fellow at the neutrino experiment CDHS. 1980-83, at SLAC, e+e- physics with Mark II at the PEP ring. 1983, back at CERN, e+e- physics with the ALEPH experiment at LEP. 2001-2005, EP Division Leader / PH Dept. Head. From 2006, worked on the conceptual design of a detector for a future CLIC e+e- collider. Retired since 2010.

5.00 p.m. – 6.00 p.m.: Hans Specht – Heavy Ion Physics at the CERN SPS: Roots 1974-1984 and Key Results

Abstract: Two communities, nuclear and particle physics, had to come together to open up a new field at the CERN SPS in the early eighties, bringing CERN to the forefront worldwide until the start of RHIC in 2000. I will discuss the period before that, including the basic new ideas on parton deconfinement, key workshops, alternative accelerator options in the LBL-GSI-CERN triangle, and the final convergence of the three labs on the SPS, sacrificing any home future in this field for the first two. In 1984, five major experiments were approved for initially O16 and S32 beams at the SPS, with an unprecedented re-use of existing experimental equipment. Subsequent evolution followed thanks to intense learning processes, leading to a second generation of much improved or completely new experiments together with Pb beams starting in 1994. I will summarise the key results and their (then still cautious) interpretation as of 2000. They were used as an input to a press conference at CERN, announcing the detection of a 'new state of matter' just before the start-up of RHIC. Fortunately, a new experiment a few years later unambiguously confirmed that the Quark Gluon Plasma had indeed been formed already at SPS energies.

Biography: Born in 1936 in Germany. Studied Physics at TU Muenchen and ETH Zurich. PhD in 1964 at TU Munich (H. Meier-Leibnitz). NRC Fellow and Postdoc 1965-1968 at AECL in Chalk River, Canada. Habilitation and Associate Professor 1969-1973 at LMU Muenchen. Full Professor since 1973 at Universität Heidelberg. Since 1983, main research at CERN. Member of R807/808 for the last year of ISR running. Member of the Heavy Ion Experiments NA34/2 (Spokesperson), NA45 (Spokesperson) and NA60. Scientific Director of GSI Darmstadt 1992-1999. Publications in Atomic, Nuclear, High-Energy Physics and Brain Research. Since 2000, Member of the Heidelberg Academy of Science. Since 2004, Emeritus.

August 20, 2014 01:23 PM

CERN Bulletin

Public Lecture | Philippe Lebrun | "Particle accelerators" | 2 September
"Les accélérateurs de particules : vecteurs de découvertes, moteurs de développement", by Dr. Philippe Lebrun.

2 September 2014 - 7:30 p.m., Globe of Science and Innovation

Particle accelerators have been used in fundamental research for over a century, allowing physicists to discover elementary particles and study them at increasingly smaller scales. Making use of emerging technologies whose progress they helped to stimulate, they developed exponentially throughout the 20th century to become major tools for research today, not only in particle physics but also – as powerful radiation sources for probing matter – in atomic and molecular physics, condensed matter physics and materials science. They have also found applications in society, where they are increasingly used in a wide range of fields including applied sciences, medicine (research and clinical applications) and industry. The lecture will cover examples of these applications and further development opportunities.

Philippe Lebrun works at CERN on the Laboratory's large particle accelerators. He led the Accelerator Technology department during the construction of the Large Hadron Collider (LHC). A graduate of the École des Mines (Paris) and the California Institute of Technology (Pasadena), he is also an alumnus of the Institut des hautes études pour la science et la technologie (Paris) and holds an honorary doctorate from the Wrocław University of Technology (Poland).

Lecture in French, translated into English. Entrance free. Limited number of seats. Reservation essential: +41 22 767 76 76. The conference will be webcast. This conference is organised in the framework of CERN's 60th anniversary.

August 20, 2014 12:33 PM

CERN Bulletin

Exhibition | CERN Micro Club | 1-30 September
The CERN Micro Club (CMC) is organising an exhibition looking back on the origins of the personal computer, also known as the micro-computer, to mark the 60th anniversary of CERN and the club’s own 30th anniversary.   CERN, Building 567, R-021 and R-029 01.09.2014 - 30.09.2014 from 4.00 to 6.00 p.m. The exhibition will be held in the club’s premises (Building 567, rooms R-021 and R-029) and will be open Mondays to Thursdays from 1 to 30 September 2014. Come and admire, touch and use makes and models that disappeared from the market many years ago, such as Atari, Commodore, Olivetti, DEC, IBM and Apple II and III, all in good working order and installed with applications and games from the period. Club members will be on hand to tell you about these early computers, which had memories of just a few kilobytes, whereas those of modern computers can reach several gigabytes or even terabytes.

August 20, 2014 12:03 PM

Lubos Motl - string vacua and pheno

Adimensional gravity
Natalie Wolchover wrote a good article for the Simons Foundation,
At Multiverse Impasse, a New Theory of Scale
about Agravity, a provocative paper by Alberto Salvio and Alessandro Strumia. Incidentally, has anyone noticed that Strumia is Joe Polchinski's twin brother? The similarity goes beyond the favorite color of the shirt and pants.

At any rate, the system of ideas known as "naturalness" seems to marginally conflict with the experiments and things may be getting worse. Roughly speaking, naturalness wants dimensionful parameters (masses) to be comparable unless there is an increased symmetry when they're not comparable. But the Higgs boson is clearly much lighter than the Planck scale and in 2015, the LHC may show (but doesn't have to show!) that there are no light superpartners that help to make the lightness natural.

The "agravity" approach, if true, eliminates these naturalness problems because according to its scheme of things, there is no fundamental scale in Nature. One tries to get all the terms in the Lagrangian with some dimensionful couplings from terms that have no dimensionful couplings. "Agravity" is a different solution to these problems than both "naturalness" and "multiverse" – a third way, if you wish.

Similar things have been tried before, e.g. by William Bardeen in 1995, but Strumia et al. are the first ones who are trying to add gravity. The claim is that one may get the Einstein-Hilbert action by a dynamical process in a theory whose terms only include four-derivative terms such as \(R^2\).

Aside from a novel solution of the problems with the hierarchies, it is claimed that the scenario may predict inflation with the spectral index and the tensor-to-scalar ratio immensely compatible with the BICEP2 results.

The main obvious problem is the ghosts. The terms like \(R^2\) may be rewritten as propagating degrees of freedom whose squared norms (signs of the kinetic terms) are indefinite – some of them lead to proper positive probabilities while others produce pathological negative probabilities.

I remember a 2001 Santa Barbara talk by Stephen Hawking about "how he befriended ghosts", with some pretty amusing multimedia involving ghosts hugging his wheelchair, so you should be sure that Strumia et al. aren't the first folks who want to befriend ghosts.

At this moment, ghosts look like a lethal flaw. But I can imagine that by some clever technical or conceptual tricks, this flaw could perhaps be cured. The physical probabilities could become positive if one chose some better degrees of freedom, or there could be a new argument why these negative probabilities are ultimately harmless for some reason I can't quite imagine at this moment.

However, my concerns about the theory go beyond the problem with the ghosts. I do think that the Planck scale has been made extremely important by the modern "holographic" research of quantum gravity. The Planck area defines the minimum area where nontrivial information may be squeezed. It seems to be the scale that determines the nonlocalities and breakdown of the normal geometric concepts. The Planck scale is the minimum distance where a dynamical, gravitating space may start to emerge.

So if someone envisions some smooth ordinary spacetime at ultratiny, sub-Planckian distances, he is facing exactly the same difficulties – I would say that many of them are lethal ones – as the difficulties mentioned in the context of Weinberg's asymptotic safety which also envisions a scale-invariant theory underlying gravity at ultrashort distances.

There could be some amazing advance that cures these serious diseases but such a cure remains wishful thinking at this point. We shouldn't pretend that the diseases have already been cured – even though you may use this proposition as a "working hypothesis" and a "big motivator" whenever you try to do some research related to agravity. That's why I find the existing proposals of scale-invariant underpinnings of quantum gravity, including the agravity meme, to be very unlikely. Hierarchy-like problems including the cosmological constant problem may look rather serious but they're still less serious than predicting negative probabilities of physical processes.

by Luboš Motl at August 20, 2014 05:50 AM

August 19, 2014

Symmetrybreaking - Fermilab/SLAC

A whole-Earth approach

Ecologist John Harte applies principles from his former life as a physicist to his work trying to save the planet.

Each summer for the past 25 years, ecologist John Harte has spent his mornings in a meadow on the western slope of the Rocky Mountains. He takes soil samples from a series of experimentally heated plots at the Rocky Mountain Biological Laboratory, using the resulting data to predict how ecosystems’ responses to climate change will feed back to generate further heating of the climate.

Harte, a former theoretical physicist, studies ecological theory and the relationship between climates and ecosystems. He holds a joint professorship at UC Berkeley’s Energy Resources Group and the university’s Ecosystem Sciences Division. He says he is motivated by a desire to help save the planet and to solve complex ecological problems.

“John is a gifted naturalist and a great birdwatcher,” says Robert Socolow, a colleague and former physicist who transitioned to the environmental field at the same time. “John went into physics to combine his deep love of nature and his talent for mathematical analysis.”

Harte, who loved bird watching and nature as a child, also enjoyed physics and math, which his schoolteachers urged him to pursue. He received his undergraduate degree in physics from Harvard in 1961, and a PhD in theoretical physics from the University of Wisconsin in 1965. He went on to serve as an NSF Postdoctoral Fellow at CERN from 1965-66 and a postdoctoral fellow at Lawrence Berkeley National Laboratory from 1966-68.

It was in the storied summer of 1969 while Harte was teaching physics at Yale that he decided to return to nature studies. He and Socolow spent a month that summer conducting a hydrology study of the Florida Everglades, and their work showed that a proposed new airport would endanger the water supply for hundreds of thousands of people. That work, which Harte and Socolow detailed in one chapter of the book Patient Earth, led to the creation of an immense water conservation area in southwestern Florida.

“With not much more than back-of-the-envelope calculations, we were able to stop the jetport,” Harte says. “I thought, man, that’s cool. I want to do this.”

Harte was already worried about climate change and decided to transition to studying interdisciplinary environmental science. He sought out the wisdom of famous ecologists, such as G. Evelyn Hutchinson, to help him learn the field.

“I was lucky because I made this transition in the late ’60s and ’70s,” Harte says. “It was a novelty back then, and there weren’t a lot of people doing the things I wanted to do.”

He retained his love for physics and used physics concepts in his work.

“Unification is such an important goal in physics,” Harte says. “I came away with the thirst for finding unification in ecology. I also came away empowered that I could master practically any mathematical formula.”

Viewing many different phenomena through the same lens has been critical to Harte’s work. His big-picture view isn’t always widely accepted by other ecologists, but it has helped him understand the natural world and make significant contributions to its study.

“John is gifted in non-linear modeling. He’s a physicist doing ecology to this day,” Socolow says.

During his career, Harte has served on six National Academy of Sciences Committees, has published hundreds of papers and has written eight books on topics including biodiversity, climate change and water resources. He has also received numerous awards, including a George Polk award for his work advising a group of graduate journalism students reporting on climate change.

He typically divides his days between fieldwork and theory, teaching courses in theoretical biology and environmental problem solving. He has mentored about 35 graduate students over the years, about 10 of whom have come from physics.

“They saw that I had made this transition, and they thought I’d be a good mentor. Students who want to make that transition come to work with me,” Harte says. “Because I speak the language of physics.”

Like what you see? Sign up for a free subscription to symmetry!

by Rhianna Wisniewski at August 19, 2014 10:15 PM

August 18, 2014

Quantum Diaries

Dark Energy Survey kicks off second season cataloging the wonders of deep space

This Fermilab press release came out on Aug. 18, 2014.

This image of the NGC 1398 galaxy was taken with the Dark Energy Camera. This galaxy lives in the Fornax cluster, roughly 65 million light-years from Earth. It is 135,000 light-years in diameter, just slightly larger than our own Milky Way galaxy, and contains more than 100 billion stars. Credit: Dark Energy Survey

On Aug. 15, with its successful first season behind it, the Dark Energy Survey (DES) collaboration began its second year of mapping the southern sky in unprecedented detail. Using the Dark Energy Camera, a 570-megapixel imaging device built by the collaboration and mounted on the Victor M. Blanco Telescope in Chile, the survey’s five-year mission is to unravel the fundamental mystery of dark energy and its impact on our universe.

Along the way, the survey will take some of the most breathtaking pictures of the cosmos ever captured. The survey team has announced two ways the public can see the images from the first year.

Today, the Dark Energy Survey relaunched Dark Energy Detectives, its successful photo blog. Once every two weeks during the survey’s second season, a new image or video will be posted, with an explanation provided by a scientist. During its first year, Dark Energy Detectives drew thousands of readers and followers, including more than 46,000 followers on its Tumblr site.

Starting on Sept. 1, the one-year anniversary of the start of the survey, the data collected by DES in its first season will become freely available to researchers worldwide. The data will be hosted by the National Optical Astronomy Observatory. The Blanco Telescope is hosted at the National Science Foundation’s Cerro Tololo Inter-American Observatory, the southern branch of NOAO.

In addition, the hundreds of thousands of individual images of the sky taken during the first season are being analyzed by thousands of computers at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, Fermi National Accelerator Laboratory (Fermilab), and Lawrence Berkeley National Laboratory. The processed data will also be released in coming months.

Scientists on the survey will use these images to unravel the secrets of dark energy, the mysterious substance that makes up 70 percent of the mass and energy of the universe. Scientists have theorized that dark energy works in opposition to gravity and is responsible for the accelerating expansion of the universe.

“The first season was a resounding success, and we’ve already captured reams of data that will improve our understanding of the cosmos,” said DES Director Josh Frieman of the U.S. Department of Energy’s Fermi National Accelerator Laboratory and the University of Chicago. “We’re very excited to get the second season under way and continue to probe the mystery of dark energy.”

While results on the survey’s probe of dark energy are still more than a year away, a number of scientific results have already been published based on data collected with the Dark Energy Camera.

The first scientific paper based on Dark Energy Survey data was published in May by a team led by Ohio State University’s Peter Melchior. Using data that the survey team acquired while putting the Dark Energy Camera through its paces, they used a technique called gravitational lensing to determine the masses of clusters of galaxies.

In June, Dark Energy Survey researchers from the University of Portsmouth and their colleagues discovered a rare superluminous supernova in a galaxy 7.8 billion light years away. A group of students from the University of Michigan discovered five new objects in the Kuiper Belt, a region in the outer reaches of our solar system, including one that takes over a thousand years to orbit the Sun.

In February, Dark Energy Survey scientists used the camera to track a potentially hazardous asteroid that approached Earth. The data was used to show that the newly discovered Apollo-class asteroid 2014 BE63 would pose no risk.

Several more results are expected in the coming months, said Gary Bernstein of the University of Pennsylvania, project scientist for the Dark Energy Survey.

The Dark Energy Camera was built and tested at Fermilab. The camera can see light from more than 100,000 galaxies up to 8 billion light-years away in each crystal-clear digital snapshot.

“The Dark Energy Camera has proven to be a tremendous tool, not only for the Dark Energy Survey, but also for other important observations conducted year-round,” said Tom Diehl of Fermilab, operations scientist for the Dark Energy Survey. “The data collected during the survey’s first year — and its next four — will greatly improve our understanding of the way our universe works.”

The Dark Energy Survey Collaboration comprises more than 300 researchers from 25 institutions in six countries. For more information, visit the survey’s website.

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website.

The National Optical Astronomy Observatory (NOAO) is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the National Science Foundation.

by Fermilab at August 18, 2014 09:07 PM

Andrew Jaffe - Leaves on the Line

&ldquo;Public Service Review&rdquo;?

A few months ago, I received a call from someone at the “Public Service Review”, supposedly a glossy magazine distributed to UK policymakers and influencers of various stripes. The gentleman on the line said that he was looking for someone to write an article for his magazine giving an example of what sort of space-related research was going on at a prominent UK institution, to appear opposite an opinion piece written by Martin Rees, president of the Royal Society.

This seemed harmless enough, although it wasn’t completely clear what I (or the Physics Department, or Imperial College) would get out of it. But I figured I could probably knock something out fairly quickly. However, he told me there was a catch: it would cost me £6000 to publish the article. And he had just ducked out of his editorial meeting in order to find someone to agree to writing the article that very afternoon. Needless to say, in this economic climate, I didn’t have an account with an unused £6000 in it, especially for something of dubious benefit. (On the other hand, astrophysicists regularly publish in journals with substantial page charges.) It occurred to me that this could be a scam, although the website itself seems legitimate (though no one I spoke to knew anything about it).

I had completely forgotten about this until this week, when another colleague in our group at Imperial told me he had received the same phone call, from the same organization, with the same details: article to appear opposite Lord Rees’s; short deadline; large fee.

So, this is beginning to sound fishy. Has anyone else had any similar dealings with this organization?

Update: It has come to my attention that one of the comments below was made under a false name, in particular the name of someone who actually works for the publication in question, so I have removed the name, and will likely remove the comment unless the original writer comes forward with more and truthful information (which I will not publish without permission). I have also been informed of the possibility that some other of the comments below may come from direct competitors of the publication. These, too, may be removed in the absence of further confirming information.

Update II: In the further interest of hearing both sides of the discussion, I would like to point out the two comments from staff at the organization giving further information as well as explicit testimonials in their favor.

by Andrew at August 18, 2014 03:47 PM

Symmetrybreaking - Fermilab/SLAC

Dark Energy Survey kicks off second season

In September, DES will make data collected in its first season freely available to researchers.

On August 15, with its successful first season behind it, the Dark Energy Survey collaboration began its second year of mapping the southern sky in unprecedented detail. Using the Dark Energy Camera, a 570-megapixel imaging device built by the collaboration and mounted on the Victor M. Blanco Telescope in Chile, the survey’s five-year mission is to unravel the fundamental mystery of dark energy and its impact on our universe.

Along the way, the survey will take some of the most breathtaking pictures of the cosmos ever captured. The survey team has announced two ways the public can see the images from the first year.

Today, the Dark Energy Survey relaunched its photo blog, Dark Energy Detectives. Once every two weeks during the survey’s second season, a new image or video will be posted, with an explanation provided by a scientist. During its first year, Dark Energy Detectives drew thousands of readers and followers, including more than 46,000 followers on its Tumblr site.

Starting on September 1, the one-year anniversary of the start of the survey, the data collected by DES in its first season will become freely available to researchers worldwide. The data will be hosted by the National Optical Astronomy Observatory. The Blanco Telescope is hosted at the National Science Foundation's Cerro Tololo Inter-American Observatory, the southern branch of NOAO.

In addition, the hundreds of thousands of individual images of the sky taken during the first season are being analyzed by thousands of computers at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, Fermi National Accelerator Laboratory and Lawrence Berkeley National Laboratory. The processed data will also be released in coming months.

Scientists on the survey will use these images to unravel the secrets of dark energy, the mysterious substance that makes up 70 percent of the mass and energy of the universe. Scientists have theorized that dark energy works in opposition to gravity and is responsible for the accelerating expansion of the universe.

“The first season was a resounding success, and we’ve already captured reams of data that will improve our understanding of the cosmos,” says DES Director Josh Frieman of Fermilab and the University of Chicago. “We’re very excited to get the second season under way and continue to probe the mystery of dark energy.”

While results on the survey’s probe of dark energy are still more than a year away, a number of scientific results have already been published based on data collected with the Dark Energy Camera.

The first scientific paper based on Dark Energy Survey data was published in May by a team led by Ohio State University’s Peter Melchior. Using data that the survey team acquired while putting the Dark Energy Camera through its paces, they used a technique called gravitational lensing to determine the masses of clusters of galaxies.

In June, Dark Energy Survey researchers from the University of Portsmouth and their colleagues discovered a rare superluminous supernova in a galaxy 7.8 billion light years away. A group of students from the University of Michigan discovered five new objects in the Kuiper Belt, a region in the outer reaches of our solar system, including one that takes over a thousand years to orbit the Sun.

In February, Dark Energy Survey scientists used the camera to track a potentially hazardous asteroid that approached Earth. The data was used to show that the newly discovered Apollo-class asteroid 2014 BE63 would pose no risk.

Several more results are expected in the coming months, says Gary Bernstein of the University of Pennsylvania, project scientist for the Dark Energy Survey.

The Dark Energy Camera was built and tested at Fermilab. The camera can see light from more than 100,000 galaxies up to 8 billion light-years away in each crystal-clear digital snapshot.

“The Dark Energy Camera has proven to be a tremendous tool, not only for the Dark Energy Survey, but also for other important observations conducted year-round,” says Tom Diehl of Fermilab, operations scientist for the Dark Energy Survey. “The data collected during the survey’s first year—and its next four—will greatly improve our understanding of the way our universe works.”

Fermilab published a version of this article as a press release.



August 18, 2014 01:00 PM

Tommaso Dorigo - Scientificblogging

Tight Constraints On Dark Matter From CMS
Although now widely accepted as the most natural explanation of the observed features of the universe around us, dark matter remains a highly mysterious entity to this day. There are literally dozens of possible candidates to explain its nature, ranging in size from subnuclear particles all the way to primordial black holes and beyond. To particle physicists, it is of course natural to assume that dark matter IS a particle, which we have not detected yet. We have a hammer, and that looks like a nail.

read more

by Tommaso Dorigo at August 18, 2014 10:41 AM

John Baez - Azimuth

El Nino Project (Part 7)

So, we’ve seen that Ludescher et al have a way to predict El Niños. But there’s something a bit funny: their definition of El Niño is not the standard one!

Precisely defining a complicated climate phenomenon like El Niño is a tricky business. Lots of different things tend to happen when an El Niño occurs. In 1997-1998, we saw these:

But what if just some of these things happen? Do we still have an El Niño or not? Is there a right answer to this question, or is it partially a matter of taste?

A related puzzle: is El Niño a single phenomenon, or several? Could there be several kinds of El Niño? Some people say there are.

Sometime I’ll have to talk about this. But today let’s start with the basics: the standard definition of El Niño. Let’s see how this differs from Ludescher et al’s definition.

The standard definition

The most standard definitions use the Oceanic Niño Index or ONI, which is the running 3-month mean of the Niño 3.4 index:

• An El Niño occurs when the ONI is over 0.5 °C for at least 5 months in a row.

• A La Niña occurs when the ONI is below -0.5 °C for at least 5 months in a row.

Of course I should also say exactly what the ‘Niño 3.4 index’ is, and what the ‘running 3-month mean’ is.

The Niño 3.4 index is the area-averaged, time-averaged sea surface temperature anomaly for a given month in the region 5°S-5°N and 170°-120°W:

Here anomaly means that we take the area-averaged, time-averaged sea surface temperature for a given month — say February — and subtract off the historical average of this quantity — that is, for Februaries of other years on record.
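In code, the anomaly computation is just a subtraction of a per-month climatology. Here is a minimal Python sketch; the `sst` dictionary of monthly area-averaged temperatures is hypothetical, standing in for real Niño 3.4 data (and it uses the simple all-years average, not the moving baseline discussed below):

```python
# Sketch of the sea surface temperature 'anomaly' computation described above.
# `sst` is a hypothetical dict mapping (year, month) -> area-averaged SST in °C.

def climatology(sst, month):
    """Historical average SST for a given calendar month (e.g. all Februaries)."""
    temps = [t for (y, m), t in sst.items() if m == month]
    return sum(temps) / len(temps)

def anomaly(sst, year, month):
    """SST for (year, month) minus the historical average for that calendar month."""
    return sst[(year, month)] - climatology(sst, month)
```

For example, if the Februaries on record averaged 27.0 °C and this February was 28.0 °C, the anomaly is +1.0 °C.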

If you’re clever you can already see room for subtleties and disagreements. For example, you can get sea surface temperatures in the Niño 3.4 region here:

Niño 3.4 data since 1870 calculated from the HadISST1, NOAA. Discussed in N. A. Rayner et al, Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century, J. Geophys. Res. 108 (2003), 4407.

However, they don’t actually provide the Niño 3.4 index.

You can get the Niño 3.4 index here:

TNI (Trans-Niño Index) and N3.4 (Niño 3.4 Index), NCAR.

You can also get it from here:

Monthly Niño 3.4 index, Climate Prediction Center, National Weather Service.

The actual temperatures in Celsius on the two websites are quite close — but the anomalies are rather different, because the second one ‘subtracts off the historical average’ in a way that takes global warming into account. For example, to compute the Niño 3.4 index in June 1952, instead of taking the average temperature that month and subtracting off the average temperature for all Junes on record, they subtract off the average for Junes in the period 1936-1965. Averages for different periods are shown here:

You can see how these curves move up over time: that’s global warming! It’s interesting that they go up fastest during the cold part of the year. It’s also interesting to see how gentle the seasons are in this part of the world. In the old days, the average monthly temperatures ranged from 26.2 °C in the winter to 27.5 °C in the summer — a mere 1.3 °C fluctuation.

Finally, to compute the ONI in a given month, we take the average of the Niño 3.4 index in that month, the month before, and the month after. This definition of running 3-month mean has a funny feature: we can’t know the ONI for this month until next month!
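As a one-line sketch (in Python, with `nino34` a hypothetical list of consecutive monthly Niño 3.4 values), the ONI at month `i` is the centered 3-month mean, which is exactly why this month's value isn't known until next month:

```python
def oni(nino34, i):
    """Centered running 3-month mean of the Niño 3.4 index at month i.
    Needs nino34[i + 1], the following month's value, so the ONI for the
    current month can only be computed once next month's index is in."""
    return (nino34[i - 1] + nino34[i] + nino34[i + 1]) / 3.0
```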

You can get a table of the ONI here:

Cold and warm episodes by season, Climate Prediction Center, National Weather Service.

It’s not particularly computer-readable.

Ludescher et al

Now let’s compare Ludescher et al. They say there’s an El Niño when the Niño 3.4 index is over 0.5°C for at least 5 months in a row. By not using the ONI — by using the Niño 3.4 index instead of its 3-month running mean — they could be counting some short ‘spikes’ in the Niño 3.4 index as El Niños, that wouldn’t count as El Niños by the usual definition.

I haven’t carefully checked to see how much changing the definition would affect the success rate of their predictions. To be fair, we should also let them change the value of their parameter θ, which is tuned to be good for predicting El Niños in their setup. But we can see that there could be some ‘spike El Niños’ in this graph of theirs that might go away with a different definition. These are places where the red line goes over the horizontal line for at least 5 months, but not much more:

Let’s look at the spike around 1975. See that green arrow at the beginning of 1975? That means Ludescher et al are claiming to successfully predict an El Niño sometime in the next calendar year. We can zoom in for a better look:

The tiny blue bumps are where the Niño 3.4 index exceeds 0.5.

Let’s compare the ONI as computed by the National Weather Service, month by month, with El Niños in red and La Niñas in blue:

1975: 0.5, -0.5, -0.6, -0.7, -0.8, -1.0, -1.1, -1.2, -1.4, -1.5, -1.6, -1.7

1976: -1.5, -1.1, -0.7, -0.5, -0.3, -0.1, 0.2, 0.4, 0.6, 0.7, 0.8, 0.8

1977: 0.6, 0.6, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.7, 0.8, 0.8

1978: 0.7, 0.5, 0.1, -0.2, -0.3, -0.3, -0.3, -0.4, -0.4, -0.3, -0.1, -0.1

So indeed an El Niño started in September 1976. The ONI only stayed above 0.5 for 6 months, but that’s enough. Ludescher and company luck out!
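We can check that arithmetic mechanically. This Python sketch applies the 5-consecutive-month rule to the ONI values quoted above, taking the threshold as ONI ≥ 0.5 °C (the inclusive form used in NOAA's tables); month index 0 is January 1976:

```python
# ONI values quoted above (Climate Prediction Center), Jan-Dec of each year.
oni_1976 = [-1.5, -1.1, -0.7, -0.5, -0.3, -0.1, 0.2, 0.4, 0.6, 0.7, 0.8, 0.8]
oni_1977 = [0.6, 0.6, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.7, 0.8, 0.8]

def warm_runs(series, threshold=0.5):
    """(start index, length) of each run of consecutive months with
    ONI >= threshold; runs of length >= 5 count as El Niños."""
    runs, start = [], None
    for i, x in enumerate(series + [float('-inf')]):  # sentinel closes last run
        if x >= threshold and start is None:
            start = i
        elif x < threshold and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

runs = warm_runs(oni_1976 + oni_1977)
# The first warm run starts at index 8 (September 1976) and lasts 6 months:
# just over the 5-month bar, matching the episode discussed above.
```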

Just for fun, let’s look at the National Weather service Niño 3.4 index to see what that’s like:

1975: -0.33, -0.48, -0.72, -0.54, -0.68, -1.17, -1.07, -1.19, -1.36, -1.69, -1.45, -1.76

1976: -1.78, -1.10, -0.55, -0.53, -0.33, -0.10, 0.20, 0.39, 0.49, 0.88, 0.85, 0.63

So, this exceeded 0.5 in October 1976. That’s when Ludescher et al would say the El Niño starts, if they used the National Weather Service data.

Let’s also compare the NCAR Niño 3.4 index:

1975: -0.698, -0.592, -0.579, -0.801, -1.025, -1.205, -1.435, -1.620, -1.699, -1.855, -2.041, -1.960

1976: -1.708, -1.407, -1.026, -0.477, -0.095, 0.167, 0.465, 0.805, 1.039, 1.137, 1.290, 1.253

It’s pretty different! But it also gives an El Niño in 1976 according to Ludescher et al’s definition: the Niño 3.4 index exceeds 0.5 starting in August 1976.

For further study

This time we didn’t get into the interesting question of why one definition of El Niño is better than another. For that, try:

• Kevin E. Trenberth, The definition of El Niño, Bulletin of the American Meteorological Society 78 (1997), 2771–2777.

There could also be fundamentally different kinds of El Niño. For example, besides the usual sort where high sea surface temperatures are centered in the Niño 3.4 region, there could be another kind centered farther west near the International Date Line. This is called the dateline El Niño or El Niño Modoki. For more, try this:

• Nathaniel C. Johnson, How many ENSO flavors can we distinguish?, Journal of Climate 26 (2013), 4816-4827.

which has lots of references to earlier work. Here, to whet your appetite, is his picture showing the 9 most common patterns of sea surface temperature anomalies in the Pacific:

At the bottom of each is a percentage showing how frequently that pattern has occurred from 1950 to 2011. To get these pictures Johnson used something called a ‘self-organizing map analysis’ – a fairly new sort of cluster analysis done using neural networks. This is the sort of thing I hope we get into as our project progresses!

The series so far

Just in case you want to get to old articles, here’s the story so far:

El Niño project (part 1): basic introduction to El Niño and our project here.

El Niño project (part 2): introduction to the physics of El Niño.

El Niño project (part 3): summary of the work of Ludescher et al.

El Niño project (part 4): how Graham Jones replicated the work by Ludescher et al, using software written in R.

El Niño project (part 5): how to download R and use it to get files of climate data.

El Niño project (part 6): Steve Wenner’s statistical analysis of the work of Ludescher et al.

El Niño project (part 7): the definition of El Niño.

by John Baez at August 18, 2014 06:07 AM

August 17, 2014

Sean Carroll - Preposterous Universe

Single Superfield Inflation: The Trailer

This is amazing. (Via Bob McNees and Michael Nielsen on Twitter.)

Backstory for the puzzled: here is a nice paper that came out last month, on inflation in supergravity.

Inflation in Supergravity with a Single Chiral Superfield
Sergei V. Ketov, Takahiro Terada

We propose new supergravity models describing chaotic Linde- and Starobinsky-like inflation in terms of a single chiral superfield. The key ideas to obtain a positive vacuum energy during large field inflation are (i) stabilization of the real or imaginary partner of the inflaton by modifying a Kahler potential, and (ii) use of the crossing terms in the scalar potential originating from a polynomial superpotential. Our inflationary models are constructed by starting from the minimal Kahler potential with a shift symmetry, and are extended to the no-scale case. Our methods can be applied to more general inflationary models in supergravity with only one chiral superfield.

Supergravity is simply the supersymmetric version of Einstein’s general theory of relativity, but unlike GR (where you can consider just about any old collection of fields to be the “source” of gravity), the constraints of supersymmetry place quite specific requirements on what counts as the “stuff” that creates the gravity. In particular, the allowed stuff comes in the form of “superfields,” which are combinations of boson and fermion fields. So if you want to have inflation within supergravity (which is a very natural thing to want), you have to do a bit of exploring around within the allowed set of superfields to get everything to work. Renata Kallosh and Andrei Linde, for example, have been examining this problem for quite some time.
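For readers who want the formula behind the abstract's "crossing terms": in standard N=1 supergravity (textbook material, not specific to this paper) the F-term scalar potential is fixed by the Kähler potential $K$ and superpotential $W$:

```latex
V = e^{K}\left( K^{i\bar{\jmath}}\, D_i W\, \overline{D_j W} \;-\; 3\,|W|^2 \right),
\qquad
D_i W = \partial_i W + (\partial_i K)\, W .
```

When $W$ is a polynomial, the square $|D_i W|^2$ contains cross terms between its different monomials, and the trick is to arrange those terms so that $V$ stays positive during large-field inflation despite the negative $-3|W|^2$ piece.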

What Ketov and Terada have managed to do is boil the necessary ingredients down to a minimal amount: just a single superfield. Very nice, and worth celebrating. So why not make a movie-like trailer to help generate a bit of buzz?

Which is just what Takahiro Terada, a PhD student at the University of Tokyo, has done. The link to the YouTube video appeared in an unobtrusive comment on the arXiv page for the revised version of their paper. iMovie provides a template for making such trailers, so it can't be all that hard to do, but (1) nobody else does it, so, genius, and (2) it's a pretty awesome job, with just the right touch of humor.

I wouldn’t have paid nearly as much attention to the paper without the trailer, so: mission accomplished. Let’s see if we can’t make this a trend.

by Sean Carroll at August 17, 2014 05:25 PM

August 15, 2014

Quantum Diaries

Coffee and code: Innovation at the CERN Webfest
The Particle Clicker team working late into the night.

This article was also published here on CERN’s website.

This weekend CERN hosted its third Summer Student Webfest, a three-day caffeine-fuelled coding event at which participants worked in small teams to build innovative projects using open-source web technologies.

There were a host of projects to inspire the public to learn about CERN and particle physics, and others to encourage people to explore web-based solutions to humanitarian disasters with CERN’s partner UNOSAT.

The event opened with a session of three-minute pitches: participants with project ideas tried to recruit team members with particular skills, from software development and design expertise to acumen in physics. Projects crystallised, merged or floundered as 14 pitches resulted in the formation of eight teams. Coffee was brewed and the hacking commenced…

Run Broton Run

Members of the Run Broton Run team help each other out at the CERN Summer Student Webfest 2014 (Image: James Doherty)

The weekend was interspersed with mentor-led workshops introducing participants to web technologies. CERN's James Devine detailed how Arduino products can be used to build cosmic-ray detectors or monitor LHC operation, while developers from PyBossa provided an introduction to building crowdsourced citizen science projects (see the full list of workshops).

After three days of hard work and two largely sleepless nights, the eight teams were faced with the daunting task of presenting their projects to a panel of experts, with a trip to the Mozilla Festival in London up for grabs for one member of the overall winning team. The teams presented a remarkable range of applications built from scratch in under 48 hours.

Students had the opportunity to collaborate with Ben Segal (middle), inductee of the Internet Hall of Fame.

Prizes were awarded as follows:

Best Innovative Project: Terrain Elevation

A mobile phone application that accurately measures elevation. Designed as an economical method of choosing sites with a low risk of flooding for refugee camps.

Find out more.

Best Technology Project: Blindstore

A private query database with real potential for improving online privacy.

Find out more here.

Best Design Project: GeotagX and PyBossa

An easy-to-use crowdsourcing platform for NGOs to use in responding to humanitarian disasters.

Find out more here and here.

Best Educational Project: Run Broton Run

An educational 3D game that uses Kinect technology.

Find out more here.

Overall Winning Project: Particle Clicker

Particle Clicker is an elegantly designed detector-simulation game for the web.

Play here.

“It’s been an amazing weekend where we’ve seen many impressive projects from different branches of technology,” says Kevin Dungs, captain of this year’s winning team. “I’m really looking forward to next year’s Webfest.”

Participants of the CERN Summer Student Webfest 2014 in the CERN Auditorium after three busy days' coding.

The CERN Summer Student Webfest was organised by François Grey, Ben Segal and SP Mohanty, and sponsored by the Citizen Cyberlab, Citizen Cyberscience Centre, Mozilla Foundation and The Port. Event mentors came from CERN, PyBossa and UNITAR/UNOSAT. The judges were Antonella del Rosso (CERN Communications), Bilge Demirkoz (CERN researcher) and Fons Rademakers (CTO of CERN openlab).

by James Doherty at August 15, 2014 09:18 PM

Clifford V. Johnson - Asymptotia

West Maroon Valley Wild Flowers
I promised two things in a previous post. One was the incomplete sketch I did of Crater Lake and West Maroon Valley (not far from Aspen) that I started before the downpour began, last weekend. It is on the left (click to enlarge). The other is a collection of the wild flowers and other pretty things that I picked for you (non-destructively) from my little hike in the West Maroon Valley. There's Columbine, Indian Paintbrush, and so forth, along with [...] Click to continue reading this post

by Clifford at August 15, 2014 07:48 PM

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

by Andrew at August 15, 2014 03:42 PM




Last updated:
September 02, 2014 04:36 PM
All times are UTC.
