Particle Physics Planet


October 25, 2014

Geraint Lewis - Cosmic Horizons

The Redshift Drift
Things are crazily busy, with me finishing teaching this week. Some of you may know that I am writing a book, which is progressing, but more slowly than I had hoped. I'm up to just over 60,000 words, with a goal of about 80 to 90 thousand, so more than half way through.

I know that I have to catch up with papers, and I have another article in The Conversation brewing, but I thought I would write about something interesting. The problem is that my limited brain has been occupied by so many other things that my clear thinking time has been reduced to snippets here and there.

One thing that has been on my mind is tests of cosmology. Nothing I post here will be new, but you might not know about it, so here goes.

So, the universe is expanding. But how do we know? I've written a little about this previously, but we know that almost 100 years ago, Edwin Hubble discovered his "law", that galaxies are moving away from us, and the further away they are, the faster they are moving. There's a nice article here describing the importance of this, and we end up with a picture that looks something like this
Distance is actually the hard thing to measure, and there are several books that detail astronomers' on-off love affair with measuring distances. But how about velocities?

These are measured using the redshift. It's such a simple measurement. In our laboratory, we might see emission from an element, such as hydrogen, at one wavelength, but when we observe it in a distant galaxy, we see it at another, longer, wavelength. The light has been redshifted due to the expansion of the universe (although exactly what this means can be the source of considerable confuddlement).

Here's an illustration of this:
Relating the redshift to a Doppler shift, we can turn it into a velocity. As we know, the Hubble law is what we expect if we use Einstein's theory of relativity to describe the universe. Excellent stuff all around!

One thing we do know is that the expansion rate of the universe is not uniform in time. It was very fast at the Big Bang, slowed down for much of cosmic history, before accelerating due to the presence of dark energy.

So, here we have an interesting question: due to the expansion of the universe, will the redshift I measure for a galaxy today be the same when I measure it again tomorrow?

This question was asked before I was born, and then again several times afterwards. For those who love mathematics (and who doesn't?), you get a change of redshift with time that looks like this
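
In the standard notation, with t_0 denoting the time at which the observation is made, the drift takes the form

\frac{\mathrm{d}z}{\mathrm{d}t_0} = (1+z)H_0 - H(z)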

(taken from this great paper) where z is the redshift, Ho is Hubble's constant today, while H(z) is Hubble's constant at the time the light was emitted from the galaxy you're observing.

The cool thing is that the last term depends upon the energy content of the universe: just how much mass there is, how much radiation, how much dark energy, and all the other cool things that we would like to know, like whether dark energy is evolving and changing, or interacting with matter and radiation. It would be a cool cosmological probe.

Ah, there is a problem! We know that Hubble's constant is about Ho = 72 km/s/Mpc, which seems like a nice sort of number. But if you look closely, you can see that it actually has units of 1/time. So, expressing it in years, this number is about 0.0000000001 per year. This is a small number. Bottom.
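
A quick back-of-the-envelope check in Python (my own arithmetic, not from the original post) makes the conversion explicit:

    # Convert Ho = 72 km/s/Mpc into units of 1/year (rough sanity check).
    km_per_Mpc = 3.086e19        # kilometres in one megaparsec
    seconds_per_year = 3.156e7   # seconds in one year

    H0 = 72.0                              # km/s/Mpc
    H0_per_second = H0 / km_per_Mpc        # ~2.3e-18 per second
    H0_per_year = H0_per_second * seconds_per_year

    print(H0_per_year)   # ~7e-11, i.e. roughly 0.0000000001 per year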

But this does not mean that astronomers pack up their bags and head home. No, you look for solutions and see if you can come up with technologies that allow you to measure this tiny shift. I could write an entire post on this, but people are developing laser combs to give extremely accurate measurements of the wavelengths in spectra, and actually measure the changing expansion of the Universe in real time!

Why am I writing about this? Because these direct tests of cosmology have always fascinated me, and every so often I start doodling with the cosmological equations to see if I can come up with another one. Often this ends up as a page of squiggles and goes nowhere, but sometimes I have what I think is a new insight.


And this gives me a chance to spruik an older paper of mine, with my then PhD student, Madhura Killedar. I still love this stuff!


The evolution of the expansion rate of the Universe results in a drift in the redshift of distant sources over time. A measurement of this drift would provide us with a direct probe of expansion history. The Lyman alpha forest has been recognized as the best candidate for this experiment, but the signal would be weak and it will take next generation large telescopes coupled with ultra-stable high resolution spectrographs to reach the cm/s resolution required. One source of noise that has not yet been assessed is the transverse motion of Lyman alpha absorbers, which varies the gravitational potential in the line of sight and subsequently shifts the positions of background absorption lines. We examine the relationship between the pure cosmic signal and the observed redshift drift in the presence of moving Lyman alpha clouds, particularly the collapsed structures associated with Lyman limit systems (LLSs) and damped Lyman alpha systems (DLAs). Surprisingly, the peculiar velocities and peculiar accelerations both enter the expression, although the acceleration term stands alone as an absolute error, whilst the velocity term appears as a fractional noise component. An estimate of the magnitude of the noise reassures us that the motion of the Lyman alpha absorbers will not pose a threat to the detection of the signal.

by Cusp (noreply@blogger.com) at October 25, 2014 09:08 PM

Geraint Lewis - Cosmic Horizons

Catching the Conversation
Wow!!! Where has time gone! I must apologise for the sluggishness of posts on this blog. I promise you that it is not dead; I have been consumed with a number of other things, and not all of them fun. I will get back to interesting posts as soon as possible.

So, here's a couple of articles I've written in the meantime, appearing in The Conversation

One on some of my own research: Dark matter and the Milky Way: more little than large



And the other on proof (or lack of it) in science: Where’s the proof in science? There is none



There's more to come :)

by Cusp (noreply@blogger.com) at October 25, 2014 08:54 PM

Peter Coles - In the Dark

Thought for the Day

A good world needs knowledge, kindliness, and courage; it does not need a regretful hankering after the past, or a fettering of the free intelligence by the words uttered long ago by ignorant men.

Bertrand Russell (1927)


by telescoper at October 25, 2014 01:06 PM

Jacques Distler - Musings

Wikipedia

Wow! After a decade, Wikipedia finally rolls out MathML rendering. Currently, only available (as an optional preference) to registered users. Hopefully, in a few more years, they’ll make it the default.

Some implementation details are available at Frédéric’s blog.

by distler (distler@golem.ph.utexas.edu) at October 25, 2014 06:19 AM

October 24, 2014

Christian P. Robert - xi'an's og

Rivers of London [book review]

London by Delta, Dec. 14, 2011

Yet another book I grabbed on impulse while in Birmingham last month. And which had been waiting for me on a shelf of my office in Warwick. Another buy I do not regret! Rivers of London is delightful, as much for taking place in all corners of London as for the story itself. Not to mention the highly enjoyable writing style!

“I thought you were a sceptic, said Lesley. I thought you were scientific”

The first volume in this detective+magic series, Rivers of London, sets up the universe of this mix of traditional Metropolitan Police work and urban magic, the title referring to the deities of the rivers of London, including a Mother and a Father Thames… I usually dislike any story mixing modern life and fantasy, but this is a definite exception! What I enjoy in this book is primarily the language, which is so uniquely English (to the point of having the U.S. edition edited!, if the author’s blog is to be believed). And the fact that it is so much about London, its history and inhabitants. But mostly about London as an entity in its own right. Even though my experience of London is limited to a few boroughs, there are many passages where I can relate to the location, and this obviously makes the story much more appealing. The style is witty, ironic and full of understatements, a true pleasure.

“The tube is a good place for this sort of conceptual breakthrough because, unless you’ve got something to read, there’s bugger all else to do.”

The story itself is rather fun, with at least three levels of plot and two types of magic. It centres around two freshly hired London constables, one of whom discovers magical abilities and is drafted into the supernatural section of the Metropolitan Police. And delivers all the monologues in the book. The supernatural section consists of a single Inspector, plus a few side characters, but with enough fancy details to give it life. In particular, Isaac Newton is credited with having started the section, called The Folly. Which is also the name of Ben Aaronovitch’s webpage.

“There was a poster (…) that said: `Keep Calm and Carry On’, which I thought was good advice.”

This quote is involuntarily funny in that it takes place in a cellar holding material from World War II. Except that the now invasive red and white poster was never distributed during the war… On the contrary, it was pulped to save paper, and the fact that a few copies survived is a sort of (minor) miracle. Hence a double anachronism, in that it did not belong in a WWII room and that Peter Grant should have seen its modern avatars all over London.

“Have you ever been to London? Don’t worry, it’s basically  just like the country. Only with more people.”

The last part of the book is darker and feels less well-written, maybe simply because of the darker tone and the accumulation of events, while the central character gets rather too central and becomes too much of an unexpected hero who saves the day. There is in particular a part where he seems to forget about his friend Lesley, who is in deep trouble at the time, and this does not seem to make much sense. But, except for this lapse (maybe due to my quick reading of the book over the week in Warwick), the flow and pace are great, with this constant undertone of satire and wit from the central character. I am definitely looking forward to reading tomes 2 and 3 in the series (having already read tome 4 in Austria!, which was a mistake as there were spoilers about earlier volumes).


Filed under: Books, Kids, Travel Tagged: Ben Aaronnovitch, book review, cockney slang, ghosts, Isaac Newton, Keep calm posters, London, magics, Metropolitan Police, Peter Grant series, Thames, Warwick

by xi'an at October 24, 2014 10:14 PM

arXiv blog

Why Quantum "Clippers" Will Distribute Entanglement Across The Oceans

The best way to build a global quantum internet will use containerships to carry qubits across the oceans, say physicists.

October 24, 2014 08:36 PM

The Great Beyond - Nature blog

WHO plans for millions of doses of Ebola vaccine by 2015

Posted on behalf of Declan Butler.

The World Health Organization (WHO) announced plans on 24 October to produce millions of doses of two experimental Ebola vaccines by the end of 2015.

The Ebola virus has caused about 5,000 deaths in West Africa during the current epidemic.

US National Institute of Allergy and Infectious Disease

Hundreds of thousands of doses should be available to help affected countries  before the end of June, the WHO said at the conclusion of a meeting in Geneva.  Vaccine makers, high-level government representatives, and regulatory and other bodies gathered to discuss the design and timing of planned clinical trials, as well as issues of supply and funding for mass vaccination programmes.

Phase I trials of two vaccine candidates have started, and as many as five additional vaccines could begin testing by 2015, says Marie-Paule Kieny, WHO assistant director-general for health systems and innovation.

As of 19 October, Ebola had infected almost 10,000 people in Sierra Leone, Liberia and Guinea and killed around 5,000 of them, the WHO estimates. The true figures are likely higher, as many cases go unreported. With no end to the epidemic yet in sight,  a working vaccine could be a game-changer.

First clinical trials underway

The two vaccines whose production will be increased are already in early stage testing in healthy volunteers. One is a chimpanzee adenovirus vaccine containing a surface Ebola protein (ChAd3), developed by the US National Institute of Allergy and Infectious Diseases and drug giant GlaxoSmithKline. It is being tested in the United States, the United Kingdom and Mali.

The other is a recombinant vesicular stomatitis virus (rVSV) vaccine, developed by the Public Health Agency of Canada and licensed to NewLink Genetics in Ames, Iowa. It is being tested in the United States, with plans to start trials soon in Europe and Africa.

These phase 1 trials will assess the vaccines’ safety and whether they elicit levels of immune response that have been shown to confer protection in non-human primates. The trials will also assess the dose needed to generate sufficient immune response, which in turn helps determine how quickly manufacturers can produce doses.

A third candidate is a two-vaccine regimen: one developed by US pharmaceutical company Johnson and Johnson and the US National Institute of Allergy and Infectious Diseases, and another by Bavarian Nordic, a biotechnology company based in Denmark. It will begin phase 1 testing in the United States and Europe in January. Johnson and Johnson announced on 22 October that it would spend up to US$200 million to fast track the vaccine’s development; it plans to produce more than a million doses in 2015, with 250,000 available by May.

Advanced testing

The first phase II and III trials, to test efficacy as well as safety, are set to start in Liberia in December and  in Sierra Leone in January.  The current plan is to test both the GSK and NewLink vaccines simultaneously, but that could change depending on the results of the ongoing phase I trials. Data from the phase II and III tests is expected by April, Kieny says.

The ‘three-arm’ Liberia trial would test and compare the safety and effectiveness of the two vaccines against each other and a placebo. Each vaccine would be tested on 10,000 subjects, with an equal number of subjects given placebo. This allows researchers to obtain quick, reliable data on how well the vaccines work.

A  ‘stepped-wedge’ randomized trial in Sierra Leone would give subjects vaccine sequentially, with no group given a placebo. This is useful for testing products that are expected to benefit patients, and products that are in short supply.

No trial design has yet been fixed for Guinea, where a lack of infrastructure has precluded early testing. If the Liberia and Sierra Leone trials show that the vaccines work and are safe, subsequent trials in Guinea would be used to answer follow-up questions.

Ethical and practical considerations

The Sierra Leone trial will enrol at least 8,000 healthcare workers and other frontline responders, such as ambulance drivers and burial workers. The Liberia trial might include healthcare workers, but these would not be the primary study population, Kieny says.

Any decision to give a placebo to healthcare and other frontline workers will be controversial; many consider it to be unethical, given these individuals’ work caring for Ebola patients, and the risks that they face in doing so.

Mass vaccinations are usually only carried out after years of trials to accumulate full safety and efficacy data. The proposed timeline for Ebola vaccine development is therefore unprecedented.

If existing public-health interventions used to control Ebola outbreaks begin to slow the epidemic, the  need for mass vaccination will lessen, Kieny says. But if the epidemic continues to expand, the WHO could consider expanding vaccination programmes.

In the meantime, the WHO and its partners are considering how best to engage with communities to prepare for vaccination programmes. Another issue is simply determining how to keep the vaccine cold enough (-80 degrees Celsius) to maintain its efficacy. This will require specialised fridges and the establishment of cold supply chains to affected areas.

Also to be determined: who will pay for mass vaccination. Kieny says simply that “money will not be an issue”. Aid groups and governments have begun to pledge support for such efforts. Médecins Sans Frontières (MSF) has said that it will create a fund for Ebola vaccination, while the European Union has committed €200 million. The GAVI vaccine alliance, the main sponsor of routine vaccinations in low-income countries, is also looking at how it could bring its vast resources and experience to the table. It will put a plan to its board in December as to what role it could play in any Ebola mass vaccination.

by Lauren Morello at October 24, 2014 07:47 PM

Emily Lakdawalla - The Planetary Society Blog

Surveyor Digitization Project Will Bring Thousands of Unseen Lunar Images to Light
A team of scientists at the University of Arizona plan to digitize 87,000 vintage images from the surface of the moon, of which less than two percent have ever been seen.

October 24, 2014 07:03 PM

astrobites - astro-ph reader's digest

Mind the Gaps

“Music is the silence between the notes.” – Claude Debussy

Astronomical data gathered over time has gaps. For instance, when using a ground-based telescope, there is the pesky fact that roughly half of every 24-hour period is lit by the Sun. Or, the star you want to look at isn’t above the horizon, or clouds are blocking it. Even the most reliable space telescopes suffer from occasional pauses in their otherwise constant watchfulness.

Why are gaps a problem? Can’t astronomers just analyze the short chunks of data that don’t have gaps? Besides, no observation is truly continuous: there is always some gap between data points. Why should slightly longer or shorter gaps really make a difference?

The answer: Fourier transforms.

The Fourier transform “is like a mathematical prism—you feed in a wave and it spits out the ingredients of that wave.” (Read more of the superb Nautilus piece explaining the Fourier transform here.) It is an incredibly versatile data analysis tool. But in order for it to work perfectly, there are a couple important rules. First, the starting wave, or dataset, can have no beginning or end. Second, all the data points must be evenly spaced.

Of course, those of us leftward-of-mathematician on the field purity scale know that gap-free, infinite observations are never going to happen. So we need to fill gaps and mask edges. Today’s paper takes a look at how this is often done (spoiler: not carefully enough), and proposes a new gap-filling method to better preserve all the information in stellar light curves.


Periodogram (calculated using a Fast Fourier Transform) of a Delta Scuti star’s pulsations. The blue version has gaps in the light curve filled with simple linear interpolation, while the red version has used Pascual-Granada et al.’s new MIARMA algorithm to fill the gaps. Figure 1 from the paper.

The image above compares two slightly different Fourier transforms of a pulsating Delta Scuti star light curve, observed by the CoRoT satellite. The blue transform uses a common gap-filling technique: linear interpolation. This is simply drawing a straight line from the last point before a gap to the first point after it and pretending points on that line are observations with the same regular spacing as the real data. In contrast, the red transform uses a new algorithm called MIARMA to fill gaps in the light curve. As you can see, the frequencies and their heights and patterns are very different between these two methods. Since the main goal of asteroseismology is to learn about the insides of stars by studying their oscillation frequencies, you had better be sure you are studying the correct frequencies!
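
To get a feel for why the choice of gap-filler matters, here is a small illustrative Python sketch (mine, not the authors' code, and using a fake single-frequency light curve rather than real CoRoT data): once regular daily gaps are filled by linear interpolation, spurious sidelobes appear around the true frequency, separated by the gap repetition frequency.

    import numpy as np

    # Fake light curve: one 5.2 cycles/day pulsation sampled every 2 minutes for 20 days.
    dt = 2.0 / (60 * 24)                      # time step in days
    t = np.arange(0, 20, dt)
    flux = np.sin(2 * np.pi * 5.2 * t)

    # Remove a third of each day (say, daylight) to create periodic gaps.
    observed = (t % 1.0) < 0.67
    gappy = np.where(observed, flux, np.nan)

    # Fill the gaps with straight lines between the surviving points.
    filled = gappy.copy()
    bad = np.isnan(filled)
    filled[bad] = np.interp(t[bad], t[~bad], filled[~bad])

    # Compare periodograms of the complete and the gap-filled series.
    freqs = np.fft.rfftfreq(len(t), d=dt)     # cycles per day
    power_true = np.abs(np.fft.rfft(flux)) ** 2
    power_filled = np.abs(np.fft.rfft(filled)) ** 2

    # Spurious peaks show up at 5.2 +/- 1 cycles/day (the gap repetition frequency).
    for f0 in (4.2, 5.2, 6.2):
        i = np.argmin(np.abs(freqs - f0))
        print(f"{f0:.1f} c/d   true: {power_true[i]:.2e}   filled: {power_filled[i]:.2e}")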

Pascual-Granada et al. create the MIARMA algorithm using an autoregressive moving average (ARMA) model. In essence, it uses the data on either side of a gap to predict what happens inside the gap, extrapolating forward from the data before it and backward from the data after it (an autoregression), and it does this many times for each gap with different combinations of data points (a moving average).
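
For flavour, here is a toy Python sketch of the autoregressive part of the idea (emphatically not the actual MIARMA algorithm; the function names and the order p=20 are just illustrative choices): fit an AR model to the data on each side of a gap and blend the forward and backward extrapolations.

    import numpy as np

    def fit_ar(x, p):
        """Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-1-k]."""
        rows = [x[i - p:i][::-1] for i in range(p, len(x))]
        coeffs, *_ = np.linalg.lstsq(np.array(rows), x[p:], rcond=None)
        return coeffs

    def ar_extrapolate(history, coeffs, n_steps):
        """Iterate the AR recursion n_steps beyond the end of 'history'."""
        p = len(coeffs)
        buf = list(history[-p:])
        out = []
        for _ in range(n_steps):
            nxt = float(np.dot(coeffs, buf[::-1]))   # most recent value first
            out.append(nxt)
            buf = buf[1:] + [nxt]
        return np.array(out)

    def fill_gap(before, after, gap_len, p=20):
        """Blend a forward prediction from 'before' with a backward one from 'after'."""
        fwd = ar_extrapolate(before, fit_ar(before, p), gap_len)
        bwd = ar_extrapolate(after[::-1], fit_ar(after[::-1], p), gap_len)[::-1]
        w = np.linspace(0.0, 1.0, gap_len)   # trust whichever side of the gap is closer
        return (1 - w) * fwd + w * bwd

Dropped into the previous sketch in place of np.interp, this carries the oscillation smoothly across each gap rather than drawing a straight line through it.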


Filling gaps in a solar-type star’s light curve spanning two days. Blue points are CoRoT observations, red points are gaps filled with MIARMA, and green points are gaps filled with linear interpolation. Figure 6 from the paper.

To demonstrate MIARMA preserves information better than linear interpolation, the authors test it on three different variable stars observed with CoRoT. They study the Delta Scuti pulsator described above, a Be star with longer time variations, and a rapidly-varying solar-type star.

Overall, MIARMA makes the biggest difference for the two stars with light curves that vary more slowly. For these, frequency spikes present in the linear interpolation case match with how often gaps tend to occur. The MIARMA Fourier transform lacks these telltale spikes and is free of aliasing—a common problem in signal processing in which incorrect frequencies and amplitudes are inferred because you aren’t recording data often enough. But the choice of gap-filler does not matter as much for more rapidly-varying solar-type stars. This makes sense because the typical separation between two gaps is long compared to how quickly the star is varying. As a result, the scientifically interesting frequencies are less susceptible to being affected by the gaps.

The authors report that their new method will be used to process all CoRoT data going forward, and can be adapted to work with Kepler data too. This is an important reminder that scientists must deeply understand their data. Sometimes the most problematic data points are none at all.

by Meredith Rawls at October 24, 2014 06:59 PM

The Great Beyond - Nature blog

US research ethics agency upholds decision on informed consent

United States regulators are standing by their decision that parents were not properly informed of the risks of a clinical trial in which premature babies received different levels of oxygen supplementation.

From 2005 to 2009, the Surfactant, Positive Pressure, and Oxygenation Randomized Trial (SUPPORT) randomly assigned 1,316 premature babies to receive one of two levels of oxygen supplementation in an effort to test which level was best. Though the lower level was associated with increased risk of brain damage and possibly death, and the higher level with blindness, the study leaders said that they did not disclose these risks to parents because all ranges of oxygen used in the trial were considered to be within the medically appropriate range at the time.

The study was supported by the US National Institutes of Health (NIH). On 7 March, 2013, the US Office of Human Research Protections (OHRP) issued a letter determining that the trial investigators had not adequately informed parents about the risks to their babies in the SUPPORT trial. The NIH and many researchers disputed the decision, arguing that it would impede “comparative effectiveness research” studies that are designed to test the best use of approved interventions. Parents of children in the trial, however, and others supported the OHRP’s determination that parents hadn’t received adequate information. The two sides clashed at a meeting convened by NIH and OHRP in August 2013.

Today, 24 October 2014, the OHRP has issued guidance reiterating and clarifying its position on what types of risks must be disclosed to study subjects in comparative effectiveness research studies such as SUPPORT. The agency has determined that risks of the intervention must be disclosed to study participants even if the risks are considered acceptable according to current medical guidelines, if the study intends to evaluate these risks and if the patients’ risks will change when they enroll in the study.

The OHRP said that even though both the low and high levels of oxygen supplementation were considered within the acceptable range, “the key issue is that the treatment and possible risks infants were exposed to in the research were different from the treatment and possible risks they would have been exposed to if they had not been in the trial.”

“[F]or the great majority of infants in the trial, it is likely that their participation altered the level of oxygen they received compared to what they would have received had they not participated,” the OHRP added.

The agency said further that if a trial is designed to compare the risks of potential side effects of a treatment already in use, then the risks are “reasonably foreseeable” and that prospective study participants should be made aware of them.

“If a specific risk has been identified as significant enough that it is important for the Federal government to spend taxpayer money to better understand the extent or nature of that risk, then that risk is one that prospective subjects should be made aware of so that they can decide if they want to be exposed to it,” OHRP said.

The guidance is open to comments until 24 December.

by Erika Check Hayden at October 24, 2014 05:32 PM

The Great Beyond - Nature blog

Western Australia abandons shark cull

Western Australia Premier Colin Barnett

Government of Western Australia

The state of Western Australia is abandoning a controversial shark-culling programme, but has also gained the right to deploy deadly baited lines for animals that pose an “imminent threat”.

The programme, run by the state government off several Western Australian beaches, had been heavily criticised by scientists when it was announced in 2013. It was due to run until 2017, and had caught at least 170 sharks using hooks suspended from drums moored to the seafloor.

In September the state’s own Environmental Protection Agency halted it. State Premier Colin Barnett then applied to the national government for permission to resume it, but today he announced that his government had ended that effort. “We have withdrawn the application after reaching agreement with the Commonwealth which enables us to take immediate action when there is an imminent threat,” said Barnett.

Under an agreement with the national government, Western Australia will in future be able to kill a shark that has attacked, or one that it thinks poses a threat. Protocols for how this would happen are now in development.

This apparent concession from the national government has drawn some concern from those celebrating the end of the cull.

“I remain concerned that drum lines could be used in some instances as part of emergency measures and particularly that this could occur without Federal approval,” said Rachel Siewert, the marine spokeswoman for the Australian Greens, in a statement.

The Western Australia cull is also drawing renewed attention to the long-standing cull in Queensland, which continues unabated.

by Daniel Cressey at October 24, 2014 02:47 PM

Symmetrybreaking - Fermilab/SLAC

Cosmic inflation

Cosmic inflation refers to a period of rapid, accelerated expansion that scientists think took place about 14 billion years ago.


Our universe has likely never grown as quickly as it did during that period. Faster than the blink of an eye, the whole universe expanded so that a region the size of an atom was suddenly the size of a grapefruit.

Scientists think this expansion was driven by the potential energy of the inflaton field, a new field that turned on just after the big bang.

Support for the theory of cosmic inflation comes from the Cosmic Microwave Background, or CMB, a pattern of light released when the early universe first cooled enough for light to travel freely through it.

Although nearly uniform, the CMB contains ripples. Scientists think these were caused by tiny quantum fluctuations that were amplified to huge scales by cosmic inflation.

Scientists study cosmic inflation through experiments at telescopes, such as the Planck satellite and BICEP2 at the South Pole. These experiments measure elements of the CMB, looking for the footprints of inflation.

When inflation ended, the expansion of our universe began to slow down. But then another influence took over, pushing it back to an accelerating rate. This influence is thought to be dark energy.

 


by Rhianna Wisniewski at October 24, 2014 01:00 PM

Quantum Diaries

Where the wind goes sweeping ’round the ring?

I travel a lot for my work in particle physics, but it’s usually the same places over and over again — Fermilab, CERN, sometimes Washington to meet with our gracious supporters from the funding agencies.  It’s much more interesting to go someplace new, and especially somewhere that has some science going on that isn’t particle physics.  I always find myself trying to make connections between other people’s work and mine.

This week I went to a meeting of the Council of the Open Science Grid that was hosted by the Oklahoma University Supercomputing Center for Education and Research in Norman, OK.  It was already interesting that I got to visit Oklahoma, where I had never been before.  (I think I’m up to 37 states now.)  But we held our meeting in the building that hosts the National Weather Center, which gave me an opportunity to take a tour of the center and learn a bit more about how research in meteorology and weather forecasting is done.

OU is the home of the largest meteorology department in the country, and the center hosts a forecast office of the National Weather Service (which produces forecasts for central and western Oklahoma and northern Texas, at the granularity of one hour and one kilometer) and the National Severe Storms Laboratory (which generates storm watches and warnings for the entire country — I saw the actual desk where the decisions get made!).  So how is the science of the weather like and not like the science that we do at the LHC?

(In what follows, I offer my sincere apologies to meteorologists in case I misinterpreted what I learned on my tour!)

Both are fields that can generate significant amounts of data that need to be interpreted to obtain a scientific result.  As has been discussed many times on the blog, each LHC experiment records petabytes of data each year.  Meteorology research is performed by much smaller teams of observers, which makes it hard to estimate their total data volume, but the graduate student who led our tour told us that he is studying a mere three weather events, but he has more than a terabyte of data to contend with — small compared to what a student on the LHC might have to handle, but still significant.

But where the two fields differ is what limits the rate at which the data can be understood.  At the LHC, it’s all about the processing power needed to reconstruct the raw data by performing the algorithms that turn the voltages read out from millions of amplifiers into the energies and momenta of individual elementary particles.  We know what the algorithms for this are, we know how to code them; we just have to run them a lot.  In meteorology, the challenge is getting to the point where you can even make the data interpretable in a scientific sense.  Things like radar readings still need to be massaged by humans to become sensible.  It is a very labor-intensive process, akin to the work done by the “scanner girls” of the particle physics days of yore, who carefully studied film emulsions by eye to identify particle tracks.  I do wonder what the prospects are in meteorology for automating this process so that it can be handed off to computers instead.  (Clearly this has to apply more towards forefront research in the field about how tornadoes form and the like, rather than to the daily weather predictions that just tell you the likelihood of tornado-forming conditions.)

Weather forecasting data is generally public information, accessible by anyone.  The National Weather Service publishes it in a form that has already had some processing done on it so that it can be straightforwardly ingested by others.  Indeed, there is a significant private weather-forecasting industry that makes use of this, and sells products with value added to the NWS data.  (For instance, you could buy a forecast much more granular than that provided by the NWS, e.g. for the weather at your house in ten-minute intervals.)  Many of these companies rent space in buildings within a block of the National Weather Center.  The field of particle physics is still struggling with how to make our data publicly available (which puts us well behind many astronomy projects which make all of their data public within a few years of the original observations).  There are concerns about how to provide the data in a form that will allow people who are not experts to learn something from the data without making mistakes.  But there has been quite a lot of progress in this in recent years, especially as it has been recognized that each particle physics experiment creates a unique dataset that will probably never be replicated in the future.  We can expect an increasing number of public data releases in the next few years.  (On that note, let me point out the NSF-funded Data and Software Preservation for Open Science (DASPOS) project that I am associated with on its very outer edges, which is working on some aspects of the problem.)  However, I’d be surprised if anyone starts up a company that will sell new interpretations of LHC data!

Finally, here’s one thing that the weather and the LHC has in common — they’re both always on!  Or, at least we try to run the LHC for every minute possible when the accelerator is operational.  (Remember, we are currently down for upgrades and will start up again this coming spring.)  The LHC experiments have physicists on on duty 24 hours a day, monitoring data quality and ready to make repairs to the detectors should they be needed.  Weather forecasters are also on shift at the forecasting center and the severe-storm center around the clock.  They are busy looking at data being gathered by their own instruments, but also from other sources.  For instance, when there are reports of tornadoes near Oklahoma City, the local TV news stations often send helicopters out to go take a look.  The forecasters watch the TV news to get additional perspectives on the storm.

Now, if only the weather forecasters on shift could make repairs to the weather just like our shifters can fix the detector!

by Ken Bloom at October 24, 2014 05:14 AM

Emily Lakdawalla - The Planetary Society Blog

GSA 2014: The puzzle of Gale crater's basaltic sedimentary rocks
At the Geological Society of America conference this week, Curiosity scientists dug into the geology of Gale crater and shared puzzling results about the nature of the rocks that the rover has found there.

October 24, 2014 12:31 AM

October 23, 2014

Christian P. Robert - xi'an's og

Feller’s shoes and Rasmus’ socks [well, Karl's actually...]

Yesterday, Rasmus Bååth [of puppies' fame!] posted a very nice blog using ABC to derive the posterior distribution of the total number of socks in the laundry when only pulling out orphan socks and no pair at all in the first eleven draws. Maybe not the most pressing issue for Bayesian inference in the era of Big data but still a challenge of sorts!

Rasmus set a prior on the total number m of socks, a negative Binomial Neg(15,1/3) distribution, and another prior on the proportion of socks that come in pairs, a Beta B(15,2) distribution, then simulated pseudo-data by picking eleven socks at random, and at last applied ABC (in Rubin’s 1984 sense) by waiting for the observed event, i.e. only orphans and no pair [of socks]. Brilliant!

The overall simplicity of the problem set me wondering about an alternative solution using the likelihood. Cannot be that hard, can it?! After a few computations, rejected after checking them against experimental frequencies, I put the problem on hold until I was back home with access to my Feller volume 1, one of the few [math] books I keep at home… as I was convinced one of the exercises in Chapter II would cover this case. After checking, I found a partial solution, namely Exercise 26:

A closet contains n pairs of shoes. If 2r shoes are chosen at random (with 2r<n), what is the probability that there will be (a) no complete pair, (b) exactly one complete pair, (c) exactly two complete pairs among them?

This is not exactly a solution, but rather a problem; however, it leads to the value

p_j=\binom{n}{j}2^{2r-2j}\binom{n-j}{2r-2j}\Big/\binom{2n}{2r}

as the probability of obtaining j pairs among those 2r shoes. Which also works for an odd number t of shoes:

p_j=2^{t-2j}\binom{n}{j}\binom{n-j}{t-2j}\Big/\binom{2n}{t}

as I checked against my large simulations. So I solved Exercise 26 in Feller volume 1 (!), but not Rasmus’ problem, since there are those orphan socks on top of the pairs. If one draws 11 socks out of m socks made of f orphans and g pairs, with f+2g=m, the number k of socks coming from the orphan group is a hypergeometric H(11,m,f) rv, and the probability of observing no complete pair among the 11 socks (whether they come from the orphan or from the paired group) is thus the marginal over all possible values of k:

\sum_{k=0}^{11} \dfrac{\binom{f}{k}\binom{2g}{11-k}}{\binom{m}{11}}\times\dfrac{2^{11-k}\binom{g}{11-k}}{\binom{2g}{11-k}}

so it could be argued that we are facing a closed-form likelihood problem. Even though it presumably took me longer to arrive at this formula than it took Rasmus to run his exact ABC code!
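
Out of curiosity, the closed-form expression is easy to check against a brute-force simulation, as in the small Python sketch below; the values f = 3 orphans and g = 21 pairs are purely illustrative, not Karl's actual laundry, and the binomial factors in 2g cancel between the two terms of the summand.

    from math import comb
    import random

    def prob_no_pair(f, g, n_draw=11):
        """Closed-form probability of no complete pair among n_draw socks
        drawn from f orphan socks plus g pairs (m = f + 2g socks in total).
        The binom(2g, n_draw - k) factors cancel, leaving this simpler sum."""
        m = f + 2 * g
        return sum(comb(f, k) * 2 ** (n_draw - k) * comb(g, n_draw - k)
                   for k in range(n_draw + 1)) / comb(m, n_draw)

    def simulate_no_pair(f, g, n_draw=11, n_sim=200_000):
        """Monte Carlo estimate of the same probability."""
        socks = [("orphan", i) for i in range(f)] + \
                [("pair", j) for j in range(g) for _ in (0, 1)]
        hits = 0
        for _ in range(n_sim):
            drawn = random.sample(socks, n_draw)
            pair_ids = [label for kind, label in drawn if kind == "pair"]
            if len(pair_ids) == len(set(pair_ids)):   # no pair id appears twice
                hits += 1
        return hits / n_sim

    f, g = 3, 21   # hypothetical laundry: 3 orphan socks and 21 pairs
    print(prob_no_pair(f, g), simulate_no_pair(f, g))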


Filed under: Books, Kids, R, Statistics, University life Tagged: ABC, capture-recapture, combinatorics, subjective prior, William Feller

by xi'an at October 23, 2014 10:14 PM

arXiv blog

Data Mining Reveals How News Coverage Varies Around the World

Last year, the news media reported on 195,000 disasters around the world. The ones you heard about depend crucially on your location.


One interesting question about the nature of news is how well it reflects the pattern of real events around the world. It’s natural to assume that people living in a certain part of the world are more likely to read, see and hear about news from their own region. But what of the international news they get—how does that compare to the international news that people in other parts of the world receive?

October 23, 2014 07:00 PM

Clifford V. Johnson - Asymptotia

Reading Storm…
For a while back there earlier this week I was in a storm of reading duties of the sort that I hope not to see again in a while. A lot of it had to be put off at the end of the week before because I wanted to prepare my talk for Sunday, which took a little more time than I'd planned since I wanted to do some drawings for it. All of it had a deadline. Monday was to see me participating in a podcast at the USC Bedrosian Center to discuss the book "Beyond the University: Why Liberal Education Matters", by Michael S. Roth. I had the book for about six weeks, and started reading it when I first got it... but found that I was getting through it too fast too early and wanted to have it fresher in my mind for the podcast, so I held off until closer to the date. Unfortunately, this then clashed with two promotion dossiers that got scheduled for a Tuesday meeting, both from book-heavy fields, and so that added three books on language, representation, business and history (tangled up in a fascinating way) that I can't tell you about since the proceedings of the relevant committee are confidential. Then I remembered that a Ph.D. thesis exam had been moved from the previous week to that same Tuesday (and I had put off the reading) and so I had a thesis to read as well. (Not to mention all the dossier letters, statements, committee reports, and so forth that come from reading two promotion dossiers...) A lot of the reading is also fun, but it's certainly hard work and one is reading while taking careful notes for later reference, in a lot of the instances. I always end up surprising myself with how much fun I have learning about topics far beyond my field when I read promotion dossiers for other areas. I'm certainly not an expert (and that is not why I'm called into service in such cases) so I'm reading with an eye on seeing what the quality of scholarship is, and what the voice of the person writing is like. These are things that (if you are not of the tedious point of view that your own field of inquiry is somehow king of the disciplines (a view we physicists all too often seem to have)) can be glimpsed and sometimes firmly perceived by wading deep into the pool of their work and keeping an open mind. I strongly recommend the Roth book about what the point [...] Click to continue reading this post

by Clifford at October 23, 2014 05:06 PM

The Great Beyond - Nature blog

Fundamental overhaul of China’s competitive funding

On 20 October, the Chinese government announced the passage of a reform plan that will fundamentally reshape research in the country.

By 2017, the main competitive government funding initiatives will be eliminated. This includes the ‘863’ and ‘973’ programmes, two channels for large grants that have been at the heart of modern China’s development of science and technology infrastructure since being established in 1986 and 1997, respectively.

Xi Jinping, General Secretary of the Communist Party of China, is behind reforms to overhaul research in the country.


By Antilong (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

The government announcement noted that wastefulness and fragmented management have led to overlaps and inefficient use of funds for science and technology, and to the need for a unified platform for distributing grants. As new funding programmes have been added over the years, competitive funding has become divided among some 100 competitive schemes overseen by about 30 different governmental departments.

Although efforts to reorganize science in China are already underway, the latest reform will be comprehensive. Science and technology spending by the central government was 77.4 billion yuan renminbi (US$12.6 billion) in 2006 but jumped to 236 billion yuan renminbi in 2013, 11.6% of the central government’s direct public expenditure. Some 60% of this is competitive funding, and subject to change under the new reforms. To maintain stability, the overhaul will not affect the remaining 40%, which covers operation costs for research institutes and key state laboratories.

The new plan, jointly drafted by the ministries of science and technology and the ministry of finance, will reorganize competitive funding into five new channels: the National Natural Science Foundation (which currently distributes many of the small-scale competitive grants); national science and technology major projects; key national research and development programmes; a special fund to guide technological innovation; and special projects for developing human resources and infrastructure. These five will be managed under a new science and technology agency that will unify planning and assessment of scientific projects.

 

 

 

by David Cyranoski at October 23, 2014 02:57 PM

Symmetrybreaking - Fermilab/SLAC

Australia’s first dark matter experiment

A proposed dark matter experiment would use two underground detectors, one in each hemisphere.

Physicists are hoping to hit pay dirt with a proposed experiment—the first of its kind in the Southern Hemisphere—that would search for traces of dark matter more than a half mile below ground in Victoria, Australia.

The current plan, now being explored by an international team, is for two new, identical dark matter experiments to be installed and operated in parallel—one at an underground site at Gran Sasso National Laboratory in Italy, and the other at the Stawell Gold Mine in Australia.

“An experiment of this significance could ultimately lead to the discovery of dark matter,” says Elisabetta Barberio of the ARC Centre of Excellence for Particle Physics at the Terascale (CoEPP) and the University of Melbourne, who is Australian project leader for the proposed experiment.

The experiment proposal was discussed during a two-day workshop on dark matter in September. Work could begin on the project as soon as 2015 if it gathers enough support. “We’re looking at logistics and funding sources,” Barberio says.

The experiments would be modeled after the DAMA experiment at Gran Sasso, now called DAMA/LIBRA, which in 1998 found a possible sign of dark matter.

DAMA/LIBRA looks for seasonal modulation, an ebb and flow in the amount of potential dark matter signals it sees depending on the time of year.

If the Milky Way is surrounded by a halo of dark matter particles, then the sun is constantly moving through it, as is the Earth. The Earth’s orbit around the sun causes the two to spend half of the year moving in the same direction and the other half moving in opposite directions. During the six months in which the Earth and sun are cooperating, a dark matter detector on the Earth will move faster through the dark matter particles, giving it more opportunities to catch them.
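
The size of the effect can be sketched with a few round numbers (mine, purely illustrative): the sun moves through the halo at roughly 230 km/s, the Earth orbits at about 30 km/s, and only the component of that orbital velocity along the sun's motion (a projection factor of roughly one half) adds or subtracts, giving an annual modulation of a few percent that peaks around the start of June.

    import numpy as np

    v_sun = 230.0      # km/s, rough speed of the sun through the dark matter halo
    v_earth = 29.8     # km/s, Earth's orbital speed around the sun
    projection = 0.5   # approximate cosine of the angle between the two motions
    t_peak = 152       # day of year when the velocities best align (around 2 June)

    day = np.arange(365)
    v_detector = v_sun + v_earth * projection * np.cos(2 * np.pi * (day - t_peak) / 365.25)

    amplitude = v_earth * projection / v_sun
    print(f"detector speed: {v_detector.min():.0f}-{v_detector.max():.0f} km/s, "
          f"annual modulation ~{100 * amplitude:.0f}%")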

This seasonal difference appears in the data from DAMA/LIBRA, but no other experiment has been able to confirm this as a sign of dark matter.

For one thing, the changes in the signal could be caused by other factors that change with the season.

“There are environmental effects—different characteristics of the atmosphere—in winter and summer that are clearly reversed if you go from the Northern to the Southern hemisphere,” says Antonio Masiero, vice president for the Italian National Institute of Nuclear Physics (INFN) and a member of the Italian delegation collaborating on the proposal, which also includes Gran Sasso Director Stefano Ragazzi. If the results matched up at both sites at the same time of year, that would help to rule out such effects.

The Australian mine hosting the proposed experiment could also house scientific experiments from different fields.

“It wouldn’t be limited to particle physics and could include experiments involving biology, geosciences and engineering,” Barberio says. “These could include neutrino detection, nuclear astrophysics, geothermal energy extraction and carbon sequestration, and subsurface imaging and sensing.”

Preliminary testing has begun at the mine site down to depths of about 880 meters, about 200 meters above the proposed experimental site. Regular mining operations are scheduled to cease at Stawell in the next few years.

The ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), the local government in the Victoria area, and the mine operators have joined forces with CoEPP and INFN to support the proposal.

 


by Glenn Roberts Jr. at October 23, 2014 02:56 PM

Emily Lakdawalla - The Planetary Society Blog

UPDATED: China successfully launched test mission for Chang'e 5 program today
China launched to the Moon today! The spacecraft will have a brief, 8-day mission, out to the Moon and back. It is an engineering test for the technology that the future Chang'e 5 sample return mission will need to return its precious samples to Earth.

October 23, 2014 02:54 PM

astrobites - astro-ph reader's digest

Today’s Partial Solar Eclipse

Fig. 1: Comparing lunar and solar eclipses. (Credit: Encyclopedia Britannica, Inc.)

This is a special post for our North American readers. In case you haven’t heard, there will be a partial solar eclipse today, visible from Mexico, Canada and the USA. “But wait!” you might say, “Didn’t we just have an eclipse?!” Yes, in fact, there was a lovely total lunar eclipse on October 8th. But today’s eclipse is a rarer solar eclipse.

Lunar vs. Solar Eclipses

Remember that a lunar eclipse occurs when the Earth passes between the Sun and Moon, casting a shadow on the Moon. Everyone on the half of the Earth that’s facing the Moon can view a lunar eclipse. But a solar eclipse occurs when the Moon passes between the Earth and Sun, casting a much smaller shadow on the Earth. You can only view a solar eclipse if you’re standing on the part of the Earth where the shadow falls, so it’s rarer to see a solar eclipse than a lunar eclipse (unless you’re willing to go on an eclipse expedition).

Lucky Coincidences in the Earth-Moon-Sun System

We get to enjoy solar eclipses thanks to a happy coincidence in the Earth-Moon-Sun system. The Moon is much smaller than the Sun, of course, but it’s just the right distance from Earth that the Moon and Sun have roughly the same apparent size in the sky. This means that when the Moon passes between us and the Sun, it’s just the right size to block the Sun’s light during a total solar eclipse.

This is lucky for eclipse fans, and if the Earth, Moon, and Sun orbited in the same plane, we’d get to see a total solar eclipse about once a month. In fact, we’d probably get bored with them after a while. However, the tilt of the Moon’s orbit keeps eclipses from occurring more often than about every 18 months. Fig. 2 shows why the Moon’s 5° tilt makes solar eclipses rare.

Fig. 2: Diagram showing the effect of the tilt of the Moon’s orbit on the frequency of solar eclipses. (Credit: theconversation.com)

Observing the Eclipse

Today’s solar eclipse is only a partial eclipse, meaning that the Moon does not pass directly between the Earth and Sun, so it will only appear to take a bite out of the Sun rather than blocking it completely. This means that if you’re not trying to observe the eclipse, you probably won’t even notice it, since the Sun’s light will not dim significantly. So, what’s the best way to observe the eclipse?


Fig. 3: The gaps between the leaves of a tree can act as pinhole cameras, projecting the image of the eclipse onto the ground. (Credit: CSIRO)

First and most importantly, practice safe eclipse-viewing. Looking directly at the Sun for a long period can cause permanent damage to your vision, even during a partial eclipse. Looking through a telescope or binoculars at the Sun is even more dangerous! Even using a telescope to project an image of the eclipse onto a piece of paper can be dangerous, because the heat collected inside the telescope can damage it or break glass lenses or eyepieces.

Your best option is to visit a local observatory or astronomy club and take advantage of their expertise and equipment. If you’re on your own, you could try to get your hands on some eclipse glasses. Sunglasses won’t protect your eyes, even if they’re UV-blocking sunglasses, and neither will looking through undeveloped film or tinted glass. You need some solar-rated filtered glasses. I picked up a pair during the Venus transit back in 2012.

If it’s too late to find some filtered glasses, you can easily create a pinhole camera to project the image of the eclipse onto the ground or a piece of paper. If you want to build a pinhole camera, there are some great instructions here, but you can also just use anything that allows light to pass through one or more small holes, like a sieve or even the leaves of a tree (like in Fig. 3).
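
As a rough rule of thumb (my own numbers, not from the original post): the Sun subtends about half a degree, so a pinhole projects a solar image roughly one hundredth the size of the hole-to-screen distance.

    import math

    SUN_ANGULAR_DIAMETER_DEG = 0.53   # apparent size of the Sun in the sky

    def pinhole_image_diameter(distance_m):
        """Diameter (metres) of the Sun's image projected by a small pinhole
        onto a screen distance_m metres behind it."""
        return distance_m * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

    for d in (0.3, 1.0, 3.0):   # hole-to-screen distances in metres
        print(f"{d:.1f} m  ->  image about {100 * pinhole_image_diameter(d):.1f} cm across")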

If it’s too cloudy where you are, you can check out one of the live streams from the Coca-Cola Science Center or the Mt. Lemmon SkyCenter. Fig. 4 shows a map of the start times of the eclipse, and the amount of light that will be blocked by the Moon, for different regions in North America. Sky & Telescope also has a table listing eclipse start and end times for different cities.


Fig. 4: Visibility map of the October 23 partial solar eclipse. (Credit: Sky & Telescope / Leah Tiscione)

If you do have access to a properly-filtered telescope, be sure to look for the huge sunspot, AR 2192. At 100,000 km across, this sunspot is big enough to swallow the Earth, and just yesterday it spit out an X-class solar flare. AR 2192 is easily visible through a solar telescope.

Even if you don’t have any equipment, I hope you North Americans will take the opportunity to step outside with a last-minute pinhole camera today to catch the eclipse. The next solar eclipse visible from North America won’t be until 2017, when we’ll get the chance to see a rare total solar eclipse!

by Erika Nesvold at October 23, 2014 12:59 PM

Peter Coles - In the Dark

Why Cosmology Isn’t Boring

As promised yesterday, here’s a copy of the slides I used for my talk to the ~150 participants of the collaboration meeting of the Dark Energy Survey that’s going on here this week at Sussex. The title is a reaction to a statement I heard that recent developments in cosmology, especially from Planck, have established that we live in a Maximally Boring Universe. In the talk I tried to explain why I don’t think the standard cosmology is at all boring. In fact, I think it’s only now that we can start to ask the really interesting questions.

At various points along the way I stopped to sample opinions…


I did, however, notice that Josh Frieman (front left) seemed to vote in favour of all the possible options on all the questions. I think that’s taking the multiverse idea a bit too far…

 


by telescoper at October 23, 2014 11:29 AM

The n-Category Cafe

Why It Matters

One interesting feature of the Category Theory conference in Cambridge last month was that lots of the other participants started conversations with me about the whole-population, suspicionless surveillance that several governments are now operating. All but one were enthusiastically supportive of the work I’ve been doing to try to get the mathematical community to take responsibility for its part in this, and I appreciated that very much.

The remaining one was a friend who wasn’t unsupportive, but said to me something like “I think I probably agree with you, but I’m not sure. I don’t see why it matters. Persuade me!”

Here’s what I replied.

“A lot of people know now that the intelligence agencies are keeping records of almost all their communications, but they can’t bring themselves to get worked up about it. And in a way, they might be right. If you, personally, keep your head down, if you never do anything that upsets anyone in power, it’s unlikely that your records will end up being used against you.

But that’s a really self-centred attitude. What about people who don’t keep their heads down? What about protesters, campaigners, activists, people who challenge the establishment — people who exercise their full democratic rights? Freedom from harassment shouldn’t depend on you being a quiet little citizen.

“There’s a long history of intelligence agencies using their powers to disrupt legitimate activism. The FBI recorded some of Martin Luther King’s extramarital liaisons and sent the tape to his family home, accompanied by a letter attempting to blackmail him into suicide. And there have been many many examples since then (see below).

“Here’s the kind of situation that worries me today. In the UK, there’s a lot of debate at the moment about the oil extraction technique known as fracking. The government has just given permission for the oil industry to use it, and environmental groups have been protesting vigorously.

“I don’t have strong opinions on fracking myself, but I do think people should be free to organize and protest against it without state harassment. In fact, the state should be supporting people in the exercise of their democratic rights. But actually, any anti-fracking group would be sensible to assume that it’s the object of covert surveillance, and that the police are working against it, perhaps by employing infiltrators — because they’ve been doing that to other environmental groups for years.

“It’s the easiest thing in the world for politicians to portray anti-fracking activists as a danger to the UK’s economic well-being, as a threat to national energy security. That’s virtually terrorism! And once someone’s been labelled with the T word, it immediately becomes trivial to justify using all that surveillance data that the intelligence agencies routinely gather. And I’m not exaggerating — anti-terrorism laws really have been used against environmental campaigners in the recent past.

“Or think about gay rights. Less than fifty years ago, sex between men in England was illegal. This law was enforced, and it ruined people’s lives. For instance, my academic great-grandfather Alan Turing was arrested under this law and punished with chemical castration. He’s widely thought to have killed himself as a direct result. But today, two men in England can not only have sex legally, they can marry with the full endorsement of the state.

“How did this change so fast? Not by people writing polite letters to the Times, or by going through official parliamentary channels (at least, not only by those means). It was mainly through decades of tough, sometimes dangerous, agitation, campaigning and protest, by small groups and by courageous individual citizens.

“By definition, anyone campaigning for anything to be decriminalized is siding with criminals against the establishment. It’s the easiest thing in the world for politicians to portray campaigners like this as a menace to society, a grave threat to law and order. Any nation state with the ability to monitor, infiltrate, harass and disrupt such “menaces” will be very sorely tempted to use it. And again, that’s no exaggeration: in the US at least, this has happened to gay rights campaigners over and over again, from the 1950s to nearly the present day, even sometimes — ludicrously — in the name of fighting terrorism (1, 2, 3, 4).

“So government surveillance should matter to you in a very direct way if you’re involved in any kind of activism or advocacy or protest or campaigning or dissent. It should also matter to you if you’re not, but you quietly support any of this activism — or if you reap its benefits. Even if you don’t (which is unlikely), it matters if you simply want to live in a society where people can engage in peaceful activism without facing disruption or harassment by the state. And it matters more now than it ever did before, because government surveillance powers are orders of magnitude greater than they’ve ever been before.”


That’s roughly what I said. I think we then talked a bit about mathematicians’ role in enabling whole-population surveillance. Here’s Thomas Hales’s take on this:

If privacy disappears from the face of the Earth, mathematicians will be some of the primary culprits.

Of course, there are lots of other reasons why the activities of the NSA, GCHQ and their partners might matter to you. Maybe you object to industrial espionage being carried out in the name of national security, or the NSA supplying data to the CIA’s drone assassination programme (“we track ‘em, you whack ‘em”), or the raw content of communications between Americans being passed en masse to Israel, or the NSA hacking purely civilian infrastructure in China, or government agencies intercepting lawyer-client and journalist-source communications, or that the existence of mass surveillance leads inevitably to self-censorship. Or maybe you simply object to being watched, for the same reason you close the bathroom door: you’re not doing anything to be ashamed of, you just want some privacy. But the activism point is the one that resonates most deeply with me personally, and it seemed to resonate with my friend too.

You may think I’m exaggerating or scaremongering — that the enormous power wielded by the US and UK intelligence agencies (among others) could theoretically be used against legitimate citizen activism, but hasn’t been so far.

There’s certainly an abstract argument against this: it’s simply human nature that if you have a given surveillance power available to you, and the motive to use it, and the means to use it without it being known that you’ve done so, then you very likely will. Even if (for some reason) you believe that those currently wielding these powers have superhuman powers of self-restraint, there’s no guarantee that those who wield them in future will be equally saintly.

But much more importantly, there’s copious historical evidence that governments routinely use whatever surveillance powers they possess against whoever they see as troublemakers, even if this breaks the law. Without great effort, I found 50 examples in the US and UK alone — read on.

Six overviews

If you’re going to read just one thing on government surveillance of activists, I suggest you make it this:

Among many other interesting points, it reminds us that this isn’t only about “leftist” activism — three of the plaintiffs in this case are pro-gun organizations.

Here are some other good overviews:

And here’s a short but incisive comment from journalist Murtaza Hussain.

50 episodes of government surveillance of activists

Disclaimer: Journalism about the activities of highly secretive organizations is, by its nature, very difficult. Even obtaining the basic facts can be a major feat. Obviously, I can’t attest to the accuracy of all these articles — and the entries in the list below are summaries of the articles linked to, not claims I’m making myself. As ever, whether you believe what you read is a judgement you’ll have to make for yourself.

1940s

1. FBI surveillance of War Resisters League (1, 2), continuing in 2010 (1)

1950s

2. FBI surveillance of the National Association for the Advancement of Colored People (1)

3. FBI “surveillance program against homosexuals” (1)

1960s

4. FBI’s Sex Deviate programme (1)

5. FBI’s Cointelpro projects aimed at “surveying, infiltrating, discrediting, and disrupting domestic political organizations”, and NSA’s Project Minaret targeted leading critics of Vietnam war, including senators, civil rights leaders and journalists (1)

6. FBI attempted to blackmail Martin Luther King into suicide with surveillance tape (1)

7. NSA intercepted communications of antiwar activists, including Jane Fonda and Dr Benjamin Spock (1)

8. Harassment of California student movement (including Stephen Smale’s free speech advocacy) by FBI, with support of Ronald Reagan (1, 2)

1970s

9. FBI surveillance and attempted deportation of John Lennon (1)

10. FBI burgled the office of the psychiatrist of Pentagon Papers whistleblower Daniel Ellsberg (1)

1980s

11. Margaret Thatcher had the Canadian national intelligence agency CSEC surveil two of her own ministers (1, 2, 3)

12. MI5 tapped phone of founder of Women for World Disarmament (1)

13. Ronald Reagan had the NSA tap the phone of congressman Michael Barnes, who opposed Reagan’s Central America policy (1)

1990s

14. NSA surveillance of Greenpeace (1)

15. UK police’s “undercover work against political activists” and “subversives”, including future home secretary Jack Straw (1)

16. UK undercover policeman Peter Francis “undermined the campaign of a family who wanted justice over the death of a boxing instructor who was struck on the head by a police baton” (1)

17. UK undercover police secretly gathered intelligence on 18 grieving families fighting to get justice from police (1, 2)

18. UK undercover police spied on lawyer for family of murdered black teenager Stephen Lawrence; police also secretly recorded friend of Lawrence and his lawyer (1, 2)

19. UK undercover police spied on human rights lawyers Bindmans (1)

20. GCHQ accused of spying on Scottish trade unions (1)

2000s

21. US military spied on gay rights groups opposing “don’t ask, don’t tell” (1)

22. Maryland State Police monitored nonviolent gay rights groups as terrorist threat (1)

23. NSA monitored email of American citizen Faisal Gill, including while he was running as Republican candidate for Virginia House of Delegates (1)

24. NSA surveillance of Rutgers professor Hooshang Amirahmadi and ex-California State professor Agha Saeed (1)

25. NSA tapped attorney-client conversations of American lawyer Asim Ghafoor (1)

26. NSA spied on American citizen Nihad Awad, executive director of the Council on American-Islamic Relations, the USA’s largest Muslim civil rights organization (1)

27. NSA analyst read personal email account of Bill Clinton (date unknown) (1)

28. Pentagon counterintelligence unit CIFA monitored peaceful antiwar activists (1)

29. Green party peer and London assembly member Jenny Jones was monitored and put on secret police database of “domestic extremists” (1, 2)

30. MI5 and UK police bugged member of parliament Sadiq Khan (1, 2)

31. Food Not Bombs (volunteer movement giving out free food and protesting against war and poverty) labelled as terrorist group and infiltrated by FBI (1, 2, 3)

32. Undercover London police infiltrated green activist groups (1)

33. Scottish police infiltrated climate change activist organizations, including anti-airport expansion group Plane Stupid (1)

34. UK undercover police had children with activists in groups they had infiltrated (1)

35. FBI infiltrated Muslim communities and pushed those with objections to terrorism (and often mental health problems) to commit terrorist acts (1, 2, 3)

2010s

36. California gun owners’ group Calguns complains of chilling effect of NSA surveillance on members’ activities (1, 2, 3)

37. GCHQ and NSA surveilled Unicef and head of Economic Community of West African States (1)

38. NSA spying on Amnesty International and Human Rights Watch (1)

39. CIA hacked into computers of Senate Intelligence Committee, whose job it is to oversee the CIA
(1, 2, 3, 4, 5, 6; bonus: watch CIA director John Brennan lie that it didn’t happen, months before apologizing)

40. CIA obtained legally protected, confidential email between whistleblower officials and members of congress, regarding CIA torture programme (1)

41. Investigation suggests that CIA “operates an email surveillance program targeting senate intelligence staffers” (1)

42. FBI raided homes and offices of Anti-War Committee and Freedom Road Socialist Organization, targeting solidarity activists working with Colombians and Palestinians (1)

43. Nearly half of US government’s terrorist watchlist consists of people with no recognized terrorist group affiliation (1)

44. FBI taught counterterrorism agents that mainstream Muslims are “violent” and “radical”, and used presentations about the “inherently violent nature of Islam” (1, 2, 3)

45. GCHQ has developed tools to manipulate online discourse and activism, including changing outcomes of online polls, censoring videos, and mounting distributed denial of service attacks (1, 2)

46. Green member of parliament Caroline Lucas complains that GCHQ is intercepting her communications (1)

47. GCHQ collected IP addresses of visitors to Wikileaks websites (1, 2)

48. The NSA tracks web searches related to privacy software such as Tor, as well as visitors to the website of the Linux Journal (calling it an “extremist forum”) (1, 2, 3)

49. UK police attempt to infiltrate anti-racism, anti-fascist and environmental groups, anti-tax-avoidance group UK Uncut, and politically active Cambridge University students (1, 2)

50. NSA surveillance impedes work of investigative journalists and lawyers (1, 2, 3, 4, 5).

Back to mathematics

As mathematicians, we spend much of our time studying objects that don’t exist anywhere in the world (perfect circles and so on). But we exist in the world. So, being a mathematician sometimes involves addressing real-world concerns.

For instance, Vancouver mathematician Izabella Laba has for years been writing thought-provoking posts on sexism in mathematics. That’s not mathematics, but it’s a problem that implicates every mathematician. On this blog, John Baez has written extensively on the exploitative practices of certain publishers of mathematics journals, the damage it does to the universities we work in, and what we can do about it.

I make no apology for bringing political considerations onto a mathematical blog. The NSA is a huge employer of mathematicians — over 1000 of us, it claims. Like it or not, it is part of our mathematical environment. Both the American Mathematical Society and London Mathematical Society are now regularly publishing articles on the role of mathematicians in enabling government surveillance, in recognition of our responsibility for it. As a recent New York Times article put it:

To say mathematics is political is not to diminish it, but rather to recognize its greater meaning, promise and responsibilities.

by leinster (tom.leinster@ed.ac.uk) at October 23, 2014 07:51 AM

The n-Category Cafe

Where Do Probability Measures Come From?

Guest post by Tom Avery

Tom (here Tom means me, not him — Tom) has written several times about a piece of categorical machinery that, when given an appropriate input, churns out some well-known mathematical concepts. This machine is the process of constructing the codensity monad of a functor.

In this post, I’ll give another example of a well-known concept that arises as a codensity monad; namely probability measures. This is something that I’ve just written a paper about.

The Giry monads

Write $\mathbf{Meas}$ for the category of measurable spaces (sets equipped with a $\sigma$-algebra of subsets) and measurable maps. I’ll also write $I$ for the unit interval $[0,1]$, equipped with the Borel $\sigma$-algebra.

Let $\Omega \in \mathbf{Meas}$. There are lots of different probability measures we can put on $\Omega$; write $G\Omega$ for the set of all of them.

Is $G\Omega$ a measurable space? Yes: An element of $G\Omega$ is a function that sends measurable subsets of $\Omega$ to numbers in $I$. Turning this around, we have, for each measurable $A \subseteq \Omega$, an evaluation map $\mathrm{ev}_A \colon G\Omega \to I$. Let’s give $G\Omega$ the smallest $\sigma$-algebra such that all of these are measurable.

Is $G$ a functor? Yes: Given a measurable map $g \colon \Omega \to \Omega'$ and $\pi \in G\Omega$, we can define the pushforward $Gg(\pi)$ of $\pi$ along $g$ by

$$Gg(\pi)(A') = \pi(g^{-1}A')$$

for measurable $A' \subseteq \Omega'$.

Is $G$ a monad? Yes: Given $\omega \in \Omega$ we can define $\eta(\omega) \in G\Omega$ by

$$\eta(\omega)(A) = \chi_A(\omega)$$

where $A$ is a measurable subset of $\Omega$ and $\chi_A$ is its characteristic function. In other words, $\eta(\omega)$ is the Dirac measure at $\omega$. Given $\rho \in GG\Omega$, let

$$\mu(\rho)(A) = \int_{G\Omega} \mathrm{ev}_A \,\mathrm{d}\rho$$

for measurable $A \subseteq \Omega$, where $\mathrm{ev}_A \colon G\Omega \to I$ is as above.

This is the Giry monad $\mathbb{G} = (G, \eta, \mu)$, first defined (unsurprisingly) by Giry in “A categorical approach to probability theory”.
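
The monad structure is perhaps easiest to see in a toy, discrete setting. Here is a minimal Haskell sketch of a finitely supported distribution monad, with `dirac` playing the role of $\eta$ and `joinDist` the role of $\mu$; the names `Dist`, `dirac`, `joinDist`, `coin` and `twoFlips` are mine, and this is only an illustrative analogue of the Giry monad (no $\sigma$-algebras, only finite supports), not the construction in the post.

    -- A finitely supported probability distribution: outcomes paired with weights.
    newtype Dist a = Dist { runDist :: [(a, Rational)] }

    -- eta: the Dirac measure concentrated at a single point.
    dirac :: a -> Dist a
    dirac x = Dist [(x, 1)]

    -- mu: flatten a distribution over distributions, weighting each inner
    -- distribution by its outer probability (a discrete stand-in for
    -- integrating ev_A against rho).
    joinDist :: Dist (Dist a) -> Dist a
    joinDist (Dist dds) = Dist [ (x, p * q) | (Dist d, p) <- dds, (x, q) <- d ]

    -- Pushforward of a distribution along a function (the action of G on maps).
    instance Functor Dist where
      fmap f (Dist d) = Dist [ (f x, p) | (x, p) <- d ]

    instance Applicative Dist where
      pure = dirac
      Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

    instance Monad Dist where
      m >>= k = joinDist (fmap k m)   -- bind = pushforward, then flatten

    -- Example: a fair coin, and the sum of two independent flips.
    coin :: Dist Int
    coin = Dist [(0, 1/2), (1, 1/2)]

    twoFlips :: Dist Int
    twoFlips = do
      a <- coin
      b <- coin
      dirac (a + b)

Kleisli arrows for this toy monad are exactly finite Markov kernels, mirroring the remark below about the Kleisli category of $\mathbb{G}$.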

A finitely additive probability measure $\pi$ is just like a probability measure, except that it is only well-behaved with respect to finite disjoint unions, rather than arbitrary countable disjoint unions. More precisely, rather than having

$$\pi\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} \pi(A_i)$$

for disjoint $A_i$, we just have

$$\pi\left(\bigcup_{i=1}^{n} A_i\right) = \sum_{i=1}^{n} \pi(A_i)$$

for disjoint $A_i$.

We could repeat the definition of the Giry monad with “probability measure” replaced by “finitely additive probability measure”; doing so would give the finitely additive Giry monad $\mathbb{F} = (F, \eta, \mu)$. Every probability measure is a finitely additive probability measure, but not all finitely additive probability measures are probability measures. So $\mathbb{G}$ is a proper submonad of $\mathbb{F}$.

The Kleisli category of $\mathbb{G}$ is quite interesting. Its objects are just the measurable spaces, and the morphisms are a kind of non-deterministic map called a Markov kernel or conditional probability distribution. As a special case, a discrete space equipped with an endomorphism in the Kleisli category is a discrete-time Markov chain.

I’ll explain how the Giry monads arise as codensity monads, but first I’d like to mention a connection with another example of a codensity monad; namely the ultrafilter monad.

An ultrafilter $\mathcal{U}$ on a set $X$ is a set of subsets of $X$ satisfying some properties. So $\mathcal{U}$ is a subset of the powerset $\mathcal{P}X$ of $X$, and is therefore determined by its characteristic function, which takes values in $\{0,1\} \subseteq I$. In other words, an ultrafilter on $X$ can be thought of as a special function

$$\mathcal{P}X \to I.$$

It turns out that “special function” here means “finitely additive probability measure defined on all of $\mathcal{P}X$ and taking values in $\{0,1\}$”.

Integration operators

If you have a measure on a space then you can integrate functions on that space. The converse is also true: if you have a way of integrating functions on a space then you can extract a measure.

There are various ways of making this precise, the most famous of which is the Riesz-Markov-Kakutani Representation Theorem:

Theorem. Let $X$ be a compact Hausdorff space. Then the space of finite, signed Borel measures on $X$ is canonically isomorphic to

$$\mathbf{NVS}(\mathbf{Top}(X,\mathbb{R}),\mathbb{R})$$

as a normed vector space, where $\mathbf{Top}$ is the category of topological spaces, and $\mathbf{NVS}$ is the category of normed vector spaces.

Given a finite, signed Borel measure $\pi$ on $X$, the corresponding map $\mathbf{Top}(X,\mathbb{R}) \to \mathbb{R}$ sends a function to its integral with respect to $\pi$. There are various different versions of this theorem that go by the same name.

My paper contains the following more modest version, which is a correction of a claim by Sturtz.

Proposition. Finitely additive probability measures on a measurable space $\Omega$ are canonically in bijection with functions $\phi \colon \mathbf{Meas}(\Omega, I) \to I$ that are

  • affine: if $f, g \in \mathbf{Meas}(\Omega, I)$ and $r \in I$ then

$$\phi(r f + (1-r)g) = r\phi(f) + (1-r)\phi(g),$$

and

  • weakly averaging: if $\bar{r}$ denotes the constant function with value $r$ then $\phi(\bar{r}) = r$.

Call such a function a finitely additive integration operator. The bijection restricts to a correspondence between (countably additive) probability measures and functions $\phi$ that additionally

  • respect limits: if $f_n \in \mathbf{Meas}(\Omega, I)$ is a sequence of functions converging pointwise to $0$ then $\phi(f_n)$ converges to $0$.

Call such a function an integration operator. The integration operator corresponding to a probability measure $\pi$ sends a function $f$ to

$$\int_{\Omega} f \,\mathrm{d}\pi,$$

which justifies the name. In the other direction, given an integration operator $\phi$, the value of the corresponding probability measure on a measurable set $A \subseteq \Omega$ is $\phi(\chi_A)$.

These bijections are measurable (with respect to a natural $\sigma$-algebra on the set of finitely additive integration operators) and natural in $\Omega$, so they define isomorphisms of endofunctors of $\mathbf{Meas}$. Hence we can transfer the monad structures across the isomorphisms, and obtain descriptions of the Giry monads in terms of integration operators.
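
As a concrete sanity check (mine, not from the paper): take $\Omega = \{0,1\}$ with all subsets measurable, and let $\pi$ be the measure with $\pi(\{1\}) = p$. The corresponding integration operator is

$$\phi(f) = \int_{\Omega} f \,\mathrm{d}\pi = (1-p)\,f(0) + p\,f(1),$$

which is visibly affine and weakly averaging, and we recover the measure via $\pi(\{1\}) = \phi(\chi_{\{1\}}) = p$.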

The Giry monads via codensity monads

So far so good. But what does this have to do with codensity monads? First let’s recall the definition of a codensity monad. I won’t go into a great deal of detail; for more information see Tom’s first post on the topic.

Let $U \colon \mathbb{C} \to \mathcal{M}$ be a functor. The codensity monad of $U$ is the right Kan extension of $U$ along itself. This consists of a functor $T^U \colon \mathcal{M} \to \mathcal{M}$ satisfying a universal property, which equips $T^U$ with a canonical monad structure. The codensity monad doesn’t always exist, but it will whenever $\mathbb{C}$ is small and $\mathcal{M}$ is complete. You can think of $T^U$ as a generalisation of the monad induced by the adjunction between $U$ and its left adjoint that makes sense when the left adjoint doesn’t exist. In particular, when the left adjoint does exist, the two monads coincide.

The end formula for right Kan extensions gives

$$T^U m = \int_{c \in \mathbb{C}} [\mathcal{M}(m, Uc), Uc],$$

where $[\mathcal{M}(m, Uc), Uc]$ denotes the $\mathcal{M}(m, Uc)$ power of $Uc$ in $\mathcal{M}$, i.e. the product of $\mathcal{M}(m, Uc)$ (a set) copies of $Uc$ (an object of $\mathcal{M}$) in $\mathcal{M}$.
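
For readers who know Haskell, the end formula has a familiar programming shadow (this gloss is mine, not part of the post): the codensity monad of a type constructor `u` is the rank-2 type `forall c. (m -> u c) -> u c`, essentially the `Codensity` type from the kan-extensions package; `unit` and `bind` below are my names for the structure maps. A minimal sketch, not a formalisation of the measure-theoretic construction:

    {-# LANGUAGE RankNTypes #-}

    -- The codensity monad of a functor u: an end over c of maps (m -> u c) -> u c,
    -- written in Haskell as a rank-2 polymorphic type.
    newtype Codensity u m = Codensity { runCodensity :: forall c. (m -> u c) -> u c }

    -- eta: feed the element to the continuation.
    unit :: m -> Codensity u m
    unit x = Codensity (\k -> k x)

    -- mu, packaged as bind in continuation-passing style.
    bind :: Codensity u m -> (m -> Codensity u m') -> Codensity u m'
    bind (Codensity g) f = Codensity (\k -> g (\x -> runCodensity (f x) k))

    instance Functor (Codensity u) where
      fmap f (Codensity g) = Codensity (\k -> g (k . f))

    instance Applicative (Codensity u) where
      pure = unit
      mf <*> mx = mf `bind` \f -> mx `bind` \x -> unit (f x)

    instance Monad (Codensity u) where
      (>>=) = bind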

It doesn’t matter too much if you’re not familiar with ends, because we can give an explicit description of $T^U m$ in the case that $\mathcal{M} = \mathbf{Meas}$: the elements of $T^U\Omega$ are families $\alpha$ of functions

$$\alpha_c \colon \mathbf{Meas}(\Omega, Uc) \to Uc$$

that are natural in $c \in \mathbb{C}$. For each $c \in \mathbb{C}$ and measurable $f \colon \Omega \to Uc$ we have $\mathrm{ev}_f \colon T^U\Omega \to Uc$ mapping $\alpha$ to $\alpha_c(f)$. The $\sigma$-algebra on $T^U\Omega$ is the smallest such that each of these maps is measurable.

All that’s left is to say what we should choose $\mathbb{C}$ and $U$ to be in order to get the Giry monads.

A subset $c$ of a real vector space $V$ is convex if for any $x, y \in c$ and $r \in I$ the convex combination $r x + (1-r)y$ is also in $c$, and a map $h \colon c \to c'$ between convex sets is called affine if it preserves convex combinations. So there’s a category of convex sets and affine maps between them. We will be interested in certain full subcategories of this.

Let $d_0$ be the (convex) set of sequences in $I$ that converge to $0$ (it is a subset of the vector space $c_0$ of all real sequences converging to $0$). Now we can define the categories of interest:

  • Let $\mathbb{C}$ be the category whose objects are all finite powers $I^n$ of $I$, with all affine maps between them.

  • Let $\mathbb{D}$ be the category whose objects are all finite powers of $I$, together with $d_0$, and all affine maps between them.

All the objects of $\mathbb{C}$ and $\mathbb{D}$ can be considered as measurable spaces (as subspaces of powers of $I$), and all the affine maps between them are then measurable, so we have (faithful but not full) inclusions $U \colon \mathbb{C} \to \mathbf{Meas}$ and $V \colon \mathbb{D} \to \mathbf{Meas}$.

Theorem. The codensity monad of $U$ is the finitely additive Giry monad, and the codensity monad of $V$ is the Giry monad.

Why should this be true? Let’s start with $U$. An element of $T^U\Omega$ is a family of functions

$$\alpha_{I^n} \colon \mathbf{Meas}(\Omega, I^n) \to I^n.$$

But a map into $I^n$ is determined by its composites with the projections to $I$, and these projections are affine. This means that $\alpha$ is completely determined by $\alpha_I$, and the other components are obtained by applying $\alpha_I$ separately in each coordinate. In other words, an element of $T^U\Omega$ is a special sort of function

$$\mathbf{Meas}(\Omega, I) \to I.$$

Look familiar? As you might guess, the functions with the above domain and codomain that define elements of $T^U\Omega$ are precisely the finitely additive integration operators.

The affine and weakly averaging properties of $\alpha_I$ are enforced by naturality with respect to certain affine maps. For example, the naturality square involving the affine map

$$r\pi_1 + (1-r)\pi_2 \colon I^2 \to I$$

(where $\pi_i$ are the projections) forces $\alpha_I$ to preserve convex combinations of the form $r f + (1-r)g$. The weakly averaging condition comes from naturality with respect to constant maps.
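
Chasing that square explicitly (my unpacking, with $h = r\pi_1 + (1-r)\pi_2$ and a pair $(f, g) \in \mathbf{Meas}(\Omega, I^2)$): naturality says $\alpha_I(h \circ (f,g)) = h(\alpha_{I^2}(f,g))$, and since $\alpha_{I^2}$ acts as $\alpha_I$ in each coordinate, this reads

$$\alpha_I(r f + (1-r)g) = r\,\alpha_I(f) + (1-r)\,\alpha_I(g),$$

which is exactly the affine condition.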

How is the situation different for $T^V$? As before, $\alpha \in T^V\Omega$ is determined by $\alpha_I$, and $\alpha_{d_0}$ is obtained by applying $\alpha_I$ in each coordinate, thanks to naturality with respect to the projections. A measurable map $f \colon \Omega \to d_0$ is a sequence of maps $f_n \colon \Omega \to I$ converging pointwise to $0$, and

$$\alpha_{d_0}(f) = (\alpha_I(f_i))_{i=1}^{\infty}.$$

But $\alpha_{d_0}(f) \in d_0$, so $\alpha_I(f_i)$ must converge to $0$. So $\alpha_I$ is an integration operator!

The rest of the proof consists of checking that these assignments $\alpha \mapsto \alpha_I$ really do define isomorphisms of monads.

It’s natural to wonder how much you can alter the categories $\mathbb{C}$ and $\mathbb{D}$ without changing the codensity monads. Here’s a result to that effect:

Proposition. The categories $\mathbb{C}$ and $\mathbb{D}$ can be replaced by the monoids of affine endomorphisms of $I^2$ and $d_0$ respectively (regarded as 1-object categories, with the evident functors to $\mathbf{Meas}$) without changing the codensity monads.

This gives categories of convex sets that are minimal such that their inclusions into $\mathbf{Meas}$ give rise to the Giry monads. Here I mean minimal in the sense that they contain the fewest objects with all affine maps between them. They are not uniquely minimal; there are other convex sets whose monoids of affine endomorphisms also give rise to the Giry monads.

This result gives yet another characterisation of (finitely and countably) additive probability measures: a probability measure on $\Omega$ is an $\mathrm{End}(d_0)$-set morphism

$$\mathbf{Meas}(\Omega, d_0) \to d_0,$$

where $\mathrm{End}(d_0)$ is the monoid of affine endomorphisms of $d_0$. Similarly for finitely additive probability measures, with $d_0$ replaced by $I^2$.

What about maximal categories of convex sets giving rise to the Giry monads? I don’t have a definitive answer to this question, but you can at least throw in all bounded, convex subsets of Euclidean space:

Proposition. Let $\mathbb{C}'$ be the category of all bounded, convex subsets of $\mathbb{R}^n$ (where $n$ varies) and affine maps. Let $\mathbb{D}'$ be $\mathbb{C}'$ but with $d_0$ adjoined. Then replacing $\mathbb{C}$ by $\mathbb{C}'$ and $\mathbb{D}$ by $\mathbb{D}'$ does not change the codensity monads.

The definition of $\mathbb{D}'$ is a bit unsatisfying; $d_0$ feels (and literally is) tacked on. It would be nice to have a characterisation of all the subsets of $\mathbb{R}^{\mathbb{N}}$ (or indeed all the convex sets) that can be included in $\mathbb{D}'$. But so far I haven’t found one.

by leinster (tom.leinster@ed.ac.uk) at October 23, 2014 07:50 AM

Christian P. Robert - xi'an's og

BibTool on the air

Last night, just before leaving for Coventry, I realised I had about 30 versions of my “mother of all .bib” bib file, spread over directories and with broken links to the original mother file… (I mean, I always create bib files in new directories by a hard link,

    ln ~/mother.bib

but they eventually and inexplicably end up with a life of their own!) So I decided a spring clean-up was in order and installed BibTool on my Linux machine to gather all those versions into a single, all-inclusive bib reference. I did not take advantage of the many possibilities of the program, written by Gerd Neugebauer, but it certainly solved my problem: once I realised I had to set the options

check.double = on
check.double.delete = on
pass.comments = off

all I had to do was to call

bibtool -s -i ../*/*.bib -o mother.bib
bibtool -d -i mother.bib -o mother.bib
bibtool -s -i mother.bib -o mother.bib

to merge all the bib files and then to get rid of the duplicated entries in mother.bib (the -d option commented out the duplicates and the second call with -s removed them), and to remove the duplicated definitions in the preamble of the file. This took me very little time in the RER train from Paris-Dauphine (where I taught this morning, having a hard time making the students envision the empirical cdf as an average of Dirac masses!) to Roissy airport, in contrast with my pedestrian replacement of all stray siblings of the mother bib with new proper hard links, one by one. I am sure there is a bash command that could have done it in one line, but instead I spent my flight to Birmingham switching all the existing bib files, one by one…


Filed under: Books, Linux, Travel, University life Tagged: bash, BibTeX, BibTool, Birmingham, Charles de Gaulle, LaTeX, link, Linux, RER B, Roissy, University of Warwick

by xi'an at October 23, 2014 06:14 AM

October 22, 2014

The Great Beyond - Nature blog

AstraZeneca neither confirms nor denies that it will ditch antibiotics research

A computer image of a cluster of drug-resistant Mycobacterium tuberculosis.

US Centers for Disease Control and Prevention/ Melissa Brower

The fight against antibiotic-resistant microbes would suffer a major blow if widely circulated rumours were confirmed that pharmaceutical giant AstraZeneca plans to disband its in-house antibiotic development. The company called the rumours “highly speculative” while not explicitly denying them.

On 23 October, drug-industry consultant David Shlaes wrote on his blog that AstraZeneca, a multinational behemoth headquartered in London, “has told its antibiotics researchers that they should make efforts to find other jobs in the near future”, and that in his opinion this heralds the end of in-house antibiotic development at the company. “As far as antibiotic discovery and development goes, this has to be the most disappointing news of the entire antibiotic era,” wrote Shlaes.

AstraZeneca would not directly address these claims when approached by Nature for comment. In its statement it said, in full:

The blog is highly speculative. We continue to be active in anti-infectives and have a strong pipeline of drugs in development. However, we have previously said on a number of occasions that as we focus on our core therapy areas (Oncology, CVMD [cardiovascular and metabolic diseases] and Respiratory, Inflammation and Autoimmune) we will continue to remain opportunity driven in infection and neuroscience, in particular exploring partnering opportunities to maximise the value of our pipeline and portfolio.

Research into antibiotics is notorious for its high cost and high failure rate. AstraZeneca has previously said that its main research focus would be on areas other than antibiotic development.

Public-health experts have been warning about a trend among large pharmaceutical companies to move away from antibiotics research — just as the World Health Organization and others have pointed to the rising threat of deadly multi-drug-resistant strains of bacteria such as Mycobacterium tuberculosis or Staphylococcus aureus (see ‘Antibiotic resistance: The last resort‘).

by Daniel Cressey at October 22, 2014 05:56 PM

Peter Coles - In the Dark

Cosmology, to be precise…

After an extremely busy morning I had the pleasant task this afternoon of talking to the participants of a collaboration meeting of the Dark Energy Survey that’s going on here at Sussex. Now there’s the even more pleasant task in front of me of having drinks and dinner with the crowd. At some point I’ll post the slides of my talk on here, but in the meantime here’s a pretty accurate summary..

Summary


by telescoper at October 22, 2014 05:28 PM

Emily Lakdawalla - The Planetary Society Blog

Herschel observations of Comet Siding Spring initiated by an amateur astronomer
The European satellite Herschel acquired images of Comet Siding Spring before the telescope’s demise in 2013 — thanks to an observing proposal from an amateur astronomer!

October 22, 2014 04:26 PM

Quantum Diaries

Have we detected Dark Matter Axions?

An intriguing headline caught my eye while browsing the social networking and news website Reddit the other day. It simply said:

“The first direct detection of dark matter particles may have been achieved.”


Well, that was news to me! 
Obviously, the key word here is “may”. Nonetheless, I was intrigued, not being aware of any direct detection experiments publishing such results around this time. As a member of LUX, I usually see collaboration-wide emails sent out when a big paper is published by a rival group, most recently the DarkSide-50 results. Often an email like this is followed by a chain of comments, both good and bad, from the senior members of our group. I can’t imagine there being a day when I could read a paper and instantly have intelligent criticisms to share like those guys – but maybe when I’ve been in the dark matter business for 20+ years I will!

It is useful to look at other work similar to our own. We can learn from the mistakes and successes of the other groups within our community, and most of the time rivalry is friendly and professional. 
So obviously I took a look at this claimed direct detection. Note that there are three methods of dark matter detection (see figure). To summarise quickly:

The three routes to dark matter detection

  • Direct detection is the observation of an interaction of a dark matter particle with a standard model one.
  • Indirect detection is the observation of annihilation products that have no apparent standard model source and so are assumed to be the products of dark matter annihilation.
  • Production is the measurement of missing energy and momentum in a particle interaction (generally a collider experiment) that could signify the creation of dark matter (this method requires great care, as this is also how neutrinos are measured in collider experiments).

So I was rather surprised to find the article linked was about a space telescope – the XMM-Newton observatory. These sort of experiments are usually for indirect detection. The replies on the Reddit link reflected my own doubt – aside from the personification of x-rays, this comment was also my first thought:

“If they detected x-rays who are produced by dark matter axions then it’s not direct detection.”

These x-rays supposedly come from a particle called an axion – a dark matter candidate. But to address the comment, I considered LUX, a direct dark matter detector, where what we are actually detecting is photons. These are produced by the recoil of a xenon nucleus that has interacted with a dark matter particle, and yet we call it direct – because the dark matter has interacted with a standard model particle, the xenon. 
So to determine whether this possible axion detection is direct, we need to understand the effect producing the x-rays. And for that, we need to know about axions.

I haven’t personally studied axions much at all. At the beginning of my PhD, I read a paper called “Expected Sensitivity to Galactic/Solar Axions and Bosonic Super-WIMPs based on the Axio-electric Effect in Liquid Xenon Dark Matter Detectors” – but I couldn’t tell you a single thing from that paper now, without re-reading it. After some research I have a bit more understanding under my belt, and for those of you that are physicists, I can summarise the idea:

  • The axion is a light boson, proposed by Roberto Peccei and Helen Quinn in 1977 to solve the strong CP problem (why does QCD not break CP-symmetry when there is no theoretical reason it shouldn’t?).
  • The introduction of the particle causes the strong CP violation to go to zero (by some fancy maths that I can’t pretend to understand!).
  • It has been considered as a cold dark matter candidate because it is neutral and very weakly interacting, and could have been produced with the right abundance.

Conversion of an axion to a photon within a magnetic field (Yamanaka, Masato et al)


For non-physicists, the key thing to understand is that the axion is a particle predicted by a separate theory (nothing to do with dark matter) that solves another problem in physics. It just so happens that its properties make it a suitable candidate for dark matter. Sounds good so far – the axion kills two birds with one stone. We could detect a dark matter axion via an effect that converts an axion to an x-ray photon within a magnetic field. The XMM-Newton observatory orbits the Earth and looks for x-rays produced by the conversion of an axion within the Earth’s magnetic field. Although there is no particular interaction with a standard model particle (one is produced), the axion is not annihilating to produce the photons, so I think it is fair to call this direct detection.
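
For the physicists, the coupling behind this conversion is the standard axion-photon interaction term (quoted here in its textbook form, just for orientation):

$$\mathcal{L}_{a\gamma} = -\tfrac{1}{4}\, g_{a\gamma\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu} = g_{a\gamma\gamma}\, a\, \mathbf{E}\cdot\mathbf{B},$$

so an axion $a$ passing through a transverse magnetic field can convert into a photon, with a conversion probability that grows roughly as $g_{a\gamma\gamma}^2$ times the square of the transverse field integrated along the path.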

What about the actual results? What has actually been detected is a seasonal variation in the cosmic x-ray background. The conversion signal is expected to be greater in summer due to the changing visibility of the magnetic field region facing the sun, and that’s exactly what was observed. In the paper’s conclusion the authors state:

“On the basis of our results from XMM-Newton, it appears plausible that axions – dark matter particle candidates – are indeed produced in the core of the Sun and do indeed convert to soft X-rays in the magnetic field of the Earth, giving rise to a significant, seasonally-variable component of the 2-6 keV CXB”

 


Conversion of solar axions into photons within the Earth’s magnetic field (University of Leicester)

Note the language used – “it appears plausible”. This attitude of physicists to always be cautious and hold back from bold claims is a wise one – look what happened to BICEP2. It is something I am personally becoming familiar with, last week having come across a lovely LUX event that passed my initial cuts and looked very much like it could have been a WIMP. My project partner from my masters degree at the University of Warwick is now a new PhD student at UCL – and he takes great joy in embarrassing me in whatever way he can. So after I shared my findings with him, he told everyone we came across that I had found WIMPs. Even when we ran into my supervisor, he asked “Have you seen Sally’s WIMP?”. I was not pleased – that is not a claim I want to make as a mere second year PhD student. Sadly, but not unexpectedly, my “WIMP” has now been cut away. But not for one second did I truly believe it could have been one – surely there’s no way I’m going to be the one that discovers dark matter! (Universe, feel free to prove me wrong.)

These XMM-Newton results are nice, but tentative – they need confirming by more experiments. I can’t help but wonder how many big discoveries end up delayed or even discarded due to the cautiousness of physicists, who can scarcely believe they have found something so great. I look forward to the time when someone actually comes out and says “We did it – we found it” with certainty. It would be extra nice if it were LUX. But realistically, to really convince anyone that dark matter has been found, detection via several different methods and in several different places is needed. There is a lot of work to do yet.

It’s an exciting time to be in this field, and papers like the XMM-Newton one keep us on our toes! LUX will be starting up again soon for what we hope will be a 300 day run, and an increase in sensitivity to WIMPs of around 5x. Maybe it’s time for me to re-read that paper on the axio-electric effect in liquid xenon detectors!

by Sally Shaw at October 22, 2014 04:07 PM

Tommaso Dorigo - Scientificblogging

The Quote Of The Week - Shocked And Disappointed
"Two recent results from other experiments add to the excitement of Run II. The results from Brookhaven's g-minus-two experiments with muons have a straightforward interpretation as signs of supersymmetry. The increasingly interesting results from BABAR at the Stanford Linear Accelerator Center add to the importance of B physics in Run II, and also suggest new physics. I will be shocked and disappointed if we don't have at least one major discovery."

read more

by Tommaso Dorigo at October 22, 2014 03:20 PM

Quantum Diaries

New high-speed transatlantic network to benefit science collaborations across the U.S.

This Fermilab press release came out on Oct. 20, 2014.

ESnet to build high-speed extension for faster data exchange between United States and Europe. Image: ESnet

ESnet to build high-speed extension for faster data exchange between United States and Europe. Image: ESnet

Scientists across the United States will soon have access to new, ultra-high-speed network links spanning the Atlantic Ocean thanks to a project currently under way to extend ESnet (the U.S. Department of Energy’s Energy Sciences Network) to Amsterdam, Geneva and London. Although the project is designed to benefit data-intensive science throughout the U.S. national laboratory complex, heaviest users of the new links will be particle physicists conducting research at the Large Hadron Collider (LHC), the world’s largest and most powerful particle collider. The high capacity of this new connection will provide U.S. scientists with enhanced access to data at the LHC and other European-based experiments by accelerating the exchange of data sets between institutions in the United States and computing facilities in Europe.

DOE’s Brookhaven National Laboratory and Fermi National Accelerator Laboratory—the primary computing centers for U.S. collaborators on the LHC’s ATLAS and CMS experiments, respectively—will make immediate use of the new network infrastructure once it is rigorously tested and commissioned. Because ESnet, based at DOE’s Lawrence Berkeley National Laboratory, interconnects all national laboratories and a number of university-based projects in the United States, tens of thousands of researchers from all disciplines will benefit as well.

The ESnet extension will be in place before the LHC at CERN in Switzerland—currently shut down for maintenance and upgrades—is up and running again in the spring of 2015. Because the accelerator will be colliding protons at much higher energy, the data output from the detectors will expand considerably—to approximately 40 petabytes of raw data per year compared with 20 petabytes for all of the previous lower-energy collisions produced over the three years of the LHC first run between 2010 and 2012.

The cross-Atlantic connectivity during the first successful run for the LHC experiments, which culminated in the discovery of the Higgs boson, was provided by the US LHCNet network, managed by the California Institute of Technology. In recent years, major research and education networks around the world—including ESnet, Internet2, California’s CENIC, and European networks such as DANTE, SURFnet and NORDUnet—have increased their backbone capacity by a factor of 10, using sophisticated new optical networking and digital signal processing technologies. Until recently, however, higher-speed links were not deployed for production purposes across the Atlantic Ocean—creating a network “impedance mismatch” that can harm large, intercontinental data flows.

An evolving data model
This upgrade coincides with a shift in the data model for LHC science. Previously, data moved in a more predictable and hierarchical pattern strongly influenced by geographical proximity, but network upgrades around the world have now made it possible for data to be fetched and exchanged more flexibly and dynamically. This change enables faster science outcomes and more efficient use of storage and computational power, but it requires networks around the world to perform flawlessly together.

“Having the new infrastructure in place will meet the increased need for dealing with LHC data and provide more agile access to that data in a much more dynamic fashion than LHC collaborators have had in the past,” said physicist Michael Ernst of DOE’s Brookhaven National Laboratory, a key member of the team laying out the new and more flexible framework for exchanging data between the Worldwide LHC Computing Grid centers.

Ernst directs a computing facility at Brookhaven Lab that was originally set up as a central hub for U.S. collaborators on the LHC’s ATLAS experiment. A similar facility at Fermi National Accelerator Laboratory has played this role for the LHC’s U.S. collaborators on the CMS experiment. These computing resources, dubbed Tier 1 centers, have direct links to the LHC at the European laboratory CERN (Tier 0).  The experts who run them will continue to serve scientists under the new structure. But instead of serving as hubs for data storage and distribution only among U.S.-based collaborators at Tier 2 and 3 research centers, the dedicated facilities at Brookhaven and Fermilab will be able to serve data needs of the entire ATLAS and CMS collaborations throughout the world. And likewise, U.S. Tier 2 and Tier 3 research centers will have higher-speed access to Tier 1 and Tier 2 centers in Europe.

“This new infrastructure will offer LHC researchers at laboratories and universities around the world faster access to important data,” said Fermilab’s Lothar Bauerdick, head of software and computing for the U.S. CMS group. “As the LHC experiments continue to produce exciting results, this important upgrade will let collaborators see and analyze those results better than ever before.”

Ernst added, “As centralized hubs for handling LHC data, our reliability, performance and expertise have been in demand by the whole collaboration, and now we will be better able to serve the scientists’ needs.”

An investment in science
ESnet is funded by DOE’s Office of Science to meet networking needs of DOE labs and science projects. The transatlantic extension represents a financial collaboration, with partial support coming from DOE’s Office of High Energy Physics (HEP) for the next three years. Although LHC scientists will get a dedicated portion of the new network once it is in place, all science programs that make use of ESnet will now have access to faster network links for their data transfers.

“We are eagerly awaiting the start of commissioning for the new infrastructure,” said Oliver Gutsche, Fermilab scientist and member of the CMS Offline and Computing Management Board. “After the Higgs discovery, the next big LHC milestones will come in 2015, and this network will be indispensable for the success of the LHC Run 2 physics program.”

This work was supported by the DOE Office of Science.
Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @FermilabToday.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy.  The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.  For more information, please visit science.energy.gov.

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by the Research Foundation for the State University of New York on behalf of Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization.

Visit Brookhaven Lab’s electronic newsroom for links, news archives, graphics, and more at http://www.bnl.gov/newsroom, follow Brookhaven Lab on Twitter, http://twitter.com/BrookhavenLab, or find us on Facebook, http://www.facebook.com/BrookhavenLab/.


Media contacts:

  • Karen McNulty-Walsh, Brookhaven Media and Communications Office, kmcnulty@bnl.gov, 631-344-8350
  • Kurt Riesselmann, Fermilab Office of Communication, media@fnal.gov, 630-840-3351
  • Jon Bashor, Computing Sciences Communications Manager, Lawrence Berkeley National Laboratory, jbashor@lbnl.gov, 510-486-5849

Computing contacts:

  • Lothar Bauerdick, Fermilab, US CMS software computing, bauerdick@fnal.gov, 630-840-6804
  • Oliver Gutsche, Fermilab, CMS Offline and Computing Management Board, gutsche@fnal.gov, 630-840-8909

by Fermilab at October 22, 2014 03:15 PM

CERN Bulletin

CERN Bulletin Issue No. 43-44/2014
Link to e-Bulletin Issue No. 43-44/2014 | Link to all articles in this issue.

October 22, 2014 02:58 PM

Clifford V. Johnson - Asymptotia

I Dare!
(Click photos* for larger view) Yes. I dare to show equations during public lectures. There'll be equations in my book too. If we do not show the tools we use, how can we give a complete picture of how science works? If we keep hiding the mathematics, won't people be even more afraid of this terrifying horror we are "protecting" them from? I started my Sunday Assembly talk reflecting upon the fact that next year marks 100 years since Einstein published one of the most beautiful and far-reaching scientific works in history, General Relativity, describing how gravity works. In the first 30 seconds of the talk, I put up the equations. Just because they deserve to be seen, and to drive home the point that it's not just a bunch of words, but an actual method of computation that allows you to do quantitative science about the largest physical object we know of - the entire universe! It was a great audience, who seemed to enjoy the 20 minute talk as part of [...] Click to continue reading this post

by Clifford at October 22, 2014 02:29 PM

astrobites - astro-ph reader's digest

The Singles’ Club
Title: The Kepler dichotomy among the M dwarfs: half of systems contain five or more coplanar planets
Authors: Sarah Ballard & John Johnson
First author’s institution: University of Washington
Status: Submitted to ApJ

The Kepler dichotomy

The Kepler spacecraft hasn’t just found transiting exoplanets: it’s found transiting exoplanet systems. Hundreds of alternative solar systems have been spotted just a few hundred light-years away, and Ballard & Johnson want to use them to describe the population of planetary systems across the entire galaxy.

Exoplanets only transit if they pass between their host star and the Earth, blocking out a little light once every orbital period. Sometimes we see more than one planet transit in the same system, which means they lie in the same plane and have small ‘mutual inclinations’ (the angles between exoplanet orbits within a planetary system). But how can you be sure that you detect all the planets in a system? If planets always orbited their stars in a single plane, with very small mutual inclinations, you’d see them all transit. But we know (through radial velocity measurements that reveal additional non-transiting planets in the systems, and through planets passing in front of star-spots, or in front of other planets) that planetary systems are often not well aligned and sometimes have large mutual inclinations.

Although Kepler has found lots of multiple planet systems (multis), it has also found lots of single planets (singletons – that’s what Ballard & Johnson call them). I mean LOTS of singletons. Is this large number of singletons what you would expect to see, given some distribution of modest mutual inclinations? Can the single population be explained by assuming that they have non-transiting friends? Or is there some mysterious process generating all these singletons?

This mystery has been investigated previously for Sun-like stars (Morton & Winn, 2014), where evidence was found for separate populations of singletons and multis. Ballard & Johnson apply the same logic to the M dwarfs, the small stars, to see whether the same dual population phenomenon exists in the mini-planetary systems.

The planet machine


Figure 1. The model is shown in red with its 1 and 2 \sigma intervals. The blue histogram represents the observations. Ballard & Johnson find that they cannot adequately reproduce the observations with a single population of planets.

Ballard & Johnson compare the exoplanets observed by Kepler to a fake set of planets. They generate thousands of M dwarf planetary systems with between 1 and 8 planets and mutual inclination scatter ranging from 0–10°. They then tested whether their fake planetary systems were stable and got rid of any that would shake themselves apart through planet-planet interactions. For each system they recorded the number of planets that would be seen to transit and created a histogram of the number of transiting planets. This histogram was parameterised by N (the total number of planets per star) and \sigma (the scatter in mutual inclinations of the planets). They then determined which values of N and \sigma best describe the observations by comparing their fake-data histogram to the real-data histogram using a Poissonian likelihood function.
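The forward model is simple enough to sketch in code. Here is a minimal toy version in Python (my own illustration, not the authors' code; the isotropic orientation of each system and the assumed orbital distances of 10–100 stellar radii are simplifying assumptions made for concreteness): it draws systems with a fixed number of planets and a mutual-inclination scatter \sigma, counts how many planets would be seen to transit, and builds the histogram that gets compared to the observed one through the Poissonian likelihood.

import numpy as np

def simulated_transit_histogram(n_systems, n_planets, sigma_deg, seed=0):
    # Histogram of how many of the n_planets per system would be seen to transit.
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_planets + 1, dtype=int)
    for _ in range(n_systems):
        # random (isotropic) orientation of the system's mean orbital plane
        mean_incl = np.degrees(np.arccos(rng.uniform(-1.0, 1.0)))
        # scatter each planet about that plane by the mutual-inclination dispersion
        incl = mean_incl + rng.normal(0.0, sigma_deg, n_planets)
        # assumed toy geometry: orbital distances of 10-100 stellar radii
        a_over_rstar = rng.uniform(10.0, 100.0, n_planets)
        # a planet transits if its sky-projected impact parameter is below one stellar radius
        b = a_over_rstar * np.abs(np.cos(np.radians(incl)))
        counts[np.sum(b < 1.0)] += 1
    return counts

# e.g. five-planet systems with a 2-degree mutual-inclination scatter
print(simulated_transit_histogram(n_systems=20000, n_planets=5, sigma_deg=2.0))

The two-population version described below simply mixes histograms like this one (with a single planet per system for a fraction f of the stars) before the comparison with the data.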

Two populations of planets . . .

Figure 2. The same as Figure 1 but this time two planet populations are used: one with 1 planet only and one with 2-8 planets.

Ballard & Johnson found that they couldn’t reproduce the observations with this simple model: they couldn’t create enough singletons to match the observations. This is shown in Figure 1 (above)—see how the model (in red) just can’t quite get up to the same height as the observations (the blue histogram) for the singletons.

Next, the authors tried generating two populations of planets: one of singletons only, and another with 2-8 planets and a range of mutual inclination scatters. The ratio of the number of singletons to the number of multis, f, was an additional parameter of their model. Ballard & Johnson generated thousands of planetary systems with varying numbers of planets, mutual inclinations and values of f. Once again, they counted how many would transit and made a histogram, then found the values of N, \sigma and f that best reproduced the data. This time they were able to reproduce the observations—see Figure 2. The sharp upturn in the number of singletons seen in the observations (the blue histogram) is matched by the model (in red). They find a value of f that best reproduces the data: 0.55 (+0.23/−0.12), so around half of the systems are multis and half are singletons. For the multis, they find that there must be more than five planets per planet-hosting star, with small mutual inclinations, in order to reproduce the observations.

Trending hosts?

Having found that two populations of planets, one of singles and one of multis, best describe the data, Ballard & Johnson ask: is there a fundamental difference between the singleton hosts and the multi hosts? They looked at the host stars’ rotation periods, metallicities (the abundance of elements heavier than hydrogen and helium, such as carbon, in the star) and positions in the galaxy and found that the multi-hosting stars tend to be rotating more rapidly, are more metal poor and are closer to the galaxy’s midplane. The rotation trend might actually be an age trend: old stars spin more slowly than young stars, so perhaps we’re seeing that young systems have lots of planets that get shed over time. These findings are consistent with previous studies.

In this paper, Ballard & Johnson show that with some smooth statistical moves, you can probe an underlying population of objects even though you only observe a fraction of them. They show that the Kepler dichotomy persists for the mini Solar systems—intensifying the mystery behind the singleton excess. Some process that we don’t yet understand is generating all these singletons…. turns out it’s a lonely existence for most exoplanets.

 

by Ruth Angus at October 22, 2014 12:57 PM

Tommaso Dorigo - Scientificblogging

ECFA Workshop: Planning For The High Luminosity LHC
I am spending a few days in Aix-les-Bains, a pleasant lakeside resort in the French southeast, to follow the works of the second ECFA workshop, titled "High-Luminosity LHC". ECFA stands for "European Committee for Future Accelerators", but this particular workshop is indeed centred on the future of the LHC, despite the fact that there are at present at least half a dozen international efforts toward the design of more powerful hadron colliders, more precise linear electron-positron colliders, or still other solutions.

read more

by Tommaso Dorigo at October 22, 2014 11:04 AM

astrobites - astro-ph reader's digest

Newer Horizons Beyond Pluto
In the 1970s and 1980s, the Voyager probes visited the outer solar system, giving us some of the first close-up images of the giant planets at the edge of our solar system. Voyager 1 visited Jupiter and Saturn before beginning a journey out of the solar system, while Voyager 2 continued along the plane of the planets and visited Uranus and Neptune. Not for the last time, Pluto was left out.

To right this wrong (and to learn a lot about Pluto), NASA launched the New Horizons probe towards Pluto in 2006. After a nine-year journey, it will reach Pluto in July 2015. That sounds like a slow trip, but the distance to Pluto is huge: even light takes about four and a half hours to get out there. New Horizons is actually moving away from the sun at more than 10 miles a second!

New Horizons before launch, including humans for scale.

New Horizons before launch, including humans for scale.

The high speed of the probe is great for getting to Pluto, but terrible for staying at Pluto. To put a probe in orbit around a planet, we need to get the probe and the planet moving at almost exactly the same speed and in the same direction. And it gets worse: the smaller the planet, the smaller the gravitational field, so the closer you need to match the velocities. For the giant planets this is pretty easy: their gravitational pulls are so strong that just getting the probes there is a good start; once the spacecraft is near, the planet will do a lot of the work.

For smaller bodies near us (like Mars), we use a special orbit called a Hohmann transfer orbit to send probes to the planets. The Hohmann orbit places the spacecraft on an elliptical orbit, letting the Sun’s gravity slow the probe as it coasts outward. This technique is optimized for minimizing the amount of fuel needed to put a probe in orbit around another planet, but it is very slow. Earth-Mars transfers take about nine months; an Earth-Pluto transfer would take around 45 years!
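To see where those numbers come from, here is a back-of-the-envelope sketch (my own, not from the original post) using Kepler's third law in units of AU and years: the transfer time is half the period of an ellipse whose semi-major axis is the average of the two orbital radii.

import math

GM_SUN = 4.0 * math.pi ** 2      # Kepler's third law in units of AU and years

def hohmann_transfer_time(r1_au, r2_au):
    # half the period of an ellipse with semi-major axis (r1 + r2) / 2, in years
    a = 0.5 * (r1_au + r2_au)
    return 0.5 * 2.0 * math.pi * math.sqrt(a ** 3 / GM_SUN)

print(hohmann_transfer_time(1.0, 1.52))   # Earth -> Mars: ~0.7 years, i.e. about nine months
print(hohmann_transfer_time(1.0, 39.5))   # Earth -> Pluto: ~46 years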

Gravity transfers are out: we don’t want to wait decades to see Pluto. Our only other reasonable option is to fire New Horizons’ thrusters near Pluto to slow it down. This process would need a lot of fuel; we’re decelerating to zero from more than 10 miles per second, remember! Fuel isn’t light, and adding all this fuel would make the spacecraft significantly heavier. This is a problem: now that we’ve added weight, we need more fuel to even get New Horizons off the Earth. And this fuel takes up room, so we need to build bigger tanks. But this is more weight that we’ll have to decelerate, so we need more fuel on board to slow it all down too. See the problem? For every pound you add in fuel for the thrusters, you really add way more than one pound of mission. New Horizons weighed about 1,000 pounds at launch; to use its thrusters to stop at Pluto, we would have needed to launch it with almost 70,000 pounds of fuel!
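That "way more than one pound" statement is the Tsiolkovsky rocket equation at work. A rough sanity check (my own sketch; the approach speed and engine performance below are assumed round numbers, not actual mission values):

import math

dv = 13_500.0          # m/s, assumed speed to cancel relative to Pluto
isp = 320.0            # s, assumed specific impulse of a decent bipropellant engine
v_e = isp * 9.81       # effective exhaust velocity, m/s
m_dry = 1_000.0        # spacecraft mass in pounds (the units cancel, so pounds in, pounds out)

# Tsiolkovsky: m_fuel = m_dry * (exp(dv / v_e) - 1)
m_fuel = m_dry * (math.exp(dv / v_e) - 1.0)
print(round(m_fuel))   # roughly 70,000 pounds of propellant for a 1,000-pound spacecraft

With these assumptions the required fuel mass lands in the same ballpark as the figure quoted above; the exponential in the rocket equation is exactly why every pound of fuel costs more than a pound of mission.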

We’re out of options. The only conclusion is that we can’t stop at Pluto. As a result, New Horizons is a flyby mission. It’s going to come within 6,000 miles of Pluto, but only once. It might seem like a waste to just go past Pluto once and end the mission; NASA agrees with you! Since launch, the plan has been for New Horizons to visit another Kuiper belt object after visiting Pluto. The problem is that we don’t know of many objects close enough to Pluto for New Horizons to visit. At launch, we didn’t know of any.

In 2011, the team started a search for new objects near Pluto to visit. They collected images from telescopes in Hawaii and Chile, where they were sensitive to objects larger than about 50 kilometers (30 miles) in size. While they found 50 objects, none of them were close enough to Pluto to be appropriate for New Horizons! New Horizons’ post-Pluto plans were on the precipice of peril.

This year, the astronomers turned to their last hope: the mighty Hubble Space Telescope. Being above the Earth’s atmosphere, Hubble is sensitive to even smaller objects than the ground-based telescopes were. In this case, Hubble came successfully to the rescue, finding three potential targets! Last week, the team announced the top choice (although not necessarily final selection) for a post-Pluto mission, the romantically-named Kuiper belt object “1110113Y.”

We know how bright the object is, but have to rely on models of its composition and reflectivity to estimate its size. The best estimate is that 1110113Y is about 40 kilometers (25 miles) across. Based on 1110113Y’s position and motion, New Horizons should speed along to it and visit in January 2019.

So why do we care? New Horizons’ goal is to study the outer solar system, and these observations will give us close-up information on a Kuiper belt object like never before. Kuiper belt objects are believed to be the building blocks of Pluto and the most similar objects to the original planetesimals that formed the planets. Therefore, studying Kuiper belt objects really enables us to probe the Earth’s formation, giving us an initial condition to use when modeling the effects of 4.5 billion years of orbiting the Sun.

by Ben Montet at October 22, 2014 03:42 AM

October 21, 2014

Christian P. Robert - xi'an's og

delayed acceptance [alternative]

In a comment on our Accelerating Metropolis-Hastings algorithms: Delayed acceptance with prefetching paper, Philip commented that he had experimented with an alternative splitting technique retaining the right stationary measure: the idea behind his alternative acceleration is again (a) to divide the target into bits and (b) run the acceptance step by parts, towards a major reduction in computing time. The difference with our approach is to represent the  overall acceptance probability

\min_{k=0,..,d}\left\{\prod_{j=1}^k \rho_j(\eta,\theta),1\right\}

and, even more surprisingly than in our case, this representation remains associated with the right (posterior) target, provided the ordering of the terms is random with a symmetric distribution over the permutations! This property can be directly checked via the detailed balance condition.

In a toy example, I compared the acceptance rates (acrat) for our delayed solution (letabin.R), for this alternative (letamin.R), and for a non-delayed reference (letabaz.R), when considering more and more fractured decompositions of a Bernoulli likelihood.

> system.time(source("letabin.R"))
user system elapsed
225.918 0.444 227.200
> acrat
[1] 0.3195 0.2424 0.2154 0.1917 0.1305 0.0958
> system.time(source("letamin.R"))
user system elapsed
340.677 0.512 345.389
> acrat
[1] 0.4045 0.4138 0.4194 0.4003 0.3998 0.4145
> system.time(source("letabaz.R"))
user system elapsed
49.271 0.080 49.862
> acrat
[1] 0.6078 0.6068 0.6103 0.6086 0.6040 0.6158

A very interesting outcome, since the acceptance rate does not change with the number of terms in the decomposition for the alternative delayed acceptance method… even though it logically takes longer than our solution. However, the drawback is that detailed balance implies picking the order at random, hence losing the gain of computing the cheap terms first. If reversibility could be bypassed, then this alternative would definitely get very appealing!
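For readers who want the basic mechanics spelled out, here is a minimal sketch (mine, not code from the paper or from Philip's comment) of the standard factorised, "by parts" acceptance step with a fixed ordering and early rejection, on a toy Bernoulli target split into blocks; the alternative discussed above would instead randomise the order of the factors and use the minimum over partial products.

import numpy as np

rng = np.random.default_rng(1)

def delayed_acceptance_step(theta, propose, log_factors):
    # One MH step where the acceptance ratio is factorised and tested term by term;
    # with a symmetric proposal the overall acceptance probability is prod_j min(rho_j, 1),
    # which still satisfies detailed balance because the min terms cancel pairwise.
    eta = propose(theta)
    for log_pi_j in log_factors:
        log_rho_j = log_pi_j(eta) - log_pi_j(theta)
        if np.log(rng.uniform()) > log_rho_j:
            return theta            # early rejection: the remaining factors are never evaluated
    return eta

# Toy Bernoulli target: data split into blocks, one likelihood factor per block,
# plus a flat prior on (0,1) tested first so out-of-range proposals are rejected cheaply.
data = rng.binomial(1, 0.3, size=1000)
blocks = np.array_split(data, 10)
log_prior = lambda p: 0.0 if 0.0 < p < 1.0 else -np.inf
log_factors = [log_prior] + [
    (lambda p, b=b: b.sum() * np.log(p) + (len(b) - b.sum()) * np.log(1.0 - p))
    for b in blocks
]
propose = lambda p: p + rng.normal(0.0, 0.05)   # symmetric random walk

p = 0.5
for _ in range(5000):
    p = delayed_acceptance_step(p, propose, log_factors)
print(p)   # should hover around the true success probability of 0.3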


Filed under: Books, Kids, Statistics, University life Tagged: acceleration of MCMC algorithms, delayed acceptance, detailed balance, MCMC, Monte Carlo Statistical Methods, reversibility, simulation

by xi'an at October 21, 2014 10:14 PM

ZapperZ - Physics and Physicists

Scientific Evidence Points To A Designer?
We have had these types of anthropic universe arguments before, and I don't see this being settled anytime soon, unless we encounter an alien life form or something that dramatic.

Apparently, this physicist has been making the rounds giving talks on scientific evidence that points to a designer. Unfortunately, this claim is highly misleading. There are several issues that need to be clarified here:

1. This so-called evidence has many varying interpretations. In the hands of Stephen Hawking, it is evidence that we do NOT need a designer for the universe to exist. So to claim that it points to a designer is highly misleading, because obviously there are very smart people out there who think the opposite.

2. Scientific evidence comes with varying degrees of certainty. The evidence that niobium undergoes a superconducting transition at 9.3 K is a lot more certain than many of the astrophysical parameters that we have gathered so far. It is just the nature of the study and the field.

3. It is also interesting to note that even if the claim were true, it would conflict significantly with many orthodox religious views of the origin of the universe, including the fact that it allows significant time for speciation and evolution.

4. The argument that the universe has been fine-tuned for us to live in is very weak in my book. Who is to say that if any of these parameters were different, a different type of universe couldn't appear and different types of life forms wouldn't dominate? Our knowledge of how different types of universes could form is still in its infancy, which is one of the arguments Hawking used when he invoked the multiverse scenario. So unless there is a convincing argument that our universe is the one and only universe that can exist, and nothing else can, this argument falls very flat.

I find that this type of seminar can't be very productive unless there is a panel discussion presenting both sides. People who listened to this may not be aware of the holes in such arguments, and I would also point them to talks by those on the opposite side. It would have been better to invite two scientists with opposing views, who could show the public how the same set of evidence leads to different conclusions. This is what happens when the full set of evidence needed to paint a clear picture isn't available.

Zz.

by ZapperZ (noreply@blogger.com) at October 21, 2014 06:38 PM

Quantum Diaries

I feel it mine

On Saturday, 4 October, Nikhef – the Dutch National Institute for Subatomic Physics where I spend long days and efforts – opened its doors, labs and facilities to the public. In addition to Nikhef, all the other institutes located in the so-called “Science Park” – the scientific district located in the east part of Amsterdam – welcomed people all day long.

It’s the second “Open Day” that I’ve attended, both as a guest and as a guide. Together with my fellow theoreticians, I provided answers and explanations to people’s questions and curiosities, standing in the “Big Bang Theory Corner” of the main hall. Each department in Nikhef arranged its own stand and activities, and there were enough things to be amazed at to fill the entire day.

The research institutes in Science Park (and outside it) offer a good overview of the concept of research, looking for what is beyond the current status of knowledge. “Verder kijken”, or looking further, is the motto of Vrije Universiteit Amsterdam, my Dutch alma mater.

I deeply like this attitude of research, the willingness to investigate what’s around the corner. As they like to define themselves, Dutch people are “future oriented”: this is manifest in several things, from the way they read the clock (“half past seven” becomes “half before eight” in Dutch) to some peculiarities of the city itself, like the presence of a lot of cultural and research institutes.

This abundance of institutes, museums, exhibitions, public libraries, music festivals, art spaces, and independent cinemas makes me experience this city as a cultural place. People interact with culture in its many manifestations and are connected to it in a more dynamic way than if they were only surrounded by historical and artistic heritage.

Back to the Open Day and Nikhef, I was pleased to see lots of people, families with kids running here and there, checking out delicate instruments with their curious hands, and groups of guys and girls (also someone who looked like he had come straight from a skate-park) stopping by and looking around as if it were their own courtyard.

The following pictures give some examples of the ongoing activities:

We had a model of the ATLAS detector built with Legos: amazing!

IMG_0770

Copyright Nikhef

And not only toy-models. We had also true detectors, like a cloud chamber that allowed visitors to see the traces of particles passing by!

ADL_167796

Copyright Nikhef

Weak force and anti-matter are also cool, right?

ADL_167823

Copyright Nikhef

The majority of people here (not me) are blond and/or tall, but not tall enough to see cosmic rays with just their eyes… So, please ask the experts!

ADL_167793

Copyright Nikhef

I think I can summarize the huge impact and the benefit of such a cool day with the words of one man who stopped by one of the experimental setups. He listened to the careful (but a bit fuzzy) explanation provided by one of the students, and said “Thanks. Now I feel it mine too.”

Many more photos are available here: enjoy!

by Andrea Signori at October 21, 2014 05:23 PM

Peter Coles - In the Dark

A Dirge

Rough Wind, that moanest loud
Grief too sad for song;
Wild wind, when sullen cloud
Knells all the night long;
Sad storm, whose tears are vain,
Bare woods, whose branches strain,
Deep caves and dreary main,—
Wail, for the world’s wrong!

by Percy Bysshe Shelley

 

 


by telescoper at October 21, 2014 04:09 PM

John Baez - Azimuth

Network Theory Seminar (Part 3)

 

This time we use the principle of minimum power to determine what a circuit made of resistors actually does. Its ‘behavior’ is described by a functor sending circuits to linear relations between the potentials and currents at the input and output terminals. We call this the ‘black box’ functor, since it takes a circuit:

and puts a metaphorical ‘black box’ around it:

hiding the circuit’s internal details and letting us see only how it acts as viewed ‘from outside’.

For more, see the lecture notes here:

Network theory (part 32).

http://johncarlosbaez.wor


by John Baez at October 21, 2014 03:17 PM

Symmetrybreaking - Fermilab/SLAC

Costumes to make zombie Einstein proud

These physics-themed Halloween costume ideas are sure to entertain—and maybe even educate. Terrifying, we know.

So you haven’t picked a Halloween costume, and the big night is fast approaching. If you’re looking for something a little funny, a little nerdy and sure to impress fellow physics fans, look no further. We’ve got you covered.

1. Dark energy

This is an active costume, perfect for the party-goer who plans to consume a large quantity of sugar. Suit up in all black or camouflage, then spend your evening squeezing between people and pushing them apart.

Congratulations! You’re dark energy: a mysterious force causing the accelerating expansion of the universe, intriguing in the lab and perplexing on the dance floor.

2. Cosmic inflation

Theory says that a fraction of a second after the big bang, the universe grew exponentially, expanding so that tiny fluctuations were stretched into the seeds of entire galaxies.

But good luck getting that costume through the door.

Instead, take a simple yellow life vest and draw the cosmos on it: stars, planets, asteroids, whatever you fancy. When friends pull on the emergency tab, the universe will grow.

3. Heisenberg Uncertainty Principle

Here’s a great excuse to repurpose your topical Breaking Bad costume from last year.

Walter White—aka “Heisenberg”—may have been a chemistry teacher, but the Heisenberg Uncertainty Principle is straight out of physics. Named after Werner Heisenberg, a German physicist credited with the creation of quantum mechanics, the Heisenberg Uncertainty Principle states that the more accurately you know the position of a particle, the less accurately you can know its momentum.

Put on Walter White’s signature hat and shades (or his yellow suit and respirator), but then add some uncertainty by pasting Riddler-esque question marks to your outfit.

4. Bad neutrino

A warning upfront: Only the ambitious and downright extroverted should attempt this costume.

Neutrinos are ghostly particles that pass through most matter undetected. In fact, trillions of neutrinos pass through your body every second without your knowledge.

But you aren’t going to go as any old neutrino. Oh no. You’re a bad neutrino—possibly the worst one in the universe—so you run into everything: lampposts, trees, haunted houses and yes, people. Don a simple white sheet and spend the evening interacting with everyone and everything.

5. Your favorite physics experiment

You physics junkies know that there are a lot of experiments with odd acronyms and names that are ripe for Halloween costumes. You can go as ATLAS (experiment at the Large Hadron Collider / character from Greek mythology), DarkSide (dark matter experiment at Gran Sasso National Laboratory / good reason to repurpose your Darth Vader costume), PICASSO (dark matter experiment at SNOLAB / creator of Cubism), MINERvA (Fermilab neutrino experiment / Roman goddess of wisdom), or the Dark Energy Survey (dark energy camera located at the Blanco Telescope in Chile / good opportunity for a pun).

Physics-loving parents can go as explorer Daniel Boone, while the kids go as neutrino experiments MicroBooNE and MiniBooNE. The kids can wear mini fur hats of their own or dress as detector tanks to be filled with candy.

6. Feynman diagram

You might know that a Feynman diagram is a drawing that uses lines and squiggles to represent a particle interaction. But have you ever noticed that they sometimes look like people? Try out this new take on the black outfit/white paint skeleton costume. Bonus points for going as a penguin diagram.

7. Antimatter

Break out the bell-bottoms and poster board. In bold letters, scrawl the words of your choosing: “I hate things!,” “Stuff is awful!,” and “Down with quarks!” will all do nicely. Protest from house to house and declare with pride that you are antimatter. It’s a fair critique: Physicists still aren’t sure why matter dominates the universe when equal amounts of matter and antimatter should have been created in the big bang.

Fortunately, you don’t have to solve this particular puzzle on your quest for candy. Just don’t high five anyone; you might annihilate.

8. Entangled particles

Einstein described quantum entanglement as “spooky action at a distance”—the perfect costume for Halloween. Entangled particles are extremely strange. Measuring one automatically determines the state of the other, instantaneously.

Find someone you are extremely in tune with and dress in opposite colors, like black and white. When no one is observing you, you can relax. But when interacting with people, be sure to coordinate movements. They spin to the left, you spin to the right. They wave with the right hand? You wave with the left. You get the drill.

You can also just wrap yourselves together in a net. No one said quantum entanglement has to be hard.

9. Holographic you(niverse)

The universe may be like a hologram, according to a theory currently being tested at Fermilab’s Holometer experiment. If so, information about spacetime is chunked into 2-D bits that only appear three-dimensional from our perspective.

Help others imagine this bizarre concept by printing out a photo of yourself and taping it to your front. You’ll still technically be 3-D, but that two-dimensional picture of your face will still start some interesting discussions. Perhaps best not to wear this if you have a busy schedule or no desire to discuss the nature of time and space while eating a Snickers.

10. Your favorite particle

There are many ways to dress up as a fundamental particle. Bring a lamp along to trick-or-treat to go as the photon, carrier of light. Hand out cookies to go as the Higgs boson, giver of mass. Spend the evening attaching things to people to go as a gluon.

To branch out beyond the Standard Model of particle physics, go as a supersymmetric particle, or sparticle: Wear a gladiator costume and shout, “I am Sparticle!” whenever someone asks about your costume.

Or grab a partner to become a meson, a particle made of a quark and antiquark. Mesons are typically unstable, so whenever you unlink arms, be sure to decay in a shower of electrons and neutrinos—or candy corn.

 

Like what you see? Sign up for a free subscription to symmetry!

by Lauren Biron at October 21, 2014 02:51 PM

Jester - Resonaances

Dark matter or pulsars? AMS hints it's neither.
Yesterday AMS-02 updated their measurement of cosmic-ray positron and electron fluxes. The newly published data extend to positron energies of 500 GeV, compared to 350 GeV in the previous release. The central value of the positron fraction in the highest energy bin is one third of an error bar lower than the central value of the next-to-highest bin.  This allows the collaboration to conclude that the positron fraction has a maximum and starts to decrease at high energies :]  The sloppy presentation and unnecessary hype obscure the fact that AMS actually found something non-trivial.  Namely, it is interesting that the positron fraction, after a sharp rise between 10 and 200 GeV, seems to plateau at higher energies at a value around 15%.  This sort of behavior, although not expected by popular models of cosmic ray propagation, was actually predicted a few years ago, well before AMS was launched.

Before I get to the point, let's have a brief summary. In 2008 the PAMELA experiment observed a steep rise of the cosmic ray positron fraction between 10 and 100 GeV. Positrons are routinely produced by scattering of high energy cosmic rays (secondary production), but the rise was not predicted by models of cosmic ray propagation. This prompted speculation about another (primary) source of positrons, ranging from pulsars, supernovae or other astrophysical objects to dark matter annihilation. The dark matter explanation is unlikely for many reasons. On the theoretical side, the large annihilation cross section required is difficult to achieve, and it is difficult to produce a large flux of positrons without producing an excess of antiprotons at the same time. In particular, the MSSM neutralino entertained in the last AMS paper certainly cannot fit the cosmic-ray data for these reasons. When theoretical obstacles are overcome by skillful model building, constraints from gamma ray and radio observations disfavor the relevant parameter space. Even if these constraints are dismissed due to large astrophysical uncertainties, the models poorly fit the shape of the electron and positron spectrum observed by PAMELA, AMS, and FERMI (see the addendum of this paper for a recent discussion). Pulsars, on the other hand, are a plausible but handwaving explanation: we know they are all around and we know they produce electron-positron pairs in the magnetosphere, but we cannot calculate the spectrum from first principles.

But maybe primary positron sources are not needed at all? The old paper by Katz et al. proposes a different approach. Rather than starting with a particular propagation model, it assumes the high-energy positrons observed by PAMELA are secondary, and attempts to deduce from the data the parameters controlling the propagation of cosmic rays. The logic is based on two premises. Firstly, while production of cosmic rays in our galaxy contains many unknowns, the production of different particles is strongly correlated, with the relative ratios depending on nuclear cross sections that are measurable in laboratories. Secondly, different particles propagate in the magnetic field of the galaxy in the same way, depending only on their rigidity (momentum divided by charge). Thus, from an observed flux of one particle, one can predict the production rate of other particles. This approach is quite successful in predicting the cosmic antiproton flux based on the observed boron flux. For positrons, the story is more complicated because of large energy losses (cooling) due to synchrotron and inverse-Compton processes. However, in this case one can do the exercise of computing the positron flux assuming no losses at all. The result corresponds to a positron fraction of roughly 20% above 100 GeV. Since in the real world cooling can only suppress the positron flux, the value computed assuming no cooling represents an upper bound on the positron fraction.

Now, at lower energies, the observed positron flux is a factor of a few below the upper bound. This is already intriguing, as hypothetical primary positrons could in principle have an arbitrary flux, orders of magnitude larger or smaller than this upper bound. The rise observed by PAMELA can be interpreted as the suppression due to cooling decreasing as the positron energy increases. This is not implausible: the suppression depends on the interplay of the cooling time and the mean propagation time of positrons, both of which are unknown functions of energy. Once the cooling time exceeds the propagation time, the suppression factor is completely gone. In such a case the positron fraction should saturate the upper limit. This is what seems to be happening at the energies of 200-500 GeV probed by AMS, as can be seen in the plot. Already the previous AMS data were consistent with this picture, and the latest update only strengthens it.

So, it may be that the mystery of cosmic ray positrons has a simple down-to-galactic-disc explanation. If further observations show the positron flux climbing above the upper limit or dropping suddenly, then the secondary production hypothesis would be invalidated. But, for the moment, the AMS data seem to be consistent with no primary sources, just assuming that the cooling time of positrons is shorter than predicted by the state-of-the-art propagation models. So, instead of dark matter, AMS might have discovered that models of cosmic-ray propagation need a fix. That's less spectacular, but still worthwhile.

Thanks to Kfir for the plot and explanations. 

by Jester (noreply@blogger.com) at October 21, 2014 08:49 AM

October 20, 2014

Christian P. Robert - xi'an's og

control functionals for Monte Carlo integration

This new arXival by Chris Oates, Mark Girolami, and Nicolas Chopin (warning: they all are colleagues & friends of mine!, at least until they read those comments…) is a variation on control variates, but with a surprising twist, namely that the inclusion of a control variate functional may produce a sub-root-n (i.e., faster than the usual 1/√n) convergence rate in the resulting estimator. Surprising, as I did not know one could get to sub-root-n rates..! Now, I had forgotten that Anne Philippe and I used the score in an earlier paper of ours, as a control variate for Riemann sum approximations, with faster convergence rates, but this is indeed a new twist, in particular because it produces an unbiased estimator.

The control variate writes

\psi_\phi (x) = \nabla_x \cdot \phi(x) + \phi(x)\cdot \nabla_x \log \pi(x)

where π is the target density and φ is a free function to be optimised. (Under the constraint that πφ is integrable. Then the expectation of ψφ is indeed zero.) The “explanation” for the sub-root-n behaviour is that ψφ is chosen as an L2 regression. When looking at the sub-root-n convergence proof, the explanation is more of a Rao-Blackwellisation type, assuming a first-level convergent (or persistent) approximation to the integrand [of the above form ψφ] can be found. The optimal φ is the solution of a differential equation that needs estimating, and the paper concentrates on approximating strategies. This connects with Antonietta Mira’s zero variance control variates, but in a non-parametric manner, adopting a Gaussian process as the prior on the unknown φ. And this is where the huge innovation in the paper resides, I think, i.e. in assuming a Gaussian process prior on the control functional and in managing to preserve unbiasedness. As in many of its implementations, modelling by Gaussian processes offers nice features, like ψφ being itself a Gaussian process. Except that it cannot be shown to lead to persistency on a theoretical basis. Even though it appears to hold in the examples of the paper. Apart from this theoretical difficulty, the potential hardship with the method seems to be in the implementation, as there are several parameters and functionals to be calibrated, hence calling for cross-validation, which may often be time-consuming. The gains are humongous, so the method should be adopted whenever the added cost of implementing it is reasonable, a cost whose evaluation is not clearly provided by the paper. In the toy Gaussian example where everything can be computed, I am surprised at the relatively poor performance of a Riemann sum approximation to the integral, wondering at the level of quadrature involved therein. The paper also interestingly connects with O’Hagan’s (1991) Bayes-Hermite [polynomials] quadrature and quasi-Monte Carlo [obviously!].
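To see the zero-mean property of ψφ at work, here is a toy illustration (mine, and deliberately much simpler than the paper's Gaussian-process construction, where φ is learned rather than hand-picked): target π = N(0,1), integrand f(x) = x², and φ(x) = x, so that ψφ(x) = 1 − x², which has zero expectation under π.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100000)      # draws from the target pi = N(0,1)

f = x ** 2                           # integrand, with E_pi[f] = 1
psi = 1.0 - x ** 2                   # psi_phi for phi(x) = x: d/dx phi + phi * d/dx log pi = 1 - x^2
c = 1.0                              # control-variate coefficient; here c = 1 removes the variance entirely

print(f.mean(), (f + c * psi).mean())   # both estimate E_pi[f] = 1 without bias
print(f.var(), (f + c * psi).var())     # the controlled estimator has (numerically) zero variance

This hand-picked φ gives zero variance for this particular integrand; the point of the paper is to obtain similar gains automatically, for generic integrands, by placing a Gaussian process prior on φ.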


Filed under: Books, Statistics, University life Tagged: control variate, convergence rate, Gaussian processes, Monte Carlo Statistical Methods, simulation, University of Warwick

by xi'an at October 20, 2014 10:14 PM

John Baez - Azimuth

Network Theory (Part 32)

Okay, today we will look at the ‘black box functor’ for circuits made of resistors. Very roughly, this takes a circuit made of resistors with some inputs and outputs:

and puts a ‘black box’ around it:

forgetting the internal details of the circuit and remembering only how it behaves as viewed from outside. As viewed from outside, all the circuit does is define a relation between the potentials and currents at the inputs and outputs. We call this relation the circuit’s behavior. Lots of different choices of the resistances R_1, \dots, R_6 would give the same behavior. In fact, we could even replace the whole fancy circuit by a single edge with a single resistor on it, and get a circuit with the same behavior!

The idea is that when we use a circuit to do something, all we care about is its behavior: what it does as viewed from outside, not what it’s made of.

Furthermore, we’d like the behavior of a system made of parts to depend in a simple way on the external behaviors of its parts. We don’t want to have to ‘peek inside’ the parts to figure out what the whole will do! Of course, in some situations we do need to peek inside the parts to see what the whole will do. But in this particular case we don’t—at least in the idealization we are considering. And this fact is described mathematically by saying that black boxing is a functor.

So, how do circuits made of resistors behave? To answer this we first need to remember what they are!

Review

Remember that for us, a circuit made of resistors is a mathematical structure like this:

It’s a cospan where:

\Gamma is a graph labelled by resistances. So, it consists of a finite set N of nodes, a finite set E of edges, two functions

s, t : E \to N

sending each edge to its source and target nodes, and a function

r : E \to (0,\infty)

that labels each edge with its resistance.

i: I \to \Gamma is a map of graphs labelled by resistances, where I has no edges. A labelled graph with no edges has nothing but nodes! So, the map i is just a trick for specifying a finite set of nodes called inputs and mapping them to N. Thus i picks out some nodes of \Gamma and declares them to be inputs. (However, i may not be one-to-one! We’ll take advantage of that subtlety later.)

o: O \to \Gamma is another map of graphs labelled by resistances, where O again has no edges, and we call its nodes outputs.

The principle of minimum power

So what does a circuit made of resistors do? This is described by the principle of minimum power.

Recall from Part 27 that when we put it to work, our circuit has a current I_e flowing along each edge e \in E. This is described by a function

I: E \to \mathbb{R}

It also has a voltage across each edge. The word ‘across’ is standard here, but don’t worry about it too much; what matters is that we have another function

V: E \to \mathbb{R}

describing the voltage V_e across each edge e.

Resistors heat up when current flows through them, so they eat up electrical power and turn this power into heat. How much? The power is given by

\displaystyle{ P = \sum_{e \in E} I_e V_e }

So far, so good. But what does it mean to minimize power?

To understand this, we need to manipulate the formula for power using the laws of electrical circuits described in Part 27. First, Ohm’s law says that for linear resistors, the current is proportional to the voltage. More precisely, for each edge e \in E,

\displaystyle{ I_e = \frac{V_e}{r_e} }

where r_e is the resistance of that edge. So, the bigger the resistance, the less current flows: that makes sense. Using Ohm’s law we get

\displaystyle{ P = \sum_{e \in E} \frac{V_e^2}{r_e} }

Now we see that power is always nonnegative! Now it makes more sense to minimize it. Of course we could minimize it simply by setting all the voltages equal to zero. That would work, but that would be boring: it gives a circuit with no current flowing through it. The fun starts when we minimize power subject to some constraints.

For this we need to remember another law of electrical circuits: a spinoff of Kirchhoff’s voltage law. This says that we can find a function called the potential

\phi: N \to \mathbb{R}

such that

V_e = \phi_{s(e)} - \phi_{t(e)}

for each e \in E. In other words, the voltage across each edge is the difference of potentials at the two ends of this edge.

Using this, we can rewrite the power as

\displaystyle{ P = \sum_{e \in E} \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)})^2 }

Now we’re really ready to minimize power! Our circuit made of resistors has certain nodes called terminals:

T \subseteq N

These are the nodes that are either inputs or outputs. More precisely, they’re the nodes in the image of

i: I \to \Gamma

or

o: O \to \Gamma

The principle of minimum power says that:

If we fix the potential \phi on all terminals, the potential at other nodes will minimize the power

\displaystyle{ P(\phi) = \sum_{e \in E} \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)})^2 }

subject to this constraint.

This should remind you of all the other minimum or maximum principles you know, like the principle of least action, or the way a system in thermodynamic equilibrium maximizes its entropy. All these principles—or at least, most of them—are connected. I could talk about this endlessly. But not now!

Now let’s just use the principle of minimum power. Let’s see what it tells us about the behavior of an electrical circuit.

Let’s imagine changing the potential \phi by adding some multiple of a function

\psi: N \to \mathbb{R}

If this other function vanishes at the terminals:

\forall n \in T \; \; \psi(n) = 0

then \phi + x \psi doesn’t change at the terminals as we change the number x.

Now suppose \phi obeys the principle of minimum power. In other words, suppose it minimizes power subject to the constraint of taking the values it does at the terminals. Then we must have

\displaystyle{ \frac{d}{d x} P(\phi + x \psi)\Big|_{x = 0} = 0 }

whenever

\forall n \in T \; \; \psi(n) = 0

This is just the first derivative test for a minimum. But the converse is true, too! The reason is that our power function is a sum of nonnegative quadratic terms. Its graph will look like a paraboloid. So, the power has no points where its derivative vanishes except minima, even when we constrain \phi by making it lie on a linear subspace.

We can go ahead and start working out the derivative:

\displaystyle{ \frac{d}{d x} P(\phi + x \psi) = \frac{d}{d x} \sum_{e \in E} \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)} + x(\psi_{s(e)} -\psi_{t(e)}))^2  }

To work out the derivative of these quadratic terms at x = 0, we only need to keep the part that’s proportional to x. The rest gives zero. So:

\begin{array}{ccl} \displaystyle{ \frac{d}{d x} P(\phi + x \psi)\Big|_{x = 0} } &=& \displaystyle{ \frac{d}{d x} \sum_{e \in E} \frac{x}{r_e} (\phi_{s(e)} - \phi_{t(e)}) (\psi_{s(e)} - \psi_{t(e)}) \Big|_{x = 0} } \\ \\  &=&   \displaystyle{  \sum_{e \in E} \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) (\psi_{s(e)} - \psi_{t(e)}) }  \end{array}

The principle of minimum power says this is zero whenever \psi : N \to \mathbb{R} is a function that vanishes at terminals. By linearity, it’s enough to consider functions \psi that are zero at every node except one node n that is not a terminal. By linearity we can also assume \psi(n) = 1.

Given this, the only nonzero terms in the sum

\displaystyle{ \sum_{e \in E} \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) (\psi_{s(e)} - \psi_{t(e)}) }

will be those involving edges whose source or target is n. We get

\begin{array}{ccc} \displaystyle{ \frac{d}{d x} P(\phi + x \psi)\Big|_{x = 0} } &=& \displaystyle{ \sum_{e: \; s(e) = n}  \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)})}  \\  \\        && -\displaystyle{ \sum_{e: \; t(e) = n}  \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) }   \end{array}

So, the principle of minimum power says precisely

\displaystyle{ \sum_{e: \; s(e) = n}  \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) = \sum_{e: \; t(e) = n}  \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) }

for all nodes n that aren’t terminals.

What does this mean? You could just say it’s a set of linear equations that must be obeyed by the potential \phi. So, the principle of minimum power says that fixing the potential at terminals, the potential at other nodes must be chosen in a way that obeys a set of linear equations.

But what do these equations mean? They have a nice meaning. Remember, Kirchhoff’s voltage law says

V_e = \phi_{s(e)} - \phi_{t(e)}

and Ohm’s law says

\displaystyle{ I_e = \frac{V_e}{r_e} }

Putting these together,

\displaystyle{ I_e = \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) }

so the principle of minimum power merely says that

\displaystyle{ \sum_{e: \; s(e) = n} I_e = \sum_{e: \; t(e) = n}  I_e }

for any node n that is not a terminal.

This is Kirchhoff’s current law: for any node except a terminal, the total current flowing into that node must equal the total current flowing out! That makes a lot of sense. We allow current to flow in or out of our circuit at terminals, but ‘inside’ the circuit charge is conserved, so if current flows into some other node, an equal amount has to flow out.

In short: the principle of minimum power implies Kirchhoff’s current law! Conversely, we can run the whole argument backward and derive the principle of minimum power from Kirchhoff’s current law. (In both the forwards and backwards versions of this argument, we use Kirchhoff’s voltage law and Ohm’s law.)

When the node n is a terminal, the quantity

\displaystyle{  \sum_{e: \; s(e) = n} I_e \; - \; \sum_{e: \; t(e) = n}  I_e }

need not be zero. But it has an important meaning: it’s the amount of current flowing into that terminal!

We’ll call this I_n, the current at the terminal n \in T. This is something we can measure even when our circuit has a black box around it:

So is the potential \phi_n at the terminal n. It’s these currents and potentials at terminals that matter when we try to describe the behavior of a circuit while ignoring its inner workings.
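To make this concrete, here is a small numerical sketch (my own, not code from the paper) of the recipe just described: fix the potential at the terminals, solve the minimum-power equations (equivalently, Kirchhoff's current law at the internal nodes) for the remaining potentials, and read off the current flowing in at each terminal. The matrix involved is just the graph Laplacian weighted by the conductances 1/r_e.

import numpy as np

def black_box(nodes, edges, terminal_potentials):
    # nodes: list of node names; edges: list of (source, target, resistance);
    # terminal_potentials: dict {terminal node: fixed potential phi}.
    idx = {n: k for k, n in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))            # Laplacian weighted by conductances 1/r_e
    for s, t, r in edges:
        for a, b in [(s, t), (t, s)]:
            L[idx[a], idx[a]] += 1.0 / r
            L[idx[a], idx[b]] -= 1.0 / r
    T = [idx[n] for n in terminal_potentials]         # terminal indices
    F = [k for k in range(len(nodes)) if k not in T]  # free (internal) node indices
    phi = np.zeros(len(nodes))
    phi[T] = list(terminal_potentials.values())
    if F:  # minimum power <=> L_FF phi_F = -L_FT phi_T
        phi[F] = np.linalg.solve(L[np.ix_(F, F)], -L[np.ix_(F, T)] @ phi[T])
    currents = L @ phi                                # net current flowing into the circuit at each node
    return {n: (phi[idx[n]], currents[idx[n]]) for n in terminal_potentials}

# Two terminals joined through one internal node by 1-ohm and 2-ohm resistors in series:
# 3 volts across 3 ohms should give 1 amp in at 'a' and 1 amp out at 'b'.
print(black_box(nodes=["a", "m", "b"],
                edges=[("a", "m", 1.0), ("m", "b", 2.0)],
                terminal_potentials={"a": 3.0, "b": 0.0}))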

Black boxing

Now let me quickly sketch how black boxing becomes a functor.

A circuit made of resistors gives a linear relation between the potentials and currents at terminals. A relation is something that can hold or fail to hold. A ‘linear’ relation is one defined using linear equations.

A bit more precisely, suppose we choose potentials and currents at the terminals:

\psi : T \to \mathbb{R}

J : T \to \mathbb{R}

Then we seek potentials and currents at all the nodes and edges of our circuit:

\phi: N \to \mathbb{R}

I : E \to \mathbb{R}

that are compatible with our choice of \psi and J. Here compatible means that

\psi_n = \phi_n

and

J_n = \displaystyle{  \sum_{e: \; s(e) = n} I_e \; - \; \sum_{e: \; t(e) = n}  I_e }

whenever n \in T, but also

\displaystyle{ I_e = \frac{1}{r_e} (\phi_{s(e)} - \phi_{t(e)}) }

for every e \in E, and

\displaystyle{  \sum_{e: \; s(e) = n} I_e \; = \; \sum_{e: \; t(e) = n}  I_e }

whenever n \in N - T. (The last two equations combine Kirchhoff’s laws and Ohm’s law.)

There either exist I and \phi making all these equations true, in which case we say our potentials and currents at the terminals obey the relation… or they don’t exist, in which case we say the potentials and currents at the terminals don’t obey the relation.

The relation is clearly linear, since it’s defined by a bunch of linear equations. With a little work, we can make it into a linear relation between potentials and currents in

\mathbb{R}^I \oplus \mathbb{R}^I

and potentials and currents in

\mathbb{R}^O \oplus \mathbb{R}^O

Remember, I is our set of inputs and O is our set of outputs.

In fact, this process of getting a linear relation from a circuit made of resistors defines a functor:

\blacksquare : \mathrm{ResCirc} \to \mathrm{LinRel}

Here \mathrm{ResCirc} is the category where morphisms are circuits made of resistors, while \mathrm{LinRel} is the category where morphisms are linear relations.

More precisely, here is the category \mathrm{ResCirc}:

• an object of \mathrm{ResCirc} is a finite set;

• a morphism from I to O is an isomorphism class of circuits made of resistors:

having I as its set of inputs and O as its set of outputs;

• we compose morphisms in \mathrm{ResCirc} by composing isomorphism classes of cospans.

(Remember, circuits made of resistors are cospans. This lets us talk about isomorphisms between them. If you forget how isomorphisms between cospans work, you can review it in Part 31.)

And here is the category \mathrm{LinRel}:

• an object of \mathrm{LinRel} is a finite-dimensional real vector space;

• a morphism from U to V is a linear relation R \subseteq U \times V, meaning a linear subspace of the vector space U \times V;

• we compose a linear relation R \subseteq U \times V and a linear relation S \subseteq V \times W in the usual way we compose relations, getting:

SR = \{(u,w) \in U \times W : \; \exists v \in V \; (u,v) \in R \mathrm{\; and \;} (v,w) \in S \}
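
If you like to compute, one concrete way to do this, a sketch of my own rather than anything from the paper, is to present a linear relation as the image of a pair of matrices; composition then amounts to a kernel computation. The example checks that composing the graphs of two linear maps gives the graph of their composite.

```python
import numpy as np
from scipy.linalg import null_space

def compose(R, S):
    """Compose linear relations given as images of matrix pairs.

    R is the image of x |-> (A x, B x) in U x V, and S is the image of
    y |-> (C y, D y) in V x W.  A pair (A x, D y) lies in the composite
    exactly when B x = C y, i.e. when (x, y) lies in the kernel of [B  -C].
    """
    (A, B), (C, D) = R, S
    K = null_space(np.hstack([B, -C]))        # columns span {(x, y) : B x = C y}
    x_part, y_part = K[:A.shape[1], :], K[A.shape[1]:, :]
    return (A @ x_part, D @ y_part)           # the composite, again as an image

# Sanity check: composing the graphs of u |-> M u and v |-> N v
# should give the graph of u |-> (N M) u.
M = np.array([[1., 2.], [3., 4.]])
N = np.array([[0., 1.], [1., 1.]])
graph = lambda L: (np.eye(L.shape[1]), L)     # {(u, L u)} as an image
P, Q = compose(graph(M), graph(N))
print(np.allclose(Q, N @ M @ P))              # True
```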

Next steps

So far I’ve set up most of the necessary background but not precisely defined the black boxing functor

\blacksquare : \mathrm{ResCirc} \to \mathrm{LinRel}

There are some nuances I’ve glossed over, like the difference between inputs and outputs as elements of I and O and their images in N. If you want to see the precise definition and the proof that it’s a functor, read our paper:

• John Baez and Brendan Fong, A compositional framework for passive linear networks.

The proof is fairly long: there may be a much quicker one, but at least this one has the virtue of introducing a lot of nice ideas that will be useful elsewhere.

Perhaps next time I will clarify the nuances by doing an example.


by John Baez at October 20, 2014 10:00 PM

Clifford V. Johnson - Asymptotia

Secrets of the Earth
My guess is that most of you don't know that you can find original science programming on the Weather Channel. (Just like, say, 8 years ago most of you would not have been tuning in to the History Channel for original science programming about how the Universe works, but many of you know better - and thanks for watching The Universe!) Well, this week one of their series that does do some science, Secrets of the Earth, comes back for a new season. I made some contributions to several of the episodes, and I think I appear in at least two of them as a guest. So look at the whole season for some tasty bits of science about the world around you, and if inclined to, do [...]

by Clifford at October 20, 2014 08:21 PM

Emily Lakdawalla - The Planetary Society Blog

When Good Rockets Go Bad: Orion's Launch Abort System
One of the tricky parts of launching humans into space is deciding what to do if something goes wrong. And that's where Orion's Launch Abort System comes in.

October 20, 2014 06:30 PM

arXiv blog

How An Intelligent Text Message Service Aims To Tackle Ebola In Western Africa

A computer-controlled text message service could direct Ebola cases to appropriate medical facilities and track the spread of the disease in the process–provided it can raise the necessary funding.


Back in July, Cedric Moro started a crowdsourced mapping service to keep track of the spread of Ebola in Sierra Leone, Liberia and Guinea. Moro is a risk consultant who has created several crowdsourced maps of this kind using the openStreetMap project Umap.

October 20, 2014 03:49 PM

ATLAS Experiment

Defending Your Life (Part 2)

I’ve been working on our simulation software for a long time, and I’m often asked “what on earth is that?” This is my attempt to help you love simulation as much as I do. This is a follow up to Part 1, which told you all about the first step of good simulation software, called “event generation”. In that step, we had software that gave us a list of stable particles that our detector might be able to see. And we’re trying to find some “meons” that our friend the theorist dreamed up.

One little problem with those wonderful event generators is that they don’t know anything about our experiment, ATLAS. We need a different piece of software to take those particles and move them through the detector one by one, helping model the detector’s response to each one of the particles as it goes. There are a few pieces of software that can do that, but the one that we use most is called Geant4. Geant4 is publicly available, and is described as a “toolkit” on their webpage. What that means is that it knows about basic concepts, but it doesn’t do specifics. Like building a giant lego house out of a bag of bricks, you have to figure out what fits where, and often throw out things that don’t fit.

One of the detector layouts that we simulate

The first part of a good detector simulation is the detector description. Every piece of the detector has to be put together, with the right material assigned to each. We have a detector description with over five million (!) volumes and about 400 different materials (from Xenon to Argon to Air to Aerogel and Kapton Cable). There are a few heroes of ATLAS who spend a lot of time taking technical drawings (and photographs, because the technical drawings aren’t always right!) of the detector and translating them into something Geant4 can use. You can’t put every wire and pipe in – the simulation would take an eternity! – so you have to find shortcuts sometimes. It’s a painstaking process that’s still ongoing today. We continuously refine and improve our description, adding pieces that weren’t important at the beginning several years ago but are starting to be important now (like polyboron neutron shielding in our forward region; few people thought early on that we would be able to model low-energy neutron flux in our detector with Geant4, because it’s really complex nuclear physics, but we’re getting so close to being able to do so that we’ve gone back to re-check that our materials’ neutron capture properties are correct). And sometimes we go back and revise things that were done approximately in the beginning because we think we can do better. This part also involves making a detailed magnetic field map. We can’t measure the field everywhere in the detector (like deep in the middle of the calorimeter), and it takes too much time to constantly simulate the currents flowing through the magnets and their effect on the particles moving through the detector, so we do that simulation once and save the magnetic field that results.

A simulated black hole event. But what do meons look like?

Next is a good set of physics models. Geant4 has a whole lot of them that you can use and (fortunately!) they have a default that works pretty well for us. Those physics models describe each process (the photoelectric effect, Compton scattering, bremsstrahlung, ionization, multiple scattering, decays, nuclear interactions, etc) for each particle. Some are very, very complicated, as you can probably imagine. You have to choose, at this point, what physics you’re interested in. Geant4 can be used for simulation of space, simulation of cells and DNA, and simulations of radioactive environments. If we used the most precise models for everything, our simulation would never finish running! Instead, we take the fastest model whose results we can’t really distinguish from the most detailed models. That is, we turn off everything that we don’t really notice in our detector anyway. Sometimes we don’t get that right and have to go back and adjust things further – but usually we’ve erred on the side of a slower, more accurate simulation.

The last part is to “teach” Geant4 what you want to save. All Geant4 cares about is particles and materials – it doesn’t inherently know the difference between some silicon that is a part of a computer chip somewhere in the detector and the silicon that makes up the sensors in much of our inner detector. So we have to say “these are the parts of the detector that we care about most” (called “sensitive” detectors). There are a lot of technical tricks to optimizing the storage, but in the end we want to write files with all the little energy deposits that Geant4 has made, their time and location – and sometimes information (that we call “truth”) about what really happened in the simulation, so later we can find out how good our reconstruction software was at correctly identifying photons and their conversions into electron-positron pairs, for example.
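
As a cartoon of the "sensitive detector" bookkeeping described above, here is a toy sketch in plain Python (my own illustration, not Geant4 code): energy deposits are only turned into stored hits, with their energy, time, position and a "truth" link, when they land in a volume flagged as sensitive.

```python
# Toy model only: real Geant4 sensitive detectors are C++ classes hooked into
# the stepping of particles through the full detector description.
from dataclasses import dataclass, field

@dataclass
class Hit:
    energy_mev: float
    time_ns: float
    position_mm: tuple
    truth: str                   # e.g. which simulated particle made the deposit

@dataclass
class Volume:
    name: str
    sensitive: bool = False
    hits: list = field(default_factory=list)

def record_deposit(volume, energy_mev, time_ns, position_mm, truth):
    """Store a deposit only if the volume is marked as sensitive."""
    if volume.sensitive:
        volume.hits.append(Hit(energy_mev, time_ns, position_mm, truth))

chip = Volume("silicon in a readout chip", sensitive=False)
pixel = Volume("silicon pixel sensor", sensitive=True)
record_deposit(chip, 0.3, 4.1, (10.0, 0.0, 55.0), "electron from photon conversion")
record_deposit(pixel, 0.3, 4.2, (12.0, 0.1, 60.0), "electron from photon conversion")
print(len(chip.hits), "stored hits in the chip,", len(pixel.hits), "in the pixel sensor")
```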

The fun part of working on the simulation software is that you have to learn everything about the experiment. You have to know how much time after the interaction every piece of the detector is sensitive, so that you can avoid wasting time simulating particles long after that time. You get to learn when things were installed incorrectly or are misaligned, because you need those effects in the simulation. When people want to upgrade a part of the detector, you have to learn what they have in mind, and then (often) help them think of things they haven’t dealt with yet that might affect other parts of the detector (like cabling behind their detector, which we often have to think hard about). You also have to know about the physics that each detector is sensitive to, what approximations are reasonable, and what approximations you’re already making that they might need to check on.

That also brings us back to our friend’s meons. If they decay very quickly into Standard Model particles, then the event generator will do all the hard work. But if they stick around long enough to interact with the detector, then we have to ask our friend for a lot more information, like how they interact with different materials. For some funny theoretical particles like magnetic monopoles, R-hadrons, and stable charginos, we have to write our own Geant4 physics modules, with a lot of help from theorists.

The detector simulation is a great piece of software to work on – but that’s not the end of it! After the simulation comes the final step, “digitization”, which I’ll talk about next time – and we’ll find out the fate of our buddy’s meon theory.

Zach Marshall is a Divisional Fellow at the Lawrence Berkeley National Laboratory in California. His research is focused on searches for supersymmetry and jet physics, with a significant amount of time spent working on software and trying to help students with physics and life in ATLAS.

by Zachary Marshall at October 20, 2014 03:21 PM

Jester - Resonaances

Weekend Plot: Bs mixing phase update
Today's featured plot was released last week by the LHCb collaboration:

It shows the CP violating phase in Bs meson mixing, denoted as φs,  versus the difference of the decay widths between the two Bs meson eigenstates. The interest in φs comes from the fact that it's  one of the precious observables that 1) is allowed by the symmetries of the Standard Model, 2) is severely suppressed due to the CKM structure of flavor violation in the Standard Model. Such observables are a great place to look for new physics (other observables in this family include Bs/Bd→μμ, K→πνν, ...). New particles, even too heavy to be produced directly at the LHC, could produce measurable contributions to φs as long as they don't respect the Standard Model flavor structure. For example, a new force carrier with a mass as large as 100-1000 TeV and order 1 flavor- and CP-violating coupling to b and s quarks would be visible given the current experimental precision. Similarly, loops of supersymmetric particles with 10 TeV masses could show up, again if the flavor structure in the superpartner sector is not aligned with that in the  Standard Model.

The phase φs can be measured in certain decays of neutral Bs mesons where the process involves an interference of direct decays and decays through oscillation into the anti-Bs meson. Several years ago measurements at Tevatron's D0 and CDF experiments suggested a large new physics contribution. The mild excess has gone away since, like many other such hints.  The latest value quoted by LHCb is φs = - 0.010 ± 0.040, which combines earlier measurements of the Bs → J/ψ π+ π- and  Bs → Ds+ Ds- decays with  the brand new measurement of the Bs → J/ψ K+ K- decay. The experimental precision is already comparable to the Standard Model prediction of φs = - 0.036. Further progress is still possible, as the Standard Model prediction can be computed to a few percent accuracy.  But the room for new physics here is getting tighter and tighter.

by Jester (noreply@blogger.com) at October 20, 2014 02:20 PM

Symmetrybreaking - Fermilab/SLAC

Transatlantic data-transfer gets a boost

New links will improve the flow of data from the Large Hadron Collider to US institutions.

Scientists across the US will soon have access to new, ultra high-speed network links spanning the Atlantic Ocean.

A new project is currently underway to extend the US Department of Energy’s Energy Sciences Network, or ESnet, to London, Amsterdam and Geneva.

Although the project is designed to benefit data-intensive science throughout the US national laboratory complex, the heaviest users of the new links will be particle physicists conducting research at the Large Hadron Collider, the world’s largest and most powerful particle collider. The high capacity of this new connection will provide US-based scientists with enhanced access to data from the LHC and other European-based experiments by accelerating the exchange of data sets between institutions in the US and computing facilities in Europe.

“After the Higgs discovery, the next big LHC milestones will come in 2015,” says Oliver Gutsche, Fermilab scientist and member of the CMS Offline and Computing Management Board. “And this network will be indispensable for the success of the [next LHC physics program].”

DOE’s Brookhaven National Laboratory and Fermi National Accelerator Laboratory—the primary computing centers for US collaborators on the LHC’s ATLAS and CMS experiments, respectively—will make immediate use of the new network infrastructure, once it is rigorously tested and commissioned. Because ESnet, based at DOE’s Lawrence Berkeley National Laboratory, interconnects all national laboratories and a number of university-based projects in the US, tens of thousands of researchers from other disciplines will benefit as well. 

The ESnet extension will be in place before the LHC at CERN in Switzerland—currently shut down for maintenance and upgrades—is up and running again in the spring of 2015. Because the accelerator will be colliding protons at much higher energy, the data output from the detectors will expand considerably to approximately 40 petabytes of RAW data per year, compared with 20 petabytes for all of the previous lower-energy collisions produced over the three years of the LHC’s first run between 2010 and 2012.

The cross-Atlantic connectivity during the first successful run for the LHC experiments was provided by the US LHCNet network, managed by the California Institute of Technology. In recent years, major research and education networks around the world—including ESnet, Internet2, California’s CENIC, and European networks such as DANTE, SURFnet and NORDUnet—have increased their backbone capacity by a factor of 10, using sophisticated new optical networking and digital signal processing technologies. Until recently, however, higher-speed links were not deployed for production purposes across the Atlantic Ocean. 


An evolving data model

This upgrade coincides with a shift in the data model for LHC science. Previously, data moved in a more predictable and hierarchical pattern strongly influenced by geographical proximity, but network upgrades around the world have now made it possible for data to be fetched and exchanged more flexibly and dynamically. This change enables faster science outcomes and more efficient use of storage and computational power, but it requires networks around the world to perform flawlessly together. 

“Having the new infrastructure in place will meet the increased need for dealing with LHC data and provide more agile access to that data in a much more dynamic fashion than LHC collaborators have had in the past,” says physicist Michael Ernst of Brookhaven National Laboratory, a key member of the team laying out the new and more flexible framework for exchanging data between the Worldwide LHC Computing Grid centers. 

Ernst directs a computing facility at Brookhaven Lab that was originally set up as a central hub for US collaborators on the LHC’s ATLAS experiment. A similar facility at Fermi National Accelerator Laboratory has played this role for the LHC’s US collaborators on the CMS experiment. These computing resources, dubbed “Tier 1” centers, have direct links to the LHC at Europe’s CERN laboratory (Tier 0).

The experts who run them will continue to serve scientists under the new structure. But instead of serving only as hubs for data storage and distribution among US-based collaborators at Tier 2 and 3 research centers, the dedicated facilities at Brookhaven and Fermilab will also be able to serve data needs of the entire ATLAS and CMS collaborations throughout the world. And likewise, US Tier 2 and Tier 3 research centers will have higher-speed access to Tier 1 and Tier 2 centers in Europe. 

“This new infrastructure will offer LHC researchers at laboratories and universities around the world faster access to important data," says Fermilab’s Lothar Bauerdick, head of software and computing for the US CMS group. "As the LHC experiments continue to produce exciting results, this important upgrade will let collaborators see and analyze those results better than ever before.”

Ernst adds, “As centralized hubs for handling LHC data, our reliability, performance, and expertise have been in demand by the whole collaboration and now we will be better able to serve the scientists’ needs.”


Fermilab published a version of this article as a press release.

 


October 20, 2014 01:00 PM

Peter Coles - In the Dark

Controlled Nuclear Fusion: Forget about it

telescoper:

You’ve probably heard that Lockheed Martin has generated a lot of excitement with a recent announcement about a “breakthrough” in nuclear fusion technology. Here’s a pessimistic post from last year. I wonder if it will be proved wrong?

Originally posted on Protons for Breakfast Blog:

Man or woman doing a technical thing with a thingy to do with laser induced nuclear fusion.

Man or woman adjusting the ‘target positioner’ (I think) within the target chamber of the US Lawrence Livermore National Laboratory.

The future is very difficult to predict. But I am prepared to put on record my belief that controlled nuclear fusion as a source of power on Earth will never be achieved.

This is not something I want to believe. And the intermittent drip of news stories about ‘progress’ and ‘breakthroughs’ might make one think that the technique would eventually yield to humanity’s collective ingenuity.

But  in fact that just isn’t going to happen. Let me explain just some of the problems and you can judge for yourself whether you think it will ever work.

One option for controlled fusion is called Inertial Fusion Energy, and the centre of research is the US National Ignition Facility. Here the most powerful laser…



by telescoper at October 20, 2014 12:17 PM

astrobites - astro-ph reader's digest

Gravitational waves and the need for fast galaxy surveys

Gravitational waves are ripples in space-time that occur, for example, when two very compact celestial bodies merge. Their direct detection would allow scientists to characterize these mergers and understand the physics of systems undergoing strong gravitational interactions. Perhaps that era is not so distant; gravitational wave detectors such as advanced LIGO and Virgo are expected to come online by 2016. While this is a very exciting prospect, gravitational wave detectors have limited resolution; they can constrain the location of the source to within an area of 10 to 1000 deg² on the sky, depending on the number of detectors and the strength of the signal.

An artist’s rendering of two white dwarfs coalescing and producing gravitational wave emission.

To understand the nature of the source of gravitational waves, scientists hope to be able to locate it more accurately by searching for its electromagnetic counterpart immediately after the gravitational wave is detected. How can telescopes help in this endeavor? The authors of this paper explore the possibility of performing very fast galaxy surveys to identify and characterize the birthplace of gravitational waves.

Gravitational waves from the merger of two neutron stars can be detected out to 200 Mpc, which is roughly 250 times the distance to the Andromeda galaxy. It is expected that LIGO-Virgo will detect about 40 of these events per year. There are roughly 8 galaxies per 1 deg² within 200 Mpc - that is 800 candidate galaxies in an area of 100 deg². Hence, a quick survey that would pinpoint those possible galaxy counterparts to the gravitational wave emission would be very useful. After potential hosts are identified, they could be followed up with telescopes with smaller fields of view to measure the light emitted by the source of gravitational waves.

The electromagnetic emission following the gravitational wave detection only lasts for short periods of time (for a kilonova, the timescale is of approximately a week), and this drives the need for fast surveys. To devise an efficient search strategy, the authors suggest looking for galaxies with high star formation rates. It is expected that those galaxies will have higher chances of hosting a gravitational wave event. (Although they clarify that the rate of mergers of compact objects might be better correlated with the mass of the galaxy rather than its star formation activity.) A good proxy for star formation in a galaxy is the light it emits in the red H-alpha line, coming from  hydrogen atoms in clouds of gas that act as stellar nurseries. The issue is whether current telescopes can survey large areas fast enough to find a good fraction of all star forming galaxies within the detection area of LIGO-Virgo.

The authors consider a 2m-size telescope and estimate the typical observing time needed to identify a typical star forming galaxy up to a distance of 200 Mpc. This ranges from 40 to 80 seconds depending on the observing conditions. It would take this type of telescope a week to cover 100 deg². This result matches the expected duration of the visible light signal from these events very well! Mergers of black holes and neutron stars could be detected out to larger distances (~450 Mpc). To find possible galaxy hosts out to these distances, a 2m-class telescope would cover 30 deg² in a week. Without a doubt, the exciting prospect of gravitational wave detection will spur more detailed searches for the best strategies to locate their sources.
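
A quick back-of-envelope check of these numbers (my own arithmetic, not from the paper): the gap between the pure exposure time and "about a week" is filled by pointing overheads, weather and the fact that the survey has to tile the whole localization area rather than point only at known galaxies.

```python
# Rough numbers quoted in the text above; everything else is my own arithmetic.
galaxies_per_deg2 = 8
area_deg2 = 100                  # typical LIGO-Virgo localization area
exposure_s = 60                  # middle of the quoted 40-80 s range
candidates = galaxies_per_deg2 * area_deg2
on_source_hours = candidates * exposure_s / 3600
print(candidates, "candidate galaxies,", round(on_source_hours, 1), "h of pure exposure")
# prints: 800 candidate galaxies, 13.3 h of pure exposure
```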

by Elisa Chisari at October 20, 2014 08:44 AM

Lubos Motl - string vacua and pheno

ETs, hippies, loons introduce Andrew Strominger
...or a yogi and another nude man?

Exactly one week ago, Andrew Strominger of Harvard gave a Science and Cocktails talk in Christiania – a neighborhood of Copenhagen, Denmark.



The beginning of this 64-minute lecture on "Black Holes, String Theory and the Fundamental Laws of Nature" is rather extraordinary and if you only want to see the weirdest introduction of a fresh winner of the Dirac Medal, just listen to the first three minutes of the video.




However, you will obviously be much more spiritually enriched if you continue to watch for another hour – even though some people who have seen similar popular talks by Andy may feel that some of the content is redundant and similar to what they have heard.




After the introduction, you may appreciate how serious and credible Andy's and Andy's daughter's illustrations are (sorry, I can't distinguish these two artists!) in comparison with the mainstream culture in the Danish capital.

At the beginning, Andy said that it's incredible how much we already know about the Universe. We may design a space probe and land it on Mars and predict the landing within a second. We are even able to feed roast beef to Andrew Strominger and make him talk as a consequence of the food, and even predict that he would talk.

It's equally shocking when we find something clear-cut that we don't understand – something that looks like a contradiction. Such paradoxes have been essential in the history of physics. Einstein was thinking about what he would see in the mirror if he were running at the speed of light (or faster than that) and looking at his image in the mirror in front of him. Newton's and Maxwell's theories gave different answers. Einstein was bothered by that.

The puzzle was solved... there is a universal speed limit, special relativity, and all this stuff. About 6 other steps in physics are presented as resolutions to paradoxes of similar types. If we don't understand, it's not a problem: it's an opportunity.

Soon afterwards, Andy focuses on general relativity, spacetime curvature etc. The parabolic trajectories of freely falling objects are actually the straight(est) lines in the curved spacetime. After a few words, he gets to the uncertainty principle and also emphasizes that everything has to be subject to the principle – it's not possible to give "exceptions" to anyone. And the principle has to apply to the space's geometry, too.

There is a cookbook for how to "directly quantize" any theory, and this procedure has been tested amazingly well. If you apply the cookbook to gravity, GR, you get garbage, which is great because it's a lot of fun. ;-) He says we will need "time" to figure out whether the solution we have, string theory, is right in Nature. However, already now, some more basic Harvard courses have to be fixed by some insights from the string course.

Suddenly he mentions Hawking and Bekenstein's ideas about black holes. What do black holes have to do with these issues? They have everything to do with them, it surprisingly turns out. An introduction to black holes follows. Lots of matter, escape velocity, surpasses the speed of light – the basic logic of this introduction is identical to my basic school talk in the mountains a few months ago. ;-) The talks would remain identical even when Andy talks about the ability of Karl Schwarzschild to exactly solve Einstein's equations that Einstein considered unsolvably difficult. Einstein had doubts about the existence of the black holes for quite some time but in the 1960s, the confusion disappeared. Sgr A* is his (and my) key example of a real-world black hole.

Andy says that there's less than nothing, namely nothing nothing, inside black holes. I am not 100% sure what he actually means by that. Probably some topological issues – the Euclidean black hole has no geometry for \(r\lt r_0\) at all. OK, what happens in the quantum world? Particles tunnel out of the nothing nothing and stuff comes out as the black body radiation – at Hawking's temperature. Andy calls this single equation for the temperature "Hawking's contribution to science", which slightly belittles Hawking and is surely partly Andy's goal, but OK.

He switches to thermodynamics, the science done by those people who were playing with water and fire and the boiling point of carbon dioxide without knowing about molecules. Ludwig Boltzmann beautifully derived those phenomenologically found laws from the assumption that matter is composed of molecules that may be traced using probabilistic reasoning. He found the importance of entropy/information. Andy wisely presents entropy in units of kilobytes or gigabytes - because that's what ordinary people sort of know today.

Andy counts the Hawking-Bekenstein entropy formula among the five most fundamental formulae in physics, and perhaps the most interesting one because we don't understand it. That's a bit bizarre because whenever I was telling him about the general derivations of this formula I was working on, aside from other things, Andy would tell me that we didn't need such a derivation! ;-)
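
For reference (my addition, not part of Andy's talk), the two formulas in question are, in conventional units,

\[ T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar}, \]

where \(M\) is the black hole mass and \(A\) is the area of its event horizon.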

Amusingly and cleverly, he explains the holographic entropy bounds by talking about Moore's law (thanks, Luke), which must inevitably break down at some point. Of course, in the real world, it will break down long before that... Now, he faces the tension between two pictures of black holes: something with the "nothing nothing" inside; or the most complicated (highest-entropy) objects we may have.

Around 41:00, he begins to talk about string theory, its brief history, and its picture of elementary particles. On paper, string theory is capable of unifying all the forces as well as QM with GR, and it addresses the black hole puzzle. String theory has grown by having eaten almost all the competitors (a picture of a hungry boy eating some trucks, of course). The term "string theory" is used for the big body of knowledge even today.

I think that at this point, he's explaining the Strominger-Vafa paper – and its followups – although the overly popular language makes me "slightly" uncertain about that. But soon, he switches to a much newer topic, his and his collaborators' analysis of the holographic dual of the rotating Kerr black holes.

Andy doesn't fail to mention that without seeing and absorbing the mathematics, the beauty of the story is as incomplete as someone's verbal story about his visit to the Grand Canyon whose pictures can't be seen by the recipient of the story. The equation-based description of these insights is much more beautiful for the theoretical physicists than the Grand Canyon. Hooray.

Intense applause.

Last nine minutes are dedicated to questions.

The first question is not terribly original and you could guess it. What kind of experiments can we make to decide whether string theory is correct? Andy says that the question is analogous to asking Magellan, in the middle of his trip around the Earth, when he will complete the trip. We don't know what comes next.

Now, I exploded in laughter because Andy's wording of this idea almost exactly mimics what I am often saying in such contexts. "You know, the understanding of Nature isn't a five-year plan." Of course, I like to say such a thing because 1) I was sort of fighting against the planned economy and similar excesses already as a child, 2) some people, most notably Lee Smolin, openly claimed that they think that science should be done according to five-year plans. It's great that Andy sees it identically. We surely don't have a proposal for an experiment that could say Yes or No but we work with things that are accessible and not just dreamed about, Andy says, and the work on the black hole puzzle is therefore such an important part of the research.

The second question was so great that one might even conjecture that the author knew something about the answer: Why do the entropy and the bounds scale like the area and not the volume? So Andy says that the black hole doesn't really have a volume. We "can't articulate it well" – he slightly looks like he is struggling and desperately avoiding the word "holography" for reasons I don't fully understand. OK, now he said the word.

In the third question, a girl asks how someone figured out that there should be black holes. Andy says that physicists solve things in baby steps or smaller ones. Well, they first try to solve everything exactly and they usually fail. So they try to find special solutions and Schwarzschild did find one. Amazingly, it took decades to understand what the solution meant. Every wrong thing has been tried before the right thing was arrived at.

Is a black hole needed for every galaxy? Is a black hole everywhere? He thinks that it is an empirical question. Andy says that he doesn't have an educated guess himself. Astronomers tend to believe that a black hole is in every galaxy. Of course, I would say that this question depends on the definition of a galaxy. The "galaxies" without a black hole inside are probably low-density "galaxies", and one may very well say that such diluted ensembles don't deserve the name "galaxy".

In twenty years, Andy will be able to answer the question – which he wouldn't promise for the "egg or chicken first" question.

I didn't understand the last question about some character of string theory. Andy answered that string theory will be able to explain that, whatever "that" means. ;-)

Another intense applause with colorful lights. Extraterrestrial sounds conclude the talk.

by Luboš Motl (noreply@blogger.com) at October 20, 2014 06:10 AM

October 19, 2014

Michael Schmitt - Collider Blog

Enhanced Higgs to tau+tau- Search with Deep Learning

“Enhanced Higgs to tau+tau- Search with Deep Learning” – that is the title of a new article posted to the archive this week by Daniel Whiteson and two collaborators from the Computer Science Department at UC Irvine (1410.3469). While the title may be totally obscure to someone outside of collider physics, it caught my immediate attention because I am working on a similar project (to be released soon).

Briefly: the physics motivation comes from the need for a stronger signal for Higgs decays to τ+τ-, which are important for testing the Higgs couplings to fermions (specifically, leptons). The scalar particle with a mass of 125 GeV looks very much like the standard model Higgs boson, but tests of couplings, which are absolutely crucial, are not very precise yet. In fact, indirect constraints are stronger than direct ones at the present time. So boosting the sensitivity of the LHC data to Higgs decays to fermions is an important task.

The meat of the article concerns the comparisons of shallow artificial neural networks, which contain only one or two hidden layers, and deep artificial neural networks, which have many. Deep networks are harder to work with than shallow ones, so the question is: does one really gain anything? The answer is: yes, it’s like increasing your luminosity by 25%.

This case study considers final states with two oppositely-charged leptons (e or μ) and missing transverse energy. The Higgs signal must be separated from the Drell-Yan production of τ pairs, especially Z→τ+τ-, on a statistical basis. It appears that no other backgrounds (such as W pair or top pair production) were considered, so this study is a purely technical one. Nonetheless, there is plenty to be learned from it.

Whiteson, Baldi and Sadowski make a distinction between low-level variables, which include the basic kinematic observables for the leptons and jets, and the high-level variables, which include derived kinematic quantities such as invariant masses, differences in angles and pseudorapidity, sphericity, etc. I think this distinction and the way they compare the impact of the two sets is interesting.

The question is: if a sophisticated artificial neural network is able to develop complex functions of the low-level variables through training and optimization, isn’t it redundant to provide derived kinematic quantities as additional inputs? More sharply: does the neural network need “human assistance” to do its job?

The answer is clear: human assistance does help the performance of even a deep neural network with thousands of neurons and millions of events for training. Personally I am not surprised by this, because there is physics insight behind most if not all of the high-level variables — they are not just arbitrary functions of the low-level variables. So these specific functions carry physics meaning and fall somewhere between arbitrary functions of the input variables and brand new information (or features). I admit, though, that “physics meaning” is a nebulous concept and my statement is vague…
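
To make the low-level versus high-level comparison concrete, here is a toy sketch of the kind of study being described. It is entirely my own illustration with invented features, derived variables and layer sizes, not the configuration of 1410.3469, and it uses scikit-learn rather than the authors' deep-learning setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic "low-level" features standing in for lepton and jet kinematics.
X_low, y = make_classification(n_samples=20000, n_features=12,
                               n_informative=8, random_state=0)
# Crude stand-ins for derived "high-level" variables: nonlinear combinations.
X_high = np.c_[X_low, X_low[:, 0] * X_low[:, 1],
               np.sqrt(np.abs(X_low[:, 2] + X_low[:, 3]))]

for feat_name, X in [("low-level only", X_low), ("low + high-level", X_high)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for arch_name, layers in [("shallow", (300,)), ("deep", (300, 300, 300))]:
        net = MLPClassifier(hidden_layer_sizes=layers, max_iter=50,
                            random_state=0).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])
        print(f"{feat_name:>17s} | {arch_name:7s} | ROC AUC = {auc:.3f}")
```

In the real analysis the high-level inputs are physically motivated quantities such as invariant masses and angular differences, not arbitrary nonlinear combinations; that physics content is precisely what the comparison is probing.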

Comparison of the performance of shallow networks and deep networks, and also of low-level and high-level variables

The authors applied state of the art techniques for this study, including optimization with respect to hyperparameters, i.e., the parameters that concern the details of the training of the neural network (learning speed, `velocity’ and network architecture). A lot of computer cycles were burnt to carry out these comparisons!

Deep neural networks might seem like an obvious way to go when trying to isolate rare signals. There are real, non-trivial stumbling blocks, however. An important one is the vanishing gradient problem. If the number of hidden nodes is large (imagine eight layers with 500 neurons each) then training by back-propagation fails because it cannot find a significantly non-zero gradient with respect to the weights and offsets of all the neurons. If the gradient vanishes, then the neural network cannot figure out which way to evolve so that it performs well. Imagine a vast flat space with a minimum that is localized and far away. How can you figure out which way to go to get there if the region where you are is nearly perfectly flat?

The power of a neural network can be assessed on the basis of the receiver operator curve (ROC) by integrating the area beneath the curve. For particle physicists, however, the common coinage is the expected statistical significance of an hypothetical signal, so Whiteson & co translate the performance of their networks into a discovery significance defined by a number of standard deviations. Notionally, a shallow neural network working only with low-level variables would achieve a significance of 2.57σ, while adding in the high-level variables increases the significance to 3.02σ. In contrast, the deep neural networks achieve 3.16σ with low-level, and 3.37σ with all variables.

Some conclusions are obvious: deep is better than shallow. Also, adding in the high-level variables helps in both cases. (Whiteson et al. point out that the high-level variables incorporate the τ mass, which otherwise is unavailable to the neural networks.) The deep network with low-level variables is better than a shallow network with all variables, and the authors conclude that the deep artificial neural network is learning something that is not embodied in the human-inspired high-level variables. I am not convinced of this claim since it is not clear to me that the improvement is not simply due to the inadequacy of the shallow network to the task. By way of an analogy, if we needed to approximate an exponential curve by a linear one, we would not succeed unless the range was very limited; we should not be surprised if a quadratic approximation is better.

In any case, since I am working on similar things, I find this article very interesting. It is clear that the field is moving in the direction of very advanced numerical techniques, and this is one fruitful direction to go in.


by Michael Schmitt at October 19, 2014 02:19 PM

October 18, 2014

Sean Carroll - Preposterous Universe

How to Communicate on the Internet

Let’s say you want to communicate an idea X.

You would do well to simply say “X.”

Also acceptable is “X. Really, just X.”

A slightly riskier strategy, in cases where miscomprehension is especially likely, would be something like “X. This sounds a bit like A, and B, and C, but I’m not saying those. Honestly, just X.” Many people will inevitably start arguing against A, B, and C.

Under no circumstances should you say “You might think Y, but actually X.”

Equally bad, perhaps worse: “Y. Which reminds me of X, which is what I really want to say.”

For examples see the comment sections of the last couple of posts, or indeed any comment section anywhere on the internet.

It is possible these ideas may be of wider applicability in communication situations other than the internet.

(You may think this is just grumping but actually it is science!)

by Sean Carroll at October 18, 2014 04:32 PM

October 17, 2014

Sean Carroll - Preposterous Universe

Does Santa Exist?

There’s a claim out there — one that is about 95% true, as it turns out — that if you pick a Wikipedia article at random, then click on the first (non-trivial) link, and keep clicking on the first link of each subsequent article, you will end up at Philosophy. More specifically, you will end up at a loop that runs through Reality, Existence, Awareness, Consciousness, and Quality (philosophy), as well as Philosophy itself. It’s not hard to see why. These are the Big Issues, concerning the fundamental nature of the universe at a deep level. Almost any inquiry, when pressed to ever-greater levels of precision and abstraction, will get you there.

Does Santa Exist? Take, for example, the straightforward-sounding question “Does Santa Exist?” You might be tempted to say “No” and move on. (Or you might be tempted to say “Yes” and move on, I don’t know — a wide spectrum of folks seem to frequent this blog.) But even to give such a common-sensical answer is to presume some kind of theory of existence (ontology), not to mention a theory of knowledge (epistemology). So we’re allowed to ask “How do you know?” and “What do you really mean by exist?”

These are the questions that underlie an entertaining and thought-provoking new book by Eric Kaplan, called Does Santa Exist?: A Philosophical Investigation. Eric has a resume to be proud of: he is a writer on The Big Bang Theory, and has previously written for Futurama and other shows, but he is also a philosopher, currently finishing his Ph.D. from Berkeley. In the new book, he uses the Santa question as a launching point for a rewarding tour through some knotty philosophical issues. He considers not only a traditional attack on the question, using Logic and the beloved principles of reason, but sideways approaches based on Mysticism as well. (“The Buddha ought to be able to answer our questions about the universe for like ten minutes, and then tell us how to be free of suffering.”) His favorite, though, is the approach based on Comedy, which is able to embrace contradiction in a way that other approaches can’t quite bring themselves to do.

Most people tend to have a pre-existing take on the Santa question. Hence, the book trailer for Does Santa Exist? employs a uniquely appropriate method: Choose-Your-Own-Adventure. Watch and interact, and you will find the answers you seek.

by Sean Carroll at October 17, 2014 04:34 PM

CERN Bulletin

CHIS – Letter from French health insurance authorities "Assurance Maladie" and “frontalier” status

Certain members of the personnel residing in France have recently received a letter, addressed to themselves and/or their spouse, from the French health insurance authorities (Assurance Maladie) on the subject of changes in the health insurance coverage of “frontalier” workers.

 

It should be recalled that employed members of personnel (MPE) are not affected by the changes made by the French authorities to the frontalier  workers' "right to choose" (droit d'option) in matters of health insurance (see the CHIS website for more details), which took effect as of 1 June 2014, as they are not considered to be frontalier workers. Associated members of the personnel (MPA) are not affected either, unless they live in France and are employed by a Swiss institute.

For the small number of MPAs in the latter category who might be affected, as well as for family members who do have frontalier status, CERN is still in discussion with the authorities of the two Host States regarding the health insurance coverage applicable to them.

We hope to receive more information in the coming weeks and will keep you informed via the CHIS web site and the CERN Bulletin.

HR Department

October 17, 2014 04:10 PM

The n-Category Cafe

New Evidence of the NSA Deliberately Weakening Encryption

One of the most high-profile ways in which mathematicians are implicated in mass surveillance is in the intelligence agencies’ deliberate weakening of commercially available encryption systems — the same systems that we rely on to protect ourselves from fraud, and, if we wish, to ensure our basic human privacy.

We already knew quite a lot about what they’ve been doing. The NSA’s 2013 budget request asked for funding to “insert vulnerabilities into commercial encryption systems”. Many people now know the story of the Dual Elliptic Curve pseudorandom number generator, used for online encryption, which the NSA aggressively and successfully pushed to become the industry standard, and which has weaknesses that are widely agreed by experts to be a back door. Reuters reported last year that the NSA arranged a secret $10 million contract with the influential American security company RSA (yes, that RSA), who became the most important distributor of that compromised algorithm.

In the August Notices of the AMS, longtime NSA employee Richard George tried to suggest that this was baseless innuendo. But new evidence published in The Intercept makes that even harder to believe than it already was. For instance, we now know about the top secret programme Sentry Raven, which

works with specific US commercial entities … to modify US manufactured encryption systems to make them exploitable for SIGINT [signals intelligence].

(page 9 of this 2004 NSA document).

The Intercept article begins with a dramatic NSA-drawn diagram of the hierarchy of secrecy levels. Each level is colour-coded. Top secret is red, and above top secret (these guys really give it 110%) are the “core secrets” — which, as you’d probably guess, are in black. From the article:

the NSA’s “core secrets” include the fact that the agency works with US and foreign companies to weaken their encryption systems.

(The source documents themselves are linked at the bottom of the article.)

It’s noted that there is “a long history of overt NSA involvement with American companies, especially telecommunications and technology firms”. Few of us, I imagine, would regard that as a bad thing in itself. It’s the nature of the involvement that’s worrying. The aim is not just to crack the encrypted messages of particular criminal suspects, but the wholesale compromise of all widely used encryption methods:

The description of Sentry Raven, which focuses on encryption, provides additional confirmation that American companies have helped the NSA by secretly weakening encryption products to make them vulnerable to the agency.

The documents also appear to suggest that NSA staff are planted inside American security, technology or telecomms companies without the employer’s knowledge. Chris Soghoian, principal technologist at the ACLU, notes that “As more and more communications become encrypted, the attraction for intelligence agencies of stealing an encryption key becomes irresistible … It’s such a juicy target.”

Unsurprisingly, the newly-revealed documents don’t say anything specific about the role played by mathematicians in weakening digital encryption. But they do make it that bit harder for defenders of the intelligence agencies to maintain that their cryptographic efforts are solely directed against the “bad guys” (a facile distinction, but one that gets made).

In other words, there is now extremely strong documentary evidence that the NSA and its partners make strenuous efforts to compromise, undermine, degrade and weaken all commonly-used encryption software. As the Reuters article puts it:

The RSA deal shows one way the NSA carried out what Snowden’s documents describe as a key strategy for enhancing surveillance: the systematic erosion of security tools.

The more or less explicit aim is that no human being is able to send a message to any other human being that the NSA cannot read.

Let that sink in for a while. There is less hyperbole than there might seem when people say that the NSA’s goal is the wholesale elimination of privacy.

This evening, I’m going to see Laura Poitras’s film Citizenfour (trailer), a documentary about Edward Snowden by one of the two journalists to whom he gave the full set of documents. But before that, I’m going to a mathematical colloquium by Trevor Wooley, Strategic Director of the Heilbronn Institute — which is the University of Bristol’s joint venture with GCHQ. I wonder how mathematicians like him, or young mathematicians now considering working for the NSA or GCHQ, feel about the prospect of a world where it is impossible for human beings to communicate in private.

by leinster (tom.leinster@ed.ac.uk) at October 17, 2014 03:05 PM

arXiv blog

Urban "Fingerprints" Finally Reveal the Similarities (and Differences) Between American and European Cities

Travelers have long noticed that some American cities “feel” more European than others. Now physicists have discovered a way to measure the “fingerprint” of a city that captures this sense.


Travel to any European city and the likelihood is that it will look and feel substantially different to modern American cities such as Los Angeles, San Diego, or Miami.

October 17, 2014 03:05 PM

Lubos Motl - string vacua and pheno

Lorentz violation: zero or 10 million times smaller than previously thought
One of the research paradigms that I consider insanely overrated is the idea that the fundamental theory of Nature may break the Lorentz symmetry – the symmetry underlying the special theory of relativity – and that the theorist may pretty much ignore the requirement that the symmetry should be preserved.

The Super-Kamiokande collaboration has published a new test of the Lorentz violation that used over a decade of observations of atmospheric neutrinos:
Test of Lorentz Invariance with Atmospheric Neutrinos
The Lorentz-violating terms whose existence they were trying to discover are some bilinear terms modifying the oscillations of the three neutrino species, \(\nu_e,\nu_\mu,\nu_\tau\), by treating the temporal and spatial directions of the spacetime differently.




They haven't found any evidence that these coefficients are nonzero which allowed them to impose new upper bounds. Some of them, in some parameterization, are 10 million times more constraining than the previous best upper bounds!




I don't want to annoy you with some technical details of this good piece of work because I am not terribly interested in it myself, being more sure about the result than about any other experiment by a wealthy enough particle-physics-like collaboration. But I can't resist reiterating a general point.

The people who are playing with would-be fundamental theories that don't preserve the Lorentz invariance exactly (like most of the "alternatives" of string theory meant to describe quantum gravity) must hope that "the bad things almost exactly cancel" so that the resulting effective theory is almost exactly Lorentz-preserving which is needed for the agreement with the Super-Kamiokande search – as well as a century of less accurate experiments in different sectors of physics.

But in the absence of an argument why the resulting effective theory should be almost exactly Lorentz-preserving, one must assume that it's not and that the Lorentz-violating coefficients are pretty much uniformly distributed in a certain interval.

Before this new paper, they were allowed to be between \(0\) and a small number \(\epsilon\) and if one assumes that they were nonzero, there was no theoretical reason to think that the value was much smaller than \(\epsilon\). But a new observation shows that the new value of \(\epsilon\) is 10 million times smaller than the previous one. The Lorentz-breaking theories just don't have any explanation for this strikingly accurate observation, so they should be disfavored.

The simplest estimate of what happens with the "Lorentz symmetry is slightly broken" theories is, of course, that their probability decreased 10 million times when this paper was published! Needless to say, it's not the first time that the plausibility of such theories has dramatically decreased. But even if this were the first observation, it should mean that one lines up 10,000,001 of the likes of Lee Smolin who are promoting similar theories and kills 10,000,000 of them.

(OK, their names don't have to be "Lee Smolin". Using millions of his fans would be pretty much OK with me. The point is that the research into these possibilities should substantially decrease.)

Because nothing remotely similar to this sensible procedure is taking place, it seems to me that too many people just don't care about the empirical data at all. They don't care about the mathematical cohesiveness of the theories, either. Both the data and the mathematics seem to unambiguously imply that the Lorentz symmetry of the fundamental laws of Nature is exact and a theory that isn't shown to exactly preserve this symmetry – or to be a super-tiny deformation of an exactly Lorentz-preserving theory – is just ruled out.

Most of the time, they hide their complete denial of this kind of experiment behind would-be fancy words. General relativity always breaks the Lorentz symmetry because the spacetime is curved, and so on. But this breaking is spontaneous and there are still several extremely important ways in which the Lorentz symmetry underlying the original laws of physics constrains all phenomena in the spacetime, whether it is curved or not. The Lorentz symmetry still has to hold "locally", in small regions that always resemble regions of a flat Minkowski space, and it must also hold in "large regions" that resemble the flat space if the objects inside (which may even be black holes, highly curved objects) may be represented as local disturbances inside a flat spacetime.

One may misunderstand the previous sentences – or pretend that he misunderstands the previous sentences – but it is still a fact that a fundamentally Lorentz-violating theory makes a prediction (at least a rough, qualitative prediction) about experiments such as the experiment in this paper and this prediction clearly disagrees with the observations.

By the way, a few days ago, Super-Kamiokande published another paper with limits, this time on the proton lifetime (in PRD). Here the improvement is small, if any, and theories naturally giving these long lifetimes obviously exist and still seem "most natural". But yes, I also think that theories with a totally stable proton may also exist and should be considered.

by Luboš Motl (noreply@blogger.com) at October 17, 2014 02:50 PM

CERN Bulletin

Computer Security: Our life in symbiosis*

Do you recall our Bulletin articles on control system cyber-security (“Hacking control systems, switching lights off!” and “Hacking control systems, switching... accelerators off?”) from early 2013? Let me shed some light on this issue from a completely different perspective.

 

I was raised in Europe during the 80s. With all the conveniences of a modern city, my environment made me a cyborg - a human entangled with technology - supported but also dependent on software and hardware. Since my childhood, I have eaten food packaged by machines and shipped through a sophisticated network of ships and lorries, keeping it fresh or frozen until it arrives in supermarkets. I heat my house with the magic of nuclear energy provided to me via a complicated electrical network. In fact, many of the amenities and gadgets I use are based on electricity and I just need to tap a power socket. When on vacation, I travel by taxi, train and airplane. And I enjoy the beautiful weather outside thanks to the air conditioning system located in the basement of the CERN IT building.

This air conditioning system, a process control system (PCS), monitors the ambient room temperature through a distributed network of sensors. A smart central unit - the Programmable Logic Controller (PLC) - compares the measured temperature values with a set of thresholds and subsequently calculates a new setting for heating or cooling. On top of this temperature control loop (monitor - calculate - set), a small display (a simple SCADA (supervisory controls and data acquisition) system) attached to the wall allows me to read the current room temperature and to manipulate its set-points. Depending on the size of the building and the number of processes controlled, many (different) sensors, PLCs, actuators and SCADA systems can be combined and inter-connected to build a larger and more complex PCS.
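
As a cartoon of that "monitor - calculate - set" cycle, here is a minimal sketch (my own, not CERN's building-control code; the set-point, deadband and fake sensor are invented) of the kind of threshold logic such a PLC runs.

```python
import random
import time

SET_POINT = 21.0    # desired room temperature, deg C (invented)
DEADBAND = 0.5      # tolerance around the set-point before acting

def read_sensor():
    """Stand-in for one of the distributed temperature sensors."""
    return SET_POINT + random.uniform(-2.0, 2.0)

def control_cycle():
    temperature = read_sensor()                      # monitor
    if temperature > SET_POINT + DEADBAND:           # calculate
        action = "cooling on"
    elif temperature < SET_POINT - DEADBAND:
        action = "heating on"
    else:
        action = "idle"
    print(f"T = {temperature:.1f} C -> {action}")    # set (here: just report)

for _ in range(5):      # a real PLC runs this loop continuously
    control_cycle()
    time.sleep(0.1)
```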

In a similar way, all our commodities and amenities depend on many different, complex PCSs, e.g. a PCS for water and waste management, for electricity production and transmission, for public and private transport, for communication, and for the production of oil and gas, but also of cars, food and pharmaceuticals. Today, many people live in symbiosis with those PCSs, which make their lives cosy and comfortable, and industry depends on them. This variety of PCSs has become a piece of “critical infrastructure”, providing the fundamental basis of our general survival.

So what would happen if part or all of this critical infrastructure failed? How would your life change without clean tap water and proper waste disposal, without electricity, without fresh and frozen food? The cool air in the lecture hall would get hot and become uncomfortable. On a wider scale, with no drinking water from the tap, we would have to go back to local wells or collect and heat rain water in order to purify it. Failure of the electricity system would halt public life: frozen goods in supermarkets would warm up and become inedible, fuel pumps would not work anymore, life-preservation systems in hospitals would stop once the local diesel generators ran out of fuel… (this is nicely depicted in the novel “Blackout” by M. Elsberg).

We rely on our critical infrastructure, we rely on PCSs and we rely on the technologies behind PCSs. In the past, PCSs, PLCs and SCADA systems and their hardware and software components were proprietary, custom-built and stand-alone. Expertise was centralised with a few system engineers who knew their system by heart. That has changed in recent decades. Pressure for consolidation and cost-effectiveness has pushed manufacturers to open up. Today, modern PCSs employ the same technological means that have been used for years in computer centres, in offices and at home: Microsoft’s Windows operating system to run SCADA systems; web browsers as user interfaces; laptops and tablets replacing paper checklists; emails to disseminate status information and alerts; the IP protocol to communicate among different parts of a PCS; the Internet for remote access by support personnel and experts...

Unfortunately, while benefitting from standard information technology, PCSs have also inherited its drawbacks: design flaws in hardware, bugs in software components and applications, and vulnerabilities in communication protocols. Exploiting these drawbacks, malicious cyber-attackers and benign IT researchers have been probing many different hardware devices, software packages and protocols for years. Today, computer centres, office systems and home computers are permanently under attack. With their new technological basis, PCSs have come under the same scrutiny. The sophisticated “Stuxnet” attack by the US and Israel against the control system of Iranian uranium enrichment facilities in 2010 is just one of the more publicised cases. New vulnerabilities affecting PCSs are regularly published on certain web pages, and recipes for malicious attacks circulate widely on the Internet. The damage caused may be enormous.

Therefore, “Critical Infrastructure Protection” (CIP) becomes a must. But protecting PCSs the way we protect computer centres (patching them, running anti-virus on them, controlling their access) is much more difficult than attacking them. PCSs are built for their use-cases; malicious abuse is rarely considered during their design and implementation phases. For example, rebooting a SCADA PC temporarily suspends its monitoring capabilities, while updating PLC firmware usually requires thorough re-testing and probably even re-certification. Both are non-trivial and costly tasks that cannot be done in line with the monthly patch cycles of firms like Microsoft.

Ergo, a fraction (if not the majority) of today’s PCSs are vulnerable to common cyber-attacks. Not without reason, the former advisor to the US president, Richard Clarke, said “that the US might be able to blow up a nuclear plant somewhere, or a terrorist training centre somewhere, but a number of countries could strike back with a cyber-attack and the entire [US] economic system could be crashed in retaliation … because we can’t defend it today.” (AP, 2011) We need to raise our cyber-defences now. Without CIP, without protected SCADA systems, our modern symbiotic life is at risk.

*To be published in the annual yearbook of the World Federation of Scientists.


Check out our website for further information, answers to your questions and help, or e-mail Computer.Security@cern.ch.

If you want to learn more about computer security incidents and issues at CERN, just follow our Monthly Report.


Access the entire collection of Computer Security articles here.

October 17, 2014 02:10 PM

ZapperZ - Physics and Physicists

Iranian Physicist Omid Kokabee To Receive A New Trial
This type of prosecution used to happen under the iron-fisted rule of the Soviet Union. But there is a sign of optimism in the case of physicist Omid Kokabee, as the Iranian Supreme Court has ordered a new trial. This comes after Kokabee has spent four years in prison on a charge that many in the world consider to be flimsy at best.

"Acceptance of the retrial request means that the top judicial authority has deemed Dr. Omid Kokabee's [initial] verdict against the law," Kokabee's lawyer, Saeed Khalili was quoted as saying on the website of the International Campaign for Human Rights in Iran. "The path has been paved for a retrial in his case, and God willing, proving his innocence."

Kokabee, a citizen of Iran who at the time was studying at the University of Texas, Austin, was first arrested at the Tehran airport in January 2011. After spending 15 months in prison waiting for a trial, including more than a month in solitary confinement, he was convicted by Iran's Revolutionary Court of "communicating with a hostile government" and receiving "illegitimate funds" in the form of his college loans. He was sentenced to ten years in prison without ever talking to his lawyer or being allowed testimony in his defense.

He received stipends as part of his graduate assistantship, and these were considered to be the "illegitimate funds", which is utterly ridiculous. My characterization of such an accusation is that it can only come out of an extremely stupid and moronic group of people. There, I've said it!

Zz.

by ZapperZ (noreply@blogger.com) at October 17, 2014 01:41 PM

Symmetrybreaking - Fermilab/SLAC

High schoolers try high-powered physics

The winners of CERN's Beam Line for Schools competition conducted research at Europe’s largest physics laboratory.

Many teenagers dream about getting the keys to their first car. Last month, a group of high schoolers got access to their first beam of accelerated particles at CERN.

As part of its 60th anniversary celebration, CERN invited high school students from around the world to submit proposals for how they would use a beam of particles at the laboratory. Of the 292 teams that submitted the required “tweet of intent,” 1000-word proposal and one-minute video, CERN chose not one but two groups of winners: one from Dominicus College in Nijmegen, the Netherlands, and another from the Varvakios Pilot School in Athens, Greece.

The teams travelled to Switzerland in early September.

“Just being at CERN was fantastic,” says Nijmegen student Lisa Biesot. “The people at CERN were very enthusiastic that we were there. They helped us very much, and we all worked together.”

The Beam Line for Schools project was the brainchild of CERN physicist Christoph Rembser, who also coordinated the project. He and others at CERN didn’t originally plan for more than one team to win. But it made sense, as the two groups easily merged their experiments: Dominicus College students constructed a calorimeter that was placed within the Varvakios Pilot School’s experiment, which studied one of the four fundamental forces, the weak force.

“These two strong experiments fit so well together, and having an international collaboration, just like what we have at CERN, was great,” says Kristin Kaltenhauser of CERN’s international relations office, who worked with the students.

Over the summer the Nijmegen team grew crystals from potassium dihydrogen phosphate, a technique not used before at CERN, to make their own calorimeter, a piece of equipment that measures the energy of different particles.

At CERN, the unified team cross-calibrated the Nijmegen calorimeter with a calorimeter at CERN.

“We were worried if it would work,” says Nijmegen teacher Rachel Crane. “But then we tested our calorimeter on the beam with a lot of particles—positrons, electrons, pions and muons—and we really saw the difference. That was really amazing.”

The Athens team modeled their proposal on one of CERN’s iconic early experiments, conducted at the laboratory's first accelerator in 1958 to study an aspect of the weak force, which powers the thermonuclear reactions that cause the sun to shine.

Whereas the 1958 experiment had used a beam made completely of particles called pions, the students’ experiment used a higher energy beam containing a mixture of pions, kaons, protons, electrons and muons. They are currently analyzing the data.

CERN physicists Saime Gurbuz and Cenk Yildiz, who assisted the two teams, say they and other CERN scientists were very impressed with the students. “They were like real physicists,” Gurbuz says. “They were  professional and eager to take data and analyze it.”

The students and their teachers agree that working together enriched both their science and their overall experience. “We were one team,” says Athens student Nikolas Plaskovitis. “The collaboration was great and added so much to the experiment.” 

The students, teachers and CERN scientists have stayed in touch since the trip.

Before Nijmegen student Olaf Leender started working on the proposal, he was already interested in science, he says. “Now after my visit to CERN and this awesome experience, I am definitely going to study physics.”

Andreas Valadakis, who teaches the Athens group, says that his students now serve as science mentors to their fellow students. “This experience was beyond what we imagined,” he says.

Plaskovitis agrees with his teacher. “When we ran the beam line at CERN, just a few meters away behind the wall was the weak force at work. Just like the sun. And we were right there next to it.” 

Kaltenhauser says that CERN plans to hold another Beam Line for Schools competition in the future.

 

Like what you see? Sign up for a free subscription to symmetry!

by Rich Blaustein at October 17, 2014 01:27 PM

The n-Category Cafe

'Competing Foundations?' Conference

FINAL CFP and EXTENDED DEADLINE: SoTFoM II `Competing Foundations?’, 12-13 January 2015, London.

The focus of this conference is on different approaches to the foundations of mathematics. The interaction between set-theoretic and category-theoretic foundations has had significant philosophical impact, and represents a shift in attitudes towards the philosophy of mathematics. This conference will bring together leading scholars in these areas to showcase contemporary philosophical research on different approaches to the foundations of mathematics. To accomplish this, the conference has the following general aims and objectives. First, to bring to a wider philosophical audience the different approaches that one can take to the foundations of mathematics. Second, to elucidate the pressing issues of meaning and truth that turn on these different approaches. And third, to address philosophical questions concerning the need for a foundation of mathematics, and whether or not either of these approaches can provide the necessary foundation.

Date and Venue: 12-13 January 2015 - Birkbeck College, University of London.

Confirmed Speakers: Sy David Friedman (Kurt Goedel Research Center, Vienna), Victoria Gitman (CUNY), James Ladyman (Bristol), Toby Meadows (Aberdeen).

Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). While we welcome submissions from all areas concerned with foundations, particularly desired are submissions that address the role of and compare different foundational approaches. Applicants should prepare an extended abstract (maximum 1,500 words) for blind review, and send it to sotfom [at] gmail [dot] com, with subject `SOTFOM II Submission’.

Submission Deadline: 31 October 2014

Notification of Acceptance: Late November 2014

Scientific Committee: Philip Welch (University of Bristol), Sy-David Friedman (Kurt Goedel Research Center), Ian Rumfitt (University of Birmingham), Carolin Antos-Kuby (Kurt Goedel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Goedel Research Center), Neil Barton (Birkbeck College), Chris Scambler (Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni (Universita Vita-Salute S. Raffaele), Giorgio Venturi (CLE, Universidade Estadual de Campinas)

Organisers: Sy-David Friedman (Kurt Goedel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Goedel Research Center), Neil Barton (Birkbeck College), Carolin Antos-Kuby (Kurt Goedel Research Center)

Conference Website: sotfom [dot] wordpress [dot] com

Further Inquiries: please contact Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at) Neil Barton (bartonna [at] gmail [dot] com) Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at) John Wigglesworth (jmwigglesworth [at] gmail [dot] com)

The conference is generously supported by the Mind Association, the Institute of Philosophy, British Logic Colloquium, and Birkbeck College.

by david (d.corfield@kent.ac.uk) at October 17, 2014 01:13 PM

CERN Bulletin

Emilio Picasso (1927-2014)

Many people in the high-energy physics community will be deeply saddened to learn that Emilio Picasso passed away on Sunday 12 October after a long illness. His name is closely linked in particular with the construction of CERN’s Large Electron-Positron (LEP) collider.

 

Emilio studied physics at the University of Genoa. He came to CERN in 1964 as a research associate to work on the ‘g-2’ experiments, which he was to lead when he became a staff member in 1966. These experiments spanned two decades at two different muon storage rings and became famous for their precision studies of the muon and tests of quantum electrodynamics.

In 1979, Emilio became responsible for the coordination of work by several institutes, including CERN, on the design and construction of superconducting RF cavities for LEP. Then, in 1981, the Director-General, Herwig Schopper, appointed him as a CERN director and LEP project leader. Emilio immediately set up a management board of the best experts at CERN and together they went on to lead the construction of LEP, the world’s largest electron synchrotron, in the 27-km tunnel that now houses the LHC.

LEP came online just over 25 years ago on 14 July 1989 and ran for 11 years. Its experiments went on to perform high-precision tests of the Standard Model, a true testament to Emilio’s skills as a physicist and as a project leader.

We send our deepest condolences to his wife and family.


A full obituary will appear in a later edition of the Bulletin.

See also the CERN Courier, in which Emilio talks about the early days of the LEP project and its start-up.

October 17, 2014 01:10 PM

CERN Bulletin

UK school visit: Alfriston School for girls

Pupils with learning disabilities from Alfriston School in the UK visited the CMS detector last week. This visit was funded by the UK's Science and Technologies Facilities Council (STFC) as part of a grant awarded to support activities that will help to build the girls’ self-esteem and interest in physics.

 

Alfriston School students at CMS.

On Friday, 10 October, pupils from Alfriston School – a UK secondary school catering for girls with a wide range of special educational needs and disabilities – paid a special visit to CERN.

Dave Waterman, a science teacher at the school, recently received a Public Engagement Small Award from the STFC, which enabled the group of girls and accompanying teachers to travel to Switzerland and visit CERN. The awards form part of a project to boost the girls’ confidence and interest in physics. The aim is to create enthusiastic role models with first-hand experience of science who can inspire their peers back home.

By building pupils' self-esteem with regard to learning science, the project further aims to encourage students to develop the confidence to go on to study science- or engineering-related subjects when they leave school.

Waterman first visited CERN as part of the UK Teachers Programme in December 2013, which was when the idea of bringing his pupils over for a visit was first suggested. "The main challenge with a visit of this kind is finding how to engage the pupils who don’t have much knowledge of maths," said Waterman. Dave Barney, a member of the CMS collaboration, rose to the challenge, hitting the level spot on with a short and engaging introductory talk just before the detector visit. Chemical-engineering student Olivia Bailey, who recently completed a year-long placement at CERN, accompanied the students on the visit. "Being involved in this outreach project was really fun," she said. "It was a great way of using my experience at CERN and sharing it with others."

For one pupil – Laura – this was her first journey out of England and her first time on a plane. "The whole trip has been so exciting," she said. "My highlight was seeing the detector because it was so much bigger than what I thought." Other students were similarly impressed, expressing surprise and awe as they entered the detector area.

October 17, 2014 01:10 PM

Clifford V. Johnson - Asymptotia

Sunday Assembly – Origin Stories
Sorry about the slow posting this week. It has been rather a busy time the last several days, with all sorts of deadlines and other things taking up lots of time. This includes things like being part of a shooting of a new TV show, writing and giving a midterm to my graduate electromagnetism class, preparing a bunch of documents for my own once-every-3-years evaluation (almost forgot to do that one until the last day!), and so on and so forth. Well, the other thing I forgot to do is announce that I'll be doing the local Sunday Assembly sermon (for want of a better word) this coming Sunday. I've just taken a step aside from writing it to tell you about it. You'll have maybe heard of Sunday Assembly since it has been featured a lot in the news as a secular alternative (or supplement) to a Sunday Church gathering, in many cities around the world (more here). Instead of a sermon they have someone come along and talk about a topic, and they cover a lot of interesting topics. They sound like a great bunch of people to hang out with, and I strongly [..] Click to continue reading this post

by Clifford at October 17, 2014 12:24 AM

October 16, 2014

John Baez - Azimuth

Network Theory Seminar (Part 2)

 

This time I explain more about how ‘cospans’ represent gadgets with two ends, an input end and an output end:

I describe how to glue such gadgets together by composing cospans. We compose cospans using a category-theoretic construction called a ‘pushout’, so I also explain pushouts. At the end, I explain how this gives us a category where the morphisms are electrical circuits made of resistors, and sketch what we’ll do next: study the behavior of these circuits.
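To make the gluing concrete, here is a minimal sketch of composing cospans of finite sets by a pushout. The representation (dicts for the legs, a tiny union-find for the quotient) and the toy "wire" example are my own illustrative choices, not code from the seminar.

    # A cospan of finite sets X -> N <- Y is stored as (N, in_leg, out_leg),
    # where in_leg: X -> N and out_leg: Y -> N are dicts.  Composing
    # X -> N <- Y with Y -> M <- Z is done by a pushout: take the disjoint
    # union of N and M and identify out_leg1[y] with in_leg2[y] for each y in Y.

    def compose(cospan1, cospan2):
        N, in1, out1 = cospan1   # in1: X -> N,  out1: Y -> N
        M, in2, out2 = cospan2   # in2: Y -> M,  out2: Z -> M

        # elements of the disjoint union, tagged by which apex they came from
        parent = {("N", n): ("N", n) for n in N}
        parent.update({("M", m): ("M", m) for m in M})

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        for y in in2:            # glue along Y: out1[y] ~ in2[y]
            parent[find(("N", out1[y]))] = find(("M", in2[y]))

        apex = {find(v) for v in parent}                    # the pushout set
        new_in = {x: find(("N", in1[x])) for x in in1}      # X -> apex
        new_out = {z: find(("M", out2[z])) for z in out2}   # Z -> apex
        return apex, new_in, new_out

    # Usage: glue two "wires" a--1--2--b and b--3--4--c end to end.
    wire1 = ({1, 2}, {"a": 1}, {"b": 2})
    wire2 = ({3, 4}, {"b": 3}, {"c": 4})
    print(compose(wire1, wire2))   # nodes 2 and 3 land in the same class

Roughly speaking, for the circuits discussed in the seminar the input and output sets are the terminals and the apex is the set of nodes of the circuit.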

These lecture notes provide extra details:

Network theory (part 31).


by John Baez at October 16, 2014 08:59 PM

Lubos Motl - string vacua and pheno

An overlooked paper discovering axions gets published
What's the catch?

Sam Telfer has noticed and tweeted about a Royal Astronomical Society press release promoting today's publication (in Monthly Notices of the RAS: the link goes live next Monday) of a paper we should (or could) have discussed back in March 2014, when it was sent to the arXiv – except that no one has discussed it and the paper has no followups at this moment:
Potential solar axion signatures in X-ray observations with the XMM-Newton observatory by George Fraser and 4 co-authors
The figures are at the end of the paper, after the captions. Unfortunately, Prof Fraser died in March, two weeks after this paper was sent to the arXiv. This makes the story of the discovery, if it is real, rather dramatic; alternatively, you may view it as a compassionate piece of evidence that the discovery isn't real.

Yes, this photograph of five axions was posted on the blog of the science adviser of The Big Bang Theory. It is no bazinga.

This French-English paper takes some data from XMM-Newton, the X-ray Multi-Mirror Mission launched on ESA's Ariane 5 rocket. My understanding is that the authors more or less assume that the orientation of this X-ray telescope is "randomly changing" relative to both the Earth and the Sun (which may be a problematic assumption, but they study some details about the changing orientation, too).

With this disclaimer, they look at the amount of X-rays with energies between \(0.2\) and \(10\keV\) and notice that the flux has a rather clear seasonal dependence. The significance of these effects is claimed to be 4, 5, and 11 sigma (!!!), depending on some details. Seasonal signals are potentially clever but possibly tricky, too: recall that DAMA and (later) CoGeNT have "discovered" WIMP dark matter using the seasonal signals, too.
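The idea of a seasonal fit is simple enough to sketch. The toy code below (my own illustration on synthetic fluxes, not the paper's dataset or its actual statistical machinery) fits a constant background plus an annual sinusoidal modulation, the kind of template DAMA-like analyses use, and quotes a naive significance for the modulation amplitude:

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy seasonal-modulation fit: constant background + annual sine wave.
    # The "data" are synthetic; a real analysis fits binned X-ray fluxes
    # with proper errors and detector-orientation corrections.

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 3.0 * 365.25, 120)             # time in days, ~3 years
    true_flux = 10.0 + 1.5 * np.cos(2 * np.pi * (t - 150.0) / 365.25)
    flux = true_flux + rng.normal(0.0, 0.5, size=t.size)    # add Gaussian noise

    def model(t, baseline, amplitude, phase):
        """Constant background plus a modulation with a fixed one-year period."""
        return baseline + amplitude * np.cos(2 * np.pi * (t - phase) / 365.25)

    popt, pcov = curve_fit(model, t, flux, p0=[10.0, 1.0, 100.0])
    amp, amp_err = popt[1], np.sqrt(pcov[1, 1])
    print(f"fitted amplitude = {amp:.2f} +- {amp_err:.2f}  "
          f"(naive significance ~ {abs(amp) / amp_err:.1f} sigma)")

The quoted sigmas in the paper come from a far more careful treatment, of course; the sketch only shows where an "N-sigma seasonal modulation" statement comes from.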

What is changing as a function of the season (date) is mainly the relative orientation of the Sun and the Earth. If you ignore the Sun, the Earth is just a gyroscope that rotates in the same way during the year, far away from stars etc., so seasons shouldn't matter. If you ignore the Earth, the situation should be more or less axially symmetric, although I wouldn't claim it too strongly, so there should also be no seasonal dependence.

What I want to say, and what is reasonable although not guaranteed, is that the seasonal dependence of a signal seen from an orbiting telescope probably needs to depend both on the Sun and on the Earth. Their interpretation is that the axions are actually coming from the Sun, and they are later processed by the geomagnetic field.

The birth of the solar axions proceeds either via a Compton-like process
\[ e^- + \gamma \to e^- + a \]
or via the (or more precisely: die) Bremsstrahlung-like process
\[ e^- + Z \to e^- + Z + a, \]
where the electrons, photons, and nuclei \(Z\) are taken from the mundane thermal havoc within the Sun's core, unless I am wrong. Some of the axions \(a\) created in this way fly towards the Earth. And in the part of the geomagnetic field pointing towards the Sun, the axions \(a\) are converted to photons \(\gamma\) via axion-to-photon conversion, the Primakoff effect (again: this process only works in an external magnetic field). The strength and relevance of the relevant geomagnetic field is season-dependent.
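Just to see the orders of magnitude, here is a tiny estimate using the textbook conversion probability for a very light axion traversing a homogeneous magnetic field, \(P_{a\to\gamma}\approx (g_{a\gamma} B L/2)^2\) (valid when the conversion is coherent along the whole path). The coupling, field strength and path length below are placeholders I picked for illustration, not numbers from the paper.

    # Order-of-magnitude Primakoff conversion in a homogeneous magnetic field,
    # P ~ (g * B * L / 2)^2 in natural units, for a very light axion.
    # The numbers are illustrative placeholders, not taken from the paper.

    TESLA_TO_EV2 = 195.35        # 1 tesla expressed in natural units (eV^2)
    METER_TO_INV_EV = 5.068e6    # 1 metre expressed in natural units (eV^-1)

    g_agamma = 1e-10 * 1e-9      # assumed photon coupling: 1e-10 GeV^-1, in eV^-1
    B = 3e-5 * TESLA_TO_EV2      # ~30 microtesla, a geomagnetic-scale field
    L = 1e7 * METER_TO_INV_EV    # ~10,000 km of field region (a guess)

    probability = (g_agamma * B * L / 2.0) ** 2
    print(f"P(a -> gamma) ~ {probability:.1e}")

With these made-up inputs the probability comes out around \(10^{-16}\), which is why the enormous solar axion flux is needed for any X-ray excess to be detectable at all.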



Their preferred picture is that there is an axion \(a\) with a mass comparable to a few microelectronvolts and it couples both to electrons and to photons. The product of these two coupling constants is said to be \(2.2\times 10^{-22}\GeV^{-1}\) because the authors love to repeat the word "two". Their hypothesis (or interpretation of the signal) probably makes some specific predictions about the spectrum of the X-rays, and those predictions should be checked; they have tried, but after a first super-quick reading of the paper, I don't see too many successes of these checks.

There are lots of points and arguments and possible loopholes and problems over here that I don't fully understand at this point. You are invited to teach me (and us) or think loudly if you want to think about this bold claim at all.

Clearly, if the signal were real, it would be an extremely important discovery. Dark matter could be made out of these axions. The existence of axions would have far-reaching consequences not just for CP-violation in QCD but also for the scenarios within string theory, thanks to the axiverse and related paradigms.

The first news outlets that posted stories about the paper today were The Guardian, Phys.ORG, EurekAlert, and Fellowship for ET aliens.

by Luboš Motl (noreply@blogger.com) at October 16, 2014 04:47 PM

ZapperZ - Physics and Physicists

No Women Physics Nobel Prize Winner In 50 Years
This article reports on the possible reasons why there has been no Physics Nobel Prize awarded to a woman in 50 years.

But there's also, of course, the fact that the prize is awarded to scientists whose discoveries have stood the test of time. If you're a theorist, your theory must be proven true, which knocks various people out of the running. One example is Helen Quinn, whose theory with Roberto Peccei predicts a new particle called the axion. But the axion hasn't been discovered yet, and therefore they can't win the Nobel Prize.
.
.
Age is important to note. Conrad tells Mashable that more and more women are entering the field of physics, but as a result, they're still often younger than what the committee seems to prefer. According to the Nobel Prize website, the average age of Nobel laureates has even increased since the 1950s.
 .
.
But the Nobel Prize in Physics isn't a lifetime achievement award — it honors a singular accomplishment, which can be tricky for both men and women.

"Doing Nobel Prize-worthy research is a combination of doing excellent science and also getting lucky," Conrad says. "Discoveries can only happen at a certain place and time, and you have to be lucky to be there then. These women coming into the field are as excellent as the men, and I have every reason to think they will have equal luck. So, I think in the future you will start to see lots of women among the Nobel Prize winners. I am optimistic."

The article mentioned the names of four women who are leading candidates for the Nobel prize: Deborah Jin, Lene Hau, Vera Rubin, and Margaret Murnane. If you noticed, I wrote about Jin and Hau way back when already, and I consider them to have done Nobel-caliber work. I can only hope that, during my lifetime, we will see a woman win this prize again after so long.

Zz.

by ZapperZ (noreply@blogger.com) at October 16, 2014 12:40 PM

ZapperZ - Physics and Physicists

Lockheed Fusion "Breakthrough" - The Skeptics Are Out
Barely a day after Lockheed Martin announced their "fusion breakthrough" in designing a workable and compact fusion reactor, the skeptics are already weighing in with their opinions, even though the details of Lockheed's design have not been clearly described.

"The nuclear engineering clearly fails to be cost effective," Tom Jarboe told Business Insider in an email. Jarboe is a professor of aeronautics and astronautics, an adjunct professor in physics, and a researcher with the University of Washington's nuclear fusion experiment.
.
.
"This design has two doughnuts and a shell so it will be more than four times as bad as a tokamak," Jarboe said, adding that, "Our concept [at the University of Washington] has no coils surrounded by plasma and solves the problem."

Like I said earlier, from the sketchy details that I've read, they are using a familiar technique for confinement, etc., something that has been used and studied extensively before. So unless they are claiming to have found something that almost everyone else has overlooked, this claim of theirs will need to be very convincing for others to accept. As stated in the article, Lockheed hasn't published anything yet, and they probably won't until they get patent approval for their design. That is what a commercial entity will typically do when it wants to protect its design and investment.

There's a lot more work left to do for this to be demonstrated.

Zz.

by ZapperZ (noreply@blogger.com) at October 16, 2014 12:26 PM