Particle Physics Planet


September 21, 2017

The n-Category Cafe

Applied Category Theory at UCR (Part 2)

I’m running a special session on applied category theory, and now the program is available:

Applied category theory, Fall Western Sectional Meeting of the AMS, 4-5 November 2017, U.C. Riverside.

This is going to be fun.

My former student Brendan Fong is now working with David Spivak at M.I.T., and they’re both coming. My collaborator John Foley at Metron is also coming: we’re working on the CASCADE project for designing networked systems.

Dmitry Vagner is coming from Duke: he wrote a paper with David and Eugene Lerman on operads and open dynamical systems. Christina Vasilakopoulou, who has worked with David and Patrick Schultz on dynamical systems, has just joined our group at UCR, so she will also be here. And the three of them have worked with Ryan Wisnesky on algebraic databases. Ryan will not be here, but his colleague Peter Gates will: together with David they have a startup called Categorical Informatics, which uses category theory to build sophisticated databases.

That’s not everyone — for example, most of my students will be speaking at this special session, and other people too — but that gives you a rough sense of some people involved. The conference is on a weekend, but John Foley and David Spivak and Brendan Fong and Dmitry Vagner are staying on for longer, so we’ll have some long conversations… and Brendan will explain decorated corelations in my Tuesday afternoon network theory seminar.

Wanna see what the talks are about?

Here’s the program. Click on talk titles to see abstracts. For a multi-author talk, the person with the asterisk after their name is doing the talking. All the talks will be in Room 268 of the Highlander Union Building or ‘HUB’.

Saturday November 4, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
A higher-order temporal logic for dynamical systems.
David I. Spivak, M.I.T.


10:00 a.m.
Algebras of open dynamical systems on the operad of wiring diagrams.
Dmitry Vagner*, Duke University
David I. Spivak, M.I.T.
Eugene Lerman, University of Illinois at Urbana-Champaign


10:30 a.m.
Abstract dynamical systems.
Christina Vasilakopoulou*, University of California, Riverside
David Spivak, M.I.T.
Patrick Schultz, M.I.T.


Saturday November 4, 2017, 3:00 p.m.-5:50 p.m.

3:00 p.m.
Black boxes and decorated corelations.
Brendan Fong, M.I.T.


4:00 p.m.
Compositional modelling of open reaction networks.
Blake S. Pollard*, University of California, Riverside
John C. Baez, University of California, Riverside


4:30 p.m.
A bicategory of coarse-grained Markov processes.
Kenny Courser, University of California, Riverside


5:00 p.m.
A bicategorical syntax for pure state qubit quantum mechanics.
Daniel Michael Cicala, University of California, Riverside


5:30 p.m.
Open systems in classical mechanics.
Adam Yassine, University of California, Riverside


Sunday November 5, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
Controllability and observability: diagrams and duality.
Jason Erbele, Victor Valley College


9:30 a.m.
Frobenius monoids, weak bimonoids, and corelations.
Brandon Coya, University of California, Riverside


10:00 a.m.
Compositional design and tasking of networks.
John D. Foley*, Metron, Inc.
John C. Baez, University of California, Riverside
Joseph Moeller, University of California, Riverside
Blake S. Pollard, University of California, Riverside


10:30 a.m.
Operads for modeling networks.
Joseph Moeller*, University of California, Riverside
John Foley, Metron Inc.
John C. Baez, University of California, Riverside
Blake S. Pollard, University of California, Riverside


Sunday November 5, 2017, 2:00 p.m.-4:50 p.m.

2:00 p.m.
Reeb graph smoothing via cosheaves.
Vin de Silva, Department of Mathematics, Pomona College


3:00 p.m.
Knowledge representation in bicategories of relations.
Evan Patterson*, Stanford University, Statistics Department


3:30 p.m.
The multiresolution analysis of flow graphs.
Steve Huntsman*, BAE Systems


4:00 p.m.
Categorical logic as a foundation for reasoning under uncertainty.
Ralph L. Wojtowicz*, Shepherd University


4:30 p.m.
Data modeling and integration using the open source tool Algebraic Query Language (AQL).
Peter Y. Gates*, Categorical Informatics
Ryan Wisnesky, Categorical Informatics

by john (baez@math.ucr.edu) at September 21, 2017 11:26 PM

The n-Category Cafe

Applied Category Theory 2018

We’re having a conference on applied category theory!

The plenary speakers will be:

  • Samson Abramsky (Oxford)
  • John Baez (UC Riverside)
  • Kathryn Hess (EPFL)
  • Mehrnoosh Sadrzadeh (Queen Mary)
  • David Spivak (MIT)

There will be a lot more to say as this progresses, but for now let me just quote from the conference website.

Applied Category Theory (ACT 2018) is a five-day workshop on applied category theory running from April 30 to May 4 at the Lorentz Center in Leiden, the Netherlands.

Towards an integrative science: in this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one scientific discipline can be reused in another. The aim of the workshop is to (1) explore the use of category theory within and across different disciplines, (2) create a more cohesive and collaborative ACT community, especially among early-stage researchers, and (3) accelerate research by outlining common goals and open problems for the field.

While the workshop will host talks on a wide range of applications of category theory, there will be four special tracks on exciting new developments in the field:

  1. Dynamical systems and networks
  2. Systems biology
  3. Cognition and AI
  4. Causality

Accompanying the workshop will be an Adjoint Research School for early-career researchers. This will comprise a 16-week online seminar, followed by a 4-day research meeting at the Lorentz Center in the week prior to ACT 2018. Applications to the school will open prior to October 1, and are due November 1. Admissions will be notified by November 15.

Sincerely,
The organizers

Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford)

We welcome any feedback! Please send comments to this link.

About Applied Category Theory

Category theory is a branch of mathematics originally developed to transport ideas from one branch of mathematics to another, e.g. from topology to algebra. Applied category theory refers to efforts to transport the ideas of category theory from mathematics to other disciplines in science, engineering, and industry.

This site originated from discussions at the Computational Category Theory Workshop at NIST on Sept. 28-29, 2015. It serves to collect and disseminate research, resources, and tools for the development of applied category theory, and hosts a blog for those involved in its study.

The Proposal: Towards an Integrative Science

Category theory was developed in the 1940s to translate ideas from one field of mathematics, e.g. topology, to another field of mathematics, e.g. algebra. More recently, category theory has become an unexpectedly useful and economical tool for modeling a range of different disciplines, including programming language theory [10], quantum mechanics [2], systems biology [12], complex networks [5], database theory [7], and dynamical systems [14].

A category consists of a collection of objects together with a collection of maps between those objects, satisfying certain rules. Topologists and geometers use category theory to describe the passage from one mathematical structure to another, while category theorists are also interested in categories for their own sake. In computer science and physics, many types of categories (e.g. topoi or monoidal categories) are used to give a formal semantics of domain-specific phenomena (e.g. automata [3], or regular languages [11], or quantum protocols [2]). In the applied category theory community, a long-articulated vision holds that categories can serve as mathematical workspaces for the experimental sciences, similar to how they are used in topology and geometry [13]. This has proved true in certain fields, including computer science and mathematical physics, and we believe that these results can be extended in an exciting direction: we believe that category theory has the potential to bridge different fields, and moreover that developments in one field (e.g. automata) can be transferred successfully into another (e.g. systems biology) through category theory. Already, for example, the categorical modeling of quantum processes has helped solve an important open problem in natural language processing [9].
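The "certain rules" in the definition above can be spelled out compactly. Here is a minimal sketch of the definition in Lean 4 (the field names are ad hoc, not those of any established library): identities, composition, the unit laws, and associativity.

```lean
-- A category packages a collection of objects, maps (morphisms)
-- between them, and the rules those maps satisfy: every object has an
-- identity map, maps compose, identities are units for composition,
-- and composition is associative.
structure Category where
  Obj  : Type
  Hom  : Obj → Obj → Type
  id   : (X : Obj) → Hom X X
  comp : {X Y Z : Obj} → Hom X Y → Hom Y Z → Hom X Z
  id_comp : ∀ {X Y : Obj} (f : Hom X Y), comp (id X) f = f
  comp_id : ∀ {X Y : Obj} (f : Hom X Y), comp f (id Y) = f
  assoc   : ∀ {W X Y Z : Obj} (f : Hom W X) (g : Hom X Y) (h : Hom Y Z),
    comp (comp f g) h = comp f (comp g h)
```

Sets with functions, vector spaces with linear maps, and topological spaces with continuous maps are all instances of this single pattern, which is what lets constructions be transported between such fields.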

In this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one discipline can be reused in another. Tangibly and in the short-term, we will bring together people from different disciplines in order to write an expository survey paper that grounds the varied research in applied category theory and lays out the parameters of the research program.

In formulating this research program, we are motivated by recent successes where category theory was used to model a wide range of phenomena across many disciplines, e.g. open dynamical systems (including open Markov processes and open chemical reaction networks), entropy and relative entropy [6], and descriptions of computer hardware [8]. Several talks will address some of these new developments. But we are also motivated by an open problem in applied category theory, one which was observed at the most recent workshop in applied category theory (Dagstuhl, Germany, in 2015): “a weakness of semantics/CT is that the definitions play a key role. Having the right definitions makes the theorems trivial, which is the opposite of hard subjects where they have combinatorial proofs of theorems (and simple definitions). […] In general, the audience agrees that people see category theorists only as reconstructing the things they knew already, and that is a disadvantage, because we do not give them a good reason to care enough” [1, pg. 61].

In this workshop, we wish to articulate a natural response to the above: instead of treating the reconstruction as a weakness, we should treat the use of categorical concepts as a natural part of transferring and integrating knowledge across disciplines. The restructuring employed in applied category theory cuts through jargon, helping to elucidate common themes across disciplines. Indeed, the drive for a common language and comparison of similar structures in algebra and topology is what led to the development of category theory in the first place, and recent hints show that this approach is not only useful between mathematical disciplines, but between scientific ones as well. For example, the ‘Rosetta Stone’ of Baez and Stay demonstrates how symmetric monoidal closed categories capture the common structure between logic, computation, and physics [4].

[1] Samson Abramsky, John C. Baez, Fabio Gadducci, and Viktor Winschel. Categorical methods at the crossroads. Report from Dagstuhl Perspectives Workshop 14182, 2014.

[2] Samson Abramsky and Bob Coecke. A categorical semantics of quantum protocols. In Handbook of Quantum Logic and Quantum Structures. Elsevier, Amsterdam, 2009.

[3] Michael A. Arbib and Ernest G. Manes. A categorist’s view of automata and systems. In Ernest G. Manes, editor, Category Theory Applied to Computation and Control. Springer, Berlin, 1975.

[4] John C. Baez and Mike Stay. Physics, topology, logic and computation: a Rosetta stone. In Bob Coecke, editor, New Structures for Physics. Springer, Berlin, 2011.

[5] John C. Baez and Brendan Fong. A compositional framework for passive linear networks. arXiv e-prints, 2015.

[6] John C. Baez, Tobias Fritz, and Tom Leinster. A characterization of entropy in terms of information loss. Entropy, 13(11):1945-1957, 2011.

[7] Michael Fleming, Ryan Gunther, and Robert Rosebrugh. A database of categories. Journal of Symbolic Computation, 35(2):127-135, 2003.

[8] Dan R. Ghica and Achim Jung. Categorical semantics of digital circuits. In Ruzica Piskac and Muralidhar Talupur, editors, Proceedings of the 16th Conference on Formal Methods in Computer-Aided Design. Springer, Berlin, 2016.

[9] Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, Stephen Pulman, and Bob Coecke. Reasoning about meaning in natural language with compact closed categories and Frobenius algebras. In Logic and Algebraic Structures in Quantum Computing and Information. Cambridge University Press, Cambridge, 2013.

[10] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93(1):55-92, 1991.

[11] Nicholas Pippenger. Regular languages and Stone duality. Theory of Computing Systems, 30(2):121-134, 1997.

[12] Robert Rosen. The representation of biological systems from the standpoint of the theory of categories. Bulletin of Mathematical Biophysics, 20(4):317-341, 1958.

[13] David I. Spivak. Category Theory for Scientists. MIT Press, Cambridge MA, 2014.

[14] David I. Spivak, Christina Vasilakopoulou, and Patrick Schultz. Dynamical systems and sheaves. arXiv e-prints, 2016.

by john (baez@math.ucr.edu) at September 21, 2017 11:06 PM

Clifford V. Johnson - Asymptotia

Unexpected Throwback!

Wow, I've really got something good for Throwback Thursday! A large white envelope arrived in my mailbox*, addressed to me in handwriting. My first thought was that it was yet another sheaf of papers with someone's very earnest "Theory of Everything", helpfully sent along for me to discover that indeed the science world has "got it totally wrong": the universe is in fact made of (fill in the blank - let's say parmesan cheese?) which interacts via (hungry angels tethered together by fondue strands?) and so on and so forth, and all I have to do is "work out the math for me because it is not my strong point" and it'll all work out... "you're welcome".

But no, it was not. I don't open things like this without caution, for various reasons, and often I throw them away, but there was something strangely familiar about the writing and so I took it away to (maybe) open later.

Then it struck me. It was my handwriting! Huh? How could that be? Was I [...] Click to continue reading this post

The post Unexpected Throwback! appeared first on Asymptotia.

by Clifford at September 21, 2017 10:25 PM

Christian P. Robert - xi'an's og

the end of the Series B’log…

Today is the final day of the Series B’log, as David Dunson, Piotr Fryzlewicz and I have decided to stop the experiment, faute de combattants (for lack of combatants, as we say in French). The authors nicely contributed long abstracts of their papers, for which I am grateful, but with a single exception, no one came out with comments or criticisms, and the idea of turning some Series B papers into discussion papers does not seem to appeal, at least in this format. Maybe the concept will be rekindled in another form in the near future, but for now we let it rest. So be it!


Filed under: Books, Statistics, University life Tagged: blogging, discussion paper, Journal of the Royal Statistical Society, Series B, Series B'log

by xi'an at September 21, 2017 10:17 PM


ZapperZ - Physics and Physicists

Gravity As A Result Of Random Quantum Fluctuation?
There are too many "buzzwords" in this entire thing, but it might still be an interesting read for some people.

There is a new report on the possibility that gravity might not be an interaction within the QFT framework, but rather a result of quantum fluctuations.

The average of these fluctuations is a gravitational field that is consistent with Newton’s theory of gravity. In this model, gravity is born out of quantum mechanics, but is not in itself a quantum-mechanical force. It is what scientists call “semiclassical.” Until this theory is tested further, it will remain a semi-solution; while the idea does predict certain known phenomena, it doesn’t yet account for Einstein’s theory of general relativity.

This latest report is based on a preprint uploaded to the arXiv.

Now, I can understand New Scientist reporting on something like this, because they have a tendency to report on sensational and unverified science news, but for the PBS/NOVA webpage to jump onto this still-unpublished work? That's surprising.

Of course, I'm complicit in this as well, since I'm reporting it here. I'm going to make sure I don't highlight something like this again in the future until it has at least appeared in a peer-reviewed publication, not just in New Scientist and the like.

Zz.

by ZapperZ (noreply@blogger.com) at September 21, 2017 05:40 PM

Symmetrybreaking - Fermilab/SLAC

Concrete applications for accelerator science

A project called A2D2 will explore new applications for compact linear accelerators.

Tom Kroc, Matteo Quagliotto and Mike Geelhoed set up a sample beneath the A2D2 accelerator to test the electron beam.

Particle accelerators are the engines of particle physics research at Fermi National Accelerator Laboratory. They generate nearly light-speed, subatomic particles that scientists study to get to the bottom of what makes our universe tick. Fermilab experiments rely on a number of different accelerators, including a powerful, 500-foot-long linear accelerator that kick-starts the process of sending particle beams to various destinations.

But if you’re not doing physics research, what’s an accelerator good for?

It turns out, quite a lot: Electron beams generated by linear accelerators have all kinds of practical uses, such as making the wires used in cars melt-resistant or purifying water.

A project called Accelerator Application Development and Demonstration (A2D2) at Fermilab’s Illinois Accelerator Research Center will help Fermilab and its partners to explore new applications for compact linear accelerators, which are only a few feet long rather than a few hundred. These compact accelerators are of special interest because of their small size—they’re cheaper and more practical to build in an industrial setting than particle physics research accelerators—and they can be more powerful than ever.

“A2D2 has two aspects: One is to investigate new applications of how electron beams might be used to change, modify or process different materials,” says Fermilab’s Tom Kroc, an A2D2 physicist. “The second is to contribute a little more to the understanding of how these processes happen.”

To develop these aspects of accelerator applications, A2D2 will employ a compact linear accelerator that was once used in a hospital to treat tumors with electron beams. With a few upgrades to increase its power, the A2D2 accelerator will be ready to embark on a new venture: exploring and benchmarking other possible uses of electron beams, which will help specify the design of a new, industrial-grade, high-power machine under development by IARC and its partners.

It won’t be just Fermilab scientists using the A2D2 accelerator: As part of IARC, the accelerator will be available for use (typically through a formal CRADA or SPP agreement) by anyone who has a novel idea for electron beam applications. IARC’s purpose is to partner with industry to explore ways to translate basic research and tools, including accelerator research, into commercial applications.

“I already have a lot of people from industry asking me, ‘When can I use A2D2?’” says Charlie Cooper, general manager of IARC. “A2D2 will allow us to directly contribute to industrial applications—it’s something concrete that IARC now offers.”

Speaking of concrete, one of the first applications in mind for compact linear accelerators is creating durable pavement for roads that won’t crack in the cold or spread out in the heat. This could be achieved by replacing traditional asphalt with a material that could be strengthened using an accelerator. The extra strength would come from crosslinking, a process that creates bonds between layers of material, almost like applying glue between sheets of paper. A single sheet of paper tears easily, but when two or more layers are linked by glue, the paper becomes stronger.

“Using accelerators, you could have pavement that lasts longer, is tougher and has a bigger temperature range,” says Bob Kephart, director of IARC. Kephart holds two patents for the process of curing cement through crosslinking. “Basically, you’d put the road down like you do right now, and you’d pass an accelerator over it, and suddenly you’d turn it into really tough stuff—like the bed liner in the back of your pickup truck.”

This process has already caught the eye of the U.S. Army Corps of Engineers, which will be one of A2D2’s first partners. Another partner will be the Chicago Metropolitan Water Reclamation District, which will test the utility of compact accelerators for water purification. Many other potential customers are lining up to use the A2D2 technology platform.

“You can basically drive chemical reactions with electron beams—and in many cases those can be more efficient than conventional technology, so there are a variety of applications,” Kephart says. “Usually what you have to do is make a batch of something and heat it up in order for a reaction to occur. An electron beam can make a reaction happen by breaking a bond with a single electron.”

In other words, instead of having to cook a material for a long time to reach a specific heat that would induce a chemical reaction, you could zap it with an electron beam to get the same effect in a fraction of the time.

In addition to exploring the new electron-beam applications with the A2D2 accelerator, scientists and engineers at IARC are using cutting-edge accelerator technology to design and build a new kind of portable, compact accelerator, one that will take applications uncovered with A2D2 out of the lab and into the field. The A2D2 accelerator is already small compared to most accelerators, but the latest R&D allows IARC experts to shrink the size while increasing the power of their proposed accelerator even further.

“The new, compact accelerator that we’re developing will be high-power and high-energy for industry,” Cooper says. “This will enable some things that weren’t possible in the past. For something such as environmental cleanup, you could take the accelerator directly to the site.”

While the IARC team develops this portable accelerator, which should be able to fit on a standard trailer, the A2D2 accelerator will continue to be a place to experiment with how to use electron beams—and study what happens when you do.

“The point of this facility is more development than research, however there will be some research on irradiated samples,” says Fermilab’s Mike Geelhoed, one of the A2D2 project leads. “We’re all excited—at least I am. We and our partners have been anticipating this machine for some time now. We all want to see how well it can perform.”

Editor's note: This article was originally published by Fermilab.

by Leah Poffenberger at September 21, 2017 05:18 PM

Peter Coles - In the Dark

Knitted Omnibus

The inestimable Miss Lemon, who occasionally operates under the pseudonym Dorothy Lamb, has sent me a picture of her latest knitting exploits, i.e. two buses in the livery of the Brighton & Hove Bus Company!

They add a whole new meaning to the term ‘bendy bus’!

To find out what inspired these contributions, please see the related University of Sussex news item here.


by telescoper at September 21, 2017 03:56 PM

Peter Coles - In the Dark

Free Will in the Theory of Everything

There’s a very thoughtful and provocative paper on the arXiv by Gerard ’t Hooft, who (jointly) won the Nobel Prize in Physics in 1999. It’s well worth reading, even if you decide you don’t agree with him!

From what is known today about the elementary particles of matter, and the forces that control their behavior, it may be observed that still a host of obstacles must be overcome that are standing in the way of further progress of our understanding. Most researchers conclude that drastically new concepts must be investigated, new starting points are needed, older structures and theories, in spite of their successes, will have to be overthrown, and new, superintelligent questions will have to be asked and investigated. In short, they say that we shall need new physics. Here, we argue in a different manner. Today, no prototype, or toy model, of any so-called Theory of Everything exists, because the demands required of such a theory appear to be conflicting. The demands that we propose include locality, special and general relativity, together with a fundamental finiteness not only of the forces and amplitudes, but also of the set of Nature’s dynamical variables. We claim that the two remaining ingredients that we have today, Quantum Field Theory and General Relativity, indeed are coming a long way towards satisfying such elementary requirements. Putting everything together in a Grand Synthesis is like solving a gigantic puzzle. We argue that we need the correct analytical tools to solve this puzzle. Finally, it seems to be obvious that this solution will give room neither for “Divine Intervention”, nor for “Free Will”, an observation that, all by itself, can be used as a clue. We claim that this reflects on our understanding of the deeper logic underlying quantum mechanics.

The full paper can be downloaded here.


by telescoper at September 21, 2017 11:38 AM

Emily Lakdawalla - The Planetary Society Blog

Meet two astronaut candidates who can help NASA do science on other worlds
Two of NASA's new astronaut candidates are particularly suited to conduct scientific research on other worlds: Zena Cardman, a geobiologist, and Jessica Watkins, a geologist.

September 21, 2017 11:00 AM

September 20, 2017

Christian P. Robert - xi'an's og

LaTeX issues from Vienna

When working on the final stage of our edited handbook on mixtures, in Vienna, I came across unexpected practical difficulties! One was that by working on Dropbox with Windows users, file and directory names suddenly switched from upper-case to lower-case letters!, making hard-wired paths to figures and subsections invalid in the numerous LaTeX files used for the book. And forcing us to change to lower case everywhere. Having not worked under Windows since George Casella gave me my first laptop in the mid-90’s!, I am amazed that this inability to distinguish upper- and lower-case names is still an issue. And that Dropbox replicates it. (And that some people see that as a plus.)
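For the record, the bulk renaming can be scripted. Here is a hedged sketch (the function name is mine, and it should be tried on a copy of the book first) that lowercases every file and directory name under a given root:

```shell
# Rename every file and directory under $1 to all lower case, so that
# hard-wired LaTeX paths match what a case-insensitive file system
# produced.  -depth renames children before their parent directories.
lowercase_tree() {
  find "$1" -mindepth 1 -depth -name '*[A-Z]*' | while IFS= read -r path; do
    dir=$(dirname "$path")
    base=$(basename "$path")
    lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
    if [ "$base" != "$lower" ]; then
      mv "$path" "$dir/$lower"
    fi
  done
}
```

Called as `lowercase_tree .` from the book’s root directory; the corresponding paths inside the .tex sources themselves still have to be lowercased separately.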

The other LaTeX issue that took a while to solve was that we opted for one bibliography per chapter, rather than a single bibliography at the end of the book, mainly because CRC Press asked for this feature in order to sell chapters individually… This was my first encounter with this issue and I found the solutions for producing individual bibliographies incredibly heavy-handed, whether through chapterbib or bibunits, since one has to bibtex one .aux file for each chapter. Even with a one-line bash command, this is annoying in the extreme!
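That one-line command might look like the loop below, assuming chapterbib has produced one .aux file per \include’d chapter and that the chapters are named chapter1.tex, chapter2.tex, and so on (the names are hypothetical):

```shell
# Run bibtex once per chapter .aux file, as chapterbib/bibunits require.
# $1 is the directory holding the chapter .aux files.
run_chapter_bibs() {
  dir=$1
  for aux in "$dir"/chapter*.aux; do
    [ -e "$aux" ] || continue          # glob matched no files
    base=${aux##*/}                    # e.g. chapter1.aux
    ( cd "$dir" && bibtex "${base%.aux}" ) || return 1
  done
}
```

After the first latex pass, `run_chapter_bibs .` replaces the usual single bibtex call on the main file; two further latex passes then resolve the citations as usual.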


Filed under: Books, Statistics, University life Tagged: bibliography, BibTeX, Book, Dropbox, George Casella, handbook of mixture analysis, LaTeX, Linux, mixtures of distributions, Vienna, Windows, WU Wirtschaftsuniversität Wien

by xi'an at September 20, 2017 10:17 PM

Emily Lakdawalla - The Planetary Society Blog

In Appreciation of Kim Poor
We at The Planetary Society are saddened to hear about the recent passing of veteran space artist Kim Poor.

September 20, 2017 04:25 PM

Emily Lakdawalla - The Planetary Society Blog

How did China decide where to land its upcoming Moon missions?
How were the Chang'e 5 and 4 landing sites chosen? Space exploration historian Phil Stooke explains.

September 20, 2017 04:17 PM

Peter Coles - In the Dark

Song of Creation

Then there was neither Aught nor Nought, no air nor sky beyond.
What covered all? Where rested all? In watery gulf profound?
Nor death was then, nor deathlessness, nor change of night and day.
That One breathed calmly, self-sustained; nought else beyond it lay.

Gloom hid in gloom existed first – one sea, eluding view.
That One, a void in chaos wrapt, by inward fervour grew.
Within it first arose desire, the primal germ of mind,
Which nothing with existence links, as sages searching find.

The kindling ray that shot across the dark and drear abyss-
Was it beneath? or high aloft? What bard can answer this?
There fecundating powers were found, and mighty forces strove-
A self-supporting mass beneath, and energy above.

Who knows, who ever told, from whence this vast creation rose?
No gods had then been born – who then can e’er the truth disclose?
Whence sprang this world, and whether framed by hand divine or no-
Its lord in heaven alone can tell, if even he can show.

Translated by John Muir from the original (anonymous) Sanskrit text of a hymn.


by telescoper at September 20, 2017 02:19 PM

September 19, 2017

Christian P. Robert - xi'an's og

weapons of math destruction [fan]

As a [new] member of Parliament, Cédric Villani is now in charge of a committee on artificial intelligence, whose goal is to assess the positive and negative sides of AI. And he refers in the Le Monde interview below to Weapons of Math Destruction as impacting his views on the topic! Let us hope Superintelligence is not next on his reading list…


Filed under: Statistics Tagged: AI, artificial intelligence, Cédric Villani, French parliement, French politics, Le Monde, singularity, superintelligence, weapons of math destruction

by xi'an at September 19, 2017 10:17 PM

Peter Coles - In the Dark

September at Sophia Gardens

Since it was a fine evening I popped in at the SSE SWALEC Stadium in Sophia Gardens on the way home from work to catch the last few overs of Day 1 of the County Championship match between Glamorgan and Gloucestershire.

For a change, and despite  losing two quick wickets while I watched,  Glamorgan are in a reasonably strong position at 342 for 7 off 96 overs at the end of Day 1, with young Kiran Carlson unbeaten on 137. That’s not bad considering that, having been put in to bat, they had been 63 for 4 at one stage.

It’s been a disappointing season in the County Championship for Glamorgan, who have only won two games out of 12 so far, and there’s not much at stake in this game, but I hope they can get a good result in this, their last game of the season in Cardiff.


by telescoper at September 19, 2017 05:50 PM

ZapperZ - Physics and Physicists

Amazon's CAPTCHA Patent Proposal Tests Your Physics Understanding
... well, more like your physics INTUITION on what should happen next.

It seems that Amazon has filed a patent application that uses a physics engine to generate scenarios to see if you are a real person or a bot.

The company has filed a patent application for a new CAPTCHA method which would show you a 3D simulation of something about to happen to a person or object. That something would involve Newtonian physics — perhaps an item is about to fall on someone, or a ball is about to roll down a slope. The test would then show you several "after" scenarios and, if you pick the correct option, you've passed the test.
.
.
.
The idea is that, because you are a human, you have an "intuitive" understanding of what would happen next in these scenarios. But computers need much more information about the scene and "might be unable to solve the test", according to the application.

Definitely interesting, although in Fig. 3B shown in the article, both Fig (A) and Fig (B) might be possible depending on the ambiguity of the drawing.

But this brings me to an important point that I've been telling my students in intro physics classes when they deal with mechanics. We all ALREADY KNOW many of the things that will happen in cases like this. We do not need to learn physics or to be enrolled in a physics class to know the qualitative description of the dynamics of these systems. So we are not teaching you about something you are not familiar with.

What a formal physics lesson will do is to describe these things more accurately, i.e. in a QUANTITATIVE manner. We won't simply say "Oh, the ball will roll down that inclined plane." Rather, we will describe the motion of the ball mathematically, and we will be able to say how long the ball will take to reach the bottom, at what speed, etc. In other words, we don't just say "What goes up must come down", but we will also say "When and where it will come down". This is what separates physics (and science) from hand-waving, everyday conversation.
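To make that contrast concrete, here is a minimal quantitative sketch (my own illustrative numbers, not from the post) for the standard textbook case of a solid ball rolling without slipping down an incline:

```python
import math

# Illustrative sketch: turning "the ball will roll down the incline" into
# "when and how fast it reaches the bottom". Numbers are made up.

g = 9.81                   # gravitational acceleration, m/s^2
theta = math.radians(30)   # incline angle
L = 2.0                    # length of the incline, m

# Rolling without slipping: a = g sin(theta) / (1 + I/(m r^2)),
# with moment of inertia I = (2/5) m r^2 for a uniform solid sphere.
a = g * math.sin(theta) / (1 + 2 / 5)

t = math.sqrt(2 * L / a)   # time to reach the bottom, from L = (1/2) a t^2
v = a * t                  # speed at the bottom

print(f"a = {a:.2f} m/s^2, t = {t:.2f} s, v = {v:.2f} m/s")
```

For these numbers the ball takes about a second to cover the two metres, which is exactly the kind of quantitative statement the intuition alone cannot supply.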

All of us already have an intuitive understanding of the physical systems around us. That's why Amazon can make such a CAPTCHA test for everyone. A physics lesson simply formalizes that understanding in a more accurate and unambiguous fashion.

Zz.

by ZapperZ (noreply@blogger.com) at September 19, 2017 03:36 PM

Peter Coles - In the Dark

A Fellow’s Diary

Yet another sign that Autumn is on the way arrived yesterday in the form of my new Royal Astronomical Society diary, which comes with the subscription. This runs from October to October so each year’s new edition usually comes in September. I say `usually’ because mine didn’t come at all last year. It probably got lost in a muddle when I changed address back to Cardiff from Sussex. Each year’s version is usually a different colour from the previous one too. This time it’s a sort of bottle green.

Anyway, although many of my colleagues seem not to use them, I like old-fashioned diaries like this. I do run an electronic calendar for work-related events, meetings etc, but I use the paper one to scribble down extra-curricular activities such as concerts and cricket fixtures, as I find the smartphone version of my electronic calendar a bit fiddly.

Anyway, I’m interested to know the extent to which I am an old fogey so here’s a little poll on the subject of diaries:

Take Our Poll: http://polldaddy.com/poll/9833658

by telescoper at September 19, 2017 03:33 PM

Emily Lakdawalla - The Planetary Society Blog

OSIRIS-REx Earth flyby: What to Expect
OSIRIS-REx launched on September 8, 2016. Now, a year later, it's returning to its home to get a second boost on to its destination, the asteroid Bennu. It'll test all its cameras on Earth and the Moon in the 10 days after the flyby.

September 19, 2017 02:31 PM

Symmetrybreaking - Fermilab/SLAC

50 years of stories

To celebrate a half-century of discovery, Fermilab has been gathering tales of life at the lab.

People discussing Fermilab history

Science stories usually catch the eye when there’s big news: the discovery of gravitational waves, the appearance of a new particle. But behind the blockbusters are the thousands of smaller stories of science behind the scenes and daily life at a research institution. 

As the Department of Energy’s Fermi National Accelerator Laboratory celebrates its 50th anniversary year, employees past and present have shared memories of building a lab dedicated to particle physics.

Some shared personal memories: keeping an accelerator running during a massive snowstorm; being too impatient to stay put and wait for the arrival of an important piece of detector equipment; accidentally complaining about the lab to the lab’s director.

Others focused on milestones and accomplishments: the first daycare at a national lab, the Saturday Morning Physics Program built by Nobel laureate Leon Lederman, the birth of the web at Fermilab.

People shared memories of big names that built the lab: charismatic founding director Robert R. Wilson, fiery head of accelerator development Helen Edwards, talented lab artist Angela Gonzales.

And of course, employees told stories about Fermilab’s resident herd of bison.

There are many more stories to peruse. You can watch a playlist of the video anecdotes or find all of the stories (both written and video) collected on Fermilab’s 50th anniversary website.

by Lauren Biron at September 19, 2017 01:00 PM

September 18, 2017

The n-Category Cafe

Lattice Paths and Continued Fractions I

In my last post I talked about certain types of lattice paths with weightings on them and formulas for the weighted count of the paths; in particular, I was interested in expressing the reverse Bessel polynomials as a certain weighted count of Schröder paths. I alluded to a connection with continued fractions, and it is this connection that I want to explain here and in my next post.

In this post I want to prove Flajolet’s Fundamental Lemma. Alan Sokal calls this Flajolet’s Master Theorem, but Viennot takes the stance that it deserves the high accolade of being described as a ‘Fundamental Lemma’, citing Aigner and Ziegler in Proofs from THE BOOK:

“The essence of mathematics is proving theorems – and so, that is what mathematicians do: They prove theorems. But to tell the truth, what they really want to prove, once in their lifetime, is a Lemma, like the one by Fatou in analysis, the Lemma of Gauss in number theory, or the Burnside-Frobenius Lemma in combinatorics.

“Now what makes a mathematical statement a true Lemma? First, it should be applicable to a wide variety of instances, even seemingly unrelated problems. Secondly, the statement should, once you have seen it, be completely obvious. The reaction of the reader might well be one of faint envy: Why haven’t I noticed this before? And thirdly, on an esthetic level, the Lemma – including its proof – should be beautiful!”

Interestingly, Aigner and Ziegler were building up to describing a result of Viennot’s – the Gessel-Lindström-Viennot Lemma – as a fundamental lemma! (I hope to talk about that lemma in a later post.)

Anyway, Flajolet’s Fundamental Lemma that I will describe and prove below is about expressing the weighted count of paths that look like

weighted Motzkin path

as a continued fraction

$$\frac{1}{1- c_0 - \frac{a_1 b_1}{1-c_1 - \frac{a_2 b_2}{1- c_2 - \frac{a_3 b_3}{1-\dots}}}}$$

Next time I’ll give a few examples, including the connection with reverse Bessel polynomials.

Motzkin paths

We consider Motzkin paths, which are like the Dyck paths and Schröder paths we considered last time, but here the flat steps have length $1$.

A Motzkin path, then, is a lattice path in $\mathbb{N}^2$ starting at $(0,0)$, having steps in the direction $(1,1)$, $(1,-1)$ or $(1,0)$. The path finishes at some $(\ell, 0)$. Here is a Motzkin path.

Motzkin path

(Actually at this point the length of each step is a bit of a red herring, but let’s not worry about that.)

We want to count weighted paths, so we’re going to have to weight them. We’ll do it in a universal way to start with. Let $\{a_i\}_{i=1}^\infty$, $\{b_i\}_{i=1}^\infty$ and $\{c_i\}_{i=0}^\infty$ be three sets of commuting indeterminates. Now weight each step in a path in the following way: each step going up to level $i$ will be given the weight $a_i$; each step going down from level $i$ will be given the weight $b_i$; and each flat step at level $i$ will be given weight $c_i$. Here’s the path from above with the weights marked on it.

weighted Motzkin path

The weight $w_{a,b,c}(\sigma)$ of a path $\sigma$ is just the product of the weights of each of its steps, so the weight of the above path is $c_0 a_1^2 b_1^2 c_1 a_2 b_2$.

If you try to start writing down the sum of the weights of all Motzkin paths you’ll get a power series that begins

$$1 + c_0 + a_1 b_1 + c_0^2 + 2 a_1 b_1 c_0 + a_1 b_1 c_1 + c_0^3 + \dots \in \mathbb{Z}[[a_i, b_i, c_i]]$$

Flajolet’s Fundamental Lemma will give us a formula for this power series.

Flajolet’s Fundamental Lemma

In order to prove the result about the enumeration of weightings of all paths we will need to consider slightly more general paths that don’t just start on the $x$-axis. So define an $(h,k)$-path to be like a Motzkin path except that it starts at some point $(0,h)$, for $h\ge 0$, does not go below the line $y=h$ nor above the line $y=k$, and finishes at some point $(\ell, h)$. Let $P_h^k$ denote the set of all $(h,k)$-paths.

Here is a $(2,4)$-path with the weights marked on. Of course this is also, for instance, a $(2,13)$-path.

high Motzkin path

We want the weighted sum of all Motzkin paths, so in order to calculate that we will take $p_h^k$ to be the sum of the weights of all $(h,k)$-paths:

$$p_h^k \coloneqq \sum_{\sigma\in P_h^k} w_{a,b,c}(\sigma) \in \mathbb{Z}[[a_i, b_i, c_i]].$$

There is a beautifully simple expression for $p_h^k$.

Observe first that any path in $P_k^k$ is constrained to lie at level $k$, so must simply be a product of flat steps, which all have weight $c_k$; thus

$$p_k^k = 1 + c_k + c_k^2 + c_k^3 + \dots = \frac{1}{1-c_k}.$$

Given two paths $\sigma_1, \sigma_2 \in P_h^k$ we can multiply them together simply by placing $\sigma_2$ after $\sigma_1$. The above pictured example is the product of three paths in $P_2^4$, the middle one being a flat path. Weighting is clearly preserved by this multiplication: $w_{a,b,c}(\sigma_1\sigma_2) = w_{a,b,c}(\sigma_1)\, w_{a,b,c}(\sigma_2)$.

An indecomposable $(h,k)$-path is a path which only returns to level $h$ at its finishing point, i.e. as the name suggests, it cannot be decomposed into a non-trivial product. It is clear that any path decomposes uniquely as a product of indecomposable paths. There are two types of non-trivial indecomposable $(h,k)$-paths: there is the single flat step, and there are the paths consisting of an up step followed by a path in $P_{h+1}^k$ followed by a down step back to level $h$. We let $I_h^k$ be the set of non-trivial indecomposable $(h,k)$-paths.

This all leads to the following argument to deduce an expression for the weighted count of all $(h,k)$-paths.

$$\begin{aligned}
p_h^k &= \sum_{\sigma\in P_h^k} w_{a,b,c}(\sigma)\\
&= \sum_{n=0}^\infty \sum_{\pi_1,\dots,\pi_n \in I_h^k} w_{a,b,c}(\pi_1\cdots\pi_n)\\
&= \sum_{n=0}^\infty \sum_{\pi_1,\dots,\pi_n \in I_h^k} w_{a,b,c}(\pi_1)\cdots w_{a,b,c}(\pi_n)\\
&= \frac{1}{1- \sum_{\pi\in I_h^k} w_{a,b,c}(\pi)}\\
&= \frac{1}{1- c_h - \sum_{\sigma\in P_{h+1}^k} a_{h+1}\, w_{a,b,c}(\sigma)\, b_{h+1}}\\
&= \frac{1}{1- c_h - a_{h+1} b_{h+1} \sum_{\sigma\in P_{h+1}^k} w_{a,b,c}(\sigma)}\\
&= \frac{1}{1- c_h - a_{h+1} b_{h+1}\, p_{h+1}^k}
\end{aligned}$$

This is a lovely recursive expression for the weighted count $p_h^k$. Using the fact $p_k^k = \frac{1}{1-c_k}$ that we gave above, we obtain the following.

Lemma

$$p_h^k = \frac{1}{1- c_h - \frac{a_{h+1} b_{h+1}}{1-c_{h+1} - \frac{a_{h+2} b_{h+2}}{\frac{\vdots}{1- c_{k-1} - \frac{a_k b_k}{1-c_k}}}}}$$

Now taking $h=0$ and letting $k\to\infty$ we get the following continued fraction expansion for the weighted count of all Motzkin paths starting at level $0$.

Flajolet’s Fundamental Lemma

$$\sum_{\sigma\ \mathrm{Motzkin}} w_{a,b,c}(\sigma) = \frac{1}{1- c_0 - \frac{a_1 b_1}{1-c_1 - \frac{a_2 b_2}{1- c_2 - \frac{a_3 b_3}{1-\dots}}}} \in \mathbb{Z}[[a_i, b_i, c_i]]$$

How lovely and simple is that?
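As a quick sanity check (a sketch of my own, not part of the post): specialize every weight $a_i = b_i = c_i = t$ to a single step-counting variable, so that a path of length $n$ has weight $t^n$ and the weighted sum becomes the generating function of the Motzkin numbers. One can then compare a direct dynamic-programming count of paths with the coefficients of the truncated continued fraction:

```python
# Sanity check of Flajolet's Fundamental Lemma in the simplest case:
# with all weights a_i = b_i = c_i = t the weighted sum over Motzkin paths
# is the Motzkin generating function 1/(1 - t - t^2/(1 - t - t^2/(...))).

N = 10  # power-series truncation order

def count_motzkin(n):
    """Count length-n paths with steps (1,1), (1,0), (1,-1) that start and
    end at height 0 and never go below 0, by dynamic programming."""
    heights = {0: 1}
    for _ in range(n):
        new = {}
        for h, c in heights.items():
            for dh in (-1, 0, 1):
                if h + dh >= 0:
                    new[h + dh] = new.get(h + dh, 0) + c
        heights = new
    return heights.get(0, 0)

def series_inv(f):
    """Inverse of a truncated power series f (list of coefficients, f[0] = 1)."""
    g = [0] * N
    g[0] = 1
    for n in range(1, N):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

def continued_fraction(depth):
    """Coefficients of the depth-times-nested fraction 1/(1 - t - t^2 p)."""
    p = [0] * N
    p[0] = 1  # innermost approximation: the constant series 1
    for _ in range(depth):
        denom = [0] * N
        denom[0], denom[1] = 1, -1   # the "1 - t" part
        for n in range(2, N):
            denom[n] = -p[n - 2]     # the "- t^2 p" part
        p = series_inv(denom)
    return p

print([count_motzkin(n) for n in range(N)])  # direct path count
print(continued_fraction(N))                 # continued fraction coefficients
```

Both lines print the first ten Motzkin numbers $1, 1, 2, 4, 9, 21, \dots$, since each extra level of nesting pins down two more coefficients of the series.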

Next time I’ll give some examples and applications, which include the Dyck paths and Schröder paths we looked at previously.

by willerton (S.Willerton@sheffield.ac.uk) at September 18, 2017 01:01 PM

September 17, 2017

Tommaso Dorigo - Scientificblogging

Letters From Indochina, 1952-54: The Tragic Story Of An 18-Year-Old Enrolled In The Legion Etrangere
In 1952 my uncle Antonio, then 18 years old, left his family home in Venice, Italy, never to return, running away from the humiliation of a failure at school. With a friend he reached the border with France and crossed it during the night, chased by border patrols and wolves. Caught by the French police, Toni - that was the abbreviated name by which he was known to everybody - was offered a choice: be sent back to Italy, facing three months of jail, or enrol in the French Foreign Legion. Afraid of the humiliation and the consequences, he tragically chose the latter.

read more

by Tommaso Dorigo at September 17, 2017 04:59 PM

Lubos Motl - string vacua and pheno

Nima et al.: making the amplitude minirevolution massive
Nima Arkani-Hamed (Princeton), Tzu-Chen Huang (Caltech), and Yu-tin Huang (Taiwan) released their new 79-page-long paper
Scattering Amplitudes For All Masses and Spins
a few days ago. They claim to do something that may be considered remarkable: to generalize the spinor-indices-based uprising in the scattering amplitude industry of the previous 15 years to the case of particles of any mass and spin, and to deduce some properties of all possible particle theories out of their new formalism.

Is it possible? Does it work? What can they learn?

First, they remain restricted to the case of on-shell, i.e. scattering amplitudes, not general off-shell, i.e. Green's functions. They have a cute self-motivating semi-heuristic argument why they don't lose any generality by this constraint: the actual off-shell amplitudes are being experimentally measured by the analysis of some on-shell scattering that involves the particles as well as some new very heavy particles, namely the detectors and other apparatuses.

Nice. I guess that the numbers shown on the apparatuses' displays must be considered as labeling different particle species, not just polarizations of spin. If your Geiger-Müller counter shows "5" at the beginning and measures something and shows "6" at the end, it was a scattering in which the "Geiger-Müller-counter-type-5 particle species" collided with some small particles, got annihilated, and produced a similar big "*-6 counter" particle. Cute. ;-)




Second, the massless spinor-indices-based minirevolution depended on the possibility to write a massless momentum vector as\[

p^\mu = \sigma^{\mu}_{\alpha \dot\alpha} \lambda^\alpha \tilde \lambda^{\dot \alpha}.

\] Because the \(2\times 2\) matrix is written as a tensor product of two vectors, its rank is just one and the determinant is therefore zero. But the determinant is \(p^\mu p_\mu\) by an elementary calculation, so the particle has to be massless.
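The rank/determinant observation is easy to check numerically. This little sketch (mine, not from the paper) builds the \(2\times 2\) matrix \(p_{\alpha\dot\alpha} = \lambda_\alpha \tilde\lambda_{\dot\alpha}\) as an outer product and confirms its determinant vanishes, while a sum of two such products generically does not:

```python
import random

# Numerical sketch (not from the paper): the 2x2 matrix
# p_{a,adot} = lambda_a * lambdatilde_adot is a rank-one outer product,
# so det p = 0; since det p equals p^mu p_mu up to sign conventions,
# the corresponding momentum is null, i.e. massless.

lam = [random.gauss(0, 1) for _ in range(2)]
lamt = [random.gauss(0, 1) for _ in range(2)]

p = [[lam[a] * lamt[b] for b in range(2)] for a in range(2)]
det = p[0][0] * p[1][1] - p[0][1] * p[1][0]
print(abs(det) < 1e-12)  # rank one => vanishing determinant, up to rounding

# A sum of two outer products is generically rank two, with nonzero
# determinant: a time-like (massive) momentum, as described in the post.
lam2 = [random.gauss(0, 1) for _ in range(2)]
lamt2 = [random.gauss(0, 1) for _ in range(2)]
q = [[lam[a] * lamt[b] + lam2[a] * lamt2[b] for b in range(2)]
     for a in range(2)]
print(abs(q[0][0] * q[1][1] - q[0][1] * q[1][0]) > 1e-12)
```

This is just the statement that a time-like vector can be written as a sum of two light-like ones, rendered in four lines of arithmetic.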




How do you write a more general momentum which is timelike? Well, one \(\lambda \tilde \lambda\) isn't enough but the sum of two such products is enough: a time-like vector may be written as a sum of two light-like ones. This generalization has been considered by numerous people I have met – but they never got too far. Arkani-Hamed, Huang, and Huang are mature and hard-working researchers in this area, however, so they didn't just expect all the massive generalizations of the rules to be straightforward and fall into their laps. They were intensely thinking and deriving and they did derive some things.

So they write the amplitudes in the spinor variables. A particle with spin \(S\) adds \(2S\) spinor indices to the scattering amplitude, a symmetrization may be assumed in the irreducible case. There is some ambiguity in the choice of the spinors \(\lambda^I_\alpha\) and \(\tilde\lambda^I_{\dot \alpha}\) for a particle; \(I=1,2\) is the extra index needed because we sum two products of spinors. The time-like momentum (a non-singular, rank-two matrix with two 2-valued spinor indices) may be used to convert dotted and undotted indices for a single particle to each other.

OK, so their starting point is to imagine that all the scattering amplitudes in a theory with particles of general spins and masses are being rewritten in terms of various rational functions of products of the spinor-index-based variables. The Lorentz invariance constrains what is allowed. Also, the amplitudes may have various singularities and Nima and his collaborators have gained an incredible amount of experience in figuring out which singularities may be present in the amplitudes, which can't, and so on.

In this massive case, they were just going to rederive similar conclusions about some more general Ansätze which include a larger number of spinors and/or indices. At the end, they seem confident that the switch to the massive case doesn't cripple the key methods that worked in the massless case. Massive particles don't need the gauge invariance for consistency because the little group is \(SU(2)\) and only relates "positive-norm" polarizations.

But some of the constraints become more intense if you switch from the massless realm to the massive one.

Using their new formalism, they review some of the general theorems and wisdoms about the allowed spins and masses of particles. A coupling of three spin-1 particles is impossible. Yang's theorem, that a massive spin-1 particle cannot decay to two photons, is derived using the new formalism. With gravity (spin-two massless fields), massless higher-spin particles are impossible. The Weinberg-Witten theorem, rederived.

More cutely and beyond quantum field theory, they ask whether a massive particle of spin exceeding two may be elementary. The adjective "elementary" means that it can exist in isolation – so its mass may be parametrically separated from the masses of all other particle species. The answer turns out to be No. They get it by studying the \(E\to \infty\) or, equivalently, \(m\to 0\) limit of the scattering amplitude involving such a hypothetical elementary particle. Some singularity analogous to the massless particles' singularities materializes in the scattering amplitude and that singularity may be used to argue that an additional particle species whose mass is comparable to the same \(m\) has to exist. The original one couldn't exist in isolation.

Weakly-coupled string theory is compatible with this conclusion, of course, because it gives you a whole infinite tower of massive particles. You can't cherry-pick them one by one. You either have to accept the whole package that string theory gives you, or you have to die (or at least shut up). One may see that Arkani-Hamed and Huang squared gave a new derivation of the qualitative property of string theory. With some optimism, one could argue that by adding a few more derivations like that, one could derive that "all string theory is mathematically forced upon us" by pure mathematical thought – an analysis of the scattering amplitudes written in their spinor-based formalism.

The massless twistor minirevolution already had lots of unusual expressions, lots of indices, geometric shapes of large dimensions in spaces whose dimensions were even higher – equal to products of some numbers. It was already complicated. To play the game started by this paper, you need to swallow an additional collection of indices – \(I=1,2\) for each particle that indicate the two products of spinors you need to describe the momentum, and the corresponding increase of possible forms of the amplitudes. They make it look less terrifying than it is by using bold fonts for massive variables and suppressing some little group indices.

You shouldn't expect this industry to technically simplify, at least not in a foreseeable future. But its applicability is almost certainly being expanded and as they (and I) mentioned, there are signs that they can derive some statements that go beyond the list of known theorems in quantum field theory, that really go beyond the usual quantum field theory thinking itself. Similar methods are really applicable to the S-matrix of field theories with "infinitely many species" and string theory is a representative. The stringy S-matrix is complicated, involves the ratios of gamma-function-like entities, and it must display lots of special features relatively to "random functions of a similar type" that follow from the consistency of string theory.

Their formalism may be a tool to make some or all of these features "comprehensible" and accessible to a straightforward, albeit in no way technically easy, analysis.

by Luboš Motl (noreply@blogger.com) at September 17, 2017 08:01 AM

September 16, 2017

Clifford V. Johnson - Asymptotia

Cassini Tribute

Our Cassini tribute, posted on Instagram. (p.s. I started an instagram account. Not a lot to see yet, but I’d be delighted if you went over there and followed the heck out of it….) Our #cassini tribute. A post shared by Clifford Johnson (@asymptotia) on Sep 16, 2017 at 11:02am … Click to continue reading this post

The post Cassini Tribute appeared first on Asymptotia.

by Clifford at September 16, 2017 06:41 PM

Clifford V. Johnson - Asymptotia

New Job Search!

Dear colleagues far and wide. I’m delighted to share the news that we are having a job search in Astrophysics! Please note the link’s contents, and share it with those who might be interested. Thanks! https://usccareers.usc.edu/job/los-angeles/assistant-professor-of-physics-and-astronomy/1209/5668638 -cvj

The post New Job Search! appeared first on Asymptotia.

by Clifford at September 16, 2017 06:20 PM

September 15, 2017

Symmetrybreaking - Fermilab/SLAC

SENSEI searches for light dark matter

Technology proposed 30 years ago to search for dark matter is finally seeing the light.

Two scientists in hard hats stand next to a cart holding detector components.

In a project called SENSEI, scientists are using innovative sensors developed over three decades to look for the lightest dark matter particles anyone has ever tried to detect.

Dark matter—so named because it doesn’t absorb, reflect or emit light—constitutes 27 percent of the universe, but the jury is still out on what it’s made of. The primary theoretical suspect for the main component of dark matter is a particle scientists have descriptively named the weakly interacting massive particle, or WIMP.

But since none of these heavy particles, which are expected to have a mass 100 times that of a proton, have shown up in experiments, it might be time for researchers to think small.

“There is a growing interest in looking for different kinds of dark matter that are additives to the standard WIMP model,” says Fermi National Accelerator Laboratory scientist Javier Tiffenberg, a leader of the SENSEI collaboration. “Lightweight, or low-mass, dark matter is a very compelling possibility, and for the first time, the technology is there to explore these candidates.”

Sensing the unseen

In traditional dark matter experiments, scientists look for a transfer of energy that would occur if dark matter particles collided with an ordinary nucleus. But SENSEI is different; it looks for direct interactions of dark matter particles colliding with electrons.

“That is a big difference—you get a lot more energy transferred in this case because an electron is so light compared to a nucleus,” Tiffenberg says.

If dark matter had low mass—much smaller than the WIMP model suggests—then it would be many times lighter than an atomic nucleus. So if it were to collide with a nucleus, the resulting energy transfer would be far too small to tell us anything. It would be like throwing a ping-pong ball at a boulder: The heavy object wouldn’t go anywhere, and there would be no sign the two had come into contact.

An electron is nowhere near as heavy as an atomic nucleus. In fact, a single proton has about 1836 times more mass than an electron. So the collision of a low-mass dark matter particle with an electron has a much better chance of leaving a mark—it’s more bowling ball than boulder.
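The bowling-ball analogy can be made quantitative with standard non-relativistic collision kinematics (a back-of-envelope sketch, not taken from the article; the masses below are illustrative choices): in a head-on elastic collision, a projectile of mass m transfers at most a fraction 4mM/(m+M)² of its kinetic energy to a stationary target of mass M, which is close to 1 for matched masses and tiny for a huge mismatch.

```python
def max_energy_transfer_fraction(m, M):
    """Fraction of a projectile's kinetic energy transferred to a
    stationary target of mass M in a head-on elastic collision:
    4 m M / (m + M)^2."""
    return 4.0 * m * M / (m + M) ** 2

# Masses in units of the electron mass; the dark matter mass is illustrative.
m_dm = 1.0                     # a hypothetical electron-mass dark matter particle
m_electron = 1.0
m_si_nucleus = 28 * 1836.0     # a silicon-28 nucleus, roughly

f_electron = max_energy_transfer_fraction(m_dm, m_electron)   # -> 1.0 (full transfer)
f_nucleus = max_energy_transfer_fraction(m_dm, m_si_nucleus)  # almost nothing
print(f_electron, f_nucleus)
```

The mismatch is stark: the same light particle that can hand all of its energy to an electron barely nudges a nucleus.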

Bowling balls aren't exactly light, though. An energy transfer between a low-mass dark matter particle and an electron would leave only a blip of energy, one either too small for most detectors to pick up or easily overshadowed by noise in the data.

“The bowling ball will move a very tiny amount,” says Fermilab scientist Juan Estrada, a SENSEI collaborator. “You need a very precise detector to see this interaction of lightweight particles with something that is much heavier.”

That’s where SENSEI’s sensitive sensors come in.

SENSEI will use skipper charge-coupled devices, also called skipper CCDs. CCDs have been used for other dark matter detection experiments, such as the Dark Matter in CCDs (or DAMIC) experiment operating at SNOLAB in Canada. These CCDs were a spinoff from sensors developed for use in the Dark Energy Camera in Chile and other dark energy search projects.

CCDs are typically made of silicon divided into pixels. When a dark matter particle passes through the CCD, it collides with the silicon’s electrons, knocking them free, leaving a net electric charge in each pixel the particle passes through. The electrons then flow through adjacent pixels and are ultimately read as a current in a device that measures the number of electrons freed from each CCD pixel. That measurement tells scientists about the mass and energy of the particle that got the chain reaction going. A massive particle, like a WIMP, would free a gusher of electrons, but a low-mass particle might free only one or two.

Typical CCDs can measure the charge left behind only once, which makes it difficult to decide if a tiny energy signal from one or two electrons is real or an error.

Skipper CCDs are a new generation of the technology that helps eliminate the “iffiness” of a measurement that has a one- or two-electron margin of error. “The big step forward for the skipper CCD is that we are able to measure this charge as many times as we want,” Tiffenberg says.

The charge left behind in the skipper CCD can be sampled multiple times and then averaged, a method that yields a more precise measurement of the charge deposited in each pixel than the measure-one-and-done technique. That’s the rule of statistics: With more data, you get closer to a property’s true value.

SENSEI scientists take advantage of the skipper CCD architecture, measuring the number of electrons in a single pixel a whopping 4000 times.
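The statistics behind those repeated reads can be illustrated with a tiny simulation (a sketch of mine, assuming Gaussian readout noise; the noise figure is illustrative, not SENSEI's): averaging N independent samples of the same pixel shrinks the uncertainty by roughly a factor of √N.

```python
import random

def measure_pixel(true_charge, readout_noise, n_samples, rng):
    """Average n_samples noisy, non-destructive reads of one pixel."""
    reads = [true_charge + rng.gauss(0.0, readout_noise) for _ in range(n_samples)]
    return sum(reads) / n_samples

rng = random.Random(42)        # fixed seed so the sketch is reproducible
true_charge = 2.0              # a two-electron signal
noise = 3.0                    # single-read noise in electrons (illustrative)

one_read = measure_pixel(true_charge, noise, 1, rng)
skipper_style = measure_pixel(true_charge, noise, 4000, rng)

# A single read can miss by several electrons; 4000 averaged reads land
# within roughly noise / sqrt(4000) ≈ 0.05 electrons of the true charge.
print(one_read, skipper_style)
```

With these numbers, a one- or two-electron signal is invisible in a single read but stands out clearly after thousands of averaged ones.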

“This is a simple idea, but it took us 30 years to get it to work,” Estrada says.

From idea to reality to beyond

A small SENSEI prototype is currently running at Fermilab in a detector hall 385 feet below ground, and it has demonstrated that this detector design will work in the hunt for dark matter.

Skipper CCD technology and SENSEI were brought to life by Laboratory Directed Research and Development (LDRD) funds at Fermilab and Lawrence Berkeley National Laboratory (Berkeley Lab). LDRD programs are intended to provide funding for development of novel, cutting-edge ideas for scientific discovery.

The Fermilab LDRDs were awarded only recently—less than two years ago—but close collaboration between the two laboratories has already yielded SENSEI’s promising design, partially thanks to Berkeley Lab’s previous work in skipper CCD design.

Fermilab LDRD funds allow researchers to test the sensors and develop detectors based on the science, and the Berkeley Lab LDRD funds support the sensor design, which was originally proposed by Berkeley Lab scientist Steve Holland.

“It is the combination of the two LDRDs that really make SENSEI possible,” Estrada says.

Future SENSEI research will also receive a boost thanks to a recent grant from the Heising-Simons Foundation.

“SENSEI is very cool, but what’s really impressive is that the skipper CCD will allow the SENSEI science and a lot of other applications,” Estrada says. “Astronomical studies are limited by the sensitivity of their experimental measurements, and having sensors without noise is the equivalent of making your telescope bigger—more sensitive.”

SENSEI technology may also be critical in the hunt for a fourth type of neutrino, called the sterile neutrino, which seems to be even more shy than its three notoriously elusive neutrino family members.

A larger SENSEI detector equipped with more skipper CCDs will be deployed within the year. It’s possible it might not detect anything, sending researchers back to the drawing board in the hunt for dark matter. Or SENSEI might finally make contact with dark matter—and that would be SENSEI-tional.

Editor's note: This article is based on an article published by Fermilab.

by Leah Poffenberger at September 15, 2017 07:00 PM

Emily Lakdawalla - The Planetary Society Blog

Cassini: The dying of the light
Cassini is no more. At 10:31 according to its own clock, its thrusters could no longer hold its radio antenna pointed at Earth, and it turned away. A minute later, it vaporized in Saturn’s atmosphere. Its atoms are part of Saturn now.

September 15, 2017 03:52 PM

ZapperZ - Physics and Physicists

Bell's Theorem - The Venn Diagram Paradox
Minute Physics is tackling Bell's theorem, with limited success.



It would have been nice if they had included a description of Malus' Law here, because that is what we knew before QM came around, and that is what we teach students in intro physics.

In any case, I still find it difficult to follow, especially if you didn't pay that much attention to the part when they are doing the counting. They went over this a bit too quickly to let it sink in.
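For anyone who wants to redo the counting at their own pace, here is a sketch of the standard three-angle version of the argument (the specific angles are my choice, not necessarily the video's): any local hidden-variable assignment must satisfy mismatch(a,b) + mismatch(b,c) ≥ mismatch(a,c), while the quantum prediction, sin² of the relative polarizer angle, violates it.

```python
import math

def quantum_mismatch(angle_a, angle_b):
    """QM probability that the two photons' polarization measurements
    disagree: sin^2 of the relative polarizer angle (angles in degrees)."""
    return math.sin(math.radians(angle_a - angle_b)) ** 2

a, b, c = 0.0, 22.5, 45.0      # the usual three polarizer settings

p_ab = quantum_mismatch(a, b)  # ~0.146
p_bc = quantum_mismatch(b, c)  # ~0.146
p_ac = quantum_mismatch(a, c)  # 0.5

# Local hidden variables require p_ab + p_bc >= p_ac; QM violates it.
print(p_ab + p_bc, "vs", p_ac)
```

About 14.6% + 14.6% is less than 50%, which is the whole puzzle in one line.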

Maybe your brain works faster than mine and can keep up.

Zz.

by ZapperZ (noreply@blogger.com) at September 15, 2017 02:22 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Thinking about space and time in Switzerland

This week, I spent a very enjoyable few days in Bern, Switzerland, attending the conference ‘Thinking about Space and Time: 100 Years of Applying and Interpreting General Relativity’. Organised by Claus Beisbart, Tilman Sauer and Christian Wüthrich, the workshop took place at the Faculty of Philosophy at the University of Bern, and focused on the early reception of Einstein’s general theory of relativity and the difficult philosophical questions raised by the theory. The conference website can be found here and the conference programme is here .


The University of Bern, Switzerland

Of course, such studies also have a historical aspect, and I particularly enjoyed talks by noted scholars in the history and philosophy of 20th century science such as Chris Smeenk (‘Status of the Expanding Universe Models’), John Norton (‘The Error that Showed the Way; Einstein’s Path to the Field Equations’), Dennis Lehmkuhl (‘The Interpretation of Vacuum Solutions in Einstein’s Field Equations’), Daniel Kennefick (‘A History of Gravitational Wave Emission’) and Galina Weinstein (‘The Two-Body Problem in General Relativity as a Heuristic Guide to the Einstein-Rosen Bridge and the EPR Argument’). Other highlights were a review of the problem of dark energy (something I’m working on myself at the moment) by astrophysicist Ruth Durrer and back-to-back talks on the so-called black-hole information paradox from physicist Sabine Hossenfelder and philosopher Carina Prunkl. There were also plenty of talks on general relativity such as Claus Kiefer’s recollection of the problems raised at the famous 1955 Bern conference (GR0), and a really interesting talk on Noether’s theorems by Valeriya Chasova.


Walking to the conference through the old city early yesterday morning


Dr Valeriya Chasova giving a talk on Noether’s theorems

My own talk, ‘Historical and Philosophical Aspects of Einstein’s 1917 Model of the Universe’, took place on the first day; the slides are here. (It’s based on our recent review of the Einstein World, which has just appeared in EPJH.) As for the philosophy talks, I don’t share the disdain some physicists have for philosophers. It seems to me that philosophy has a big role to play in understanding what we think we have discovered about space and time, not least in articulating the big questions clearly. After all, Einstein himself had great interest in the works of philosophers, from Ernst Mach to Hans Reichenbach, and there is little question that modern philosophers such as Harvey Brown have made important contributions to relativity studies. Of course, some philosophers are harder to follow than others, but this is also true of mathematical talks on relativity!

The conference finished with a tour of the famous Einstein Haus in Bern. It’s strange walking around the apartment Einstein lived in with Mileva all those years ago; it has been preserved extremely well. The tour included a very nice talk by Professor Hans Ott, President of the Albert Einstein Society, on AE’s work at the patent office, his three great breakthroughs of 1905, and his rise from obscurity to stardom in the years 1905-1909.

Einstein’s old apartment in Bern, a historic site maintained by the Albert Einstein Society

All in all, my favourite sort of conference. A small number of speakers and participants, with plenty of time for Q&A after each talk. I also liked the way the talks took place in a lecture room in the University of Bern, a pleasant walk from the centre of town through the old part of the city (not some bland hotel miles from anywhere). This afternoon, I’m off to visit the University of Zurich and the ETH, and then it’s homeward bound.

Update

I had a very nice day being shown around ETH Zurich, where Einstein studied as a student.

 

Imagine taking a mountain lift from the centre of town to lectures!

by cormac at September 15, 2017 09:41 AM

September 14, 2017

Clifford V. Johnson - Asymptotia

It’s Time for the County Fair!

It's that time of year again. For me, County Fairs have a charmingly old-fashioned quality to them, and I love to visit what might be considered some of the more boring aspects - the various crafts on display (shelves of pots of jam, pies and cakes, knitted and crocheted items, and so forth), and the old games (hitting things with hammers, etc.). And of course sampling a tiny bit of the terrible (but tasty) foods you get to eat!

I have a story (told within another story) in my forthcoming book that takes place at a fair (that illustrates an interesting scientific idea - but not one you'd guess at all, I'll bet), and two years ago I went location scouting at the LA County Fair to get reference material for some of the various drawings I did for [...] Click to continue reading this post

The post It’s Time for the County Fair! appeared first on Asymptotia.

by Clifford at September 14, 2017 07:23 PM

September 12, 2017

Tommaso Dorigo - Scientificblogging

CMS Reports Evidence For Higgs Decays To B-Quark Pairs
Another chapter in the saga of the search for the elusive, but dominant, decay mode of the Higgs boson was reported by the CMS collaboration last month. This is one of those specific sub-fields of research where a hard competition arises over the answer to a relatively minor scientific question. That the Higgs boson couples to b-quarks is already indirectly well demonstrated by a number of other measurements - its coupling to (third generation) quarks being demonstrated by its production rate, for example. Yet, being the first ones to "observe" the H->bb decay is a coveted goal.


by Tommaso Dorigo at September 12, 2017 02:59 PM

CERN Bulletin

Staff Association membership is free of charge for the rest of 2017

Starting from September 1st, membership of the Staff Association is free for all new members for the period up to the end of 2017.

This is to allow you to participate in the Staff Council elections.

Indeed, only Employed Members of the Personnel (MPE: staff and fellows) and Associated Members of the Personnel (MPA), who are members of the Staff Association, can:

  • stand for election and become a delegate of the personnel;
  • vote and elect their representatives to the Staff Council.

Do not hesitate any longer; join now!

September 12, 2017 02:09 PM

CERN Bulletin

Kick off of the 2017-2018 school year at the EVE and School of the CERN Staff Association

The Children’s Day-Care Centre (“Espace de Vie Enfantine” - EVE) and School of the CERN Staff Association opened its doors once again to welcome the children, along with the structure’s teaching and administrative staff. The return to school took place gradually and in small groups, to allow quality interaction between children, professionals and parents.

At the EVE (Nursery and Kindergarten) and School, the children have the opportunity to thrive in a privileged environment, rich in cultural diversity, since the families (parents and children) come from many different nationalities.

The teaching staff do their utmost to ensure that the children can become more autonomous and develop their social skills, all the while taking care of their well-being.

This year, several new features are being introduced, for instance, first steps towards English language awareness.

Indeed, the children will get to discover the English language in creative classes together with trained and competent staff. The approach remains playful and does not aim at bilingualism, since the mission of the structure is to promote French language learning, with a view to facilitating the children’s future integration into the local school system.

Moreover, the number of coffee meetings offered to parents will increase this year, allowing us to address together, among other issues, the educational themes of the structure.

This school year 2017-2018, a team of 38 professionals will teach and care for over 120 children between four months and six years old.

We would also like to remind you that children can still be welcomed into the structure during the school year, whether or not their parents work at CERN.

Carole Dargagnon, the Headmistress, and her entire staff are delighted to welcome the children entrusted to their care. They will certainly keep you informed of their activities and adventures in upcoming articles on the life of the structure.

Should you need more information, please do not hesitate to make an appointment with the Headmistress or contact us via the EVE and School website: http://nurseryschool.web.cern.ch/.

And to all, we wish a great school year!

September 12, 2017 01:09 PM

CERN Bulletin

Urgent – 30 September 2017 deadline for “frontalier’s” right to choose a health insurance system - all “frontalier” spouses are concerned

The HR Department has published in the CERN Bulletin a reminder regarding the OBLIGATION for all “frontalier” spouses of CERN Members of Personnel (resident in France and working in Switzerland) to formally choose between the Swiss (e.g. LAMAL) or French (e.g. CMU) health insurance systems, even though they are covered by CHIS through the member of the personnel.

If you still have any questions or doubts about whether you are subject to this obligation, do not hesitate to contact the CHIS Administrator in HR or the Staff Association (Staff.Association@cern.ch).

Finally, please share this information with colleagues who are likely to be concerned as the deadline is fast approaching and the consequences of non-compliance with this rule could be very costly.

September 12, 2017 01:09 PM

Symmetrybreaking - Fermilab/SLAC

Clearing a path to the stars

Astronomers are at the forefront of the fight against light pollution, which can obscure our view of the cosmos.


More than a mile up in the San Gabriel Mountains in Los Angeles County sits the Mount Wilson Observatory, once one of the cornerstones of groundbreaking astronomy. 

Founded in 1904, it was twice home to the largest telescope on the planet, first with its 60-inch telescope in 1908, followed by its 100-inch telescope in 1917. In 1929, Edwin Hubble revolutionized our understanding of the shape of the universe when he discovered on Mt. Wilson that it was expanding. 

But a problem was radiating from below. As the city of Los Angeles grew, so did the reach and brightness of its skyglow, otherwise known as light pollution. The city light overpowered the photons coming from faint, distant objects, making deep-sky cosmology all but impossible. In 1983, the Carnegies, who had owned the observatory since its inception, abandoned Mt. Wilson to build telescopes in Chile instead.

“They decided that if they were going to do greater, more detailed and groundbreaking science in astronomy, they would have to move to a dark place in the world,” says Tom Meneghini, the observatory’s executive director. “They took their money and ran.” 

(Meneghini harbors no hard feelings: “I would have made the same decision,” he says.)

Beyond being a problem for astronomers, light pollution is also known to harm and kill wildlife, waste energy and cause disease in humans around the globe. For their part, astronomers have worked to convince local governments to adopt better lighting ordinances, including requiring the installation of fixtures that prevent light from seeping into the sky. 

Artwork by Corinne Mucha

Many towns and cities are already reexamining their lighting systems as the industry standard shifts from sodium lights to light-emitting diodes, or LEDs, which last longer and use far less energy, providing both cost-saving and environmental benefits. But not all LEDs are created equal. Different bulbs emit different colors, which correspond to different color temperatures: the higher the temperature, the bluer the light. 

The creation of energy-efficient blue LEDs was so profound that its inventors were awarded the 2014 Nobel Prize in Physics. But that blue light turns out to be particularly detrimental to astronomers, for the same reason that the daytime sky is blue: Blue light scatters more than any other color. (Blue lights have also been found to be more harmful to human health than more warmly colored, amber LEDs. In 2016, the American Medical Association issued guidance to minimize blue-rich light, stating that it disrupts circadian rhythms and leads to sleep problems, impaired functioning and other issues.)
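The "blue scatters more" point can be made quantitative with the Rayleigh 1/λ⁴ law (a back-of-envelope sketch, not from the article; the wavelengths are representative values I chose for a blue-rich LED and an amber one):

```python
def rayleigh_relative(lambda_nm, reference_nm=600.0):
    """Rayleigh scattering strength relative to a reference wavelength,
    using the 1/lambda^4 law."""
    return (reference_nm / lambda_nm) ** 4

blue_vs_amber = rayleigh_relative(450.0)  # 450 nm blue vs 600 nm amber
print(blue_vs_amber)  # ~3.16: blue light scatters about three times more
```

Roughly a factor of three in skyglow per emitted photon is why the choice of LED temperature matters so much to observatories.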

The effort to darken the skies has expanded to include a focus on LEDs, as well as an attempt to get ahead of the next industry trend. 

At a January workshop at the annual American Astronomical Society (AAS) meeting, astronomer John Barentine sought to share stories of towns and cities that had successfully battled light pollution. Barentine is a program manager for the International Dark-Sky Association (IDA), a nonprofit founded in 1988 to combat light pollution. He pointed to the city of Phoenix, Arizona. 

Arizona is a leader in reducing light pollution. The state is home to four of the 10 IDA-recognized “Dark Sky Communities” in the United States. “You can stand in the middle of downtown Flagstaff and see the Milky Way,” says James Lowenthal, an astronomy professor at Smith College.

But it’s not immune to light pollution. Arizona’s Grand Canyon National Park is designated by the IDA as an International Dark Sky Park, and yet, on a clear night, Barentine says, the horizon is stained by the glow of Las Vegas 170 miles away.

Artwork by Corinne Mucha

In 2015, Phoenix began testing the replacement of some of its 100,000 or so old streetlights with LEDs, which the city estimated would save $2.8 million a year in energy bills. But they were using high-temperature blue LEDs, which would have bathed the city in a harsh white light. 

Through grassroots work, the local IDA chapter delayed the installation for six months, giving the council time to brush up on light pollution and hear astronomers’ concerns. In the end, the city went beyond IDA’s “best expectations,” Barentine says, opting for lights that burn at a temperature well under IDA’s maximum recommendations. 

“All the way around, it was a success to have an outcome arguably influenced by this really small group of people, maybe 10 people in a city of 2 million,” he says. “People at the workshop found that inspiring.”

Just getting ordinances on the books does not necessarily solve the problem, though. Despite enacting ordinances similar to Phoenix’s, the city of Northampton, Massachusetts, does not have enough building inspectors to enforce them. “We have this great law, but developers just put their lights in the wrong way and nobody does anything about it,” Lowenthal says. 

For many cities, a major part of the challenge of combating light pollution is simply convincing people that it is a problem. This is particularly tricky for kids who have never seen a clear night sky bursting with bright stars and streaked by the glow of the Milky Way, says Connie Walker, a scientist at the National Optical Astronomy Observatory who is also on the board of the IDA. “It’s hard to teach somebody who doesn’t know what they’ve lost,” Walker says.

Walker is focused on making light pollution an innate concern of the next generation, the way campaigns in the 1950s made littering unacceptable to a previous generation of kids. 

In addition to creating interactive light-pollution kits for children, the NOAO operates a citizen-science initiative called Globe at Night, which allows anyone to take measurements of brightness in their area and upload them to a database. To date, Globe at Night has collected more than 160,000 observations from 180 countries. 

It’s already produced success stories. In Norman, Oklahoma, for example, a group of high school students, with the assistance of amateur astronomers, used Globe at Night to map light pollution in their town. They took the data to the city council. Within two years, the town had passed stricter lighting ordinances. 

“Light pollution is foremost on our minds because our observatories are at risk,” Walker says. “We should really be concentrating on the next generation.”

by Laura Dattaro at September 12, 2017 01:00 PM

CERN Bulletin

Get involved, become a CERN staff delegate

In our ECHO No. 275, we announced the upcoming elections to the CERN Staff Council.

In this ECHO, we inform you of the launch of the election process, which begins with the submission of candidatures.

All staff, fellows and associates who are also members of the Staff Association can get involved and submit their candidature between 11 September at 8:00 a.m. and 13 October 2017 at 5:00 p.m.

Do not hesitate any longer: fill in the candidature form and stand for election to the Staff Council, so that you can represent and defend your colleagues among the CERN personnel.

WHAT DOES IT MEAN TO BE A DELEGATE?

Ask several staff delegates and you are sure to get different answers, depending on their sensibilities, their experience and their motivations.

The standard official line is that being a delegate means (http://staff-association.web.cern.ch/fr/organes/elections):

  • putting your skills at the service of all;
  • bringing your vision to the Staff Council;
  • proposing innovative methods and solutions;

but also:

  • acquiring new skills and putting them into practice;
  • training in a variety of fields;
  • working within a diverse team;
  • working in the interest of the Organization.

When asked, one delegate kindly expressed in a few lines what being a delegate means:

  • keeping CERN’s interests in mind;
  • wanting to go beyond the work assigned to us through the hierarchy;
  • agreeing to serve the CERN community, an Organization that is more than an employer to us, since it provides the social protection that states normally grant their citizens: a retirement pension and coverage against illness and accidents;
  • wanting to see CERN from another angle, through different lenses, discovering aspects that are often little known;
  • being aware that CERN’s greatest asset is its personnel in the broadest sense, that is, not only the employed personnel but also the associated personnel, visitors and users, who together form the community of CERN personnel;
  • agreeing to serve our colleagues, helping them with their daily lives and with the difficulties they may encounter, whether in their work, in relations with the hierarchy, or in their social life, the counterpart of life at work;
  • having the opportunity to discuss and debate important subjects that may have an impact on CERN’s long-term future, and to propose new ideas that will shape the future of the Organization.

Very generally, it is first and foremost about serving our community, and in doing so visibly demonstrating one’s attachment and commitment to CERN.

For more information and to access the candidature form: http://staff-association.web.cern.ch/fr/organes/elections

 

Elections Timetable

  • Call for applications: starting with Echo of 11 September (posters, etc.)
  • Friday 13 October, 5 p.m.: closing date for receipt of applications
  • Monday 23 October, noon: voting opens
  • Monday 13 November, noon: voting closes
  • Tuesday 21 November and Tuesday 5 December: publication of the results in Echo
  • Monday 27 and Tuesday 28 November: Staff Association Assizes
  • Tuesday 5 December (afternoon): first meeting of the new Staff Council and election of the new Executive Committee

The voting procedure will be monitored by the Election Committee, which is also in charge of announcing the results in Echo on 16 and 24 November.

 

In accordance with the rules for the elections, the Election Committee has defined the number of seats to be filled in the Electoral Colleges (see table below).

1 Group A: benchmark jobs classified on grade spans 1-2-3, 2-3-4, 3-4-5 and 4-5-6.

2 Group B: benchmark jobs classified on grade spans 6-7-8 and 9-10.

September 12, 2017 09:09 AM

John Baez - Azimuth

Applied Category Theory 2018

There will be a conference on applied category theory!

Applied Category Theory (ACT 2018). School 23–27 April 2018 and conference 30 April–4 May 2018 at the Lorentz Center in Leiden, the Netherlands. Organized by Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford).

The plenary speakers will be:

• Samson Abramsky (Oxford)
• John Baez (UC Riverside)
• Kathryn Hess (EPFL)
• Mehrnoosh Sadrzadeh (Queen Mary)
• David Spivak (MIT)

There will be a lot more to say as this progresses, but for now let me just quote from the conference website:

Applied Category Theory (ACT 2018) is a five-day workshop on applied category theory running from April 30 to May 4 at the Lorentz Center in Leiden, the Netherlands.

Towards an Integrative Science: in this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one scientific discipline can be reused in another. The aim of the workshop is to (1) explore the use of category theory within and across different disciplines, (2) create a more cohesive and collaborative ACT community, especially among early-stage researchers, and (3) accelerate research by outlining common goals and open problems for the field.

While the workshop will host discussions on a wide range of applications of category theory, there will be four special tracks on exciting new developments in the field:

1. Dynamical systems and networks
2. Systems biology
3. Cognition and AI
4. Causality

Accompanying the workshop will be an Adjoint Research School for early-career researchers. This will comprise a 16-week online seminar, followed by a 4-day research meeting at the Lorentz Center in the week prior to ACT 2018. Applications to the school will open prior to October 1, and are due November 1. Admissions will be notified by November 15.

Sincerely,
The organizers

Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford)

We welcome any feedback! Please send comments to this link.

About Applied Category Theory

Category theory is a branch of mathematics originally developed to transport ideas from one branch of mathematics to another, e.g. from topology to algebra. Applied category theory refers to efforts to transport the ideas of category theory from mathematics to other disciplines in science, engineering, and industry.

This site originated from discussions at the Computational Category Theory Workshop at NIST on Sept. 28-29, 2015. It serves to collect and disseminate research, resources, and tools for the development of applied category theory, and hosts a blog for those involved in its study.

The proposal: Towards an Integrative Science

Category theory was developed in the 1940s to translate ideas from one field of mathematics, e.g. topology, to another field of mathematics, e.g. algebra. More recently, category theory has become an unexpectedly useful and economical tool for modeling a range of different disciplines, including programming language theory [10], quantum mechanics [2], systems biology [12], complex networks [5], database theory [7], and dynamical systems [14].

A category consists of a collection of objects together with a collection of maps between those objects, satisfying certain rules. Topologists and geometers use category theory to describe the passage from one mathematical structure to another, while category theorists are also interested in categories for their own sake. In computer science and physics, many types of categories (e.g. topoi or monoidal categories) are used to give a formal semantics of domain-specific phenomena (e.g. automata [3], or regular languages [11], or quantum protocols [2]). In the applied category theory community, a long-articulated vision understands categories as mathematical workspaces for the experimental sciences, similar to how they are used in topology and geometry [13]. This has proved true in certain fields, including computer science and mathematical physics, and we believe that these results can be extended in an exciting direction: we believe that category theory has the potential to bridge specific different fields, and moreover that developments in such fields (e.g. automata) can be transferred successfully into other fields (e.g. systems biology) through category theory. Already, for example, the categorical modeling of quantum processes has helped solve an important open problem in natural language processing [9].
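To make the definition concrete, here is a toy sketch (my illustration, not part of the proposal): objects, named morphisms with sources and targets, identity morphisms, and a composition that only connects matching ends and is associative.

```python
class Morphism:
    """A named arrow with a source and a target object."""
    def __init__(self, name, source, target):
        self.name, self.source, self.target = name, source, target

    def __rshift__(self, other):
        """Compose self ; other (apply self first, then other)."""
        if self.target != other.source:
            raise ValueError("morphisms are not composable")
        return Morphism(self.name + ";" + other.name, self.source, other.target)

def identity(obj):
    """The identity morphism on an object."""
    return Morphism("id_" + str(obj), obj, obj)

f = Morphism("f", "A", "B")
g = Morphism("g", "B", "C")
h = Morphism("h", "C", "D")

left = (f >> g) >> h   # compose in two different orders
right = f >> (g >> h)
print(left.name, right.name)  # both are "f;g;h": associativity holds
```

Categories of interest in applications are of course far richer than this, but the bookkeeping of sources, targets, and composition is exactly what the definition asks for.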

In this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one discipline can be reused in another. Tangibly and in the short-term, we will bring together people from different disciplines in order to write an expository survey paper that grounds the varied research in applied category theory and lays out the parameters of the research program.

In formulating this research program, we are motivated by recent successes where category theory was used to model a wide range of phenomena across many disciplines, e.g. open dynamical systems (including open Markov processes and open chemical reaction networks), entropy and relative entropy [6], and descriptions of computer hardware [8]. Several talks will address some of these new developments. But we are also motivated by an open problem in applied category theory, one which was observed at the most recent workshop in applied category theory (Dagstuhl, Germany, in 2014): “a weakness of semantics/CT is that the definitions play a key role. Having the right definitions makes the theorems trivial, which is the opposite of hard subjects where they have combinatorial proofs of theorems (and simple definitions). […] In general, the audience agrees that people see category theorists only as reconstructing the things they knew already, and that is a disadvantage, because we do not give them a good reason to care enough” [1, pg. 61].

In this workshop, we wish to articulate a natural response to the above: instead of treating the reconstruction as a weakness, we should treat the use of categorical concepts as a natural part of transferring and integrating knowledge across disciplines. The restructuring employed in applied category theory cuts through jargon, helping to elucidate common themes across disciplines. Indeed, the drive for a common language and comparison of similar structures in algebra and topology is what led to the development of category theory in the first place, and recent hints show that this approach is not only useful between mathematical disciplines, but between scientific ones as well. For example, the ‘Rosetta Stone’ of Baez and Stay demonstrates how symmetric monoidal closed categories capture the common structure between logic, computation, and physics [4].

[1] Samson Abramsky, John C. Baez, Fabio Gadducci, and Viktor Winschel. Categorical methods at the crossroads. Report from Dagstuhl Perspectives Workshop 14182, 2014.

[2] Samson Abramsky and Bob Coecke. A categorical semantics of quantum protocols. In Handbook of Quantum Logic and Quantum Structures. Elsevier, Amsterdam, 2009.

[3] Michael A. Arbib and Ernest G. Manes. A categorist’s view of automata and systems. In Ernest G. Manes, editor, Category Theory Applied to Computation and Control. Springer, Berlin, 2005.

[4] John C. Baez. Physics, topology, logic and computation: a Rosetta stone. In Bob Coecke, editor, New Structures for Physics. Springer, Berlin, 2011.

[5] John C. Baez and Brendan Fong. A compositional framework for passive linear networks. arXiv e-prints, 2015.

[6] John C. Baez, Tobias Fritz, and Tom Leinster. A characterization of entropy in terms of information loss. Entropy, 13(11):1945–1957, 2011.

[7] Michael Fleming, Ryan Gunther, and Robert Rosebrugh. A database of categories. Journal of Symbolic Computation, 35(2):127–135, 2003.

[8] Dan R. Ghica and Achim Jung. Categorical semantics of digital circuits. In Ruzica Piskac and Muralidhar Talupur, editors, Proceedings of the 16th Conference on Formal Methods in Computer-Aided Design. Springer, Berlin, 2016.

[9] Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, Stephen Pulman, and Bob Coecke. Reasoning about meaning in natural language with compact closed categories and Frobenius algebras. In Logic and Algebraic Structures in Quantum Computing and Information. Cambridge University Press, Cambridge, 2013.

[10] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93(1):55–92, 1991.

[11] Nicholas Pippenger. Regular languages and Stone duality. Theory of Computing Systems, 30(2):121–134, 1997.

[12] Robert Rosen. The representation of biological systems from the standpoint of the theory of categories. Bulletin of Mathematical Biophysics, 20(4):317–341, 1958.

[13] David I. Spivak. Category Theory for Scientists. MIT Press, Cambridge MA, 2014.

[14] David I. Spivak, Christina Vasilakopoulou, and Patrick Schultz. Dynamical systems and sheaves. arXiv e-prints, 2016.


by John Baez at September 12, 2017 05:32 AM

September 11, 2017

CERN Bulletin

Cine club

Wednesday 13 September 2017 at 20:00
CERN Main Auditorium

Repulsion

Directed by Roman Polanski
UK, 1965, 105 minutes

A sex-repulsed woman who disapproves of her sister's boyfriend sinks into depression and has horrific visions of rape and violence.

Original version English; French subtitles

 

 

Wednesday 20 September 2017 at 20:00
CERN Council Chamber

Rosemary’s Baby

Directed by Roman Polanski
USA, 1968, 137 minutes

A young couple move into an apartment, only to be surrounded by peculiar neighbours and occurrences. When the wife becomes mysteriously pregnant, paranoia over the safety of her unborn child begins to control her life.

Original version English; French subtitles

September 11, 2017 05:09 PM

September 10, 2017

ZapperZ - Physics and Physicists

Is Relativistic Mass Real?
I've mentioned this issue several times here. In one post I linked to a reference, and in another to Lev Okun's paper; both debunk the concept of relativistic mass and explain why it should not be used.

Unfortunately, as a physics instructor, I still see texts teaching this concept, and I have to work around it, giving students a caveat about why they should be cautious about what they are reading. It isn't easy, but I'd rather say something about it than let the students walk out of my class not knowing that this idea of "relativistic mass" is not what it has been popularly made out to be.

So I'm delighted that Don Lincoln has a video addressing this issue as well.



He explains it quite clearly, and also why we still sometimes teach this concept to students in intro classes (unfortunately). Yes, I can understand why, but I still don't like it if it can be avoided without sacrificing the pedagogical reason for it.

It's a good video if you are still wondering what the fuss is all about.

Zz.

by ZapperZ (noreply@blogger.com) at September 10, 2017 02:17 PM

September 09, 2017

Tommaso Dorigo - Scientificblogging

On Turtles, Book Writing, And Overcommitments
Back from vacations, I think I need to report a few random things before I get back into physics blogging. So I'll peruse the science20 article category aptly called "Random Thoughts" for this one occasion.
My summer vacations took place just after a week spent in Ecuador, where I gave 6 hours of lectures on LHC physics and statistics for data analysis to astrophysics PhD students. I did report about that and an eventful hike in the last post. Unfortunately, the first week of my alleged rest was mostly spent fixing a few documents that the European Commission expected to receive by August 31st. As a coordinator of a training network, I have indeed certain obligations that I cannot escape. 

read more

by Tommaso Dorigo at September 09, 2017 03:31 PM

September 08, 2017

Sean Carroll - Preposterous Universe

Joe Polchinski’s Memories, and a Mark Wise Movie

Joe Polchinski, a universally-admired theoretical physicist at the Kavli Institute for Theoretical Physics in Santa Barbara, recently posted a 150-page writeup of his memories of doing research over the years.

Memories of a Theoretical Physicist
Joseph Polchinski

While I was dealing with a brain injury and finding it difficult to work, two friends (Derek Westen, a friend of the KITP, and Steve Shenker, with whom I was recently collaborating), suggested that a new direction might be good. Steve in particular regarded me as a good writer and suggested that I try that. I quickly took to Steve’s suggestion. Having only two bodies of knowledge, myself and physics, I decided to write an autobiography about my development as a theoretical physicist. This is not written for any particular audience, but just to give myself a goal. It will probably have too much physics for a nontechnical reader, and too little for a physicist, but perhaps there will be different things for each. Parts may be tedious. But it is somewhat unique, I think, a blow-by-blow history of where I started and where I got to. Probably the target audience is theoretical physicists, especially young ones, who may enjoy comparing my struggles with their own. Some disclaimers: This is based on my own memories, jogged by the arXiv and Inspire. There will surely be errors and omissions. And note the title: this is about my memories, which will be different for other people. Also, it would not be possible for me to mention all the authors whose work might intersect mine, so this should not be treated as a reference work.

As the piece explains, it’s a bittersweet project, as it was brought about by Joe struggling with a serious illness and finding it difficult to do physics. We all hope he fully recovers and gets back to leading the field in creative directions.

I had the pleasure of spending three years down the hall from Joe when I was a postdoc at the ITP (it didn’t have the “K” at that time). You’ll see my name pop up briefly in his article, sadly in the context of an amusing anecdote rather than an exciting piece of research, since I stupidly spent three years in Santa Barbara without collaborating with any of the brilliant minds on the faculty there. Not sure exactly what I was thinking.

Joe is of course a world-leading theoretical physicist, and his memories give you an idea why, while at the same time being very honest about setbacks and frustrations. His style has never been to jump on a topic while it was hot, but to think deeply about fundamental issues and look for connections others have missed. This approach led him to such breakthroughs as a new understanding of the renormalization group, the discovery of D-branes in string theory, and the possibility of firewalls in black holes. It’s not necessarily a method that would work for everyone, especially because it doesn’t necessarily lead to a lot of papers being written at a young age. (Others who somehow made this style work for them, and somehow survived, include Ken Wilson and Alan Guth.) But the purity and integrity of Joe’s approach to doing science is an example for all of us.

Somehow over the course of 150 pages Joe neglected to mention perhaps his greatest triumph, as a three-time guest blogger (one, two, three). Too modest, I imagine.

His memories make for truly compelling reading, at least for physicists — he’s an excellent stylist and pedagogue, but the intended audience is people who have already heard about the renormalization group. This kind of thoughtful but informal recollection is an invaluable resource, as you get to see not only the polished final product of a physics paper, but the twists and turns of how it came to be, especially the motivations underlying why the scientist chose to think about things one way rather than some other way.

(Idea: there is a wonderful online magazine called The Players’ Tribune, which gives athletes an opportunity to write articles expressing their views and experiences, e.g. the raw feelings after you are traded. It would be great to have something like that for scientists, or for academics more broadly, to write about the experiences [good and bad] of doing research. Young people in the field would find it invaluable, and non-scientists could learn a lot about how science really works.)

You also get to read about many of the interesting friends and colleagues of Joe’s over the years. A prominent one is my current Caltech colleague Mark Wise, a leading physicist in his own right (and someone I was smart enough to collaborate with — with age comes wisdom, or at least more wisdom than you used to have). Joe and Mark got to know each other as postdocs, and have remained friends ever since. When it came time for a scientific gathering to celebrate Joe’s 60th birthday, Mark contributed a home-made movie showing (in inimitable style) how much progress he had made over the years in the activities they had enjoyed together in their relative youth. And now, for the first time, that movie is available to the whole public. It’s seven minutes long, but don’t make the mistake of skipping the blooper reel that accompanies the end credits. Many thanks to Kim Boddy, the former Caltech student who directed and produced this lost masterpiece.

When it came time for his own 60th, Mark being Mark he didn’t want the usual conference, and decided instead to gather physicist friends from over the years and take them to a local ice rink for a bout of curling. (Canadian heritage showing through.) Joe being Joe, this was an invitation he couldn’t resist, and we had a grand old time, free of any truly serious injuries.

We don’t often say it out loud, but one of the special privileges of being in this field is getting to know brilliant and wonderful people, and interacting with them over periods of many years. I owe Joe a lot — even if I wasn’t smart enough to collaborate with him when he was down the hall, I learned an enormous amount from his example, and often wonder how he would think about this or that issue in physics.

 

by Sean Carroll at September 08, 2017 06:18 PM

Symmetrybreaking - Fermilab/SLAC

Detectors in the dirt

A humidity and temperature monitor developed for CMS finds a new home in Lebanon.

A technician from the Optosmart company examines the field in the Bekaa valley in Lebanon.

People who tend crops in Lebanon and people who tend particle detectors on the border of France and Switzerland have a need in common: large-scale humidity and temperature monitoring. A scientist who noticed this connection is working with farmers to try to use a particle physics solution to solve an agricultural problem.

Farmers, especially those in dry areas found in the Middle East, need to produce as much food as possible without using too much water. Scientists on experiments at the Large Hadron Collider want to track the health of their detectors—a sudden change in humidity or temperature can indicate a problem.

To monitor humidity and temperature in their detector, members of the CMS experiment at the LHC developed a fiber-optic system. Fiber optics are wires made from glass that can carry light. Etching small mirrors into the core of a fiber creates a “Bragg grating,” a system that either lets light through or reflects it back, based on its wavelength and the distance between the mirrors.

“Temperature will naturally have an impact on the distance between the mirrors because of the contraction and dilation of the material,” says Martin Gastal, a member of the CMS collaboration at the LHC. “By default, a Bragg grating sensor is a temperature sensor.”
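A rough sketch of the principle (the numbers below are typical textbook values for silica fiber, not CMS specifications, and the thermo-optic change of the refractive index, which in practice also contributes to the temperature response, is ignored): the reflected "Bragg" wavelength is twice the effective index times the mirror spacing, so thermal expansion of the spacing shifts the reflected wavelength.

```python
# Illustrative model of a fiber Bragg grating as a temperature sensor.
# Assumed, not CMS values: n_eff, the grating period, and the expansion
# coefficient of silica. Thermo-optic effects are deliberately ignored.

def bragg_wavelength(n_eff, spacing_nm):
    """Reflected wavelength (nm): lambda_B = 2 * n_eff * spacing."""
    return 2.0 * n_eff * spacing_nm

def spacing_after_heating(spacing_nm, alpha_per_K, delta_T):
    """Grating period after linear thermal expansion by delta_T kelvin."""
    return spacing_nm * (1.0 + alpha_per_K * delta_T)

n_eff = 1.45      # typical effective index of a silica fiber core
spacing = 535.0   # nm, chosen so lambda_B sits near the 1550 nm telecom band
alpha = 0.55e-6   # 1/K, approximate thermal expansion coefficient of silica

cold = bragg_wavelength(n_eff, spacing)
hot = bragg_wavelength(n_eff, spacing_after_heating(spacing, alpha, 50.0))
print(f"Bragg wavelength: {cold:.3f} nm -> {hot:.3f} nm after +50 K")
```

Reading out which wavelength comes back thus tells you the temperature; the humidity-sensitive coating described below works by pulling the mirrors apart mechanically instead of thermally.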

Scientists at the University of Sannio and INFN Naples developed a material for the CMS experiment that could turn the temperature sensors into humidity monitors as well. The material expands when it comes into contact with water, and the expansion pulls the mirrors apart. The sensors were tested by a team from the Experimental Physics Department at CERN.

In December 2015, Lebanon signed an International Cooperation Agreement with CERN, and the Lebanese University joined CMS. As Professor Haitham Zaraket, a theoretical physicist at the Lebanese University and member of the CMS experiment, recalls, they picked fiber optic monitoring from a list of CMS projects for one of their engineers to work on. Martin then approached them about the possibility of applying the technology elsewhere.

With Lebanon’s water resources under increasing pressure from a growing population and agricultural needs, irrigation control seemed like a natural application. “Agriculture consumes quite a high amount of water, of fresh water, and this is the target of this project,” says Ihab Jomaa, the Department Head of Irrigation and Agrometeorology at the Lebanese Agricultural Research Institute. “We are trying to raise what we call in agriculture lately ‘water productivity.’”

The first step after formally establishing the Fiber Optic Sensor Systems for Irrigation (FOSS4I) collaboration was to make sure that the sensors could work at all in Lebanon’s clay-heavy soil. The Lebanese University shipped 10 kilograms of soil from Lebanon to Naples, where collaborators at University of Sannio adjusted the sensor design to increase the measurement range.

During phase one, which lasted from March to June, 40 of the sensors were used to monitor a small field in Lebanon. It was found that, contrary to the laboratory findings, they could not in practice sense the full range of soil moisture content that they needed to. Based on this feedback, “we are working on a new concept which is not just a simple modification of the initial architecture,” Haitham says. The new design concept is to use fiber optics to monitor an absorbing material planted in the soil rather than having a material wrapped around the fiber.

“We are reinventing the concept,” he says. “This should take some time and hopefully at the end of it we will be able to go for field tests again.” At the same time, they are incorporating parts of phase three, looking for soil parameters such as pesticides, chemicals, and bacterial effects in the soil.

If the new concept is successfully validated, the collaboration will move on to testing more fields and more crops. Research and development always involves setbacks, but the FOSS4I collaboration has taken this one as an opportunity to pivot to a potentially even more powerful technology.

by Jameson O'Reilly at September 08, 2017 04:40 PM

The n-Category Cafe

Postdoc in Applied Category Theory

guest post by Spencer Breiner

One Year Postdoc Position at Carnegie Mellon/NIST

We are seeking an early-career researcher with a background in category theory, functional programming and/or electrical engineering for a one-year post-doctoral position supported by an Early-concept Grant (EAGER) from the NSF’s Systems Science program. The position will be managed through Carnegie Mellon University (PI: Eswaran Subrahmanian), but the position itself will be located at the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, outside Washington, DC.

The project aims to develop a compositional semantics for electrical networks which is suitable for system prediction, analysis and control. This work will extend existing methods for linear circuits (featured on this blog!) to include (i) probabilistic estimates of future consumption and (ii) top-down incentives for load management. We will model a multi-layered system of such “distributed energy resources” including loads and generators (e.g., solar array vs. power plant), different types of resource aggregation (e.g., apartment to apartment building), and across several time scales. We hope to demonstrate that such a system can balance local load and generation in order to minimize expected instability at higher levels of the electrical grid.

This post is available full-time (40 hours/5 days per week) for 12 months, and can begin as early as October 1st.

For more information on this position, please contact Dr. Eswaran Subrahmanian (sub@cmu.edu) or Dr. Spencer Breiner (spencer.breiner@nist.gov).

by john (baez@math.ucr.edu) at September 08, 2017 05:49 AM


September 07, 2017

Matt Strassler - Of Particular Significance

Watch for Auroras

Those of you who remember my post on how to keep track of opportunities to see northern (and southern) lights will be impressed by this image from http://www.swpc.noaa.gov/communities/space-weather-enthusiasts .

The latest space weather overview plot

The top plot shows the number of X-rays (high-energy photons [particles of light]) coming from the sun, and that huge spike in the middle of the plot indicates a very powerful solar flare occurred about 24 hours ago.  It should take about 2 days from the time of the flare for its other effects — the cloud of electrically-charged particles expelled from the Sun’s atmosphere — to arrive at Earth.  The electrically-charged particles are what generate the auroras, when they are directed by Earth’s magnetic field to enter the Earth’s atmosphere near the Earth’s magnetic poles, where they crash into atoms in the upper atmosphere, exciting them and causing them to radiate visible light.
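A quick sanity check of that quoted travel time (my own back-of-envelope, not from the post): a particle cloud crossing one astronomical unit in about two days moves at roughly 870 km/s, which is indeed a plausible speed for a fast coronal mass ejection.

```python
# Back-of-envelope: implied speed of a CME that covers the Earth-Sun
# distance in the ~2 days quoted above.

AU_KM = 1.496e8       # mean Earth-Sun distance in km
travel_hours = 48.0   # the roughly two days quoted in the post

speed_km_s = AU_KM / (travel_hours * 3600.0)
print(f"Implied CME speed: {speed_km_s:.0f} km/s")
```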

The flare was very powerful, but its cloud of particles didn’t head straight for Earth.  We might get only a glancing blow.  So we don’t know how big an effect to expect here on our planet.  All we can do for now is be hopeful, and wait.

In any case, auroras borealis and australis are possible in the next day or so.  Watch for the middle plot to go haywire, and for the bars in the lower plot to jump higher; then you know the time has arrived.


Filed under: Astronomy Tagged: astronomy, auroras

by Matt Strassler at September 07, 2017 04:51 PM

September 05, 2017

Symmetrybreaking - Fermilab/SLAC

What can particles tell us about the cosmos?

The minuscule and the immense can reveal quite a bit about each other.


In particle physics, scientists study the properties of the smallest bits of matter and how they interact. Another branch of physics—astrophysics—creates and tests theories about what’s happening across our vast universe.

While particle physics and astrophysics appear to focus on opposite ends of a spectrum, scientists in the two fields actually depend on one another. Several current lines of inquiry link the very large to the very small.

The seeds of cosmic structure

For one, particle physicists and astrophysicists both ask questions about the growth of the early universe. 

In her office at Stanford University, Eva Silverstein explains her work parsing the mathematical details of the fastest period of that growth, called cosmic inflation. 

“To me, the subject is particularly interesting because you can understand the origin of structure in the universe,” says Silverstein, a professor of physics at Stanford and the Kavli Institute for Particle Astrophysics and Cosmology. “This paradigm known as inflation accounts for the origin of structure in the most simple and beautiful way a physicist can imagine.” 

Scientists think that after the Big Bang, the universe cooled, and particles began to combine into hydrogen atoms. This process released previously trapped photons—elementary particles of light. 

The glow from that light, called the cosmic microwave background, lingers in the sky today. Scientists measure different characteristics of the cosmic microwave background to learn more about what happened in those first moments after the Big Bang.

According to scientists’ models, a pattern that first formed on the subatomic level eventually became the underpinning of the structure of the entire universe. Places that were dense with subatomic particles—or even just virtual fluctuations of subatomic particles—attracted more and more matter. As the universe grew, these areas of density became the locations where galaxies and galaxy clusters formed. The very small grew up to be the very large.

Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

“It’s amazing that we can probe what was going on almost 14 billion years ago,” Silverstein says. “We can’t learn everything that was going on, but we can still learn an incredible amount about the contents and interactions.”

For many scientists, “the urge to trace the history of the universe back to its beginnings is irresistible,” wrote theoretical physicist Stephen Weinberg in his 1977 book The First Three Minutes. The Nobel laureate added, “From the start of modern science in the sixteenth and seventeenth centuries, physicists and astronomers have returned again and again to the problem of the origin of the universe.”

Searching in the dark

Particle physicists and astrophysicists both think about dark matter and dark energy. Astrophysicists want to know what made up the early universe and what makes up our universe today. Particle physicists want to know whether there are undiscovered particles and forces out there for the finding.

“Dark matter makes up most of the matter in the universe, yet no known particles in the Standard Model [of particle physics] have the properties that it should possess,” says Michael Peskin, a professor of theoretical physics at SLAC. “Dark matter should be very weakly interacting, heavy or slow-moving, and stable over the lifetime of the universe.”

There is strong evidence for dark matter through its gravitational effects on ordinary matter in galaxies and clusters. These observations indicate that the universe is made up of roughly 5 percent normal matter, 25 percent dark matter and 70 percent dark energy. But to date, scientists have not directly observed dark energy or dark matter.

“This is really the biggest embarrassment for particle physics,” Peskin says. “However much atomic matter we see in the universe, there’s five times more dark matter, and we have no idea what it is.” 

But scientists have powerful tools to try to understand some of these unknowns. Over the past several years, the number of models of dark matter has been expanding, along with the number of ways to detect it, says Tom Rizzo, a senior scientist at SLAC and head of the theory group.

Some experiments search for direct evidence of a dark matter particle colliding with a matter particle in a detector. Others look for indirect evidence of dark matter particles interfering in other processes or hiding in the cosmic microwave background. If dark matter has the right properties, scientists could potentially create it in a particle accelerator such as the Large Hadron Collider.

Physicists are also actively hunting for signs of dark energy. It is possible to measure the properties of dark energy by observing the motion of clusters of galaxies at the largest distances that we can see in the universe.

“Every time that we learn a new technique to observe the universe, we typically get lots of surprises,” says Marcelle Soares-Santos, a Brandeis University professor and a researcher on the Dark Energy Survey. “And we can capitalize on these new ways of observing the universe to learn more about cosmology and other sides of physics.”

Artwork by Ana Kova

Forces at play

Particle physicists and astrophysicists find their interests also align in the study of gravity. For particle physicists, gravity is the one basic force of nature that the Standard Model does not quite explain. Astrophysicists want to understand the important role gravity played and continues to play in the formation of the universe.

In the Standard Model, each force has what’s called a force-carrier particle or a boson. Electromagnetism has photons. The strong force has gluons. The weak force has W and Z bosons. When particles interact through a force, they exchange these force-carriers, transferring small amounts of information called quanta, which scientists describe through quantum mechanics. 

General relativity explains how the gravitational force works on large scales: Earth pulls on our own bodies, and planetary objects pull on each other. But it is not understood how gravity is transmitted by quantum particles. 

Discovering a subatomic force-carrier particle for gravity would help explain how gravity works on small scales and inform a quantum theory of gravity that would connect general relativity and quantum mechanics. 

Compared to the other fundamental forces, gravity interacts with matter very weakly, but the strength of the interaction quickly becomes larger with higher energies. Theorists predict that at high enough energies, such as those seen in the early universe, quantum gravity effects are as strong as the other forces. Gravity played an essential role in transferring the small-scale pattern of the cosmic microwave background into the large-scale pattern of our universe today.

“Another way that these effects can become important for gravity is if there’s some process that lasts a long time,” Silverstein says. “Even if the energies aren’t as high as they would need to be to be sensitive to effects like quantum gravity instantaneously.” 

Physicists are modeling gravity over lengthy time scales in an effort to reveal these effects.

Our understanding of gravity is also key in the search for dark matter. Some scientists think that dark matter does not actually exist; they say the evidence we’ve found so far is actually just a sign that we don’t fully understand the force of gravity.  

Big ideas, tiny details

Learning more about gravity could tell us about the dark universe, which could also reveal new insight into how structure in the universe first formed. 

Scientists are trying to “close the loop” between particle physics and the early universe, Peskin says. As scientists probe space and go back further in time, they can learn more about the rules that govern physics at high energies, which also tells us something about the smallest components of our world.


Artwork for this article is available as a printable poster.

by Amanda Solliday at September 05, 2017 02:18 PM

September 04, 2017

John Baez - Azimuth

Complex Adaptive Systems (Part 5)

When we design a complex system, we often start with a rough outline and fill in details later, one piece at a time. And if the system is supposed to be adaptive, these details may need to be changed as the system is actually being used!

The use of operads should make this easier. One reason is that an operad typically has more than one algebra.

Remember from Part 3: an operad has operations, which are abstract ways of sticking things together. An algebra makes these operations concrete: it specifies some sets of actual things, and how the operations in the operad get implemented as actual ways to stick these things together.

So, an operad O can have one algebra in which things are described in a bare-bones, simplified way, and another algebra in which things are described in more detail. Indeed it will typically have many algebras, corresponding to many levels of detail, but let’s just think about two for a minute.

When we have a ‘less detailed’ algebra A and a ‘more detailed’ algebra A', they will typically be related by a map

f : A' \to A

which ‘forgets the extra details’. This map should be a ‘homomorphism’ of algebras, but I’ll postpone the definition of that concept.

What we often want to do, when designing a system, is not forget extra detail, but rather add extra detail to some rough specification. There is not always a systematic way to do this. If there is, then we may have a homomorphism

g : A \to A'

going back the other way. This is wonderful, because it lets us automate the process of filling in the details. But we can’t always count on being able to do this—especially not if we want an optimal or even acceptable result. So, often we may have to start with an element of A and search for elements of A' that are mapped to it by f : A' \to A.

Let me give some examples. I’ll take the operad that I described last time, and describe some of its algebras, and homomorphisms between these.

I’ll start with an algebra that has very little detail: its elements will be simple graphs. As the name suggests, these are among the simplest possible ways of thinking about networks. They just look like this:

Then I’ll give an algebra with more detail, where the vertices of our simple graphs are points in the plane. There’s nothing special about the plane: we could replace the plane by any other set, and get another algebra of our operad. For example, we could use the set of points on the surface of the Caribbean Sea, the blue stuff in the rectangle here:

That’s what we might use in a search and rescue operation. The points could represent boats, and the edges could represent communication channels.

Then I’ll give an algebra with even more detail, where two points connected by an edge can’t be too far apart. This would be good for range-limited communication channels.

Then I’ll give an algebra with still more detail, where the locations of the points are functions of time. Now our boats are moving around!

Okay, here we go.

The operad from last time was called O_G. Here G is the network model of simple graphs. The best way to picture an operation of O_G is as a way of sticking together a list of simple graphs to get a new simple graph.

For example, an operation

f \in O_G(3,4,2;9)

is a way of sticking together a simple graph with 3 vertices, one with 4 vertices and one with 2 vertices to get one with 9 vertices. Here’s a picture of such an operation:

Note that this operation is itself a simple graph. An operation in O_G(3,4,2;9) is just a simple graph with 9 vertices, where we have labelled the vertices from 1 to 9.

This operad comes with a very obvious algebra A where the operations do just what I suggested. In this algebra, an element of A(t) is a simple graph with t vertices, listed in order. Here t is any natural number, which I’m calling ‘t’ for ‘type’.

We also need to say how the operations in O_G act on these sets A(t). If we take simple graphs in A(3), A(4), and A(2):

we can use our operation f to stick them together and get this:
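In code, this gluing can be sketched as follows. The encoding is my own, not from the paper: a simple graph on t vertices is a set of 2-element frozensets of {0, …, t−1}, and an operation in O_G(t_1, …, t_k; t) is itself a simple graph on the t output vertices.

```python
# A minimal sketch (my own encoding): a simple graph on t vertices is a
# set of 2-element frozensets of {0, ..., t-1}.

def act(op_edges, inputs):
    """Apply an operation of O_G(t_1, ..., t_k; t) to a list of graphs.

    `op_edges` is the operation, itself a simple graph on the t output
    vertices; `inputs` is a list of (t_i, edge-set) pairs.  The inputs
    are laid side by side (vertex labels shifted by running offsets) and
    the operation's own edges are overlaid on top.
    """
    result = set(op_edges)
    offset = 0
    for t_i, edges in inputs:
        for e in edges:
            result.add(frozenset(v + offset for v in e))
        offset += t_i
    return result

# Glue a triangle, a 4-vertex path, and a single edge into one 9-vertex
# graph; the operation itself adds one cross-edge between vertices 2 and 3.
triangle = {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}
path4    = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
edge2    = {frozenset({0, 1})}
op       = {frozenset({2, 3})}                 # an element of O_G(3,4,2;9)
g = act(op, [(3, triangle), (4, path4), (2, edge2)])
```

The result is a simple graph on 9 vertices: the three input graphs' edges, relabelled, plus the operation's own edge.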

But we can also make up a more interesting algebra of O_G. Let’s call this algebra A'. We’ll let an element of A'(t) be a simple graph with t vertices, listed in order, which are points in the plane.

My previous pictures can be reused to show how operations in O_G act on this new algebra A'. The only difference is that now we treat the vertices literally as points in the plane! Before, you should have been imagining them as abstract points not living anywhere in particular; now they have locations.

Now let’s make up an even more detailed algebra A''.

What if our communication channels are ‘range-limited’? For example, what if two boats can’t communicate if they are more than 100 kilometers apart?

Then we can let an element of A''(t) be a simple graph with t vertices in the plane such that no two vertices connected by an edge have distance > 100.

Now the operations of our operad O_G act in a more interesting way. If we have an operation, and we apply it to elements of our algebra, it ‘tries’ to put in new edges as it did before, but it ‘fails’ for any edge that would have length > 100. In other words, we just leave out any edges that would be too long.

It took me a while to figure this out. At first I thought the result of the operation would need to be undefined whenever we tried to create an edge that violated the length constraint. But in fact it acts in a perfectly well-defined way: we just don’t put in edges that would be too long!
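Here is a sketch of how that action might look in code. The encoding is my own, not from the paper: an element of A''(t) is a list of t plane points together with a set of edges, none longer than 100.

```python
import math

# Sketch of the range-limited algebra A'' (my own encoding): an element
# of A''(t) is a list of t points in the plane plus a set of edges.

def act_range_limited(op_edges, inputs, max_dist=100.0):
    """Glue plane graphs together; requested edges longer than max_dist
    simply fail to appear -- no crash, no undefined result."""
    points, edges = [], set()
    for pts, es in inputs:
        offset = len(points)
        points.extend(pts)
        edges |= {frozenset(v + offset for v in e) for e in es}
    for e in op_edges:
        u, v = tuple(e)
        if math.dist(points[u], points[v]) <= max_dist:
            edges.add(frozenset(e))        # the edge 'takes'
        # otherwise: politely omitted
    return points, edges

channel = {frozenset({0, 1})}              # try to connect boat 0 to boat 1
near = act_range_limited(channel, [([(0.0, 0.0)], set()), ([(50.0, 0.0)], set())])
far  = act_range_limited(channel, [([(0.0, 0.0)], set()), ([(150.0, 0.0)], set())])
```

For the two boats 50 km apart the channel is established; for the two boats 150 km apart it just isn't, and everything else proceeds normally.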

This is good. This means that if you tell two boats to set up a communication channel, and they’re too far apart, you don’t get the ‘blue screen of death’: your setup doesn’t crash and burn. Instead, you just get a polite warning—‘communication channel not established’—and you can proceed.

The nontrivial part is to check that if we do this, we really get an algebra of our operad! There are some laws that must hold in any algebra. But since I haven’t yet described those laws, I won’t check them here. You’ll have to wait for our paper to come out.

Let’s do one more algebra today. For lack of creativity I’ll call it A'''. Now an element of A'''(t) is a time-dependent graph in the plane with t vertices, listed in order. Namely, the positions of the vertices depend on time, and the presence or absence of an edge between two vertices can also depend on time. Furthermore, let’s impose the requirement that any two vertices can only be connected by an edge at times when their distance is ≤ 100.

When I say ‘functions of time’ here, what do I mean by ‘time’? We can model time by some interval [T_1, T_2]. But if you don’t like that, you can change it.

This algebra A''' works more or less like A''. The operations of O_G try to create edges, but these edges only ‘take’ at times when the vertices they connect have distance ≤ 100.

There’s something here you might not like. Our operations can only try to create edges ‘for all times’… and succeed at times when the vertices are close enough. We can’t try to set up a communication channel for a limited amount of time.

But fear not: this is just a limitation in our chosen network model, ‘simple graphs’. With a fancier network model, we’d get a fancier operad, with fancier operations. Right now I’m trying to keep the operad simple (pun not intended), and show you a variety of different algebras.

As you might expect, we have algebra homomorphisms going from more detailed algebras to less detailed ones:

f_T : A''' \to A'', \quad h : A' \to A

The homomorphism h takes a simple graph in the plane and forgets the location of its vertices. The homomorphism f_T depends on a choice of time T \in [T_1, T_2]. For any time T, it takes a time-dependent graph in the plane and evaluates it at that time, getting a graph in the plane (which obeys the distance constraints, since the time-dependent graph obeyed those constraints at any time).
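These two forgetful homomorphisms can be sketched very simply. The encodings are my own: an element of A' is a pair (points, edges), and an element of A''' is a function of time returning such a pair, obeying the ≤ 100 constraint.

```python
# Sketch of the two forgetful homomorphisms (my own encoding).

def h(element):
    """h : A' -> A, forget where the vertices are located."""
    points, edges = element
    return edges

def f_T(time_dependent_graph, T):
    """f_T : A''' -> A'', evaluate a time-dependent graph at time T."""
    return time_dependent_graph(T)

# Two boats drifting apart at 2 km per unit time: the channel between
# them exists exactly while they are within 100 km of each other.
def drifting(T):
    points = [(0.0, 0.0), (2.0 * T, 0.0)]
    edges = {frozenset({0, 1})} if 2.0 * T <= 100.0 else set()
    return points, edges

early = f_T(drifting, 10.0)    # boats 20 km apart
late  = f_T(drifting, 60.0)    # boats 120 km apart
```

Evaluating at an early time gives a graph in the plane with the edge present; at a late time the edge is gone, and in both cases the result obeys the distance constraint, as it must.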

We do not have a homomorphism g: A'' \to A' that takes a simple graph in the plane obeying our distance constraints and forgets about those constraints. There’s a map g sending elements of A'' to elements of A' in this way. But it’s not an algebra homomorphism! The problem is that first trying to connect two graphs with an edge and then applying g may give a different result than first applying g and then connecting two graphs with an edge.
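We can see this failure concretely. In the sketch below (my own encoding, not from the paper), the same gluing operation is performed once in A'', where over-long edges are dropped, and once in A', where the edge is always drawn; the two orders of ‘operate’ and ‘forget the constraints’ disagree.

```python
import math

# Why g : A'' -> A' is not an algebra homomorphism: operating in A'' and
# then forgetting the constraints can differ from forgetting first and
# then operating in A'.  (Encoding of graphs is my own.)

def glue(op_edges, inputs, max_dist=None):
    """Glue plane graphs; if max_dist is set, over-long edges are dropped."""
    points, edges = [], set()
    for pts, es in inputs:
        off = len(points)
        points.extend(pts)
        edges |= {frozenset(v + off for v in e) for e in es}
    for e in op_edges:
        u, v = tuple(e)
        if max_dist is None or math.dist(points[u], points[v]) <= max_dist:
            edges.add(frozenset(e))
    return points, edges

op = {frozenset({0, 1})}                       # try to draw one edge
boats = [([(0.0, 0.0)], set()), ([(150.0, 0.0)], set())]

operate_then_forget = glue(op, boats, max_dist=100.0)[1]   # edge dropped in A''
forget_then_operate = glue(op, boats)[1]                   # edge drawn in A'
```

The two edge sets differ, so the square that a homomorphism would have to make commute fails to commute.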

In short: a single operad has many algebras, which we can use to describe our desired system at different levels of detail. Algebra homomorphisms relate these different levels of detail.

Next time I’ll look at some more interesting algebras of the same operad. For example, there’s one that describes a system of interacting mobile agents, which move around in some specific way, determined by their location and the locations of the agents they’re communicating with.

Even this is just the tip of the iceberg—that is, still a rather low level of detail. We can also introduce stochasticity (that is, randomness). And to go even further, we could switch to a more sophisticated operad, based on a fancier ‘network model’.

But not today.

by John Baez at September 04, 2017 10:22 AM

September 03, 2017

ZapperZ - Physics and Physicists

Rebuilding Quantum Theory
Theorists and philosophers are trying to "rebuild" quantum theory's foundation and axioms. Good luck to them!

Still, this is a rather good article on some of the issues surrounding concepts that still do not sit well with many physicists. Those of us who are in the "Shut up and calculate" camp will leave it up to them to sort things out. We are busy with doing other things.

:)

Zz.

by ZapperZ (noreply@blogger.com) at September 03, 2017 01:49 PM

Lubos Motl - string vacua and pheno

Caring about math of equations and math of solutions
A reader of the Tetragraviton blog named nueww highlighted an interesting footnote on page 126 of Polchinski's memories (arXiv):
Morrison came to UCSB from Duke about ten years ago, with a joint position in math and physics. He plays a unique role in tying these subjects together. He and I have an ongoing friendly dispute about whether I know much math (I claim not). I think that the difference goes back to Susskind’s distinction between the mathematics of the equations and the mathematics of the solutions, where I care only about the former.
David Morrison is a very smart string theorist who was trained as a mathematician. Well, he – and others – weren't just trained as mathematicians. I think that they were born and hardwired to think as mathematicians. The memes in the quote above – invented and promoted by Susskind and Polchinski – seem to crisply demystify the difference between the psychology of a mathematician and the psychology of a theoretical physicist.

First, Joe Polchinski is clearly being way too modest. The amount and depth of fascinating "mathematics of solutions" that he has shown in his papers and pedagogical texts is surely huge, and if you impartially measured Polchinski's mathematical IQ – even one specialized for the mathematics of solutions – it would end up in the top 1% or 0.1% of mankind, to say the least.

But there's still a very true core in Polchinski's words, I think. The difference between his thinking, that of a theoretical physicist, and the thinking of a typical mathematician is significant and it boils down to some very different internal drivers and motivations.


Around page 12 of his memories, Polchinski mentions how much time he spent on chess – he participated in local tournaments, read theoretical books on chess, etc. – but he never surpassed the level of a "good recreational player". He's being modest here as well, I would bet, though again, I am convinced he is not lying. That status differs from his "Grandmaster" status as a theoretical physicist.

I actually think that his focus on the "mathematics of the equations" is the same reason that also explains his relative "underperformance in chess". (I write about Polchinski but these qualitative observations hold for me as well – not quantitatively, of course: I am not comparing myself to Polchinski in the absolute sense.)

While mathematicians and theoretical physicists may write somewhat similar papers, do similar things in them, sometimes define objects, sometimes solve problems and equations, they have a very different idea about the "beef of their work":
A theoretical physicist mostly considers a deep problem as a solved one once he finds the full equations that govern the problem – and the problem is therefore reduced to some more or less mechanical operations, at least in principle. The actual solutions may already be left to less profound thinkers or computers: they are a matter of brute force which is not too interesting.

A mathematician imagines that the bulk of the work and depth is this actual later search for the solutions – that is what he's doing, what he wants to be doing, and what he's good at – and considers the previous search for the right problems and ideas to be either a matter of luck, arbitrary, something that ordinary people may do, or something that has "obvious" answers.
Whenever you order scientific disciplines on a line, you usually push mathematics further toward the "more abstract, unpractical" end of the axis than theoretical physics. But because of the comparison discussed in this article, mathematicians are actually more practical than the theoretical physicists. They are actually imagining that their skills should be used to solve pre-determined problems. Well, that's why you often hear about applied mathematicians. Theoretical physicists can't really be applied because the words theoretical and applied are antonymous.

Theoretical physicists are really focusing on the search for what the right problems should be, on finding the relevant and/or interesting rules of the game. So when I learned the rules of chess, I thought that "most of chess has already been mastered". This is obviously not shared by most people – the fun is only getting started once you learn the rules – but I do think that this attitude of mine is more or less defining for the psyche of a theoretical physicist.

So I think that Joe Polchinski could become a chess grandmaster if the intelligence were the only thing that mattered. But it's some internal difference in motivations that decides otherwise. I think that his subconscious mechanisms tell the rest of his mind that "it's ultimately a waste of mental energy" to do the mechanical operations such as the scanning of the space of possible future moves in chess.

Polchinski himself explains his being "a physics Grandmaster but not a chess Grandmaster" by his less than stellar memory and the fear of irreversible moves. But I think that those aren't the "primary" differences. His memory is being cleaned subconsciously but intentionally and the fear of irreversible moves follows from some kind of theoretical perfectionism which differs from the "let's live" trial-and-error paradigm.

Chess is often counted as a sport, a mind sport of a sort. Many physically oriented people laugh. Chess is a sport? That's funny. If an intellectually oriented young person chose chess as his sport of choice, they would think it's a swindle. Chess is like some kind of mathematics, isn't it? But at the end, I think that they are wrong. Chess is a sport. One becomes good at it by deepening skills that may be classified as a brute force of a sort.

In physical sports, one wants to be strong, fast etc. In chess, one wants to be strong mentally, have a high CPU capacity and memory and some combinations of them. But in both cases, the motivation is sports-like. Well, people like Polchinski or myself don't have enough of this motivation. It looks too egotist, too narrow-minded. I just don't get why I "should" be a better chess player or Olympic sprinter than someone else. What would be better about the world? Needless to say, my chances of achieving the Olympic level in physical sports would be zero, but "somewhat higher yet still low" for mind sports.

But isn't it irrelevant who is the fastest sprinter in the world? He may be faster by one percent than his top competitors. But he may earn 10 times as much for that. Is that fair? Why does someone earn 10 times more for being 1% faster? Needless to say, similar questions may be asked about all other sports – and to a large extent, that includes the mind sports such as chess, too. I am just utterly unimpressed about one man's being 1% faster than another man. Why should it matter? And isn't the careful following of these 1% improvements more boring than the most boring bureaucratic work of a secretary? The whole concept of making someone insanely rich or famous because of these tiny relative differences looks like a sign of the mankind's collective irrationality to me. It doesn't mean that I never watch sports and I never find it fun. It is often fun. But despite the fun, I still rationally realize that this fun is irrational.

Well, I am unimpressed even when one man is 30% faster than another man. They're still comparable. It's still a sign of the mankind's collective irrationality for the first man to earn 1,000 and sometimes 1,000,000 times more in sports than the other one. After all, robots already greatly surpass humans in physical sports as well as mind sports, don't they? So why would humans be so obsessed with some disciplines in which they're not too good even as a species? And think about the sex gap. If sports weren't segregated, women would be earning literally zero as athletes – just because of these 30%-like differences. So the whole income of a rather large group of people – female athletes – depends on a pure sociological convention, the segregation of sexes. Doesn't this dependence on social conventions say something unflattering about the whole idea of professional sports?

What's different about theoretical physics is that the skills, talents, and actual work that top theoretical physicists may display or perform may be greater than other people's skills and work – and not just by 1% or 30%. They may be and they often are larger by many orders of magnitude, sometimes a dozen orders of magnitude. For all practical and most of the impractical purposes, a man familiar with quantum mechanics belongs to a different species than a man who is stuck in the classical thinking. And quantum mechanics is not the only "gap" that separates people into these very different "castes".

Breakthroughs in theoretical physics may change and sometimes do change the rules of the game fundamentally. They're not like improving the fastest 100-meter sprint by a fraction of a percent.

And the difference between the two psychologies isn't just about the magnitude of the breakthrough. It's about its "universal relevance". When you beat another sprinter by 0.02 seconds, you're just making a big change from your viewpoint. From a more objective viewpoint, one African sprinter has just trumped another. (I am assuming that this blog post is being read mostly by African sprinters.) What's the difference? ;-) But the advances in theoretical physics are "big changes" even from an objective viewpoint, even if you don't care about the precise names of the people who make them and the differences between these people.

At the end, I think that the excessive modesty of folks like Polchinski is bad news. Folks like Polchinski are making a huge difference but the "majority opinion in the society" understates the importance of their work – and the skills and talents that are needed for that work – dramatically.

No, the right rules of the game – the fundamental equations and rules of physics in particular – aren't obvious to start with. And no, they won't be found by an average person who is just a little bit lucky. The discoveries of such things are transformative events that decide about all the minor ones.

I must mention that when I was a teenager, I was greatly influenced by (the Czech translation of) a letter that Einstein wrote for Max Planck's 60th birthday:
In the temple of science are many mansions, and various indeed are they that dwell therein and the motives that have led them thither. Many take to science out of a joyful sense of superior intellectual power; science is their own special sport to which they look for vivid experience and the satisfaction of ambition; many others are to be found in the temple who have offered the products of their brains on this altar for purely utilitarian purposes. Were an angel of the Lord to come and drive all the people belonging to these two categories out of the temple, the assemblage would be seriously depleted, but there would still be some men, of both present and past times, left inside. Our Planck is one of them, and that is why we love him.

I am quite aware that we have just now light-heartedly expelled in imagination many excellent men who are largely, perhaps chiefly, responsible for the building of the temple of science; and in many cases our angel would find it a pretty ticklish job to decide. But of one thing I feel sure: if the types we have just expelled were the only types there were, the temple would never have come to be, any more than a forest can grow which consists of nothing but creepers. For these people any sphere of human activity will do, if it comes to a point; whether they become engineers, officers, tradesmen, or scientists depends on circumstances. Now let us have another look at those who have found favor with the angel. Most of them are somewhat odd, uncommunicative, solitary fellows, really less like each other, in spite of these common characteristics, than the hosts of the rejected. What has brought them to the temple? That is a difficult question and no single answer will cover it. To begin with, I believe with Schopenhauer that one of the strongest motives that leads men to art and science is escape from everyday life with...
As you can see, the angel expelled all the superficial people, the athletes, chess players, and careerists of all kinds. After this expulsion, folks like Planck, Einstein, and Polchinski remained in that place. In 1918, Einstein found it both safe and natural to talk about the careerists and athletes of science as if they were weeds or creepers. It's too bad that during the following 99 years, the counterparts, followers, and disciples of Einstein were basically turned into modest guys who aren't proud about what they are and/or who have to hide this pride.

Well, I think that e.g. in the case of Joe, it's more about the hiding. Also, it seems to me that folks like Morrison have been fooled by this superficially modest talk and they still haven't gotten a key point – that Polchinski is actually intrinsically proud about "being weaker in the mathematics of solutions" because he ultimately knows it's a positive trait. So, Dave, if you're telling a physicist like Polchinski that he (Polchinski) is good at mathematics in a similar sense as mathematicians (or you), you are not actually flattering him! ;-)

by Luboš Motl (noreply@blogger.com) at September 03, 2017 06:45 AM

John Baez - Azimuth

Voyager 1

Launched 40 years ago, the Voyagers are our longest-lived and most distant spacecraft. Voyager 2 has reached the edge of the heliosphere, the realm where the solar wind and the Sun’s magnetic field live. Voyager 1 has already left the heliosphere and entered interstellar space! A new movie, The Farthest, celebrates the Voyagers’ journey toward the stars:

What has Voyager 1 been doing lately? I’ll skip its amazing exploration of the Solar System….

Leaving the realm of planets

On February 14, 1990, Voyager 1 took the first ever ‘family portrait’ of the Solar System as seen from outside. This includes the famous image of planet Earth known as the Pale Blue Dot:

Soon afterwards, its cameras were deactivated to conserve power and computer resources. The camera software has been removed from the spacecraft, so it would now be hard to get it working again. And here on Earth, the software for reading these images is no longer available!

On February 17, 1998, Voyager 1 reached a distance of 69 AU from the Sun — 69 times farther from the Sun than we are. At that moment it overtook Pioneer 10 as the most distant spacecraft from Earth! Traveling at about 17 kilometers per second, it was moving away from the Sun faster than any other spacecraft. It still is.

That’s more than 500 million kilometers per year — hard to comprehend. I find it easier to think about it this way: it’s about 3.6 AU per year. That’s really fast… but not if you’re trying to reach other stars. At that rate it takes almost 18,000 years just to go one light-year.
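A quick back-of-envelope check of these figures, taking Voyager 1 at roughly 17 kilometers per second (the exact round numbers you get depend on the speed you plug in):

```python
# Back-of-envelope check of Voyager 1's speed in various units.
KM_PER_AU = 1.495978707e8          # kilometers per astronomical unit
AU_PER_LY = 63241.077              # astronomical units per light-year
SECONDS_PER_YEAR = 3.15576e7       # Julian year in seconds

v_km_s = 17.0                      # Voyager 1's speed, roughly
km_per_year = v_km_s * SECONDS_PER_YEAR      # about 540 million km/year
au_per_year = km_per_year / KM_PER_AU        # about 3.6 AU/year
years_per_ly = AU_PER_LY / au_per_year       # roughly 17,600 years/light-year
```

So one light-year takes on the order of twenty millennia at this speed.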

Termination shock

As Voyager 1 headed for interstellar space, its instruments continued to study the Solar System. Scientists at the Johns Hopkins University said that Voyager 1 entered the termination shock in February 2003. This is a bit like a ‘sonic boom’, but in reverse: it’s the place where the solar wind drops to below the speed of sound. Yes, sound can move through the solar wind, but only sound with extremely long wavelengths — nothing you can hear.

Some other scientists expressed doubt about this, and the issue wasn’t resolved until other data became available, since Voyager 1’s solar-wind detector had stopped working in 1990. This failure meant that termination shock detection had to be inferred from the other instruments on board. We now think that Voyager 1 reached the termination shock on December 15, 2004 — at a distance of 94 AU from the Sun.

Heliosheath

In May 2005, a NASA press release said that Voyager 1 had reached the heliosheath. This is a bubble of stagnant solar wind, moving below the speed of sound. It’s outside the termination shock but inside the heliopause, where the interstellar wind crashes against the solar wind.

On March 31, 2006, amateur radio operators in Germany tracked and received radio waves from Voyager 1 using a 20-meter dish. They checked their data against data from the Deep Space Network station in Madrid, Spain and yes — it matched. This was the first amateur tracking of Voyager 1!

On December 13, 2010, the Low Energy Charged Particle device aboard Voyager 1 showed that the spacecraft had passed the point where the solar wind flows away from the Sun. At this point the solar wind seems to turn sideways, due to the push of the interstellar wind. On this date, the spacecraft was approximately 17.3 billion kilometers from the Sun, or 116 AU.

In March 2011, Voyager 1 was commanded to change its orientation to measure the sideways motion of the solar wind. How? I don’t know. Its solar wind detector was broken.

But anyway, a test roll done in February had confirmed the spacecraft’s ability to maneuver and reorient itself. So, in March it rotated 70 degrees counterclockwise with respect to Earth to detect the solar wind. This was the first time the spacecraft had done any major maneuvering since the family portrait photograph of the planets was taken in 1990.

After the first roll the spacecraft had no problem in reorienting itself with Alpha Centauri, Voyager 1’s guide star, and it resumed sending transmissions back to Earth.

On December 1, 2011, it was announced that Voyager 1 had detected the first Lyman-alpha radiation originating from the Milky Way galaxy. Lyman-alpha radiation had previously been detected from other galaxies, but because of interference from the Sun, the radiation from the Milky Way was not detectable.

Puzzle: What the heck is Lyman-alpha radiation?

On December 5, 2011, Voyager 1 saw that the Solar System’s magnetic field had doubled in strength, basically because it was getting compressed by the pressure of the interstellar wind. Energetic particles originating in the Solar System declined by nearly half, while the detection of high-energy electrons from outside increased 100-fold.

Heliopause and beyond

In June 2012, NASA announced that the probe was detecting even more charged particles from interstellar space. This meant that it was getting close to the heliopause: the place where the gas of interstellar space crashes into the solar wind.

Voyager 1 actually crossed the heliopause in August 2012, although it took another year to confirm this. It was 121 AU from the Sun.

What’s next?

In about 300 years Voyager 1 will reach the Oort cloud, the region of frozen comets. It will take 30,000 years to pass through the Oort cloud. Though it is not heading towards any particular star, in about 40,000 years it will pass within 1.6 light-years of the star Gliese 445.

NASA says:

The Voyagers are destined — perhaps eternally —
to wander the Milky Way.

That’s an exaggeration. The Milky Way will not last forever. In just 3.85 billion years, before our Sun becomes a red giant, the Andromeda galaxy will collide with the Milky Way. In just 100 trillion years, all the stars in the Milky Way will burn out. And in just 10 quintillion years, the Milky Way will have disintegrated, with all the dead stars either falling into black holes or being flung off into intergalactic space.

But still: the Voyagers’ journeys are just beginning. Let’s wish them a happy 40th birthday!

My story here is adapted from this Wikipedia article:

• Wikipedia, Voyager 1.

You can download PDFs of posters commemorating the Voyagers here:

• NASA, NASA and iconic museum honor Voyager spacecraft 40th anniversary, August 30, 2017.


by John Baez at September 03, 2017 06:03 AM

September 01, 2017

Lubos Motl - string vacua and pheno

Joe Polchinski's memories
While Joseph Polchinski was dealing with a brain surgery – this phrase sounds much nicer than the ugly C-word – he found it an order of magnitude more difficult to work. Steve Shenker and Derek Westen noticed that Joe's writing was crisp and excellent, so he could do what a string theorist may always do even when he's temporarily reduced to 10% of his mental capacity – become a writer.

Polchinski just published these
Memories of a Theoretical Physicist
I sincerely wish that with hindsight, the timing of these memories will be considered utterly non-essential!

If you start to read the memories, you will realize that you have already heard about the beginning. Joe left his home in Tucson, Arizona, for some Californian grass.

His paternal ancestors came to the States around 1870 – from a land between Poland and Germany (the surname Polczynski is surely Polish, [it is a masculine adjective] meaning "originating from Połczyno near Gdańsk") and from Ireland. His mother ended up as a German American. His parents were unscientific (I know that background, too), but Joey was keen on science already as a kid. How and Why books, Sputnik, Asimov, advanced algebra, recreational chess (I would probably never get above that level, either!), telescope.

Joe came to Caltech in the early 1970s and met three remarkable people, Richard Feynman, Kip Thorne, and Bill Zajc (yes, you know him from TRF). Bill has read Joe's memories and you can learn a lot about him from Joe's memories. Security, pranks, Feynman, Thorne. Serious physics learning in the sophomore year. QCD is being born at that time. Moving to Berkeley. Isidor Singer. Wife Dorothy. SLAC, Stanford, supersymmetry and its breaking, Susskind, D-terms.

Harvard, Mark Wise, Sidney Coleman, kids who run the circus, SUGRA, renormalization, monopoles, phenomenology. Austin, Weinberg, and the first superstring revolution explodes. In Summer 1988, Polchinski would "realize that he would never be a great scientist". As you can see, a big mistake! Fun with T-duality and the cosmological constant, Joe's students, from Austin to Santa Barbara. D-branes and orientifolds of the 1990s. Information loss, Strings 1995, D-branes.

Discretuum, anthropic stuff, plan to have no typos in his book. Joe kindly says that I sent him at least 200 errata, I believe that the number is just around 128. Proofs of finiteness of string theory. AdS/CFT. Matt Strassler and Raphael Bousso get special sections. 21st century, after the end of physics (1996). Centennial talks for founders of quantum mechanics. Cosmic strings may be out there. Interactions with Lee Smolin. Bubbles of nothing, firewalls.

The annoying "brane" cancer diagnosis in December 2015 – I knew about it with some vagueness and uncertainty. Please, Joe, never ever give up.

I think that those of you who are enthusiastic readers will love Joe's memories.

Related to firewalls: Rik van Breukelen and Kyriakos Papadodimas have released a new paper Quantum teleportation through time-shifted AdS wormholes. They construct double trace deformations optimized for thermofield states shifted by an arbitrary time translation, prove the smooth connection between two CFTs, and therefore the absence of firewalls in this setup.

by Luboš Motl (noreply@blogger.com) at September 01, 2017 01:12 PM

August 31, 2017

Clifford V. Johnson - Asymptotia

Angel’s Flight Lives!

Today marks the day when, after a long closure, the lovely tiny railway called Angel's Flight in downtown Los Angeles re-opens. There is a news piece here for example. It was a common feature of what some called the "Asymptotia Tour", meaning that back in the day, readers of this blog who visited LA and happened to meet me might well be shown this hidden gem of the city. Well, all those years ago (before it closed) I ended up capturing it (or a version of it) on the page as part of the setting for one of my dialogues in my forthcoming book, The Dialogues: Conversations about the Nature of the Universe (MIT Press, 2017). The images above show some fragments of two pages in the book, featuring the railway.

In Spring 2010, I took a sabbatical semester and decided to spend most of it in hiding (in some cities in Europe), telling nobody what [...] Click to continue reading this post

The post Angel’s Flight Lives! appeared first on Asymptotia.

by Clifford at August 31, 2017 11:56 PM

August 30, 2017

Symmetrybreaking - Fermilab/SLAC

Neural networks meet space

Artificial intelligence analyzes gravitational lenses 10 million times faster.

Neurons and Einstein ring

Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks—a form of artificial intelligence—can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods.

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip,” says postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published today in Nature.

Lightning-fast complex analysis

The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster, that’s closer to us. The distortions provide important clues about how mass is distributed in space and how that distribution changes over time – properties linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that’s accelerating the expansion of the universe.

Until now this type of analysis has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models. This can take weeks to months for a single lens.

But with the neural networks, the researchers were able to do the same analysis in a few seconds, which they demonstrated using real images from NASA’s Hubble Space Telescope and simulated ones.

To train the neural networks in what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day. Once trained, the networks were able to analyze new lenses almost instantaneously with a precision that was comparable to traditional analysis methods. In a separate paper, submitted to The Astrophysical Journal Letters, the team reports how these networks can also determine the uncertainties of their analyses.

Grid of nine boxes showing various gravitational lenses

KIPAC researchers used images of strongly lensed galaxies taken with the Hubble Space Telescope to test the performance of neural networks, which promise to speed up complex astrophysical analyses tremendously.

Yashar Hezaveh/Laurence Perreault Levasseur/Phil Marshall/Stanford/SLAC National Accelerator Laboratory; NASA/ESA

Prepared for the data floods of the future

“The neural networks we tested—three publicly available neural nets and one that we developed ourselves—were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy,” says the study’s lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.

This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.

The ability to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion could transform astrophysics in a way that is much needed for future sky surveys that will look deeper into the universe—and produce more data—than ever before.

The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, will provide unparalleled views of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.

“We won’t have enough people to analyze all these data in a timely manner with the traditional methods,” Perreault Levasseur says. “Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.”

Convolutional neural network example with pictures of dogs and features

Scheme of an artificial neural network, with individual computational units organized into hundreds of layers. Each layer searches for certain features in the input image (at left). The last layer provides the result of the analysis. The researchers used particular kinds of neural networks, called convolutional neural networks, in which individual computational units (neurons, gray spheres) of each layer are also organized into 2-D slabs that bundle information about the original image into larger computational units.

Greg Stewart, SLAC National Accelerator Laboratory

A revolutionary approach

Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information.

In the artificial version, the “neurons” are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.
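The convolution-plus-nonlinearity operation at the heart of each such layer is simple to write down. Here is a minimal, generic sketch in plain Python, a toy illustration of the mechanism only, not the actual networks used in the study (the image, kernel and function names are all my own invention):

```python
def convolve2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image and
    take dot products -- the basic operation of one convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

def relu(feature_map):
    """Nonlinearity applied between layers; keeps positive responses."""
    return [[max(0, v) for v in row] for row in feature_map]

# A vertical-edge-detecting kernel applied to a toy 4x4 "image":
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1],
               [-1, 1]]
layer1 = relu(convolve2d(image, edge_kernel))
print(layer1)  # -> [[0, 18, 0], [0, 18, 0], [0, 18, 0]]
```

The strongest response appears exactly where the dark-to-bright edge sits; deeper layers would then convolve this feature map with further kernels, searching for "features within features" as described above.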

“The amazing thing is that neural networks learn by themselves what features to look for,” says KIPAC staff scientist Phil Marshall, a co-author of the paper. “This is comparable to the way small children learn to recognize objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.”

But in this case, Hezaveh says, “It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.”

Although the KIPAC scientists ran their tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they could have done their computations on a laptop or even on a cell phone, they said. In fact, one of the neural networks they tested was designed to work on iPhones.

“Neural nets have been applied to astrophysical problems in the past with mixed outcomes,” says KIPAC faculty member Roger Blandford, who was not a co-author on the paper. “But new algorithms combined with modern graphics processing units, or GPUs, can produce extremely fast and reliable results, as the gravitational lens problem tackled in this paper dramatically demonstrates. There is considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics and other fields.”    

Editor's note: This article originally appeared as a SLAC press release.

by Manuel Gnida at August 30, 2017 05:08 PM

August 29, 2017

Symmetrybreaking - Fermilab/SLAC

The dance of the particles

In collaboration with a scientist, an Iranian dancer is working to communicate the beauty of particle physics through dance.

 


Although CERN physicist Andrea Latina had always been interested in the arts, he had never really thought about dance before. 

While at a local film festival in 2015, he happened upon a flyer that quoted Persian poet Rumi about the “dance of particles.” Curious, he reached out to its author, Iranian dancer and choreographer Sahar Dehghan, to learn more.

Dehghan says that even as a child she was fascinated by both physics and dance. 

When she moved to France at a young age, she started taking dance classes, focusing on a meditative form called Sufi dancing and later concentrating on contemporary dance. But she also kept her fascination with physics, reading books and articles and having conversations with scientists she befriended in Paris as a young adult. 

“I became interested in quantum mechanics and its relation to physics, and I really started experimenting physically in my dance with a lot of these concepts,” she says.

Dehghan and Latina developed a friendship, meeting to chat about physics and dance. 

Virtual particles

Dehghan says that she was inspired by ideas such as the confinement of quarks via the strong force. 

“If you try to separate quarks, this force will be so strong that new particles will be created to prevent separation,” Latina says. “The density of energy is so high that a new pair of quark and antiquark will form so that the new quarks pair up with the original ones, just to avoid there being a single quark isolated in nature.”

In the winter of 2016, Dehghan visited CERN to learn more about its goals and how scientists are working to achieve them. One of the most inspiring things, she says, was seeing thousands of scientists from different backgrounds uniting to further our understanding of the universe.

“There are more than 11,000 people of more than 110 nationalities coming together with a common goal,” she says. “Instead of seeing superficial differences caused by cultural, religious, political or sexual preference, they respect and collaborate with each other, learning from each other for a greater purpose.”

Latina says that conversations with Dehghan gave him insight into physics as well. 

“I’m very enthusiastic about CERN and my work,” he says. “In drawing parallels between ancient philosophies, Sahar reminded me that what we are doing is the same thing humans have been doing for millennia: questioning where we come from, where we are going and what our role in the universe is. She was able to evoke this ancestral wonder and help me rediscover the poetry of what we do at CERN. We are incessantly trying to answer the same questions; we just use different tools and the language of mathematics.” 

Dehghan says she would love to communicate these themes through dance. Through artistic mediums, she says, new ideas can be heard, seen and felt in a deeper, more meaningful way.

“It would be great if we could all see beyond our own illusions into the fascinating particle interactions happening in everything and everyone at all times and the true unity that connects us in this great quantum dance, whirling at all times in rhythm with the music of the entire cosmos,” she says.

She has begun to choreograph a show called WHIRL Quantum Dance. Through scenes in her show, she tries to illustrate concepts such as quantum chromodynamics (with colored lights) or quantum entanglement (with pairs of dancers). She is even trying to create a collision scene with spinning dancers in a large circle representing an accelerator.

“I am not a scientific expert in anything, so I am not trying to teach anyone,” she says. “What I want to do with this show is open some doors for the audience to go out there and search for more and learn not just about quantum and particle physics, but also go out there and physically experiment and see how we’re all connected. 

“Even if I open just one door for one person in the audience to go in that direction, I will have achieved my goal.”

WHIRL: Quantum Dance, which is being presented by Sangram Arts, will premiere in the San Francisco Bay Area at the School of Arts & Culture at Mexican Heritage Plaza on September 22 and 23, with dancers Shahrokh Moshkin Ghalam and Rakesh Sukesh. Dehghan says that she hopes to make a film of the show to tour at different venues in cities around the world.

For more information, visit Dehghan's Facebook page.

by Ali Sundermier at August 29, 2017 05:26 PM

The n-Category Cafe

Schröder Paths and Reverse Bessel Polynomials

I want to show you a combinatorial interpretation of the reverse Bessel polynomials which I learnt from Alan Sokal. The sequence of reverse Bessel polynomials begins as follows.

$$\begin{aligned} \theta_0(R)&=1\\ \theta_1(R)&=R+1\\ \theta_2(R)&=R^2+3R+3\\ \theta_3(R)&=R^3+6R^2+15R+15 \end{aligned}$$

To give you a flavour of the combinatorial interpretation we will prove, you can see that the second reverse Bessel polynomial can be read off the following set of ‘weighted Schröder paths’: multiply the weights together on each path and add up the resulting monomials.

Schroeder paths

In this post I’ll explain how to prove the general result, using a certain result about weighted Dyck paths that I’ll also prove. At the end I’ll leave some further questions for the budding enumerative combinatorialists amongst you.

These reverse Bessel polynomials have their origins in the theory of Bessel functions, but I encountered them in the theory of magnitude: they are key to a formula for the magnitude of an odd dimensional ball which I have just posted on the arXiv.

In that paper I use the combinatorial expression for these Bessel polynomials to prove facts about the magnitude.

Here, to simplify things slightly, I have used the standard reverse Bessel polynomials whereas in my paper I use a minor variant (see below).

I should add that a very similar expression can be given for the ordinary, unreversed Bessel polynomials; you just need a minor modification to the way the weights on the Schröder paths are defined. I will leave that as an exercise.

The reverse Bessel polynomials

The reverse Bessel polynomials have many properties. In particular they satisfy the recursion relation
$$\theta_{i+1}(R)=R^2\theta_{i-1}(R) + (2i+1)\theta_i(R)$$
and $\theta_i(R)$ satisfies the differential equation
$$R\theta_i''(R)-2(R+i)\theta_i'(R)+2i\theta_i(R)=0.$$
There's an explicit formula:
$$\theta_i(R) = \sum_{t=0}^i \frac{(i+t)!}{(i-t)!\,t!\,2^t}R^{i-t}.$$
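Since we have both the recursion and the explicit formula, it is easy to check them against each other. Here is a quick Python sketch (the function names are mine; a polynomial in $R$ is represented as a dict from powers to coefficients):

```python
from math import factorial

def theta_explicit(i):
    """Coefficients of the i-th reverse Bessel polynomial theta_i(R),
    as {power: coefficient}, straight from the explicit formula."""
    return {i - t: factorial(i + t) // (factorial(i - t) * factorial(t) * 2**t)
            for t in range(i + 1)}

def theta_recursive(i):
    """Same polynomials built from the recursion
    theta_{n+1} = R^2 * theta_{n-1} + (2n+1) * theta_n."""
    polys = [{0: 1}, {1: 1, 0: 1}]  # theta_0 = 1, theta_1 = R + 1
    while len(polys) <= i:
        n = len(polys) - 1          # recursion index
        prev, cur = polys[-2], polys[-1]
        nxt = {}
        for p, c in prev.items():   # R^2 * theta_{n-1} shifts powers up by 2
            nxt[p + 2] = nxt.get(p + 2, 0) + c
        for p, c in cur.items():    # (2n+1) * theta_n
            nxt[p] = nxt.get(p, 0) + (2 * n + 1) * c
        polys.append(nxt)
    return polys[i]

for i in range(8):
    assert theta_explicit(i) == theta_recursive(i)
print(theta_explicit(3))  # {3: 1, 2: 6, 1: 15, 0: 15}, i.e. R^3 + 6R^2 + 15R + 15
```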

I’m interested in them because they appear in my formula for the magnitude of odd dimensional balls. To be more precise, in my formula I use the associated Sheffer polynomials, $(\chi_i(R))_{i=0}^\infty$; they are related by $\chi_i(R)=R\theta_{i-1}(R)$, so the coefficients are the same, but just moved around a bit. These polynomials have a similar but slightly more complicated combinatorial interpretation.

In my paper I prove that the magnitude of the $(2p+1)$-dimensional ball of radius $R$ has the following expression:

$$\left|B^{2p+1}_R\right| = \frac{\det[\chi_{i+j+2}(R)]_{i,j=0}^{p}}{(2p+1)!\,R\,\det[\chi_{i+j}(R)]_{i,j=0}^{p}}$$

As each polynomial $\chi_i(R)$ has a path counting interpretation, one can use the rather beautiful Lindström–Gessel–Viennot Lemma to give a path counting interpretation to the determinants in the above formula and find some explicit expression. I will probably blog about this another time. (Fellow host Qiaochu has also blogged about the LGV Lemma.)

Weighted Dyck paths

Before getting on to Bessel polynomials and weighted Schröder paths, we need to look at counting weighted Dyck paths, which are simpler and more classical.

A Dyck path is a path in the lattice $\mathbb{Z}^2$ which starts at $(0,0)$, stays in the upper half plane, ends back on the $x$-axis at $(2i,0)$ and has steps going either diagonally right and up or right and down. The integer $2i$ is called the length of the path. Let $D_i$ be the set of length $2i$ Dyck paths.

For each Dyck path $\sigma$, we will weight each edge going right and down, from $(x,y)$ to $(x+1,y-1)$, by $y$; then we will take $w(\sigma)$, the weight of $\sigma$, to be the product of all the weights on its steps. Here are all five weighted Dyck paths of length six.

Dyck paths

Famously, the number of Dyck paths of length $2i$ is given by the $i$th Catalan number; here, however, we are interested in the number of paths weighted by the weighting(!). If we sum over the weights of each of the above diagrams we get $6+4+2+2+1=15$. Note that this is $5\times 3\times 1$. This is a pattern that holds in general.

Theorem A. (Françon and Viennot) The weighted count of length $2i$ Dyck paths is equal to the double factorial of $2i-1$:
$$\sum_{\sigma\in D_i} w(\sigma) = (2i-1)\cdot(2i-3)\cdot(2i-5)\cdots 3\cdot 1 \eqqcolon (2i-1)!!.$$
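Before the proof, Theorem A is easy to check by brute force for small $i$. The following Python sketch (my own naming) enumerates Dyck paths recursively, weighting each down step by the height it starts from:

```python
def weighted_dyck_count(i):
    """Sum of weights over all Dyck paths of length 2i, where a down
    step from height y to y-1 carries weight y."""
    def walk(steps_left, height):
        if steps_left == 0:
            return 1 if height == 0 else 0
        total = 0
        if height < steps_left:    # up step (must still be able to return)
            total += walk(steps_left - 1, height + 1)
        if height > 0:             # down step, weighted by current height
            total += height * walk(steps_left - 1, height - 1)
        return total
    return walk(2 * i, 0)

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

for i in range(7):
    assert weighted_dyck_count(i) == double_factorial(2 * i - 1)
print([weighted_dyck_count(i) for i in range(5)])  # [1, 1, 3, 15, 105]
```

The $i=3$ value, $15$, matches the sum $6+4+2+2+1$ read off the five diagrams above.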

The following is a nice combinatorial proof of this theorem that I found in a survey paper by Callan. (I was only previously aware of a high-tech proof involving continued fractions and a theorem of Gauss.)

The first thing to note is that the weight of a Dyck path is actually counting something. It is counting the ways of labelling each of the down steps in the diagram by a positive integer less than or equal to the height (i.e. the weight) of that step. We call such a labelling a height labelling. Note that we have no choice of weighting, but we often have a choice of height labelling. Here’s a height labelled Dyck path.

height labelled Dyck path

So the weighted count of Dyck paths of length $2i$ is precisely the number of height labelled Dyck paths of length $2i$:
$$\sum_{\sigma\in D_i} w(\sigma) = \#\{\text{height labelled paths of length } 2i\}.$$

We are going to consider marked Dyck paths, which just means we single out a specific vertex. A path of length $2i$ has $2i+1$ vertices. Thus

$$\#\{\text{height labelled, MARKED paths of length } 2i\} = (2i+1)\times\#\{\text{height labelled paths of length } 2i\}.$$

Hence the theorem will follow by induction if we find a bijection

$$\{\text{height labelled paths of length } 2i\} \;\cong\; \{\text{height labelled, MARKED paths of length } 2i-2\}.$$

Such a bijection can be constructed in the following way. Given a height labelled Dyck path, remove the left-hand step and the first step that has a label of one on it. On each down step between these two deleted steps decrease the label by one. Now join the two separated parts of the path together and mark the vertex at which they are joined. Here is an example of the process.

dyck bijection

Working backwards it is easy to describe the inverse map. And so the theorem is proved.

Schröder paths and reverse Bessel polynomials

In order to give a path theoretic interpretation of reverse Bessel polynomials we will need to use Schröder paths. These are like Dyck paths except we allow a certain kind of flat step.

A Schröder path is a path in the lattice $\mathbb{Z}^2$ which starts at $(0,0)$, stays in the upper half plane, ends back on the $x$-axis at $(2i,0)$ and has steps going either diagonally right and up, diagonally right and down, or horizontally two units to the right. The integer $2i$ is called the length of the path. Let $S_i$ be the set of all length $2i$ Schröder paths.

For each Schröder path $\sigma$, we will weight each edge going right and down, from $(x,y)$ to $(x+1,y-1)$, by $y$, and we will weight each flat edge by the indeterminate $R$. Then we will take $w(\sigma)$, the weight of $\sigma$, to be the product of all the weights on its steps.

Here is the picture of all six length four weighted Schröder paths again.

Schroeder paths

You were asked at the top of this post to check that the sum of the weights equals the second reverse Bessel polynomial. Of course that result generalizes!

The following theorem was shown to me by Alan Sokal. He proved it using continued fraction methods, but these essentially amount to the combinatorial proof I’m about to give.

Theorem B. The weighted count of length $2i$ Schröder paths is equal to the $i$th reverse Bessel polynomial:
$$\sum_{\sigma\in S_i} w(\sigma) = \theta_i(R).$$

The idea is to observe that you can remove the flat steps from a weighted Schröder path to obtain a weighted Dyck path. If a Schröder path has length $2i$ and $t$ upward steps then it has $t$ downward steps and $i-t$ flat steps, so it has a total of $i+t$ steps. This means that there are $\binom{i+t}{i-t}$ length $2i$ Schröder paths with the same underlying length $2t$ Dyck path (we just choose where to insert the flat steps). Let’s write $S^t_i$ for the set of Schröder paths of length $2i$ with $t$ upward steps.
$$\begin{aligned} \sum_{\sigma\in S_i} w(\sigma) &= \sum_{t=0}^{i} \sum_{\sigma\in S^t_i} w(\sigma) = \sum_{t=0}^{i} \binom{i+t}{i-t}\sum_{\sigma'\in D_t} w(\sigma')\,R^{i-t}\\ &= \sum_{t=0}^{i} \binom{i+t}{i-t}(2t-1)!!\,R^{i-t}\\ &= \sum_{t=0}^{i} \frac{(i+t)!}{(i-t)!\,(2t)!}\,\frac{(2t)!}{2^t\,t!}\,R^{i-t}\\ &= \theta_i(R), \end{aligned}$$
where the last equality comes from the formula for $\theta_i(R)$ given at the beginning of the post.

Thus we have the required combinatorial interpretation of the reverse Bessel polynomials.
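As a sanity check, Theorem B can also be verified by brute-force enumeration for small $i$. The sketch below is my own code (polynomials in $R$ are dicts from powers to coefficients): it extends the Dyck-path enumeration by a flat step of weight $R$, and compares against the explicit formula for $\theta_i(R)$.

```python
from math import factorial

def schroder_weight_poly(i):
    """Sum of weights over Schröder paths of length 2i, as {power of R:
    coefficient}.  Down steps from height y weigh y; flat steps weigh R."""
    def walk(steps_left, height):
        if steps_left == 0:
            return {0: 1} if height == 0 else {}
        total = {}
        def add(poly, factor=1, shift=0):
            for p, c in poly.items():
                total[p + shift] = total.get(p + shift, 0) + factor * c
        if height < steps_left:     # up step
            add(walk(steps_left - 1, height + 1))
        if height > 0:              # down step, weight = current height
            add(walk(steps_left - 1, height - 1), factor=height)
        if steps_left >= 2:         # flat step spans 2 units, weight R
            add(walk(steps_left - 2, height), shift=1)
        return total
    return walk(2 * i, 0)

def theta(i):
    """Explicit formula for the reverse Bessel polynomial, for comparison."""
    return {i - t: factorial(i + t) // (factorial(i - t) * factorial(t) * 2**t)
            for t in range(i + 1)}

for i in range(6):
    assert schroder_weight_poly(i) == theta(i)
print(schroder_weight_poly(2))  # {2: 1, 1: 3, 0: 3}, i.e. R^2 + 3R + 3
```

The $i=2$ output is exactly the second reverse Bessel polynomial read off the six Schröder paths pictured above.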

Further questions

The first question that springs to mind for me is whether it is possible to give a bijective proof of Theorem B, similar in style, perhaps (or perhaps not), to the proof given of Theorem A, basically using the recursion relation
$$\theta_{i+1}(R)=R^2\theta_{i-1}(R)+(2i+1)\theta_i(R)$$
rather than the explicit formula for them.

The second question would be whether the differential equation
$$R\theta_i''(R)-2(R+i)\theta_i'(R)+2i\theta_i(R)=0$$
has some sort of combinatorial interpretation in terms of paths.

I’m interested to hear if anyone has any thoughts.

by willerton (S.Willerton@sheffield.ac.uk) at August 29, 2017 02:21 PM

August 26, 2017

Tommaso Dorigo - Scientificblogging

A Narrow Escape From The Cumbre Rucu Volcano

If I am alive, I probably owe it to my current very good physical shape.

That does not mean I narrowly escaped a certain death; rather, it means that if I had been slower there are good chances I would have got hit by lightning, under arduous conditions, at 4300 meters of altitude.

read more

by Tommaso Dorigo at August 26, 2017 09:10 AM

August 24, 2017

Symmetrybreaking - Fermilab/SLAC

Mega-collaborations for scientific discovery

DUNE joins the elite club of physics collaborations with more than 1000 members.

Group photo of the members of the DUNE collaboration

Sometimes it takes a lot of people working together to make discovery possible. More than 7000 scientists, engineers and technicians worked on designing and constructing the Large Hadron Collider at CERN, and thousands of scientists now run each of the LHC’s four major experiments.

Not many experiments garner such numbers. On August 15, the Deep Underground Neutrino Experiment (DUNE) became the latest member of the exclusive clique of particle physics experiments with more than a thousand collaborators.

Meet them all:

Photo of CMS detector
Photo by Maximilien Brice, CERN

4,000+: Compact Muon Solenoid Detector (CMS) Experiment

CMS is one of the two largest experiments at the LHC. It is best known for its role in the discovery of the Higgs boson.

The “C” in CMS stands for compact, but there’s nothing compact about the CMS collaboration. It is one of the largest scientific collaborations in history. More than 4000 people from 200 institutions around the world work on the CMS detector and use its data for research.

About 30 percent of the CMS collaboration hail from US institutions. A remote operations center at the Department of Energy’s Fermi National Accelerator Laboratory in Batavia, Illinois, serves as a base for CMS research in the United States.

Photo: ATLAS detector
Claudia Marcelloni, CERN

3,000+: A Toroidal LHC ApparatuS (ATLAS) Experiment

The ATLAS experiment, the other large experiment responsible for discovering the Higgs boson at the LHC, ranks number two in number of collaborators. The ATLAS collaboration has more than 3000 members from 182 institutions in 38 countries. ATLAS and CMS ask similar questions about the building blocks of the universe, but they look for the answers with different detector designs. 

About 30 percent of the ATLAS collaboration are from institutions in the United States. Brookhaven National Laboratory in Upton, New York, serves as the US host.

2,000+: Linear Collider Collaboration

The Linear Collider Collaboration (LCC) is different from CMS and ATLAS in that the collaboration’s experiment is still a proposed project and has not yet been built. LCC has around 2000 members who are working to develop and build a particle collider that can produce different kinds of collisions than those seen at the LHC.

LCC members are working on two potential linear collider projects: the compact linear collider study (CLIC) at CERN and the International Linear Collider (ILC) in Japan. CLIC and the ILC originally began as separate projects, but the scientists working on both joined forces in 2013.

Either CLIC or the ILC would complement the LHC by colliding electrons and positrons to explore the Higgs particle interactions and the nature of subatomic forces in greater detail.

Photo: ALICE work
Antonio Saba, CERN

1,500+: A Large Ion Collider Experiment (ALICE)

ALICE is part of LHC’s family of particle detectors, and, like ATLAS and CMS, it too has a large, international collaboration, counting 1500 members from 154 physics institutes in 37 countries. Research using ALICE is focused on quarks, the sub-atomic particles that make up protons and neutrons, and the strong force responsible for holding quarks together.

Image: DUNE
Courtesy of Fermilab

1,000+: Deep Underground Neutrino Experiment (DUNE)

The Deep Underground Neutrino Experiment is the newest member of the club. This month, the DUNE collaboration surpassed 1000 collaborators from 30 countries.

From its place a mile beneath the earth at the Sanford Underground Research Facility in South Dakota, DUNE will investigate the behavior of neutrinos, which are invisible, nearly massless particles that rarely interact with other matter. The neutrinos will come from Fermilab, 800 miles away.

Neutrino research could help scientists answer the question of why there is an imbalance between matter and antimatter in the universe. Groundbreaking for DUNE occurred on July 21, and the experiment will start taking data in around 2025.

Honorable mentions

A few notable collaborations have made it close to 1000 but didn’t quite make the list. LHCb, the fourth major detector at LHC, boasts a collaboration 800 strong. Over 700 collaborators work on the Belle II experiment at KEK in Japan, which will begin taking data in 2018, studying the properties of B mesons, particles that contain a bottom quark. The 600-member BaBar collaboration at SLAC National Accelerator Laboratory also studies B mesons. STAR, a detector at Brookhaven National Laboratory that probes the conditions of the early universe, has more than 600 collaborators from 55 institutions. The CDF and DZero collaborations at Fermilab, best known for their co-discovery of the top quark in 1995, had about 700 collaborators at their peak.

by Leah Poffenberger at August 24, 2017 03:45 PM

August 23, 2017

Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs Part 5: When US Labs Competed For Leadership In HEP
This is the fifth and final part of Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" (the beginning of the chapter was omitted since it describes a different story). The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as that of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was to their eyes obviously inferior to the electron-positron linear collider.

read more

by Tommaso Dorigo at August 23, 2017 02:57 PM

August 22, 2017

Symmetrybreaking - Fermilab/SLAC

Expanding the search for dark matter

At a recent meeting, scientists shared ideas for searching for dark matter on the (relative) cheap.

Scientists in search of dark matter

Thirty-one years ago, scientists made their first attempt to find dark matter with a particle detector in a South Dakota mine. 

Since then, researchers have uncovered enough clues to think dark matter makes up approximately 26.8 percent of all the matter and energy in the universe. They think it forms a sort of gravitational scaffolding for the galaxies and galaxy clusters our telescopes do reveal, shaping the structure of our universe while remaining unseen. 

These conclusions are based on indirect evidence such as the behavior of galaxies and galaxy clusters. Direct detection experiments—ones designed to actually sense a dark matter particle pinging off the nucleus of an atom—have yet to find what they’re looking for. Nor has dark matter been seen at the Large Hadron Collider. That invisible, enigmatic material, that Greta Garbo of particle physics, still wants to be alone. 

It could be that researchers are just looking in the wrong place. Much of the search for dark matter has focused on particles called WIMPs, weakly interacting massive particles. But interest in WIMP alternatives has been growing, prompting the development of a variety of small-scale research projects to investigate some of the most promising prospects. 

In March more than 100 scientists met at the University of Maryland for “Cosmic Visions: New Ideas in Dark Matter,” a gathering to take the pulse of the post-WIMP dark matter landscape for the Department of Energy. That pulse was surprisingly strong. Organizers recently published a white paper detailing the results.

The conference came about partly because, “it seemed a good time to get everyone together to see what each experiment was doing, where they reinforced each other and where they did something new,” says Natalia Toro, a theorist at SLAC National Accelerator Laboratory and a member of the Cosmic Visions Scientific Advisory Committee. What she and many other participants didn’t expect, Toro says, was just how many good ideas would be presented. 

Almost 50 experiments in various stages of development were presented during three days of talks, and a similar number of potential experiments were discussed.

Some of the experiments presented would be designed to look for dark matter particles that are lighter than traditional WIMPs, or for the new fundamental forces through which such particles could interact. Others would look for oscillating forces produced by dark matter particles trillions of times lighter than the electron. Still others would look for different dark matter candidates, such as primordial black holes. 

The scientists at the workshop were surprised by how small and relatively inexpensive many of the experiments could be, says Philip Schuster, a particle theorist at SLAC National Accelerator Laboratory.

“‘Small’ and ‘inexpensive’ depend on what technology you’re using, of course,” Schuster says. DOE is prepared to provide funding to the tune of $10 million (still a fraction of the cost of a current WIMP experiment), and many of the experiments could cost in the $1 to $2 million range.

Several factors work together to lessen the cost. For example, advances in detector technology and quantum sensors have made the hardware cheaper. Then there are small detectors, such as the Heavy Photon Search, a dark-sector search at Jefferson Lab, that can be placed at already-existing large facilities. "It's basically a table-top detector, as opposed to CMS and ATLAS at the Large Hadron Collider, which took years to build and weigh as much as a battleship," Schuster says.

Experimentalist Joe Incandela of the University of California, Santa Barbara, one of the coordinators of the Cosmic Visions effort, has a simple explanation for this current explosion of ideas. "There's a good synergy between the technology and interest in dark matter," he says. 

Incandela says he is feeling the synergy himself. He is a former spokesperson for CMS, a battleship-class experiment in which he continues to play an active role while also developing the Light Dark Matter Experiment, which would use a high-resolution silicon-based calorimeter that he originally helped develop for CMS to search for an alternative to WIMPs. 

“It occurred to me that this calorimeter technology could be very useful for low-mass dark matter searches,” he says. “My hope is that, starting soon, and spanning roughly five years, the funding—and not very much is needed—will be available to support experiments that can cover a lot more of the landscape where dark matter may be hiding. It’s very exciting.”


Check out our printable poster about the expanding search for dark matter.

Small Dark Matter Experiments Poster
Artwork by Sandbox Studio, Chicago with Ana Kova

by Lori Ann White at August 22, 2017 02:13 PM

August 20, 2017

Jon Butterworth - Life and Physics

August 18, 2017

Matt Strassler - Of Particular Significance

An Experience of a Lifetime: My 1999 Eclipse Adventure

Back in 1999 I saw a total solar eclipse in Europe, and it was a life-altering experience.  I wrote about it back then, but was never entirely happy with the article.  This week I’ve revised it.  It could still benefit from some editing and revision (comments welcome), but I think it’s now a good read.  It’s full of intellectual observations, but there are powerful emotions too.

If you’re interested, you can read it as a pdf, or just scroll down.

 

 

A Luminescent Darkness: My 1999 Eclipse Adventure

© Matt Strassler 1999

After two years of dreaming, two months of planning, and two hours of packing, I drove to John F. Kennedy airport, took the shuttle to the Air France terminal, and checked in.  I was brimming with excitement. In three days’ time, with a bit of luck, I would witness one of the great spectacles that a human being can experience: a complete, utter and total eclipse of the Sun.

I had missed one eight years earlier. In July 1991, a total solar eclipse crossed over Baja California. I had thought seriously about driving the fourteen hundred miles from the San Francisco area, where I was a graduate student studying theoretical physics, to the very southern tip of the peninsula. But worried about my car’s ill health and scared by rumors of gasoline shortages in Baja, I chickened out. Four of my older colleagues, more worldly and more experienced, and supplied with a more reliable vehicle, drove down together. When they returned, exhilarated, they regaled us with stories of their magical adventure. Hearing their tales, I kicked myself for not going, and had been kicking myself ever since. Life is not so long that such opportunities can be rationalized or procrastinated away.

A total eclipse of the Sun is an event of mythic significance, so rare and extraordinary and unbelievable that it really ought to exist only in ancient legends, in epic poems, and in science fiction stories. There are other types of eclipses — partial and total eclipses of the Moon, in which the Earth blocks sunlight that normally illuminates the Moon, and various eclipses of the Sun in which the Moon blocks sunlight that normally illuminates the Earth. But total solar eclipses are in a class all their own. Only during the brief moments of totality does the Sun vanish altogether, leaving the shocked spectator in a suddenly darkened world, gazing uncomprehendingly at a black disk of nothingness.

Our species relies on daylight. Day is warm; day grows our food; day permits travel with a clear sense of what lies ahead. We are so fearful of the night — of what lurks there unseen, of the sounds that we cannot interpret. Horror films rely on this fear; demons and axe murderers are rarely found walking about in bright sunshine. Dark places are dangerous places; sudden unexpected darkness is worst of all. These are the conventions of cinema, born of our inmost psychology. But the Sun and the Moon are not actors projected on a screen. The terror is real.

It has been said that if the Earth were a member of a federation of a million planets, it would be a famous tourist attraction, because this home of ours would be the only one in the republic with such beautiful eclipses. For our skies are witness to a coincidence truly of cosmic proportions. It is a stunning accident that although the Sun is so immense that it could hold a million Earths, and the Moon so small that dozens could fit inside our planet, these two spheres, the brightest bodies in Earth’s skies, appear the same size. A faraway giant may seem no larger than a nearby child. And this perfect match of their sizes and distances makes our planet’s eclipses truly spectacular, visually and scientifically. They are described by witnesses as a sight of weird and unique beauty, a visual treasure completely unlike anything else a person will ever see, or even imagine.
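The size coincidence described above is easy to check with back-of-the-envelope arithmetic. The sketch below uses standard mean values for the diameters and distances of the Sun and Moon (these numbers are not from the article) and the small-angle approximation:

```python
import math

# Mean diameters and distances in km (standard astronomical values,
# supplied here for illustration; not taken from the article).
SUN_DIAMETER = 1.392e6
SUN_DISTANCE = 1.496e8   # one astronomical unit
MOON_DIAMETER = 3.474e3
MOON_DISTANCE = 3.844e5  # mean Earth-Moon distance

def angular_size_deg(diameter_km, distance_km):
    """Apparent angular diameter in degrees (small-angle approximation)."""
    return math.degrees(diameter_km / distance_km)

sun_deg = angular_size_deg(SUN_DIAMETER, SUN_DISTANCE)
moon_deg = angular_size_deg(MOON_DIAMETER, MOON_DISTANCE)
# Both work out to roughly half a degree, agreeing to within a few percent —
# which is exactly why the Moon can just barely cover the Sun's disk.
```

Because the Moon's orbit is elliptical, its apparent size varies by a few percent around this mean, which is also why some central eclipses are total and others (as the author describes later) are merely annular.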

But total solar eclipses are uncommon, occurring only once every year or two. Even worse, totality only occurs in a narrow band that sweeps across the Earth — often just across its oceans. Only a small fraction of the Earth sees a total eclipse in any century. And so these eclipses are precious; only the lucky, or the devoted, will experience one before they die.

In my own life, I’d certainly been more devoted than lucky. I knew it wasn’t wise to wait for the Moon’s shadow to find me by chance. Instead I was going on a journey to place myself in its path.

The biggest challenge in eclipse-chasing is the logistics. The area in which totality is visible is very long but very narrow. For my trip, in 1999, it was a long strip running west to east all across Europe, but only a hundred miles wide from north to south. A narrow zone crossing heavily populated areas is sure to attract a massive crowd, so finding hotels and transport can be difficult. Furthermore, although eclipses are precisely predictable, governed by the laws of gravity worked out by Isaac Newton himself, weather and human beings are far less dependable.

But I had a well-considered plan. I would travel by train to a small city east of Paris, where I had reserved a rental car. Keeping a close watch on the weather forecast, I would drive on back roads, avoiding clogged highways. I had no hotel reservations. It would have been pointless to make them for the night before the event, since it was well known that everything within two hours drive of the totality zone was booked solid. Moreover, I wanted the flexibility to adjust to the weather and couldn’t know in advance where I’d want to stay. So my idea was that on the night prior to the eclipse, I would drive to a good location in the path of the lunar shadow, and sleep in the back of my car. I had a sleeping bag with me to keep me warm, and enough lightweight clothing for the week — and not much else.

Oh, it was such a good plan, clean and simple, and that’s why my heart had so far to sink and my brain so ludicrous a calamity to contemplate when I checked my wallet, an hour before flight time, and saw a gaping black emptiness where my driver’s license was supposed to be. I was struck dumb. No license meant no car rental; no car meant no flexibility and no place to sleep. Sixteen years of driving and I had never lost it before; why, why, of all times, now, when it was to play a central role in a once-in-a-lifetime adventure?

I didn’t panic. I walked calmly back to the check-in counters, managed to get myself rescheduled for a flight on the following day, drove the three hours back to New Jersey, and started looking. It wasn’t in my car. Nor was it in the pile of unneeded items I’d removed from my wallet. Not in my suitcase, not under my bed, not in my office. As it was Sunday, I couldn’t get a replacement license. Hope dimmed, flickered, and went dark.

Deep breaths. Plan B?

I didn’t have a tent, and couldn’t easily have found one. But I did have a rain poncho, large enough to keep my sleeping bag off the ground. As long as it didn’t rain too hard, I could try, the night before the eclipse, to find a place to camp outdoors; with luck I’d find lodging for the other nights. I doubted this would be legal, but I was willing to take the chance. But what about my suitcase? I couldn’t carry that around with me into the wilderness. Fortunately, I knew a solution. For a year after college, I had studied music in France, and had often gone sightseeing by rail. On those trips I had commonly made use of the ubiquitous lockers at the train stations, leaving some luggage while I explored the nearby town. As for flexibility of location, that was unrecoverable; the big downside of Plan B was that I could no longer adjust to the weather. I’d just have to be lucky. I comforted myself with the thought that the worst that could happen to me would be a week of eating French food.

So the next day, carrying the additional weight of a poncho and an umbrella, but having in compensation discarded all inessential clothing and tourist information, I headed back to the airport, this time by bus. Without further misadventures, I was soon being carried across the Atlantic.

As usual I struggled to nap amid the loud silence of a night flight. But my sleeplessness was rewarded with one of those good omens that makes you think that you must be doing the right thing. As we approached the European coastline, and I gazed sleepily out my window, I suddenly saw a bright glowing light. It was the rising white tip of the thin crescent Moon.

Solar eclipses occur at New Moon, always. This is nothing but simple geometry; the Moon must place itself exactly between the Sun and the Earth to cause an eclipse, and that means the half of the Moon that faces us must be in shadow. (At Full Moon, the opposite is true; the Earth is between the Sun and the Moon, so the half of the Moon that faces us is in full sunlight. That’s when lunar eclipses can occur.) And just before a New Moon, the Moon is close to the Sun’s location in the sky. It becomes visible, as the Earth turns, just before the Sun does, rising as a morning crescent shortly before sunrise. (Similarly, we get an evening crescent just after a New Moon.)

There, out over the vast Atlantic, from a dark ocean of water into a dark sea of stars, rose the delicate thin slip of Luna the lover, on her way to her mystical rendezvous with Sol. Her crescent smiled at me and winked a greeting. I smiled back, and whispered, “see you in two days…” For totality is not merely the only time you can look straight at the Sun and see its crown. It is the only time you can see the New Moon.

We landed in Paris at 6:30 Monday morning, E-day-minus-two. I headed straight to the airport train station, and pored over rail maps and my road maps trying to guess a good location to use as a base. Eventually I chose a medium-sized town with the name Charleville-Mezieres. It was on the northern edge of the totality zone, at the end of a large spoke of the Paris-centered rail system, and was far enough from Paris, Brussels, and all large German towns that I suspected it might escape the worst of the crowds. It would then be easy, the night before the eclipse, to take a train back into the center of the zone, where totality would last the longest.

Two hours later I was in the Paris-East rail station and had purchased my ticket for Charleville-Mezieres. With ninety minutes to wait, I wandered around the station. It was evident that France had gone eclipse-happy. Every magazine had a cover story; every newspaper had a special insert; signs concerning the event were everywhere. Many of the magazines carried free eclipse glasses, with a black opaque metallic material for lenses that only the Sun can penetrate. Warnings against looking at the Sun without them were to be found on every newspaper front page. I soon learned that there had been a dreadful scandal in which a widely distributed shipment of imported glasses was discovered to be dangerously defective, leading the government to make a hurried and desperate attempt to recall them. There were also many leaflets advertising planned events in towns lying in the totality zone, and information about extra trains that would be running. A chaotic rush out of Paris was clearly expected.

Before noon I was on a train heading through the Paris suburbs into the farmlands of the Champagne region. The rocking of the train put me right to sleep, but the shrieking children halfway up the rail car quickly ended my nap. I watched the lovely sunlit French countryside as it rolled by. The Sun was by now well overhead — or rather, the Earth had rotated so that France was nearly facing the Sun head on. Sometimes, when the train banked on a turn, the light nearly blinded me, and I had to close my eyes.

With my eyelids shut, I thought about how I’d managed, over decades, to avoid ever once accidentally staring at the Sun for even a second… and about how almost every animal with eyes manages to do this during its entire life. It’s quite a feat, when you think about it. But it’s essential, of course. The Sun’s ferocious blaze is even worse than it appears, for it contains more than just visible light. It also radiates light too violet for us to see — ultraviolet — which is powerful enough to destroy our vision. Any animal lacking instincts powerful enough to keep its eyes off the Sun will go blind, soon to starve or be eaten. But humans are in danger during solar eclipses, because our intense curiosity can make us ignore our instincts. Many of us will suffer permanent eye damage, not understanding when and how it is safe to look at the Sun… which is almost, but not quite, never.

In fact the only time it is safe to look with the naked eye is during totality, when the Sun’s disk is completely blocked by the New Moon, and the world is dark. Then, and only then, can one see that the Sun is not a sphere, and that it has a sort of atmosphere, immense and usually unseen.

At the heart of the Sun, and source of its awesome power, is its nuclear furnace, nearly thirty million degrees hot and nearly five billion years old. All that heat gradually filters and boils out of the Sun’s core toward its visible surface, which is a mere six thousand degrees… still white-hot. Outside this region is a large irregular halo of material that is normally too dim to see against the blinding disk. The inner part of that halo is called the chromosphere; there, giant eruptions called “prominences” loop outward into space. The outer part of the halo is the “corona”, Latin for “crown.” The opportunity to see the Sun’s corona is one of the main reasons to seek totality.

Still very drowsy, but in a good mood, I arrived in Charleville. Wanting to leave my bags in the station while I looked for a hotel room, I searched for the luggage lockers. After three tiring trips around the station, I asked at a ticket booth. “Oh,” said the woman behind the desk, “we haven’t had them available since the Algerian terrorism of a few years ago.”

I gulped. This threatened plan B, for what was I to do with my luggage on eclipse day? I certainly couldn’t walk out into the French countryside looking for a place to camp while carrying a full suitcase and a sleeping bag! And even the present problem of looking for a hotel would be daunting. The woman behind the desk was sympathetic, but her only suggestion was to try one of the hotels near the station. Since the tourist information office was a mile away, it seemed the only good option, and I lugged my bags across the street.

Here, finally, luck smiled. The very first place I stopped at had a room for that night, reasonably priced and perfectly clean, if spartan. It was also available the night after the eclipse. My choice of Charleville had been wise. Unfortunately, even here, Eclipse Eve — Tuesday evening — was as bad as I imagined. The hotelière assured me that all of Charleville was booked (and my later attempts to find a room, even a last-minute cancellation, proved fruitless). Still, she was happy for me to leave my luggage at the hotel while I tramped through the French countryside. Thus was Plan B saved.

Somewhat relieved, I wandered around the town. Charleville is not unattractive, and the orange sandstone 16th century architecture of its central square is very pleasing to the eye. By dusk I was exhausted and collapsed on my bed. I slept long and deep, and awoke refreshed. I took a short sightseeing trip by train, ate a delicious lunch, and tried one more time to find a room in Charleville for Eclipse Eve. Failing once again, I resolved to camp in the heart of the totality zone.

But where? I had several criteria in mind. For the eclipse, I wanted to be far from any large town or highway, so that streetlights, often automatically triggered by darkness, would not spoil the experience. Also I wanted hills and farmland; I wanted to be at a summit, with no trees nearby, in order to have the best possible view. It didn’t take long to decide on a location. About five miles south of the unassuming town of Rethel, rebuilt after total destruction in the first world war, my map showed a high hill. It seemed perfect.

Fortunately, I learned just in time that this same high hill had attracted the attention of the local authorities, and they had decided to designate this very place the “official viewing site” in the region. A hundred thousand people were expected to descend on Rethel and take shuttles from the town to the “site.” Clearly this was not where I wanted to be!

So instead, when I arrived in Rethel, I walked in another direction. I aimed for an area a few miles west of town, quiet hilly farmland.

Yet again, my luck seemed to be on the wane. By four it was drizzling, and by five it was raining. Darkness would settle at around eight, and I had little time to find a site for unobtrusive camping, much less a dry one. The rain stopped, restarted, hesitated, spat, but refused to go away. An unending mass of rain clouds could be seen heading toward me from the west. I had hoped to use trees for some shelter against rain, but now the trees were drenched and dripping, even worse than the rain itself.

Still completely unsure what I would do, I continued walking into the evening. I must have cut a very odd figure, carrying an open umbrella, a sleeping bag, and a small black backpack. I took a break in a village square, taking shelter at a church’s side door, where I munched on French bread and cheese. Maybe one of these farmers would let me sleep in a dry spot in his barn, I thought to myself. But I still hadn’t reached the hills I was aiming for, so I kept walking.

After another mile, I came to a hilltop with a dirt farm track crossing the road. There, just off the road to the right, was a large piece of farm machinery. And underneath it, a large, flat, sheltered spot. Hideous, but I could sleep there. Since it wasn’t quite nightfall yet and I could see a hill on the other side of the road along the same track, one which looked like it might be good for watching the eclipse, I took a few minutes to explore it. There I found another piece of farm equipment, also with a sheltered underbelly. This one was much further from the road, looked unused, and presumably offered both safer and quieter shelter. It was sitting just off the dirt track in a fallow field. The field was of thick, sticky, almost hard mud, the kind you don’t slip in and which doesn’t ooze but which gloms onto the sides of your shoe.

And so it was that Eclipse Eve found me spreading my poncho in a friendly unknown farmer’s field, twisting my body so as not to hit my head on the metal bars of my shelter, carefully unwrapping my sleeping bag and removing my shoes so as not to cover everything in mud, brushing my teeth in bottled water, and bedding down for the night. The whole scene was so absurd that I found myself sporting a slightly manic grin and giggling. But still, I was satisfied. Despite the odds, I was in the zone at the appointed time; when I awoke the next morning I would be scarcely two miles from my final destination. If the clouds were against me, so be it. I had done my part.

I slept pretty well, considering both my excitement and the uneven ground. At daybreak I was surrounded by fog, but by 8 a.m. the fog was lifting, revealing a few spots of blue sky amid low clouds. My choice of shelter was also confirmed; my sleeping bag was dry, and across the road the other piece of machinery I had considered was already in use.

I packed up and started walking west again. The weather seemed uncertain, with three layers of clouds — low stratus, medium cumulus, and high cirrus — crossing over each other. Blue patches would appear, then close up. I trudged to the base of my chosen hill, then followed another dirt track to the top, where I was graced with a lovely view. The rolling countryside of fertile France stretched before me, blotched here and there with sunshine. Again I had chosen well, better than I realized, as it turned out, for I was not alone on the hill. A Belgian couple had chosen it too — and they had a car…

There I waited. The minutes ticked by. The temperature fluctuated, and the fields changed color, as the Sun played hide and seek. I didn’t need these reminders of the Sun’s importance — that without its heat the Earth would freeze, and without its light, plants would not grow and the cycle of life would quickly end. I thought about how pre-scientific cultures had viewed the Sun. In cultures and religions around the world, the blazing disk has often been attributed divine power and regal authority. And why not? In the past century, we’ve finally learned what the Sun is made from and why it shines. But we are no less in awe than our ancestors, for the Sun is much larger, much older, and much more powerful than most of them imagined.

For a while, I listened to the radio. Crowds were assembling across Europe. Special events — concerts, art shows, contests — were taking place, organized by towns in the zone to coincide with the eclipse. This was hardly surprising. All those tourists had come for totality. But totality is brief, never more than a handful of minutes. It’s the luck of geometry, the details of the orbits of the Earth and Moon, that sets its duration. For my eclipse, the Moon’s shadow was only about a hundred miles wide. Racing along at three thousand miles per hour, it would darken any one location for at most two minutes. Now if a million people are expected to descend on your town for a two-minute event, I suppose it is a good idea to give them something else to do while they wait. And of course, the French cultural establishment loves this kind of opportunity. Multimedia events are their specialty, and they often give commissions to contemporary artists. I was particularly amused to discover later that an old acquaintance of mine — I met him in 1987 at the composers’ entrance exams for the Paris Conservatory — had been commissioned to write an orchestral piece, called “Eclipse,” for the festival in the large city of Reims. It was performed just before the moment of darkness.
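The two-minute figure quoted above follows directly from the round numbers in the text (a shadow about a hundred miles wide, moving at three thousand miles per hour); a quick sanity check:

```python
# Round numbers quoted in the text.
shadow_width_miles = 100.0   # width of the band of totality
shadow_speed_mph = 3000.0    # ground speed of the Moon's shadow

# "Just under a mile per second":
speed_miles_per_sec = shadow_speed_mph / 3600.0   # ~0.83 mi/s

# Maximum time any one spot stays inside the shadow, in minutes:
max_totality_min = shadow_width_miles / shadow_speed_mph * 60.0   # 2.0 min
```

This is the best case, with the observer crossed by the full width of the shadow; anyone off the center line of the band sees a shorter totality.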

Finally, around 11:30, the eclipse began. The Moon nibbled a tiny notch out of the Sun. I looked at it briefly through my eclipse glasses, and felt the first butterflies of anticipation. The Belgian couple, in their late forties, came up to the top of the hill and stood alongside me. They were Flemish, but the man spoke French, and we chatted for a while. It turned out he was a scientist also, and had spent some time in the United States, so we had plenty to talk about. But our discussion kept turning to the clouds, which showed no signs of dissipating. The Sun was often veiled by thin cirrus or completely hidden by thick cumulus. We kept a nervous watch.

Time crawled as the Moon inched across the brilliant disk. It passed the midway point and the Sun became a crescent. With only twenty minutes before totality, my Belgian friends conversed in Dutch. The man turned to me. “We have decided to drive toward that hole in the clouds back to the east,” he said in French. “It’s really not looking so good here. Do you want to come with us?” I paused to think. How far away was that hole? Would we end up back at the town? Would we get caught in traffic? Would we end up somewhere low? What were my chances if I stayed where I was? I hesitated, unsure. If I went with them, I was subject to their whims, not my own. But after looking at the oncoming clouds one more time, I decided my present location was not favorable. I joined them.

We descended the dirt track and turned left onto the road I’d taken so long to walk. It was completely empty. We kept one eye on where we were going and five eyes on the sky. After two miles, the crescent Sun became visible through a large gap in the low clouds. There were still high thin clouds slightly veiling it, but the sky around it was a pale blue. We went a bit further, and then stopped… at the very same dirt track where I had slept the night before. A line of ten or fifteen cars now stretched along it, but there was plenty of room for our vehicle.

By now, with ten minutes to go, the light was beginning to change. When only five percent of the Sun remains, your eye can really tell. The blues become deeper, the whites become milkier, and everything is more subdued. Also it becomes noticeably cooler. I’d seen this light before, in New Mexico in 1994. I had gone there to watch an “annular” eclipse of the Sun. An annular eclipse occurs when the Moon passes directly in front of the Sun but is just a bit too far away from the Earth for its shadow to reach the ground. In such an eclipse, the Moon fails to completely block the Sun; a narrow ringlet, or “annulus”, often called the “ring of fire,” remains visible. That day I watched from a mountain top, site of several telescopes, in nearly clear skies. But imagine the dismay of the spectators as the four-and-a-half minutes of annularity were blocked by a five-minute cloud! Fortunately there was a bright spot. For a brief instant — no more than three seconds — the cloud became thin, and a perfect circle of light shone through, too dim to penetrate eclipse glasses but visible with the naked eye… a veiled, surreal vision.

On the dirt track in the middle of French fields, we started counting down the minutes. There was more and more tension in the air. I put faster film into my camera. The light became still milkier, and as the crescent became a fingernail, all eyes were focused either on the Sun itself or on a small but thick and dangerous-looking cloud heading straight for it. Except mine. I didn’t care if I saw the last dot of sunlight disappear. What I wanted to watch was the coming of Moon-shadow.

One of my motivations for seeking a hill was that I wanted to observe the approach of darkness. Three thousand miles an hour is just under a mile per second, so if one had a view extending out five miles or so, I thought, one could really see the edge coming. I expected it would be much like watching the shadow of a cloud coming toward me, with the darkness sweeping along the ground, only much darker and faster. I looked to the west and waited for the drama to unfold.

And it did, but it was not what I was expecting. Even though observing the shadow is a common thing for eclipse watchers to do, nothing I had ever read about eclipses prepared me in the slightest for what I was about to witness. I’ve never seen it photographed, or even described. Maybe it was an effect of all the clouds around us. Or maybe others, just as I do, find it difficult to convey.

For how can one relate the sight of daylight sliding swiftly, like a sigh, to deep twilight? of the western sky, seen through scattered clouds, changing seamlessly and inexorably from blue to pink to slate gray to the last yellow of sunset? of colors rising up out of the horizon and spreading across the sky like water from a broken dyke flooding onto a field?

I cannot find the right combination of words to capture the sense of being swept up, of being overwhelmed, of being transfixed with awe, as one might be before the summoning of a great wave or a great wind by the command of a god, yet all in utter silence and great beauty. Reliving it as I write this brings a tear. In the end I have nothing to compare it to.

The great metamorphosis passed. The light stabilized. Shaken, I looked up.

And quickly looked away. I had seen a near-disk of darkness, the fuzzy whiteness of the corona, and some bright dots around the disk’s edge, one especially bright where the Sun still clearly shone through. Accidentally I had seen with my naked eyes the “diamond ring,” a moment when the last brilliant drop of Sun and the glistening corona are simultaneously visible. It’s not safe to look at. I glanced again. Still several bright dots. I glanced again. Still there — but the Sun had to be covered by now…

So I looked longer, and realized that the Sun was indeed covered, that those bright dots were there to stay. There it was. The eclipsed Sun, or rather, the dark disk of the New Moon, surrounded by the Sun’s crown, studded at its edge with seven bright pink jewels. It was bizarre, awe-inspiring, a spooky hallucination. It shimmered.

The Sun’s corona didn’t really resemble what I had seen in photographs, and I could immediately see why. The corona looked as though it were made of glistening white wispy hair, billowing outward like a mop of whiskers. It gleamed with a celestial light, a shine resembling that of well-lit tinsel. No camera could capture that glow, no photograph reproduce it.

But the greatest, most delightful surprise was the seven beautiful gems. I knew they had to be the great eruptions on the surface of the Sun: prominences, huge magnetic storms larger than our planet and more violent than anything else in the solar system. However, nobody ever told me they were bright pink! I had always assumed they were orange (silly of me, since the whole Sun looks orange if you look at it through an orange filter, which the photographs always do). They were arranged almost symmetrically around the Sun, with one of them actually well separated from its surface, halfway out into the lovely soft filaments of the corona. I explored them with my binoculars. The colors, the glistening texture, the rich detail: it was a visual delight. The scene was living, vibrant, delicate and soft; by comparison, all the photographs and films seem dry, flat, deadened.

I was surprised at my calm. After the great rush of the shadow, the stasis of totality had caught me off guard.  Around me it was much lighter than I had expected. The sense was of late twilight, with a deep blue-purple sky; yet it was still bright enough to read by. The yellow light of late sunset stretched all the way around the horizon. The planet Venus was visible, but no stars peeked through the clouds. Perhaps longer eclipses have darker skies, a larger Moon-shadow putting daylight further away.

I had scarcely had time to absorb all of this when, just at the halfway point of totality, the dangerous-looking cumulus cloud finally arrived, and blotted out the view. A groan, but only a half-hearted one, emerged from the spectators; after all we’d seen what we’d come to see. I took in the colors emanating from the different parts of the sky, and then looked west again, waiting for the light to return. A thin red glow touched the horizon. I waited. Suddenly the red began to grow furiously. I yelled “Il revient!” — it is returning! — and then watched in awe as the reds became pinks, swarmed over us, turned yellow-white…

And then… it was daylight again. Normality, or a slightly muted version of it. The magical show was over, heavenly love had been consummated, we who had traveled far had been rewarded. The weather had been kind to us. There was a pause as we savored the experience, and waited for our brains to resume functioning. Then congratulations were passed around as people shook hands and hugged each other. I thanked my Belgian friends, who like me were smiling broadly. They offered me a ride back to town. I almost accepted, but stopped short, and instead thanked them again and told them I somehow wanted to be outside for a while longer. We exchanged addresses, said goodbyes, they drove off.

I started retracing my steps from the previous evening. As I walked back to the town of Rethel in the returning sunshine, the immensity of what I had seen began gradually to make its way through my skin into my blood, making me teary-eyed. I thought about myself, a scientist, educated and knowledgeable about the events that had just taken place, and tried to imagine what would have happened to me today if I had not had that knowledge and had found myself, unexpectedly, in the Moon’s shadow.

It was not difficult; I had only to imagine what I would feel if the sky suddenly, without any warning, turned a fiery red instead of blue and began to howl. It would have been a living nightmare. The terror that I would have felt would have penetrated my bones. I would have fallen on my knees in panic; I would have screamed and wept; I would have called on every deity I knew and others I didn’t know for help; I would have despaired; I would have thought death or hell had come; I would have assumed my life was about to end. The two minutes of darkness, filled with the screams and cries of my neighbors, would have been timeless, maddening. When the Sun just as suddenly returned, I would have collapsed onto the ground with relief, profusely and weepingly thanking all of the deities for restoring the world to its former condition, and would have rushed home to relatives and friends, hoping to find some comfort and solace.

I would have sought explanations. I would have been willing to consider anything: dragons eating the Sun, spirits seeking to punish our village or country for its transgressions, evil and spiteful monsters trying to freeze the Earth, gods warning us of terrible things to come in future. But above all, I could never, never have imagined that this brief spine-chilling extinction and transformation of the Sun was a natural phenomenon. Nothing so spectacular and sudden and horrifying could have been the work of mere matter. It would once and for all have convinced me of the existence of creatures greater and more powerful than human beings, if I had previously had any doubt.

And I would have been forever changed. No longer could I have entirely trusted the regularity of days and nights, of seasons, of years. For the rest of my life I would have always found myself glancing at the sky, wanting to make sure that all, for the moment, was well. For if the Sun could suddenly vanish for two minutes, perhaps the next time it could vanish for two hours, or two days… or two centuries. Or forever.

I pondered the impact that eclipses, both solar and lunar, have had throughout human history. They have shaped civilizations. Wars and slaughters were begun and ended on their appearance; they sent ordinary people to their deaths as appeasement sacrifices; new gods and legends were invoked to give meaning to them. The need to predict them, and the coincidences which made their prediction possible, helped give birth to astronomy as a mathematically precise science, in China, in Greece, in modern Europe — developments without which my profession, and even my entire technologically-based culture, might not exist.

It was an hour’s walk to Rethel, but that afternoon it was a long journey. It took me across the globe to nations ancient and distant. By the time I reached the town, I’d communed with my ancestors, reconsidered human history, and examined anew my tiny place in the universe.  If I’d been a bit calm during totality itself, I wasn’t anymore. What I’d seen was gradually filtering, with great potency, into my soul.

I took the train back to Charleville, and slept dreamlessly. The next two days were an opportunity to unwind, to explore, and to eat well. On my last evening I returned to Paris to visit my old haunts. I managed to sneak into the courtyard of the apartment house where I had had a one-room garret up five flights of stairs, with its spartan furnishings and its one window that looked over the roofs of Paris to the Eiffel Tower. I wandered past the old Music Conservatory, since moved to the northeast corner of town, and past the bookstore where I bought so much music. My favorite bakery was still open.

That night I slept in an airport hotel, and the next day flew happily home to the American continent. I never did find my driver’s license.

But psychological closure came already on the day following the eclipse. I spent that day in Laon, a small city perched magnificently atop a rocky hill that rises vertically out of the French plains. I wandered its streets and visited its sights — an attractive church, old houses, pleasant old alleyways, ancient walls and gates. As evening approached I began walking about, looking for a restaurant, and I came to the northwestern edge of town overlooking the new city and the countryside beyond. The clouds had parted, and the Sun, looking large and dull red, was low in the sky. I leaned on the city wall and watched as the turning Earth carried me, and Laon, and all of France, at hundreds of miles an hour, intent on placing itself between me and the Sun. Yet another type of solar eclipse, one we call “sunset.”

The ruddy disk touched the horizon. I remembered the wispy white mane and the brilliant pink jewels. In my mind the Sun had always been grand and powerful, life-giver and taker, essential and dangerous. It could blind, burn, and kill.  I respected it, was impressed and awed by it, gave thanks for it, swore at it, feared it. But in the strange light of totality, I had seen beyond its unforgiving, blazing sphere, and glimpsed a softer side of the Sun. With its feathery hair blowing in a dark sky, it had seemed delicate, even vulnerable. It is, I thought to myself, as mortal as we.

The distant French hills rose across its face. As it waned, I found myself feeling a warmth, even a tenderness — affection for this giant glowing ball of hydrogen, this protector of our planet, this lonely beacon in a vast emptiness… the only star you and I will ever know.


Filed under: Astronomy, History of Science Tagged: astronomy, earth, eclipse, moon, space, sun

by Matt Strassler at August 18, 2017 12:30 PM

August 16, 2017

Symmetrybreaking - Fermilab/SLAC

QuarkNet takes on solar eclipse science

High school students nationwide will study the effects of the solar eclipse on cosmic rays.

Group photo of students and teachers involved in QuarkNet

While most people are marveling at Monday’s eclipse, a group of researchers will be measuring its effects on cosmic rays—particles from space that collide with the earth’s atmosphere to produce muons, heavy cousins of the electron. But these researchers aren’t the usual PhD-holding suspects: They’re still in high school.

More than 25 groups of high school students and teachers nationwide will use small-scale detectors to find out whether the number of cosmic rays raining down on Earth changes during an eclipse. Although the eclipse event will last only three hours, this student experiment has been a months-long collaboration.

The cosmic ray detectors used for this experiment were provided as kits by QuarkNet, an outreach program that gives teachers and students opportunities to try their hands at high-energy physics research. Through QuarkNet, high school classrooms can participate in a whole range of physics activities, such as analyzing real data from the CMS experiment at CERN and creating their own experiments with detectors.

“Really active QuarkNet groups run detectors all year and measure all sorts of things that would sound crazy to a physicist,” says Mark Adams, QuarkNet’s cosmic ray studies coordinator. “It doesn’t really matter what the question is as long as it allows them to do science.”

And this year’s solar eclipse will give students a rare chance to answer a cosmic question: Is the sun a major producer of the cosmic rays that bombard Earth, or do they come from somewhere else?

“We wanted to show that, if the rate of cosmic rays changes a lot during the eclipse, then the sun is a big source of cosmic rays,” Adams says. “We sort of know that the sun is not the main source, but it’s a really neat experiment. As far as we know, no one has ever done this with cosmic ray muons at the surface.”

Adams and QuarkNet teacher Nate Unterman will be leading a group of nine students and five adults to Missouri to the heart of the path of totality—where the moon will completely cover the sun—to take measurements of the event. Other QuarkNet groups will stay put, measuring what effect a partial eclipse might have on cosmic rays in their area.  

Most cosmic rays are likely high-energy particles from exploding stars deep in space, which QuarkNet detectors register via the muons those particles produce in the atmosphere. But the likely result of the experiment—that the rate of cosmic rays doesn’t change when the moon moves in front of the sun—doesn’t eclipse the excitement for the students in the collaboration.

“They’ve been working for months and months to develop the design for the measurements and the detectors,” Adams says. “That’s the great part—they’re not focused on what the answer is but the best way to find it.”

Photo of three students carrying a long detector while another holds the door
Mark Adams

by Leah Poffenberger at August 16, 2017 05:46 PM

August 15, 2017

Symmetrybreaking - Fermilab/SLAC

Dark matter hunt with LUX-ZEPLIN

A video from SLAC National Accelerator Laboratory explains how the upcoming LZ experiment will search for the missing 85 percent of the matter in the universe.

Illustration of a cut-away view of the inside of the LZ detector

What exactly is dark matter, the invisible substance that accounts for 85 percent of all the matter in the universe but can’t be seen even with our most advanced scientific instruments?

Most scientists believe it’s made of ghostly particles that rarely bump into their surroundings. That’s why billions of dark matter particles might zip right through our bodies every second without us even noticing. Leading candidates for dark matter particles are WIMPs, or weakly interacting massive particles.

Scientists at SLAC National Accelerator Laboratory are helping to build and test one of the biggest and most sensitive detectors ever designed to catch a WIMP: the LUX-ZEPLIN or LZ detector. The following video explains how it works.

Dark Matter Hunt with LUX-ZEPLIN (LZ)

Video of Dark Matter Hunt with LUX-ZEPLIN (LZ)

August 15, 2017 04:36 PM

August 12, 2017

Lubos Motl - string vacua and pheno

Arctic mechanism: a derivation of the multiple point criticality principle?
One of the ideas I found irresistible in my research during the last 3 weeks was the multiple point criticality principle mentioned in a recent blog post about a Shiu-Hamada paper.

Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something maximally special seems to happen over there.

This principle is supported by a reasonably impressive prediction of the fine-structure constant, the top quark mass, the Higgs boson mass, and perhaps the neutrino masses and/or the cosmological constant related to them.

In some sense, the principle modifies the naive "uniform measure" on the parameter space that is postulated by naturalness. We may say that the multiple point criticality principle not only modifies naturalness. It almost exactly negates it. The places with \(\theta=0\) where \(\theta\) is the distance from some phase transition are of measure zero, and therefore infinitely unlikely, according to naturalness. But the multiple point criticality principle says that they're really preferred. In fact, if there are several phase transitions and \(\theta_i\) measure the distances from several domain walls in the moduli space, the multiple point criticality principle wants to set all the parameters \(\theta_i\) equal to zero.

Is there an everyday life analogy for that? I think so. Look at the picture at the top and ignore the boat with the German tourist in it. What you see is the Arctic Ocean – with lots of water and ice over there. What is the temperature of the ice and the water? Well, it's about 0 °C, the melting point of water. In reality, the melting point is a bit different due to the salinity.

But in this case, there exists a very good reason to conclude that we're near the melting point. It's because we can see that the water and the ice co-exist. And the water may only exist above the melting point; and the ice may only exist beneath the melting point. The intersection of these two intervals is a narrow interval – basically the set containing the melting point only. If the water were much warmer than the melting point, it would quickly be cooled by the colder ice underneath – and the ice itself can't really be above the melting point.

(The heat needed to melt the ice is equal to the heat needed to warm the same amount of water by some 80 °C, if I remember correctly.)
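That recollection checks out against the standard handbook values (latent heat of fusion of ice ≈ 334 J/g, specific heat of liquid water ≈ 4.18 J/(g·°C) – numbers not quoted in the post, so treat them as my inputs):

```python
# Standard handbook values (my inputs, not from the post):
latent_heat_fusion = 334.0   # J/g needed to melt ice at 0 °C
specific_heat_water = 4.18   # J/(g*°C) for liquid water

# Warming that the same heat would produce in an equal mass of liquid water:
equivalent_warming = latent_heat_fusion / specific_heat_water
print(f"{equivalent_warming:.1f} °C")  # -> 79.9 °C, i.e. "some 80 °C"
```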

How is it possible that the temperature 0 °C, although it's a special value of measure zero, is so popular in the Arctic Ocean? It's easy. If you study what's happening when you warm the ice – start with a body of ice only – you will ultimately get to the melting point and a part of the ice will melt. You will obtain a mixture of the ice and water. Now, if you are adding additional heat, the ice no longer heats up. Instead, the extra heat will be used to transform an increasing fraction of the ice to the water – i.e. to melt the ice.

So the growth of the temperature stops at the melting point. Instead of the temperature, what the additional incoming heat increases is the fraction of the H2O molecules that have already adopted the liquid state. Only when the fraction reaches 100% do you get pure liquid water, and only then may additional heating raise the temperature above 0 °C.
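The plateau is easy to see in a toy sketch (my own minimal model, not something from the post): add heat to one gram of ice starting at −20 °C and track the temperature together with the melted fraction. The material constants are standard handbook values.

```python
# Toy model (mine): 1 g of H2O, initially ice at -20 °C. The temperature
# climbs, then sticks at 0 °C while the melt fraction runs from 0 to 1,
# and only afterwards climbs again.
C_ICE = 2.09      # J/(g*°C), specific heat of ice
C_WATER = 4.18    # J/(g*°C), specific heat of liquid water
L_FUSION = 334.0  # J/g, latent heat of fusion

def state_after(heat):
    """Return (temperature in °C, melted fraction) after `heat` joules."""
    warm_ice = C_ICE * 20.0                # heat needed to bring the ice to 0 °C
    if heat < warm_ice:
        return (-20.0 + heat / C_ICE, 0.0)
    heat -= warm_ice
    if heat < L_FUSION:                    # on the plateau: T pinned at 0 °C
        return (0.0, heat / L_FUSION)
    heat -= L_FUSION                       # everything melted; T rises again
    return (heat / C_WATER, 1.0)

for q in (0.0, 41.8, 150.0, 300.0, 375.8, 600.0):
    temp, frac = state_after(q)
    print(f"{q:6.1f} J -> {temp:7.2f} °C, melted fraction {frac:.2f}")
```

The middle branch is the whole point of the analogy: between 41.8 J and 375.8 J of added heat, the temperature output does not move at all, only the fraction does.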

In theoretical physics, we want things like the top quark mass \(m_t\) to be analogous to the temperature \(T\) of the Arctic water. Can we find a similar mechanism in physics that would just explain why the multiple point criticality principle is right?

The easiest way is to take the analogy literally and consider the multiverse. The multiverse may be just like the Arctic Ocean. And parts of it may be analogous to the floating ice, parts of it may be analogous to the water underneath. There could be some analogy of the "heat transfer" that forces something like \(m_t\) to be nearly the same in the nearby parts of the multiverse. But the special values of \(m_t\) that allow several phases may occupy a finite fraction of the multiverse and what is varying in this region isn't \(m_t\) but rather the percentage of the multiverse occupied by the individual phases.

There may be regions of the multiverse where several phases co-exist and several parameters analogous to \(m_t\) appear to be fine-tuned to special values.

I am not sure whether an analysis of this sort may be quantified and embedded into a proper full-blown cosmological model. It would be nice. But maybe the multiverse isn't really needed. It seems to me that at these special values of the parameters where several phases co-exist, the vacuum states could naturally be superpositions of quantum states built on several classically very different configurations. Such a law would make it more likely that the cosmological constant is described by a seesaw mechanism, too.

If it's true and if the multiple-phase special points are favored, it's because of some "attraction of the eigenvalues". If you know random matrix theory, i.e. the statistical theory of many energy levels in the nuclei, you know that the energy levels tend to repel each other. It's because some Jacobian factor is very small in the regions where the energy eigenvalues approach each other. Here, we need the opposite effect. We need the values of parameters such as \(m_t\) to be attracted to the special values where phases may be degenerate.
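The repulsion mentioned above is easy to demonstrate in the simplest toy case – a \(2\times 2\) real symmetric random matrix (my illustration, not taken from the nuclear applications). The eigenvalue spacing is \(\sqrt{(a-c)^2+4b^2}\), which vanishes only when both \(a=c\) and \(b=0\) hold simultaneously – a codimension-two condition – so near-degeneracies are quadratically suppressed:

```python
import math
import random

random.seed(0)  # deterministic sampling for reproducibility

def spacing_2x2():
    # Random 2x2 symmetric matrix [[a, b], [b, c]] with Gaussian entries.
    # Its eigenvalue spacing is sqrt((a - c)^2 + 4 b^2): it vanishes only
    # when a = c AND b = 0 at the same time, which is why tiny spacings
    # are rare -- the "repulsion" Jacobian factor at work.
    a, c = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    b = random.gauss(0.0, 1.0 / math.sqrt(2.0))
    return math.sqrt((a - c) ** 2 + 4.0 * b ** 2)

samples = [spacing_2x2() for _ in range(100_000)]
mean = sum(samples) / len(samples)
small = sum(s < 0.1 for s in samples) / len(samples)
print(f"mean spacing: {mean:.2f}, fraction of spacings below 0.1: {small:.4f}")
```

For statistically independent levels, a fraction of order \(0.1/\bar{s}\approx 6\%\) of spacings would fall below 0.1; the Jacobian suppression brings it down to roughly 0.25%. The mechanism sketched in this post needs the opposite sign of this effect – attraction rather than repulsion – for parameters such as \(m_t\).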

So maybe even if you avoid any assumption about the existence of any multiverse, you may invent a derivation at the level of the landscape only. We normally assume that the parameter spaces of the low-energy effective field theory (or their parts allowed in the landscape, i.e. those draining the swamp) are covered more or less uniformly by the actual string vacua. We know that this can't quite be right. Sometimes we can't even say what the "uniform distribution" is supposed to look like.

But this assumption of uniformity could be flawed in very specific and extremely interesting ways. It could be that the actual string vacua actually love to be degenerate – "almost equal" superpositions of vacua that look classically very different from each other. In general, there should be some tunneling in between the vacua and the tunneling gives you off-diagonal matrix elements (between different phases) to many parameters describing the low-energy physics of the vacua (coupling constants, cosmological constant).

And because of the off-diagonal elements, the actual vacua we should find when we're careful aren't actually "straightforward quantum coherent states" built around some classical configurations. But very often, they may like to be superpositions – with non-negligible coefficients – of many phases. If that's so, even the single vacuum – in our visible Universe – could be analogous to the Arctic Ocean in my metaphor and an explanation of the multiple point criticality principle could exist.

If it were right qualitatively, it could be wonderful. One could try to look for a refinement of this Arctic landscape theory – a theory that tries to predict more realistic probability distributions on the low-energy effective field theories' parameter spaces, distributions that are non-uniform and at least morally compatible with the multiple point criticality principle. This kind of reasoning could even lead us to a calculation of some values of the parameters that are much more likely than others – and it could be the right ones which are compatible with our measurements.

A theory of the vacuum selection could exist. I tend to think that this kind of research hasn't been sufficiently pursued partly because of the left-wing bias of the research community. They may be impartial in many ways but the biases often do show up even in faraway contexts. Leftists may instinctively think that non-uniform distributions are politically incorrect so they prefer the uniformity of naturalness or the "typical vacua" in the landscape. I have always felt that these Ansätze are naive and on the wrong track – and the truth is much closer to their negations. The apparent numerically empirical success of the multiple point criticality principle is another reason to think so.

Note that while we're trying to calculate some non-uniform distributions, the multiple point criticality principle is a manifestation of egalitarianism and multiculturalism from another perspective – because several phases co-exist as almost equal ones. ;-)

by Luboš Motl (noreply@blogger.com) at August 12, 2017 04:49 PM

August 10, 2017

Symmetrybreaking - Fermilab/SLAC

Think FAST

The new Fermilab Accelerator Science and Technology facility at Fermilab looks to the future of accelerator science.

Scientists in laser safety goggles work in a laser lab

Unlike most particle physics facilities, the new Fermilab Accelerator Science and Technology facility (FAST) wasn’t constructed to find new particles or explain basic physical phenomena. Instead, FAST is a kind of workshop—a space for testing novel ideas that can lead to improved accelerator, beamline and laser technologies.

Historically, accelerator research has taken place on machines that were already in use for experiments, making it difficult to try out new ideas. Tinkering with a physicist’s tools mid-search for the secrets of the universe usually isn’t a great idea. By contrast, FAST enables researchers to study pieces of future high-intensity and high-energy accelerator technology with ease.

“FAST is specifically aiming to create flexible machines that are easily reconfigurable and that can be accessed on very short notice,” says Alexander Valishev, head of the department that manages FAST. “You can roll in one experiment and roll the other out in a matter of days, maybe months, without expensive construction and operation costs.”

This flexibility is part of what makes FAST a useful place for training up new accelerator scientists. If a student has an idea, or something they want to study, there’s plenty of room for experimentation.

“We want students to come and do their thesis research at FAST, and we already have a number of students working,” Valishev says. “We have already had a PhD awarded on the basis of work done at FAST, but we want more of that.”

Yellow cryomodule with RF distribution

This yellow cryomodule will house the superconducting cavities that take the beam’s energy from 50 to 300 MeV.

Courtesy of Fermilab

Small ring, bright beam

FAST will eventually include three parts: an electron injector, a proton injector and a particle storage ring called the Integrable Optics Test Accelerator, or IOTA. Although it will be small compared to other rings—only 40 meters long, while Fermilab’s Main Injector has a circumference of 3 kilometers—IOTA will be the centerpiece of FAST after its completion in 2019. And it will have a unique feature: the ability to switch from being an electron accelerator to a proton accelerator and back again.

“The sole purpose of this synchrotron is to test accelerator technology and develop that tech to test ideas and theories to improve accelerators everywhere,” says Dan Broemmelsiek, a scientist in the IOTA/FAST department.

One aspect of accelerator technology FAST focuses on is creating higher-intensity or “brighter” particle beams.

Brighter beams pack a bigger particle punch. A high-intensity beam could send a detector twice as many particles as is usually possible. Such an experiment could be completed in half the time, shortening the data collection period by several years.

IOTA will test a new concept for accelerators called integrable optics, which is intended to create a more concentrated, stable beam, possibly producing higher intensity beams than ever before.

“If this IOTA thing works, I think it could be revolutionary,” says Jamie Santucci, an engineering physicist working on FAST. “It’s going to allow all kinds of existing accelerators to pack in way more beam. More beam, more data.”

Photoelectron gun

The beam starts here: Once electrons are sent down the beamline, they pass through a set of solenoid magnets—the dark blue rings—before entering the first two superconducting cavities.

Courtesy of Fermilab

Maximum energy milestone

Although the completion of IOTA is still a few years away, the electron injector will reach a milestone this summer: producing an electron beam with an energy of 300 million electronvolts (MeV).

“The electron injector for IOTA is a research vehicle in its own right,” Valishev says. It provides scientists a chance to test superconducting accelerators, a key piece of technology for future physics machines that can produce intense acceleration at relatively low power.

“At this point, we can measure things about the beam, chop it up or focus it,” Broemmelsiek says. “We can use cameras to do beam diagnostics, and there’s space here in the beamline to put experiments to test novel instrumentation concepts.”

The electron beam’s previous maximum energy of 50 MeV was achieved by passing the beam through two superconducting accelerator cavities and has already provided opportunities for research. The arrival of the 300 MeV beam this summer—achieved by sending the beam through another eight superconducting cavities—will open up new possibilities for accelerator research, with some experiments already planned to start as soon as the beam is online.
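As a rough cross-check of those figures (my own arithmetic, using only numbers stated in the article): two cavities brought the beam to 50 MeV, and eight more take it to 300 MeV, so each additional cavity contributes about 31 MeV on average.

```python
# Figures from the article; the per-cavity average is my own inference.
energy_after_two_cavities = 50.0   # MeV, previous maximum
final_energy = 300.0               # MeV, this summer's milestone
additional_cavities = 8

gain_per_cavity = (final_energy - energy_after_two_cavities) / additional_cavities
print(f"average gain per additional cavity: {gain_per_cavity} MeV")  # 31.25 MeV
```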

Yellow, red and black wires plugged into a device

Electronics for IOTA

Chip Edstrom

FAST forward

The third phase of FAST, once IOTA is complete, will be the construction of the proton injector.

“FAST is unique because we will specifically target creating high-intensity proton beams,” Valishev says.

This high-intensity proton beam research will directly translate to improving research into elusive particles called neutrinos, Fermilab’s current focus.

“In five to 10 years, you’ll be talking to a neutrino guy and they’ll go, ‘I don’t know what the accelerator guys did, but it’s fabulous. We’re getting more neutrinos per hour than we ever thought we would,’” Broemmelsiek says.

Creating new accelerator technology is often an overlooked area in particle physics, but the freedom to try out new ideas and discover how to build better machines for research is inherently rewarding for people who work at FAST.

“Our business is science, and we’re supposed to make science, and we work really hard to do that,” Broemmelsiek says. “But it’s also just plain ol’ fun.”

by Leah Poffenberger at August 10, 2017 01:00 PM

August 08, 2017

Symmetrybreaking - Fermilab/SLAC

A new search for dark matter 6800 feet underground

Prototype tests of the future SuperCDMS SNOLAB experiment are in full swing.

From left: SLAC's Tsuguo Aramaki, Paul Brink and Mike Racine are performing final adjustments to the SuperCDMS SNOLAB engineering

When an extraordinarily sensitive dark matter experiment goes online at one of the world’s deepest underground research labs, the chances are better than ever that it will find evidence for particles of dark matter—a substance that makes up 85 percent of all matter in the universe but whose constituents have never been detected.

The heart of the experiment, called SuperCDMS SNOLAB, will be one of the most sensitive detectors for hypothetical dark matter particles called WIMPs, short for “weakly interacting massive particles.” SuperCDMS SNOLAB is one of two next-generation experiments (the other one being an experiment called LZ) selected by the US Department of Energy and the National Science Foundation to take the search for WIMPs to the next level, beginning in the early 2020s.

“The experiment will allow us to enter completely unexplored territory,” says Richard Partridge, head of the SuperCDMS SNOLAB group at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory. “It’ll be the world’s most sensitive detector for WIMPs with relatively low mass, complementing LZ, which will look for heavier WIMPs.”  

The experiment will operate deep underground at Canadian laboratory SNOLAB inside a nickel mine near the city of Sudbury, where 6800 feet of rock provide a natural shield from high-energy particles from space, called cosmic rays. This radiation would not only cause unwanted background in the detector; it would also create radioactive isotopes in the experiment’s silicon and germanium sensors, making them useless for the WIMP search. That’s also why the experiment will be assembled from major parts at its underground location.

A detector prototype is currently being tested at SLAC, which oversees the efforts of the SuperCDMS SNOLAB project.

Colder than the universe

The only reason we know dark matter exists is that its gravity pulls on regular matter, affecting how galaxies rotate and light propagates. But researchers believe that if WIMPs exist, they could occasionally bump into normal matter, and these collisions could be picked up by modern detectors.

SuperCDMS SNOLAB will use germanium and silicon crystals in the shape of oversized hockey pucks as sensors for these sporadic interactions. If a WIMP hits a germanium or silicon atom inside these crystals, two things will happen: The WIMP will deposit a small amount of energy, causing the crystal lattice to vibrate, and it’ll create pairs of electrons and electron deficiencies that move through the crystal and alter its electrical conductivity. The experiment will measure both responses. 

“Detecting the vibrations is very challenging,” says KIPAC’s Paul Brink, who oversees the detector fabrication at Stanford. “Even the smallest amounts of heat cause lattice vibrations that would make it impossible to detect a WIMP signal. Therefore, we’ll cool the sensors to about one hundredth of a Kelvin, which is much colder than the average temperature of the universe.”

These chilly temperatures give the experiment its name: CDMS stands for “Cryogenic Dark Matter Search.” (The prefix “Super” indicates that the experiment is more sensitive than previous detector generations.)

The use of extremely cold temperatures will be paired with sophisticated electronics, such as transition-edge sensors that switch from a superconducting state of zero electrical resistance to a normal-conducting state when a small amount of energy is deposited in the crystal, as well as superconducting quantum interference devices, or SQUIDs, that measure these tiny changes in resistance.      

The experiment will initially have four detector towers, each holding six crystals. For each crystal material—silicon and germanium—there will be two different detector types, called high-voltage (HV) and interleaved Z-sensitive ionization phonon (iZIP) detectors. Future upgrades can further boost the experiment’s sensitivity by increasing the number of towers to 31, corresponding to a total of 186 sensors.

Working hand in hand

The work under way at SLAC serves as a system test for the future SuperCDMS SNOLAB experiment. Researchers are testing the four different detector types, the way they are integrated into towers, their superconducting electrical connectors and the refrigerator unit that cools them down to a temperature of almost absolute zero.

“These tests are absolutely crucial to verify the design of these new detectors before they are integrated in the experiment underground at SNOLAB,” says Ken Fouts, project manager for SuperCDMS SNOLAB at SLAC. “They will prepare us for a critical DOE review next year, which will determine whether the project can move forward as planned.” DOE is expected to cover about half of the project costs, with the other half coming from NSF and a contribution from the Canadian Foundation for Innovation. 

Important work is progressing at all partner labs of the SuperCDMS SNOLAB project. Fermi National Accelerator Laboratory is responsible for the cryogenics infrastructure and the detector shielding—both will enable searching for faint WIMP signals in an environment dominated by much stronger unwanted background signals. Pacific Northwest National Laboratory will lend its expertise in understanding background noise in highly sensitive precision experiments. A number of US universities are involved in various aspects of the project, including detector fabrication, tests, data analysis and simulation.

The project also benefits from international partnerships with institutions in Canada, France, the UK and India. The Canadian partners are leading the development of the experiment’s data acquisition and will provide the infrastructure at SNOLAB. 

“Strong partnerships create a lot of synergy and make sure that we’ll get the best scientific value out of the project,” says Fermilab’s Dan Bauer, spokesperson of the SuperCDMS collaboration, which consists of 109 scientists from 22 institutions, including numerous universities. “Universities have lots of creative students and principal investigators, and their talents are combined with the expertise of scientists and engineers at the national labs, who are used to successfully managing and building large projects.”

SuperCDMS SNOLAB will be the fourth generation of experiments, following CDMS-I at Stanford, CDMS-II at the Soudan mine in Minnesota, and a first version of SuperCDMS at Soudan, which completed operations in 2015.   

“Over the past 20 years we’ve been pushing the limits of our detectors to make them more and more sensitive for our search for dark matter particles,” says KIPAC’s Blas Cabrera, project director of SuperCDMS SNOLAB. “Understanding what constitutes dark matter is as fundamental and important today as it was when we started, because without dark matter none of the known structures in the universe would exist—no galaxies, no solar systems, no planets and no life itself.”

by Manuel Gnida at August 08, 2017 01:00 PM

August 04, 2017

Lubos Motl - string vacua and pheno

T2K: two-sigma evidence supporting CP violation in the neutrino sector
Let me write a short blog post by a linker, not a thinker:
T2K presents hint of CP violation by neutrinos
The strange acronym T2K stands for Tokai to Kamioka. So the T2K experiment is located in Japan but the collaboration is heavily multi-national. It works much like the older K2K, KEK to Kamioka. Indeed, it's no coincidence that Kamioka sounds like Kamiokande. Average Japanese people probably tend to know the former, average physicists tend to know the latter. ;-)



Dear physicists, Kamiokande was named after Kamioka, not vice versa! ;-)

Muon neutrinos are created at the source.




These muon neutrinos travel underground through 295 kilometers of rock, and along the way they have the opportunity to change into electron neutrinos.

In 2011, T2K claimed evidence for neutrino oscillations powered by \(\theta_{13}\), the last and least "usual" real angle in the mixing matrix. In Summer 2017, we still believe that this angle is nonzero, like the other two, \(\theta_{12}\) and \(\theta_{23}\), and F-theory, a version of string theory, had predicted its approximate magnitude rather correctly.

In 2013, they found more than 7-sigma evidence for electron-muon neutrino oscillations and received a Breakthrough Prize for that.




By some physical and technical arrangements, they are able to look at the oscillations of antineutrinos as well and measure all the processes. The handedness (left-handed or right-handed) of the neutrinos we know is correlated with their being neutrinos or antineutrinos. But this correlation by itself still allows the CP-symmetry to be conserved: if you replace neutrinos with antineutrinos and reflect everything in the mirror, so that left-handed becomes right-handed, the allowed left-handed neutrinos map onto the allowed right-handed antineutrinos, and everything is consistent.

But we know that the CP-symmetry is also broken by elementary particles in Nature – even though the spectrum of known particles and their allowed polarizations doesn't make this breaking unavoidable. The only experimentally confirmed source of CP-violation we know is the complex phase in the CKM matrix describing the relationship between upper-type and lower-type quark mass eigenstates.



Well, T2K has done some measurement and they have found some two-sigma evidence – deviation from the CP-symmetric predictions – supporting the claim that a similar CP-violating phase \(\delta_{CP}\), or another CP-violating effect, is nonzero even in the neutrino sector. So if it's true, the neutrinos' masses are qualitatively analogous to the quark masses. They have all the twists and phases and violations of naive symmetries that are allowed by the basic consistency.
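An aside that isn't in the T2K announcement, just the textbook background: in the standard parametrization – the same one used for the CKM matrix – the phase \(\delta_{CP}\) sits inside the PMNS lepton mixing matrix as follows, with \(s_{ij}=\sin\theta_{ij}\) and \(c_{ij}=\cos\theta_{ij}\):

\[
U = \begin{pmatrix}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta_{CP}}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{CP}} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta_{CP}} & s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{CP}} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta_{CP}} & c_{23}c_{13}
\end{pmatrix}
\]

CP is violated exactly when \(\sin\delta_{CP}\neq 0\) and all three angles are nonzero – which is why the nonzero \(\theta_{13}\) established in 2011 was a prerequisite for any measurement of \(\delta_{CP}\) at all.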

Needless to say, the two-sigma evidence is very weak. Most such "weak caricatures of a discovery" eventually turn out to be coincidences and flukes. If they managed to collect 10 times more data and the two-sigma deviation really did follow from a real effect, a symmetry breaking, then it would likely be enough to discover the CP-violation in the neutrino sector at 5 sigma – which is considered sufficient evidence for experimental physicists to brag, get drunk, scream "discovery, discovery", accept a prize, and get drunk again (note that the 5-sigma process has 5 stages).
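To see why 2 sigma is so weak while 5 sigma is the discovery bar, just convert the significance to a one-sided Gaussian tail probability, \(p = \frac{1}{2}\,\mathrm{erfc}(n/\sqrt{2})\). A minimal stdlib-only sketch:

```python
from math import erfc, sqrt

def p_value(sigma: float) -> float:
    """One-sided tail probability of a standard Gaussian beyond `sigma`."""
    return 0.5 * erfc(sigma / sqrt(2))

for n in (2, 3, 5):
    print(f"{n} sigma -> p = {p_value(n):.2e}")
```

So a 2-sigma excess arises by chance roughly once in 40 tries, while 5 sigma corresponds to about 3 parts in 10 million – a qualitatively different level of evidence.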



Ivan Mládek, Japanese [people] in [Czech town of] Jablonec, "Japonci v Jablonci". Japanese men are walking through a Jablonec bijou exhibition and buying corals for the government and the king. The girl sees that one of them has a crush on her. He gives her corals and she's immediately his. I don't understand it, you, my Japanese boy, even though you are not a man of Jablonec, I will bring you home. I will feed you nicely, to turn you into a man, and I won't let you leave to any Japan after that again. Visual arts by fifth-graders.

So while I think that most two-sigma claims ultimately fade away, this particular candidate for a discovery sounds mundane enough so that it could be true and 2 sigma could be enough for you to believe it is true. Theoretically speaking, there is no good reason to think that the complex phase should be absent in the neutrino sector. If quarks and leptons differ in such aspects, I think that neutrinos tend to have larger and more generic angles than the quarks, not vice versa.

by Luboš Motl (noreply@blogger.com) at August 04, 2017 05:34 PM

August 03, 2017

Symmetrybreaking - Fermilab/SLAC

Our clumpy cosmos

The Dark Energy Survey reveals the most accurate measurement of dark matter structure in the universe.

Milky Way galaxy rising over the Dark Energy Camera in Chile

Imagine planting a single seed and, with great precision, being able to predict the exact height of the tree that grows from it. Now imagine traveling to the future and snapping photographic proof that you were right.

If you think of the seed as the early universe, and the tree as the universe the way it looks now, you have an idea of what the Dark Energy Survey (DES) collaboration has just done. In a presentation today at the American Physical Society Division of Particles and Fields meeting at the US Department of Energy’s (DOE) Fermi National Accelerator Laboratory, DES scientists will unveil the most accurate measurement ever made of the present large-scale structure of the universe.

These measurements of the amount and “clumpiness” (or distribution) of dark matter in the present-day cosmos were made with a precision that, for the first time, rivals that of inferences from the early universe by the European Space Agency’s orbiting Planck observatory. The new DES result (the tree, in the above metaphor) is close to “forecasts” made from the Planck measurements of the distant past (the seed), allowing scientists to understand more about the ways the universe has evolved over 14 billion years.

“This result is beyond exciting,” says Scott Dodelson of Fermilab, one of the lead scientists on this result. “For the first time, we’re able to see the current structure of the universe with the same clarity that we can see its infancy, and we can follow the threads from one to the other, confirming many predictions along the way.”

Most notably, this result supports the theory that 26 percent of the universe is in the form of mysterious dark matter and that space is filled with an also-unseen dark energy, which makes up 70 percent of the cosmos and is causing the accelerating expansion of the universe.

Paradoxically, it is easier to measure the large-scale clumpiness of the universe in the distant past than it is to measure it today. In the first 400,000 years following the Big Bang, the universe was filled with a glowing gas, the light from which survives to this day. Planck’s map of this cosmic microwave background radiation gives us a snapshot of the universe at that very early time. Since then, the gravity of dark matter has pulled mass together and made the universe clumpier over time. But dark energy has been fighting back, pushing matter apart. Using the Planck map as a start, cosmologists can calculate precisely how this battle plays out over 14 billion years.

“The DES measurements, when compared with the Planck map, support the simplest version of the dark matter/dark energy theory,” says Joe Zuntz, of the University of Edinburgh, who worked on the analysis. “The moment we realized that our measurement matched the Planck result within 7 percent was thrilling for the entire collaboration.”

map of dark matter is made from gravitational lensing measurements of 26 million galaxies

This map of dark matter is made from gravitational lensing measurements of 26 million galaxies in the Dark Energy Survey. The map covers about 1/30th of the entire sky and spans several billion light-years in extent. Red regions have more dark matter than average, blue regions less dark matter.

Chihway Chang of the Kavli Institute for Cosmological Physics at the University of Chicago and the DES collaboration.

The primary instrument for DES is the 570-megapixel Dark Energy Camera, one of the most powerful in existence, able to capture digital images of light from galaxies eight billion light-years from Earth. The camera was built and tested at Fermilab, the lead laboratory on the Dark Energy Survey, and is mounted on the National Science Foundation’s 4-meter Blanco telescope, part of the Cerro Tololo Inter-American Observatory in Chile, a division of the National Optical Astronomy Observatory. The DES data are processed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

Scientists on DES are using the camera to map an eighth of the sky in unprecedented detail over five years. The fifth year of observation will begin in August. The new results released today draw from data collected only during the survey’s first year, which covers 1/30th of the sky.

“It is amazing that the team has managed to achieve such precision from only the first year of their survey,” says National Science Foundation Program Director Nigel Sharp. “Now that their analysis techniques are developed and tested, we look forward with eager anticipation to breakthrough results as the survey continues.”

DES scientists used two methods to measure dark matter. First, they created maps of galaxy positions as tracers, and second, they precisely measured the shapes of 26 million galaxies to directly map the patterns of dark matter over billions of light-years using a technique called gravitational lensing.

To make these ultra-precise measurements, the DES team developed new ways to detect the tiny lensing distortions of galaxy images, an effect not even visible to the eye, enabling revolutionary advances in understanding these cosmic signals. In the process, they created the largest guide to spotting dark matter in the cosmos ever drawn (see image). The new dark matter map is 10 times the size of the one DES released in 2015 and will eventually be three times larger than it is now.

“It’s an enormous team effort and the culmination of years of focused work,” says Erin Sheldon, a physicist at the DOE’s Brookhaven National Laboratory, who co-developed the new method for detecting lensing distortions.

These results and others from the first year of the Dark Energy Survey will be released today online and announced during a talk by Daniel Gruen, NASA Einstein fellow at the Kavli Institute for Particle Astrophysics and Cosmology at DOE’s SLAC National Accelerator Laboratory, at 5 pm Central time. The talk is part of the APS Division of Particles and Fields meeting at Fermilab and will be streamed live.

The results will also be presented by Kavli fellow Elisabeth Krause of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC at the TeV Particle Astrophysics Conference in Columbus, Ohio, on Aug. 9; and by Michael Troxel, postdoctoral fellow at the Center for Cosmology and AstroParticle Physics at Ohio State University, at the International Symposium on Lepton Photon Interactions at High Energies in Guangzhou, China, on Aug. 10. All three of these speakers are coordinators of DES science working groups and made key contributions to the analysis.

“The Dark Energy Survey has already delivered some remarkable discoveries and measurements, and they have barely scratched the surface of their data,” says Fermilab Director Nigel Lockyer. “Today’s world-leading results point forward to the great strides DES will make toward understanding dark energy in the coming years.”

A version of this article was published by Fermilab.

August 03, 2017 02:37 PM

August 01, 2017

Symmetrybreaking - Fermilab/SLAC

Tuning in for science

The sprawling Square Kilometer Array radio telescope hunts signals from one of the quietest places on earth.

133 dishes across the Great Karoo, 400,000 square kilometers

When you think of radios, you probably think of noise. But the primary requirement for building the world’s largest radio telescope is keeping things almost perfectly quiet.

Radio signals are constantly streaming to Earth from a variety of sources in outer space. Radio telescopes are powerful instruments that can peer into the cosmos—through clouds and dust—to identify those signals, picking them up like a signal from a radio station. To do it, they need to be relatively free from interference emitted by cell phones, TVs, radios and their kin.

That’s one reason the Square Kilometer Array is under construction in the Great Karoo, 400,000 square kilometers of arid, sparsely populated South African plain, along with a component in the Outback of Western Australia. The Great Karoo is also a prime location because of its high altitude—radio waves can be absorbed by atmospheric moisture at lower altitudes. SKA currently covers some 1320 square kilometers of the landscape.

Even in the Great Karoo, scientists need careful filtering of environmental noise. Effects from different levels of radio frequency interference (RFI) can range from “blinding” to actually damaging the instruments. Through South Africa’s Astronomy Geographic Advantage Act, SKA is working toward “radio protection,” which would dedicate segments of the bandwidth for radio astronomy while accommodating other private and commercial RF service requirements in the region.

“Interference affects observational data and makes it hard and expensive to remove or filter out the introduced noise,” says Bernard Duah Asabere, Chief Scientist of the Ghana team of the African Very Long Baseline Interferometry Network (African VLBI Network, or AVN), one of the SKA collaboration groups in eight other African nations participating in the project.

SKA “will tackle some of the fundamental questions of our time, ranging from the birth of the universe to the origins of life,” says SKA Director-General Philip Diamond. Among the targets: dark energy, Einstein’s theory of gravity and gravitational waves, and the prevalence of the molecular building blocks of life across the cosmos.

SKA-South Africa can detect radio spectrum frequencies from 350 megahertz to 14 gigahertz. Its partner Australian component will observe the lower-frequency scale, from 50 to 350 megahertz. Visible light, for comparison, has frequencies ranging from 400 to 800 million megahertz. SKA scientists will process radiofrequency waves to form a picture of their source.
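For a feel for these bands, frequency converts to wavelength via λ = c/f. A quick sketch using the band edges quoted above (my arithmetic, not from the article):

```python
C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

print(wavelength_m(350e6))  # SKA-South Africa low edge: ~0.86 m
print(wavelength_m(14e9))   # SKA-South Africa high edge: ~2.1 cm
print(wavelength_m(50e6))   # Australian component low edge: ~6 m
```

The meter-scale wavelengths at the low end are why the dishes can be so large and still act as precise reflectors, while visible light, at hundreds of millions of megahertz, has wavelengths a million times shorter.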

A precursor instrument to SKA called MeerKAT (named for the squirrel-sized critters indigenous to the area), is under construction in the Karoo. This array of 16 dishes in South Africa achieved first light on June 19, 2016. MeerKAT focused on 0.01 percent of the sky for 7.5 hours and saw 1300 galaxies—nearly double the number previously known in that segment of the cosmos. 

Since then, MeerKAT has met another milestone with 32 integrated antennas. MeerKAT will also reach its full array of 64 dishes early next year, making it one of the world’s premier radio telescopes. MeerKAT will eventually be integrated into SKA Phase 1, where an additional 133 dishes will be built. That will bring the total number of antennas for SKA Phase 1 in South Africa to 197 by 2023. So far, 32 dishes are fully integrated and are being commissioned for science operations.

On completion of SKA Phase 2 by 2030, the detection area of the receiver dishes will exceed 1 square kilometer, or about 11,000,000 square feet. Its huge size will make it 50 times more sensitive than any other radio telescope. It is expected to operate for 50 years.

SKA is managed by a 10-nation consortium, including the UK, China, India and Australia as well as South Africa, and receives support from another 10 countries, including the US. The project is headquartered at Jodrell Bank Observatory in the UK.

The full SKA will use radio dishes across Africa and Australia, and collaboration members say it will have a farther reach and more detailed images than any existing radio telescope.

In preparation for the SKA, South Africa and its partner countries developed AVN to establish a network of radiotelescopes across the African continent. One of its projects is the refurbishing of redundant 30-meter-class antennas, or building new ones across the partner countries, to operate as networked radio telescopes.

The first project of its kind is the AVN Ghana project, where an idle 32-meter diameter dish has been refurbished and revamped with a dual receiver system at 5 and 6.7 gigahertz central frequencies for use as a radio telescope. The dish was previously owned and operated by the government and the company Vodafone Ghana as a telecommunications facility. Now it will explore celestial objects such as extragalactic nebulae, pulsars and other radio sources in space, such as masers in molecular clouds.

Asabere’s group will be able to tap into areas of SKA’s enormous database (several supercomputers’ worth) over the Internet. So will groups in Botswana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia. SKA is also offering extensive outreach in participating countries and has already awarded 931 scholarships, fellowships and grants.

Other efforts in Ghana include introducing astronomy in the school curricula, training students in astronomy and related technologies, doing outreach in schools and universities, receiving visiting students at the telescope site and hosting programs such as the West African International Summer School for Young Astronomers taking place this week.

Asabere, who achieved his advanced degrees in Sweden (Chalmers University of Technology) and South Africa (University of Johannesburg), would like to see more students trained in Ghana and to get more researchers on board. He also hopes for the construction of the needed infrastructure, more local and foreign partnerships and strong governmental backing.

“I would like the opportunity to practice my profession on my own soil,” he says.

That day might not be far beyond the horizon. The Leverhulme-Royal Society Trust and Newton Fund in the UK are co-funding extensive human capital development programs in the SKA-AVN partner countries. A seven-member Ghanaian team, for example, has undergone training in South Africa and has been instructed in all aspects of the project, including the operation of the telescope. 

Several PhD students and one MSc student from Ghana have received SKA-SA grants to pursue further education in astronomy and engineering. The Royal Society has awarded funding in collaboration with Leeds University to train two PhDs and 60 young aspiring scientists in the field of astrophysics.

Based on the success of the Leverhulme-Royal Society program, a joint UK-South Africa Newton Fund intervention (DARA—the Development in Africa with Radio Astronomy) has since been initiated in other partner countries to grow high technology skills that could lead to broader economic development in Africa. 

As SKA seeks answers to complex questions over the next five decades, there should be plenty of opportunities for science throughout the Southern Hemisphere. Though it lives in one of the quietest places, SKA hopes to be heard loud and clear.

by Mike Perricone at August 01, 2017 01:50 PM

July 31, 2017

Symmetrybreaking - Fermilab/SLAC

An underground groundbreaking

A physics project kicks off construction a mile underground.

Fourteen shovelers mark the start of LBNF construction.

For many government officials, groundbreaking ceremonies are probably old hat—or old hardhat. But how many can say they’ve been to a groundbreaking that’s nearly a mile underground?

A group of dignitaries, including a governor and four members of Congress, now have those bragging rights. On July 21, they joined scientists and engineers 4850 feet beneath the surface at the Sanford Underground Research Facility to break ground on the Long-Baseline Neutrino Facility (LBNF).

LBNF will house massive, four-story-high detectors for the Deep Underground Neutrino Experiment (DUNE) to learn more about neutrinos—invisible, almost massless particles that may hold the key to how the universe works and why matter exists.  Fourteen shovels full of dirt marked the beginning of construction for a project that could be, well, groundbreaking.

The Sanford Underground Research Facility in Lead, South Dakota resides in what was once the deepest gold mine in North America, which has been repurposed as a place for discovery of a different kind.

“A hundred years ago, we mined gold out of this hole in the ground. Now we’re going to mine knowledge,” said US Representative Kristi Noem of South Dakota in an address at the groundbreaking.

Transforming an old mine into a lab is more than just a creative way to reuse space. On the surface, cosmic rays from the sun constantly bombard us, causing cosmic noise in the sensitive detectors scientists use to look for rare particle interactions. But underground, shielded by nearly a mile of rock, there’s cosmic quiet. Cosmic rays are rare, making it easier for scientists to see what’s going on in their detectors without being clouded by interference.

Going down?

It may be easier to analyze data collected underground, but entering the subterranean science facility can be a chore. Nearly 60 people took a trip underground to the groundbreaking site, requiring some careful elevator choreography.

Before venturing into the deep below, reporters and representatives alike donned safety glasses, hardhats and wearable flashlights. They received two brass tags engraved with their names—one to keep and another to hang on a corkboard—a process called “brassing in.” This helps keep track of who’s underground in case of emergency.

The first group piled into the open-top elevator, known as a cage, to begin the descent. As the cage glides through a mile of mountain, it’s easy to imagine what it must have been like to be a miner back when Sanford Lab was the Homestake Mine. What’s waiting below may have changed, but the method of getting there hasn’t: The winch lowering the cage at 500 feet a minute is 80 years old and still works perfectly.

The ride to the 4850-level takes about 10 minutes in the cramped cage—it fits 35, but even with 20 people it feels tight. Water drips in through the ceiling as the open elevator chugs along, occasionally passing open mouths in the rock face of drifts once mined for gold.

 “When you go underground, you start to think ‘It has never rained in here. And there’s never been daylight,’” says Tim Meyer, Chief Operating Officer of Fermilab, who attended the groundbreaking. “When you start thinking about being a mile below the surface, it just seems weird, like you’re walking through a piece of Swiss cheese.”

Where the cage stops at the 4850-level would be the destination of most elevator occupants on a normal day, since the shaft ends near the entrance of clean research areas housing Sanford Lab experiments. But for the contingent traveling to the future site of LBNF/DUNE on the other end of the mine, the journey continued, this time in an open-car train. It’s almost like a theme-park ride as the motor (as it’s usually called by Sanford staff) clips along through a tunnel, but fortunately, no drops or loop-the-loops are involved.

“The same rails now used to transport visitors and scientists were once used by the Homestake miners to remove gold from the underground facility,” says Jim Siegrist, Associate Director of High Energy Physics at the Department of Energy. “During the ride, rock bolts and protective screens attached to the walls were visible by the light of the headlamp mounted on our hardhats.”

After a 15-minute ride, the motor reached its destination and it was business as usual for a groundbreaking ceremony: speeches, shovels and smiling for photos. A fresh coat of white paint (more than 100 gallons worth) covered the wall behind the officials, creating a scene that almost could have been on the surface.

“Celebrating the moment nearly a mile underground brought home the enormity of the task and the dedication required for such precise experiments,” says South Dakota Governor Dennis Daugaard. “I know construction will take some time, but it will be well worth the wait for the Sanford Underground Research Facility to play such a vital role in one of the most significant physics experiments of our time."

What’s the big deal?

The process to reach the groundbreaking site is much more arduous than reaching most symbolic ceremonies, so what would possess two senators, two representatives, a White House representative, a governor and delegates from three international science institutions (to mention a few of the VIPs) to make the trip? Only the beginning of something huge—literally.

“This milestone represents the start of construction of the largest mega-science project in the United States,” said Mike Headley, executive director of Sanford Lab.  

The 14 shovelers at the groundbreaking made the first tiny dent in the excavation site for LBNF, which will require the extraction of more than 870,000 tons of rock to create huge caverns for the DUNE detectors. These detectors will catch neutrinos sent 800 miles through the earth from Fermi National Accelerator Laboratory in the hopes that they will tell us something more about these strange particles and the universe we live in.

“We have the opportunity to see truly world-changing discovery,” said US Representative Randy Hultgren of Illinois. “This is unique—this is the picture of incredible discovery and experimentation going into the future.”

by Leah Poffenberger at July 31, 2017 07:35 PM

July 26, 2017

Symmetrybreaking - Fermilab/SLAC

Angela Fava: studying neutrinos around the globe

This experimental physicist has followed the ICARUS neutrino detector from Gran Sasso to Geneva to Chicago.

Photo of Angela Fava giving a talk at the Fermilab User's Meeting

Physicist Angela Fava has been at the enormous ICARUS detector’s side for over a decade. As an undergraduate student in Italy in 2006, she worked on basic hardware for the neutrino hunting experiment: tightening bolts and screws, connecting and reconnecting cables, learning how the detector worked inside and out.

ICARUS (short for Imaging Cosmic And Rare Underground Signals) first began operating for research in 2010, studying a beam of neutrinos created at European laboratory CERN and launched straight through the earth hundreds of miles to the detector’s underground home at INFN Gran Sasso National Laboratory.

In 2014, the detector moved to CERN for refurbishing, and Fava relocated with it. In June ICARUS began a journey across the ocean to the US Department of Energy’s Fermi National Accelerator Laboratory to take part in a new neutrino experiment. When it arrives today, Fava will be waiting.

Fava will go through the installation process she helped with as a student, this time as an expert.

Photo of a shipping container with the words
Caraban Gonzalez, Noemi Ordan, Julien Marius, CERN

Journey to ICARUS

As a child growing up between Venice and the Alps, Fava always thought she would pursue a career in math. But during a one-week summer workshop before her final year of high school in 2000, she was drawn to experimental physics.

At the workshop, she realized she had more in common with physicists. Around the same time, she read about new discoveries related to neutral, rarely interacting particles called neutrinos. Scientists had recently been surprised to find that the extremely light particles actually had mass and that different types of neutrinos could change into one another. And there was still much more to learn about the ghostlike particles.

At the start of college in 2001, Fava immediately joined the University of Padua neutrino group. For her undergraduate thesis research, she focused on the production of hadrons, making measurements essential to studying the production of neutrinos. In 2004, her research advisor Alberto Guglielmi and his group joined the ICARUS collaboration, and she’s been a part of it ever since.

Fava jests that the relationship actually started much earlier: “ICARUS was proposed for the first time in 1983, which is the year I was born. So we are linked from birth.”

Fava remained at the University of Padua in the same research group for her graduate work. During those years, she spent about half of her time at the ICARUS detector, helping bring it to life at Gran Sasso.

Once all the bolts were tightened and the cables were attached, ICARUS scientists began to pursue their goal of using the detector to study how neutrinos change from one type to another.

During operation, Fava switched gears to create databases to store and log the data. She wrote code to automate the data acquisition system and triggering, which differentiates between neutrino events and background such as passing cosmic rays. “I was trying to take part in whatever activity was going on just to learn as much as possible,” she says.

That flexibility is a trait that Claudio Silverio Montanari, the technical director of ICARUS, praises. “She has a very good capability to adapt,” he says. “Our job, as physicists, is putting together the pieces and making the detector work.”

Photo of the ICARUS shipping container being transported by truck
Caraban Gonzalez, Noemi Ordan, Julien Marius, CERN

Changing it up

Adapting to changing circumstances is a skill both Fava and ICARUS have in common. When scientists proposed giving the detector an update at CERN and then using it in a suite of neutrino experiments at Fermilab, Fava volunteered to come along for the ride.

Once installed and operating at Fermilab, ICARUS will be used to study neutrinos from a source a few hundred meters away from the detector. In its new iteration, ICARUS will search for sterile neutrinos, a hypothetical kind of neutrino that would interact even more rarely than standard neutrinos. While hints of these low-mass particles have cropped up in some experiments, they have not yet been detected.

At Fermilab, ICARUS also won’t be buried below more than half a mile of rock, a feature of the INFN setup that shielded it from cosmic radiation from space. That means the triggering system will play an even bigger role in this new experiment, Fava says.

“We have a great challenge ahead of us.” She’s up to the task.

by Liz Kruesi at July 26, 2017 04:09 PM

July 25, 2017

Symmetrybreaking - Fermilab/SLAC

Turning plots into stained glass

Hubert van Hecke, a heavy-ion physicist, transforms particle physics plots into works of art.

Stained glass image inspired by Fibonacci numbers

At first glance, particle physicist Hubert van Hecke’s stained glass windows simply look like unique pieces of art. But there is much more to them than pretty shapes and colors. A closer look reveals that his creations are actually renditions of plots from particle physics experiments.  

Van Hecke learned how to create stained glass during his undergraduate years at Louisiana State University. “I had an artistic background—my father was a painter, so I thought, if I need a humanities credit, I'll just sign up for this,” van Hecke recalls. “So in order to get my physics bachelor's, I took stained glass.”

Over the course of two semesters, van Hecke learned how to cut pieces of glass from larger sheets, puzzle them together, then solder and caulk the joints. “There were various assignments that gave you an enormous amount of elbow room,” he says. “One of them was to do something with Fibonacci numbers, and one was to pick your favorite philosopher and make a window related to their work.”

Van Hecke continued to create windows and mirrors throughout graduate school but stopped for many years while working as a full-time heavy-ion physicist at Los Alamos National Laboratory and raising a family. Only recently did he return to his studio—this time, to create pieces inspired by physics. 

“I had been thinking about designs for a long time—then it struck me that occasionally, you see plots that are interesting, beautiful shapes,” van Hecke says. “So I started collecting pictures as I saw them.”

His first plot-based window, a rectangle-shaped piece with red, orange and yellow glass, was inspired by the results of a neutrino flavor oscillation study from the MiniBooNE experiment at Fermi National Accelerator Laboratory. He created two pieces after that: one from a plot generated during the hunt for the Higgs boson at the Tevatron, also at Fermilab, and the other based on an experiment with quarks and gluons.

According to van Hecke, what inspires him about these plots is “purely the shapes.” 

“In terms of the physics, it's what I run across—for example, I see talks about heavy ion physics, elementary particle physics, and neutrinos, [but] I haven't really gone out and searched in other fields,” he says. “Maybe there are nice plots in biology or astronomy.”

Although van Hecke has not yet displayed his pieces publicly, if he does one day, he plans to include explanations for the phenomena the plots illustrate, such as neutrinos and the Standard Model, as a unique way to communicate science. 

But before that, van Hecke plans to create more stained glass windows. As of two months ago, he is semiretired. In between runs to Fermilab, where he is helping with the effort to use Argonne National Laboratory's SeaQuest experiment to search for dark photons, he hopes to spend more time in the studio creating the pieces left on the drawing board, including plots from experiments investigating the Standard Model, neutrinoless double beta decay and dark matter interactions.

“I hope to make a dozen or more,” he says. “As I bump into plots, I'll collect them and hopefully, turn them all into windows.” 

by Diana Kwon at July 25, 2017 01:00 PM

Last updated:
September 22, 2017 07:35 AM
All times are UTC.
