Particle Physics Planet


July 15, 2019

Christian P. Robert - xi'an's og

Nature snapshots

In this 6 June issue of Nature, which I read on my way to O’Bayes, an editorial on the scary move by the WHO to incorporate traditional Chinese medicine remedies in its classification, as the classification includes drugs made from protected and endangered species, and as such remedies have not been evidence-tested. A news brief on India abandoning the requirement for PhD students to get a paper published prior to being awarded the degree, presumably much to the sorrow of predatory publishers. A delay to Plan S (a European project to make all funded research freely available), now postponed to 1 January 2021. A review of the latest, as yet unpublished, book by Neal Stephenson, Fall. Which I obviously ordered immediately! A paper in the British Journal of Anaesthesia published along with an independent assessment of the same study (methods and results). Some letters protesting the “public’s phobia” induced by the series Chernobyl, which echoes an email from one of my colleagues in the same complaining vein, since “only 20 deaths” can be attributed to the disaster with certainty! And a revisit of “cold fusion”, with no evidence found of the claimed phenomenon that led to a scientific outcry in 1989.

by xi'an at July 15, 2019 10:19 PM

Emily Lakdawalla - The Planetary Society Blog

What the Shoemaker-Levy 9 Impact Taught Us
Twenty-five years ago, multiple fragments of comet Shoemaker-Levy 9 crashed into Jupiter, changing the face of the planet and the course of planetary science.

July 15, 2019 10:00 AM

Peter Coles - In the Dark

Hubble’s Constant – A Postscript on w

Last week I posted about a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. You can download a PDF of the full paper here.

Reading the paper through over the weekend I was struck by Figure 6:

This shows the constraints on H0 and the parameter w which is used to describe the dark energy component. Bear in mind that these estimates of cosmological parameters actually involve the simultaneous estimation of several parameters, six in the case of the standard ΛCDM model. Incidentally, H0 is not one of the six basic parameters of the standard model – it is derived from the others – and some important cosmological observations are relatively insensitive to its value.

The parameter w is the equation-of-state parameter for the dark energy component, so that the pressure p is related to the energy density ρc² via p = wρc². The fixed value w = -1 applies if the dark energy is of the form of a cosmological constant (or vacuum energy). I explained why here. Non-relativistic matter (dominated by rest-mass energy) has w = 0 while ultra-relativistic matter has w = 1/3.

Applying the cosmological version of the thermodynamic relation for adiabatic expansion, dE = -p dV, one finds that ρ ∼ a^(-3(1+w)) where a is the cosmic scale factor. Note that w = -1 gives a constant energy density as the Universe expands (the cosmological constant); w = 0 gives ρ ∼ a^(-3), as expected for `ordinary’ matter.
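The scaling law above is easy to check numerically; here is a minimal sketch (the function name and chosen values are purely illustrative, not from the paper under discussion):

```python
# Density scaling rho ∝ a^(-3(1+w)) for a component with equation-of-state
# parameter w, normalised so that rho = rho0 at scale factor a = 1.
def rho_scaling(a, w, rho0=1.0):
    return rho0 * a ** (-3 * (1 + w))

# Doubling the scale factor (a = 2):
lam = rho_scaling(2.0, -1.0)     # cosmological constant: unchanged (1.0)
mat = rho_scaling(2.0, 0.0)      # matter, w = 0: falls as a^-3 (0.125)
rad = rho_scaling(2.0, 1 / 3)    # radiation, w = 1/3: falls as a^-4 (0.0625)
phantom = rho_scaling(2.0, -2.0) # w = -2: grows as a^3 (8.0)
```

The last line illustrates the point made further down: for w = -2 the density grows as a³ as the Universe expands.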

As I already mentioned, in the standard cosmological model w is fixed at  w=-1 but if it is treated as a free parameter then it can be added to the usual six to produce the Figure shown above. I should add for Bayesians that this plot shows the posterior probability assuming a uniform prior on w.

What is striking is that the data seem to prefer a very low value of w. Indeed the peak of the likelihood (which determines the peak of the posterior probability if the prior is flat) appears to be off the bottom of the plot. It must be said that the size of the black contour lines (at one sigma and two sigma for dashed and solid lines respectively) suggests that these data aren’t really very informative; the case w=-1 is well within the 2σ contour. In other words, one might get a slightly better fit by allowing the equation of state parameter to float, but the quality of the fit might not improve sufficiently to justify the introduction of another parameter.

Nevertheless it is worth mentioning that if it did turn out, for example, that w = -2 then that would imply ρ ∼ a^3, i.e. an energy density that increases steeply as a increases (i.e. as the Universe expands). That would be pretty wild!

On the other hand, there isn’t really any physical justification for cases with w<-1 (in terms of a plausible model) which, in turn, makes me doubt the reasonableness of imposing a flat prior. My own opinion is that if dark energy turns out not to be of the simple form of a cosmological constant then it is likely to be too complicated to be expressed in terms of a single number anyway.

 

Postscript to this postscript: take a look at this paper from 2002!

by telescoper at July 15, 2019 09:45 AM

July 14, 2019

Christian P. Robert - xi'an's og

noise contrastive estimation

As I was attending Lionel Riou-Durand’s PhD thesis defence at ENSAE-CREST last week, I had a look at his papers (!). The 2018 noise contrastive paper is written with Nicolas Chopin (both authors share the CREST affiliation with me). It compares noise contrastive estimation with Charlie Geyer’s 1994 solution to the intractable normalising constant problem, which relies on an artificial logit model fed with additional simulated data from another distribution ψ.

“Geyer (1994) established the asymptotic properties of the MC-MLE estimates under general conditions; in particular that the x’s are realisations of an ergodic process. This is remarkable, given that most of the theory on M-estimation (i.e. estimation obtained by maximising functions) is restricted to iid data.”

Michael Gutmann and Aapo Hyvärinen also use additional simulated data, in the likelihood of another logistic classifier, an approach called noise contrastive estimation. Both methods replace the unknown ratio of normalising constants with an unbiased estimate based on the additional simulated data. The major and impressive result in this paper [now published in the Electronic Journal of Statistics] is that the noise contrastive estimation approach always enjoys a smaller variance than Geyer’s solution, at an equivalent computational cost, when the actual data observations are iid and the artificial data simulations ergodic. The difference between the two estimators is however negligible against the Monte Carlo error (Theorem 2).
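For intuition, here is a toy sketch of the noise contrastive idea, with everything below (the Gaussian target, the uniform choice of ψ, the grid search) being my own illustration rather than the paper’s setup: a logistic classifier is fit to distinguish data from draws from ψ, with the unknown log normalising constant c entering as the only free parameter.

```python
import numpy as np

# Toy NCE sketch: the unnormalised target is p*(x) = exp(-x^2/2), whose true
# log normalising constant is c = log sqrt(2*pi) ≈ 0.919, and the noise
# distribution ψ is Uniform(-5, 5).
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)              # observations from the normalised model
y = rng.uniform(-5.0, 5.0, size=n)  # artificial draws from ψ
log_psi = -np.log(10.0)             # log-density of Uniform(-5, 5)

def log_pstar(z):
    return -0.5 * z ** 2            # unnormalised log-density of the target

def nce_loglik(c):
    # classifier logit: log p*(z) - c - log ψ(z); data labelled 1, noise 0
    s_x = log_pstar(x) - c - log_psi
    s_y = log_pstar(y) - c - log_psi
    return np.mean(-np.log1p(np.exp(-s_x))) + np.mean(-np.log1p(np.exp(s_y)))

grid = np.linspace(-2.0, 3.0, 1001)                    # grid search over c
c_hat = grid[np.argmax([nce_loglik(c) for c in grid])]
```

Here c_hat lands close to log √(2π); in the general setting the same logistic trick delivers the unknown ratio of normalising constants.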

This may be a rather naïve question, but I wonder at the choice of the alternative distribution ψ, with a vague notion that it could be optimised in a GAN perspective. A side result of interest in the paper is a minimal (re)parameterisation of the truncated multivariate Gaussian distribution (if only as an exercise for future exams), a distribution for which the normalising constant is of course unknown.

by xi'an at July 14, 2019 10:19 PM

July 13, 2019

Christian P. Robert - xi'an's og

The Long, Cruel History of the Anti-Abortion Crusade [reposted]

[Excerpts from an editorial in the NYT of John Irving, American author of the Cider House Rules novel we enjoyed reading 30 years ago]

“(…) I respect your personal reasons not to have an abortion — no one is forcing you to have one. I respect your choice. I’m pro-choice — often called pro-abortion by the anti-abortion crusaders, although no one is pro-abortion. What’s unequal about the argument is the choice; the difference between pro-life and pro-choice is the choice. Pro-life proponents have no qualms about forcing women to go through childbirth — they give women no choice (…)

I must remind the Roman Catholic Church of the First Amendment to the United States Constitution: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.” In other words, we are free to practice the religion of our choice, and we are protected from having someone else’s religion practiced on us. Freedom of religion in the United States also means freedom from religion (…)

The prevailing impetus to oppose abortion is to punish the woman who doesn’t want the child. The sacralizing of the fetus is a ploy. How can “life” be sacred (and begin at six weeks, or at conception), if a child’s life isn’t sacred after it’s born? Clearly, a woman’s life is never sacred; as clearly, a woman has no reproductive rights (…)

Of an unmarried woman or girl who got pregnant, people of my grandparents’ generation used to say: “She is paying the piper.” Meaning, she deserves what she gets — namely, to give birth to a child. That cruelty is the abiding impetus behind the dishonestly named right-to-life movement. Pro-life always was (and remains) a marketing term. Whatever the anti-abortion crusaders call themselves, they don’t care what happens to an unwanted child — not after the child is born — and they’ve never cared about the mother.”

by xi'an at July 13, 2019 10:19 PM

Peter Coles - In the Dark

Arron Banks and the Threat to Democracy

So it seems that a dodgy businessman named Arron Banks, who has extensive connections to similarly dodgy Russians, and who is currently under investigation by the National Crime Agency for his role in unlawful activities during the EU referendum campaign, has decided to sue award-winning journalist Carole Cadwalladr for defamation after she exposed him. Interestingly, Banks is suing her individually and is not taking action against the media outlets that have published her claims. I know I’m not the only person to suspect that this is because the litigation is merely vexatious, i.e. intended to exhaust Carole Cadwalladr’s financial resources, rather than a serious attempt to recover damages.

One of the items mentioned in Mr Banks’s claim is this TED Talk, which he alleges contains defamatory statements. The least I can do therefore is to share the video here.

 

by telescoper at July 13, 2019 12:32 PM

ZapperZ - Physics and Physicists

First Image Of Entangled Photons
We have the first image ever of photons in an entangled state. You may read the actual paper at the Science Advances page.

Of course, you can't tell that there is any entanglement going on just by looking at the image shown. You have to read the entire paper to see why there is a clear violation of a Bell-type inequality here, or more specifically of the CHSH inequality that was being measured.

Neat stuff!

Zz.


by ZapperZ (noreply@blogger.com) at July 13, 2019 02:17 AM

July 12, 2019

Christian P. Robert - xi'an's og

natural LaTeX

Nature must have been out of inspiration in the past weeks to run a two-page article on LaTeX [as the toolbox column] and how it compares with… Word! Which is not so obvious a topic, since most articles in Nature do not involve equations (much) and come from fields where Word prevails. Besides the long-running complaint that LaTeX is hard to learn, the selling argument for the article seemed to be the increasing ease of using (basic) LaTeX commands in forums (like Stack Exchange) and blogs (like WordPress) via MathJax. But the author also pushes for the lighter (R?)Markdown since, “in LaTeX, there is a greater risk that contributors will make changes that prevent the code compiling into a PDF” (which does not make sense to me). This tribune also got me to find out that there is a blog dedicated to the “LaTeX fetish”, which sounds to me like a perfect illustration of Internet vigilantism, especially with arguments like “free and open source software has a strong tendency towards being difficult to install and get up and running”.

by xi'an at July 12, 2019 10:19 PM

Peter Coles - In the Dark

Hubble’s Constant – The Tension Mounts!

There’s a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. The abstract is here:

You can download a PDF of the full paper here.

You will see that these measurements, based on observations of time delays in multiply-imaged quasars that have been gravitationally lensed, give higher values of the Hubble constant than determinations from, e.g., the Planck experiment.

Here’s a nice summary of the tension in pictorial form:

And here are some nice pictures of the lensed quasars involved in the latest paper:

 

It’s interesting that these determinations seem more consistent with local distance-scale approaches than with global cosmological measurements but the possibility remains of some unknown systematic.

Time, methinks, to resurrect my long-running poll on this!


Please feel free to vote. At the risk of inciting Mr Hine to clog up my filter with further gibberish,  you may also comment through the box below.

 

by telescoper at July 12, 2019 11:15 AM

July 11, 2019

Christian P. Robert - xi'an's og

a non-riddle

Unless I missed a point in the last riddle from the Riddler, there is very little to say about it:

Given N ochre balls, N aquamarine balls, and two urns, what is the optimal allocation of balls to urns so as to maximise the probability of drawing an ochre ball, with no urn left empty?

Both my reasoning and a short exploration code led to the solution of having one urn contain a single ochre ball (and no aquamarine ball), with all the other balls in the second urn.

# exploration over allocations: urn 1 gets n balls, m of them ochre, out of
# N balls in total (N/2 of each colour); odz is 4 x the probability of
# drawing an ochre ball when one of the two urns is picked at random
odz <- function(n, m, t) 2 * m / n + (t - 2 * m) / (t - n)
N <- 20
probz <- matrix(0, trunc(N / 2) - 1, N - 1)
for (n in 1:(N - 1))
  for (m in 1:(trunc(N / 2) - 1))
    probz[m, n] <- if (m <= n && n - m <= N / 2) odz(n, m, N) else NA
which(probz == max(probz, na.rm = TRUE), arr.ind = TRUE)  # one urn with a single ochre ball

by xi'an at July 11, 2019 10:19 PM

Emily Lakdawalla - The Planetary Society Blog

Hayabusa2 makes second touchdown on asteroid Ryugu
Japan's Hayabusa2 spacecraft has touched down on Ryugu for a second time, bagging samples which hopefully contain material from the subsurface of the asteroid.

July 11, 2019 06:27 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

The Environmental Effects of Smartphone Creation and Disposal

For some people, buying a new smartphone is a treat. They get to keep up with the latest digital trend, experience the wonders of the newest technologies, and showcase their delightful digital purchase. Thanks to the ever-advancing field of information technology, smartphones are becoming a necessity rather than an accessory in today’s society. New technology demands the replacement of the old, after all. In 2018, 1.56 billion smartphones were sold worldwide, a testament to the strength of the desire for the latest device. But smartphones carry a hefty cost that’s not included in their price tags. No, the environment pays this price, and the earth keeps paying from the creation to the disposal of a smartphone.

The Green Price

The insatiable market demand for smartphones and similar devices requires an equally insatiable production process. This process involves enormous amounts of electricity and mining. The average smartphone requires a small amount of precious metals, such as gold and palladium, in its circuit board. To supply manufacturers with these rare metals, companies dig deep into the bones of the earth in environmentally taxing mining efforts. Mining is responsible for up to 95 percent of a smartphone’s carbon footprint. Putting all the components of a smartphone together also takes a lot of power: over the last 10 years, the industry has consumed as much electricity as India uses in a year.

More galling still is the fact that manufacturers often design smartphones with a lifespan of no more than two years. Intense marketing campaigns and advertising continually influence people to upgrade their devices and buy new phones. As such, people dispose of perfectly serviceable, if outdated, devices alongside damaged units. Their disposal is the second price the environment pays.

The Price of Disposal


Throwing away a smartphone can harm the environment. Electronic devices and appliances like it are responsible for as much as 70 percent of the toxic waste in dumpsites. Although there are businesses and facilities that attempt to recycle these devices, it’s apparently much better for the environment for people to buy pre-owned phones. This is because only a small portion of a smartphone is actually recyclable.

Smelters recover the precious palladium and gold components but the process releases a host of more harmful substances into the atmosphere. These deadly vapors include mercury and chloride. Less scrupulous “recyclers” pay shady companies to take their electronic waste to developing countries for cheaper disposal. In these countries, workers inhale and absorb toxic fumes containing cadmium, nickel, and mercury as they disassemble and scavenge smartphone components.

So what’s the smart way to dispose of smartphones? One way is to buy models with longer life expectancies, but that’s still just a stop-gap measure. To truly make an impact, manufacturers should start designing phones that people can reuse. These devices should also be easily upgradable, negating the need for consumers to keep buying new products just to stay compatible with the digital landscape. Without these kinds of far-ranging changes to the industry, the environmental impact of technology will keep exacting a heavy toll on the Earth.

The post The Environmental Effects of Smartphone Creation and Disposal appeared first on None Equilibrium.

by Bertram Mortensen at July 11, 2019 10:20 AM

Peter Coles - In the Dark

Thirty Years as a Doctor!

A chance discovery while rummaging around in my filing cabinet reminded me that today is the anniversary of a momentous event. What I found was this:

It’s the programme of the summer Graduation Ceremony in 1989 at which I formally received my DPhil (Doctor of Philosophy). As you will see that was precisely thirty years ago today!

I actually submitted my thesis the previous summer (either at the end of August or the start of September 1988) but had to wait a few months for the examination, which I think was in December. By the time I had done my corrections (mainly typographical errors) the next available date for the degree to be formally conferred was in July 1989, so that’s when I officially got doctored. I was actually still in Brighton at the time, as I had started work as a postdoctoral researcher soon after submitting my thesis.

Here’s my thesis:

In those days they actually printed the thesis title in the programme, alongside the graduand’s name in the case of DPhil degrees.

It’s normal practice for people to assume the title of Doctor as soon as they have passed the viva voce examination but although I’ve never objected to that,  I’ve always been a bit unsure of the legality. Probably one doesn’t actually have a doctorate until it is conferred (either at a ceremony or in absentia).

Anyway, here is a picture of me (aged 26!)  emerging from the Brighton Centre wearing the old-style Sussex doctoral gown just after I received my DPhil:

Graduation

Unfortunately the University of Sussex decided a while ago to change the style of its academic dress to something a bit more conventional, and as far as I know it’s not possible to obtain the old-style gowns any more. They also changed the title DPhil to PhD because it confused potential students, especially those not from the UK.

My first degree came from Cambridge so I had to participate in an even more archaic ceremony for that institution. The whole thing is done in Latin there (or was when I graduated) and involves each graduand holding a finger held out by their College’s Praelector and then kneeling down in front of the presiding dignitary, who is either the Vice-Chancellor or the Chancellor. I can’t remember which. It’s also worth mentioning that although I did Natural Sciences (specialising in Theoretical Physics), the degree I got was Bachelor of Arts. Other than that, and the fact that the graduands had to walk to the Senate House from their College through the streets of Cambridge, I don’t remember much about the actual ceremony.

I was very nervous for that first graduation. The reason was that my parents had divorced some years before and my Mum had re-married. My Dad wouldn’t speak to her or her second husband. Immediately after the ceremony there was a garden party at my college, Magdalene, at which the two parts of my family occupied positions at opposite corners of the lawn and I scuttled between them trying to keep everyone happy. It was like that for the rest of the day and I have to say it was very stressful. A few years later I got my doctorate from the University of Sussex, at the Brighton Centre on the seafront. It was pretty much the same deal again with the warring family factions, but I enjoyed the whole day a lot more that time. And I got to wear the funny gown.

by telescoper at July 11, 2019 09:40 AM

Emily Lakdawalla - The Planetary Society Blog

Planetary Society-funded Technology Picked by NASA for Possible Moon Flight
PlanetVac, a technology that simplifies the process of collecting samples from other worlds, may fly to the Moon.

July 11, 2019 09:00 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

7 Ways to Monetise a Blog

A blog, no matter how big or small, can help you earn money when you do things the right way. There are several ways to monetise your blog. One of the first things you should do is to ask the assistance of an SEO firm from your location in Adelaide. This way, you have better chances of getting more traffic. Here are some ways you can monetise your blog:

Coaching Services

Are you an expert at something? It might be relationships, or computer programming. Chances are, others want to learn about it. If you’re good at it, share it. Clients will love finding a coach who is an expert in what they’re interested in. Pick a niche you’re good at and be a coach for that topic. Make sure you know what you’re talking about so that clients keep coming back.

Freelance Blogging

A freelance blogger is someone who knows their audience. You should be able to create engaging content, as you’ll be writing on many different topics. As your blog grows, so does your following. Because of that, there’s a strong possibility that many brands will come to you.

Sell Online Courses

Knowing your audience can help determine which courses you can sell them. By knowing their needs and interests, formulate a course that caters specifically to them. You don’t need to have a big following for this one. You can start with only a few followers and work your way up.

Affiliate Marketing

This is a way to sell products or services that aren’t your own through your blog. You can be an affiliate marketer for other products and services. You’ll earn a commission for this. Select products and services that are related to the vision of your blog. Share something that your followers will be interested in. Affiliate marketing is a source of passive income for any blog.

Email Marketing

This works well when you have a list of subscribers. You can use that list to sell you or your affiliate’s products and services. Building a list is hard work, but once you build a strong list, everything else will follow. Build a strong connection with your followers so they’ll trust you enough to subscribe to that list.

Advertisement

You need more than 100,000 visitors a day to get income from this avenue. Having advertisements on your blog makes you a bit of an authority in the blogging world, but as mentioned, you need traffic for this. It all comes down to how good your content is and how you market yourself as a blogger.

Sell eBooks


You have a blog, so that means you have a way with words. You can use that to your advantage by selling eBooks. Write something you’re good at or something your audience will love. Research your topics and plan your writing before you begin. Then, sell your finished products through your blog. Come up with a good marketing strategy. For example, sell three for the price of two. Customers like promos, so make use of that knowledge.

Your blog can earn you money if you know how to use it to your advantage. Before that, make sure you establish a following, even a small one. Know your craft and advocate it. Keep working hard and be dedicated. The money will come when your audience is having a good time reading your blog.

The post 7 Ways to Monetise a Blog appeared first on None Equilibrium.

by Bertram Mortensen at July 11, 2019 01:00 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Things that Small Businesses Overlook Online

Customers online only want one thing: easy access to information. They want to search for a service and find a business that offers what they need. Naturally, they’ll check out the top results first. The difference between these results is how they attract potential clients to make them click through to their website.

This is a problem with most small businesses. They might have a great physical store, but the lack or mismanagement of information online makes them hard to find. What you need is a strong online presence so that people can find you first. Let’s look at some of the best ways you can bring your business to the top of every customer’s mind:

1. Improve Your Online Presence

It’s one thing to have your business website appear on Google. It’s another for it to be featured on a business directory or a review website. A lot of social media platforms refer to lists for business contact information, while reviews of your business can help boost its SEO ranking.

Try Google Business or the Yellow Pages for starters. Next, search for business directories in your area and make sure that yours is listed. Consistency is essential, so don’t forget to use the same NAP information (name, address, and phone number) across all listings.

Put yourself in the customer’s shoes. You’ll trust a business that has the same contact information across all sites. Customers may quickly assume that a shop with inconsistent information has closed down.

2. Engage the Locals


Proximity and location play a large part in foot and web traffic. People visit the store they searched for 50% of the time, especially if they’re nearby. Take advantage of this by having your store’s location listed in as many places as you can.

If there’s a lot of competition in your area, you’ll have to improve not only your global search results but your local SEO, too. There are several ways to do this. Consulting a digital marketing agency in Virginia Beach can be a good start. Experts can review your business and find ways to promote it.

They can help boost your online rankings and build brand recognition in your community. This is important as building a strong local presence can have positive effects for years to come.

3. Energize Your Content

SEO experts know that regular posting boosts your search ranking. But what’s more important is how engaging that content is. Quality content gets shared more on social media.

Besides blog posts, consider posting photos and videos. Don’t forget to tag the location where you took them. Give customers a tour of your business by sharing what it looks like inside and out.

Your product or service might be helpful to another website or business. Once you have a lot of good content, try approaching sites to link back to you. These links are called backlinks. The more backlinks you have, the more search engines will recognize your domain’s legitimacy. This is called domain authority. The higher your domain’s (your website) authority, the better your search rankings will be.

Good SEO is not an accident. There is no shortcut. Take note of these three essential pointers to help build your SEO rankings. After all that hard work, make sure that people who do visit your website get what they need. Make customers happy and give them an option to leave feedback. Remember that positive reviews are one of the best ways to increase web traffic.

The post Things that Small Businesses Overlook Online appeared first on None Equilibrium.

by Bertram Mortensen at July 11, 2019 01:00 AM

July 09, 2019

Lubos Motl - string vacua and pheno

Vafa, Ellis debate with a bright religion scholar
MarkusM has pointed out that a more pleasant, entertaining, and physics-oriented public discussion took place in recent days, in the Institute of Art and Ideas (iai):
Does the Multiverse Exist? | Full Debate (43 minutes)
Participants were Harvard's string theorist Cumrun Vafa whom I know very well, you know, CERN's phenomenologist John Ellis, and an assistant professor of religion, feminism, gender, and sexuality, Mary Jane Rubenstein of Wesleyan University. Religion and feminism is quite a combination – maybe she hasn't noticed yet that according to religion, feminists will burn like brown coal in hell for eternity (because of the eternal character of the oxidation, feminist corpses in hell count as a renewable energy source). As we will see, she was the nicest surprise of that event.



Cumrun started by mentioning he was convinced string theory was a theory of Nature, also because it has allowed us to calculate the precise entropy of black holes – he modestly overlooked the fact that it was he and Strominger who pioneered this amazing sub-industry.



Near the beginning, the charming and talkative Ms Rubenstein started to talk a lot. For a few sentences, I thought: She must be a lady who likes to talk and her greatest intellectual achievement was to learn how to pronounce the word "epistemologically" which she really likes. But the following minutes have changed my mind profoundly. She has presented a long monologue – although with some notes – about the constants of Nature, quantum field theory, interesting and uninteresting types of the multiverse, and so on.

So I obviously concluded: This is not normal behavior for a religion-and-feminism professor. She must have a coach who is a physicist, because most of the stuff she said – including the radius of the visible Universe at over 40 billion light years, something that even many enthusiastic fans of physics get wrong – was totally correct and nontrivial. Well, she also suggested that nobody loved the ekpyrotic or cyclic Universe (correct) and that mathematics makes the many-worlds interpretation of quantum mechanics inevitable (incorrect).

I started to think. Well, she has a physics coach, but I must be capable of saying even more, right? I decided that the coach had to be Brian Greene, especially because her monologue overlapped so much with The Hidden Reality, a book by Greene that I also translated into Czech, a decade after The Elegant Universe. So I made a prediction: it will be easy to find Rubenstein and Greene in the same places. This prediction has passed some tests (she wrote a book heavily referring to Greene), so now I am willing to bet a lot that she would agree she got most of her physics and cosmology from Brian Greene.

Although her monologue was interesting, she was largely ignored. Instead, the host David Malone had a lot of fun with the claim that there were 10^500 Universes. Cumrun said it was an underestimate – because the total number of vacua is clearly infinite (just take the AdS₅×S⁵ vacua of type IIB at all allowed radii for an infinite set!) – which the host found very entertaining.

It's cute when children laugh. You present something and a schoolkid tries to see a penis in every sentence (about amino acids, for example). You may enjoy it as well but you quickly realize it is silly. There is nothing to laugh about here.

It is very clear why the host is laughing – because he is another irrational layman who finds a "very large number of solutions" to be a terrible thing – laymen are generally terrified by mathematics, numbers, and especially by large numbers. But there is nothing terrible about it at all. An equation or a theory has some set of solutions. The number of solutions is a non-negative number, an integer or infinity, and it is whatever it is. If we can't prove that a particular value is correct or other values are incorrect, then all values are equally acceptable. It is completely irrational to be biased against any solution. And it is just childish to laugh when someone determines that the right number is a large constant or infinity.

Incidentally, the host also said
10^500 is ten followed by 500 zeroes.
Not really. It is ten followed by 499 zeroes! ;-) Normal people say it is one followed by 500 zeroes.
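A quick sketch in Python confirms the digit counting (this is just arithmetic, nothing specific to the debate):

```python
# 10**500 written out in decimal is a "1" followed by 500 zeroes,
# equivalently the digits "10" followed by 499 more zeroes.
n = 10 ** 500
digits = str(n)
print(len(digits))                  # 501 digits in total
print(digits == "1" + "0" * 500)    # "one followed by 500 zeroes" -> True
print(digits == "10" + "0" * 499)   # "ten followed by 499 zeroes" -> True
```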

OK, I thought that this laughter of ignorance by the host wouldn't be addressed at all even though it's really a crucial point here. But happily, it did get addressed. I am not sure whether it was a lucky coincidence or someone's grander plan. John Ellis introduced himself: much of the attraction of string theory is about symmetries – string theory is the queen of symmetries. I actually disagree with that (string theory suppresses the role of symmetries in many ways, and also bans any global symmetries) but in the context of these conceptual discussions, it is a detail because the character of string theory's beauty is analogous to the beauty of symmetries. Ellis said many things about himself and why he chose to be a man who applies the symmetries.

Soon afterwards, he said that Cumrun has pioneered something relevant for this discussion: the swampland. Not everything is allowed. So Cumrun could convey the point that while 10^500 or infinity are large numbers, they don't mean that "anything goes" within string theory. Not so fast. So many things could be possible within effective field theory – like vacua where gravity is stronger than the other forces – but they are forbidden in Nature. They don't really exist.

By being forbidden in Nature, Cumrun meant "forbidden by the constraints implied by string theory". Of course it's the same thing in his picture of the world because he assumes and believes that Nature is described by string theory. Nevertheless, it was immediately turned into a controversy. Ellis added "according to string theory" to Vafa's words and the host started to laugh like a naughty schoolkid again. To make things worse, the host said he wanted to "sidetrack" a little bit: it sounds just like the anthropic principle.

Holy cow. It doesn't. It's really the opposite. Cumrun wants to show that good old physics constraints decide, while the anthropic principle is basically the assumption that physics doesn't matter and that the existence of intelligent animals is what constrains the choice of our vacuum or vacua. So the host wanted to "confirm" he was getting it and Cumrun informed him: "not really".

Also, I find it bizarre that the host vigorously tried to shut down a discussion about the anthropic principle – in a discussion whose title is "does the multiverse exist". Why would you shut down the discussion about this closely related proposed principle in a discussion that claims to be dedicated to the multiverse? The multiverse and the anthropic principle aren't the same thing but they're often discussed together and this is what a discussion about the multiverse should also clarify.

OK, this ban didn't succeed and around 14:10, Cumrun tried to communicate the simple point that he's not a defender of the anthropic principle. The host interrupted him with a would-be witty "you're in the swampland". It's a little bit witty – I did laugh – but the real reason I laughed is that it is so cutely dumb. The anthropic principle isn't the opposite of the swampland in any sense. It's clear why the host made the not so intelligent remark: because the landscape is "equivalent" to the anthropic principle in his mind, and because the swampland is said by Vafa to be the opposite of the landscape, the swampland has to be the opposite of the anthropic principle. So the anthropic principle's foe Dr Vafa has to be in the swampland.

The only problem is that the conclusion is wrong and this whole reasoning is totally illogical. The swampland and the landscape are two disjoint sets of models according to a particular kind of reasoning – Vafa's reasoning (or his project to classify theories) – but the anthropic principle and old-fashioned swampland-like reasoning are disjoint in a completely different way. They don't describe two disjoint classes of models. Instead, they describe inequivalent methods of using the landscape (I wrote these words before I heard Cumrun saying virtually exactly the same thing!) or of searching for the right models.

I could see in Cumrun's eyes that he was getting somewhat anxious. Is it possible to explain these things to somebody who apparently believes that "if you're not a fan of the anthropic principle, then you're in the swampland?" ;-) You can't be in the swampland, only quantum field theories may be in the swampland, and real people can't be in the swampland because nothing in the swampland is "real". If the host can't get this point, can he get anything that matters?

Around 14:40, Cumrun protested against the claim that "mathematics isn't interesting, who cares" etc. Our disobedient happy schoolkid began to laugh again! The reason behind his laughter is his ignorance but at least I found his laughter somewhat contagious, like the laugh track in the Big Bang Theory, so that has improved my experience.

Mathematical consistency etc. can lead us a huge distance towards the future. Maxwell is an example, Vafa said. Maxwell added a term just to make the equations consistent, and as a result he could predict propagating electromagnetic waves. These comments have clearly made no impact on the host's understanding of mathematics and the Universe. But I am confident that some people in the audience were smarter than the host.
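For context, the consistency argument Vafa alluded to is the standard textbook one (my summary, not something spelled out in the debate): Ampère's law alone contradicts charge conservation, and Maxwell's added displacement-current term both repairs the inconsistency and predicts waves moving at the speed of light.

```latex
% Ampère's law with Maxwell's displacement-current term:
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
% Without the last term, \nabla \cdot (\nabla \times \mathbf{B}) = 0 would force
% \nabla \cdot \mathbf{J} = 0, contradicting charge conservation
% \partial_t \rho + \nabla \cdot \mathbf{J} = 0.
% In vacuum (\rho = 0, \mathbf{J} = 0), combining with Faraday's law
% \nabla \times \mathbf{E} = -\partial_t \mathbf{B} gives a wave equation
% with speed c = 1/\sqrt{\mu_0 \varepsilon_0}:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \mathbf{E}}{\partial t^2}
```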

Rubenstein sort of intelligently said that there were two types of physicists in the search for the Universes. The likes of Cumrun look at the possibilities and their probabilities and don't care about the "existence". Well, Cumrun protested because that's exactly what he doesn't do. He doesn't try to find a probabilistic distribution. Well, he has also written some papers about the Hartle-Hawking states but it's not his dominant approach. So ironically, Cumrun's swampland conditions are really conditions of the Yes/No type, so they are about the "existence" which is exactly what Rubenstein tried to describe as the non-Cumrun approach.

OK, where did this misunderstanding come from? A minute later, Rubenstein explained what she meant by the second group. Folks like Penrose who see, in their hopeless papers, stargates into actual other Universes in some patterns drawn in the CMB. ;-) OK, you can't blame Cumrun or me for not predicting that this is what she would call the "second group". It's just one very particular, wrong, nutty paper, and it's in no way "a complementary school of thought" to either the swampland or the anthropic principle. OK, I found the spontaneous verbal explosions by that religion-feminist professor sort of cute although, unsurprisingly, she didn't always understand what she was talking about in the end.

Her "I think it's totally fascinating, I am thrilled" at 17:15 made me smile. She has a lot of the physicist's enthusiasm. She reported that she doesn't know whether we will ever come to any certainty about the existence of the other Universes. Right, we can't be sure about that.

Ellis says that the switch to 10^500 or more Universes represents tremendous progress because those questions were out of reach some 50 years ago, when people didn't dare to discuss questions about other Universes. Right. Needless to say, the schoolkid laughs again. It really looks like the defense of contemporary physics has the same effect on him (and not only him) as if you were telling some obscene jokes. But Ellis wanted to throw a "stone in Cumrun's direction". The accelerated expansion seems incompatible with string theory. Cumrun: "No, who said that? Which colleagues said that?" Everyone laughs; the host generously allowed Cumrun to hunt them down later. I think that Cumrun knows well who should be hunted here – the #1 wanted man claiming that string theory bans de Sitter is named Cumrun Vafa. ;-) Of course, he would say it's not true because he believes in quintessence instead of the de Sitter spaces. When asked about the future of the Universe, Cumrun said it was wonderfully exciting but not one with a happy ending.

The host just increased his IQ by 15 points and mentioned that mathematics had a good track record and that's how Dirac, with some extra coffee at night, discovered antimatter.

John Ellis bragged that he has coined the term "a theory of everything" – it's OK to boast because it's no longer a popular term – and he liked predictions. String theory was blamed for not having them and that's one reason why Ellis likes Vafa and the swampland – these claims make string theory falsifiable.

Why did the multiverse become popular – and it's not just science, is it? Rubenstein recalled the various chapters of Greene's Hidden Reality about the types of multiverses. She totally correctly said that the multiverse exploded around 2000. Right, that's when string theorists took it up to explain the recently observed acceleration of the cosmic expansion (in 1998 – she even knows this year!). It was really a trend that many if not most string theorists joined at that time. My only paper focusing on the anthropic principle (negatively) was also released in 2000. She discussed lots of detailed historical facts about Vilenkin's and Weinberg's ringing telephone etc. I am really impressed. At least as a reader of the popular books, she has done her homework extremely well.

Nevertheless, Cumrun had to point out that the anthropic principle – the methods to use the landscape – may be incorrect but that doesn't mean that the landscape or the string theory framework is incorrect. Precisely. When asked who imposed the multiverse on Cumrun, he said that the multiplicity of solutions had been known independently of Weinberg and his anthropic principle. Happily, all agree. This multiplicity was helpful for Weinberg to use it.

Vafa said a few more things, and the host asked Ellis what Vafa meant by "checking". Ellis sort of didn't answer but began to explain quintessence, which should have been explained some minutes earlier – it is now an expectation of many string theorists, possibly testable by telescopes. The host, a testability cop, suddenly expressed his satisfaction. This is how a majority of the laymen around these discussions seem to operate. They insist that they understand some experimental tests, otherwise they consider the science illegitimate. It's really a wrong and harmful attitude and Cumrun tried to convey why, but the host hasn't gotten it. Too bad; most of these laymen who have been turned into "testability cops" don't see that they have been brainwashed by some really crappy, deluded, and self-serving demagogues.

The host said that 98% of the room was dark energy. LOL, not really. First, it would be just 68% (and 95% when dark matter is added). Second, it's the average over the Universe and in the room, the air is vastly heavier than the dark energy because matter – instead of dark energy – is concentrated around Earth, as Ellis informed him. ;-) You may always learn some cutting-edge new insights, e.g. that there is matter around Earth.
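For reference, a minimal sketch of the numbers behind Ellis's correction (the rounded 68/27/5 percent split is my assumption based on standard cosmological figures, not something quoted in the show):

```python
# Rough present-day cosmic energy budget (rounded illustrative values):
dark_energy = 0.68
dark_matter = 0.27
ordinary_matter = 0.05

print(f"{dark_energy:.0%}")                # dark energy alone: 68%, not 98%
print(f"{dark_energy + dark_matter:.0%}")  # "dark" total with dark matter: 95%
```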

Ellis wanted to return to the question why the multiverse was popular – especially among non-physicists. They are probably unhappy with their Universe and they also feel that other people live in another Universe LOL (Rubenstein: it's called the U.S., the host wisely terminated these political ramifications). The host asks Rubenstein: What should the chaps be looking for? Now, Jane has the power to decide about the future of physics. ;-) At 30:13, she actually pronounces the name of Brian Greene for the first time LOL. There has to be evidence, she quotes him (he is surely not the guy who invented that sentence). Before that, Ellis mentions that there's not one lamppost but 10^500 lampposts. Rubenstein correctly notices that when the CMB is used as a key source of evidence, people have different interpretations for blips so there is an interpretational problem. She wants to return to the question what is the primary question. I am not sure that her proposal will fix anything.

Vafa gives an answer to Rubenstein's question. Physicists want to understand patterns and know whether the Universe is natural or not etc. Rubenstein says that these "why" naturalness questions push physicists to the realm of metaphysics. Well, you may say that but "metaphysics" may still be done scientifically and rationally or unscientifically or irrationally. What is "metaphysical" about these questions is that they are deep and far-reaching. But depth is something completely different than the lack of scientific rigor. Physics has really advanced far enough that it is credibly and rationally dealing with questions that used to seem to be beyond science some 50 years ago.

Ellis opines that physicists do "when where what" and not "why". OK, I disagree with that. A huge portion of physics is about "why" questions. You can't really live without the question "why", as a TV commercial with Gell-Mann concluded. In effect, Ellis later translated many "why" questions to "what are we" and "where did we come from". Also, "where are we going with the whole Universe" completes the list of questions that make Ellis come to work every day.

Ellis also explains his "opportunism" – looking for questions where some progress can be made right now. Right. A good choice of research projects actually is affected by recent successes and by what has become a promising route because of them. Many laymen don't get this point, either. They think that the right questions for science are independent of time. So they're drowning in clichés and medieval questions that are disconnected from any actual progress that has taken place in the recent years or century.

The host turned to Rubenstein and Ellis: How comfortable do you feel with this heretic, Vafa, who pays attention to the elegance of the equations? Instead of abusing the opportunity to burn Vafa at the stake, the religion professor intelligently said that "true" or "real" or "existent" has various forms – existence in the realm of possibility of mathematical ideas, or some real physical evidence. She predictably yet cleverly mentions Plato (and Tegmark, another chapter in Brian's book). Vafa is relieved that the host's plan to execute him didn't work and he interacts with Rubenstein in a friendly way.

Vafa reports that string theory has taught us much more than what we expected – something about the Standard Model – such as holography and properties of black holes. The connections of string theory with the known, observed physics and patterns seem so numerous and encouraging that it would be a shame not to study the consequences of the theory, and that's what we are doing. It's work in progress, perhaps with many centuries to go.

But the approach "string theory is too hard so let's only do simple things" is not what human beings do, and it's not what theoretical physicists do; yet to overestimate what we can do is also wrong. So we are making finite but nonzero steps. Exaggerations may be made and are being made in both directions. String theorists aren't claiming to have everything but they do claim to have more than nothing. It may take years or centuries but to stop wouldn't be good for humanity.

Excellent, Cumrun. The disobedient kid didn't even get an opportunity for his laughter now. ;-)

OK, the host asked a somewhat incomprehensible question about Vafa's opinion on the relationship between the mathematical and physical reality. Vafa pointed out that the host had excessively naive assumptions about how the "physical reality" may be defined. There may "exist" other Universes although it might be possible to prove that no interactions between the components of the world can exist. The word "reality" may be subtle, especially once you allow the multiverse. Right.

Cumrun repeats that "physical reality" isn't sufficiently defined and the host seems disgusted by this observation. But thankfully, the religion professor is really in the same boat with Vafa here – in fact, she started with this theme. OK, Cumrun still complained that her usage of "reality" is also ambiguous. I think that she hasn't really claimed otherwise. She has helped to classify the types of "reality" into some basic groups and deserves credit for it. At the end, she rightfully suggested she was on the same frequency as Cumrun.

It was one of the best debates on current theoretical physics involving people with different backgrounds. And Rubenstein gave some of the best-informed monologues on theoretical physics from a "humanities" person that I have seen in years, if not ever. You wouldn't have guessed that I would praise a "professor of religion and feminism" in this way but it just happened to be the case. She was clearly smarter and less naive than the male host, whose not very intelligent choices of when to laugh at least provided the debate with a contagious laugh track. ;-)

I think that, in the end, there is a reason why a religion professor looks more intelligent in these debates than many irreligious philosophers and babblers. For centuries, natural science seemed to be a technical branch of the "materialist philosophy" but it simply ceased to be the case sometime during the 20th century. The rise of quantum mechanics – and the materialists' lack of will to admit that facts must be specified relative to an observer – is the greatest example of the "fall of materialism" in physics. But it's not the only one. The host is a chap who really believes in some "naive materialism". The table is real, it's simple to divide things into real and unreal, everything must be simple like that, and when it's not, he doesn't believe it. But it just doesn't work like that and attempts to enforce a table-like science on modern physics are purely harmful. A religion scholar – who deals with principles of the Bible as well as religious myths – unsurprisingly has a greater understanding of the subtleties of the words "existence" and "reality".

by Luboš Motl (noreply@blogger.com) at July 09, 2019 03:45 PM

CERN Bulletin


CLUB DE PETANQUE

It was in scorching heat that the second competition of the season, the Claude CARTERET Challenge, took place this Thursday 27 June 2019.

Ten teams competed in doubles.

After some very close games, our faithful referee Claude JOUVE declared the winner – with three games won – a young trainee at CERN for three months who is also a pétanque enthusiast:

  • 1st: Clément TOSI
  • 2nd: Christian JOUVE, also with three games won but edged out on goal average.
  • 3rd: Alain PHILIPONA, back and still in form after an extended break due to youthful aches and pains.
  • 4th: Xavier PACCIONI, our friendly Corsican, making steady progress.

Top woman: Gabrielle CERRUTI, back after a few months off and having lost none of her touch.

Many thanks to Sylvie for her meal and for running the refreshment stand, and to Claude JOUVE for the logistics and his role as referee.

Next meeting: Thursday 25 July 2019 for the Challenge in memory of our late friend "Patrick DURAND".

July 09, 2019 03:07 PM

Lubos Motl - string vacua and pheno

A frustrating Guardian discussion on string theory
On June 28th, The Guardian's Ian Sample invited David Berman, a very good string theorist whom I know, and Eleanor Knox – both of them did great – to discuss the question
What happens when we can't test scientific theories?


Just to be sure, a good scientist tries to extract evidence through clever methods and hard work, whether easy tests in the foreseeable future look possible or impossible. And indeed, easy tests of string theory look impossible – and have looked impossible for the recent 50 years. When asked about future progress – which nobody can know; otherwise it would be taking place now – they sketched a century, or thousands of generations, of effort.

It's possible that people need this much time. It's possible it won't be enough. It's possible that mankind will turn into hopelessly stupid apes again. But it's also possible that the progress will be faster. Clearly, estimates of how quickly a theory of everything is going to be found depend on the recent advances and their extrapolation – on people's enthusiasm and self-confidence which, in the case of intelligent people, reflects some actual facts or experience. That's why sensible people such as Witten found it totally possible in the mid-1980s or mid-1990s that the theory of everything would be completed within weeks.



While Eleanor and David did great, the whole podcast was framed as a modern "trial of Galileo". Throughout the show, the actual scientists were expected to defend themselves and apologize. I find this basic framing absolutely unacceptable. It is just another manifestation of political correctness run amok.



Aside from David and Eleanor, Sample (remotely) invited a well-known "vanilla critic" who clearly had nothing to say about string theory whatsoever because he knows virtually nothing and couldn't possibly get a well-deserved passing grade in the first undergraduate string theory course. That stuttering, unpleasant vanilla critic was only repeating hostile clichés "it is not science", "it is not testable" etc.

String theory unquestionably is science and it is in principle testable. What isn't scientific are Inquisition trials where scientists and their theories are attacked by brute force, without any legitimate technical arguments whatsoever.

The real systemic problem isn't the one of the three participants who clearly contributed nothing. The real systemic problem is that Ian Sample, the host, was basically standing on the wrong side in much of his monologues. In effect, there were two plaintiffs and two defendants at this Inquisition trial. So at the very beginning, we learned from Sample that string theory was "controversial" and that no evidence in favor of string theory had been found in 35 years. What? What the hell are you talking about, Sample? 2019 minus 35 equals 1984 – which is not only the Orwellian year but also the year when the First Superstring Revolution started.

A majority of the evidence in favor of string theory has been found after 1984.

As David and Eleanor were trying to explain – but too politely, so that Sample couldn't get it – the evidence that assures competent physicists that string theory is here with us to stay has a more complicated, mathematical form than what unrefined minds such as Ian Sample's can comprehend. Everyone who has tried to look into this question but concluded that "no evidence in favor of string theory has been found for decades" is simply an intellectually inferior person who is incapable of becoming a theoretical physicist in 2019. The statement is demonstrably wrong – spectacularly wrong – but only smart people may understand the proof.

What is so difficult about this simple point? What is difficult is that it is politically incorrect – all the people who have no chance to understand the contemporary science because they are too stupid must be considered as "equal", anyway. Well, they are not equal. What they have to say about science is smaller than what string theorists have to say by many orders of magnitude. They're just adding noise. And if they invent an alternative to string theory, they are always pure crackpotteries. There is no real alternative to string theory.

At some moment, Sample was pushing the scientists to defend the thesis that gravitational wave detectors will be the experimental apparatuses that will test string theory. But none of them actually wanted to say that "the gravitational waves are the experimental silver bullet" for string theory by themselves. David said that it was just one among a huge number of rather far-fetched possibilities. The real point is that string theorists are working on strategies to advance knowledge that don't have the form of any simple-to-imagine experiment such as LIGO – strategies that are perfectly alright and contribute positive (and in almost no cases, negative) evidence that string theory works but strategies that intellectually limited people such as Sample simply cannot get.

They cannot get it because they misunderstand even the simplest point that the judgement what is right and what is wrong in theoretical physics usually depends on refined proofs, calculations, and arguments that heavily depend on mathematical details. They have never witnessed an example of a mathematical argument that actually mattered – and that's why many of them assume it is impossible. They are only imagining that proofs may be of the form that the average mammals could understand as well. All their conclusions are intuitive. But the average man's intuition breaks down in high energy physics near the Planck scale. Well, it actually breaks down much earlier.

The implicit assumption that string theorists are obliged to build some realistic experiments is just pure garbage. String theory has been a hardcore theorists' activity since the beginning. Also, from the beginning, it looked way more likely that the extra dimensions would be too small to be seen by doable experiments; for a while, larger extra dimensions were considered but even in that epoch, this possibility was considered far-fetched. There is nothing wrong about it. The claim that all physicists must be doing experiments is the Aryan Physics from Nazi Germany and it is complete nonsense.

However, the numerous string vacua with several compactified dimensions have been proven to be exactly as consistent as the vacua with 10 or 11 large spacetime dimensions. The claim (repeated on the show) that string theory predicts a wrong number of dimensions is simply false, much like almost all statements about the science itself that Sample and the vanilla critic made in the discussion. No, we don't live in 3+1 dimensions if you count the dimensions carefully.

Sample was also asking what the string is. Eleanor and David attempted to tell him that "string theory" was just a name and that behind it there is difficult mathematics he is unlikely to get. That's one important way to look at it – important because people should be told that the arguments follow mathematics and not some intuitive opinions about piano strings or something like that. Another way to look at it is that the string is a real 1-dimensional curve in space, an infinitely thin fundamental object. The positions and speeds of its points have to be treated as quantum variables etc. (and at stronger coupling, strings cease to be uniquely fundamental) but it is fundamentally obtained from something like a real thin rubber band.

Does Sample really need to ask whether strings are really strings in 2019? String theory has been around for 51 years, and for some 48 years we've known that it is a theory that can be extracted from strings. Sample himself has discussed string theory many times. Is it really appropriate for a top U.K. newspaper to ask "what is a string"? And if a listener has no clue what a string is, does it make sense for him to listen to much more complex questions about the status of string theory? Isn't it clear that such an incorporation of the listener is fraudulent? If someone doesn't even know what a string is, not even in some layman's caricature of the answer, then he is very, very far from meaningfully thinking about the state-of-the-art questions of theoretical physics. The implicit claim that he can follow the arguments is clearly a lie.

Also, we heard from Sample that string theory was more "controversial" than any theory in the history of science. Oh, really? And what about heliocentrism? Darwin's evolution theory? Relativity in Nazi Germany? Genetics in the Soviet Union? And indeed, quantum mechanics among the West's neo-Marxists of the recent 50 years? Important scientific theories often find people who oppose them. When the theories are correct, the people opposing them are pretty much idiots. That's the case of string theory's critics, too. The more idiots and the louder idiots you can find, the stronger the "controversy" will be. A theory's being "controversial" doesn't say anything whatsoever about the intrinsic properties of the theory itself. It says more about the critics. If you focus on some people's emotions and not scientific arguments themselves, then you are not doing science, you are not looking at the Universe in the scientific way, Mr Sample.

Sample has also asked: How is it possible that the brilliant young people keep on starting to work on this theory, despite Sample's and vanilla critics' constant efforts to sling mud on string theory if not ban it? Isn't it because these young people are brilliant? While you are not? Brilliant people can figure that string theory is the state-of-the-art framework in which the most accurate theory of Nature must be studied as of 2019. They can figure out that the talk in the newspapers is just misleading or downright deceitful junk which is not addressed to them. People who aren't brilliant – and especially, people who are complete idiots – can't figure this out. They're effectively on par with the average monkeys – who are willing to absorb moronic slogans from vanilla critics. And that's why they get easily manipulated by the garbage in the Guardian and similar cesspools.

But the average true monkey can usually understand that it is a monkey – different from the humans. The likes of Sample apparently can't get it. He seems incapable of even inventing the answer – or "possible answer", from his careful viewpoint – that the reason why brilliant people do string theory and he doesn't is that brilliant people can get it and stupid people can't. He can't even "create" the possible explanation of the young people's interest – namely that they are right and he is wrong. The fact that the smart people still say that string theory is correct looks like some giant conspiracy theory to him. He would clearly prefer an explanation involving extraterrestrial aliens who keep the best theoretical physicists – young and older – hostage.

Sample talks about career prospects etc. But he remains completely silent about – and it seems that he is totally failing to get or acknowledge – that "beautiful minds" are actually driven by their curiosity. They want to understand how the Universe works. It isn't about careers. A mediocre person like Sample isn't curious and isn't driven by any forces besides the animal instincts or his desire to make some money but he should be able to understand that some people simply are better and more "beautiful minds" than he is. These activists refuse to get it. Note that all the filthy anti-physics websites talk about the money all the time (how they can rob this group of physicists or another of some money) – and they don't care about the actual science. This clash is really both intellectual and ethical, string theorists are generally the good guys, and the anti-string individuals are the villains.

Eleanor and David were effectively making the same points as I do but they made the statements in such a careful and polite form that they were unavoidably overlooked – and sometimes deliberately overlooked. I think that they sounded pleasant – but too pleasant, like some folks flying in the clouds who are disconnected from the Earth. With this attitude, while facing aggressive, ignorant, prejudiced bullies and chronic liars such as the vanilla critic, science becomes indefensible in the broader society. If scientists can't comprehensibly convey the point that critics of string theory are critics of string theory because they are low-IQ and/or dishonest biological waste or jealous because they can't be better, this point will just not get to the listeners, and it will be unavoidable for science to be increasingly treated like Galileo.

David, you probably can't explain exotic branes in M-theory to too many Guardian listeners – although I do think that a healthy society would have radio stations that would try to explain exotic branes, too. But the sociological fact that various people who discuss these things belong to groups that differ by some 40 IQ points and by the equivalent of 5-10 years of intense study is something that you and others should be capable of communicating. What is at stake is the public's understanding of the very existence of science – and its basic preconditions such as the freedom of research without bullying, and hard work where some loudly pronounced demagogy isn't the ultimate weapon.

It's questionable whether it's a good idea for scientists to help the journalists with creating content whose anti-scientific goal is determined at the very beginning. I think that Sample has decided what the "story behind the discussion should be" in advance – "string theory is controversial" – and nothing could change that plan because the likes of Sample don't care about the facts and arguments as they appear. Isn't a string theorist who helps this content to be produced a useful idiot unwillingly helping the ongoing campaign to delegitimize science? Isn't it wiser to acknowledge the reality – that lots of journalists are just very hostile towards science – and ignore or boycott these journalists?

by Luboš Motl (noreply@blogger.com) at July 09, 2019 08:41 AM

July 08, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Innovative Strategies: Why Over-the-top Advertising is Worth It

According to an article by eMarketer, the number of over-the-top (OTT) video service subscribers may rise to 181.5 million this year, comprising 54.7 percent of the total population in the country. These numbers indicate that more viewers prefer watching through online streaming sites than on traditional cable TV.

As consumers of online video streaming increased, paid cable TV subscribers decreased, with more than 39.3 million users in 2019 cutting their cable TV subscriptions. It’s evident by now that there’s a shift in consumer preference regarding where they watch their shows.

This shift should prompt you to revisit your advertising strategies. Since your primary goal is to reach your audience and let them know about your brand, your products, and your services, you should also be present on whatever platform they’re in.

OTT advertising services help you reach this goal seamlessly. Find out why OTT advertising is worth investing in.

OTT Advertising: The Next Big Thing

Over-the-top video service refers to content providers that stream videos and advertisements directly to viewers through the internet. OTT bypasses multichannel television, telecommunications, and broadcast television platforms in distributing advertised content.

OTT is different from advertising on connected TV because the latter refers explicitly to the device where your consumers stream online content. These devices include smart TVs with an internet connection, gaming consoles, and set-top boxes you can connect to your regular TV for internet access.

Benefits to Your Business

Businesswoman using her phone and laptop

OTT advertising delivers highly engaging, targeted content.

A FreeWheel study reveals that OTT viewers complete 98 percent of video ads, compared with the lower completion rates on non-OTT devices. Streaming video viewers on connected TV exhibit a higher level of brand awareness and brand favourability than desktop users.

Additionally, a recent streaming meter data report by Nielsen shares that consumers with OTT devices at home spend one out of every ten minutes of TV usage on OTT platforms. Users can access content through a variety of interfaces, slowly growing accustomed to OTT as a primary source of content.

An advantage of using OTT is that it has the capability to precisely target viewers down to their zip codes, through gathering and analyzing factors like demographics, interests, and lifestyle. Advanced analytics allow advertisers to target audiences directly and measure OTT campaign results effectively.

OTT platforms are closed, controlled streaming systems. Your viewers can’t skip ads, jump to another browser, or install an adblocker; therefore, they’re forced to watch your advertisement from start to end. This practice increases your brand awareness as viewers wait for your ad to finish between shows.

The OTT advertising market is virtually untapped. Currently, it’s the only alternative to traditional TV commercials. Through OTT advertising, you can market your brand to your target audience through non-skippable ads between videos and shows streaming online.

OTT platforms include sites like Hulu, Roku, HBO Go, Netflix, and Amazon Prime. Not every online streaming platform provides ad services, so if you want to tap into this type of advertising, you should look up which sites allow you to insert your ads.

Investing in OTT advertising allows you to reach a broader range of audiences, tapping into traditional TV users and those who prefer online streaming. Unskippable ads help increase your brand awareness because most viewers, including your target market, will likely see your advertisements.

The post Innovative Strategies: Why Over-the-top Advertising is Worth It appeared first on None Equilibrium.

by Bertram Mortensen at July 08, 2019 09:16 PM

Sean Carroll - Preposterous Universe

Spacetime and Geometry: Now at Cambridge University Press

Hard to believe it’s been 15 years since the publication of Spacetime and Geometry: An Introduction to General Relativity, my graduate-level textbook on everyone’s favorite theory of gravitation. The book has become quite popular, being used as a text in courses around the world. There are a lot of great GR books out there, but I felt another one was needed that focused solely on the goal of teaching students general relativity. That might seem like an obvious goal, but many books also try to serve as reference books, or to put forward a particular idiosyncratic take on the subject. All I want to do is to teach you GR.

And now I’m pleased to announce that the book is changing publishers, from Pearson to Cambridge University Press. Even with a new cover, shown above.

I must rush to note that it’s exactly the same book, just with a different publisher. Pearson was always good to me, I have no complaints there, but they are moving away from graduate physics texts, so it made sense to try to find S&G a safe permanent home.

Well, there is one change: it’s cheaper! You can order the book either from CUP directly, or from other outlets such as Amazon. Copies had been going for roughly $100, but the new version lists for only $65 — and if the Amazon page is to be believed, it’s currently on sale for an amazing $46. That’s a lot of knowledge for a minuscule price. I’d rush to snap up copies for you and your friends, if I were you.

My understanding is that copies of the new version are not quite in stores yet, but they’re being printed and should be there momentarily. Plenty of time for courses being taught this Fall. (Apologies to anyone who has been looking for the book over the past couple of months, when it’s been stuck between publishers while we did the handover.)

Again: it’s precisely the same book. I have thought about doing revisions to produce an actually new edition, but I think about many things, and that’s not a super-high priority right now. Maybe some day.

Thanks to everyone who has purchased Spacetime and Geometry over the years, and said such nice things about it. Here’s to the next generation!

by Sean Carroll at July 08, 2019 08:03 PM

Emily Lakdawalla - The Planetary Society Blog

What to Expect when Chandrayaan-2 Launches to and Lands on the Moon
Liftoff is scheduled for 14 July 2019 at 21:21 UTC (15 July 2019 at 02:51 IST local time), atop a GSLV-MKIII rocket from India.

July 08, 2019 11:00 AM

July 06, 2019

Jon Butterworth - Life and Physics

Neutrino platform
Do strategies make a difference? On Monday we had a meeting of UK particle physicists at the shiny new Institute of Physics building (beta version here, well worth a visit if you are in the Kings Cross area) to discuss … Continue reading

by Jon Butterworth at July 06, 2019 07:59 AM

July 03, 2019

John Baez - Azimuth

Applied Category Theory 2019 – Program

Bob Coecke, David Spivak, Christina Vasilakopoulou and I are running a conference on applied category theory:

Applied Category Theory 2019, 15–19 July, 2019, Lecture Theatre B of the Department of Computer Science, 10 Keble Road, Oxford.

You can now see the program here, or below. Hope to see you soon!

by John Baez at July 03, 2019 10:25 PM

July 02, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Big Picture Planning: The Pivotal Role of Vendor Management

The services provided by partner companies can make or break an enterprise in the long haul. Vendor management plays a crucial role in maximizing company efficiency while reducing long-term costs through fair agreements with companies to keep.

Vendor management plays a critical role in ensuring both quality and value from suppliers and service providers. The vendor manager is especially prominent in most businesses and organizations today, assisting the enterprise in acquiring and vetting software solutions that offer the best quality for the best price. The job of those in vendor management is that of continuous negotiation for the best offers from both the enterprise and its suppliers.

Contrary to what such a role would imply, however, proper vendor management is not about penny-pinching bargain hunting and haggling. Rather, vendor management aims to look for supplies and services that provide the business with the best amount of value, which in many cases would be neither the cheapest option nor the flashiest. Reliable companies to keep such as ServiceNow can provide businesses with plenty of value throughout their partnership.

The Bigger Picture

Effective vendor management examines the big picture costs of each vendor to determine whether the services offered by the vendors are appropriate for both the enterprise’s needs and budget. The up-front fees of the service are just part of the equation. Although essential, price is not sufficient to seal the deal, and the cheapest solutions are hardly the best ones. Other factors include how versatile the services offered are and how intuitively the new software could be used by the employees at large.

The vendor management team plays a careful balancing act with the vendors the company keeps. Although it is costly to change vendors constantly, it is also a poor choice to be locked into a specific provider and their services even when those services prove to no longer be adequate for the company’s needs.

Navigating Negotiations

An effective vendor management team steers a company toward a good deal with a vendor. The deals struck with vendors should be in good faith with respect to what the company stands to gain and the price that the vendor deems fair.

To make this decision, the company’s bottom line over a longer time horizon should always be considered; a trustworthy and reliable merchant partner delivers value to the enterprise that far outweighs any short-term savings. Vendor managers and the companies they represent should be prepared to accept a vendor partner’s reasonable rate hike if it guarantees reliable or superlative services.

Vendor managers should also realize the important role played by vendors in the company strategy and, whenever possible, invite them to strategic meetings to share insight or advice. As trusted experts in their respective fields, vendors could offer the insights and proficiency that give companies a much-needed competitive edge to bolster their strategies.

The Long Haul

Hardware managers talking to each other while looking at tablet

Vendor management’s ultimate aim is to cultivate a long-term partnership with a vendor. The right vendor is hard to find; one that provides a satisfactory and innovative product and delivers excellent services for a fair price is a company to keep.

Switching vendors based solely on the prices they offer will not bode well for the company in the long run, as employees lose productivity in trying to adapt to a new system that may not be the best fit for the company’s needs. A good partner company, no matter what the initial costs, can and will save an enterprise money in the long run.

The post Big Picture Planning: The Pivotal Role of Vendor Management appeared first on None Equilibrium.

by Bertram Mortensen at July 02, 2019 01:00 AM

July 01, 2019

John Baez - Azimuth

Structured Cospans

My grad student Kenny Courser gave a talk at the 4th Symposium on Compositional Structures. He spoke about his work with Christina Vasilakopoulou and me. We’ve come up with a theory that can handle a broad class of open systems, from electrical circuits to chemical reaction networks to Markov processes and Petri nets. The idea is to treat open systems as morphisms in a category of a particular kind: a ‘structured cospan category’.

Here is his talk:

• Kenny Courser, Structured cospans.

On July 11th I’m going to talk about structured cospans at the big annual category theory conference, CT2019:

• John Baez, Structured cospans.

I borrowed more than just the title from Kenny’s talk… but since I’m an old guy, they’re giving me time to say more stuff. For full details, try Kenny’s thesis:

• Kenny Courser, The Mathematics of Open Systems from a Double Categorical Perspective.

This thesis is not quite in its final form, so I won’t try to explain it all now. But it’s full of great stuff, so I hope you look at it! If you have any questions or corrections please let us know.

We’ve been working on this project for a couple of years, so there’s a lot to say… but right now let me just tell you what a ‘structured cospan’ is.

Suppose you have any functor L \colon \mathsf{A} \to \mathsf{X}. Then a structured cospan is a diagram like this:

For example if L \colon \mathsf{A} \to \mathsf{X} is the functor from sets to graphs sending each set to the graph with that set of vertices and no edges, a structured cospan looks like this:

It’s a graph with two sets getting mapped into its set of vertices. I call this an open graph. Or if L \colon \mathsf{A} \to \mathsf{X} is the functor from sets to Petri nets sending each set to the Petri net having that set of places and nothing else, a structured cospan looks like this:

You can read a lot more about this example here:

• John Baez, Open Petri nets, Azimuth, 15 August 2018.

It illustrates many ideas from the general theory of structured cospans: for example, what we do with them.
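To make the shape of the construction concrete, here is a small Python sketch of the open-graph special case. This is purely illustrative code, not anything from Kenny’s thesis or the talks above; all the names and representations are invented. An ‘open graph’ is stored as a graph together with two maps of finite sets into its vertices, and composition glues along the shared boundary by a pushout of finite sets, i.e. a disjoint union with identifications (computed here with a little union-find):

```python
class OpenGraph:
    """An open graph: a graph with two boundary sets mapped into its vertices."""
    def __init__(self, vertices, edges, left, right):
        self.vertices = set(vertices)   # V
        self.edges = set(edges)         # subset of V x V
        self.left = dict(left)          # map A -> V
        self.right = dict(right)        # map B -> V

def compose(g, h):
    """Compose g : A -> B with h : B -> C by a pushout over the shared B."""
    assert set(g.right) == set(h.left), "boundaries must match"
    v1 = {v: ("g", v) for v in g.vertices}   # tag vertices so the
    v2 = {v: ("h", v) for v in h.vertices}   # disjoint union is honest
    parent = {x: x for x in list(v1.values()) + list(v2.values())}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    # Identify g.right(b) with h.left(b) for every boundary element b.
    for b in g.right:
        union(v1[g.right[b]], v2[h.left[b]])
    vertices = {find(x) for x in parent}
    edges = {(find(v1[s]), find(v1[t])) for (s, t) in g.edges} \
          | {(find(v2[s]), find(v2[t])) for (s, t) in h.edges}
    left = {a: find(v1[g.left[a]]) for a in g.left}
    right = {c: find(v2[h.right[c]]) for c in h.right}
    return OpenGraph(vertices, edges, left, right)
```

Composing an open graph from A to B with one from B to C yields an open graph from A to C whose vertex set is exactly the pushout; because the glued vertices are merged equivalence classes, the composite doesn’t depend on how the interior vertices were named.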

You may have heard of a similar idea: ‘decorated cospans’, invented by Brendan Fong. You may wonder what’s the difference!

Kenny’s talk explains the difference pretty well. Basically, decorated cospans that look isomorphic may not be technically isomorphic. For example, if we have an open graph like this:

and its set of edges is \{a,b,c,d\}, this is not isomorphic to the identical-looking open graph whose set of edges is \{b,c,d,e\}. That’s right: the names of the edges matter!

This is an annoying glitch in the formalism. As Kenny’s talk explains, structured cospans don’t suffer from this problem.

My talk at CT2019 explains another way to fix this problem: using a new improved concept of decorated cospan! This new improved concept gives results that match those coming from structured cospans in many cases. Proving this uses some nice theorems proved by Kenny Courser, Christina Vasilakopoulou and also Daniel Cicala.

But I think structured cospans are simpler than decorated cospans. They get the job done more easily in most cases, though they don’t handle everything that decorated cospans do.

I’ll be saying more about structured cospans as time goes on. The basic theorem, in case you’re curious but don’t want to look at my talk, is this:

Theorem. Let \mathsf{A} be a category with finite coproducts, \mathsf{X} a category with finite colimits, and L \colon \mathsf{A} \to \mathsf{X} a functor preserving finite coproducts. Then there is a symmetric monoidal category {}_L \mathsf{Csp}(\mathsf{X}) where:

• an object is an object of \mathsf{A}
• a morphism is an isomorphism class of structured cospans:

Here two structured cospans are isomorphic if there is a commutative diagram of this form:

If you don’t want to work with isomorphism classes of structured cospans, you can use a symmetric monoidal bicategory where the 1-morphisms are actual structured cospans. But following ideas of Mike Shulman, it’s easier to work with a symmetric monoidal double category. So:

Theorem. Let \mathsf{A} be a category with finite coproducts, \mathsf{X} a category with finite colimits, and L \colon \mathsf{A} \to \mathsf{X} a functor preserving finite coproducts. Then there is a symmetric monoidal double category {_L \mathbb{C}\mathbf{sp}(\mathsf{X})} where:

• an object is an object of \mathsf{A}
• a vertical 1-morphism is a morphism of \mathsf{A}
• a horizontal 1-cell is a structured cospan

• a 2-morphism is a commutative diagram

by John Baez at July 01, 2019 12:26 AM

June 30, 2019

Jon Butterworth - Life and Physics

Off the map
Here’s a reasonable quality recording of a colloquium I gave in Valencia last month. It’s aimed at fellow particle physicists rather than the general public, but I know some of you fit that description, and some of it might be … Continue reading

by Jon Butterworth at June 30, 2019 08:55 AM

Lubos Motl - string vacua and pheno

Higgs mass from entropy maximization?
As you know, the Higgs boson is the most recently discovered fundamental particle (next Thursday, it will be 7 years from the discovery) and its mass seems to be \(m_H=125.14\pm 0.24\GeV\) or so. In various models, supersymmetric or scale-invariant or otherwise, there exist partial hints why the mass could be what it is and what this magnitude qualitatively means.

Reader T.G. had to be blacklisted because he was too vigorous and repetitive in defending the highly provocative 2014 Brazilian paper
Maximum Entropy Principle and the Higgs Boson Mass (by Alves+Dias+DaSilva, about 12 citations now)
which claims to calculate the almost identical value \(125.04\pm 0.25\GeV\) using a new assumption, entropy maximization. What are they doing?



Look at this chart which they omitted, for unknown reasons, so this blog post is more comprehensible than the paper.

The horizontal axis is the Higgs boson mass \(m_H\) and the vertical axis shows the branching ratios of Higgs decays – the probability that the Higgs decays to some final products or others. You may see that near the observed value \(m_H \sim 125\GeV\), there are relatively many decays that are relatively close to each other. None of them is dominant and beating all others etc.



OK, the Brazilian folks simply postulated an obvious idea that you could have – what if Nature tries to maximize the diversity? OK, so they took the branching ratio as a function of the variable Higgs mass from some software on the market and maximized the Shannon entropy\[

S = -\sum_{i=1}^m b_i (m_H)\ln b_i (m_H)

\] where \(b_i\) are the branching ratios i.e. the probabilities of qualitatively different decays.



What are the decays in the Standard Model? They are
  • \(\gamma\gamma\), \(W^\pm W^{\mp *}\), \(ZZ^*\), \(\gamma Z\)
  • \(e^+e^-\), \(\mu^+\mu^-\), \(\tau^+\tau^-\)
  • \(6\times q\bar q\)
  • \(gg\)
The asterisk indicates that one of the particles is virtual – which is the case if the real particles in the final state would be too heavy.

OK, it's the three "particle-antiparticle" pairs of electroweak gauge bosons, the fourth similar but asymmetric decay is \(Z\gamma\), then there is the gluon pair, six quark pairs, and three charged lepton pairs. Now, the top quark is too heavy, heavier than the Higgs boson, so in the range of possible Higgs masses below the top quark mass, both tops would have to be virtual and the top-antitop Higgs decay is very unlikely.

That means that instead of fourteen, they just consider \(m=13\) decays, calculate the entropy as a function of the Higgs mass, and claim that \(125\GeV\) is the value that maximizes the entropy – or the diversity, if you wish. Cute. The arguments in favor of this discovery are obvious:
The maximum entropy principle tells you that if you know a probability distribution imperfectly, you should choose the distribution that maximizes the Shannon entropy \(-\sum_i p_i \ln p_i\). It's surely a special distribution in any restricted subclass, there is something canonical about it. However, the precise explanation of the sense in which the Shannon-maximizing distribution is "the best" is subtle and people may easily overstate its importance or unavoidability. In particular, I would say that if you obtain some probability distribution by proper Bayesian inference, you shouldn't replace it with a different one just because this different one maximizes the Shannon entropy. Instead, the prescription is only valid if you know the probability distribution "incompletely". But an incomplete distribution with "holes" etc. is something that you can't really get from measurements of the system if those are complete enough.

Nevertheless, the maximum entropy distribution principle does recommend you to choose the "maximally ignorant", egalitarian-divided probabilities for the degrees of freedom that are unknown and whose uncertainty is unknown. OK, they maximize the function of the Higgs mass and claim that \(125\GeV\) is the sweetest spot.
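If you want to see the skeleton of their procedure, here is a rough Python sketch. The branching-ratio curves below are made-up toy functions – the real \(b_i(m_H)\) would have to come from a dedicated code such as HDECAY – so only the structure of the maximization, not the numbers, is meaningful.

```python
import math

def shannon_entropy(branching_ratios):
    """S = -sum_i b_i ln b_i for branching ratios summing to 1."""
    return -sum(b * math.log(b) for b in branching_ratios if b > 0)

# Toy stand-ins for the m_H-dependent branching ratios b_i(m_H).
# These numbers are invented for illustration only.
def toy_branching_ratios(mh):
    raw = {
        "bb":         max(0.6 - 0.004 * (mh - 115), 0.01),  # falls with m_H
        "WW":         0.05 + 0.015 * max(mh - 115, 0),      # rises with m_H
        "ZZ":         0.01 + 0.002 * max(mh - 115, 0),
        "tautau":     0.06,
        "gg":         0.08,
        "cc":         0.03,
        "gammagamma": 0.002,
    }
    total = sum(raw.values())
    return [b / total for b in raw.values()]   # normalize to 1

# Scan a mass range and pick the entropy-maximizing mass, which is what
# the paper's procedure does with its real branching-ratio curves.
masses = [115 + 0.5 * k for k in range(61)]    # 115..145 GeV
best = max(masses, key=lambda mh: shannon_entropy(toy_branching_ratios(mh)))
```

With the genuine Standard Model curves, the claim is that this one-line maximization lands near \(125\GeV\); with toy curves like the ones above you only see that the maximum sits where no single channel dominates.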

You may get overly excited by the positive arguments and neglect the doubts – suppress your skepticism. I think that in that case, you must be considered a numerologist. Like a broken clock, a numerologist may be right twice a day, of course. But the reasons to dismiss the result are really more powerful and fall into three categories:
  • errors and unnatural choices in the calculation even if you accept the fundamental premises
  • the apparent inability to calculate anything beyond the single number, the Higgs mass, using this apparently ambitious principle
  • the acausal character of the implicitly suggested "mechanism" which indicates that it should be impossible for such a maximization rule to operate in Nature
Concerning the first class of the complaints, I think it must be wrong that they consider just the 13-14 channels and the corresponding 13-14 terms in their entropy. Why? Because locally, the color of the quarks and gluons should be considered distinguishable.

They were not calculating any "well-established kind of entropy in the context" or the "real entropy of any particular physical system". They just took the Shannon entropy formula and substituted some numbers that look "marginally sensible" to be substituted. Because there's no meaningful underlying theory, I can't prove what is "right" and "wrong". Their formula is really their axiom, so it's "right" in their axiomatic system.

But I find it extremely unnatural that there is no coefficient of \(3\) in front of their terms for the quark channels; and the corresponding factor of \(8\) for the gluon channel. For example, the gluon branching ratio should really be divided into 8 equal pieces \(b_{gg}/8\) and their logarithms additively differ by \(-\ln 8\). These extra terms \(\ln 8\) multiplying the gluon terms in their equation would modify the function \(S(m_H)\) and the maximization procedure.
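It is easy to check how much such a degeneracy factor matters: splitting one channel with branching ratio \(b\) into \(N\) equal sub-channels shifts the Shannon entropy by exactly \(+b\ln N\), a mass-dependent shift since \(b=b(m_H)\). A toy check in Python, with invented branching ratios:

```python
import math

def entropy(ps):
    """Shannon entropy -sum p ln p, skipping empty channels."""
    return -sum(p * math.log(p) for p in ps if p > 0)

def split_channel(ps, i, n):
    """Replace channel i of the distribution by n equal sub-channels."""
    return [p for j, p in enumerate(ps) if j != i] + [ps[i] / n] * n

# Invented branching ratios with a "gluon" channel b_gg = 0.08 at index 0.
b = [0.08, 0.60, 0.22, 0.10]
s_coarse = entropy(b)
s_fine = entropy(split_channel(b, 0, 8))   # 8 color copies (toy count)
shift = s_fine - s_coarse                  # equals b_gg * ln 8 exactly
```

Since the shift \(b_{gg}(m_H)\ln 8\) varies with the Higgs mass, including or omitting the color factors moves the location of the entropy maximum, which is the point of the complaint above.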

Concerning the second class of the complaints, their entropy maximization principle seems really cool. It is numerologically claimed to work for the Higgs boson. But if such a principle worked, wouldn't it be strange that it only works for the Higgs boson mass? The Higgs boson mass is just one parameter of the Standard Model. Shouldn't it work for the quark and lepton masses – or their Yukawa couplings – as well? Or the masses of the electroweak gauge bosons and/or the gauge couplings? Even if the principle only determined one parameter, shouldn't it be a more generic function of the Higgs mass and other parameters rather than the Higgs mass itself? Why the precise Higgs mass, a particular coordinate on the parameter space? And shouldn't such a principle ultimately determine even the constants that don't seem to be associated with decays such as the cosmological constant?

You know, the claim – pretty much a claim that they try to hide – that such a procedure only determines the Higgs mass seems like a classic sign of a numerological fallacy. Numerologists love to take some number, completely take it out of the context, and produce some "calculation" of this number. They ignore that if some deep principle determines this number, the same principle should really determine many other numbers. The numerological derivations of a number have usually nothing to do with the "context", what the mathematical constant is actually supposed to represent. By definition, numerologists are too focused on patterns in numbers and largely ignore what the numbers are supposed to mean.

They don't seem to discuss this problem at all, which indicates either that they're deliberately obfuscating problems, which is dishonest, or that they don't understand why this is a problem for almost all similar numerological determinations of any constants. In both cases, it's just bad. Aside from their overlooking of the color degeneracy factors, this is another reason to conclude that they're simply not careful physicists. And this conclusion makes it likely that they have also made some other errors, perhaps completely numerical ones, but (because my belief in the paper is close to zero) I totally lack the motivation to find the answer to the question whether such mistakes also plague the paper.

Concerning the third complaint, well, such a maximization of entropy should be impossible for causal reasons. The higher diversity of the Higgs decays doesn't seem to be "useful" for explaining anything; there is no known carefully verified rational reason why it should be true. Unlike the extreme anthropic principle that favors "universes with many intelligent observers" because the intelligent observers are not only "like us" but useful for doing any science, and in this sense a desirable component of the universe, the diversity of the Higgs decays doesn't seem to be good for anything. So the justification is even more absent than in the case of the "strong anthropic principle" – and that is already pretty bad.

The reason why this diversity should be a "cause" of the selection of the Higgs mass is lacking. Even more seriously, one may apparently prove that such a determination should be impossible. Why? Because the selection of the Higgs mass – and of parameters resulting from some vacuum selection – took place when the Universe was extremely young, dense, and hot. Perhaps Planckian. And maybe even more extreme than that. At that time, Higgs bosons didn't have the low energy and didn't have the freedom to decay to something at low energies "almost in the vacuum". Everything had huge energy and was interacting with other particles – whose density was huge – constantly.

So the "13-14 low-energy decay channels of a Higgs boson" weren't even an important part of the physics that governed the very early Universe when the vacuum selection choices were made! So how could the Universe make a choice that would maximize some entropy calculated from some low-energy phenomenological functions – which only seemed empirically relevant much later (but still a fraction of a second after the Big Bang)? It just doesn't make any sense. Such a mechanism could only work if the causal ordering were suppressed (which would almost unavoidably imply a conflict with the usual, causal laws of Nature that determine the evolution differently) and the universe were really planning the future in a teleological way. But why should exactly this kind of diversity be God's plan?

Also, many superficial people who just defend some "entropy maximization" typically fail to understand that the right reasons and mechanisms for the entropy maximization in physics are known. They boil down to the second law of thermodynamics. The entropy goes up because the probabilities of a transition between the "initial ensemble of states" and the "final ensemble of states" are averaged over the initial states but summed over the final ones. That's why the probability of the inverse process is effectively suppressed by the factor of \(N_i/N_f\) which is why the evolution favors the evolution to higher-entropy states. This is the cleanest justification why the entropy doesn't want to go down.
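A toy model makes this counting explicit. Take any time-reversal-symmetric dynamics on microstates – an involution is the simplest case – with a small macrostate of \(N_i\) microstates and a large one of \(N_f\). Averaging over the initial microstates but summing over the final ones makes the reverse transition exactly \(N_i/N_f\) times as likely as the forward one. A sketch (all numbers below are invented):

```python
import random

random.seed(0)

# Microstates 0..99; macrostate A has N_i = 10 microstates, B the other 90.
N_i, N_f = 10, 90
states = list(range(N_i + N_f))
A = set(states[:N_i])

# A random involution as a toy time-reversal-invariant dynamics:
# shuffle the microstates and swap them in disjoint pairs.
perm = states[:]
random.shuffle(perm)
step = {}
for i in range(0, len(perm), 2):
    x, y = perm[i], perm[i + 1]
    step[x], step[y] = y, x

# For an involution, the number of microstate transitions A -> B equals
# the number B -> A.  Averaging over the (uniform) initial macrostate:
crossings_AB = sum(1 for x in A if step[x] not in A)
crossings_BA = sum(1 for x in states if x not in A and step[x] in A)
p_forward = crossings_AB / N_i      # P(A -> B)
p_reverse = crossings_BA / N_f      # P(B -> A), suppressed by N_i/N_f
```

Since the microstate crossings are equal in both directions, `p_reverse / p_forward` is exactly `N_i / N_f` whenever any crossings occur, which is the suppression factor quoted above.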

The second law of thermodynamics is a qualitative law but we actually know these quantitative proofs of this law and these derivations – similarly the H-theorem (the objection that it depends on "shaky" assumptions such as the ergodic principle is bogus – all these assumptions are surely valid in practice) – tell us not only that the entropy goes up but also how much and why. If you postulate another "high entropy wanted" law for Nature, it may look like being "morally allied" with the second law of thermodynamics. But because the details of your law – how high the entropy wants to be and why – will be different, your new law will actually contradict the well-established detailed derivations behind the second law!

So the paper is hopeless.

Nevertheless, over the years and even recently, I've spent dozens of hours on "spiritually similar" attempted derivations. In particular, those derivations were a part of my Hawking-Hartle research. The Hawking-Hartle state is the preferred wave function of the universe – especially applicable as the initial state of the universe – which is a solution of the Wheeler-DeWitt equation and may be obtained as a path integral in a spacetime region that isn't bounded by two boundaries (carrying the initial and final state), as appropriate for the calculation of the evolution or S-matrix, but just one boundary (a three-sphere surrounding the Big Bang point).

The Hartle-Hawking state is clearly a possible paradigm to explain the parameters of Nature that won't go away unless another paradigm, like the multiverse, is really established – or some remarkable, rigorously provable bug is found in the Hartle-Hawking principle of the most general type. Or someone makes the Hartle-Hawking paradigm rigorous and quantitative and checks that it makes wrong predictions. But the Hartle-Hawking paradigm hasn't been terribly successful and naive, minisuperspace calculations of the Hartle-Hawking state dominate the literature.

Well, I always wanted to apply it as a rule to determine the right vacuum of string/M-theory. All the numerous details about the compactified dimensions could arise from the paradigm – while the non-stringy Hartle-Hawking literature is obviously too obsessed with the four large spacetime dimensions. If that's true, the Hartle-Hawking wave function could be peaked near the vacua with some qualitative properties. It could be peaked around vacua with a low cosmological constant or high Planck-electroweak hierarchies or high hierarchies in general or low Hodge numbers of the Calabi-Yau manifolds, among other "preferred traits".

If the Hartle-Hawking paradigm is correct at all, and if string/M-theory is correct, which are two independent assumptions, then it seems extremely likely that the Hartle-Hawking state would prefer some qualitative traits of the string vacua. And they could be directly relevant for the explanation of some observed traits in Nature – such as the observation of some particular hierarchies or deserts.

The Hartle-Hawking state would still allow many different vacua because it gives rise to a smooth probability distribution. But it could be peaked and the peak could be rather narrow near some point. The maximization needed to find such a point could be mathematically analogous to the Brazilian paper that I have discussed above.

But the details matter. The Devil is in the details. And the details – which are not really small details, if you look at it rationally – imply that this Brazilian paper is hopelessly wrong and irrational. If someone is on a mission to promote it on the Internet and insult everybody who has good reasons not to take this Brazilian paper seriously at all, it's a problem and a ban becomes the optimal solution.

by Luboš Motl (noreply@blogger.com) at June 30, 2019 07:15 AM

Lubos Motl - string vacua and pheno

Are feeling-based popular articles about symmetries helpful?
K.C. Cole is one of the better science writers – who is surely choosing better sources for her texts than almost all other writers about physics – and she just published a new text in the Quanta Magazine:
The Simple Idea Behind Einstein’s Greatest Discoveries
The title is friendly towards symmetries, as you can see, and many parts of her text try to suggest details about the importance of symmetry in the 20th and 21st century physics. The subtitle is unfriendly, however:
Lurking behind Einstein’s theory of gravity and our modern understanding of particle physics is the deceptively simple idea of symmetry. But physicists are beginning to question whether focusing on symmetry is still as productive as it once was.
I concluded that the real intended story is that the symmetries are no longer considered as fundamental as they used to be. And I think that such a statement would be correct – although this transition didn't really take place in 2019 but rather in the 1990s or 1980s. However, I don't think that the body of Cole's article actually contains evidence that a rational reader could consider a justification of her subtitle.



OK, her article touches many topics related to symmetries: special relativity and the Lorentz invariance, beauty of equations according to Dirac, general relativity and its vaguely suggested connection with symmetries, Emmy Noether, global symmetries vs gauge symmetries, gauge symmetries' being redundancies, spontaneously broken symmetry as a non-symmetry of solutions when the equations are symmetric, and many other topics.



But each of these topics is sketched or mentioned so superficially (and she jumps from one topic to another so quickly) that I find it inconceivable that someone could actually understand any of these insights, even partially, after having read the article. To be more precise, I haven't been persuaded that the writer actually understands any of these things herself. Like how the Lorentz symmetry acts in special relativity. Or what the spontaneous symmetry breaking does in physics.

Or what Emmy Noether is actually famous for. Cole seems to promote partly wrong statements, such as the claim that Noether has linked the energy conservation in GR to the symmetry exchanging reference frames. Well, the relevant symmetry transformation related to the energy conservation law is a time translation, not really something that we consider "switching between reference frames". On top of that, these coordinate-changing symmetries are just redundancies (local symmetries) in GR, so there's no general [total] energy conservation law in generic spacetimes of GR! The sentence seems to be defective at so many levels that I concluded it can't be a bunch of coincidences. The writer must really fail to understand how it works.
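To recall the simplest textbook form of Noether's link: for a mechanical system whose Lagrangian \(L(q,\dot q)\) has no explicit time dependence, the conserved quantity associated with time translations is the energy\[

E = \sum_i \frac{\partial L}{\partial \dot q_i}\,\dot q_i - L, \qquad \frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0,

\] which is exactly the statement that energy conservation follows from time translations – not from "switching between reference frames".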

Despite this big uncertainty about the writer's understanding of anything, she nevertheless feels confident enough to present far-reaching conclusions such as "the symmetry isn't as productive as it used to be". Is that statement right, a rational reader must ask? Can the communication of such feelings – and they're really feelings, not conclusions of any rational analysis – be helpful in guiding readers' search for their own understanding? I just doubt it.

Does it make sense for a youngster interested in physics to read articles about symmetry that are written by someone who doesn't actually understand any of the stuff at a level recognizable to the experts? My answer is No.

In the past, good popular writing about science was sort of analogous to textbooks that actually try to train someone so that he really understands the stuff and may become a professional. Popular books – e.g. Martin Gardner's books about relativity – were more playful, slower, using some stories, but the basic goal was the same as the goal of the textbooks.

I think it is no longer the case. Articles such as Cole's don't actually want to teach anything to anybody and they can't even persuade the experts that the writer actually understands what she is writing about. Articles such as Cole's are communicating some feelings and some predetermined, intrinsically political conclusions that the readers are expected to parrot. The reader should simply take it as a fact – because it's written by a person widely called a "science writer" – that symmetry used to be very productive but it is no longer the case. It's too bad because it has very little to do with science and the scientific method of changing collective or individual opinions about anything. The word "science" is just being stolen for something that is no longer "science".

OK, where are the places in the body of the article that show that practitioners no longer consider symmetry productive? Here is the first one, I think:
There has been, in particle physics, this prejudice that symmetry is at the root of our description of nature.
Right. I would agree with Justin Khoury. It's been a pillar of some faith, some armchair physicists still look at the things in this way, and real experts generally believe that this complete obsession with symmetries is wrong today. And Khoury has used a relatively hostile word "prejudice" for what used to be the status quo.

Well, it may have been a prejudice. But you could also call it a "lore". I think that Khoury wouldn't protest if the word "lore" had been used instead of "prejudice". It doesn't really make any difference for a physicist – but for journalists like K.C. Cole who are obsessed with feelings, the difference between "prejudice" and "lore" could be substantial. Special relativity is built from the Poincaré symmetry, GR is built from the diffeomorphism symmetry, the Standard Model may be built from the \(SU(3)\times SU(2)\times U(1)\) gauge symmetry, and so on. That was the picture in the past – a reason why physicists tended to consider the symmetry primary.

We no longer see it in this way – because symmetries may be emergent, accidental, flexible, appearing, disappearing, and gauge symmetries aren't real physical symmetries because they're redundancies that depend on the chosen description of physics – but Cole hasn't really explained why. She doesn't really explain why the gauge symmetry is just a redundancy. She doesn't explain how the descriptions with different gauge symmetries may be dual to each other, how they may be gauge-fixed or deformed to each other on the moduli space, and so on. So the reader is just expected to mindlessly copy, without any real evidence, the opinion of K.C. Cole that "symmetry is no longer as productive as it used to be" although there is no real justification of that assertion in the article – unless we count some negative-sounding words in someone's quote, such as "prejudice", and indeed, we shouldn't pay attention to those.

That's bad. Whether the reader understands one thing or another about symmetry has some limited importance but there is something more important at stake. For a reader to be scientifically literate, he just shouldn't accept assertions that aren't justified by any evidence. This kind of "skepticism" or "caution" is an essential prerequisite for the scientific thinking about anything. And this article is an example where the principle fails. If the reader behaves rationally, the article won't really move him by a micron. It's a useless article. A necessary condition for the article to actually move someone's ideas somewhere is that he blindly believes what he reads. And that's a deviation from the scientific discourse that is much more devastating than a misunderstanding of one technicality involving the notion of symmetry or another.

Well, there are two more segments in the article that could be claimed to be "the evidence that the importance of symmetry has dropped":
Over the past several decades, some physicists have begun to question whether focusing on symmetry is still as productive as it used to be. New particles predicted by theories based on symmetries haven’t appeared in experiments as hoped, and the Higgs boson that was detected was far too light to fit into any known symmetrical scheme. Symmetry hasn’t yet helped to explain why gravity is so weak, why the vacuum energy is so small, or why dark matter remains transparent.

[...]

At the same time, symmetry-based reasoning predicted a slew of things that haven’t shown up in any experiments, including the “supersymmetric” particles that could have served as the cosmos’s missing dark matter and explained why gravity is so weak compared to electromagnetism and all the other forces.
The only problem is that these assertions are mostly wrong. The new particles haven't shown up in experiments so far, but there has never been any sufficiently universally accepted prediction of a whole framework – as opposed to particular competing models that make lots of assumptions – that some new particles should have shown up by now. All the predictions assuming naturalness are probabilistic predictions that have never guaranteed and could never guarantee the discovery of anything. Supersymmetric particles remain the most well-motivated possible particles in new physics.

Nothing has qualitatively changed. It has always been known that supersymmetry and perhaps other symmetries relating the Higgs boson to something else etc. have to be broken. And all these qualitative things remain true and promising possibilities. In particular, it's about as likely as it was in the 1990s that the weakness of gravity or the smallness of the cosmological constant has an explanation in which symmetries play an important role. The lightness or stability of the dark matter probably depends on some new symmetry, too. These possibilities have not been ruled out or falsified, at least not if one uses the scientific definitions of these verbs.

Symmetries still play many roles in all of that – and they unavoidably appear in newly proposed explanations of observations in physics. Supersymmetry is a symmetry, or a Grassmannian generalization of Lie algebras. Also, symmetries surely do explain why dark matter is transparent, whatever it is. Dark matter is dark because it doesn't interact with the electromagnetic field and it doesn't interact with the electromagnetic field because its pieces are electrically neutral – i.e. invariant under the electromagnetic \(U(1)\) symmetry transformations! K.C. Cole is basically saying that the notion of symmetry is being expelled from the reasoning about all these questions but it's obviously complete nonsense. It will never be expelled because these are settled connections and explanations and the symmetry will never lose its role in those explanations. The only thing that is happening is that there may be a deeper explanation of these known things where the symmetries aren't among the most fundamental concepts.

The statement that "symmetries don't explain why dark matter is transparent" is either wrong or vacuous and no real expert would make such a statement. (I just picked an example but the same criticism applies to many other propositions in Cole's article.) But in the genre such as K.C. Cole's article, it's just fine to write such wrong statements because the political spirit is consistent with the main message she wants to convey. She wants to convey the view that symmetry has deteriorated in recent years or decades – which is really a feeling, not a fact – and she thinks that given this goal, anything "negative" may be or should be written about symmetries. Whether the statements are actually correct doesn't have to be verified.

So she isn't really teaching any physics – the primary reason is that she probably doesn't understand any. But even if I remain slightly uncertain about this proposition, there are just way too many hints in the article that it's a material written by and for "humanities" types, for people interested in feelings, grievances, entitlements, and identity politics, not for the "natural science" types who care about equations, experiments, mathematical proofs, and facts.

What are the signs that this belongs to the "humanities" genre? One of the traits that are innocent but drive me up the wall is the permanent attribution of some elementary statements to physicists. So she has communicated with five main physicists: Stephon Alexander, Robbert Dijkgraaf, Mark Trodden, Justin Khoury, and David Kaiser (a part-time historian). Sorry if I overlooked someone.

In her article, you find quotes of the type "Alexander said this or that" 4 times, "Trodden said this or that" about 5 times, "Dijkgraaf said this or that" 4 times, "Khoury said something" 3 times, and "Kaiser said something" 6 times. That's 22 similar quotes in total. You know, people may have different opinions but most of science just isn't about some idiosyncratic opinions. Physics is an objective natural science. And most of the stuff she talks about really is some physics that has been settled for 50 years – and in very many cases, over 100 years.

Most of the 22 quotes end up being similar to
“If that weren’t the case [the Cosmos is uniform at cosmological scales, a manifestation of a symmetry], cosmology would be a big mess,” Khoury said.
Do you understand why I am angry about that? It just doesn't matter at all that it was Khoury who said it. Every competent cosmologist could say – and has said – a sentence that is nearly equivalent. If the Universe were inhomogeneous at the cosmological scales, we couldn't design neat FRW Ansätze for the cosmological evolution and we would have to study "where we exactly live" because the phenomena we observe around the Solar System would heavily depend on our special place in the Universe. Are we close to some "center of the Universe" or far from it? Those things would matter. But there's no real "center of the real Universe" where we live so cosmology may avoid these questions and the research of cosmology is cleaner. We may study the whole Universe by observations made from the Earth and we may extrapolate most of the conclusions to the whole Universe, too. Thank God.

Khoury knows it. But I assure you that Dijkgraaf, Alexander, Trodden, your humble correspondent, maybe Kaiser, and everyone else who got a well-deserved good grade in a cosmology graduate course knows these things, too. The first problem with the attribution is that lots of laymen who read Cole's article will conclude e.g. that Justin Khoury is the guy who discovered that the Universe was uniform, or that the uniformity was important, or that it was related to symmetries. Just to be sure, he discovered none of these things. To make things worse, I haven't even been persuaded that Cole herself understands that Khoury didn't discover them!

But there's a more general problem resulting from the attribution. It's about the spirit of science.

You know, technically, it may be right that Khoury said this or that, Trodden said this or that, Dijkgraaf said this or that, and Alexander said this or that. But by making these assertions that are attributed to somebody, K.C. Cole implicitly also says something else, namely that it matters who said it. But this implicit assertion is completely incorrect. In science, it doesn't matter who made one proposition or another. In physics, black lives don't matter and white lives don't matter, either. Also, men don't matter. Let alone women. Natural science isn't about humans or personal opinions. It's about objective evidence that is accessible to everybody with a sufficient intelligence, integrity, attention, background, and patience. It's just not true that everything is personal. The number of correct theories of relativity or the correct interpretations of quantum mechanics is fewer than 26, the number of genders. The only thing that is equal to the number of genders is the spacetime dimension of bosonic string theory. ;-) Even the number of genders is really lower than those people assume but I don't want to make my text controversial by suggesting that there exist men and women! ;-)

So she doesn't really explain any of the points about the importance of symmetries – or the decline of that importance – at least not quite correctly. But she does explain something that surely affects many readers. And the influence is harmful. She conveys the totally wrong lesson that it matters who made a proposition in physics. She is helping to brainwash the readers into thinking – or failing to think – in analogy with the brain-insufficient practitioners of the humanities. People who want to mindlessly parrot and who are just choosing their allies in an ad hominem way, to share the grievances with them. People who don't give a damn about evidence, logical arguments, equations, or observations. At most, they say that they care about the evidence etc., because it "sounds nice", but they're lying because in practice, they are using something completely different to decide what they should root for etc.

That's why I think that the net effect of articles such as this K.C. Cole's text is negative and because K.C. Cole's article is still one of the best popular texts about physics that are appearing in (almost) mass media these days, it seems obvious that the net contribution of the science writers as an occupation to mankind's scientific literacy is unquestionably negative. They're really helping mankind to evolve towards Idiocracy. Maybe some other journalists are even more harmful but that doesn't imply that the current science journalists have a positive sign.

You might suggest that there's a simple "legitimate" explanation why she attributes mostly elementary statements to the 5 physicists 22 times. She isn't a real authority. So it's right for her to "report what the authorities say" which is why the statements are more authoritative once the physicists endorsing them are named. But if that's so, I would like to know the name of the physicist who has recommended the far-reaching and controversial summary in the subtitle, or any of the wrong statements such as "symmetry hasn't helped to explain why dark matter is transparent". It seems she is making the text look more authoritative by attributing some (sensible) statements to physicists, but the most important statements are wrong, unattributed, and the reader is supposed to overlook it.

This mess is unavoidable in the "humanities" type of science journalism. You know, if one writes good popular science, one doesn't need to refer to the authorities. It's just good, the smart readers see it, and some experts who read the popular stuff will see it, too. Martin Gardner – to continue with an example I mentioned above – was primarily keen on recreational mathematics. He was no professional physics researcher, ever. But he just understood special relativity well. He wrote popular books about it where he didn't need to refer to "authoritative" sources because the real authority came from the arguments that made sense because he actually understood the stuff.

These days, journalists and popular writers generally understand nothing and their writing doesn't make much sense beyond the universal templates that they have learned in their journalism courses, which is why they have to build on obnoxious appeals to authorities and why their texts unavoidably end up being misleading political tirades. Many of them openly disagree with the thesis that science is about the evidence, about the arguments' making sense, not about authorities (many people claiming to be science journalists are even willing to support insane clichés about the formidable 97% consensus and similar things). That's a simple way to see that the ideas and especially the methodologies they are trying to spread have very little to do with science. They're mostly abusing the word because it has earned quite some capital and they find it useful to be parasites on the good name of science. But the capital wasn't earned by these journalists or their type of "work" at all.

by Luboš Motl (noreply@blogger.com) at June 30, 2019 04:08 AM

June 28, 2019

June 27, 2019

Lubos Motl - string vacua and pheno

A three-parameter jungle of F-theory Standard Models
I want to mention two cool new papers now. First, a paper showing that natural supersymmetry is alive and well.
The current status of fine-tuning in supersymmetry
Melissa, Sascha Baron-Cohen (Borat), and Roberto (Holland+Kazakhstan+Spain – and I've sent a few more people to arXiv.org again LOL) have analyzed the degree of fine-tuning in supersymmetric models using two widely accepted formulae. They found out that totally natural SUSY models are compatible with the LHC exclusion limits – the degree of fine-tuning is just 3-40 or 60-600 for low-scale measure or high-scale measure, respectively.

The models look particularly viable if you look at the pMSSM (phenomenological minimal supersymmetric standard model – parameterized by a limited number of parameters close to the observations; I think it should have been expected), and the pMSSM-GUT is doing much better in fine-tuning than other GUT models. And because the fine-tuning depends primarily on the higgsino mass, which may still be very low in huge regions of the parameter space, the fine-tuning may be very low.

Tons of writers – if I avoid the more accurate term "lying or deluded inkspillers" – have persuaded some 97% of the Internet users who care – it's my estimate based on the comments I am receiving – that the LHC has excluded natural supersymmetry. Well, the calculations in the actual experts' papers show something very different. This 39-page-long paper with 9 MB of graphs concludes in the abstract: "We stress that it is too early to conclude on the fate of supersymmetry/MSSM, based only on the fine-tuning paradigm."

So when someone tells you that the LHC has said something fatal about supersymmetry or naturalness, don't forget you are being lied to.



That was on hep-ph. The second paper I want to mention is on hep-th and is dedicated to a similar topic as the quadrillion Standard Models in F-theory from March:
Generic construction of the Standard Model gauge group and matter representations in F-theory
Like Cvetič et al. in March, Wati Taylor and Andrew Turner (MIT) look for promising realistic classes of F-theory compactifications.



Taylor and Turner demand the final gauge group to be the exact Standard Model gauge group\[

(SU(3)\times SU(2)\times U(1)) / \mathbb{Z}_6

\] at all times. They like models with 6 uncompactified dimensions so they look at the F-theory models for those, assuming that the realistic 4D models are obtained as some compactification of two more dimensions from a 6D model that already has the correct gauge group.

In six dimensions, one has to satisfy the nontrivial anomaly cancellation conditions. Note that in this 6D-to-4D F-theory model building, the 6D model is almost completely described by a geometry (an elliptically fibered 3-fold) and it mostly specifies the gauge group. While compactifying to 4D, one may and must add some fluxes (which are "non-geometric" information), and these fluxes are correlated with the chiral matter that appears in 4D physics.

They impressively claim that if they assume the right gauge group above; the MSSM matter spectrum; six large dimensions; and the absence of tensor multiplets in 6D, they have a complete proof of having found all F-theory compactifications with these conditions. Somewhat less certainly, they want us to believe that even if the "no tensor multiplets" condition were relaxed, or if 6D were replaced by 4D, they could still do a similar classification.

A cool result is that the largest bunch of constructions obeying these conditions is a well-defined class of compactifications that are obtained by Higgsing \(SU(4)\times SU(3)\times SU(2)\) F-theory models in 6D. This class is parameterized by 3 parameters: \(b_3,b_2,\beta\). 71 such models exist when there are no tensor multiplets. Three bifundamental fields are involved in these vacua, and various bases are possible.

You know, the three parameters are just integers and they have to obey the inequalities\[

\begin{aligned}
4b_3 + 3b_2 + 2\beta &\leq -8a,\\
b_3+b_2+\beta &\geq -a.
\end{aligned}

\] These inequalities bind the integers from both sides and there are 98 solutions. Geometries may be built for each etc., although I feel they only superficially mention the geometries – and their Fano bases and other bases etc. The paper is more field-theoretical than geometric in character.
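Just to illustrate the flavor of this counting problem – with \(a=-3\) and the nonnegativity of \(b_3,b_2,\beta\) being purely my illustrative assumptions (the actual allowed ranges in the Taylor-Turner paper depend on the base geometry, so this toy count need not reproduce their quoted numbers) – one can enumerate the lattice points by brute force:

```python
from itertools import product

def count_solutions(a=-3):
    """Count nonnegative integer triples (b3, b2, beta) satisfying
    4*b3 + 3*b2 + 2*beta <= -8*a  and  b3 + b2 + beta >= -a.
    The nonnegativity assumption and the default a = -3 are
    illustrative choices, not taken from the Taylor-Turner paper."""
    upper = -8 * a  # right-hand side of the first inequality
    count = 0
    # The first inequality alone bounds each variable by `upper`,
    # so a finite brute-force scan suffices.
    for b3, b2, beta in product(range(upper + 1), repeat=3):
        if 4 * b3 + 3 * b2 + 2 * beta <= upper and b3 + b2 + beta >= -a:
            count += 1
    return count

print(count_solutions())
```

For these toy inputs, the scan covers at most \(25^3\) triples and finishes instantly; the point is just that such two-sided inequalities on a few integers always carve out a small, explicitly enumerable set of models.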

It seems that the authors must really love the \(SU(4)\times SU(3)\times SU(2)\) group in 6D and trust that it's a promising extension of the existing groups. It's not a Pati-Salam group – they also have some Pati-Salam realizations of the Standard Model – but this Taylor-Turner group seems to be analogous to Pati-Salam and, according to this analysis, it may be favored over Pati-Salam in F-theory.

by Luboš Motl (noreply@blogger.com) at June 27, 2019 01:49 PM

June 26, 2019

John Baez - Azimuth

Terawatt-Scale Photovoltaics

Here’s a cool paper which seems to be freely available:

• Nancy M. Haegel et al., Terawatt-scale photovoltaics: transform global energy, Science 364 (2019), 836–838.

Important topic! Here’s the abstract:

Solar energy has the potential to play a central role in the future global energy system because of the scale of the solar resource, its predictability, and its ubiquitous nature. Global installed solar photovoltaic (PV) capacity exceeded 500 GW at the end of 2018, and an estimated additional 500 GW of PV capacity is projected to be installed by 2022–2023, bringing us into the era of TW-scale PV. Given the speed of change in the PV industry, both in terms of continued dramatic cost decreases and manufacturing-scale increases, the growth toward TW-scale PV has caught many observers, including many of us (1), by surprise. Two years ago, we focused on the challenges of achieving 3 to 10 TW of PV by 2030. Here, we envision a future with ∼10 TW of PV by 2030 and 30 to 70 TW by 2050, providing a majority of global energy. PV would be not just a key contributor to electricity generation but also a central contributor to all segments of the global energy system. We discuss ramifications and challenges for complementary technologies (e.g., energy storage, power to gas/liquid fuels/chemicals, grid integration, and multiple sector electrification) and summarize what is needed in research in PV performance, reliability, manufacturing, and recycling.

Of course, increased energy storage is needed to take advantage of solar power. Let’s see what they say about that:

Energy storage

At high penetration, increased PV installation is synergistic with increased storage. Tesla recently installed a 100-MW battery in South Australia and in the first 6 months recovered 14% of the capital cost. California is also setting aggressive targets for storage. The price of lithium-ion batteries has decreased by more than 80% in the past 8 years, and improvements are expected to continue through a combination of technological advances and increased manufacturing capacity. To achieve the U.S. Department of Energy target price of U.S. $150/kWh for automotive batteries capable of charging within 15 minutes, research should explore materials with higher energy density to further reduce costs, focusing on nickel-rich, critical-materials–free cathodes and advanced anodes for lithium-ion systems. With further research and cost reduction, flow batteries and sodium-ion and multivalent-ion or conversion systems could also hold the promise of long-term competitors to lithium ion.

An additional approach to battery-based storage is pumped-storage hydropower (pumped hydro). Recent research indicates that there is a substantial technical potential for untapped off-river (closed-loop) pumped hydro and other forms of gravity storage in many parts of the world (9, 10). Pumped hydro has the advantage of being able to provide short-term responsiveness and diurnal-scale storage potentially at low cost.

The biggest challenge may be to meet energy requirements during the winter at high latitudes. However, wind power tends to be more abundant in many of these locations, whereas most of the world’s population lives closer to the equator. Economic development as well as population growth may be dominated by countries within 35° of the equator in the coming decades.
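As a back-of-envelope check of the 14%-in-6-months figure quoted above (the constant-revenue extrapolation is my own simplification, not a claim of the Science paper):

```python
# Back-of-envelope payback estimate from the figure quoted above:
# the South Australian battery recovered 14% of its capital cost
# in the first 6 months of operation.
recovered_fraction = 0.14   # fraction of capital cost recovered
period_months = 6           # over this many months

# Naive linear extrapolation: assume the same revenue rate continues.
payback_months = period_months / recovered_fraction
print(f"Simple payback: {payback_months:.1f} months "
      f"(~{payback_months / 12:.1f} years)")
```

That is, roughly a 3.6-year simple payback if the early revenue rate were sustained – a crude estimate that ignores degradation, price volatility, and discounting, but it explains why such installations are attracting attention.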

by John Baez at June 26, 2019 11:34 PM

June 25, 2019

CERN Bulletin

Tuesday, 28 May 2019, Ordinary General Assembly of the CERN Staff Association

Ideas, open debates and votes

The CERN Staff Association held its General Assembly on Tuesday 28 May 2019. As every year, this is the ideal occasion to meet the team at the helm of the Staff Association, to learn about the activities and successes of the past year, and also about the work programme for the coming year. It is also at the General Assembly that important decisions are taken democratically by a vote of all members present.

We are pleased to have been able to count on a significant increase in the number of participants this year.

Thibaut Lefèvre, appointed by the Staff Council, chaired the meeting.

First important subject: Amendment of the Staff Association's Statutes

Ghislain Roy, President of the Association, began by presenting the proposed amendment concerning the principle of membership in the Staff Association. From now on, all new employed members of personnel (MPE) become members de facto as soon as they take up their duties, but with the possibility of withdrawal within 100 days. This proposal, already presented in Echo n°313 of 29 April 2019, raised several questions and objections in the room.

Ghislain Roy stated that the withdrawal within 100 days is not definitive and that any member of personnel could decide at any time to join or resign from the Association.

Regarding questions centering on the motivation for this change, Ghislain Roy replied that times have changed and it is difficult to interest members of personnel on limited-duration contracts in the topics covered by the Association. As a result, awareness of representativeness is lower nowadays. Ghislain recalled that the main mission of the Staff Association, enshrined in the Staff Rules and Regulations of the Organization, is to represent and defend the staff, all of the staff!

The Staff Association will launch a communication campaign to inform the personnel arriving at CERN in advance of their automatic membership in the Association.

The proposal was approved by a majority of the votes, but with significant opposition. 

Activity report 2018

The meeting continued and Isabelle Mardirossian, Vice-President of the Association, presented the 2018 activity report.

The main lines of this report covered:

1.         Reminder of the principles of “Concertation”;

2.         The results of the 2015 Five-Yearly Review, namely the package concerning the reduction of the advancement budget from 2.1 to 1.6%, the loss in advancement, the introduction of certain diversity measures, the Validation of Acquired Experience (VAE), the introduction of career development interviews and the development of internal mobility, the reduced use of the Equity budget in 2017 and 2018, as well as the exceptional extension by one year of the next Five-Yearly-Review, postponed until 2021;

3.         The work carried out within the CERN working groups, the sub-groups of the Standing Concertation Committee (CCP) and the Pension Fund, on a diverse range of subjects such as mobility within CERN sites, data protection, stress management, the pension guarantee in the event of the dissolution of the Organisation, the revision of the CHIS regulations, the governance of the Pension Fund, the situation of "Le Jardin des Particules" and its future displacement as a result of the Science Gateway, etc.;

4.       The number of meetings and topics discussed within CERN's Official bodies such as the Standing Concertation Committee (CCP), the Tripartite Employment Conditions Forum (TREF) and other committees and commissions.

Financial report and provisional budget

Nicolas Salomon, Treasurer of the Association, first presented the 2018 financial report, then the 2019 provisional budget, which was adopted by the members present with one abstention and no votes against.

The Association has expressed its willingness to diversify investments, with a focus on ethical and sustainable investments in response to a comment received at the 2018 General Assembly. The Staff Association will present a detailed proposal at next year's session.

Work programme 2019

Ghislain Roy continued with the presentation of the work programme for 2019, which essentially covered: the follow-up and conclusion of the 2015 Five-Yearly Review, the strengthening of the Concertation process, the protection of personal data, actuarial reviews, elections to the Staff Council, and the preparation of the next Five-Yearly Review.

Votes of the members of the Association present

The members of the Staff Association present approved by a majority or by a unanimous vote:

1.         The 2018 Activity Report;

2.         The 2018 Financial report and the Report of the auditors;

3.         The provisional budget for 2019;

4.         The 2019 contribution rates, unchanged for several years;

5.         The proposal for amendments to the Staff Association's Statutes concerning the principle of automatic membership in the SA for new arrivals with the possibility of withdrawal.

We would like to thank once again all those who participated in this General Assembly.

Your strong attendance is essential, because it allows you to have a say in the actions of the Staff Association.

We hope to see even more of you next year!

For more details, do not hesitate to consult all the presentations available on Indico (https://indico.cern.ch/event/820415/)

AND DON'T FORGET THE MOST IMPORTANT THING: JOIN US!

June 25, 2019 05:06 PM

June 19, 2019

Axel Maas - Looking Inside the Standard Model

Creativity in physics
One of the most widespread misconceptions about physics, and the other natural sciences, is that they are quite the opposite of art: precise, fact-driven, logical, and systematic, while art is perceived as emotional, open, creative, and inspired.

Of course, physics has experiments, has data, has math. All of that has to fit together perfectly, and there is no room for slip-ups. Logical deduction is central to what we do. But that is not all. In fact, these parts are more like the handiwork. Just as a painter needs to be able to draw a line, and a writer needs to be able to write coherent sentences, we need to be able to calculate, build, check, and infer. But just as drawing a line or writing a sentence is not yet what we recognize as art, solving an equation is not yet physics.

We are able to solve an equation because we learned how during our studies. We learned what was known before; that is our tool set, just as people read books before they start writing one. But when we actually do research, we face the fact that nobody knows what is going on. Quite often, we do not even know what an adequate question to pose would be. We just stand there, baffled, before a couple of observations. That is where the same act of creativity has to set in as when writing a book or painting a picture. We need an idea, an inspiration, for how to start. And then, just as the writer writes page after page, we add various pieces to this idea until we have a hypothesis of what is going on. This is like having the first draft of a book. Then the real grinding starts, where all our education comes to bear: we have to calculate, and so on, just as the writer has to revise the draft until it becomes a book.

You may now wonder whether this kind of creativity is limited to the great minds, and to the inception of whole new steps in physics. No, far from it. On the one hand, physics is not the work of lone geniuses. Sure, somebody occasionally has the right idea. But that is usually just the one idea which turned out, in the end, to be correct; all the other good ideas other people had simply turned out to be incorrect, and that is why you never hear of them. On the other hand, every new idea, as said above, eventually requires everything that was done before. And more than that: creativity is rarely born of being a hermit. It often comes from inspiration by others. Talking to each other, throwing fragments of ideas at each other, and mulling over consequences together is what creates the soil where creativity sprouts. Everyone with whom you have interacted has contributed to the birth of your idea.

This is why the genuinely big breakthroughs have often resulted from so-called blue-sky research or curiosity-driven research. It is not a coincidence that the freedom to do whatever kind of research you think is important is an almost sacred privilege of hired scientists. Or should be. Fortunately, I am privileged enough, especially in the European Union, to have this privilege. In other places, you are often shackled by all kinds of external influences, down to political pressure to do only politically acceptable research. And that can never spark the creativity you need to make something genuinely new. If you are afraid of what you say, you start to restrain yourself, and ultimately anything not already established as acceptable becomes unthinkable. This may not always be as obvious as outright political pressure. But if being hired, or keeping your job, starts to depend on it, you start going for acceptable research, because failure with something new would cost you dearly. And with competitive funding now quite common, particularly for people without permanent positions, this is becoming a serious obstruction.

As a consequence, real breakthrough research can neither be planned nor done on purpose. You can only plan the grinding part. And failure will be part of any creative process. Though you never really fail: you always learn how something does not work. That is one of the reasons I strongly believe that failures should also be made publicly available. They are as important to progress as successes, because they reduce the possibilities. Not to mention the amount of researchers' lifetime wasted when they fail with the same attempt, not knowing that others failed before them.

And then, perhaps, a new scientific insight arises. And, more often than not, some great technology arises along the way: not intentionally, but because it was necessary in order to follow one's creativity. That is actually where most technological leaps came from. So real progress in physics, in the end, is made of about one third craftsmanship, one third communication, and one third creativity.

So, after all this general stuff, how do I stay creative?

Well, first of all, I was and am sufficiently privileged. I could afford to start out just following my ideas: either that would keep me in business, or I would have to find a non-science job. But this only worked because of my personal background. I could have afforded a couple of months with no income while finding a job, and I had an education which almost guarantees me a decent job eventually. And I could only afford an education of this quality because of my personal background. Not to mention that, as a white male, I faced no systemic barriers. So, yes, privilege plays a major role.

The other part was that I learned, more and more, that it is not effort that counts, but effect. That took me years. But eventually I understood that a creative idea cannot be forced by burying myself in work. Time off is just as important for me. It took me until close to the end of my PhD to realize that. But not working overtime, and enjoying free days and holidays, is as important for my creative process as any other condition. Not to mention that I also do all the non-creative chores much more efficiently when well rested, which eventually leaves me with more time to ponder creatively and do research.

And the last ingredient is really exchange. I have now had the opportunity, during a sabbatical, to go to different places and exchange ideas with a lot of people. This gave me what I needed to get into a new field and already have new ideas for it. It is the possibility of sitting down with people for a few hours, especially in nicer and more relaxing surroundings than an office, and just discussing ideas. That is also what I like most about conferences. And it is one of the reasons I think conferences will always be necessary, even though we need to make the travel there and back ecologically much more viable, and restrict ourselves to sufficiently close ones until that is possible.

Sitting down over a good cup of coffee or a nice meal and just discussing really jump-starts my creativity. Even sitting with a good cup of coffee in a nice cafe somewhere and just thinking does wonders for me in solving problems. And in that, it seems, I am not so different from artists after all.

by Axel Maas (noreply@blogger.com) at June 19, 2019 02:53 PM

June 18, 2019

Marco Frasca - The Gauge Connection

Cracks in the Witten’s index theorem?

In recent days, a rather interesting paper (see here for the preprint) appeared in Physical Review Letters. The authors study a Wess-Zumino model for {\cal N}=1, the prototype of any further SUSY model, and show that there exists an anomaly at one loop in perturbation theory that breaks supersymmetry. This is rather shocking, as the model is supersymmetric at the classical level and, according to Witten’s index theorem, no breaking of supersymmetry should ever be observed. Indeed, the authors correctly ask in their conclusions how Witten’s theorem copes with this rather strange behavior. Of course, Witten’s theorem is correct, so the question arises naturally and is very interesting for further study.

This result is important because I have encountered a similar situation with the Wess-Zumino model in a couple of papers. The first one (see here and here) was published and shows how the classical Wess-Zumino model, in a strong coupling regime, breaks supersymmetry. I therefore asked a question similar to the one above: how do quantum corrections recover Witten’s theorem? The second one has remained a preprint (see here). I tried to send it to Physics Letters B, but the referee, without any check of the mathematics, simply claimed that Witten’s theorem forbids my conclusions. The Editor asked me to withdraw the paper for this very reason, and it was a strong one. So I never submitted this paper again and just checked the classical case, where I was luckier.

So, my question is still alive: does supersymmetry carry within itself the seeds of its own breaking?

This is really important in view of the fact that the Minimal Supersymmetric Standard Model (MSSM), now in disgrace after the LHC results, can have a dark side in its soft supersymmetry-breaking sector. This, in turn, could entail a wrong understanding of where the superpartners should be after the breaking. Anyway, it is really something exciting already at the theoretical level. We are just stress-testing Witten’s index theorem in search of answers.

by mfrasca at June 18, 2019 03:06 PM

June 16, 2019

John Baez - Azimuth

Applied Category Theory Meeting at UCR

 

The American Mathematical Society is having their Fall Western meeting here at U. C. Riverside during the weekend of November 9th and 10th, 2019. Joe Moeller and I are organizing a session on Applied Category Theory! We already have some great speakers lined up:

• Tai-Danae Bradley
• Vin de Silva
• Brendan Fong
• Nina Otter
• Evan Patterson
• Blake Pollard
• Prakash Panangaden
• David Spivak
• Brad Theilman
• Dmitry Vagner
• Zhenghan Wang

Alas, we have no funds for travel and lodging. If you’re interested in giving a talk, please submit an abstract here:

General information about abstracts, American Mathematical Society.

More precisely, please read the information there and then click on the link on that page to submit an abstract. It should then magically fly through the aether to me! Abstracts are due September 3rd, but the sooner you submit one, the greater the chance that we’ll have space.

For the program of the whole conference, go here:

Fall Western Sectional Meeting, U. C. Riverside, Riverside, California, 9–10 November 2019.

I will also be running a special meeting on diversity and excellence in mathematics on Friday November 8th. There will be a banquet that evening, and at some point I’ll figure out how tickets for that will work.

We had a special session like this in 2017, and it’s fun to think about how things have evolved since then.

David Spivak had already written Category Theory for the Sciences, but more recently he’s written another book on applied category theory, Seven Sketches, with Brendan Fong. He already had a company, but now he’s helping run Conexus, which plans to award grants of up to $1.5 million to startups that use category theory (in exchange for equity). Proposals are due June 30th, by the way!

I guess Brendan Fong was already working with David Spivak at MIT in the fall of 2017, but since then they’ve written Seven Sketches and developed a graphical calculus for logic in regular categories. He’s also worked on a functorial approach to machine learning—and now he’s using category theory to unify learners and lenses.

Blake Pollard had just finished his Ph.D. work at U.C. Riverside back in 2018. He will now talk about his work with Spencer Breiner and Eswaran Subrahmanian at the National Institute of Standards and Technology, using category theory to help develop the “smart grid”—the decentralized power grid we need now. Above he’s talking to Brendan Fong at the Centre for Quantum Technologies, in Singapore. I think that’s where they first met.

Nina Otter was a grad student at Oxford in 2017, but now she’s at UCLA and the University of Leipzig. She worked with Ulrike Tillmann and Heather Harrington on stratifying multiparameter persistent homology, and is now working on a categorical formulation of positional and role analysis in social networks. Like Brendan, she’s on the executive board of the applied category theory journal Compositionality.

I first met Tai-Danae Bradley at ACT2018. Now she will talk about her work at Tunnel Technologies, a startup run by her advisor John Terilla. They model sequences—of letters from an alphabet, for instance—using quantum states and tensor networks.

Vin de Silva works on topological data analysis using persistent cohomology so he’ll probably talk about that. He’s studied the “interleaving distance” between persistence modules, using category theory to treat it and the Gromov-Hausdorff metric in the same setting. He came to the last meeting and it will be good to have him back.

Evan Patterson is a statistics grad student at Stanford. He’s worked on knowledge representation in bicategories of relations, and on teaching machines to understand data science code by the semantic enrichment of dataflow graphs. He too came to the last meeting.

Dmitry Vagner was also at the last meeting, where he spoke about his work with Spivak on open dynamical systems and the operad of wiring diagrams. He is now working on mathematically defining and implementing (in Idris) wiring diagrams for symmetric monoidal categories.

Prakash Panangaden has long been a leader in applied category theory, focused on semantics and logic for probabilistic systems and languages, machine learning, and quantum information theory.

Brad Theilman is a grad student in computational neuroscience at U.C. San Diego. I first met him at ACT2018. He’s using algebraic topology to design new techniques for quantifying the spatiotemporal structure of neural activity in the auditory regions of the brain of the European starling. (I bet you didn’t see those last two words coming!)

Last but not least, Zhenghan Wang works on condensed matter physics and modular tensor categories at U.C. Santa Barbara. At Microsoft’s Station Q, he is using this research to help design topological quantum computers.

In short: a lot has been happening in applied category theory, so it will be good to get together and talk about it!

by John Baez at June 16, 2019 08:41 PM

June 14, 2019

Matt Strassler - Of Particular Significance

A Ring of Controversy Around a Black Hole Photo

[Note Added: Thanks to some great comments I’ve received, I’m continuing to add clarifying remarks to this post.  You’ll find them in green.]

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public. Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t. There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? These have been problematic issues from the beginning, but discussion is starting to heat up. And it’s important: it has implications for the measurement of the black hole’s mass (which EHT claims is that of 6.5 billion Suns, with an uncertainty of about 15%), and for any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are among the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it. So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

One note before I begin: this post is long. But it starts with a summary of the situation that you can read quickly, and then comes the long part: a step-by-step non-technical explanation of an important aspect of the black hole ‘photo’ that, to my knowledge, has not yet been given anywhere else.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What Does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well. If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, that’s not enough information. We don’t get to observe the black hole itself (it’s black, after all!) What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (a plasma of mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.) The black hole’s gravity then causes the paths on which the radio waves travel to bend, even more than a glass lens will bend the path of visible light, so that where things appear in the ‘photo’ is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time. (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and it has just been said publicly in a paper by Samuel Gralla, Daniel Holz and Robert Wald.

Two months ago, when the EHT `photo’ appeared, it was widely reported in the popular press and on blogs that the photo shows the image of a photon sphere at the edge of the shadow of the M87bh. (Instead of `shadow’, I suggested the term ‘quasi-silhouette‘, which I viewed as somewhat less misleading to a non-expert.)

Unfortunately, it seems these statements are not true; and this was well-known to (but poorly communicated by, in my opinion) the EHT folks.  This lack of clarity might perhaps annoy some scientists and science-loving non-experts; but does this issue also matter scientifically? Gralla et al., in their new preprint, suggest that it does (though they were careful to not yet make a precise claim.)

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully when the `photo’ first appeared, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was. You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. Studying the equations and conversing with expert colleagues, I soon learned why: for a rotating black hole, the photon sphere doesn’t really exist.

But let’s first define what the photon sphere is for a non-rotating black hole! Like the Earth’s equator, the photon sphere is a location, not an object. This location is the surface of an imaginary ball, lying well outside the black hole’s horizon. On the photon sphere, photons (the particles that make up light, radio waves, and all other electromagnetic waves) travel on special circular or spherical orbits around the black hole.

By contrast, a rotating black hole has a larger, broader `photon-zone’ where photons can have special orbits. But you won’t ever see the whole photon zone in any image of a rotating black hole. Instead, a piece of the photon zone will appear as a `photon ring‘, a bright and very thin loop of radio waves. However, the photon ring is not the edge of anything spherical, is generally not perfectly circular, and generally is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it has a photon-zone rather than a photon-sphere, and images of it will have a photon ring. Ok, fine; but then, can we interpret EHT’s `photo’ simply as showing the photon ring, blurred by the imperfections in the `telescope’? Although some of the EHT folks have seemed to suggest the answer is “yes”, Gralla et al. suggest the answer is likely “no” (and many of their colleagues have been pointing out the same thing in private.) The circlet of radio waves that appears in the EHT `photo’ is probably not simply a blurred image of M87bh’s photon ring; it probably shows a combination of the photon ring with something brighter (as explained below). That’s where the controversy starts.

…so the Dark Patch May Not Be the Full Shadow…

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’ in describing it in public contexts, though that’s my own personal term) but no matter what you call it, in its ideal form it is supposed to be an absolutely dark area whose edge is the photon ring. But in reality the perfectly dark area need not appear so dark after all; it may be partly filled in by various effects. Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway. The EHT folks are well aware of this, but at the time the photo came out, many science writers and scientist-writers (including me) were not.

…so EHT’s Measurement of the M87bh’s Mass is Being Questioned

It was wonderful that EHT could make a picture that could travel round the internet at the speed of light, and generate justifiable excitement and awe that human beings could indirectly observe such an amazing thing as a black hole with a mass of several billion Sun-like stars. Qualitatively, they achieved something fantastic in showing that yes, the object at the center of M87 really is as compact and dark as such a black hole would be expected to be! But the EHT telescope’s main quantitative achievement was a measurement of the mass of the M87bh, with a claimed precision of about 15%.

Naively, one could imagine that the mass is measured by looking at the diameter of the dark spot in the black hole ‘photo’, under the assumption that it is the black hole’s shadow. So here’s the issue: Could interpreting the dark region incorrectly perhaps lead to a significant mistake in the mass measurement, and/or an underestimate of how uncertain the mass measurement actually is?

I don’t know.  The EHT folks are certainly aware of these issues; their simulations show them explicitly.  The mass of the M87bh isn’t literally measured by putting a ruler on the ‘photo’ and measuring the size of the dark spot! The actual methods are much more sophisticated than that, and I don’t understand them well enough yet to explain, evaluate or criticize them. All I can say with confidence right now is that these are important questions that experts currently are debating, and consensus on the answer may not be achieved for quite a while.
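To get a rough feel for the numbers, here is a small sketch of the naive ruler-on-the-photo estimate (emphatically not the EHT's actual method, which as noted above is far more sophisticated). For an ideal non-rotating black hole the shadow's radius is √27 GM/c²; dividing its diameter by the distance to M87 (assumed here to be about 16.8 Mpc, a value not stated in this post) gives the angular size on the sky:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_SUN = 1.989e30     # solar mass [kg]
MPC = 3.086e22       # one megaparsec [m]

M = 6.5e9 * M_SUN    # EHT's quoted mass for the M87 black hole
D = 16.8 * MPC       # assumed distance to M87 (not from this post)

r_g = G * M / c**2                          # gravitational radius GM/c^2
shadow_diameter = 2 * math.sqrt(27) * r_g   # ideal Schwarzschild shadow
theta_rad = shadow_diameter / D             # small-angle approximation
theta_uas = theta_rad * math.degrees(1) * 3600e6  # radians -> microarcseconds

print(f"gravitational radius: {r_g:.2e} m")
print(f"ideal shadow angular diameter: {theta_uas:.0f} microarcseconds")
```

This lands at roughly forty microarcseconds, comparable to the ring in the EHT image. The controversy described above is precisely about which feature of the image such a diameter should be matched to, which is why the naive estimate is not how the mass was actually extracted.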

———————————————————————-

The Appearance of a Black Hole With Nearby Matter

Ok, now I’m going to explain the most relevant points, step-by-step. Grab a cup of coffee or tea, find a comfy chair, and bear with me.

Because fast-rotating black holes are more complicated, I’m going to start illuminating the controversy by looking at a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh may well be rotating at a very good clip.

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.) See Figure 1. Where will the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) That path of that light is the orange arrow in Figure 1. But then there will be an indirect image (the green arrow in Figure 1) from light that goes halfway around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. In short, Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.


Figure 1: A light bulb (yellow) outside but near the non-rotating black hole’s horizon (in black) can be seen by someone at the top of the image not only through the light that goes directly upward (orange line) — a “direct image” — but also through light that makes partial or complete orbits of the black hole — “indirect images.” The first indirect and second indirect images are from light taking the green and blue paths. For light to make orbits of the black hole, it must travel near the grey-dashed circle that indicates the location of a “photon-sphere.” (A rotating black hole has no such sphere, but when seen from the north or south pole, the light observed takes similar paths to what is shown in this figure.) [The paths of the light rays were calculated carefully using Mathematica 11.3.]

What you can see in Figure 1 is that both the first and second indirect images are formed by light that spends part of its time close to a special radius around the black hole, shown as a dotted line. This imaginary surface, the edge of a ball, is an honest “photon-sphere” in the case of a non-rotating black hole.

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north (or south) pole; there’s a special circle then too. But that circle is not the edge of a photon-sphere! In general, photons can have special orbits in a wide region, which I called the “photon-zone” earlier, and only a small set of them are on this circle. You’ll see photons from other parts of the photon zone if you look at the black hole not from the poles but from some other angle.

[If you’d like to learn a bit more about the photon zone, and you have a little bit of knowledge of black holes already, you can profit from exploring this demo by Professor Leo Stein: https://duetosymmetry.com/tool/kerr-circular-photon-orbits/ ]
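For readers who prefer numbers to a demo, the equatorial circular photon orbits of a rotating (Kerr) black hole have a closed-form radius, going back to the Bardeen, Press and Teukolsky paper cited above. A minimal sketch, working in units where G = c = M = 1 so radii come out in units of GM/c²:

```python
import math

def photon_orbit_radius(a: float, prograde: bool = True) -> float:
    """Radius of the equatorial circular photon orbit of a Kerr black hole
    with spin parameter 0 <= a <= 1, in units of GM/c^2
    (Bardeen-Press-Teukolsky formula, with G = c = M = 1)."""
    sign = -1.0 if prograde else 1.0
    return 2.0 * (1.0 + math.cos((2.0 / 3.0) * math.acos(sign * a)))

# A non-rotating hole has a single photon sphere at r = 3 (both directions agree):
print(photon_orbit_radius(0.0), photon_orbit_radius(0.0, prograde=False))

# A fast-spinning hole instead has a spread of radii -- the equatorial slice
# of the "photon zone" -- between the prograde and retrograde orbits:
a = 0.94  # a hypothetical spin value, purely for illustration
print(photon_orbit_radius(a), photon_orbit_radius(a, prograde=False))
```

The spread between the prograde and retrograde radii is what makes the photon zone a zone rather than a sphere; the off-equatorial spherical photon orbits fill in the radii in between.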

Back to the non-rotating case: What our camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is large and bright; this is one of Gralla et al.‘s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high-powered camera, we’ll never pick those other images up. Let’s therefore focus our attention on the direct image and the first indirect image.
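The claim that the remaining indirect images contribute so little can be made plausible with a one-line estimate. For a non-rotating black hole, each successive indirect image is demagnified by roughly a factor of e^(−π) ≈ 0.043, set by the instability rate of the photon-sphere orbits; I am using this standard Schwarzschild figure as an assumption here, not a number taken from Gralla et al.'s paper. Summing the geometric series over all the higher images:

```python
import math

# Approximate demagnification per successive indirect image (Schwarzschild)
ratio = math.exp(-math.pi)  # ~0.043

# Total light in images 2, 3, 4, ... relative to the first indirect image:
# the geometric series r + r^2 + r^3 + ... = r / (1 - r)
remaining = ratio / (1.0 - ratio)

print(f"per-image demagnification: {ratio:.3f}")
print(f"all remaining indirect images combined: {remaining:.1%} of the first")
```

The sum comes out around four to five percent, consistent with the "less than 5%" figure quoted above, and it explains why only the direct and first indirect images matter in practice.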

BHObsvBulb3.png

Figure 2: What the drawing in Figure 1 actually looks like to the observer peering toward the black hole; all the indirect images lie at almost exactly the same distance from the black hole’s center.
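The claim that the second and later indirect images are negligible can be made plausible with a one-line estimate. For a non-rotating black hole, each successive indirect image is demagnified relative to the previous one by roughly a factor of e^(−π) ≈ 0.043, a standard strong-deflection lensing result that Gralla et al. rely on; summing the resulting geometric series shows that the later images together carry under 5% of the first indirect image’s light. A quick sketch (my own illustration):

```python
import math

# Successive lensed images of a source near a non-rotating black hole
# are each demagnified by roughly exp(-pi) relative to the previous one
# (standard strong-deflection lensing result).
ratio = math.exp(-math.pi)  # ~0.043

# Total brightness of images 2, 3, 4, ... relative to the 1st indirect
# image: the geometric series ratio + ratio^2 + ... = ratio / (1 - ratio).
later_over_first = ratio / (1.0 - ratio)

print(f"demagnification per image: {ratio:.3f}")
print(f"all later images vs first indirect image: {later_over_first:.3f}")  # ~0.045
```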

WARNING (since this seems to be a common confusion):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGES ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Proceeding step by step toward a more realistic situation, let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now?

BHTruthCirc2.png

Figure 3: if we replace the light bulb with a circle of light, the paths of the light are the same as in Figure 1, except now for each point along the circle. That means each direct and indirect image itself forms a circle, as shown in the next figure.

That’s shown in Figure 4: the direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

BHObsvCirc3.png

Figure 4: The circular bulb’s direct image is the bright circle, but a somewhat dimmer first indirect image appears further out, and just beyond one finds all the other indirect images, forming a thin `photon ring’.

Importantly, if we consider circular bulbs of different diameter [yellow, blue and red in Figure 5], then although the direct images reflect the differences in the bulbs’ diameters (somewhat enlarged by lensing), the first indirect images are all about the same diameter, just a tad larger or smaller than the photon ring. The remaining indirect images all sit together at the radius of the photon ring.

BH3Circ4.png

Figure 5: Three bulbs of different diameter (yellow, blue, red) create three distinct direct images, but their first indirect images are located much closer together, and very close to the photon ring where all their remaining indirect images pile up.

These statements are also essentially true for a rotating black hole seen from the north or south pole; a circular bulb generates a series of circular images, and the indirect images all pile more or less on top of each other, forming a photon ring. When viewed off the poles, the rotating black hole becomes a more complicated story, but as long as the viewing angle is small enough, the changes are relatively minor and the picture is qualitatively somewhat similar.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 6? That’s relevant because black holes are thought to have `accretion disks’ made of material orbiting the black hole, and eventually spiraling in. The accretion disk may well be the dominant source emitting radio waves at the M87bh. (I’m showing a very thin uniform disk for illustration, but a real accretion disk is not uniform, changes rapidly as clumps of material move within it and then spiral into the black hole, and may be quite thick — as thick as the black hole is wide, or even thicker.)

Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown in Figure 6 left, on one side of the disk, as an orange wash) would form a disk in your camera, the dim red region in Figure 6 right; the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be similar. However, the indirect images would all pile up in almost the same place from your perspective, forming a bright and quite thin ring, the bright yellow circle in Figure 6 right. (The path of the disk’s first indirect image is shown in Figure 6 left, going halfway about the black hole as a green wash; notice how it narrows as it travels, which is why it appears as a narrow ring in the image at right.) This circle — the full set of indirect images of the whole disk — is the edge of the photon-sphere for a non-rotating black hole, and the circular photon ring for a rotating black hole viewed from its north or south pole.

BHDisk2.png

Figure 6: A glowing disk of material (note it does not touch the black hole) looks like a version of Figure 5 with many more circular bulbs. The direct image of the disk forms a disk (illustrated at left, for a piece of the disk, as an orange wash) while the first indirect image becomes highly compressed (illustrated, for a piece of the disk, as a green wash) and is seen as a narrow circle of bright light.  (It is expected that the disk is mostly transparent in radio waves, so the indirect image can pass through it.) That circle, along with the other indirect images, forms the photon ring. In this case, because the disk’s inner edge lies close to the black hole horizon, the photon ring sits within the disk’s direct image, but we’ll see a different example in Figure 9.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, currently unobservable at EHT, the `photon ring’, while EHT refers to all the indirect images as the `photon ring’. Just letting you know in case you hear `lensed ring’ referred to in future.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — i.e., along its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, technology had to be pushed to its absolute limits. Someday we’ll see both the disk and the ring, but right now, they’re all blurred together. So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim and the ring so bright that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that the photo will show mainly the disk, not the photon ring! This is shown in Figure 7, which you can compare with the Black Hole `photo’ (Figure 8). (Figure 7 is symmetric around the ring, but the photo is not, for multiple reasons — Doppler-like effect from rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer until another post.)

More precisely, the ring and disk blur together, but the brightness of the image is dominated by the disk, not the ring.

BHBlurDisk_a1_2.png

Figure 7: At left is repeated the image in Figure 6, as seen in a perfect camera, while at right the same image is shown when observed using a camera with imperfect vision. The disk and ring blur together into a single thick ring, whose brightness is dominated by the disk. Note that the shadow — the region surrounded by the yellow photon ring — is not the same as the dark patch in the right-hand image; the dark patch is considerably smaller than the shadow.

Let’s say that again: the black hole `photo’ may mainly show the M87bh’s accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo’. In general, the patch could be considerably smaller than what is usually termed the `shadow’ of the black hole.

M87BH_Vicinity_Photo_2a.png

Figure 8: (Left) We probably observe the M87bh at a small angle off its south pole. Its accretion disk has an unknown size and shape — it may be quite thick and non-uniform — and it may not even lie at the black hole’s equator. The disk and the black hole interact to create outward-going jets of material (observed already many years ago but not clearly visible in the EHT ‘photo’.) (Right) The EHT `photo’ of the M87bh (taken in radio waves and shown in false color!) Compare with Figure 7; the most important difference is that one side of the image is brighter than the other. This likely arises from (a) our view being slightly off from the south pole, combined with (b) rotation of the black hole and its disk, and (c) possibly other more subtle issues.

This is important. The photon ring’s diameter, and thus the width of the `shadow’ too, barely depend on the rotation rate of the black hole; they depend almost exclusively on the black hole’s mass. So if the ring in the photo were simply the photon ring of the M87bh, you’d have a very simple way to measure the black hole’s mass without knowing its rotation rate: you’d look at how large the dark patch is, or equivalently, the diameter of the blurry ring, and that would give you the answer to within 10%. But it’s nowhere near so simple if the blurry ring shows the accretion disk, because the accretion disk’s properties and appearance can vary much more than the photon ring; they can depend strongly on the black hole’s rotation rate, and also on magnetic fields and other details of the black hole’s vicinity.
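To get a feel for the numbers (a back-of-the-envelope estimate of my own, not the EHT analysis): for a non-rotating black hole the shadow’s diameter is 2√27 GM/c², and plugging in the commonly quoted mass (~6.5 billion suns) and distance (~16.8 Mpc) for the M87bh — both taken here as assumptions — gives an angular diameter of roughly 40 microarcseconds, comparable to the blurry ring in the `photo’:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # megaparsec, m

# Assumed round-number values for the M87 black hole (illustrative only):
M = 6.5e9 * M_SUN   # mass
D = 16.8 * MPC      # distance

r_g = G * M / c**2                          # gravitational radius GM/c^2
shadow_diameter = 2 * math.sqrt(27) * r_g   # non-rotating shadow diameter

theta_rad = shadow_diameter / D
theta_uas = theta_rad * (180 / math.pi) * 3600 * 1e6  # in microarcseconds
print(f"shadow angular diameter ~ {theta_uas:.0f} microarcseconds")  # ~40
```

The paragraph’s point stands: since rotation shifts this diameter by only about 10%, a clean measurement of the photon ring would pin down the mass; a blurry image dominated by the accretion disk would not.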

The Important Role of Rotation

If we conclude that EHT is seeing a mix of the accretion disk with the photon ring, with the former dominating the brightness, then this makes EHT’s measurement of the M87bh’s mass more confusing and even potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important. Despite what I showed (for pedagogical simplicity) in Figure 7, for a non-rotating black hole the accretion disk’s central gap is actually expected to lie outside the photon ring; this is shown at the top of Figure 9.  But  the faster the black hole rotates, the smaller this central gap is expected to be, to the point that for a fast-rotating black hole the gap will lie inside the photon ring, as shown at the bottom of Figure 9. (This tendency is not obvious; it requires understanding details of the black hole geometry.) And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette), which is the region inside the photon ring. It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

The effect of blurring in the two cases of slow (or zero) and fast rotation is illustrated in Figure 9, where the photon ring’s size is taken to be the same in each case but the disk’s inner edge is close in or far out. (The black holes, not illustrated since they aren’t visible anyway, differ in mass by about 10% in order to have the photon ring the same size.) This shows why the size of the dark patch can be quite different, depending on the disk’s shape, even when the photon ring’s size is the same.

BHBlurDisk_a0_a1_3.png

Figure 9: Comparing the appearance of slightly more realistically-shaped disks around slowly rotating or non-rotating black holes (top) to those around fast-rotating black holes (bottom) of the same mass, as seen from the north or south pole. (Left) the view in a perfect camera; (right) rough illustration of the effect of blurring in the current version of the EHT. The faster the black hole is spinning, the smaller the central gap in the accretion disk is likely to be. No matter what the extent of the accretion disk (dark red), the photon ring (yellow) remains at roughly the same location, changing only by 10% between a non-rotating black hole and a maximally rotating black hole of the same mass. But blurring in the camera combines the disk and photon ring into a thick ring whose brightness is dominated by the disk rather than the ring, and which can therefore be of different size even though the mass is the same. This implies that the radius of the blurry ring in the EHT `photo’, and the size of the dark region inside it, cannot by themselves tell us the black hole’s mass; at a minimum we must also know the rotation rate (which we do not.)

Gralla et al. subtly raise these questions but are careful not to overstate their case, perhaps because they have not yet completed their study of rotating black holes. But the question is now in the air.

I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures. In particular, EHT’s simulations show all of the effects mentioned above; none of this is unknown to them. (In fact, the reason I know my illustrations above are reasonable is partly because you can see similar pictures in the EHT papers.) As long as the EHT folks correctly accounted for all the issues, then they should have been able to properly measure the mass and estimate their uncertainties correctly. In fact, they don’t really use the photo itself; they use more subtle techniques applied to their telescope data directly. Thus it’s not enough to argue that the photo itself is ambiguous; one has to argue that EHT’s more subtle analysis methods are flawed. No one has argued that yet, as far as I am aware.

But the one thing that’s clear right now is that science writers almost uniformly got it wrong [because the experts didn’t explain these points well] when they tried to describe the image two months ago. The `photo’ probably does not show “a photon ring surrounding a shadow.” That would be nice and simple and impressive-sounding, since it refers to fundamental properties of the black hole’s warping effects on space. But it’s far too glib, as Figures 7 and 9 show. We’re probably seeing an accretion disk supplemented by a photon ring, all blurred together, and the dark region may well be smaller than the black hole’s shadow.

(Rather than, or in addition to, the accretion disk, it is also possible that the dominant emission in the photo comes from the inner portion of one of the jets that emerges from the vicinity of the black hole; see Figure 8 above. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Someday in the not distant future, improved imaging should allow EHT to separately image the photon ring and the disk, so both can be observed easily, as in the left side of Figure 9. Then all these questions will be answered definitively.

Why the Gargantua Black Hole from Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 10. And that leads to the Gargantua image from the movie, also shown in Figure 10. Notice the photon ring (which is, as I cautioned you earlier, off-center!) [Note added: this figure has been modified; in the original version I referred to the top and bottom views of the disk’s far side as the “1st indirect image”, but as pointed out by Professor Jean-Pierre Luminet, that’s not correct terminology here.]

BHGarg4.png

Figure 10: The movie Interstellar features a visit to an imaginary black hole called Gargantua, and the simulated images in the movie (from 2014) are taken from near the equator, not the pole. As a result, the direct image of the disk cuts across the black hole, and indirect images of the back side of the disk are seen above and below the black hole. There is also a bright photon ring, slightly off center; this is well outside the surface of the black hole, which is not visible. A real image would not be symmetric left-to-right; it would be brighter on the side that is rotating toward the viewer. At the bottom is shown a much more realistic visual image (albeit of lower quality) from 1994 by Jean-Alain Marck, in which this asymmetry can be seen clearly.

However, the movie image leaves out an important Doppler-like effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so a real image from this vantage point would be very asymmetric — bright on the left, dim on the right — unlike the movie image. At the suggestion of Professor Jean-Pierre Luminet I have added, at the bottom of Figure 10, a very early simulation by Jean-Alain Marck that shows this effect.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole.

Final Remarks: Where a Rotating Black Hole Differs from a Non-Rotating One

Before I quit for the week, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) As I’ve just emphasized, what a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A pole observer, an equatorial observer, and a near-pole observer see quite different things. (As noted in Figure 8, we are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are some reasons to expect this. Even then, the story is complex.

2) As I mentioned above, instead of a photon-sphere, there is a ‘photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. For high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; and this can cause a filling-in of the ‘shadow’.

3) Depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk all be circular and concentric. We’re not viewing the M87bh on its axis, and that further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that I, at least, cannot currently check.

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 14, 2019 12:15 PM

June 11, 2019

Georg von Hippel - Life on the lattice

Looking for guest bloggers to cover LATTICE 2019
My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, please send me an email and we can arrange to grant you posting rights.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:28 AM

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are omitted altogether. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo (HMC) for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov-space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:27 AM

June 10, 2019

Matt Strassler - Of Particular Significance

Minor Technical Difficulty with WordPress

Hi all — sorry to bother you with an issue you may not even have noticed, but about 18 hours ago a post of mine that was under construction was accidentally published, due to a WordPress bug.  Since it isn’t done yet, it isn’t readable (and has no figures yet) and may still contain errors and typos, so of course I tried to take it down immediately.  But it seems some of you are still getting the announcement of it or are able to read parts of it.  Anyway, I suggest you completely ignore it, because I’m not done working out the scientific details yet, nor have I had it checked by my more expert colleagues; the prose and perhaps even the title may change greatly before the post comes out later this week.  Just hang tight and stay tuned…

by Matt Strassler at June 10, 2019 11:43 PM

Matt Strassler - Of Particular Significance

The Black Hole Photo: Controversy Begins To Bubble Up

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public.  Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t.  There I cautioned readers that I thought it might be difficult to interpret the image, and that controversies about it might erupt. This concern seems to have been warranted.  This is the first post of several in which I’ll explain the issue as I see it.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? That’s been a problematic issue from the beginning, but discussion is starting to heat up.  And it’s important: it has implications for the measurement of the black hole’s mass, and for any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are among the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it.  So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well.  If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, that’s not enough information.  We don’t get to observe the black hole itself (it’s black, after all!)  What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.)  The black hole then bends the paths of the radio waves extensively, even more than does a glass lens, so that where things appear in the image is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time.  (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and a paper by Gralla, Holz and Wald has just said it publicly.

When the EHT `photo’ appeared, it was widely reported that it shows the image of a photon sphere at the edge of the shadow (or `quasi-silhouette’, a term I suggested as somewhat less misleading) of the M87bh.

[Like the Earth’s equator, the photon sphere is a location, not an object.  Photons (the particles that make up light, radio waves, and all other electromagnetic radiation) that move along the photon sphere have special, spherical orbits around the black hole.]
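For a non-rotating black hole, those special orbits all sit at a single radius, 3GM/c², which is 1.5 times the horizon (Schwarzschild) radius 2GM/c². A trivial sketch of the two radii (my own illustration):

```python
def schwarzschild_radii(gm_over_c2):
    """Horizon and photon-sphere radii of a non-rotating black hole,
    given its gravitational radius GM/c^2 (in any length unit)."""
    r_horizon = 2.0 * gm_over_c2        # Schwarzschild radius
    r_photon_sphere = 3.0 * gm_over_c2  # radius of circular photon orbits
    return r_horizon, r_photon_sphere

r_h, r_ph = schwarzschild_radii(1.0)
print(r_ph / r_h)  # 1.5 -- the photon sphere sits 50% farther out than the horizon
```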

Unfortunately, it seems likely that these statements are incorrect; and Gralla et al. have said almost as much in their new preprint (though they were careful not to make a precise claim.)

 

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully back then, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was.  You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. A couple of weeks later, as I studied the equations and conversed with colleagues, I learned why; for a rotating black hole, the photon sphere doesn’t really exist.  There’s a broad `photon-zone’ where photons can have special orbits, but you won’t ever see the whole photon zone in an image of a rotating black hole.  Instead a piece of the photon zone will show up as a `photon ring’, a bright thin loop of radio waves.

But this ring is not the edge of anything spherical, is generally not perfectly circular, and is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it probably has no photon-sphere.  But does it show a photon ring?  Although some of the EHT folks seemed to suggest the answer was ‘yes’, Gralla et al. suggest the answer is likely `no’ (and my Harvard colleagues were finding the same thing.)  It seems unlikely that the circlet of radio waves that appears in the EHT `photo’ is really an image of M87bh’s photon ring anyway; it’s probably something else.  That’s where controversy starts.

…so the Dark Patch is Probably Not the Full Shadow

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’) but no matter what you call it, in its ideal form it is a perfectly dark area whose edge is the photon ring.    But in reality the perfectly dark area need not appear so dark after all; it may be filled in by various effects.  Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway.

Step-By-Step Approach

To explain these points will take some time and care, so I’m going to spread the explanation out over several blog posts.  Otherwise it’s just too much information too fast, and I won’t do a good job writing it down.  So bear with me… expect at least three more posts, probably four, and even then there will still be important issues to return to in future.

The Appearance of a Black Hole With Nearby Matter

Because fast-rotating black holes are complicated, I’m going to illuminate the controversy using a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh is probably rotating at a very good clip. (At the end of this post I’ll briefly describe some key differences between the appearance of non-rotating black holes, rotating black holes observed along the rotation axis, and rotating black holes observed just a bit off the rotation axis.)

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.)  See Figure 1.  Where would the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) That’s the orange arrow in Figure 1.  But then there will be an indirect image from light that goes halfway (the green arrow in Figure 1) around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.

What you can see in Figure 1 is that both the first and second indirect images are formed by light (er, radio waves) that spends part of its time close to a special radius around the black hole, shown as a dotted line. This, in the case of a non-rotating black hole, is an honest “photon-sphere”.
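For a non-rotating black hole, that special dotted-line radius can be written down exactly: the photon sphere sits at r = 3GM/c², which is 1.5 times the horizon radius 2GM/c². A minimal sketch of the arithmetic (the choice of a solar mass is my own illustrative assumption, not anything from the post):

```python
# Characteristic radii of a non-rotating (Schwarzschild) black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # one solar mass in kg (illustrative choice)

def schwarzschild_radius(m):
    """Event-horizon radius r_s = 2GM/c^2."""
    return 2.0 * G * m / c**2

def photon_sphere_radius(m):
    """Radius of the unstable circular photon orbit, r_ph = 3GM/c^2."""
    return 3.0 * G * m / c**2

print(f"{schwarzschild_radius(M_sun):.0f} m")  # a solar-mass horizon: ~2950 m
print(round(photon_sphere_radius(M_sun) / schwarzschild_radius(M_sun), 12))  # 1.5
```

The ratio 3/2 is mass-independent, which is why the photon sphere scales up in lockstep with the horizon for any non-rotating black hole.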

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north pole; there’s a special circle then too.  But that circle is not the edge of a photon-sphere!  In general, photons can orbit in a wide region, which I’ll call the “photon-zone.” You’ll see photons from other parts of the photon zone if you look at the black hole not from the north pole but from some other angle.

What our radio-wave camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is bright; this is one of Gralla et al.’s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high resolution camera, we’ll never pick those other images up. Consequently, all we can really hope to detect with something like EHT is the direct image and the first indirect image.
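Why are the remaining indirect images so dim? For a Schwarzschild black hole each successive image is demagnified by roughly a factor of e^(−π) ≈ 0.043 per extra half-orbit; I’m supplying that standard lensing number as an assumption, since the post only quotes the resulting `less than 5%’ figure. Summing the series shows the two statements are consistent:

```python
import math

# Assumed demagnification per successive indirect image (standard value ~ e^-pi).
r = math.exp(-math.pi)

# Take the first indirect image to have relative brightness 1; then images
# 2, 3, 4, ... contribute a geometric series, totalling ~ r / (1 - r).
remaining_over_first = sum(r**n for n in range(1, 50))

print(f"{remaining_over_first:.3f}")  # 0.045: under 5% of the first indirect image
```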

WARNING (since this seems to be a common confusion even after two months):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGE ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now? Figure 4: The direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

Importantly, if we replace that circular bulb [shown yellow in Figure 5] by one of a larger or smaller radius [shown blue in Figure 5], then (Figure 6) the inner direct image would look larger or smaller to us, but the indirect images would barely move. They remain very close to the same size no matter how big a circular bulb we choose!

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 7? That’s relevant because black holes are thought to have `accretion disks’ of material (possibly quite thick — I’m showing a very thin one for illustration, but they can be as thick as the black hole is wide, or even thicker) that orbit them. The accretion disk may be the source of the radio waves at M87’s black hole. Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown on one side of the disk as an orange wash) would form a disk in your camera (Figure 8); the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be the same. However, the indirect images (the first of which is shown going halfway about the black hole as a green wash) would all pile up in the same place from your perspective, forming a bright and quite thin ring. This is the photon ring for a non-spinning black hole — the full set of indirect images of everything that lies at or inside the photon sphere but outside the black hole horizon.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, completely unobservable at EHT, the `photon ring’. I don’t know if their notation will be adopted but you might hear `lensed ring’ referred to in future. In any case, what EHT calls the photon ring includes what Gralla et al. call the lensed ring.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, it had to be pushed to its absolute limits.  Someday we’ll see both the disk and the ring, but right now, they’re all blurred together.  So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that blurring the ring makes it dimmer than the disk! The photo, therefore, will show mainly the accretion disk, not the photon ring! This is shown in Figure 9, which you can compare with the Black Hole `photo’ (Figure 10).  (Figure 9 is symmetric around the ring, but the photo is not, for multiple reasons — rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer until another post.)

More precisely, the ring and disk blur together, but the image is dominated by the disk.
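Here is a toy version of that `simple calculation’, using a one-dimensional radial brightness profile: a broad disk of modest surface brightness plus a thin ring that is locally much brighter, blurred by a Gaussian beam far wider than the ring. All numbers are invented for illustration and are not EHT parameters:

```python
import numpy as np

# Radial coordinate (arbitrary units)
x = np.linspace(0.0, 20.0, 2001)
dx = x[1] - x[0]

# Toy surface-brightness profiles (illustrative values only):
disk = np.where((x > 5.0) & (x < 12.0), 1.0, 0.0)    # broad, dim disk
ring = np.where(np.abs(x - 5.2) < 0.1, 10.0, 0.0)    # thin ring, 10x brighter

def blur(profile, sigma):
    """Convolve a 1D profile with a normalized Gaussian beam of width sigma."""
    k = np.arange(-5.0 * sigma, 5.0 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

sigma = 2.0                       # beam much wider than the ring's width
disk_blurred = blur(disk, sigma)
ring_blurred = blur(ring, sigma)

# Unblurred, the ring outshines the disk; blurred, the ordering flips,
# because the ring's limited total flux gets spread over the wide beam.
print(ring.max() > disk.max())                    # True
print(ring_blurred.max() < disk_blurred.max())    # True
```

The point of the toy model: surface brightness, not total flux, is what survives blurring, and a thin ring has little total flux to spread around.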

Let’s say that again: the black hole `photo’ is likely showing the accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo’.  In general, the patch may well be smaller than what is usually termed the `shadow’ of the black hole.

This is very important. The photon ring’s radius barely depends on the rotation rate of the black hole, and therefore, if the light were coming from the ring, you’d know (without knowing the black hole’s rotation rate) how big its dark patch will appear for a given mass. You could therefore use the radius of the ring in the photo to determine the black hole’s mass. But the accretion disk’s properties and appearance can vary much more. Depending on the spin of the black hole and the details of the matter that’s spiraling into the black hole, its radius can be larger or smaller than the photon ring’s radius… making the measurement of the mass both more ambiguous and — if you partially mistook the accretion disk for the photon ring — potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important.  For a non-rotating black hole the accretion disk’s inner edge is expected to lie outside the photon ring, but for a fast-rotating black hole (as M87’s may well be), it will lie inside the photon ring. And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette). It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

Gralla et al. subtly raise these questions but are careful not to overstate their case, because they have not yet completed their study of rotating black holes. But the question is now in the air. I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures.

(Rather than the accretion disk, it is also possible that the dominant emission comes from the inner portion of one of the jets that emerges from the vicinity of the black hole. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Why the Gargantua Black Hole From Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 11.

One thing that isn’t included in the Gargantua image from the movie (Figure 12) is a sort of Doppler effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so the image will be very asymmetric, unlike the movie image. See Figure 13 for what it would really `look’ like to the EHT.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole, and does not show the Doppler effect correctly (for artistic reasons).

Where a Rotating Black Hole Differs

Before I quit for the day, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) What a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A north-pole observer, an equatorial observer, and a near-north-pole observer see quite different things. (We are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are reasons to expect this. Even then, the story is complex.

2) Instead of a photon-sphere, there is what you might call a `photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. As I mentioned above, for high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; this leads to multiple indirect images of the disk and a potentially bright photon ring.

3) However, depending on the viewing angle, the indirect images of the disk that form the photon ring may not be circular, and may not be concentric with the direct image of the disk. Only when viewed from points along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk both be circular and concentric. That further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that we cannot currently check.
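Point 2 above can be made quantitative with the standard Kerr-geometry formulas (Bardeen, Press and Teukolsky) for the equatorial circular photon orbits and for the innermost stable circular orbit (ISCO), a common proxy for an accretion disk’s inner edge. These are textbook results I’m supplying myself, not Gralla et al.’s calculation; units are G = c = M = 1:

```python
import math

def photon_orbit(a, prograde=True):
    """Equatorial circular photon-orbit radius for Kerr spin a (G = c = M = 1)."""
    sign = -1.0 if prograde else 1.0
    return 2.0 * (1.0 + math.cos((2.0 / 3.0) * math.acos(sign * a)))

def isco(a, prograde=True):
    """Innermost stable circular orbit radius (Bardeen-Press-Teukolsky)."""
    z1 = 1.0 + (1.0 - a * a) ** (1 / 3) * ((1.0 + a) ** (1 / 3) + (1.0 - a) ** (1 / 3))
    z2 = math.sqrt(3.0 * a * a + z1 * z1)
    sign = -1.0 if prograde else 1.0
    return 3.0 + z2 + sign * math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

# Non-rotating: photon sphere at r = 3, ISCO (disk inner edge) at r = 6.
print(round(photon_orbit(0.0), 6), round(isco(0.0), 6))

# Fast-rotating (a = 0.9): the prograde ISCO now lies inside the photon zone,
# which spans the prograde-to-retrograde photon-orbit radii.
a = 0.9
print(round(photon_orbit(a, True), 3), round(isco(a), 3), round(photon_orbit(a, False), 3))
```

At zero spin the disk’s inner edge (r = 6) sits well outside the photon sphere (r = 3); at spin a = 0.9 the prograde ISCO (r ≈ 2.3) has moved inside the photon zone, whose outer, retrograde edge is near r ≈ 3.9.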

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 10, 2019 04:04 AM

June 06, 2019

John Baez - Azimuth

Nonstandard Models of Arithmetic

There seems to be a murky abyss lurking at the bottom of mathematics. While in many ways we cannot hope to reach solid ground, mathematicians have built impressive ladders that let us explore the depths of this abyss and marvel at the limits and at the power of mathematical reasoning at the same time.

This is a quote from Matthew Katz and Jan Reimann’s book An Introduction to Ramsey Theory: Fast Functions, Infinity, and Metamathematics. I’ve been talking to my old friend Michael Weiss about nonstandard models of Peano arithmetic on his blog. We just got into a bit of Ramsey theory. But you might like the whole series of conversations, which are precisely about this murky abyss.

Here it is so far:

Part 1: I say I’m trying to understand ‘recursively saturated’ models of Peano arithmetic, and Michael dumps a lot of information on me. The posts get easier to read after this one!

Part 2: I explain my dream: to show that the concept of ‘standard model’ of Peano arithmetic is more nebulous than many seem to think. We agree to go through Ali Enayat’s paper Standard models of arithmetic.

Part 3: We talk about the concept of ‘standard model’, and the ideas of some ultrafinitists: Alexander Yessenin-Volpin and Edward Nelson.

Part 4: Michael mentions “the theory of true arithmetic”, and I ask what that means. We decide that a short dive into the philosophy of mathematics may be required.

Part 5: Michael explains his philosophies of mathematics, and how they affect his attitude toward the natural numbers and the universe of sets.

Part 6: After explaining my distaste for the Punch-and-Judy approach to the philosophy of mathematics (of which Michael is thankfully not guilty), I point out a strange fact: our views on the infinite cast shadows on our study of the natural numbers. For example: large cardinal axioms help us name larger finite numbers.

Part 7: We discuss Enayat’s concept of “a T-standard model of PA”, where T is some axiom system for set theory. I describe my crazy thought: maybe your standard natural numbers are nonstandard for me. We conclude with a brief digression into Hermetic philosophy: “as above, so below”.

Part 8: We discuss the tight relation between PA and ZFC with the axiom of infinity replaced by its negation. We then chat about Ramsey theory as a warmup for the Paris–Harrington Theorem.

Part 9: Michael sketches the proof of the Paris–Harrington Theorem, which says that a certain rather simple theorem about combinatorics can be stated in PA, and proved in ZFC, but not proved in PA. The proof he sketches builds a nonstandard model in which this theorem does not hold!

Part 10: Michael and I talk about “ordinal analysis”: a way of assigning ordinals to theories of arithmetic, that measures how strong they are.

Part 11: Michael begins explaining Enayat’s paper Standard models of arithmetic. I pull him into explaining “Craig’s trick” and “Rosser’s trick”, two famous tricks in mathematical logic.

by John Baez at June 06, 2019 12:32 AM

June 05, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVII: Super-Entropic Instability

I'm quite excited because of some new results I got recently, which appeared on the ArXiv today. I've found a new (and I think, possibly important) instability in quantum gravity.

Said more carefully, I've found a sibling to Hawking's celebrated instability that manifests itself as black hole evaporation. This new instability also results in evaporation, driven by Hawking radiation, and it can appear for black holes that might not seem unstable to evaporation in ordinary circumstances (i.e., there's no Hawking channel to decay), but turn out to be unstable upon closer examination, in a larger context. That context is the extended gravitational thermodynamics you've read me talking about here in several previous posts (see e.g. here and here). In that framework, the cosmological constant is dynamical and enters the thermodynamics as a pressure variable, p. It has a conjugate, V, which is a quantity that can be derived once you know the pressure and the mass of the black hole.

Well, Hawking evaporation is a catastrophic quantum phenomenon that follows from the fact that the radiation temperature of a Schwarzschild black hole (the simplest one you can think of) goes inversely with the mass. So the black hole radiates and loses energy, reducing its mass. But that means that it will radiate at even higher temperature, driving its mass down even more. So it will radiate even more, and so on. So it is an instability in the sense that the system drives itself even further away from where it started at every moment. Like a pencil falling over from balancing on a point.

This is the original quantum instability for gravitational systems. It’s, as you probably know, very important. (Although in our universe, the temperature of radiation is so tiny for astrophysical black holes (they have large mass) that the effect is washed out by the local temperature of the universe... But if the universe ever had microscopic black holes, they’d have radiated in this way...)
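The runaway can be made concrete with the textbook Schwarzschild temperature T = ħc³/(8πGMk_B), which scales as 1/M: losing mass raises the temperature, which speeds up the mass loss. A quick numerical sketch (the solar-mass value is my illustrative choice; the post’s point is only that astrophysical black holes are far colder than their surroundings):

```python
import math

hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
k_B = 1.3807e-23    # J/K
M_sun = 1.989e30    # kg (illustrative mass)

def hawking_temperature(m):
    """Schwarzschild Hawking temperature T = hbar c^3 / (8 pi G m k_B)."""
    return hbar * c**3 / (8.0 * math.pi * G * m * k_B)

T = hawking_temperature(M_sun)
print(f"{T:.1e} K")   # ~6e-8 K, far below the ~2.7 K microwave background

# The runaway: losing half the mass doubles the radiation temperature.
print(hawking_temperature(M_sun / 2.0) / T)  # 2.0
```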

So very nice, so very 1970s. What have I found recently?

A nice way of expressing the above instability is to simply say [...] Click to continue reading this post

The post News from the Front, XVII: Super-Entropic Instability appeared first on Asymptotia.

by Clifford at June 05, 2019 02:11 AM

May 29, 2019

ZapperZ - Physics and Physicists

How Do You Detect A Neutrino?
Another Don Lincoln video, and this time, it is on a topic that I had a small involvement in, which is neutrino detection.



My small part was in the photomultiplier photocathode used to detect the Cerenkov light that is emitted following such a collision between the "weak boson" and the nucleus. We were trying to design a photodetector with a much larger surface area than the round face of current PMTs.

In any case, this is a good introduction to why neutrinos are so difficult to detect.

Zz.

by ZapperZ (noreply@blogger.com) at May 29, 2019 02:14 PM

May 27, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A conference in Paris

This week I’m in Paris, attending a conference in memory of the outstanding British astronomer and theoretician Arthur Stanley Eddington. The conference, which is taking place at the Observatoire de Paris, is designed to celebrate the centenary of Eddington’s famous measurement of the bending of distant starlight by the sun, a key experiment that offered important early support for Einstein’s general theory of relativity. However, there are talks on lots of different topics, from Eddington’s philosophy of science to his work on the physics of stars, from his work in cosmology to his search for a unified field theory. The conference website and programme are here.

[Photo: The view from my hotel in Denfert-Rochereau]

All of the sessions of the conference were excellent, but today was a particular treat with four outstanding talks on the 1919 expedition. In ‘Eddington, Dyson and the Eclipse of 1919’, Daniel Kennefick of the University of Arkansas gave a superb overview of his recent book on the subject. In ‘The 1919 May 29 Eclipse: On Accuracy and Precision’, David Valls-Gabaud of the Observatoire de Paris gave a forensic analysis of Eddington’s calculations. In ‘The 1919 Eclipse: Were the Results Robust?’, Gerry Gilmore of the University of Cambridge described how recent reconstructions of the expedition measurements gave confidence in the results; and in ‘Chasing Mare’s Nests: Eddington and the Early Reception of General Relativity among Astronomers’, Jeffrey Crelinsten of the University of Toronto summarized the doubts expressed by major American astronomical groups in the early 1920s, as described in his excellent book.

[Book covers: No Shadow of a Doubt by Daniel Kennefick; Einstein’s Jury by Jeffrey Crelinsten]

I won’t describe the other sessions, but just note a few things that made this conference the sort of meeting I like best. All speakers were allocated the same speaking time (30 mins including questions); most speakers were familiar with each other’s work; many speakers spoke on the same topic, giving different perspectives; there was plenty of time for further questions and comments at the end of each day. So a superb conference organised by Florian Laguens of the IPC and David Valls-Gabaud of the Observatoire de Paris.

[Photo: On the way to the conference]

In my own case, I gave a talk on Eddington’s role in the discovery of the expanding universe. I have long been puzzled by the fact that Eddington, an outstanding astronomer and strong proponent of the general theory of relativity, paid no attention when his brilliant former student Georges Lemaître suggested that an expanding universe could be derived from general relativity, a phenomenon that could account for the redshifts of the spiral nebulae, the biggest astronomical puzzle of the age. After considering some standard explanations (Lemaître’s status as an early-career researcher, the journal he chose to publish in and the language of the paper), I added two considerations of my own: (i) the theoretical analysis in Lemaître’s 1927 paper would have been very demanding for a 1927 reader and (ii) the astronomical data that Lemaître relied upon were quite preliminary (Lemaître’s calculation of a redshift/distance coefficient for the nebulae relied upon astronomical distances from Hubble that were established using the method of apparent magnitude, a method that was much less reliable than Hubble’s later observations using the method of Cepheid variables).

[Photo: Making my points at the Eddington Conference]

It’s an interesting puzzle because it is thought that Lemaître sent a copy of his paper to Eddington in 1927; however, I finished by admitting that there is a distinct possibility that Eddington simply didn’t take the time to read his former student’s paper. Sometimes the most boring explanation is the right one! The slides for my talk can be found here.

All in all, a superb conference.

 

by cormac at May 27, 2019 07:39 PM

May 25, 2019

Jon Butterworth - Life and Physics

Murray Gell-Mann
Sad to learn that Murray Gell-Mann, pioneer of particle physics and more, has died at the age of 89.  Here is the obituary from Caltech. The first person to bring some order to Hadron Island and point the way to … Continue reading

by Jon Butterworth at May 25, 2019 06:50 AM

May 24, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVI: Toward Quantum Heat Engines

(The following post is a bit more technical than usual. But non-experts may still find parts helpful.)

A couple of years ago I stumbled on an entire field that I had not encountered before: the study of Quantum Heat Engines. This sounds like an odd juxtaposition of terms since, as I say in the intro to my recent paper:

The thermodynamics of heat engines, refrigerators, and heat pumps is often thought to be firmly the domain of large classical systems, or put more carefully, systems that have a very large number of degrees of freedom such that thermal effects dominate over quantum effects. Nevertheless, there is a thriving field devoted to the study—both experimental and theoretical—of the thermodynamics of machines that use small quantum systems as the working substance.

It is a fascinating field, with a lot of activity going on that connects to fields like quantum information, device physics, open quantum systems, condensed matter, etc.

Anyway, I stumbled on it because, as you may know, I've been thinking (in my 21st-meets-18th century way) about heat engines a lot over the last five years since I showed how to make them from (quantum) black holes, when embedded in extended gravitational thermodynamics. I've written it all down in blog posts before, so go look if interested (here and here).

In particular, it was when working on a project I wrote about here that I stumbled on quantum heat engines, and got thinking about their power and efficiency. It was while working on that project that I had a very happy thought: Could I show that holographic heat engines (the kind I make using black holes) -at least a class of them- are actually, in some regime, quantum heat engines? That would be potentially super-useful and, of course, super-fun.

The blunt headline statement is that they are, obviously, because every stage [...] Click to continue reading this post

The post News from the Front, XVI: Toward Quantum Heat Engines appeared first on Asymptotia.

by Clifford at May 24, 2019 05:16 PM

ZapperZ - Physics and Physicists

Charles Kittel
Physicist Charles Kittel passed away this past May 15th, 2019.

This is one of those names that will not ring a bell with the public. But for most of us in the field of condensed matter physics, his name has almost soared to mythical heights. His book "Introduction to Solid State Physics" has become almost a standard for everyone entering this field of study. That text alone has educated innumerable physicists who went on to make contributions to a field of physics that has a direct impact on our world today. It is also a text that is used (yes, it is still being used in physics classes today) in many electrical engineering courses.

He has been honored with many awards and distinctions, including the Buckley prize from the APS. He may be gone, but his legacy, influence, and certainly his book, will live on.

Zz.

by ZapperZ (noreply@blogger.com) at May 24, 2019 01:34 PM

May 14, 2019

Axel Maas - Looking Inside the Standard Model

Acquiring a new field
I have recently started to look into a new field: Quantum gravity. In this entry, I would like to write a bit about how this happens, acquiring a new field, so that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing. And when you enter a new field, things are slow at first. You have to learn a lot of basics, need to get an overview of what has been done and what is still open, not to mention that you have to get used to a different jargon. Thus, one rarely does so lightly.

I have in the past written already one entry about how I came to do Higgs physics. This entry was written after the fact. I was looking back, and discussed my motivation how I saw it at that time. It will be an interesting thing to look back at this entry in a few years, and judge what is left of my original motivation. And how I feel about this knowing what happened since then. But for now, I only know the present. So, lets get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe. Though even then a few open questions appear to remain.

But my interest in quantum gravity comes not from the promise of such a possibility. It has rather a quite different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs. And, by now, in fact at the standard model as a whole. In what we get, we see a clear distinction between two concepts: So-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exist, and into which particles they can decay or be combined. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact. After all, in which theory which gauge symmetry can be used to facilitate things is different, and thus the kind of gauge symmetry is more a statement about which theory we work on.

Now, if you add gravity, the distinction between the two appears to blur. The reason is that in gravity space itself is different. In particular, you can deform space. And the original distinction between global and gauge symmetries is their relation to space: a global symmetry is something which is the same from point to point, while a gauge symmetry allows changes from point to point. Loosely speaking, of course.

In gravity, space is no longer fixed. It can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of a global symmetry then still make sense? Do all symmetries become just 'like' local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives on space automatically a gauge symmetry? If we want to understand the results of what we did in the standard model, where there is no gravity, in the real world, where there is gravity, then this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot answer it yet.

These questions had been in the back of my mind for some time. A few years, actually; I do not know exactly how many. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to this again and again. I knew that eventually I would need to address it, if nobody else did. So far, nobody has.

But why now? What prompted me to start now? As so often in science, it was other scientists.

Last year at the end of November/beginning of December, I took part in a conference in Vienna, where I had been invited to talk about our research. The meeting had quite a wide scope, and among those present were several people who work on black holes and quantum physics. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory, albeit with strong gravitational effects. Which usually means a black hole. In such a setup, the deformations of space are fixed. And non-quantum black holes can still swallow stuff. This combination appears to have the following consequence: global symmetries seem to become meaningless, because everything associated with them can vanish into the black hole. However, keeping the space deformations fixed means that local symmetries are also fixed, so they appear to become real rather than auxiliary. This seems to be quite the opposite of our result, and it, and the people doing this kind of research, challenged my view of symmetries. In fact, in such a halfway case, this effect does seem to be there.

However, in a full quantum gravity theory, the game changes. Then the deformations of space become dynamical as well. At the same time, black holes need no longer swallow stuff forever, because they become dynamical too: they evolve. Thus, answering what really happens requires full quantum gravity. And because of this situation, I decided to start working actively on quantum gravity. I needed to answer whether our picture of symmetries survives, at least approximately, when there is quantum gravity, and to be able to answer such challenges. And so it began.

Within the last six months, I have worked through a lot of the basics. I now have a rough idea of what is going on, and of what needs to be done. And I think I see a way in which everything can be reconciled and make sense. It will still take a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still take time before I have a first real result. But I am quite happy with how things are progressing.

And that is the story how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁

by Axel Maas (noreply@blogger.com) at May 14, 2019 03:03 PM

May 12, 2019

Marco Frasca - The Gauge Connection

Is it possible to get rid of exotic matter in warp drive?

In 1994, Miguel Alcubierre proposed a solution of the Einstein equations (see here) describing a space-time bubble moving at arbitrary speed. It is important to notice that no violation of the light-speed limit happens, because it is space-time itself that moves, and inside the bubble everything goes as expected. This kind of solution of the Einstein equations has a fundamental drawback: it violates the Weak Energy Condition (WEC) and, in order to exist, some exotic matter with negative energy density must exist. Needless to say, nobody has ever seen such matter. There seems to be some clue in the way the Casimir effect works, but this relies on how one interprets quantum fields rather than being evidence of existence. Besides, since the initial proposal, a great number of studies have been published showing how pathological Alcubierre's solution can be, also drawing on quantum field theory (e.g. Hawking radiation). So, we have to keep dreaming of a possible interstellar travel, hoping that some smart guy will one day come up with a better solution.

Of course, Alcubierre's solution is rather interesting from a physical point of view, as it belongs to a family of older solutions, like wormholes, time machines and the like, produced by very famous authors such as Kip Thorne, that arise when one imposes a solution and then checks the conditions for its existence. This turns out to be a determination of the energy-momentum tensor which, unavoidably, is negative. These solutions therefore violate one or another energy condition of the Einstein equations, guaranteeing pathological behaviour. On the other hand, they appear the most palatable to science fiction about possible futures of space and time travel. In these times, when such technologies are widely employed by the film industry, moving the fantasy of millions, we would hope that such futures could also be possible.

It is interesting to note the procedure used to obtain these particular solutions. One engineers a metric at one's desk and then substitutes it into the Einstein equations to see whether it really is a solution; this fixes the energy requirements. Going the other way around, it is difficult to come up out of the blue with a solution of the Einstein equations that provides such particular behaviour. It is also possible that such solutions are simply not possible and always imply a violation of the energy conditions. Some theorems proved over the years seem to prohibit them (e.g. see here). Of course, I am convinced that the energy conditions must be respected if we want the physics that describes our universe. They cannot be evaded.

So, turning to the question in the title: could we think of a possible warp drive solution of the Einstein equations without exotic matter? The answer can be yes, of course, provided we are able to recover the York time, or warp factor, in the way Alcubierre obtained it with his pathological solution. At first, this seems mission impossible. But the space-time bubble we are considering is a very small perturbation, and perturbation theory can come to the rescue, particularly when this perturbation can be locally very strong. In 2005, I proposed such a solution (see here), together with a technique to solve the Einstein equations when the metric is strongly perturbed. My intent at the time was to give a proof of the BKL conjecture. A smart referee suggested that I give an example application of the method. The metric I obtained in this way, by perturbing a Schwarzschild metric, yields a solution with a York time (warp factor) identical to that of Alcubierre's metric. Of course, I respect the energy conditions, since I am directly solving the Einstein equations, which do.

The identity between the York times can be obtained provided the form factor proposed by Alcubierre is taken to be 1, but this is just the simplest case. Here is an animation of my warp factor.

Warp factor

The bubble is seen moving, as expected, along the x direction.

My personal hope is that this will go beyond a mathematical curiosity. On the other hand, it remains to be understood how to provide such perturbations to a given metric. I can think of the Einstein-Maxwell equations solved using perturbation theory; there is a great deal of literature and many important contributions on this topic.

Finally, this could give a meaning to the following video by NASA.

by mfrasca at May 12, 2019 05:59 PM

ZapperZ - Physics and Physicists

The Geekiest T-Shirt That I've Ever Bought
I just had to get this one. I found this last week during the members night at Chicago's Adler Planetarium.


The people I was with of course knew that this refers to "force", but they didn't get the connection. So I had to explain to them that Newton's 2nd law, i.e. F=ma, can be expressed in a more general form, i.e. F = dp/dt, where p is the momentum mv. Thus

F = d/dt (mv)

Of course, I'm not surprised that most people, and probably most of Adler's visitors, would not get this unless they know a bit of calculus and have done general physics with calculus. Maybe that's why this t-shirt was on sale! :)
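As a side note, the equivalence of F = ma and F = dp/dt for constant mass is easy to check numerically; here is a minimal sketch, with made-up numbers purely for illustration:

```python
# Sanity check: for constant mass, F = dp/dt reduces to F = ma.
# All numbers below are arbitrary, chosen only for the demonstration.
m = 2.0    # mass in kg
a = 3.0    # constant acceleration in m/s^2
dt = 1e-6  # small time step for the finite difference

def p(t):
    """Momentum p = m*v, with v = a*t (starting from rest)."""
    return m * (a * t)

t = 5.0
F_from_dpdt = (p(t + dt) - p(t - dt)) / (2 * dt)  # central difference for dp/dt
F_from_ma = m * a

print(F_from_dpdt, F_from_ma)  # both come out as ~6.0 N
```

Since the mass is constant, the momentum is linear in time and the finite difference reproduces ma essentially exactly; the general form F = dp/dt only differs when the mass itself changes (rockets, for instance).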

Maybe I'll wear this when I teach kinematics this Fall!

Zz.

by ZapperZ (noreply@blogger.com) at May 12, 2019 02:40 PM

May 08, 2019

Jon Butterworth - Life and Physics

Mosquitos and Toblerones
A couple of years ago I went to see Lucy Kirkwood’s play Mosquitos at the National Theatre. It starred Olivia Coleman and Olivia Williams, who were both brilliant, and was set largely in and around CERN. There was a lot … Continue reading

by Jon Butterworth at May 08, 2019 07:20 PM

May 04, 2019

Clifford V. Johnson - Asymptotia

Endgame Memories

About 2-3 (ish) years ago, I was asked to visit the Disney/Marvel mothership in Burbank for a meeting. I was ushered into the inner workings of the MCU, past a statue of the newly acquired Spidey, and into a room. Present were Christopher Markus and Stephen McFeely, the writers of … Click to continue reading this post

The post Endgame Memories appeared first on Asymptotia.

by Clifford at May 04, 2019 06:34 PM

April 30, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Week at The Surf Experience

I don’t often take a sun holiday these days, but I had a fabulous time last week at The Surf Experience in Lagos, Portugal. I’m not an accomplished surfer by any measure, but there is nothing quite like the thrill of catching a few waves in the sea with the sun overhead – a nice change from the indoors world of academia.

Not for the first time, I signed up for a residential course with The Surf Experience in Lagos. Founded by veteran German surfer Dago Lipke, guests of The Surf Experience stay at the surf lodge Vila Catarina, a lovely villa in the hills above Lagos, complete with beautiful gardens and swimming pool. Sumptuous meals are provided by Dago’s wife Connie, a wonderful cook. Instead of wandering around town trying to find a different restaurant every evening, guests enjoy an excellent meal in a quiet setting in good company, followed by a game of pool or chess. And it really is good company. Guests at TSE tend mainly to hail from Germany and Switzerland, with a sprinkling from France and Sweden, so it’s truly international – quite a contrast to your average package tour (or indeed our college staff room). Not a mention of Brexit, and an excellent opportunity to improve my German. (Is that what you tell yourself?- Ed)

IMG_2637 (1)

Hanging out at the pool before breakfast

IMG_2634

Fine dining at The Surf Experience

IMG_2624

A game of cards and a conversation instead of a noisy bar

Of course, no holiday is perfect and in this case I managed to pick up an injury on the first day. Riding the tiniest wave all the way back to the beach, I got unexpectedly thrown off, hitting my head off the bottom at speed. (This is the most elementary error you can make in surfing and it risks serious injury, from concussion to spinal fracture.) Luckily, I walked away with nothing more than severe bruising to the neck and chest (as later established by X-ray at the local medical clinic, also an interesting experience). So no life-altering injuries, but like a jockey with a broken rib, I was too sore to get back on the horse for a few days. Instead, I tried Stand Up Paddling for the first time, which I thoroughly enjoyed. It’s more exciting than it looks, must get my own board for calm days at home.


Stand Up Paddling in Lagos with Kiteschool Portugal

Things got even better towards the end of the week as I began to heal. Indeed, the entire surf lodge had a superb day’s surfing yesterday on beautiful small green waves at a beach right next to town (in Ireland, we very rarely see clean conditions like this, the surf is mainly driven by wind). It was fantastic to catch wave after wave throughout the afternoon, even if clambering back on the board after each wasn’t much fun for yours truly.

This morning, I caught a Ryanair flight back to Dublin from Faro, should be back in the office by late afternoon. Oddly enough, I feel enormously refreshed – perhaps it’s the feeling of gradually healing. Hopefully the sensation of being continuously kicked in the ribs will disappear soon and I’ll be back on the waves in June. In the meantime, this week marks a study period for our students before their exams, so it’s an ideal time to prepare my slides for the Eddington conference in Paris later this month.

Update

I caught a slight cold on the way back, so today I’m wandering around college like a lunatic going cough, ‘ouch’ , sneeze, ‘ouch’.  Maybe it’s karma for flying Ryanair – whatever about indulging in one or two flights a year, it’s a terrible thing to use an airline whose CEO continues to openly deny the findings of climate scientists.

 

by cormac at April 30, 2019 09:49 PM

April 25, 2019

Clifford V. Johnson - Asymptotia

Black Hole Session

Well I did not get the special NYT issue as a keepsake, but this is maybe better: I got to attend the first presentation of the “black hole picture” scientific results at a conference, the APS April meeting (Sunday April 14th 2019). I learned so much! These are snaps of … Click to continue reading this post

The post Black Hole Session appeared first on Asymptotia.

by Clifford at April 25, 2019 06:35 PM

April 24, 2019

Andrew Jaffe - Leaves on the Line

Spring Break?

Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision, tutoring alongside a very busy time analysing Planck satellite data.

But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years, working up from the experimentation of my first time through in 2009, putting it all on a firmer foundation — and writing up the lecture notes — in 2010, and refined over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.

Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners from advanced undergraduates through working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.

by Andrew at April 24, 2019 01:19 AM

April 23, 2019

Georg von Hippel - Life on the lattice

Looking for guest blogger(s) to cover LATTICE 2018
Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

April 16, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: Seeing More Clearly

THIS POST CONTAINS ERRORS CONCERNING THE EXISTENCE AND VISIBILITY OF THE SO-CALLED PHOTON-SPHERE AND SHADOW; THESE ERRORS WERE COMMON TO ESSENTIALLY ALL REPORTING ON THE BLACK HOLE ‘PHOTO’.  IT HAS BEEN SUPERSEDED BY THIS POST, WHICH CORRECTS THESE ERRORS AND EXPLAINS THE SITUATION.

Ok, after yesterday’s post, in which I told you what I still didn’t understand about the Event Horizon Telescope (EHT) black hole image (see also the pre-photo blog post in which I explained pedagogically what the image was likely to show and why), today I can tell you that quite a few of the gaps in my understanding are filling in (thanks mainly to conversations with Harvard postdoc Alex Lupsasca and science journalist Davide Castelvecchi, and to direct answers from professor Heino Falcke, who leads the Event Horizon Telescope Science Council and co-wrote a founding paper in this subject).  And I can give you an update to yesterday’s very tentative figure.

First: a very important point, to which I will return in a future post, is that as I suspected, it’s not at all clear what the EHT image really shows.   More precisely, assuming Einstein’s theory of gravity is correct in this context:

  • The image itself clearly shows a black hole’s quasi-silhouette (called a `shadow’ in expert jargon) and its bright photon-sphere where photons [particles of light — of all electromagnetic waves, including radio waves] can be gathered and focused.
  • However, all the light (including the observed radio waves) coming from the photon-sphere was emitted from material well outside the photon-sphere; and the image itself does not tell you where that material is located.  (To quote Falcke: this is `a blessing and a curse’; insensitivity to the illumination source makes it easy to interpret the black hole’s role in the image but hard to learn much about the material near the black hole.) It’s a bit analogous to seeing a brightly shining metal ball while not being able to see what it’s being lit by… except that the photon-sphere isn’t an object.  It’s just a result of the play of the light [well, radio waves] directed by the bending effects of gravity.  More on that in a future post.
  • When you see a picture of an accretion disk and jets drawn to illustrate where the radio waves may come from, keep in mind that it involves additional assumptions — educated assumptions that combine many other measurements of M87’s black hole with simulations of matter, gravity and magnetic fields interacting near a black hole.  But we should be cautious: perhaps not all the assumptions are right.  The image shows no conflicts with those assumptions, but neither does it confirm them on its own.

Just to indicate the importance of these assumptions, let me highlight a remark made at the press conference that the black hole is rotating quickly, clockwise from our perspective.  But (as the EHT papers state) if one doesn’t make some of the above-mentioned assumptions, one cannot conclude from the image alone that the black hole is actually rotating.  The interplay of these assumptions is something I’m still trying to get straight.

Second, if you buy all the assumptions, then the picture I drew in yesterday’s post is mostly correct except (a) the jets are far too narrow, and shown overly disconnected from the disk, and (b) they are slightly mis-oriented relative to the orientation of the image.  Below is an improved version of this picture, probably still not the final one.  The new features: the jets (now pointing in the right directions relative to the photo) are fatter and not entirely disconnected from the accretion disk.  This is important because the dominant source of illumination of the photon-sphere might come from the region where the disk and jets meet.

My3rdGuessBHPhoto.png

Updated version of yesterday’s figure: main changes are the increased width and more accurate orientation of the jets.  Working backwards: the EHT image (lower right) is interpreted, using mainly Einstein’s theory of gravity, as (upper right) a thin photon-sphere of focused light surrounding a dark patch created by the gravity of the black hole, with a little bit of additional illumination from somewhere.  The dark patch is 2.5 – 5 times larger than the event horizon of the black hole, depending on how fast the black hole is rotating; but the image itself does not tell you how the photon-sphere is illuminated or whether the black hole is rotating.  Using further assumptions, based on previous measurements of various types and computer simulations of material, gravity and magnetic fields, a picture of the black hole’s vicinity (upper left) can be inferred by the experts. It consists of a fat but tenuous accretion disk of material, almost face-on, some of which is funneled into jets, one heading almost toward us, the other in the opposite direction.  The material surrounds but is somewhat separated from a rotating black hole’s event horizon.  At this radio frequency, the jets and disk are too dim in radio waves to see in the image; only at (and perhaps close to) the photon-sphere, where some of the radio waves are collected and focused, are they bright enough to be easily discerned by the Event Horizon Telescope.
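For reference, the “2.5 – 5 times larger” statement in the caption traces back to textbook black-hole geometry. A sketch for the idealized non-rotating (Schwarzschild) case, not taken from the EHT papers themselves:

```latex
% Schwarzschild radii relevant to the image, for a black hole of mass M:
r_s = \frac{2GM}{c^2}
  \quad \text{(event horizon radius)}
\qquad
r_{\mathrm{ph}} = \frac{3GM}{c^2} = \tfrac{3}{2}\, r_s
  \quad \text{(circular photon orbit)}
\qquad
b_{\mathrm{crit}} = \sqrt{27}\,\frac{GM}{c^2}
  = \frac{\sqrt{27}}{2}\, r_s \approx 2.6\, r_s
  \quad \text{(apparent radius of the dark patch)}
```

That gives the lower end of the quoted range; for a rapidly rotating black hole the horizon shrinks toward GM/c² while the apparent dark patch stays of comparable size, pushing the ratio toward the upper end.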

 

by Matt Strassler at April 16, 2019 12:53 PM

Jon Butterworth - Life and Physics

The Universe Speaks in Numbers
I have reviewed Graham Farmelo’s new book for Nature. You can find the full review here. Mathematics, physics and the relationship between the two is a fascinating topic which sparks much discussion. The review only came out this morning and … Continue reading

by Jon Butterworth at April 16, 2019 12:16 PM

April 15, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: What Are We Looking At?

The short answer: I’m really not sure yet.  [This post is now largely superseded by the next one, in which some of the questions raised below have now been answered.]  EVEN THAT POST WAS WRONG ABOUT THE PHOTON-SPHERE AND SHADOW.  SEE THIS POST FROM JUNE 2019 FOR SOME ESSENTIAL CORRECTIONS THAT WERE LEFT OUT OF ALL REPORTING ON THIS SUBJECT.

Neither are some of my colleagues who know more about the black hole geometry than I do. And at this point we still haven’t figured out what the Event Horizon Telescope experts do and don’t know about this question… or whether they agree amongst themselves.

[Note added: last week, a number of people pointed me to a very nice video by Veritasium illustrating some of the features of black holes, accretion disks and the warping of their appearance by the gravity of the black hole.  However, Veritasium’s video illustrates a non-rotating black hole with a thin accretion disk that is edge-on from our perspective; and this is definitely NOT what we are seeing!]

As I emphasized in my pre-photo blog post (in which I described carefully what we were likely to be shown, and the subtleties involved), this is not a simple photograph of what’s `actually there.’ We all agree that what we’re looking at is light from some glowing material around the solar-system-sized black hole at the heart of the galaxy M87.  But that light has been wildly bent on its path toward Earth, and so — just like a room seen through an old, warped window, and a dirty one at that — it’s not simple to interpret what we’re actually seeing. Where, exactly, is the material `in truth’, such that its light appears where it does in the image? Interpretation of the image is potentially ambiguous, and certainly not obvious.

The naive guess as to what to expect — which astronomers developed over many years, based on many studies of many suspected black holes — is crudely illustrated in the figure at the end of this post.  Material around a black hole has two main components:

  • An accretion disk of `gas’ (really plasma, i.e. a very hot collection of electrons, protons, and other atomic nuclei) which may be thin and concentrated, or thick and puffy, or something more complicated.  The disk extends inward to within a few times the radius of the black hole’s event horizon, the point of no-return; but how close it can be depends on how fast the black hole rotates.
  • Two oppositely-directed jets of material, created somehow by material from the disk being concentrated and accelerated by magnetic fields tied up with the black hole and its accretion disk; the jets begin not far from the event horizon, but then extend outward all the way to the outer edges of the entire galaxy.

But even if this is true, it’s not at all obvious (at least to me) what these objects look like in an image such as we saw Wednesday. As far as I am currently aware, their appearance in the image depends on

  • Whether the disk is thick and puffy, or thin and concentrated;
  • How far the disk extends inward and outward around the black hole;
  • The process by which the jets are formed and where exactly they originate;
  • How fast the black hole is spinning;
  • The orientation of the axis around which the black hole is spinning;
  • The typical frequencies of the radio waves emitted by the disk and by the jets (compared to the frequency, about 230 Gigahertz, observed by the Event Horizon Telescope);

and perhaps other things. I can’t yet figure out what we do and don’t know about these things; and it doesn’t help that some of the statements made by the EHT scientists in public and in their six papers seem contradictory (and I can’t yet say whether that’s because of typos, misstatements by them, or [most likely] misinterpretations by me.)

So here’s the best I can do right now, for myself and for you. Below is a figure that is nothing but an illustration of my best attempt so far to make sense of what we are seeing. You can expect that some fraction of this figure is wrong. Increasingly I believe this figure is correct in cartoon form, though the picture on the left is too sketchy right now and needs improvement.  [NOTE ADDED: AS EXPLAINED IN THIS MORE RECENT POST, THE “PHOTON-SPHERE” DOES NOT EXIST FOR A ROTATING BLACK HOLE; THE “PHOTON-RING” OF LIGHT THAT SURROUNDS THE SHADOW DOES NOT DOMINATE WHAT IS ACTUALLY SEEN IN THE IMAGE; AND THE DARK PATCH IN THE IMAGE ISN’T NECESSARILY THE ENTIRE SHADOW.]  What I’ll be doing this week is fixing my own misconceptions and trying to become clear on what the experts do and don’t know. Experts are more than welcome to set me straight!

In short — this story is not over, at least not for me. As I gain a clearer understanding of what we do and don’t know, I’ll write more about it.

 

MyFirstGuessBHPhoto.png

My personal confused and almost certainly inaccurate understanding [the main inaccuracy is that the disk and jets are fatter than shown, and connected to one another near the black hole; that’s important because the main illumination source may be the connection region; also jets aren’t oriented quite right] of how one might interpret the black hole image; all elements subject to revision as I learn more. Left: the standard guess concerning the immediate vicinity of M87’s black hole: an accretion disk oriented nearly face-on from Earth’s perspective, jets aimed nearly at and away from us, and a rotating black hole at the center.  The orientation of the jets may not be correct relative to the photo.  Upper right: The image after the radio waves’ paths are bent by gravity.  The quasi-silhouette of the black hole is larger than the `true’ event horizon, a lot of radio waves are concentrated at the ‘photon-sphere’ just outside (brighter at the bottom due to the black-hole spinning clockwise around an axis slightly askew to our line of sight); some additional radio waves from the accretion disk and jets further complicate the image. Most of the disk and jets are too dim to see.  Lower Right: This image is then blurred out by the Event Horizon Telescope’s limitations, partly compensated for by heavy-duty image processing.

 

by Matt Strassler at April 15, 2019 04:02 PM

April 10, 2019

Clifford V. Johnson - Asymptotia

It’s a Black Hole!

Yes, it’s a black hole all right. Following on from my reflections from last night, I can report that the press conference revelations were remarkable indeed. Above you see the image they revealed! It is the behemoth at the centre of the galaxy M87! This truly groundbreaking image is the … Click to continue reading this post

The post It’s a Black Hole! appeared first on Asymptotia.

by Clifford at April 10, 2019 01:37 PM

April 06, 2019

Andrew Jaffe - Leaves on the Line

@TheMekons make the world alright, briefly, at the 100 Club, London.

by Andrew at April 06, 2019 10:17 AM

March 31, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

My favourite conference; the Institute of Physics Spring Weekend

This weekend I attended the annual meeting of the Institute of Physics in Ireland. I always enjoy these meetings – more relaxing than a technical conference and a great way of keeping in touch with physicists from all over the country. As ever, there were a number of interesting presentations, plenty of discussions of science and philosophy over breakfast, lunch and dinner, all topped off by the annual awarding of the Rosse Medal, a highly competitive competition for physics postgraduates across the nation.

banner

The theme of this year’s meeting was ‘A Climate of Change’ and thus the programme included several talks on the highly topical subject of anthropogenic climate change. First up was ‘The science of climate change’, a cracking talk on the basic physics of climate change by Professor Joanna Haigh of Imperial College London. This was followed by ‘Climate change: where we are post the IPCC report and COP24’, an excellent presentation by Professor John Sweeney of Maynooth University on the latest results from the IPCC. Then it was my turn. In ‘Climate science in the media – a war on information?’, I compared the coverage of climate change in the media with that of other scientific topics such as medical science and big bang cosmology. My conclusion was that climate change is a difficult subject to convey to the public, and matters are not helped by actors who deliberately attempt to muddle the science and downplay the threat. You can find details of the full conference programme here and the slides for my own talk here.

 

Images of my talk from IoP Ireland 

There followed a panel discussion in which Professor Haigh, Professor Sweeney and I answered questions from the floor on climate science. I don’t always enjoy panel discussions, but I think this one was useful, thanks to some excellent chairing by Paul Hardaker of the Institute of Physics.


Panel discussion of the threat of anthropogenic climate change

After lunch, we were treated to a truly fascinating seminar, ‘Tropical storms, hurricanes, or just a very windy day?: Making environmental science accessible through Irish Sign Language’, by Dr Elizabeth Mathews of Dublin City University, on the challenge of making media descriptions of threats such as storms, hurricanes and climate change accessible to deaf people. This was followed by a most informative talk by Dr Bajram Zeqiri of the National Physical Laboratory on the recent redefinition of the kilogram, ‘The measure of all things: redefinition of the kilogram, the kelvin, the ampere and the mole’.

Finally, we had the hardest part of the day, the business of trying to select the best postgraduate posters and choosing a winner from the shortlist. As usual, I was blown away by the standard, far ahead of anything I or my colleagues ever produced. In the end, the Rosse Medal was awarded to Sarah Markham of the University of Limerick for a truly impressive poster and presentation.


Viewing posters at the IoP 2019 meeting; image courtesy of IoP Ireland

All in all, another super IoP Spring weekend. Now it’s back to earth and back to teaching…

by cormac at March 31, 2019 08:51 PM

March 29, 2019

Robert Helling - atdotde

Proving the Periodic Table
The year 2019 is the International Year of the Periodic Table, celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought.



In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is the aufbau (Madelung) rule that tells you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester at university, you learn to derive those using Schrödinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the principal quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.
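
For concreteness, that filling order is easy to generate; here is a small sketch (mine, not from the post) of the n + l ordering rule in Python:

```python
# Sketch of the n + l (Madelung) ordering: orbitals are filled in order of
# increasing n + l, with ties broken by smaller n. This reproduces the
# familiar 1s, 2s, 2p, 3s, 3p, 4s, 3d, ... sequence from chemistry class.

L_LABELS = "spdfghik"  # spectroscopic letters for l = 0, 1, 2, ...

def madelung_order(n_max):
    """Orbital labels up to principal quantum number n_max, in filling order."""
    orbitals = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{L_LABELS[l]}" for n, l in orbitals]

print(madelung_order(4))
# -> ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

Note the tie-breaking at work: 4s (n + l = 4) comes before 3d (n + l = 5), exactly the non-hydrogenic feature the rest of this post is about.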

Except that it doesn't: This is only true for the hydrogen atom. But the Hamiltonian for an atom with nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)

$$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there is no interaction between the electrons and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave function in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).


Band structure in a NPN-transistor
Also in that case, you pretend there is only one electron in the world that feels the periodic electric potential created by the nuclei and all the other electrons which don't show up anymore in the wave function but only as charge density.

For atoms you could try to tell a similar story by taking the inner electrons into account by saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons, and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to the nucleus than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are long wavelength compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D the Fermi gas is not the whole story, there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regards to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})=O(Z^2)$, the Scott correction) is known, and people have put a lot of effort into $O(Z^{5/3})$. But it should be clear that this technology is still very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
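
For reference, that scaling argument is a one-liner once the Thomas-Fermi functional is written down (my sketch, constants suppressed; $c_{TF}$ is the Thomas-Fermi constant):

```latex
E^{TF}[\rho] = c_{TF}\int \rho^{5/3}\,d^3x
  \;-\; Z\int \frac{\rho(x)}{|x|}\,d^3x
  \;+\; \frac12 \iint \frac{\rho(x)\,\rho(y)}{|x-y|}\,d^3x\,d^3y .
```

Substituting $\rho(x) = Z^2\,\sigma(Z^{1/3}x)$, which preserves the normalisation $\int\rho = Z$ when $\int\sigma = 1$, every term picks up exactly a factor $Z^{7/3}$ (check: $Z^{10/3}\cdot Z^{-1}$ for the kinetic term, $Z^{3}\cdot Z^{1/3}\cdot Z^{-1}$ for the attraction, $Z^{4}\cdot Z^{1/3}\cdot Z^{-2}$ for the repulsion), so minimising over $\sigma$ gives $E^{TF}(Z) = Z^{7/3}\,E^{TF}(1)$.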

On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian, you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly unwanted ee-term perturbatively.

Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. Still, there is some way to go even 150 years after Mendeleev.

by Unknown (noreply@blogger.com) at March 29, 2019 11:02 AM

March 21, 2019

Alexey Petrov - Symmetry factor

CP-violation in charm observed at CERN

 

There is big news that came from CERN today. It was announced at a conference called Rencontres de Moriond, one of the major yearly conferences in the field of particle physics. One of CERN’s experiments, LHCb, reported an observation — yes, an observation, not evidence for, but an observation — of CP-violation in the charm system. Why is it big news and why should you care?

You should care about this announcement because it has something to do with why our Universe looks the way it does. As you look around, you might notice an interesting fact: everything is made of matter. So what about it? Well, one thing is missing from our everyday life: antimatter.

As it turns out, physicists believe that the amounts of matter and antimatter were the same after the Universe was created. So, the $1,110,000 question is: what happened to antimatter? According to Sakharov’s criteria for baryogenesis (a process of creating more baryons, like protons and neutrons, than anti-baryons), one of the conditions for our Universe to be the way it is would be to have matter particles interact slightly differently from the corresponding antimatter particles. In particle physics this condition is called CP-violation. It has been observed in beauty and strange quarks, but never in charm quarks. As charm quarks are fundamentally different from both beauty and strange ones (electrical charge, mass, ways they interact, etc.), physicists hoped that New Physics, something that we have not yet seen or predicted, might be lurking nearby and could be revealed in charm decays. That is why so much attention has been paid to searches for CP-violation in charm.

Now there are indications that the search is finally over: LHCb announced that they observed CP-violation in charm. Here is their announcement (look for a news item from 21 March 2019). A technical paper can be found here, discussing how LHCb extracted CP-violating observables from time-dependent analysis of D -> KK and D-> pipi decays.

The result is generally consistent with the Standard Model expectations. However, there are theory papers (like this one) that predict the Standard Model result to be about seven times smaller with rather small uncertainty.  There are three possible interesting outcomes:

  1. Experimental result is correct but the theoretical prediction mentioned above is not. Well, theoretical calculations in charm physics are hard and often unreliable, so that theory paper underestimated the result and its uncertainties.
  2. Experimental result is incorrect but the theoretical prediction mentioned above is correct. Maybe LHCb underestimated their uncertainties?
  3. Experimental result is correct AND the theoretical prediction mentioned above is correct. This is the most interesting outcome: it implies that we see effects of New Physics.

What will it be? Time will show.

A more technical note on why it is hard to see CP-violation in charm.

One reason that CP-violating observables are hard to see in charm is that they are quite small, at least in the Standard Model. All final/initial state quarks in the D -> KK or D -> pipi transitions belong to the first two generations. The CP-violating asymmetry that arises when we compare time-dependent decay rates of D0 to a pair of kaons or pions with the corresponding decays of the anti-D0 particle can only appear if one picks up the weak phase that is associated with the third generation of quarks (b and t), which is possible via the penguin amplitude. The problem is that the penguin amplitude is small, as the Glashow–Iliopoulos–Maiani (GIM) mechanism makes it proportional to m_b^2 times tiny CKM factors. The strong phases needed for this asymmetry come from the tree-level decays and (supposedly) are largely non-perturbative.

Notice that in B-physics the situation is exactly the opposite. You get the weak phase from the tree-level amplitude and the penguin one is proportional to m_top^2, so CP-violating interference is large.

Ask me if you want to know more!

by apetrov at March 21, 2019 06:45 PM

March 16, 2019

Robert Helling - atdotde

Smokescreen: the CDU proposal for "no upload filters"
Sorry, this is one of the occasional posts about German politics. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (which must be stopped in its current form!!! March 23rd is the international day of protest); the CDU party has now proposed how to implement it in German law, although so unspecifically that all the problematic details are left out. Here is the post.

Maybe I am too dense, but I do not see where exactly the progress is over what is being discussed at the EU level, except that the CDU proposal is so vague that all internal contradictions vanish into the fog. At the EU level, too, the proponents say that one should much rather acquire licences than filter. That in itself is not new.

New, at least in this Handelsblatt article (I have not found it anywhere else), is the mention of hash sums ("digital fingerprint"), or is that supposed to be something like a digital watermark? That would be a real novelty, but it would nip the whole procedure in the bud, since only the original file would be protected (which would be trivial to detect anyway) while every form of derived work would fall completely through the cracks, so one could "liberate" works with a trivial modification. Otherwise we are back to the dubious filters based on AI technology that does not yet exist.

The other point is the blanket licence. I would then no longer have to conclude contracts with all the rights holders, but only with a "VG Internet" collecting society. But the big question is again who this is supposed to apply to. The intended targets are of course, once more, YouTube, Google and Facebook. But how do you put that into legal language? That is precisely the central bone of contention of the EU directive: everyone needs a blanket licence, unless they are non-commercial (and who is, really?), or (younger than three years, with few users and small turnover), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television, with a few big broadcasters and so on, only somehow different" view, so readily propagated by people who observe the internet from a distance. Because it flattens practically everything else. What about forums or photo hosters? Would they all have to acquire a blanket licence (which would then have to be priced high enough to cover all film and music rights in the whole world at a flat rate)? What prevents this from ending up as a "whoever runs a service on the internet must first buy a paid internet licence before going online" law, which at any non-trivial licence fee would be the end of all grassroots innovation?

It would of course also be interesting to see how the revenues of the VG Internet get distributed. You would be a rogue to suspect that large parts would end up with, say, the press publishers. That would then finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn so much" law. In that case the licence fee had best be a percentage of turnover, in other words an internet tax.

And I will not even start on where this leads if all European countries cook up their own implementation soup this drastically.

All in all, quite a successful coup by the CDU, one that may manage to take the wind out of the sails of the critics of Article 13 in public opinion by wrapping everything in a vague cloud of fog, while all the problematic rules are likely to hide in the details.

by Unknown (noreply@blogger.com) at March 16, 2019 09:43 AM

March 13, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

RTE’s Brainstorm; a unique forum for public intellectuals

I have an article today on RTE’s ‘Brainstorm’ webpage, my tribute to Stephen Hawking one year after his death.

"Hawking devoted a great deal of time to science outreach, unusual for a scientist at this level"

I wasn’t aware of the RTE Brainstorm initiative until recently, but I must say it is a very interesting and useful resource. According to the mission statement on the website, “RTÉ Brainstorm is where the academic and research community will contribute to public debate, reflect on what’s happening in the world around us and communicate fresh thinking on a broad range of issues”. A partnership between RTE, University College Cork, NUI Galway, University of Limerick, Dublin City University, Ulster University, Maynooth University and the Technological University of Dublin, the idea is to provide an online platform for academics and other specialists to engage in public discussions of interesting ideas and perspectives in user-friendly language. You can find a very nice description of the initiative in The Irish Times here.

I thoroughly approve of this initiative. Many academics love to complain about the portrayal of their subject (and a lot of other subjects) in the media; this provides a simple and painless method for such people to reach a wide audience. Indeed, I’ve always liked the idea of the public intellectual. Anyone can become a specialist in a given topic; it’s a lot harder to make a meaningful contribution to public debate. Some would say this is precisely the difference between the academic and the public intellectual. Certainly, I enjoy engaging in public discussions of matters close to my area of expertise and I usually learn something new. That said, a certain humility is an absolute must – it’s easy to forget that detailed knowledge of a subject does not automatically bestow the wisdom of Solomon. Indeed, there is nothing worse than listening to a specialist use their expertise to bully others into submission – it’s all about getting the balance right and listening as well as informing….

by cormac at March 13, 2019 07:28 PM

March 06, 2019

Robert Helling - atdotde

Challenge: How to talk to a flat earther?
Further down the rabbit hole, over lunch I finished watching "Behind the Curve", a Netflix documentary on people believing the earth is a flat disk. According to them, the north pole is in the center, while Antarctica is an ice wall at the boundary. Sun and moon are much closer and flying above this disk while the stars are on some huge dome like in a planetarium. NASA is a fake agency promoting the doctrine and airlines must be part of the conspiracy as they know that you cannot directly fly between continents on the southern hemisphere (really?).

These people are happily using GPS for navigation but have a general mistrust in the science (and their teachers) of at least two centuries.

Besides the obvious "I don't see the curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams not being as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.

So here is my challenge: Which experiment would you conduct with them to convince them? Warning: Everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render the observation inconclusive. The sun standing at different elevations (heights) at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).

My personal solution is to point to the observation that the altitude of Polaris (around which, I hope, they can agree the night sky rotates) is given by the geographical latitude: At the north pole it is right above you, but it has to go down the further south you get. I cannot see how this could be reconciled with a dome projection.
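
One can even put rough numbers on this. The sketch below is my own toy model, nothing from the post: it assumes Polaris hovering at an arbitrary 5000 km above the pole of the flat disk and roughly 111 km of ground distance per degree of latitude, and compares the two predictions.

```python
import math

# On a globe, the altitude of the celestial pole equals the observer's
# latitude. On a flat disk with Polaris at height h above the north pole,
# the altitude would be atan(h / d), with d the ground distance to the pole.
# The disk parameters below (h = 5000 km) are arbitrary toy numbers.

KM_PER_DEGREE = 111.0  # rough ground distance per degree of latitude

def altitude_globe(latitude_deg):
    return float(latitude_deg)

def altitude_flat_disk(latitude_deg, h_km=5000.0):
    d_km = (90.0 - latitude_deg) * KM_PER_DEGREE
    return math.degrees(math.atan2(h_km, d_km))

for lat in (10, 30, 50, 70):
    print(lat, altitude_globe(lat), round(altitude_flat_disk(lat), 1))
```

The globe prediction is exactly linear in latitude; the flat-disk prediction is not, and no single height h reproduces the observed altitude-equals-latitude law across latitudes, which is the quantitative version of the argument above.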

How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra high-altitude planes. You are allowed to make use of a phone and cameras, and you can travel (say by car or commercial flight, but you cannot influence the flight route). It must not involve lots of money or higher math.


by Unknown (noreply@blogger.com) at March 06, 2019 02:24 PM

February 24, 2019

Michael Schmitt - Collider Blog

Miracles when you use the right metric

I recommend reading, carefully and thoughtfully, the preprint “The Metric Space of Collider Events” by Patrick Komiske, Eric Metodiev, and Jesse Thaler (arXiv:1902.02346). There is a lot here, perhaps somewhat cryptically presented, but much of it is exciting.

First, you have to understand what the Earth Mover’s Distance (EMD) is. This is easier to understand than the Wasserstein metric, of which it is a special case. The EMD is a measure of how different two pdfs (probability density functions) are, and it is rather different from the usual chi-squared or mean integrated squared error because it emphasizes separation rather than overlap. The idea is to look at how much work you have to do to reconstruct one pdf from another, where “reconstruct” means transporting a portion of the first pdf a given distance. You keep track of the “work” you do, which means the amount of area (i.e., ”energy” or “mass”) you transport and how far you transport it. The Wikipedia article aptly makes an analogy with suppliers delivering piles of stones to customers. The EMD is the smallest effort required.
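
To make the stone-pile picture concrete, here is a toy one-dimensional example (mine, not from the paper) using `scipy.stats.wasserstein_distance`, which computes exactly this quantity for 1D distributions:

```python
from scipy.stats import wasserstein_distance

# A unit pile of "stones" at x=0 moved to x=1 costs 1 unit of work; moved
# to x=3 it costs 3, even though in both cases the two pdfs have zero
# overlap -- this distance sensitivity is what overlap-based measures miss.
d_near = wasserstein_distance([0.0], [1.0])
d_far = wasserstein_distance([0.0], [3.0])

# Half of the mass moves a distance 2, the other half stays put:
# EMD = 0.5 * 2 + 0.5 * 0 = 1.
d_split = wasserstein_distance([0.0, 2.0], [2.0], u_weights=[0.5, 0.5])

print(d_near, d_far, d_split)  # -> 1.0 3.0 1.0
```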

The EMD is a rich concept because it allows you to carefully define what “distance” means. In the context of delivering stones, transporting them across a plain and up a mountain are not the same. In this sense, rotating a collision event about the beam axis should “cost” nothing – i.e., be irrelevant – while increasing the energy or transverse momentum should, because it is phenomenologically interesting.

The authors want to define a metric for LHC collision events with the notion that events that come from different processes should be well separated. This requires a definition of “distance” – hence the word “metric” in the title. You have to imagine taking one collision event, consisting of individual particles or perhaps a set of hadronic jets, and transporting pieces of it in order to match some other event. If you have to transport the pieces a great distance, then the events are very different. The authors’ ansatz is a straightforward one, depending essentially on the angular distance θij/R plus a term that takes into account the difference in total energies of the two events. Note: the subscripts i and j refer to two elements from the two different events. The paper gives a very nice illustration for two top quark events (red and blue):

Transformation of one top quark event into another
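
This event-level distance can be prototyped as a small linear program. The sketch below is my own toy version, following the paper's definition only loosely: particles are (pT, y, phi) triples, the ground distance is the plain rapidity-azimuth distance divided by R (ignoring the periodicity of phi), and an |E − E'| penalty is added for unmatched transverse momentum.

```python
import numpy as np
from scipy.optimize import linprog

def event_emd(evt_a, evt_b, R=1.0):
    """Toy event EMD: optimal transport cost plus an |E - E'| penalty."""
    pa = np.array([p[0] for p in evt_a]); pb = np.array([p[0] for p in evt_b])
    xa = np.array([p[1:] for p in evt_a]); xb = np.array([p[1:] for p in evt_b])
    # ground distance theta_ij / R between particle i of A and particle j of B
    cost = (np.linalg.norm(xa[:, None, :] - xb[None, :, :], axis=-1) / R).ravel()
    na, nb = len(pa), len(pb)
    # inequality constraints: row sums <= pT_i, column sums <= pT'_j
    A_ub = np.zeros((na + nb, na * nb))
    for i in range(na):
        A_ub[i, i * nb:(i + 1) * nb] = 1.0
    for j in range(nb):
        A_ub[na + j, j::nb] = 1.0
    b_ub = np.concatenate([pa, pb])
    # equality constraint: transport exactly min(total pT, total pT')
    A_eq = np.ones((1, na * nb))
    b_eq = [min(pa.sum(), pb.sum())]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))
    return res.fun + abs(pa.sum() - pb.sum())

# one particle moved by one unit in rapidity costs exactly its pT
print(round(event_emd([(1.0, 0.0, 0.0)], [(1.0, 1.0, 0.0)]), 6))  # -> 1.0
```

For real events one would use the authors' optimized tooling rather than a dense LP, but the structure of the computation is the same.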

The first thing that came to mind when I had grasped, with some effort, the suggested metric, was that this could be a great classification tool. And indeed it is. The authors show that a k-nearest-neighbors algorithm (KNN), straight out of the box, equipped with their notion of distance, works nearly as well as very fancy machine learning techniques! It is crucial to note that there is no training here, no search for a global minimum of some very complicated objective function. You only have to evaluate the EMD, and in their case, this is not so hard. (Sometimes it is.) Here are the ROC curves:

ROC curves. The red curve is the KNN with this metric, and the other curves close by are fancy ML algorithms. The light blue curve is a simple cut on N-subjettiness observables, itself an important theoretical tool


I imagine that some optimization could be done to close the small gap with respect to the best performing algorithms, for example in improving on the KNN.
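
For readers who want to try the out-of-the-box part: scikit-learn's KNN accepts precomputed distance matrices directly, so plugging in pairwise EMDs requires no custom classifier. Here is a minimal sketch of the plumbing (my own, with simple 1D distances standing in for real EMDs, and invented toy data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# toy "events": two well-separated classes on a line
x_train = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(5.0, 1.0, 50)])
y_train = np.array([0] * 50 + [1] * 50)
x_test = np.array([0.1, 4.9])

# stand-ins for the (n_train, n_train) and (n_test, n_train) EMD matrices
d_train = np.abs(x_train[:, None] - x_train[None, :])
d_test = np.abs(x_test[:, None] - x_train[None, :])

knn = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
knn.fit(d_train, y_train)
print(knn.predict(d_test))  # -> [0 1]
```

Swapping the fake distance matrices for true event EMDs is the entire change needed to reproduce the spirit of the paper's classifier.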

The next intriguing idea presented in this paper is the fractal dimension, or correlation dimension, dim(Q), associated with their metric. The interesting bit is how dim(Q) depends on the mass/energy scale Q, which can plausibly vary from a few GeV (the regime of hadronization) up to the mass of the top quark (173 GeV). The authors compare three different sets of jets from ordinary QCD production, from W bosons decaying hadronically, and from top quarks, because one expects the detailed structure to be distinctly different, at least if viewed with the right metric. And indeed, the variation of dim(Q) with Q is quite different:

dim(Q) as a function of Q for three sources of jets


(Note these jets all have essentially the same energy.) There are at least three take-away points. First, dim(Q) is much higher for top jets than for W and QCD jets, and W is higher than QCD. This hierarchy reflects the relative complexity of the events and hints at new discriminating possibilities. Second, they are more similar at low scales, where the structure involves hadronization, and more different at high scales, which should be dominated by the decay structure. This is borne out by the decay-products-only curves. Finally, there is little difference between the curves based on particles and those based on partons, meaning that the result is somehow fundamental and not an artifact of hadronization itself. I find this very exciting.
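
Estimating such a dimension from pairwise distances is itself simple. The sketch below (my own, in the standard Grassberger-Procaccia spirit rather than the paper's precise definition of dim(Q)) takes the local slope of log C(Q) against log Q, where C(Q) is the fraction of point pairs closer than Q; on random points filling a plane it recovers a dimension close to 2:

```python
import numpy as np

def correlation_dimension(points, scales):
    """Local slope of log C(Q) vs log Q, C(Q) = fraction of pairs with d < Q."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(len(points), k=1)]
    c = np.array([np.mean(pair_d < q) for q in scales])
    return np.gradient(np.log(c), np.log(scales))

rng = np.random.default_rng(1)
plane = rng.uniform(size=(2000, 2))           # points filling a 2D square
dims = correlation_dimension(plane, np.geomspace(0.05, 0.2, 8))
print(dims.round(2))                          # slopes close to 2
```

In the paper's setting the Euclidean distances would be replaced by pairwise event EMDs, with Q playing the role of the energy scale.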

The authors develop the correlation dimension dim(Q) further. It is a fact that a pair of jets from W decays boosted to the same degree can be described by a single variable: the ratio of their energies. This can be mapped onto an annulus in an abstract embedding space (see the paper for slightly more detail). The interesting step is to look at how the complexity of individual events, reflected in dim(Q), varies around the annulus:

Embedding of W jets and how dim(Q) varies around the annulus and inside it


The blue events to the lower left are simple, with just a single round dot (jet) in the center, while the red events in the upper right have two dots of nearly equal size. The events in the center are very messy, with many dots of several sizes. So morphology maps onto location in this kinematic plane.

A second illustration is provided, this time based on QCD jets of essentially the same energy. The jet masses will span a range determined by gluon radiation and the hadronization process. Jets at lower mass should be clean and simple while jets at high mass should show signs of structure. This is indeed the case, as nicely illustrated in this picture:

How complex jet substructure correlates with jet mass


This picture is so clear it is almost like a textbook illustration.

That’s it. (There is one additional topic involving infrared divergence, but since I do not understand it I won’t try to describe it here.) The paper is short with some startling results. I look forward to the authors developing these studies further, and for other researchers to think about them and apply them to real examples.

by Michael Schmitt at February 24, 2019 05:16 PM