Particle Physics Planet


September 19, 2019

Christian P. Robert - xi'an's og

ABC in Clermont-Ferrand

Today I am taking part in a one-day workshop on ABC at the Université Clermont Auvergne, with applications to cosmostatistics, along with Martin Kilbinger [with whom I worked on PMC schemes], Florent Leclerc and Grégoire Aufort. This should prove a most exciting day! (With not enough time to run up the Puy de Dôme in the morning, though.)

by xi'an at September 19, 2019 10:19 PM

Peter Coles - In the Dark

Feline Matters

In gratitude for Maynooth University’s recent rise in the Times Higher League Tables, reported yesterday, the authorities have made appropriate offerings to the deity responsible for this good fortune.

I notice also that Maynooth University Library Cat is clearly the inspiration for this visualisation of an inspiralling system, though I’m not sure what amplitude of gravitational waves this event would produce.

All of which means that I’m about to go home for an evening’s relaxation before spending tomorrow in Dublin…

by telescoper at September 19, 2019 04:45 PM

Emily Lakdawalla - The Planetary Society Blog

Bill Nye, Planetary Society Staff Listen to LightSail 2 Signal
When LightSail 2 recently flew south of The Planetary Society's headquarters, CEO Bill Nye and other staff members stepped outside to listen.

September 19, 2019 12:00 PM

Peter Coles - In the Dark

Open Letter to the EU: Reinstate the Commissioner for Science and Research

It may have escaped your attention (as it did mine) that, when the candidates for members of the European Commission were presented last week, the role of Commissioner for Research, Science and Innovation had apparently been phased out, its remit subsumed by that of the Commissioner for “Innovation and Youth”.

Downgrading the role of Science and Research in this way is a retrograde step, as is the introduction of a Commissioner for ‘Protecting the European Way of Life’, which is a racist dog-whistle if ever I heard one.

Anyway, back on the subject of Research and Science, there is a letter going around protesting the loss of a specific role in the Commission covering this portfolio.

Here is the text:

Your Excellencies Presidents Sassoli, Dr. Juncker and Dr. von der Leyen,

The candidates for the new EU commissioners were presented last week. In the new commission the areas of education and research are no longer explicitly represented and are instead subsumed under the “innovation and youth” title. This emphasizes economic exploitability (i.e. “innovation”) over its foundation, which is education and research, and it reduces “education” to “youth”, although education is essential to all ages.

We, as members of the scientific community of Europe, wish to address this situation early on and emphasize both to the general public, as well as to relevant politicians on the national and European Union level, that without dedication to education and research there will neither exist a sound basis for innovation in Europe, nor can we fulfill the promise of a high standard of living for the citizens of Europe in a fierce global competition.

President von der Leyen, in her mission letter to commissioner Gabriel, has emphasized that “education, research and innovation will be key to our competitiveness”.

With this open letter we demand that the EU commission revise the title for commissioner Gabriel to “Education, Research, Innovation and Youth”, reflecting Europe’s dedication to all of these crucial areas. We also call upon the European Parliament to request this change in name before confirming the nominees for commissioner.

I have signed the letter, and encourage you to do likewise if you are so inclined. You can find a link to the letter, together with instructions on how to sign it, here.

by telescoper at September 19, 2019 09:11 AM

September 18, 2019

Christian P. Robert - xi'an's og

No review this summer

A recent editorial in Nature was a declaration by a biologist from UCL of her refusal to accept refereeing requests during the summer (or was it the summer break?), motivated by a need to reconnect with her son. Which is a good enough reason (!), but reflects sadly on the increasing pressure on one’s schedule to juggle teaching, research, administration, grant hunting and society service along with a balanced enough family life. (Although I have been rather privileged in this regard!) Given that refereeing or journal editing is neither visible nor rewarded, it comes as the first task to be postponed or abandoned, even though most of us realise it is essential to keep science working as a whole and to get our own papers published. I have actually noticed an increasing difficulty over the past decade in getting (good) referees to accept new reviews, often asking for deadlines that hurt the authors, like six months, making them practically unavailable. As I mentioned earlier on this blog, it could be that publishing referees’ reports as discussions would help, since they would become recognised as (unreviewed!) publications, but it is unclear this is the solution, judging from the similar difficulty in getting discussions for discussed papers. (As an aside, there are two exciting papers coming up for discussion: ‘Unbiased Markov chain Monte Carlo methods with couplings’ by Pierre E. Jacob, John O’Leary and Yves F. Atchadé in Series B, and ‘Latent nested nonparametric priors’ by Federico Camerlenghi, David Dunson, Antonio Lijoi, Igor Prünster, and Abel Rodríguez in Bayesian Analysis.) Which is surprising when considering the willingness of a part of the community to engage in forum discussions, sometimes of considerable length, as illustrated on Andrew’s blog.

Another entry in Nature mentioned the case of two tenured professors in geology at the University of Copenhagen who were fired for either using a private email address (?!) or being away on field work during an exam and attending a conference without permission from the administration. Which does not even remotely sound like faulty behaviour to me, or else I would have been fired eons ago..!

by xi'an at September 18, 2019 10:19 PM

Emily Lakdawalla - The Planetary Society Blog

Registration Is Now Open for the 2020 Day of Action
Join The Planetary Society and advocate for space in Washington, D.C. on 9–10 February 2020.

September 18, 2019 06:53 PM

Peter Coles - In the Dark

University Rankings Again

Last week saw the publication of the Times Higher World University Rankings, which have once again predictably generated a great deal of sound and fury while signifying nothing very important. I can’t be bothered to repeat my previous criticisms of these league tables (though I will point you to a very good rant here) but I will make a couple of comments on the reaction to them here in Ireland.

First let me mention (for what it’s worth) that Maynooth University has risen from the band covering 351st-400th place to the one covering 301st-350th place. That means that Maynooth went up by anything from 1 place to 99 places. That’s two consecutive years of rises for NUIM.

(I’ll add without further comment that I arrived here two years ago…)

The Irish Media have not paid much attention to this (or to the improvement in standing of NUI Galway) but have instead been preoccupied with the fact that the College of the Holy and Undivided Trinity of Queen Elizabeth near Dublin, known as Trinity College Dublin for short, has fallen by 44 places to 164th place; see, for example, here. Now there’s no question in my mind that Irish universities need an injection of income – especially in science subjects – in order to improve standards of education and research, but I don’t really understand the obsession with Trinity College. It’s a fine institution, of course, but sometimes it’s almost as if the press think that’s the only University in Ireland…

In response to its declining fortunes Trinity College has claimed that Ireland needs a ‘Rankings Strategy’. No it doesn’t. It needs something far more radical – a higher education strategy. The current government doesn’t have one.

Anyway, given the rate of Maynooth’s rise and Trinity’s fall, it is a straightforward (and undoubtedly scientifically valid) extrapolation to predict that in two or three years’ time Maynooth will have overtaken Trinity in the World Rankings anyway!

(No, I’m not going to take any bets on that.)

Turning away from the exercise in numerological flummery that is the Times Higher League Tables, let me pass on some numbers that are actually meaningful. A week before term, with not everyone yet registered, the number of students taking Mathematical Physics in the first year at Maynooth has increased by 31% since last year, and the number on our fast-track Theoretical Physics and Mathematics (TP&M) programme has increased threefold. These increases are very pleasing. Although lectures proper don’t start until next week, I did an introductory session with the TP&M students this morning. It was very nice to be able to welcome them to Maynooth for what I hope will be an enjoyable time at Ireland’s soon-to-be top University!

by telescoper at September 18, 2019 02:34 PM

Lubos Motl - string vacua and pheno

Seiberg helped to create the culture of "time fillers" like Harlow
"Observable" has reminded me of a video I was sent a few days ago, a 12-minute introduction by Dan Harlow (MIT) to topological field theories etc. within high energy physics, presented during an event at Harvard's CMSA (Center of Mathematical Sciences and Applications) last week.

Aside from some general technical points about topological matter, he also discussed the refocus of physicists to subfields. Around 1:40, he said it was harder to build particle colliders etc. and around 1:55, he asked "what are we supposed to do in the meantime? You know we need to write papers and post them to hep-th".

That was quite a frank demystification of Harlow's "moral foundations and motivation" to do physics.



When Harlow was finished, at 10:16, top IAS Princeton physicist Nathan Seiberg praised Harlow's "beautiful" summary but pointed out that one thing shouldn't have been said. We're not doing what we're doing because we need to fill the time, as Harlow implicitly said. We're doing it because it's important, Seiberg thinks.



Those are of course very different ideas about the very reason why people keep on being employed as physicists. Harlow has backtracked and claimed that he agreed the topic was important because he's also working on it – and he doesn't like to waste time. Well, it is equally sensible to hypothesize that this backtracking was just a lame rationalization, and that the original statement is the one Harlow actually believes.

There is a clear difference between the views (about the very value of physics) of the older physicists like Seiberg on one side; and younger ones like Harlow. Older ones think that physics is interesting and makes sense (or at least they continue to flawlessly pretend to believe so); younger ones generally don't. I have followed similar "conceptual" pronouncements by Harlow for quite some time and I think he is one of the main people who have been open about the idea that "he is in it mainly for the money" and to get the money, he needs to "fill the time and submit some papers".

The transformation took place at some moment, gradually, not uniformly across the world. The transformation hasn't reached some of the best places – while it conquered many of the šitty places a very long time ago. And a part of the transformation may perhaps be justified by some objectively caused slowdown in HEP physics, not just politically driven changes. But the fact that this transformation is real seems clear. Are Seiberg and Harlow physicists of the same kind?

Professionally, in the case of these two men (OK, a man and a person), I think that the answer is Yes. Harlow is capable enough to do physics research of a kind that is somewhat similar to Seiberg's. Both are working on rather conceptually high-brow aspects of quantum field theory and general quantum mechanical theories that are tightly connected with string theory when done properly – but they also tend to avoid the "most stringy" topics. They're not doing the same things, but the analogy works well and is very far from evidence supporting a "tectonic shift" in the field.

However, the overall motivation, psyche, thinking about the value, purpose, and future of pure science – and the scientific institutions and their relationships to science – couldn't be more different. If you have common sense, you know that "we have to fill the time and send papers" is how Harlow actually thinks about these matters – and he only backtracked because he saw some (surprising?) opposition from a senior person. When Harlow leaves such a conference, he talks differently to the people in his environment whom he considers his actual soulmates, unlike Seiberg. He probably tells them that he, Harlow, was the target of hundreds of microaggressions if not several milliaggressions from the evil old white male Seiberg's side, so he simply had to surrender – but he is still the same heroic snowflake whom all his comrades know.

And they find it obvious that the physics departments are just buildings giving rather convenient jobs to somebody – assuming that some bureaucratic criteria are obeyed – and the main question is how to divide these feeding troughs. And because Harlow is one of the main feminist activists – in fact, the primary or only author of the utterly despicable petition against Strumia written during a nasty, Nazi-era-style witch hunt – he believes that all the "progressive" quotas on the less capable ("underrepresented") groups of people in the physics departments are the main thing that physicists should be concerned with.

The broad similarity of Seiberg's and Harlow's work shouldn't confuse us. The tectonic shift is real and dramatic. Harlow was hired by MIT, which is one of the last top places where the hiring is meritocratic. But Harlow no longer feels part of this old system. Instead, he is just a member – accidentally more talented than others – of an entirely new community of physicists and (mostly) "physicists" whose goals and psyches are totally different.

Harlow's papers make sense and some of them have been valuable, but most of the people in "the community that he considers his own" can't do meaningful research. Most of them have been hired for political reasons, because of affirmative action, because of their support for this or that cause. Most of these people end up at much less prestigious places than MIT but it doesn't matter – all of them including Harlow still act as a united community with a new, very different, set of values.

It's a community that takes it for granted that pure science is meaningless and worthless per se and what really matters are just the feeding troughs that have to be divided according to the right – i.e. far left – criteria. Lots of these people candidly tell you that all the work they were doing in the recent 10 or 20 years is worthless crap according to their own judgement – but they find it OK not to return the hundreds of thousands of dollars that they have received for this crap. In fact, some of them act as if they were morally superior because they have robbed the taxpayers of hundreds of thousands of dollars for the crap and they brag about it! They know what it means to write nonsense for 20 years and be getting high salaries for that – so they should be promoted and celebrated for that experience and special expertise! It's completely insane and I am not making it up. And there's virtually no adult left in the room at universities who would chastise them – Seiberg's objection was a tiny fluke.

Some basic morality has largely disappeared from the community of younger physicists – along with the excitement and curiosity. The moral decay has been intense. These people no longer share even the most basic ideas about scientific integrity – e.g. that a scientist shouldn't do things (and be paid for things) that he or she believes to be wrong or worthless nonsense. Again, why did it happen, when did it happen, and how did it happen?

It has happened because it was allowed to happen. In fact, it was encouraged to happen. And it was allowed or encouraged even by the likes of Nathan Seiberg. For 10 or 20 years or longer, the likes of Nathan Seiberg have been quietly okaying the transformation of the physics community by agreeing with all the ideological deformations, by being silent even in the most egregious cases. Now, the younger part of the community is composed in such a way that most of the members feel existentially threatened by meritocracy such as the expectation of any interesting scientific results. And the likes of Harlow are their allies so they don't want to support any kind of sociologically-independent values or meritocracy, either.

Of course internally in "his community", Harlow has to agree that the only task for a physicist is to fill xir time and submit some number of preprints – with the assumption that something may always be written and everyone can learn how to do it. So everyone can be a "physicist", and xe can even omit the quotes because no one will shout at xir to return the damn quotes to their proper place. If something more were expected, he would be existentially threatening most of the people whom he considers his comrades. And that would be so bad! So of course, they mostly don't want to do very ambitious or difficult stuff.

I think that the rot is beyond the point of no return – and it's been there for many years. In fact, I find it utterly ludicrous that Nathan Seiberg acts as if he were surprised that "filling the time" is how Daniel Harlow thinks about the reasons why physics research exists. Nathan Seiberg must have spent a decade or two in Josef Fritzl's basement if he hasn't noticed that this is how the bulk of similar young people at the universities think today (especially those who are visible activists – and Harlow is unquestionably one of those). Is Seiberg really unaware of their basic views – or does he only pretend to be unaware? And does it actually matter what the answer to this question is?

So the pressure suddenly exerted on Harlow in September 2019 is weird and it is too little, too late (a decade or two decades too late). Such ludicrous theaters should be avoided now because it's damn obvious that the "plan for the physics departments" has been modified so that Harlow's attitude is the conventional, tolerated one, and it is bound to spread further as the "old dinosaurs" such as Seiberg retire or die away. You should have opened your eyes, escaped from that basement, and done something about it a decade or two earlier, Dr Seiberg! Now, the universities are just generic feeding troughs for everybody where no special skills, let alone moral values, are expected from anybody. Just parrot the recommended PC clichés, fulfill some formal bureaucratic criteria, and get the food in your feeding trough. That is the new ideal template for a research institution. This will continue and it will be getting worse up to the moment when some wise politicians start to abolish the universities whose ludicrously useless character will be manifest to basically everybody.

by Luboš Motl (noreply@blogger.com) at September 18, 2019 01:08 PM

John Baez - Azimuth

Divesting

Christian Williams

John always tells me to write short, sweet, and clear. Knowing that his advice is supreme on these matters, I’ll try to write mini-posts in between the bigger ones. But… not this time – the topic is too good.

(To dispossess of property or authority. Say it, sound smart.)

…..

Work smarter, not (just) harder.

Today I got an email from Bill McKibben, founder of 350.org. (350 parts per million: the concentration of CO2 considered a “safe upper limit” for Earth by NASA scientist James Hansen. We’re soaring past 415 ppm.) In preparation for the global climate strike, Bill wants to share an important idea: divesting from fossil fuels may be our greatest lever.

Money is the Oxygen on which the Fire of Global Warming Burns

I’ll pluck paragraphs to quote, but please read the whole article; this is an extremely important and practical idea for addressing the crisis. And it’s well written… the first sentence sounds fairly Baezian.

I’m skilled at eluding the fetal crouch of despair—because I’ve been working on climate change for thirty years, I’ve learned to parcel out my angst, to keep my distress under control. But, in the past few months, I’ve more often found myself awake at night with true fear-for-your-kids anguish. This spring, we set another high mark for carbon dioxide in the atmosphere: four hundred and fifteen parts per million, higher than it has been in many millions of years. The summer began with the hottest June ever recorded, and then July became the hottest month ever recorded. The United Kingdom, France, and Germany, which have some of the world’s oldest weather records, all hit new high temperatures, and then the heat moved north, until most of Greenland was melting and immense Siberian wildfires were sending great clouds of carbon skyward. At the beginning of September, Hurricane Dorian stalled above the Bahamas, where it unleashed what one meteorologist called “the longest siege of violent, destructive weather ever observed” on our planet.

Bill emphasizes that change has moved far too slowly, of course. But he’s spent the past week with Greta Thunberg and many other activists, and one can tell that he really is heartened.

It seems that there are finally enough people to make an impact… what if there were an additional lever to pull, one that could work both quickly and globally?

The answer: money.

Today it is large corporations which have the greatest power over daily life, and they are far more susceptible to pressure and change than the insulated bureaucracies of governments.

Thankfully Bill and many others knew this years ago, and started a divestment campaign of breathtaking magnitude:

Seven years ago, 350.org helped launch a global movement to persuade the managers of college endowments, pension funds, and other large pots of money to sell their stock in fossil-fuel companies. It has become the largest such campaign in history: funds worth more than eleven trillion dollars have divested some or all of their fossil-fuel holdings.

$11,000,000,000,000.

And it has been effective: when Peabody Energy, the largest American coal company, filed for bankruptcy, in 2016, it cited divestment as one of the pressures weighing on its business, and, this year, Shell called divestment a “material adverse effect” on its performance.

The movement is only growing, accelerating, and setting its sights on the big gorillas. The main sectors: banking, asset management, and insurance.

Consider a bank like, say, JPMorgan Chase, which is America’s largest bank and the world’s most valuable by market capitalization. In the three years since the end of the Paris climate talks, Chase has reportedly committed 196 billion dollars in financing for the fossil-fuel industry, much of it to fund extreme new ventures: ultra-deep-sea drilling, Arctic oil extraction, and so on. In each of those years, ExxonMobil, by contrast, spent less than 3 billion dollars on exploration, research, and development. $196B is larger than the market value of BP; it dwarfs that of the coal companies or the frackers. By this measure, Jamie Dimon, the C.E.O. of JPMorgan Chase, is an oil, coal, and gas baron almost without peer.


But here’s the thing: fossil-fuel financing accounts for only about 7% of Chase’s lending and underwriting. The bank lends to everyone else, too—to people who build bowling alleys and beach houses and breweries. And, if the world were to switch decisively to solar and wind power, Chase would lend to renewable-energy companies, too. Indeed, it already does, though on a much smaller scale… It’s possible to imagine these industries, given that the world is now in existential danger, quickly jettisoning their fossil-fuel business. It’s not easy to imagine—capitalism is not noted for surrendering sources of revenue. But, then, the Arctic ice sheet is not noted for melting.

Bill makes clear that it is critical to effect the divestment of giants like Chase, BlackRock, and Chubb. Even if these targets are quite hard, this method of action applies to every aspect of the economy and empowers every single individual (more below). If the total divestment is spread over a decade, it can be done without serious economic instability. And if done well, it will spur tremendous growth in the renewable energy sector and the ecological economy in general, as public consciousness opens up to these ideas on a large scale.

I want to keep giving quotes, but you can read it. (If anyone is out of free articles at The New Yorker, I can send a text file.) I’ll contribute a few of my own thoughts, expanding on ideas implicit in the article; then this topic can be continued in another post.

…..

Divesting is a truly powerful lever, for several reasons.

First, money talks. Many people who have been misled by modern society have the following equation in their heads:

money = value

These people, being overwhelmed with social complexity, have lifted the “burden” of large-scale ethics off their shoulders and into a blind faith in the economic system – thinking “well, if enough people have the right idea, then capitalism will surely head in the right direction.”

Of course, before too long, we find that this is not the case. But their thinking has not changed, and we need a way to communicate with them. While it may feel strange and wrong to reformulate the message from “ethical imperative” to “financial risk”, this is the way to get through to many people in powerful places. The success stories show that it is effective, especially considering all the time spent mired in anthropogenic-warming skepticism.

Second, social pressure is now a real force in the world. We can bend competition to our will: incentivize companies to adopt better practices, and when one capitulates, the others in that sphere follow. It has happened many times, and the current is only getting stronger.

Though if we want to fry bigger fish than plastic straws, we need to sharpen our collective tactics. It will of course have to be more systematic and penetrating than shaming companies on Twitter. The article includes great examples of this; it would be awesome to discuss more ideas in the comments.

Third, everyone can help this way, directly and significantly. Everyone has a bank account. It is not difficult, nor seriously detrimental, to switch to a credit union. The divestment campaign can be significantly accelerated by a movement of concerned citizens making this transition.

(My family uses Chase. When I was spending quality time back home, I asked my parents how the value of a bank is anything more than secure money storage. The main thing they mentioned was loans – but they admitted that the biggest and best loan they ever took was through a credit union. The reasons simply did not add up. I plan to show them this article, and I’ll try to have an earnest conversation with them. I really hope they understand, because I know they are rational and good people.)

It’s all but impossible for most of us to stop using fossil fuels immediately, especially since, in many places, the fossil-fuel and utility industries have made it difficult and expensive to install solar panels on your roof. But it’s both simple and powerful to switch your bank account: local credit unions and small-town banks are unlikely to be invested in fossil fuels, and Beneficial State Bank and Amalgamated Bank bring fossil-free services to the West and East Coasts, respectively, while Aspiration Bank offers them online. (And they’re all connected to A.T.M.s.)


This all could, in fact, become one of the final great campaigns of the climate movement—a way to focus the concerted power of any person, city, and institution with a bank account, a retirement fund, or an insurance policy on the handful of institutions that could actually change the game. We are indeed in a climate moment—people’s fear is turning into anger, and that anger could turn fast and hard on the financiers. If it did, it wouldn’t end the climate crisis: we still have to pass the laws that would actually cut the emissions, and build out the wind farms and solar panels. Financial institutions can help with that work, but their main usefulness lies in helping to break the power of the fossil-fuel companies.

…..

The economy is far more responsive to changes in the collective ethos than the government is. This is how people can directly express their values every day, with every bit of earning they have. We recognize that the public mindset is changing, and we can now take heart and leverage society in the right direction.

Conjecture: The critical science of our time has the form:

Ecology
 ⇑  ⇓
Economy

This is why John Baez brought together so many capable people for the Azimuth Project. I hope that we can connect with the new momentum and coordinate on something great. Even in just the last post there were some really good ideas. I really look forward to hearing more. Thanks.

by christianbwilliams at September 18, 2019 03:36 AM

September 17, 2019

Christian P. Robert - xi'an's og

Le Monde puzzle [#1111]

Another low-key arithmetic problem as the current Le Monde mathematical puzzle:

Notice that there are 10 numbers less than, and coprime with, 11; 100 less than, and coprime with, 101; and 1000 less than, and coprime with, 1111. What is the smallest integer N such that the cardinality of the set of M<N coprime with N is 10⁴? What is the smallest multiple of 1111 using only distinct digits? And what is the largest?

It is indeed a case for brute-force resolution, as in the following R code:

library(numbers)
# count the integers in {1,...,n-1} coprime with n
# (i.e., Euler's totient of n); 1 is always coprime with n
homanycoprim=function(n){
  many=1
  for (i in 2:(n-1)) many=many+coprime(i,n)
  return(many)}

# smallest n (at least 10^4) with exactly 10^4 coprime integers below it
smallest=function(){
  n=1e4
  many=homanycoprim(n)
  while (many!=1e4) many=homanycoprim(n<-n+1)
  return(n)}

which returns n=10291 as the smallest value of N. For the other two questions, the usual integer-to-digits conversion is necessary:

# print the multiples of 1111 whose digits are all distinct
smalldiff=function(){
  n=1111;mul=1
  while (mul<1e6) {
    x=as.numeric(strsplit(as.character(n*mul), "")[[1]])
    # skip multiples with repeated digits
    while (sum(duplicated(x))!=0){
      mul=mul+1
      x=as.numeric(strsplit(as.character(n*mul), "")[[1]])}
    print(n*mul);mul=mul+1}}

leading to 241,087 as the smallest and 9,875,612,340 as the largest (with 10 digits).

by xi'an at September 17, 2019 10:19 PM

astrobites - astro-ph reader's digest

A New Technique for Finding Newly Formed Exoplanets

Title: A Kinematical Detection of Two Embedded Jupiter-mass Planets in HD 163296

Authors: Richard Teague, Jaehan Baez, Edwin A. Bergin, Tilman Birnstiel, Daniel Foreman-Mackey

First Author’s Institution: Department of Astronomy, University of Michigan

Status: Published in The Astrophysical Journal [open access]

 

Next year will mark the 25th anniversary of the first detection of a planet orbiting a main sequence star: 51 Peg b. Since this discovery, we have been finding these extrasolar planets (or exoplanets) at an exponential rate, passing the 4,000 milestone earlier this year. Of these thousands of planets, we have found some bigger than Jupiter and rocky planets more massive than Earth. As you might expect, most of the planets we have discovered are unlike any we have in our own Solar System. But all except a handful of these have one important characteristic in common: they are no longer undergoing formation. As most of these systems are older than a billion years, it is a challenge to piece together how these planets initially formed and evolved. We know they must have formed in a disk (known as a protoplanetary disk) that surrounded the forming star. We’ve observed many of these disks, and we have observed very massive planets forming within them. Still, more observations are needed if we want to learn how a planet goes from dust and gas particles to a rocky object potentially surrounded by a volatile-rich (not just hydrogen and helium) atmosphere.

This might seem like an easy task; we are clearly skilled at finding exoplanets. However, the disk makes it nearly impossible to find anything but the most massive planets forming. Methods such as the transit method or radial velocities rely on watching for changes in the star’s light due to the presence of a planet, but the disk can mask these signals. Our current direct imaging capabilities are already limited to only the brightest and hence most massive objects.

Disk Motion is the Key

Figure 1: Cartoon depicting how the velocity of gas particles at varying distances from the star is expected to change in a disk when a planet (gray dot) is present. The blue dot represents the location where the gas is slowing down the most and the red dot is where the gas is speeding up. (Modified from Figure 1 in today’s paper)

If we want to find Jupiter-mass and smaller planets, we need a new technique. This is where today’s authors come in. Since we have this massive and large disk, why not look for possible forming planets by observing the effect these planets would have on the motion of the gas particles, known as kinematical effects? As a planet orbits, it pushes the gas around, creating a lack of particles in one area and a build-up of particles in another. This local deviation in the gas structure in turn creates a difference in pressure, or pressure gradient. Without any pressure gradient, gas particles should orbit the star with a Keplerian, or orbital, velocity, which decreases as you move further away from the star. However, with this gradient, particles deviate from Keplerian orbits by speeding up or slowing down in order to fill the void created by the planet. A cartoon of this effect is shown in Figure 1, which plots the velocity differences as a function of distance from the star. Notice that gas particles at distances closer to the star than the planet slow down while those further away speed up in order to try to match the planet’s orbital speed. The authors hypothesize that if they can observe this effect in a disk, they will not only be able to confirm the existence of a planet but also determine its mass.

 

Putting the Method to the Test

 

While this sounds like a practical method, does it actually work? Today’s authors test their idea by using archival data of the protoplanetary disk HD 163296. Previous analyses have hinted at the presence of forming planets in the disk, making it a promising test candidate. Using emission from a special isotopologue of carbon monoxide, C¹⁸O, to probe the middle of the disk where planets should be present, the authors determined the velocity of this gas at different distances from the star (gray points in Figure 2). They then created a simulation of the disk motion assuming no planets, which they plot in orange. Interestingly, it does a poor job of explaining the motion of the disk. Next they re-ran their simulation with two embedded planets: a Jupiter-mass planet at 100 AU and a 1.3 Jupiter-mass planet at 165 AU (blue line). With this model, they were able to better fit the structure of their data beyond 80 AU. Although the authors note that they cannot dismiss every scenario without a planet, they do manage to rule out many common ones, allowing them to conclude that this disk should host at least two Jupiter-mass planets.

Figure 2: The change in velocity as a function of distance for the disk HD 163296 (gray points). The locations of the suspected planets are noted with the gray dashed lines. Notice how the change in the velocity around each dashed line is similar to the cartoon in Figure 1. The best-fit model with no planets is shown in orange, while the model including two Jupiter-mass planets is shown in blue. Distances beyond 80 AU are well fit by the blue model; however, the authors are unable to explain why the velocity varies so much inside 80 AU. (Figure 5 of today’s paper)

The question still remains as to what is going on inside 80 AU. The authors note that adding another planet of 0.6 Jupiter-masses at 65 AU provided a decent but not great fit to the data. They also suggest that magnetic fields creating instabilities in the gas could also cause a disagreement between the model and the data. However, they note the resolution of the image deteriorates as you move closer to the star (fewer pixels available). Perhaps future instruments with better resolution will be able to unravel this mystery. Regardless, today’s authors have demonstrated that their new technique could open doors to finding forming, or newly-formed, planets by helping us look into the dynamics of their protoplanetary disks.

 

by Jessica Roberts at September 17, 2019 03:56 PM

ZapperZ - Physics and Physicists

Electron Neutrino Loses Its Mass By Almost Half
A new experimental result from KATRIN has cut the upper limit on the mass of the electron neutrino by half, to 1.1 eV. This was reported at a recent conference and in a recent preprint.

I suppose if I want to be accurate, I should say it is the electron antineutrino, since they measured this from beta decays, but nowadays we don't have a clear-cut idea of the difference between the neutrino and its antiparticle. For all we know, it could even be a Majorana particle.

I'll be giving this result to my students in the general physics class, to see if they can convert the 1.1 eV into "kg". :)

Zz.

by ZapperZ (noreply@blogger.com) at September 17, 2019 03:30 PM

CERN Bulletin

Staff delegates: representatives of all the Organization's members of personnel

As already announced in the previous ECHO, in October of this year, elections will be held for your staff delegates for the 2020-2021 term.

To take part in the vote and choose your representatives, you must join the Staff Association if you are not yet a member.

Become a member – there is still time to join (https://cern.service-now.com/service-portal/report-ticket.do?name=ap-membership&se=staff-association), especially since membership for the 2019 calendar year is free as of 1 September 2019.

How is the election of staff representatives of CERN conducted?

Any member of the personnel, whether staff, fellow or associate, who is also a member of the Staff Association and wishes to become involved in the Staff Council is invited to submit an application by 11 October at 5 p.m.: http://staff-association.web.cern.ch/bodies/elections.

You will then have three weeks, from 21 October to 11 November, to vote in order to elect your representatives.

Elections Timetable

  • From the Echo of 3 September (posters, etc.): call for applications
  • Friday 11 October, at 5 p.m.: closing date for receipt of applications
  • Monday 21 October, at noon: start date for voting
  • Monday 11 November, at noon: closing date for voting
  • Tuesday 12 November and Tuesday 26 November: publication of the results in Echo
  • Tuesday 26 and Wednesday 27 November: Staff Association Assizes
  • Tuesday 3 December (afternoon): first meeting of the new Staff Council and election of the new Executive Committee

Elections organization

The CERN Staff Association Statutes provide in Article V.2.3:

  • Staff delegates representing staff members shall be elected by members of the Association of the Electoral College to which they belong, in accordance with the Rules for Election ruled by the Staff Council.
  • The distribution of seats to be filled shall be determined in accordance with the Rules for Election. This distribution must guarantee a fair representation between the different organic units and categories of staff.
  • Staff delegates representing non staff members shall be elected in accordance with the Rules for Election ruled by the Staff Council.

At its meeting on 3 September 2019, the Staff Council renewed the current Rules for Elections. In accordance with the latter, the Electoral Commission determined the number of seats to be filled in each Electoral College.

The voting procedure will be monitored by the Election Committee, which is also in charge of announcing the results in ECHO on 12 and 26 November.

Join and vote: it is the least you can do for your Organization and for your colleagues, the members of the personnel.

Get involved for your Organization, for your colleagues in your Electoral College, for CERN members of the personnel and, more generally, for the CERN community!

Become a delegate!

September 17, 2019 03:09 PM

CERN Bulletin

Offer

Aquaparc:

Full day ticket:

  • Children: 33 CHF instead of 39 CHF
  • Adults: 33 CHF instead of 49 CHF

Free for children under 5.

September 17, 2019 02:09 PM

CERN Bulletin

GAC-EPA

The GAC organises drop-in sessions with individual interviews. The next session will take place on:

Tuesday 1 October, from 1.30 p.m. to 4.00 p.m.
Staff Association meeting room

The drop-in sessions of the Pensioners' Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/

Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

September 17, 2019 02:09 PM

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers, either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

September 17, 2019 02:09 PM

Peter Coles - In the Dark

Operation Market Garden – 75 Years On

Seventy-five years ago today, on 17th September 1944, the largest airborne operation in military history began. Operation Market Garden (as it was called) saw about 35,000 Allied troops dropped by parachute or landed in gliders behind German lines in Holland, with the aim of seizing key bridges in order to allow infantry and armoured divisions to advance, eventually into Germany. Of more immediate tactical importance was that the capture of the northernmost bridges over the Rhine at Arnhem would prevent German reinforcements from moving south to confront the advancing troops, tanks and armoured vehicles of XXX Corps, whose job was to punch a hole in the German defences and link up with the airborne troops.

(Image: the 82nd Airborne Division near Grave during Operation Market Garden.)

Motivated by the belief that the German armies in the West were exhausted and on the brink of collapse, as well as by the desire to finish the war before Christmas if possible, Operation Market Garden was daring and imaginative, but it began to unravel right from the outset and ended as a disastrous failure, with the loss of many lives.

I’m not a military historian, so am not competent to add anything significant to the huge amount that has been written about what went wrong, but I will add a personal note. A cousin of my Grandfather flew to Arnhem with the 1st British Airborne division whose job was to take and hold the bridges over the Rhine that would open the door to an invasion of Germany. Sadly, he was one of those many troops who never even made it to their objective. In fact he was dead before he even hit the ground; his unit was dropped virtually on top of heavily armed German forces and had no chance of defending themselves. I had always been told that he had been dropped by parachute, but the records at the cemetery revealed that was wrong; he was on a glider which was badly shot up during its approach and crash-landed with no survivors.

The action at Arnhem actually involved two bridges, one a railway bridge at Oosterbeek, and the other a road bridge in Arnhem itself. British paratroopers did manage to capture one end of the road bridge, but never succeeded in securing both ends of the structure. Cut off from the much larger force pinned down near their landing zones, they were eventually forced to surrender, simply because they had run out of ammunition. The other units that landed near Arnhem never made their objectives and had to dig in and hope for reinforcements that never came. They fought a brave but desperate defensive action until 25th September, when some were successfully evacuated across the Rhine. The original battle orders had specified that they were to hold their ground for 48 hours until relieved by armour and infantry advancing rapidly from the South, but XXX Corps was heavily delayed by fighting, poor tactical decisions and congestion on the single road.

Some years ago, after attending a conference in Leiden, I took time out to visit Oosterbeek cemetery, where 1,437 soldiers lie buried. Such was the chaos at Arnhem that bodies of fallen soldiers are still being discovered in gardens and woods: there were so many dead that there was only time to bury them in shallow graves where they had fallen. As remains are discovered they are removed and reburied in Oosterbeek. When I visited the cemetery about 25 years ago, there were several brand new graves.

At the time of Market Garden the local people looked on in horror as their potential liberators were cut down. It must have been deeply traumatizing for them. I think it is telling that when, in 1969, the British Army proposed bringing to an end the annual ceremonies in commemoration of these events, local Dutch civilians insisted that they continue.

As I stood by the grave I couldn’t help thinking of how lucky members of my generation are that we have not been called on to make such a sacrifice. Now I fear deeply that the rise of nationalism and xenophobia, not least in Britain, threatens the peace in Europe that has held for almost 75 years.

The failure of Operation Market Garden had other terrible consequences. The winter of 1944/45 was a terrible one for Dutch civilians in the part of their country that had not been liberated, with many thousands dying from hunger and the bitter cold.

And of course had the Allies succeeded in penetrating into Germany in 1944, the post-war map of Europe would probably have been very different. This is how the front lines were drawn in mid-September 1944, with the Western Front and Eastern Front roughly equidistant from Berlin.

(By Army Map Service – Document “Atlas of the World Battle Fronts in Semimonthly Phases to August 15th 1945: Supplement to The Biennial report of the Chief of Staff of the United States Army July 1, 1943 to June 30 1945 To the Secretary of War”, Public Domain, Link.)

Had Market Garden been successful would there have been 45 years of Cold War?

by telescoper at September 17, 2019 09:03 AM

September 16, 2019

Christian P. Robert - xi'an's og

unimaginable scale culling

Despite the evidence, brought by ABC, of the inefficiency of massive culls of the British Isles badger population against bovine tuberculosis, the [sorry excuse for a] United Kingdom government has permitted a massive expansion of badger culling, with up to 64,000 animals likely to be killed this autumn… Since cows are the primary vectors of the disease, what about starting with these captive animals?!

by xi'an at September 16, 2019 10:19 PM

Peter Coles - In the Dark

Eleven Years a-Blogging

I received the little graphic above from WordPress yesterday to remind me that today is the 11th Anniversary of my first blog post, on September 16th 2008. If this were a wedding it would be a steel anniversary…

To be precise, the graphic reminded me that I registered with WordPress on 15th September 2008. I actually wrote my first post on the day I registered but unfortunately I didn’t really know what I was doing on my first day at blogging and didn’t manage to figure out how to publish this earth-shattering piece. It was only after I’d written my second post that I realized that the first one wasn’t actually visible to the general public because I hadn’t pressed the right buttons, so the two appear in the wrong order in my archive. Anyway, that confusion is the reason why I usually take 16th September as this blog’s real anniversary.

I’d like to take this opportunity to send my best wishes, and to thank, everyone who reads this blog, however occasionally. According to the WordPress stats, I’ve got readers from all round the world, including the Vatican!

by telescoper at September 16, 2019 03:39 PM

astrobites - astro-ph reader's digest

A Bare Hot Rock with No Atmosphere

Title: Absence of a thick atmosphere on the terrestrial exoplanet LHS 3844b

Authors: Laura Kreidberg, Daniel D.B. Koll, Caroline Morley, Renyu Hu, Laura Schaefer, Drake Deming, Kevin B. Stevenson, Jason Dittmann, Andrew Vanderburg, David Berardo, Xueying Guo, Keivan Stassun, Ian Crossfield, David Charbonneau, David W. Latham, Abraham Loeb, George Ricker, Sara Seager, Roland Vanderspek

First Author’s Institution: Harvard University

Status: Accepted to Nature, open access

Exoplanets come in a wide range of flavours and sizes, ranging from puffy gas giants to small rocky worlds. Characterizing the atmospheres of this diverse population gives us an insight into their formation processes and potential habitability. On this front the most attractive candidates are terrestrial exoplanets around M dwarfs, which form a significant fraction of the known terrestrial exoplanets. The smaller size of M dwarfs compared to Sun-like stars means that the signal due to a transiting exoplanet is relatively larger, making it easier to detect and characterize even the small rocky planets orbiting them (see the sketch below). But what makes rocky worlds around M dwarfs so intriguing? Since M dwarfs have a fraction of the Sun’s luminosity, their habitable zones are much closer in—about a tenth of the Earth-Sun distance.
However, M dwarfs are also infamous for frequently spewing high-energy flares, which over time can strip the atmospheres of the close-in rocky exoplanets around them. Observing more such systems, with a range of configurations and physical properties, can help us understand the survival of atmospheres on exoplanets orbiting M dwarfs. The authors of today’s paper present an investigation of one such exoplanet, recently detected by TESS. Meet LHS 3844: a nearby (~14 parsecs) M dwarf hosting a rocky exoplanet 1.3 times the size of Earth with an orbital period of just 11 hours.
Where’s the hot spot? 
Monitoring the brightness of a star-plus-planet system reveals different information about the planet at different points in the orbit. While measuring the planetary transit depth and its variation with wavelength (also known as transmission spectroscopy) can show us how the atmosphere looks vertically, the phase curve and the secondary eclipse (when the planet passes behind the star) hold information about horizontal processes in the atmosphere, such as atmospheric circulation.

How do we extract this information? The variation in brightness of the system, as different longitudes or phases of the planet come in and out of view, can be used to reconstruct the surface brightness distribution of the planet, more easily so if the planet is tidally locked. Surface brightness inferred from infrared phase curves (also known as thermal phase curves), in particular, can tell us how thermal energy from the dayside is redistributed around the planet by winds in the atmosphere. The signature of such atmospheric circulation is an eastward-shifted hotspot in the surface brightness distribution of the planet.

The authors of today’s paper use this shift as a diagnostic for the presence of an atmosphere on LHS 3844b. They use the Spitzer Space Telescope to obtain a phase curve of the planet in the 4.5 micron wavelength band and use it to reconstruct the surface brightness distribution of the planet. As seen in Figure 1, they observe a symmetric phase curve and a surface brightness distribution with the hotspot at the point directly facing the host star, indicating the absence of atmospheric circulation on the planet. They conclude that the planet is consistent with the picture of a tidally locked rocky world with an absorptive surface and no thick atmosphere (surface pressures higher than 10 bar are excluded). The scorching surface temperature of 1040 K, obtained from the secondary eclipse depth (which measures the ratio of planet to star brightness), makes this scenario even more plausible.

Figure 1: Spitzer 4.5 micron phase curve of LHS 3844b. The left panel shows the change in the brightness of the star-planet system as the planet goes around the star: its dayside comes into view (increasing the total brightness of the system, since the planet’s dayside now contributes to it), the planet passes behind the star during the eclipse, and then comes back into view again. The right panel shows the inferred dayside brightness temperature distribution of the planet, with the hotspot at exactly the point facing the star (Figure 1 in the paper).

Figure 2: Measured planet to star brightness as compared to the predictions from various rock surface types (Figure 2 in the paper).

 

The authors further model the emission spectrum of the planet for different planetary surface types and find that the observed planet brightness at 4.5 micron is most consistent with a surface covered with dark basaltic rocks (see Figure 2). In the Solar System, basaltic rocks form as an outcome of the solidification of lava flows from volcanoes, as seen on Earth and Mercury. The authors also consider the possibility of a thin atmosphere surviving on the planet and conclude from their model that, given the age of the system, the stellar winds would have eroded a thin atmosphere of 1-10 bar unless a mechanism like outgassing were continually replenishing it.

While we wait for future observations with JWST to reveal more fascinating details about LHS3844b, TESS will continue to find more such interesting systems in the coming year. There might be a cooler planet with just the right conditions to host a habitable atmosphere right around the corner!

by Vatsal Panwar at September 16, 2019 01:27 PM

September 15, 2019

Christian P. Robert - xi'an's og

Le Monde puzzle [#1110]

A low-key sorting problem as the current Le Monde mathematical puzzle:

If the numbers from 1 to 67 are randomly permuted and the sorting algorithm consists in picking a number i sitting at a position higher than its rank i and moving it to the correct i-th position, what is the maximal number of steps needed to sort this set of integers when the selected integer is chosen optimally?

As the question does not clearly state what happens to the number j that stood in the i-th position, I made the assumption that the entire sequence from position i to position n is moved one position upwards (rather than having i and j exchanged). In which case my intuition was that moving the smallest moveable number was optimal, resulting in the following R code:

sorite<-function(permu){
  n=length(permu)
  p=0 # number of moves so far
  while (max(abs(permu-(1:n)))>0){
    # smallest value sitting above its rank
    j=min(permu[permu<(1:n)])
    p=p+1
    # insert j at position j, shifting the tail one position upwards
    permu=unique(c(permu[(1:n)<j],j,permu[j:n]))}
  return(p)}

which takes at most n-1 steps to reorder the sequence. I nonetheless checked that this insertion sort was indeed optimal through a recursive function

resorite<-function(permu){
  n=length(permu);p=0
  while (max(abs(permu-(1:n)))>0){
    # candidate values sitting above their ranks
    j=cand=permu[permu<(1:n)]
    if (length(cand)==1){
      p=p+1
      permu=unique(c(permu[(1:n)<j],j,permu[j:n]))
    }else{
      # explore all candidate moves and keep the cheapest continuation
      sol=n^3
      for (i in cand){
        qermu=unique(c(permu[(1:n)<i],i,permu[i:n]))
        rol=resorite(qermu)
        if (rol<sol) sol=rol}
      p=p+1+sol;break()}}
  return(p)}

which did confirm my intuition.

by xi'an at September 15, 2019 10:19 PM

Lubos Motl - string vacua and pheno

Dynamical OPE coefficients as a TOE
Towards the universal equations for quantum gravity in all forms

In the 1960s, before string theory was really born, people studied the bootstrap and the S-matrix theory. The basic idea – going back to Werner Heisenberg (but driven by younger folks such as Geoffrey Chew, who died in April 2019) – was that consistency alone was enough to determine the S-matrix. In such a consistency-determined quantum theory, there would be no clear difference between elementary and composite fields and everything would just fit together.

Veneziano wrote his amplitude in 1968 and a few years later, it became clear that strings explained that amplitude – and the amplitude could have been created in a "constructive" way, just like QCD completed at roughly the same time which was "constructively" made of quark and gluon fields (although most of the smartest people had believed the strong force not to have any underlying "elementary particles" underneath throughout much of the 1960s). A new wave of constructive theories – colorful and stringy generalizations of the gauge theories – prevailed and downgraded bootstrap to a quasi-philosophical semi-dead fantasy.

On top of that, the constructive theory – string theory – has led to progress that made it clear that it has a large number of vacua so the complete uniqueness of the dynamics was an incorrect wishful thinking.



Still, all these vacua of string/M-theory are connected and the theory that unifies them is unique. Since the 1990s, we have understood its perturbative aspects much more than before, uncovered limited nonperturbative definitions for some superselection sectors, but it's true that the perturbative limit of string theory is the most thoroughly understood portion of quantum gravity that we have.



Various string vacua are being constructed in very different ways. We start with type II-style \(N=1\) world sheet dynamics, or with the heterotic \(N=(1,0)\) dynamics, add various GSO projections and corresponding twisted and antiperiodic sectors. Extra orientifold and orbifold decorations may be added, along with D-branes and fluxes. And the hidden dimensions may be treated as some Ricci-flat manifolds – with fluxes etc. – but also as more abstract CFTs (conformal field theories) resembling minimal models such as the Ising model.

The diversity looks too large. It seems that while the belief in the single underlying theory is totally justified by the network of dualities and topological phase transitions etc., the unity isn't explicitly visible. Isn't there some way to show that all these vacua of string/M-theory are solutions to the same conditions?

It's a difficult task (and there is no theorem guaranteeing that the solution exists at all) because the individual constructions are so different – even qualitatively different. The different vacua – or non-perturbative generalizations of world sheet CFTs – have to be solutions to some conditions or "equations". But how can so heavily different "stories" be "solutions" to the same "equations"? Only relatively recently, I decided that to make progress in this plan, one has to "shut up and calculate".

And to calculate, to have a chance of equations and their solutions, one needs to convert the "stories" to "quantitative objects". We need to "quantify the rules defining theories and orbifolds etc.". To do so, we need to write a more general Ansatz that interpolates between all the different theories and vacua that we have in string/M-theory i.e. quantum gravity.

What is the Ansatz that is enough for a full world sheet CFT? Well, it's necessary and almost sufficient to define the spectrum of all local operators and their OPEs (operator product expansions). The latter contain most of the information and are encoded in OPE coefficients \(C_{12}^3\) – closely related to three-point structure constants \(C_{123}\) and \(B_3\), see e.g. these pages for some reminder about the crossing symmetry and the conformal bootstrap.
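For orientation, here is the schematic textbook form of these objects (nothing specific to this proposal): for holomorphic primaries \(\mathcal{O}_i\) of weight \(h_i\) on the world sheet, the OPE reads

\[
\mathcal{O}_i(z)\,\mathcal{O}_j(0) \sim \sum_k \frac{C_{ij}{}^{k}}{z^{\,h_i + h_j - h_k}}\,\mathcal{O}_k(0),
\]

so the set of all \(C_{ij}{}^{k}\), together with the spectrum of weights, pins down the theory.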

What you should appreciate is that coefficients such as \(C_{12}^3\) encode most of the information about the world sheet CFT – e.g. a perturbative string vacuum – but they are treated as completely static numbers that define the theory at the very beginning. You can't touch them afterwards; they can't change. It became clear to me that it's exactly this static condition that is incompatible with the desire to unify the string vacua as solutions to the same equations.

To have a chance for a unifying formulation of a theory of everything (TOE), we apparently need to treat all these coefficients such as \(C_{12}^3\) as dynamical ones. "Dynamical" means "the dependence on time". Which time? I think that in the case of \(C_{12}^3\), we need the dependence on the spacetime's time \(x^0\) (and its spatial partners \(x^i\), if you wish), not the world sheet time \(\tau\), because the values of all these coefficients \(C_{12}^3\) carry the "equivalent information" as any more direct specification of the point on the configuration space of the effective spacetime QFT (a point in the configuration space of mainly scalar fields in the spacetime, if you wish).

Most of the degrees of freedom in \(C_{12}^3\) are non-dynamical ones. There is a lot of freedom hidden in the ability to linearly mix the operators into each other; and on top of that, all these coefficients are apparently obliged to obey the crossing symmetry constraints. But there should still exist a quantization of states on the configuration space of these \(C_{12}^3\) coefficients and the resulting space should be equivalent to a configuration space of fields in the target spacetime.
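To unpack the phrase "crossing symmetry constraints" a bit (again schematically, with normalizations and index placements suppressed): expanding a four-point function in two different channels must give the same answer,

\[
\sum_k C_{12}{}^{k}\,C_{34}{}^{k}\,\mathcal{F}^{(s)}_k(z) = \sum_k C_{14}{}^{k}\,C_{23}{}^{k}\,\mathcal{F}^{(t)}_k(z),
\]

where the conformal blocks \(\mathcal{F}_k\) are fixed by the symmetry alone, so these are quadratic constraints on the OPE coefficients.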

Since the 1995 papers by Kutasov and Martinec, I've been amazed by the observation that "whatever super-general conditions of quantum gravity hold in the spacetime, they must also apply in the world sheet, and vice versa". So I think that there are analogous operators to the local operators in the world sheet CFT but in the spacetime – labeling the "creation operators for all black hole microstates" – and their counterpart of \(C_{12}^3\) tells us about all the "changes of one microstate to another" that you get when a black hole devours another particle (or another black hole microstate). These probably obey some bootstrap equations as well although as far as I know, the research of those is non-existent in the literature and may only be kickstarted after the potential researchers learn about this possibility from me.

I tend to think that the remaining degrees of freedom in \(C_{12}^3\) aren't fully eliminated. They're just very heavy on the world sheet – and responsible for quantum gravity, non-local phenomena on the world sheet that allow the world sheet topology to change.

In the optimal scenario, one may write down the general list of the fields like \(C_{12}^3\) – possibly, three-point functions may fail to be enough and \(n\)-point functions for all positive \(n\) may be needed as well – and there will be some universal conditions. These should have the solutions in terms of the usual consistent, conformal, modular invariant world sheet theories with the state-operator correspondence. The rules could have a direct generalization outside the weakly coupled limit but the solutions should simplify in the weakly coupled limit. Dualities should be manifest – the theories or vacua dual to each other would explicitly correspond to the same values of the degrees of freedom such as \(C_{12}^3\) and the particular "dual descriptions" would be just different strategies to construct these solutions, usually starting with some approximations.

More generally, the term "quantum gravity" sounds a bit more general than "string/M-theory" although they're ultimately equivalent. "Quantum gravity" doesn't have any obvious strings in it to start with – and has just all the black hole microstates and their behavior. It seems clear to me that people need to understand the equivalence between "quantum gravity" and "string theory" better and to do so, they have to rewrite the rules of string theory in the language of a much larger number of degrees of freedom. The word "consistency" before "of quantum gravity" sounds too verbal and qualitative so far – we should have a much clearer translation of this abstract noun to the language of equations.

It seems to me that the number of people in the world who are really intensely thinking on foundational questions of any similar kind or importance is of order one and the ongoing anti-science campaign has the goal to reduce this estimate to a number much smaller than one.

by Luboš Motl (noreply@blogger.com) at September 15, 2019 09:40 AM

September 14, 2019

John Baez - Azimuth

Klein on the Green New Deal

I’m going to try to post more short news items. For example, here’s a new book I haven’t read yet:

• Naomi Klein, On Fire: The (Burning) Case for a Green New Deal, Simon and Schuster, 2019.

I think she’s right when she says this:

I feel confident in saying that a climate-disrupted future is a bleak and an austere future, one capable of turning all our material possessions into rubble or ash with terrifying speed. We can pretend that extending the status quo into the future, unchanged, is one of the options available to us. But that is a fantasy. Change is coming one way or another. Our choice is whether we try to shape that change to the maximum benefit of all or wait passively as the forces of climate disaster, scarcity, and fear of the “other” fundamentally reshape us.

Nonetheless Robert Jensen argues that the book is too “inspiring”, in the sense of unrealistic optimism:

• Robert Jensen, The danger of inspiration: a review of On Fire: The (Burning) Case for a Green New Deal, Resilience, 10 September 2019.

Let me quote him:

On Fire focuses primarily on the climate crisis and the Green New Deal’s vision, which is widely assailed as too radical by the two different kinds of climate-change deniers in the United States today—one that denies the conclusions of climate science and another that denies the implications of that science. The first, based in the Republican Party, is committed to a full-throated defense of our pathological economic system. The second, articulated by the few remaining moderate Republicans and most mainstream Democrats, imagines that market-based tinkering to mitigate the pathology is adequate.

Thankfully, other approaches exist. The most prominent in the United States is the Green New Deal’s call for legislation that recognizes the severity of the ecological crises while advocating for economic equality and social justice. Supporters come from varied backgrounds, but all are happy to critique and modify, or even scrap, capitalism. Avoiding dogmatic slogans or revolutionary rhetoric, Klein writes realistically about moving toward a socialist (or, perhaps, socialist-like) future, using available tools involving “public infrastructure, economic planning, corporate regulation, international trade, consumption, and taxation” to steer out of the existing debacle.

One of the strengths of Klein’s blunt talk about the social and ecological problems in the context of real-world policy proposals is that she speaks of motion forward in a long struggle rather than pretending the Green New Deal is the solution for all our problems. On Fire makes it clear that there are no magic wands to wave, no magic bullets to fire.

The problem is that the Green New Deal does rely on one bit of magical thinking—the techno-optimism that emerges from the modern world’s underlying technological fundamentalism, defined as the faith that the use of evermore advanced technology is always a good thing. Extreme technological fundamentalists argue that any problems caused by the unintended consequences of such technology eventually can be remedied by more technology. (If anyone thinks this definition a caricature, read “An Ecomodernist Manifesto.”)

Klein does not advocate such fundamentalism, but that faith hides just below the surface of the Green New Deal, jumping out in “A Message from the Future with Alexandria Ocasio-Cortez,” which Klein champions in On Fire. Written by U.S. Rep. Ocasio-Cortez (the most prominent legislator advancing the Green New Deal) and Avi Lewis (Klein’s husband and collaborator), the seven-and-a-half minute video elegantly combines political analysis with engaging storytelling and beautiful visuals. But one sentence in that video reveals the fatal flaw of the analysis: “We knew that we needed to save the planet and that we had all the technology to do it [in 2019].”

First, talk of saving the planet is misguided. As many have pointed out in response to that rhetoric, the Earth will continue with or without humans. Charitably, we can interpret that phrase to mean, “reducing the damage that humans do to the ecosphere and creating a livable future for humans.” The problem is, we don’t have all technology to do that, and if we insist that better gadgets can accomplish that, we are guaranteed to fail.

Reasonable people can, and do, disagree about this claim. (For example, “The science is in,” proclaims the Nature Conservancy, and we can have a “future in which catastrophic climate change is kept at bay while we still power our developing world” and “feed 10 billion people.”) But even accepting overly optimistic assessments of renewable energy and energy-saving technologies, we have to face that we don’t have the means to maintain the lifestyle that “A Message from the Future” promises for the United States, let alone the entire world. The problem is not just that the concentration of wealth leads to so much wasteful consumption and wasted resources, but that the infrastructure of our world was built by the dense energy of fossil fuels that renewables cannot replace. Without that dense energy, a smaller human population is going to live in dramatically different fashion.

I don’t know what Klein actually thinks about this, but she does think drastic changes are coming, one way or another.  She writes:

Because while it is true that climate change is a crisis produced by an excess of greenhouse gases in the atmosphere, it is also, in a more profound sense, a crisis produced by an extractive mind-set, by a way of viewing both the natural world and the majority of its inhabitants as resources to use up and then discard. I call it the “gig and dig” economy and firmly believe that we will not emerge from this crisis without a shift in worldview at every level, a transformation to an ethos of care and repair.

Jensen adds:

The domination/subordination dynamic that creates so much suffering within the human family also defines the modern world’s destructive relationship to the larger living world. Throughout the book, Klein presses the importance of telling a new story about all those relationships. Scientific data and policy proposals matter, but they don’t get us far without a story for people to embrace. Klein is right, and On Fire helps us imagine a new story for a human future.

I offer a friendly amendment to the story she is constructing: Our challenge is to highlight not only what we can but also what we cannot accomplish, to build our moral capacity to face a frightening future but continue to fight for what can be achieved, even when we know that won’t be enough.

One story I would tell is of the growing gatherings of people, admittedly small in number today, who take comfort in saying forthrightly what they believe, no matter how painful—people who do not want to suppress their grief, yet do not let their grief overwhelm them.

 

by John Baez at September 14, 2019 08:11 AM

September 13, 2019

Emily Lakdawalla - The Planetary Society Blog

Astronomers May Have Found an Interstellar Comet. Here's Why That Matters.
Astrophysicist Karl Battams tells us what we can learn by studying objects from outside our solar system.

September 13, 2019 04:22 PM

astrobites - astro-ph reader's digest

Hide and Seek with Satellite Galaxies

Title: The hidden satellites of massive galaxies and quasars at high-redshifts

Authors: Tiago Costa, Joakim Rosdahl & Taysun Kimm

First Author’s Institution: Max-Planck-Institut für Astrophysik

Status: Accepted to MNRAS, open access on arXiv

 

Simulations of galaxy formation are powerful laboratories for testing astronomers' ideas about how the universe works. If those ideas are wrong, the simulations will not produce galaxies that look like the ones we observe. Satellite galaxies, which are small galaxies gravitationally bound to larger hosts, are especially good tests of the accuracy of simulations, since their properties are sensitive to many of the processes that affect their host.

Encompassing a host and its satellites is a dark matter halo, and the most massive of these have been shown to lie on spots of over-density in the early universe. In a previous paper, today’s authors predicted a significant over-density in the number of satellite galaxies around massive halos (greater than 10^12 times the mass of the sun). However, observations have not come back with concrete results confirming this idea. So where are these extra satellites?

Feedback

Like checking the box that says “I receive too many emails” when unsubscribing from a mailing list, galaxies have their own ways of reporting an inundation of spam – or in this case, star formation. If left unchecked, most gas in galaxies would have already formed into stars and we wouldn’t see so much of it. However, galaxies have ways of preventing more stars from forming, either by energy released from the area around the central black hole (known as AGN feedback), or by energy released from the stars themselves (stellar feedback).

The authors of today’s paper wanted to understand the effect that stellar feedback has on satellite galaxies, so they left out AGN feedback from their simulations. They then ran two versions of the same simulation of a massive galaxy at a redshift of z=6 (about 12.8 gigayears ago, roughly 1 Gyr after the universe began), one with radiation (light) from stars and one without. Figure 1 shows some of the results of that simulation.
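As a quick sense check of those numbers (my sketch, assuming a flat LCDM cosmology with Planck-like parameters, not anything taken from the paper), one can integrate the Friedmann equation in R:

H0 <- 67.7 * 1000 / 3.086e22                  # Hubble constant, km/s/Mpc -> 1/s (assumed value)
Om <- 0.31; Ol <- 1 - Om                      # assumed matter and dark-energy densities
E  <- function(z) sqrt(Om * (1 + z)^3 + Ol)   # dimensionless expansion rate
Gyr <- 3.156e16                               # seconds per gigayear
lb <- function(z) integrate(function(x) 1 / ((1 + x) * E(x)), 0, z)$value / H0 / Gyr
lb(6)            # lookback time to z = 6: ~12.8 Gyr
lb(Inf) - lb(6)  # age of the universe at z = 6: ~0.9 Gyr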

Figure 1: The two simulations run by Costa et al. The top is without stellar radiation and the bottom is with. The left panels show the mass-weighted entropy, or how “well-mixed” the system is. The right panels show a close-up view of the mass density. (Source: Figure 1 in the paper)

The left panels show entropy, or randomness of the system. The simulation without stellar radiation (SN, top left) shows more randomness and a less contained system than the simulation with stellar radiation (SN+RT, bottom left). With star formation being suppressed in the SN+RT simulation, outflows of material by exploding stars (supernovae) are weaker, which leads to less randomness surrounding the galaxy.

Ready or Not

In the top right panel of Figure 1 we see satellite galaxies in circles. The red circles are satellites that appear in both simulations, but in the SN simulation (top) there are additional satellites circled in black that don’t appear in the simulation with stellar radiation. These are the hidden galaxies.

By looking at earlier redshifts (i.e. back in time), the authors found that these satellite galaxies form separately from their host, but their equivalents aren’t found at later times in SN+RT.

Figure 2: Three simulated galaxies (in green, grey and orange) at three different points in their lifetime. The top and bottom row represent the same systems in the simulation without stellar radiation (top) and with stellar radiation (bottom). In the SN+RT (bottom) simulation, tidal forces (like how our Moon causes tides on Earth) are more effective at tearing apart smaller galaxies as they near larger ones. (Source: Figure 3 in the paper)

Here They Come

To find these missing satellites, the authors looked back in the galaxy’s history and found three individual galaxies, shown on the left in Figure 2 for both the SN and SN+RT simulations. The colors trace the individual components of each separate galaxy until they become mixed up in the right panels.

This history shows that the satellites seen in SN but not SN+RT are actually the remainders of galaxies that got eaten up by what is now the host. The authors postulate that they do not appear in the simulation with stellar radiation because the wider envelope of gas around the host consumes the satellites more effectively than the denser host in the simulation without stellar radiation.

 

The authors of today’s paper have shown that stellar radiation is an influential component in simulations of galaxy formation. It causes galaxies to be less centrally dense and to have fewer satellite galaxies than predicted by less rigorous methods. With stellar radiation, there are also weaker supernova outflows and less structure. When the James Webb Space Telescope launches, it will be able to detect more satellite galaxies and test how accurate simulations like these are.

Satellite galaxies also have an important lesson to teach us about how to succeed at hide and seek: blend into your surroundings by being almost completely consumed by them. On second thought, it may be best for you to keep your structure and stick to hiding behind the couch.

by Bryanne McDonough at September 13, 2019 01:00 PM

September 12, 2019

astrobites - astro-ph reader's digest

Galaxy collisions and star-formation: a surprising causal examination

Title: Effect of galaxy mergers on star formation rates

Authors: W. J. Pearson, L. Wang, M. Alpaslan, I. Baldry, M. Bilicki, M. J. I. Brown, M. W. Grootes, B. W. Holwerda, T. D. Kitching, S. Kruk, F. F. S. van der Tak

First Author’s Institution: SRON Netherlands Institute for Space Research & Kapteyn Astronomical Institute, University of Groningen

Status: Accepted to Astronomy & Astrophysics, open access on arXiv

 

The fundamental constituents of galaxies are stars, gas, dust, black holes, and dark matter. The extent to which we understand each of them and their interplay varies greatly. Yet current understanding drives us to acknowledge that the growth of the mass of galaxies must be driven by either active star-formation, galaxy-galaxy mergers, or both. But which of these two effects dominates is an ongoing mystery nearly as old as the discovery of galaxies themselves.

Figure 1. (Left) The galaxy-galaxy merger remnant NGC 7252 through gas-rich merging. (Right) The star-forming galaxy NGC 1559 grows through forming stars. Images courtesy of ESO and NASA/ESA, respectively.

Growing a Galaxy

The notion that active star-formation drives galaxies to grow in mass is not disputed. Massive bright blue O/B stars, although sparse in number, live extremely short lives — about a million years. In contrast, the extremely abundant late-type stars of lower mass and luminosity live considerably longer lives, and some remain stalwart contributors to the galaxy stellar mass over timescales longer than the age of the universe. Thus, although we may easily trace episodes of star-formation via outbursts of luminous blue stars, it is through the coincident birth of many smaller stars that a galaxy grows. This is called secular growth.

However, there is also clear evidence for the merger of two galaxies. This can take the form of a spectacular trainwreck of two large galaxies, or the accretion of a smaller galaxy by a larger one, albeit a less exciting event than the first. In this mode of environmental growth, a galaxy is able to quickly grow its mass through the acquisition of these components from another nearby galaxy. At the same time, tidal forces resulting from the gravitational interaction of the two bodies can act to compress and shock the gas, and hence produce a burst of star-formation.

The relation of mergers to various properties of a galaxy has been an attractive subject of study for many, and although much progress has been made both observationally and theoretically, several insights remain in contradiction.

Today’s astrobite explores the causal connection between star-formation and galaxy mergers.

 

The Devil is in the Details

Recent observations have shown that a typical galaxy merger fails to produce a starburst featuring the extreme star-formation rates usually assigned to bona fide starburst systems. Seeing as up to 20% of starburst systems in the nearby universe are thought to be undergoing a merger, this seems contradictory. However, it is not only the star-forming systems which are known to undergo mergers. In fact, they are in the minority: massive galaxies live in dense galaxy clusters where mergers are statistically more likely, and yet they also lack the large reservoirs of gas common to star-forming galaxies — a key ingredient required to form stars. It is in these cluster environments in which these so-called dry mergers dominate. 

Figure 2. The massive and densely crowded galaxy cluster MACSJ0717.5+3745. Image courtesy of NASA, ESA and the HST Frontier Fields team (STScI).

The timescales involved in merger events prohibit a comprehensive study of the same galaxy from start to finish. Hence, it is from a patchwork of observations of different galaxies at different stages of merging that these observationally-driven theories are derived. If we wish to examine the life of a single galaxy, then we must compare to simulations of galaxy mergers.

One key result from simulation explains the rarity of starburst activity in merging galaxies. Simply put, the merger sequence is so much longer than an individual episode of starbursting that the starburst episode is usually missed by observations. Although a simulation can render countless merging systems (and make for some great movies!), finding enough merging systems through observation has proven enormously challenging.

 

A New Statistical Approach

The authors of this work approach this sample size issue by employing a carefully trained convolutional neural network (CNN) to soundly identify merging systems in image cutouts from three prodigious galaxy surveys: SDSS, KiDS, and CANDELS. Much of the training relied upon existing visual classification, with several steps taken to mitigate false-positives. A CNN is trained on each survey sample separately, with effective accuracies ranging from 83-94%.
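To give a flavour of what such a classifier looks like, here is a minimal sketch using the keras R package (my own toy architecture and cutout size, not the networks actually trained by the authors):

library(keras)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(64, 64, 1)) %>%   # assumed 64x64 single-band cutouts
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dropout(rate = 0.5) %>%                   # guards against overfitting
  layer_dense(units = 1, activation = "sigmoid")  # output: probability of "merger"

model %>% compile(optimizer = "adam",
                  loss = "binary_crossentropy",
                  metrics = "accuracy")
# model %>% fit(x_train, y_train, epochs = 20, validation_split = 0.2)

The skeleton is the same whatever the survey: the convolutions learn morphological features (tidal tails, double nuclei), and the final sigmoid scores each cutout as merger or non-merger.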

Figure 3. (Left) Distribution of star-formation rates (SFR) for the CANDELS sample, after being subtracted for the star-formation main sequence (MS). (Right) Distribution means and standard deviations for the three samples corresponding to different cosmic epochs (i.e. redshift). Adapted from Figures 12 and 13 in the paper. 

After subtracting the correlation between star-formation and stellar mass (the star-formation main sequence), the resulting classification of merging and non-merging systems shows that merging systems display only a slight preference towards higher star-formation rates at all cosmic times, as shown in Figure 3. Moreover, this preference is statistically insignificant given the observed spread in star-formation rates.

In close agreement with both previous observations and simulations, they find that the star-formation enhancement due to mergers is less than a factor of two. These findings also suggest that starburst episodes are short-lived relative to the timescale required to complete the merger, highlighting the rarity of starburst episodes within the merger process. They also find that although mergers ultimately have an insignificant effect on star-formation rate, extreme starbursts are commonly merging systems.

So, despite the rapid increase in star-formation, the magnitude of the burst is not great enough to differentiate these systems from the non-merging population in terms of star-formation rate.

Not only do the authors of this work demonstrate the success and practicality of applying machine learning techniques to astronomy, they also provide important evidence refuting the causal connection between starbursts and mergers, a connection which, without careful consideration, can naively be taken as common sense.

 

 

by John Weaver at September 12, 2019 01:00 PM

Lubos Motl - string vacua and pheno

Moore & ladies: high-Hodge vacua are less numerically dominant than thought
There are many interesting new hep-th papers today. The first author of the first paper is Heisenberg; Kallosh and Linde have a post-Planck update on the CMB – a pattern claimed to be compatible with the KKLT. There are 17 new papers omitting cross-listings, but I choose the second paper:
Flux vacua: A voluminous recount
If we overlook the title that tries to please Al Gore if not Hillary Clinton (too late), Miranda Cheng, Greg Moore, and Natalie Paquette (Amsterdam-Rutgers-Caltech) work to avoid an approximation that is often involved while counting the flux vacua – you know, the computations that yield the numbers such as the insanely overcited number of \(10^{500}\).



In particular, the previously neglected "geometric factor" is

\[
\frac{1}{\pi^{m/2}}\int \det \left({\mathcal R} + \omega\cdot 1\right)
\]

OK, something like a one-loop measure factor in path integrals. This factor influences some density of the vacua. Does the factor matter?



They decide that sometimes it does, sometimes it does not. More curiously, they find out that this factor tends to be an interestingly behaved function of the topological invariants. It's intensely decreasing towards a minimum, as you increase some topological numbers, and then it starts to increase again.

Curiously enough, the minimum is pretty much reached for the values of the topological numbers that are exactly expected to be dominant in the string compactifications. In this sense, the critical dimensions and typical invariants in string theory conspire to produce the lowest number of vacua that is mathematically possible, at least when this geometric factor is what we look at.

This is a "hope" I have been explicitly articulating many times – that if you actually count the vacua really properly, or perhaps with some probability weighting that has to be there to calculate which of them could have arisen at the beginning, the "most special or simplest" vacua could end up dominating.

They're not quite there but they have some substantial factor that reduces – but not sufficiently reduces – the number of vacua for very large Hodge numbers i.e. in this sense "complicated topologies of the compactification manifolds". I mean large Hodge numbers. Note that large Hodge numbers (which may become comparable to 1,000 for Calabi-Yau threefolds) are really needed to get high estimates of the number of vacua such as that \(10^{500}\). You need many cycles and many types of fluxes to obtain the high degeneracies.

Wouldn't it be prettier if the Occam-style vacua with the lowest Hodge numbers were the contenders to become the string vacuum describing the world around us? There could still be a single viable candidate. I have believed that the counting itself is insufficient and the Hartle-Hawking-like wave function gives a probabilistic weighting that could pick the simplest one. They have some evidence that previously neglected effects could actually suppress the very number of the vacua with the large Hodge numbers or other signs of "contrivedness".

Clearly, everyone whose world view finely depends on claims about "the large number of string vacua" has the moral duty to study the paper and perhaps to try to go beyond it.

by Luboš Motl (noreply@blogger.com) at September 12, 2019 04:04 AM

astrobites - astro-ph reader's digest

New Cosmological Detectives: Using FRBs to Constrain the Diffuse Gas Fraction

Title: Probing Diffuse Gas with Fast Radio Bursts

Authors: Anthony Walters, Yin-Zhe Ma, Jonathan Sievers, and Amanda Weltman

First Author’s Institution: School of Chemistry and Physics, University of KwaZulu-Natal, Durban 4000, South Africa; and NAOC-UKZN Computational Astrophysics Centre (NUCAC)

Status: Open access on the arXiv

 

A Fast-Paced, Far-Reaching Field

Although Fast Radio Bursts (FRBs), brief millisecond flashes of extremely energetic radio emission, were first discovered over a decade ago in 2007, only in the past few years has the community seen a revolutionary increase in the number of detected FRBs. In early March, the known total number of FRBs was at least 65, two of which were repeating sources, and one of which had been localized to a host galaxy. The number is now at least 85 known FRBs, with nine repeaters announced this past August (eight new sources and one previously single-burst source), and two more localized FRBs announced this past summer. (See this astrobites for more on one of them!)

Beyond their mysterious origins, FRBs have also captured the attention of some scientists because of their possible applications to cosmology. FRBs have large observed dispersion measures (DMs), which means that the lowest energy radio photons from the burst are observed some time after the most energetic photons, and the time depends on the number of free electrons the photons travel through. Free electrons can be found in the intergalactic medium (IGM), circumgalactic medium (CGM), Milky Way, and the host galaxies of FRBs. FRBs that are very far away –– at cosmological distances –– may have a “cosmic DM” with contributions to the DM from the IGM and the CGM. 
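Quantitatively, the standard cold-plasma result (a textbook relation, not specific to this paper) for the delay between a low and a high observing frequency is

\[
\Delta t \simeq 4.15\ \mathrm{ms}\times\left(\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\right)\left[\left(\frac{\nu_{\mathrm{lo}}}{\mathrm{GHz}}\right)^{-2}-\left(\frac{\nu_{\mathrm{hi}}}{\mathrm{GHz}}\right)^{-2}\right],
\qquad
\mathrm{DM}=\int_0^d n_e\,\mathrm{d}l,
\]

so a burst with DM = 500 pc cm\(^{-3}\) arrives roughly 2 seconds later at 1 GHz than at very high frequencies.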

If the redshifts (z) of the FRBs are measured, possible only if they are localized to a host galaxy, then a relation between cosmological DM and redshift, DM(z), can probe cosmological parameters. The DM(z) relation could also shed light on the “missing baryon problem” by constraining the fraction of diffuse, ionized gas –– baryons –– in the IGM. These problems can be explored with DM(z) because DM as a function of z can provide an estimate for the baryonic matter in the IGM and CGM between us and the source as a function of redshift (and the density of baryonic matter is a cosmological parameter). We don’t have a lot of DM(z) data from FRBs yet, but it’s always useful to think ahead and characterize what kind of science can be done if the data were there. That’s where simulations, and today’s paper, come into play.

 

Simulating Our Way to Cosmological and Diffuse Gas Fraction Constraints

In order to study what cosmological parameters might be best constrained, and how well the missing baryon problem can be reduced by FRB data, the authors of today’s paper simulate catalogs of mock FRBs out to redshift z ≃ 3. They then combine the mock DM(z) data, illustrated in Figure 1, with current cosmological constraints from the Planck 2016 data release: measurements of the cosmic microwave background (CMB), baryon acoustic oscillations (BAO), Type Ia supernovae (SNIa), and the Hubble Constant (H0), which they combine and call the CBSH parameter values.

Figure 1. Results from modeling the DM from the simulations. The different colors represent various redshift bins (Δz = 0.03), and are labeled in the legend with the central redshifts of the bin. The top panel shows the probability distribution function of an FRB’s cosmological DM, given its z. Due to the log-normal shape of the distribution, they fit Gaussian distributions to the log of the DM (with a constant offset), defined as X and shown in the lower panel. The best-fit lines are in solid colors. (Adapted from Figure 1 in today’s paper.)

With the simulated FRBs combined with the CBSH parameters, the authors then use Markov chain Monte Carlo (MCMC) techniques in order to constrain the diffuse gas fraction, and to forecast cosmological constraints, i.e. to predict the precision a future experiment could have on some measurable cosmological parameters, with the diffuse gas fraction as a parameter. Their results show no improvement over constraints already given by CBSH information, which is unsurprising considering that observational constraints on the diffuse gas fraction are much weaker than cosmological constraints. However, their results do show great promise for constraining the diffuse gas fraction itself, finding typical constraints of a few percent for a catalog with DM(z) data from 100 FRBs, and constraints < 1% for a catalog with 1,000 FRBs. 

 

A New Generation of FRBs and Cosmological Studies

There is a new generation of radio telescopes that already have, or can, detect FRBs: Canadian Hydrogen Intensity Mapping Experiment (CHIME), Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX), Five-hundred metre Aperture Spherical Telescope (FAST), Australian Square Kilometre Array Pathfinder (ASKAP), Karoo Array Telescope (MeerKAT), Murchison Widefield Array (MWA), Deep Synoptic Array (DSA), and in the future, the Square Kilometre Array (SKA). With an increase in localization efforts of FRBs, DM(z) data (real, not simulated) can be obtained that could tell us about the makeup of our Universe –– especially the diffuse gas fraction and the “missing baryon problem.” In other words, FRBs, which are only milliseconds long, can tell us about cosmological mysteries that go back billions of years. With the new generation of radio telescopes, these studies might be just around the corner!

by Kaitlyn Shin at September 11, 2019 07:39 PM

September 09, 2019

Jon Butterworth - Life and Physics

Scientific Exile?
This article in the Guardian describes a situation which is already happening. I have personally been in two recent scientific/training network planning meetings in which UK leadership was ruled out as a possibility (by mutual agreement) as too risky in the … Continue reading

by Jon Butterworth at September 09, 2019 12:45 PM

September 08, 2019

Emily Lakdawalla - The Planetary Society Blog

Here's How I'm Celebrating 53 Years of Star Trek
Star Trek Voyager Emergency Medical Hologram and Planetary Society Board member Robert Picardo uncovers rare Star Trek artifacts at The Planetary Society.

September 08, 2019 05:30 AM

September 06, 2019

Emily Lakdawalla - The Planetary Society Blog

India's Vikram Spacecraft Apparently Crash-Lands on Moon
Communications were lost with the lander, which was carrying a small rover named Pragyan to the lunar surface.

September 06, 2019 09:49 PM

September 05, 2019

Axel Maas - Looking Inside the Standard Model

Reflection, self-criticism, and audacity as a scientist
Today, I want to write a bit about me as a scientist, rather than about my research. It is about how I deal with our attitude towards being right.

As I still do particle physics, we are not done with it. Meaning, we have no full understanding. As we try to understand things better, we make progress, and we make both wrong assumptions and actual errors. The latter because we are human, after all. The former because we do not yet know better. Thus, we necessarily know that whatever we do will not be perfect. In fact, especially when we enter unexplored territory, what we do is more likely not the final answer than not. This has led to a quite defensive way of presenting results. In fact, many conclusions of papers read more like an enumeration of all that could be wrong with what was written than of what has been learned. And because we are not in perfect control of what we are doing, anyone who is trying to twist things in a way they like will find a way, due to all the cautious presentation. On the other hand, if we were not so defensive, and acted like we were right when we are not - well, this would also be held against us, right?

Thus, as a scientist one is caught in an eternal limbo between actually believing one's own results and thinking that they can only be wrong. If you browse through scientists on, e.g., Twitter, you will see that this is a state which is not easy to endure. This becomes aggravated by a science system which was geared by neoliberalism towards competition, and by populist movements who need to discredit science to further their own ends, no matter the cost. To deal with both, we need to be audacious, and make our claims bold. At the same time, we know very well that any claims to be right are potentially wrong. Thus enhancing the perpetual cycle of self-doubt on an individual level. On a collective level this means that science gravitates to things which are simple and incremental, as there the chance of being wrong is smaller than when trying to do something more radical or new. Thus, this kind of pressure reduces science from revolutionary to evolutionary, with all the consequences. It also damns us to avoid taking all consequences of our results, because they could be wrong, couldn't they?

In the case of particle physics, this slows us down. One of the reasons, at least in my opinion, why there is no really big vision of how to push forward, is exactly being too afraid of being wrong. We are at a time where we have too little evidence to take evolutionary steps. But rather than make the bold step of just going exploring, we try to cover every possible evolutionary direction. Of course, one reason is that, being in a competitive system, we have no chance of being bold more than once. If we are wrong with this, this will probably create a dead stop for decades. Of course, in other fields of science the consequences can be much more severe. E.g. in climate sciences, this may very well be the difference between extinction of the human species and its survival.

How do I deal with this? Well, I have been far too privileged, and in addition I was lucky a couple of times. As a consequence, I could weather the consequences of being a bit more revolutionary and a bit more audacious than most. However, I also see that if I had not been, I would probably have had an easier career still. But this does not remove my own doubt about my results. After all, what I do has far-reaching consequences. In fact, I am questioning very much the conventional wisdom in textbooks, and want to reinterpret the way the standard model (and beyond) describes the particles of the world we are living in. Once in a while, when I realize what I claim, I can get scared. Other times, I feel empowered by how things seem to fall into place, and I do not see edges that do not fit. Thus, I live in my own cycle of doubt.

Is there anything we can do about the nagging self-doubt, the timidity and the feeling of being an imposter? Probably not so much as individuals, except for taking good care of oneself, and working with people with a positive attitude about our common work. Many of the problems are systemic. Some of them could be dealt with by taking the heat of competition out of science, and having a cooperative model. This will only work out if there is more access to science positions, and more resources to do science. After all, there are right now far more people wanting a position as a scientist than there are positions available. No matter what we do, this always creates additional pressure. But even that could be reduced by having controllable career paths, more mentoring, easier transitions out of science, and much more feedback. But this not only requires long-term commitments on behalf of research institutes, but also that scientists themselves acknowledge these problems. I am very happy to see that this consciousness grows, especially with younger people getting into science. Too many scientists I encounter blatantly deny that these problems exist.

However, in the end, these problems too are connected to societal issues at large. The current culture is extremely competitive, and more often than not rewards selfish behavior. Also, there is, both in science and in society, a strong tendency to give to those who already have. And such a society shapes also science. It will be necessary that society reshapes itself to a more cooperative model to get a science which is much more powerful and forward-moving than we have today. On the other hand, existential crises of the world, like the climate crisis or the rise of fascism, are also facilitated by a competitive society. And could therefore likely be overcome by having a more cooperative and equal society. Thus, dealing with the big problems will also help solving the problems of scientists today. I think this is worthwhile, and invite any fellow scientist, and anyone, to do so.

by Axel Maas (noreply@blogger.com) at September 05, 2019 02:51 PM

September 04, 2019

John Baez - Azimuth

UN Climate Action Summit

Christian Williams

Hello, I’m Christian Williams. I study category theory with John Baez at UC Riverside. I’ve written two posts on Azimuth about promising distributed computing endeavors. I believe in the power of applied theory – that’s why I left my life in Texas just to work with John. But lately I’ve begun to wonder if these great ideas will help the world quickly enough.

I want to discuss the big picture, and John has kindly granted me this platform with such a diverse, intelligent, and caring audience. This will be a learning process. All thoughts are welcome. Thanks for reading.

(Greta Thunberg, coming to help us wake up.)

…..
I am the master of my fate,
      I am the captain of my soul.

It’s important to be positive. Humanity now has a global organization called the United Nations. Just a few years ago, members signed an amazing treaty called The Paris Agreement. The parties and signatories:

… basically everyone.

By ratifying this document, the nations of the world agreed to act to keep global warming below 2C above pre-industrial levels – an unparalleled environmental consensus. (On Azimuth, in 2015.) It’s not mandatory, and to me that’s not the point. Together we formally recognize the crisis and express the intent to turn it around.

Except… we really don’t have much time.

We are consistently finding that the ecological crisis is of a greater magnitude and urgency than we thought. The report that finally slapped me awake is the IPCC 2018, which explains the difference between 2C and 1.5C in terms of total devastation and lives, and states definitively:

We must reduce global carbon emissions by 45% by 2030, and by 100% by 2050 to keep within 1.5C. We must have strong negative emissions into the next century. We must go well beyond our agreement, now.

(Blue is essentially, “we might still have a stable society”.)

So… how is our progress on the agreement? That is complicated, and a whole analysis is yet to be done. Here is the UN progress tracker. Here is an NRDC summary. Some countries are taking significant action, but most are not yet doing enough. Let that sink in.

However, the picture is much deeper than only national. Reform sparks at all levels of society: a US politician wanting to leave the agreement emboldened us to form the vast coalition We Are Still In. There are many initiatives like this, hundreds of millions of people rising to the challenge. A small selection:

City and State Levels
Mayors National Climate Action Agenda, U.S. Climate Alliance
Covenant of Mayors for Climate & Energy
International Levels
Reducing emissions from deforestation and forest degradation (REDD)

RE100, Under2 Coalition (The Climate Group)
Everyone Levels
Fridays for Future, Sunrise Movement, Extinction Rebellion
350.org, Climate Reality

Each of us must face this challenge, in their own way.

…..

Responding to the findings of the IPCC, the UN is meeting in New York on September 23, with even higher ambitions and higher stakes: UN Climate Action Summit 2019. The leaders will not sit around and give pep talks. They are developing plans which will describe how to transform society.

On the national level, we must make concrete, compulsory commitments. If our leaders do not act soon, then we must demand louder, or take their place. The same week as the summit, there will be a global climate strike. It is crucial that all generations join the youth in these demonstrations.

We must change how the world works. We have reached global awareness, and we have reached an ethical imperative.

Please listen to an inspiring activist share her lucid thoughts.

by christianbwilliams at September 04, 2019 05:47 PM

August 31, 2019

Clifford V. Johnson - Asymptotia

Two Days at San Diego Comic-Con 2019

Avengers cosplayers in the audience of my Friday panel.

It might surprise you to know just how much science gets into the mix at Comic-Con. This never makes it to the news of course - instead it's all stories about people dressing up in costumes, and of course features about big movie and TV announcements. Somewhere inside this legendary pop culture maelstrom there’s something for nearly everyone, and that includes science. Which is as it should be. Here’s a look at two days I spent there. [I took some photos! All except two are here. You can click on any photo to enlarge it.]

Day 1 – Friday

I finalized my schedule rather late, and so wasn’t sure of my hotel needs until it was far too late to find two nights in a decent hotel within walking distance of the San Diego Convention Center — well, not for prices that would fit with a typical scientist’s budget. So, I’m staying in a motel that’s about 20 minutes away from the venue if I jump into a Lyft.

My first meeting is over brunch at the Broken Yolk at 10:30am, with my fellow panellists for the panel at noon, “Entertaining Science: The Real, Fake, and Sometimes Ridiculous Ways Science Is Used in Film and TV”. They are Donna J. Nelson, chemist and science advisor for the TV show Breaking Bad (she has a book about it), Rebecca Thompson, Physicist and author of a new book about the science of Game of Thrones, and our moderator Rick Loverd, the director of the Science and Entertainment Exchange, an organization set up by the National Academy of Sciences. I’m on the panel also as an author (I wrote and drew a non-fiction graphic novel about science called The Dialogues). My book isn’t connected to a TV show, but I’ve worked on many TV shows and movies as a science advisor, and so this rounds out the panel. All our books are from [...] Click to continue reading this post

The post Two Days at San Diego Comic-Con 2019 appeared first on Asymptotia.

by Clifford at August 31, 2019 05:56 AM

August 29, 2019

John Baez - Azimuth

The Binary Octahedral Group

The complex numbers together with infinity form a sphere called the Riemann sphere. The 6 simplest numbers on this sphere lie at points we could call the north pole, the south pole, the east pole, the west pole, the front pole and the back pole. They’re the corners of an octahedron!

On the Earth, I’d say the “front pole” is where the prime meridian meets the equator at 0°N 0°E. It’s called Null Island, but there’s no island there—just a buoy. Here it is:

Where’s the back pole, the east pole and the west pole? I’ll leave two of these as puzzles, but I discovered that in Singapore I’m fairly close to the east pole:

If you think of the octahedron’s corners as the quaternions \(\pm i, \pm j, \pm k\), you can look for unit quaternions \(q\) such that whenever \(x\) is one of these corners, so is \(qxq^{-1}\). There are 48 of these! They form a group called the binary octahedral group.

By how we set it up, the binary octahedral group acts as rotational symmetries of the octahedron: any transformation sending \(x\) to \(qxq^{-1}\) is a rotation. But this group is a double cover of the octahedron’s rotational symmetry group! That is, pairs of elements of the binary octahedral group describe the same rotation of the octahedron.

If we go back and think of the Earth’s 6 poles as points \(0, \pm 1, \pm i, \infty\) on the Riemann sphere instead of \(\pm i, \pm j, \pm k\), we can think of the binary octahedral group as a subgroup of \(\mathrm{SL}(2,\mathbb{C})\), since this acts as conformal transformations of the Riemann sphere!

If we do this, the binary octahedral group is actually a subgroup of \(\mathrm{SU}(2)\), the double cover of the rotation group—which is isomorphic to the group of unit quaternions. So it all hangs together.

It’s fun to actually see the unit quaternions in the binary octahedral group. First we have 8 that form the corners of a cross-polytope (the 4d analogue of an octahedron):

\[ \pm 1, \pm i, \pm j, \pm k \]

These form a group on their own, called the quaternion group. Then we have 16 that form the corners of a hypercube (the 4d analogue of a cube, also called a tesseract or 4-cube):

\[ \frac{\pm 1 \pm i \pm j \pm k}{2} \]

These don’t form a group, but if we take them together with the 8 previous ones we get a 24-element subgroup of the unit quaternions called the binary tetrahedral group. They’re also the vertices of a 24-cell, which is yet another highly symmetrical shape in 4 dimensions (a 4-dimensional regular polytope that doesn’t have a 3d analogue).

That accounts for half the quaternions in the binary octahedral group! Here are the other 24:

\[ \frac{\pm 1 \pm i}{\sqrt{2}}, \frac{\pm 1 \pm j}{\sqrt{2}}, \frac{\pm 1 \pm k}{\sqrt{2}}, \frac{\pm i \pm j}{\sqrt{2}}, \frac{\pm j \pm k}{\sqrt{2}}, \frac{\pm k \pm i}{\sqrt{2}} \]

These form the vertices of another 24-cell!
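Here is a quick computational sanity check in R (my addition, not from the post): multiplying any two of the 48 unit quaternions listed above lands back in the set, and a finite set of invertible elements closed under multiplication is a group.

# Hamilton's product on quaternions stored as c(re, i, j, k)
qmul <- function(a, b) c(
  a[1]*b[1] - a[2]*b[2] - a[3]*b[3] - a[4]*b[4],
  a[1]*b[2] + a[2]*b[1] + a[3]*b[4] - a[4]*b[3],
  a[1]*b[3] - a[2]*b[4] + a[3]*b[1] + a[4]*b[2],
  a[1]*b[4] + a[2]*b[3] - a[3]*b[2] + a[4]*b[1])

els <- list()
for (i in 1:4) for (s in c(1, -1)) {              # the 8 cross-polytope corners
  v <- rep(0, 4); v[i] <- s; els[[length(els) + 1]] <- v }
for (s1 in c(1,-1)) for (s2 in c(1,-1)) for (s3 in c(1,-1)) for (s4 in c(1,-1))
  els[[length(els) + 1]] <- c(s1, s2, s3, s4)/2   # the 16 hypercube corners
for (a in 1:3) for (b in (a+1):4) for (sa in c(1,-1)) for (sb in c(1,-1)) {
  v <- rep(0, 4); v[a] <- sa; v[b] <- sb          # the other 24-cell
  els[[length(els) + 1]] <- v/sqrt(2) }

M <- do.call(rbind, els)                          # 48 x 4 matrix, one quaternion per row
idx <- expand.grid(1:48, 1:48)
closed <- all(apply(idx, 1, function(ij) {
  p <- qmul(M[ij[1], ], M[ij[2], ])
  min(rowSums(sweep(M, 2, p)^2)) < 1e-9           # product equals some listed element
}))
closed                                            # TRUE: closed, hence a 48-element group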

The first 24 quaternions, those in the binary tetrahedral group, give rotations that preserve each one of the two tetrahedra that you can fit around an octahedron like this:

while the second 24 switch these tetrahedra.

The 6 elements

\[ \pm i, \pm j, \pm k \]

describe 180° rotations around the octahedron’s 3 axes, the 16 elements

\[ \frac{\pm 1 \pm i \pm j \pm k}{2} \]

describe 120° clockwise rotations of the octahedron’s 8 triangles, the 12 elements

\[ \frac{\pm 1 \pm i}{\sqrt{2}}, \frac{\pm 1 \pm j}{\sqrt{2}}, \frac{\pm 1 \pm k}{\sqrt{2}} \]

describe 90° clockwise rotations holding fixed one of the octahedron’s 6 vertices, and the 12 elements

\[ \frac{\pm i \pm j}{\sqrt{2}}, \frac{\pm j \pm k}{\sqrt{2}}, \frac{\pm k \pm i}{\sqrt{2}} \]

describe 180° clockwise rotations of the octahedron’s 6 opposite pairs of edges.

Finally, the two elements

\[ \pm 1 \]

do nothing!
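(A quick check that nothing was lost in the count: \(6 + 16 + 12 + 12 + 2 = 48\), and since the elements \(\pm q\) give the same rotation, these 48 quaternions double-cover the 24 rotational symmetries of the octahedron.)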

So, we can have a lot of fun with the idea that a sphere has 6 poles.

by John Baez at August 29, 2019 01:00 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

The DevOps Effect: Big and Beneficial Changes You Can Expect from Adapting DevOps Practices

There are plenty of excellent reasons why any business should implement the DevOps approach within their company. But here, we’re going to focus on the four major things you can expect when you implement DevOps strategies within your company.

Better Tools

One of the things that DevOps brings is the opportunity to use new technology, especially when it comes to tools and systems. Any company can benefit from re-tooling the workforce. For example, implementing production and operations planning software and automating standard processes ensures a smoother, more efficient flow of work.

Including these two things in your toolchain will ensure that no work is left undone and that every step in the process is completed. Tool efficiency improves work output, which in turn produces value that is reflected in the products and services that reach your customers.

Improved Mindset

Seeing things with fresh eyes is another thing that DevOps is particular about. By improving the ways and methods that have been in place for some time, an organization can assess the efficiency of its value stream mapping.

They have the opportunity to eliminate waste in the stream and improve work dependencies and relationships within and between teams. This results in a new perspective and outlook on how work should and must be done.

DevOps removes the barrier between back and front end teams, allowing them to work together seamlessly and effectively, with better communication and work visibility, to boot. This keeps both ends on the same page and lets each side change its approach to handing off work and providing feedback.

Smoother Workflow

We mentioned value stream and workflow in the above paragraphs; these two elements are crucial to the successful application and adaptation of DevOps practices. Removing wasteful steps, like unnecessary work, from the stream reduces the time between tasks, shortens handoffs, and cuts lead times.

The result is a higher work turn-in rate, faster delivery rates, and improved product and service quality. It fosters a more ideal work environment where the flow of work is continuous and with fewer constraints throughout the entire process.

Better Internal Communication


The dividing line between dev and ops is systematically eradicated as better and more systematic communication methods are implemented. Stronger communication paths bridge the gap between dev and ops, and more collaborative effort toward creating valuable products and services enhances customer relationships. It is often said that work output mirrors the internal work systems and communication methods of a company or large organization.

Putting a premium on internal communication methods provides any business – big or small – clear direction and the ability to ensure that things are flowing at the right pace, in the right order. It also paves the way for easier problem solving, as matters of concern can be raised quickly and easily through the appropriate channels.

It’s often been said that deeply ingrained institutional habits are hard to break, and they are often the reason a business fails to keep up with the ever-changing demands of the market and its customers. This is where adapting the DevOps approach will prove beneficial to your business – whether big or small.

The post The DevOps Effect: Big and Beneficial Changes You Can Expect from Adapting DevOps Practices appeared first on None Equilibrium.

by Bertram Mortensen at August 29, 2019 01:00 AM

August 28, 2019

Matt Strassler - Of Particular Significance

The New York Times Remembers A Great Physicist

The untimely and sudden deaths of Steve Gubser and Ann Nelson, two of the United States’ greatest talents in the theoretical physics of particles, fields and strings, has cast a pall over my summer and that of many of my colleagues.

I have not been finding it easy to write a proper memorial post for Ann, who was by turns my teacher, mentor, co-author, and faculty colleague.  I would hope to convey to those who never met her what an extraordinary scientist and person she was, but my spotty memory banks aren’t helping. Eventually I’ll get it done, I’m sure.

(Meanwhile I am afraid I cannot write something similar for Steve, as I really didn’t know him all that well. I hope someone who knew him better will write about his astonishing capabilities and his unique personality, and I’d be more than happy to link to it from here.)

In this context, I’m gratified to see that the New York Times has given Ann a substantive obituary, https://www.nytimes.com/2019/08/26/science/ann-nelson-dies.html, one that appeared in the August 28th print edition, I’m told. It contains a striking (but, to those of us who knew her, not surprising) quotation from Howard Georgi.  Georgi is a professor at Harvard who is justifiably famous as the co-inventor, with Nobel-winner Sheldon Glashow, of Grand Unified Theories (in which the electromagnetic, weak nuclear, and strong nuclear forces all emerge from a single force). He describes Ann, his former student, as being able to best him at his own game.

  • “I have had many fabulous students who are better than I am at many things. Ann was the only student I ever had who was better than I am at what I do best, and I learned more from her than she learned from me.”

He’s being a little modest, perhaps. But not much. There’s no question that Ann was an all-star.

And for that reason, I do have to complain about one thing in the Times obituary. It says “Dr. Nelson stood out in the world of physics not only because she was a woman, but also because of her brilliance.”

Really, NYTimes, really?!?

Any scientist who knew Ann would have said this instead: that Professor Nelson stood out in the world of physics for exceptional brilliance — lightning-fast, sharp, creative and careful, in the same league as humanity’s finest thinkers — and for remarkable character — kind, thoughtful, even-keeled, rigorous, funny, quirky, dogged, supportive, generous. Like most of us, Professor Nelson had a gender, too, which was female. There are dozens of female theoretical physicists in the United States; they are a too-small minority, but they aren’t rare. By contrast, a physicist and person like Ann Nelson, of any gender? They are extremely few in number across the entire planet, and they certainly do stand out.

But with that off my chest, I have no other complaints. (Well, admittedly the physics in the obit is rather garbled, but we can get that straight another time.) Mainly I am grateful that the Times gave Ann fitting public recognition, something that she did not actively seek in life. Her death is an enormous loss for theoretical physics, for many theoretical physicists, and of course for many other people. I join all my colleagues in extending my condolences to her husband, our friend and colleague David B. Kaplan, and to the rest of her family.

by Matt Strassler at August 28, 2019 12:31 PM

August 26, 2019

Jon Butterworth - Life and Physics

Being English abroad, 2019
My weekend was mostly spent on the French side of border country, experiencing serial incidents of Englishness. On Saturday we went to a lake and swam. There was a French guy who seemed to be staring at me while I … Continue reading

by Jon Butterworth at August 26, 2019 08:05 PM

John Baez - Azimuth

Civilizational Collapse (Part 4)

This is part 4 of an intermittent yet always enjoyable series:

Part 1: the rise of the ancient Puebloan civilization in the American Southwest from 10,000 BC to 750 AD.

Part 2: the rise and collapse of the ancient Puebloan civilization from 750 AD to 1350 AD.

Part 3: a simplified model of civilizational collapse.

This time let’s look at the collapse of Greek science and the resulting loss of knowledge!

The Antikythera mechanism, found undersea in the Mediterranean, dates to somewhere between 200 and 60 BC. It’s a full-fledged analogue computer! It had at least 30 gears and could predict eclipses, even modelling changes in the Moon’s speed as it orbits the Earth.

What Greek knowledge was lost during the Roman takeover? We’ll never really know.

The Romans killed Archimedes and plundered Syracuse in 212 BC. Ptolemy the Fat ("Physcon") put an end to science in Alexandria in 154 BC with brutal persecutions.

Contrary to myth, the Library of Alexandria was not destroyed once and for all in a single huge fire. The sixth head librarian, Aristarchus of Samothrace, fled when Physcon took over. The library was indeed set on fire in the civil war of 48 BC. But it seems to have lasted until 260 AD, when it basically lost its funding.

When the Romans took over, they dumbed things down. In his marvelous book The Forgotten Revolution, quoted below, Lucio Russo explains the evil effects.

Another example: we have the first four books by Apollonius on conic sections—the more elementary ones—but the other three have been lost.

Archimedes figured out the volume and surface area of a sphere, and the area under a parabola, in a letter to Eratosthenes. He used modern ideas like ‘infinitesimals’! The letter was repeatedly copied and made its way into a 10th-century Byzantine parchment manuscript. But this parchment was written over by Christian monks in the 13th century, and only rediscovered in 1906.

There’s no way to tell how much has been permanently lost. So we’ll never know the full heights of Greek science and mathematics. If we hadn’t found one example of an analogue computer in a shipwreck in 1902, we wouldn’t have guessed they could make those!

And we shouldn’t count on our current knowledge lasting forever, either.

Here are some more things to read. Most of all I recommend this book:

• Lucio Russo, The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn, Springer, Berlin, 2013. (First chapter.)

Check out the review by Sandro Graffi (who taught me analysis when I was an undergrad at Princeton):

• Sandro Graffi, La Rivoluzione Dimenticata (The Forgotten Revolution), AMS Notices (May 1998), 601–605.

Only in 1998 did scholars get serious about recovering information from the Archimedes palimpsest using ultraviolet, infrared and other imaging techniques! You can now access it online:

The Archimedes Palimpsest Project.

Here’s a good book on the rediscovery and deciphering of the Archimedes palimpsest, and its mathematical meaning:

• Reviel Netz and William Noel, The Archimedes Codex: Revealing the Secrets of the World’s Greatest Palimpsest, Hachette, UK, 2011.

Here’s a video:

• William Noel, Revealing the lost codex of Archimedes, TED, May 29, 2012.

Here are 9 videos on recreating the Antikythera mechanism:

Machining the Antikythera mechanism, Clickspring.

The Wikipedia articles are good too:

• Wikipedia, Antikythera mechanism.

• Wikipedia, Archimedes palimpsest.

• Wikipedia, Library of Alexandria.

by John Baez at August 26, 2019 08:10 AM

August 22, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Types of Email Security Protocol

Communicating with your clients and business partners is among the crucial elements of business operations. Of course, you will not achieve much if your communication lines are broken. Thanks to the Internet, it is not as difficult to keep in touch with clients and advertise your products. One of the leading platforms for communication and advertising is email. Unfortunately, this platform is also one of the most vulnerable to modern cyberattacks.

Managed IT service companies in Phoenix and other cities now put a strong focus on email security protocols. These protocols are structures designed to protect your emails from third-party interference. SMTP (the simple mail transfer protocol) has no embedded security and is vulnerable to all manner of malware that hackers might send to your company in the form of attachments to seemingly genuine emails.

Here are your email security protocol alternatives:

TLS and SSL

Transport layer security (TLS) is the successor of the secure sockets layer (SSL), which was deprecated in 2015. These are application-layer protocols that standardize communication between endpoints. In email security, they provide a security framework that works in conjunction with SMTP to protect messages. TLS works by initiating a series of "handshakes" with your email server when you receive an email – steps the server takes to validate the email's encryption settings and verify its security before the message is transmitted. TLS therefore provides base-level email encryption for your network.
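
As a minimal sketch of what this looks like in practice, here is how a Python script can upgrade a plain SMTP session to TLS using the standard smtplib and ssl modules (the server name and credentials below are placeholders):

import smtplib
import ssl

context = ssl.create_default_context()  # verifies the server's certificate

# smtp.example.com, the account, and the password are placeholders
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls(context=context)  # upgrade the plain session to TLS
    server.login("user@example.com", "app-password")
    server.sendmail("user@example.com", "client@example.net",
                    "Subject: Hello\r\n\r\nThis message travels over TLS.")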

Digital Certificates

These are encryption tools used to secure your emails cryptographically. A certificate lets others send you email encrypted with a predefined encryption key, and lets you encrypt your outgoing mail. You, after all, would not want to be known as the company that sends malware to clients and partners. The public key of your digital certificate is available to anyone who wants to send you encrypted email, while you decrypt received messages using your private key.

The SPF (Sender Policy Framework)

This is an authentication protocol specifically designed to protect your network against domain spoofing. SPF introduces extra security checks into your email server to determine whether an incoming message really came from the domain it claims, or whether someone is masking their identity behind that domain. (A domain, in this case, is a section of the Internet under one name.) Hackers often hide their own domains to avoid being blacklisted or traced when disguising malicious mail as coming from a legitimate domain.

DKIM (Domain Keys Identified Mail)

DKIM denotes an anti-tampering procedure, which ensures that your sent emails remain unaltered on the way to the recipient. It employs digital signatures to verify that an email was submitted by a particular domain and checks that the domain authorized sending it. To this end, DKIM complements SPF. It also eases the process of developing domain whitelists and blacklists.
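
Both SPF policies and DKIM public keys are published as DNS TXT records, so you can inspect a domain's setup yourself. Here is a minimal sketch using the third-party dnspython package (version 2.0 or later for resolve(); the domain and the "default" selector below are placeholders):

import dns.resolver  # third-party package: dnspython

# An SPF policy lives in a TXT record on the domain itself,
# typically of the form "v=spf1 include:_spf.example.com -all"
for record in dns.resolver.resolve("example.com", "TXT"):
    text = record.to_text()
    if "v=spf1" in text:
        print("SPF policy:", text)

# A DKIM public key lives at <selector>._domainkey.<domain>;
# the selector is chosen by whoever signs the outgoing mail
for record in dns.resolver.resolve("default._domainkey.example.com", "TXT"):
    print("DKIM key record:", record.to_text())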

Hackers are ever more focused on email security vulnerabilities nowadays. They know that opening emails is a crucial, unavoidable undertaking in your business, since you cannot afford to ignore messages. The security protocols above go a long way toward ensuring that your emails will not open you up to cyberattacks.

The post Types of Email Security Protocol appeared first on None Equilibrium.

by Bertram Mortensen at August 22, 2019 06:11 AM

August 21, 2019

Jon Butterworth - Life and Physics

MMR and me. And propaganda.
Originally posted on Life and Physics:
I have a doctorate in physics. My wife has one in chemistry. We have an 11-year-old son, who should have got his MMR jab in 2003 A lot has been written about the MMR…

by Jon Butterworth at August 21, 2019 05:28 PM

August 20, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Choosing the Right Flow Meter for an Application

Flow meters are used to measure water, gasoline, chemicals, engine oils, dairy, industrial liquids, airflow, and transmission fluids. Getting the right one for a specific application is a must: choose the wrong one, and your budget is wasted.

These flow meters are essential for data collection, which is important for engineers. There is a process for choosing the right industrial flow meters for water and today, we will discuss what you should remember when choosing a flow meter for your application.

Don’t Just Go for the Popular or Cheap One

You should never get a flow meter just because it is cheap or popular. Many engineers decide based on these factors, and more often than not they end up regretting their choice, having to spend more money than they originally intended.

Chances are that if the flow meter is cheap, you will have to spend a lot of money on ancillary equipment and expensive maintenance. Investing in a high-quality flow meter instead can save a lot of cash, as you can use it for years to come. Also, some of the most popular flow meters may simply not suit your application, so do your research before actually buying one.

Consider New Flow Technologies

New flow technologies offer new solutions, which is why you should look at the newer options on the market. Older models, such as inline ultrasound flow meters, had to be re-calibrated whenever a new type of fluid was introduced to them. They also could not be used in applications where hygiene is important, which means you would have to buy another flow meter if that is one of your main concerns.

Flow meters are technical devices influenced by many variables. Every application is unique and needs a different type of flow meter to work properly.

Consider the Flow Measurement

Fluids are measured based on two units: volume and mass. Know which one you will be measuring so you can get the right type of flow meter – yes, different flow meters are used for volume and for mass. If you can convert between the two, a single flow meter may suffice, though the conversion requires knowing the fluid’s density at operating conditions.
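
As a minimal sketch of that conversion (the density value below is illustrative):

def volumetric_flow(mass_flow_kg_per_s, density_kg_per_m3):
    """Convert mass flow to volumetric flow: Q = m_dot / rho."""
    return mass_flow_kg_per_s / density_kg_per_m3

def mass_flow(volumetric_flow_m3_per_s, density_kg_per_m3):
    """Convert volumetric flow to mass flow: m_dot = rho * Q."""
    return volumetric_flow_m3_per_s * density_kg_per_m3

# Example: 2.0 kg/s of water at roughly 998 kg/m^3
print(volumetric_flow(2.0, 998.0))  # about 0.002 m^3/s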

Know What You’ll be Measuring

As mentioned, there are different categories of flow meter measurement. Before getting a flow meter, you should know what you’ll be measuring: is it gas, liquid, slurry, or vapor? Many flow meters cannot measure gas or slurry, which is why it is important to do your research so you know which flow meter can handle the medium you are working with.

There are tons of variables that you would need to consider when buying a flow meter, so you should carefully decide as to not waste any time and money.

 

The post Choosing the Right Flow Meter for an Application appeared first on None Equilibrium.

by Bertram Mortensen at August 20, 2019 01:00 AM

August 19, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

5 Marketing Mistakes Franchisors Should Watch Out For

After finalizing your franchise plans and models, packages, and opportunities, the next step is selling your franchise. But as with all marketing activities, selling your franchise can be quite costly and can take up a lot of resources, which is why you’d want to make sure that every marketing activity and resource you invest in has a good chance of attracting potential franchisees. It’s also important for franchisors to be aware of common marketing mistakes that not only cost money but may push away potential franchisees.

Relying Too Much on Hard Selling

It’s indeed a huge plus to have aggressive franchise brokers and franchise sales agents, but you shouldn’t rely only on them. Many potential franchisees don’t want to be “pushed” or “persuaded” to buy a franchise; they prefer to have all the information they need so that they can decide for themselves whether or not to buy. That said, you should provide this information on your website, and perhaps in the franchise opportunity brochures and portfolios (both digital and printed) that your franchise brokers and sales team can distribute to potential and/or interested franchisees.

Lacking Proof and Feedback

Even if your business is already well-known and has proven itself to be quite lucrative, you’d have to think like your potential franchisees. Both seasoned businessmen and those who wish to start a business know that buying a franchise is a huge decision requiring a lot of money, which is why they’d want to minimize risk as much as possible by having proof that the franchise they’re buying is a safe and lucrative investment. As such, you should be able to provide the information they need, such as ROI and forecasted sales, as well as testimonials and feedback from your successful franchisees.

Not Having A Dedicated Franchising Website

If you’re already opening your business to franchising, chances are you already have a business website with all the products and information about your business. However, a common mistake franchisors make is having their “franchise page” as just a sub-page of the business website. As mentioned earlier, many potential franchisees prefer to get all the information they need to decide whether or not to buy the franchise, instead of having it “sold” to them. As such, one of the best things you can do when selling your business franchise is to have an entirely separate website dedicated to franchising, containing all the information they need, and perhaps even an active online customer service representative they can chat with if they wish to know more about the franchise opportunities you’re selling.

Unclear or Poorly-Written/Made Reading Materials

Providing information to your potential franchisees is something that can’t be stressed enough. However, it’s vital for you to oversee and ensure that the instructional materials and franchise brochures/portfolios (including those on your website and social media) are written concisely and can be understood easily, to minimize any confusion; potential clients are more likely to hesitate if the information you’re showing isn’t clear and transparent, or is simply poorly worded.

Neglecting Social Media

While you may have invested in print advertisements, active and professional sales agents, and maybe even TV/radio ads, you should give as much focus to digital marketing, specifically social media. Many consumers and entrepreneurs search for products, services, and even business and investment opportunities through social media. That said, you should focus on social media marketing for business franchises; this includes content creation as well as managing your franchising business’ social media accounts. You should also have a social media account manager to handle inquiries in comments and private messages from interested or curious parties.

Conclusion

Opening your business for franchise opportunities is a good sign that it’s grown big enough and has made a name for itself. But in order to boost your chance of selling your franchise, you should definitely watch out for these common franchise marketing mistakes.

The post 5 Marketing Mistakes Franchisors Should Watch Out For appeared first on None Equilibrium.

by Bertram Mortensen at August 19, 2019 06:03 AM

August 18, 2019

ZapperZ - Physics and Physicists

Big Bang Disproved?!
Of course not! But this is still a fun video for you to watch, especially if you are not up to speed on (i) how we know that the universe is expanding and (ii) the current discrepancy in the measurement of the Hubble constant via two different methods.



But unlike politics or social interactions, discrepancies and disagreements in science are actually welcomed and are a fundamental aspect of scientific progress. They are how we refine and polish our knowledge into a more accurate form. As Don Lincoln says at the end of the video, scientists love discrepancies. It means that there are more things that we don't know, and more opportunities to learn and discover something new.
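
To put rough numbers on the Hubble-constant discrepancy the video discusses (a back-of-the-envelope sketch using representative 2019 values: an early-universe CMB fit of about 67.4 ± 0.5 km/s/Mpc versus a local distance-ladder value of about 74.0 ± 1.4 km/s/Mpc):

import math

h0_cmb, err_cmb = 67.4, 0.5      # early-universe fit, km/s/Mpc
h0_local, err_local = 74.0, 1.4  # local distance ladder, km/s/Mpc

# significance of the disagreement, assuming independent Gaussian errors
tension = (h0_local - h0_cmb) / math.sqrt(err_cmb**2 + err_local**2)
print(f"{tension:.1f} sigma")    # roughly 4.4 sigma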

Zz.

by ZapperZ (noreply@blogger.com) at August 18, 2019 04:13 PM

August 16, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Keep the Heat Up in Your Oven

Industrial equipment can be sturdy, but it is not unbreakable, and it will run into problems if it is not maintained. One of the more important pieces of equipment is your industrial oven. Considering how it blasts incredible amounts of heat into a small space, it is surprising that it doesn't break down more often. With the following tips, you can ensure that problems with it will be few and far between:

Lubricate the Blower

An essential part of the oven, the blower motor supplies the air the oven needs. Without oxygen there is no fire, so keeping it running is important, and lubrication is critical here. Some models don't need lubrication but do need regular cleaning. Lubricating the blower every six months ensures that it keeps doing its job; check the manual for the correct way to lubricate the motor. With regular cleaning and lubrication, you can prevent the sudden breakdowns that ruin an oven's performance and productivity.

Maintain the Airflow

If you want your blower motor to work well, nothing must restrict the airflow to the oven. Position the oven so that the blower motor's air inlets are clear; placing items around the inlets is a bad idea. Keep the space around the oven clear to allow maximum air intake.

Use the Right Voltage

Industrial ovens consume great amounts of electricity to function well. If they are not supplied with the right power, they will not perform well. For example, if you hook up a 240 VAC machine to a weaker power supply like 200 VAC, you can expect lower performance. Always plug your machines into the proper power supply for the best results.
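
To see roughly why undervoltage hurts, consider a simple resistive heating element, whose power scales as V²/R (a back-of-the-envelope sketch under that assumption):

def heater_power_fraction(supply_voltage, rated_voltage):
    """Fraction of rated power from a resistive element: (V / V_rated)^2."""
    return (supply_voltage / rated_voltage) ** 2

# A 240 VAC oven on a 200 VAC supply delivers only about 69% of rated power
print(heater_power_fraction(200.0, 240.0))  # about 0.694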

Check the Inside

There are several components that you will need to check every few months. Among these are the heating elements. They heat the contents of the oven, and when they malfunction, you will not reach the high temperatures that you need. If they do break, replace them, and do the same for other components like the thermal sensor and wiring.

Know When to Replace

Even the sturdiest items break. If your oven is old, then you will need to replace it; any model over ten years old will lack the innovations of newer models. This is when you should start shopping around. Suppliers like SupaGEEK Designs can offer your company a great new oven that will meet your requirements. Weigh the cost of keeping your old oven against the benefits of buying a new one.

A failing industrial oven can cause delays down the line, which is bad news for your company's productivity and bottom line. This is why the tips above are so important. With their help, you can keep breakdowns to a minimum and have your oven perform well every day. The results are better productivity and a bump in profits.

The post Keep the Heat Up in Your Oven appeared first on None Equilibrium.

by Bertram Mortensen at August 16, 2019 02:36 AM

August 15, 2019

Jon Butterworth - Life and Physics

Nature Careers: Working Scientist podcast
I talked to Julie Gould for Nature recently, about the challenges of working on big collaborations, of doing physics in the media spotlight, on why LHC had more impact with the public than LEP, and more. (I also occasionally manage … Continue reading

by Jon Butterworth at August 15, 2019 06:19 PM

August 14, 2019

ZapperZ - Physics and Physicists

Relativistic Length Contraction Is Not So Simple To See
OK, I actually had fun reading this article, mainly because it opened up a topic that I was only barely aware of. This Physics World article describes the simple issue of length contraction, but then delves into why OBSERVING this effect, such as with our own eyes, is not so simple.

If the Starship Enterprise dipped into the Earth’s atmosphere at a sub-warp speed, would we see it? And if the craft were visible, would it look like the object we’re familiar with from TV, with its saucer section and two nacelles? Well, if the Enterprise were travelling fast enough, then – bright physicists that we are – we’d expect the craft to experience the length contraction dictated by special relativity.

According to this famous principle, a body moving relative to an observer will appear slightly shorter in the direction the body’s travelling in. Specifically, its observed length will have been reduced by the factor (1 – v^2/c^2)^{1/2}, where v is the relative velocity of the moving object and c is the speed of light in a vacuum. However, the Enterprise won’t be seen as shorter despite zipping along so fast. In fact, it will appear to be the same length, but rotated.

You might not have heard of this phenomenon before, but it’s often called the “Terrell effect” or “Terrell rotation”. It’s named after James Terrell – a physicist at the Los Alamos National Laboratory in the US, who first came up with the idea in 1957. The apparent rotation of an object moving near the speed of light is, in essence, a consequence of the time it takes light rays to travel from various points on the moving body to an observer’s eyes.
You can read the rest of the explanation and graphics in the article. Again, this is not to say that the "pole-in-barn" exercise that you did in relativity lessons is not valid. It is just that in that case, you were not asked what you actually SEE with your eyes when that pole is passing through the barn, and that your pole is long and thin, as opposed to an object with a substantial size and width. The notion that such an object will be seen with our eyes as flat as a pancake arguably may not be true here.
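
To get a feel for the numbers, here is a minimal Python sketch computing the contraction factor alongside the apparent Terrell rotation; for a small, distant object at closest approach, the standard result is that the apparent rotation angle is arcsin(v/c):

import math

def contraction_factor(beta):
    """Length-contraction factor sqrt(1 - v^2/c^2), with beta = v/c."""
    return math.sqrt(1.0 - beta**2)

def terrell_angle_deg(beta):
    """Apparent Terrell rotation (degrees) of a small, distant object
    at closest approach: sin(theta) = v/c."""
    return math.degrees(math.asin(beta))

for beta in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {beta:.2f}c: contracted to {contraction_factor(beta):.3f} "
          f"of rest length, but seen rotated by {terrell_angle_deg(beta):.1f} deg")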

Zz.

by ZapperZ (noreply@blogger.com) at August 14, 2019 04:17 PM

Lubos Motl - string vacua and pheno

Coincidences, naturalness, and Epstein's death
The circumstances of Jeffrey Epstein's death seem to be a drastic but wonderful metaphor for naturalness in physics: those who say "there is nothing to see here" in the irregularities plaguing Epstein's jail seem to be similar to those who say "there is nothing to see here" when it comes to fine-tuning or unlikely choices of parameters in particle physics.

As far as I can say, a rational person who thinks about these Epstein events proceeds as follows:
  • an invention of rough hypotheses or classes of hypotheses
  • usage of known or almost known facts to adjust the probabilities of each hypothesis
It's called logical or Bayesian inference! That's a pretty much rigorous approach justified by basic probability calculus – which is just a continuous generalization of mathematical logic. The opponents of this method seem to prefer a different Al Gore rhythm:
  • choose the winning explanation at the very beginning, according to some very simple e.g. ideological criteria or according to your own interests; typically, the winning explanation is the most politically correct one
  • rationalize the choice by saying that all other possible explanations are hoaxes, conspiracy theories, "not even wrong" theories that are simultaneously unfalsifiable and already falsified, and by screaming at, accusing, and insulting those who argue that their other choices seem more likely – often those who do some really fine research
Which of the approaches is more promising as a path towards the truth? Which is the more honest one? These are rhetorical questions – of course Bayesian inference is the promising and ethical approach while the other one is a sign of stupidity or dishonesty. I am just listing the "second approach" to emphasize that some people are just dumb or dishonest – while they or others often fail to appreciate this stupidity or dishonesty.



OK, the basic possible explanations of the reported death seem to be the following:
  1. Epstein committed suicide and all the "awkward coincidences" are really just coincidences that don't mean anything
  2. Epstein committed suicide and someone helped to enable this act, perhaps because of compassion
  3. Epstein was killed by somebody and it's accidentally hard to determine who was the killer because the cameras etc. failed to do their job
  4. Epstein was killed by somebody who took care of details and most of these coincidences are issues that the killer had to take care of
  5. Epstein is alive – he was probably transferred somewhere and will be allowed a plastic surgery and new identity
I have ordered the stories in a certain way – perhaps from the most "politically correct" to the most "conspiracy theory-like" explanations. I had to order them in some way. Also, some completely different explanation could be missing from my list – but at some level, it should be possible to group the explanations into boxes according to Yes/No answers to well-defined questions, which means that there is a semi-reliable way to make sure that you won't miss any option.
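
As a minimal sketch of the Bayesian step applied to the five hypotheses above – with entirely made-up priors and likelihoods, purely to show the mechanics of the update rather than any actual assessment:

# The five hypotheses above, with made-up prior probabilities
# (illustration only, not an actual assessment of the case)
priors = {
    "1. suicide, pure coincidences": 0.40,
    "2. assisted suicide": 0.20,
    "3. killed, killer accidentally hidden": 0.15,
    "4. killed, killer covered the details": 0.15,
    "5. alive, transferred": 0.10,
}

# P(the observed irregularities | hypothesis), equally made up
likelihoods = {
    "1. suicide, pure coincidences": 0.02,
    "2. assisted suicide": 0.10,
    "3. killed, killer accidentally hidden": 0.25,
    "4. killed, killer covered the details": 0.60,
    "5. alive, transferred": 0.30,
}

# Bayes' rule: posterior is proportional to prior times likelihood
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2f}  {h}")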



OK, I think that there are lots of politically correct, basically brainwashed and brain-dead, people who imagine a similar list, order it similarly, and pick the first choice – the most politically correct choice – because it's what makes them feel good, obedient, and it's right according to them. They may have been trained to think that it's ethical if not morally superior to believe the first explanation according to a similar ordering.

And then there is the rest of us, the rational people who realize that the most politically correct explanation is often false and one should treat the explanations fairly and impartially, regardless of whether they sound politically correct or convenient for certain people etc.

In the absence of special data and circumstances, the rational people among us also favor the "least conspirational" explanation – well, the most likely one. However, it isn't necessarily the "most politically correct" one in general. Also, the fact that we try to favor the "most likely" explanation is a tautology – it's the task we are solving from the beginning.

But in this case, and many others, there are lots of special facts that seem to matter and affect the probabilities. In this case, and quite generally, they just make the "conspiracy-like explanations" more likely. (A much more detailed analysis should be written to clarify which hypotheses are strengthened by which special circumstances.) In this Epstein story, they are e.g. the following:
  1. Epstein was on suicide watch just three weeks ago but he was taken from the suicide watch days before he was found dead
  2. Epstein has previously claimed that someone tried to kill him in jail
  3. the cameras that could watch him were looking the other way for a very long time – a fact that may clearly be counted as a case of malfunctioning camera (and Polymath is just batšit crazy when he claims that a camera looking the other way, away from Epstein, for hours (?) is not malfunctioning)
  4. Epstein's cellmate was transferred hours before Epstein's death (a possible witness)
  5. the cellmate was taken out from a cell that has a bunk bed (double decker) which is probably needed for a suicide claim (but the very presence of a bunk bed increases the probability of the suicide option 1, too)
  6. he should have been checked every 30 minutes but around the death, the protocol was violated for hours
  7. one of the two relevant guards wasn't a corrections officer but a more unrelated employee
  8. he was claimed to have hanged himself using bed sheets, but the sheets should have been made of paper, the bed frame was unmovable, and the room was 8-9+ feet high
  9. a new huge batch of documents about the ring was released by court a day before his death
  10. the number of people who had the motive to kill Epstein was huge – and their combined power is even greater because they were usually rich and high-profile people (note that I don't make any claim about whether the potential killer was left-wing or right-wing – people in both camps speculate but the left-wing killers seem more likely because they were more connected with Epstein and apparently more sinful)
And I am pretty much certain that this list is incomplete, even when it comes to coincidences that have really shocked me. I tried to add some hyperlinks (sources) to the list above but there's no objective way to determine what is the "best" source. Most of these things simply look credible. Some of them really look implicitly "proven". If there were a good camera recording of his suicide, we would have probably learned about it, right?

So I think it's just OK to list similar coincidences even without other "sources". In my case, they are a result of my research and careful curation of sources. I am proud of offering occasional investigative stories that are both more accurate and earlier than elsewhere. So if someone suggests that I should be just a follower who copies some MSNBC articles, I feel incredibly insulted because TRF is obviously better, more accurate, and more groundbreaking than MSNBC. If you really disagree with such a claim, then it would be sensible for you to avoid my website altogether, wouldn't it?

At any rate, there is a very large number of "coincidences" that are generally increasing the probability of the more "conspiracy-like" explanations. Everyone who doesn't acknowledge this fact is a brainwashed or brain-dead irrational moron, a stupid sheep that might be used for wool but not for thinking. The event may still turn out to be a suicide and the coincidences may be just coincidences. But even if that is the case, it will still be true that the people who accept this conclusion immediately are either stupid or dishonest – or perhaps even involved in the plan.

A broken clock is correct twice a day. A wrong reasoning may sometimes end up with a conclusion that happens to be right, too. But even when it is so, we can still analyze how the reasoning was made and if it is demonstrably fallacious or stupid, it can be demonstrated that it is fallacious or stupid – and that the person reasoning in this way is analogous to the broken clock.

Now the analogy. You have the people who won't ever acknowledge any arguments involving fine-tuning or naturalness or the preference for theories that just look more solid, less contrived etc. Like in the Epstein case, these people find their winning explanation in advance, i.e. by ignoring all the relevant detailed evidence that may be collected later. And then they just rationalize this explanation and spit on the alternatives and everyone who "dares" to defend them.

So these people may decide that the best theory is a "quantum field theory with the smallest number of component fields" – their form of Occam's razor. Supergravity or string theory "add fields", according to their counting, so they are less compatible with this version of Occam's razor, and therefore they eliminate these theories even though they don't have any negative evidence.

But competent physicists don't think like that. The claim that a "field theory with the smallest number of fields is most likely" is just a hypothesis and there is an extremely strong body of evidence – both anecdotal empirical evidence and theoretical evidence in the form of incomplete but nearly mathematical proofs – that this assumption is incorrect. Competent physicists really know that the relevant realization of Occam's razor is different and when some multiplets (or supermultiplets) of fields are guaranteed to exist by a symmetry principle or another qualitative principle, they cannot be counted as a disadvantage of the theory that makes them unlikely, despite the fact that the number of component fields may grow very high.

So once again, competent physicists are actually doing something that is analogous to the rational people who care about the peculiarities involving Epstein's guards, documents, camera, and cell maintenance. They just work with the evidence in a nontrivial way – with lots of evidence. The rational usage changes the odds of various theories and even classes of theories. In particular, people have learned that theories with greater numbers of component fields implied by powerful enough symmetry principles (or similar principles) seem like the more natural, default, apparently more likely hypothesis than the naive theory with the smallest number of component fields.

Both in the case of particle physics and Epstein's death, there simply exist two groups of people. One of them prefers an impartial treatment of the hypotheses and relentless, rigorous work with the detailed evidence and its ramifications; the other prefers naively simple explanations picked by some stupid – and, in generality, clearly incorrect – criteria, followed by repetitive rationalization and frantic but content-free attacks against everyone who disagrees with them.

by Luboš Motl (noreply@blogger.com) at August 14, 2019 03:55 AM

August 13, 2019

Marco Frasca - The Gauge Connection

Where we are now?

Summer conferences have passed by, we have more precise data on the Higgs particle, and some new results were announced. So far, this particle appears more and more in agreement with the Standard Model expectations, with no surprise in view. Several measurements were performed with the full dataset at 140 {\rm fb}^{-1}. Most commentators avoid talking about this because it does not warrant click-bait anymore. At EPS-HEP 2019 in Ghent (Belgium), the following slide was presented by Hulin Wang on behalf of the ATLAS Collaboration

[Slide: ZZ decay and higher resonances]

There appears to be an excess at 250 GeV and another at 700 GeV but we are talking of about 2 sigma, nothing relevant. Besides, ATLAS keeps on seeing an excess in the vector boson fusion for ZZ decay, again about 2 sigma, but CMS sees nothing, rather they are somewhat on the missing side!

No evidence of supersymmetry whatsoever; neither the extended Higgs multiplet nor a charged Higgs, which could hint at supersymmetry, is seen. I would like to recall that some researchers were able to obtain the minimal supersymmetric standard model from string theory, so this is a decisive aspect of the experimental search. Is the Higgs particle just the first one of an extended sector of electroweak (soft) supersymmetry breaking?

So, why could the slide I just posted be so important? The interesting fact is the factor 2 between the mass of this presumed new resonance and that of the Higgs particle. The Higgs sector of the Standard Model can be removed from it and treated independently. Then, one can solve it exactly, and the spectrum is given by integer multiples of the mass of the Higgs particle. This is exactly the spectrum of a Kaluza-Klein particle, and it would represent an indirect proof of the existence of another dimension in space. So, if confirmed, we would move from a desolating scenario with no new (beyond standard model) physics in view to a completely overturned situation! We could send all the critics back to sleep, wishing them better luck for their next attempt.
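
In formulas, the claim about the spectrum reads (simply transcribing the statement above, with m_H the Higgs mass):

m_n = n\, m_H \approx n \times 125\ {\rm GeV}, \qquad n = 1, 2, 3, \ldots

so the first state above the Higgs itself would sit at about 250 GeV, right where the mild excess appears.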

Back to reality: the slide yields the result for the dataset of 36.1 {\rm fb}^{-1}, and no confirmation from CMS has ever arrived. We can just hope that the dream scenario comes to life.

by mfrasca at August 13, 2019 05:59 PM

August 11, 2019

Lubos Motl - string vacua and pheno

Four Tommaso Dorigo's SUGRA blunders
Almost all the media reported on the new Special Breakthrough Prize in Fundamental Physics (which will be given to the laureates during a TV broadcast event on November 3rd in NASA's Hangar One, Mountain View, CA) – a prize to three founders of supergravity – as if it were any other prize.

The winners are lucky to divide the $3 million and/or they deserve the award which was chosen by a nontrivial process, like in the case of the Nobel Prize or any other prize. Thankfully, in this case, most journalists didn't try to pretend that they know more about supergravity than the committee. The judgements or information about the importance of work in theoretical physics should be left to the experts because these are damn hard things that an average person – and even an average PhD – simply hasn't mastered.

I detected three amazing exceptions. Nature, Prospect Magazine, and Physics World wrote something completely different. The relevant pages of these media have been hijacked by vitriolic, one-dimensional, repetitive, scientifically clueless, deceitful, and self-serving anti-science activists and they tried to sling as much mud on theoretical physics as possible – which seems to be the primary job description of many of these writers and the society seems to enthusiastically fund this harmful parasitism.



It could be surprising, especially in the case of Nature and Physics World, because under normal circumstances, you would expect Nature and Physics World to be more expert-oriented and closer to the "scientific establishment". But the evolution of the media has produced the opposite outcome. The media that should be close to the scientific establishment are actually almost completely controlled by the self-anointed Messiahs – another branch of all those SJWs who want to destroy the civilized world as we have known it for centuries.

It's ironic but if you look at the reasons, it's logical. It has analogous reasons as the fact that the "inner cities" typically become the ghettos or homes to poor demographic groups – while the productive parts of the society typically have to move to more generic and less "central" suburbs. Similarly, the richest Western European countries are those that seem to be more likely to lose their civilized status very soon. What is the reason? Well, the most special and prosperous places – the inner cities or the rich Western countries – are those that also maximally attract the people who are destined to ruin them.

That's why the "most pro-science journals", inner cities, and wealthiest Western countries putrefy well before others.



Sadly, experimental particle physicist and blogger Tommaso Dorigo has partly joined these anti-civilization warriors. He wrote
My Take On The Breakthrough Prizes
where he repeats several deep misconceptions of the scientifically illiterate public. First, he recommended to the three winners a particular way to spend the money. But Tommaso is no longer capable of even doing jokes properly, so let me fix his failed attempt. He advised
  • Ferrara to buy a new Ferrari
  • van Nieuwenhuizen to buy a newer housing in Malibu
  • and a new van for Dan Freedman for his bikes so that he may become a truly freed man
OK, Dorigo failed in humor as well – now the more serious things. Dorigo says that it's good news that a rich guy named Milner has randomly decided to pay money for a failed theoretical idea named supergravity. Such a statement is wrong at every discernible level.

First, Dorigo completely misunderstood who picks the winners.

Future winners of the Breakthrough Prize in Fundamental Physics must first be nominated. I know everything about the process of nomination – because I am a nominator. But more importantly, Dorigo failed to read even the most elementary press release. If he had read it, he would know that
A Special Breakthrough Prize in Fundamental Physics can be awarded by the Selection Committee at any time, and in addition to the regular Breakthrough Prize awarded through the ordinary annual nomination process. Unlike the annual Breakthrough Prize in Fundamental Physics, the Special Prize is not limited to recent discoveries.
The quote above says that it is the Selection Committee that decides to grant this special prize – and it can do so at any moment. Is the committee composed of Milner? Or Milner and Zuckerberg? Not at all. Just do a simple Google search and you will find the composition of the Selection Committee. You will find out that the committee consists of the winners of the full-sized Breakthrough Prize in Fundamental Physics – the page contains names of 28 men alphabetically sorted from Arkani-Hamed to Witten (the list of men is surely open to hypothetical women as well). There is no Milner or Zuckerberg on the committee.

(After the SUGRA update, the list will include 4 former co-authors of mine. So I should also win the prize by default, without the needless bureaucracy.)

So you can see, the collection of the winners so far does exactly the same thing during their meetings as members of the Arista that Feynman was once admitted to: to choose who else is worthy to join the wonderful club of ours! ;-) Feynman didn't like it – because he didn't like any honors or the related pride about the status – but if you look at it rationally, you will agree that it's the "least bad" way of choosing new winners.

I find it puzzling that despite Dorigo's (and similar people's) obsession with the money, awards, and all the sociological garbage, he was incapable of figuring out whether the new winners are picked by Milner or by top physicists. It's the latter, Tommaso. You got another failing grade.

The main failing grade is given for the ludicrous comments about the "failed supergravity", however.

Well, to be sure that his dumb readers won't miss it, he wrote that supergravity was a "failed theory" not once but thrice:
I'll admit, I wanted to rather title this post "Billionaire Awards Prizes To Failed Theories", just for the sake of being flippant. [...]

It is a sad story that SUGRA never got a confirmation by experiment to this day, so that it remains a brilliant, failed idea. [...]

(SUGRA is, to this day, only a beautiful, failed theory)
Sorry, Tommaso, but just like numerous generic crackpots who tightly fill assorted cesspools on the Internet, you completely misunderstand how the scientific method works. A theory cannot become "failed" for its not having received an experimental proof yet.

On the contrary, the decisions about the validity of scientific theories are all about the falsification. For a scientific theory or hypothesis to become failed, one has to falsify it – i.e. prove that it is wrong. The absence of a proof in one way or another isn't enough to settle the status of a theory.

Instead, a theory or hypothesis must be in principle falsifiable – which SUGRA is – and once it's discovered, defined, or formulated, it becomes provisionally viable or temporarily valid up to the moment when it's falsified. And that's exactly the current status of SUGRA: it is provisionally viable or temporarily valid.

A physicist must decide whether the Einsteinian general relativity with or without the local supersymmetry – GR or SUGRA – seems like the more likely long-distance limit of the effective field theories describing Nature (in both cases, GR or SUGRA must be coupled to extra matter). But the actual experts who study these matters simply find SUGRA to be more likely for advanced reasons (realistic string vacua seem to need SUSY, naturalness, and others) – so SUGRA is the default expectation that will be considered provisionally valid up to the moment when it's ruled out.

In a typical case of falsification, an old theory is ruled out simultaneously with some positive evidence supporting an alternative, usually newer, theory.

But even if you adopted some perspective or counting in which SUGRA is not the default expectation about the relevant gravitational local symmetries in Nature, supergravity is still found in 176,000 papers according to the Google Scholar. It's clearly a theory that has greatly influenced physics according to the physicists. Of course the sane science prizes should exhibit some positive correlation with the expert literature. A layman may claim to know more than the theoretical physicists but it's unwise.

Everyone who writes that SUGRA is a "failed idea" is just a scientifically illiterate populist writer who clearly has nothing to do with good science of the 21st century – and whose behavior is partly driven by the certainty that he or she could never be considered as a possible winner of an award that isn't completely rigged. Sadly, Tommaso Dorigo belongs to this set. He may misunderstand why good physicists consider SUGRA to be the "default expectation" – that would be just ignorance, an innocent fact that merely means Dorigo has no chance to make it onto the list from Arkani-Hamed to Witten.

However, he is a pompous fool because he also brags about this ignorance. He boasts how wonderfully perfumed the cesspool where he belongs is.

Egalitarianism

If you're not following the failing grades, Dorigo has gotten three of them so far: for the inability to convey good jokes if he tries; for the misunderstanding of the decisions that pick the new winners; and for the misunderstanding what you need to make a theory "failed" in science. He deserves the fourth failing grade for the comments at the end of his text. He tried to emulate my "memos" but his actual memo – in an article about supergravity! – is that the inequality in the world is the principal cancer that must be cured.

Holy cow. First of all, such totally ideological comments are out of place in an article pretending to be about supergravity – but if he deserved a passing grade, he would have written that the real cancer is egalitarianism, Marxism, and especially its currently active mutation, neo-Marxism. This is the disease of mankind that all decent people are trying to cure right now!

by Luboš Motl (noreply@blogger.com) at August 11, 2019 12:34 PM

Jon Butterworth - Life and Physics

Space Shed at Latitude
I did an interview with Jon Spooner, Director of Human Space Flight at the Unlimited Space Agency at Latitude 2018. It is now available as a podcast, which you can listen to here (Series 1, Episode 3). It is intended to … Continue reading

by Jon Butterworth at August 11, 2019 07:25 AM

August 08, 2019

ZapperZ - Physics and Physicists

RIP J. Robert Schrieffer
I'm sad to hear of the passing of a giant in our field, and certainly in the field of Condensed Matter Physics. Nobel Laureate J. Robert Schrieffer has passed away at the age of 88. He is the "S" in the BCS theory of superconductivity, one of the most monumental theories of the last century, and one of the most cited. So "complete" was the theory that, by early 1986, many people thought that the field of superconductivity had been fully "solved" and that nothing new could come out of it. Of course, that changed completely after that.

Unfortunately, I wasn't aware of his predicament during the last years of Schrieffer's life. I certainly was not aware that he was incarcerated for a while.

Late in life, Dr. Schrieffer’s love of fast cars ended in tragedy. In September 2004, he was driving from San Francisco to Santa Barbara, Calif., when his car, traveling at more than 100 miles per hour, slammed into a van, killing a man and injuring seven other people.

Dr. Schrieffer, whose Florida driver’s license was suspended, pleaded no contest to felony vehicular manslaughter and apologized to the victims and their families. He was sentenced to two years in prison and released after serving one year.

Florida State placed Dr. Schrieffer on leave after the incident, and he retired in 2006.

I've met him only once, while I was a graduate student, and he was already at Florida State/NHMFL at that time. His book and Michael Tinkham's were the two that I used when I decided to go into superconductivity.

Leon Cooper is the only surviving member of the BCS trio.

Zz.

by ZapperZ (noreply@blogger.com) at August 08, 2019 07:59 PM

August 07, 2019

Axel Maas - Looking Inside the Standard Model

Making connections
Over time, it has happened that a solution in one area of physics could also be used in a quite different area, or at least inspired the solution there. Unfortunately, this does not always work. Quite often, when reaching the finer points, something promising turned out in the end not to work. Thus, it pays off to always be careful with such a transfer, and never to believe a hype. Still, in some cases it worked, and even led to brilliant triumphs. And so it is always worthwhile to try.

Such an attempt is precisely the content of my latest paper. In it, I try to transfer ideas from my research on electroweak physics and the Brout-Englert-Higgs effect to quantum gravity. Quantum gravity is first and foremost still an unsolved issue. We know that mathematical consistency demands some unification of quantum physics and gravity, and we expect that this will take the form of a quantum theory of gravity, though we are still lacking any experimental evidence for this assumption. Still, I make the assumption for now that quantum gravity exists.

Based on this assumption, I take a candidate for such a quantum gravity theory and pose the question of what its observable consequences are. This is a question which has driven me for a long time in particle physics. I think that by now I have an understanding of how it works. But last year, I was challenged on whether these ideas can still be right if there is gravity in the game. And this new paper is essentially my first step towards an answer: https://arxiv.org/abs/1908.02140. Much of this answer is still rough, and especially mathematically it will require much work. But at least it provides a first consistent picture. And, as advertised above, it draws from a different field.

The starting point is that the simplest version of quantum gravity currently considered is actually not that different from other theories in particle physics. It is a so-called gauge theory. As such, many of its fundamental objects, like the structure of space and time, are not really observable – just like most of the elementary particles of the standard model, which is also a gauge theory, are not. Thus, we cannot see them directly in an experiment. In the standard model case, it was possible to construct observable particles by combining the elementary ones. In a sense, the particles we observe are bound states of the elementary particles. However, in electroweak physics one of the elementary particles in the bound state totally dominates the rest, and so the whole object looks very similar to the elementary one, but not quite.

This works because the Brout-Englert-Higgs effect makes it possible. The reason is that there is a dominating kind of unobservable structure, the so-called Higgs condensate, which creates this effect. This is something coincidental: if the parameters of the standard model were different, it would not work. But, luckily, our standard model has just the right parameter values.

Now, when looking at gravity around us, there is a very similar feature. While we have the powerful theory of general relativity, which describes how matter warps space, we rarely see this. Most of our universe behaves much more simply, because there is so little matter in it, and because the parameters of gravity are such that this warping is very, very small. Thus, we again have a dominating structure: a vacuum which is almost not warped.

Using this analogy and the properties of gauge theories, I figured out the following: We can use something like the Brout-Englert-Higgs effect in quantum gravity. And all observable particles must still be some kind of bound states. But they may now also include gravitons, the elementary particles of quantum gravity. But just like in the standard model, these bound states are dominated by just one of its components. And if there is a standard model component it is this one. Hence, the particles we see at LHC will essentially look like there is no gravity. And this is very consistent with experiment. Detecting the deviations will be so hard in comparison to those which come from the standard model, we can pretty much forget about it for earthbound experiments. At least for the next couple of decades.

However, there are now also some combinations of gravitons without standard model particles involved. Such objects have long been speculated about, and are called geons, or gravity balls. In contrast to the standard model case, they are not stable classically, but they may be stabilized by quantum effects. The bound state structure strongly suggests that there is at least one stable one. Still, this is pure speculation at the moment. But if they are stable, these objects could have dramatic consequences. E.g., they could be part of the dark matter we are searching for. Or they could make up black holes, very much like neutrons make a neutron star. I have no idea whether any of these speculations could be true. But if there is only a tiny amount of truth in it, this could be spectacular.

Thus, some master students and I will set out to have a look at these ideas. To this end, we will need to do some hard calculations. And, eventually, the results should be tested against observation. These observations will be coming from the universe, and from astronomy, especially from the astronomy of black holes, where there have recently been many interesting and exciting developments, like the observation of two black holes merging, or the first direct image of a black hole (obviously just black inside a kind of halo). These are exciting times, and I am looking forward to seeing whether any of these ideas work out. Stay tuned!

by Axel Maas (noreply@blogger.com) at August 07, 2019 08:37 AM

August 06, 2019

ZapperZ - Physics and Physicists

Light Drags Electrons Backward?
As someone who was trained in condensed matter physics, and who also worked in photoemission, light detectors, and photoelectron sources, I tend to follow research on the interaction of light with solids, and especially with metallic surfaces, rather closely.

I've been reading this article for the past few days, and it gets more fascinating each time. It is a report on a very puzzling photon drag effect in metals, or in this case on gold, which is the definitive Drude metal if ever there was one. What is puzzling is not the photon drag on the conduction electrons itself. What is puzzling is that the direction of the photon drag appears to be completely reversed between the effect seen in vacuum and in ambient air.

A review of the paper can be found here. If you don't have access to PRL, the arXiv version of the paper can be found here. So it appears that, when the measurement is done in vacuum, light pushes the conduction electrons backward, while when it is done in air, it pushes them forward, as expected.

As they varied the angle, the team measured a voltage that largely agreed with theoretical expectations based on the simple light-pushing-electrons picture. However, the voltage they measured was the opposite of that expected, implying that the current flow was in the wrong direction. “It’s a weird effect,” says Strait. “It’s as if the electrons are somehow managing to flow backward when hit by the light.”
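
For a sense of scale of that simple light-pushing-electrons picture (my own illustrative numbers: an assumed near-infrared wavelength and an assumed absorbed power, neither taken from the paper): each absorbed photon carries momentum h/lambda, so a fully absorbed beam of power P pushes on the electron gas with total force P/c.

    # Back-of-the-envelope scale of the naive photon-drag picture (illustrative).
    h = 6.626e-34   # Planck constant (J s)
    c = 2.998e8     # speed of light (m/s)

    lam = 800e-9    # assumed wavelength (m), for illustration only
    print(f"momentum per photon: {h / lam:.2e} kg m/s")     # ~ 8e-28

    P = 1.0         # assumed absorbed beam power (W)
    print(f"total force per watt absorbed: {P / c:.2e} N")  # ~ 3.3 nN

The kick is minuscule, which is why such experiments read out a small drag voltage; the puzzle here is not the size of that voltage but its sign.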
Certainly, surface effects may be at play here. Those of us who have done photoemission spectroscopy can tell you all about surface reconstruction, even in vacuum, where a freshly-cleaved surface literally changes its characteristics right in front of your eyes as you continually perform a measurement on it. So I am not surprised by the differences detected between the vacuum and in-air measurements.

But what is very puzzling is the dramatic difference here, and why light appears to push the conduction electrons one way in air, and in the opposite direction in vacuum. I fully expect more experiments on this, and certainly more theoretical models to explain this puzzling observation.

This is just one more example of how, as we push our knowledge to its edge, we start finding new mysteries to solve or to explain. The interaction of light with matter is one of the most common and best-understood phenomena; the interaction of light with metals is the basis of the photoelectric effect. Yet, as we push the boundaries of our knowledge and start to look at very minute details, driven by applications in, say, photonics, we also start to see new things that we did not expect.

It is why I always laugh whenever someone thinks that there is an "end of physics". Even for things that we think we know, or that are very common, if we start to make better and more sensitive measurements, I don't doubt that we will start finding something we have not anticipated.

Zz.

by ZapperZ (noreply@blogger.com) at August 06, 2019 02:15 PM

Matt Strassler - Of Particular Significance

A Catastrophic Weekend for Theoretical High Energy Physics

It is beyond belief that not only am I again writing a post about the premature death of a colleague whom I have known for decades, but that I am doing it about two of them.

Over the past weekend, two of the world’s most influential and brilliant theoretical high-energy physicists — Steve Gubser of Princeton University and Ann Nelson of the University of Washington — fell to their deaths in separate mountain accidents, one in the Alps and one in the Cascades.

Theoretical high energy physics is a small community, and within the United States itself the community is tiny.  Ann and Steve were both justifiably famous and highly respected as exceptionally bright lights in their areas of research. Even for those who had not met them personally, this is a stunning and irreplaceable loss of talent and of knowledge.

But most of us did know them personally.  For me, and for others with a personal connection to them, the news is devastating and tragic. I encountered Steve when he was a student and I was a postdoc in the Princeton area, and later helped bring him into a social group where he met his future wife (a great scientist in her own right, and a friend of mine going back decades).  As for Ann, she was one of my teachers at Stanford in graduate school, then my senior colleague on four long scientific papers, and then my colleague (along with her husband David B. Kaplan) for five years at the University of Washington, where she had the office next to mine. I cannot express what a privilege it always was to work with her, learn from her, and laugh with her.

I don’t have the heart or energy right now to write more about this, but I will try to do so at a later time. Right now I join their spouses and families, and my colleagues, in mourning.

by Matt Strassler at August 06, 2019 12:35 PM

July 29, 2019

Clifford V. Johnson - Asymptotia

News from the Front XIX: A-Masing de Sitter

[caption id="attachment_19335" align="alignright" width="215"] Diamond maser. Image from Jonathan Breeze, Imperial College[/caption]This is part 2 of a chat about some recent thoughts and results I had about de Sitter black holes, reported in this arxiv preprint. Part 1 is here, so maybe best to read that first.

Now let us turn to de Sitter black holes. I mean here any black hole for which the asymptotic spacetime is de Sitter spacetime, which is to say it has positive cosmological constant. This is of course also interesting since one of the most natural (to some minds) possible explanations for the accelerating expansion of our universe is a cosmological constant, so maybe all black holes in our universe are de Sitter black holes in some sense. This is also interesting because you often read here about explorations of physics involving negative cosmological constant, so this is a big change!

One of the things people find puzzling about applying standard black hole thermodynamics here is that there are two places where the standard techniques tell you there should be a temperature: the black hole horizon itself, and the cosmological horizon. These each have a temperature, and they are not necessarily the same. For the Schwarzschild-de Sitter black hole, for example (so no spins or charges, just a mass with an horizon associated with it, like in flat space), the black hole's temperature is always larger than that of the cosmological horizon. In fact, it runs from very large (when the black hole is small) all the way down to zero (as the black hole grows), at which point the two horizons coincide.
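
For concreteness, here are the standard formulas behind those statements (my addition, in units with G = c = \hbar = k_B = 1, not taken from the paper): the Schwarzschild-de Sitter metric function is

    f(r) = 1 - \frac{2M}{r} - \frac{\Lambda r^2}{3},

whose two positive roots r_b < r_c are the black hole and cosmological horizons. Each carries the temperature set by its surface gravity,

    T_{b,c} = \frac{|f'(r_{b,c})|}{4\pi},

and one finds T_b > T_c for every allowed mass, with both temperatures approaching zero in the limit where the two horizons merge.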

You might wonder, as many have, how to make sense of the two temperatures. This cannot, for a start, be an equilibrium thermodynamics system. Should there be dynamics where the two temperatures try to equalise? Is there heat flow from one horizon to another, perhaps? Maybe there's some missing ingredient needed to make sense of this - do we have any right to be writing down temperatures (an equilibrium thermodynamics concept, really) when the system is not in equilibrium? (Actually, you could ask that about Schwarzschild in flat space - you compute the temperature and then discover that it depends upon the mass in such a way that the system wants to move to a different temperature. But I digress.)

The point of my recent work is that it is entirely within the realm of physics we have to hand to make sense of this. The simple system described in the previous post - the three level maser - has certain key interconnected features that seem relevant:

  • it admits two distinct temperatures, and
  • it has a maximum energy, and
  • it has a natural instability (population inversion) and a channel for doing work: the maser output.

My point is that these features are all present for de Sitter black holes too, starting with the two temperatures. But you won't see the rest by staring at just the Schwarzschild case, you need to add rotation, or charge (or both). As we shall see, the ability to reduce angular momentum, or to reduce charge, will be the work channel. I'll come back to the maximum [...] Click to continue reading this post

The post News from the Front XIX: A-Masing de Sitter appeared first on Asymptotia.

by Clifford at July 29, 2019 06:03 PM

July 26, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVIII: de Sitter Black Holes and Continuous Heat Engines

[caption id="attachment_19313" align="alignright" width="250"] Hubble photo of jupiter's aurorae.[/caption]Another title for this could be "Making sense of de Sitter black hole thermodynamics", I suppose. What I'm going to tell you about is either a direct correspondence or a series of remarkable inspiring coincidences. Either way, I think you will come away agreeing that there is certainly something interesting afoot.

It is an idea I'd been tossing around in my head from time to time over the years, but somehow had not put together. Then something else I was working on years later, seemingly irrelevant, helped me complete the puzzle, resulting in my new paper, which (you guessed it) I'm excited about.

It all began when I was thinking about heat engines for black holes in anti-de Sitter, which you may recall me talking about in posts here, here, and here, for example. Those are reciprocating heat engines, taking the system through a cycle that, through various stages, takes in heat, does work, and exhausts some heat, then repeats and repeats. And repeats.

I've told you the story of my realisation that there's a whole literature on quantum heat engines that I'd not known about (I did not even know there was such a thing as a quantum heat engine), of my wondering whether my black hole heat engines could have a regime where they could be considered quantum heat engines, maybe enabling them to be useful tools in that arena... (resulting in the paper I described here)... and of my delight in combining 18th Century physics with 21st Century physics in this interesting way.

All that began back in 2017. One thing I kept coming back to, and that really struck me as lovely, is what can be regarded as the prototype quantum heat engine. It was recognized as such as far back as 1959!! It is a continuous heat engine, meaning that it does its heat intake, work, and heat output all at the same time, as a continuous flow. It is, in fact, a familiar system: the three-level maser! (A basic laser also uses the key elements.)
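
For the curious, that 1959 result (due to Scovil and Schulz-DuBois) fits in two lines; this is my paraphrase of the standard statement, not a quote from the post or the paper. A hot bath at temperature T_h pumps the transition at frequency \nu_p, the maser output extracts work at the signal frequency \nu_s, and the remainder \nu_i = \nu_p - \nu_s is dumped as heat into a cold bath at temperature T_c. Population inversion, and hence maser action, requires

    \eta \;=\; \frac{\nu_s}{\nu_p} \;\le\; 1 - \frac{T_c}{T_h},

so the engine's efficiency is bounded by the Carnot efficiency of the two baths.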

A maser can be described as taking in energy as heat from an external source, and giving out energy in the form of heat and work. The work is the desired [...] Click to continue reading this post

The post News from the Front, XVIII: de Sitter Black Holes and Continuous Heat Engines appeared first on Asymptotia.

by Clifford at July 26, 2019 03:44 PM

July 25, 2019

Axel Maas - Looking Inside the Standard Model

Talking about the same thing
In this blog entry I will try to explain my most recent paper. The theme of the paper is rather simply put: you should not compare apples with oranges. The subtlety comes from knowing whether you have an apple or an orange in your hand. This is far less simple than it sounds.

The origin of the problem lies once more in gauge theories. In gauge theories, we have introduced additional degrees of freedom, and, in fact, we have a choice of how we do this. Of course, our final results will not depend on the choice. However, getting to the final result is not always easy, so ensuring that the intermediate steps are right would be good. But the intermediate steps do depend on the choice, and they are therefore only comparable between two different calculations if the same choice is made in both.

Now it seems simple at first to make the same choice. Ultimately, it is our choice, right? But in such theories this is actually not that easy, due to their mathematical complexity. Thus, rather than making the choice explicitly, the choice is made implicitly. How this is done differs, again for technical reasons, from method to method. And because of all these technicalities, and the fact that we need to make approximations, figuring out whether the implicit conditions yield the same explicit choice is difficult. This is especially important as the choice modifies the equations describing our auxiliary quantities.

In the paper I test this. If everything is consistent between two particular methods, then the solution obtained in one method should solve the equations of the other method. Seems a simple enough idea. There had been various arguments in the past suggesting that this should be the case. But more and more pieces of evidence had accumulated over the last couple of years that led me to think something was amiss. So I made this test, rather than relying on the arguments.

And indeed, what I find in the article is that the solution of one method does not solve the equation of the other method. The way this happens strongly suggests that the implicit choices made are not equivalent. Hence, the intermediate results are different. This does not mean that they are wrong; they are just not comparable. Either method can still yield internally consistent results. But since neither method is exact, a comparison between the two would help reassure us that the approximations made make sense. And this is now hindered.
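
Schematically, the test looks like this (my notation, not the paper's): let D_A be an auxiliary quantity, say a propagator, computed in method A, and let E_B[D] = 0 be the corresponding equation of method B. One then checks the residual

    R \;\equiv\; E_B[D_A],

which should vanish, within errors and approximation artifacts, if both methods implicitly implement the same choice. A residual that stays robustly away from zero is evidence that the implicit choices differ.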

So, what to do now? We would very much like to be able to compare different methods at the level of the auxiliary quantities, so this needs to be fixed. It can only be achieved if the same choice is made in all methods. The tough question is in which method we should work on the choice. Should we try to make the same choice as in one fixed method? Should we try to find a new choice for all methods? This is tough, because everything is so implicit, and affected by approximations.

At the moment, I think the best way is to get one of the existing choices to work in all methods. Creating an entirely new one for all methods appears to me to be far too much additional work, and I, admittedly, have no idea what a better starting point would be than the existing ones. But in which method should we start trying to alter the choice? In neither method does this seem simple. In both cases there are fundamental obstructions, which need to be resolved. I would therefore currently like to start poking around in both methods, hoping that there may be a point in between where the choices of the methods could meet, which would be easier than pushing all the way in either direction. I have a few ideas, but they will take time. Probably also a lot more people than just me.

This investigation also amazes me, as the theory where this happens is nothing new. Far from it: it is more than half a century old, older than I am. And it is not something obscure, but rather part of the standard model of particle physics, a very essential element in our description of nature. It never ceases to baffle me how little we still know about it, and how unbelievably complex it is at a technical level.

by Axel Maas (noreply@blogger.com) at July 25, 2019 08:27 AM

July 08, 2019

Sean Carroll - Preposterous Universe

Spacetime and Geometry: Now at Cambridge University Press

Hard to believe it’s been 15 years since the publication of Spacetime and Geometry: An Introduction to General Relativity, my graduate-level textbook on everyone’s favorite theory of gravitation. The book has become quite popular, being used as a text in courses around the world. There are a lot of great GR books out there, but I felt another one was needed that focused solely on the goal of teaching students general relativity. That might seem like an obvious goal, but many books also try to serve as reference books, or to put forward a particular idiosyncratic take on the subject. All I want to do is teach you GR.

And now I’m pleased to announce that the book is changing publishers, from Pearson to Cambridge University Press. Even with a new cover, shown above.

I must rush to note that it’s exactly the same book, just with a different publisher. Pearson was always good to me, I have no complaints there, but they are moving away from graduate physics texts, so it made sense to try to find S&G a safe permanent home.

Well, there is one change: it’s cheaper! You can order the book either from CUP directly, or from other outlets such as Amazon. Copies had been going for roughly $100, but the new version lists for only $65 — and if the Amazon page is to be believed, it’s currently on sale for an amazing $46. That’s a lot of knowledge for a minuscule price. I’d rush to snap up copies for you and your friends, if I were you.

My understanding is that copies of the new version are not quite in stores yet, but they’re being printed and should be there momentarily. Plenty of time for courses being taught this Fall. (Apologies to anyone who has been looking for the book over the past couple of months, when it’s been stuck between publishers while we did the handover.)

Again: it’s precisely the same book. I have thought about doing revisions to produce an actually new edition, but I think about many things, and that’s not a super-high priority right now. Maybe some day.

Thanks to everyone who has purchased Spacetime and Geometry over the years, and said such nice things about it. Here’s to the next generation!

by Sean Carroll at July 08, 2019 08:03 PM

June 19, 2019

Axel Maas - Looking Inside the Standard Model

Creativity in physics
One of the most widespread misconceptions about physics, and the other natural sciences, is that they are quite the opposite of art: precise, fact-driven, logical, and systematic, while art is perceived as emotional, open, creative, and inspired.

Of course, physics has experiments, has data, has math. All of that has to fit together perfectly, and there is no room for slips. Logical deduction is central to what we do. But this is not all. In fact, these parts are more like the handiwork. Just as a painter needs to be able to draw a line, and a writer needs to be able to write coherent sentences, so we need to be able to calculate, build, check, and infer. But just as the act of drawing a line or writing a sentence is not what we recognize as art, so the solving of an equation is not yet physics.

We are able to solve an equation because we learned how during our studies. We learned what was known before. Thus, this is our tool set, just as people read books before they start writing one. But when we actually do research, we face the fact that nobody knows what is going on. In fact, quite often we do not even know what an adequate question to pose would be. We just stand there, baffled, before a couple of observations. That is where the same act of creativity has to set in as when writing a book or painting a picture. We need an idea, an inspiration, on how to start. And then, just as a writer writes page after page, we add various pieces to this idea until we have a hypothesis of what is going on. This is like having the first draft of a book. Then the real grinding starts, where all our education comes to bear, and we have to calculate and so on, just as a writer has to go and fix the draft for it to become a book.

You may now wonder whether this kind of creativity is limited to the great minds, and to the inception of a whole new step in physics? No, far from it. On the one hand, physics is not the work of lone geniuses. Sure, occasionally somebody has the right idea. But this is usually just the one idea which in the end turns out to be correct, while all the other good ideas that other people had turned out to be incorrect, and you never hear of them because of this. On the other hand, every new idea, as said above, eventually requires everything that was done before. And more than that: creativity is rarely born of being a hermit. It often comes from inspiration by others. Talking to each other, throwing fragments of ideas at each other, and mulling over the consequences together is what creates the soil where creativity sprouts. All those with whom you have interacted have contributed to the birth of the idea you have.

This is why the genuinely big breakthroughs have often resulted from so-called blue-sky or curiosity-driven research. It is not a coincidence that the freedom to do whatever kind of research you think is important is an almost sacred privilege of hired scientists. Or should be. Fortunately, I am privileged enough, especially in the European Union, to have this freedom. In other places, you are often shackled by all kinds of external influences, down to political pressure to do only politically acceptable research. And this can never spark the creativity you need to make something genuinely new. If you are afraid of what you say, you start to restrain yourself, and ultimately anything which is not already established as acceptable becomes unthinkable. This may not always be as obvious as outright political pressure. But if being hired, or keeping your job, starts to depend on it, you start going for acceptable research, because failure with something new would cost you dearly. And with the competitive funding currently prevalent, particularly for people without permanent positions, this starts to become a serious obstruction.

As a consequence, real breakthrough research can neither be planned nor done on purpose. You can only plan the grinding part. And failure will be part of any creative process. Though you actually never really fail, because you always learn how something does not work. That is one of the reasons why I strongly want failures to become publicly available as well. They are as important to progress as successes, by narrowing down the possibilities. Not to mention the amount of researchers' lifetime wasted because they fail with the same attempt, not knowing that others failed before them.

And then, perhaps, a new scientific insight arises. And, more often than not, some great technology arises along the way, not intentionally, but because it was necessary to follow one's creativity. That is actually where most technological leaps came from. So, real progress in physics is, in the end, made from about a third craftsmanship, a third communication, and a third creativity.

So, after all this general stuff, how do I stay creative?

Well, first of all, I was and am sufficiently privileged. I could afford to start out by just following my ideas: either that would keep me in business, or I would have to find a non-science job. But this only worked out because of my personal background: I could have afforded a couple of months with no income while finding a job, and I had an education which almost guarantees me a decent job eventually. And an education of this quality I could only afford because of my personal background. Not to mention that, as a white male, I faced no systemic barriers. So, yes, privilege plays a major role.

The other part was that I learned, more and more, that it is not effort that counts, but effect. It took me years. But eventually I understood that a creative idea cannot be forced by burying myself in work. Time off is just as important for me. It took me until close to the end of my PhD to realize that. Not working overtime, and enjoying free days and holidays, is as important for my creative process as any other condition. Not to mention that I do all the non-creative chores much more efficiently when well rested, which eventually leaves me with more time to ponder creatively and do research.

And the last ingredient is really exchange. I have had the opportunity, during a sabbatical, to go to different places and exchange ideas with a lot of people. This gave me what I needed to acquire a new field and already have new ideas for it. It is the possibility to sit down with people for some hours, especially in nicer and more relaxing surroundings than an office, and just discuss ideas. That is also what I like most about conferences. It is one of the reasons I think conferences will always be necessary, even though we need to make travelling to them ecologically much more viable, and restrict ourselves to sufficiently close ones until that is possible.

Sitting down over a good cup of coffee or a nice meal and just discussing really jump-starts my creativity. Even sitting with a good cup of coffee in a nice cafe somewhere and just thinking does wonders for me in solving problems. And in that respect, it seems, I am not so different from artists after all.

by Axel Maas (noreply@blogger.com) at June 19, 2019 02:53 PM

June 18, 2019

Marco Frasca - The Gauge Connection

Cracks in Witten’s index theorem?

Recently, a rather interesting paper (see here for the preprint) appeared in Physical Review Letters. The authors study a Wess-Zumino model for {\cal N}=1, the prototype of any further SUSY model, and show that there exists an anomaly at one loop in perturbation theory that breaks supersymmetry. This is rather shocking, as the model is supersymmetric at the classical level and, according to Witten’s index theorem, no breaking of supersymmetry should ever be observed. Indeed, the authors, in the conclusions, rightly ask how Witten’s theorem copes with this rather strange behavior. Of course, Witten’s theorem is correct, and the question arises naturally and is very interesting for further studies.
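
To recall the statement at stake (a standard summary, my addition): the Witten index is

    \Delta = \mathrm{Tr}\,(-1)^F e^{-\beta H},

the number of bosonic minus fermionic zero-energy states. It is invariant under smooth deformations of the couplings, and if \Delta \neq 0 there is at least one zero-energy state, so supersymmetry cannot break spontaneously. For Wess-Zumino models the index is nonzero, which is why a one-loop breaking would be so startling.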

This result is important, as I have encountered a similar situation for the Wess-Zumino model in a couple of papers. The first one (see here and here) was published, and shows how the classical Wess-Zumino model, in a strong coupling regime, breaks supersymmetry. Therefore, I asked a similar question as for the aforementioned case: how do quantum corrections recover Witten’s theorem? The second one has remained a preprint (see here). I tried to send it to Physics Letters B, but the referee, without checking any of the mathematics, just claimed that Witten’s theorem forbade my conclusions, and the Editor asked me to withdraw the paper for this same reason. It was a very strong one. So I never submitted this paper again, and just checked the classical case, where I was luckier.

So, my question is still alive: Has supersymmetry in itself the seeds of its breaking?

This is really important in view of the fact that the Minimal Supersymmetric Standard Model (MSSM), now in disgrace after the LHC results, can have a dark side in its soft supersymmetry-breaking sector. This, in turn, could entail a wrong understanding of where the superpartners could be after the breaking. Anyway, it is really something exciting already at the theoretical level. We are just stressing Witten’s index theorem in search of answers.

by mfrasca at June 18, 2019 03:06 PM

June 14, 2019

Matt Strassler - Of Particular Significance

A Ring of Controversy Around a Black Hole Photo

[Note Added: Thanks to some great comments I’ve received, I’m continuing to add clarifying remarks to this post.  You’ll find them in green.]

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public. Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t. There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? These have been problematic issues from the beginning, but discussion is starting to heat up. And it’s important: it has implications for the measurement of the black hole’s mass (which EHT claims is that of 6.5 billion Suns, with an uncertainty of about 15%), and for any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are among the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it. So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

One note before I begin: this post is long. But it starts with a summary of the situation that you can read quickly, and then comes the long part: a step-by-step non-technical explanation of an important aspect of the black hole ‘photo’ that, to my knowledge, has not yet been given anywhere else.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What Does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well. If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, ​that’s not enough information. We don’t get to observe the black hole itself (it’s black, after all!) What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (a plasma of mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.) The black hole’s gravity then causes the paths on which the radio waves travel to bend, even more than a glass lens will bend the path of visible light, so that where things appear in the ‘photo’ is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time. (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and it has just been said publicly in a paper by Samuel Gralla, Daniel Holz and Robert Wald.

Two months ago, when the EHT `photo’ appeared, it was widely reported in the popular press and on blogs that the photo shows the image of a photon sphere at the edge of the shadow of the M87bh. (Instead of `shadow’, I suggested the term ‘quasi-silhouette‘, which I viewed as somewhat less misleading to a non-expert.)

Unfortunately, it seems these statements are not true; and this was well-known to (but poorly communicated by, in my opinion) the EHT folks.  This lack of clarity might perhaps annoy some scientists and science-loving non-experts; but does this issue also matter scientifically? Gralla et al., in their new preprint, suggest that it does (though they were careful to not yet make a precise claim.)

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully when the `photo’ first appeared, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was. You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. Studying the equations and conversing with expert colleagues, I soon learned why: for a rotating black hole, the photon sphere doesn’t really exist.

But let’s first define what the photon sphere is for a non-rotating black hole! Like the Earth’s equator, the photon sphere is a location, not an object. This location is the surface of an imaginary ball, lying well outside the black hole’s horizon. On the photon sphere, photons (the particles that make up light, radio waves, and all other electromagnetic waves) travel on special circular or spherical orbits around the black hole.
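
To put numbers on all this (standard Schwarzschild formulas; my addition): for a non-rotating black hole of mass M, the horizon sits at r_s = 2GM/c^2 and the photon sphere at

    r_{\rm ph} = \frac{3GM}{c^2} = 1.5\, r_s ,

and because the light skimming the photon sphere is bent on its way out, a distant camera sees it at the larger apparent radius

    b_c = \sqrt{27}\,\frac{GM}{c^2} \approx 2.6\, r_s ,

which is what sets the apparent size of the photon ring discussed below.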

By contrast, a rotating black hole has a larger, broader `photon-zone’ where photons can have special orbits. But you won’t ever see the whole photon zone in any image of a rotating black hole. Instead, a piece of the photon zone will appear as a `photon ring‘, a bright and very thin loop of radio waves. However, the photon ring is not the edge of anything spherical, is generally not perfectly circular, and generally is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it has a photon-zone rather than a photon-sphere, and images of it will have a photon ring. Ok, fine; but then, can we interpret EHT’s `photo’ simply as showing the photon ring, blurred by the imperfections in the `telescope’? Although some of the EHT folks have seemed to suggest the answer is “yes”, Gralla et al. suggest the answer is likely “no” (and many of their colleagues have been pointing out the same thing in private.) The circlet of radio waves that appears in the EHT `photo’ is probably not simply a blurred image of M87bh’s photon ring; it probably shows a combination of the photon ring with something brighter (as explained below). That’s where the controversy starts.

…so the Dark Patch May Not Be the Full Shadow…

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’ in describing it in public contexts, though that’s my own personal term) but no matter what you call it, in its ideal form it is supposed to be an absolutely dark area whose edge is the photon ring. But in reality the perfectly dark area need not appear so dark after all; it may be partly filled in by various effects. Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway. The EHT folks are well aware of this, but at the time the photo came out, many science writers and scientist-writers (including me) were not.

…so EHT’s Measurement of the M87bh’s Mass is Being Questioned

It was wonderful that EHT could make a picture that could travel round the internet at the speed of light, and generate justifiable excitement and awe that human beings could indirectly observe such an amazing thing as a black hole with a mass of several billion Sun-like stars. Qualitatively, they achieved something fantastic in showing that yes, the object at the center of M87 really is as compact and dark as such a black hole would be expected to be! But the EHT telescope’s main quantitative achievement was a measurement of the mass of the M87bh, with a claimed precision of about 15%.

Naively, one could imagine that the mass is measured by looking at the diameter of the dark spot in the black hole ‘photo’, under the assumption that it is the black hole’s shadow. So here’s the issue: Could interpreting the dark region incorrectly perhaps lead to a significant mistake in the mass measurement, and/or an underestimate of how uncertain the mass measurement actually is?

I don’t know.  The EHT folks are certainly aware of these issues; their simulations show them explicitly.  The mass of the M87bh isn’t literally measured by putting a ruler on the ‘photo’ and measuring the size of the dark spot! The actual methods are much more sophisticated than that, and I don’t understand them well enough yet to explain, evaluate or criticize them. All I can say with confidence right now is that these are important questions that experts currently are debating, and consensus on the answer may not be achieved for quite a while.

———————————————————————-

The Appearance of a Black Hole With Nearby Matter

Ok, now I’m going to explain the most relevant points, step-by-step. Grab a cup of coffee or tea, find a comfy chair, and bear with me.

Because fast-rotating black holes are more complicated, I’m going to start illuminating the controversy by looking at a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh may well be rotating at a very good clip.

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.) See Figure 1. Where will the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) The path of that light is the orange arrow in Figure 1. But then there will be an indirect image (the green arrow in Figure 1) from light that goes halfway around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. In short, Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.


Figure 1: A light bulb (yellow) outside but near the non-rotating black hole’s horizon (in black) can be seen by someone at the top of the image not only through the light that goes directly upward (orange line) — a “direct image” — but also through light that makes partial or complete orbits of the black hole — “indirect images.” The first indirect and second indirect images are from light taking the green and blue paths. For light to make orbits of the black hole, it must travel near the grey-dashed circle that indicates the location of a “photon-sphere.” (A rotating black hole has no such sphere, but when seen from the north or south pole, the light observed takes similar paths to what is shown in this figure.) [The paths of the light rays were calculated carefully using Mathematica 11.3.]

What you can see in Figure 1 is that both the first and second indirect images are formed by light that spends part of its time close to a special radius around the black hole, shown as a dotted line. This imaginary surface, the edge of a ball, is an honest “photon-sphere” in the case of a non-rotating black hole.

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north (or south) pole; there’s a special circle then too. But that circle is not the edge of a photon-sphere! In general, photons can have special orbits in a wide region, which I called the “photon-zone” earlier, and only a small set of them are on this circle. You’ll see photons from other parts of the photon zone if you look at the black hole not from the poles but from some other angle.

[If you’d like to learn a bit more about the photon zone, and you have a little bit of knowledge of black holes already, you can profit from exploring this demo by Professor Leo Stein: https://duetosymmetry.com/tool/kerr-circular-photon-orbits/ ]

Back to the non-rotating case: What our camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is large and bright; this is one of Gralla et al.‘s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high-powered camera, we’ll never pick those other images up. Let’s therefore focus our attention on the direct image and the first indirect image.
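
A rough way to see why the remaining images matter so little (standard lensing lore for the non-rotating case; my addition, not a number quoted from Gralla et al.): light that lingers near the photon sphere peels away from it exponentially, and each additional half-orbit demagnifies the corresponding image by a factor of roughly e^{-\pi} \approx 0.04. The second indirect image is therefore already only a few percent as bright as the first, and later ones are fainter still, consistent with the 'less than 5%' figure above.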


Figure 2: What the drawing in Figure 1 actually looks like to the observer peering toward the black hole; all the indirect images lie at almost exactly the same distance from the black hole’s center.

WARNING (since this seems to be a common confusion):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGES ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Proceeding step by step toward a more realistic situation, let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now?


Figure 3: if we replace the light bulb with a circle of light, the paths of the light are the same as in Figure 1, except now for each point along the circle. That means each direct and indirect image itself forms a circle, as shown in the next figure.

That’s shown in Figure 4: the direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.


Figure 4: The circular bulb’s direct image is the bright circle, but a somewhat dimmer first indirect image appears further out, and just beyond one finds all the other indirect images, forming a thin `photon ring’.

Importantly, if we consider circular bulbs of different diameter [yellow, red and blue in Figure 5], then although the direct images reflect the differences in the bulbs’ diameters (somewhat enlarged by lensing), the first indirect images all are about the same diameter, just a tad larger or smaller than the photon ring.  The remaining indirect images all sit together at the radius of the photon ring.


Figure 5: Three bulbs of different diameter (yellow, blue, red) create three distinct direct images, but their first indirect images are located much closer together, and very close to the photon ring where all their remaining indirect images pile up.

These statements are also essentially true for a rotating black hole seen from the north or south pole; a circular bulb generates a series of circular images, and the indirect images all pile more or less on top of each other, forming a photon ring. When viewed off the poles, the rotating black hole becomes a more complicated story, but as long as the viewing angle is small enough, the changes are relatively minor and the picture is qualitatively somewhat similar.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 6? That’s relevant because black holes are thought to have `accretion disks’ made of material orbiting the black hole, and eventually spiraling in. The accretion disk may well be the dominant source emitting radio waves at the M87bh. (I’m showing a very thin uniform disk for illustration, but a real accretion disk is not uniform, changes rapidly as clumps of material move within it and then spiral into the black hole, and may be quite thick — as thick as the black hole is wide, or even thicker.)

Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown in Figure 6 left, on one side of the disk, as an orange wash) would form a disk in your camera, the dim red region in Figure 6 right; the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be similar. However, the indirect images would all pile up in almost the same place from your perspective, forming a bright and quite thin ring, the bright yellow circle in Figure 6 right. (The path of the disk’s first indirect image is shown in Figure 6 left, going halfway about the black hole as a green wash; notice how it narrows as it travels, which is why it appears as a narrow ring in the image at right.) This circle — the full set of indirect images of the whole disk — is the edge of the photon-sphere for a non-rotating black hole, and the circular photon ring for a rotating black hole viewed from its north or south pole.


Figure 6: A glowing disk of material (note it does not touch the black hole) looks like a version of Figure 5 with many more circular bulbs. The direct image of the disk forms a disk (illustrated at left, for a piece of the disk, as an orange wash) while the first indirect image becomes highly compressed (illustrated, for a piece of the disk, as a green wash) and is seen as a narrow circle of bright light.  (It is expected that the disk is mostly transparent in radio waves, so the indirect image can pass through it.) That circle, along with the other indirect images, forms the photon ring. In this case, because the disk’s inner edge lies close to the black hole horizon, the photon ring sits within the disk’s direct image, but we’ll see a different example in Figure 9.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, currently unobservable at EHT, the `photon ring’, while EHT refers to all the indirect images as the `photon ring’. Just letting you know in case you hear `lensed ring’ referred to in future.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — i.e., along its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, technology had to be pushed to its absolute limits. Someday we’ll see both the disk and the ring, but right now, they’re all blurred together. So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim and the ring so bright that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that the photo will show mainly the disk, not the photon ring! This is shown in Figure 7, which you can compare with the Black Hole `photo’ (Figure 8). (Figure 7 is symmetric around the ring, but the photo is not, for multiple reasons, such as a Doppler-like effect from rotation and a viewpoint off the rotation axis, which I’ll have to defer until another post.)

More precisely, the ring and disk blur together, but the brightness of the image is dominated by the disk, not the ring.
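
Here is a toy version of that simple calculation (entirely my own illustration, with made-up brightness numbers and a made-up beam width, not the Gralla et al. computation): put a broad, dim disk and a thin ring that is ten times brighter per pixel into an image, blur both with a Gaussian standing in for the EHT beam, and see which dominates.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Toy sky: brightness units and pixel sizes are arbitrary/illustrative.
    n = 256
    y, x = np.indices((n, n))
    r = np.hypot(x - n / 2, y - n / 2)

    disk = np.where((r > 25) & (r < 60), 1.0, 0.0)   # broad, dim emission disk
    ring = np.where(abs(r - 20) < 0.5, 10.0, 0.0)    # thin ring, 10x brighter per pixel

    beam = 8.0  # Gaussian blur width (pixels), standing in for the EHT beam
    blurred_disk = gaussian_filter(disk, beam)
    blurred_ring = gaussian_filter(ring, beam)

    # The resolved disk keeps its surface brightness, while the unresolved
    # ring is diluted by the beam, so the disk dominates the blurred image.
    print("peak after blurring: disk =", blurred_disk.max().round(2),
          "ring =", blurred_ring.max().round(2))

With these made-up numbers the blurred disk peaks at roughly twice the brightness of the blurred ring, even though the unblurred ring is ten times brighter per pixel. The real balance depends on quantities we do not know, which is exactly where the controversy lives.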


Figure 7: At left is repeated the image in Figure 6, as seen in a perfect camera, while at right the same image is shown when observed using a camera with imperfect vision. The disk and ring blur together into a single thick ring, whose brightness is dominated by the disk. Note that the shadow — the region surrounded by the yellow photon ring — is not the same as the dark patch in the right-hand image; the dark patch is considerably smaller than the shadow.

Let’s say that again: the black hole `photo’ may mainly show the M87bh’s accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo.’ In general, the patch could be considerably smaller than what is usually termed the `shadow’ of the black hole.


Figure 8: (Left) We probably observe the M87bh at a small angle off its south pole. Its accretion disk has an unknown size and shape — it may be quite thick and non-uniform — and it may not even lie at the black hole’s equator. The disk and the black hole interact to create outward-going jets of material (observed already many years ago but not clearly visible in the EHT ‘photo’.) (Right) The EHT `photo’ of the M87bh (taken in radio waves and shown in false color!) Compare with Figure 7; the most important difference is that one side of the image is brighter than the other. This likely arises from (a) our view being slightly off from the south pole, combined with (b) rotation of the black hole and its disk, and (c) possibly other more subtle issues.

This is important. The photon ring’s diameter, and thus the width of the `shadow’ too, barely depend on the rotation rate of the black hole; they depend almost exclusively on the black hole’s mass. So if the ring in the photo were simply the photon ring of the M87bh, you’d have a very simple way to measure the black hole’s mass without knowing its rotation rate: you’d look at how large the dark patch is, or equivalently, the diameter of the blurry ring, and that would give you the answer to within 10%. But it’s nowhere near so simple if the blurry ring shows the accretion disk, because the accretion disk’s properties and appearance can vary much more than the photon ring; they can depend strongly on the black hole’s rotation rate, and also on magnetic fields and other details of the black hole’s vicinity.
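
To see why that naive route is so tempting, here is the back-of-the-envelope version (my own sketch, using the non-rotating shadow formula and round published numbers, emphatically not EHT's actual method):

    import math

    G, c = 6.674e-11, 2.998e8        # SI units
    M_sun, pc = 1.989e30, 3.086e16   # kg, m

    M = 6.5e9 * M_sun                # EHT's quoted mass for the M87bh
    D = 16.8e6 * pc                  # approximate distance to M87

    # Schwarzschild shadow: angular diameter = 2 * sqrt(27) * GM / (c^2 * D)
    theta = 2 * math.sqrt(27) * G * M / (c**2 * D)   # radians
    print(f"shadow diameter ~ {math.degrees(theta) * 3600e6:.0f} microarcseconds")

Run forward, the quoted mass gives a shadow of roughly 40 microarcseconds, in the neighborhood of the ring EHT measured; run backward, a measured ring diameter gives the mass. The catch, as just explained, is knowing whether the blurry ring actually marks the shadow's edge.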

The Important Role of Rotation

If we conclude that EHT is seeing a mix of the accretion disk with the photon ring, with the former dominating the brightness, then this makes EHT’s measurement of the M87bh’s mass more confusing and even potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important. Despite what I showed (for pedagogical simplicity) in Figure 7, for a non-rotating black hole the accretion disk’s central gap is actually expected to lie outside the photon ring; this is shown at the top of Figure 9.  But  the faster the black hole rotates, the smaller this central gap is expected to be, to the point that for a fast-rotating black hole the gap will lie inside the photon ring, as shown at the bottom of Figure 9. (This tendency is not obvious; it requires understanding details of the black hole geometry.) And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette), which is the region inside the photon ring. It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.
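
The standard numbers behind that tendency (my addition; Boyer-Lindquist radii, in units with G = c = 1): a thin disk's inner edge is usually placed at the innermost stable circular orbit, which sits at

    r_{\rm ISCO} = 6M \quad \text{(non-rotating)}, \qquad r_{\rm ISCO} \rightarrow M \quad \text{(maximal prograde spin)},

while the critical impact parameter that sets the photon ring's apparent size stays near \sqrt{27}\,M \approx 5.2\,M, shifting only by about 10% over the full range of spins. So as the spin increases, the disk's inner edge sweeps from outside the photon ring to well inside it, while the ring itself barely moves.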

The effect of blurring in the two cases of slow (or zero) and fast rotation are illustrated in Figure 9, where the photon ring’s size is taken to be the same in each case but the disk’s inner edge is close in or far out. (The black holes, not illustrated since they aren’t visible anyway, differ in mass by about 10% in order to have the photon ring the same size.) This shows why the size of the dark patch can be quite different, depending on the disk’s shape, even when the photon ring’s size is the same.


Figure 9: Comparing the appearance of slightly more realistically-shaped disks around slowly rotating or non-rotating black holes (top) to those around fast-rotating black holes (bottom) of the same mass, as seen from the north or south pole. (Left) the view in a perfect camera; (right) rough illustration of the effect of blurring in the current version of the EHT. The faster the black hole is spinning, the smaller the central gap in the accretion disk is likely to be. No matter what the extent of the accretion disk (dark red), the photon ring (yellow) remains at roughly the same location, changing only by 10% between a non-rotating black hole and a maximally rotating black hole of the same mass. But blurring in the camera combines the disk and photon ring into a thick ring whose brightness is dominated by the disk rather than the ring, and which can therefore be of different size even though the mass is the same. This implies that the radius of the blurry ring in the EHT `photo’, and the size of the dark region inside it, cannot by themselves tell us the black hole’s mass; at a minimum we must also know the rotation rate (which we do not.)

Gralla et al. subtly raise these questions but are careful not to overstate their case, perhaps because they have not yet completed their study of rotating black holes. But the question is now in the air.

I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures. In particular, EHT’s simulations show all of the effects mentioned above; there’s none of this of which they are unaware. (In fact, the reason I know my illustrations above are reasonable is partly because you can see similar pictures in the EHT papers.) As long as the EHT folks correctly accounted for all the issues, then they should have been able to properly measure the mass and estimate their uncertainties correctly. In fact, they don’t really use the photo itself; they use more subtle techniques applied to their telescope data directly. Thus it’s not enough to argue the photo itself is ambiguous; one has to argue that EHT’s more subtle analysis methods are flawed. No one has argued that yet, as far as I am aware.

But the one thing that’s clear right now is that science writers almost uniformly got it wrong [because the experts didn’t explain these points well] when they tried to describe the image two months ago. The `photo’ probably does not show “a photon ring surrounding a shadow.” That would be nice and simple and impressive-sounding, since it refers to fundamental properties of the black hole’s warping effects on space. But it’s far too glib, as Figures 7 and 9 show. We’re probably seeing an accretion disk supplemented by a photon ring, all blurred together, and the dark region may well be smaller than the black hole’s shadow.

(Rather than, or in addition to, the accretion disk, it is also possible that the dominant emission in the photo comes from the inner portion of one of the jets that emerges from the vicinity of the black hole; see Figure 8 above. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Someday in the not distant future, improved imaging should allow EHT to separately image the photon ring and the disk, so both can be observed easily, as in the left side of Figure 9. Then all these questions will be answered definitively.

Why the Gargantua Black Hole from Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 10. And that leads to the Gargantua image from the movie, also shown in Figure 10. Notice the photon ring (which is, as I cautioned you earlier, off-center!)   [Note added: this figure has been modified; in the original version I referred to the top and bottom views of the disk’s far side as the  “1st indirect image”, but as pointed out by Professor Jean-Pierre Luminet, that’s not correct terminology here.]


Figure 10: The movie Interstellar features a visit to an imaginary black hole called Gargantua, and the simulated images in the movie (from 2014) are taken from near the equator, not the pole. As a result, the direct image of the disk cuts across the black hole, and indirect images of the back side of the disk are seen above and below the black hole. There is also a bright photon ring, slightly off center; this is well outside the surface of the black hole, which is not visible. A real image would not be symmetric left-to-right; it would be brighter on the side that is rotating toward the viewer.  At the bottom is shown a much more realistic visual image (albeit of lower image quality) from 1994 by Jean-Alain Marck, in which this asymmetry can be seen clearly.

However, the movie image leaves out an important Doppler-like effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so a real image from this vantage point would be very asymmetric — bright on the left, dim on the right — unlike the movie image.  At the suggestion of Professor Jean-Pierre Luminet I have added, at the bottom of Figure 10, a very early simulation by Jean-Alain Marck that shows this effect.
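To get a feel for how dramatic that effect can be, here is a toy calculation of just the special-relativistic part (beaming), ignoring the gravitational redshift and light bending that matter just as much this close to a black hole; the observed intensity scales roughly as the cube of the Doppler factor:

import math

def doppler_factor(beta, cos_angle):
    """Relativistic Doppler factor for an emitter with speed beta (in units of c)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * cos_angle))

beta = 0.5                                   # an invented but plausible orbital speed
approaching = doppler_factor(beta, +1.0)     # disk material coming straight toward us
receding = doppler_factor(beta, -1.0)        # disk material moving straight away

print(f"intensity ratio ~ {(approaching / receding)**3:.0f} to 1")  # ~27 to 1

With the disk moving at half the speed of light, the approaching side comes out brighter by a factor of order twenty; that is why Marck's image is so lopsided.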

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole.

Final Remarks: Where a Rotating Black Hole Differs from a Non-Rotating One

Before I quit for the week, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) As I’ve just emphasized, what a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A pole observer, an equatorial observer, and a near-pole observer see quite different things. (As noted in Figure 8, we are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are some reasons to expect this. Even then, the story is complex.

2) As I mentioned above, instead of a photon-sphere, there is a ‘photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. For high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; and this can cause a filling-in of the ‘shadow’.

3) Depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk all be circular and concentric. We’re not viewing the M87bh on its axis, and that further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that I, at least, cannot currently check.

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 14, 2019 12:15 PM

June 11, 2019

Georg von Hippel - Life on the lattice

Looking for guest bloggers to cover LATTICE 2019
My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, please send me an email and we can arrange to grant you posting rights.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:28 AM

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are omitted altogether. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo (HMC) for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov-space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
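For readers who have never met the book's central algorithm, here is a toy sketch of the hybrid Monte Carlo idea (my own illustration, not taken from the book), stripped down to a single degree of freedom with action S(x) = x²/2 instead of a lattice gauge field; the structure, momentum refreshment, a leapfrog trajectory, and a Metropolis accept/reject step, is the same in real simulations:

import numpy as np

rng = np.random.default_rng(42)

def S(x):  return 0.5 * x**2     # toy "action"
def dS(x): return x              # its derivative

def hmc_step(x0, n_steps=10, eps=0.1):
    x = x0
    p = rng.normal()                           # refresh the conjugate momentum
    h_old = 0.5 * p**2 + S(x)
    p -= 0.5 * eps * dS(x)                     # leapfrog integration of the
    for _ in range(n_steps - 1):               # fictitious molecular dynamics
        x += eps * p
        p -= eps * dS(x)
    x += eps * p
    p -= 0.5 * eps * dS(x)
    h_new = 0.5 * p**2 + S(x)
    # the Metropolis test corrects the integration error exactly
    return x if rng.random() < np.exp(h_old - h_new) else x0

x, samples = 0.0, []
for _ in range(20000):
    x = hmc_step(x)
    samples.append(x)
print(f"<x^2> = {np.mean(np.square(samples)):.3f}  (exact answer: 1.0)")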

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:27 AM

June 10, 2019

Matt Strassler - Of Particular Significance

Minor Technical Difficulty with WordPress

Hi all — sorry to bother you with an issue you may not even have noticed, but about 18 hours ago a post of mine that was under construction was accidentally published, due to a WordPress bug.  Since it isn’t done yet, it isn’t readable (and has no figures yet) and may still contain errors and typos, so of course I tried to take it down immediately.  But it seems some of you are still getting the announcement of it or are able to read parts of it.  Anyway, I suggest you completely ignore it, because I’m not done working out the scientific details yet, nor have I had it checked by my more expert colleagues; the prose and perhaps even the title may change greatly before the post comes out later this week.  Just hang tight and stay tuned…

by Matt Strassler at June 10, 2019 11:43 PM

Matt Strassler - Of Particular Significance

The Black Hole Photo: Controversy Begins To Bubble Up

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public.  Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t.  There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt. This concern seems to have been warranted.  This is the first post of several in which I’ll explain the issue as I see it.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? That’s been a problematic issue from the beginning, but discussion is starting to heat up.  And it’s important: it has implications for the measurement of the black hole’s mass, and of any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are among the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it.  So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well.  If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, that’s not enough information.  We don’t get to observe the black hole itself (it’s black, after all!)   What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.)  The black hole then bends the paths of the radio waves extensively, even more than does a glass lens, so that where things appear in the image is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time.  (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and a paper by Gralla, Holz and Wald has just said it publicly.

When the EHT `photo’ appeared, it was widely reported that it shows the image of a photon sphere at the edge of the shadow (or ‘quasi-silhouette‘, a term I suggested as somewhat less misleading) of the M87bh.

[Like the Earth’s equator, the photon sphere is a location, not an object.  Photons (the particles that make up light, radio waves, and all other electromagnetic radiation) that move along the photon sphere have special, spherical orbits around the black hole.]

Unfortunately, it seems likely that these statements are incorrect; and Gralla et al. have said almost as much in their new preprint (though they were careful not to make a precise claim.)

 

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully back then, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was.  You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. A couple of weeks later, as I studied the equations and conversed with colleagues, I learned why: for a rotating black hole, the photon sphere doesn’t really exist.  There’s a broad `photon-zone’ where photons can have special orbits, but you won’t ever see the whole photon zone in an image of a rotating black hole.  Instead a piece of the photon zone will show up as a `photon ring’, a bright thin loop of radio waves.

But this ring is not the edge of anything spherical, is generally not perfectly circular, and is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it probably has no photon-sphere.  But does it show a photon ring?  Although some of the EHT folks seemed to suggest the answer was ‘yes’, Gralla et al. suggest the answer is likely `no’ (and my Harvard colleagues were finding the same thing.)  It seems unlikely that the circlet of radio waves that appears in the EHT `photo’ is really an image of M87bh’s photon ring anyway; it’s probably something else.  That’s where controversy starts.

…so the Dark Patch is Probably Not the Full Shadow

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’) but no matter what you call it, in its ideal form it is a perfectly dark area whose edge is the photon ring.    But in reality the perfectly dark area need not appear so dark after all; it may be filled in by various effects.  Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway.

Step-By-Step Approach

To explain these points will take some time and care, so I’m going to spread the explanation out over several blog posts.  Otherwise it’s just too much information too fast, and I won’t do a good job writing it down.  So bear with me… expect at least three more posts, probably four, and even then there will still be important issues to return to in future.

The Appearance of a  Black Hole With Nearby Matter

Because fast-rotating black holes are complicated, I’m going to illuminate the controversy using a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh is probably rotating at a very good clip. (At the end of this post I’ll briefly describe some key differences between the appearance of non-rotating black holes, rotating black holes observed along the rotation axis, and rotating black holes observed just a bit off the rotation axis.)

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.)  See Figure 1.  Where would the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) That’s the orange arrow in Figure 1.  But then there will be an indirect image from light that goes halfway (the green arrow in Figure 1) around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.

What you can see in Figure 1 is that both the first and second indirect images are formed by light (er, radio waves) that spends part of its time close to a special radius around the black hole, shown as a dotted line. This, in the case of a non-rotating black hole, is an honest “photon-sphere”.
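If you'd like to see this spiraling concretely, here is a small sketch of my own (Schwarzschild, units G = c = M = 1). The photon orbit equation in u = 1/r is d²u/dφ² = -u + 3u², and the photon sphere corresponds to the critical impact parameter b = √27 ≈ 5.196. Rays aimed closer and closer to that value wind around more and more times before escaping:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(phi, y):                         # y = (u, du/dphi), with u = 1/r
    u, up = y
    return [up, -u + 3.0 * u**2]

def swept_angle(b, r0=1000.0):
    """Total angle a photon with impact parameter b sweeps before escaping or being captured."""
    u0 = 1.0 / r0
    up0 = np.sqrt(1.0 / b**2 - u0**2 + 2.0 * u0**3)      # inward-moving photon from far away
    captured = lambda phi, y: y[0] - 0.5                 # u = 1/2 means r = 2: the horizon
    escaped = lambda phi, y: y[0] - 0.9 * u0             # heading back out to large radius
    captured.terminal = escaped.terminal = True
    captured.direction, escaped.direction = 1, -1
    sol = solve_ivp(rhs, (0.0, 200.0), [u0, up0],
                    events=[captured, escaped], rtol=1e-10, atol=1e-12)
    return sol.t[-1]

for b in [7.0, 5.3, 5.2, 5.1963]:
    print(f"b = {b:6.4f}:  angle swept = {swept_angle(b) / np.pi:5.2f} pi")

# As b approaches sqrt(27) = 5.19615... from above, the swept angle grows
# without bound: those extra windings produce the indirect images.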

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north pole; there’s a special circle then too.  But that circle is not the edge of a photon-sphere!  In general, photons can orbit in a wide region, which I’ll call the “photon-zone.” You’ll see photons from other parts of the photon zone if you look at the black hole not from the north pole but from some other angle.

What our radio-wave camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is bright; this is one of Gralla et al’s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high resolution camera, we’ll never pick those other images up. Consequently, all we can really hope to detect with something like EHT is the direct image and the first indirect image.
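There is a simple back-of-the-envelope version of that suppression for a non-rotating black hole: each extra half-orbit near the photon sphere demagnifies the next image by roughly a factor of e^(-π), the characteristic instability rate of the photon orbits. (The precise factor depends on spin and viewing angle, so treat this as a ballpark only.)

import math

f = math.exp(-math.pi)            # demagnification per extra half-orbit, ~0.043
remaining = f / (1.0 - f)         # geometric sum of the 2nd, 3rd, ... indirect images

print(f"suppression per half-orbit: {f:.4f}")
print(f"all remaining images, relative to the first indirect image: {remaining:.4f}")

# ~4.5%, consistent with the 'less than 5%' quoted above.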

WARNING (since this seems to be a common confusion even after two months):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGE ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now? Figure 4: The direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

Importantly, if we replace that circular bulb [shown yellow in Figure 5] by one of a larger or smaller radius [shown blue in Figure 5], then (Figure 6) the inner direct image would look larger or smaller to us, but the indirect images would barely move; they remain very close to the same size no matter how big a circular bulb we choose.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 7? That’s relevant because black holes are thought to have `accretion disks’ of material (possibly quite thick — I’m showing a very thin one for illustration, but they can be as thick as the black hole is wide, or even thicker) that orbit them. The accretion disk may be the source of the radio waves at M87’s black hole. Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown on one side of the disk as an orange wash) would form a disk in your camera (Figure 8); the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be the same. However, the indirect images (the first of which is shown going halfway about the black hole as a green wash) would all pile up in the same place from your perspective, forming a bright and quite thin ring. This is the photon ring for a non-spinning black hole — the full set of indirect images of everything that lies at or inside the photon sphere but outside the black hole horizon.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, completely unobservable at EHT, the `photon ring’. I don’t know if their notation will be adopted but you might hear `lensed ring’ referred to in future. In any case, what EHT calls the photon ring includes what Gralla et al. call the lensed ring.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, it had to be pushed to its absolute limits.  Someday we’ll see both the disk and the ring, but right now, they’re all blurred together.  So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that blurring the ring makes it dimmer than the disk! The photo, therefore, will show mainly the accretion disk, not the photon ring! This is shown in Figure 9, which you can compare with the Black Hole `photo’ (Figure 10).  (Figure 9 is symmetric around the ring, but the photo is not, for multiple reasons — rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer till another post.)

More precisely, the ring and disk blur together, but the image is dominated by the disk.
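You can reproduce the logic of that simple calculation with a toy image; all the numbers below are invented for illustration, but they capture the relevant situation, a thin ring that is locally much brighter than the disk yet carries less total light, blurred by a beam comparable to the ring's radius:

import numpy as np
from scipy.ndimage import gaussian_filter

n = 512
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
r = np.hypot(xx, yy)

ring = np.where(np.abs(r - 52.0) < 1.0, 10.0, 0.0)    # thin and locally bright
disk = np.where((r > 60.0) & (r < 150.0), 1.0, 0.0)   # broad and locally dim

beam = 18.0                                           # blur comparable to the ring radius
print(f"before blurring: ring peak {ring.max():.1f}, disk peak {disk.max():.1f}")
print(f"after blurring:  ring peak {gaussian_filter(ring, beam).max():.2f},",
      f"disk peak {gaussian_filter(disk, beam).max():.2f}")

# The thin ring's light is spread over the whole beam, collapsing its surface
# brightness (from 10 to about 0.5 here), while the already-broad disk barely
# dims. The blurred image is dominated by the disk.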

Let’s say that again: the black hole `photo’ is likely showing the accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo’.  In general, the patch may well be smaller than what is usually termed the `shadow’ of the black hole.

This is very important. The photon ring’s radius barely depends on the rotation rate of the black hole, and therefore, if the light were coming from the ring, you’d know (without knowing the black hole’s rotation rate) how big its dark patch will appear for a given mass. You could therefore use the radius of the ring in the photo to determine the black hole’s mass. But the accretion disk’s properties and appearance can vary much more. Depending on the spin of the black hole and the details of the matter that’s spiraling in to the black hole, its radius can be larger or smaller than the photon ring’s radius… making the measurement of the mass both more ambiguous and — if you partially mistook the accretion disk for the photon ring — potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?
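To see how directly the mass estimate hangs on this question, here is the one-line arithmetic, using rough published numbers (a ring diameter of about 42 microarcseconds and a distance of about 16.8 Mpc) and the explicit assumption, flagged in the comments, that the observed ring is exactly a non-rotating black hole's photon ring:

import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
microarcsec = math.pi / (180.0 * 3600.0 * 1e6)   # one microarcsecond in radians

theta = 21.0 * microarcsec                       # ring RADIUS, from a ~42 microarcsec diameter
D = 16.8 * 3.086e22                              # distance to M87 in meters

# ASSUMPTION: the ring is the photon ring of a non-rotating black hole,
# whose angular radius is sqrt(27) * G M / (c^2 D).
M = theta * D / math.sqrt(27.0) * c**2 / G
print(f"inferred mass ~ {M / M_sun:.1e} solar masses")   # ~7e9

# Close to EHT's quoted ~6.5e9, but only because of the assumption above; if
# the ring is mostly the lensed disk, the prefactor changes and so does the
# mass. That is exactly the ambiguity under discussion.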

Here’s where the rotation rate is important.  For a non-rotating black hole the accretion disk’s inner edge is expected to lie outside the photon ring, but for a fast-rotating black hole (as M87’s may well be), it will lie inside the photon ring. And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette). It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

Gralla et al. subtly raise these questions but are careful not to overstate their case, because they have not yet completed their study of rotating black holes. But the question is now in the air. I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures.

(Rather than the accretion disk, it is also possible that the dominant emission comes from the inner portion of one of the jets that emerges from the vicinity of the black hole. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Why the Gargantua Black Hole From Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 11.

One thing that isn’t included in the Gargantua image from the movie (Figure 12) is a sort of Doppler effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so the image will be very asymmetric, unlike the movie image. See Figure 13 for what it would really `look’ like to the EHT.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole, and does not show the Doppler effect correctly (for artistic reasons).

Where a Rotating Black Hole Differs

Before I quit for the day, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) What a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A north-pole observer, an equatorial observer, and a near-north-pole observer see quite different things. (We are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are reasons to expect this. Even then, the story is complex.

2) Instead of a photon-sphere, there is what you might call a `photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. As I mentioned above, for high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; this leads to multiple indirect images of the disk and a potentially bright photon ring.

3) However, depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from points along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk all be circular and concentric. That further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that we cannot currently check.

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 10, 2019 04:04 AM

June 05, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVII: Super-Entropic Instability

I'm quite excited because of some new results I got recently, which appeared on the ArXiv today. I've found a new (and I think, possibly important) instability in quantum gravity.

Said more carefully, I've found a sibling to Hawking's celebrated instability that manifests itself as black hole evaporation. This new instability also results in evaporation, driven by Hawking radiation, and it can appear for black holes that might not seem unstable to evaporation in ordinary circumstances (i.e., there's no Hawking channel to decay), but turn out to be unstable upon closer examination, in a larger context. That context is the extended gravitational thermodynamics you've read me talking about here in several previous posts (see e.g. here and here). In that framework, the cosmological constant is dynamical and enters the thermodynamics as a pressure variable, p. It has a conjugate, V, which is a quantity that can be derived once you know the pressure and the mass of the black hole.

Well, Hawking evaporation is a catastrophic quantum phenomenon that follows from the fact that the radiation temperature of a Schwarzschild black hole (the simplest one you can think of) goes inversely with the mass. So the black hole radiates and loses energy, reducing its mass. But that means that it will radiate at even higher temperature, driving its mass down even more. So it will radiate even more, and so on. So it is an instability in the sense that the system drives itself even further away from where it started at every moment. Like a pencil falling over from balancing on a point.

This is the original quantum instability for gravitational systems. It's, as you probably know, very important. (Although in our universe, the temperature of radiation is so tiny for astrophysical black holes (they have large mass) that the effect is washed out by the local temperature of the universe... But if the universe ever had microscopic black holes, they'd have radiated in this way...)
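Both halves of that remark are easy to check from the textbook formula T = ħc³/(8πGMk_B); a quick sketch:

import math

hbar, c, G, kB, M_sun = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23, 1.989e30

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (in kg)."""
    return hbar * c**3 / (8.0 * math.pi * G * M * kB)

for m in [1.0, 10.0, 6.5e9]:                     # solar, stellar, and M87-sized masses
    print(f"M = {m:8.1e} M_sun:  T = {hawking_temperature(m * M_sun):.1e} K")

# Around 6e-8 K for a solar-mass hole and 1e-17 K for an M87-sized one: far
# below the 2.7 K microwave background, so such holes currently absorb more
# than they emit. And halving M doubles T, which is the runaway at work.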

So very nice, so very 1970s. What have I found recently?

A nice way of expressing the above instability is to simply say [...] Click to continue reading this post

The post News from the Front, XVII: Super-Entropic Instability appeared first on Asymptotia.

by Clifford at June 05, 2019 02:11 AM

May 27, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A conference in Paris

This week I’m in Paris, attending a conference in memory of the outstanding British astronomer and theoretician Arthur Stanley Eddington. The conference, which is taking place at the Observatoire de Paris, is designed to celebrate the centenary of Eddington’s famous measurement of the bending of distant starlight by the sun, a key experiment that offered important early support for Einstein’s general theory of relativity. However, there are talks on lots of different topics, from Eddington’s philosophy of science to his work on the physics of stars, from his work in cosmology to his search for a unified field theory. The conference website and programme are here.


The view from my hotel in Denfert-Rochereau

All of the sessions of the conference were excellent, but today was a particular treat with four outstanding talks on the 1919 expedition. In ‘Eddington, Dyson and the Eclipse of 1919’, Daniel Kennefick of the University of Arkansas gave a superb overview of his recent book on the subject. In ‘The 1919 May 29 Eclipse: On Accuracy and Precision’, David Valls-Gabaud of the Observatoire de Paris gave a forensic analysis of Eddington’s calculations. In ‘The 1919 Eclipse; Were the Results Robust?’ Gerry Gilmore of the University of Cambridge described how recent reconstructions of the expedition measurements gave confidence in the results; and in ‘Chasing Mare’s Nests; Eddington and the Early Reception of General Relativity among Astronomers’, Jeffrey Crelinsten of the University of Toronto summarized the doubts expressed by major American astronomical groups in the early 1920s, as described in his excellent book.

[Book covers: ‘No Shadow of a Doubt’ by Daniel Kennefick and ‘Einstein’s Jury’]

I won’t describe the other sessions, but just note a few things that made this conference the sort of meeting I like best. All speakers were allocated the same speaking time (30 mins including questions); most speakers were familiar with each other’s work; many speakers spoke on the same topic, giving different perspectives; there was plenty of time for further questions and comments at the end of each day. So a superb conference organised by Florian Laguens of the IPC and David Valls-Gabaud of the Observatoire de Paris.


On the way to the conference

In my own case, I gave a talk on Eddington’s role in the discovery of the expanding universe. I have long been puzzled by the fact that Eddington, an outstanding astronomer and strong proponent of the general theory of relativity, paid no attention when his brilliant former student Georges Lemaître suggested that an expanding universe could be derived from general relativity, a phenomenon that could account for the redshifts of the spiral nebulae, the biggest astronomical puzzle of the age. After considering some standard explanations (Lemaître’s status as an early-career researcher, the journal he chose to publish in and the language of the paper), I added two considerations of my own: (i) the theoretical analysis in Lemaître’s 1927 paper would have been very demanding for a 1927 reader and (ii) the astronomical data that Lemaître relied upon were quite preliminary (Lemaître’s calculation of a redshift/distance coefficient for the nebulae relied upon astronomical distances from Hubble that were established using the method of apparent magnitude, a method that was much less reliable than Hubble’s later observations using the method of Cepheid variables).


Making my points at the Eddington Conference

It’s an interesting puzzle because it is thought that Lemaître sent a copy of his paper to Eddington in 1927 – however I finished by admitting that there is a distinct possibility that Eddington simply didn’t take the time to read his former student’s paper. Sometimes the most boring explanation is the right one! The slides for my talk can be found here.

All in all, a superb conference.

 

by cormac at May 27, 2019 07:39 PM

May 24, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVI: Toward Quantum Heat Engines

(The following post is a bit more technical than usual. But non-experts may still find parts helpful.)

A couple of years ago I stumbled on an entire field that I had not encountered before: the study of Quantum Heat Engines. This sounds like an odd juxtaposition of terms since, as I say in the intro to my recent paper:

The thermodynamics of heat engines, refrigerators, and heat pumps is often thought to be firmly the domain of large classical systems, or put more carefully, systems that have a very large number of degrees of freedom such that thermal effects dominate over quantum effects. Nevertheless, there is a thriving field devoted to the study—both experimental and theoretical—of the thermodynamics of machines that use small quantum systems as the working substance.

It is a fascinating field, with a lot of activity going on that connects to fields like quantum information, device physics, open quantum systems, condensed matter, etc.
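If you've not met the idea before, here is about the simplest concrete example I can think of (a generic textbook-style illustration, not something from my paper): a single qubit taken around a quantum Otto cycle, where the 'adiabatic' strokes change the energy gap with the populations frozen and the other two strokes are thermalizations with the baths. Units with ħ = k_B = 1, numbers invented:

import math

def excited_population(gap, T):
    """Thermal excited-state population of a two-level system."""
    return 1.0 / (math.exp(gap / T) + 1.0)

E_h, E_c = 2.0, 1.0      # qubit energy gap during the hot and cold strokes
T_h, T_c = 4.0, 1.0      # hot and cold bath temperatures

p_h = excited_population(E_h, T_h)     # after thermalizing with the hot bath
p_c = excited_population(E_c, T_c)     # after thermalizing with the cold bath

Q_h = E_h * (p_h - p_c)                # heat absorbed from the hot bath
W = (E_h - E_c) * (p_h - p_c)          # net work output per cycle

print(f"work per cycle:  {W:.4f}")
print(f"Otto efficiency: {W / Q_h:.3f}   (equals 1 - E_c/E_h = {1.0 - E_c / E_h:.3f})")
print(f"Carnot bound:    {1.0 - T_c / T_h:.3f}")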

Anyway, I stumbled on it because, as you may know, I've been thinking (in my 21st-meets-18th century way) about heat engines a lot over the last five years since I showed how to make them from (quantum) black holes, when embedded in extended gravitational thermodynamics. I've written it all down in blog posts before, so go look if interested (here and here).

In particular, it was when working on a project I wrote about here that I stumbled on quantum heat engines, and got thinking about their power and efficiency. It was while working on that project that I had a very happy thought: Could I show that holographic heat engines (the kind I make using black holes), at least a class of them, are actually, in some regime, quantum heat engines? That would be potentially super-useful and, of course, super-fun.

The blunt headline statement is that they are, obviously, because every stage [...] Click to continue reading this post

The post News from the Front, XVI: Toward Quantum Heat Engines appeared first on Asymptotia.

by Clifford at May 24, 2019 05:16 PM

May 14, 2019

Axel Maas - Looking Inside the Standard Model

Acquiring a new field
I have recently started to look into a new field: quantum gravity. In this entry, I would like to write a bit about how this happens, acquiring a new field, so that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing. And when you enter a new field, stuff is slow. You have to learn a lot of basics, need to get an overview of what has been done, and what is still open. Not to mention that you have to get used to a different jargon. Thus, one rarely does so lightly.

I have in the past written already one entry about how I came to do Higgs physics. This entry was written after the fact. I was looking back, and discussed my motivation as I saw it at that time. It will be an interesting thing to look back at this entry in a few years, and judge what is left of my original motivation, and how I feel about it knowing what happened since then. But for now, I only know the present. So, let's get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe. Though even then a few open questions appear to remain.

But my interest in quantum gravity comes not from the promise of such a possibility. It has rather a quite different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs, and by now, in fact, at the standard model as a whole. In what we get, we see a clear distinction between two concepts: so-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exist, and into which particles they can decay or be combined. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact. After all, which gauge symmetry can be used to facilitate calculations differs from theory to theory, and thus the kind of gauge symmetry is more a statement about which theory we work on.

Now, if you add gravity, the distinction between both appears to blur. The reason is that in gravity space itself is different. Especially, you can deform space. Now, the original distinction of global symmetries and gauge symmetries is their relation to space. A global symmetry is something which is the same from point to point. A gauge symmetry allows changes from point to point. Loosely speaking, of course.

In gravity, space is no longer fixed. It can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of global symmetry then still make sense? Or do all symmetries become just 'like' local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives on space automatically a gauge symmetry? If we want to understand the results of what we did in the standard model, where there is no gravity, in the real world, where there is gravity, then this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot do it yet.

These questions were already for some time in the back of my mind; for a few years, actually, though I do not know how many exactly. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to this again and again. I knew that eventually I would need to address it, if nobody else did. So far, nobody has.

But why now? What prompted me to start now with it? As so often in science, it were other scientists.

Last year at the end of November/beginning of December, I took part in a conference in Vienna. I had been invited to talk about our research. The meeting has a quite wide scope, and also present were several people who work on black holes and quantum physics. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory, but with strong gravitational effects. Which is usually a black hole. In such a setup, the deformations of space are fixed. And also non-quantum black holes can swallow stuff. This combination appears to lead to the following: global symmetries appear to become meaningless, because everything associated with them can vanish in the black hole. However, keeping space deformations fixed means that local symmetries are also fixed. So they appear to become real, instead of auxiliary. Thus, this seems to be quite opposite to our result. And this, and the people doing this kind of research, challenged my view of symmetries. In fact, in such a half-way case, this effect seems to be there.

However, in a full quantum gravity theory, the game changes. Then space deformations also become dynamical. At the same time, black holes no longer need to swallow stuff forever, because they become dynamical, too. They develop. Thus, answering what really happens requires full quantum gravity. And because of this situation, I decided to start working actively on quantum gravity. Because I needed to answer whether our picture of symmetries survives, at least approximately, when there is quantum gravity. And to be able to answer such challenges. And so it began.

Within the last six months, I have now worked through a lot of the basic stuff. I have now a rough idea of what is going on, and what needs to be done. And I think I see a way how everything can be reconciled and make sense. It will still need a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still need time before I have a first real result. But I am quite happy with how things are progressing.

And that is the story how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁

by Axel Maas (noreply@blogger.com) at May 14, 2019 03:03 PM

May 12, 2019

Marco Frasca - The Gauge Connection

Is it possible to get rid of exotic matter in warp drive?

In 1994, Miguel Alcubierre proposed a solution of the Einstein equations (see here) describing a space-time bubble moving at arbitrary speed. It is important to notice that no violation of the light speed limit happens, because it is space-time itself that moves, and inside the bubble everything goes as expected. This kind of solution of the Einstein equations has a fundamental drawback: it violates the Weak Energy Condition (WEC) and, in order for it to exist, some exotic matter with negative energy density must exist. Needless to say, nobody has ever seen such kind of matter. There seems to be some clue in the way the Casimir effect works, but this relies on the way one interprets quantum fields rather than being evidence of its existence. Besides, since the initial proposal, a great number of studies have been published showing how pathological the Alcubierre solution can be, also resorting to quantum field theory (e.g. Hawking radiation). So, for now we can only dream of a possible interstellar travel, hoping that some smart guy will one day come out with a better solution.

Of course, Alcubierre’s solution is rather interesting from a physical point of view, as it belongs to a number of older solutions, like wormholes, time machines and the like, proposed by very famous authors such as Kip Thorne, that arise when one imposes a solution and then checks the conditions for its existence. This turns out to be a determination of the energy-momentum tensor which, unavoidably, is negative. Thus, they violate whatever energy condition of the Einstein equations, guaranteeing pathological behaviour. On the other side, they appear the most palatable for science fiction of possible futures of space and time travels. In these times, when this kind of technology is largely employed by the film industry, moving the fantasy of millions, we would hope that such futures should also be possible.

It is interesting to note the procedure used to obtain these particular solutions. One engineers a metric at the desk and then substitutes it into the Einstein equations to see under what conditions it is really a solution. In this way one fixes the energy requirements. On the other side, it is difficult to come out of the blue with a solution of the Einstein equations that provides such a particular behaviour by moving the other way around. It is also possible that such solutions simply do not exist and always imply a violation of the energy conditions. Some theorems have been proved in the course of time that seem to prohibit them (e.g. see here). Of course, I am convinced that the energy conditions must be respected if we want to have the physics that describes our universe. They cannot be evaded.

So, turning to the question in the title: could we think of a possible warp drive solution of the Einstein equations without exotic matter? The answer can be yes, of course, provided we are able to recover the York time, or warp factor, in the way Alcubierre obtained it with his pathological solution. At first, this seems an impossible mission. But the space-time bubble we are considering is a very small perturbation, and perturbation theory can come to the rescue, particularly when this perturbation can be locally very strong. In 2005, I proposed such a solution (see here), together with a technique to solve the Einstein equations when the metric is strongly perturbed. My intent at that time was to give a proof of the BKL conjecture. A smart referee suggested that I give an example of application of the method. The metric I obtained in this way, perturbing a Schwarzschild metric, yields a solution that has a York time (warp factor) identical to that of the Alcubierre metric. Of course, I respect the energy conditions, as I am directly solving the Einstein equations, which do.

The identity between the York times can be obtained provided the form factor proposed by Alcubierre is taken to be 1, but this is just the simplest case. Here is an animation of my warp factor.

Warp factor

The bubble is seen moving as expected along the x direction.
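For the curious, the shape being matched is easy to compute. Below is a small sketch of the York time for Alcubierre's original metric, θ = v_s ((x - x_s)/r_s) df/dr_s with his tanh form factor; sign conventions differ between papers, but what matters is the lobe structure, a contraction ahead of the bubble and an expansion behind it:

import numpy as np

v_s, R, sigma = 1.0, 1.0, 8.0      # bubble speed, bubble radius, wall sharpness

def df_dr(r):
    """Derivative of Alcubierre's form factor f(r) = [tanh(s(r+R)) - tanh(s(r-R))] / (2 tanh(sR))."""
    return (sigma / (2.0 * np.tanh(sigma * R))) * (
        1.0 / np.cosh(sigma * (r + R))**2 - 1.0 / np.cosh(sigma * (r - R))**2)

x = np.linspace(-3, 3, 241)        # displacement along the direction of motion
y = np.linspace(-3, 3, 241)
X, Y = np.meshgrid(x, y)
r = np.hypot(X, Y) + 1e-12

theta = v_s * (X / r) * df_dr(r)   # the York time (volume expansion rate)

i, j = np.unravel_index(theta.argmin(), theta.shape)
print(f"strongest contraction (theta < 0) at x = {X[i, j]:+.2f}, ahead of the bubble")
i, j = np.unravel_index(theta.argmax(), theta.shape)
print(f"strongest expansion   (theta > 0) at x = {X[i, j]:+.2f}, behind the bubble")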

My personal hope is that this will go beyond a mathematical curiosity. On the other side, it should be understood how to produce such kinds of perturbations of a given metric. I can think of the Einstein-Maxwell equations solved using perturbation theory. There is a lot of literature about this, and a lot of great contributions on the subject.

Finally, this could give a meaning to the following video by NASA.

by mfrasca at May 12, 2019 05:59 PM

April 30, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Week at The Surf Experience

I don’t often take a sun holiday these days, but I had a fabulous time last week at The Surf Experience in Lagos, Portugal. I’m not an accomplished surfer by any measure, but there is nothing quite like the thrill of catching a few waves in the sea with the sun overhead – a nice change from the indoor world of academia.

Not for the first time, I signed up for a residential course with The Surf Experience in Lagos. Founded by veteran German surfer Dago Lipke, The Surf Experience puts guests up at the surf lodge Vila Catarina, a lovely villa in the hills above Lagos, complete with beautiful gardens and swimming pool. Sumptuous meals are provided by Dago’s wife Connie, a wonderful cook. Instead of wandering around town trying to find a different restaurant every evening, guests enjoy an excellent meal in a quiet setting in good company, followed by a game of pool or chess. And it really is good company. Guests at TSE tend mainly to hail from Germany and Switzerland, with a sprinkling from France and Sweden, so it’s truly international – quite a contrast to your average package tour (or indeed our college staff room). Not a mention of Brexit, and an excellent opportunity to improve my German. (Is that what you tell yourself? – Ed)


Hanging out at the pool before breakfast


Fine dining at The Surf Experience


A game of cards and a conversation instead of a noisy bar

Of course, no holiday is perfect and in this case I managed to pick up an injury on the first day. Riding the tiniest wave all the way back to the beach, I got unexpectedly thrown off, hitting my head off the bottom at speed. (This is the most elementary error you can make in surfing and it risks serious injury, from concussion to spinal fracture). Luckily, I walked away with nothing more than severe bruising to the neck and chest (as later established by X-ray at the local medical clinic, also an interesting experience). So no life-altering injuries, but like a jockey with a broken rib, I was too sore to get back on the horse for a few days. Instead, I tried Stand Up Paddling for the first time, which I thoroughly enjoyed. It’s more exciting than it looks, must get my own board for calm days at home.


Stand Up Paddling in Lagos with Kiteschool Portugal

Things got even better towards the end of the week as I began to heal. Indeed, the entire surf lodge had a superb day’s surfing yesterday on beautiful small green waves at a beach right next to town (in Ireland, we very rarely see clean conditions like this; the surf is mainly driven by wind). It was fantastic to catch wave after wave throughout the afternoon, even if clambering back on the board after each wasn’t much fun for yours truly.

This morning, I caught a Ryanair flight back to Dublin from Faro, should be back in the office by late afternoon. Oddly enough, I feel enormously refreshed – perhaps it’s the feeling of gradually healing. Hopefully the sensation of being continuously kicked in the ribs will disappear soon and I’ll be back on the waves in June. In the meantime, this week marks a study period for our students before their exams, so it’s an ideal time to prepare my slides for the Eddington conference in Paris later this month.

Update

I caught a slight cold on the way back, so today I’m wandering around college like a lunatic going cough, ‘ouch’, sneeze, ‘ouch’.  Maybe it’s karma for flying Ryanair – whatever about indulging in one or two flights a year, it’s a terrible thing to use an airline whose CEO continues to openly deny the findings of climate scientists.

 

by cormac at April 30, 2019 09:49 PM