Particle Physics Planet


May 26, 2019

Peter Coles - In the Dark

A Story of St Stephen’s Green

I had a bit of spare time before the Opera on Friday and the weather was fine so I decided to go for a walk around the park in St Stephen’s Green.

I have walked past St Stephen’s Green many times but have never been inside, or if I have it was so long ago that I’ve now forgotten.

On my way around I noticed that there are posters here and there marking the events of the 1916 Easter Rising. Here’s an example:

The artwork is a bit ‘Boy’s Comic’ in style, but the descriptions are fascinating, especially because the Park and the area around it are pretty much unchanged in the more than one hundred years since the momentous events of 1916. There are still bullet holes in the Fusiliers’ Arch at the north-west corner of the Green, as there are in a number of other locations around Dublin.

St Stephen’s Green is inside the area marked ‘Citizens Army’. One look at the map will tell you why this was considered an important location to control as it is at the junction of several main roads. On the other hand if you actually visit the location you will see a big problem, namely that the Green itself is surrounded on all sides by very tall buildings, including the swanky Shelbourne Hotel to the North.

When a contingent of about 120 members of the Citizens Army arrived in St Stephen’s Green on Easter Monday, 24th April 1916, they immediately began erecting barricades outside, and digging trenches inside, the Park. They did not, however, have the numbers needed to seize and hold the buildings around it except for the Royal College of Surgeons building to the West.

The following morning, Tuesday 25th April, the British moved two machine guns into position, one in the Shelbourne Hotel and the other in the United Services club, along with numerous snipers. From these vantage points British soldiers could shoot down into the Park making it impossible for the rebels to move around.

The position inside the Green being untenable, the Rebels effected an orderly (but perilous) withdrawal to the Royal College of Surgeons, which they had fortified for the purpose. And that’s where they stayed until the end of the Rising.

The British realised that there was no need to assault the RCS building, as the force inside was contained and offered no real threat. From the roof of the building the Rebels watched helplessly as the British systematically reduced the resistance around the Rebel Headquarters in the GPO Building to the North, using artillery fired from College Green. Smoke rose into the sky as one entire side of O’Connell Street went up in flames and the perimeter slowly tightened around the GPO.

On the morning of Thursday 27th April the occupants of the RCS were alarmed to see that British soldiers had installed another machine gun on the roof of the University Church on the South side of St Stephen’s Green, along with snipers in adjacent buildings.

This was a very dangerous development that required a rapid response. A plan was devised that involved sending a squad of about 30 to break into buildings on the South side of the Green and start fires therein that would force the British to withdraw. This action is the subject of the poster shown above.

This is a photograph of the remarkable Margaret Skinnider, who is shown in the graphic leading the attempted assault. Before the Rising she was a school teacher. During the hostilities she initially acted as a scout and a runner, carrying messages to and from the GPO, but when given the chance she proved herself a crack shot with a rifle and showed conspicuous courage during the heavy fighting in and around St Stephen’s Green.

Skinnider insisted on leading the assault on Thursday 27th April despite the obviously high risk as it would involve running about 30 yards in full view of the machine gun position.

The detachment made its way out of the RCS to the South West corner of the Green safely enough but when men began breaking windows in Harcourt Street in order to gain entry to buildings there, the sound alerted the British soldiers who realised what was happening and opened fire as the Rebels made their move. Margaret Skinnider was leading from the front and she was inevitably among the casualties: she was hit three times by rifle bullets and very badly wounded.

Such was the volume of fire that the assault was abandoned and the surviving members of the squad retreated to the RCS where they remained until the general surrender on Saturday (29th April).

Skinnider was taken to hospital after the Rising ended but escaped and made her way to Scotland. She later returned to Ireland to take part in the War of Independence and subsequent Civil War. When the latter ended, in 1923, she went back to her job as a teacher in a primary school. She passed away in 1971 at the age of 79.

by telescoper at May 26, 2019 04:12 PM

Lubos Motl - string vacua and pheno

Murray Gell-Mann: 1929-2019
Sadly, as reported by The New York Times and many others, Murray Gell-Mann died at home yesterday, May 24th, at the age of 89.7. He was clearly one of the greatest living physicists – by integrated achievements. It was cancer that killed him – for years, he had used his broad scientific expertise to fight that lethal process inside his body. See this talk by Gell-Mann and David Agus about cancer. Physician Agus considered Gell-Mann to be his mentor. Cancer also killed Gell-Mann's beloved first wife.



A picture of MGM and Thomas Appelquist that I took in Harvard's Science Center in 2005. If you think that you are a theoretical physicist but I haven't photographed you, then you effectively fail to exist.

He was born in Manhattan in 1929 to a Jewish family that had arrived from what is now Ukrainian territory – then a town named Černovice (close enough to Czechoslovakia that we have our own name for it!) in our beloved homeland of Austria-Hungary.

He got his PhD when he was 21. His students included Ken Wilson (Gell-Mann really helped the renormalization group ideas to emerge – think of the Gell-Mann–Low equations), Sidney Coleman (who was celebrated at the event where I met Gell-Mann), Jim Hartle (that's linked to Gell-Mann's interest in the foundations of QM – they were also among the people who authored the consistent histories), and Barton Zwiebach (a top expert in string field theory today; I discuss Gell-Mann and strings later). Gell-Mann also discovered the seesaw mechanism that might give neutrinos their masses of the right magnitude.

Gell-Mann received his 1969 Nobel prize in physics mainly for the 1964 theoretical discovery of quarks (independently of George Zweig) – more precisely, as Ed S. insists, he got the prize for the Eightfold Way which only "led" to quarks – which made the classification of hadrons (proton, neutron, and their cousins) meaningful. Gell-Mann copied the name "quark" from Finnegans Wake by James Joyce ("Three quarks for Muster Mark" – indeed, it's not "Mister Clark" as I wanted to write LOL). That fancy choice was an early example of his deep interest in linguistics (he was claimed to speak 13 languages fluently, wow).

He was also obsessed with birdwatching (ornithology), archaeology, and more conventional intellectual interests. He owned three houses in different U.S. states (Santa Fe NM, Aspen CO, Pasadena CA), one of them practically a "museum", and he loved – and knew – expensive wine and cars (guess where the "Jaguar" comes from in "The Quark and the Jaguar" – the animal was there just to mask that he wanted to brag about the car).



Gell-Mann also stole the term "Eightfold Way" from the Eastern religions to describe some \(SU(3)\) octets and then he was surprised that many people thought that particle physics had something to do with Eastern religions. He has clearly done more fundamental work involving \(SU(3)\) in physics (especially the flavor symmetry) than any other physicist – which is also why the \(SU(3)\) counterparts of \(SU(2)\) Pauli matrices are called the Gell-Mann matrices.
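To make the Pauli-matrix analogy concrete (a standard illustration, not from the original post): the eight Gell-Mann matrices \(\lambda_1,\dots,\lambda_8\) generate \(SU(3)\), and the first three simply embed the Pauli matrices in the upper-left \(2\times 2\) block, e.g.

\[
\lambda_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\qquad
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},\qquad
\lambda_8 = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}.
\]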

Inspire shows that he has written 9 renowned papers – which are dominated by the strong force but actually include the electromagnetic and the weak force papers, too. (The weak force paper is the famous Feynman—Gell-Mann FG paper that isn't FG, i.e. fully good.)



Gell-Mann was a collaborator and a well-known rival of Feynman at Caltech. Gell-Mann represented the approach that wants to be "more conventional or aligned with the community" while Feynman was the "maverick".

Gell-Mann had one adopted child and two biological children, Lisa and Nick. They had a strained relationship with their dad. In particular, instead of being inspired to do some intelligent stuff, Lisa – previously a dutiful child – was sucked into extreme left-wing politics – the Central U.S. Organization for Marxism-Leninism – by an incredibly hypocritical rich man from a New York neighborhood. So I guess that he didn't like the left-wing bastards who had basically destroyed his daughter.

I spoke to Gell-Mann for half an hour during the 2005 Sidneyfest (a celebration of Coleman's life). He was a captivating storyteller and he was excited that I was interested in Feynman's opposition to toothbrushes – Gell-Mann really thought that Feynman's teeth were decaying and that he was doing really stupid things for his maverick status – and in Gell-Mann's role in a wonderful TV commercial for Enron (for younger readers: Enron was something like Tesla but 20 years ago).

While Feynman loved to mock John Schwarz in the elevators of Caltech ("How many dimensions does your world have today, John?"), Gell-Mann was a key sponsor and defender who has allowed an early string theory group to emerge at Caltech. In the early 1970s, Gell-Mann was just capable of figuring out that string theory was likely to be here with us to stay – as the default state-of-the-art foundation of all of physics – at least for 50 more years. Barton Zwiebach's years as Gell-Mann's student say something, too.

But just to be sure, Gell-Mann didn't understand the value of string theory right away. As late as 1970, he mocked Lenny Susskind for strings, too – and he played some role in delaying Susskind's paper.

The difference between Feynman's and Gell-Mann's attitudes to string theory had a very simple primary reason. As this 3-minute monologue by Gell-Mann shows, Gell-Mann actually understood string theory at a level that was about 50 times more detailed and technical than Feynman's (Gell-Mann knew something that no critic does, namely the structure of actual papers, such as those about sectors in the superstring). So Feynman talked as an absolute layman; Gell-Mann did not. Every person in the world who says negative things about string theory is really a layman. In this 2-minute monologue, Gell-Mann complained that it was a pity that Shelly Glashow became the original hostile promoter of the fundamental misconceptions about "string theory that couldn't be tested" etc. There is more on Gell-Mann and strings in his Note on the Prehistory of String Theory (which makes Gell-Mann's connections to early stringy technical results clear).

This book is an introduction to the quark theory for 1-year-old students.

Aside from his support for string theory, Gell-Mann is also one of the forefathers of naturalness – he coined the totalitarian principle which states that everything that is not forbidden is mandatory (just like in the totalitarian regimes). In particle physics, it means that all coefficients that can't be proven to be zero by some principles are bound to be nonzero and probably "of order one" in some units. He has also opposed "quantum flapdoodle", the misuse of quantum mechanics' alleged weirdness designed to push other topics in strange directions.

Gell-Mann's name has appeared in 103 TRF blog posts. You may want to read some of them. Try a Google ordering by relevance.

RIP, Murray.


P.S.: If you want to share my frustration about how much the BBC's and other popular science programs have dumbed down over the last 50 years, watch Strangeness Minus Three from 1964 (three parts featuring both Feynman and Gell-Mann; and Yuval Ne'eman, too). One part of the gloomy evolution is that journalists have generally turned into a cesspool. Another cause of the evolution is that scientists have lost the spine and self-confidence to insist that they determine what is actually being said about science. The body of scientists has been turned into an obedient herd of effeminate tools.



Here you have another, 12-minute-long monologue about the birth of the quark model (thanks, Willie!):



He ordered the hadrons into multiplets, thought about subunits, and found out that they – the quarks – had to have fractional charges \(+2/3\) and \(-1/3\). On a napkin during a visit, he realized that this counterintuitive trait wouldn't be a problem as long as the quarks stayed confined (it was almost a decade before confinement was derived from QCD). He called the quarks "mathematical" because they had to be "confined". Many historians later wrote that the word "mathematical" meant that he didn't believe in the quarks at all, which is completely wrong!
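To see how the fractional charges reproduce the observed hadrons (my arithmetic, not from the monologue): with \(Q_u = +2/3\) and \(Q_d = -1/3\), the proton \((uud)\) and the neutron \((udd)\) come out exactly right,

\[
Q_p = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad
Q_n = \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0.
\]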

While the Nobel foundation didn't repeat the historians' mistake, the fabricated confusion was a reason why the prize wasn't given to him easily and simply "for the quarks", as described at the top of this post. That's obviously what should have taken place.

These days this position is even more widespread – various overgrown clones of the critics of physics that have emerged in the recent 15 years use the word "mathematical" as if it denoted something wrong – and they completely contaminate the journalistic environment surrounding physics with these toxic delusions. But there is absolutely nothing wrong with it – quarks are just inseparable from each other. The quark theory turned out to be perfectly correct – the quarks were just not directly observable in isolation, which doesn't contradict the fact that they're demonstrably and completely real. The case of string theory is so far perfectly analogous. It's also "mathematical" but that doesn't mean that there is anything "physically unreal" about it! Clearly, all the incompetent people have failed to understand that mathematics must play and has played a decisive role in physics since the very "Newtonian" beginnings – they're still dreaming about a setup in which mathematics is suppressed, declared to be either wrong or not even wrong, and decisions are made by some sociological tools or bullying.

Gell-Mann repeatedly tried to talk to the historians who wrote the untruths that frustrated him and to fix their perverted story about the history, but it was like talking to a wall. Also, Gell-Mann said that he had the sound approximating "cork" in mind before he found the final spelling "quark" in Joyce's book. I think that if this chronology is authentic, it was quite some good luck that Joyce wrote about quarks – and in fact, there were three quarks in the sentence!

by Luboš Motl (noreply@blogger.com) at May 26, 2019 11:55 AM

May 25, 2019

Christian P. Robert - xi'an's og


Young Americans

I heard this song from David Bowie’s 1975 album on the [national public] radio the other day and it reminded me that this was one of the first LPs I bought… and played until it was no longer audible.

by xi'an at May 25, 2019 05:47 PM

Peter Coles - In the Dark

R.I.P. Murray Gell-Mann (1929-2019)

I heard this morning of the death of Murray Gell-Mann who passed away yesterday at the age of 89. Professor Gell-Mann was awarded the Nobel Prize for Physics in 1969 for his work on elementary particle physics, specifically for the development of the quark model. It was Gell-Mann who appropriated the phrase from James Joyce’s Finnegans Wake (‘Three quarks for Muster Mark’) from which the word `quark’ passed into the scientific lexicon.

There will be proper tributes from people who knew the man and his science far better than I do, so I’ll just say here that he was a man who made enormous contributions to physics and who will be greatly missed.

Rest in peace Murray Gell-Mann (1929-2019).

by telescoper at May 25, 2019 03:48 PM

Peter Coles - In the Dark

The Magic Flute at the Gaiety Theatre

Last night I went for the first time to the Gaiety Theatre in Dublin for a performance of Mozart’s The Magic Flute by Irish National Opera in conjunction with the Irish Chamber Orchestra. It was my first INO performance and my first visit to the Gaiety Theatre (although I’m sure it won’t be the last of either of those). I’ve actually lost count of the number of times I’ve seen The Magic Flute, but I hope this won’t be the last time either!

The Gaiety Theatre is quite compact, which engenders a more intimate atmosphere than is often experienced at the Opera. The music being provided by a small-ish chamber orchestra also suited the venue, but more importantly gave a fresh and sprightly feeling to Mozart’s wonderful score. You would think it would be hard to make Mozart sound stodgy, but some orchestras seem to manage it. Not last night though.

The scenery is rather simple, as is needed for touring Opera playing in relatively small venues. The stage directions of the Magic Flute are in any case so outlandish that it’s virtually impossible to enact them precisely according to instructions.

For example, what is the set designer supposed to do with this?

The scene is transformed into two large mountains; one with a thundering waterfall, the other belching out fire; each mountain has an open grid, through which fire and water may be seen; where the fire burns the horizon is coloured brightly red, and where the water is there lies a black fog.

This production takes the sensible approach of leaving a lot to the imagination of the audience though that does mean, for example, that there is no dragon…

The costumes are a different matter. The hero Tamino begins in the drab clothes of a working man of the 19th century, as do the three ladies that he encounters early on in Act I. The enigmatic Sarastro and his followers are however dressed as the gentry of a similar period, and are accompanied by a chorus of domestic servants. As Tamino works his way into the Brotherhood he becomes progressively gentrified in manner and in clothing. A central idea of the Opera is that of enlightenment values prevailing over superstition, but under the surface oppression remains, both in the form imposed by property-owners on the working poor and in the misogynistic behaviour of Sarastro and others, and in the racist stereotyping of the villainous and lustful `Moor’, Monostatos. This production is sung in the original German, and there were gasps from the audience when they saw some of the surtitles in English. Although The Magic Flute is on one level a hugely enjoyable comic fantasy, it also holds up a mirror to attitudes of Mozart’s time – and what you see in it is not pleasant, especially when you realize that many of these attitudes are still with us.

Importantly, however, this undercurrent does not detract from the basic silliness which I believe is the real key to this Opera. It’s fundamentally daft, but it succeeds because it’s daft in exactly the same way that real life is.

In last night’s performance the two fine leads were Anna Devin as Pamina (soprano) and Nick Pritchard as Tamino (tenor). The excellent Gavan Ring provided suitable comic relief and a fine baritone voice to boot. Kim Sheehan (soprano) as the Queen of the Night doesn’t have the biggest voice I’ve ever heard, but she sang her extraordinarily difficult coloratura arias (one of them including a top `F’) with great accuracy and agility and brought considerable pathos to her role instead of making it the pantomime villain you sometimes find. Sarastro was Lukas Jakobski (bass), memorable not only for his superb singing way down in the register, but for his commanding physical presence. Well over 2 metres tall, he towered over the rest of the cast. I think he’s the scariest Sarastro I’ve ever seen!

And finally I should congratulate the three boys: Nicholas O’Neill, Seán Hughes and Oran Murphy. These roles are extremely demanding for young voices and the three who performed last night deserved their ovation at the end.

The last performances in this run are today (Saturday 25th May, matinée and evening), so this review is too late to make anyone decide to go and see it, but last night’s performance was recorded for RTÉ Lyric FM and will be broadcast at a future date.

by telescoper at May 25, 2019 03:05 PM

Jon Butterworth - Life and Physics

Murray Gell-Mann
Sad to learn that Murray Gell-Mann, pioneer of particle physics and more, has died at the age of 89.  Here is the obituary from Caltech. The first person to bring some order to Hadron Island and point the way to … Continue reading

by Jon Butterworth at May 25, 2019 06:50 AM

May 24, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVI: Toward Quantum Heat Engines

(The following post is a bit more technical than usual. But non-experts may still find parts helpful.)

A couple of years ago I stumbled on an entire field that I had not encountered before: the study of Quantum Heat Engines. This sounds like an odd juxtaposition of terms since, as I say in the intro to my recent paper:

The thermodynamics of heat engines, refrigerators, and heat pumps is often thought to be firmly the domain of large classical systems, or put more carefully, systems that have a very large number of degrees of freedom such that thermal effects dominate over quantum effects. Nevertheless, there is a thriving field devoted to the study—both experimental and theoretical—of the thermodynamics of machines that use small quantum systems as the working substance.
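For orientation (standard textbook thermodynamics, not specific to the paper): an engine draws heat \(Q_H\) from a hot reservoir, rejects \(Q_C\) to a cold one, and delivers work \(W = Q_H - Q_C\), so its efficiency is bounded by Carnot's value,

\[
\eta = \frac{W}{Q_H} = 1 - \frac{Q_C}{Q_H} \;\le\; 1 - \frac{T_C}{T_H}.
\]

A quantum heat engine runs such a cycle with a small quantum system – a qubit, say, or a harmonic oscillator – as the working substance.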

It is a fascinating field, with a lot of activity going on that connects to fields like quantum information, device physics, open quantum systems, condensed matter, etc.

Anyway, I stumbled on it because, as you may know, I've been thinking (in my 21st-meets-18th century way) about heat engines a lot over the last five years since I showed how to make them from (quantum) black holes, when embedded in extended gravitational thermodynamics. I've written it all down in blog posts before, so go look if interested (here and here).

In particular, it was when working on a project I wrote about here that I stumbled on quantum heat engines, and got thinking about their power and efficiency. It was while working on that project that I had a very happy thought: Could I show that holographic heat engines (the kind I make using black holes) -at least a class of them- are actually, in some regime, quantum heat engines? That would be potentially super-useful and, of course, super-fun.

The blunt headline statement is that they are, obviously, because every stage [...] Click to continue reading this post

The post News from the Front, XVI: Toward Quantum Heat Engines appeared first on Asymptotia.

by Clifford at May 24, 2019 05:16 PM

ZapperZ - Physics and Physicists

Charles Kittel
Physicist Charles Kittel passed away this past May 15th, 2019.

This is one of those names that will not ring a bell to the public. But for most of us in the field of condensed matter physics, his name has almost soared to mythical heights. His book "Introduction to Solid State Physics" has become almost a standard for everyone entering this field of study. That text alone has educated an innumerable number of physicists who went on to make contributions to a field of physics that has a direct impact on our world today. It is also a text that is used (yes, it is still being used in physics classes today) in many electrical engineering courses.

He has been honored with many awards and distinctions, including the Buckley prize from the APS. He may be gone, but his legacy, influence, and certainly his book, will live on.

Zz.

by ZapperZ (noreply@blogger.com) at May 24, 2019 01:34 PM

Peter Coles - In the Dark

Exercising the Franchise

First thing this morning I cast my vote in Maynooth, the polling station for which is in the Presentation Girls School, a Catholic Primary School. It wasn’t amazingly busy inside but there was a steady flow of people coming through. There were 8 desks dishing out ballot papers, more desks than you usually get at a polling station in the UK. There were three ballot papers, one for the European Parliament, one for the Local Council, and one for the Constitutional Referendum.

Anyway, Polling Card in hand I eventually found the right desk. Having done my homework last night I ranked all 17 candidates for the European Parliament Elections and all 9 for the Local Council Elections, copying my preferences from a piece of paper I had taken with me. The Single Transferable Vote system must make counting quite a lengthy process, so it will take some time before the results are known.

At least I got to vote, which many EU citizens in the UK were unable to do. There’s a major scandal brewing about what looks like deliberate disenfranchisement. These things shouldn’t happen in a democracy, but apparently in the United Kingdom they do.

I had a very busy morning after arriving at the Department so I’ve just discovered that Theresa May has resigned. Part of me is delighted as I thought she was callous and mean-spirited as well as being useless. Apparently she cried when she read out her resignation statement. You’d have to have a heart of stone not to burst out laughing.

The feeling of happiness that the current PM is leaving is however tempered by the very high probability that whoever replaces her will be even worse…

So I’m now heading off to Dublin again for the second session of IQF 2019 after which I’ll be going to the Gaiety Theatre for a performance of the Magic Flute, an Opera about Particle Physics.

by telescoper at May 24, 2019 12:41 PM

May 23, 2019

Christian P. Robert - xi'an's og

truncated Normal moments

An interesting if presumably hopeless question spotted on X validated: a lower-truncated Normal distribution is parameterised by its location, scale, and truncation values, μ, σ, and α. There exist formulas for the mean and variance of the resulting distribution: when α=0, they read

\[
\Bbb{E}_{\mu,\sigma}[X] = \mu + \frac{\varphi(\mu/\sigma)}{1-\Phi(-\mu/\sigma)}\,\sigma
\]

and

\[
\text{var}_{\mu,\sigma}(X) = \sigma^2\left[1 - \frac{\mu\varphi(\mu/\sigma)/\sigma}{1-\Phi(-\mu/\sigma)} - \left(\frac{\varphi(\mu/\sigma)}{1-\Phi(-\mu/\sigma)}\right)^2\right]
\]

but there is no easy way to recover (μ, σ) from these two quantities, beyond numerical resolution of both equations. One of the issues is that (μ, σ) is not a location-scale parameter for the truncated Normal distribution when α is fixed.
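A minimal numerical sketch of that resolution (my code, not from the post; it assumes SciPy and the zero-truncation formulas above):

```python
# Invert the two moment formulas for a Normal truncated at zero:
# given a target mean and variance, solve numerically for (mu, sigma).
from scipy.stats import norm
from scipy.optimize import fsolve

def truncated_moments(mu, sigma):
    # r is the ratio phi(mu/sigma) / (1 - Phi(-mu/sigma)) from both formulas
    r = norm.pdf(mu / sigma) / (1.0 - norm.cdf(-mu / sigma))
    mean = mu + r * sigma
    var = sigma**2 * (1.0 - (mu / sigma) * r - r**2)
    return mean, var

def solve_mu_sigma(target_mean, target_var, guess=(1.0, 1.0)):
    def equations(p):
        m, v = truncated_moments(p[0], abs(p[1]))  # keep sigma positive
        return [m - target_mean, v - target_var]
    mu, sigma = fsolve(equations, guess)
    return mu, abs(sigma)

mu, sigma = solve_mu_sigma(1.5, 0.5)
print(mu, sigma)  # the (mu, sigma) whose truncated moments are (1.5, 0.5)
```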

by xi'an at May 23, 2019 10:19 PM

Peter Coles - In the Dark

LiteBIRD Newsflash

Just a quick post to pass on the news that the space mission LiteBIRD has been selected as the next major mission by the Japanese Space Agency JAXA and  Institute for Space and Astronautical Science (ISAS).

LiteBIRD (which stands for `Lite (Light) satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection’) is a planned space observatory that aims to detect the footprint of the primordial gravitational waves on the cosmic microwave background (CMB) in the form of a B-mode polarization pattern. This is the signal that BICEP2 claimed to have detected five years ago to much excitement, but which was later shown to be caused by galactic dust.

It’s great news for a lot of CMB people all round the world that this mission has been selected – including some old friends from Cardiff University. Congratulations to all of them!

I’m not sure when the launch date will be, but the mission will last three years and will operate at the Earth-Sun Lagrange point known as L2. It will be a very difficult task to extract the B-mode signal from foregrounds and instrumental artifacts, so although there’s joy that the mission has been selected, the real work starts now!

by telescoper at May 23, 2019 03:16 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Digital Business Trends for Older Adults

There’s no denying that a business-savvy older adult might know more about keeping a company afloat than their younger counterparts. A survey of 138,000 small businesses with fewer than 500 employees revealed that the third of these companies owned by people older than 55 sticks around longer than other businesses. Businesses owned by adults over 55 have 17 “cash buffer days”, five more than businesses held by the youngest adults and four more than those owned by people aged 35 to 54.

Age as an Asset

Being older does not necessarily mean getting slower. Older adults can use their age to further their company’s fortunes. By 2016, about 24 percent of new entrepreneurs were aged 55 and up. About 16 percent of self-employed Americans are aged 65 and up.

Starting a business of their own is a necessity for older professionals. That means their accumulated experience becomes an investment—for themselves and the people who will avail of their services and products. By their age, professionals have already accumulated a vast network of contacts and accrued business experience. Furthermore, their expertise makes older adults better mentors.

Digital Marketing

Digital marketing is unique for each demographic. Older people are no exception. Traditional methods like newspaper advertisements and direct mail are still popular options. However, as more adults become online natives, digital marketers are creating unique digital strategies for this market.

Statistics show that 42 percent of people aged 65 are on the internet daily, while 40 percent of these people have used e-commerce at least once in their life. Digital marketers are making use of this fact through search engine optimization, ease of use for website interfaces, and targeted pay-per-click (PPC) campaigns.

Search engine optimization caters to the tendency of older persons to use blunter search terms. Making websites easier to use convinces older adults to stay on the site longer. PPC campaigns take advantage of a business owner’s knowledge of their customer’s digital habits.

Business Intelligence Tools


Business intelligence tools are a huge boon for startups and bigger businesses. These tools help businesses make sense of the data they’re receiving through analytic tools. Information gathered from these tools help businesses predict future trends, adjust current strategies, and study past successes. Cloud-based business intelligence tools don’t require installation and are more affordable options for smaller enterprises.

Aside from subscribing to a cloud-based service, businesses may also acquire data from other sources. Transactional data can be derived from Enterprise Resource Planning systems. Customer Relationship Management and e-commerce are also viable sources of data.

The Right Business Idea for Seniors

The business strategy you used in the ’80s or ’90s to expand an older business may not work now. Older business owners have to adapt their knowledge to tools and business trends of today. Entering the right business can spell their venture’s success in their later years.

Catering to the needs of their fellow retirees can pay dividends in the end. Related to this are in-house care for older adults without special health needs, gardening and lawn care, pet and house sitting, house cleaning, concierge and esthetic services.

On the flip side, age is not an immediate advantage for business owners. Truly successful older entrepreneurs use their experience efficiently. They become receptive rather than resistant to new business models. Stubbornness ruins business opportunities for young and old entrepreneurs alike. Open-minded, firm-handed older entrepreneurs can run their businesses well after retirement.

The post Digital Business Trends for Older Adults appeared first on None Equilibrium.

by Bertram Mortensen at May 23, 2019 03:32 AM

May 22, 2019

Emily Lakdawalla - The Planetary Society Blog

Organizing a Watch Party
Watch parties are a fun and informal way to bring together space fans and the public to experience the excitement of space exploration.

May 22, 2019 01:44 PM

May 21, 2019

Emily Lakdawalla - The Planetary Society Blog

LightSail Launch Event
LightSail 2’s launch window opens on June 22, and we are finalizing plans for our launch viewing celebrations. Once we have finished coordinating the details with the Air Force’s STP-2 mission team and the Kennedy Space Center, we will share them with all of our members and backers so that you can join us in person or remotely via the internet.

May 21, 2019 09:52 PM

Emily Lakdawalla - The Planetary Society Blog

Hayabusa2 Encounters Snag Trying to Drop Second Target Marker
The spacecraft is healthy and safe, but time is running out to collect a second sample from asteroid Ryugu.

May 21, 2019 08:22 PM

John Baez - Azimuth

The Monoidal Grothendieck Construction

My grad student Joe Moeller is talking at the 4th Symposium on Compositional Structures this Thursday! He’ll talk about his work with Christina Vasilakopolou, a postdoc here at U.C. Riverside. Together they created a monoidal version of a fundamental construction in category theory: the Grothendieck construction! Here is their paper:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

The monoidal Grothendieck construction plays an important role in our team’s work on network theory, in at least two ways. First, we use it to get a symmetric monoidal category, and then an operad, from any network model. Second, we use it to turn any decorated cospan category into a ‘structured cospan category’. I haven’t said anything about structured cospans yet, but they are an alternative approach to open systems, developed by my grad student Kenny Courser, that I’m very excited about. Stay tuned!

The Grothendieck construction turns a functor

\[ F \colon \mathsf{X}^{\mathrm{op}} \to \mathsf{Cat} \]

into a category \(\int F\) equipped with a functor

\[ p \colon \int F \to \mathsf{X}. \]

The construction is quite simple but there’s a lot of ideas and terminology connected to it: for example, a functor \(F \colon \mathsf{X}^{\mathrm{op}} \to \mathsf{Cat}\) is called an indexed category since it assigns a category to each object of \(\mathsf{X}\), while the functor \(p \colon \int F \to \mathsf{X}\) is of a special sort called a fibration.

I think the easiest way to learn more about the Grothendieck construction and this new monoidal version may be Joe’s talk:

• Joe Moeller, Monoidal Grothendieck construction, SYCO4, Chapman University, 22 May 2019.

Abstract. We lift the standard equivalence between fibrations and indexed categories to an equivalence between monoidal fibrations and monoidal indexed categories, namely weak monoidal pseudofunctors to the 2-category of categories. In doing so, we investigate the relation between this global monoidal structure where the total category is monoidal and the fibration strictly preserves the structure, and a fibrewise one where the fibres are monoidal and the reindexing functors strongly preserve the structure, first hinted by Shulman. In particular, when the domain is cocartesian monoidal, lax monoidal structures on a functor to Cat bijectively correspond to lifts of the functor to MonCat. Finally, we give some indicative examples where this correspondence appears, spanning from the fundamental and family fibrations to network models and systems.

To dig deeper, try this talk Christina gave at the big annual category theory conference last year:

• Christina Vasilakopoulou, Monoidal Grothendieck construction, CT2018, University of Azores, 10 July 2018.

Then read Joe and Christina’s paper!

Here is the Grothendieck construction in a nutshell:
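In symbols (a standard presentation of the construction): the category \(\int F\) has objects the pairs \((x, a)\) with \(x\) an object of \(\mathsf{X}\) and \(a\) an object of \(F(x)\), and a morphism \((x,a) \to (y,b)\) is a pair \((f, \varphi)\) with \(f \colon x \to y\) in \(\mathsf{X}\) and \(\varphi \colon a \to F(f)(b)\) in \(F(x)\); the functor \(p \colon \int F \to \mathsf{X}\) simply forgets the second component: \(p(x,a) = x\).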

by John Baez at May 21, 2019 05:15 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Harness the Weather for Better Business

Extreme weather changes are happening more frequently in recent years as we experience global climate change. This affects not only our daily lives but the nation’s economy as well, as businesses – big or small – are affected in many ways.

Weather events bring unique challenges to every business. They directly impact the geography, and they even influence consumer behavior. Rather than fearing the adverse effects of extreme weather changes, you can plan for it to ensure that your business remains competitive and profitable.

Prepare Your Establishment for the Weather

Preparation is vital to avoid business interruption or slowdown. Understand the weather pattern in your area and pay attention to forecasts. Make sure that your building design and systems are ready to face the anticipated challenges, such as snowstorms or power outages. As heat waves and extreme cold are experienced throughout the country, it is essential to maintain efficient air conditioners and comfort air heaters to protect your employees and customers alike.

Understand the Impact of Ambient Temperature on Workers and Consumers

Be aware that the Occupational Safety and Health Administration (OSHA) recommends setting the workplace temperature between 68 and 76 degrees Fahrenheit. Aside from this range being optimal for physical health, it also helps keep workforce morale in top shape. The right thermal conditions provide a shield against the dreary cold and even Seasonal Affective Disorder (SAD). Interestingly, one study has shown that a temperature as high as 77 degrees Fahrenheit is ideal for office productivity. At this temperature, employees were found to produce more work with minimal errors.

Another reason to ensure that your establishment maintains a comfortable ambient temperature is that it influences consumer behavior. The dreadful winter in the past year had people hunkering down in their homes, reducing revenues for many businesses. Naturally, customers are more likely to explore a shop or dine in a restaurant if the air inside is comfortably heated against the extreme cold outdoors. Moreover, studies show that pleasantly warm temperatures elicit emotional warmth from people, which improves their perceptions of the products they are interested in and makes them feel better about purchasing and spending more.

Put Protocols in Place

It is essential that you educate your employees on how to respond to challenging situations brought on by inclement weather. Prepare an emergency operations and staffing plan in case it gets too bad to come in to work. Train them to prevent and deal with possible property damage. Also, build a backup plan for your supply chain to ensure that your inventory does not suffer during bad weather.

Weatherproof Your Business


There are many more ways to protect your business from the undesirable effects of extreme weather changes. You remain in control of your office management, marketing, and customer service strategies. Harness the weather conditions to create a business atmosphere that will appeal to consumers. Stay ahead of the competition by being the people’s choice when weather conditions become discouraging for sales potential. No matter what your business is, the environment you create for both your workers and customers can be more influential than your actual products.

The post Harness the Weather for Better Business appeared first on None Equilibrium.

by Bertram Mortensen at May 21, 2019 01:00 AM

May 20, 2019

Lubos Motl - string vacua and pheno

WGC and modular invariance: does the WGC constrain low-energy physics at all?
Physicists writing papers about the Weak Gravity Conjecture (WGC) seem to be particularly excited about their work so they often submit their paper to be at the top of the hep-th list. Two weeks ago, a paper on the axion WGC was posted one second after the collection of papers for the new day started.

That's exactly what was achieved by another group of authors, Shiu and Cole at Amsterdam and Aalsma in Madison, who just submitted
Weak Gravity Conjecture, Black Hole Entropy, and Modular Invariance
a second after the beginning of the new arXiv day. A funny achievement of this paper is that it is the 500th followup of the WGC paper according to Inspire, so the Weak Gravity Conjecture has made it to the highest, "renowned", category of papers. Both Arkani-Hamed and Vafa have 19 renowned papers so there is no reason to congratulate them, and Alberto has 2+1 renowned papers with Nima et al. But it's my first renowned paper, so congratulations to me. ;-)



One followup is needed for our 7-author pp-wave paper to join the Screwing String Theory as a famous paper (250+), and one or two dozen are needed for the two quasinormal papers to do the same.

In a recent discussion, a commenter nicknamed TwoBs has suggested that the WGC and perhaps similar swampland criteria are (almost?) vacuous because low-energy considerations, along with the mere fact that the low-energy effective theory is an RG limit of a more fundamental theory at high energies, are enough to prove that e.g. gravity is the weakest force and similar inequalities.



In particular, the WGC says that there exist elementary particles whose charge-to-mass ratio exceeds the ratio calculated for extremal black holes. That's enough for a weak form of the WGC – it's enough for large extremal black holes to be able to evaporate although their "right to do so" is on the edge. And TwoBs has pointed out that one may basically show that some higher-derivative corrections to the black holes' charge-to-mass ratio are guaranteed to have the right sign required by the WGC.
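In formulas (the standard statement, in conveniently normalized units): the WGC posits the existence of a state whose charge-to-mass ratio obeys

\[
\frac{q}{m} \;\geq\; \frac{Q}{M}\bigg|_{\rm extremal} = 1,
\]

where the right-hand side is the extremal Reissner–Nordström value, so that even an extremal black hole can shed its charge by emitting such a state.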

At this level, I would agree that the WGC is basically "proven" because the slightly "superextremal" black holes may be counted as objects that are predicted by the WGC to exist.

Well, Aalsma et al. – the authors of the new paper – disagree with this view of TwoBs. Instead, they say that these large "superextremal" black holes whose existence may be proven just from some low-energy effective field theory cannot be considered elementary particles because their mass is too high and the effective theory where they're treated as elementary particles is no longer valid at the high energies comparable to these black holes' mass.

I am not sure where I stand on this subtle debate. Of course, we weren't terribly rigorous about our statement. But I think that what we wrote was that some objects with the "superextremal" charge-to-mass ratio have to exist, and the black holes heavier than the Planck mass – those being the microstates whose charge-to-mass ratio only depends on the low-energy effective field theory, in some expansion – are enough to satisfy the WGC. We didn't require these states to be very light.

In fact, it's very likely that if the heavy black holes that are "superextremal" according to the charge-to-mass ratio exist, there must also exist "rather light" ones whose mass is at most of the order of the Planck mass, because it's not clear where the extra "parametric" enhancement of the minimum black hole mass that obeys the WGC could come from.

At any rate, Aalsma et al. at least think that we should have formulated a stronger version of the conjecture that basically says that the elementary particles with the "superextremal" charge-to-mass ratio should also exist at masses that are lower than the Planck mass, or at least lower than or comparable to the Planck mass. After all, the probability that a large black hole Hawking emits another "rather large" black hole is exponentially tiny and maybe we don't want the "loophole" to be this shaky.

And once we strengthen the WGC in this way, i.e. once we require the "superextremal" sub-Planckian particles, the low-energy arguments no longer guarantee the existence of such states. I am not aware of terribly strong arguments that would say that we should strengthen the WGC in this way. But to be sure, this strengthening must be considered important for the paper by Aalsma et al. to be important, too! ;-)

OK, so Aalsma et al. implicitly say that all of us should strengthen the WGC and the states with the "superextremal" charge-to-mass ratio should be sub-Planckian. Can we say something about the existence of these light particles that guarantee the WGC in that case? To justify their "Yes" answer, they use a paradigm that all of us have always wanted to be relevant for the swampland arguments, namely the UV-IR connection.

The laws apparent at high energies (UV, ultraviolet region) – like the detailed patterns of the heavy black hole microstates – are being linked to the laws seen at low energies (IR, infrared region). While banned in the effective field theories, pretty much by their definition, the UV-IR connections are believed to be omnipresent in string/M-theory. But there really exists a subclass of these connections that is understood well – the modular invariance of perturbative string theory.

When the string coupling constant \(g_s\ll 1\) is very small, string theory may be well described by the actual old-fashioned version of string theory with world sheets of a well-defined low-genus topology. The one-loop topology for closed-string diagrams is the torus which is a rectangle with the periodic identification of the top-and-bottom sides and the left-and-right sides, like in Pacman.



This only has the left-right identification.

A funny fact is that a rectangle may be rotated by 90 degrees. So you may consider the vertical dimension of the torus to be the Euclidean time in the thermal partition sum (expressed as a path integral); or the horizontal one (or infinitely many other, tilted directions, but let's not go there). And the invariance of the toroidal path integral under the rotation by 90 degrees may be interpreted as an identity relating two seemingly different partition sums. This equality is known as the modular invariance.
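In the usual notation (textbook formulas, not specific to the new paper): with the modular parameter \(\tau\) and \(q = e^{2\pi i\tau}\), the torus partition function

\[
Z(\tau) = {\rm Tr}\, q^{L_0 - c/24}\, \bar q^{\bar L_0 - \bar c/24}
\]

must satisfy \(Z(\tau+1) = Z(\tau)\) and \(Z(-1/\tau) = Z(\tau)\); the 90-degree rotation described above is the transformation \(\tau \to -1/\tau\), which for a rectangular torus \(\tau = it\) exchanges \(t \leftrightarrow 1/t\).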

When one of the rectangles is "short and wide", the other one is "tall and thin". In the language of temperatures, it means that when one partition sum is dominated by low-mass string states, relatively to the string scale, the other one is dominated by high-mass string states. It means that the distribution of the low-string vibration states isn't really independent from the distribution of the high-string vibration states. And in a heterotic string case, Aalsma et al. calculate the masses as calculable functions of \(\alpha'\), the inverse string tension, and decide that it's really enough to strengthen the WGC.

Due to a calculation involving the toroidal world sheets, the weak form of the WGC which may be justified by purely low-energy effective field theory arguments – the existence of some mildly "superextremal" black holes – may be strengthened to the strong form of the WGC – the existence of low-lying string excitations that are "superextremal" by their charge-to-mass ratio. The strengthening itself requires characteristically stringy identities, those resulting from the modular invariance.

It's very interesting but I think that the very question whether the WGC and swampland conditions "really depend" on some characteristically stringy properties of quantum gravity will not quite be settled by this paper. When it comes to this big question (which is morally close to the question "how much we have proven that a consistent theory of quantum gravity has to be string/M-theory"), the paper by Aalsma et al. – albeit a very interesting paper – may be considered a rationalization of one possible answer ("Yes, stringiness is demonstrably essential for QG to work"), not really an impartial proof that this answer is better than the other answer.

by Luboš Motl (noreply@blogger.com) at May 20, 2019 09:56 AM

May 17, 2019

Lubos Motl - string vacua and pheno

Heckman, Vafa: QG bounds the number of hierarchy-like problems
Every competent physicist knows that fine-tuning is a kind of a problem for a theory claimed to be a sufficiently fundamental description of Nature.

Fundamental physicists have wrestled with the cosmological constant problem, the Higgs hierarchy problem,... and perhaps other problems of this kind. Fine-tuning is a problem because assuming that the fundamental "theory of everything" works like a quantum field theory and produces the couplings of the low-energy effective field theories via renormalization group flows, the observed hierarchies between the scales etc. seem extremely unlikely to emerge.

In principle, there could be arbitrarily many couplings and even fine-tuned couplings which could cause an infinite headache to every theorist. In a new paper, Cumrun Vafa – the Father of F-theory and the Swampland Program (where this paper belongs) – and Jonathan Heckman, a top young researcher on both topics, present optimistic evidence that in string/M-theory and/or quantum gravity, the infinite fine-tuning worries are probably unjustified:
Fine Tuning, Sequestering, and the Swampland (just 7 pages, try to read it all)
What's going on? Effective field theories outside quantum gravity may be built by "engineers". You may apparently always add new fields, new sectors, and they allow you to tune or fine-tune many new couplings. There doesn't seem to be a limit.



String/M-theory is more predictive and chances are that even if there were another consistent theory of quantum gravity, it would be more predictive, too. In particular, as they say, the number of couplings that can be independently fine-tuned to unnatural values is finite.

I have a feeling that they count the moduli among the couplings that can be "fine-tuned", even if they correspond to physical fields. But that doesn't invalidate their statement because they say that the number of moduli is bounded, too.



Moreover, the bound is a fixed finite number for every choice of the number of large dimensions and the number of supercharges. Fine, what's the evidence?

First, the number of Minkowski, flat spacetime solutions in string/M-theory seems to be finite. Also, the number of Calabi-Yau topologies seems to be finite. The latter statement hasn't quite been proven but propositions that are very close have been. For example, if you restrict the manifolds to be elliptically fibered and the base to be a toric geometry, it's been proven that Calabi-Yau three-fold topologies form a finite set. It seems very likely that the manifolds that cannot be represented like that are a "minority", so even the number of all Calabi-Yau topologies should be finite.

Their first full-blown discussion is in 6D field theories. Conformal field theories have either \((1,0)\) or \((2,0)\) supersymmetry; \((1,1)\) cannot be conformal. Infinitely many classes of such theories with lots of deformations exist as CFTs. But if you want to couple them to gravity, you see restrictions. The cancellation of anomalies requires the total number of tensor multiplets to be 21 which is a particular finite number. In fact, all stringy 6D CFTs only allow deformations that result from operators that exist in the theory. In this 6D case, their new principle largely reduces to the anomaly cancellation.
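To make the counting explicit (a standard result I am quoting for context, not a formula from their paper): in 6D \((1,0)\) supergravity, the cancellation of the gravitational anomaly relates the numbers of hyper-, vector- and tensor multiplets via

\[
H - V + 29T = 273,
\]

while in the \((2,0)\) case the analogous condition fixes the number of tensor multiplets to exactly 21.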

In another related example, the total rank of some gauge group is 22. Perturbative string theory obviously restricts these ranks by the central charge – the rank cannot be too high for the same reason why the spacetime dimension cannot be arbitrarily high. Well, the central charge is also a gravitational anomaly – on the world sheet.

They discuss a few more rather specific examples – so their paper has many more equations and inequalities than what is actually needed for their main claims. But the overall new swampland principle has ramifications. In particular, if you imagine many sequestered or hidden sectors in artificially engineered apartheid-style models of particle physics, all their couplings seem to be independent, and could therefore admit independent fine-tuning.

According to Heckman and Vafa, if the number of such sectors is too high, quantum gravity actually implies some correlation between the fine-tunings. At the level of effective field theory without gravity, many parameters \(g_i\) could be independently adjusted and very small. But if you require that the theory may be coupled to quantum gravity, it already follows that there are equations that correlate almost all these constants \(g_i\), up to a finite (pre-determined) number of exceptions.

Sometimes people express their doubts about the reasoning involving naturalness and the disfavoring of fine-tuned theories. Indeed, the thinking based on quantum field theories is ultimately imprecise and incomplete and has to be adjusted. But "just ignore all the fine-tuning problems" isn't a scientifically valid adjustment to the problem. The problems cannot be completely ignored because they're implied to be problems by a rather specific, successful framework of physics that we use all the time – quantum field theory – combined with the probability calculus. To ignore the problem would mean to cherry-pick what we like about the framework – quantum field theory – and what we don't.

Instead, the adjustment to the fine-tuning rules must have the form of "quantum field theory isn't an exact description of Nature and the correct framework differs in respects A, B, C, and these differences also imply different predictions concerning the fine-tuning". This new Heckman-Vafa swampland criterion may be counted as an actual scientific way to go beyond the existing rules about naturalness and fine-tuning in effective field theories. The paper tells us how string/M-theory actually modifies the semi-rigorously proven intuition or lore about fine-tuning in our effective field theories.

The modification primarily says that the couplings are automatically more constrained than naively indicated by the low-energy effective field theory analysis. In other words, string/M-theory is – in a new specific sense – more predictive than quantum field theory. It shouldn't be surprising because quantum gravity needs to reconcile the low-energy behavior with the high-energy behavior – where the particle spectrum must gradually merge with the black hole microstates whose entropy is again dictated by a low-energy effective field theory (including Einstein's gravity). When you're playing with the low-energy couplings, quantum gravity actually tells you that you have to aim at and hit several targets for the trans-Planckian behavior of the theory to remain consistent (with gravity).

by Luboš Motl (noreply@blogger.com) at May 17, 2019 06:27 AM

May 16, 2019

Emily Lakdawalla - The Planetary Society Blog

Lunar Reconnaissance Orbiter Images Beresheet Impact Site
Lunar Reconnaissance Orbiter has successfully imaged the impact site of the Beresheet lander, which made a really good run at performing the first privately funded Moon landing on 11 April, but crashed after the failure of its main engine.

May 16, 2019 09:47 PM

John Baez - Azimuth

Enriched Lawvere Theories

My grad student Christian Williams and I finished this paper just in time for him to talk about it at SYCO:

• John Baez and Christian Williams, Enriched Lawvere theories for operational semantics.

Abstract. Enriched Lawvere theories are a generalization of Lawvere theories that allow us to describe the operational semantics of formal systems. For example, a graph-enriched Lawvere theory describes structures that have a graph of operations of each arity, where the vertices are operations and the edges are rewrites between operations. Enriched theories can be used to equip systems with operational semantics, and maps between enriching categories can serve to translate between different forms of operational and denotational semantics. The Grothendieck construction lets us study all models of all enriched theories in all contexts in a single category. We illustrate these ideas with the SKI-combinator calculus, a variable-free version of the lambda calculus, and with Milner’s calculus of communicating processes.

When Mike Stay came to U.C. Riverside to work with me about ten years ago, he knew about computation and I knew about category theory, and we started trying to talk to each other. I’d heard that categories and computer science were deeply connected: for example, people like to say that the lambda-calculus is all about cartesian closed categories. But we soon realized something funny was going on here.

Computer science is deeply concerned with processes of computation, and category theory uses morphisms to describe processes… but when cartesian closed categories are applied to the lambda calculus, their morphisms do not describe processes of computation. In fact, the process of computation is effectively ignored!

We decided that to fix this we could use 2-categories where

• objects are types. For example, there could be a type of integers, INT. There could be a type of pairs of integers, INT × INT. There could also be a boring type 1, which represents something there’s just one of.

• morphisms are terms. For example, a morphism f: 1 → INT picks out a specific integer, like 2 or 3. There could also be a morphism +: INT × INT → INT, called ‘addition’. Combining these, we can get expressions like 2+3.

• 2-morphisms are rewrites. For example, there could be a rewrite going from 2+3 to 5.

Later Mike realized that instead of 2-categories, it can be good to use graph-enriched categories: that is, things like categories where instead of a set of morphisms from one object to another, we have a graph.

In other words: instead of hom-sets, a graph-enriched category has ‘hom-graphs’. The objects of a graph-enriched category can represent types, the vertices of the hom-graphs can represent terms, and the edges of the hom-graphs can represent rewrites.
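
As a toy illustration (my sketch, not from the paper; the names are invented), a hom-graph is nothing more than a set of vertices (terms) together with a set of directed edges (rewrites) between them:

    # A toy hom-graph hom(1, INT): vertices are terms, edges are rewrites.
    hom_1_INT = {
        "vertices": {"2", "3", "5", "2+3"},
        "edges": {("2+3", "5")},              # one rewrite: 2+3 ~> 5
    }

    def rewrites_from(hom, term):
        """All terms reachable from `term` by a single rewrite edge."""
        return {tgt for (src, tgt) in hom["edges"] if src == term}

    print(rewrites_from(hom_1_INT, "2+3"))    # prints {'5'}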

Mike teamed up with Greg Meredith to write a paper on this:

• Mike Stay and Greg Meredith, Representing operational semantics with enriched Lawvere theories.

Christian decided to write a paper building on this, and I’ve been helping him out because it’s satisfying to see an old dream finally realized—in a much more detailed, beautiful way than I ever imagined!

The key was to sharpen the issue by considering enriched Lawvere theories. Lawvere theories are an excellent formalism for describing algebraic structures obeying equational laws, but they do not specify how to compute in such a structure, for example taking a complex expression and simplifying it using rewrite rules. Enriched Lawvere theories let us study the process of rewriting.

Maybe I should back up a bit. A Lawvere theory is a category T with finite products, generated by a single object t, for ‘type’. Morphisms t^n \to t represent n-ary operations, and commutative diagrams specify equations these operations obey. There is a theory for groups, a theory for rings, and so on. We can specify algebraic structures of a given kind in some ‘context’—that is, in some category C with finite products—by a product-preserving functor \mu: T \to C. For example, if T is the theory of groups and C is the category of sets then such a functor describes a group, but if C is the category of topological spaces then such a functor describes a topological group.
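
To make this concrete with a standard example (mine, not spelled out in the post): in the Lawvere theory of monoids there is a multiplication m: t^2 \to t and a unit e: t^0 \to t, and the commutative diagrams assert

m \circ (m \times 1_t) = m \circ (1_t \times m), \qquad m \circ (e \times 1_t) = 1_t = m \circ (1_t \times e),

that is, associativity and the left and right unit laws; a product-preserving functor \mu: T \to \mathrm{Set} then picks out an ordinary monoid.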

All this is a simple and elegant form of what computer scientists call denotational semantics: roughly, the study of types and terms, and what they signify. However, Lawvere theories know nothing of operational semantics: that is, how we actually compute. The objects of our Lawvere theory are types and the morphisms are terms. But there are no rewrites going between terms, only equations!

This is where enriched Lawvere theories come in. Suppose we fix a cartesian closed category V, such as the category of sets, or the category of graphs, or the category of posets, or even the category of categories. Then a V-enriched category is a thing like a category, but instead of having a set of morphisms from any object to any other object, it has an object of V. That is, instead of hom-sets it can have hom-graphs, or hom-posets, or hom-categories. If it has hom-categories, then it’s a 2-category—so this setup includes my original dream, but much more!

Our paper explains how to generalize Lawvere theories to this enriched setting, and how to use these enriched Lawvere theories in operational semantics. We rely heavily on previous work, especially by Rory Lucyshyn-Wright, who in turn built on work by John Power and others. But we’re hoping that our paper, which is a bit less high-powered, will be easier for people who are familiar with category theory but not yet with enriched categories. The novelty lies less in the math than in its applications. Give it a try!

Here is a small piece of a hom-graph in the graph-enriched theory of the SKI combinator calculus, a variable-free version of the lambda calculus invented by Moses Schönfinkel and Haskell Curry back in the 1920s:

[Figure: a fragment of a hom-graph in the graph-enriched theory of the SKI combinator calculus]
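
Purely as an illustration (my sketch, not from the paper), one-step rewriting in the SKI calculus can be coded directly; each successful call of step below corresponds to following one edge in a hom-graph, and the tuple encoding of terms is an invented convenience:

    # Terms: 'S', 'K', 'I', a variable name, or a pair (f, x) meaning "f applied to x".
    def step(t):
        """Perform one rewrite -- one edge of the hom-graph -- if any rule applies."""
        if isinstance(t, tuple):
            f, x = t
            if f == 'I':                        # I x  ~>  x
                return x
            if isinstance(f, tuple):
                g, y = f
                if g == 'K':                    # K y x  ~>  y
                    return y
                if isinstance(g, tuple) and g[0] == 'S':
                    z = g[1]                    # S z y x  ~>  (z x)(y x)
                    return ((z, x), (y, x))
            f2 = step(f)                        # otherwise rewrite inside, leftmost first
            if f2 is not None:
                return (f2, x)
            x2 = step(x)
            if x2 is not None:
                return (f, x2)
        return None

    # Example: S K K x reduces to x, so S K K behaves like I.
    t = ((('S', 'K'), 'K'), 'x')
    print(t)
    nxt = step(t)
    while nxt is not None:
        t = nxt
        print(t)
        nxt = step(t)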

by John Baez at May 16, 2019 12:42 AM

May 15, 2019

Emily Lakdawalla - The Planetary Society Blog

Chang’e-4 may have discovered material from the Moon’s mantle
The first science results from the unprecedented Chang’e-4 lunar far side mission are in. The mission’s Yutu-2 rover, deployed from the lander shortly after the Chang’e-4 landing on 3 January, has, with the help of the Queqiao relay satellite, returned data which suggests it has discovered material derived from the Moon’s mantle.

May 15, 2019 08:09 PM

Lubos Motl - string vacua and pheno

EVs vs ICEs, NOx, critics of science as thought police, Ponzi scheme, Soph
There are too many terrible events happening in the world right now – every day, both famous and unknown people are getting fired and hunted for telling the truth or for not being far left extremists; scientifically illiterate snake oil salesmen are receiving the Hawking Prizes; media are bombarding us with lies against science and the Western civilization.

A major Dutch publication has written a text on the topic "is physics a Ponzi scheme?". My once co-author Robbert Dijkgraaf and Juan Maldacena are the only voices that actually and calmly explain the state of theoretical physics now. They're overwhelmed by critics who don't understand the field at the technical level at all and who are being presented as if they were equal – Maldacena is the top theoretical physicist of his generation and Dijkgraaf is, among other things, the director of IAS Princeton where Einstein used to work.

Those special attributes don't seem to matter to the journalists anymore. Random angry activists and hecklers who are allies of the journalists are often made more visible.



The critics say it's very important that they're receiving supportive e-mail from other scientifically illiterate laymen and the journalist implicitly agrees with that. Meanwhile, Robbert is correctly pointing out that research works when it's not constrained by a thought police. These witch hunts against physics are obviously just another part of the thought police that is gaining strength in our society – and theoretical physics is naturally another expected target of the far left movement, as something evil because it has been overwhelmingly built by the white males. People who don't have the ability to do meaningful science are being happily hired by the fake news media as the inquisitors who are presented as equal to the top physicists.



This anti-meritocratic distortion of life, the Universe, and everything in the media affects all fields and all age groups. A hysterical (adjective chosen by Czech president Zeman), inarticulate, brainwashed, psychologically troubled 16-year-old Swedish girl who believes that the Earth is burning – probably much like an impolite Bill Nye with a flamethrower – is presented as a celebrity. (Sorry, the IQ of the people who are affected by these stunts by Bill Nye has to be so low that I refuse to count them as full-blown members of the homo sapiens species.) Readers are supposed to be interested in her book deal – she can obviously write just a worthless incoherent rant because she is not an intelligent girl, and this rant will be written by some older but almost equally unremarkable environmentalist activists, anyway.

Meanwhile, the contemporary teenagers are way more conservative and sensible than the millennial generation. Everyone who cares about the future of mankind must make sure that this generation will grow into a pro-Western, sane, mostly right-wing bunch. And it's possible. We only need to start caring about education!

OK, there's a wonderful comparison of Greta Thunberg with someone on the other side. If you don't know her, look at the videos by Soph. Soph is a 14-year-old (*9/23/2004 as Sophia Totterman) girl – two years younger than Greta Thunberg – whose YouTube videos (don't be afraid and try the latest Be Not Afraid) get three hundred thousand views per video on average. (Update: hours after this blog post was posted, this particular latest excellent video by Soph was removed by some nasty YouTube aßholes as "hate speech". It was far from the only one. Here you have a backup.) And she is discussing rather adult topics, indeed (starting with the co-existence of cultures and high school students' life). By viewer counts, this girl is a self-made millionaire (OK, I still believe that there are some adults helping her with her videos – she says her older brother is key, but they say she's more radical than he is – but the result looks more true, more impressive, more entertaining, and more authentic than Thunberg's). Do the media celebrate an actually brilliant girl who has achieved something by herself, without the media machinery?

Not at all. In fact, the answer "not at all" is far too optimistic. Yesterday, Joseph Bernstein wrote a disgusting hit piece against the 14-year-old girl at BuzzFeed News. Using a giant media machinery to attack teenage girls is how your far left movement defines a Gentleman today, isn't it?

Mr Bernstein, it hurts when someone is 14 years old and more sophisticated and smarter than you and all your far left comrades combined, doesn't it? She makes you realize where (in the political sense) the people who have some talent stand – and which remainder of mankind is just a field of weeds that fails to achieve anything remarkable despite using all the immoral and illegitimate tools and weapons we may think of. And you dislike the truth, don't you? Soph's sentence that follows your "or how about, simply" is spot on.

In two or three decades, if the likes of Soph happen to be outnumbered by the brainwashed sheep of her generation, Soph et al. will have the duty to fully appreciate that she's equivalent to 100 or so sheep, and adjust the rules of democracy accordingly. It will be your world, Soph, and you can't allow sheep to overtake it.

But I want to talk about a relatively lighter topic, the electric vehicles (EVs). OK, so Gene and I have exchanged some e-mails about the advantages and disadvantages of EVs and cars with internal combustion engines (ICEs). I won't cite the precise sentences but I needed to mention the e-mail conversation for you to understand why I was surprised by Gene's comments posted a few hours ago that indicated that I should celebrate Brussels for encouraging Audi to produce EVs.

What? Surely you have understood by now that I am absolutely against this push to spread EVs, Gene. And indeed, this push is largely empowered by the European Union. It's another example of the criminal behavior of that international organization, a reason why most of the powers that this organization has acquired must be reversed, another reason to disband the EU in its current form.

Just days ago, I translated Ondřej Neff's essay which clearly stated that the statements by the Volkswagen Group that they only want to produce EVs in 2030 or something like that are terrifying, sick, ideologically driven, and directly threatening at least a quarter of the Czech economy. You won't get any support from me for the EVs from Audi. They may produce some, the products may have some good characteristics, they will probably lose money on them, but the idea that this should be supported – and maybe even by the likes of me – is absolutely insane.

Gene pretends to be more open-minded and less ideological than the rest of Northern California and maybe he is. But I still find his PC virtue signaling unbearable, way too often. He must have understood that I am generally against the expansion of the EVs at the moment because the disadvantages clearly trump the advantages. Have I been unclear about this elementary point? I don't believe it's possible. So why would Gene assume that I am going to praise the EU for Audi's EVs? Let me tell you why.

He doesn't really believe it but he's one of the promoters of this ideology – and a part of the strategy of such people is to create the atmosphere in which it is "believed" that all the people, perhaps including your humble correspondent, support the transition to EVs. He likes to strengthen the perception that the preference for ICEs is an unthinkable heresy, a thought crime – and he personally helps to nurture this atmosphere of non-freedom. I don't support the transition to EVs. Do you need this simple sentence to be translated to many languages? Sensible people who have thought about the issue know that the ICEs are superior at this moment and the EVs are inferior, and if someone is telling you something else, he is not telling the truth.

The price that the actual buyer pays for an EV – when the vehicle is bought in the first place – is about twice as high as for an otherwise comparable ICE right now. This is the primary difference, which is enough to conclude that the EVs are simply not competitive with the ICEs now. But even if the progress were much faster in EVs than in ICEs – there's no reason to believe so – and EVs became as cheap as comparable ICEs, ICEs would still have other, secondary but very important, advantages.

These advantages of the ICEs, if I include the lower price, are e.g.:
  1. lower price of the vehicle in the first place
  2. much shorter refuelling times of the ICEs than charging times of EVs
  3. the existing network of gas stations, versus a minimal number of superchargers
  4. environmental disadvantages of EVs: toxic elements
  5. safety involving some special processes, e.g. self-ignition of EVs
  6. a century of experience with the ICEs showing that there's no time bomb waiting for us
This list is far from complete but it's quite a list. The price of the car is clearly a primary variable and the ICEs win 2-to-1 over EVs. The charging times are incomparable. You spend a few minutes refuelling with petrol or diesel but you need 20-40 minutes to recharge 50-80 percent of a Tesla battery. This difference is huge; I will discuss it later.

Now, you can only recharge an EV quickly if you're lucky and there's a nearby supercharger. Are these networks comparable? Czechia gives us a shocking example. We have over 7,000 gas stations and 3 Tesla superchargers – in Prague, Humpolec, and Olomouc. That's where you recharge the car as "quickly" as in 30 minutes. Outside these places, you can at least find overnight chargers where you need to be connected... you know, overnight.

Now, will the network of superchargers grow? It will. Will it be fast? Are there good reasons for the growth? There aren't, because the number of EVs is small. So it's clearly too bad an investment to build too many chargers for too few EVs. This is a vicious circle. A century ago, a similar "vicious circle" arguably slowed down the growth of the normal gas stations. But there was a difference. A century ago, ICEs were competing against horses, and cars are more convenient than horses even despite a sparse network of gas stations.

Now, the EVs are competing against the ICEs which are really comparable – it's not a difference similar to the difference between a horse and a car. So the construction of a dense network of superchargers is clearly an investment that will create a financial loss for quite some time. The belief that it's worth doing is just a belief. And it is clearly a belief that is driven by an ideology right now.

I mentioned that there are 7,000+ gas stations and 3 Tesla superchargers in my country. The ratio looks huge. But what about the ratio of the cars? In 2018, Czechs bought some 250,000+ new cars, about 30% of them were diesel, a drop from 37% in the previous year. Aside from petrol and diesel, all the other cars are negligible: 5,000 hybrids, 2,000 CNGs, 1,000 LNGs, and 1,000 purely electric vehicles, including 85 Teslas. In 2018, 0.03% of the cars sold in Czechia were Teslas. 3 superchargers are 0.05% of the 7,000 gas stations – so within a factor of two, it's fair.

There is absolutely no reason to think that the EVs will naturally beat the ICEs anytime soon. In particular, the market obviously wants to keep the petrol/diesel gas stations up to 2030 because in 2030, there will still be lots of recently purchased cars on the road, since it's normal for many people to keep the same car for a decade.

Now, the environmental advantages of ICEs. They produce just H2O (water vapor) and CO2, harmless and beneficial gases. There's some NOx, nitrogen oxides, in the diesel case. This must be compared to the noxious elements that are used in the production of the batteries for EVs, that occasionally burn when a car self-ignites (Hong Kong saw another self-igniting Tesla yesterday) or when a whole EV factory burns (which seems to be a frequent event, too). People don't really know whether it's possible to safely deal with worn-out old lithium batteries.

Gene admits that the real pollution from ICEs is much smaller than it used to be – a drop by 97%, using his numbers. But even the world with the high pollution was OK enough. When it drops to 3% of what it used to be, should we still consider the situation unacceptable? I don't think so. This opinion is nothing else than an extremist ideology. Look at the death rates.

Every year, some 1.3 million people die in the world as a result of a car accident – some mechanical damage to the body. It's estimated that the NOx emissions may be blamed for 10,000 deaths in the EU per year. The total for the world is probably below 100,000. Now, is it too high? It's clearly not too high. The deaths blamed on the fuel are less than 10%, and maybe around 5%, of the deaths caused by the vehicles in total. In what sense could we claim that it's too much?

Every year, some 55 million people die globally. Those 50,000-100,000 from NOx are between 0.1% and 0.2% of the deaths. If you eliminated petrol and especially diesel cars, you would reduce the deaths by 0.1%-0.2% or so. Great. Temporarily, of course. After some time, the population would be upgraded to a higher life expectancy and the same number of people would be dying at a higher age as without the reduction of NOx.

But imagine that this fraction of deaths translates into a comparable fractional reduction of life expectancy – it's not exactly so, but it's a good order-of-magnitude estimate. So the NOx emissions from cars may be shortening people's lives by 0.1% or 0.2%. Great. What about the waiting times in front of the superchargers? If you recharge every other day, you waste 30 minutes per 2 days (48 hours) in front of the supercharger. That's about 1% of your time! To a large extent, this has shortened your useful life. And 1% is 5-10 times larger than 0.1% or 0.2%.

The result is that the superchargers are robbing you of a greater portion of your life than the NOx car pollution on average!
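
As a sanity check, here is the back-of-the-envelope arithmetic in a few lines of code (my sketch; all inputs are the rough estimates quoted above, not measured data):

    # Back-of-the-envelope comparison using the rough estimates quoted above.
    global_deaths_per_year = 55e6       # total deaths worldwide per year
    nox_deaths_per_year = 75e3          # midpoint of the 50,000-100,000 guess for NOx

    nox_share = nox_deaths_per_year / global_deaths_per_year
    print(f"NOx share of all deaths:  {nox_share:.2%}")       # ~0.14%

    # 30 minutes at a supercharger every other day (48 hours = 2880 minutes).
    charging_share = 30 / (48 * 60)
    print(f"Time share spent charging: {charging_share:.2%}")  # ~1.04%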

Even if CO2 emissions were a problem, and they're not, one may show that in the present real-world conditions, the total CO2 emissions connected with the production and usage of an EV actually trump those of a diesel car.

It's similar with all such comparisons. If you actually compare the variables on both sides fairly, you may see that the ICEs are superior to the EVs. It may change in some time – as the technologies evolve – but the difference is so significant that it's unlikely to change for many years. But this discussion has been largely hijacked by dishonest ideologues who are close to the environmentalist movement and the deceptive "mainstream" media of the present. Because they have decided to stick to this particular EV agenda mindlessly, they only push memes about advantages of EVs and disadvantages of ICEs down their viewers' and readers' throats. Virtually all of this is garbage. People intuitively know it – they subconsciously perform many of these calculations, which makes them keep their ICEs and avoid EVs. But the massaging by the media and their allied ideologues is unbelievable. The percentage of the EVs in a given country or state may be considered a very good measure of "how much the population of that territory likes to be brainwashed".

Now, advocates of EVs also say that the EVs are simpler, and therefore less likely to break.

This is another totally demagogical sleight-of-hand. EVs have fewer mechanically moving parts but they have a greater number of "transistors" and other electronic parts. Can they break? You bet. The electric cars depend on lots of software and it can break – and cripple your car – too. It's happening. Functionalities of cars are often broken after a software update. It's completely analogous to the mechanical breaking of an ICE. More importantly, the probability that an engine breaks isn't a simple increasing function of the "number of parts". It depends on which parts, how well they're made, how robust the material is, and other things.

In practice, the breaking of the ICEs is not such a problem. Many problems may be fixed. It's been business-as-usual for a century. And we don't really want to assume that the cars serve for more than 20 years or something like that. Cars that are this old look obsolete. They have other disadvantages. People usually prefer to buy a new car after a shorter time – perhaps 5 years in such cases – and carmakers obviously want this "refreshment" to take place sufficiently often, too. So the "simplicity advantage" of the EVs only exists under assumptions that are utterly unrealistic.

Even more conceptually, simplicity is heavily overrated. I have also often said that I prefer things to be simple. But I saw others saying similar things – and saw that their reasons for saying such things are totally bad. In most cases, people say "they prefer simple things" because they're lazy or intellectually limited. They want "things" to be simpler because harder things mean extra work for them and they don't like it! It's that simple. My explanation is actually simple, which is why you should appreciate it! It's also true. That's why schoolkids prefer simple homework, for example. There may exist legitimate justifications of "simplicity" but they're rare.

But does it mean that "simple" is "superior" in general? Not at all. The schoolkids and their adult counterparts are doing some work. And if the work were "simple", it probably means that they didn't do too much work, and that's "bad" for the client or buyer. The buyer has a completely different perspective than the producer. If something is simple, it should often be expected to be cheap and unremarkable because not much work has been done! There is almost nothing inside the Tesla Model 3's interior, which is why it should be an extremely cheap car. An extensive essay should be written about simplicity in fundamental physics – which is a sufficiently different topic from simplicity in engineering. We prefer things as simple as possible, but not simpler, as Einstein wisely said. Again, the laymen usually want things to be simpler than possible and that's too bad.

This "simplicity" has been added to the preferred buzzwords of the Luddite movement, too. "Simple" things are supposed to be preferred. That may include organic food. But much of this "simple" stuff is the same as the "cheap stuff before the technological advances reshaped the industry". So the "simplicity" often directly contradicts "technological progress"! It's not a shame for an engineer to design complex engines. If someone denies that this is really the bulk of the work of every engineer, then this someone is a Luddite who fights against the technological progress in general. And even complex engines may be made more reliable, more resilient, and more lasting. "Complexity of an engine" isn't any insurmountable lethal flaw.

An ICE has a lot of parts, especially if it has various gadgets to reduce emissions of various compounds or particles. But that doesn't mean that it's bad. Complex engines are the standard product of engineering. Engineering also wants to keep things simple if all other things are equal. But the "if" condition isn't obeyed here – it is almost never obeyed. Things aren't equal. You can't compare things that aren't commensurable. And the "number of parts in an EV or an ICE" is not commensurable. To achieve certain things, a certain degree of complexity is often needed – EVs and ICEs mainly have a different kind of complexity, not a different amount. So we just don't want things to be too simple. In the end, a car or a phone should have "many functionalities" and some complexity is necessary for that. In the 1980s, I was surely happy that the watch I received from my Australian uncle had a calculator, a stopwatch, and many other functions. Whoever thinks that a small number of functions is a universal advantage is simply a Luddite.

Now, Gene and others say that "the market will decide". But sadly, that's not what is happening today. Liars and charlatans in the media and unelected EU officials who are actually controlled by brain-dead members of various NGOs and other pig farms owned by the likes of George Soros are determining whether companies – perhaps, by 2030, including Škoda Auto – will be allowed to produce proper cars that the consumers actually want at all, and whether the buyers will be "allowed" to buy the cars of the kind they prefer. It's too bad.

In 2019, EVs are a niche market and every argument building on the assumption that the EVs are as important as or more important than the ICEs is just self-evidently fraudulent. If it is allowed to speak, the market will speak but to some extent, it has already spoken, too. Both EVs and ICEs have been around for more than a century but ICEs became and remained dominant. Given the political atmosphere and the amount of lies and illegitimate pressures that we see everywhere around, it seems very likely that a hypothetical suppression of ICEs and proliferation of EVs may be explained by the emerging totalitarianism, not by the natural and legitimate market forces.

by Luboš Motl (noreply@blogger.com) at May 15, 2019 08:59 AM

May 14, 2019

CERN Bulletin

Canoe-Kayak Club

With the return of spring, the CERN Canoe-Kayak Club is resuming its activities and invites you to come and discover them.

Whether you are a beginner or a seasoned paddler, the club can offer outings that will suit you.

From introductory lessons on flat water given by a certified instructor, to outings on the Rhône, Lake Geneva or the sea, to whitewater trips on the Rhône at Chancy, on the Arve at Annemasse and on the neighbouring rivers of France, Switzerland and Italy, to dragon-boat outings on the lake of Divonne, the club will not fail to satisfy your desires.

The club also offers introductory dragon-boat sessions for groups that want to develop team building.

For all inquiries:

club.kayak@cern.ch

https://canoe-kayak.web.cern.ch/fr

May 14, 2019 04:05 PM

CERN Bulletin

Interfon

Cooperative of international civil servants. Discover all of our benefits and discounts from our suppliers on our website www.interfon.fr or at our office in building 504 (open every day from 12:30 to 15:30).

May 14, 2019 04:05 PM

Axel Maas - Looking Inside the Standard Model

Acquiring a new field
I have recently started to look into a new field: quantum gravity. In this entry, I would like to write a bit about how this happens, acquiring a new field, so that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing. And when you enter a new field, progress is slow. You have to learn a lot of basics, need to get an overview of what has been done, and what is still open. Not to mention that you have to get used to a different jargon. Thus, one rarely does so lightly.

I have already written one entry in the past about how I came to do Higgs physics. That entry was written after the fact: I was looking back, and discussed my motivation as I saw it at that time. It will be interesting to look back at this entry in a few years, and judge what is left of my original motivation, and how I feel about it knowing what has happened since then. But for now, I only know the present. So, let's get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe. Though even then a few open questions appear to remain.

But my interest in quantum gravity comes not from the promise of such a possibility. It has rather a quite different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs. And by now, in fact, at the whole standard model. In what we get, we see a clear distinction between two concepts: so-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exist, and into which particles they can decay or be combined. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact. After all, which gauge symmetry can be used to facilitate things differs from theory to theory, and thus the kind of gauge symmetry is more a statement about which theory we work on.

Now, if you add gravity, the distinction between the two appears to blur. The reason is that in gravity space itself is different. In particular, you can deform space. Now, the original distinction between global symmetries and gauge symmetries is their relation to space. A global symmetry is something which is the same from point to point. A gauge symmetry allows changes from point to point. Loosely speaking, of course.

In gravity, space is no longer fixed. It can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of a global symmetry then still make sense? Or do all symmetries become just 'like' local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives in space automatically a gauge symmetry? If we want to understand the results of what we did in the standard model, where there is no gravity, in the real world, where there is gravity, then this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot do it yet.

These questions had been in the back of my mind for some time already. A few years, actually; I do not know exactly how many. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to this again and again. I knew that, eventually, I would need to address it if nobody else did. So far, nobody has.

But why now? What prompted me to start now with it? As so often in science, it was other scientists.

Last year at the end of November/beginning of December, I took part in a conference in Vienna. I had been invited to talk about our research. The meeting had quite a wide scope, and also present were several people who work on black holes and quantum physics. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory with strong gravitational effects, which usually means a black hole. In such a setup, the deformations of space are fixed. And also non-quantum black holes can swallow stuff. This combination appears to have the following consequence: global symmetries appear to become meaningless, because everything associated with them can vanish in the black hole. However, keeping space deformations fixed means that local symmetries are also fixed. So they appear to become real, instead of auxiliary. Thus, this seems to be quite opposite to our result. And this, and the people doing this kind of research, challenged my view of symmetries. In fact, in such a half-way case, this effect seems to be there.

However, in a full quantum gravity theory, the game changes. Then space deformations also become dynamical. At the same time, black holes no longer need to swallow stuff forever, because they become dynamical, too. They evolve. Thus, answering what really happens requires full quantum gravity. And because of this situation, I decided to start to work actively on quantum gravity. Because I needed to answer whether our picture of symmetries survives, at least approximately, when there is quantum gravity. And to be able to answer such challenges. And so it began.

Within the last six months, I have worked through a lot of the basic stuff. I now have a rough idea of what is going on, and what needs to be done. And I think I see a way how everything can be reconciled and make sense. It will still need a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still need time before I have a first real result. But I am quite happy with how things are progressing.

And that is the story how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁

by Axel Maas (noreply@blogger.com) at May 14, 2019 03:03 PM

John Baez - Azimuth

Props in Network Theory (Part 2)

Here’s my talk for SYCO4 next week:

Props in network theory.

Abstract. To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, Petri nets, electrical circuit diagrams, signal-flow graphs, chemical reaction networks, Feynman diagrams and the like. All these different diagrams fit into a common framework: the mathematics of symmetric monoidal categories. Two complementary approaches are presentations of props using generators and relations (which are more algebraic in flavor) and structured cospan categories (which are more geometrical). In this talk we focus on the former. A “prop” is a strict symmetric monoidal category whose objects are tensor powers of a single generating object. We will see that props are a flexible tool for describing many kinds of networks.

You can read a lot more here:

• John Baez, Props in network theory (part 1), Azimuth, April 27, 2018.

by John Baez at May 14, 2019 02:33 AM

May 12, 2019

Marco Frasca - The Gauge Connection

Is it possible to get rid of exotic matter in warp drive?

In 1994, Miguel Alcubierre proposed a solution of the Einstein equations (see here) describing a space-time bubble moving at arbitrary speed. It is important to notice that no violation of the light speed limit happens, because it is the space-time that moves, and inside the bubble everything goes as expected. This kind of solution of the Einstein equations has a fundamental drawback: it violates the Weak Energy Condition (WEC) and, in order for it to exist, some exotic matter with negative energy density must exist. Needless to say, nobody has ever seen such matter. There seems to be some clue in the way the Casimir effect works, but this just relies on the way one interprets quantum fields rather than being evidence of existence. Besides, since the initial proposal, a great number of studies have been published showing how pathological Alcubierre's solution can be, also resorting to quantum field theory (e.g. Hawking radiation). So, we have to keep dreaming of possible interstellar travel, hoping that some smart guy will one day come out with a better solution.
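
For reference, the standard form of the Alcubierre line element (not written out in the post; units with c = 1) is

ds^2 = -dt^2 + \left[ dx - v_s(t)\, f(r_s)\, dt \right]^2 + dy^2 + dz^2,

where v_s(t) is the velocity of the bubble and f(r_s) is a form factor equal to 1 inside the bubble and falling off to 0 far away from it.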

Of course, Alcubierre's solution is rather interesting from a physical point of view as it belongs to a family of older solutions, like wormholes, time machines and the like, produced by very famous authors such as Kip Thorne, that arise when one imposes a solution and then checks the conditions for its existence. This amounts to a determination of the energy-momentum tensor which, unavoidably, turns out to be negative. Thus, these solutions violate one energy condition or another of the Einstein equations, granting pathological behaviour. On the other side, they appear the most palatable for science fiction about possible futures of space and time travel. In these times, when this kind of technology is heavily employed by the film industry, moving the fantasy of millions, we would hope that such futures should also be possible.

It is interesting to note the procedure used to obtain these particular solutions. One engineers a metric on a desk and then substitutes it into the Einstein equations to see whether it is really a solution. In this way one fixes the energy requirements. Going the other way around, it is difficult to come out of the blue with a solution of the Einstein equations that provides such particular behaviour. It is also possible that such solutions simply do not exist and always imply a violation of the energy conditions. Some theorems have been proved in the course of time that seem to prohibit them (e.g. see here). Of course, I am convinced that the energy conditions must be respected if we want to have the physics that describes our universe. They cannot be evaded.

So, turning to the question in the title, could we think of a possible warp drive solution of the Einstein equations without exotic matter? The answer can be yes, of course, provided we are able to recover the York time, or warp factor, in the way Alcubierre obtained it with his pathological solution. At first, this seems an impossible mission. But the space-time bubble we are considering is a very small perturbation, and perturbation theory can come to the rescue. Particularly when this perturbation can be locally very strong. In 2005, I proposed such a solution (see here) together with a technique to solve the Einstein equations when the metric is strongly perturbed. My intent at that time was to give a proof of the BKL conjecture. A smart referee suggested that I give an example of application of the method. The metric I obtained in this way, perturbing a Schwarzschild metric, yields a solution that has a York time (warp factor) identical to that of the Alcubierre metric. Of course, I am respecting the energy conditions, as I am directly solving Einstein equations that do.

The identity between the York times can be obtained provided the form factor proposed by Alcubierre is taken to be 1 but this is just the simplest case. Here is an animation of my warp factor.

[Animation: the warp factor]

The bubble is seen moving as expected along the x direction.

My personal hope is that this will go beyond a mathematical curiosity. On the other side, it should be understood how to apply such kinds of perturbations to a given metric. I can think of the Einstein-Maxwell equations solved using perturbation theory. There is a lot of literature and many great contributions on this topic.

Finally, this could give a meaning to the following video by NASA.

by mfrasca at May 12, 2019 05:59 PM

ZapperZ - Physics and Physicists

The Geekiest T-Shirt That I've Ever Bought
I just had to get this one. I found this last week during the members night at Chicago's Adler Planetarium.


The people I was with of course knew that this refers to "force", but they didn't get the connection. So I had to explain to them that Newton's 2nd law, i.e. F = ma, can be expressed in the more general form F = dp/dt, where p is the momentum mv. Thus

F = d/dt (mv)
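
Expanding with the product rule (a step not in the original post) makes clear why F = ma is just the constant-mass special case:

F = d/dt (mv) = m (dv/dt) + v (dm/dt) = ma + v (dm/dt),

which reduces to F = ma when dm/dt = 0.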

Of course, I'm not surprised that most people, and probably most of Adler's visitors, would not get this unless they know a bit of calculus and have done general physics with calculus. Maybe that was why this t-shirt was on sale! :)

Maybe I'll wear this when I teach kinematics this Fall!

Zz.

by ZapperZ (noreply@blogger.com) at May 12, 2019 02:40 PM

May 11, 2019

John Baez - Azimuth

Symposium on Compositional Structures 4: Program

Here’s the program for this conference:

Symposium on Compositional Structures 4, 22–23 May, 2019, Chapman University, California. Organized by Alexander Kurz.

A lot of my students and collaborators are speaking here! The meeting will take place in Beckman Hall 101.

Wednesday May 22, 2019

• 10:30–11:30 — Registration.

• 11:30–12:30 — John Baez, “Props in Network Theory“.

• 12:30–1:00 — Jade Master, “Generalized Petri Nets”.

• 1:00–2:00 — Lunch.

• 2:00–2:30 — Christian Williams, “Enriched Lawvere Theories for Operational Semantics”.

• 2:30–3:00 — Kenny Courser, “Structured Cospans”.

• 3:00–3:30 — Daniel Cicala, “Rewriting Structured Cospans”.

• 3:30–4:00 — Break.

• 4:00–4:30 — Samuel Balco and Alexander Kurz, “Nominal String Diagrams”.

• 4:30–5:00 — Jeffrey Morton, “2-Group Actions and Double Categories”.

• 5:00–5:30 — Michael Shulman, “All (∞,1)-Toposes Have Strict Univalent Universes”.

• 5:30–6:30 — Reception.

Thursday May 23, 2019

• 9:30–10:30 — Nina Otter, “A Unified Framework for Equivalences in Social Networks”.

• 10:30–11:00 — Kohei Kishida, Soroush Rafiee Rad, Joshua Sack and Shengyang Zhong, “Categorical Equivalence between Orthocomplemented Quantales and Complete Orthomodular Lattices”.

• 11:00–11:30 — Break.

• 11:30–12:00 — Cole Comfort, “Circuit Relations for Real Stabilizers: Towards TOF+H”.

• 12:00–12:30 — Owen Biesel, “Duality for Algebras of the Connected Planar Wiring Diagrams Operad”.

• 12:30–1:00 — Joe Moeller and Christina Vasilakopoulou, “Monoidal Grothendieck Construction”.

• 1:00–2:00 — Lunch.

• 2:00–3:00 — Tobias Fritz, “Categorical Probability: Results and Challenges”.

• 3:00–3:30 — Harsh Beohar and Sebastian Küpper, “Bisimulation Maps in Presheaf Categories”.

• 3:30–4:00 — Break.

• 4:00–4:30 — Benjamin MacAdam, Jonathan Gallagher and Rory Lucyshyn-Wright, “Scalars in Tangent Categories”.

• 4:30–5:00 — Jonathan Gallagher, Benjamin MacAdam and Geoff Cruttwell, “Towards Formalizing and Extending Differential Programming via Tangent Categories”.

• 5:00–5:30 — David Sprunger and Shin-Ya Katsumata, “Differential Categories, Recurrent Neural Networks, and Machine Learning”.

by John Baez at May 11, 2019 03:52 PM

May 10, 2019

ZapperZ - Physics and Physicists

Table-Top Laser Ablation Unit
I was at Chicago's Field Museum Members Night last night. Of course, there were lots of fascinating things to see, and wonderful scientists and museum staff to talk to. But inevitably, the experimentalist in me can't stop itself from geeking out over neat gadgets.

This was one such gadget. It is, believe it or not, a table-top laser ablation unit. It is no bigger than a shoe box. I was surprised when I was told what it was, and of course, I wanted to learn more. It appears that this is still a prototype, invented by the smart folks at ETH Zurich (of course!). The scientist at the Field Museum uses it to do chemical analysis of trace elements in various objects in the field, where the trace elements are so minute in quantity that x-ray fluorescence would not be effective.


Now, you have to understand that typically, laser ablation systems tend to occupy whole rooms! Its job is to shoot laser pulses at a target, causing the evaporation of that material. The vapor then typically will migrate to a substrate where it will form a thin film, or coat another object. People often use this technique to make what is known as epitaxial films, where, if suitably chosen, the new film will have the same crystal structure as the substrate, usually up to a certain thickness.

So that was why I was fascinated to see a laser ablation kit that is incredibly small. Granted, they don't need to do lots of ablating. They only need to sample enough of the vapor to do elemental analysis. The laser source is commercially bought, but the unit that is in the picture directs the laser to the target, collects the vapor, and then siphons it to a mass spectrometer or something similar to do its analysis. The whole thing, with the laser and the analyzer, fits on a table top, making it suitable for doing remote analysis on items that can't be moved.

And of course, as always, I like to tout the fact that many of these techniques originate out of physics research, and that eventually, they trickle down to applications elsewhere. But you already know that, don't you?

Zz.

by ZapperZ (noreply@blogger.com) at May 10, 2019 01:12 PM

Lubos Motl - string vacua and pheno

Pheno papers on \(96\GeV\) Higgs, trilepton excess, and \(60\GeV\) dark matter
I want to mention two new hep-ph papers about supersymmetry-like anomalies seen by the accelerators. In the paper
An N2HDM Solution for the possible \(96\GeV\) Excess,
B+C+Heinemeyer discuss some detailed models for the apparent weak signals indicating a new Higgs boson of mass around \(96\GeV\). Recall that the only well-established Higgs boson has the mass of \(125\GeV\).

Concerning the \(96\GeV\) little brother, the CMS has seen an excess in the diphoton channel; and decades ago, LEP has seen an excess in the bottom quark pair channel. Heinemeyer and friends say that these excesses may be explained by a two-Higgs model with an extra Higgs singlet. Is that surprising at all? There seems to be a lot of freedom to accommodate two independent excesses, right?

At any rate, concerning supersymmetric models, the NMSSM – next-to-minimal supersymmetric standard model – and its extension, µνSSM seem like aesthetically pleasing completions of the two-Higgs-plus-a-singlet models. In the model with the two Greek letters, the singlet is interpreted as a right-handed neutrino superfield and the seesaw mechanism is incorporated. These models look OK for the excesses – there are other reasons to prefer NMSSM over MSSM. But they're also less constrained and predictive than the MSSM, so I think the good news isn't remarkably victorious.



Another paper on the excesses is
The Return of the WIMP: Missing Energy Signals and the Galactic Center Excess
by Carena+Osborne+Shah+Wagner. They promote a model with dark matter of mass \(m_\chi = 60\GeV\), justified by anomalies that exist out there.



The dark matter of that mass would be the lightest neutralino. It could naturally agree with the 3-sigma trilepton ATLAS excess (and a confirmation by GAMBIT), the gamma ray excess at the center of our galaxy seen by Fermi-LAT, as well as the antiproton excess observed by AMS-02.

In their model, the LSP is a bino-like neutralino and another, wino-like neutralino should exist with the mass of \(160\GeV\). \(\tan\beta\) should be greater than ten. This paper may be viewed as a counter-argument against the recent efforts to claim that the central galactic gamma-ray excess was "due to some boring pulsars" only.

At any rate, dark matter of mass \(60\GeV\) within supersymmetry is still plausible and somewhat recommended by some observations, much like the NMSSM-like new Higgs of mass \(96\GeV\). I can't tell you the probability that these particles exist – it depends on lots of priors and methodology – but I am sure that it is just wrong and prejudiced to behave as if these probabilities were zero.

by Luboš Motl (noreply@blogger.com) at May 10, 2019 09:56 AM

May 08, 2019

Jon Butterworth - Life and Physics

Mosquitos and Toblerones
A couple of years ago I went to see Lucy Kirkwood’s play Mosquitos at the National Theatre. It starred Olivia Coleman and Olivia Williams, who were both brilliant, and was set largely in and around CERN. There was a lot … Continue reading

by Jon Butterworth at May 08, 2019 07:20 PM

May 04, 2019

Clifford V. Johnson - Asymptotia

Endgame Memories

About 2-3 (ish) years ago, I was asked to visit the Disney/Marvel mothership in Burbank for a meeting. I was ushered into the inner workings of the MCU, past a statue of the newly acquired Spidey, and into a room. Present were Christopher Markus and Stephen McFeely, the writers of … Click to continue reading this post

The post Endgame Memories appeared first on Asymptotia.

by Clifford at May 04, 2019 06:34 PM

ZapperZ - Physics and Physicists

Why Does Light Bend When It Enters Glass?
Don Lincoln tackles another "everyday" phenomenon. This time, he tries to give you an "explanation" of why light changes direction when it goes from one medium to another, and why some of the more popular explanations that have been given may be either incomplete or wrong.



Certainly, any undergraduate physics student would have already dealt with the boundary conditions using Maxwell's equations, so this shouldn't be entirely new. However, he skipped rather quickly over something that I thought was not handled thoroughly.

The continuity of the component of E parallel to the boundary is fine. However, Lincoln argued that the reason why the perpendicular component of the E field is shorter in glass is due to the polarization of the material, and thus, the sum of the light's E-field and the E-field from the polarization will cause the net, resultant E-field to be shorter.

But if the material's polarization can affect the perpendicular component, why doesn't it also affect the parallel component? After all, we assume that the material is isotropic. This, he left out, and at least to me, made it sound that the parallel component is not affected. If this is so, why?
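
For reference, the standard boundary conditions at a charge-free interface between two linear dielectrics (textbook electrodynamics, not from the video) are \(E^{\parallel}_1 = E^{\parallel}_2\) and \(\varepsilon_1 E^{\perp}_1 = \varepsilon_2 E^{\perp}_2\): Faraday's law forces the parallel component of E to be continuous regardless of the polarization, while Gauss's law only constrains the perpendicular component of \(D = \varepsilon E\), which is why the perpendicular component of E shrinks in the medium with the higher permittivity.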

Zz.

by ZapperZ (noreply@blogger.com) at May 04, 2019 02:55 PM

April 30, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Week at The Surf Experience

I don’t often take a sun holiday these days, but I had a fabulous time last week at The Surf Experience in Lagos, Portugal. I’m not an accomplished surfer by any measure, but there is nothing quite like the thrill of catching a few waves in the sea with the sun overhead – a nice change from the indoors world of academia.

Not for the first time, I signed up for a residential course with The Surf Experience in Lagos. Founded by veteran German surfer Dago Lipke, guests of The Surf Experience stay at the surf lodge Vila Catarina, a lovely villa in the hills above Lagos, complete with beautiful gardens and swimming pool. Sumptuous meals are provided by Dagos’s wife Connie, a wonderful cook. Instead of wandering around town trying to find a different restaurant every evening, guests enjoy an excellent meal in a quiet setting in good company, followed by a game of pool or chess. And it really is good company. Guests at TSE tend mainly to hail from Germany and Switzerland, with a sprinkling from France and Sweden, so it’s truly international – quite a contrast to your average package tour (or indeed our college staff room). Not a mention of Brexit, and an excellent opportunity to improve my German. (Is that what you tell yourself?- Ed)


Hanging out at the pool before breakfast


Fine dining at The Surf Experience


A game of cards and a conversation instead of a noisy bar

Of course, no holiday is perfect and in this case I managed to pick up an injury on the first day. Riding the tiniest wave all the way back to the beach, I got unexpectedly thrown off, hitting my head off the bottom at speed. (This is the most elementary error you can make in surfing and it risks serious injury, from concussion to spinal fracture.) Luckily, I walked away with nothing more than severe bruising to the neck and chest (as later established by X-ray at the local medical clinic, also an interesting experience). So no life-altering injuries, but like a jockey with a broken rib, I was too sore to get back on the horse for a few days. Instead, I tried Stand Up Paddling for the first time, which I thoroughly enjoyed. It's more exciting than it looks, must get my own board for calm days at home.


Stand Up Paddling in Lagos with Kiteschool Portugal

Things got even better towards the end of the week as I began to heal. Indeed, the entire surf lodge had a superb day’s surfing yesterday on beautiful small green waves at a beach right next to town (in Ireland, we very rarely see clean conditions like this, the surf is mainly driven by wind). It was fantastic to catch wave after wave throughout the afternoon, even if clambering back on the board after each wasn’t much fun for yours truly.

This morning, I caught a Ryanair flight back to Dublin from Faro, should be back in the office by late afternoon. Oddly enough, I feel enormously refreshed – perhaps it’s the feeling of gradually healing. Hopefully the sensation of being continuously kicked in the ribs will disappear soon and I’ll be back on the waves in June. In the meantime, this week marks a study period for our students before their exams, so it’s an ideal time to prepare my slides for the Eddington conference in Paris later this month.

Update

I caught a slight cold on the way back, so today I'm wandering around college like a lunatic going cough, 'ouch', sneeze, 'ouch'. Maybe it's karma for flying Ryanair – whatever about indulging in one or two flights a year, it's a terrible thing to use an airline whose CEO continues to openly deny the findings of climate scientists.

by cormac at April 30, 2019 09:49 PM

April 29, 2019

ZapperZ - Physics and Physicists

How Beauty Leads Physics Astray
Sabine Hossenfelder is probably doing a "book tour", since this talk certainly addressed many points that she brought up in her book.



As I've said many times on here, I don't disagree with many things that she brought up. I find the trend of foundational physics to even think about discarding experimental verification to be very troubling. I'm just glad that the field that I'm in is still strongly experimental.

Zz.

by ZapperZ (noreply@blogger.com) at April 29, 2019 08:21 PM

April 25, 2019

Clifford V. Johnson - Asymptotia

Black Hole Session

Well I did not get the special NYT issue as a keepsake, but this is maybe better: I got to attend the first presentation of the “black hole picture” scientific results at a conference, the APS April meeting (Sunday April 14th 2019). I learned so much! These are snaps of … Click to continue reading this post

The post Black Hole Session appeared first on Asymptotia.

by Clifford at April 25, 2019 06:35 PM

April 24, 2019

Andrew Jaffe - Leaves on the Line

Spring Break?

Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision, and tutoring, alongside a very busy time analysing Planck satellite data.

But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years, working up from the experimentation of my first time through in 2009, putting it all on a firmer foundation — and writing up the lecture notes — in 2010, and refining it over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.

Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners from advanced undergraduates through working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.

by Andrew at April 24, 2019 01:19 AM

April 23, 2019

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
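To give a flavour of the level at which the linear-algebra material sits, here is a minimal sketch of the CG algorithm in Python — my own illustration, not code from the book, and applied to a dense random test matrix rather than to the Hermitian positive-definite combination $D^\dagger D$ of Dirac operators to which one would typically apply CG in lattice QCD:

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Solves A x = b for a Hermitian positive-definite matrix A.
    x = np.zeros_like(b)
    r = b - A @ x                              # initial residual
    p = r.copy()                               # first search direction
    rs_old = np.vdot(r, r).real
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / np.vdot(p, Ap).real   # optimal step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new ** 0.5 < tol:                # converged: residual small enough
            break
        p = r + (rs_new / rs_old) * p          # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Quick test on a random Hermitian positive-definite matrix:
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
A = M.conj().T @ M + 50 * np.eye(50)
b = rng.standard_normal(50).astype(complex)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))   # should be tiny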

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:19 PM

Georg von Hippel - Life on the lattice

Looking for guest blogger(s) to cover LATTICE 2018
Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

Georg von Hippel - Life on the lattice

Looking for guest bloggers to cover LATTICE 2019
My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, send me an email or write a comment to this post.

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

April 16, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: Seeing More Clearly

Ok, after yesterday’s post, in which I told you what I still didn’t understand about the Event Horizon Telescope (EHT) black hole image (see also the pre-photo blog post in which I explained pedagogically what the image was likely to show and why), today I can tell you that quite a few of the gaps in my understanding are filling in (thanks mainly to conversations with Harvard postdoc Alex Lupsasca and science journalist Davide Castelvecchi, and to direct answers from professor Heino Falcke, who leads the Event Horizon Telescope Science Council and co-wrote a founding paper in this subject).  And I can give you an update to yesterday’s very tentative figure.

First: a very important point, to which I will return in a future post, is that as I suspected, it’s not at all clear what the EHT image really shows.   More precisely, assuming Einstein’s theory of gravity is correct in this context:

  • The image itself clearly shows a black hole’s quasi-silhouette (called a `shadow’ in expert jargon) and its bright photon-sphere where photons [particles of light — of all electromagnetic waves, including radio waves] can be gathered and focused.
  • However, all the light (including the observed radio waves) coming from the photon-sphere was emitted from material well outside the photon-sphere; and the image itself does not tell you where that material is located.  (To quote Falcke: this is `a blessing and a curse’; insensitivity to the illumination source makes it easy to interpret the black hole’s role in the image but hard to learn much about the material near the black hole.) It’s a bit analogous to seeing a brightly shining metal ball while not being able to see what it’s being lit by… except that the photon-sphere isn’t an object.  It’s just a result of the play of the light [well, radio waves] directed by the bending effects of gravity.  More on that in a future post.
  • When you see a picture of an accretion disk and jets drawn to illustrate where the radio waves may come from, keep in mind that it involves additional assumptions — educated assumptions that combine many other measurements of M87’s black hole with simulations of matter, gravity and magnetic fields interacting near a black hole.  But we should be cautious: perhaps not all the assumptions are right.  The image shows no conflicts with those assumptions, but neither does it confirm them on its own.

Just to indicate the importance of these assumptions, let me highlight a remark made at the press conference that the black hole is rotating quickly, clockwise from our perspective.  But (as the EHT papers state) if one doesn’t make some of the above-mentioned assumptions, one cannot conclude from the image alone that the black hole is actually rotating.  The interplay of these assumptions is something I’m still trying to get straight.

Second, if you buy all the assumptions, then the picture I drew in yesterday’s post is mostly correct except (a) the jets are far too narrow, and shown overly disconnected from the disk, and (b) they are slightly mis-oriented relative to the orientation of the image.  Below is an improved version of this picture, probably still not the final one.  The new features: the jets (now pointing in the right directions relative to the photo) are fatter and not entirely disconnected from the accretion disk.  This is important because the dominant source of illumination of the photon-sphere might come from the region where the disk and jets meet.

My3rdGuessBHPhoto.png

Updated version of yesterday’s figure: main changes are the increased width and more accurate orientation of the jets.  Working backwards: the EHT image (lower right) is interpreted, using mainly Einstein’s theory of gravity, as (upper right) a thin photon-sphere of focused light surrounding a dark patch created by the gravity of the black hole, with a little bit of additional illumination from somewhere.  The dark patch is 2.5 – 5 times larger than the event horizon of the black hole, depending on how fast the black hole is rotating; but the image itself does not tell you how the photon-sphere is illuminated or whether the black hole is rotating.  Using further assumptions, based on previous measurements of various types and computer simulations of material, gravity and magnetic fields, a picture of the black hole’s vicinity (upper left) can be inferred by the experts. It consists of a fat but tenuous accretion disk of material, almost face-on, some of which is funneled into jets, one heading almost toward us, the other in the opposite direction.  The material surrounds but is somewhat separated from a rotating black hole’s event horizon.  At this radio frequency, the jets and disk are too dim in radio waves to see in the image; only at (and perhaps close to) the photon-sphere, where some of the radio waves are collected and focused, are they bright enough to be easily discerned by the Event Horizon Telescope.

 

 

by Matt Strassler at April 16, 2019 12:53 PM

Jon Butterworth - Life and Physics

The Universe Speaks in Numbers
I have reviewed Graham Farmelo’s new book for Nature. You can find the full review here. Mathematics, physics and the relationship between the two is a fascinating topic which sparks much discussion. The review only came out this morning and … Continue reading

by Jon Butterworth at April 16, 2019 12:16 PM

April 15, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: What Are We Looking At?

The short answer: I’m really not sure yet.  [This post is now largely superseded by the next one, in which some of the questions raised below have now been answered.]

Neither are some of my colleagues who know more about the black hole geometry than I do. And at this point we still haven’t figured out what the Event Horizon Telescope experts do and don’t know about this question… or whether they agree amongst themselves.

[Note added: last week, a number of people pointed me to a very nice video by Veritasium illustrating some of the features of black holes, accretion disks and the warping of their appearance by the gravity of the black hole.  However, Veritasium’s video illustrates a non-rotating black hole with a thin accretion disk that is edge-on from our perspective; and this is definitely NOT what we are seeing!]

As I emphasized in my pre-photo blog post (in which I described carefully what we were likely to be shown, and the subtleties involved), this is not a simple photograph of what’s `actually there.’ We all agree that what we’re looking at is light from some glowing material around the solar-system-sized black hole at the heart of the galaxy M87.  But that light has been wildly bent on its path toward Earth, and so — just like a room seen through an old, warped window, and a dirty one at that — it’s not simple to interpret what we’re actually seeing. Where, exactly, is the material `in truth’, such that its light appears where it does in the image? Interpretation of the image is potentially ambiguous, and certainly not obvious.

The naive guess as to what to expect — which astronomers developed over many years, based on many studies of many suspected black holes — is crudely illustrated in the figure at the end of this post.  Material around a black hole has two main components:

  • An accretion disk of `gas’ (really plasma, i.e. a very hot collection of electrons, protons, and other atomic nuclei) which may be thin and concentrated, or thick and puffy, or something more complicated.  The disk extends inward to within a few times the radius of the black hole’s event horizon, the point of no-return; but how close it can be depends on how fast the black hole rotates.
  • Two oppositely-directed jets of material, created somehow by material from the disk being concentrated and accelerated by magnetic fields tied up with the black hole and its accretion disk; the jets begin not far from the event horizon, but then extend outward all the way to the outer edges of the entire galaxy.

But even if this is true, it’s not at all obvious (at least to me) what these objects look like in an image such as we saw Wednesday. As far as I am currently aware, their appearance in the image depends on

  • Whether the disk is thick and puffy, or thin and concentrated;
  • How far the disk extends inward and outward around the black hole;
  • The process by which the jets are formed and where exactly they originate;
  • How fast the black hole is spinning;
  • The orientation of the axis around which the black hole is spinning;
  • The typical frequencies of the radio waves emitted by the disk and by the jets (compared to the frequency, about 230 Gigahertz, observed by the Event Horizon Telescope);

and perhaps other things. I can’t yet figure out what we do and don’t know about these things; and it doesn’t help that some of the statements made by the EHT scientists in public and in their six papers seem contradictory (and I can’t yet say whether that’s because of typos, misstatements by them, or [most likely] misinterpretations by me.)

So here’s the best I can do right now, for myself and for you. Below is a figure that is nothing but an illustration of my best attempt so far to make sense of what we are seeing. You can expect that some fraction of this figure is wrong. Increasingly I believe this figure is correct in cartoon form, though the picture on the left is too sketchy right now and needs improvement.  What I’ll be doing this week is fixing my own misconceptions and trying to become clear on what the experts do and don’t know. Experts are more than welcome to set me straight!

In short — this story is not over, at least not for me. As I gain a clearer understanding of what we do and don’t know, I’ll write more about it.

 

MyFirstGuessBHPhoto.png

My personal confused and almost certainly inaccurate understanding [the main inaccuracy is that the disk and jets are fatter than shown, and connected to one another near the black hole; that’s important because the main illumination source may be the connection region; also jets aren’t oriented quite right] of how one might interpret the black hole image; all elements subject to revision as I learn more. Left: the standard guess concerning the immediate vicinity of M87’s black hole: an accretion disk oriented nearly face-on from Earth’s perspective, jets aimed nearly at and away from us, and a rotating black hole at the center.  The orientation of the jets may not be correct relative to the photo.  Upper right: The image after the radio waves’ paths are bent by gravity.  The quasi-silhouette of the black hole is larger than the `true’ event horizon, a lot of radio waves are concentrated at the ‘photon-sphere’ just outside (brighter at the bottom due to the black-hole spinning clockwise around an axis slightly askew to our line of sight); some additional radio waves from the accretion disk and jets further complicate the image. Most of the disk and jets are too dim to see.  Lower Right: This image is then blurred out by the Event Horizon Telescope’s limitations, partly compensated for by heavy-duty image processing.

 

by Matt Strassler at April 15, 2019 04:02 PM

April 11, 2019

Jon Butterworth - Life and Physics

Exploring the “Higgs Portal”
The Higgs boson is unique. Does it open a door to Dark Matter? All known fundamental particles acquire mass by interacting with the Higgs boson. Actually, more correctly, they interact with a quantum field which is present even in “empty” … Continue reading

by Jon Butterworth at April 11, 2019 04:16 PM

April 10, 2019

Jon Butterworth - Life and Physics

Particle & astro-particle physics annual UK meeting
The annual UK particle physics and astroparticle physics conference was hosted by Imperial this week, and has just finished. Some slightly random highlights. Crisis or no crisis, the future of particle physics is a topic, of course. An apposite quote from … Continue reading

by Jon Butterworth at April 10, 2019 03:25 PM

Matt Strassler - Of Particular Significance

A Black Day (and a Happy One) In Scientific History

Wow.

Twenty years ago, astronomers Heino Falcke, Fulvio Melia and Eric Agol (a former colleague of mine at the University of Washington) pointed out that the black hole at the center of our galaxy, the Milky Way, was probably big enough to be observed — not with a usual camera using visible light, but using radio waves and clever techniques known as “interferometry”.  Soon it was pointed out that the black hole in M87, more distant but larger, could also be observed.  [How? I explained this yesterday in this post.]

And today, an image of the latter, looking quite similar to what we expected, was presented to humanity.  Just as with the discovery of the Higgs boson, and with LIGO’s first discovery of gravitational waves, nature, captured by the hard work of an international group of many scientists, gives us something definitive, uncontroversial, and spectacularly in line with expectations.

EHTDiscoveryM87.png

An image of the dead center of the huge galaxy M87, showing a glowing ring of radio waves from a disk of rapidly rotating gas, and the dark quasi-silhouette of a solar-system-sized black hole.  Congratulations to the Event Horizon Telescope team

I’ll have more to say about this later [have to do non-physics work today 😦 ] and in particular about the frustration of not finding any helpful big surprises during this great decade of fundamental science — but for now, let’s just enjoy this incredible image for what it is, and congratulate those who proposed this effort and those who carried it out.

 

by Matt Strassler at April 10, 2019 02:16 PM

Clifford V. Johnson - Asymptotia

It’s a Black Hole!

Yes, it’s a black hole all right. Following on from my reflections from last night, I can report that the press conference revelations were remarkable indeed. Above you see the image they revealed! It is the behemoth at the centre of the galaxy M87! This truly groundbreaking image is the … Click to continue reading this post

The post It’s a Black Hole! appeared first on Asymptotia.

by Clifford at April 10, 2019 01:37 PM

Clifford V. Johnson - Asymptotia

Event!

Well, I’m off to get six hours of sleep before the big announcement tomorrow! The Event Horizon Telescope teams are talking about an announcement of “groundbreaking” results tomorrow at 13:00 CEST. Given that they set out to “image” the event horizon of a black hole, this suggests (suggests) that they … Click to continue reading this post

The post Event! appeared first on Asymptotia.

by Clifford at April 10, 2019 07:06 AM

April 09, 2019

Matt Strassler - Of Particular Significance

A Non-Expert’s Guide to a Black Hole’s Silhouette

[Note added April 16: some minor improvements have been made to this article as my understanding has increased, specifically concerning the photon-sphere, which is the main region from which the radio waves are seen in the recently released image. See later blog posts for the image and its interpretation.]

About fifteen years ago, when I was a professor at the University of Washington, the particle physics theorists and the astronomer theorists occasionally would arrange to have lunch together, to facilitate an informal exchange of information about our adjacent fields. Among the many enjoyable discussions, one I was particularly excited about — as much as an amateur as a professional — was that in which I learned of the plan to make some sort of image of a black hole. I was told that this incredible feat would likely be achieved by 2020. The time, it seems, has arrived.

The goal of this post is to provide readers with what I hope will be a helpful guide through the foggy swamp that is likely to partly obscure this major scientific result. Over the last days I’ve been reading what both scientists and science journalists are writing in advance of the press conference Wednesday morning, and I’m finding many examples of jargon masquerading as English, terms poorly defined, and phrasing that seems likely to mislead. As I’m increasingly concerned that many non-experts will be unable to understand what is presented tomorrow, and what the pictures do and do not mean, I’m using this post to answer a few questions that many readers (and many of these writers) have perhaps not thought to ask.

A caution: I am not an expert on this subject. At the moment, I’m still learning about the more subtle issues. I’ll try to be clear about when I’m on the edge of my knowledge, and hopefully won’t make any blunders [but experts, please point them out if you see any!]

Which black holes are being imaged?

The initial plan behind the so-called “Event Horizon Telescope” (the name deserves some discussion; see below) has been to make images of two black holes at the centers of galaxies (the star-cities in which most stars are found.) These are not black holes formed by the collapse of individual stars, such as the ones whose collisions have been observed through their gravitational waves. Central galactic black holes are millions or even billions of times larger!  The ones being observed are

  1. the large and `close’ black hole at the center of the Milky Way (the giant star-city we call home), and
  2. the immense but much more distant black hole at the center of M87 (a spectacularly big star-megalopolis.)
MilkyWayAndM87.jpg

Left: the Milky Way as seen in the night sky; we see our galaxy from within, and so cannot see its spiral shape directly.  The brightest region is toward the center of the galaxy, and deep within it is the black hole of interest, as big as a large star but incredibly small in this image.  Right: the enormous elliptically-shaped galaxy M87, which sports an enormous black hole (but again, incredibly small in this image) at its center.  The blue stripe is a jet of material hurled at near-light-speed from the region very close to the black hole.

Why go after both of these black holes at the same time? Just as the Sun and Moon appear the same size in our sky because of an accident — the Sun’s greater distance is almost precisely compensated by its greater size — these two black holes appear similarly sized, albeit very tiny, from our vantage point.

Our galaxy’s central black hole has a mass of about four million Suns, and it is about twenty times wider than our Sun (in contrast to the black holes whose gravitational waves were observed, which are the size of a human city.) But from the center of our galaxy, it takes light tens of thousands of years to reach Earth, and at such a great distance, this big black hole appears as small as would a tiny grain of sand from a plane at cruising altitude. Try to see the sand grains on a beach from a plane window!

Meanwhile, although M87 lies about two thousand times further away, its central black hole has a mass and radius about two thousand times larger than our galaxy’s black hole. Consequently it appears roughly the same size on the sky.
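A rough back-of-the-envelope check (my own numbers, all approximate): a mass of four million Suns corresponds to an event-horizon radius $r_s = 2GM/c^2 \approx 1.2\times 10^{10}$ m, and the distance to the galactic center is about $2.5\times 10^{20}$ m, so the horizon’s angular diameter as seen from Earth is roughly

$$ \theta \approx \frac{2\,r_s}{d} \approx \frac{2.4\times 10^{10}\ {\rm m}}{2.5\times 10^{20}\ {\rm m}} \approx 10^{-10}\ {\rm radians} \approx 20\ {\rm microarcseconds}, $$

and multiplying both the size and the distance by the same factor of two thousand for M87 leaves $\theta$ essentially unchanged.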

We may get our first images of both black holes at Wednesday’s announcement, though it is possible that so far only one image is ready for public presentation.

How can one see something as black as a black hole?!?

First of all, aren’t black holes black? Doesn’t that mean they don’t emit (or reflect) any light? Yes, and yes. [With a caveat — Hawking radiation — but while that’s very important for extremely small black holes, it’s completely irrelevant for these ones.]

So how can we see something that’s black against a black night sky? Well, a black hole off by itself would indeed be almost invisible.  [Though it would be detectable because its gravity bends the light of objects behind it, proving it was a black hole and not a clump of, say, dark matter would be tough.]

But the centers of galaxies have lots of `gas’ — not quite what we call `gas’ in school, and certainly not what we put in our cars. `Gas’ is the generalized term astronomers use for any collection of independently wandering atoms (or ions), atomic nuclei, and electrons; if it’s not just atoms it should really be called plasma, but let’s not worry about that here. Some of this `gas’ is inevitably orbiting the black hole, and some of it is falling in (and, because of complex interactions with magnetic fields, some is being blasted outward  before it falls in.) That gas, unlike the black hole, will inevitably glow — it will emit lots of light. What the astronomers are observing isn’t the black hole; they’re observing light from the gas!

And by light, I don’t only mean what we can see with human eyes. The gas emits electromagnetic waves of all frequencies, including not only the visible frequencies but also much higher frequencies, such as those we call X-rays, and much lower frequencies, such as those we call radio waves. To detect these invisible forms of light, astronomers build all sorts of scientific instruments, which we call `telescopes’ even though they don’t involve looking into a tube as with traditional telescopes.

Is this really a “photograph” of [the gas in the neighborhood of] a black hole?

Yes and (mostly) no.  What you’ll be shown is not a picture you could take with a cell-phone camera if you were in a nearby spaceship.  It’s not visible light that’s being observed.  But it is invisible light — radio waves — and since all light, visible and not, is made from particles called `photons’, technically you could still say it is a “photo”-graph.

As I said, the telescope being used in this effort doesn’t have a set of mirrors in a tube like your friendly neighbor’s amateur telescope. Instead, it uses radio receivers to detect electromagnetic waves that have frequencies above what your traditional radio or cell phone can detect [in the hundred gigahertz range, over a thousand times above what your FM radio is sensitive to.]  Though some might call them microwaves, let’s just call them radio waves; it’s just a matter of definition.

So the images you will see are based on the observation of electromagnetic waves at these radio frequencies, but they are turned into something visible for us humans using a computer. That means the color of the image is inserted by the computer user and will be arbitrary, so pay it limited attention. It’s not the color you would see if you were nearby.  Scientists will choose a combination of the color and brightness of the image so as to indicate the brightness of the radio waves observed.

If you were nearby and looked toward the black hole, you’d see something else. The gas would probably appear colorless — white-hot — and the shape and brightness, though similar, wouldn’t be identical.

If I had radios for eyes, is this what I would see?

Suppose you had radio receivers for eyes instead of the ones you were born with; is this image what you would see if you looked at the black hole from a nearby planet?

Well, to some extent — but still, not exactly.  There’s another very important way that what you will see is not a photograph. It is so difficult to make this measurement that the image you will see is highly processed — that is to say, it will have been cleaned up and adjusted using special mathematics and powerful computers. Various assumptions are key ingredients in this image-processing. Thus, you will not be seeing the `true’ appearance of the [surroundings of the] black hole. You will be seeing the astronomers’ best guess as to this appearance based on partial information and some very intelligent guesswork. Such guesswork may be right, but as we may learn only over time, some of it may not be.

This guesswork is necessary.  To make a nice clear image of something so tiny and faint using radio waves, you’d naively need a radio receiver as large as the Earth. A trick astronomers use when looking at distant objects is to build gigantic arrays of large radio receivers, which can be combined together to make a `telescope’ much larger and more powerful than any one receiver. The tricks for doing this efficiently are beyond what I can explain here, but involve the term `interferometry’. Examples of large radio telescope arrays include ALMA, the center part of which can be seen in this image, which was built high up on a plateau between two volcanoes in the Atacama desert.

But even ALMA isn’t up to the task. And we can’t make a version of ALMA that covers the Earth. So the next best thing is to use ALMA and all of its friends, which are scattered at different locations around the Earth — an incomplete array of single and multiple radio receivers, combined using all the tricks of interferometry. This is a bit like (and not like) using a telescope that is powerful but has large pieces of its lens missing. You can get an image, but it will be badly distorted.

To figure out what you’re seeing, you must use your knowledge of your imperfect lens, and work backwards to figure out what your distorted image really would have looked like if you had a perfect lens.

Even that’s not quite enough: to do this, you need to have a pretty good guess about what you were going to see. That is where you might go astray; if your assumptions are wrong, you might massage the image to look like what you expected instead of how it really ought to look.   [Is this a serious risk?   I’m not yet expert enough to know the details of how delicate the situation might be.]
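If you want a feeling for why the missing pieces matter, here is a little numpy toy of my own devising — emphatically not the real reconstruction pipeline, which is vastly more sophisticated. An interferometer effectively samples the Fourier transform of the sky, so throwing away most of the Fourier components and transforming back mimics an incomplete array:

import numpy as np

# Toy "sky": a thin bright ring, loosely mimicking a photon-sphere image.
n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.hypot(x, y)
sky = np.exp(-((r - 0.3) / 0.05) ** 2)

# An interferometer measures the sky's Fourier components ("visibilities"),
# but only at the points its pairs of telescopes happen to cover. Model the
# incompleteness crudely: keep a random 10% of components, zero the rest.
visibilities = np.fft.fft2(sky)
rng = np.random.default_rng(1)
mask = rng.random(visibilities.shape) < 0.10
dirty_image = np.fft.ifft2(visibilities * mask).real

# "dirty_image" is the distorted picture you get with no further assumptions;
# a real reconstruction must instead infer the missing components from prior
# expectations of what the sky can plausibly look like.
print(sky.max(), dirty_image.max())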

It is possible that in coming days and weeks there will be controversies about whether the image-processing techniques used and the assumptions underlying them have created artifacts in the images that really shouldn’t be there, or removed features that should have been there. This could lead to significant disagreements as to how to interpret the images. [Consider these concerns a statement of general prudence based on past experience; I don’t have enough specific expertise here to give a more detailed opinion.]

So in summary, this is not a photograph you’d see with your eyes, and it’s not a complete unadulterated image — it’s a highly-processed image of what a creature with radio eyes might see.  The color is arbitrary; some combination of color and brightness will express the intensity of the radio waves, nothing more.  Treat the images with proper caution and respect.

Will the image show what [the immediate neighborhood of] a black hole looks like?

Oh, my friends and colleagues! Could we please try to be a little more careful using the phrase `looks like‘? The term has two very different meanings, and in contexts like this one, it really, really matters.

Let me put a spoon into a glass of water: on the left, I’ve drawn a diagram of what it looks like, and on the right, I’ve put a photograph of what it looks like.

GlassNSpoonBoth.png

On the left, a sketchy drawing of a spoon in water, showing what it “looks like” in truth; on the right, what it “looks like” to your eyes, distorted by the bending of the light’s path between the spoon and my camera.

You notice the difference, no doubt. The spoon on the left looks like a spoon; the spoon on the right looks like something invented by a surrealist painter.  But it’s just the effect of water and glass on light.

What’s on the left is a drawing of where the spoon is located inside the glass; it shows not what you would see with your eyes, but rather a depiction of the `true location’ of the spoon. On the right is what you will see with your eyes and brain, which is not showing you where the objects are in truth but rather is showing you where the light from those objects is coming from. The truth-depiction is drawn as though the light from the object goes straight from the object into your eyes. But when the light from an object does not take a straight path from the objects to you — as in this case, where the light’s path bends due to interaction of the light with the water and the glass — then the image created in your eyes or camera can significantly differ from a depiction of the truth.

The same issue arises, of course, with any sort of lens or mirror that isn’t flat. A room seen through a curved lens, or through an old, misshapen window pane, can look pretty strange.  And gravity? Strong gravity near a black hole drastically modifies the path of any light traveling close by!

In the figure below, the left panel shows a depiction of what we think the region around a black hole typically looks like in truth. There is a spinning disk of gas, called the accretion disk, a sort of holding station for the gas. At any moment, a small fraction of the gas, at the inner edge of the disk, has crept too close to the black hole and is starting to spiral in. There are also usually jets of material flying out, roughly aligned with the axis of the black hole’s rapid rotation and its magnetic field. As I mentioned above, that material is being flung out of the black hole’s neighborhood (not out of the black hole itself, which would be impossible.)

accretiondisk_real_apparent.jpg

Left: a depiction `in truth’ of the neighborhood of a black hole, showing the disk of slowly in-spiraling gas and the jets of material funneled outward by the black hole’s magnetic field.  The black hole is not directly shown, but is significantly smaller than the inner edge of the disk.  The color is not meaningful.  Right: a simulation by Hotaka Shiokawa of how such a black hole may appear to the Event Horizon Telescope [if its disk is tipped up a bit more than in the image at left.]  The color is arbitrary; mainly the brightness matters.  The left side of the disk appears brighter than the right side due to a `Doppler effect’; on the left the gas is moving toward us, increasing the intensity of the radio waves, while on the right side it is moving away.  The dark area at the center is the black hole’s sort-of-silhouette; see below.

The image you will be shown, however, will perhaps look like the one on the right. That is an image of the radio waves as observed here at Earth, after the waves’ paths have been wildly bent — warped by the intense gravity near the black hole. Just as with any lens or mirror or anything similar, what you will see does not directly reveal what is truly there. Instead, you must infer what is there from what you see.

Just as you infer, when you see the broken twisted spoon in the glass, that probably the spoon is not broken or twisted, and the water and glass have refracted the light in familiar ways, so too we must make assumptions to understand what we’re really looking at in truth after we see the images tomorrow.

How serious are these assumptions?  Certainly, at their first attempt, astronomers will assume Einstein’s theory of gravity, which predicts how the light is bent around a black hole, is correct. But the details of what we infer from what we’re shown might depend upon whether Einstein’s formulas are precisely right. It also may depend on the accuracy of our understanding of and assumptions about accretion disks. Further complicating the procedure is that the rate and axis of the black hole’s rotation affects the details of the bending of the light, and since we’re not sure of the rotation yet for these black holes, that adds to the assumptions that must gradually be disentangled.

Because of these assumptions, we will not have an unambiguous understanding of the true nature of what appears in these first images.

Are we seeing the `shadow’ of a black hole?

A shadow: that’s what astronomers call it, but as far as I can tell, this word is jargon masquerading as English… the most pernicious type of jargon.

What’s a shadow, in English? Your shadow is a dark area on the ground created by you blocking the light from the Sun, which emits light and illuminates the area around you. How would I see your shadow? I’d look at the ground — not at you.

This is not what we are doing in this case. The gas is glowing, illuminating the region. The black hole is `blocking’ [caution! see below] some of the light. We’re looking straight toward the black hole, and seeing dark areas where illumination doesn’t reach us. This is more like looking at someone’s silhouette, not someone’s shadow!

SilhouetteShadow2.png

With the Sun providing a source of illumination, a person standing between you and the Sun would appear as a silhouette that blocks part of the Sun, and would also create a shadow on the ground. [I’ve drawn the shadow slightly askew to avoid visual confusion.]  In the black hole images, illumination is provided by the glowing gas, and we’ll see a sort-of-silhouette [but see below!!!] of the black hole.  There’s nothing analogous to the ground, or to the person’s shadow, in the black hole images.

That being said, it’s much more clever than a simple silhouette, because of all that pesky bending of the light that the black hole is doing.  In an ordinary silhouette, the light from the illuminator travels in straight lines, and an object blocks part of the light.   But a black hole does not block your view of what’s behind it;  the light from the gas behind it gets bent around it, and thus can be seen after all!

Still, after you calculate all the bending, you find out that there’s a dark area from which no light emerges, which I’ll informally call a quasi-silhouette.  Just outside this is a `photon-sphere’, which creates a bright ring; I’ll explain this elsewhere.   That resembles what happens with certain lenses, in contrast to the person’s silhouette shown above, where the light travels in straight lines.  Imagine that a human body could bend light in such a way; a whimsical depiction of what that might look like is shown below:

QuasiSilhouette.png

If a human body could bend light the way a black hole does, it would distort the Sun’s appearance.  The light we’d expect to be blocked would instead be warped around the edges.  The dark area, no longer a simple outline of the body, would take on a complicated shape.

Note also that the black hole’s quasi-silhouette probably won’t be entirely dark. If material from the accretion disk (or a jet pointing toward us) lies between us and the black hole, it can emit light in our direction, partly filling in the dark region.

Thus the quasi-silhouette we’ll see in the images is not the outline of the black hole’s edge, but an effect of the light bending, and is in fact considerably larger than the black hole.  In truth the region from which no light emerges may be as much as 50% larger in radius than the event horizon, while the quasi-silhouette as seen in the image may appear 2.5 to 5 times larger (depending on how fast the black hole rotates) than the true event horizon — all due to the bent paths of the radio waves.

Interestingly, it turns out that the details of how the black hole is rotating don’t much affect the size of the quasi-silhouette. The black hole in the Milky Way is already understood well enough that astronomers know how big its quasi-silhouette ought to appear, even though we don’t know its rotation speed and axis.  The quasi-silhouette’s size in the image will therefore be an important test of Einstein’s formulas for gravity, even on day one.  If the size disagrees with expectations, expect a lot of hullabaloo.
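For the non-rotating (Schwarzschild) case these numbers can be made precise — this is standard textbook material, not anything specific to the EHT analysis:

$$ r_{\rm horizon} = \frac{2GM}{c^2}, \qquad r_{\rm photon\ sphere} = \frac{3GM}{c^2} = 1.5\,r_{\rm horizon}, \qquad r_{\rm dark\ region} = \sqrt{27}\,\frac{GM}{c^2} \approx 2.6\,r_{\rm horizon}. $$

Rotation changes the horizon more than it changes the apparent dark area: for a maximally rotating black hole the horizon shrinks to $GM/c^2$ while the quasi-silhouette stays roughly the same apparent size, which is roughly where the factor of 5 at the top of the range comes from.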

What is the event horizon, and is it present in the image?

The event horizon of a black hole, in Einstein’s theory of gravity, is not an object. It’s the edge of a region of no-return, as I’ve explained here. Anything that goes past that point can’t re-emerge, and nothing that happens inside can ever send any light or messages back out.

Despite what some writers are saying, we’re not expecting to see the event horizon in the images. As is hopefully clear by now, what astronomers are observing is (a) the radio waves from the gas near the black hole, after the waves have taken strongly curved paths, and (b) a quasi-silhouette of the black hole from which radio waves don’t emerge. But as I explained above, this quasi-silhouette is considerably larger than the event horizon, both in truth and in appearance. The event horizon does not emit any light, and it sits well inside the quasi-silhouette, not at its edge.

Still, what we’ll be seeing is closer to the event horizon than anything we’ve ever seen before, which is really exciting!   And if the silhouette has an unexpected appearance, we just might get our first hint of a breakdown of Einstein’s understanding of event horizons.  Don’t bet on it, but you can hope for it.

Can we hope to see the singularity of a black hole?

No, for two reasons.

First, there probably is no singularity in the first place. It drives me nuts when people say there’s a singularity inside a black hole; that’s just wrong. The correct statement is that in Einstein’s theory of gravity, singularities (i.e. unavoidable infinities) arise in the math — in the formulas for black holes (or more precisely, in the solutions to those formulas that describe black holes.) But a singularity in the math does not mean that there’s a singularity in nature!  It usually means that the math isn’t quite right — not that anyone made a mistake, but that the formulas that we’re using aren’t appropriate, and need to be modified in some way, because of some natural phenomenon that we don’t yet understand.

A singularity in a formula implies a mystery in nature — not a singularity in nature.

In fact, historically, in every previous case where singularities have appeared in formulas, it simply meant that the formula (or solution) was not accurate in that context, or wasn’t being used correctly. We already know that Einstein’s theory of gravity can’t be complete (it doesn’t accommodate quantum physics, which is also a part of the real world) and it would be no surprise if its incompleteness is responsible for these infinities. The math singularity merely signifies that the physics very deep inside a black hole isn’t understood yet.

[The same issue afflicts the statement that the Big Bang began with a singularity; the solution to the equations has a singularity, yes, but that’s very far from saying that nature’s Big Bang actually began with one.]

Ok, so how about revising the question: is there any hope of seeing the region deep inside the black hole where Einstein’s equations have a singularity? No. Remember what’s being observed is the stuff outside the black hole, and the black hole’s quasi-silhouette. What happens inside a black hole stays inside a black hole. Anything else is inference.

[More precisely, what happens inside a huge black hole stays inside a black hole for eons.  For tiny black holes it comes out sooner, but even so it is hopelessly scrambled in the form of `Hawking radiation.’]

Any other questions?

Maybe there are some questions that are bothering readers that I haven’t addressed here?  I’m super-busy this afternoon and all day Wednesday with non-physics things, but maybe if you ask the question early enough I can address it here before the press conference (at 9 am Wednesday, New York City time).   Also, if you find my answers confusing, please comment; I can try to further clarify them for later readers.

It’s a historic moment, or at least the first stage in a historic process.  To me, as I hope to all of you, it’s all very exciting and astonishing how the surreal weirdness of Einstein’s understanding of gravity, and the creepy mysteries of black holes, have suddenly, in just a few quick years, become undeniably real!

by Matt Strassler at April 09, 2019 01:54 PM

April 08, 2019

John Baez - Azimuth

Symposium on Compositional Structures 4

There’s yet another conference in this fast-paced series, and this time it’s in Southern California!

Symposium on Compositional Structures 4, 22–23 May, 2019, Chapman University, California. Organized by Alexander Kurz.

The Symposium on Compositional Structures (SYCO) is an interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.

The first SYCO was in September 2018, at the University of Birmingham. The second SYCO was in December 2018, at the University of Strathclyde. The third SYCO was in March 2019, at the University of Oxford. Each meeting attracted about 70 participants.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere. Think creatively—you could submit a recent paper, or notes on work in progress, or even a recent Masters or PhD thesis.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;

• graphical calculi, including string diagrams, Petri nets and reaction networks;

• languages and frameworks, including process algebras, proof nets, type theory and game semantics;

• abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

• quantum algebra, including quantum computation and representation theory;

• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;

• industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

SYCO will be a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, and to avoid the need to make difficult choices between strong submissions, in the event that more good-quality submissions are received than can be accommodated in the timetable, the programme committee may choose to defer some submissions to a future meeting, rather than reject them. This would be done based largely on submission order, giving an incentive for early submission, but would also take into account other requirements, such as ensuring a broad scientific programme. Deferred submissions can be re-submitted to any future SYCO meeting, where they would not need peer review, and where they would be prioritised for inclusion in the programme. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive reviewing process. Meetings will be held sufficiently frequently to avoid a backlog of deferred papers.

Invited speakers

John Baez, University of California, Riverside: Props in network theory.

Tobias Fritz, Perimeter Institute for Theoretical Physics: Categorical probability: results and challenges.

Nina Otter, University of California, Los Angeles: A unified framework for equivalences in social networks.

Important dates

All times are anywhere-on-earth.

• Submission deadline: Wednesday 24 April 2019
• Author notification: Wednesday 1 May 2019
• Registration deadline: TBA
• Symposium dates: Wednesday 22 and Thursday 23 May 2019

Submission

Submission is by EasyChair, via the following link:

https://easychair.org/conferences/?conf=syco4

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

Programme Committee

• Miriam Backens, University of Oxford
• Ross Duncan, University of Strathclyde and Cambridge Quantum Computing
• Brendan Fong, Massachusetts Institute of Technology
• Stefano Gogioso, University of Oxford
• Amar Hadzihasanovic, Kyoto University
• Chris Heunen, University of Edinburgh
• Dominic Horsman, University of Grenoble
• Martti Karvonen, University of Edinburgh
• Kohei Kishida, Dalhousie University (chair)
• Andre Kornell, University of California, Davis
• Martha Lewis, University of Amsterdam
• Samuel Mimram, École Polytechnique
• Benjamin Musto, University of Oxford
• Nina Otter, University of California, Los Angeles
• Simona Paoli, University of Leicester
• Dorette Pronk, Dalhousie University
• Mehrnoosh Sadrzadeh, Queen Mary
• Pawel Sobocinski, University of Southampton
• Joshua Tan, University of Oxford
• Sean Tull, University of Oxford
• Dominic Verdon, University of Bristol
• Jamie Vicary, University of Birmingham and University of Oxford
• Maaike Zwart, University of Oxford

by John Baez at April 08, 2019 01:00 AM

April 06, 2019

Andrew Jaffe - Leaves on the Line

@TheMekons make the world alright, briefly, at the 100 Club, London.

by Andrew at April 06, 2019 10:17 AM

March 31, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

My favourite conference; the Institute of Physics Spring Weekend

This weekend I attended the annual meeting of the Institute of Physics in Ireland. I always enjoy these meetings – more relaxing than a technical conference and a great way of keeping in touch with physicists from all over the country. As ever, there were a number of interesting presentations, plenty of discussions of science and philosophy over breakfast, lunch and dinner, all topped off by the annual awarding of the Rosse Medal, a highly competitive award for physics postgraduates across the nation.

banner

The theme of this year’s meeting was ‘A Climate of Change’ and thus the programme included several talks on the highly topical subject of anthropogenic climate change. First up was ‘The science of climate change’, a cracking talk on the basic physics of climate change by Professor Joanna Haigh of Imperial College London. This was followed by ‘Climate change: where we are post the IPCC report and COP24’, an excellent presentation by Professor John Sweeney of Maynooth University on the latest results from the IPCC. Then it was my turn. In ‘Climate science in the media – a war on information?’, I compared the coverage of climate change in the media with that of other scientific topics such as medical science and big bang cosmology. My conclusion was that climate change is a difficult subject to convey to the public, and matters are not helped by actors who deliberately attempt to muddle the science and downplay the threat. You can find details of the full conference programme here and the slides for my own talk are here.

 

Images of my talk from IoP Ireland 

There followed a panel discussion in which Professor Haigh, Professor Sweeney and I answered questions from the floor on climate science. I don’t always enjoy panel discussions, but I think this one was useful thanks to some excellent chairing by Paul Hardaker of the Institute of Physics.

IMG_2504 (1)

Panel discussion of the threat of anthropogenic climate change

After lunch, we were treated to a truly fascinating seminar: ‘Tropical storms, hurricanes, or just a very windy day?: Making environmental science accessible through Irish Sign Language’, by Dr Elizabeth Mathews of Dublin City University, on the challenge of making media descriptions of threats such as storms, hurricanes and climate change accessible to deaf people. This was followed by a most informative talk by Dr Bajram Zeqiri of the National Physical Laboratory on the recent redefinition of the kilogram, ‘The measure of all things: redefinition of the kilogram, the kelvin, the ampere and the mole’.

Finally, we had the hardest part of the day, the business of trying to select the best postgraduate posters and choosing a winner from the shortlist. As usual, I was blown away by the standard, far ahead of anything I or my colleagues ever produced. In the end, the Rosse Medal was awarded to Sarah Markham of the University of Limerick for a truly impressive poster and presentation.

D25jKhvXcAE8vdA

Viewing posters at the IoP 2019 meeting; image courtesy of IoP Ireland

All in all, another super IoP Spring weekend. Now it’s back to earth and back to teaching…

by cormac at March 31, 2019 08:51 PM

March 29, 2019

Robert Helling - atdotde

Proving the Periodic Table
The year 2019 is the International Year of the Periodic Table celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought.



In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is Hund's rule that tells you the order in which you have to fill the shells, and within them the orbitals (s, p, d, f, ...). Then, in your second semester in university, you learn to derive those using Schrödinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the main quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, with $L=0$ corresponding to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.
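Spelled out (in atomic units; this is the standard result, included here for orientation): the hydrogen-like energy levels are

$$ E_n = -\frac{Z^2}{2n^2}, \qquad n = 1, 2, 3, \dots, $$

and each level $n$ contains the orbitals $L = 0, \dots, n-1$ with degeneracy $2L+1$ each, giving $\sum_{L=0}^{n-1}(2L+1) = n^2$ states, or $2n^2$ once spin is included — the familiar shell capacities 2, 8, 18, 32, ...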

Except that it doesn't: This is only true for the hydrogen atom. But the Hamiltonian for an atom with nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)
$$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there would be no interaction between the electrons and we could solve a hydrogen-type problem for each electron separately and then antisymmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and since there are $N(N-1)/2$ electron pairs it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).


Band structure in a NPN-transistor
Also in that case, you pretend there is only one electron in the world, which feels the periodic electric potential created by the nuclei and all the other electrons — the latter don't show up in the wave function anymore but only as a charge density.

For atoms you could try to tell a similar story by taking the inner electrons into account by saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to it than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are long-wavelength compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D the Fermi gas is not the whole story; there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regard to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which proceeds in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})= O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
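In case you wonder where the $Z^{7/3}$ comes from, the scaling argument fits in two lines (my notation, constants suppressed): the Thomas-Fermi functional is

$$E^{TF}[\rho] = c\int \rho^{5/3}\,dx - Z\int \frac{\rho(x)}{|x|}\,dx + \frac{1}{2}\iint \frac{\rho(x)\rho(y)}{|x-y|}\,dx\,dy, \qquad \int\rho = Z,$$

and substituting $\rho(x) = Z^2\,\bar\rho(Z^{1/3}x)$ preserves the normalisation while pulling a common factor $Z^{7/3}$ out of all three terms, so the minimal energy scales accordingly.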

On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly, unwanted ee-term perturbatively.
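Spelled out, in the units used above: setting $x_i = y_i/Z$ turns the Hamiltonian into

$$H = Z^2\left( -\sum_{i=1}^N \Delta_i - \sum_{i=1}^N \frac{1}{|y_i|} + \frac{1}{Z}\sum_{i\lt j}^N \frac{1}{|y_i-y_j|}\right),$$

so, up to the overall factor of $Z^2$ (a rescaling of the energy), the electron-electron repulsion indeed carries a small coupling $1/Z$.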

Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. But there is still some way to go, even 150 years after Mendeleev.

by Unknown (noreply@blogger.com) at March 29, 2019 11:02 AM

March 21, 2019

Alexey Petrov - Symmetry factor

CP-violation in charm observed at CERN

 

There is big news that came from CERN today. It was announced at a conference called Rencontres de Moriond, one of the major yearly conferences in the field of particle physics. One of CERN’s experiments, LHCb, reported an observation (yes, an observation, not evidence for, but an observation) of CP violation in the charm system. Why is this big news, and why should you care?

You should care about this announcement because it has something to do with what our Universe looks like. As you look around, you might notice an interesting fact: everything is made of matter. So what about it? Well, one thing is missing from our everyday life: antimatter.

As it turns out, physicists believe that the amounts of matter and antimatter were the same after the Universe was created. So, the $1,110,000 question is: what happened to the antimatter? According to Sakharov’s criteria for baryogenesis (the process of creating more baryons, like protons and neutrons, than anti-baryons), one of the conditions for our Universe to be the way it is would be to have matter particles interact slightly differently from the corresponding antimatter particles. In particle physics this condition is called CP violation. It has been observed for beauty and strange quarks, but never for charm quarks. As charm quarks are fundamentally different from both beauty and strange ones (electrical charge, mass, ways they interact, etc.), physicists hoped that New Physics, something that we have not yet seen or predicted, might be lurking nearby and could be revealed in charm decays. That is why so much attention has been paid to searches for CP violation in charm.

Now there are indications that the search is finally over: LHCb announced that they have observed CP violation in charm. Here is their announcement (look for a news item from 21 March 2019). A technical paper can be found here, discussing how LHCb extracted CP-violating observables from a time-dependent analysis of D -> KK and D -> pipi decays.
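For orientation, the standard observables in this game (these are the textbook definitions, not anything specific to the new analysis) are

$$A_{CP}(f) = \frac{\Gamma(D^0\to f)-\Gamma(\bar D^0\to f)}{\Gamma(D^0\to f)+\Gamma(\bar D^0\to f)}, \qquad \Delta A_{CP} = A_{CP}(K^+K^-) - A_{CP}(\pi^+\pi^-),$$

where the difference of the two final states is taken because production and detection asymmetries affect both in nearly the same way and largely cancel in $\Delta A_{CP}$.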

The result is generally consistent with the Standard Model expectations. However, there are theory papers (like this one) that predict the Standard Model result to be about seven times smaller with rather small uncertainty.  There are three possible interesting outcomes:

  1. Experimental result is correct but the theoretical prediction mentioned above is not. Well, theoretical calculations in charm physics are hard and often unreliable, so maybe that theory paper underestimated the result and its uncertainties.
  2. Experimental result is incorrect but the theoretical prediction mentioned above is correct. Maybe LHCb underestimated their uncertainties?
  3. Experimental result is correct AND the theoretical prediction mentioned above is correct. This is the most interesting outcome: it implies that we see effects of New Physics.

What will it be? Time will show.

A more technical note on why it is hard to see CP violation in charm:

One reason that CP-violating observables are hard to see in charm is that they are quite small, at least in the Standard Model. All final/initial state quarks in the D -> KK or D -> pi pi transition belong to the first two generations. The CP-violating asymmetry that arises when we compare time-dependent decay rates of the D0 to a pair of kaons or pions with the corresponding decays of the anti-D0 can only appear if one picks up the weak phase that is associated with the third generation of quarks (b and t), which is possible via a penguin amplitude. The problem is that the penguin amplitude is small, as the Glashow-Iliopoulos-Maiani (GIM) mechanism makes it proportional to m_b^2 times tiny CKM factors. The strong phases needed for this asymmetry come from the tree-level decays and (supposedly) are largely non-perturbative.

Notice that in B-physics the situation is exactly the opposite. You get the weak phase from the tree-level amplitude and the penguin one is proportional to m_top^2, so CP-violating interference is large.

Ask me if you want to know more!

by apetrov at March 21, 2019 06:45 PM

Matt Strassler - Of Particular Significance

LHCb experiment finds another case of CP violation in nature

The LHCb experiment at the Large Hadron Collider is dedicated mainly to the study of mesons [objects made from a quark of one type, an anti-quark of another type, plus many other particles] that contain bottom quarks (hence the 'b' in the name). But it also can be used to study many other things, including mesons containing charm quarks.

By examining large numbers of mesons that contain a charm quark and an up anti-quark (or a charm anti-quark and an up quark) and studying carefully how they decay, the LHCb experimenters have discovered a new example of violations of the transformations known as CP (C: exchange of particle with anti-particle; P: reflection of the world in a mirror), of the sort that have been previously seen in mesons containing strange quarks and mesons containing bottom quarks.  Here’s the press release.

Congratulations to LHCb!  This important addition to our basic knowledge is consistent with expectations; CP violation of roughly this size is predicted by the formulas that make up the Standard Model of Particle Physics.  However, our predictions are very rough in this context; it is sometimes difficult to make accurate calculations when the strong nuclear force, which holds mesons (as well as protons and neutrons) together, is involved.  So this is a real coup for LHCb, but not a game-changer for particle physics.  Perhaps, sometime in the future, theorists will learn how to make predictions as precise as LHCb’s measurement!

by Matt Strassler at March 21, 2019 11:52 AM

March 16, 2019

Robert Helling - atdotde

Smokescreen: the CDU proposal for "no upload filters"
Sorry, this is one of the occasional posts about German politics, originally written in German. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (which must be stopped in its current form!!! March 23rd is the international day of protest), now that the CDU party has proposed how to implement it in German law, albeit so unspecifically that all the problematic details are left out. Here is the post.

Maybe I am too dense, but I do not see where exactly the progress is over what is being discussed at the EU level, except that the CDU proposal is so vague that all its internal contradictions vanish in the fog. At the EU level, too, the proponents say that acquiring licenses is much preferable to filtering. That in itself is nothing new.

What is new, at least in this Handelsblatt article (I have not found it anywhere else), is the mention of hash sums ("digital fingerprint"), or is that supposed to be something more like a digital watermark? That would be a real novelty, but it would strangle the whole procedure at birth: only the original file would be protected (and that would be trivial to establish anyway), while every form of derived work would fall completely through the cracks, and one could "free" a work with a trivial modification. Otherwise we are back to the dubious filters based on AI technology that does not exist today.

The other point is the blanket license. I would then no longer have to sign contracts with every rights holder, only with the "VG Internet". But the big question is again who it is supposed to apply to. The intended targets are of course YouTube, Google and FB once more. But how do you phrase that? This is exactly the central sticking point of the EU directive: everyone needs a blanket license, unless they are non-commercial (and who is?), or (younger than three years, with few users and little turnover), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television, with a few big broadcasters and so on, only somehow different" view of the world, so happily propagated by people who look at the internet from a distance. Because it practically flattens everything else. What about forums or photo hosters? Would they all have to acquire a blanket license (which would have to be priced high enough to cover all the film and music rights of the entire world)? What prevents this from ending up as a "whoever runs a service on the internet must first buy a paid internet license before going online" law, which at any non-trivial license fee would be the end of all grass-roots innovation?

It would of course also be interesting to see how the revenue of the VG Internet gets distributed. You would be a rogue to suspect that a large part of it would not end up with, say, the press publishers. That would then finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn quite so much" law. In that case the license fee had best be a percentage of turnover, in other words an internet tax.

And I will not even start on where it leads when every European country cooks up its own implementation soup this drastically.

All in all, quite a successful coup by the CDU, one that may well take the wind out of the sails of the critics of Article 13 in the court of public opinion by wrapping everything in a vague cloud of fog, while the really problematic rules are likely to hide in the details.

by Unknown (noreply@blogger.com) at March 16, 2019 09:43 AM

March 13, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

RTE’s Brainstorm; a unique forum for public intellectuals

I have an article today on RTE’s ‘Brainstorm’ webpage, my tribute to Stephen Hawking one year after his death.

"Hawking devoted a great deal of time to science outreach, unusual for a scientist at this level"

I wasn’t aware of the RTE Brainstorm initiative until recently, but I must say it is a very interesting and useful resource. According to the mission statement on the website: “RTÉ Brainstorm is where the academic and research community will contribute to public debate, reflect on what’s happening in the world around us and communicate fresh thinking on a broad range of issues”. A partnership between RTE, University College Cork, NUI Galway, University of Limerick, Dublin City University, Ulster University, Maynooth University and the Technological University of Dublin, the idea is to provide an online platform for academics and other specialists to engage in public discussions of interesting ideas and perspectives in user-friendly language. You can find a very nice description of the initiative in The Irish Times here.

I thoroughly approve of this initiative. Many academics love to complain about the portrayal of their subject (and a lot of other subjects) in the media; this provides a simple and painless way for such people to reach a wide audience. Indeed, I’ve always liked the idea of the public intellectual. Anyone can become a specialist in a given topic; it’s a lot harder to make a meaningful contribution to public debate. Some would say this is precisely the difference between the academic and the public intellectual. Certainly, I enjoy engaging in public discussions of matters close to my area of expertise, and I usually learn something new. That said, a certain humility is an absolute must; it’s easy to forget that detailed knowledge of a subject does not automatically bestow the wisdom of Solomon. Indeed, there is nothing worse than listening to a specialist use their expertise to bully others into submission. It’s all about getting the balance right, and listening as well as informing…

by cormac at March 13, 2019 07:28 PM

March 06, 2019

Robert Helling - atdotde

Challenge: How to talk to a flat earther?
Further down the rabbit hole, over lunch I finished watching "Behind the Curve", a Netflix documentary on people who believe the earth is a flat disk. According to them, the north pole is at the center, while Antarctica is an ice wall at the boundary. Sun and moon are much closer and fly above this disk, while the stars are on some huge dome, like in a planetarium. NASA is a fake agency promoting the doctrine, and airlines must be part of the conspiracy, as they know that you cannot directly fly between continents in the southern hemisphere (really?).

These people are happily using GPS for navigation but have a general mistrust of the science (and the teachers) of at least two centuries.

Besides the obvious "I don't see the curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams that are not as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.

So here is my challenge: Which experiment would you conduct with them to convince them? Warning: Everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render the observation inconclusive. The sun standing at a different altitude at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).

My personal solution is to point to the observation that the elevation of Polaris (around which I hope they can agree the night sky rotates) is given by the geographical latitude: at the north pole it is right above you, but it has to sink towards the horizon the further south you go. I cannot see how this could be reconciled with a dome projection.
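If you want to put numbers on this, here is a little Python comparison of the two predictions; the dome height is a made-up parameter, since flat-earth models do not agree on one:

import math

EARTH_RADIUS_KM = 6371.0  # used to convert latitude into distance along the disk
DOME_HEIGHT_KM = 5000.0   # hypothetical height of Polaris above the flat disk

def polaris_elevation_globe(lat_deg):
    # On a globe, the elevation of the celestial pole equals your latitude.
    return lat_deg

def polaris_elevation_flat(lat_deg):
    # On a flat disk, your distance from the north pole grows with (90 - latitude)
    # while Polaris stays at a fixed height above the pole.
    d_km = math.radians(90.0 - lat_deg) * EARTH_RADIUS_KM
    return math.degrees(math.atan2(DOME_HEIGHT_KM, d_km))

for lat in (90, 60, 30, 10, 0):
    print(f"latitude {lat:2d}: globe {polaris_elevation_globe(lat):4.1f} deg,"
          f" flat disk {polaris_elevation_flat(lat):4.1f} deg")

Whatever height you pick for the dome, the flat-disk curve never reaches zero: at the equator Polaris would still stand tens of degrees above the horizon, whereas in reality it sits on the horizon there.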

How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra-high-altitude planes. You are allowed to make use of a phone and cameras, and you can travel (say by car or commercial flight, but you cannot influence the flight route). It must not require lots of money or higher math.


by Unknown (noreply@blogger.com) at March 06, 2019 02:24 PM

February 24, 2019

Michael Schmitt - Collider Blog

Miracles when you use the right metric

I recommend reading, carefully and thoughtfully, the preprint “The Metric Space of Collider Events” by Patrick Komiske, Eric Metodiev, and Jesse Thaler (arXiv:1902.02346). There is a lot here, perhaps somewhat cryptically presented, but much of it is exciting.

First, you have to understand what the Earth Mover’s Distance (EMD) is. This is easier to understand than the Wasserstein metric, of which it is a special case. The EMD is a measure of how different two pdfs (probability density functions) are, and it is rather different from the usual chi-squared or mean integrated squared error because it emphasizes separation rather than overlap. The idea is to look at how much work you have to do to reconstruct one pdf from another, where “reconstruct” means transporting a portion of the first pdf a given distance. You keep track of the “work” you do, which means the amount of area (i.e., “energy” or “mass”) you transport and how far you transport it. The Wikipedia article aptly makes an analogy with suppliers delivering piles of stones to customers. The EMD is the smallest effort required.
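As a toy illustration of the concept (one-dimensional, and nothing to do with the collider metric yet), SciPy's wasserstein_distance computes exactly this transport cost for equal-weight samples:

import numpy as np
from scipy.stats import wasserstein_distance

# Three "piles of stones"; each array lists the locations of equal-weight samples.
u = np.array([0.0, 1.0])
v = np.array([5.0, 6.0])
w = np.array([50.0, 51.0])

print(wasserstein_distance(u, v))  # 5.0: every unit of mass moves 5 units
print(wasserstein_distance(u, w))  # 50.0: same shapes, but much farther apart

An overlap-based measure such as chi-squared would judge u to be equally different from v and w, since it overlaps with neither; the EMD knows that w is ten times farther away.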

The EMD is a rich concept because it allows you to carefully define what “distance” means. In the context of delivering stones, transporting them across a plain and up a mountain are not the same. In this sense, rotating a collision event about the beam axis should “cost” nothing (i.e., be irrelevant), while increasing the energy or transverse momentum should, because it is phenomenologically interesting.

The authors want to define a metric for LHC collision events with the notion that events that come from different processes should be well separated. This requires a definition of “distance”, hence the word “metric” in the title. You have to imagine taking one collision event, consisting of individual particles or perhaps a set of hadronic jets, and transporting pieces of it in order to match some other event. If you have to transport the pieces a great distance, then the events are very different. The authors’ ansatz is a straightforward one, depending essentially on the angular distance θij/R plus a term that takes into account the difference in total energies of the two events. Note: the subscripts i and j refer to two elements from the two different events. The paper gives a very nice illustration for two top quark events (red and blue):

Transformation of one top quark event into another

The first thing that came to mind when I had grasped, with some effort, the suggested metric was that this could be a great classification tool. And indeed it is. The authors show that a k-nearest neighbors algorithm (KNN), straight out of the box, equipped with their notion of distance, works nearly as well as very fancy machine learning techniques! It is crucial to note that there is no training here, no search for a global minimum of some very complicated objective function. You only have to evaluate the EMD, and in their case, this is not so hard. (Sometimes it is.) Here are the ROC curves:

ROC curves. The red curve is the KNN with this metric, and the other curves close by are fancy ML algorithms. The light blue curve is a simple cut on N-subjettiness observables, itself an important theoretical tool


I imagine that some optimization could be done to close the small gap with respect to the best performing algorithms, for example in improving on the KNN.
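To see how little machinery the classification step needs, here is a hedged scikit-learn sketch; a one-dimensional Wasserstein distance between toy "events" stands in for the collider EMD, and the point is only the precomputed-metric KNN pattern:

import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.neighbors import KNeighborsClassifier

# Toy events: 1D samples drawn around 0 (class 0) or around 3 (class 1).
rng = np.random.default_rng(1)
events = [rng.normal(loc, 1.0, size=50) for loc in [0] * 20 + [3] * 20]
labels = np.array([0] * 20 + [1] * 20)

def dmat(a, b):
    # Pairwise distance matrix; swap in any metric you like here.
    return np.array([[wasserstein_distance(x, y) for y in b] for x in a])

train, test = slice(0, 30), slice(30, 40)  # crude split, for illustration only
knn = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
knn.fit(dmat(events[train], events[train]), labels[train])
print(knn.predict(dmat(events[test], events[train])))  # expect all 1s

There is no fit beyond storing the training distances; all the physics would live in the metric.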

The next intriguing idea presented in this paper is the fractal dimension, or correlation dimension, dim(Q), associated with their metric. The interesting bit is how dim(Q) depends on the mass/energy scale Q, which can plausibly vary from a few GeV (the regime of hadronization) up to the mass of the top quark (173 GeV). The authors compare three different sets of jets from ordinary QCD production, from W bosons decaying hadronically, and from top quarks, because one expects the detailed structure to be distinctly different, at least if viewed with the right metric. And indeed, the variation of dim(Q) with Q is quite different:

dim(Q) as a function of Q for three sources of jets


(Note these jets all have essentially the same energy.) There are at least three take-away points. First, dim(Q) is much higher for top jets than for W and QCD jets, and W is higher than QCD. This hierarchy reflects the relative complexity of the events, and hints at new discriminating possibilities. Second, the curves are more similar at low scales, where the structure involves hadronization, and more different at high scales, which should be dominated by the decay structure. This is borne out by the decay-products-only curves. Finally, there is little difference between the curves based on particles and those based on partons, meaning that the result is somehow fundamental and not an artifact of hadronization itself. I find this very exciting.
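For intuition about what such a dimension measures, here is a rough Grassberger-Procaccia-style estimate in Python; note that the paper defines dim(Q) with respect to the EMD between events, while this toy uses plain Euclidean distances on random points:

import numpy as np

def correlation_dimension(points, q_lo, q_hi):
    # Count pairs closer than q at two scales; the dimension is the
    # log-log slope of the pair count, N(q) ~ q^dim.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(len(points), k=1)]
    n_lo, n_hi = np.sum(d < q_lo), np.sum(d < q_hi)
    return (np.log(n_hi) - np.log(n_lo)) / (np.log(q_hi) - np.log(q_lo))

rng = np.random.default_rng(0)
plane = rng.uniform(size=(1000, 2))  # points filling a 2D square
print(correlation_dimension(plane, 0.05, 0.1))  # close to 2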

The authors develop the correlation dimension dim(Q) further. It is a fact that a pair of jets from W decays boosted to the same degree can be described by a single variable: the ratio of their energies. This can be mapped onto an annulus in an abstract two-dimensional space (see the paper for slightly more detail). The interesting step is to look at how the complexity of individual events, reflected in dim(Q), varies around the annulus:

Embedding of W jets and how dim(Q) varies around the annulus and inside it


The blue events to the lower left are simple, with just a single round dot (jet) in the center, while the red events in the upper right have two dots of nearly equal size. The events in the center are very messy, with many dots of several sizes. So morphology maps onto location in this kinematic plane.

A second illustration is provided, this time based on QCD jets of essentially the same energy. The jet masses will span a range determined by gluon radiation and the hadronization process. Jets at lower mass should be clean and simple while jets at high mass should show signs of structure. This is indeed the case, as nicely illustrated in this picture:

How complex jet substructure correlates with jet mass


This picture is so clear it is almost like a textbook illustration.

That’s it. (There is one additional topic involving infrared divergences, but since I do not understand it I won’t try to describe it here.) The paper is short, with some startling results. I look forward to the authors developing these studies further, and to other researchers thinking about these ideas and applying them to real examples.

by Michael Schmitt at February 24, 2019 05:16 PM

February 22, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The joys of mid term

Thank God for mid-term, or ‘reading week’ as it is known in some colleges. Time was I would have spent the week on the ski slopes, but these days I see the mid-term break as a precious opportunity to catch up – a nice relaxed week in which I can concentrate on correcting assessments, preparing teaching notes and setting end-of-semester exams. There is a lot of satisfaction in getting on top of things, if only temporarily!

Then there’s the research. To top the week off nicely, I heard this morning that my proposal to give a talk at the forthcoming Arthur Eddington conference in Paris has been accepted; this is great news, as the conference will mark the centenary of Eddington’s measurement of the bending of starlight by the sun, an experiment that provided key evidence in support of Einstein’s general theory of relativity. To this day, some historians question the accuracy of Eddington’s result, while most physicists believe his findings were justified, so it should make for an interesting conference.


by cormac at February 22, 2019 04:45 PM

February 12, 2019

Robert Helling - atdotde

Bohmian Rhapsody

Visits to a Bohmian village


Over all of my physics life, I have been under the local influence of some Gaulish villages that have ideas about physics that are not 100% aligned with the mainstream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT, as they were proving theorems while others dealt only with perturbative series that are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel, where electron-proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes, as well as when thinking about foundations, see below) and even some other physicists are starting to use their language.

Later, as a PhD student at the Albert Einstein Institute in Potsdam, there was an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll having long-term positions and many others frequently visiting. As you probably know, a bit later I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received, at least amongst our peers, and about which I am still a bit proud.

Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians). Once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have guessed it already: I did so nonetheless. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write-up of a notes paper that I put on the arXiv today.

What Bohmians don't like about the usual approach to quantum mechanics (termed "Copenhagen", lacking a better word) is that you are not allowed to talk about so many things, and that the observer plays such a prominent role by determining via a measurement which aspect is real and which is not. They think this is far too subjective. So instead, they want quantum mechanics to be about particles that are then allowed to follow trajectories.

"But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with magnetic field) plus potential term, has  a conserved current

$$j = -i\left(\bar\psi\nabla\psi - (\nabla\bar\psi)\psi\right).$$

So as your probability density is $\rho=\bar\psi\psi$, you can think of that being made up of particles moving with a velocity field

$$v = j/\rho = 2\Im(\nabla \psi/\psi).$$

What this buys you is that if you have a bunch of particles that is initially distributed like the probability density and follows the flow of the velocity field it will also later be distributed like $|\psi |^2$.
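To make this concrete, here is a minimal numerical toy of my own (a free 1D Gaussian packet with the kinetic term $-\partial_x^2$, matching the units above), with a handful of Bohmian trajectories Euler-integrated along $v = 2\Im(\psi'/\psi)$:

import numpy as np

def psi(x, t, k0=1.0, sigma=1.0):
    # Free Gaussian packet solving i d/dt psi = -psi''; global phases are
    # dropped, as they do not affect the velocity field.
    s2 = sigma**2 + 2j * t
    return np.exp(-(x - 2 * k0 * t)**2 / (2 * s2) + 1j * k0 * x) / np.sqrt(s2)

def velocity(x, t, eps=1e-6):
    # Guidance equation v = 2 Im(psi'/psi), derivative taken numerically.
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return 2.0 * np.imag(dpsi / psi(x, t))

# Sample initial positions from |psi(x,0)|^2 and follow the flow.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0 / np.sqrt(2), size=10)  # |psi(.,0)|^2 has std 1/sqrt(2)
dt, steps = 0.01, 500
for n in range(steps):
    x += velocity(x, n * dt) * dt

print(x)  # the ensemble has drifted and spread along with |psi(x,t)|^2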

What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double-slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never come to a situation where any experimental outcome differs from what the Copenhagen prescription predicts.

The price you have to pay, however, is that you end up with a very non-local theory: The velocity field lives in configuration space, so the velocity of every particle depends on the position of all other particles in the universe. I would say, this is already a show stopper (given what we know about quantum field theory whose raison d'être is locality) but let's ignore this aesthetic concern.

What got me into this business was the attempt to understand how set-ups like Bell's inequality, GHZ and the like, which are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described by local probability densities), work out here. The problem with those is that they are often phrased in terms of spin degrees of freedom, whose Hamiltonians are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom into a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much.

But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, so you get something non-commuting, which you need in order to use entanglement as a specific quantum resource). So here is my favourite example:

You start with two particles each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe for each particle if it is in the left or the right half of the interval.

From symmetry considerations (details in my paper) you can see that each particle is with the same probability on the left and the right. But they are anti-correlated when measured at the same time. But when measured at different times, the correlation oscillates like the cosine of the time difference.

From the Bohmian perspective, for the static initial state the velocity field vanishes everywhere: nothing moves. But in order to capture the time-dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box. (How the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom, and remember, everything depends on everything; but somehow it has to work, since you want to reproduce the correlations that are predicted by the Copenhagen approach.)

The trajectory of the second particle depending on its initial position


This is somehow the Bohmian version of the collapse of the wave function but they would never phrase it that way.

And here is where it becomes problematic: If you could see the Bohmian particle moving you could decide if the other particle has been measured (it would oscillate) or not (it would stand still). No matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude you must not be able to look at the second particle and see if it oscillates or not.

Bohmians tell you that you cannot, because all you are supposed to observe about the particles is their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you fail, because the first observation disturbs the particle so much that it invalidates the original state.

As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohm literature, but I found it nowhere demonstrated as clearly as in this two-particle system).

My conclusion is that the Bohm approach adds something (the particle positions) to the wave function but then in the end tells you you are not allowed to observe this or have any knowledge of this beyond what is already encoded in the wave function. It's like making up an invisible friend.

PS: If you haven't seen "Bohemian Rhapsody" yet, you should, even if there are good reasons to criticise the dramatisation of real events.

by Unknown (noreply@blogger.com) at February 12, 2019 07:20 AM

February 07, 2019

Axel Maas - Looking Inside the Standard Model

Why there won't be warp travel in times of global crises
One of the questions I get most often at outreach events is: "What about warp travel?", or some other wording for faster-than-light travel. Something that would make interstellar travel possible, or at least viable.

Well, the first thing I can say is that there is nothing which excludes it. Of course, within our well-established theories of the world it is not possible. Neither the standard model of particle physics nor general relativity, when constrained to the matter we know of, allows it. Thus, whatever describes warp travel needs to be a theory which encompasses and enlarges what we know. Can a quantized combination of general relativity and particle physics do this? Perhaps, perhaps not. Many people think about it really hard. Mostly, we run afoul of causality when trying.

But these are theoretical ideas. And even if some clever team comes up with a theory which allows warp travel, this does not mean that this theory is actually realized in nature. Just because we can make something mathematically consistent does not guarantee that it is realized. In fact, we have many, many more mathematically consistent theories than are realized in nature. Thus, it is not enough to just construct a theory of warp travel. Which, as noted, we have failed so far to do.

No, what we need is to figure out whether it really happens in nature. So far, this has not happened. Neither have we observed it in any human-made experiment, nor do we have any observation in nature which unambiguously points to it. And this is what makes it really hard.

You see, the universe is a tremendous place, unbelievably large and essentially three times as old as the whole planet earth, not to mention humanity. Extremely powerful events happen out there, ranging from quasars, effectively a whole galactic core on fire, to black hole collisions and supernovae. These events put out an enormous amount of energy. Much, much more than even our sun generates. Hence, anything short of a big bang is happening all the time in the universe. And we see the results. The earth is hit constantly by particles with much, much higher energies than we can produce in any experiment, and has been since it came into being. Incidentally, this also tells us that nothing we can do at a particle accelerator can really be dangerous: whatever we do there has happened so often in our Earth's atmosphere that it would have killed this planet long before humanity entered the scene. The only bad thing about it is that we never know when and where such an event happens. And the rate is also not that high; it is only that the earth has already existed for so very long. And is big. Hence, we cannot use this to make controlled observations.

Thus, whatever could happen, happens out there in the universe. We see some things out there which we cannot explain yet, e.g. dark matter. But by and large a lot works as expected. In particular, we do not see anything which requires warp travel to explain it, or anything else remotely suggesting something happening faster than the speed of light. Hence, if something like faster-than-light travel is possible, it is neither common nor easily achieved.

As noted, this does not mean it is impossible. Only that if it is possible, it is very, very hard. In particular, this means it will be very, very hard to make an experiment to demonstrate the phenomenon. Much less to actually make it a technology, rather than a curiosity. This means a lot of effort will be necessary to see it, if it is really possible.

What is a lot? Well, CERN is a bit. But human, or even robotic, space exploration is an entirely different category, some one to two orders of magnitude more. Probably we would need to combine such space exploration with particle physics to really get to it. Possibly the best example of such an endeavor is the future LISA project to measure gravitational waves in space. It is perhaps even our current best bet to observe any hints of faster-than-light phenomena, aside from bigger particle physics experiments on earth.

Do we have the technology for such a project? Yes, we do. We have had it for roughly a decade. But it will likely take at least one more decade to have LISA flying. Why not now? Resources. Or, as it is often put equivalently, costs.

And here comes the catch. I said it is our best chance. But this does not mean it is a good chance. In fact, even if faster-than-light travel is possible, I would be very surprised if we saw it with this mission. There are probably a few more generations of technology, and another order of magnitude of resources, needed before we could see something, given what I know about how well everything currently fits. Of course, there can always be surprises with every little step further. I am sure we will discover something interesting, possibly spectacular, with LISA. But I would not bet anything valuable that it will have to do with warp travel.

So, you see, we have to scale up if we want to go to the stars. This means investing resources. A lot of them. But resources are needed to fix things on earth as well. And the more we damage, the more we need to fix, and the less we have left to get to the stars. Right now, humanity is moving into a state of perpetual crisis. The damage wrought by the climate crisis will require enormous efforts to mitigate, much more to stop the downhill trajectory. As a consequence of the climate crisis, as well as social inequality, more and more conflicts will create further damage. Isolationism, both national and social, driven by fear of the oncoming crises, will also soak up tremendous amounts of resources. And, finally, an environment hostile towards diversity, one that puts individual gains above common gains, creates a climate which is hostile to anything new and different in general, and to science in particular. Hence, we will not be able to use our resources, or the ingenuity of the human species as a whole, to get to the stars.

Thus, I am not hopeful that I will see faster-than-light travel in my lifetime, or in that of the next generation. Such a challenge, if it is possible at all, will require a common effort of our species. That would truly be a worthy endeavour to put our minds to. But right now, as a scientist, I am much more occupied with protecting a world in which science is possible, both metaphorically and literally.

But there is always hope. If we rise up and decide to change fundamentally, if we put the well-being of us as a whole first, then I would be optimistic that we can get out there. Well, at least as fast as nature permits. However fast that will ever be.

by Axel Maas (noreply@blogger.com) at February 07, 2019 09:17 AM

January 18, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Back to school

It was back to college this week, a welcome change after some intense research over the hols. I like the start of the second semester, there’s always a great atmosphere around the college with the students back and the restaurants, shops and canteens back open. The students seem in good form too, no doubt enjoying a fresh start with a new set of modules (also, they haven’t yet received their exam results!).

This semester, I will teach my usual introductory module on the atomic hypothesis and early particle physics to second-years. As always, I’m fascinated by the way the concept of the atom emerged from different roots and different branches of science: from philosophical considerations in ancient Greece to considerations of chemistry in the 18th century, from the study of chemical reactions in the 19th century to considerations of statistical mechanics around the turn of the century. Not to mention a brilliant young patent clerk who became obsessed with the idea of showing that atoms really exist, culminating in his famous paper on Brownian motion. But did you know that Einstein suggested at least three different ways of measuring Avogadro’s constant? And each method contributed significantly to establishing the reality of atoms.


 In 1908, the French physicist Jean Perrin demonstrated that the motion of particles suspended in a liquid behaved as predicted by Einstein’s formula, derived from considerations of statistical mechanics, giving strong support for the atomic hypothesis.  

One change this semester is that I will also be involved in delivering a new module,  Introduction to Modern Physics, to first-years. The first quantum revolution, the second quantum revolution, some relativity, some cosmology and all that.  Yet more prep of course, but ideal for anyone with an interest in the history of 20th century science. How many academics get to teach interesting courses like this? At conferences, I often tell colleagues that my historical research comes from my teaching, but few believe me!

Update

Then of course, there’s also the module Revolutions in Science, a course I teach on Mondays at University College Dublin; it’s all go this semester!

by cormac at January 18, 2019 04:15 PM

January 17, 2019

Robert Helling - atdotde

Has your password been leaked?
Today there was news that a huge database containing 773 million email address/password pairs has become public. On Have I Been Pwned you can check if any of your email addresses is in this database (or any similar one). I bet it is (mine are).

These lists are very probably the source of the spam emails that have been around for a number of months, in which the spammer claims to have broken into your account and tries to prove it by telling you your password. Hopefully, this is only a years-old LinkedIn password that you changed aeons ago.

To make sure, you actually want to search not for your email address but for your password. But of course, you don't want to tell anybody your password. To this end, I have written a small perl script that checks for your password without telling anybody, by doing a calculation locally on your computer. You can find it on GitHub.
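His script is perl; as an illustration of the same idea, here is a Python sketch using the Pwned Passwords range API, whose k-anonymity scheme means only the first five hex characters of the SHA-1 hash ever leave your machine, never the password itself:

import getpass
import hashlib
import requests

def pwned_count(password):
    # Hash locally, send only the 5-character prefix, compare suffixes locally.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

n = pwned_count(getpass.getpass("Password to check: "))
print(f"seen {n} times in known breaches" if n else "not found")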

by Unknown (noreply@blogger.com) at January 17, 2019 07:43 PM

January 12, 2019

Sean Carroll - Preposterous Universe

True Facts About Cosmology (or, Misconceptions Skewered)

I talked a bit on Twitter last night about the Past Hypothesis and the low entropy of the early universe. Responses reminded me that there are still some significant misconceptions about the universe (and the state of our knowledge thereof) lurking out there. So I’ve decided to quickly list, in Tweet-length form, some true facts about cosmology that might serve as a useful corrective. I’m also putting the list on Twitter itself, and you can see comments there as well.

  1. The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.
  2. The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)
  3. People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.
  4. The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.
  5. Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)
  6. The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.
  7. The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.
  8. If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.
  9. The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.
  10. The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.
  11. Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.
  12. The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.
  13. Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.
  14. We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.
  15. Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.
  16. Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.
  17. We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.
  18. We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.
  19. Neither dark matter nor dark energy are anything like the nineteenth-century idea of the aether.

Feel free to leave suggestions for more misconceptions. If they’re ones that I think many people actually have, I might add them to the list.

by Sean Carroll at January 12, 2019 08:31 PM

January 09, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Physical Methods of Hazardous Wastewater Treatment

Hazardous waste comprises all types of waste with the potential to cause harmful effects on the environment and on animal and human health. It is generated from multiple sources, including industries, commercial properties and households, and comes in solid, liquid and gaseous forms.

There are different local and state laws regarding the management of hazardous waste in different localities. Irrespective of your jurisdiction, management starts with proper hazardous waste collection from your Utah property and runs through to its eventual disposal.

There are many methods of treating waste after its collection, using the appropriate structures recommended by environmental protection authorities. One of the most common and least expensive is physical treatment. The following are the physical treatment options for hazardous wastewater.

Sedimentation

In this treatment technique, the waste is separated into a liquid and a solid. The solid waste particles in the liquid are left to settle at a container’s bottom through gravity. Sedimentation is done in a continuous or batch process.

Continuous sedimentation is the standard option and is generally used for the treatment of large quantities of liquid waste. It is often used in the separation of heavy metals in the steel, copper and iron industries and of fluoride in the aluminum industry.

Electro-Dialysis

This treatment method separates wastewater into an ion-depleted and an ion-enriched stream. The wastewater passes through alternating cation- and anion-permeable membranes in a compartment.

A direct current is then applied to drive the cations and anions in opposite directions. This results in one solution with elevated concentrations of positive and negative ions and another with a low ion concentration.

Electro-dialysis is used to enrich or deplete chemical solutions in manufacturing, desalting whey in the food sector and generating potable water from saline water.

Reverse Osmosis


This uses a semi-permeable membrane for the separation of dissolved organic and inorganic elements in wastewater. The wastewater is forced through the semi-permeable membrane by pressure, and larger molecules are filtered out by the small membrane pores.

Polyamide membranes have largely replaced polysulphone ones for wastewater treatment nowadays owing to their ability to withstand liquids with high pH. Reverse osmosis is usually used in the desalinization of brackish water and treating electroplating rinse waters.

Solvent Extraction

This involves the separation of the components of a liquid through contact with an immiscible liquid. The most common solvent used in this treatment technique is a supercritical fluid (SCF), mainly CO2.

These fluids exist above the critical temperature, beyond which condensation no longer occurs, and have a low density and fast mass transfer when mixed with other liquids. Solvent extraction is used for extracting oil from the emulsions used in steel and aluminum processing, and organohalide pesticides from treated soil.

Supercritical ethane as a solvent is also useful for the purification of waste oils contaminated with water, metals and PCBs.

Some companies and households have tried handling their hazardous wastewater themselves to minimize costs. In most cases, this puts their employees at risk, since the “treated” water is often still dangerous to human health, the environment and their machines.

The physical processes above, sometimes combined with chemical treatment techniques, are the reliable options for truly safe wastewater.

The post Physical Methods of Hazardous Wastewater Treatment appeared first on None Equilibrium.

by Nonequilibrium at January 09, 2019 11:35 PM