A recurring pattern of the Christmas season: at the end of the year a result is loudly announced, and physicists are on the verge of discovering something exceptional, if not revolutionary… now it's the "fat cousin of the Higgs boson", a potentially "supersymmetric" partner of the Higgs boson. And again it is some arbitrary signal to which scientists like to dedicate their fairy tale stories – as they managed to do in 2013 with the so-called Higgs discovery. I ask myself how long the public is willing to tolerate this shameless boasting and waste of funding. And I am curious how many arbitrary signals will come out of the CERN data wash and be declared "extensions of the standard model" – cousins, uncles, grand-aunts and good sisters of the Higgs boson?
Not exactly what I consider a pleasure at Christmas…
Cosmology Turns into Pseudoscience
BICEP2 Collaboration – the Hot Air Merchants from the South Pole
It is a little suspicious when a discovery is declared worthy of a Nobel Prize by its discoverers on the very day it is announced. The BICEP2 claim that gravitational waves have been found was breaking news this week.
Let us have a brief look at gravitational wave detection – since the 1960s, this has always been a tricky business. Its history is beautifully described in Gravity's Shadow by Harry Collins – this is probably the field with the highest number of retracted claims. Huge efforts were undertaken to prove the existence of gravitational waves directly, but all big science experiments such as LIGO have been unable to confirm gravitational waves to this day. One may even wonder why none of the various anomalous signals were interpreted as gravitational waves in the meantime.
Ultimately, this is a merit of the very clear methodology and definition of the wave signal in laboratory experiments – it must show up in different places at fixed times, determined by the wave velocity c. The value of this constraint can hardly be overstated – it is an almost insurmountable barrier against artifacts being falsely declared as signals. Unfortunately, the cosmic microwave background does not contain such a barrier.
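To make the laboratory criterion concrete, here is a minimal sketch of a coincidence check between two separated detectors; the numbers (a separation of roughly 3,000 km, comparable to a LIGO-like detector pair) are rough assumptions for illustration, not an analysis pipeline.

```python
# Minimal sketch of the coincidence criterion: a signal travelling at speed c
# must reach two separated detectors within their light-travel time.
c = 2.998e8            # speed of light, m/s
separation = 3.0e6     # assumed detector separation, m (~3,000 km)

max_delay = separation / c   # ~0.010 s

def is_coincident(t1, t2):
    """Accept a candidate only if its arrival-time difference is physical."""
    return abs(t1 - t2) <= max_delay

print(f"maximum allowed delay: {max_delay*1e3:.1f} ms")
print(is_coincident(12.000, 12.008))   # True: an 8 ms delay is allowed
print(is_coincident(12.000, 12.030))   # False: 30 ms cannot be the same wavefront
```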
One persistent problem was put in a nutshell by a speaker at the conference "The first billion years" (Garching, 2005): "The limit of background measurement is foreground." A considerable amount of filtering and modeling is included in the analysis, rendering it gradually more prone to artifacts. (One would wish the entire analysis to be publicly available and transparent.)
This is already a huge problem for the tiny polarization signal seen at the time of the formation of the cosmic microwave background (CMB), assumed to be 380,000 years after the Big Bang, but it renders ridiculous the claims about the first seconds of the universe. How could such subtle information survive in a sizzling hot soup for almost half a million years?
In addition, the signal declared as evidence for gravitational waves, the "B-mode polarization," is utterly banal – polarization is (practically) a vector field, and every vector field can be decomposed into "div" and "curl" parts. Cosmologists now claim that they cannot explain the origin of the curl part, but so what? Why gravitational waves? A pile of theoretical assumptions enters through the back door of this so-called observation. Hundreds of unknowns, artifacts, or dirt effects could cause such a "B-mode polarization." To call this a 'direct observation' (far more indirect than the already inconclusive evidence from the Hulse-Taylor pulsar) is audacious, to put it mildly.
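To illustrate how unremarkable the decomposition itself is, here is a minimal sketch (following the simplification above of treating polarization as a plain vector field): any periodic 2D vector field splits into a curl-free ("div", E-like) part and a divergence-free ("curl", B-like) part via a Fourier-space projection. The field below is synthetic random data, not CMB maps.

```python
import numpy as np

n = 128
kx = np.fft.fftfreq(n).reshape(1, n)
ky = np.fft.fftfreq(n).reshape(n, 1)
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                             # avoid 0/0 for the constant mode

rng = np.random.default_rng(0)
vx, vy = rng.standard_normal((2, n, n))    # arbitrary synthetic vector field

Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
dot = kx * Vx + ky * Vy                    # k · V(k)

# Gradient (curl-free) projection; the remainder is divergence-free.
ex = np.fft.ifft2(kx * dot / k2).real
ey = np.fft.ifft2(ky * dot / k2).real
bx = np.fft.ifft2(Vx - kx * dot / k2).real
by = np.fft.ifft2(Vy - ky * dot / k2).real

# The two pieces add back up to the original field,
# and the "curl" piece has (numerically) zero divergence.
div_b = np.fft.ifft2(1j * (kx * np.fft.fft2(bx) + ky * np.fft.fft2(by))).real
assert np.allclose(vx, ex + bx) and np.allclose(vy, ey + by)
assert np.allclose(div_b, 0.0)
```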
To further the hype, they added the preposterous claim that those gravitational waves constitute evidence for the theory of inflation, an obviously nonsensical assertion that should leap to the eye of any sane researcher. How could an effect of the first 10⁻³² seconds be tracked over some 45 orders of magnitude in time, to 380,000 years? Utterly absurd.
However, we live in a period in which the borders of science and fairy tale stories are blurring. Laura Mersini-Houghton, in all seriousness, has claimed that a “cold spot” in the CMB chart is evidence for the multiverse. MIT cosmologist Max Tegmark has recently published a book with ridiculous speculations, but he is nevertheless respected for his work on the CMB. I prefer to view it the other way around: whoever spreads such obvious baloney will hardly engage in serious science elsewhere.
Since Penzias and Wilson, the analysis of the CMB has gradually become more complicated and is now in the running for the most opaque business in astrophysics, much like particle physics. It becomes increasingly harder to separate the hard facts from the tea leaves; thus, I would like to mention the work of Pierre-Marie Robitaille, who is even more skeptical about the CMB than I am. So much nonsense being published about the CMB makes me increasingly suspicious about its very foundations.
Fairy Tale Stories near the Big Bang
Yesterday's claim that gravitational waves have been detected in the cosmic microwave background … I cannot help but post a subchapter of Bankrupting Physics here (more comments will follow soon):
LANTERNFISH SPECIES CLEARLY IDENTIFIED
If an electron hits its rare, exotic, mirror-image positron, both end their lives in a dramatic gamma-ray flash of pair annihilation. For that reason, particles such as the positron are called "antimatter." Conversely, particle pairs made of matter and antimatter can be generated out of a single photon just as well. Even the much heavier proton-antiproton pairs can be created if, of course, the photon's energy is high enough to meet Einstein's formula E = mc². Antimatter puzzles physicists a lot. It hardly ever appears in everyday life because it would promptly be annihilated by its counterpart in normal matter, but it is always present when new particles are created. The heavier the particles are, the more effort is needed in big colliders to produce them, so it is sometimes speculated that these particles could have been formed cost-free shortly after the Big Bang, in the so-called primordial phase of tremendous heat. It's a wonderful idea, but unfortunately it is also untestable. How could the information about high-energy particle creation and annihilation survive for 380,000 years in a sizzling hot soup?
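For concreteness, here is a rough back-of-the-envelope sketch of the energy thresholds that E = mc² sets for pair creation; the constants are hand-typed textbook values, and only the order of magnitude matters.

```python
m_e = 9.109e-31     # electron mass, kg
m_p = 1.673e-27     # proton mass, kg
c   = 2.998e8       # speed of light, m/s
eV  = 1.602e-19     # joules per electronvolt

# A photon must carry at least the rest energy of the pair it creates
# (in practice a nearby nucleus is needed to balance momentum).
print(f"e+/e- pair threshold:  {2 * m_e * c**2 / eV / 1e6:.2f} MeV")  # ~1.02 MeV
print(f"p/p-bar pair threshold: {2 * m_p * c**2 / eV / 1e6:.0f} MeV") # ~1877 MeV
```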
Notwithstanding this, I have heard conference presentations in which people in all seriousness talk about their hope to detect such "signatures" of pair annihilation in the cosmic microwave background. This is already absurd because of the complexity of what happens to radiation after it is released at the end of the plasma era, but especially because we know literally nothing about what happened prior to that. How many physical processes could have superimposed themselves on the radiation and rendered any "signature" void by this point!
You can compare such a data analysis to that of satellite images of the ocean surface. We can ascertain the sea level to the centimeter, and infrared images can tell us the exact temperature. Thus, one can clearly identify the Gulf Stream of the North Atlantic, maybe even deduce the salinity and the algae percentage from a spectral analysis. And if you're lucky, you read the wind speed from the ripples on the water. But finding primordial particles on the WMAP chart would be as if, by analyzing the movement of the ocean surface, you would identify deep sea fish and classify them zoologically. This doesn't mean, of course, that there aren't several research groups devoted to doing exactly this.
He who does not lose his mind over certain things has no mind to lose. – Gotthold Ephraim Lessing
The Art of Thinking Clearly, (not yet) Applied to Particle Physics
Rolf Dobelli’s collection of cognitive biases, fallacies and wrong decision strategies has become a bestseller because people are becoming more and more aware of the irrational elements of the human mind. The textbook example, of course, is economics, where nobody anticipated the 2008 crash. “Never has a group of experts failed so spectacularly,” Dobelli comments, but it is obvious that our deficiencies in rational decision making can produce bizarre situations elsewhere – particle physics is a field that comes to mind when reading Dobelli’s book.
An obvious concern is social proof or groupthink (Dobelli’s error No. 4). The particle physics community, consisting of more than 10,000 physicists, devotes its entire activity to a model of reality that may well be plain wrong (scientists prefer to call it “incomplete”) – but not a single individual dares to spell out the catastrophic consequences – that eight decades of research might be completely useless for a profound understanding of the laws of nature.
An important contributor at work here is the sunk cost fallacy (No. 5). The excessive funding for particle physics must continue – despite no visible advance either in fundamental questions or in technological applications. Questioning the need for a new particle accelerator would mean admitting that the investments of the past, tens of billions of dollars, had been spent in vain. Such questioning is inconceivable not only for the experts working in the field, but also for those responsible for the funding (even if the two happen to coincide frequently).
And when watching the CERN seminar in which the Higgs discovery was celebrated, the following description of the calamity of conformity (No. 25) fits perfectly: "Members of a close-knit group cultivate team spirit… if others are of the same opinion, any dissenting view must be wrong. Nobody wants to be the naysayer who destroys team unity. Finally, each person is happy to be part of the group. Expressing reservations could mean exclusion from it." Imagine a dissenting physicist in the seminar asking for more explanations of a certain data analysis… unthinkable.
But even when looking at more technical aspects, the experimenter’s ears should be burning when hearing about the rara sunt cara illusion (No. 27): the rarer the occurrence of today’s elementary particles, the more interesting they are considered – for no good methodological reason.
The deep reason why the standard model of particle physics has not been replaced yet is Dobelli's illusion No. 11: why prefer a wrong map to no map at all? "Well, we just have this standard model of particle physics" is what you hear everywhere. Many other fallacies could be mentioned:
- How bonuses destroy motivation (No. 56, the abundant funding…)
- Chauffeur knowledge (No. 16), which you hear from the dozens of science popularizers who allegedly 'explain' the Higgs boson…
- Make engineers stand underneath their constructions at their bridge opening ceremonies (No. 18, no way to implement such a policy in particle physics).
- Clear thoughts become clear statements, whereas ambiguous ideas transform into vacant ramblings… (No. 57 – think about it the next time somebody explains what the LHC might discover next).
- Effort justification (No. 60): Think about it when listening to particle physicists who tell you about their 20-year hunt for the Higgs boson.
Finally, there is one point where Dobelli explicitly mentions science, the feature-positive-effect (No. 95): “The falsification of a hypothesis is a lot harder to get published, and as far as I know, there has never been a Nobel Prize awarded for this.” Correct! That’s exactly what Gary Taubes noted in his book Nobel Dreams (about the W and Z boson search) … but this is another story!
In short, Dobelli’s book could well be useful for scientists, but alas, the last place its message is likely to sink in is a big science laboratory such as CERN.
A Brief History of Quantum Gravity
Problems cannot be solved by thinking the way we thought when we created them. – Albert Einstein
The incompatibility of quantum mechanics and gravity is seen as a big issue by all physicists. The Planck length
ℓ_P = √(ħG/c³) ≈ 1.6 × 10⁻³⁵ m
contains the gravitational constant G and Planck's quantum ħ, and therefore it is the scale at which "quantum effects of gravity" are supposed to become important. But that is all that physicists have found out about quantum gravity. No theory exists, let alone any evidence of an observable effect. You'll find the topic somewhat more elaborated in Hawking's book A Briefer History of Time: the chapter about quantum gravity comprises 21 pages, of which almost 20 are devoted to recapitulating gravitation and quantum theory. Andrzej Staruszkiewicz, editor of a renowned physics journal, commented on this topic:
It is tempting to assume that when so many people write about “quantum gravity”, they must know what they are writing about. Nevertheless, everyone will agree that there is not a single physical phenomenon whose explanation would call for “quantum gravity”.
The fact that no theory of quantum gravity exists does not preclude the existence of numerous experts on quantum gravity. According to the science historian Federico di Trocchio, such a "second category of experts" consists of those whose knowledge will become immediately obsolete once the riddles scientists are studying have been understood. They make their living from the problems that are being tackled unsuccessfully, or, like some string theorists, make ridiculous claims of having explained "the existence of gravity".
There is simply no theory that combines general relativity with quantum theory. All theoretical recipes cooked up until now have failed, for instance the so-called ADM formalism, a reformulation of Einstein's equations of general relativity. It has gotten nowhere, but it is nevertheless considered a bible leading the way. Another great couturier of theories is Abhay Ashtekar, who regularly summarizes the accomplishments of the Loop Quantum Gravity he fathered. His résumé: "it is interesting". Actually, it has been interesting for 30 years now – not exactly news anymore. I fear that we will have a long wait for a Theory of Everything that eventually unifies quantum theory and general relativity. Success just comes in different flavors.
Theoretical physicists seem to have deployed a number of assumptions that block deeper reflection. For instance, there is the belief in an unalterable gravitational constant G. Solely for this reason, all theoretical attempts trudge through the Planck length’s eye of a needle and come across as trying to push a door which is labeled “pull”.
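As a reminder of how remote that scale is, here is a trivial numerical sketch; the proton radius used for comparison is an assumed round value of about 0.84 fm.

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
r_proton = 0.84e-15          # assumed proton charge radius, m

print(f"Planck length ~ {l_planck:.2e} m")                            # ~1.6e-35 m
print(f"proton radius / Planck length ~ {r_proton / l_planck:.1e}")   # ~5e19
```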
The essence of the problem – and I think the only thing worth worrying about – is just this: the ratio of the electric to the gravitational force between the proton and the electron that form a hydrogen atom is a huge number, and nobody knows where it comes from. Period. The only idea with respect to this riddle came from Paul Dirac, the Large Number Hypothesis – more about that later. Where does that number, 10³⁹, come from? Beware, wannabe unifiers: either you explain it or you had better shut up.
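A minimal sketch of the arithmetic behind that number, with hand-typed approximate constants; the distance between the two charges cancels because both forces fall off as 1/r².

```python
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2
e   = 1.602e-19   # elementary charge, C
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg

ratio = (k_e * e**2) / (G * m_p * m_e)
print(f"F_electric / F_gravity ~ {ratio:.1e}")   # ~2.3e39
```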
(with quotes from "Bankrupting Physics" and "The Higgs Fake")
Mach’s Principle – Relating Gravity to the Universe
What's the origin of the inertia of masses? This is one of the most profound questions in physics. Let's talk about this aspect of Mach's principle, named after the Viennese physicist and philosopher Ernst Mach (1838-1916). Mach suggested that the weakness of gravity was due to the universe's enormous size. Of course, the first estimates for the size and mass of the universe were not available before 1930, after Hubble's discovery of the expanding universe. Sir Arthur Eddington first noticed that the gravitational constant may be numerically interrelated with the properties of the universe. By dividing c² by G, he obtained roughly the same value as dividing the universe's mass by its radius:
c²/G ≈ M_u/R_u. This is quite extraordinary.
Astonishingly, Erwin Schrödinger had already considered this possibility in 1925 (Ann. Phys. 382, p. 325 ff.), though he couldn’t know about the size of the universe!
Mach, around 1887, could not even have dreamed of such measurements, but his aspiration to formulate all the laws of dynamics by means of relative movements (with respect to all other masses!) turned out to be visionary. Einstein was guided to general relativity by that very idea. Although Einstein gave Mach due credit, he didn’t ultimately incorporate Mach’s principle in his theory. Mach’s central idea, that inertia is related to distant objects in the universe, does not appear in general relativity.
One possibility to realize Mach's principle is to make up a formula where the gravitational constant is related to the mass and size of the universe. Because the universe is so huge, G would be very tiny. This radical idea was advocated by the British-Egyptian cosmologist Dennis Sciama in 1953 [ii]:
c²/G ≈ Σ mᵢ/rᵢ,
the sum taken over all masses i in the universe. Or, in other words, the gravitational potential of all masses in the universe amounts just to the square of the speed of light – intriguing!
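A rough order-of-magnitude check of the coincidence, using commonly quoted ballpark figures; the "mass" and "radius" of the universe assumed below (about 10⁵³ kg within roughly a Hubble radius of 1.3 × 10²⁶ m) are crude estimates, good only to a factor of a few.

```python
c   = 2.998e8     # speed of light, m/s
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_u = 1e53        # assumed mass of the observable universe, kg
R_u = 1.3e26      # assumed Hubble radius, m

print(f"c^2/G   ~ {c**2 / G:.1e} kg/m")    # ~1.3e27 kg/m
print(f"M_u/R_u ~ {M_u / R_u:.1e} kg/m")   # ~7.7e26 kg/m, the same order of magnitude
```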
Later, very few researchers pursued the idea, among them the cosmology "outlaw" Julian Barbour [iii]. During the inauguration of the Dennis Sciama building at the University of Portsmouth in England in June 2009, everybody talked about all the work that Sciama had done… except for Sciama's reflections on inertia. What a pity!
It is idle to speculate whether Einstein would have warmed up to the idea. In any case, the size of the universe could only be guessed fifteen years after the completion of general relativity. Today, Mach's principle has a miserable reputation and is sometimes even dismissed as numerological hokum. To the arrogant type of theorist who considers the question of the origin of inertia obsolete chatter from Old Europe, I recommend looking up "Inertia and fathers" by Richard Feynman on YouTube. In this video, Feynman wonders about one big question: where does inertia actually come from? A truly fundamental problem. For more, see chap. 5 of "Bankrupting Physics".
[ii] Sciama, MNRAS 113 (1953), p. 34.
[iii] Barbour, gr-qc/0211021.
Can Dimensionful Constants Have a Fundamental Meaning?
As with many of the thoughts in this blog, this runs contrary to common wisdom, but I find it particularly odd how the received wisdom that "only dimensionless constants can have fundamental meaning" has become established. Not only has this idea become representative of a methodology that has replaced thinking with calculating, but the full ignorance behind the statement reveals itself only if we look at the history of physics, for example with the book "A History of the Theories of Aether and Electricity" by Sir E. Whittaker.
First, let’s recall how the idea came to be. It doesn’t require exceptional intelligence to realize that our definitions of the meter and second, etc., are arbitrary, and it is just as evident that dimensionful constants, such as h, c, G, and others, are expressed with such arbitrary units. This discussion has attracted some attention in the debate over theories about the variable speed of light, and it has been claimed (e.g., by Ellis) that every dimensionful quantity can be set to unity using an appropriate reference. While this is a possible mathematical formulation, the question remains whether such a procedure makes sense physically. It doesn’t!
If we think of a temperature map, then according to the above logic, as temperature is a dimensionful quantity, it could be set to unity at every point – the analogy is one-to-one, but the number of people appreciating a forecast with unit temperature would appear limited.
This is probably why Einstein didn't mind pondering the variation of a dimensionful quantity when he considered a variable speed of light in 1911. Are we to understand from Ellis' critique that Einstein didn't have a clue about the basics of his own theory? John Duffield, one of the few reasonable people discussing the subject, has recently pointed out, with reference to the original quotes, that Einstein's attempts around 1911 were a sound approach to describing the phenomenology of general relativity with a variable speed of light. In the meantime, other researchers have shown that even general relativity can be formulated in terms of a variable speed of light.
Today's physicists are not only ignorant of these ideas, but actively spread the ideology that "only dimensionless constants can have fundamental meaning", just like the three theorists summarizing their discussion in the CERN cafeteria. Oh, had Einstein had the opportunity to listen to their half-assed thoughts while having a cappuccino there! Surely he would suddenly have understood how misguided his 1911 attempts at a variable speed of light were (cf. Whittaker II, p. 153). There is more to tell, but to see the full absurdity of the "only dimensionless constants are fundamental" argument, look at Whittaker's treatise on the development of electrodynamics (vol. I, p. 232). Had they not assigned a meaning to the above "dimensionful" constants, Kirchhoff, Weber, and Maxwell would never have discovered the epochal relation ε₀μ₀ = 1/c², and our civilization would never have been bothered by electromagnetic waves.
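A trivial numerical check of that relation, using the classical textbook values of the two constants (hand-typed approximations):

```python
import math

eps0 = 8.854e-12           # vacuum permittivity, F/m
mu0  = 4 * math.pi * 1e-7  # vacuum permeability, H/m

print(f"1/sqrt(eps0*mu0) = {1 / math.sqrt(eps0 * mu0):.3e} m/s")  # ~2.998e8 m/s, the speed of light
```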
The Higgs Fake …
And Why It’s Hard For Particle Physicists to Appreciate it
It is not particularly surprising that my book will hardly appeal to particle physicists, nor even provide much of a basis they will wish to discuss. There is no way to convince an expert that he or she has been doing nonsense for thirty years.
Over the decades, high energy physicists have been hunting for ever rarer effects, only to declare everything they did not understand a new particle. Their model has grown to a nonsensical complexity that nobody can survey; thus their convictions about it rely – much more than in any other field of physics – on trust in expert opinions, one might as well say parroting. As a consequence, in any discussion with particle physicists one soon comes to hear that everything is done properly and checked by many people. If you still express slight doubts about the complications, they easily turn stroppy and claim that unless you study their byzantine model thoroughly, you are not qualified to have an opinion. But you don't have to be an ichthyologist to know when a fish stinks.
It is difficult to get a man to understand something when his salary depends upon his not understanding it. – Upton Sinclair
An obvious argument to make is that more than 10,000 physicists, obviously skilled and smart people, would not deal with a theoretical model if it was baloney, and presumably this is the strongest unconscious argument for all of them. It is a flawed argument, however, disproved many times in history. And it is inherently biased because it disregards all other physicists (probably the majority) who intuitively realized at the outset of their careers that a giant experiment involving a huge number of people was not the field where their creative ideas would flourish. Quantum optics, astrophysics and fields like nanotechnology have attracted the most talented in the past decades. No one who had a proper appreciation for the convictions of Einstein, Dirac, Schrödinger, Heisenberg or de Broglie could find satisfaction in post-war particle physics. This does not mean that all high energy physicists are twerps. Religion is said to make good people do evil things. To make intelligent people do stupid things, it takes particle physics. Many scientists, by the way, are busy fighting the religious nonsense that pervades the world’s societies (let some political parties go unnamed). Intellectually, this is a cheap battle, and thus some are blind to the parallels of science and religion: groupthink, relying on authority, and trust to the extent of gullibility.
Though people will accuse me of promoting a conspiracy theory, I deny the charge. Most high energy physicists indeed believe that what they are doing makes sense, but they are unable to disentangle their belief from what they think is evidence. The more thoroughly one examines that evidence, however, the more frail it becomes. But, above all, it is impenetrable. Only the super-specialized understand their small portion of the data analysis, while a superficial babble is delivered to the public. This is a scandal. It is their business, not anyone else’s, to provide a transparent, publicly reproducible kind of evidence that deserves the name.
It is no excuse that, unfortunately, there are other degenerations of the scientific method in the realm of theoretical physics: supersymmetry and string theory, which never predicted anything about anything and never will. It is a sign of the rottenness of particle physics that nobody has the guts to declare the untestable to be nonsense, though many know perfectly well that it is. They are all afraid of the collateral damage to their own shaky building, should the string bubble collapse. The continuous flow of public funding they depend on so much requires consensus and appeasement. However, experimental particle physics is somehow more dangerous to science as a whole, because with its observational fig leaves it continues to beguile everybody into believing that it is doing science instead of just pushing technology to the limits.
I don’t care too much about the public money being wasted. We live in a rotten world where billions of dollars are squandered on bank bailouts, while every ten seconds a child dies of hunger. But the worst thing about the standard model of particle physics is the stalling in the intellectual progress of humankind it has caused. We need to get rid of that junk to evolve further.
Beware of false knowledge; it is more dangerous than ignorance. – George Bernard Shaw
The Higgs Fake
How Particle Physicists Fooled the Nobel Committee
The 2013 Nobel Prize in physics was awarded very soon after the announcement of the discovery of a new particle at a press conference at CERN on July 4, 2012. The breaking news caused excitement worldwide. Yet the message conveyed to the public, as if something had happened like finding a gemstone among pebbles, is, if we take a sober look at the facts, at best an abuse of language, at worst, a lie.
What had been found by the researchers did not resolve a single one of the fundamental problems of physics, yet it was immediately declared the discovery of the century. Whether this claim is fraudulent, charlatanry, or just thoroughly foolish, we may leave aside; what is certain is that the greatest physicists, such as Einstein, Dirac or Schrödinger, would have considered the "discovery" of the Higgs particle ridiculous. They would never have believed such a complicated model with dozens of unexplained parameters to reflect anything fundamental. Though on July 4, 2012, the absurdity of high energy physics reached its culmination, its folly had begun much earlier.
I shall argue in my book that particle physics, as practiced since 1930, is a futile enterprise in its entirety. Indeed, physics, after the groundbreaking findings at the beginning of the twentieth century, has undergone a paradigmatic change that has turned it into another science, or rather a high-tech sport, that has little to do with the laws of Nature. It is not uncommon in history for researchers to follow long dead ends, such as geocentric astronomy or the long dismissal of continental drift. Often, the seemingly necessary solutions to problems, after decades of piling assumptions on top of each other, gradually turn into something that is ludicrous from a sober perspective. A few authors, such as Andrew Pickering and David Lindley, have lucidly pointed out the shortcomings, failures and contradictions of particle physics in much detail, providing, between the lines, a devastating picture. Though their conclusions may not be very different from mine, I cannot take the detached perspective of a science historian. It annoys me too much to see another generation of physicists deterred by the dumb, messy patchwork called the standard model of particle physics, which hides the basic problems physics ought to deal with.
Therefore, my opinion is expressed very explicitly in the book. It is written for the young scholar who wants to dig into the big questions of physics rather than deal with a blend of mythology and technology. It should demonstrate to the majority of reasonable physicists that the high energy subsidiary is something they would be better off getting rid of, because it doesn't meet their standards. All scientists who maintain a healthy skepticism towards their particle colleagues should be encouraged to express their doubts, and the general public, many of whom intuitively felt that the irrational exuberance of July 4, 2012 had little to do with genuine science, should come to know the facts. Last but not least, it should provide journalists and those responsible for funding decisions with the information they need to challenge the omnipresent propaganda.
Why did Nature Invent Spin?
I think this issue receives too little attention. Usually, it is said that spin is a consequence of the Dirac equation and thus something that follows necessarily from relativity and quantum mechanics. Let us have a brief look at the argument. Schrödinger's non-relativistic equation reads
−(ħ²/2m) ∇²ψ + Vψ = iħ ∂ψ/∂t.
The momentum operator corresponding to p = mv is p̂ = −iħ∇, and thus the kinetic term on the left-hand side of Schrödinger's equation is derived simply from the kinetic energy ½mv² = p²/2m. It is interesting that none of the successful predictions of Schrödinger's equation for the hydrogen atom make specific reference to the nature of the electron (for which the wave function gives a probability that it will be found in a certain state). They refer only to the kinetic energy, irrespective of the type of wave-natured particle that orbits the nucleus (in fact, the equation also works for muonic atoms).
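A minimal numerical sketch of that last point, using rough hand-typed constants: the bound-state energies scale with the reduced mass of whatever orbits the proton, with no reference to the particle's identity (the resulting figure of roughly −2.5 keV for muonic hydrogen is approximate).

```python
m_e  = 9.109e-31   # electron mass, kg
m_mu = 1.884e-28   # muon mass, kg
m_p  = 1.673e-27   # proton mass, kg

def reduced_mass(m, M):
    return m * M / (m + M)

E1_hydrogen = -13.6   # eV, ordinary hydrogen ground state
scale = reduced_mass(m_mu, m_p) / reduced_mass(m_e, m_p)
print(f"ground state of muonic hydrogen ~ {E1_hydrogen * scale:.0f} eV")  # ~ -2500 eV
```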
Dirac used the correct special relativistic expression for the energy, E = √(p²c² + m²c⁴), and replaced Schrödinger's term with it. However, there was no explicit justification for switching from the kinetic energy to the total energy of the particle. This conceptual problem was somewhat overshadowed by the mathematical problem arising from the Laplace operator under the square root, to which Dirac found an ingenious solution using the matrices named after him. The algebra of Dirac matrices turned out to be a description of spin. Subsequently, the opinion spread that spin was a consequence of putting the relativistic energy into the basic equations of quantum mechanics.
However, the initial problem of the missing equivalence of kinetic and total energy persisted. Dirac was also disappointed that he could not deduce any concrete properties of the electron from his equation. The retrospective narrative is that the positron, undiscovered in 1928, was a 'prediction' of Dirac's theory, but Dirac had rather sought to explain the huge mass ratio of the proton and the electron, which is about 1836.15. In fact, in his later days, Dirac distanced himself a little from his earlier findings and, according to his biographer Helge Kragh, was "disposed to give up everything for what he had become famous".
Let us adopt another perspective regarding the nature of spin, one related to the properties of three-dimensional space – the world we perceive (those who perceive more dimensions should see a doctor). The group of rotations SO(3) obviously must have some significance, but its topology is a little intricate. It lacks a property called 'simple connectedness' because closed paths in SO(3) cannot always be contracted to a point. An object connected to a fixed point by a ribbon must be rotated by 720 degrees, not just 360 degrees, before the ribbon can be untangled again (see the visualization here). It seems that nature has a predilection for the generalized rotations called SU(2), which are simpler mathematically and have a surprising feature: they represent precisely the electron's spin – you need to perform a double twist of 720 degrees, rather than just 360 degrees, to get back to the original position. However, this picture involves no Dirac matrices, and thus I think there is an open problem here. It seems that the properties of three-dimensional space alone are sufficient to cause spin to emerge – no relativity or quantum mechanics is needed. To put it another way, a direct understanding of quantum mechanics from the geometrical properties of space, if there is one, is still missing.
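A minimal sketch of the 360-versus-720-degree behaviour for a spin-½ state, using the standard rotation operator about the z-axis written in closed form (nothing here is specific to Dirac's equation):

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rotate(theta):
    # exp(-i*theta*sigma_z/2) = cos(theta/2)*I - i*sin(theta/2)*sigma_z
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma_z

print(np.allclose(rotate(2 * np.pi), -I2))  # True: a 360° rotation flips the sign of the state
print(np.allclose(rotate(4 * np.pi),  I2))  # True: only a 720° rotation restores it
```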