Over two years ago, a series of posts on tHWZ relations was launched here, starting with the observation that
<ϕ> ~ √2 mt, mt ~ √2 mH
which I was prompted to record when Andrew Oh-Willeke remarked that
<ϕ> ~ 2 mH
(Where <ϕ>, also often written as v, is the Higgs field "vacuum expectation value".)
Recently a paper appeared on arxiv, noting the first relation in the form
4 mH² = 2 mt² = v²
and Andrew commented that
"I think it is more likely that the observed relationship is really an approximation of the relationships
sum((Fi)^2) = v^2/2 and sum((Bj)^2) = v^2/2 for all fundamental fermion rest masses Fi and fundamental boson rest masses Bj"
which is an aspect of the LC&P sum rule. He also says that it is
"quite a bit more profound than the fact that the heaviest fermion by
itself accounts for about half of the Higgs vev squared, or that the
Higgs mass square accounts for about a quarter of the Higgs vev squared."
I agree that the LC&P sum rule looks to be the fundamental thing here. But there is an interesting final twist which he didn't note.
To recapitulate:
1. The sum of the squares of all the fundamental particle masses is approximately the square of the Higgs VEV.
2. The contributions to this total from bosons and fermions are approximately equal. (Given the love of supersymmetry in the particle physics community, it really is remarkable that this isn't visibly being talked about.)
3. The top quark is responsible for the great majority of the fermion contribution, and thus about half of the total.
4. The Higgs boson is responsible for about half the bosonic contribution, and thus about a quarter of the total.
So where does the rest of the bosonic contribution come from? It comes from the W and Z bosons. So we have a fifth fact:
5. The W and Z bosons are responsible for the other half of the bosonic contribution, and thus for the remaining quarter of the total.
If we write this up as an equation, we get
mH² ~ mW² + mZ² ~ (1/2) mt²
The first part of this equation appeared as a blog comment by S. Vik, who is apparently a retired physicist from Wilfrid Laurier University in Canada. At the time I gave it a low probability of being meaningful, but I did record it. It would be ironic if it is yet another genuine clue to what lies beneath the standard model.
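For anyone who wants to see the arithmetic, here is a minimal check of facts 1-5 (the masses are rough PDG-era values in GeV, chosen by me, and light fermions are neglected, as throughout this post):

```python
mt, mH, mW, mZ, v = 173.1, 125.9, 80.4, 91.2, 246.2  # GeV, approximate

fermions = mt**2                  # fact 3: the top dominates the fermion side
bosons = mH**2 + mW**2 + mZ**2    # facts 4 and 5
total = fermions + bosons         # fact 1

print(f"total/v^2        = {total / v**2:.3f}   (fact 1: ~1)")
print(f"fermions/v^2     = {fermions / v**2:.3f}   (fact 2: ~1/2)")
print(f"bosons/v^2       = {bosons / v**2:.3f}   (fact 2: ~1/2)")
print(f"mH^2/v^2         = {mH**2 / v**2:.3f}   (fact 4: ~1/4)")
print(f"(mW^2+mZ^2)/v^2  = {(mW**2 + mZ**2) / v**2:.3f}   (fact 5: ~1/4)")
```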
Friday, September 12, 2014
vixra watch
Many times on this blog I have cited papers from vixra, the alternative to arxiv. Today I just want to note two surprising new additions to the vixra user base, Simon Plouffe and Jacob Barnett. They both have biographies at Wikipedia: Plouffe is, I guess, a computational number theorist, and Barnett is a teenage theoretical physicist who has been in the media since he was 12 (he's 16 now).
Friday, June 27, 2014
Goldfain on LC&P
I record here the existence of two papers by Ervin Goldfain 1 2 claiming to derive the LC&P sum rule.
His concept seems to be that the effective dimension of space-time varies with energy scale, that the masses of SM particles define special scales, and that the LC&P formula follows from a "closure relation" that must connect these different scales.
Incidentally, he is not just talking about spaces with an integer number of dimensions, as in Kaluza-Klein theories or string theories, where e.g. the number of dimensions may increase from 4 to 10 or 11, at energies above the compactification scale. Instead he talks of there being 4+ε dimensions, reminiscent of dimensional regularization... but the modified concept of dimensionality that he really emphasizes is that of fractals.
Informally, one might say that Goldfain's concept is that space is crinkled or creased in a fractal way, so that e.g. the volume of space inside a cube doesn't simply vary as the third power of the side of the cube. Instead, the exponent describing the change in volume is non-integer, and also varies with the size of the cube (length of its side). If we take a cube and shrink it, we might find that as the side shrinks to one millimeter, volume is proportional to size^3.1, but by the time we have shrunk to one micrometer, volume is proportional to size^3.3. Apparently in the world of fractals, such behavior is called multifractal.
The references to millimeter and micrometer above are purely illustrative. Goldfain seems to believe that the first significant deviations from integer dimensionality (4 space-time dimensions) only begin to occur above the electroweak energy scale, which would correspond to distances less than 10^-18 meters.
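Here is a toy numerical illustration of that kind of scale-dependent exponent - my own construction, purely for intuition, not anything from Goldfain's papers:

```python
import math

# Toy "multifractal" volume: the effective exponent drifts from ~3.1 at
# millimeter scale to ~3.3 at micrometer scale, instead of being a constant 3.
def effective_dimension(side):
    # interpolate the exponent linearly in log10(side): 0 at 1 mm, 1 at 1 um
    t = min(max((-3.0 - math.log10(side)) / 3.0, 0.0), 1.0)
    return 3.1 + 0.2 * t

for side in (1e-3, 1e-4, 1e-5, 1e-6):  # side length in meters
    D = effective_dimension(side)
    print(f"side = {side:.0e} m   volume ~ side^{D:.2f} = {side**D:.3e}")
```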
Goldfain is an independent investigator who publishes at vixra and in various web "journals", but the concept of multifractal space-time isn't just some whimsy of his; it has seen some mathematical development. The real problem I am having with his work so far is that I don't understand where the "closure relation" comes from - and that's the crucial step towards obtaining the LC&P formula.
See for example equation 5 in paper "1". The "r"s are the different scales, and the "D" is a fractal dimension. The LC&P formula is a sum of squares, and so if scales were associated with masses, and if D was equal to 2, then we might be able to obtain it from equation 5.
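To spell out the guess (mine, not a reconstruction of Goldfain's actual derivation): if the closure relation has the schematic form sum((ri)^D) = 1, and the scales are identified with masses via ri = mi/v, then setting D = 2 turns it into sum((mi)²) = v², which is the LC&P sum rule. The open question is why the closure relation should hold, and why D would equal 2 at these scales.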
Goldfain has written other papers trying to obtain SM mass ratios from fractal dimensional flow. A skeptical reading might say that all we have here is a conceptual framework in which multiple length scales can assume a special significance, and since masses can be mapped to length scales in physics, this multiscale conceptual framework can be a playground for a physics numerologist trying to explain particle masses.
I am skeptical, but dimensional flow is not a bad thing to think about. I will make a follow-up post if I have anything more concrete to add.
Thursday, May 8, 2014
BICEP2 numerology
It's been a while since I've posted. It's been a while since I talked cosmology. And meanwhile BICEP2 came out with what may be, along with the LHC's 2012 determination of the Higgs boson mass, the big measurement of the decade.
Specifically, BICEP2 has estimated the cosmological "r" parameter - the tensor-to-scalar ratio, which quantifies the relative magnitude of tensor and scalar perturbations of the cosmic microwave background - as 0.2. I'll confess that I'm still working out the basic meaning of this quantity. It seems to be a ratio of energies-squared - the square of the energy in the tensor perturbations, divided by the square of the energy in the scalar perturbations. And the physical meaning of squaring the energy may be that it corresponds to the "work done" by that type of perturbation. So perhaps it would mean that the fluctuations of the inflaton field (which supposedly caused the scalar perturbations) did five times as much work on the CMB photons as was done by the fluctuations of the gravitational field (which supposedly caused the tensor perturbations). But you should probably ask someone better informed before believing me about this.
Now there are all sorts of complicated models out there - Higgs inflation... axion monodromy inflation from string theory... - in which people are trying to get an "r" near 0.2. Meanwhile, what are physics numerologists saying? So far, I have spotted two examples of BICEP2 numerology.
First was a vixra paper by Tony Smith, in which Tony estimates "r" as 7/28 = 0.25. 7 and 28 are the dimensions of different algebras which he associates with the tensor and scalar perturbations, respectively, in the context of an octonionic theory of inflation. Of course I don't understand Tony's logic, but an important part is probably the proposition, a few pages along, that "Cl(64) is the smallest Real Clifford algebra for which we can reflexively identify each component Cl(8) with a vector in the Cl(8) vector space". So it all has something to do with space-time qubits and Bott periodicity and self-embeddings.
Then there was a characteristically laconic post by Marni Sheppeard, in which the idea is that "r" is about 1/5, and that this would be a ratio of... dimensions of certain Hilbert spaces, I think, that are relevant for her theory of mass generation in quantum gravity. In her paradigm, space-time is something like a big concatenation of morphisms between these vector spaces. For more, see her papers at vixra.
My "contribution" to BICEP2 numerology is not going to be based on advanced math - though it does build on the observation that 0.2 = 1/5. My thought is just that this is also the ratio of baryonic matter to dark matter densities in the present-day universe. (I'd also like to acknowledge that work by A. Hattawi helped to fix this fact in my mind - that the OM/DM ratio is about 1/5.) So my question is, is there some theory in which this is not just a coincidence?
Monday, March 10, 2014
Various developments
Emilio Torrente-Lujan has updated a tHWZ numerology paper to include a number of new relations, and Stephen Adler has put out "SU(8) unification with boson-fermion balance", sketching a theory that would resemble N=8 supergravity, but without actually being supersymmetric. Further comments to come.
Friday, January 17, 2014
Coupling constants II
One form of the LC&P sum rule is
2 λ + g²/4 + (g² + g'²)/4 + yt²/2 ~ 1
... based on their equation 2, and neglecting yukawa couplings for fermions other than the top quark.
As they remark (but I didn't notice until Andrew pointed it out), the contributions from bosons and fermions are almost equal. So we can also say that
2 λ + g²/2 + g'²/4 ~ yt²/2 ~ 1/2
The "fermionic part" of this makes sense, if we recall that yt ~ 1. But the bosonic part
2 λ + g²/2 + g'²/4 ~ 1/2
... just considered by itself, seems to be very notable new numerology, connecting electromagnetic and weak couplings with the Higgs self-coupling λ.
edit: Actually, if I think about it for a moment, I remember that g (the weak coupling) is smallish and g' (the hypercharge coupling) is even smaller. So the bosonic part would seem to reduce to
2 λ ~ 1/2
i.e. λ ~ 1/4. I noted almost a year ago that this is implied by the fact that the Higgs VEV / electroweak scale is approximately twice the Higgs boson mass.
edit #2: Study of the literature (e.g. PDG 2013 Higgs review) makes it clear that
λ ~ 1/8
is closer to the truth. Apparently there are some factors of √2 that I missed. But now I don't understand why LC&P works.
(Or are we just dealing with different conventions? Remedial study of Higgs-sector basics is in order...)
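One way to check: assume the convention mH² = 2λv², extract the couplings from the masses, and see whether the bosonic part still sums to 1/2. A minimal sketch (the masses are rough PDG-era values in GeV, chosen by me):

```python
import math

mH, mW, mZ, mt, v = 125.9, 80.4, 91.2, 173.1, 246.2  # GeV, approximate

lam = mH**2 / (2 * v**2)                  # ~0.13, close to 1/8
g = 2 * mW / v                            # SU(2) weak coupling, ~0.65
gp = math.sqrt(4 * mZ**2 / v**2 - g**2)   # hypercharge coupling, ~0.35
yt = math.sqrt(2) * mt / v                # top yukawa, ~1

print(f"lambda         = {lam:.3f}   (~1/8)")
print(f"bosonic part   = {2*lam + g**2/2 + gp**2/4:.3f}   (~1/2)")
print(f"fermionic part = {yt**2 / 2:.3f}   (~1/2)")
```

So with λ ~ 1/8 the bosonic part still comes out near 1/2, because the g²/2 term (about 0.21) is not negligible after all.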
Tuesday, January 14, 2014
t, H, W, Z and naturalness
The lightness of the Higgs boson is one of the vexing issues in particle physics today. Why isn't it made heavy by virtual particles?
Meanwhile, on this blog I have chronicled a variety of possible relations among the masses of t, H, W, Z. Perhaps the most impressive of these is the sum rule due to Lopez-Castro and Pestieau (anticipated by Garces Doz, and blogged by Andrew Oh-Willeke 1 2 3).
It has a mild resemblance to the "Veltman condition", a t,H,W,Z relation proposed by Martinus Veltman which would imply that the virtual corrections to the Higgs mass cancel out. In its original form, it implies a Higgs mass greater than 300 GeV, which is wrong.
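A rough check of that claim, using one common tree-level form of the condition, mH² = 4mt² - 2mW² - mZ², neglecting light fermions (both the form and the mass inputs here are my assumptions):

```python
mt, mW, mZ = 173.1, 80.4, 91.2   # GeV, approximate
mH = (4 * mt**2 - 2 * mW**2 - mZ**2) ** 0.5
print(f"Veltman-condition Higgs mass ~ {mH:.0f} GeV")   # ~314 GeV, vs ~126 observed
```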
However, the original form of the Veltman condition is specific to the unadorned standard model. Today, Ernest Ma - one of the few theorists to tackle the Koide formula - has told us what a Veltman condition looks like, in a minor extension of the standard model where neutrinos get their mass from dark matter (the "scotogenic" model; skotos means darkness, thus scotogenic, generated from the dark).
The paper is here. The new conditions are equations 8 and 9. With three new free parameters, it may not look so exciting. But it demonstrates that a naturalness condition can deviate a bit from Veltman's original formula, while still retaining a family likeness. (Further examples may be found here.)
This suggests a new interpretation of the LC&P sum rule (and any other valid tHWZ numerology): as a symptom of an underlying, slightly-beyond-standard-model theory, that is natural.
Monday, January 6, 2014
α-numerology from M-theory
The fine-structure constant might be the most popular target of physics numerologists. α numerology has a long history, such as Eddington's efforts and Feynman's remark. It's a recurring topic in this long thread which might be the high point of Internet-era physics numerology.
Today on vixra there is an article which speculates about how to obtain one of the numerological formulas for α, namely 1/α = 4π³ + π² + π. It's unusual for two reasons. First, the author (Amir Mulic) speaks the technical language of M-theory; he proposes to "interpret... this expression in terms of the volumes of lp-sized three-cycles on G2 holonomy manifolds". (lp would be the Planck length.)
Second, he mentions that the coupling has to "run", i.e. change its value with energy scale. This aspect of quantum field theory is why particle physics professionals tend to ignore even Koide's relation, to say nothing of the more baroque formulae invented by amateur numerologists. The modern paradigm is that simple relations among particle masses and coupling constants exist at ultra-high energies, but that at low energies these relations will be obscured by complicated corrections, e.g. extra terms containing a logarithm of the energy, described by "beta functions" which can be derived from fundamental theory.
I haven't really gone over Mulic's article (I note that he had a similar one on arxiv years ago), and I am a priori skeptical that this particular idea will work out. But what's noteworthy here is just that someone is making this sort of effort - trying to explain the numerological formulas using the full conceptual apparatus of modern mathematical physics.
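For what it's worth, the arithmetic itself is fine - the formula as quoted matches 1/α to a few parts per million (a trivial check; whether the number means anything is the whole dispute):

```python
import math

value = 4 * math.pi**3 + math.pi**2 + math.pi
print(f"4*pi^3 + pi^2 + pi = {value:.4f}")   # 137.0363
print("1/alpha ~ 137.0360")
```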
Before I comment further, it might help to show how things look without such a bridge. On one side, we have the efforts of someone like Angel Garcés Doz, already mentioned several times on this blog. Garcés Doz works hard, and like Mulic, draws inspiration from 7-dimensional geometry. Still, I find his formulas more interesting than his physics.
On the other side, consider this item of F-theory phenomenology (via Lubos). Here we have a genuine example of how a string-theory background geometry might determine a particular value of α: in this case, it's "the number of fuzzy points" in "a non-commutative four-cycle" wrapped by a 7-brane. But the value of α thereby obtained is the high-energy value, the value at the grand unification scale - perhaps 1/24 or 1/25, says Lubos. It only approaches 1/137 at low energies because of those messy correction terms.
Incidentally, this "fuzzy F-theory phenomenology" played a role at the dawn of my own attempts to make sense of what Marni Sheppeard was doing. One day she exhibited a parametrization of the CKM matrix, in terms of circulant matrices, and I was interested in whether this could fit into an existing framework like F-theory. It was very interesting to see that number 24 appearing as one of her parameters, but at the time none of us knew enough to judge whether Brannen and Sheppeard's circulants, and Heckman and H. Verlinde's fuzzy points, could fit into the same theoretical synthesis.