Monday, December 24, 2012

Andrew Oh-Willeke has posted "A Review of Fundamental Physics in 2012".
Clearly the discovery of "boson X(126)" is the big event, the major payoff so far from the LHC. We agree on that.
The failure of anything else to show up so far - not just supersymmetry - is also significant, for its impact on the hierarchy problem. The mainstream discussion continues to be framed as a choice between "naturalness" - unknown particles like super-partners or top-partners make the Higgs mass more natural than it seems - and "fine-tuning" - e.g. there are anthropic reasons for having a Higgs mass in this range. The third way, due to Shaposhnikov and Wetterich - the Higgs mass comes from a special boundary condition on the RG flow at high energies - is long overdue for attention.
Technicolor in a narrow sense may be "dead", but the idea of a composite Higgs is still very alive.
"Dark Energy Is Still A Solved Problem" says Andrew - it's just a cosmological constant. Maybe that's what it is, maybe it isn't, but even if it is, we still wouldn't know why it has that magnitude or what it is that cancels out the enormous vacuum energy implied by standard QFT. Since he likes "physics numerology", he should look at some of the numerology that has been proposed regarding the size of the dark energy, e.g. by Riofrio, Chernin, and Padmanabhan.
Andrew is one of the handful of people aware of the generalizations of Koide's formula being pursued by Brannen, Rivero, and others in recent years, so this part of his review is something you definitely won't find elsewhere. I would quibble with one detail: he mentions Sumino and Goffinet as providing ideas about how to explain the exactness of the Koide relation for the pole masses of the charged leptons, when such relations should instead connect the running masses at high energies. But really it's only Sumino who has proposed a mechanism to cancel the deviations that should be produced by RG flow. Goffinet notices the problem, and his work is of interest for other reasons (e.g. the attempt to relate mixing matrices to the formula), but he doesn't have an answer, unlike Sumino.
As for quantum gravity, I remain pro-string and anti-loop, where "string" means not just geometric phases of superstring theory, but also a variety of string-like possibilities. I consider the key "stringlike" properties to be (1) the existence of holographic dual descriptions of ordinary space-time physics, and (2) the possession of very special relations among amplitudes for quantum processes, like those discovered in the twistor revival.
There's a lot more in his review, especially on dark matter. Thanks to Andrew for posting it!
Tuesday, October 2, 2012
Higgs coupling and top Yukawa
Jester blogs about SM vacuum stability (and Lubos comments).
Alejandro Rivero has pointed out many times that the top Yukawa is unnaturally close to 1, and I think this ought to be tackled in conjunction with the Higgs coupling. For the Higgs coupling one might look for a watered-down version of asymptotic safety (that is, a less stringent assumption which nonetheless reproduces the final stages of the Shaposhnikov-Wetterich prediction for the Higgs mass). Meanwhile, the only paper known to me which even talks about the top Yukawa being close to unity is arXiv:1203.3825.
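For concreteness, here's how close it is - a quick check using the tree-level relation y_t = sqrt(2) m_t / v. A minimal sketch; the exact value depends on which top mass definition and renormalization scale one uses:

```python
from math import sqrt

m_top = 173.0  # approximate top quark pole mass, GeV
v = 246.22     # Higgs vacuum expectation value, GeV

# Tree-level Standard Model relation between a fermion's mass and its Yukawa coupling
y_top = sqrt(2) * m_top / v
print(f"y_top = {y_top:.4f}")  # about 0.994 - strikingly close to 1
```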
Tuesday, September 18, 2012
Dharwadker reminder
At her new blog, Marni Sheppeard asks why the Dharwadker-Khachatryan prediction is being ignored. I think it's no mystery. First, the prediction came in a dubious package; second, even considered in isolation, the relation made no theoretical sense, especially from the perspective of RG flow; third, it's easy to rationalize it as a coincidence when there are so many other relationships that can be found.
Consider the relative lack of attention that even the Shaposhnikov-Wetterich prediction of 126 GeV is receiving. Some people are talking about it; Matilde Marcolli lately coauthored a paper about it, and that ought to give it more attention; but even so, it's mostly absent from the public discourse about the Higgs.
Since the S-W prediction is an RG argument, one ought to see whether a "Koide" approach to the D-K prediction, such as Sheppeard employs (page 97), could be embedded in a more general solution to the RG problems that exist for all Koide relations (such as the "Sumino mechanism"). S-W and D-K could coexist.
Alternatively, one could try to produce the D-K formula as a mass sum rule in a theory of composite electroweak bosons. There have been many of those proposals, but the D-K formula has an unfamiliar form.
Thursday, July 26, 2012
Big Rip in December 2012, first attempt
Wikipedia provides an equation which predicts how long it is until dark energy tears apart the universe, if it's dark energy of the "phantom" type. The neo-Mayan apocalypse is just five months away; what would have to be true, for the Big Rip to happen that soon?
I won't try to reproduce the equation here - just follow the link - but the basic things to note are that the left-hand side has units of time, and says how long it is from now, "t_0", until the end, "t_rip"; on the right-hand side, H_0 has units of inverse time, and all the other quantities are dimensionless. H_0 is the Hubble constant, which tells you how the speed with which the galaxies recede from us increases with distance. Speed is distance over time, so change of speed with distance is (distance over time) over distance; "distance" and "over distance" cancel out, and that's why H_0 is just a number times "1 over time".
But what is that number? Wikipedia gives us a bunch of estimates; I'll go with "72 km/s/Mpc" for the purposes of calculation. But for km and Mpc to cancel we need to match units... Actually, forget this, the calculation was already done: the "Hubble time" 1/H_0 is about 13.8 billion years.
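For anyone who does want the unit conversion spelled out, here it is. A sketch; note that 72 km/s/Mpc actually gives about 13.6 billion years - the 13.8 figure corresponds to a slightly smaller H_0, but the difference won't matter below:

```python
# Convert H_0 = 72 km/s/Mpc into a Hubble time in years
km_per_Mpc = 3.0857e19   # kilometres in one megaparsec
H0 = 72.0 / km_per_Mpc   # H_0 in units of 1/second
seconds_per_year = 3.156e7

hubble_time_years = 1.0 / (H0 * seconds_per_year)
print(f"1/H_0 = {hubble_time_years:.2e} years")  # ~1.36e10, i.e. ~13.6 billion years
```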
So, let's go to that first equation. We have decided a priori that the end is this December, so t_rip - t_0 = 5 months ≈ 0.4 years. That equals (1/H_0) × 2 / (3 × |1+w| × sqrt(1-Omega_m)), where w is the dark energy equation-of-state parameter and Omega_m is the matter density of the universe, including dark matter. Observationally, Omega_m is about 0.3, so we have:
0.4 = 13.8 × 10^9 × 2 / (3 × |1+w| × sqrt(0.7))
0.4 × 3 × |1+w| × sqrt(0.7) = 13.8 × 10^9 × 2
|1+w| = 10^9 × (13.8 × 2) / (0.4 × 3 × sqrt(0.7))
(that denominator is 1.004, very close to 1)
So basically w ≈ -2.7 × 10^10, which I think would be grossly incompatible with observation - implying that, if we want dark energy to plausibly end the universe in time for Christmas this year, we'll need a model more complicated than one with a constant equation of state.
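The same calculation in a few lines, as a sanity check (using the rounded inputs above):

```python
from math import sqrt

hubble_time = 13.8e9  # 1/H_0 in years
t_left = 0.4          # years until December 2012, rounded as above
omega_m = 0.3         # matter density parameter

# Big Rip formula: t_rip - t_0 = (2/3) / (H_0 * |1+w| * sqrt(1 - Omega_m)),
# solved here for |1+w|
abs_one_plus_w = 2.0 * hubble_time / (3.0 * t_left * sqrt(1.0 - omega_m))
print(f"|1+w| = {abs_one_plus_w:.2e}")  # ~2.7e10, hence w ~ -2.7e10
```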
Monday, April 23, 2012
t, H, W, Z
Andrew Oh-Willeke mentions (in a blog post whose headline observation is the same as that made by Dharwadker and Khachatryan) that the Higgs VEV is approximately twice the mass of the LHC's maybe-Higgs. This is one of a set of numerological connections between t, H, W and Z that have been on my list of items to ponder. Obviously I need to rush into print with those observations now, or else Andrew and others will get all the credit (when they figure it out for themselves)...
So, first item: top mass is approximately sqrt(2) times Higgs mass, Higgs VEV is approximately sqrt(2) times top mass. This is an extra twist on the basic observation that Higgs VEV is approximately two times Higgs mass.
Second item... This one isn't an observation so much as a conjunction of observations. I've already blogged Malcolm Mac Gregor's observation that m_top ≈ m_W + m_Z. (He has a new paper today with lots of hadron numerology.) One then needs to consider this alongside the Dharwadker-Khachatryan observation (prediction, actually) that m_Higgs = m_W + 1/2 m_Z, and finally alongside the observation from the first item that m_top ≈ sqrt(2) m_Higgs.
If you set the two expressions for m_top equal to each other, you get that m_W + m_Z "equals" sqrt(2) × (m_W + 1/2 m_Z), which would be true if m_W = 1/sqrt(2) m_Z, which isn't true. But maybe it's true "to zeroth order"? ... in the same unknown (and possibly nonexistent) theoretical framework where all these relationships aren't just coincidences.
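All of these relations can be checked in one go. A quick numerical sketch, using rough 2012 values for the masses (in GeV):

```python
from math import sqrt

m_W, m_Z, m_t, m_H, v = 80.4, 91.2, 173.0, 125.5, 246.2  # GeV, approximate

print(v / m_t)                 # ~1.42, vs sqrt(2) ~ 1.414
print(m_t / m_H)               # ~1.38, vs sqrt(2) - rougher
print(m_W + m_Z)               # ~171.6, vs m_t ~ 173 (Mac Gregor)
print(m_W + 0.5 * m_Z)         # ~126.0, vs m_H (Dharwadker-Khachatryan)
print(m_W / m_Z, 1 / sqrt(2))  # ~0.88 vs ~0.707: the consistency condition fails badly
```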
Tuesday, April 17, 2012
Dark matter and powers of two
This post will mention my very own crackpot idea!
Right now we have some blog coverage (Reference Frame, Resonaances) of a paper claiming a signal of dark matter annihilation in the galactic center, producing gamma rays of about 130 GeV.
Now, a few years back there was a claim of "lepton jets" that might be produced by "three new states with energies 15 GeV, 7.3 GeV, and 3.6 GeV. The heavier states cascade decay to the lighter ones while the lightest one decays into a tau pair after 20 ps or so: so the 15 GeV particle should decay to 8 tau's! To say that the masses of some dark sector states should be fine-tuned to be 2m_tau, 4m_tau, and 8m_tau surely looks bizarre." (Quote from Lubos Motl.)
Now I notice that this new claim of 130 GeV, plus or minus a few GeV [*], is around 64m_tau. It's producing photons, not muons, but that just means there's a different decay channel available for this mass, right?
A more urgent theoretical question is whether we suppose that there is a sequence of states with mass 2x, 4x, 8x, 16x, 32x, 64x... m_tau, or whether we just jump straight from 2, 4, 8 to 64. Interestingly, the latter four numbers have a status as "Hermite constants" related to hyperspatial packing densities. Those are the Hermite constants for 3, 4, 5 and 7 dimensions. Hyperspace theorists, this is your chance!
[*] quite a few
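Here is how close (or not) those claimed masses actually are to powers-of-two multiples of the tau mass - a quick sketch with m_tau ≈ 1.777 GeV:

```python
m_tau = 1.777  # tau lepton mass, GeV

# Claimed masses vs the nearest power-of-two multiple of m_tau
for mass, n in [(3.6, 2), (7.3, 4), (15.0, 8), (130.0, 64)]:
    print(f"{mass:6.1f} GeV  vs  {n:2d} * m_tau = {n * m_tau:6.1f} GeV")
# 64 * m_tau ~ 113.7 GeV, so the 130 GeV line misses by "quite a few" GeV indeed
```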
Friday, March 16, 2012
More W, Z numerology
viXra has a preprint by Malcolm H. Mac Gregor, "The W + Z = T Gauge Boson and Top Quark Experimental Mass and Energy Equality": "Experimentally, the sum of the W and Z gauge boson masses is equal to the top quark t mass, to an accuracy of better than 1%..." There's lots more; the central "results" of the paper might be equations 7 and 8. The appearance of two factors of 1/alpha (alpha the fine structure constant) in the formula for m_top in terms of m_electron is reminiscent of this mass plot, which is taken from the arXiv paper "The strange formula of Dr. Koide".
Tuesday, January 3, 2012
Dharwadker and Khachatryan's prediction of the Higgs boson mass
I foresee that I will be making a series of posts about "Higgs Boson Mass predicted by the Four Color Theorem", which has to be the world champion among crackpot physics papers right now, because its authors may have predicted the Higgs mass correctly! The formula for the Higgs mass that they offer is very simple ... m_H = 1/2 (m_W+ + m_W- + m_Z) ... and for anyone impressed by the result, it might be tempting to appropriate the formula, but try to justify it on some other basis.
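Plugging in the measured masses shows why the formula caught my eye - a quick check, with m_W ≈ 80.4 GeV and m_Z ≈ 91.2 GeV:

```python
m_W_plus = m_W_minus = 80.4  # W boson mass, GeV (W+ and W- are degenerate)
m_Z = 91.2                   # Z boson mass, GeV

m_H = 0.5 * (m_W_plus + m_W_minus + m_Z)
print(f"m_H = {m_H:.1f} GeV")  # 126.0 GeV - right where the LHC's maybe-Higgs sits
```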
However, that's not enough for snarxiv blog. I don't want to leave this stone unturned. So I'm going to be drilling down into the "logic" of the paper, trying to unearth the quasi-deductive process whereby this formula is supposed to follow from Ashay Dharwadker's rather unusual construction. Let's start with what must be the final step in the logic - found on pages 55 and 56:
Since the Higgs particle/antiparticle will be identified (as a Cooper pair), their combined energy would then be the sum of the masses of all other bosons defined on the particle frame. We can have all types of bosons superposed on a single particle frame, and the single Cooper pair of the Higgs particle/antiparticle must be able to attribute energy/rest mass to all types of bosons on this particle frame, by the Higgs-Kibble mechanism. The particle frames of the bosons can be superposed at a point in space-time because they follow the Bose-Einstein statistics. Hence, this Cooper pair must have at least enough energy to attribute the sum of the rest masses of all types of bosons defined on the particle frame. On the other hand, the most important property of Bose condensation is that the Cooper pair of the Higgs particle/antiparticle must have minimum energy, so it can have at most the energy required to attribute the sum of the rest masses of all types of bosons defined on the particle frame. This must be the lowest energy state possible for the Higgs boson when it undergoes Bose condensation.
The "particle frame" is a type of disk structure, illustrated many times in the paper, and all the standard model particles are associated with regions of this disk.
It appears that the logic is as follows: The Higgs boson provides the mass for everything. For some reason, we will suppose that it delivers this mass in the form of a "Cooper pair" made of a Higgs particle and a Higgs antiparticle. The energy must be enough to provide the masses of all the massive bosons. But the energy of the Cooper pair will be a minimum. Therefore "masses of all the massive bosons" = "mass of Higgs particle + mass of Higgs antiparticle" and you get the formula.
The logic is illogical because Cooper pairs don't play a role in the Higgs mechanism, Bose condensation is irrelevant when we are comparing different species of boson, and probably for other reasons too. And that's not even addressing the rest of the theoretical framework, starting with Dharwadker's almost certainly wrong proof of the four-color theorem, which employs the Steiner system S(5,8,24). The symmetry group of S(5,8,24) is the sporadic Mathieu group M24, a subgroup of the permutation group S24, and it appears that types of particle are associated with permutations, possibly elements of M24. But M24 has about a quarter of a billion elements, so either we're talking about certain special elements, or some very large equivalence classes...
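For the record, that "quarter of a billion" is exact arithmetic - the order of M24 follows from its standard prime factorization:

```python
# Order of the Mathieu group M24, from its known prime factorization
order_M24 = 2**10 * 3**3 * 5 * 7 * 11 * 23
print(order_M24)  # 244823040 - about a quarter of a billion elements
```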
Anyway, I don't know when I'll return to this. Mostly I just wanted to begin to understand how the paper is supposed to work. Usually the theme of this blog is finding more sense than expected in a crazy idea; here I'm instead analyzing the logic of a paper, ultimately in order to show its flaws - but also just to bring into the open how it's supposed to work. So perhaps this post will help other readers of the paper who want to understand where the prediction comes from, but who are lost among its peculiarities.
I should add that I was driven to look again at the paper by the always interesting Marni Sheppeard, who is taking it seriously.