**Deemed the worst prediction in physics, the vacuum catastrophe may have just been solved.**

By Amira Val Baker

Finally, we can all agree that the vacuum is just not living up to its name and is in fact teeming with energy. The question now is, how much energy?

Well, the answer to that question is yet to be agreed upon, and as always it’s the quantum physicists and the cosmologists who are in dispute. This dispute, however, is rather significant – 122 orders of magnitude significant. The discrepancy, known as the vacuum catastrophe, has been called one of the worst predictions in physics.

So why the discrepancy? Well, it all depends on how you see the vacuum.

At the quantum scale, scientists can only make inferences about what is going on. Those inferences are pretty spot on, with quantum physicists routinely making very precise predictions. However, this predictive power does not in itself give insight into the nature of the quantum realm, and thus of the quantum vacuum. The vacuum was once thought to be little more than a mathematical convenience with no physical relevance. This thinking was cemented in 1887 by the Michelson-Morley experiment, which found no trace of the luminiferous aether and led many to conclude that space was empty and void. However, as painful as it was for some, whisperings from this dark void started to be heard.

In 1947, Hans Bethe showed that spectral observations of hydrogen could be explained if the energetic effects of “quantum vacuum fluctuations” were included. Great scientists like Dirac had alluded to such an effect more than a decade earlier – the so-called Dirac Sea – and of course Newton and Maxwell did not think of space as completely empty, picturing it instead as something more like a fluid. Even Einstein in his later years agreed that *“according to the general theory of relativity space without ether is unthinkable”*. Finally, in 1997 the effects of the quantum vacuum – predicted in 1948 by Hendrik Casimir and known as the Casimir effect – were measured, verifying the influence of this intangible realm. The idea of space not being empty now seems to be the general consensus, with prominent physicists such as Nobel laureate Frank Wilczek describing us as *“… children of the ether …”* in a 2017 lecture entitled “Materiality of a Vacuum”.

**The Casimir effect**

When two uncharged metal plates are placed close together in a vacuum, they are pushed together. This is because the vacuum contains energy existing in different modes of vibration – waves. Outside the plates all wavelengths are allowed, but between them only waves short enough to fit can exist. The resulting difference in energy density on either side of the plates produces an attractive force between them.
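The size of this push can be estimated from the textbook formula for the Casimir pressure between two ideal parallel plates, P = π²ħc/(240 d⁴). A minimal numerical sketch (the one-micron separation is just an illustrative choice):

```python
import math

# Standard Casimir pressure between two ideal parallel plates:
# P = pi^2 * hbar * c / (240 * d^4), attractive, in SI units.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

# At a 1-micron separation the pressure works out to roughly a millipascal
p = casimir_pressure(1e-6)
```

Note how steeply the force grows as the plates approach: halving the gap multiplies the pressure by sixteen, which is why the effect only becomes measurable at sub-micron separations.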

So, now on to measuring this infinite sea of energy, which can be done by simply adding up the lowest possible energy of a harmonic oscillator over all possible modes. However, the shorter the wavelength of a vibratory mode, the higher its frequency and thus the greater its contribution to the vacuum energy density – so summing over all modes yields an infinite result. We therefore need a cutoff, counting only wavelengths greater than some smallest length. The obvious choice is the Planck length – the smallest meaningful unit of length in our universe. This gives the gargantuan value of 10^{93} g/cm^{3} – which is very, very dense!
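With a Planck-length cutoff, the zero-point sum lands at roughly the Planck density, which can be written in closed form as ρ_P = c⁵/(ħG²). A quick order-of-magnitude check in CGS units:

```python
import math

# Vacuum energy density with a Planck-length cutoff is of order the
# Planck density, rho_P = c^5 / (hbar * G^2).  CGS units throughout.
c = 2.99792458e10       # speed of light, cm/s
hbar = 1.054571817e-27  # reduced Planck constant, erg*s
G = 6.67430e-8          # gravitational constant, cm^3 g^-1 s^-2

rho_planck = c**5 / (hbar * G**2)   # ~5e93 g/cm^3
order = math.log10(rho_planck)      # between 93 and 94
```

The exact prefactor depends on how the mode sum is regularised, but any reasonable scheme with a Planck cutoff lands within a few orders of this 10^{93} g/cm³ figure.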

However, when we look at the opposite end of the scale – the cosmological scale – we find a value that is smaller by 122 orders of magnitude. To measure the vacuum energy density at this scale, we have to rely on observations by astrophysicists and on some assumptions about the cosmological model.

The first assumption is that we live in a homogeneous and isotropic universe. In other words, the universe looks the same from all locations (homogeneity) and has no preferred direction (isotropy) – this assumption also implies that the universe is not spinning, but we’ll leave that for another time.

The second assumption is that at large scales the universe is flat. Now, like most things in the Universe – including the Universe itself – there is a critical point at which change happens. The current model states that we live in a flat universe, and for this to be true the total mass-energy density of the universe must equal a critical value. Based on current observations, ordinary matter makes up only 5% of this critical density, with dark matter (27%) and dark energy (68%) accounting for the rest.
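That critical value follows directly from the Friedmann equations: ρ_c = 3H₀²/(8πG). A sketch of the arithmetic, assuming a round Hubble constant of 70 km/s/Mpc:

```python
import math

# Critical density of a flat universe: rho_c = 3 H0^2 / (8 pi G).
# CGS units; H0 = 70 km/s/Mpc is an assumed round value.
G = 6.67430e-8            # gravitational constant, cm^3 g^-1 s^-2
Mpc = 3.0857e24           # one megaparsec in cm
H0 = 70 * 1e5 / Mpc       # Hubble constant converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~9e-30 g/cm^3, i.e. of order 1e-29

# The observed budget relative to this critical density
budget = {"ordinary matter": 0.05, "dark matter": 0.27, "dark energy": 0.68}
```

The result, a few times 10^{-30} g/cm³, is the “10^{-29} g/cm³” figure quoted later in the article; the budget fractions must sum to one precisely because the universe is assumed to sit exactly at the critical density.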

The third assumption is that the universe is expanding. This was originally proposed in 1927 by the Belgian astronomer and cosmologist Georges Lemaître, who went on to postulate that the universe began with the cataclysmic explosion of a small primeval super-atom. The idea came as a shock to the scientists of the time, as the universe was believed to be static.

However, in 1929, while making an observational study of galaxies, Edwin Hubble found that the recession velocity of galaxies increases with increasing distance – that is, the space between galaxies is expanding. The rate of expansion, now known as Hubble’s constant, is the main parameter in models of the expanding Universe.
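Hubble’s relation is simply a straight line, v = H₀d. A minimal illustration, again assuming the round value H₀ = 70 km/s/Mpc:

```python
# Hubble's law: recession velocity grows linearly with distance, v = H0 * d.
H0 = 70.0  # km/s per Mpc (assumed round value)

def recession_velocity(d_mpc):
    """Recession velocity in km/s for a galaxy d_mpc megaparsecs away."""
    return H0 * d_mpc

# A galaxy 100 Mpc away recedes at 7000 km/s under this assumption
v = recession_velocity(100)
```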

Another familiar constant, the cosmological constant, was introduced by Einstein in 1917 to stop the universe from expanding, as his equations otherwise predicted. In light of Hubble’s discovery, however, Einstein realised his original equations had been correct all along and removed the cosmological constant.

Eventually it was found that the universe is expanding at an accelerating rate, so despite its removal the cosmological constant was reintroduced, now representing a form of energy with negative pressure thought to be driving the expansion. So, although a constant, its presence does not seem to be so constant. Should it be there or shouldn’t it? That is the question.

Well, assuming the Universe is pervaded by a form of energy (aka dark energy), and we represent that energy in terms of the cosmological constant, then the answer is yes, it should. However, instead of being merely an additive factor, the cosmological constant is coupled to the density – specifically the critical density of 10^{-29} g/cm^{3} – which is 122 orders of magnitude less than the value predicted by quantum field theory!
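The famous “122 orders of magnitude” is just the logarithm of the ratio of the two densities quoted above, and is worth checking explicitly:

```python
import math

# Order-of-magnitude gap between the quantum field theory prediction
# (~1e93 g/cm^3 with a Planck cutoff) and the observed critical
# density (~1e-29 g/cm^3).
rho_qft = 1e93    # g/cm^3, Planck-cutoff vacuum density
rho_obs = 1e-29   # g/cm^3, observed critical density

gap = math.log10(rho_qft / rho_obs)   # 122 orders of magnitude
```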

To understand and resolve this discrepancy, we first need a quantized view of the universe, from the very small to the very big. The generalized holographic model introduced by Nassim Haramein offers such a view – and it’s all about those Planck units, which define the fundamental quantized information bit, or voxel, of the universe.

In this model, the energy – or information – of any spherical system is proportional to the number of Planck Spherical Units (PSUs), or voxels, within the spherical volume and the number of voxels available on the spherical surface horizon. This holographic relationship between the interior and the exterior defines the mass-energy density of the system, while its inverse defines the mass expressed by the system at any given moment.

When we count the mass of the proton in terms of the voxels it contains, we find a mass-energy equivalent to the mass of the Universe. If this vacuum energy present in the volume of a proton is expanded to the radius of the Universe, the resulting vacuum energy density equates to the cosmological-constant value of 10^{-29} g/cm^{3}. Interestingly, the value found from this approach also gives the value for dark matter.
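The first step of that claim can be sketched as a back-of-the-envelope count. This is not the published derivation – the PSU radius of half a Planck length, the proton charge radius value, and the neglect of sphere-packing factors are all simplifying assumptions for illustration:

```python
import math

# Rough sketch: count Planck Spherical Units (PSUs) filling a proton's
# volume, weighing each with one Planck mass.  Assumed values:
l_p = 1.616e-33      # Planck length, cm
m_pl = 2.176e-5      # Planck mass, g
r_proton = 0.84e-13  # proton charge radius, cm (assumed)

v_proton = (4 / 3) * math.pi * r_proton**3   # proton volume, cm^3
v_psu = (4 / 3) * math.pi * (l_p / 2)**3     # one PSU volume (radius l_p/2 assumed)

n_voxels = v_proton / v_psu   # ~1e60 voxels (packing ignored)
m_total = n_voxels * m_pl     # ~1e55 g
```

The result, of order 10^{55} g, is indeed comparable to common estimates of the mass of the observable universe, which is the coincidence the holographic argument leans on.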

Similarly, the exterior energy available in terms of Planck voxels on the surface horizon of a spherical-shell universe is found to equate with the critical density of the Universe, without requiring the addition of dark matter and dark energy. That is, if we scale the vacuum energy density at the Planck scale (10^{93} g/cm^{3}) by the proportion of that energy available on a spherical surface horizon, we find that as the horizon expands to the size of our universe the vacuum energy density decreases by 122 orders of magnitude.
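The scaling itself can be sketched numerically. A surface-to-volume voxel ratio dilutes the Planck density by a factor of order (l_P/R)², and taking the Hubble radius R = c/H₀ as a stand-in for “the size of our universe” (an assumption for illustration, with H₀ = 70 km/s/Mpc) lands within an order of magnitude of the observed value:

```python
import math

# Dilute the Planck-scale vacuum density by the holographic
# surface-to-volume factor (l_P / R)^2, with R the Hubble radius.
# CGS units; H0 = 70 km/s/Mpc assumed.
c = 2.99792458e10
hbar = 1.054571817e-27
G = 6.67430e-8
H0 = 70 * 1e5 / 3.0857e24           # s^-1

l_p = math.sqrt(hbar * G / c**3)    # Planck length, ~1.6e-33 cm
rho_p = c**5 / (hbar * G**2)        # Planck density, ~5e93 g/cm^3
R_H = c / H0                        # Hubble radius, ~1.3e28 cm

rho_scaled = rho_p * (l_p / R_H)**2   # ~1e-28 g/cm^3
gap = math.log10(rho_p / rho_scaled)  # ~122 orders of magnitude
```

So the (l_P/R)² suppression reproduces the 122-order drop essentially by construction: the squared ratio of the Planck length to the Hubble radius is itself about 10^{-122}.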

It’s as if a proton escaped another universe and expanded to form our universe, much like Lemaître’s expanding primeval super-atom. As well one could conclude from our understanding of energy – or information – that the universe is expanding and accelerating because it is learning about itself and thus it requires more surfaces to store the holographic information. The rate of expansion is thus governed by a pressure gradient due to the information transfer potential at the horizon.

This quantized view of the universe not only resolves the vacuum catastrophe but also offers insight into the evolution and dynamics of our universe. The details of this work were recently published in the Journal of High Energy Physics, Gravitation and Cosmology.

Dear Amira Val Baker – Thank you so much for this extraordinary summary of your findings with respect to what appears to be a very satisfactory and very scientific resolution of the so-called ‘Vacuum Catastrophe’.

As a ‘child of the ether’, I have, since my early adult years, wondered about the ‘quantum soup’ that we live, and move, and have our ‘BEING’ in.

I believe that you have, beyond any shadow of doubt, isolated the key principles that, when taken together, serve as the true model for the energy and information that exists and fluctuates in a holographic, relational Universe.

The word Universe, as I am sure you know well, means “ONE SONG”, as in, Uni – Verse.

By solving some of the problematic riddles that had existed with certain aspects of the Einstein field equations – among other things, the fact that the Universe is ‘humming’ as an infinite sea of energy and information at the level of the voxels that you speak of – and with the math for the same working out as elucidated in the paper that you and Nassim Haramein just had published in the Journal of High Energy Physics, Gravitation and Cosmology, all I can say at this present moment is: congratulations on establishing this great milestone for all the world to see, and best of luck and cheers as you continue to advance these truths and understandings toward practical applications for the good of all!!!

Respectfully yours,

Chad Davis

Clinton, New York

“As well one could conclude from our understanding of energy – or information – that the universe is expanding and accelerating because it is learning about itself and thus it requires more surfaces to store the holographic information.”

Rarely do I come across a statement that stops me in my tracks. This is truly profound and the reason that I find quantum physics so fascinating. Unity and simplicity masked in chaos and complexity. After contemplating this for some time, I thought about the quote “As Above, So Below”. Mankind’s evolution from written language to Hubble’s Constant took a mere 5,000 years. One could assume that man, too, is experiencing expansion thus requiring more surface to store information. However, evolution is painfully slow compared to the rate of technological expansion, which leads us back to particle physics by way of quantum computing and artificial intelligence.

Great work! Thank you

Amira,

Awesome work! You never stop blowing my mind!

One insight I postulate here is that the reason the inside of the proton has the mass of the known universe is because it IS the universe – the universe is inside out (and outside in). The acceleration of the universe is measured by redshift, and there are two types of redshift postulated – Doppler redshift (Hubble) and gravitational redshift (Einstein). Maybe the acceleration we measure is an observational distortion arising from the inside-out curvature of the proton universe, with observed matter being superpositions of the harmonics’ phases within a curved inside-out universe (a constant velocity along a curve has acceleration)?

I am convinced the universe is inside out, because every new insight in astrophysics, and the discrete nature of quantum mechanics – even entanglement – becomes intuitively obvious to explain when viewing the proton universe as flowing inside out and curving back to the inside in a constant rolling blossom.

It has recently been discussed that, due to the accelerating expansion of the universe, we are losing view of thousands of stars per day because they are traveling beyond the light-distance limit of the observable universe from Earth. However, an inside-out proton universe explains this intuitively. Just as we stand on the ocean shoreline watching a ship disappear over the horizon of our 2D planetary surface of 3D space curved in 4D, the stars radiate outward in all directions toward a destination that is inside; the stars disappear over the horizon – 3D stars, 4D space curved into 5D. If the stars are traveling along a curved higher dimension, then the observation would be acceleration, and hence redshift would be measurable. If the universe is all harmonics of the energetic vacuum, then there will be superpositions – constructive interference – that would reveal discrete intervals of matter, and states of matter, that follow the superpositions of the waves. Entanglement would be faster than light because the 5D curvature would curve forward and then back in time to the present, thereby measuring the same wave in two different places at the present.

Curvature explains the acceleration we measure.

One other thing I never got to tell Nassim is that maybe he can resolve the 4% discrepancy in the two different proton mass measurements by taking into account the additional surface area of the bubble faces of the PSUs on the proton. From my recollection, he only treated the mass as a single sphere’s surface area and took the ratio of that area to the PSUs that fill the volume, but the surface of flower-of-life-patterned PSUs will actually have a greater surface area than a single smooth sphere. It will be close but not the same; the two will asymptote towards the same value for very large objects, but summing the bubble faces of the surface PSUs differs from a plain spherical surface area. There may be something to be found there, but I’m not sure.

Keep up the great work. Sending energy and thoughts your way…

The Michelson-Morley experiment did not disprove the aether. It merely failed to detect any appreciable aether wind. This could easily be explained by the aether being dragged along by the Earth, like its atmosphere is. Dr. Harold Aspden developed an aether theory which is quite successful in predicting physical phenomena to a high degree of precision.

An example of such a prediction was the later discovery, by the astronomer William Tifft, of quantized redshifts of 72.45 km/sec in the Virgo cluster, which could be explained exactly by Aspden’s aether model.

However, these redshifts cannot possibly be due to recessional velocities of galaxies as postulated by Hubble, because different sections of the same galaxy have markedly different redshifts, as discovered by later astronomers. Hence no big bang! Aspden’s aether model neatly explains these quantized redshifts as different aether densities.
