~ Birth of Gaiaღ ~

Dragons of Thuban To Ban The Falseness


    Faster than light particles found, claim scientists

    Didymos

    Posts : 795
    Join date : 2010-05-20
    Location : Queanbeyan, NSW, Australia

    Faster than light particles found, claim scientists

    Post  Didymos on Sun Oct 23, 2011 2:36 am

    Faster Than Light Neutrinos


    Date: Fri, 21 Oct 2011 01:06:21 -0700
    Subject: neutrinos faster than speed of light?
    From: bmcelligott4789@gmail.com
    To: omniphysics@cosmosdawn.net

    Hello! i've been reading on your website for a couple years now off and on, i really enjoy what you've done, the truth that you have found and helped so many others find for themselves. you are healing our concept of self and that is a commendable task.

    i am writing now because i was wondering what you had to say about the neutrinos breaking lightspeed barrier, i can't remember if you have posted about it or not, did a quick search and it didn't find anything....anyways, would love to hear your take on things.

    Have a wonderful day!

    Brendan


    Faster than light particles found, claim scientists


    Particle physicists detect neutrinos travelling faster than light, a feat forbidden by Einstein's theory of special relativity




    Ian Sample, science correspondent

    guardian.co.uk,


    Thursday 22 September 2011 23.32 BST


    Neutrinos, like the ones above, have been detected travelling faster than light, say particle physicists.

    Photograph: Dan McCoy/Corbis







    It is a concept that forms a cornerstone of our understanding of the universe and the concept of time – nothing can travel faster than the speed of light.

    But now it seems that researchers working in one of the world's largest physics laboratories, under a mountain in central Italy, have recorded particles travelling at a speed that is supposedly forbidden by Einstein's theory of special relativity.

    Scientists at the Gran Sasso facility will unveil evidence on Friday that raises the troubling possibility of a way to send information back in time, blurring the line between past and present and wreaking havoc with the fundamental principle of cause and effect.

    They will announce the result at a special seminar at Cern – the European particle physics laboratory – timed to coincide with the publication of a research paper (pdf) describing the experiment.

    Researchers on the Opera (Oscillation Project with Emulsion-tRacking Apparatus) experiment recorded the arrival times of ghostly subatomic particles called neutrinos sent from Cern on a 730km journey through the Earth to the Gran Sasso lab.

    The trip would take a beam of light 2.4 milliseconds to complete, but after running the experiment for three years and timing the arrival of 15,000 neutrinos, the scientists discovered that the particles arrived at Gran Sasso sixty billionths of a second earlier, with an error margin of plus or minus 10 billionths of a second.

    The measurement amounts to the neutrinos travelling faster than the speed of light by a fraction of 20 parts per million. Since the speed of light is 299,792,458 metres per second, the neutrinos were evidently travelling at 299,798,454 metres per second.
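
    A quick numeric check of these figures (a sketch in Python; the values are taken directly from the article):

```python
c = 299_792_458.0       # speed of light in vacuum, m/s
baseline = 730e3        # CERN to Gran Sasso distance, m
excess_ppm = 20e-6      # the quoted 20 parts per million speed excess

light_time = baseline / c            # light travel time over the baseline
v_neutrino = c * (1 + excess_ppm)    # implied neutrino speed

print(round(light_time * 1e3, 1), "ms")   # ~2.4 ms, as stated
print(round(v_neutrino), "m/s")           # ~299,798,454 m/s, as stated
```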

    The result is so unlikely that even the research team is being cautious with its interpretation. Physicists said they would be sceptical of the finding until other laboratories confirmed the result.

    Antonio Ereditato, coordinator of the Opera collaboration, told the Guardian: "We are very much astonished by this result, but a result is never a discovery until other people confirm it.

    "When you get such a result you want to make sure you made no mistakes, that there are no nasty things going on you didn't think of. We spent months and months doing checks and we have not been able to find any errors.

    "If there is a problem, it must be a tough, nasty effect, because trivial things we are clever enough to rule out."

    The Opera group said it hoped the physics community would scrutinise the result and help uncover any flaws in the measurement, or verify it with their own experiments.

    Subir Sarkar, head of particle theory at Oxford University, said: "If this is proved to be true it would be a massive, massive event. It is something nobody was expecting.

    "The constancy of the speed of light essentially underpins our understanding of space and time and causality, which is the fact that cause comes before effect."

    The key point underlying causality is that the laws of physics as we know them dictate that information cannot be communicated faster than the speed of light in a vacuum, added Sarkar.

    "Cause cannot come after effect and that is absolutely fundamental to our construction of the physical universe. If we do not have causality, we are buggered."

    The Opera experiment detects neutrinos as they strike 150,000 "bricks" of photographic emulsion films interleaved with lead plates. The detector weighs a total of 1300 tonnes.

    Despite the marginal increase on the speed of light observed by Ereditato's team, the result is intriguing because its statistical significance, the measure by which particle physics discoveries stand and fall, is so strong.

    Physicists can claim a discovery if the significance of their result exceeds five standard deviations, meaning the chance of it being a fluke of statistics is less than about one in a few million. The Gran Sasso team's result is six standard deviations.
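
    For reference, the five-sigma threshold corresponds to a one-sided Gaussian tail probability of roughly 3x10⁻⁷, i.e. about one chance in a few million; a minimal sketch:

```python
import math

def tail_probability(sigma):
    """One-sided tail P(Z > sigma) for a standard normal variable."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p5 = tail_probability(5.0)  # ~2.9e-7, about 1 in 3.5 million
p6 = tail_probability(6.0)  # ~1.0e-9, the significance of the Gran Sasso result
print(p5, p6)
```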

    Ereditato said the team would not claim a discovery because the result was so radical. "Whenever you touch something so fundamental, you have to be much more prudent," he said.

    Alan Kostelecky, an expert in the possibility of faster-than-light processes at Indiana University, said that while physicists would await confirmation of the result, it was none the less exciting.

    "It's such a dramatic result it would be difficult to accept without others replicating it, but there will be enormous interest in this," he told the Guardian.

    One theory Kostelecky and his colleagues put forward in 1985 predicted that neutrinos could travel faster than the speed of light by interacting with an unknown field that lurks in the vacuum.

    "With this kind of background, it is not necessarily the case that the limiting speed in nature is the speed of light," he said. "It might actually be the speed of neutrinos and light goes more slowly."

    Neutrinos are mysterious particles. They have a minuscule mass, no electric charge, and pass through almost any material as though it was not there.

    Kostelecky said that if the result was verified – a big if – it might pave the way to a grand theory that marries gravity with quantum mechanics, a puzzle that has defied physicists for nearly a century.

    "If this is confirmed, this is the first evidence for a crack in the structure of physics as we know it that could provide a clue to constructing such a unified theory," Kostelecky said.

    Heinrich Paes, a physicist at Dortmund University, has developed another theory that could explain the result. The neutrinos may be taking a shortcut through space-time, by travelling from Cern to Gran Sasso through extra dimensions. "That can make it look like a particle has gone faster than the speed of light when it hasn't," he said.

    But Susan Cartwright, senior lecturer in particle astrophysics at Sheffield University, said: "Neutrino experimental results are not historically all that reliable, so the words 'don't hold your breath' do spring to mind when you hear very counter-intuitive results like this."

    Teams at two experiments known as T2K in Japan and MINOS near Chicago in the US will now attempt to replicate the finding. The MINOS experiment saw hints of neutrinos moving at faster than the speed of light in 2007 but has yet to confirm them.

    • This article was amended on 23 September 2011 to clarify the relevance of the speed of light to causality.




    Dear Brendan!

    The measured violation of the speed of light cosmic acceleration limit as 1 part in 50,000, if confirmed, simply illustrates the nature of the (anti)neutrinos in their role in the standard model of particle physics.

    As you can discern in the accompanying descriptions, reproduced at:

    http://www.thuban.spruz.com/forums/?page=post&fid=&lastp=1&id=E4C02335-4EC2-4BEC-8532-7D7CFFFDF4EC

    the neutrinos are massless in their basic electron- and muon flavours, but experience a Higgsian Restmass Induction at the 0.052 electronvolt energy level and in the form of a 'Sterile Higgs Neutrino'. This constituted a verified experimental result at the Super-Kamiokande neutrino detector beneath the Japanese Alps and was published worldwide on June 4th, 1998.

    The (provisionally) measured increase in 'c' as the lightspeed invariant was the ratio 299,792,458/299,798,454=0.99998...

    This can be translated to a decrease in the mass of the electron by this Higgsian neutrino mass induction in the range from 2.969 to 3.021 eV, centred on 2.995 eV and differing by 0.052106 eV as the mass differential measured by the Kamiokande detector.

    Using a relativistic electronmass at 0.18077c at the 8.5748 keV level of electric potential (as in the derivation of the Higgs neutrinomass below and using cosmic calibrated units *), this induction then decreases the relativistic electronmass from 9.290527155x10⁻³¹ kg* or 520,491.856 eV* by 2.995 eV* (or so 5.4x10⁻³⁶ kg*) as 520,491.856 eV* - 2.995 eV* = 520,488.861 eV*.

    This compares as a trisected kernel (see the fractional electron charge distribution as the Leptonic Outer Ring) within the error margins to the variation in 'c' ratio as per the article:

    520,488.861/520,491.856 = 0.999994 = 1/1.000006 as 3 parts in 521,000 or about 1 part in 174,000 and so about 30% of the lightspeed variance as the ratio 299,792,458/299,798,454 = 0.99998... and as 1 part in 50,000.
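
    The arithmetic of this comparison can be checked directly (a sketch; the starred cosmic-unit values are taken from the text as given):

```python
m_e_rel = 520_491.856    # relativistic electronmass, eV*
induction = 2.995        # centred Higgsian neutrino mass induction, eV*

m_reduced = m_e_rel - induction          # 520,488.861 eV*
mass_part = induction / m_e_rel          # ~1 part in 174,000
c_part = 1 - 299_792_458 / 299_798_454   # ~1 part in 50,000

print(round(m_reduced, 3))
print(round(1 / mass_part))              # ~174,000
print(round(mass_part / c_part, 2))      # ~0.29, the 'about 30%' above
```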

    A detailed discussion of the subatomic quark-lepton charge distribution is found here:

    The Charge Distribution of the Wavequarkian Standard Model - "Large proton Halo sparks devilish row"

    http://www.thuban.spruz.com/forums/?page=post&id=8C67A51B-F47A-4646-A053-ADAD87677F63&fid=019788EF-0F1C-4E3D-B875-AB41438E63AD



    The cosmic acceleration limit so is not violated by measuring electron-neutrino interactions; rather the Higgs Bosonic Restmass Induction manifests in a subatomic internal charge redistribution, which reduces the energy of the interacting electron (or muon or tauon), the neutrinos always being associated with their leptonic parental counterparts.

    It so is the massless eigenstate of the 'Precursor Higgs Boson' as a 'Scalar RestmassPhoton' coupled to a colourcharged gauge photon of 'sourcesink antiradiation' as a Gauge Goldstone Boson (string or brane), which partitions this primordial selfstate into a trisected 'Scalar Higgs Neutrino Kernel' encompassed by an 'Inner Mesonic Ring' and an 'Outer Leptonic Ring', then defining a post Big Bang reconfiguration of the unified gauged particle states, commonly known as the Standard Model of Particle Physics.

    Measuring (anti)neutrinos as travelling 'faster than the speed of light', so can be interpreted as a superbrane interaction in the multidimensional selfstate preceding the so called Big Bang, which was in fact preceded by an inflationary de Broglie wavematter phase epoch, where the 'speed of light' is by nature tachyonic as a wavematter speed VdB.

    Data:
    Schwarzschild-HubbleRadius:
    RH=2GoMH/c2

    Gravitational String-Oscillator(ZPE)Harmonic-Potential-Energy:
    EPo=½hfo=hc/4πlP=½lPc2/Go=½mPc2=GomPMH/RH
    for string-duality coupling between wormhole mass mP and closure Black Hole metric
    MH = ρcritical·VSeed

    Gravitational Acceleration per unit stringmass:
    ag = GoMH/RH² = GoMHc⁴/4Go²MH² = c⁴/4GoMH = c²/2RH = ½Hoc

    for a Nodal Hubble-Constant:
    Ho = c/RH
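
    The chained identities in this Data block can be verified numerically; a sketch assuming RH ~ 1.598x10²⁶ m (the value implied by the inflaton numbers below) and a placeholder Go, with MH recovered from the Schwarzschild relation:

```python
c = 3.0e8        # m/s (the starred cosmic units take c* = 3x10^8 exactly)
RH = 1.598e26    # Hubble radius, m (assumed from VdB/fo below)
Go = 6.674e-11   # placeholder gravitational constant; the text uses its own Go*

MH = RH * c**2 / (2 * Go)   # invert RH = 2GoMH/c^2
Ho = c / RH                 # nodal Hubble constant

ag_newton = Go * MH / RH**2   # GoMH/RH^2 ...
ag_radius = c**2 / (2 * RH)   # ... equals c^2/2RH ...
ag_hubble = 0.5 * Ho * c      # ... equals half Ho c

print(ag_newton, ag_radius, ag_hubble)   # all ~2.8e-10 m/s^2
```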

    Inflaton parameters:


    De Broglie Phase-Velocity (VdB) and de Broglie Phase-Acceleration (AdB):

    VdB = RHfo ~ 4.793x10⁵⁶ (m/s)* ~ 1.598x10⁴⁸c = 10²²RH
    AdB = RHfo² ~ 1.438x10⁸⁷ (m/s²)*

    De Broglie Phase Speed: VdB = wavelength x frequency = (h/mVGroup)x(mc²/h) = c²/VGroup > c for all VGroup < c
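
    The last relation is easy to sanity-check: for any group speed below c, the de Broglie phase speed c²/VGroup necessarily exceeds c; a minimal sketch:

```python
c = 299_792_458.0

def phase_speed(v_group):
    """De Broglie phase speed VdB = c^2 / VGroup."""
    return c**2 / v_group

for v in (0.01 * c, 0.5 * c, 0.999 * c):
    vdb = phase_speed(v)
    print(v / c, vdb / c)   # phase speed in units of c, always > 1
```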

    This then represents 'my take' on the recent 'faster than light' measurements at Gran Sasso and CERN according to Quantum Relativity.

    Tonyblue









    Hypersphere volumes and the mass of the Tau-neutrino

    Consider the universe's thermodynamic expansion to proceed at an initializing time (and practically at lightspeed for the lightpath x=ct describing the hypersphere radii) from a single spacetime quantum with a quantized toroidal volume 2π²rw³ and where rw is the characteristic wormhole radius for this basic building unit for a quantized universe (say in string parameters given in the Planck scale and its transformations).

    At a time tG, say so 18.85 minutes later, the count of spacetime quanta can be said to be 9.677x10¹⁰² for a universal 'total hypersphere radius' of about rG=3.39x10¹¹ meters and for a G-Hypersphere volume of so 7.69x10³⁵ cubic meters.

    {This radius is about 2.3 Astronomical Units (AUs) and about the distance of the Asteroid Belt from the star Sol in a typical (our) solar system.}
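
    As a sketch, the quantum count follows directly from dividing hypersphere volume by the unit toroidal volume, the 2π² factors cancelling to leave (r/rw)³ (rw here takes the value rw = 10⁻²²/2π m given further down):

```python
import math

rw = 1e-22 / (2 * math.pi)   # wormhole radius, m
rG = 3.39e11                 # G-hypersphere radius, m

count_G = (rG / rw) ** 3            # ~9.68e102 spacetime quanta
volume_G = 2 * math.pi**2 * rG**3   # ~7.69e35 cubic meters
print(count_G, volume_G)
```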

    This modelling of a mapping of the quantum-microscale onto the cosmological macroscale should now indicate the mapping of the wormhole scale onto the scale of the sun itself.

    rw/RSun(i)=Re/rE for RSun(i)=rwrE/Re=1,971,030 meters. This gives an 'inner' solar core of diameter about 3.94x106 meters.


    As the classical electron radius is quantized in the wormhole radius in the formulation Re=10¹⁰rw/360, rendering a finestructure for Planck's Constant as a 'superstring-parametric': h=rw/2Rec³; the 'outer' solar scale becomes RSun(o)=360RSun(i)=7.092x10⁸ meters as the observed radius for the solar disk.
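
    A sketch of this solar mapping, using Re ≈ 2.78x10⁻¹⁵ m for the classical electron radius and rE from the E-hypersphere above:

```python
import math

rw = 1e-22 / (2 * math.pi)   # wormhole radius, m
rE = 3.44e14                 # E-hypersphere radius, m
Re = 2.78e-15                # classical electron radius, m (approximate)

R_sun_inner = rw * rE / Re         # ~1.97e6 m, the 'inner' solar core radius
R_sun_outer = 360 * R_sun_inner    # ~7.09e8 m, the observed solar radius
print(R_sun_inner, R_sun_outer)
```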

    19 seconds later, a F-Hypersphere radius is about rF=3.45x10¹¹ meters for a F-count of so 1.02x10¹⁰³ spacetime quanta.
    We also define an E-Hypersphere radius at rE=3.44x10¹⁴ meters and an E-count of so 10¹¹² to circumscribe this 'solar system' in so 2,300 AU.

    We so have 4 hypersphere volumes, based on the singularity-unit and magnified via spacetime quantization in the hyperspheres defined in counters G, F and E. We consider these counters as somehow fundamental to the universe's expansion, serving as boundary conditions in some manner. As counters, those googol-numbers can be said to be defined algorithmically and independent of mensuration physics of any kind.



    The mapping of the atomic nucleus onto the thermodynamic universe of the hyperspheres

    Should we consider the universe to follow some kind of architectural blueprint, then we might attempt to use our counters to be isomorphic (same form or shape) in a one-to-one mapping between the macrocosmos and the microcosmos. So we define a quantum geometry for the nucleus in the simplest atom, say Hydrogen. The hydrogenic nucleus is a single proton of quark-structure udu and which we assign a quantum geometric template of Kernel-InnerRing-OuterRing (K-IR-OR), say in a simple model of concentricity.
    We set the up-quarks (u) to become the 'smeared out core' in say a tripartition uuu, so allowing a substructure for the down-quark (d) to be u+InnerRing. A down-quark so is a unitary ring coupled to a kernel-quark. The proton's quark-content so can be rewritten, and without loss of any of the properties associated with the quantum conservation laws, as proton -> udu -> uuu+IR = KKK+IR. We may now label the InnerRing as Mesonic and the OuterRing as Leptonic.

    The OuterRing is so definitive for the strange quark in quantum geometric terms: s=u+OR.
    A neutron's quark content so becomes neutron=dud=KIR.K.KIR with a 'hyperon resonance' in the lambda=sud=KOR.K.KIR and so allowing the neutron's beta decay to proceed in disassociation from a nucleus (where protons and neutrons bind in meson exchange); i.e. in the form of 'free neutrons'.

    The neutron decays in the oscillation potential between the mesonic inner ring and the leptonic outer ring as the 'ground-energy' eigenstate.

    There actually exist three uds-quark states which decay differently via strong, electromagnetic and weak decay rates in the uds (Sigmao Resonance); usd (Sigmao) and the sud (Lambdao) in increasing stability.

    This quantum geometry then indicates the behaviour of the triple-uds decay from first principles, whereas the contemporary standard model does not, considering the u-d-s quark eigenstates to be quantum geometrically undifferentiated.

    The nuclear interactions, both strong and weak, are confined in a 'Magnetic Asymptotic Confinement Limit' coinciding with the Classical Electron radius Re=ke²/mec² and in a scale of so 3 Fermi or 2.8x10⁻¹⁵ meters. At a distance further away from this scale, the nuclear interaction strength vanishes rapidly.

    The wavenature of the nucleus is given in the Compton-Radius Rc=h/2πmc with m the mass of the nucleus, say a proton; the latter so having Rc=2x10⁻¹⁶ meters or so 0.2 Fermi.

    The wave-matter (after de Broglie generalising wavespeed vdB from c in Rcc) then relates the classical electron radius as the 'confinement limit' to the Compton scale in the electromagnetic finestructure constant in Re=Alpha.Rc.

    The extension to the Hydrogen-Atom is obtained in the expression Re=Alpha².RBohr1 for the first Bohr-Radius as the 'ground-energy' of so 13.6 eV at a scale of so 10⁻¹¹ to 10⁻¹⁰ meters (Angstroems).
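
    These measured chains can be reproduced with ordinary SI constants (a sketch; standard CODATA-style values rather than the starred cosmic units):

```python
alpha = 1 / 137.035999       # fine-structure constant
Rc_electron = 3.8615926e-13  # reduced Compton wavelength of the electron, m
R_bohr = 5.2917721e-11       # first Bohr radius, m

Re_from_compton = alpha * Rc_electron   # Re = Alpha.Rc
Re_from_bohr = alpha**2 * R_bohr        # Re = Alpha^2.RBohr1
print(Re_from_compton, Re_from_bohr)    # both ~2.818e-15 m
```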

    These 'facts of measurements' of the standard models now allow our quantum geometric correspondences to assume cosmological significance in their isomorphic mapping. We denote the OuterRing as the classical electron radius and introduce the InnerRing as a mesonic scale contained within the geometry of the proton and all other elementary baryonic- and hadronic particles.

    Firstly, we define a mean macro-mesonic radius as: rM=½(rF+rG) ~ 3.42x10¹¹ meters and set the macro-leptonic radius to rE=3.44x10¹⁴ meters.
    Secondly, we map the macroscale onto the microscale, say in the simple proportionality relation, using (de)capitalised symbols: Re/Rm=rE/rM.

    We can so solve for the micro-mesonic scale Rm=Re.rM/rE ~ 2.76x10⁻¹⁸ meters.
    So reducing the apparent measured 'size' of a proton in a factor of about 1000 gives the scale of the subnuclear mesonic interaction, say the strong interaction coupling by pions.
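
    A sketch of this proportionality with the values above:

```python
Re = 2.78e-15    # classical electron radius, m (the micro OuterRing)
rG = 3.39e11     # G-hypersphere radius, m
rF = 3.45e11     # F-hypersphere radius, m
rE = 3.44e14     # macro-leptonic radius, m

rM = 0.5 * (rF + rG)   # mean macro-mesonic radius, ~3.42e11 m
Rm = Re * rM / rE      # micro-mesonic scale, ~2.76e-18 m
print(rM, Rm, Re / Rm) # Re/Rm ~ 1000, the reduction factor quoted above
```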


    The Higgsian Scalar-Neutrino

    The (anti)neutrinos are part of the electron mass in a decoupling process between the kernel and the rings. Neutrino mass is so not cosmologically significant and cannot be utilized in 'missing mass' models.
    We may define the kernel-scale as that of the singular spacetime-quantum unit itself, namely as the wormhole radius rw=10⁻²²/2π meters.

    Before the decoupling between kernel and rings, the kernel-energy can be said to be strong-weakly coupled or unified to encompass the gauge-gluon of the strong interaction and the gauge-weakon of the weak interaction defined in a coupling between the OuterRing and the Kernel and bypassing the mesonic InnerRing.

    So for matter, a W-Minus (weakon) must consist of a coupled lepton part, yet linking to the strong interaction via the kernel part. If now the colour-charge of the gluon transmutates into a 'neutrino-colour-charge'; then this decoupling will not only define the mechanics for the strong-weak nuclear unification coupling; but also the energy transformation of the gauge-colour charge into the gauge-lepton charge.

    There are precisely 8 gluonic transitive energy permutation eigenstates between a 'radiative-additive' Planck energy in W(hite)=E=hf and an 'inertial-subtractive' Einstein energy in B(lack)=E=mc², which describe the baryonic- and hyperonic 'quark-sectors' in: mc²=BBB, BBW, WBB, BWB, WBW, BWW, WWB and WWW=hf.

    The permutations are cyclic and not linearly commutative. For mesons (quark-antiquark eigenstates), the permutations are BB, BW, WB and WW in the SU(2) and SU(3) Unitary Symmetries.
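
    The counting here is just binary strings over {B, W}: 2³ = 8 triplet states for the baryonic sector and 2² = 4 doublets for the mesonic sector; a minimal sketch (note itertools yields them in lexicographic rather than the cyclic order used above):

```python
from itertools import product

baryonic = [''.join(s) for s in product('BW', repeat=3)]  # 8 triplets
mesonic = [''.join(s) for s in product('BW', repeat=2)]   # 4 doublets

print(baryonic)   # ['BBB', 'BBW', 'BWB', 'BWW', 'WBB', 'WBW', 'WWB', 'WWW']
print(mesonic)    # ['BB', 'BW', 'WB', 'WW']
```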

    So generally, we may state that the gluon is unified with a weakon before decoupling; this decoupling 'materialising' energy in the form of mass, namely the mass of the measured 'weak-interaction-bosons' of the standard model (W- for charged matter; W+ for charged antimatter and Zo for neutral mass-currents say).


    Experiment shows, that a W- decays into spin-aligned electron-antineutrino or muon-antineutrino or tauon-antineutrino pairings under the conservation laws for momentum and energy.

    So, using our quantum geometry, we realise that the weakly decoupled electron must represent the OuterRing, just as shown in the analysis of QED (Quantum-Electro-Dynamics). Then it can be inferred that the Electron's Antineutrino represents a transformed and materialised gluon via its colourcharge, now decoupled from the kernel.

    Then the OuterRing contracts (say along its magnetoaxis defining its asymptotic confinement); in effect 'shrinking the electron' in its inertial and charge- properties to its experimentally measured 'point-particle-size'. Here we define this process as a mapping between the Electronic wavelength 2πRe and the wormhole perimeter λw=2πrw.

    But in this process of the 'shrinking' classical electron radius towards the gluonic kernel (say); the mesonic ring will be encountered and it is there, that any mass-inductions should occur to differentiate a massless lepton gauge-eigenstate from that manifested by the weakon precursors.

    {Note: Here the W- induces a lefthanded neutron to decay weakly into a lefthanded proton, a lefthanded electron and a righthanded antineutrino. Only lefthanded particles decay weakly in CP-parity-symmetry violation, effected by neutrino-gauge definitions from first principles}.

    This so defines a neutrino-oscillation potential at the InnerRing-Boundary. Using our proportions and assigning any neutrino-masses mυ as part of the electronmass me, gives the following proportionality as the mass eigenvalue of the Tau-neutrino:

    mυ=meλw.rE/(2πrMRe) ~ 5.4x10⁻³⁶ kg or 3.0 eV.

    So we have derived, from first principles, a (anti)neutrinomass eigenstate of 3 eV.
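
    A sketch of this eigenvalue with ordinary SI inputs (the text's starred units differ slightly, hence the small spread around 3 eV):

```python
import math

me = 9.10938e-31   # electron mass, kg (SI; the text uses starred units)
lam_w = 1e-22      # wormhole perimeter lambda_w = 2*pi*rw, m
rE = 3.44e14       # macro-leptonic radius, m
rM = 3.42e11       # mean macro-mesonic radius, m
Re = 2.78e-15      # classical electron radius, m

m_nu = me * lam_w * rE / (2 * math.pi * rM * Re)   # ~5.2e-36 kg
m_nu_eV = m_nu * (2.99792458e8) ** 2 / 1.602177e-19
print(m_nu, m_nu_eV)   # ~3 eV
```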

    This confirms the Mainz, Germany Result as the upper limit for neutrino masses resulting from ordinary Beta-Decay and indicates the importance of the primordial beta-decay for the cosmogenesis and the isomorphic scale mappings stated above.

    The hypersphere intersection of the G- and F-count of the thermodynamic expansion of the mass-parametric universe so induces a neutrino-mass of 3 eV at the 2.76x10⁻¹⁸ meter marker.

    The more precise G-F differential in terms of eigenenergy is 0.052 eV as the mass-eigenvalue for the Higgs-(Anti)neutrino (which is scalar of 0-spin and constituent of the so called Higgs Boson as the kernel-Eigenstate). This has been experimentally verified in the Super-Kamiokande (Japan) neutrino experiments published in 1998 and in subsequent neutrino experiments around the globe, say Sudbury, KamLAND, Dubna, MiniBooNE and MINOS.

    This Higgs-Neutrino-Induction is 'twinned' meaning that this energy can be related to the energy of so termed 'slow- or thermal neutrons' in a coupled energy of so twice 0.0253 eV for a thermal equilibrium at so 20° Celsius and a rms-standard-speed of so 2200 m/s from the Maxwell statistical distributions for the kinematics.
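
    The thermal-neutron figures quoted here check out with standard kinetic theory (a sketch; E = kT at room temperature and v = sqrt(2E/m)):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
m_n = 1.674927e-27   # neutron mass, kg
eV = 1.602177e-19    # joules per electronvolt
T = 293.15           # 20 degrees Celsius, in kelvin

E_thermal = k_B * T / eV                   # ~0.0253 eV
v_thermal = math.sqrt(2 * k_B * T / m_n)   # ~2200 m/s
print(E_thermal, v_thermal)
```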



    Neutrinomasses

    The Electron-(Anti)Neutrino is massless as base-neutrinoic weakon eigenstate.
    The Muon-(Anti)Neutrino is also massless as base-neutrinoic weakon eigenstate.
    The Tauon-(Anti)Neutrino is not massless, with an inertial eigenstate mean of 3.0 eV.

    The weakon kernel-eigenstates are 'squared' or doubled (2x2=2+2) in comparison with the gluonic-eigenstate (one can denote the colourcharges as (R²G²B²)[½] and as (RGB)[1] respectively say and with the [] bracket denoting gauge-spin and RGB meaning colours Red-Green-Blue).

    The scalar Higgs-(Anti)Neutrino becomes then defined in: (R⁴G⁴B⁴)[0].

    The twinned neutrino state so becomes MANIFESTED in a coupling of the scalar Higgs-Neutrino with a massless base neutrino in a (R⁶G⁶B⁶)[0+½] mass-induction template.

    The Higgs-Neutrino is bosonic and so not subject to the Pauli Exclusion Principle; but quantized in the form of the FG-differential of the 0.052 eV Higgs-Restmass-Induction.

    Subsequently all experimentally observed neutrino-oscillations should show a stepwise energy induction in units of the Higgs-neutrino mass of 0.052 eV. This was the case in the Super-Kamiokande experiments; and which was interpreted as a mass-differential between the muonic and tauonic neutrinoic forms.



    Sterile neutrino back from the dead
    17:28 22 June 2010 by David Shiga - Magazine Issue 2766


    A ghostly particle given up for dead is showing signs of life.

    Not only could this "sterile" neutrino be the stuff of dark matter, thought to make up the bulk of our universe, it might also help to explain how an excess of matter over antimatter arose in our universe.

    Neutrinos are subatomic particles that rarely interact with ordinary matter. They are known to come in three flavours – electron, muon and tau – with each able to spontaneously transform into another.

    In the 1990s, results from the Liquid Scintillator Neutrino Detector (LSND) at the Los Alamos National Laboratory in New Mexico suggested there might be a fourth flavour: a "sterile" neutrino that is even less inclined to interact with ordinary matter than the others.




    Hasty dismissal


    Sterile neutrinos would be big news because the only way to detect them would be by their gravitational influence – just the sort of feature needed to explain dark matter.

    Then in 2007 came the disheartening news that the Mini Booster Neutrino Experiment (MiniBooNE, pictured) at the Fermi National Accelerator Laboratory in Batavia, Illinois, had failed to find evidence of them.

    But perhaps sterile neutrinos were dismissed too soon. While MiniBooNE used neutrinos to look for the sterile neutrino,

    LSND used antineutrinos – the antimatter equivalent. Although antineutrinos should behave exactly the same as neutrinos, just to be safe, the MiniBooNE team decided to repeat the experiment – this time with antineutrinos.

    Weird excess

    Lo and behold, the team saw muon antineutrinos turning into electron antineutrinos at a higher rate than expected – just like at LSND. MiniBooNE member Richard Van de Water reported the result at a neutrino conference in Athens, Greece, on 14 June.

    The excess could be because muon antineutrinos turn into sterile neutrinos before becoming electron antineutrinos, says Fermilab physicist Dan Hooper, who is not part of MiniBooNE. "This is very, very weird," he adds.

    Although it could be a statistical fluke, Hooper suggests that both MiniBooNE results could be explained if antineutrinos can change into sterile neutrinos but neutrinos cannot – an unexpected difference in behaviour.

    The finding would fit nicely with research from the Main Injector Neutrino Oscillation Search, or MINOS, also at Fermilab, which, the same day, announced subtle differences in the oscillation behaviour of neutrinos and antineutrinos.

    Antimatter and matter are supposed to behave like mirror versions of each other, but flaws in this symmetry could explain how our universe ended up with more matter.

    Tonyblue


    Aside this multidimensional hyperphysics; a more 'relativistic' lower dimensional explanation is also logistically feasible.

    Tony,
    A Netherlands group claims to have solved the faster than light neutrino problem by applying special relativity to the GPS satellites:
    http://www.kurzweilai.net/faster-than-light-neutrino-puzzle-claimed-solved-by-special-relativity
    As a result the neutrinos are not faster than light.
    Richard (posting in quantumrelativity yahoo forum October 22nd, 2011)

    Thank you Richard for this link and contribution.

    Faster-than-light neutrino puzzle claimed solved by special relativity


    October 14, 2011 by Editor


    (Credit: CERN)


    The relativistic motion of clocks on board GPS satellites exactly accounts for the superluminal effect in the OPERA experiment, says physicist Ronald van Elburg at the University of Groningen in the Netherlands, The Physics arXiv Blog reports.

    “From the perspective of the clock, the detector is moving towards the source and consequently the distance travelled by the particles as observed from the clock is shorter,” says van Elburg. By this he means shorter than the distance measured in the reference frame on the ground. The OPERA team overlooks this because it assumes the clocks are on the ground not in orbit.

    Van Elburg calculates that it should cause the neutrinos to arrive 32 nanoseconds early. But this must be doubled because the same error occurs at each end of the experiment. So the total correction is 64 nanoseconds, almost exactly what the OPERA team observed.

    Ref.: Ronald A.J. van Elburg, Times Of Flight Between A Source And A Detector Observed From A GPS Satellite, arxiv.org/abs/1110.2685



    Topics:
    Physics/Cosmology





    Last edited by Didymos on Sun Oct 23, 2011 7:56 am; edited 5 times in total

    Re: Faster than light particles found, claim scientists

    Post  Didymos on Sun Oct 23, 2011 4:57 am

    October 19, 2011
    The First Monstrous Objects of the Early Universe





    New observations from NASA's Spitzer Space Telescope strongly suggest that infrared light detected in a prior study originated from clumps of the very first objects of the Universe. The recent data indicate this patchy light is splattered across the entire sky and comes from clusters of bright, monstrous objects more than 13 billion light-years away.

    "We are pushing our telescopes to the limit and are tantalizingly close to getting a clear picture of the very first collections of objects," said Dr. Alexander Kashlinsky of NASA's Goddard Space Flight Center. "Whatever these objects are, they are intrinsically incredibly bright and very different from anything in existence today."

    Astronomers believe the objects are either the first stars -- humongous stars more than 1,000 times the mass of our sun -- or voracious black holes that are consuming gas and spilling out tons of energy. If the objects are stars, then the observed clusters might be the first mini-galaxies containing a mass of less than about one million suns. The Milky Way galaxy holds the equivalent of approximately 100 billion suns and was probably created when mini-galaxies like these merged.

    Scientists say that space, time and matter originated 13.7 billion years ago in a tremendous explosion called the Big Bang. Observations of the cosmic microwave background by a co-author of the recent Spitzer studies, Dr. John Mather of Goddard, and his science team strongly support this theory. Mather is a co-winner of the 2006 Nobel Prize for Physics for this work. Another few hundred million years or so would pass before the first stars would form, ending the so-called dark age of the Universe.

    With Spitzer, Kashlinsky's group studied the cosmic infrared background, a diffuse light from this early epoch when structure first emerged. Some of the light comes from stars or black hole activity so distant that, although it originated as ultraviolet and optical light, its wavelengths have been stretched to infrared wavelengths by the growing space-time that causes the Universe's expansion. Other parts of the cosmic infrared background are from distant starlight absorbed by dust and re-emitted as infrared light.

    "There's ongoing debate about what the first objects were and how galaxies formed," said Dr. Harvey Moseley of Goddard, a co-author on the papers. "We are on the right track to figuring this out. We've now reached the hilltop and are looking down on the village below, trying to make sense of what's going on."

    The analysis first involved carefully removing the light from all foreground stars and galaxies in the five regions of the sky, leaving only the most ancient light. The scientists then studied fluctuations in the intensity of infrared brightness, in the relatively diffuse light. The fluctuations revealed a clustering of objects that produced the observed light pattern.

    "Imagine trying to see fireworks at night from across a crowded city," said Kashlinsky. "If you could turn off the city lights, you might get a glimpse at the fireworks. We have shut down the lights of the Universe to see the outlines of its first fireworks."

    "Spitzer has paved the way for the James Webb Space Telescope, which should be able to identify the nature of the clusters," said Mather, who is senior project scientist for NASA's future James Webb Space Telescope.

    The image at the top of the page reveals a background glow of light from a period of time when the universe was less than one billion years old. This light most likely originated from the universe's very first groups of objects -- either huge stars or voracious black holes.

    The image from NASA's Spitzer Space Telescope shows a region of sky in the Ursa Major constellation. To create this image, stars, galaxies and other sources were masked out. This infrared image covers a region of space so large that light would take up to 100 million years to travel across it. Darker shades in the image on the left correspond to dimmer parts of the background glow, while yellow and white show the brightest light.

    The Daily Galaxy via http://www.spitzer.caltech.edu/images/1695-ssc2006-22a1-The-Universe-s-First-Fireworks






    a) The Quantum-Holographic Transformation of the Earth
    b) Why Black Holes preceded Galaxies in the Cosmology
    c) The First Ylemic Stars in the Universe and the Antiwormholes
    d) The Holographic Universe and the Information Processing


    a) The Quantum-Holographic Transformation of the Earth


    When the Universe was born from its subplenar (infinite) void to reflect this void in the physical plenum reality of spacetimematter, a particular 'point of origin' became necessitated to mirror the 'voidal vortex' of the subplenum in the then reality of a metric plenum.

    This part of the agenda shall then show that the planet earth can be metrically defined to represent this 'mapping of the void'; the earth so becoming enabled to relate the entire information content (including all inertial systems and spacetime coordinates) of the universe to itself and to all and sundry 'sentient civilisations' within it.

    This 'voidal vortex' is like the nonphysicality or abstraction of a 'mathematical point' becoming 'physicalised' in a 2-dimensional manifold or surface as a physicalised 'area' dimension, thus allowing 'measurement' in some unit for displacement.

    The conceptual infinity of the subplenum (Void=Nothing=Everything) so either doubles- or halves itself to enable the Unity operator to emerge from the Void.
    Geometrically, this is simply the dimensional generator of the Null-Dimension as the 'Point' mapping itself as 'Double-Point' yet being the original 'Point' by mathematical induction.



    The 1st dimension so emerges as the arbitrary manner this 'double-point' can align itself, thus defining the minimum displacement between the two points.
    Physically this becomes the Planck-Length, operated on by a dimensional generator, and the 'Energization' of the Planck-Length is known as an (open) type I Planck-superstring of radius the Planck-Length {LP=√(hGo/2πc³)}.


    This Planck-Radius then can form into a (closed) type I Planck-Membrane or Planck-Circle in forming a Planck-Loop of Planck-Energy {EP=hc/2πLP=√(hc⁵/2πGo)}.

    The mass of this Planck-Loop is the Planck-Mass {mP=EP/c²=√(hc/2πGo)=LPc²/Go}.
    As the Planck-Minimum Energy must be 'halved' for the Void-Unity mapping, a physical description for the latter becomes necessary.


    This description engages the labellings of Gravitational Potential Energy for the 'Point-Mass' and its coupling to the 'bridge' (or wormhole) between the subplenum and the plenum.
    This is known as the Planck-Oscillator of the Zero-Point of energy Eo=hc/4πLP=hω/4π=½hfP=½h/tP=½mPc².


    This minimised Planck-Oscillator energy is also the minimised Gravitational Potential Energy in EGPE=Gom.mP/LP, from the gravitational acceleration per 'point-mass m': mag=Gom.mP/LP².
    Therefore, EGPE=Gom.mP/LP=Eo=½mPc² for LP=2Gom/c², which represents the basic minimum Schwarzschild metric for the solution of the 'field equations' in General Relativity, descriptive of the interaction between spacetime curvature and the inertia content in a local 'neighbourhood of pointmasses' of the universe.
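The Planck-scale relations above can be evaluated numerically. This is a minimal sketch using the standard CODATA constants; the text's Go is a model-specific constant, so Newton's G is used here only as a stand-in, making the values indicative rather than the text's own.

```python
import math

# Planck-scale quantities from the text's formulas, evaluated with standard
# constants (Newton's G substitutes for the text's Go - an assumption).
h = 6.62607e-34      # Planck's constant, J.s
c = 2.99792458e8     # speed of light, m/s
G = 6.674e-11        # Newton's gravitational constant

LP = math.sqrt(h * G / (2 * math.pi * c**3))   # Planck-Length
EP = h * c / (2 * math.pi * LP)                # Planck-Energy of the loop
mP = EP / c**2                                 # Planck-Mass

print(f"LP = {LP:.3e} m")    # ~1.6e-35 m
print(f"EP = {EP:.3e} J")    # ~2.0e9 J
print(f"mP = {mP:.3e} kg")   # ~2.2e-8 kg
```

Note that the cross-check mP=LPc²/Go of the text holds identically for any value of the gravitational constant.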

    The geometric mapping of the 'point' onto itself, can now be constructed in physicality.
    The 'voidal vortex' becomes simply the minimum Schwarzschild metric defined in the Planck-String Oscillator and so represents a 'Black Hole' or a 'Voidal Vortex' in superstring/membrane (Planckian) parameters.


    Geometrically then, the 'pointmass' is rendered physical in the 'Planck-area', either as a 'Planck-Square' (with side the 'Planck-Length' LP) or a Planck-Circle of Area πLP².
    The 'bridging' wormhole between the plenum of the Planck parameters and the subplenum of the original Nothing=Infinity void then becomes the topological deformation of the Planck-Circle into a Planck-Ellipse with a corresponding displacement of the singularity focus (the center) of the circle into two foci for the Planck-Ellipse.

    Having two foci, defined in the geometrical definition of an ellipse as an 'eccentric circle' then allows the two foci to communicate with each other in a precise geometrical relationship.
    The locus of an ellipse is defined by any point P on the ellipse joined to the two foci: the sum of the individual displacement vectors from the focus points to the point P on the elliptical locus is constant.
    Rotation of such an ellipse about its major (longer) axis then gives the ellipsoidal volumar, minimised as an ellipsoidal or spheroidal (actually toroidal) Planckian volume in 3 dimensions.

    As the universe (in spacetime) is defined in a summation of the minimum Planck-area counts and so the Planckian loops; a simple deformation of the Planck-area unit into a two-focal ellipsoidal volumar will so allow a higher dimensional 'fractally discrete' continuum to connect any two such discretized locations within the universe via this bifocalised 'holofractal' quantum geometry.

    The intelligence or logos, who transformed (some of) the abstract 'energy' of the subplenum as an infinite 'reservoir' of 'sourcesink energy' into a then finite thermodynamic energy (as defined in the Laws of Nature and the Conservation Laws) then established the 'distribution of this materialised energy'; first as kinematic-thermodynamic gravitational noninertial mass in a hitherto massless cosmos {E=Σhf} and secondly as an inertial electromagnetic mass {E=Σmc²} in a first principle of the Einsteinian 'Principle of Equivalence'.

    When the universe of the plenum emerged from the subplenum about 19.11 billion (civil) years ago; this bifocalisation became established and assigned an axial direction to the subsequent cosmological evolution, albeit relative to the manifestation of the focalisations on the major ellipsoidal axis.
    After the universe had attained an expansion speed of so 29.3% of lightspeed so 14.3 billion years after the Big Bang (or so 4.8 billion years ago); the cosmological 'birth of the earth' from the solar nebula materialised the 'Universal Focus Point' on an arbitrary major axis of the spheroidal universe and specified the physics of the 'evolution of the planetary focus' relative to the rest of the universe.

    This focal evolution 'inverts' the cosmic expansion in 11 dimensions in a cosmology which is purely electromagnetic in a supermembrane-mirror function of the original noninertial cosmos of the gravitational- or photonic mass equivalence.
    (It is the lower dimensional 'string' evolution which is asymptotic in the inertial mass, itself transformed from the gravitational mass in 10 dimensions).

    For an electromagnetic age of the universe of T=19.11 billion years then; the inverted lightspeed 1/c defines the '11-dimensional' envelope of the earth as the 'Universal Focal Point'.
    The Inversion-Lightpath Xinverse=(1/c)T then gives a 'Displacement-Radius' for the earth of about 2.01 million kilometers, and a yearly increase of this radius by 105 millimeters.

    Therefore, even when the planet earth did not exist in physicality; its 'Sphere of Inversion' existed as a focalisation and a 'radial size' of 1.50 million kilometers.
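The Inversion-Lightpath arithmetic above can be checked directly; the sketch below simply reproduces the text's numbers, reading (1/c)T as T in seconds divided by c.

```python
# The Inversion-Lightpath of the text: Xinverse = (1/c).T
c = 3.0e8                      # m/s, as used throughout the text
year = 3.156e7                 # civil year in seconds (~365.25 days)

T_now = 19.11e9 * year         # electromagnetic age of 19.11 billion years
X_now = T_now / c              # 'Displacement-Radius' today
X_birth = (19.11e9 - 4.8e9) * year / c   # at the earth's birth, 4.8 Gyr ago
dX_year = year / c             # yearly increase of the radius

print(f"X_now   = {X_now:.3e} m")          # ~2.01e9 m (2.01 million km)
print(f"X_birth = {X_birth:.3e} m")        # ~1.50e9 m (1.50 million km)
print(f"dX/yr   = {dX_year*1000:.0f} mm")  # ~105 mm per year
```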

    The Mass of the earth is about MEarth=6x10²⁴ kilograms and this is identical to an '11-dimensional Black Hole' of Schwarzschild-Radius RSEarth=2GoMEarth/c²~15 millimeters.
    The 11-dimensional supermembrane around this Earth-Singularity is at a location in the local solar system of about 5% of the distance to the planet Venus and so encompasses the Moon at 384,000 kilometers within the 2 million kilometers.
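The ~15 mm Schwarzschild radius follows from the stated formula only with the value of Go that the text's other numbers imply (Go~1.111x10⁻¹⁰, not Newton's G); that value is an assumption of this sketch.

```python
# Schwarzschild radius of the earth, RS = 2.Go.M/c^2, with the Go value
# implied by the text's own numbers (an assumption; Newton's G gives ~9 mm).
Go = 1.1111e-10
c = 3.0e8
M_earth = 6.0e24

RS_earth = 2 * Go * M_earth / c**2
print(f"RS_earth = {RS_earth*1000:.1f} mm")   # ~14.8 mm, the text's ~15 mm
```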

    The master timeline then defines the nexus, when the 11-dimensional Black Hole at the center of the planetary earth literally 'turns inside out' to form an 11-dimensional White Hole.
    The trigger for this Möbian transformation of the planetary core will be a 'light signal' sent from the galactic center (Hunab Ku in Mayan cosmology) precisely 65 baktuns from the Mayan end date on December 21st, 2012 and so 65x144,000=9,360,000 kin or days ago in the (civil) year 23,615 BC.
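The baktun count above is straightforward calendar arithmetic and can be verified; note the text's 23,615 BC figure counts through the astronomical year zero, which the plain subtraction below does not.

```python
# The baktun count of the text: 65 baktuns of 144,000 days each, counted
# back from the Mayan end date December 21st, 2012.
days = 65 * 144000                 # kin (days)
years = days / 365.25              # civil years
start_year = 2012 - years          # negative => BC (no year-zero correction)

print(days)                        # 9360000 days
print(f"{years:.0f} years back")   # ~25626 years
print(f"~{abs(start_year):.0f} BC")
```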

    This signal from the galactic center is itself a 'wormhole' quanta as a minimum 'Planck-volumar' and also as a 'Consciousness-Quantum' so its interaction with the earth-core quantum will allow the Sink-Nature of the Black Hole equivalent to transform into a Source-Nature or White Hole equivalent.

    The 'New Earth' will so become a cosmic emitter, broadcasting its 'absorbed' information, collected throughout its 4.8 billion year evolution in a form of Hawking Radiation and travelling at lightspeed from the planetary core.

    The 'absorbed' Black Hole information of the 'Old Earth' so will become a 'New Context' for the rest of the universe; then enabled to 'Witness' the (often horrendous and sometimes magnificent) planetary evolution of the data-collectors upon the planet earth and inclusive of all lifeforms.

    Further technical information will be published at a later date in the master timeline.

    September, 1st, 2009 - The 2nd Day of the Babylonian Captivity of planet earth.
    Elijah Malachi



    b) Why Black Holes preceded Galaxies in the Cosmology

    The symbiotic relationship between black holes and galaxies in the standard models of astrophysics and cosmology has been known for decades. Yet the particular order of this partnership had to await improved technological equipment to enable a more detailed examination and analysis of the Super-Massive Black-Holes (SMBHs) known to reside at the center of basically all galaxies on whatever scale.
    It has become standard knowledge, that black holes had to come first; somehow seeding subsequently evolving galaxies as vortex energy concentrations.

    What is presently not understood is why black holes preceded galaxies. As black holes must necessarily represent systems of maximised entropy, the paradox arises how an exquisitely low entropy state of the Quantum Big Bang could manifest maximised systems of information disorder in black holes; entropic systems which would then evolve into galactic systems of higher self-order.


    Entropy or the natural dispersion tendency of material systems, such as a gas or a random particle distribution; can be either described as a thermodynamic entropic system or as a system of Shannon information.
    The thermodynamic system is modelled on the number of permutations, say, a stochastic particle distribution can accommodate; whilst Shannon information describes this integral of eigenstates as a summation of bits.

    The solution is found in the nature of the Big Bang boundary condition and how this condition manifests both the maximum- and the minimum entropic initializing self-state simultaneously.

    All of the information of the Big Bang was collectified in a primordial mass seed Mo and as a 'Seed of Inertia', which defines the so called 'singularity' at the center of any black hole as well as the 'singularity' at the beginning of the material universe.
    All of this information then became dispersed in a hyper-inflation, which defined the 'collected' and minimum entropy mass seed as a 'dispersed' and maximum entropy mass seed Mcritical.

    There so exists a coupling between this inertia seedling for a subsequent 'kinematic thermodynamic expansion' of that seed in a form of vortex distribution - and the encompassing inertia seedling Mcritical, necessarily of a higher dimension, than the 'concentrated' Mo.


    The proportionality between the two seeds is Mo/Mcritical=qo=0.01405.., a ratio which also defines the proportionality between the hyper-inflating acceleration (AdeBroglie=RHubble.c²/λwormhole²) and the so named 'Cosmological Constant' or Einstein-Lambda (ΛEinstein=GoMo/λwormhole²), and as a deceleration parameter which is half of the ratio between the actual- and the critical density, aka Ωo=ρactual/ρcritical=2qo.

    The initial string-parametric boundary condition for the cosmogenesis (and with E a spacetime quanta counter) so can be stated as:

    AdeBroglie/ΛEinstein=Mo/Mcritical=0.01405... with further definitions below.

    {Mo²=E(mP.mc/me)² as the initialising Big Bang inertia seed (in 10 dimensions); and the critical 'closure mass' (in 11 dimensions) for a 'nodal' Hubble-'Constant' Ho=c/RHubble becomes:


    Mcritical=ρcritical.Vmax=(3Ho²/8πGo)(4πRHubble³/3)=RHubble.c²/2Go and so for the Hubble-Radius being the encompassing extremal SMBH requirement as the Schwarzschild solution.}
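The closure-mass identity above can be checked numerically; the sketch assumes the text's RHubble and the Go value its other numbers imply.

```python
import math

# The closure mass Mcritical = rho_critical.Vmax = RHubble.c^2/(2Go),
# with the text's RHubble and its implied Go (assumptions of this sketch).
c = 3.0e8
Go = 1.1111e-10
R_H = 1.60e26                       # Hubble-Radius, m
Ho = c / R_H                        # nodal Hubble-'Constant', 1/s

rho_crit = 3 * Ho**2 / (8 * math.pi * Go)
V_max = 4 * math.pi * R_H**3 / 3
M_crit = rho_crit * V_max           # equals RHubble.c^2/(2Go) identically

print(f"Ho     = {Ho:.2e} 1/s")     # ~1.88e-18
print(f"M_crit = {M_crit:.2e} kg")  # ~6.5e52 kg
```

The nodal Ho=1.88x10⁻¹⁸ 1/s recovered here is the same value the text uses later for the cycle-times t=n/Ho.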

    The minimum entropy state, defining the Quantum Big Bang so is defined in the wormhole perimeter λwormhole=2πrwormhole, behaving like a Einstein-Rosen-Bridge in 'Tunneling' the minimum entropy mass seedling Mo from the subplenum into the plenum, thus describing the Quantum Big Bang and the 'escape of the singularity' as the manifestation of a White Hole at the boundary between the subplenum of NoTime and the plenum of InTime.

    The inertia seed Mo in 10 string dimensions began to 'drop its seeds' in a light speed expansion of the 'Hubble-Bubble'. The 'space' for this 'dropping of the seeds' had been previously created in the hyper inflation as the 11-dimensional 'Hubble-Bubble-Envelope'.

    The 'dropping of the seeds' occurred via vortices, becoming Vortex-Potential-Energy or VPE and as the true nature for the so called 'Virtual Particle' background in the Heisenberg Matrix of discretised spacetime aka the Zero-Point-Energy or the ZPE.


    This ZPE=VPE manifested Black-Hole Sink-Vortices, which then could (as function of the metric CMBBR temperature background) form the first protostars as ylemic neutron stars.


    The SMBH of Mcritical in 11 dimensions is extremal, meaning it is a boundary condition and does not Hawking radiate in a selfinteraction with the VPE.
    The SMBH of Mo in 10 dimensions is also extremal, but represents the characteristic supercluster displacement scale in the universe at a diameter of about 472 million lightyears.

    No physical black holes can form above this scale, as the difference between the two mass seeds determines the inertia evolution of the Mo seedling in a form of selfinteraction between the two SMBHs.
    Also the galactic supercluster scale defines the homogeneity and the isotropy of the large scale cosmology, where the superclusters cease to interact in gravitational dynamics.
    The inertia evolution of the universe is based on a transformation of the 'missing mass' (often called dark matter and also related to the dark energy) into 'Consciousness'.
    This 'Consciousness' is rigorously defined in string parameters as the angular acceleration (as the time-differential of frequency) acting upon a collection of space volumars, i.e., some region of 'encapsulated' space.
    The minimum space-quantum then becomes the scale of the 'tunneling wormhole' as the discretization or the 'Holofractalisation' of all metricated spacetimes.

    The Quantum Big Bang so emerged an astrophysics of Black Holes from a White Hole 'singularity' or minimum eigenstate.

    The Black Hole, characterizing the center of the earth will transform into a holographic image of this 'Primordial White Hole' as the 'Particle of God - the Little Serpent', sent from the galactic center of the Milky Way via the conduit of the universal wormhole tunneling established in the de Broglie hyper-acceleration.

    The information of the lower dimensional mass seed then becomes 'mapped' onto the 2-dimensional (rootreduced from 11 dimensions) 'inner' surface of the 'Mother Black Hole' and this engages a surface area 'holofractalised' from the minimum 'Father White Hole', which so is also a 'Father Black Hole' in inertia association.
    As is well established in contemporary cosmology; the information of a volume-given universe becomes a function of particular boundary conditions in the Hawking entropy as Planck-Area/4 and in the Bekenstein bounds.

    In general, the Event Horizon of a Black Hole is surrounded by a Photon-Sphere, manifesting at 3/2 times the Schwarzschild radius.
    This then describes the interaction of the 'absorbed information' with its 'lighted envelope' and is a direct consequence of the toroidal topology of the wormhole quantum geometry.

    The wormhole connects a White Hole to a Black Hole in such a manner, that the minimum Planck-Nugget is a deformed sphere, namely a toroidal hypersphere in 3 dimensions behaving like a 3-dimensional surface.


    The hypersphere volume is the boundary condition in R³-Riemann space for the Riemann R⁴ space (Volume V4=½π²R⁴ with dV4/dR=V3=2π²R³ aka the volume of an idealised 2-Torus).

    The surface area of a 2-Torus in 3 dimensions is however ATorus=2πR.2πR=4π²R², while the boundary condition for V3=2π²R³ is dV3/dR=Awormhole=6π²R² for Awormhole/ATorus=3/2.
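The 3/2 ratio claimed above is elementary geometry and can be confirmed in a few lines; R=1 is used purely for illustration.

```python
import math

# The toroidal boundary condition: V4 = (1/2)pi^2.R^4, V3 = dV4/dR = 2pi^2.R^3,
# A_torus = 4pi^2.R^2 and A_wormhole = dV3/dR = 6pi^2.R^2, giving the 3/2
# photon-sphere ratio of the text. Evaluated at R = 1 for illustration.
R = 1.0
V4 = 0.5 * math.pi**2 * R**4
V3 = 2 * math.pi**2 * R**3                  # = dV4/dR
A_torus = (2 * math.pi * R) * (2 * math.pi * R)
A_wormhole = 6 * math.pi**2 * R**2          # = dV3/dR

ratio = A_wormhole / A_torus
print(ratio)                                # 1.5, the 3/2 of the photon sphere
```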

    The subplenum so assigns particular boundary conditions onto the 'energy' manifestations in the plenum; but 'coordinates' which are NOT mappable onto the subplenar 'topology' or manifold.

    This crystallizes the wormhole connection for the universe to any location and a focalisation nexus called the planet earth.

    The mass of the earth is MEarth=6x10²⁴ kg.
    The mass of the Galactic Core is known to be a 'galactic constant' of about (500-1000) times the central Black Hole, here taken as MCore=750MSA*.


    The mass of the Central Black Hole aka Sagittarius A* is: MSA*=4.4x10⁶ suns or 9x10³⁶ kg.
    The mass seedling of the Universe at the 'Beginning' of the Big Bang is Mo=2x10⁵¹ kg and the mass seedling of the Universe at the 'End' of the Big Bang is Mcritical=6.5x10⁵² kg.

    Then (MCore/MEarth)=constant.(Mo/MSA*) for constant=(MCore.MSA*/Mo.MEarth), which calculates as (750x81x10⁷²)/(12x10⁷⁵)~5.1 and so of the order of unity (1), say as an 'Upper Bound'.


    But the seedling mass defines cosmologically a substructured, albeit still extremal Black Hole in the gravitational attraction between superclusters and so the homogeneity and isotropy of the standard cosmology.

    The encompassing 'universal' Black Hole mass is extradimensional (like the golfball Black Hole defining the central earth) and so is calculated to 'topologically close' the 'string-universe' (in a 'Calabi Yau' 6-torus geometry) for the super-seeded Universe (with a 17 billion year 'heartbeat' or Hubble Oscillation).

    Using Mcritical then, constant=(750x81x10⁷²)/(39x10⁷⁶)~0.16, again of order unity, but now as a 'Lower Bound'.
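Both bound constants follow from the mass inventory above; this sketch simply redoes the arithmetic with the stated masses.

```python
# The 'Upper' and 'Lower Bound' constants from the mass inventory of the text.
M_earth = 6.0e24
M_SA = 9.0e36                # Sagittarius A* central black hole
M_core = 750 * M_SA          # galactic core, 750 x central black hole
Mo = 2.0e51                  # Big Bang inertia seed
M_crit = 6.5e52              # closure mass

upper = (M_core * M_SA) / (Mo * M_earth)       # ~5.1
lower = (M_core * M_SA) / (M_crit * M_earth)   # ~0.16

print(f"upper bound ~ {upper:.2f}")
print(f"lower bound ~ {lower:.2f}")
```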

    The Evolution of the 'Lower Bound' for the encompassing cosmology in the higher dimension (11 or 8 or 5 or 2) towards the 'Upper Bound' then becomes an evolution of 'Cosmic Consciousness'; well understood by indigenous peoples all around the globe; but labelled as 'dark matter' by the physical materialists and scientists.
    The 'Dark Matter' is the 'Spirit' in particular string-membrane coupled associations.


    In terms of the volumes, the golfball sized (Schwarzschilded) Black Holed earth is VSEarth=(4π/3)(0.015m)³~1.4x10⁻⁵ cubic meters or so 14 cubic centimeters.
    This compares to the 'ordinary' spacetimed VEarth=(4π/3)(6370km)³~10²¹ cubic meters.


    The ratio VEarth/VSEarth~10²¹/(1.4x10⁻⁵)~7x10²⁵ is of the order of the size of the entire universe, the Hubble-Radius {RHubble=1.60x10²⁶ m}, measured in meters.
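The two volumes and their ratio can be recomputed directly from the radii quoted above:

```python
import math

# Golfball black holed earth vs ordinary earth: volumes and their ratio.
V_S = (4 * math.pi / 3) * 0.015**3        # Schwarzschilded earth, ~14 cm^3
V_E = (4 * math.pi / 3) * (6.37e6)**3     # ordinary earth, m^3

ratio = V_E / V_S
print(f"V_S   = {V_S:.2e} m^3")           # ~1.4e-5
print(f"V_E   = {V_E:.2e} m^3")           # ~1.1e21
print(f"ratio = {ratio:.1e}")             # ~7e25
```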

    The sum total of the information contained in the planetary earth so becomes 'data compressed' in the black holed earth and then reemerges from absorption to emission through a white holed conduit for the benefit of the universe in cellular hierarchies and all of the intelligences contained within the supermembraned Mother-Black Holed Envelope Mcritical.
    The planetary earth itself will so become a 'Mother-Planet' for the universal sentiences.




    c) The First Ylemic Stars in the Universe and the Antiwormholes

    The stability of stars is a function of the equilibrium condition, which balances the inward pull of gravity with the outward pressure of the thermodynamic energy or enthalpy of the star (H=PV+U). The Jeans Mass MJ and the Jeans Length RJ, used to describe the stability conditions for collapsing molecular hydrogen clouds forming stars, are well known in the scientific database in formulations such as:

    MJ=3kTR/2Gm for a Jeans Length of RJ=√{15kT/(4πρGm)}≈√(kT/Gnm²).


    Now the Ideal Gas Law of basic thermodynamics states that the internal pressure P and Volume V of such an ideal gas are given by PV=nRT=NkT for n moles of substance, with the Number N of molecules (say) divided by Avogadro's Constant L in n=N/L.

    The Ideal Gas Constant R divided by Avogadro's Constant L then defines Boltzmann's Constant k=R/L. Now the statistical analysis of the kinetic energy KE of particles in motion in a gas (say) gives a root-mean-square velocity (rms) and the familiar 2.KE=mv²(rms) from the distribution of individual velocities v in such a system.

    It is found that PV=(2/3)N.KE as a total system described by the v(rms). Now set the KE per particle equal to the Gravitational PE=GMm/R for a spherical gas cloud and you get the Jeans Mass: (3/2)kT=GMm/R with m the mass of a nucleon or Hydrogen atom and M=MJ=3kTR/2Gm as stated.

    The Jeans' Length is the critical radius of a cloud (typically a cloud of interstellar dust) where thermal energy, which causes the cloud to expand, is counteracted by gravity, which causes the cloud to collapse. It is named after the British astronomer Sir James Jeans, who first derived the quantity; where k is Boltzmann's constant, T is the temperature of the cloud, R is the radius of the cloud, m is the mass per particle in the cloud, G is the Gravitational Constant and ρ is the cloud's mass density (i.e. the cloud's mass divided by the cloud's volume).
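The Jeans estimate above is easy to evaluate for a concrete cloud. The cloud values (T, R) below are illustrative assumptions, not figures from the text; standard constants are used since this is ordinary molecular-cloud physics.

```python
# The Jeans estimate of the text: set the kinetic energy per particle,
# (3/2)kT, against the gravitational potential GMm/R, so M = 3kTR/(2Gm).
k = 1.381e-23        # Boltzmann's constant
G = 6.674e-11        # Newton's gravitational constant
m_H = 1.67e-27       # hydrogen atom mass, kg

T = 10.0             # K, a cold molecular cloud (illustrative assumption)
R = 1.0e17           # m, a few parsecs (illustrative assumption)

M_J = 3 * k * T * R / (2 * G * m_H)
print(f"M_J = {M_J:.2e} kg (~{M_J/2e30:.0f} solar masses)")
```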

    Now following the Big Bang, there were of course no gas clouds in the early expanding universe, and the Jeans formulations as given are not applicable to the mass seedling Mo.

    However, the universe's dynamics is in the form of the expansion parameter of GR and so the R(n)=Rmax(n/(n+1)) scalefactor of Quantum Relativity.
    So we can certainly analyse this expansion in the form of the Jeans Radius of the first protostars, which so obey the equilibrium conditions and equations of state of the much later gas clouds, for which the Jeans formulations then apply on a say molecular level.
    This analysis so defines the ylemic neutron stars as protostars and the first stars in the cosmogenesis and the universe.


    Let the thermal internal energy or ITE=H be the outward pressure in equilibrium with the gravitational potential energy of GPE=Ω. The nuclear density in terms of the superbrane parameters is ρcritical=mc/Vcritical with mc a base-nucleon mass for a 'ylemic neutron'.

    Vcritical=4πRe³/3, the volume for the ylemic neutron as given by the classical electron radius Re=10¹⁰λwormhole/360=e*/2c².

    H=(molarity)kT for molar volume as N=(R/Re)³ for dH=3kTR²/Re³.
    Ω(R)=-∫GoMdm/R=-{3Gomc²/Re⁶}∫R⁴dR=-3Gomc²R⁵/5Re⁶ for
    dm/dR=d(ρV)/dR=4πρR² and for ρ=3mc/4πRe³.

    For equilibrium, the requirement is that dH=-dΩ in the minimum condition dH+dΩ=0.
    This gives: dH+dΩ=3kTR²/Re³-16Goπ²ρ²R⁴/3=0 and the ylemic radius as:


    Rylem=√{kTRe³/Gomc²}

    as the Jeans-Length precursor or progenitor for subsequent stellar and galactic generation.
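The equilibrium condition dH/dR=-dΩ/dR can be checked numerically against the closed form Rylem=√(kTRe³/Gomc²). The parameter values (Go, Re, mc, T) follow the text's framework and are assumptions of this sketch.

```python
import math

# Numerical check of the ylemic equilibrium dH/dR + dOmega/dR = 0, using
# H = kT.R^3/Re^3 and Omega = -3Go.mc^2.R^5/(5Re^6) as derived above.
k = 1.381e-23
Go = 1.1111e-10          # the text's Go (assumption)
Re = 2.778e-15           # classical electron radius of the text
mc = 9.9e-28             # base-nucleon mass, kg (assumption)
T = 2.0e10               # K, a Chandrasekhar-scale temperature

def dH(R):      return 3 * k * T * R**2 / Re**3       # dH/dR
def dOmega(R):  return -3 * Go * mc**2 * R**4 / Re**6 # dOmega/dR

R_ylem = math.sqrt(k * T * Re**3 / (Go * mc**2))
print(f"R_ylem = {R_ylem:.1f} m")   # ~7.4 km, the typical neutron star radius
print(f"residual = {dH(R_ylem) + dOmega(R_ylem):.2e}")  # ~0: equilibrium
```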

    The ylemic (Jeans) radii are all independent of the mass of the star as a function of its nuclear generated temperature. Applied to the protostars of the vortex neutron matter or ylem, the radii are all neutron star radii and define a specific range of radii for the gravitational collapse of the electron degenerate matter.

    This spans from the 'First Three Minutes' scenario of the cosmogenesis to 1.1 million seconds (or about 13 days) and encompasses the standard beta decay of the neutron (underpinning radioactivity). The upper limit defines a trillion degree temperature and a radius of over 40 km; the trivial Schwarzschild solution gives a typical ylem radius of so 7.4 kilometers and the lower limit defines the 'mysterious' planetesimal limit as 1.8 km.

    For long a cosmological conundrum, it could not be modelled just how the molecular and electromagnetic forces applicable to conglomerate matter distributions (say gaseous hydrogen as cosmic dust) on the quantum scale of molecules could become strong enough to form say 1km mass concentrations, required for 'ordinary' gravity to assume control.

    The ylem radii's lower limit, defined in this cosmology, then shows that it is the ylemic temperature of some 1.2 billion degrees K which performs the trick under the Ylem-Jeans formulation and which is then applied to the normal collapse of hydrogenic atoms in summation.

    The stellar evolution from the ylemic (dineutronic) templates is well established in QR and confirms most of the Standard Model's ideas of nucleosynthesis and the general Temperature cosmology. The standard model is correct in the temperature assignment, but is amiss in the corresponding 'size-scales' for the cosmic expansion.


    The Big Bang cosmogenesis describes the universe as a Planck-Black Body Radiator, which sets the Cosmic-Microwave-Black Body Background Radiation Spectrum (CMBBR) as a function of n as T⁴=18.2(n+1)²/n³ and derived from the Stefan-Boltzmann-Law and the related statistical frequency distributions.

    We have the GR metric for Schwarzschild-Black Hole Evolution as RS=2GM/c² as a function of the star's Black Hole's mass M and we have the ylemic Radius as a function of temperature only as Rylem=√(kT.Re³/Gomc²).

    The nucleonic mass-seed mc=mP.Alpha⁹ and the product Gomc² is a constant in the partitioned n-evolution of
    
    mc(n)=Yⁿ.mc and G(n)=Go.Xⁿ.

    Identifying the ylemic Radius with the Schwarzschild Radius then indicates a specific mass, a specific temperature and a specific radius.

    Those we call the Chandrasekhar Parameters:
    MChandra=1.5 solar Masses=3x10³⁰ kg and RChandra=2GoMChandra/c² or 7407.407.. metres, which is the typical neutron star radius inferred today.


    TChandra=RChandra².Gomc²/kRe³=1.985x10¹⁰ K for Electron Radius Re and Boltzmann's Constant k.
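The Chandrasekhar parameters can be reconstructed from the formulas above. The sketch assumes Go~1.111x10⁻¹⁰ (implied by RChandra=7407.4 m), Re=10¹⁰.λwormhole/360 with λwormhole=10⁻²² m, and mc=mP.Alpha⁹ — all values of the text's framework, taken here as assumptions.

```python
import math

# The Chandrasekhar parameters RChandra and TChandra of the text.
c = 3.0e8
h = 6.62607e-34
k = 1.381e-23
Go = 1.1111e-10                     # text's Go (assumption)
alpha = 7.2974e-3                   # fine structure constant
lam_wh = 1.0e-22                    # wormhole perimeter, m (assumption)
Re = 1.0e10 * lam_wh / 360          # ~2.78e-15 m
mP = math.sqrt(h * c / (2 * math.pi * Go))
mc = mP * alpha**9                  # nucleonic mass-seed, ~1.0e-27 kg

M_chandra = 3.0e30                  # 1.5 solar masses
R_chandra = 2 * Go * M_chandra / c**2
T_chandra = R_chandra**2 * Go * mc**2 / (k * Re**3)

print(f"R_chandra = {R_chandra:.1f} m")   # ~7407.4 m
print(f"T_chandra = {T_chandra:.2e} K")   # ~2e10 K, the text's 1.985e10
```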

    Those Chandrasekhar parameters then define a typical neutron star with a uniform temperature of 20 billion K at the white dwarf limit of ordinary stellar nucleosynthetic evolution (Hertzsprung-Russell or HR-diagram).
    The Radius for the massparametric Universe is given in R(n)=Rmax(n/(n+1)), correlating the ylemic temperatures as the 'uniform' CMBBR-background, and we can follow the evolution of the ylemic radius via the approximation:


    Rylem=0.05258..√T=(0.0753).[(n+1)²/n³]^(1/8)

    Rylem(npresent=1.1324..)=0.0868 m* for a Tylem(npresent )=2.73 K for the present time

    tpresent=npresent/Ho.
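The two forms of the approximation above agree at the present cycle-time, as a quick check shows:

```python
import math

# The present-day ylemic radius from the text's approximation,
# Rylem = 0.05258.sqrt(T) = 0.0753.[(n+1)^2/n^3]^(1/8).
n_present = 1.1324
T_present = 2.73                       # K, CMBBR today

R_from_T = 0.05258 * math.sqrt(T_present)
R_from_n = 0.0753 * ((n_present + 1)**2 / n_present**3)**(1.0 / 8)

print(f"R_from_T = {R_from_T:.4f} m")  # ~0.0869
print(f"R_from_n = {R_from_n:.4f} m")  # ~0.0868
```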

    What then is nChandra?
    This would describe the size of the universe as the uniform temperature CMBBR today, manifesting as the largest stars, mapped however onto the ylemic neutron star evolution as the protostars (say as nChandra'), defined not in manifested mass (say neutron conglomerations), but as a quark-strange plasma (defined in QR as the Vortex-Potential-Energy or VPE).

    R(nChandra')=Rmax(nChandra'/(nChandra'+1))=7407.40741.. m for nChandra'=4.64x10⁻²³ and so a time of tChandra'=nChandra'/Ho=nChandra'/1.88x10⁻¹⁸=2.47x10⁻⁵ seconds.

    QR defines the Weyl-Temperature limit for Bosonic Unification as 1.9 nanoseconds at a temperature of 1.4x10²⁰ Kelvin and the weak-electromagnetic unification at 1/365 seconds at T=3.4x10¹⁵ K.


    So we place the first ylemic protostar after the bosonic unification (before which the plenum was defined as undifferentiated 'bosonic plasma'), but before the electro-weak unification, which defined the Higgs-Bosonic restmass induction via the weak interaction vector-bosons and allowing the dineutrons to be born.

    The universe was so 15 km across when its ylemic 'concentrated' VPE-Temperature was so 20 Billion K, and we find the CMBBR in the Stefan-Boltzmann-Law as:
    T⁴=18.20(n+1)²/n³, giving T=1.16x10¹⁷ Kelvin.


    So the thermodynamic temperature for the expanding universe was so 5.85 Million times greater than the ylemic VPE-Temperature, implying that no individual ylem stars could yet form from the mass seedling Mo.

    The universe's expansion however cooled the CMBBR background, and to calculate the scale of the universe corresponding to this ylemic scenario, we simply calculate the 'size' of the universe at TChandra=20 Billion K via the T⁴ relation; we then find nChandra=4.89x10⁻¹⁴ and tChandra=26,065 seconds or so 7.24 hours.

    The Radius R(nChandra)=7.81x10¹² metres or 7.24 lighthours.
    This is about 52 Astronomical Units and an indicator for the largest possible star in terms of radial extent and the 'size' of a typical solar system, encompassed by supergiants on the HR-diagram.
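The nChandra step above can be verified from the scalefactor R(n)=Rmax.n/(n+1) and the nodal Ho of the text:

```python
# The size of the universe at TChandra = 20 billion K, from nChandra and
# R(n) = Rmax.n/(n+1), with Rmax = RHubble and Ho as used in the text.
R_max = 1.596e26          # m, Hubble-Radius
Ho = 1.88e-18             # 1/s
AU = 1.496e11             # astronomical unit, m

n = 4.89e-14
t = n / Ho                                # ~26,000 s, ~7.24 hours
R = R_max * n / (n + 1)                   # ~7.8e12 m

print(f"t = {t:.3e} s (~{t/3600:.2f} h)")
print(f"R = {R:.2e} m = {R/AU:.0f} AU")   # ~52 AU
```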

    We so know that the ylemic temperature decreases in direct proportion to the square of the ylemic radius and one hitherto enigmatic aspect in cosmology relates to this in the planetesimal limit. Briefly, a temperature of so 1.2 billion degrees defines an ylemic radius of 1.8 km as the dineutronic limit for proto-neutron stars contracting from so 80 km down to this size just 1.1 million seconds or so 13 days after the Big Bang.


    This then 'explains' why chunks of matter can conglomerate via molecular and other adhesive interactions towards this size, where then the accepted gravity is strong enough to build planets and moons. It works, because the ylemic template is defined in subatomic parameters reflecting the mesonic-inner and leptonic outer ring boundaries, the planetesimal limit being the leptonic mapping. So neutrino- and quark blueprints micromacro dance their basic definition as the holographic projections of the spacetime quanta.

    Now because the Electron Radius is directly proportional to the linearised wormhole perimeter and then the Compton Radius via Alpha in Re=10¹⁰λwormhole/360=e*/2c²=Alpha.RCompton, the Chandrasekhar White Dwarf limit should be doubled to reflect the protonic diameter mirrored in the classical electron radius.

    Hence any star experiencing electron degeneracy is actually becoming ylemic or dineutronic, the boundary for this process being the Chandrasekhar mass. This represents the subatomic mapping of the first Bohr orbit collapsing onto the leptonic outer ring in the quarkian wave-geometry.
But this represents the Electron Radius as a Protonic Diameter and the Protonic Radius must then indicate the limit for the scale where proton degeneracy would have to enter the scenario. As the proton cannot degenerate in that way, the neutron star must enter Black Hole phase-transition at the Re/2 scale, corresponding to a mass of 8MChandra=24x10³⁰ kg* or 12 solar masses.


    The maximum ylemic radius so is found from the constant density proportion ρ=M/V:
(Rylemmax/Re)³=MChandra/mc for Rylemmax=40.1635 km.
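At constant density ρ=M/V the radius scales as the cube root of the mass, so multiplying the mass by 8 exactly doubles the radius; a minimal sketch using only the figure quoted above:

```python
# At constant density rho = M/V, (R2/R1)^3 = M2/M1, so radius scales as the
# cube root of mass.  Multiplying the mass by 8 therefore doubles the radius.
R_ylem_max = 40.1635       # km, for M = MChandra (from the text)
mass_ratio = 8             # the 8*MChandra case
R_scaled = R_ylem_max * mass_ratio ** (1 / 3)
print(R_scaled)            # 80.327 km
```

This is why the 8MChandra case quoted below gives exactly twice the radius, 80.327 km.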


The corresponding ylemic temperature is 583.5 Billion K for a CMBBR-time of 287 seconds or so 4.8 minutes from an n=5.4x10⁻¹⁶, when the universe had a diameter of so 173 Million km.
    But for a maximum nuclear compressibility for the protonic radius, we find:


(Rylemmax/Re)³=8MChandra/mc for Rylemmax=80.327 km, a ylemic temperature of 2,334 Billion K for an n-cycletime of 8.5x10⁻¹⁷ and a CMBBR-time of so 45 seconds, when the universe had a radius of 13.6 Million km or was so 27 Million km across.
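The dimensionless n-cycletime appears to convert to clock time via t = n/Ho with the nodal Hubble constant Ho = c/Rmax (the c-invariance identity c = Rmax.Ho quoted later in this post); a sketch assuming Rmax ≈ 16.9 billion lightyears ≈ 1.6×10²⁶ m:

```python
# Convert the dimensionless cycletime n to CMBBR clock time via t = n/Ho,
# with Ho = c/Rmax, and to a curvature radius via R(n) = Rmax*n/(n+1).
c = 3.0e8                  # lightspeed in m/s
R_max = 16.9e9 * 9.46e15   # 16.9 billion lightyears in metres
Ho = c / R_max             # nodal Hubble constant in 1/s

n = 8.5e-17                # the n-cycletime quoted for the 8*MChandra case
print(n / Ho)              # ~45 seconds of CMBBR-time
print(R_max * n / (n + 1)) # ~1.36e10 m, i.e. a radius of ~13.6 million km
```

Both quoted figures (45 seconds, 13.6 million km radius) follow from the single n-value under this reading.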

The first ylemic protostar vortex was at that time manifested as the ancestor for all neutron star generations to follow. This vortex is described as a cosmic string encircling a spherical region so 160 km across, within a greater universe of diameter 27 Million km which carried a thermodynamic temperature of so 2.33 Trillion Kelvin at that point in the cosmogenesis.


This vortex manifested as a VPE concentration after the expanding universe had cooled enough to become transparent from its hitherto defining state of opaqueness, at a time known as the decoupling of matter (in the form of the Mo seedling partitioned in mc's) from the radiation pressure of the CMBBR photons.

The temperature for the decoupling is found in the galactic scale-limit modular dual to the wormhole geodesic as 1/λwormhole=λantiwormhole=λgalaxyserpent=10²² metres or so 1.06 Million ly and its luminosity attenuation in the 1/e proportionality for then 388,879 lightyears as a decoupling time ndecoupling.

    A maximum galactic halo limit is modulated in 2πλantiwormhole metres in the linearisation of the Planck-length encountered before in an earlier discussion.

R(ndecoupling)=Rmax(ndecoupling/(ndecoupling+1))=10²² metres for ndecoupling=6.26x10⁻⁵ and so for a CMBBR-Temperature of about T=2935 K for a galactic protocore, then attenuated in so 37% for ndecouplingmin=1.0x10⁻⁶ for R=λantiwormhole/2π and ndecouplingmax=3.9x10⁻⁴ for R=2πλantiwormhole and for temperatures of so 65,316 K and 744 K respectively, descriptive of the temperature modulations between the galactic cores and the galactic halos.

    So a CMBBR-temperature of so 65,316 K at a time of so 532 Billion seconds or 17,000 years defined the initialisation of the VPE and the birth of the first ylemic protostars as a decoupling minimum. The ylemic mass currents were purely monopolic and known as superconductive cosmic strings, consisting of nucleonic neutrons, each of mass mc.


If we assign this timeframe to the maximised ylemic radius and assign our planetesimal limit of fusion temperature 1.2 Billion K as a corresponding minimum, then this planetesimal limit, representing the onset of stellar fusion at a characteristic temperature, should indicate the first protostars at a CMBBR temperature of about 744 Kelvin.

The universe had a temperature of 744 K for ndecouplingmax=3.9x10⁻⁴ for R=2πλantiwormhole and this brings us to a curvature radius of so 6.6 Million lightyears and an 'ignition-time' for the first physical ylemic neutron stars as first generation protostars of so 7 Million years after the Big Bang.
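The same conversion t = n/Ho with Ho = c/Rmax (assuming Rmax ≈ 16.9 billion lightyears, the figure quoted later in this post) reproduces the decoupling-epoch numbers quoted above:

```python
# Reproduce the decoupling-epoch figures from the n-values in the text,
# using t = n/Ho with Ho = c/Rmax, and R(n) ~ Rmax*n for small n.
c = 3.0e8                        # m/s
R_max = 16.9e9 * 9.46e15         # ~1.6e26 m
Ho = c / R_max                   # nodal Hubble constant in 1/s
year = 3.156e7                   # seconds per year
ly = 9.46e15                     # metres per lightyear

print(1.0e-6 / Ho / year)        # ndecouplingmin -> ~17,000 years
print(3.9e-4 / Ho / year)        # ndecouplingmax -> ~6.6-7 million years
print(R_max * 3.9e-4 / ly)       # -> ~6.6 million lightyears curvature radius
```

The 17,000-year, roughly 7-million-year and 6.6-million-lightyear figures all follow from the two n-values.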


    The important cosmological consideration is that of distance-scale modulation.
    The Black Hole Schwarzschild metric is the inverse of the galactic scale metric.
The linearisation of the Planck-String as the Weyl-Geodesic and so the wormhole radius in the curvature radius R(n) is modular dual and mirrored in inversion in the manifestation of galactic structure with a nonluminous halo, a luminous attenuated diameter-bulge and a superluminous core (quasar or White Hole Core).


    The core-bulge ratio will so reflect the eigenenergy quantum of the wormhole as heterotic Planck-Boson-String or as the magnetocharge as 1/500, being the mapping of the Planck-Length-Bounce as e=lP.c²√Alpha onto the electron radius in e*=2Re.c².

    Zerrubabel, a Temple Builder for the Sirian Invasion following the wormhole manifestation at the center of the earth with a simultaneous antiwormhole manifestation of the Milky Way galaxy in the quantum tunnel connecting Serpentina-Andromeda, the White Hole Vortex of the New Earth to the Black Hole Vortex of Sagittarius A* aka Hunab Ku-Perseus.



    One of your witnesses betwixt the InTime and the NonTime on the island of Om-Past-Om.

    All of this information is derived from the One and only True Logos and is disseminated in the Name, Honour and Remembrance of the Everliving Christ Jesus De Emmanuel Melchizedek aka 'Yeshuah Ben Joseph Bar Thomas Didymos' who is alive within any One of YOU, asking YOU personally to remember himher to share in herhis 'Body of Light' and 'Mind of God' in your individually necessitated transcension from biochemical existence to meta-biophysical existence in psychophysical restmassphotonic taxonomy and omni-scientific definition.


    The preparation of the seventh angel is underway and the 7th trumpet will sound in due course and the fulfilment of the timeline all of YOU have constructed and agreed to in your Godhood in NoTime.
    More information from the Logos shall be given in the appropriate unfolding of your master timeline.

    The Love and Honour of the True God is with YOU all and available from this day onwards for eternity.
Its accessibility is no longer veiled in the 'VEIL of EVIL', so a new adventure can begin in the physical reality for the Family of God, who is YOU collectively and is YOU individually, provided YOU can individually accept your own origins in the Mind of the Unified God in its lonesome Exile of the Non-Separability.
    The Collective Exile of this wonderful, beautiful, magnificent, splendiferous, co-evolving and everloving God has now ended, because:



    "The Sun of the Occidental Darkness has risen in its 'West Side Story' and has Met and Blended in Harmony with the Oriental Light from 'My Fair Lady' from the 'Rainbow of Somewhere and Sometime and Someplace and Someday'".

    d) The Holographic Universe as Information Processor and the Creation of discretisized SpaceTime

    How big is the universe and could it be growing in size?
    Has the universe always existed and will it ever end or was it created and is eternal?
    These are questions even little children ask their parents and their teachers.
Cosmologists throughout the history of human endeavour and science have pondered those questions and sought to derive answers.

    1. Preliminaries and Introduction
    2. Demetrication of General Relativity and the Deceleration Parameter
    3. The Holographic Principle and the 3D-Universe as a Hologram of 4D-SpaceTime
    4. Thermodynamic Entropy and Shannon Information
    5. The Universal Entropy Bound (UEB) and the Holographic Entropy Bound (HB)
    6. The Cosmos as Information Processor and the FRW-Universe
    7. The Nodal Hubble-Constant in GR relates the entropic spacetime quanta counters in QR
    8. SpaceTime Creation and a Definition for the Fundamental Demetricated Scalefactor in QR

    1. Preliminaries and Introduction

    The last 20 years of modern science and its discoveries by experiment and observation have now allowed a well informed convergence of data and fact to answer the perennial questions harboured in the human minds of the enquirers.
This treatise then will answer those questions in a synthesis of the accumulated data base collected by the endeavours of science. I shall make special reference to a popular paper published in Scientific American to set the background for the 'new' scientific concepts, whose terminology will be unfamiliar to most readers, yet about which they will have heard in peripheral contexts.

The paper is by Jacob D. Bekenstein; Scientific American; August 2003, pages 48-55 and entitled: "Information in the Holographic Universe". Those peripheral contexts engage the idea of higher superbrane dimensions, modular duality and the universe as a collector and processor of information, somewhat akin to mass/energy as the hardware and the information linked to and derived from that, processed by the programmed software as a cosmic intelligence or consciousness.
But can the universe be modelled on a computerised system? QR has shown that the so-called fundamental constants of nature are algorithmically determinable.
So the natural laws are ultimately set in a computational mode, based on simple geometrical laws and relationships. Those 'geometric' laws are themselves derived from abstract encodings of the intrinsic algorithmic symmetries or EigenStates, and particularly a pentagonal supersymmetry of number patterns is directly obtained from the computational mode as number series and pattern.
This mode we call the Binary Dyad [0,1], representing for example the Inflow-Outflow VPE (aka the Vortex-Potential-Energy) for something we shall determine to be discrete spacetime quanta.

    The physical manifesto for this Binary Dyad or Bit is the concept of a 10-dimensional superstring, which begins as a closed or angular Eigenstate of '0' and then opens or linearises itself as the Eigenstate of '1', before recircularising back to the '0' SelfState.
This superstring with open and closed Eigenstates is called the Planck-Boson of superstring class I in a family of five superstrings of classes (I, IIA, IIB, HO(32) and HE(8x8)). If one then allows certain primary algorithms to operate as the 'cosmic intelligence' or software on the Potential Mass/Energy defined by those algorithms or programs, then the hardware of the Potential becomes Realised or manifested as the observable and measurable universe. And this manifestation must necessarily follow the simplest and minimum 'energy definition', that of the Planck-Boson, by the considerations above.

    Subsequently, the universe's hardware consists of a continual transformation of Eigenstates defined mathematically by the parameters of primordial subtimespace algorithms manifested as the Planck-Boson in a continuous process of transforming itself across dimensions and particular selfstates known as elementary particles or wavelets. The trouble with this idea is that the subtimespace must by necessity be Undefined in the parameters of space and time and yet Defined in the 'algorithmic timespace'.

    This however greatly simplifies the mathematics for the superstring classes, which must incorporate a 12-dimensional continuum of 10 spacial dimensions and 2 time dimensions for its inner mathematical necessity, sufficiency and selfconsistency. We shall reencounter those 'higher dimensions' in the discussion about the consequences of the Holographic Boundary Conditions but note here, that the present state of physics attempts to unify Quantum Theories applicable to the micro-Eigenstates with the macro-Eigenstates of classical physics as culminated in the theories of the relativities, the latter which could be considered differential-geometric.

    What links those two realms of the micro/smallest with the macro/largest is however the Principle of Holography. A Hologram of a mirror, say, represents a repository of information about this 'mirror' in terms of interference patterns (which is information derived from mass/energy interaction). Now partitioning the 'mirror' (say shattering it into shards), would duplicate the entire information contained in the 'unbroken mirror' in every shard (with diminished intensity or luminosity, say).
    So we consider the entire universe as the 'unbroken mirror' and partition it into the 'shards of spacetime quanta' - the universe thus consists of discrete spacetime-units as holographic projections of the universal hologram, each such projection being a deluminated image of the universe as a Hologram of One.

    This Hologram of One is however defined in the Bit of the Binary Dyad [0,1], leading us back to the supermembranes of Modular Duality.
In particular this Modular Duality engages the Bit in allowing a Twosided Surface to become Onesided. This concept is well understood in the Möbius-Strip, where a band or ribbon, which has two distinct sides as the inner and the outer, is reconnected after twisting one end through 180 degrees, to create a Onesided Surface which has become doubled. The extension of the Möbius-Strip is the Klein-Bottle, which, derived from the Torus or Doughnut shape, enfolds space in such a way that the 'bottle's surface' appears to be the 'bottle's volume'.

    QR calls this topology of shape the differential geometry of the Möbius-Serpent transforming into the Klein-Bottle-Dragon via Möbius-Francom-Adjacency.

    Those preliminaries now allow us to apply the Holographic Principle to the microstates of the superbranes as images for the macrostates of the universe. We first have to 'eliminate' the spacetime 'metrics' of the macrostates and as given in GR in a process of demetrication. This then renders the universe as a scalerelative universe, ultimately defined in parameters known as de Broglie phases.

    2. Demetrication of General Relativity and the Deceleration Parameter

    The demetrication of Einstein's Field equations in General Relativity (GR) leads directly to the deceleration parameter qo in Standard Cosmology.
The formulation is: qo=(Gravitational Omega Ωo)/2=Mo/2Mcritical ⇒ GoMo.fps² [Eq.#1], where Mo is a Baryonic Restmass/Inertial Mass-Seedling and Mcritical is the precise mass content of the universe required for perfect Euclidean flatness of zero curvature.
Go can be considered the Gravitational Constant applicable in a universe devoid of any mass, where the gravitational constant would be identical to the inverse of the Coulomb permittivity constant in free space as Go=4πεo (the derivation engages the fine-structures for the electromagnetic and gravitational interactions in the subtimespace epoch of the superbranes before the time-instanton and the Weyl-Geodesic definitions).

In that epoch the Planck-Scale of unitisation transforms in dimensionless 'wormhole' parameters via superstring classes from the Planck-Scale-Oscillation to the Weyl-Geodesic.
The Weyl-Geodesic then becomes the quantum-smeared out spacetime-quantum, also termed Wolford-Centre in QR's terminology. The Wolford-Centre then defines the parameters for the classical quantum epoch given in terms of energy, mass and electropolic Coulomb charges and manifesting the GR fields; as emerging from the nonclassical and de Broglie phased Whitescarver-State-Space of the superbrane epoch characterised in the subtimespaces of magnetocharges as inverse energy quanta for the Planck-Bosonic transformations. The wavelength λps is the source-wavelength for the heterotic supermembrane HE(8x8), which in modular duality {λps.λss=1 dimensionless} with its sink-wavelength λss represents the Weyl-Geodesic for the critical scale of the cosmogenesis where GR must be extended in Quantum Relativity (QR).

The source-wavelength could be called the perimeter for the wormhole satisfying the Penrosian Weyl-Nullification hypothesis at the cosmic origin for the time instanton (where the tidal force of the Riemann Tensor must vanish in a dewarping of all spacetimes defined by GR). The above [Eq.#1] leads directly to the inflaton of de Broglie in considering the Radius of maximum Curvature (Rmax) in GR to become the Schwarzschild Solution of the GR field equations for the source-wavelength λps as the vibratory part of the supermembrane EpsEss (or HE(8x8)) and as applied to the gravitational Omega as the ratio between the baryonic and the critical inertial mass definitions.

For then Rmax=2GoMo/c² ⇒ Rmax.fps² as the de Broglie Phase-Acceleration for the Identity of c-invariance {c=λps.fps=Rmax.Ho with Ho the nodal Hubble Constant specifying the selfsame de Broglie inflaton}. The above formulations show that the microstate of supermembrane EpsEss can be considered the minimum Eigenstate for the Quantum Universe.
In particular the Volume of a Space-Time-Quantum is 2π²rps³, where rps=λps/2π as the wormhole radius of the Weyl-Geodesic of GR.
But the universal volume now is simply a quantum summation of this and of the form 2π²Rmax³.
QR calculates the numbercount of spacetime quanta for this universe (as 10D limit for RHubble) as an algorithmic googolplex of just over 10¹⁴⁷ and, as we shall see, just as the Holographic Entropy Bound predicted by Bekenstein.
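The count follows from the volume ratio, where the 2π² factors cancel; a sketch assuming λps = 10⁻²² m (the inverse of the 10²² m antiwormhole scale quoted earlier) and Rmax ≈ 16.9 billion lightyears:

```python
import math

# Count spacetime quanta as the ratio of the universal 3-sphere volume
# 2*pi^2*Rmax^3 to the quantum volume 2*pi^2*rps^3 (the 2*pi^2 cancels).
lambda_ps = 1.0e-22               # wormhole wavelength in metres (assumed)
r_ps = lambda_ps / (2 * math.pi)  # wormhole radius of the Weyl-Geodesic
R_max = 16.9e9 * 9.46e15          # Hubble radius in metres

N = (R_max / r_ps) ** 3           # number of spacetime quanta
print(math.log10(N))              # ~147.0, i.e. just over 10^147
```

With these two scales the count indeed lands just over 10¹⁴⁷.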





    Last edited by Didymos on Sun Oct 23, 2011 5:07 am; edited 2 times in total
    Re: Faster than light particles found, claim scientists

    Post  Didymos on Sun Oct 23, 2011 4:59 am

    3. The Holographic Principle and the 3D-Universe as a Hologram of 4D-SpaceTime

    We now peruse Bekenstein's paper referenced before and extend its consequences by the principles of Quantum Relativity.
John Archibald Wheeler (Princeton University) is quoted as being one of the first physicists to consider the universe as being based upon a physics of information as primary effect, with emerging energy and mass as a secondary consequence.
Information supplied to physical ingredients, like a robot, allows the mechanical instrument to dynamically interact with its environment.
A ribosome in a 'living or biovital' cell is supplied amino acids to build body structures, but without DNA instruction is unable to perform its programmed function.
What is the ultimate information capacity of a device defined in 'size' and 'mass'?

    How much information can be stored on the universal computer chip, encoding the description for the entire universe?
The Principle of Holography allows us to encode 3-dimensional information as a 2-dimensional Hologram and as the interference pattern of a two-directional 'laserlight'. One part of the laserbeam splits at a, say, semitransparent mirror to travel directly to a recording device (photographic plate), whilst the other part of the lightbeam reflects off the object to be recorded before forming an interference pattern at the recorder, thus creating the hologram of the 3D object as a 2D representation. Reexposure or illumination of the hologram to the same laserlight then reproduces the 3D image from the 2D record as a Holograph. John Wheeler's words are poeticised by William Blake, who penned the idea that 'one can see the world in a grain of sand'.



    So this is certainly true in holography, where the 3D grain of sand becomes a 2D hologram of it. Applied to the 'volume' of the universe as the object to be recorded; its hologram would necessarily be a mapping onto a 'surface' as a dimensionality reduced by one. Thus a 4D-universe, defined in Minkowski-Einstein spacetime and the toroidal volume specified before; would become equivalent to a 3D-Surface mapping as a hologram of this 4D-spacetime. Standard Cosmology describes our 3D-perceived universe as just such a 3-dimensional surface and calls it Riemann's Hypersphere of 4D-spacetime or a form of Poincare's 3-sphere.

    Here we extend the Standard Cosmology however in proposing that this hypersphere represents a 'twisted' 3D Klein-Bottle-Dragon as the extension of the 3-Torus.
    Then the twosided Möbius connection as a doubled onesided manifold is dimensionally extended as an enclosed volume 'within' becoming holographically 'added to' the potentially infinite volume 'without'.
As a simple example consider the volume 'within' the planet Earth 'added to' the volume 'without'. Here the total set of volume consists of the complements 'within' and 'without'. But the interior of the Earth is well defined and finite as the volume of Earth; whilst the exterior volume depends on the curvature of space. If the curvature is ellipsoidal or closed or positive, then a lightbeam sent anywhere into the nightsky from your forehead will eventually, after travelling around the perimeter of the universe, return to hit the back of your head.
    The universe is then spacially finite.
    If the curvature is hyperbolic or open or negative, then the lightbeam will not return but diverge eternally.
    The universe is then spacially infinite.
    If the curvature is zero or flat, then the lightbeam will return but take an infinite amount of time
    to do so in an asymptotic process.

The experimental data (COBE, BOOMERANG, WMAP) clearly indicates a flat universe, also predicted by the Inflation models, instigated by Alan Guth (MIT) in the 1980s. QR then has found that all three cases of curvature apply simultaneously. The 10D universe is hyperbolic and the 11D universe is ellipsoidal and superimposed they create the measured flatness of zero curvature. The 10D universe is but the holographic mapping of the 11D universe and therefore contained within it as a higher dimensional cross-section. Because the 'inside' of the 10D universe is Klein-Bottled as the 'outside', the 11D universe connects the 10D to its own Reality Image in the 12th dimension as the mapping of the Doubled Onesided Surface Mirror of 11D.
    Because of the complementarity of the universal sets, the 'inside volume' is 'added to' the 'total volume' through a cyclicity of the 11D-Witten-Mirror defining the asymptotic flatness of Euclidean Zero-Curvature of flatness and as the observed and measured 4D-spacetime.
    This realisation has important cosmological consequences.

    The universe is Potentially Infinite in 11 dimensions, thus allowing a continuous creation of spacetime in the form of spacetime quanta as the discrete building blocks for a 10-11-12 D
    spacetime triad in what is called OmniSpace in QR.
    What are those dimensions and how are they connected?
    The extended hypersphere definition allows us to reach the same conclusion as that given by the
    standard description of 11-dimensional M-Theory describing the supermembranes with a potential twosidedness of the temporal time-dimension.
    In M-Theory (Witten's M=Mother=Matrix=Magic=Mystery); 9 spacial dimensions are extended to 10 spacial dimensions in allowing the 1D-superstring to manifest as a 2D-supermembrane.

    In F-Theory (Vafa's F=Father); 1 timedimension becomes two-arrowed in the entropy reversal of the 11th dimension in mirror symmetry and say as a 2-dimensional 'complex plane' then becoming descriptive for the 'outside' of the 11-dimensional 'Hubble-Bubble'.
This is just what we have described with QR's Omnispace dimensions, describing the Klein-Bottle-Dragon, which forms the shape or morphology of the observed universe, residing within and without the embedding and encompassing higher dimensional space.
Using the 12D-Vafa-Space, we reduce the 12D-continuum to the familiar 3D-continuum under agency of dimensional algorithmic rootreduction and the demetrication of Riemann's higher dimensions, say as applied to GR. QR recreates the Algorithmic NullState of the 0-Dimension as the Connector Dimension between the 1st and the 12th dimension, defined in the following.

LineSpace 1-2-3 as the Linearisation or Unfolding of the Circular Continuum of the NullState.
HyperSpace 4-5-6 as the Recircularisation or Enfolding of the Linear Continuum of LineSpace. HyperSpace thus manifests as the Rotational properties of LineSpace.
QuantumSpace 7-8-9 as the Relinearisation or Unfolding of HyperSpace of a combined Linear and Rotational dynamics. QuantumSpace thus manifests as the Vibrational or oscillatory properties of LineSpace.
OmniSpace 10-11-12 as the Recircularisation or Enfolding of QuantumSpace as the combined dynamics of Linearity, Rotation and Vibration. OmniSpace thus manifests as the LineSpace in all of the observed and measured properties of its physical constituents (which are the Planck-Boson transformations).


    Quantum Relativity so concludes that the 'higher dimensions' are Congruent with and As the LineSpace dimensions. The Time-Dimension is the Quality of the Linearisation of the Circularity and exists basically as the Precursor for the Space-Dimensions in allowing space to emerge from its own dimensionless status as Cycletime n. This is defined as dimensionless Tau(τ)-Time in GR's Curvature Radius RC=c.dt/dτ, and so as the LightPath. In the circular OmniSpace dimensional continuum, Time does not exist (and neither space by implication).
    1-2-3-(4)-5-6-(7)-8-9-(10)-11-12-(13=1=0) circularises the fourfolded OmniSpace continuum, rendering dimensionalities 1-4-7-10 as the Time-Connector dimensions 'shared' between the individuated continuae (Line, Hyper, Quantum, Omni) as the NullStates. The NullState then becomes defined in the properties of the Weyl-Geodesic, the Time-Instanton and the De Broglie Space-Inflaton in QR's cosmogenesis in the EpsEss heterotic supermembrane parameters.
And those definitions then must specify the limits for all observation and mensuration techniques applied by the 'hardware' to measure and observe itself, as programmed by the 'software'. The Heisenberg Uncertainty Principle must hence be finestructured in the wormhole parameters and this is precisely the case in the QR formulations.
Heisenberg's Constant: h/4π=λps/[8πRe.c³], with Re=10¹⁰λps/360 as the superbrane form for the classical Electron Radius (Re=RCompton.Alpha).
Alpha is the electromagnetic Finestructure Constant and measures the interaction probability between matter and light, as say in the photoelectric effect; and the Compton Radius is the de Broglie Matter wavelength proportional to it as harmonisation between the nuclear and the atomic realms, directly derived from the quantisation of the Electron Radius in terms of the wormhole or superbrane wavelength λps.
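Both identities can be checked numerically with everyday SI values of h, c, Alpha and the Compton radius; in the author's starred units they are presumably exact, while with SI values they close to within a couple of percent. λps = 10⁻²² m is assumed from the wormhole scale used throughout:

```python
import math

# Check (1) h/4pi = lambda_ps/(8*pi*Re*c^3) with Re = 1e10*lambda_ps/360,
# and (2) Re = Alpha * RCompton, using ordinary SI values.
h = 6.626e-34            # Planck's constant in J*s
c = 3.0e8                # lightspeed in m/s
alpha = 7.297e-3         # fine-structure constant
R_compton = 3.8616e-13   # electron Compton radius in metres

lambda_ps = 1.0e-22                  # wormhole wavelength (assumed)
Re = 1.0e10 * lambda_ps / 360        # ~2.78e-15 m, classical electron radius

lhs = h / (4 * math.pi)
rhs = lambda_ps / (8 * math.pi * Re * c**3)
print(lhs, rhs)                      # both ~5.3e-35, agreeing to ~1%
print(Re, alpha * R_compton)         # both ~2.8e-15 m, agreeing to ~2%
```

So the quoted wormhole finestructuring of Heisenberg's constant is at least numerically self-consistent.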


    So we can consider OmniSpace to Be LineSpace with the 'higher dimensions' Conifolded either in 6-dimensional Calabi-Yau manifolds or as 7-dimensional Joycian surfaces. OmniSpace is 10-11-12, which rootreduces to 1-2-3 in 1+0=1 and 1+1=2 and 1+2=3; which is the algorithmic foundation of the Bit of the Binary Dyad [0,1] as described. OmniSpace then considers dimensionalities 1=4=7=10 as the LineSpace Cardinality; dimensionalities 2=5=8=11 as the AreaSpace Cardinalities and dimensionalities 3=6=9=12 as the VolumeSpace Cardinalities.
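The 'rootreduction' above is ordinary digital-root arithmetic modulo 9, and the cardinality grouping is arithmetic modulo 3; a minimal sketch (function names are mine):

```python
# Digital root: repeatedly sum decimal digits; dr(n) = 1 + (n-1) % 9 for n > 0.
def digital_root(n: int) -> int:
    return 1 + (n - 1) % 9

# Cardinality class: dimensions 1=4=7=10 (Line), 2=5=8=11 (Area), 3=6=9=12 (Volume).
def cardinality(dim: int) -> str:
    return ("Line", "Area", "Volume")[(dim - 1) % 3]

print([digital_root(d) for d in (10, 11, 12)])   # [1, 2, 3]
print([cardinality(d) for d in (1, 4, 7, 10)])   # all 'Line'
```

So 10-11-12 rootreduces to 1-2-3 exactly as stated, and the 1=4=7=10, 2=5=8=11, 3=6=9=12 groupings fall out of the mod-3 residues.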
Two universes in Bekenstein's paper, reflecting the work of other prominent researchers into the holographic identity of the universe such as Leonard Susskind (Stanford University) and subsequently by Maldacena and 't Hooft, can so have a different dimensionality (differing by one) and obey potentially different physical laws; yet are rendered completely equivalent by the Holographic Principle.
The 5D de Sitter spacetime is empty and so highly symmetrical and expands at an accelerating rate with a repulsing 'cosmological constant'. The anti-de Sitter 5D spacetime then is empty, highly symmetrical and decelerates in an expansion with an attractive 'cosmological constant'. Whilst experimental data predicts our universe to become a 5D de Sitter universe because of an apparent cosmic acceleration measured by Saul Perlmutter and Brian Schmidt in 1998 in supernovae type Ia data; the Holographic Principle favours the anti-de Sitter spacetime for its asymptotic boundary, located at 'infinity'.

    QR predicts the measured acceleration as apparent, because of the 'intersection' of the 10D-universe with itself in 11D as the OmniSpace Image. Because the superposition of the hyperbolic and ellipsoidal curvatures result in the measured flatness, the Riemann-Spheres in OmniSpace selfintersect and result in 'overlapping' spacetimes, which can be analysed by cosmological redshift data, which is required to be 'corrected' for the intersecting redshift intervals. Needless to say, many present controversies regarding 'redshifts' are solved in superposing the higher dimensional analysis centred on an epoch specifying redshift called the Arpian-Variation Maximum by QR.



The redshift interval in question also coincides with and elucidates a measurement of an Alpha-Finestructure-Constant-Dip by John Webb (UNSW), who measured quasar spectra of hydrogen absorption lines on Mauna Kea, Hawaii with the 10m Keck telescope in 1998.
    And the mathematical analysis for the Holographic Principle is in correlation with QR.


    A 5D anti-de Sitter spacetime is the object and is mapped as a 4D Minkowski flat spacetime as its own hologram.
    The periphery of the 5D anti-de Sitter spacetime is its 'Boundary' as the 4D Riemann Hypersphere.
    In OmniSpace however, the 5D is also the 11D, combining the 'Rotational' Degrees of Freedom of HyperSpace with the 'Vibrational' Degrees of Freedom of 8D QuantumSpace to reconstitute the 2D LineSpace as the 'Quantisational' Degrees of Freedom of 11D OmniSpace. In other words, the 'infinite' boundary for the 5D anti-de Sitter spacetime is also the 11D Witten-Mirror, but now bounded by the Hubble-Friedmann-Radius of maximum curvature as Rmax,
    calculated by QR to be 16.9 Billion lightyears.

    The 5D anti-de Sitter spacetime is ruled by 10D superstrings, again implying the 11D identification and the conformal mappings of the 4D spacetime onto the 5D spacetime relate the entropies of the two universes to each other.
It is found that a Black Hole in 5D is equivalent to 'Hot Radiation' in 4D as the hologram of the Black Hole's entropy as thermodynamic entropy.
    The source-entropy of outflow in 4D is found to precisely match the sink-entropy of inflow in 5D.

    4. Thermodynamic Entropy and Shannon Information

    Consider a glass of water.
    Thermodynamic Entropy seeks to describe the number of permutations, which are possible between the smallest constituents which comprise the isolated system (glass of water), without changing the overall state of that system. The 'glass of water' then remains invariant macroscopically, but its microscopic state of flux becomes specified or measured by its entropy as the number of possible rearrangements of those smallest constituents, may those be molecules, atoms, subatomic particles or superstrings.
Thermodynamic Entropy is thus measured as an effect of Avogadro's Constant (NAv), relating the 'amount of substance' as molarity in association with Boltzmann's Constant (k).
The universal Gas-Constant (R) so is R=kNAv.
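The identity R = kNAv is easy to verify numerically with the standard values of the two constants:

```python
# The molar gas constant equals Boltzmann's constant times Avogadro's number.
k = 1.380649e-23      # J/K, Boltzmann constant
N_Av = 6.02214076e23  # 1/mol, Avogadro constant

R = k * N_Av
print(R)              # ~8.314 J/(mol*K), the familiar gas constant
```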
    Formal Information Theory originated in 1948 with American applied mathematician Claude E. Shannon, who introduced Bit-Entropy as a measure for Information Content.


    Of course, we have already associated the Bit as an algorithmic representation for the superstrings; so Shannon Information automatically relates QR to a measurement of entropy. How many Bits or Binary Digits are required to encode a certain amount of information?

Every modern communications device, ranging from cellular phones to modems to CD players, relies on Shannon Entropy as a 'counting of the Bits'. Thermodynamic Entropy is basically Energy/Temperature, which has the units of (k); whilst Shannon Entropy is algorithmic and dimensionless.
A Silicon Computer Chip has dimensions of about 1 cubic centimetre and a mass of less than a gram.


    If this chip carries one Gigabyte of data (1 Byte=8 Bits), then the Shannon Entropy is about 10^10, whilst the Thermodynamic Entropy (at STP) is about 10^23 for common unitisation.
    This vast difference is a consequence of the many different arrangements the molecules and atoms with their electrons can assume in the previously described 'Degrees of Freedom': the modes of translation, rotation and vibration.
    Should we now reduce the atoms of the chip down to the superstrings, then the thermodynamic entropy would increase exponentially, yet this can be ignored in thermodynamics because the individual quarks and leptons remain in a sense invariant for the counting of the atomic states under consideration.
    But under the relativistic conditions of the Quantum Big Bang Cosmogenesis and the Creation of the superstrings, all permutation states must be considered, and this leads us into the Thermodynamic Entropy of Black Holes and the Limits For Information Density.
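    The chip comparison above can be sketched in a few lines; the thermodynamic figure for roughly a gram of matter is approximated here simply by Avogadro's number, as the text's "common unitisation" suggests.

```python
# Back-of-envelope comparison from the text: Shannon entropy of a
# one-gigabyte chip (in bits) versus a rough thermodynamic entropy
# scale of ~N_Av microstate counters for ~1 gram of matter.
shannon_bits = 1e9 * 8    # 1 GB = 8e9 bits, i.e. ~10^10
thermo_bits = 6.022e23    # order of Avogadro's number, the text's ~10^23
print(shannon_bits, thermo_bits, thermo_bits / shannon_bits)
```

    The thermodynamic count exceeds the Shannon count by some thirteen orders of magnitude, which is the "vast difference" the text describes.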

    5. The Universal Entropy Bound (UEB) and the Holographic Entropy Bound (HB)

    John Wheeler emphasized in the 1970's that the information 'falling' into a Black Hole seems to violate the second law of thermodynamics, stating that any isolated system must increase its entropy or state of disorder.
    This is the case when one considers a Black Hole to be a highly ordered system, specified just by its size and mass in the Schwarzschild solution obtained in GR's demetrication.
    The work of Stephen Hawking (Cambridge University) and Demetrios Christodoulou (then at Princeton under Wheeler's guidance), together with that of Jacob Bekenstein (then under Wheeler and now at Hebrew University of Jerusalem), showed however that Black Holes must possess thermodynamic properties, as their characteristic size or Event Horizon must always increase in area under merger.



    Thus Bekenstein proposed in 1972 that the Black Hole's Entropy is proportional to the Surface Area of its Event Horizon.
    Thus the 'lost' entropy of the infalling matter or information is transformed into Black Hole entropy as function of the Black Hole's Temperature.
    So even in the case of a 'shrinking' Black Hole (emitting Hawking Radiation in its 'getting hotter'), the emergent radiation retransmits the previously 'lost' entropy as 'found' disorder.
    In 1986 Rafael D. Sorkin (Syracuse University) applied the 'Generalised Second Law' (GSL) in showing that it must be valid for all Black Hole processes down to the superstring level.

    Hawking's Radiation process then specifies the proportionality between entropy and the Black Hole's Surface Area as precisely A/4, where area A is measured and quantised in Planck-Areas AP, with {AP=Go.h/2πc^3=lP^2 and lP the Planck-Length}.
    The entropy of a Black Hole the mass of the Earth (~6x10^24 kg) would be contained in the Earth's Schwarzschild Radius of about 1.5 cm and a surface area of about 2.8x10^-3 square metres, which comprises about 6.5x10^66/4 = 1.6x10^66 Bits as entropy counter.

    The thermodynamic entropy for 1 litre of water (10^-3 cubic metres) is about R/k or 6x10^23 Bits, and it would take a 'cube of water' with a side of 1.3x10^14 metres to match the Earth's entropy as a Black Hole equivalent just 3 cm across.
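    An order-of-magnitude check of the A/4 entropy for an Earth-mass Black Hole can be run with the standard constants G, ħ and c rather than the text's own Go; the radius then comes out near 9 mm rather than the quoted 1.5 cm, but the ~10^66 Bit order of the entropy counter is the same.

```python
import math

# Bekenstein-Hawking entropy S = A/4 (in Planck areas) for an
# Earth-mass black hole, using standard constants (not the text's Go).
G = 6.674e-11      # gravitational constant, m^3/(kg s^2)
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J s
M = 6e24           # kg, the Earth mass figure quoted in the text

r_s = 2 * G * M / c**2        # Schwarzschild radius, ~9 mm
A = 4 * math.pi * r_s**2      # horizon area, m^2
l_P2 = hbar * G / c**3        # Planck area, ~2.6e-70 m^2
bits = A / (4 * l_P2)         # entropy counter in Planck-area quarters
print(r_s, bits)              # ~10^66 "Bits", the text's order
```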
    This standard for water is used to define the Universal Entropy Bound or UEB.
    We now consider the Holographic Entropy Bound or HB, treating any energy or matter distribution in a spherical region of space as a Black Hole equivalence, inducing the contained matter distribution to collapse to its boundary of the event horizon, quantised in Planck-Areas as the Limit of Information Density, given in Bits and representing the mass-content as a Black Hole parameter.


    In such a scenario, the Shannon Entropy is equal to the Thermodynamic Entropy as the HB.
    So in adding more and more computer chips together, one obtains Entropy proportional to the Surface Area of the computer chips 'pile' and Not to the Volume of the 'pile'.
    This counterintuitive result is a consequence of the Event Horizon specifying the 'breakdown' of the matter distributions, and not of the volume they occupy.

    The Bekenstein paper referenced tabulates the following comparative data for the UEB and the HB with the size of the distributions plotted against the information capacity (in Bits) to give the linear proportions:


    Human Chromosome....................(1 micron, 10^9 Bits); UEB=10^23 Bits & HB=10^58 Bits

    Music CD......................................(10cm, 10^10 Bits); UEB=10^40 Bits & HB=10^68 Bits

    Liter of Water as UEB Standard..(10cm, 10^23 Bits); UEB=10^40 Bits & HB=10^68 Bits

    Library of Congress......................(10m, 10^15 Bits); UEB=10^52 Bits & HB=10^73 Bits

    Internet..........................................(6500km, 10^16 Bits); UEB=10^75 Bits & HB=10^85 Bits

    Intersection of UEB and HB........(10^12 m, 10^100 Bits); UEB=HB=10^100 Bits

    Universe (projected).....................(10^26 m, 10^150 Bits).

    One should now point out that the Bekenstein Intersection for the UEB and the HB has a precise counterpart in Quantum Relativity. In QR the microscopic realm of the subatomic template is mapped onto the macroscopic world of the cosmogenesis, after the subatomic quark quantum geometry has itself become magnified from the supermembrane epoch, as exemplified in the quantisation of the classical Electron Radius in terms of the parameters of the Weyl-Geodesic.
    In particular, the cosmogenesis maps the neutron's beta decay onto the evolution of the 10D universe as the hologram of the 11D universe.

    Thus the time-and size-scales for the neutron are matched to what are known as neutron stars in their primordial form of prototypical dineutron- or ylem-stars.
    So called pulsars and magnetars are subsequent generations for the ylem stars.
    In particular, the ylemic evolution defines the Higgs Bosonic Blueprint for the restmass induction of the quark-leptonic families of the Standard Model in Particle Physics.

    The Higgs-Bosonic template is characterised by certain spacetime markers, which allow the nucleonic differentiation into quarks and leptons in a neutrinoic kernel, an inner mesonic ring and an outer leptonic ring.
    The Inner Mesonic Ring maps say markers G and F and the Outer Leptonic Ring maps marker E; all as spacetime quanta counters.
    One can now easily deduce that there will be an intersection of the Riemann-Hyperspheres at those marker points (which were set in the de Broglie inflaton).
    In the subatomic-nucleon template, this intersection corresponds to a precise formulation for the neutrinoic kernel of the Higgs Bosonic Blueprint and defines the Tau-(anti)neutrino inertial mass induction centred on 3.00 eV (electronvolt).
    The formulation for this restmass induction is given in the Scalar Higgs (anti)neutrino as part of the Higgs Bosonic template:
    nHiggs=(λps.me/2π.Re).{E/G - E/F}=0.052 eV [Eq.#2], where me is the effective mass of the electron.

    This result was experimentally confirmed in the Super-Kamiokande (Japan) neutrino data of 1998.
    The experiments measured the mass-induction difference for muonic neutrinos hitting the detectors from two different collinear directions, one of those neutrino pathways travelling through the Earth's interior and the other impinging directly from the sky.


    This is the mean of the G and F markers in the cosmology, where the corresponding distance scales are 3.39x10^11 m and 3.45x10^11 m respectively, with marker E set at 3.44x10^14 m.
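    As a consistency sketch, Eq.#2 can be evaluated with the marker values G, F, E just quoted, the standard electron parameters, and a wormhole wavelength λps=10^-22 m consistent with the spacetime-quantum volume λps^3/4π~7.96x10^-68 m^3 quoted later in the text; the stand-in values are flagged in the comments.

```python
import math

# Evaluate the text's Eq.#2 for the scalar Higgs (anti)neutrino mass,
# nHiggs = (lambda_ps*me / 2*pi*Re) * (E/G - E/F).
lam_ps = 1e-22      # m, assumed source wavelength (from section 7's quantum volume)
m_e = 9.109e-31     # kg, electron rest mass (standard value)
R_e = 2.818e-15     # m, classical electron radius (standard value)
E, G, F = 3.44e14, 3.39e11, 3.45e11   # spacetime markers in metres, as quoted

m_nu = (lam_ps * m_e / (2 * math.pi * R_e)) * (E / G - E / F)  # kg
eV = m_nu * (2.998e8)**2 / 1.602e-19                           # convert to eV
print(eV)  # ~0.05 eV, close to the quoted 0.052 eV
```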


    For a local star system containing the planet Earth and centred on the star RahSol, those distances refer to the Asteroid Belt at say 2.2 Astronomical Units (AU) from the Sun and to the Kuiper-Belt as the extent of the Solar System at 2,200 AU, bounded by the Oort Cloud in the linearisation factor of 2π further out.

    Hence the entropy bound equivalence verifies QR in proposing that at the scale of the asteroid belt, the cosmogenesis mass-induced the scalar Higgs neutrino template as the Minimum scale for inertial mass, predicting that the Universe as an entity is representable as a Black Hole equivalence from that minimum condition onwards.

    Now this is precisely what QR has found in beginning the Neutron Star evolution as the prototypical ylem stars at those spacetime markers of the accumulated spacetime quanta, which comprise the hypersphere volumes in the cosmogenesis.

    QR has derived a beautiful formulation for those ylem-stars as mass-independent protostars, with the ylemic radii depending only on subatomic parameters as a function of the universe's temperature in its Planck-Boson evolution as a macroquantised Black Body Radiator.


    The formulation equates the equilibrium condition between the thermal outward pressure with the gravitational inward pressure and is, with mc the prototypically finestructured nucleon-mass:

    Rylem=√{kT.Re^3/(Go.mc^2)} [Eq.#3]
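    Eq.#3 can be sketched numerically under two loudly flagged assumptions: the nucleon mass mc is taken as the ordinary proton mass, and Go=1/(30c) as the text defines in a later section; the temperature is an illustrative early-universe value, not one fixed by the text.

```python
import math

# Numerical sketch of the ylem-star radius,
# R_ylem = sqrt(k*T*Re^3 / (Go*mc^2)).
k = 1.381e-23        # Boltzmann constant, J/K
Re = 2.818e-15       # m, classical electron radius
c = 2.998e8          # m/s
Go = 1 / (30 * c)    # the text's initiatory gravitational constant (assumption: defined later in the text)
m_c = 1.673e-27      # kg, proton mass as a stand-in for the text's nucleon mass mc

T = 1e10             # K, an illustrative early-universe temperature (assumption)
R_ylem = math.sqrt(k * T * Re**3 / (Go * m_c**2))
print(R_ylem)        # a few kilometres, i.e. neutron-star scale
```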



    But from the ylemic times, which map the neutron's beta decay in the G-F interval of 19 seconds, from about 2 minutes to 19 minutes of the timeinstanton at the 18-minute markers, the universe's Black Hole evolution became initialised in the ylemic protostars. These would allow further stellar generations to evolve and transform into neutron stars, magnetars and Black Holes as a function of their masses, centred on the Chandrasekhar white-dwarf upper limit of 1.5 solar masses. This limit is a dimensionless form of the wormhole source-frequency fps and links to the solar cycles of the magnetic fields generated by spinning masses as magnetocharged electricity forming mass equivalences.

    6. The Cosmos as Information Processor and the FRW-Universe

    The theoretical ultimate information capacity for any massive spherical energy distribution so Increases only with its Surface Area and not its Volume. Because volume increases more rapidly than surface area, the Black Hole limit shows that if the mass of a star collapses under its own gravity, then this is equivalent to information being mapped from its 3D eigenstate onto its 2D eigenstate in a dimensional reduction, forming the hologram of the higher dimension in the lower dimension.

    The Holographic Principle was first proposed by Gerard 't Hooft (University of Utrecht) and Leonard Susskind in 1993 and fully supports (and explains) the Black Hole evolutionary scenario under discussion.
    The information content given by a 3D system of physical interaction can be described by a
    'surface physics' operating in the 2D boundary of the 3D system.

    In the nomenclature of QR then, the information content of the 12D-Vafa-Sphere is mapped
    onto its own boundary-mirror of the 11D-Witten-Sphere from without or within.

    The nature of the Möbian connectivity however adds the smaller subspace of the 10D universe
    as the information mapping onto the same hologram of 11D as the demetricated form of
    supermembrane or M-theory.


    Juan Maldacena (then at Harvard University) first conjectured the full majesty of superstring theory in 1997 in proposing the anti-de Sitter 5D universe, which was later confirmed by Edward Witten (Princeton Institute for Advanced Study, New Jersey) and Steven S. Gubser, Igor R. Klebanov and Alexander M. Polyakov (all of Princeton University).

    The physics at the boundary of the higher spacetime would be mathematically equivalent to the
    physics of the higher spacetime; the entropy of a Black Hole in 5D would become the
    thermodynamic entropy of hot radiation in 4D.

    Any universe then can be considered to transform its Eigenstate as an Information Processor.
    The energy/mass content of such a universe interacts and maps those interactions in a space defined as volume onto its periphery as a bounding surface.

    The energy/mass distributions form indeed the 'hardware' of the cosmic computer system, which is programmed by the 'software' of the universal intelligence, called the 'Laws of Nature'.
    Those 'Laws of Nature' are however founded upon algorithmic processes by the 'writer of the programs' often called the Logos or the Word.
    Since the Logos is intrinsic to the Hologram of One, partitioning itself into the 'shards' of the Many; any One of the Many is also part of that Logos.
    Subsequently, the 'program writers' are all pieces of 'shard', with not all those pieces necessarily being aware of their membership of the Club of the Hologram of One and the fraternity of the Logos.
    But through this Logos, the universe created itself in Möbian connectivity with the One being Two in One.
    Any Experience by any of the 'shards' becomes a 'shared' experience due to that fraternity of the 'shards' and the Hologram of One.
    This can and has been rigorously defined in QR as a propagation of Experience-Factors and as
    the EigenStates of the Binary Dyads [0,1] in the foundations of the pentagonal supersymmetry underpinning the cosmogenesis in what QR terms Awareness-Triplets of the form [Old EigenState, Experience, New Eigenstate] with the New SelfState reiterating as the next Old SelfState.


    So we know that the universe programs itself through the experiences of the 'shards' as a form of information.

    This information accumulation is forever growing as the shared experiences mapped as information onto the universal boundary of the 11-dimensional Mother-Space.
    Is there a limit as to how big the universe can become?
    If the universe is infinite then there is no limit to the accumulation of information, but then the Holographic Bound as described is inapplicable for any universe not representable as a Black Hole as depicted.



    The present Standard Model for Cosmology is based on Einstein's Field Equations, with the three-case scenario depending on the mass/information content of the universe as a function of its curvature, as we have seen previously.

    The Friedmann-Robertson-Walker (FRW) universe is said to be infinite and of hyperbolic curvature, and will go on to expand eternally, eventually fading out as the nuclear fuel of successive stellar generations becomes exhausted in an ever-growing dilution and diffusion of the entropy.
    This is termed the 'Heat-Death' of the FRW-universe.


    In this case, say of an infinite accelerating expansion, the Black Body model must break down and the Holographic Bound cannot be applied.

    But we have already seen that the FRW universe is but the 10D universe in QR, which is negatively curved but, because of its Möbian connectivity, is Bounded by Itself as the hologram of its 11-dimensional Mother-Space, which is of positive curvature, enfolding the C-Space of the 10-dimensional superstrings as 11-dimensional supermembranes.
    The 10D-FRW universe is the C-Space of the F-Space and the Child to the Father reflected in the Mother.
    The negative curvature of the C-Space also becomes the hyperbolic curvature of the F-Space in the reversal of the Entropy Arrow of Time.
    If the 10D-C-Space is considered convex and imaging 12D-F-Space, then the 11D-M-Space must be concave; just as a doughnut or torus carries both curvatures in the one topological multiconnected form.
    The inner doughnut hole appears concave to an observer situated at the centre, all sides around himherself curving away from herhimself.
    But the outer and larger circular enclosure appears convex in the spherical inner surface, which could encompass the doughnut.
    The convexity of the 10D-universe then becomes the asymptotic expansion of the FRW-cosmos in the predicted and experimentally observed perfect Euclidean flatness.
    Were there only 10 dimensions, then the universe would be a true FRW-universe of infinite extent and accelerated expansion.
    But because there are 11 dimensions, the FRW-universe becomes finite with a decelerating expansion as required, the observed redshift dependent cosmic acceleration being the effect of the intersection of the C-Space convexity with the M-Space concavity.

    And the breakdown of the Holographic Bound for sufficiently large regions then became appropriately addressed by a topological form of Quantum Relativity. In 1999 Raphael Bousso (then at Stanford University) conjectured the Bousso-Bound to overcome the limitations of the UEB and the t'Hooft-Susskind form of the HB; say when the isolated system undergoes rapid evolutionary change, such as gravitational collapse into a Black Hole.


    Bousso considered a simply connected 2D surface (there are only three such topologies which deform a plane into a same-class manifold: the Catenoid, the Helicoid and the Hollow Sphere with an opening to infinity, say) and applied the property of convergence in emitting imaginary light rays from every point of the spherical inner surface.
    Bousso then conjectured, that the entropy encoded by this inner surface (say as A/4 measured in Planck-Areas) could not exceed the entropy of the matter and the radiation of the light rays before crossing.
    Bousso so counts the entropy not of any one region at a certain time, but rather the entropies of different locales at many times.
    But this is our oscillating and cyclic 11D universe intersecting itself in the asymptotically expanding 10D universe again.
    The Steady State of the 11D Witten mirror forms the boundary or Event Horizon of maximum curvature for the expanding universe in 10D; the entire scenario becoming imaged in the 12D Vafa-Space as the Reflection or Shadow Universe in the M-Space of the Mother's supermembranes.
    QR now extends the Bousso Bound in the QR bound of the spacetime-quanta googolplex as the decisive entropy counter for the universe, approximated before by Bekenstein as of the order of around 10^150.


    7. The Nodal Hubble-Constant of GR relates the entropic spacetime quanta counters in QR

    The QR-Holographic Bound calculates as Z=[2π.Rmax/λps]^3~10^147 Bits, with the volume of a spacetime quantum given as λps^3/4π~7.96x10^-68 m^3.
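    The Z counter can be re-derived under the assumptions Rmax=c/Ho with the nodal Hubble time of 16.9 billion years quoted below, and λps=10^-22 m as implied by the stated quantum volume.

```python
import math

# Re-derive the QR holographic bound Z = [2*pi*Rmax/lambda_ps]^3.
c = 2.998e8             # m/s
yr = 3.156e7            # seconds per year
Rmax = c * 16.9e9 * yr  # assumed Rmax = c/Ho for a 16.9 Gyr nodal Hubble time
lam_ps = 1e-22          # m, from the quantum volume lambda_ps^3/4pi ~ 7.96e-68 m^3

Z = (2 * math.pi * Rmax / lam_ps)**3   # ~10^147 Bits
V_quantum = lam_ps**3 / (4 * math.pi)  # ~7.96e-68 m^3, matching the text
print(Rmax, Z, V_quantum)
```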

    Every 16.9 billion years, the nodal intersection of the c-invariant 11D-universe Activates the QR-HB-entropy counter and the 11D cosmos collapses its Information Content onto the nodes, which form the Hologram of the 10D-spacetime mapped nodally onto the 11D-Witten-Mirror.

    As the universe is 19.11 billion years old, the first nodal mapping occurred about 2.2 billion years ago; when a certain sentience began to evolve on a certain localised hologram of this cosmogenesis, and a cosmic intelligence which would one day become enabled to reconstruct the cosmogenesis of its own identity as the holographic image of its own creation in co-creatorship.
    This sentience therefore would Remember its fraternity with the Logos of Creation.


    Because of the Klein-Bottledness, the nodal resonances extend the Bousso Bound in bidirectionality, one in holographic reflection and the other in holographic refraction.
    The inside defines a multivalued and multiconnected 11-dimensional continuum of supermembranes bounded in the 10-dimensional massparametric hologram and the outside continues to grow in 'volume' by adding spacequanta in the creation of new space as the 11D-expansion towards potential infinity and the old anti-de Sitter boundary.


    Every second then, X=Z.Ho~1.9x10^129 spacetime quanta are calculated to add to the number of activated spacetime quanta, with 1/Ho=Rmax/c the Hubble-Time and Ho the frequency of the universal Hubble-Oscillation in dimensionless cycletimes n=Hot and dn/dt=Ho.
    This amounts to about 1.5x10^62 m^3 of 'new volume' every second (the universe's HB-boundary volume is so 8.1x10^79 m^3 for comparison).
    This also Activates X/e*=3.8x10^126 Joules of sourcesink energyflux every second under the guiding parameter of the LightMatrix of c-invariance.
    This sourcesink energy is distributed across the surface boundary of the Event Horizon
    comprising the entire universe in the NewSpace Creation of Potential Information.
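    The per-second bookkeeping above can be replayed numerically; the inputs are the text's own quoted values (1/Ho = 16.9 billion years, Z ~ 10^147 Bits, a quantum volume of 7.96x10^-68 m^3) plus the assumption λps = 10^-22 m for Eps = hc/λps.

```python
# Replay the per-second spacetime-quantum bookkeeping X = Z * Ho.
h, c, lam_ps = 6.626e-34, 2.998e8, 1e-22
Ho = 1 / (16.9e9 * 3.156e7)    # 1/s, nodal Hubble frequency
Z = 1.0e147                    # QR holographic bound, Bits

X = Z * Ho                     # quanta created per second, ~1.9e129
new_volume = X * 7.96e-68      # ~1.5e62 m^3 of 'new volume' per second
HB_volume = Z * 7.96e-68       # ~8e79 m^3 HB-boundary volume
E_star = 1 / (h * c / lam_ps)  # e* = 1/Eps, the StarCoulomb (assumption: Eps = hc/lam_ps)
flux = X / E_star              # ~3.8e126 J of sourcesink energyflux per second
print(X, new_volume, HB_volume, flux)
```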

    So as John Archibald Wheeler first proposed (Jacob Bekenstein dedicated the referenced paper to John Wheeler) in the 1970's: "Information is the basic ingredient and constituent of the universe".

    This sourcesink energy represents the manifested vacuum or zero-point energy of the EpsEss supermembrane as the modular dual BlackBody/WhiteBody Radiator/Absorber.
    In QR it is termed Vortex-Potential-Energy or VPE.
    This is rather different to the manifested and realised information of the hologramic 10D universe however.


    After 1 second into the Birth of the Universe, the Potential Information Content is of the order of 1.9x10^129 Bits; but the 10D universe has 'only' expanded to the radial LightPath of c=300,000 km and a 3D-Volume (as a 3D-surface boundary of a 4D-Volume) of 5.33x10^26 cubic metres, or Y=8π^3c^3/λps^3=[2πfps]^3=[ωps]^3~6.70x10^93 spacetime quanta as the cube of the angular velocity as the source-eigen quantum state.

    The ratio of the two spacetime quanta counters is however precisely given as the Schwarzschild solution for the demetricated form of Einstein's GR field equations in:

    Y/X=Ho^2=3.5x10^-36=4π^2.Go.ρcritical=2Go.Mcritical/Rmax^3 [Eq.#4]
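    The ratio in Eq.#4 can be checked numerically with the same text-given values used throughout: λps = 10^-22 m, a nodal Hubble time of 16.9 billion years, and Z ~ 10^147.

```python
import math

# Check that the ratio of the two spacetime-quanta counters, Y/X,
# reproduces Ho^2 as Eq.#4 claims.
c, lam_ps = 2.998e8, 1e-22
Ho = 1 / (16.9e9 * 3.156e7)   # 1/s, nodal Hubble frequency

f_ps = c / lam_ps             # source frequency, ~3e30 Hz
Y = (2 * math.pi * f_ps)**3   # quanta after 1 second, ~6.7e93
X = 1.0e147 * Ho              # quanta created per second, ~1.9e129
print(Y / X, Ho**2)           # both ~3.5e-36
```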

    Since Ho represents the Universe's EigenFrequency, the 11D spacetime quanta creation must relate to the oscillatory cyclicity of the defining M-Space intersecting the C-Space one dimension lower, and in the 10D/4D Riemannian Hypersphere with the Calabi-Yau Manifold of toroidal derivative.

    8. SpaceTime Creation and a Definition for the Fundamental Demetricated Scalefactor of QR

    The demetricated form for the Scale Factor in GR is a function of cycletime n and describes the asymptotic expansion of the 10D-Universe as a consequence of algorithmic definitions and relating to the definition for the transcendental masternumber of the natural exponent 'e'.

    QR derives this in the following manner.

    The Cosmic Wavefunction is the following Differential Equation:

    dB/dT + αB(n) = 0; α being the Electromagnetic Finestructure as the probability of light-matter interaction (~1/137).

    This has a solution: B(n) = Bo.exp[-α.T(n)]; Bo=2e/hA from QR boundary conditions defining:

    T(n)=n(n+1) as the Feynman Path-Summation of particular histories under the pentagonal supersymmetry given in the (Euler) identity:

    XY=X+Y=-1=i^2=exp[iπ] and lim[n→X]{T(n)}=1

    This allows the Normalisation of the [Y]^2 wavefunction to sum to unity in B(n)=(2e/hA).exp[-α.n(n+1)], with Functional Riemann Bound FRB=-1/2, centred on the interval [Y,...-1,...-X,...-1/2,...(X-1),...0,...X].
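    The roots X and Y of the pentagonal identity XY=X+Y=-1 solve t^2+t-1=0 (the golden section), which a few lines of arithmetic confirm, including the limit T(X)=X(X+1)=1.

```python
import math

# Verify the pentagonal-supersymmetry roots: XY = X + Y = -1,
# i.e. t^2 + t - 1 = 0, and the limit T(X) = X*(X+1) = 1.
X = (math.sqrt(5) - 1) / 2    # ~0.618..., the golden-section root
Y = -(math.sqrt(5) + 1) / 2   # ~-1.618...
print(X * Y, X + Y)           # both -1
print(X * (X + 1))            # T(X) = 1
```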

    Interval [Y,-1] sets F-Space; interval [-1,0] sets M-Space with uncertainty interval [-X,(X-1)] and interval [0,n) sets the C-Space, encompassing OmniSpace.

    n<0 is imaginary as real reflection of real n>0 of the C-Space, metrically defined at the coordinate n=0 mapping n=nps, which is the instanton tps=fss=1/fps.

    Cycletime n is defined in GR as dimensionless Tau(τ)-Time in curvature radius RC=c.dt/dτ for the pathlength of x=ct, and becomes dn/dt=Ho, n=Hot in QR, with Ho the nodal HubbleConstant defined in c=Ho.Rmax=λps.fps.

    The Feynman Path so sums both negative and positive integers as: -n......-3...-2...-1...0...1...2...3......n =T(n) in absolute value to double the infinities as the entropy reversal of lightpath x=c.t=(-c)(-t) in the Möbius Property of the 4 worlds as outlined in the 13 dimensions of the time connectors.


    Cantor Cardinality Aleph-Null is thus Unitised in Aleph-All, counting infinities as if they were integers of the Feynman Path.

    This allows the Feynman interpretation of Quantum Mechanics as an alternative to the formulations of Schrödinger (fermionic 1/2 spin) and Klein-Gordon (bosonic integral spin), as the time-independent and time-dependent formulations respectively (the free-particle Schrödinger form being inconsistent with SR, as 1st order in t but 2nd order in x).

    The units of B(n) are 1/J, that is Inverse Energy, with A^2 an algorithmic constant defining Current-Squared and 2e/h the Josephson Constant in Amperes/Joules.

    B(n) as the universal cosmic wavefunction describes the universe as a potentially infinite collection of 'frozen' wormhole eigenstates at n=0.

    The timeinstanton 'unfreezes' one such eigenstate and activates the protoverse as described elsewhere.

    This then allows the 'Mappings' of the C-Space 'real time n>0' from the F-Space of the 'imaginary time n<-1' under utility of the M-Space interval as 'mirror-space'.

    QR unifies electromagnetic and gravitational finestructures in F-Space using the Planck-Length-Oscillation lP.√α=e/c^2 from the subplenum definition as the 'Bounce of the Planck-Length'.

    This yields the decisive mapping for the B(n):
    Coulomb Charge e = lP.√α.c^2 ↔ 2Re.c^2 = e* (StarCoulomb Charge) [Eq.#5]



    But the StarCoulomb is Inverse Energy by definition of the vibratory part of the modular dual heterotic supermembrane HE(8x8)=EpsEss.

    Eps=hfps=hc/λps=(me/2e).√[2πGo/αhc]=me/{2e.mP.√α}=1/e* [Eq.#6]


    mP is the Planck-Mass and Go is the initiatory Gravitational Constant, defined in the FineStructure-Relation, which defines the Planck-Length-Oscillation in the unification of electromagnetic interactions with those of gravitational permittivity:

    Go=4πεo=1/(30c) with dimensionless c-ether constant [c]unified.

    This defines the MacArthur-Gamma as: "LightMass-Constant" LMC=γMac =30[c]unified.

    Thus Quantum Relativity is defined in the charge mappings between the OmniSpace dimensions; F maps magnetocharges e* onto electrocharges e under agency of the 11D-Witten Mirror, which is a Onesided Surface möbian connecting 10D to 12D.

    The all encompassing source energy quantum is the Eps-Gauge Boson, which manifests as the Gauge mediator for the four elemental interactions, suppressing the weak interaction in a primary triplicity however to allow the Higgs Restmass Induction mechanism to proceed in the defining qualities of the Unified Field of Quantum Relativity (UFoQR).



    Now T(n)=n^2+n, with first derivative dT/dn=2n+1, and define the Radius of Curvature as R(n)=dT/dn.

    Set T(n)=Rmax^2 - R^2(n) for the Radius of Curvature R(n), bounded in Rmax.

    Represent the Feynman-Operator dT/dn=Rmax/R(n) as the differential for the asymptotic approach and write T(n)={Rmax + R(n)}.{Rmax - R(n)}=(n+1/2)^2 - (1/4).

    Hence Rmax=[n+1/2] and R(n)=[1/2] for the identity [Rmax+R(n)]/[Rmax-R(n)]=[n+1]/[n]=[1+1/n].

    Since Rmax/R(n)=dT/dn=2n+1, we can choose the Feynman-Operator to equal the curvature differential in the expression Rmax^2 - R^2(n)=2Rmax.R(n)=1, which introduces the Modular Duality in Rmax=1/R(n) in the demetricated scalefactor R(n).

    Subsequently, 2n+1=1+1/n identifies 2n^2-1=0 and the modular curvature radius RC=Rmax/(2n+1) for the Feynman-Operator intersecting R(n) at the n-coordinate n=√2/2 with RC=(√2 - 1)Rmax.


    Rmax/R(n)=[1+1/n] then becomes lim[n→∞]{Rmax/R(n)}^n = exponent "e" and defines the demetricated scalefactor in: R(n)=Rmax{n/[n+1]}=Rmax{1 - 1/[n+1]}.
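    The limit and the intersection point claimed above are easy to confirm numerically.

```python
import math

# Confirm lim (1 + 1/n)^n = e, the intersection condition 2n^2 - 1 = 0
# at n = sqrt(2)/2, and RC/Rmax = 1/(2n+1) = sqrt(2) - 1.
n_big = 1e8
print((1 + 1 / n_big)**n_big, math.e)       # both ~2.71828

n = math.sqrt(2) / 2
print(2 * n**2 - 1)                         # 0, the intersection condition
print(1 / (2 * n + 1), math.sqrt(2) - 1)    # both ~0.41421
```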


    A further consequence is the simulation of exp[hf/kT]=1+1/n in the Bosonic and Fermionic Statistics of Planck, Maxwell, Boltzmann, Bose and Einstein via the Gamma- and Zeta-Functions in the Black Body spectra describing the temperature evolution for the universe.

    We now understand how the 10D-Universe expands asymptotically under the demetricated form for Einstein's Curvature Radius RC=c.dt/dn, with n a dimensionless cycletime defined in the nodal Hubble-Constant Ho=dn/dt as the Universe's Self-Frequency for the Hubble-Oscillation in 11D-M-Space.

    The demetricated Velocity-Differential is v(n)=c/[n+1]^2 and the demetricated Deceleration-Differential becomes a(n)=-2cHo/[n+1]^3 (the Milgröm Parameter, defining the asymptotic deceleration as the overall deceleration of C-Space modified by a Gravitational Omega and a Quintessential Lambda).
    The Omega and the Milgröm Parameter are always negative, whilst the Lambda evolves from an antigravitational de Broglie phase state to reach its asymptotic gravitational vanishing value after three zero states defined by the Temperature/cosmological-redshift evolution of the cosmos (the first root being at redshift 2.15).
    The so called 'cosmological constant' in Einstein's Field equations is thus the Intrinsic Milgröm Acceleration differential between the Omega and the Milgröm Parameter.
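    A finite-difference check confirms that the scale factor R(n)=Rmax.n/[n+1] with n=Ho.t reproduces the two differentials; the numerical values for Ho and Rmax are assumptions chosen to be consistent with the nodal values used in the text, with c=Ho.Rmax.

```python
# Finite-difference check of v(n) = c/(n+1)^2 and a(n) = -2cHo/(n+1)^3
# for the demetricated scale factor R(n) = Rmax * n/(n+1), n = Ho*t.
Ho, Rmax = 1.875e-18, 1.599e26     # assumed nodal values, 1/s and m
c = Ho * Rmax                      # recovers ~3e8 m/s
R = lambda n: Rmax * n / (n + 1)

n, dn = 1.13, 1e-4                 # an illustrative cycletime and step
v = Ho * (R(n + dn) - R(n - dn)) / (2 * dn)            # dR/dt
print(v, c / (n + 1)**2)           # agree

a = Ho**2 * (R(n + dn) - 2 * R(n) + R(n - dn)) / dn**2  # d^2R/dt^2
print(a, -2 * c * Ho / (n + 1)**3) # agree
```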


    But the 'Volume' for the 10D-C-Space grows in the factor 2π^2.R^3(n), whilst the 11D-M-Space grows in the factor n.2π^2.Rmax^3 for a DIM-Factor of V11D/V10D=[n+1]^3/n^2.

    This calculates for the present epoch as DIM=7.56.. and describes the 'missing' mass or dark matter.

    The baryonic inertial mass-seedling of the deceleration parameter is 2.81% of the critical mass-seedling and has 'evolved' to about 3.68% in a coupling to the evolution of the gravitational constant from its 'massless-permittivity' initialisation of Go.

    DIM(3.68%)=27.82% and represents the Total Manifested Universal Mass in 11-dimensional M-Space.

    Hence no 'dark matter' scenarios are necessary and the Milgröm Parameter suffices to account for the galactic rotation curves, seemingly violating Newtonian Gravitation in the necessity for 'dark matter haloes' of gravitating but nonluminous mass distributions.

    We recall that the critical mass-seedling (of about 6.4x10^52 kg) gives the Euclidean flat curvature superimposed onto the hyperbolic 10D-Space of diminished mass content in the restmass-seedling Mo (of about 1.81x10^51 kg, now 2.38x10^51 kg).
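    The DIM arithmetic can be replayed directly, taking the cycletime n as the ratio of the age of 19.11 billion years to the nodal Hubble time of 16.9 billion years quoted earlier in the text.

```python
# Replay the DIM dark-matter factor DIM = (n+1)^3/n^2 at the present
# cycletime, plus the quoted mass fractions 2.81% and 3.68%.
n = 19.11 / 16.9                          # present cycletime, ~1.13
DIM = (n + 1)**3 / n**2
print(DIM)                                # ~7.56, as quoted

print(DIM * 3.68)                         # ~27.8, the quoted 27.82%
print(0.0281 * 6.4e52, 0.0368 * 6.4e52)   # ~1.8e51 and ~2.36e51 kg seedlings
```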



    The Creation of spacetime quanta occurs in the 11D-M-Space not reflected back into the lower dimensional 10D-C-Space as the OmniSpace Mapping of the LineSpace described previously.

    We have already calculated that X=Z.Ho~1.9x10129 spacetime quanta are added every second to the Outer Boundary of the Universe as the refracted part of the Hubble-Oscillation.

    An infinitely expanding hypersphere would however violate the Holographic Bound set in Black Hole parameters for the Information Content described by the Black Hole's Surface Area, quantised in Planck-Areas.

    Yet, we also know that the Black Hole Event Horizon is LIMITED by the M-Space as the asymptotic boundary for the C-Space to 'expand into'.

    This is of course QR's HB-Bound Z~10147 Bits which itself sets the parameters for the continuous creation of the spacetime quanta.

    The resolution and consequence henceforth is the modular duality and interconnectedness of OmniSpace.
    The 'growth' of the 11-dimensional form of the Hypersphere continues unimpeded with the boundary conditions for the HB satisfied in its 10-dimensional Hologram of the C-Space, always bounded in the Steady State of the Hubble-Oscillation.


    However, what is defined as the 10D-Hypersphere is developing new degrees of freedom based on the multidimensional properties of translational LineSpace, Rotational HyperSpace, Vibrational QuantumSpace and Quantisational OmniSpace.

    This results in the major-axis-invariant prolate ellipsoid describing the encompassed hypersphere engaging in phaseshifts to define the Multiverse from the Protoversal Universe.

    A prolate ellipsoid becomes an oblate ellipsoid under minor-axis rotation, as the previously fixed focus points of the elliptical cross-section are forced to move as the locus of a point-circle.

    The angular displacements for the minor axis rotating Protoverse, then define an infinite number of Potential Universes with a minimum of two such phaseshifts defining a Multiverse.



    The sum total of all possible phaseshifted Multiverses then constitutes the Omniverse, as the refraction part of the 11-dimensional Hypersphere adding Information at the rate of the QR HB in Z.

    Thus all boundary conditions are satisfied within the Omniverse as the evolved Universe; understood and constructed by the sentience responsible for the cosmogenesis herein described in co-creativity.
    The great fallacy in the standard cosmology in regards to the expansion of the universe is the assumption of a 'continuing stretching of the basic spacetime metric'.
    The Euclidean universe of observation and measurement is flat and of zero curvature, because there was just the one 'stretching' of space in the inflation of the de Broglie hyper-acceleration, often termed the inflaton-instanton of timeinstantenuity.
    There thus exists no ever-receding Hubble-Horizon, with particular galaxies receding 'out of view', and no similar consequences of a continuing 'stretching of the basic metric'.





    Bluey TonyLove of Whynot, Scribe of the Unicorn De Maria

    Sunday, August 30th, 2009 --- 00:00:00 Hours Local MAT

    Let there Be Light in the Darkness in the archetypes Redefined and the Universe Reconfigured


    Last edited by Didymos on Sun Oct 23, 2011 5:02 am; edited 1 time in total

    Re: Faster than light particles found, claim scientists

    Post  Didymos on Sun Oct 23, 2011 5:00 am

    Roger Penrose and the Big Bang Curvature

    Hi Mike!


    There are a number of points which align Arp with the mainstream. Now I know you rather accept the prevailing cosmological standard models of the Big Bang Cosmology and the various attempts (barring the multiverses, the anthropic principle and related topics perhaps).

    For about 20 years now, I have supported Allan Sandage's measurements of the Hubble Constant. He long set it at the 55 km/Mpc.s mark and only recently, under the pressure of the WMAP data, has he 'relented' to somewhere around 65 km/Mpc.s.

    In my decade-long analysis and study of the cosmology, I found the following.

    1. The standard model describes the thermodynamic evolution of the cosmos very accurately. So you can reanalyse the WMAP data in their description of the Cosmic Microwave Background BlackBody Radiation (CMBBR) and use this CMBBR as a basis for the emerging parameters of the cosmoevolution.


    2. The standard model has 'misinterpreted' the Guth-inflation in the context of the now prevalent membrane physics of the spacetime metrics.

    The standard model postulates the Big Bang singularity to become a 'smeared out' minimum spacetime configuration (also expressible as quantum foam or in vertex adjacency of Smolin's quantum loops). This 'smearing out' of the singularity then triggers the (extended) Guth-Inflation, supposedly ending at a time coordinate of about 10^-32 seconds after the Big Bang.

    Without delving into technical details; the Guth-Inflation ended at a time coordinate of 3.33x10^-31 seconds and AT THAT coordinate, the Big Bang became manifest in the emergence of spacetime metrics in the continuity of classical general relativity and the quantum gravitational manifesto.

    This means that, whilst the Temperature background remains classically valid, the distance scales for the Big Bang will become distorted in the standard model in postulating a universe the scale of a 'grapefruit' at the end of the inflation.

    The true size (in Quantum Relativity) of the universe at the end of the inflation was the size of a wormhole, namely at a Compton-Wavelength (Lambda) of 10^-22 meters and so significantly smaller than a grapefruit.

    Needless to say, and in view of the CMBR background of the temperatures, the displacement scales of the standard model will become 'magnified' in the Big Bang Cosmology of the very early universe in the scale ratio of say 10 cm/10^-20 cm=10^21, i.e. the galactic scales in meter units.

    If you study the inflation cosmology more closely, you will find that many cosmologists already know, that the universe had to be 'blown up' to the Hubble Horizon instantaneously (so this is not popularised, as it contradicts the 'grapefruit' scale of Alan Guth).


    3. A result of this is that the 'wormhole' of the Big Bang MUST be quantum entangled (or coupled) to the Hubble Horizon. And from this emerges the modular duality of the fifth class of the superstrings in the Weyl-String of the 64-group heterosis.

    Again, without technical detail, the Big Bang wormhole becomes a hologram of the Hubble Horizon and they are dimensionally separated by the Scale-parameter between a 3-dimensional space and a 4-dimensional space. This is becoming more and more mainstream in the 5-dimensional spacetime of Kaluza-Klein-Maldacena in de Sitter space becoming the BOUNDARY for the 4D-Minkowski-Riemann-Einstein metrics of the classical cosmology. Of course the Holographic Universe of Susskind, Hawking, Bekenstein and Maldacena plays a crucial part in this, especially as M-Theory has proven, (YES PROVEN in scientific terms), the entropic equivalence of the thermodynamics of Black Holes in the quantum eigenstates of the classical Boltzmann-Shannon entropy.

    So your 'speculative' status of string theory is a little 'out of date'. The trouble with the Susskind googolplex solutions is that they (if only Witten had access to my data) fail to take into account the superstring selftransformations of the duality-coupled five classes. They think that all five classes manifest at the Planck-scale (therefore the zillions of solutions); they do not, and transform into each other to manifest the Big Bang in a minimum spacetime configuration at the Weylian wormhole of class HE(8x8).

    Roger Penrose has elegantly described the link of this to classical General Relativity in his "Weyl Curvature Hypothesis".

    Quote from: "The Large, the Small and the Human Mind", Cambridge University Press, 1997, from the 1995 Tanner Lectures; pages 45-46:



    "I want to introduce a hypothesis which I call the 'Weyl Curvature Hypothesis'. This is not an implication of any known theory. As I have said, we do not know what the theory is, because we do not know how to combine the physics of the very large and the very small. When we do discover that theory, it should have as one of its consequences this feature which I have called the Weyl Curvature Hypothesis. Remember that the Weyl curvature is that bit of the Riemann tensor which causes distortions and tidal effects. For some reason we do not yet understand, in the neighbourhood of the Big Bang, the appropriate combination of theories must result in the Weyl tensor being essentially zero, or rather being constrained to be very small indeed.

    The Weyl Curvature Hypothesis is time-asymmetrical and it applies only to the past type singularities and not to the future singularities. If the same flexibility of allowing the Weyl tensor to be 'general' that I have applied in the future also applied to the past of the universe, in the closed model, you would end up with a dreadful looking universe with as much mess in the past as in the future. This looks nothing like the universe we live in. What is the probability that, purely by chance, the universe had an initial singularity looking even remotely as it does?

    The probability is less than one part in (10^10)^123. Where does this estimate come from? It is derived from a formula by Jacob Bekenstein and Stephen Hawking concerning Black Hole entropy and, if you apply it in this particular context, you obtain this enormous answer. It depends how big the universe is and, if you adopt my own favourite universe, the number is, in fact, infinite.

    What does this say about the precision that must be involved in setting up the Big Bang? It is really very, very extraordinary, I have illustrated the probability in a cartoon of the Creator, finding a very tiny point in that phase space which represents the initial conditions from which our universe must have evolved if it is to resemble remotely the one we live in. To find it, the Creator has to locate that point in phase space to an accuracy of one part in (10^10)^123. If I were to put one zero on each elementary particle in the universe, I still could not write the number down in full. It is a stupendous number". End of Quote



    4. Then of course I claim, that the Theory of Quantum Relativity represents a kind of 'Newtonian Approximation' to the 'Theory we have yet to find', mentioned by Roger Penrose in the above.

    Then the 'phase spaced' de Broglie inflation is in modular quantum entanglement with the Weyl-Wormhole of the Zero-Curvature of Roger Penrose's hypothesis and this solves the 'Riddle of Space' in somewhat the manner Allen Francom has postulated.

    The Hubble-Universe consists of 'adjacent' Weyl-wormholes, discretising all physical parameters in holofractal selfsimilarity.

    Penrose's Weyl-tensor is zero as the quasi-reciprocal of the infinite curvature of the Hubble Event Horizon - quasi because the two scales (of the wormhole and Hubble Universe) are dimensionally separated in the modular coupling of the 11D supermembrane boundary to the 10D superstring classical cosmology of the underpinning Einstein-Riemann-Weyl tensor of the Minkowski (flat) metric.



    5. Finally then, the Hubble Law as applied in the standard model becomes a restricted case, applicable ONLY at the Node of the 11D asymptotic limit/boundary also BEING the initial condition Penrose writes of.

    Then and there the Hubble Constant is truly Constant at 58.03 km/MPc.s; vindicating both Alan Sandage and Halton Arp, the latter in his questioning of the Hubble Law to characterise the cosmic distance scales.



    6. Because of the duality coupling between the wormhole and the Hubble horizon, the Hubble-Horizon in 10D is always smaller than the Hubble Horizon in 11D (the first is defined in a 4D Minkowski spacetime and the second in a 5D Kaluza-Klein hypersphere). So the standard cosmology will measure an 'accelerating universe' where there is actually an 'electromagnetic intersection' of the 11D Big Bang Light having reflected from the 11D boundary and recoupling with the 10D expansion.

    Halton Arp's redshifts are also dual in that the special relativistic doppler formulation is absolutely sufficient to relate the cosmological redshift to cosmic displacement scales (and without the Hubble Law Ho=vrec/D). So the redshift measurement is the true parameter and must then be correlated with the expansion factor of General Relativity to ascertain the lowerD coordinates of the observed phenomena encompassed by the higherD coordinates (through the values of the expansion parameter).

    Briefly, the expanding universe presently moves at 0.22c with a deceleration of about 0.01 nanometers per second squared. But because the Hubble Horizon ITSELF recedes presently at 0.22c, particular 'redshift corrections' must be applied to the VALID measurements of the latter to ascertain the cosmological distance scales of the light emitters.
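    The special-relativistic Doppler formulation referred to above gives a definite redshift for the quoted recession speed of 0.22c, independent of any Hubble-law calibration. A minimal sketch (the 0.22c figure is the only input taken from the text; the function name is illustrative):

```python
import math

def doppler_redshift(beta):
    """Special-relativistic radial Doppler redshift z for a source receding at v = beta*c."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

beta = 0.22                    # present recession speed of 0.22c quoted above
z = doppler_redshift(beta)
print(f"z = {z:.4f}")          # redshift with no Hubble-law input
```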

    John Shadow




    --- In Christianity_Debate@yahoogroups.com, "MikeA" wrote:
    --- In Christianity_Debate@yahoogroups.com, drcsanyi drcsanyi@ wrote:
    It is you who do not recognize the vast difference between Arp and the "Big Bang" model.
    [MIKE] I see Arp has been busy since he retired. This is what I gather. Arp disagrees, as he has his whole career, that quasars' redshifts are due to distance. He now apparently feels that they are being ejected from certain very active galaxies and that because of this the Hubble constant should be about 55, not the 70-something it is now calculated to be. I should point out that there is nothing very unusual in this. Allan Sandage, who took over Hubble's task when Hubble died, thinks the number is closer to 55 than 70. And Thomas Matthews, who discovered these ubiquitous quasars with Sandage, has also found some quasars that are nearby. Please note I said some.
    Arp is now of the opinion that Hoyle was right and is fooling with a cyclic steady-state universe, if that is not an oxymoron. From what I can discern, one is going to get the same observations with either model for the foreseeable future.




    Re: Faster than light particles found, claim scientists

    Post  Didymos on Thu Nov 03, 2011 4:42 am

    The Stability of the Electron and its missing mass in QED


    Hi All!

    Allow me to raise a most important issue in quantum mechanics, namely the stability of the electron and the nature of its mass. QED postulates the electron as a point-particle of a size smaller than 10^-18 metres; yet the extremely successful calculations in Quantum ElectroDynamics or QED are required to 'scale up' the electron to its 'classical' size of about 3x10^-15 metres.

    Postulating a 'Dirac Sea' of 'virtual particles' within this classical electron radius Re=ke²/mec²; the electron should be unstable due to the repulsion of the 'virtual electrons' of that 'uncertainty soup'.

    The following references to the Canadian physicist Vesselin Petkov further detail the scenario and its avenue of resolution in the form of a correct calculation of the electromagnetic mass of the electron.

    I highly recommend the two papers, as they also describe the equivalence principle between gravitational- and inertial mass and should put to rest the 'varying c' lightpath questions which Petkov nicely analyses in the original 'Einstein Elevator' thought experiment in the 'Acceleration-paper'.

    I shall then add and show that what the experimenters measure as the restmass of the electron is actually a REDUCED EFFECTIVE electronmass.

    So the electron's mass is purely electromagnetic and via the equivalence principle it links the classical electron radius to its self-energies of its electric field and its magnetic field.

    Finally, a relationship to the string parameters in Quantum Relativity shall show that in those parameters, the stability of the electron is assured in the electron's energy incorporating the experimental mass defect in the (v/c) ratio assuming its unitary value.

    In other words, the higher dimensional electron is gauge-photonic, so always moving at lightspeed c, say as manifested in its spin angular momentum:
    h/4π=λo/4πe*c=λo/8πRec³=360x10^-10/8πc³.

    This last expression is highly significant in Petkov's side-notes below and seems to relate to a so-called 4-atomism, an advanced concept of the discretisation of spacetime.



    Vesselin Petkov on the 4-atomistic model


    "Such a possibility follows from a work [37] which has received little attention so far. By bringing the idea of atomism to its logical completion (discreteness not only in space but in time as well - 4-atomism), it is argued in that work that a quantum-mechanical description of the electron itself (not only of its state) is possible if the electron is represented not by its worldline (as deterministically described in special relativity) but by a set of four-dimensional (4D) points (modeled by the energy-momentum tensor of dust - in this case a sum of delta functions) scattered all over the spacetime region in which the wave function of the electron is different from zero. The 4-atomism hypothesis gives an insight into two questions relevant to the issues discussed here:

    (i) how an elementary charge can have "parts" and still remain an elementary charge, and

    (ii) why there is no stability problem despite that the "parts" of an electron repel one another. Since, according to the 4-atomism hypothesis, for 1 second an electron is represented by 10^20 4D points (the Compton frequency) at one instant the electron exists as a single 4D point carrying a greater (bare) charge, but for one second, for example, there will be 10^20 such points occupying a spherical shell that manifest themselves as an electron whose effective charge is equal to the elementary charge. If we can observe the electron without interacting with it for, say, 10^-9 s the electron will appear to us as a spherical shell since during the observation time (10^-9 s) the electron is represented by 10^11 4D points appearing and disappearing on the spherical shell. Each charged 4D point feels the repulsion from other previously existing constituents of the electron, but cannot be repelled since it exists just one instant.

    Therefore, such a spherical distribution of the electron charge appears to be stable. This hypothesis also appears compatible with the scattering experimental data - the dimensions of the constituents of the electron (its 4D points) can be smaller than 10^-18 m. The 4-atomistic model does not lead to the difficulties of a purely particle and a purely wave models of the electron and may be a candidate for what Einstein termed "something third" (neither a particle nor a wave).
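    Petkov's figure of ~10^20 4D points per second is the electron's Compton frequency f = me.c²/h, and ~10^11 points then fall in a 10^-9 s observation window. A quick check with standard SI constants (an illustration, not part of Petkov's text):

```python
# Verify that "10^20 4D points per second" in the 4-atomism quote
# is the electron Compton frequency f = me*c^2/h.
me = 9.10938e-31       # electron rest mass, kg
c = 2.99792458e8       # speed of light, m/s
h = 6.62607e-34        # Planck constant, J.s
f_compton = me * c**2 / h          # ~1.24e20 Hz
points_per_ns = f_compton * 1e-9   # points in a 10^-9 s observation window
print(f"f_Compton = {f_compton:.3e} Hz, points per ns = {points_per_ns:.3e}")
```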



    The electron here actually represents 10^11 4D-points which in a 'bosonic unification' time of order a nanosecond (10^-9 s) would constitute the electron as the Re shell, which is stable.

    Our description of the higher dimensional electron justifies this in rendering lightspeed c intrinsic to the action quantum being FINESTRUCTURED in h=λo.10^10/2Rec³ and with λo.10^10/360=10^-22.10^10/360=10^-12/360=Re describing precisely this 4-atomisation introduced by Petkov.
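    The finestructure claim λo.10^10/360 = Re can be compared with the conventional SI value of the classical electron radius; the two agree to within about 1.5 percent. A sketch (the SI value of Re is the only outside input):

```python
# Compare the thread's 4-atomisation length lambda_o*10^10/360
# with the conventional classical electron radius.
lam_o = 1e-22                     # wormhole Compton wavelength in metres, as quoted
Re_thread = lam_o * 1e10 / 360    # = 10^-12/360 ~ 2.78e-15 m
Re_si = 2.8179403e-15             # classical electron radius, m (SI value)
rel_diff = abs(Re_thread - Re_si) / Re_si
print(f"Re_thread = {Re_thread:.4e} m, relative difference = {rel_diff:.1%}")
```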

    I shall continue after the Vesselin Petkov references.



    Did 20th century physics have the means to reveal the nature of inertia and gravitation?
    Authors: Vesselin Petkov
    (Submitted on 14 Dec 2000 (v1), last revised 17 Dec 2000 (this version, v3))

    Abstract: At the beginning of the 20th century the classical electron theory (or, perhaps more appropriately, the classical electromagnetic mass theory) - the first physical theory that dared ask the question of what inertia and mass were - was gaining momentum and there were hopes that physics would be finally able to explain their origin. It is argued in this paper that if that promising research path had not been inexplicably abandoned after the advent of relativity and quantum mechanics, the contemporary physics would have revealed not only the nature of inertia, mass, and gravitation, but most importantly would have outlined the ways of their manipulation. Another goal of the paper is to try to stimulate the search for the mechanism responsible for inertia and gravitation by outlining a research direction, which demonstrates that the classical electromagnetic mass theory in conjunction with the principle of equivalence offers such a mechanism.

    Comments: 12 pages, LaTeX
    Subjects: Classical Physics (physics.class-ph)
    Cite as: arXiv:physics/0012025 [physics.class-ph], http://arxiv.org/abs/physics/0012025



    Acceleration-dependent selfinteraction effects as a basis for inertia

    Vesselin Petkov

    Physics Department, Concordia University
    1455 de Maisonneuve Boulevard West
    Montreal, Quebec H3G 1M8
    vpetkov@alcor.concordia.ca
    (or vpetkov@sympatico.ca)

    http://arxiv.org/abs/physics/9909019

    [18] In order to account for the stability of the classical electron Poincaré [8] assumed that part of the electron mass (regarded as mechanical) originated from forces (known as the Poincaré stresses) holding the electron charge together and that it was this mechanical mass that compensated the 4/3 factor (reducing the electron mass from 4/3m to m). However, the 4/3 factor, as discussed above, turned out to be an error in the calculations of electromagnetic mass as shown in [10]-[15]. As there remained nothing to be compensated (in terms of mass), if there were some unknown attraction forces (the Poincaré stresses) responsible for holding the electron charge together, their negative contribution to the electron mass would result in reducing it from m to 2/3m.

    This made the stability problem even more puzzling - on the one hand, a spherical electron tends to disintegrate due to the repulsion of the different parts of the spherical shell; on the other hand, however, an assumption that there is a force that prevents the electron charge from blowing up leads to a wrong expression for its mass.

    Obviously, there is an implicit assumption in the classical model of the electron that leads to such a paradox - it is assumed that at every instant the electron charge occupies the whole spherical shell (see [20]).
    [19] D. J. Griffiths, Introduction to Electrodynamics, 2nd ed., Prentice Hall, New Jersey, 1989, p. 439.

    [20] It is not impossible for an elementary charge to have a spherical but not continuous distribution. Such a possibility follows from a work [21] which has received little attention so far.


    What is promisingly original in the 4-atomism hypothesis is its radical approach toward the way we understand the structure of an object. The present understanding is that an object can have structure only in space. The 4-atomism suggests that an object can be indivisible (structureless) in space (like an electron) but structured in time. Whether or not this hypothesis will turn out to have anything to do with reality remains to be seen, but the very fact that it offers conceptual resolutions to several open questions and goes beyond quantum mechanics (which cannot be discussed in this paper) by predicting two new effects that can be tested makes it a valuable candidate for a thorough examination.




    CALCULATION OF THE ELECTRONMASS

    The magnetic energy stored in a magnetic field B of volume V and area A=R² for an (N-turn toroidal) current inductor N.i=B.dR/μo for velocity v and selfinduction L=NBA/i is:

    Um=½Li²=½(μo.N²R)(BR/μoN)²=½B²V/μo and the Magnetic Energy Density per unit volume is then Um/V=½B²/μo.

    Similarly, the Electric Energy density per unit volume is: Ue/V=½εoE² say via the Maxwell equations and Gauss' law.
    By the Biot-Savart and Ampere Law: B=μoqv/4πr² and εo=1/c²μo for the E=cB foundation for electrodynamic theory.

    So for integrating a spherical surface charge distribution dV=4πr².dr from Re to ∞:

    Um=∫{μoq²v²/8πr²}dr = μoq²v²/8πRe.

    Similarly, Ue=∫dUe=q²/8πεoRe=kq²/2Re=½mec² as per definition of the classical electron radius and for the total electron energy mec² set equal to the electric potential energy. We term me here the EFFECTIVE electronmass and so distinguish it from an actual 'bare' restmass mo.
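    The definition just invoked can be made concrete in SI units: with Re = ke²/(me.c²), the electric self-energy ke²/2Re evaluates to exactly half of me.c² by construction. A minimal numeric sketch:

```python
# Classical electron radius Re = k*e^2/(me*c^2) and the electric
# self-energy Ue = k*e^2/(2*Re), which is me*c^2/2 by construction.
k = 8.9875517e9         # Coulomb constant, N.m^2/C^2
e = 1.60217663e-19      # elementary charge, C
me = 9.1093837e-31      # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s
Re = k * e**2 / (me * c**2)     # ~2.82e-15 m
Ue = k * e**2 / (2 * Re)        # electric self-energy, J
print(f"Re = {Re:.4e} m, Ue/(me*c^2) = {Ue / (me * c**2):.3f}")
```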

    We now define the electric electromagnetic mass and the magnetic electromagnetic mass as:

    melectric=kq²/2Rec²=Ue/c²=½me and consider the electric electron energy to be half the total energy (akin to the virial theorem for PE=2KE, say in the Bohr atom's PE=(-)ke²/RBohr=e²/4πεoRBohr=(2)e²/8πεoRBohr=2KE).

    mmagnetic=μoe²[v/c]²/8πRe=melectric.(v/c)²=½me.(v/c)², which must be the KE by Einstein's c²dm=c²(m-mo) for the relativistic electronmass m=mo/√(1-B) with B=(v/c)².


    Note: (B here is not the magnetic flux density vector B=E/c, measured in Tesla or gauss but a conventional label for the (v/c) ratio in Special Relativity).

    But we can see that, should one use the measured electron mass from the Re-definition as the electron's restmass, then mmagnetic + melectric=me{½+½(v/c)²} < me, because of the mass-velocity dependency factor B and the groupvelocities v < c.
    So we introduce the relativistic restmass mo and set the constant A in mo.A=μoe²/8πRe for AB=1/√[1-B] - 1 from:

    c²(m-mo)=μoe²v²/8πRe with m=mo/√(1-[v/c]²).

    This leads to the quadratic (in B):
    1=(1+AB)²(1-B)=1+B(2A+A²B-2AB-A²B²-1) and so: {A²}B²+{2A-A²}B+{1-2A}=0

    with solution in roots: B=([A-2]±√[A²+4A])/2A={(½-1/A)±√(¼+1/A)}.


    This defines a distribution of B=(v/c)² velocity ratios in mo.AB=μoe²[v/c]²/8πRe.

    mmagnetic=μoe²[v/c]²/8πRe=mo.AB=½me.(v/c)² then finestructures mmagnetic in the relation mo.A=½me and allows correlation between the relativistic and kinetic restmass mo and the effective electron groundmass me (say).

    In particular me=2Amo, and so mo=me for A=½ AS the NEW minimisation condition.

    In string parameters and with me in *units, the following is found:

    mo.A=30e²c/e*=½me=4.645263574x10^-31 kg*.

    This implies that, for A=1, mo=½me, where me=9.290527155x10^-31 kg* from the prequantum algorithmic associations, based on the magnetic constant defining the Classical Electronic Radius.


    As B≥0 for all velocities v, bounded as groupspeed (not the de Broglie phasespeed, which is always >c) by c, for which B=1; a natural limit is found for the B distribution at A=½ and A=∞.

    The electron's restmass mo so is binomially distributed for the B quadratic.

    Its minimum value is half its effective mass me and as given in melectric=kq²/2Rec²=Ue/c²=½me for A=½ and its maximum for A=∞ is the unity v=c for B=1.

    The X-root is always positive in an interval from 0 to 1 and the Y-root is always negative in the interval from -3 to 0.



    For A=½: B=-3/2±3/2 for roots x=0 and y=-3;

    for A=¾: B=-5/6±√(19/12) for roots x=0.425 and y=-2.092;

    for A=1: B=-½± ½√(5) for roots x=X=0.618033... and y=Y=-1.618033...;

    for A=∞: B=½[-]±½[+] for roots x=1[-] and y=0[-];
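    The tabulated roots all follow from the closed form B = (½-1/A) ± √(¼+1/A) quoted above. A short verification sketch reproducing the table (the function name is illustrative):

```python
import math

def roots_B(A):
    """x- and y-roots B = (1/2 - 1/A) +/- sqrt(1/4 + 1/A), per the closed form above."""
    centre = 0.5 - 1.0 / A
    spread = math.sqrt(0.25 + 1.0 / A)
    return centre + spread, centre - spread

# Reproduces the listed values; A=1 gives the golden-section roots.
for A in (0.5, 0.75, 1.0):
    x, y = roots_B(A)
    print(f"A={A}: x={x:.6f}, y={y:.6f}")
```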



    Letting B=n, we obtain the Feynman-Summation, and the Binomial Identity gives the limit of A=½ in:

    A=1/2 + B{3/8 + 5B/16 + 35B²/128 +...} and as the nonrelativistic low velocity approximation of E=mc² as KE=½mov².
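    The limit can be checked numerically: from AB = 1/√(1-B) - 1 one has A(B) = (1/√(1-B) - 1)/B, which tends to ½ as B → 0 and is tracked closely by the binomial expansion ½ + 3B/8 + 5B²/16 + 35B³/128 for small B. A minimal sketch (function names are illustrative):

```python
import math

def A_of_B(B):
    """Exact A(B) = (1/sqrt(1-B) - 1)/B, from the relation AB = 1/sqrt(1-B) - 1."""
    return (1.0 / math.sqrt(1.0 - B) - 1.0) / B

def A_series(B):
    """First four terms of the binomial expansion of A(B) about B = 0."""
    return 0.5 + 3 * B / 8 + 5 * B**2 / 16 + 35 * B**3 / 128

# As B -> 0 the exact A(B) approaches the nonrelativistic limit A = 1/2,
# and the truncated series tracks the exact value for small B.
print(A_of_B(0.1), A_series(0.1))
```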

    But the FRB or Functional-Riemann-Bound in Quantum Relativity (and basic to the pentagonal string/brane symmetries) is defined in the renormalisation of a wavefunction B(n)=(2e/hφ).exp(-alpha.T(n)), exactly about the roots X,Y, which are specified in the electron masses for A=1 in the above.

    The unifying condition is the Euler Identity: XY=X+Y=i²=-1=cos(π)+i.sin(π)=e^(iπ).
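    The closing identity is easy to verify numerically for the A=1 roots X=0.618..., Y=-1.618...; both their product and their sum equal -1, matching e^(iπ). A quick check (variable names are illustrative):

```python
import cmath
import math

X = (-1 + math.sqrt(5)) / 2        # A=1 x-root, 0.618033...
Y = (-1 - math.sqrt(5)) / 2        # A=1 y-root, -1.618033...
euler = cmath.exp(1j * cmath.pi)   # e^(i*pi) = cos(pi) + i*sin(pi)

# Product and sum of the golden-section roots both equal -1,
# matching e^(i*pi) = -1 up to floating-point rounding.
print(X * Y, X + Y, euler)
```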

    This concludes this introduction to the electron's missing restmass.

    Tony B.


