Even in ancient times, philosophers ruminated on whether the divisibility of matter might be limited. It took thousands of years until chemistry finally realised a first step in that direction by uncovering the atomic structure of matter (i.e. the periodic table of elements). Uncovering the world of elementary particles then took only about another century.
In the year 1900, Planck finally founded quantum theory, and shortly afterwards Einstein established the relativity of space and time. All of a sudden, the suspicion arose that Einstein’s spacetime, too, might be composed of tiny “components of dynamics” (quanta). In principle, there was no apparent objection to this. Yet now, after more than 100 years, physics is still unable to confirm this presumption by experiment. (If they exist, those spacetime units are simply too small for contemporary measuring devices.)
In parallel, the problem of life and the ensuing problem of the human spirit have proved to be a tough nut to crack. The approach to life is laborious. Darwin’s theory of evolution and the decoding of DNA have been striking milestones. Beyond that, there has been little progress in understanding the detailed mechanisms by which the sensory organs of animals and humans work.
In the field of the spirit, i.e. the transmission of sensory impressions into the brain and their storage and administration, ideas are vague. According to the present understanding of things, a comprehensive solution to those problems is still a long way off. Until then, the field will remain a battleground of immature hypotheses and obscure conspiracy theories. All the more so for the details of decoding brain functions.
As paradoxical as this might sound, the big handicap on this road has been precisely the technical progress of the last three centuries, characterised by the mechanical philosophy of nature; artificial intelligence (AI) still applies it today. With everything subjected to a causal approach, people believe they can extrapolate everything, up to the largest and down to the smallest scales of our universe. (Concepts: the infinitesimal calculus in mathematics; thermodynamics and the meticulous equations of motion in physics; reality and objectivity in philosophy.)
By the incompleteness of his General Theory of Relativity (GR), Einstein supplemented a contradictory view of our universe. (Singularities!) There, cosmic expansion turned out to be independent of the processes occurring within the surface of a bubble (illustrated by the skin of an inflating balloon), and “the world we are living in”, in his view (dimension = 4), had to confine itself to that surface. Direct discrepancies with such a concept were superluminal velocities (in black holes and in cosmic inflation), which he then simply had to push off to some world outside that bubble (i.e. outside its range of application), where they troubled nobody.
Einstein, hence, replaced the ancient ether, which had meanwhile become inopportune, by an expanding space beyond the application area of the physical laws relevant to us. Thus, he decoupled physical dynamics from a superimposed, unphysical additional dynamics (cosmic expansion, cosmic inflation), onto which everything contradicting our laws of physics now had to be shifted off.
Rather similar incompatibilities of related origin popped up when Feynman treated elementary particles by applying his virtual masses. Feynman’s diagrams strongly contradict Einstein’s equivalence principle. In spite of that, Feynman’s approach was verified experimentally (by quantum electrodynamics and by the physical existence of particle “resonances”).
Analogously, philosophers got lost in their definitions of subjectivity and objectivity. They postulated an ideal notion of reality that collides with the physical notion of measurability. They deliberately overlooked that an ideal is a limit which is not attainable and, hence, can neither be definitely checked nor ultimately verified. In short, their “objectivity” was based on subjective statistics and the arbitrary specifications of privileged individuals. It resulted, hence, from gossip depending on group dynamics among selected individuals and on their subjective view of our world. Their “reality”, thus, was unreal: a situation where subjective opinions are sold as objective reality.
But, in spite of pitifully looking down on the philosophers, theoretical physicists soon appropriated those notions, the more they had to accept their failure to unify Einstein’s relativity with Planck’s quanta into some “quantum gravity” (QG) meeting the challenges of both models. Einstein himself had puzzled in detail about how to reconcile measurability with continuous mechanics. With the discreteness of quanta, however, those ideas failed. Discrete quanta do not admit of simply being “smeared out” into a continuum: the signs of “neighbouring” quanta can be opposite without a zero crossing between them, because there is no “between”.
Continuous curves enforce their smoothness by their postulate of continuity. Einstein managed this on the mathematical basis of differential geometry. But discretely distributed quanta are subject to different principles. They depend predominantly on combinatorics, as materialised in the order of their factors. Mathematically, every quantum is represented by one vector of uniform, fixed dimension, and those vectors are bundled into tensors (tensor = multiple vector).
As early as 1900, the mathematician A. Young arranged the index set of a tensor into a pattern of boxes in two dimensions (a Young tableau), subject to certain symmetrisation prescriptions. (Its rows are flushed left, and its columns hang downwards from an upper line; their lengths may vary, but without leaving gaps. The column lengths are limited to the uniform number of vector dimensions, while the row lengths are not subject to a corresponding restriction.)
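As a side illustration (my own sketch, not taken from the book’s formalism), those shape rules translate directly into a few lines of Python: rows flushed left without gaps mean the row lengths may never grow from top to bottom, and the longest column (the number of rows) may not exceed the vector dimension d.

```python
# Hypothetical helper: checks whether a list of row lengths forms a valid
# Young-diagram shape for vectors of dimension d.
def is_valid_young_shape(rows, d):
    """rows: box counts per row, top to bottom; d: vector dimension."""
    if any(r <= 0 for r in rows):
        return False                 # empty rows are not allowed
    if any(a < b for a, b in zip(rows, rows[1:])):
        return False                 # a longer row below a shorter one would leave a gap
    return len(rows) <= d            # longest column limited to the vector dimension

print(is_valid_young_shape([3, 2, 2], d=3))    # True
print(is_valid_young_shape([2, 3], d=3))       # False: row grows downwards
print(is_valid_young_shape([1, 1, 1, 1], d=3)) # False: column longer than d
```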
The characteristic of combinatorics is the deviation of a commuted product b×a from its original a×b. In mathematics, this difference (a×b − b×a) is called a “commutator”, expressed by square brackets: a×b − b×a = [a,b]. (Young’s prescription now simply reads: first, all labels within each row of a tableau are symmetrised; subsequently, all labels of each column are antisymmetrised with respect to their original order.) Quanta, hence, obey quite different mathematics from the ordinary numbers we are familiar with from school, for which all commutators vanish!
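A non-vanishing commutator is easy to demonstrate concretely (again my own illustration, not from the book): 2×2 matrices are the simplest stand-in for order-sensitive objects, while plain numbers always give zero.

```python
# Multiply two 2x2 matrices given as nested lists.
def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Commutator [a, b] = a*b - b*a, entry by entry.
def commutator(a, b):
    ab, ba = matmul2(a, b), matmul2(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

a = [[0, 1], [0, 0]]
b = [[0, 0], [1, 0]]
print(commutator(a, b))  # [[1, 0], [0, -1]]: non-vanishing

x, y = 2, 3
print(x * y - y * x)     # 0: ordinary school numbers always commute
```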
For people inexperienced in mathematics, but for many classical physicists as well, those non-vanishing commutators give rise to serious irritation. People simply do not want to accept that quanta are not numbers but behave like actions. The actions a = “ask Mr. X the way” and b = “shoot him down” give a different result in one order than in the other. The mathematics of pure numbers used by Einstein is not applicable to systems of quanta without drastic changes. This is the crucial fact of the “New Physics” on which quantum gravity [1] is based, and which modern cosmologists, with their rigid fixation on Einstein, still neither understand nor are willing to, up to the present day.
Our flat earth with the stars as peepholes through heaven’s curtain, dividing us from the kingdom of gods bathing in light, later replaced by the geocentric model of our world (sun and stars revolving about the earth)... All these are familiar perceptions of a recent past, based on ignorance of the intellectual honesty that blanket assertions should be backed by reliable verification and be free of internal contradictions. Until then, they are all just presumptions, entry points, hypotheses, models.
For the natural sciences, therefore, we rigorously stick to the principle of reproducibility: whatever is not reproducible, in whatsoever sense, is not the subject of a natural science. Religions, with their multitude of unproved claims and logical contradictions, are thus left out in the cold. Physics proved to be the “mother” of all natural sciences; history has demonstrated that one natural science after another could be subordinated as a mere branch of physics.
In order to check reproducibility, the natural sciences usually apply mathematical logic. Qualitative logic and quantitative mathematics are special branches of philosophy; thus, we could equally call a natural science a “natural philosophy”. Contrary to mathematics, however, the natural sciences add a human aspect: human lifetime is finite, and body size is finite, too. A human’s counting range, hence, is finite as well. A physical measurement, last but not least, means reading some scale. Hence, measuring results produced by experiment are subject to this principle of finiteness as well: infinities, in general, are non-physical! The same holds true for that highly praised “free will”; it just does not exist [1, chapter 1].
Mathematical logic, in addition, teaches us how to construct limits. Such a limit, however, implies the existence of an infinity, which needs an extrapolation process satisfying some arbitrary ansatz. This arbitrariness contradicts unambiguous reproducibility; consequently, it is non-physical as well. From a philosophical point of view, one highlight of the last three centuries has been the analysis of such limit considerations. It is thus not surprising that they have found their way, rather successfully, into scientific thinking: the infinitesimal calculus of mathematics allows mechanical procedures (solutions of equations of motion, e.g.) to be described in a much simpler way.
With respect to mathematics, however, this obscures the logical fact that the differentials (of vanishing size) thus generated have to satisfy continuity restrictions, arbitrarily added in order to integrate their infinitely many terms (needed to connect two neighbouring points), restrictions which cannot necessarily be assumed to hold true a priori! (Mathematicians, here, apply the fatal logic “zero times infinity = finite”.)
In measuring such a “continuous” length, the application of the differential calculus implies that the contribution of every term (in the limit) vanishes individually. The differential calculus, hence, automatically excludes the existence of an atomistic model whose “quanta” are physical carriers of some finite information. (Their summation would always yield infinity.)
The infinitesimal calculus, thus, can only ever describe a smoothed approximation. For a logical “understanding” of the physics behind it, it is highly inappropriate! Einstein’s GR and the fundamental models of particle physics applying Feynman’s diagrams, with their Hamilton-Lagrange formalism, miss the point with respect to a theory built upon it. Only quantum gravity (QG), by its atomistic description and by its elimination of a free will, provides a more realistic base beyond classical physics.
For philosophy, this departure from the infinitesimal view of nature also means a departure from considering nature as the result of an abstract pattern of bits from informatics, where nobody can tell us how those immaterial bits might proliferate into the world of physics. In addition, the classical Hamiltonian formalism of particle physics admits only one temporal coordinate. (Even string-brane models work with just one time coordinate, alongside nine or more space coordinates!) Einstein applied 3 real space dimensions and 1 imaginary time dimension (which he could then optionally transform into real numbers by his metric).
On its most primitive configuration level, QG [1] applies Dirac’s 4 complex dimensions as the base for defining a fermion. Two of them, however, are time-like (“b-spin”). An additional 4 dimensions are used by QG for Dirac’s antifermion. QG is an atomistic model: its 4+4 = 8 types of “quanta” are the physical carriers of a concrete, material pattern of bits. Contrary to the abstract, immaterial patterns above, their transfer to the real nature we are confronted with is no longer a problem.
Einstein’s dynamics knows only 3 real space dimensions and 1 imaginary time dimension. Mathematicians characterise this geometry as “SO(1,3)” (SO = special orthogonal); for physics, it provides “special relativity”. In 1916, by “crumpling” this structure (introduction of an additional metric), Einstein succeeded in including gravity in his SO(1,3) (cf. chapter 7): with his “general theory of relativity” (GR), he succeeded for the first time in interpreting a physical interaction (the force of gravity) as a purely geometrical property of his spacetime structure. His subsequent attempt to extend this finding to electromagnetism (a so-called “internal” force) failed, however.
The cause of his failure was that he had not bothered about the fate of his basic SO(1,3) when constructing his GR. (Einstein’s vague stress tensor did not remotely reach the precision of his Ricci tensor!) By his “Dirac algebra”, Dirac showed that this SO(1,3) extends itself to the “conformal group” SO(2,4), meaning that Einstein, when constructing his GR, had overlooked 2 dimensions (those numbered 4 and 6 [1, chapter 14]) by setting them constant! That incomplete ansatz is reflected in his equivalence principle (inertial mass = heavy mass), which thus became invalid; a correct application would have yielded additional terms. (In quantum gravity, by the way, the combination of both additional dimensions just represents heavy mass as the “dilation” of the conformal group!)
For particle physics, Feynman’s “virtual masses” prove it rather obviously, and cosmologists have bitterly learnt the consequences of those ignored additional terms, which are responsible for the singularities behind the event horizon of a black hole. Purely quantitatively, that omission shows up again in the dark energy and dark matter astronomers discovered later on, which, in experiment, just represent the deviation of Einstein’s 4-dimensional GR from QG (in its 6-dimensional version) with its variable mass [1, chapters 8, 14].
Mathematically, the 6 (pseudo-)orthogonal dimensions of this “conformal” SO(2,4) correspond to the 4 complex dimensions of a “special unitary” SU(2,2) or of its “unitary” extension to a U(2,2), respectively. It comprises equal numbers of (complex) space and time dimensions. The CPT theorem of particle physics then commutes both types of dimensions with each other.
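This correspondence can at least be checked at the level of counting generators (a quick sanity check of my own, not an argument from the book): the real algebra so(p,q) with n = p+q has n(n−1)/2 generators, su(p,q) with m = p+q has m²−1, and u(p,q) has m².

```python
# Generator counts of the classical (pseudo-)orthogonal and unitary algebras.
def dim_so(n):   # so(p,q), n = p + q real dimensions
    return n * (n - 1) // 2

def dim_su(m):   # su(p,q), m = p + q complex dimensions
    return m * m - 1

def dim_u(m):    # u(p,q), m = p + q complex dimensions
    return m * m

print(dim_so(6))  # 15: the conformal group SO(2,4)
print(dim_su(4))  # 15: SU(2,2) has the same number of generators
print(dim_u(4))   # 16: the unitary extension U(2,2)
```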
This transforms the U(2,2) of fermions into a U(2,2) of antifermions (and vice versa). Thereby, the number 4 of “dynamical” fermion dimensions of a U(2,2) formally doubles to the 8 dimensions of a U(4,4) treating arbitrary particles (extension of an r-number to a c-number Lie algebra). For cosmology, then [1, chapter 19], the borderline between particles and antiparticles just corresponds to the event horizon separating a black hole from that part of our universe which is accessible to us.
We could ask ourselves why dynamics should be exactly 8-dimensional in our universe. Here, the evolution of mankind enters. Chapter 12 will show that human perception recognises spacetime in its “ray” representation, i.e., the division of its quanta matters. Now, number theory (keyword “octonions”) tells us that (“irreducible”) numbers capable of being divided by each other may have at most 8 dimensions. (Real numbers can be represented on a straight line, i.e., they are 1-dimensional; for complex numbers we already need 2 coordinates (1 and i), i.e., an entire complex plane; 8 dimensions are the straight extension of that sequence.)
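The intermediate step of that sequence, the 4-dimensional quaternions, already displays both features at once and can be sketched in a few lines (my own illustration, not from the book): every non-zero element has an inverse, so division works, yet multiplication is no longer commutative.

```python
# Quaternions as 4-tuples (w, x, y, z) = w + x*i + y*j + z*k.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(q):
    w, x, y, z = q
    n = w*w + x*x + y*y + z*z      # squared norm; non-zero unless q = 0
    return (w/n, -x/n, -y/n, -z/n)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1)  = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k: the order of factors matters
q = (1, 2, 3, 4)
print(qmul(q, qinv(q)))  # approximately (1.0, 0.0, 0.0, 0.0): division works
```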
Publisher: BookRix GmbH & Co. KG
Translation: This is the English translation of the German e-book “Philosophie der Quanten, Tensor-Modell variierender Beobachterstandpunkte”, BookRix, Munich (2021), serving as the original for the English e-book “Philosophy of Quanta, ...”.
Date of publication: 02.01.2021
ISBN: 978-3-7487-7020-6
All rights reserved