THE STARSHIP TITANIC
There is a widespread belief among the general public, and even some scientists, that new technology will eventually make it possible for human beings to travel between the stars. However, when we examine both the physics and the economics involved in such an undertaking, it becomes obvious that interstellar travel is absolutely impossible. When we examine an imaginary four-light-year voyage to our nearest star, Alpha Centauri, it is easy to see why the stars are forever beyond our reach.
Perhaps the best-known examples of “future” space travel are the voyages of the Starship Enterprise from the television series Star Trek. The supposed size of this craft is difficult to determine, but judging from its relative appearance on the television screen, it seems to be on the same order of magnitude as a large ocean liner. One such ocean liner familiar to most people, the Titanic, now rests on the bottom of the North Atlantic in approximately 3,660 meters of water. Since any group of humans embarking on a journey to a nearby star would most likely require a vessel at least the size of the Titanic, we will name our imaginary spacecraft the Starship Titanic. At the very least, the trip would last thousands of years and would either span many generations of travelers or carry a crew frozen in a state of suspended animation. In order to determine the physical requirements for interstellar travel, we will examine the technical, practical, and economic limitations inherent in sending a group of humans on a four-light-year journey to Alpha Centauri in the Starship Titanic.
Aside from the many technical problems involved in interstellar travel, the largest and seemingly insurmountable obstacle is simple economics. The amount of energy needed to accelerate a craft to the high velocities necessary to make such a journey practical is simply enormous. To illustrate this problem, we will examine five different kinds of fuel that could be used to propel our craft. We will assume that the Starship Titanic can use any kind of fuel at 100% efficiency.
Our first option is chemical fuel. To be practical, the fuel must be plentiful and must release a large amount of energy per unit mass of fuel mixture. The best possible chemical fuel is the burning of hydrogen and oxygen to produce steam, which yields 6,789 BTU/lb of fuel mix (1.6 × 10⁷ J/kg). By comparison, the burning of carbon and oxygen to produce carbon dioxide yields only 3,846 BTU/lb of fuel mix (8.9 × 10⁶ J/kg).
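As a quick sanity check on these unit conversions, a short script can reproduce the metric figures from the BTU/lb values, using the standard factor 1 BTU/lb ≈ 2,326 J/kg:

```python
# Check the chemical-fuel figures quoted above by converting BTU/lb to J/kg.
# 1 BTU/lb = 2,326 J/kg is the standard conversion factor.
BTU_PER_LB_TO_J_PER_KG = 2326.0

hydrogen_oxygen = 6789 * BTU_PER_LB_TO_J_PER_KG  # burning H2 + O2 -> steam
carbon_oxygen = 3846 * BTU_PER_LB_TO_J_PER_KG    # burning C + O2 -> CO2

print(f"H2/O2 -> steam: {hydrogen_oxygen:.2e} J/kg")  # ~1.6e7 J/kg
print(f"C/O2  -> CO2:   {carbon_oxygen:.2e} J/kg")    # ~8.9e6 J/kg
```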
To obtain a more powerful fuel, we must take the great leap to nuclear fuel. The best possible fuel in terms of energy-to-mass ratio would be the fusion of H-2 and He-3 to produce He-4 and H-1. This reaction produces 3.66 MeV/nucleon, or 2.2 × 10²⁷ MeV/kg (3.5 × 10¹⁴ J/kg). This is about 22,000,000 times more energy per kilogram of fuel than is produced by burning hydrogen and oxygen. The problem with this fuel is that He-3 is a very rare isotope (0.00013%) of a very rare element (0.0005% of the atmosphere).
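The conversion from MeV per nucleon to joules per kilogram, and the comparison with chemical fuel, can be verified with a few lines of Python; the constants are the standard values for the MeV and the atomic mass unit:

```python
MEV_TO_JOULES = 1.602e-13     # joules per MeV
NUCLEON_MASS_KG = 1.6605e-27  # mass of one nucleon (one atomic mass unit), kg

def mev_per_nucleon_to_j_per_kg(mev_per_nucleon):
    """Energy density, in J/kg, of a reaction releasing the given MeV per nucleon."""
    return mev_per_nucleon * MEV_TO_JOULES / NUCLEON_MASS_KG

fusion = mev_per_nucleon_to_j_per_kg(3.66)  # D + He-3 fusion, ~3.5e14 J/kg
chemical = 1.6e7                            # H2/O2 burning, from above

print(f"D/He-3 fusion: {fusion:.2e} J/kg")
print(f"Ratio to chemical fuel: {fusion / chemical:.1e}")  # ~2.2e7, i.e. ~22 million
```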
Uranium-235 Fission Fuel
The next fuel to consider is uranium-235. The fission of these nuclei produces about 1 MeV per nucleon of mass (9.58 × 10¹³ J/kg). Uranium-235 makes up only 0.72% of naturally occurring uranium, and the difficulty of separating enormous quantities of uranium-235 is dwarfed only by the problem of storing it in one place without it reaching critical mass. An atomic bomb contains only tens of kilograms of U-235, held in subcritical pieces that are pushed together into one spot to initiate the chain reaction that causes the explosion. The tremendous amount of uranium necessary to fuel the Starship Titanic could not be safely stored within its structure without it reaching critical mass.
Hydrogen Fusion Fuel
The next fuel to consider is deuterium (H-2). The fusion of two H-2 nuclei produces either He-3 and a neutron, or H-3 and H-1. The average energy produced by these two reactions is about 0.9 MeV per nucleon (8.684 × 10¹³ J/kg). Compared to other nuclear fuels, H-2 is both safe to store in large quantities and much more plentiful. For example, one cubic kilometer of sea water contains about 17,000,000 kg of deuterium.
The last fuel to be considered is the only one that has enough energy to make travel between the stars possible for mankind, but the technical and economic difficulties of creating and storing it in large quantities will probably never be overcome. This fuel is antimatter, or more correctly, anti-atoms made from antiprotons, antineutrons, and positrons. For example, when a proton and an antiproton come together and touch, they are both converted into photons, with an energy of 938 MeV per nucleon (8.99 × 10¹⁶ J/kg). This is approximately 1,000 times the energy per kilogram of fuel produced by either uranium-235 fission or the thermonuclear fusion of hydrogen.
The energy produced by the annihilation of matter and antimatter is enormous. One small drop of anti-water (1/2 gram) falling into the ocean would create an explosion equal to approximately 10,000 tons of TNT. One gallon of anti-water mixed with one gallon of ordinary water would release the energy equivalent of approximately 8 billion gallons of gasoline.
Because of its extremely high energy-to-weight ratio, antimatter would seem to be the ultimate starship fuel. However, it has three serious drawbacks, any one of which would most probably prevent it from ever being used for this purpose.
The first is that there is no conceivable way to store antimatter in an appreciable quantity. Antiprotons and positrons can be created in particle accelerators, stored in magnetic storage rings, and even assembled into anti-atoms, but as soon as they come into contact with any atoms of ordinary matter, both matter and antimatter disappear into a flash of photons.
The second drawback is the tremendous amount of energy that it would take to manufacture antimatter. Since it cannot be found in nature (except for an occasional particle produced by cosmic rays), antimatter must be manufactured from energy obtained from other sources. Unlike hydrogen fusion, whose fuel is virtually free except for the small amount of energy needed to gather deuterium atoms from the ocean, the energy produced by matter/antimatter annihilation would be offset by the enormous amount of energy that must first be obtained from another source to manufacture the antimatter. Also, it is not possible to make antimatter by itself: for every positron and antiproton produced, an unneeded electron and proton must also be manufactured from this same energy source.
Let’s suppose that we were one day able to invent a device that could produce and store anti-water with 100% efficiency from household electricity. We simply plug the unit into a wall receptacle and out come equal quantities of anti-water and water. To make one kilogram (one liter) of anti-water would take 9 × 10¹⁶ joules of energy, or about 25,000,000,000 kilowatt-hours of electricity. At 10 cents per kilowatt-hour, that amounts to about $2,500,000,000. Unfortunately, in order to make that one liter of anti-water, an additional liter of ordinary water must also be made, also at an electrical cost of $2,500,000,000. To make one gallon of anti-water would cost nearly $20,000,000,000 and require the total USA electrical generating capacity for at least a month.
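The arithmetic behind these costs is straightforward: creating a kilogram of matter or antimatter from pure energy requires E = mc². A minimal sketch, assuming the 10-cents-per-kilowatt-hour rate quoted above:

```python
C = 2.998e8              # speed of light, m/s
JOULES_PER_KWH = 3.6e6   # joules in one kilowatt-hour
PRICE_PER_KWH = 0.10     # dollars, the rate assumed in the text
KG_PER_GALLON = 3.785    # a US gallon of water is about 3.785 kg

energy_per_kg = C**2                                        # E = mc^2, ~9e16 J
cost_per_kg = energy_per_kg / JOULES_PER_KWH * PRICE_PER_KWH

# Every kilogram of anti-water comes with an unavoidable matching kilogram
# of ordinary water, doubling the bill.
cost_per_gallon = 2 * KG_PER_GALLON * cost_per_kg

print(f"1 kg of anti-water:  ${cost_per_kg:,.0f}")                # ~$2.5 billion
print(f"1 gallon, with matching water: ${cost_per_gallon:,.0f}")  # ~$19 billion
```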
The third drawback is safety. Even with only a gallon of anti-water, the problem of safe storage becomes critical. Were it to leak out and come into contact with ordinary matter, the energy released would equal that of fissioning about ten tons of uranium-235. This is enough to make thousands of nuclear warheads. Certainly, the far side of the moon would be the only place where the manufacture and storage of even small quantities of antimatter fuel could ever be attempted.
The original Titanic sank after running into an iceberg at a speed of about 30 km/hr. Imagine what would happen if the Starship Titanic were to hit something while traveling at one-half the speed of light. The Titanic’s fall to the bottom released about the same amount of energy as the explosion of the first atomic bomb in New Mexico. Most of this energy was dissipated into the seawater as turbulence. However, if the Titanic had fallen the same 12,000 ft through a vacuum, it would be difficult to comprehend the sheer destruction that would have occurred when it struck the bottom at a velocity of over 600 mph. This is about the same amount of damage that would be expected if the Starship Titanic were to hit a one-gram bumblebee while moving at one-half the speed of light. Even the impact of a small micrometeorite could completely disintegrate the starship.
To demonstrate the sheer impossibility of interstellar travel, we will examine the physics and economics involved in sending a one-person spacecraft to Alpha Centauri. We will begin by providing our astronaut with a life-support capsule the size of a small car and weighing 1,000 kg. Since matter/antimatter fuel provides the only possible means of accelerating a craft to such a high velocity, we will assume that we have access to large quantities of this fuel. Now, how much fuel would it take to accelerate our 1,000-kg craft to 0.99c? Using the relativistic momentum formula p = mv/√(1 − v²/c²), we can calculate that at 0.99c the 1,000-kg ship would have a momentum (p) of about 7,000 kg·c. Thus, it would take 14,000 kg of matter/antimatter fuel to accelerate the craft to 0.99c, because the photons produced by its annihilation would have a total momentum of 14,000 kg·c (p = mc). According to Newton’s third law of motion, half of this momentum would be carried by the backward-moving photons and the other half would remain with the forward-moving spaceship.
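These momentum figures follow from the relativistic momentum formula. A short sketch, expressing momenta in kilograms times c:

```python
from math import sqrt

M_SHIP = 1000.0  # kg
V = 0.99         # cruise speed as a fraction of c

gamma = 1 / sqrt(1 - V**2)
p_ship = gamma * M_SHIP * V  # relativistic momentum p = gamma*m*v, ~7,000 kg*c

# The annihilation photons carry momentum p = mc.  Half goes backward (thrust)
# and half forward with the ship, so the fuel must supply twice the ship's momentum:
m_fuel = 2 * p_ship          # ~14,000 kg of matter/antimatter

print(f"gamma = {gamma:.2f}, p = {p_ship:.0f} kg*c, fuel = {m_fuel:.0f} kg")
```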
However, the round trip to a star would require not one acceleration to 0.99c but four, because it takes just as much energy to decelerate to a stop. Since the possibility of finding matter/antimatter fuel readily available within the Alpha Centauri solar system is very uncertain, prudence would dictate that the entire amount of fuel necessary for the journey be carried from earth. This would mean that when the astronaut left the earth, he would have to carry 41,160,000 kg of matter/antimatter containing about 3.7 × 10²⁴ J of energy. At 10 cents per kilowatt-hour it would cost over $4,800,000,000,000,000,000. This is approximately the same amount of energy that the entire earth receives from the sun over a period of about 100 days, and the cost would be several million times greater than the current US national debt. These figures clearly demonstrate the physical as well as the financial impossibility of travel between the stars. Even to send a one-kilogram probe on a flyby mission to a star at 0.99c would cost $182 billion in energy alone.
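The energy content of that fuel load follows directly from E = mc²:

```python
C = 2.998e8          # speed of light, m/s
m_fuel = 41_160_000  # kg of matter/antimatter for the full round trip

energy = m_fuel * C**2  # E = mc^2
print(f"Total fuel energy: {energy:.2e} J")  # ~3.7e24 J
```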
Faster Than Light
It is possible in practice to travel faster than the speed of light, but not in principle. To understand how this can be, we will send Max on a journey to Alpha Centauri. This is the nearest star to our solar system and is four light-years away. While antimatter fuel may be too expensive and too dangerous for humans to use, these factors are of no concern to Max. In his spacecraft powered by antimatter, Max would be able to accelerate past the speed of light as determined by his own measurements. Max loads his 1-kg spacecraft with one kilogram of anti-water and one kilogram of water, for a total mass of three kilograms. Max then points his craft toward Alpha Centauri, turns on the engine, and accelerates until all of his fuel is depleted. Max then uses three different methods to determine his velocity.
The first method of determining his velocity is to gauge it by the amount of fuel expended. Since his propulsion system converts mass directly into photons, and since all photons have a momentum equal to their mass times the speed of light (p = mc), the momentum of his one-kilogram craft must equal the momentum of the two kilograms of photons used to propel his ship (p = 2 kg·c). Since momentum is equal to mass times velocity (p = mv), Max must conclude that his two units of momentum are carrying him at twice the speed of light (v = p/m = 2c).
The second method is to use the ship’s accelerometer and clock to operate a speedometer. The amount and duration of all accelerations are combined to monitor the vessel’s speed. Because of the transformations of mass and time caused by Max’s increasing velocity, his accelerometer measures more acceleration than would be measured by an observer on earth monitoring the ship’s progress. As the mass of the accelerometer’s movable weight increased, more and more tension would be put on the spring for the same amount of acceleration. At the point where all of Max’s fuel was used up, his speedometer would register twice the speed of light.
The third method Max uses is simply to measure the time that it takes him to reach Alpha Centauri. Since his slowed-down, time-dilated on-board clock registers only two years for the four-light-year journey, Max is able to confirm his other calculations and conclude that he is indeed traveling at twice the speed of light.
To the observer on earth, Max is moving at only 89.5% of the speed of light and, by earth time, takes 4.47 years to complete his trip. However, because of the time dilation caused by his motion, Max’s clock registers only 0.447 year for each year measured by clocks on earth.
From his perspective, Max would indeed be traveling at twice the speed of light, because any measurement that he can make will verify this conclusion. However, to an observer on Earth, his spacecraft will always appear to be moving at less than the speed of light.
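The earth-frame numbers in this thought experiment can be checked by solving the relativistic momentum relation p = γmv for v. A minimal sketch (assuming, as in the story, that all of the photon momentum ends up in the 1-kg craft):

```python
from math import sqrt

M_SHIP = 1.0    # kg, Max's craft (working in units where c = 1)
M_FUEL = 2.0    # kg of water + anti-water converted entirely into photons
DISTANCE = 4.0  # light-years to Alpha Centauri

p = M_FUEL      # photon momentum p = mc, so 2 kg of fuel gives p = 2 kg*c
# Solving p = m*v / sqrt(1 - v^2) for the earth-frame velocity v:
v = p / sqrt(M_SHIP**2 + p**2)   # = 2/sqrt(5), about 0.894c
gamma = 1 / sqrt(1 - v**2)

earth_years = DISTANCE / v        # ~4.47 years on earth clocks
ship_years = earth_years / gamma  # ~2.0 years on Max's time-dilated clock

print(f"v = {v:.3f}c, earth time = {earth_years:.2f} yr, ship time = {ship_years:.2f} yr")
```

Max divides the four-light-year distance by his two ship-years and gets 2c; the earth observer divides the same distance by 4.47 earth-years and gets 0.894c, which is the point of the paradox.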
One of the fallacious assumptions of classical physics is the belief that physical events and phenomena occur within a simple and precise mathematical framework. This idea is an extension of the principle of Occam’s Razor, which states that the simplest idea that can account for all the facts is preferred over an unnecessarily complex one. While this may be true for conceptual ideals, it cannot be extended to mathematics.
When James Clerk Maxwell first tried to understand and explain the electromagnetic interaction, he developed a system in which the magnetic forces were enacted by hollow tubes extending from the magnetic bodies. The interaction occurred when the hollow tubes from one body came in contact with hollow tubes from another body. However, when he tried to apply mathematical analysis to this idea, he was hindered from proceeding by a wall of mathematical complexity beyond his abilities. Finally, in desperation, he abandoned the whole idea in favor of the concept of the “field.” When Maxwell applied mathematics to the idea of the electromagnetic field, he finally attained a set of beautiful differential equations that described the results of his electromagnetic experiments. This discovery was viewed by many to be the first great revelation toward the ultimate field explanation of everything.
The great beauty of the field in theoretical physics is that the mathematics needed to describe an event is simple and concise compared with the mathematics of the same event in non-field terminology, such as Maxwell attempted with his “hollow tube” explanation of magnetic force. The requirement of theoretical physics that all fundamental theories be presented within a concise mathematical framework has virtually prevented serious theoreticians from ever considering a non-field theory because of its mathematical complexities. Non-field theories are denied serious consideration because they cannot be given a precise mathematical description. Even if someone were able to develop a precise set of non-field equations, they would likely be so complex and mystifying that only the most gifted mathematicians could understand them.
Prevailing logic maintains that as long as we can explain the all-pervading electromagnetic force as a field with such beautiful mathematics, then the truth of the field’s existence cannot be denied. Therefore, if the true underlying nature of electromagnetism is a field, then all other forces must also be field events at their basic level.
Albert Einstein readily embraced this line of reasoning when he developed the theory of General Relativity, which depicted the force of gravity being transferred between objects through a “gravitational field” that each body of mass generated.
Johannes Kepler spent a good many years studying Tycho Brahe’s detailed observations of planetary motion in an effort to resolve the planets’ seemingly complex movements into a simple system that could be applied to all of them. Finally, armed with the Copernican assumption that the earth and the planets revolve around the sun, he was able to develop his three laws of planetary motion, which explained these motions in a simple and concise mathematical way.
Later, Isaac Newton was able to take Kepler’s laws and combine them with his own three laws of motion, his invention of calculus, and his theory of gravitation, to develop a simple and very precise system that was able not only to describe the motions of the planets but the motions of falling apples and the trajectories of cannon balls as well.
For many years Kepler’s laws and Newton’s equations were offered as proof of the idea that the underlying truths of complex natural phenomena could ultimately be expressed in simple and concise mathematical form. Then along came Einstein with his theory of General Relativity, which offered a far more complex system of mathematics that was more accurate than Newton’s system. Newton’s equations were thus shown to be a mere generalization of gravitational phenomena that had nothing whatsoever to do with any kind of underlying truth.
From its earliest beginnings, particle physics has developed and been built upon two basic assumptions: the “point particle” and the “field.” These two ideas have persisted to the present day not so much because they have been fruitful in revealing underlying truth but because they can be so easily manipulated mathematically.
Recently it has been proposed by some that the idea of the point particle be discarded and replaced with the idea of the “string particle.” String theories have shown much promise, but the extremely complex mathematics needed to manipulate them has severely limited their progress. The closer string theories approach the truth of matter, the more complex the mathematical treatment of matter’s actual dynamics becomes.
The idea that ultimate truth can be represented with simple mathematical equations is probably totally false. A simple example of this is the familiar series of circular waves that move away from the point where a pebble is dropped into a quiet pool of water. While these waves can be described in a general way with a simple set of mathematical equations, any true and precise mathematical description of this event would have to include the individual motion of each molecule within this body of water. Such an equation would require more than the world’s supply of paper to print and its complexity would make it virtually meaningless.
The idea of the circlon is easy to describe and illustrate. However, any kind of mathematical description of its complex internal dynamics is presently beyond my abilities. This deficiency does not mean that circlon theory cannot compete with the mathematically simplistic point-particle and field theories of matter. It simply means that perhaps ultimate truth is not as easily accessible to a mathematical format as was once hoped.
In the last portion of this book, nuclear structure is described in what I feel is a very precise mathematical presentation. While these nuclear equations are not what might be called “real” mathematical equations, they describe very accurately the entire body of experimental knowledge surrounding the 2000 or so known isotopes of the chemical elements.
Fields vs. Particles
The conventional wisdom of physics has long maintained that the universe consists of a “substance” which fills the void of space. This substance has had many names over the years as its level of complexity has been repeatedly increased. In earlier times, this space-like substance was called the “luminiferous aether,” and was used as a medium to explain the vast spectrum of wavelike particles that came to be called photons. Later, it was referred to as the “electromagnetic field” in order to explain the non-local effects of electromagnetism, and then as the “gravitational field” to explain the apparent non-local effects of gravitation. In modern times, it is called the “space-time continuum” to explain the conversion of photons to matter, matter to photons, and the relativistic mass, space, and time distortions of the Lorentz Transformation. It has been called the “negative energy sea” to explain the existence of antimatter, and the “pilot wave” to explain the non-local view of quantum mechanics demanded by the Heisenberg Uncertainty Principle.
In short, the voice of modern physics declares that this “space-like” substance is all there is. Matter, energy, space, time and gravity are merely aspects of the many different contortions, dimensions, and dances that this unified substance can exhibit to the observer. Individual particles are seen as mere illusions of measurement, existing only as an intricate part of the unity of this all-pervasive “stuff” that in relativity theory is simply referred to as “space-time.”
The Principle of Absolute Motion in many ways takes the opposite direction from contemporary physics in its quest for a “unified field” theory. Absolute Motion is a totally non-field theory that admits only the existence of expanding mass particles within the negative reality of an absolutely empty Newtonian-Euclidean void called space. Discarded is the idea that space has a fourth dimension or any other dimensions capable of containing time or fields. The only interaction is one mass particle touching, pushing, pulling or becoming physically intertwined with another mass particle as their gravitational expansion continually crowds them towards one another.
The primary parameter of these particles is mass. Space and time are simply aspects of this mass, having no meaning other than to describe the shape and motion of this mass. Within the theory of Special Relativity, time is a function of the properties of space, whereas in the Principle of Absolute Motion, time is merely the means used to quantify the motion of mass. The idea of space-time as used in Relativity is replaced by the opposite and complementary mechanical concept of mass-time in the Principle of Absolute Motion. All interactions are physical and local.
Unified Field Theory
The result of the century-long quest for a unified field theory to explain non-local effects is the generalized idea of the “field.” The field concept replaces the idea of empty space and its simple, Newtonian-Euclidean coordinates of mass, space, and time with a witches’ brew of poly-dimensional mathematical equations, often leading to paradoxes that block the mind from any kind of intuitive conceptualization of the process being described.
The secret to the discovery of a unified field theory is not to show how the four fundamental forces can all be part of a single field process. It is to show that, in fact, there are no “fields” at all. The universe consists of empty, infinite, and fixed space within which numerous particles of mass move with complex motions. The mass of these particles is contained in long, slender, hollow “strings” that are wound into the complex, three-dimensional shapes of the particles. These particles each have a shape that is called a circlon.
One advantage of the Principle of Absolute Motion over the standard theory is that the mechanical circlon-shaped particles of mass can be depicted accurately with three-dimensional models and two-dimensional drawings. In contrast, the point particle with its many fields described by the standard model can only be represented by complex multidimensional equations.