The transport phenomenon of mixing two different gases does not proceed as the theory predicts. This observation is confirmed in practice by the mixing of gases for commercial use. An example is the mixture of nitrogen and helium that is used to test high-pressure piping and equipment for leaks, since helium escapes through the smallest of apertures and simple equipment is available to detect it. However, helium, as a rare gas, is in short supply and very expensive, so a mixture of 5% helium and 95% nitrogen serves the purpose, and companies producing medical and industrial gases are able to supply this mixture. But simply introducing both gases into a storage cylinder, in any order, does not produce a homogeneous mixture suitable for practical use (i.e. with the helium atoms evenly distributed among the far more numerous nitrogen molecules) unless the cylinder is left undisturbed for weeks, which is, commercially speaking, impractical.

If the principles of the kinetic atomic theory of gases are applied to this example of static mixing, we see that, according to the theory, the average velocity of nitrogen molecules in air is around 500 metres per second and that of helium atoms is around 1,300 metres per second. The relative atomic mass of nitrogen is around 14, 3.5 times that of helium at 4 amu (the nitrogen molecule, N2, is about 28 amu), and a typical industrial gas cylinder is around 1.5 metres high and 200 mm in diameter. If introduced after the nitrogen into the top of the cylinder, the lighter helium content would occupy 75 mm of the internal height, and the nitrogen the remaining 1,425 mm.
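As a point of reference, the velocity figures quoted above can be reproduced from the standard kinetic-theory formulas. The following sketch is my own illustration, not taken from any of the sources cited; it assumes a room temperature of 293 K and computes the root-mean-square and mean speeds for nitrogen and helium.

```python
import math

R = 8.314  # J/(mol*K), universal gas constant
T = 293.0  # K, assumed room temperature

def rms_speed(molar_mass_kg):
    """Root-mean-square speed from kinetic theory: sqrt(3RT/M)."""
    return math.sqrt(3 * R * T / molar_mass_kg)

def mean_speed(molar_mass_kg):
    """Mean speed from the Maxwell distribution: sqrt(8RT/(pi*M))."""
    return math.sqrt(8 * R * T / (math.pi * molar_mass_kg))

for name, M in [("N2", 0.028), ("He", 0.004)]:
    print(f"{name}: rms = {rms_speed(M):.0f} m/s, mean = {mean_speed(M):.0f} m/s")
# N2 comes out near 500 m/s and He near 1,350 m/s, in line with the figures quoted above.
```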
The diagram above depicts the nitrogen molecules and the helium atoms at the point of separation in terms of the kinetic atomic theory of gases, with numbers conforming to Avogadro's Law. If uninhibited by collisions, at these velocities the slower nitrogen molecules in the vicinity of the helium at the top of the cylinder could travel to the top of the cylinder and back some 3,300 times in one second, or 200,000 times in one minute. Extending the time to one hour would enable each nitrogen molecule to make this journey around 12 million times, a total distance of some 1,800 kilometres. Each helium atom, travelling in the other direction to the bottom of the cylinder and back, could make the trip around 400 times in one second, 24,000 times in a minute and roughly 1,500,000 times in an hour, covering a total distance of over 4,000 kilometres. But of course collisions of any single atom with others are frequent, and such an atom would not move linearly but in a completely random manner.

This is an unusually frank comment from a Russian textbook: (1) “Since this transport is ensured by motion of the molecules, and the velocities of the molecules are high, diffusion should seem to occur rapidly with the concentrations leveling out almost instantaneously. Experiments show, however, that at atmospheric pressure diffusion is a very slow process, and mixing in the absence of motion of the gas as a whole may last several days.” (My emphases) Ref: Molecular Physics, Kikoin and Kikoin, Mir Publishers, Moscow.

In an attempt to explain this problem the proponents of kinetic theory suggest that while the molecules in the above example move chaotically at high velocity, collisions with the molecules of the other gas somehow mean they always end up in the area from which they started, in this particular instance showing both chaotic and ordered characteristics at the same time. In other words, it is suggested that any collisions they endure with molecules of the other gas must result in their returning to the area in which they originated. But this is a direct contradiction of the principle that the collisions are completely random or chaotic, and instead says that these interactions are, by some inexplicable means, regulated or controlled. Given the postulated random kinetic movement, together with the large volumes of empty space separating molecules and atoms and their high velocities, the fact that mixing, in commercial and experimental practice, is very slow is direct and incontrovertible proof that this theory is invalid.

To produce a usable gas for commercial purposes, after injecting the helium into the nitrogen in the cylinder, quicker mixing is achieved by placing the cylinder horizontally on rollers and rotating (or ‘rumbling’) it for hours. This process creates a frictional effect between the internal walls of the rotating cylinder and the gases in contact with them, as indicated in the diagram below, where the more massive nitrogen molecules experience a greater frictional effect from the cylinder walls than the lighter helium atoms. Clearly mixing can only occur if there is a consistent interaction between the internal gases and the internal surfaces of the cylinder.
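For completeness, the random-walk scaling that kinetic theory invokes in the explanation criticised above can be illustrated numerically. The sketch below is mine, with assumed illustrative values (a fixed mean free path of 1×10⁻⁷ m and random step directions); it simply shows how far apart the path length and the net displacement are in such a walk.

```python
import math, random

step = 1e-7       # m, assumed mean free path at atmospheric pressure
n_steps = 100_000 # steps simulated (a real molecule would make ~5e9 per second)

x = y = z = 0.0
random.seed(1)
for _ in range(n_steps):
    # pick a random direction uniformly distributed over the sphere
    costheta = random.uniform(-1.0, 1.0)
    sintheta = math.sqrt(1.0 - costheta**2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    x += step * sintheta * math.cos(phi)
    y += step * sintheta * math.sin(phi)
    z += step * costheta

path_length = n_steps * step                   # total distance along the zig-zag path
net_displacement = math.sqrt(x*x + y*y + z*z)  # straight-line distance from the start
print(f"path length      = {path_length*1000:.1f} mm")
print(f"net displacement = {net_displacement*1000:.4f} mm")
# The net displacement grows only as sqrt(n_steps), so it is thousands of
# times smaller than the distance actually travelled along the path.
```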
From about 600 BC Greek philosophers were speculating about the nature of the physical world and of matter itself. Thales at this time suggested that all matter originated from water (and that the Earth was a flat disk floating in a sea of water). Anaxagoras, who died in 428 BC, suggested that all matter consisted of infinite numbers of infinitely small particles he called ‘seeds’, and that all bodies are simply aggregations of these particles. Leucippus, who was aware that matter had three natural states and has been credited with founding an atomic theory of matter, developed Anaxagoras’ ideas. However, his writings on the subject did not survive, unlike those of his pupil Democritus, who about 400 BC suggested that ‘matter consisted of minute hard particles moving as separate units in empty space’. This was the first suggestion of the ‘existence’ of the vacuum state.
But numerous questions remained, such as how these solid particles, even if moving, could remain suspended in empty space without falling, and, as Aristotle (384-322 BC) subsequently asked: “How did these particles originally attain their velocity?” Aristotle also rejected the concept of ‘empty space’ or a vacuum, clearly articulating that a vacuum could not exist, and promoted the idea that the world was composed of just four elements: earth, air, fire and water. His ideas were generally accepted for two millennia, until Galileo’s pupil Torricelli performed experiments in 1643 that were widely believed to have proven the ‘existence’ of the vacuum. This led to a re-evaluation of Aristotle’s four-elements concept, and of the alternative ideas of Democritus. As a result, soon after, in 1647, Pierre Gassendi resurrected atomic theory and wrote that ‘atoms (are) similar in substance, although different in size and form, (and) move in all directions through empty space and (are) devoid of all qualities except absolute rigidity’.
Bernoulli suggested in a 1738 publication that ‘the pressure of a gas on the walls of a vessel is the result of the innumerable collisions of its molecules with the walls’, and variations in pressure were explained by the suggestion that ‘heat applied to a gas results in an increase in the velocity of the molecules and a corresponding increase in collisions with the walls’. In the latter part of the 1700s two of Aristotle’s four elements, air and water, were separated into their constituent gases, which were identified and named, and the ‘four elements’ concept was finally proven to be false.
In 1808 Dalton published his theory of atoms (as solid, perfectly elastic and indestructible spheres) based upon his observations of how different elements combine to form compounds, such as the combination of hydrogen and oxygen to form water. Dalton also presented in this publication his Law of Multiple Proportions (i.e. when two elements combine to form a series of compounds, the weights of one element that combine with a fixed weight of the second element are in a ratio of small whole numbers). This law, together with Gay-Lussac’s Law of Combining Volumes (i.e. when gases combine they do so in volumes that are in a ratio of small whole numbers), indicated that matter is divided into discrete, separate particles, and both laws were seen as a confirmation of the atomic hypothesis.
In 1827 the British botanist Robert Brown discovered the phenomenon, later called Brownian motion, of the observed random movement of microscopic particles suspended in a gas or liquid. This motion of, for example, grains of pollen or smoke particles in air appears to be completely random in both direction and magnitude. It was later put forward as a visible manifestation of the effect of ‘kinetic’ atoms/molecules colliding with these particles: it was suggested that the inherent motion of the atoms/molecules, in their collisions with the suspended particles, induces the observed random movements.
In 1834 Émile Clapeyron introduced his equation of state for an ideal gas, which “is a good approximation to the behavior of many gases under many conditions, although it has several limitations”. “The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved.” (Wikipedia)
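For illustration, the ideal gas law itself is a one-line calculation. The following sketch is mine, not from the quoted source; it evaluates p = nRT/V for one mole at 0 °C in 22.4 litres, which should return roughly one atmosphere.

```python
R = 8.314  # J/(mol*K), universal gas constant

def ideal_gas_pressure(n_mol, volume_m3, temperature_k):
    """Ideal gas law p = nRT/V (Clapeyron's equation of state)."""
    return n_mol * R * temperature_k / volume_m3

# one mole at 0 deg C in 22.4 litres should give roughly one atmosphere
p = ideal_gas_pressure(1.0, 0.0224, 273.15)
print(f"p = {p:.0f} Pa  (~{p/101325:.2f} atm)")
```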
Debate on the merits of the competing theories of matter continued during the first half of the 1800s, but the next significant development came when Clerk Maxwell, following work by Krönig and Clausius, in 1859 put forward his ‘Law of Distribution of Velocities’ as a statistical or mathematical description of the distribution of kinetic molecular velocities in gases. The importance of the Maxwell distribution function, and of the later, more general Maxwell-Boltzmann distribution function, ‘is that they contain all the information necessary to calculate any measurable variable of a gas’, such as the pressure, temperature or volume. Clerk Maxwell based his statistics on the following assumptions.
1) Molecules are perfectly elastic balls of atomic dimensions that are in perpetual random motion.
2) The average kinetic energy of the molecules is proportional to the absolute temperature of the gas.
3) The molecules do not exert any appreciable attraction on each other.
4) The volume of the molecules is infinitesimal when compared to the volume of the gas.
5) The time spent in collisions is small compared with the time between collisions.
(Inherent in assumption 4 is the concept of ‘empty space’. Clerk Maxwell did not define this, either as a pure vacuum or as an ‘aether’, although he did not accept that a vacuum was a possible state.)
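For reference, the Maxwell-Boltzmann speed distribution that follows from these assumptions can be written down and integrated numerically. The sketch below is my own illustration, assuming nitrogen at 293 K, and checks that the mean speed recovered from the distribution matches the analytic result.

```python
import math

k = 1.380649e-23      # J/K, Boltzmann constant
T = 293.0             # K, assumed temperature
m = 0.028 / 6.022e23  # kg, mass of one N2 molecule (assumed example gas)

def f(v):
    """Maxwell-Boltzmann speed distribution for a gas at temperature T."""
    a = m / (2 * math.pi * k * T)
    return 4 * math.pi * a**1.5 * v**2 * math.exp(-m * v**2 / (2 * k * T))

# crude numerical integration of v*f(v) to recover the mean speed
dv = 1.0
mean = sum(v * f(v) * dv for v in range(0, 4000))
print(f"mean speed from the distribution : {mean:.0f} m/s")
print(f"analytic sqrt(8kT/(pi m))        : {math.sqrt(8*k*T/(math.pi*m)):.0f} m/s")
```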
Some principles of kinetic atomic theory are described as follows:–
Atoms in a gas, within a container, are ‘rushing around at different velocities and bouncing off each other and the walls like a three-dimensional game of billiards’ and ‘are moving in random directions, and because as many move in one direction as another, the average velocity of the molecules is zero’ – in other words the gas as a whole is not moving or producing unequal pressure on any inside surface of the container. ‘Pressure arises from the multiple collisions the atoms of a gas have with the walls that contain the gas’ and ‘heat applied to a gas results in an increase in the velocity of the atoms and a corresponding increase in collisions with the walls’. Also ‘when the fast moving atoms of a hot gas collide with slower moving atoms of a cooler gas, kinetic energy is transferred from the ‘hot’ to the ‘cold’ atoms’. ‘The collisions between atoms/molecules are completely elastic’, or in other words no energy of motion or ‘kinetic’ energy is lost as a result of any collision with other atoms/molecules of the gas or of the container.
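The claim that pressure arises from wall collisions corresponds to the standard relation p = nm⟨v²⟩/3. The following sketch is my own, using an assumed atmospheric number density and the rms speed computed earlier; it shows that this relation does return a pressure of about one atmosphere.

```python
m = 0.028 / 6.022e23   # kg, assumed mass of one N2 molecule
n = 2.5e25             # molecules per m^3, rough number density of air at room temperature
v_rms = 511.0          # m/s, rms speed of N2 at 293 K (computed in the earlier sketch)

p = n * m * v_rms**2 / 3.0   # kinetic-theory pressure from wall bombardment
print(f"pressure from molecular collisions : {p:.0f} Pa (~{p/101325:.2f} atm)")
```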
‘The duration of collisions of atoms is about one thousandth of the time between collisions. Atoms spend the overwhelming part of their time in free motion, and collisions are a rare event in their life.’ In addition the theory suggests that the atoms of a gas only take up a minute proportion of the actual space the gas occupies. ‘An atom generally takes up only 1/1000th of the volume available to it and if we were to scale atoms to the size of human beings with a radius of 0.5 m, they would be spaced some 10m apart.’
In other words, in any given volume of gas only about 0.1% is matter in the form of atoms. To put this in some sort of perspective, 1,000 cubic centimetres (one litre) of gas contains a total volume of atomic matter that could be fitted into 1 cc, while the remaining 999 cc is empty ‘space’. With this spacing the atoms, on average, have to travel some distance before colliding with another, and the theory states that ‘the mean free path of an atom is some 3000 times greater than the diameter of the atom itself’. Note: the quotations above are extracts from various university-level textbooks.
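The occupied-volume and mean-free-path figures quoted above can be estimated from two assumed inputs: the number density of a gas at atmospheric pressure and an effective molecular diameter. The sketch below is my own rough calculation; the results are sensitive to the diameter chosen, so it should be read as an order-of-magnitude check only.

```python
import math

n = 2.69e25    # molecules per m^3 at 0 deg C and 1 atm (Loschmidt number)
d = 3.0e-10    # m, assumed effective molecular diameter (a rough, typical value)

volume_fraction = n * (math.pi / 6.0) * d**3               # share of space filled by the molecules themselves
mean_free_path = 1.0 / (math.sqrt(2) * math.pi * d**2 * n) # standard hard-sphere estimate

print(f"fraction of the volume occupied by molecules : {volume_fraction:.2e}")
print(f"mean free path                               : {mean_free_path*1e9:.0f} nm")
print(f"mean free path / molecular diameter          : {mean_free_path/d:.0f}")
# With these assumed values the occupied fraction is of order 0.01-0.1 % and the free
# path is a few hundred diameters; both figures depend strongly on the diameter chosen.
```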
In 1873 van der Waals introduced his equation of state, which is “an equation relating the density of gases and liquids to the pressure (p), volume (V), and temperature (T) conditions. It can be viewed as an adjustment to the ideal gas law that takes into account the non-zero volume of gas molecules, which are subject to an inter-particle attraction. It successfully approximates the behavior of real fluids above their critical temperatures and is qualitatively reasonable for their liquid and low-pressure gaseous states at low temperatures. However, near the transitions between gas and liquid, in the range of p, V, and T where the liquid phase and the gas phase are in equilibrium, the van der Waals equation fails to accurately model observed experimental behaviour, in particular that p is a constant function of V at given temperatures. As such, the van der Waals model is not useful for calculations intended to predict real behavior in regions near the critical point. Empirical corrections to address these predictive deficiencies have been inserted into the van der Waals model, e.g., by Clerk Maxwell in his equal area rule, and related but distinct theoretical models, e.g., based on the principle of corresponding states, have been developed to achieve better fits to real fluid behaviour in equations of comparable complexity.” (Wikipedia)
“It is quite clear from the (given) examples that this (the van der Waals) equation (of state) is only approximately true and is suitable only for rough quantitative assessments of the relationships between the parameters determining the state of a real substance.”1(My emphases)
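For illustration of the kind of ‘rough quantitative assessment’ being referred to, the van der Waals correction can be compared directly with the ideal gas law. The sketch below is my own, using commonly tabulated constants for CO2 (assumed values, not taken from the quoted source).

```python
R = 8.314  # J/(mol*K), universal gas constant

def ideal_pressure(n, V, T):
    """Ideal gas law, p = nRT/V."""
    return n * R * T / V

def vdw_pressure(n, V, T, a, b):
    """van der Waals: (p + a n^2/V^2)(V - n b) = nRT, solved for p."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# assumed textbook constants for CO2: a ~ 0.364 J*m^3/mol^2, b ~ 4.27e-5 m^3/mol
n, V, T = 1.0, 1.0e-3, 300.0   # one mole in one litre at 300 K
print(f"ideal gas     : {ideal_pressure(n, V, T)/1e5:.1f} bar")
print(f"van der Waals : {vdw_pressure(n, V, T, 0.364, 4.27e-5)/1e5:.1f} bar")
# The finite molecular volume (b) raises the pressure while the attraction term (a)
# lowers it; for CO2 under these conditions the net correction is roughly ten per cent.
```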
With respect to the atom itself, J J Thomson relegated Dalton’s model to history and introduced a new structure, his ‘plum pudding’ atom, proposed in 1904 following his identification of the electron in 1897. “J.J. Thomson studied the conduction of electricity through gases, and experimented with cathode rays. He realised that he could deflect the cathode rays in an electric field produced by a pair of metal plates and argued that the cathode ray consisted of small charged particles, and by using different types of cathodes realised that the particles existed in many types of atoms. He concluded that the particles were a universal constituent of matter – they form part of all the atoms in the universe. We now know these particles as electrons.”2
To return to kinetic-atomic theory, it is necessary here to point out that it was not generally accepted at the turn of the century. Nobel Laureate Max Planck, for example, wrote that “every attempt at elaborating the theory has not only not led to new physical results but has run into overwhelming difficulties”.3 Another Nobel winner, Wilhelm Ostwald, said that it is “a superficial habit to cover up rather than promote actual scientific tasks by arbitrary assumptions about atomic positions, motion and vibrations”.3
But in a 1905 paper on Brownian motion, Albert Einstein asserted ‘that Brownian motion, although random, obeys a definite statistical law and is in accordance with statistics used by Boltzmann and Maxwell to describe the kinetic motion of molecules’. Then in 1908, Jean Perrin (who ‘was committed to the usefulness and the truth of molecular kinetic theory’) subjected Brownian motion to detailed microscopic analysis over a period of five years, the results of which were generally accepted as confirming the existence of atoms and molecules and of their random kinetic motion. In his book ‘Les Atomes’ (an English translation of which was published in 1916) he states that ‘each molecule of the air we breathe is moving with the velocity of a rifle bullet: travels in a straight line between two impacts for a distance of nearly one ten thousandth of a millimetre: is deflected from its course 5000 million times per second –. There are 30 milliard milliard (billion billion) molecules in a cubic centimetre of air, under normal conditions. 3000 million of them placed side-by-side in a straight line would be required to make up 1 millimetre. 20,000 million must be gathered together to make up 1000 millionth of a milligram’. He also subsequently states, with respect to Brownian motion, that ‘every granule suspended in a fluid (i.e. gas or liquid) is being struck continually by the molecules in its neighbourhood and receives impulses from them that do not in general exactly counterbalance each other; consequently it is tossed hither and thither in an irregular fashion.’
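Perrin's figures, as quoted, can at least be checked for internal consistency. The sketch below is my own arithmetic on the numbers given in the passage above.

```python
mean_free_path = 1e-7  # m  ("nearly one ten thousandth of a millimetre")
collision_rate = 5e9   # per second ("5000 million times per second")
per_mm = 3e9           # molecules placed side by side in one millimetre
per_cm3 = 3e19         # "30 milliard milliard" molecules per cubic centimetre

print(f"implied speed            : {mean_free_path * collision_rate:.0f} m/s")
print(f"implied molecular size   : {1e-3 / per_mm:.1e} m")
print(f"implied spacing in a gas : {(1e-6 / per_cm3) ** (1/3):.1e} m")
# The implied speed (~500 m/s), size (~3e-10 m) and spacing (~3e-9 m, about ten
# diameters) are mutually consistent with the figures quoted elsewhere in the text.
```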
Further Perrin says that ‘the work developed by the stoppage of a molecule would be sufficient to raise a spherical drop of water 1 micron in diameter to a height of nearly 1 micron’.
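This particular claim can also be checked by simple arithmetic. The sketch below is my own, assuming water at 1000 kg/m³ and a temperature of 293 K; it compares the work needed to lift such a droplet with the mean translational kinetic energy of a single molecule at room temperature.

```python
import math

k = 1.380649e-23   # J/K, Boltzmann constant
T = 293.0          # K, assumed temperature
rho = 1000.0       # kg/m^3, density of water
g = 9.81           # m/s^2, gravitational acceleration
d = 1e-6           # m, droplet diameter
h = 1e-6           # m, height raised

mass = rho * (math.pi / 6.0) * d**3   # mass of a 1 micron water droplet
work = mass * g * h                   # work to lift it by 1 micron
ke = 1.5 * k * T                      # mean translational kinetic energy of one molecule

print(f"work to lift the droplet   : {work:.1e} J")
print(f"mean molecular KE (3/2 kT) : {ke:.1e} J")
# Both come out at a few times 1e-21 J, so Perrin's comparison is of the right order.
```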
The diagram below shows Perrin’s comparative dimensions with atoms enlarged to 2 mm in diameter; at this scale the surface of the gamboge particle can only be shown as a straight line. Accordingly, at this scale, he is suggesting that a single collision, out of untold billions of simultaneous collisions from every direction over the whole surface of the particle, could move this particle the equivalent of more than 600 metres.
With the subsequent elevation of Einstein to worldwide fame after the First World War, any doubts about the validity of the kinetic theory of gases dissipated.
Ernest Rutherford in 1909 set up an experiment that involved directing helium nuclei, which he called alpha particles, at sheets of gold foil with a thickness of 0.00004 cm. Most of the particles went straight through the foil; however a small number, around 1 in 20,000, were deflected strongly, at an average of 90°, while some came directly back, which astonished Rutherford. Analysing these results led him to propose a completely different picture of the atom in 1911. The ‘Rutherford’ atom has a very small (relative to the total suggested volume of the atom), unimaginably dense nucleus surrounded by one or more minute, and also very dense, electrons orbiting it at high speed. The nucleus consists of protons, which are particles with a positive electrical charge, and neutrons, which are electrically neutral, while the orbiting electrons have a negative charge.
The mass of the electron is calculated to be about 0.0005 of that of the proton (the hydrogen nucleus), and nuclear matter is extremely dense, at around 2 × 10¹⁷ kg/m³. Rutherford calculated the diameter of the nucleus to be between 1/10,000 and 1/100,000 of that of the outer orbit of the electron(s). The outer orbit of the electrons is considered the extent of the atom, and the remaining space between the nucleus and the orbiting electrons is a “perfect vacuum”.4
The outer limits of these orbits, or the ‘shield’ of an atom, are assumed to describe a sphere. To put this in rough perspective, if a hydrogen nucleus were scaled up to the size of a pea, then the orbit of its electron would be greater than the diameter of a football stadium and would be very difficult to see with a diameter of 3 mm. The mass of the pea to scale would be 800 million tonnes, while the electron would weigh about 400,000 tonnes (and would be invisible to the human eye with a diameter of 0.003 mm).
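These scaled figures can be reproduced with a short calculation. The sketch below is my own, and the inputs are assumptions (a nuclear diameter of 1 × 10⁻¹⁵ m, a pea 8 mm across and an atom-to-nucleus diameter ratio of 100,000); with them the output lands close to the figures quoted above.

```python
nucleus_d  = 1.0e-15   # m, assumed nuclear diameter
pea_d      = 8.0e-3    # m, assumed pea diameter
atom_ratio = 1.0e5     # assumed atom-to-nucleus diameter ratio (text: 10,000-100,000)
m_proton   = 1.673e-27 # kg, proton mass
m_electron = 9.109e-31 # kg, electron mass

scale = pea_d / nucleus_d                      # linear magnification
atom_scaled_d = nucleus_d * atom_ratio * scale # scaled diameter of the whole atom
m_nucleus_scaled = m_proton * scale**3         # mass scales with the cube of the magnification
m_electron_scaled = m_electron * scale**3

print(f"scaled atom diameter : {atom_scaled_d:.0f} m")
print(f"scaled nucleus mass  : {m_nucleus_scaled/1000:.2e} tonnes")
print(f"scaled electron mass : {m_electron_scaled/1000:.2e} tonnes")
# With these assumptions the atom spans ~800 m, the 'pea' weighs several hundred million
# tonnes and the electron a few hundred thousand tonnes, matching the order of magnitude
# of the figures quoted above.
```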
Thus the change in the hypothetical structure of the atom is dramatic, from the solid indestructible spheres of Dalton, via Thomson’s ‘plum pudding’ model, to an atom whose mass is concentrated in an almost insignificant volume.
But this new picture did not lead to any dramatic modification or adjustment to kinetic atomic theory in respect to the presumption of the perfect elasticity of molecules and atoms during their mutual collisions, or to the nature of the separating ’empty space’.
With respect to Kinetic Theory, Richard Feynman states in The Feynman Lectures on Physics 5 :- “It is obvious that this is a difficult subject, and we emphasize at the beginning that it is in fact an extremely difficult subject, and that we have to deal with it differently than we have dealt with the other subjects so far. In the case of mechanics and in the case of light, we were able to begin with a precise statement of some laws, like Newton’s laws, or the formula for the field produced by an accelerating charge, from which a whole host of phenomena could be essentially understood, and which would produce a basis for our understanding of mechanics and of light from that time on. – but we do not learn different physics, we only learn better methods of mathematical analysis to deal with the situation.
We cannot use this approach effectively in studying the properties of matter. We can discuss matter only in a most elementary way; it is much too complicated a subject to analyze directly from its specific basic laws, which are none other than the laws of mechanics and electricity. But these are a bit too far away from the properties we wish to study; it takes too many steps to get from Newton’s laws to the properties of matter, and these steps are, in themselves, fairly complicated. We will now start to take some of these steps, but while many of our analyses will be quite accurate, they will eventually get less and less accurate. We will have only a rough understanding of the properties of matter.”
(Note the emphasis ‘extremely‘ is Feynman’s own, the others are mine)
So today this, now very complex, theory remains firmly in place as the ultimate basis of all atomic physics, and from its core assumptions the hypothetical structures of the wider universe and of the sub-atomic arena have been developed. But the main, and by far the most important, problem with this theory of discontinuous atoms also remains firmly in place: the transmission of gravity through and within this hypothetical structure is inexplicable. It is an undeniable fact that it is impossible to transmit a force between two masses, of any dimension, through a non-resistive, non-interacting ’empty’ space of any hypothetical, speculative description. The transmission of any force is totally dependent on action and reaction, as defined by Newton’s Third Law, and there is no such thing as ‘action-at-a-distance’. As Newton wrote over 350 years ago :-
“That gravity should be innate, inherent and essential to matter, so that one body may act upon another at a distance through a vacuum without the mediation of any thing else by and through which their action or force may be conveyed from one to another, is to me so great an absurdity that I believe no man who has in philosophical matters any competent faculty of thinking can ever fall into it.”
He also wrote that “Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things.” Today it is not only the kinetic theory of gases that is extremely complex; it is all of current theoretical physics, from sub-atomic particle physics to astrophysics. The result of this complexity is that any two physicists, even those working in similar areas, will have different interpretations and opinions, and put four or more together and they will argue for hours without coming to any mutual agreement.
Around 170 years ago, in his experiments, Faraday discovered that when a magnet is rotated, the field lines that it generates within and through the atmosphere do not rotate in concert, as shown in the diagrams below.
Francisco Müller writes in his paper “Unipolar Induction Revisited”:-
“The problem of unipolar induction arises from experiments performed by Michael Faraday in 1832 as part of his investigation of electromagnetic induction.
These experiments created some difficulties that Faraday sought to answer in a series of experiments that he performed in 1851. These experiments resulted in the surprising conclusion that the magnetic field lines do not rotate or participate in the rotational motion of the magnet, which nonetheless produces an electromotive force or emf.”
“Rotating a copper disk above a magnet (Fig. A) Faraday induced a current in OECR. Rotating disk AND magnet together he obtained the same result, (Fig. B) and also removing the disk altogether (Fig. C). WHERE is the seat of induction in the latter case? Along OR, within the magnet? Or along ECR?”
When this copper disk is rotated in the magnet’s field, it is observed that a current is generated, and when both are rotated in concert the same current is generated.
This demonstrates clearly that the field generated by the magnet is transmitted to the copper disk and is acting directly through it, as is depicted in Müller’s images below.
Müller added that Faraday’s –
“conclusion was received as counter-intuitive and has been resisted as the correct explanation ever since.”
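For reference, the emf generated between the axle and rim of such a rotating disk is conventionally given as ε = ½BωR². The sketch below is my own illustration with assumed values for the field strength and disk radius, using the 60 rpm rotation rate discussed later in this section.

```python
import math

B = 0.5         # T, assumed axial field strength over the disk
radius = 0.05   # m, assumed disk radius
rpm = 60.0      # rotation rate used in the discussion below

omega = 2.0 * math.pi * rpm / 60.0   # angular velocity in rad/s
emf = 0.5 * B * omega * radius**2    # standard unipolar-generator result: emf = B*omega*R^2/2
print(f"emf between axle and rim : {emf*1000:.2f} mV")
# At 60 rpm and these modest values the induced emf is only a few millivolts, which is
# why such experiments are usually done with strong magnets and sensitive galvanometers.
```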
Such an assumed rotative motion of the field is as indicated in the diagram below, but the fact is that in the nearly two centuries since, no experiment has proven that the field rotates in concert with the magnet.
“Several experiments have been proposed using electrostatic measurements or electron beams to resolve the issue, but apparently none have been successfully performed to date.” Wikipedia
In the following image the magnet is again shown at the atomic scale to reveal its structure: in the vertical image all the individual iron atoms of this permanent magnet are shown aligned N-S and, as is observed, the external magnetic field generated by it does not rotate in concert with it in the surrounding atmosphere.
But this field is, of hypothetical necessity, said to be independent of the discontinuous ‘kinetic’ atoms of the atmosphere and is assumed to act through the relatively huge volume of intervening vacuum, and in such hypothetical circumstances there could be no resistance to a rotative motion of the observed, and continuously generated, magnetic field through such a vacuum.
The question now is how the field is transferred through the copper disk in Faraday’s experiments.
In the images below, X depicts the currently accepted structural arrangement of the atoms in a copper disk, where it is stated that all these atoms are randomly aligned and are rotating and vibrating in place.
But when this copper disk is placed above the strong iron magnet, for the magnetic field to emanate and act through the copper, emerge from the top surface into the atmosphere as observed (as depicted in A & B above), and then link onwards to the south pole of the magnet, the fields of all the copper atoms must be induced into alignment with the strong N-S field of the permanent iron magnet, as shown in Image Y below.
But in scientific publications it is asserted that most elements, including copper, are non-magnetic and so are not influenced by this field.
Faraday was made aware that the flames of a fire deviate when subjected to a magnetic field, and he also carried out experiments which showed that a piece of plate glass, when suspended vertically between two magnets within their mutual N-S fields, was induced into rotating in concert with the magnets.
This proves that matter in general is influenced by a magnetic field, and it is quite obvious that, for these interactions to occur, the field must be transmitted directly to and through these materials.
The magnetic field generated by a strong magnet is observed to act around it continuously, both laterally and longitudinally, and accordingly there can be no ultimate, dimensional point where the field is not acting within the atmospheric gases between the N&S poles of the magnet.
In Faraday’s experiments the magnet is rotated in a vertical plane (at say 60 rpm) but this rotational velocity of the magnet is minuscule in comparison to the, apparently instantaneous, transmission of the magnetic field acting externally through the atmosphere.
It is accordingly assumed that this field is essentially static, but in fact it is propagating at the, practically immeasurable, speed of light in the atmosphere, and accordingly is also rotating immeasurably within the atmosphere.
But as a magnetic field cannot possibly act and react within and through a vacuum of any minuscule, hypothetical, inter-atomic volume, it must therefore be propagating directly from atom to atom through a continuum of atmospheric gases.
The atoms which compose the atmosphere are naturally aligned to the Earth’s observed, and continuous, magnetic field, from which alignment they are diverted by the far stronger local field of an iron magnet.
There is only one possible reason for this observed, and apparently static, magnetic field, which is that the field is not acting independently of the gases surrounding the magnet but is acting directly via the atoms of the relatively static atmospheric gases.
The image below is of a magnet rotating at 60 rpm within the static and continuous atmospheric gases surrounding it, and this rotative motion generates an emf.