Faraday Paradox

Around 170 years ago Faraday discovered, in his experiments, that when a magnet is rotated, the field lines it generates within and through the atmosphere do not rotate in concert with it.

Francisco Müller in his paper “Unipolar Induction Revisited” writes:-

“The problem of unipolar induction arises from experiments performed by Michael Faraday in 1832 as part of his investigation of electromagnetic induction.

These experiments created some difficulties that Faraday sought to answer in a series of experiments that he performed in 1851. These experiments resulted in the surprising conclusion that the magnetic field lines do not rotate or participate in the rotational motion of the magnet, which produces an electromotive force or emf.”

“Rotating a copper disk above a magnet (Fig. A) Faraday induced a current in OECR. Rotating disk AND magnet together he obtained the same result (Fig. B), and also removing the disk altogether (Fig. C). WHERE is the seat of induction in the latter case? Along OR, within the magnet? Or along ECR?”

(Müller notes that physicist Wilhelm Eduard Weber introduced the phrase “unipolar induction”, “probably because only one pole of the magnet is involved”)

http://www.marmet.org/louis/induction_faraday/mueller/muller.htm

When this copper disk is rotated in the magnet’s field a current is generated, as is observed, and when both are rotated in concert the same current is generated.
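For scale, the standard expression for the emf developed between the axle and the rim of such a rotating disk generator is ε = ½BωR²; a minimal sketch with assumed, illustrative figures for the field strength and disk radius (neither is given in the text):

```python
# Faraday disk (homopolar) generator: emf = (1/2) * B * omega * R^2.
# B and R below are assumed, illustrative values, not from the text.
import math

B = 0.5            # assumed magnetic flux density, tesla
R = 0.05           # assumed disk radius, metres (5 cm)
rpm = 60.0         # rotation rate

omega = 2.0 * math.pi * rpm / 60.0     # angular speed, rad/s
emf = 0.5 * B * omega * R**2           # volts, axle to rim
print(f"emf: {emf * 1000:.2f} mV")     # ~3.93 mV
```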

This demonstrates clearly that the field generated by the magnet is transmitted to the copper disk and is acting directly through it, as is depicted in Müller’s images above.

The question now is how the field is transferred through the copper disk.

The image, X, below depicts the currently accepted, i.e. hypothetical, structural arrangement of the atoms in an isolated copper disk, where it is said that the atoms are randomly aligned and are rotating and vibrating in place.

But when this copper disk is placed above the strong iron magnet, the magnetic field must emanate and act through the copper, emerge (as observed) from the top surface into the atmosphere (as in A & B above), and then link onwards to the south pole of the magnet. This means that the fields of all the copper atoms are induced into alignment with the strong N-S field of the permanent iron magnet, as depicted in Image Y.

In scientific publications it is asserted that most elements, including copper, are non-magnetic.

Faraday was made aware that the flames of a fire deviate when subjected to a magnetic field, and his experiments showed that a piece of plate glass, placed vertically between two magnets set N-S, was induced into partially rotating in their mutual field.

Other more recent experiments were carried out with a wooden toothpick placed horizontally between two magnets; the toothpick then rotated 90° into vertical alignment with their N-S field.

This proves that matter in general is influenced by a magnetic field, and it is quite obvious that, for these interactions to occur, the field must be transmitted directly through these materials.

Müller added that Faraday’s –

“conclusion was received as counter-intuitive and has been resisted as the correct explanation ever since.

There is an extensive literature detailing arguments both for and against the conclusion advanced by Faraday.

Opponents argue that the magnetic field lines do rotate with the magnet and present experiments and arguments that support that position.”

Such a rotative motion of the field is indicated in the diagram below, but the fact is that, over the intervening 170 years, no experiment has been carried out that proves this alternative conjecture.

In the following image the magnet is again reduced to sub-microscopic dimensions to show its atomic structure. In the vertical image all the individual iron atoms of this permanent magnet are shown aligned N-S and, as is observed, the magnetic field generated by the magnet does not rotate in concert with it.

But this field is, of hypothetical necessity, said to be independent of the discontinuous ‘kinetic’ atoms of the atmosphere and thus to act through the relatively huge volume of interceding vacuum; in such hypothetical circumstances there could be no resistance to a rotative motion of the generated magnetic field through this vacuum.

Of course the magnetic field generated by a strong magnet acts around it continuously, laterally and longitudinally, and accordingly there is no ultimate, dimensional point where the field is not acting within the atmospheric gases between the poles of the magnet.

There is only one possible reason for this observed static magnetic field, which is that the field is not acting independently of the gases surrounding the magnet but is acting directly through the atoms of the static atmospheric gases.

In this case it is obvious that the magnetic field propagates at the speed of light, while the rotative motion of the magnet within the field is, in relative terms, minuscule at (say) 60 rpm.

And so while the internal magnetic field acting within the magnet itself is rotating at 60 rpm, the external magnetic field generated by the magnet, and acting at the speed of light in the atmosphere, is essentially static.
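The disparity can be put in figures; a minimal sketch, assuming a small magnet of 2 cm radius (an illustrative value, not from the text):

```python
# Rim speed of a magnet spinning at 60 rpm versus the propagation
# speed of its field. The magnet radius is an assumed value.
import math

c = 299_792_458.0      # propagation speed of the field, m/s
radius = 0.02          # assumed magnet radius: 2 cm
rpm = 60.0             # rotation rate from the text

omega = 2.0 * math.pi * rpm / 60.0    # angular speed, rad/s
rim_speed = omega * radius            # tangential speed of the rim, m/s

print(f"rim speed: {rim_speed:.3f} m/s")      # ~0.126 m/s
print(f"ratio rim/c: {rim_speed / c:.1e}")    # ~4.2e-10
```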

But as a magnetic field cannot possibly act and react within and through a vacuum of any minuscule, hypothetical, inter-atomic volume, it must therefore be propagating directly, atom to atom, through a continuum of atmospheric gases.

The atoms that compose the atmosphere are naturally aligned to the Earth’s observed, and continuous, magnetic field, and from this alignment they are diverted by the far stronger local field of an iron magnet.

There is no other possible explanation for Faraday’s experimental results.

The image below is of a magnet rotating at 60 rpm within the static and continuous atmospheric gases surrounding it, which rotative motion generates an emf.


The Speed of Light

From Newton’s time to the middle of the 20th century it was generally believed by scientists that the atmosphere of the Earth extended to a certain altitude, around 80 km, beyond which the perfect vacuum of space began.

In the 1950s “most scientists visualised our planet as a solitary sphere traveling in a cold dark vacuum of space around the Sun”.

In 1919 the Royal Society of London sent out two expeditions to observe a total eclipse of the sun, the purpose of which was to view a star beyond the sun whose position would appear close to its surface, observable only while the sun was shielded by the moon during the eclipse.

One went to Sobral in Brazil and one to the island of Principe, off West Africa, and the latter was led by Sir Arthur Eddington.

Photographic plates were exposed at the time of the eclipse in both places, and later analysis showed that the position of a star near to the sun’s surface deviated from its true position by about 1.6” (seconds of arc) at Principe and 2” at Sobral.
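For reference, the relativistic prediction the expeditions set out to test, a deflection of 4GM/(c²R) for a ray grazing the sun’s limb, works out to about 1.75”; a minimal sketch with standard constants:

```python
# Deflection of a light ray grazing the solar limb: 4 G M / (c^2 R).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
c = 2.998e8          # speed of light, m/s

deflection_rad = 4.0 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = deflection_rad * 206_264.8   # radians -> arcseconds
print(f"predicted grazing deflection: {deflection_arcsec:.2f} arcsec")  # ~1.75
```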

Eddington considered the possibility of gases surrounding the sun having a refractive effect and accordingly worked out what would be the necessary density to produce the required refractive index, but concluded:- “It seems obvious that there can be no material of this order of density at such a distance (an altitude of 400,000 miles) from the sun”.

In other words, he made an assumption, based upon the beliefs current at the time, that there was a perfect vacuum at this altitude above the sun’s surface, and accordingly concluded that this result was a proof of Einstein’s prediction of a relativistic influence.

Today, however, the picture is different. The sending of satellites into orbit around the Earth, and of probes further out into the solar system, together with the moon landings, has demolished the belief in a thin and finite layer of terrestrial atmosphere, and at altitudes of 300-400 km it is known that the proportions of oxygen and nitrogen are in the same ratios as at sea level. It is also now known that the atmosphere of the earth has no defined border at any altitude.

The sun too has an atmosphere, and because the sun accounts for more than nine tenths of the total mass of the solar system, its atmosphere is much larger than that of any planet. The solar atmosphere extends far beyond the orbit of the earth, and at 80,000 km our atmosphere merges imperceptibly with that of the sun.

Of course the density of the solar atmosphere at any point in the solar system depends upon the distance from the sun, and the density of its extended atmosphere is influenced by the Earth, the other planets and other lesser satellites.

With respect to the Earth, it is now acknowledged that:-

“Atmospheric refraction is the deviation of light or other electromagnetic wave from a straight line as it passes through the atmosphere due to the variation in air density as a function of altitude. This refraction is due to the velocity of light through air decreasing (the index of refraction increases) with increased density.”
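The quoted relation can be put in figures as v = c/n; a minimal sketch, assuming the commonly tabulated sea-level refractive index of air (about 1.000293 for visible light):

```python
# Speed of light in sea-level air from v = c / n.
c = 299_792_458.0            # speed of light in vacuum, m/s
n_air = 1.000293             # assumed refractive index of air at sea level

v_air = c / n_air
print(f"speed in sea-level air: {v_air:,.0f} m/s")
print(f"slow-down relative to vacuum: {c - v_air:,.0f} m/s")   # ~88,000 m/s
```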

As it is known that the Sun has an atmosphere that extends beyond the Earth’s orbit and accordingly merges with the Earth’s volume of atmospheric influence, it is obvious that there is no clearly defined border between the two.

Accordingly there can be no arbitrarily defined altitude, from either body, where the refraction of light begins or ends, and thus no region where its velocity is constant.

It is now known that space is not a perfect vacuum, that the Sun has an atmosphere extending far beyond the Earth’s own sphere of atmospheric influence, and that in intergalactic regions there is a consistent distribution of matter, though of course at densities that are at or near the lowest occurring universally.

In these circumstances it is quite obvious that light will always travel at a speed appropriate to the density of the medium and that light never travels at a constant speed (apart from short distances through a solid, translucent matter such as glass).

Thus light emitted from one of the Jovian moons, as it emerges from behind that planet, will accelerate up to the point of gravitational, and density, neutrality between it and the planet. It will then decelerate up to where it passes the tangential point of greatest density of Jupiter’s atmosphere, whereupon it will again progressively accelerate to the point of neutrality between Jupiter and the Earth (assuming no other planetary influences are involved during this passage) and, as discussed, its velocity will then progressively decrease down to the Earth’s surface.

It is therefore clear that an increasing density of the medium results in an increasing inhibitory effect on the motion of light. But in terms of the currently accepted atomic structure of gaseous, macroscopic matter, which assumes that it is composed mostly of ‘empty space’, a vacuum, this is inexplicable, as that theory assumes that light is somehow transported through a vacuum that, by definition, can have no inhibitory effect.*

If c is calculated on the basis of the average velocity of light through the variable densities of matter between the Earth and Jupiter, then in the progressively decreasing densities outwards, to the median points between the Sun and the nearest stars and onwards to the equivalent points between the “Milky Way” and the nearest galaxies, its velocity at any point will be appropriate to the variable densities of matter in those positions.

And as the densities of gases in these regions are significantly lower, its velocities will accordingly, and also significantly, exceed the hypothetical maximum “c”.
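The averaging arithmetic described above can be sketched numerically: travel time along a path is the integral of n(x)/c, and the average speed is path length over time. The index profile below is invented purely for illustration:

```python
# Average light speed along a path with a varying refractive index.
# The path length and index profile are invented, illustrative values.
import numpy as np

c = 299_792_458.0                          # m/s
x = np.linspace(0.0, 1.0e9, 100_001)       # assumed 1e9 m path
n = 1.0 + 2.9e-4 * np.exp(-x / 1.0e7)      # invented index profile

# travel time: trapezoidal integral of n(x)/c over the path
t = np.sum(0.5 * (n[:-1] + n[1:]) * np.diff(x)) / c
v_avg = x[-1] / t
print(f"average speed: {v_avg:,.0f} m/s")  # slightly below c, since n >= 1
```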

* Mark D. Roberts, Vacuum Energy, p. 15:- “If one considers the propagation of light, the speed c is its speed of propagation in a vacuum, there is never an absolute vacuum so that light never propagates at c.” https://arxiv.org/pdf/hep-th/0012062.pdf


Creating a Vacuum

For centuries scientists have been trying to create lower and lower temperatures and pressures, initially by evacuating gas from containers with mechanical pumps.

But today more refined technologies such as diffusion, ionisation, chemisorption etc. are used to produce ‘high’ partial vacuums for commercial and experimental use, and it is possible to (momentarily) achieve extremely low pressures, termed ultra-high vacuums (UHV), within a minute fraction of absolute zero pressure.

“Ultra-high vacuum is the vacuum regime characterised by pressures lower than about 10⁻⁷ pascal or 100 nanopascals (10⁻⁹ mbar, ~10⁻⁹ torr).” (Wikipedia)
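The unit equivalences in that definition are easily checked (1 mbar = 100 Pa, 1 torr ≈ 133.322 Pa); a small sketch:

```python
# Converting the quoted UHV threshold between pressure units.
P_pa = 1e-7                    # UHV threshold, pascals
P_mbar = P_pa / 100.0          # 1 mbar = 100 Pa      -> 1e-9 mbar
P_torr = P_pa / 133.322        # 1 torr = 133.322 Pa  -> ~7.5e-10 torr
print(f"{P_pa:.0e} Pa = {P_mbar:.0e} mbar = {P_torr:.1e} torr")
```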

But there is no single vacuum pump that can operate all the way from atmospheric pressure to ultra-high vacuum. Instead, a series of different pumps is used, according to the appropriate pressure range for each pump. High pumping speeds are necessary and multiple vacuum pumps are used in series and/or parallel.

Pumps commonly used in combination to achieve UHV include:-

1) Turbomolecular pumps (especially compound and/or magnetic bearing types)

2) Ion pumps

3) Titanium sublimation pumps

4) Non-evaporable getter (NEG) pumps

5) Cryopumps

But the UHVs produced cannot be sustained for any length of time; this is due to contamination of the sample resulting from such effects as ‘out-gassing’.

Out-gassing can include sublimation and evaporation, which are phase transitions of a solid or liquid substance into a gas; in other words, under these extremely low pressures, atoms either contained within, or vapourised from the surfaces of, the solid matter of the apparatus are drawn into the volume under examination.

“Out-gassing is a significant problem for UHV systems. Out-gassing can occur from two sources: surfaces and bulk materials. Out-gassing from bulk materials is minimized by careful selection of materials with low vapor pressures (such as glass, stainless steel, and ceramics) for everything inside the system. Hydrogen and carbon monoxide are the most common background gases in a well-designed UHV system. Both hydrogen and CO diffuse out from the grain boundaries in stainless steel, and helium can diffuse through steel and glass from the outside air.” (Wikipedia)
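In conventional vacuum-engineering terms the pressure floor this creates is the balance of the out-gassing load against the pumping speed, roughly P ≈ Q/S; a minimal sketch with assumed, typical-order figures:

```python
# Ultimate pressure from the out-gassing / pumping-speed balance,
# P_ult ~ Q / S. All figures are assumed, typical-order values.
q = 1e-12        # assumed out-gassing rate, baked stainless steel,
                 # mbar·L / (s·cm^2)
area = 1.0e4     # assumed interior surface area, cm^2
S = 1000.0       # assumed effective pumping speed, L/s

Q = q * area                 # total gas load, mbar·L/s
P_ult = Q / S                # ultimate pressure, mbar
print(f"gas load: {Q:.0e} mbar·L/s")
print(f"ultimate pressure: {P_ult:.0e} mbar")   # ~1e-11 mbar, UHV range
```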

And extraordinary preparatory steps are required to reduce these effects, which include the following:-

1) Baking the system (for one to two days at up to 400°C while the pumps are running) to remove water and hydrocarbons adsorbed to the walls.

2) Minimizing the surface area in the chamber.

3) High conductance tubing to pumps — short and fat, without obstruction.

4) Low out-gassing materials such as certain stainless steels.

5) Avoiding creating pits of trapped gas behind bolts, welding voids, etc.

6) Electro-polishing all metal parts after machining or welding.

7) Low vapor pressure materials (ceramics, glass, metals, and teflon if unbaked).

8) Chilling chamber walls to cryogenic temperatures during use.

9) Avoiding all traces of hydrocarbons, including skin oils in a fingerprint.

These preparatory requirements, together with the actual pumping processes, use an enormous amount of energy and, as it is not technically possible to completely eliminate out-gassing or other contaminating efflux, these very low pressures cannot be sustained for any length of time. It is therefore clear that there is a progressively increasing force of resistance to the decompression of a gas, and a very strong resistance to the maintenance of such levels of pressure.

Why should this be the case when the only external resistance is that generated by atmospheric pressure, which in theory should be easily overcome by modern machinery?

In the opposite direction, for example, there are sophisticated machines in regular use today that compress materials to upwards of 200,000 times atmospheric pressure, for example to produce industrial diamond from carbon.

This high level of resistance requires an explanation.

It is a core premise of currently accepted physics theory that a perfect vacuum cannot influence matter in any way, and accordingly nor can any of its hypothetical, aetherial constituents.

This being the case, the question is:-

What forces are operating in these circumstances to prevent the extraction of all matter from within the compartment, and what is the source of this resistance?

The simple diagram below illustrates this situation with a perfectly sealed piston cylinder apparatus, and a single atom within the cylinder.

The hypothetical, non-material, empty space which is believed to occupy virtually all the chamber can, by definition, have no influence upon matter itself. As matter is undeniably present within the compartmental space under investigation (and in the surrounding structure), it can only be the single atom, and/or the atomic structure of the apparatus, which generates this exponentially increasing resistance.

In other words it is matter, and matter alone, that is the cause of this resistance.

As mentioned earlier, today it is said that the vacuum is not empty but is permeated with waves of energy, etc.; but again, such a medium has to have the quality of non-resistance to the free motion of atoms and molecules within it (i.e. it is a zero-inertia medium) and so could not generate any resistance.

But in terms of the kinetic atomic theory of gases, where the only force allowed is a positive one generated by the collisions of atoms, the generation of such a resistive or negative force is inexplicable; and if the matter of our experience is almost entirely composed of a non-material ‘empty space’ (of any speculative description) then, technically speaking, it should be very easy to remove all atoms from within it.

It is an undeniable fact that current physics theory has no answer to this question and, as these numerous empirical results are a direct falsification of the current kinetic atomic theory of gases, it would accordingly be necessary to conclude that this, the base theory of the science of physics, is invalid.

So, to summarise, there exists no proof of the existence of the state of a vacuum in any circumstance; on the contrary, to objective observers at least, electron microscopy images show that in the solid state atoms are in contact and are therefore continuous.

The assumption by theoretical physicists that this state ‘exists’, and is by far the largest component of macroscopic matter, is a fundamental problem for science in general, and this was clearly articulated by Isaac Newton in a letter to Richard Bentley over 300 years ago:-

“That one body may act upon another at a distance through a vacuum, without the mediation of anything else, by which their action and force may be conveyed from one to another, is to me so great an absurdity, that I believe no man, who has in philosophical matters a competent faculty of thinking, can ever fall into it.”

What this brilliant mathematician and inventor/artisan/technician was saying is that it is both conceptually and mathematically impossible to describe the transmission of a force in these circumstances, as such a void cannot sustain the necessary process of ‘action and reaction’.

As Archimedes said some 2,200 years ago, ‘give me a point on which to place a lever and I will move the world’; in other words, there has to be a ‘something’ for a force to act upon, and in the case of two atoms, or of two massive bodies, separated by a vacuum, as this space by definition has no qualities, a force emanating from one mass has no base from which to act upon the other.

Applied to atomic matter, this means that, if no continuous contact is assumed between two atoms in any state of matter, there is no possible way to describe how any force acting on one is transmitted to, and acts upon, the other.

In the middle of the 20th century, however, eminent physicists such as Bohr ultimately came to the realisation that a vacuum with no characteristics that could affect atomic matter was an insurmountable obstacle to progress, and accordingly the vacuum subsequently began to be attributed with hypothetical characteristics: such concepts as ‘vacuum fluctuations’ and ‘vacuum polarisation’ were introduced, and more recently it has been suggested that it has such qualities as ‘an infinite energy density’ or ‘quantum potential’, etc.

Hypothetical vehicles for the transmission of forces and light through outer space were also proposed, such as those represented by ‘super-string’ and ‘loop’ theories.

And, with the realisation that this universal structure would mean that only an infinitesimally small proportion of the mass of the universe could be identified as matter, the concepts of ‘dark matter’ and ‘dark energy’, occupying the vacua of outer space, have been floated.

All these completely unproven, and unprovable, concepts are simply attempts to endow the vacuum with qualities that, amongst other things, can transmit or transfer a force, providing it with various hypothetical properties that can influence atomic matter. In other words, physicists have tacitly accepted that the definition of the supposedly all-pervading ‘empty space’ as a vacuum is ‘superfluous’.

Thus the concept of an all-pervading non-material medium, effectively the aether that was ridiculed in the first half of the 20th century, has been subtly, and surreptitiously, reintroduced by theoretical physicists in attempts to deal with the present complete impasse in atomic-level physics.

One hundred years ago the then strongly disputed vacuum was set by Einstein into scientific consciousness, and 50 years ago scientists belatedly began to patch up the already unsatisfactory base of accepted atomic theory, the kinetic atomic theory, by investing its essential ‘empty space’ component with numerous hypothetical qualities.

And today it is known that atoms are the ultimate natural division of matter; in other words, it is effectively proven that this is the case.

“Atoms are the basic units of matter and the defining structure of elements.”

“Atoms are the basis of chemistry and they are the basis for everything in the Universe.” (Textbook quotes)

But up until around 35 years ago the atom was still a hypothetical entity. And, while for most of the last century its existence was almost a certainty, a definitive proof had to wait until the technology of electron microscopy was perfected in the early 1980s.

Since then many thousands of images of atoms in solid matter have been produced and published for all to see, and individual atoms have even been manipulated into positions on surfaces to create company logos, rings and other shapes, as in the image below.

At the beginning of the 1800s Dalton introduced his solid, spherical, indestructible atoms and, if we ignore the belated acceptance of Avogadro’s multi-atomic molecular structures and J. J. Thomson’s ‘plum pudding’ model atom later in that century, the next significant change to the internal structure of atoms was Rutherford’s nuclear model of 1911.

Since this time theoretical physicists have focused their attention on examining this structure and today have arrived at a hypothetical structure described, broadly speaking, as the Standard Model.

So the hypothetical atomic structure has changed dramatically, from an indestructible solid sphere to what could be termed, essentially, a ‘vacuum’ atom. If this model is put into a comprehensible perspective, with the nucleus of a hydrogen atom presented as having a diameter of 1 mm (the dot below on the left represents such a nucleus), the atom’s single electron would be orbiting at a distance from it of over 2 metres.

Nucleus ▪ <————– 2.3 metres ————> · Electron

Note that on this scale the electron, the dot on the right, would not be visible on this page as it would be less than one pixel in diameter.

This 1 mm diameter nucleus of such an atom would exert influence over a nominally spherical, sub-atomic ‘empty space’, as defined by its electron, having a diameter of 4.6 metres, and two such atoms are presented below at the point of a ‘kinetic’ collision. The nuclei are not included, as obviously on this scale they would be invisible, while the dashed circles represent the extent of the nominal orbits of their single electrons.

This projected collision, at a combined velocity of up to 3,600 metres per second, is, in terms of the kinetic atomic theory of gases, required to be one of perfect elasticity, i.e. with no distortion of the atoms’ spherical forms, no loss of energy, and no reduction in the average motions of both atoms.
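For context, speeds of this order follow from the kinetic theory’s own formula v_rms = √(3kT/m); a minimal sketch, assuming hydrogen molecules at room temperature:

```python
# RMS speed of a gas molecule in kinetic theory: v = sqrt(3 k T / m).
# Hydrogen at room temperature is the assumed example.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 293.0                 # assumed room temperature, K
m_H2 = 2.0 * 1.6735e-27   # mass of an H2 molecule, kg (~2 u)

v_rms = math.sqrt(3.0 * k_B * T / m_H2)
print(f"RMS speed of H2 at {T} K: {v_rms:.0f} m/s")   # ~1900 m/s
# Two such molecules meeting head-on close at ~3800 m/s, the order of
# the combined figure discussed in the text.
```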

But it is rather difficult to imagine how a collision of such ‘vacuum’ atoms could result in such a ‘perfect’ collision.

However this picture is a simple one; the tiny material structure of the nuclei of atoms, as postulated by particle physicists today, is one of extreme complexity, and nuclei are said to be composed of around 300 separate particles.

This hypothetical structure is the result of a huge investment by governments (i.e. taxpayers) around the world over the last 70-80 years, exemplified by the cost of the Large Hadron Collider at CERN, which has cost over $13 billion to date and has an annual budget of $1 billion.

But for all this effort a commentator has said: “There have been tremendous advances in most areas of physics, such as materials science and hydrodynamics, which remain tied to experiment, but since the development of QED in 1928-1930 there have been no major gains in our understanding of the underlying structure of matter” 1

1. ‘The Big Bang Never Happened’, Eric J. Lerner, p. 358

It could be said that these advances are due to the fact that, in these disciplines, solid and liquid matter are today analysed by applied physicists using Huygens’s concept of a continuum of atoms; in other words, atoms in these states are treated as forming a continuous structure.

But today, for theoretical physicists, the atom is in essence composed of a nucleus, with the extent of the atom’s influence defined by a ‘cloud’ of particles, its electrons. The nucleus and the surrounding electrons are said to be separated by a “perfect vacuum”* (*The Void, Frank Close, Oxford University Press, 2007), which vacuum occupies almost all of the volume of an atom, while the proportion of matter represented by these sub-atomic particles is one trillionth of its total volume; atomic interactions are then based upon their ‘kinetic’ motion within an extra-atomic vacuum.
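That volume fraction can be checked against the scale model given earlier; a minimal sketch using the text’s own figures (a 1 mm nucleus and a 2.3 m electron orbit):

```python
# Fraction of the atom's volume occupied by the nucleus, using the
# scale-model figures quoted in the text.
r_nucleus = 0.5e-3      # scaled nucleus radius, m (1 mm diameter)
r_atom = 2.3            # scaled orbital radius, m

fraction = (r_nucleus / r_atom) ** 3
print(f"volume fraction occupied by the nucleus: {fraction:.1e}")
# ~1e-11, broadly consistent with the 'one trillionth' figure above
```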

The reason for physicists’ focus on the atom’s internal structure is that there is no possibility of the transmission of forces through the vacuum separating such discontinuous atoms, and so they live in hope that somehow the answers could lie in the sub-atomic structure of the atom itself.

But if the atom is itself almost entirely a perfect vacuum and its mass is overwhelmingly concentrated in the nucleus, then again there is no possibility of a sensible explanation for the transfer of a force from the mass of the nucleus outward to, and beyond, its outer periphery.

Clearly this Standard Model ‘vacuum’ atom is an absurdity. The suggestion that its mass is concentrated at its central core is not, but this does not mean that the remaining volume is empty of matter: no one, and certainly no physicist, knows what matter ultimately is, and so they cannot say with any certainty that matter is confined to the nucleus and electrons, and so is ‘here’ and not ‘there’.

Gravity is a function of mass, and perhaps it is time, after decades of failure, to consider that there is something fundamentally wrong with the theory on which all of theoretical physics is based, and which needs a relatively vast inter-atomic, non-material ‘empty space’ to function, i.e. the kinetic atomic theory of gases.

If the atom is the ultimate natural repository of matter, then surely it is also, both collectively and individually, the ultimate natural source of all forces and the ultimate natural vehicle for their transmission.
