Bases and Base Networks

Bases (base stations) are important in gravity and magnetic surveys, and in
some electrical and radiometric work. They may be:
1. Drift bases – Repeat stations that mark the starts and ends of sequences
of readings and are used to control drift.
2. Reference bases – Points where the value of the field being measured has
already been established.
3. Diurnal bases – Points where regular measurements of background are
made whilst field readings are taken elsewhere.
A single base may fulfil more than one of these functions. The reliability
of a survey, and the ease with which later work can be tied to it, will often
depend on the quality of the base stations. Base-station requirements for
individual geophysical methods are considered in the appropriate chapters,
but procedures common to more than one type of survey are discussed below.
 Base station principles
There is no absolute reason why any of the three types of base should coincide,
but surveys tend to be simpler and fewer errors are made if every drift
base is also a reference base. If, as is usually the case, there are too few
existing reference points for this to be done efficiently, the first step in a
survey should be to establish an adequate base network.
It is not essential that the diurnal base be part of this network and, because
two instruments cannot occupy exactly the same point at the same time, it
may actually be inconvenient for it to be so. However, if a diurnal monitor
has to be used, each day's work will normally begin with setting it up and
end with its removal. It is good practice to read the field instruments at a drift
base at or near the monitor position on these occasions, noting any differences
between the simultaneous readings of the base and field instruments.
 ABAB ties
Bases are normally linked together using ABAB ties (Figure 1.13). A reading
is made at Base A and the instrument is then taken as quickly as possible
to Base B. Repeat readings are then made at A and again at B. The times
between readings should be short so that drift, and sometimes also diurnal
variation, can be assumed linear. The second reading at B may also be the
first in a similar set linking B to a Base C, in a process known as forward
looping.
Each set of four readings provides two estimates of the difference in field
strength between the two bases, and if these do not agree within the limits
of instrument accuracy (±1 nT in Figure 1.13), further ties should be made.
Differences should be calculated in the field so that any necessary extra links
can be added immediately.
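
The tie calculation is simple enough to check on the spot. The sketch below is illustrative only (the function name and readings are invented); the ±1 nT tolerance follows Figure 1.13:

```python
# ABAB tie check: readings taken in the order A1, B1, A2, B2.
# If drift is linear over the short occupation times, the two
# estimates of the A-to-B difference should agree within
# instrument accuracy (here +/-1 nT, as in Figure 1.13).

def abab_difference(a1, b1, a2, b2, tolerance=1.0):
    """Return the mean A-to-B difference, or None if more ties are needed."""
    estimate_1 = b1 - a1          # first estimate of the difference
    estimate_2 = b2 - a2          # second estimate, from the repeat readings
    if abs(estimate_1 - estimate_2) > tolerance:
        return None               # disagreement: add further ties in the field
    return 0.5 * (estimate_1 + estimate_2)

print(abab_difference(30125.0, 30217.0, 30126.0, 30218.5))   # 92.25
```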

 Base networks
Most modern geophysical instruments are accurate and quite easy to read,
so that the error in any ABAB estimate of the difference in value between
two points should be trivial. However, a final value obtained at the end of
an extended series of links could include quite large accumulated errors. The
integrity of a system of bases can be assured if they form part of a network in
which each base is linked to at least two others. Misclosures are calculated by
summing differences around each loop, with due regard to sign, and are then
reduced to zero by making the smallest possible adjustments to individual
differences. The network in Figure 1.14 is sufficiently simple to be adjusted
by inspection. A more complicated network could be adjusted by computer,
using least-squares or other criteria, but this is not generally necessary in
small-scale surveys.
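
The adjustment-by-inspection procedure can be mimicked in a few lines of code. This is a sketch only, assuming equal confidence in every link; distributing the misclosure equally is the simplest of the possible criteria:

```python
# Sum the signed base-to-base differences around a closed loop; in an
# error-free survey the sum (the misclosure) would be zero. Reduce it
# to zero by the smallest possible equal adjustments to each link.

def adjust_loop(differences):
    misclosure = sum(differences)
    correction = -misclosure / len(differences)
    return [d + correction for d in differences]

loop = [+0.52, -0.21, -0.30]      # invented differences, in gravity units
print(sum(loop))                  # 0.01 g.u. misclosure
print(adjust_loop(loop))          # each link shifted by about -0.0033 g.u.
```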
 Selecting base stations
It is important that bases be adequately described and, where possible, permanently
marked, so that extensions or infills can be linked to previous work
by exact re-occupations. Concrete or steel markers can be quickly destroyed,
either deliberately or accidentally, and it is usually better to describe station
locations in terms of existing features that are likely to be permanent. In any
survey area there will be points that are distinctive because of the presence
of manmade or natural features. Written descriptions and sketches are the
best way to preserve information about these points for the future. Good
sketches are usually better than photographs, because they can emphasize
salient points.
Permanence can be a problem, e.g. maintaining gravity bases at international
airports is almost impossible because building work is almost always
under way. Geodetic survey markers are usually secure but may be in isolated
and exposed locations. Statues, memorials and historic or religious buildings
often provide sites that are not only quiet and permanent but also offer some
shelter from sun, wind and rain.

Global Positioning Satellites

Small, reasonably cheap, hand-held GPS receivers have been available since
about 1990. Until May 2000, however, their accuracy was no better than
a few hundred metres in position and even less in elevation, because of
deliberate signal degradation for military reasons (‘selective availability’ or
SA). The instruments were thus useful only for the most regional of surveys.
For more accurate work, differential GPS (DGPS) was required, involving
a base station and recordings, both in the field and at the base, of the estimated
ranges to individual satellites. Transmitted corrections that could be
picked up by the field receiver allowed real-time kinematic positioning (RTKP).
Because of SA, differential methods were essential if GPS positioning was
to replace more traditional methods in most surveys, even though the accuracies
obtainable in differential mode were usually greater than needed for
geophysical purposes.
 Accuracies in hand-held GPS receivers
The removal of SA dramatically reduced the positional error in non-differential
GPS, and signals also became easier to acquire. It is often now possible to
obtain fixes through forest canopy, although buildings or solid rock between
receiver and satellite still present insuperable obstacles. The precision of the
readouts on small hand-held instruments, for both elevations and co-ordinates,
is generally to the nearest metre, or its rough equivalent in latitude and longitude
(0.00001°). Accuracies are considerably less, because of multi-path
errors (i.e. reflections from topography or buildings providing alternative
paths of different lengths) and because of variations in the properties of the
atmosphere. The main atmospheric effects occur in the ionosphere and depend
on the magnitude and variability of the ionization. They are thus most severe
during periods of high solar activity, and particularly during magnetic storms
(Section 3.2.4).
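
The rough equivalence between the metre and degree readout increments is easily checked. The sketch below assumes a spherical Earth of radius 6371 km, which is quite adequate at this level of accuracy:

```python
import math

# Metres corresponding to the 0.00001 degree readout increment.
def degree_step_in_metres(lat_deg, step_deg=1e-5):
    metres_per_degree = 2 * math.pi * 6_371_000 / 360   # about 111 200 m
    north = step_deg * metres_per_degree                # latitude step
    east = north * math.cos(math.radians(lat_deg))      # longitude step shrinks poleward
    return north, east

print(degree_step_in_metres(52.0))   # roughly (1.1 m, 0.7 m)
```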
Because of atmospheric variations, all three co-ordinates displayed on a
hand-held GPS will usually vary over a range of several metres within a
period of a few minutes, and by several tens of metres over longer time
intervals. Despite this, it is now feasible to use a hand-held GPS for surveys
with inter-station separations of 100 m or even less because GPS errors,
even if significant fractions of station spacing, are not, as are so many other
errors, cumulative. Moreover, rapid movement from station to station is, in
effect, a primitive form of DGPS, and if fixes at adjacent stations are taken
within a few minutes of each other, the error in determining the intervening
distance will be of the order of 5 metres or less. (In theory, this will not
work, because corrections for transmission path variations should be made
individually for each individual satellite used, and this cannot be done with
the hand-held instruments currently available. However, if distances and time
intervals between readings are both small, it is likely that the same satellite
constellation will have been used for all estimates and that the atmospheric
changes will also be small.)
 Elevations from hand-held GPS receivers
In some geophysical work, errors of the order of 10 metres may be acceptable
for horizontal co-ordinates but not for elevations, and DGPS is then still
needed. There is a further complication with ‘raw’ GPS elevations, since
these are referenced to an ellipsoid. A national elevation datum is, however,
almost always based on the local position of the geoid via the mean sea level
at some selected port. Differences of several tens of metres between geoid
and ellipsoid are common, and the source of frequent complaints from users
that their instruments never show zero at sea level! In extreme cases, the
difference may exceed 100 m.
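
In the usual notation the correction is a simple subtraction, although obtaining the local value of the separation may itself require care:

$$H \approx h - N$$

where $H$ is the elevation above the geoid (approximately, above mean sea level), $h$ is the ellipsoidal height displayed by the receiver and $N$ is the local geoid-ellipsoid separation, which as noted above may amount to several tens of metres.
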
Most hand-held instruments give reasonable positional fixes using three
satellites but need four to even attempt an elevation. This is because the
unknown quantities at each fix include the value of the offset between
the instrument’s internal clock and the synchronized clocks of the satellite
constellation. Four unknowns require four measurements. Unfortunately, in
some cases the information as to whether ‘3D navigation’ is being achieved
is not included on the display that shows the co-ordinates (e.g. Figure 1.15),
and the only indication that the fourth satellite has been ‘lost’ may be a
suspicious lack of variation in the elevation reading.

Geophysical Data

Some geophysical readings are of true point data but others are obtained
using sources that are separated from detectors. Where values are determined
between rather than at points, readings will be affected by orientation. Precise
field notes are always important but especially so in these cases, since reading
points must be defined and orientations must be recorded.

If transmitters, receivers and/or electrodes are laid out in straight lines
and the whole system can be reversed without changing the reading, the midpoint
should be considered the reading point. Special notations are needed
for asymmetric systems, and the increased probability of positioning error is
in itself a reason for avoiding asymmetry. Especial care must be taken when
recording the positions of sources and detectors in seismic work.

 Station numbering
Station numbering should be logical and consistent. Where data are collected
along traverses, numbers should define positions in relation to the traverse
grid. Infilling between traverse stations 3 and 4 with stations 3.25, 3.5 and 3.75
is clumsy and may create typing problems, whereas defining as 325E a
station halfway between stations 300E and 350E, which are 50 metres apart,
is easy and unambiguous. The fashion for labelling such a station 300+25E
has no discernible advantages and uses a plus sign which may be needed,
with digital field systems or in subsequent processing, to stand for N or E. It
may be worth defining the grid origin in such a way that S or W stations do
not occur, and this may be essential with data loggers that cannot cope with
either negatives or points of the compass.
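
A metre-based label such as 325E is also trivial to convert back to a signed coordinate. The sketch below assumes a hypothetical label format of a number followed by a single compass letter:

```python
# Convert a station label such as '325E' or '50S' to a signed
# coordinate in metres (hypothetical format: number + N/E/S/W).

def label_to_metres(label):
    value, axis = float(label[:-1]), label[-1].upper()
    return value if axis in ('N', 'E') else -value

print(label_to_metres('325E'))   # 325.0, halfway between 300E and 350E
print(label_to_metres('50S'))    # -50.0: a negative, best avoided by shifting the origin
```
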
Stations scattered randomly through an area are best numbered sequentially.
Positions can be recorded in the field by pricking through maps or
air-photos and labelling the reverse sides. Estimating coordinates in the field
from maps may seem desirable but mistakes are easily made and valuable
time is lost. Station coordinates are now often obtained from GPS receivers
(Section 1.5), but differential GPS may be needed to provide sufficient accuracy
for detailed surveys.
If several observers are involved in a single survey, numbers can easily
be accidentally duplicated. All field books and sheets should record the name
of the observer. The interpreter or data processor will need to know who to
look for when things go wrong.
Recording results
Geophysical results are primarily numerical and must be recorded even more
carefully than qualitative observations of field geology. Words, although
sometimes difficult to read, can usually be deciphered eventually, but a set of
numbers may be wholly illegible or, even worse, may be misread. The need
for extra care has to be reconciled with the fact that geophysical observers are
usually in more of a hurry than are geologists, since their work may involve
instruments that are subject to drift, draw power from batteries at frightening
speed or are on hire at high daily rates.
Numbers may, of course, not only be misread but miswritten. The circumstances
under which data are recorded in the field are varied but seldom
ideal. Observers are usually either too hot, too cold, too wet or too thirsty.
Under such conditions, they may delete correct results and replace them with
incorrect ones, in moments of confusion or temporary dyslexia. Data on geophysical
field sheets should therefore never be erased. Corrections should
be made by crossing out the incorrect items, preserving their legibility, and
writing the correct values alongside. Something may then be salvaged even
if the correction is wrong. Precise reporting standards must be enforced and
strict routines must be followed if errors are to be minimized. Reading the
instrument twice at each occupation of a station, and recording both values,
reduces the incidence of major errors.
Loss of geophysical data tends to be final. Some of the qualitative observations
in a geological notebook might be remembered and re-recorded, but
not strings of numbers. Copies are therefore essential and should be made
in the field, using duplicating sheets or carbon paper, or by transcribing the
results each evening. Whichever method is used, originals and duplicates
must be separated immediately and stored separately thereafter. Duplication
is useless if copies are stored, and lost, together. This, of course, applies
equally to data stored in a data logger incorporated in, or linked to, the field
instrument. Such data should be checked, and backed up, each evening.
Digital data loggers are usually poorly adapted to storing non-numeric
information, but observers are uniquely placed to note and comment on a
multitude of topographic, geological, manmade (cultural) and climatic factors
that may affect the geophysical results. If they fail to do so, the data that
they have gathered may be interpreted incorrectly. If data loggers are not
being used, comments should normally be recorded in notebooks, alongside
the readings concerned. If they are being used, adequate supplementary positional
data must be stored elsewhere. In archaeological and site investigation
surveys, where large numbers of readings are taken in very small areas, annotated
sketches are always useful and may be essential. Sketch maps should
be made wherever the distances of survey points or lines from features in
the environment are important. Geophysical field workers may also have a
responsibility to pass on to their geological colleagues information of interest
about places that only they may visit. They should at least be willing to
record dips and strikes, and perhaps to return with rock samples where these
would be useful.
Accuracy, sensitivity, precision
Accuracy must be distinguished from sensitivity. A standard gravity meter,
for example, is sensitive to field changes of one-tenth of a gravity unit
but an equivalent level of accuracy will be achieved only if readings are
carefully made and drift and tidal corrections are correctly applied. Accuracy
is thus limited, but not determined, by instrument sensitivity. Precision,
which is concerned only with the numerical presentation of results (e.g. the
number of decimal places used), should always be appropriate to accuracy
(Example 1.1). Not only does superfluous precision waste time but false conclusions
may be drawn from the high implied accuracy.


Example 1.1
Gravity reading = 858.3 scale units
Calibration constant = 1.0245 g.u. per scale division (see Section 2.1)
Converted reading = 879.32835 g.u.
But reading accuracy is only 0.1 g.u. (approximately), and therefore:
Converted reading = 879.3 g.u.
(Four decimal place precision is needed in the calibration constant, because
858.3 multiplied by 0.0001 is equal to almost 0.1 g.u.)


Geophysical measurements can sometimes be made to a greater accuracy
than is needed, or even usable, by the interpreters. However, the highest
possible accuracy should always be sought, as later advances may allow the
data to be analysed more effectively.
 Drift
A geophysical instrument will usually not record the same results if read
repeatedly at the same place. This may be due to changes in background
field but can also be caused by changes in the instrument itself, i.e. to drift.
Drift correction is often the essential first stage in data analysis, and is usually
based on repeat readings at base stations (Section 1.4).
Instrument drift is often related to temperature and is unlikely to be linear
between two readings taken in the relative cool at the beginning and end of
a day if temperatures are 10 or 20 degrees higher at noon. Survey loops may
therefore have to be limited to periods of only one or two hours.
Drift calculations should be made whilst the field crew is still in the
survey area so that readings may be repeated if the drift-corrected results
appear questionable. Changes in background field are sometimes treated as
drift but in most cases the variations can either be monitored directly (as in
magnetics) or calculated (as in gravity). Where such alternatives exist, it is
preferable they be used, since poor instrument performance may otherwise
be overlooked.
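
The basic correction is easy to apply in the field. The sketch below assumes the simplest case: a single drift base read at the start and end of a short loop, with drift treated as linear in between (times and values are invented):

```python
# Linear drift correction based on repeat readings at a drift base.
def drift_correct(readings, base_start, base_end):
    """readings: list of (time_h, value); base_*: (time_h, value) at the base."""
    (t0, v0), (t1, v1) = base_start, base_end
    rate = (v1 - v0) / (t1 - t0)          # assumed-linear drift rate
    return [(t, v - v0 - rate * (t - t0)) for t, v in readings]

field = [(9.5, 1012.4), (10.0, 1015.1), (10.5, 1013.8)]
print(drift_correct(field, (9.0, 1000.2), (11.0, 1001.0)))
# [(9.5, 12.0), (10.0, 14.5), (10.5, 13.0)]
```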
 Signal and noise
To a geophysicist, signal is the object of the survey and noise is anything
else that is measured but is considered to contain no useful information. One
observer's signal may be another's noise. The magnetic effect of a buried
pipe, for example, is a nuisance in a mineral exploration survey but may be
the target of a site investigation.

About two-thirds of the values in a
normal distribution lie within 1 SD of the mean, and less than 0.3% differ
from it by more than 3 SDs. The SD is popular with contractors when quoting
survey reliability, since a small value can efficiently conceal several major
errors. Geophysical surveys rarely provide enough field data for statistical
methods to be validly applied, and distributions are more often assumed to
be normal than proven to be so.
 Anomalies
Only rarely is a single geophysical observation significant. Usually, many
readings are needed, and regional background levels must be determined,
before interpretation can begin. Interpreters tend to concentrate on anomalies,
i.e. on differences from a constant or smoothly varying background.
Geophysical anomalies take many forms. A massive sulphide deposit containing
pyrrhotite would be dense, magnetic and electrically conductive. Typical
anomaly profiles recorded over such a body by various types of geophysical
survey are shown in Figure 1.8. A wide variety of possible contour patterns
correspond to these differently shaped profiles.
Background fields also vary and may, at different scales, be regarded as
anomalous. A ‘mineralization’ gravity anomaly, for example, might lie on
a broader high due to a mass of basic rock. Separation of regionals from
residuals is an important part of geophysical data processing and even in the
field it may be necessary to estimate background so that the significance of
local anomalies can be assessed. On profiles, background fields estimated by
eye may be more reliable than those obtained using a computer, because of
the virtual impossibility of writing a computer program that will produce a
background field uninfluenced by the anomalous values (Figure 1.9). Computer
methods are, however, essential when deriving backgrounds from data
gathered over an area rather than along a single line.
The existence of an anomaly indicates a difference between the real world
and some simple model, and in gravity work the terms free air, Bouguer
and isostatic anomaly are commonly used to denote derived quantities that
represent differences from gross Earth models. These so-called anomalies
are sometimes almost constant within a small survey area, i.e. the area is
not anomalous! Use of terms such as Bouguer gravity (rather than Bouguer
anomaly) avoids this confusion.
 Wavelengths and half-widths
Geophysical anomalies in profile often resemble transient waves but vary
in space rather than time. In describing them the terms frequency and frequency
content are often loosely used, although wavenumber (the number of
complete waves in unit distance) is pedantically correct. Wavelength may be
quite properly used of a spatially varying quantity, but is imprecise where
geophysical anomalies are concerned because an anomaly described as having
a single ‘wavelength’ would be resolved by Fourier analysis into a number
of components of different wavelengths.
A more easily estimated quantity is the half-width, which is equal to half
the distance between the points at which the amplitude has fallen to half the
anomaly maximum (cf. Figure 1.8a). This is roughly equal to a quarter of
the wavelength of the dominant sinusoidal component, but has the advantage
of being directly measurable on field data. Wavelengths and half-widths are
important because they are related to the depths of sources. Other things
being equal, the deeper the source, the broader the anomaly.
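
For the idealized case of a gravity anomaly measured along a profile directly over a point-mass (spherical) source, the relationship can be made exact; the factor in the sketch below follows from the inverse-square law and is a rule of thumb for that geometry only:

```python
# Depth to the centre of a spherical (point-mass) gravity source from
# the half-width (peak to half-amplitude distance): z = 1.305 * x_half.
def sphere_depth_from_half_width(half_width_m):
    return 1.305 * half_width_m

print(sphere_depth_from_half_width(100.0))   # centre roughly 130 m deep
```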
 Presentation of results
The results of surveys along traverse lines can be presented in profile form,
as in Figure 1.8. It is usually possible to plot profiles in the field, or at
least each evening, as work progresses, and such plots are vital for quality
control. A laptop computer can reduce the work involved, and many modern
instruments and data loggers are programmed to display profiles in ‘real time’
as work proceeds.
A traverse line drawn on a topographic map can be used as the baseline
for a geophysical profile. This type of presentation is particularly helpful
in identifying anomalies due to manmade features, since correlations with
features such as roads and field boundaries are obvious. If profiles along a
number of different traverses are plotted in this way on a single map they are
said to be stacked, a word otherwise used for the addition of multiple data
sets to form a single output set (see Section 1.3.5).
Contour maps used to be drawn in the field only if the strike of some
feature had to be defined quickly so that infill work could be planned, but
once again the routine use of laptop computers has vastly reduced the work
involved. However, information is lost in contouring because it is not generally
possible to choose a contour interval that faithfully records all the features
of the original data. Also, contour lines are drawn in the areas between traverses,
where there are no data, and inevitably introduce a form of noise.
Examination of contour patterns is not, therefore, the complete answer to
field quality control.
Cross-sectional contour maps (pseudo-sections) are described in
Sections 6.3.5 and 7.4.2.
In engineering site surveys, pollution monitoring and archaeology, the
objects of interest are generally close to the surface and their positions in
plan are usually much more important than their depths. They are, moreover,
likely to be small and to produce anomalies detectable only over very small
areas. Data have therefore to be collected on very closely spaced grids and can
often be presented most effectively if background-adjusted values are used
to determine the colour or grey-scale shades of picture elements (pixels) that
can be manipulated by image-processing techniques. Interpretation then relies
on pattern recognition and a single pixel value is seldom important. Noise
is eliminated by eye, i.e. patterns such as those in Figure 1.10 are easily
recognized as due to human activity.



 Data loggers
During the past decade, automation of geophysical equipment in small-scale
surveys has progressed from a rarity to a fact of life. Although many of the
older types of instrument are still in use, and giving valuable service, they now
compete with variants containing the sort of computer power employed, 30
years ago, to put a man on the moon. At least one manufacturer now proudly
boasts ‘no notebook’, even though the instrument in question is equipped
with only a numerical keypad so that there is no possibility of entering
text comments into the (more than ample) memory. On other automated
instruments the data display is so small and so poorly positioned that the
possibility that the observer might actually want to look at, and even think
about, his observations as he collects them has clearly not been considered.
Unfortunately, this pessimism may all too often be justified, partly because of
the speed with which readings, even when in principle discontinuous, can now
be taken and logged. Quality control thus often depends on the subsequent
playback and display of whole sets of data, and it is absolutely essential that
this is done on, at the most, a daily basis. As Oscar Wilde might have said
(had he opted for a career in field geophysics), to spend a few hours recording
rubbish might be accounted a misfortune. To spend anything more than a day
doing so looks suspiciously like carelessness.
Automatic data loggers, whether ‘built-in’ or separate, are particularly
useful where instruments can be dragged, pushed or carried along traverses
to provide virtually continuous readings. Often, all that is required of the
operators is that they press a key to initiate the reading process, walk along
the traverse at constant speed and press the key again when the traverse is
completed. On lines more than about 20 m long, additional keystrokes can
be used to ‘mark’ intermediate survey points.
One consequence of continuous recording has been the appearance in
ground surveys of errors of types once common in airborne surveys which
have now been almost eliminated by improved compensation methods and
GPS navigation. These were broadly divided into parallax errors, heading
errors, ground clearance/coupling errors and errors due to speed variations.
With the system shown in Figure 1.11, parallax errors can occur because
the magnetic sensor is about a metre ahead of the GPS sensor. Similar errors
can occur in surveys where positions are recorded by key strokes on a data
logger. If the key is depressed by the operator when he, rather than the sensor,
passes a survey peg, all readings will be displaced from their true positions.
If, as is normal practice, alternate lines on the grid are traversed in opposite
directions, a herringbone pattern will be imposed on a linear anomaly, with
the position of the peak fluctuating backwards and forwards according to the
direction in which the operator was walking (Figure 1.12a).
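
If the sensor lead and the walking direction on each line are recorded, a parallax shift of this sort can be removed in processing. A minimal sketch, with illustrative names and a 1 m lead as in Figure 1.11:

```python
# Shift keystroke-logged positions to the sensor position: a reading
# logged as the operator passes a peg was actually taken 'lead' metres
# further on, in the walking direction.

def correct_parallax(positions_m, direction, lead=1.0):
    """direction: +1 walking up-line, -1 walking back down the next line."""
    return [p + direction * lead for p in positions_m]

print(correct_parallax([0.0, 20.0, 40.0], +1))   # [1.0, 21.0, 41.0]
print(correct_parallax([40.0, 20.0, 0.0], -1))   # [39.0, 19.0, -1.0]
```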


False anomalies can also be produced in airborne surveys if ground
clearance is allowed to vary, and similar effects can now be observed in
ground surveys. Keeping the sensor shown in Figure 1.11 at a constant height
above the ground is not easy (although a light flexible ‘spacer’ hanging from
it can help). On level ground there tends to be a rhythmic effect associated
with the operator’s motion, and this can sometimes appear on contour maps
as ‘striping’ at right angles to the traverse when minor peaks and troughs on
adjacent lines are linked to each other by the contouring algorithm. On slopes
there will, inevitably, be a tendency for a sensor in front of the observer to
be closer to the ground when going uphill than when going down. How
this will affect the final maps will vary with the nature of the terrain, but
in an area with constant slope there will be a tendency for background levels
to be different on parallel lines traversed in opposite directions. This can
produce herringbone effects on individual contour lines in low gradient areas
(Figure 1.12b).
Heading errors occurred in airborne (especially aeromagnetic) surveys
because the effect of the aircraft on the sensor depended on aircraft orientation.
A similar effect can occur in a ground magnetic survey if the observer is
carrying any iron or steel material. The induced magnetization in these objects
will vary according to the facing direction, producing effects similar to those
produced by constant slopes, i.e. similar to those in Figure 1.12b.
Before the introduction of GPS navigation, flight path recovery in airborne
surveys relied on interpolation between points identified photographically.
Necessarily, ground speed was assumed constant between these points, and
anomalies were displaced if this was not the case. Similar effects can now be
seen in datalogged ground surveys. Particularly common reasons for slight
displacements of anomalies are that the observer either presses the key to start
recording at the start of the traverse, and then starts walking or, at the end of
the traverse, stops walking and only then presses the key to stop recording.
These effects can be avoided by insisting that observers begin walking before
the start of the traverse and continue walking until the end point has been
safely passed. If, however, speed changes are due to rugged ground, all that
can be done is to increase the number of ‘marked’ points.
Many data loggers not only record data but have screens large enough
to show individual and multiple profiles, allowing a considerable degree of
quality control in the field. Further quality control will normally be done each
evening, using automatic contouring programs on laptop PCs, but allowance
must be made for the fact that automatic contouring programs tend to introduce
their own distortions (Figure 1.12c).

Geophysical Fieldwork

Geophysical instruments vary widely in size and complexity but all are used
to make physical measurements, of the sort commonly made in laboratories, at
temporary sites in sometimes hostile conditions. They should be economical
in power use, portable, rugged, reliable and simple. These criteria are satisfied
to varying extents by the commercial equipment currently available.
 Choosing geophysical instruments
Few instrument designers can have tried using their own products for long
periods in the field, since operator comfort seldom seems to have been
considered. Moreover, although many real improvements have been made
in the last 30 years, design features have been introduced during the same
period, for no obvious reasons, that have actually made fieldwork more difficult.
The proton magnetometer staff, discussed below, is a case in point.
If different instruments can, in principle, do the same job to the same
standards, practical considerations become paramount. Some of these are
listed below.
Serviceability: Is the manual comprehensive and comprehensible? Is a
breakdown likely to be repairable in the field? Are there facilities for repairing
major failures in the country of use or would the instrument have to be sent
overseas, risking long delays en route and in customs? Reliability is vital but
some manufacturers seem to use their customers to evaluate prototypes.
Power supplies: If dry batteries are used, are they of types easy to replace
or will they be impossible to find outside major cities? If rechargeable batteries
are used, how heavy are they? In either case, how long will the batteries
last at the temperatures expected in the field? Note that battery life is reduced
in cold climates. The reduction can be dramatic if one of the functions of the
battery is to keep the instrument at a constant temperature.
Data displays: Are these clearly legible under all circumstances? A torch
is needed to read some in poor light and others are almost invisible in
bright sunlight. Large displays used to show continuous traces or profiles
can exhaust power supplies very quickly.
Hard copy: If hard copy records can be produced directly from the field
instrument, are they of adequate quality? Are they truly permanent, or will
they become illegible if they get wet, are abraded or are exposed to sunlight?
Comfort: Is prolonged use likely to cripple the operator? Some instruments
are designed to be suspended on a strap passing across the back of
the neck. This is tiring under any circumstances and can cause serious medical
problems if the instrument has to be levelled by bracing it against the
strap. Passing the strap over one shoulder and under the other arm may
reduce the strain but not all instruments are easy to use when carried in
this way.
Convenience: If the instrument is placed on the ground, will it stand
upright? Is the cable then long enough to reach the sensor in its normal
operating position? If the sensor is mounted on a tripod or pole, is this strong
enough? The traditional proton magnetometer poles, in sections that screwed
together and ended in spikes that could be stuck into soft ground, have now
been largely replaced by unspiked hinged rods that are more awkward to
stow away, much more fragile (the hinges can twist and break), can only be
used if fully extended and must be supported at all times.
Fieldworthiness: Are the control knobs and connectors protected from
accidental impact? Is the casing truly waterproof? Does protection from damp
grass depend on the instrument being set down in a certain way? Are there
depressions on the console where moisture will collect and then inevitably
seep inside?
Automation: Computer control has been introduced into almost all the
instruments in current production (although older, less sophisticated models
are still in common use). Switches have almost vanished, and every instruction
has to be entered via a keypad. This has reduced the problems that
used to be caused by electrical spikes generated by switches but, because the
settings are often not permanently visible, unsuitable values may be repeatedly
used in error. Moreover, simple operations have sometimes been made
unduly complicated by the need to access nested menus. Some instruments
do not allow readings to be taken until line and station numbers have been
entered and some even demand to know the distance to the next station and
to the next line!
The computer revolution has produced real advances in field geophysics,
but it has its drawbacks. Most notably, the ability to store data digitally in
data loggers has discouraged the making of notes on field conditions where
these, however important, do not fall within the restricted range of options
the logger provides. This problem is further discussed in Section 1.3.2. 
 Cables
Almost all geophysical work involves cables, which may be short, linking
instruments to sensors or batteries, or hundreds of metres long. Electrical
induction between cables (electromagnetic coupling, also known as crosstalk)
can be a serious source of noise (see also Section 11.3.5).
Efficiency in cable handling is an absolute necessity. Long cables always
tend to become tangled, often because of well-intentioned attempts to make
neat coils using hand and elbow. Figures of eight are better than simple loops,
but even so it takes an expert to construct a coil from which cable can be
run freely once it has been removed from the arm. On the other hand, a
seemingly chaotic pile of wire spread loosely on the ground can be quite
trouble-free. The basic rule is that cable must be fed on and off the pile in
opposite directions, i.e. the last bit of cable fed on must be the first to be
pulled off. Any attempts to pull cable from the bottom will almost certainly
end in disaster.
Cable piles are also unlikely to cause the permanent kinks which are often
features of neat and tidy coils and which may have to be removed by allowing
the cable to hang freely and untwist naturally. Places where this is possible
with 100-metre lengths are rare.
Piles can be made portable by feeding cables into open boxes, and on
many seismic surveys the shot-firers carried their firing lines in this way in
old gelignite boxes. Ideally, however, if cables are to be carried from place
to place, they should be wound on properly designed drums. Even then,
problems can occur. If cable is unwound by pulling on its free end, the drum
will not stop simply because the pull stops, and a free-running drum is an
effective, but untidy, knitting machine.
A drum carried as a back-pack should have an efficient brake and should
be reversible so that it can be carried across the chest and be wound from
a standing position. Some drums sold with geophysical instruments combine
total impracticality with inordinate expense and are inferior to home-made or
garden-centre versions.
Geophysical lines exert an almost hypnotic influence on livestock. Cattle
have been known to desert lush pastures in favour of midnight treks through
hedges and across ditches in search of juicy cables. Not only can a survey be
delayed but a valuable animal may be killed by biting into a live conductor,
and constant vigilance is essential.
 Connections
Crocodile clips are usually adequate for electrical connections between single
conductors. Heavy plugs must be used for multi-conductor connections and
are usually the weakest links in the entire field system. They should be
placed on the ground very gently and as seldom as possible and, if they do
not have screw-on caps, be protected with plastic bags or ‘clingfilm’. They
must be shielded from grit as well as moisture. Faults are often caused by dirt
increasing wear on the contacts in socket units, which are almost impossible
to clean.
Plugs should be clamped to their cables, since any strain will otherwise
be borne by the weak soldered connections to the individual pins. Inevitably,
the cables are flexed repeatedly just beyond the clamps, and wires may break
within the insulated sleeving at these points. Any break there, or a broken or
dry joint inside the plug, means work with a soldering iron. This is never easy
when connector pins are clotted with old solder, and is especially difficult if
many wires crowd into a single plug.
Problems with plugs can be minimized by ensuring that, when moving,
they are always carried, never dragged along the ground. Two hands should
always be used, one holding the cable to take the strain of any sudden pull,
the other to support the plug itself. The rate at which cable is reeled in should
never exceed a comfortable walking pace, and especial care is needed when
the last few metres are being wound on to a drum. Drums should be fitted
with clips or sockets where the plugs can be secured when not in use.
 Geophysics in the rain
A geophysicist, huddled over his instruments, is a sitting target for rain, hail,
snow and dust, as well as mosquitoes, snakes and dogs. His most useful piece
of field clothing is often a large waterproof cape which he can not only wrap
around himself but into which he can retreat, along with his instruments, to
continue work.
Electrical methods that rely on direct or close contact with the ground
generally do not work in the rain, and heavy rain can be a source of seismic
noise. Other types of survey can continue, since most geophysical instruments
are supposed to be waterproof and some actually are. However, unless
dry weather can be guaranteed, a field party should be plentifully supplied
with plastic bags and sheeting to protect instruments, and paper towels for
drying them. Large transparent plastic bags can often be used to enclose
instruments completely while they are being used, but even then condensation
may create new conductive paths, leading to drift and erratic behaviour.
Silica gel within instruments can absorb minor traces of moisture but cannot
cope with large amounts, and a portable hair-drier held at the base camp may
be invaluable.
 A geophysical toolkit
Regardless of the specific type of geophysical survey, similar tools are likely
to be needed. A field toolkit should include the following:
• Long-nose pliers (the longer and thinner the better)
• Slot-head screwdrivers (one very fine, one normal)
• Phillips screwdriver
• Allen keys (metric and imperial)
• Scalpels (light, expendable types are best)
• Wire cutters/strippers
• Electrical contact cleaner (spray)
• Fine-point 12V soldering iron
• Solder and ‘Solder-sucker’
• Multimeter (mainly for continuity and battery checks, so small size and
durability are more important than high sensitivity)
• Torch (preferably of a type that will stand unsupported and double as a
table lamp. A ‘head torch’ can be very useful)
• Hand lens
• Insulating tape, preferably self-amalgamating
• Strong epoxy glue/‘super-glue’
• Silicone grease
• Waterproof sealing compound
• Spare insulated and bare wire, and connectors
• Spare insulating sleeving
• Kitchen cloths and paper towels
• Plastic bags and ‘clingfilm’
A comprehensive first-aid kit is equally vital.

Fields

Although there are many different geophysical methods, small-scale surveys
all tend to be rather alike and involve similar, and sometimes ambiguous,
jargon. For example, the word base has three different common meanings,
and stacked and field have two each.
Measurements in geophysical surveys are made in the field but, unfortunately,
many are also of fields. Field theory is fundamental to gravity,
magnetic and electromagnetic work, and even particle fluxes and seismic
wavefronts can be described in terms of radiation fields. Sometimes ambiguity
is unimportant, and sometimes both meanings are appropriate (and
intended), but there are occasions when it is necessary to make clear distinctions.
In particular, the term field reading is almost always used to identify
readings made in the field, i.e. not at a base station.
The fields used in geophysical surveys may be natural ones (e.g. the
Earth’s magnetic or gravity fields) but may be created artificially, as when
alternating currents are used to generate electromagnetic fields. This leads to
the broad classification of geophysical methods into passive and active types,
respectively.
Physical fields can be illustrated by lines of force that show the field
direction at any point. Intensity can also be indicated, by using more closely
spaced lines for strong fields, but it is difficult to do this quantitatively where
three-dimensional situations are being illustrated on two-dimensional media.
 Vector addition
Vector addition (Figure 1.1) must be used when combining fields from different
sources. In passive methods, knowledge of the principles of vector
addition is needed to understand the ways in which measurements of local
anomalies are affected by regional backgrounds. In active methods, a local
anomaly (secondary field) is often superimposed on a primary field produced
by a transmitter. In either case, if the local field is much the weaker of the two
(in practice, less than one-tenth the strength of the primary or background
field), then the measurement will, to a first approximation, be made in the
direction of the stronger field and only the component in this direction of
the secondary field (ca in Figure 1.1) will be measured. In most surveys the
slight difference in direction between the resultant and the background or
primary field can be ignored.

If the two fields are similar in
strength, there will be no simple
relationship between the magnitude
of the anomalous field and the
magnitude of the observed anomaly.
However, variations in any given
component of the secondary field
can be estimated by taking all
measurements in an appropriate
direction and assuming that the
component of the background or
primary field in this direction is
constant over the survey area.
Measurements of vertical rather than
total fields are sometimes preferred
in magnetic and electromagnetic
surveys for this reason.
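
The approximation is easily tested numerically. In the sketch below (invented values), a weak secondary field is added to a much stronger primary; the change in the measured total field is close to the component of the secondary field along the primary direction:

```python
# Compare the exact change in total-field magnitude with the simple
# projection of the secondary field onto the primary direction.

def measured_anomaly(primary, secondary):
    p_mag = sum(c * c for c in primary) ** 0.5
    total = [p + s for p, s in zip(primary, secondary)]
    t_mag = sum(c * c for c in total) ** 0.5
    return t_mag - p_mag              # what a total-field instrument sees

primary = [0.0, 0.0, 48000.0]         # steeply dipping field, in nT
secondary = [300.0, 0.0, 400.0]       # weak local anomaly
print(measured_anomaly(primary, secondary))   # about 400.9 nT
print(secondary[2])                           # 400.0 nT projected component
```
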
The fields due to multiple sources
are not necessarily equal to the
vector sums of the fields that would
have existed had those sources
been present in isolation. A strong
magnetic field from one body can
affect the magnetization in another,
or even in itself (demagnetization
effect), and the interactions between fields and currents in electrical and
electromagnetic surveys can be very complex.
 The inverse-square law
Inverse-square law attenuation of signal strength occurs in most branches of
applied geophysics. It is at its simplest in gravity work, where the field due
to a point mass is inversely proportional to the square of the distance from
the mass, and the constant of proportionality (the gravitational constant G)
is invariant. Magnetic fields also obey an inverse-square law. The fact that
their strength is, in principle, modified by the permeability of the medium
is irrelevant in most geophysical work, where measurements are made in
either air or water. Magnetic sources are, however, essentially bipolar, and
the modifications to the simple inverse-square law due to this fact are much
more important (Section 1.1.5).
Electric current flowing from an isolated point electrode embedded in
a continuous homogeneous ground provides a physical illustration of the
significance of the inverse-square law. All of the current leaving the electrode
must cross any closed surface that surrounds it. If this surface is a sphere
concentric with the electrode, the same fraction of the total current will cross
each unit area on the surface of the sphere. The current per unit area will
therefore be inversely proportional to the total surface area, which is in turn
proportional to the square of the radius. Current flow in the real Earth is, of
course, drastically modified by conductivity variations.
 Two-dimensional sources
Rates of decrease in field strengths depend on source shapes as well as on
the inverse-square law. Infinitely long sources of constant cross-section are
termed two-dimensional (2D) and are often used in computer modelling to
approximate bodies of large strike extent. If the source ‘point’ in Figure 1.2
represents an infinite line source seen end on, the area of the enclosing (cylindrical)
surface is proportional to the radius. The argument applied in the
previous section to a point source implies that in this case the field strength
is inversely proportional to distance and not to its square. In 2D situations,
lines of force drawn on pieces of paper illustrate field magnitude (by their
separation) as well as direction.
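
The two results can be summarized together. For a point source the enclosing surface is a sphere of area $4\pi r^2$, while for a line source it is a cylinder whose area per unit length is $2\pi r$, so that

$$F_{\text{point}} \propto \frac{1}{r^2}, \qquad F_{\text{line}} \propto \frac{1}{r}.$$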

The lines of force or radiation intensity from a source consisting of a homogeneous layer of
constant thickness diverge only near its edges (Figure 1.3). The Bouguer plate of gravity reductions (Section 2.5.1) and the radioactive source with 2π geometry (Section 4.3.3) are examples of infinitely extended layer sources, for which field strengths are independent
of distance. This condition is approximately achieved if a detector is only a short distance
above an extended source and a long way from its edges.

A dipole consists of equal-strength positive and negative point sources a very small distance apart. Field strength decreases as the inverse cube of distance and both strength and direction change with ‘latitude’ (Figure 1.4). The intensity of the field at a point on a dipole
axis is double the intensity at a point the same distance away on the dipole ‘equator’, and in the opposite direction.
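
In conventional notation, for a dipole of moment $m$ at a distance $r$ large compared with the source separation, the fields on the axis and on the equator are

$$F_{\text{axis}} \propto \frac{2m}{r^3}, \qquad F_{\text{equator}} \propto \frac{m}{r^3},$$

illustrating both the inverse-cube decay and the factor of two between axial and equatorial intensities noted above.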

Electrodes are used in some
electrical surveys in approximately
dipolar pairs and magnetization is
fundamentally dipolar. Electric currents
circulating in small loops are
dipolar sources of magnetic field.

 Exponential decay
Radioactive particle fluxes and seismic and electromagnetic waves are subject
to absorption as well as geometrical attenuation, and the energy crossing
closed surfaces is then less than the energy emitted by the sources they
enclose. In homogeneous media, the percentage loss of signal is determined
by the path length and the attenuation constant. The absolute loss is proportional
also to the signal strength. A similar exponential law (Figure 1.5),
governed by a decay constant, determines the rate of loss of mass by a
radioactive substance.
Attenuation rates are alternatively characterized by skin depths, which
are the reciprocals of attenuation constants. For each skin depth travelled, the
signal strength decreases to 1/e of its original value, where e (= 2.718) is the
base of natural logarithms. Radioactivity decay rates are normally described in
terms of half-lives, equal to ln 2 (= 0.693) divided by the decay constant.
During each half-life period, one half of the material present at its start is lost.
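
These statements can be written compactly. For a signal of initial amplitude $A_0$ travelling a distance $x$ in a medium with attenuation constant $\alpha$ (skin depth $\delta = 1/\alpha$), and for a radioactive substance with decay constant $\lambda$,

$$A(x) = A_0\,e^{-\alpha x} = A_0\,e^{-x/\delta}, \qquad N(t) = N_0\,e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}.$$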

The Second Law and Molecular Behavior

At the present time we are familiar enough with molecules to formulate the Second
Law entirely in relation to an intuitive perception of their behavior. It is easy to see that the
Second Law, as expressed in terms of heat flow in section 11, could be violated with some
cooperation from molecules.
Consider two systems each consisting of a fixed quantity of gas. At the boundary of
each system are rigid, impenetrable, and well insulated walls except for a metal plate made
of a good thermal conductor and located between the systems as shown in Figure 1. The gas
in one system is at a low temperature T1 and in the other at a higher temperature T2. In
terms of molecular behavior the
temperature difference is produced by different distributions of molecules among the
velocities in the molecular states of each system. The number of different velocities,
however, is so large that the high temperature system contains some molecules with lower
speeds than some molecules in the low temperature system and vice versa.
Now suppose we prepare some instructions for the molecules in each system as
shown on the signs in Figure 1. When following these instructions, only the high speed
molecules in the low temperature gas collide with molecules on the surface of the plate, give
up energy to them, and thus create on this surface a higher temperature than in the gas.
The boundary between the gas and the plate then has a temperature difference across it and,
according to our definition, the energy thus transported is heat. In the plate this becomes
thermal energy which is conducted through it because the molecules on its other surface are
at a lower temperature as a consequence of their energy exchanges only with the low speed
molecules in the adjacent high temperature gas system. This constitutes likewise a heat flow
into this system. The overall result is then the continuous unaided transfer of heat from a
low temperature region to one at a higher temperature, clearly the wrong direction and a
Second Law violation. The Second Law therefore, is related to the fact that in completely
isolated systems molecules will never of their own accord obey any sort of instructions such
as these.
                          Microstates in Isolated Systems

To explain why molecules always behave as though instructions of this type are
completely ignored, imagine that we have a fantastic camera capable of making a
multidimensional picture which could show at any instant where all the ultimate particles
in the system are located and reveal every type of motion taking place, indicating its
location, speed, and direction. Every type of distinguishably different action at any moment,
the vibration, twisting, or stretching within molecules as well as their translational and
rotational movements, would be identified in this manner for every molecule in the system.
This picture would thus be a photograph of what we have defined as an instantaneous
microstate of the system.
Now, instead of the instructions on the signs in Figure 1, suppose we ask the
molecules to do everything they can do by themselves in a rigid walled and isolated
container where no external arranging or directing operations are possible. We will say,
"Molecules, please begin now and arrange yourselves in a sequence of poses for pictures
which will show every possible microstate which can exist in your system under the
restrictions imposed by your own nature and the conditions of isolation in the container".
If we expressed these restrictions as a list of rules to be followed in assigning molecules to various positions and motions, the list would appear as follows:
1. In distributing yourselves among the various positions and motions for each microstate
picture, do not violate any energy conservation laws. Consequently, because you are in
an isolated system the sum of all your individual translational, vibrational, and
rotational kinetic energies plus all your intermolecular potential energies must always
be the same and equal to the fixed total internal energy of the system.
2. Likewise, do not violate any mass conservation laws. There are to be no chemical
reactions among you so that the total number of individual molecules assigned must
always remain the same.
3. All of you must, of course, remain at all times within the container so the total volume
in which you distribute yourselves must be constant.
4. Do not violate any laws of physics applicable to your particular molecular species. You
must remember that no two of you can have all of your microstate position and motion
characteristics exactly the same, otherwise you would have to occupy the same space at
the same time. Furthermore, do not be concerned that there might not be enough
different microstates available for each of you to have a different one. Although you are
numerous, the number of different possible position and motion values is even more
numerous, so that there will never be enough of you to fill all of them and many
possible values will be left unoccupied by a molecule in each picture for which you pose.

The Total Energy Transfer

Because thermodynamic systems are conventionally defined so that no bulk
quantities of matter are transported across their boundaries by stream flow, no energy
crosses the system boundary in the form of internal energy carried by a flowing fluid. With
the system defined in this way the only energy to cross its boundary because of the flow
process is that of work measured by the product of the pressure external to a fixed mass
system in a stream conduit and the volume change it induces in this system. In the case of
diffusion mass transport, as discussed in section 15, the system does not have a fixed mass
but the entire change associated with the diffusion mass transport is given by work
evaluated by computing the product of an external chemical potential and a specific
transported mass change within the system. As a result the combination of heat, work, and
any energy transport by non-thermodynamic carriers includes all the energy in transition
between a system and its surroundings. Energy by non-thermodynamic carriers is that
transported by radiant heat transfer, X-rays, gamma radiation, nuclear particles, cosmic
rays, sonic vibration, etc. Energy of this type is not usually considered as either heat or
work and must be evaluated separately in systems where it is involved. Energy transport by
nuclear particles into a system ultimately appears as an increase in thermal energy within
the system and is important in thermodynamic applications to nuclear engineering.
In every application of thermodynamics, however, it is essential that we account for
all the energy in transition across the system boundary and it is only when this is done that
the laws of thermodynamics can relate this transported energy to changes in properties
within the system. In the processes we will discuss, heat and work together include all the
transported energy.

The Chemical Potential

For processes involving diffusion mass transport we can, however, define a
thermodynamic intensive driving force responsible specifically for the total energy change
accompanying the diffusion mass transfer of molecules from one region to another. This
driving force can be defined simply as a partial derivative representing the variation of the
total internal energy of a region with respect to an increment in the number of moles of one
particular species in the region when no other extensive properties are altered. This partial
derivative is an important intensive property called the chemical potential. By reason of its
definition the chemical potential is an intensive property because whenever it is multiplied
by the extensive property change in moles of a particular molecular species within a system
the result is identically the internal energy change of the system resulting only from this
change in moles, and not from the change in any other extensive property.
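
In the usual symbols, with $U$ the total internal energy of the region and $n_i$ the number of moles of species $i$, the definition just given reads

$$\mu_i = \left(\frac{\partial U}{\partial n_i}\right)_{S,\,V,\,n_{j\neq i}},$$

where the subscripts record that all other extensive properties (the entropy, the volume and the other mole numbers) are held fixed, so that $\mu_i\,dn_i$ is the internal energy change due to the change in moles of species $i$ alone.
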
In elementary physics the energy per unit mass, per mole, or per particle involved in
moving the mass, mole, or particle from one region to another is generally defined as a
potential. Table I lists several types of potentials (driving forces) which are important in
thermodynamic applications. A potential therefore can always be regarded as a driving
force for a mass change. The chemical potential is a driving force of this type. Physically the
driving force represented by the chemical potential results from the same molecular actions
which give rise to a partial vapor pressure in a liquid or a partial pressure in a gas. Each of
these has the ability to expel molecules of a given type out of a multi-component phase.
The energy change within a system accompanying a change in the number of moles
of a given component of the system by molecular processes can now be defined as a type of
work which results from a difference in a chemical potential driving force between the
system and its surroundings. As is the case with other types of work, in order to evaluate
quantitatively the work of a chemical potential driving force it is first necessary to define a
system. In accordance with the principles discussed in section 14, this work is then defined
as the product of the chemical potential of a component outside the system, on its external
boundary, and the change in the number of moles of this component inside the system. When
the number of moles of a molecular species increases in a system, work must be done on the
system to overcome the molecular forces tending to expel molecules of this species.
Consequently, in accordance with the sign convention, the work relative to the system
receiving this increase in moles must be a negative number.
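Using the sign convention described under Work below, this can be summarized compactly. If \mu_i^{\text{ext}} denotes the chemical potential of species i in the surroundings at the system boundary and dn_i the change in moles of that species inside the system, the work of the chemical potential driving force is

\delta W = -\,\mu_i^{\text{ext}}\, dn_i

so that a gain in moles (dn_i > 0) makes the work negative, in agreement with the convention that work done on a system carries a negative sign.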
Although we have discussed the chemical potential as a thermodynamic driving force
for the diffusion mass transport, its utility is not confined to this particular process alone.
Because of its definition the chemical potential is a driving force for changes in moles of a
molecular species in a system not only by means of diffusion mass transfer but by any other
molecular process as well. The most important example is the role of the chemical potential
within a system as the driving force for changes in moles brought about in the system by
chemical reactions.

Energy Transport by Mass Transfer

For any region with a boundary which is penetrated by mass, a thermodynamic
analysis always requires a distinction between mass carried across the boundary by bulk
stream flow and mass carried across by diffusion processes resulting from molecular action.
In the case of bulk stream flow with no diffusion mass transport, energy is carried
into the region in two distinct ways. Part of the energy added to a region receiving mass by
stream flow is the work of a pressure which displaces a quantity of flowing fluid into the
region. The remaining part of the energy added is the internal energy content of this
quantity of fluid which enters. In a bulk stream flow process these two parts of the total
energy transport can be separated and evaluated. This is done most conveniently by
defining the system in this case as a fixed mass enclosed by moveable boundaries which are
not penetrated by mass at all. In this manner a small contiguous quantity of fluid in an
entering conduit becomes a homogeneous sub-region within the system and its energy thus
becomes a part of the total internal energy of the entire system. The boundary of this
sub-region is acted upon by an external pressure which performs work on the entire system in
moving the boundary of the sub-region. When the system is defined in this way, no energy
is carried into the system in the form of the internal energy of mass crossing its boundaries.
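Expressed per unit mass of fluid crossing a fixed boundary, the two contributions are the specific internal energy u of the entering fluid and the flow work Pv performed by the external pressure, v being the specific volume. In the usual notation their sum,

u + P\,v = h,

is given the name specific enthalpy, which is why enthalpy rather than internal energy alone appears in energy balances for open, steady-flow systems.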
In a region receiving mass transported by a diffusion process, part of the energy content of all molecules outside the region is used to propel some of them into the region. In
contrast to the situation in a purely bulk stream flow process, there is no way in this case to
define a system which excludes the internal energy of transported molecules from the energy
crossing the system boundary. There is no way to define a system in which the propelling
forces which induce the mass transport are a driving force for all of the energy which crosses
the system boundary in the transport process. In diffusion processes these propelling
forces result from the behavior of individual molecules; they are not scalar thermodynamic
properties at all, so we cannot define an intensive thermodynamic driving force property
to represent them.

Work

Now that we have used the term "work" it is necessary to emphasize that work, like
heat, must also be regarded only as a type of energy in transition across a well-defined,
zero-thickness boundary of a system. Consequently work, like heat, is never a property or any
quantity contained within a system. Whereas heat is energy driven across this boundary by
a difference in temperature, work is energy driven across by differences in other driving
forces on either side of it. Various kinds of work are identified by the kind of driving force
involved and the characteristic extensive property change which accompanies it.
Work is measured quantitatively in much the same manner as heat. Any driving
force other than temperature, located outside the system on its external boundary, is
multiplied by a transported extensive property change within the system which was
transferred across the system boundary in response to this force. The result is the numerical
value of the work associated with this system and driving force. It is important to
emphasize that the extensive property change within the system which is used in this
computation must be a transported quantity whose transfer across the system boundary
depends on a particular driving force with different values inside and outside the system.
This transported extensive property change within the system always occurs with the same
magnitude but with opposite sign in the surroundings.
Neither work nor heat results from any part of an extensive property change in a
system which has not been transported across the system boundary in this manner,
unaltered in magnitude. A non-transported extensive property change within a system,
when multiplied by an appropriate driving force property located within the system, measures a form of internal energy change in the system but not work or heat.
Conventionally the quantity of work calculated by this procedure is given a positive
sign when work is done by the system on the surroundings and energy crosses the boundary
in a direction from the system to the surroundings. An energy transport in the opposite
direction, when work is done by the surroundings on the system, is given a negative sign.
It is awkward that the sign given to energy transferred as work is opposite to that given to
energy transferred as heat in the same direction, but tradition has established the convention
and it is important that it be followed consistently. Like heat, both the absolute value and the
sign of what is called work depend entirely on how the system is specified.
Several thermodynamic driving forces and their characteristic displacements are
listed in Table I. Any of these properties, other than temperature and entropy, can measure
various types of work when the driving force is located on the outer side of the system
boundary and the displacement is a transported quantity whose change is located within the
system. The product, when given the proper sign, is a type of work transfer for this system.
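The most familiar instance is pressure-volume work, in which the driving force is the pressure P_ext exerted by the surroundings on the boundary and the transported displacement is the volume change of the system:

W = \int P_{\text{ext}}\, dV

With the convention above, an expansion (dV > 0) against the external pressure gives positive work done by the system on the surroundings, and a compression gives negative work.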

Entropy

As indicated in Table I, entropy is the name given to the extensive property whose
change when multiplied by temperature gives a quantity of thermal energy. In classical
thermodynamics there is no need to give any physical description of this property in terms
of molecular behavior. A change in entropy is defined simply as a quantity of thermal
energy divided by the temperature driving force which propels it, so that when multiplied
by the temperature it reproduces the thermal energy identically. Because temperature is
an intensive property and this product is energy, we know that the entropy must be an
extensive property. Furthermore, thermal energy is a part of the total internal energy within
a system, so the entropy change computed this way is a change in a property of the
system. Thermal energy crossing a system boundary is defined as heat, so the entropy
change transported by it is simply the quantity of heat transported divided by the
temperature which transports it. This transporting temperature is the temperature of the
external, or surroundings, side of the system boundary. It is important to realize that this
transported entropy change may be only a part of the total entropy change within the
system. Because thermal energy can be produced within a system by other means than
adding heat to it, a thermal energy increase in the system can be greater than the heat
transported into it. In this case the entropy change within the system accompanying its
thermal energy increase will be greater than the entropy change transported into the system
with the heat flow.
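These statements can be gathered into a compact balance. For heat Q crossing at a uniform boundary temperature T_b (the temperature on the surroundings side of the boundary), the transported entropy and the total entropy change of the system are, in conventional notation,

\Delta S_{\text{transported}} = \frac{Q}{T_b}, \qquad \Delta S_{\text{system}} = \frac{Q}{T_b} + S_{\text{produced}}, \qquad S_{\text{produced}} \ge 0,

where S_produced accounts for the entropy accompanying any thermal energy generated within the system by means other than heat flow.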

Basic Principles of Classical and Statistical Thermodynamics (Lecture 2)

7 Intensive and Extensive Properties
In discussing microstate driving forces in section 5, we noted that the force to be
applied or the force to be overcome in order to make a change in the position or motion of
any one particle in a multi-particle system depends both on the nature of the particle and on
its environment. When these remain the same then the necessary force to induce a change is
also the same, no matter how many other individual particles are present in the system.
Because a thermodynamic driving force in a system is the composite result of all the
individual particle forces, it likewise should be independent of the number of particles
present as long as they all have the same environment and individual characteristics.
Properties of a system which have this type of independence of the number of
particles present are called "intensive properties" and all the thermodynamic driving forces
are selected from among properties of this type. The test for an intensive property is to
observe how it is affected when a given system is combined with some fraction of an exact
replica of itself to create a new system differing only in size. Intensive properties are those
which are unchanged by this process, whereas those properties whose values are
increased/decreased in direct proportion to the enlargement/reduction of the system are
called "extensive properties." For example, if we exactly double the size of a system by
combining it with an exact replica of itself, all the extensive properties are then exactly
double and all intensive properties are unchanged.
As we have explained, the displacements in a system induced by thermodynamic
driving forces are a summation of all the motion and position changes in all the ultimate
particles of the system. Consequently, if we alter the number of particles by changing only
the size of the system, we should then alter the overall displacement in exactly the same
proportion. This means that the overall change which we call a displacement must be a
change in an extensive thermodynamic property of the system.
If the magnitude of a displacement thus varies directly with the size of a system in
which it occurs, whereas the driving force is not affected, their product must likewise change
directly with the system size so that energy itself is always an extensive property.
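The argument can be stated in one line. If a system is enlarged by a factor \lambda, an intensive driving force Y is unchanged while an extensive displacement change \Delta X scales with the size, so the associated energy scales in the same proportion:

Y \to Y, \qquad \Delta X \to \lambda\,\Delta X, \qquad E = Y\,\Delta X \to \lambda E.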
8 Identification of Thermodynamic Driving Forces and Displacements
In addition to the difference between thermodynamic driving forces and true
Newtonian forces, which arises because the thermodynamic forces are scalar properties
instead of vectors, another important difference is that the thermodynamic driving forces
are dimensionally quite different from one another. This is a consequence of the quite
intuitive manner in which they are defined.
Thermodynamic driving forces are identified empirically as the intensive property
whose difference on each side of some part of the boundary between a system and its
surroundings controls both the direction and the rate of transfer of one specific extensive
property displacement across it. For example, consider the volume of air within a bicycle
pump and tire as a system, and the inner surface of the piston of the pump as the
boundary across which a volume change is transferred. When the piston is moved the
magnitude of the volume change in the surroundings is exactly the magnitude of the volume
change of the system and the increase in volume of one is exactly the decrease in volume of
the other. We can say, therefore, that volume is an extensive property transferred across this
boundary. When only volume and no other extensive property change is transferred, then
we find by experiment that the pressure difference is the only intensive property across this
boundary that controls both the direction and rate of change of the volume. Then we define
pressure as the thermodynamic driving force. It is important that only one extensive
property be transferred across this boundary in the experiment. For example, suppose there
was a crack in the piston which allowed air to leak through it. We now can have both
volume and mass transferred across this same boundary and we observe in this case that
lowering the pressure outside the piston may not necessarily cause the volume of the system
to expand. To properly identify a driving force we must always examine the transport of
only one displacement and one characteristic type of energy.
Although the dimensional and physical nature of each thermodynamic driving force
identified in this manner are very different, the product of each with its associated
displacement always measures a distinctive type of energy and must have the characteristic
energy dimensions of force multiplied by length.
Once a particular type of energy crossing a boundary has been identified, the manner
in which it is divided into a driving force and displacement is completely arbitrary as long
as the driving force is intensive and the displacement is a change in an extensive property.
For example, in this illustration of the transfer of pressure-volume work we could have
equally well called the displacement the distance traveled by the pump piston and the
driving force a product of pressure and piston area. We would thus change the dimensions
of the driving force and displacement, but this would not affect any thermodynamic
computations where only the magnitudes and not the rates of changes in energy and
properties are to be determined. In a subject called "non-equilibrium thermodynamics,"
where a description of the rates of various changes is an objective, the definition of driving
force and displacement is not at all arbitrary and must be done only in certain ways.1
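For the pump example the two choices are related by a simple factorization. With A the piston area and dx the piston displacement, the volume change is dV = A\,dx, so

P\, dV = P\,(A\, dx) = (P A)\, dx.

Either grouping, pressure with volume change or the force PA with the distance dx, measures the same quantity of energy; only the dimensions assigned to the driving force and displacement differ.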
Some of the diversity of driving force-displacement combinations and their
dimensions, which represent various types of energy in important thermodynamic
applications, is shown in Table I. The product of the two represents a change in the energy
of a region in which both the driving force and displacement are properties. It also gives the
energy transported between a system and its surroundings when the driving force is located
on its outer boundary and the displacement is within the system.
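When several of these driving force-displacement pairs act on a system at once, their products add. A familiar combination, written in standard symbols with the signs fixed by the conventions already discussed, is

dU = T\, dS - P\, dV + \sum_i \mu_i\, dn_i,

in which each term on the right is an intensive driving force multiplied by the change in its characteristic extensive property.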
9 The Laws of Thermodynamics
Now that we have discussed the nature of different forms of energy and properties of
matter, we must describe the basic principles of thermodynamics which are used to relate
them.
Classical thermodynamics is one of the most important examples of the axiomatic
form of the scientific method.2 In this method certain aspects of nature are explained and
predicted by deduction from a few basic axioms which are assumed to be always true. The
axioms themselves need not be proved but they should be sufficiently self-evident to be
readily acceptable and certainly without known contradictions. The application of
thermodynamics to the prediction of changes in given properties of matter in relation to
energy transfers across its boundaries is based on only two fundamental axioms, the First
and Second Laws of thermodynamics, although the total field of thermodynamics requires
two other axioms. What is called the Zeroth Law considers three bodies in thermal contact,
transferring heat between themselves, yet insulated from their external surroundings. If two
of these have no net heat flow between them, a condition defined as thermal equilibrium,
then thermal equilibrium exists also between each of these and the third body. This is a
necessary axiom for the development of the concept of temperature, but if one begins with
temperature as an already established property of matter, as we will do, the Zeroth Law is
not needed. The Third Law states that the limit of the entropy of a substance is zero as its
temperature approaches zero, a concept necessary in making absolute entropy calculations
and in establishing the relationship between entropy as obtained from the statistical
behavior of a multi-particle system, and the entropy of classical thermodynamics. Because in
this work we are concerned only with predicting changes in thermodynamic properties,
including the entropy, the Third Law also will not be needed or discussed.
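For reference, both auxiliary axioms can be stated compactly. Writing A \sim B for "A is in thermal equilibrium with B", the Zeroth Law asserts the transitivity

A \sim B \ \text{ and } \ B \sim C \ \implies \ A \sim C,

which is what allows a single temperature to label all mutually equilibrated bodies, while the Third Law asserts that \lim_{T \to 0} S = 0 for a substance.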
10 The Intuitive Perception of the First Law
The First Law of thermodynamics is simply the law of conservation of energy and
mass. The ready acceptability of this law is apparent from the fact that the concept of
conservation in some form has existed from antiquity, long before any precise demonstration
of it could be made. The ancient biblical affirmation, "Whatsoever a man sows, that shall
he also reap," is, in a sense, a conservation law. The Greek philosophers generally considered
matter to be indestructible, although its forms (earth, fire, air, or water) could be
interchanged. The situation was confused in the Middle Ages by a feeling that a
combustion process actually "destroyed" the matter which burned. This was not set right
until 1774 when Lavoisier conclusively demonstrated the conservation of mass in chemical
reactions.
It is fortunate that an intuitive feeling for energy conservation is also deep-rooted
because its demonstration is experimentally more difficult than that for mass conservation
and that which is conserved is more abstract. As discussed in section 4, that which is called
energy in classical thermodynamics is a quantity which measures a combination of effort,
expended in overcoming resistance to change, with the change this effort produces.
11 The Second Law as Common Experience
The Second Law is likewise a concept which is a part of basic human experience. In its
intuitive perception the Second Law is a sense of the uniqueness of the direction of the
change which results from the action of a particular thermodynamic driving force. For
example, no one has to be told that when the earth's gravitational potential is the driving
force it will cause water to flow from a tank on top of the hill to one at the bottom, but it
alone will never cause the reverse to occur. This direction of water flow is always the same
unless we supply some work, as for example with a pump, or unless we allow a change in
the properties of some region outside the two tanks, such as the water level in some other
reservoir. We identify the earth's gravitational attraction at a given water level as a driving
force because when the water levels are the same in each tank there is no further transfer of
water and also because the rate of transfer increases with an increase in the difference in
elevation.
An analogous example occurs when heat is driven from one system to another by a
difference in their temperatures. In our earliest experience temperature is the degree of
"hotness to the touch" which in this case is different for each system. We observe that when
this temperature difference is large the rate of change of their temperatures is greater than
when it is small and when the two have the same temperature we observe no further
changes. Consequently we identify temperature as a driving force which causes something
called heat to be transferred.
No theoretical knowledge of any kind is required for us to know that if we bring two
objects into close contact and exclude any interaction between them and their surroundings,
the cold one will always get hotter and the hot one cooler but never the opposite. This
direction is always the same unless we do some work, as with a refrigerator, or allow some
energy transfer between the objects and their surroundings.
When expressed more generally to include all types of driving forces and their driven
quantities, this uniqueness of direction becomes the Second Law. This is not a concept in
any way contained within the First Law, but one involving a completely new requirement.
For example, in either the water flow or in the heat flow situations, a flow in the wrong
direction would not necessarily violate the conservation of energy or mass.

Basic Principles of Classical and Statistical Thermodynamics (Lecture 1)

In the most general sense thermodynamics is the study of energy -- its transformations and
its relationship to the properties of matter. In its engineering applications thermodynamics
has two major objectives. One of these is to describe the properties of matter when it exists
in what is called an equilibrium state, a condition in which its properties show no tendency
to change. The other objective is to describe processes in which the properties of matter
undergo changes and to relate these changes to the energy transfers in the form of heat and
work which accompany them. These objectives are closely related and a text such as this,
which emphasizes primarily the description of equilibrium properties, must include as well a
discussion of the basic principles involved in accomplishing these two objectives.
Thermodynamics is unique among scientific disciplines in that no other branch of
science deals with subjects which are as commonplace or as familiar. Concepts such as
"heat", "work", "energy", and "properties" are all terms in everyone's basic vocabulary.
Thermodynamic laws which govern them originate from very ordinary experiences in our
daily lives. One might think that this familiarity would simplify the understanding and
application of thermodynamics. Unfortunately, quite the opposite is true. In order to
accomplish these objectives, one must almost entirely forget a life-long acquaintance with
the terms of thermodynamics and redefine them in a very scientific and analytical manner.
We will begin with a discussion of the various properties of matter with which we will be
concerned.
1 Thermodynamic and Non-Thermodynamic Properties
A property of matter is any characteristic which can distinguish a given quantity of
matter from another. These distinguishing characteristics can be classified in several
different ways, but for the purposes of this text it is convenient to divide them into what
may be called thermodynamic and non-thermodynamic properties.
The non-thermodynamic properties describe characteristics of what are often called
the "ultimate particles" of matter. An ultimate particle from a thermodynamic view point
is the smallest subdivision of a quantity of matter which does not undergo any net internal
changes during a selected set of processes which alter properties of the entire quantity. The
ultimate particles with which we will be concerned are generally considered to be molecules
or atoms, or in some cases groups of atoms within a molecule. When the meaning is clear
we will sometimes drop the adjective "ultimate" and refer to them simply as "particles".
Because it undergoes no internal changes, an ultimate particle can always be regarded as a
rigid mass. Its only alterable distinguishing characteristics which could possibly be
detected, if some experimental procedure could do so, are its position and its motion. As a
result, the fundamental properties of this particle, which cannot be calculated or derived
from any others, consist only of its mass and shape plus the vectors or coordinates needed to
describe its position and motion. It is convenient to combine the mass and motion
characteristics and represent them as a momentum property. These fundamental
characteristics, mass, position, and momentum, are called "microstate" properties and as a
group they give a complete description of the actual behavior of an ultimate particle.
Everyone realizes, of course, that molecules are not actually inert rigid masses. The
forces of attraction and repulsion which we ascribe to them are in reality the consequence of
variations in the quantum states of a deformable electron cloud which fills practically all the
space occupied by a molecule. When we represent a molecule as a rigid mass, we are
therefore constructing a model which allows us to apply classical mechanics to relate its
energy changes to changes in its microstate properties. For example, an effective model for a
complex molecule is to regard it as a group of rigid spheres of various size and mass held
together by flexible springs. The only justification for this model is that calculations of its
energy, when properly averaged, give good agreement with values of energy per molecule
obtained from experimental measurements using bulk quantities of the substance.
Constructing models is important in all aspects of thermodynamics, not only for individual
molecules, but also in describing the behavior of bulk matter.
Values which can be calculated from the microstate properties of an individual
particle or of a cluster containing only a few particles represent another group of
non-thermodynamic properties. We will refer to these derived values as "molecular" properties.
Examples are the translational, vibrational, or rotational energies of an individual molecule,
and also the calculated potential energy at various separation distances in a pair of
molecules or between other small groups of near neighbors. In some cases we wish to
calculate special functions of the potential energy within a group composed of a few
neighbors. An important feature of all of these molecular properties is that many different
combinations of the fundamental microstate properties can produce the same value of a
calculated molecular property.
example, assigning values to the microstate properties of a molecule determines its energy
but specifying the energy of a molecule does not specify any one particular set of values for
its microstate properties.
Whereas the non-thermodynamic properties pertain to a single or to only a few
ultimate particles, the characteristics of matter which are called thermodynamic properties
are those which result from the collective behavior of a very large number of its ultimate
particles. Instead of only one or a few particles, this number is typically on the order of
Avogadro's number. In a manner analogous to the way in which molecular properties can
be calculated from the fundamental microstate properties of an individual or small group of
particles, the various thermodynamic properties likewise depend upon the vastly greater
number of all the microstate properties of the very large group. Furthermore, an even larger
number of different sets of microstate properties can produce the same overall
thermodynamic property value. In contrast to non-thermodynamic properties,
thermodynamic properties can always be measured experimentally or calculated from such
measurements.
Establishing relationships between non-thermodynamic and thermodynamic
properties of matter in equilibrium states is the task of statistical thermodynamics while the
study of relationships among the thermodynamic properties alone is generally the topic of
classical thermodynamics. In the past it has been customary for textbooks and their readers
to make a sharp distinction between the two disciplines. The historical development of
classical thermodynamics and its applications to a wide range of engineering problems took
place without any reference at all to ultimate particles or molecular properties. This
development is entirely rigorous and has the merit of establishing the validity of general
thermodynamic principles to all types of matter regardless of its molecular character.
However, the problem of predicting and correlating thermodynamic properties of an
increasing diversity of substances both in pure form and in mixtures with the accuracy
needed in modern technology requires a combination of the classical and molecular
viewpoints. It is this combination which is the objective of this text.
2 The Selection of a System
The first concept which must be understood in applying thermodynamics is the
necessity to begin with the definition of what is called a "system". In thermodynamics this
is any region completely enclosed within a well-defined boundary. Everything outside the
system is then defined as the surroundings. Although it is possible to speak of the subject
matter of thermodynamics in a general sense, the establishment of analytical relationships
among heat, work, and thermodynamic properties requires that they be related to a
particular system. We must always distinguish clearly between energy changes taking place
within a system and energy transferred across the system boundary. We must likewise
distinguish between properties of material within a system and properties of its
surroundings.
In accordance with their definition, thermodynamic properties apply to systems
which must contain a very large number of ultimate particles. Other than this there are no
fundamental restrictions on the definition of a system. The boundary may be either rigid or
movable. It can be completely impermeable or it can allow energy or mass to be transported
through it. In any given situation a system may be defined in several ways; with some
definitions the computations to be performed are quite simple, while with others they are
difficult or even impossible.
For example, it is often impossible by means of thermodynamic methods alone to
make heat transfer calculations if a system is defined so that both heat transfer and
diffusional mass transfer occur simultaneously through the same area on the boundary of
the system. For processes in which mass transfer takes place only by bulk stream flow this
problem can be avoided easily by a proper definition of the system. In a flow process of this
type the system is defined so that it is enclosed by moveable boundaries with no stream flows
across them. Heat transfer then always occurs across a boundary not crossed by mass.
3 Microstates and Thermodynamic States
The state of a system is an important concept in thermodynamics and is defined as
the complete set of all its properties which can change during various specified processes.
The properties which comprise this set depend on the kinds of interactions which can take
place both within the system and between the system and its surroundings. Any two
systems, subject to the same group of processes, which have the same values of all properties
in this set are then indistinguishable and we describe them as being in identical states.
A process in thermodynamics is defined as a method of operation in which specific
quantities of heat and various types of work are transferred to or from the system to alter its
state. As we pointed out, one of the objectives of thermodynamics is to relate these state
changes in a system to the quantity of energy in the form of heat and work transferred
across its boundaries.
In discussing non-thermodynamic processes, a system may be chosen as a single
ultimate particle within a larger quantity of matter. In the absence of chemical reactions the
only processes in which it can participate are transfers of kinetic or potential energy to or
from the particle. In this case we would like to relate these energy transfers to changes in
the microstate of the system. A microstate for this one-particle system is a set of coordinates
in a multi-dimensional space indicating its position and its momenta in various vector
directions. For example, a simple rigid spherical monatomic molecule would require a total
of six such coordinates, three for its position and three for its momentum in order to
completely define its microstate.
Now consider a system containing a large number of these ultimate particles. A
microstate of this system is a set of all position and momentum values for all the particles.
For example, if there were N rigid spherical molecules we would then need 6N coordinates
to give a complete set of all the microstate properties and define a microstate for this system.
In a multiparticle system a particular microstate exists only for an instant and is then
replaced by another so that there is no experimental way to measure the set of positions and
motions which comprise one microstate among the vast number of them which occur
sequentially.
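In the notation of statistical mechanics a microstate of such a system is a single point in a 6N-dimensional phase space,

\Gamma = (\mathbf{r}_1, \ldots, \mathbf{r}_N,\ \mathbf{p}_1, \ldots, \mathbf{p}_N),

where \mathbf{r}_k and \mathbf{p}_k are the position and momentum vectors of particle k. As the particles move and collide this point jumps through a rapid succession of microstates, no single one of which can be captured by measurement.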
Because the microstates of a multiparticle system represent exactly what all the
particles are doing, all thermodynamic properties of the group are thus determined by them.
With this common origin all the thermodynamic properties are therefore related to each
other and we need to develop this relationship. The set of all the thermodynamic properties
of a multiparticle system, its temperature, pressure, volume, internal energy, etc., is defined
as the thermodynamic state of this system.
An important aspect of this relationship between thermodynamic properties is the
question of how many different thermodynamic properties of a given equilibrium system are
independently variable. The number of these represents the smallest number of properties
which must be specified in order to completely determine the entire thermodynamic state of
the system. All other thermodynamic properties of this system are then fixed and can be
calculated from these specified values. The number of these values which must be specified
is called the variance or the degrees of freedom of the system.
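As a concrete illustration, the variance of a pure gas in an equilibrium state is two: specifying, say, its temperature and pressure fixes the molar volume, the internal energy, the entropy, and every other thermodynamic property. The count in the general case of C components distributed among \pi phases is given by the Gibbs phase rule, a standard result quoted here without derivation:

F = C - \pi + 2.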
4 The Concept of Energy
In elementary physics energy is often defined as "the capacity to produce work". At
a descriptive level the idea expressed is correct, but for thermodynamics which is to be
applied quantitatively this definition is not a good one because the term "work" itself
requires a more precise definition than the general idea it ordinarily conveys. A better
definition of energy from the viewpoint of thermodynamics would be "the capacity to induce
a change in that which inherently resists change". This capacity represents a combination
of an effort, expended in overcoming resistance to a particular type of change, with the
change it produces. The combination is called energy.
The effort involved is measured quantitatively by what is defined as a "driving
force" in thermodynamics. A driving force is a property which both causes and also
controls the direction of change in another property. The quantitative value of this change
is called a "displacement". The product of a driving force and its associated displacement
always represents a quantity of energy, but in thermodynamics this quantity has meaning
only in relation to a specifically defined system.
Relative to a particular system there are generally two ways of locating a driving
force and the displacement it produces. In one way both the driving force and the
displacement are properties of the system and are located entirely within it, so that the
energy calculated from their product represents a change in the internal energy of the
system. Similarly, both the driving force and its displacement could be located entirely
within the surroundings so that the calculated energy is then a change in the total energy of
the surroundings.
In another way, however, the displacement occurs within the system but the driving
force producing it is a property of the surroundings and is applied externally at the system
boundary. By definition, the boundary of a system is a region of zero thickness containing
no matter at all so that the energy calculated in this way is not a property of matter either in
the system or in its surroundings but represents a quantity of energy in transition between
the two. In any quantitative application of thermodynamics it is always important to make
a careful distinction between energy changes within a system or within its surroundings
and energy in transition between them.
5 Microstate Driving Forces
In order to explain the nature of driving forces, suppose we consider first a system
defined as a single ultimate particle of a simple fluid, either a gas or a liquid. The system in
this case is a rigid spherical mass with no possibilities for any internal changes and obeying
Newtonian mechanics. In its surroundings are similar ultimate particles of this fluid.
From a Newtonian point of view the mass of this system resists any change in its condition
of motion and a specific change occurs only with the application of an external force to
overcome the inertial resistance inherent in the mass. In the presence of mutual attraction
and repulsion between this system and neighboring particles it may be considered to resist
any displacement from a position in which this attraction and repulsion are balanced. In
this situation a force vector directed toward the center of mass must be applied for a fixed
time period to produce a change. This force is produced by the environment around the
particle chosen as the system. The mechanism for its generation is by the action of
neighboring particles in exerting attraction or repulsion or in colliding with the system.
The scalar product of the vector force generated in this manner with the vectors which
represent the resulting displacements in position and velocity of the system determines the
energy added to the system when its velocity is increased, when it is moved away
from attracting neighbors, or when it is moved toward neighbors which repel it.
Since these displacements represent changes in microstate properties, we define the
force vector producing them as a "microstate driving force." According to Newtonian
mechanics this applied force is always opposed by an equal and opposite force representing
the resistance of the system to change. Although mechanically we could position these two
forces anywhere along their line of action, in terms of the system it is convenient to think of
them as opposing one another at the boundary of the system to describe energy in transition
across it and then as opposing one another within the system when we describe this
quantity of energy as the energy change of the system. An important characteristic of
microstate driving forces is that they are true force vectors in the Newtonian sense and there
is never a condition of unbalanced driving forces. This is not at all the case for what we will
define as "thermodynamic driving forces" which are the agents of change for
thermodynamic properties in multiparticle systems.
6 Thermodynamic Driving Forces
In contrast to the one-particle system which we have discussed in section 5, for
thermodynamic systems consisting of many particles we are usually as interested in
internal energy changes as we are in changes in position or motion of the entire system. In
this case we wish to define these internal energy changes in terms of thermodynamic
properties, each of which are the collective results of the enormous number of microstates for
all the ultimate particles of the system. Because the fundamental agents of change within
the system are microstate driving forces, the corresponding agents of change or driving
forces in thermodynamic systems are the composite result of all the microstate driving force
vectors in the system. However, the only case in which the collective behavior of all these
microstate driving force vectors defines a thermodynamic property is the one in which these
microstate vectors for all the individual particles are oriented in a completely random
manner in every conceivable direction. In this case their overall resultant in the entire
system is completely scalar in nature and a thermodynamic property of the system. We
define this resultant as a "thermodynamic driving force."
Likewise, the cumulative effect of all the microstate changes induced, which are also
vectors, produces in this case a completely scalar thermodynamic property change for the
multiparticle system. This overall change is the displacement induced by the
thermodynamic driving force.
Because these thermodynamic driving forces are not true vector forces in the
Newtonian sense but are scalar properties, the thermodynamic driving forces tending to
cause a change are not always balanced by equal and opposite driving forces opposing the
change. Changes in internal thermodynamic properties within a system can be controlled
as to direction, and in some instances as to their rates, by the degree of difference between
the value of a particular thermodynamic driving force property outside the system at its
boundary and a value of this same property somewhere within the system. Between
thermodynamic driving forces this difference can be of any magnitude, finite or
infinitesimal. When they are exactly equal there is then no net change induced and no
energy is transferred.