Some geophysical readings are of true point data but others are obtained
using sources that are separated from detectors. Where values are determined
between rather than at points, readings will be affected by orientation. Precise
field notes are always important but especially so in these cases, since reading
points must be defined and orientations must be recorded.
If transmitters, receivers and/or electrodes are laid out in straight lines
and the whole system can be reversed without changing the reading, the midpoint
should be considered the reading point. Special notations are needed
for asymmetric systems, and the increased probability of positioning error is
in itself a reason for avoiding asymmetry. Especial care must be taken when
recording the positions of sources and detectors in seismic work.
Station numbering
Station numbering should be logical and consistent. Where data are collected
along traverses, numbers should define positions in relation to the traverse
grid. Infilling between traverse stations 3 and 4 with stations 3.25, 3.5 and 3.75
is clumsy and may create typing problems, whereas defining as 325E a
station halfway between stations 300E and 350E, which are 50 metres apart,
is easy and unambiguous. The fashion for labelling such a station 300+25E
has no discernible advantages and uses a plus sign which may be needed,
with digital field systems or in subsequent processing, to stand for N or E. It
may be worth defining the grid origin in such a way that S or W stations do
not occur, and this may be essential with data loggers that cannot cope with
either negatives or points of the compass.
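The labelling arithmetic described above can be sketched in a few lines. This is a hypothetical helper, not from the text; it simply turns a metre offset east of the grid origin into a station label, and refuses negative offsets in the spirit of choosing an origin so that S or W stations never occur.

```python
def station_label(offset_m):
    """Label a station by its metre offset east of the grid origin.

    A station 325 m east of the origin is simply '325E', so the point
    halfway between 300E and 350E needs no fractional numbering and
    no plus sign that might later be needed to stand for N or E.
    """
    if offset_m < 0:
        raise ValueError("choose the origin so that W or S offsets never occur")
    return f"{offset_m:g}E"

# The infill station halfway between 300E and 350E:
label = station_label((300 + 350) / 2)  # '325E'
```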
Stations scattered randomly through an area are best numbered sequentially.
Positions can be recorded in the field by pricking through maps or
air-photos and labelling the reverse sides. Estimating coordinates in the field
from maps may seem desirable but mistakes are easily made and valuable
time is lost. Station coordinates are now often obtained from GPS receivers
(Section 1.5), but differential GPS may be needed to provide sufficient accuracy
for detailed surveys.
If several observers are involved in a single survey, numbers can easily
be accidentally duplicated. All field books and sheets should record the name
of the observer. The interpreter or data processor will need to know who to
look for when things go wrong.
Recording results
Geophysical results are primarily numerical and must be recorded even more
carefully than qualitative observations of field geology. Words, although
sometimes difficult to read, can usually be deciphered eventually, but a set of
numbers may be wholly illegible or, even worse, may be misread. The need
for extra care has to be reconciled with the fact that geophysical observers are
usually in more of a hurry than are geologists, since their work may involve
instruments that are subject to drift, draw power from batteries at frightening
speed or are on hire at high daily rates.
Numbers may, of course, not only be misread but miswritten. The circumstances
under which data are recorded in the field are varied but seldom
ideal. Observers are usually either too hot, too cold, too wet or too thirsty.
Under such conditions, they may delete correct results and replace them with
incorrect ones, in moments of confusion or temporary dyslexia. Data on geophysical
field sheets should therefore never be erased. Corrections should
be made by crossing out the incorrect items, preserving their legibility, and
writing the correct values alongside. Something may then be salvaged even
if the correction is wrong. Precise reporting standards must be enforced and
strict routines must be followed if errors are to be minimized. Reading the
instrument twice at each occupation of a station, and recording both values,
reduces the incidence of major errors.
Loss of geophysical data tends to be final. Some of the qualitative observations
in a geological notebook might be remembered and re-recorded, but
not strings of numbers. Copies are therefore essential and should be made
in the field, using duplicating sheets or carbon paper, or by transcribing the
results each evening. Whichever method is used, originals and duplicates
must be separated immediately and stored separately thereafter. Duplication
is useless if copies are stored, and lost, together. This, of course, applies
equally to data stored in a data logger incorporated in, or linked to, the field
instrument. Such data should be checked, and backed up, each evening.
Digital data loggers are usually poorly adapted to storing non-numeric
information, but observers are uniquely placed to note and comment on a
multitude of topographic, geological, manmade (cultural) and climatic factors
that may affect the geophysical results. If they fail to do so, the data that
they have gathered may be interpreted incorrectly. If data loggers are not
being used, comments should normally be recorded in notebooks, alongside
the readings concerned. If they are being used, adequate supplementary positional
data must be stored elsewhere. In archaeological and site investigation
surveys, where large numbers of readings are taken in very small areas, annotated
sketches are always useful and may be essential. Sketch maps should
be made wherever the distances of survey points or lines from features in
the environment are important. Geophysical field workers may also have a
responsibility to pass on to their geological colleagues information of interest
about places that only they may visit. They should at least be willing to
record dips and strikes, and perhaps to return with rock samples where these
would be useful.
Accuracy, sensitivity, precision
Accuracy must be distinguished from sensitivity. A standard gravity meter,
for example, is sensitive to field changes of one-tenth of a gravity unit
but an equivalent level of accuracy will be achieved only if readings are
carefully made and drift and tidal corrections are correctly applied. Accuracy
is thus limited, but not determined, by instrument sensitivity. Precision,
which is concerned only with the numerical presentation of results (e.g. the
number of decimal places used), should always be appropriate to accuracy
(Example 1.1). Not only does superfluous precision waste time but false conclusions
may be drawn from the high implied accuracy.
Example 1.1
Gravity reading = 858.3 scale units
Calibration constant = 1.0245 g.u. per scale division (see Section 2.1)
Converted reading = 879.32835 g.u.
But reading accuracy is only 0.1 g.u. (approximately), and therefore:
Converted reading = 879.3 g.u.
(Four decimal place precision is needed in the calibration constant, because
858.3 multiplied by 0.0001 is equal to almost 0.1 g.u.)
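The arithmetic of Example 1.1 can be checked directly. A minimal sketch: the full product is spuriously precise, and rounding to one decimal place brings the quoted value back into line with the roughly 0.1 g.u. reading accuracy.

```python
reading = 858.3        # scale units
calibration = 1.0245   # g.u. per scale division

converted = reading * calibration      # about 879.32835 g.u. -- too precise
quoted = round(converted, 1)           # 879.3 g.u., matching reading accuracy

# Why four decimal places are needed in the calibration constant:
# an error of 0.0001 in the constant shifts the result by nearly 0.1 g.u.
constant_sensitivity = reading * 0.0001  # about 0.086 g.u.
```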
Geophysical measurements can sometimes be made to a greater accuracy
than is needed, or even usable, by the interpreters. However, the highest
possible accuracy should always be sought, as later advances may allow the
data to be analysed more effectively.
Drift
A geophysical instrument will usually not record the same results if read
repeatedly at the same place. This may be due to changes in background
field but can also be caused by changes in the instrument itself, i.e. to drift.
Drift correction is often the essential first stage in data analysis, and is usually
based on repeat readings at base stations (Section 1.4).
Instrument drift is often related to temperature and is unlikely to be linear
between two readings taken in the relative cool at the beginning and end of
a day if temperatures are 10 or 20 degrees higher at noon. Survey loops may
therefore have to be limited to periods of only one or two hours.
Drift calculations should be made whilst the field crew is still in the
survey area so that readings may be repeated if the drift-corrected results
appear questionable. Changes in background field are sometimes treated as
drift but in most cases the variations can either be monitored directly (as in
magnetics) or calculated (as in gravity). Where such alternatives exist, it is
preferable they be used, since poor instrument performance may otherwise
be overlooked.
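The base-station correction described above can be sketched as follows, assuming (as the correction itself does) that drift is linear in time between the two base readings; the numbers are purely illustrative, with times in hours.

```python
def drift_correct(readings, base_start, base_end):
    """Remove linear instrument drift from a survey loop.

    readings   : list of (time, value) pairs taken during the loop
    base_start : (time, value) at the base station, start of loop
    base_end   : (time, value) at the same base station, end of loop

    Drift is assumed linear in time between the two base readings --
    which is exactly why loops must be kept short when drift is
    temperature-driven and the day heats up towards noon.
    """
    t0, v0 = base_start
    t1, v1 = base_end
    rate = (v1 - v0) / (t1 - t0)               # drift per unit time
    return [(t, v - rate * (t - t0)) for t, v in readings]

# Illustrative loop: base read at 09:00 and again at 11:00, showing
# 0.4 units of apparent drift over the two hours.
corrected = drift_correct(
    [(9.5, 101.2), (10.0, 103.7), (10.5, 102.1)],
    base_start=(9.0, 100.0),
    base_end=(11.0, 100.4),
)
```

Running the correction in the field, while the crew is still in the survey area, is what makes repeat readings possible when the corrected values look questionable.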
Signal and noise
To a geophysicist, signal is the object of the survey and noise is anything
else that is measured but is considered to contain no useful information. One
observer’s signal may be another’s noise. The magnetic effect of a buried
normal distribution lie within 1 SD of the mean, and less than 0.3% differ
from it by more than 3 SDs. The SD is popular with contractors when quoting
survey reliability, since a small value can efficiently conceal several major
errors. Geophysical surveys rarely provide enough field data for statistical
methods to be validly applied, and distributions are more often assumed to
be normal than proven to be so.
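The fractions quoted above follow directly from the cumulative form of the normal distribution, and can be verified with nothing beyond the standard library's error function:

```python
from math import erf, sqrt

def fraction_within(k):
    """Fraction of a normal distribution lying within k standard deviations."""
    return erf(k / sqrt(2.0))

within_1sd = fraction_within(1)       # about 68%
beyond_3sd = 1 - fraction_within(3)   # under 0.3%
```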
Anomalies
Only rarely is a single geophysical observation significant. Usually, many
readings are needed, and regional background levels must be determined,
before interpretation can begin. Interpreters tend to concentrate on anomalies,
i.e. on differences from a constant or smoothly varying background.
Geophysical anomalies take many forms. A massive sulphide deposit containing
pyrrhotite would be dense, magnetic and electrically conductive. Typical
anomaly profiles recorded over such a body by various types of geophysical
survey are shown in Figure 1.8. A wide variety of possible contour patterns
correspond to these differently shaped profiles.
Background fields also vary and may, at different scales, be regarded as
anomalous. A ‘mineralization’ gravity anomaly, for example, might lie on
a broader high due to a mass of basic rock. Separation of regionals from
residuals is an important part of geophysical data processing and even in the
field it may be necessary to estimate background so that the significance of
local anomalies can be assessed. On profiles, background fields estimated by
eye may be more reliable than those obtained using a computer, because of
the virtual impossibility of writing a computer program that will produce a
background field uninfluenced by the anomalous values (Figure 1.9). Computer
methods are, however, essential when deriving backgrounds from data
gathered over an area rather than along a single line.
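The pull of anomalous values on a computed background can be demonstrated with a deliberately simple sketch (the data are invented): a least-squares line fitted to a flat profile containing one local anomaly ends up sitting above the true background everywhere, which is precisely the leakage an experienced eye avoids.

```python
def fit_line(xs, ys):
    """Ordinary least-squares straight line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Flat 100-unit background with a local anomaly in the middle:
xs = list(range(11))
ys = [100.0] * 11
for i, bump in zip((4, 5, 6), (5.0, 10.0, 5.0)):
    ys[i] += bump

slope, intercept = fit_line(xs, ys)
# The fitted 'regional' lies above 100 everywhere: the anomalous
# values have leaked into the background estimate.
```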
The existence of an anomaly indicates a difference between the real world
and some simple model, and in gravity work the terms free air, Bouguer
and isostatic anomaly are commonly used to denote derived quantities that
represent differences from gross Earth models. These so-called anomalies
are sometimes almost constant within a small survey area, i.e. the area is
not anomalous! Use of terms such as Bouguer gravity (rather than Bouguer
anomaly) avoids this confusion.
Wavelengths and half-widths
Geophysical anomalies in profile often resemble transient waves but vary
in space rather than time. In describing them the terms frequency and frequency
content are often loosely used, although wavenumber (the number of
complete waves in unit distance) is pedantically correct. Wavelength may be
quite properly used of a spatially varying quantity, but is imprecise where
geophysical anomalies are concerned because an anomaly described as having
a single ‘wavelength’ would be resolved by Fourier analysis into a number
of components of different wavelengths.
A more easily estimated quantity is the half-width, which is equal to half
the distance between the points at which the amplitude has fallen to half the
anomaly maximum (cf. Figure 1.8a). This is roughly equal to a quarter of
the wavelength of the dominant sinusoidal component, but has the advantage
of being directly measurable on field data. Wavelengths and half-widths are
important because they are related to the depths of sources. Other things
being equal, the deeper the source, the broader the anomaly.
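As an illustration of the depth link, the half-width can be read straight off a synthetic profile. The sketch below uses the specific case of a point mass (sphere) gravity anomaly, for which depth is close to 1.305 times the half-width; the factor is particular to that source geometry, not a general rule.

```python
def half_width(xs, values):
    """Half the distance between the points at which the anomaly
    falls to half its maximum (the definition used in the text)."""
    peak = max(values)
    above = [x for x, v in zip(xs, values) if v >= peak / 2]
    return (above[-1] - above[0]) / 2

# Synthetic gravity profile over a point mass at depth z
# (amplitude proportional to z / (x^2 + z^2)^1.5):
z = 10.0
xs = [x / 10 for x in range(-500, 501)]
g = [z / (x * x + z * z) ** 1.5 for x in xs]

x_half = half_width(xs, g)
depth_estimate = 1.305 * x_half   # close to the true depth of 10
```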
Presentation of results
The results of surveys along traverse lines can be presented in profile form,
as in Figure 1.8. It is usually possible to plot profiles in the field, or at
least each evening, as work progresses, and such plots are vital for quality
control. A laptop computer can reduce the work involved, and many modern
instruments and data loggers are programmed to display profiles in ‘real time’
as work proceeds.
A traverse line drawn on a topographic map can be used as the baseline
for a geophysical profile. This type of presentation is particularly helpful
in identifying anomalies due to manmade features, since correlations with
features such as roads and field boundaries are obvious. If profiles along a
number of different traverses are plotted in this way on a single map they are
said to be stacked, a word otherwise used for the addition of multiple data
sets to form a single output set (see Section 1.3.5).
Contour maps used to be drawn in the field only if the strike of some
feature had to be defined quickly so that infill work could be planned, but
once again the routine use of laptop computers has vastly reduced the work
involved. However, information is lost in contouring because it is not generally
possible to choose a contour interval that faithfully records all the features
of the original data. Also, contour lines are drawn in the areas between traverses,
where there are no data, and inevitably introduce a form of noise.
Examination of contour patterns is not, therefore, the complete answer to
field quality control.
Cross-sectional contour maps (pseudo-sections) are described in
Sections 6.3.5 and 7.4.2.
In engineering site surveys, pollution monitoring and archaeology, the
objects of interest are generally close to the surface and their positions in
plan are usually much more important than their depths. They are, moreover,
likely to be small and to produce anomalies detectable only over very small
areas. Data have therefore to be collected on very closely spaced grids and can
often be presented most effectively if background-adjusted values are used
to determine the colour or grey-scale shades of picture elements (pixels) that
can be manipulated by image-processing techniques. Interpretation then relies
on pattern recognition and a single pixel value is seldom important. Noise
is eliminated by eye, i.e. patterns such as those in Figure 1.10 are easily
recognized as due to human activity.
Data loggers
During the past decade, automation of geophysical equipment in small-scale
surveys has progressed from a rarity to a fact of life. Although many of the
older types of instrument are still in use, and giving valuable service, they now
compete with variants containing the sort of computer power employed, 30
years ago, to put a man on the moon. At least one manufacturer now proudly
boasts ‘no notebook’, even though the instrument in question is equipped
with only a numerical key pad so that there is no possibility of entering
text comments into the (more than ample) memory. On other automated
instruments the data display is so small and so poorly positioned that the
possibility that the observer might actually want to look at, and even think
about, his observations as he collects them has clearly not been considered.
Unfortunately, this pessimism may all too often be justified, partly because of
the speed with which readings, even when in principle discontinuous, can now
be taken and logged. Quality control thus often depends on the subsequent
playback and display of whole sets of data, and it is absolutely essential that
this is done on, at the most, a daily basis. As Oscar Wilde might have said
(had he opted for a career in field geophysics), to spend a few hours recording
rubbish might be accounted a misfortune. To spend anything more than a day
doing so looks suspiciously like carelessness.
Automatic data loggers, whether ‘built-in’ or separate, are particularly
useful where instruments can be dragged, pushed or carried along traverses
to provide virtually continuous readings. Often, all that is required of the
operators is that they press a key to initiate the reading process, walk along
the traverse at constant speed and press the key again when the traverse is
completed. On lines more than about 20 m long, additional keystrokes can
be used to ‘mark’ intermediate survey points.
One consequence of continuous recording has been the appearance in
ground surveys of errors of types once common in airborne surveys which
have now been almost eliminated by improved compensation methods and
GPS navigation. These were broadly divided into parallax errors, heading
errors, ground clearance/coupling errors and errors due to speed variations.
With the system shown in Figure 1.11, parallax errors can occur because
the magnetic sensor is about a metre ahead of the GPS sensor. Similar errors
can occur in surveys where positions are recorded by key strokes on a data
logger. If the key is depressed by the operator when he, rather than the sensor,
passes a survey peg, all readings will be displaced from their true positions.
If, as is normal practice, alternate lines on the grid are traversed in opposite
directions, a herringbone pattern will be imposed on a linear anomaly, with
the position of the peak fluctuating backwards and forwards according to the
direction in which the operator was walking (Figure 1.12a).
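A hypothetical sketch of how such a parallax offset could be removed in processing: every recorded position is shifted along the traverse by the sensor's lead distance, signed by the walking direction, so that alternate lines no longer herringbone. The function and its parameters are illustrative, not from the text.

```python
def correct_parallax(positions, direction, lead=1.0):
    """Shift recorded positions to where the sensor actually was.

    The operator keys a position when *he* passes a peg, but the
    sensor is 'lead' metres ahead of him, so each reading was really
    taken 'lead' metres further on in the walking direction.

    direction: +1 for lines walked in the grid's positive direction,
               -1 for the return lines.
    """
    return [p + direction * lead for p in positions]

# Alternate lines walked in opposite directions record the same peak
# at 49 m and 51 m; after correction both sit at the true 50 m:
line_out = correct_parallax([49.0], direction=+1, lead=1.0)
line_back = correct_parallax([51.0], direction=-1, lead=1.0)
```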
False anomalies can also be produced in airborne surveys if ground
clearance is allowed to vary, and similar effects can now be observed in
ground surveys. Keeping the sensor shown in Figure 1.11 at a constant height
above the ground is not easy (although a light flexible ‘spacer’ hanging from
it can help). On level ground there tends to be a rhythmic effect associated
with the operator’s motion, and this can sometimes appear on contour maps
as ‘striping’ at right angles to the traverse when minor peaks and troughs on
adjacent lines are linked to each other by the contouring algorithm. On slopes
there will, inevitably, be a tendency for a sensor in front of the observer to
be closer to the ground when going uphill than when going down. How
this will affect the final maps will vary with the nature of the terrain, but
in an area with constant slope there will be a tendency for background levels
to be different on parallel lines traversed in opposite directions. This can
produce herringbone effects on individual contour lines in low gradient areas
(Figure 1.12b).
Heading errors occurred in airborne (especially aeromagnetic) surveys
because the effect of the aircraft on the sensor depended on aircraft orientation.
A similar effect can occur in a ground magnetic survey if the observer is
carrying any iron or steel material. The induced magnetization in these objects
will vary according to the facing direction, producing effects similar to those
produced by constant slopes, i.e. similar to those in Figure 1.12b.
Before the introduction of GPS navigation, flight path recovery in airborne
surveys relied on interpolation between points identified photographically.
Necessarily, ground speed was assumed constant between these points, and
anomalies were displaced if this was not the case. Similar effects can now be
seen in datalogged ground surveys. Particularly common reasons for slight
displacements of anomalies are that the observer either presses the key to start
recording at the start of the traverse, and then starts walking or, at the end of
the traverse, stops walking and only then presses the key to stop recording.
These effects can be avoided by insisting that observers begin walking before
the start of the traverse and continue walking until the end point has been
safely passed. If, however, speed changes are due to rugged ground, all that
can be done is to increase the number of ‘marked’ points.
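The constant-speed assumption behind these displacements can be sketched as the interpolation a logger effectively performs: positions are spread evenly between consecutive marked points, so any speed variation within a segment misplaces the readings, and extra marks shorten the segments over which the assumption must hold.

```python
def assign_positions(n_readings, marks):
    """Distribute reading positions between marked points.

    marks : list of (reading_index, position) pairs -- the start key,
    any intermediate 'mark' keystrokes, and the end key. Positions
    are linearly interpolated within each segment, i.e. constant
    walking speed is assumed between marks.
    """
    positions = [0.0] * n_readings
    for (i0, p0), (i1, p1) in zip(marks, marks[1:]):
        for i in range(i0, i1 + 1):
            positions[i] = p0 + (p1 - p0) * (i - i0) / (i1 - i0)
    return positions

# 11 readings: start key at 0 m, one mark at 50 m (reading 5),
# end key at 100 m:
pos = assign_positions(11, [(0, 0.0), (5, 50.0), (10, 100.0)])
```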
Many data loggers not only record data but have screens large enough
to show individual and multiple profiles, allowing a considerable degree of
quality control in the field. Further quality control will normally be done each
evening, using automatic contouring programs on laptop PCs, but allowance
must be made for the fact that automatic contouring programs tend to introduce
their own distortions (Figure 1.12c).
using sources that are separated from detectors. Where values are determined
between rather than at points, readings will be affected by orientation. Precise
field notes are always important but especially so in these cases, since reading
points must be defined and orientations must be recorded.
If transmitters, receivers and/or electrodes are laid out in straight lines
and the whole system can be reversed without changing the reading, the midpoint
should be considered the reading point. Special notations are needed
for asymmetric systems, and the increased probability of positioning error is
in itself a reason for avoiding asymmetry. Especial care must be taken when
recording the positions of sources and detectors in seismic work.
Station numbering
Station numbering should be logical and consistent. Where data are collected
along traverses, numbers should define positions in relation to the traverse
grid. Infilling between traverse stations 3 and 4 with stations 3.25 , 3.5 and 3.75
is clumsy and may create typing problems, whereas defining as 325E a
station halfway between stations 300E and 350E, which are 50 metres apart,
is easy and unambiguous. The fashion for labelling such a station 300+25E
has no discernible advantages and uses a plus sign which may be needed,
with digital field systems or in subsequent processing, to stand for N or E. It
may be worth defining the grid origin in such a way that S or W stations do
not occur, and this may be essential with data loggers that cannot cope with
either negatives or points of the compass.
Stations scattered randomly through an area are best numbered sequentially.
Positions can be recorded in the field by pricking through maps or
air-photos and labelling the reverse sides. Estimating coordinates in the field
from maps may seem desirable but mistakes are easily made and valuable
time is lost. Station coordinates are now often obtained from GPS receivers
(Section 1.5), but differential GPS may be needed to provide sufficient accuracy
for detailed surveys.
If several observers are involved in a single survey, numbers can easily
be accidentally duplicated. All field books and sheets should record the name
of the observer. The interpreter or data processor will need to know who to
look for when things go wrong.
Recording results
Geophysical results are primarily numerical and must be recorded even more
carefully than qualitative observations of field geology. Words, although
sometimes difficult to read, can usually be deciphered eventually, but a set of
numbers may be wholly illegible or, even worse, may be misread. The need
for extra care has to be reconciled with the fact that geophysical observers are
usually in more of a hurry than are geologists, since their work may involve
instruments that are subject to drift, draw power from batteries at frightening
speed or are on hire at high daily rates.
Numbers may, of course, not only be misread but miswritten. The circumstances
under which data are recorded in the field are varied but seldom
ideal. Observers are usually either too hot, too cold, too wet or too thirsty.
Under such conditions, they may delete correct results and replace them with
incorrect ones, in moments of confusion or temporary dyslexia. Data on geophysical
field sheets should therefore never be erased. Corrections should
be made by crossing out the incorrect items, preserving their legibility, and
writing the correct values alongside. Something may then be salvaged even
if the correction is wrong. Precise reporting standards must be enforced and
strict routines must be followed if errors are to be minimized. Reading the
instrument twice at each occupation of a station, and recording both values,
reduces the incidence of major errors.
Loss of geophysical data tends to be final. Some of the qualitative observations
in a geological notebook might be remembered and re-recorded, but
not strings of numbers. Copies are therefore essential and should be made
in the field, using duplicating sheets or carbon paper, or by transcribing the
results each evening. Whichever method is used, originals and duplicates
must be separated immediately and stored separately thereafter. Duplication
is useless if copies are stored, and lost, together. This, of course, applies
equally to data stored in a data logger incorporated in, or linked to, the field
instrument. Such data should be checked, and backed up, each evening.
Digital data loggers are usually poorly adapted to storing non-numeric
information, but observers are uniquely placed to note and comment on a
multitude of topographic, geological, manmade (cultural ) and climatic factors
that may affect the geophysical results. If they fail to do so, the data that
they have gathered may be interpreted incorrectly. If data loggers are not
being used, comments should normally be recorded in notebooks, alongside
the readings concerned. If they are being used, adequate supplementary positional
data must be stored elsewhere. In archaeological and site investigation
surveys, where large numbers of readings are taken in very small areas, annotated
sketches are always useful and may be essential. Sketch maps should
be made wherever the distances of survey points or lines from features in
the environment are important. Geophysical field workers may also have a
responsibility to pass on to their geological colleagues information of interest
about places that only they may visit. They should at least be willing to
record dips and strikes, and perhaps to return with rock samples where these
would be useful.
Accuracy, sensitivity, precision
Accuracy must be distinguished from sensitivity. A standard gravity meter,
for example, is sensitive to field changes of one-tenth of a gravity unit
but an equivalent level of accuracy will be achieved only if readings are
carefully made and drift and tidal corrections are correctly applied. Accuracy
is thus limited, but not determined, by instrument sensitivity. Precision,
which is concerned only with the numerical presentation of results (e.g. the
number of decimal places used), should always be appropriate to accuracy
(Example 1.1). Not only does superfluous precision waste time but false conclusions
may be drawn from the high implied accuracy.
Example 1.1
Gravity reading = 858.3 scale units
Calibration constant = 1.0245 g.u. per scale division (see Section 2.1)
Converted reading = 879.32835 g.u.
But reading accuracy is only 0.1 g.u. (approximately), and therefore:
Converted reading = 879.3 g.u.
(Four decimal place precision is needed in the calibration constant, because
858.3 multiplied by 0.0001 is equal to almost 0.1 g.u.)
Geophysical measurements can sometimes be made to a greater accuracy
than is needed, or even usable, by the interpreters. However, the highest
possible accuracy should always be sought, as later advances may allow the
data to be analysed more effectively.
Drift
A geophysical instrument will usually not record the same results if read
repeatedly at the same place. This may be due to changes in background
field but can also be caused by changes in the instrument itself, i.e. to drift.
Drift correction is often the essential first stage in data analysis, and is usually
based on repeat readings at base stations (Section 1.4).
Instrument drift is often related to temperature and is unlikely to be linear
between two readings taken in the relative cool at the beginning and end of
a day if temperatures are 10 or 20 degrees higher at noon. Survey loops may
therefore have to be limited to periods of only one or two hours.
Drift calculations should be made whilst the field crew is still in the
survey area so that readings may be repeated if the drift-corrected results
appear questionable. Changes in background field are sometimes treated as
drift but in most cases the variations can either be monitored directly (as in
magnetics) or calculated (as in gravity). Where such alternatives exist, it is
preferable they be used, since poor instrument performance may otherwise
be overlooked.
Signal and noise
To a geophysicist, signal is the object of the survey and noise is anything
else that is measured but is considered to contain no useful information. One
observer’s signal may be another’s noise. The magnetic effect of a buried
normal distribution lie within 1 SD of the mean, and less than 0.3% differ
from it by more than 3 SDs. The SD is popular with contractors when quoting
survey reliability, since a small value can efficiently conceal several major
errors. Geophysical surveys rarely provide enough field data for statistical
methods to be validly applied, and distributions are more often assumed to
be normal than proven to be so.
Anomalies
Only rarely is a single geophysical observation significant. Usually, many
readings are needed, and regional background levels must be determined,
before interpretation can begin. Interpreters tend to concentrate on anomalies,
i.e. on differences from a constant or smoothly varying background.
Geophysical anomalies take many forms. A massive sulphide deposit containing
pyrrhotite would be dense, magnetic and electrically conductive. Typical
anomaly profiles recorded over such a body by various types of geophysical
survey are shown in Figure 1.8. A wide variety of possible contour patterns
correspond to these differently shaped profiles.
Background fields also vary and may, at different scales, be regarded as
anomalous. A ‘mineralization’ gravity anomaly, for example, might lie on
a broader high due to a mass of basic rock. Separation of regionals from
residuals is an important part of geophysical data processing and even in the
field it may be necessary to estimate background so that the significance of
local anomalies can be assessed. On profiles, background fields estimated by
eye may be more reliable than those obtained using a computer, because of
the virtual impossibility of writing a computer program that will produce a
background field uninfluenced by the anomalous values (Figure 1.9). Computer
methods are, however, essential when deriving backgrounds from data
gathered over an area rather than along a single line.
The existence of an anomaly indicates a difference between the real world
and some simple model, and in gravity work the terms free air, Bouguer
and isostatic anomaly are commonly used to denote derived quantities that
represent differences from gross Earth models. These so-called anomalies
are sometimes almost constant within a small survey area, i.e. the area is
not anomalous! Use of terms such as Bouguer gravity (rather than Bouguer
anomaly) avoids this confusion.
Wavelengths and half-widths
Geophysical anomalies in profile often resemble transient waves but vary
in space rather than time. In describing them the terms frequency and frequency
content are often loosely used, although wavenumber (the number of
complete waves in unit distance) is pedantically correct. Wavelength may be
quite properly used of a spatially varying quantity, but is imprecise where
geophysical anomalies are concerned because an anomaly described as having
a single ‘wavelength’ would be resolved by Fourier analysis into a number
of components of different wavelengths.
A more easily estimated quantity is the half-width, which is equal to half
the distance between the points at which the amplitude has fallen to half the
anomaly maximum (cf. Figure 1.8a). This is roughly equal to a quarter of
the wavelength of the dominant sinusoidal component, but has the advantage
of being directly measurable on field data. Wavelengths and half-widths are
important because they are related to the depths of sources. Other things
being equal, the deeper the source, the broader the anomaly.
Presentation of results
The results of surveys along traverse lines can be presented in profile form,
as in Figure 1.8. It is usually possible to plot profiles in the field, or at
least each evening, as work progresses, and such plots are vital for quality
control. A laptop computer can reduce the work involved, and many modern
instruments and data loggers are programmed to display profiles in ‘real time’
as work proceeds.
A traverse line drawn on a topographic map can be used as the baseline
for a geophysical profile. This type of presentation is particularly helpful
in identifying anomalies due to manmade features, since correlations with
features such as roads and field boundaries are obvious. If profiles along a
number of different traverses are plotted in this way on a single map they are
said to be stacked, a word otherwise used for the addition of multiple data
sets to form a single output set (see Section 1.3.5).
Contour maps used to be drawn in the field only if the strike of some
feature had to be defined quickly so that infill work could be planned, but
once again the routine use of laptop computers has vastly reduced the work
involved. However, information is lost in contouring because it is not generally
possible to choose a contour interval that faithfully records all the features
of the original data. Also, contour lines are drawn in the areas between traverses,
where there are no data, and inevitably introduce a form of noise.
Examination of contour patterns is not, therefore, the complete answer to
field quality control.
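The way gridding invents values between traverses can be illustrated with a toy calculation. The sketch below uses simple inverse-distance weighting, one of many interpolation schemes a contouring package might employ (the function name `idw` and the survey geometry are made up here). An anomaly recorded on only one of two traverses nevertheless produces a non-zero interpolated value midway between them, where nothing was measured.

```python
import numpy as np

def idw(px, py, sx, sy, sv, power=2.0):
    """Inverse-distance-weighted estimate at (px, py) from scattered
    stations (sx, sy) with readings sv."""
    d2 = (sx - px) ** 2 + (sy - py) ** 2
    if np.any(d2 == 0):
        return float(sv[np.argmin(d2)])   # exactly on a station
    w = 1.0 / d2 ** (power / 2.0)
    return float(np.sum(w * sv) / np.sum(w))

# Two traverses 100 m apart; a narrow anomaly crosses line y = 0 only.
sx = np.concatenate([np.arange(0, 200, 10.0), np.arange(0, 200, 10.0)])
sy = np.concatenate([np.zeros(20), np.full(20, 100.0)])
sv = np.zeros(40)
sv[10] = 50.0        # anomaly seen at x = 100 m on line y = 0 only

# The value midway between the lines is an artefact of interpolation:
mid = idw(100.0, 50.0, sx, sy, sv)
```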
Cross-sectional contour maps (pseudo-sections) are described in
Sections 6.3.5 and 7.4.2.
In engineering site surveys, pollution monitoring and archaeology, the
objects of interest are generally close to the surface and their positions in
plan are usually much more important than their depths. They are, moreover,
likely to be small and to produce anomalies detectable only over very small
areas. Data have therefore to be collected on very closely spaced grids and can
often be presented most effectively if background-adjusted values are used
to determine the colour or grey-scale shades of picture elements (pixels) that
can be manipulated by image-processing techniques. Interpretation then relies
on pattern recognition and a single pixel value is seldom important. Noise
is eliminated by eye, i.e. patterns such as those in Figure 1.10 are easily
recognized as due to human activity.
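A minimal version of this grey-scale presentation can be sketched as follows (the function name and scaling convention are assumptions, not a standard): background-adjusted values are mapped linearly onto 0 to 255 grey levels, with mid-grey representing the background itself.

```python
import numpy as np

def to_grey(values, background):
    """Map background-adjusted readings onto 0-255 grey levels,
    with mid-grey (128) representing the background."""
    residual = np.asarray(values, dtype=float) - background
    span = np.abs(residual).max()
    if span == 0:
        return np.full(residual.shape, 128, dtype=np.uint8)
    grey = 128 + 127 * residual / span
    return np.round(grey).astype(np.uint8)

# Hypothetical 4 x 4 grid of readings about a background of 100
readings = np.full((4, 4), 100.0)
readings[1, 2] = 110.0    # positive anomaly, rendered near-white
readings[3, 0] = 90.0     # negative anomaly, rendered near-black
pixels = to_grey(readings, background=100.0)
```

The resulting pixel array is exactly the kind of object that image-processing techniques can then sharpen, filter or pattern-match.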
Data loggers
During the past decade, automation of the geophysical equipment used in
small-scale surveys has progressed from being a rarity to a fact of life.
Although many of the
older types of instrument are still in use, and giving valuable service, they now
compete with variants containing the sort of computer power employed, 30
years ago, to put a man on the moon. At least one manufacturer now proudly
boasts ‘no notebook’, even though the instrument in question is equipped
with only a numerical key pad so that there is no possibility of entering
text comments into the (more than ample) memory. On other automated
instruments the data display is so small and so poorly positioned that the
possibility that the observer might actually want to look at, and even think
about, his observations as he collects them has clearly not been considered.
Unfortunately, this pessimism may all too often be justified, partly because of
the speed with which readings, even when in principle discontinuous, can now
be taken and logged. Quality control thus often depends on the subsequent
playback and display of whole sets of data, and it is absolutely essential that
this is done on, at the most, a daily basis. As Oscar Wilde might have said
(had he opted for a career in field geophysics), to spend a few hours recording
rubbish might be accounted a misfortune. To spend anything more than a day
doing so looks suspiciously like carelessness.
Automatic data loggers, whether ‘built-in’ or separate, are particularly
useful where instruments can be dragged, pushed or carried along traverse
lines to provide virtually continuous readings. Often, all that is required of the
operators is that they press a key to initiate the reading process, walk along
the traverse at constant speed and press the key again when the traverse is
completed. On lines more than about 20 m long, additional keystrokes can
be used to ‘mark’ intermediate survey points.
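The positioning implied by this procedure amounts to piecewise-linear interpolation between keystrokes, assuming constant walking speed between successive marks. A minimal sketch (with made-up reading numbers and chainages):

```python
import numpy as np

def assign_positions(n_readings, mark_indices, mark_positions):
    """Assign along-line positions to continuously logged readings,
    assuming constant walking speed between 'marked' fiducial points.

    mark_indices  : reading numbers at which the mark key was pressed
    mark_positions: known chainages (m) of those marks
    """
    return np.interp(np.arange(n_readings), mark_indices, mark_positions)

# 41 readings; the key was pressed at the start peg (0 m), at a 20 m
# peg (reading 18, because the operator walked slowly at first) and
# at the end peg (40 m).
pos = assign_positions(41, [0, 18, 40], [0.0, 20.0, 40.0])
```

Without the intermediate mark, every reading between 0 and 40 m would have been positioned as if the whole line were walked at one speed, displacing anomalies on the slow half of the traverse.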
One consequence of continuous recording has been the appearance in
ground surveys of errors of types that were once common in airborne surveys,
where they have now been almost eliminated by improved compensation
methods and GPS navigation. These errors fall broadly into parallax errors,
heading errors, ground clearance/coupling errors and errors due to speed
variations.
With the system shown in Figure 1.11, parallax errors can occur because
the magnetic sensor is about a metre ahead of the GPS sensor. Similar errors
can occur in surveys where positions are recorded by key strokes on a data
logger. If the key is depressed by the operator when he, rather than the sensor,
passes a survey peg, all readings will be displaced from their true positions.
If, as is normal practice, alternate lines on the grid are traversed in opposite
directions, a herringbone pattern will be imposed on a linear anomaly, with
the position of the peak fluctuating backwards and forwards according to the
direction in which the operator was walking (Figure 1.12a).
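If the sensor-to-operator offset is known and roughly constant, the parallax can be corrected by shifting each recorded position by that offset in the direction of walking, which removes the herringbone. A minimal sketch (function name and values are illustrative):

```python
import numpy as np

def deparallax(positions, direction, offset):
    """Correct recorded positions for a sensor carried 'offset'
    metres ahead of the point at which positions are logged.

    direction: +1 for lines walked towards increasing chainage,
               -1 for lines walked the opposite way.
    """
    return np.asarray(positions, dtype=float) + direction * offset

# Sensor 1 m ahead of the operator; the operator logs a position of
# 50 m on two adjacent lines walked in opposite directions.
line_up   = deparallax([50.0], +1, 1.0)   # reading actually made at 51 m
line_down = deparallax([50.0], -1, 1.0)   # reading actually made at 49 m
```

The 2 m discrepancy between the uncorrected lines is exactly the peak-to-peak displacement that produces the herringbone pattern of Figure 1.12a.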
False anomalies can also be produced in airborne surveys if ground
clearance is allowed to vary, and similar effects can now be observed in
ground surveys. Keeping the sensor shown in Figure 1.11 at a constant height
above the ground is not easy (although a light flexible ‘spacer’ hanging from
it can help). On level ground there tends to be a rhythmic effect associated
with the operator’s motion, and this can sometimes appear on contour maps
as ‘striping’ at right angles to the traverse when minor peaks and troughs on
adjacent lines are linked to each other by the contouring algorithm. On slopes
there will, inevitably, be a tendency for a sensor in front of the observer to
be closer to the ground when going uphill than when going down. How
this will affect the final maps will vary with the nature of the terrain, but
in an area with constant slope there will be a tendency for background levels
to be different on parallel lines traversed in opposite directions. This can
produce herringbone effects on individual contour lines in low gradient areas
(Figure 1.12b).
Heading errors occurred in airborne (especially aeromagnetic) surveys
because the effect of the aircraft on the sensor depended on aircraft orientation.
A similar effect can occur in a ground magnetic survey if the observer is
carrying any iron or steel material. The induced magnetization in these objects
will vary according to the facing direction, producing effects similar to those
produced by constant slopes, i.e. similar to those in Figure 1.12b.
Before the introduction of GPS navigation, flight path recovery in airborne
surveys relied on interpolation between points identified photographically.
Necessarily, ground speed was assumed constant between these points, and
anomalies were displaced if this was not the case. Similar effects can now be
seen in data-logged ground surveys. A particularly common cause of slight
displacement of anomalies is that the observer presses the key to start
recording at the start of the traverse and only then begins walking, or stops
walking at the end of the traverse and only then presses the key to stop recording.
These effects can be avoided by insisting that observers begin walking before
the start of the traverse and continue walking until the end point has been
safely passed. If, however, speed changes are due to rugged ground, all that
can be done is to increase the number of ‘marked’ points.
Many data loggers not only record data but have screens large enough
to show individual and multiple profiles, allowing a considerable degree of
quality control in the field. Further quality control will normally be done each
evening, using automatic contouring programs on laptop PCs, but allowance
must be made for the fact that automatic contouring programs tend to introduce
their own distortions (Figure 1.12c).