06 January 2013

What to look for in a magnetometer

I have been looking at buying a GPR recently, and trying to get decent comparative information between makes and models is difficult, to say the least. It occurred to me that other people must have the same problem when buying a magnetometer, so I thought I would write a quick guide detailing the knowledge I have gained over the years. I won't be discussing anything too expensive, like alkali-vapour magnetometers, as they are outside most people's budgets. I will stick to fluxgate magnetometers. There are three makes I will discuss.

Geoscan

Geoscan make the FM256, a fluxgate gradiometer with 0.5 metre sensor spacing. I have personally used its predecessor, the FM36. I am not entirely sure of the differences between the two models, though I gather that the main points I will touch on have not changed.

Bartington

Bartington make the GRAD601, a fluxgate gradiometer with 1 metre sensor spacing. I personally have a GRAD601-2, the version with two sensor columns.

Foerster

Foerster make the Ferex. I personally have no experience of these devices, and what I have to say on the matter is merely what I have learned from other people and the internet.

With all that in mind, I will discuss the differences between the devices in various categories.


Sensitivity

One of the most important things to consider is the sensitivity of the instrument: how good it is at picking up the slight changes we are looking for in archaeology. In my personal experience, the Bartington is more sensitive than the Geoscan, perhaps helped by the longer sensor columns. Apparently, the Foerster is not very sensitive. The Bartington wins here; I am not sure who comes second, but I would guess the Geoscan.

Setup & Stability

Fluxgate instruments, being directional in nature, need to be balanced before use. The Geoscan instrument has a manual process, where physical knobs are turned to align the sensors. The process is somewhat time-consuming, and is not helped by the device suffering a lot from thermal drift, so you may find yourself realigning the sensors after each grid. The Bartington is much better here. It has an electronic balancing process, which calculates the differences in sensor alignment and compensates electronically. It is also helped by being very temperature stable; I only tend to balance it a couple of times in a day. The Foerster is apparently set up in such a way that it doesn't need balancing at all. I'm not quite sure how this works, but it seems to, so the Foerster wins here, with the Bartington second.

Array Options

The Geoscan instrument has an option to carry two separate devices on a carrying frame, with a single button to start recording, but each device has to be balanced and downloaded separately. The Bartington comes in the single-column GRAD601-1 variety or the dual-column GRAD601-2 variety; with the GRAD601-2, you do not need separate balancing and downloading for each column. Because of the lack of setup needed with the Foerster instrument, it is easy to have a large array of devices, perhaps even towed behind a vehicle. Foerster wins this one, with the Bartington second.

GNSS Integration

The Geoscan instrument has no GNSS integration. The Bartington has the option of a separate data recorder that uses GNSS, but that will not do normal gridded recording. The Foerster has full GNSS integration, which helps with its cart and vehicle-towed setups. The Foerster wins this one, with the Bartington second.

Reliability

The Geoscan has a very good reputation for reliability; these things never seem to go wrong. The Bartington has a poor reputation for reliability. Personally, I've had to have my machine repaired twice: once to replace the motherboard in the data recorder, and once to have a sensor column rebuilt after water got in, causing thermal drift. They can be damaged by rain, especially after the seals have perished, and other people I have talked to have had a similar experience. I don't have any information on the reliability of the Foerster instruments, but I would guess they sit somewhere between the other two, so Geoscan wins this one, with Foerster second.

Cost

I am somewhat lacking in information here. The only concrete figure I can give is that my GRAD601-2 cost me £10,500 a few years back. I gather that the GRAD601-1 and FM256 are roughly the same price, but for two sensor columns, the GRAD601-2 is much better value than buying two FM256s. I know nothing about Foerster prices, so I can't call who wins this one; get some quotes.

Conclusion

What I would recommend depends on how you will be using it. If you are surveying using a gridless GNSS technique, then the Foerster is probably your best bet. If you are doing a gridded survey, I would recommend the Bartington. If anyone out there has further information to contribute to this guide, especially regarding the Foerster instruments, please leave a comment below.

08 December 2012

NSGG Conference 2012

I managed to get to the Near Surface Geophysics Group Conference this year. It was good to chat to a few old faces, such as the author of Archaeosurveyor, a thoroughly nice chap, with whom I swapped geophysics software notes. I also spoke to Erica Utsi about buying a GPR; I was quite impressed by what I heard, but I have yet to hear from anyone who has used one, and I haven't got the money to buy one quite yet. Sometime next year, hopefully. So what about the talks? Many of them used the sort of equipment that you can buy when money is no object, as it seems to be for academic departments, but it is not all about the bling. It's not even about the pretty pictures, though that helps. It's about some of the new ideas and how people go about things differently. Here are some of the highlights for me.

James Bonsall talked about a new EM instrument called the CMD Mini-Explorer. I hadn't been hugely impressed by the results from EM in the past, but the results shown in this talk were quite impressive. The instrument takes both in-phase and quadrature readings at three different depths, increasing the chance that you will find something. The speaker said it gave better results and was easier to use than the Geonics EM38, though someone from Geonics in the audience suggested that a lot of the problems the speaker described had been sorted in the EM38 MK2, which takes readings at two depths compared to the three of the CMD. It would be interesting to hear from someone who has used both of these.

Armin Schmidt talked about GPR. This time, he took data from a Roman-era cemetery and converted the raster data to vector data. This allowed him to use various GIS functions to process the data into a more agreeable format for viewing in 3D. You don't get the fine detail, but it makes things much easier to see for the average person not used to staring at geophysics plots.

James Lyall, famous in geophysics circles for the giant survey in the Vale of Pickering, talked about a national archive for geophysics results, much the same as has been achieved for aerial photography. It is actually quite hard to get your hands on the data for any given geophysics project, and it is rare for any geophysics practitioner to store the data in any readable form outside the survey report. It is certainly possible, but it takes time and is expensive, so most surveyors don't. James asked the audience to get their collective heads around the problem.

Robert Fry talked about the work of the DART Project, which is something I've had my eye on for a while. One of the things they are attempting to do is see how earth resistance changes over the course of a year. Robert explained how they found a ditch feature using magnetometry, then put a series of fixed resistance probes across it in order to find out how the contrast between the ditch and its surroundings changed over time. Not surprisingly, the very wet weather in 2012 left everything waterlogged, making the ditch all but invisible to earth resistance for most of the year.

James Bonsall (again) talked about ground truthing geophysics data by comparing geophysics survey results to excavation results in Ireland. Most of the work was done with magnetometry, and the results showed a big difference depending on the geology the survey was taken over, with limestones suffering. The results were broken down into true-positives, where both geophysics and excavation found features; true-negatives, where neither did; false-positives, where the geophysics found something but the excavators didn't; and false-negatives, where the excavators found something that the geophysics hadn't spotted. The speaker suggested that for certain geologies, alternative methods to magnetometry, such as EM, should be used. Some members of the audience didn't agree with this, suggesting in particular that there isn't a problem with limestones in mainland UK. Some also suggested that many of the false-positives were down to the excavators machining through shallow features, which everyone seemed to agree with.

This one is just for the pretty pictures. Lieven Verdonck demonstrated the sort of results you can get when you perform a GPR survey of a Roman town in Portugal at an absurdly high resolution. Nice if you have the time for it, and the results were certainly worth it, with very clear, high-resolution wall lines.

Closer to home, Paul Cheetham has been doing a very similar thing in Dorset to what I have been doing in Sussex, examining Roman rural settlement on a grand scale. The sort of results he was getting was quite familiar to me, and made me feel right at home amongst all the speakers with their expensive bling machines.

30 September 2012

Latest Results: Oaklands Park

Finally, here is a blog post about the big Independent Historical Research Group survey I have been working on this summer, at Oaklands Park, Sedlescombe. Oaklands Park is one of the 'Big Three' Roman iron-working sites by volume of slag heap, and right next to the Roman road down from Bodiam, so we approached Pestalozzi, the children's charity who own the land, and they very kindly agreed to let us survey it all.


Margary's line for the main Roman road from Bodiam is marked in green, but for reasons I won't go into just yet (we are still investigating), that line is somewhat in doubt south of the river. The site wasn't quite as big as we expected it to be, but there are certainly some very interesting features. To give you an idea of the geography, the playing field you can see towards the northern end is the floodplain of the River Brede, and the site is on the side of a hill, rising to the south, with a paleochannel cutting through. The paleochannel is visible in the results, running north-south towards the eastern end of the main survey area. Iron ore can be found on the top of the hill to the south.

The main iron-working area is pretty obvious, hugging the north end of the field, which would also have been the northern edge of the land, with water coming close to this point in Roman times. It has previously been supposed that there was a port here in Roman times, which makes sense; coal was still being brought up the River Brede to Sedlescombe into Victorian times. A couple of enclosures can be seen towards the eastern end, but apart from these, there is a lot less settlement than we expected. Much of the local settlement may be towards the west, under the trees and houses.

Tracks seem to lead everywhere; here are a couple to note. One track leads south, joining the edge of the paleochannel and heading towards the top of the channel, which seems to have been dug out to exploit the easy access to iron ore that it provides. Two more tracks lead out of the iron-working area to the west. They both appear on the other side of the road, in a field owned by Luff's Farm, where they can be seen to join just before heading around the hill to the south.

IHRG are far from done at this site; there is still more to investigate. Most importantly, the main Roman road to the south, which is most likely not how Ivan Margary envisioned it.

23 September 2012

Latest Results: Ringmer Again

As an update to this post about Roman road hunting around Ringmer, I've been spending more time with the Roman Ringmer Study Group tracking more of this road. We did another survey in the field to the east and found more of the road there (see image below), which also showed the side ditches a bit better; they are roughly 20 metres apart. The group also dug some test pits on features in the first field: the road is flint metalled, with the occasional bit of iron. The other features in the field turned out to be geological, most likely that annoying gley stuff again, probably down to the field being quite boggy. Annoying geology is annoying. The full report for this survey has now been written. You can find it here.


Further to this, the Roman Ringmer Study Group very kindly left their home parish to track the road a bit further east. The image below is from just west of Laughton Place, which is owned by the Landmark Trust. The road was found to run along the northern edge of the moat, which could possibly mean that the road still existed in some form in medieval times. Hopefully the course of the road will be fully mapped out at some point, but the course, at least to the east, is still a bit of a mystery.


08 July 2012

Latest Results: Ringmer

Searching for Roman roads in Sussex is the subject of my long-term project, and recently I have had the chance to survey a previously unknown section of road near Ringmer. This section connects with two others, but whilst they are made of flint, this road, as you can see from the results, is made from both flint and Roman iron bloomery slag, which makes it easy to spot. There have apparently been a lot of coins found in the area, but it looks like the site has been picked clean, as nothing seems to be appearing now. To the south of the road are a number of 'features'. I can't decide yet whether they are geological or archaeological, but I am tending towards the latter. I will be starting a different project soon, but I will return to this site in a new blog post at some point in the future.


04 July 2012

Latest Results: Teston

It is not often that you get to survey a Roman villa; this is only my second. Whilst I usually go for magnetometry these days, masonry eroding from the surface at the site of Teston Roman villa in Kent made resistivity the obvious choice, as resistivity is much better at finding walls than magnetometry. A bath house had already been found here, to the north-west, and building material in the field to the north, but here was an opportunity to look for buried walls in a relatively undamaged area. There is a distinct difference in the types of walls found at this site: there are strong features towards the north, and more ephemeral linear features towards the south. Walls of both types show in both survey areas, and in the eastern area, a difference in alignment between two sets of walls suggests two different phases of building. The southern part of the eastern set of walls also contains what looks like an apsidal room, with a wing of what looks like a villa building heading south. You may hear more from me on this site in the future.




03 June 2012

Snuffler: Destripe Filtering

In my last post, I said that I would try and talk about Snuffler a bit more, so here is a post about a certain type of filtering in Snuffler, called 'Destripe'. This filter is used for processing magnetometry data. The most common type of magnetometry sensor is the fluxgate sensor, which is constructed from a metal ring with wire wound around it. The sensor is cheap to make and sensitive, but the downside is that it is directional, i.e. the reading will change according to the direction the sensor is facing, drowning out the slight changes you are trying to observe with what is effectively just a fluxgate compass.

The way to get around this problem is to use a gradiometer setup, where two sensors are arranged one above the other and aligned so that, when the device is turned, the earth's magnetic field affects both sensors equally. When you take the reading of one sensor away from the other, you are left with the minor variations in the reading from the sensor closest to the ground, which is just what you want.
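As a minimal sketch of the arithmetic involved (the function name and figures are my own illustration, not taken from any particular instrument):

```python
def gradiometer_reading(bottom_nT, top_nT):
    # The earth's field (and any other uniform interference) affects both
    # aligned sensors equally, so subtracting one reading from the other
    # cancels it, leaving the local anomaly felt most strongly by the
    # sensor closest to the ground.
    return bottom_nT - top_nT

# e.g. a background field of ~48,000 nT plus a 5 nT anomaly at the bottom sensor:
print(gradiometer_reading(48005.0, 48000.0))  # -> 5.0
```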

Of course, in order for the gradiometer setup to function correctly, the two sensors must be perfectly aligned, and there are two ways of going about this, both of which use something called a zero point. A zero point is a magnetically sterile spot on the ground where the device can be faced towards the cardinal compass points in order to balance it correctly. Such a point can be found by scanning around for one, as in the sketch below.
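A trivial sketch of that check, assuming you record one reading facing each cardinal direction (the 1nT limit is my own preference, mentioned further down, not a hard rule):

```python
def zero_point_ok(readings_nT, limit_nT=1.0):
    # A candidate zero point is good enough when the readings taken facing
    # each cardinal direction all sit close to zero.
    return all(abs(r) < limit_nT for r in readings_nT)

print(zero_point_ok([0.4, -0.2, 0.6, -0.5]))  # True: balance here
print(zero_point_ok([0.4, 3.1, 0.6, -0.5]))   # False: scan for another spot
```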

The first balancing method, used by devices such as the Geoscan FM256, is physical balancing. The physical alignment of the sensors is adjustable, and they are manually adjusted according to the readings shown to the user, which is quite fiddly.

The second balancing method, used by devices such as the Bartington GRAD601, is electronic balancing. The fluxgate sensors are physically fixed within the sensor columns, rather than adjustable as above. The machine gets the user to point the device in various directions and records the readings, then compensates by adjusting any readings you take according to the deficiencies found during the balancing process.
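As a hedged sketch of the idea (this is my reading of how such a correction could work, not Bartington's actual algorithm, and it simplifies the error down to a single constant offset):

```python
import numpy as np

def estimate_balance_offset(zero_point_readings_nT):
    # Over a sterile zero point every reading should be 0 nT, so the mean of
    # the recorded readings estimates the misalignment error. A real
    # instrument presumably models a direction-dependent error rather than
    # one constant offset.
    return float(np.mean(zero_point_readings_nT))

def compensate(reading_nT, offset_nT):
    # Apply the stored correction to every subsequent reading.
    return reading_nT - offset_nT

# e.g. readings taken facing N, E, S and W over the zero point:
offset = estimate_balance_offset([0.8, 1.1, 0.7, 0.9])
print(compensate(5.9, offset))  # roughly 5.0 nT after compensation
```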

The place you have chosen for your zero point may not be magnetically sterile, so it is worth making sure that the readings you get at that point are broadly the same in all directions. If your zero point is not good enough, scan around for another one. I personally try to get the readings all under 1nT, but that is a matter of personal preference; filtering will take care of the rest. Of course, there are some people who don't like filtering spoiling the purity of their lovely data. Personally, I think these people are up themselves, but that is just me. You are unlikely to get close enough to 0nT in all directions when you balance your machine, and this will lead to a stripeyness in your results, which is where the destripe filter comes in. As with most filtering, artifacts can be produced as a result, which is why the up-themselves brigade don't like them, but as long as you understand what the filters do, and what they have done to your data, there will be no problem with interpretation.

In its most basic form, destriping will average the readings in each line, then adjust that line so that the average becomes zero. In the destripe options window in Snuffler, this is the 'Zero Mean Line' option. For the average survey, this is probably fine, but there is a better way. Each sensor column facing in each direction will have its own bias, so if you are zig-zagging (walking in two directions) with a GRAD601-2 (two sensor columns), you will have a total of four separate biases, one per sensor/direction combination. The other two destripe modes in Snuffler take account of this. The 'Grid Per Sensor/Direction' mode will average the sensor/direction biases across a grid, whilst the 'Image Per Sensor/Direction' mode will take the average across the entire image. For the sensor/direction averages to work, Snuffler has to know about the sensors and the progression across the grid. For each image, it will get this when your view is created from the import data. In old versions of the software, this transfer of information did not happen, so if the sensor/direction options are not available to you, try recreating the Grids, Map and View from the original import data.
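To make the modes concrete, here is a minimal sketch in Python/numpy, assuming each grid arrives as a 2D array with one traverse per row and readings in nT. The function names, and the assumption that a zig-zagged dual-sensor survey repeats in a fixed four-row cycle, are my own illustration, not Snuffler's actual internals; Snuffler reads the real sensor/direction layout from the import data, as described above. The clipping of the averages to the display limits is covered further down.

```python
import numpy as np

def clipped_mean(values_nT, clip=2.0):
    # Only readings within the display limits (+/-2 nT by default) contribute
    # to the average, so strong anomalies don't drag the bias estimate around.
    inside = values_nT[np.abs(values_nT) <= clip]
    return inside.mean() if inside.size else 0.0

def destripe_zero_mean_line(grid_nT, clip=2.0):
    # 'Zero Mean Line': shift each traverse so its clipped mean becomes zero.
    out = np.array(grid_nT, dtype=float)
    for i in range(out.shape[0]):
        out[i] -= clipped_mean(out[i], clip)
    return out

def destripe_per_sensor_direction(grid_nT, n_sensors=2, clip=2.0):
    # 'Grid Per Sensor/Direction': zig-zagging with n_sensors columns gives
    # 2 * n_sensors bias groups (sensor x direction). Assuming the traverses
    # repeat in a fixed cycle of 2 * n_sensors rows, average each group
    # across the whole grid and remove one bias per group.
    out = np.array(grid_nT, dtype=float)
    cycle = 2 * n_sensors
    for g in range(cycle):
        out[g::cycle] -= clipped_mean(out[g::cycle].ravel(), clip)
    return out
```

'Image Per Sensor/Direction' is the same calculation applied to the composite image: pass the whole mosaic of grids to the second function instead of one grid at a time.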

So why is the per sensor/direction approach to destriping better than averaging each line? I will give you two examples. Take the below image, which is raw data showing some nasty metal water pipes.

Ewww, nasty pipes. Anyway, the problem with this data is that the large areas of positive or negative readings will produce a bias in the averaging, causing artifacts to appear when the 'Zero Mean Line' option is used, as you can see below.
Snuffler tries to reduce this by clipping the data used for averaging to the display limits, which in the latest version of the software are automatically set to +/-2nT for magnetometry data. This will only help so much, however, leaving the light and dark areas shown above. If these local variations to the average are spread out using the sensor/direction averaging, you will get the below instead. This is sensor/direction per grid.
Well, that's not much better, I hear you cry. You will see, however, that the individual grids (there are four here) are mostly stripe-free, even though they don't quite agree with each other. So let's try sensor/direction per image.
It's not perfect, and is slightly stripier than the previous image, but at least the grids match each other. The downside of averaging across the image is that instruments drift, so in most cases averaging per grid will give you the best result, and from the latest version of Snuffler, it is the default mode; where there is extreme disruption, as in the data set above, averaging per image may help.

So that is an example of how the sensor/direction model helps with features that you don't want. Now here is an example of how it can help with features that you do want. Take this set of raw data. You can see, despite the striping, that there is a broad enclosure ditch.

The trouble is, when a linear feature you are surveying is aligned to the way you are walking, the 'Zero Mean Line' method can wipe it out, as you can see below.
That's not much help, is it? But if we take averages across the grid per sensor/direction rather than per line, we get the following.
That's a lot better, isn't it? You can still see the feature, and the data is destriped.

As I said at the start of this blog post, if you understand what the filters are doing when you use them, you will get much better results out of them. It is easy for me, because I wrote the software, so I understand it perfectly. I hope this article goes some way to helping all you Snuffler users out there understand how it works.