03 June 2012

Snuffler: Destripe Filtering

In my last post, I said that I would try to talk about Snuffler a bit more, so here is a post about a particular type of filtering in Snuffler, called 'Destripe'. This filter is used for processing magnetometry data. The most common type of magnetometry sensor is the fluxgate, which is constructed from a metal ring with wire wound around it. The sensor is cheap to make and sensitive, but the downside is that it is directional, i.e. the reading changes according to the direction it is facing, drowning out the slight changes you are trying to observe with what is effectively just a fluxgate compass.

The way to get around this problem is to use a gradiometer setup, where two sensors are arranged one above the other and aligned so that, when the device is turned, the earth's magnetic field affects both sensors equally. When you subtract the reading of one sensor from the other, you are left with just the minor variations picked up by the sensor closest to the ground, which is exactly what you want.
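To make the cancellation concrete, here is a tiny sketch in Python (the numbers are invented for illustration): the ambient field appears in both sensors and vanishes in the difference, leaving only the near-surface anomaly seen by the lower sensor.

```python
# Hypothetical readings in nT; the earth's field dwarfs the anomaly.
earth_field = 48000.0      # ambient field, seen equally by both sensors
anomaly = 2.5              # weak near-surface feature, lower sensor only

top_reading = earth_field
bottom_reading = earth_field + anomaly

# The gradiometer reading is the difference, so the ambient field cancels.
gradiometer_reading = bottom_reading - top_reading
print(gradiometer_reading)  # 2.5
```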

Of course, in order for the gradiometer setup to function correctly, the two sensors must be perfectly aligned, and there are two ways of going about this, both of which use something called a zero point: a magnetically sterile spot on the ground where the device can be faced towards the cardinal compass points in order to balance it correctly. Such a point can be found by scanning around for one.

The first balancing method, used by devices such as the Geoscan FM256, uses physical balancing. The physical alignment of the sensors is adjustable, and they are manually adjusted according to the readings shown to the user of the device, which is quite fiddly.

The second balancing method, used by devices such as the Bartington GRAD601, uses electronic balancing. The fluxgate sensors are physically fixed within the sensor columns, rather than adjustable as above. The machine will get the user to point the device in various directions and record the readings. It can then compensate for any deficiencies in the balancing by adjusting the readings you subsequently take.

The place you have chosen for your zero point may not be magnetically sterile, so it is worth making sure that the readings you get at that point are broadly the same in all directions. If your zero point is not good enough, scan around for another one. I personally try to get the readings all under 1nT, but that is a matter of personal preference; filtering will take care of the rest. Of course, there are some people who don't like filtering spoiling the purity of their lovely data. Personally I think these people are up themselves, but that is just me. You are unlikely to get close enough to 0nT in all directions when you balance your machine, and this will lead to a stripiness in your results, which is where the destripe filter comes in. Like most filtering, it can produce artifacts, which is why the up-themselves brigade don't like it, but as long as you understand what the filters do, and what they have done to your data, there will be no problem with interpretation.

In its most basic form, destriping will average the readings in each line, then adjust that line so that its average becomes zero. In the destripe options window in Snuffler, this is the 'Zero Mean Line' option. For the average survey, this is probably fine, but there is a better way. Each sensor column facing in each direction will have its own bias, so if you are zig-zagging (walking in two directions) with a GRAD601-2 (two sensor columns), you will have a total of four separate biases, one per sensor/direction combination. The other two destripe modes in Snuffler take account of this. The 'Grid Per Sensor/Direction' mode will average the sensor/direction biases across a grid, whilst the 'Image Per Sensor/Direction' mode will take the average across the entire image. For the sensor/direction averages to work, Snuffler has to know which sensor recorded each reading and how the survey progressed across the grid. For each image, it gets this information when your view is created from the import data. On old versions of the software, this transfer of information did not happen, so if the sensor/direction options are not available to you, try recreating the Grids, Map and View from the original import data.
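The 'Zero Mean Line' idea can be sketched in a few lines of Python. This is my own reconstruction of the general technique, not Snuffler's actual code: each traverse has its own mean subtracted, so its average becomes zero and the per-line bias disappears.

```python
def zero_mean_line(grid):
    """grid: list of traverses, each a list of readings in nT.
    Subtract each traverse's own mean so its average becomes zero."""
    destriped = []
    for line in grid:
        mean = sum(line) / len(line)
        destriped.append([reading - mean for reading in line])
    return destriped

# Two traverses with different biases (+1.0 nT and -0.5 nT); the first
# also crosses a small +2 nT anomaly.
raw = [
    [1.0, 1.0, 3.0, 1.0],
    [-0.5, -0.5, -0.5, -0.5],
]
print(zero_mean_line(raw))
# [[-0.5, -0.5, 1.5, -0.5], [0.0, 0.0, 0.0, 0.0]]
```

Both line biases are gone, and the anomaly on the first traverse survives because it only covers part of that line.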

So why is the per-sensor/direction approach to destriping better than averaging each line? I will give you two examples. Take the image below, which is raw data showing some nasty metal water pipes.

Ewww, nasty pipes. Anyway, the problem with this data is that the large areas of positive or negative readings will bias the averaging, causing artifacts to appear when the 'Zero Mean Line' option is used, as you can see below.
Snuffler tries to reduce this by clipping the data used for averaging to the display limits, which in the latest version of the software are automatically set to +/-2nT for magnetometry data. This will only help so much, however, leaving the light and dark areas shown above. If these local variations are instead spread out using sensor/direction averaging, you get the result below. This is sensor/direction per grid.
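The effect of clipping on a line average can be illustrated with made-up numbers (a sketch of the general idea, using the +/-2nT limit mentioned above, not Snuffler's internals):

```python
def clipped_mean(line, limit=2.0):
    """Mean of a traverse after clipping readings to +/-limit nT, so
    extreme responses (e.g. iron pipes) don't drag the average."""
    clipped = [max(-limit, min(limit, r)) for r in line]
    return sum(clipped) / len(clipped)

line = [0.5, 80.0, -0.5, 0.0]   # one wild 80 nT pipe reading
print(sum(line) / len(line))    # 20.0 -- plain mean, badly biased
print(clipped_mean(line))       # 0.5  -- pipe clipped to 2.0 first
```

The plain mean would shift the whole traverse by 20nT when subtracted; the clipped mean keeps the correction close to the true background bias.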
Well, that's not much better, I hear you cry. You will see, however, that the individual grids (there are four here) are mostly stripe free, even though they don't quite agree with each other. So let's try sensor/direction per image.
It's not perfect, and is slightly stripier than the previous image, but at least the grids match each other. The downside of averaging across the image is that instruments drift, so in most cases averaging per grid will give you the best result, and from the latest version of Snuffler, it is the default mode. Where there is extreme disruption, though, as in the data set above, averaging per image may help.

So that is an example of how the sensor/direction model helps with features that you don't want. Now here is an example of how it can help with features that you do want. Take this set of raw data. You can see, despite the striping, that there is a broad enclosure ditch.

The trouble is, when a linear feature you are surveying is aligned with the direction you are walking, the 'Zero Mean Line' method can wipe it out, as you can see below.
That's not much help, is it? But if we take averages across the grid per sensor/direction rather than per line, we get the following.
That's a lot better, isn't it? You can still see the feature, and the data is destriped.
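To see why per-sensor/direction averaging spares a feature aligned with the traverses, here is a toy sketch (my own illustration of the technique, not Snuffler's code): a single-sensor zig-zag survey where even and odd lines carry different direction biases, and one 'up' line happens to run straight along a ditch.

```python
def destripe_per_direction(grid):
    """Group readings by walking direction (even lines 'up', odd lines
    'down'), average each group across the whole grid, and subtract
    that group's bias from its lines."""
    groups = {0: [], 1: []}
    for i, line in enumerate(grid):
        groups[i % 2].extend(line)
    bias = {k: sum(v) / len(v) for k, v in groups.items()}
    return [[r - bias[i % 2] for r in line] for i, line in enumerate(grid)]

raw = [
    [1.0, 1.0, 1.0, 1.0],      # up, direction bias +1
    [-1.0, -1.0, -1.0, -1.0],  # down, direction bias -1
    [3.0, 3.0, 3.0, 3.0],      # up, running along a +2 nT ditch
    [-1.0, -1.0, -1.0, -1.0],  # down
]
flat = destripe_per_direction(raw)
# 'Zero Mean Line' would flatten the ditch line to all zeros; here the
# 'up' bias is averaged over both up lines (giving 2.0), so the ditch
# survives as a +1 nT line against a -1 nT background.
print(flat[2])  # [1.0, 1.0, 1.0, 1.0]
```

Because the ditch line contributes only part of its group's average, subtracting that average cannot cancel it completely, whereas a per-line mean removes it entirely.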

As I said at the start of this blog post, if you understand what the filters are doing when you use them, you will get much better results out of them. It is easy for me, because I wrote the software, so I understand it perfectly. I hope this article goes some way to helping all you Snuffler users out there understand how it works.


  1. Not related to this post, but what's the reason you limit the greyscale plot to 16 or 32 shades rather than using the full available range of 256?

    I found when playing with display algorithms myself that for resistance data in particular a wider range helped, so I'm just curious.

  2. The human eye will have trouble picking out more than 32 shades of grey. If you are having contrast problems with picking out weak features, then either manually playing with the upper and lower bounds of the display parameters, or filters such as Remove Geology for resistivity data, will probably help.

  3. "The human eye will have trouble picking out more than 32 shades of grey."

    I would guess that this probably varies over the population, but the following link suggests something between 32 and 64: http://www.cs.unm.edu/~brayer/vision/perception.html (personally I can distinguish even more, but that may develop from a lot of computer graphics work -- and a high quality LCD can make a lot of difference)

    From a theoretical and program design viewpoint, however, by limiting it to 32 shades you are giving a limited display of the data. The eye will never see differences between particular readings even if they exist, because they're being displayed with the same grey. If you output 256 shades then the only limitation on the display is the range of the readings.

    Whether or not people can distinguish the different greys is then left up to their personal visual perception.

    Does that make sense? Have you tried it with 256 shades? I'd be really interested in seeing the difference between a 32 and 256 shade plot of the same data.
