Sharp Distance Sensors and Eliminating Noise

Sharp Distance Sensors


Sharp distance sensors (or Sharp range detection sensors) are great for detecting a range of distances. The analog versions in particular can provide accurate range measurements from roughly 2 to 150 centimeters, though each model covers its own specific range. While there are digital versions intended to be used as proximity sensors, you can easily use an analog sensor as a proximity sensor either in software or with digital circuitry. The main reason there isn’t a single magical sensor that can measure such a wide range of distances is that the measurable distance depends on the angle at which the emitter and detector are oriented; to reach farther distances, that angle must differ from the one used in sensors that detect close objects.

How they work

Each model of distance sensor has a preset angle at which the emitter and detector are oriented. The emitted light reflects off the target, returns through a lens, and strikes an internal CCD array; the angle of the returning light determines where on the CCD, and how much of it, the light lands. From the preset angle, the lens geometry, and the CCD coverage, the sensor can measure the distance to an object by means of triangulation. The great thing about this approach is that accuracy is less dependent on the reflectance of the object, since it relies not on the intensity of the returned light but on which part of the CCD senses light. Another reason this type of range sensor is fairly immune to environmental influences is that the emitted light is modulated, which allows light detected at other frequencies to be filtered out. However, that same modulation is the cause of many inaccurate readings.
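As a rough illustration only (the sensor’s real optics are more involved than this), triangulation by similar triangles gives approximately D ≈ (b × f) / x, where b is the baseline between emitter and lens, f is the focal length of the lens, and x is how far from center the returning spot lands on the CCD. Closer objects push the spot farther from center, so x grows as D shrinks, which is why the spot position alone is enough to recover distance.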

A major fault

One of the biggest problems people encounter when using this distance sensor is noise. Let me be clear: this is not the typical noise you find where a sensor is inaccurate due to an unsteady output. This noise is due to the simple fact that you cannot read from this sensor while it is updating, because you might catch one of the spikes caused by the modulated emitter. Unfortunately there is no way to know when it is updating, so it can be difficult to get consistently accurate results.

Above, you can see the modulated pulses on the output line resulting from the modulation of the emitter. You may also notice that these spikes can cause the reading to be an entire volt higher than the actual reading.

Taking a closer look at these pulses reveals that each pulse is about 125us wide, followed by roughly 875us of quiet, for a total cycle of about 1ms. After 32 cycles there is a 9ms resting period, which would be the ideal time to read the sensor, but again, there’s no way of knowing exactly when this period is. Unfortunately, with 32ms of pulsing and only 9ms of rest, probability is not in your favor and you are bound to get numerous readings that are significantly higher than they should be.

Approach

I attempted numerous smoothing methods, including a Kalman filter, exponential decay, and simple averaging, but none gave the results I desired. My goal was to completely eliminate the noise specific to polling the sensor during a voltage spike.

Kalman Filter

The Kalman filter worked great at first: I was able to get a super smooth output that completely eliminated the noise. The problem is that my test was only looking at a static distance; once you start adding in changes in distance, the Kalman filter doesn’t work. Kalman filters weigh how much to trust each measurement against a mathematical model of how the value is expected to change. Without going into too much detail, the problem was that there is no way to model what my sensor readings should be, since at times the readings form a smooth sequence but at other times they jump to an extreme value (say I’m following a wall and then I encounter a doorway; the value will jump). When the sensor changes that erratically, unless the Kalman filter has a model to support it, the filter will assume that you are getting a bad reading when in fact it is a good one.
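To illustrate, here is a minimal one-dimensional Kalman-style filter; the constants and names are my own, and it assumes the distance barely changes between samples, which is exactly the assumption that breaks down at a doorway.

    // Minimal 1D Kalman-style filter sketch (illustrative, not my final code).
    float estimate = 0.0;             // current best guess of the distance
    float errEstimate = 1.0;          // how uncertain we are about that guess
    const float errMeasure = 40.0;    // assumed measurement noise (ADC counts)
    const float processNoise = 0.01;  // assumed drift of the true distance per sample

    float kalmanUpdate(float measurement) {
      errEstimate += processNoise;                            // predict: uncertainty grows a little
      float gain = errEstimate / (errEstimate + errMeasure);  // how much to trust this measurement
      estimate += gain * (measurement - estimate);            // correct toward the measurement
      errEstimate *= (1.0 - gain);                            // we are now slightly more certain
      return estimate;  // a real, sudden jump gets dragged back toward the old estimate
    }

With a small gain the spikes disappear nicely, but a genuine step in distance is smeared across many samples for exactly the same reason.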

Exponential Decay and Moving Average

I’m lumping the moving average and exponential decay together because exponential decay is a modified moving average, so for the purposes of this article they suffer from the same setbacks. These are very simple solutions to many noise problems I’ve had in the past, but they don’t work well here because of the extreme fluctuation in the reading when it is taken during a pulse. The equation for exponential smoothing is basically as follows:

result = movingAverage*rate + currentReading*(1-rate)

So you essentially take a percentage of a moving average and add it to a percentage of the current reading, such that the percentages add up to 100%. Again, as you’ll see below, the issue is that the extreme spike in output still gives you a bad reading. For example, say you use 80% of the moving average and 20% of the current reading: if the moving average contains multiple bad readings you’ll still get a somewhat bad result, and on top of that you get a delayed response, because you’re effectively averaging over some number of previous samples. You can see evidence of the delayed response in the chart below.
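For reference, here is a minimal sketch of that smoothing in code; the names and the 80/20 split are just the example values from above.

    // Exponential smoothing: result = movingAverage*rate + currentReading*(1-rate)
    float rate = 0.8;            // weight kept from the moving average (80%)
    float movingAverage = 0.0;   // in practice, seed this with the first reading

    float smooth(float currentReading) {
      movingAverage = movingAverage * rate + currentReading * (1.0 - rate);
      return movingAverage;
    }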

A few things to note here: first, how the value climbs following multiple noisy readings; this is because the moving average now contains several high values, skewing it upward. Second, notice the delay in response when the value moves up or down; the very intent of exponential smoothing (damping fluctuations) causes a delayed response to fluctuations that are intended.

The exponential smoothing is definitely acceptable; I would take these readings over the raw values in a heartbeat. But I am still determined to get readings free of all noise related to this pulsing. Additionally, for my purposes, I need to see immediate responses to changes in distance; if I simply wanted to monitor slowly changing distances or a static distance, this would be more than acceptable. I could increase the number of samples in the moving average to nearly eliminate the noise, but the more samples you add, the slower your response time.

Finally, a Solution

There is nothing detrimental about taking readings faster than the sensor’s update frequency. What I mean is that, for example, the Sharp GP2Y0A21YK distance sensor has a recommended sample rate of 34ms, but that doesn’t mean you can’t read it 10 times during that 34ms, and that is the basis for my solution. To clarify, the recommended sample rate is 34ms because the output voltage is not updated any faster than that. Essentially, the voltage is set, the sensor waits 9ms, spends 32ms taking a new reading, and repeats. Knowing this makes it even easier to rule out noise, because all of our samples within one update period should be the same value, with the exception of samples taken while the voltage is changing, and that issue rules itself out since we are sampling cyclically.

I tried numerous sample rates, first basing them on the modulation frequency and then just reading at 1ms, 2ms, 3ms, and so on, to determine the rate that resulted in the smallest percentage of noisy samples; somewhat expectedly, the answer was to read every 1-2ms. Remember that the pulsing we saw was approximately 1kHz, so reading slightly slower than that ensures that even if one reading hits a pulse, the next probably won’t, and even if it does, one of the next few definitely won’t.

My ultimate goal here is, for example, to take ten readings at 1.5ms intervals every 34ms and then throw away all but the lowest reading. Since the modulation spikes only ever push the reading higher than the true value, the lowest reading of the burst is the cleanest one. This still gives me an effective sample rate of 34ms and, provided at least one of those ten samples is a good one, eliminates the noise due to the modulation. I will refer to these multiple readings within the actual sample period as “burst readings”.
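Here is a minimal Arduino-style sketch of that burst-and-keep-the-lowest idea; the burst count and spacing are just the example values from above, not values pulled from my library.

    // Take a burst of quick readings and keep only the lowest one,
    // since the modulation spikes only ever push the reading higher.
    const uint8_t BURST_COUNT = 10;              // readings per burst
    const unsigned int BURST_SPACING_US = 1500;  // ~1.5ms between readings

    int burstReadMin(uint8_t pin) {
      int lowest = 1023;                         // max value of a 10-bit ADC
      for (uint8_t i = 0; i < BURST_COUNT; i++) {
        int value = analogRead(pin);
        if (value < lowest) {
          lowest = value;
        }
        delayMicroseconds(BURST_SPACING_US);
      }
      return lowest;
    }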

Above is an example of the output I could get sampling every 1012 microseconds. At times I got up to 15 bad readings in a row, so this wouldn’t suffice as a burst sampling rate if I’m only taking ten readings, but it proves that we can use bursts to get a number of samples within the recommended sampling period and filter out the noisy ones.

The next step

I have now modified my test software to take a set of burst readings at some specified rate every 38ms (I found that 38ms was the fastest rate that produced the fewest consecutive noisy values when simply sampling the raw output).

Test 1

In this test, I’m taking 30 seconds of samples at a rate of one sample every 38ms, where each sample consists of three burst readings and all but the lowest value are discarded.

The above chart is difficult to read, but you can click it to open it and zoom in. The important thing to note is that all three burst readings showed noise at various points, yet there were about four times during the test where my software completely ignored noise that would otherwise have gotten through; it didn’t catch it every time, but we still avoided some of the noise.

Test 2

No more messing around; I’m increasing the burst reading count to eight. I’m now taking eight burst readings at 38ms intervals and throwing away all but the lowest reading.

The above chart is difficult to see in detail, but you can click it and zoom in. Essentially, by increasing the burst sample size from three readings to eight, I was able to get a steady reading with no noise at all during a 30-second recording. The red line above shows the readings after taking each burst and keeping only the lowest value.

Library

With these findings I developed a library, which can be found here. Currently the library only works with the Sharp GP2Y0A21YK distance sensor, but I will be working to extend it to the other Sharp sensors as well.

Above you can see a chart of readings taken every 500ms for 10 minutes with the library (blue) and without the library (orange), and I’m pretty happy with the results: ten minutes of readings and not one noisy reading due to the modulation pulses. There is still some noise, as you can see; the value bounces between 276 and 283, but that could be smoothed out with a simple moving average or exponential decay using a small number of samples.

What’s to come?

I of course have future goals for this library, including support for other distance sensors as mentioned. I would also like to include smoothing to remove even that small fluctuation of ~10 points. Lastly, because of the way the burst readings are taken, there is currently a delay of about 14ms at each update. You still get updates every 38ms as you normally would, but every 38ms, when update is called on the sensor object, it will consume about 14ms taking the burst readings. Please note that if update is called on the sensor object before 38ms has elapsed, the library will not consume any time; it will simply return. The 14ms delay happens only once every 38ms, regardless of how many times you call update.
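For anyone curious, a rate-limited update along those lines could be structured like the sketch below. This is only an illustration of the timing behavior described above, not the library’s actual code, and it reuses the hypothetical burstReadMin() from earlier.

    // Non-blocking, rate-limited update: the ~14ms burst only runs
    // once per 38ms window; otherwise the call returns immediately.
    const unsigned long SAMPLE_INTERVAL_MS = 38;
    unsigned long lastUpdateMs = 0;
    int latestRaw = 0;

    void updateSensor(uint8_t pin) {
      if (millis() - lastUpdateMs < SAMPLE_INTERVAL_MS) {
        return;                        // not time yet, costs essentially nothing
      }
      lastUpdateMs = millis();
      latestRaw = burstReadMin(pin);   // this is where the ~14ms is spent
    }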

2 Comments

  1. kat

    I was looking for a way to get rid of the noise from this sensor and your explanation of the cause and a method to remove the noise is very nice. I think it might be too involved for my project but I admire the effort you put into reducing the noise.

    1. I wouldn’t count yourself out just yet; it’s really just a matter of using the library. I wrote this article to explain the issue in more detail and how I went about solving it.

      I appreciate the feedback, thanks for reading.
