Robotics: Range and Identity detection of Robotic Agents in Indoor Applications

9/5/2016

Andrew B. Wright, Ph. D., SM ’88

The method described below works, up to a point.  Although the initial onset of the wave is detectable, the rectified signal will not be clean enough to feed a digital input without significantly more signal processing.  Analog signal processing is too much of a drag for me to continue down that path.

My original plan on this was to use digital signal processing.  While DSP is a drag as well, it’s a drag that I’m well-versed in.

The following papers give some insight on how to proceed:

Villadangos, J. M., Urena, J., Garcia, J. J., Mazo, M., Hernandez, A., Jimenez, A., Ruiz, D., De Marziani, C., “Measuring Time-of-Flight in an Ultrasonic LPS System Using Generalized Cross-Correlation,” Sensors, v. 11, pp. 10326-10342, 2011.

Knapp, C. H., Carter, G. C., “The Generalized Correlation Method for Estimation of Time Delay,” IEEE Transactions on Acoustics, Speech, and Signal Processing, v. ASSP-24, n. 4, 1976.

Perez, M. C., Urena, J., Hernandez, A., Jimenez, A., Ruiz, D., Alvarez, F. J., de Marziani, C., “Performance comparison of different codes in an ultrasonic positioning system using DS-CDMA,” Proc. IEEE Intl Symposium on Intelligent Signal Proc, WISP ’09, Budapest, Hungary, 2009, pp. 125-130.

I had actually cited the second paper in my Ph.D. dissertation when I was reviewing methods for determining the time delay in an active sound cancellation system.  I implemented the algorithm as well.  So, when I saw it cited in the first paper, which describes a problem very similar to the one I’m working on, I was both happy (I know the method) and irritated (I had forgotten that I know the method).

So, the next steps are:

read the analog input on the Beaglebone

implement an FFT on the Beaglebone to estimate the PSD functions.  There’s a prepackaged FFT algorithm, hooray, so I don’t have to port the algorithm from my Ph.D. work over to the Beaglebone.  (A bare-bones cross-correlation sketch follows this list.)

design and implement a digital filter bank to bandpass the various frequency bands, allowing multiple sampling rates (see Oppenheim and Schafer, Discrete-Time Signal Processing)

implement Kasami codes for robot identity detection (not sure if I need to do this – see 3rd reference above)
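
Since everything above hinges on estimating a time delay, here is a bare-bones, time-domain cross-correlation sketch (plain C, written for a PC, not the PRU).  The generalized cross-correlation of Knapp and Carter adds a frequency-domain weighting computed from the PSD estimates; this unweighted version is the simplest special case.  The 48 kHz sampling rate and buffer sizes are placeholders, not values from this project.

#include <stdio.h>
#include <stddef.h>

#define FS_HZ 48000.0  /* assumed ADC sampling rate */

/* Return the lag (in samples) at which the cross-correlation of x (the known,
   transmitted burst) against y (the microphone capture) is largest. */
static int xcorr_peak_lag(const double *x, size_t nx, const double *y, size_t ny)
{
	int best_lag = 0;
	double best_val = 0.0;
	for (size_t lag = 0; lag + nx <= ny; lag++) {
		double acc = 0.0;
		for (size_t k = 0; k < nx; k++)
			acc += x[k] * y[lag + k];
		if (acc > best_val) {
			best_val = acc;
			best_lag = (int)lag;
		}
	}
	return best_lag;
}

int main(void)
{
	/* Toy data: a short square-wave burst buried 100 samples into the capture. */
	double burst[32], capture[1024] = { 0 };
	for (int k = 0; k < 32; k++) {
		burst[k] = (k % 16 < 8) ? 1.0 : -1.0;
		capture[100 + k] = 0.5 * burst[k];  /* attenuated, delayed copy */
	}
	int lag = xcorr_peak_lag(burst, 32, capture, 1024);
	printf("delay = %d samples = %.3f ms\n", lag, 1000.0 * lag / FS_HZ);
	return 0;
}

On the Beaglebone, the reference buffer would be the known transmitted burst and the capture buffer would come from the ADC; the FFT/PSD machinery only enters when the generalized (weighted) version is wanted.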

7/30/2016

In teams of mobile robots, it is sometimes necessary to know the location (relative to an agent) and identity of the other members of the team.

Two technologies that are used for distance detection are ultrasonic sensors and infrared sensors.  Ultrasound seems to be the leading technology, in part because infrared can be made ineffective by transparent materials and bright lights, including sunlight.

Ultrasound can be disrupted by other sources of sound, reflections, and sound-absorbent materials.  The beam width can cause issues with resolving the spatial location of the nearest object.  Ultrasonic sensors are deliberately designed to receive only from in front of the sensor, so a static ultrasonic sensor covers only a limited angular field of view.

This project intends to use the same underlying technology as ultrasonic ranging, but configured differently, to resolve both the distance and the identity of the nearest partner robot in a robot’s vicinity.

Ultrasonic speakers broadcast a burst of sound at a carrier frequency.  The receiver is blanked while the speaker broadcasts, then picks up the first echo from whatever the sound hits and reflects off of.  The sound source can be seen as a square pulse of some duration multiplied by a carrier frequency.  At the receiver, a demodulating circuit removes the carrier and leaves only the square pulse.  By comparing the start of the broadcast pulse with the start of the received pulse, the time of flight of the wave, and therefore the round-trip distance to the reflecting object, can be determined.

In [1] (Shoval, S., Borenstein, J., “Measuring the Relative Position and Orientation between Two Mobile Robots with Binaural Sonar,” ANS 9th International Topical Meeting on Robotics and Remote Systems, Seattle, WA, March 4-8, 2001), an attempt was made to use two receivers on a robot with a broadcast from a second robot to determine the angle between the two robots, as well as the distance.  However, the narrow beam pattern of the broadcast source made the measurement work well only if the speaker was aligned with the receivers.  The math is based on an omni-directional source, so the narrow-beam ultrasonic sensor didn’t satisfy the assumptions.  This limitation is not inherent in an ultrasound broadcast, but rather in the use of commercial, unmodified ultrasonic sensors.

Rather than use ultrasonic sensors, this work will use standard microphones and speakers.   Aside from the fact that I have bunches of them around the lab from my active sound cancellation work, they are omnidirectional.   The main disadvantage of using audible sound is that it’s audible, and reflections become more of a problem the lower the frequency.  Several robots broadcasting pings can be annoying in an environment that would otherwise be quiet.  However, if the concept works, there’s no reason to avoid ultrasonic transducers; they would just have to be redesigned to be omnidirectional (something that I did in my PhD work).

Transmitter

ASIDE:  Another approach to generating tones and measuring microphones is to use a USB sound card with the Beaglebone (e.g. the Sabrent AU-MMSA USB external stereo sound adapter).  I’m not sure how effective this would be, since the timing between broadcast and reception needs to be tightly controlled.  When I investigated high-level solutions a few years ago, the need for tight timing defeated the pre-packaged solutions.

The following circuit will be used to create the broadcast wave.  A beaglebone will create a few cycles of square wave on a digital output with frequency, f.  This wave will repeat at a period, Tp.  The wave will be fed into a Texas Instruments TLC084 op-amp configured as a four-pole low-pass filter (see circuit). The filter is designed using the Sallen-Key topology.  This will give a sine wave.  The sine wave will be fed into an LM386 audio amplifier and broadcast through a CMS0341KLX speaker.

Each robot will broadcast on a different frequency, e.g. 2000 Hz, 3000 Hz, 4000 Hz.  The frequency will serve as the identifier of the robot.  This results in extra complexity on the receiver side, as there will be one band-pass filter for each robot in the team.  For a prototype proof-of-concept, the relative ease of implementation should justify the added circuit complexity.

Sallen Key double pole low pass filter

Implementation of the low-pass filter. See later for the implementation of the high-pass filter, as this implementation will not work with a single-supply op-amp.

The first harmonic above the fundamental of a 2000 Hz square wave is the third harmonic at 6000 Hz. The 2000 Hz component should pass through and the 6000 Hz and higher components should be blocked. The resistor and capacitor values for a 5000-6000 Hz (35 krad/s) per-stage characteristic frequency are R = 1.3 kΩ and C = 0.022 μF; the -3 dB point of the full four-pole cascade comes out lower, around 2.4 kHz. Higher is better, since the fundamental needs to pass through the filter essentially untouched while the third (and later) harmonics need to be heavily attenuated. At the cut-off frequency, the signal power has already been cut in half.
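
A quick sanity check on those values (just a sketch that evaluates the per-stage characteristic frequency 1/(2πRC)):

#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
	double R = 1.3e3;     /* ohms */
	double C = 0.022e-6;  /* farads */
	double fc = 1.0 / (2.0 * M_PI * R * C);  /* per-stage characteristic frequency */
	/* The -3 dB point of the full four-pole cascade sits below this figure. */
	printf("R = %.0f ohm, C = %.3f uF -> fc = %.0f Hz (%.1f krad/s)\n",
	       R, C * 1e6, fc, 2.0 * M_PI * fc / 1000.0);
	return 0;
}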

The LM386 is a nice amplifier for audio applications, but it has some quirks.  The default gain is 20.  The output, when driven by the + input, seems to be limited to the range 0 to Vss; in other words, the signal will not come out negative.   If your output from a Beaglebone is a 0-2.5 V signal, feeding it directly into the LM386 would ask for 0-50 V, which would be clipped at the supply voltage.  In order to match these two circuits, the 2.5 V Beaglebone signal needs to be knocked down so that it can be amplified again; figure a gain of 1/10 as a good start.  This can be accomplished with a voltage divider or with an amplifier wired for sub-unity gain (in this case 0.1).  For the speaker, 1 Vpp is a nice, loud signal.  However, it might be desirable to reduce it to more like 50 mVpp to save your ears.  In this application, the detection needs to be made from about 2 m away, so this value should be adjusted based on that desired outcome. Quieter is better for a lot of reasons, including the removal of unwanted echoes.

From the app note for the LM386, the following circuit is used repeatedly.

Audio amplifier using the LM386

The 250 μF capacitor couples the output to the speaker, blocking the amplifier’s DC bias so the speaker sees only an AC signal, while the 0.05 μF capacitor in series with the 10 Ω resistor to ground is a snubber network that keeps the amplifier stable into the low-impedance load.  The large value of the coupling capacitor keeps its impedance low at audio frequencies, so the low-impedance speaker can be driven without dropping much of the signal voltage (and hence the sound level) across the capacitor.

The code to load onto the PRU (the header files are here) to generate a 2 kHz square wave in 5-cycle bursts separated by 5 ms intervals is:

#include "pru_cfg.h"
#include "pru_gpio.h"
#include "pru_ctrl.h"

//CMD file must have SYSCFG : o = 0x00026000 l = 0x00000100 CREGISTER=4 in the MEMORY portion
volatile pruCfg C4 __attribute__((cregister("PRU_ICSS_CFG",near),peripheral));

volatile register unsigned int __R31; //connected to PRU's input pins and INTC controller
volatile register unsigned int __R30; //connected to PRU's output pins

far volatile pruGPIO GPIO0 __attribute__((location(0x44e07000)));

far volatile pruCtrl PRU0CTRL __attribute__((location(0x22000)));

//LOOP should be in PRU cycles, which are 5 ns long
#define LOOP 4000 //one loop iteration per 4000 PRU cycles: 5x10^-9 s * 4x10^3 = 2x10^-5 s (20 us)
#define WIDTH 25 //toggle the output every WIDTH iterations (half period of the square wave)
#define CYC 500 //burst repetition period, in loop iterations
#define DTY 250 //iterations of each period during which the burst is allowed to run
#define TIME 100000 //2x10^-5*TIME = 2 sec

void main()
{
	int k, shadow = 0, j, l;

	/*Initialise OCP Master port for accessing external memories*/
  	C4.SYSCFG_bit.STANDBY_INIT = 0;  	
	PRU0CTRL.CONTROL_bit.CTR_EN = 1; //turn on cycle counter

	volatile unsigned int gpo;
	C4.GPCFG0 = 0; //defines GPI/O for PRU0, take the default values

	GPIO0.OE_bit.OE_bit16 = 0; //enable bit gpio0[22], 22->0x16

//insert the loop to output the burst here.
	for(k=0,j=0,l=0;k<TIME;k++){
                if((j++)%WIDTH == 0){ 			
                     if(shadow == 0){ GPIO0.SD_bit.SD_bit16 = 1; shadow = 1; } //turn on bit 0x16 			
                     else { GPIO0.CD_bit.CD_bit16 = 1; shadow = 0; } //turn off bit 0x16 		
                }
  		if((l++)%CYC > DTY) GPIO0.CD_bit.CD_bit16 = 1; //turn off outside duty cycle
		while(PRU0CTRL.CYCLE < LOOP);	
		PRU0CTRL.CYCLE = 0; //reset the cycle counter/stall counter

	}
	GPIO0.CD_bit.CD_bit16 = 1; //turn off bit 0x16

	GPIO0.OE_bit.OE_bit16 = 1; //disable bit gpio0[22]

        /*Exiting procedure*/
	__R31 = 0x24;			// Send notification to Host for program completion
	__halt();			//only compatible with v2.0.0B1+; for lower versions of the compiler use
					//asm(" HALT");
}

The code runs for 2 seconds.

The device tree overlay fragment (dts) to allocate P8.19 is:

/dts-v1/;
/plugin/;

/ {
    compatible = "ti,beaglebone", "ti,beaglebone-black";

    part-number = "BB-PRU-01";
    version = "00A0";

    /* state the resources this cape uses or prepare to get winged! */
    exclusive-use =
        "P8.19",    /* gpio0_22 from the BBB SRM; pin name is GPMC_AD8 in the AM3358 data sheet */
        "pru0";

    fragment@0 {
        target = <&am33xx_pinmux>;
        __overlay__ {
            pruicss_cassy: pinmux_pruicss_cassy{
                pinctrl-single,pins = <
                    0x020 0x07    /* gpio is mode 7; GPMC_AD8 -> 0x820 in the TRM */
                >;
            };        
	};
    };

    fragment@1{
        target = <&pruss>;
        __overlay__{
            status = "okay";
            pinctrl-names = "default";
            pinctrl-0       = <&pruicss_cassy>;
        };
    };
};

 

Receiver

I found a nice article from Texas Instruments (John Caldwell, “Single-supply, electret microphone pre-amplifier reference design,” http://www.ti.com/lit/ug/tidu765/tidu765.pdf) which contains design info on how to figure out what resistors and capacitors can be used with a specific microphone.

NOTE (7/16/2017): The OPA1644 is a JFET-input amplifier well suited to transimpedance amplifier applications.  The only problem with it is that it does not come in a DIP package for prototyping.  You can use the SOIC-to-DIP adapter (see the cape blog) to make a prototype.  TBD.

The microphone, CUI CMC-6018-38T, has the following key specs:

dimensions: 6 x 1.8 mm

Impedance: 2.2 kohms

directivity: omnidirectional

frequency: 20Hz to 20 kHz

Max operating voltage: 10 v

Standard operating voltage: 2 v

current consumption: 0.5 mA maximum

sensitivity reduction: within -3 dB at 1.5 v

S/N ratio: more than 58 dBA (f = 1 kHz, 1 Pa, A-weighted)

sensitivity: -44 dB (0dB = 1V/Pa, 1kHz) Vs = 2V, RL=2.2kohm, min = -47 dB, max = -41 dB

I spent a lot of time reviewing the various components of a microphone pre-amp.  I’m not an expert by any means.  But, I thought the results were sufficiently interesting that I’m going to write about them here.

There’s a nice video blog about electret microphones and signal conditioning (https://www.youtube.com/watch?v=UhG83WS51q8).

The membrane is made from a material called an electret.  This is similar to a permanent magnet, but with a “permanent” electric field rather than a “permanent” magnetic field.  It will generate a potential for a long, long time.  The membrane is separated from a plate, which forms a capacitor.  When the membrane moves, the capacitance changes.  If an electrode is attached to the membrane and another to the plate, the combination appears like a voltage source which changes proportionally to the motion of the membrane, which is proportional to the sound pressure on the membrane.  In other words, the voltage changes proportionally with sound pressure.

There is a nice article (Gentex Electro-Acoustic Products, “Apply Electret Microphones to Voice-Input Designs,” http://www.gentexcorp.com/assets/gentex/ea/eapdfs/electretappguide.pdf) that goes into much more detail on electret microphones.

The JFET threw me for a while.  Here’s a nice tutorial on JFETs. From the TI article it appears to be an n-type JFET (although that article covers a different microphone).  The electret provides a voltage between the gate and the source, Vgs.  The user supplies a voltage from drain to source (the source is usually grounded), called Vds.  The JFET can be modulated either by Vds or by Vgs.  Vgs adjusts the channel width between drain and source (i.e. the resistance of this path).  The path can be fully open (Vgs = 0 V) or fully closed, or pinched off (Vgs = Vp).

If the channel is not pinched, then the current flowing from drain to source will vary between 0 amps and a saturation current.  The magnitude of the saturation current depends on the gate voltage.  If Vds is configured such that the JFET is saturated regardless of gate voltage, then the current flowing through the JFET will be proportional to Vgs.  It will be largely invariant to changes in Vds.  In other words, if the supply voltage stays above the saturation voltage, the JFET will sink current invariant of the supply voltage.  If the JFET is powered by a battery, the performance will not change much until the battery drops below the saturation point.

What is this magic voltage?  The datasheet for the microphone uses a 2V source and 0.5 ma maximum current and a 2.2 kΩ bias resistor.  This gives a voltage at the top of the JFET as (2-x)  = 2200 * 0.0005, x = 0.9 V.  So, the JFET should be in saturation for around 0.9V.

Using a bigger supply allows more headroom for a battery that is below its nominal voltage.  The first part of the circuit is the supply for the microphone.  The supply voltage will be 5 V.

The microphone resistor, R1, can be calculated from (desired supply voltage – nominal supply voltage)/max current consumption for microphone:

(5 V - 0.9 V)/0.5 mA = 8.2 kΩ (just barely keeps the transistor in saturation; this is the maximum usable value)

(5 V - 2 V)/0.5 mA = 6 kΩ (gives 1.1 V of headroom)

(5 V - 4 V)/0.5 mA = 2 kΩ (gives 3.1 V of headroom)

The bigger the resistor, the larger the overall gain of the stage: the JFET converts gate voltage to drain current, and the resistor converts that current back into a voltage.  If the current through the JFET is i(t), varying between 0 and 0.5 mA, then the voltage at the JFET drain will be the supply voltage minus R·i(t), varying between the supply voltage at 0 mA (no voltage drop across the resistor) and the supply voltage minus R·0.5 mA at maximum conductance.  For example, with R = 6 kΩ on a 5 V supply, the drain swings from 5 V down to 2 V.

The smaller the resistor, the more headroom for battery droop.  If you use an unconditioned battery, then battery droop will be a problem.  If you use a boost converter, battery droop is not so much of an issue.  A filter capacitor to alleviate spikes (assuming this doesn’t interfere with the boost converter) can manage any issues with instantaneous voltage drops due to load or a low battery.

The output capacitor has to have a lower impedance than the resistor at audio frequencies so that the majority of the signal current flows through the capacitor rather than the resistor.  The impedance of the capacitor is 1/(jωC); its magnitude, 1/(ωC), is smaller for larger C and larger ω.  The smallest ω of interest occurs at f = ω/(2π) = 20 Hz.

The biggest capacitor is desired to achieve this function.

The resistor and capacitor will form a high pass filter as well, with the filter pole occurring at

fC = 1/(2πRC)

The pole of the filter needs to stay below the audio frequency, 20 Hz.  As C increases, the cutoff frequency decreases.

Here are some sample capacitance values

R=6.8 kΩ
C (μF)   impedance at 20 Hz (Ω)   fC (Hz)
1 7958 23
4.7 1693 5
22 362 1
47 169 0.5

The first value, 1 μF, yields an impedance that is bigger than R.  The last two values push the cut-off frequency lower than necessary.  The 4.7 μF seems to be a pretty good fit.
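
A short sketch that reproduces the table (it evaluates the capacitor’s impedance magnitude at 20 Hz and the high-pass corner formed with the 6.8 kΩ resistor):

#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
	const double R = 6.8e3;    /* bias resistor, ohms */
	const double f_low = 20.0; /* lowest audio frequency of interest, Hz */
	const double C_uF[] = { 1.0, 4.7, 22.0, 47.0 };

	printf("C (uF)   |Zc| at 20 Hz (ohm)   fc (Hz)\n");
	for (int k = 0; k < 4; k++) {
		double C = C_uF[k] * 1e-6;
		double Zc = 1.0 / (2.0 * M_PI * f_low * C);  /* capacitor impedance magnitude */
		double fc = 1.0 / (2.0 * M_PI * R * C);      /* high-pass corner with R */
		printf("%5.1f  %20.0f  %8.1f\n", C_uF[k], Zc, fc);
	}
	return 0;
}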

The high pass filter threw me for a bit (and maybe continues to throw me).  The amplifier in the TI app note is a transimpedance amplifier, which means that it converts current input into voltage output.  So, the high-pass filter described here is blocking the dc current, not the offset voltage introduced by the JFET bias.  So, when you run a ‘scope between the end of the capacitor and ground, you get a dc offset.  When connected to the op-amp, V+ = V-, so the capacitor will be forced to the offset on V+.

The gain of the op-amp is set by the resistor between V- and Vout.  The sensitivity of the microphone is 20·log10(G/(1 V/Pa)) = -44 dB, so G = 6.3 mV/Pa.

Converting this to a current with the 2.2 kΩ microphone impedance, G’ = 2.87 μA/Pa.  In other words, a 1 Pa sine wave will induce a drain current oscillation with amplitude 2.87 μA.  Big sound results in tiny current.  That’s the measurement challenge.

If it is desired that 2 Pa is the maximum pressure, then Imic = 2 Pa * 2.87 μA/Pa = 5.74 μA.

The transimpedance gain of the op-amp is set by R = (Vout-V-)/Imic.  NOTE: without the transimpedance amplifier, the microphone circuit works really poorly.  You have to wire up the whole thing to get any meaningful results.

For a unipolar solution, the amplified signal needs to be biased to 1/2 Vcc, or 2.5 V.  So, V- = V+ = 2.5 V. This leaves a 2.5 V swing for the signal at maximum sound.

R = 2.5 V/Imic = 2.5/5.74e-6 ≈ 435 kΩ (2 Pa = 2.5 V)

2 Pa corresponds to a 100 dB signal (rock concert loud).  At this high a gain, stability is proving to be an issue.

I have 390kΩ resistors, so … The maximum amplitude for this resistor is

 V = (390 kΩ)(5.74 μA) = 2.2v

which is pretty close to the 2.5 V spec.
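
Collecting that gain chain in one place (a sketch; the 2 Pa full scale and the 2.5 V swing off the Vcc/2 bias are the same targets used above):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double sens_db  = -44.0;                      /* mic sensitivity, dB re 1 V/Pa */
	double sens_vpa = pow(10.0, sens_db / 20.0);  /* ~6.3 mV/Pa */
	double i_per_pa = sens_vpa / 2200.0;          /* drain current per pascal, ~2.87 uA/Pa */

	double p_max  = 2.0;               /* Pa, assumed full-scale pressure (~100 dB SPL) */
	double i_max  = p_max * i_per_pa;  /* ~5.74 uA */

	double v_swing = 2.5;              /* V of swing available from the Vcc/2 bias */
	double r_ideal = v_swing / i_max;  /* ~435 kohm */
	double r_stock = 390e3;            /* nearest resistor on hand */

	printf("sensitivity: %.2f mV/Pa -> %.2f uA/Pa\n", sens_vpa * 1e3, i_per_pa * 1e6);
	printf("ideal feedback R = %.1f kohm; with %.0f kohm the swing is %.2f V\n",
	       r_ideal / 1e3, r_stock / 1e3, r_stock * i_max);
	return 0;
}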

Transimpedance amplifiers at high gain suffer from issues related to the capacitance across the input terminals.  There is a considerable literature on this issue.  In short, you need a filter capacitor in parallel with the feedback resistor.  The combination of filter capacitor and feedback resistor is a low pass filter, and it needs to be set above the audible range.

Some possible values:

R=390 kΩ
C (nF) fC (Hz)
.022 18500
2.2 185

I threw a cog and set this up as a high pass filter.  I couldn’t understand why I had this low pass roll-off until I realized that I had designed the thing that way.  This filter has a really big impact on the circuit’s performance.


The signal needs to be offset so that the op-amp output will sit between 0V and the op-amp supply (5V).  The app note uses a voltage divider at the V+ terminal of the op-amp.  However, if you need to bias more than one op-amp, a virtual ground is necessary.

There is an article on using single-supply op-amps in a variety of applications, including filtering (Carter, B., “A Single-Supply Op-Amp Circuit Collection,” Texas Instruments Application Report SLOA058, 2000).  This document may be reproduced in Carter, B., Mancini, R., ed., Op-Amps for Everyone, Elsevier/Newnes, 2009, ISBN 978-1-85617-505-0, although I do not have a copy.

Since there will be a high-pass filter stage next, I set up an op-amp as a follower with the positive terminal biased by two 100 kΩ resistors. Normally, a filter cap would be connected between Vcc/2 and ground, although that is not shown in the circuit.

The TI app note puts a capacitor in the offset leg.  I tried some values, and I also tried a capacitor across the power supply and ground.  This made performance worse in all cases.  Some power supplies manage their own noise and do not want a filter capacitor.  It’s possible that, when using the Beaglebone’s 5 V supply, the filter cap is a problem, not a solution.

Since things are working without it, I’m going to leave it out.

The cut-off frequency should be below the audio range.  It does not directly affect the signal (the way the filter in the feedback path does).

f = 1/(2π · 100 kΩ · 1 μF) ≈ 1.6 Hz (good enough)


The basic mic circuit is shown in this figure.

Basic microphone circuit with band-pass filter (2nd order high pass, 2nd order low pass) and voltage limiting to 1.8v


The amplified microphone signal needs to be filtered so that only the signal around the desired broadcast frequency is passed.  The TLC084 can be configured with a double-pole low-pass filter and a double-pole high-pass filter to give a band-pass filter.

A high-pass filter will remove frequency content below the cut-off frequency, including the DC (0 frequency) offset.

Using a single-supply op-amp for the high pass filter requires a virtual ground to bias the signal back to Vcc/2.  Therefore, a high-pass filter with a virtual ground has no frequency content below the cut-off frequency, but is centered in the time domain around the virtual ground.

For the high-pass filter, RC = 1/(2π · 1400 Hz) = 113.6 μs. C = 0.01 μF means R = 11.3 kΩ.

 

Sallen Key double pole high pass filter using virtual ground – Mic amplifier (Vout) and virtual ground (Vcc/2) circuits not shown

A low pass filter will remove frequency content above the cut-off frequency.

For the low-pass filter, RC = 1/(2π · 2600 Hz) = 61.2 μs. C = 0.01 μF means R = 6.1 kΩ -> 5.6 kΩ.

Sallen Key double pole low pass filter with VHP circuit not shown

The low-pass filter adds a very high-frequency oscillation; apparently this is a known practical issue with the Sallen-Key architecture.  A combination RC filter, voltage divider (to get the 0-5 V signal into the 0-1.8 V range), and zener diode are added after the low-pass filter.

The circuit can be assembled in four parts.  First, the microphone signal (the top half of the design, including the virtual ground), which will give Vout, a voltage proportional to the air oscillations and which falls into the 0-5 V range.

The high pass filter can be constructed next, the result of which is a signal that has only high frequency content, Vhp. The output from the microphone, Vout, is the input to the high pass filter.

The low pass filter can be constructed next.  The output of the high pass filter, Vhp, is the input to this block.  The result will be a band-limited signal, Vlp.  It will contain some extra junk, so it does not look great on the oscilloscope.

The final block, the RC low pass filter + voltage divider + zener diode, can be added to give the band-limited microphone signal, Vadc, which only has frequency content between flp and fhp, which is offset by 0.9v, and which is limited between 0-1.8v.

This signal should be presentable to the Beaglebone’s ADC.

I have not yet tested the issues related to the ADC ground versus the signal ground on the beaglebone.  It should be as simple as connecting the two grounds.  I’ve read a little bit about the ground separation.  If it looks like some protection is necessary, I’ll add it to the above circuit.


Marco-Polo

Range can be determined from the time between broadcast and reception.  The speed of sound in air, c, is about 343 meters per second, so d = c·t for a one-way trip.

However, when did the wave start?  The transmitting Beaglebone and the receiving Beaglebone would have to synchronize their clocks to make this work.

If each beaglebone were programmed to broadcast a sound at the conclusion of receiving a sound from another robot, then each robot would be able to calculate the distance to the other robot.  This idea is similar to the children’s game, “Marco-Polo,” where one child would say “Marco” and the other “Polo.”  As the children got further away, the sound would decrease.  As they got closer, the sound would increase.

In this proposed Marco-Polo scheme, the time between sending and receiving is 2d/c+T, allowing the distance to be easily calculated from the measured time of flight, the known speed of sound in air and the known pulse width.
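
In code the range calculation is a one-liner (a sketch; T is treated as the responder’s known, fixed turnaround delay, and the example numbers are made up):

#include <stdio.h>

#define SPEED_OF_SOUND_MPS 343.0

/* t_meas = 2*d/c + T  =>  d = c*(t_meas - T)/2 */
static double marco_polo_range_m(double t_meas_s, double t_turnaround_s)
{
	return SPEED_OF_SOUND_MPS * (t_meas_s - t_turnaround_s) / 2.0;
}

int main(void)
{
	/* Example: 5 ms turnaround, 16.66 ms measured round trip -> about 2 m */
	printf("range = %.2f m\n", marco_polo_range_m(0.01666, 0.005));
	return 0;
}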

If the default time between a robot sending pulses is more than 2*d/c, then any pulse received in less time would possibly be the result of a robot responding.  (NOTE: it would be easy to make the pulse width a function of which robot is being responded to.  E.g. if robot 2 were responding to robot 1, the pulse width could be varied to reflect this.  In the prototype proof-of-concept, this is not necessary, since the experimental configuration will be set up such that robot 1 must be responding to robot 2.)

timing for time of flight

What if a robot hears multiple robots?  In this scheme, the robot may need to respond only to the closest robot or may need to queue up responses (i.e. respond to robot 2 with a specific pulse length and respond to robot 3 with a different pulse length, where the start of the response to robot 3 would follow the completion of the response to robot 2).

This scheme is going to use a “marco-polo” approach to measuring distance.

Robot A will broadcast a pulse of a specific length, p1.  This pulse will repeat at the period, T.

Robot A will listen.  If Robot A hears a pulse on one of its detectors, it will add a pulse of length p2 after its own pulse of length p1, with a space of length Tp - p1, where Tp is the length of one character slot.  T = n·Tp, where n is the number of possible robots in the swarm.

If robot B hears a response with pulse p1 (robot A) and pulse p2 (robot B), then robot B knows that robot A is responding.  Robot B can compute the difference between when it sent its pulse and when it received the response.  This tells robot B how far away robot A is.

The difference in arrival times between the two microphones on the robot can be used to determine the relative orientation of the other robot.
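
Here is a sketch of that bearing calculation under a far-field (plane wave) assumption; the 0.15 m microphone spacing is a placeholder, not a value from this design:

#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SPEED_OF_SOUND_MPS 343.0
#define MIC_SPACING_M      0.15  /* assumed distance between the two microphones */

/* dt_s is the arrival-time difference between the two microphones; the return
   value is the angle (radians) of the source off the array's broadside axis. */
static double bearing_rad(double dt_s)
{
	double x = SPEED_OF_SOUND_MPS * dt_s / MIC_SPACING_M;
	if (x > 1.0) x = 1.0;    /* clamp measurement noise */
	if (x < -1.0) x = -1.0;
	return asin(x);
}

int main(void)
{
	double dt = 0.0002;  /* one ear hears the wave 200 us earlier */
	printf("bearing = %.1f degrees\n", bearing_rad(dt) * 180.0 / M_PI);
	return 0;
}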


Dual-Microphone Ears: 3D Printed Speaker/Microphone Holder

Here is a design for a microphone and speaker holder to allow mounting on a Vex based robot.

In order to reduce the “field of view,” the microphones will be shrouded with a parabolic or shotgun-style receiver.  This will make them more uni-directional and increase the gain.  Here is an article describing the analysis of shotgun microphones (Bai, M. R., Lo, Y. Y., “Refined acoustic modeling and analysis of shotgun microphones,” JASA, v. 133, n. 4, pp. 2036-2045, 2013).  Here is an early paper on tube-based microphones (Mason, W. P., Marshall, R. N., “A tubular directional microphone,” J. Acoust. Soc. Am., v. 10, n. 3, pp. 206-215, 1939).

A tubular design and a shotgun design are analyzed the same way.  In the tubular design, a circular housing contains many tubes of the same diameter and different lengths.  In a shotgun design, a tube has holes or slots along the length.  The analysis doesn’t care if holes are drilled or slots are milled.  I presume that slots are easier to machine or look cooler.  For on-axis sound waves, the distance travelled is the same for the sound entering the end and for sound entering the slot/holes.  For off-axis sound waves, the distance travelled is different for the sound entering the end and the sound entering the successive holes.  When these out-of-phase waves reach the microphone, a complex, frequency dependent interference pattern is established, which usually attenuates the off-axis sound.

The bottom line for these designs is that the longer the tube, the lower the frequency for which directionality is achieved.

Horn-type concentrator: Olson, H. F., Wolff, I., “Sound Concentrator for Microphones,” JASA, v. 1, n. 3A, pp. 410-417.

Parabolic microphone: Hanson, O. B., “Microphone Technique in Radio Broadcasting,” JASA, v. 3, n. 1A, pp. 81-93, 1931.

Here is the beam-pattern for the design.

Difference in arrival time between two microphone receivers can be used to calculate an angle between the microphone holder and the source.

———

Mounting the speaker and the microphone:

Since I’ve got a nice, dandy 3D printer, I can make a parabolic or shotgun shroud for the microphones easily.  I can make baffles for the speaker.   I can make a resonant cavity, different for each speaker, that amplifies the desired frequency.  I can make a horn.  Or a flute.  So many possibilities.

The microphones should be mounted in an ear configuration, with a bias towards the front of the robot.  The speaker needs to broadcast so that the wave front is roughly planar, extending from the ground to somewhat above the robot.

The fact that I can finally put all that acoustic knowledge from my PhD to productive use just makes me happy.

————-

Since sound intensity falls off as 1/r², the volume to the speaker can be set so that the detection threshold in the demodulator circuit is not reached beyond the maximum distance for which detection is desired.  Use of resonators should allow the sound to be broadcast with small power consumption.  Although an LM386 will be used initially, it is probably overkill for this application, and a lower-power amplifier should be considered.


NOTES

An RC high-pass filter is used to strip off the DC offset before sending the signal to the envelope detector.  R = 39 MΩ and C = 0.033 μF work nicely; this gives a cutoff frequency of 1/(2π · 39 MΩ · 0.033 μF) ≈ 0.1 Hz.  I used parts I had lying around, so a better calculation might be worth doing here.

The simple envelope detector uses a diode, a resistor, and a capacitor.   The RC network should have a cut-off frequency below the carrier frequency.  For a 2000 Hz carrier, start with a 500 Hz cut-off; RC values of 16 kΩ and 0.02 μF make a good start. NOTE: This doesn’t seem to work too well.  It looks like a much lower cut-off frequency is called for; I’m getting peaks that decay rapidly.

A little more fiddling and it becomes obvious that the voltage in my signal is not rising much above the diode’s 0.6 V turn-on voltage.  So, the peaks that I’m seeing are only those peaks that get above that value.  Either I need to amplify the signal more (not a great plan, since the total signal range of my amplifier is 5 V, which means my maximum signal amplitude would be about 2.5 V, and distant signals would still be very hard to detect), or I need a better rectifier.  So, on to the super-diode, which does not suffer from this problem.

Another issue reported in the literature is that the frequency response of the diode can impact the problem.  We’ll see.


NOTE (7/14/2017): I’m in the process of changing to the CUI CMS-6542PF microphone.  Although the cartridge is bigger, the pin spacing (0.1″) and size allow a standard connector to be used.

dimensions: 9.7 x 6.5 mm

Impedance: 2.2 kΩ

directivity: omnidirectional

frequency: 50Hz to 20 kHz

Max operating voltage: 10 v

Standard operating voltage: 4.5 v

current consumption (Vs = 4.5 Vdc, RL = 2.2 kΩ): 0.5 mA maximum

sensitivity reduction (f = 1 kHz, 1 Pa, Vs = 4.5 to 1.5 Vdc): within -3 dB at 1.5 v

S/N ratio (f = 1 kHz, 1 Pa, A-weighted): more than 60 dB

sensitivity: -42 ±3 dB (0dB = 1V/Pa, 1kHz) Vs = 4.5V, RL=2.2kΩ

The datasheet for the CMS-6542PF uses a 4.5 V source, 0.5 mA maximum current, and a 2.2 kΩ bias resistor.  This gives the voltage at the top of the JFET as (4.5 - x) = 2200 · 0.0005, so x = 3.4 V.  So, the JFET should be in saturation at around 3.4 V.


The Microchip MCP6044 proved to be a problem.  It only has a 14 kHz gain-bandwidth product, so at the gain used here it started attenuating at about 500 Hz, which makes it unreliable for audio applications.


The first bit in this project is to work out the demodulator.  I’m going to start with a cheap and dirty envelope-detector circuit.  This circuit uses a diode and a low-pass filter, where the filter is tuned to remove the carrier frequency (i.e. the cut-off is below the carrier frequency).  The cheap and dirty circuit can be improved with a super-diode circuit (to detect signals below the diode’s turn-on voltage) and an active filter to give a more accurate cut-off frequency.

For each robot, a different frequency would be chosen.  This allows many signals to be flying around the space at once.  However, band-pass filters will be required so that only the signal in the band of the robot will be passed.  For every robot to be detected, two digital inputs (one per microphone) will have to be assigned.  So, for eight robots in the swarm, 16 digital inputs will be needed.  Some form of multiplexing will likely be required.
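
ASIDE: If one analog band-pass filter (and two digital inputs) per robot gets out of hand, a digital alternative is to sample a microphone and test each candidate frequency in software, e.g. with the Goertzel algorithm.  This is the DSP route mentioned in the 9/5/2016 entry above, not part of the analog receiver described here; the sketch below assumes a 48 kHz sampling rate and 10 ms blocks, which are placeholder values.

#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS_HZ 48000.0  /* assumed sampling rate */
#define N     480      /* 10 ms block at the assumed rate */

/* Goertzel single-bin power estimate: one call per candidate robot frequency. */
static double goertzel_power(const double *x, int n, double f_hz, double fs_hz)
{
	double coeff = 2.0 * cos(2.0 * M_PI * f_hz / fs_hz);
	double s_prev = 0.0, s_prev2 = 0.0;
	for (int k = 0; k < n; k++) {
		double s = x[k] + coeff * s_prev - s_prev2;
		s_prev2 = s_prev;
		s_prev = s;
	}
	return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2;
}

int main(void)
{
	double block[N];
	for (int k = 0; k < N; k++)  /* synthetic 3000 Hz tone standing in for a sampled burst */
		block[k] = sin(2.0 * M_PI * 3000.0 * k / FS_HZ);

	const double robot_freq[] = { 2000.0, 3000.0, 4000.0 };  /* one identity frequency per robot */
	for (int r = 0; r < 3; r++)
		printf("robot %d (%4.0f Hz): power = %.1f\n",
		       r + 1, robot_freq[r], goertzel_power(block, N, robot_freq[r], FS_HZ));
	return 0;
}

Only the robot whose frequency is actually present shows a large power.  The trade-off is that the Beaglebone has to sample and crunch fast enough; the analog band-pass route below avoids that entirely.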

The classic bandpass filter for AM transmission is a “tank circuit.”  After much research and experimentation, I got the following circuit to function as a bandpass filter.

Microphone conditioning circuit

For this circuit, the center (bandpass) frequency is f = 1/(2π√(LC)).  I’m not sure about the bandwidth (or Q).  But I’ll worry about that if it becomes a problem.

I’m using a 47 mH inductor (mouser p/n ???).  This requires the following capacitors to set the resonant frequency: 12000 Hz (6), 11000 Hz (7), 10000 Hz (8), 9000 Hz (10), 8000 Hz (13), 7000 Hz (18), 6000 Hz (22), 5000 Hz (33).
