GUS Programmer's Digest      Sun, 3 Oct 93 00:09 MDT      Volume 5: Issue 2

Today's Topics:
                How to avoid a recording pitfall (long ramblings)

Standard Info:
    - Meta-info about the GUS can be found at the end of the Digest.
    - Before you ask a question, please READ THE FAQ.

----------------------------------------------------------------------

Date: Sat, 2 Oct 93 13:22 PDT
From: Tom_Klok@mindlink.bc.ca (Tom Klok)
Subject: Re: How to avoid a recording pitfall (long ramblings)
Message-ID:

davidm@marcam.com (David MacMahon) writes:

> I had attributed this to missing samples until I noticed that the
> problem got worse at low sampling frequencies!  I connected an o'scope
> to the record DMA request line (DRQ) to see what was happening.  I
> recorded in mono with a sampling frequency of 7015 Hz (sample time of
> 143 uS).  I measured the time between DMA requests to be about 280 uS
> (I use a 16 bit DMA channel so in mono the rate gets halved).  So far
> so good.  Then I noticed that the time between the last DMA request of
> one buffer and the first DMA request of the next buffer was only about
> 220 uS!  In other words, every time I started recording into the next
> buffer I would start to sample the input at the desired frequency, but
> the sampling started 60 uS before it should.

This is disconcerting, to say the least.  From what you've observed, it
sounds like the GF1 doesn't allow synchronous restart of the ADC DMA.
At least the SDK code doesn't.  Here's how I see it:

The GF1 has its own clock, running at 9.8784 MHz (CLOCK_RATE).  This is
divided by 16 internally, yielding 617.4 kHz (I'll call it CLOCK16).
CLOCK16 is fed to the ADC timer.

The ADC timer register is loaded with the rate you want to record at.
It's called adsr in the SDK source.  From the SDK (sample.c), the value
is calculated like this:

    adsr = (CLOCK16 / rate) - 2;

I'm not sure why the -2 is needed.  Fencepost problems on the counter?
Probably easier to design the hardware that way.  No problem for us.
Every time the ADC timer counts down, it does two things:

    - grabs a sample from the ADC
    - initiates a DMA transfer of that sample

Two things to do, -2 on the rate... ?
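To make the arithmetic concrete, here's a quick throwaway C program
that plugs David's numbers into that formula.  The constant and the
formula come from sample.c as quoted above; everything else is purely
for illustration.

    #include <stdio.h>

    #define CLOCK_RATE  9878400L            /* GF1 master clock, Hz */
    #define CLOCK16     (CLOCK_RATE / 16L)  /* 617400 Hz into the ADC timer */

    int main(void)
    {
        long   rate      = 7015L;                 /* David's record rate, Hz */
        long   adsr      = (CLOCK16 / rate) - 2;  /* SDK formula, sample.c */
        double sample_us = 1000000.0 / rate;

        printf("adsr divider  : %ld\n", adsr);               /* 86 */
        printf("sample period : %.1f uS\n", sample_us);      /* ~142.6 */

        /* 16-bit DMA channel in mono: two samples per DMA request */
        printf("DRQ interval  : %.1f uS\n", 2.0 * sample_us); /* ~285.1 */
        return 0;
    }

The ~285 uS figure matches the ~280 uS David measured between requests
in mid-buffer; it's only the buffer-to-buffer gap (~220 uS, some 60 uS
short) that's out of line.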
After the PC DMA stuff is set up (I believe it's irrelevant to the
problem) and the ADC rate is set, we start the ADC DMA transfer.
Here's a fragment from my own code:

    mov   dx, port_gf1_reg_sel
    mov   al, SAMPLE_CONTROL
    out   dx, al                        ; select ADC control reg
    mov   dx, port_gf1_data_hi
    in    al, dx                        ; dummy read (!)
    mov   al, ENABLE_ADC OR ADC_IRQ_ENABLE
    out   dx, al                        ; start the ADC

The critical thing here is setting ENABLE_ADC.  It appears that this
bit gates CLOCK16 into the ADC timer circuitry.  If the bit isn't set,
the timer isn't running.  If the bit is set, the timer is clocked.

Note that there are two basic ways of wiring up the ADC counter.  It
can be ticking over all the time, with its output gated by the
ENABLE_ADC bit.  Or its output can be hardwired to the ADC stuff, with
the ENABLE_ADC bit gating the timer clock (CLOCK16).

The former method has a minor problem: you can't start the ADC *now*.
It will start on the next timer tick regardless of when you set the
enable bit.  For example, if you're recording at 4000 Hz, then when you
set ENABLE_ADC the next sample will arrive any time between now and
1/4000 seconds from now.  But they'll all line up nicely after that,
even when restarting on the next buffer.

The latter method has a much bigger problem.  You can start the ADC
sampling whenever you like, but the delay between buffers is left up to
the CPU/programmer.  From your observations, it looks like that is how
Forte did it.  :(

It would be really nice to hear a few words from Gravis or Forte on
this.  We'd all be better off if they were more open about programming
the GUS at the register level.  I'm still waiting to hear about
base_port+15.  :(

Anyways, solutions to the problem?  Let's see... the GUS general timers
won't help; they're too slow.  At 11.025 kHz sampling, the delay
between samples is 90.70 uS; at 44.1 kHz, it's 22.68 uS.  The two GUS
timers have a resolution of 80 and 320 uS per tick.

What about wasting a voice as a timer?  This is all off the top of my
head, so forgive me if I'm out in left field.  :)  Maybe something like
this:

    Enable ADC
    Calculate voice freq values for the needed delay
    On ADC TC IRQ:
        ack the IRQ
        prime the PC's DMA controller for the next buffer
        start the voice playing silence, with IRQ enabled
    On voice IRQ:
        ack the IRQ
        hit ENABLE_ADC to restart the recording
        disable the voice

Lots of overhead.  Probably won't work on slow machines or fast record
rates.  Ugly.

What about the PC's timer chip?  I'll have to look into that.

Finally, if nothing else works, a silly busy loop in the ADC TC IRQ
handler before the re-enable:

            mov   cx, adc_dma_delay_val   ; must be >0!
    delay:  loop  delay

'adc_dma_delay_val' needs to be pre-calculated during initialization,
depending on both the PC's speed and the record frequency.  There are
still a few problems:

    - The PC's interrupt latency isn't consistent; it hops around.  If
      another IRQ is being serviced, we could be late.  Even things
      like CPU cache fetches put the timing off a little.

    - The routines to pre-calculate the delay would be dog-ugly.  It's
      a lot of trouble to get it right.

    - If we're using a very high rate, like 44.1 kHz, we might not need
      any delay at all.  The CPU's own interrupt latency and the speed
      of the IRQ service code could be enough.  Then the delay code
      actually hurts, not helps; just checking could be enough clock
      cycles lost to drop a sample.
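For what it's worth, the pre-calculation might look something like the
C sketch below.  This is all guesswork on my part: loops_per_us would
have to be measured on the host machine at startup (LOOP timing varies
wildly between, say, a 386SX and a 486DX2), isr_overhead_us is a fudge
factor for the fixed cost of the service code, and target_us is however
much extra time the restart needs at the current record rate (around 60
uS in David's 7015 Hz case).  None of these names are from the SDK.

    /* Hypothetical init-time setup for the busy-loop count above. */
    unsigned short adc_dma_delay_val;

    void set_adc_delay(double target_us, double loops_per_us,
                       double isr_overhead_us)
    {
        double loops = (target_us - isr_overhead_us) * loops_per_us;

        /* The asm stub requires CX > 0 (LOOP with CX == 0 spins
           65536 times!), so a result <= 0 here means the delay
           should be patched out entirely: the high-rate case from
           the last point above. */
        adc_dma_delay_val = (loops < 1.0) ? 0 : (unsigned short)loops;
    }

A call like set_adc_delay(60.0, loops_per_us, isr_overhead_us) would
then cover David's measured case, and the zero result at high rates
sidesteps the third problem in the list.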
Also, I find your measurements of the time between DMA requests very
interesting.  I use an 8-bit DMA channel myself.  So if you're using a
16-bit channel while recording 8-bit samples, the GUS gives you two
bytes at a time at half the rate?  In that case, using an 8-bit channel
might reduce the problem (depending on the sampling rate, 'natch).
Hmm... probably not.  Worth a try though.

It's also a little annoying that all the code I've written so far has
been geared towards maximum speed.  Here I was, thinking that I had to
be quick in the IRQ service or I'd drop samples.  As it turns out, it's
better to be slow.  :)

I like your quick&dirty approach of fine-tuning the playback rate to
account for the timing problems with the ADC, Dave.  I also agree that
larger buffers should minimize the problem.  Might let you run higher
sample rates, too.

But I also wonder... is there something we can do to the GUS at the
register level to let the GUS solve this problem for us?  If not, is
there any chance that Forte will correct the GF1 for the next ASIC
revision?  Or is the clocking problem not on the GF1, but in some
external hardware?  In the PALs?  Will the 16-bit daughter board fix
this?  Will the GUS Max have this problem?

Gravis/Forte technical support, talk to us!  We're busy drumming up
software support for your products.  The least you could do is help us
out.

Oh!  One last idea.  Are you resetting the adsr (sample rate register)
every time you restart on the next buffer?  In the SDK, that's
UltraSetRecordFrequency().  Maybe, just maybe, that's causing all the
trouble.

--
Tom Klok                   MIND LINK! Support team member - Vancouver BC
a344@mindlink.bc.ca        What, me hurry?  -- Alfred E. Von Neuman

------------------------------

End of GUS Programmer's Digest V5 #2
************************************

To post to tomorrow's digest:
To (un)subscribe or get help:
To contact a human (last resort):

FTP sites:  archive.epas.utoronto.ca    pub/pc/ultrasound
            wuarchive.wustl.edu         systems/msdos/ultrasound

Hints:
    - Get the FAQ from the FTP sites or the request server.
    - Mail to for info about other GUS related mailing lists (UNIX,
      OS/2, GUS-MIDI, etc.)