Implementing ADC data capture system and DAC control loop

#1

I'm in the process of designing a data capture and feedback loop control system. The system comprises five AD7685 ADCs (http://www.analog.com/static/imp...) that will be read at a sampling rate of no more than 1 kHz.

A sample is initiated by driving the CNV pin on the ADCs high. All of the CNV pins are wired together, so one edge triggers all five converters at once. Each of the five ADCs is then read over SPI in sequential order, as in the sketch below.
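Here's a rough, untested sketch of the trigger-and-read sequence I'm picturing. CNV on PC5 and the per-device enables on PC0..PC4 are placeholders until the schematic is final (pin direction setup omitted), and exactly how the five SDO lines share the bus (chain mode versus separate output enables) will change the details:

```c
#define F_CPU 18432000UL
#include <avr/io.h>
#include <util/delay.h>

#define CNV_HIGH() (PORTC |=  (1 << PC5))
#define CNV_LOW()  (PORTC &= ~(1 << PC5))

static uint8_t spi_xfer(uint8_t out)
{
    SPDR = out;                        /* start the transfer           */
    while (!(SPSR & (1 << SPIF)))      /* wait for completion          */
        ;
    return SPDR;
}

static void read_adcs(uint16_t sample[5])
{
    CNV_HIGH();                        /* rising edge starts all five  */
    _delay_us(4);                      /* wait out t_CONV (datasheet)  */
    CNV_LOW();                         /* readback phase               */

    for (uint8_t i = 0; i < 5; i++) {
        PORTC &= ~(1 << i);            /* enable ADC i (placeholder)   */
        uint16_t hi = spi_xfer(0x00);
        uint16_t lo = spi_xfer(0x00);
        sample[i] = (uint16_t)((hi << 8) | lo);
        PORTC |= (1 << i);             /* disable ADC i                */
    }
}
```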

I then have to compare the voltage sampled from one ADC with a known value. If the sampled voltage is greater than this known value, I need to send a control word to an SPI DAC (AD5621, http://www.analog.com/static/imp...).
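The comparison and DAC update might then look like this (reusing spi_xfer() from the sketch above). THRESHOLD_COUNTS is a stand-in for the known value, SYNC on PC6 is a placeholder pin, and my reading of the AD5621's 16-bit frame (two power-down bits, 12 data bits, two don't-care bits) should be verified against the datasheet:

```c
#define THRESHOLD_COUNTS 0x8000u       /* stand-in for the known value */

static void dac_write(uint16_t code12)
{
    /* normal mode: power-down bits = 00, data left-justified */
    uint16_t frame = (uint16_t)((code12 & 0x0FFFu) << 2);
    PORTC &= ~(1 << PC6);              /* SYNC low frames the transfer */
    spi_xfer((uint8_t)(frame >> 8));
    spi_xfer((uint8_t)(frame & 0xFF));
    PORTC |= (1 << PC6);
}

static void check_and_update(const uint16_t sample[5], uint16_t dac_code)
{
    /* compare one channel against the known value */
    if (sample[0] > THRESHOLD_COUNTS)
        dac_write(dac_code);           /* dac_code computed elsewhere  */
}
```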

I've selected the ATMEGA324P-20AU as the microcontroller used to implement this system.

Once I've collected data from each of the five ADCs, I need to send the data over a serial port to a 32-bit ARM microcontroller running Linux. I assume that serial port communications will occur at 115200 baud.

Questions:

(1) What should I choose as the clocking frequency of the ATMEGA?

(2) Is it possible to sustain a sampling rate of 1 kHz from the 5 ADCs over the serial port?

(3) Suppose that I compute a CRC checksum. What should be the length of the checksum, and would I still be able to sustain this sampling rate while calculating the checksum?

Could anyone comment on this scheme?

#2

1) Something that will give you 115200 baud with no error (the datasheet has a baud rate table that will guide you).
2) That depends on how you send the data. If all you send is two raw bytes per reading, then you are sending 2 * 5 * 1000 * 10 bits = 100,000 bits/s (each byte costs 10 bits on the wire: a start bit, 8 data bits, and a stop bit), which is less than your 115,200 baud. If you have to send it as ASCII, or if you need other data with it, you're out of luck.
3) The CRC need be no more than 1 byte if it covers each group of 5 readings, and that still keeps you under 115,200 baud (11 bytes * 10 bits * 1000 = 110,000 bits/s). There shouldn't be much problem in calculating it, depending on what CRC scheme you use; see the routine below.
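For what it's worth, a bitwise CRC-8 is only a few lines. The 0x07 polynomial here is just one common choice; both ends need to agree on it:

```c
#include <stdint.h>

/* Bitwise CRC-8, polynomial x^8 + x^2 + x + 1 (0x07). */
static uint8_t crc8(const uint8_t *data, uint8_t len)
{
    uint8_t crc = 0x00;
    while (len--) {
        crc ^= *data++;
        for (uint8_t bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}
```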

Regards,
Steve A.

The Board helps those that help themselves.

#3

Steve, thank you very much for your response. This keeps me from going down too many blind alleys. Here's what I am going to do:

(1) Choose the AVR master clock frequency as 18.432 MHz. This is a multiple of 1.8432 MHz (16 x 115200), and the choice ensures that I can get a USART operating at 115200 baud without any error.

(2) The 18.432 MHz master clock frequency is under the maximum 20 MHz clock limit for the ATMEGA324P-20AU.

(3) The SPI bus will have a master clock frequency of 18.432 MHz / 2 = 9.216 MHz, using a clock divisor of two (the fastest the AVR's SPI master supports). A rough initialization sketch follows this list.

Since there are five ADCs, and each must be read over SPI, it will take approximately (16 bits)(5 ADCs)(1.085e-7 s) = 8.68e-6 seconds to read a sample from all of the ADCs. This is well under the time between sample triggers (1 kHz sampling rate, 1.0e-3 seconds).

(4) The CRC checksum shouldn't take too long to calculate, since this can be done using binary shifts in WinAVR. Even a bitwise implementation over the ten data bytes should cost only a few hundred main clock cycles, which is negligible at 18.432 MHz.
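Putting (1) through (3) into code, this is roughly the initialization I have in mind. Register names are from the ATmega324P datasheet, but the code is untested, and SPI mode 0 is an assumption to be checked against the ADC's timing diagram:

```c
#define F_CPU 18432000UL
#include <avr/io.h>

static void usart0_init(void)
{
    UBRR0H = 0;                             /* 18432000/(16*115200) - 1 */
    UBRR0L = 9;                             /* = 9 exactly: 0% error    */
    UCSR0B = (1 << TXEN0) | (1 << RXEN0);
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00); /* 8N1                      */
}

static void spi_init(void)
{
    /* SS, MOSI, SCK as outputs (ATmega324P: PB4, PB5, PB7) */
    DDRB |= (1 << PB4) | (1 << PB5) | (1 << PB7);
    SPCR  = (1 << SPE) | (1 << MSTR);       /* master, SPI mode 0       */
    SPSR |= (1 << SPI2X);                   /* fosc/2 = 9.216 MHz       */
}
```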

Three questions:

(1) When calculating the data rate, 2 * 5 * 1000 * 10 bits = 100,000 bits/s, the factor of 10 bits per byte is the start bit, 8 data bits, and stop bit on the wire, correct? I understand that (2 bytes) * (5 ADCs) * (1000 Hz) gives the number of bytes per second in the data stream.

(2) I would assume that the receiving serial port on the ARM processor (running Linux) would have to be set up as a "raw" TTY, since we are not dealing with ASCII. Is this the case? (See the sketch at the end of this post for what I have in mind.)

If we were sending data to a computer serial port, and the computer were running the Linux OS, I would suppose that the serial port should also be set up to accept raw communications. Perhaps this is also possible on Windows (e.g. using Tera Term).

(3) I would also assume that the AVR can indeed send raw binary data over the serial port. I've always used ASCII, but has this been done with some success? A sketch of the sending side is also below.
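For question (2), this is roughly what I'd try on the Linux side, based on my reading of the termios man pages (untested; the device path is a placeholder for whatever the kernel calls the USART on the AT91SAM9 board):

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open a serial device in raw 115200 8N1 mode. Returns the fd, or -1. */
int open_raw_tty(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);            /* no echo, no line discipline    */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cc[VMIN]  = 1;        /* block until at least one byte  */
    tio.c_cc[VTIME] = 0;
    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

And for question (3), the AVR sending side would be something like this. The MSB-first frame layout and the trailing CRC are my own convention, and crc8() is the routine Steve sketched above:

```c
#include <avr/io.h>
#include <stdint.h>

static void usart0_tx(uint8_t b)
{
    while (!(UCSR0A & (1 << UDRE0)))   /* wait for empty data register */
        ;
    UDR0 = b;
}

/* One frame = 10 raw sample bytes + CRC-8. At 1 kHz this is
   11 bytes * 10 bits * 1000 = 110,000 bits/s, under 115,200. */
static void send_frame(const uint16_t sample[5])
{
    uint8_t buf[10];
    for (uint8_t i = 0; i < 5; i++) {
        buf[2 * i]     = (uint8_t)(sample[i] >> 8);   /* MSB first */
        buf[2 * i + 1] = (uint8_t)(sample[i] & 0xFF);
    }
    for (uint8_t i = 0; i < 10; i++)
        usart0_tx(buf[i]);
    usart0_tx(crc8(buf, 10));
}
```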

#4

Forwarding a new set of samples every millisecond seems like overkill, but perhaps your app needs it. If you forward new readings every 5 ms or 10 ms, you can see how you would be much more comfortable with your serial line throughput.

(Even though it >>is<< an ARM at the other end, that is going to be a lot of info for it to swallow and process on a continual basis. Can it handle it? Is it really necessary?)

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#5

Thanks, Lee. Well, I'm designing a system to collect data from an experimental sensor, and the physics of the device requires that samples be taken at a maximum of 1 kHz. I would also like to be able to reduce the sampling rate to ~100 Hz or lower. So the AVR acts as a simple state machine and accepts control commands from the ARM over the serial port. The ARM sets up the sampling rate and tells the AVR when to start sampling. In the past, I've used simple ASCII commands such as "tm" for "Take Measurement." In rough outline, the command handling might look like the sketch below.
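Something like this, where the single-letter codes are placeholders in the spirit of "tm" (not a settled protocol), and the main loop reprograms the timer from rate_select:

```c
#include <avr/io.h>
#include <stdint.h>

static volatile uint8_t sampling = 0;     /* 0 = idle, 1 = sampling      */
static volatile uint8_t rate_select = 1;  /* main loop maps this to a    */
                                          /* timer period                */
static uint8_t usart0_rx(void)
{
    while (!(UCSR0A & (1 << RXC0)))       /* wait for a received byte    */
        ;
    return UDR0;
}

static void handle_command(uint8_t c)
{
    switch (c) {
    case 'r': rate_select = usart0_rx(); break;  /* next byte = rate     */
    case 's': sampling = 1; break;               /* start sampling       */
    case 'x': sampling = 0; break;               /* stop sampling        */
    }
}
```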

I am thinking that a 16-bit AVR timer in CTC mode can be used to trigger samples at a rate of 1 kHz. It turns out 100 Hz should also fit in the timer directly: with a prescaler of 8 the timer ticks at 18.432 MHz / 8 = 2.304 MHz, and 2.304 MHz / 100 Hz = 23,040 counts, which is still within 16 bits, so I may not need my own software-based counter after all.
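A rough, untested CTC setup (the ISR body is just a placeholder):

```c
#define F_CPU 18432000UL
#include <avr/io.h>
#include <avr/interrupt.h>

/* Timer1 in CTC mode at 18.432 MHz / 8 = 2.304 MHz:
     1 kHz  -> top = 2304  - 1 = 2303
     100 Hz -> top = 23040 - 1 = 23039 (still fits in 16 bits) */
static void timer1_init(uint16_t top)
{
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS11);   /* CTC, prescaler = 8 */
    OCR1A  = top;
    TIMSK1 = (1 << OCIE1A);                /* compare-match IRQ  */
    sei();
}

ISR(TIMER1_COMPA_vect)
{
    /* raise CNV here to kick off the next conversion */
}
```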

The idea is to have the AVR send all of the data back to the ARM. The ARM stuffs the data into a more than generous amount of SDRAM: two 256-Mbit (16M x 16) chips that are also used as LCD controller memory.

The plan is to collect data for approximately 2 minutes at the maximum 1 kHz sampling rate. The ARM program then runs a non-linear curve-fitting algorithm on the collected data and saves the data to an SDHC card.

When I initially started this, I considered doing the non-linear curve fitting on the AVR, but I didn't think that would work very well.

#6

If, during a run, the samples are expected to be close to a baseline value, then you could just send the offset from the base/expected value. Would that fit into 8 bits?
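Something along these lines, with an escape for the occasional sample that doesn't fit (just one way to skin it):

```c
#include <stdint.h>

#define DELTA_ESCAPE ((uint8_t)0x80)  /* -128: "full 16-bit value follows" */

/* Emit a signed 8-bit offset from the baseline when the sample fits,
   otherwise an escape byte plus the full reading. Returns the number
   of bytes written to out (1 or 3). */
static uint8_t encode_delta(uint16_t sample, uint16_t baseline, uint8_t *out)
{
    int32_t d = (int32_t)sample - (int32_t)baseline;
    if (d >= -127 && d <= 127) {
        out[0] = (uint8_t)(int8_t)d;
        return 1;
    }
    out[0] = DELTA_ESCAPE;
    out[1] = (uint8_t)(sample >> 8);
    out[2] = (uint8_t)(sample & 0xFF);
    return 3;
}
```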

Unless it is important that the conversions be synchronized, I guess I'd lean toward more-or-less continuous conversions and result gathering, and then at your millisecond mark you send the result of the latest conversion.

It sounds doable, if the ARM, under operating system control, can swallow all the info.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#7

Sending the offset from a base value is a very good idea; it may work well for this application. The sensor output is a voltage that changes slowly over the two-minute sampling window, so if the change fits in 8 bits, this would help to reduce the amount of data sent.

I think that if there is too much latency on the ARM side of things, the real-time Linux PREEMPT_RT patch might help. The ARM processor being used is the AT91SAM9RL64, clocked at ~200 MHz.

Apparently Linux (even the real-time variants) does not guarantee interrupt latency, hence the use of the mighty little AVR to initiate the sampling conversions.

Thanks, Lee.