USB to achieve fixed-rate (no gaps) data streaming?

#1

This application is rather like a simple oscilloscope or logic analyzer, where we need to stream samples _without_gaps_ to a PC host.

As fast as possible, of course, but we CAN trade off some peak bandwidth to get a deterministic rate.

Problem is, by how much? And what size buffers will be needed?
i.e. if we use under 50% of the USB timeslots, what is the chance of something else still causing enough disturbance to interrupt our sample stream?

It is not audio, so we do not mind arrival-time variations, but we cannot tolerate lost samples, buffer overruns, etc.

Seems a common requirement, so has anyone cracked this already, using either AVR32, AVR8, or even FTDI devices?

I did find a USB scope article, but it was old, and the author simply side-stepped the issue of missing samples...

Or, is this simply 'a Windows artifact' that everyone lives with?

Do we need to time-stamp on the send end, to catch this happening? (That will cost more bandwidth..)
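A cheaper alternative to full timestamps is a one-byte wrapping sequence number prepended to each packet, so the host can detect exactly how many packets went missing. This is just a sketch (the framing and function names are made up, not any FTDI/AVR API):

```c
#include <stdint.h>
#include <string.h>

#define PKT_PAYLOAD 63  /* 64-byte packet: 1 sequence byte + 63 samples */

/* Device side: prepend a wrapping sequence number to each packet.
 * Costs 1 byte of overhead per packet instead of a full timestamp. */
static uint8_t tx_seq = 0;

void frame_packet(uint8_t *pkt, const uint8_t *samples)
{
    pkt[0] = tx_seq++;              /* wraps at 256 automatically */
    memcpy(&pkt[1], samples, PKT_PAYLOAD);
}

/* Host side: returns the number of packets missing before this one
 * (0 means the stream is gap-free so far). Unsigned wrap-around
 * arithmetic makes the subtraction correct across the 255->0 rollover. */
unsigned check_gap(uint8_t *expected_seq, const uint8_t *pkt)
{
    unsigned lost = (uint8_t)(pkt[0] - *expected_seq);
    *expected_seq = (uint8_t)(pkt[0] + 1);
    return lost;
}
```

One byte in 64 is ~1.6% overhead, and the scheme only detects up to 255 consecutive lost packets before aliasing, which is usually enough to flag that a gap occurred.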

tia.

#2

You don't lose packets on the USB bus - it includes a built-in CRC mechanism, and corrupted bulk/interrupt packets are automatically retransmitted, so data integrity is ensured. Also, the host has to *poll* for endpoint data rather than the device pushing it onto the bus. That means that regular bulk/interrupt endpoints will ensure perfect data integrity, although the timing won't be guaranteed.

You can use Isochronous type endpoints to ensure bandwidth and timing constraints are met.

- Dean :twisted:

Make Atmel Studio better with my free extensions. Open source and feedback welcome!

#3

Thanks Dean,
Microsoft says this (rather depressingly):
["Increasing the size of the USB device's on-device buffer would reduce its vulnerability to the lack of predictability in USB Bulk transfer scheduling. However, to guarantee absolutely against data loss would require a buffer large enough to hold one entire "session" worth of captured data.

Implementing data transfer via Isochronous endpoints instead of Bulk endpoints will provide a guaranteed transfer rate (bandwidth), but no mechanism for ensuring reliable delivery of data.

Implementing data transfer via Interrupt endpoints instead of Bulk or Isochronous endpoints may provide an acceptable combination of periodic bandwidth and reliable delivery of data. Disabling CPU power-management on affected systems may reduce delays in USB Bulk transfers to acceptable levels."]
- which rather reads like their SW license agreements!

However, in the real world, how much of an issue is losing data on Isochronous endpoints?

Or, how much of an issue is using a buffer _smaller_ than the session, but with bandwidth to spare?

From your reply it sounds like USB itself does not lose packets, but the same effective result will occur if the PC is unable to empty the buffer in time.

One target we had was 150000 samples (byte-wide), which has ~41% timeslot loading - any idea if that is sustainable over ~300 ms to ~3 seconds?
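The buffer-smaller-than-session question comes down to a simple proportionality: with a fixed fill rate, buffer size and tolerated host stall time scale together. A back-of-envelope sketch, taking the 150000 byte-wide samples above as a per-second rate (an assumption on my part):

```c
/* Back-of-envelope buffer sizing (illustrative, not from any datasheet):
 * with a fixed sample fill rate, the device buffer size and the
 * worst-case host stall it can absorb are directly proportional. */

/* How long (ms) can the host stop draining before a buffer of this
 * size overruns? */
double stall_tolerance_ms(unsigned buffer_bytes, unsigned fill_rate_bytes_per_s)
{
    return 1000.0 * buffer_bytes / fill_rate_bytes_per_s;
}

/* Buffer needed to survive a given worst-case host stall, rounded up. */
unsigned required_buffer_bytes(double stall_ms, unsigned fill_rate_bytes_per_s)
{
    return (unsigned)(stall_ms * fill_rate_bytes_per_s / 1000.0 + 0.5);
}
```

At 150,000 bytes/s a 1 KiB device FIFO only covers ~6.8 ms of host stall, while surviving a 100 ms stall (a plausible worst case for a busy Windows host) needs ~15 kB - which is why the question of worst-case host latency matters more than average bandwidth.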

It looks like we do need some way to detect that a buffer over-fill occurred. That, plus maybe letting users vary the sample rate over a range?
FTDI info is annoyingly sparse on such error flags...
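Absent a documented FTDI error flag, the over-fill can be detected on the device side with a sticky overrun flag on the sample FIFO. A minimal sketch (hypothetical names, not any particular FTDI/AVR API): the sampling ISR calls fifo_put(); if the host hasn't drained the endpoint in time, the flag latches and can be reported in the next status packet.

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_SIZE 256u          /* power of two, so wrap is a cheap mask */

static uint8_t  fifo[FIFO_SIZE];
static uint16_t head, tail;
static bool     overrun;

/* Called from the sampling ISR. Returns false (and latches the
 * overrun flag) if the host failed to drain the buffer in time. */
bool fifo_put(uint8_t sample)
{
    uint16_t next = (head + 1) & (FIFO_SIZE - 1);
    if (next == tail) {         /* full: drop sample, remember it */
        overrun = true;
        return false;
    }
    fifo[head] = sample;
    head = next;
    return true;
}

/* Called when filling the next USB packet. Returns false when empty. */
bool fifo_get(uint8_t *sample)
{
    if (head == tail)
        return false;
    *sample = fifo[tail];
    tail = (tail + 1) & (FIFO_SIZE - 1);
    return true;
}

/* Read-and-clear, suitable for reporting in a periodic status packet. */
bool fifo_overrun_and_clear(void)
{
    bool o = overrun;
    overrun = false;
    return o;
}
```

The flag tells the host *that* samples were lost, not how many; combined with a per-packet sequence number it also tells you *where* in the stream the gap happened.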