I'm new to libftdi, so please forgive me if I've missed some obvious documentation....
I'm converting a program written (by someone else) for Windows using the D2XX driver over to libftdi (so great that it's available, given how poorly D2XX behaves under Linux). We're using an FT2232H connected to an FPGA to stream data. The test program my coworker created sends the "start streaming" command with an FT_Write, then FT_Reads in the data for a certain number of packets, sends the "stop streaming" command with another FT_Write, and closes up. The D2XX FT_Read call blocks until either the requested number of bytes has been received or a timeout (set using FT_SetTimeouts) has been reached.

When I started this afternoon, I assumed that ftdi_read_data() would behave similarly, returning when either the requested number of bytes was received or usb_read_timeout was reached. In fact, it appears that ftdi_read_data() actually returns immediately, and that the proper way to understand the "size" argument is "the maximum number of bytes that will be returned into the buffer." Is that understanding correct? Should my code just poll on ftdi_read_data()?

If I'm hoping to get really high throughput (ideally > 50 Mbps sustained), are there any hints on balancing the chunksize and the size argument for ftdi_read_data()?

thanks much!
– caleb
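
In case it helps to make the question concrete, here's a rough sketch of the kind of polling loop I'm picturing (untested; it assumes an already-opened and configured libftdi 1.x context, and the read_blocking name plus the PACKET_BYTES / READ_TIMEOUT_MS constants are just placeholders, not anything from the real program):

    #include <time.h>
    #include <unistd.h>
    #include <ftdi.h>

    /* Hypothetical placeholders -- not from the real program. */
    #define PACKET_BYTES    (64 * 1024)
    #define READ_TIMEOUT_MS 2000

    /* Emulate a D2XX-style blocking read on top of ftdi_read_data(),
     * which (as I understand it) returns immediately with whatever is
     * buffered, up to 'size' bytes.  Returns bytes read, or <0 on error. */
    static int read_blocking(struct ftdi_context *ftdi, unsigned char *buf, int wanted)
    {
        int total = 0;
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);

        while (total < wanted) {
            int n = ftdi_read_data(ftdi, buf + total, wanted - total);
            if (n < 0)
                return n;           /* libftdi/libusb error */
            total += n;

            clock_gettime(CLOCK_MONOTONIC, &now);
            long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000
                            + (now.tv_nsec - start.tv_nsec) / 1000000;
            if (elapsed_ms > READ_TIMEOUT_MS)
                break;              /* give up, like FT_SetTimeouts would */

            if (n == 0)
                usleep(1000);       /* nothing buffered yet; don't spin flat out */
        }
        return total;
    }

The caller would still send the start/stop streaming commands with ftdi_write_data() around this, just like the FT_Write calls in the original program. Is this roughly the intended usage pattern, or is there a better way?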
