[casper] DRAM confusion
Hi All,

I'm a little confused about addressing in the DRAM controller on a ROACH. If, for example, I set the address to 0 and toggle cmd_valid on then off (2 clk cycles), I receive (at some arbitrary time later) two 144-bit "words" (of which only 128 bits each are data).

Now suppose I fill up a section of the DRAM with a series of numbers from Python:

x = arange(1000, dtype="int32")
write_dram(x.tostring(), 0, True)

If I set the DRAM address to 0 in the controller and read out the first "128" bits I get [0,1,2,3]; when I read out the second "128" bits (clk cycle 2) I get [4,5,6,7]. If I then set the address to 1 the process continues: the next "144" bits are [8,9,10,11], and the next clk cycle gives [12,13,14,15]. This makes me think that each register is 256 (288) bits wide. The CPU interface, on the other hand, claims that the register is only 128 (144) bits wide. E.g. here is what I observe on the FPGA side versus what I would expect from the docs, given the DRAM values above:

address | fpga                       | cpu
0       | [0,1,2,3] [4,5,6,7]        | [0,1,2,3]
1       | [8,9,10,11] [12,13,14,15]  | [4,5,6,7]

I'm sure I'm missing something obvious here.

Ross

--
Ross Williamson
Research Scientist - Sub-mm Group
California Institute of Technology
626-395-2647 (office)
312-504-3051 (Cell)
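For concreteness, here is a minimal numpy sketch of the addressing arithmetic described above, assuming the observed behaviour (each DRAM address covering a 256-bit burst, i.e. two 128-bit data words or eight int32 values). The helper function is purely illustrative and is not part of the ROACH software:

import numpy as np

# Test vector written to the DRAM, as in the message above.
x = np.arange(1000, dtype="int32")

def words_at_address(addr, data=x):
    # Assumes each address covers a 256-bit burst: 2 x 128 data bits,
    # i.e. eight 32-bit integers per address (the behaviour observed above).
    start = addr * 8
    burst = data[start:start + 8]
    return burst[:4], burst[4:]   # first and second 128-bit data word

# Reproduces the observed pattern:
# address 0 -> [0, 1, 2, 3] then [4, 5, 6, 7]
# address 1 -> [8, 9, 10, 11] then [12, 13, 14, 15]
print(words_at_address(0))
print(words_at_address(1))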
Re: [casper] Linux Valon Synthesizer
Make sure you have the right permissions for the port, i.e. do:

ls -l /dev/ttyUSB0

and check that your user can read and write it. On a lot of systems the ttyUSB* devices are in the "dialout" group, so if you add your username to that group and log out and back in, you should be able to access it.

Glenn

On Thu, Dec 11, 2014 at 3:29 PM, Matías Vidal Valladares < matimetalvi...@hotmail.cl> wrote:

> Hi everyone,
>
> I'm trying to use the Linux Valon Synthesizer made by Patrick Brandt (link
> below), but I have problems with the port.
> I have the following error:
> SerialException: Could not configure port: (5, 'Input/output error')
>
> If I have a Roach plugged into ttyUSB0, what parameter should I use when I
> declare an object of type Synthesizer?
>
> I have Ubuntu 14.04, and I run the code with ipython.
>
> Link: https://github.com/nrao/ValonSynth
>
> Matías Vidal Valladares.
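If it helps, a small pyserial-based check (pyserial is the library raising the SerialException above) can confirm whether the port itself is accessible before constructing the Synthesizer object. This is only a diagnostic sketch, and the baud rate shown is a placeholder:

import grp, os
import serial  # pyserial

port = "/dev/ttyUSB0"

# Which group owns the device? On Ubuntu it is usually "dialout".
st = os.stat(port)
print("device group:", grp.getgrgid(st.st_gid).gr_name)

# If this open fails, fix permissions first (e.g. add your user to the
# dialout group with usermod, then log out and back in).
ser = serial.Serial(port, baudrate=9600, timeout=1)
print("opened", ser.name)
ser.close()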
[casper] Linux Valon Synthesizer
Hi everyone,

I'm trying to use the Linux Valon Synthesizer made by Patrick Brandt (link below), but I have problems with the port. I have the following error:

SerialException: Could not configure port: (5, 'Input/output error')

If I have a Roach plugged into ttyUSB0, what parameter should I use when I declare an object of type Synthesizer?

I have Ubuntu 14.04, and I run the code with ipython.

Link: https://github.com/nrao/ValonSynth

Matías Vidal Valladares.
Re: [casper] inverse PFB
Hey Laura, that technique sounds just fine. You're right that the fft_wideband_real block wouldn't do it for you in this case; you'd have to do a complex FFT. This would be pretty easy to stitch together from fft_biplex and fft_direct modules (consider how they are stitched together in the fft_wideband_real, replacing the fft_biplex_real blocks with twice as many fft_biplex blocks). For a long FFT (32k complex FFT), this would be very large and would consume a significant portion of the entire FPGA.

> It would be really neat if there was a dsp trick out there that used the
> wideband_real as an inverse, but we'd like to go with the simplest
> solution regardless.

I betcha you can do this (mathematically, not saying there's an actual block out there), if you are sure that your output signal is going to be real. There are similar techniques for performing an n-point DCT using an n-point FFT. But there's no guarantee of that, especially considering that we won't have the information at frequency pi (good catch, there).

Cheers!

--Ryan

On Thu, Dec 11, 2014 at 6:59 AM, Vertatschitsch, Laura E. < lvertatschit...@cfa.harvard.edu> wrote:

> Hi Ryan,
>
> I have used a method of similar simplicity that involves swapping the real
> and imaginary parts of samples before and after the fft, so a mathematical
> equivalent of multiplying by j after taking the conjugate of the samples.
> For that design I used the fft_direct block and operated only on 32
> incoming parallel samples.
>
> The issue is more that we aren't sure which fft block to place in that
> algorithm for the case Jonathan describes, or if there is a clever
> algorithm to use another block. We use the fft_wideband_real to generate
> half of the full fft, so 16k points coming out over many clock cycles.
> This block expects input data that is real and produces output data that
> is complex. It strikes me that this block will not natively slide into the
> real/imag-swap algorithm.
>
> We could obviously try and produce the full fft output from the data by
> flipping and concatenation (and find the value at pi?), but we are still
> left with complex data in need of an fft block that will accept it and
> perform a 32k point transform.
>
> Do others use such a block with success? It was suggested to me that the
> wideband real block was much more widely used than the other blocks, thus
> it is up to date, tested, and working.
>
> It would be really neat if there was a dsp trick out there that used the
> wideband_real as an inverse, but we'd like to go with the simplest
> solution regardless.
>
> -Laura
>
> On Thursday, December 11, 2014, Ryan Monroe wrote:
>
>> IIRC, an inverse FFT can be implemented as
>> 1. Complex conjugate
>> 2. FFT
>> 3. Complex conjugate
>>
>> which is mathematically identical, IIRC, to an IFFT, if slightly less
>> efficient computationally.
>>
>> In general, the output will not be real valued, of course.
>>
>> On Tue, Dec 9, 2014, 2:45 PM Jonathan Weintroub < jweintr...@cfa.harvard.edu> wrote:
>>
>>> Thanks to Richard and everyone who responded earlier for the comments,
>>> which in some cases are very detailed. It is good to know we are not
>>> the only ones worrying about this. Our DSP group is digesting the
>>> material and looking at options, and other followup will likely follow.
>>> I did not want to delay thanks and acknowledgment.
>>>
>>> One basic question which did come up is that it appears even an inverse
>>> FFT would present some challenges. We stuff the 32k forward FFT with
>>> real time series data and extract 16k complex frequency domain points.
>>> Might I ask if any CASPER folks have experience implementing an inverse
>>> FFT relevant to this case, as a real time FPGA bit code?
>>>
>>> Thanks again.
>>>
>>> Jonathan Weintroub
>>> SAO
>>>
>>> > On Dec 8, 2014, at 9:50 PM, Richard Shaw wrote:
>>> >
>>> > Hi,
>>> >
>>> > I thought I'd comment as this is a problem we've been having to deal
>>> > with recently for some VLBI observations. Fortunately we've had some
>>> > success with an offline least-squares inversion of the PFB. This is
>>> > probably not the scheme that you want, as it essentially operates on
>>> > the whole PFB'd timestream at once, so realistically you need a
>>> > cluster to do it. However, there is prototype code available here [1]
>>> > if it's useful.
>>> >
>>> > The rationale for doing this is that when you look at the whole PFB
>>> > timestream very little information is actually lost (essentially only
>>> > a few samples at the ends), though it may be spread across frequency
>>> > and time samples. For N PFB samples of length M, there are roughly
>>> > 2*N*M total numbers measured, which depend on 2*(N+P-1)*M numbers in
>>> > the underlying timestream (where P is the number of taps). As
>>> > typically P << N, there are very few unmeasured linear combinations,
>>> > and so a statistical inversion can be pretty accurate. Fortunately it
>>> > turns out this inversion can also be done pretty efficiently.
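As an offline sanity check of the conjugate / forward-FFT / conjugate recipe quoted above, the identity can be verified in a few lines of numpy (this says nothing about FPGA resource usage; it only checks the math):

import numpy as np

N = 32768
X = np.random.randn(N) + 1j * np.random.randn(N)   # arbitrary complex spectrum

# conjugate -> forward FFT -> conjugate, then scale by 1/N
via_forward = np.conj(np.fft.fft(np.conj(X))) / N
reference = np.fft.ifft(X)

print(np.allclose(via_forward, reference))   # True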
Re: [casper] inverse PFB
Hey Laura,

Have you tried the vanilla complex 'fft' block? If you generate the full spectra including negative frequencies before inputting (I think there's a mirror_spectrum block that might do this(?)), I would have thought you could add two input streams together, as streamA + j*streamB. Since the FFT of either stream should give you a real output, you'll get the two independent FFTs in the real and imag outputs of the FFT block.

I'm relatively confident that the complex fft block works -- all the ffts are pretty much the same under the hood, aside from some data scrambulation, and I trust Andrew :)

(Caveat: I haven't thought about this for very long, and I'm a little distracted by the Frozen soundtrack right now, so what I said might be nonsense.)

Jack

On Thu Dec 11 2014 at 14:59:46 Vertatschitsch, Laura E. < lvertatschit...@cfa.harvard.edu> wrote:

> Hi Ryan,
>
> I have used a method of similar simplicity that involves swapping the real
> and imaginary parts of samples before and after the fft, so a mathematical
> equivalent of multiplying by j after taking the conjugate of the samples.
> For that design I used the fft_direct block and operated only on 32
> incoming parallel samples.
>
> The issue is more that we aren't sure which fft block to place in that
> algorithm for the case Jonathan describes, or if there is a clever
> algorithm to use another block. We use the fft_wideband_real to generate
> half of the full fft, so 16k points coming out over many clock cycles.
> This block expects input data that is real and produces output data that
> is complex. It strikes me that this block will not natively slide into the
> real/imag-swap algorithm.
>
> We could obviously try and produce the full fft output from the data by
> flipping and concatenation (and find the value at pi?), but we are still
> left with complex data in need of an fft block that will accept it and
> perform a 32k point transform.
>
> Do others use such a block with success? It was suggested to me that the
> wideband real block was much more widely used than the other blocks, thus
> it is up to date, tested, and working.
>
> It would be really neat if there was a dsp trick out there that used the
> wideband_real as an inverse, but we'd like to go with the simplest
> solution regardless.
>
> -Laura
>
> On Thursday, December 11, 2014, Ryan Monroe wrote:
>
>> IIRC, an inverse FFT can be implemented as
>> 1. Complex conjugate
>> 2. FFT
>> 3. Complex conjugate
>>
>> which is mathematically identical, IIRC, to an IFFT, if slightly less
>> efficient computationally.
>>
>> In general, the output will not be real valued, of course.
>>
>> On Tue, Dec 9, 2014, 2:45 PM Jonathan Weintroub < jweintr...@cfa.harvard.edu> wrote:
>>
>>> Thanks to Richard and everyone who responded earlier for the comments,
>>> which in some cases are very detailed. It is good to know we are not
>>> the only ones worrying about this. Our DSP group is digesting the
>>> material and looking at options, and other followup will likely follow.
>>> I did not want to delay thanks and acknowledgment.
>>>
>>> One basic question which did come up is that it appears even an inverse
>>> FFT would present some challenges. We stuff the 32k forward FFT with
>>> real time series data and extract 16k complex frequency domain points.
>>> Might I ask if any CASPER folks have experience implementing an inverse
>>> FFT relevant to this case, as a real time FPGA bit code?
>>>
>>> Thanks again.
>>>
>>> Jonathan Weintroub
>>> SAO
>>>
>>> > On Dec 8, 2014, at 9:50 PM, Richard Shaw wrote:
>>> >
>>> > Hi,
>>> >
>>> > I thought I'd comment as this is a problem we've been having to deal
>>> > with recently for some VLBI observations. Fortunately we've had some
>>> > success with an offline least-squares inversion of the PFB. This is
>>> > probably not the scheme that you want, as it essentially operates on
>>> > the whole PFB'd timestream at once, so realistically you need a
>>> > cluster to do it. However, there is prototype code available here [1]
>>> > if it's useful.
>>> >
>>> > The rationale for doing this is that when you look at the whole PFB
>>> > timestream very little information is actually lost (essentially only
>>> > a few samples at the ends), though it may be spread across frequency
>>> > and time samples. For N PFB samples of length M, there are roughly
>>> > 2*N*M total numbers measured, which depend on 2*(N+P-1)*M numbers in
>>> > the underlying timestream (where P is the number of taps). As
>>> > typically P << N, there are very few unmeasured linear combinations,
>>> > and so a statistical inversion can be pretty accurate. Fortunately it
>>> > turns out this inversion can also be done pretty efficiently.
>>> >
>>> > The general scheme is this:
>>> >
>>> > 1. Inverse FFT to generate a pseudo-timestream.
>>> > 2. The coupling matrix between elements in this pseudo-timestream and
>>> >    the real timestream is sparse diagonal, and is trivially calculable
>>> >    from the window function.
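A quick numpy illustration of the packing trick Jack suggests: build full (conjugate-symmetric) spectra for two real streams, combine them as streamA + j*streamB, and a single complex inverse transform hands back both real time streams in its real and imaginary outputs. This is an offline check only; FPGA block names and data ordering are not addressed:

import numpy as np

N = 1024
a = np.random.randn(N)   # two independent real time streams
b = np.random.randn(N)

# Full spectra, conjugate-symmetric because a and b are real
A = np.fft.fft(a)
B = np.fft.fft(b)

# Pack both spectra into one complex stream and invert once
y = np.fft.ifft(A + 1j * B)

print(np.allclose(y.real, a))   # True: real part recovers stream a
print(np.allclose(y.imag, b))   # True: imag part recovers stream b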
[casper] inverse PFB
Hi Ryan,

I have used a method of similar simplicity that involves swapping the real and imaginary parts of samples before and after the fft, so a mathematical equivalent of multiplying by j after taking the conjugate of the samples. For that design I used the fft_direct block and operated only on 32 incoming parallel samples.

The issue is more that we aren't sure which fft block to place in that algorithm for the case Jonathan describes, or if there is a clever algorithm to use another block. We use the fft_wideband_real to generate half of the full fft, so 16k points coming out over many clock cycles. This block expects input data that is real and produces output data that is complex. It strikes me that this block will not natively slide into the real/imag-swap algorithm.

We could obviously try and produce the full fft output from the data by flipping and concatenation (and find the value at pi?), but we are still left with complex data in need of an fft block that will accept it and perform a 32k point transform.

Do others use such a block with success? It was suggested to me that the wideband real block was much more widely used than the other blocks, thus it is up to date, tested, and working.

It would be really neat if there was a dsp trick out there that used the wideband_real as an inverse, but we'd like to go with the simplest solution regardless.

-Laura

On Thursday, December 11, 2014, Ryan Monroe wrote:

> IIRC, an inverse FFT can be implemented as
> 1. Complex conjugate
> 2. FFT
> 3. Complex conjugate
>
> which is mathematically identical, IIRC, to an IFFT, if slightly less
> efficient computationally.
>
> In general, the output will not be real valued, of course.
>
> On Tue, Dec 9, 2014, 2:45 PM Jonathan Weintroub < jweintr...@cfa.harvard.edu> wrote:
>
>> Thanks to Richard and everyone who responded earlier for the comments,
>> which in some cases are very detailed. It is good to know we are not
>> the only ones worrying about this. Our DSP group is digesting the
>> material and looking at options, and other followup will likely follow.
>> I did not want to delay thanks and acknowledgment.
>>
>> One basic question which did come up is that it appears even an inverse
>> FFT would present some challenges. We stuff the 32k forward FFT with
>> real time series data and extract 16k complex frequency domain points.
>> Might I ask if any CASPER folks have experience implementing an inverse
>> FFT relevant to this case, as a real time FPGA bit code?
>>
>> Thanks again.
>>
>> Jonathan Weintroub
>> SAO
>>
>> > On Dec 8, 2014, at 9:50 PM, Richard Shaw wrote:
>> >
>> > Hi,
>> >
>> > I thought I'd comment as this is a problem we've been having to deal
>> > with recently for some VLBI observations. Fortunately we've had some
>> > success with an offline least-squares inversion of the PFB. This is
>> > probably not the scheme that you want, as it essentially operates on
>> > the whole PFB'd timestream at once, so realistically you need a
>> > cluster to do it. However, there is prototype code available here [1]
>> > if it's useful.
>> >
>> > The rationale for doing this is that when you look at the whole PFB
>> > timestream very little information is actually lost (essentially only
>> > a few samples at the ends), though it may be spread across frequency
>> > and time samples. For N PFB samples of length M, there are roughly
>> > 2*N*M total numbers measured, which depend on 2*(N+P-1)*M numbers in
>> > the underlying timestream (where P is the number of taps). As
>> > typically P << N, there are very few unmeasured linear combinations,
>> > and so a statistical inversion can be pretty accurate. Fortunately it
>> > turns out this inversion can also be done pretty efficiently.
>> >
>> > The general scheme is this:
>> >
>> > 1. Inverse FFT to generate a pseudo-timestream.
>> > 2. The coupling matrix between elements in this pseudo-timestream and
>> >    the real timestream is sparse diagonal, and is trivially calculable
>> >    from the window function.
>> > 3. Perform a shuffle on the timestream to turn this into a series of
>> >    band diagonal matrices (bandwidth ~ 2*P).
>> > 4. Use a band diagonal least-squares solve to invert the
>> >    pseudo-timestream back to the underlying timestream.
>> >
>> > A fuller description is here [2].
>> >
>> > The complexity is O(N), and as the inversion breaks into blocks it
>> > parallelises pretty trivially up to M processes (where M is the number
>> > of samples in the window function).
>> >
>> > We did look at some iterative ways that step through the PFB
>> > timestream, but they seem to accumulate errors as they go, and become
>> > horribly inaccurate very quickly. This avoids it by treating the whole
>> > timestream at once. Your accuracy improves the longer the length you
>> > use at once.
>> >
>> > Juan Mena Parra and Kevin Bandura (cc'd) have also been looking at
>> > what would need to change about the PFB to make it more easily
>> > invertible in a streaming
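For completeness, the real/imag-swap trick Laura describes at the top of this message (swap real and imaginary parts before and after a forward FFT, equivalent to multiplying by j after conjugating) can be checked offline with numpy. This only verifies the identity, not an FPGA implementation:

import numpy as np

def swap(z):
    # a + jb -> b + ja, i.e. j * conj(z)
    return z.imag + 1j * z.real

N = 4096
X = np.random.randn(N) + 1j * np.random.randn(N)   # arbitrary complex spectrum

# swap -> forward FFT -> swap reproduces the inverse FFT up to a factor of N
via_swap = swap(np.fft.fft(swap(X))) / N
reference = np.fft.ifft(X)

print(np.allclose(via_swap, reference))   # True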