[casper] CASPER Workshop 2014 -- Registration now open!

2014-04-08 Thread Jack Hickish
Dear Casperites,

Just a quick email to say that the CASPER wiki has been updated with
some new details about the upcoming workshop. Check it out at
https://casper.berkeley.edu/wiki/Workshop_2014

The important details --

Where: University of California, Berkeley
Registration Closes: April 29, 2014
Abstracts Due: April 29, 2014
Meeting dates: June 9th (9am) through June 13th, 2014 (1pm)
Cost: $125 for students, $250 for everyone else

The registration page is now live at
https://www.regonline.com/casperworkshop_1535981 . Here you can
register for the workshop and submit abstracts for posters/talks.

If you have any problems with registration, or have any questions
about the workshop, drop an email to casperworks...@ssl.berkeley.edu

(Any suggestions for tutorials / demonstrations / things you'd really
like to see at this year's meeting are also gratefully received).

Thanks, and hope to see you all there,

Jack



Re: [casper] Problem writing to DRAM, ROACH 1

2014-04-08 Thread Madden, Timothy J.
Thanks Glenn

I like the idea of copying from the file system. I am not NFS mounted, but I
can scp.

I got this to work by splitting the data into smaller writes and using the
offset= setting. It is slow, but at least it seems to work. Thanks for the
email.



def big_write(self, data):
    # write to DRAM in 64 kB chunks to stay under the katcp timeout
    chunksize = 1024 * 64
    for st in range(0, len(data), chunksize):
        ed = min(st + chunksize, len(data))  # also covers a partial final chunk
        self.roach.write('dram_memory', data[st:ed], offset=st)
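
For reference, a minimal sketch of how such a chunked write might be driven
end to end (the FpgaClient construction and the 2 MB buffer size come from
the original post in this thread; the one-second settle delay and the numpy
zero buffer are illustrative assumptions):

import time
import numpy as np
import corr

roach = corr.katcp_wrapper.FpgaClient('192.168.0.67', 7147, timeout=60)
time.sleep(1)  # give the asynchronous client a moment to connect

# a 2 MB zero buffer -- the size that originally timed out
data = np.zeros(2097152, dtype=np.uint8).tostring()

chunksize = 1024 * 64
for st in range(0, len(data), chunksize):
    roach.write('dram_memory', data[st:st + chunksize], offset=st)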



Re: [casper] Problem writing to DRAM, ROACH 1

2014-04-08 Thread G Jones
Hi,

I think I ran into similar issues, but I don't remember it being a
consistent failure for a given size, just that large transfers were
somewhat unreliable.
I used code like this:
def _load_dram_katcp(self, data, tries=2):
    while tries > 0:
        try:
            self._pause_dram()
            self.r.write_dram(data.tostring())
            self._unpause_dram()
            return
        except Exception, e:
            print "failure writing to dram, trying again"
            # print e
            tries = tries - 1
    raise Exception("Writing to dram failed!")

to help deal with such problems. But then I found I got more speed by
generating the data as a file on the file system (since I was running the
code on the same machine that hosts the ROACH NFS file system, I could
write the data to e.g. /srv/roach_boot/etch/boffiles/dram.bin) and then
using the Linux command dd on the ROACH to write the data to DRAM. The code
looks like:

def _load_dram_ssh(self, data, offset_bytes=0,
                   roach_root='/srv/roach_boot/etch',
                   datafile='boffiles/dram.bin'):
    offset_blocks = offset_bytes / 512  # dd uses 512-byte blocks by default
    self._update_bof_pid()
    self._pause_dram()
    # write the data into the ROACH's NFS-exported filesystem...
    data.tofile(os.path.join(roach_root, datafile))
    # ...then dd it into the DRAM device of the running bof process
    dram_file = '/proc/%d/hw/ioreg/dram_memory' % self.bof_pid
    datafile = '/' + datafile
    result = borph_utils.check_output(
        'ssh root@%s "dd seek=%d if=%s of=%s"'
        % (self.roachip, offset_blocks, datafile, dram_file),
        shell=True)
    print result
    self._unpause_dram()

This seems to work pretty well.
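
One way to sanity-check either method is to read a small slice back and
compare it against the source buffer. A minimal sketch, assuming corr's
read_dram counterpart to the write_dram call above and a numpy source array:

def verify_dram(r, data, nbytes=4096, offset=0):
    # read a small window back from DRAM and compare it with what we
    # intended to write
    readback = r.read_dram(nbytes, offset)
    return readback == data.tostring()[offset:offset + nbytes]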

Glenn



[casper] Problem writing to DRAM, ROACH 1

2014-04-08 Thread Madden, Timothy J.


I am using a DRAM block on a ROACH 1.

I am using Python to write data to the DRAM with corr.

I create a binary array of zeros like:

fa.lut_binaryIQ = '\x00\x00\x00\x00\x00\x00.'

Length of the array is 1048576.

If I do
roach.write('dram_memory',fa.lut_binaryIQ)

It works fine.

If I double the length of the binary array, where len(fa.lut_binaryIQ)=2097152

Then I do
roach.write('dram_memory',fa.lut_binaryIQ)

I get a timeout error:
RuntimeError: Request write timed out after 20 seconds.


I have tried longer and longer timeouts, up to 60 seconds, and still get no
good result. I set the timeout with:
roach = corr.katcp_wrapper.FpgaClient('192.168.0.67', 7147,timeout=60)


Any ideas? It seems there is a 1 MB length limit on writes to my DRAM.

Tim





[casper] High Performance Signal Processing Conference, 28th - 31st October, Malta

2014-04-08 Thread Alessio Magro
Dear Casperites,

I would like to inform you that we will be organizing a conference on High
Performance Signal Processing, to be held in Malta between the 28th and 31st
of October 2014. The idea of the conference is to bring the big science
projects together to identify common platforms for carrying out the signal
processing that their science requires. We already have strong interest from
SKA, CTA, CERN, XFEL and other big European experiments, as well as a strong
presence from industry.

You can find more details on the conference website.

Please circulate this email to any colleagues who might be interested in
attending, and do not hesitate to contact me should you have any questions,
comments or suggestions.

Best Regards,
Alessio