On Thu, Jun 11, 2009 at 11:54 AM, bruce.lab...@autoliv.com wrote:
Curiously, if I allocate more memory but I constrain the problem to fit in
RAM (i.e. run a smaller problem), the program always runs to completion.
On Linux, malloc()'ing memory doesn't commit memory pages (RAM or
swap). Until
On Thu, Jun 11, 2009 at 7:30 PM, Bruce
Labitt bruce.lab...@myfairpoint.net wrote:
Using netperf I think I got 770e6 bps write rates, but for considerably
smaller file sizes. That is about twice the rate - but for a file at least
10 times smaller. Something about these big file writes that
I have a simulation program I have written in C (a little C++ is in there
too) that computes ambiguity functions. When I run large sims, the system
runs out of memory. The platform is an IBM QS22 blade running
YellowDogLinux6.1-64 (RH-like, for PPCs) with 32GB RAM. The QS22 has two
enhanced
On Thu, Jun 11, 2009 at 11:54 AM, bruce.lab...@autoliv.com wrote:
Anyways, the program seems to run out of memory after processing many
blocks. So either there is a memory leak, or something else going on.
Any suggestions?
valgrind
It's what's for dinner. :-D
In both cases, if I use free
bruce.lab...@autoliv.com writes:
Anyways, the program seems to run out of memory after processing many
blocks. So either there is a memory leak, or something else going on.
Any suggestions?
...
Any good memory tracking tools? I have used valgrind but not gained much
insight. Must be
On Thu, Jun 11, 2009 at 11:54 AM, bruce.lab...@autoliv.com wrote:
Any good memory tracking tools? I have used valgrind but not gained much
insight. Must be operator error...
I was reading a discussion on Slashdot recently. Most posts said Purify
($$$) was the best option out there. Lots
Thomas Charron twaf...@gmail.com wrote on 06/11/2009 12:30:16 PM:
valgrind
It's what's for dinner. :-D
What tests did you perform using valgrind? The 'simple' running of
it will just look at things like memory leaks; however, if you're
cleaning up after you run, it won't always see
I notice that there is no swap listed. Umm, how does one add swap to an
NFS-based system?
NFS swap of course ;-)
Have any good references on NFS swap?
To see if swap is even going to help you, you might:
# Create an empty 1Gb file and enable it as swap
dd if=/dev/zero of=/someNFSdirectory/mySwapFile bs=1M count=1024
mkswap /someNFSdirectory/mySwapFile
swapon /someNFSdirectory/mySwapFile
On Thu, Jun 11, 2009 at 1:22 PM, bruce.lab...@autoliv.com wrote:
Thomas Charron twaf...@gmail.com wrote on 06/11/2009 12:30:16 PM:
Another trick I've used when dealing with massive amounts of data is
to use 'fast' media, aka, flash instead of a hard drive. Memory
mapping to this sort of
I wrote:
If no joy, just delete that swapFile,
Yikes! I hope it was obvious but I forgot to say that you should:
swapoff /someNFSdirectory/mySwapFile
...before deleting it.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
On Thu, Jun 11, 2009 at 1:12 PM, bruce.lab...@autoliv.com wrote:
I notice that there is no swap listed. Umm, how does one add swap to an
NFS-based system?
NFS swap of course ;-)
Have any good references on NFS swap?
The O'Reilly NFS and NIS book has a section on diskless clients. pg
On Thu, Jun 11, 2009 at 3:10 PM, Jarod Wilson ja...@wilsonet.com wrote:
On Jun 11, 2009, at 1:39 PM, Tom Buskey wrote:
Typically, no. USB sucks horribly for disk I/O.
Mostly depends on what you're talking about. And the quality of the
USB disk/host controller.
USB 2.0 is 480 Mbit/s which is
On Thu, 2009-06-11 at 13:41 -0400, bruce.lab...@autoliv.com wrote:
I bet it won't do 480Mbits sustained.
(For a 10 GB file write) However, I just might try it for the heck of
it!
I use external USB and firewire drives fairly regularly. USB does not
come close to achieving 48 MB/sec.
On Jun 11, 2009, at 3:23 PM, Thomas Charron wrote:
On Thu, Jun 11, 2009 at 3:10 PM, Jarod Wilson ja...@wilsonet.com
wrote:
On Jun 11, 2009, at 1:39 PM, Tom Buskey wrote:
Typically, no. USB sucks horribly for disk I/O.
Mostly depends on what you're talking about.
Yeah, I left out a
On Thu, Jun 11, 2009 at 4:50 PM, Thomas Charron twaf...@gmail.com wrote:
There are SO many variables in the case of USB, you can't just make a
blanket statement that USB doesn't come close to achieving it.
s/USB/anything/
In benchmarks I've seen, FireWire tends to do somewhat better than
USB, and
But all this doesn't help the OP at all. The OP should benchmark
the performance of what he's got. It doesn't really matter if X could
be faster if his particular X is slow.
-- Ben
OP here... That 45 MB/sec file write rate was real-world, using fwrite
in C over a network. The average