At 04:47 AM 20/04/2005, Claus Guttesen wrote:
> elin% dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 21.373114 secs (50237968 bytes/sec)
>
Follow-up, did the same dd on a Dell 2850 with an LSI Logic (amr), 6
scsi-disks in a raid 5:
frodo~%>dd if=/dev/ze
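For reference, dd's reported rate is just total bytes over elapsed time; a quick awk one-liner reproduces the figure quoted above (both numbers taken from the dd output in the thread):

```shell
# Recompute dd's throughput: 1073741824 bytes in 21.373114 seconds
awk -v b=1073741824 -v s=21.373114 \
    'BEGIN { printf "%.0f bytes/sec (%.1f MiB/s)\n", b/s, b/s/1048576 }'
# prints: 50237968 bytes/sec (47.9 MiB/s)
```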
> You could use the atabeast to do two raid 5's, then use vinum to stripe those
> two.
I actually thought of that a while ago (unrelated to this). I read the
vinum page in the Handbook and assume it is still valid. I recall a
discussion regarding its renaming to gvinum, but don't see any mention
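For the archives, striping two controller-exported RAID 5 volumes with gvinum would look roughly like the sketch below. The device names (da0s1h/da1s1h) and the 512k stripe size are placeholders, not taken from the thread:

```
# Hypothetical gvinum configuration: stripe two hardware RAID 5 volumes
# (each presented by the controller as a single disk) into a RAID 50.
drive r5a device /dev/da0s1h
drive r5b device /dev/da1s1h
volume raid50
  plex org striped 512k
    sd length 0 drive r5a
    sd length 0 drive r5b
```

Feed a file like this to gvinum create, then newfs /dev/gvinum/raid50.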
Claus Guttesen wrote:
That's about what I expected. RAID 5 depends on fast XOR, so a slow processor
in a hardware RAID 5 box will slow you down a lot.
You should try taking the two RAID 5s (6 disks each) created on your original
controller and striping those together (RAID 50) - this should get you some better
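The XOR dependence is easy to illustrate in miniature; a toy sketch (the byte values are arbitrary, not from any real array):

```shell
# RAID 5 parity in miniature: parity = XOR of the data blocks, and any
# single lost block is rebuilt by XORing the survivors with the parity.
d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0x0F ))
parity=$(( d1 ^ d2 ^ d3 ))
rebuilt_d2=$(( d1 ^ d3 ^ parity ))
printf 'parity=0x%02X rebuilt_d2=0x%02X\n' "$parity" "$rebuilt_d2"
# prints: parity=0x96 rebuilt_d2=0x3C
```

A hardware controller does this per stripe on every write, which is why a slow XOR engine caps RAID 5 write throughput.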
> >> I think you are disk bound.. You should not be disk bound at this point
> >> with a good RAID controller..
> > Good point, it's an atabeast from nexsan.
> Looks like they are indeed waiting on disk. You could try making two 6-disk
> RAID 5s in your controller, then striping those with vinum.
> What state is nfsd in? Can you send the output of this:
> ps -auxw|grep nfsd
> while the server is slammed?
elin~%>ps -auxw|grep nfsd
root   378  3,7  0,0  1412  732  ??  D  Tor07am  4:08,82  nfsd: server (nfsd)
root   380  3,5  0,0  1412  732  ??  D  Tor07am  1:56,52  nfsd: server
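The leading D in the STAT column above is the uninterruptible disk-wait state, which is what points at the disks rather than the network. A small sampling loop makes the pattern easier to watch (the [n]fsd trick keeps awk from matching the pipeline itself; a sketch, not from the thread):

```shell
# Sample nfsd process states a few times; threads that stay in 'D' are
# blocked on disk I/O, not on the network.
for i in 1 2 3; do
    ps -axo pid,stat,comm | awk '/[n]fsd/'
    sleep 2
done
```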
> What does gstat look like on the server when you are doing this?
> Also - does a dd locally on the server give the same results? You should get
> about double that I would estimate locally direct to disk. What about a dd
> over NFS?
dd-command:
dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 count=1048576
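One caveat on the command itself: bs=1024 issues a million 1 KiB writes, so per-write overhead is part of what gets measured. A variant with the same total size but larger blocks is worth comparing (the 64 KiB figure is an arbitrary choice, not from the thread):

```shell
# Same 1 GiB total (65536 * 16384 bytes), written in 64 KiB blocks;
# fewer, larger writes usually report higher throughput over NFS.
dd if=/dev/zero of=/nfssrv/dd.tst bs=65536 count=16384
```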
> When you say 'ide->fiber' that could mean a lot of things. Is this a single
> drive, or a RAID subsystem?
Yes, I do read it differently now ;-)
It's a RAID 5 with 12 x 400 GB drives split into two volumes (I
performed the test on one of them).
regards
Claus
Claus Guttesen wrote:
Q:
Will I get better performance upgrading the server from dual PIII to dual Xeon?
A:
rsync is CPU intensive, so depending on how much CPU you were using for
this, you may or may not gain. How busy was the server during that time? Is
this to a single IDE disk? If so, you a
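To answer the "how busy was the server" question with a number, one rough approach is to total the %CPU of the rsync processes while a transfer runs (column 3 in this ps format; a sketch, not tied to any output in the thread):

```shell
# Sum %CPU across all running rsync processes; the [r] trick keeps the
# awk pattern from matching this pipeline itself.
ps -auxw | awk '/[r]sync/ { cpu += $3 } END { printf "rsync total CPU: %.1f%%\n", cpu }'
```

If that total hovers near one full CPU, faster processors should help; if it is low while transfers are slow, the bottleneck is elsewhere.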
Claus Guttesen wrote:
Hi.
Sorry for x-posting, but the thread was originally meant for
freebsd-stable and then a performance-related question slowly emerged
in the message ;-)
Inspired by the NFS benchmarks by Willem Jan Withagen I ran some
simple benchmarks against a FreeBSD 5.4 RC2 server. My seven clients
are RC1