Re: softraid - speed

2009-05-20 Thread Janne Johansson

Uwe Dippel wrote:
I tried again, setting up RAID1 on two U320 15k drives, as described in
softraid(4). Now I find the speed to be too slow. Writing to a single
file is kind of okay [everything below runs with pwd /mnt, which is the
softraid volume, /dev/sd3f]:


[..]


But a dump && restore of /usr is a tad sick:


[..cut...]


I can see at times that the amount of data transferred is huge; at
other times it moves in steps of 0.1-0.2 MB/s. Probably it is a problem
of the number of files, not of their size.


Any idea how to improve the performance?


For the generic "I restore/unpack a zillion files to a newfs:ed
partition" case, mount it async, which helps with the "number of files"
issue. If it fails in the middle you will want to restart the restore
anyhow, so you might just newfs it again in that case.
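
An untested sketch, using the device and mount point from your mail:

# umount /mnt
# mount -o async /dev/sd3f /mnt
# cd /mnt && dump -0ua -f - /dev/sd0f | restore rf -
# cd / && umount /mnt && mount /dev/sd3f /mnt

async batches the metadata updates, which is exactly where the
many-small-files case loses time; the trade-off is that a crash
mid-restore leaves the filesystem inconsistent, which is why you would
newfs and start over anyway.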




softraid - speed

2009-05-20 Thread Uwe Dippel
I tried again, setting up RAID1 on two U320 15k drives, as described in
softraid(4). Now I find the speed to be too slow. Writing to a single
file is kind of okay [everything below runs with pwd /mnt, which is the
softraid volume, /dev/sd3f]:

# bioctl sd3
Volume  Status   Size Device
softraid0 0 Online   299671585280 sd3 RAID1
  0 Online   299671585280 0:0.0   noencl 
  1 Online   299671585280 0:1.0   noencl 
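
(For reference, the volume was created the way softraid(4) describes;
the chunk partitions here are placeholders, since the post doesn't name
them:

# bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0

where -c 1 selects RAID 1; the new volume then gets disklabeled,
newfs'ed and mounted, here as /dev/sd3f on /mnt.)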

dump and restore is the task, and it is not fast:
  DUMP: Volume 1 took 0:00:07
  DUMP: Volume 1 transfer rate: 2147 KB/s
  DUMP: Date this dump completed:  Wed May 20 16:31:08 2009
  DUMP: Average transfer rate: 2147 KB/s
7 seconds for about 14 MB. But the data transfer itself is okay:
# dump -0ua -f testo /dev/sd0e
DUMP: Volume 1 took 0:00:01
  DUMP: Volume 1 transfer rate: 15039 KB/s
  DUMP: Date this dump completed:  Wed May 20 16:49:53 2009
  DUMP: Average transfer rate: 15039 KB/s
  DUMP: level 0 dump on Wed May 20 16:49:51 2009
It is the writing that takes the time:
# date && restore rf testo && date
Wed May 20 16:51:48 SGT 2009
Wed May 20 16:51:54 SGT 2009
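
(time(1), or the ksh time keyword, gives the same measurement in one
step and also separates CPU time from wall-clock time:

# time restore rf testo

If "real" is large while "user" and "sys" stay small, the time is spent
waiting on disk I/O, not in restore itself.)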

The raw speed is good:
# dd if=/dev/zero of=nonsense.img bs=1m count=5000
5000+0 records in
5000+0 records out
5242880000 bytes transferred in 100.534 secs (52149868 bytes/sec)
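
(The read side can be checked the same way. Reading from the raw
device bypasses the buffer cache, so the number is not inflated by
caching:

# dd if=/dev/rsd3f of=/dev/null bs=1m count=5000

Anything close to the write figure above means raw throughput is fine
in both directions.)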

But a dump && restore of /usr is a tad sick:
(/dev/sd0f   7.9G   2.4G   5.1G   32%   /usr)
# dump -0ua -f - /dev/sd0f | restore rf -
  DUMP: Date of this level 0 dump: Wed May 20 16:53:46 2009
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rsd0f (/usr) to standard output
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 2549189 tape blocks.
  DUMP: Volume 1 started at: Wed May 20 16:53:48 2009
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 4.42% done, finished in 3:48
  DUMP: 36.44% done, finished in 0:27
  DUMP: 40.42% done, finished in 0:30
  DUMP: 52.60% done, finished in 0:23
  DUMP: 64.08% done, finished in 0:17
  DUMP: 77.57% done, finished in 0:10
  DUMP: 92.19% done, finished in 0:03
  DUMP: 2717062 tape blocks
  DUMP: Date of this level 0 dump: Wed May 20 16:53:46 2009
  DUMP: Volume 1 completed at: Wed May 20 17:36:48 2009
  DUMP: Volume 1 took 0:43:00
  DUMP: Volume 1 transfer rate: 1053 KB/s
  DUMP: Date this dump completed:  Wed May 20 17:36:48 2009
  DUMP: Average transfer rate: 1053 KB/s
  DUMP: level 0 dump on Wed May 20 16:53:46 2009
  DUMP: DUMP IS DONE

The LEDs of the drives were more or less continuously on.

I also tried mounting with 'softdep', but that didn't make much of a
difference. When I run 'df -h' in another console, I can see at times
that the amount of data transferred is huge; at other times it moves in
steps of 0.1-0.2 MB/s. Probably it is a problem of the number of files,
not of their size.
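
(iostat shows per-drive throughput directly, which is easier to follow
than repeated df runs; sd1 and sd2 below stand in for the two chunk
disks, whatever they really are:

# iostat -w 1 sd1 sd2 sd3

systat iostat shows the same counters full-screen.)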


Any idea how to improve the performance?

Uwe