An interesting thing I just noticed while testing out some FireWire drives with 
OpenSolaris. 

Setup:
- OpenSolaris 2009.06 and a dev version (snv_129)
- 2 x 500GB FireWire 400 drives with integrated hubs for daisy-chaining (net: 4 
devices on the chain)
  - one SATA bridge
  - one PATA bridge

Created a zpool with both drives as simple (non-redundant) vdevs.
Started a zfs send/recv to back up a local filesystem.
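
For the record, the pool and the backup stream were set up roughly like this 
(the pool, dataset and device names below are just placeholders, not the exact 
ones on my system):

  # pool with both FireWire disks as plain, non-redundant vdevs
  zpool create fwpool c5t0d0 c6t0d0

  # snapshot the local filesystem and stream it into the new pool
  zfs snapshot tank/data@backup1
  zfs send tank/data@backup1 | zfs recv fwpool/data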

Watching zpool iostat, I see that the total throughput maxes out at about 
10MB/s.  Thinking that one of the drives might be at fault, I stopped, destroyed 
the pool and created two separate pools, one on each drive.  I restarted the 
send/recv to one disk and saw the same max throughput; tried the other and got 
the same thing.
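
Concretely, that test looked something like the following (again, "fw1", "fw2" 
and the device names are placeholders):

  # tear down the original pool and make one pool per disk
  zpool destroy fwpool
  zpool create fw1 c5t0d0
  zpool create fw2 c6t0d0

  # repeat the send/recv against a single disk and watch the bandwidth
  zfs send tank/data@backup1 | zfs recv fw1/data &
  zpool iostat fw1 5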

Then I started one send/recv to one disk and got the max right away; when I 
started a send/recv to the second disk it got about 4MB/s while the first 
operation dropped to about 6MB/s.
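
The parallel run was essentially this (the second dataset name is also just a 
placeholder):

  # start a stream to each single-disk pool
  zfs send tank/data@backup1  | zfs recv fw1/data  &
  zfs send tank/other@backup1 | zfs recv fw2/other &

  # watch both pools; together they never total much more than ~10MB/s
  zpool iostat fw1 fw2 5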

It would appear that the bus bandwidth is limited to about 10MB/s (~80Mbps), 
which is well below the theoretical 400Mbps that 1394 is supposed to be able to 
handle.  I know these two disks can go significantly faster, since I was seeing 
30MB/s when they were previously used on Macs in the same daisy-chain 
configuration.

I get the same symptoms on both the 2009.06 and the b129 machines.

It's not a critical issue to me since these drives will eventually just be used 
for send/recv backups over a slow link, but it doesn't augur well for the day I 
need to restore data...

Anyone else seen this behaviour with FireWire devices and OpenSolaris?

Erik