Lee Ward wrote:
On Wed, 2006-11-22 at 22:07 +0200, Oleg Drokin wrote:
Hello!
On Wed, Nov 22, 2006 at 02:33:23PM -0500, pauln wrote:
So for a meaningful comparison we should compare 10k clients file-per-process
with 5k clients shared-file. This "only" gives us a 2x difference, which is
still better than 4x.
Also, the stripe size is not specified; what was it set to?
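(For reference, the striping a test file actually received can be read back
from any client with lfs; the mount point and file name here are hypothetical:

  lfs getstripe /mnt/lustre/testfile

which prints the stripe count, stripe size, and the list of OST objects
backing the file.)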
Oleg, I'm not sure I entirely agree here. If every OSS was used in the
single-file test, then from the I/O hardware perspective the two runs are
functionally equivalent. They run 2 OSTs on each OSS (probably done for
capacity reasons, due to the small LUN size limit of ext3), so a test which
uses only half of the OSTs could still use every OSS processor, interconnect
link, and Fibre Channel link.
We use 2 OSTs per OSS in order to actively use both channels of the HBA
-- staying away from channel bonding.
So that limits (currently) your bandwidth to 50% of what is available.
Making an OST volume through LVM can give you full bandwidth, without
channel bonding. I think we have just established that using LVM for
RAID0 will not have an impact on performance.
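(A minimal sketch of that LVM approach, assuming one LUN per HBA channel
-- the device names and sizes here are made up:

  # one physical volume per HBA channel
  pvcreate /dev/sdb /dev/sdc
  vgcreate ostvg /dev/sdb /dev/sdc
  # stripe the logical volume across both PVs, i.e. both channels
  lvcreate -i 2 -I 1024 -L 2048G -n ost0 ostvg

The resulting /dev/ostvg/ost0 is then formatted as a single OST, so every
large I/O to it spans both channels without any channel bonding.)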
I have no such information. My assumption was that every OST had its own disk
backend, but even if not, we have no idea in what order the OSTs were created.
If they were created sequentially, so that oss0 has ost0 & ost1 and so on,
then what I describe is still correct.
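(Concretely, with that creation order and 2 OSTs per OSS the mapping is:

  oss0: ost0 ost1
  oss1: ost2 ost3
  oss2: ost4 ost5
  ...

so a file striped over only the first half of the OST list lands entirely on
the first half of the OSSes, leaving the others idle.)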
If somehow the number of physical disk backends & OSSes used in both cases is
the same, other things might have come into play too, like fewer spindles
available to the underlying disk backend due to partitioning?
They are partitioned. However, during this test the other file systems
were idle.
Each channel on an HBA does, effectively, have its own string of disk
-- DDN calls them "tiers". There are 4 OSSes per DDN "couplet" (controller
pair), and 8 "tiers" per DDN, so each OSS drives two tiers, one per HBA
channel -- matching the 2 OSTs per OSS.
Lustre is configured so that two consecutive OSTs would be on the same
OSS.
Bye,
Oleg
_______________________________________________
Lustre-devel mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-devel