future hardware

2006-10-21 Thread Dan
I have been using an older 64-bit Socket 754 system for a while now.  It
has the old 33 MHz PCI bus.  I have two low-cost (no HW RAID) PCI SATA I
cards, each with 4 ports, giving me an eight-disk RAID 6.  I also have a
gigabit NIC on the PCI bus, and gigabit switches with clients connecting
at gigabit speed.

As many know, that PCI bus peaks at 133 MB/s, or 1,064 Mb/s:
http://en.wikipedia.org/wiki/Peripheral_Component_Interconnect
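
(That figure is just the bus clock times the bus width: 33 MHz x 4 bytes
= 133 MB/s, which is about 1,064 Mb/s - and it is shared by every device
on the bus, NIC included.)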

The transfer rate across the network is not bad, but my bottleneck is the
PCI bus.  I have been shopping around for a new motherboard and
PCI-Express cards.  I have been using mdadm for a long time and would
like to stay with it.  I am having trouble finding an eight-port
PCI-Express card that does not have all the fancy HW RAID, which jacks up
the cost.  I am now considering a motherboard with eight onboard SATA II
ports: the GIGABYTE GA-M59SLI-S5, Socket AM2, NVIDIA nForce 590 SLI MCP,
ATX.

What are other mdadm users doing with PCI-Express cards?  What is the
most cost-effective solution?




Re: why partition arrays?

2006-10-21 Thread Henrik Holst
Bodo Thiesen wrote:
> Ken Walker <[EMAIL PROTECTED]> wrote:
> 
>> Is LVM stable, or can it cause more problems than separate raids on an array?

[description of street smart raid setup]

(The same function could probably be achieved with logical partitions
and ordinary software RAID levels.)

> So, decide for yourself whether you consider LVM stable - I would ;)
> 
> Regards, Bodo

Have you lost any disc (i.e. a "physical volume") since February?  Or
lost the meta-data?

I would not recommend LVM to anyone who is less than an expert on Linux
systems.  Setting up an LVM system is easy; administering and salvaging
one is much more work.  (I used it ~3 years ago.)

/Henrik Holst


Re: Propose of enhancement of raid1 driver

2006-10-21 Thread Tomasz Chmielewski

Neil Brown wrote:

> On Tuesday October 17, [EMAIL PROTECTED] wrote:
>
>> I would like to propose an enhancement of the RAID 1 driver in the
>> Linux kernel.  The enhancement would be a speedup of data reads on
>> mirrored partitions.  The idea is simple: if we have a mirrored
>> partition over 2 disks, and the disks are in sync, the data can be
>> read from both disks simultaneously, in the same way as in RAID 0 -
>> chunk 1 read from the master while chunk 2 is read from the slave.
>> As a result it would give a significant speedup of read operations
>> (comparable with the speed of RAID 0 disks).
>
> This is not as easy as it sounds.  Skipping over blocks within a track
> is no faster than reading the blocks in the track, so you would need to
> make sure that your chunk size is larger than one track - probably it
> would need to be several tracks.

What you said is certainly true when we read one file at a given moment.

What if we read two different files at the same time?  Certainly, it
would be faster if DRIVE_1 reads FILE_1 and DRIVE_2 reads FILE_2.

> Raid1 already does some read-balancing, though it is possible (even
> likely) that it doesn't balance very effectively.  Working out how best
> to do the balancing in general is a non-trivial task, but it would be
> worth spending time on.

Probably what I said before isn't quite correct, as RAID-1 has no idea of
the filesystem that is on top of it; rather, it will just see attempts to
access different areas of the array.
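
As an aside: the md raid10 personality can already deliver RAID0-like
sequential reads from two mirrored disks via its "far" layout.  A rough
sketch, with made-up device names:

  # Two-disk "mirror that reads like a stripe": raid10 layout f2 keeps
  # two copies of every block, arranged so that sequential reads stripe
  # across both disks.
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
        /dev/sdb1 /dev/sdc1

  # Crude check of concurrent-read balancing on an existing RAID1:
  # start two sequential readers, then watch the per-disk utilisation.
  dd if=/mnt/raid/FILE_1 of=/dev/null bs=1M &
  dd if=/mnt/raid/FILE_2 of=/dev/null bs=1M &
  iostat -x 1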




--
Tomasz Chmielewski
http://wpkg.org



Re: future hardware

2006-10-21 Thread Justin Piszcz


On Sat, 21 Oct 2006, Dan wrote:

> [...]
> 
> What are other mdadm users doing with PCI-Express cards?  What is the
> most cost-effective solution?

Read this:

http://www.anandtech.com/IT/showdoc.aspx?i=2859

I have a setup similar to yours: 6 IDE ATA/100 + 2 SATA/150 (all 400GB)
in an mdadm RAID5.  It works well, but unfortunately it maxes out the PCI
bus.  At some point I am going to do what you did and get 2 x SATA PCI-e
cards, SiL 3114 perhaps.  Or, after reading that article, maybe consider
SAS..?

Justin.


Re: why partition arrays?

2006-10-21 Thread Bodo Thiesen
Henrik Holst <[EMAIL PROTECTED]> wrote:

> Have you lost any disc (i.e. "physical volumes") since February?

In fact, we have.  One disc failed; we removed it, bought a new disc, set
it up like the other ones and finally added it to the degraded RAID
array.  Now everything is fine again ;)

> Or lost the meta-data?

LVM meta-data or RAID meta-data?

LVM: In this case we may really get a problem.  But on the other hand:
how would you manage to lose that meta-data?

RAID6: There we would be quite safe, as we have all the LVs for raid1
called raid?1 and so on.  So if it wouldn't assemble anymore, we could
still fall back to just recreating it.

> I would not recommend anyone to use LVM if they are less than experts on
> Linux systems.

LVM is so easy to use that I'd recommend anyone use LVM instead of
creating partitions etc., because I think repartitioning a disc is MUCH
MORE error-prone than just creating a new LV.  And you can do things like
enlarging an LV, which you can't do with partitions.
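
For example (a sketch - the volume group and LV names are made up, and
the resize step assumes an ext3 filesystem):

  # Carve a new LV out of free space in volume group vg0:
  lvcreate -L 50G -n data vg0

  # Later, grow it, then grow the filesystem on top of it:
  lvextend -L +20G /dev/vg0/data
  resize2fs /dev/vg0/data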

> Setting up an LVM system is easy; administering and salvaging one is
> much more work.  (I used it ~3 years ago.)

Was that LVM on RAID? I'm talking about LVM on RAID.

Regards, Bodo


RAID5 Recovery

2006-10-21 Thread Neil Cavan
Hi,

I had a run-in with the Ubuntu Server installer: in trying to get the new
system to recognize the clean 5-disk RAID5 array left behind by the
previous Ubuntu system, I think I inadvertently instructed it to create a
new RAID array using those same partitions.

What I know for sure is that now, I get this:

[EMAIL PROTECTED]:~$ sudo mdadm --examine /dev/hda1
mdadm: No super block found on /dev/hda1 (Expected magic a92b4efc, got )
[EMAIL PROTECTED]:~$ sudo mdadm --examine /dev/hdc1
mdadm: No super block found on /dev/hdc1 (Expected magic a92b4efc, got )
[EMAIL PROTECTED]:~$ sudo mdadm --examine /dev/hde1
mdadm: No super block found on /dev/hde1 (Expected magic a92b4efc, got )
[EMAIL PROTECTED]:~$ sudo mdadm --examine /dev/hdg1
mdadm: No super block found on /dev/hdg1 (Expected magic a92b4efc, got )
[EMAIL PROTECTED]:~$ sudo mdadm --examine /dev/hdi1
mdadm: No super block found on /dev/hdi1 (Expected magic a92b4efc, got )

I didn't format the partitions or write any data to the disk, so I think
the array's data should be intact. Is there a way to recreate the
superblocks, or am I hosed?
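
From what I've gathered so far, the usual last-resort answer is that
mdadm --create with --assume-clean rewrites only the superblocks and
leaves the data alone, provided the level, device order, and chunk size
all match the original array exactly.  Something like this (the order and
chunk size here are guesses on my part):

  mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=64 \
        --assume-clean /dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdg1 /dev/hdi1

followed by a read-only mount to verify the data before trusting it.
Can anyone confirm?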

Thanks,
Neil



Re: future hardware

2006-10-21 Thread Richard Scobie

Dan wrote:

> What are other mdadm users doing with PCI-Express cards?  What is the
> most cost-effective solution?


I have been successfully using a pair of Addonics AD2SA3GPX1 cards, with
4 x 500GB drives in a RAID0 stacked on top of a pair of RAID1s.
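
Roughly like this, if it helps anyone - the device names are illustrative
rather than my actual ones:

  # Two RAID1 pairs:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

  # RAID0 stacked on top of the two mirrors (i.e. RAID1+0):
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1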

The cards are cheap and use the sil24 driver, which seems to be one of
the better-supported ones.

Performance is good - read/write speeds of 140MB/s in bonnie++, as I recall.

Regards,

Richard



Re: future hardware

2006-10-21 Thread Mike Hardy


Justin Piszcz wrote:

> cards perhaps.  Or, after reading that article, consider SAS maybe..?


I hate to be the guy who breaks out the unsubstantiated anecdotal
evidence, but I've got a RAID10 with 4 x 300GB Maxtor SAS drives, and two
of them have already triggered their internal SMART "I'm about to fail"
message.
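
If anyone wants to poll that state on their own drives, smartmontools
will report it; the device name here is just an example:

  smartctl -H /dev/sda    # overall health self-assessment
  smartctl -a /dev/sda    # full attributes and error log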

They've been in service now for around 2 months, they run at an okay
temperature, and I have not been beating the crap out of them.

More than a little disappointing.

They are fast though...

-Mike