r
umount).
Is there any way to do this with raidtools 0.9?
If not, are there any plans to implement the ability to detach/attach
mirror devices on the fly (a la Sun's DiskSuite)?
TIA,
Tom
--
Tom Regan, Operations Manager Email: [EMAIL PROTECTED]
NSW Agriculture    Phone: 0
noticeable speed improvement.
> I am assuming the ide patches is in place of the raid0145-19990824-2.2.11
> patch and not in addition to.
It is in addition to. It's not RAID for ide, it's just additional ide
support... therefore you need the raid code as well.
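For what it's worth, one workable order looks something like this (the ide patch file name below is only a placeholder, not the real name):
    cd /usr/src/linux
    # raid patch first (the name mentioned above), then the ide patch on top
    patch -p1 < ../raid0145-19990824-2.2.11.patch
    patch -p1 < ../ide.2.2.11.patch     # placeholder name
    make menuconfig                     # enable both RAID and the extra IDE options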
tom
of the 0.90 raid code. Do realize, however, that he is very busy
and he has many responsibilities in other sections of the code as well.
tom
enefit from a second
drive.
tom
size you should be able to see greater than a single drive's speed,
though not 2x.
tom
ly way to build in degraded mode, so you will need to upgrade
if you want to accomplish this.
tom
> Linux-RAID mailing list archive: http://www.linuxhq.com/lnxlists/.
> This link has no archive for this list. Is there a searchable archive
> somewhere that I can look through before I ask unnecessary questions?
http://www.kernelnotes.org/lnxlists/
tom
Marc Huber:
> These tools don't seem to be included with the raidtools snapshots.
> Where do I get them from?
They are created when you do a make install; make creates symlinks with those
names pointing back to the main program that handles these functions. You can
do it by hand as well.
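A rough sketch of the by-hand version (which binary is the real link target varies, so check what make install created on your system; the names here are only examples):
    cd /sbin
    # each tool name is a symlink back to one main binary, which looks
    # at the name it was invoked under to decide what to do
    ln -s raidstart raidstop
    ln -s raidstart raidhotadd
    ln -s raidstart raidhotremove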
tom
Thomas Bange wrote:
> I am looking for a kernel patch for the 2.3.x series of raid0145, but I
> haven't found one. The latest patch I found is against 2.2.11. Are there
> any new versions of the 'new' raid drivers for recent development kernels?
Not yet, no.
tom
> where is a good 'reliable' archive of this list stored?
http://www.kernelnotes.org/lnxlists/linux-raid/
So this is the reasoning, but it means we have a very long wait before it's
standard in a stable kernel, which is what most raid people use.
Kind of sad.
tom
> AFAIK Ingo is trying to get stuff done for 2.4
>
So that could be as soon as the end of the year, to hear some people tell it.
tom
Mika Kuoppala wrote:
> Is raid5 safe bet for swapping ? I recall reading
> that at least in the past swapping wasn't possible on arrays.
Official word says yes.
tom
AID.HOWTO/Software-RAID.HOWTO-6.html
should be just what you're looking for.
tom
me to the process.
I'm in between consulting gigs right now and could probably add something.
Great work, thanks much.
Tom
David Teigland wrote:
> Has anyone else tried raw-io with md devices? It works for me but the
> performance is quite bad.
This is a recently reported issue on the linux-kernel mailing list. The
gist of it is that rawio is using a 512-byte blocksize, where raid assumes a
1024. This was only firs
orcing some pretty
strange behavior.
I wonder if a more straightforward test would be to refuse to use /dev/sdX
if any of the partitions on sdX are mounted... this would be akin to refusing
to use /dev/sda1 if sda1 is already mounted.
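A crude userspace version of that check might look like this (just a sketch of the idea; the real test would belong in the md/raid code itself):
    # refuse to touch /dev/sda if any partition on it is mounted
    if grep -q '^/dev/sda[0-9]' /proc/mounts; then
        echo "a partition on /dev/sda is mounted, refusing" >&2
        exit 1
    fi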
Tom
itialize /dev/md0, mke2fs it and cause
the same problem? I could not.
tom
installing... and that even comparing both without DMA enabled, the Arco
product was about 15% slower (more like 40% slower vs. DMA enabled).
Software raid might be a better choice for you...
Tom
id5
that will tolerate one failed disk, or the super-paranoid 1/5 raid that will
tolerate two... But I wouldn't use something that only gave me a 40% chance
of surviving the 2nd disk failure.
Tom
deadlock (ll_rw_blk ?) and all processes
> trying to access disk get stuck.
Can you duplicate this using only one of the raid5 sets? I tried to cause
the same behavior with a single raid5 set and it worked fine... but I did not
layer raid on raid, perhaps this is where the issue is?
Tom
/md2'
Looking at your setup, I'm confused as to why you aren't simply running one
raid5 set on all six disks. It would certainly reduce complexity in
situations like this, and would leave you with more usable space.
Tom
machine crashes? With no OOPS? Is the machine SMP? If so, does the
problem still happen if you run in UP mode? Either way, try compiling with
the Magic SysRq feature (in kernel hacking) and when you get the lockup do
the SysRq + O to cause an OOPS.. and then decode it... this will
(hopefully?) show us where it's at.
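For reference, a sketch of what that involves (the exact SysRq key letters differ between kernel versions, so treat them as examples):
    # kernel config, under "Kernel hacking":  CONFIG_MAGIC_SYSRQ=y
    # after booting the new kernel, make sure it's switched on:
    echo 1 > /proc/sys/kernel/sysrq
    # then at the console, Alt+SysRq+<letter> dumps registers/tasks;
    # copy the output down and run it through ksymoops to decode it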
Tom
whatever I find out from him. Seems there's some interest in this
topic with people other than just me... ;)
Tom
"Stephen C. Tweedie" wrote:
> One is the GFS team at http://gfs.lcse.umn.edu/. The other hasn't
> announced publicly yet.
>
> --Stephen
>
Stephen (and others who might know),
Are there homepages and/or mailing lists for these teams? I would be
highly interested in participating...
Thanks,
Tom
"Stephen C. Tweedie" wrote:
>
> There are at least two teams working on beefing up NBD, including the
>
condary server only relinquishes control after the primary
server's disk is rebuilt... which could take forever with nbd?
Tom
e with RAID is that it has no such
issues... and after all, all of the traces we've seen have been inside the
ide subsystem, never inside of raid.
Tom
Ok,
I stand corrected. You are correct, the new lilo _will_ work with / and
/boot partitions that are on a raid device. Thanks for setting me
straight.
---
Tom Jones "ELVIS" May the Source
R
Hello,
The kickstart mode uses the text mode installer. Therefore the option to
create raid devices during the installation is not available. Hopefully
this will be included in a future release.
Cheers,
---
Tom Jones "
.
---
Tom Jones "ELVIS" May the Source
Red Hat Incbe with you!
---
On Tue, 5 Oct 1999, Laszlo Vecsey wrote:
> does the redhat 6.1
ame place:
http://volition.org/~tsl/raid/raid5-clean-failure.patch.gz
I've tested & retested this patch and it fails correctly, lets you unmount,
reboot etc. So this one should be good.
Let me know if you have any feedback.
Tom
work.
You need the ide patches for your kernel. Take a look in
ftp://ftp.us.kernel.org/pub/linux/kernel/people/hedrick/ for the correct
one. After applying the patch to your kernel, you'll have hpt-366 drivers
instead of the slow generic ones, and auto-dma.
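Once you're booted on the patched kernel you can sanity-check that the new driver took over; a quick sketch (device names are examples, the HPT interfaces usually show up as hde and up):
    dmesg | grep -i hpt      # should mention the HPT366 chipset
    hdparm -d /dev/hde       # using_dma should report 1 (on)
    hdparm -tT /dev/hde      # rough throughput check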
Tom
lure happens half-way through the writes, in which case obviously things
will be out of sync. Anyone agree with this?
Thanks!
Tom
n schedule, and no data will be lost. But thanks
for the suggestion.
Tom
vincenzoj wrote:
>
> try rsync?
>
> JV
--
Tom Kunz    Tool Developer    Software Consulting Services
PGP Key http://www.users.fast.net/~tkunz/pgp.html
1452 1F99 E2BB 632E 6EAE 2DF0 EF11 4DFC
DB62 7EBC 3BA0 6C40
rent
system that already does exactly what I want (as open-source, of
course). But if not, is there anyone else on this list who is
interested in venturing out into this arena?
Thanks,
Tom
--
Tom Kunz    Tool Developer    Software Consulting Services
PGP Key http://www.users.fast.net/~tkunz/p
're pretty much hosed currently.
If anyone feels up to it, give it a try and simulate some disk failure.
Helps a lot when trying to bring a controlled shutdown to the raid system.
Tom
directive is relatively new, and you don't have that new a
copy of raidtools. If you grab a new version, this should work fine.
Tom
led this part out of the patch and
replaced the file below.
> The patch can be found at
> http://volition.org/~tsl/raid/friendlier.raidtools.patch.gz
Tom
id at all:
cannot determine md version: No such file or directory
This generally means either RAID was not compiled into the kernel or
/dev/md0 does not exist. Check your kernel options, or run
make install_dev to create the md devices.
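If it's the device nodes that are missing, creating them by hand is one mknod per device (md is block major 9, minors starting at 0); roughly what make install_dev does:
    mknod /dev/md0 b 9 0
    mknod /dev/md1 b 9 1
    # and to check the kernel side:
    cat /proc/mdstat    # only present when md support is compiled in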
regards,
tom
this as an alpha kind of patch... do lots of
testing before you use this for real.
I'd be interested in anyone's feedback as to how well this works for them,
especially on that many physical disks.
[patches mime attached]
Tom
24-disk-kernel.patch
24-disk-raidtools.patch
g a
disk: Now possible". It's written in yet-another-scripting-language Pliant,
so you'll need to get that as well.
I used it, and it worked fine for me. Let us know if you use it too.
Tom
produce a good
answer for this frequently asked question...as well as some others, and
might be able to provide good information to Ingo in terms of the state of
performance of raid.
Tom
I doubt this is recommended behavior, however,
as I suspect it isn't intentional... and is therefore likely subject to
change.
Take care,
Tom
s really no
(transient failure) situation that you can't recover from.
Tom
ts of instability... it doesn't have to
relate to our own problems, though there is a possibility it still does.
In any case, if you had the opportunity to run 2.2.13pre11 (even with raid)
and SMP and report a lockup to linux-kernel the world would be slightly
better off.
Tom
... the one I am having a problem with is
too. But these kinds of tests may help them resolve the problem... and then
we'll all buy you a beer ;)
Tom
s work fine for 2.2.12. You will get one set of
rejects in fs.h which you can safely ignore, as these patches were already
made to 2.2.12.
Tom
f a driver for that DAC960.
Compaq did some development with DAC960 hardware raid controllers, and
you might try to contact them. The "stock" Red Hat 6.0 kernel comes
with some DAC960 drivers, and if that doesn't have it, you can try to
contact Mylex directly.
Hope this helps,
iffering
> values. These are NOT random bytes dropped onto the good data by a
> wild pointer or something like that.
I was always bad at seeing patterns in numbers, but to me it looked very
random. At least even in my short example there are bit problems in each
binary column.
Thanks for your help!
Tom
t. I'll get to working on that, but it'll
probably take a couple of days before it's all in place to see if I can get
it to fall over as well.
You may have noticed my report of the same system using IDE & SMP being
extremely unstable; it will crash within minutes under heavy load... Do you
think these problems are related?
Thank you!
Tom
retty deep into my raid setup, and I thought people
might appreciate some numbers. I was surprised to see the throughput top
out at four disks and then drop lower after that.
Tom
7a0be7aac6a7f27be473737ee097 -
48ed7a0be7aac6a7f27be473737ee097 -
48ed7a0be7aac6a7f27be473737ee097 -
48ed7a0be7aac6a7f27be473737ee097 -
48ed7a0be7aac6a7f27be473737ee097 -
I'm pretty sure that should rule out a faulty disk or cable.
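(For anyone who wants to repeat the test: the idea is just to read the same region of the suspect device several times and compare checksums; a sketch, with the device name and sizes picked arbitrarily:)
    for i in 1 2 3 4 5; do
        dd if=/dev/md0 bs=1024k count=100 2>/dev/null | md5sum
    done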
Tom
MA 2 drive0 (0x90caa731 0x20c8a731)
0x
Sep 13 02:05:31 music kernel: hdq: Maxtor 90845D4, 8063MB w/256kB Cache,
CHS=16383/16/63, (U)DMA
[...]
The driver is slowing down the drive because of an ASIC bug, according to
the source. Is this the mode it is supposed to end up in? MW DMA 2?
Tom
99e0c95849d69698ea36dc864e61f -
Thanks for your help!
Tom
c kernel: hdf1 [events: 0003](write) hdf1's sb
offset: 8203008
Sep 12 23:45:45 music kernel: hdd1 [events: 0003](write) hdd1's sb
offset: 8256896
Sep 12 23:45:45 music kernel: hdb1 [events: 0003](write) hdb1's sb
offset: 8256896
Sep 12 23:45:45 music kernel: .
Let me know if I can be of any more help.
Tom
storage shared between
multiple machines has been tricky so far. I've been playing with
software RAID as a low-end solution for data duplication.
Anyone else out there going through the same thing?
Tom
--
Tom Kunz    Tool Developer    Software Consulting Services
PGP Key http://www.u
tting idle/with idle disks? Whatever the cause of the condition, it would
sure be nice if the system didn't end up rebuilding every time you crash.
Tom
hould be all you need to do. You don't need to do mkraid,
because that is just for the initial creation of the raid partitions.
The raid drivers in the kernel and the raid recovery processes will
handle the reconstruction once you do "raidhotadd"; you don't have to
copy d
, and
then use the two md devices to set up a third that contains the first two as
a stripe set.
The obvious drawbacks: lose one more disk to parity, and more layers of raid
code. But it works today...
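In raidtab terms the top layer looks roughly like this (a sketch only; device names and chunk size are made up, and md0/md1, the two raid5 sets, have to be defined and started before md2):
    raiddev /dev/md2
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/md0
        raid-disk             0
        device                /dev/md1
        raid-disk             1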
Tom
There has been some discussion (and joy) on this list as the 0.90 RAID code
made its way into the 2.2.11 and 2.2.12 ac series. Both times they were
backed out, and I don't remember seeing a post here on linux-raid explaining
why.
I noticed this while scanning the linux-kernel archives. This is
ders, which is very close to
your own number. Does hdparm -tT /dev/hdc test from the inner or outer
cylinders? If it starts from block 0, it will be reading from the inner
ones, no? As drives spin at a constant speed, but there is more surface
area on the outside of the platter, more data is read (faster MB/sec) per
one rotation.
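One way to compare the two ends of the disk directly is to read a chunk from the start and another from near the end with dd (sizes and the skip offset are arbitrary examples for an ~8GB disk):
    # start of the disk
    time dd if=/dev/hdc of=/dev/null bs=1024k count=64
    # near the far end (skip is counted in bs-sized blocks)
    time dd if=/dev/hdc of=/dev/null bs=1024k count=64 skip=8000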
Tom
definitely try to get in touch with him, however.. maybe by
reposting to the linux-kernel list. He's very approachable, and can be very
helpful.
> I think maybe we need a separate UDMA mailing list since
> about 25 to 40% of posts there seem to be about UDMA questions/problems.
A certain amount of discussion goes on in the linux-kernel list. I'd find a
linux-ide list helpful, though... I'd subscribe and participate.
Tom
raid
> and they are both supported in 2.3.12 and/or 2.3.13pre
You don't wanna use 2.3.x anyways. :)
---
Tom Rini (TR1265)
http://gate.crashing.org/~trini/
Kiyan Azarbar wrote:
> I ordered them. What I'm wondering is how the controllers will be
> identified provided I do nothing special to set up the kernel (2.2.10
> with 0723 raid patch). which controller will get hde/f,g/h, and which
> will get hdi/j,hdk/l (if I install two ultra33's in a single
>
raidhotadd /dev/md1 /dev/hdc2
should get you reconstructing/working again.
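You can watch the rebuild while it runs; for example (the log path is the common default, adjust for your syslog setup):
    cat /proc/mdstat            # shows resync/reconstruction progress
    tail -f /var/log/messages   # the kernel logs the reconstruction too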
Tom
k as a starting point,
which is what I used.
Others on this list seem to be confused as to how to calculate an optimum
stripe size. Would this be a good thing to "get the word out" on, or is
there a more appropriate way to determine it?
Tom
ems to apply here.
Yeah, SCSI gives more bang, but IDE is more space for the buck. For
something like a giant mp3 archive, raiding 'em up will do fine. Or
anything where you need big space and speed isn't a big big concern.
---
Tom Rini (TR1265)
http://gate.crashing.org/~trini/
rs ago when it was all the rage.
But for a hobbyist system, or one where you're making a decision on the
numbers, I think IDE raid is a serious contender. I am happy with my 70GB
ide raid, and it's active 24x7 on the internet. If I had needed to spend 2x
what I spent for IDE to buy SCSI, I just wouldn't have been able to build
the box.
Tom
too, 7200rpm, 2something.
The above numbers could be a bit off, as it's all from memory and week-old
prices. But for a little more $$ you get a lot more space. (UDMA/66
also does deal with lots of the issues of udma/33, but SCSI is no doubt
faster.)
---
Tom Rini (TR1265)
http://gate.crashing.org/~trini/
You can find it here:
http://kernelnotes.org/lnxlists/linux-raid/lr_9905_01/msg00030.html Please
read the other messages in that thread, as they are helpful as well.
Good Luck,
Tom
33 promise controllers.
Hope this helps your diagnosis.
Tom
has to do to get > 6 interfaces going.
And, since I have you here... Do you have support planned for the HPT-366
udma-66 chip? I don't know if it's available on anything else, but it's
coming on abit's new BP-6 and BE-6(?). I just got two of the BP-6's... they
come with both a standard Intel PIIX4 chip on board, and this HighPoint
HPT-366 chip. You can use both at once, seems like a deal for us ide folks
;)
Tom
address, but From: is filled with the actual From: address. I checked
back as early as March, and the mailing list hasn't changed in this regard.
Perhaps you recently changed or re-configured your mail client? My "reply"
and "reply to all" features still work as expected, even on that very email.
Tom
would suggest making more than one box, and sticking 6-10 disks in each one.
You are right to plan on using one disk per interface. My 10 udma disk
raid5 set runs only slightly faster (according to bonnie) than access off of
any one of the disks in the set.
Good Luck,
Tom
ted a fix, which involves changing just one
line. This is the beginning of that thread:
http://kernelnotes.org/lnxlists/linux-raid/lr_9906_04/msg00022.html
but be sure to read Ingo's comments, because you only have to change one
line:
http://kernelnotes.org/lnxlists/linux-raid/lr_9906_04/msg00025.html
Tom
there is a document that describes failure types and recovery
scenarios; however, much of it has been discussed on the list in the past
months. Try:
http://kernelnotes.org/lnxlists/linux-raid/
to peruse what people have said.
Good luck,
Tom
other things to die down a bit and inclusion in
2.2.x-presomething and 2.3.x. I'm pretty sure the new stuff is more
stable than what's in the kernel currently. Ingo, this is "yours", yes?
Tell us something please. :)
---
Tom Rini (TR1265)
http://gate.crashing.org/~trini/
did only have to use
them for four of my twelve drives. I've cc'd the guy who bought them,
hopefully he'll respond to the list with their name & phone number
(*hint*hint*)
Tom
l have a ground line in between each data line... doubling the number of
wires in the cable and canceling out some of the noise. I have had 100%
reliability since I switched to using these for my long cable pulls, I have
one that is 24" and one that is 32".
Tom
ould think you'd need a 64 bit
> PCI bus to do it right, and I've only seen those on Alpha motherboards.
And the new macs. :)
---
Tom Rini (TR1265)
http://gate.crashing.org/~trini/
le/drive/enclosure problems with this system. If I had it to do
again, I might very well see if there was a small country bank nearby to
knock over... thus allowing me to buy scsi. I think it would be less stress
;)
Take care,
Tom
Chris Brown wrote:
> I am trying to build a large IDE RAID-5 array using three
> pdc20246 cards. In order for the three cards to work with linux I
> need to use kernel 2.2.9 with the 2.2.9 IDE patches, but from what
> I've seen on the list and what I've tried myself the 2.2.6 RAID patch
> doe
's returned? If you do,
would this be solved by moving the raid system to user space where you could
run threaded? Or would threads also block? Am I way off?
Sorry this is so long, obviously I've been mulling it over for a while, but
I haven't been able to find technical discussions like this out there. Are
there any pointers for this kind of info?
Take care,
Tom
lusion in the next raid patch? It's very
effective for root raid, and recovering RAID5, and otherwise has minimal
impact... I think we'd all like to see it in there]
This should get ya going.
Tom
ync when you're not otherwise in degraded mode.
> I may try the resync, because it seems to be really close to what you
> (and Ingo Molnar (in private email) and Piete Brooks) suggest. But I
> *will* wait for your reply. :)
Good luck
Tom
at new 35gig drive or whatever a couple of years from now.
Tom
>
edure...
Feel free to email me if after reading everything you're still stuck. I
know what it can be like to have gigs of data possibly lost (but not quite),
I'd be happy to help.
Good luck,
Tom
On Mon, 31 May 1999, Tom Rini wrote:
> Hello. I've got a bit of a problem. I used to have a machine (Apple
> 7500), which I had a (v0.90) raid0 array on. disk 0 was on the int scsi,
> disk 1 on the ext scsi. But, I managed to kill the box. The disks are
> fine (just checke
ut I
can't seem to get /dev/md0 back. (I might have tried to fsck one of the
parts by accident tho, which might explain it all). I've got 2.2.9+Ingo's
patch to include/linux/fs.h running right now. But when I raidstart -a,
it complains about the superblock. Ideas?
-
down the details, next(?) time I will.
I had been running with 2.2.3 + raid0145-199903?? + redhat 5.1 before this. I
had more than 3 weeks of uptime with that build.
Tom
id had a nice --superblocks-only option.
I'm no expert, but after Piete explained this to me, I got the impression that
this is pretty much all mkraid does... write superblocks based on what is in
/etc/raidtab. Is this not true?
Congratulations on recovering your array. I know the feeling of bliss you can
get after being able to mount again!
Tom
led (provided, of course,
that it didn't truly fail... like Giulio's power failure problems). It should
begin the reconstruction, and you should be back on your way to up!
I hope this helps someone. Thanks go out to Piete and Martin for providing
tools and advice for when I had to do this. My users are in debt to you (and
so am I!)
Take care,
Tom
re ``The Author''? Can anyone point me to the
> source so I can get my RAID on line? Or do I have to go and write
> a nasty web page about DPT despite their recent Linux support efforts?
>
> Thanks again,
> Josh Fishman
> NYU / RLab
>
>
Tom
ere not very responsive and generally
> unimpressive.
Never had that problem. We had a DOA card, which was quickly replaced.
DPT provided lots of helpful cabling advice.
> I'd look at ICP/Vortex if I were you. Sounds like a lot of people are
> happy with those cards.
>
>
> Bill Carlson | Opinions expressed are my own
> KINZE Manufacturing, Inc. | not my employer's.
>
>
>
Tom
On Fri, 19 Mar 1999, Piete Brooks wrote:
> > Are there any plans to have a utility as in Solaris (Disksuite) where
> > metastat can output the config format so that the equivalent of raidtab
> > can be updated...
>
> That was indeed one of the suggestions I made ...
>
> As there was no immediat
Hmmm... makes me wonder why VFS wasn't fixed long, long ago then.
> > Tom
>
> /Matti Aarnio <[EMAIL PROTECTED]>
>
>
Tom
UFS, and UFS
supports > 2GB files. I understand that UFS is available for Linux too,
and when you use it, you get > 2GB files too. I also understand that
other non-ext2fs filesystems for Linux support > 2GB files too.
Tom
it will be fixed.
> Thank you,
> Josh Fishman
> NYU / RLab
>
>
Tom
ld just be able to do something
like boot off a floppy with root=/dev/md0, right?
---
Tom Rini (TR1265)
http://dobbstown.yeti.edu/
On Mon, 1 Mar 1999, Stephen Costaras wrote:
> I'm in the process of re-building a system that died and was wondering if
> there was a raid0145 patch for the 2.2.2 kernel floating around. If someone
> could point me to a url or something I'd appreciate it.
as far as i know there aren't... you