Justin Piszcz wrote:
> Should I be worried?
>
> Fri Feb 22 20:00:05 EST 2008: Executing RAID health check for /dev/md3...
> Fri Feb 22 21:00:06 EST 2008: cat /sys/block/md3/md/mismatch_cnt
> Fri Feb 22 21:00:06 EST 2008: 936
> Fri Feb 22 21:00:09 EST 2008: Executing repair on /dev/md3
> Fri Feb 22
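The health check being logged above boils down to poking the md sysfs interface. A minimal sketch, with the sysfs directory passed as a parameter so it isn't tied to /dev/md3 (in real use it would be /sys/block/md3/md):

```shell
# Kick off a "check" pass via md's sysfs interface, then read the
# mismatch count. In real use you would wait for sync_action to go
# back to "idle" before trusting mismatch_cnt.
md_check() {
    md_dir=$1
    echo check > "$md_dir/sync_action"
    cat "$md_dir/mismatch_cnt"
}
```

A nonzero mismatch_cnt is what then motivates the `echo repair > sync_action` step shown in the log.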
Jeff Breidenbach wrote:
>> It's not a RAID issue, but make sure you don't have any duplicate volume
>> names. According to Murphy's Law, if there are two / volumes, the wrong
>> one will be chosen upon your next reboot.
>
> Thanks for the tip. Since I'm not using volumes or LVM at all, I should b
George Spelvin wrote:
> I just discovered (the hard way, sigh, but not too much data loss) that a
> 4-drive RAID 10 array had the mirroring set up incorrectly.
>
> Given 4 drives A, B, C and D, I had intended to mirror A<->C and B<->D,
> so that I could split the mirror and run on either (A,B) or
Quoting Hubert Verstraete <[EMAIL PROTECTED]>:
Hi All,
My RAID 5 array is running slow.
I've made a lot of tests to find out where this issue lies.
I've come to the conclusion that once the array is created with mdadm
2.6.x (up to 2.6.4), whatever the kernel you run, whatever the mdadm
you
Linda Walsh wrote:
>
> Michael Tokarev wrote:
>> Unfortunately a UPS does not *really* help here. Because unless
>> it has a control program which properly shuts the system down on the loss
>> of input power, and the battery really has the capacity to power the
>> s
Janek Kozicki wrote:
> Michael Tokarev said: (by the date of Tue, 05 Feb 2008 16:52:18 +0300)
>
>> Janek Kozicki wrote:
>>> I'm not using mdadm.conf at all.
>> That's wrong, as you need at least something to identify the array
>> components.
Moshe Yudkowsky wrote:
> Michael Tokarev wrote:
>> Janek Kozicki wrote:
>>> Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)
>>>
>>>> 2. How can I delete that damn array so it doesn't hang my server up
>>>> in a loop?
>
Janek Kozicki wrote:
> Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)
>
>> 2. How can I delete that damn array so it doesn't hang my server up in a
>> loop?
>
> dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
This works provided the superblocks are at the beginning of the
com
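A metadata-agnostic way to do the same is `mdadm --zero-superblock /dev/sdb1`. For 0.90 (and 1.0) metadata the superblock sits near the END of the component, not the beginning, so the dd trick above misses it; the 0.90 location is the last 64 KiB-aligned 64 KiB block, which can be sketched as:

```shell
# 0.90 superblock offset: the last 64 KiB-aligned 64 KiB block of the
# component device (size given in bytes). 1.1/1.2 metadata, by
# contrast, live at the start of the device, which is why zeroing the
# first megabytes works for them.
sb090_offset() {
    size=$1
    echo $(( (size & ~65535) - 65536 ))
}
```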
John Stoffel wrote:
[]
> C'mon, how many of you are programmed to believe that 1.2 is better
> than 1.0? But when they're not different, just different
> placements, then it's confusing.
Speaking of the "more is better" thing...
There were quite a few bugs fixed in recent months wrt version 1
s
Eric Sandeen wrote:
> Moshe Yudkowsky wrote:
>> So if I understand you correctly, you're stating that currently the most
>> reliable fs in its default configuration, in terms of protection against
>> power-loss scenarios, is XFS?
>
> I wouldn't go that far without some real-world poweroff testing,
Eric Sandeen wrote:
[]
> http://oss.sgi.com/projects/xfs/faq.html#nulls
>
> and note that recent fixes have been made in this area (also noted in
> the faq)
>
> Also - the above all assumes that when a drive says it's written/flushed
> data, that it truly has. Modern write-caching drives can wre
Moshe Yudkowsky wrote:
[]
> If I'm reading the man pages, Wikis, READMEs and mailing lists correctly
> -- not necessarily the case -- the ext3 file system uses the equivalent
> of data=journal as a default.
ext3 defaults to data=ordered, not data=journal. ext2 doesn't have
journal at all.
> The
Moshe Yudkowsky wrote:
[]
> But that's *exactly* what I have -- well, 5GB -- and which failed. I've
> modified /etc/fstab system to use data=journal (even on root, which I
> thought wasn't supposed to work without a grub option!) and I can
> power-cycle the system and bring it up reliably afterward
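For reference, the data=journal setup described would look something like this in /etc/fstab (device names are hypothetical):

```shell
# /etc/fstab fragment -- /dev/md0 stands in for the root array.
# With a journalled root, data=journal traditionally also had to be
# passed as rootflags= on the kernel command line, so that the initial
# read-only mount of / uses the same journalling mode.
/dev/md0  /      ext3  defaults,data=journal  0  1
/dev/md1  /home  ext3  defaults,data=ordered  0  2
```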
Moshe Yudkowsky wrote:
> Michael Tokarev wrote:
>
>> Speaking of repairs. As I already mentioned, I always use small
>> (256M..1G) raid1 array for my root partition, including /boot,
>> /bin, /etc, /sbin, /lib and so on (/usr, /home, /var are on
>> their ow
Moshe Yudkowsky wrote:
> I've been reading the draft and checking it against my experience.
> Because of local power fluctuations, I've just accidentally checked my
> system: My system does *not* survive a power hit. This has happened
> twice already today.
>
> I've got /boot and a few other piec
Moshe Yudkowsky wrote:
> Michael Tokarev wrote:
>
>> You only write to root (including /bin and /lib and so on) during
>> software (re)install and during some configuration work (writing
>> /etc/password and the like). First is very infrequent, and both
>> needs
Peter Rabbitson wrote:
> Moshe Yudkowsky wrote:
>> over the other. For example, I've now learned that if I want to set up
>> a RAID1 /boot, it must actually be 1.2 or grub won't be able to read
>> it. (I would therefore argue that if the new version ever becomes
>> default, then the default sub-ver
Keld Jørn Simonsen wrote:
[]
>> Ugh. 2-drive raid10 is effectively just a raid1. I.e., mirroring
>> without any striping. (Or, backwards, striping without mirroring).
>
> uhm, well, I did not understand: "(Or, backwards, striping without
> mirroring)." I don't think a 2 drive vanilla raid10 will
Moshe Yudkowsky wrote:
[]
> Mr. Tokarev wrote:
>
>> By the way, on all our systems I use small (256Mb for small-software systems,
>> sometimes 512M, but 1G should be sufficient) partition for a root filesystem
>> (/etc, /bin, /sbin, /lib, and /boot), and put it on a raid1 on all...
>> ... doing [i
integrity of the md device at a
low level? Can I trust the device?
Best regards,
Michael
--
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
Keld Jørn Simonsen wrote:
> On Tue, Jan 29, 2008 at 09:57:48AM -0600, Moshe Yudkowsky wrote:
>> In my 4 drive system, I'm clearly not getting 1+0's ability to use grub
>> out of the RAID10. I expect it's because I used 1.2 superblocks (why
>> not use the latest, I said, foolishly...) and therefo
Peter Rabbitson wrote:
[]
> However if you want to be so anal about names and specifications: md
> raid 10 is not a _full_ 1+0 implementation. Consider the textbook
> scenario with 4 drives:
>
> (A mirroring B) striped with (C mirroring D)
>
> When only drives A and C are present, md raid 10 with
Keld Jørn Simonsen wrote:
> On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
>> Linux raid10 MODULE (which implements that standard raid10
>> LEVEL in full) adds some quite.. unusual extensions to that
>> standard raid10 LEVEL. The resulting layout is als
Moshe Yudkowsky wrote:
> Michael Tokarev wrote:
>
>> There are more-or-less standard raid LEVELS, including
>> raid10 (which is the same as raid1+0, or a stripe on top
>> of mirrors - note it does not mean 4 drives, you can
>> use 6 - stripe over 3 mirrors each of 2
Peter Rabbitson wrote:
> Michael Tokarev wrote:
>> Raid10 IS RAID1+0 ;)
>> It's just that linux raid10 driver can utilize more.. interesting ways
>> to lay out the data.
>
> This is misleading, and adds to the confusion existing even before linux
> raid10.
Moshe Yudkowsky wrote:
> Peter Rabbitson wrote:
>
>> It is exactly what the name implies - a new kind of RAID :) The setup
>> you describe is not RAID10 it is RAID1+0. As far as how linux RAID10
>> works - here is an excellent article:
>> http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linu
Peter Rabbitson wrote:
> Moshe Yudkowsky wrote:
>>
>> One of the puzzling things about this is that I conceive of RAID10 as
>> two RAID1 pairs, with RAID0 on top to join them into a large drive.
>> However, when I use --level=10 to create my md drive, I cannot find
>> out which two pairs are th
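One way to see which drives pair up: with the default "near" layout and two copies (raid10,n2), the two copies of logical chunk c land on disks (2c mod N) and (2c+1 mod N). With 4 disks that degenerates into the fixed pairs (0,1) and (2,3), i.e. classic RAID1+0; with an odd disk count the pairing rotates, which is the non-standard part. A sketch:

```shell
# raid10 "near" layout, 2 copies over n disks: the two copies of
# logical chunk c are stored on disks (2c mod n) and (2c+1 mod n).
near2_disks() {
    c=$1; n=$2
    echo "$(( (2 * c) % n )) $(( (2 * c + 1) % n ))"
}
```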
Martin Seebach wrote:
> Hi,
>
> I'm not sure this is completely linux-raid related, but I can't figure out
> where to start:
>
> A few days ago, my server died. I was able to log in and salvage the content
> of dmesg:
> http://pastebin.com/m4af616df
>
> I talked to my hosting-people and t
Hi,
I have just built a RAID 5 array using mdadm and while it is running fine I
have a question about identifying the order of disks in the array.
In the pre-SATA days you would connect your drives as follows:
Primary Master - HDA
Primary Slave - HDB
Secondary - Master - HDC
Secondary - Slave
Quoting Mitchell Laks <[EMAIL PROTECTED]>:
Hi mdadm raid gurus,
I wanted to make a raid1 array, but at the moment I have only 1
drive available. The other disk is
in the mail. I wanted to make a raid1 that I will use as a backup.
But I need to do the backup now, before the second drive com
Quoting Norman Elton <[EMAIL PROTECTED]>:
I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.
Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
control
Neil Brown wrote:
> On Monday December 31, [EMAIL PROTECTED] wrote:
>> I'm hoping that if I can get raid5 to continue despite the errors, I
>> can bring back up enough of the server to continue, a bit like the
>> remount-ro option in ext2/ext3.
>>
>> If not, oh well...
>
> Sorry, but it is "oh wel
Justin Piszcz wrote:
[]
> Good to know/have it confirmed by someone else, the alignment does not
> matter with Linux/SW RAID.
Alignment matters when one partitions a Linux/SW raid array.
If the inner partitions are not aligned on a stripe
boundary, esp. in the worst case when the filesystem blo
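The alignment condition can be stated simply: a partition created inside an md array is stripe-aligned when its start offset is a multiple of the stripe width, i.e. chunk size times the number of data-bearing disks. A sketch:

```shell
# Stripe alignment check: start offset and chunk size in bytes;
# data_disks is the number of data-bearing disks (e.g. n-1 for raid5).
is_stripe_aligned() {
    start=$1; chunk=$2; data_disks=$3
    if [ $(( start % (chunk * data_disks) )) -eq 0 ]; then
        echo aligned
    else
        echo misaligned
    fi
}
```

The classic DOS partition offset of 63 sectors (32256 bytes) is the usual way to end up misaligned.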
maobo wrote:
> Hi,all
> Yes, raid10 read balancing picks the shortest positioning time first,
> also taking sequential access into account. But in my tests its
> performance is really poor compared to raid0.
Single-stream write performance of raid0, raid1 and raid10 should be
of similar level (with raid5 an
Janek Kozicki wrote:
> Michael Tokarev said: (by the date of Fri, 21 Dec 2007 14:53:38 +0300)
>
>>> I just noticed that with Linux software RAID10, disk
>>> usage isn't equal at all, that is, most reads are
>>> done from the first part of mirror(s) only.
>
Michael Tokarev wrote:
> I just noticed that with Linux software RAID10, disk
> usage isn't equal at all, that is, most reads are
> done from the first part of mirror(s) only.
>
> Attached (disk-hour.png) is a little graph demonstrating
> this (please don't blame me f
Thierry Iceta wrote:
> Hi
>
> I would like to use raidtools-1.00.3 on Rhel5 distribution
> but I got thie error
Use mdadm instead. Raidtools is dangerous/unsafe, and has
not been maintained for a long time.
/mjt
Guy Watkins wrote:
man md
man mdadm
I use RAID6. Happy with it so far, but haven't had a disk failure yet.
RAID5 sucks because if you have 1 failed disk and 1 bad block on any other
disk, you are hosed.
Hope that helps.
I can't believe I've been using a raid array for 2 years and didn't know
David Greaves wrote:
Michael Makuch wrote:
So my questions are:
...
- Is this a.o.k for a raid5 array?
So I realised that /proc/mdstat isn't documented too well anywhere...
http://linux-raid.osdl.org/index.php/Mdstat
Comments welcome...
David
One thing, i
I realize this is the developers list and though I am a developer I'm
not a developer
of linux raid, but I can find no other source of answers to these questions:
I've been using linux software raid (5) for a couple of years, having
recently upped
to the 2.6.23 kernel (FC7, was previously on FC
[Cc'd to xfs list as it contains something related]
Dragos wrote:
> Thank you.
> I want to make sure I understand.
[Some background for XFS list. The talk is about a broken linux software
raid (the reason for breakage isn't relevant anymore). The OP seems to
have lost the order of drives in his arra
I've come across a situation where external MD bitmaps
aren't usable on any "standard" linux distribution
unless special (non-trivial) actions are taken.
First is a small buglet in mdadm, or two.
It's not possible to specify --bitmap= on the assemble
command line - the option seems to be ignored. But
i
> Justin Piszcz said: (by the date of Sun, 2 Dec 2007 04:11:59 -0500 (EST))
>
>> The badblocks did not do anything; however, when I built a software raid 5
>> and then performed a dd:
>>
>> /usr/bin/time dd if=/dev/zero of=fill_disk bs=1M
>>
>> I saw this somewhere along the way:
>>
>> [42332.
Bryce wrote:
[]
> mdadm -C -l5 -n5 -c128 /dev/md0 /dev/sdf1 /dev/sde1 /dev/sdg1 /dev/sdc1
> /dev/sdd1
...
> IF you don't have the configuration printout, then you're left with
> exhaustive brute force searching of the combinations
You're missing a very important point -- the --assume-clean option.
F
On the heels of last week's post asking about hardware recommendations,
I'd like to ask a few questions too. :)
I'm considering my first SAS purchase. I'm planning to build a software
RAID6 array using a SAS JBOD attached to a linux box. I haven't decided
on any of the hardware specifics.
I'm l
Janek Kozicki wrote:
[]
> Can you please add to the manual under 'SEE ALSO' a reference
> to /usr/share/doc/mdadm ?
/usr/share/doc/mdadm is Debian-specific (well, maybe not strictly
Debian -- some distros derived from it may use the same naming
scheme, too). Other distribu
Justin Piszcz wrote:
> On Sun, 4 Nov 2007, Michael Tokarev wrote:
[]
>> The next time you come across something like that, do a SysRq-T dump and
>> post that. It shows a stack trace of all processes - and in particular,
>> where exactly each task is stuck.
> Yes I got i
Justin Piszcz wrote:
> # ps auxww | grep D
> USER  PID %CPU %MEM VSZ RSS TTY STAT START TIME  COMMAND
> root  273  0.0  0.0    0   0 ?   D    Oct21 14:40 [pdflush]
> root  274  0.0  0.0    0   0 ?   D    Oct21 13:00 [pdflush]
>
> After several days/wee
John Stoffel wrote:
>>>>>> "Michael" == Michael Tokarev <[EMAIL PROTECTED]> writes:
>>> If you are going to mirror an existing filesystem, then by definition
>>> you have a second disk or partition available for the purpose. So you
>>&
Justin Piszcz wrote:
>
> On Fri, 19 Oct 2007, Doug Ledford wrote:
>
>> On Fri, 2007-10-19 at 13:05 -0400, Justin Piszcz wrote:
[]
>>> Got it, so for RAID1 it would make sense if LILO supported it (the
>>> later versions of the md superblock)
>>
>> Lilo doesn't know anything about the superblock f
John Stoffel wrote:
>>>>>> "Michael" == Michael Tokarev <[EMAIL PROTECTED]> writes:
[]
> Michael> Well, I strongly, completely disagree. You described a
> Michael> real-world situation, and that's unfortunate, BUT: for at
> Michael>
Doug Ledford wrote:
[]
> 1.0, 1.1, and 1.2 are the same format, just in different positions on
> the disk. Of the three, the 1.1 format is the safest to use since it
> won't allow you to accidentally have some sort of metadata between the
> beginning of the disk and the raid superblock (such as an
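The three positions can be written down concretely. A small sketch (offsets in bytes; the 1.0 end-of-device rule here follows the usual "8 KiB back from the end, rounded down to a 4 KiB boundary" placement, which should be treated as an approximation of mdadm's exact arithmetic):

```shell
# v1.x superblock placement: same format, different position.
#   1.1: offset 0 (start of device -- nothing can sneak in before it)
#   1.2: 4 KiB from the start
#   1.0: near the end -- 8 KiB back, rounded down to 4 KiB alignment
sb1_offset() {
    ver=$1; size=$2
    case $ver in
        1.1) echo 0 ;;
        1.2) echo 4096 ;;
        1.0) echo $(( ((size - 8192) / 4096) * 4096 )) ;;
    esac
}
```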
Justin Piszcz wrote:
[]
>> -
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to [EMAIL PROTECTED]
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
Justin, forgive me please, but can you learn to trim the original
messages wh
Neil Brown wrote:
> On Tuesday October 9, [EMAIL PROTECTED] wrote:
[]
>> o During this reshape time, errors may be fatal to the whole array -
>> while mdadm does have a sense of "critical section", the
>> whole procedure isn't as well tested as the rest of the raid code;
>> I for one will not r
Janek Kozicki wrote:
> Hello,
>
> Recently I started to use mdadm and I'm very impressed by its
> capabilities.
>
> I have raid0 (250+250 GB) on my workstation. And I want to have
> raid5 (4*500 = 1500 GB) on my backup machine.
Hmm. Are you sure you need that much space on the backup, to
start
Rustedt, Florian wrote:
> Hello list,
>
> some folks reported severe filesystem-crashes with ext3 and reiserfs on
> mdraid level 1 and 5.
I guess much stronger evidence and details are needed.
Without any additional information I for one can only make
a (not-so-pleasant) guess about those "so
Patrik Jonsson wrote:
> Michael Tokarev wrote:
[]
>> But in any case, md should not stall - be it during reconstruction
>> or not. For this, I can't comment - to me it smells like a bug
>> somewhere (md layer? error handling in driver? something else?)
>> which sho
Daniel Santos wrote:
> I retried rebuilding the array once again from scratch, and this time
> checked the syslog messages. The reconstruction process is getting
> stuck at a disk block that it can't read. I double checked the block
> number by repeating the array creation, and did a bad block sca
Dean S. Messing wrote:
> Michael Tokarev writes:
[]
> : the procedure is something like this:
> :
> : cd /backups
> : rm -rf tmp/
> : cp -al $yesterday tmp/
> : rsync -r --delete -t ... /filesystem tmp
> : mv tmp $today
> :
> : That is, link the previous
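The rotation quoted above can be sketched as a small shell function. The sync step is passed in as a command (the original uses rsync with --delete) so the rotation logic stands on its own:

```shell
# Hard-linked snapshot rotation: link yesterday's tree into a staging
# dir, bring the staging dir up to date, then rename it into place.
rotate_backup() (
    backups=$1; prev=$2; today=$3; shift 3
    cd "$backups" || exit 1
    rm -rf tmp
    cp -al "$prev" tmp     # hard-link copy of yesterday: near-free in space
    "$@" tmp               # sync step, e.g.: rsync -a --delete /filesystem/ tmp
    mv tmp "$today"
)
```

Unchanged files end up hard-linked across snapshots, which is what makes each "full" backup cost only the changed data.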
Dean S. Messing wrote:
> Michal Soltys writes:
[]
> : Rsync is a fantastic tool for incremental backups. Everything that didn't
> : change can be hardlinked to the previous entry. And the time of performing the
> : backup is pretty much negligible. Essentially - you have the equivalent of
> : full backups a
Dean S. Messing wrote:
[]
> [] That's what
> attracted me to RAID 0 --- which seems to have no downside EXCEPT
> safety :-).
>
> So I'm not sure I'll ever figure out "the right" tuning. I'm at the
> point of abandoning RAID entirely and just putting the three disks
> together as a big LV and bei
Hi
Looks like a disk I/O error to me.
As far as I can remember, since kernel 2.6.16 raid1 read errors are auto-corrected.
I think doing a filesystem check might help.
Michael
On 9/3/07, Mitchell Laks <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I run raid1 on a debian etch server.
&g
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> >> Michael Evans wrote:
> >>> On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> >>>> Michael
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> >> Michael Evans wrote:
> >>> Oh, I see. I forgot about the changelogs. I'd send out version 5
> >>>
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > Oh, I see. I forgot about the changelogs. I'd send out version 5
> > now, but I'm not sure what kernel version to make the patch against.
> > 2.6.23-rc4 is on kernel.
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL
On Tuesday 28 August 2007, Jan Engelhardt wrote:
>
> On Aug 28 2007 06:08, Michael Evans wrote:
> >
> >Oh, I see. I forgot about the changelogs. I'd send out version 5
> >now, but I'm not sure what kernel version to make the patch against.
> >2.6.23-rc4
On 8/27/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> Michael J. Evans wrote:
> > On Monday 27 August 2007, Randy Dunlap wrote:
> >> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
> >>
> >>> ===
On Monday 27 August 2007, Randy Dunlap wrote:
> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
>
> > =
> > --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700
> > +++ linux/drivers
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL
On 8/26/07, Kyle Moffett <[EMAIL PROTECTED]> wrote:
> On Aug 26, 2007, at 08:20:45, Michael Evans wrote:
> > Also, I forgot to mention, the reason I added the counters was
> > mostly for debugging. However they're also as useful in the same
> > way that listing t
On 8/26/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote:
>
> > From: Michael J. Evans <[EMAIL PROTECTED]>
> >
>
> Is there any way to tell the user what device (or partition?) is
> being skipped? This
On 8/26/07, Jan Engelhardt <[EMAIL PROTECTED]> wrote:
>
> On Aug 26 2007 04:51, Michael J. Evans wrote:
> > {
> >- if (dev_cnt >= 0 && dev_cnt < 127)
> >- detected_devices[dev_cnt++] = dev;
> >+ struct detected_devices_no
Also, I forgot to mention, the reason I added the counters was mostly
for debugging. However they're also as useful in the same way that
listing the partitions when a new disk is added can be (in fact this
augments that and the existing messages the autodetect routines
provide).
As for using auto
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL
wn <[EMAIL PROTECTED]> wrote:
> On Wednesday August 22, [EMAIL PROTECTED] wrote:
> > From: Michael J. Evans <[EMAIL PROTECTED]>
> >
> > In current release kernels the md module (Software RAID) uses a static array
> > (dev_t[128]) to store partition/device info
Tomas France wrote:
> Thanks for the answer, David!
>
> I kind of think RAID-10 is a very good choice for a swap file. For now I
> will need to setup the swap file on a simple RAID-1 array anyway, I just
> need to be prepared when it's time to add more disks and transform the
> whole thing into RA
I have removed the drives from my machine. The problem I'm having is that I don't
know the order (ports) they go back into the machine. Does anyone know how to
determine the order, or how to fix the drive array if the order is not correct?
mullaly wrote:
[]
> All works well until a system reboot. md2 appears to be brought up before
> md0 and md1 which causes the raid to start without two of its drives.
>
> Is there any way to fix this?
How about listing the arrays in proper order in mdadm.conf ?
/mjt
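Something like the following in mdadm.conf (UUIDs are placeholders) makes the assembly order explicit:

```shell
# /etc/mdadm.conf fragment -- listing md0 and md1 before md2 makes
# mdadm assemble the component arrays before the array built on top
# of them.
ARRAY /dev/md0 UUID=<uuid-of-md0>
ARRAY /dev/md1 UUID=<uuid-of-md1>
ARRAY /dev/md2 UUID=<uuid-of-md2>
```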
From: Daniel Korstad <[EMAIL PROTECTED]>
To: Michael <[EMAIL PROTECTED]>
Cc: linux-raid@vger.kernel.org
Sent: Monday, July 16, 2007 10:23:23 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
You will learn a lot by building your own system and will allow you to do m
Joshua Baker-LePain wrote:
[]
> Yep, hardware RAID -- I need the hot swappability (which, AFAIK, is
> still an issue with md).
Just out of curiosity - what do you mean by "swappability" ?
For many years we're using linux software raid, we had no problems
with "swappability" of the component drive
und, they are corrected from parity information
30 2 * * Mon echo check > /sys/block/md0/md/sync_action
For more info on crontab and syntax for times (I just did a google and grabbed
the first couple links...);
http://www.tech-geeks.org/contrib/mdrone/cron&crontab-howto.htm
http:/
as long as possible.
- Original Message
From: Bill Davidsen <[EMAIL PROTECTED]>
To: Daniel Korstad <[EMAIL PROTECTED]>
Cc: Michael <[EMAIL PROTECTED]>; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable a
is easy to use, supports all of my hardware right on install and has the auto
update features that I enjoy. Instead I have seen a report of tune2fs
(which is available), though I am not sure if this is of use on a RAID-5 array.
Thanks
Michael Parisi
- Original Message
From: Bill
anks for the help everyone!
--
YT,
Michael
/dev/hda3:
Magic : a92b4efc
Version : 00.90.00
UUID : 36bbe21d:f49e8b5d:f504154c:a6f12a51
Creation Time : Sun Jan 14 21:17:53 2007
Raid Level : raid5
Device Size : 10490368 (10.00 GiB 10.74 GB)
Array Size : 20
moved?
--
YT,
Michael
itions are an array and stay an array until your superblock
is erased or hell freezes over, whichever happens first. Amen.
--
YT,
Michael
Michael Schwarz wrote:
The problem with that approach is that it opens up the applications in
question to *any parameters* unlike the setuid C program which hardcodes the
parameters to the commands.
Take a look at the man page for sudo. It can limit which parameters can
be used. You can
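A sudoers entry of the kind being suggested might look like this (the user name and command are hypothetical):

```shell
# /etc/sudoers fragment: www-data may run exactly this mdadm command,
# with exactly these arguments, and nothing else -- unlike a blanket
# NOPASSWD: /sbin/mdadm, which would allow arbitrary parameters.
www-data ALL = (root) NOPASSWD: /sbin/mdadm --detail /dev/md0
```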
day controller? That one would surely not have the devices hda through
hdd and my array would refuse to start.
Does anyone have a suggestion of what I can try? Ok, the array runs fine as
long as it is connected to its original bus, but I really don't want to take
chances here.
--
YT,
Michael
arbitrary "mdadm" and "mount/umount"
commands using sudo.
--
Michael Schwarz
> This isn't really an answer to your question, but isn't this an ideal
> application for sudo? Make a shell script with the mdadm command(s) you
> want. And set it up so apache or wha
"thank you" to everyone who works on Linux software RAID. I
have had a complete BLAST using the feature. I now routinely make 2-3 USB
drive RAID 0 arrays just for fun (and faster write speed!).
It is a very nice feature and set of tools!
--
Michael Schwarz
On Monday 02 July 2007 08:35:18 Michael Frotscher wrote:
> Hmm, I'd need to check that after I rebuild the arrays. Maybe the other
> IDE-controller is not in the initrd.
No, although this sounded like a good idea. The IDE controller is initialized
before the assembly of the arra
a raid
itself.
> Maybe the other IDE controller uses a module that it loaded late.
Hmm, I'd need to check that after I rebuild the arrays. Maybe the other
IDE-controller is not in the initrd. That wouldn't explain the missing hdb,
though.
--
YT,
Michael
/boot/grub/device.map and changed hdc to hdg, so that can't
be the reason.
I seem to be missing something here, but what is it?
--
YT,
Michael
On Thu, Jun 28, 2007 at 09:12:56AM +0100, David Greaves wrote:
> (back on list for google's benefit ;) and because there are some good
> questions and I don't know all the answers... )
Thanks, I didn't realize I didn't 'reply-all' to stay on the list.
> Hopefully it will snowball as people who u
How do I create an array with a helpful name? i.e. "/dev/md/storage"?
The mdadm man page hints at this in the discussion of the --auto option
in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done.
Must I create the device nodes by hand first using MAKEDEV?
Thanks.
ansk
Mike
- Original Message
From: Brad Campbell <[EMAIL PROTECTED]>
To: Michael <[EMAIL PROTECTED]>
Sent: Thursday, June 21, 2007 5:45:00 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Michael wrote:
> Thank you;
>
> Not that I want to, but where d
Thank you;
Not that I want to, but where did you find a SATA PCI card that fits 15 drives?
The most expensive part of the build has been finding drive controllers.
Also, how did you come up with that power requirement? Seems like a lot of
power for the 29 drives I will be able to fit near