Re: mismatch_cnt != 0
Justin Piszcz said: (by the date of Sun, 24 Feb 2008 04:26:39 -0500 (EST))

> Kernel 2.6.24.2 I've seen it on different occasions, for this last time
> though it may have been due to a power outage that lasted > 2 hours and
> obviously the UPS did not hold up that long.

You should connect the UPS through RS-232 or USB, and if a power-down event is detected, issue a hibernate or shutdown. Currently I am issuing hibernate in this case; it works pretty well on 2.6.22 and up.

-- Janek Kozicki
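For illustration, with NUT (Network UPS Tools) the hookup could look roughly like this - a minimal sketch, assuming NUT is installed; the UPS name and credentials are placeholders you would replace:

# /etc/nut/upsmon.conf
MONITOR myups@localhost 1 monuser secret master
# run hibernate instead of a plain shutdown when the battery runs low:
SHUTDOWNCMD "/usr/sbin/hibernate"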
Re: RAID5 to RAID6 reshape?
Beolach said: (by the date of Mon, 18 Feb 2008 05:38:15 -0700)

> On Feb 17, 2008 10:26 PM, Janek Kozicki <[EMAIL PROTECTED]> wrote:
> > Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
> > > Well, I was reading that LVM2 had a 20%-50% performance penalty,
> <http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID_mirror_and_LVM2_on_top_of_RAID>.

Hold on. This might be related to raid chunk positioning with respect to LVM chunk positioning. If they interfere, there may indeed be some performance drop. Best to make sure that those chunks are aligned together.

-- Janek Kozicki
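A minimal sketch of one way to force and verify the alignment (assuming LVM2; the 250k value is the usual trick to round the LVM data start up to a 256 KiB boundary, i.e. a multiple of a 128 KiB raid chunk):

# pvcreate --metadatasize 250k /dev/md1
# pvs -o +pe_start /dev/md1     (verify where the first physical extent starts)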
Re: RAID5 to RAID6 reshape?
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)

> Well, I was reading that LVM2 had a 20%-50% performance penalty,

Huh? Make a benchmark. Do you really think that anyone would be using it if there were any penalty bigger than 1-2% (random access, r/w)? I have no idea what the penalty is, but I'm totally sure I didn't notice it.

-- Janek Kozicki
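A quick way to compare, for example (a rough sketch - /mnt/lvm is a filesystem on LVM-over-md, /mnt/raw the same filesystem directly on md; conv=fdatasync makes dd wait until the data actually hits the disk):

# dd if=/dev/zero of=/mnt/lvm/testfile bs=1M count=2048 conv=fdatasync
# dd if=/dev/zero of=/mnt/raw/testfile bs=1M count=2048 conv=fdatasync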
Re: RAID5 to RAID6 reshape?
Mark Hahn said: (by the date of Sun, 17 Feb 2008 17:40:12 -0500 (EST))

> >> I'm also interested in hearing people's opinions about LVM / EVMS.
> >
> > With LVM it will be possible for you to have several raid5 and raid6:
> > eg: 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
> > have 14 HDDs and five of them being extra - for safety/redundancy
> > purposes.
>
> that's a very high price to pay.
>
> > partition on top of them. Without LVM you will end up with raid6 on
> > 14 HDDs thus having only 2 drives used for redundancy. Quite risky
> > IMHO.
>
> your risk model is quite strange - 5/14 redundancy means that either

Yeah, sorry, I went too far. I haven't had an IO controller failure so far, but I've read about one on this list, where all the data was lost. You're right: better to duplicate a server with a backup copy, so that it is independent of the original one.

-- Janek Kozicki
Re: RAID5 to RAID6 reshape?
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)

> Or would I be better off starting w/ 4 drives in RAID6?

Oh, right - Sevrin Robstad has a good idea to solve your problem: create a raid6 with one missing member, and add this member when you have it, next year or so.

-- Janek Kozicki
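Something along these lines (device names are only an example):

# mdadm --create /dev/md0 --level=raid6 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 missing
# ...later, when the fourth drive arrives:
# mdadm --add /dev/md0 /dev/sdd1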
Re: RAID5 to RAID6 reshape?
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)

> I'm also interested in hearing people's opinions about LVM / EVMS.

With LVM it will be possible for you to have several raid5 and raid6 arrays, eg: 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would have 14 HDDs, five of them being extra - for safety/redundancy purposes. LVM allows you to "join" several block devices and create one huge partition on top of them. Without LVM you will end up with raid6 on 14 HDDs, thus having only 2 drives used for redundancy. Quite risky IMHO.

It is quite common that a *whole* IO controller dies and takes all 4 drives with it. So when you connect your drives, always make sure that you are totally safe if any of your IO controllers dies (taking down 4 HDDs with it). With 5 redundant discs this may be possible to solve. Of course when you replace the controller the discs are up again, and only need to resync (which is done automatically).

LVM can be grown on-line (without rebooting the computer) to "join" new block devices. And after that you only `resize2fs /dev/...` and your partition is bigger. A sketch follows below.

Also in such a configuration I suggest you use the ext3 fs, because no other fs (XFS, JFS, whatever) has had as much testing as the ext* filesystems.

Question to other people here - what is the maximum partition size that ext3 can handle, am I correct that it is 4 TB? And to go above 4 TB we need to use ext4dev, right?

best regards
-- Janek Kozicki
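The "join and grow" could look roughly like this (a sketch with made-up names; the md devices would be your raid5/raid6 arrays, and the %FREE syntax needs a reasonably recent lvm2):

# pvcreate /dev/md0 /dev/md1
# vgcreate bigvg /dev/md0 /dev/md1
# lvcreate -l 100%FREE -n data bigvg
# mkfs.ext3 /dev/bigvg/data
# ...later, on-line growth onto a new array:
# vgextend bigvg /dev/md2
# lvextend -l +100%FREE /dev/bigvg/data
# resize2fs /dev/bigvg/data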
Re: RAID5 how to change chunk size from 64 to 128, 256? is it possible?
Justin Piszcz said: (by the date of Sat, 9 Feb 2008 04:14:51 -0500 (EST))

> When you create the array it's --chunk or -c -- I found 256 KiB to 1024 KiB
> to be optimal.

Hello Justin,

What is your typical bonnie++ invocation to test your configuration? Which fields are meaningful for you in this benchmark? Do you use anything else for benchmarks, eg: 'zcav /dev/sda > result'?

I'm asking because I want to make some local benchmarks to determine the best chunk size for my HDD setup.

thanks in advance
-- Janek Kozicki
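For reference, a typical invocation could be something like the following - sizes and paths are only examples, and -s should be at least twice the RAM so the page cache doesn't skew the results:

# bonnie++ -d /mnt/test -s 8192 -u nobody
# zcav /dev/sda > sda.zcav     (throughput vs. position on the platter)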
Re: mdadm 2.6.4 : How i can check out current status of reshaping ?
Andreas-Sokov said: (by the date of Wed, 6 Feb 2008 22:15:05 +0300)

> Hello, Neil.
> .
> > Possible you have bad memory, or a bad CPU, or you are overclocking
> > the CPU, or it is getting hot, or something.
>
> As it seems to me, all my problems started after I started updating MDADM.

What is the update?
- you installed a new version of mdadm?
- you installed a new kernel?
- something else?
- what was the version before, and what version is now?
- can you downgrade to the previous version?

best regards
-- Janek Kozicki
Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)
Bill Davidsen said: (by the date of Wed, 06 Feb 2008 13:16:14 -0500)

> Janek Kozicki wrote:
> > Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500 (EST))
> > writing on raid10 is supposed to be half the speed of reading. That's
> > because it must write to both mirrors.
>
> ??? Are you assuming that writes to mirrored copies are done sequentially
> rather than in parallel? Unless you have enough writes to saturate
> something the effective speed approaches the speed of a single drive. I
> just checked raid1 and raid5, writing 100MB with an fsync at the end.
> raid1 leveled off at 85% of a single drive after ~30MB.

Hi,

In the above context I'm talking about raid10 (not about raid1, raid0, raid0+1, raid1+0, raid5 or raid6). Of course writes are done in parallel; it is the *aggregate* bandwidth that suffers, because every chunk must be written more than once. When each chunk has two copies, raid10 reads twice as fast as it writes. If each chunk has three copies, then writes are 1/3 the speed of reading. If each chunk has a number of copies equal to the number of drives, then write speed drops down to that of a single drive - 1/Nth of the read speed.

But it's all just theory. I'd like to see more benchmarks :-)

-- Janek Kozicki
Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)
Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500 (EST))

> I remember testing with bonnie++ and raid10 was about half the speed
> (200-265 MiB/s) as RAID5 (400-420 MiB/s) for sequential output,

Writing on raid10 is supposed to be half the speed of reading. That's because it must write to both mirrors.

IMHO raid5 can perform well here, because in a *continuous* write operation the blocks from the other HDDs have just been written; they stay in cache and can be used to calculate the xor. So you can get close to raid-0 performance here.

Randomly scattered small-sized write operations will kill raid5 performance, for sure, because the corresponding blocks from a few other drives must be read to calculate the parity correctly. I'm wondering how far raid5 performance would go down... Is there a bonnie++ test for that, or any other benchmark software for this? (See the sketch below.)

> but input was closer to RAID5 speeds/did not seem affected (~550MiB/s).

Reading in raid5 and raid10 is supposed to be close to raid-0 speed.

-- Janek Kozicki
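One option for measuring random small writes is fio, if it's available - a sketch only, and note that pointing it at a raw md device will destroy the data on it:

# fio --name=randwrite --filename=/dev/md0 --rw=randwrite \
      --bs=4k --direct=1 --runtime=60 --time_based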
Re: Auto generation of mdadm.conf
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 18:34:47 +0300)

<...>
> So.. probably this is the way your arrays are being assembled, since you
> do have HOMEHOST in your mdadm.conf... Looks like it should work, after
> all... ;) And in this case there's no need to specify additional array
> information in the config file.

Whew, that was a long read. Thanks for the detailed analysis. I hope that your conclusion is correct, since I have no way to verify it myself. My knowledge is not enough here :)

best regards
-- Janek Kozicki
Re: Auto generation of mdadm.conf (was: Deleting mdadm RAID arrays)
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 16:52:18 +0300)

> Janek Kozicki wrote:
> > I'm not using mdadm.conf at all.
>
> That's wrong, as you need at least something to identify the array
> components.

I was afraid of that ;-) So, is this a correct way to automatically generate a correct mdadm.conf? I did it after some digging in the man pages:

echo 'DEVICE partitions' > mdadm.conf
mdadm --examine --scan --config=mdadm.conf >> ./mdadm.conf

Now, when I do 'cat mdadm.conf' I get:

DEVICE partitions
ARRAY /dev/md/0 level=raid1 metadata=1 num-devices=3 UUID=75b0f87879:539d6cee:f22092f4:7a6e6f name='backup':0
ARRAY /dev/md/2 level=raid1 metadata=1 num-devices=3 UUID=4fd340a6c4:db01d6f7:1e03da2d:bdd574 name=backup:2
ARRAY /dev/md/1 level=raid5 metadata=1 num-devices=3 UUID=22f22c3599:613d5231:d407a655:bdeb84 name=backup:1

Looks quite reasonable. Should I append it to /etc/mdadm/mdadm.conf? This file currently contains (commented lines left out):

DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
MAILADDR root

This is the default content of /etc/mdadm/mdadm.conf on a fresh debian etch install.

best regards
-- Janek Kozicki
Re: Deleting mdadm RAID arrays
Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)

> 2. How can I delete that damn array so it doesn't hang my server up in a loop?

dd if=/dev/zero of=/dev/sdb1 bs=1M count=10

I'm not using mdadm.conf at all. Everything is stored in the superblock of the device. So if you don't erase it, info about the raid array will still be found automatically.

-- Janek Kozicki
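A cleaner alternative is to let mdadm erase only its own metadata, roughly like this (the array must be stopped first; device names are an example):

# mdadm --stop /dev/md0
# mdadm --zero-superblock /dev/sdb1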
Re: raid10 on three discs - few questions.
Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)

wow, thanks for the quick reply :)

> > 3. Another thing - would raid10,far=2 work when three drives are used?
> >    Would it increase the read performance?
>
> Yes.

Is far=2 the most I can do to squeeze every possible MB/sec of performance out of raid10 on three discs?

-- Janek Kozicki
raid10 on three discs - few questions.
Hi,

Maybe I'll buy three HDDs to put a raid10 on them, and get a total capacity of 1.5 discs. 'man 4 md' indicates that this is possible and should work.

I'm wondering how a single disc failure is handled in such a configuration:

1. does the array continue to work in a degraded state?
2. after the failure, can I disconnect the faulty drive, connect a new one, start the computer, add the disc to the array, and it will sync automatically?

The question seems a bit obvious, but the configuration is, at least for me, a bit unusual. This is why I'm asking. Has anybody here tested such a configuration, and has some experience?

3. Another thing - would raid10,far=2 work when three drives are used? Would it increase the read performance?

4. Would it be possible to later '--grow' the array to use 4 discs in raid10? Even with far=2?

thanks,
-- Janek Kozicki
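For reference, creating such an array would be something like this (device names are an example; 'f2' is the far=2 layout from question 3):

# mdadm --create /dev/md0 --level=raid10 --layout=f2 \
        --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1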
Re: draft howto on making raids for surviving a disk crash
Keld Jørn Simonsen said: (by the date of Sat, 2 Feb 2008 20:41:31 +0100)

> This is intended for the linux raid howto. Please give comments.
> It is not fully ready /keld

Very nice. Do you intend to put it on http://linux-raid.osdl.org/ ? As a wiki, it will be much easier for our community to fix errors and add updates.

-- Janek Kozicki
Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)
Keld Jørn Simonsen said: (by the date of Thu, 31 Jan 2008 02:55:07 +0100)

> Given that you want maximum thruput for both reading and writing, I
> think there is only one way to go, that is raid0.
>
> All the raid10's will have double time for writing, and raid5 and raid6
> will also have double or triple writing times, given that you can do
> striped writes on the raid0.
>
> For random and sequential writing in the normal case (no faulty disks) I would
> guess that all of the raid10's, the raid1 and raid5 are about equally fast,
> given the same amount of hardware. (raid5, raid6 a little slower given the
> unactive parity chunks).
>
> For random reading, raid0, raid1, raid10 should be equally fast, with
> raid5 a little slower, due to one of the disks virtually out of
> operation, as it is used for the XOR parity chunks. raid6 should be
> somewhat slower due to 2 non-operationable disks. raid10,f2 may have a
> slight edge due to virtually only using half the disk giving better
> average seek time, and using the faster outer disk halves.
>
> For sequential reading, raid0 and raid10,f2 should be equally fast.
> Possibly raid10,o2 comes quite close. My guess is that raid5 then is
> next, achieving striping rates, but with the loss of one parity drive,
> and then raid1 and raid10,n2 with equal performance.
>
> In degraded mode, I guess for random read/writes the difference is not
> big between any of the raid1, raid5 and raid10 layouts, while sequential
> reads will be especially bad for raid10,f2 approaching the random read
> rate, and others will enjoy the normal speed of the above filesystem
> (ext3, reiserfs, xfs etc).

Wow! Thanks for the detailed explanations. I was thinking that maybe raid10 on 4 drives could be faster than raid0. But now it's all logical for me. With 4 drives and raid10,f2 I could get "extra" reading speed, but not writing speed. Makes a lot of sense.

Perhaps it should be added to the linux-raid wiki? (and perhaps a FAQ there - isn't a question about speed a frequent one?) http://linux-raid.osdl.org/index.php/Main_Page

> Theory, theory theory. Show me some real figures.

yes... that would be great if someone could spend some time benchmarking all possible configurations :-)

thanks for your help!
-- Janek Kozicki
Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07 +0100)

> Teoretically, raid0 and raid10,f2 should be the same for reading, given the
> same size of the md partition, etc. For writing, raid10,f2 should be half
> the speed of raid0. This should go both for sequential and random read/writes.
> But I would like to have real test numbers.

Me too. Thanks. Are there any other raid levels that may count here? Raid-10 with some other options?

-- Janek Kozicki
which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)
Hello,

Yes, I know that some levels give faster reading and slower writing, etc. I want to talk here about typical workstation usage: compiling stuff (like the kernel), editing openoffice docs, browsing the web, reading email (email: I have a maildir-style format, and in the boost mailing list directory I have 14000 files (posts); opening this directory takes circa 10 seconds in sylpheed). Moreover, opening .pdf files, more compiling of C++ stuff, etc...

I have a remote backup system configured (with rsnapshot), which does backups two times a day. So I'm not afraid of losing all my data due to a disc failure. I want absolute speed.

Currently I have Raid-0, because I was thinking that this one is the fastest. But I also don't need twice the capacity; I could use Raid-1 as well, if it were faster. Due to the recent discussion about Raid-10,f2 I'm getting worried that Raid-0 is not the fastest solution, but that Raid-10,f2 is faster instead. So how is it really - which level gives maximum overall speed?

I would like to make a benchmark, but currently, technically, I'm not able to. I'll be able to do it next month, and then - as a result of this discussion - I will switch to another level and post benchmark results here.

How does overall performance change with the number of available drives? Perhaps Raid-0 is best for 2 drives, while Raid-10 is best for 3, 4 and more drives?

best regards
-- Janek Kozicki
Re: linux raid faq
David Greaves said: (by the date of Wed, 30 Jan 2008 12:46:52 +0000)

> http://linux-raid.osdl.org/index.php/Main_Page

Great idea! I believe that wikis are the best way to go.

> I have written to faqs.org but got no reply. I'll try again...
> > If I searched on google for "raid faq", the first say 5-7 items did not
> > mention raid10.
> Until people link to and use the new wiki, Google won't find it.

Everyone who has a website - link to that wiki RIGHT NOW! Then we will have a central place for all linux-raid documentation, findable with google.

There should be a link to it from vger.kernel.org, or even kernel.org itself. Mailing list admins - can you do it?

best regards.
-- Janek Kozicki
Re: linux raid faq
Keld Jørn Simonsen said: (by the date of Tue, 29 Jan 2008 20:17:55 +0100)

> Hmm, I read the Linux raid faq on
> http://www.faqs.org/contrib/linux-raid/x37.html

I've found some information in /usr/share/doc/mdadm/FAQ.gz. I'm wondering why this file is not advertised anywhere (eg. in 'man mdadm'). Does it exist only in debian packages, or what?

With 'man 4 md' I've found a little sparse info about raid10. But I still don't get it.

-- Janek Kozicki
Re: Raid over 48 disks ... for real now
Norman Elton said: (by the date of Thu, 17 Jan 2008 11:19:35 -0500)

> I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.

There is ext4 (or ext4dev) - it's an ext3 modified to support a 1024 PB (1048576 TB) filesystem size. You could check whether it's feasible. Personally I'd always stick with ext2/ext3/ext4, since it is the most widely used and thus has the best recovery tools.

-- Janek Kozicki
Re: raid10: unfair disk load?
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)

> Janek Kozicki wrote:
> > what's your kernel version? I recall that recently there have been
> > some works regarding load balancing.
>
> It was in my original email:
> The kernel is 2.6.23
>
> Strange I missed the new raid10 development you
> mentioned (I follow linux-raid quite closely).
> What change(s) you're referring to?

Oh sorry, it was a patch for raid1, not raid10: http://www.spinics.net/lists/raid/msg17708.html

I'm wondering if it could be adapted for raid10...

Konstantin Sharlaimov said: (by the date of Sat, 03 Nov 2007 20:08:42 +1000)

> This patch adds RAID1 read balancing to device mapper. A read operation
> that is close (in terms of sectors) to a previous read or write goes to
> the same mirror.

-- Janek Kozicki
Re: raid10: unfair disk load?
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 14:53:38 +0300)

> > I just noticed that with Linux software RAID10, disk
> > usage isn't equal at all, that is, most reads are
> > done from the first part of mirror(s) only.

What's your kernel version? I recall that recently there has been some work regarding load balancing.

-- Janek Kozicki
Re: raid5 reshape/resync - BUGREPORT
> ----- Message from [EMAIL PROTECTED] -----

Nagilum said: (by the date of Tue, 18 Dec 2007 11:09:38 +0100)

> >> Ok, I've recreated the problem in form of a semiautomatic testcase.
> >> All necessary files (plus the old xfs_repair output) are at:
> >>
> >> http://www.nagilum.de/md/
>
> >> After running the test.sh the created xfs filesystem on the raid
> >> device is broken and (at least in my case) cannot be mounted anymore.
> >
> > I think that you should file a bugreport
> ----- End message from [EMAIL PROTECTED] -----
>
> Where would I file this bug report? I thought this is the place?
> I could also really use a way to fix that corruption. :(

Ouch. To be honest I subscribed here just a month ago, so I'm not sure. But I haven't seen other bugreports here so far. I was expecting that there is some bugzilla?

-- Janek Kozicki
Re: raid5 reshape/resync
Nagilum said: (by the date of Tue, 11 Dec 2007 22:56:13 +0100)

> Ok, I've recreated the problem in form of a semiautomatic testcase.
> All necessary files (plus the old xfs_repair output) are at:
> http://www.nagilum.de/md/
> After running the test.sh the created xfs filesystem on the raid
> device is broken and (at least in my case) cannot be mounted anymore.

I think that you should file a bugreport, and provide in it the explanations you have given here. An automated test case that leads to xfs corruption is a neat snack for bug squashers ;-)

I wonder, however, where to report this - xfs or raid? Perhaps cross-report to both places, and write in the bugreport that you are not sure on which side the bug is.

best regards
-- Janek Kozicki
mailing list configuration (was: raid6 check/repair)
Thiemo Nagel said: (by the date of Mon, 03 Dec 2007 20:59:21 +0100)

> Dear Michael,
>
> Michael Schmitt wrote:
> > Hi folks,
>
> Probably erroneously, you have sent this mail only to me, not to the list...

I have this problem all the time on this list. It would be really nice to reconfigure the mailing list server so that "reply" does not reply to the sender but to the mailing list.

Moreover, in sylpheed I have two reply options: "reply to sender" and "reply to mailing list", and both are using the *sender* address! I doubt that sylpheed is broken - it works on nearly 20 other lists - so I conclude that the server is seriously misconfigured.

Apologies for my harsh stance. Can anyone comment on this?

-- Janek Kozicki
Re: Spontaneous rebuild
> Justin Piszcz schrieb:
> >
> > Naturally, when it is reset, the device is disconnected and then
> > re-appears, when MD sees this it rebuilds the array.

The least you can do is add an internal bitmap to your raid; this will make rebuilds faster... :-/

-- Janek Kozicki
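Adding the bitmap to an existing array is one command (the array should be clean and idle while you do it; device name is an example):

# mdadm --grow /dev/md0 --bitmap=internal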
Re: Kernel 2.6.23.9 / P35 Chipset + WD 750GB Drives (reset port)
Justin Piszcz said: (by the date of Sun, 2 Dec 2007 04:11:59 -0500 (EST))

> The badblocks did not do anything; however, when I built a software raid 5
> and then performed a dd:
>
> /usr/bin/time dd if=/dev/zero of=fill_disk bs=1M
>
> I saw this somewhere along the way:
>
> [42332.936706] ata5.00: spurious completions during NCQ issue=0x0
> SAct=0x7000 FIS=004040a1:0800
> [42333.240054] ata5: soft resetting port

I know nothing about NCQ ;) But I find it interesting that *slower* access worked fine while *fast* access didn't. If I understand you correctly:

- badblocks is slower, and you said that it worked flawlessly, right?
- reading from /dev/zero is the fastest thing you can do, and it fails...

I'd check the jumpers on the HDD, and if there is one, set it to 1.5 Gb speed instead of the default 3.0 Gb, or something along that way. I remember seeing such a jumper on one of my HDDs (I don't remember the exact speed numbers though). Also on one forum I remember reading about problems occurring when an HDD was working at maximum speed, faster than the IO controller could handle.

I dunno. It's just what came to my mind...

-- Janek Kozicki
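Apart from the jumper, NCQ itself can be switched off from software, which would help separate the two suspects - a sketch, assuming the drive is sda:

# cat /sys/block/sda/device/queue_depth      (e.g. 31 while NCQ is active)
# echo 1 > /sys/block/sda/device/queue_depth (depth 1 effectively disables NCQ)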
Re: Kernel 2.6.23.9 / P35 Chipset + WD 750GB Drives (reset port)
Justin Piszcz said: (by the date of Sat, 1 Dec 2007 07:23:41 -0500 (EST))

> >> dd if=/dev/zero of=/dev/sdc
>
> The purpose is with any new disk its good to write to all the blocks and
> let the drive do all of the re-mapping before you put 'real' data on it.
> Let it crap out or fail before I put my data on it.

Better to use badblocks. It writes the data, then reads it back afterwards. In this example the data is semi-random (quicker than /dev/urandom ;)

badblocks -c 10240 -s -w -t random -v /dev/sdc

-- Janek Kozicki
Re: telling mdadm to use spare drive.
Richard Scobie said: (by the date of Fri, 09 Nov 2007 10:32:08 +1300)

> This was the bug I was thinking of:
>
> http://marc.info/?l=linux-raid&m=116003247912732&w=2

That bug report says that it happens only with mdadm 1.x: "If a drive is added to a raid1 using older tools (mdadm-1.x or raidtools) then it will be included in the array without any resync happening." But I have here:

# mdadm --version
mdadm - v2.5.6 - 9 November 2006

Maybe I stumbled on another bug?

-- Janek Kozicki
Re: telling mdadm to use spare drive.
Richard Scobie said: (by the date of Thu, 08 Nov 2007 08:13:19 +1300)

> What kernel and RAID level is this?
>
> If it's RAID 1, I seem to recall there was a relatively recently fixed
> bug for this.

Debian etch, stock install: Linux 2.6.18-5-k7 #1 SMP i686 GNU/Linux

The problem was with RAID 5. But I also have RAID 1 there, and after --add those drives resynced automatically.

-- Janek Kozicki
Re: telling mdadm to use spare drive.
Goswin von Brederlow said: (by the date of Wed, 07 Nov 2007 10:17:51 +0100)

> Strange. That is exactly how I always do it and it always just worked.
> mdadm should start syncing on any spare as soon as a disk fails or you
> add the spare to a degraded array afaik. No special "start now"
> interaction needed.

Thanks for your confirmation. I cannot explain this behaviour - I just started using mdadm. If anybody here wants, I can remove the drive and add it again, to see if I can duplicate this "bug" (?). If so, tell me what debug information you need and I will provide it.

Anyway, it seems that this command:

mdadm --assemble --update=resync /dev/md1 /dev/hda3 /dev/sda3 /dev/hdc3

worked, because `mdadm -D /dev/md1` says that the array is in "State : active" (not degraded).

best regards
-- Janek Kozicki
Re: question about mdadm + grub interaction
Steve Lane said: (by the date of Mon, 5 Nov 2007 16:39:25 -0800)

> Greetings. In order to insure that a Debian stock kernel (i.e. the
> kernel installed from the linux-image-2.6.22-2-686-bigmem package) boots
> correctly off of a mdadm RAID 1 set of two disks if one of the disks is
> dead, do we:

Maybe this will help you? It helped me a lot: http://www.spinics.net/lists/raid/msg17653.html

-- Janek Kozicki
Re: man mdadm - suggested correction.
Janek Kozicki said: (by the date of Mon, 5 Nov 2007 11:58:15 +0100)

> I did read 'man mdadm' from top to bottom, but I totally forgot to
> look into /usr/share/doc/mdadm !

PS: this is why I asked so many questions on this list ;-)

-- Janek Kozicki
man mdadm - suggested correction.
Hello,

I did read 'man mdadm' from top to bottom, but I totally forgot to look into /usr/share/doc/mdadm ! And there is much more there - FAQs, recipes, etc!

Can you please add to the manual, under 'SEE ALSO', a reference to /usr/share/doc/mdadm ?

thanks :-)
-- Janek Kozicki
telling mdadm to use spare drive.
Hi,

I finished copying all the data from the old disc hdc to my shiny new RAID5 array (/dev/hda3 /dev/sda3 missing). The next step is to create a partition on hdc and add it to the array. And so I did:

# mdadm --add /dev/md1 /dev/hdc3

But then I had a problem - /dev/hdc3 became a spare; it didn't resync automatically:

# mdadm -D /dev/md1
[...]
    Number   Major   Minor   RaidDevice State
       0       3        3        0      active sync   /dev/hda3
       1       8        3        1      active sync   /dev/sda3
       2       0        0        2      removed
       3      22        3        -      spare   /dev/hdc3

I wanted to tell mdadm to use the spare device, and I wasn't sure how to do this, so I tried the following:

# mdadm --stop /dev/md1
# mdadm --assemble --update=resync /dev/md1 /dev/hda3 /dev/sda3 /dev/hdc3

Now, 'mdadm -D /dev/md1' says:

[...]
    Number   Major   Minor   RaidDevice State
       0       3        3        0      active sync   /dev/hda3
       1       8        3        1      active sync   /dev/sda3
       3      22        3        2      spare rebuilding   /dev/hdc3

I'm writing here just because I want to be sure that I added this new device correctly; I don't want to make any stupid mistake here...

# cat /proc/mdstat
md1 : active raid5 hda3[0] hdc3[3] sda3[1]
      966807296 blocks super 1.1 level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  6.2% (30068096/483403648) finish=254.9min speed=29639K/sec
      bitmap: 8/8 pages [32KB], 32768KB chunk

Was there a better way to do this? Is it OK?

-- Janek Kozicki
Re: stride / stripe alignment on LVM ?
Doug Ledford said: (by the date of Sat, 03 Nov 2007 14:40:48 -0400)

> so you really only need to align the
> lvm superblock so that data starts at 128K offset into the raid array.

Sorry, I thought that it would be easier to figure this out experimentally - put LVM here or there, write 128k of data to the disc (inside the LVM partition), then see (with hexedit) whether this data is really split across several discs or not. In fact I even managed to find where the LVM superblock starts inside the RAID; the problem for me was that I wasn't sure where it ends and where the actual data starts - and *THAT* data has to be aligned at the 128K offset. Now I know that I should simply look more carefully at the LVM manuals, to see exactly what the size of the LVM superblock is.

So I was unable to do that simple 128k test, like this:

# dd if=./128k_of_0xAA of=/dev/lvm_raid5/test

then looking for 128k (or 64k, or 32k) of 0xAA on hda3 and sda3. Most of the time was spent searching for the pattern (scanning the disc), so my efficiency was low. In fact I should have simply used smaller test partitions (eg. hda4, sda4 with just 20MB), so scanning would be faster. With smaller test partitions perhaps I'd have enough time to overcome the main difficulty - dealing with a degraded array (and encoded data).

Possibly I'll try this next time, when I buy a fourth disc for the array (next year), so I'll be able to have two degraded arrays of two discs at the same time. Then I could use LVM again and "dd" all the data from the old array to the new one, then grow the new array to use all 4 HDDs.

Currently I just formatted /dev/md1 with ext3, without LVM. Thanks, I have to remember that in 1.1 the superblock is at the front. And I shouldn't forget about the bitmap either :)

> If you run mdadm -D /dev/md1 it will tell you the data offset
> (in sectors IIRC).

Uh, I don't see it:

backup:~# mdadm -D /dev/md1
/dev/md1:
        Version : 01.01.03
  Creation Time : Fri Nov  2 23:35:37 2007
     Raid Level : raid5
     Array Size : 966807296 (922.02 GiB 990.01 GB)
    Device Size : 966807296 (461.01 GiB 495.01 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Nov  3 20:59:06 2007
          State : active, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

           Name : backup:1  (local to host backup)
           UUID : 22f22c35:99613d52:31d407a6:55bdeb84
         Events : 39975

    Number   Major   Minor   RaidDevice State
       0       3        3        0      active sync   /dev/hda3
       1       8        3        1      active sync   /dev/sda3
       2       0        0        2      removed

thanks again for all your helpful responses!

-- Janek Kozicki
Re: stride / stripe alignment on LVM ?
Bill Davidsen said: (by the date of Fri, 02 Nov 2007 09:01:05 -0400)

> So I would expect this to make a very large performance difference, so
> even if it work it would do so slowly.

I was trying to find out the stripe layout for a few hours, using hexedit and dd. And I'm baffled:

md1 : active raid5 hda3[0] sda3[1]
      969907968 blocks super 1.1 level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      bitmap: 8/8 pages [32KB], 32768KB chunk

I fill md1 with random data:

# dd bs=128k count=64 if=/dev/urandom of=/dev/md1
# hexedit /dev/md1

I copy/paste (and remove formatting) the first 32 bytes of /dev/md1; now I search for those 32 bytes in /dev/hda3 and in /dev/sda3:

# hexedit /dev/hda3
# hexedit /dev/sda3

And no luck! I'd expect the first bytes of /dev/md1 to be at the beginning of the first drive (hda3). I pick the next 20 bytes from /dev/md1, and I can find them on /dev/hda3 starting just after address 0x1. The bytes before and after those 20 bytes are similar to those on /dev/md1.

So now I hexedit /dev/md1 and write 32 bytes of 0xAA by hand. Then I look at address 0x1 on /dev/hda3 - and there is no 0xAA at all.

Well... it's not critical for me, so you can just ignore my mumbling; I was just wondering what obvious thing I missed. There seems to be more XORing (or sth. else) involved than I expected. Maybe the disc did not flush the writes, and what I see on /dev/md1 is not yet present on /dev/hda3 (how's that possible?).

Nevertheless, I think that I will resign from LVM and just put ext3 on /dev/md1, to avoid this stripe misalignment. I wanted LVM here only because I might have wanted to use lvm-snapshot, but I can live without that. I can already grow /dev/md1 without LVM, using mdadm grow.

best regards
-- Janek Kozicki
does mdadm try to use the fastest HDD ?
Hello,

My three HDDs have the following speeds:

hda - speed 70 MB/sec
hdc - speed 27 MB/sec
sda - speed 60 MB/sec

They make up a raid1 array /dev/md0 and a raid5 array /dev/md1. I wanted to ask whether mdadm tries to pick the fastest HDD during operation? Maybe I can "tell" it which HDD is preferred?

This came to my mind when I saw this:

# mdadm --query --detail /dev/md1 | grep Prefer
Preferred Minor : 1

And also in the manual:

-W, --write-mostly [...] "can be useful if mirroring over a slow link."

many thanks for all your help!

-- Janek Kozicki
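For a raid1 the manual's --write-mostly flag is per-device, so the slow hdc could be marked at creation time - an illustrative sketch only:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/hda1 --write-mostly /dev/hdc1

Reads are then directed away from the write-mostly member whenever possible.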
Re: switching root fs '/' to boot from RAID1 with grub
Doug Ledford said: (by the date of Thu, 01 Nov 2007 14:30:58 -0400)

> So, what I said is true, the MBR will search on the disk it is being run
> from for the files it needs: 0x80.

My motherboard allows me to pick a boot device if I press F11 during boot. Do you mean that no matter which HDD I choose, it will have the 0x80 number?

-- Janek Kozicki
stride / stripe alignment on LVM ?
Hello,

I have a raid5 /dev/md1, --chunk=128 --metadata=1.1. On it I have created an LVM volume group called 'raid5', and finally a logical volume 'backup'. Then I formatted it with the command:

mkfs.ext3 -b 4096 -E stride=32 -E resize=550292480 /dev/raid5/backup

And because LVM is putting its own metadata on /dev/md1, the ext3 partition is shifted by some (unknown to me) amount of bytes from the beginning of /dev/md1. I was wondering how big the shift is, and would it hurt performance/safety if the `ext3 stride=32` didn't align perfectly with the physical stripes on the HDDs?

PS: the resize option is to make sure that I can grow this fs in the future.
PPS: I looked in the archive but didn't find this question asked before. I'm sorry if it really was asked.

-- Janek Kozicki
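For what it's worth, the arithmetic behind that stride value: stride = chunk size / block size = 128 KiB / 4 KiB = 32 blocks. With 3 discs in raid5 there are 2 data discs per stripe, so a full stripe is 64 blocks; newer e2fsprogs (1.40+) can be told this too - a sketch, if re-running mkfs is an option:

mkfs.ext3 -b 4096 -E stride=32,stripe-width=64 -E resize=550292480 /dev/raid5/backup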
Re: xosview + RAID (was: switching root fs '/'...)
Doug Ledford said: (by the date of Wed, 31 Oct 2007 13:38:08 -0400)

> Now that grub's installed, you won't have to do anything manual again.
> The only time you might have to repeat that grub install procedure is if
> you lose a drive and need to add a new one back in; then the new one
> will need it.

Great! Many thanks again.

Another thing... I'm using xosview to monitor my system activity (others prefer gkrellm, or sth else ;-). To see RAID I can run xosview like this:

xosview -xrm "xosview*RAID:true" -xrm "xosview*RAIDdevicecount:2"

but I have three devices (md0, md1, md2), so I should use RAIDdevicecount:3, which gives the following error:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  St9bad_alloc
Aborted

Is anybody else here using xosview?

-- Janek Kozicki
Re: switching root fs '/' to boot from RAID1 with grub
Thanks! It all worked really great. I had to correct only one typo:

> device /dev/sda (hd0)

into this:

device (hd0) /dev/sda

I have debian etch here, and I didn't have to rebuild the kernel's initrd. Also I didn't have to disconnect the hda drive; I just selected a different boot device during boot (without even going into the bios, or opening the case). In fact... the only "trouble" I had here is that I needed to connect a PS2 keyboard to this box ;-) (it's just a backup machine). And now I have a full RAID1 array.

Now just two questions:

1. when I `shutdown -r now` I see a worrying message at the end:

Stopping array md2 done (stopped)
Stopping array md1 done (stopped)
Stopping array md0 failed (busy)
Will now reboot
md: Stopping all md devices
md: md0 still in use

Is that ok?

2. Will grub update all drives automatically, for instance when I upgrade the kernel with 'aptitude upgrade'? Or do I need to repeat your grub instructions each time a new kernel is installed?

thanks again!
-- Janek Kozicki
Re: switching root fs '/' to boot from RAID1 with grub
Janek Kozicki said: (by the date of Tue, 30 Oct 2007 21:07:21 +0100)

> then I did 'dd if=/dev/hda1 of=/dev/md0'. I carefully checked that
> the partition sizes match exactly. So now md0 contains the same thing
> as hda1.

In fact, to check the size I was using 'fdisk -l', because it gives the size in bytes (not in blocks), like this:

backup:~# fdisk -l /dev/md0
Disk /dev/md0: 1003 MB, 1003356160 bytes

And the same for /dev/hda1. But that's a detail, just so you know that I dd'ed my root partition correctly and can mount /dev/md0 without problems.

-- Janek Kozicki
switching root fs '/' to boot from RAID1 with grub
Hello,

I have an old HDD and two new HDDs:

- hda1 - my current root filesystem '/'
- sda1 - part of raid1 /dev/md0 [U_U]
- hdc1 - part of raid1 /dev/md0 [U_U]

I want all of hda1, sda1, hdc1 to be a raid1. I remounted hda1 read-only, then I did 'dd if=/dev/hda1 of=/dev/md0'. I carefully checked that the partition sizes match exactly. So now md0 contains the same thing as hda1, but hda1 is still outside of the array. I want to add it to the array, but before I do this I think that I should boot from /dev/md0? Otherwise I might hose this system.

I tried `grub-install /dev/sda1` (assuming that grub would see no problem with reading a raid1 partition, and boot from it, until mdadm detects the array). I tried `grub-install /dev/sda` as well, and also /dev/hdc and /dev/hdc1. I turned off the 'active' flag for partition hda1 and turned it on for hdc1 and sda1. But grub is still booting from hda1.

I did all this with version 1.1:

mdadm --create --verbose /dev/md0 --chunk=64 --level=raid1 \
      --metadata=1.1 --bitmap=internal --raid-devices=3 /dev/sda1 \
      missing /dev/hdc1

I'm NOT using LVM here. Can someone tell me how I should switch grub to boot from /dev/md0? After the boot I will add hda1 to the array, and all three partitions should become a raid1.

-- Janek Kozicki
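For reference, the classic grub-legacy shell procedure for installing onto each mirror member by hand - a sketch assuming '/' is the first partition; it matches the "device (hd0) /dev/sda" fix mentioned in the follow-up above, and is repeated per disk, mapping each one in turn to (hd0):

# grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit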
Re: Test 2
Daniel L. Miller said: (by the date of Thu, 25 Oct 2007 16:32:31 -0700)

> Thanks for the test responses - I have re-subscribed...if I see this
> myself...I'm back!

I know that gmail doesn't let you see your own posts on mailing lists, only posts from other people. Maybe you have a similar problem?

-- Janek Kozicki
Re: deleting mdadm array?
David Greaves said: (by the date of Thu, 25 Oct 2007 10:55:44 +0100)

> How much later? This will, of course, destroy any data on the array (!) and
> you'll need to mkfs again...

Just after; I hadn't even created an LVM volume on it (not to mention formatting it).

> Also, if you don't mind me asking: why did you choose version 1.1 for the
> metadata/superblock version?

In "time to deprecate old RAID formats" Doug Ledford said that 1.1 is safest when used with LVM. I wish that this info would get into the man page. I just hope that grub will be able to boot from LVM on a '/' partition raid1 (version 1.1); I didn't check this yet.

Doug Ledford said: (by the date of Fri, 19 Oct 2007 12:15:34 -0400)

> 1.0, 1.1, and 1.2 are the same format, just in different positions on
> the disk. Of the three, the 1.1 format is the safest to use since it
> won't allow you to accidentally have some sort of metadata between the
> beginning of the disk and the raid superblock (such as an lvm2
> superblock), and hence whenever the raid array isn't up, you won't be
> able to accidentally mount the lvm2 volumes, filesystem, etc. (In worst
> case situations, I've seen lvm2 find a superblock on one RAID1 array
> member when the RAID1 array was down, the system came up, you used the
> system, the two copies of the raid array were made drastically
> inconsistent, then at the next reboot, the situation that prevented the
> RAID1 from starting was resolved, and it never knew it failed to start
> last time, and the two inconsistent members were put back into a clean
> array.) So, deprecating any of these is not really helpful. And you
> need to keep the old 0.90 format around for back compatibility with
> thousands of existing raid arrays.

-- Janek Kozicki
deleting mdadm array?
Hello,

I just created a new array /dev/md1 like this:

mdadm --create --verbose /dev/md1 --chunk=64 --level=raid5 \
      --metadata=1.1 --bitmap=internal \
      --raid-devices=3 /dev/hdc2 /dev/sda2 missing

But later I changed my mind, and I wanted to use chunk 128. Do I need to delete this array somehow first, or can I just create the array again (overwriting the current one)?

-- Janek Kozicki
Re: Partitionable raid array... How to create devices ?
BERTRAND Joël said: (by the date of Tue, 16 Oct 2007 10:22:46 +0200)

> Root gershwin:[/dev] > ls -l md*
> brw-rw---- 1 root disk  9,   0 Oct 15 10:29 md0
> brw-rw---- 1 root disk  9,   1 Oct 15 10:29 md1
> brw-rw---- 1 root disk  9, 127 Oct 16 09:59 md127
> brw-rw---- 1 root disk  9,   2 Oct 15 10:29 md2
> brw-rw---- 1 root disk  9,   3 Oct 15 10:29 md3
> brw-rw---- 1 root disk  9,   4 Oct 15 10:29 md4
> brw-rw---- 1 root disk  9,   5 Oct 15 10:29 md5
> brw-rw---- 1 root disk  9,   6 Oct 15 10:29 md6
> brw-rw---- 1 root disk  9,   7 Oct 15 10:29 md7
> brw-rw---- 1 root disk  9,   8 Oct 15 10:29 md8
> crw-rw---- 1 root root 10,  63 Oct 15 10:29 mdesc
> brw-rw---- 1 root disk  9, 127 Oct 16 10:03 mdp0
> ...

Crazy. Much better to create just /dev/md0 and use LVM: http://tldp.org/HOWTO/Software-RAID-HOWTO-11.html

-- Janek Kozicki
Re: very degraded RAID5, or increasing capacity by adding discs
Michael Tokarev said: (by the date of Tue, 09 Oct 2007 02:52:06 +0400)

> Janek Kozicki wrote:
> > Hello,
> >
> > Recently I started to use mdadm and I'm very impressed by its
> > capabilities.
> >
> > I have raid0 (250+250 GB) on my workstation. And I want to have
> > raid5 (4*500 = 1500 GB) on my backup machine.
>
> Hmm. Are you sure you need that much space on the backup, to
> start with? Maybe better backup strategy will help to avoid
> hardware costs? Such as using rsync for backups as discussed
> on this mailinglist about a month back (rsync is able to keep
> many ready to use copies of your filesystems but only store
> files that actually changed since the last backup, thus
> requiring much less space than many full backups).

Yes, exactly. I am using rsnapshot, which is based on rsync and hardlinks. It works exceptionally well - to my knowledge it's the best backup solution I have ever seen. With plugin scripts I am even mounting an lvm-snapshot of the drive being backed up.

From the command 'rsnapshot du' I can see how much space is used (each directory tree is a full backup, made with hardlinks):

278G    /backup/.sync
454M    /backup/hourly.0/
515M    /backup/hourly.1/
527M    /backup/daily.0/
30G     /backup/daily.1/
21G     /backup/daily.2/
561M    /backup/daily.3/
1.6G    /backup/daily.4/
3.0G    /backup/daily.5/
594M    /backup/daily.6/
1.4G    /backup/weekly.0/
11G     /backup/weekly.1/
9.3G    /backup/weekly.2/
23G     /backup/weekly.3/
33G     /backup/monthly.0/
3.7G    /backup/monthly.1/
415G    total

> It's definitely not possible with raid5. Only option is to create a
> raid5 array consisting of less drives than it should contain at the
> end, and reshape it when you get more drives, as others noted in this
> thread. But do note the following points: <..snip..>

Yes, I am aware of all those problems you listed. The data I'm talking about is already a backup, while the real data is on my workstation (a different linux box - albeit only the newest version of my data). Only losing both of them simultaneously would be catastrophic for me. So I am inclined to do some experiments with the backup drives' configuration, while still doing my best not to lose it. An exercise, you know :)

> > is it just a pipe dream?
>
> I'd say it is... ;)

Oh well. But I learnt a lot from your answers, thanks a lot!

PS: I'm receiving some mailing list posts twice; does anybody know why? I'm used to mailman, but it looks like majordomo is configured differently - I cannot find a configure page. (I just subscribed.)

-- Janek Kozicki
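The retention in that listing corresponds to an rsnapshot.conf along these lines - a sketch only (rsnapshot requires tabs, not spaces, between the fields, and the source host/path is made up):

snapshot_root   /backup/
interval        hourly  2
interval        daily   7
interval        weekly  4
interval        monthly 2
backup          root@workstation:/      workstation/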
Re: very degraded RAID5, or increasing capacity by adding discs
Neil Brown said: (by the date of Tue, 9 Oct 2007 13:32:09 +1000)

> On Tuesday October 9, [EMAIL PROTECTED] wrote:
> >
> > Problems at step 4.: 'man mdadm' doesn't tell if it's possible to
> > grow an array to a degraded array (non existant disc). Is it possible?
>
> Why not experiment with loop devices on files and find out?
>
> But yes: you can grow to a degraded array providing you specify a
> --backup-file.

Thanks! I'll test this on loopback devices :)

-- Janek Kozicki
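A rehearsal on loop devices could look like this sketch (all names are throwaway):

# dd if=/dev/zero of=/tmp/d0 bs=1M count=100    (repeat for d1, d2)
# losetup /dev/loop0 /tmp/d0                    (likewise loop1, loop2)
# mdadm --create /dev/md9 --level=5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2

Growing to 4 devices while only 3 exist should then leave a degraded 4-disc array:

# mdadm --grow /dev/md9 --raid-devices=4 --backup-file=/tmp/md9-grow.bak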
Re: very degraded RAID5, or increasing capacity by adding discs
Janek Kozicki said: (by the date of Tue, 9 Oct 2007 00:25:50 +0200)

> Richard Scobie said: (by the date of Tue, 09 Oct 2007 08:26:35 +1300)
>
> > No, but you can make a degraded 3 drive array, containing 2 drives and
> > then add the next drive to complete it.
> >
> > The array can then be grown (man mdadm, GROW section), to add the fourth.
>
> Oh, good. Thanks, I must've been blind that I missed this.
> This completely solves my problem.

Uh, actually not :) My 1st 500 GB drive is full now. When I buy a 2nd one I want to create a 3-disc degraded array using just 2 discs, one of which contains unbackupable data. Steps (sketched as commands below):

1. create a degraded two-disc RAID5 on the 1 new disc
2. copy data from the old disc to the new one
3. rebuild the array with the old and new discs (now I have 500 GB on 2 discs)
4. GROW this array to a degraded 3-disc RAID5 (so I have 1000 GB on 2 discs)
...
5. when I buy a 3rd drive I either grow the array, or just rebuild and wait with growing until I buy a 4th drive.

Problems at step 4.: 'man mdadm' doesn't tell whether it's possible to grow an array into a degraded array (with a non-existent disc). Is it possible?

PS: the fact that a degraded array will be unsafe for the data is an intended motivating factor for buying the next drive ;)

-- Janek Kozicki
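In command form the plan might be (a sketch; hdc3 is the old disc, sdb3 the new one - adjust to taste):

# mdadm --create /dev/md1 --level=5 --raid-devices=2 /dev/sdb3 missing      (step 1)
  ...copy the data onto /dev/md1, then repartition the old disc...          (step 2)
# mdadm --add /dev/md1 /dev/hdc3                                            (step 3)
# mdadm --grow /dev/md1 --raid-devices=3 --backup-file=/root/md1-grow.bak   (step 4)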
Re: very degraded RAID5, or increasing capacity by adding discs
Richard Scobie said: (by the date of Tue, 09 Oct 2007 08:26:35 +1300)

> No, but you can make a degraded 3 drive array, containing 2 drives and
> then add the next drive to complete it.
>
> The array can then be grown (man mdadm, GROW section), to add the fourth.

Oh, good. Thanks, I must've been blind to have missed this. This completely solves my problem.

-- Janek Kozicki
very degraded RAID5, or increasing capacity by adding discs
Hello,

Recently I started to use mdadm and I'm very impressed by its capabilities. I have raid0 (250+250 GB) on my workstation, and I want to have raid5 (4*500 = 1500 GB) on my backup machine. The backup machine currently doesn't have raid, just a single 500 GB drive.

I plan to buy more HDDs to have bigger space for my backups, but since I cannot afford all the HDDs at once, I face the problem of "expanding" an array. I'm able to add one 500 GB drive every few months until I have all 4 drives. But I cannot make a backup of a backup... so reformatting/copying all the data each time I add a new disc to the array is not possible for me.

Is it possible at all to create a "very degraded" raid array - one that consists of 4 drives, but has only TWO? This would involve some very tricky *hole* management on the block device... one that places holes in stripes on the block device until more discs are added to fill the holes. When the holes are filled, the block device grows bigger, and with lvm I just increase the filesystem size. This would perhaps be coupled with some "unstriping" that moves/reorganizes blocks around to fill/defragment the holes.

Is it just a pipe dream?

best regards

PS: yes, it's simple to make a degraded array of 3 drives, but I cannot afford two discs at once...

-- Janek Kozicki