Re: mailing list configuration (was: raid6 check/repair)
On Mon, Dec 03, 2007 at 09:36:32PM +0100, Janek Kozicki wrote:
> Thiemo Nagel said:     (by the date of Mon, 03 Dec 2007 20:59:21 +0100)
> > Dear Michael,
> >
> > Michael Schmitt wrote:
> > > Hi folks,
> >
> > Probably erroneously, you have sent this mail only to me, not to the list...
>
> I have a similar problem all the time on this list. It would be
> really nice to reconfigure the mailing list server, so that "reply"
> does not reply to the sender but to the mailing list.
>
> Moreover, in sylpheed I have two reply options: "reply to sender" and
> "reply to mailing list", and both use the *sender* address!
> I doubt that sylpheed is broken - it works on nearly 20 other lists -
> so I conclude that the server is seriously misconfigured.

My mutt also works with VGER's lists, so they cannot be entirely broken.
But this is something you should ask VGER's postmasters about, after you
have read the old linux-kernel list FAQ about Reply-To.

> apologies for my stance. Anyone can comment on this?
>
> Janek Kozicki

/Matti Aarnio
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k
On Thu, Jun 28, 2007 at 10:24:54AM +0200, Peter Rabbitson wrote:
> Interesting, I came up with the same results (a 1M chunk being superior)
> with a completely different raid set with XFS on top:
>
> mdadm --create \
>       --level=10 \
>       --chunk=1024 \
>       --raid-devices=4 \
>       --layout=f3 \
>       ...
>
> Could it be attributed to XFS itself?

Sort of.

/dev/md4:
        Version : 00.90.03
     Raid Level : raid5
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 4
 Active Devices : 4
Working Devices : 4
         Layout : left-symmetric
     Chunk Size : 256K

This means there are 3 x 256k of user data per stripe. I then had to
carefully tune the XFS bsize/sunit/swidth to match that:

meta-data=/dev/DataDisk/lvol0    isize=256    agcount=32, agsize=7325824 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=234426368, imaxpct=25
         =                       sunit=64     swidth=192 blks, unwritten=1
...

That is, 4k * 64 = 256k, and 64 * 3 = 192.

With that, bulk writing on the file system runs without needing to read
back blocks of disk space to calculate RAID5 parity, which happens when
the filesystem's idea of a block does not align with the RAID5 surface.
I do have LVM between the MD-RAID5 and XFS, so I also aligned the LVM to
that 3 * 256k. This alignment boosted write performance by nearly a
factor of 2 over mkfs.xfs with default parameters.

With a very wide RAID5, like the original question's, I would find it
very surprising if aligning the upper layers to the MD-RAID level were
not important as well.

Very small continuous writes do not make good use of the disk mechanics
(seek time, rotational delay), so something on the order of 128k-1024k
will speed things up -- presuming that when you write, you write many MB
at a time. Database transactions are a lot smaller, and are indeed harmed
by such large megachunk-IO oriented surfaces.

RAID levels 0 and 1 (and 10) have no need to read back parts of the
surface when a subset of a stripe is left unaltered by an incoming write.
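The alignment arithmetic above can be sketched in a few lines of shell. The variable names are illustrative only, and the mkfs.xfs su/sw form in the final comment is an alternative spelling of the same alignment (the device name is a placeholder):

```shell
#!/bin/sh
# Alignment for a 4-disk MD RAID5 with 256 KiB chunks, as in the array
# above: 3 data disks' worth of chunks make up one full stripe.
CHUNK_KB=256                       # mdadm chunk size
DATA_DISKS=3                       # raid-devices minus one parity disk
BLOCK_KB=4                         # XFS bsize=4096
SUNIT=$((CHUNK_KB / BLOCK_KB))     # 256k / 4k = 64 fs blocks per chunk
SWIDTH=$((SUNIT * DATA_DISKS))     # 64 * 3   = 192 fs blocks per stripe
echo "sunit=$SUNIT swidth=$SWIDTH"
# The same alignment can be handed to mkfs.xfs directly
# (su = stripe unit in bytes, sw = number of data disks):
#   mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/md4
```

Running it prints sunit=64 swidth=192, matching the xfs_info output above.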
A DB application on top of the filesystem would benefit if we had a way
for it to ask about these alignment boundaries, so it could read a whole
alignment block even when it writes out only a subset of it. (The theory
being that those same blocks would then exist in the memory cache and be
available for write-back parity calculation.)

> Peter

/Matti Aarnio
Re: spam on the list
On Sat, May 06, 2006 at 12:13:48PM +0200, Shai wrote:
> Date: Sat, 6 May 2006 12:13:48 +0200
> From: Shai <[EMAIL PROTECTED]>
> To: linux-raid@vger.kernel.org
> Subject: spam on the list
>
> Hi,
>
> Spam arrives to the list ...
> Is the list closed to un-registered users?

FAQ:  http://www.tux.org/lkml/#s3-14

> Shai
Re: I dropped 42 Lbs in 4 days
On Mon, Apr 03, 2006 at 11:04:48AM -0700, Technomage wrote:
> pardon my asking but...
>
> HUH?!?!?

Sometimes spam does leak through to the lists. How and why is explained
in the LKML FAQ.

> On Monday 03 April 2006 17:46, Alice wrote:
> > I lost 30lbs in
> > w eeks

/Matti Aarnio
Re: Backup for RAID-5 array ?
On Wed, Nov 01, 2000 at 04:38:34PM +0000, Ian Thurlbeck wrote:
> Dear All,
>
> How do people back up their RAID arrays? I have a DDS-4 tape waiting
> to receive my files, but the standard "dump" program doesn't like the
> raid device /dev/md0:

On one hand there is the 'dump' package, from which /sbin/dump comes;
on the other hand, there is the 'e2fsprogs' package with /sbin/dumpe2fs.
This, at least, on Red Hat 6.2.

> root # /sbin/dump -0f myserver:/dev/rmt/0h /dev/md0
> ...
> /dev/hda1: Ext2 inode is not a directory while mapping files in dev/md0
> ...
> Ian
>
> Ian Thurlbeck    http://www.stams.strath.ac.uk/

/Matti Aarnio
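The distinction matters because /sbin/dumpe2fs only prints ext2 metadata and cannot write backups, while /sbin/dump does level-based filesystem dumps. As a sketch, a hypothetical dry-run wrapper (backup_md is not a real tool; it only validates the level and prints the command it would run, using the same arguments as the invocation in the original mail):

```shell
#!/bin/sh
# Hypothetical dry-run wrapper around 'dump' (from the 'dump' package,
# NOT e2fsprogs' dumpe2fs): check the dump level, then print the
# command that would be executed.
backup_md() {
  level=$1; tape=$2; dev=$3
  case "$level" in
    [0-9]) ;;                      # dump levels are 0 (full) through 9
    *) echo "bad dump level: $level" >&2; return 1 ;;
  esac
  echo "would run: dump -${level}f $tape $dev"
}

backup_md 0 myserver:/dev/rmt/0h /dev/md0
```

Whether that command then succeeds still depends on the filesystem on /dev/md0 being one that dump understands.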