8 TB. people who want to push this are probably using ext4 already.
ext3 has supported up to 16 TB for quite some time. It works fine for me:
thanks. 16 TB makes sense (2^32 blocks * 4 KiB per block).
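to spell out that arithmetic (just a sanity check of the figures above):

  2^32 blocks * 4 KiB/block = 2^32 * 2^12 bytes = 2^44 bytes = 16 TiB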
I'm also interested in hearing people's opinions about LVM / EVMS.
With LVM it is possible to have several raid5 and raid6 arrays:
e.g. 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
have 14 HDDs, five of them being extra, for safety/redundancy
purposes.
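a minimal sketch of such a layout, glued together with LVM (device names,
array numbers and the volume-group name are all assumptions, not from the
thread):

  # three underlying arrays: two 5-disk raid6, one 4-disk raid5
  mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[abcde]1
  mdadm --create /dev/md1 --level=6 --raid-devices=5 /dev/sd[fghij]1
  mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[klmn]1
  # turn them into LVM physical volumes and pool them
  pvcreate /dev/md0 /dev/md1 /dev/md2
  vgcreate bigvg /dev/md0 /dev/md1 /dev/md2
  # one logical volume spanning all three arrays
  lvcreate -l 100%FREE -n data bigvg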
yes, but what about memory? I speculate that this is an Intel-based
system that is relatively memory-starved.
Yes, it's an Intel system, since the vendor still has problems delivering AMD quad-cores.
Anyway, I don't believe the system's memory bandwidth is only
6 x 280 MB/s = 1680 MB/s (280 MB/s is the
I know this is a high-end configuration, but no latency-critical component
is at any limit: 4 CPUs are idling, and the PCI-X busses are
far from being saturated.
optimize for density, so they have wide zones, which therefore show a
larger decrease in bandwidth.
I have an array of 4 Maxtor SATA drives, and
raw read performance at the end of the disk is 38 MB/s compared to 62 MB/s
at the beginning.
that's pretty typical.
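an easy way to see the falloff yourself (the device name and DISK_MB, the
disk size in MB, are placeholders; do this on an otherwise idle disk):

  # outer (fast) zone: first GB
  dd if=/dev/sda of=/dev/null bs=1M count=1024
  # inner (slow) zone: last GB
  dd if=/dev/sda of=/dev/null bs=1M count=1024 skip=$((DISK_MB - 1024))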
regards, mark hahn.
In its current implementation write-back mode acknowledges writes before
they have reached non-volatile media.
which is basically normal for unix, no?
are you planning to support barriers? (which are the block system's way
of supporting filesystem atomicity).
times, they were within a factor of two of one-track seek times, which explains
why a same-family disk with more heads/surfaces shows only a slight speedup.
regards, mark hahn.
The fact that you mention you are using partitions on disks that
possibly have other partitions doing other things means raw performance
will be compromised anyway.
with normal unraided swap (partition or file), swapouts are not a performance
problem, since they're lazy, relatively cheap,
there are billions of studies
(in medical/behavioral/social fields) which assume large numbers of more
or less hidden variables, and which still manage good success...
regards, mark hahn.
that is, not just read and checked, possibly with parity fixed,
but all blocks read and rewritten (with verify, I suppose!)
this starts to get a bit hair-raising to have entirely in the kernel -
I wonder if anyone is thinking about how to pull some such activity
out into user-space.
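the read-and-check (or check-and-fix-parity) half already exists in recent
md kernels via sysfs, for what it's worth (md0 is just an example):

  # read everything and count inconsistencies
  echo check > /sys/block/md0/md/sync_action
  # read everything and rewrite parity/mirrors that disagree
  echo repair > /sys/block/md0/md/sync_action
  # progress appears in /proc/mdstat; results show up here:
  cat /sys/block/md0/md/mismatch_cnt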
regards, mark
engineer on the failure modes
they worry about...
regards, mark hahn.
| if a discrete resistor has a 1e9 hour MTBF, 1k of them are 1e6
That's not actually true. As a (contrived) example, consider two cases.
if you know nothing else, it's the best you can do. it's also a
conservative estimate (where conservative means to expect a failure sooner).
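the model behind that rule of thumb, made explicit (the standard assumption,
not stated in the thread, is independent parts with exponential lifetimes,
so failure rates simply add):

  MTBF_system = 1 / (N / MTBF_part) = 1e9 h / 1e3 = 1e6 h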
In contrast, ever since these holes appeared, drive failures became the norm.
wow, great conspiracy theory! maybe the hole is plugged at
the factory with a substance which evaporates at 1/warranty-period ;)
seriously, isn't it easy to imagine a bladder-like arrangement that
permits
this is often much nicer than the default random OOM slaughter.
(you probably also need to adjust vm.overcommit_ratio with
some knowledge of your MemTotal and SwapTotal.)
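a minimal sketch of the knobs involved (the ratio value is an arbitrary
example, not a recommendation):

  # strict accounting: refuse allocations beyond the commit limit
  sysctl -w vm.overcommit_memory=2
  # commit limit = SwapTotal + overcommit_ratio% of MemTotal
  sysctl -w vm.overcommit_ratio=80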
regards, mark hahn.
gives a speed of only about 5000K/sec, and a HIGH load average:
# uptime
20:03:55 up 8 days, 19:55, 1 user, load average: 11.70, 4.04, 1.52
loadavg is a bit misleading - it doesn't mean you had 11 runnable jobs.
you might just have more jobs waiting on IO, being starved by the
IO done by
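one way to tell the two apart - on Linux, loadavg counts tasks blocked in
uninterruptible IO sleep as well as runnable ones:

  # column r = runnable tasks, column b = tasks blocked on IO
  vmstat 1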
is used
somewhat naively - that all IO goes through it. the most important
blocks would be parity and the ends of a write that partially update an
underlying chunk. (conversely, don't bother caching anything which
can be blindly written to disk.)
regards, mark hahn.
stride=stripe-size
Configure the filesystem for a RAID array with
stripe-size filesystem blocks per stripe.
my understanding (hey, I even had a quick look at the source) is that
you want blocksize * stride = the raid chunk size, i.e. stride = chunk / blocksize.
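concretely, under that reading, with 64K chunks and 4K blocks (the device
name is an example; older e2fsprogs spelled the option -R stride=):

  # stride = 64K chunk / 4K block = 16
  mke2fs -j -b 4096 -E stride=16 /dev/md0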
I have an AMD K7 machine with an 8-way RAID5 array on a Marvell-based
controller (sata_mv). When data is coming in
K7 is pretty wimpy by today's standards - really the K8 was AMD's
first solid, server-quality platform. in addition, you appear to have
it under pretty high memory pressure.
Does that make sense? Has the format changed for initrds? I also noticed
that the new initrds have a script called init instead of linuxrc.
see Documentation/filesystems/ramfs-rootfs-initramfs.txt
in the kernel sources.
_that_ is how you should choose your storage architecture...
regards, mark hahn.
My question: What are the chunk and usable storage sizes per stripe for four
discs in RAID-0 on an ICH7R configured for a 128KB strip?
raid0 always has 100% usable; configuring it is deciding how much
concurrency you want.
if your writes are 64K and you have 4 disks, your max concurrency
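spelling out the numbers in that example (nothing here beyond the figures
already given): 4 disks x 128 KB strip = 512 KB per full stripe, all of it
usable in raid0. a 64 KB write fits inside a single 128 KB strip, so it
touches exactly one disk, and up to four such writes can be serviced in
parallel.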
There's a much easier/simpler way to set the default scheduler. As
personally, I don't see any point in worrying about the default at
compile time or boot time:
for f in `find /sys/block/* -name scheduler`; do echo cfq > $f; done
I had no idea about this particular configuration requirement. None of
just to be clear: it's not a requirement. if you want the very nice
auto-assembling behavior, you need to designate the auto-assemblable
partitions. but you can assemble manually without 0xfd partitions
(even if that's in
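for instance, manual assembly needs nothing but the member devices (the
names here are assumptions):

  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
  # or let mdadm use whatever /etc/mdadm.conf describes
  mdadm --assemble --scan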
I created my array in 1/2003, don't know versions of kernel or mdadm I
was using then.
did you have /etc/*md* related config files? some distros use
them to assemble during boot (not quite the same as 0xfd auto-assembly,
but still pretty auto).
Given my situation over the past few days,
I strongly believe it is not correct to let the kernel auto-assemble devices;
kernel auto-assembly should be disabled and activation should be handled
by mdadm only!
it's a convenience/safety tradeoff, like so many other cases.
without kernel auto-assembly, it's somewhat more annoying to
boot onto
I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
Isn't that a little slow?
what bs parameter did you give to dd? it should be at least 3*chunk
(probably 3*64k if you used defaults.)
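i.e. something like this, with the path being a placeholder:

  dd if=/path/to/image.iso of=/dev/null bs=192k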
or zero the superblock or both...
More questions: is the raid superblock the same as an ordinary file system
superblock?
iirc, the raid SB is at the end, and most FS's base theirs at the beginning.
Zero the superblock - the orphaned one, I assume? This is not like zero it
and linux
What do I need to do when I want to install a different distro on the machine
with a raid5 array?
Which files do I need? /etc/mdadm.conf? /etc/raidtab? both?
MD doesn't need any files to function, since it can auto-assemble
arrays based on their superblocks (for partition-type 0xfd).
Now the devices all have two superblocks: the ones left from the first try,
which are now kinda orphaned, and the now-active ones.
Can I trust mdadm to handle this properly on its own?
I'm not sure what properly means. you should not leave around 0xfd
partitions with bogus superblocks, since MD
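the usual cleanup for an orphaned superblock looks like this (stop the
array first; device names are examples):

  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sda1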
- 2 RAID controllers: ARECA with 7 SATA disks each (RAID5)
what are the /sys/block settings for the blockdevs these export?
I'm thinking about max*sectors_kb.
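those can be read straight out of sysfs, e.g.:

  grep . /sys/block/sd*/queue/max_sectors_kb
  grep . /sys/block/sd*/queue/max_hw_sectors_kb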
- stripe size is always 64k
Measured with IOMETER (MB/s, 64 KB block size, sequential I/O).
I don't see how that could be
This has happened with 7 (seven!!) disks already, 80GB and 120GB,
Maxtor and Seagate. Has anyone else seen this?
sure. but you should be asking what's wrong with your environment
or supply chain to cause this kind of disk degradation. how's your
cooling? power? some people claim that if
is there a way to explicitly batch the write requests raid5 issues?
sort of like TCP_CORK?
for example, there is a raid5 built from 3 disks with chunk=64K.
one types dd if=/dev/zero of=/dev/md0 bs=128k count=1
OK, so this is an aligned, whole-stripe write (3 disks means 2 data
chunks plus parity, and 2 x 64K = 128K), and a 128K
bio gets into the raid5.
Mar 29 23:33:26 A1 kernel: ata6: status=0x51 { DriveReady SeekComplete Error }
Mar 29 23:33:26 A1 kernel: ata6: error=0x84 { DriveStatusError BadCRC }
BadCRC is unambiguous: the packet was corrupted. that could happen several
ways, the most common of which is bogus cables (remember, PATA is
http://linuxmafia.com/faq/Hardware/sata.html
A little bit outdated though. (Amazing how many RAID cards are fakeraid)
well, the page is useful in that it discusses particular products,
but the fakeraid stuff is just silly name-calling. there are either
real (hardware) raid cards, or there
slower than MD. but the 9550 is really
quite impressive...
regards, mark hahn
the MD's coming to life again. I never
use /etc/mdadm.conf, since there doesn't seem to be much point...
regards, mark hahn.
and with the ETCH testing 2.6.12: the sata_via module fails with
I'm sure you know that no kernel developer really cares about distro-hacked
kernels. why not test a real (kernel.org) kernel?
ata1: status=0x51 { DriveReady SeekComplete Error }
ata1: error=0x84 { DriveStatusError BadCRC }
I recently extended my raid array with a 9th drive, and I find that the
300 watt PSU I use is insufficient to start the system. What happens is
IMO, a many-disk server that tries to use normal non-server commodity parts
is pushing its luck. the mass market just isn't friendly to >=8 disks
, many files are cached...
regards, mark hahn.
to find out why, since I usually need redundancy...
regards, mark hahn.
, and a smallish fraction of even plain old 64x66 PCI.
array has more than two disks, that would make RAID 1 *faster* than RAID 0.
R1 is not going to be faster than R0 on the same number of disks.
regards, mark hahn.
interleaved stream of them,
you might not benefit much from parallelism. (for instance, 64k is
a pretty common raid stripe size, but if you have a 14-disk R6,
you'd really like to be doing writes in multiples of 768K (12 data disks x 64K)!)
regards, mark hahn.
What does doing
mdadm -Cv -n2 -l1 /dev/md0 /dev/sda /dev/sdb
do to the partition tables???
overwrites them. then again, there's nothing critical about
partition tables - they're just a convention for slicing up
the disk, not necessary in any sense.
in fact, it can be quite handy to avoid
(same board, 32G PC3200), but alas,
end-users have found out about them. not to mention that they only have
3x160G SATA disks...
regards, mark hahn.
Please be sure to use a fixed-pitch font when viewing the tables found
below. BTW, if people weren't so terrified of HTML, I could just make a
nice HTML table for easy reading without silly font requirements...
it's not a matter of terror - many people still prefer ascii email.
2.4.10 related to this, and it left a bad taste in
everyone's mouth. I think the main conclusion was that too much fanciness
results in a fragile, more subtle and difficult-to-maintain system
that performs better, true, but over a narrower range of workloads.
regards, mark hahn
sharcnet/mcmaster
it doesn't try to do anything with random seeks. it doesn't do
anything multi-stream.
regards, mark hahn.
/* iorate.c - measure rates of sequential IO, showing incremental bandwidth.
 * written by Mark Hahn ([EMAIL PROTECTED]) 2003,2004,2005.
 * the main point of this code is to illustrate the danger
md0 /boot sd[ab]1
md1 / sd[ab]2
md2 /usr sd[ab]3
md3 /var sd[ab]4
...
There are other reasons to use multiple partitions. Having /home as a
separate partition, e.g., allows one to reinstall without copying the
# ./lspci -vv -d 11ab:
02:01.0 Class 0100: 11ab:5081 (rev 03)
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 32, cache
larger than the typical FS blocksize. ideally, you want chunks
of, say, 64K going to each disk, so if you have an 8+1 r5, you'd really
like to see writes of at least 512K.
I was wondering whether perhaps DM could reblock a filesystem.
thanks, mark hahn.
not get the tape put into some drive...)
regards, mark hahn.
(buying many TB of disk this year and no tape)