3. mark the array read-only (mdadm -o)
You shouldn't be able to do this. It should only be possible to set
an array to read-only when it is not in use. The fact that you cannot
suggests something else is wrong.
Should I? I assumed that mdadm -o could be run on a running, in-use
array, and
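For the record, a sketch of the sequence that should normally work; the mount point is a hypothetical example, and mdadm -o is simply the short form of --readonly:

```shell
# The array must not be in use: unmount any filesystem on it first.
umount /mnt/data            # hypothetical mount point
# Switch the array to read-only mode (short form: mdadm -o /dev/md0).
mdadm --readonly /dev/md0
# Later, switch it back to read-write:
mdadm --readwrite /dev/md0
```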
Hi all,
I've hit the following bug while unmounting an XFS partition
--- [cut here ] - [please bite here ] -
Kernel BUG at drivers/md/md.c:5035
invalid opcode: [1] SMP
CPU 0
Modules linked in: unionfs sbp2 ohci1394 ieee1394 raid456 xor
w83627ehf i2c_isa i2c_core
Pid:
Hi all,
First of all, best wishes for 2007, may the Penguin be with you for
the whole year.
Then, my problem.
1. The setup: x86_64, 2x500GB SATA, Linux 2.6.18-6, RAID1 on
/dev/sd[ab]1 - /dev/md0.
RAID1 is configured with a version 1.0 superblock, but no 0xfd autodetect stuff.
The aim is to
find in util.c: guess_super() of the
mdadm package?
Would that be too dangerous ?
It seems to me that the tricky part is dealing with the ctime stuff of
all possible superblocks found...
Regards,
2007/1/5, Francois Barre [EMAIL PROTECTED]:
Hi all,
First of all, best wishes for 2007, may
2006/8/23, Richard Scobie [EMAIL PROTECTED]:
Has anyone had any experience or comment regarding linux RAID over ieee1394?
I've been successfully running a 4x250GB RAID5 over ieee1394 with XFS on top.
The 4 drives are sharing the same ieee1394 bus, so the bandwidth is
awful, because they have
As promised, here is the dmesg of the Oops caused by the BUG_ON() on line 1166.
So it seems that (*bmc & COUNTER_MAX) == COUNTER_MAX, so the system is
issuing many more bitmap_startwrite() than bitmap_endwrite() calls. I'll
try to compile with more verbose options and see what happens.
This may be
What are you expecting fdisk to tell you? fdisk lists partitions and
I suspect you didn't have any partitions on /dev/md0
More likely you want something like
fsck -n -f /dev/md0
and see which one produces the least noise.
Maybe a simple file -s /dev/md0 could do the trick, and would only
It is more likely to produce output when the first disk in the array is
in the right place, as it will just look at the first couple of sectors
for the superblock.
I'd go with the fsck idea as it will try to inspect the rest of the filesystem
also.
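Putting the suggestions together, a sketch of the non-destructive checks; the xfs_repair line is an addition for the XFS case discussed in this thread:

```shell
# Look at the first sectors of the array for a known filesystem signature.
file -s /dev/md0
# Read-only filesystem check: -n answers "no" to every repair prompt.
fsck -n -f /dev/md0
# For XFS specifically, the no-modify check is:
xfs_repair -n /dev/md0
```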
Obviously that's true, but it's still a good
2006/7/11, Justin Piszcz [EMAIL PROTECTED]:
Why not just --set-faulty or --fail with mdadm?
If you answer "Bad blocks, maybe? Can't test that.", I'll say: AFAIK,
--set-faulty will only act at the md layer, and won't go through the
underlying driver (SATA here, ATA elsewhere, ...), so this
Hello David, all,
You pointed at http://linux-raid.osdl.org as a future resource for a
SW RAID and MD knowledge base.
In fact, the TODO page on the wiki is empty...
But I would like to help feed this wiki with all the clues and
experiences posted on the ML, and it would first be
# mdadm --stop /dev/md0
# mdadm -A /dev/md0
will result in the array started with 3 drives out of 4 again. what am I
doing wrong?
Akos
AFAIK, mdadm -A <raid device> will use /etc/mdadm.conf to know which
underlying partitions you mean by /dev/md0.
So, try
# mdadm --stop /dev/md0
#
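A sketch of what the second command could look like when /etc/mdadm.conf is stale or absent; the member partitions here are examples, not taken from the thread:

```shell
mdadm --stop /dev/md0
# Name the member partitions explicitly instead of relying on mdadm.conf
# (device names are examples):
mdadm -A /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```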
2006/7/1, Ákos Maróy [EMAIL PROTECTED]:
Neil Brown wrote:
Try adding '--force' to the -A line.
That tells mdadm to try really hard to assemble the array.
thanks, this seems to have solved the issue...
Akos
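The forced assembly Neil suggests would look roughly like this; --force lets mdadm accept members whose event counters lag slightly behind (device names are examples):

```shell
mdadm --stop /dev/md0
# --force: try really hard, accepting members with older event counts.
mdadm -A --force /dev/md0 /dev/sd[abcd]1
```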
Well, Neil, I'm wondering,
It seemed to me that Akos' description of the problem
2006/6/30, Ákos Maróy [EMAIL PROTECTED]:
Hi,
Hi,
I have some issues reviving my raid5 array after a power failure.
[...]
strange - why wouldn't it take all four disks (it's omitting /dev/sdb1)?
First, what is your mdadm version?
Then, could you please show us the result of:
mdadm -E
2006/6/30, Ákos Maróy [EMAIL PROTECTED]:
Francois,
Thank you for the very swift response.
First, what is your mdadm version?
# mdadm --version
mdadm - v1.12.0 - 14 June 2005
Rather old version, isn't it?
The freshest meat is 2.5.2, and can be grabbed here:
(answering to myself is one of my favourite hobbies)
Yep, this looks like it.
The events difference is quite big: 0.2701790 vs. 0.2607131... Could
it be that sdb1 was marked faulty a couple of seconds before the
power failure?
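To compare the members' Events and Update Time fields at a glance, the mdadm -E output can be filtered with grep. Running it on a real array needs root; the here-doc below is a hypothetical captured excerpt, just to demonstrate the filter:

```shell
# On a real array (as root):
#   for d in /dev/sd[abcd]1; do echo "== $d"; mdadm -E "$d" | grep -E 'Events|Update Time'; done
# The same filter, run on a hypothetical excerpt of mdadm -E output:
grep -E 'Events|Update Time' <<'EOF'
     Update Time : Thu Jun 29 21:09:15 2006
          Events : 0.2701790
EOF
```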
I'm wondering:
sd[acd] has an Update Time : Thu Jun 29
so, the situation seems that my array was degraded already when the power
failure happened, and then it got into the dirty state. what can one do
about such a situation?
Did you try upgrading mdadm yet ?
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a
2006/6/30, Akos Maroy [EMAIL PROTECTED]:
On Fri, 30 Jun 2006, Francois Barre wrote:
Did you try upgrading mdadm yet ?
yes, I have version 2.5 now, and it produces the same results.
Akos
And I suppose there is no change in the various outputs mdadm is able
to produce (i.e. -D or -E).
Can
2006/6/23, PFC [EMAIL PROTECTED]:
- XFS is faster and fragments less, but make sure you have a good UPS
Why a good UPS? XFS has a good, strong journal; I've never had an issue
with it yet... And believe me, I did have some dirty things happening
here...
- ReiserFS 3.6 is mature
Strange that, whatever the filesystem, you get equal numbers of people
saying they have never lost a single byte and people who have had
horrible corruption and would never touch it again.
[...]
Losing data is worse than losing anything else. You can buy yourself
another hard drive, you can buy
That's why RAID is no excuse for backups.
Of course yes, but...
(I work in the car industry.) RAID is your active (if not pro-active)
safety system, like a car's ESP; if something goes wrong, it
gracefully and automagically re-aligns to the *safe way*. Whereas
backup is your airbag. It's always
The problem is that there is no cost effective backup available.
One-liner questions:
- How does Google make backups ?
- Aren't tapes dead yet ?
- What about a NUMA principle applied to storage ?
Guess hdparm -z /dev/md12 would do the trick, if you're lucky enough...
2006/5/30, Herta Van den Eynde [EMAIL PROTECTED]:
I'm trying to add a new SAN LUN to a system, create a multipath mdadm
device on it, partition it, and create a new filesystem on it, all
without taking the system down.
All
2006/5/30, Luca Berra [EMAIL PROTECTED]:
Guess hdparm -z /dev/md12 would do the trick, if you're lucky enough...
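For reference, hdparm -z asks the kernel to re-read the partition table; it only succeeds when no partition of the device is in use. blockdev exposes the same operation directly:

```shell
# Ask the kernel to re-read /dev/md12's partition table.
hdparm -z /dev/md12
# Equivalent, without hdparm:
blockdev --rereadpt /dev/md12
```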
Please avoid:
- top posting
- quoting full emails
- giving advice if you are not sure
Sorry for my ugly-looking short answer...
I shall say for my own defense (if the President of the Court
Ok, look again, under
http://cgi.cse.unsw.edu.au/~neilb/patches/linux-devel/
Early comments:
I tested with patch-all-2006-03-17-10 on top of 2.6.16-rc6-mm1.
I had two compilation issues:
* in include/linux/raid/md_k.h, 'external' is not defined in mddev_s. I
assumed it was an int.
* in
2006/3/7, Neil Brown [EMAIL PROTECTED]:
Thank you for your thoughts.
You're welcome!
[...]
So an upsize migration from any of these levels to any other could be
done within the raid6 code, provided the raid6 code could cope with an
array where the Q block was not present (which is often the case
2006/2/21, Neil Brown [EMAIL PROTECTED]:
On Monday February 20, [EMAIL PROTECTED] wrote:
Hi All,
Please, Help !
[...]
And it seems like the md superblock disk format is host-endian, so how
should I tell mdadm to use a specific endianness?
Read the man page several times?
Look for
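The man page section being hinted at is presumably --update; for v0.90 superblocks written on a machine of the other endianness, assembly can byte-swap them (device names are examples):

```shell
# Re-write the superblocks in this host's byte order while assembling.
mdadm -A --update=byteorder /dev/md0 /dev/sda1 /dev/sdb1
```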
2006/2/17, Ken Walker [EMAIL PROTECTED]:
Anybody tried a Raid1 or Raid5 on USB2.
If so did it crawl or was it usable ?
Many thanks
Ken :o)
2006/2/17, Gordon Henderson [EMAIL PROTECTED]:
On Fri, 17 Feb 2006, berk walker wrote:
RAID-6 *will* give you your required 2-drive redundancy.
Anyway, if you wish to resize your setup to 5 drives one day or
another, I guess RAID6 would be preferable, because one day or
another, a patch will
2006/2/6, David Liontooth [EMAIL PROTECTED]:
Mattias Wadenstein wrote:
On Sun, 5 Feb 2006, David Liontooth wrote:
For their Deskstar (SATA/PATA) drives I didn't find lifetime
estimates beyond 5 start-stop cycles.
If components are in fact manufactured to fail simultaneously under
Some drives do support quiet vs. performance modes.
hdparm will set this for you, however, from the hdparm manual page:
-M Get/set Automatic Acoustic Management (AAM) setting. Most modern
harddisk drives have the ability to speed down the head move-
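A sketch of the hdparm usage being quoted; the device name is an example, and not every drive accepts AAM settings:

```shell
# Query the current Automatic Acoustic Management value.
hdparm -M /dev/sda
# 128 = quietest (slow, quiet seeks); 254 = fastest (loud seeks).
hdparm -M 128 /dev/sda
```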
2006/1/18, Mario 'BitKoenig' Holbe [EMAIL PROTECTED]:
Mario 'BitKoenig' Holbe [EMAIL PROTECTED] wrote:
scheduled read requests. Would it perhaps make sense to split one
single read over all mirrors that are currently idle?
Ah, I got it from the other thread - seek times :)
Perhaps using
2006/1/17, Michael Tokarev [EMAIL PROTECTED]:
NeilBrown wrote:
Greetings.
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then re-laying
2006/1/17, Zhikun Wang [EMAIL PROTECTED]:
hi,
I am a new guy in Linux MD. I want to add some functions to the md
source code to do research. But I cannot compile the md source code as
modules properly. Every time I need to put the source code in the
directory and build the whole kernel.
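One way to rebuild only the md code, assuming the relevant CONFIG_MD_* options are set to =m in the kernel .config (the tree path is an example):

```shell
cd /usr/src/linux            # example path to the configured kernel tree
make modules_prepare         # set up headers for module builds
# Build just the objects under drivers/md as modules:
make M=drivers/md modules
# The resulting .ko files appear in drivers/md/.
```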
2006/1/17, Michael Tokarev [EMAIL PROTECTED]:
Sander wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the linux softraid subsystem, and want
it to be reliable. Adding more features, especially non-trivial
ones, does not buy you bugfree raid
AFAIK, an ext3 volume cannot be bigger than 4TB on a 32-bit system.
I think it is important you know that in case it could be a concern
for you.
What? What? What?
<dazed_and_confused> Are you sure? I may search for it more
extensively... Anyway, the total size cannot be more than
2006/1/5, berk walker [EMAIL PROTECTED]:
[...]
Ext3 does have a fine record. Might I also suggest an added expense of
18 1/2% and do RAID6 for better protection against data loss?
b-
Well, I guess so. I just hope I'll be given enough money for it, since
it increases the cost per GB.
2006/1/5, John Stoffel [EMAIL PROTECTED]:
So what are you doing for backups, and can you allow the downtime
needed to restore all your data if there is a problem? Remember, it's
not the cost of doing backups which drives things, it's the cost of
the time to *restore* the data which drives
Another feature of LVM is moving physical volumes (PVs). This makes it
easier to grow your enormous fs because it allows you to remove disks.
Thanks for the tip, didn't think of it this way...
[...]
I haven't ever done this, just read about it. Also, maybe when md
allows growing raid5/6 this
Hello all,
Surfing on the web I found a patch from Steinar H. Gunderson which
implements raid5 resize stuff.
(http://marc.theaimsgroup.com/?l=linux-raid&m=112998877619952&w=2).
It seems like it has been included in the 2.6.14 kernel
(http://www.linuxhq.com/kernel/v2.6/14/drivers/md/raid5.c), but I
Ok, thanks for your answer.
Then I suppose I will have to wait a bit.
If I can be of any help for testing and hacking purpose, please let me know.
I think I will really *need* this feature by the end of February
2006. Do you think I have any chance of using it safely by that time?
Best regards,