Daniel L. Miller said: (by the date of Thu, 25 Oct 2007 16:32:31 -0700)
> Thanks for the test responses - I have re-subscribed...if I see this
> myself...I'm back!
I know that Gmail doesn't let you see your own posts on mailing
lists, only posts from other people. Maybe you have a similar problem.
On Thursday October 25, [EMAIL PROTECTED] wrote:
>
> I didn't get a reply to my suggestion of separating the data and location...
No. Sorry.
>
> ie not talking about superblock versions 0.9, 1.0, 1.1, 1.2 etc but a data
> format (0.9 vs 1.0) and a location (end,start,offset4k)?
>
> This would
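The separation proposed above — a data format (0.9 vs 1.0) plus a location — can be sketched as a small lookup table. A shell sketch (the `sb_location` helper is illustrative, not part of mdadm; the placements are the conventional ones for md superblocks):

```shell
# Illustrative helper (not an mdadm command): map a superblock version
# string to where that superblock lives on the member device.
sb_location() {
    case "$1" in
        0.90|1.0) echo "end"      ;;  # at/near the end of the device
        1.1)      echo "start"    ;;  # at the very start of the device
        1.2)      echo "offset4k" ;;  # 4K from the start of the device
        *)        echo "unknown"  ;;
    esac
}

sb_location 1.2   # prints "offset4k"
```

Under this split, 0.90 and 1.0 differ in data format but share a location, while 1.0, 1.1 and 1.2 share a format and differ only in location — which is exactly the distinction the message asks about.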
On Thursday October 25, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> > It might be worth finding out where mdadm is being run in the init
> > scripts and add a "-v" flag, and redirecting stdout/stderr to some log
> > file.
> > e.g.
> >    mdadm -As -v > /var/log/mdadm-$$ 2>&1
> >
> > And see if
On Thursday October 25, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> >
> > BTW, I don't think your problem has anything to do with the fact that
> > you are using whole partitions.
> >
>
> You don't think the "unknown partition table" on sdd is related? Because
> I read that as a sure indication that the system isn't considering the
> drive as one without a partition table, and therefore isn't looking for
> the superblock on the whole device. And as Doug pointed o
On Thursday October 25, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> > I certainly accept that the documentation is probably less than
> > perfect (by a large margin). I am more than happy to accept patches
> > or concrete suggestions on how to improve that. I always think it is
> > best if a non-developer writes documentation (and a developer reviews
> > it) as
Success.
On Thu, 25 Oct 2007, Daniel L. Miller wrote:
Sorry for consuming bandwidth - but all of a sudden I'm not seeing messages.
Is this going through?
--
Daniel
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Success 2.
On Thu, 25 Oct 2007, Daniel L. Miller wrote:
Thanks for the test responses - I have re-subscribed...if I see this
myself...I'm back!
--
Daniel
Thanks for the test responses - I have re-subscribed...if I see this
myself...I'm back!
--
Daniel
On Wed, 2007-10-24 at 16:22 -0400, Bill Davidsen wrote:
> Doug Ledford wrote:
> > On Mon, 2007-10-22 at 16:39 -0400, John Stoffel wrote:
> >
> >
> >> I don't agree completely. I think the superblock location is a key
> >> issue, because if you have a superblock location which moves depending
>
Sorry for consuming bandwidth - but all of a sudden I'm not seeing
messages. Is this going through?
--
Daniel
Bill Davidsen wrote:
You don't think the "unknown partition table" on sdd is related?
Because I read that as a sure indication that the system isn't
considering the drive as one without a partition table, and therefore
isn't looking for the superblock on the whole device. And as Doug
pointed o
Bill Davidsen wrote:
> Neil Brown wrote:
>> I certainly accept that the documentation is probably less than
>> perfect (by a large margin). I am more than happy to accept patches
>> or concrete suggestions on how to improve that. I always think it is
>> best if a non-developer writes documentation (and a developer reviews
>> it) as
I am at the design stage for a new server. That's when you try to
convince a client that they have an unfavorable ratio of requirements to
budget. I am thinking a raid-1, with a mirror to an nbd device running
write-mostly. I will have redundant network paths to the other machine,
one via a ded
Neil Brown wrote:
I certainly accept that the documentation is probably less than
perfect (by a large margin). I am more than happy to accept patches
or concrete suggestions on how to improve that. I always think it is
best if a non-developer writes documentation (and a developer reviews
it) as
Neil Brown wrote:
On Wednesday October 24, [EMAIL PROTECTED] wrote:
Current mdadm.conf:
DEVICE partitions
ARRAY /dev/.static/dev/md0 level=raid10 num-devices=4
UUID=9d94b17b:f5fac31a:577c252b:0d4c4b2a auto=part
still have the problem where on boot one drive is not part of the
array. Is t
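The ARRAY line above uses the unusual /dev/.static/dev/md0 path. For comparison, a conventionally shaped mdadm.conf for the same array might look like this (a sketch; only the UUID and settings are taken from the message, and whether the /dev/.static path is actually the problem is unclear):

```
DEVICE partitions
ARRAY /dev/md0 level=raid10 num-devices=4 auto=part
    UUID=9d94b17b:f5fac31a:577c252b:0d4c4b2a
```

In mdadm.conf, a line beginning with whitespace continues the previous line, so the indented UUID line still belongs to the ARRAY entry.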
David Greaves said: (by the date of Thu, 25 Oct 2007 10:55:44 +0100)
> How much later? This will, of course, destroy any data on the array (!) and
> you'll need to mkfs again...
Just after; I didn't even create an LVM volume on it (let alone
format it).
> Also, if you don't mind me ask
Neil Brown wrote:
It might be worth finding out where mdadm is being run in the init
scripts and add a "-v" flag, and redirecting stdout/stderr to some log
file.
e.g.
mdadm -As -v > /var/log/mdadm-$$ 2>&1
And see if that leaves something useful in the log file.
I haven't rebooted yet, b
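Neil's redirection pattern can be tried with a harmless stand-in for mdadm (the echo lines below are placeholders for mdadm's output; `$$` expands to the shell's PID, so each run gets its own log file):

```shell
# Stand-in for:  mdadm -As -v > /var/log/mdadm-$$ 2>&1
# Both stdout and stderr land in the same per-run log file.
log="/tmp/mdadm-demo-$$"        # /var/log/mdadm-$$ in a real init script
{
    echo "mdadm: looking for devices for further assembly"    # stdout
    echo "mdadm: no recogniseable superblock on /dev/sdd" >&2 # stderr
} > "$log" 2>&1
cat "$log"
```

Because `2>&1` comes after the stdout redirection, stderr is sent to the same open file, so verbose diagnostics and errors end up interleaved in one log.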
On Thursday October 25, [EMAIL PROTECTED] wrote:
> I think the only time you need to 'delete' an array before creating a new
> one is if you change the superblock version, since it quietly writes
> different superblocks to different disk locations and you may end up with
> 2 superblocks on the dis
Janek Kozicki wrote:
> Hello,
>
> I just created a new array /dev/md1 like this:
>
> mdadm --create --verbose /dev/md1 --chunk=64 --level=raid5 \
>    --metadata=1.1 --bitmap=internal \
>    --raid-devices=3 /dev/hdc2 /dev/sda2 missing
>
>
> But later I changed my mind, and I wanted to use chu
On Thursday October 25, [EMAIL PROTECTED] wrote:
> Hello,
>
> I just created a new array /dev/md1 like this:
>
> mdadm --create --verbose /dev/md1 --chunk=64 --level=raid5 \
>    --metadata=1.1 --bitmap=internal \
>    --raid-devices=3 /dev/hdc2 /dev/sda2 missing
>
>
> But later I changed my m
Jeff Garzik wrote:
> Neil Brown wrote:
>> As for where the metadata "should" be placed, it is interesting to
>> observe that the SNIA's "DDFv1.2" puts it at the end of the device.
>> And as DDF is an industry standard sponsored by multiple companies it
>> must be ..
>> Sorry. I had intended to
Hello,
I just created a new array /dev/md1 like this:
mdadm --create --verbose /dev/md1 --chunk=64 --level=raid5 \
    --metadata=1.1 --bitmap=internal \
    --raid-devices=3 /dev/hdc2 /dev/sda2 missing
But later I changed my mind, and I wanted to use chunk 128. Do I need
to delete this array so
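The question above cuts off, but with mdadm of this era the chunk size is fixed at creation time, so the usual answer is to stop the (still empty) array and create it again with the new `--chunk`. A hedged sketch — the commands are printed rather than executed here, since `--create` destroys the array's contents; device names are the ones from the message:

```shell
# Sketch: re-create the array with chunk 128. Only sane because the
# array holds no data yet. If the --metadata version were ALSO being
# changed, run "mdadm --zero-superblock" on each member first so a
# stale superblock in the old location can't be picked up later.
stop_cmd="mdadm --stop /dev/md1"
create_cmd="mdadm --create --verbose /dev/md1 --chunk=128 --level=raid5 \
    --metadata=1.1 --bitmap=internal \
    --raid-devices=3 /dev/hdc2 /dev/sda2 missing"

echo "$stop_cmd"
echo "$create_cmd"
# To actually run them (as root, after double-checking device names):
#   eval "$stop_cmd" && eval "$create_cmd"
```

Since the metadata version stays at 1.1, the new superblock simply overwrites the old one in the same location, so no explicit zeroing is needed in this particular case.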
On Thu, 2007-10-25 at 09:55 +1000, Neil Brown wrote:
> As for where the metadata "should" be placed, it is interesting to
> observe that the SNIA's "DDFv1.2" puts it at the end of the device.
> And as DDF is an industry standard sponsored by multiple companies it
> must be ..
> Sorry. I had i