Neil Brown <[EMAIL PROTECTED]> writes:
> On Monday November 12, [EMAIL PROTECTED] wrote:
>> Neil Brown wrote:
>> >
>> > However there is value in regularly updating the bitmap, so add code
>> > to periodically pause while all pending sync requests complete, then
>> > update the bitmap. Doing this
Rik van Riel <[EMAIL PROTECTED]> writes:
> On Thu, 08 Nov 2007 17:28:37 +0100
> Goswin von Brederlow <[EMAIL PROTECTED]> wrote:
>
>> Maybe you need more parameter:
>
> Generally a bad idea, unless you can come up with sane defaults (which
> do not need tuning
Hi,
I have created a new raid6:
md0 : active raid6 sdb1[0] sdl1[5] sdj1[4] sdh1[3] sdf1[2] sdd1[1]
      6834868224 blocks level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [====>................]  resync = 21.5% (368216964/1708717056)
      finish=448.5min speed=49808K/sec
      bitmap: 204/204 pages
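For reference, an array like the one in that mdstat output would typically have been created along these lines. The device names and chunk size are taken from the output above; everything else (metadata version defaults, device ordering on the command line) is an assumption, not from the original post:

```shell
# Sketch: create the 6-disk RAID6 shown in the mdstat output above.
# Chunk size 512k and member devices come from that output.
mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=512 \
      /dev/sdb1 /dev/sdd1 /dev/sdf1 /dev/sdh1 /dev/sdj1 /dev/sdl1

# Watch the initial resync progress:
cat /proc/mdstat
```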
Konstantin Sharlaimov <[EMAIL PROTECTED]> writes:
> On Wed, 2007-11-07 at 10:15 +0100, Goswin von Brederlow wrote:
>> I wonder if there shouldn't be a way to turn this off (or if there
>> already is one).
>>
>> Or more generally an option to say what is
Lyle Schlueter <[EMAIL PROTECTED]> writes:
> Do you know of any concerns of using all the ports on a motherboard?
> Slowdowns or anything like that?
More likely the opposite. But it depends on how the chips are
connected.
On desktop boards the onboard chip is in the north and/or southbridge
and
Lyle Schlueter <[EMAIL PROTECTED]> writes:
> Hello,
>
> I just started looking into software raid with linux a few weeks ago. I
> am outgrowing the commercial NAS product that I bought a while back.
>> I've been learning as much as I can, subscribing to this mailing list,
> reading man pages, experi
Janek Kozicki <[EMAIL PROTECTED]> writes:
> Hi,
>
> I finished copying all data from old disc hdc to my shiny new
> RAID5 array (/dev/hda3 /dev/sda3 missing). Next step is to create a
> partition on hdc and add it to the array. And so I did this:
>
> # mdadm --add /dev/md1 /dev/hdc3
>
> But then I
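After an `--add` like the one quoted above, md rebuilds the array onto the new member; the progress can be watched with either of the following (the device name matches the post, the rest is a generic sketch):

```shell
# Monitor the rebuild triggered by `mdadm --add /dev/md1 /dev/hdc3`:
cat /proc/mdstat          # shows recovery percentage and ETA
mdadm --detail /dev/md1   # shows per-device state (spare rebuilding, etc.)
```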
Konstantin Sharlaimov <[EMAIL PROTECTED]> writes:
> This patch adds RAID1 read balancing to device mapper. A read operation
> that is close (in terms of sectors) to a previous read or write goes to
> the same mirror.
I wonder if there shouldn't be a way to turn this off (or if there
already is one).
Bill Davidsen <[EMAIL PROTECTED]> writes:
> Janek Kozicki wrote:
>> Hello,
>>
>> My three HHDs have following speeds:
>>
>> hda - speed 70 MB/sec
>> hdc - speed 27 MB/sec
>> sda - speed 60 MB/sec
>>
>> They create a raid1 /dev/md0 and raid5 /dev/md1 arrays. I wanted to
>> ask if mdadm is try
Neil Brown <[EMAIL PROTECTED]> writes:
> On Thursday November 1, [EMAIL PROTECTED] wrote:
>> Hello,
>>
>> I have raid5 /dev/md1, --chunk=128 --metadata=1.1. On it I have
>> created LVM volume called 'raid5', and finally a logical volume
>> 'backup'.
>>
>> Then I formatted it with command:
>>
>>
Janek Kozicki <[EMAIL PROTECTED]> writes:
> Doug Ledford said: (by the date of Sat, 03 Nov 2007 14:40:48 -0400)
>
>> so you really only need to align the
>> lvm superblock so that data starts at 128K offset into the raid array.
>
> Sorry, I thought that it will be easier to figure this out
> e
BERTRAND Joël <[EMAIL PROTECTED]> writes:
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  5426 root  15  -5     0    0    0 R  100  0.0  46:32.54 md_d0_raid5
First: You can tune the stripe cache.
Secondly: You might want the raid speedup patches that implement
p
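The stripe-cache tuning mentioned here is done through sysfs. A sketch, where the md device name matches the `top` output above and the value 8192 is only an illustrative choice, not a recommendation from the thread:

```shell
# Raise the raid5/6 stripe cache from its default (256 entries).
# 8192 is an example value; bigger caches cost memory (see below).
echo 8192 > /sys/block/md_d0/md/stripe_cache_size

# Verify the new setting:
cat /sys/block/md_d0/md/stripe_cache_size
```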
Doug Ledford <[EMAIL PROTECTED]> writes:
> On Sun, 2007-10-28 at 01:27 -0500, Alberto Alonso wrote:
>> Even if the default timeout was really long (ie. 1 minute) and then
>> configurable on a per device (or class) via /proc it would really help.
>
> It's a band-aid. It's working around other bugs
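For what it's worth, the per-device timeout Alberto asks about does exist for SCSI/SATA disks, via sysfs rather than /proc; a sketch:

```shell
# Per-device SCSI command timeout in seconds (default commonly 30 or 60):
cat /sys/block/sda/device/timeout
echo 30 > /sys/block/sda/device/timeout
```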
Bill Davidsen <[EMAIL PROTECTED]> writes:
> Goswin von Brederlow wrote:
>> Hi,
>>
>> I would welcome if someone could work on a new feature for raid5/6
>> that would allow replacing a disk in a raid5/6 with a new one without
>> having to degrade the array.
Hi,
I would welcome if someone could work on a new feature for raid5/6
that would allow replacing a disk in a raid5/6 with a new one without
having to degrade the array.
Consider the following situation:
raid5 md0 : sda sdb sdc
Now sda gives a "SMART - failure imminent" warning and you want to
r
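The feature requested here later landed in md/mdadm as "hot replace": the new disk is rebuilt from the old one (falling back to parity only for unreadable sectors) before the old one is removed, so the array never degrades. A sketch, assuming mdadm >= 3.3 and a sufficiently recent kernel:

```shell
# Replace a failing-but-still-working member without degrading the array.
# sdd is the new disk (hypothetical name); sda is the one SMART warns about.
mdadm /dev/md0 --add /dev/sdd        # add the new disk as a spare
mdadm /dev/md0 --replace /dev/sda    # copy sda's data onto the spare,
                                     # then fail sda out automatically
```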
Bill Davidsen <[EMAIL PROTECTED]> writes:
> Alberto Alonso wrote:
>> On Tue, 2007-10-23 at 18:45 -0400, Bill Davidsen wrote:
>>
>>
>>> I'm not sure the timeouts are the problem, even if md did its own
>>> timeout, it then needs a way to tell the driver (or device) to stop
>>> retrying. I don't bel
Justin Piszcz <[EMAIL PROTECTED]> writes:
> On Fri, 19 Oct 2007, Alberto Alonso wrote:
>
>> On Thu, 2007-10-18 at 17:26 +0200, Goswin von Brederlow wrote:
>>> Mike Accetta <[EMAIL PROTECTED]> writes:
>>
>>> What I would like to see is a timeout d
Mike Accetta <[EMAIL PROTECTED]> writes:
> Also, read errors don't tend to fail the array so when the bad disk is
> again accessed for some subsequent read the whole hopeless retry process
> begins anew.
>
> I posted a patch about 6 weeks ago which attempts to improve this situation
> for RAID1 by
"Mike Snitzer" <[EMAIL PROTECTED]> writes:
> All,
>
> I have repeatedly seen that when a 2 member raid1 becomes degraded,
> and IO continues to the lone good member, that if the array is then
> stopped and reassembled you get:
>
> md: bind
> md: bind
> md: kicking non-fresh nbd0 from array!
> md:
Kelly Byrd <[EMAIL PROTECTED]> writes:
> On Thu, 11 Oct 2007 11:38:04 -0400, Bill Davidsen <[EMAIL PROTECTED]> wrote:
>> Kelly Byrd wrote:
>>> I've currently got a pair of identical drives in a RAID1 set for
>>> my data partition. I'll be getting a pair of bigger drives in a
>>> bit, and I was won
"Dean S. Messing" <[EMAIL PROTECTED]> writes:
> I'm having the devil of a time trying to boot off
> an "LVM-on-RAID0" device on my Fedora 7 system.
>
> I've created a software RAID-0, defined a Volume Group on it with
> (currently) a single logical volume, and copied my entire
> installation onto
Andrew Clayton <[EMAIL PROTECTED]> writes:
> Hi,
>
> Hardware:
>
> Dual Opteron 2GHz cpus. 2GB RAM. 4 x 250GB SATA hard drives. 1 (root file
> system) is connected to the onboard Silicon Image 3114 controller. The other
> 3 (/home) are in a software RAID 5 connected to a PCI Silicon Image 3124
Justin Piszcz <[EMAIL PROTECTED]> writes:
> Have you tried a 1024k stripe and 16384k stripe_cache_size?
>
> I'd be curious what kind of performance/write speed you get with that
> configuration.
>
> Justin.
stripe_cache_size is not in KiB of memory but in multiples of some
internal structures. So
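As a rule of thumb, the memory consumed is stripe_cache_size entries times the number of member disks, at one page per disk per entry. A small illustrative calculation; the 16384 figure is Justin's suggested value above, while the 8-disk count and 4 KiB page size are assumptions for the example:

```python
# Approximate memory used by the raid5/6 stripe cache:
#   stripe_cache_size entries x member disks x PAGE_SIZE bytes per entry.
PAGE_SIZE = 4096  # typical x86 page size (assumption)

def stripe_cache_bytes(stripe_cache_size, num_disks, page_size=PAGE_SIZE):
    """Rough upper bound on stripe cache memory in bytes."""
    return stripe_cache_size * num_disks * page_size

mem = stripe_cache_bytes(16384, 8)
print(mem // (1024 * 1024), "MiB")  # -> 512 MiB for 16384 entries, 8 disks
```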
Hi,
we (Q-Leap networks) are in the process of setting up a high speed
storage cluster and we are having some problems getting proper
performance.
Our test system consists of a 2x dual core system with 2 dual channel
UW scsi controllers connected to 2 external raid boxes and we use
iozone with 16G
Michal Soltys <[EMAIL PROTECTED]> writes:
> Goswin von Brederlow wrote:
>>
>> I was thinking Michal Soltys meant it this way. You can probably
>> replace the cp invocation with an rsync one but that hardly changes
>> things.
>>
>> I don't th
Michael Tokarev <[EMAIL PROTECTED]> writes:
> Dean S. Messing wrote:
>> Michal Soltys writes:
> []
>> : Rsync is fantastic tool for incremental backups. Everything that didn't
>> : change can be hardlinked to previous entry. And time of performing the
>> : backup is pretty much negligible. Esse
"Dean S. Messing" <[EMAIL PROTECTED]> writes:
> Goswin von Brederlow writes:
> : Dean Mesing writes:
> : > Goswin von Brederlow writes:
> : > : LVM is not the same as LVM. What I mean is that you still have choices
> : > : left.
> : >
> : >
"Dean S. Messing" <[EMAIL PROTECTED]> writes:
> Goswin von Brederlow writes:
> : Dean S. Messing writes:
> : > Michael Tokarev writes:
> : > : Dean S. Messing wrote:
> : > : []
> : > : > [] That's what
> : > : > attracted me to RAI
"Dean S. Messing" <[EMAIL PROTECTED]> writes:
> Michael Tokarev writes:
> : Dean S. Messing wrote:
> : []
> : > [] That's what
> : > attracted me to RAID 0 --- which seems to have no downside EXCEPT
> : > safety :-).
> : >
> : > So I'm not sure I'll ever figure out "the right" tuning. I'm at th
Bill Davidsen <[EMAIL PROTECTED]> writes:
> Goswin von Brederlow wrote:
>>>> I'm using RHEL4/U4 (kernel 2.6.9) on this system.
>>>>
>>
>> That kernel seems to be a bit old. Better upgrade first.
>>
>
> You don't upgrade when usin
"J. David Beutel" <[EMAIL PROTECTED]> writes:
> Neil Brown wrote:
>> 2.6.12 does support reducing the number of drives in a raid1, but it
>> will only remove drives from the end of the list. e.g. if the
>> state was
>>
>> 58604992 blocks [3/2] [UU_]
>>
>> then it would work. But as
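The drive-count reduction Neil describes is driven from userspace with `--grow`; a sketch assuming /dev/md0 and that, as in his example, the empty slot is already the last one:

```shell
# Shrink a 3-device raid1 in state [3/2] [UU_] down to 2 devices.
# On 2.6.12 this only works when the missing slot is at the end.
mdadm --grow /dev/md0 --raid-devices=2
cat /proc/mdstat    # should now report [2/2] [UU]
```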
Jordan Russell <[EMAIL PROTECTED]> writes:
> Iustin Pop wrote:
>> Maybe it's because md doesn't support barriers whereas the disks
>> supports them? In this case some filesystems, for example XFS, will work
>> faster on raid1 because they can't force the flush to disk using
>> barriers.
>
> It's a
Iustin Pop <[EMAIL PROTECTED]> writes:
> On Sat, Sep 15, 2007 at 12:28:07AM -0500, Jordan Russell wrote:
>> (Kernel: 2.6.18, x86_64)
>>
>> Is it normal for an MD RAID1 partition with 1 active disk to perform
>> differently from a non-RAID partition?
>>
>> md0 : active raid1 sda2[0]
>> 8193
Goswin von Brederlow <[EMAIL PROTECTED]> writes:
> The simplest is to pull the disks from md1 from the first controler
> and put them into the 2nd controler and then add the new disks to the
> first controler.
That is of course with the raid stopped. You didn't say what kind
Maurice Hilarius <[EMAIL PROTECTED]> writes:
> Hi to all.
>
> I wonder if somebody would care to help me to solve a problem?
>
> I have some servers.
> They are running CentOS5
> This OS has a limitation where the maximum filesystem size is 8TB.
>
> Each server currently has an AMCC/3WARE 16 port
"Stuart D. Gathman" <[EMAIL PROTECTED]> writes:
> On Wed, 12 Sep 2007, Hiren Joshi wrote:
>
>> Has anyone of you been using ext2online to resize (large) ext3
>> filesystems?
>> I have to do it going from 500GB to 1TB on a production system. I was
>> wondering if you have some horror/success stories
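For reference, the online growth itself is a two-step operation; a sketch assuming the ext3 filesystem sits on an LVM volume (the device name is hypothetical):

```shell
# 1. Grow the underlying block device first (hypothetical LVM volume):
lvextend -L 1T /dev/vg0/data

# 2. Then grow the mounted ext3 filesystem to fill it.  ext2online is
#    the tool on older distros; newer e2fsprogs fold online growth
#    into resize2fs.
ext2online /dev/vg0/data
```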
Tomasz Chmielewski <[EMAIL PROTECTED]> writes:
> Chris Osicki schrieb:
>> Hi
>>
>> I apologize in advance for asking a question not really appropriate
>> for this mailing list, but I couldn't find a better place with lots of
>> people managing lots of disk space.
>>
>> The question:
>> Has anyone
Iustin Pop <[EMAIL PROTECTED]> writes:
> On Mon, Sep 10, 2007 at 10:51:37PM +0300, Dimitrios Apostolou wrote:
>> On Monday 10 September 2007 22:35:30 Iustin Pop wrote:
>> > On Mon, Sep 10, 2007 at 10:29:30PM +0300, Dimitrios Apostolou wrote:
>> > > Hello list,
>> > >
>> > > I just created a RAID1
Bill Davidsen <[EMAIL PROTECTED]> writes:
> Bernd Schubert wrote:
>> Yep, thats exactly what I'm talking about and its not only limited
>> to usb, but happens with sata as well.
>>
>
> And real SCSI hot plug drives if you pull the wrong one.
The right thing to do would be to change the raid super