Good suggestion. No, it's not ESX, but it does do snapshots.
-Ross
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: 'CentOS mailing list'
Sent: Wed Feb 13 17:30:39 2008
Subject: RE: [CentOS] pvmove speed
>I am facing the same issue with a migration of our VM machines
>to a new iSCSI setup this year, around 1TB of VMs need to be
>fork lifted over and I thought about exotic ways to move it
>over, but I think in the end it will be by good ole backup exec
>and tape.
You're not running ESX, are you?
Heh
Joseph L. Casale wrote:
>Since you're moving the data over to a new server/array combo have
>you thought about using LTO tapes to back it up and restore it
>on the new server?
>
>I know it isn't as sexy as LVM pv duplication and such, but it
>works...
We have an HP Autoloader, I thought of doing that actually, and I think
Joseph L. Casale wrote:
>Ah, well you are using SAS drives, so there is some cash there...
My bad, SAS controller with SATA II drives :(
>What industry do you work in?
All sorts, odd company: We do everything from automotive accessories to home
building!
>That's not true! I'm unimpressed now ;-)
>
>-Ross
Love your h
Joseph L. Casale
>Don't know? Where are you pvmoving everything now?
Where do I begin... Scenario is "No cash to do it right", so the interim step
involves migration to a non-fault-tolerant setup temporarily. Server is a 1U HP
and I don't have another controller that matches the remaining interface in
that small
Joseph L. Casale wrote:
>What are you pvmoving again?
>
>-Ross
Ok, here is what happened: I have a box running iet exporting an LV that
started out as two 750 GB HDs mirrored off an 8-channel LSI SAS controller. I
needed more space, and added three 400 GB HDs in a RAID-5 VD to this VG. Yes, I
now need even more space, but
Joseph L. Casale wrote:
>I don't believe pvmove actually does any of the lifting. Pvmove
>merely creates a mirrored pv area in device-mapper and then hangs
>around monitoring its progress until the mirror is synced up,
>then it throws a couple of barriers and removes the original
>pv from the mirror, leaving the new pv as the
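That temporary mirror can be watched directly while a move is running. A minimal sketch; the `dmsetup` line below is a canned sample with made-up device numbers and sector counts, piped through the same parser you would use on live output:

```shell
# During a pvmove, a temporary "pvmove0" mapping appears in device-mapper,
# and its status line carries a synced/total sector counter.
# Live:   dmsetup status | grep pvmove
# Canned sample line (illustrative numbers) run through the same parsing:
echo "pvmove0: 0 1048576 mirror 2 253:1 253:2 409600/1048576 1 AA 3 core" |
  grep -oE '[0-9]+/[0-9]+' | head -n1 |
  awk -F/ '{ printf "%.1f%% synced\n", 100 * $1 / $2 }'
```

With the sample counter 409600/1048576 this prints "39.1% synced"; on a live system the fraction climbs as the mirror syncs.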
Joseph L. Casale wrote:
>
> Are there any ways to improve/manage the speed of pvmove? Man
> doesn't show any documented switches for priority scheduling.
> Iostat shows the system way underutilized even though the lv
> whose pe's are being migrated is continuously being written
> (slowly) to.
>Running iostat like this will give you utilisation statistics since boot,
>which will not be indicative of what's happening now. If you give it a
>reporting interval, say 10 seconds (iostat -m -x 10), I am guessing you will
>see very different data (likely high r/s, w/s, await, and derived values).
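The interval form suggested above would look like this; the sector counts in the by-hand version are made-up sample values, not real measurements:

```shell
# Interval reporting: the first report is still averages since boot,
# each later report covers only the previous 10 seconds.
#   iostat -m -x 10
# iostat derives its rates from /proc/diskstats deltas. The same
# arithmetic by hand, using two made-up "sectors written" samples
# taken 10 seconds apart (sectors are 512 bytes):
s1=81920 s2=286720 interval=10
awk -v a="$s1" -v b="$s2" -v t="$interval" \
  'BEGIN { printf "%.1f MB/s\n", (b - a) * 512 / 1048576 / t }'
```

For the sample numbers this prints "10.0 MB/s"; comparing that figure against what the array can sustain is the quickest way to confirm the move is I/O-bound rather than underutilized.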
Joseph L. Casale wrote:
>
> Not very impressive :) Two different SATA II based arrays on an LSI
> controller, 5% complete in ~7 hours == a week to complete! I ran this
> command from an ssh session from my workstation (That was clearly a
> dumb move). Given the robustness of the pvmove command I h
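The back-of-the-envelope estimate checks out, as a quick sketch shows (hours and percentage taken from the message above):

```shell
# 5% done after ~7 hours: extrapolate linearly to a total runtime.
elapsed_h=7 done_pct=5
awk -v e="$elapsed_h" -v p="$done_pct" \
  'BEGIN { total = e * 100 / p; printf "~%.0f h total (%.1f days)\n", total, total / 24 }'
```

That works out to ~140 h, about 5.8 days, so "a week" is about right. On the ssh worry: running the command under screen or nohup sidesteps a dropped session, and per pvmove(8) an interrupted move can be restarted by running pvmove again with no arguments, so the mistake is recoverable.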
On 13/02/2008 05:24, Joseph L. Casale wrote:
But I really have a hunch that it is just a lot of I/O wait time due to
either metadata maintenance and checkpointing and/or I/O failures, which
have very long timeouts before failure is recognized and *then*
alternate block assignment and mapping is done.
On Tue, 2008-02-12 at 22:24 -0700, Joseph L. Casale wrote:
>But I really have a hunch that it is just a lot of I/O wait time due to
>either metadata maintenance and checkpointing and/or I/O failures, which
>have very long timeouts before failure is recognized and *then*
>alternate block assignment and mapping is done.
One of the original arrays just needs
On Tue, 2008-02-12 at 20:41 -0700, Joseph L. Casale wrote:
> >You could "nice" it. "man nice". Since there is likely to be a lot of
> >I/O happening, it may not help much.
>
> Ok, here's a noob question :) - What process would I nice?
If you run pvmove from the command line, "nice -20 pvmove" for
>You could "nice" it. "man nice". Since there is likely to be a lot of
>I/O happening, it may not help much.
Ok, here's a noob question :) - What process would I nice?
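The priority angle discussed above can be sketched as follows. The pvmove invocation is left commented out since it needs root and real PVs (the device names are hypothetical); ionice's idle class only takes effect under I/O schedulers that honor priorities, such as CFQ/BFQ:

```shell
# CPU priority via nice, I/O priority via ionice (idle class, -c3):
#   nice -n 19 ionice -c3 pvmove /dev/old_pv /dev/new_pv
# Caveat: much of the copying happens in kernel context, so, as noted
# in the thread, this may not help much.
# Demonstrating nice itself: the inner bare "nice" prints the niceness
# the command is running at.
nice -n 19 nice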
>If the drives are on the same channel, or other devices on the channel
>are also flooding the channel, that would be expected. D
On Tue, 2008-02-12 at 19:57 -0700, Joseph L. Casale wrote:
>
> Iostat shows the system way underutilized even though the lv whose
> pe's are being migrated is continuously being written (slowly) to.
I finally thought about that last line. Makes sense, because meta-data
tracking must be done as va
Sorry 'bout that previous one. Wrong key combo hit!
On Tue, 2008-02-12 at 19:57 -0700, Joseph L. Casale wrote:
> Are there any ways to improve/manage the speed of pvmove?
Not that I am aware of. Keep in mind that a *lot* of work is being done.
You could "nice" it. "man nice". Since there is likely to be a lot of
I/O happening, it may not help much.
On Tue, 2008-02-12 at 19:57 -0700, Joseph L. Casale wrote:
> Are there any ways to improve/manage the speed of pvmove? Man doesn't show
> any documented switches for priority scheduling.
> Iostat shows the system way underutilized even though the lv whose pe's are
> being migrated is continuously
Are there any ways to improve/manage the speed of pvmove? Man doesn't show any
documented switches for priority scheduling.
Iostat shows the system way underutilized even though the lv whose pe's are
being migrated is continuously being written (slowly) to.
Thanks!
jlc