Re: [CentOS-virt] Fedora 12 2.6.31.5-127.fc12 domU on CentOS 5.4 2.6.18-164.6.1.el5xen fails to boot

2009-12-03 Thread Pasi Kärkkäinen
On Fri, Dec 04, 2009 at 09:36:49AM +0200, Pasi Kärkkäinen wrote:
> On Tue, Dec 01, 2009 at 01:22:06PM -0500, Scot P. Floess wrote:
> > 
> > What happens is that /boot always gets installed as ext4, no matter what I
> > set it to in my kickstart file.
> > 
> > I use Cobbler/KOAN for my VM installs...  What did you do to get F12 
> > installed as a VM?
> >
> 
> I did manual installation using virt-manager. I didn't use kickstart or
> Cobbler/KOAN.
> 
> The normal manual installation allows you to set /boot to ext3. 
>

Oh, and Xen pygrub in Fedora 12 dom0 supports domU ext4 /boot. I've been
using that as well.
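
In case it helps anyone, there's nothing special needed on the domU side beyond
pointing the guest at pygrub; a minimal PV config along these lines (the name,
LV path and bridge are just examples from my setup) is all I have:

  name       = "f12"
  memory     = 1024
  bootloader = "/usr/bin/pygrub"
  disk       = [ "phy:/dev/vg0/f12,xvda,w" ]
  vif        = [ "bridge=xenbr0" ]

pygrub then reads the grub config and kernel straight out of the guest's /boot.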

-- Pasi

> 
> 
> > On Tue, 1 Dec 2009, Pasi Kärkkäinen wrote:
> > 
> > >On Tue, Dec 01, 2009 at 09:22:55AM -0500, Scot P. Floess wrote:
> > >>
> > >>So, to be honest, I don't recall seeing that option in my BIOS - but I've
> > >>not looked either.  I installed the x86_64 version of F11 and it works
> > >>fine.
> > >>
> > >>Again, this isn't really a big issue as this is all stuff on my home
> > >>network.  I was pleased just to be able to run F11 in some capacity as a
> > >>Xen guest :)
> > >>
> > >>Now, if I could just get F12 to work ;)
> > >>
> > >
> > >What's the problem with F12? :)
> > >
> > >Works for me, both as Xen PV domU and as dom0 (using custom dom0 kernel).
> > >
> > >-- Pasi
> > >
> > >>On Tue, 1 Dec 2009, Pasi Kärkkäinen wrote:
> > >>
> > >>>On Sun, Nov 29, 2009 at 09:44:13PM -0500, Scot P. Floess wrote:
> > 
> > So, as it turns out, my issue seems to be running a CentOS 5.4 x86_64 host
> > and an i386 F11 VM.  I used another machine running a CentOS 5.4 i386
> > host and was able to launch the F11 i386 VM there with no changes.
> > 
> > Is there any reason I should see the failure listed below?  I've not yet
> > tried to install F11 x86_64 as a VM on the CentOS 5.4 x86_64 host.
> > 
> > >>>
> > >>>Hmm.. iirc F11 GA kernel has some xen-related bugs (at least NX and 
> > >>>xsave).
> > >>>Not sure if your crash is one of those though..
> > >>>
> > >>>Did you try toggling the NX (No eXecute memory protection) setting in
> > >>>BIOS?
> > >>>
> > >>>There were workarounds for those in post-RHEL 5.4 virttest kernels, so you
> > >>>could try those as well..
> > >>>
> > >>>Some bugzillas that might be related:
> > >>>https://bugzilla.redhat.com/show_bug.cgi?id=502826
> > >>>https://bugzilla.redhat.com/show_bug.cgi?id=524719
> > >>>https://bugzilla.redhat.com/show_bug.cgi?id=525290
> > >>>
> > >>>You could try updated post-5.4 Xen / dom0 kernel rpms from here:
> > >>>
> > >>>http://people.redhat.com/clalance/virttest/
> > >>>or from: http://people.redhat.com/dzickus/el5/
> > >>>
> > >>>-- Pasi
> > >>>
> > 
> > 
> > 
> > Scot P. Floess
> > 27 Lake Royale
> > Louisburg, NC  27549
> > 
> > 252-478-8087 (Home)
> > 919-890-8117 (Work)
> > 
> > Chief Architect JPlate   http://sourceforge.net/projects/jplate
> > Chief Architect JavaPIM  http://sourceforge.net/projects/javapim
> > 
> > Architect Keros  http://sourceforge.net/projects/keros
> 
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Fedora 12 2.6.31.5-127.fc12 domU on CentOS 5.4 2.6.18-164.6.1.el5xen fails to boot

2009-12-03 Thread Pasi Kärkkäinen
On Tue, Dec 01, 2009 at 01:22:06PM -0500, Scot P. Floess wrote:
> 
> What happens is that /boot always gets installed as ext4, no matter what I
> set it to in my kickstart file.
> 
> I use Cobbler/KOAN for my VM installs...  What did you do to get F12 
> installed as a VM?
>

I did manual installation using virt-manager. I didn't use kickstart or
Cobbler/KOAN.

The normal manual installation allows you to set /boot to ext3. 
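
If you want to keep driving it from kickstart, the stanza I'd expect to request
ext3 for /boot looks something like the following (sizes are only examples, and
whether F12 anaconda actually honors --fstype here may be exactly the bug you're
hitting):

  part /boot --fstype=ext3 --size=200
  part pv.01 --size=1 --grow
  volgroup vg0 pv.01
  logvol / --vgname=vg0 --name=root --fstype=ext3 --size=1 --grow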

-- Pasi


> On Tue, 1 Dec 2009, Pasi Kärkkäinen wrote:
> 
> >On Tue, Dec 01, 2009 at 09:22:55AM -0500, Scot P. Floess wrote:
> >>
> >>So, to be honest, I don't recall seeing that option in my BIOS - but I've
> >>not looked either.  I installed the x86_64 version of F11 and it works
> >>fine.
> >>
> >>Again, this isn't really a big issue as this is all stuff on my home
> >>network.  I was pleased just to be able to run F11 in some capacity as a
> >>Xen guest :)
> >>
> >>Now, if I could just get F12 to work ;)
> >>
> >
> >What's the problem with F12? :)
> >
> >Works for me, both as Xen PV domU and as dom0 (using custom dom0 kernel).
> >
> >-- Pasi
> >
> >>On Tue, 1 Dec 2009, Pasi Kärkkäinen wrote:
> >>
> >>>On Sun, Nov 29, 2009 at 09:44:13PM -0500, Scot P. Floess wrote:
> 
> So, as it turns out, my issue seems to be running a CentOS 5.4 x86_64 host
> and an i386 F11 VM.  I used another machine running a CentOS 5.4 i386
> host and was able to launch the F11 i386 VM there with no changes.
> 
> Is there any reason I should see the failure listed below?  I've not yet
> tried to install F11 x86_64 as a VM on the CentOS 5.4 x86_64 host.
> 
> >>>
> >>>Hmm.. iirc F11 GA kernel has some xen-related bugs (at least NX and 
> >>>xsave).
> >>>Not sure if your crash is one of those though..
> >>>
> >>>Did you try toggling the NX (No eXecute memory protection) setting in
> >>>BIOS?
> >>>
> >>>There were workarounds for those in post-RHEL 5.4 virttest kernels, so you
> >>>could try those as well..
> >>>
> >>>Some bugzillas that might be related:
> >>>https://bugzilla.redhat.com/show_bug.cgi?id=502826
> >>>https://bugzilla.redhat.com/show_bug.cgi?id=524719
> >>>https://bugzilla.redhat.com/show_bug.cgi?id=525290
> >>>
> >>>You could try updated post-5.4 Xen / dom0 kernel rpms from here:
> >>>
> >>>http://people.redhat.com/clalance/virttest/
> >>>or from: http://people.redhat.com/dzickus/el5/
> >>>
> >>>-- Pasi
> >>>
> 
> 
> 
> Scot P. Floess
> 27 Lake Royale
> Louisburg, NC  27549
> 
> 252-478-8087 (Home)
> 919-890-8117 (Work)
> 
> Chief Architect JPlate   http://sourceforge.net/projects/jplate
> Chief Architect JavaPIM  http://sourceforge.net/projects/javapim
> 
> Architect Keros  http://sourceforge.net/projects/keros


___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Luke S Crawford
Grant McWilliams  writes:
> So if I have 6 drives on my RAID controller which do I choose?


Considering the per-port cost of good RAID cards, you could probably use md
and get 8 or 10 drives for the same money.  It's hard to beat more spindles
for random access performance over a large dataset.  (Of course, the power
cost of another 2-4 drives is probably greater than that of a RAID card.)
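
As a rough sketch of what that looks like with md (device names are placeholders,
and the chunk size is something you'd want to benchmark rather than trust blindly):

  mdadm --create /dev/md0 --level=10 --raid-devices=8 --chunk=256 /dev/sd[b-i]
  pvcreate /dev/md0     # then carve LVM volumes out of it for the guests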
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Thomas Harold
On 12/3/2009 7:35 AM, Grant McWilliams wrote:
>
> You can talk theory, but I can tell you my real-world experience. I
> cannot speak for other vendors, but for 3ware this DOES work and is
> working so far with 100% success. I have a bunch of Areca controllers
> too, but the drives are never moved between them, so I can't say how
> they'd act in that circumstance.
>

Brand probably matters a lot.  I'm inclined to trust the 3ware and Areca
controllers; they're true hardware RAID controllers and not just fakeraid.
Things get a lot murkier when you get into the bottom half of the market.

But for smaller shops that can't afford to have 4+ of everything and 
don't need the CPU offload that a hardware RAID controller offers, Linux 
Software RAID is a solid choice.
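
For what it's worth, the ongoing care and feeding of md is also pretty light.
Roughly the following, plus "mdadm --monitor" mailing you when a member fails,
covers most of it; /dev/md0 and /dev/sdX1 are placeholders:

  cat /proc/mdstat                  # quick health / resync overview
  mdadm --detail /dev/md0           # per-array state and failed members
  mdadm /dev/md0 --add /dev/sdX1    # slot in a replacement disk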
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Christopher G. Stach II

- "Grant McWilliams"  wrote:

> RAID 5 is faster than RAID 10 for reads and writes.

*Serial* reads and writes. That is not the access pattern that you will have in 
most virtualization hosts.

> What wasn't in the test (but is in others that they've done) is RAID
> 6. I'm not sure I'm sold on it because it gives us about the same
> level of redundancy as RAID 10 but with less performance than RAID 5.
> Theoretically it would get soundly trounced by RAID 10 on IOs and
> maybe be slower on r/w transfer as well.

RAID 6 is pretty slow, but you can stripe RAID 6 sets as RAID 60. If you need
that kind of fault tolerance, the performance hit is negligible. On high-volume
boxes with low performance requirements, say NLS on an 8-12 bay 2U or 3U
machine, I use RAID 6 with one hot spare.
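
To put rough numbers on that: in a 12-bay box with one hot spare, the remaining
11 disks in RAID 6 give 11 - 2 = 9 disks of usable capacity, versus 12 / 2 = 6
if you ran RAID 10 across all twelve bays, and the RAID 6 set still survives any
two-disk failure (with the spare rebuilding behind it). With 1 TB drives that is
roughly 9 TB versus 6 TB usable.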

-- 
Christopher G. Stach II


___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Grant McWilliams
On Thu, Dec 3, 2009 at 6:08 AM, Christopher G. Stach II wrote:

> - "Grant McWilliams"  wrote:
>
> > On Wed, Dec 2, 2009 at 9:48 PM, Christopher G. Stach II <
> > c...@ldsys.net > wrote:
> >
> > - "Grant McWilliams" < grantmasterfl...@gmail.com > wrote:
> >
> > > a RAID 10 (or 0+1) will never reach the write... performance of
> > > a RAID-5.
> >
> > (*cough* If you keep the number of disks constant or the amount of
> > usable space? "Things working" tends to trump CapEx, despite the
> > associated pain, so I will go with "amount of usable space.")
> >
> > No.
> >
> > --
> > Christopher G. Stach II
> >
> > Nice quality reading. I like theories as much as the next person, but
> > I'm wondering if the Tom's Hardware guys are on crack or if you disapprove
> > of their testing methods.
> >
> > http://www.tomshardware.com/reviews/external-raid-storage,1922-9.html
>
> They used a constant number of disks to compare two different hardware
> implementations, not to compare RAID 5 vs. RAID 10. They got the expected
> ~50% improvement from the extra stripe segment in RAID 5 with a serial
> access pattern. Unfortunately, that's neither real world use nor the typical
> way you would fulfill requirements. If you read ahead to the following
> pages, you have a nice comparison of random access patterns and RAID 10
> coming out ahead (with one less stripe segment and a lot less risk):
>
> http://www.tomshardware.com/reviews/external-raid-storage,1922-11.html
> http://www.tomshardware.com/reviews/external-raid-storage,1922-12.html
>
> --
> Christopher G. Stach II
>
>
So if I have 6 drives on my RAID controller, which do I choose? If I have to
add two more drives to the RAID 10 to equal the performance of a RAID 5, I
could just make it a RAID 5 and be faster still. RAID 5 is faster than RAID
10 for reads and writes.

However, you are right on the IOs. The RAID 10 pretty much trounced RAID 5
on IOs in all tests.

What wasn't in the test (but is in others that they've done) is RAID 6. I'm
not sure I'm sold on it because it gives us about the same level of
redundancy as RAID 10 but with less performance than RAID 5. Theoretically
it would get soundly trounced by RAID 10 on IOs and maybe be slower on r/w
transfer as well.
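
Just to make the 6-drive trade-off concrete: RAID 5 gives 5 drives of usable
space but dies on a second failure during a rebuild, RAID 6 gives 4 drives and
survives any two failures, and RAID 10 gives 3 drives, survives one failure per
mirror pair, and (as above) wins on random IOs. Which one is "faster" really
depends on whether the workload is serial or random.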

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use
Windows."
Now they have two problems.
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Christopher G. Stach II
- "Grant McWilliams"  wrote:

> On Wed, Dec 2, 2009 at 9:48 PM, Christopher G. Stach II <
> c...@ldsys.net > wrote:
> 
> - "Grant McWilliams" < grantmasterfl...@gmail.com > wrote:
> 
> > a RAID 10 (or 0+1) will never reach the write... performance of
> > a RAID-5.
> 
> (*cough* If you keep the number of disks constant or the amount of
> usable space? "Things working" tends to trump CapEx, despite the
> associated pain, so I will go with "amount of usable space.")
> 
> No.
> 
> --
> Christopher G. Stach II
> 
> Nice quality reading. I like theories as much as the next person, but
> I'm wondering if the Tom's Hardware guys are on crack or if you disapprove
> of their testing methods.
> 
> http://www.tomshardware.com/reviews/external-raid-storage,1922-9.html

They used a constant number of disks to compare two different hardware 
implementations, not to compare RAID 5 vs. RAID 10. They got the expected ~50% 
improvement from the extra stripe segment in RAID 5 with a serial access 
pattern. Unfortunately, that's neither real world use nor the typical way you 
would fulfill requirements. If you read ahead to the following pages, you have 
a nice comparison of random access patterns and RAID 10 coming out ahead (with 
one less stripe segment and a lot less risk):

http://www.tomshardware.com/reviews/external-raid-storage,1922-11.html
http://www.tomshardware.com/reviews/external-raid-storage,1922-12.html

-- 
Christopher G. Stach II


___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Grant McWilliams
On Wed, Dec 2, 2009 at 9:48 PM, Christopher G. Stach II wrote:

>
> - "Grant McWilliams"  wrote:
>
> > Interesting thoughts on raid5 although I doubt many would agree.
>
> That's okay. We all have our off days... Here's some quality reading:
>
> http://blogs.sun.com/bonwick/entry/raid_z
> http://www.cyberciti.biz/tips/raid5-vs-raid-10-safety-performance.html
> http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
> http://www.miracleas.com/BAARF/1.Millsap2000.01.03-RAID5.pdf
> http://www.codinghorror.com/blog/archives/001233.html
> http://web.ivy.net/carton/rant/ml/raid-raid5writehole-0.html
>
> Maybe you are thinking of RAID 6.
>
> > I don't see how the drive type has ANYTHING to do with the RAID
> > level.
>
> IOPS, bit error ratio, bus speed, and spindle speed tend to factor in and
> are usually governed by the drive type. (The BER is very important for how
> often you can expect the data elves to come out and chew on your data during
> RAID 5 rebuilds.) You will use those numbers to calculate the number of
> stripe segments, controllers, and disks. Combine that with the controller's
> local bus, number of necessary controllers, host bus, budget, and other
> business requirements and you have a RAID type.
>
> > a RAID 10 (or 0+1) will never reach the write... performance of
> > a RAID-5.
>
> (*cough* If you keep the number of disks constant or the amount of usable
> space? "Things working" tends to trump CapEx, despite the associated pain,
> so I will go with "amount of usable space.")
>
> No.
>
> --
> Christopher G. Stach II
>
>
Nice quality reading. I like theories as much as the next person, but I'm
wondering if the Tom's Hardware guys are on crack or if you disapprove of
their testing methods.

http://www.tomshardware.com/reviews/external-raid-storage,1922-9.html



Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use
Windows."
Now they have two problems.
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Grant McWilliams
On Wed, Dec 2, 2009 at 9:51 PM, Christopher G. Stach II wrote:

> - "Grant McWilliams"  wrote:
>
> > Portability is no different with a RAID controller as long as you've
> > standardized on controllers.
>
> For this to be true, it would have to be absolute. Since many people have
> evidence that it is not true, it's not absolute. Controllers of the same
> revision and firmware version have had portability problems.
>
> --
> Christopher G. Stach II
>
>
Like I said, in our environment we have hundreds of drives moving between
controllers every month and we have had zero problems with it. Not all
machines use the same controller (some are 3ware 9550SX and some are 3ware
9650SE), nor do they use the same firmware version. We've been doing this for
3 years now and have never had a drive have problems. All drives are
initialized, partitioned, formatted and populated with content in 3
locations around the world, and then multiple sets are shipped to 75 machines
in various geographical zones and used for one month. At this point we've
done close to 10,000 swaps (300/month) and we've never had a controller fail
to see and recognize a drive with anything more than a tw_cli rescan.
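
For anyone who hasn't done it, the swap procedure really is just something like
the following, with /c0 being whichever controller the drive landed on:

  tw_cli /c0 rescan      # controller picks up the newly inserted drive
  tw_cli /c0 show        # confirm the unit/port came back as expected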

You can talk theory, but I can tell you my real-world experience. I cannot
speak for other vendors, but for 3ware this DOES work and is working so far
with 100% success. I have a bunch of Areca controllers too, but the drives
are never moved between them, so I can't say how they'd act in that
circumstance.

Grant McWilliams
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt