Re: [zfs-discuss] RAID-Z and virtualization

2009-11-10 Thread Joe Auty
Toby Thain wrote:
 On 8-Nov-09, at 12:20 PM, Joe Auty wrote:

 Tim Cook wrote:
 On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:

 ...


 Why not just convert the VM's to run in virtualbox and run Solaris
 directly on the hardware?


 That's another possibility, but it depends on how Virtualbox stacks
 up against VMWare Server. At this point a lot of planning would be
 necessary to switch to something else, although this is a possibility.

 How would Virtualbox stack up against VMWare Server? Last I checked
 it doesn't have a remote console of any sort, which would be a deal
 breaker. Can I disable allocating virtual memory to Virtualbox VMs?
 Can I get my VMs to auto boot in a specific order at runlevel 3? Can
 I control my VMs via the command line? 

 Yes you certainly can. Works well, even for GUI based guests, as there
 is vm-level VRDP (VNC/Remote Desktop) access as well as whatever
 remote access the guest provides.



 I thought Virtualbox was GUI only, designed for Desktop use primarily? 

 Not at all. Read up on VBoxHeadless.


I take it that Virtualbox, being Qemu/KVM based, will support 64-bit
versions of FreeBSD guests, unlike Xen-based solutions?



 --Toby

 This switch will only make sense if all of this points to a net positive.



 --Tim


 -- 
 Joe Auty
 NetMusician: web publishing software for musicians
 http://www.netmusician.org
 j...@netmusician.org


-- 
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
j...@netmusician.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-09 Thread Joe Auty
Tim Cook wrote:
 On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:

 ...


 Why not just convert the VM's to run in virtualbox and run Solaris
 directly on the hardware?


That's another possibility, but it depends on how Virtualbox stacks up
against VMWare Server. At this point a lot of planning would be
necessary to switch to something else, although this is a possibility.

How would Virtualbox stack up against VMWare Server? Last I checked it
doesn't have a remote console of any sort, which would be a deal
breaker. Can I disable allocating virtual memory to Virtualbox VMs? Can
I get my VMs to auto boot in a specific order at runlevel 3? Can I
control my VMs via the command line? I thought Virtualbox was GUI only,
designed for Desktop use primarily?

This switch will only make sense if all of this points to a net positive.



 --Tim


-- 
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
j...@netmusician.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-09 Thread Joe Auty
Erik Ableson wrote:
 Uhhh - for an unmanaged server you can use ESXi for free. Identical
 server functionality, just requires licenses if you need multiserver
 features (ie vMotion)

How does ESXi w/o vMotion, vSphere, and vCenter server stack up against
VMWare Server? My impression was that you need these other pieces to
make such an infrastructure useful?


 Cordialement,

 Erik Ableson 

 On 8 nov. 2009, at 19:12, Tim Cook t...@cook.ms wrote:



 On Sun, Nov 8, 2009 at 11:48 AM, Joe Auty j...@netmusician.org wrote:

 Tim Cook wrote:


 It appears that one can get more in the way of features out
 of VMWare Server for free than with ESX, which is seemingly
 a hook into buying more VMWare stuff.

 I've never looked at Sun xVM, in fact I didn't know it even
 existed, but I do now. Thank you, I will research this some
 more!

 The only other variable, I guess, is the future of said
 technologies given the Oracle takeover? There has been much
 discussion on how this impacts ZFS, but I'll have to learn
 how xVM might be affected, if at all.


 Quite frankly, I wouldn't let that stop you.  Even if Oracle
 were to pull the plug on xVM entirely (not likely), you could
 very easily just move the VM's back over to *insert your
 favorite flavor of Linux* or Citrix Xen.  Including Unbreakable
 Linux (Oracle's version of RHEL).


 I remember now why Xen was a no-go from when I last tested it. I
 rely on the 64 bit version of FreeBSD for most of my VM guest
 machines, and FreeBSD only supports running as domU on i386
 systems. This is a monkey wrench!

 Sorry, just thinking out loud here...



 I have no idea what it supports right now.  I can't even find a
 decent support matrix.  Quite frankly, I would (and do) just use a
 separate server for the fileserver than the vm box.  You can get
 64bit cpu's with 4GB of ram for awfully cheap nowadays.  That should
 be more than enough for most home workloads.

 --Tim



-- 
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
j...@netmusician.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-09 Thread Toby Thain


On 8-Nov-09, at 12:20 PM, Joe Auty wrote:


Tim Cook wrote:


On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:
...

Why not just convert the VM's to run in virtualbox and run Solaris  
directly on the hardware?




That's another possibility, but it depends on how Virtualbox stacks  
up against VMWare Server. At this point a lot of planning would be  
necessary to switch to something else, although this is a possibility.


How would Virtualbox stack up against VMWare Server? Last I checked  
it doesn't have a remote console of any sort, which would be a deal  
breaker. Can I disable allocating virtual memory to Virtualbox VMs?  
Can I get my VMs to auto boot in a specific order at runlevel 3?  
Can I control my VMs via the command line?


Yes, you certainly can. It works well, even for GUI-based guests, as  
there is VM-level VRDP (VNC/Remote Desktop) access as well as  
whatever remote access the guest provides.
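
For instance, from a shell on the host, something along these lines works (a
sketch from memory; "myguest" is a placeholder, and option names shift a bit
between VirtualBox releases):

   # enable the built-in remote display for a guest, then drive it from the CLI
   VBoxManage modifyvm myguest --vrdp on
   VBoxManage startvm myguest --type headless
   VBoxManage list runningvms
   VBoxManage controlvm myguest acpipowerbutton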





I thought Virtualbox was GUI only, designed for Desktop use primarily?


Not at all. Read up on VBoxHeadless.
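
A minimal headless start is roughly (again just a sketch, same caveats):

   # no console window at all; connect over VRDP later if you need a screen
   VBoxHeadless --startvm myguest &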

--Toby



This switch will only make sense if all of this points to a net  
positive.





--Tim



--
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
j...@netmusician.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread besson3c
I'm entertaining something which might be a little wacky, and I'm wondering what 
your general reaction to this scheme might be :)


I would like to invest in some sort of storage appliance, and I like the idea 
of something I can grow over time, something that isn't tethered to my servers 
(i.e. not direct attach), as I'd like to keep this storage appliance beyond the 
life of my servers. Therefore, a RAID 5 or higher type setup in a separate 2U 
chassis is attractive to me.

I do a lot of virtualization on my servers, and currently my VM host is running 
VMWare Server. It seems like the way forward is with software based RAID with 
sophisticated file systems such as ZFS or BTRFS rather than a hardware RAID 
card and dumber file system. I really like what ZFS brings to the table in 
terms of RAID-Z and more, so I'm thinking that it might be smart to skip 
getting a hardware RAID card and jump into using ZFS. 

The obvious problem at this point is that ZFS is not available for Linux yet, 
and BTRFS is not yet ready for production usage. So, I'm exploring some 
options. One option is to just get that RAID card and reassess all of this when 
BTRFS is ready, but the other option is the following...

What if I were to run a FreeBSD VM, present it with several vdisks, format these 
as ZFS, and serve up ZFS shares through this VM? I realize that I'd only be getting 
the userland conveniences of ZFS this way, since the host would still be writing to 
an EXT3/4 volume, but on the other hand perhaps these conveniences and other 
benefits would be worthwhile? What would I be missing out on, given that the 
underlying EXT3/4 volume offers no assurance of the same integrity?
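
To make the idea concrete, inside the FreeBSD VM I'd be doing something roughly 
like this (a sketch only; device and dataset names are made up, and NFS exporting 
on FreeBSD still needs the usual mountd/nfsd plumbing):

   # build a raidz pool from the virtual disks presented to the guest
   zpool create tank raidz da1 da2 da3 da4
   zfs create tank/shared
   zfs set compression=on tank/shared
   # export it to the other machines
   zfs set sharenfs=on tank/shared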

What do you think, would setting up a VM solely for hosting ZFS shares be worth 
my while as a sort of bridge to BTRFS? I realize that I'd have to allocate a 
lot of RAM to this VM, I'm prepared to do that.


Is this idea retarded? Something you would recommend or do yourself? All of 
this convenience is pointless if there will be significant problems, I would 
like to eventually serve production servers this way. Fairly low volume ones, 
but still important to me.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Bob Friesenhahn

On Sun, 8 Nov 2009, besson3c wrote:


What if I were to run a FreeBSD VM and present it several vdisks, 
format these as ZFS, and serve up ZFS shares through this VM? I 
realize that I'm getting the sort of userland conveniences of ZFS 
this way since the host would still be writing to an EXT3/4 volume, 
but on the other hand perhaps these conveniences and other benefits 
would be worthwhile? What would I be missing out on, despite no 
assurances of the same integrity given the underlying EXT3/4 volume?


The main concern here would be whether the VM correctly honors all of the 
cache sync requests (all the way to the underlying disk) that zfs needs in 
order to be reliable.  Some VMs are known to cut corners in this area 
so that they offer more performance.  If the VM uses large files on 
EXT4 then the pool might be lost after a power failure.  The chance 
of success is better if you can give the VM real disk devices to work 
with.
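
For example, if the guest ends up under VirtualBox, a raw-disk mapping is one 
way to do that (a sketch only; the device path is a placeholder, and other 
hypervisors have their own mechanisms for raw access):

   # create a .vmdk descriptor that points straight at a whole physical disk
   VBoxManage internalcommands createrawvmdk \
       -filename /vms/zfsdisk1.vmdk -rawdisk /dev/sdb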


There is also the option to run zfs under Linux via FUSE.  I have no 
idea how well the zfs implementation for FUSE works or if it is well 
maintained.  Benchmarks show that zfs performance under FUSE does not 
suck nearly as much as one would think.
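
If someone wants to try it, the zfs-fuse route is roughly as follows (a sketch, 
assuming your distribution packages it; otherwise it is a source build, and the 
init script name varies):

   # install and start the userspace daemon, then use the normal tools
   apt-get install zfs-fuse
   /etc/init.d/zfs-fuse start
   zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
   zpool status tank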


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread besson3c
My impression was that the ZFS Fuse project was no longer being maintained?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:

 ...


Why not just convert the VM's to run in virtualbox and run Solaris directly
on the hardware?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Ross Walker

On Nov 8, 2009, at 12:09 PM, Tim Cook t...@cook.ms wrote:

Why not just convert the VM's to run in virtualbox and run Solaris  
directly on the hardware?


Or use OpenSolaris xVM (Xen) with either qemu img files on zpools or  
zvols for the VMs.
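
e.g. carve a zvol per guest and point the domain definition at it, something
like this (a sketch only; names and sizes are placeholders, and the exact disk
syntax depends on how you define the xVM guest):

   zfs create -V 20G rpool/guests/web01
   # then reference the zvol's block device in the guest config, e.g.
   #   phy:/dev/zvol/dsk/rpool/guests/web01,xvda,w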


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 11:20 AM, Joe Auty j...@netmusician.org wrote:

  Tim Cook wrote:

 On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:

 ...


 Why not just convert the VM's to run in virtualbox and run Solaris directly
 on the hardware?


 That's another possibility, but it depends on how Virtualbox stacks up
 against VMWare Server. At this point a lot of planning would be necessary to
 switch to something else, although this is a possibility.

 How would Virtualbox stack up against VMWare Server? Last I checked it
 doesn't have a remote console of any sort, which would be a deal breaker.
 Can I disable allocating virtual memory to Virtualbox VMs? Can I get my VMs
 to auto boot in a specific order at runlevel 3? Can I control my VMs via the
 command line? I thought Virtualbox was GUI only, designed for Desktop use
 primarily?

 This switch will only make sense if all of this points to a net positive.


Why are you running VMware server at all if those are your requirements?
Nothing in your requirements explains why you would choose something with the
overhead of VMware server over ESX.

With those requirements, I'd point you at Sun xVM.

In any case, while I can't answer all of your questions as I don't use
Virtualbox:  yes, you can control VM's from the command line.

VMware server is designed primarily for Desktop use, hence my confusion with
your choice.




--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 11:37 AM, Joe Auty j...@netmusician.org wrote:

  Tim Cook wrote:

 On Sun, Nov 8, 2009 at 11:20 AM, Joe Auty j...@netmusician.org wrote:

 Tim Cook wrote:

 On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:

 ...


 Why not just convert the VM's to run in virtualbox and run Solaris
 directly on the hardware?


 That's another possibility, but it depends on how Virtualbox stacks up
 against VMWare Server. At this point a lot of planning would be necessary to
 switch to something else, although this is a possibility.

 How would Virtualbox stack up against VMWare Server? Last I checked it
 doesn't have a remote console of any sort, which would be a deal breaker.
 Can I disable allocating virtual memory to Virtualbox VMs? Can I get my VMs
 to auto boot in a specific order at runlevel 3? Can I control my VMs via the
 command line? I thought Virtualbox was GUI only, designed for Desktop use
 primarily?

 This switch will only make sense if all of this points to a net positive.


 Why are you running VMware server at all if those are your requirements?
 Nothing in your requirements explain why you would choose something with the
 overhead of VMware server over ESX.

 With those requirements, I'd point you at Sun xVM.

 In any case, while I can't answer all of your questions as I don't use
 Virtualbox:  yes, you can control VM's from the command line.

 VMware server is designed primarily for Desktop use, hence my confusion
 with your choice.



 It appears that one can get more in the way of features out of VMWare
 Server for free than with ESX, which is seemingly a hook into buying more
 VMWare stuff.

 I've never looked at Sun xVM, in fact I didn't know it even existed, but I
 do now. Thank you, I will research this some more!

 The only other variable, I guess, is the future of said technologies given
 the Oracle takeover? There has been much discussion on how this impacts ZFS,
 but I'll have to learn how xVM might be affected, if at all.


Quite frankly, I wouldn't let that stop you.  Even if Oracle were to pull
the plug on xVM entirely (not likely), you could very easily just move the
VM's back over to *insert your favorite flavor of Linux* or Citrix Xen,
including Unbreakable Linux (Oracle's version of RHEL).

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread jay
From your description, it sounds like you are looking for an independent NAS 
hardware box?  In that case, using FreeNAS or OpenSolaris to handle the 
hardware and present iSCSI volumes to your VMs is a pretty simple solution.
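
On OpenSolaris that part is only a couple of commands, e.g. (a sketch; size and
names are placeholders, and while older builds use the shareiscsi property shown
here, newer ones use COMSTAR's itadm/stmfadm instead):

   zfs create -V 200G tank/vmstore
   zfs set shareiscsi=on tank/vmstore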

If you're instead looking for one box to handle both data storage and VMs, then I 
would suggest looking into VMware ESXi.  A VM hosted on ESXi can be given full 
control of certain hardware, which isn't possible on VMware Server.

Alternatively you could set up an OpenSolaris dom0 using xVM (Xen), and have 
the dom0 handle the drives. But this would require a more complicated conversion 
of existing VMs, or rebuilding. Or do the same thing with FreeBSD as your base 
system.

--Original Message--
From: besson3c
Sender: zfs-discuss-boun...@opensolaris.org
To: zfs Discuss
Subject: [zfs-discuss] RAID-Z and virtualization
Sent: Nov 8, 2009 3:03 AM

...



Sent from my BlackBerry® smartphone with SprintSpeed
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 11:48 AM, Joe Auty j...@netmusician.org wrote:

  Tim Cook wrote:



 It appears that one can get more in the way of features out of VMWare
 Server for free than with ESX, which is seemingly a hook into buying more
 VMWare stuff.

 I've never looked at Sun xVM, in fact I didn't know it even existed, but I
 do now. Thank you, I will research this some more!

 The only other variable, I guess, is the future of said technologies given
 the Oracle takeover? There has been much discussion on how this impacts ZFS,
 but I'll have to learn how xVM might be affected, if at all.


 Quite frankly, I wouldn't let that stop you.  Even if Oracle were to pull
 the plug on xVM entirely (not likely), you could very easily just move the
 VM's back over to *insert your favorite flavor of Linux* or Citrix Xen.
 Including Unbreakable Linux (Oracle's version of RHEL).


 I remember now why Xen was a no-go from when I last tested it. I rely on
 the 64 bit version of FreeBSD for most of my VM guest machines, and FreeBSD
 only supports running as domU on i386 systems. This is a monkey wrench!

 Sorry, just thinking out loud here...



I have no idea what it supports right now.  I can't even find a decent
support matrix.  Quite frankly, I would (and do) just use a separate server
for the fileserver rather than the VM box.  You can get 64-bit CPUs with 4GB
of RAM for awfully cheap nowadays.  That should be more than enough for most
home workloads.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Erik Ableson
Uhhh - for an unmanaged server you can use ESXi for free. Identical  
server functionality; it just requires licenses if you need multiserver  
features (i.e. vMotion).


Cordialement,

Erik Ableson

On 8 nov. 2009, at 19:12, Tim Cook t...@cook.ms wrote:




On Sun, Nov 8, 2009 at 11:48 AM, Joe Auty j...@netmusician.org wrote:
Tim Cook wrote:




It appears that one can get more in the way of features out of  
VMWare Server for free than with ESX, which is seemingly a hook  
into buying more VMWare stuff.


I've never looked at Sun xVM, in fact I didn't know it even  
existed, but I do now. Thank you, I will research this some more!


The only other variable, I guess, is the future of said  
technologies given the Oracle takeover? There has been much  
discussion on how this impacts ZFS, but I'll have to learn how xVM  
might be affected, if at all.



Quite frankly, I wouldn't let that stop you.  Even if Oracle were  
to pull the plug on xVM entirely (not likely), you could very  
easily just move the VM's back over to *insert your favorite flavor  
of Linux* or Citrix Xen.  Including Unbreakable Linux (Oracle's  
version of RHEL).




I remember now why Xen was a no-go from when I last tested it. I  
rely on the 64 bit version of FreeBSD for most of my VM guest  
machines, and FreeBSD only supports running as domU on i386 systems.  
This is a monkey wrench!


Sorry, just thinking out loud here...



I have no idea what it supports right now.  I can't even find a  
decent support matrix.  Quite frankly, I would (and do) just use a  
separate server for the fileserver than the vm box.  You can get  
64bit cpu's with 4GB of ram for awfully cheap nowadays.  That should  
be more than enough for most home workloads.


--Tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 12:39 PM, Joe Auty j...@netmusician.org wrote:

  Erik Ableson wrote:

 Uhhh - for an unmanaged server you can use ESXi for free. Identical server
 functionality, just requires licenses if you need multiserver features (ie
 vMotion)


 How does ESXi w/o vMotion, vSphere, and vCenter server stack up against
 VMWare Server? My impression was that you need these other pieces to make
 such an infrastructure useful?


VMware Server doesn't have vMotion.  There is no such thing as vSphere;
that's the marketing name for the entire product suite.  vCenter is only
required for advanced functionality like HA/DPM/DRS, which you don't have with
VMware Server either.

Are you just throwing out buzzwords, or do you actually know what they do?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Erik Ableson
Simply put, ESXi has exactly the same local feature set as ESX server.  
So you get all of the useful stuff like transparent memory page  
sharing (memory deduplication), virtual switches with VLAN tagging,  
and high-performance storage I/O. For free. As many copies as you like.


But... You will need a vCenter license, and then per-server (well,  
per-processor) licenses if you want the advanced management features  
like live migration of running VMs between servers, fault tolerance,  
guided consolidation, etc.


Most importantly, ESXi is a bare-metal install, so you have a proper  
hypervisor allocating resources instead of a general-purpose OS with a  
virtualization application.


Cordialement,

Erik Ableson

On 8 nov. 2009, at 19:43, Tim Cook t...@cook.ms wrote:




On Sun, Nov 8, 2009 at 12:39 PM, Joe Auty j...@netmusician.org wrote:
Erik Ableson wrote:


Uhhh - for an unmanaged server you can use ESXi for free. Identical  
server functionality, just requires licenses if you need  
multiserver features (ie vMotion)


How does ESXi w/o vMotion, vSphere, and vCenter server stack up  
against VMWare Server? My impression was that you need these other  
pieces to make such an infrastructure useful?



VMware server doesn't have vmotion.  There is no such thing as  
vsphere, that's the marketing name for the entire product suite.   
vCenter is only required for advanced functionality like HA/DPM/DRS  
that you don't have with VMware server either.


Are you just throwing out buzzwords, or do you actually know what  
they do?


--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread jay
Really, if you're just talking a handful of drives then hardware RAID may be the 
simplest solution for now.  However, I also would be inclined to use separate 
NAS and VM servers.  Even with ECC you can put together a NAS box for a few 
hundred (or use existing hardware), plus what you need for a case, bays and 
drives, which is about what you'll spend on decent hardware RAID.

Sent from my BlackBerry® smartphone with SprintSpeed

-Original Message-
From: Joe Auty j...@netmusician.org
Date: Sun, 08 Nov 2009 12:50:30 
To: j...@lentecs.com
Subject: Re: [zfs-discuss] RAID-Z and virtualization

j...@lentecs.com wrote:
 From your description, it sounds like you are looking for an independent NAS 
 hardware box?  In that case, using FreeNAS or OpenSolaris to handle the 
 hardware and present iSCSI volumes to your VMs is a pretty simple solution.

 If you're instead looking for one box to handle both data storage and VMs, then 
 I would suggest looking into VMware ESXi.  A VM hosted on ESXi can be given 
 full control of certain hardware, which isn't possible on VMware Server.

 Alternatively you could set up an OpenSolaris dom0 using xVM (Xen), and have 
 the dom0 handle the drives. But this would require a more complicated 
 conversion of existing VMs, or rebuilding. Or do the same thing with FreeBSD 
 as your base system.
   
I'm reluctant to go ESX or ESXi due to cost-related issues, and what I
can get out of the free versions.

The other monkey wrench, as I just wrote in another post, is that I run
several 64-bit FreeBSD guests, which aren't supported under Xen.




 --Original Message--
 From: besson3c
 Sender: zfs-discuss-boun...@opensolaris.org
 To: zfs Discuss
 Subject: [zfs-discuss] RAID-Z and virtualization
 Sent: Nov 8, 2009 3:03 AM

 ...


-- 
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
j...@netmusician.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread jay
In terms of capability and performance, ESXi is well above anything you're 
getting from VMware Server, even just using the free utilities. The issues to 
consider are complexity and hardware support. You shouldn't have a problem with 
hardware if you do your homework before you buy.  However, the complexity of 
what you are trying to accomplish may be more than you want to get into.  It is 
likely still a better solution to go with separate storage and VM servers.
   
Sent from my BlackBerry® smartphone with SprintSpeed

-Original Message-
From: Tim Cook t...@cook.ms
Date: Sun, 8 Nov 2009 12:43:59 
To: Joe Autyj...@netmusician.org
Cc: zfs-discuss@opensolaris.orgzfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] RAID-Z and virtualization


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Morten Dall
Just to clear up the VMware stuff.

ESXi + ZFS:
I've run production with a Thumper 4540, Solaris 10 (before dedup :), 48
drives, one pool,
NFS through 1 Gb to ESX (+ESXi) on dedicated NICs.
ZFS snapshots always proved to be consistent data to ESX.
=
ESX or ESXi depends on your needs.
NFS leaves all filesystem management to your ZFS OS.
A 1 Gb dedicated NIC is enough for operation (peaks, as with everything else,
e.g. fibre channel, when moving data across).
ZFS OS could be whatever you prefer (Solaris, OpenSolaris, FreeBSD...).
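
For reference, the share side of that is one-liner territory (a sketch with a
made-up subnet and dataset name):

   zfs create tank/esx-datastore
   zfs set sharenfs='rw=@192.168.10.0/24,root=@192.168.10.0/24' tank/esx-datastore

then add host:/tank/esx-datastore as an NFS datastore on the ESX side.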

Hope this helps someone
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss