Re: [zfs-discuss] virsh troubling zfs!?

2009-11-04 Thread Gary Pennington
On Tue, Nov 03, 2009 at 11:39:28AM -0800, Ralf Teckelmann wrote:
> Hi and hello,
> 
> I have a problem confusing me. I hope someone can help me with it.
> I followed a "best practise" - I think - using dedicated zfs filesystems for 
> my virtual machines.
> Commands (for completion):
> [i]zfs create rpool/vms[/i]
> [i]zfs create rpool/vms/vm1[/i]
> [i] zfs create -V 10G rpool/vms/vm1/vm1-dsk[/i]
> 
> This command creates the file system [i]/rpool/vms/vm1/vm1-dsk[/i] and the 
> according [i]/dev/zvol/dsk/rpool/vms/vm1/vm1-dsk[/i].
> 

(Clarification)

Your commands create two filesystems:

rpool/vms
rpool/vms/vm1

You then create a ZFS Volume:

rpool/vms/vm1/vm1-dsk

which results in associated dsk and rdsk devices being created as:

/dev/zvol/dsk/rpool/vms/vm1/vm1-dsk
/dev/zvol/rdsk/rpool/vms/vm1/vm1-dsk

These two nodes are artifacts of the ZFS volume implementation and are required
to allow ZFS volumes to emulate traditional disk devices. They appear and
disappear automatically as ZFS volumes are created and destroyed.
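
A quick way to see this (using the names from your example) is to list the
volumes and their device nodes before and after destroying one:

zfs list -t volume -r rpool/vms
ls -l /dev/zvol/dsk/rpool/vms/vm1/ /dev/zvol/rdsk/rpool/vms/vm1/

Once the volume has been destroyed, the same ls simply reports that the
entries no longer exist.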

> If I delete a VM I set up using this filesystem via virsh undefine vm1, 
> the /rpool/vms/vm1/vm1-dsk also gets deleted, but the 
> /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk is left behind.
> 

virsh undefine does not delete filesystems, disks, or any other kind of
backing storage. To delete the three datasets you created, you need to issue:

zfs destroy rpool/vms/vm1/vm1-dsk
zfs destroy rpool/vms/vm1
zfs destroy rpool/vms

or (more simply) you can do it recursively, if there's nothing else to be
affected:

zfs destroy -r rpool/vms

Obviously you need to be careful with recursive destruction: make sure no other
filesystems or volumes will be swept up with it.
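
If you want to double-check first, listing the hierarchy shows exactly what a
recursive destroy would take with it (snapshots have to be requested
explicitly):

zfs list -r rpool/vms
zfs list -r -t snapshot rpool/vms

Everything reported by those two commands would be removed by the recursive
destroy above.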

> Without /rpool/vms/vm1/vm1-dsk I am not able to do zfs destroy 
> rpool/vms/vm1/vm1-dsk, so the /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk 
> cannot be destroyed "and will be left forever"!? 
> 
> How can I get rid of this problem?

You don't have a problem. When the ZFS volume is destroyed (as described
above), the associated devices are removed along with it.


Hope that helps.

Gary
-- 
Gary Pennington
Solaris Core OS
Sun Microsystems
gary.penning...@sun.com


Re: [zfs-discuss] howto resize/extend an existing ZFS volume ?

2008-06-20 Thread Gary Pennington
On Fri, Jun 20, 2008 at 08:20:29AM -0700, Daniel Schwager wrote:
> Hi,
> 
> I would like to resize an existing volume. Is this possible?
> I can't find a zfs command for this task.
> 
> [EMAIL PROTECTED] ~]$ zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> pool1                     35.2G  72.1G    18K  /pool1
> pool1/solaris-b90-t1.img  9.89G  72.1G  8.79G  -

You need to set the volsize property on the volume; see the zfs(1M) man page
for details.
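
For example, something along these lines (the 20G target is just an
illustration) would grow the volume from your listing and confirm the new size:

zfs set volsize=20G pool1/solaris-b90-t1.img
zfs get volsize pool1/solaris-b90-t1.img

Growing a volume is safe, but whatever sits on top of it (a filesystem or a
guest's disk label) still has to be told about the extra space. Shrinking
volsize truncates the volume and can destroy data, so avoid that unless you
are sure the tail is unused.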

Gary


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-02 Thread Gary Pennington
Hmm, I just repeated this test on my system:

bash-3.2# uname -a
SunOS soe-x4200m2-6 5.11 onnv-gate:2007-11-02 i86pc i386 i86xpv

bash-3.2# prtconf | more
System Configuration:  Sun Microsystems  i86pc
Memory size: 7945 Megabytes

bash-3.2# prtdiag | more
System Configuration: Sun Microsystems Sun Fire X4200 M2
BIOS Configuration: American Megatrends Inc. 080012   02/02/2007
BMC Configuration: IPMI 1.5 (KCS: Keyboard Controller Style)

bash-3.2# ptime dd if=/dev/zero of=/xen/myfile bs=16k count=15
15+0 records in
15+0 records out

real       31.927
user        0.689
sys        15.750

bash-3.2# zpool iostat 1

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
xen         15.3G   121G      0    261      0  32.7M
xen         15.3G   121G      0    350      0  43.8M
xen         15.3G   121G      0    392      0  48.9M
xen         15.3G   121G      0    631      0  79.0M
xen         15.5G   121G      0    532      0  60.1M
xen         15.6G   120G      0    570      0  65.1M
xen         15.6G   120G      0    645      0  80.7M
xen         15.6G   120G      0    516      0  63.6M
xen         15.7G   120G      0    403      0  39.9M
xen         15.7G   120G      0    585      0  73.1M
xen         15.7G   120G      0    573      0  71.7M
xen         15.7G   120G      0    579      0  72.4M
xen         15.7G   120G      0    583      0  72.9M
xen         15.7G   120G      0    568      0  71.1M
xen         16.1G   120G      0    400      0  39.0M
xen         16.1G   120G      0    584      0  73.0M
xen         16.1G   120G      0    568      0  71.0M
xen         16.1G   120G      0    585      0  73.1M
xen         16.1G   120G      0    583      0  72.8M
xen         16.1G   120G      0    665      0  83.2M
xen         16.1G   120G      0    643      0  80.4M
xen         16.1G   120G      0    603      0  75.0M
xen         16.1G   120G      5    526   320K  64.9M
xen         16.7G   119G      0    582      0  68.0M
xen         16.7G   119G      0    639      0  78.5M
xen         16.7G   119G      0    641      0  80.2M
xen         16.7G   119G      0    664      0  83.0M
xen         16.7G   119G      0    629      0  78.5M
xen         16.7G   119G      0    654      0  81.7M
xen         17.2G   119G      0    563  63.4K  63.5M
xen         17.3G   119G      0    525      0  59.2M
xen         17.3G   119G      0    619      0  71.4M
xen         17.4G   119G      0      7      0   448K
xen         17.4G   119G      0      0      0      0
xen         17.4G   119G      0    408      0  51.1M
xen         17.4G   119G      0    618      0  76.5M
xen         17.6G   118G      0    264      0  27.4M
xen         17.6G   118G      0      0      0      0
xen         17.6G   118G      0      0      0      0
xen         17.6G   118G      0      0      0      0
...

I don't seem to be experiencing the same results as you.

The behaviour of ZFS might vary between invocations, but I don't think that
is related to xVM. Can you get the results to vary when just booting under
"bare metal"?

Gary

On Fri, Nov 02, 2007 at 10:46:56AM -0700, Martin wrote:
> I've removed half the memory, leaving 4 GB, and rebooted into "Solaris xVM", 
> and re-tried under Dom0. Sadly, I still get a similar problem. With "dd 
> if=/dev/zero of=myfile bs=16k count=15" the command returns in 15 
> seconds, and "zpool iostat 1 1000" shows 22 records with an I/O rate of around 
> 80M, then 209 records of 2.5M (pretty consistent), then the final 11 records 
> climbing to 2.82, 3.29, 3.05, 3.32, 3.17, 3.20, 3.33, 4.41, 5.44, 8.11.
> 
> regards
> 
> Martin

-- 
Gary Pennington
Solaris Core OS
Sun Microsystems
[EMAIL PROTECTED]