Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-09 Thread Richard Elling
On Oct 7, 2010, at 11:40 AM, Jim Sloey wrote:

 One of us found the following:
 
 The presence of snapshots can cause some unexpected behavior when you attempt 
 to free space. Typically, given appropriate permissions, you can remove a 
 file from a full file system, and this action results in more space becoming 
 available in the file system. However, if the file to be removed exists in a 
 snapshot of the file system, then no space is gained from the file deletion. 
 The blocks used by the file continue to be referenced from the snapshot. 

Yes, as designed.

 As a result, the file deletion can consume more disk space, because a new 
 version of the directory needs to be created to reflect the new state of the 
 namespace. This behavior means that you can get an unexpected ENOSPC or 
 EDQUOT when attempting to remove a file.

Yes, as designed.
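
For anyone following along, a quick way to see how much space the snapshots
themselves are holding (assuming a ZFS version new enough to have the extended
space-accounting properties; the dataset name here is just the one from this
thread) is:

# space referenced only by snapshots of pool1
zfs get -r usedbysnapshots pool1

# per-snapshot view; USED is the space unique to each snapshot
zfs list -r -t snapshot -o name,used,refer pool1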

 Since we are replicating snapshots to a remote system, what will be the impact
 of destroying the snapshots? Since the files we moved are some of the oldest,
 will we have to start replication to the remote site over again from the
 beginning?

In most cases where we implement this, the remote (backup) system will have
more snapshots than the production system. All you really need is a single,
common snapshot between the two to restart an incremental send/receive.
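
As a rough sketch of what that looks like (the hostnames, backup dataset name,
and snapshot names below are placeholders, not taken from this thread):

# find the newest snapshot name that still exists on BOTH sides
zfs list -H -t snapshot -o name -r pool1
ssh backuphost zfs list -H -t snapshot -o name -r backup/pool1

# then send only the delta from that common snapshot forward
zfs send -i pool1@common pool1@latest | ssh backuphost zfs receive backup/pool1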
 -- richard

-- 
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
I have a 20TB pool on a single mount point, made up of 42 disks from an EMC
SAN. We were running out of space, down to 40GB left (loading 8GB/day), and
have not yet received more disks for our SAN. Using df -h results in:
Filesystem    size   used   avail  capacity  Mounted on
pool1         20T    20T    55G    100%      /pool1
pool2         9.1T   8.0T   497G   95%       /pool2
The idea was to temporarily move a group of big directories to another zfs pool 
that had space available and link from the old location to the new.
cp -r /pool1/000 /pool2/
mv /pool1/000 /pool1/000d
ln -s /pool2/000 /pool1/000
rm -rf /pool1/000
Using df -h after the relocation results in:
Filesystem    size   used   avail  capacity  Mounted on
pool1         20T    19T    15G    100%      /pool1
pool2         9.1T   8.3T   221G   98%       /pool2
Using zpool list says:
NAME    SIZE    USED    AVAIL   CAP
pool1   19.9T   19.6T   333G    98%
pool2   9.25T   8.89T   369G    96%
Using zfs get all pool1 produces:
NAME   PROPERTY            VALUE                  SOURCE
pool1  type                filesystem             -
pool1  creation            Tue Dec 18 11:37 2007  -
pool1  used                19.6T                  -
pool1  available           15.3G                  -
pool1  referenced          19.5T                  -
pool1  compressratio       1.00x                  -
pool1  mounted             yes                    -
pool1  quota               none                   default
pool1  reservation         none                   default
pool1  recordsize          128K                   default
pool1  mountpoint          /pool1                 default
pool1  sharenfs            on                     local
pool1  checksum            on                     default
pool1  compression         off                    default
pool1  atime               on                     default
pool1  devices             on                     default
pool1  exec                on                     default
pool1  setuid              on                     default
pool1  readonly            off                    default
pool1  zoned               off                    default
pool1  snapdir             hidden                 default
pool1  aclmode             groupmask              default
pool1  aclinherit          secure                 default
pool1  canmount            on                     default
pool1  shareiscsi          off                    default
pool1  xattr               on                     default
pool1  replication:locked  true                   local

Has anyone experienced this or know where to look for a solution to recovering 
space?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Remco Lengers

 any snapshots?

zfs list -t snapshot

..Remco



On 10/7/10 7:24 PM, Jim Sloey wrote:

I have a 20TB pool on a single mount point, made up of 42 disks from an EMC
SAN. We were running out of space, down to 40GB left (loading 8GB/day), and
have not yet received more disks for our SAN. Using df -h results in:
Filesystem    size   used   avail  capacity  Mounted on
pool1         20T    20T    55G    100%      /pool1
pool2         9.1T   8.0T   497G   95%       /pool2
The idea was to temporarily move a group of big directories to another zfs pool 
that had space available and link from the old location to the new.
cp -r /pool1/000 /pool2/
mv /pool1/000 /pool1/000d
ln -s /pool2/000 /pool1/000
rm -rf /pool1/000
Using df -h after the relocation results in:
Filesystem    size   used   avail  capacity  Mounted on
pool1         20T    19T    15G    100%      /pool1
pool2         9.1T   8.3T   221G   98%       /pool2
Using zpool list says:
NAME    SIZE    USED    AVAIL   CAP
pool1   19.9T   19.6T   333G    98%
pool2   9.25T   8.89T   369G    96%
Using zfs get all pool1 produces:
NAME   PROPERTY            VALUE                  SOURCE
pool1  type                filesystem             -
pool1  creation            Tue Dec 18 11:37 2007  -
pool1  used                19.6T                  -
pool1  available           15.3G                  -
pool1  referenced          19.5T                  -
pool1  compressratio       1.00x                  -
pool1  mounted             yes                    -
pool1  quota               none                   default
pool1  reservation         none                   default
pool1  recordsize          128K                   default
pool1  mountpoint          /pool1                 default
pool1  sharenfs            on                     local
pool1  checksum            on                     default
pool1  compression         off                    default
pool1  atime               on                     default
pool1  devices             on                     default
pool1  exec                on                     default
pool1  setuid              on                     default
pool1  readonly            off                    default
pool1  zoned               off                    default
pool1  snapdir             hidden                 default
pool1  aclmode             groupmask              default
pool1  aclinherit          secure                 default
pool1  canmount            on                     default
pool1  shareiscsi          off                    default
pool1  xattr               on                     default
pool1  replication:locked  true                   local

Has anyone experienced this or know where to look for a solution to recovering 
space?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread taemun
Forgive me, but isn't this incorrect:

---
mv /pool1/000 /pool1/000d
---
rm -rf /pool1/000

Shouldn't that last line be
rm -rf /pool1/000d
??

On 8 October 2010 04:32, Remco Lengers re...@lengers.com wrote:

  any snapshots?

 zfs list -t snapshot

 ..Remco



 On 10/7/10 7:24 PM, Jim Sloey wrote:

 I have a 20TB pool on a single mount point, made up of 42 disks from an EMC
 SAN. We were running out of space, down to 40GB left (loading 8GB/day), and
 have not yet received more disks for our SAN. Using df -h results in:
 Filesystem    size   used   avail  capacity  Mounted on
 pool1         20T    20T    55G    100%      /pool1
 pool2         9.1T   8.0T   497G   95%       /pool2
 The idea was to temporarily move a group of big directories to another zfs 
 pool that had space available and link from the old location to the new.
 cp -r /pool1/000 /pool2/
 mv /pool1/000 /pool1/000d
 ln -s /pool2/000 /pool1/000
 rm -rf /pool1/000
 Using df -h after the relocation results in:
 Filesystem    size   used   avail  capacity  Mounted on
 pool1         20T    19T    15G    100%      /pool1
 pool2         9.1T   8.3T   221G   98%       /pool2
 Using zpool list says:
 NAME    SIZE    USED    AVAIL   CAP
 pool1   19.9T   19.6T   333G    98%
 pool2   9.25T   8.89T   369G    96%
 Using zfs get all pool1 produces:
 NAME   PROPERTY            VALUE                  SOURCE
 pool1  type                filesystem             -
 pool1  creation            Tue Dec 18 11:37 2007  -
 pool1  used                19.6T                  -
 pool1  available           15.3G                  -
 pool1  referenced          19.5T                  -
 pool1  compressratio       1.00x                  -
 pool1  mounted             yes                    -
 pool1  quota               none                   default
 pool1  reservation         none                   default
 pool1  recordsize          128K                   default
 pool1  mountpoint          /pool1                 default
 pool1  sharenfs            on                     local
 pool1  checksum            on                     default
 pool1  compression         off                    default
 pool1  atime               on                     default
 pool1  devices             on                     default
 pool1  exec                on                     default
 pool1  setuid              on                     default
 pool1  readonly            off                    default
 pool1  zoned               off                    default
 pool1  snapdir             hidden                 default
 pool1  aclmode             groupmask              default
 pool1  aclinherit          secure                 default
 pool1  canmount            on                     default
 pool1  shareiscsi          off                    default
 pool1  xattr               on                     default
 pool1  replication:locked  true                   local

 Has anyone experienced this or know where to look for a solution to 
 recovering space?


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
Yes, you're correct. There was a typo when I copied to the forum.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
Yes. We run a snapshot job from cron and replicate it to a disaster recovery site (a rough sketch of that kind of job follows the listing below).
NAME USED  AVAIL  REFER  MOUNTPOINT
po...@20100930-22:20:00  13.2M  -  19.5T  -
po...@20101001-01:20:00  4.35M  -  19.5T  -
po...@20101001-04:20:00  0  -  19.5T  -
po...@20101001-07:20:00  0  -  19.5T  -
po...@20101001-10:20:00  1.87M  -  19.5T  -
po...@20101001-13:20:00  2.93M  -  19.5T  -
po...@20101001-16:20:00  4.68M  -  19.5T  -
po...@20101001-19:20:00  5.47M  -  19.5T  -
po...@20101001-22:20:00  3.33M  -  19.5T  -
po...@20101002-01:20:00  4.98M  -  19.5T  -
po...@20101002-04:20:00   298K  -  19.5T  -
po...@20101002-07:20:00   138K  -  19.5T  -
po...@20101002-10:20:00  1.14M  -  19.5T  -
po...@20101002-13:20:00   228K  -  19.5T  -
po...@20101002-16:20:00  0  -  19.5T  -
po...@20101002-19:20:00  0  -  19.5T  -
po...@20101002-22:20:01   110K  -  19.5T  -
po...@20101003-01:20:00  1.39M  -  19.5T  -
po...@20101003-04:20:00  3.67M  -  19.5T  -
po...@20101003-07:20:00   540K  -  19.5T  -
po...@20101003-10:20:00   551K  -  19.5T  -
po...@20101003-13:20:00   640K  -  19.5T  -
po...@20101003-16:20:00  1.72M  -  19.5T  -
po...@20101003-19:20:00   542K  -  19.5T  -
po...@20101003-22:20:00  0  -  19.5T  -
po...@20101004-01:20:00  0  -  19.5T  -
po...@20101004-04:20:01   102K  -  19.5T  -
po...@20101004-07:20:00   501K  -  19.5T  -
po...@20101004-10:20:00  2.54M  -  19.5T  -
po...@20101004-13:20:00  5.24M  -  19.5T  -
po...@20101004-16:20:00  4.78M  -  19.5T  -
po...@20101004-19:20:00  3.86M  -  19.5T  -
po...@20101004-22:20:00  4.37M  -  19.5T  -
po...@20101005-01:20:00  7.18M  -  19.5T  -
po...@20101005-04:20:00  0  -  19.5T  -
po...@20101005-07:20:00  0  -  19.5T  -
po...@20101005-10:20:00  2.89M  -  19.5T  -
po...@20101005-13:20:00  8.42M  -  19.5T  -
po...@20101005-16:20:00  12.0M  -  19.5T  -
po...@20101005-19:20:00  4.75M  -  19.5T  -
po...@20101005-22:20:00  2.49M  -  19.5T  -
po...@20101006-01:20:00  3.06M  -  19.5T  -
po...@20101006-04:20:00   244K  -  19.5T  -
po...@20101006-07:20:00   182K  -  19.5T  -
po...@20101006-10:20:00  3.16M  -  19.5T  -
po...@20101006-13:20:00   177M  -  19.5T  -
po...@20101006-16:20:00   396M  -  19.5T  -
po...@20101006-22:20:00   282M  -  19.5T  -
po...@20101007-10:20:00   187M  -  19.5T  -
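
For readers wanting the general shape of such a cron-driven setup, here is a
minimal sketch (the DR hostname, backup dataset name, and schedule are
placeholders, not taken from this thread), run from cron every few hours:

#!/bin/ksh
# snapshot pool1 with a timestamp and ship the delta to the DR host
PREV=$(zfs list -H -t snapshot -o name -s creation -r pool1 | tail -1)
NOW=pool1@$(date +%Y%m%d-%H:%M:%S)
zfs snapshot ${NOW}
zfs send -i ${PREV} ${NOW} | ssh drhost zfs receive backup/pool1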
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
One of us found the following:

The presence of snapshots can cause some unexpected behavior when you attempt 
to free space. Typically, given appropriate permissions, you can remove a file 
from a full file system, and this action results in more space becoming 
available in the file system. However, if the file to be removed exists in a 
snapshot of the file system, then no space is gained from the file deletion. 
The blocks used by the file continue to be referenced from the snapshot. 
As a result, the file deletion can consume more disk space, because a new 
version of the directory needs to be created to reflect the new state of the 
namespace. This behavior means that you can get an unexpected ENOSPC or EDQUOT 
when attempting to remove a file.

Since we are replicating snapshots to a remote system, what will be the impact
of destroying the snapshots? Since the files we moved are some of the oldest,
will we have to start replication to the remote site over again from the beginning?
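
(A hedged aside for anyone in the same position: before destroying anything, it
is worth confirming that at least one snapshot name still exists on both sides;
the hostname, backup dataset name, and snapshot name below are placeholders,
not taken from this thread.)

# compare snapshot lists on the two hosts and note a common name
zfs list -H -t snapshot -o name -r pool1
ssh drhost zfs list -H -t snapshot -o name -r backup/pool1

# once a common snapshot is confirmed, older snapshots can be destroyed to free space
zfs destroy pool1@some-old-snapshot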
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss