Re: [zfs-discuss] unable to mount zfs file system..pl help

2011-08-11 Thread Vikash Gupta
Hi Ian,

It's there in the subject line.

I am unable to see the zfs file system in df output.


# zfs get all pool1/fs1
NAME   PROPERTY  VALUE  SOURCE
pool1/fs1  type  filesystem -
pool1/fs1  creation  Fri Aug 12  1:44 2011  -
pool1/fs1  used  21K-
pool1/fs1  available 228G   -
pool1/fs1  referenced21K-
pool1/fs1  compressratio 1.00x  -
pool1/fs1  mounted   no -
pool1/fs1  quota none   default
pool1/fs1  reservation   none   default
pool1/fs1  recordsize128K   default
pool1/fs1  mountpoint/vik   local
pool1/fs1  sharenfs  offdefault
pool1/fs1  checksum  on default
pool1/fs1  compression   offdefault
pool1/fs1  atime on default
pool1/fs1  devices   on default
pool1/fs1  exec  on default
pool1/fs1  setuidon default
pool1/fs1  readonly  offdefault
pool1/fs1  zoned offdefault
pool1/fs1  snapdir   hidden default
pool1/fs1  aclinheritrestricted default
pool1/fs1  canmount  on default
pool1/fs1  xattr on default
pool1/fs1  copies1  default
pool1/fs1  version   5  -
pool1/fs1  utf8only  off-
pool1/fs1  normalization none   -
pool1/fs1  casesensitivity   sensitive  -
pool1/fs1  vscan offdefault
pool1/fs1  nbmandoffdefault
pool1/fs1  sharesmb  offdefault
pool1/fs1  refquota  none   default
pool1/fs1  refreservationnone   default
pool1/fs1  primarycache  alldefault
pool1/fs1  secondarycachealldefault
pool1/fs1  usedbysnapshots   0  -
pool1/fs1  usedbydataset 21K-
pool1/fs1  usedbychildren0  -
pool1/fs1  usedbyrefreservation  0  -
pool1/fs1  logbias   latencydefault
pool1/fs1  dedup offdefault
pool1/fs1  mlslabel  none   default
pool1/fs1  sync  standard   default

Rgds
Vikash

-Original Message-
From: Ian Collins [mailto:i...@ianshome.com] 
Sent: Friday, August 12, 2011 2:15 AM
To: Vikash Gupta
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] unable to mount zfs file system..pl help

  On 08/12/11 08:25 AM, Vikash Gupta wrote:
>
> # uname -a
>
> Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 
> x86_64 x86_64 x86_64 GNU/Linux
>
> # rpm -qa|grep zfs
>
> zfs-test-0.5.2-1
>
> zfs-modules-0.5.2-1_2.6.18_194.el5
>
> zfs-0.5.2-1
>
> zfs-modules-devel-0.5.2-1_2.6.18_194.el5
>
> zfs-devel-0.5.2-1
>
> # zfs list
>
> NAMEUSED  AVAIL  REFER  MOUNTPOINT
>
> pool1   120K   228G21K  /pool1
>
> pool1/fs121K   228G21K  /vik
>
You haven't said what your problem is (what commands did you use and 
what errors you get?)!

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ray Van Dolson
> 
> Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
> understand its wear and performance characteristics can be bumped up
> significantly by increasing the overprovisioning to 20% (dropping
> usable capacity to 80%).
> 
> Anyone have experience with this?

I think most purposes are actually better suited to disabling the ZIL
completely.  But of course you need to understand it and make an intelligent
decision yourself in your particular case.

Figure it like this...  Suppose you have a 6Gbit bus.  Suppose you have an
old OS which flushes TXG's maximum every 30 sec (as opposed to the more
current 5 sec)...  that means the absolute max data you could possibly have
sitting in the log device is 6gbit * 30sec = 180Gbit = 22 Gbytes.  Leave
yourself some breathing room, and figure a comfortable size is 30G usable.  

Intel 320's look like they start at 40G, so you're definitely safe
overprovisioning 25% or higher.

I cannot speak to any actual performance increase resulting from this tweak.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
> 
> Unfortunately, if dedup was previously enabled, the damage was already
> done since dedup is baked into your pool.  The situation may improve
> over time as dedup blocks are gradually eliminated but this depends on
> how often the data gets overwritten and how many old snapshots are
> deleted.

No matter how you cut it, unless you want to "zpool destroy," there is a
fixed total amount of time that will be spent destroying zfs snapshots that
have previously dedup data baked into them.  Your only choice is when to
perform that work, and broken into what granularity.

Most likely you're talking about daily snapshots at this point.  Where each
night at midnight, some time will be spent destroying the oldest daily
snapshot.  Most likely the best thing for you to do is simply to do nothing,
and allow 2 hours every night at midnight, until those old snaps are all
gone.  But if you wanted, you could destroy more snapshots each night, and
get it all over with sooner.  Or something like that.


> The alternative is to install a lot more RAM, or install a SSD as a
> L2ARC device.

RAM (ARC) and SSD (L2ARC) are only going to help prevent re-reading the
blocks from disk after they've been read once.  No matter what, if you're
talking about time to destroy snapshots, you're going to have to spend time
reading (and erasing) those blocks from disk.  So infinite ram and infinite
SSD aren't going to help you at this point.  Given that it's already been
created with dedup, and the DDT is not currently this instant already in
cache.

I mean...  Yes, it's marginally possible to imagine some benefit, if you
install enough ram or cache, you could read all the contents of the snapshot
during the day so it will still be warm in cache at night when the snap gets
destroyed, or something like that.  Maybe able to spread the workload out
across less critical hours of the day.  But like I said.  "Marginal."  I
doubt it.  

Most likely the best thing for you to do is simply do nothing, and wait for
it to go away in the upcoming weeks, a little bit each night.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool replace malfunction how to correct?

2011-08-11 Thread Alain Feeny
Hello, 
I have an EON server installed on which a drive mysteriously went offline a few 
time, so I decided to replace the drive with one I connected to another port. 
Unfortunately the replace operation failed, I think because of hardware issues 
with the replacement drive.  I bought a new replacement drive, but now the pool 
is degraded, and I can't seem to clear the status of the pool in order to 
rebuild with this new device.

The unreliable drive is c2t5d0, the replacement drive was c2t7d0

Here are a few screenshots that will help describe the status of the system.


# cfgadm
Ap_Id  Type Receptacle   Occupant Condition
sata0/0::dsk/c1t0d0disk connectedconfigured   ok
sata0/1::dsk/c1t1d0disk connectedconfigured   ok
sata0/2::dsk/c1t2d0disk connectedconfigured   ok
sata0/3::dsk/c1t3d0disk connectedconfigured   ok
sata0/4::dsk/c1t4d0disk connectedconfigured   ok
sata0/5::dsk/c1t5d0disk connectedconfigured   ok
sata0/6sata-portemptyunconfigured ok
sata0/7sata-portemptyunconfigured ok
sata1/0::dsk/c2t0d0disk connectedconfigured   ok
sata1/1::dsk/c2t1d0disk connectedconfigured   ok
sata1/2::dsk/c2t2d0disk connectedconfigured   ok
sata1/3::dsk/c2t3d0disk connectedconfigured   ok
sata1/4::dsk/c2t4d0disk connectedconfigured   ok
sata1/5::dsk/c2t5d0disk connectedconfigured   ok
sata1/6::dsk/c2t6d0disk connectedconfigured   ok
sata1/7disk connectedunconfigured unknown
usb0/1 unknown  emptyunconfigured ok
usb0/2 usb-inputconnectedconfigured   ok
usb0/3 unknown  emptyunconfigured ok
usb0/4 unknown  emptyunconfigured ok

# zpool status -v
  pool: tank
 state: DEGRADED
 scrub: resilver completed after 0h0m with 0 errors on Thu Aug 11 04:29:00 2011
config:

NAMESTATE READ WRITE CKSUM
tankDEGRADED 0 0 0
  raidz1-0  ONLINE   0 0 0
c1t0d0p0ONLINE   0 0 0
c1t1d0p0ONLINE   0 0 0
c1t4d0p0ONLINE   0 0 0
c1t3d0p0ONLINE   0 0 0
c1t2d0p0ONLINE   0 0 0
c1t5d0p0ONLINE   0 0 0
  raidz1-1  DEGRADED 0 0 0
c2t0d0  ONLINE   0 0 0
c2t1d0  ONLINE   0 0 0
c2t2d0  ONLINE   0 0 0
replacing-3 DEGRADED 0 0 0
  c2t5d0ONLINE   0 0 0  9.43M resilvered
  17980209657994758716  UNAVAIL  0 0 0  was 
/dev/dsk/c2t7d0s0
c2t3d0  ONLINE   0 0 0
c2t4d0  ONLINE   0 0 0
cache
  c2t6d0ONLINE   0 0 0

errors: No known data errors

At this point I'd appreciate suggestions on how to proceed to fix this issue.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to mount zfs file system..pl help

2011-08-11 Thread Ian Collins

 On 08/12/11 08:25 AM, Vikash Gupta wrote:


# uname -a

Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 
x86_64 x86_64 x86_64 GNU/Linux


# rpm -qa|grep zfs

zfs-test-0.5.2-1

zfs-modules-0.5.2-1_2.6.18_194.el5

zfs-0.5.2-1

zfs-modules-devel-0.5.2-1_2.6.18_194.el5

zfs-devel-0.5.2-1

# zfs list

NAMEUSED  AVAIL  REFER  MOUNTPOINT

pool1   120K   228G21K  /pool1

pool1/fs121K   228G21K  /vik

You haven't said what your problem is (what commands did you use and 
what errors you get?)!


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] unable to mount zfs file system..pl help

2011-08-11 Thread Vikash Gupta

# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 
x86_64 GNU/Linux

# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
pool1   120K   228G21K  /pool1
pool1/fs121K   228G21K  /vik

[root@nofclo038]/# zfs get all pool1/fs1
NAME   PROPERTY  VALUE  SOURCE
pool1/fs1  type  filesystem -
pool1/fs1  creation  Fri Aug 12  1:44 2011  -
pool1/fs1  used  21K-
pool1/fs1  available 228G   -
pool1/fs1  referenced21K-
pool1/fs1  compressratio 1.00x  -
pool1/fs1  mounted   no -
pool1/fs1  quota none   default
pool1/fs1  reservation   none   default
pool1/fs1  recordsize128K   default
pool1/fs1  mountpoint/vik   local
pool1/fs1  sharenfs  offdefault
pool1/fs1  checksum  on default
pool1/fs1  compression   offdefault
pool1/fs1  atime on default
pool1/fs1  devices   on default
pool1/fs1  exec  on default
pool1/fs1  setuidon default
pool1/fs1  readonly  offdefault
pool1/fs1  zoned offdefault
pool1/fs1  snapdir   hidden default
pool1/fs1  aclinheritrestricted default
pool1/fs1  canmount  on default
pool1/fs1  xattr on default
pool1/fs1  copies1  default
pool1/fs1  version   5  -
pool1/fs1  utf8only  off-
pool1/fs1  normalization none   -
pool1/fs1  casesensitivity   sensitive  -
pool1/fs1  vscan offdefault
pool1/fs1  nbmandoffdefault
pool1/fs1  sharesmb  offdefault
pool1/fs1  refquota  none   default
pool1/fs1  refreservationnone   default
pool1/fs1  primarycache  alldefault
pool1/fs1  secondarycachealldefault
pool1/fs1  usedbysnapshots   0  -
pool1/fs1  usedbydataset 21K-
pool1/fs1  usedbychildren0  -
pool1/fs1  usedbyrefreservation  0  -
pool1/fs1  logbias   latencydefault
pool1/fs1  dedup offdefault
pool1/fs1  mlslabel  none   default
pool1/fs1  sync  standard   default

Rgds
Vikash
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Ray Van Dolson
On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote:
>   On 08/12/11 08:00 AM, Ray Van Dolson wrote:
> > Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
> > understand its wear and performance characteristics can be bumped up
> > significantly by increasing the overprovisioning to 20% (dropping
> > usable capacity to 80%).
> >
> A log device doesn't have to be larger than a few GB, so that shouldn't 
> be a problem.  I've found even low cost SSDs make a huge difference to 
> the NFS write performance of a pool.

We've been using the X-25E (SLC-based).  It's getting hard to find, and
since we're trying to stick to Intel drives (Nexenta certifies them),
and Intel doesn't have a new SLC drive available until late September,
we're hoping an overprovisioned 320 could fill the gap until then and
perform at least as well as the X-25E.

Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Ian Collins

 On 08/12/11 08:00 AM, Ray Van Dolson wrote:

Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the overprovisioning to 20% (dropping
usable capacity to 80%).

A log device doesn't have to be larger than a few GB, so that shouldn't 
be a problem.  I've found even low cost SSDs make a huge difference to 
the NFS write performance of a pool.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Ray Van Dolson
Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the overprovisioning to 20% (dropping
usable capacity to 80%).

Anyone have experience with this?

Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-11 Thread Ian Collins

 On 08/12/11 01:35 AM, Bob Friesenhahn wrote:

On Wed, 10 Aug 2011, Nix wrote:

Yes, I have enabled the dedup on the pool.

I will off the dedup and will try to delete the new created snapshot.

Unfortunately, if dedup was previously enabled, the damage was already
done since dedup is baked into your pool.  The situation may improve
over time as dedup blocks are gradually eliminated but this depends on
how often the data gets overwritten and how many old snapshots are
deleted.

The alternative is to install a lot more RAM, or install a SSD as a
L2ARC device.

On a system with both, I recently timed 147 minutes to destroy a 2.2T 
filesystem that was q 95% duplicate of another.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send & delegated permissions

2011-08-11 Thread Test Rat
After replicating a pool with zfs send/recv I've found out I cannot
perform some zfs on those datasets anymore. The datasets had permissions
set via `zfs allow'. To verify assumption I did

  $ zfs allow blah create tank/foo
  $ zfs allow tank/foo
   Permissions on tank/foo 
  Local+Descendent permissions:
  user blah create
  $ zfs snapshot tank/foo@send
  $ zfs send -p tank/foo@send | zfs recv tank/bar
  $ zfs allow tank/bar
  [nothing]

  $ zfs create -sV1g tank/foo
  $ zfs create -sV1g tank/bar
  $ zpool create foo zvol/tank/foo
  $ zpool create bar zvol/tank/bar
  $ zfs allow blah create foo
  $ zfs allow foo
   Permissions on foo 
  Local+Descendent permissions:
  user blah create
  $ zfs snapshot -r foo@send
  $ zfs send -R foo@send | zfs recv -F bar
  $ zfs allow bar
  [nothing]

So, what are permissions if not properties? And why they're not
sent unlike say user/group quotas?

--
ZFSv28 as of FreeBSD 9.0-BETA1 r224776M amd64
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-11 Thread Bob Friesenhahn

On Wed, 10 Aug 2011, Nix wrote:


Yes, I have enabled the dedup on the pool.

I will off the dedup and will try to delete the new created snapshot.


Unfortunately, if dedup was previously enabled, the damage was already 
done since dedup is baked into your pool.  The situation may improve 
over time as dedup blocks are gradually eliminated but this depends on 
how often the data gets overwritten and how many old snapshots are 
deleted.


The alternative is to install a lot more RAM, or install a SSD as a 
L2ARC device.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] OpenSolaris Package Manager Information Required!!

2011-08-11 Thread Vijaeendra
I am not able to install packages from the package manager on opensolaris.

Even after connecting to the internet. It says preparation failed. Also how do 
I include new repositories to my package manager?? Please help, I m new to 
Solaris !!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-11 Thread Nix
Hi Ian,

Yes, I have enabled the dedup on the pool.

I will off the dedup and will try to delete the new created snapshot.

-
Nix
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-11 Thread Paul Kraus
On Wed, Aug 10, 2011 at 11:32 PM, Edward Ned Harvey
 wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Ian Collins
>>
>> > I am facing issue with zfs destroy, this takes almost 3 Hours to
>> delete the snapshot of size 150G.
>> >
>> Do you have dedup enabled?
>
> I have always found, zfs destroy takes some time.  zpool destroy takes no
> time.
>
> Although zfs destroy takes some time, it's not terrible unless you have
> dedup enabled.  If you have dedup enabled, then yes it's terrible, as Ian
> suggested.

I have found that the time to destroy a snapshot or dataset is
directly related to the number of objects and not the size. A dataset
or snapshot with millions of small files will take a very long time
(hours, but not usually days).

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss