Re: [zfs-discuss] dataset is busy when doing snapshot

2012-05-20 Thread Anil Jangity
Yup, that's exactly what I did last night...

zoned=off
mountpoint=/some/place
mount
unmount
mountpoint=legacy
zoned=on
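
Spelled out as zfs commands, that sequence looks roughly like this (a
sketch; the temporary mountpoint is the same placeholder as above):

zfs set zoned=off zones/rani/ROOT/zbe-2
zfs set mountpoint=/some/place zones/rani/ROOT/zbe-2
zfs mount zones/rani/ROOT/zbe-2
zfs unmount zones/rani/ROOT/zbe-2
zfs set mountpoint=legacy zones/rani/ROOT/zbe-2
zfs set zoned=on zones/rani/ROOT/zbe-2
# after which the snapshot should go through:
zfs snapshot zones/rani/ROOT/zbe-2@migration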

Thanks!

On May 20, 2012, at 3:09 AM, Jim Klimov wrote:

 2012-05-20 8:18, Anil Jangity wrote:
 What causes these messages?
 
 cannot create snapshot 'zones/rani/ROOT/zbe-2@migration': dataset is busy
 
 ...This is on a very old OS now, snv_134b… I am working to upgrade them soon!
 There are zones living in the zones pool, but none of the zones are running
 or mounted.
 
 I've had such problems a few times on snv_117 and older,
 blocking recursive snapshots, the zfs-auto-snap service,
 and Live Upgrade (OpenSolaris SXCE), among other things,
 but I thought they had been fixed in releases later than
 that, well before the last ones like yours.
 
 Essentially, the workaround was to mount the offending
 dataset and unmount it again if it is not needed right
 now - this clears some internal ZFS flags and allows
 snapshots to proceed. This can be tricky with zoned=on
 datasets and, to an extent, with those that have a legacy
 mountpoint, but those problems can be worked around
 (i.e. disable zoned, mount, unmount, re-enable zoned).
 
 Hope this helps,
 //Jim Klimov
 



[zfs-discuss] dataset is busy when doing snapshot

2012-05-19 Thread Anil Jangity
What causes these messages?

cannot create snapshot 'zones/rani/ROOT/zbe-2@migration': dataset is busy


There are zones living in the zones pool, but none of the zones are running
or mounted.


root@:~# zfs get -r mounted,zoned,mountpoint zones/rani
NAME                      PROPERTY    VALUE        SOURCE
zones/rani                mounted     yes          -
zones/rani                zoned       off          default
zones/rani                mountpoint  /zones/rani  default
zones/rani/ROOT           mounted     no           -
zones/rani/ROOT           zoned       on           local
zones/rani/ROOT           mountpoint  legacy       local
zones/rani/ROOT@original  mounted     -            -
zones/rani/ROOT@original  zoned       -            -
zones/rani/ROOT@original  mountpoint  -            -
zones/rani/ROOT/zbe-2     mounted     no           -
zones/rani/ROOT/zbe-2     zoned       on           local
zones/rani/ROOT/zbe-2     mountpoint  legacy       inherited from zones/rani/ROOT
root@:~#

This is on a very old OS now, snv_134b… I am working to upgrade them soon!



[zfs-discuss] 2.5 to 3.5 bracket for SSD

2012-01-14 Thread Anil Jangity
I have a couple of Sun/Oracle X2270 boxes and am planning to get some 2.5"
Intel 320 SSDs for the rpool.

Do you happen to know what kind of bracket is required to fit the 2.5" SSD
into the 3.5" slots?

Thanks


[zfs-discuss] scrub never finishes

2008-07-13 Thread Anil Jangity
On one of the pools, I started a scrub, and it never finishes. At one
point I saw it get up to about 70%, but a little later, when I checked the
pool status again, it was back at 5% and starting over.

What is going on? Here is the pool layout:

  pool: data2
 state: ONLINE
 scrub: scrub in progress, 35.25% done, 0h54m to go
config:

        NAME         STATE     READ WRITE CKSUM
        data2        ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t15d0  ONLINE       0     0     0
            c0t16d0  ONLINE       0     0     0

errors: No known data errors



Thanks




Re: [zfs-discuss] scrub never finishes

2008-07-13 Thread Anil Jangity
Oh, my hunch was right. Yup, I do have an hourly snapshot going. I'll 
take it out and see.
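
In practice, pausing the snapshots for the duration of the scrub might look
something like this (a sketch that assumes the hourly snapshot comes from the
zfs auto-snapshot SMF service; a cron-driven snapshot would instead need its
crontab entry commented out):

# stop the hourly auto-snapshots, re-run the scrub, then re-enable them
svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
zpool scrub data2
# ...check periodically until zpool status reports the scrub completed...
zpool status data2
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly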

Thanks!


Bob Friesenhahn wrote:
 On Sun, 13 Jul 2008, Anil Jangity wrote:

   
 On one of the pools, I started a scrub. It never finishes. At one time,
 I saw it go up to like 70% and then a little bit later I ran the pool
 status, it went back to 5% and started again.

 What is going on? Here is the pool layout:
 

 Initiating a snapshot stops the scrub.  I don't know if the scrub is 
 restarted at 0%, or simply aborted.  Are you taking snapshots during 
 the scrub?

 Bob




[zfs-discuss] global zone snapshots

2008-06-24 Thread Anil Jangity
Is it possible to give a zone access to the snapshots of a global-zone
dataset (through lofs, perhaps)? I recall that you can't just delegate a
snapshot dataset into a zone yet, but I was wondering if there is some lofs
magic I can do.

Thanks



[zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
Why is it that the read operations are 0 but the read bandwidth is not 0?
What is iostat [not] accounting for? Is it the metadata reads? (Is it
possible to determine what kind of metadata reads these are?)
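
For reference, the per-device view can show where the stray read bandwidth
is going (the interval and count below are only examples):

# per-vdev/per-device statistics, sampled every 5 seconds, 12 samples
zpool iostat -v data1 5 12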

I plan to have 3 disks and am debating what I should do with them: a raidz
(single or double parity) or just a mirror.

From some of the blog entries I have been reading, raidz may not be that
suitable for a lot of random reads.

With the number of reads shown below, though, I don't see any reason to
worry about that, so I would like to proceed with a double-parity raidz.
Please give me some feedback.
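
For concreteness, the layouts being weighed might be created roughly like
this (a sketch only; the pool name and device names are placeholders):

# double-parity raidz across the three disks (usable space: one disk)
zpool create data3 raidz2 c0t1d0 c0t2d0 c0t3d0

# single-parity raidz across the three disks (usable space: two disks)
zpool create data3 raidz1 c0t1d0 c0t2d0 c0t3d0

# plain two-way mirror using two of the disks (usable space: one disk)
zpool create data3 mirror c0t1d0 c0t2d0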

Thanks,
Anil

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.67G      2     19  52.3K   198K
data2       58.2G  9.83G      3     44  60.5K   180K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     21  11.3K   151K
data2       58.2G  9.83G      0     44    158   140K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     18    436   117K
data2       58.2G  9.83G      0     44    203   149K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     20  1.49K   167K
data2       58.2G  9.83G      0     44    331   154K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     21    791   166K
data2       58.2G  9.83G      0     46    199   167K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     25    686   364K
data2       58.2G  9.83G      0     45  35.9K   152K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     19    698   129K
data2       58.2G  9.83G      0     43     81   146K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     19  1.45K   141K
data2       58.2G  9.82G      0     44     59   139K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     19    436   124K
data2       58.2G  9.82G      0     43     71   145K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     21    412   150K
data2       58.2G  9.82G      0     41    114   138K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     20  1.35K   128K
data2       58.2G  9.82G      0     47    918   160K
----------  -----  -----  -----  -----  -----  -----




Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity




A three-way mirror and three disks in a double parity array are going to get you
the same usable space.  They are going to get you the same level of redundancy.
The only difference is that the RAIDZ2 is going to consume a lot more CPU cycles
calculating parity for no good cause.

In this case, a three-way mirror is the way to go.
  


Sorry, I meant to say that I will have 3 disks if I am doing raidz; if I am
doing a mirror, I will just have 2.

Thanks for the clarification on something else, though: that double parity
with 3 disks results in only 1 disk's worth of usable blocks.


Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity

I was using a 5-minute interval.
I did another test with a 1-second interval:

data1   41.6G  5.65G      0      0  63.4K      0
data2   58.2G  9.81G      0    447      0  2.31M

So the 63K of read bandwidth still shows no read operations. Is that still
rounding?

What exactly counts as an operation? (Just any I/O access?)



This could just be rounding - what interval did you use for zpool?  I believe
1 read per 5 seconds will be shown as '0' (rounded from 0.2); the read
bandwidth in your output is very small, suggesting that something like
this may be happening.

Brendan

  


[zfs-discuss] ZFS layout recommendations

2007-11-21 Thread Anil Jangity
I have a pool called data.

I have zones configured in that pool. The zonepath is /data/zone1/fs.
(/data/zone1 itself is not used for anything else, by anyone, and has no
other data.) There are no datasets being delegated to this zone.

I want to create a snapshot and make it available from within the zone.
What are the best options?

If I do something like:
zfs snapshot data/[EMAIL PROTECTED]

How do I make that snapshot available to the zone? 

It seems like I have two options:
1. 
add dataset
set name=data/zone1/recover
end

Then:
zfs send data/[EMAIL PROTECTED] | zfs recv data/zone1/[EMAIL PROTECTED]

I think this option might work, but a full zfs send transfers the whole
data/zone1 file system, which uses more disk space, rather than sending just
the changes between snapshots (see the incremental sketch after option 2
below).


2. I was thinking maybe I could NFS-share /data/zone1/.zfs/snapshot to
zone1, and then access it from inside the zone as an NFS client.
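
Regarding the disk-space concern in option 1: an incremental send transfers
only the blocks that changed between two snapshots, rather than the whole
file system. A rough sketch (the snapshot names are made up for
illustration):

# initial full stream into the delegated dataset
zfs send data/zone1@snap1 | zfs recv data/zone1/recover@snap1

# later snapshots go across as incrementals
zfs send -i data/zone1@snap1 data/zone1@snap2 | zfs recv data/zone1/recover@snap2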


Thanks, hope that's clear.
 
 


Re: [zfs-discuss] ZFS layout recommendations

2007-11-21 Thread Anil Jangity
Thanks James/John!

That link specifically mentions a new Solaris 10 release, so I am assuming
that means going from, say, u4 to Solaris 10 u5, and that shouldn't cause a
problem when doing plain patchadds (without Live Upgrade). If so, then I am
fine with those warnings and can use ZFS for the zones' paths.


So, to do that lofs mount, I could do something like:
zfs set snapdir=visible data/zone1

add fs
set dir=/data/zone1/.zfs
set special=/data/zone1/zfsfiles
set type=lofs
end

Then, from inside the zone, I should be able to do something like:
ls /data/zone1/zfsfiles/snapshot/1hrbackup

Please correct me if I am wrong.

(Just want to make sure I got it right, before I go try this on this 
semi-production system. Unfortunately, I don't have a test system on hand to 
play with right now.)
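
One note on the fs resource, as I understand zonecfg(1M): dir is the mount
point inside the zone and special is the global-zone path being
loopback-mounted, so the mapping above may be reversed. A sketch with the
roles the other way around (the in-zone path /zfsfiles is only an example):

zonecfg -z zone1
add fs
set dir=/zfsfiles
set special=/data/zone1/.zfs
set type=lofs
add options ro
end
commit

Inside the zone, the snapshots should then show up under
/zfsfiles/snapshot/1hrbackup.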
 
 