[zfs-discuss] Snapshot access from non-global zones

2009-08-20 Thread Mike Futerko
Hello


I'm curious: what is the best way (if any) to access ZFS snapshots from
non-global zones?

I have a common ZFS file system (on Solaris Express b116, for example) in the
global zone, lofs-mounted into several non-global zones. In each zone I
can access all files with no problem, but I'm unable to access the snapshots.
I can get the list of snapshots, but not the contents of an individual
snapshot; all commands fail with "Not owner". There is one trick,
however: if a snapshot was created *before* the file system was lofs-mounted
inside a zone, that snapshot is perfectly accessible. But
if a snapshot was created after the mount, it is not accessible
from inside the zone.
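
For reference, the setup is roughly the following (zone, dataset and snapshot
names here are only placeholders):

# in the global zone: lofs-mount the ZFS file system into the zone
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> set special=/data/shared
zonecfg:myzone:fs> set dir=/shared
zonecfg:myzone:fs> end
zonecfg:myzone> commit

# inside the non-global zone: works for snapshots taken before the lofs mount,
# fails with "Not owner" for snapshots taken after it
ls /shared/.zfs/snapshot/mysnap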

So is this correct behavior or is it a bug? Are there any workarounds?

Thanks in advance for all comments.

Regards,
Mike


Re: [zfs-discuss] j4200 drive carriers

2009-03-30 Thread Mike Futerko
Hello

> 1) Dual IO module option
> 2) Multipath support 
> 3) Zone support [multi host connecting to same JBOD or same set of JBOD's
> connected in series. ] 

This sounds interesting - where can I read more about connecting two
hosts to the same J4200, etc.?


Thanks
Mike



Re: [zfs-discuss] zfs send / zfs receive hanging

2009-01-12 Thread Mike Futerko
Hi


> It would be also nice to be able to specify the zpool version during pool 
> creation. E.g. If I have a newer machine and I want to move data to an older 
> one, I should be able to specify the pool version, otherwise it's a one-way 
> street.

zpool create -o version=xx ...
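
For example (the pool name, device and version number below are only placeholders):

zpool upgrade -v                          # list the pool versions this host supports
zpool create -o version=10 tank c0t1d0    # create the pool at an older on-disk version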


Mike


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Mike Futerko
Hello

> This seems like a reasonable proposal to enhance zfs list.  But it would 
> also be good to add as few new options to zfs list as possible.  So it 
> probably makes sense to add at most one of these new options.  Or 
> perhaps add an optional depth argument to the -r option instead?
> 
> As you point out, the -c option is user friendly while the -depth (or 
> maybe -d) option is more general.  There have been several requests for 
> the -c option.  Would anyone prefer the -depth option?  In what cases 
> would this be used?


I'd like to make a few more proposals for improving zfs list, if they don't
contravene the concept of the zfs list command.

Currently zfs list returns the error "operation not applicable to datasets
of this type" in cases where it could do something useful: for example,
"zfs list -t snapshot file/system" returns the above error, while it could
return what you actually asked for - the list of all snapshots of
"file/system". A similar case is "zfs list file/system@snapshot" - could zfs
be smart enough to return the snapshot instead of an error message when the
name passed contains "@"?

The other thing is zfs list performance... even if you only want the list of
snapshot names with no other properties, "zfs list -o name -t snapshot -r
file/system" still takes quite a long time when there are hundreds of
snapshots, while "ls /file/system/.zfs/snapshot" returns immediately.
Can this also be improved somehow, please?
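
A quick way to see the difference (using ptime; the file system name is as above):

ptime zfs list -o name -t snapshot -r file/system > /dev/null
ptime ls /file/system/.zfs/snapshot > /dev/null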



Thanks
Mike




Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-08 Thread Mike Futerko
Hello

> Yah, the incrementals are from a 30TB volume, with about 1TB used.
> Watching iostat on each side during the incremental sends, the sender
> side is hardly doing anything, maybe 50iops read, and that could be
> from other machines accessing it, really light load.
> The receiving side however, for about 3 minutes it is peaking around
> 1500 iops reads, and no writes.


Have you tried truss on both sides? From my experiments I found that at the
beginning of the transfer the sending side mostly sleeps while the receiving
side lists all available snapshots on the file system being synced. So if you
have a lot of snapshots on the receiving side (as in my case), the process
will take a long time sending no data, just listing the snapshots. The worst
case is a recursive sync of hundreds of file systems with hundreds of
snapshots each. I'm sure this must be optimized somehow, otherwise
it's almost useless in practice.
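
Roughly what I mean (the PID, host and dataset names are only placeholders):

# attach truss to the running zfs receive on the receiving host
truss -f -p 12345
# or run the receive itself under truss
zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh backuphost "truss -f zfs recv -d tank/backup"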


Regards
Mike


Re: [zfs-discuss] How to compile mbuffer

2008-12-06 Thread Mike Futerko
Hello

> i've been following the "[zfs-discuss] 'zfs recv' is very slow" thread
> and i believe i have the same issue; we get ~10MB/sec sending large
> incrimental data sets using zfs send | ssh | zfs recv. I'd like to try
> mbuffer.
> 
> We're running Solaris Express Developers Edition (SunOS murray 5.11
> snv_79a i86pc i386 i86pc).  I found the download page
> http://www.maier-komor.de/mbuffer.html and i have the source files on
> Murray.
> 
> How do i compile mbuffer for our system, and what syntax to i use to
> invoke it within the zfs send recv?
> 
> Any help appreciated!


I used to compile it this way:

1) wget http://www.maier-komor.de/software/mbuffer/mbuffer-20081113.tgz
2) gtar -xzvf mbuffer-20081113.tgz
3) cd mbuffer-20081113
4) ./configure --prefix=/usr/local --disable-debug CFLAGS="-O" MAKE=gmake
If you are on a 64-bit system you may want to compile a 64-bit version instead:
./configure --prefix=/usr/local --disable-debug CFLAGS="-O -m64" MAKE=gmake
5) gmake && gmake install
6) /usr/local/bin/mbuffer -V
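
And roughly how I then invoke it in the send/recv pipeline (the host name,
port and buffer sizes below are only examples):

# on the receiving host
mbuffer -I 9090 -s 128k -m 512M | zfs recv -d tank/backup
# on the sending host
zfs send tank/fs@snap | mbuffer -s 128k -m 512M -O backuphost:9090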


Regards
Mike


[zfs-discuss] ZFS performance

2008-11-16 Thread Mike Futerko
Hello list,


I have a system with 2x 1.8 GHz AMD CPUs, 4 GB of ECC RAM, and a 7 TB RAID-Z
pool on an Areca controller with about 400 file systems, running OpenSolaris
snv_101.

The problem is that it takes VERY long to take or delete a snapshot and to
sync incremental snapshots to the backup system.


System load is quite low, I'd say; the CPU is 98% idle:
load average: 0.09, 0.13, 0.26

IOPs are low as well:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        5.62T  2.32T     98    115  7.72M  2.35M
data        5.62T  2.32T    547    227  47.9M   864K
data        5.62T  2.32T    204     58  15.9M   616K
data        5.62T  2.32T      4      0   256K      0
data        5.62T  2.32T     20      0   399K      0
data        5.62T  2.32T     99     47  9.68M   264K
data        5.62T  2.32T      0     11  6.93K  38.1K
data        5.62T  2.32T      0    455    506  1.90M
data        5.62T  2.32T    250     21  17.0M   420K
data        5.62T  2.32T    150    235  10.7M  1.34M
data        5.62T  2.32T    305      0  16.0M      0
data        5.62T  2.32T    137  3.42K  12.9M  16.8M
data        5.62T  2.32T    107      0  13.2M      0
data        5.62T  2.32T     56      0  4.97M      0
data        5.62T  2.32T    200    296  23.6M  1.70M


mpstat output:
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0  160   0  690  1152  568 1133   89   68  1440  25995   8   0  87
  1  154   0  108  4424 3241 1388  102   68  1370  24815   7   0  88
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  06   0   83   594  365  2860   3130   6160   7   0  93
  10   00   524  141  6692   2710   3210   2   0  98
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0   55   575  353  2800   1530   4831   6   0  93
  10   00   462  142  6103   1750   4501   2   0  97
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   00   454  210  3230   1910   7630   3   0  97
  10   00   288  166  2970   1530   3380   2   0  98
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   00   398  172  2130   1310   6260   1   0  99
  10   00   252  154  2450   1510   2490   0   0 100
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   00   461  229  2920   1710   5010   3   0  97
  10   00   290  149  3394   1210   4020   2   0  98



What can be wrong that ZFS operations like creating a file system or
taking/destroying a snapshot (not to mention listing snapshots, which takes
ages) take minutes to complete?

Is there something I can look at that would help determine where the
bottleneck is or what is wrong?
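
For example, something like this should show how bad it is and where the time
goes (the dataset names are only placeholders):

ptime zfs create data/testfs
ptime zfs snapshot data/testfs@t1
ptime zfs destroy data/testfs@t1
# in another terminal, watch per-vdev activity while the above run
zpool iostat -v data 1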


Thanks in advance for any advice,
Mike


Re: [zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hi

> [Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko
> <[EMAIL PROTECTED]> wrote:
> 
>> Hello
>>
>> Is there any way to list all snapshots of particular file system
>> without listing the snapshots of its children file systems?
> 
> fsnm=tank/fs;zfs list -rt snapshot ${fsnm}|grep "${fsnm}@"
> 
> or even
> 
> fsnm=tank/fs;zfs list -r ${fsnm}|grep "${fsnm}@"


Yes, thanks - I know about grep, but with hundreds of thousands of snapshots
grep is what I wanted to avoid. In my case a full "zfs list -rt snapshot"
takes hours, while listing the snapshots of an individual file system is
much, much quicker :(
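
A grep-free alternative when only the snapshot names of one file system are
needed (no properties) is to read the snapshot directory directly, e.g. (path
is a placeholder):

ls /tank/fs/.zfs/snapshot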


Regards
Mike


[zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hello


Is there any way to list all snapshots of a particular file system without
also listing the snapshots of its child file systems?


Thanks,
Mike


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-10 Thread Mike Futerko
Hi

> Not merely a little pokey it was unacceptably slow and the casing got very 
> warm.  I am guessing it was pushing CPU right to 100% all the time.  Took 
> hours to load and when booting took minutes.  Also didn't see an easy way to 
> disable graphical login so on boot every time it would go to scrambled 
> graphics and then no easy way to get in to turn off graphics.  Trying virtual 
> console switching didn't work for me.  It didn't take me long to realize this 
> wasn't going to be usable.

Well... it's easy to disable graphical login:

svcadm disable cde-login

I'd also recommend disabling some other unnecessary services, e.g.:

svcs | egrep 'webco|wbem|avahi|print|font|cde|sendm|name-service-cache|opengl' \
    | awk '{print $3}' | xargs -n1 svcadm disable

This should make your system more usable on light hardware.


Regards
Mike


Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-29 Thread Mike Futerko
Hi

>> [EMAIL PROTECTED] ls -la
>> /data/zones/testfs/root/etc/services
>> lrwxrwxrwx   1 root root  15 Oct 13 14:35
>> /data/zones/testfs/root/etc/services ->
>> ./inet/services
>>
>> [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
>> lrwxrwxrwx   1 root root  15 Oct 13 14:35
>> /data/zones/testfs/root/etc/services ->
>> s/teni/.ervices
> 
> Ouch, thats a bad one.
> 
> I downloaded and burnt b101 to dvd for x86 and solaris, i'm gonna install 
> them tomorrow at work and try moving a pool between them to see what 
> happens...

It would be interesting to know how it works if you move the whole zpool
rather than just syncing with send/recv. But I think everything will be fine
there, as it seems the problem is in the send/recv path on different
architectures, not in the file system itself.
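
By "moving the whole zpool" I mean something like this (the pool name is a
placeholder):

zpool export data     # on the SPARC host
# physically attach the disks to the x86 host, then:
zpool import data     # the on-disk format is endian-adaptive, so cross-architecture import should work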


Thanks
Mike


Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Mike Futerko
Hi


Just checked with snv_99 on x86 (VMware install) - same result :(



Regards
Mike


[EMAIL PROTECTED] wrote:
>> Hello
>>
>>
>> Today I've suddenly noticed that symlinks (at least) are corrupted when
>> sync ZFS from SPARC to x86 (zfs send | ssh | zfs recv).
>>
>> Example is:
>>
>> [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
>> lrwxrwxrwx   1 root root  15 Oct 13 14:35
>> /data/zones/testfs/root/etc/services -> ./inet/services
>>
>> [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
>> lrwxrwxrwx   1 root root  15 Oct 13 14:35
>> /data/zones/testfs/root/etc/services -> s/teni/.ervices
>>
>>
>> Firstly I thought it's because original FS on SPARC is compressed... so
>> I've just synced it locally on same machine and all was OK just
>> different FS size since destination was not compressed.
>>
>> Then I've synced that copy again to x86 but result was same - symlinks
>> were corrupted... so it's not compression.
>>
>> SPARC is snv_85 and x86 snv_82, I haven't got a chance yet to test on
>> latest OpenSolaris.
>>
>>
>> Any suggestions?
> 
> Looks like the first 8 bytes aren't "reversed".
> 
> Casper
> 


[zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Mike Futerko
Hello


Today I suddenly noticed that symlinks (at least) are corrupted when syncing
ZFS from SPARC to x86 (zfs send | ssh | zfs recv).
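
The pipeline is roughly the following (the host and snapshot names are only
examples):

zfs snapshot data/zones/testfs@sync1
zfs send data/zones/testfs@sync1 | ssh x86-host "zfs recv -d data"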

Example is:

[EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
lrwxrwxrwx   1 root root  15 Oct 13 14:35
/data/zones/testfs/root/etc/services -> ./inet/services

[EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
lrwxrwxrwx   1 root root  15 Oct 13 14:35
/data/zones/testfs/root/etc/services -> s/teni/.ervices


At first I thought it was because the original FS on SPARC is compressed... so
I synced it locally on the same machine and everything was OK, just a
different FS size since the destination was not compressed.

Then I synced that copy again to x86, but the result was the same - the
symlinks were corrupted... so it's not compression.

SPARC is snv_85 and x86 is snv_82; I haven't had a chance yet to test on the
latest OpenSolaris.


Any suggestions?


Thanks
Mike



[zfs-discuss] Kernel panic on ZFS snapshot destroy

2008-07-31 Thread Mike Futerko

Hello all


I have a weird problem with a snapshot... when I try to delete it, the kernel
panics. However, I can successfully create and then delete other
snapshots on the same file system. The OS version where I first noticed this
was snv_81, so I upgraded to snv_94 (via Live Upgrade), but it doesn't help.


I've attached a screenshot in case it's useful.
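
If a crash dump gets saved, the panic stack can also be pulled from it instead
of a screenshot; something like the following, where ::status prints the panic
string, ::msgbuf the console messages and $C the stack of the panicking thread
(paths and dump numbers are only examples):

dumpadm                  # check that crash dumps are enabled and where savecore puts them
cd /var/crash/myhost
mdb unix.0 vmcore.0
> ::status
> ::msgbuf
> $C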


Any help would be appreciated...

Thanks,
Mike