Re: [zfs-discuss] Zfs improvements to compression in Solaris 10?

2009-10-30 Thread Gaëtan Lehmann


On 4 August 2009 at 20:25, Prabahar Jeyaram wrote:


On Tue, Aug 04, 2009 at 01:01:40PM -0500, Bob Friesenhahn wrote:

On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:


You seem to be hitting :

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537

The fix is available in OpenSolaris build 115 and later, but not for Solaris 10 yet.


It is interesting that this is a simple thread priority issue. The
system has a ton of available CPU, but the higher-priority compression
thread seems to cause scheduling lockout. The Perfmeter tool shows that
compression is a very short-term spike in CPU. Of course, since Perfmeter
and other apps stop running, it might be missing some sample data.

I could put the X11 server into the real-time scheduling class, but I hate to
think about what would happen as soon as Firefox visits a web site. :-)

Compression is only used for the intermittently-used backup pool, so it
would be a shame to reduce overall system performance for the rest of the
time.

Do you know if this fix is planned to be integrated into a future Solaris 10 update?



Yes. It is planned for S10U9.



In the mean time, is there a patch available for Solaris 10?
I can't find it on sunsolve.

Thanks,

Gaëtan

--
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66   fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-30 Thread Darren J Moffat

Paul B. Henson wrote:

I posted a little while back about a problem we are having: when a
new directory gets created over NFS on a Solaris NFS server from a Linux
NFS client, the new directory's group ownership is that of the primary group
of the process, even if the parent directory has the sgid bit set and is
owned by a different group.


Have you tried using different values for the per-dataset aclinherit or
aclmode properties?


	aclinherit  YES  YES  discard | noallow | restricted | passthrough | passthrough-x
	aclmode     YES  YES  discard | groupmask | passthrough


I'm not sure they will help you much but I was curious if you had looked 
at this area for help.
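
For example, to experiment with a non-default combination (a quick sketch;
'tank/export' is a placeholder dataset name):

  # check the current values
  zfs get aclinherit,aclmode tank/export

  # try full ACL passthrough on both properties
  zfs set aclinherit=passthrough tank/export
  zfs set aclmode=passthrough tank/export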


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS - first steps

2009-10-30 Thread Erwin Panen

Hi folks,

Just recently set up my first OSOL box; I have some Linux background but
am TOTALLY new to OSOL, so please bear with me.

I'm looking to expand my use of PM-capable hardware (port multiplier).
This will allow me to add more hard drives.
At the moment I'm using a Sil3132-based PCIe card, which is recognized
fine. The card is configured to use the eSATA port on the back.
To this I connect a disk array holding a maximum of 5 hard drives. I can see
they are recognized and connected because the drives' LEDs light up.
(FYI: I did some testing under Ubuntu Linux, connecting this disk array
to various systems with various chipsets; some support PM, others don't.
It seems very hard to find information on whether a motherboard or chipset
supports PM or not.)


My question:
I'm used to doing a 'tail -f /var/log/dmesg' (or messages) to see 'live'
messages when plugging / unplugging USB or eSATA devices.

What command do I use to do the same with OSOL?

I feel this is a basic step before venturing into experimenting with ZFS.

Thanks for your help!

Erwin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-30 Thread Victor Latushkin

On 30.10.09 02:13, Scott Meilicke wrote:

Hi all,

I received my SSD, and wanted to test it out using fake zpools with files as
backing stores before attaching it to my production pool. However, when I
exported the test pool and imported it, I got an error. Here is what I did:

I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest

Created a new pool:
zpool create ziltest2 /data01/test2/1gtest 


Added the SSD as a log:
zpool add -f ziltest2 log c7t1d0

(c7t1d0 is my SSD. I used the -f option since I had done this before with a
pool called 'ziltest', with the same results.)

A 'zpool status' returned no errors.

Exported:
zpool export ziltest2

Imported:
zpool import -d /data01/test2 ziltest2
cannot import 'ziltest2': one or more devices is currently unavailable

This happened twice with two different test pools using file-based backing 
stores.

I am nervous about adding the SSD to my production pool. Any ideas why I am 
getting the import error?


There's no c7t1d0s0 in /data01/test2, so use one more -d:

zpool import -d /dev/dsk -d /data01/test2 ziltest2
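
A quick check after the import should show the SSD under the pool's log
devices (sketch, reusing the names from the post above):

  zpool import -d /dev/dsk -d /data01/test2 ziltest2
  zpool status ziltest2     # the SSD should appear under 'logs'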

victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dumb idea?

2009-10-30 Thread Joerg Schilling
C. Bergström codest...@osunix.org wrote:

 Miles Nordin wrote:
  pt == Peter Tribble peter.trib...@gmail.com writes:
  
 
  pt Does it make sense to fold this sort of intelligence into the
  pt filesystem, or is it really an application-level task?
 
  in general it seems all the time app writers want to access hundreds
  of thousands of files by unique id rather than filename, and the POSIX
  directory interface is not really up to the task.
 Dear zfs'ers

 It's possible to heavily influence the next POSIX/UNIX standard; if 
 you're interested in testing or giving feedback, ping me off-list.  The Open 
 Group does take feedback before they implement the next version of the 
 standard, and now is a good time to participate in that.

You seem to misunderstand how the POSIX standard committee works.

The POSIX standard usually does not give up previous definitions, and it only
adopts already existing and well-tested implementations in case they fit
well into the existing standard.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FW: File level cloning

2009-10-30 Thread Jeffry Molanus
Yes, but the number of NFS mounts/datastores for ESX is limited, so that would
leave me with a limited number of clones.


Jeff


From: Robert Milkowski [mi...@task.gda.pl]
Sent: Friday, October 30, 2009 2:31 AM
To: Jeffry Molanus
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] FW:  File level cloning

Create a dedicated ZFS zvol or filesystem for each file representing
your virtual machine.
Then, if you need to clone a VM, you clone its zvol or filesystem.
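
Roughly like this (untested sketch; pool and VM names are placeholders):

  # one filesystem per VM
  zfs create tank/vms/vm01

  # to clone it: snapshot first, then clone the snapshot
  zfs snapshot tank/vms/vm01@gold
  zfs clone tank/vms/vm01@gold tank/vms/vm02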


Jeffry Molanus wrote:
 I'm not doing anything yet; I just wondered if ZFS provides any methods to
 do file-level cloning instead of cloning complete file systems. Basically I want a
 zero-size-increase copy of a file. A while ago the BTRFS developers added this
 feature to the fs via a specialized ioctl call. Maybe this isn't
 needed at all since VMware can clone, but I have the gut feeling that doing
 this at the ZFS level is more efficient. I might be wrong though.

 Regards, Jeff


 -Original message-
 From: Scott Meilicke [mailto:scott.meili...@craneaerospace.com]
 Sent: Wednesday, October 28, 2009 9:33 PM
 To: Jeffry Molanus
 Subject: Re: [zfs-discuss] File level cloning

 What are you doing with your vmdk file(s) from the clone?


 On 10/28/09 9:36 AM, Jeffry Molanus jeffry.mola...@proact.nl wrote:


 Agreed, but with file-level cloning it is more granular than cloning a whole fs,
 and I would not need to delete the cloned fs once I picked the vmdk I wanted.
 ESX has a maximum on its datastores, otherwise this would not be needed and I
 would be able to create a fs per vmdk.

 Regards, jeff
 - Original message -
 From: Scott Meilicke scott.meili...@craneaerospace.com
 Sent: Wednesday, 28 October 2009 17:07
 To: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] File level cloning

 I don't think so. But you can clone at the ZFS level, and then just use the
 vmdk(s) that you need. As long as you don't muck about with the other stuff in
 the clone, the space usage should be the same.

 -Scott

 --
 Scott Meilicke | Enterprise Systems Administrator | Crane Aerospace &
 Electronics | +1 425-743-8153 | M: +1 206-406-2670



 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-30 Thread Scott Meilicke
Excellent! That worked just fine. Thank you Victor.

-Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS/ZFS slow on parallel writes

2009-10-30 Thread Bernd Nies
Hi,

Just to close this topic: two issues have been found which caused the slow
write performance on our Sun Storage 7410 with RAIDZ2:

(1) An OpenSolaris bug when the NFS mount option is set to wsize=32768. Reducing it to
wsize=16384 resulted in a performance gain of about a factor of 2 (client mount example below).

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6686887

(2) The Sun Storage 7410 (software release 2009.09.01.0.0,1-1.2) was configured 
as an LDAP client for mapping Unix UID/GID to Windows names/groups. At every 
file access the filer asked the LDAP server and resolved the ownership of the 
file. This also happened during NDMP backups and caused a high load on the LDAP 
server. It seems that this release has a non-working name service cache daemon, or
that the cache size is too small. We have about 500 users and 100 groups.

The LDAP replica was a Sun Directory Server 5.2p5 on a rather slow SunFire V240
with Solaris 9. After migrating the LDAP server to a fast machine (Solaris 10
x86 on VMware ESX 4i), the NFS I/O rate was much better, and after disabling the
LDAP client entirely the I/O rate is now about 16x better when 10 Linux hosts are
untarring the Linux kernel source to the same NFS share.
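
Re issue (1): on the Linux client side the reduced write size would look
roughly like this (sketch; server name and paths are placeholders):

  mount -t nfs -o rsize=16384,wsize=16384 filer:/export/share /mnt/share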

Actions:
- time tar -xf ../linux-2.6.32-rc1.tar
- time rm -rf linux-2.6.32-rc1

NFS mount options: wsize=16384
gzip: ZFS filesystem on the fly compression

OpenStorage 7410      | tar -xf     | rm -rf
----------------------+-------------+------------
LDAP on,   1 client   |  3m 50.809s |  0m 16.395s
          10 clients  | 19m 59.453s | 69m 12.107s
----------------------+-------------+------------
LDAP off,  1 client   |  1m 15.340s |  0m 14.784s
          10 clients  |  3m 29.785s |  4m 51.606s
----------------------+-------------+------------
LDAP off, gzip, 1 cl  |  2m 13.713s |  0m 14.936s
               10 cl  |  3m 47.773s |  7m 37.606s

In the meantime the system performs well.

Best regards,
Bernd
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can not stop zfs send...

2009-10-30 Thread Orvar Korvar
I am doing a large zfs send | zfs receive and suddenly, during the zfs send,
one drive is faulted. I try to break this zfs send and examine the faulty drive
so the zpool stops being in DEGRADED mode. I can not stop this zfs send. I tried
kill -9 PID
CTRL-X
CTRL-Z
CTRL-D
CTRL-C
Nothing can stop this zfs send. Can I just turn off the power?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can not stop zfs send...

2009-10-30 Thread Orvar Korvar
OK, ctrl-x or whatever combination killed the zfs send. It took some
time, though. Problem solved. Thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] ZFS - first steps

2009-10-30 Thread Richard Elling

On Oct 30, 2009, at 3:20 AM, Erwin Panen wrote:


Hi folks,

Just recently set up my first OSOL box, have some Linux background
but am TOTALLY new to OSOL, so please bear with me.

I'm looking to expand my use of PM-capable hardware (port multiplier).
This will allow me to add more hard drives.
At the moment I'm using a Sil3132-based PCIe card, which is recognized
fine. The card is configured to use the eSATA port on the back.
To this I connect a disk array holding a maximum of 5 hard drives. I can
see they are recognized and connected because the drives' LEDs light up.
(FYI: I did some testing under Ubuntu Linux, connecting this disk array
to various systems with various chipsets; some support PM, others don't.
It seems very hard to find information on whether a motherboard or
chipset supports PM or not.)


Welcome.

Port multiplier support is recent, so you will need to start with a later build.
I don't think 2009.06 had any port multiplier support. You will want to upgrade
to a later dev release, like b125 -- use the dev repository.
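
Something along these lines switches an image to the dev repository and
updates it (sketch; double-check the publisher URL for your release):

  pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  pfexec pkg image-update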


My question:
I'm used to doing a 'tail -f /var/log/dmesg' (or messages) to see 'live'
messages when plugging / unplugging USB or eSATA devices.


The location of the messages file is set in syslog.conf. For
Solaris/SunOS, it has been /var/adm/messages since a decade before
Linux existed. Perhaps there should be a trivia question to answer:
who decided /var/adm vs /var/log... BSD?  :-)


What command do I use to do the same with OSOL?


tail -f /var/adm/messages


I feel this is a basic step before venturing into experimenting with ZFS.


It can be useful. Also know that the dmesg command shows the abbreviated
list. You will also want to know about cfgadm, which is useful for checking
the status of removable devices.
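
For example (sketch; attachment-point names vary by controller):

  cfgadm -al                     # list attachment points, including SATA ports
  cfgadm -c configure sata1/3    # bring a newly plugged disk online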
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-help] Help! Bricked my 2009.06 OpenSolaris snv_111b NFS host (long)

2009-10-30 Thread Donald Murray, P.Eng.
Hi,

On Fri, Oct 30, 2009 at 8:41 AM, Gopi Desaboyina
gopidesaboy...@yahoo.com wrote:
 I think your system might be overheating. I observed this kind of behaviour
 in laptops when they get overheated. Check whether the fan is working or not. How
 frequently does it get rebooted? You could boot from the OpenSolaris LiveCD and keep it
 running for a day like that. If it reboots, that means there could be a h/w issue. If
 not, you can try other OS-related stuff.
 --
 This message posted from opensolaris.org
 ___
 opensolaris-help mailing list
 opensolaris-h...@opensolaris.org


Cross-posting to zfs-discuss, just in case.

Okay, I'll try a LiveCD and see whether it still reboots.

Additional information: I started a zpool scrub of my root pool and my
storage pool. After a few minutes the machine spontaneously rebooted.

And: I'm running a pool with two mirrors: a pair of 500GB drives on a
two-port Sil3132 PCI-e card, and a pair of 1TB drives on the Asus
M2N-SLI Deluxe motherboard.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs improvements to compression in Solaris 10?

2009-10-30 Thread Prabahar Jeyaram
On Fri, Oct 30, 2009 at 09:48:39AM +0100, Gaëtan Lehmann wrote:
 
 On 4 August 2009 at 20:25, Prabahar Jeyaram wrote:
 
 On Tue, Aug 04, 2009 at 01:01:40PM -0500, Bob Friesenhahn wrote:
 On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:
 
 You seem to be hitting :
 
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
 
 The fix is available in OpenSolaris build 115 and later, but not for
 Solaris 10 yet.
 
 It is interesting that this is a simple thread priority issue.  The
 system has a ton of available CPU but the higher priority compression
 thread seems to cause scheduling lockout.  The Perfmeter tool
 shows that
 compression is a very short-term spike in CPU. Of course since
 Perfmeter
 and other apps stop running, it might be missing some sample data.
 
 I could put the X11 server into the real-time scheduling class
 but hate to
 think about what would happen as soon as Firefox visits a web
 site. :-)
 
 Compression is only used for the intermittently-used backup pool
 so it
 would be a shame to reduce overall system performance for the
 rest of the
 time.
 
 Do you know if this fix is planned to be integrated into a future
 Solaris
 10 update?
 
 
 Yes. It is planned for S10U9.
 
 
 In the mean time, is there a patch available for Solaris 10?

NO. Not yet.

 I can't find it on sunsolve.
 

--
Prabahar.

 Thanks,
 
 Gaëtan
 
 -- 
 Gaëtan Lehmann
 Biologie du Développement et de la Reproduction
 INRA de Jouy-en-Josas (France)
 tel: +33 1 34 65 29 66   fax: 01 34 65 29 09
 http://voxel.jouy.inra.fr  http://www.itk.org
 http://www.mandriva.org  http://www.bepo.fr
 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs improvements to compression in Solaris 10?

2009-10-30 Thread Bob Friesenhahn

On Fri, 30 Oct 2009, Gaëtan Lehmann wrote:


Yes. It is planned for S10U9.



In the mean time, is there a patch available for Solaris 10?
I can't find it on sunsolve.


Notice that the fix for this requires adding a new kernel scheduling
class with a default priority lower than user processes, but with the
ability to raise the priority if user processes continue to hog all the
CPU.  This means that it requires more than a simple ZFS fix.
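
If you're curious, the scheduling classes configured on a system, and the
class each process runs in, can be seen with (sketch; output varies by release):

  priocntl -l      # list configured scheduling classes
  ps -ecl | head   # the CLS column shows each process's class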


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-30 Thread Paul B. Henson
On Fri, 30 Oct 2009, Darren J Moffat wrote:

 Have you tried using different values for the per dataset aclinherit or
 aclmode properties ?

We have aclmode set to passthrough and aclinherit to passthrough-x (thanks
again Mark!). We haven't tried anything else.

 I'm not sure they will help you much but I was curious if you had looked
 at this area for help.

If you saw the message I sent late yesterday, I found the code in the NFS
server which explicitly sets the group owner if one is not specified by the
client, so I don't think the filesystem has much choice at that level; it's
being told explicitly which group the new directory should be owned by.
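
For reference, this is the behaviour in question (sketch; directory and group
names are placeholders, run from a Linux NFS client against the Solaris server):

  # on the server: group-owned directory with the sgid bit set
  chgrp webteam /export/projects
  chmod g+s /export/projects

  # on the Linux client
  mkdir /mnt/projects/newdir
  ls -ld /mnt/projects/newdir   # group comes back as the user's primary group,
                                # not 'webteam' as the sgid bit would suggest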

Thanks...

-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ARC cache and ls query

2009-10-30 Thread John
Hi,

On an idle server, when I do a recursive '/usr/bin/ls' on a folder, I see a lot 
of disk activity. This makes sense because the results (metadata/data) may not 
have been cached.
When I do a second ls on the same folder right after the first one finished, 
I do see disk activity again.

Can someone explain why the results are not cached in ARC?

I am running this on a Thumper, b118, with default zpool settings. The box has
64GB of RAM and the ARC size is around 24GB. No ARC tuning.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can not stop zfs send...

2009-10-30 Thread deniz rende
Although ctrl-x might have stopped it, it may still be running in the
background. Make sure to check there are no related processes running...
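
Something like this would show whether either end of the pipeline is still
alive (sketch):

  pgrep -fl 'zfs send'
  pgrep -fl 'zfs receive'
  # and, only if something is genuinely hung on the faulted device:
  # pkill -9 -f 'zfs send'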
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ARC cache and ls query

2009-10-30 Thread Henrik Johansson

Hello John,

On Oct 30, 2009, at 9:03 PM, John wrote:


Hi,

On an idle server, when I do a recursive '/usr/bin/ls' on a folder,
I see a lot of disk activity. This makes sense because the results
(metadata/data) may not have been cached.
When I do a second ls on the same folder right after the first one
finishes, I see disk activity again.


Can someone explain why the results are not cached in ARC?


You would have disk access again unless you have set atime to off for that
filesystem. I posted something similar a few days back and wrote a summary
of the ARC part of my findings: http://sparcv9.blogspot.com/2009/10/curious-case-of-strange-arc.html


Here is the whole thread: 
http://opensolaris.org/jive/thread.jspa?messageID=430385

If that does not explain it, you should probably provide some more
data: how many files, some ARC statistics, etc.
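
Two quick things to try (sketch; the dataset name is a placeholder):

  # stop ls from generating atime updates on every access
  zfs set atime=off tank/data

  # watch overall ARC hit/miss counters while repeating the ls
  kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses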


Regards
Henrik

Henrik
http://sparcv9.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Does refreservation sit on top of usedbydataset?

2009-10-30 Thread Roman Naumenko
Hello,

I'm curious how space is counted on this particular system (OpenSolaris b118).

#zfs list

NAME                    USED   AVAIL  REFER  MOUNTPOINT
zsan0                   3.10T  2.25T  23K    /zsan0
zsan0/fs                3.10T  2.25T  6.90G  /zsan0/fs
zsan0/fs/bar-001        680G   2.65T  280G
zsan0/fs/dnd01-sqldb    236G   2.45T  35.8G
zsan0/fs/evg-001-sqldb  219G   2.45T  18.5G
zsan0/fs/jen-001-sqldb  374G   2.55T  74.4G
zsan0/fs/meg-flt001     811G   2.84T  211G
zsan0/fs/sfc-001        229G   2.45T  29.3G
zsan0/fs/sscc-001       278G   2.45T  78.0G
zsan0/fs/tph-001        338G   2.45T  138G


# zfs get all zsan0/fs/bar-001
NAME              PROPERTY              VALUE                  SOURCE
zsan0/fs/bar-001  type                  volume                 -
zsan0/fs/bar-001  creation              Thu Oct 15 17:36 2009  -
zsan0/fs/bar-001  used                  680G                   -
zsan0/fs/bar-001  available             2.65T                  -
zsan0/fs/bar-001  referenced            280G                   -
zsan0/fs/bar-001  compressratio         1.00x                  -
zsan0/fs/bar-001  reservation           400G                   local
zsan0/fs/bar-001  volsize               400G                   -
zsan0/fs/bar-001  volblocksize          128K                   -
zsan0/fs/bar-001  checksum              on                     default
zsan0/fs/bar-001  compression           off                    default
zsan0/fs/bar-001  readonly              off                    default
zsan0/fs/bar-001  shareiscsi            off                    default
zsan0/fs/bar-001  copies                1                      default
zsan0/fs/bar-001  refreservation        400G                   local
zsan0/fs/bar-001  primarycache          all                    default
zsan0/fs/bar-001  secondarycache        all                    default
zsan0/fs/bar-001  usedbysnapshots       0                      -
zsan0/fs/bar-001  usedbydataset         280G                   -
zsan0/fs/bar-001  usedbychildren        0                      -
zsan0/fs/bar-001  usedbyrefreservation  400G                   -

zfs list -t snapshot | grep bar
zsan0/fs/bar-...@zfs-auto-snap:zsan03_1day_keep60days-2009-10-30-16:08   0  -  280G  -

My concern is the USED property: it's 680G for zsan0/fs/bar-001 (400G refreservation
+ 280G usedbydataset).

When I set up the reservation I wanted guaranteed space from the pool to be available
to the volume (600G), which is the same as the volume size itself.

But now the volume data consumes space on top of that (referenced or
usedbydataset). Confusing...

Can somebody advise how to set up the reservation properly? I want each volume to
have guaranteed space for data, plus something reserved on top of that for
volume snapshots.
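
For what it's worth, the USED figure for a dataset is the sum of its usedby*
components, so listing them together shows where the 680G comes from (using the
dataset name from the post):

  zfs get used,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation zsan0/fs/bar-001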

--
Roman
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ARC cache and ls query

2009-10-30 Thread John
Thanks Henrik. This makes perfect sense. More questions:
arc_meta_limit is set to a quarter of the ARC size.
What is arc_meta_max?
On some systems, I have arc_meta_max > arc_meta_limit.

Example:
arc_meta_used = 29427 MB
arc_meta_limit= 16125 MB
arc_meta_max  = 29427 MB

Example 2:
arc_meta_used =  5885 MB
arc_meta_limit=  5885 MB
arc_meta_max  = 17443 MB
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ARC cache and ls query

2009-10-30 Thread Henrik Johansson


On Oct 30, 2009, at 10:20 PM, John wrote:



thanks Henrik. This makes perfect sense. More questions.
arc_meta_limit is set to a quarter of the ARC size.
what is arc_meta_max ?
On some systems, I have arc_meta_max > arc_meta_limit.

Example:
arc_meta_used = 29427 MB
arc_meta_limit= 16125 MB
arc_meta_max  = 29427 MB

Example 2:
arc_meta_used =  5885 MB
arc_meta_limit=  5885 MB
arc_meta_max  = 17443 MB
--  


That looks very strange; the source says:

	if (arc_meta_max < arc_meta_used)
		arc_meta_max = arc_meta_used;

So arc_meta_max should be the maximum value that arc_meta_used has
ever reached.


The limit on the metadata is not enforced synchronously, but that
seems to be quite a bit over the limit. What are these machines doing?
Are they quickly processing large numbers of files/directories? I do not
know the exact implementation of this, but perhaps new metadata is
added to the cache faster than it gets purged. Maybe someone else
knows more exactly how this works?
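
For the record, these counters can be pulled live with the ::arc dcmd
(sketch; needs root, and the output format varies between builds):

  echo ::arc | mdb -k | egrep 'arc_meta_(used|limit|max)'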


Henrik
http://sparcv9.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] internal scrub keeps restarting resilvering?

2009-10-30 Thread Jeremy Kitchen


On Oct 29, 2009, at 5:12 PM, Jeremy Kitchen wrote:

After several days of trying to get a 1.5TB drive to resilver and it  
continually restarting, I eliminated all of the snapshot-taking  
facilities which were enabled and


and last night the pool blew that second drive and apparently a third,
and went offline. After rebooting the machine, everything came up as
degraded; running a zpool clear got it going again and it's currently
resilvering. However, it does keep restarting the resilvering process,
and looking at 'zpool history -i' I'm still seeing these internal pool
scrubs right about the same time the resilvering process starts over.


Is it possible to disable these internal pool scrubs for the time
being, to keep them from restarting the resilvering process? I'm aware
that a restart really only interrupts the process, as any data which has
already been resilvered is done and it just resumes where it left off,
but constantly seeing the resilver status at 1.5% complete is rather
depressing, and makes it hard for us to give an ETA on when it might
finish :(
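
In the meantime, watching the two views side by side at least shows when a
restart happens (sketch; 'tank' is a placeholder pool name):

  zpool status -v tank                 # resilver progress / percent complete
  zpool history -i tank | grep scrub   # internal scrub events that line up with restarts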


Thanks!

-Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss