Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-12-01 Thread Roch

Tim writes:
  On Sat, Nov 29, 2008 at 11:06 AM, Ray Clark [EMAIL PROTECTED] wrote:
  
   Please help me understand what you mean.  There is a big difference between
   being unacceptably slow and not working correctly, or between being
   unacceptably slow and having an implementation problem that causes it to
   eventually stop.  I expect it to be slow, but I expect it to work.  Are you
   saying that you found that it did not function correctly, or that it was 
   too
   slow for your purposes?  Thanks for your insights!  (3x would be awesome).
   --
  
  
  
  I expect it will go SO SLOW, that some function somewhere is eventually
  going to fail/timeout.  That system is barely usable WITHOUT compression.  I
  hope at the very least you're disabling every single unnecessary service
  before doing any testing, especially the GUI.
  
  ZFS uses ram, and plenty of it.  That's the nature of COW.  Enabling
  realtime compression with an 800mhz p3?  Kiss any performance, however poor
  it was, goodbye.
  
  --Tim

Hi Tim,

Let me hijack this thread to comment on the RAM
usage. It's a misconception to blame RAM usage on COW.

As has been stated in this thread, ZFS needs address space
in the kernel in order to maintain its cache, but the cache
is designed to grow and shrink according to memory demand.

The amount of memory that ZFS really _needs_ is the amount of
dirty data per transaction group. Today the code is in place
to limit that to 10 seconds worth of I/O, so this should be
very reasonable usage in most cases.
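
If you want to watch the cache adjust on a live system, the ARC
statistics are exported through kstat (a quick sketch; stat names as
in current builds, and ::arc only if your kernel has that dcmd):

kstat -p zfs:0:arcstats:size     (current ARC size, in bytes)
kstat -p zfs:0:arcstats:c        (current ARC target size)
echo "::arc" | mdb -k            (a fuller breakdown, as root)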

-r




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Setting per-file record size / querying fs/file record size?

2008-12-01 Thread Roch

Bill Sommerfeld writes:
  On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
   I'm assuming this is local filesystem rather than ZFS backed NFS (which 
   is what I have).
  
  Correct, on a laptop.
  
   What has setting the 32KB recordsize done for the rest of your home
   dir, or did you give the evolution directory its own dataset ?
  
  The latter, though it occurs to me that I could set the recordsize back
  up to 128K once the databases (one per mail account) are created -- the
  recordsize dataset property is read only at file create time when the
  file's recordsize is set.  

...almost.

The definitive recordsize for a file is set when the
file size grows, for the first time, above the filesystem's
recordsize property. Touching a file is not enough here.
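
So for the sqlite case the sequence would be roughly (dataset name
hypothetical):

# zfs create -o recordsize=32k rpool/export/evolution
  ... let evolution create and grow its databases ...
# zfs set recordsize=128k rpool/export/evolution

Files that have already grown past 32k keep their 32k recordsize;
only files created and grown after the property change pick up 128k.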

  (Having a new interface to set the file's
  recordsize directly at create time would bypass this sort of gyration).
  

I kind of agree here, but we would need to change how it
works as well.

-r

  (Apparently the sqlite file format uses 16-bit within-page offsets; 32kb
  is its current maximum page size and 64k may be as large as it can go
  without significant renovations..)
  
   - Bill
  
  
  
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread BJ Quinn
Oh.  Yup, I had figured this out on my own but forgot to post back.  --inplace 
accomplishes what we're talking about.  --no-whole-file is also necessary when 
copying files locally (not over the network), because rsync defaults to copying 
only the changed blocks over the network but switches to whole-file copies for 
local transfers.
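
For the archives, the local-to-local form looks roughly like this (paths 
hypothetical):

rsync -a --inplace --no-whole-file /tank/data/ /tank/backup/

--inplace updates changed regions of the existing destination file instead of 
building a temporary copy, and --no-whole-file turns the delta algorithm back 
on for local copies, so unchanged blocks are never rewritten and snapshots of 
the destination stay small.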

Also, has anyone figured out a best-case blocksize to use with rsync?  I tried 
zfs get volblocksize [pool], but it just returns -.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread Darren J Moffat
BJ Quinn wrote:
 Oh.  Yup, I had figured this out on my own but forgot to post back.  
 --inplace accomplishes what we're talking about.  --no-whole-file is also 
 necessary when copying files locally (not over the network), because rsync 
 defaults to copying only the changed blocks over the network but switches to 
 whole-file copies for local transfers.
 
 Also, has anyone figured out a best-case blocksize to use with rsync?  I 
 tried zfs get volblocksize [pool], but it just returns -.

zfs get recordsize dataset
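
For example (dataset name and paths hypothetical):

# zfs get -H -o value recordsize tank/backup
131072
# rsync -a --inplace --no-whole-file --block-size=131072 /src/ /tank/backup/

(131072 is the 128K default. Whether matching rsync's --block-size to the 
recordsize actually buys anything, and whether your rsync accepts a block 
size that large, is a separate question.)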


-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-12-01 Thread BJ Quinn
Should I set that as rsync's block size?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-12-01 Thread Karl Rossing
Could zfs be configured to use gzip-9 to compress small files, or to compress 
when the system is idle, and to use lzjb when the system is busy or is handling 
a large file?

Busy/idle and large/small file thresholds would need to be defined somewhere.

Alternatively, write the file out using lzjb if the system is busy and 
go back and gzip-9 it when the system is idle or less busy.

I'm not familiar with fs design. There are probably compelling technical 
and compliance reasons not to do any of my suggestions.

Karl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-12-01 Thread Bob Friesenhahn
On Mon, 1 Dec 2008, Karl Rossing wrote:

 I'm not familiar with fs design. There are probably compelling technical
 and compliance reasons not to do any of my suggestions.

Due to ZFS's COW design, each re-compression requires allocation of a 
new block.  This has implications when snapshots and clones are 
involved: either a lot of disk space is wasted (snapshots still reference 
the old blocks), or all the snapshots/clones would need to be updated 
somehow to use the newly written blocks.
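
(For what it's worth, the compression property already only applies to newly 
written blocks, so the closest thing today to "go back and gzip-9 it later" is 
to change the property and rewrite the files by hand; names hypothetical:)

# zfs set compression=gzip-9 tank/archive
# cp -p /tank/archive/bigfile /tank/archive/bigfile.tmp
# mv /tank/archive/bigfile.tmp /tank/archive/bigfile

Any snapshot that still references the old blocks keeps them around, so the 
space is not actually freed until those snapshots go away.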

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Joe S
I read Ben Rockwood's blog post about Thumpers and SMART
(http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd
package only work on a Thumper? Can I use this on my snv_101 system
with AMD 64 bit processor and nVidia SATA?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Scott Williamson
Try it and tell us if it works :)

It might have hooks into the specific controller driver.
On Mon, Dec 1, 2008 at 1:45 PM, Joe S [EMAIL PROTECTED] wrote:

 I read Ben Rockwood's blog post about Thumpers and SMART
 (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd
 package only work on a Thumper? Can I use this on my snv_101 system
 with AMD 64 bit processor and nVidia SATA?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-12-01 Thread Eric Hill
Well, there's the problem...

#id -a tom
uid=15669(tom) gid=15004(domain users) groups=15004(domain users)
#

wbinfo -r shows the full list of groups, but id -a only lists domain users.  
Since I'm trying to restrict permissions on other groups, my access denied 
error message makes more sense.

Any thoughts on how come Solaris/id isn't seeing the full group list for the 
user?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Gnome Disk Usage Analyzer

2008-12-01 Thread Ross
Hey folks,

With OpenSolaris incorporating gnome these days, will things like the Disk 
Usage Analyzer be included, and will that work with ZFS?
http://www.simplehelp.net/2008/11/04/how-to-analyze-disk-usage-in-ubuntu/

Ross
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Gnome Disk Usage Analyzer

2008-12-01 Thread Ross
Aaah, nm, found it.  It's under a different menu in OpenSolaris, and looks like 
it scans ZFS fine.

There doesn't appear to be any way to stop a scan once you've started it, 
though, and the entire GUI looks to have frozen up on me.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-12-01 Thread Roch Bourbonnais

Le 15 nov. 08 à 08:49, Nicholas Lee a écrit :



  On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:
  In short, separate logs with rotating rust may reduce sync write latency by
  perhaps 2-10x on an otherwise busy system.  Using write optimized SSDs
  will reduce sync write latency by perhaps 10x in all cases.  This is one of
  those situations where we can throw hardware at the problem to solve it.

 Are the SSD devices Sun is using in the 7000s available for general  
 use?  Are they OEM parts or special items?


Custom designed for the Hybrid Storage Pool.

-r


 Nicholas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-12-01 Thread Scott Williamson
Hi,

On Mon, Dec 1, 2008 at 3:37 PM, Eric Hill [EMAIL PROTECTED] wrote:
 Any thoughts on how come Solaris/id isn't seeing the full group list for the 
 user?

Do an ldapsearch and dump the attributes for the group. If it is using
memberUid to list the members, Solaris should work; if it is using
uniqueMember, then it will not work.

As far as I remember.
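
Something like this will show which attribute the group entry carries (server, 
base DN and group name hypothetical):

$ ldapsearch -h ldap.example.com -b "dc=example,dc=com" "(cn=domain users)" memberUid uniqueMember

If only uniqueMember comes back, that would match the symptom you are seeing.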
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Gnome Disk Usage Analyzer

2008-12-01 Thread Ross
Hmm, there appear to be a few bugs with it actually.  In addition to locking up 
the system while scanning, it seems to have created a circular reference.  It 
looks like it hasn't been able to finish scanning the folders, and has wound up 
creating a link back to the ZFS root with the last folder it scanned.

It's also got very weird figures for filesystem size, used and available.  
Should I report these as bugs?  What area of the bugtracker would the Disk 
Usage Analyzer be under?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Gnome Disk Usage Analyzer

2008-12-01 Thread Ross
Hmm... and on my second attempt it ran way faster, enumerated all folders and 
didn't lock the system up at all.  The totals are still wrong, but the 
performance is completely different.

I suspect it's just ZFS being slow after a reboot (with no data in the cache), 
will test this at work tomorrow.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-12-01 Thread Ray Clark
It completed copying 191,xxx MB without issue in 17 hours and 40 minutes, 
average transfer rate of 3.0MB/Sec.  During the copy (At least the first hour 
or so, and an hour in the middle), the machine was reasonably responsive.  It 
was jerky to a greater or lesser extent, but nothing like even the best times 
with gzip-9.  Not sure how to convey it.  The machine was usable.

It was stopped by running out of disk space.  The source was about 1GB larger 
than the target zfs file system.  (When I started this exercise I had an IT8212 
PCI PATA card in the system for another pair of drives for the pool, and took 
it out to eliminate a potential cause of my troubles.)

Interestingly, before I started I had to reboot, as there was a trashapplet 
process eating 100% of the CPU, 60% user, 40% system.  Note that I have not 
created, much less deleted, any files with GNOME, nor put any in my home 
directory.  I don't even know how to do these things, as I am a KDE man.  All I 
have done is futz with this zfs in a separate pool and type at terminal 
windows.  Can't imagine what trashapplet was doing with 100% of the CPU for an 
extended time without any files to manage!

Something I have not mentioned is that the fourth memory socket was worn out a 
few years ago testing memory, which is why I only have 768MB installed (the 
bottom 3 sockets have not been abused and are fine).  My next move is to trade 
the motherboard for one in good shape so I can put in all 1024MB, plug in the 
IT8212 with a couple of 160GB disks to get my pool up to 360GB, and install 
RC2...  

But it looks like 2008.11 has been released!  The mirrors still have 2008.05, 
but the main link goes to osol-0811.iso!  Is that final, not an RC?

I will be beating on it to gain confidence and learn about Solaris.  If anyone 
wants me to run any other tests, let me know.  Thanks (again) for all of your 
help.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-12-01 Thread Richard Elling
Nicholas Lee wrote:


 On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:

 In short, separate logs with rotating rust may reduce sync write latency by
 perhaps 2-10x on an otherwise busy system.  Using write optimized SSDs
 will reduce sync write latency by perhaps 10x in all cases.  This is one of
 those situations where we can throw hardware at the problem to solve it.


 Are the SSD devices Sun is using in the 7000s available for general 
 use?  Are they OEM parts or special items?


Yes, they are OEMed. See:
http://www.marketwatch.com/news/story/STEC-Support-Suns-Unified-Storage/story.aspx?guid=%7B07043E00-7628-411D-B24A-2FFEC8B8F706%7D

The ZEUS product line makes a fine slog while the MACH8 product line
works nicely for L2ARC.
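
Either class of device drops straight into an existing pool on a recent build 
(device names hypothetical):

# zpool add tank log c5t0d0     (write-optimized SSD as a separate intent log)
# zpool add tank cache c5t1d0   (read-optimized SSD as an L2ARC device)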
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-12-01 Thread Lori Alt

On 11/27/08 17:18, Gary Mills wrote:
 On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
  On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
   On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
    I'm currently working with an organisation who want to use ZFS for
    their full zones.  Storage is SAN attached, and they also want to
    create a separate /var for each zone, which causes issues when the
    zone is installed.  They believe that a separate /var is still good
    practice.
   If your mount options are different for /var and /, you will need
   a separate filesystem.  In our case, we use `setuid=off' and
   `devices=off' on /var for security reasons.  We do the same thing
   for home directories and /tmp .
  For zones?
 Sure, if you require different mount options in the zones.

I looked into this and found that, using ufs,  you can indeed set up
the zone's /var directory as a separate file system.  I  don't know about
how LiveUpgrade works with that configuration (I didn't try it). 
But I was at least able to get the zone to install and boot.


But with zfs, I couldn't even get a zone with a separate /var
dataset to install, let alone be manageable with LiveUpgrade.
I configured the zone like so:

# zonecfg -z z4
z4: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:z4> create
zonecfg:z4> set zonepath=/zfszones/z4
zonecfg:z4> add fs
zonecfg:z4:fs> set dir=/var
zonecfg:z4:fs> set special=rpool/ROOT/s10x_u6wos_07b/zfszones/z4/var
zonecfg:z4:fs> set type=zfs
zonecfg:z4:fs> end
zonecfg:z4> exit

I then get this result from trying to install the zone:

prancer# zoneadm -z z4 install
Preparing to install zone z4.
ERROR: No such file or directory: cannot mount /zfszones/z4/root/var 
in non-global zone to install: the source block device or directory 
rpool/ROOT/s10x_u6wos_07b/zfszones/z1/var cannot be accessed

ERROR: cannot setup zone z4 inherited and configured file systems
ERROR: cannot setup zone z4 file systems inherited and configured from 
the global zone

ERROR: cannot create zone boot environment z4

I don't fully  understand the failures here.  I suspect that there are
problems both in the zfs code and zones code.  It SHOULD work though.
The fact that it doesn't seems like a bug.

In the meantime, I guess we have to conclude that a separate /var
in a non-global zone is not supported on zfs.  A separate /var in
the global zone is supported  however, even when the root is zfs.

Lori



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-12-01 Thread Ray Clark
Re pantzer5's suggestion:  

Memory is not a big problem for ZFS, address space is. You may have to
give the kernel more address space on 32-bit CPUs.

eeprom kernelbase=0x80000000

This will reduce the usable address space of user processes though.

---
Would you please verify that I understand correctly?  I am extrapolating here 
based on general knowledge:

During a running user process, the process has the entire lower part of the 
address space below the kernel.  The kernel is loaded at kernelbase, and has 
from there to the top (2**32-1) to use for its purposes.  Evidently it is 
relocatable or position independent.

The positioning of kernelbase really has nothing to do with how much physical 
RAM I have, since the user memory and perhaps some of the kernel memory is 
virtual (paged).  So the fact that I have 768MB does not enter into this 
decision directly (it does enter indirectly, per Jeff's note implying that 
kernel structures need to be larger with larger RAM; makes sense: more to keep 
track of, more page tables).

By default kernelbase is set at 3G, so presumably the kernel needs a minimum of 
1G space.

Every userland process gets the full virtual space from 0 to kernelbase-1.  So 
unless I am going to run a process that needs more than 1G, there is no 
advantage in setting kernelbase to something larger than 1G, etc.  Even if 
physical RAM is larger.

If I am not going to run virtual machines, or edit enormous video or audio or 
image files in RAM, I really have no use for userland address space, and giving 
a lot to the kernel can only help it to have things mapped rather than having 
to recreate information (although I don't have a good handle on the utility of 
address space without a storage mechanism like RAM or disk behind it... it must 
be something akin to a page fault with pages mapped to a disk file so you don't 
have to walk the file hierarchy).  

Hence your suggestion to set kernelbase to 2G.  But 1G is probably fine too 
(Although the incremental benefit may be negligible - I am going for the 
principle here).

How am I doing?
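
(My plan for checking the result after the reboot, assuming the kernelbase 
symbol is visible to mdb, is roughly:

# eeprom kernelbase=0x80000000
# init 6
  ...after the reboot...
# echo "kernelbase/X" | mdb -k

...and confirm it prints 80000000.)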
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] rsync using 100% of a cpu

2008-12-01 Thread Francois Dion
Source is local to rsync, copying from a zfs file system, destination is remote 
over a dsl connection. Takes forever to just go through the unchanged files. 
Going the other way is not a problem, it takes a fraction of the time. Anybody 
seen that? Suggestions?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Rob
  (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd

can't dump all SMART data, but get some temps on a generic box..

4 % hd -a
Device    Serial    Vendor    Model             Rev   Temperature    fdisk Type
--------  --------  --------  ----------------  ----  -------------  ----------
c3t0d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
c3t1d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
c3t2d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
c3t4d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
c3t5d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
c4t0d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
c4t1d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
c4t2d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
c4t4d0p0            ATA       WDC WD1001FALS-0  0K05  42 C (107 F)   EFI
c4t5d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
c5t0d0p0            TSSTcorp  CD/DVDW SH-S162A  TS02  None           None
c5t1d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
c5t2d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
c5t3d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
c5t4d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
c5t5d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2

Do you know of a solaris tool to get SMART data?

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rsync using 100% of a cpu

2008-12-01 Thread Blake Irvin

Upstream when using DSL is much slower than downstream?

Blake

On Dec 1, 2008, at 7:42 PM, Francois Dion [EMAIL PROTECTED]  
wrote:


Source is local to rsync, copying from a zfs file system,  
destination is remote over a dsl connection. Takes forever to just  
go through the unchanged files. Going the other way is not a  
problem, it takes a fraction of the time. Anybody seen that?  
Suggestions?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Blake Irvin
I've used that tool only with the Marvell chipset that ships with the  
thumpers.  (in a supermicro hba)

Have you looked at cfgadm?

Blake

On Dec 1, 2008, at 7:49 PM, [EMAIL PROTECTED] wrote:

 (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd

 can't dump all SMART data, but get some temps on a generic box..

 4 % hd -a
 Device    Serial    Vendor    Model             Rev   Temperature    fdisk Type
 --------  --------  --------  ----------------  ----  -------------  ----------
 c3t0d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t1d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t2d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t4d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t5d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
 c4t0d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c4t1d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c4t2d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c4t4d0p0            ATA       WDC WD1001FALS-0  0K05  42 C (107 F)   EFI
 c4t5d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c5t0d0p0            TSSTcorp  CD/DVDW SH-S162A  TS02  None           None
 c5t1d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t2d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t3d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t4d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t5d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2

 Do you know of a solaris tool to get SMART data?

Rob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS UI management

2008-12-01 Thread Jean Dion
Is there a plan to add send/receive functions to the ZFS UI (Java Console)?

You should consider adding a ZFS UI like Time Slider rather than just the Java 
Console.

We also need ways to ease recovery of an entire file system from local or 
remote copies.  Time Slider focuses on the zfs filesystem level only.  It would 
be nice to add the entire zpool level as well as support for send/receive.

Dealing with multiple command levels or scripts is not easy.  UIs are excellent 
for these tasks.
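
Today that means dropping to the command line, e.g. (pool, dataset, snapshot 
and host names hypothetical):

# zfs send tank/home@monday | ssh backuphost /usr/sbin/zfs receive backuppool/home
# zfs send -i tank/home@monday tank/home@tuesday | ssh backuphost /usr/sbin/zfs receive backuppool/home

A UI that wraps this, including the incremental (-i) case, would cover most of 
what I am asking for.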
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How often to scrub?

2008-12-01 Thread Glaser, David
Hi all,

I have a Thumper (ok, actually 3) with each having one large pool, multiple 
filesystems and many snapshots. They are holding rsync copies of multiple 
clients, being synced every night (using snapshots to keep 'incremental' 
backups).

I'm wondering how often (if ever) I should do scrubs of the pools, or if the 
internal zfs integrity is enough that I don't need to do manual scrubs of the 
pool? I read through a number of tutorials online as well as the zfs wiki 
entry, but I didn't see anything very pertinent. Scrubs are I/O intensive, but 
is the Pool able to be used normally during a scrub? I think the answer is yes, 
but some confirmation helps me sleep at night.

Thoughts? Ideas? Knife-fights?

Thanks
Dave

David Glaser
Systems Administrator Senior
LSA Information Technology
University of Michigan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How often to scrub? [SEC=UNCLASSIFIED]

2008-12-01 Thread LEES, Cooper

Hi,

I scrub my pools once a week on the weekend: rpool on Saturday, cesspool  
(my other pool) on Sunday.  You can still use the box while it is  
scrubbing, but as you would expect the I/O is very slow and can  
sometimes be close to unusable.
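
The mechanics are just (pool name per your setup):

# zpool scrub tank
# zpool status tank

zpool status shows the scrub progress and an estimated time to completion 
while it runs, and reports any checksum errors it repaired afterwards.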


Ta,
---
Cooper Ry Lees
UNIX Evangelist - Information Management Services (IMS)
Australian Nuclear Science and Technology Organisation
T  +61 2 9717 3853
F  +61 2 9717 9273
M  +61 403 739 446
E  [EMAIL PROTECTED]
www.ansto.gov.au



On 02/12/2008, at 2:05 PM, Glaser, David wrote:


Hi all,

I have a Thumper (ok, actually 3) with each having one large pool,  
multiple filesystems and many snapshots. They are holding rsync  
copies of multiple clients, being synced every night (using  
snapshots to keep ‘incremental’ backups).


I’m wondering how often (if ever) I should do scrubs of the pools,  
or if the internal zfs integrity is enough that I don’t need to do  
manual scrubs of the pool? I read through a number of tutorials  
online as well as the zfs wiki entry, but I didn’t see anything very  
pertinent. Scrubs are I/O intensive, but is the Pool able to be used  
normally during a scrub? I think the answer is yes, but some  
confirmation helps me sleep at night.


Thoughts? Ideas? Knife-fights?

Thanks
Dave

David Glaser
Systems Administrator Senior
LSA Information Technology
University of Michigan


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool replace - choke point

2008-12-01 Thread Alan Rubin
I had posted at the Sun forums, but it was recommended to me to try here as 
well.  For reference, please see 
http://forums.sun.com/thread.jspa?threadID=5351916&tstart=0.

In the process of a large SAN migration project we are moving many large 
volumes from the old SAN to the new. We are making use of the 'replace' 
function to replace the old volumes with similar or larger new volumes. This 
process is moving very slowly, sometimes as slow as only moving one percentage 
of data every 10 minutes. Is there any way to streamline this method? The 
system is Solaris 10 08/07. How much is dependent on the activity of the box? 
How about on the architecture of the box? The primary system in question at 
this point is a T2000 with 8GB of RAM and a 4-core CPU. This server has 6 4Gb 
fibre channel connections to our SAN environment. At times this server is quite 
busy because it is our backup server, but performance seems no better when 
backup operations have ceased their daily activities.

Our pools are only stripes. Would we expect better performance from a mirror or 
raidz pool? It is worrisome that, if the environment were compromised by a 
failed disk, it could take so long to replace it and restore the usual 
redundancy (if it were a mirror or raidz pool). 

I have previously applied the kernel change described here: 
http://blogs.digitar.com/jjww/?itemid=52

I just moved a 1TB volume which took approx. 27h.
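
For reference, the per-volume sequence is essentially (device names 
hypothetical):

# zpool replace tank c1t0d0 c2t0d0
# zpool status -v tank

zpool status -v shows the resilver percentage, which is what is creeping along 
at about one percent every 10 minutes.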
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool replace - choke point

2008-12-01 Thread Blake
Have you considered moving to 10/08 ?  ZFS resilver performance is
much improved in this release, and I suspect that code might help you.

You can easily test upgrading with Live Upgrade.  I did the transition
using LU and was very happy with the results.

For example, I added a disk to a mirror and resilvering the new disk
took about 6 min for almost 300GB, IIRC.
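
The LU sequence is roughly (boot environment name and install image path 
hypothetical):

# lucreate -n s10u6
# luupgrade -u -n s10u6 -s /net/installserver/export/s10u6
# luactivate s10u6
# init 6

You keep the old boot environment around, so falling back is easy if anything 
goes wrong.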

Blake



On Mon, Dec 1, 2008 at 11:04 PM, Alan Rubin [EMAIL PROTECTED] wrote:
 I had posted at the Sun forums, but it was recommended to me to try here as 
 well.  For reference, please see 
 http://forums.sun.com/thread.jspa?threadID=5351916&tstart=0.

 In the process of a large SAN migration project we are moving many large 
 volumes from the old SAN to the new. We are making use of the 'replace' 
 function to replace the old volumes with similar or larger new volumes. This 
 process is moving very slowly, sometimes as slowly as one percent of the data 
 every 10 minutes. Is there any way to streamline this 
 method? The system is Solaris 10 08/07. How much is dependent on the activity 
 of the box? How about on the architecture of the box? The primary system in 
 question at this point is a T2000 with 8GB of RAM and a 4-core CPU. This 
 server has 6 4Gb fibre channel connections to our SAN environment. At times 
 this server is quite busy because it is our backup server, but performance 
 seems no better when backup operations have ceased their daily activities.

 Our pools are only stripes. Would we expect better performance from a mirror 
 or raidz pool? It is worrisome that if the environment were compromised by a 
 failed disk that it could take so long to replace and correct the usual 
 redundancies (if it was a mirror or raidz pool).

 I have previously applied the kernel change described here: 
 http://blogs.digitar.com/jjww/?itemid=52

 I just moved a 1TB volume which took approx. 27h.
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How often to scrub?

2008-12-01 Thread Toby Thain

On 1-Dec-08, at 10:05 PM, Glaser, David wrote:

 Hi all,



 I have a Thumper (ok, actually 3) with each having one large pool,  
 multiple filesystems and many snapshots. They are holding rsync  
 copies of multiple clients, being synced every night (using  
 snapshots to keep ‘incremental’ backups).



 I’m wondering how often (if ever) I should do scrubs of the pools,  
 or if the internal zfs integrity is enough that I don’t need to do  
 manual scrubs of the pool?



Yes you should. Passive integrity is not all; proactively reading the  
pool improves your MTTDL substantially (see other sources for the  
actual figures). :) It does not need to be very frequent. I do it  
monthly on my colo server.
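
A root crontab entry is all it takes to automate that (pool name hypothetical):

0 3 1 * * /usr/sbin/zpool scrub tank

That starts a scrub at 03:00 on the first of each month; check the outcome  
later with zpool status.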

--Toby

 I read through a number of tutorials online as well as the zfs wiki  
 entry, but I didn’t see anything very pertinent. Scrubs are I/O  
 intensive, but is the Pool able to be used normally during a scrub?  
 I think the answer is yes, but some confirmation helps me sleep at  
 night.



 Thoughts? Ideas? Knife-fights?



 Thanks

 Dave



 David Glaser

 Systems Administrator Senior

 LSA Information Technology

 University of Michigan


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Blake
Also, see the 'hd -e' option (unless this works only with the Marvell chipset):

[EMAIL PROTECTED] ~]# hd -e c3t1
Revision: 16
Offline status 132
Selftest status 0
Seconds to collect 15960
Time in minutes to run short selftest 2
Time in minutes to run extended selftest 198
Offline capability 123
SMART capability 3
Error logging capability 1
Checksum 0xaf
Identification                     Status  Current  Worst  Raw data
  1 Raw read error rate            0xf     200      200    0
  3 Spin up time                   0x3     184      184    7758
  4 Start/Stop count               0x32    100      100    54
  5 Reallocated sector count       0x33    200      200    0
  7 Seek error rate                0xe     200      200    0
  9 Power on hours count           0x32    94       94     4833
 10 Spin retry count               0x12    100      253    0
 11 Recalibration Retries count    0x12    100      253    0
 12 Device power cycle count       0x32    100      100    53
192 Power off retract count        0x32    200      200    28
193 Load cycle count               0x32    200      200    54
194 Temperature                    0x22    124      114    28/  0/  0
                                                           (degrees C cur/min/max)
196 Reallocation event count       0x32    200      200    0
197 Current pending sector count   0x12    200      200    0
198 Scan uncorrected sector count  0x10    200      200    0
199 Ultra DMA CRC error count      0x3e    200      200    0
200 Write/Multi-Zone Error Rate    0x8     200      200    0


On Mon, Dec 1, 2008 at 8:57 PM, Blake Irvin [EMAIL PROTECTED] wrote:
 I've used that tool only with the Marvell chipset that ships with the
 thumpers.  (in a supermicro hba)

 Have you looked at cfgadm?

 Blake

 On Dec 1, 2008, at 7:49 PM, [EMAIL PROTECTED] wrote:

 (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd

 can't dump all SMART data, but get some temps on a generic box..

  4 % hd -a
  Device    Serial    Vendor    Model             Rev   Temperature    fdisk Type
  --------  --------  --------  ----------------  ----  -------------  ----------
  c3t0d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
  c3t1d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
  c3t2d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
  c3t4d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
  c3t5d0p0            ATA       ST3750640AS       K     255 C (491 F)  EFI
  c4t0d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
  c4t1d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
  c4t2d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
  c4t4d0p0            ATA       WDC WD1001FALS-0  0K05  42 C (107 F)   EFI
  c4t5d0p0            ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
  c5t0d0p0            TSSTcorp  CD/DVDW SH-S162A  TS02  None           None
  c5t1d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
  c5t2d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
  c5t3d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
  c5t4d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
  c5t5d0p0            ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2

 Do you know of a solaris tool to get SMART data?

   Rob


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool replace - choke point

2008-12-01 Thread Alan Rubin
We will be considering it in the new year,  but that will not happen in time to 
affect our current SAN migration.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How often to scrub?

2008-12-01 Thread Richard Elling
Glaser, David wrote:

 Hi all,

 I have a Thumper (ok, actually 3) with each having one large pool, 
 multiple filesystems and many snapshots. They are holding rsync copies 
 of multiple clients, being synced every night (using snapshots to keep 
 ‘incremental’ backups).

 I’m wondering how often (if ever) I should do scrubs of the pools, or 
 if the internal zfs integrity is enough that I don’t need to do manual 
 scrubs of the pool? I read through a number of tutorials online as 
 well as the zfs wiki entry, but I didn’t see anything very pertinent. 
 Scrubs are I/O intensive, but is the Pool able to be used normally 
 during a scrub? I think the answer is yes, but some confirmation helps 
 me sleep at night.


We did a study on re-write scrubs which showed that once per year was a
good interval for modern, enterprise-class disks. However, ZFS does a
read-only scrub, so you might want to scrub more often.

 Thoughts? Ideas? Knife-fights?


Knife fights? Naw, more like paranoia will destroy ya :-)
Maybe we need a ZFS theme song :-)
http://www.youtube.com/watch?v=g3OVaCDLc9M
http://www.youtube.com/watch?v=ZBbAZVw3_7A
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss