Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-15 Thread Vincent Boisard
On Sat, Nov 15, 2008 at 12:18 AM, Al Hopper [EMAIL PROTECTED] wrote:


  This has a comparison (at the time) of the differences between the
  various Solaris versions:
  http://blogs.sun.com/weber/entry/solaris_opensolaris_nevada_indiana_sxde

 That's too old to be useful.


I agree with you on this.



 OTOH - if you don't know OpenSolaris well enough, you're better off
 either picking an earlier release that has proven to have very few
 relevant warts - usually based on a recommendation from other, more
 experienced, users.  Or you could go with the commercial, rock-solid
 release called Solaris U6 (Update 6), recently released.


Where can I find advice on these earlier versions with few relevant warts?
When I look at forums, I see good and bad for each release. Also, S10U6 does
not have features that I need (zone cloning via ZFS). Also, as I have no
support contract with Sun (home user), I am not sure whether I will get
patches or not.



 In any case, load the release you choose, play with it for a week or
 so, while running the type of apps you intend to run and see if it
 works for you.  After that, consider it production and load up all
 your precious data.


I'll try to do that


  Also - to add yet another dimension to the decision making process -
 os2008.11 is due out any day now.  I think that this release will be a
 winner.  You can download and eval the Release Candidate from
 http://www.genunix.org/ (based on 101a).  The production release
 can't be far away.   To a large extent, os2008.nn will be a better
 long-term choice, since it incorporates the new package update
 facility.  So you'll be able to upgrade any problem binaries very
 easily and with very little risk of something going wrong.


I'd love to go with os2008.nn, but the zone features are too different.
I need sparse zones (and perhaps branded zones for Linux). Also, I don't
have a fast internet connection, so fetching everything from the web every
time I create a zone is a bit of a problem.

Anyway, thanks for your help,

Vincent


[zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hello


Is there any way to list all snapshots of a particular file system without
listing the snapshots of its child file systems?


Thanks,
Mike


Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-15 Thread Casper . Dik

I looked at this a month back. I was leaning towards Intel for
performance and power consumption, but went for AMD due to the lack of ECC
support in most of the Intel chipsets.

I went for an AM2+ GeForce 8200 motherboard, which seemed more stable
with Solaris than the 8300. With the AM2+ socket I can wait for the new
45nm CPUs; I bought the cheapest dual-core I could find for now (which
does not support PM). I am very happy with the system except for the
fact that the onboard NIC doesn't work.

Which NIC is that?

Casper



[zfs-discuss] Seeking thoughts on using SXCE rather than Solaris 10 on production servers.

2008-11-15 Thread Ian Collins
Anyone who follows this list will have seen a number of issues with
Solaris 10 and ZFS from me this week.

We deployed Solaris 10 for the usual conservative reasons: support and
stability.  Most of my ZFS experience has been with SXCE, and I've
seen problems reported and fixed a couple of builds later.  The further
SXCE moves ahead of Solaris 10 ZFS, the longer (and probably more
difficult) the task of back-porting these fixes will become.

So my question is: for production servers (x4540) that are primarily SMB
(80%) and NFS (20%) file servers, would you deploy SXCE with native
CIFS support, or Solaris 10/Samba?

I wouldn't hesitate to go with the former, relying on Live Upgrade to
incorporate fixes rather than patching.  Persuading clients may be a
little harder!
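
For anyone unfamiliar with the workflow, a Live Upgrade cycle looks
roughly like this (the BE name and image path below are made up):

# create a new boot environment, upgrade it from an SXCE install image,
# activate it, and reboot; the old BE remains as a fallback
lucreate -n newBE
luupgrade -u -n newBE -s /path/to/sxce_image
luactivate newBE
init 6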

-- 
Ian.



Re: [zfs-discuss] ZFS snapshot list

2008-11-15 Thread Kees Nuyt
[Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko
[EMAIL PROTECTED] wrote:

Hello

Is there any way to list all snapshots of a particular file system
without listing the snapshots of its child file systems?

fsnm=tank/fs; zfs list -rt snapshot ${fsnm} | grep "${fsnm}@"

or even

fsnm=tank/fs; zfs list -r ${fsnm} | grep "${fsnm}@"

-- 
  (  Kees Nuyt
  )
c[_]


Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-15 Thread Thomas Maier-Komor

 
 Seems like there's a strong case to have such a program bundled in Solaris.
 

I think the idea of having a separate, configurable buffer program with a
rich feature set fits the UNIX philosophy of small programs that can be
used as building blocks to solve larger problems.

mbuffer is already bundled with several Linux distros, and that is also the
reason its feature set has expanded over time. In the beginning there wasn't
even support for network transfers.

Today mbuffer supports direct transfer to multiple receivers, data transfer
rate limiting, a high/low watermark algorithm, on-the-fly md5 calculation,
multi-volume tape access, use of sendfile, and a configurable buffer
size/layout.

So ZFS send/receive is just another use case for this tool.
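
As an illustration, a buffered send/receive between two hosts might look
like this (the host name, dataset names, port, and sizes are placeholders;
check your mbuffer version's man page for the exact options):

# on the receiving host: listen on TCP port 9090 with a 1 GB buffer
mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup

# on the sending host: stream the snapshot through mbuffer to the receiver
zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090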

- Thomas


Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-15 Thread dick hoogendijk
On Sat, 15 Nov 2008 18:49:17 +1300
Ian Collins [EMAIL PROTECTED] wrote:

 [EMAIL PROTECTED] wrote:
WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
The P45 based boards are a no-brainer
 
  16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210 based boards
 
  That is the question.

 I guess the answer is how valuable is your data?

I disagree. The answer is: go for the 16G and make backups. The 16G
system will work far more easily, and I may be lucky, but in the past
years I have not had ZFS issues with my non-ECC RAM ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv101 ++
+ All that's really worth doing is what we do for others (Lewis Carroll)


Re: [zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hi

 [Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko
 [EMAIL PROTECTED] wrote:
 
 Hello

 Is there any way to list all snapshots of a particular file system
 without listing the snapshots of its child file systems?
 
 fsnm=tank/fs; zfs list -rt snapshot ${fsnm} | grep "${fsnm}@"
 
 or even
 
 fsnm=tank/fs; zfs list -r ${fsnm} | grep "${fsnm}@"


Yes, thanks - I know about grep, but if you have hundreds of thousands of
snapshots, grep is what I wanted to avoid. In my case a full zfs list -rt
snapshot takes hours, while listing snapshots for an individual filesystem
is much, much quicker :(
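
(For what it's worth, later builds grew a depth option for zfs list that
sidesteps the recursion entirely; I don't believe snv_101 has it yet, so
treat the following as an assumption about newer bits:)

# -d 1 limits recursion depth to the dataset itself, so with
# -t snapshot only tank/fs's own snapshots are listed
zfs list -d 1 -t snapshot tank/fs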


Regards
Mike


Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-15 Thread Richard Elling
dick hoogendijk wrote:
 On Sat, 15 Nov 2008 18:49:17 +1300
 Ian Collins [EMAIL PROTECTED] wrote:

   
 [EMAIL PROTECTED] wrote:
 
   WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
   The P45 based boards are a no-brainer

 16G of DDR2-1066 with P45 or
   8G of ECC DDR2-800 with 3210 based boards

 That is the question.
   
   
 I guess the answer is how valuable is your data?
 

 I disagree. The answer is: go for the 16G and make backups. The 16G
 system will work far more easily, and I may be lucky, but in the past
 years I have not had ZFS issues with my non-ECC RAM ;-)
   

You are lucky.  I recommend ECC RAM for any data that you care
about.  Remember, if there is main-memory corruption, it may corrupt
the data that ZFS writes, which will negate any on-disk redundancy.
And yes, this does occur -- check the archives for the tales of woe.
 -- richard



Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-15 Thread Johan Hartzenberg
On Sat, Nov 15, 2008 at 10:57 AM, Vincent Boisard [EMAIL PROTECTED] wrote:




 OTOH - if you don't know OpenSolaris well enough, you're better off
 either picking an earlier release that has proven to have very few
 relevant warts - usually based on a recommendation from other, more
 experienced, users.  Or you could go with the commercial, rock-solid
 release called Solaris U6 (Update 6), recently released.


 Where can I find advice on these earlier versions with few relevant
 warts? When I look at forums, I see good and bad for each release. Also,
 S10U6 does not have features that I need (zone cloning via ZFS). Also, as
 I have no support contract with Sun (home user), I am not sure whether I
 will get patches or not.


If Zone Cloning via ZFS snapshots is the only feature you miss in S10u6,
then you should reconsider.  Writing a script to implement this yourself
will require only a little experimentation.
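
A rough, untested sketch of the idea, assuming each zone root lives on
its own ZFS dataset (zone names and dataset paths below are made up):

# snapshot the source zone's dataset and clone it for the new zone
zfs snapshot rpool/zones/zone1@golden
zfs clone rpool/zones/zone1@golden rpool/zones/zone2

# duplicate the zone configuration under the new name and zonepath
zonecfg -z zone1 export | sed 's/zone1/zone2/g' | zonecfg -z zone2

# then attach the new zone (zoneadm -z zone2 attach, possibly -F) and
# sys-unconfig it on first boot to give it a fresh identity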


-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-15 Thread Al Hopper
On Sat, Nov 15, 2008 at 9:26 AM, Richard Elling [EMAIL PROTECTED] wrote:
 dick hoogendijk wrote:
 On Sat, 15 Nov 2008 18:49:17 +1300
 Ian Collins [EMAIL PROTECTED] wrote:


 [EMAIL PROTECTED] wrote:

   WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
   The P45 based boards are a no-brainer

 16G of DDR2-1066 with P45 or
   8G of ECC DDR2-800 with 3210 based boards

 That is the question.


 I guess the answer is how valuable is your data?


 I disagree. The answer is: go for the 16G and make backups. The 16G
 system will work far more easily, and I may be lucky, but in the past
 years I have not had ZFS issues with my non-ECC RAM ;-)


 You are lucky.  I recommend ECC RAM for any data that you care
 about.  Remember, if there is main-memory corruption, it may corrupt
 the data that ZFS writes, which will negate any on-disk redundancy.
 And yes, this does occur -- check the archives for the tales of woe.

I agree with your recommendation, Richard.  OTOH, I've built/used a
bunch of systems over several years that were mostly non-ECC equipped
and only lost one DIMM along the way.  So I guess I've been lucky also
- but IMHO the failure rate for RAM these days is pretty small [1].
I've also been around hundreds of SPARC boxes and, again, very few
RAM failures (one is all that I can remember).

Risk management is exactly that.  You have to determine where the risk
is, how important it is, and how likely it is to bite.  And then
allocate costs from your budget to minimize that risk.  Remember that
you won't totally eliminate all risk - but you can minimize it.  At
the time when there was a big cost delta between ECC and non-ECC RAM
parts, I always went with the most (non-ECC) RAM that the budget would
support.  That was my personal risk assessment and priority.  I think
it was a good decision and it didn't cause me any grief.

[1] I do recommend that you test the heck out of new RAM parts and
ensure that they get some airflow - especially if they are getting a
supply of hot air from any nearby CPU coolers.  Even the simple
"finger test" will tell you if you need a fan for your RAM DIMMs.

-- 
Al Hopper  Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
   Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] Lost space in empty pool (no snapshots)

2008-11-15 Thread Henrik Johansson
I have done some more tests; it seems that if I create a large file
with mkfile and interrupt the creation, the space that was allocated
is still occupied after I remove the file.

I'm going to file this as a bug if no one has anything to add.

First I create a new pool; on that pool I create a file and interrupt
the creation. After removing that file, the space is free again:

# uname -a
SunOS tank 5.11 snv_101 i86pc i386 i86pc

# zpool create tank raidz c1t1d0 c1t2d0 c1t4d0
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  85.9K  2.66T  24.0K  /tank
# mkfile 10G /tank/testfile01
^C# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  4.73G  2.66T  4.73G  /tank
# rm /tank/testfile01 && sync
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  85.9K  2.66T  24.0K  /tank

Now, if I do the same again, but with a very large file:

# mkfile 750G /tank/testfile02
^C# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  11.3G  2.65T  11.3G  /tank
# rm /tank/testfile02 && sync
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  12.2G  2.65T  12.2G  /tank
# zpool export tank
# zpool import tank
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  12.2G  2.65T  12.2G  /tank
# zpool scrub tank
# zpool status tank
  pool: tank
state: ONLINE
scrub: scrub completed after 0h1m with 0 errors on Sun Nov 16 01:17:54  
2008
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0

errors: No known data errors

Some zdb output:

# zdb - tank |more
Dataset mos [META], ID 0, cr_txg 4, 89.9K, 30 objects, rootbp [L0 DMU objset]
400L/200P DVA[0]=0:c800026800:400 DVA[1]=0:1926800:400 DVA[2]=0:26800:400
fletcher4 lzjb LE contiguous birth=43 fill=30
cksum=af477f73c:4926037df90:f80afd99a65f:2399d9c07818be

Object  lvl   iblk   dblk  lsize  asize  type
 0116K16K16K 8K  DMU dnode

Object  lvl   iblk   dblk  lsize  asize  type
 1116K16K32K  12.0K  object directory
Fat ZAP stats:
Pointer table:
1024 elements
zt_blk: 0
zt_numblks: 0
zt_shift: 10
zt_blks_copied: 0
zt_nextblk: 0
ZAP entries: 7
Leaf blocks: 1
Total blocks: 2
zap_block_type: 0x8001
zap_magic: 0x2f52ab2ab
zap_salt: 0x1d479cab3
Leafs with 2^n pointers:
9:  1 *
Blocks with n*5 entries:
1:  1 *
Blocks n/10 full:
1:  1 *
Entries with n chunks:
3:  7 ***
Buckets with n entries:
0:505  

1:  7 *

sync_bplist = 21
history = 22
root_dataset = 2
errlog_scrub = 0
errlog_last = 0
deflate = 1
config = 20

Object  lvl   iblk   dblk  lsize  asize  type
 2116K512512  0  DSL directory
 256  bonus  DSL directory
creation_time = Sun Nov 16 01:11:53 2008
head_dataset_obj = 16
parent_dir_obj = 0
origin_obj = 14
child_dir_zapobj = 4
used_bytes = 12.2G
compressed_bytes = 12.2G
uncompressed_bytes = 12.2G
quota = 0
reserved = 0
props_zapobj = 3
deleg_zapobj = 0
flags = 1
used_breakdown[HEAD] = 12.2G
used_breakdown[SNAP] = 0
used_breakdown[CHILD] = 89.9K
used_breakdown[CHILD_RSRV] = 0
used_breakdown[REFRSRV] = 0

Object  lvl   iblk   dblk  lsize  asize  type
 3116K512512 2K  DSL props
microzap: 512 bytes, 0 entries


Object  lvl   iblk   dblk  lsize  asize  type
 4116K512512 2K  DSL directory child map
microzap: 512 bytes, 2 entries

$MOS = 5
$ORIGIN = 8

Object  lvl   iblk   dblk  lsize  asize  type
 5116K512512  0  DSL directory
 256  bonus  DSL directory
creation_time = Sun Nov 16 01:11:53 2008
head_dataset_obj = 0
parent_dir_obj = 2
origin_obj = 0
child_dir_zapobj = 

Re: [zfs-discuss] [ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM

2008-11-15 Thread James Black
I've tried using S10 U6 to reinstall the boot file (instead of U5) over
JumpStart, as it's an LDom, and noticed another error.

Boot device: /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]  File and 
args: -s
Requesting Internet Address for 0:14:4f:f9:84:f3
boot: cannot open kernel/sparcv9/unix
Enter filename [kernel/sparcv9/unix]:

Has anyone seen this error with a U6 JumpStart, or is it just me?


Re: [zfs-discuss] [ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM

2008-11-15 Thread James Black
Sorry, here is the rest of my problem...
Hi, I just finished patching 30+ LDoms and on the last one I get this error 
when booting.
------------------------------------------------------------
SPARC Enterprise T2000, No Keyboard
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.29.0.a, 6144 MB memory available, Serial #66845120.
Ethernet address 0:14:4f:fb:f9:c0, Host ID: 83fbf9c0.



Boot device: /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]:a File and 
args:

seek failed

Warning: Fcode sequence resulted in a net stack depth change of 1
Evaluating:

Evaluating:
The file just loaded does not appear to be executable.
 
I have a number of T2000 servers all with the same firmware and OS patch level.

sc> showsc version -v
Advanced Lights Out Manager CMT v1.6.5
SC Firmware version: CMT 1.6.5
SC Bootmon version: CMT 1.6.5

VBSC 1.6.7.b
VBSC firmware built Sep 29 2008, 09:30:31

SC Bootmon Build Release: 01
SC bootmon checksum: E6213179
SC Bootmon built Sep 29 2008, 08:37:29

SC Build Release: 01
SC firmware checksum: EA9D0B0D

SC firmware built Sep 29 2008, 09:34:34
SC firmware flashupdate FRI NOV 07 04:20:00 2008

SC System Memory Size: 32 MB
SC NVRAM Version = 14
SC hardware type: 4

FPGA Version: 4.2.4.7
sc> showhost
SPARC-Enterprise-T2000 System Firmware 6.6.7 2008/09/29 09:36

Host flash versions:
OBP 4.29.0.a 2008/09/15 12:01
Hypervisor 1.6.7.a 2008/09/29 09:29
POST 4.29.0.a 2008/09/15 12:26
###


My LDoms all have the same OS patch level as well; the only thing different
on this one (where I am getting the error) is that it has Sun Studio 12 plus
Studio patches.

The LDom has a UFS boot file system and ZFS pools for my global zone. The
system was built from Solaris 10 U5.

I tried installboot recovery from U5, but that didn't work; with U6 I get
another error when booting over the net:
Requesting Internet Address for 0:14:4f:f9:84:f3
boot: cannot open kernel/sparcv9/unix
Enter filename [kernel/sparcv9/unix]


Help needed. Any ideas?

Thanks,
James