Re: [zfs-discuss] can anyone help me?

2008-06-01 Thread Eric Snellman
I have very little technical knowledge on what the problem is.

Some random things to try:

Make a separate zpool and filesystem for swap (a rough sketch follows below).

Add more RAM to the system.
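
A rough sketch of the first suggestion (the device name and size are only
illustrative; on ZFS, swap normally goes on a volume rather than a plain
filesystem):

 # zpool create swappool c2t0d0
 # zfs create -V 2G swappool/swapvol
 # swap -a /dev/zvol/dsk/swappool/swapvol

'swap -l' should then list the new device.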
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/sadm on zfs?

2008-06-01 Thread Enda O'Connor
Jim Litchfield at Sun wrote:
 I think you'll find that any attempt to make zones (certainly whole root
 ones) will fail after this.
   

Right, zoneadm install actually copies the global zone's undo.z into 
the local zone, so that patchrm of an existing patch will work.

I haven't tried out what happens when the undo is missing,

but zoneadm install actually copies the undo from
/var/sadm/pkg/SUNWcsr/save/pspool/SUNWcsr/save/patch-id/undo.z
(the example above is for just SUNWcsr).

BTW the undo under pspool is identical to the one in 
/var/sadm/pkg/SUNWcsr/save/patch-id/undo.z (an obvious waste of space, 
really),

so one solution, based on Mike's, would be to create a symlink in the 
pspool save/patch-id directory for each undo.z being moved; a rough sketch 
follows.
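
A rough sketch of that, with illustrative paths (SUNWcsr and 123456-01 are 
just the examples used elsewhere in this thread), assuming the undo.Z files 
were relocated with Mike's cpio command below:

 # cd /var/sadm/pkg/SUNWcsr/save/pspool/SUNWcsr/save/123456-01
 # ln -s /somewhere/else/pkg/SUNWcsr/save/pspool/SUNWcsr/save/123456-01/undo.Z undo.Z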

Note I have not tested any of this out so beware :-)

Enda
 Jim
 ---
 Mike Gerdts wrote:
   
 On Sat, May 31, 2008 at 5:16 PM, Bob Friesenhahn
 [EMAIL PROTECTED] wrote:
   
 
 On my heavily-patched Solaris 10U4 system, the size of /var (on UFS)
 has gotten way out of hand due to the remarkably large growth of
 /var/sadm.  Can this directory tree be safely moved to a zfs
 filesystem?  How much of /var can be moved to a zfs filesystem without
 causing boot or runtime issues?
 
   
 /var/sadm is not used during boot.

 If you have been patching regularly, you probably have a bunch of
 undo.Z files that are used only in the event that you want to back
 out.  If you don't think you will be backing out any patches that were
 installed 90 or more days ago, the following commands may be helpful:

 To understand how much space would be freed up by whacking the old undo 
 files:

 # find /var/sadm/pkg -mtime +90 -name undo.Z | xargs du -k \
 | nawk '{t+= $1; print $0} END {printf("Total: %d MB\n", t / 1024)}'

 Copy the old backout files somewhere else:

 # cd /var/sadm
 # find pkg -mtime +90 -name undo.Z \
  | cpio -pdv /somewhere/else

 Remove the old (90+ days) undo files

 # find /var/sadm/pkg -mtime +90 -name undo.Z | xargs rm -f

 Oops, I needed those files to back out 123456-01

 # cd /somewhere/else
 # find pkg -name undo.Z | grep 123456-01 \
  | cpio -pdv /var/sadm
 # patchrm 123456-01

 Before you do this, test it and convince yourself that it works.  I
 have not seen Sun documentation (either docs.sun.com or
 sunsolve.sun.com) that says that this is a good idea - but I haven't
 seen any better method for getting rid of the cruft that builds up in
 /var/sadm either.

 I suspect that further discussion on this topic would be best directed
 to [EMAIL PROTECTED] or sun-managers mailing list (see
 http://www.sunmanagers.org/).

   
 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/sadm on zfs?

2008-06-01 Thread Mike Gerdts
On Sun, Jun 1, 2008 at 3:53 AM, Enda O'Connor [EMAIL PROTECTED] wrote:
 Jim Litchfield at Sun wrote:

 I think you'll find that any attempt to make zones (certainly whole root
 ones) will fail after this.


 right, zoneadm install actually copies in the global zones undo.z into the
 local zone, so that patchrm of an existing patch will work.

 haven't tried out what happens when the undo is missing,

My guess is that it works just fine - based upon the fact that patchadd -d
does not create the undo.z file.  Admittedly, it is sloppy to just get
rid of the undo.z file - the existence of the other related
directories (save/patchid) may trip something up.
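
For reference, that looks roughly like this (the patch location is
illustrative; 123456-01 is just the example id from earlier in the thread),
and afterwards patchrm of that patch is not possible:

# patchadd -d /var/spool/patch/123456-01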

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic: avl_find() succeeded inside avl_add()

2008-06-01 Thread Mike Gerdts
On Sat, May 31, 2008 at 9:38 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
 $ find /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
 /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
 /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/.make.state.lock
 /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/debug64
 panic

The stack from this one is...

> ::stack
vpanic(128d918, 300093c3778, 2a1010c7418, 0, 300093c39a8, 1229000)
avl_add+0x38(300091da548, 300093c3778, 649e740, 30005f1a180,
800271d6, 128d800)
mzap_open+0x18c(cf, 300091da538, 300091df998, 30005f1a180, 300091da520,
300091da508)
zap_lockdir+0x54(30003ac6b88, 26b32, 0, 0, 1, 2a1010c78f8)
zap_cursor_retrieve+0x40(2a1010c78f0, 2a1010c77d8, 0, 1, 2a1010c78f0, 2)
zfs_readdir+0x224(3, 2a1010c7aa0, 30009173308, 2, 2000, 2a1010c77f0)
fop_readdir+0x44(300091fe940, 2a1010c7aa0, 30005f403b0, 2a1010c7a9c, 2000,
111dd48)
getdents64+0x90(4, 2a1010c7ad0, 2000, 0, 30008245dd0, 0)
syscall_trap32+0xcc(4, ff1a, 2000, 0, 0, 0)

It tripped up on:

> 300091fe940::print vnode_t v_path
v_path = 0x300082608c0 
/ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/debug64

Which is a subdirectory of where it tripped up before.

I am able to do "find /ws/mount -name serengeti -prune" without
problems.  To make it so that I can hopefully proceed with the build, I
have moved the directory out of the way, then did an "hg update" so
that the build I was trying to do can hopefully complete.
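
That is, something along these lines (paths approximate):

$ mv /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti /var/tmp/serengeti.aside
$ cd /ws/mount/onnv-gate && hg update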

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS problems with USB Storage devices

2008-06-01 Thread Paulo Soeiro
Greetings,

I was experimenting with ZFS and made the following test: I shut down the
computer during a write operation on a mirrored USB storage filesystem.

Here is my configuration

NGS USB 2.0 Minihub 4
3 USB Silicom Power Storage Pens 1 GB each

These are the ports:

hub devices
/-------------------------\
| port 2     | port 1     |
| c10t0d0p0  | c9t0d0p0   |
|------------+------------|
| port 4     | port 4     |
| c12t0d0p0  | c11t0d0p0  |
\-------------------------/

Here is the problem:

1) First I created a mirror with the port 2 and port 1 devices:

zpool create myPool mirror c10t0d0p0 c9t0d0p0
-bash-3.2# zpool status
  pool: myPool
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
myPool ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c10t0d0p0  ONLINE   0 0 0
c9t0d0p0   ONLINE   0 0 0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c5t0d0s0  ONLINE   0 0 0

errors: No known data errors

2) zfs create myPool/myfs

3) Created a random file (file.txt, roughly 100 MB in size) and checksummed it:

digest -a md5 file.txt
3f9d17531d6103ec75ba9762cb250b4c
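
A file like this can be generated with, for example (the source, block size
and count are just placeholders):

dd if=/dev/urandom of=file.txt bs=1024k count=100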

4) While making a second copy of the file:

cp file.txt test

I shut down the computer while the file was being copied, then restarted
it. Here is the result:


-bash-3.2# zpool status
  pool: myPool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid.  There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
myPool UNAVAIL  0 0 0  insufficient replicas
  mirror   UNAVAIL  0 0 0  insufficient replicas
c12t0d0p0  OFFLINE  0 0 0
c9t0d0p0   FAULTED  0 0 0  corrupted data

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c5t0d0s0  ONLINE   0 0 0

errors: No known data errors

---

I was expecting that only one of the files would be corrupted, not the whole
filesystem.


Thanks & Regards
Paulo
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-06-01 Thread Orvar Korvar
This sounds like a pain.

Is it possible for you to buy support from Sun on this matter, if it is
really important to you?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-06-01 Thread Marc Bevand
So you are experiencing slow I/O, which is making the deletion of this clone 
and the replay of the ZIL take forever. It could be because of random I/O ops, 
or because one of your disks is dying (not reporting any errors, but very slow 
to execute every single ATA command). You provided the output of 'zpool 
iostat' while an import was hanging; what about 'iostat -Mnx 3 20' (not to be 
confused with zpool iostat)? Please let the command complete; it will run for 
3*20 = 60 secs.

Also, to validate the slowly-dying-disk theory, reboot the box, do NOT import 
the pool, and run 4 of these commands (in parallel in the background) with 
c[1234]d0p0:
  $ dd bs=1024k of=/dev/null if=/dev/rdsk/cXd0p0
Then run 'iostat -Mnx 2 5'.
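
Concretely, something like this (a sketch reusing the c[1234]d0p0 names above; 
adjust to your actual devices):

  $ for d in c1d0p0 c2d0p0 c3d0p0 c4d0p0; do
  >   dd bs=1024k of=/dev/null if=/dev/rdsk/$d &
  > done
  $ iostat -Mnx 2 5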

Also, are you using non-default settings in /etc/system (other than 
zfs_arc_max)? Are you passing any particular kernel parameters via GRUB or 
via 'eeprom'?
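
For reference, such a setting in /etc/system looks like this (the value is 
only a placeholder):

  set zfs:zfs_arc_max = 0x20000000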

On a side note, what is the version of your pool and the version of your 
filesystems? If you don't know, run 'zpool upgrade' and 'zfs upgrade' with no 
argument.

What is your SATA controller? I didn't see you run dmesg.

-marc


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-01 Thread Jean-Paul Rivet
I have the same problem here, with the https://localhost:6789 ZFS Administration 
page bombing out with the same error.

I am using SXCE B90 with ZFS as the root partition running in VBOX 1.6.

Any suggestions on next steps?

Cheers, JP
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-06-01 Thread Keith Bierman

On May 30, 2008, at 6:59 PM, Erik Trimble wrote:

 The only drawback of the older Socket 940 Opterons is that they don't
 support the hardware VT extensions, so running a Windows guest  
 under xVM
 on them isn't currently possible.



From the VirtualBox manual, page 11:

• No hardware virtualization required. VirtualBox does not require processor
features built into newer hardware like VT-x (on Intel processors) or AMD-V
(on AMD processors). As opposed to many other virtualization solutions, you
can therefore use VirtualBox even on older hardware where these features are
not present. In fact, VirtualBox’s sophisticated software techniques are
typically faster than hardware virtualization, although it is still possible
to enable hardware virtualization on a per-VM basis. Only for some exotic
guest operating systems like OS/2, hardware virtualization is required.




I've been running Windows under OpenSolaris on an aged 32-bit Dell.
I'm morally certain it lacks the hardware support, and in any event,
the VBOX configuration is set to avoid using the VT extensions anyway.

Runs fine. Not the fastest box on the planet ... but it's got limited  
DRAM.
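
For what it's worth, that per-VM setting can also be flipped from the command 
line with something like the following; the VM name is made up, and the exact 
option spelling varies between VirtualBox releases:

  VBoxManage modifyvm "WinXP" -hwvirtex off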



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-01 Thread Jean-Paul Rivet
Just tried SXCE B90 with UFS as root partition in VBOX 1.6 and it works fine, 
so ZFS as the root partition might be the cause...

Cheers, JP
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-01 Thread Jim Klimov
I checked - this system has a UFS root. When it was installed as snv_84 and then 
LU'd to snv_89, and when I fiddled with these packages from various other 
releases, it gave the stacktrace instead of the ZFS admin GUI (or the well-known 
smcwebserver restart effect with the older packages).

This system was installed mostly from the End-User cluster, with a few unneeded 
packages hand-picked out; usually our server systems are stripped way more 
(Net-Core plus a few packages for zone/zfs support). Perhaps I'm missing a bit 
from some Java or library dependency?

Were your UFS and ZFS setups different in packaging?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-06-01 Thread Hernan Freschi
I'll provide you with the results of these commands soon. But for the record, 
Solaris does hang (it runs out of memory, I can't type anything on the console, 
etc.). What I can do is boot with -k and get to kmdb when it's hung (BREAK over 
the serial line). I have a crashdump I can upload.
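
For reference, booting with -k amounts to adding it to the kernel line in 
GRUB's menu.lst, roughly like this (the exact kernel path depends on the 
release; console=ttya is only needed for the serial console):

kernel$ /platform/i86pc/kernel/$ISADIR/unix -k -B console=ttya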

I checked the disks with the drive manufacturers' tests and found no errors.
The controller is an NForce4 SATA on-board. zpool version is the latest (10). 
The non-default settings were removed, these were only for testing. No other 
non-default eeprom settings (other than the serial console options, but these 
were added after the problem started).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-06-01 Thread Hernan Freschi
Here's the output. Numbers may be a little off because I'm doing a nightly 
build and compressing a crashdump with bzip2 at the same time.

                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    3.7   19.4    0.1    0.3  3.3  0.0  142.7    1.6   1   3 c0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.1   12.6   0   0 c5t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.1   13.0   0   0 c5t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.1   12.6   0   0 c6t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.1   13.4   0   0 c6t1d0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   25.9   12.0    1.3    0.3  0.0  0.2    0.0    4.4   0  14 c0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0
   75.2    0.0   75.2    0.0  0.0  1.0    0.1   12.7   0  96 c5t0d0
   68.2    0.0   68.2    0.0  0.0  0.9    0.1   13.1   0  89 c5t1d0
   71.7    0.0   71.7    0.0  0.0  0.9    0.1   13.1   0  94 c6t0d0
   62.8    0.0   62.8    0.0  0.0  0.9    0.1   14.0   0  88 c6t1d0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   24.0   16.0    0.6    0.3  0.0  0.0    0.1    0.8   0   3 c0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0
   65.5    0.0   65.5    0.0  0.0  0.9    0.1   14.2   0  93 c5t0d0
   59.0    0.0   59.0    0.0  0.0  0.9    0.1   14.9   0  88 c5t1d0
   67.5    0.0   67.5    0.0  0.0  0.9    0.1   13.2   0  89 c6t0d0
   66.5    0.0   66.5    0.0  0.0  0.9    0.1   14.0   0  93 c6t1d0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   47.0   15.5    0.8    0.2  0.1  0.1    1.9    1.6   3   5 c0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0
   55.5    0.0   55.5    0.0  0.0  0.8    0.1   14.5   0  80 c5t0d0
   73.0    0.0   73.0    0.0  0.0  1.0    0.1   13.2   0  96 c5t1d0
   72.5    0.0   72.5    0.0  0.0  1.0    0.1   13.3   0  96 c6t0d0
   68.0    0.0   68.0    0.0  0.0  1.0    0.1   14.3   0  97 c6t1d0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    9.5    0.0    0.2  0.0  0.0    0.0    0.3   0   0 c0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0
   65.0    0.0   65.0    0.0  0.0  0.9    0.1   14.5   0  94 c5t0d0
   73.5    0.0   73.5    0.0  0.0  0.9    0.1   12.8   0  94 c5t1d0
   75.0    0.0   75.0    0.0  0.0  0.9    0.1   11.8   0  89 c6t0d0
   68.5    0.0   68.5    0.0  0.0  0.9    0.1   13.9   0  95 c6t1d0
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss