[zfs-discuss] SXCE, ZFS root, b101 - b103, weird zfs list ?

2008-12-09 Thread Turanga Leela
I've been playing with liveupgrade for the first time. (See 
http://www.opensolaris.org/jive/thread.jspa?messageID=315231). I've at least 
got a workaround for that issue.

One strange thing I've noticed, however, is that after I luactivate the new 
environment (snv_103), the root pool snapshots that *used* to belong to 
rpool/ROOT/snv_101 now appear to belong to snv_103. Why on earth would this 
happen?

# zfs list -r -t all rpool
NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 13.7G  19.5G41K  /rpool
rpool/ROOT11.7G  19.5G18K  legacy
rpool/ROOT/snv_10193.5M  19.5G  6.01G  /
rpool/ROOT/snv_10311.6G  19.5G  6.13G  /
rpool/ROOT/[EMAIL PROTECTED]   287M  -  6.20G  -
rpool/ROOT/[EMAIL PROTECTED] 66.7M  -  5.99G  -
rpool/ROOT/[EMAIL PROTECTED]  28.2M  -  6.00G  -
rpool/ROOT/[EMAIL PROTECTED]31.6M  -  6.00G  -
rpool/dump1.00G  19.5G  1.00G  -
rpool/swap   1G  20.5G16K  -
# 

Those snapshots were all taken over snv_101, the final snapshot being [EMAIL 
PROTECTED] which was then cloned to create rpool/ROOT/snv_103.

So any idea why this is happening?

# df |grep rpool
/  (rpool/ROOT/snv_103):40897072 blocks 40897072 files
/rpool (rpool ):40897072 blocks 40897072 files
#
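
For what it's worth, this looks like the side effect of zfs promote: promoting a clone reparents the origin's snapshots (up to the clone point) onto the clone, and as I understand it live upgrade promotes the new BE when it is activated. A rough sketch on a throwaway file-backed pool (pool and dataset names here are made up; needs root on a ZFS-capable box):

```shell
# Hypothetical reproduction on a scratch pool -- not your rpool!
mkfile 100m /tmp/vdev
zpool create demo /tmp/vdev
zfs create demo/a
zfs snapshot demo/a@s1
zfs clone demo/a@s1 demo/b

zfs list -t snapshot     # s1 is listed as demo/a@s1
zfs promote demo/b
zfs list -t snapshot     # s1 is now listed as demo/b@s1

zpool destroy demo
rm /tmp/vdev
```

If that is what happened, nothing was lost; the snapshots simply hang off the new BE's dataset now.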
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-11-02 Thread Turanga Leela
[b]Set up for test:[/b]

(I picked v8 for no particular reason: this Linux install can speak zfs v3 
and zpool v13, as can our Solaris b101 installs, but some of the older Solaris 
boxes we have can't, I guess... so I picked 8. I want to use this with my little 
500gb external ZFS disk. But first, some testing!)
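
(If you want to pick the version less arbitrarily, each install can list the on-disk versions it understands; the highest version common to all the boxes is the one to pin the pool to. A quick check, output elided:)

```shell
# List every pool version this install supports, with a one-line
# description of the features each version adds.
zpool upgrade -v
```
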

sdd is an external USB drive.

from dmesg:

usb-storage: device found at 11
usb-storage: waiting for device to settle before scanning
usb-storage: device scan complete
scsi 7:0:0:0: Direct-Access WDC WD50 00AAKB-00UKA0 PQ: 0 ANSI: 0
sd 7:0:0:0: [sdd] 976773168 512-byte hardware sectors (500108 MB)
sd 7:0:0:0: [sdd] Write Protect is off
sd 7:0:0:0: [sdd] Mode Sense: 33 00 00 00
sd 7:0:0:0: [sdd] Assuming drive cache: write through
sd 7:0:0:0: [sdd] 976773168 512-byte hardware sectors (500108 MB)
sd 7:0:0:0: [sdd] Write Protect is off
sd 7:0:0:0: [sdd] Mode Sense: 33 00 00 00
sd 7:0:0:0: [sdd] Assuming drive cache: write through
 sdd: sdd1
sd 7:0:0:0: [sdd] Attached SCSI disk
sd 7:0:0:0: Attached scsi generic sg4 type 0


ganymede:~# uname -a
Linux ganymede 2.6.27.4-51.fc10.x86_64 #1 SMP Sun Oct 26 20:40:00 EDT 2008 
x86_64 x86_64 x86_64 GNU/Linux
ganymede:~# zpool create -f external -o version=8 sdd
ganymede:~# zpool status
  pool: external
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
externalONLINE   0 0 0
  sdd   ONLINE   0 0 0

errors: No known data errors
ganymede:~# zfs set compression=gzip-9 external
ganymede:~# cd /external/
ganymede:/external# 
ganymede:/external# ls -al
total 6
drwxr-xr-x  2 root root2 2008-10-31 12:20 .
drwxr-xr-x 26 root root 4096 2008-10-31 12:20 ..
ganymede:/external# touch testfile
ganymede:/external# ln -s testfile a_symlink
ganymede:/external# ln -s /to/some/random/location/to/see/what/happens 
another_symlink
ganymede:/external# ls -al
total 7
drwxr-xr-x  2 root root5 2008-10-31 12:25 .
drwxr-xr-x 26 root root 4096 2008-10-31 12:20 ..
lrwxrwxrwx  1 root root   44 2008-10-31 12:25 another_symlink -> 
/to/some/random/location/to/see/what/happens
lrwxrwxrwx  1 root root8 2008-10-31 12:24 a_symlink -> testfile
-rw-rw-rw-  1 root root0 2008-10-31 12:24 testfile
ganymede:/external# zfs snapshot [EMAIL PROTECTED]
ganymede:/external# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
external 64K   457G19K  /external
[EMAIL PROTECTED]  0  -19K  -
ganymede:/external# zpool status
  pool: external
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
externalONLINE   0 0 0
  sdd   ONLINE   0 0 0

errors: No known data errors
ganymede:/external# 
ganymede:/external# cd /
ganymede:/# zpool export external
ganymede:/# 

[b]Now we boot the b99 livecd, import the pool, and look at it:[/b] (this is 
the same machine as the Linux box, i.e. ganymede)

[EMAIL PROTECTED]:~$ pfexec zpool import
  pool: external
id: 6611776765231431925
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:

externalONLINE
  c3t0d0p0  ONLINE
[EMAIL PROTECTED]:~$ pfexec zpool import external
[EMAIL PROTECTED]:~$ ls -al /external/
total 4
drwxr-xr-x  2 root root   5 2008-10-30 18:55 .
drwxr-xr-x 24 root root 512 2008-10-30 19:22 ..
lrwxrwxrwx  1 root root  44 2008-10-30 18:55 another_symlink -> 
/to/some/random/location/to/see/what/happens
lrwxrwxrwx  1 root root   8 2008-10-30 18:54 a_symlink -> testfile
-rw-rw-rw-  1 root root   0 2008-10-30 18:54 testfile
[EMAIL PROTECTED]:~$ pfexec zpool scrub external
[EMAIL PROTECTED]:~$ pfexec zpool status
  pool: external
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: scrub completed after 0h0m with 0 errors on Thu Oct 30 19:23:37 2008
config:

NAMESTATE READ WRITE CKSUM
externalONLINE   0 0 0
  c3t0d0p0  ONLINE   0 0 0

errors: No known data errors
[EMAIL PROTECTED]:~$ 

Re: [zfs-discuss] Lost Disk Space

2008-11-02 Thread Turanga Leela
I guess difficult questions go unanswered :(


Re: [zfs-discuss] Lost Disk Space

2008-10-29 Thread Turanga Leela
 No takers? :)
 
 benr.

I'm quite curious about finding out about this too, to be honest :)

And it's not just ZFS on Solaris, because I've filled up and imported pools into 
ZFS-FUSE 0.5.0 (which is based on the latest ZFS code) on Linux, and on FreeBSD 
too.


Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-29 Thread Turanga Leela
 Example is:
 
 [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
 lrwxrwxrwx   1 root root  15 Oct 13 14:35
 /data/zones/testfs/root/etc/services ->
 ./inet/services
 
 [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
 lrwxrwxrwx   1 root root  15 Oct 13 14:35
 /data/zones/testfs/root/etc/services ->
 s/teni/.ervices

Ouch, that's a bad one.

I downloaded and burnt b101 to DVD for x86 and SPARC; I'm going to install them 
tomorrow at work and try moving a pool between them to see what happens...
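
Out of curiosity, the garbling pattern looks like a 64-bit byte-order mix-up: reversing the first 8-byte word of the real target ./inet/services reproduces the corrupted prefix exactly (the tail presumably differs because of how the remainder is padded or terminated on disk). A quick sketch, nothing authoritative:

```python
def swap_words(data: bytes, word: int = 8) -> bytes:
    """Reverse the byte order within each word-sized chunk,
    as a big-/little-endian mismatch on 64-bit words would."""
    return b"".join(data[i:i + word][::-1] for i in range(0, len(data), word))

target = b"./inet/services"
print(swap_words(target)[:8])  # b's/teni/.' -- matches the garbled prefix
```

That the damage lines up on an 8-byte boundary is what makes me suspect an endianness bug in however the symlink target is stored, rather than random corruption.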