[zfs-discuss] Downgrading a zpool

2008-11-06 Thread Chris Ridd
I probably need to downgrade a machine from 10u5 to 10u3. The zpool on  
u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.

Will this pool automatically import when I downgrade the OS?

Assuming I'm not that lucky, can I use 10u5's zfs send to take a  
backup of the filesystems, and zfs receive on 10u3 to restore them?

Cheers,

Chris


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread Mattias Pantzare
 Planning to stick in a 160-gig Samsung drive and use it for lightweight 
 household server.  Probably some Samba usage, and a tiny bit of Apache  
 RADIUS.   I don't need it to be super-fast, but slow as watching paint dry 
 won't

 You know that you need a minimum of 2 disks to form a (mirrored) pool
 with ZFS?  A pool with no redundancy is not a good idea!

My pools with no redundancy are working just fine. Redundancy is better,
but you can certainly run without it.  You should do backups in all
cases.


 work either.   Just curious if anyone else has tried something similar 
 everything I  read says ZFS wants 1-gig RAM but don't say what size of 
 penalty I would pay
 for having less.  I could run Linux on it of course but now prefer to remain 
 free of  the tyranny of fsck.

 I  don't think that there is enough CPU horse-power on this platform
 to run OpenSolaris - and you need approx 768MB (3/4 of a GB) of RAM
 just to install it.  After that OpenSolaris will only increase in size
 over time.  To try to run it as a ZFS server would be madness -
 worse than watching paint dry.

I don't know about the CPU, but 1GB of RAM on a home server works fine.
I even have a 256MB Debian in VirtualBox on my server with 1GB of RAM.

Just turn X11 off. (/usr/dt/bin/dtconfig -d)

The installation has a higher RAM requirement than the installed
system, as you can't use swap during the installation.
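If RAM really is tight, the usual knob at the time is to cap the ZFS ARC via
/etc/system (a sketch only - the 256MB value below is purely illustrative):

* Cap the ZFS ARC at 256MB (the value is in bytes); takes effect after a reboot
set zfs:zfs_arc_max = 0x10000000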

Before ZFS, Solaris had improved its RAM usage with every release.

Workstations are a different matter.


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread Jonathan Hogg
On 6 Nov 2008, at 04:09, Vincent Fox wrote:

 According to the slides I have seen, a ZFS filesystem even on a  
 single disk can handle massive amounts of sector failure before it  
 becomes unusable.   I seem to recall it said 1/8th of the disk?  So  
 even on a single disk the redundancy in the metadata is valuable.   
 And if I don't have really very much data I can set copies=2 so I  
 have better protection for the data as well.

 My goal is a compact low-powered and low-maintenance widget.   
 Eliminating the chance of fsck is always a good thing now that I  
 have tasted ZFS.

In my personal experience, disks are more likely to fail completely  
than suffer from small sector failures. But don't get me wrong,  
provided you have a good backup strategy and can afford the downtime  
of replacing the disk and restoring, then ZFS is still a great  
filesystem to use for a single disk.
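For reference, the copies=2 setting mentioned above is just a per-dataset
property - a minimal sketch, with a hypothetical pool/dataset name:

  zfs set copies=2 tank/home
  zfs get copies tank/home

Note that it only applies to data written after the property is set, and it
roughly doubles the space consumed by that dataset.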

Don't be put off. Many of the people on this list are running
multi-terabyte enterprise solutions and are unable to think in terms of
non-redundant, small numbers of gigabytes :-)

 I'm going to try and see if Nevada will even install when it  
 arrives, and report back.  Perhaps BSD is another option.  If not I  
 will fall back to Ubuntu.

I have FreeBSD and ZFS working fine(*) on a 1.8GHz VIA C7 (32bit)  
processor. Admittedly this is with 2GB of RAM, but I set aside 1GB for  
ARC and the machine is still showing 750MB free at the moment, so I'm  
sure it could run with 256MB of ARC in under 512MB. 1.8GHz is a fair  
bit faster than the Geode in the Fit-PC, but the C7 scales back to  
900MHz and my machine still runs acceptably at that speed (although I  
wouldn't want to buildworld with it).

I say, give it a go and see what happens. I'm sure I can still dimly  
recall a time when 500MHz/512MB was a kick-ass system...

Jonathan


(*) This machine can sustain 110MB/s off of the 4-disk RAIDZ1 set,  
which is substantially more than I can get over my 100Mb network.


Re: [zfs-discuss] zpool import hangs

2008-11-06 Thread Jens Hamisch
Hi Victor,

ok, not exactly ...


zdb -e -bb share fails on an assertion as follows:


zfsnix,root /root 16 # zdb -e -bb share

Traversing all blocks to verify nothing leaked ...
Assertion failed: space_map_load(msp->ms_map, zdb_space_map_ops, 0x0,
msp->ms_smo, spa->spa_meta_objset) == 0, file ../zdb.c, line 1416, function
zdb_leak_init
Abort



We're already running Solaris Express Build 101.


Jens


Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Ian Collins
Chris Ridd wrote:
 I probably need to downgrade a machine from 10u5 to 10u3. The zpool on  
 u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.

 Will this pool automatically import when I downgrade the OS?

   
No, you are out of luck.

 Assuming I'm not that lucky, can I use 10u5's zfs send to take a  
 backup of the filesystems, and zfs receive on 10u3 to restore them?

   
Same again. You can't receive a stream sent from a newer pool version.
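Before going further it's worth confirming what each side supports - a quick
sketch (these commands only report, they don't change anything):

  zpool upgrade -v     # list the pool versions this release of ZFS supports
  zpool upgrade        # list pools whose on-disk version is older than that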

-- 
Ian.



Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread Darren J Moffat
Al Hopper wrote:
Linux on it of course but now prefer to remain free of  the tyranny of 
fsck.
 
 I  don't think that there is enough CPU horse-power on this platform
 to run OpenSolaris - and you need approx 768MB (3/4 of a GB) of RAM
 just to install it.  After that OpenSolaris will only increase in size
 over time.  To try to run it as a ZFS server would be madness -
 worse than watching paint dry.

I run OpenSolaris as a webserver and ZFS-based NFS NAS on an old 900MHz
AMD Athlon with 640MB of RAM. For my needs it works just fine, and I've
never noticed any performance problems with the NFS serving - which is
mostly just serving up photos and music to Mac OS X over NFS.

-- 
Darren J Moffat


Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Chris Ridd

On 6 Nov 2008, at 09:53, Ian Collins wrote:

 Chris Ridd wrote:
 I probably need to downgrade a machine from 10u5 to 10u3. The zpool  
 on
 u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.

 Will this pool automatically import when I downgrade the OS?


 No you are out of luck.

I thought that might be the case :-)

 Assuming I'm not that lucky, can I use 10u5's zfs send to take a
 backup of the filesystems, and zfs receive on 10u3 to restore them?


 Same again. You can't receive a stream sent from a newer pool version.

That's a pity. I'm slightly surprised that the pool version affects  
the filesystem/snapshot stream format.

Cheers,

Chris


Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Mark J Musante
On Thu, 6 Nov 2008, Chris Ridd wrote:

 I probably need to downgrade a machine from 10u5 to 10u3. The zpool on 
 u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.

The only difference between a v4 pool and a v3 pool is that v4 added 
history ('zpool history pool').  I would expect a v3 pool to be able to 
receive a v4 send stream, as the zpool history is not part of 'zfs send'.

Trying this out on my nevada box:


-bash-3.2# zpool create -o version=3 v3pool c0t1d0s0
-bash-3.2# zpool create -o version=4 v4pool c0t2d0s0
-bash-3.2# zpool status
   pool: v3pool
  state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
 still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
 pool will no longer be accessible on older software versions.
  scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        v3pool      ONLINE       0     0     0
          c0t1d0s0  ONLINE       0     0     0

errors: No known data errors

   pool: v4pool
  state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
 still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
 pool will no longer be accessible on older software versions.
  scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        v4pool      ONLINE       0     0     0
          c0t2d0s0  ONLINE       0     0     0

errors: No known data errors
-bash-3.2# zfs create v4pool/foobar
-bash-3.2# mkfile 1m /v4pool/foobar/1meg
-bash-3.2# ls -l /v4pool/foobar/1meg
-rw--T   1 root root 1048576 Nov  6 05:53 /v4pool/foobar/1meg
-bash-3.2# zfs snapshot v4pool/[EMAIL PROTECTED]
-bash-3.2# zfs send v4pool/[EMAIL PROTECTED] | zfs recv v3pool/[EMAIL PROTECTED]
-bash-3.2# ls -l /v3pool/foobar
total 2053
-rw--T   1 root root 1048576 Nov  6 05:53 1meg
-bash-3.2# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
v3pool                 1.09M   134G    20K  /v3pool
v3pool/foobar          1.02M   134G  1.02M  /v3pool/foobar
v3pool/[EMAIL PROTECTED]     0      -  1.02M  -
v4pool                 1.09M   134G    20K  /v4pool
v4pool/foobar          1.02M   134G  1.02M  /v4pool/foobar
v4pool/[EMAIL PROTECTED]     0      -  1.02M  -
-bash-3.2# 


Works fine.


Regards,
markm


Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Chris Ridd

On 6 Nov 2008, at 10:19, Christian Vallo wrote:

 Hi Chris,

 i think there is no way to downgrade. I think you must copy/sync the  
 data from one pool (v4) to another pool (v3).

Darn. I've managed to push back on doing the downgrade for now anyway...

Cheers,

Chris


Re: [zfs-discuss] zfs free space

2008-11-06 Thread Robert Milkowski
Hello none,

Thursday, November 6, 2008, 2:52:53 AM, you wrote:

n Hi, I'm trying to get a status from zfs on where the free space in
n my zfs filesystem is. Its a RAIDZ2 pool on 4 x 320GB HDD. I have
n several snapshots and I've just deleted rougly 150GB worth of data
n I didn't need from the current filesystem. The non-snapshot data
n now only takes up 156GB but I can't see where the rest is in the snapshots.

n The USED for the main filesystem (storage/freddofrog) shows 357G
n and I would expect the deleted 150G or so of data to show up in the
n snapshots below, but it doesn't. A du -c -h on /storage/freddofrog
n shows 152G used, about the same as the REFER for storage/freddofrog below

n So how can I tell where the space is being used (which snapshots)?

n [EMAIL PROTECTED]:~$ zfs list
n NAME USED  AVAIL  REFER
n storage  357G   227G  28.4K
n [EMAIL PROTECTED] 0  -  28.4K  
n [EMAIL PROTECTED] 0  -  28.4K  
n storage/freddofrog   357G   227G   151G
n storage/[EMAIL PROTECTED]   4.26G  -   187G  
n storage/[EMAIL PROTECTED] 61.1M  -   206G  
n storage/[EMAIL PROTECTED]  773M  -   201G  
n storage/[EMAIL PROTECTED] 33.2M  -   192G  
n storage/[EMAIL PROTECTED]   62.6M  -   212G  
n storage/[EMAIL PROTECTED] 5.29G  -   217G

When multiple snapshots are pointing to the same data, then instead of
accounting that used space to an arbitrary snapshot it is accounted
against the filesystem. The used space you see for each snapshot is the
space unique to that snapshot. If you removed all snapshots you
should regain 357G-151G of disk space.

It's not an issue with ZFS snapshots; it's just that no one has a better
idea of how to account for space used by snapshots when multiple of them
are pointing to the same data.
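A per-snapshot view of that unique usage can be had with something along these
lines (a sketch; the dataset name is taken from the listing above):

  zfs list -r -t snapshot -o name,used,referenced storage/freddofrog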

-- 
Best regards,
 Robert Milkowskimailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Enda O'Connor
Hi
try and get the stack trace from the core
ie mdb core.vold.24978
::status
$C
$r

also run the same 3 mdb commands on the cpio core dump.

also if you could extract some data from the truss log, ie a few hundred 
lines before the first SIGBUS


Enda

On 11/06/08 01:25, Krzys wrote:
 This is so bizarre, I am unable to get past this problem. I thought I did not 
 have enough space on my hard drive (the new one), so I replaced it with a 72GB 
 drive, but I am still getting that bus error. Originally when I restarted my 
 server it did not want to boot, so I had to power it off and then back 
 on, and it then booted up. But I am constantly getting this Bus Error - 
 core dumped.
 
 Anyway, in my /var/crash I see hundreds of core.vold files and 3 
 core.cpio files. I would imagine the core.cpio ones are the direct 
 result of what I am probably experiencing.
 
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi
 Looks OK, some mounts left over from the previous failure.
 In regards to swap and dump on zpool you can set them
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance, of course above are only an example of how to do it.
 or create the zvols for rootpool/dump etc. before lucreate, in which case 
 it will take the swap and dump sizes you have preset.

 But I think we need to see the coredump/truss at this point to get an 
 idea of where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD and went through the upgrade process.
 My file system is set up as follows:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr"
 Filesystem            size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0      16G   7.2G   8.4G    47%    /
 swap                  8.3G   1.5M   8.3G     1%    /etc/svc/volatile
 /dev/dsk/c1t0d0s6      16G   8.7G   6.9G    56%    /usr
 /dev/dsk/c1t0d0s1      16G   2.5G    13G    17%    /var
 swap                  8.5G   229M   8.3G     3%    /tmp
 swap                  8.3G    40K   8.3G     1%    /var/run
 /dev/dsk/c1t0d0s7      78G   1.2G    76G     2%    /export/home
 rootpool               33G    19K    21G     1%    /rootpool
 rootpool/ROOT          33G    18K    21G     1%    /rootpool/ROOT
 rootpool/ROOT/zfsBE    33G    31M    21G     1%    /.alt.tmp.b-UUb.mnt
 /export/home           78G   1.2G    76G     2%    /.alt.tmp.b-UUb.mnt/export/home
 /rootpool              21G    19K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT         21G    18K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap                  8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/var/run
 swap                  8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 


 so I have /, /usr, /var and /export/home on that primary disk. The 
 original disk is 140GB and this new one is only 36GB, but the 
 primary disk is much less utilized, so the data should easily 
 fit on it.

 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did allocate 8GB to SWAP and 4GB to DUMP,
 for a total space needed of 31.6GB.
 It seems like the total available disk space on my disk should be 33.92GB,
 so it's quite close - the two numbers nearly meet. So to make sure, I 
 will change the disk to a 72GB one and will try again. I do not believe that I 
 need to match my main disk size of 146GB, as I am not using that much 
 disk space on it. But let me try this, and it might be why I am 
 getting this problem...



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, i.e. is /usr separate 
 from / and so on (maybe a df -k). You don't appear to have any zones 
 installed - just to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Krzys
Seems like the core.vold.* files are not created until I try to boot from zfsBE; 
just creating zfsBE only gets core.cpio created.



[10:29:48] @adas: /var/crash  mdb core.cpio.5545
Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]
 ::status
debugging core file of cpio (32-bit) from adas
file: /usr/bin/cpio
initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt
threading model: multi-threaded
status: process terminated by SIGBUS (Bus Error)
 $C
ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0)
ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8)
ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98)
ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1)
ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870)
ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400)
ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0)
 $r
%g0 = 0x %l0 = 0x
%g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28
%g2 = 0x00037fe0 %l2 = 0x2e2f2e2f
%g3 = 0x8000 %l3 = 0x03c8
%g4 = 0x %l4 = 0x2e2f2e2f
%g5 = 0x %l5 = 0x
%g6 = 0x %l6 = 0xdc00
%g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree
%o0 = 0x %i0 = 0x0030
%o1 = 0x %i1 = 0x
%o2 = 0x000e70c4 %i2 = 0x00039c28
%o3 = 0x %i3 = 0x00ff
%o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f
%o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x
%o6 = 0xffbfe5b0 %i6 = 0xffbfe610
%o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394
libc.so.1`malloc+0x4c

  %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc
ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2
%y = 0x
   %pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164
  %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128
   %sp = 0xffbfe5b0
   %fp = 0xffbfe610

  %wim = 0x
  %tbr = 0x








On Thu, 6 Nov 2008, Enda O'Connor wrote:

 Hi
 try and get the stack trace from the core
 ie mdb core.vold.24978
 ::status
 $C
 $r

 also run the same 3 mdb commands on the cpio core dump.

 also if you could extract some data from the truss log, ie a few hundred 
 lines before the first SIGBUS


 Enda

 On 11/06/08 01:25, Krzys wrote:
 This is so bizarre, I am unable to get past this problem. I thought I did not 
 have enough space on my hard drive (the new one), so I replaced it with a 72GB 
 drive, but I am still getting that bus error. Originally when I restarted my 
 server it did not want to boot, so I had to power it off and then back on, and 
 it then booted up. But I am constantly getting this Bus Error - core dumped.
 
 Anyway, in my /var/crash I see hundreds of core.vold files and 3 core.cpio 
 files. I would imagine the core.cpio ones are the direct result of 
 what I am probably experiencing.
 
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi
 Looks OK, some mounts left over from the previous failure.
 In regards to swap and dump on zpool you can set them
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap
 
 for instance, of course above are only an example of how to do it.
 or create the zvols for rootpool/dump etc. before lucreate, in which case it 
 will take the swap and dump sizes you have preset.
 
 But I think we need to see the coredump/truss at this point to get an idea 
 of where things went wrong.
 Enda
 
 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD and went through the upgrade process.
 My file system is set up as follows:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr"
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 

[zfs-discuss] ZFS and VTOC/EFI labelling mystery (was: ZFS on emcpower0a and labels)

2008-11-06 Thread David Magda
Answering myself because I've gotten things to work, but it's a mystery as
to why they're working (I have a Sun case number if anyone at Sun.com is
interested).

Steps:

   1. Try to create a pool on a pseudo-device:

# zpool create mypool emcpower0a

  This receives an I/O error (see previous message).

   2. Create a pool on the LUN using the traditional device name:

# zpool create mypool c1tFOO...

   3. Destroy the pool:

# zpool destroy mypool

   4. Go back to the EMC PowerPath pseudo-device and create the pool:

# zpool create mypool emcpower0a

  This now works.

The only difference I can see is that before the emcpower0a device had
slices that were numbered 0-7 according to format(1M). Now it has slices
0-6, and a slice numbered 8 that is reserved:

AVAILABLE DISK SELECTIONS:
   0. c0t0d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c0t1d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   2. c1t5006016030602568d0 DGC-RAID 5-0219-100.00GB
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL 
PROTECTED],0
   3. c1t5006016830602568d0 DGC-RAID 5-0219-100.00GB
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL 
PROTECTED],0
   4. emcpower0a DGC-RAID 5-0219-100.00GB
  /pseudo/[EMAIL PROTECTED]
Specify disk (enter its number): 4
selecting emcpower0a
[disk formatted]
FORMAT MENU:
[...]
format> p
PARTITION MENU:
[...]
partition> p
Current partition table (original):
Total disk sectors available: 209698782 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34      99.99GB       209698782
  1 unassigned    wm                 0           0                0
  2 unassigned    wm                 0           0                0
  3 unassigned    wm                 0           0                0
  4 unassigned    wm                 0           0                0
  5 unassigned    wm                 0           0                0
  6 unassigned    wm                 0           0                0
  8   reserved    wm         209698783      8.00MB       209715166

Also, if I select disks 2 or 3 in the format(1M) menu I get a warning that
the device is part of a ZFS pool and that I should see zpool(1M).

From the ZFS Administrators Guide, partition 8 seems to indicate that an
EFI label is now being used on the LUN. Furthermore the Admin Guide says:

 To use whole disks, the disks must be named using the standard Solaris
 convention, such as /dev/dsk/cXtXdXsX. Some third-party drivers use a
 different naming convention or place disks in a location other than the
 /dev/dsk directory. To use these disks, you must manually label the disk
 and provide a slice to ZFS.

I'm not manually labeling the disk, but things are now working on the
pseudo-device.

Is this expected behaviour? Is there a reason why ZFS cannot access the
pseudo-device in a raw manner, even though /dev/dsk/emcpower0a exists
(see truss(1) output in previous message)?

The Sun support guys deal more in the break-fix aspect of this, and not as
much as the why? part, but I asked them to follow up internally if they
can. I figured I'd post publicly in case a greater audience may find some
of this useful (and it'll be in the archives for anyone doing future
searches).

Thanks for any info.





Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread dick hoogendijk

Mattias Pantzare wrote:

 I even have a 256Mb debian in virtualbox on my server with 1Gb RAM.
 Just turn X11 off. (/usr/dt/bin/dtconfig -d)

And how would that make VirtualBox run?
Does it not need X?

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
++ http://nagual.nl/ | SunOS 10u6 10/08 ++



Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Enda O'Connor
Hi
Weird - almost like some kind of memory corruption.

Could I see the upgrade logs that got you to u6, i.e.
/var/sadm/system/logs/upgrade_log
for the u6 environment?
What kind of upgrade did you do - liveupgrade, text-based, etc.?

Enda

On 11/06/08 15:41, Krzys wrote:
 Seems like the core.vold.* files are not created until I try to boot from zfsBE; 
 just creating zfsBE only gets core.cpio created.
 
 
 
 [10:29:48] @adas: /var/crash  mdb core.cpio.5545
 Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]
 ::status
 debugging core file of cpio (32-bit) from adas
 file: /usr/bin/cpio
 initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt
 threading model: multi-threaded
 status: process terminated by SIGBUS (Bus Error)
 $C
 ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0)
 ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8)
 ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98)
 ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1)
 ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870)
 ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400)
 ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0)
 $r
 %g0 = 0x %l0 = 0x
 %g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28
 %g2 = 0x00037fe0 %l2 = 0x2e2f2e2f
 %g3 = 0x8000 %l3 = 0x03c8
 %g4 = 0x %l4 = 0x2e2f2e2f
 %g5 = 0x %l5 = 0x
 %g6 = 0x %l6 = 0xdc00
 %g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree
 %o0 = 0x %i0 = 0x0030
 %o1 = 0x %i1 = 0x
 %o2 = 0x000e70c4 %i2 = 0x00039c28
 %o3 = 0x %i3 = 0x00ff
 %o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f
 %o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x
 %o6 = 0xffbfe5b0 %i6 = 0xffbfe610
 %o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394
 libc.so.1`malloc+0x4c
 
   %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc
 ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2
 %y = 0x
%pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164
   %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128
%sp = 0xffbfe5b0
%fp = 0xffbfe610
 
   %wim = 0x
   %tbr = 0x
 
 
 
 
 
 
 
 On Thu, 6 Nov 2008, Enda O'Connor wrote:
 
 Hi
 try and get the stack trace from the core
 ie mdb core.vold.24978
 ::status
 $C
 $r

 also run the same 3 mdb commands on the cpio core dump.

 also if you could extract some data from the truss log, ie a few hundred 
 lines before the first SIGBUS


 Enda

 On 11/06/08 01:25, Krzys wrote:
 This is so bizarre, I am unable to get past this problem. I thought I did not 
 have enough space on my hard drive (the new one), so I replaced it with a 72GB 
 drive, but I am still getting that bus error. Originally when I restarted my 
 server it did not want to boot, so I had to power it off and then back on, and 
 it then booted up. But I am constantly getting this Bus Error - core dumped.

 Anyway, in my /var/crash I see hundreds of core.vold files and 3 core.cpio 
 files. I would imagine the core.cpio ones are the direct result of 
 what I am probably experiencing.

 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 Looks OK, some mounts left over from the previous failure.
 In regards to swap and dump on zpool you can set them
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance, of course above are only an example of how to do it.
 or create the zvols for rootpool/dump etc. before lucreate, in which case it 
 will take the swap and dump sizes you have preset.

 But I think we need to see the coredump/truss at this point to get an idea 
 of where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD and went through the upgrade process.
 My file system is set up as follows:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr"
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%   

[zfs-discuss] Disk space usage of zfs snapshots and filesystems - my math doesn't add up

2008-11-06 Thread John.Stewart

We're running a Thumper with Solaris 10_u4 with 127112-11 kernel patch
in production as our mail CIFS/NFS file server. We have a big zpool
consisting of 6 raidz2 groups.
 
We have quite a few filesystems underneath that. We use Tim Foster's
automatic snapshot service, version 0.10, to do regular snapshots of all
filesystems.
 
Over the weekend, we hit 90% space usage (which, it seemed, also caused
some serious NFS and performance issues, but that's not why I'm
writing)... the reason I'm writing is that it seems we're missing some
terabytes of storage.
 
Here's how I gathered my data:
 
First, I gathered the used and referenced for all snapshots and
filesystems:

zfs get -Hpr used,referenced vol0 > vol0_all_used_referenced

Then I chunked this into four separate files:

grep snap vol0_all_used_referenced | grep used > vol0_snap_used
grep snap vol0_all_used_referenced | grep referenced > vol0_snap_referenced
grep -v snap vol0_all_used_referenced | grep used > vol0_fs_used
grep -v snap vol0_all_used_referenced | grep referenced > vol0_fs_referenced

I pulled these files into Excel. I summed all of the snapshot used
fields and got 0.75TB. The sum of all of the filesystem referenced
fields is 2.51TB, for a total of 3.26TB. However, the vol0 used number
shows 5.04TB.
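(As an aside, the same totals can be computed without Excel - a rough sketch;
with 'zfs get -Hp' the value is the third whitespace-separated column, in bytes:

  awk '{sum += $3} END {printf "%.2f GB\n", sum/1024/1024/1024}' vol0_snap_used

and similarly for the other three files.)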
 
Snapshot Used Total (GB) == 766.73
Filesystem Referenced Total (GB) == 2570.04
Total of Snap Used + FS Ref (GB) == 3336.76

Vol0 filesystem Used (GB) == 5159.35

Where is my missing disk space? Or am I misunderstanding something as
far as how this data is reported?
 
I've got an open case with Sun on this... but so far the tech has not
been able to explain either where this space has gone, or what I've done
wrong in my methodology to build this data.
 
Can anyone here shed light?
 
thank you!
 
johnS
 


Re: [zfs-discuss] after controller crash, 'one or more devices is currently unavailable'

2008-11-06 Thread David Champion
I have a feeling I pushed people away with a long message.  Let me
reduce my problem to one question.

 # zpool import -f z
 cannot import 'z': one or more devices is currently unavailable
 
 
 'zdb -l' shows four valid labels for each of these disks except for the
 new one.  Is this what unavailable means, in this case?

I have now faked up a label for the disk that didn't have one and
applied it with dd.

Can anyone say what unavailable means, given that all eight disks are
registered devices at the correct paths, are readable, and have labels?
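One low-risk thing to try is the import listing itself - a sketch:

  zpool import

With no pool argument it only scans and lists importable pools, showing the
reported state of each device without importing anything, which sometimes
narrows down which device the kernel is unhappy about.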

-- 
 -D.[EMAIL PROTECTED]NSITUniversity of Chicago


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread Tomas Ögren
On 06 November, 2008 - dick hoogendijk sent me these 0,4K bytes:

 
 Mattias Pantzare wrote:
 
  I even have a 256Mb debian in virtualbox on my server with 1Gb RAM.
  Just turn X11 off. (/usr/dt/bin/dtconfig -d)
 
 And how would that make VirtualBox run?
 Does it not need X?

There's a headless version, and you can RDP to it from another machine.
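For example (a sketch - the VM name is hypothetical):

  VBoxHeadless -startvm debian256 &

and then point an RDP client at the host.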

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


[zfs-discuss] zpool missing device data recovery ?

2008-11-06 Thread Milan Miker
Hi,
I've been running a zpool in plain disk (raid0) mode with two HDDs.
Now one HDD is damaged and I had to remove it, so zpool import now complains 
about a missing device. I KNOW that there was almost no data (a few MBs) on the 
damaged HDD, so I'm looking for a way to access/recover the data from the surviving HDD.
Is such a thing possible? I don't want to lose GBs of data that I physically 
have because of the fault of a disk holding a few MBs of data.


[zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Michael Schuster
all,

I've gotten myself into a fix I don't know how to resolve (and I can't 
reboot the machine, it's a build server we share):

$ zfs list -r tank/schuster
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank/schuster17.6G   655G  5.83G  /exportz/schuster
tank/schuster/ilb5.72G   655G  5.72G  /exportz/schuster/ilb
tank/schuster/ip_wput_local  6.06G   655G  6.06G 
/exportz/schuster/ip_wput_local

note the 2nd one.

$ ls -las /exportz/schuster/
total 20
4 drwxr-xr-x   5 schuster staff  5 Nov  6 09:38 .
4 drwxrwxrwt  15 ml37995  staff 15 Sep 28 05:21 ..
4 drwxr-xr-x   9 schuster staff 12 Nov  6 08:46 ilb_hg
4 drwxr-xr-x   8 schuster staff 12 Oct 31 10:07 ip_wput_local
4 drwxr-xr-x   9 schuster staff 11 Sep 11 13:06 old_ilb
$

oops, no ilb/ subdirectory.

$ zfs mount | grep schuster
tank/schuster   /exportz/schuster
tank/schuster/ilb   /exportz/schuster/ilb
tank/schuster/ip_wput_local /exportz/schuster/ip_wput_local
$  mount | grep schuster
/exportz/schuster on tank/schuster ...
/exportz/schuster/ilb on tank/schuster/ilb ...
/exportz/schuster/ip_wput_local on tank/schuster/ip_wput_local ...
$

I've tried creating an ilb subdir, as well as setting the mountpoint, all to 
no avail so far; zfs unmount also fails, even with -f. I've unshared the 
FS, still no luck, as with zfs rename.
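One quick, non-destructive check before anything else - see whether some
process is holding the mountpoint (a sketch; the path is from the listing above):

  fuser -c /exportz/schuster/ilb

If that prints PIDs, the unmount failures at least have an explanation.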

I don't want to zfs destroy tank/schuster/ilb before I've had a chance to 
check what's inside ...

this is snv_89, btw. zfs and zpool are at current revisions (3 and 10, resp.).

does anyone have any hints what I could do to solve this?

TIA
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


Re: [zfs-discuss] Lost Disk Space

2008-11-06 Thread Marcelo Leal
 A percentage of the total space is reserved for pool
 overhead and is not 
 allocatable, but shows up as available in zpool
 list.
 

 Something to change/show in the future?

--
 Leal
[http://www.posix.brte.com.br/blog]


Re: [zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Mark J Musante

Hi Michael,

Did you try doing an export/import of tank?

On Thu, 6 Nov 2008, Michael Schuster wrote:

 all,

 I've gotten myself into a fix I don't know how to resolve (and I can't
 reboot the machine, it's a build server we share):

 $ zfs list -r tank/schuster
 NAME  USED  AVAIL  REFER  MOUNTPOINT
 tank/schuster17.6G   655G  5.83G  /exportz/schuster
 tank/schuster/ilb5.72G   655G  5.72G  /exportz/schuster/ilb
 tank/schuster/ip_wput_local  6.06G   655G  6.06G
 /exportz/schuster/ip_wput_local

 note the 2nd one.

 $ ls -las /exportz/schuster/
 total 20
4 drwxr-xr-x   5 schuster staff  5 Nov  6 09:38 .
4 drwxrwxrwt  15 ml37995  staff 15 Sep 28 05:21 ..
4 drwxr-xr-x   9 schuster staff 12 Nov  6 08:46 ilb_hg
4 drwxr-xr-x   8 schuster staff 12 Oct 31 10:07 ip_wput_local
4 drwxr-xr-x   9 schuster staff 11 Sep 11 13:06 old_ilb
 $

 oops, no ilb/ subdirectory.

 $ zfs mount | grep schuster
 tank/schuster   /exportz/schuster
 tank/schuster/ilb   /exportz/schuster/ilb
 tank/schuster/ip_wput_local /exportz/schuster/ip_wput_local
 $  mount | grep schuster
 /exportz/schuster on tank/schuster ...
 /exportz/schuster/ilb on tank/schuster/ilb ...
 /exportz/schuster/ip_wput_local on tank/schuster/ip_wput_local ...
 $

 I've tried creating an ilb subdir, as well as set mountpointing, all to
 no avail, so far; zfs unmount also fails, even with -f. I've unshared the
 FS, still no luck, as with zfs rename.

 I don't want to zfs destroy tank/schuster/ilb before I've had a chance to
 check what's inside ...

 this is snv_89, btw. zfs and zpool are at current revisions (3 and 10, resp.).

 does anyone have any hints what I could do to solve this?

 TIA
 Michael
 --
 Michael Schusterhttp://blogs.sun.com/recursion
 Recursion, n.: see 'Recursion'



Regards,
markm


Re: [zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Michael Schuster
Mark J Musante wrote:
 
 Hi Michael,
 
 Did you try doing an export/import of tank?

no - that would make it unavailable for use right? I don't think I can 
(easily) do that during production hours.

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Krzys
I think I did figure it out.

It is an issue with the cpio that is on my system... I am not sure, but I did copy 
cpio from my Solaris 9 SPARC server, and it seems like lucreate completed without 
a bus error, and the system booted up using the root zpool.

The original cpio that I have on all of my Solaris 10 U6 boxes is:
[11:04:16] @adas: /usr/bin  ls -la cpi*
-r-xr-xr-x   1 root     bin        85856 May 21 18:48 cpio

Then I copied the Solaris 9 cpio to my system:
-r-xr-xr-x   1 root     root       76956 May 14 15:46 cpio.3_sol9

So that old cpio seems to work; the new cpio on Solaris 10 U6 does not work. :(


[11:03:49] [EMAIL PROTECTED]: /root  zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
rootpool       12.0G  54.9G    19K  /rootpool
rootpool/ROOT    18K  54.9G    18K  /rootpool/ROOT
rootpool/dump     4G  58.9G    16K  -
rootpool/swap  8.00G  62.9G    16K  -
[11:04:06] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
Comparing source boot environment ufsBE file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
cannot get BE ID.
Creating configuration for boot environment zfsBE.
Source boot environment is ufsBE.
Creating boot environment zfsBE.
Creating file systems on boot environment zfsBE.
Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
Populating file systems on boot environment zfsBE.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point /.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment zfsBE.
Creating compare database for file system /var.
Creating compare database for file system /usr.
Creating compare database for file system /.
Updating compare databases on boot environment zfsBE.
Making boot environment zfsBE bootable.
Creating boot_archive for /.alt.tmp.b-tvg.mnt
updating /.alt.tmp.b-tvg.mnt/platform/sun4u/boot_archive
Population of boot environment zfsBE successful.
Creation of boot environment zfsBE successful.
[12:45:04] [EMAIL PROTECTED]: /root  lustatus
Boot Environment   Is   Active ActiveCanCopy
Name   Complete NowOn Reboot Delete Status
--  -- - -- --
ufsBE  yes  yesyes   no -
zfsBE  yes  no noyes-
[13:14:57] [EMAIL PROTECTED]: /root 
[13:14:59] [EMAIL PROTECTED]: /root  zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
rootpool             24.3G  42.6G    19K  /rootpool
rootpool/ROOT        12.3G  42.6G    18K  /rootpool/ROOT
rootpool/ROOT/zfsBE  12.3G  42.6G  12.3G  /
rootpool/dump           4G  46.6G    16K  -
rootpool/swap        8.00G  50.6G    16K  -
[13:15:25] [EMAIL PROTECTED]: /root  luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment 
zfsBE.


**

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

  setenv boot-device /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0:a

3. Boot to the original boot environment by typing:

  boot

**

Modifying boot archive service
Activation of boot environment zfsBE successful.
[13:16:57] [EMAIL PROTECTED]: /root  init 6
stopping NetWorker daemons:
  nsr_shutdown -q
svc.startd: The system is coming down.  Please wait.
svc.startd: 90 system services are now being stopped.
Nov  6 13:18:09 adas syslogd: going down on signal 15
umount: /appl busy
svc.startd: The system is down.
syncing file systems... done
rebooting...

SC Alert: Host System has Reset
Probing system devices
Probing memory
Probing I/O buses

Sun Fire V210, No Keyboard
Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.



Rebooting with command: boot
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args:
SunOS Release 5.10 Version Generic_137137-09 

Re: [zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Johan Hartzenberg
On Thu, Nov 6, 2008 at 8:22 PM, Michael Schuster
[EMAIL PROTECTED]wrote:

 Mark J Musante wrote:
 
  Hi Michael,
 
  Did you try doing an export/import of tank?

 no - that would make it unavailable for use right? I don't think I can
 (easily) do that during production hours.


Can you please post the output from:
zfs get all tank/schuster/ilb





-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


[zfs-discuss] zfs version number

2008-11-06 Thread Francois Dion
First,

Congrats to whoever/everybody was involved getting zfs booting in solaris 10 
u6. This is killer.

Second, somebody who has admin access to this page here:
http://www.opensolaris.org/os/community/zfs/version/10/

Solaris 10 U6 is not mentioned. It is mentioned here:
http://www.opensolaris.org/os/community/zfs/version/9/

But U6 is ZFS v 10.

Thanks!
Francois


Re: [zfs-discuss] Disk space usage of zfs snapshots and filesystems -my math doesn't add up

2008-11-06 Thread John.Stewart

Mark - thank you very much for your explanation and example. I really
appreciate it. Any ranting below is directed at ZFS, not you. =)

  Snapshot Used Total (GB) == 766.73
  Filesystem Referenced Total (GB) == 2570.04 Total of Snap Used + FS 
  Ref (GB) == 3336.76
 
  Vol0 filesystem Used (GB) == 5159.35
 
 The sums don't really work that way.

I followed your recipe, except using a 70MB tarfile instead of a 1GB
file:

 Consider this scenario:
 - I create a dataset and copy 1g into it

titan[root]:/ zfs create vol0/testfs

titan[root]:/ zfs list vol0/testfs
NAME  USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs  52.3K  1.58T  52.3K  /vol0/testfs

titan[root]:/ cd /vol0/testfs
titan[root]:/vol0/testfs tar cvf test.tar /groups/eng_staff

titan[root]:/vol0/testfs zfs list vol0/testfs
NAME  USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs  72.4M  1.58T  72.4M  /vol0/testfs

 - I take a snapshot of it, @snap1

titan[root]:/vol0/testfs zfs snapshot vol0/[EMAIL PROTECTED]

titan[root]:/vol0/testfs zfs list -r vol0/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs    72.4M  1.58T  72.4M  /vol0/testfs
vol0/[EMAIL PROTECTED]  0  -  72.4M  -

 - I copy 1g more data into the datset

titan[root]:/vol0/testfs tar cvf test2.tar /groups/eng_staff

titan[root]:/vol0/testfs zfs list -r vol0/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs 145M  1.58T   145M  /vol0/testfs
vol0/[EMAIL PROTECTED]  48.0K  -  72.4M  -

titan[root]:/vol0/testfs du -sh .
 145M   .

 - I take another snapshot, @snap2

titan[root]:/vol0/testfs zfs snapshot vol0/[EMAIL PROTECTED]

titan[root]:/vol0/testfs zfs list -r vol0/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs 145M  1.58T   145M  /vol0/testfs
vol0/[EMAIL PROTECTED]  48.0K  -  72.4M  -
vol0/[EMAIL PROTECTED]  0  -   145M  -

 - I delete the two 1g files.

titan[root]:/vol0/testfs rm test.tar test2.tar
titan[root]:/vol0/testfs zfs list -r vol0/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs 145M  1.58T   145M  /vol0/testfs
vol0/[EMAIL PROTECTED]  48.0K  -  72.4M  -
vol0/[EMAIL PROTECTED]  0  -   145M  -

After a minute:

titan[root]:/vol0/testfs zfs list -r vol0/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
vol0/testfs 145M  1.58T  52.3K  /vol0/testfs
vol0/[EMAIL PROTECTED]  48.0K  -  72.4M  -
vol0/[EMAIL PROTECTED]  72.4M  -   145M  -

 
 What's left on the system?  The first file is available via 
 both the snapshots, and the second file will be available via 
 the second snapshot only.  In other words, @snap1 will have a 
 'refer' value of 1g, and @snap2 will have a 'refer' value of 
 2g.  The dataset itself will only 'refer' to the overhead, 
 but will have a 'used' value of 2g.

And you're completely correct!

 However, the 'used' values of @snap1 and @snap2 will only 
 contain the deltas for the snapshots.  @snap1 will contain 
 just the filesystem metadata for the 'used' - around 16-20k, 
 and @snap2 will contain the metadata plus the second 1g file.
 
 So, crunching the numbers using the method outlined above, 
 the snapshot used total is approx 1g, and the filesystem 
 refer total is 16-20k.  These don't add up to the amount of 
 data still being consumed by the pool (the two 1g files), 
 because used  refer are tracking different pieces of information.

Right, so I follow everything you have said here... but I guess I'm
still at a loss.

I think based on this, there really isn't a good way to know how much
space is being consumed by snapshots? 

The only way I can think of based on this is to use zfs list or zfs
get to get the referenced numbers, ONLY for filesystems (annoyingly,
zfs get doesn't seem to have a -t flag). Then I have to add up all of
the referenced numbers.

Only THEN can I subtract that number from the used number for the
entire volume to find out how much is being consumed by snapshots.
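As a stopgap, that subtraction can be scripted - a rough sketch along the lines
described above (pool name hypothetical; snapshots are dropped by filtering on
the '@' in their names, since as noted zfs get has no -t flag there):

  # bytes referenced by filesystems only
  zfs get -Hpr referenced vol0 | grep -v '@' | awk '{fs += $3} END {print fs}'
  # total bytes used by the whole tree
  zfs get -Hp used vol0 | awk '{print $3}'

The difference between the two is an estimate of what the snapshots are holding.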

Is there any other way?

Then back to the issue at hand... which was the point of this whole
exercise:

How do I figure out how much disk space is being used by a given
snapshot, or group of snapshots?

This all of a sudden feels like a huge limitation of ZFS. I'm already
dealing with the annoyance of not having quotas... now I feel more at
sea without a rudder given that I can't even tell what disk space is
being used by my snapshots!

thanks

johnS


Re: [zfs-discuss] zfs version number

2008-11-06 Thread Lori Alt

Thanks for pointing this out.  This has now been corrected.

Lori

Francois Dion wrote:


First,

Congrats to whoever/everybody was involved getting zfs booting in 
solaris 10 u6. This is killer.


Second, somebody who has admin access to this page here:
http://www.opensolaris.org/os/community/zfs/version/10/

Solaris 10 U6 is not mentioned. It is mentioned here:
http://www.opensolaris.org/os/community/zfs/version/9/

But U6 is ZFS v 10.

Thanks!
Francois





Re: [zfs-discuss] Disk space usage of zfs snapshots and filesystems -my math doesn't add up

2008-11-06 Thread none
Hi,
My problem is very similar. I have a bunch of snapshots referring to old data 
which has been deleted from the current filesystem and I'd like to find out 
which snapshots refer to how much data, and how they are shared amongst the 
snapshots and the current filesystem. This is so I can work out which snapshots 
I'd need to delete to regain free disk space. Ie, trying not to delete 
snapshots that would only free up a few GB since they share most of their data 
with the current filesystem. The documentation doesn't seem to say anything 
about querying this information.

Thanks,
Mark


Re: [zfs-discuss] Disk space usage of zfs snapshots and filesystems-my math doesn't add up

2008-11-06 Thread Romain Chatelain
Hi John,

You should take a look at this :

http://www.opensolaris.org/os/community/zfs/version/13/

There are some improvements for this in zfs v13.

(Please excuse my poor English.)

Regards


[zfs-discuss] 'zfs recv' is very slow

2008-11-06 Thread River Tarnell

hi,

i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6).  i'm
using 'zfs send -i' to replicate changes on A to B.  however, the 'zfs recv' on
B is running extremely slowly.  if i run the zfs send on A and redirect output
to a file, it sends at 2MB/sec.  but when i use 'zfs send ... | ssh B zfs
recv', the speed drops to 200KB/sec.  according to iostat, B (which is
otherwise idle) is doing ~20MB/sec of disk reads, and very little writing.

i don't believe the problem is ssh, as the systems are on the same LAN, and
running 'tar' over ssh runs much faster (20MB/sec or more).

is this slowness normal?  is there any way to improve it?  (the idea here is to
use B as a backup of A, but if i can only replicate at 200KB/s, it's not going
to be able to keep up with the load...)

both systems are X4500s with 16GB ram, 48 SATA disks and 4 2.8GHz cores.

thanks,
river.


Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-06 Thread Ian Collins
On Fri 07/11/08 12:09 , River Tarnell [EMAIL PROTECTED] sent:
 
 hi,
 
 i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6).  i'm
 using 'zfs send -i' to replicate changes on A to B.  however, the 'zfs recv' on
 B is running extremely slowly.  if i run the zfs send on A and redirect output
 to a file, it sends at 2MB/sec.  but when i use 'zfs send ... | ssh B zfs
 recv', the speed drops to 200KB/sec.  according to iostat, B (which is
 otherwise idle) is doing ~20MB/sec of disk reads, and very little writing.
 
 i don't believe the problem is ssh, as the systems are on the same LAN, and
 running 'tar' over ssh runs much faster (20MB/sec or more).
 
 is this slowness normal?  is there any way to improve it?  (the idea here is to
 use B as a backup of A, but if i can only replicate at 200KB/s, it's not going
 to be able to keep up with the load...)
 
That's very slow.  What's the nature of your data?

I'm currently replicating data between an x4500 and an x4540 and I see about 
50% of ftp transfer speed for zfs send/receive (about 60GB/hour).

Time each phase (send to a file, copy the file to B and receive from the file). 
 When I tried this on a filesystem with a range of file sizes, I had about 30% 
of the total transfer time in send, 50% in copy and 20% in receive.
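
Something along these lines, with made-up dataset and host names (and enough
scratch space for the intermediate file):

# On A: time the send phase into a scratch file
time zfs send -i tank/fs@snap1 tank/fs@snap2 > /scratch/incr.stream

# Time the copy phase over to B
time scp /scratch/incr.stream B:/scratch/incr.stream

# On B: time the receive phase from the file
time zfs recv tank/fs < /scratch/incr.stream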

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk space usage of zfs snapshots and filesystems-my math doesn't add up

2008-11-06 Thread none
Hi,
I had a look at the new additions in zpool version 13, but I still don't think 
they will give me the information I was after.
Here is what I have gathered; can someone correct me if I'm wrong?

1. The USED property for a snapshot only includes the data that is UNIQUE to 
that snapshot.
2. The USED property for a filesystem includes all data that is SHARED amongst 
two or more of either the filesystem itself or its child snapshots.
3. There is no way to tell how much data would be freed by deleting a 
particular set of (one or more) snapshots, without actually deleting that set 
of snapshots. Or in other words, there is no way to tell how much data is 
shared amongst a particular set of snapshots but not shared with the current 
filesystem.

Maybe an incremental zfs send could be used, counting the number of bytes 
produced, but that only works between two snapshots and is awkward.
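
For what it's worth, a crude version of that idea (made-up dataset and snapshot
names; the stream is discarded, so it costs time and I/O but no disk space, and
it also counts some stream metadata):

zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c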
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-06 Thread River Tarnell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Ian Collins:
 That's very slow.  What's the nature of your data?
 
mainly two sets of mid-sized files; one of 200KB-2MB in size and the other under
50KB.  they are organised into subdirectories, A/B/C/file.  each directory
has 18,000-25,000 files.  total data size is around 2.5TB.

hm, something changed while i was writing this mail: now the transfer is
running at 2MB/sec, and the read i/o has disappeared.  that's still slower than
i'd expect, but an improvement.

 Time each phase (send to a file, copy the file to B and receive from the 
 file).  When I tried this on a filesystem with a range of file sizes, I had 
 about 30% of the total transfer time in send, 50% in copy and 20% in receive.

i'd rather not interrupt the current send, as it's quite large.  once it's
finished, i'll test with smaller changes...

- river.
-BEGIN PGP SIGNATURE-

iD8DBQFJE4mXIXd7fCuc5vIRAv0/AJoCRtMBN1/WD7zVVRzV2n4xeqBvyACeLNL/
rLB1iHlu4xZdUPSiNj/iWl4=
=+F7d
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-06 Thread Brent Jones
On Thu, Nov 6, 2008 at 4:19 PM, River Tarnell
[EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Ian Collins:
 That's very slow.  What's the nature of your data?

 mainly two sets of mid-sized files; one of 200KB-2MB in size and other under
 50KB.  they are organised into subdirectories, A/B/C/file.  each directory
 has 18,000-25,000 files.  total data size is around 2.5TB.

 hm, something changed while i was writing this mail: now the transfer is
 running at 2MB/sec, and the read i/o has disappeared.  that's still slower 
 than
 i'd expect, but an improvement.

 Time each phase (send to a file, copy the file to B and receive from the 
 file).  When I tried this on a filesystem with a range of file sizes, I had 
 about 30% of the total transfer time in send, 50% in copy and 20% in receive.

 i'd rather not interrupt the current send, as it's quite large.  once it's
 finished, i'll test with smaller changes...

- river.
 -BEGIN PGP SIGNATURE-

 iD8DBQFJE4mXIXd7fCuc5vIRAv0/AJoCRtMBN1/WD7zVVRzV2n4xeqBvyACeLNL/
 rLB1iHlu4xZdUPSiNj/iWl4=
 =+F7d
 -END PGP SIGNATURE-
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


There have been a couple of threads about this now, tracked under these bug IDs:

6333409
6418042
66104157

Check those if you want to see the status.
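
In the meantime, putting a large buffer between send and recv sometimes helps
hide the stalls. A rough sketch, assuming mbuffer is installed on both hosts
(it is not part of the base install) and that the port and dataset names here
are just examples:

# On B (receiver): listen on a TCP port with a 512MB buffer
mbuffer -I 9090 -s 128k -m 512M | zfs recv tank/backup

# On A (sender): stream the incremental through mbuffer instead of ssh
zfs send -i tank/fs@snap1 tank/fs@snap2 | mbuffer -O B:9090 -s 128k -m 512M

Note this sends the stream unencrypted, so it is only appropriate on a trusted
LAN.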

-- 
Brent Jones
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] copies set to greater than 1

2008-11-06 Thread Krzys
When the copies property is set to a value greater than 1, how does it work? Will 
it store the second copy of the data on a different disk, or on the same 
disk? Also, when this setting is changed at some point on a file system, will it 
make copies of existing data or just of new data that is written from now on?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] root zpool question

2008-11-06 Thread Krzys

Currently I have the following:

# zpool status
   pool: rootpool
  state: ONLINE
  scrub: none requested
config:

 NAMESTATE READ WRITE CKSUM
 rootpoolONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

errors: No known data errors
#

I would like to put the c1t0d0s0 disk in and set up mirroring for my root disk. I am 
just afraid that if I add the disk, instead of creating a mirror I will add 
it to the pool as a concat/stripe. How can I add a disk to 
this pool and have it mirrored instead of striped?

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copies set to greater than 1

2008-11-06 Thread Bob Netherton
On Thu, 2008-11-06 at 19:54 -0500, Krzys wrote:
 When the copies property is set to a value greater than 1, how does it work? Will 
 it store the second copy of the data on a different disk, or on the same 
 disk? Also, when this setting is changed at some point on a file system, will it 
 make copies of existing data or just of new data that is written from now on?

I have done this on my home directory the microsecond that it became
available :-)

It tries to make copies on multiple devices if it can.   If not (as in
my single disk laptop) it places both copies on the same disk.   It will
not duplicate any existing data, so it would be a good idea to do a
zfs create -o copies=2 ..   so that all of the data in the dataset
will have some sort of replication from the beginning. 

df output reflects actual pool usage.

# mkfile 300m f

# ls -la 
total 1218860
drwxr-xr-x   2 bobn local  3 Nov  6 19:04 .
drwxr-xr-x  81 bobn sys  214 Nov  6 19:04 ..
-rw---   1 bobn local314572800 Nov  6 19:04 f

# du -h .
 600M   .



Bob





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root zpool question

2008-11-06 Thread Krzys
Never mind, I figured it out. I played around with my settings and a spare 
disk and it seems to be working:

# zpool create -f testroot c1t0d0s0
Nov  6 20:02:10 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
rootpool 29.4G  37.6G  21.5K  /rootpool
rootpool/ROOT17.4G  37.6G18K  /rootpool/ROOT
rootpool/ROOT/zfsBE  17.4G  37.6G  17.4G  /
rootpool/dump4.02G  37.6G  4.02G  -
rootpool/swap8.00G  44.4G  1.13G  -
testroot 89.5K  9.78G 1K  /testroot
# zpool attach testroot c1t0d0s0 c1t0d0s1
# zpool list
NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
rootpool68G  22.6G  45.4G33%  ONLINE  -
testroot  9.94G   112K  9.94G 0%  ONLINE  -
# zpool status
   pool: rootpool
  state: ONLINE
  scrub: none requested
config:

 NAMESTATE READ WRITE CKSUM
 rootpoolONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

   pool: testroot
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Thu Nov  6 20:03:01 2008
config:

 NAME  STATE READ WRITE CKSUM
 testroot  ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c1t0d0s0  ONLINE   0 0 0
 c1t0d0s1  ONLINE   0 0 0

errors: No known data errors
#





On Thu, 6 Nov 2008, Krzys wrote:


 Currently I have the following:

 # zpool status
   pool: rootpool
  state: ONLINE
  scrub: none requested
 config:

 NAMESTATE READ WRITE CKSUM
 rootpoolONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

 errors: No known data errors
 #

 I would like to put the c1t0d0s0 disk in and set up mirroring for my root disk. I am 
 just afraid that if I add the disk, instead of creating a mirror I will add 
 it to the pool as a concat/stripe. How can I add a disk to 
 this pool and have it mirrored instead of striped?

 Regards,

 Chris

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copies set to greater than 1

2008-11-06 Thread Richard Elling
Krzys wrote:
 When the copies property is set to a value greater than 1, how does it work? Will 
 it store the second copy of the data on a different disk, or on the 
 same disk? 

This is hard to describe in words, so I put together some pictures.
http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection

 Also, when this setting is changed at some point on a file system, will it 
 make copies of existing data or just of new data that is written from now on?
   

Changes to such parameters (copies, compression) affect new
writes.  Previously written data will remain as-is.  A new file
which is a copy of an old file will be written with the new
policy.
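
A quick illustration, with a made-up dataset name:

# New writes get two copies from now on
zfs set copies=2 tank/home
zfs get copies tank/home

# Files written earlier still have a single copy; rewriting them
# (for example, copying to a new name) stores them under the new policy.
cp /tank/home/old.dat /tank/home/old.dat.2copies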
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root zpool question

2008-11-06 Thread Richard Elling
Krzys wrote:
 Currently I have the following:

 # zpool status
pool: rootpool
   state: ONLINE
   scrub: none requested
 config:

  NAMESTATE READ WRITE CKSUM
  rootpoolONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

 errors: No known data errors
 #

 I would like to put the c1t0d0s0 disk in and set up mirroring for my root disk. I am 
 just afraid that if I add the disk, instead of creating a mirror I will add 
 it to the pool as a concat/stripe. How can I add a disk to 
 this pool and have it mirrored instead of striped?
   

The command is zpool attach
But more work is needed.  Please consult the ZFS Administration
Guide for the proper procedure for your hardware.
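
Roughly, for SPARC (assuming c1t0d0s0 has an SMI label and is at least as
large as c1t1d0s0; check the Guide for the details):

# Attach the new slice to the existing root device to form a mirror
zpool attach rootpool c1t1d0s0 c1t0d0s0

# Wait for the resilver to complete
zpool status rootpool

# Make the new slice bootable (SPARC; x86 uses installgrub instead)
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0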
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] after controller crash, 'one or more devices is currently unavailable'

2008-11-06 Thread Victor Latushkin
David Champion wrote:
 I have a feeling I pushed people away with a long message.  Let me
 reduce my problem to one question.
 
 # zpool import -f z
 cannot import 'z': one or more devices is currently unavailable


 'zdb -l' shows four valid labels for each of these disks except for the
 new one.  Is this what unavailable means, in this case?
 
 I have now faked up a label for the disk that didn't have one and
 applied it with dd.

 Can anyone say what unavailable means, given that all eight disks are
 registered devices at the correct paths, are readable, and have labels?

For that label to be valid, you also need to make sure that its checksum is 
correct.

You may try to get a better idea of what is going on during the import with 
the help of DTrace. See the thread 'more ZFS recovery' for an example script.
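
As a crude starting point, something like this prints every non-zero return
from the zfs module while you retry the import (it is noisy, and many non-zero
returns are perfectly normal, but genuine error codes tend to stand out):

# Run in one terminal, then retry 'zpool import -f z' in another
dtrace -n 'fbt:zfs::return /arg1 != 0/ { printf("%s returned %d", probefunc, arg1); }'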

regards,
victor



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help recovering zfs filesystem

2008-11-06 Thread Victor Latushkin
Sherwood Glazier wrote:
 Let me preface this by admitting that I'm a bonehead.
 
 I had a mirrored zfs filesystem.  I needed to use one of the
 mirrors temporarily so I did a zpool detach to remove the member
 (call it disk1) leaving disk0 in the pool.  However, after the detach
 I mistakenly wiped disk0.
 
 So here is the question.  I haven't touched disk1 yet so the data is
 hopefully still there.  Is there any way to recover the data on
 disk1?  I've tried zpool import but it isn't finding anything.

Search the archives for the 'labelfix' utility and related discussions.

hth,
victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How does zfs handle with '..' while snapshotting

2008-11-06 Thread Chen Zheng
Hi,

Let's say dir a contains b and c. If I change c to c', then a becomes a'
under COW, right?

  a -> {b, c}    a' -> {b, c'}

But what happens if each child of a contains a pointer back to a, like
the '..' entry in b? It points to a before the change; which should it
point to after the change, a or a'?

Should we modify b too? If so, there will be a new b'; do we then have to
do the same for b's children recursively? If we leave b untouched, then in
the view of a', after we cd into b, '..' will still point to a, which is
quite confusing.

ZFS gets this right, but I can't find where in the code. I'd really
appreciate it if someone could give me some hints.

Thanks

Best Regards
Chenz

-- 
args are passed by ref, but bindings are local, variables are in fact
just a symbol referencing an object
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] boot -L

2008-11-06 Thread Krzys

What am I doing wrong? I have a SPARC V210 and I am having difficulty with boot 
-L. I was under the impression that boot -L would give me a choice of which zfs 
mirror to boot my root disk from?

Anyway, even apart from that, I am seeing some strange behavior... After 
trying boot -L I am unable to boot my system unless I do reset-all; is that 
normal? I have Solaris 10 U6, which I just upgraded my box to, and I wanted to try 
all the cool things about zfs root disk mirroring and so on, but so far it has 
been quite a strange experience...

[22:21:25] @adas: /root  init 0
[22:21:51] @adas: /root  stopping NetWorker daemons:
  nsr_shutdown -q
svc.startd: The system is coming down.  Please wait.
svc.startd: 90 system services are now being stopped.
svc.startd: The system is down.
syncing file systems... done
Program terminated
{0} ok boot -L

SC Alert: Host System has Reset
Probing system devices
Probing memory
Probing I/O buses

Sun Fire V210, No Keyboard
Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.



Rebooting with command: boot -L
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args: -L

Can't open bootlst

Evaluating:
The file just loaded does not appear to be executable.
{1} ok boot disk0
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
File and args:
ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss

{1} ok boot disk1
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
File and args:
ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss

{1} ok boot
ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss

{1} ok reset-all
Probing system devices
Probing memory
Probing I/O buses

Sun Fire V210, No Keyboard
Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.



Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args:
SunOS Release 5.10 Version Generic_137137-09 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: adas
Reading ZFS config: done.
Mounting ZFS filesystems: (3/3)

adas console login: Nov  6 22:27:13 squid[361]: Squid Parent: child process 363 
started
Nov  6 22:27:18 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
starting NetWorker daemons:
  nsrexecd

console login:


Does anyone have any idea why is that happening? what am I doing wrong?

Thanks for help.

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] boot -L

2008-11-06 Thread Nathan Kroenert
A quick google shows that it's not so much about the mirror, but the BE...

http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/

Might help?
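
From memory (so treat this as a sketch, not gospel), the sequence on SPARC is
to run boot -L against the actual boot device and then boot the dataset it
lists with -Z, e.g.:

{0} ok boot <root-pool-disk> -L                       # lists the boot environments in the root pool
{0} ok boot <root-pool-disk> -Z rootpool/ROOT/zfsBE   # boots the selected BE dataset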

Nathan.

On  7/11/08 02:39 PM, Krzys wrote:
 What am I doing wrong? I have sparc V210 and I am having difficulty with boot 
 -L, I was under the impression that boot -L will give me options to which zfs 
 mirror I could boot my root disk?
 
 Anyway but even not that, I am seeing some strange behavior anyway... After 
 trying boot -L I am unable to boot my system unless I do reset-all, is that 
 normal? I have Solaris 10 U6 that I just upgraded my box to and I wanted to 
 try 
 all the cool things about zfs root disk mirroring and so on, but so far its 
 quite strange experience with this whole thing...
 
 [22:21:25] @adas: /root  init 0
 [22:21:51] @adas: /root  stopping NetWorker daemons:
   nsr_shutdown -q
 svc.startd: The system is coming down.  Please wait.
 svc.startd: 90 system services are now being stopped.
 svc.startd: The system is down.
 syncing file systems... done
 Program terminated
 {0} ok boot -L
 
 SC Alert: Host System has Reset
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Rebooting with command: boot -L
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args: -L
 
 Can't open bootlst
 
 Evaluating:
 The file just loaded does not appear to be executable.
 {1} ok boot disk0
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
 File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot disk1
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
 File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok reset-all
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args:
 SunOS Release 5.10 Version Generic_137137-09 64-bit
 Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Hardware watchdog enabled
 Hostname: adas
 Reading ZFS config: done.
 Mounting ZFS filesystems: (3/3)
 
 adas console login: Nov  6 22:27:13 squid[361]: Squid Parent: child process 
 363 
 started
 Nov  6 22:27:18 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
 starting NetWorker daemons:
   nsrexecd
 
 console login:
 
 
 Does anyone have any idea why is that happening? what am I doing wrong?
 
 Thanks for help.
 
 Regards,
 
 Chris
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 


//
// Nathan Kroenert  [EMAIL PROTECTED]   //
// Senior Systems Engineer  Phone:  +61 3 9869 6255 //
// Global Systems Engineering   Fax:+61 3 9869 6288 //
// Level 7, 476 St. Kilda Road  //
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S10U6 and x4500 thumper sata controller

2008-11-06 Thread Paul B. Henson
On Fri, 31 Oct 2008, Paul B. Henson wrote:

 S10U6 was released this morning (whoo-hooo!), and I was wondering if
 someone in the know could verify that it contains all the
 fixes/patches/IDRs for the x4500 sata problems?

For the archives, I received confirmation from Sun tech support that S10U6
contains fixes for all known x4500 issues.

-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS problems which scrub can't find?

2008-11-06 Thread Matt . Ingenthron
Hi,

After a recent pkg image-update to OpenSolaris build 100, my system 
booted once and now will no longer boot.  After exhausting other 
options, I am left wondering if there is some kind of ZFS issue a scrub 
won't find.

The current behavior is that it will load GRUB, but trying to boot the
most recent boot environment (b100 based) I get Error 16: Inconsistent
filesystem structure.  The pool has gone through two scrubs from a 
livecd based on b101a without finding anything wrong.  If I select the 
previous boot environment (b99 based), I get a kernel panic.

I've tried replacing the /etc/hostid based on a hunch from one of the 
engineers working on Indiana and ZFS boot.  I also tried rebuilding the 
boot_archive and reloading the GRUB based on build 100.  I then tried 
reloading the build 99 grub to hopefully get to where I could boot build 
99.  No luck with any of these thus far.
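
For reference, the steps I have been following from the livecd look roughly
like this (from memory, so the exact invocations may be slightly off; device
and BE names as in the GRUB entry above and the zdb output below):

# From the livecd: import the pool under an alternate root and mount the BE
zpool import -f -R /a rpool
zfs mount rpool/ROOT/opensolaris-15

# Reinstall GRUB stage1/stage2 from the mounted BE onto the boot slice
installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c4t0d0s0

# Rebuild the boot archive for that BE
bootadm update-archive -R /a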

More below, and some comments in this bug:
http://defect.opensolaris.org/bz/show_bug.cgi?id=3965, though may need
to be a separate bug.

I'd appreciate any suggestions and be glad to gather any data to 
diagnose this if possible.


== Screen when trying to boot b100 after boot menu ==

  Booting 'opensolaris-15'

bootfs rpool/ROOT/opensolaris-15
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
loading '/platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS' ...
cpu: 'GenuineIntel' family 6 model 15 step 11
[BIOS accepted mixed-mode target setting!]
   [Multiboot-kludge, loadaddr=0xbffe38, text-and-data=0x1931a8, bss=0x0,
entry=0xc0]
'/platform/i86pc/kernel/amd64/unix -B
zfs-bootfs=rpool/391,bootpath=[EMAIL PROTECTED],0/pci1179,[EMAIL 
PROTECTED],2/[EMAIL PROTECTED],0:a,diskdevid=id1,[EMAIL PROTECTED]/a'
is loaded
module$ /platform/i86pc/$ISADIR/boot_archive
loading '/platform/i86pc/$ISADIR/boot_archive' ...

Error 16: Inconsistent filesystem structure

Press any key to continue...



== Booting b99 ==
(by selecting the grub entry from the GRUB menu and adding -kd then 
doing a :c to continue I get the following stack trace)

debug_enter+37 ()
panicsys+40b ()
vpanic+15d ()
panic+9c ()
(lines above typed in from ::stack, lines below typed in from when it 
dropped into the debugger)
unix:die+ea ()
unix:trap+3d0 ()
unix:cmntrap+e9 ()
unix:mutex_owner_running+d ()
genunix:lookuppnat+bc ()
genunix:vn_removeat+7c ()
genunix:vn_remove+28 ()
zfs:spa_config_write+18d ()
zfs:spa_config_sync+102 ()
zfs:spa_open_common+24b ()
zfs:spa_open+1c ()
zfs:dsl_dsobj_to_dsname+37 ()
zfs:zfs_parse_bootfs+68 ()
zfs:zfs_mountroot+10a ()
genunix:fsop_mountroot+1a ()
genunix:rootconf+d5 ()
genunix:vfs_mountroot+65 ()
genunix:main+e6 ()
unix:_locore_start+92 ()

panic: entering debugger (no dump device, continue to reboot)
Loaded modules: [ scsi_vhci uppc sd zfs specfs pcplusmp cpu.generic ]
kmdb: target stopped at:
kmdb_enter+0xb: movq   %rax,%rdi



== Output from zdb ==

LABEL 0

version=10
name='rpool'
state=1
txg=327816
pool_guid=6981480028020800083
hostid=95693
hostname='opensolaris'
top_guid=5199095267524632419
guid=5199095267524632419
vdev_tree
type='disk'
id=0
guid=5199095267524632419
path='/dev/dsk/c4t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1179,[EMAIL PROTECTED],2/[EMAIL 
PROTECTED],0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=29
ashift=9
asize=90374406144
is_log=0
DTL=161

LABEL 1

version=10
name='rpool'
state=1
txg=327816
pool_guid=6981480028020800083
hostid=95693
hostname='opensolaris'
top_guid=5199095267524632419
guid=5199095267524632419
vdev_tree
type='disk'
id=0
guid=5199095267524632419
path='/dev/dsk/c4t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1179,[EMAIL PROTECTED],2/[EMAIL 
PROTECTED],0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=29
ashift=9
asize=90374406144
is_log=0
DTL=161

LABEL 2

version=10
name='rpool'
state=1
txg=327816
pool_guid=6981480028020800083
hostid=95693
hostname='opensolaris'
top_guid=5199095267524632419
guid=5199095267524632419
vdev_tree
type='disk'
id=0
guid=5199095267524632419
path='/dev/dsk/c4t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1179,[EMAIL PROTECTED],2/[EMAIL 
PROTECTED],0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=29
ashift=9
asize=90374406144
is_log=0
DTL=161

Re: [zfs-discuss] boot -L

2008-11-06 Thread Krzys
Great, thank you.

Chris


On Fri, 7 Nov 2008, Nathan Kroenert wrote:

 A quick google shows that it's not so much about the mirror, but the BE...

 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/

 Might help?

 Nathan.

 On  7/11/08 02:39 PM, Krzys wrote:
 What am I doing wrong? I have sparc V210 and I am having difficulty with 
 boot -L, I was under the impression that boot -L will give me options to 
 which zfs mirror I could boot my root disk?
 
 Anyway but even not that, I am seeing some strange behavior anyway... After 
  trying boot -L I am unable to boot my system unless I do reset-all, is that 
 normal? I have Solaris 10 U6 that I just upgraded my box to and I wanted to 
 try all the cool things about zfs root disk mirroring and so on, but so far 
 its quite strange experience with this whole thing...
 
 [22:21:25] @adas: /root  init 0
 [22:21:51] @adas: /root  stopping NetWorker daemons:
   nsr_shutdown -q
 svc.startd: The system is coming down.  Please wait.
 svc.startd: 90 system services are now being stopped.
 svc.startd: The system is down.
 syncing file systems... done
 Program terminated
 {0} ok boot -L
 
 SC Alert: Host System has Reset
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Rebooting with command: boot -L
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args: -L
 
 Can't open bootlst
 
 Evaluating:
 The file just loaded does not appear to be executable.
 {1} ok boot disk0
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
  File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot disk1
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
  File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok reset-all
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args:
 SunOS Release 5.10 Version Generic_137137-09 64-bit
 Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Hardware watchdog enabled
 Hostname: adas
 Reading ZFS config: done.
 Mounting ZFS filesystems: (3/3)
 
 adas console login: Nov  6 22:27:13 squid[361]: Squid Parent: child process 
 363 started
 Nov  6 22:27:18 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
 starting NetWorker daemons:
   nsrexecd
 
 console login:
 
 
 Does anyone have any idea why is that happening? what am I doing wrong?
 
 Thanks for help.
 
 Regards,
 
 Chris
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 -- 


 //
 // Nathan Kroenert[EMAIL PROTECTED]   //
 // Senior Systems EngineerPhone:  +61 3 9869 6255 //
 // Global Systems Engineering Fax:+61 3 9869 6288 //
 // Level 7, 476 St. Kilda Road//
 // Melbourne 3004   Victoria  Australia   //
 //



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-help] How to install to OpenSolaris on ZFS and leaving /export untouched?

2008-11-06 Thread Uwe Dippel
 Is there a current Linux distro that actually configures itself so
 this can happen? Most of the ones I've seen don't bother.

Mike, does 'Debian' or 'Ubuntu' ring a bell? Both cater for this situation in 
the text-based installer, and surely a few more that I simply haven't tried.
I *am* disappointed; this is so obvious to do, and so obviously a needed and 
useful feature. [At times I really question the vision of higher management 
in Sun. Having great engineers doing the groundwork often is not enough for 
business survival.]

 Whether any of the
 current installation scripts are smart enough to let you do so is
 another question.

What do you mean by 'smart'? I would be even more disheartened if ZFS could 
only destroy and recreate a whole pool. Does ZFS not allow one to simply 
dump files to every location but one, rpool/export?
Take heed, I am not talking about *creating* a pool and leaving parts of an 
old one untouched.
I am only talking about adding two hooks to the installer script:
one that allows the user to keep the existing filesystem/pool for the new 
installation (and skip the creation), and one that skips the eventual writing 
of data into /export/home.
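
In the meantime, the only workaround I can see is to dump /export with zfs
send before reinstalling and pull it back in afterwards; a rough sketch,
assuming the default dataset layout, with /media/usbdisk standing in for any
storage the installer won't touch:

# Before the reinstall: snapshot and save /export
zfs snapshot -r rpool/export@preinstall
zfs send -R rpool/export@preinstall > /media/usbdisk/export.stream

# After the reinstall: replace the freshly created, empty rpool/export
zfs destroy -r rpool/export
zfs receive -d rpool < /media/usbdisk/export.stream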

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss