Re: [zfs-discuss] ZFS resize partitions

2008-12-09 Thread Larry Liu
gsorin wrote:
> Hello,
>
> I have the following issue:
>
> I'm running Solaris 10 in a VMware environment. I have a virtual HDD of 8 GB 
> (for example). At some point I can increase the hard drive to 10 GB. How can 
> I resize the ZFS pool to take advantage of the new available space?
> The same question applies to physical machines connected to a SAN or other 
> kind of RAID that can increase the "hard drive" size.
>   
Export the zpool, relabel the disk through format(1M), then import the zpool 
again.
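
A rough sketch of that sequence for a non-root pool, assuming a pool named 
"tank" on a single disk c1t1d0 (hypothetical names):

zpool export tank
format c1t1d0      # use the "type" and "label" commands so a label covering the new size is written
zpool import tank
zpool list tank    # the added capacity should now show up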

This could be done automatically after the following projects are 
integrated:
http://bugs.opensolaris.org/view_bug.do?bug_id=6475340
http://bugs.opensolaris.org/view_bug.do?bug_id=6606879

Larry
> I found the following documentation, but without any real success:
> --
> http://www.sun.com/emrkt/campaign_docs/expertexchange/knowledge/solaris_zfs_perf.html
> Q: Are ZFS file systems shrinkable? How about fragmentation? Any need to 
> defrag them?
> A: ZFS file systems can be dynamically resized; they can grow or shrink as 
> needed. The allocation algorithms are such that defragmentation is not an 
> issue.
> --
> Over here (http://harryd71.blogspot.com/2008/08/how-to-resize-zfs.html) we 
> can see that we can "mirror" the original hard drive if we have 2 hard drives 
> available, which we don't.
> --
> After some googling I found:
> zfs set volsize=2G pool/name
> but the zfs get volsize command returns:
> NAME           PROPERTY  VALUE  SOURCE
> poolname       volsize   -      -
> poolname/ROOT  volsize   -      -
> --
> If I try to set it I get:
> cannot set property for "poolname/whatever" 'volsize' does not apply to 
> datasets of this type.
>
> If I type format -> fdisk I can see that the first (only) partition is 
> active, has the same start and end cylinder, and the percentage is lower (80% 
> or whatever corresponds to how much I increased the hard drive). I can create 
> a new partition.
>
> I also tried format -> type, but I don't have an autoconfigure option; I 
> have only default and other, which don't help.
>
> It is not mandatory to increase the size of the root (operating system) 
> partition in case this is an issue, I can use another virtual disk for the 
> increasing part.
>
> Any hints are really appreciated.
>
> Thanks.
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-09 Thread Casper . Dik

>On Mon, Dec 08, 2008 at 04:46:37PM -0600, Brian Cameron wrote:
>> >Is there a shortcomming in VT here?
>> 
>> I guess it depends on how you think VT should work.  My understanding
>> is that VT works on a first-come-first-serve basis, so the first user
>> who calls logindevperm interfaces gets permission.  While it might seem
>> nicer for the last user to get the device, this is much harder to manage.
>
>No, I think audio should be virtualized.  The current session should
>have access to the real audio hardware, and the others should not be
>able to produce sound (though as far as apps go, they shouldn't know
>the difference).

I think we talked about this at the time, but in the end we made
a VT subsystem which works like the others do.

But I agree with you.  It doesn't necessarily require virtualizing /dev/audio;
it could be just a matter of adding $AUDIODEV to the environment of such a
session (e.g., pointing at /dev/vt/sound/...).
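
A minimal sketch of what that could look like, assuming (hypothetically) a
per-VT device node like the one above, and that the Sun audio tools honour
$AUDIODEV:

AUDIODEV=/dev/vt/sound/vt02    # hypothetical per-VT device node
export AUDIODEV
audioplay /usr/share/audio/samples/au/spacemusic.au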

At this time, it seems that the last one to log in gets /dev/sound?


>> Making it work the other way would require logindevperm to be enhanced.
>
>Perhaps.  It will also require virtualizing the audio device.

And possibly not, because a simple rule can be used if we give the
virtualized audio a different device node.

>When I switch away from a session where programs are producing sound
>what should happen is this: a) those programs continue to operate, b)
>but they don't produce actual sound until I switch back to that VT (and
>unlock the screen).

I'm not sure why the programs should stop; it's just as valid to
throw the output away.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool "cannot replace a replacing device"

2008-12-09 Thread Ross
No, there won't be anything on the drive, I was just wondering if ZFS
might get confused seeing a disk it knows about, but with no data on
there.

To be honest, on a single-parity RAID array with that many drives, I'd
be buying another drive straight away.  You've got no protection for
your data right now.

I'd also advise adding a hot spare to that system.  If the new drive
works OK, you can probably use the one you're having problems with now
as a hot spare.
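
For reference, adding a spare is a one-liner; a sketch with hypothetical pool
and device names:

zpool add tank spare c2t6d0   # attach c2t6d0 to the pool as a hot spare
zpool status tank             # the spare shows up under a "spares" section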
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP Smart Array and b99?

2008-12-09 Thread Fajar A. Nugraha
sim wrote:
> OK,
>
> In the end I managed to install OpenSolaris snv_101b on an HP blade, on a Smart 
> Array drive, directly from the install CD. Everything is fine. The problems I 
> experienced with hangs on boot on snv_99+ are related to the Qlogic driver, but 
> that is a different story.
>
>   

Hi Simon,

Your symptoms are very similar to mine. Can you tell me how you solved the
Qlogic problem? My server also has Qlogic FC cards, not connected to
anything right now (the disks are on an HP Smart Array).

Regards,

Fajar


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-09 Thread russell aspinwall
Hi,
A couple of years ago I compared Solaris 10 and Windows XP x64 on the same hardware 
(dual Opteron), running the same analytical cases. Each OS had a clean install 
before use:

Case   CPU time   System time   Lapse         Bits   OS
No     (secs)     (secs)        (hh:mm:ss)

1      5890                     21:38:37      64     Solaris 10
2      5578                     21:38:31      64     Solaris 10
3      1128                     10:19:05      64     Solaris 10

1      6536                     21:49:19      32     Solaris 10
2      8388                     22:20:12      32     Solaris 10
3      1311                     10:22:11      32     Solaris 10

1      0                        03:16:24.21   32     Win XP x64
2      0                        03:17:27.71   32     Win XP x64
3      2220       0             00:37:00      32     Win XP x64

The tests were repeated just to make sure. Unfortunately, until software is 
built and tested on Solaris 10, people tend to assume Windows is faster. The 
above tests were completed with no virus scanner installed, so as not to distort 
the results.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cp: Operation not supported

2008-12-09 Thread Kristof Van Damme
Hi All,
We have set up a zpool on OpenSolaris 2008.11, but have difficulties copying 
files whose names contain special characters when the name is encoded in 
ISO8859-15. When the name is in UTF-8 we don't have this problem. We get 
"Operation not supported".
We want to copy the files with their names in ISO8859-15 encoding and do not 
want to convert them to UTF-8, because we think this will cause problems 
further down the road: the filesystem will be used as a network share (CIFS and 
NFS) for Windows and Linux clients. The clients may not all properly support 
UTF-8 at the moment, so we'd like to avoid having to convert to UTF-8 for now.

How can we copy files with an ISO8859-encoded name?

To demonstrate the problem:
# /usr/bin/cp ISO8859-K?ln.url /datapool/appl
cp: cannot create /datapool/appl/ISO8859-K?ln.url: Operation not supported
# /usr/bin/cp UTF8-Köln.txt /datapool/appl/
#
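
In case it helps with debugging: one thing worth checking is the dataset's 
utf8only setting (it rejects file names that are not valid UTF-8, and it can 
only be chosen when the dataset is created, typically alongside normalization 
for CIFS use). A sketch of how one might check, and work around it if that 
turns out to be the cause, using the dataset from the example above:

zfs get utf8only,normalization,casesensitivity datapool/appl
# utf8only cannot be changed after creation, so a new dataset would be needed:
zfs create -o utf8only=off datapool/appl-latin1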

Kristof/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Ross
While it's good that this is at least possible, that looks horribly complicated 
to me.  Does anybody know if there's any work being done on making it easy to 
remove obsolete boot environments?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-09 Thread Ross
I can tell you a little about Windows VSS snapshots compared to ZFS ones, since 
one of the main reasons I'm so interested in ZFS is that Windows snapshots 
are so useless.

For Windows VSS:
* You have OS overhead for taking the snapshot, as opposed to it being 
instantaneous for ZFS.  Microsoft actually recommends that the snapshots be 
stored on a separate disk.
* You have to reserve space in advance for them, so if you guess wrong you're 
out of luck.
* Microsoft's snapshots can have only one schedule.  They support hourly, daily 
or weekly snapshots, but you can only pick one period.
* You are limited to 64 snapshots.

So if you want hourly snapshots of your data, you're not even going to have 3 
days' worth of backups.  If you can live with daily backups you can manage 2 
months' worth.

When you compare that to Tim's excellent auto backup service it makes VSS look 
like a joke.  While ZFS doesn't actually limit how many snapshots you keep, 
with just 90 you can run:

8x 15 minute snapshots
48x hourly snapshots
14x daily snapshots
8x weekly snapshots
12x monthly snapshots

So you have snapshots being taken *far* more regularly than VSS can manage, and 
they go back a full year with considerable overlap between the different 
periods.
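
As a rough illustration of that kind of rolling schedule (a ksh sketch with a
hypothetical dataset name, not the actual auto-snapshot service):

#!/usr/bin/ksh
# take a timestamped hourly snapshot of tank/data and keep only the newest 48
FS=tank/data
KEEP=48
zfs snapshot "${FS}@hourly-$(date '+%Y-%m-%d-%H%M')"
# list hourly snapshots oldest first, then destroy everything beyond the newest $KEEP
SNAPS=$(zfs list -H -t snapshot -o name -s creation -r "$FS" | grep "@hourly-")
TOTAL=$(echo "$SNAPS" | wc -l)
if [ "$TOTAL" -gt "$KEEP" ]; then
    echo "$SNAPS" | head -$((TOTAL - KEEP)) | while read SNAP; do
        zfs destroy "$SNAP"
    done
fi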
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Tim Haley
Ross wrote:
> While it's good that this is at least possible, that looks horribly 
> complicated to me.  
> Does anybody know if there's any work being done on making it easy to remove 
> obsolete 
> boot environments?

If the clones were promoted at the time of their creation the BEs would 
stay independent and individually deletable.  Promotes can fail, though, 
if there is not enough space.
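
Done by hand, that promotion looks roughly like this (hypothetical BE dataset
names):

# the new BE starts out as a clone of a snapshot of the old BE;
# promoting it reverses that dependency
zfs promote rpool/ROOT/new_be
# the old BE is then the dependent clone and can be destroyed on its own
zfs destroy -r rpool/ROOT/old_be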

I was told a little while back, when I ran into this myself on a Nevada 
build where ludelete failed, that beadm *did* promote clones.  This 
thread appears to be evidence to the contrary.  I think it's a bug; we 
should either promote immediately on creation, or perhaps beadm destroy 
could do the promotion behind the covers.

-tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Kyle McDonald
Tim Haley wrote:
> Ross wrote:
>   
>> While it's good that this is at least possible, that looks horribly 
>> complicated to me.  
>> Does anybody know if there's any work being done on making it easy to remove 
>> obsolete 
>> boot environments?
>> 
>
> If the clones were promoted at the time of their creation the BEs would 
> stay independent and individually deletable.  Promotes can fail, though, 
> if there is not enough space.
>
> I was told a little while back when I ran into this myself on an Nevada 
> build where ludelete failed, that beadm *did* promote clones.  This 
> thread appears to be evidence to the contrary.  I think it's a bug, we 
> should either promote immediately on creation, or perhaps beadm destroy 
> could do the promotion behind the covers.
>   
If I understand this right, the latter option looks better to me. Why 
consume the disk space before you have to?
What does LU do?

  -Kyle

> -tim
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Tim Haley
Kyle McDonald wrote:
> Tim Haley wrote:
>> Ross wrote:
>>   
>>> While it's good that this is at least possible, that looks horribly 
>>> complicated to me.  
>>> Does anybody know if there's any work being done on making it easy to 
>>> remove obsolete 
>>> boot environments?
>>> 
>> If the clones were promoted at the time of their creation the BEs would 
>> stay independent and individually deletable.  Promotes can fail, though, 
>> if there is not enough space.
>>
>> I was told a little while back when I ran into this myself on an Nevada 
>> build where ludelete failed, that beadm *did* promote clones.  This 
>> thread appears to be evidence to the contrary.  I think it's a bug, we 
>> should either promote immediately on creation, or perhaps beadm destroy 
>> could do the promotion behind the covers.
>>   
> If I understand this right, the latter option looks better to me. Why 
> consume the disk space before you have to?
> What does LU do?
> 

ludelete doesn't handle this any better than beadm destroy does; it 
fails for the same reasons. lucreate does not promote the clone it 
creates when a new BE is spawned, either.

-tim

>   -Kyle
> 
>> -tim
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>   
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Mark J Musante
On Tue, 9 Dec 2008, Tim Haley wrote:
>
> ludelete doesn't handle this any better than beadm destroy does, it 
> fails for the same reasons. lucreate does not promote the clone it 
> creates when a new BE is spawned, either.

Live Upgrade's luactivate command is meant to promote the BE during init 6 
processing.  And ludelete calls lulib_demote_and_destroy_dataset to 
attempt to put a BE in the right configuration for zfs destroy to work. 
If it doesn't, then that's a bug in LU.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Ross
>> we should either promote immediately on creation, or perhaps beadm destroy
>> could do the promotion behind the covers.
>> - Tim

> If I understand this right, the latter option looks
> better to me. Why 
> consume the disk space before you have to?
> What does LU do?
>   -Kyle

However, if you want to delete something to free up space and only then find 
you haven't got enough free space to do so, you're in a bit of a catch-22.

Might be better to at least attempt the promotion early on, and warn the user 
if it fails for any reason.  Would it be sensible to fail the update if the 
promotion fails?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-09 Thread Nicolas Williams
On Tue, Dec 09, 2008 at 09:09:15AM +0100, [EMAIL PROTECTED] wrote:
> >When I switch away from a session where programs are producing sound
> >what should happen is this: a) those programs continue to operate, b)
> >but they don't produce actual sound until I switch back to that VT (and
> >unlock the screen).
> 
> I'm not sure why the programs should stop; it's just as valid to
> throw the output away.

I didn't mean that they should stop -- they should continue, but their
sound output should be muted.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool "cannot replace a replacing device"

2008-12-09 Thread Courtney Malone
I have another drive on the way, which will be handy in the future, but it 
doesn't solve the problem that ZFS won't let me manipulate that pool in a manner 
that will return it to a non-degraded state (even with a replacement drive or 
hot spare; I have already tried adding a spare), and I don't have somewhere to 
dump ~6TB of data and do a restore.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-09 Thread Bob Friesenhahn
On Tue, 9 Dec 2008, russell aspinwall wrote:

> The tests were repeated just to make sure. Unfortunately until 
> software is built and tested on Solaris 10, people tend to assume 
> Windows is faster. The above tests were completed with no virus 
> scanner installed as not to distort the results.

Your data is surely out of date.  Windows itself inserts "anti-virus" 
type checking into your application as it runs.  Windows executes your 
application more slowly with each new service pack update, as more and more 
run-time safety checks are added.  If you build your application with 
recent Visual Studio versions, then it may run vastly slower by 
default.  Adobe developers found that debug versions of Photoshop 
became completely unusable due to all the added background checking. 
Much of the checking can be turned off, but it may require rebuilding 
the SDKs with a special define.

There may be specialized areas where Windows does shine, but it is 
clearly not in application execution times.

It is possible that someone will develop a Solaris-specific virus, but 
that seems unlikely.  Windows is paying a high price for its policies.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mismatched replication level question

2008-12-09 Thread Scott Williamson
When I attempt to create a 46-disk pool with 5- and 6-disk raidz vdevs, I get
the following message:

mismatched replication level: both 5-way and 6-way raidz vdevs are present
mismatched replication level: both 6-way and 5-way raidz vdevs are present

I expect this is correct.[1]  But what does it mean for performance or other
issues? Why am I being warned?

The factory config for X4500s used a raidz layout with both 5- and 6-disk vdevs.

[1] http://docs.sun.com/app/docs/doc/819-5461/gazgc?a=view
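
For what it's worth, if the mixed layout is intentional, the check can be
overridden with -f. A sketch with hypothetical device names:

zpool create -f tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0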
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS vdev labels and EFI disk labels

2008-12-09 Thread Elaine Ashton
I'm running snv_101a and have been seeing some unexpected behaviour with ZFS 
and EFI disk labels. 

If I fdisk 2 disks to have EFI partitions and label them with the appropriate 
partition beginning at sector 34 and then give them to ZFS for a pool, ZFS 
would appear to change the beginning sector to 256. I've even done a low-level 
format to make sure that this wasn't a case of ZFS picking up an old vdev label 
from the back-end of the disk and preserving it in the front for some unknown 
reason.

I've looked through the ZFS technical documentation and around the net and have 
found nothing that would explain this behaviour. The vdev labels are 256k * 2 
in front, but if I'm doing my math properly, the 256 sectors only come to 128k 
of space, so it can't be a lingering vdev label. Not to mention that prtvtoc 
claims that the space between sectors 34 and 255 is unallocated, e.g.:


* /dev/rdsk/c0t3d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 286494720 sectors
* 286494653 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*                          First       Sector      Last
*                          Sector       Count    Sector
*                              34         222       255
*
*                          First       Sector      Last
* Partition  Tag  Flags    Sector       Count    Sector   Mount Directory
        0     4    00         256   286478047 286478302
        8    11    00   286478303       16384 286494686


So, have I found a bug? A feature? It would appear to be a harmless waste of 
space but at the very least I'd like to know why ZFS is doing this.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-09 Thread Joerg Schilling
Bob Friesenhahn <[EMAIL PROTECTED]> wrote:

> Your data is surely out of date.  Windows itself inserts "anti-virus" 
> type checking into your application as it runs.  Windows executes your 
> application slower with each new service pack update and more and more 
> run-time safety checks are added.  If you build your application with 
> recent Visual Studio versions, then it may run vastly slower by 

Are you talking about the stack overflow checking that is added by the compiler?

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vdev labels and EFI disk labels

2008-12-09 Thread elaine ashton

On 09 Dec, 2008, at 14:04, Mark J Musante wrote:

> On Tue, 9 Dec 2008, Elaine Ashton wrote:
>
>> If I fdisk 2 disks to have EFI partitions and label them with the  
>> appropriate partition beginning at sector 34 and then give them to  
>> ZFS for a pool, ZFS would appear to change the beginning sector to  
>> 256.
>
> Right.  This is done deliberately so that we don't generate  
> misaligned I/Os.  By switching from 34 to 256, we start on a 128k  
> boundary.  I can dig up the CR if you're curious how this was done.

Thanks! That'd be great, as I have an snv_79 system that doesn't 
exhibit this behaviour, so I'll assume this was added sometime 
between that release and 101a?

e.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vdev labels and EFI disk labels

2008-12-09 Thread Mark J Musante
On Tue, 9 Dec 2008, elaine ashton wrote:

> Thanks! That'd be great as I have an snv_79 system that doesn't exhibit 
> this behaviour so I'll assume that this has been added in sometime 
> between that release and 101a?

According to the CR, the putback went into build 66.
external link: http://bugs.opensolaris.org/view_bug.do?bug_id=6532509


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS resize partitions

2008-12-09 Thread gsorin
Well, when I wanted to export and import the pool, the root pool said it was 
busy (obviously), so I tried using a secondary attached HDD instead.

As if by miracle, the zpool increased in size automatically after the reboot. So, 
it's that simple.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS resize partitions

2008-12-09 Thread Romain Chatelain
Lol ;D

I did not see that it was the root pool...

-C

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of gsorin
Sent: Tuesday, 9 December 2008 20:49
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS resize partitions

Well, when I wanted to export and import the partition, the root partition said 
it was busy (obviously) and I tried using a secondary attached hdd.

By miracle, the zpool increased the size automatically after the reboot. So, 
it's that simple.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vdev labels and EFI disk labels

2008-12-09 Thread Mark J Musante
On Tue, 9 Dec 2008, Elaine Ashton wrote:

> If I fdisk 2 disks to have EFI partitions and label them with the 
> appropriate partition beginning at sector 34 and then give them to ZFS 
> for a pool, ZFS would appear to change the beginning sector to 256.

Right.  This is done deliberately so that we don't generate misaligned 
I/Os.  By switching from 34 to 256, we start on a 128k boundary.  I can 
dig up the CR if you're curious how this was done.
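
Spelling out the arithmetic (512-byte sectors, as in the label above):
256 sectors * 512 bytes/sector = 131072 bytes = 128 KiB, i.e. exactly one
128 KiB boundary, whereas sector 34 would start at 34 * 512 = 17408 bytes,
which is not 128 KiB-aligned.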


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] (no subject)

2008-12-09 Thread Chris Dikranis


--
  * Chris Dikranis *
Proactive Support Engineer (ANZ)
*Sun Microsystems, Inc.*
476 St. Kilda Road
Melbourne, Victoria 3004 Australia
Phone x47041/+613 98640 041
Mobile +61 403 494 472
Fax +61 3 9869 6290
Email [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SXCE, ZFS root, b101 -> b103, weird zfs list ?

2008-12-09 Thread Turanga Leela
I've been playing with liveupgrade for the first time. (See 
http://www.opensolaris.org/jive/thread.jspa?messageID=315231). I've at least 
got a workaround for that issue.

One strange thing I've noticed, however, is that after I luactivate the new 
environment (snv_103), the root pool snapshots that *used* to belong to 
rpool/ROOT/snv_101 now appear as belonging to snv_103. Why on earth would this 
happen?

# zfs list -r -t all rpool
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        13.7G  19.5G    41K  /rpool
rpool/ROOT                   11.7G  19.5G    18K  legacy
rpool/ROOT/snv_101           93.5M  19.5G  6.01G  /
rpool/ROOT/snv_103           11.6G  19.5G  6.13G  /
rpool/ROOT/[EMAIL PROTECTED]   287M      -  6.20G  -
rpool/ROOT/[EMAIL PROTECTED]  66.7M      -  5.99G  -
rpool/ROOT/[EMAIL PROTECTED]  28.2M      -  6.00G  -
rpool/ROOT/[EMAIL PROTECTED]  31.6M      -  6.00G  -
rpool/dump                   1.00G  19.5G  1.00G  -
rpool/swap                       1G  20.5G    16K  -
# 

Those snapshots were all taken over snv_101, the final snapshot being [EMAIL 
PROTECTED] which was then cloned to create rpool/ROOT/snv_103.

So any idea why this is happening?

# df |grep rpool
/  (rpool/ROOT/snv_103):40897072 blocks 40897072 files
/rpool (rpool ):40897072 blocks 40897072 files
#
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SXCE, ZFS root, b101 -> b103, weird zfs list ?

2008-12-09 Thread Cyril Plisko
On Wed, Dec 10, 2008 at 9:04 AM, Turanga Leela <[EMAIL PROTECTED]> wrote:
> I've been playing with liveupgrade for the first time. (See 
> http://www.opensolaris.org/jive/thread.jspa?messageID=315231). I've at least 
> got a workaround for that issue.
>
> One strange thing i've noticed, however, is after I luactivate the new 
> environmentalism (snv_103) the root pool snapshots that *used* to belong to 
> rpool/ROOT/snv_101 now appear as being for snv_103? Why on earth would this 
> happen?
>
> # zfs list -r -t all rpool
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> rpool 13.7G  19.5G41K  /rpool
> rpool/ROOT11.7G  19.5G18K  legacy
> rpool/ROOT/snv_10193.5M  19.5G  6.01G  /
> rpool/ROOT/snv_10311.6G  19.5G  6.13G  /
> rpool/ROOT/[EMAIL PROTECTED]   287M  -  6.20G  -
> rpool/ROOT/[EMAIL PROTECTED] 66.7M  -  5.99G  -
> rpool/ROOT/[EMAIL PROTECTED]  28.2M  -  6.00G  -
> rpool/ROOT/[EMAIL PROTECTED]31.6M  -  6.00G  -
> rpool/dump1.00G  19.5G  1.00G  -
> rpool/swap   1G  20.5G16K  -
> #
>
> Those snapshots were all taken over snv_101, the final snapshot being [EMAIL 
> PROTECTED] which was then cloned to create rpool/ROOT/snv_103.
>
> So any idea why this is happening?

The new boot environment (rpool/ROOT/snv_103) was "zfs promote"d.
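
That is also why the snapshots moved: when a clone is promoted, the snapshots
it was cloned from migrate to the promoted dataset. A tiny sketch with
hypothetical dataset names:

zfs snapshot rpool/ROOT/old@s1
zfs clone rpool/ROOT/old@s1 rpool/ROOT/new
zfs promote rpool/ROOT/new
zfs list -t snapshot -r rpool/ROOT
# old@s1 is now listed as rpool/ROOT/new@s1, and rpool/ROOT/old depends on it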

-- 
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss