Re: [zfs-discuss] [OpenIndiana-discuss] format dumps the core

2010-11-09 Thread Roy Sigurd Karlsbakk
After creating ZFS filesystems on the bunch and rebooting into OI, format
no longer dumps core - it works. Seems there was something left over on
those drives after all.
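
For anyone else hitting this with disks pulled from a Linux box, the fdisk
route Moazam describes below is roughly (device name hypothetical):

# fdisk -B /dev/rdsk/c6t0d0p0

which replaces the foreign partition table with a single SOLARIS2 partition
spanning the disk.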

roy

- Original Message -
> also, this last test was with two 160gig drives only, the 2TB drives
> and the SSD are all disconnected...
> 
> - Original Message -
> > I somehow doubt the problem is the same - it looks more like cfgadm
> > can't see my devices. I first tried with directly attached storage
> > (1 SAS cable to each disk). Now, that has been replaced with a SAS
> > expander (4xSAS to the expander, 12 drives on the expander). format
> > still dumps core, and cfgadm doesn't seem to like my drives somehow.
> >
> > Any ideas?
> >
> > r...@tos-backup:~# format
> > Searching for disks...Arithmetic Exception (core dumped)
> > r...@tos-backup:~# ls -l /dev/rdsk/core
> > -rw------- 1 root root 2463431 2010-11-04 17:41 /dev/rdsk/core
> > r...@tos-backup:~# pstack /dev/rdsk/core
> > core '/dev/rdsk/core' of 1217: format
> > fee62e4a UDiv (4, 0, 8046c80, 80469a0, 8046a30, 8046a50) + 2a
> > 08079799 auto_sense (4, 0, 8046c80, 0) + 281
> > 080751a6 add_device_to_disklist (80479c0, 80475c0, fefd995b, feffb140) + 62a
> > 080746ff do_search (0, 1, 8047e28, 8066576) + 273
> > 0806658d main (1, 8047e58, 8047e60, 8047e4c) + c1
> > 0805774d _start (1, 8047f00, 0, 8047f07, 8047f0b, 8047f1f) + 7d
> > r...@tos-backup:~# zpool status
> > pool: rpool
> > state: ONLINE
> > scan: none requested
> > config:
> >
> > NAME                      STATE   READ WRITE CKSUM
> > rpool                     ONLINE     0     0     0
> >   c4t5000C50019891202d0s0 ONLINE     0     0     0
> >
> > errors: No known data errors
> > r...@tos-backup:~# cfgadm -a
> > Ap_Id Type Receptacle Occupant Condition
> > c6 scsi-sas connected configured unknown
> > c6::es/ses0 ESI connected configured unknown
> > c6::smp/expd0 smp connected configured unknown
> > c6::w5000c50019891202,0 disk-path connected configured unknown
> > c6::w5000c50019890fed,0 disk-path connected configured unknown
> > c7 scsi-sas connected unconfigured unknown
> > usb8/1 unknown empty unconfigured ok
> > usb8/2 unknown empty unconfigured ok
> > usb9/1 unknown empty unconfigured ok
> > usb9/2 usb-device connected configured ok
> > usb10/1 unknown empty unconfigured ok
> > usb10/2 unknown empty unconfigured ok
> > usb10/3 unknown empty unconfigured ok
> > usb10/4 unknown empty unconfigured ok
> > usb11/1 unknown empty unconfigured ok
> > usb11/2 unknown empty unconfigured ok
> > usb12/1 unknown empty unconfigured ok
> > usb12/2 unknown empty unconfigured ok
> > usb13/1 unknown empty unconfigured ok
> > usb13/2 unknown empty unconfigured ok
> > usb14/1 usb-hub connected configured ok
> > usb14/1.1 unknown empty unconfigured ok
> > usb14/1.2 unknown empty unconfigured ok
> > usb14/1.3 usb-hub connected configured ok
> > usb14/1.3.1 usb-device connected configured ok
> > usb14/1.3.2 unknown empty unconfigured ok
> > usb14/1.3.3 unknown empty unconfigured ok
> > usb14/1.3.4 unknown empty unconfigured ok
> > usb14/1.4 unknown empty unconfigured ok
> > usb14/2 unknown empty unconfigured ok
> > usb14/3 unknown empty unconfigured ok
> > usb14/4 unknown empty unconfigured ok
> > usb14/5 unknown empty unconfigured ok
> > usb14/6 unknown empty unconfigured ok
> > r...@tos-backup:~#
> >
> >
> > - Original Message -
> > > Moazam,
> > >
> > > Thanks for the update. I hope this is Roy's issue too.
> > >
> > > I can see that format would freak out over ext3, but it
> > > shouldn't core dump.
> > >
> > > Cindy
> > >
> > > On 11/02/10 17:00, Moazam Raja wrote:
> > > > Fixed!
> > > >
> > > > It turns out the problem was that we pulled these two disks from a
> > > > Linux box and they were formatted with ext3 on partition 0 across
> > > > the whole disk, which was somehow causing 'format' to freak out.
> > > >
> > > > So, we fdisk'ed the p0 slice to delete the Linux partition and then
> > > > created a SOLARIS2 type partition on it. That worked, and the format
> > > > command no longer crashes.
> > > >
> > > > Cindy, please let the format team know about this, since I'm sure
> > > > others will also run into this problem at some point if they have a
> > > > mixed Linux/Solaris environment.
> > > >
> > > >
> > > > -Moazam
> > > >
> > > > On Tue, Nov 2, 2010 at 3:15 PM, Cindy Swearingen
> > > >  wrote:
> > > >> Hi Moazam,
> > > >>
> > > >> The initial diagnosis is that the LSI controller is reporting
> > > >> bogus
> > > >> information. It looks like Roy is using a similar controller.
> > > >>
> > > >> You might report this problem to LSI, but I will pass this
> > > >> issue
> > > >> along to the format folks.
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Cindy
> > > >>
> > > >> On 11/02/10 15:26, Moazam Raja wrote:
> > > >>> I'm having the same problem after adding 2 SSD disks to my
> > > >>> machine.
> > > >>> The controller is LSI SAS9211-8i PCI Express.
> > > >>>
> > > >>> # format
> > > >>> Searching for disks...

Re: [zfs-discuss] How to grow root vdevs?

2010-11-09 Thread Ian Collins

On 11/10/10 04:11 PM, Peter Taps wrote:

Folks,

I am trying to understand if there is a way to increase the capacity of a 
root-vdev. After reading zpool man pages, the following is what I understand:

1. If you add a new disk by using "zpool add," this disk gets added as a new
root-vdev. The existing root-vdevs are not changed.


The root device can only be a single drive or a mirror.


2. You can also add a new disk by using "zpool attach" on any existing disk of 
the pool. However, the existing disk cannot be part of a raidz vdev. Also, if the 
existing disk is part of a mirror, all we are doing is increasing redundancy but not 
growing the capacity of the vdev.

The only option to grow a root-vdev seems to be to use "zpool replace" and
replace an existing disk with a bigger disk.


The easiest way to grow the root pool is to mirror to a bigger drive and 
then detach the original.  Don't forget to install grub on the new drive.
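
A minimal sketch of that flow (device names hypothetical):

# zpool attach rpool c4t0d0s0 c5t0d0s0
# zpool status rpool
(wait for the resilver to complete)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
# zpool detach rpool c4t0d0s0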


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to grow root vdevs?

2010-11-09 Thread Peter Taps
Folks,

I am trying to understand if there is a way to increase the capacity of a 
root-vdev. After reading zpool man pages, the following is what I understand:

1. If you add a new disk by using "zpool add," this disk gets added as a new 
root-vdev. The existing root-vdevs are not changed.
2. You can also add a new disk by using "zpool attach" on any existing disk of 
the pool. However, the existing disk cannot be part of a raidz vdev. Also, if 
the existing disk is part of a mirror, all we are doing is increasing 
redundancy but not growing the capacity of the vdev.

The only option to grow a root-vdev seems to be to use "zpool replace" and 
replace an existing disk with a bigger disk.

Is my understanding correct?

Thank you in advance for your help.

Regards,
Peter
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-09 Thread Haudy Kazemi

Maurice Volaski wrote:

I think my initial response got mangled. Oops.

creating a ZFS pool out of files stored on another ZFS pool.  The main
reasons that have been given for not doing this are unknown edge and
corner cases that may lead to deadlocks, and that it creates a complex
structure with potentially undesirable and unintended performance and
reliability implications.



Computers are continually encountering unknown edge and corner cases in
the various things they do all the time. That's what we have testing for.

I agree.  The earlier discussions of this topic raised the issue that
this is not a well tested area and is an unsupported configuration.
Some of the problems that arise in nested pool configurations may also
arise in supported pool configurations; nested pools may significantly
aggravate the problems.  The trick is to find test cases in supported
configurations so the problems can't simply be swept under the rug of
"unsupported configuration".




Deadlocks may occur in low resource
conditions.  If resources (disk space and RAM) never run low, the
deadlock scenarios may not arise.



It sounds like you mean any low resource condition. Presumably, utilizing
complex pool structures like these will tax resources, but there are many
other ways to do that.


We have seen ZFS systems lose stability under low resource conditions.  
They don't always gracefully degrade/throttle back performance as 
resources run very low.


I see a parallel in the 64-bit vs. 32-bit ZFS code: the 32-bit code has
much tighter resource constraints due to memory addressing limits, and we
see notes in many places that the 32-bit code is not production-ready and
not recommended unless you have no other choice.  The machines the 32-bit
code runs on also tend to have tighter physical resource limits, which
compounds the problems.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to decrease the zfs file system size

2010-11-09 Thread Ian Collins

On 11/10/10 10:29 AM, bhanu prakash wrote:

Hi,
Currently the file system has a capacity of 50 GB. I want to reduce
that to 30 GB.

Quota or physical limit?


When I try to set the quota as
#zfs set quota=30G 
I get an error like "cannot set property for ; size
is less than current used or reserved space".

Please suggest the steps to resolve this...

Remove some stuff so the used space is less than the new quota.
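
A minimal sketch (dataset name hypothetical):

# zfs get used,quota tank/fs
(remove or migrate data until "used" is below 30G, then:)
# zfs set quota=30G tank/fs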

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive?

2010-11-09 Thread Lapo Luchini
Casper Dik wrote on 2010-09-26:
> A incremental backup:
> 
>   zfs snapshot -r exp...@backup-2010-07-13
>   zfs send -R -I exp...@backup-2010-07-12 exp...@backup-2010-07-13 | 
>   zfs receive -v -u -d -F portable/export

Unfortunately "zfs receive -F" does not skip existing snapshots and thus
if the "zfs send -R | zfs receive -F" process is somewhat interrupted
(e.g. network downtime) it can't be simply retried, as some
recursively-reached sub-filesystem will have some latest snapshot and
some others would have a different latest snapshot.

The code comments around libzfs_sendrecv.c:1885 seem to indicate that
existing data should be properly skipped, using a call to recv_skip(),
but the "zfs receive" process dies just after having warned about
ignored data, and thus "zfs send" dies of a broken pipe.

Also refer to:
http://thread.gmane.org/gmane.os.freebsd.devel.file-systems/6490

In your opinion, is this intentional (ignoring as in "stopping now") or
really a bug? (If my interpretation of the intent of recv_skip() is
correct it should be a bug, but I wonder...)
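
In the meantime, a rough manual-retry sketch (dataset and snapshot names
hypothetical, untested): roll each received sub-filesystem back to the
last common snapshot, then rerun the incremental.

# zfs list -r -t snapshot -o name portable/export
# zfs rollback -r portable/export/home@backup-2010-07-12
# zfs send -R -I @backup-2010-07-12 export@backup-2010-07-13 | \
    zfs receive -v -u -d -F portable/export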

-- 
Lapo Luchini - http://lapo.it/

“UNIX is user-friendly, it just chooses its friends.” (Andreas Bogk)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to decrease the zfs file system size

2010-11-09 Thread bhanu prakash
Hi,

Currently the file system has a capacity of 50 GB. I want to reduce that
to 30 GB.

When I try to set the quota as

#zfs set quota=30G 

I get an error like "cannot set property for ; size is
less than current used or reserved space".


Please suggest the steps to resolve this...



Regards,
bhanu
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Glenn Lagasse
* Peter Taps (ptr...@yahoo.com) wrote:
> Thank you all for your help. Looks like "beadm" is the utility I was
> looking for.
> 
> When I run "beadm list," it gives me the complete list and indicates
> which one is currently active. It doesn't tell me which one is the
> "default" boot. Can I assume that whatever is "active" is also the
> "default?"

As outlined in beadm(1M):

beadm  list [-a | -ds] [-H] [beName]

     Lists information about the existing boot environment
     named beName, or lists information for all boot environ-
     ments if beName is not provided. The Active field indi-
     cates whether the boot environment is active now,
     represented by N; active on reboot, represented by R; or
     both, represented by NR.
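
So an illustrative listing (BE names and values hypothetical) might look like:

# beadm list
BE            Active Mountpoint Space Policy Created
opensolaris   -      -          55.5M static 2010-10-01 10:02
opensolaris-1 NR     /          8.41G static 2010-11-05 09:30

Here opensolaris-1 is active now and on reboot (NR), which is as close to
a "default" as beadm reports.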

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Peter Taps
Thank you all for your help. Looks like "beadm" is the utility I was looking 
for.

When I run "beadm list," it gives me the complete list and indicates which one 
is currently active. It doesn't tell me which one is the "default" boot. Can I 
assume that whatever is "active" is also the "default?"

Regards,
Peter
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4540 RIP

2010-11-09 Thread Ware Adams
On Nov 9, 2010, at 12:24 PM, Maurice Volaski wrote:
> 
>> http://www.supermicro.com/products/chassis/4U/?chs=847
>> 
>> Stay away from the 24 port expander backplanes. I've gone thru several
>> and they still don't work right - timeout and dropped drives under load.
>> The 12-port works just fine connected to a variety of controllers. If you
>> insist on the 24-port expander backplane, use a non-expander equipped LSI
>> controller to drive it.
> 
> I was wondering if you can clarify. Isn't it the case that all 24-port
> backplanes utilize expander chips directly on the backplane to support
> their 24 ports, or are expanders utilized only when something else, such
> as another 12-port backplane, is connected to one of the cascade ports in
> the back?

I think he is referring to the different flavors of the 847, namely the one 
that uses expanders (E1, E2, E16, E26) vs. the one that does not (the 847A).  
This page about a storage server build does a very good job of detailing all 
the different versions of the 847:

http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/

--Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs roolback

2010-11-09 Thread John Goolsby
I'm trying to roll back from a bad patch install on Solaris 10.  From the
failsafe BE I tried to roll back, but zfs is asking me for "allow rollback"
permissions.  It's hard for me to tell exactly because the messages are
scrolling off the screen before I can read them. Any help would be appreciated.
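
A rough sketch of the command in question (BE and snapshot names
hypothetical) - piping through more at least keeps the messages on screen:

# zfs rollback -r rpool/ROOT/s10be@prepatch 2>&1 | more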
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] is opensolaris support ended?

2010-11-09 Thread sridhar surampudi
Hi,

I have downloaded and am using the OpenSolaris VirtualBox image, which
shows the versions below:

zfs version 3
zpool version 14

cat /etc/release shows
2009.06 snv_111b X86

Is this the final build available?
Can I upgrade it to a higher version of zfs/zpool?
Can I get an updated VDI image whose zfs/zpool has zpool split support?
Please help.
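
(For reference, the zfs/zpool versions a running build supports can be
listed with:

# zfs upgrade -v
# zpool upgrade -v

zpool split is not in snv_111b; it was integrated in a later build, so a
newer image would be needed.)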

Regards,
sridhar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename zpool

2010-11-09 Thread sridhar surampudi
zfs clone is at the zfs file system level. What I am looking for here is to
rebuild the file system stack from bottom to top. Once I take the (hardware)
snapshot, the snapshot devices carry the same copy of data and metadata.

If my snapshot device is dev2, then its metadata will still carry the pool
name smpoolsnap. If I need to use dev2 on the same machine, that throws an
error, since smpoolsnap is already present on dev1.
What I am looking for is this: if I can modify the metadata, I can use dev2
with an alternate name, so that all file systems become available under an
alternate zpool name.

As I mentioned below, I can do this for HP LVM or AIX LVM (recreatevg
command) if I create a snapshot at the array level.
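
(For what it's worth, "zpool import" can rename a pool at import time; a
sketch that uses the numeric pool id to pick out dev2's copy - the id shown
is hypothetical:

# zpool import -d /dev/dsk
# zpool import 1234567890123456 smpoolsnap2
)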

Thanks & Regards,
sridhar
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-09 Thread Maurice Volaski
>On 09/11/10 11:46 AM, Maurice Volaski wrote:
>> ...
>> 
>
>Is that horrendous mess Outlook's fault? If so, please consider not
>using it.

Yes, it is. :-( Outlook 2011 on the Mac, which just came out, so perhaps
I'll get lucky and they will fix it... eventually.

--
Maurice Volaski, maurice.vola...@einstein.yu.edu
Computing Support
Dominick P. Purpura Department of Neuroscience
Albert Einstein College of Medicine of Yeshiva University


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4540 RIP

2010-11-09 Thread Maurice Volaski
>http://www.supermicro.com/products/chassis/4U/?chs=847
>
>Stay away from the 24 port expander backplanes. I've gone thru several
>and they still don't work right - timeout and dropped drives under load.
>The 12-port works just fine connected to a variety of controllers. If you
>insist on the 24-port expander backplane, use a non-expander equipped LSI
>controller to drive it.

I was wondering if you can clarify. Isn't it the case that all 24-port
backplanes utilize expander chips directly on the backplane to support
their 24 ports, or are expanders utilized only when something else, such as
another 12-port backplane, is connected to one of the cascade ports in the
back?

What do you mean by a non-expander equipped LSI controller?

BTW, I have three Supermicro SC846 systems with 24-port backplanes, and I
haven't had any problems with them.

--
Maurice Volaski, maurice.vola...@einstein.yu.edu
Computing Support
Dominick P. Purpura Department of Neuroscience
Albert Einstein College of Medicine of Yeshiva University




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-09 Thread Maurice Volaski
I think my initial response got mangled. Oops.

>creating a ZFS pool out of files stored on another ZFS pool.  The main
>reasons that have been given for not doing this are unknown edge and
>corner cases that may lead to deadlocks, and that it creates a complex
>structure with potentially undesirable and unintended performance and
>reliability implications.

Computers are continually encountering unknown edge and corner cases in
the various things they do all the time. That's what we have testing for.


>Deadlocks may occur in low resource
>conditions.  If resources (disk space and RAM) never run low, the
>deadlock scenarios may not arise.

It sounds like you mean any low resource condition. Presumably, utilizing
complex pool structures like these will tax resources, but there are many
other ways to do that.



--
Maurice Volaski, maurice.vola...@einstein.yu.edu
Computing Support
Dominick P. Purpura Department of Neuroscience
Albert Einstein College of Medicine of Yeshiva University




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-09 Thread Toby Thain
On 09/11/10 11:46 AM, Maurice Volaski wrote:
> ...
> 

Is that horrendous mess Outlook's fault? If so, please consider not
using it.

--Toby
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-09 Thread Maurice Volaski
creating a ZFS pool out of files stored on another ZFS pool.  The main
reasons that have been given for not doing this are unknown edge and
corner cases that may lead to deadlocks, and that it creates a complex
structure with potentially undesirable and unintended performance and
reliability implications.

Computers are continually encountering unknown edge and corner cases in
the various things they do all the time. That's what we have testing for.

Deadlocks may occur in low resource
conditions.  If resources (disk space and RAM) never run low, the
deadlock scenarios may not arise.

It sounds like you mean any low resource condition. Presumably, utilizing
complex pool structures like these will tax resources, but there are many
other ways to do that.

--
Maurice Volaski, maurice.vola...@einstein.yu.edu
Computing Support
Dominick P. Purpura Department of Neuroscience
Albert Einstein College of Medicine of Yeshiva University


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4540 RIP

2010-11-09 Thread Ray Van Dolson
On Mon, Nov 08, 2010 at 11:51:02PM -0800, matthew patton wrote:
> > I have this with 36 2TB drives (and 2 separate boot drives).
> >
> > http://www.colfax-intl.com/jlrid/SpotLight_more_Acc.asp?L=134&S=58&B=2267
> 
> That's just a Supermicro SC847.
> 
> http://www.supermicro.com/products/chassis/4U/?chs=847
> 
> Stay away from the 24 port expander backplanes. I've gone thru
> several and they still don't work right - timeout and dropped drives
> under load. The 12-port works just fine connected to a variety of
> controllers. If you insist on the 24-port expander backplane, use a
> non-expander equipped LSI controller to drive it.

What do you mean by non-expander equipped LSI controller?

> 
> I got fed up with the 24-port expander board and went with -A1 (all
> independent) and that's worked much more reliably.

Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Khushil Dep
I think you may be wanting the same kind of thing that NexentaStor does when
it upgrades - it takes a snapshot and marks it as a checkpoint in case the
upgrade fails - right? I think you may have to snap, then clone from that,
and use beadm, though it's something you should play with...
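
A minimal sketch of that beadm flow (BE name hypothetical):

# beadm create pre-upgrade
(... run the upgrade; if it goes wrong ...)
# beadm activate pre-upgrade
# init 6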

---
W. A. Khushil Dep - khushil@gmail.com -  07905374843

Visit my blog at http://www.khushil.com/






On 9 November 2010 14:43, Oscar del Rio  wrote:

> On 11/ 9/10 01:47 AM, Peter Taps wrote:
>
>> My understanding is that there is a way to create a zfs "checkpoint"
>> before doing any system upgrade or installing new software. If there is a
>> problem, one can simply roll back to the stable checkpoint.
>>
>> I am familiar with snapshots and clones. However, I am not clear on how to
>> manage checkpoints. I would appreciate your help in how I can create,
>> destroy and roll back to a checkpoint, and how I can list all the
>> checkpoints.
>>
>>
> Boot environments are managed with beadm
> http://dlc.sun.com/osol/docs/content/dev/snapupgrade/create.html
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] changing zpool information

2010-11-09 Thread sridhar surampudi
Hi,

If I compare, a zpool is like a volume group or disk group; as an example,
on AIX we have AIX LVM.
AIX LVM provides commands like recreatevg that work by providing the
snapshot devices.

In the case of HP LVM or Linux LVM, we can create a new vg/lv structure,
add the snapshotted devices to it, and then import the vg.

I would like to know the zpool equivalent for snapshot devices once an
array snapshot is taken.

Thanks & Regards,
sridhar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool does not like iSCSI ?

2010-11-09 Thread Andreas Koppenhoefer
From Oracle Support we got the following info:

Bug ID: 6992124 reboot of Sol10 u9 host makes zpool FAULTED when zpool uses 
iscsi LUNs
This is a duplicate of:
Bug ID: 6907687 zfs pool is not automatically fixed when disk are brought back 
online or after boot

An IDR patch already exists, but no official patch yet.
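
(Until a patch ships, a manual-recovery sketch once the iSCSI LUNs are
reachable again - pool name hypothetical; an export/import cycle may also
work:

# zpool status -x
# zpool clear tank
)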

- Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Oscar del Rio

On 11/ 9/10 01:47 AM, Peter Taps wrote:

My understanding is that there is a way to create a zfs "checkpoint" before
doing any system upgrade or installing new software. If there is a problem,
one can simply roll back to the stable checkpoint.

I am familiar with snapshots and clones. However, I am not clear on how to 
manage checkpoints. I would appreciate your help in how I can create, destroy 
and roll back to a checkpoint, and how I can list all the checkpoints.



Boot environments are managed with beadm
http://dlc.sun.com/osol/docs/content/dev/snapupgrade/create.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Bryan Allen
Actually, he likely means Boot Environments. On OpenSolaris or Solaris 11 you
would use the pkg/beadm commands. Previous Solaris releases used Live Upgrade.

See the documentation for IPS.
-- 
bdha

On Nov 9, 2010, at 2:56, Tomas Ögren  wrote:

> On 08 November, 2010 - Peter Taps sent me these 0,7K bytes:
> 
>> Folks,
>> 
>> My understanding is that there is a way to create a zfs "checkpoint"
>> before doing any system upgrade or installing new software. If
>> there is a problem, one can simply roll back to the stable checkpoint.
>> 
>> I am familiar with snapshots and clones. However, I am not clear on
>> how to manage checkpoints. I would appreciate your help in how I can
>> create, destroy and roll back to a checkpoint, and how I can list all
>> the checkpoints.
> 
> You probably refer to snapshots, as ZFS does not have checkpoints (what
> you describe is pretty much the same as a snapshot).
> 
> /Tomas
> -- 
> Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
> |- Student at Computing Science, University of Umeå
> `- Sysadmin at {cs,acc}.umu.se
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss