Re: [zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-09 Thread Roman Morokutti
 Support will become available in the build 89 or 90
 time frame, at the same time that zfs as a root file
 system is supported.

I greatly appreciate this; there is nothing more to do
than wait for ZFS to become capable of Live Upgrade.

Roman
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Roman Morokutti
Hi,

after typing 

  # lucreate -n B85

I get the following error:

No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name BE1.
Current boot environment is named BE1.
Creating initial configuration for primary boot environment BE1.
ERROR: Unable to determine major and minor device numbers for root device 
tank/BE1.
ERROR: Cannot determine the physical boot device for the current boot 
environment BE1.
Use the -C command line option to specify the physical boot device for the 
current boot environment BE1.
ERROR: Cannot create configuration for primary boot environment.


I tried the -C option as well, but without success:

# lucreate -C c0d0s0 -n B85
ERROR: No such file or directory: cannot stat c0d0s0
ERROR: cannot use c0d0s0 as a boot device because it is not a block device
Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ]
[ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ]
[ -l error_log-file ] [ -M slice_list-file [ -M ... ] ]
[ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ]
[ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X ]
[ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ]
[ -z filter_list-file ]
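Judging from the error above, -C apparently expects a full block-device
path rather than a bare slice name. A sketch of what I mean (c0d0s0 is
the slice from this thread; adjust to your own layout):

```shell
# -C seems to want a /dev/dsk path, not a bare slice name like c0d0s0.
ls -l /dev/dsk/c0d0s0               # confirm the block-device node exists
lucreate -C /dev/dsk/c0d0s0 -n B85  # pass the full path to -C
```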


Could someone please tell me how to use lucreate?

Roman


Re: [zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Roman Morokutti
I further found out that a very similar problem is
described in Bug ID 6442921.

lubootdev reported:

# /etc/lib/lu/lubootdev -b
/dev/dsk/c0d0p0

Using this info for -C I got the following:

# lucreate -C /dev/dsk/c0d0p0 -n B85
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name BE1.
Current boot environment is named BE1.
Creating initial configuration for primary boot environment BE1.
INFORMATION: Unable to determine size or capacity of slice 
/dev/zvol/dsk/tank/swapvol.
ERROR: Unable to determine major and minor device numbers for root device 
tank/BE1.
INFORMATION: Unable to determine size or capacity of slice .
ERROR: Internal Configuration File /etc/lu/ICF.1 exists but has no contents.
ERROR: The file /etc/lu/ICF.1 specified by the -f option is not a valid ICF 
file.
ERROR: Cannot update boot environment configuration file with the current BE 
BE1 information.
ERROR: Cannot create configuration for primary boot environment.
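One thing that might be worth trying (a guess on my part, not a
documented fix): the empty /etc/lu/ICF.1 seems to block further
attempts, so removing it should at least let lucreate regenerate it:

```shell
# ICF.1 exists but is empty, and lucreate refuses to reuse it.
ls -l /etc/lu/ICF.1
rm /etc/lu/ICF.1                     # let lucreate rebuild the ICF
lucreate -C /dev/dsk/c0d0p0 -n B85   # retry with the lubootdev path
```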

Roman


Re: [zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Roman Morokutti
 I didn't think that we had live upgrade support for
 zfs root filesystem yet.
 

Original quote from Lori Alt:

ZFS is ideally suited to making “clone and
modify” fast, easy, and space-efficient. Both
“clone and modify” tools will work much better
if your root file system is ZFS. (The new install
tool will require it for some features.)

Roman


Re: [zfs-discuss] Is swap still needed on c0d0s1 to get crash dumps?

2008-02-11 Thread Roman Morokutti
Thank you for the info. So I can manage crash dumps
with dumpadm. And if ZFS cannot handle those dumps
yet, no matter: I will create an extra slice for that
purpose. No problem.
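For the record, managing the dump device with dumpadm would look
roughly like this (the slice name is just an example):

```shell
dumpadm                       # show the current dump configuration
dumpadm -d /dev/dsk/c0d0s1    # use a dedicated slice as the dump device
```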

Roman


[zfs-discuss] Is swap still needed on c0d0s1 to get crash dumps?

2008-02-07 Thread Roman Morokutti
Lori Alt writes in the netinstall README that a slice 
should be available for crash dumps. In order to get
this done the following line should be defined within
the profile:

filesys c0[t0]d0s1 auto swap

So my question is: is this still needed, and how does
one access a crash dump after it has occurred?
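As I understand it (assuming the default savecore directory), a dump
saved at boot after a panic can be inspected like this:

```shell
# savecore (run at boot) writes unix.N/vmcore.N into
# /var/crash/<hostname>; mdb can then open the pair.
cd /var/crash/`uname -n`
mdb unix.0 vmcore.0
```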

Roman


Re: [zfs-discuss] status of zfs boot netinstall kit

2008-02-06 Thread Roman Morokutti
Hi,

I would like to continue this (maybe a bit outdated) thread with
two questions:

   1. How do I create a netinstall image?
   2. How do I write the netinstall image back to DVD as ISO 9660
      (after patching it for zfsboot)?

Roman


Re: [zfs-discuss] How to get ZFS use the whole disk?

2008-02-04 Thread Roman Morokutti
Just another thought: after setting up a ZFS root on
slice c0d0s4 and booting into it, it should be possible
to add the remaining slices to the created ZFS pool.
Is this possible?
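If it is allowed at all, the command would presumably be zpool add
(pool and slice names are from this thread; a bootable root pool may
well refuse additional top-level vdevs, so treat this as a sketch):

```shell
# Hypothetical: grow the pool with another slice of the same disk.
zpool add tank c0d0s5
zpool status tank     # verify the new vdev was accepted
```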

Roman


[zfs-discuss] How to get ZFS use the whole disk?

2008-02-01 Thread Roman Morokutti
Hi,

I am new to ZFS and recently managed to get a ZFS root to work.
These were the steps I have done:

1. Installed b81 (fresh install)
2. Unmounted /second_root on c0d0s4
3. Removed /etc/vfstab entry of /second_root
4. Executed ./zfs-actual-root-install.sh c0d0s4
5. Rebooted (init 6)
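The steps above, roughly as the commands I ran (the script name is the
one shipped with the zfsboot kit):

```shell
umount /second_root
# (edit /etc/vfstab and delete the /second_root line)
./zfs-actual-root-install.sh c0d0s4
init 6
```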

After selecting the ZFS boot entry in GRUB, Solaris came up. Great.
Next I looked at how the slices were configured, and I saw that the
layout hasn't changed, even though slice 4 is now the ZFS root. What
would I have to do to get a layout where the zpool tank occupies the
whole disk, as presented by Lori Alt?

Roman


Re: [zfs-discuss] ZFSboot : Initial disk layout

2007-11-27 Thread Roman Morokutti
Hi,

thank you for your info. A netinstall would fit perfectly, but the
following text from the README keeps me from using it yet:

 Although this is not enforced yet, it is likely
 the required convention for dataset name will be this.

 pool-name/boot-environment-name[/directory in Solaris name space]

Is this kind of naming also possible? It would be desirable
precisely because of live upgrading the system.
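For illustration, the convention quoted from the README would give
dataset names like the following (pool and BE names are invented):

```shell
# pool-name/boot-environment-name[/directory in Solaris name space]
zfs create tank/BE1
zfs create tank/BE1/usr
zfs list -r tank      # show the resulting dataset hierarchy
```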


Otherwise I would have to do an ordinary installation with only one
root slice, switch to ZFS manually, and then create the BE pools.

--
Roman