Great, I will follow this, but I was wondering whether maybe I did not set up
my disk correctly? From what I understand, a ZFS root pool cannot be set up on
a whole disk the way other pools can, so I partitioned the disk so that all of
the space is in the s0 slice (format output below). Maybe that is not correct?

[10:03:45] [EMAIL PROTECTED]: /root > format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
        0. c1t0d0 <SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720>
           /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
        1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
           /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see zpool(1M).
/dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see zpool(1M).


FORMAT MENU:
         disk       - select a disk
         type       - select (define) a disk type
         partition  - select (define) a partition table
         current    - describe the current disk
         format     - format and analyze the disk
         repair     - repair a defective sector
         label      - write label to the disk
         analyze    - surface analysis
         defect     - defect list management
         backup     - search for backup labels
         verify     - read and display labels
         save       - save new disk/partition definitions
         inquiry    - show vendor, product and revision
         volname    - set 8-character volume name
         !<cmd>     - execute <cmd>, then return
         quit
format> verify

Primary label contents:

Volume name = <        >
ascii name  = <SUN36G cyl 24620 alt 2 hd 27 sec 107>
pcyl        = 24622
ncyl        = 24620
acyl        =    2
nhead       =   27
nsect       =  107
Part      Tag    Flag     Cylinders         Size            Blocks
   0       root    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
   1 unassigned    wu       0                0         (0/0/0)            0
   2     backup    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
   3 unassigned    wu       0                0         (0/0/0)            0
   4 unassigned    wu       0                0         (0/0/0)            0
   5 unassigned    wu       0                0         (0/0/0)            0
   6 unassigned    wu       0                0         (0/0/0)            0
   7 unassigned    wu       0                0         (0/0/0)            0

format>
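
For reference, the same label can also be dumped non-interactively, assuming
c1t1d0 is still the disk in question:

prtvtoc /dev/rdsk/c1t1d0s2
zpool status rootpool

prtvtoc prints the VTOC shown above, and zpool status confirms which slice the
pool was actually built on.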


On Wed, 5 Nov 2008, Enda O'Connor wrote:

> Hi
> Did you get a core dump? It would be nice to see the core file to get an
> idea of what dumped core. You might need to configure coreadm if that is not
> already done; run coreadm first, and if the output looks like
>
> # coreadm
>     global core file pattern: /var/crash/core.%f.%p
>     global core file content: default
>       init core file pattern: core
>       init core file content: default
>            global core dumps: enabled
>       per-process core dumps: enabled
>      global setid core dumps: enabled
> per-process setid core dumps: disabled
>     global core dump logging: enabled
>
> then all should be good, and cores should appear in /var/crash
>
> otherwise the following should configure coreadm:
> coreadm -g /var/crash/core.%f.%p
> coreadm -G all
> coreadm -e global
> coreadm -e process
>
>
> Run coreadm -u to load the new settings without rebooting.
>
> You might also need to set the core dump size limit via
> ulimit -c unlimited
> (check ulimit -a first).
>
> Then rerun the test and check /var/crash for the core dump.
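>
> A rough sketch of inspecting whatever lands there (core.<prog>.<pid> is just
> a placeholder for the pattern above):
>
> file /var/crash/core.<prog>.<pid>
> pstack /var/crash/core.<prog>.<pid>
>
> file shows which binary dumped core, and pstack prints its stack at the time
> of the crash.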
>
> If that fails, a truss, say
>
> truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool
>
> might give an indication; look for SIGBUS in the truss log.
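>
> For example, something like
>
> grep SIGBUS /tmp/truss.out
>
> will locate the signal in the log quickly.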
>
> NOTE: you might want to reset the coreadm and ulimit settings for core dumps
> after this, so as not to risk filling the system with core dumps if, say,
> some utility ends up dumping core in a loop.
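>
> A rough sketch of putting things back afterwards (assuming global dumps were
> disabled beforehand, which is the usual default):
>
> coreadm -d global
> ulimit -c 0
>
> (the ulimit change only lives in the shell where it was set in any case)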
>
>
> Enda
> On 11/05/08 13:46, Krzys wrote:
>> 
>> On Wed, 5 Nov 2008, Enda O'Connor wrote:
>> 
>>> On 11/05/08 13:02, Krzys wrote:
>>>> I am not sure what I did wrong, but I followed all the steps to get my
>>>> system moved from UFS to ZFS and now I am unable to boot it... can anyone
>>>> suggest what I could do to fix it?
>>>> 
>>>> here are all my steps:
>>>> 
>>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
>>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool
>>>> Analyzing system configuration.
>>>> Comparing source boot environment <ufsBE> file systems with the file
>>>> system(s) you specified for the new boot environment. Determining which
>>>> file systems should be in the new boot environment.
>>>> Updating boot environment description database on all BEs.
>>>> Updating system configuration files.
>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot 
>>>> environment; cannot get BE ID.
>>>> Creating configuration for boot environment <zfsBE>.
>>>> Source boot environment is <ufsBE>.
>>>> Creating boot environment <zfsBE>.
>>>> Creating file systems on boot environment <zfsBE>.
>>>> Creating <zfs> file system for </> in zone <global> on 
>>>> <rootpool/ROOT/zfsBE>.
>>>> Populating file systems on boot environment <zfsBE>.
>>>> Checking selection integrity.
>>>> Integrity check OK.
>>>> Populating contents of mount point </>.
>>>> Copying.
>>>> Bus Error - core dumped
>>> Hmm, the above might be relevant, I'd guess.
>>> 
>>> What release are you on, i.e. is this Solaris 10, or is it a Nevada build?
>>> 
>>> Enda
>>>> Creating shared file system mount points.
>>>> Creating compare databases for boot environment <zfsBE>.
>>>> Creating compare database for file system </var>.
>>>> Creating compare database for file system </usr>.
>>>> Creating compare database for file system </rootpool/ROOT>.
>>>> Creating compare database for file system </>.
>>>> Updating compare databases on boot environment <zfsBE>.
>>>> Making boot environment <zfsBE> bootable.
>> 
>> Anyway, I restarted the whole process and got that Bus Error again:
>> 
>> [07:59:01] [EMAIL PROTECTED]: /root > zpool create rootpool c1t1d0s0
>> [07:59:22] [EMAIL PROTECTED]: /root > zfs set compression=on rootpool/ROOT
>> cannot open 'rootpool/ROOT': dataset does not exist
>> [07:59:27] [EMAIL PROTECTED]: /root > zfs set compression=on rootpool
>> [07:59:31] [EMAIL PROTECTED]: /root > lucreate -c ufsBE -n zfsBE -p rootpool
>> Analyzing system configuration.
>> Comparing source boot environment <ufsBE> file systems with the file
>> system(s) you specified for the new boot environment. Determining which
>> file systems should be in the new boot environment.
>> Updating boot environment description database on all BEs.
>> Updating system configuration files.
>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot 
>> environment; cannot get BE ID.
>> Creating configuration for boot environment <zfsBE>.
>> Source boot environment is <ufsBE>.
>> Creating boot environment <zfsBE>.
>> Creating file systems on boot environment <zfsBE>.
>> Creating <zfs> file system for </> in zone <global> on 
>> <rootpool/ROOT/zfsBE>.
>> Populating file systems on boot environment <zfsBE>.
>> Checking selection integrity.
>> Integrity check OK.
>> Populating contents of mount point </>.
>> Copying.
>> Bus Error - core dumped
>> Creating shared file system mount points.
>> Creating compare databases for boot environment <zfsBE>.
>> Creating compare database for file system </var>.
>> Creating compare database for file system </usr>.
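>>
>> (Side note: rootpool/ROOT only exists once lucreate creates it, so the
>> compression property set on rootpool above simply gets inherited by it;
>> something like
>>
>> zfs get -r compression rootpool
>>
>> after the copy would confirm that, using the pool name above.)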
>> 
>> 
>> 
>
>
> -- 
> Enda O'Connor x19781  Software Product Engineering
> Patch System Test : Ireland : x19781/353-1-8199718
>