I upgraded my system from U5 to U6 from DVD and went through the upgrade process.
My file systems are set up as follows:
[10:11:54] [EMAIL PROTECTED]: /root > df -h | egrep -v "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr"
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       16G   7.2G   8.4G    47%    /
swap                   8.3G   1.5M   8.3G     1%    /etc/svc/volatile
/dev/dsk/c1t0d0s6       16G   8.7G   6.9G    56%    /usr
/dev/dsk/c1t0d0s1       16G   2.5G    13G    17%    /var
swap                   8.5G   229M   8.3G     3%    /tmp
swap                   8.3G    40K   8.3G     1%    /var/run
/dev/dsk/c1t0d0s7       78G   1.2G    76G     2%    /export/home
rootpool                33G    19K    21G     1%    /rootpool
rootpool/ROOT           33G    18K    21G     1%    /rootpool/ROOT
rootpool/ROOT/zfsBE     33G    31M    21G     1%    /.alt.tmp.b-UUb.mnt
/export/home            78G   1.2G    76G     2%    /.alt.tmp.b-UUb.mnt/export/home
/rootpool               21G    19K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool
/rootpool/ROOT          21G    18K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool/ROOT
swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/var/run
swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/tmp
[10:12:00] [EMAIL PROTECTED]: /root >


So I have /, /usr, /var and /export/home on that primary disk. The original disk is 
146GB and this new one is only 36GB, but utilization on the primary disk is much 
lower than its size, so everything should easily fit:

/ 7.2GB
/usr 8.7GB
/var 2.5GB
/export/home 1.2GB
total used: 19.6GB
I did notice that lucreate allocated 8GB to swap and 4GB to dump.
total space needed: 31.6GB
The total usable space on the new disk appears to be about 33.92GB, so the two 
numbers are uncomfortably close. To be safe I will switch to a 72GB disk and try 
again. I don't believe I need to match my main disk's 146GB, since I am not using 
anywhere near that much space, but let me try this; the tight fit might be why I am 
getting this problem...
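
Before I rerun lucreate on the bigger disk I'll double-check the numbers, roughly 
like this (c1t1d0 is my target disk here; using s2 as the whole-disk slice is just 
the usual convention on my layout, so adjust as needed):

prtvtoc /dev/rdsk/c1t1d0s2     # slice sizes on the target disk
zpool list rootpool            # pool size/free as ZFS sees it, once the pool is re-created
swap -l                        # current swap devices
dumpadm                        # current dump device configuration

That should tell me whether 72GB really leaves enough headroom for the copy plus 
the swap and dump volumes lucreate wants to create.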

On Wed, 5 Nov 2008, Enda O'Connor wrote:

> Hi Krzys
> Could you also send some info on the actual system,
> i.e. what it was upgraded to u6 from and how,
> and an idea of how the filesystems are laid out, i.e. is /usr separate from /
> and so on (maybe a df -k). You don't appear to have any zones installed, just
> to confirm.
> Enda
>
> On 11/05/08 14:07, Enda O'Connor wrote:
>> Hi
>> Did you get a core dump?
>> It would be nice to see the core file to get an idea of what dumped core;
>> you might configure coreadm if it's not already done.
>> Run coreadm first; if the output looks like
>> 
>> # coreadm
>>      global core file pattern: /var/crash/core.%f.%p
>>      global core file content: default
>>        init core file pattern: core
>>        init core file content: default
>>             global core dumps: enabled
>>        per-process core dumps: enabled
>>       global setid core dumps: enabled
>>  per-process setid core dumps: disabled
>>      global core dump logging: enabled
>> 
>> then all should be good, and cores should appear in /var/crash
>> 
>> otherwise the following should configure coreadm:
>> coreadm -g /var/crash/core.%f.%p
>> coreadm -G all
>> coreadm -e global
>> coreadm -e process
>> 
>> 
>> Run coreadm -u to load the new settings without rebooting.
>> 
>> You might also need to raise the allowed core file size via
>> ulimit -c unlimited
>> (check ulimit -a first).
>> 
>> Then rerun the test and check /var/crash for a core dump.
>> 
>> If that fails, a truss, say
>>   truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool
>> might give an indication; look for SIGBUS in the truss log.
>> 
>> NOTE that you might want to reset coreadm and the ulimit for core dumps
>> after this, so as not to risk filling the system with core dumps should
>> some utility end up dumping core in a loop.
>> 
>> 
>> Enda
>> On 11/05/08 13:46, Krzys wrote:
>>> 
>>> On Wed, 5 Nov 2008, Enda O'Connor wrote:
>>> 
>>>> On 11/05/08 13:02, Krzys wrote:
>>>>> I am not sure what I did wrong, but I followed all the steps to get
>>>>> my system moved from UFS to ZFS and now I am unable to boot it... can
>>>>> anyone suggest what I could do to fix it?
>>>>> 
>>>>> here are all my steps:
>>>>> 
>>>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
>>>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool
>>>>> Analyzing system configuration.
>>>>> Comparing source boot environment <ufsBE> file systems with the file
>>>>> system(s) you specified for the new boot environment. Determining which
>>>>> file systems should be in the new boot environment.
>>>>> Updating boot environment description database on all BEs.
>>>>> Updating system configuration files.
>>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot 
>>>>> environment; cannot get BE ID.
>>>>> Creating configuration for boot environment <zfsBE>.
>>>>> Source boot environment is <ufsBE>.
>>>>> Creating boot environment <zfsBE>.
>>>>> Creating file systems on boot environment <zfsBE>.
>>>>> Creating <zfs> file system for </> in zone <global> on 
>>>>> <rootpool/ROOT/zfsBE>.
>>>>> Populating file systems on boot environment <zfsBE>.
>>>>> Checking selection integrity.
>>>>> Integrity check OK.
>>>>> Populating contents of mount point </>.
>>>>> Copying.
>>>>> Bus Error - core dumped
>>>> Hmm, the above might be relevant, I'd guess.
>>>> 
>>>> What release are you on, i.e. is this Solaris 10, or is this a Nevada build?
>>>> 
>>>> Enda
>>>>> Creating shared file system mount points.
>>>>> Creating compare databases for boot environment <zfsBE>.
>>>>> Creating compare database for file system </var>.
>>>>> Creating compare database for file system </usr>.
>>>>> Creating compare database for file system </rootpool/ROOT>.
>>>>> Creating compare database for file system </>.
>>>>> Updating compare databases on boot environment <zfsBE>.
>>>>> Making boot environment <zfsBE> bootable.
>>> 
>>> Anyway, I restarted the whole process again, and I got that Bus Error
>>> again:
>>> 
>>> [07:59:01] [EMAIL PROTECTED]: /root > zpool create rootpool c1t1d0s0
>>> [07:59:22] [EMAIL PROTECTED]: /root > zfs set compression=on rootpool/ROOT
>>> cannot open 'rootpool/ROOT': dataset does not exist
>>> [07:59:27] [EMAIL PROTECTED]: /root > zfs set compression=on rootpool
>>> [07:59:31] [EMAIL PROTECTED]: /root > lucreate -c ufsBE -n zfsBE -p rootpool
>>> Analyzing system configuration.
>>> Comparing source boot environment <ufsBE> file systems with the file
>>> system(s) you specified for the new boot environment. Determining which
>>> file systems should be in the new boot environment.
>>> Updating boot environment description database on all BEs.
>>> Updating system configuration files.
>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot 
>>> environment; cannot get BE ID.
>>> Creating configuration for boot environment <zfsBE>.
>>> Source boot environment is <ufsBE>.
>>> Creating boot environment <zfsBE>.
>>> Creating file systems on boot environment <zfsBE>.
>>> Creating <zfs> file system for </> in zone <global> on 
>>> <rootpool/ROOT/zfsBE>.
>>> Populating file systems on boot environment <zfsBE>.
>>> Checking selection integrity.
>>> Integrity check OK.
>>> Populating contents of mount point </>.
>>> Copying.
>>> Bus Error - core dumped
>>> Creating shared file system mount points.
>>> Creating compare databases for boot environment <zfsBE>.
>>> Creating compare database for file system </var>.
>>> Creating compare database for file system </usr>.
>>> 
>>> 
>>> 
>> 
>> 
>
>
> -- 
> Enda O'Connor x19781  Software Product Engineering
> Patch System Test : Ireland : x19781/353-1-8199718
>
>
>
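
Once the new disk is in, I'll also set up core capture and the truss run along the 
lines Enda suggested above, roughly as follows (BE names and paths are just the ones 
from my earlier runs, and I'll reset coreadm and the ulimit afterwards as he notes):

coreadm -g /var/crash/core.%f.%p   # global core pattern
coreadm -G all                     # full core content
coreadm -e global                  # enable global core dumps
coreadm -e process                 # enable per-process core dumps
ulimit -c unlimited                # lift the core file size limit for this shell
truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool
grep SIGBUS /tmp/truss.out         # look for the signal in the truss log

If a core lands in /var/crash I'll post what actually dumped core.
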
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
