Hello Josh.

Anything on this?

If I can't resolve this issue, then it's a major bug!

I'll then need to figure out how to extract the information from the disks
of the KVMs and set up new KVMs. Or something.
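(If it does come to that: each KVM disk on SmartOS is backed by a ZFS zvol
under the zones pool, so the raw disk contents should be recoverable even if
the VM won't boot. A rough sketch only -- the "-disk0" dataset name follows
the usual convention but is an assumption here; verify with zfs list first:)

```shell
# List the zvols backing the zone's disks (dataset names vary; check first):
zfs list -t volume -r zones

# Snapshot a disk's zvol and stream it to a file for safekeeping
# (dataset name assumed from the usual <uuid>-disk0 convention):
zfs snapshot zones/c972de99-c811-4806-bbc2-186b1efe4181-disk0@rescue
zfs send zones/c972de99-c811-4806-bbc2-186b1efe4181-disk0@rescue \
    > /var/tmp/disk0.zfs

# Or copy the raw block device into a flat image usable elsewhere:
dd if=/dev/zvol/rdsk/zones/c972de99-c811-4806-bbc2-186b1efe4181-disk0 \
    of=/var/tmp/disk0.img bs=1M
```

The zfs send stream can be received into another pool with zfs recv; the dd
image is just the raw disk and can be attached to a new KVM or loop-mounted.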

Thanks.

Sam




On 24 April 2015 at 06:43, Sam M <[email protected]> wrote:

> Hello Josh,
>
> I installed "top" via pkgin in the GZ.
>
> I'm pasting the output of prstat -Z below, before and after launching a
> KVM that still works (I have shut down all other KVMs). QEMU is running.
>
> As you suggested, I launched the problem KVM in one console while watching
> prstat in another. QEMU did not come up. There's also no output to the log.
>
> # ls -l /zones/c972de99-c811-4806-bbc2-186b1efe4181/root/tmp/vm.log
> -rw-r--r--   1 root     root           0 Apr 24 01:00 /zones/c972de99-c811-4806-bbc2-186b1efe4181/root/tmp/vm.log
>
> I did find output appended to the log in
> /var/log/zone_bh.c972de99-c811-4806-bbc2-186b1efe4181. I'm attaching the
> log from only the most recent attempt to start the KVM.
>
> Thanks again.
>
> Sam
>
>
>
>
>
>
>    PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
>
>  18047 829      3787M  337M sleep    1    0   0:01:33 0.0% mongod/13
>     90 root        0K    0K sleep   99  -20   0:02:20 0.0% zpool-zones/229
>  24306 root     4316K 3268K sleep    1    0   0:00:00 0.0% bash/1
>  24303 root     6524K 4032K sleep    1    0   0:00:00 0.0% sshd/1
>  24322 root     4612K 3464K cpu1     1    0   0:00:00 0.0% prstat/1
>  16292 root       10M 4904K sleep   53    0   0:00:01 0.0% zoneadmd/5
>   2322 root     4496K 2732K sleep    1    0   0:00:00 0.0% inetd/3
>  20254 root     1960K 1300K sleep    1    0   0:00:00 0.0% ttymon/1
>   2210 root       54M   40M sleep    1    0   0:00:04 0.0% metadata/7
>    213 root     4516K 2880K sleep   29    0   0:00:01 0.0% devfsadm/8
>   2087 root     3880K 2564K sleep   29    0   0:00:00 0.0% picld/4
>   3616 root       59M   46M sleep    1    0   0:00:11 0.0% vmadmd/7
>   2331 root     1664K  956K sleep   29    0   0:00:00 0.0% utmpd/1
>    193 root     2268K 1484K sleep   29    0   0:00:00 0.0% powerd/4
>   2183 root     6864K 3092K sleep    1    0   0:00:02 0.0% ntpd/1
>   2209 root     1876K 1204K sleep    1    0   0:00:00 0.0% ctrun/1
>   2345 root     3088K 1964K sleep   59   -3   0:00:00 0.0% auditd/5
>    188 root     6356K 3032K sleep   29    0   0:00:00 0.0% syseventd/19
>   2455 root     6948K 2788K sleep    1    0   0:00:00 0.0% sendmail/1
>   2194 root     2216K 1096K sleep    1    0   0:00:00 0.0% cron/1
>   2479 noaccess 3556K 2388K sleep   29    0   0:00:00 0.0% smtp-notify/3
>   2103 root     2380K 1460K sleep   59    0   0:00:00 0.0% svc.ipfd/1
>   1231 root       11M 7412K sleep   29    0   0:00:01 0.0% nscd/32
>   2376 root     1876K 1368K sleep    1    0   0:00:00 0.0% vtdaemon/3
>   2105 root     3952K  904K sleep    1    0   0:00:01 0.0% ipmon/1
>   2166 root     2060K  836K sleep   29    0   0:00:00 0.0% iscsid/2
>   2334 noaccess 2500K 1564K sleep   29    0   0:00:00 0.0% mdnsd/1
>     62 root     2664K 1552K sleep   29    0   0:00:00 0.0% pfexecd/3
>     20 root     3040K 1692K sleep   29    0   0:00:00 0.0% dlmgmtd/7
>   1631 root     2640K  964K sleep   29    0   0:00:05 0.0% lldpd/1
>   2328 root     3632K 2792K sleep    1    0   0:00:00 0.0% rsyslogd/5
>     22 netadm   3756K 2152K sleep   29    0   0:00:00 0.0% ipmgmtd/3
>   2481 root     7648K 6356K sleep    1    0   0:00:03 0.0% intrd/1
> ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
>
>      0       53  353M  235M   1.4%   0:03:10 0.1% global
>
>     46       16 3839M  371M   2.3%   0:01:35 0.0% 58fb5a1d-a763-4228-a2b6-2c7*
>     42       17   64M   38M   0.2%   0:00:02 0.0% 11bf4bf1-7024-4b89-a6b7-652*
>     52       20  848M   65M   0.4%   0:00:04 0.0% 947650d7-b69c-4162-b3a0-34e*
>
> Same as above, after starting an Ubuntu KVM (one of the 2 out of 5 that
> still boot up):
>
>    PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
>
>  24426 root     1560M   32M cpu3     1    0   0:00:18  15% qemu-system-x86/3
>  18047 829      3787M  337M sleep    1    0   0:01:33 0.0% mongod/13
>   2315 root       37M   29M sleep    1    0   0:00:08 0.0% fmd/28
>   2210 root       54M   43M sleep   59    0   0:00:04 0.0% metadata/7
>     90 root        0K    0K sleep   99  -20   0:02:21 0.0% zpool-zones/229
>  24355 root     9484K 4648K sleep   55    0   0:00:00 0.0% zoneadmd/5
>     10 root       11M 9176K sleep   29    0   0:00:09 0.0% svc.configd/20
>  24458 root     4612K 3460K cpu1    59    0   0:00:00 0.0% prstat/1
>   3616 root       59M   47M sleep   59    0   0:00:11 0.0% vmadmd/7
>    213 root     4516K 2880K sleep   29    0   0:00:01 0.0% devfsadm/8
>   2322 root     4496K 2732K sleep    1    0   0:00:00 0.0% inetd/3
>  20254 root     1960K 1300K sleep    1    0   0:00:00 0.0% ttymon/1
>   2087 root     3880K 2564K sleep   29    0   0:00:00 0.0% picld/4
>   2331 root     1664K  956K sleep   29    0   0:00:00 0.0% utmpd/1
>    193 root     2268K 1484K sleep   29    0   0:00:00 0.0% powerd/4
>   2183 root     6864K 3092K sleep    1    0   0:00:02 0.0% ntpd/1
>   2209 root     1876K 1204K sleep    1    0   0:00:00 0.0% ctrun/1
>   2345 root     3088K 1964K sleep   59   -3   0:00:00 0.0% auditd/5
>    188 root     6356K 3032K sleep   29    0   0:00:00 0.0% syseventd/19
>   2455 root     6948K 2788K sleep    1    0   0:00:00 0.0% sendmail/1
>   2194 root     2216K 1096K sleep    1    0   0:00:00 0.0% cron/1
>   2479 noaccess 3556K 2388K sleep   29    0   0:00:00 0.0% smtp-notify/3
>   2103 root     2380K 1460K sleep   59    0   0:00:00 0.0% svc.ipfd/1
>   1231 root       11M 7412K sleep   29    0   0:00:01 0.0% nscd/32
>   2376 root     1876K 1368K sleep    1    0   0:00:00 0.0% vtdaemon/3
>   2105 root     3952K  904K sleep    1    0   0:00:01 0.0% ipmon/1
>   2166 root     2060K  836K sleep   29    0   0:00:00 0.0% iscsid/2
>   2334 noaccess 2500K 1564K sleep   29    0   0:00:00 0.0% mdnsd/1
>     62 root     2664K 1552K sleep   29    0   0:00:00 0.0% pfexecd/3
>     20 root     3040K 1692K sleep   29    0   0:00:00 0.0% dlmgmtd/7
>   1631 root     2640K  964K sleep   29    0   0:00:05 0.0% lldpd/1
>   2328 root     3632K 2792K sleep    1    0   0:00:00 0.0% rsyslogd/5
>     22 netadm   3756K 2152K sleep   29    0   0:00:00 0.0% ipmgmtd/3
> ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
>
>     59        2 1560M   32M   0.2%   0:00:18  15% 20b8c32c-b800-4651-97bf-d99*
>      0       54  362M  242M   1.4%   0:03:11 0.1% global
>
>     46       16 3839M  371M   2.3%   0:01:35 0.0% 58fb5a1d-a763-4228-a2b6-2c7*
>     42       17   64M   38M   0.2%   0:00:02 0.0% 11bf4bf1-7024-4b89-a6b7-652*
>     52       20  848M   65M   0.4%   0:00:04 0.0% 947650d7-b69c-4162-b3a0-34e*
>
>
>
> On 24 April 2015 at 00:06, Josh Wilsdon <[email protected]> wrote:
>
>>
>> I don't think that's the problem in my case.
>>>
>>> Just checked anyway. From command "top" in the GZ -
>>>
>>>
>> Are you sure this is SmartOS? SmartOS doesn't have "top" in the GZ as far
>> as I know...
>>
>> If you can't find any logs in /zones/<uuid>/root/tmp/vm.log*, can you
>> check if there's anything in /var/log/zone_bh.<uuid> and/or
>> /var/log/sdc/upload/qemu_<uuid>*?
>>
>> Also: if you run 'vmadm boot <uuid>' from one console and log in on
>> another console, do you see the qemu-system-x86_64 process running between
>> the time you start the boot and the timeout?
>>
>> This should help narrow things down a bit.
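The two-console check above can also be scripted from a single console. A
sketch only, run in the GZ, using the problem zone's UUID from this thread
(the polling loop is an illustration, not a supported tool):

```shell
# Boot the VM in the background and watch for its qemu process:
UUID=c972de99-c811-4806-bbc2-186b1efe4181

vmadm boot "$UUID" &          # same command as above, backgrounded
BOOT_PID=$!

# Poll until the boot command returns (success or timeout):
while kill -0 "$BOOT_PID" 2>/dev/null; do
    pgrep -fl qemu-system-x86 || echo "no qemu process yet"
    sleep 1
done

# Then check the logs mentioned above for clues:
tail /zones/"$UUID"/root/tmp/vm.log /var/log/zone_bh."$UUID" 2>/dev/null
```

If qemu never appears in the pgrep output, the failure is happening before
the emulator is even exec'd, which points at the zone boot plumbing rather
than the guest.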
>>
>> Josh
>>
>
>



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
