Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-04 Thread Kirk Kosinski
> And what about the virsh pool-list result, which does not reflect the
> actual IDs of the secondary storage we see in the CloudStack GUI? Is it
> supposed to be that way? If not, how do we rectify it?

This is normal. More than one pool pointing to secondary storage can exist
at a time, so they can't all be named after the ID in CloudStack.
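
If it helps to tell which pool maps to which export, something along these
lines (a rough sketch, not from the thread) dumps the source of every pool
libvirt knows about:

# For each defined libvirt storage pool, print the source/target elements
# of its XML so the pool UUIDs can be matched to NFS exports or local dirs.
for pool in $(virsh pool-list --all | awk 'NR > 2 && NF { print $1 }'); do
    echo "== ${pool} =="
    virsh pool-dumpxml "${pool}" | grep -E '<source|<host|<dir|<path'
done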

Best regards,
Kirk

On 10/03/2013 01:30 PM, Indra Pramana wrote:
> Hi Marcus,
> 
> Good day to you, and thank you for your e-mail.
> 
> We will try what you have suggested and see if we can fix the
> problem. I am looking into increasing the size of the root filesystem on
> the KVM hosts to see if that resolves the problem for now.
> 
> Thank you very much for your help!
> 
> Cheers.
> 
> 
> 
> On Fri, Oct 4, 2013 at 2:32 AM, Marcus Sorensen  wrote:
> 
>> You'll have to go back further in the log then, and/or go to the agent
>> log, because the issue of the ghost mount is due to the filesystem
>> being full, and if that's your secondary storage I'm not sure why it
>> would be written to from the kvm host unless you were doing backups or
>> snapshots.
>>
>> On Thu, Oct 3, 2013 at 12:03 PM, Indra Pramana  wrote:
>>> Hi Marcus,
>>>
>>> It happens randomly: mounting fails, and then / gets filled. I suspect
>>> that when the mount fails, any files that are supposed to be
>>> transferred to the mount point (e.g. templates used to create VMs)
>>> fill up the / partition of the KVM host instead. This didn't happen in
>>> 4.1.1; it only started after we upgraded to 4.2.0.
>>>
>>> And what about the virsh pool-list result, which does not reflect the
>>> actual IDs of the secondary storage we see in the CloudStack GUI? Is it
>>> supposed to be that way? If not, how do we rectify it?
>>>
>>> Looking forward to your reply, thank you.
>>>
>>> Cheers.
>>>
>>>
>>>
>>>
>>> On Fri, Oct 4, 2013 at 1:51 AM, Marcus Sorensen 
>> wrote:
>>>
 It sort of looks like the out of space triggered the issue. Libvirt
>> shows

 2013-10-03 15:38:57.414+: 2710: error : virCommandWait:2348 :
 internal error Child process (/bin/umount
 /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669) unexpected exit status 32:
 error writing /etc/mtab.tmp: No space left on device

 So that entry of the filesystem being mounted is orphaned in
 /etc/mtab, since it can't be removed. That seems to be the source of
 why 'df' shows the thing mounted when it isn't, at least.


 On Thu, Oct 3, 2013 at 11:45 AM, Indra Pramana  wrote:
> Hi Marcus and all,
>
> I also find some strange and interesting error messages from the
>> libvirt
> logs:
>
> http://pastebin.com/5ByfNpAf
>
> Looking forward to your reply, thank you.
>
> Cheers.
>
>
>
> On Fri, Oct 4, 2013 at 1:38 AM, Indra Pramana  wrote:
>
>> Hi Marcus,
>>
>> Good day to you, and thank you for your e-mail. See my reply inline.
>>
>> On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen <
>> shadow...@gmail.com
> wrote:
>>
>>> Are you using the release artifacts, or your own 4.2 build?
>>>
>>
>> [Indra:] We are using the release artifacts from below repo since we
>> are
>> using Ubuntu:
>>
>> deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
>>
>>
>>> When you rebooted the host, did the problem go away or come back the
>>> same?
>>
>>
>> [Indra:] When I rebooted the host, the problem went away for a while,
>> but it comes back again after some time. It appears randomly, whenever
>> we need to create a new instance on that host or start an existing
>> stopped instance.
>>
>>
>>> You may want to look at 'virsh pool-list' to see if libvirt is
>>> mounting/registering the secondary storage.
>>>
>>
>> [Indra:] This is the result of the virsh pool-list command:
>>
>> root@hv-kvm-02:/var/log/libvirt# virsh pool-list
>> Name State  Autostart
>> -
>> 301071ac-4c1d-4eac-855b-124126da0a38 active no
>> 5230667e-9c58-3ff6-983c-5fc2a72df669 active no
>> d433809b-01ea-3947-ba0f-48077244e4d6 active no
>>
>> Strange thing is that none of my secondary storage IDs are there.
>> Could
 it
>> be that the ID might have changed during Cloudstack upgrade? Here is
>> the
>> list of my secondary storage (there are two of them) even though they
 are
>> on the same NFS server:
>>   c02da448-b9f4-401b-b8d5-83e8ead5cfde nfs://
>> 10.237.11.31/mnt/vol1/sec-storage NFS
>> 5937edb6-2e95-4ae2-907b-80fe4599ed87 nfs://
>> 10.237.11.31/mnt/vol1/sec-storage2 NFS
>>
>>> Is this happening on multiple hosts, the same way?
>>
>>
>> [Indra:] Yes, it is happening on both of the hosts that I have, in the
>> same way.
>>
>>
>>> You may want to
>>> look at /etc/mtab, if the system reports i

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Indra Pramana
Hi Marcus,

Good day to you, and thank you for your e-mail.

We will try what you have suggested and see if we can fix the
problem. I am looking into increasing the size of the root filesystem on the
KVM hosts to see if that resolves the problem for now.

Thank you very much for your help!

Cheers.



On Fri, Oct 4, 2013 at 2:32 AM, Marcus Sorensen  wrote:

> You'll have to go back further in the log then, and/or go to the agent
> log, because the issue of the ghost mount is due to the filesystem
> being full, and if that's your secondary storage I'm not sure why it
> would be written to from the kvm host unless you were doing backups or
> snapshots.
>
> On Thu, Oct 3, 2013 at 12:03 PM, Indra Pramana  wrote:
> > Hi Marcus,
> >
> > It happens randomly: mounting fails, and then / gets filled. I suspect
> > that when the mount fails, any files that are supposed to be
> > transferred to the mount point (e.g. templates used to create VMs)
> > fill up the / partition of the KVM host instead. This didn't happen in
> > 4.1.1; it only started after we upgraded to 4.2.0.
> >
> > And what about the virsh pool-list result, which does not reflect the
> > actual IDs of the secondary storage we see in the CloudStack GUI? Is it
> > supposed to be that way? If not, how do we rectify it?
> >
> > Looking forward to your reply, thank you.
> >
> > Cheers.
> >
> >
> >
> >
> > On Fri, Oct 4, 2013 at 1:51 AM, Marcus Sorensen 
> wrote:
> >
> >> It sort of looks like the out of space triggered the issue. Libvirt
> shows
> >>
> >> 2013-10-03 15:38:57.414+: 2710: error : virCommandWait:2348 :
> >> internal error Child process (/bin/umount
> >> /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669) unexpected exit status 32:
> >> error writing /etc/mtab.tmp: No space left on device
> >>
> >> So that entry of the filesystem being mounted is orphaned in
> >> /etc/mtab, since it can't be removed. That seems to be the source of
> >> why 'df' shows the thing mounted when it isn't, at least.
> >>
> >>
> >> On Thu, Oct 3, 2013 at 11:45 AM, Indra Pramana  wrote:
> >> > Hi Marcus and all,
> >> >
> >> > I also find some strange and interesting error messages from the
> libvirt
> >> > logs:
> >> >
> >> > http://pastebin.com/5ByfNpAf
> >> >
> >> > Looking forward to your reply, thank you.
> >> >
> >> > Cheers.
> >> >
> >> >
> >> >
> >> > On Fri, Oct 4, 2013 at 1:38 AM, Indra Pramana  wrote:
> >> >
> >> >> Hi Marcus,
> >> >>
> >> >> Good day to you, and thank you for your e-mail. See my reply inline.
> >> >>
> >> >> On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen <
> shadow...@gmail.com
> >> >wrote:
> >> >>
> >> >>> Are you using the release artifacts, or your own 4.2 build?
> >> >>>
> >> >>
> >> >> [Indra:] We are using the release artifacts from below repo since we
> are
> >> >> using Ubuntu:
> >> >>
> >> >> deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
> >> >>
> >> >>
> >> >>> When you rebooted the host, did the problem go away or come back the
> >> >>> same?
> >> >>
> >> >>
> >> >> [Indra:] When I rebooted the host, the problem went away for a while,
> >> >> but it comes back again after some time. It appears randomly, whenever
> >> >> we need to create a new instance on that host or start an existing
> >> >> stopped instance.
> >> >>
> >> >>
> >> >>> You may want to look at 'virsh pool-list' to see if libvirt is
> >> >>> mounting/registering the secondary storage.
> >> >>>
> >> >>
> >> >> [Indra:] This is the result of the virsh pool-list command:
> >> >>
> >> >> root@hv-kvm-02:/var/log/libvirt# virsh pool-list
> >> >> Name State  Autostart
> >> >> -
> >> >> 301071ac-4c1d-4eac-855b-124126da0a38 active no
> >> >> 5230667e-9c58-3ff6-983c-5fc2a72df669 active no
> >> >> d433809b-01ea-3947-ba0f-48077244e4d6 active no
> >> >>
> >> >> Strange thing is that none of my secondary storage IDs are there.
> Could
> >> it
> >> >> be that the ID might have changed during Cloudstack upgrade? Here is
> the
> >> >> list of my secondary storage (there are two of them) even though they
> >> are
> >> >> on the same NFS server:
> >> >>   c02da448-b9f4-401b-b8d5-83e8ead5cfde nfs://
> >> >> 10.237.11.31/mnt/vol1/sec-storage NFS
> >> >> 5937edb6-2e95-4ae2-907b-80fe4599ed87 nfs://
> >> >> 10.237.11.31/mnt/vol1/sec-storage2 NFS
> >> >>
> >> >>> Is this happening on multiple hosts, the same way?
> >> >>
> >> >>
> >> >> [Indra:] Yes, it is happening on both of the hosts that I have, in the
> >> >> same way.
> >> >>
> >> >>
> >> >>> You may want to
> >> >>> look at /etc/mtab, if the system reports it's mounted, though it's
> >> >>> not, it might be in there. Look at /proc/mounts as well.
> >> >>>
> >> >>
> >> >> [Indra:] Please find result of df, /etc/mtab and /proc/mounts below.
> The
> >> >> "ghost" mount point is on df and /etc/mtab, but not on /proc/mounts.
> >> >>
> >> >> root@hv-kvm-02:/etc# df
> >> >>
> >> >> Filesystem  1K

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Marcus Sorensen
You'll have to go back further in the log then, and/or go to the agent
log, because the issue of the ghost mount is due to the filesystem
being full, and if that's your secondary storage I'm not sure why it
would be written to from the kvm host unless you were doing backups or
snapshots.
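
If it helps, on an Ubuntu KVM host the agent log is normally under
/var/log/cloudstack/agent/ (worth double-checking on your install), so
something like this rough sketch would pull out the storage-related errors:

# Look for mount/secondary-storage errors around the time the ghost mount
# appeared (log path assumed; adjust if your agent logs elsewhere).
grep -iE 'secondary|mount|no space left' /var/log/cloudstack/agent/agent.log | tail -n 50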

On Thu, Oct 3, 2013 at 12:03 PM, Indra Pramana  wrote:
> Hi Marcus,
>
> It happens randomly: mounting fails, and then / gets filled. I suspect that
> when the mount fails, any files that are supposed to be
> transferred to the mount point (e.g. templates used to create VMs)
> fill up the / partition of the KVM host instead. This didn't happen in
> 4.1.1; it only started after we upgraded to 4.2.0.
>
> And what about the virsh pool-list result, which does not reflect the actual
> IDs of the secondary storage we see in the CloudStack GUI? Is it supposed to
> be that way? If not, how do we rectify it?
>
> Looking forward to your reply, thank you.
>
> Cheers.
>
>
>
>
> On Fri, Oct 4, 2013 at 1:51 AM, Marcus Sorensen  wrote:
>
>> It sort of looks like the out of space triggered the issue. Libvirt shows
>>
>> 2013-10-03 15:38:57.414+: 2710: error : virCommandWait:2348 :
>> internal error Child process (/bin/umount
>> /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669) unexpected exit status 32:
>> error writing /etc/mtab.tmp: No space left on device
>>
>> So that entry of the filesystem being mounted is orphaned in
>> /etc/mtab, since it can't be removed. That seems to be the source of
>> why 'df' shows the thing mounted when it isn't, at least.
>>
>>
>> On Thu, Oct 3, 2013 at 11:45 AM, Indra Pramana  wrote:
>> > Hi Marcus and all,
>> >
>> > I also find some strange and interesting error messages from the libvirt
>> > logs:
>> >
>> > http://pastebin.com/5ByfNpAf
>> >
>> > Looking forward to your reply, thank you.
>> >
>> > Cheers.
>> >
>> >
>> >
>> > On Fri, Oct 4, 2013 at 1:38 AM, Indra Pramana  wrote:
>> >
>> >> Hi Marcus,
>> >>
>> >> Good day to you, and thank you for your e-mail. See my reply inline.
>> >>
>> >> On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen > >wrote:
>> >>
>> >>> Are you using the release artifacts, or your own 4.2 build?
>> >>>
>> >>
>> >> [Indra:] We are using the release artifacts from below repo since we are
>> >> using Ubuntu:
>> >>
>> >> deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
>> >>
>> >>
>> >>> When you rebooted the host, did the problem go away or come back the
>> >>> same?
>> >>
>> >>
>> >> [Indra:] When I rebooted the host, the problem went away for a while, but
>> >> it comes back again after some time. It appears randomly, whenever we
>> >> need to create a new instance on that host or start an existing
>> >> stopped instance.
>> >>
>> >>
>> >>> You may want to look at 'virsh pool-list' to see if libvirt is
>> >>> mounting/registering the secondary storage.
>> >>>
>> >>
>> >> [Indra:] This is the result of the virsh pool-list command:
>> >>
>> >> root@hv-kvm-02:/var/log/libvirt# virsh pool-list
>> >> Name State  Autostart
>> >> -
>> >> 301071ac-4c1d-4eac-855b-124126da0a38 active no
>> >> 5230667e-9c58-3ff6-983c-5fc2a72df669 active no
>> >> d433809b-01ea-3947-ba0f-48077244e4d6 active no
>> >>
>> >> Strange thing is that none of my secondary storage IDs are there. Could
>> it
>> >> be that the ID might have changed during Cloudstack upgrade? Here is the
>> >> list of my secondary storage (there are two of them) even though they
>> are
>> >> on the same NFS server:
>> >>   c02da448-b9f4-401b-b8d5-83e8ead5cfde nfs://
>> >> 10.237.11.31/mnt/vol1/sec-storage NFS
>> >> 5937edb6-2e95-4ae2-907b-80fe4599ed87 nfs://
>> >> 10.237.11.31/mnt/vol1/sec-storage2 NFS
>> >>
>> >>> Is this happening on multiple hosts, the same way?
>> >>
>> >>
>> >> [Indra:] Yes, it is happening on both of the hosts that I have, in the
>> >> same way.
>> >>
>> >>
>> >>> You may want to
>> >>> look at /etc/mtab, if the system reports it's mounted, though it's
>> >>> not, it might be in there. Look at /proc/mounts as well.
>> >>>
>> >>
>> >> [Indra:] Please find result of df, /etc/mtab and /proc/mounts below. The
>> >> "ghost" mount point is on df and /etc/mtab, but not on /proc/mounts.
>> >>
>> >> root@hv-kvm-02:/etc# df
>> >>
>> >> Filesystem  1K-blocks
>>  Used
>> >> Available Use% Mounted on
>> >> /dev/sda1 4195924
>> >> 4192372 0 100% /
>> >>
>> >> udev132053356
>> 4
>> >> 132053352   1% /dev
>> >> tmpfs52825052
>> 704
>> >> 52824348   1% /run
>> >>
>> >> none 5120
>> >> 0  5120   0% /run/lock
>> >> none132062620
>> 0
>> >> 132062620   0% /run/shm
>> >> cgroup  132062620

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Indra Pramana
Hi Marcus,

It happens randomly: mounting fails, and then / gets filled. I suspect that
when the mount fails, any files that are supposed to be
transferred to the mount point (e.g. templates used to create VMs)
fill up the / partition of the KVM host instead. This didn't happen in
4.1.1; it only started after we upgraded to 4.2.0.
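
If that suspicion is right, a guard like the following (just a sketch; the
template file name is made up for illustration) would at least show whether
the directory is really backed by the NFS export before anything is copied
into it:

# Only copy a template into the secondary-storage mount point if it is
# actually a mounted filesystem; otherwise the data lands on / instead.
MNT=/mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
if mountpoint -q "$MNT"; then
    cp /tmp/template.qcow2 "$MNT/"   # hypothetical template file
else
    echo "$MNT is not mounted; refusing to write onto /" >&2
fi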

And what about the virsh pool-list result, which does not reflect the actual
IDs of the secondary storage we see in the CloudStack GUI? Is it supposed to
be that way? If not, how do we rectify it?

Looking forward to your reply, thank you.

Cheers.




On Fri, Oct 4, 2013 at 1:51 AM, Marcus Sorensen  wrote:

> It sort of looks like the out of space triggered the issue. Libvirt shows
>
> 2013-10-03 15:38:57.414+: 2710: error : virCommandWait:2348 :
> internal error Child process (/bin/umount
> /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669) unexpected exit status 32:
> error writing /etc/mtab.tmp: No space left on device
>
> So that entry of the filesystem being mounted is orphaned in
> /etc/mtab, since it can't be removed. That seems to be the source of
> why 'df' shows the thing mounted when it isn't, at least.
>
>
> On Thu, Oct 3, 2013 at 11:45 AM, Indra Pramana  wrote:
> > Hi Marcus and all,
> >
> > I also find some strange and interesting error messages from the libvirt
> > logs:
> >
> > http://pastebin.com/5ByfNpAf
> >
> > Looking forward to your reply, thank you.
> >
> > Cheers.
> >
> >
> >
> > On Fri, Oct 4, 2013 at 1:38 AM, Indra Pramana  wrote:
> >
> >> Hi Marcus,
> >>
> >> Good day to you, and thank you for your e-mail. See my reply inline.
> >>
> >> On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen  >wrote:
> >>
> >>> Are you using the release artifacts, or your own 4.2 build?
> >>>
> >>
> >> [Indra:] We are using the release artifacts from below repo since we are
> >> using Ubuntu:
> >>
> >> deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
> >>
> >>
> >>> When you rebooted the host, did the problem go away or come back the
> >>> same?
> >>
> >>
> >> [Indra:] When I rebooted the host, the problem went away for a while, but
> >> it comes back again after some time. It appears randomly, whenever we
> >> need to create a new instance on that host or start an existing
> >> stopped instance.
> >>
> >>
> >>> You may want to look at 'virsh pool-list' to see if libvirt is
> >>> mounting/registering the secondary storage.
> >>>
> >>
> >> [Indra:] This is the result of the virsh pool-list command:
> >>
> >> root@hv-kvm-02:/var/log/libvirt# virsh pool-list
> >> Name State  Autostart
> >> -
> >> 301071ac-4c1d-4eac-855b-124126da0a38 active no
> >> 5230667e-9c58-3ff6-983c-5fc2a72df669 active no
> >> d433809b-01ea-3947-ba0f-48077244e4d6 active no
> >>
> >> Strange thing is that none of my secondary storage IDs are there. Could
> it
> >> be that the ID might have changed during Cloudstack upgrade? Here is the
> >> list of my secondary storage (there are two of them) even though they
> are
> >> on the same NFS server:
> >>   c02da448-b9f4-401b-b8d5-83e8ead5cfde nfs://
> >> 10.237.11.31/mnt/vol1/sec-storage NFS
> >> 5937edb6-2e95-4ae2-907b-80fe4599ed87 nfs://
> >> 10.237.11.31/mnt/vol1/sec-storage2 NFS
> >>
> >>> Is this happening on multiple hosts, the same way?
> >>
> >>
> >> [Indra:] Yes, it is happening on both of the hosts that I have, in the
> >> same way.
> >>
> >>
> >>> You may want to
> >>> look at /etc/mtab, if the system reports it's mounted, though it's
> >>> not, it might be in there. Look at /proc/mounts as well.
> >>>
> >>
> >> [Indra:] Please find result of df, /etc/mtab and /proc/mounts below. The
> >> "ghost" mount point is on df and /etc/mtab, but not on /proc/mounts.
> >>
> >> root@hv-kvm-02:/etc# df
> >>
> >> Filesystem  1K-blocks
>  Used
> >> Available Use% Mounted on
> >> /dev/sda1 4195924
> >> 4192372 0 100% /
> >>
> >> udev132053356
> 4
> >> 132053352   1% /dev
> >> tmpfs52825052
> 704
> >> 52824348   1% /run
> >>
> >> none 5120
> >> 0  5120   0% /run/lock
> >> none132062620
> 0
> >> 132062620   0% /run/shm
> >> cgroup  132062620
> 0
> >> 132062620   0% /sys/fs/cgroup
> >> /dev/sda610650544
> >> 2500460   7609056  25% /home
> >> 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288   4195924
> >> 4192372 0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
> >>
> >> root@hv-kvm-02:/etc# cat /etc/mtab
> >> /dev/sda1 / ext4 rw,errors=remount-ro 0 0
> >> proc /proc proc rw,noexec,nosuid,nodev 0 0
> >> sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
> >> none /sys/fs/f

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Marcus Sorensen
It sort of looks like the out of space triggered the issue. Libvirt shows

2013-10-03 15:38:57.414+: 2710: error : virCommandWait:2348 :
internal error Child process (/bin/umount
/mnt/5230667e-9c58-3ff6-983c-5fc2a72df669) unexpected exit status 32:
error writing /etc/mtab.tmp: No space left on device

So that entry for the mounted filesystem is orphaned in
/etc/mtab, since it can't be removed. That seems to be why
'df' shows the thing mounted when it isn't, at least.
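
Since the ghost entry shows up in /etc/mtab but not in /proc/mounts, one way
to spot (and, carefully, clear) such an orphaned entry is something like this
rough sketch:

# Show mount points listed in /etc/mtab that the kernel does not know about.
comm -23 <(awk '{print $2}' /etc/mtab | sort) \
         <(awk '{print $2}' /proc/mounts | sort)

# If /etc/mtab is a plain file (not a symlink to /proc/mounts), it can be
# regenerated from the kernel's table once there is free space again:
#   cp /proc/mounts /etc/mtab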


On Thu, Oct 3, 2013 at 11:45 AM, Indra Pramana  wrote:
> Hi Marcus and all,
>
> I also find some strange and interesting error messages from the libvirt
> logs:
>
> http://pastebin.com/5ByfNpAf
>
> Looking forward to your reply, thank you.
>
> Cheers.
>
>
>
> On Fri, Oct 4, 2013 at 1:38 AM, Indra Pramana  wrote:
>
>> Hi Marcus,
>>
>> Good day to you, and thank you for your e-mail. See my reply inline.
>>
>> On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen wrote:
>>
>>> Are you using the release artifacts, or your own 4.2 build?
>>>
>>
>> [Indra:] We are using the release artifacts from below repo since we are
>> using Ubuntu:
>>
>> deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
>>
>>
>>> When you rebooted the host, did the problem go away or come back the
>>> same?
>>
>>
>> [Indra:] When I rebooted the host, the problem went away for a while, but it
>> comes back again after some time. It appears randomly, whenever we need to
>> create a new instance on that host or start an existing
>> stopped instance.
>>
>>
>>> You may want to look at 'virsh pool-list' to see if libvirt is
>>> mounting/registering the secondary storage.
>>>
>>
>> [Indra:] This is the result of the virsh pool-list command:
>>
>> root@hv-kvm-02:/var/log/libvirt# virsh pool-list
>> Name State  Autostart
>> -
>> 301071ac-4c1d-4eac-855b-124126da0a38 active no
>> 5230667e-9c58-3ff6-983c-5fc2a72df669 active no
>> d433809b-01ea-3947-ba0f-48077244e4d6 active no
>>
>> Strange thing is that none of my secondary storage IDs are there. Could it
>> be that the ID might have changed during Cloudstack upgrade? Here is the
>> list of my secondary storage (there are two of them) even though they are
>> on the same NFS server:
>>   c02da448-b9f4-401b-b8d5-83e8ead5cfde nfs://
>> 10.237.11.31/mnt/vol1/sec-storage NFS
>> 5937edb6-2e95-4ae2-907b-80fe4599ed87 nfs://
>> 10.237.11.31/mnt/vol1/sec-storage2 NFS
>>
>>> Is this happening on multiple hosts, the same way?
>>
>>
>> [Indra:] Yes, it is happening on both of the hosts that I have, in the
>> same way.
>>
>>
>>> You may want to
>>> look at /etc/mtab, if the system reports it's mounted, though it's
>>> not, it might be in there. Look at /proc/mounts as well.
>>>
>>
>> [Indra:] Please find result of df, /etc/mtab and /proc/mounts below. The
>> "ghost" mount point is on df and /etc/mtab, but not on /proc/mounts.
>>
>> root@hv-kvm-02:/etc# df
>>
>> Filesystem  1K-blocksUsed
>> Available Use% Mounted on
>> /dev/sda1 4195924
>> 4192372 0 100% /
>>
>> udev132053356   4
>> 132053352   1% /dev
>> tmpfs52825052 704
>> 52824348   1% /run
>>
>> none 5120
>> 0  5120   0% /run/lock
>> none132062620   0
>> 132062620   0% /run/shm
>> cgroup  132062620   0
>> 132062620   0% /sys/fs/cgroup
>> /dev/sda610650544
>> 2500460   7609056  25% /home
>> 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288   4195924
>> 4192372 0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
>>
>> root@hv-kvm-02:/etc# cat /etc/mtab
>> /dev/sda1 / ext4 rw,errors=remount-ro 0 0
>> proc /proc proc rw,noexec,nosuid,nodev 0 0
>> sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
>> none /sys/fs/fuse/connections fusectl rw 0 0
>> none /sys/kernel/debug debugfs rw 0 0
>> none /sys/kernel/security securityfs rw 0 0
>> udev /dev devtmpfs rw,mode=0755 0 0
>> devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
>> tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
>> none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
>> none /run/shm tmpfs rw,nosuid,nodev 0 0
>> cgroup /sys/fs/cgroup tmpfs rw,relatime,mode=755 0 0
>> cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
>> cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
>> cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
>> cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
>> cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
>> cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
>> cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
>>

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Indra Pramana
Hi Dean,

Good day to you, and thank you for your e-mail.

The current value of storage.overprovisioning.factor is 2. Do you
think it matters? I can't remember whether that was the default value or
something we changed earlier. Do you think changing it to 1 will help?
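
In case it is useful, the value can also be checked and changed through the
API, e.g. with cloudmonkey (a sketch; adjust to however you normally drive
the API, and note that some global settings only take effect after a
management-server restart):

# Inspect the current global setting, then set it back to 1 if desired.
cloudmonkey list configurations name=storage.overprovisioning.factor
cloudmonkey update configuration name=storage.overprovisioning.factor value=1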

Looking forward to your reply, thank you.

Cheers.



On Fri, Oct 4, 2013 at 12:24 AM, Dean Kamali  wrote:

> have you looked at storage over-provisioning settings ??
>
>
> On Thu, Oct 3, 2013 at 11:53 AM, Indra Pramana  wrote:
>
> > Dear all,
> >
> > We face a major problem after upgrading to 4.2.0. Mounting from KVM hosts
> > to secondary storage seems to fail, every time a new VM instance is
> > created, it will use up the / (root) partition of the KVM hosts instead.
> >
> > Here is the df result:
> >
> > 
> > root@hv-kvm-02:/home/indra# df
> > Filesystem  1K-blocksUsed
> > Available Use% Mounted on
> > /dev/sda1 4195924
> > 4195924 0 100% /
> > udev132053356   4
> > 132053352   1% /dev
> > tmpfs52825052 440
> > 52824612   1% /run
> > none 5120
> > 0  5120   0% /run/lock
> > none132062620   0
> > 132062620   0% /run/shm
> > cgroup  132062620   0
> > 132062620   0% /sys/fs/cgroup
> > /dev/sda610650544 2500424
> > 7609092  25% /home
> > 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288   4195924
> > 4195924 0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
> > 
> >
> > The strange thing is that df shows that it seems to be mounted, but it's
> > actually not mounted. If you noticed, the total capacity of the mount
> point
> > is exactly the same as the capacity of the / (root) partition. By right
> it
> > should show 7 TB instead of just 4 GB.
> >
> > This causes VM creation to fail with an out-of-disk-space error. It also
> > affects KVM operations, since the / (root) partition becomes full, and we
> > can only release the space after we reboot the KVM host.
> >
> > Has anyone experienced this problem before? We are at a loss on how to
> > resolve the problem.
> >
> > Looking forward to your reply, thank you.
> >
> > Cheers.
> >
>


Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Indra Pramana
Hi Marcus and all,

I also find some strange and interesting error messages from the libvirt
logs:

http://pastebin.com/5ByfNpAf

Looking forward to your reply, thank you.

Cheers.



On Fri, Oct 4, 2013 at 1:38 AM, Indra Pramana  wrote:

> Hi Marcus,
>
> Good day to you, and thank you for your e-mail. See my reply inline.
>
> On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen wrote:
>
>> Are you using the release artifacts, or your own 4.2 build?
>>
>
> [Indra:] We are using the release artifacts from below repo since we are
> using Ubuntu:
>
> deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
>
>
>> When you rebooted the host, did the problem go away or come back the
>> same?
>
>
> [Indra:] When I rebooted the host, the problem went away for a while, but it
> comes back again after some time. It appears randomly, whenever we need to
> create a new instance on that host or start an existing
> stopped instance.
>
>
>> You may want to look at 'virsh pool-list' to see if libvirt is
>> mounting/registering the secondary storage.
>>
>
> [Indra:] This is the result of the virsh pool-list command:
>
> root@hv-kvm-02:/var/log/libvirt# virsh pool-list
> Name State  Autostart
> -
> 301071ac-4c1d-4eac-855b-124126da0a38 active no
> 5230667e-9c58-3ff6-983c-5fc2a72df669 active no
> d433809b-01ea-3947-ba0f-48077244e4d6 active no
>
> Strange thing is that none of my secondary storage IDs are there. Could it
> be that the ID might have changed during Cloudstack upgrade? Here is the
> list of my secondary storage (there are two of them) even though they are
> on the same NFS server:
>   c02da448-b9f4-401b-b8d5-83e8ead5cfde nfs://
> 10.237.11.31/mnt/vol1/sec-storage NFS
> 5937edb6-2e95-4ae2-907b-80fe4599ed87 nfs://
> 10.237.11.31/mnt/vol1/sec-storage2 NFS
>
>> Is this happening on multiple hosts, the same way?
>
>
> [Indra:] Yes, it is happening on both of the hosts that I have, in the
> same way.
>
>
>> You may want to
>> look at /etc/mtab, if the system reports it's mounted, though it's
>> not, it might be in there. Look at /proc/mounts as well.
>>
>
> [Indra:] Please find result of df, /etc/mtab and /proc/mounts below. The
> "ghost" mount point is on df and /etc/mtab, but not on /proc/mounts.
>
> root@hv-kvm-02:/etc# df
>
> Filesystem  1K-blocksUsed
> Available Use% Mounted on
> /dev/sda1 4195924
> 4192372 0 100% /
>
> udev132053356   4
> 132053352   1% /dev
> tmpfs52825052 704
> 52824348   1% /run
>
> none 5120
> 0  5120   0% /run/lock
> none132062620   0
> 132062620   0% /run/shm
> cgroup  132062620   0
> 132062620   0% /sys/fs/cgroup
> /dev/sda610650544
> 2500460   7609056  25% /home
> 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288   4195924
> 4192372 0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
>
> root@hv-kvm-02:/etc# cat /etc/mtab
> /dev/sda1 / ext4 rw,errors=remount-ro 0 0
> proc /proc proc rw,noexec,nosuid,nodev 0 0
> sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
> none /sys/fs/fuse/connections fusectl rw 0 0
> none /sys/kernel/debug debugfs rw 0 0
> none /sys/kernel/security securityfs rw 0 0
> udev /dev devtmpfs rw,mode=0755 0 0
> devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
> tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
> none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
> none /run/shm tmpfs rw,nosuid,nodev 0 0
> cgroup /sys/fs/cgroup tmpfs rw,relatime,mode=755 0 0
> cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
> cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
> cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
> cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
> cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
> cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
> cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
> cgroup /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
> /dev/sda6 /home ext4 rw 0 0
> rpc_pipefs /run/rpc_pipefs rpc_pipefs rw 0 0
> binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
> 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288
> /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669 nfs rw,addr=10.237.11.31 0 0
>
> rootfs / rootfs rw 0 0
> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
> udev /dev devtmpfs rw,relatime,size=132053356k,nr_inodes=33013339,mode=755
> 0 0
> devpts /dev/pts devpts
> rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
> tmpfs /run tmpfs rw,

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Indra Pramana
Hi Marcus,

Good day to you, and thank you for your e-mail. See my reply inline.

On Fri, Oct 4, 2013 at 12:29 AM, Marcus Sorensen wrote:

> Are you using the release artifacts, or your own 4.2 build?
>

[Indra:] We are using the release artifacts from the repo below, since we are
using Ubuntu:

deb http://cloudstack.apt-get.eu/ubuntu precise 4.2


> When you rebooted the host, did the problem go away or come back the
> same?


[Indra:] When I rebooted the host, the problem went away for a while, but it
comes back again after some time. It appears randomly, whenever we need to
create a new instance on that host or start an existing
stopped instance.


> You may want to look at 'virsh pool-list' to see if libvirt is
> mounting/registering the secondary storage.
>

[Indra:] This is the result of the virsh pool-list command:

root@hv-kvm-02:/var/log/libvirt# virsh pool-list
Name State  Autostart
-
301071ac-4c1d-4eac-855b-124126da0a38 active no
5230667e-9c58-3ff6-983c-5fc2a72df669 active no
d433809b-01ea-3947-ba0f-48077244e4d6 active no

The strange thing is that none of my secondary storage IDs are there. Could
the IDs have changed during the CloudStack upgrade? Here is the list of my
secondary storage (there are two of them, although both are on the same NFS
server):

c02da448-b9f4-401b-b8d5-83e8ead5cfde   nfs://10.237.11.31/mnt/vol1/sec-storage    NFS
5937edb6-2e95-4ae2-907b-80fe4599ed87   nfs://10.237.11.31/mnt/vol1/sec-storage2   NFS

> Is this happening on multiple hosts, the same way?


[Indra:] Yes, it is happening on both of the hosts that I have, in the
same way.


> You may want to
> look at /etc/mtab, if the system reports it's mounted, though it's
> not, it might be in there. Look at /proc/mounts as well.
>

[Indra:] Please find result of df, /etc/mtab and /proc/mounts below. The
"ghost" mount point is on df and /etc/mtab, but not on /proc/mounts.

root@hv-kvm-02:/etc# df
Filesystem                                               1K-blocks     Used  Available Use% Mounted on
/dev/sda1                                                  4195924  4192372          0 100% /
udev                                                     132053356        4  132053352   1% /dev
tmpfs                                                     52825052      704   52824348   1% /run
none                                                          5120        0       5120   0% /run/lock
none                                                     132062620        0  132062620   0% /run/shm
cgroup                                                   132062620        0  132062620   0% /sys/fs/cgroup
/dev/sda6                                                 10650544  2500460    7609056  25% /home
10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288    4195924  4192372          0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669

root@hv-kvm-02:/etc# cat /etc/mtab
/dev/sda1 / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
udev /dev devtmpfs rw,mode=0755 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
none /run/shm tmpfs rw,nosuid,nodev 0 0
cgroup /sys/fs/cgroup tmpfs rw,relatime,mode=755 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
/dev/sda6 /home ext4 rw 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288
/mnt/5230667e-9c58-3ff6-983c-5fc2a72df669 nfs rw,addr=10.237.11.31 0 0

rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=132053356k,nr_inodes=33013339,mode=755
0 0
devpts /dev/pts devpts
rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=52825052k,mode=755 0 0
/dev/disk/by-uuid/d21f4676-075d-44be-a378-92ef6aaf2496 / ext4
rw,relatime,errors=remount-ro,data=ordered 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
cgroup /sys/fs/cgroup tmpfs rw,relatime,mode=755 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,n

Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Marcus Sorensen
Are you using the release artifacts, or your own 4.2 build?

When you rebooted the host, did the problem go away or come back the
same? You may want to look at 'virsh pool-list' to see if libvirt is
mounting/registering the secondary storage.

Is this happening on multiple hosts, the same way? You may want to
look at /etc/mtab; if the system reports it's mounted even though it's
not, the stale entry might be in there. Look at /proc/mounts as well.
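
A quick way to compare the two views (a generic shell check, nothing
CloudStack-specific) is:

# Does the ghost mount appear in /etc/mtab but not in the kernel's table?
grep 5230667e-9c58-3ff6-983c-5fc2a72df669 /etc/mtab /proc/mounts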

On Thu, Oct 3, 2013 at 9:53 AM, Indra Pramana  wrote:
> Dear all,
>
> We face a major problem after upgrading to 4.2.0. Mounting from KVM hosts
> to secondary storage seems to fail, every time a new VM instance is
> created, it will use up the / (root) partition of the KVM hosts instead.
>
> Here is the df result:
>
> 
> root@hv-kvm-02:/home/indra# df
> Filesystem  1K-blocksUsed
> Available Use% Mounted on
> /dev/sda1 4195924
> 4195924 0 100% /
> udev132053356   4
> 132053352   1% /dev
> tmpfs52825052 440
> 52824612   1% /run
> none 5120
> 0  5120   0% /run/lock
> none132062620   0
> 132062620   0% /run/shm
> cgroup  132062620   0
> 132062620   0% /sys/fs/cgroup
> /dev/sda610650544 2500424
> 7609092  25% /home
> 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288   4195924
> 4195924 0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
> 
>
> The strange thing is that df shows that it seems to be mounted, but it's
> actually not mounted. If you noticed, the total capacity of the mount point
> is exactly the same as the capacity of the / (root) partition. By right it
> should show 7 TB instead of just 4 GB.
>
> This causes VM creation to fail with an out-of-disk-space error. It also
> affects KVM operations, since the / (root) partition becomes full, and we
> can only release the space after we reboot the KVM host.
>
> Has anyone experienced this problem before? We are at a loss on how to
> resolve the problem.
>
> Looking forward to your reply, thank you.
>
> Cheers.


Re: [URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Dean Kamali
have you looked at storage over-provisioning settings ??


On Thu, Oct 3, 2013 at 11:53 AM, Indra Pramana  wrote:

> Dear all,
>
> We face a major problem after upgrading to 4.2.0. Mounting from KVM hosts
> to secondary storage seems to fail, every time a new VM instance is
> created, it will use up the / (root) partition of the KVM hosts instead.
>
> Here is the df result:
>
> 
> root@hv-kvm-02:/home/indra# df
> Filesystem  1K-blocksUsed
> Available Use% Mounted on
> /dev/sda1 4195924
> 4195924 0 100% /
> udev132053356   4
> 132053352   1% /dev
> tmpfs52825052 440
> 52824612   1% /run
> none 5120
> 0  5120   0% /run/lock
> none132062620   0
> 132062620   0% /run/shm
> cgroup  132062620   0
> 132062620   0% /sys/fs/cgroup
> /dev/sda610650544 2500424
> 7609092  25% /home
> 10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288   4195924
> 4195924 0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669
> 
>
> The strange thing is that df shows that it seems to be mounted, but it's
> actually not mounted. If you noticed, the total capacity of the mount point
> is exactly the same as the capacity of the / (root) partition. By right it
> should show 7 TB instead of just 4 GB.
>
> This causes VM creation to fail with an out-of-disk-space error. It also
> affects KVM operations, since the / (root) partition becomes full, and we
> can only release the space after we reboot the KVM host.
>
> Has anyone experienced this problem before? We are at a loss on how to
> resolve the problem.
>
> Looking forward to your reply, thank you.
>
> Cheers.
>


[URGENT] Mounting issues from KVM hosts to secondary storage

2013-10-03 Thread Indra Pramana
Dear all,

We face a major problem after upgrading to 4.2.0. Mounting from KVM hosts
to secondary storage seems to fail, every time a new VM instance is
created, it will use up the / (root) partition of the KVM hosts instead.

Here is the df result:


root@hv-kvm-02:/home/indra# df
Filesystem                                               1K-blocks     Used  Available Use% Mounted on
/dev/sda1                                                  4195924  4195924          0 100% /
udev                                                     132053356        4  132053352   1% /dev
tmpfs                                                     52825052      440   52824612   1% /run
none                                                          5120        0       5120   0% /run/lock
none                                                     132062620        0  132062620   0% /run/shm
cgroup                                                   132062620        0  132062620   0% /sys/fs/cgroup
/dev/sda6                                                 10650544  2500424    7609092  25% /home
10.237.11.31:/mnt/vol1/sec-storage2/template/tmpl/2/288    4195924  4195924          0 100% /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669


The strange thing is that df shows it as mounted, but it's
actually not mounted. Notice that the total capacity of the mount point
is exactly the same as the capacity of the / (root) partition; it
should show 7 TB instead of just 4 GB.
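
One simple way to confirm that the directory is not really a separate
filesystem (a generic check, not part of the original report) is to compare
device numbers:

# If the mount point reports the same device number as /, nothing is
# actually mounted there and every write lands on the root filesystem.
stat -c '%n device=%d' / /mnt/5230667e-9c58-3ff6-983c-5fc2a72df669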

This causes VM creation to fail with an out-of-disk-space error. It also
affects KVM operations, since the / (root) partition becomes full, and we
can only release the space after we reboot the KVM host.

Has anyone experienced this problem before? We are at a loss on how to
resolve the problem.

Looking forward to your reply, thank you.

Cheers.