CloudStack meetup - Berlin, February 27

2020-02-03 Thread Steve Roles
Hi all,

We have published an agenda with our first four talks! It's looking like a great
event, so if you haven't registered yet, or you have and want to see the agenda,
please see here:

https://www.eventbrite.co.uk/e/cloudstack-european-user-group-meetup-tickets-81462441355

Hope to see you soon!

steve.ro...@shapeblue.com 
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DP, UK
@shapeblue



Re: Redundant NFS Storage for ACS

2020-02-03 Thread li jerry
This scheme can trigger HA.
If you can, please provide remote login information and I can check it for you.

-Original Message-
From: Cloud Udupi 
Sent: February 2, 2020 3:24
To: users@cloudstack.apache.org
Subject: Re: Redundant NFS Storage for ACS

Hi Jerry,

I have tried doing the same as you mentioned, but still no luck. HA doesn't
work; the host gets stuck in the Alert state.
Primary Storage 1 - Ceph RBD, Tag = rbd
Primary Storage 2 - NFS (Size = 100 MB)
All offerings created with storage tag = rbd

In my case, HA works only when NFS is the primary storage.
Do I need to do anything else to make the hosts write their heartbeat
timestamps to the KVMHA directory under NFS? (I have no idea about this.)
Are there any specific steps to make it work?

And what if the primary NFS server holding the heartbeat fails? Will it
work with another primary NFS storage?

Regards,
Mark.

On Sat, Feb 1, 2020 at 6:10 PM li jerry  wrote:

> Hi Mark
>
> I can provide one of my solutions for your reference:
>
> Hypervisor = KVM
> CloudStack = 4.11.3
> Primary Storage0 = Ceph RBD, tag = rbd
> Primary Storage1 = NFS
>
> Primary Storage0 stores the VM volumes;
> Primary Storage1 stores the KVM heartbeat (suggested capacity is 100 MB).
>
> all Compute Offerings.Storage Tags and Disk Offerings.Storage Tags = "rbd"
>
>
> All host tags = ha
>
> Global setting -> ha.tag = ha
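>
> For reference, a minimal CloudMonkey (cmk) sketch of the same settings; the
> IDs, offering values and NFS URL below are placeholders to adapt, not values
> from this thread:
>
> # tiny NFS pool used only for the HA heartbeat
> create storagepool zoneid=<zone-id> podid=<pod-id> clusterid=<cluster-id> name=nfs-heartbeat url=nfs://<nfs-server>/export/heartbeat
> # keep VM volumes on RBD via the storage tag; enable HA on the offering
> create serviceoffering name=ha-2c2g displaytext=ha-2c2g cpunumber=2 cpuspeed=1000 memory=2048 tags=rbd offeringha=true
> # tag every host and point ha.tag at that tag
> update host id=<host-id> hosttags=ha
> update configuration name=ha.tag value=ha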
>
>
> With the above configuration, all hosts in the cluster are connected
> to both storages (RBD and NFS) at the same time, and all heartbeat
> timestamps are written to the KVMHA directory under NFS.
>
> When a host fails, CloudStack checks the KVM heartbeat timestamp on
> Primary Storage1 and triggers VM HA.
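>
> To verify on a KVM host that the heartbeats are really being written (the
> /mnt/<pool-uuid> path below is the agent's usual mount layout; adjust to
> your setup):
>
> # find the NFS pool mount, then watch the per-host timestamp files
> mount | grep nfs
> ls -l /mnt/<pool-uuid>/KVMHA/
> watch -n 5 'ls -l --time-style=full-iso /mnt/<pool-uuid>/KVMHA/'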
>
> -Original Message-
> From: Cloud Udupi 
> Sent: February 1, 2020 20:28
> To: users@cloudstack.apache.org
> Subject: Redundant NFS Storage for ACS
>
> Hi,
> We are new to Apache CloudStack. We are looking for a primary storage
> (NFS share) solution that does not fail because of a single node failure.
> Is there a way to use NFS with some kind of clustering, so that when one
> node fails the VMs keep running from another node in the NFS cluster used
> by ACS?
>
> Has anyone set up Ceph storage as NFS (NFS Ganesha) and used it for
> ACS on CentOS 7? Please share the steps so that we can look into it.
>
> Basically we need a system that has:
> 1. A single IP address, with the shared mount point staying the same.
> 2. NFS storage, as Apache CloudStack supports HA only with NFS.
> 3. Capacity to deploy around 60 VMs for our application.
>
> If the NFS storage holding the VMs goes down and cannot be recovered,
> how do we fix this so that we can get the VMs back into the Running state?
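>
> (A rough sketch of the NFS-Ganesha-over-CephFS idea on CentOS 7 follows;
> the package names, export paths and floating-IP choice are untested
> assumptions to adapt, not verified steps:)
>
> # install Ganesha with the Ceph FSAL (CentOS Storage SIG packages)
> yum install -y centos-release-nfs-ganesha28 nfs-ganesha nfs-ganesha-ceph
> # minimal CephFS-backed export; clients mount it as plain NFS
> cat > /etc/ganesha/ganesha.conf <<'EOF'
> EXPORT {
>     Export_Id = 1;
>     Path = /;                  # CephFS path to export
>     Pseudo = /primary;         # path the NFS clients mount
>     Access_Type = RW;
>     Squash = No_Root_Squash;
>     FSAL { Name = CEPH; }
> }
> EOF
> systemctl enable --now nfs-ganesha
> # run Ganesha on two or more nodes and float one IP with keepalived, so the
> # mount point stays the same when a node dies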
>
> Regards,
> Mark.
>


Re: Redundant NFS Storage for ACS

2020-02-03 Thread Cloud Udupi
Hi All,

Can someone suggest an NFS primary storage setup for ACS that does not
go down due to an NFS host failure, on CentOS 7.6?

Regards,
Mark.


Re: new primary storage

2020-02-03 Thread Daan Hoogland
Thanks Charlie, can you raise your issue as a GitHub issue on the project,
please? Thanks.

On Mon, Feb 3, 2020 at 9:57 AM Charlie Holeowsky <
charlie.holeow...@gmail.com> wrote:

> Hi Daan and users,
> Yes, I resolved the issue, but it's only a workaround. I have found that
> the problem is reproducible, but I do not understand whether I am the only
> one to have noticed it and whether there is a more elegant and definitive
> solution.
>
>
>
> On Fri, Jan 31, 2020 at 13:31 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>
>> Sorry for not having given this any focus, Charlie.
>> Do I read correctly that you resolved your issue?
>>
>> On Tue, Jan 28, 2020 at 2:46 PM Charlie Holeowsky <
>> charlie.holeow...@gmail.com> wrote:
>>
>>> In this period I have performed some tests and found a workaround for
>>> the metrics problem.
>>> I created a test environment in the laboratory with the main
>>> characteristics matching production (ACS 4.11.2.0, all hosts on Ubuntu
>>> 16.04, KVM, NFS as shared storage, advanced networking). Then I added a
>>> new primary storage on Ubuntu 18.04.
>>>
>>> I created a new VM on the new storage server, and after a while the
>>> metrics appeared as on the first storage, so the storage is working.
>>>
>>> I destroyed this VM, created a new one on the first (old) storage, and
>>> then migrated it to the new storage.
>>>
>>> After migrating the disk from the first primary storage to the second
>>> one, I encountered the same problem (the same error in agent.log:
>>> com.cloud.utils.exception.CloudRuntimeException: Can't find
>>> volume:5a521eb0...) and the volume metrics didn't appear or update.
>>>
>>> In this test environment I created a symbolic link on the filesystem of
>>> the storage server between the name of the disk just migrated (the value
>>> in the path column) and the UUID name (the one that appears in the error
>>> message).
>>>
>>>
>>> Here is an example to better explain me.
>>>
>>> I took the list of volumes where path differs from uuid, which returns
>>> the data of the migrated volumes:
>>>
>>> mysql> select id,uuid,path from volumes where uuid!=path;
>>> +----+--------------------------------------+--------------------------------------+
>>> | id | uuid                                 | path                                 |
>>> +----+--------------------------------------+--------------------------------------+
>>> | 10 | 5a521eb0-266f-4353-b4b2-1d63a483e5b5 | 165b92ba-68f1-4172-be35-bbe1d032cb7c |
>>> | 12 | acb3bb29-9bac-4a2a-aefa-3c6ac1c2846b | 56aa3961-dbc2-4f98-9246-a7497eef3214 |
>>> +----+--------------------------------------+--------------------------------------+
>>>
>>> In the storage server I make the symbolic links (ln -s <target> <link name>):
>>>
>>> # ln -s 5a521eb0-266f-4353-b4b2-1d63a483e5b5 165b92ba-68f1-4172-be35-bbe1d032cb7c
>>> # ln -s acb3bb29-9bac-4a2a-aefa-3c6ac1c2846b 56aa3961-dbc2-4f98-9246-a7497eef3214
>>>
>>> After doing this, I waited some time and then found that the metrics
>>> were updated and the message in agent.log no longer appeared.
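>>>
>>> (For what it's worth, the same links could be generated straight from the
>>> database; a sketch assuming the default "cloud" database, passwordless
>>> local mysql access, and that you run it inside the storage directory, with
>>> the same link direction as the example above:)
>>>
>>> mysql -N -e "select uuid,path from cloud.volumes where uuid!=path" \
>>> | while read uuid path; do
>>>     # recreate the missing name as a symlink, skipping names that exist
>>>     [ -e "$path" ] || ln -s "$uuid" "$path"
>>> done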
>>>
>>>
>>> On Thu, Jan 23, 2020 at 17:53 Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>>
 I still don't understand
 why com.cloud.hypervisor.kvm.storage.LibvirtStoragePool doesn't find the
 volume d93d3c0a-3859-4473-951d-9b5c5912c767, which exists as file
 39148fe1-842b-433a-8a7f-85e90f316e04...

 It's the only anomaly I have found. Where can I look again?

 On Mon, Jan 20, 2020 at 16:27 Daan Hoogland <daan.hoogl...@gmail.com> wrote:

> but the record you sent earlier also says that it should be looking
> for 39148fe1-842b-433a-8a7f-85e90f316e04 in the path field. The message
> might be just that, a message.
>
> On Mon, Jan 20, 2020 at 3:35 PM Charlie Holeowsky <
> charlie.holeow...@gmail.com> wrote:
>
>> I think that's the problem, because the logs that I forwarded read:
>> Can't find volume: d93d3c0a-3859-4473-951d-9b5c5912c767
>>
>> This is the volume ID of the migrated file, but it does not exist on the
>> primary storage (new or old); it exists
>> as 39148fe1-842b-433a-8a7f-85e90f316e04.
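>>
>> (One quick way to confirm which name is in the path field for that volume,
>> assuming the default "cloud" database:)
>>
>> mysql> select id,uuid,path from cloud.volumes
>>     -> where uuid='d93d3c0a-3859-4473-951d-9b5c5912c767';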
>>
>> On Mon, Jan 20, 2020 at 12:35 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>
>>> also, can you see the primary storage being mounted?
>>>
>>>
>>> On Mon, Jan 20, 2020 at 12:33 PM Daan Hoogland <
>>> daan.hoogl...@gmail.com> wrote:
>>>
 Why do you think that, Charlie? Is it in the logs like that
 somewhere?

 On Mon, Jan 20, 2020 at 9:52 AM Charlie Holeowsky <
 charlie.holeow...@gmail.com> wrote:

> Hi Daan,
> in fact I find the volume file
> (39148fe1-842b-433a-8a7f-85e90f316e04) in the repository id = 3 (the
> new one), but it seems to me that the CloudStack system goes looking for
> the

Re: new primary storage

2020-02-03 Thread Charlie Holeowsky
Hi Daan and users,
Yes, I resolved the issue, but it's only a workaround. I have found that
the problem is reproducible, but I do not understand whether I am the only
one to have noticed it and whether there is a more elegant and definitive
solution.



On Fri, Jan 31, 2020 at 13:31 Daan Hoogland <daan.hoogl...@gmail.com> wrote:

> Sorry for not having given this any focus, Charlie.
> Do I read correctly that you resolved your issue?
>
> On Tue, Jan 28, 2020 at 2:46 PM Charlie Holeowsky <
> charlie.holeow...@gmail.com> wrote:
>
>> In this period I have performed some tests and found a workaround for
>> the metrics problem.
>> I created a test environment in the laboratory with the main
>> characteristics matching production (ACS 4.11.2.0, all hosts on Ubuntu
>> 16.04, KVM, NFS as shared storage, advanced networking). Then I added a
>> new primary storage on Ubuntu 18.04.
>>
>> I created a new VM on the new storage server, and after a while the
>> metrics appeared as on the first storage, so the storage is working.
>>
>> I destroyed this VM, created a new one on the first (old) storage, and
>> then migrated it to the new storage.
>>
>> After migrating the disk from the first primary storage to the second
>> one, I encountered the same problem (the same error in agent.log:
>> com.cloud.utils.exception.CloudRuntimeException: Can't find
>> volume:5a521eb0...) and the volume metrics didn't appear or update.
>>
>> In this test environment I created a symbolic link on the filesystem of
>> the storage server between the name of the disk just migrated (the value
>> in the path column) and the UUID name (the one that appears in the error
>> message).
>>
>>
>> Here is an example to better explain me.
>>
>> I took the list of volumes where path differs from uuid, which returns
>> the data of the migrated volumes:
>>
>> mysql> select id,uuid,path from volumes where uuid!=path;
>> +----+--------------------------------------+--------------------------------------+
>> | id | uuid                                 | path                                 |
>> +----+--------------------------------------+--------------------------------------+
>> | 10 | 5a521eb0-266f-4353-b4b2-1d63a483e5b5 | 165b92ba-68f1-4172-be35-bbe1d032cb7c |
>> | 12 | acb3bb29-9bac-4a2a-aefa-3c6ac1c2846b | 56aa3961-dbc2-4f98-9246-a7497eef3214 |
>> +----+--------------------------------------+--------------------------------------+
>>
>> In the storage server I make the symbolic links (ln -s <target> <link name>):
>>
>> # ln -s 5a521eb0-266f-4353-b4b2-1d63a483e5b5 165b92ba-68f1-4172-be35-bbe1d032cb7c
>> # ln -s acb3bb29-9bac-4a2a-aefa-3c6ac1c2846b 56aa3961-dbc2-4f98-9246-a7497eef3214
>>
>> After doing this, I waited some time and then found that the metrics
>> were updated and the message in agent.log no longer appeared.
>>
>>
>> On Thu, Jan 23, 2020 at 17:53 Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>
>>> I still don't understand
>>> why com.cloud.hypervisor.kvm.storage.LibvirtStoragePool doesn't find the
>>> volume d93d3c0a-3859-4473-951d-9b5c5912c767, which exists as file
>>> 39148fe1-842b-433a-8a7f-85e90f316e04...
>>>
>>> It's the only anomaly I have found. Where can I look again?
>>>
>>> On Mon, Jan 20, 2020 at 16:27 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>>
 but the record you sent earlier also says that it should be looking for
 39148fe1-842b-433a-8a7f-85e90f316e04 in the path field. The message might
 be just that, a message.

 On Mon, Jan 20, 2020 at 3:35 PM Charlie Holeowsky <
 charlie.holeow...@gmail.com> wrote:

> I think that's the problem, because the logs that I forwarded read:
> Can't find volume: d93d3c0a-3859-4473-951d-9b5c5912c767
>
> This is the volume ID of the migrated file, but it does not exist on the
> primary storage (new or old); it exists
> as 39148fe1-842b-433a-8a7f-85e90f316e04.
>
> On Mon, Jan 20, 2020 at 12:35 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>
>> also, can you see the primary storage being mounted?
>>
>>
>> On Mon, Jan 20, 2020 at 12:33 PM Daan Hoogland <
>> daan.hoogl...@gmail.com> wrote:
>>
>>> Why do you think that, Charlie? Is it in the logs like that somewhere?
>>>
>>> On Mon, Jan 20, 2020 at 9:52 AM Charlie Holeowsky <
>>> charlie.holeow...@gmail.com> wrote:
>>>
 Hi Daan,
 in fact I find the volume file
 (39148fe1-842b-433a-8a7f-85e90f316e04) in the repository id = 3 (the new
 one), but it seems to me that the CloudStack system goes looking for the
 volume with its "old" name (path) that doesn't exist...

 On Sat, Jan 18, 2020 at 21:41 Daan Hoogland <daan.hoogl...@gmail.com> wrote:

> Charlie,
> forgive my not replying in a timely manner. This might happen if
> the disk was migrated. In this case