Hi marcoceppi,

Here we are doing vertical scaling: the same copy of the charm is deployed
multiple times to a single cinder node, and each deployment adds a unique
backend on that node. This is OK.

But what if I want to do horizontal scaling, i.e. add more cinder nodes to
the existing OpenStack setup and then deploy the same copy of the charm to
each of those different cinder nodes? In a high-availability case, what
consequences are we likely to face, and how can we deploy the charm to the
different cinder nodes separately? What is the best solution for this, and
how can we achieve horizontal scaling here? Could you please provide me
with information on this?
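
For example, is something like the following the right direction? This is
only a sketch: it assumes the stock cinder charm and the hacluster
subordinate, and the VIP address is just a placeholder.

# Horizontal scaling: add more cinder units on separate machines
juju add-unit cinder -n 2

# High availability: front the cinder units with a virtual IP via the
# hacluster subordinate
juju deploy hacluster cinder-hacluster
juju set cinder vip=192.168.1.250
juju add-relation cinder cinder-hacluster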

Thanks,
Siva.



On Wed, Sep 7, 2016 at 9:36 PM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> Hi Siva,
>
> In Juju, and especially with Cinder plugins, you can deploy multiple
> copies of the Juju charm and relate them. Each application deployed is
> equivalent to the scope of a SAN cluster:
>
> juju deploy cinder
> juju deploy your-charm san1
> juju deploy your-charm san2
>
> juju add-relation cinder san1
> juju add-relation cinder san2
>
> Now, you can configure each of the new applications, which are the same
> copy of the charm deployed multiple times. This will add a unique backend
> per charm copy, which seems to be your intended use case.
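>
> For example, assuming Juju 1.25's "juju set" (the option names here are
> placeholders for whatever options your charm actually exposes):
>
> juju set san1 san-ip=10.1.1.10 san-user=admin
> juju set san2 san-ip=10.1.1.20 san-user=admin
>
> Each application keeps its own configuration, so each copy can point at a
> different storage array.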
>
> Thanks,
> Marco Ceppi
>
> On Wed, Sep 7, 2016 at 12:03 PM SivaRamaPrasad Ravipati <si...@vedams.com>
> wrote:
>
>> For example, we have different storage arrays of the same type, each with
>> unique config parameter values (such as the SAN IP, SAN password, and SAN
>> user). Assume that our charm has been deployed with some configuration
>> values and we added a relation to cinder, so our charm modified cinder.conf
>> with the storage array driver. Next time we want to redeploy our charm to
>> append only the new configuration changes, without destroying the changes
>> that already exist.
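>>
>> For reference, this is the kind of cinder.conf change I mean (the section
>> names and values here are only illustrative; the real option names come
>> from the storage driver):
>>
>> [DEFAULT]
>> enabled_backends = san1,san2
>>
>> [san1]
>> volume_driver = cinder.volume.drivers.<our_driver>
>> san_ip = 10.1.1.10
>>
>> [san2]
>> volume_driver = cinder.volume.drivers.<our_driver>
>> san_ip = 10.1.1.20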
>>
>> To what extent can "juju set-config" and "juju upgrade-charm" be used
>> here? Please give me a simple example if possible.
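>>
>> For instance, is the intended pattern something like the following?
>> (Assuming Juju 1.25, where the command is spelled "juju set"; the service
>> name and option name are only placeholders.)
>>
>> # Change configuration of the already-deployed copy in place:
>> juju set our-charm san-ip=10.1.1.30
>>
>> # Pick up a new revision of the charm code without redeploying:
>> juju upgrade-charm our-charm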
>>
>> For this scenario, which use case is generally applied here? Please
>> explain it to me in detail.
>>
>>
>> Thanks,
>>
>> Siva.
>>
>> On Wed, Sep 7, 2016 at 4:54 PM, SivaRamaPrasad Ravipati <si...@vedams.com>
>> wrote:
>>
>>> OK. Thank you.
>>>
>>> I have one more question. Knowing the answer to this question is very
>>> important for us.
>>>
>>> We have developed a Juju charm that configures cinder to use one of our
>>> storage arrays as the backend.
>>>
>>>
>>> So how do we redeploy the charm to configure cinder with additional
>>> storage arrays without destroying/removing the currently deployed charm?
>>> (For example, we don't want to remove the currently configured storage
>>> arrays from the cinder configuration.)
>>>
>>> Thanks,
>>> Siva.
>>>
>>> On Wed, Sep 7, 2016 at 3:37 PM, Adam Collard <adam.coll...@canonical.com>
>>> wrote:
>>>
>>>> Hi Siva,
>>>>
>>>> On Wed, 7 Sep 2016 at 10:58 SivaRamaPrasad Ravipati <si...@vedams.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have installed the OpenStack cloud using the OpenStack Autopilot. I am
>>>>> trying to deploy juju-gui in the internal Juju environment.
>>>>>
>>>>> I did the following.
>>>>> ====================
>>>>>
>>>>> -> From the MAAS node:
>>>>>
>>>>> $ export JUJU_HOME=~/.cloud-install/juju
>>>>>
>>>>> -> Connect to the Landscape server to deploy our charm and add a relation
>>>>> to the cinder charm:
>>>>>
>>>>> $ juju ssh landscape-server/0 sudo
>>>>> 'JUJU_HOME=/var/lib/landscape/juju-homes/`sudo ls -rt
>>>>> /var/lib/landscape/juju-homes/ | tail -1` sudo -u landscape -E bash'
>>>>>
>>>>> -> From Landscape Server
>>>>>
>>>>> landscape@juju-machine-0-lxc-1:~$ juju deploy cs:juju-gui-134
>>>>>
>>>>> Added charm "cs:trusty/juju-gui-134" to the environment.
>>>>>
>>>>>
>>>>> ubuntu@juju-machine-0-lxc-1:~$ juju status
>>>>>   
>>>>> "4":
>>>>>     agent-state: error
>>>>>     agent-state-info: 'cannot run instances: cannot run instances: 
>>>>> gomaasapi: got
>>>>>       error back from server: 409 CONFLICT (No available node matches 
>>>>> constraints:
>>>>>       zone=region1)'
>>>>>     instance-id: pending
>>>>>     series: trusty
>>>>>
>>>>>  juju-gui:
>>>>>     charm: cs:trusty/juju-gui-134
>>>>>     exposed: false
>>>>>     service-status:
>>>>>       current: unknown
>>>>>       message: Waiting for agent initialization to finish
>>>>>       since: 07 Sep 2016 06:46:22Z
>>>>>     units:
>>>>>       juju-gui/1:
>>>>>         workload-status:
>>>>>           current: unknown
>>>>>           message: Waiting for agent initialization to finish
>>>>>           since: 07 Sep 2016 06:46:22Z
>>>>>         agent-status:
>>>>>           current: allocating
>>>>>           since: 07 Sep 2016 06:46:22Z
>>>>>         agent-state: pending
>>>>>         machine: "4"
>>>>>
>>>>>
>>>>> Juju version
>>>>> =============
>>>>>
>>>>> ubuntu@juju-machine-0-lxc-1:~$ juju --version
>>>>> 1.25.6-trusty-amd64
>>>>>
>>>>> My assumption
>>>>> =============
>>>>>
>>>>> It looks like we need to define a pool of servers in a region called
>>>>> region1.
>>>>>
>>>>>
>>>>> I have a question: once we have an Ubuntu OpenStack Autopilot deployment,
>>>>> do we need to add a server to MAAS whenever we want to deploy an
>>>>> additional charm externally?
>>>>>
>>>>> How can I solve this issue? Please provide me with a solution.
>>>>>
>>>>>
>>>> After cleaning up (juju destroy-service juju-gui), please try again with
>>>> an explicit placement, e.g.
>>>>
>>>> $ juju deploy juju-gui --to lxc:0
>>>>
>>>> See
>>>> https://jujucharms.com/docs/1.25/charms-deploying#deploying-to-specific-machines-and-containers
>>>> for more information on providing placement directives to Juju.
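>>>>
>>>> For reference, a couple of other placement forms (the machine numbers
>>>> here are only examples):
>>>>
>>>> juju deploy juju-gui --to 1      # an existing machine
>>>> juju deploy juju-gui --to lxc:1  # a new LXC container on machine 1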
>>>>
>>>> Regards,
>>>>
>>>> Adam
>>>>
>>>>
>>>