[Openstack-operators] Adding network node (Neutron agents) and test before deploying customer resource

2015-03-04 Thread Toshikazu Ichikawa
Hi,

 

I'm looking for a way to test a newly added network node by deploying test
resources before any customer resources are deployed on the node. I've learned
on this ML that Nova and Cinder have an "enable_new_services" setting in their
conf files that disables the initial service status to achieve this.
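
For reference, this is roughly what that option looks like for Nova (Cinder has the same option); the re-enabling step is a sketch of the usual workflow:

```ini
# nova.conf (similarly in cinder.conf): services that register for the
# first time start out disabled, so the scheduler will not place customer
# workloads on the new node until an operator verifies it and enables the
# service explicitly (e.g. with "nova service-enable <host> <binary>").
[DEFAULT]
enable_new_services = false
```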

 

My question is: is there any function/configuration to do the same thing for
Neutron?

I know there is an ongoing bug fix to implement a function to block
scheduling to a Neutron agent [1].

As mentioned here [2], this fix may make it possible for only administrators
to deploy routers/DHCP servers for testing, rather than having customers' ones.

However, the initial "admin_state_up" status of an agent still remains "true"
right after the agent or node is added.

That means customer routers/DHCP servers can still be deployed on the node
before the status is changed manually.
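
For context, the closest manual workaround today is to flip the agent's
admin_state_up via the Neutron v2.0 agents API right after the agent
registers; a sketch of the request body for `PUT /v2.0/agents/{agent_id}`
(endpoint path and field names as I understand the agents extension):

```json
{
  "agent": {
    "admin_state_up": false
  }
}
```

The race this thread describes is precisely the window between the agent
registering and the operator sending this request.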

To resolve this, I believe a feature similar to the "enable_new_services"
option of Nova/Cinder should be implemented in Neutron to change the initial
"admin_state_up" value.

Do you know of any existing function, blueprint, or other approach to achieve
the same goal?

Or is this a feature you agree is wanted, and one that should be proposed as a
new blueprint?

I'd like to hear Neutron operators' comments and suggestions.

 

[1] https://bugs.launchpad.net/neutron/+bug/1408488

[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054007.html

 

Thanks,

Kazu

 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-operators] [openstack-dev] [nova] Nova options as instance metadata

2015-03-04 Thread Belmiro Moreira
Hi,
in Nova there are several options that can be defined in the flavor (extra
specs) and/or as image properties.
This is great; however, to use some of these options we need to offer the
same image with different properties, or let users upload the same image
with the right properties.

It would be interesting to have some of these options available as instance
metadata. In that case the user would be able to specify them when creating
the instance (e.g. --meta hw_watchdog_action=pause), avoiding the need to
upload a different image or, in other cases, to request a new flavor.

Is this option being considered?
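
To make the request concrete, here is a small, purely illustrative sketch of
the lookup order such a feature might use. The precedence shown (instance
metadata over image property over flavor extra spec) and the helper name are
assumptions for illustration, not current Nova behaviour.

```python
# Hypothetical sketch: how an option such as hw_watchdog_action could be
# resolved if Nova also accepted it as instance metadata. The precedence
# order here is an assumption, not something Nova implements today.

def resolve_option(name, instance_metadata, image_properties, flavor_extra_specs):
    """Return the value from the most specific source that defines it."""
    for source in (instance_metadata, image_properties, flavor_extra_specs):
        if name in source:
            return source[name]
    return None

# The user overrides the image's default at boot time,
# e.g. with: nova boot ... --meta hw_watchdog_action=pause
image = {"hw_watchdog_action": "reset"}
meta = {"hw_watchdog_action": "pause"}
print(resolve_option("hw_watchdog_action", meta, image, {}))  # pause
```

The appeal of this model is that one image and one flavor can serve many
use cases, with per-instance tweaks supplied only at boot time.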


Belmiro
---
CERN


Re: [Openstack-operators] Rolling upgrades and Neutron

2015-03-04 Thread Jesse Keating

On 3/4/15 12:56 PM, Assaf Muller wrote:

Hello everyone,

An issue came up recently:
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058280.html

A recent Kilo patch made a non-backwards-compatible change to the RPC
interface between the Neutron server and its agents. I'm trying to figure out
how much of an issue that really is.

The question is: does anyone have any experience with performing a 'rolling
upgrade' for Neutron, specifically upgrading the Neutron API server(s) first
and upgrading the Neutron agents later? Has anyone performed this from
Icehouse to Juno successfully? Would this typically work across the board for
other services as well?


When database migrations are involved, typically we shut down all 
producers/consumers of the database, then migrate the database, then 
bring up new code for producers/consumers.


This model works across all the services (except for swift, because... 
swift).


When database migrations are /not/ at play then the general desire is to 
do a rolling upgrade, in order to have services down for as little time 
as possible. It's not just doing all the APIs at once and then agents, 
it's doing a sub-set of APIs in a batch mode, so that the API itself is 
never 100% down. This works in Nova, where there is a concept of
upgrade_levels for the RPC message format, and there is a conductor service
which can be upgraded first and which can handle translating the internals of
RPC messages for older services. The end scenario was that we could
upgrade conductors first in one swoop (since they are bus consumers and
not API endpoints), then roll through the APIs and other services, then
finally roll through the computes. Once everything was updated we could
bump the upgrade_levels for compute.
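
The Nova mechanism described above is driven by a conf section along these
lines (the pinned release value is illustrative):

```ini
# nova.conf on already-upgraded nodes: pin RPC messages sent to
# nova-compute to the older release's format until every compute is
# upgraded, then remove (or raise) the pin.
[upgrade_levels]
compute = icehouse
```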


Without this sort of structure for Neutron it'll be... difficult to do 
mixed versions of individual API nodes as well as mixed versions of 
agents and APIs.


Given that agents aren't API listeners, an upgrade strategy could be to 
update the agents all at once to new code that's backwards compatible 
with the old API nodes then roll through the API nodes, or vice versa. 
Roll through API nodes to get to new code that is backwards compatible 
with old agents, then update all the agents.


Either way it's preferable to do things in chunks that are as small and
"atomic" as possible. In large clusters, with Nova, there is a 1:1 relationship
between nova-compute and hypervisors, so anything that has to be atomic
across computes is painful. Slow. With Neutron, depending on the setup,
there is a similar relationship, so being able to break those up into 
batches, or at least being able to treat them at a different time from 
the public APIs is desirable.




--
-jlk



Re: [Openstack-operators] Rolling upgrades and Neutron

2015-03-04 Thread Fischer, Matt
We did this for our I to J move. Control nodes first, setting the Nova
compat flag, then compute nodes, then removing the flag. Things worked
along the way, but while the compat flag was set, live migration was
disabled (by Nova). Our larger concern was dealing with a clustered DB
that was being upgraded from I to J.

I do not recall any specific neutron server/agent issues.

On 3/4/15, 1:56 PM, "Assaf Muller"  wrote:

>Hello everyone,
>
>An issue came up recently:
>http://lists.openstack.org/pipermail/openstack-dev/2015-March/058280.html
>
>A recent Kilo patch made a non-backwards-compatible change to the RPC
>interface between the Neutron server and its agents. I'm trying to figure
>out how much of an issue that really is.
>
>The question is: does anyone have any experience with performing a
>'rolling upgrade' for Neutron, specifically upgrading the Neutron API
>server(s) first and upgrading the Neutron agents later? Has anyone
>performed this from Icehouse to Juno successfully? Would this typically
>work across the board for other services as well?
>
>Thank you.
>
>
>Assaf Muller, Cloud Networking Engineer
>Red Hat
>


This E-mail and any of its attachments may contain Time Warner Cable 
proprietary information, which is privileged, confidential, or subject to 
copyright belonging to Time Warner Cable. This E-mail is intended solely for 
the use of the individual or entity to which it is addressed. If you are not 
the intended recipient of this E-mail, you are hereby notified that any 
dissemination, distribution, copying, or action taken in relation to the 
contents of and attachments to this E-mail is strictly prohibited and may be 
unlawful. If you have received this E-mail in error, please notify the sender 
immediately and permanently delete the original and any copy of this E-mail and 
any printout.



[Openstack-operators] Rolling upgrades and Neutron

2015-03-04 Thread Assaf Muller
Hello everyone,

An issue came up recently:
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058280.html

A recent Kilo patch made a non-backwards-compatible change to the RPC
interface between the Neutron server and its agents. I'm trying to figure out
how much of an issue that really is.

The question is: does anyone have any experience with performing a 'rolling
upgrade' for Neutron, specifically upgrading the Neutron API server(s) first
and upgrading the Neutron agents later? Has anyone performed this from
Icehouse to Juno successfully? Would this typically work across the board for
other services as well?

Thank you.


Assaf Muller, Cloud Networking Engineer
Red Hat



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread Mathieu Gagné

On 2015-03-04 1:26 PM, matt wrote:
> and thus different keys.
>
> so what's the issue?

fpm does not generate a .changes file, so you can't sign it (for upload).

And since there is no .changes, you can't use dput to upload your 
package to the APT repository using standard methods.


Even if I don't plan on signing .changes and I managed to still upload
the .deb using dput, the lack of a .changes file makes it difficult for the APT
repository to process the incoming package and check its checksums.
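
For readers unfamiliar with it, a .changes file is a small signed control
file along these lines (fields abbreviated, all values illustrative):

```
Format: 1.8
Date: Wed, 04 Mar 2015 12:00:00 -0500
Source: mypackage
Binary: mypackage
Architecture: amd64
Version: 1.0-1
Distribution: trusty
Maintainer: Uploader <uploader@example.com>
Changes:
 mypackage (1.0-1) trusty; urgency=medium
 .
   * Initial build.
Checksums-Sha256:
 <sha256> <size> mypackage_1.0-1_amd64.deb
Files:
 <md5> <size> admin optional mypackage_1.0-1_amd64.deb
```

The Checksums-*/Files stanzas are what let the repository verify the
uploaded .deb, and the whole file is what gets GPG-signed.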


--
Mathieu



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread Clint Byrum
Excerpts from Mathieu Gagné's message of 2015-03-04 09:39:34 -0800:
> On 2015-03-04 12:18 PM, Clint Byrum wrote:
> > Excerpts from Mathieu Gagné's message of 2015-03-04 08:31:45 -0800:
> >>
> >> I really like APT repositories and would like to continue using them for
> >> the time being.
> >
> > I'm impressed you took the time to setup dput!
> 
> It's super simple to setup and use. Create a basic dput.cf and you are 
> good to go.
> 
>  > You can also use reprepro, which is somewhat handy for combining a
>  > remote repo with locally built debs:
>  >
> 
> I use reprepro too. Super simple to setup and use, would recommend.
> 
>  > You really only need to run apt-ftparchive on a directory full of debs:
>  >
>  > apt-ftparchive packages path/to/your/debs | gzip > Packages.gz
>  >
> 
> This is something I would like to avoid as I might not always have full 
> shell access to the repository from where the package is built.
> 
> Furthermore, I don't have access to all the packages in the repository 
> in the same folder to manually generate Packages.gz. (reprepro can do it 
> for me)
> 
> Ideally, I would like to upload a signed .changes control file to ensure 
> the package wasn't tampered with or got corrupted during the transfer. 
> (since .changes contains checksums)
> 

So I guess I didn't realize that dput was that simple to make work for a
private repo. That's pretty interesting.

I do think that fpm not producing a .changes file is probably just a
matter of teaching fpm how to run the step that produces the changes
file, which probably wouldn't be as hard as changing all of your
workflow at this point.



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread matt
and thus different keys.

so what's the issue?

On Wed, Mar 4, 2015 at 1:16 PM, Mathieu Gagné  wrote:

> On 2015-03-04 1:10 PM, matt wrote:
>
>> use a pgp signing key with pass phrase and sign the release / packages
>> files.  ubuntu already does this.
>>
>>
> You also need to sign the packages before uploading.
> You can sign the packages AND the repository.
> Both are done by different actors: uploader, repo manager.
>
> --
> Mathieu
>


Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread Mathieu Gagné

On 2015-03-04 1:10 PM, matt wrote:

use a pgp signing key with pass phrase and sign the release / packages
files.  ubuntu already does this.



You also need to sign the packages before uploading.
You can sign the packages AND the repository.
Both are done by different actors: uploader, repo manager.

--
Mathieu



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread matt
use a pgp signing key with pass phrase and sign the release / packages
files.  ubuntu already does this.

On Wed, Mar 4, 2015 at 12:39 PM, Mathieu Gagné  wrote:

> On 2015-03-04 12:18 PM, Clint Byrum wrote:
>
>> Excerpts from Mathieu Gagné's message of 2015-03-04 08:31:45 -0800:
>>
>>>
>>> I really like APT repositories and would like to continue using them for
>>> the time being.
>>>
>>
>> I'm impressed you took the time to setup dput!
>>
>
> It's super simple to setup and use. Create a basic dput.cf and you are
> good to go.
>
>
> > You can also use reprepro, which is somewhat handy for combining a
> > remote repo with locally built debs:
> >
>
> I use reprepro too. Super simple to setup and use, would recommend.
>
>
> > You really only need to run apt-ftparchive on a directory full of debs:
> >
> > apt-ftparchive packages path/to/your/debs | gzip > Packages.gz
> >
>
> This is something I would like to avoid as I might not always have full
> shell access to the repository from where the package is built.
>
> Furthermore, I don't have access to all the packages in the repository in
> the same folder to manually generate Packages.gz. (reprepro can do it for
> me)
>
> Ideally, I would like to upload a signed .changes control file to ensure
> the package wasn't tampered with or got corrupted during the transfer.
> (since .changes contains checksums)
>
> --
> Mathieu
>
>


Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread Mathieu Gagné

On 2015-03-04 12:18 PM, Clint Byrum wrote:

Excerpts from Mathieu Gagné's message of 2015-03-04 08:31:45 -0800:


I really like APT repositories and would like to continue using them for
the time being.


I'm impressed you took the time to setup dput!


It's super simple to setup and use. Create a basic dput.cf and you are 
good to go.
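
A minimal dput.cf for a private repository looks roughly like this
(host, login, and paths are placeholders):

```ini
# ~/.dput.cf -- upload target for a private APT repository
[myrepo]
fqdn = apt.example.com
method = scp
login = uploader
incoming = /srv/apt/incoming
allow_unsigned_uploads = 0
```

With that in place, uploading is just `dput myrepo mypackage_1.0-1_amd64.changes`.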



> You can also use reprepro, which is somewhat handy for combining a
> remote repo with locally built debs:
>

I use reprepro too. Super simple to setup and use, would recommend.


> You really only need to run apt-ftparchive on a directory full of debs:
>
> apt-ftparchive packages path/to/your/debs | gzip > Packages.gz
>

This is something I would like to avoid as I might not always have full 
shell access to the repository from where the package is built.


Furthermore, I don't have access to all the packages in the repository 
in the same folder to manually generate Packages.gz. (reprepro can do it 
for me)


Ideally, I would like to upload a signed .changes control file to ensure 
the package wasn't tampered with or got corrupted during the transfer. 
(since .changes contains checksums)


--
Mathieu



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread Clint Byrum
Excerpts from Mathieu Gagné's message of 2015-03-04 08:31:45 -0800:
> Hi,
> 
> I'm currently experimenting with fpm.
> 
> I learned that fpm does not generate the files needed to upload your new 
> package to an APT repository. Since the package type built by fpm is 
> binary, that file would be the .changes control file.
> 
> This bothers me a lot because my current workflow looks like this:
> 1) Fork Ubuntu Cloud Archive OpenStack source packages
> 2) Apply custom patches using quilt [1]
> 3) Build source and binary packages using standard dpkg tools
> 4) Upload source and binary packages to private APT repository with dput
> 5) Install new packages
> 
> (repeat steps 2-4 until a new upstream release is available)
> 
> While I didn't test fpm against OpenStack packages, I did test it with 
> other internal projects. I faced the same challenges and came to similar 
> conclusions:
> 
> If I used fpm instead, step 4 would fail because there is no .changes 
> control file required by dput to upload to APT.
> 
> This raises the question:
> 
> How are people (using fpm) managing and uploading their deb packages for 
> distribution? APT? Maven? Pulp? Black magic?
> 
> I really like APT repositories and would like to continue using them for 
> the time being.

I'm impressed you took the time to setup dput!

You really only need to run apt-ftparchive on a directory full of debs:

apt-ftparchive packages path/to/your/debs | gzip > Packages.gz

You can also use reprepro, which is somewhat handy for combining a
remote repo with locally built debs:

http://mirrorer.alioth.debian.org/
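
For completeness, a reprepro repository is described by a conf/distributions
file along these lines (codename, labels, and the signing key are
placeholders):

```ini
# conf/distributions
Origin: example
Label: example-internal
Codename: trusty
Architectures: amd64 source
Components: main
Description: Internal OpenStack packages
SignWith: DEADBEEF
```

reprepro then regenerates Packages.gz and Release for you when packages are
added (e.g. with `reprepro -b . include trusty mypackage_1.0-1_amd64.changes`).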



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread Mathieu Gagné

On 2015-03-04 11:52 AM, matt wrote:

I like cowbuilder... pbuilder using copy on write qcow's for the build
environment.  Most folks automating debian package creation use pbuilder.



Thanks for sharing. I'm a bit more crazy, I use schroot and sbuild. =)

--
Mathieu



Re: [Openstack-operators] Packaging with fpm

2015-03-04 Thread matt
I like cowbuilder... pbuilder using copy on write qcow's for the build
environment.  Most folks automating debian package creation use pbuilder.

-Matt

On Wed, Mar 4, 2015 at 11:31 AM, Mathieu Gagné  wrote:

> Hi,
>
> I'm currently experimenting with fpm.
>
> I learned that fpm does not generate the files needed to upload your new
> package to an APT repository. Since the package type built by fpm is
> binary, that file would be the .changes control file.
>
> This bothers me a lot because my current workflow looks like this:
> 1) Fork Ubuntu Cloud Archive OpenStack source packages
> 2) Apply custom patches using quilt [1]
> 3) Build source and binary packages using standard dpkg tools
> 4) Upload source and binary packages to private APT repository with dput
> 5) Install new packages
>
> (repeat steps 2-4 until a new upstream release is available)
>
> While I didn't test fpm against OpenStack packages, I did test it with
> other internal projects. I faced the same challenges and came to similar
> conclusions:
>
> If I used fpm instead, step 4 would fail because there is no .changes
> control file required by dput to upload to APT.
>
> This raises the question:
>
> How are people (using fpm) managing and uploading their deb packages for
> distribution? APT? Maven? Pulp? Black magic?
>
> I really like APT repositories and would like to continue using them for
> the time being.
>
>
> fpm vs native tools
> ---
>
> To continue on the general subject of packaging:
>
> I do understand that fpm gives you some advantages like:
> - being able to run the packaging step on non-Debian operating systems
> - not having to create/manage a debian/ folder.
>
> Are those really advantages?
>
> With Docker, Vagrant and friends, you always have access to a "native"
> operating system to package your stuff and use their tooling.
>
> As for debian/, I feel not everyone likes Debian packaging, for various
> reasons, but I happen to still like it. It serves me very well.
>
> So I thought about something regarding OpenStack packaging (for operators,
> not upstream):
>
> What if there was a place to get OpenStack package Debian *skeletons* so
> you can build your own packages from master? AFAIK, the Anvil project ships
> with .spec files. [2]
>
> Why not do the same for Debian packages? If such a thing existed, would
> you use it?
>
> One important requirement for me is that packages should be "compatible"
> with native packages so Puppet can manage them without too much
> modification. This means Puppet shouldn't fail because it couldn't find
> the nova-compute package. (because fpm only created the all-in-one nova
> package)
>
>
> Virtualenv
> --
>
> I understand that people also like Python virtualenv because they are
> (mostly) self-contained: no need to natively package python modules.
>
> That's super interesting and I'm planning on trying it out myself in the
> following months.
>
> The giftwrap project [3] leverages fpm by automating the git checkout and
> virtualenv parts.
>
> Unfortunately, the packages generated aren't 100% compatible with Puppet
> nor can they be uploaded to an APT repository. (still missing that .changes
> control file)
>
> Does it bother anyone else? Or are people using giftwrap not using Puppet
> and APT repository to deploy their stuff?
>
>
> Other system packages
> -
>
> What about other system packages like libvirt or QEMU? Is anyone (patching
> and) packaging them? If so, how are you doing it? git-buildpackage?
>
> Would there be interest in sharing the tooling and methodology?
>
>
> So how can we have the cake AND eat it? =)
>
>
> [1] Using gbp-pq from git-buildpackage to keep some level of sanity
> [2] https://github.com/stackforge/anvil/tree/master/conf/
> templates/packaging/specs
> [3] https://github.com/blueboxgroup/giftwrap
>
> --
> Mathieu
>


[Openstack-operators] Packaging with fpm

2015-03-04 Thread Mathieu Gagné

Hi,

I'm currently experimenting with fpm.

I learned that fpm does not generate the files needed to upload your new 
package to an APT repository. Since the package type built by fpm is 
binary, that file would be the .changes control file.


This bothers me a lot because my current workflow looks like this:
1) Fork Ubuntu Cloud Archive OpenStack source packages
2) Apply custom patches using quilt [1]
3) Build source and binary packages using standard dpkg tools
4) Upload source and binary packages to private APT repository with dput
5) Install new packages

(repeat steps 2-4 until a new upstream release is available)

While I didn't test fpm against OpenStack packages, I did test it with 
other internal projects. I faced the same challenges and came to similar 
conclusions:


If I used fpm instead, step 4 would fail because there is no .changes 
control file required by dput to upload to APT.


This raises the question:

How are people (using fpm) managing and uploading their deb packages for 
distribution? APT? Maven? Pulp? Black magic?


I really like APT repositories and would like to continue using them for 
the time being.



fpm vs native tools
---

To continue on the general subject of packaging:

I do understand that fpm gives you some advantages like:
- being able to run the packaging step on non-Debian operating systems
- not having to create/manage a debian/ folder.

Are those really advantages?

With Docker, Vagrant and friends, you always have access to a "native" 
operating system to package your stuff and use their tooling.


As for debian/, I feel not everyone likes Debian packaging, for various
reasons, but I happen to still like it. It serves me very well.


So I thought about something regarding OpenStack packaging (for 
operators, not upstream):


What if there was a place to get OpenStack package Debian *skeletons* so 
you can build your own packages from master? AFAIK, the Anvil project 
ships with .spec files. [2]


Why not do the same for Debian packages? If such a thing existed, would
you use it?


One important requirement for me is that packages should be "compatible"
with native packages so Puppet can manage them without too much
modification. This means Puppet shouldn't fail because it couldn't find
the nova-compute package. (because fpm only created the all-in-one nova
package)



Virtualenv
--

I understand that people also like Python virtualenv because they are 
(mostly) self-contained: no need to natively package python modules.


That's super interesting and I'm planning on trying it out myself in the 
following months.


The giftwrap project [3] leverages fpm by automating the git checkout 
and virtualenv parts.


Unfortunately, the packages generated aren't 100% compatible with Puppet 
nor can they be uploaded to an APT repository. (still missing that 
.changes control file)


Does it bother anyone else? Or are people using giftwrap not using 
Puppet and APT repository to deploy their stuff?



Other system packages
-

What about other system packages like libvirt or QEMU? Is anyone (patching
and) packaging them? If so, how are you doing it? git-buildpackage?


Would there be interest in sharing the tooling and methodology?


So how can we have the cake AND eat it? =)


[1] Using gbp-pq from git-buildpackage to keep some level of sanity
[2] 
https://github.com/stackforge/anvil/tree/master/conf/templates/packaging/specs

[3] https://github.com/blueboxgroup/giftwrap

--
Mathieu



[Openstack-operators] Ops Meetup Monitoring/Tools Session

2015-03-04 Thread Joe Topjian
Hi all,

I'll be moderating the Monitoring/Tools session at next week's Ops Meetup.
The etherpad is here:

https://etherpad.openstack.org/p/PHL-ops-tools-wg

Please add items you'd like to see covered. So far, the general topics will
be:

* Discussion of Monasca, StackTach, and related tools. Members of the
Monasca and StackTach teams will be attending, so feel free to ask
questions. They also want to gather feedback on the difficulties operators
are having in the areas those tools address.

* Review and focus on the action items on the Monitoring wiki page:

https://wiki.openstack.org/wiki/Operations/Monitoring

See everyone next week,
Joe

ps: A general note for everyone attending that Sunday March 8th marks the
start of Daylight Savings Time in North America. For those who still use a
time-keeping device that does not auto-adjust, the time will be shifted an
hour forward on March 8th at 2am.  :)


Re: [Openstack-operators] Storage error

2015-03-04 Thread Delatte, Craig
So it looks like you have pretty much everything commented out.

You need to define what you are using. For example:
the [database] section
the rabbit settings
auth_strategy
host
volume_driver

And so on. This is typically tailored to each environment. If you are testing
this out using Vagrant, I am sure there is a lot of good documentation and
possibly Puppet modules to stand up a base OpenStack cluster, and you can see
what they do to get Cinder up and functional.
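
As a rough starting point, an Icehouse-era cinder.conf for a simple LVM
backend covers settings along these lines (hostnames, IPs, and passwords are
placeholders for your environment):

```ini
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.41
glance_host = controller
# LVM backend: expects a volume group named cinder-volumes to exist
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
```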


Craig DeLatte
OpenStack DevOps
Time Warner Cable
704-731-3356
610-306-4816

From: Anwar Durrani <durrani.an...@gmail.com>
Date: Wednesday, March 4, 2015 at 7:46 AM
To: Time Warner Cable <craig.dela...@twcable.com>
Cc: openstack-operators <openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Storage error

Hi Delatte,

I have installed Cinder on the controller node. Do I need to install
separate storage for Cinder volume management, or will the controller server
be OK?

Please find configuration file attached.



Thanks

On Tue, Mar 3, 2015 at 7:48 PM, Delatte, Craig
<craig.dela...@twcable.com> wrote:
What does your cinder.conf look like?  What are you using for storage?
Craig DeLatte
OpenStack DevOps
Time Warner Cable
704-731-3356
610-306-4816

From: Anwar Durrani <durrani.an...@gmail.com>
Date: Tuesday, March 3, 2015 at 5:28 AM
To: openstack-operators <openstack-operators@lists.openstack.org>
Subject: [Openstack-operators] Storage error

Hello Team,

I have set up Icehouse with the following nodes in a test environment:

1.) Controller node
2.) Compute Node
3.) Network Node

I have a basic setup on it. When I try to create a volume for a tenant called
demo or admin, I am able to create it, but with status "error"; I don't know
why that is. Do I need to configure storage as a separate node to create
Cinder volumes? What am I missing?

Thanks

--
Thanks & regards,
Anwar M. Durrani
+91-8605010721







--
Thanks & regards,
Anwar M. Durrani
+91-8605010721


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Storage error

2015-03-04 Thread Anwar Durrani
Hi Delatte,

I have installed Cinder on the controller node. Do I need to install
separate storage for Cinder volume management, or will the controller server
be OK?

Please find configuration file attached.



Thanks

On Tue, Mar 3, 2015 at 7:48 PM, Delatte, Craig 
wrote:

>   What does your cinder.conf look like?  What are you using for storage?
>  Craig DeLatte
> OpenStack DevOps
> Time Warner Cable
> 704-731-3356
> 610-306-4816
>
>   From: Anwar Durrani 
> Date: Tuesday, March 3, 2015 at 5:28 AM
> To: openstack-operators 
> Subject: [Openstack-operators] Storage error
>
>   Hello Team,
>
>  I have set up Icehouse with the following nodes in a test environment:
>
>  1.) Controller node
>  2.) Compute Node
>  3.) Network Node
>
>  I have a basic setup on it. When I try to create a volume for a tenant
> called demo or admin, I am able to create it, but with status "error"; I
> don't know why that is. Do I need to configure storage as a separate node
> to create Cinder volumes? What am I missing?
>
>  Thanks
>
>  --
>  Thanks & regards,
> Anwar M. Durrani
> +91-8605010721
>  
>
>
>
>



-- 
Thanks & regards,
Anwar M. Durrani
+91-8605010721



cinder.conf
Description: Binary data


Re: [Openstack-operators] Storage error

2015-03-04 Thread Anwar Durrani
Hi Joseph,

This is the error I am getting, and this is configured on the controller node.

[root@controller swift]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
| 4b984108-4dd7-4aa7-9c06-02f868dcdc98 | error  |   myVolume   |  1   |     None    |  false   |             |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+

On Tue, Mar 3, 2015 at 6:18 PM, Joseph Bajin  wrote:

> You do have to create something for cinder to have as a backend.  That
> could be an LVM volume on a particular node, or it could be a CEPH setup,
> that's up to your requirements.
>
> Also, you didn't state the error that you are getting when you try to
> create a cinder volume.  That would certainly help in figuring out what is
> wrong.
>
>
> -- Joe
>
>
>
>
> On Tue, Mar 3, 2015 at 5:28 AM, Anwar Durrani 
> wrote:
>
>> Hello Team,
>>
>> I have setup icehouse with following nodes in test environment
>>
>> 1.) Controller node
>> 2.) Compute Node
>> 3.) Network Node
>>
>> I have a basic setup on it. When I try to create a volume for a tenant
>> called demo or admin, I am able to create it, but with status "error"; I
>> don't know why that is. Do I need to configure storage as a separate node
>> to create Cinder volumes? What am I missing?
>>
>> Thanks
>>
>> --
>> Thanks & regards,
>> Anwar M. Durrani
>> +91-8605010721
>> 
>>
>>
>>
>>
>


-- 
Thanks & regards,
Anwar M. Durrani
+91-8605010721
