Re: [Openstack] [openstack-dev] Mitaka - Unable to attach volume to VM

2017-02-09 Thread John Griffith
On Thu, Feb 9, 2017 at 9:04 PM, Sam Huracan 
wrote:

> Hi Sean,
>
> I've checked 'openstack volume list'; the state of all volumes is available,
> and I can download an image to a volume.
> I also use Ceph as another Cinder volume backend, and the issue is the same.
> Same log.
>
> Port 3260 has been opened in iptables.
>
> When I run nova --debug volume-attach, I see nova contact Cinder for the
> volume, but the nova log still returns "VolumeNotFound"; I can't understand why.
> http://paste.openstack.org/show/598332/
>
> cinder-scheduler.log and cinder-volume.log do not have any errors or any
> log entries for the attach.
>
>
> 2017-02-10 10:16 GMT+07:00 Sean McGinnis :
>
>> On Fri, Feb 10, 2017 at 02:18:15AM +0700, Sam Huracan wrote:
>> > Hi guys,
>> >
>> > I hit this issue when deploying Mitaka.
>> > When I attach an LVM volume to a VM, it stays in the "Attaching" state. I am
>> > also unable to boot a VM from a volume.
>> >
>> > This is /var/log/nova/nova-compute.log in Compute node when I attach
>> volume.
>> > http://paste.openstack.org/show/598282/
>> >
>> > Mitaka version: http://prntscr.com/e6ns0u
>> >
>> > Can you help me solve this issue?
>> >
>> > Thanks a lot
>>
>>
>> Hi Sam,
>>
>> Any errors in the Cinder logs? Or just the ones from Nova not finding the
>> volume?
>>
>> Sean
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Long shot, but any chance your nova config is pointing to a different
Cinder endpoint?  The error in your logs states the volume does not exist
according to the cinderclient get call.

VolumeNotFound: Volume 5b69704f-14b4-41bb-af51-23d0aa55f148 could
not be found.
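
A quick way to check that theory from the node running nova-compute
(commands are illustrative; the volume ID is the one from the error above):

    # Which Block Storage endpoint does the service catalog advertise?
    openstack catalog show volumev2

    # Does the volume exist when queried from this node?
    cinder show 5b69704f-14b4-41bb-af51-23d0aa55f148

    # Any region or catalog overrides Nova might be using?
    grep -A4 '^\[cinder\]' /etc/nova/nova.conf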
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Fw: Re: Cinder Multiple Backends - Filter On Tenant

2016-03-20 Thread John Griffith
On March 19, 2016 at 19:13:33, Erlon Cruz (sombra...@gmail.com) wrote:
>
> Hi Brent,
>
> Unfortunately that is not possible. The feature mentioned by Vahric,
> driver_filter, is a 'driver' custom filter capability, i.e. the vendor
> driver must implement it in order for it to be usable (of course, nothing
> stops you from implementing it in the driver if your vendor does not support
> it). You are trying to filter based on the volume size, but the existing
> filters filter based on the host's available capacity, not the volume's size.
>
> About the tenant filtering, the closest solution I can think of is to
> create a volume type associated with the backend you want to cast that
> tenant's volumes to, and then remove the tenant's permissions on all other
> available volume types, so the tenant only sees that volume type.
>
> Can you give the reason why you need to filter like that? This could be a
> good addition to future Cinder features.
>
> Erlon
>
> On Thu, Feb 25, 2016 at 7:57 PM, Brent Troge 
> wrote:
>
>> yeah, i have read through those, plus i did some reading through the
>> actual code of all kilo/stable cinder filters
>>
>> in any case i think i have a good path forward, just need to start testing
>> to understand this a bit more.
>>
>> thanks for taking the time to assist.
>>
>>
>> On Thu, Feb 25, 2016 at 9:13 AM, Vahric Muhtaryan 
>> wrote:
>>
>>> I found this
>>>
>>>
>>> http://docs.openstack.org/admin-guide-cloud/blockstorage-driver-filter-weighing.html
>>>
>>> And this
>>>
>>>
>>> https://blueprints.launchpad.net/cinder/+spec/filtering-weighing-with-driver-supplied-functions
>>>
>>> From: Brent Troge 
>>> Date: Thursday 25 February 2016 at 16:17
>>> To: "openstack@lists.openstack.org" 
>>> Subject: [Openstack] Cinder Multiple Backends - Filter On Tenant
>>>
>>> I need the cinder filter to support directing volumes to a specific
>>> backend based on tenant id and volume size.
>>>
>>> Something like:
>>>
>>> # tenant specific backend
>>> [backend-1]
>>> filter = (volume_size > 100G) && (tenant_id == abc123)
>>>
>>> # all others
>>> [backend-2]
>>> filter = (volume_size < 100G)
>>>
>>>
>>>
>>> Is this possible with the current set of kilo stable cinder filters ?
>>>
>>> Thanks in advance.
>>>
>>>
>>> ___ Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to :
>>> openstack@lists.openstack.org Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Hi Erlon,

Cinder has the ability to set access rights on Volume Types.  I haven't
looked at it in a while, but what you could do is set the type up based on
the backend, and then apply access to the tenant you want there.
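
For example, a rough sketch with the CLI of that era (type name, backend
name, and tenant ID are illustrative):

    # Create a private type tied to the tenant's backend
    cinder type-create --is-public false premium
    cinder type-key premium set volume_backend_name=backend-1

    # Grant only that one tenant access to the type
    cinder type-access-add --volume-type premium --project-id <tenant-uuid>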

One thing I guess we'd need to consider is different default types
per tenant, which seems like it would be a useful feature if there's not
already a way to make it work.

John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [CINDER] Scheduler Support for pools issue, not picking free pools

2016-02-01 Thread John Griffith
On Mon, Feb 1, 2016 at 6:49 PM, Dilip Sunkum Manjunath <
dilip.sunkummanjun...@toshiba-tsip.com> wrote:

> Hi all,
>
>
>
>
>
> *Use Case : *
>
>
>
> *Step 1*
>
> - I have multiple backend pools.
>
> - I updated the get_volume_stats() method to hold the pool information
> returned from the storage APIs.
>
>
>
> Sample log :
>
>
>
> *[{'pool_name': '0', 'total_capacity_gb': 838, 'QoS_support': False,
> 'thick_provisioning_support': True, 'reserved_percentage': 1,
> 'consistencygroup_support': False, 'thin_provisioning_support': True,
> 'free_capacity_gb': 83.75}, {'pool_name': '1',
> 'total_capacity_gb': 1126, 'QoS_support': False,
> 'thick_provisioning_support': True, 'reserved_percentage': 1,
> 'consistencygroup_support': False, 'thin_provisioning_support': True,
> 'free_capacity_gb': 67.560017}]*
>
>
>
>
>
> *Step 2*
>
>
>
>
>
> - In the create_volume method I am reading the pool information, since the
> storage API needs to know which pool to create the volume on.
>
> - For example, pool_id = volume_utils.extract_host(volume['host'],
> level='pool'), making use of the import below.
>
>
>
> from cinder.volume import utils as volume_utils
>
>
>
> *The problem I am facing is:*
>
>
>
> The scheduler is not
> always picking the pool which has free space; it sometimes returns the pool
> which doesn't have space.
>
> I noticed this when I
> printed volume['host'] and the pool id above.
>
>
>
>
>
> Is there anything else I must do? Please suggest.
>
>
>
>
>
>
>
> Thanks
>
> Dilip
>
>
>
>
>
>
>
>
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Sounds like this is in reference to a new driver that you're working on?
Any chance you can share a link to either a patch in Gerrit or the code on
GitHub?

John​
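
For anyone following along, a minimal sketch of the pool-aware pattern being
discussed (a driver-class fragment with illustrative names; one common
pitfall is returning the pool list at the top level of the stats dict
instead of nesting it under a 'pools' key):

    # driver sketch -- illustrative only
    from cinder.volume import utils as volume_utils

    def get_volume_stats(self, refresh=False):
        # Per-pool entries let the capacity filter/weigher pick a pool
        return {
            'volume_backend_name': 'my_backend',
            'vendor_name': 'ExampleVendor',
            'driver_version': '1.0',
            'storage_protocol': 'iSCSI',
            'pools': [
                {'pool_name': '0', 'total_capacity_gb': 838,
                 'free_capacity_gb': 83.75, 'reserved_percentage': 1,
                 'thin_provisioning_support': True},
                {'pool_name': '1', 'total_capacity_gb': 1126,
                 'free_capacity_gb': 67.56, 'reserved_percentage': 1,
                 'thin_provisioning_support': True},
            ],
        }

    def create_volume(self, volume):
        # The scheduler encodes its choice as host@backend#pool
        pool_name = volume_utils.extract_host(volume['host'], level='pool')
        ...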
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] OpenStack [CINDER] : frequency of execution of get volume stats

2016-01-05 Thread John Griffith
Hi Dilip,

Currently the answer is no, not for a single task.  In Cinder everything is
blanketed under a single periodic task interval.  You can modify the
interval itself, but it applies to every periodic task in Cinder.

The hooks to create multiple periodic tasks and spacing for them were added
a while back in this bug:
https://bugs.launchpad.net/nova/+bug/1319232

via this patch:
https://review.openstack.org/#/c/96512/

We probably could/should consider adding the config and options to "finish"
the work that was started here.  Keep in mind though that messing with the
spacing on this call too much can lead to undesirable scheduler behavior.

Thanks,
John
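
For reference, the single knob being described lives in cinder.conf; the
values below are the defaults as I recall them, so verify against your
release:

    [DEFAULT]
    # Interval for all periodic tasks on the volume service, including the
    # get_volume_stats() report that feeds the scheduler
    periodic_interval = 60
    # Service heartbeat, separate from the stats report
    report_interval = 10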

On Mon, Jan 4, 2016 at 7:47 PM, Dilip Sunkum Manjunath <
dilip.sunkummanjun...@toshiba-tsip.com> wrote:

> Hi all,
>
>
>
>
>
> I would like to know whether it is possible to configure how frequently
> the get_volume_stats() method is executed.
>
>
>
>
>
>
>
>
>
> Thanks
>
> Dilip
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Add external storage to openstack

2015-11-05 Thread John Griffith
On Thu, Nov 5, 2015 at 4:01 AM, Navneet Singh 
wrote:

> Hi All
>
> I installed openstack kilo on an ubuntu linux-server 14.04 using devstack.
> I installed ubuntu and openstack kilo on a 512GB SSD (sda1). I also have
> 8TB HDD (sdb1) storage connected to the same system.
>
> Now I want all the instances that I launch to take up space on the 8TB
> drive, but when I look at the 8TB drive it seems they are using space on my
> SSD. Also, the OpenStack GUI shows no sign of that 8TB drive.
>
> Please help me understand why this is so, and what can be configured or
> what steps I can follow to reach the desired goal.
>
> --
> Regards
> Navneet
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
​Did you set your instances_path variable in nova.conf to point to a
directory on your sdb1?  The default uses $state_path/instances, maybe
that's what you have going on.

John​
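
Something like this in nova.conf, assuming sdb1 is mounted at a path such as
/mnt/instances (the mount point is illustrative):

    [DEFAULT]
    # Default is $state_path/instances, e.g. /var/lib/nova/instances
    instances_path = /mnt/instances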
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Volume from image question

2015-10-12 Thread John Griffith
On Mon, Oct 12, 2015 at 7:46 PM, Cory Hawkless  wrote:

> In my setup I have one Ceph cluster but 2 different pools, one for images
> which is on SATA disks and one for volumes which is on faster SAS disks.
>
> Given this setup I don’t imagine there is any way to thin provision an
> image to a volume? It would need to be a complete copy from one pool to
> another, yes?
>
>
>
> I’m certainly not in a position to be able to contribute a code change.
> I’m surprised this hasn’t been done already; it seems terribly inefficient
> to have to copy the images twice.
>
>
>
> Regards,
>
> Cory
>
>
>
> *From:* Avishay Traeger [mailto:avis...@stratoscale.com]
> *Sent:* Monday, 12 October 2015 9:55 PM
> *To:* Cory Hawkless 
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] Volume from image question
>
>
>
> The flow for all images for this process is to download from Glance to a
> temporary file, and then write to volume.  This is not necessary for raw
> images, but that optimization has not been done.  I did leave a comment
> about that in the code though 2.5 years ago, but never implemented it - you
> can give it a go if you'd like :-)
>
>
>
> Are you using two different Ceph clusters, one for images and one for
> volumes?  Otherwise it should just be doing a thin provisioned clone of the
> image (no download, no temp space, no upload).
>
>
>
> On Mon, Oct 12, 2015 at 12:27 PM, Cory Hawkless 
> wrote:
>
> Hi all,
>
>
>
> When creating a volume from an image(Using Horizon), why does the Cinder
> server need to do what appears to be a conversion of the image before it
> can create the volume?
>
> All of my images in Glance are uploaded in RAW format, images and volumes
> are stored in Ceph.
>
>
>
> The reason I know the images are being processed on my glance server is
> because it runs out of disk space when trying to make volumes from large
> images and the process fails. I can see the temporary file in
> /var/lib/cinder/conversion
>
>
>
> Is it not possible to have Glance simply copy the image from the image
> store into the volume store? I am going to regularly be creating new
> Windows Server instances, so it would take quite some time for a 20 GB image
> to be processed by Cinder before it can be uploaded into Ceph.
>
>
>
> Regards,
>
> Cory
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
>
>
> --
>
> *Avishay Traeger, PhD*
>
> *System Architect*
>
>
>
> Mobile: +972 54 447 1475
>
> E-mail: avis...@stratoscale.com
>
>
>
>
>
> Web  | Blog
>  | Twitter
>  | Google+
> 
>  | Linkedin 
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
So, trying to jog my memory a bit on this.  The initial work was done
here: [1]

A concern was raised (though perhaps it was an IRC conversation as opposed
to a formal bug or review), and I reworked this after patch set 2, adding
the piece that does the conversion regardless of format.

Anyway, it is quite possible that Ceph can be configured so that this is
not an issue.  I seem to recall that you could in fact use its internal
clones (that was actually what prompted me to work on this feature in the
first place).

Also, FWIW, in Liberty we fixed this up considerably by adding a cache
layer.  You'll still have this expense on the initial creation, so I guess it
won't really help you.
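
For reference, the Liberty cache mentioned above is enabled per backend in
cinder.conf, roughly like this (option names as of Liberty, values
illustrative):

    [ceph-sas]
    image_volume_cache_enabled = True
    # Optional caps on how large the cache may grow
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50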

Sorry, I don't have better documentation for you on the reasoning behind
the change.  Perhaps some folks with more Ceph experience can chime in and
offer up a possible solution?

John
[1]: https://review.openstack.org/#/c/17437/2/cinder/volume/driver.py​
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Volume from image question

2015-10-12 Thread John Griffith
On Mon, Oct 12, 2015 at 3:27 AM, Cory Hawkless  wrote:

> Hi all,
>
>
>
> When creating a volume from an image(Using Horizon), why does the Cinder
> server need to do what appears to be a conversion of the image before it
> can create the volume?
>
> All of my images in Glance are uploaded in RAW format, images and volumes
> are stored in Ceph.
>
>
>
> The reason I know the images are being processed on my glance server is
> because it runs out of disk space when trying to make volumes from large
> images and the process fails. I can see the temporary file in
> /var/lib/cinder/conversion
>
>
>
> Is it not possible to have Glance simply copy the image from the image
> store into the volume store? I am going to regularly be creating new
> Windows Server instances, so it would take quite some time for a 20 GB image
> to be processed by Cinder before it can be uploaded into Ceph.
>
>
>
> Regards,
>
> Cory
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
​Hey Cory,​

​If I remember correctly, it was due to concerns around security issues.
It's using the intermediate/tmp file to perform image checks prior to
blindly laying the image down on the volume.  The good news, maybe, is that
at least it doesn't need to do a conversion of the file.

Thanks,
John​
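
If it helps, the usual checklist for getting the direct RBD clone path (no
temporary file at all) is roughly: the same Ceph cluster behind Glance and
Cinder, a raw image, and Glance exposing the image location. Settings are
illustrative; verify them for your release:

    # glance-api.conf
    show_image_direct_url = True

    # and confirm the image really is raw:
    glance image-show <image-id> | grep disk_format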
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Cinder] Questions on implementing the Replication V2 spec

2015-09-25 Thread John Griffith
On Thu, Sep 24, 2015 at 3:21 PM, Price, Loren 
wrote:

> Hi John,
>
>
>
> Okay, it sounds like we’ll be okay to implement the replication V2 spec. I
> believe the failover aspect was the only API that we were seeing a problem
> with. It also sounds like there might be some areas for improvement around
> documentation, etc. Let me know if there’s anything I/we can do to help on
> that.
>
>
>
> Thanks,
>
>
>
> Michael
>
>
>
> *From:* John Griffith [mailto:john.griff...@solidfire.com]
> *Sent:* Thursday, September 24, 2015 2:26 PM
> *To:* Price, Loren
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] [Cinder] Questions on implementing the
> Replication V2 spec
>
>
>
>
>
>
>
> On Thu, Sep 24, 2015 at 11:48 AM, Price, Loren 
> wrote:
>
> Hey,
>
>
>
> We’re looking into implementing the VolumeReplication_V2
> <https://github.com/openstack/cinder-specs/blob/master/specs/liberty/replication_v2.rst>
> spec for our NetApp E-Series volume driver. Looking at the specification, I
> can foresee a problem with implementing the new API call 
> “failover_replicated_volume(volume) “
> with an unmanaged replication target. I believe with a managed target we
> can provide it, if I’m understanding correctly that it merely requires
> updating the host id for the volume. Based on that, I have two questions:
>
>
>
> 1.  Is it acceptable, in implementing this spec, to only provide this
> API for managed targets (and either throw an exception or essentially make
> a no-op) for an unmanaged replication target?
>
> 2.  In general, if a storage backend is incapable of performing a
> certain operation, what is the correct way to handle it? Can the driver
> implement the spec at all? Should it throw a NotImplementedError? No-op?
>
>
>
> Thanks,
>
>
>
> Michael Price
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> Oops, did I not respond to the list on that last response?  Just in case,
> here it is again:
>
> ​
>
>
>
> 1.  Is it acceptable, in implementing this spec, to only provide this
> API for managed targets (and either throw an exception or essentially make
> a no-op) for an unmanaged replication target?
>
> ​Yes by ​
>
>
>
> ​design it's set up such that ​it's left up to configuration.  In other
> words the idea is that we have fairly loose definitions around the API
> calls themselves to allow for differing implementations.
>
> 2.  In general, if a storage backend is incapable of performing a
> certain operation, what is the correct way to handle it? Can the driver
> implement the spec at all? Should it throw a NotImplementedError? No-op?
>
> ​Depends on who you ask :)  IMO we need to do a better job of this, this
> could be documenting in the deployment guides how to enable/disable API
> calls in certain deployments so that unsupported calls are just flat out
> not available.  My true belief is that we shouldn't be implementing
> features that you can't run with every/any backend device in the first
> place, but that's my usual rant and somewhat off topic here :)
>
>
>
> Note that a lot of the logic for replication in V2 was moved into the
> volume-type and the conf file precisely to address some of the issues you
> mention above.  The idea being that if the capabilities of the backend
> don't match replication specs in the type then the command fails for
> no-valid host.  The one thing I don't like about this is how we relay that
> info to the end user (or more accurately the fact that we don't).  We just
> put the volume in error state and the only info regarding why is in the
> logs which the end user doesn't have.  This is where something like a
> better more clear policy file would help as well as providing a
> capabilities call in the API.
>
>
>
> By the way, I'm glad you asked these questions here.  This is part of the
> reason why I was so strongly opposed to merging an implementation of the V2
> replication in Liberty.  I think it's important to have more than one or
> two vendors looking at this and working out details so we release something
> that is stable and usable.  My philosophy is that now for M we have a
> foundation in the core code that will likely evolve as drivers begin
> implementing the feature.
>
>
>
​By the way, have you looked at this at all (or is that what you're
referencing)[1]?  Might be more useful than the spec at this point.

[1]:
https://github.com/openstack/cinder/blob/master/doc/source/devref/replication.rst
​
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Cinder] Questions on implementing the Replication V2 spec

2015-09-24 Thread John Griffith
On Thu, Sep 24, 2015 at 11:48 AM, Price, Loren 
wrote:

> Hey,
>
>
>
> We’re looking into implementing the VolumeReplication_V2
> 
> spec for our NetApp E-Series volume driver. Looking at the specification, I
> can foresee a problem with implementing the new API call “
> failover_replicated_volume(volume) “ with an unmanaged replication
> target. I believe with a managed target we can provide it, if I’m
> understanding correctly that it merely requires updating the host id for
> the volume. Based on that, I have two questions:
>
>
>
> 1.  Is it acceptable, in implementing this spec, to only provide this
> API for managed targets (and either throw an exception or essentially make
> a no-op) for an unmanaged replication target?
>
> 2.  In general, if a storage backend is incapable of performing a
> certain operation, what is the correct way to handle it? Can the driver
> implement the spec at all? Should it throw a NotImplementedError? No-op?
>
>
>
> Thanks,
>
>
>
> Michael Price
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Oops, did I not respond to the list on that last response?  Just in case,
here it is again:

>
>
> 1.  Is it acceptable, in implementing this spec, to only provide this
> API for managed targets (and either throw an exception or essentially make
> a no-op) for an unmanaged replication target?
>
Yes, by design it's set up such that it's left up to configuration.  In other
words the idea is that we have fairly loose definitions around the API
calls themselves to allow for differing implementations.

> 2.  In general, if a storage backend is incapable of performing a
> certain operation, what is the correct way to handle it? Can the driver
> implement the spec at all? Should it throw a NotImplementedError? No-op?
>
Depends on who you ask :)  IMO we need to do a better job of this; it
could be documented in the deployment guides how to enable/disable API
calls in certain deployments so that unsupported calls are just flat out
not available.  My true belief is that we shouldn't be implementing
features that you can't run with every/any backend device in the first
place, but that's my usual rant and somewhat off topic here :)
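
As a rough illustration of the second question, one common pattern is to
refuse cleanly rather than silently no-op (the _target_is_managed helper here
is hypothetical, not something from the spec):

    def failover_replicated_volume(self, context, volume):
        # Illustrative only: fail loudly for configurations we can't support
        if not self._target_is_managed(volume):
            raise NotImplementedError(
                "Failover is only supported for managed replication targets")
        ...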

Note that a lot of the logic for replication in V2 was moved into the
volume-type and the conf file precisely to address some of the issues you
mention above.  The idea being that if the capabilities of the backend
don't match the replication specs in the type, then the command fails with
"no valid host".
info to the end user (or more accurately the fact that we don't).  We just
put the volume in error state and the only info regarding why is in the
logs which the end user doesn't have.  This is where something like a
better more clear policy file would help as well as providing a
capabilities call in the API.

By the way, I'm glad you asked these questions here.  This is part of the
reason why I was so strongly opposed to merging an implementation of the V2
replication in Liberty.  I think it's important to have more than one or
two vendors looking at this and working out details so we release something
that is stable and usable.  My philosophy is that now for M we have a
foundation in the core code that will likely evolve as drivers begin
implementing the feature.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova boot from volume problem after Kilo upgrade

2015-08-26 Thread John Griffith
On Wed, Aug 26, 2015 at 10:59 AM, Jonathan Proulx  wrote:

> Error: Flavor's disk is too small for
> requested image
>

​Without looking closely, wonder if this is what you're seeing:

https://bugs.launchpad.net/nova/+bug/1457517
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] changing hostname by changing instance name

2015-08-14 Thread John Griffith
On Fri, Aug 14, 2015 at 12:31 PM, Jerry Zhao  wrote:

> I wonder whether the previous hostname was associated with the instance
> name or not. If yes, then your cloud-init worked, just that metadata was
> not updated by changing the instance name. If no, you need cloud-init built
> in the image and it will poll the metadata after reboot.
>
>
>
> On 08/14/2015 10:45 AM, mad Engineer wrote:
>
> So is editing cloud-init manually the only way to change the hostname? I
> wonder how it gets the instance name as the hostname during the build stage.
>
> On Fri, Aug 14, 2015 at 11:06 PM, David Medberry 
> wrote:
>
>> This is a VM specific issue. You may be able to do so with cloud init but
>> primarily you will need to go into the instance if you do this after the
>> instance is booted.
>>
>> How to do so within the instance is OS specific.
>> On Aug 14, 2015 11:05 AM, "mad Engineer" 
>> wrote:
>>
>>> Is there any way to change the hostname by changing the instance name?
>>> I tried changing the instance name and then did a hard reboot, but the
>>> hostname is still the same. If this is not the right approach for changing
>>> the hostname, can someone tell me what should be done to change the hostname
>>> of instances without logging in to the instance?
>>>
>>> tried on Juno-Neutron and icehouse-nova-network none worked
>>>
>>> Thanks for any help
>>>
>>> ___
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
You could build a new one with the name you wanted... or you could
snapshot the one you have, and create from the snapshot again with the name
you wanted.  I've never tried to change something like that; I've always
found it easier to just spawn a new instance, or ignore the name to begin
with.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Openstack-operators] [all] OpenStack voting by the numbers

2015-07-30 Thread John Griffith
On Wed, Jul 29, 2015 at 10:36 AM, David Medberry 
wrote:

> Nice writeup maish! very nice.
>
> On Wed, Jul 29, 2015 at 10:27 AM, Maish Saidel-Keesing <
> mais...@maishsk.com> wrote:
>
>> Some of my thoughts on the Voting process.
>>
>>
>> http://technodrone.blogspot.com/2015/07/openstack-summit-voting-by-numbers.html
>>
>> Guess which category has the most number of submissions??
>> ;)
>>
>> --
>> Best Regards,
>> Maish Saidel-Keesing
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Well, I would expect most people attending the Summit would be most
interested in a general category like "Operations".  Most of the audience
here, I think, would naturally be most interested in "how to deploy and
manage"; that just makes sense and I don't know that anybody would
argue it.

I'm sort of confused how this correlates to "the operator community
providing feedback", unless I'm misinterpreting some of your writing here.

While there are some great talks about deploying and operating an OpenStack
cloud in there, I wouldn't make the general assumption that these are
"Operators giving feedback".  A quick glance, it appears that the bulk of
the talks are vendors talking about their monitoring and deployment tools
which IMO is different than "the voice of the operators".  This is in my
opinion sort of expected make up of the talks at the summit these days.

Just my two cents on a great write-up; take it with a grain of salt, so to speak.

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [cinder] Scheduler filter question; examples sought

2015-06-11 Thread John Griffith
On Thu, Jun 11, 2015 at 7:27 AM, D'ANDREA, JOE (JOE) <
jdand...@research.att.com> wrote:

> Given a host with SSDs *and* HDDs (arrays and/or volume groups may or may
> not be involved), can a Cinder scheduler filter advise Cinder to "Use any
> SSD on host X" ... or "Use specific SSD Y on host X?"
>
> I've been rummaging around the docs to find an answer to this as well. So
> far, no luck. This is all I could find, and it doesn't look current (plus
> the formatting is off):
>
> http://docs.openstack.org/developer/cinder/devref/filter_scheduler.html
>
> Please don't hesitate to ask for more/specific info if it would help.
>
> Clues, pointers to specific docs, and sample code welcomed/appreciated!
>
> jd
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Not sure if this is a fit for your use case, but you can use volume types for
that and the filter scheduler will pick it up for you.

Check this doc here: [1]

NOTE:
You don't have to use the multi-backend feature on a single node if you
don't want to.  This still works with multiple unique volume nodes as
well.  Let me know if you get hung up; happy to help.

Thanks
John

[1]: http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html
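
Roughly, the shape of that setup (backend names, type names, and the
Kilo-era LVM driver path are illustrative):

    # cinder.conf
    [DEFAULT]
    enabled_backends = lvm-ssd,lvm-hdd

    [lvm-ssd]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-ssd
    volume_backend_name = LVM_SSD

    [lvm-hdd]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-hdd
    volume_backend_name = LVM_HDD

    # map types to backends, then pick one at create time:
    cinder type-create ssd
    cinder type-key ssd set volume_backend_name=LVM_SSD
    cinder create --volume-type ssd --display-name vol1 10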
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova-docker][nova]Different meta data information in nova and docker view

2015-05-27 Thread John Griffith
On Wed, May 27, 2015 at 4:26 AM,  wrote:

> Dims,
>
> Thanks for this input. I hope this is done in nova-docker code?
>
> Regards
> Ashish
>
> 
> From: Davanum Srinivas 
> Sent: Wednesday, May 27, 2015 3:44 PM
> To: Ashish Jain (WT01 - BAS)
> Cc: openstack@lists.openstack.org
> Subject: Re: [Openstack] [nova-docker][nova]Different meta data
> information in nova and docker view
>
> Ashish,
>
> when we create the docker container, we can pass it the name we want
> it to have, so yes, it's possible. you will have to try modifying the
> code to see the fallout.
>
> thanks,
> dims
>
> On Wed, May 27, 2015 at 5:12 AM,   wrote:
> >
> > Hello,
> >
> >
> > I am using nova-docker. I deploy a docker container and run the
> following 2 commands
> >
> > 1) nova list
> >
> +--+---+++-+---+
> > | ID   | Name  | Status |
> Task State | Power State | Networks  |
> >
> +--+---+++-+---+
> > | 35cdcbc6-516d-4e99-a8a1-28fedd5b088f | test123,and three | ACTIVE | -
> | Running | demo-net=192.168.1.71 |
> >
> +--+---+++-+---+
> >
> > 2) sudo docker ps
> > CONTAINER IDIMAGE   COMMAND  CREATED
>  STATUS  PORTS   NAMES
> > ad381f97816dhttpd:2 "httpd-foreground"   34 minutes
> ago  Up 34 minutes
>  nova-35cdcbc6-516d-4e99-a8a1-28fedd5b088f
> >
> >
> > Why is there a difference in the name of the container? Is it possible
> > to somehow customize what is shown in the Docker view, for example map
> > the Name field in Nova to the NAMES field in Docker?
> >
> > Regards
> > Ashish
> >
> >
> >
> >
> >
> > ___
> > Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to : openstack@lists.openstack.org
> > Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

Hi Ashish,

As Dims pointed out, you could certainly change this and provide the same
display name that you provided to Nova to Docker.  The problem however is
that you probably don't want to do that.  Using Nova's UUID for the
container name in Docker is going to solve a number of headaches and
potential problems for you, not the least of which is consistency.  The
"nova-" prefix has been useful for many to easily identify containers that
are managed/owned by Nova; it makes sense to do things this way in most
cases I think.

Is there an advantage that you see in using Nova's display name property
instead of the UUID here?

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] rating talks (and keynotes) at the OpenStack summit

2015-05-20 Thread John Griffith
On Tue, May 19, 2015 at 11:45 PM, Amrith Kumar  wrote:

>  I attended a talk by Mark Baker today (http://sched.co/2qbJ) entitled
> “The OpenStack Summit Talk Selection Process is Broken” and I think it was
> an informative session.
>
>
>
> The statistics presented indicated that just under a third of the talks
> submitted got accepted at this summit and that is a very healthy ratio and
> that’s a great thing.
>
>
>
> One thing that was brought up at the talk was that there was no formal
> feedback mechanism about talks and keynotes at Summit. Is there any way in
> which we can get a feedback mechanism for the talks and sessions at this
> summit up and running? It would be a valuable piece of information if we
> could get it.
>
>
>
> Any thoughts on how this can be done? There were 296 sessions; we
> obviously know the names of the sessions and the speakers. Does our
> scheduling mechanism have a ‘ratings module’ that can be turned on? Is
> there some other quick and dirty mechanism we can use?
>
>
>
> It would be awesome if we could quickly get something in place before we
> all leave Vancouver so we can gather this information and it could serve as
> a valuable form of input for future selection committees.
>
>
>
> Thanks,
>
>
>
> -amrith
>
>
>
> P.S. I gave a talk and if there’s no formal mechanism for feedback I
> welcome email on the subject. The talk was about Trove and replication with
> MySQL on Monday evening.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
​Hi Amrith,

I think this is a great point, there's always informal conversations and
mixed reviews on talks, but it would be really great if we could formalize
this, and use it to aid in the process of choosing talks for the "next"
summit.

Of course the key is participation, and some way of trying to make sure we
actually get info submitted by people that attended the talk.

Anyway, I think it's a great idea and certainly worth looking at possibly
doing for the summit in Tokyo.

Thanks,
John​
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Glance image can't be public

2015-05-12 Thread John Griffith
On Tue, May 12, 2015 at 10:55 PM, Wilson Kwok  wrote:

> Hello all,
>
> I have uploaded an image with Glance, and it shows in image-list:
>
> root@openstack:/home/wilson# glance image-list
>
> +--++-+--+---++
> | ID   | Name   | Disk Format | Container
> Format | Size  | Status |
>
> +--++-+--+---++
> | 53d53881-961e-463d-b898-e8f6e56ede87 | Ubuntu | qcow2   | bare
>   | 262078976 | active |
>
> +--++-+--+---++
>
> then I upload another one using following command:
>
> glance image-create --name "testing" --disk-format qcow2
> --container-format bare --is-public ture <
> ubuntu-12.04-server-cloudimg-amd64-disk1.img
>
> but it is not public
>
> +--+--+
> | Property | Value|
> +--+--+
> | checksum | 2bdc5bfac378385cedae59c4301799eb |
> | container_format | bare |
> | created_at   | 2015-05-13T04:50:12  |
> | deleted  | False|
> | deleted_at   | None |
> | disk_format  | qcow2|
> | id   | 013fca2c-8e13-4b67-a04f-7befca32c6a9 |
> | is_public| False|
> | min_disk | 0|
> | min_ram  | 0|
> | name | testing  |
> | owner| None |
> | protected| False|
> | size | 262078976|
> | status   | active   |
> | updated_at   | 2015-05-13T04:50:17  |
> | virtual_size | None |
> +--+--+
>
> so it doesn't show in image-list, why?
>
> Thanks
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
You spelled "true" incorrectly; you have "--is-public ture".
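
With the flag spelled correctly the image becomes public, or you can update
the existing image after the fact (glance v1 CLI of that era):

    glance image-update --is-public True 013fca2c-8e13-4b67-a04f-7befca32c6a9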
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] STORAGE ON OPENSTACK

2015-04-18 Thread John Griffith
On Sat, Apr 18, 2015 at 4:24 AM, Chamambo Martin 
wrote:

> I just installed OpenStack and I'm still learning everything I can get my
> hands on, from YouTube videos to OpenStack docs, and I just thought I would
> join a mailing list to speed up the process of acquiring OpenStack knowledge.
>
> This is my case: I have OpenStack all-in-one and right now everything is
> just on one machine [from reading the docs it seems I can expand my
> OpenStack projects (Cinder, Nova, etc.) to other nodes, and that will be for
> later].
>
> Right now I have installed the RDO Icehouse Packstack and it's on a CentOS
> machine with 4 GB. So far so good. On the same machine I have configured a
> /home directory with 1 terabyte for storing my images, and when I open the
> OpenStack dashboard I can't seem to find a way to reference it. How can I
> add that filesystem as the storage for my VMs?
>
> My filestystem is as follows
>
> [root@cumulonimbus ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/mapper/vg_cumulonimbus-lv_root
>50G  2.6G   45G   6% /
> (this is the only one the OpenStack dashboard seems to recognize)
> tmpfs 1.9G  4.0K  1.9G   1% /dev/shm
> /dev/sda1 477M   49M  403M  11% /boot
> /dev/mapper/vg_cumulonimbus-lv_home
>   351G   67M  333G   1% /home
> /srv/loopback-device/swiftloopback
>   1.9G  3.0M  1.8G   1% /srv/node/swiftloopback
> [root@cumulonimbus ~]#
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

Sounds like you're looking for instances path?  In other words the location
you want the qcow files for the instances to live?

That's set up in the nova.conf file as: instances_path
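
A rough sketch of what that looks like for the 1 TB /home filesystem above
(paths are illustrative):

    # make a home for the instance files on the big filesystem
    mkdir -p /home/nova/instances
    chown nova:nova /home/nova/instances

    # /etc/nova/nova.conf
    [DEFAULT]
    instances_path = /home/nova/instances

    # then restart the compute service, e.g. on RDO:
    service openstack-nova-compute restart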

Check out the docs page here and see if that helps.  Items like conf
settings don't show up in the dashboard by the way.

http://docs.openstack.org/juno/config-reference/content/section_compute-config-samples.html

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Multiple cinder lvm nodes

2015-04-06 Thread John Griffith
On a single node using multi-backend you should keep in mind that all of your
iSCSI traffic goes through that node.  The usual VG size limits apply of
course as well.  Some guidelines I've heard from various OpenStack vendors
in the past suggest 5TB max on a single cinder-volume/LVM node.  From there,
add more nodes (additional nodes, not multi-backend on the same node).

One of the biggest advantages of, and drivers for, multi-backend was the
fact that third-party devices require little of the volume node; using LVM,
however, is a different story.

On Mon, Apr 6, 2015 at 1:25 AM, Vijay Kakkar  wrote:

> Is there any limit as well for LVM backends ?
>
> On Mon, Apr 6, 2015 at 2:12 AM, John Griffith  > wrote:
>
>>
>>
>> On Sun, Apr 5, 2015 at 2:42 PM, John Griffith <
>> john.griff...@solidfire.com> wrote:
>>
>>>
>>> On Apr 5, 2015 2:10 PM, "mad Engineer"  wrote:
>>>
>>>> Read about cinder multi backend support,for using different backend
>>>> types.
>>>> I have 2 servers for running as lvm+iscsi back ends.
>>>> Is it possible to use these 2 lvm backend nodes for same volume type
>>>> and schedule block drives between these for that volume type.
>>>> So that volumes will be distributed between these 2 nodes.
>>>>
>>>> ___
>>>> Mailing list:
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to : openstack@lists.openstack.org
>>>> Unsubscribe :
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>>
>>> For sure, if you create without a type specification the weight filter
>>> just places based on available capacity among the backends you configured.
>>> You can also have an LVM type with no backend name, so you would
>>>
>>
>> Err... that should say "same" backend name.
>>
>>
>>> just weigh between LVM backends.  That's actually the default example
>>> shown here:
>>> https://wiki.openstack.org/wiki/Cinder-multi-backend
>>>
>>>
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
>
> --
> Regards,
>
> *Vijay Kakkar - RHC{E,SS,VA,DS,A,I,X}*
>
> Techgrills Systems Pvt. Ltd.
> E4,3rd Floor,
> South EX Part I,
> New Delhi,110049
> 011-46521313 | +91103657
> Singapore: +6593480537
> Australia: +61426044312
> http://lnkd.in/bnj2VUU
> http://www.facebook.com/techgrills
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Multiple cinder lvm nodes

2015-04-05 Thread John Griffith
On Sun, Apr 5, 2015 at 2:42 PM, John Griffith 
wrote:

>
> On Apr 5, 2015 2:10 PM, "mad Engineer"  wrote:
>
>> Read about cinder multi backend support,for using different backend types.
>> I have 2 servers for running as lvm+iscsi back ends.
>> Is it possible to use these 2 lvm backend nodes for same volume type and
>> schedule block drives between these for that volume type.
>> So that volumes will be distributed between these 2 nodes.
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
> For sure, if you create without a type specification the weight filter
> just places based on available capacity among the backends you configured.
> You can also have an LVM type with no backend name, so you would
>

Err... that should say "same" backend name.


> just weigh between LVM backends.  That's actually the default example
> shown here:
> https://wiki.openstack.org/wiki/Cinder-multi-backend
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Multiple cinder lvm nodes

2015-04-05 Thread John Griffith
On Apr 5, 2015 2:10 PM, "mad Engineer"  wrote:

> Read about cinder multi backend support,for using different backend types.
> I have 2 servers for running as lvm+iscsi back ends.
> Is it possible to use these 2 lvm backend nodes for same volume type and
> schedule block drives between these for that volume type.
> So that volumes will be distributed between these 2 nodes.
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
For sure, if you create without a type specification the weight filter just
places based on available capacity among the backends you configured.  You
can also have an LVM type with no backend name, so you would just weigh
between LVM backends.  That's actually the default example shown here:
https://wiki.openstack.org/wiki/Cinder-multi-backend
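
A sketch of that default example; whether the two LVM backends live on one
node or each on its own volume node, the key is that both advertise the same
volume_backend_name (names are illustrative):

    [DEFAULT]
    enabled_backends = lvm-1,lvm-2

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-1
    volume_backend_name = LVM_iSCSI

    [lvm-2]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-2
    volume_backend_name = LVM_iSCSI

    # one type that matches both; the capacity weigher spreads volumes
    cinder type-create lvm
    cinder type-key lvm set volume_backend_name=LVM_iSCSI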
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Misinformation and my apologies, I am NOT the PTL for Cinder

2015-04-03 Thread John Griffith
All,

I've recently received a number of PM's and text messages with folks a bit
upset apparently that my bio at a recent speaking engagement was out of
date and had me listed as the PTL for the Cinder project.  I'm sorry that
this has caused a number of folks to be rather upset.

First off, my apologies, I removed that information from my profile and bio
in various places a long time ago.  I'm not completely certain where the
mix up is, it does seem that there's a role's section in openstack.org that
has not been updated and I do not have admin rights to change it.  Reed
fortunately is in the process of trying to fix that up as I type.

Secondly, Mike Perez is the current PTL, my apologies again to Mike.

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova instances

2015-03-14 Thread John Griffith
On Sat, Mar 14, 2015 at 12:18 PM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> I am using below local.conf file
> http://paste.openstack.org/show/192320/
>
> Actually i am deploying SOS-CI.
> I am using ubuntu 14.04 VM having 150 GB sda and 24GB RAM and 8 core cpu.
> Installing devstack is very fast on this VM and nova instances are
> creating on getting desired gerrit cinder events,
>
> But nova instances performance is very slow,
> I am using official trusty(ubuntu 14.04) image (
> http://uec-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
> )
> for nova instances.
> I am using flavor-id 3 for nova instances.
> Installing devstack in a nova instance is taking 8-9 hrs,
>
> so I am wondering how to resolve this.
>
>
>
> Regards
> Nikesh
>
>
> On Sat, Mar 14, 2015 at 8:10 PM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
>>
>>
>> On Sat, Mar 14, 2015 at 8:22 AM, Nikesh Kumar Mahalka <
>> nikeshmaha...@vedams.com> wrote:
>>
>>> from where devstack instances consumes space for rootdisk,
>>> if i have a ubuntu setup with sda 150gb.
>>>
>>> devstack is creating two volume groups on loop-back files of 10gb each
>>> (stack-volumes-default and stack-volumes-lvmdriver-1).
>>>
>>> are they consume these volume groups for nova instances root-disk?
>>>
>>>
>>> Regards
>>> Nikesh
>>>
>>> ___
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>> Those loopbacks are specifically for Cinder (well, the
>> stack-volume-default is actually for "nothing" right now but that's a whole
>> separate debate I have had with some folks and have given up on).
>>
>> Ephemeral Nova instances by default in devstack are going to consume qcow
>> files in /opt/stack/data/nova/instances (assuming you're still using the
>> same local.conf I gave you a while back).
>>
>> Thanks,
>> John
>>
>>
Hi Nikesh,

I believe I already went through this with one of your coworkers via IRC a
week or so ago.  Check and make sure you're not using QEMU and that you're
using KVM for your virtualization.  This of course assumes you're on a
modern processor that supports nested virt.  To check this, just set
virt_type=kvm in your nova.conf.

You may need to enable virt settings in your physical machine's BIOS.
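
A couple of quick checks, roughly (the [libvirt] section is where virt_type
lives in nova.conf of that era):

    # inside the devstack VM: is hardware virt exposed at all?
    egrep -c '(vmx|svm)' /proc/cpuinfo     # 0 means no nested virt
    lsmod | grep kvm                       # kvm_intel / kvm_amd should be loaded

    # /etc/nova/nova.conf
    [libvirt]
    virt_type = kvm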
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova instances

2015-03-14 Thread John Griffith
On Sat, Mar 14, 2015 at 8:22 AM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> Where do devstack instances consume space for the root disk
> if I have an Ubuntu setup with a 150 GB sda?
>
> devstack is creating two volume groups on loop-back files of 10 GB each
> (stack-volumes-default and stack-volumes-lvmdriver-1).
>
> Do they consume these volume groups for the nova instances' root disk?
>
>
> Regards
> Nikesh
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Those loopbacks are specifically for Cinder (well, the
stack-volume-default is actually for "nothing" right now, but that's a whole
separate debate I have had with some folks and have given up on).

Ephemeral Nova instances by default in devstack are going to consume qcow
files in /opt/stack/data/nova/instances (assuming you're still using the
same local.conf I gave you a while back).

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova network and neutron

2015-03-04 Thread John Griffith
Neutron and nova-net are just two completely separate networking services
for OpenStack.  Neutron is the newer of the two and will eventually replace
nova-network altogether.

Which one you are running depends on which service you install and
configure.  If you take look at the network section at docs.openstack.org
it may paint a clearer picture.

Good luck,
John
On Mar 4, 2015 9:23 AM,  wrote:

> Hi
>
> My colleague and I got into a discussion today about nova-network. From
> my understanding, a setup is said to be Neutron if it has Neutron agents
> installed and running, not because it is a three-node architecture,
> meaning it has 3 pieces of physical hardware. And a setup is using
> nova-network if it doesn't have Neutron agents running, not because it is a
> 2-node architecture.
>
> The number of nodes needed depends on the configuration of the physical
> hardware, meaning we can have a Neutron setup with 2 nodes as well.
>
> Or is it that a 2-node setup is nova-network and a 3-node one is Neutron?
>
> Sent from my BlackBerry 10 smartphone.
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] "volume's current host" in retype [cinder]

2015-03-01 Thread John Griffith
On Sun, Mar 1, 2015 at 12:53 PM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> Hi,
> i was trying to understand below patch:
> https://review.openstack.org/#/c/44881/24
>
> What "volume's current host" means in this patch?
> I want to understand it with some examples.
>
>
>
>
>
>
> Regards
> Nikesh
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Please don't cross post against both general and dev lists.  Also you and I
have had this conversation multiple times in IRC, your best bet is to
continue with these sorts of questions in #openstack-cinder.  Not private
message, but the Cinder channel, and if there are still things that aren't
clear you should ask there for help with your driver development.

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Base Image size

2015-02-26 Thread John Griffith
On Thu, Feb 26, 2015 at 10:29 AM, Maish Saidel-Keesing 
wrote:

>  If you have a minimal installation - it would be best to zero out the
> free space after you have completed configuring your image.
>
> How much space is actually in use on the Image when you are done with it.
>
> Maish
>
>
> On 2/26/15 03:58, somshekar kadam wrote:
>
> Hello,
>
>  I am trying to create a manually base Image. I am trying to create
> fedora image.
> Once created its size is 5GB.
> When I see the fedora which is default part of openstack which round about
> 199MB.
> I have selected minimal install while installing fedora.
>
>  What is the miss, how can reduce the size.
> ANy pointers on this will be helpful.
>
>
>
> Regards
> Neelu
>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> --
> Best Regards, Maish Saidel-Keesing
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> As Maish pointed out you can look at sparsifying the image, but depending
on what you've installed (desktop etc) you may actually be using 5G of
space.  The cloud images IMO are awesome in terms of giving you a fully
functional base system with zero bloat in them.  Honestly I messed with
building/importing images for a while and finally just decided to start
with the base cloud images and build what I needed on them, then save them
off as images.  It was way more efficient for me, but I realize that
doesn't always work for everyone.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Live Migration of VMs without shared storage

2015-02-26 Thread John Griffith
On Thu, Feb 26, 2015 at 7:07 AM, somshekar kadam 
wrote:

> First of all thanks for quick reply.
> Documentation does not mention much about
> *Volume-backed live migration. *
>
> *Is it tested, I mean supported and working. *
> *I will try *
> *Block live migration as no storage is required. *
>
>
> Regards
> Neelu
>
>
>   On Thursday, 26 February 2015 7:13 PM, Robert van Leeuwen <
> robert.vanleeu...@spilgames.com> wrote:
>
>
> > Is the Live Migration of VMs even without shared storage supported in
> Openstack now.
> > If yes is there any document for the same.
>
>
> Wel, depends on what you call "supported".
> Yes, it is possible.
> Will it always work? Probably not until you look at the bugs below.
> They have been fixed recently but they might not be merged with the
> version you are running:
> https://bugs.launchpad.net/nova/+bug/1270825
> https://bugs.launchpad.net/nova/+bug/1082414
>
> There might be more issues but I hit the ones mentioned above.
> Have a look at the docs to configure it:
>
> http://docs.openstack.org/admin-guide-cloud/content/section_configuring-compute-migrations.html
>
> Cheers,
> Robert van Leeuwen
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
It's dated and getting the patches backported has proven to take forever,
but I did a write up and testing on this a while back [1].  Should still be
accurate.

Thanks,
John

[1]:
https://griffithscorner.wordpress.com/2014/12/08/openstack-live-migration-with-cinder-backed-instances/
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [All] Summit Session Proposal Voting

2015-01-28 Thread John Griffith
On Wed, Jan 28, 2015 at 4:17 AM, Thierry Carrez  wrote:
> Maish Saidel-Keesing wrote:
>> CFP is upon us, and thereafter will be a period of voting for the sessions.
>>
>> What is the purpose of the voting period? Is it for the Foundation to
>> gauge what sessions are more popular?
>> How is this measured?
>> What weight does the popularity have in deciding if a session is
>> accepted or not?
>
> My understanding is that each conference track has a chair (or group of
> people) responsible for selecting the talks, and that the voting helps
> them select popular talks. It's not the only criteria they follow though
> (otherwise you would end up with 12 Docker talks).

You are exactly correct Thierry; having been a track chair a few times,
the process has been something like:
* Pick a cut-off based on votes
* Review the remaining submissions as a panel

The process typically involves ranking not only by community votes,
but also by the track chairs, individually and collectively.  There's
typically a pretty significant discussion around the content,
pertinence, etc.

Hope that helps.

John

>
> (disclaimer: I have never been a track chair so I only speculate on the
> process they follow)
>
> --
> Thierry Carrez (ttx)
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] multiple cinder backend with emc vnx and NFS backend

2015-01-19 Thread John Griffith
On Sun, Jan 18, 2015 at 11:41 PM, Amit Das  wrote:
> Hi John,
>
>>
>> Otherwise you can move to multibackend but you will need to update the
>> hosts column on your existing volumes.
>
>
> For above statement, did you mean a unique backend on separate volume nodes
> ?
>
> Will there be any issues, if the enabled_backends are used with each backend
> tied to particular volume type. Now this configuration is repeated for all
> volume nodes. Do we need to be concerned about the host entry ?
>
>
> Regards,
> Amit
> CloudByte Inc.
>
> On Mon, Jan 19, 2015 at 4:14 AM, John Griffith 
> wrote:
>>
>>
>> On Jan 16, 2015 9:03 PM, "mad Engineer"  wrote:
>> >
>> > Hello All,
>> >   i am working on integrating VNX with cinder,i have plan
>> > to add another NFS storage in the future,without removing VNX.
>> >
>> > Can i add another backend while first backend is running without
>> > causing problem to running volumes.
>> > I heard that multiple backend is supported,
>> >
>> > thanks for any help
>> >
>> > ___
>> > Mailing list:
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> > Post to : openstack@lists.openstack.org
>> > Unsubscribe :
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>> So as long as you used the "enabled backend" format in your existing
>> config you "should" be able to just add another backend without impacting
>> your existing setup (I've never tried this with NFS/VNX myself though).
>>
>> If you're not using the enabled backends directive you can deploy a new
>> cinder - volume node and just add your new driver that way.
>>
>> Otherwise you can move to multibackend but you will need to update the
>> hosts column on your existing volumes.
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>

Hi Amit,

My point was that the way multi-backend works is by the addition of
the "enabled_backends" parameter in the cinder.conf file, along with a
driver section:

enabled_backends = lvm1,lvm2
[lvm1]

[lvm2]


This will cause your host entry to be of the form:
hostname@backend-name (e.g. myhost@lvm1)

In this scenario you can simply add another entry for enabled_backends
and its corresponding driver info entry.

If you do NOT have a multi-backend setup, your host entry will just be
the bare hostname, and it's a bit more difficult to convert to
multi-backend.  You have two options:
1. Just deploy another cinder-volume node (skip multi-backend)
2. Convert existing setup to multi-backend (this will require
modification/update of the host entry of your existing volumes)
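
For option 2, the host update is just a column change in the volumes table;
a rough sketch only (database, host and backend names here are made-up
examples, and you should stop cinder-volume and back up the DB first):

  UPDATE volumes SET host = 'myhost@lvm1'
    WHERE host = 'myhost' AND deleted = 0;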

This all might be a bit more clear if you try it yourself in a
devstack deployment.  Give us a shout on IRC at openstack-cinder if
you get hung up.

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] multiple cinder backend with emc vnx and NFS backend

2015-01-18 Thread John Griffith
On Jan 16, 2015 9:03 PM, "mad Engineer"  wrote:
>
> Hello All,
>   i am working on integrating VNX with cinder,i have plan
> to add another NFS storage in the future,without removing VNX.
>
> Can i add another backend while first backend is running without
> causing problem to running volumes.
> I heard that multiple backend is supported,
>
> thanks for any help
>
> ___
> Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

So as long as you used the "enabled backend" format in your existing config
you "should" be able to just add another backend without impacting your
existing setup (I've never tried this with NFS/VNX myself though).

If you're not using the enabled backends directive you can deploy a new
cinder-volume node and just add your new driver that way.

Otherwise you can move to multibackend but you will need to update the
hosts column on your existing volumes.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] Kilo devstack issue

2015-01-12 Thread John Griffith
On Mon, Jan 12, 2015 at 10:03 AM, Nikesh Kumar Mahalka
 wrote:
> Hi,
> We deployed a kilo devstack on ubuntu 14.04 server.
> We successfully launched a instance from dashboard, but we are unable to
> open console from dashboard for instance.Also instacne is unable to get ip
>
> Below is link for local.conf
> http://paste.openstack.org/show/156497/
>
>
>
> Regards
> Nikesh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Correct, see this thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054157.html

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] BUG: soft lockup messages

2015-01-03 Thread John Griffith
On Sat, Jan 3, 2015 at 5:05 AM, Matej Mailing  wrote:
> Hi,
>
> we are experiencing randomly-timed soft lockup messages in different
> instances (running CentOS, Ubuntu, etc.) with different processes and
> on different compute nodes. I suspect that access to the cinder
> storage via NFS could be the cause of issue, but then perhaps all the
> instances would trigger the error message on the same time? Currently
> they don't and no higher access loads are reported on the NFS server
> itself, neither the load is higher at those times ...
> What are the best methods to debug this error? Any suggestions and
> expeiences on fixing it will be very welcome :-)
>
> Thanks,
> Matej
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Matej,

I'm assuming the kernel logs indicate NFS as the culprit here? What
sort of IO load are you seeing on the system when these are
encountered?

A quick search on "soft lockup nfs" has a pretty lengthy list of
results including some setting recommendations etc.  Maybe start
there?

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] cinder migrate

2014-12-19 Thread John Griffith
On Fri, Dec 19, 2014 at 12:12 AM, Nikesh Kumar Mahalka
 wrote:
> http://paste.openstack.org/show/153041/

I think the main thing is your host specification is wrong, as I
mentioned in IRC you need the pool designation now as well:

juno-devstack-block@lvmdriver-4#lvmdriver-4

You can and should verify that naming by creating a volume on that
host and doing a show on it, look for the os-vol-host-attr:host
attribute.  This will also enable you to verify that you've configured
the lvmdriver-4 service correctly as well.  Also, as I mentioned
before I'd start this without the volume-type being assigned, then go
from there.  I have no idea what's in your extra-specs for that type
so it may or may not matter.
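
Something like the following is an easy way to confirm the exact host string
before the migrate (the volume type and host are from the example above, the
UUID is just a placeholder):

  cinder create --volume-type lvmdriver-4 --display-name host-check 1
  cinder show <volume-uuid> | grep os-vol-host-attr:host
  # once the string is confirmed:
  cinder migrate <volume-uuid> juno-devstack-block@lvmdriver-4#lvmdriver-4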

I think we have a blueprint or bug to clarify this and provide an API
call to get the "real" hostname needed, I'll see if I can find it.

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] using dedicated network for cinder/tgtd traffic

2014-12-04 Thread John Griffith
On Thu, Dec 4, 2014 at 2:54 PM, Dmitry Makovey  wrote:
> Hi,
>
> I've been tinkering with OpenStack a while now, however I have never
> paid attention to where and how my traffic goes. This time I'd like to
> direct all the block storage traffic through a specific interface on
> both cinder and compute nodes (10G interface).
>
> So in the end I would end up with management network, external net,
> instance tunnels net and storage net.
>
> The only place that I found which seems to be relevant is cinder.conf:
>
> iscsi_ip_address=<10Gb interface IP>
>
> do I need to look for more or is above setting sufficient to make sure
> all the block storage traffic goes over 10G interface?
>
> --
> Dmitry Makovey
> Web Systems Administrator
> Athabasca University
> (780) 675-6245
> ---
> Confidence is what you have before you understand the problem
> Woody Allen
>
> When in trouble when in doubt run in circles scream and shout
>  http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

That's all you need.  That tells cinder to use that specific IP to
build your targets off of.  Just make sure your compute nodes and
Cinder Control node can access that network and you should be all set.
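
As a sketch, with a storage interface at 10.10.10.5 (addresses are just
examples), the cinder.conf entry and a quick reachability check from a
compute node would look like:

  # cinder.conf on the volume node
  iscsi_ip_address = 10.10.10.5

  # from a compute node, confirm targets are advertised over the 10G net
  iscsiadm -m discovery -t sendtargets -p 10.10.10.5:3260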

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Hi All, a problem about cinder, lvm, backing-file and reboot.

2014-11-24 Thread John Griffith
On Mon, Nov 24, 2014 at 11:15 AM, Edward HUANG  wrote:
> Hi All,
>   I'm trying to install Openstack on a local server in my lab, and I use
> devstack for the installation.
>   I use almost the default setting for local.conf.
>   And currently i encounter a problem, after I run unstack.sh, the volume
> group created by stack.sh is deleted, as well as the backing file, and by
> running rejoin-stack.sh, the cinder-volume cannot be restarted properly and
> it reports no 'stack-volumes-lvmdriver-1' is found.
>   Seems to me that rejoin-stack.sh does not bring back the volume group.
>   So, what is the proper way to unstack.sh and then re-run Openstack to
> preserve cinder storage properly?
>
> Best regards
> Edward ZILONG HUANG
> MS@ECE department, Carnegie Mellon University
> http://www.andrew.cmu.edu/user/zilongh/
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Oops.. didn't reply all I don't think:

Hi Edward,

Correct, devstack uses non-persistent loopback files for Cinders VG by
default and they go away on reboot.  The only way to recreate in that
situation is a clean stack.sh run.

That being said there are two other options available to you:
1. Make the loopbacks persistent (not really worth it IMO)
2. Create a VG on your system using "real" disks and that way devstack
will just use those and you'll save yourself jumping through these
hoops.
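
For option 2 there, a rough sketch (device and VG names are examples; the VG
name just needs to match what your local.conf expects,
stack-volumes-lvmdriver-1 in the default case):

  pvcreate /dev/sdb
  vgcreate stack-volumes-lvmdriver-1 /dev/sdb
  # devstack will use the existing VG instead of creating a loopback file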

Thanks
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Live migration no longer working after Icehouse -> Juno upgrade

2014-11-24 Thread John Griffith
On Mon, Nov 24, 2014 at 8:13 AM, Chris Friesen
 wrote:
> On 11/18/2014 04:32 AM, Darren Worrall wrote:
>>
>> I failed to mention that this was for volume backed images, there is a
>> genuine issue and we're making progress in a bug report:
>> https://bugs.launchpad.net/bugs/1392773
>
>
> Just wondering how this got past the Juno testing?  Is there no test-case
> (tempest, presumably) for live-migration of volume-backed instances?
>
> Chris
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Nope, gating is currently single node devstack so no tests for live
migration.  There's currently work in progress to change that though.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Nova Juno]

2014-11-12 Thread John Griffith
On Wed, Nov 12, 2014 at 11:16 AM, Amit Anand  wrote:
> Hi all,
>
> So been trying to figure out why this is happening I was hoping yall could
> maybe shed some light on it asI really would not like to have recreate
> everything on my controller node!
>
> So whenever I try to sync the nova database:
> su -s /bin/sh -c "nova-manage db sync" nova
>
> I get this error in my logs:
> 2014-11-12 13:09:45.323 29721 TRACE nova raise operational_error
> 2014-11-12 13:09:45.323 29721 TRACE nova OperationalError:
> (OperationalError) (1045, "Access denied for user 'nova'@'localhost' (using
> password: YES)") None None
>
>
> Ive tried dropping and recreating the nova table but still keep getting the
> same error. Ive looked into the mysql database for the users and this is
> what I have:
>
> Database changed
> MariaDB [mysql]> SELECT user,password,host FROM user;
> +--+---+---+
> | user | password  | host  |
> +--+---+---+
> | root | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
> | root | *7088873CEA983CB57491834389F9BB9369B9D756 | 127.0.0.1 |
> | root | *7088873CEA983CB57491834389F9BB9369B9D756 | ::1   |
> | keystone | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
> | keystone | *7088873CEA983CB57491834389F9BB9369B9D756 | % |
> | glance   | *7088873CEA983CB57491834389F9BB9369B9D756 | % |
> | glance   | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
> | neutron  | *7088873CEA983CB57491834389F9BB9369B9D756 | % |
> | neutron  | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
> | nova | *7088873CEA983CB57491834389F9BB9369B9D756 | % |
> | nova | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
> | root | *7088873CEA983CB57491834389F9BB9369B9D756 | % |
> +--+---+---+
> 12 rows in set (0.00 sec)
>
>
> Any ideas on how to fix this would be greatly appreciated. Thank you very
> much!
>
> Amit
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

I seem to recall hitting something like this in an upgrade I did, and
I'm not sure but check your permissions on your /etc/nova directory,
particularly nova.conf.  I ran into the same problem and that's what
it turned out to be.
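
A couple of quick checks along those lines (the connection string shown is
only an example of the expected form):

  ls -l /etc/nova/nova.conf                 # the nova user needs to read this
  grep ^connection /etc/nova/nova.conf
  # e.g. connection = mysql://nova:NOVA_DBPASS@controller/nova

  # verify the credentials work outside of nova-manage
  mysql -u nova -p -h localhost nova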

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Cinder] Volume multi attachment

2014-10-15 Thread John Griffith
On Wed, Oct 15, 2014 at 6:54 AM, Heiko Krämer  wrote:

> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume


Hi Heiko,

Nope, neither of those landed in Juno.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] multiple compute node configuration

2014-09-24 Thread John Griffith
On Wed, Sep 24, 2014 at 8:28 AM, gustavo panizzo (gfa) 
wrote:

> just repeat what you did with first compute node
>
> On September 24, 2014 8:20:15 PM GMT+08:00, Srinivasreddy R <
> srinivasreddy4...@gmail.com> wrote:
>
>> hi,
>>
>> I have configured 3 node setup with icehouse ubuntu14.04 .
>> Now i want to configure multiple compute nodes [add one more compute node
>> in current setup ].
>> please provide any helping  guide.
>>
>>
>> thanks,
>> srinivas.
>>
>>
>>
>> --
>>
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
> --
> Sent from mobile.
> 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
You could check docs.openstack.org:
http://docs.openstack.org/icehouse/install-guide/install/apt/content/nova-compute.html
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] cinder create error

2014-09-15 Thread John Griffith
On Sep 12, 2014 8:10 PM, "b t" <905...@gmail.com> wrote:
>
> I am installing icehouse in ubuntu.
> now get to point to create volume and I got status error , and sometimes
stuck on creating .
> any idea ?
> here is the log .  thanks !
>
>
> root@controller:/var/log/cinder#
> root@controller:/var/log/cinder# cinder list
>
+--++--+--+-+--+-+
> |  ID  | Status | Display Name | Size |
Volume Type | Bootable | Attached to |
>
+--++--+--+-+--+-+
> | 1e8b59be-454c-4532-a2ec-8457c0a04486 | error  |   myVolume   |  1   |
  None|  false   | |
> | acfdad32-f81c-487f-8afa-19ee8ba6c989 | error  |   myVolume   |  1   |
  None|  false   | |
> | b2cd112b-ecef-4c1e-9501-ebda34492657 | error  |   my-disk|  2   |
  None|  false   | |
>
+--++--+--+-+--+-+
>
>
>
>
> root@controller:/var/log/cinder# more cinder-scheduler.log
> 2014-09-12 21:58:02.875 10328 AUDIT cinder.service [-] Starting
cinder-scheduler node (version 2014.1.2)
> 2014-09-12 21:58:02.933 10328 INFO oslo.messaging._drivers.impl_rabbit
[req-d3ab770b-8cd0-4d5c-8939-6ef3ad8f92ca - - - - -] Connected to AMQP
server on controller:5672
> 2014-09-12 21:58:03.789 10328 INFO oslo.messaging._drivers.impl_rabbit
[-] Connected to AMQP server on controller:5672
> 2014-09-12 21:59:10.423 10328 WARNING cinder.context [-] Arguments
dropped when creating context: {'user':
u'33444db757c14559add65a2e606c2102', 'tenant':
u'd4c3bf6ee2af44129fc38bf76f4a5623', 'use
> r_identity': u'33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -'}
> 2014-09-12 21:59:11.028 10328 ERROR cinder.scheduler.flows.create_volume
[req-03be4b43-29d2-4208-8924-853840b9214b 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] Failed
> to schedule_create_volume: No valid host was found.
> 2014-09-12 22:47:41.737 10328 WARNING cinder.context [-] Arguments
dropped when creating context: {'user':
u'33444db757c14559add65a2e606c2102', 'tenant':
u'd4c3bf6ee2af44129fc38bf76f4a5623', 'use
> r_identity': u'33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -'}
> 2014-09-12 22:47:41.757 10328 ERROR cinder.scheduler.flows.create_volume
[req-5c268fcf-5c4d-49ed-be1b-0460411e76b0 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] Failed
> to schedule_create_volume: No valid host was found.
> 2014-09-12 22:56:01.742 10328 WARNING cinder.context [-] Arguments
dropped when creating context: {'user':
u'dfbdec26a06a4c5cb35f994937b092fb', 'tenant':
u'0faf8d7dfdac4a53af195b5436992976', 'use
> r_identity': u'dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -'}
> 2014-09-12 22:56:01.765 10328 ERROR cinder.scheduler.flows.create_volume
[req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] Failed
> to schedule_create_volume: No valid host was found.
> root@controller:/var/log/cinder#
>
>
>
>
>
> 2014-09-12 22:47:48.097 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44370)
> 2014-09-12 22:47:48.192 10354 INFO cinder.api.openstack.wsgi
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] GET http://controll
> er:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
> 2014-09-12 22:47:48.228 10354 AUDIT cinder.api.v1.volumes
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migration_status
> ': None, 'availability_zone': u'nova', 'terminated_at': None,
'updated_at': datetime.datetime(2014, 9, 13, 2, 47, 41),
'provider_geometry': None, 'snapshot_id': None, 'ec2_id': None, 'mountpoint'
> : None, 'deleted_at': None, 'id':
u'da4108f1-f419-4f02-8e0d-29ca5d3c9c0f', 'size': 2L, 'user_id':
u'33444db757c14559add65a2e606c2102', 'attach_time': None, 'attached_host':
None, 'display_descrip
> tion': None, 'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type
> _id': None, 'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'my-disk',
'instance_uuid': None, 'bootable': False, 'created_
> at': datetime.datetime(2014, 9, 13, 2, 47, 41), 'attach_status':
u'detached', 'volume_type': None, '_name_id': None, 'volume_metadata': []}
> 2014-09-12 22:47:48.229 10354 AUDIT cinder.api.v1.volumes
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migration_status
> ': None, 'availability_zone': u'nova', 'terminated_at': None,
'upda

Re: [Openstack] [openstack-dev] tempest bug

2014-08-09 Thread John Griffith
On Sat, Aug 9, 2014 at 1:58 PM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> I have reported a bug as "tempest volume-type test failed for hp_msa_fc
> driver" in tempest
> project.
> Bug Id is "Bug #1353850"
> My Tempest tests are failed on cinder driver.
>
>
>
> No one till responded to my bug.
> I am new in this area.
> Please help me to solve this.
>
>
>
> Regards
> Nikesh
>
> ___
> OpenStack-dev mailing list
> openstack-...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Nikesh,

No need to send the note to the dev-list, but if you do you should probably
ref the bug number [1].

What you're running into isn't actually a bug, I think I pointed you to
this before, but take a look at the "Configuring devstack to use your
driver" section of the Cinder Wiki here: [2].  You'll need to set the
variables to tell Tempest what Vendor and Protocol settings to use.

Grab myself or somebody else on IRC (#openstack-cinder) and we can help you
get things set up properly.
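
If memory serves, the devstack knobs look roughly like the following (treat
this as a sketch, the values are illustrative and the exact variable names
may differ by release, so double-check the wiki page below):

  # local.conf
  TEMPEST_VOLUME_DRIVER=hp_msa_fc
  TEMPEST_VOLUME_VENDOR="Hewlett-Packard"
  TEMPEST_STORAGE_PROTOCOL=FC
  # these end up in tempest.conf as [volume] vendor_name / storage_protocol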

Thanks,
John

[1]: https://bugs.launchpad.net/tempest/+bug/1353850
[2]: https://wiki.openstack.org/wiki/Cinder
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Getting size of volume_type using API

2014-08-06 Thread John Griffith
On Wed, Aug 6, 2014 at 1:27 PM, Sekhar Vajjhala  wrote:

> Thanks.
>
> So seems like there is no way for me to the following : Allocate a volume
> of a given size from a volume_type using an API but only if volume_type has
> sufficient space. I can create a volume of a given size from a volume_type
> using API, and if there is insufficient space, then the API call will fail
> .
>
Not sure I follow your relationship of "size" and "volume-type" here.
 What you describe here is exactly how it works; user requests a volume of
type 'foo' and size 100Gig... the scheduler then checks for a host that can
in fact provide a volume of type 'foo', it also checks to see if said
backend has enough free-space.  If it does... great, if it does not, the
create will fail.

Note that this model works for a number of things that we call
specifications, which can be embedded in the volume-type.
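
In practice the request path is just (type name, volume name and size here
are illustrative):

  cinder type-create foo
  cinder create --volume-type foo --display-name big-vol 100
  # the scheduler picks a backend that matches type 'foo' and has enough
  # free capacity; if none exists the volume ends up in 'error'
  cinder show <volume-uuid> | grep status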

Hope that helps, if I missed the point of your question please do let me
know.

Thanks,
John​

>
> Any other suggestions ?
>
> Thanks,
> Sekhar Vajjhala
>
>
>
> On Wed, Aug 6, 2014 at 11:43 AM, Jyoti Ranjan  wrote:
>
>> No, there is no way at this point of time. Also, it is little bit
>> difficult expectation because we may attach more than one physical devices
>> (say HP 3PAR CPG) to same volume type.
>>
>>
>> On Wed, Aug 6, 2014 at 1:31 AM, Sekhar Vajjhala 
>> wrote:
>>
>>> Is there a way to get the size of a volume_type using an API.
>>> According to the docs
>>> http://developer.openstack.org/api-ref-compute-v2-ext.html ,
>>> GET /v1.1/{tenant_id}/os-volume-types/{volume_type_id}
>>> returns the following ( but not the size ).
>>>
>>>
>>> {
>>> "volume_type": {
>>> "id": "289da7f8-6440-407c-9fb4-7db01ec49164",
>>> "name": "vol-type-001",
>>> "extra_specs": {
>>> "capabilities": "gpu"
>>> }
>>> }
>>> }
>>>
>>>
>>> Sekhar Vajjhala
>>>
>>> ___
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>>
>
>
> --
> Sekhar Vajjhala | m : 603-785-8993
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] don't detach volume for VM error

2014-07-17 Thread John Griffith
On Thu, Jul 17, 2014 at 7:57 AM, mehmet hacısalihoğlu 
wrote:

> Hi,
>
> You must change  instances table in nova db.  After you will detach volume.
>
> Regards.
>
>
> 2014-07-17 10:16 GMT+03:00 Yugang LIU :
>
>
>> Hi
>>
>> I delete VM with "nova delete cirrors" but it raise error "The server
>> has either erred or is incapable of performing the requested operation.
>> (HTTP 500) "
>>
>> I find a volume attach the VM. The volume status is in-Use.
>>
>> nova volume-detach VM_server volume_lv
>>
>> ERROR: Cannot 'detach_volume' while instance is in vm_state error (HTTP
>> 409)
>>
>> nova reset-state VM. It is in ERROR state.
>>
>> So it is in loop.
>>
>> VM Error -> don't detach volume -> need VM normal state -> nova
>> reset-state ---
>>
>> |
>>  |
>>
>>
>> |__|
>>
>>
>> How to resolv it?
>>
>> --
>> Best regards
>>
>> Yugang LIU
>>
>> Keep It Simple, Stupid
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
What release of OpenStack?  Rather than mess with the DB directly you
could use the reset-state API to change the status of the instance.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] flavor disk io limits with cinder volumes not supported ?

2014-07-16 Thread John Griffith
On Wed, Jul 16, 2014 at 9:23 AM, Pavel Stano  wrote:

> Hi,
>
> i am trying to set io limits for VM with:
> nova flavor-key lb_proxy set quota:disk_total_bytes_sec=1024
> nova flavor-key lb_proxy set quota:disk_total_iops_sec=20
>
> but when i check virsh dumpxml instance-id, it shows it is only set for
> device='cdrom' (thats --config-drive), but no ... is
> set for cinder volume ...
> I am missing something or cinder volumes has no support for io limits ?
>
> (testing) root@test-osmng:~# nova flavor-show lb_proxy |grep
> extra_specs
> | extra_specs | {"quota:disk_total_iops_sec": "20",
> "quota:disk_total_bytes_sec": "1024"} |
>
>
> (testing) root@test-compute1:~# virsh dumpxml instance-0003|sed -n
> '//p'
>
> 
>   
>   
> file='/var/lib/nova/mnt/21ada25d651c70702b9ecb8d530f188a/volume-63dddedb-f9b1-4b09-9005-a9183184fdef'/>
> 
> 63dddedb-f9b1-4b09-9005-a9183184fdef  name='virtio-disk0'/>  slot='0x06' function='0x0'/> 
>
> 
>   
>file='/mnt/instances/73cf44d4-66b3-4610-85ca-10dacb0f888c/disk.config'/>
>  
> 1024
> 20
>   
>   
>   
>   
> 
>
> --
> [ Ohodnotte kvalitu mailu: http://nicereply.com/websupport/Stano/ ]
>
> Pavel Stano | Troubleshooter
>
> http://WebSupport.sk
> *** BERTE A VYCHUTNAVAJTE ***
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
If I remember correctly this worked if you set qos-specs on your Cinder
volume.  The way it worked was you set qos-specs on the volume via Cinder
and that info gets stuffed inside of the connection info of the volume
when it's picked up by Nova.  It should then grab that info on attach and
set things appropriately in libvirt.

It's been a while since I've tested it or tried it out, but looking at the
code everything seems to still be in place.
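
The front-end (hypervisor-enforced) flavor of that looks roughly like this
(names and values are just examples):

  cinder qos-create disk-limits consumer=front-end \
      total_bytes_sec=1024 total_iops_sec=20
  cinder qos-associate <qos-spec-id> <volume-type-id>
  # new volumes of that type carry the limits in their connection info,
  # and the libvirt driver applies them as iotune settings on attach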

John​
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Why doesn't suspend release vCPUs/memory?

2014-06-23 Thread John Griffith
On Mon, Jun 23, 2014 at 10:49 AM, Ricky Saltzer  wrote:

> Right, the quotas don't seem to be released. If I have 210/210 vCPUs used,
> and I suspend an instance with 4 vCPUs, I still have 210/210 vCPUs used.
>
>
> On Mon, Jun 23, 2014 at 11:38 AM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
>>
>> On Mon, Jun 23, 2014 at 7:45 AM, Ricky Saltzer 
>> wrote:
>>
>>>
>>> https://ask.openstack.org/en/question/32826/why-doesnt-suspend-release-vcpusmemory/
>>
>>
>> ​My understanding was always that the instance is no longer consuming any
>> resources via the virt layer, so in essence the resources are in fact freed
>> up on the Compute Node.  Quotas and such however aren't modified (which
>> seems correct to me).  Are you saying you want to see quota's adjusted
>> here? ​
>>
>>
>
>
> --
> Ricky Saltzer
> http://www.cloudera.com
>
Yeah, I think that makes sense and is expected, as a user you're still
consuming those "items" even if they're not active.  The alternative would
be (which I think is what you're getting at) to actually deduct items that
are suspended from the tenants quota count.  I guess when I think of it
though those resources are still "reserved" even if they're not in use.  I
suppose you could do this and then if on resume the quota isn't there we
don't actually resume... but I think this could be argued either way.

Maybe separate quotas for active vs suspended?
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Why doesn't suspend release vCPUs/memory?

2014-06-23 Thread John Griffith
On Mon, Jun 23, 2014 at 7:45 AM, Ricky Saltzer  wrote:

>
> https://ask.openstack.org/en/question/32826/why-doesnt-suspend-release-vcpusmemory/


​My understanding was always that the instance is no longer consuming any
resources via the virt layer, so in essence the resources are in fact freed
up on the Compute Node.  Quotas and such however aren't modified (which
seems correct to me).  Are you saying you want to see quota's adjusted
here? ​
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] keypair workaround

2014-06-22 Thread John Griffith
On Sun, Jun 22, 2014 at 1:42 PM, Frans Thamura  wrote:

> someone here setup openstack (based on mirantis) and i can ssh
> username@serverhost, and no need keypair
>
> but i setup here, need keypair.
>
> any idea?
> --
> Frans Thamura (曽志胜)
> Shadow Master and Lead Investor
> Meruvian.
> Integrated Hypermedia Java Solution Provider.
>
> Mobile: +628557888699
> Blog: http://blogs.mervpolis.com/roller/flatburger (id)
>
> FB: http://www.facebook.com/meruvian
> TW: http://www.twitter.com/meruvian / @meruvian
> Website: http://www.meruvian.org
>
> "We grow because we share the same belief."
>
>
> On Mon, Jun 23, 2014 at 1:10 AM, Alberto Molina Coballes
>  wrote:
> >
> > El 21/06/2014 20:17, "Frans Thamura"  escribió:
> >
> >
> >>
> >>
> >> i want to make more common ssh without have to add -i keypair2.pem
> >>
> >> any idea?
> >>
> >
> > What about using ssh-agent?
> >
> > Alberto
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

Maybe you've already done this and it's not what you want; but in my
setups I just use my own image with the appropriate modifications to
sshd_config to enable password-authenticated ssh.  Then I don't need to mess
with keys any longer, which I think is what you're getting at.

You can do this pretty easily by using any one of the cloud images out
there that takes a key, modifying it and then building your own custom
image out of it.  ​
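
One way to do that modification offline is libguestfs' virt-customize; a
rough sketch (image name and password are placeholders, and you'd normally
leave cloud-init in place):

  virt-customize -a my-cloud-image.qcow2 \
      --root-password password:changeme \
      --run-command "sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config"
  glance image-create --name my-custom-image --disk-format qcow2 \
      --container-format bare --file my-cloud-image.qcow2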
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Support options

2014-06-19 Thread John Griffith
On Thu, Jun 19, 2014 at 5:27 AM, Ian Marshall  wrote:

> Hi
>
> Does anyone know of any companies offering commercial support for small
> deployments - 4 nodes and manual deployment of Icehouse.
>
> Regards
> Ian
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
You can check out the OpenStack Marketplace page [1].  Not sure about
small deployments etc, but worth checking out.

[1]: http://www.openstack.org/marketplace/consulting/
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Weird Network problem

2014-05-28 Thread John Griffith
On Wed, May 28, 2014 at 4:46 PM, Georgios Dimitrakakis  wrote:

> Hi!
>
> I am performing a new installation of openstack-icehouse on a CentOS 6.5
> machine (all-in-one).
>
>
> I have configured a FlatDHCP nova network and I can start succesfully a
> cirrOS instance.
> Moreover, I can ping it (10.0.0.2) and I can ssh to it without any
> problems.
>
> The problem is that although I can resolve a hosts name (eg.
> www.google.com) from inside the instance I cannot ping it.
> It seems as if I cannot go outside my master host from inside that
> instance.
>
> The same thing happens if I provide a floating-ip to the instance. I can
> ping and ssh to the floating IP but if I ssh into the instance I cannot
> reach the "outside" world.
>
>
> Any ideas???
>
>
> Best,
>
>
> G.
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack


Double check your settings for flat_interface and public_interface.
 Sounds like your bridge is not configured correctly so you're never
actually bridging to the public net.  Is this multiple NICs or a single one?
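
For an all-in-one FlatDHCP setup the relevant nova.conf pieces look roughly
like this (interface names are examples, adjust to your box):

  network_manager = nova.network.manager.FlatDHCPManager
  flat_network_bridge = br100
  flat_interface = eth1        # NIC the instances are bridged onto
  public_interface = eth0      # NIC carrying the external/floating range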
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cut over from lvm_type=XXX to volume_clear=XXX

2014-05-24 Thread John Griffith
On Fri, May 23, 2014 at 10:16 PM, Jeffrey Walton  wrote:

> Hi All,
>
> This is a follow up to the recent question on wiping.
>
> Previous versions of OpenStack used 'lvm_type=default' to signify a
> wipe. Current versions use 'volume_clear=zero' or
> 'volume_clear=shred'.
>
> For current versions, 'zero' is a single pass and writes 0's; while
> 'shred' uses three passes of predetermined bit patterns.
>
> When did the change occur from lvm_type to volume_clear?
>
> Thanks in advance.
>
> Jeff
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Hi Jeff,
I think there's some confusion; lvm_type is not explicitly for setting
secure delete.  The default lvm_type is still thick provisioned, and
further, in that case the default is still to do a zero of the entire
volume.  None of the previous default behavior has changed.

What has changed, however, is the ability to specify different types of
delete (or none at all), and that came in around the Grizzly time-frame.

To reiterate, lvm_type has no direct link to the secure delete or
volume_clear option.  Using thin however implicitly sets volume_clear=none
as it makes no sense in that context.

Hope that helps, sorry for any confusion and feel free to hit me up on IRC
if you still have questions.​
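
For completeness, the knobs in question live in cinder.conf and look like
this (the values shown are just the common choices):

  lvm_type = default           # thick/linear LVs (the default)
  volume_clear = zero          # zero | shred | none
  volume_clear_size = 0        # MiB to clear, 0 = the whole volume

  # or sidestep the wipe entirely with thin provisioning:
  lvm_type = thin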
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Large volume creation takes long time

2014-05-23 Thread John Griffith
On Fri, May 23, 2014 at 10:19 AM, Udara Liyanage wrote:

> HI Beyh,
>
> According to the changes made in [1], volumes are not cleared when
> "volume_clear=none"
>
> [1]
> https://github.com/openstack/cinder/commit/bb06ebd0f6a75a6ba55a7c022de96a91e3750d20
>

​Exactly!!  That's the whole point :)​

>
>
>
> On Fri, May 23, 2014 at 9:46 PM, Udara Liyanage 
> wrote:
>
>> HI John,
>>
>> Thanks for the detailed answer.
>> Just one more clarification. If "lvm_type=thin" will it clear only the
>> blocks which data are written on?
>>
>>
>> On Fri, May 23, 2014 at 8:51 PM, John Griffith <
>> john.griff...@solidfire.com> wrote:
>>
>>>
>>>
>>>
>>> On Thu, May 22, 2014 at 11:33 PM, Ruzicka, Marek <
>>> marek.ruzi...@t-systems.sk> wrote:
>>>
>>>> Can’t help you here… I’m still running Grizzly, and volume resize came
>>>> in IceHouse I believe.
>>>>
>>>>
>>>>
>>>> Marek
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* Udara Liyanage [mailto:udaraliyan...@gmail.com]
>>>> *Sent:* 23. mája 2014 07:28
>>>> *To:* Ruzicka, Marek
>>>> *Cc:* openstack@lists.openstack.org
>>>> *Subject:* Re: [Openstack] Large volume creation takes long time
>>>>
>>>>
>>>>
>>>> HI marek,
>>>>
>>>> Thanks for the reply.
>>>> By the way how to resize a volume to zero. I found "cinder extend"
>>>> command where volumes can be extended. How to shrink the volume to zero?
>>>>
>>>>
>>>>
>>>> On Fri, May 23, 2014 at 10:52 AM,  wrote:
>>>>
>>>> I’m not 100% sure, but I remember (depending on the driver you are
>>>> using) it used to zero entire volume before deletion, which might or might
>>>> not make sense for you.
>>>>
>>>> Sorry for not being able to provide any details, this is just something
>>>> I remember from way back.
>>>>
>>>>
>>>>
>>>> Cheers,
>>>>
>>>> Marek
>>>>
>>>>
>>>>
>>>> *From:* Udara Liyanage [mailto:udaraliyan...@gmail.com]
>>>> *Sent:* 23. mája 2014 07:00
>>>> *To:* openstack@lists.openstack.org
>>>> *Subject:* [Openstack] Large volume creation takes long time
>>>>
>>>>
>>>>
>>>> Hi,
>>>>
>>>> When it tries to delete a large sized volume, it takes a long time. Is
>>>> there a another better way to do  this such as resizing a volume to zero.
>>>>
>>>>
>>>> --
>>>>
>>>> Udara S.S Liyanage.
>>>> Software Engineer at WSO2.
>>>> Commiter and PPMC Member of Apache Stratos.
>>>> Blog - http://udaraliyanage.wordpress.com
>>>>
>>>> phone: +94 71 443 6897
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Udara S.S Liyanage.
>>>> Software Engineer at WSO2.
>>>> Commiter and PPMC Member of Apache Stratos.
>>>> Blog - http://udaraliyanage.wordpress.com
>>>>
>>>> phone: +94 71 443 6897
>>>>
>>>> ___
>>>> Mailing list:
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to : openstack@lists.openstack.org
>>>> Unsubscribe :
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>> ​The writing zeros to the volume on delete is indeed what you're
>>> seeing, it's a "feature" that is there for security purposes, so for
>>> example a public provider doesn't hand out a new volume to a tenant with
>>> the possibility of allocating blocks with somebody elses data on them.
>>>
>>> You can switch to using "lvm_type=thin" in your cinder.conf file which
>>> will basically handle this by returning zero for blocks that haven't been
>>> written yet (so we don't do the secure delete).  Or as was suggested if you
>>> don't need the secure delete option you can turn it off altogether by
>>> setting "volume_clear=none" in your cinder.conf file.
>>>
>>> As far as adjusting the size to zero, we don't allow reducing the volume
>>> size, only extending.  Reducing introduces quite a bit of risk and also it
>>> doesn't address the problem that secure_delete is there for anyway.
>>>
>>> Thanks,
>>> John​
>>>
>>>
>>
>>
>> --
>> Udara S.S Liyanage.
>> Software Engineer at WSO2.
>> Commiter and PPMC Member of Apache Stratos.
>> Blog - http://udaraliyanage.wordpress.com
>> phone: +94 71 443 6897
>>
>
>
>
> --
> Udara S.S Liyanage.
> Software Engineer at WSO2.
> Commiter and PPMC Member of Apache Stratos.
> Blog - http://udaraliyanage.wordpress.com
> phone: +94 71 443 6897
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Large volume creation takes long time

2014-05-23 Thread John Griffith
On Fri, May 23, 2014 at 10:16 AM, Udara Liyanage wrote:

> HI John,
>
> Thanks for the detailed answer.
> Just one more clarification. If "lvm_type=thin" will it clear only the
> blocks which data are written on?
>
>
> On Fri, May 23, 2014 at 8:51 PM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
>>
>>
>>
>> On Thu, May 22, 2014 at 11:33 PM, Ruzicka, Marek <
>> marek.ruzi...@t-systems.sk> wrote:
>>
>>> Can’t help you here… I’m still running Grizzly, and volume resize came
>>> in IceHouse I believe.
>>>
>>>
>>>
>>> Marek
>>>
>>>
>>>
>>>
>>>
>>> *From:* Udara Liyanage [mailto:udaraliyan...@gmail.com]
>>> *Sent:* 23. mája 2014 07:28
>>> *To:* Ruzicka, Marek
>>> *Cc:* openstack@lists.openstack.org
>>> *Subject:* Re: [Openstack] Large volume creation takes long time
>>>
>>>
>>>
>>> HI marek,
>>>
>>> Thanks for the reply.
>>> By the way how to resize a volume to zero. I found "cinder extend"
>>> command where volumes can be extended. How to shrink the volume to zero?
>>>
>>>
>>>
>>> On Fri, May 23, 2014 at 10:52 AM,  wrote:
>>>
>>> I’m not 100% sure, but I remember (depending on the driver you are
>>> using) it used to zero entire volume before deletion, which might or might
>>> not make sense for you.
>>>
>>> Sorry for not being able to provide any details, this is just something
>>> I remember from way back.
>>>
>>>
>>>
>>> Cheers,
>>>
>>> Marek
>>>
>>>
>>>
>>> *From:* Udara Liyanage [mailto:udaraliyan...@gmail.com]
>>> *Sent:* 23. mája 2014 07:00
>>> *To:* openstack@lists.openstack.org
>>> *Subject:* [Openstack] Large volume creation takes long time
>>>
>>>
>>>
>>> Hi,
>>>
>>> When it tries to delete a large sized volume, it takes a long time. Is
>>> there a another better way to do  this such as resizing a volume to zero.
>>>
>>>
>>> --
>>>
>>> Udara S.S Liyanage.
>>> Software Engineer at WSO2.
>>> Commiter and PPMC Member of Apache Stratos.
>>> Blog - http://udaraliyanage.wordpress.com
>>>
>>> phone: +94 71 443 6897
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Udara S.S Liyanage.
>>> Software Engineer at WSO2.
>>> Commiter and PPMC Member of Apache Stratos.
>>> Blog - http://udaraliyanage.wordpress.com
>>>
>>> phone: +94 71 443 6897
>>>
>>> ___
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>> ​The writing zeros to the volume on delete is indeed what you're seeing,
>> it's a "feature" that is there for security purposes, so for example a
>> public provider doesn't hand out a new volume to a tenant with the
>> possibility of allocating blocks with somebody elses data on them.
>>
>> You can switch to using "lvm_type=thin" in your cinder.conf file which
>> will basically handle this by returning zero for blocks that haven't been
>> written yet (so we don't do the secure delete).  Or as was suggested if you
>> don't need the secure delete option you can turn it off altogether by
>> setting "volume_clear=none" in your cinder.conf file.
>>
>> As far as adjusting the size to zero, we don't allow reducing the volume
>> size, only extending.  Reducing introduces quite a bit of risk and also it
>> doesn't address the problem that secure_delete is there for anyway.
>>
>> Thanks,
>> John​
>>
>>
>
>
> --
> Udara S.S Liyanage.
> Software Engineer at WSO2.
> Commiter and PPMC Member of Apache Stratos.
> Blog - http://udaraliyanage.wordpress.com
> phone: +94 71 443 6897
>

​Hi Udara,

When you use LVM Thin it implicitly solves the problem that secure delete
was introduced for.  In other words, it won't give out previous written
blocks.  It keeps a table of the blocks that have been written since
allocation, if you try and read a block that isn't listed in the table as
written, it will automatically just return "zero".  Thus we don't need to
worry on the Cinder side about trying to zero out written blocks etc.

FWIW, we've looked at doing what you suggest on the thick/linear version
(track the written blocks and only zero those out) but personally I've just
kind of moved on to using Thin provisioning instead. There are a number of
other advantages associated with it as well which just made it a much
better fit for a lot of people.  It still doesn't support mirroring which
is too bad, but for the most part it's a better choice (IMHO).
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Large volume creation takes long time

2014-05-23 Thread John Griffith
On Thu, May 22, 2014 at 11:33 PM, Ruzicka, Marek  wrote:

> Can’t help you here… I’m still running Grizzly, and volume resize came in
> IceHouse I believe.
>
>
>
> Marek
>
>
>
>
>
> *From:* Udara Liyanage [mailto:udaraliyan...@gmail.com]
> *Sent:* 23. mája 2014 07:28
> *To:* Ruzicka, Marek
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] Large volume creation takes long time
>
>
>
> HI marek,
>
> Thanks for the reply.
> By the way how to resize a volume to zero. I found "cinder extend" command
> where volumes can be extended. How to shrink the volume to zero?
>
>
>
> On Fri, May 23, 2014 at 10:52 AM,  wrote:
>
> I’m not 100% sure, but I remember (depending on the driver you are using)
> it used to zero entire volume before deletion, which might or might not
> make sense for you.
>
> Sorry for not being able to provide any details, this is just something I
> remember from way back.
>
>
>
> Cheers,
>
> Marek
>
>
>
> *From:* Udara Liyanage [mailto:udaraliyan...@gmail.com]
> *Sent:* 23. mája 2014 07:00
> *To:* openstack@lists.openstack.org
> *Subject:* [Openstack] Large volume creation takes long time
>
>
>
> Hi,
>
> When it tries to delete a large sized volume, it takes a long time. Is
> there a another better way to do  this such as resizing a volume to zero.
>
>
> --
>
> Udara S.S Liyanage.
> Software Engineer at WSO2.
> Commiter and PPMC Member of Apache Stratos.
> Blog - http://udaraliyanage.wordpress.com
>
> phone: +94 71 443 6897
>
>
>
>
> --
>
> Udara S.S Liyanage.
> Software Engineer at WSO2.
> Commiter and PPMC Member of Apache Stratos.
> Blog - http://udaraliyanage.wordpress.com
>
> phone: +94 71 443 6897
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
The writing of zeros to the volume on delete is indeed what you're seeing;
it's a "feature" that is there for security purposes, so for example a
public provider doesn't hand out a new volume to a tenant with the
possibility of allocating blocks with somebody else's data on them.

You can switch to using "lvm_type=thin" in your cinder.conf file which will
basically handle this by returning zero for blocks that haven't been
written yet (so we don't do the secure delete).  Or as was suggested if you
don't need the secure delete option you can turn it off altogether by
setting "volume_clear=none" in your cinder.conf file.

As far as adjusting the size to zero, we don't allow reducing the volume
size, only extending.  Reducing introduces quite a bit of risk and also it
doesn't address the problem that secure_delete is there for anyway.

Thanks,
John​
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] RDO cinder upgrading havana to icehouse

2014-04-19 Thread John Griffith
On Fri, Apr 18, 2014 at 1:08 PM, Remo Mattei  wrote:

> Interesting solution I m curious to see if it works. I do not see why but
> maybe code changed to much to make it seamless.
>
>
> Inviato da iPhone ()
>
> > Il giorno Apr 18, 2014, alle ore 11:51, Dimitri Maziuk <
> dmaz...@bmrb.wisc.edu> ha scritto:
> >
> > Hi all,
> >
> > I'm thinking of adding a cinder storage node to our centos 6/rdo/havana
> > setup & I wonder if I should go icehouse on that. Anyone tried mixing
> > icehouse cinder storage node w/ havana controller (and the rest)? Any
> > gotchas or reasons I shouldn't do that?
> >
> > TIA
> > --
> > Dimitri Maziuk
> > Programmer/sysadmin
> > BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
> >
> > ___
> > Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to : openstack@lists.openstack.org
> > Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
> >
> > !DSPAM:1,535176a833621865462940!
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Not something I've tried. In "theory", if you set the RPC version on your
Icehouse cinder-volume install to match that of your Havana controller, the
bulk of any issues should be covered.

The idea is that most things like DB access and API items are changed
before the scheduler/rpc layer and the items below that should maintain
compatibility.  In other words the RPC versioning is intended to allow you
to do exactly what you describe here.

Honestly, unless somebody else has tried it, I think the best answer is going
to be "if you have the patience and don't mind it possibly not working,
give it a shot".  It "should" work, but I don't know of anybody who has tested it.

I'd love to hear how it worked out if you try it.

John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder Attach issues?

2014-04-09 Thread John Griffith
On Wed, Apr 9, 2014 at 11:18 PM, Erich Weiler  wrote:

> The cinder logs (all types) show nothing.  The only evidence of
> strangeness I see in any logs are in the nova compute log on the server
> that the instance I'm trying to attach to is running on.  I see this:
>
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 266, in
> decorated_function
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher return
> function(self, context, *args, **kwargs)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 309, in
> decorated_function
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher e,
> sys.exc_info())
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
> line 68, in __exit__
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> six.reraise(self.type_, self.value, self.tb)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 296, in
> decorated_function
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher return
> function(self, context, *args, **kwargs)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4130, in
> attach_volume
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> bdm.destroy(context)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
> line 68, in __exit__
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> six.reraise(self.type_, self.value, self.tb)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4127, in
> attach_volume
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher return
> self._attach_volume(context, instance, driver_bdm)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4148, in
> _attach_volume
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> self.volume_api.unreserve_volume(context, bdm.volume_id)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 173, in
> wrapper
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher res =
> method(self, ctx, volume_id, *args, **kwargs)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 249, in
> unreserve_volume
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> cinderclient(context).volumes.unreserve(volume_id)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 293,
> in unreserve
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher return
> self._action('os-unreserve', volume)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 250,
> in _action
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher return
> self.api.client.post(url, body=body)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in
> post
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher return
> self._cs_request(url, 'POST', **kwargs)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in
> _cs_request
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher raise
> exceptions.ConnectionError(msg)
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> ConnectionError: Unable to establish connection: [Errno 101] ENETUNREACH
> 2014-04-09 22:08:29.995 6547 TRACE oslo.messaging.rpc.dispatcher
> 2014-04-09 22:08:29.998 6547 ERROR oslo.messaging._drivers.common [-]
> Returning exception Unable to establish connection: [Errno 101] ENETUNREACH
> to caller
> 2014-04-09 22:08:29.998 6547 ERROR oslo.messaging._drivers.common [-]
> ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-
> packages/oslo/messaging/rpc/dispatcher.py", line 133, in
> _dispatch_and_reply\nincoming.message))\n', '  File
> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line
> 176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt,
> args)\n', '  File 
> "/usr/lib/python

Re: [Openstack] [Glance] Running devstack with just Glance

2014-04-08 Thread John Griffith
Use ENABLED_SERVICES in your local.conf (or localrc) file, something like:

ENABLED_SERVICES=g-api,g-reg,key

Might work
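
For reference, a minimal localrc along those lines (passwords are just
placeholders; depending on your devstack version you may also need rabbit
in the list):

    ADMIN_PASSWORD=devstack
    MYSQL_PASSWORD=devstack
    SERVICE_PASSWORD=devstack
    SERVICE_TOKEN=devstack
    ENABLED_SERVICES=g-api,g-reg,key,mysql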



On Tue, Apr 8, 2014 at 5:49 PM, Shrinand Javadekar
wrote:

> Hi,
>
> I want to run devstack with just Glance (and Keystone because Glance
> requires Keystone I guess). My localrc is pasted below. However, when
> stack.sh completes, I don't see glance running. I looked at the
> catalog returned by keystone and the only service reported by keystone
> is the "identity service" which is itself.
>
> 
>
> ADMIN_PASSWORD=devstack
> MYSQL_PASSWORD=devstack
> RABBIT_PASSWORD=devstack
> SERVICE_PASSWORD=devstack
> SERVICE_TOKEN=devstack
>
> # Uncomment the BRANCHes below to use stable versions
>
> # unified auth system (manages accounts/tokens)
> KEYSTONE_BRANCH=stable/havana
>
> # image storage
> GLANCE_BRANCH=stable/havana
>
> disable_all_services
> enable_service key glance mysql
>
> 
>
> Any ideas how I can get only glance to run on devstack.
>
> Thanks in advance.
> -Shri
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] migrate cinder volume

2014-04-01 Thread John Griffith
On Tue, Apr 1, 2014 at 3:34 PM, Dimitri Maziuk wrote:

> Hi all,
>
> is there a command to migrate a volume from one cinder host to another?
>
> TIA
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
If you're on Havana or later you can use the migrate command:

[root@osc-1 cinder]# cinder help migrate
usage: cinder migrate [--force-host-copy <True|False>] <volume> <host>

Migrate the volume to the new host.

Positional arguments:
  <volume>              ID of the volume to migrate
  <host>                Destination host

Optional arguments:
  --force-host-copy <True|False>
                        Optional flag to force the use of the generic host-
                        based migration mechanism, bypassing driver
                        optimizations (Default=False).
[root@osc-1 cinder]#
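
For example, something along these lines (the volume ID and destination
host here are made up; use "cinder list" and "cinder service-list" to find
real values for your deployment):

    cinder migrate 3f2a9c1e-1111-4d6e-9c1a-222233334444 otherhost@lvm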
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Removing --rescan option in iscsiadm command

2014-03-20 Thread John Griffith
On Wed, Mar 19, 2014 at 5:54 AM, Fiorenza Meini  wrote:

> Hi there,
> I'm experiencing a problem while attaching a volume to a VM; this is what
> I see in nova.log:
>
>  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T
> iqn.2010-10.org.openstack:volume-0c623f91-9a90-4ff5-805c-8e5861e3c92d -p
> 127.0.0.1:3260 --rescan
>


Any chance you could include the surrounding bits of the log file here?
 Specifically the error that you're encountering?

I'm not sure about using the loopback 127.0.0.1 address for your iSCSI IP;
whether that matters or not depends on what you're seeing.


> If I give the same command, on a shell session, without "--rescan" option,
> it works.
>
> There is a way to remove this option when the command is given?
> The command tgtadm --lld iscsi --mode target --op show give me the correct
> response: I see the volume I previously created throught the Dashbord
> interface-
>
> Thanks and regards
> --
>
> Fiorenza Meini
> Spazio Web S.r.l.
>
> V. Dante Alighieri, 10 - 13900 Biella
> Tel.: 015.2431982 - 015.9526066
> Fax: 015.2522600
> Reg. Imprese, CF e P.I.: 02414430021
> Iscr. REA: BI - 188936
> Iscr. CCIAA: Biella - 188936
> Cap. Soc.: 30.000,00 Euro i.v.
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder error - No portal found

2014-03-11 Thread John Griffith
On Tue, Mar 11, 2014 at 10:28 AM, Narayanan, Krishnaprasad <
naray...@uni-mainz.de> wrote:

> 


Suspect an issue with your config files; in particular, the "-p :3260" (no IP
address before the port) in your trace is a problem.  Verify your
iscsi_ip_address settings in cinder.conf and nova.conf.
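
For example, in cinder.conf on the node running cinder-volume (the address
below is just a placeholder; use an IP the compute nodes can actually reach
on port 3260):

    iscsi_ip_address = 10.0.0.10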

Some additional tips can be found here:
https://ask.openstack.org/en/question/130/why-do-i-get-no-portal-found-error-while-attaching-cinder-volume-to-vm/
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to use the new feature of cinder about read-only volume

2014-02-13 Thread John Griffith
On Thu, Feb 13, 2014 at 7:54 PM, minmin ren  wrote:
> I found cinder havana version is added a new feature about the read-only
> volume.
> So I create a volume with setting volume metadata with key="readonly" and
> value="True", and attached it to an instance.
> However, it doesn't work. I can write data in volume.
>
> Any wrong with my steps?
> How could I use the read-only volume?
> Has been multi-attach readonly volume supported?
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Unfortunately this isn't something that's usable at this point.  What
you are seeing in the code are the beginnings of an effort that was
started but never picked back up this cycle.  I'll have a look at
where this is at, but we may revert the code for Icehouse unless it's
picked back up.  There's no API call specifically because we weren't
ready to expose this yet.

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] openstack havana cinder chooses wrong host to create new volume

2014-02-12 Thread John Griffith
On Wed, Feb 12, 2014 at 10:57 AM, Staicu Gabriel
 wrote:
> Thanks for the answer John.
> This is really tricky and I will try to explain why:
> You were right. The snap volume itself was created from a volume on opstck01
> when it was up.
> This is the offending snap. Here is the proof:
> root@opstck10:~# cinder snapshot-show 30093123-0da2-4864-b8e6-87e023e842a4
> ++--+
> |  Property  |Value
> |
> ++--+
> | created_at |
> 2014-02-12T13:20:41.00  |
> |display_description |
> |
> |display_name|  cirros-0.3.1-snap
> |
> | id |
> 30093123-0da2-4864-b8e6-87e023e842a4 |
> |  metadata  |  {}
> |
> |  os-extended-snapshot-attributes:progress  | 100%
> |
> | os-extended-snapshot-attributes:project_id |
> 8c25ff44225f4e78ab3f526d99c1b7e1   |
> |size|  1
> |
> |   status   |  available
> |
> | volume_id  |
> 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
> ++--+
>
> root@opstck10:~# cinder show 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9
> ++--+
> |Property|Value |
> ++--+
> |  attachments   |  []  |
> |   availability_zone| nova |
> |bootable|false |
> |   created_at   |  2014-02-11T15:33:58.00  |
> |  display_description   |  |
> |  display_name  | cirros-0.3.1 |
> |   id   | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
> |metadata|   {u'readonly': u'False'}|
> | os-vol-host-attr:host  |   opstck01   |
> | os-vol-mig-status-attr:migstat | None |
> | os-vol-mig-status-attr:name_id | None |
> |  os-vol-tenant-attr:tenant_id  |   8c25ff44225f4e78ab3f526d99c1b7e1   |
> |  size  |  1   |
> |  snapshot_id   | None |
> |  source_volid  | None |
> | status |  available   |
> |  volume_type   | None |
> ++--+
>
> And now follows the interesting part. I am using ceph as a backend for
> cinder and I have multiple cinder-volume agents for HA reasons. The volumes
> themselves are available. The agents can fall.  How can I overcome the
> limitation to create a volume on the same agent as the snap itself was
> created?
>
> Thanks a lot,
> Gabriel
>
>
>
> On Wednesday, February 12, 2014 6:20 PM, John Griffith
>  wrote:
> On Wed, Feb 12, 2014 at 3:24 AM, Staicu Gabriel
>  wrote:
>>
>>
>> Hi,
>>
>> I have a setup with Openstack Havana on ubuntu precise with multiple
>> schedulers and volumes.
>> root@opstck10:~# cinder service-list
>>
>> +--+--+--+-+---++
>> |  Binary  |  Host  | Zone |  Status | State |Updated_at
>> |
>>
>> +--+--+--+-+---++
>> | cinder-scheduler | opstck08 | nova | enabled |  up  |
>> 2014-02-12T10:08:28.00 |
>> | cinder-scheduler | opstck09 | nova | enabled |  up  |
>> 2014-02-12T10:08:29.00 |
>> | cinder-scheduler | opstck10 | nova | enabled |  up  |
>> 2014-02-12T10:08:28.00 |
>> |  cinder-volume  | opstck01 | nova | enabled |  down |
>> 2014-02-12T09:39:09.00 |
>> |  cinder-volume  | opstck04 | nova | enabled |  down |
>> 2014-02-12T09:39:09.00 |
>> |  cinder-volume  | opstck05 | nova | enabled |  down |
>> 2014-02-12T09:39:09.00 |
>> |  cinder-volume  | 

Re: [Openstack] openstack havana cinder chooses wrong host to create new volume

2014-02-12 Thread John Griffith
On Wed, Feb 12, 2014 at 3:24 AM, Staicu Gabriel
 wrote:
>
>
> Hi,
>
> I have a setup with Openstack Havana on ubuntu precise with multiple
> schedulers and volumes.
> root@opstck10:~# cinder service-list
> +--+--+--+-+---++
> |  Binary  |   Host   | Zone |  Status | State | Updated_at
> |
> +--+--+--+-+---++
> | cinder-scheduler | opstck08 | nova | enabled |   up  |
> 2014-02-12T10:08:28.00 |
> | cinder-scheduler | opstck09 | nova | enabled |   up  |
> 2014-02-12T10:08:29.00 |
> | cinder-scheduler | opstck10 | nova | enabled |   up  |
> 2014-02-12T10:08:28.00 |
> |  cinder-volume   | opstck01 | nova | enabled |  down |
> 2014-02-12T09:39:09.00 |
> |  cinder-volume   | opstck04 | nova | enabled |  down |
> 2014-02-12T09:39:09.00 |
> |  cinder-volume   | opstck05 | nova | enabled |  down |
> 2014-02-12T09:39:09.00 |
> |  cinder-volume   | opstck08 | nova | enabled |   up  |
> 2014-02-12T10:08:28.00 |
> |  cinder-volume   | opstck09 | nova | enabled |   up  |
> 2014-02-12T10:08:28.00 |
> |  cinder-volume   | opstck10 | nova | enabled |   up  |
> 2014-02-12T10:08:28.00 |
> +--+--+--+-+---++
>
> When I am trying to create a new instance from a volume snapshot it keeps
> choosing for the creation of the volume opstck01 on which cinder-volume is
> down.
> Did anyone encounter the same problem?
> Thanks
>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

Which node is the parent volume and snapshot on?  In the case of
create from source-volume and create from snapshot these need to be
created on the same node as the source or snapshot.  There's currently
a bug ([1]), where we don't check/enforce the type settings to make
sure these match up.  That's in progress right now and will be
backported.

[1]: https://bugs.launchpad.net/cinder/+bug/1276787

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Openstack(Havana) + cinder + XCP - snapshot on the fly(attached disk)

2014-02-09 Thread John Griffith
On Sun, Feb 9, 2014 at 3:53 PM, Misha Dobrovolskyy  wrote:
> Good afternoon,
>
> I'm trying to get solution to get snapshots from the attached drive to XCP
> instances.
>
>
> Can somebody pls answer if it's possible at all, because I've tried few
> designs - and nothing is able to get what I want to get.
>
> Thanks in advance.
>
> --
> Best Regards,
> Misha Dobrovolskyy
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Cinder Snapshots aren't directly consumable by Xen (or any
hypervisor), but instead they're used to do things like create a new
volume from that snapshot.  The snapshot object doesn't have any
connector info associated with it.  Not sure if I fully understand
what you're describing; if not, clarify a bit and I'll see if I can
help.
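
In case it helps, the usual pattern is to create a new volume from the
snapshot and then attach that to the instance (the IDs below are made up):

    cinder create --snapshot-id 6d2f2e6a-1111-2222-3333-444455556666 \
        --display-name from-snap 10
    nova volume-attach <instance-id> <new-volume-id> auto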

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova unique name generator middleware

2014-02-02 Thread John Griffith
On Sun, Feb 2, 2014 at 7:26 PM, Joel Cooklin  wrote:
> +1 intel use cases.  It would be nice to avoid our current custom patches.
>
>
> On Sunday, February 2, 2014, Joshua Harlow  wrote:
>>
>> +1 yahoo use case(s) would also benefit from not having to patch the code
>> to achieve similar results.
>>
>> Sent from my really tiny device...
>>
>> > On Feb 2, 2014, at 12:12 AM, "Tim Bell"  wrote:
>> >
>> > It would be interesting to have a formal exit inside OpenStack nova at
>> > VM creation for this sort of check rather than having to patch the code.
>> >
>> > Other scenarios that I've seen is where there is a need to enforce
>> > limits as the server name is used for other purposes. Examples are
>> > characters in the server name or maximum length.
>> >
>> > Tim
>> >
>> >> -Original Message-
>> >> From: gustavo panizzo  [mailto:g...@zumbi.com.ar]
>> >> Sent: 02 February 2014 05:27
>> >> To: Craig J; openstack@lists.openstack.org
>> >> Subject: Re: [Openstack] nova unique name generator middleware
>> >>
>> >> On 02/01/2014 04:19 PM, Craig J wrote:
>> >>
>> >>
>> >>> I think the best way to accomplish this is to write a custom piece of
>> >>> paste middleware and plug it into the nova api. I'm planning on
>> >>> basically overriding the name provide by the user with a name that we
>> >>> can guarantee to be unique.
>> >>
>> >> do you plan to rename the vm in case the name already exists or just
>> >> abort the creation? i would choose the second
>> >>
>> >>>
>> >>> So, two questions:
>> >>> 1. Does anyone have a similar piece of middleware they'd care to
>> >>> share?
>> >> i don't i would love to have it, i would share if i have it
>> >>
>> >>> 2. Are there any reasons this approach won't work? Any better
>> >>> approaches?
>> >> it think it should be done at tenant level, my use case is i want to
>> >> restrict vm names depending on the tenant them are in
>> >>
>> >> example:
>> >>
>> >> tenant foo, vm names allowes
>> >>
>> >> foo-mysql-n
>> >> foo-apache-n
>> >>
>> >> tenant bar
>> >>
>> >> foo-mongodb-n
>> >> foo-nginx
>> >>
>> >> this help a lot with orchestration, we are doing it manually now but i
>> >> would like to enforce it
>> >>
>> >>>
>> >>>
>> >>> Thanks in advance,
>> >>> Craig
>> >>>
>> >>>
>> >>> ___
>> >>> Mailing list:
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >>> Post to : openstack@lists.openstack.org
>> >>> Unsubscribe :
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >>
>> >>
>> >> --
>> >> 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333
>> >>
>> >>
>> >> ___
>> >> Mailing list:
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >> Post to : openstack@lists.openstack.org
>> >> Unsubscribe :
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >
>> > ___
>> > Mailing list:
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> > Post to : openstack@lists.openstack.org
>> > Unsubscribe :
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Maybe just an auto-naming option added to nova's existing boot call;
use a prefix and the UUID of the instance would satisfy the cases
people are talking about here and not impact existing users?

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to configure ESX datastore as a cinder backend storage

2014-01-26 Thread John Griffith
On Fri, Jan 24, 2014 at 3:27 AM, Rajshree Thorat
 wrote:
> Hi All,
>
> I'm deploying OpenStack Havana for provisioning VMs using ESXi as a
> hypervisor.
> I want to use ESX datastore as a cinder backend storage. However, when I try
> to create a volume,
> it gives me the following error.
>
> 2014-01-23 06:35:02.960 2862 ERROR cinder.scheduler.filters.capacity_filter
> [req-e9444c9b-4abb-411b-9476-80213e4c47a1 dc1ab39557d7405ebcf35d98885c8dcd
> 1d2b0c8ade234ca08bcef380acdeb8b9] Free capacity not set: volume node info
> collection broken.
> 2014-01-23 06:35:02.961 2862 ERROR cinder.volume.flows.create_volume
> [req-e9444c9b-4abb-411b-9476-80213e4c47a1 dc1ab39557d7405ebcf35d98885c8dcd
> 1d2b0c8ade234ca08bcef380acdeb8b9] Failed to schedule_create_volume: No valid
> host was found.
>

It would appear that the driver is not loading/starting up.  If you
look in the cinder-volume log you may get more info as to why.
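
(A quick sanity check, assuming the default log location:

    grep -iE 'error|trace' /var/log/cinder/volume.log | tail -n 50

should show the driver initialization failure, if there is one.)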

> Provided is the driver configuration file which I am using.
>
> volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareEsxVmdkDriver

I'm assuming of course that the ip, username and password info below
is actually populated in your config file and you just scrubbed it for
the ML?

> vmware_host_ip=
> vmware_host_username=
> vmware_host_password=
>
> Does anyone have any idea? Pointers in the right direction are always
> welcome.
>
> Thanks in advance.
>
> Regards,
> Rajshree
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to configure ESX datastore as a cinder backend storage

2014-01-26 Thread John Griffith
On Sun, Jan 26, 2014 at 10:34 PM, John Griffith
 wrote:
> On Fri, Jan 24, 2014 at 3:27 AM, Rajshree Thorat
>  wrote:
>> Hi All,
>>
>> I'm deploying OpenStack Havana for provisioning VMs using ESXi as a
>> hypervisor.
>> I want to use ESX datastore as a cinder backend storage. However, when I try
>> to create a volume,
>> it gives me the following error.
>>
>> 2014-01-23 06:35:02.960 2862 ERROR cinder.scheduler.filters.capacity_filter
>> [req-e9444c9b-4abb-411b-9476-80213e4c47a1 dc1ab39557d7405ebcf35d98885c8dcd
>> 1d2b0c8ade234ca08bcef380acdeb8b9] Free capacity not set: volume node info
>> collection broken.
>> 2014-01-23 06:35:02.961 2862 ERROR cinder.volume.flows.create_volume
>> [req-e9444c9b-4abb-411b-9476-80213e4c47a1 dc1ab39557d7405ebcf35d98885c8dcd
>> 1d2b0c8ade234ca08bcef380acdeb8b9] Failed to schedule_create_volume: No valid
>> host was found.
>>
>
> It would appear that the driver is not loading/starting up.  If you
> look in the cinder-volume log you may get more info as to why.
>
>> Provided is the driver configuration file which I am using.
>>
>> volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareEsxVmdkDriver
>
> I'm assuming of course that the ip, username and password info below
> is actually populated in your config file and you just scrubbed it for
> the ML?
>
>> vmware_host_ip=
>> vmware_host_username=
>> vmware_host_password=
>>
>> Does anyone have any idea? Pointers in the right direction are always
>> welcome.
>>
>> Thanks in advance.
>>
>> Regards,
>> Rajshree
>>
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Sorry... missed all of the responses that were posted, gmail was
apparently disconnected and I was using cached inbox.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [cinder] weird cinder issue

2013-12-15 Thread John Griffith
On Sun, Dec 15, 2013 at 12:02 PM, Xu (Simon) Chen  wrote:
> It doesn't always happen, but only happens when I create a batch of 5 VMs or
> more. A few of the VMs would fail to create and become ERROR state.
>
> By digging into the logs, it seems that the VM failed because the volume
> could not be attached, which was in turn due to the volume being deleted for
> some reason.
>
> I am running an HA setup, but even if I shut every component to a single
> instance this would still happen. Any ideas?
>
> -Simon
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Any chance of some cinder-volume logs for this?  There should be
something in there to indicate the delete etc.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [CINDER]Cinder Volume Creation

2013-12-04 Thread John Griffith
On Wed, Dec 4, 2013 at 2:26 PM, Byron Briggs  wrote:

> root@Compute01-CodeNode:/etc/cinder# vgs
>
> File descriptor 3 (/usr/share/bash-completion/completions) leaked on vgs
> invocation. Parent PID 10778: -bash
>
>   VG #PV #LV #SN Attr   VSize   VFree
>
>   cinder-volumes   1   0   0 wz--n- 100.00g 100.00g
>
>
>
>
>
> My control nodes don’t but that shouldn’t matter from my understanding.
>
> root@control01:/etc/cinder# vgs
>
>   VG   #PV #LV #SN Attr   VSize   VFree
>
>   control01-vg   1   2   0 wz--n- 931.27g 44.00m
>
>
>
> *From:* John Griffith [mailto:john.griff...@solidfire.com]
> *Sent:* Wednesday, December 04, 2013 3:20 PM
> *To:* Byron Briggs
> *Cc:* openstack@lists.openstack.org; SYSADMIN
>
> *Subject:* Re: [Openstack] [CINDER]Cinder Volume Creation
>
>
>
>
>
>
>
> On Wed, Dec 4, 2013 at 12:43 PM, Byron Briggs 
> wrote:
>
> When running
>
>
>
> *cinder create --display_name test 10*
>
>
>
> From the compute01NovaCompute.dmz-pod2 Node listed below. It is also
> running xenapi for a xenserver.
>
> ( Control01,2,3 are an HA cluster sitting behind haproxy/keepalivd running
> all the communication and schedulers  -all that is working well.)
>
>
>
>
>
> I get this error in my cinder-volume.log on the control nodes(since they
> run schedulers) There is nothing else to go off of other then “ERROR” on
> the volume status.
>
>
>
> 2013-12-04 12:55:32  WARNING [cinder.scheduler.host_manager] service is
> down or disabled.
>
> 2013-12-04 12:55:32ERROR [cinder.scheduler.manager] Failed to
> schedule_create_volume: No valid host was found.
>
>
>
>
>
> In case you can’t see excel  below.
>
> 192.168.220.40 Is the proxy being distributed into the three control nodes.
>
>
>
> control01.dmz-pod2(192.168.220.41) ->cinder-api,cinder-scheduler
>
> control02.dmz-pod2(192.168.220.42) ->cinder-api,cinder-scheduler
>
> control03.dmz-pod2(192.168.220.43) ->cinder-api,cinder-scheduler
>
> compute01NovaCompute.dmz-pod2(192.168.220.101) ->cinder-volume
>
>
>
>
>
>
>
> Server Name
>
> cinder-api
>
> cinder-scheduler
>
> cinder-volume
>
> control01.dmz-pod2
>
> YES
>
> YES
>
> NO
>
> control02.dmz-pod2
>
> YES
>
> YES
>
> NO
>
> control03.dmz-pod2
>
> YES
>
> YES
>
> NO
>
> compute01NovaCompute.dmz-pod2
>
> NO
>
> NO
>
> NO
>
>
>
> All services are running with no log errors.
>
>
>
> Here is my configs
>
> *compute01NovaCompute.dmz-pod2*
>
> */etc/cinder/api-paste.ini*
>
> [filter:authtoken]
>
> paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
>
> service_protocol = http
>
> service_host = %SERVICE_TENANT_NAME%
>
> service_port = 5000
>
> auth_host = 127.0.0.1
>
> auth_port = 35357
>
> auth_protocol = http
>
> admin_tenant_name = services
>
> admin_user = %SERVICE_USER%
>
> admin_password = %SERVICE_PASSWORD%
>
> signing_dir = /var/lib/cinder
>
>
>
> */etc/cinder/cinder.conf*
>
> [DEFAULT]
>
> iscsi_ip_address=192.168.220.101
>
> rabbit_ha_queues=True
>
> rabbit_hosts=control01:5672,control02:5672,control03:5672
>
> rabbit_userid=openstack_rabbit_user
>
> rabbit_password=openstack_rabbit_password
>
> sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
>
> rootwrap_config = /etc/cinder/rootwrap.conf
>
> api_paste_confg = /etc/cinder/api-paste.ini
>
> iscsi_helper = tgtadm
>
> volume_name_template = volume-%s
>
> volume_group = cinder-volumes
>
> verbose = True
>
> auth_strategy = keystone
>
> state_path = /var/lib/cinder
>
> lock_path = /var/lock/cinder
>
> volumes_dir = /var/lib/cinder/volumes
>
>
>
> *pvscan*
>
> File descriptor 3 (/usr/share/bash-completion/completions) leaked on
> pvscan invocation. Parent PID 10778: -bash
>
>   PV /dev/xvdb   VG cinder-volumes   lvm2 [100.00 GiB / 100.00 GiB free]
>
>   Total: 1 [100.00 GiB] / in use: 1 [100.00 GiB] / in no VG: 0 [0   ]
>
>
>
>
>
> *control01,2,3*
>
> */etc/cinder/api-paste.ini*
>
> [filter:authtoken]
>
> paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
>
> service_protocol = http
>
> service_host = 192.168.220.40
>
> service_port = 5000
>
> auth_host = 192.168.220.40
>
> auth_port = 35357
>
> auth_protocol = http
>
> admin_tenant_name = services
>
> admin_user = cinder
>
> admin_password = keystone_admin
>
> signing_dir = /var/lib/cind

Re: [Openstack] [CINDER]Cinder Volume Creation

2013-12-04 Thread John Griffith
On Wed, Dec 4, 2013 at 12:43 PM, Byron Briggs  wrote:

> When running
>
>
>
> *cinder create --display_name test 10*
>
>
>
> From the compute01NovaCompute.dmz-pod2 Node listed below. It is also
> running xenapi for a xenserver.
>
> ( Control01,2,3 are an HA cluster sitting behind haproxy/keepalivd running
> all the communication and schedulers  -all that is working well.)
>
>
>
>
>
> I get this error in my cinder-volume.log on the control nodes(since they
> run schedulers) There is nothing else to go off of other then “ERROR” on
> the volume status.
>
>
>
> 2013-12-04 12:55:32  WARNING [cinder.scheduler.host_manager] service is
> down or disabled.
>
> 2013-12-04 12:55:32ERROR [cinder.scheduler.manager] Failed to
> schedule_create_volume: No valid host was found.
>
>
>
>
>
> In case you can’t see excel  below.
>
> 192.168.220.40 Is the proxy being distributed into the three control nodes.
>
>
>
> control01.dmz-pod2(192.168.220.41) ->cinder-api,cinder-scheduler
>
> control02.dmz-pod2(192.168.220.42) ->cinder-api,cinder-scheduler
>
> control03.dmz-pod2(192.168.220.43) ->cinder-api,cinder-scheduler
>
> compute01NovaCompute.dmz-pod2(192.168.220.101) ->cinder-volume
>
>
>
>
>
>
>
> Server Name
>
> cinder-api
>
> cinder-scheduler
>
> cinder-volume
>
> control01.dmz-pod2
>
> YES
>
> YES
>
> NO
>
> control02.dmz-pod2
>
> YES
>
> YES
>
> NO
>
> control03.dmz-pod2
>
> YES
>
> YES
>
> NO
>
> compute01NovaCompute.dmz-pod2
>
> NO
>
> NO
>
> NO
>
>
>
> All services are running with no log errors.
>
>
>
> Here is my configs
>
> *compute01NovaCompute.dmz-pod2*
>
> */etc/cinder/api-paste.ini*
>
> [filter:authtoken]
>
> paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
>
> service_protocol = http
>
> service_host = %SERVICE_TENANT_NAME%
>
> service_port = 5000
>
> auth_host = 127.0.0.1
>
> auth_port = 35357
>
> auth_protocol = http
>
> admin_tenant_name = services
>
> admin_user = %SERVICE_USER%
>
> admin_password = %SERVICE_PASSWORD%
>
> signing_dir = /var/lib/cinder
>
>
>
> */etc/cinder/cinder.conf*
>
> [DEFAULT]
>
> iscsi_ip_address=192.168.220.101
>
> rabbit_ha_queues=True
>
> rabbit_hosts=control01:5672,control02:5672,control03:5672
>
> rabbit_userid=openstack_rabbit_user
>
> rabbit_password=openstack_rabbit_password
>
> sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
>
> rootwrap_config = /etc/cinder/rootwrap.conf
>
> api_paste_confg = /etc/cinder/api-paste.ini
>
> iscsi_helper = tgtadm
>
> volume_name_template = volume-%s
>
> volume_group = cinder-volumes
>
> verbose = True
>
> auth_strategy = keystone
>
> state_path = /var/lib/cinder
>
> lock_path = /var/lock/cinder
>
> volumes_dir = /var/lib/cinder/volumes
>
>
>
> *pvscan*
>
> File descriptor 3 (/usr/share/bash-completion/completions) leaked on
> pvscan invocation. Parent PID 10778: -bash
>
>   PV /dev/xvdb   VG cinder-volumes   lvm2 [100.00 GiB / 100.00 GiB free]
>
>   Total: 1 [100.00 GiB] / in use: 1 [100.00 GiB] / in no VG: 0 [0   ]
>
>
>
>
>
> *control01,2,3*
>
> */etc/cinder/api-paste.ini*
>
> [filter:authtoken]
>
> paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
>
> service_protocol = http
>
> service_host = 192.168.220.40
>
> service_port = 5000
>
> auth_host = 192.168.220.40
>
> auth_port = 35357
>
> auth_protocol = http
>
> admin_tenant_name = services
>
> admin_user = cinder
>
> admin_password = keystone_admin
>
> signing_dir = /var/lib/cinder
>
>
>
> */etc/cinder/conder.conf*
>
> [DEFAULT]
>
> sql_idle_timeout=30
>
> rabbit_ha_queues=True
>
> rabbit_hosts=control01:5672,control02:5672,control03:5672
>
> rabbit_userid=openstack_rabbit_user
>
> rabbit_password=openstack_rabbit_password
>
> sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
>
> osapi_volume_listen = 192.168.220.41
>
> rootwrap_config = /etc/cinder/rootwrap.conf
>
> api_paste_confg = /etc/cinder/api-paste.ini
>
> iscsi_helper = tgtadm
>
> volume_name_template = volume-%s
>
> volume_group = nova-volumes
>
> verbose = True
>
> auth_strategy = keystone
>
> state_path = /var/lib/cinder
>
> lock_path = /var/lock/cinder
>
> volumes_dir = /var/lib/cinder/volumes
>
>
>
>
>
> Grizzly Release
>
>
>
>
>
> Any ideas on where to look more into the issue or something with my config?
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
So this happens in a couple of situations; the most common is when there's
not enough space/capacity being reported by the configured backend driver
to allocate the amount of space you're requesting.  Try a "sudo vgs" and
verify that you have enough capacity on your backing store (nova-volumes)
to deploy a 10G volume.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org

Re: [Openstack] Multiples storages

2013-11-13 Thread John Griffith
On Wed, Nov 13, 2013 at 5:50 AM, Guilherme Russi
 wrote:
> Hello Razique, I'm here opening this thread again, I've done some cinder
> delete but when I try to create another storeges it returns there's no space
> to create a new volume.
>
> Here is part of my lvdisplay output:
>
> Alloc PE / Size   52224 / 204,00 GiB
> Free  PE / Size   19350 / 75,59 GiB
>
> And here is my lvdisplay:
>
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
>   VG Namecinder-volumes
>   LV UUIDwdqxVd-GgUQ-21O4-OWlR-sRT3-HvUA-Q8j9kL
>   LV Write Accessread/write
>   LV snapshot status source of
>
> /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09 [active]
>   LV Status  available
>   # open 0
>   LV Size10,00 GiB
>   Current LE 2560
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:1
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09
>   VG Namecinder-volumes
>   LV UUIDEZz1lC-a8H2-1PlN-pJTN-XAIm-wW0q-qtUQOc
>   LV Write Accessread/write
>   LV snapshot status active destination for
> /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
>   LV Status  available
>   # open 0
>   LV Size10,00 GiB
>   Current LE 2560
>   COW-table size 10,00 GiB
>   COW-table LE   2560
>   Allocated to snapshot  0,00%
>   Snapshot chunk size4,00 KiB
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:3
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-ca36920e-938e-4ad1-b9c4-74c1e28abd31
>   VG Namecinder-volumes
>   LV UUIDb40kQV-P8N4-R6jt-k97Z-I2a1-9TXm-5GXqfz
>   LV Write Accessread/write
>   LV Status  available
>   # open 1
>   LV Size60,00 GiB
>   Current LE 15360
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:4
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-70be4f36-10bd-4877-b841-80333ccfe985
>   VG Namecinder-volumes
>   LV UUID2YDrMs-BrYo-aQcZ-8AlX-A4La-HET1-9UQ0gV
>   LV Write Accessread/write
>   LV Status  available
>   # open 1
>   LV Size1,00 GiB
>   Current LE 256
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:5
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-00c532bd-91fb-4a38-b340-4389fb7f0ed5
>   VG Namecinder-volumes
>   LV UUIDMfVOuB-5x5A-jne3-H4Ul-4NP8-eI7b-UYSYE7
>   LV Write Accessread/write
>   LV Status  available
>   # open 0
>   LV Size1,00 GiB
>   Current LE 256
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:6
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-ae133dbc-6141-48cf-beeb-9d6576e57a45
>   VG Namecinder-volumes
>   LV UUID53w8j3-WT4V-8m52-r6LK-ZYd3-mMHA-FtuyXV
>   LV Write Accessread/write
>   LV Status  available
>   # open 0
>   LV Size1,00 GiB
>   Current LE 256
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:7
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-954d2f1b-837b-4ba5-abfd-b3610597be5e
>   VG Namecinder-volumes
>   LV UUIDbelquE-WxQ2-gt6Y-WlPE-Hmq3-B9Am-zcYD3P
>   LV Write Accessread/write
>   LV Status  available
>   # open 0
>   LV Size60,00 GiB
>   Current LE 15360
>   Segments   1
>   Allocation inherit
>   Read ahead sectors auto
>   - currently set to 256
>   Block device   252:8
>
>   --- Logical volume ---
>   LV Name
> /dev/cinder-volumes/volume-05d037d1-4e61-4419-929a-fe340e00e1af
>   VG Namecinder-volumes
>   LV UUIDPt61e7-l3Nu-1IdX-T2sb-0GQD-PhS6-XtIIUj
>   LV Write Accessread/write
>   LV Status  available
>   # open 1
>   LV Size1,00 GiB
>   Current LE 256
>   Segmen

Re: [Openstack] Cinde muti-backend feature

2013-11-12 Thread John Griffith
On Tue, Nov 12, 2013 at 10:24 AM, Trivedi, Narendra
 wrote:
> Hi All,
>
>
>
> Could someone please explain the Cinder multi-backends feature (as of
> Havana)? Specifically, I had the following questions:
>
>
>
> 1)  Can I attach multiple physical storage backends (let’s say one all
> SSD, another all SATA for instance) to a single host?
>
> 2)  How does migration between different multiple backends work? Let’s
> say the an SSD volume is attached to a VM I want to migrate all the data in
> the SSD volume to a SATA volume without bringing down the interface. Do I
> have to manually mount/un-mount the persistent volume to the instance- how
> do paths are maintained? For instance, let’s say the persistent volume was
> /dev/vdx mounted to /mnt/vol0 , how does it re-appear ?
>
>
>
> Thanks a lot in advance!
>
> Narendra
>
>
>
>
>
>
> This message contains information which may be confidential and/or
> privileged. Unless you are the intended recipient (or authorized to receive
> for the intended recipient), you may not read, use, copy or disclose to
> anyone the message or any information contained in the message. If you have
> received the message in error, please advise the sender by reply e-mail and
> delete the message and any attachment(s) thereto without retaining any
> copies.
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

Hi Narendra,

Yes, the whole point is in fact that you can have multiple physical
backends controlled by a single host.  Of course it still needs to be
a cinder supported backend/driver.  To configure this you can check
out the following link [1].
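
For example, a cinder.conf sketch with two LVM backends (the backend names
and volume groups are made up; see [1] for the full walk-through):

    [DEFAULT]
    enabled_backends = lvm-ssd,lvm-sata

    [lvm-ssd]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-ssd
    volume_backend_name = LVM_SSD

    [lvm-sata]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-sata
    volume_backend_name = LVM_SATA

You'd then create a volume type per backend (e.g. "cinder type-create ssd"
followed by "cinder type-key ssd set volume_backend_name=LVM_SSD") so the
scheduler knows where to place each volume.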

With respect to the migration, you should be able to run "cinder migrate
<volume> <host>", where host is the hostname of the backend you
want to migrate to.

John

[1] https://wiki.openstack.org/wiki/Cinder-multi-backend

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Wiping of old cinder volumes

2013-11-01 Thread John Griffith
On Fri, Nov 1, 2013 at 9:02 PM, Caitlin Bestler
 wrote:
> Why not just ask the volume driver to "delete securely" or 'delete with
> wipe'?
> It could even be a level of erasure, if people want to be real paranoid.
> Whatever it is, the vendor's volume driver will know the best way to do it.
>
>
I'm afraid maybe I wasn't clear above. What I was saying is that there
*is* already a configuration option for secure delete; setting it to none
(secure_delete=None), for example, will skip the dd operation altogether,
and there are some other options in terms of how a secure delete is done.

I agree that it's necessary in some environments, and that some won't
want to use thin provisioning which is fine.  I completely agree that
we need to come up with a compromise here, I just wanted to point out
the options that exist today.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Wiping of old cinder volumes

2013-11-01 Thread John Griffith
On Fri, Nov 1, 2013 at 8:06 PM, David Hill  wrote:
> Hello Jeff,
>
> I understand that but does that mean it HAS to be done right away?
> I mean, performances for the rest of the VMs are sacrificed over security 
> concern
> (which are legitimate) but still have an impact over the remainder of the EBS
> volumes being attached to other VMs.   There're no better ways that could
> be implemented to deal with that?  Or maybe some faster ways ?   What
> if the LVM would be kept for a bit longer and be deleted slowly but surely?

By the way for the most part I agree with what you're saying above here.
>
> Thank you very much,
>
> Dave
>
> -Original Message-
> From: Jeffrey Walton [mailto:noloa...@gmail.com]
> Sent: November-01-13 9:21 PM
> To: David Hill
> Cc: openstack@lists.openstack.org
> Subject: Re: [Openstack] Wiping of old cinder volumes
>
> On Fri, Nov 1, 2013 at 8:33 PM, David Hill  wrote:
>> Hello John,
>>
>> Well, if it has an impact on the other volumes that are still being 
>> used by
>> some other VMs, this is worse in my opinion as it will degrade the service 
>> level
>> of the other VMs that need to get some work done.  If that space is not 
>> immediately
>> needed we can take our time to delete it or at least delay the deletion.  Or 
>> perhaps
>> the scheduler should try to delete the volumes when there's less activity on 
>> the storage
>> device (SAN, disks, etc) and even throttle the rate at which the bites are 
>> overwritten
>> by zeros.The fact is that our internal cloud users can delete multiple 
>> volumes at
>> the same time and thus, have an impact on other users VMs that may or may not
>> be doing critical operations and sometimes, Windows may even blue screen 
>> because
>> of the disk latency and this is very bad.
>>
>> Here are the answer to the alternatives:
>> 1) I don't think we do need secure delete but I'm not the one who will make 
>> this call but
>> If I could, I would turn it off right away as it would remove some stress 
>> over the storage
>> Systems.
> For some folks, this can be a compliance problem. If an organization
> is using a cloud provider, then it could be a governance issue too.
> See, for example, NIST Special Publication 800-63-1 and the
> discussions surrounding zeroization.
>
> Jeff
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Wiping of old cinder volumes

2013-11-01 Thread John Griffith
On Fri, Nov 1, 2013 at 4:20 PM, David Hill  wrote:
> Hi guys,
>
>
>
> I was wondering there was some better way of wiping the
> content of an old EBS volume before actually deleting the logical volume in
> cinder ?  Or perhaps, configure or add the possibility to configure the
> number of parallel “dd” processes that will be spawn at the same time…
>
> Sometimes, users will simply try to get rid of their volumes ALL at the same
> time and this is putting a lot of pressure on the SAN servicing those
> volumes and since the hardware isn’t replying fast enough, the process then
> fall in D state and are waiting for IOs to complete which slows down
> everything.
>
> Since this process isn’t (in my opinion) as critical as a EBS write or read,
> perhaps we should be able to throttle the speed of disk wiping or number of
> parallel wipings to something that wouldn’t affect the other read/write that
> are most probably more critical.
>
>
>
> Here is a small capture of the processes :
>
> cinder   23782  0.7  0.2 248868 20588 ?SOct24  94:23
> /usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf
> --logfile /var/log/cinder/volume.log
>
> cinder   23790  0.0  0.5 382264 46864 ?SOct24   9:16  \_
> /usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf
> --logfile /var/log/cinder/volume.log
>
> root 32672  0.0  0.0 175364  2648 ?S21:48   0:00  |   \_
> sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791
> count=102400 bs=1M co
>
> root 32675  0.0  0.1 173636  8672 ?S21:48   0:00  |   |   \_
> /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd
> if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d7
>
> root 32681  3.2  0.0 106208  1728 ?D21:48   0:47  |   |
> \_ /bin/dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791
> count=102400 bs=1M conv=fdatasync
>
> root 32674  0.0  0.0 175364  2656 ?S21:48   0:00  |   \_
> sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf
> count=102400 bs=1M co
>
> root 32676  0.0  0.1 173636  8672 ?S21:48   0:00  |   |   \_
> /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd
> if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dc
>
> root 32683  3.2  0.0 106208  1724 ?D21:48   0:47  |   |
> \_ /bin/dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf
> count=102400 bs=1M conv=fdatasync
>
> root 32693  0.0  0.0 175364  2656 ?S21:48   0:00  |   \_
> sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd
> count=102400 bs=1M co
>
> root 32694  0.0  0.1 173632  8668 ?S21:48   0:00  |   |   \_
> /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd
> if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6
>
> root 32707  3.2  0.0 106208  1728 ?D21:48   0:46  |   |
> \_ /bin/dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd
> count=102400 bs=1M conv=fdatasync
>
> root   342  0.0  0.0 175364  2648 ?S21:48   0:00  |   \_
> sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976ab
> count=102400 bs=1M co
>
> root   343  0.0  0.1 173636  8672 ?S21:48   0:00  |   |   \_
> /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd
> if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976
>
> root   347  3.2  0.0 106208  1728 ?D21:48   0:45  |   |
> \_ /bin/dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976ab
> count=102400 bs=1M conv=fdatasync
>
> root   380  0.0  0.0 175364  2656 ?S21:48   0:00  |   \_
> sudo cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8c4
> count=102400 bs=1M co
>
> root   382  0.0  0.1 173632  8668 ?S21:48   0:00  |   |   \_
> /usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd
> if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8
>
> root   388  3.2  0.0 106208  1724 ?R21:48   0:45  |   |
> \_ /bin/dd if=/dev/zero
> of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8c4
> count=102400 bs=1M conv=fdatasync
>
> root   381  0.0  0.0 17536

[Openstack] python-cinderclient 1.0.7

2013-10-31 Thread John Griffith
Hello,
Just a note that we've pushed a new release of python-cinderclient
(1.0.7).  There are a number of needed fixes in this release, particularly
a fix for the backup/restore CLI command.

In addition there a few new features that are added.

Thanks,
John

Release Notes
=
1.0.7
-
* Add support for read-only volumes
* Add support for setting snapshot metadata
* Deprecate volume-id arg to backup restore in favor of --volume
* Add quota-usage command
* Fix exception deprecation warning message
* Report error when no args supplied to rename cmd

.. _1241941: http://bugs.launchpad.net/python-cinderclient/+bug/1241941
.. _1242816: http://bugs.launchpad.net/python-cinderclient/+bug/1242816
.. _1233311: http://bugs.launchpad.net/python-cinderclient/+bug/1233311
.. _1227307: http://bugs.launchpad.net/python-cinderclient/+bug/1227307
.. _1240151: http://bugs.launchpad.net/python-cinderclient/+bug/1240151
.. _1241682: http://bugs.launchpad.net/python-cinderclient/+bug/1241682

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Question on installation of Open stack on one laptop under Fidora

2013-10-17 Thread John Griffith
On Thu, Oct 17, 2013 at 4:40 PM, Sean Mann  wrote:

>
>
>
> On Wed, Oct 9, 2013 at 9:38 AM, Sean Mann  wrote:
>
>>
>> Hi All,
>>
>> I am new in OpenStack and installing it on Fidora. I am using Oracle
>> VirtualBox as a virtual machine to run Fidora.
>>
>> I run the following shell script:
>>
>> cd devstack; ./stack.sh
>>
>> What happens is in the middle of installation it hangs at:
>>
>> nova x509-get-root-cert /home/sean/devstack/accrc/cacert.pem
>>
>>
>> I had also a report in Fodora as :
>>
>> Process /usr/bin/tgtd was killed by Signal 6 (SIGABRT). I am not sure if
>> the 2 are relavant.
>>
>> I need someone's help to be able to proceed with installation.
>>
>> Thanks,
>>
>> Sean Mann
>>
>>
>>
>>
>>
>> cd devstack; ./stack.sh
>>
>>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
I haven't seen this in a while, but in the past I've hit that and just hit
'enter' a few times and it moves back on its way.  You'll hit this for
each of the certs; just do the same trick.

If that doesn't do it for you, I'm afraid it's something I haven't seen.

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] python-cinderclient 1.0.6

2013-10-04 Thread John Griffith
Hello,

I just wanted to send out a note and let everyone know that I just pushed a
new release of python-cinderclient (1.0.6).  This is meant to coincide with
the RC-1 release of Cinder for the Havana cycle.

Here's the release information which can also be found in the
doc/source/index.rst file.

Thanks
John

Release Notes
=
1.0.6
-
* Add support for multiple endpoints
* Add response info for backup command
* Add metadata option to cinder list command
* Add timeout parameter for requests
* Add update action for snapshot metadata
* Add encryption metadata support
* Add volume migrate support

.. _1221104: http://bugs.launchpad.net/python-cinderclient/+bug/1221104
.. _1220590: http://bugs.launchpad.net/python-cinderclient/+bug/1220590
.. _1220147: http://bugs.launchpad.net/python-cinderclient/+bug/1220147
.. _1214176: http://bugs.launchpad.net/python-cinderclient/+bug/1214176
.. _1210874: http://bugs.launchpad.net/python-cinderclient/+bug/1210874
.. _1210296: http://bugs.launchpad.net/python-cinderclient/+bug/1210296
.. _1210292: http://bugs.launchpad.net/python-cinderclient/+bug/1210292
.. _1207635: http://bugs.launchpad.net/python-cinderclient/+bug/1207635
.. _1207609: http://bugs.launchpad.net/python-cinderclient/+bug/1207609
.. _1207260: http://bugs.launchpad.net/python-cinderclient/+bug/1207260
.. _1206968: http://bugs.launchpad.net/python-cinderclient/+bug/1206968
.. _1203471: http://bugs.launchpad.net/python-cinderclient/+bug/1203471
.. _1200214: http://bugs.launchpad.net/python-cinderclient/+bug/1200214
.. _1195014: http://bugs.launchpad.net/python-cinderclient/+bug/1195014
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] cinder volume weird behavior

2013-09-30 Thread John Griffith
On Mon, Sep 30, 2013 at 4:19 AM, Ritesh Nanda wrote:

> Hello,
>
> I have a Grizzly setup in which I run Cinder using an IBM Storwize 3700.
> Cinder shows a weird behavior: every time I create a volume of some size
> and attach it to a VM, it shows up as a different size.
>
> E.g. I create a 4 GB volume and attach it to a VM and it shows as 15 GB;
> this is different every time, and sometimes it shows a volume smaller than
> the size I created.
>
> While attaching a volume to a VM I sometimes get an error on the compute
> nodes stating:
>
>
> d9f36a440abdf2fdd] [instance: b9f128a9-d3e3-42a1-9511-74868b625b1b] *Failed
> to attach volume 676ef5b1-129b-4d42-b38d-df2005a3d634 at /dev/vdc*
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] Traceback (most recent call last):
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2878, in
> _attach_volume
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] mountpoint)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 981,
> in attach_volume
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] disk_dev)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] self.gen.next()
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 968,
> in attach_volume
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]
> virt_dom.attachDeviceFlags(conf.to_xml(), flags)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] result =
> proxy_call(self._autowrap, f, *args, **kwargs)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in
> proxy_call
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] rv = execute(f,*args,**kwargs)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] rv = meth(*args,**kwargs)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b]   File
> "/usr/lib/python2.7/dist-packages/libvirt.py", line 422, in
> attachDeviceFlags
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] if ret == -1: raise libvirtError
> ('virDomainAttachDeviceFlags() failed', dom=self)
> 2013-09-30 15:17:46.562 31953 TRACE nova.compute.manager [instance:
> b9f128a9-d3e3-42a1-9511-74868b625b1b] *libvirtError: internal error
> unable to execute QEMU command 'device_add': Duplicate ID 'virtio-disk2'
> for device*
>
> Then if I change the mount point from /dev/vdc to some random mount point,
> it attaches the disk, but the different-sizes problem remains.
>
> Restarting the open-iscsi services and reattaching the volume to the VM
> solves the issue.
>
> Attaching my cinder.conf
>
>
> Has anyone encountered this problem? Any help would be really appreciated.
>
>
>
>
>
> --
> With Regards,
> Ritesh Nanda
>
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Hi Ritesh,

Not positive on the trace you've shown, but it seems like you in fact
already have a device attached at that mount point (you could try just
using "auto" and see how things go that way).

As far as your observation about the volume being a "different" size, what
are you using to determine this?  I'd first ask what "cinder list" or
"cinder show" reports for the volume in comparison to the size you created.
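
For example, a quick way to compare the two views (the IDs and the device
name here are placeholders):

    # what Cinder reports for the volume (size is in GB)
    cinder show <volume-id> | grep size

    # what the guest actually sees on the attached device
    fdisk -l /dev/vdb        # or: lsblk /dev/vdb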

Re: [Openstack] (no subject)

2013-09-25 Thread John Griffith
On Wed, Sep 25, 2013 at 1:26 PM, Aaron Rosen  wrote:

> Hi Albert,
>
> Are you sure this is happening? I'm positive that Neutron's DHCP agent
> will only hand out ip addresses for ports that it knows about and I'm sure
> nova-network does the same as well.
>
> Aaron
>
> On Wed, Sep 25, 2013 at 12:17 PM, Albert Vonpupp wrote:
>
>> Hello,
>>
>> I'm trying DevStack at the university lab. When I tried to deploy a VM I
>> noticed that all the machines from the lab started renewing their leases
>> with the DevStack DHCP server. That is inconvenient for me since I'm not
>> the only user of this lab and it could cause trouble. I thought that
>> perhaps changing the default port on the controller as well as on the
>> compute nodes would work, but I don't know how to do that.
>>
>> How can I change the dnsmasq DHCP port on DevStack? (controller and
>> compute nodes)
>>
>> Thanks a lot!
>>
>> Albert.
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Hi Albert,

I inadvertently did this once in our lab.  If my memory is correct, you're
probably using nova-network and have configured FlatDHCP.  The problem is
that your public network is bridged into your internal/private network
(check your bridge setting), so DHCP requests from the rest of the lab end
up being answered by the dnsmasq serving your OpenStack private network.

It might be helpful if you include your localrc file and some info about
your system's NICs and how they're configured.
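
For comparison, a minimal FlatDHCP localrc along these lines keeps the
instance network off the shared lab NIC (the interface names and ranges are
just assumptions, adjust them to your machines):

    HOST_IP=192.168.1.10        # address on the shared lab network (eth0)
    FLAT_INTERFACE=eth1         # dedicated NIC that nova bridges into br100
    FIXED_RANGE=10.0.0.0/24     # private range served by OpenStack's dnsmasq
    FLOATING_RANGE=192.168.1.224/27

If FLAT_INTERFACE points at the NIC on the shared lab network instead,
dnsmasq ends up answering the lab machines' DHCP requests, which is the
behavior you're describing.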

John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Fwd: Making the Hong Kong Summit as inclusive as possible

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 2:26 PM, David Mortman wrote:

> On Thu, Sep 12, 2013 at 2:54 PM, Anne Gentle  wrote
>
>> I think more than one ombudsperson may be needed to assure coverage and
>> avoid anyone being reluctant to report, but I'm not familiar with how
>> ombuds work in general. I do know that I wouldn't ever report something to
>> a nameless phone number or generic email address, so I'd like there to be
>> some identity in place, whether it's "any staff member of the Foundation"
>> or "staff at the conference identified by a badge or shirt" or some sort of
>> spread-out reporting mechanism. Offering training, even just an hour for
>> role play, for those identified people would be helpful as well.
>>
>
>  Agreed. There should be at least a small group, especially for events, so
> that one person doesn't get hammered but also so there's a committee to
> make judgement calls etc. Love the idea of some training too.
>

Not to be politically incorrect, or to seem insensitive, but is it possible
that maybe we're getting a little ahead of ourselves here?  A committee,
training, etc.?

Personally I think it's unfortunate that people can't just be respectful,
courteous and professional (aka excellent), and that the actions of some
dorks at a previous organization's events have led to a lot of time and
effort regarding what we should do here.

Anyway, not criticizing, and I by no means want to appear as though I'm not
respectful of the ideas here; just noticing that this seems like it could
become a bit of a beast in and of itself.

>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Root disks with Ceph

2013-09-11 Thread John Griffith
On Wed, Sep 11, 2013 at 11:15 AM, Greg Chavez  wrote:

>
> So if I use RBD as my storage backend for Cinder, what happens to the root
> disks of VMs that I terminate?
>
> Do they still exist as RBD volumes in Ceph or are they
> deleted/marked-as-free?
>
> If the answer is that they get deleted, or at the very least OpenStack no
> longer keeps track of them, then there isn't much difference between the
> root and ephemeral disks in the flavors I am using, other than their being
> distinct disk devices.  Or so it seems to me.
>
> --
> \*..+.-
> --Greg Chavez
> +//..;};
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Depends on what you're doing here.  Attaching Cinder volumes as
secondary/persistent storage has nothing to do with your root/ephemeral
disks.  If you're doing boot from volume, then the Cinder volume is your
root disk.

In other words, in the first case, when you terminate the instance the root
disks do go away (they're ephemeral).  Is that what you're getting at?
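
For example, booting from a Cinder volume rather than an ephemeral root
disk looks roughly like this (the IDs and names are placeholders):

    # make a bootable volume from a Glance image, then use it as the root disk
    cinder create --image-id <image-uuid> --display-name boot-vol 10
    nova boot --flavor m1.small --block-device-mapping vda=<volume-uuid>:::0 my-instance

The trailing 0 tells Nova not to delete the volume on terminate, which is
exactly the difference from the ephemeral case.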
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Fwd: Making the Hong Kong Summit as inclusive as possible

2013-09-11 Thread John Griffith
On Wed, Sep 11, 2013 at 7:53 AM, Thierry Carrez wrote:

> David Mortman wrote:
> > Given the recent and ongoing issues with sexism (not to mention racism,
> > homophobia and general bigotry) at tech conferences, I recently engaged
> > with several folks on twitter about what was being done to make sure
> > that the Hong Kong Summit was as inclusive as possible regardless of an
> > attendee's age, sex, orientation, race or anything else. I think a good
> > place to start would be an official  anti-harassment policy and a
> > process for people to report issues to the event organizers who can then
> > deal with the issue appropriately. I am happy to help with the drafting
> > of both the policy and the process. What do folks think?
>
> FWIW the summit already has a minimal policy and reporting guidelines
> (see at the bottom of
> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/):
>
> """
> Reminder: Be Excellent
>
> Be excellent to everyone. If you think someone is not being excellent to
> you at the OpenStack Summit call  or email .
> """
>
+1

>
> --
> Thierry Carrez (ttx)
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] Links to summit sessions

2013-09-09 Thread John Griffith
On Mon, Sep 9, 2013 at 3:31 AM, Thierry Carrez wrote:

> John Griffith wrote:
> > Not sure if I'm missing something but... I've had a number of people ask
> > me "what happened to the links to the summit sessions".  It seems that
> > if you try and go to a link for a session it just redirects back to the
> > main page.  Even from there if you search and find the session topic and
> > click, it seems to again redirect back to main page.
>
> Do you mean design summit sessions, or summit (conference) presentations
> ? Direct links to design summit sessions seem to work alright:
>
> http://summit.openstack.org/cfp/details/1
>
> --
> Thierry Carrez (ttx)
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

I was referring to conference presentations.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [OpenStack] Links to summit sessions

2013-09-06 Thread John Griffith
Not sure if I'm missing something but... I've had a number of people ask me
"what happened to the links to the summit sessions".  It seems that if you
try and go to a link for a session it just redirects back to the main page.
Even from there, if you search and find the session topic and click, it
seems to again redirect back to the main page.

Any idea what's up?

Thanks,
John
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] question on volume resize

2013-09-05 Thread John Griffith
On Thu, Sep 5, 2013 at 4:58 PM, John-Paul Robinson  wrote:

>  Are there any recommendations on where in OS that patch/extension would
> live.
>
> I've managed to get familiar with the OS external interfaces but don't
> have much understanding on how things are structured internally.
>
> Given the motivation for moving Cinder out of Nova due to the complexity
> of the intertwined Nova compute and volume functionality, would this make
> such a patch a bit more complicated?
>
> Were you suggesting that the patch/extension would simply focus on bumping
> the db volume size or is there a more formal extension interface that would
> keep all parts happy (e.g. the account API)?
>
> Thanks,
>
> ~jpr
>
>
> On 09/05/2013 05:36 PM, John Griffith wrote:
>
> In the scenario we are developing, we'd like to instantiate and track
>> volumes via nova-volume so we can benefit from the OS accounting API's
>> to track storage usage over time.  We'd also like to be able to grow
>> these volumes in place to add space for a user's block device and grow
>> the file system in that container to consume the newly added space.
>>
>> Is there a way to update the recognized size of the volume in OS?
>>
>
> Currently the only solution for what you describe would be to update the
> size in the db directly.  That being said you would be much better off
> writing your own custom patch/extension to do this for you inside of
> OpenStack to avoid some of the impedance mismatches that you're likely to
> encounter here.
>
>
So what I was suggesting is that you'd add something custom of your own in
api/contrib, and you would use that to call out to Ceph to do your resize,
as well as call into the database and modify the volume size when
successful.  There are details that you'd need to think about here, such as
quotas.

As far as still being on nova-volume, that does make things slightly more
difficult IMO; however, for what you're talking about you don't need to
worry about things like EC2 ID mappings, so I think it would be OK.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] question on volume resize

2013-09-05 Thread John Griffith
On Thu, Sep 5, 2013 at 3:36 PM, John-Paul Robinson  wrote:

> Hi,
>
> I'm trying to get some clarity on volume resize capabilities in OS.
>
> As I understand it, this will be a Cinder feature in Havana.
>
> https://blueprints.launchpad.net/cinder/+spec/volume-resize
>
> Also, it appears that the nova-volume API in Essex through the Cinder
> API in Grizzly are based on the same v1.0 of the block storage API.
> That is, the features are consistent across these OS releases, but they
> don't have support for volume resize.
>
> In our Essex OpenStack deploy we are using Ceph RBD as our block
> backend.  Ceph supports volume resizing:
>
>
> http://ceph.com/docs/master/rbd/rados-rbd-cmds/#resizing-a-block-device-image
>
> I've successfully created a 1GB volume using nova volume commands and
> then resized (grown) that volume via the Ceph backend to 2GB.
> Unfortunately, Nova-volume doesn't recognize the added 1GB of space and
> still reports the volume as 1GB in size.
>
> In the scenario we are developing, we'd like to instantiate and track
> volumes via nova-volume so we can benefit from the OS accounting API's
> to track storage usage over time.  We'd also like to be able to grow
> these volumes in place to add space for a user's block device and grow
> the file system in that container to consume the newly added space.
>
> Is there a way to update the recognized size of the volume in OS?
>

Currently the only solution for what you describe would be to update the
size in the db directly.  That being said, you would be much better off
writing your own custom patch/extension to do this for you inside of
OpenStack, to avoid some of the impedance mismatches that you're likely to
encounter here.
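
For reference, the direct-db hack amounts to something like the following
(the pool/image naming, database details and id format are assumptions for
illustration, and none of this touches quotas):

    # grow the image on the Ceph side (older rbd takes --size in megabytes)
    rbd resize --size 2048 volumes/volume-<uuid>

    # then reflect the new size (in GB) in the volumes table the API reads
    mysql nova -e "UPDATE volumes SET size=2 WHERE id='<volume-id>';"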

>
> I would expect this to be a hack/work-around to tide us over until we
> can move to Havana some time early 2014.
>
> Any thoughts on how kludgey this would be and the likelihood of running
> afoul of maintaining a sane OS environment?
>
> Thanks,
>
> ~jpr
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder question

2013-09-04 Thread John Griffith
On Wed, Sep 4, 2013 at 1:21 PM, Mark Brown wrote:

> It definitely would be something useful. Don't know why it is not already
> implemented.
>
>   --
>  *From:* Swapnil Kulkarni 
> *To:* Razique Mahroua 
> *Cc:* Openstack Openstack 
> *Sent:* Wednesday, September 4, 2013 2:44 AM
> *Subject:* Re: [Openstack] Cinder question
>
> Guys,
>
>
> You might want to have a look at
> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
>
> Its already proposed
>
> Best Regards,
> Swapnil Kulkarni
> swapnilkulkarni2...@gmail.com
> +91-87960 10622(c)
> http://in.linkedin.com/in/coolsvap
>
>
>
> On Wed, Sep 4, 2013 at 2:24 PM, Razique Mahroua wrote:
>
> It doesn't necessarily use iSCSI,
> but yes, I definitely agree man!
>
> Le 4 sept. 2013 à 10:37, Martinx - ジェームズ  a
> écrit :
>
> This is a must!! It would be great to attach one volume to multiple
> instances!!
> Sounds pretty basic; since it uses iSCSI, there is no reason not to allow
> this...
>
> Cheers!
> Thiago
>
>
> On 3 September 2013 20:25, Mark Brown  wrote:
>
> Hello,
> I had a Cinder question, and maybe it's pretty basic :-)
>
> Isn't there a way to attach the same Cinder volume to two different VMs,
> whether same physical server or different? I don't mean across different
> data centers, but any domain (zone, or whatever) within the same data
> center?
>
> Thanks.
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to: openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Very useful, and we had hoped to have it for H, but there are a number of
folks that have concerns, so it's getting further analysis for I.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Raw Devices Through Cinder?

2013-08-20 Thread John Griffith
On Tue, Aug 20, 2013 at 7:40 PM, Steven Carter (stevenca) <
steve...@cisco.com> wrote:

>  Is there a way to present a raw device to a VM through Cinder?  It seems
> like I can do it with KVM specifically, but I would like to stick within
> the OpenStack framework if possible.
>
> ** **
>
> Thanks,
>
> ** **
>
> Steven.
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Nope, not yet.  We have a raw/disk driver, but it's not really useful at
this point.  There are some other things in progress to get what you're
looking for, but it looks like it will be another release before it's all
implemented.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Patching installed python packages

2013-08-07 Thread John Griffith
On Wed, Aug 7, 2013 at 8:07 AM, Nicolae Paladi  wrote:

> Hi,
>
> I'm setting up a vlan network and hit a bug that
> has been confirmed earlier and a fix has been committed less than 24 hrs
> ago
> (https://bugs.launchpad.net/python-novaclient/+bug/1167779).
>
> Since the fix will likely take a while to be packaged (I am using CentOS
> 6.4),
> I've installed python-novaclient from the source
> (https://github.com/openstack/python-novaclient) in order to get the fix.
>
> However, after installing the new package, running the command to create
> a vlan still returns the same error (the logs claim 'DuplicateVlan:
> Detected existing vlan with id 100\n']) so I assume the old package is
> still being used (I also assume the fix actually
> addresses the problem)
>
> Am I missing something, or do I need to somehow "load" the updated version?
>
>
> cheers,
> /nicolae
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
You'll need to run setup.py install in the novaclient source directory to
get things updated.
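
For example, from the directory where you cloned the client:

    cd python-novaclient
    sudo python setup.py install      # or: sudo pip install .
    python -c "import novaclient; print novaclient.__file__"   # check which copy is picked up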
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [OpenStack] python-cinderclient 1.0.5 release

2013-08-06 Thread John Griffith
Below is a list of the major items that were added in this release. There
are a number of other lower priority fixes as well as refactoring and
imports from oslo that aren't listed here.

1.0.5
-
* Add CLI man page
* Add Availability Zone list command
* Add support for scheduler-hints
* Add support to extend volumes
* Add support to reset state on volumes and snapshots
* Add snapshot support for quota class

.. _1190853: http://bugs.launchpad.net/python-cinderclient/+bug/1190853
.. _1190731: http://bugs.launchpad.net/python-cinderclient/+bug/1190731
.. _1169455: http://bugs.launchpad.net/python-cinderclient/+bug/1169455
.. _1188452: http://bugs.launchpad.net/python-cinderclient/+bug/1188452
.. _1180393: http://bugs.launchpad.net/python-cinderclient/+bug/1180393
.. _1182678: http://bugs.launchpad.net/python-cinderclient/+bug/1182678
.. _1179008: http://bugs.launchpad.net/python-cinderclient/+bug/1179008
.. _1180059: http://bugs.launchpad.net/python-cinderclient/+bug/1180059
.. _1170565: http://bugs.launchpad.net/python-cinderclient/+bug/1170565
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Nova] Deprecation of nova-network deferred to Icehouse

2013-07-31 Thread John Griffith
On Tue, Jul 30, 2013 at 8:01 PM, Dean Troyer  wrote:

> On Tue, Jul 30, 2013 at 5:58 PM, Dan Sneddon  wrote:
> > I agree that Neutron should be the default in Devstack, for exactly the
> > reason that Russell gives, but also because nova-network is now
> officially
> > deprecated.
>
> I am personally not ready to throw that switch yet, at least until the
> Neutron gate job passes regularly and is voting again. (Yes, I know
> there is a reason, that's not the point.)
>

+1000

>
> > What would be the appropriate Neutron model for Devstack by default?
> > FlatManager?
>
> Yes.  That is what DevStack does for nova-network today.  IIRC the
> idea was to get Neutron duplicating that setup, including security
> groups and floating IPs, then we woud make the change.  The first step
> would be to make sure that the default Neutron config in DevStack is
> the correct setup so that only changing ENABLED_SERVICES is necessary
> to have the same basic configuration.
>
> The next thing I want to have is the code for unstack.sh to call to
> back out as much of the net configuration as possible.  At least as
> far as nova-network does today with br100 holding the IP for the
> public interface, but even getting that out would be cool.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack