[openstack-dev] [tempest][radosgw][devstack-ceph-plugin] tempest failure with radosgw as swift endpoint

2016-01-05 Thread Deepak Shetty
Hi stackers,
   I'm facing an issue running tempest with Ceph radosgw as the swift
endpoint.

   I enabled radosgw as part of a test-only patch which configures it as the
   swift endpoint; the CI fails because tempest's verify_tempest_config
   fails.

   Console logs are at [1], tempest logs at [2].
   The actual failure is due to the list_extensions call being made at [4].

   Long story short: verify_tempest_config checks the API extensions of
   different services, including swift, which means calling the swift
   endpoint's /info API; that call fails with an "Unexpected Content Type"
   error. IIUC radosgw still doesn't support the /info API, which calls
   for skipping the list_extensions check for swift when radosgw is being
   used.

   I wanted to check with the experts here on whether it's a good idea to
   skip calling list_extensions() (see [4]) by setting
   CONF.object_storage_feature_enabled.discoverability to False as part of
   the radosgw setup in the devstack-ceph-plugin code, similar to what's
   already being done in [3]. See the sketch below.
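
A minimal sketch (not the actual tempest code; verify_swift_extensions is
an invented helper name, and the real check lives around [4]) of how the
flag could be honored:

    # Sketch only: honor the object-storage discoverability flag
    # before probing the swift /info endpoint.
    from tempest import config

    CONF = config.CONF

    def verify_swift_extensions(object_storage_client):
        if not CONF.object_storage_feature_enabled.discoverability:
            # radosgw doesn't implement /info yet, so skip the probe
            # instead of failing on "Unexpected Content Type".
            return None
        return object_storage_client.list_extensions()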

   Thoughts?

   thanx,
   deepak


   [1]:
http://logs.openstack.org/04/260904/6/check/gate-tempest-dsvm-full-devstack-plugin-ceph/12cfb27/logs/devstacklog.txt.gz
   [2]:
http://logs.openstack.org/04/260904/6/check/gate-tempest-dsvm-full-devstack-plugin-ceph/12cfb27/logs/tempest.txt.gz#_2016-01-04_14_24_32_094
   [3]:
https://github.com/openstack/tempest/blob/master/tempest/api/object_storage/test_account_services.py#L123-L126
   [4]:
https://github.com/openstack/tempest/blob/master/tempest/cmd/verify_tempest_config.py#L177


Re: [openstack-dev] [tempest][radosgw][devstack-ceph-plugin] tempest failure with radosgw as swift endpoint

2016-01-05 Thread Deepak Shetty
On Tue, Jan 5, 2016 at 5:02 PM, Deepak Shetty <dpkshe...@gmail.com> wrote:

> Hi stackers,
>    I'm facing an issue running tempest with Ceph radosgw as the swift
> endpoint.
>
>    I enabled radosgw as part of a test-only patch which configures it as
>    the swift endpoint; the CI fails because tempest's
>    verify_tempest_config fails.
>
>    Console logs are at [1], tempest logs at [2].
>    The actual failure is due to the list_extensions call being made at [4].
>
>    Long story short: verify_tempest_config checks the API extensions of
>    different services, including swift, which means calling the swift
>    endpoint's /info API; that call fails with an "Unexpected Content
>    Type" error. IIUC radosgw still doesn't support the /info API, which
>    calls for skipping the list_extensions check for swift when radosgw
>    is being used.
>
>    I wanted to check with the experts here on whether it's a good idea to
>    skip calling list_extensions() (see [4]) by setting
>    CONF.object_storage_feature_enabled.discoverability to False as part
>    of the radosgw setup in the devstack-ceph-plugin code, similar to
>    what's already being done in [3].
>
>

Forgot to mention that verify_tempest_config.py currently doesn't honor
any object-storage conf options.
I was wondering if this is an oversight or whether there is some reason
for it?

thanx,
deepak



>    Thoughts?
>
>thanx,
>deepak
>
>
>[1]:
> http://logs.openstack.org/04/260904/6/check/gate-tempest-dsvm-full-devstack-plugin-ceph/12cfb27/logs/devstacklog.txt.gz
>[2]:
> http://logs.openstack.org/04/260904/6/check/gate-tempest-dsvm-full-devstack-plugin-ceph/12cfb27/logs/tempest.txt.gz#_2016-01-04_14_24_32_094
>[3]:
> https://github.com/openstack/tempest/blob/master/tempest/api/object_storage/test_account_services.py#L123-L126
>[4]:
> https://github.com/openstack/tempest/blob/master/tempest/cmd/verify_tempest_config.py#L177
>


Re: [openstack-dev] [Cinder] Mitaka Design Summit Recap

2015-11-25 Thread Deepak Shetty
Thanks Sean for the nice recap; it helps folks who couldn't attend the summit.

On Thu, Nov 5, 2015 at 2:53 AM, Sean McGinnis  wrote:

> Cinder Mitaka Design Summit Summary
>
> Will the Real Block Storage Service Please Stand Up
> ===================================================
> Should Cinder be usable outside of a full OpenStack environment?
> There are several solutions out there for providing a Software
> Defined Storage service with plugins for various backends. Most
> of the functionality used for these is already done by Cinder.
> So the question is, should Cinder try to be that ubiquitous SDS
> interface?
>
> The concern is that Cinder should either try to address this
> broader use case or be left behind, especially since there is
> already a lot of overlap in functionality and end users are
> already asking about it.
>
> Some concern about doing this is whether it will be a distraction
> from our core purpose - to be a solid and useful service for
> providing block storage in an OpenStack cloud.
>
> On the other hand, some folks have played around with doing this
> already and found there really are only a few key issues with
> being able to use Cinder without something like Keystone. Based on
> this, it was decided we will spend some time looking into doing
> this, but at a lower priority than our core work.
>
> Availability Zones in Cinder
> ============================
> Recently it was highlighted that there are issues between AZs
> used in Cinder versus AZs used in Nova. When Cinder was originally
> branched out of the Nova code base we picked up the concept of
> Availability Zones, but the idea was never fully implemented and
> isn't exactly what some expect it to be in its current state.
>
> Speaking with some of the operators in the room, there were two
> main desires for AZ interaction with Nova - either the AZ specified
> in Nova needs to match one to one with the AZ in Cinder, or there
> is no connection between the two and the Nova AZ doesn't matter on
> the Cinder side.
>
> There is currently a workaround in Cinder: if the config file
> value for allow_availability_zone_fallback is set to True and a
> request for a new volume comes in with a Nova AZ not present, the
> default Cinder AZ will be used instead.
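
(To make the fallback concrete, this is roughly the behavior described,
as a sketch that approximates rather than quotes the Cinder
create-volume flow:)

    # Sketch of allow_availability_zone_fallback; not the literal
    # Cinder code.
    from oslo_config import cfg
    from cinder import exception

    CONF = cfg.CONF

    def pick_zone(requested_az, known_azs):
        if requested_az in known_azs:
            return requested_az
        if CONF.allow_availability_zone_fallback:
            # e.g. a Nova-only AZ name falls back to Cinder's default
            return CONF.default_availability_zone
        raise exception.InvalidAvailabilityZone(az=requested_az)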
>
> A few options for improving AZ support were suggested. At least for
> those present, the current "dirty fix" workaround is sufficient. If
> further input makes it clear that this is not enough, we can look
> in to one of the proposed alternatives to address those needs.
>
> API Microversions
> =================
> Some projects, particularly Nova and Manila, have already started
> work on supporting API microversions. We plan on leveraging their
> work to add support in Cinder. Scott D'Angelo has done some work
> porting that framework from Manila into a spec and proof of concept
> in Cinder.
>
> API microversions would allow us to make breaking API changes while
> still providing backward compatibility to clients that expect the
> existing behavior. It may also allow us to remove functionality
> more easily.
>
> We still want to be restrictive about modifying the API. Even
> though this will make changes slightly easier to do, each change
> still has an ongoing maintenance cost, and a slightly higher one
> at that, which we will want to limit as much as possible.
>
> A great explanation of the microversions concept was written up by
> Sean Dague here:
>
> https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
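
(For a concrete feel of the opt-in mechanism: a client requests newer
behavior per request via a version header. The snippet below uses Nova's
header, since a Cinder equivalent didn't exist yet; the endpoint and
token are placeholders.)

    # Illustration only: microversion opt-in against Nova's API.
    import requests

    resp = requests.get(
        "http://controller:8774/v2.1/servers",   # placeholder endpoint
        headers={
            "X-Auth-Token": "ADMIN_TOKEN",       # placeholder token
            # Without this header the server answers with base 2.1
            # semantics; with it, the client opts in to 2.12 behavior.
            "X-OpenStack-Nova-API-Version": "2.12",
        },
    )
    print(resp.status_code)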
>
> Experimental APIs
> =================
> Building on the work with microversions, we would use that to expose
> experimental APIs and make it explicit that they are experimental
> only and could be removed at any time, without the normal window
> provided with deprecating other features.
>
> Although there were certainly some very valid concerns raised about
> doing this, and whether it would be useful or not, general consensus
> was that it would be good to support it.
>
> After further discussion, it was pointed out that there really isn't
> anything in the works that needs this right now, so it may be delayed.
> The issue there is that if we wait to do it, it won't be ready
> to go when we actually need it for something.
>
> Cinder Nova Interaction
> =======================
> Great joint session with some of the Nova folks. Talked through some
> of the issues we've had with the interaction between Nova and Cinder
> and areas where we need to improve it.
>
> Some of the decisions were:
> - Working on support for multiattach. Will delay encryption support
>   until non-encrypted issues get worked out.
> - Rootwrap issues with the use of os-brick. Priv-sep sounds like it
>   is the better answer. Will need to wait for that to mature before
>   we can switch away from rootwrap though.
> - API handling changes. A lot of cases where an API call is made and
>   it is assumed to succeed. Will use event notifications to report
>   results back 

Re: [openstack-dev] [Manila] CephFS native driver

2015-10-06 Thread Deepak Shetty
On Thu, Oct 1, 2015 at 3:32 PM, John Spray <jsp...@redhat.com> wrote:

> On Thu, Oct 1, 2015 at 8:36 AM, Deepak Shetty <dpkshe...@gmail.com> wrote:
> >
> >
> > On Thu, Sep 24, 2015 at 7:19 PM, John Spray <jsp...@redhat.com> wrote:
> >>
> >> Hi all,
> >>
> >> I've recently started work on a CephFS driver for Manila.  The (early)
> >> code is here:
> >> https://github.com/openstack/manila/compare/master...jcsp:ceph
> >>
> >
> > 1) README says driver_handles_share_servers=True, but code says
> >
> > + if share_server is not None:
> > + log.warning("You specified a share server, but this driver doesn't use
> > that")
>
> The warning is just for my benefit, so that I could see which bits of
> the API were pushing a share server in.  This driver doesn't care
> about the concept of a share server, so I'm really just ignoring it
> for the moment.
>
> > 2) Would it be good to make the data_isolated option controllable via
> > a manila.conf config param?
>
> That's the intention.
>
> > 3) CephFSVolumeClient sounds more like CephFSShareClient; any reason
> > you chose the word 'Volume' instead of 'Share'? Volumes remind me of
> > RBD volumes, hence the question.
>
> The terminology here is not standard across the industry, so there's
> not really any right term.  For example, in docker, a
> container-exposed filesystem is a "volume".  I generally use volume to
> refer to a piece of storage that we're carving out, and share to refer
> to the act of making that visible to someone else.  If I had been
> writing Manila originally I wouldn't have called shares shares :-)
>
> The naming in CephFSVolumeClient will not be the same as Manila's,
> because it is not intended to be Manila-only code, though that's the
> first use for it.
>
> > 4) IIUC there is no need to do access_allow/deny in the cephfs use
> > case? It looks like you create_share, put the cephx keyring on the
> > client, and it can access the share as long as the client has network
> > access to the ceph cluster. The doc says you don't use the IP-address
> > based access method, so which method is used when you exercise the
> > access_allow flow?
>
> Currently, as you say, a share is accessible to anyone who knows the
> auth key (created at the time the share is created).
>
> For adding the allow/deny path, I'd simply create and remove new ceph
> keys for each entity being allowed/denied.
>

OK, but how does that map to the existing Manila access types (IP, User,
Cert)?
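
To frame the question, here is the kind of thing I'd imagine, as a rough
sketch only; the 'cephx' access type and the authorize/deauthorize helper
names below are my assumptions, not the actual code:

    # Rough sketch: per-entity cephx keys behind Manila's allow/deny
    # flow. The helper names and 'cephx' access type are assumptions.
    from manila import exception

    class CephFSNativeDriver(object):  # simplified stand-in

        def allow_access(self, context, share, access, share_server=None):
            if access['access_type'] != 'cephx':
                raise exception.InvalidShareAccess(
                    reason="only cephx access is supported")
            # Mint a ceph auth key scoped to this share's directory
            # for the entity named in access['access_to'].
            self.volume_client.authorize(share['id'], access['access_to'])

        def deny_access(self, context, share, access, share_server=None):
            # Deleting that key revokes the entity's access.
            self.volume_client.deauthorize(share['id'], access['access_to'])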

thanx,
deepak


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread Deepak Shetty
On Sat, Sep 26, 2015 at 4:32 PM, John Spray  wrote:

> On Sat, Sep 26, 2015 at 1:27 AM, Ben Swartzlander 
> wrote:
> > On 09/24/2015 09:49 AM, John Spray wrote:
> >>
> >> Hi all,
> >>
> >> I've recently started work on a CephFS driver for Manila.  The (early)
> >> code is here:
> >> https://github.com/openstack/manila/compare/master...jcsp:ceph
> >
> >
> > Awesome! This is something that's been talked about for quite some
> > time and I'm pleased to see progress on making it a reality.
> >
> >> It requires a special branch of ceph which is here:
> >> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
> >>
> >> This isn't done yet (hence this email rather than a gerrit review),
> >> but I wanted to give everyone a heads up that this work is going on,
> >> and a brief status update.
> >>
> >> This is the 'native' driver in the sense that clients use the CephFS
> >> client to access the share, rather than re-exporting it over NFS.  The
> >> idea is that this driver will be useful for anyone who has such
> >> clients, as well as acting as the basis for a later NFS-enabled
> >> driver.
> >
> >
> > This makes sense, but have you given thought to the optimal way to
> provide
> > NFS semantics for those who prefer that? Obviously you can pair the
> existing
> > Manila Generic driver with Cinder running on ceph, but I wonder how
> > that would compare to some kind of ganesha bridge that translates
> > between NFS and cephfs. Is that something you've looked into?
>
> The Ceph FSAL in ganesha already exists, some work is going on at the
> moment to get it more regularly built and tested.  There's some
> separate design work to be done to decide exactly how that part of
> things is going to work, including discussing with all the right
> people, but I didn't want to let that hold up getting the initial
> native driver out there.
>
> >> The export location returned by the driver gives the client the Ceph
> >> mon IP addresses, the share path, and an authentication token.  This
> >> authentication token is what permits the clients access (Ceph does not
> >> do access control based on IP addresses).
> >>
> >> It's just capable of the minimal functionality of creating and
> >> deleting shares so far, but I will shortly be looking into hooking up
> >> snapshots/consistency groups, albeit for read-only snapshots only
> >> (cephfs does not have writeable snapshots).  Currently deletion is
> >> just a move into a 'trash' directory, the idea is to add something
> >> later that cleans this up in the background: the downside to the
> >> "shares are just directories" approach is that clearing them up has a
> >> "rm -rf" cost!
> >
> >
> > All snapshots are read-only... The question is whether you can take a
> > snapshot and clone it into something that's writable. We're looking at
> > allowing for different kinds of snapshot semantics in Manila for Mitaka.
> > Even if there's no create-share-from-snapshot functionality a readable
> > snapshot is still useful and something we'd like to enable.
>
> Enabling creation of snapshots is pretty trivial, the slightly more
> interesting part will be accessing them.  CephFS doesn't provide a
> rollback mechanism, so
>
> > The deletion issue sounds like a common one, although if you don't have
> the
> > thing that cleans them up in the background yet I hope someone is
> working on
> > that.
>
> Yeah, that would be me -- the most important sentence in my original
> email was probably "this isn't done yet" :-)
>
> >> A note on the implementation: cephfs recently got the ability (not yet
> >> in master) to restrict client metadata access based on path, so this
> >> driver is simply creating shares by creating directories within a
> >> cluster-wide filesystem, and issuing credentials to clients that
> >> restrict them to their own directory.  They then mount that subpath,
> >> so that from the client's point of view it's like having their own
> >> filesystem.  We also have a quota mechanism that I'll hook in later to
> >> enforce the share size.
> >
> >
> > So quotas aren't enforced yet? That seems like a serious issue for any
> > operator except those that want to support "infinite" size shares. I hope
> > that gets fixed soon as well.
>
> Same again, just not done yet.  Well, actually since I wrote the
> original email I added quota support to my branch, so never mind!
>
> >> Currently the security here requires clients (i.e. the ceph-fuse code
> >> on client hosts, not the userspace applications) to be trusted, as
> >> quotas are enforced on the client side.  The OSD access control
> >> operates on a per-pool basis, and creating a separate pool for each
> >> share is inefficient.  In the future it is expected that CephFS will
> >> be extended to support file layouts that use RADOS namespaces, which
> >> are cheap, such that we can issue a new namespace to each share and
> >> enforce the separation between shares on the OSD side.
> >
> 

Re: [openstack-dev] [cinder] Proposing Gorka Eguileor for core

2015-08-14 Thread Deepak Shetty
On Fri, Aug 14, 2015 at 12:43 AM, Mike Perez thin...@gmail.com wrote:

 It gives me great pleasure to nominate Gorka Eguileor for Cinder core.

 Gorka's contributions to Cinder core have been much appreciated:


 https://review.openstack.org/#/q/owner:%22Gorka+Eguileor%22+project:openstack/cinder,p,0035b6410002dd11

 60/90 day review stats:

 http://russellbryant.net/openstack-stats/cinder-reviewers-60.txt
 http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt

 Cinder core, please reply with a +1 for approval. This will be left
 open until August 19th. Assuming there are no objections, this will go
 forward after voting is closed.


I am not a Cinder core, but +1 for Gorka to become one.
His reviews have helped me in the past, and I particularly appreciate the
fine-grained reviews he does, which help reduce patch iterations for the
author.

Good luck Gorka!



 --
 Mike Perez



[openstack-dev] [nova] [FFE] Feature Freeze Exception Request for 'Volume Snapshot Improvements'

2015-08-04 Thread Deepak Shetty
Hello Nova cores,

I would like to request a feature freeze exception for the volume
snapshot improvements blueprint [1].

This BP was proposed in Juno but could not be implemented then, so it was
pushed to Liberty; the patches [2] have been under review for the last 2+
months and are in good shape AFAICT.

This improves support for online volume snapshots (Cinder-Nova flows) and
will be useful for any Cinder backend that uses hypervisor-assisted
snapshots, e.g. GlusterFS, NFS, Scality et al. I didn't agree with it
being 'low' priority, but didn't debate it as I was already in the
implementation phase and didn't know that 'low' would matter! Lesson
learnt for me :)

These patches are a prerequisite for the Cinder side of the volume
snapshot improvements (BP at [3], patches at [4]), which is 'Medium'
priority; the Cinder-side patches are in reasonable shape too. I have been
addressing the review comments there in a timely manner and hope to get
the Cinder side merged in Liberty as well, which won't be possible if the
Nova patches aren't in :(( since Nova is a dependency for the Cinder
patches.

Worst case, even if the Cinder patches are not merged, the Nova patches
are written with Cinder backward compatibility in mind, so Nova should
work seamlessly with old Cinder.

Thus I request the Nova cores to grant this FFE. Thanks in advance.

thanx,
deepak

[1]:
https://blueprints.launchpad.net/nova/+spec/volume-snapshot-improvements
[2]:
https://review.openstack.org/#/q/topic:bp/volume-snapshot-improvements,n,z

[3]:
https://blueprints.launchpad.net/cinder/+spec/assisted-snapshot-improvements
[4]:
https://review.openstack.org/#/q/topic:bp/assisted-snapshot-improvements,n,z

[5]: https://review.openstack.org/#/c/165393/


Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-07-10 Thread Deepak Shetty
Thanks Mike for the heads up

I fixed it for GlusterFS CI [1].
Post the fix, the GlusterFS CI jobs are running fine [2]; see Jul 10,
10:43 AM onwards.

thanx,
deepak

[1]: https://review.openstack.org/#/c/200399/2
[2]:
https://jenkins07.openstack.org/job/check-tempest-dsvm-full-glusterfs-nv/

On Fri, Jul 10, 2015 at 12:52 AM, Mike Perez thin...@gmail.com wrote:

 On 16:47 Jun 30, Mike Perez wrote:
  On 12:24 Jun 26, Matt Riedemann wrote:
  snip
   So the question is, is everyone OK with this and ready to make that
 change?
 
  Thanks for all your work on this Matt.
 
  I'm fine with this. I say bite the bullet and we'll see the CI's surface
 that
  aren't skipping or failing this test.
 
  I will communicate with CI maintainers on the CI list about failures as
 I've
  been doing, and reference this thread and the meeting discussion.

 This landed.

 If your Cinder CI is now failing, set
 ATTACH_ENCRYPTED_VOLUME_AVAILABLE=False
 [1] as explained earlier in this thread.

 [1] - https://review.openstack.org/#/c/199709/

 --
 Mike Perez



Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-07-02 Thread Deepak Shetty
On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com wrote:

 On 12:24 Jun 26, Matt Riedemann wrote:
 snip
  So the question is, is everyone OK with this and ready to make that
 change?

 Thanks for all your work on this Matt.


+100, awesome debug, followup and fixing work by Matt



 I'm fine with this. I say bite the bullet and we'll see the CI's surface
 that
 aren't skipping or failing this test.


Just curious: shouldn't this mean we need some way for Cinder to query
Nova for "do you have this capability?" and only then set the
'encryption' key in conn_info?

Better communication between Nova and Cinder?

thanx,
deepak


Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-07-02 Thread Deepak Shetty
Oh, just to be clear, I don't mean to discard what you fixed.
My intention was to discuss what would be a better way to fix this in the
future through a feature/blueprint, given there is consensus.

thanx,
deepak

On Thu, Jul 2, 2015 at 8:57 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Thu, Jul 2, 2015 at 7:05 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
  wrote:



 On 7/2/2015 4:12 AM, Deepak Shetty wrote:



 On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com
 mailto:thin...@gmail.com wrote:

 On 12:24 Jun 26, Matt Riedemann wrote:
 snip
  So the question is, is everyone OK with this and ready to make
 that change?

 Thanks for all your work on this Matt.


 +100, awesome debug, followup and fixing work by Matt


 I'm fine with this. I say bite the bullet and we'll see the CI's
 surface that
 aren't skipping or failing this test.


 Just curious: shouldn't this mean we need some way for Cinder to query
 Nova for "do you have this capability?" and only then set the
 'encryption' key in conn_info?

 Better communication between Nova and Cinder?

 thanx,
 deepak






 I thought the same about some capability flag in cinder where the volume
 driver would tell the volume manager if it supported encryption and then
 the cinder volume manager would use that to tell if a request to create a
 volume from an encryption type was possible.  But the real problem in our
 case is the encryption provider support, which is currently the luks and
 cryptsetup modules in nova.  However, the encryption provider is completely
 pluggable [1] from what I can tell, the libvirt driver in nova just creates
 the provider class (assuming it can import it) and calls the methods
 defined in the VolumeEncryptor abstract base class [2].

 So whether or not encryption is supported during attach is really up to
 the encryption provider implementation, the volume driver connector code
 (now in os-brick), and what the cinder volume driver is providing back to
 nova during os-initialize_connection.


 Yes, I understand the issue; hence I said why not have Cinder check with
 Nova whether it supports encryption for volume-attach. Nova returns
 yes/no, and based on that Cinder accepts/rejects the 'create new
 encrypted volume' request.


 I guess my point is I don't have a simple solution besides actually
 failing when we know we can't encrypt the volume during attach - which is
 at least better than the false positive we have today.


 Definitely, what you have proposed/fixed is appreciated. But it's a
 workaround; the better way seems to be improving the Nova-Cinder
 communication, no?

 thanx,
 deepak


 [1]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/__init__.py#n47
 [2]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/base.py#n28

 --

 Thanks,

 Matt Riedemann





Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-07-02 Thread Deepak Shetty
On Thu, Jul 2, 2015 at 7:05 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 7/2/2015 4:12 AM, Deepak Shetty wrote:



 On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com
 mailto:thin...@gmail.com wrote:

 On 12:24 Jun 26, Matt Riedemann wrote:
 snip
  So the question is, is everyone OK with this and ready to make that
 change?

 Thanks for all your work on this Matt.


 +100, awesome debug, followup and fixing work by Matt


 I'm fine with this. I say bite the bullet and we'll see the CI's
 surface that
 aren't skipping or failing this test.


 Just curious: shouldn't this mean we need some way for Cinder to query
 Nova for "do you have this capability?" and only then set the
 'encryption' key in conn_info?

 Better communication between Nova and Cinder?

 thanx,
 deepak





 I thought the same about some capability flag in cinder where the volume
 driver would tell the volume manager if it supported encryption and then
 the cinder volume manager would use that to tell if a request to create a
 volume from an encryption type was possible.  But the real problem in our
 case is the encryption provider support, which is currently the luks and
 cryptsetup modules in nova.  However, the encryption provider is completely
 pluggable [1] from what I can tell, the libvirt driver in nova just creates
 the provider class (assuming it can import it) and calls the methods
 defined in the VolumeEncryptor abstract base class [2].

 So whether or not encryption is supported during attach is really up to
 the encryption provider implementation, the volume driver connector code
 (now in os-brick), and what the cinder volume driver is providing back to
 nova during os-initialize_connection.


Yes, I understand the issue; hence I said why not have Cinder check with
Nova whether it supports encryption for volume-attach. Nova returns
yes/no, and based on that Cinder accepts/rejects the 'create new
encrypted volume' request.
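
Something like the following, purely as a sketch; no such Nova capability
API exists today, and the names below are invented for discussion:

    # Illustrative only: the proposed Cinder -> Nova capability check.
    # get_capabilities() and the flag name are hypothetical.
    from cinder import exception

    def create_encrypted_volume(self, context, volume_type):
        caps = self.compute_api.get_capabilities(context)  # hypothetical
        if not caps.get('encrypted_volume_attach', False):
            # Fail fast at create time instead of reporting a false
            # positive at attach time.
            raise exception.InvalidInput(
                reason="hypervisor cannot attach encrypted volumes")
        # ... proceed with normal volume creation ...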


 I guess my point is I don't have a simple solution besides actually
 failing when we know we can't encrypt the volume during attach - which is
 at least better than the false positive we have today.


Definitely, what you have proposed/fixed is appreciated. But it's a
workaround; the better way seems to be improving the Nova-Cinder
communication, no?

thanx,
deepak


 [1]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/__init__.py#n47
 [2]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/base.py#n28

 --

 Thanks,

 Matt Riedemann





Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-06-18 Thread Deepak Shetty
On Thu, Jun 18, 2015 at 8:43 AM, Ben Swartzlander b...@swartzlander.org
wrote:

  On 06/03/2015 12:43 PM, Deepak Shetty wrote:



 On Tue, Jun 2, 2015 at 4:42 PM, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:

 Deepak,

  transfer-* is not suitable in this particular case. Usage of share
 networks causes creation of resources, when transfer does not. Also in
 this topic we have creation of new share based on some snapshot.


  In the original mail it was said:
 
 From user point of view, he may want to copy share and use its copy in
 different network and it is valid case.
 
  So create share from snapshot, then transfer that share to a different
 tenant , doesn't that work ?



 Transferring shares between tenants is not something we've discussed
 before. The cinder project allows transferring of volumes but it's easier
 for them to implement that feature because they don't have the concepts of
 share networks and share servers to tie the share to a tenant.

 We implemented public shares which allows a similar use case where 1
 tenant can allow others to read/write to a share and should address many of
 the same use cases that share transferring would address.

 -Ben




  Valeriy

 On Sun, May 31, 2015 at 4:23 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:


  On Thu, May 28, 2015 at 4:54 PM, Duncan Thomas duncan.tho...@gmail.com
  wrote:

 On 28 May 2015 at 13:03, Deepak Shetty dpkshe...@gmail.com wrote:

  Isn't this similar to what cinder transfer-* cmds are for ? Ability
 to transfer cinder volume across tenants
  So Manila should be implementing the transfer-* cmds, after which
 admin/user can create a clone
  then initiate a transfer to a diff tenant  ?


  Cinder doesn't seem to have any concept analogous to a share network
 from what I can see; the cinder transfer commands are for moving a volume
 between tenants, which is a different thing, I think.


I agree that 'share transfer' (like Cinder's volume transfer) would be
more complex, but it shouldn't be impossible.
IIUC it's equivalent to creating a new share for the destination tenant
(which is the same as creating a share for that tenant), then copying the
data (or letting the backend optimize if possible), then deleting the
source share. See the sketch below.
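
As a rough sketch of what I mean (hypothetical only; no such Manila API
exists, and the helper names are invented for illustration):

    # Hypothetical share-transfer flow built from existing primitives.
    def transfer_share(manila, share, dest_share_network):
        # 1. Create a new share for the destination tenant/network.
        new = manila.shares.create(
            share_proto=share.share_proto,
            size=share.size,
            share_network=dest_share_network)
        # 2. Copy the data, or let the backend optimize (e.g. clone).
        copy_share_data(share, new)  # placeholder helper
        # 3. Delete the source share once the copy is verified.
        manila.shares.delete(share)
        return new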


>    Yes, cinder doesn't have any equivalent of a share network. But my
>  comment was from the functionality perspective. In cinder the transfer-*
>  commands are used to transfer ownership of volumes across tenants. IIUC
>  the ability in Manila to create a share from a snapshot and have that
>  share in a different share network is equivalent to creating a share
>  from a snapshot for a different tenant, no? Share networks are typically
>  1-1 with the tenant network AFAIK; correct me if I am wrong.


Didn't know this; just wondering, does this mean the public share can be
accessed by multiple tenants? Doesn't that break tenant isolation?

thanx,
deepak





 --
 Duncan Thomas




Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-06-18 Thread Deepak Shetty
On Thu, Jun 18, 2015 at 6:16 PM, Ben Swartzlander b...@swartzlander.org
wrote:

  On 06/18/2015 07:08 AM, Deepak Shetty wrote:



 On Thu, Jun 18, 2015 at 8:43 AM, Ben Swartzlander b...@swartzlander.org
 wrote:

  On 06/03/2015 12:43 PM, Deepak Shetty wrote:



 On Tue, Jun 2, 2015 at 4:42 PM, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:

 Deepak,

  transfer-* is not suitable in this particular case. Usage of share
 networks causes creation of resources, when transfer does not. Also in
 this topic we have creation of new share based on some snapshot.


  In the original mail it was said:
 
 From user point of view, he may want to copy share and use its copy in
 different network and it is valid case.
 
  So create share from snapshot, then transfer that share to a different
 tenant , doesn't that work ?



  Transferring shares between tenants is not something we've discussed
 before. The cinder project allows transferring of volumes but it's easier
 for them to implement that feature because they don't have the concepts of
 share networks and share servers to tie the share to a tenant.

 We implemented public shares which allows a similar use case where 1
 tenant can allow others to read/write to a share and should address many of
 the same use cases that share transferring would address.

 -Ben




  Valeriy

 On Sun, May 31, 2015 at 4:23 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:


  On Thu, May 28, 2015 at 4:54 PM, Duncan Thomas 
 duncan.tho...@gmail.com wrote:

 On 28 May 2015 at 13:03, Deepak Shetty dpkshe...@gmail.com wrote:

  Isn't this similar to what cinder transfer-* cmds are for ? Ability
 to transfer cinder volume across tenants
  So Manila should be implementing the transfer-* cmds, after which
 admin/user can create a clone
  then initiate a transfer to a diff tenant  ?


  Cinder doesn't seem to have any concept analogous to a share network
 from what I can see; the cinder transfer commands are for moving a volume
 between tenants, which is a different thing, I think.


   I agree that 'share transfer' (like Cinder's volume transfer) would be
  more complex, but it shouldn't be impossible.
   IIUC it's equivalent to creating a new share for the destination tenant
  (which is the same as creating a share for that tenant), then copying
  the data (or letting the backend optimize if possible), then deleting
  the source share.


 Yes, we can implement a share transfer, but I'm arguing that we don't need
 to. Such a feature would be a lot of effort to implement for (arguably)
 little gain.


Well, I would argue that 'transfer' as an API does have value, and the
implementation being simple/complex shouldn't matter as long as there is
'value' in the API, so I disagree a bit here.




    Yes, cinder doesn't have any equivalent of a share network. But my
  comment was from the functionality perspective. In cinder the transfer-*
  commands are used to transfer ownership of volumes across tenants. IIUC
  the ability in Manila to create a share from a snapshot and have that
  share in a different share network is equivalent to creating a share
  from a snapshot for a different tenant, no? Share networks are typically
  1-1 with the tenant network AFAIK; correct me if I am wrong.


   Didn't know this; just wondering, does this mean the public share can
  be accessed by multiple tenants? Doesn't that break tenant isolation?



 Yes this was the point of public shares. It doesn't break tenant isolation
 any more than a feature like share transfer would. It's optional and


Not really; a public share (IIUC) allows more than one tenant to access
the share at the same time, while a transfer ensures exclusivity to one
tenant at a time, so they are different.

thanx,
deepak

you have to turn it on explicitly on a per-share basis. Also, the most
 common application for public shares would be in a read-only mode, so the
 possibility for bad things to happen is very small.




  thanx,
  deepak





 --
 Duncan Thomas



Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-09 Thread Deepak Shetty
Thang,
  Thanks! It's still not clear to me why my method doesn't work. FWIW, I
did try db.snapshot_get = mock.Mock(...) before, and when that didn't work
I tried remotefs.db.snapshot_get on the assumption that maybe there was
some scope issue and I should use the complete path from the module; but
that didn't work either, and I guess it's still not clear why a mock in
one test module affects another.

Given that mock.patch is working, as evident from your patch, I will
continue to use it.

Thanks for helping out.
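
For anyone hitting the same thing, the difference boils down to this
(a minimal illustration; FAKE_SNAPSHOT is just a placeholder):

    import mock
    from cinder import db

    FAKE_SNAPSHOT = {'id': 'fake-snap', 'volume_id': 'fake-vol'}

    # Direct assignment rebinds the attribute on the shared cinder.db
    # module object; every test that runs afterwards (including
    # test_volume's VolumeTestCase) sees the mock, and nothing ever
    # restores the real function:
    db.snapshot_get = mock.Mock(return_value=FAKE_SNAPSHOT)

    # mock.patch records the original and restores it when the context
    # exits, so the patch stays scoped to the one test:
    with mock.patch('cinder.db.snapshot_get') as snapshot_get:
        snapshot_get.return_value = FAKE_SNAPSHOT
        # ... exercise the code under test ...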

On Thu, Jun 4, 2015 at 9:21 PM, Thang Pham thang.g.p...@gmail.com wrote:

 The problem is in your test case.  There are no such methods as
 remotefs.db.snapshot_get or remotefs.db.snapshot_admin_metadata_get.
 You need to use with mock.patch('cinder.db.snapshot_get') as snapshot_get,
 mock.patch('cinder.db.snapshot_admin_metadata_get')
 as snapshot_admin_metadata_get.  These incorrect calls somehow created a
 side effect in the other test cases.  I updated your patch with what is
 correct, so you should follow it for your other tests.  Your test case
 needs a lot more work; I just edited it enough to pass the unit tests.

 Thang

 On Thu, Jun 4, 2015 at 4:36 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 I was able to narrow it down to a scenario where it fails only when I do:

 ./run_tests.sh -N cinder.tests.unit.test_remotefs
 cinder.tests.unit.test_volume.VolumeTestCase

 and fails with:
 {0}
 cinder.tests.unit.test_volume.VolumeTestCase.test_can_delete_errored_snapshot
 [0.507361s] ... FAILED

 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File cinder/tests/unit/test_volume.py, line 3029, in
 test_can_delete_errored_snapshot
 snapshot_obj = objects.Snapshot.get_by_id(self.context,
 snapshot_id)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 169,
 in wrapper
 result = fn(cls, context, *args, **kwargs)
   File cinder/objects/snapshot.py, line 130, in get_by_id
 expected_attrs=['metadata'])
   File cinder/objects/snapshot.py, line 112, in _from_db_object
 snapshot[name] = value
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 691,
 in __setitem__
 setattr(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 70,
 in setter
 field_value = field.coerce(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 183, in coerce
 return self._null(obj, attr)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 161, in _null
 raise ValueError(_(Field `%s' cannot be None) % attr)
 ValueError: Field `volume_id' cannot be None

 Both the testsuites run fine when i run them individually, as in the
 below is success:

 ./run_tests.sh -N cinder.tests.unit.test_remotefs - no errors

 ./run_tests.sh -N cinder.tests.unit.test_volume.VolumeTestCase - no errors

 So i modified my patch @ https://review.openstack.org/#/c/172808/ (Patch
 set 6) and
 removed all testcase i added in test_remotefs.py except one, so that we
 have lesser code to debug/deal with!

 See
 https://review.openstack.org/#/c/172808/6/cinder/tests/unit/test_remotefs.py

 Now when i disable test_create_snapshot_online_success then running both
 the suites work,
 but when i enable test_create_snapshot_online_success then it fails as
 above.

  I am unable to figure out the connection between
  test_create_snapshot_online_success in test_remotefs.py and the
  VolumeTestCase.test_can_delete_errored_snapshot failure in
  test_volume.py.

 Can someone help here ?

 thanx,
 deepak



 On Thu, Jun 4, 2015 at 1:37 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi Thang,
   Since you are working on Snapshot Objects, any idea on why the
 testcase when run all by itself, works, but when run as part of the overall
 suite, fails ?
 This seems to be related to the Snapshot Objects, hence Ccing you.

 On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi All,
   I am hitting a strange issue when running Cinder unit tests against
 my patch @
 https://review.openstack.org/#/c/172808/5

 I have spent a day and haven't been successful at figuring out how/why my
 patch is causing it!

 All tests failing are part of VolumeTestCase suite and from the error
 (see below) it seems
 the Snapshot Object is complaining that 'volume_id' field is null
 (while it shouldn't be)

 An example error from the associated Jenkins run can be seen @

 http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140

 I am seeing a total of 21 such errors.

 It's strange because, when I try to reproduce it locally in my devstack
 env, I see the below:

 1) When I just run: ./run_tests.sh -N
 cinder.tests.unit.test_volume.VolumeTestCase
 all testcases pass

 2) When I run 1 individual

Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-09 Thread Deepak Shetty
Thangp,
  I have a related question wrt your comment in
https://review.openstack.org/#/c/172808/6/cinder/api/contrib/snapshot_actions.py

Do I need to add support for the snapshot_admin_metadata table in
objects/snapshot.py, or do I need to create a new object since it's a new
table? I am not clear on this; can you let me know, please?

Alternatively, I am fine if you want to collaborate on my patch and add
snapshot_admin_metadata object support too.

Let me know, please.

thanx,
deepak

On Tue, Jun 9, 2015 at 12:12 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Thang,
   Thanks! It's still not clear to me why my method doesn't work. FWIW, I
 did try db.snapshot_get = mock.Mock(...) before, and when that didn't
 work I tried remotefs.db.snapshot_get on the assumption that maybe there
 was some scope issue and I should use the complete path from the module;
 but that didn't work either, and I guess it's still not clear why a mock
 in one test module affects another.

 Given that mock.patch is working, as evident from your patch, I will
 continue to use it.

 Thanks for helping out.

 On Thu, Jun 4, 2015 at 9:21 PM, Thang Pham thang.g.p...@gmail.com wrote:

 The problem is in your test case.  There are no such methods as
 remotefs.db.snapshot_get or remotefs.db.snapshot_admin_metadata_get.
 You need to use with mock.patch('cinder.db.snapshot_get') as snapshot_get,
 mock.patch('cinder.db.snapshot_admin_metadata_get')
 as snapshot_admin_metadata_get.  These incorrect calls somehow created a
 side effect in the other test cases.  I updated your patch with what is
 correct, so you should follow it for your other tests.  Your test case
 needs a lot more work; I just edited it enough to pass the unit tests.

 Thang

 On Thu, Jun 4, 2015 at 4:36 AM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 I was able to narrow it down to a scenario where it fails only when I do:

 ./run_tests.sh -N cinder.tests.unit.test_remotefs
 cinder.tests.unit.test_volume.VolumeTestCase

 and fails with:
 {0}
 cinder.tests.unit.test_volume.VolumeTestCase.test_can_delete_errored_snapshot
 [0.507361s] ... FAILED

 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File cinder/tests/unit/test_volume.py, line 3029, in
 test_can_delete_errored_snapshot
 snapshot_obj = objects.Snapshot.get_by_id(self.context,
 snapshot_id)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 169,
 in wrapper
 result = fn(cls, context, *args, **kwargs)
   File cinder/objects/snapshot.py, line 130, in get_by_id
 expected_attrs=['metadata'])
   File cinder/objects/snapshot.py, line 112, in _from_db_object
 snapshot[name] = value
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 691,
 in __setitem__
 setattr(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 70,
 in setter
 field_value = field.coerce(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 183, in coerce
 return self._null(obj, attr)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 161, in _null
 raise ValueError(_(Field `%s' cannot be None) % attr)
 ValueError: Field `volume_id' cannot be None

 Both the testsuites run fine when i run them individually, as in the
 below is success:

 ./run_tests.sh -N cinder.tests.unit.test_remotefs - no errors

 ./run_tests.sh -N cinder.tests.unit.test_volume.VolumeTestCase - no
 errors

 So i modified my patch @ https://review.openstack.org/#/c/172808/
 (Patch set 6) and
 removed all testcase i added in test_remotefs.py except one, so that we
 have lesser code to debug/deal with!

 See
 https://review.openstack.org/#/c/172808/6/cinder/tests/unit/test_remotefs.py

 Now when i disable test_create_snapshot_online_success then running
 both the suites work,
 but when i enable test_create_snapshot_online_success then it fails as
 above.

 I am unable to figure out the connection between
 test_create_snapshot_online_success in test_remotefs.py and the
 VolumeTestCase.test_can_delete_errored_snapshot failure in
 test_volume.py.

 Can someone help here ?

 thanx,
 deepak



 On Thu, Jun 4, 2015 at 1:37 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi Thang,
   Since you are working on Snapshot Objects, any idea on why the
 testcase when run all by itself, works, but when run as part of the overall
 suite, fails ?
 This seems to be related to the Snapshot Objects, hence Ccing you.

 On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi All,
   I am hitting a strange issue when running Cinder unit tests against
 my patch @
 https://review.openstack.org/#/c/172808/5

 I have spent a day and haven't been successful at figuring out how/why my
 patch is causing it!

 All tests failing are part of VolumeTestCase suite and from the error
 (see

Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-04 Thread Deepak Shetty
I was able to narrow it down to a scenario where it fails only when I do:

./run_tests.sh -N cinder.tests.unit.test_remotefs
cinder.tests.unit.test_volume.VolumeTestCase

and fails with:
{0}
cinder.tests.unit.test_volume.VolumeTestCase.test_can_delete_errored_snapshot
[0.507361s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File cinder/tests/unit/test_volume.py, line 3029, in
test_can_delete_errored_snapshot
snapshot_obj = objects.Snapshot.get_by_id(self.context, snapshot_id)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 169,
in wrapper
result = fn(cls, context, *args, **kwargs)
  File cinder/objects/snapshot.py, line 130, in get_by_id
expected_attrs=['metadata'])
  File cinder/objects/snapshot.py, line 112, in _from_db_object
snapshot[name] = value
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 691,
in __setitem__
setattr(self, name, value)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 70,
in setter
field_value = field.coerce(self, name, value)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
183, in coerce
return self._null(obj, attr)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
161, in _null
raise ValueError(_(Field `%s' cannot be None) % attr)
ValueError: Field `volume_id' cannot be None
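
(Reduced to its essence, the failure in the traceback is a non-nullable
oslo.versionedobjects field rejecting None at assignment time:)

    from oslo_versionedobjects import fields

    f = fields.UUIDField(nullable=False)
    f.coerce(None, 'volume_id', None)
    # -> ValueError: Field `volume_id' cannot be None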

Both test suites run fine when I run them individually; i.e., both of the
below succeed:

./run_tests.sh -N cinder.tests.unit.test_remotefs - no errors

./run_tests.sh -N cinder.tests.unit.test_volume.VolumeTestCase - no errors

So I modified my patch at https://review.openstack.org/#/c/172808/ (patch
set 6) and removed all the testcases I added in test_remotefs.py except
one, so that we have less code to debug/deal with!

See
https://review.openstack.org/#/c/172808/6/cinder/tests/unit/test_remotefs.py

Now, when I disable test_create_snapshot_online_success, running both
suites works; but when I enable it, the run fails as above.

I am unable to figure out the connection between
test_create_snapshot_online_success in test_remotefs.py and the
VolumeTestCase.test_can_delete_errored_snapshot failure in test_volume.py.

Can someone help here?

thanx,
deepak



On Thu, Jun 4, 2015 at 1:37 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi Thang,
   Since you are working on Snapshot Objects, any idea on why the testcase
 when run all by itself, works, but when run as part of the overall suite,
 fails ?
 This seems to be related to the Snapshot Objects, hence Ccing you.

 On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi All,
   I am hitting a strange issue when running Cinder unit tests against my
 patch @
 https://review.openstack.org/#/c/172808/5

 I have spent a day and haven't been successful at figuring out how/why my
 patch is causing it!

 All tests failing are part of VolumeTestCase suite and from the error
 (see below) it seems
 the Snapshot Object is complaining that 'volume_id' field is null (while
 it shouldn't be)

 An example error from the associated Jenkins run can be seen @

 http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140

 I am seeing a total of 21 such errors.

 It's strange because, when I try to reproduce it locally in my devstack
 env, I see the below:

 1) When I just run: ./run_tests.sh -N
 cinder.tests.unit.test_volume.VolumeTestCase
 all testcases pass

 2) When I run 1 individual testcase: ./run_tests.sh -N
 cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
 that passes too

 3) When I run: ./run_tests.sh -N
 I see 21 tests failing and all are failing with error similar to the below

 {0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
 [0.537366s] ... FAILED

 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File cinder/tests/unit/test_volume.py, line 3219, in
 test_delete_busy_snapshot
 snapshot_obj = objects.Snapshot.get_by_id(self.context,
 snapshot_id)
   File /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py,
 line 163, in wrapper
 result = fn(cls, context, *args, **kwargs)
   File cinder/objects/snapshot.py, line 130, in get_by_id
 expected_attrs=['metadata'])
   File cinder/objects/snapshot.py, line 112, in _from_db_object
 snapshot[name] = value
   File /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py,
 line 675, in __setitem__
 setattr(self, name, value)
   File /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py,
 line 70, in setter
 field_value = field.coerce(self, name, value)
   File 
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py,
 line 182, in coerce

Re: [openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-04 Thread Deepak Shetty
Hi Thang,
  Since you are working on Snapshot objects, any idea why the testcase
works when run all by itself, but fails when run as part of the overall
suite?
This seems to be related to the Snapshot objects, hence CCing you.

On Wed, Jun 3, 2015 at 9:54 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi All,
   I am hitting a strange issue when running Cinder unit tests against my
 patch @
 https://review.openstack.org/#/c/172808/5

 I have spent a day and haven't been successful at figuring out how/why my
 patch is causing it!

 All tests failing are part of VolumeTestCase suite and from the error (see
 below) it seems
 the Snapshot Object is complaining that 'volume_id' field is null (while
 it shouldn't be)

 An example error from the associated Jenkins run can be seen @

 http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140

 I am seeing a total of 21 such errors.

 It's strange because, when I try to reproduce it locally in my devstack
 env, I see the below:

 1) When I just run: ./run_tests.sh -N
 cinder.tests.unit.test_volume.VolumeTestCase
 all testcases pass

 2) When I run 1 individual testcase: ./run_tests.sh -N
 cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
 that passes too

 3) When I run: ./run_tests.sh -N
 I see 21 tests failing and all are failing with error similar to the below

 {0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
 [0.537366s] ... FAILED

 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File cinder/tests/unit/test_volume.py, line 3219, in
 test_delete_busy_snapshot
 snapshot_obj = objects.Snapshot.get_by_id(self.context,
 snapshot_id)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 163,
 in wrapper
 result = fn(cls, context, *args, **kwargs)
   File cinder/objects/snapshot.py, line 130, in get_by_id
 expected_attrs=['metadata'])
   File cinder/objects/snapshot.py, line 112, in _from_db_object
 snapshot[name] = value
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 675,
 in __setitem__
 setattr(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 70,
 in setter
 field_value = field.coerce(self, name, value)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 182, in coerce
 return self._null(obj, attr)
   File
 /usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
 160, in _null
 raise ValueError(_(Field `%s' cannot be None) % attr)
 ValueError: Field `volume_id' cannot be None

 Any suggestions / thoughts on why this could be happening ?

 thanx,
 deepak



Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-06-03 Thread Deepak Shetty
On Tue, Jun 2, 2015 at 4:42 PM, Valeriy Ponomaryov vponomar...@mirantis.com
 wrote:

 Deepak,

  transfer-* is not suitable in this particular case. Usage of share
  networks causes creation of resources, while a transfer does not. Also, in
  this topic we have creation of a new share based on some snapshot.


In the original mail it was said:

From the user's point of view, he may want to copy a share and use its copy
in a different network, and that is a valid case.

So create a share from the snapshot, then transfer that share to a different
tenant - doesn't that work ?


 Valeriy

 On Sun, May 31, 2015 at 4:23 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:


 On Thu, May 28, 2015 at 4:54 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 On 28 May 2015 at 13:03, Deepak Shetty dpkshe...@gmail.com wrote:

 Isn't this similar to what cinder transfer-* cmds are for ? Ability to
 transfer cinder volume across tenants
 So Manila should be implementing the transfer-* cmds, after which
 admin/user can create a clone
 then initiate a transfer to a diff tenant  ?


 Cinder doesn't seem to have any concept analogous to a share network
 from what I can see; the cinder transfer commands are for moving a volume
 between tenants, which is a different thing, I think.


 Yes, cinder doesn't have any equivalent of a share network. But my comment
 was from the functionality perspective. In cinder, the transfer-* commands
 are used to transfer ownership of volumes across tenants. IIUC the ability
 in Manila to create a share from a snapshot and have that share in a
 different share network is equivalent to creating a share from a snapshot
 for a different tenant, no ? Share networks are typically 1-1 with tenant
 networks AFAIK, correct me if I am wrong.




 --
 Duncan Thomas


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-03 Thread Deepak Shetty
Hi All,
  I am hitting a strange issue when running Cinder unit tests against my
patch @
https://review.openstack.org/#/c/172808/5

I have spent a day and haven't been successful at figuring out how/why my
patch is causing it!

All failing tests are part of the VolumeTestCase suite, and from the error
(see below) it seems the Snapshot object is complaining that the 'volume_id'
field is null (while it shouldn't be)

An example error from the associated Jenkins run can be seen @
http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140

I am seeing a total of 21 such errors.

It's strange because, when I try to reproduce it locally in my devstack env,
I see the below:

1) When I just run: ./run_tests.sh -N
cinder.tests.unit.test_volume.VolumeTestCase
all testcases pass

2) When I run one individual testcase: ./run_tests.sh -N
cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
that passes too

3) When I run: ./run_tests.sh -N
I see 21 tests failing and all are failing with error similar to the below

{0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
[0.537366s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File cinder/tests/unit/test_volume.py, line 3219, in
test_delete_busy_snapshot
snapshot_obj = objects.Snapshot.get_by_id(self.context, snapshot_id)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 163,
in wrapper
result = fn(cls, context, *args, **kwargs)
  File cinder/objects/snapshot.py, line 130, in get_by_id
expected_attrs=['metadata'])
  File cinder/objects/snapshot.py, line 112, in _from_db_object
snapshot[name] = value
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 675,
in __setitem__
setattr(self, name, value)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py, line 70,
in setter
field_value = field.coerce(self, name, value)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
182, in coerce
return self._null(obj, attr)
  File
/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py, line
160, in _null
raise ValueError(_(Field `%s' cannot be None) % attr)
ValueError: Field `volume_id' cannot be None

Any suggestions / thoughts on why this could be happening ?
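
(FWIW, here is a minimal standalone sketch of why oslo.versionedobjects
raises this - the class and field below are made up for illustration, this
is not the actual cinder code:

    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class FakeSnapshot(base.VersionedObject):
        # mirrors cinder.objects.Snapshot declaring volume_id as a
        # non-nullable field
        fields = {'volume_id': fields.UUIDField(nullable=False)}

    snap = FakeSnapshot()
    # the generated property setter calls field.coerce(), which calls
    # _null() for a None value and raises because nullable=False
    snap.volume_id = None  # ValueError: Field `volume_id' cannot be None

so somewhere in the full run the Snapshot is being built from a DB row
whose volume_id is None.)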

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-05-31 Thread Deepak Shetty
On Thu, May 28, 2015 at 4:54 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 On 28 May 2015 at 13:03, Deepak Shetty dpkshe...@gmail.com wrote:

 Isn't this similar to what cinder transfer-* cmds are for ? Ability to
 transfer cinder volume across tenants
 So Manila should be implementing the transfer-* cmds, after which
 admin/user can create a clone
 then initiate a transfer to a diff tenant  ?


 Cinder doesn't seem to have any concept analogous to a share network from
 what I can see; the cinder transfer commands are for moving a volume
 between tenants, which is a different thing, I think.


Yes, cinder doesn't have any equivalent of a share network. But my comment
was from the functionality perspective. In cinder, the transfer-* commands
are used to transfer ownership of volumes across tenants. IIUC the ability
in Manila to create a share from a snapshot and have that share in a
different share network is equivalent to creating a share from a snapshot
for a different tenant, no ? Share networks are typically 1-1 with tenant
networks AFAIK, correct me if I am wrong.



 --
 Duncan Thomas

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-05-28 Thread Deepak Shetty
Isn't this similar to what the cinder transfer-* commands are for, i.e. the
ability to transfer a cinder volume across tenants ?
So Manila should be implementing the transfer-* commands, after which an
admin/user can create a clone and then initiate a transfer to a different
tenant ?

On Wed, May 27, 2015 at 6:08 PM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 Hi everyone,

 At the last IRC meeting
 http://eavesdrop.openstack.org/meetings/manila/2015/manila.2015-05-14-15.00.log.html
 the following question was raised:

 Should Manila allow us to create shares from snapshots with different
 share networks or not?

 What do users/admins expect in that case?

 For the moment Manila restricts creating a share from a snapshot with a
 share network different from the parent's.

 From the user's point of view, he may want to copy a share and use its
 copy in a different network, and that is a valid case.

 From the developer's point of view, he will be forced to rework the share
 server creation logic for the driver he maintains.

 Also, how many back-ends are able to support such a feature?

 Regards,
 Valeriy Ponomaryov

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Does Nova has a command line to restart an instance?

2015-05-21 Thread Deepak Shetty
nova start <existing instance name or uuid>

On Fri, May 22, 2015 at 9:46 AM, Lily.Sing lily.s...@gmail.com wrote:

 Hi experts,

 I set up an OpenStack multinode environment without Horizon installed.
 After a host reboot, the instances are in 'shutoff' status. I hope to
 re-use these instances, but there seems to be no command line for this. Any
 suggestions?

 Thanks.

 Best regards,
 Lily Xing(邢莉莉)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][libvirt] Understand why we lookup libvirt domains by instance name

2015-05-21 Thread Deepak Shetty
On Fri, May 22, 2015 at 5:19 AM, Mathieu Gagné mga...@iweb.com wrote:

 On 2015-05-21 4:18 PM, Chris Friesen wrote:
 
 
  I guess it's for the reason I mentioned above:
 
  To not break all OpenStack installations on Earth running with default
  config values.
 
 
  So that's not breaking *upgrades* of all OpenStack installations with
  default config values, which is a slightly different thing.
 

 Yes it will. Try to change instance_template_name and manage existing
 instances. You won't be able.

 The above scenario is the same as upgrading OpenStack Nova and having
 instance_template_name changes its default value. Unless someone patches
 Nova to use the instance UUID instead of its name to find it in libvirt.


Novice question:
   For each instance I believe we store both the name (instance-xx) and the
uuid in the DB ? If yes, then can't nova look up using the uuid given the
name, or vice versa, since the name <-> uuid mapping is available from the
DB ?
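
(i.e. something along these lines with the libvirt python bindings - just a
sketch, the values here are made up:

    import libvirt

    # hypothetical values, as stored in nova's instances table
    instance_name = 'instance-00000001'
    instance_uuid = '11111111-2222-3333-4444-555555555555'

    conn = libvirt.open('qemu:///system')
    # look up the domain by the (immutable) uuid instead of the
    # (template-derived, configurable) name
    dom = conn.lookupByUUIDString(instance_uuid)
    # ...rather than: dom = conn.lookupByName(instance_name)
)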



 --
 Mathieu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Do we have weekly meeting today ?

2015-05-20 Thread Deepak Shetty
Given that folks are at the summit, is this meeting happening today ?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] gate-nova-python27 failure

2015-05-04 Thread Deepak Shetty
Hi All,
  I am seeing the below failure for one of my patches (which I believe is
not related to the changes I made in my patch) - correct me if I am wrong :)

(from
http://logs.openstack.org/05/168805/8/check/gate-nova-python27/584359c/console.html#_2015-05-04_07_22_06_116
)

{3} nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read
[0.026257s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
  line 1201, in patched
    return func(*args, **keywargs)
  File nova/tests/unit/virt/vmwareapi/test_read_write_util.py,
  line 49, in test_ipv6_host_read
    verify=False)
  File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
  line 846, in assert_called_once_with
    return self.assert_called_with(*args, **kwargs)
  File /home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
  line 835, in assert_called_with
    raise AssertionError(msg)
AssertionError: Expected call: request('get',
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
verify=False, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, stream=True,
allow_redirects=True)
Actual call: request('get',
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
verify=False, params=None, headers={'User-Agent': 'OpenStack-ESX-Adapter'},
stream=True, allow_redirects=True)


I ran it locally on my setup with my patch present and the test passes. See
below:

[stack@devstack-f21 nova]$ git log --pretty=oneline -1
df8fb45121dacf22e232f72d096305cf285a0d12 libvirt: Use 'relative' flag for
online snapshot's commit/rebase operations

[stack@devstack-f21 nova]$  ./run_tests.sh -N
nova.tests.unit.virt.vmwareapi.test_read_write_util
Running ` python setup.py testr --testr-args='--subunit --concurrency 0
nova.tests.unit.virt.vmwareapi.test_read_write_util'`
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./
${OS_TEST_PATH:-./nova/tests} --list
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./
${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpueNq4A
nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase
test_ipv6_host_read   OK
0.06


Ran 1 test in 10.977s

OK
=

Addnl Details:

My patch @ https://review.openstack.org/#/c/168805/
Complete failure log @
http://logs.openstack.org/05/168805/8/check/gate-nova-python27/584359c/console.html#_2015-05-04_07_23_32_995

thanx,
deepak

Re: [openstack-dev] [SOLVED] [Nova] gate-nova-python27 failure

2015-05-04 Thread Deepak Shetty
https://bugs.launchpad.net/nova/+bug/1451389 and the associated bugfix @
https://review.openstack.org/179746 should solve this.

thanks garyk!

thanx,
deepak


On Mon, May 4, 2015 at 3:02 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi All,
   I am seeing the below failure for one of my patches (which I believe is
 not related to the changes I made in my patch) - correct me if I am wrong :)

 [console log snipped - same failure as in the original mail above]


 I ran it locally on my setup with my patch present and the test passes.
 See below:

 stack@devstack-f21 nova]$ git log --pretty=oneline -1
 df8fb45121dacf22e232f72d096305cf285a0d12 libvirt: Use 'relative' flag for
 online snapshot's commit/rebase operations

 [stack@devstack-f21 nova]$  ./run_tests.sh -N
 nova.tests.unit.virt.vmwareapi.test_read_write_util
 Running ` python setup.py testr --testr-args='--subunit --concurrency 0
 nova.tests.unit.virt.vmwareapi.test_read_write_util'`
 running testr
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests} --list
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./
 ${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpueNq4A
 nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase
 test_ipv6_host_read   OK

Re: [openstack-dev] [devstack] plugin: Giving ability for plugin repo to override tempest tests

2015-04-15 Thread Deepak Shetty
On Wed, Apr 15, 2015 at 3:23 PM, Andrey M. Pavlov andrey...@yandex.ru
wrote:


 And another solution - make tempest job as non-voting for your
 project(glusterfs) like we did it for ec2-api.

 Andrey.


We already do that for glusterfs & it's voting, but changes to the tempest
regex don't go through the plugin repo, so we can't validate them until the
project-config patch is merged.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] plugin: Giving ability for plugin repo to override tempest tests

2015-04-15 Thread Deepak Shetty
On Wed, Apr 15, 2015 at 3:59 PM, Sean Dague s...@dague.net wrote:

 On 04/15/2015 01:51 AM, Andrey M. Pavlov wrote:
  Hi,
 
  We have similar situation with our stackforge/ec2-api project.
  Patches in openstack repos can break our plugin.
  But I don't see any possibility until our project not in OpenStack repo.
  And I don't understand how plugins for tempest can help to us.
 
  My thoughts is only ec2-api gating jobs(this is implemented now in our
  gating):
  - make own tempest job with plugin definition and restrict tempest with
  regex


Andrey,
   Could you give some pointers (links to patches, maybe) on this ? I did
not completely understand what you mean here.


 
  but inserting regex in plugin can restrict to have several tempest jobs
  and not all gating job requires tempest
 
  Kind regards,
  Andrey.

 I think the right approach is as Andrey describes, build a custom test
 job appropriate for your system.

 I'm not a huge fan of having plugins disable tests themselves because
 it's not clear to the consumer that some part of your validation stack
 was just disabled. I think that should be a separate specific decision
 made by whoever is running those tests to turn a thing off.


I think the consumer is aware, because he is using enable_plugin, so
everything that the plugin sets or unsets is effective for this env.

I feel the plugin having the ability to set the tempest regex also helps the
plugin author ensure that the CI job runs well before the regex makes it
into the plugin, as the CI job is voting for the plugin repo (it is for the
glusterfs case).

So it's a good way to validate your changes before running against cinder
patches. Since we don't have it today, we need to wait for a project-config
patch to get the tempest regex changes in, and hence wait for the
project-config reviewers to bless it, which adds to the latency of getting
a fix in.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Query on adding new table to cinder DB

2015-04-14 Thread Deepak Shetty
Thanks eOne! :)

On Tue, Apr 14, 2015 at 1:03 AM, Ivan Kolodyazhny e...@e0ne.info wrote:

 Hi Deepak,

 Your steps look good to me, except add a step #3.1 - add unit tests for the new migration.

 Regards,
 Ivan Kolodyazhny

 On Mon, Apr 13, 2015 at 8:20 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi Stackers,
 As part of my WIP work for implementing
 https://blueprints.launchpad.net/nova/+spec/volume-snapshot-improvements
 I am required to add a new table to cinder (snapshot_admin_metadata) and I
 was looking for some inputs on what the steps are to add a new table to
 the existing DB

 From what I know:

 1) Create a new migration script at
 cinder/db/sqlalchemy/migrate_repo/versions

 2) Implement the upgrade and downgrade methods

 3) Create your model inside cinder/db/sqlalchemy/models.py

 4) Sync DB using cinder-manage db sync

 Are these steps correct ?

 thanx,
 deepak

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] plugin: Giving ability for plugin repo to override tempest tests

2015-04-14 Thread Deepak Shetty
Hi Stackers,
   devstack-plugin doesn't have the ability to override tempest settings.
Currently it only provides a way to override localrc settings, but over
time we have seen many CI jobs fail due to many different issues (lack of
support, a new tempest test enabled recently that fails for a particular
backend, etc), and there is no way for the plugin to override tempest tests
using its stackforge/{plugin} repo.

   Recently GlusterFS hit upon an issue due to the enablement of the
test_volume_boot_pattern test, for which we needed to skip this test (until
the issue gets fixed). We sent patch [1], but it was shot down (see [1]'s
comments), and hence we sent another patch [2] to disable it using a
tempest regex filter in project-config. Since devstack-plugin uses a common
template for all 3rd party CI jobs, it becomes a bit messy to skip tests
for a specific backend only.

   It would be good and ideal to have the plugin be able to do the same via
its stackforge/{plugin} repo; that way the CI job can be tested with the
tempest changes well before they run against cinder patches. Something like
a tempest/settings file in the plugin repo, and a change to
devstack-gate-wrap.sh ? to source the tempest regex from the plugin repo
and pass it on to the tempest cmdline ?

Thoughts ?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] plugin: Giving ability for plugin repo to override tempest tests

2015-04-14 Thread Deepak Shetty
Forgot to mention the patch links.. see below


On Wed, Apr 15, 2015 at 10:49 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi Stackers,
    devstack-plugin doesn't have the ability to override tempest settings.
 Currently it only provides a way to override localrc settings, but over
 time we have seen many CI jobs fail due to many different issues (lack of
 support, a new tempest test enabled recently that fails for a particular
 backend, etc), and there is no way for the plugin to override tempest
 tests using its stackforge/{plugin} repo.

    Recently GlusterFS hit upon an issue due to the enablement of the
 test_volume_boot_pattern test, for which we needed to skip this test
 (until the issue gets fixed). We sent patch [1], but it was shot down (see
 [1]'s comments), and hence we sent another patch [2] to disable it using a
 tempest regex filter in project-config. Since devstack-plugin uses a
 common template for all 3rd party CI jobs, it becomes a bit messy to skip
 tests for a specific backend only.

    It would be good and ideal to have the plugin be able to do the same
 via its stackforge/{plugin} repo; that way the CI job can be tested with
 the tempest changes well before they run against cinder patches. Something
 like a tempest/settings file in the plugin repo, and a change to
 devstack-gate-wrap.sh ? to source the tempest regex from the plugin repo
 and pass it on to the tempest cmdline ?

 Thoughts ?


[1]: https://review.openstack.org/172841
[2]: https://review.openstack.org/173408
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Query on adding new table to cinder DB

2015-04-13 Thread Deepak Shetty
Hi Stackers,
As part of my WIP work for implementing
https://blueprints.launchpad.net/nova/+spec/volume-snapshot-improvements I
am required to add a new table to cinder (snapshot_admin_metadata) and I
was looking for some inputs on what the steps are to add a new table to the
existing DB.

From what I know:

1) Create a new migration script at
cinder/db/sqlalchemy/migrate_repo/versions

2) Implement the upgrade and downgrade methods

3) Create your model inside cinder/db/sqlalchemy/models.py

4) Sync DB using cinder-manage db sync

Are these steps correct ?
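
For reference, this is roughly the kind of migration script I have in mind
for steps 1 and 2 - just a sketch modeled on the existing *_admin_metadata
tables, the columns are my guess and not final:

    from sqlalchemy import Column, DateTime, ForeignKey, Integer
    from sqlalchemy import MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine
        # load the parent table so the ForeignKey below can resolve
        Table('snapshots', meta, autoload=True)
        snapshot_admin_metadata = Table(
            'snapshot_admin_metadata', meta,
            Column('created_at', DateTime),
            Column('updated_at', DateTime),
            Column('deleted_at', DateTime),
            Column('deleted', Integer),
            Column('id', Integer, primary_key=True, nullable=False),
            Column('snapshot_id', String(36), ForeignKey('snapshots.id'),
                   nullable=False),
            Column('key', String(255)),
            Column('value', String(255)),
            mysql_engine='InnoDB')
        snapshot_admin_metadata.create()

    def downgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine
        Table('snapshot_admin_metadata', meta, autoload=True).drop()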

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Regarding deleting snapshot when instance is OFF

2015-04-08 Thread Deepak Shetty
Hi,
Cinder w/ GlusterFS backend is hitting the below error as part of the
test_volume_boot_pattern tempest testcase (at the end of the testcase, when
it deletes the snap):

  File /usr/local/lib/python2.7/dist-packages/libvirt.py, line 792, in blockRebase
    if ret == -1: raise libvirtError('virDomainBlockRebase() failed', dom=self)
libvirtError: Requested operation is not valid: domain is not running

More details in the LP bug [1]

Looking closely at the testcase, it waits for the instance to turn OFF,
after which the cleanup starts and tries to delete the snap; but since the
cinder volume is in attached state (in-use), cinder lets nova take control
of the snap delete operation, and nova fails as it cannot do a blockRebase
while the domain is offline.

Questions:

1) Is this a valid scenario being tested ? Some say yes; I am not sure,
since the test makes sure the instance is OFF before the snap is deleted,
and this doesn't work for fs-backed drivers, as they use hypervisor-assisted
snapshots, which need the domain to be active.


2) If this is a valid scenario, then it means libvirt.py in nova should be
modified NOT to raise an error, but to continue with the snap delete (as if
the volume was not attached) and take care of the domain XML (so that the
domain is still bootable post snap deletion) - is this the way to go ?
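
To make 2) concrete, the kind of offline fallback I am imagining (just a
sketch, not actual nova code):

    from oslo_concurrency import processutils

    def delete_snapshot_offline(active_overlay_path):
        # offline equivalent of the online blockCommit/blockRebase:
        # qemu-img commit merges the overlay into its backing file
        processutils.execute('qemu-img', 'commit', active_overlay_path)
        # ...then rewrite the domain XML so the disk points at the
        # backing file, keeping the domain bootable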


Appreciate suggestions/comments


thanx,

deepak

[1]: https://bugs.launchpad.net/cinder/+bug/1441050
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Tempest] Regarding deleting snapshot when instance is OFF

2015-04-08 Thread Deepak Shetty
+ [Cinder] and [Tempest] in the $subject since this affects them too

On Thu, Apr 9, 2015 at 4:22 AM, Eric Blake ebl...@redhat.com wrote:

 On 04/08/2015 12:01 PM, Deepak Shetty wrote:
 
  Questions:
 
  1) Is this a valid scenario being tested ? Some say yes, I am not sure,
  since the test makes sure that instance is OFF before snap is deleted and
  this doesn't work for fs-backed drivers as they use hyp assisted snap
 which
  needs domain to be active.

 Logically, it should be possible to delete snapshots when a domain is
 off (qemu-img can do it, but libvirt has not yet been taught how to
 manage it, in part because qemu-img is not as friendly as qemu in having
 a re-connectible Unix socket monitor for tracking long-running progress).


Is there a bug/feature already opened for this ? I didn't understand much of
what you mean by a re-connectible unix socket :) ... are you hinting that
qemu-img doesn't have the ability to stay attached to a qemu / VM process
for a long time over a unix socket ?

Looks like many believe that this should be a valid scenario, but it
currently breaks the fs-backed cinder drivers, as the testcase proves.



 
 
  2) If this is valid scenario, then it means libvirt.py in nova should be
  modified NOT to raise error, but continue with the snap delete (as if
  volume was not attached) and take care of the dom xml (so that domain is
  still bootable post snap deletion), is this the way to go ?

 Obviously, it would be nice to get libvirt to support offline snapshot
 deletion, but until then, upper layers will have to work around
 libvirt's shortcomings.  I don't know if that helps answer your
 questions, though.


Thanks, it does in a way.

Q to Tempest folks,
Given that libvirt doesn't support this scenario yet, can the fs-backed
cinder drivers affected by this skip this testcase (using storage_protocol
= 'glusterfs' for the gluster case) until either libvirt supports it or
some workaround in Nova is decided upon ?
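
Something like the below in the affected testcase's skip checks is what I
have in mind - just a sketch:

    # e.g. in the affected scenario test's skip_checks() classmethod
    # (CONF here being tempest.config.CONF)
    if CONF.volume.storage_protocol == 'glusterfs':
        raise cls.skipException('snapshot delete on a stopped instance is '
                                'not yet supported for glusterfs-backed '
                                'volumes')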

Appreciate your inputs.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-04-02 Thread Deepak Shetty
On Thu, Apr 2, 2015 at 4:25 AM, Ian Wienand iwien...@redhat.com wrote:

 Note; I haven't finished debugging the glusterfs job yet.  This
 relates to the OOM that started happening on Centos after we moved to
 using as much pip-packaging as possible.  glusterfs was still failing
 even before this.


Cool, and it's not related to glusterfs IMHO, since it was happening
even w/o glusterfs (with just all the tempest tests running with defaults).

thanx,
deepak



 On 04/01/2015 07:58 PM, Deepak Shetty wrote:

  1) So why did this happen on rax VM only, the same (CentOS job) on hpcloud
  didn't seem to hit it even when we ran hpcloud VM with 8GB memory.


 I am still not entirely certain that hp wasn't masking the issue when
 we were accidentally giving hosts 32gb RAM.  We can get back to this
 once these changes merge.

  2) Should this also be sent to centos-devel folks so that they don't
 upgrade/update the pyopenssl in their distro repos until the issue
 is resolved ?


 I think let's give the upstream issues a little while to play-out,
 then we decide our next steps around use of the library based on that
 information.

 thanks

 -i


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-04-01 Thread Deepak Shetty
On Wed, Apr 1, 2015 at 8:20 AM, Ian Wienand iwien...@redhat.com wrote:

 On 03/27/2015 08:47 PM, Alan Pevec wrote:

 But how come that same recent pyOpenSSL doesn't consume more memory on
 Ubuntu?


 Just to loop back on the final status of this ...

  pyOpenSSL 0.14 does seem to use about an order of magnitude more
  memory than 0.13 (2 MB -> 20 MB).  For details see [1].

 This is due to the way it now goes through cryptography (the
 package, not the concept :) which binds to openssl using cffi.  This
 ends up parsing a bunch of C to build up the ABI representation, and
 it seems pycparser's model of this consumes most of the memory [2].
 If that is a bug or not remains to be seen.

 Ubuntu doesn't notice this in our CI environment because it comes with
 python-openssl 0.13 pre-installed in the image.  Centos started
 hitting this when I merged my change to start using as many libraries
 from pip as possible.

 I have a devstack workaround for centos out (preinstall the package)
 [3] and I think a global solution of avoiding it in requirements [4]
 (reviews appreciated).

 I'm also thinking about how we can better monitor memory usage for
 jobs.  Being able to see exactly what change pushed up memory usage by
 a large % would have made finding this easier.  We keep some overall
 details for devstack runs in a log file, but there is room to do
 better.


Interesting debugging, and good to see this was finally nailed.

A few questions:

1) So why did this happen on the rax VM only ? The same (CentOS) job on
hpcloud didn't seem to hit it, even when we ran the hpcloud VM with 8GB
memory.

2) Should this also be sent to the centos-devel folks, so that they don't
upgrade/update the pyopenssl in their distro repos until the issue is
resolved ?

thanx,
deepak




 -i

 [1] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
 [2] https://github.com/eliben/pycparser/issues/72
 [3] https://review.openstack.org/168217
 [4] https://review.openstack.org/169596


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-26 Thread Deepak Shetty
On Thu, Mar 26, 2015 at 5:09 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Thu, Mar 26, 2015 at 4:40 PM, Sean Dague s...@dague.net wrote:

 On 03/25/2015 09:26 AM, Dean Troyer wrote:
  On Wed, Mar 25, 2015 at 6:04 AM, Deepak Shetty dpkshe...@gmail.com
  mailto:dpkshe...@gmail.com wrote:
 
  Had a question here, why is this source in the end ?
 
 
  More often than not, you will want the variables defined by the other
  plugins (including the built-ins), this is really the first case we've
  had to deviate from that.  The right solution is to add an
  'override_plugins' phase that runs before the built-ins are sourced so
  you can override the built-in defaults.

 Ok, we did a quick discussion at the QA Sprint yesterday on this and the
 result is - https://review.openstack.org/#/c/167933/

 Please see if that would work in the glusterfs case.


 Thanks Sean.

 +Bharat


Adding Bharat now

thanx,
deepak




 Bharat,
    Pls check the below scenarios with Sean's patch:

  1) enable_plugin glusterfs set & CINDER_ENABLED_BACKENDS unset - it should
  pick from the plugin
  2) enable_plugin glusterfs set & CINDER_ENABLED_BACKENDS set - it should
  pick from localrc
  3) enable_plugin glusterfs set & some backend-specific var not touched by
  lib/cinder (eg: GLUSTERFS_LOOPBACK_SIZE) set - see if it picks up correctly
  4) enable_plugin glusterfs unset - it should pick the cinder default

 thanx,
 deepak


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-26 Thread Deepak Shetty
On Thu, Mar 26, 2015 at 4:40 PM, Sean Dague s...@dague.net wrote:

 On 03/25/2015 09:26 AM, Dean Troyer wrote:
  On Wed, Mar 25, 2015 at 6:04 AM, Deepak Shetty dpkshe...@gmail.com
  mailto:dpkshe...@gmail.com wrote:
 
  Had a question here, why is this source in the end ?
 
 
  More often than not, you will want the variables defined by the other
  plugins (including the built-ins), this is really the first case we've
  had to deviate from that.  The right solution is to add an
  'override_plugins' phase that runs before the built-ins are sourced so
  you can override the built-in defaults.

 Ok, we did a quick discussion at the QA Sprint yesterday on this and the
 result is - https://review.openstack.org/#/c/167933/

 Please see if that would work in the glusterfs case.


Thanks Sean.

+Bharat

Bharat,
  Pls check the below scenarios with Sean's patch:

1) enable_plugin glusterfs set & CINDER_ENABLED_BACKENDS unset - it should
pick from the plugin
2) enable_plugin glusterfs set & CINDER_ENABLED_BACKENDS set - it should
pick from localrc
3) enable_plugin glusterfs set & some backend-specific var not touched by
lib/cinder (eg: GLUSTERFS_LOOPBACK_SIZE) set - see if it picks up correctly
4) enable_plugin glusterfs unset - it should pick the cinder default

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 11:29 AM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Mar 25, 2015 at 12:58 AM, Ian Wienand iwien...@redhat.com wrote:

 On 03/24/2015 03:17 PM, Deepak Shetty wrote:
  For eg: Look at [1]
  [1]
 https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings

  I would like ability to change these while I use the enable_plugin
  apporach to setup devstack w/ GlusterFS per my local glusterfs setup

 So I think the plugin should do


 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}

 i.e. provide a default only if the variable is unset.


 Bah! That was easy, i should have figured that myself :)
 Thanks for catching that



 This seems like one of those traps for new players and is one
 concern I have with devstack plugins -- that authors keep having to
 find out lessons learned independently.  I have added a note on this
 to the documentation in [1].

 -i

 [1] https://review.openstack.org/#/c/167375/


 Great, i +1'ed it.

 Also i posted patch to fix settings file @
 https://review.openstack.org/167494


Ian,
   Looks like using a bash default in the plugin's settings file is not
working; with my patch it didn't use the glusterfs driver, it used LVM (the
default). I think what's happening here is that by the time the settings
file is sourced, CINDER_ENABLED_BACKENDS has already been set to lvm by
lib/cinder, so the settings file's default value is never taken.

IIUC there are 3 scenarios (taking CINDER_ENABLED_BACKENDS as an example var):

1) localrc has neither CINDER_ENABLED_BACKENDS nor enable_plugin glusterfs
- Here we want lib/cinder's default value to be taken
- this should work fine

2) localrc doesn't have CINDER_ENABLED_BACKENDS but has enable_plugin
glusterfs
- Here we want the plugin's default values to be taken, but they are not,
as lib/cinder has already initialized CINDER_ENABLED_BACKENDS to use the
lvm backend
- Thus broken

3) localrc has both CINDER_ENABLED_BACKENDS and enable_plugin glusterfs
specified
- Here we want the CINDER_ENABLED_BACKENDS present in my localrc to be
chosen
- This will work, as by the time the settings file is sourced,
CINDER_ENABLED_BACKENDS is already initialised to my value from localrc

So scenario #2 would need some changes in stack.sh's handling of plugin code ?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 12:58 AM, Ian Wienand iwien...@redhat.com wrote:

 On 03/24/2015 03:17 PM, Deepak Shetty wrote:
  For eg: Look at [1]
  [1]
 https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings

  I would like ability to change these while I use the enable_plugin
  apporach to setup devstack w/ GlusterFS per my local glusterfs setup

 So I think the plugin should do


 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}

 i.e. provide a default only if the variable is unset.


Bah! That was easy, I should have figured that out myself :)
Thanks for catching that.



 This seems like one of those traps for new players and is one
 concern I have with devstack plugins -- that authors keep having to
 find out lessons learned independently.  I have added a note on this
 to the documentation in [1].

 -i

 [1] https://review.openstack.org/#/c/167375/


Great, I +1'ed it.

Also I posted a patch to fix the settings file @
https://review.openstack.org/167494

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Third-Party CI: adding Mellanox CI to Wiki

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 12:41 PM, Lenny Verkhovsky len...@mellanox.com
wrote:

  Hi All,

 Please add Mellanox Cinder CI to this page

 https://wiki.openstack.org/wiki/Cinder/third-party-ci-status


Lenny,
  You should be able to edit and add this yourself, once you log in with
your ID on that page.

thanx,
deepak




 Driver name: iSER-LIO, iSER-ISCSI

 Contact: Lenny Verkhovsky cinder...@mellanox.com

 Status: voting and reporting

 Issues: none



 Reference
 https://wiki.openstack.org/wiki/ThirdPartySystems/Mellanox_Cinder_CI



 Thanks.

 *Lenny Verkhovsky*

 SW Engineer,  Mellanox Technologies

 www.mellanox.com



 Office:+972 74 712 9244

 Mobile:  +972 54 554 0233

 Fax:+972 72 257 9400

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 7:30 PM, Dean Troyer dtro...@gmail.com wrote:

 I wasn't clear, let me try again:

   CINDER_ENABLED_BACKENDS=;

  set the value to the separator character semi-colon.  That evaluates to
  not-empty for the shell's :- check, but has no entries, so it is still
  effectively null for the cinder config code.



Ah, I didn't notice the semi-colon :)
I guess this should work, so #1 (per Sean's suggested options) seems to be
the way to go for now.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 6:58 PM, Dean Troyer dtro...@gmail.com wrote:

 On Wed, Mar 25, 2015 at 5:58 AM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Sorry, hit send before I could complete:
 back to square one (unless we modify lib/cinder to *not* use the default
 for CINDER_ENABLED_BACKENDS if 'CINDER_ENABLED_BACKENDS=' is specified in
 localrc)


 You could safely set CINDER_ENABLED_BACKENDS=; in local.conf to avoid the
 :- default setting.


Per the comment from yamamoto [1], it seems :- stands for unset or empty
(in bash, ${VAR:-default} substitutes the default when VAR is unset or
empty, while ${VAR-default} does so only when VAR is unset), so it won't
work (IIUC).

[1]: https://review.openstack.org/#/c/167375/1/doc/source/plugins.rst

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 3:58 PM, Sean Dague s...@dague.net wrote:

 On 03/25/2015 03:16 AM, Deepak Shetty wrote:
 
 
  On Wed, Mar 25, 2015 at 11:29 AM, Deepak Shetty dpkshe...@gmail.com
  mailto:dpkshe...@gmail.com wrote:
 
 
 
  On Wed, Mar 25, 2015 at 12:58 AM, Ian Wienand iwien...@redhat.com
  mailto:iwien...@redhat.com wrote:
 
  On 03/24/2015 03:17 PM, Deepak Shetty wrote:
   For eg: Look at [1]
   [1]
 https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings
 
   I would like ability to change these while I use the
 enable_plugin
   apporach to setup devstack w/ GlusterFS per my local glusterfs
 setup
 
  So I think the plugin should do
 
 
  
 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}
 
  i.e. provide a default only if the variable is unset.
 
 
  Bah! That was easy, i should have figured that myself :)
  Thanks for catching that
 
 
 
  This seems like one of those traps for new players and is one
  concern I have with devstack plugins -- that authors keep having
 to
  find out lessons learned independently.  I have added a note on
 this
  to the documentation in [1].
 
  -i
 
  [1] https://review.openstack.org/#/c/167375/
 
 
  Great, i +1'ed it.
 
  Also i posted patch to fix settings file @
  https://review.openstack.org/167494
 
 
  Ian,
 Looks like usign bash default in settings file of plugin is not
  working, in my patch it didn't use glusterfs driver, it used LVM
 (default)
  I think whats happening here is that by the time settings file is
  sourced, CINDER_ENABLED_BACKENDS is already set to lvm by lib/cinder
  so settings file's default value is never taken
 
  IIUC there are 3 scenarios (taking CINDER_ENABLED_BACKENDS as example
 var) :
 
  1) localrc doesn't have CINDER_ENABLED_BACKENDS and enable_plugin
  - Here we want the lib/cinder's default value to be taken
  - this should work fine
 
  2) localrc doesn't have CINDER_ENABLED_BACKENDS but has enable_plugin
  glusterfs
  - Here we want the plugin's default values to be taken, but its not
  as lib/cinder already initialized CINDER_ENABLED_BACKENDS to use lvm
 backend
  - Thus broken
 
  3) localrc has both CINDER_ENABLED_BACKENDS and enable_plugin glusterfs
  specified
  - Here we want CINDER_ENABLED_BACKENDS present in my localrc to be
  chosen
  - This will work as by the time settings file is sourced
  CINDER_ENABLED_BACKENDS is already initialised to my value in localrc
 
  So #2 scenario would need some changes in stack.sh handling of plugin
 code ?

 Right, so this code runs late enough that you don't get to change the
 defaults. I think that's ok.

 I would instead do the following:

 1) CINDER_ENABLED_BACKENDS+=,glusterfs:glusterfs

 or

 2) CINDER_ENABLED_BACKENDS=glusterfs:glusterfs

 in the plugin.

 Clearly, if you've enabled the plugin, you want glusterfs. I think that
 in most cases you probably only want glusterfs as your backend, so
 option #2 seems sensible.



#1 is needed for multi-backend testing
#2 is needed for single-backend testing

#2 is what we do currently; we blindly override the var, but that forces
the devstack user to use the config given in the plugin. I wanted a way to
either use the plugin config or override it.

I think #1 is better, since it gives the power in localrc to do:

1) CINDER_ENABLED_BACKENDS=
This will ensure lib/cinder doesn't populate it, and the plugin adds
glusterfs:glusterfs for a single backend

2) No mention of CINDER_ENABLED_BACKENDS in localrc
This will make it CINDER_ENABLED_BACKENDS=lvm:lvm-driver1,glusterfs:glusterfs
for multi-backend

Also, for vars in the settings file that are backend specific (hence not
touched by lib/cinder):

GLUSTERFS_LOOPBACK_DISK_SIZE & CINDER_GLUSTERFS_SHARES

They can remain as:

GLUSTERFS_LOOPBACK_DISK_SIZE=${GLUSTERFS_LOOPBACK_DISK_SIZE:-8G}
CINDER_GLUSTERFS_SHARES=${CINDER_GLUSTERFS_SHARES:-127.0.0.1:/vol1;127.0.0.1:/vol2}

(as mentioned in the patch @
https://review.openstack.org/#/c/167494/1/devstack/settings)

This will give the end user the ability to change the loopback size and/or
the gluster server IPs based on the needs of his/her local setup.

Agree ?

If yes, then we must mention this in plugin.rst in a nice way for other
plugin writers to understand it properly :)

thanx,
deepak




 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http

Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 4:20 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Mar 25, 2015 at 3:58 PM, Sean Dague s...@dague.net wrote:

 On 03/25/2015 03:16 AM, Deepak Shetty wrote:
 
 
  On Wed, Mar 25, 2015 at 11:29 AM, Deepak Shetty dpkshe...@gmail.com
  mailto:dpkshe...@gmail.com wrote:
 
 
 
  On Wed, Mar 25, 2015 at 12:58 AM, Ian Wienand iwien...@redhat.com
  mailto:iwien...@redhat.com wrote:
 
  On 03/24/2015 03:17 PM, Deepak Shetty wrote:
   For eg: Look at [1]
   [1]
 https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings
 
   I would like ability to change these while I use the
 enable_plugin
   apporach to setup devstack w/ GlusterFS per my local
 glusterfs setup
 
  So I think the plugin should do
 
 
  
 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}
 
  i.e. provide a default only if the variable is unset.
 
 
  Bah! That was easy, i should have figured that myself :)
  Thanks for catching that
 
 
 
  This seems like one of those traps for new players and is one
  concern I have with devstack plugins -- that authors keep
 having to
  find out lessons learned independently.  I have added a note on
 this
  to the documentation in [1].
 
  -i
 
  [1] https://review.openstack.org/#/c/167375/
 
 
  Great, i +1'ed it.
 
  Also i posted patch to fix settings file @
  https://review.openstack.org/167494
 
 
  Ian,
 Looks like usign bash default in settings file of plugin is not
  working, in my patch it didn't use glusterfs driver, it used LVM
 (default)
  I think whats happening here is that by the time settings file is
  sourced, CINDER_ENABLED_BACKENDS is already set to lvm by lib/cinder
  so settings file's default value is never taken
 
  IIUC there are 3 scenarios (taking CINDER_ENABLED_BACKENDS as example
 var) :
 
  1) localrc doesn't have CINDER_ENABLED_BACKENDS and enable_plugin
  - Here we want the lib/cinder's default value to be taken
  - this should work fine
 
  2) localrc doesn't have CINDER_ENABLED_BACKENDS but has enable_plugin
  glusterfs
  - Here we want the plugin's default values to be taken, but its not
  as lib/cinder already initialized CINDER_ENABLED_BACKENDS to use lvm
 backend
  - Thus broken
 
  3) localrc has both CINDER_ENABLED_BACKENDS and enable_plugin glusterfs
  specified
  - Here we want CINDER_ENABLED_BACKENDS present in my localrc to be
  chosen
  - This will work as by the time settings file is sourced
  CINDER_ENABLED_BACKENDS is already initialised to my value in localrc
 
  So #2 scenario would need some changes in stack.sh handling of plugin
 code ?

 Right, so this code runs late enough that you don't get to change the
 defaults. I think that's ok.

 I would instead do the following:

 1) CINDER_ENABLED_BACKENDS+=,glusterfs:glusterfs

 or

 2) CINDER_ENABLED_BACKENDS=glusterfs:glusterfs

 in the plugin.

 Clearly, if you've enabled the plugin, you want glusterfs. I think that
 in most cases you probably only want glusterfs as your backend, so
 option #2 seems sensible.



 #1 is needed for multi-backend testing
 #2 is needed for single-backend testing

 #2 is what we currently, we blindly override the var, but that forces the
 devstack user to
 use the config given in the plugin, I wanted a way to either use plugin
 config or override it

 I think #1 is better, since it gives the power in localrc to do:

 1) CINDER_ENABLED_BACKENDS=
 This will ensure lib/cinder doesn't populate it and plugin adds
 glusterfs:glusterfs for single backend


My bad here: lib/cinder uses :- which IIUC means 'if empty or unset, use the
default', so with #1 or #2 there isn't a way to both use the plugin config
and override it, is there?

back to square one.
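
For reference, a quick bash illustration of the ':-' vs '-' expansions
(plain bash, not devstack code):

    unset VAR;  echo "${VAR:-default}"   # prints "default" (VAR unset)
    VAR="";     echo "${VAR:-default}"   # prints "default" (:- treats empty like unset)
    VAR="";     echo "${VAR-default}"    # prints ""        (- fires only when truly unset)

So once localrc sets the var to empty, a later ':-' default still kicks in
and replaces it.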

thanx,
deepak



 2) No mention of CINDER_ENABLED_BACKENDS in localrc
 This will make it CINDER_ENABLED_BACKENDS=lvm:lvm-driver1,
 glusterfs:glusterfs for multi-backend

 Also for vars in settings file that are backend specific (hence not
 touched by lib/cinder):

 GLUSTERFS_LOOPBACK_DISK_SIZE  CINDER_GLUSTERFS_SHARES

 They can remain as :
 GLUSTERFS_LOOPBACK_DISK_SIZE=${GLUSTERFS_LOOPBACK_DISK_SIZE:-8G}

 CINDER_GLUSTERFS_SHARES=${CINDER_GLUSTERFS_SHARES:-127.0.0.1:
 /vol1;127.0.0.1:/vol2}

 (as mentioned in the patch @
 https://review.openstack.org/#/c/167494/1/devstack/settings)

 This will give the end user the ability to change loopback size and/or
 gluster server IPs
 based on the needs of his/her local setup

 Agree ?

 If yes, then we must mention this in the plugin.rst in a nice way for
 other plugin writers to
 understand properly :) ?

 thanx,
 deepak




 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ

Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 4:24 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Mar 25, 2015 at 4:20 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Mar 25, 2015 at 3:58 PM, Sean Dague s...@dague.net wrote:

 On 03/25/2015 03:16 AM, Deepak Shetty wrote:
 
 
  On Wed, Mar 25, 2015 at 11:29 AM, Deepak Shetty dpkshe...@gmail.com
  mailto:dpkshe...@gmail.com wrote:
 
 
 
  On Wed, Mar 25, 2015 at 12:58 AM, Ian Wienand iwien...@redhat.com
  mailto:iwien...@redhat.com wrote:
 
  On 03/24/2015 03:17 PM, Deepak Shetty wrote:
   For eg: Look at [1]
   [1]
 https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings
 
   I would like ability to change these while I use the
 enable_plugin
  approach to setup devstack w/ GlusterFS per my local
 glusterfs setup
 
  So I think the plugin should do
 
 
  
 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}
 
  i.e. provide a default only if the variable is unset.
 
 
  Bah! That was easy, i should have figured that myself :)
  Thanks for catching that
 
 
 
  This seems like one of those traps for new players and is one
  concern I have with devstack plugins -- that authors keep
 having to
  find out lessons learned independently.  I have added a note
 on this
  to the documentation in [1].
 
  -i
 
  [1] https://review.openstack.org/#/c/167375/
 
 
  Great, i +1'ed it.
 
  Also i posted patch to fix settings file @
  https://review.openstack.org/167494
 
 
  Ian,
  Looks like using bash default in settings file of plugin is not
  working, in my patch it didn't use glusterfs driver, it used LVM
 (default)
  I think whats happening here is that by the time settings file is
  sourced, CINDER_ENABLED_BACKENDS is already set to lvm by lib/cinder
  so settings file's default value is never taken
 
  IIUC there are 3 scenarios (taking CINDER_ENABLED_BACKENDS as example
 var) :
 
  1) localrc doesn't have CINDER_ENABLED_BACKENDS and enable_plugin
  - Here we want the lib/cinder's default value to be taken
  - this should work fine
 
  2) localrc doesn't have CINDER_ENABLED_BACKENDS but has enable_plugin
  glusterfs
  - Here we want the plugin's default values to be taken, but its not
  as lib/cinder already initialized CINDER_ENABLED_BACKENDS to use lvm
 backend
  - Thus broken
 
  3) localrc has both CINDER_ENABLED_BACKENDS and enable_plugin glusterfs
  specified
  - Here we want CINDER_ENABLED_BACKENDS present in my localrc to be
  chosen
  - This will work as by the time settings file is sourced
  CINDER_ENABLED_BACKENDS is already initialised to my value in localrc
 
  So #2 scenario would need some changes in stack.sh handling of plugin
 code ?

 Right, so this code runs late enough that you don't get to change the
 defaults. I think that's ok.

 I would instead do the following:

 1) CINDER_ENABLED_BACKENDS+=,glusterfs:glusterfs

 or

 2) CINDER_ENABLED_BACKENDS=glusterfs:glusterfs

 in the plugin.

 Clearly, if you've enabled the plugin, you want glusterfs. I think that
 in most cases you probably only want glusterfs as your backend, so
 option #2 seems sensible.



 #1 is needed for multi-backend testing
 #2 is needed for single-backend testing

 #2 is what we do currently; we blindly override the var, but that forces
 the devstack user to use the config given in the plugin. I wanted a way to
 either use the plugin config or override it

 I think #1 is better, since it gives the power in localrc to do:

 1) CINDER_ENABLED_BACKENDS=
 This will ensure lib/cinder doesn't populate it and plugin adds
 glusterfs:glusterfs for single backend


 My bad here, lib/cinder uses :- which IIUC means empty or unset, use
 default
 so with #1 or #2 there isn't a way to provide ability to use plugin config
 or override it , both ?

 back to square one.


Sorry, hit send before I could complete.
Back to square one (unless we modify lib/cinder to *not* apply the default
for CINDER_ENABLED_BACKENDS if 'CINDER_ENABLED_BACKENDS=' is explicitly
specified in localrc).
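
A minimal sketch of what that lib/cinder change could look like, using
bash's set-but-empty test (hypothetical, not actual devstack code):

    # apply the lvm default only if the user never set the var at all;
    # ${VAR+x} expands to "x" when VAR is set, even if set to empty
    if [[ -z "${CINDER_ENABLED_BACKENDS+x}" ]]; then
        CINDER_ENABLED_BACKENDS=lvm:lvm-driver1
    fi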

thanx,
deepak



 thanx,
 deepak



 2) No mention of CINDER_ENABLED_BACKENDS in localrc
 This will make it CINDER_ENABLED_BACKENDS=lvm:lvm-driver1,
 glusterfs:glusterfs for multi-backend

 Also for vars in settings file that are backend specific (hence not
 touched by lib/cinder):

 GLUSTERFS_LOOPBACK_DISK_SIZE  CINDER_GLUSTERFS_SHARES

 They can remain as :
 GLUSTERFS_LOOPBACK_DISK_SIZE=${GLUSTERFS_LOOPBACK_DISK_SIZE:-8G}

 CINDER_GLUSTERFS_SHARES=${CINDER_GLUSTERFS_SHARES:-127.0.0.1:
 /vol1;127.0.0.1:/vol2}

 (as mentioned in the patch @
 https://review.openstack.org/#/c/167494/1/devstack/settings)

 This will give the end user the ability to change loopback size and/or
 gluster server IPs
 based on the needs of his/her local setup

 Agree ?

 If yes, then we must mention this in the plugin.rst in a nice way for
 other plugin

Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Deepak Shetty
On Wed, Mar 25, 2015 at 3:58 PM, Sean Dague s...@dague.net wrote:

 On 03/25/2015 03:16 AM, Deepak Shetty wrote:
 
 
  On Wed, Mar 25, 2015 at 11:29 AM, Deepak Shetty dpkshe...@gmail.com
  mailto:dpkshe...@gmail.com wrote:
 
 
 
  On Wed, Mar 25, 2015 at 12:58 AM, Ian Wienand iwien...@redhat.com
  mailto:iwien...@redhat.com wrote:
 
  On 03/24/2015 03:17 PM, Deepak Shetty wrote:
   For eg: Look at [1]
   [1]
 https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings
 
   I would like ability to change these while I use the
 enable_plugin
  approach to setup devstack w/ GlusterFS per my local glusterfs
 setup
 
  So I think the plugin should do
 
 
  
 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}
 
  i.e. provide a default only if the variable is unset.
 
 
  Bah! That was easy, i should have figured that myself :)
  Thanks for catching that
 
 
 
  This seems like one of those traps for new players and is one
  concern I have with devstack plugins -- that authors keep having
 to
  find out lessons learned independently.  I have added a note on
 this
  to the documentation in [1].
 
  -i
 
  [1] https://review.openstack.org/#/c/167375/
 
 
  Great, i +1'ed it.
 
  Also i posted patch to fix settings file @
  https://review.openstack.org/167494
 
 
  Ian,
  Looks like using bash default in settings file of plugin is not
  working, in my patch it didn't use glusterfs driver, it used LVM
 (default)
  I think whats happening here is that by the time settings file is
  sourced, CINDER_ENABLED_BACKENDS is already set to lvm by lib/cinder
  so settings file's default value is never taken
 
  IIUC there are 3 scenarios (taking CINDER_ENABLED_BACKENDS as example
 var) :
 
  1) localrc doesn't have CINDER_ENABLED_BACKENDS and enable_plugin
  - Here we want the lib/cinder's default value to be taken
  - this should work fine
 
  2) localrc doesn't have CINDER_ENABLED_BACKENDS but has enable_plugin
  glusterfs
  - Here we want the plugin's default values to be taken, but its not
  as lib/cinder already initialized CINDER_ENABLED_BACKENDS to use lvm
 backend
  - Thus broken
 
  3) localrc has both CINDER_ENABLED_BACKENDS and enable_plugin glusterfs
  specified
  - Here we want CINDER_ENABLED_BACKENDS present in my localrc to be
  chosen
  - This will work as by the time settings file is sourced
  CINDER_ENABLED_BACKENDS is already initialised to my value in localrc
 
  So #2 scenario would need some changes in stack.sh handling of plugin
 code ?

 Right, so this code runs late enough that you don't get to change the
 defaults. I think that's ok.


Had a question here: why is this sourced at the end?

Plugin config is something that should be allowed to be overridden, so it
should be sourced at the beginning; then anything in localrc will override
whatever the plugin sets/unsets.

That way, if someone wants the plugin config as-is, they just enable the
plugin; if they want to override it, they enable the plugin and override
the env-specific parts.

Am I thinking wrong?
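
For illustration, the ordering I have in mind (a sketch):

    # plugin settings file, sourced first: only provides defaults
    CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs}

    # user's localrc, sourced afterwards: an explicit assignment wins
    CINDER_ENABLED_BACKENDS=glusterfs:glusterfs,lvm:lvm1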

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Third-Party CI: what next? (was Re: [cinder] Request exemption for removal of NetApp FC drivers (no voting CI))

2015-03-24 Thread Deepak Shetty
On Tue, Mar 24, 2015 at 2:51 AM, Walter A. Boring IV walter.bor...@hp.com
wrote:

 On 03/23/2015 01:50 PM, Mike Perez wrote:

 On 12:59 Mon 23 Mar , Stefano Maffulli wrote:

 On Mon, 2015-03-23 at 11:43 -0700, Mike Perez wrote:

 We've been talking about CI's for a year. We started talking about CI
 deadlines
 in August. If you post a driver for Kilo, it was communicated that
 you're
 required to have a CI by the end of Kilo [1][2][3][4][5][6][7][8]. This
 should've been known by your engineers regardless of when you submitted
 your
 driver.

 Let's work to fix the CI bits for Liberty and beyond. I have the feeling
 that despite your best effort to communicate deadlines, some quite
 visible failure has happened.

 You've been clear about Cinder's deadlines, I've been trying to add them
 also to the weekly newsletter, too.

 To the people whose drivers don't have their CI completed in time: what
 do you suggest should change so that you won't miss the deadlines in the
 future? How should the processes and tool be different so you'll be
 successful with your OpenStack-based products?

 Just to be clear, here's all the communication attempts made to vendors:

 1) Talks during the design summit and the meetup on Friday at the summit.

 2) Discussions at the Cinder midcycle meetups in Fort Collins and Austin.

 4) Individual emails to driver maintainers. This includes anyone else who
 has
 worked on the driver file according to the git logs.

 5) Reminders on the mailing list.

 6) Reminders on IRC and Cinder IRC meetings every week.

 7) If you submitted a new driver in Kilo, you had the annoying reminder
 from
 reviewers that your driver needs to have a CI by Kilo.

 And lastly I have made phone calls to companies that have shown zero
 responses
 to my emails or giving me updates. This is very difficult with larger
 companies because you're redirected from one person to another of who
 their
 OpenStack person is.  I've left reminders on given voice mail
 extensions.

 I've talked to folks at the OpenStack foundation to get feedback on my
 communication, and was told this was good, and even better than previous
 communication to controversial changes.

 I expected nevertheless people to be angry with me and blame me
 regardless of
 my attempts to help people be successful and move the community forward.

  I completely agree here Mike.   The Cinder cores, PTL, and the rest of
 the
 community have been talking about getting CI as a requirement for quite
 some time now.
 It's really not the fault of the Cinder PTL, or core members, that your
 driver got pulled from the Kilo
 release, because you had issues getting your CI up and stable in the
 required time frame.
 Mike made every possible attempt to let folks know, up front, that the
 deadline was going to happen.

 Getting CI in place is critical for the stability of Cinder in general.
  We have already benefited from
 having 3rd Party CI in place.  It wasn't but a few weeks ago that a change
 that was submitted actually
 broke the HP drivers.   The CI we had in place discovered it, and brought
 it to the surface.   Without
 having that CI in place for our drivers, we would be in a bad spot now.


+1, we (GlusterFS) too discovered issues as part of CI, with the live
snapshot tests failing (GlusterFS being one of the very few drivers that
uses live snapshot in Cinder), and we fixed them [1]

[1]: https://review.openstack.org/#/c/156940/

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Updating 'scheduled_at' field of nova instances in the database.

2015-03-24 Thread Deepak Shetty
On Tue, Mar 24, 2015 at 11:57 AM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Tue, Mar 24, 2015 at 10:58 AM, Deepthi Dharwar 
 deep...@linux.vnet.ibm.com wrote:

 On 03/23/2015 09:00 PM, Jay Pipes wrote:
  On Mon, Mar 23, 2015 at 11:18:28AM +0530, Deepthi Dharwar wrote:
  All the VM information is stored in the instances table.
  This includes all the time related field like scheduled_at,
 launched_at etc.
 
  After upgrading to Juno, I have noticed that my 'scheduled_at' field
  is not getting reflected at all in the database. I do see my VMs
  being spawned and running just fine. However, the 'launched_at' time
  does get reflected rightly.
 
 
  MariaDB [nova]> select created_at, deleted_at, host, scheduled_at, launched_at from instances;

  +---------------------+---------------------+-----------+--------------+---------------------+
  | created_at          | deleted_at          | host      | scheduled_at | launched_at         |
  +---------------------+---------------------+-----------+--------------+---------------------+
  | 2015-03-09 20:00:41 | 2015-03-10 17:12:11 | localhost | NULL         | 2015-03-09 20:01:30 |
  | 2015-03-11 05:53:13 | NULL                | localhost | NULL         | 2015-03-18 19:48:12 |
  +---------------------+---------------------+-----------+--------------+---------------------+
 
 
  Can anyone let me know if this is a genuine issue or have there been
  a recent change in regard to updating this field ?
 
  I am basically trying to find as to how long a particular VM is
 running on a given host.
  I was using the current time - scheduled time for the same.
  Is there a better way to get this value ?
 
  Use current_time - launched_at.

 'launched_at' will give me the time a particular VM came into being.
 In a scenario where the VM was launched on host H1, later migrated on to
 a different host H2, 'launched_at' will not give the time the VM
 has been running on host H2, where as 'scheduled_at' would have
 addressed this issue or will it ?


 Per the code, patch @ https://review.openstack.org/#/c/143725/2 removed
 scheduled_at since the function was no longer used. Googling, I couldn't
 find more info on the history behind why these 2 fields were there to
 begin with.

 I am _not_ a nova expert, but IIUC a VM is scheduled once, and changing
 the host as part of a migration isn't scheduling - but I could be wrong
 too.

 Since your requirement is to find the time a VM lived on a host, as long
 as launched_at is updated post migration, it should suffice, I feel?


FWIW, in nova/compute/manager.py, line 4036 is where migration ends and
line 4042 is where launched_at is updated, post migration.
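
If it helps, a quick way to eyeball time-on-host from the DB using the
current_time - launched_at approach (a sketch against the instances table
shown above; nova stores these timestamps in UTC):

    mysql nova -e "select uuid, host,
        timestampdiff(SECOND, launched_at, utc_timestamp()) as secs_on_host
        from instances where deleted_at is null;"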

HTH

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Updating 'scheduled_at' field of nova instances in the database.

2015-03-24 Thread Deepak Shetty
On Tue, Mar 24, 2015 at 10:58 AM, Deepthi Dharwar 
deep...@linux.vnet.ibm.com wrote:

 On 03/23/2015 09:00 PM, Jay Pipes wrote:
  On Mon, Mar 23, 2015 at 11:18:28AM +0530, Deepthi Dharwar wrote:
  All the VM information is stored in the instances table.
  This includes all the time related field like scheduled_at, launched_at
 etc.
 
  After upgrading to Juno, I have noticed that my 'scheduled_at' field
  is not getting reflected at all in the database. I do see my VMs
  being spawned and running just fine. However, the 'launched_at' time
  does get reflected rightly.
 
 
 MariaDB [nova]> select created_at, deleted_at, host, scheduled_at, launched_at from instances;

 +---------------------+---------------------+-----------+--------------+---------------------+
 | created_at          | deleted_at          | host      | scheduled_at | launched_at         |
 +---------------------+---------------------+-----------+--------------+---------------------+
 | 2015-03-09 20:00:41 | 2015-03-10 17:12:11 | localhost | NULL         | 2015-03-09 20:01:30 |
 | 2015-03-11 05:53:13 | NULL                | localhost | NULL         | 2015-03-18 19:48:12 |
 +---------------------+---------------------+-----------+--------------+---------------------+
 
 
  Can anyone let me know if this is a genuine issue or have there been
  a recent change in regard to updating this field ?
 
  I am basically trying to find as to how long a particular VM is running
 on a given host.
  I was using the current time - scheduled time for the same.
  Is there a better way to get this value ?
 
  Use current_time - launched_at.

 'launched_at' will give me the time a particular VM came into being.
 In a scenario where the VM was launched on host H1, later migrated on to
 a different host H2, 'launched_at' will not give the time the VM
 has been running on host H2, where as 'scheduled_at' would have
 addressed this issue or will it ?


Per the code, patch @ https://review.openstack.org/#/c/143725/2 removed
scheduled_at since the function was no longer used. Googling, I couldn't
find more info on the history behind why these 2 fields were there to
begin with.

I am _not_ a nova expert, but IIUC a VM is scheduled once, and changing the
host as part of a migration isn't scheduling - but I could be wrong too.

Since your requirement is to find the time a VM lived on a host, as long as
launched_at is updated post migration, it should suffice, I feel?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-23 Thread Deepak Shetty
Hi all,
  I was wondering if there was a neat way to override the settings file
present in the devstack plugin stackforge project.

For eg: stackforge/devstack-plugin-glusterfs

I plan to use `enable_plugin glusterfs repo` in my local to setup
GlusterFS backend for openstack

But I am forced to use the settings that the above repo has.

Is there a way for the user setting up devstack to use the plugin but
specify his/her own settings file?

I guess not, and I see 2 ways of doing it - both not the best solutions
tho':

1) create settings.multi-backend, settings.blah-blah etc in the stackforge
project and have an optional [settings] arg to the enable_plugin which
tells devstack which settings file to source

2) Create different gitrefs (branches) for each version of the settings
file one wants to support and use that gitref in the enable_plugin to clone
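
For instance, option 2 would use enable_plugin's existing optional gitref
argument (the branch name here is hypothetical):

    enable_plugin glusterfs https://github.com/stackforge/devstack-plugin-glusterfs settings-multi-backend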

Both of these need the plugin dev's co-operation

I was hoping there would be a way for backend-specific options in the
settings file to be overridden by the end user; that way he/she can use the
plugin and still override the backend-specific options based on the local
env.

Thoughts ?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-23 Thread Deepak Shetty
On Tue, Mar 24, 2015 at 8:36 AM, Ian Wienand iwien...@redhat.com wrote:

 On 03/23/2015 09:20 PM, Deepak Shetty wrote:

 Hi all,
I was wondering if there was a neat way to override the settings file
 present in the devstack plugin stackforge project.

 For eg: stackforge/devstack-plugin-glusterfs

 I plan to use `enable_plugin glusterfs repo` in my local to setup
 GlusterFS backend for openstack

 But I am forced to use the settings that the above repo has.


 Can you explain more what you mean?  The glusterfs plugin should have
 access to anything defined by the local.conf?


For eg: look at [1]; it configures 2 backends (glusterfs and lvm), but I
just need the glusterfs backend.
Also, CINDER_GLUSTERFS_SHARES is hardcoded; in my devstack setup I might
have different gluster volume names and/or IP addresses.

I would like the ability to change these while I use the enable_plugin
approach to set up devstack w/ GlusterFS per my local glusterfs setup.

[1]:
https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings
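
i.e. I'd like to be able to write something like this in my localrc (the
share and IP values below are just placeholders for my local env):

    enable_plugin glusterfs https://github.com/stackforge/devstack-plugin-glusterfs
    CINDER_ENABLED_BACKENDS=glusterfs:glusterfs
    CINDER_GLUSTERFS_SHARES=192.168.122.10:/glustervol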

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Cinder-GlusterFS CI update

2015-03-10 Thread Deepak Shetty
Hi All,
 Quick update.

  We added the GlusterFS CI job (gate-tempest-dsvm-full-glusterfs) to the
*check pipeline (non-voting)* after the patch @ [1] was merged.

It's been running successfully (so far so good) on Cinder patches; a few
examples are in [2].

I also updated the 3rd party CI status page [3] with the current status.

[1]: https://review.openstack.org/162556
 [2]: https://review.openstack.org/#/c/162532/ ,
https://review.openstack.org/#/c/157956/ ,
https://review.openstack.org/#/c/160682/
 [3]: https://wiki.openstack.org/wiki/Cinder/third-party-ci-status

 thanx,
 deepak & bharat
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-03-05 Thread Deepak Shetty
Update:

   Cinder-GlusterFS CI job (ubuntu-based) was added as experimental
(non-voting) to the cinder project [1].
It's running successfully without any issues so far [2], [3].

We will monitor it for a few days and if it continues to run fine, we will
propose a patch to make it check (voting)

[1]: https://review.openstack.org/160664
[2]: https://jenkins07.openstack.org/job/gate-tempest-dsvm-full-glusterfs/
[3]: https://jenkins02.openstack.org/job/gate-tempest-dsvm-full-glusterfs/

thanx,
deepak

On Fri, Feb 27, 2015 at 10:47 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Fri, Feb 27, 2015 at 4:02 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
  So that proves that glusterfs doesn't cause it, as its happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


 Maybe true, but we did the same on the hpcloud provider VM too and both
 times it ran successfully with glusterfs as the cinder backend. Also,
 before starting the 2nd run, we did unstack and saw that free memory did
 go back to 5G+ and then re-invoked your script. I believe the
 contamination could result in some additional testcase failures (which we
 did see) but shouldn't be related to whether the system can OOM or not,
 since that's a runtime thing.

 I see that the VM is up again. We will execute the 2nd run afresh now
 and update
 here.


 Ran tempest configured with the default backend i.e. LVM and was able to
 recreate the OOM issue, so running tempest without gluster against a fresh
 VM reliably recreates the OOM issue; snip below from syslog.

 Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api
 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

 Had a discussion with clarkb on IRC and given that F20 is discontinued,
 F21 has issues with tempest (under debug by ianw)
 and centos7 also has issues on rax (as evident from this thread), the
 only option left is to go with ubuntu based CI job, which
 BharatK is working on now.


 Quick Update:

 Cinder-GlusterFS CI job on ubuntu was added (
 https://review.openstack.org/159217)

 We ran it 3 times against our stackforge repo patch @
 https://review.openstack.org/159711
 and it works fine (2 testcase failures, which are expected and we're
 working towards fixing them)

 For the logs of the 3 experimental runs, look @

 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/

  Of the 3 jobs, 1 was scheduled on rax and 2 on hpcloud, so it's working
  nicely across the different cloud providers.


 Clarkb, Fungi,
  Given that the ubuntu job is stable, I would like to propose adding it as
 experimental to the openstack cinder project while we work on fixing the 2
 failed test cases in parallel

 thanx,
 deepak


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-03 Thread Deepak Shetty
On Tue, Mar 3, 2015 at 12:51 AM, Luis Pabon lpa...@redhat.com wrote:

 What is the status on virtfs?  I am not sure if it is being maintained.
 Does anyone know?


The last I knew, it's not maintained.
Also, for what it's worth, p9 won't work for Windows guests (unless there
is a p9 driver for Windows?) if that is part of your use case/scenario.

Last but not least, p9/virtfs would expose a p9 mount, not a ceph mount, to
VMs, which means cephfs-specific mount options may not work.




 - Luis

 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org
 Sent: Sunday, March 1, 2015 9:07:36 AM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

 Am 27.02.2015 um 01:04 schrieb Sage Weil:
  [sorry for ceph-devel double-post, forgot to include
  openstack-dev]
 
  Hi everyone,
 
  The online Ceph Developer Summit is next week[1] and among other
  things we'll be talking about how to support CephFS in Manila.  At
  a high level, there are basically two paths:

 We discussed the CephFS Manila topic also on the last Manila Midcycle
 Meetup (Kilo) [1][2]

  2) Native CephFS driver
 
  As I currently understand it,
 
  - The driver will set up CephFS auth credentials so that the guest
  VM can mount CephFS directly - The guest VM will need access to the
  Ceph network.  That makes this mainly interesting for private
  clouds and trusted environments. - The guest is responsible for
  running 'mount -t ceph ...'. - I'm not sure how we provide the auth
  credential to the user/guest...

 The auth credentials need to be handled currently by a application
 orchestration solution I guess. I see currently no solution on the
 Manila layer level atm.


There were some discussions in the past in the Manila community on guest
auto-mount, but I guess nothing was conclusive there.

Application orchestration can be achieved by having tenant-specific VM
images with creds pre-loaded, or having the creds injected via cloud-init
should work too?
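
E.g. once the creds are in place, the guest would run something like this
(a sketch; the MON address, share path and secret file location are
placeholders for whatever the driver hands out):

    mount -t ceph 192.168.10.1:6789:/shares/share-1 /mnt/share \
        -o name=tenant1,secretfile=/etc/ceph/tenant1.secret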



 If Ceph would provide OpenStack Keystone authentication for
 rados/cephfs instead of CephX, it could be handled via app orch easily.

  This would perform better than an NFS gateway, but there are
  several gaps on the security side that make this unusable currently
  in an untrusted environment:
 
  - The CephFS MDS auth credentials currently are _very_ basic.  As
  in, binary: can this host mount or it cannot.  We have the auth cap
  string parsing in place to restrict to a subdirectory (e.g., this
  tenant can only mount /tenants/foo), but the MDS does not enforce
  this yet.  [medium project to add that]
 
  - The same credential could be used directly via librados to access
  the data pool directly, regardless of what the MDS has to say about
  the namespace.  There are two ways around this:
 
  1- Give each tenant a separate rados pool.  This works today.
  You'd set a directory policy that puts all files created in that
  subdirectory in that tenant's pool, then only let the client access
  those rados pools.
 
  1a- We currently lack an MDS auth capability that restricts which
  clients get to change that policy.  [small project]
 
  2- Extend the MDS file layouts to use the rados namespaces so that
   users can be separated within the same rados pool.  [Medium
  project]
 
  3- Something fancy with MDS-generated capabilities specifying which
   rados objects clients get to read.  This probably falls in the
  category of research, although there are some papers we've seen
  that look promising. [big project]
 
  Anyway, this leads to a few questions:
 
  - Who is interested in using Manila to attach CephFS to guest VMs?


I didn't get this question... The goal of Manila is to provision shared FS
to VMs, so everyone interested in using CephFS would be interested to
attach (I guess you meant mount?) CephFS to VMs, no?



  - What use cases are you interested? - How important is security in
  your environment?


The NFS-Ganesha based service VM approach (for network isolation) in Manila
is still in the works, afaik.



 As you know we (Deutsche Telekom) are may interested to provide shared
 filesystems via CephFS to VMs instead of e.g. via NFS. We can
 provide/discuss use cases at CDS.

 For us security is very critical, as the performance is too. The first
 solution via ganesha is not what we prefer (to use CephFS via p9 and
 NFS would not perform that well I guess). The second solution, to use
 CephFS directly to the VM would be a bad solution from the security
 point of view since we can't expose the Ceph public network directly
 to the VMs to prevent all the security issues we discussed already.


Is there any place the security issues are captured for the case where VMs
access CephFS directly? I was curious to understand. IIUC Neutron provides
private and public networks, and for VMs to access the external CephFS
network, the tenant private network needs to be bridged/routed to the
external provider network, and there are ways neutron achieves it.

Are you saying that this approach of neutron is insecure?

Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-03 Thread Deepak Shetty
On Wed, Mar 4, 2015 at 5:10 AM, Danny Al-Gaaf danny.al-g...@bisect.de
wrote:

 Am 03.03.2015 um 19:31 schrieb Deepak Shetty:
 [...]
  For us security is very critical, as the performance is too. The
  first solution via ganesha is not what we prefer (to use CephFS
  via p9 and NFS would not perform that well I guess). The second
  solution, to use CephFS directly to the VM would be a bad
  solution from the security point of view since we can't expose
  the Ceph public network directly to the VMs to prevent all the
  security issues we discussed already.
 
 
  Is there any place the security issues are captured for the case
  where VMs access CephFS directly ?

 No there isn't any place and this is the issue for us.

  I was curious to understand. IIUC Neutron provides private and
  public networks and for VMs to access external CephFS network, the
  tenant private network needs to be bridged/routed to the external
  provider network and there are ways neturon achives it.
 
  Are you saying that this approach of neutron is insecure ?

 I don't say neutron itself is insecure.

 The problem is: we don't want any VM to get access to the ceph public
 network at all since this would mean access to all MON, OSDs and MDS
 daemons.

 If a tenant VM has access to the ceph public net, which is needed to
 use/mount native cephfs in this VM, one critical issue would be: the
 client can attack any ceph component via this network. Maybe I misses
 something, but routing doesn't change this fact.


Agree, but there are ways you can restrict the tenant VMs to specific
network ports only, using neutron security groups, and limit what a tenant
VM can do. On the CephFS side one can use selinux labels to provide an
additional level of security for the Ceph daemons, wherein only certain
processes can access/modify them. I am just thinking aloud here; I'm not
sure how well cephfs works with selinux combined.
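
E.g. something along these lines with the neutron CLI (a sketch; 6789 is
the default MON port and 6800-7300 the usual OSD range - the exact ports
depend on the cluster, and the group's default allow-all egress rules would
also need removing for this to actually restrict anything):

    neutron security-group-create ceph-clients
    # allow egress to the MONs
    neutron security-group-rule-create --direction egress --protocol tcp \
        --port-range-min 6789 --port-range-max 6789 ceph-clients
    # allow egress to the OSDs
    neutron security-group-rule-create --direction egress --protocol tcp \
        --port-range-min 6800 --port-range-max 7300 ceph-clients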

Thinking more, it seems like you then need a solution that goes via the
serviceVM approach but provides native CephFS mounts instead of NFS?

thanx,
deepak



 Danny




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Need +A (workflow +1) for https://review.openstack.org/156940

2015-03-03 Thread Deepak Shetty
Anteaya,
  In general I agree, but because of TZ differences you can't always do
that.
Also, I sent mail only for the case where we had all the +1s and +2s and
just needed a workflow +1, which I think is justifiable!

thanx,
deepak

On Tue, Mar 3, 2015 at 1:30 PM, Anita Kuno ante...@anteaya.info wrote:

 On 03/03/2015 02:17 AM, Deepak Shetty wrote:
   Hi all,
  Can someone give +A to https://review.openstack.org/156940 - we have
  the rest. Need to get this merged for glusterfs CI to pass the
  snapshot_when_volume_in_use testcases.
 
  thanx,
  deepak
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Do not request reviews on the mailing list.

 Please spend time in the project channel to which you wish to contribute
 and discuss patch status in there.

 Thank you,
 Anita.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][glusterfs] Online Snapshot fails with GlusterFS

2015-03-02 Thread Deepak Shetty
Duncan, Attila,
Thanks for your response.

We found an issue when using os_privileged_user_name, which is when we sent
this patch
https://review.openstack.org/#/c/156940/

Post that we used the admin user, its password and the admin tenant in
cinder.conf to make it work.
But we thought that using admin creds (esp. the password) in cinder.conf
isn't secure, so we thought of patching the tempest test case; but later,
while talking with Attila on IRC, figured that we can use the user 'nova',
which is added to the admin role in lib/nova as part of devstack setup.

So we plan to test using nova creds.
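
i.e. roughly this in cinder.conf, set the devstack way (a sketch; the
option names are from the privileged-user patch above, and the tenant value
is an assumption):

    iniset /etc/cinder/cinder.conf DEFAULT os_privileged_user_name nova
    iniset /etc/cinder/cinder.conf DEFAULT os_privileged_user_password "$SERVICE_PASSWORD"
    iniset /etc/cinder/cinder.conf DEFAULT os_privileged_user_tenant service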

thanx,
deepak

On Mon, Mar 2, 2015 at 5:24 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 I'm assuming your mean the following lines in nova policy.js:
 compute_extension:os-assisted-volume-snapshots:create:rule:admin_api,
 compute_extension:os-assisted-volume-snapshots:delete:rule:admin_api

 These 2 calls are not intended to be made directly via an end user, but
 via cinder, as a privileged user.

 Please do not patch tempest, since this is a real bug it is highlighting.
 The fix is to get cinder to use a privileged user account to make this
 call. Please raise a cinder bug.

 Thanks





  Hi,

 As part of the tempest job gate-tempest-dsvm-full-glusterfs
 run [1], the test case test_snapshot_create_with_volume_in_use [2] is
 failing.
 This is because demo user is unable to create online snapshots, due to
 nova policy rules[3].

 To avoid this issue we can modify test case, to make demo user as an
 admin before creating snapshot and reverting after it finishes.

 Another approach is to use privileged user (
 https://review.openstack.org/#/c/156940/) to create online snapshot.

 [1]
 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/b2cb37e/
 [2]
 https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_snapshots.py#L66
 [3]
 https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L329

 --
 Warm Regards,
 Bharat Kumar Kobagana
 Software Engineer
 OpenStack Storage – RedHat India


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-27 Thread Deepak Shetty
On Fri, Feb 27, 2015 at 4:02 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
  So that proves that glusterfs doesn't cause it, as its happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


 Maybe true, but we did the same on the hpcloud provider VM too and both
 times it ran successfully with glusterfs as the cinder backend. Also,
 before starting the 2nd run, we did unstack and saw that free memory did
 go back to 5G+ and then re-invoked your script. I believe the
 contamination could result in some additional testcase failures (which we
 did see) but shouldn't be related to whether the system can OOM or not,
 since that's a runtime thing.

 I see that the VM is up again. We will execute the 2nd run afresh now
 and update
 here.


 Ran tempest configured with the default backend i.e. LVM and was able to
 recreate the OOM issue, so running tempest without gluster against a fresh
 VM reliably recreates the OOM issue; snip below from syslog.

 Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api
 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

 Had a discussion with clarkb on IRC and given that F20 is discontinued,
 F21 has issues with tempest (under debug by ianw)
 and centos7 also has issues on rax (as evident from this thread), the
 only option left is to go with ubuntu based CI job, which
 BharatK is working on now.


 Quick Update:

 Cinder-GlusterFS CI job on ubuntu was added (
 https://review.openstack.org/159217)

 We ran it 3 times against our stackforge repo patch @
 https://review.openstack.org/159711
 and it works fine (2 testcase failures, which are expected and we're
 working towards fixing them)

 For the logs of the 3 experimental runs, look @

 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/

 Of the 3 jobs, 1 was scheduled on rax and 2 on hpcloud, so it's working
 nicely across the different cloud providers.


Clarkb, Fungi,
  Given that the ubuntu job is stable, I would like to propose adding it as
experimental to the openstack cinder project while we work on fixing the 2
failed test cases in parallel

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-27 Thread Deepak Shetty
On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
  So that proves that glusterfs doesn't cause it, as its happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


 Maybe true, but we did the same on the hpcloud provider VM too and both
 times it ran successfully with glusterfs as the cinder backend. Also,
 before starting the 2nd run, we did unstack and saw that free memory did
 go back to 5G+ and then re-invoked your script. I believe the
 contamination could result in some additional testcase failures (which we
 did see) but shouldn't be related to whether the system can OOM or not,
 since that's a runtime thing.

 I see that the VM is up again. We will execute the 2nd run afresh now and
 update
 here.


 Ran tempest configured with the default backend i.e. LVM and was able to
 recreate the OOM issue, so running tempest without gluster against a fresh
 VM reliably recreates the OOM issue; snip below from syslog.

 Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api invoked
 oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

 Had a discussion with clarkb on IRC and given that F20 is discontinued,
 F21 has issues with tempest (under debug by ianw)
 and centos7 also has issues on rax (as evident from this thread), the only
 option left is to go with ubuntu based CI job, which
 BharatK is working on now.


Quick Update:

Cinder-GlusterFS CI job on ubuntu was added (
https://review.openstack.org/159217)

We ran it 3 times against our stackforge repo patch @
https://review.openstack.org/159711
and it works fine (2 testcase failures, which are expected and we're
working towards fixing them)

For the logs of the 3 experimental runs, look @
http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/

Of the 3 jobs, 1 was scheduled on rax and 2 on hpcloud, so it's working
nicely across the different cloud providers.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][glusterfs] Online Snapshot fails with GlusterFS

2015-02-26 Thread Deepak Shetty
Thanks Bharat for starting this thread

I would like to invite suggestions/opinions from tempest folks on what's
the right way to get this to work:

1) Use a privileged user in cinder.conf

-- or --

2) Modify the tempest volume snapshot_in_use testcase to bump the user to
admin, run the test, and revert back to demo before leaving the testcase

thanx,
deepak


On Fri, Feb 27, 2015 at 11:57 AM, Bharat Kumar bharat.kobag...@redhat.com
wrote:

  Hi,

  As part of the tempest job gate-tempest-dsvm-full-glusterfs
  run [1], the test case test_snapshot_create_with_volume_in_use [2] is
 failing.
 This is because demo user is unable to create online snapshots, due to
 nova policy rules[3].

 To avoid this issue we can modify test case, to make demo user as an
 admin before creating snapshot and reverting after it finishes.

 Another approach is to use privileged user (
 https://review.openstack.org/#/c/156940/) to create online snapshot.

 [1]
 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/b2cb37e/
 [2]
 https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_snapshots.py#L66
 [3]
 https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L329

 --
 Warm Regards,
 Bharat Kumar Kobagana
 Software Engineer
 OpenStack Storage – RedHat India


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-26 Thread Deepak Shetty
On Wed, Feb 25, 2015 at 6:11 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-25 01:02:07 +0530 (+0530), Bharat Kumar wrote:
 [...]
  After running 971 test cases VM inaccessible for 569 ticks
 [...]

 Glad you're able to reproduce it. For the record that is running
 their 8GB performance flavor with a CentOS 7 PVHVM base image. The
 steps to recreate are http://paste.openstack.org/show/181303/ as
 discussed in IRC (for the sake of others following along). I've held
 a similar worker in HPCloud (15.126.235.20) which is a 30GB flavor
 artifically limited to 8GB through a kernel boot parameter.
 Hopefully following the same steps there will help either confirm
 the issue isn't specific to running in one particular service
 provider, or will yield some useful difference which could help
 highlight the cause.

 Either way, once 104.239.136.99 and 15.126.235.20 are no longer
 needed, please let one of the infrastructure root admins know to
 delete them.


You can delete these VMs; will request again if needed.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-25 Thread Deepak Shetty
On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
  So that proves that glusterfs doesn't cause it, as its happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


Maybe true, but we did the same on the hpcloud provider VM too and both
times it ran successfully with glusterfs as the cinder backend. Also,
before starting the 2nd run, we did unstack and saw that free memory did go
back to 5G+ and then re-invoked your script. I believe the contamination
could result in some additional testcase failures (which we did see) but
shouldn't be related to whether the system can OOM or not, since that's a
runtime thing.

I see that the VM is up again. We will execute the 2nd run afresh now and
update
here.



  The VM (104.239.136.99) is now in such a bad shape that existing
  ssh sessions are no longer responding for a long long time now,
  tho' ping works. So need someone to help reboot/restart the VM so
  that we can collect the logs for records. Couldn't find anyone
  during apac TZ to get it rebooted.
 [...]

 According to novaclient that instance was in a shutoff state, and
 so I had to nova reboot --hard to get it running. Looks like it's
 back up and reachable again now.


Cool, thanks!



  So from the above we can conclude that the tests are running fine
  on hpcloud and not on rax provider. Since the OS (centos7) inside
  the VM across provider is same, this now boils down to some issue
  with rax provider VM + centos7 combination.

 This certainly seems possible.

  Another data point I could gather is:
  The only other centos7 job we have is
  check-tempest-dsvm-centos7 and it does not run full tempest
  looking at the job's config it only runs smoke tests (also
  confirmed the same with Ian W) which i believe is a subset of
  tests only.

 Correct, so if we confirm that we can't successfully run tempest
 full on CentOS 7 in both of our providers yet, we should probably
 think hard about the implications on yesterday's discussion as to
 whether to set the smoke version gating on devstack and
 devstack-gate changes.

  So that brings to the conclusion that probably cinder-glusterfs CI
  job (check-tempest-dsvm-full-glusterfs-centos7) is the first
  centos7 based job running full tempest tests in upstream CI and
  hence is the first to hit the issue, but on rax provider only

 Entirely likely. As I mentioned last week, we don't yet have any
 voting/gating jobs running on the platform as far as I can tell, so
 it's still very much in an experimental stage.


So is there a way for a job to ask for hpcloud affinity, since that's where
our job ran well (faster, and with only 2 failures, which were expected)?
I am not sure how easy and time-consuming it would be to root-cause why the
centos7 + rax provider combination is causing OOM.

Alternatively, do you recommend using some other OS as the base for our
job - F20, F21 or ubuntu? I assume there are other jobs in the rax provider
that run on Fedora or Ubuntu with full tempest and don't OOM - would you
know?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-25 Thread Deepak Shetty
On Wed, Feb 25, 2015 at 6:11 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-25 01:02:07 +0530 (+0530), Bharat Kumar wrote:
 [...]
  After running 971 test cases VM inaccessible for 569 ticks
 [...]

 Glad you're able to reproduce it. For the record that is running
 their 8GB performance flavor with a CentOS 7 PVHVM base image. The


So we had 2 runs in total in the rax provider VM and below are the results:

Run 1) It failed and re-created the OOM. The setup had glusterfs as a
storage
backend for Cinder.

[deepakcs@deepakcs r6-jeremy-rax-vm]$ grep oom-killer
run1-w-gluster/logs/syslog.txt
Feb 24 18:41:08 devstack-centos7-rax-dfw-979654.slave.openstack.org kernel:
mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

Run 2) We *removed glusterfs backend*, so Cinder was configured with the
default
storage backend i.e. LVM. *We re-created the OOM here too*

So that proves that glusterfs doesn't cause it, as it's happening without
glusterfs too.
The VM (104.239.136.99) is now in such a bad shape that existing ssh
sessions have not been responding for a long, long time now, tho' ping
works. So we need someone to help reboot/restart the VM so that we can
collect the logs for the records. Couldn't find anyone during the APAC TZ
to get it rebooted.

We managed to get the below grep to work after a long time from another
terminal
to prove that oom did happen for run2

bash-4.2$ sudo cat /var/log/messages| grep oom-killer
Feb 25 08:53:16 devstack-centos7-rax-dfw-979654 kernel: ntpd invoked
oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Feb 25 09:03:35 devstack-centos7-rax-dfw-979654 kernel: beam.smp invoked
oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Feb 25 09:57:28 devstack-centos7-rax-dfw-979654 kernel: mysqld invoked
oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Feb 25 10:40:38 devstack-centos7-rax-dfw-979654 kernel: mysqld invoked
oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0


steps to recreate are http://paste.openstack.org/show/181303/ as
 discussed in IRC (for the sake of others following along). I've held
 a similar worker in HPCloud (15.126.235.20) which is a 30GB flavor


We did 2 runs in total in the hpcloud provider VM (and this time it was set
up correctly with 8G RAM, as evident from /proc/meminfo as well as dstat
output).

Run 1) It was successful. The setup had glusterfs as a storage backend for
Cinder. Only 2 testcases failed; they were expected. No OOM happened.

[deepakcs@deepakcs r7-jeremy-hpcloud-vm]$ grep oom-killer
run1-w-gluster/logs/syslog.txt
[deepakcs@deepakcs r7-jeremy-hpcloud-vm]$

Run 2) Since run 1 went fine, we enabled the tempest volume backup
testcases too and ran again. It was successful and no OOM happened.

[deepakcs@deepakcs r7-jeremy-hpcloud-vm]$ grep oom-killer
run2-w-gluster/logs/syslog.txt
[deepakcs@deepakcs r7-jeremy-hpcloud-vm]$


 artifically limited to 8GB through a kernel boot parameter.
 Hopefully following the same steps there will help either confirm
 the issue isn't specific to running in one particular service
 provider, or will yield some useful difference which could help
 highlight the cause.


So from the above we can conclude that the tests are running fine on
hpcloud and not on the rax provider. Since the OS (centos7) inside the VM
is the same across providers, this now boils down to some issue with the
rax provider VM + centos7 combination.

Another data point I could gather is:
The only other centos7 job we have is check-tempest-dsvm-centos7, and it
does not run full tempest; looking at the job's config, it only runs smoke
tests (also confirmed the same with Ian W), which I believe is a subset of
the tests only.

So that brings us to the conclusion that the cinder-glusterfs CI job
(check-tempest-dsvm-full-glusterfs-centos7) is probably the first
centos7-based job running full tempest tests in upstream CI and hence the
first to hit the issue, but on the rax provider only.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-24 Thread Deepak Shetty
FWIW, we tried to run our job in a rax provider VM (provided by ianw from
his personal account) and ran the tempest tests twice, but the OOM did not
recur. Of the 2 runs, one used the same PYTHONHASHSEED as we had in one of
the failed runs; still no OOM.

Jeremy graciously agreed to provide us 2 VMs, one each from the rax and
hpcloud providers, to see if the provider platform has anything to do with
it.

So we plan to run again with the VMs given by Jeremy, post which I will
send the next update here.

thanx,
deepak


On Tue, Feb 24, 2015 at 4:50 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 Due to an image setup bug (I have a fix proposed currently), I was
 able to rerun this on a VM in HPCloud with 30GB memory and it
 completed in about an hour with a couple of tempest tests failing.
 Logs at: http://fungi.yuggoth.org/tmp/logs3.tar

 Rerunning again on another 8GB Rackspace VM with the job timeout
 increased to 5 hours, I was able to recreate the network
 connectivity issues exhibited previously. The job itself seems to
 have run for roughly 3 hours while failing 15 tests, and the worker
 was mostly unreachable for a while at the end (I don't know exactly
 how long) until around the time it completed. The OOM condition is
 present this time too according to the logs, occurring right near
 the end of the job. Collected logs are available at:
 http://fungi.yuggoth.org/tmp/logs4.tar

 Given the comparison between these two runs, I suspect this is
 either caused by memory constraints or block device I/O performance
 differences (or perhaps an unhappy combination of the two).
 Hopefully a close review of the logs will indicate which.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Deepak Shetty
On Feb 21, 2015 12:20 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-20 16:29:31 +0100 (+0100), Deepak Shetty wrote:
  Couldn't find anything strong in the logs to back the reason for
  the OOM. At the time the OOM happens, the mysqld and java processes
  have the most RAM, hence the OOM killer selects mysqld (4.7G) to be
  killed.
 [...]

 Today I reran it after you rolled back some additional tests, and it
 runs for about 117 minutes before the OOM killer shoots nova-compute
 in the head. At your request I've added /var/log/glusterfs into the
 tarball this time: http://fungi.yuggoth.org/tmp/logs2.tar

Thanks Jeremy, can we get ssh access to one of these environments to debug?

Thanks
Deepak

 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Deepak Shetty
On Feb 21, 2015 12:26 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Fri, Feb 20, 2015 at 7:29 AM, Deepak Shetty dpkshe...@gmail.com
wrote:

 Hi Jeremy,
  Couldn't find anything strong in the logs to back the reason for the OOM.
 At the time the OOM happens, the mysqld and java processes have the most
RAM, hence the OOM killer selects mysqld (4.7G) to be killed.

 From a glusterfs backend perspective, I haven't found anything
suspicious, and we don't have the glusterfs logs (which are typically in
/var/log/glusterfs) so we can't delve inside glusterfs too much :(

 BharatK (in CC) also tried to re-create the issue in a local VM setup, but
it hasn't reproduced yet!

 Having said that, we do know that we started seeing this issue after we
enabled the nova-assisted-snapshot tests (by changing nova's policy.json
to enable non-admin users to create hyp-assisted snaps). We think that
enabling online snaps might have added to the number of tests and memory
load, and that's the only clue we have as of now!


 It looks like OOM killer hit while qemu was busy and during
a ServerRescueTest. Maybe libvirt logs would be useful as well?

Thanks for the data point, will look at this test to understand more what's
happening


 And I don't see any tempest tests calling assisted-volume-snapshots

Maybe it still hasn't reached it yet.

Thanks
Deepak


 Also this looks odd: Feb 19 18:47:16
devstack-centos7-rax-iad-916633.slave.openstack.org libvirtd[3753]: missing
__com.redhat_reason in disk io error event



 So:

   1) BharatK has merged the patch (
https://review.openstack.org/#/c/157707/ ) to revert the policy.json in the
glusterfs job. So no more nova-assisted-snap tests.

   2) We are also increasing the timeout of our job in the patch (
https://review.openstack.org/#/c/157835/1 ) so that we can get a full run
without timeouts and do a good analysis of the logs (logs are not posted if
the job times out)

 Can you please re-enable our job, so that we can confirm that disabling
the online snap TCs helps the issue, which, if it does, will help us narrow
down the issue.

 We also plan to monitor & debug over the weekend, hence having the job
enabled can help us a lot.

 thanx,
 deepak


 On Thu, Feb 19, 2015 at 10:37 PM, Jeremy Stanley fu...@yuggoth.org
wrote:

 On 2015-02-19 17:03:49 +0100 (+0100), Deepak Shetty wrote:
 [...]
  For some reason we are seeing the centos7 glusterfs CI job getting
  aborted/ killed either by Java exception or the build getting
  aborted due to timeout.
 [...]
  Hoping to root cause this soon and get the cinder-glusterfs CI job
  back online soon.

 I manually reran the same commands this job runs on an identical
 virtual machine and was able to reproduce some substantial
 weirdness.

 I temporarily lost remote access to the VM around 108 minutes into
 running the job (~17:50 in the logs) and the out of band console
 also became unresponsive to carriage returns. The machine's IP
 address still responded to ICMP ping, but attempts to open new TCP
 sockets to the SSH service never got a protocol version banner back.
 After about 10 minutes of that I went out to lunch but left
 everything untouched. To my excitement it was up and responding
 again when I returned.

 It appears from the logs that it runs well past the 120-minute mark
 where devstack-gate tries to kill the gate hook for its configured
 timeout. Somewhere around 165 minutes in (18:47) you can see the
 kernel out-of-memory killer starts to kick in and kill httpd and
 mysqld processes according to the syslog. Hopefully this is enough
 additional detail to get you a start at finding the root cause so
 that we can reenable your job. Let me know if there's anything else
 you need for this.

 [1] http://fungi.yuggoth.org/tmp/logs.tar
 --
 Jeremy Stanley


__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-20 Thread Deepak Shetty
Hi Jeremy,
  Couldn't find anything strong in the logs to back the reason for the OOM.
At the time the OOM happens, the mysqld and java processes have the most
RAM, hence the OOM killer selects mysqld (4.7G) to be killed. (A quick
triage sketch is below.)
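
For reference, this is how I looked for it in the collected syslog (a
minimal sketch; "invoked oom-killer" and "Out of memory" are the standard
kernel messages, and logs/syslog.txt is the path inside the tarball):

    # find the OOM events and the kill decisions
    grep -E "invoked oom-killer|Out of memory" logs/syslog.txt
    # the kernel dumps a per-process memory table just before each kill,
    # which shows why mysqld (largest RSS) was selected
    grep -A 40 "invoked oom-killer" logs/syslog.txt | less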

From a glusterfs backend perspective, I haven't found anything suspicious,
and we don't have the glusterfs logs (which are typically in
/var/log/glusterfs) so we can't delve inside glusterfs too much :(

BharatK (in CC) also tried to re-create the issue in a local VM setup, but it
hasn't reproduced yet!

Having said that, *we do know* that we started seeing this issue after we
enabled the nova-assisted-snapshot tests (by changing nova's policy.json
to enable non-admin users to create hyp-assisted snaps; a sketch of that
change is below). We think that enabling online snaps might have added to
the number of tests and memory load, and that's the only clue we have as
of now!
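
For context, the policy.json tweak amounts to something like the following
(a sketch from memory -- the exact rule names in nova's policy.json may
differ slightly). The default, admin-only rules are:

    "compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
    "compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api",

and the job relaxed them so that tempest's non-admin user could create them:

    "compute_extension:os-assisted-volume-snapshots:create": "",
    "compute_extension:os-assisted-volume-snapshots:delete": "",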

So:

  1) BharatK has merged the patch (
https://review.openstack.org/#/c/157707/ ) to revert the policy.json in the
glusterfs job. So no more nova-assisted-snap tests.

  2) We are also increasing the timeout of our job in the patch (
https://review.openstack.org/#/c/157835/1 ) so that we can get a full run
without timeouts and do a good analysis of the logs (logs are not posted if
the job times out). A sketch of what the timeout bump amounts to is below.
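
In devstack-gate terms the bump is just the job's timeout variable (the
variable name is devstack-gate's; the value is illustrative, not
necessarily what the patch uses):

    # allow up to 5 hours instead of the usual ~2
    export DEVSTACK_GATE_TIMEOUT=300   # minutes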

Can you please re-enable our job, so that we can confirm that disabling
the online snap TCs helps the issue, which, if it does, will help us narrow
down the issue.

We also plan to monitor & debug over the weekend, hence having the job
enabled can help us a lot.

thanx,
deepak


On Thu, Feb 19, 2015 at 10:37 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-19 17:03:49 +0100 (+0100), Deepak Shetty wrote:
 [...]
  For some reason we are seeing the centos7 glusterfs CI job getting
  aborted/ killed either by Java exception or the build getting
  aborted due to timeout.
 [...]
  Hoping to root cause this soon and get the cinder-glusterfs CI job
  back online soon.

 I manually reran the same commands this job runs on an identical
 virtual machine and was able to reproduce some substantial
 weirdness.

 I temporarily lost remote access to the VM around 108 minutes into
 running the job (~17:50 in the logs) and the out of band console
 also became unresponsive to carriage returns. The machine's IP
 address still responded to ICMP ping, but attempts to open new TCP
 sockets to the SSH service never got a protocol version banner back.
 After about 10 minutes of that I went out to lunch but left
 everything untouched. To my excitement it was up and responding
 again when I returned.

 It appears from the logs that it runs well past the 120-minute mark
 where devstack-gate tries to kill the gate hook for its configured
 timeout. Somewhere around 165 minutes in (18:47) you can see the
 kernel out-of-memory killer starts to kick in and kill httpd and
 mysqld processes according to the syslog. Hopefully this is enough
 additional detail to get you a start at finding the root cause so
 that we can reenable your job. Let me know if there's anything else
 you need for this.

 [1] http://fungi.yuggoth.org/tmp/logs.tar
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-19 Thread Deepak Shetty
Hi clarkb, fungi,
   As discussed in
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-02-19.log
( 2015-02-19T14:51:46 onwards), I am starting this thread to track the
abrupt job failures seen on cinder-glusterfs CI job in the recent past.

A small summary of the things that happened until now ...

For some reason we are seeing the centos7 glusterfs CI job getting
aborted/killed either by Java exception
or the build getting aborted due to timeout.

1) https://jenkins07.openstack.org/job/check-tempest-dsvm-full-glusterfs-centos7/35/consoleFull
   - due to a Hudson Java exception

2) https://jenkins07.openstack.org/job/check-tempest-dsvm-full-glusterfs-centos7/34/consoleFull
   - due to build timeout


For a list of all job failures, see

https://jenkins07.openstack.org/job/check-tempest-dsvm-full-glusterfs-centos7/

Most of the failures are of type #1

As a result of which the cinder-glusterfs CI job was removed ...
https://review.openstack.org/#/c/157213/

Per the discussion on IRC (see link above), fungi graciously agreed to
debug this as it looks like it is happening on the 'rax' provider. Thanks
fungi and clarkb :)

Hoping to root cause this soon and get the cinder-glusterfs CI job back
online soon.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha

2015-02-11 Thread Deepak Shetty
On Tue, Feb 10, 2015 at 1:51 AM, Li, Chen chen...@intel.com wrote:

  Hi list,



 I’m trying to understand how manila uses NFS-Ganesha, and hope to figure
 out what I need to do to use it once all the patches are merged (only one
 patch is still under review, right ?).



 I have read:

 https://wiki.openstack.org/wiki/Manila/Networking/Gateway_mediated

 https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha



 From the documents, it is said that, with Ganesha, multi-tenancy would be
 supported:

 *And later the Ganesha core would be extended to use the infrastructure
 used by generic driver to provide network separated multi-tenancy. The core
 would manage Ganesha service running in the service VMs, and the VMs
 themselves that reside in share networks.*



 => it is said: *extended to use the infrastructure used by generic
 driver to provide network separated multi-tenancy*

 So, when a user creates a share, a VM (share-server) would be created to
 run the Ganesha server.

 => I assume this VM should connect the 2 networks: the user’s
 share-network and the network where the GlusterFS cluster is running.



 But the generic driver creates a manila service network at the beginning.

 When a user creates a share, a “subnet” would be created in the manila
 service network corresponding to each user’s “share-network”:

 This means every VM (share-server) the generic driver has created lives in
 a different subnet; they’re not able to connect to each other.


When you say VM, it's confusing whether you are referring to a service VM
or a tenant VM. Since you are also saying share-server, I presume you mean
service VM!

IIUC each share-server VM (also called service VM) serves all VMs created
by a tenant. In other words, the generic driver creates 1 service VM per
tenant, and hence serves all the VMs (tenant VMs) created by that tenant.
Manila experts on the list can correct me if I am wrong here. The generic
driver creates the service VM (if not already present for that tenant) as
part of creating a new share and connects the tenant network to the service
VM network via a neutron router (it creates ports on the router which help
connect the 2 different subnets), thus the tenant VMs can ping/access the
service VM. There is no question and/or need to have 2 service VMs talk to
each other, because they are serving different tenants, thus they need to
be isolated!





  If my understanding here is correct, the VMs running Ganesha are living
  in different subnets too.

  => Here is my question:

  How will the VMs (share-servers) running Ganesha be able to connect to
  the single GlusterFS cluster ?



Typically GlusterFS will be deployed on storage nodes (by the storage
admin) that are NOT part of openstack. So having the share-server
talk/connect with GlusterFS is equivalent to saying "Allow an openstack VM
to talk with non-openstack nodes", in other words "Connect the neutron
network to a non-neutron network" (also called the provider/host network).

This is achieved by ensuring your openstack network node is configured to
forward tenant traffic to the provider network, which involves neutron
skills and some neutron black magic :)
To know what this involves, pls see the section "Setup devstack networking
to allow Nova VMs access external/provider network" in my blog @
http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html
A rough sketch of the idea is below.

This should be taken care of by your openstack network admin, who should
configure the openstack network node to allow this to happen. This isn't a
Manila / GlusterFS driver responsibility; rather it's an openstack
deployment option that's taken care of by the network admins during
openstack deployment.
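
To make that concrete, the neutron side boils down to something like the
following (a minimal sketch only -- the network names, physical network
label and CIDR are illustrative, not from my blog or any real deployment):

    # expose the provider network (where the GlusterFS nodes live) to neutron
    neutron net-create ext-net --router:external True \
        --provider:network_type flat --provider:physical_network physnet1
    neutron subnet-create ext-net 192.168.100.0/24 --name ext-subnet \
        --disable-dhcp
    # route tenant traffic to it via the tenant's router
    neutron router-gateway-set demo-router ext-net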



*Disclaimer: I am not a neutron expert, so feel free to correct/update me*
HTH,

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha

2015-02-11 Thread Deepak Shetty
On Thu, Feb 12, 2015 at 6:41 AM, Li, Chen chen...@intel.com wrote:

  Hi Deepak,



 > When you say VM, its confusing, whether you are referring to service
 > VM or tenant VM. Since you are also saying share-server, I presume you
 > mean service VM!

 > IIUC each share-server VM (also called service VM) is serving all VMs
 > created by a tenant. In other words, generic driver creates 1 service VM
 > per tenant, and hence serves all the VMs (tenant VMs) created by that
 > tenant. Manila experts on the list can correct me if I am wrong here.
 > Generic driver creates service VM (if not already present for that
 > tenant) as part of creating a new share and connects the tenant network
 > to the service VM network via neutron router (creates ports on the
 > router which helps connect the 2 different subnets), thus the tenant VMs
 > can ping/access the service VM. There is no question and/or need to have
 > 2 service VMs talk to each other, because they are serving different
 > tenants, thus they need to be isolated!



 Sorry for the bad expression, yes, I mean service VM.



 I don’t agree with “each share-server VM (also called service VM) is
 serving all VMs created by a tenant”.

 Because from my practice, 1 service VM is created for 1 share-network.

 A share-network -> a service VM -> shares which are created with the same
 “share-network”.


You are probably right, I don't remember the insides of share-network now,
but I always created 1 share-network, so i always had the notion of 1
service VM per tenant.


  A tenant (the tenant concept in keystone) can have more than one
 share-network; even the same neutron network & subnet can be “registered”
 to several share-networks if the user wants to do that.

 Actually, I didn’t see strong connections between manila shares and
 tenants (the concept in keystone), but that is another topic.



 But, I think I get your point about service VMs needing to be isolated.

 I agree with that.



 > Typically GlusterFS will be deployed on storage nodes (by storage admin)
 > that are NOT part of openstack. So having the share-server talk/connect
 > with GlusterFS is equivalent to saying Allow openstack VM to talk with
 > non-openstack nodes, in other words Connect the neutron network to
 > non-neutron network (also called provider/host network).





 This is the part I disagree with.


What exactly do you disagree with here ?






 > This is achieved by ensuring your openstack Network node is configured to
 > forward tenant traffic to provider network, which involves neutron skills
 > and some neutron black magic :)
 > To know what this involves, pls see section Setup devstack networking to
 > allow Nova VMs access external/provider network in my blog @
 > http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html





 > This should be taken care by your openstack network admin who should
 > configure the openstack network node to allow this to happen, this isn't a
 > Manila / GlusterFS driver responsibility, rather its an openstack
 > deployment option thats taken care by the network admins during openstack
 > deployment.







 What I want to do is enable GlusterFS with Manila with Ganesha in my
 environment.

 I’m working as a cloud admin.

 So, what I am expecting is:

 1.   I need to prepare a GlusterFS cluster

 2.   I need to prepare images and other stuff for the service VM


Right now, I think all we support is running Ganesha inside the GlusterFS
server node only. I don't think we have qualified the scenario where
Ganesha runs in a service VM. The blueprint talks about doing this in the
near future.

Ccing Csaba and Ramana who are the right folks to comment more on this.



  3.   I need to configure my GlusterFS cluster’s information (IPs,
 volumes) into manila.conf



 => All things can work if I start Manila now, yeah!

 The only thing I know is that manila would create VMs to connect to my
 GlusterFS cluster.





  Currently, the neutron network & subnet where the service VMs work are
 created by Manila.

 Manila calls them service_network & service_subnet.

 So, I don’t think it is possible for me to configure the network before I
 create shares.


service_network and service_subnet are pre-created, I thought ? Even if they
aren't, you can bridge the service_network with the provider network after
the service_network is created (ideally it should have been pre-created)




 Another problem is that there is no router I can use to let the
 service_network connect to the GlusterFS cluster.

 Because the service_subnet is already connected to the user’s router (if
 connect_share_server_to_tenant_network = False)


If you read my blog, it talks about connecting the tenant network to the
GlusterFS cluster which is on the host/provider network.
For your case, it maps to connecting the service VM (service_network and
service_subnet) to the GlusterFS cluster. You can either
use the existing router or create a new router and have it connect the

Re: [openstack-dev] [Manila]Question about gateway-mediated-with-ganesha

2015-02-11 Thread Deepak Shetty
On Thu, Feb 12, 2015 at 7:32 AM, Li, Chen chen...@intel.com wrote:

  Yes,  I’m asking about plans for gateway-mediated-with-ganesha.

 I want to know what would you do to achieve “*And later the Ganesha core
 would be extended to use the infrastructure used by generic driver to
 provide network separated multi-tenancy. The core would manage Ganesha
 service running in the service VMs, and the VMs themselves that reside in
 share networks.*”

 Because after I studied the current infrastructure of the generic driver,
 I guess directly using it for Ganesha would not work.


You may be right, but we cannot be sure until we test, qualify and validate
against a real setup. Also there is no infrastructure to run Ganesha in a
service VM, so the major work would be to bundle Ganesha and make it
available as a service VM image and use that image instead of the existing
service VM image. Csaba and Ramana (in CC) can comment more on this.


  This is what I have learned from code:



 Manila create service_network and service_subnet based on configurations
 in manila.conf:

 service_network_name = manila_service_network

 service_network_cidr = 10.254.0.0/16


So even if the service_network or service_subnet is not created, this
information from the conf file can be used by the network admin to
bridge/connect the service network (whenever it comes up) with the
host/provider network.
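
A sketch of what that bridging could look like on the neutron side (the
names here are purely illustrative, and it assumes a provider network
ext-net already exists):

    # attach the manila service subnet to a router that can reach the
    # provider network where the GlusterFS cluster lives
    neutron router-create manila-svc-router
    neutron router-interface-add manila-svc-router <service_subnet-id>
    neutron router-gateway-set manila-svc-router ext-net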


  service_network_division_mask = 28



 service_network is created when the manila-share service starts.

 service_subnet is created when the manila-share service gets a share
 create command and no share-server exists for the current share-network.

 => service_subnet is created at the same time as the share-server.


Thanks for clarifying.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack-gate] How to pass through devstack config

2015-01-28 Thread Deepak Shetty
Putting the right tag in the subject to see if someone can help answer the
below.
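
FWIW, if I understand the intent of that patch correctly, the idea is that
instead of adding yet another DEVSTACK_GATE_* flag, a job can append
arbitrary localrc lines via the devstack-gate passthrough variable. A
minimal sketch (the GlusterFS-specific values here are my guesses, not
taken from the patch):

    export DEVSTACK_LOCAL_CONFIG=$'CINDER_ENABLED_BACKENDS=glusterfs:glusterfs1\nTEMPEST_VOLUME_DRIVER=glusterfs'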

thanx,
deepak


On Tue, Jan 27, 2015 at 7:57 PM, Bharat Kumar bharat.kobag...@redhat.com
wrote:

  Hi,

 I have seen Sean Dague's patch [1]; if I understood correctly, with this
 patch we can reduce the number of DEVSTACK_GATE variables that we need.
 I am trying to follow this patch to configure my gate job variable
 DEVSTACK_GATE_GLUSTERFS [2].

 I am not able to figure out the way to use this patch [1].
 Please suggest how to use the patch [1] to configure my gate job [2].

 [1] https://review.openstack.org/#/c/145321/
 [2] https://review.openstack.org/#/c/143308/7/devstack-vm-gate.sh

 --
 Warm Regards,
 Bharat Kumar Kobagana
 Software Engineer
 OpenStack Storage – RedHat India


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-19 Thread Deepak Shetty
Just so that people following this thread know about the final decision:
per https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
the deadline for CI is extended to Mar. 3, 2015 for all volume drivers.

<snip>
Deadlines

All volume drivers
https://github.com/openstack/cinder/tree/master/cinder/volume/drivers
need to have a CI by the end of *K-3, March 19th 2015*. *Failure will result
in removal in the Kilo release.* Discussion regarding this was in the
#openstack-meeting IRC room during the Cinder meeting. Read the discussion logs:
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21

</snip>

On Tue, Jan 13, 2015 at 3:55 AM, Mike Perez thin...@gmail.com wrote:

 On 09:03 Mon 12 Jan , Erlon Cruz wrote:
  Hi guys,
 
  Thanks for answering my questions. I have 2 points:
 
  1 - This (removing drivers without CI) is a high-impact change to be
  implemented without exhausting notification and discussion on the mailing
  list. I myself was in the meeting but this decision wasn't crystal clear.
  There must be other driver maintainers completely unaware of this.

 I agree that the mailing list has not been exhausted, however, just
 reaching
 out to the mailing list is not good enough. My instructions back in
 November
 19th [1][2] were that we need to email individual maintainers and the
 openstack-dev list. That was not done. As far as I'm concerned, we can't
 stick
 to the current deadline for existing drivers. I will bring this up in the
 next
 Cinder meeting.

  2 - Building a CI infrastructure and having people maintain the CI for a
  new driver in a 5-week time frame. Not all companies have the knowledge
  and resources necessary to do this in such a short period. We should
  consider a grace release period, i.e. drivers entering in K have until L
  to implement their CIs.

 New driver maintainers have until March 19th. [3] That's around 17 weeks
 since
 we discussed this in November [2]. This is part the documentation for how
 to
 contribute a driver [4], which links to the third party requirement
 deadline
 [3].

 [1] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
 [2] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
 [3] -
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
 [4] - https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-19 Thread Deepak Shetty
Yuck! It's Mar. 19, 2015 (bad copy-paste before)

On Tue, Jan 20, 2015 at 12:16 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Just so that people following this thread know about the final decision:
 per
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
 the deadline for CI is extended to Mar. 3, 2015 for all volume drivers.

 <snip>
 Deadlines

 All volume drivers
 https://github.com/openstack/cinder/tree/master/cinder/volume/drivers
 need to have a CI by the end of *K-3, March 19th 2015*. *Failure will
 result in removal in the Kilo release.* Discussion regarding this was in
 the #openstack-meeting IRC room during the Cinder meeting. Read the
 discussion logs:
 http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21

 </snip>

 On Tue, Jan 13, 2015 at 3:55 AM, Mike Perez thin...@gmail.com wrote:

 On 09:03 Mon 12 Jan , Erlon Cruz wrote:
  Hi guys,
 
  Thanks for answering my questions. I have 2 points:
 
  1 - This (removing drivers without CI) is a high-impact change to be
  implemented without exhausting notification and discussion on the
  mailing list. I myself was in the meeting but this decision wasn't
  crystal clear.
  There must be other driver maintainers completely unaware of this.

 I agree that the mailing list has not been exhausted, however, just
 reaching
 out to the mailing list is not good enough. My instructions back in
 November
 19th [1][2] were that we need to email individual maintainers and the
 openstack-dev list. That was not done. As far as I'm concerned, we can't
 stick
 to the current deadline for existing drivers. I will bring this up in the
 next
 Cinder meeting.

   2 - Building a CI infrastructure and having people maintain the CI for a
   new driver in a 5-week time frame. Not all companies have the knowledge
   and resources necessary to do this in such a short period. We should
   consider a grace release period, i.e. drivers entering in K have until L
   to implement their CIs.

 New driver maintainers have until March 19th. [3] That's around 17 weeks
 since
 we discussed this in November [2]. This is part the documentation for how
 to
 contribute a driver [4], which links to the third party requirement
 deadline
 [3].

 [1] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
 [2] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
 [3] -
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
 [4] - https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Devstack plugins and gate testing

2015-01-13 Thread Deepak Shetty
On Tue, Jan 13, 2015 at 4:54 AM, Ian Wienand iwien...@redhat.com wrote:

 Hi,

 With [1] merged, we now have people working on creating external
 plugins for devstack.


The devstack plugin concept seems logical and useful to me.

GlusterFS Cinder CI is probably the first one implementing the plugin.
See https://review.openstack.org/#/c/146822/
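
For anyone who hasn't tried it, enabling an external plugin boils down to
one line in local.conf (the syntax is from the plugin mechanism merged in
[1]; the repo URL here is the GlusterFS one as an example and may differ):

    [[local|localrc]]
    # enable_plugin <name> <git-url> [branch]
    enable_plugin glusterfs https://github.com/stackforge/devstack-plugin-glusterfs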





 I worry about use of arbitrary external locations as plugins for gate
 jobs.  If a plugin is hosted externally (github, bitbucket, etc) we
 are introducing a whole host of problems when it is used as a gate
 job.  Lack of CI testing for proposed changes, uptime of the remote
 end, ability to accept contributions, lack of administrative access
 and consequent ability to recover from bad merges are a few.


+1

One suggestion: the doc for creating a new project at stackforge (
http://docs.openstack.org/infra/manual/creators.html )
is for a full-blown community project; there are a few things that can be
skipped/ignored for the devstack-plugin case.

Would it be a good idea to take that doc and trim it to have just enough
details that apply to creating a new devstack-plugin project ?



 I would propose we agree that plugins used for gate testing should be
 hosted in stackforge unless there are very compelling reasons
 otherwise.

 To that end, I've proposed [2] as some concrete wording.  If we agree,
 I could add some sort of lint for this to project-config testing.


+1

thanx,
deepak



 Thanks,

 -i

 [1] https://review.openstack.org/#/c/142805/ (Implement devstack external
 plugins)
 [2] https://review.openstack.org/#/c/146679/ (Document use of plugins for
 gate jobs)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Driver modes, share-servers, and clustered backends

2015-01-09 Thread Deepak Shetty
Some of my comments inline prefixed with deepakcs

On Fri, Jan 9, 2015 at 6:43 AM, Li, Chen chen...@intel.com wrote:

 Thanks for the explanations!
 Really helpful.

 My questions are added in line.

 Thanks.
 -chen

 -Original Message-
 From: Ben Swartzlander [mailto:b...@swartzlander.org]
 Sent: Friday, January 09, 2015 6:02 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Manila] Driver modes, share-servers, and
 clustered backends

 There has been some confusion on the topic of driver modes and
 share-server, especially as they related to storage controllers with
 multiple physical nodes, so I will try to clear up the confusion as much as
 I can.

 Manila has had the concept of share-servers since late icehouse. This
 feature was added to solve 3 problems:
 1) Multiple drivers were creating storage VMs / service VMs as a side
 effect of share creation and Manila didn't offer any way to manage or even
 know about these VMs that were created.
 2) Drivers needed a way to keep track of (persist) what VMs they had
 created

 == so, a corresponding relationship does exist between share servers and
 virtual machines.


deepakcs: I also have the same Q.. is there a relation between share server
and service VM or not ? Is there any other way you can implement a share
server w/o creating a service VM ?
IIUC, some may say that the vserver created in the netapp storage case is
equivalent to a share server ? If this is true, then we should have a notion
of whether the share server is within Manila or outside Manila too, no ? If
this is not true, then does the netapp cluster_mode driver get classified as
a single_svm mode driver ?



 3) We wanted to standardize across drivers what these VMs looked like to
 Manila so that the scheduler and share-manager could know about them

 == Q, why do the scheduler and share-manager need to know about them ?


deepakcs: I guess because these service VMs will be managed by Manila, hence
they need to know about them



 It's important to recognize that from Manila's perspective, all a
 share-server is is a container for shares that's tied to a share network
 and it also has some network allocations. It's also important to know that
 each share-server can have zero, one, or multiple IP addresses and can
 exist on an arbitrarily large number of physical nodes, and the actual form
 that a share-server takes is completely undefined.


deepakcs: I am confused about `can exist on an arbitrarily large number of
physical nodes` - how is this true in the case of the generic driver, where
the service VM is just a VM on one node? What does a large number of
physical nodes mean? Can you provide a real-world example to understand
this pls ?



 During Juno, drivers that didn't explicity support the concept of
 share-servers basically got a dummy share server created which acted as a
 giant container for all the shares that backend created. This worked okay,
 but it was informal and not documented, and it made some of the things we
 want to do in kilo impossible.

 == Q, what things are impossible? The dummy share server solution makes
 sense to me.


deepakcs: I looked at the stable/juno branch and I am not sure exactly
which part of the code you refer to as the dummy server. Can you pinpoint it
pls so that it's clear for all ? Are you referring to the ability of a driver
to handle setup_server as a dummy server creation ? For eg: in the glusterfs
case setup_server is a no-op and I don't see how a dummy share server
(meaning a service VM) is getting created from the code.




 To solve the above problem I proposed driver modes. Initially I proposed
 3 modes:
 1) single_svm
 2) flat_multi_svm
 3) managed_multi_svm

 Mode (1) was supposed to correspond to driver that didn't deal with share
 servers, and modes (2) and (3) were for drivers that did deal with share
 servers, where the difference between those 2 modes came down to networking
 details. We realized that (2) can be implemented as a special case of (3)
 so we collapsed the modes down to 2 and that's what's merged upstream now.

 == "driver that didn't deal with share servers"
   =
 https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
   = This is where I get totally lost.
   = Because the generic driver does not create and delete share
 servers and their related networks, but it would still use a share server
 (the service VM).
   = The share (the cinder volume) needs to attach to an instance no matter
 what the driver mode is.
   = I think "use" is some kind of "deal" too.


deepakcs: I partly agree with Chen above. If (1) doesn't deal with share
servers, why even have 'svm' in it ? Also in *_multi_svm mode, what does
'multi' mean ? IIRC we provide the ability to manage share servers, 1 per
tenant, so how does multi fit into the 1-share-server-per-tenant notion ? Or
am I completely wrong about it ?



 The specific names we settled on (single_svm and multi_svm) were perhaps
 poorly chosen, because svm is not a term we've used 

Re: [openstack-dev] Hierarchical Multitenancy

2014-12-24 Thread Deepak Shetty
Raildo,
   Thanks for putting up the blog, I really liked it as it helps to understand
how HMT works. I am interested to know more about how HMT can be exploited
by other OpenStack projects... esp. cinder and manila.
On Dec 23, 2014 5:55 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hi Raildo,

 Thanks for putting this post together. I really appreciate all the work
 you guys have done (and continue to do) to get the Hierarchical
 Mulittenancy code into Keystone. It’s great to have the base implementation
 merged into Keystone for the K1 milestone. I look forward to seeing the
 rest of the development land during the rest of this cycle and what the
 other OpenStack projects build around the HMT functionality.

 Cheers,
 Morgan



 On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:

 Hello folks, my team and I developed the Hierarchical Multitenancy concept
 for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we
 implemented? What are the next steps for Kilo?
 To answers these questions, I created a blog post 
 *http://raildo.me/hierarchical-multitenancy-in-openstack/
 http://raildo.me/hierarchical-multitenancy-in-openstack/*

 Any question, I'm available.

 --
 Raildo Mascena
 Software Engineer.
 Bachelor of Computer Science.
 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-13 Thread Deepak Shetty
I think you completely misunderstood my Q.
I am completely in agreement with _not_ putting CI status on the mailing list.

Let me rephrase:

As of now, I see 2 places where CI status is being tracked:

https://wiki.openstack.org/wiki/ThirdPartySystems (clicking on the link
tells you the status)
and
https://wiki.openstack.org/wiki/Cinder/third-party-ci-status (one of column
is status column)

How are the 2 different ? Do we need to update both ?

thanx,
deepak

On Sat, Dec 13, 2014 at 1:32 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/12/2014 03:28 AM, Deepak Shetty wrote:
  On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno ante...@anteaya.info
 wrote:
 
  On 12/11/2014 09:36 AM, Jon Bernard wrote:
  Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
  was marked as skipped, only the revert_resize test was failing.  I have
  submitted a patch to nova for this [1], and that yields an all green
  ceph ci run [2].  So at the moment, and with my revert patch, we're in
  good shape.
 
  I will fix up that patch today so that it can be properly reviewed and
  hopefully merged.  From there I'll submit a patch to infra to move the
  job to the check queue as non-voting, and we can go from there.
 
  [1] https://review.openstack.org/#/c/139693/
  [2]
 
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html
 
  Cheers,
 
  Please add the name of your CI account to this table:
  https://wiki.openstack.org/wiki/ThirdPartySystems
 
  As outlined in the third party CI requirements:
  http://ci.openstack.org/third_party.html#requirements
 
  Please post system status updates to your individual CI wikipage that is
  linked to this table.
 
 
  How is posting status there different than here :
  https://wiki.openstack.org/wiki/Cinder/third-party-ci-status
 
  thanx,
  deepak
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 There are over 100 CI accounts now and growing.

 Searching the email archives to evaluate the status of a CI is not
 something that infra will do, we will look on that wikipage or we will
 check the third-party-announce email list (which all third party CI
 systems should be subscribed to, as outlined in the third_party.html
 page linked above).

 If we do not find information where we have asked you to put it and where
 we expect it, we may disable your system until you have fulfilled the
 requirements as outlined in the third_party.html page linked above.

 Sprinkling status updates amongst the emails posted to -dev and
 expecting the infra team and other -devs to find them when needed is
 unsustainable and has been for some time, which is why we came up with
 the wikipage to aggregate them.

 Please direct all further questions about this matter to one of the two
 third-party meetings as linked above.

 Thank you,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-12 Thread Deepak Shetty
On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno ante...@anteaya.info wrote:

 On 12/11/2014 09:36 AM, Jon Bernard wrote:
  Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
  was marked as skipped, only the revert_resize test was failing.  I have
  submitted a patch to nova for this [1], and that yields an all green
  ceph ci run [2].  So at the moment, and with my revert patch, we're in
  good shape.
 
  I will fix up that patch today so that it can be properly reviewed and
  hopefully merged.  From there I'll submit a patch to infra to move the
  job to the check queue as non-voting, and we can go from there.
 
  [1] https://review.openstack.org/#/c/139693/
  [2]
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html
 
  Cheers,
 
 Please add the name of your CI account to this table:
 https://wiki.openstack.org/wiki/ThirdPartySystems

 As outlined in the third party CI requirements:
 http://ci.openstack.org/third_party.html#requirements

 Please post system status updates to your individual CI wikipage that is
 linked to this table.


How is posting status there different than here :
https://wiki.openstack.org/wiki/Cinder/third-party-ci-status

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Need reviews for Deploy GlusterFS server patch

2014-12-01 Thread Deepak Shetty
Just correcting the tag and subject line in $subject, so that it gets the
attention of the right group of folks (from devstack).

thanx,
deepak

On Mon, Dec 1, 2014 at 11:51 AM, Bharat Kumar bharat.kobag...@redhat.com
wrote:

 Hi All,

 Regarding the patch "Deploy GlusterFS Server" (
 https://review.openstack.org/#/c/133102/):
 I submitted this patch a while back, and it also got a Code Review +2.

 I think it is waiting for Workflow approval. Another task is dependent on
 this patch.
 Please review (Workflow) this patch and help me merge it.

 --
 Thanks  Regards,
 Bharat Kumar K


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-28 Thread Deepak Shetty
On Fri, Nov 28, 2014 at 10:32 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Deepak Shetty dpkshe...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
  But doesn't *-specs come very early in the process, where you have an
  idea/proposal of a feature but don't have it implemented yet? Hence specs
  just end up with paragraphs on how the feature is supposed to work, but
  don't include any real-world screen shots, as the code is not yet ready
  at that point of time. Along with the patch it would make more sense,
  since the author would have tested it, so it isn't a big overhead to
  capture those cli screen shots and put them in a .txt or .md file so that
  patch reviewers can see the patch in action and hence can review more
  effectively.
 
  thanx,
  deepak

 Sure but in the original email you listed a number of other items, not
 just CLI screen shots, including:

1) What changes are needed in manila.conf to make this work
2) How to use the cli with this change incorporated
3) Some screen shots of actual usage
4) Any caution/caveats that one has to keep in mind while using this

 Ideally I see 1, 2, and 4 as things that should be added to the spec
 (retrospectively if necessary) to ensure that it maintains an accurate
 record of the feature. I can see potential benefits to including listings
 of real world usage (3) in the client projects, but I don't think all of
 the items listed belong there.


Agree. IMHO (2) and (3) will be possible only when the patch is ready; the
others can be part of the spec.

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-26 Thread Deepak Shetty
Hi Valeriy,
   I know about the docs, but this was a proposal to provide small docs
which are patch specific, as that helps reviewers and other doc writers.

I have many a time seen people asking on IRC or on the list how to test
this patch, or saying "I did this with your patch but it didn't work";
such iterations can be reduced if we can have small docs (in free-flowing
text to begin with) associated with each patch that can help people other
than the author understand what/how the patch adds functionality, which
will improve the overall review quality and reviewers in general.

thanx,
deepak
P.S. I took the Manila patch just as an example, nothing specific about it
:)


On Wed, Nov 26, 2014 at 3:40 PM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 Hi Deepak,

 Docs are already present in every project; see for example manila
 - https://github.com/openstack/manila/tree/master/doc/source

 It is used for the docs on http://docs.openstack.org/ , and everyone is
 able to contribute to it.

 See docs built on basis of files from manila repo:
 http://docs.openstack.org/developer/manila/

 For most of projects we have already useful resource:
 http://docs.openstack.org/cli-reference/content/

 In conclusion I can say that it is more a question of the organization of
 creating such docs than of the possibility to create them in general.

 Regards,
 Valeriy Ponomaryov

 On Wed, Nov 26, 2014 at 8:01 AM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi stackers,
I was having this thought which i believe applies to all projects of
 openstack (Hence All in the subject tag)

 My proposal is to have examples or usecase folder in each project which
 has info on how to use the feature/enhancement (which was submitted as part
 of a gerrit patch)
 In short, a description with screen shots (cli, not GUI) which should be
 submitted (optionally or mandatory) along with patch (like how testcases
 are now enforced)

 I would like to take an example to explain. Take this patch @
 https://review.openstack.org/#/c/127587/ which adds a default volume
 type in Manila

 Now it would have been good if we could have a .txt or .md file along
 with the patch that explains :

 1) What changes are needed in manila.conf to make this work

 2) How to use the cli with this change incorporated

 3) Some screen shots of actual usage (Now the author/submitted would have
 tested in devstack before sending patch, so just copying those cli screen
 shots wouldn't be too big of a deal)

 4) Any caution/caveats that one has to keep in mind while using this

 It can be argued that some of the above is satisfied via commit msg and
 looking at test cases.
 But i personally feel that those still doesn't give a good visualization
 of how a feature patch works in reality

 Adding such a example/usecase file along with patch helps in multiple
 ways:

 1) It helps the reviewer get a good picture of how/which clis are
 affected and how this patch fits in the flow

 2) It helps documentor get a good view of how this patch adds value,
 hence can document it better

 3) It may help the author or anyone else write a good detailed blog post
 using the examples/usecase as a reference

 4) Since this becomes part of the patch and hence git log, if the
 feature/cli/flow changes in future, we can always refer to how the feature
 was designed, worked when it was first posted by looking at the example
 usecase

 5) It helps add a lot of clarity to the patch, since we know how the
 author tested it and someone can point missing flows or issues (which
 otherwise now has to be visualised)

 6) I feel this will help attract more reviewers to the patch, since now
 its more clear what this patch affects, how it affects and how flows are
 changing, even a novice reviewer can feel more comfortable and be confident
 to provide comments.

 Thoughts ?

 thanx,
 deepak


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-26 Thread Deepak Shetty
But doesn't *-specs come very early in the process, where you have an
idea/proposal of a feature but don't have it implemented yet? Hence specs
just end up with paragraphs on how the feature is supposed to work, but
don't include any real-world screen shots, as the code is not yet ready at
that point of time. Along with the patch it would make more sense, since
the author would have tested it, so it isn't a big overhead to capture
those cli screen shots and put them in a .txt or .md file so that patch
reviewers can see the patch in action and hence can review more effectively.

thanx,
deepak


On Thu, Nov 27, 2014 at 8:30 AM, Dolph Mathews dolph.math...@gmail.com
wrote:


 On Wed, Nov 26, 2014 at 1:15 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Deepak Shetty dpkshe...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
  Hi stackers,
 I was having this thought which i believe applies to all projects of
  openstack (Hence All in the subject tag)
 
  My proposal is to have examples or usecase folder in each project which
 has
  info on how to use the feature/enhancement (which was submitted as part
 of
  a gerrit patch)
  In short, a description with screen shots (cli, not GUI) which should be
  submitted (optionally or mandatory) along with patch (like how testcases
  are now enforced)
 
  I would like to take an example to explain. Take this patch @
  https://review.openstack.org/#/c/127587/ which adds a default volume
 type
  in Manila
 
  Now it would have been good if we could have a .txt or .md file along
 with
  the patch that explains :
 
  1) What changes are needed in manila.conf to make this work
 
  2) How to use the cli with this change incorporated
 
  3) Some screen shots of actual usage (Now the author/submitted would
 have
  tested in devstack before sending patch, so just copying those cli
 screen
  shots wouldn't be too big of a deal)
 
  4) Any caution/caveats that one has to keep in mind while using this
 
  It can be argued that some of the above is satisfied via commit msg and
  lookign at test cases.
  But i personally feel that those still doesn't give a good
 visualization of
  how a feature patch works in reality
 
  Adding such a example/usecase file along with patch helps in multiple
 ways:
 
  1) It helps the reviewer get a good picture of how/which clis are
 affected
  and how this patch fits in the flow
 
  2) It helps documentor get a good view of how this patch adds value,
 hence
  can document it better
 
  3) It may help the author or anyone else write a good detailed blog post
  using the examples/usecase as a reference
 
  4) Since this becomes part of the patch and hence git log, if the
  feature/cli/flow changes in future, we can always refer to how the
 feature
  was designed, worked when it was first posted by looking at the example
  usecase
 
  5) It helps add a lot of clarity to the patch, since we know how the
 author
  tested it and someone can point missing flows or issues (which otherwise
  now has to be visualised)
 
  6) I feel this will help attract more reviewers to the patch, since now
 its
  more clear what this patch affects, how it affects and how flows are
  changing, even a novice reviewer can feel more comfortable and be
 confident
  to provide comments.
 
  Thoughts ?

 I would argue that for the projects that use *-specs repositories this is
 the type of detail we would like to see in the specifications associated
 with the feature themselves rather than creating another separate
 mechanism. For the projects that don't use specs repositories (e.g. Manila)
 maybe this demand is an indication they should be considering them?


 +1 this is describing exactly what I expect out of *-specs.



 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-25 Thread Deepak Shetty
Hi stackers,
   I was having this thought which I believe applies to all projects of
openstack (hence the All in the subject tag).

My proposal is to have an examples or usecase folder in each project which
has info on how to use the feature/enhancement (which was submitted as part
of a gerrit patch).
In short, a description with screen shots (cli, not GUI) which should be
submitted (optionally or mandatorily) along with the patch (like how
testcases are now enforced).

I would like to take an example to explain. Take this patch @
https://review.openstack.org/#/c/127587/ which adds a default volume type
in Manila

Now it would have been good if we could have a .txt or .md file along with
the patch that explains :

1) What changes are needed in manila.conf to make this work

2) How to use the cli with this change incorporated

3) Some screen shots of actual usage (Now the author/submitted would have
tested in devstack before sending patch, so just copying those cli screen
shots wouldn't be too big of a deal)

4) Any caution/caveats that one has to keep in mind while using this
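
For illustration, such a file could be as simple as the skeleton below (the
file name, option name and CLI lines here are made up, not from the actual
patch):

    examples/default_volume_type.md
    -------------------------------
    # Default volume type for Manila

    ## 1. manila.conf changes
    [DEFAULT]
    some_new_option = some_value        # hypothetical option

    ## 2. CLI usage
    $ manila create NFS 1 --name myshare

    ## 3. Sample CLI output
    (paste the screen output from the devstack run here)

    ## 4. Caveats
    - any caution one has to keep in mind while using this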

It can be argued that some of the above is satisfied via the commit msg and
looking at test cases.
But I personally feel that those still don't give a good visualization of
how a feature patch works in reality

Adding such an example/usecase file along with the patch helps in multiple ways:

1) It helps the reviewer get a good picture of how/which CLIs are affected
and how this patch fits in the flow

2) It helps the documenter get a good view of how this patch adds value, and
hence can document it better

3) It may help the author or anyone else write a good detailed blog post
using the examples/usecase as a reference

4) Since this becomes part of the patch and hence the git log, if the
feature/CLI/flow changes in the future, we can always refer to how the feature
was designed and worked when it was first posted by looking at the example
usecase

5) It helps add a lot of clarity to the patch, since we know how the author
tested it, and someone can point out missing flows or issues (which otherwise
have to be visualised)

6) I feel this will help attract more reviewers to the patch, since now it's
clearer what this patch affects, how it affects it, and how flows are
changing; even a novice reviewer can feel more comfortable and be confident
providing comments.

Thoughts ?

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-26 Thread Deepak Shetty
On Thu, Sep 25, 2014 at 9:22 PM, Ben Nemec openst...@nemebean.com wrote:

 On 09/22/2014 01:29 AM, Deepak Shetty wrote:
  That's incorrect, as I said in my original mail.. I am using
 devstack+manila
  and it wasn't very clear to me that mysql-devel needs to be installed and
  it didn't get installed. I am on F20, not sure if that causes this; if
  yes, then we need to debug and fix this.

 This is because by default devstack only installs the packages needed to
 actually run OpenStack.  For unit test deps, you need the
 INSTALL_TESTONLY_PACKAGES variable set to true in your localrc.  I've


Interesting, I didn't know that! I will try using that the next time I
re-spin my devstack!
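
For the archives, the switch is a one-liner (a sketch; devstack reads localrc
entries as shell variables, and the same line also works in the
[[local|localrc]] section of local.conf):

    # localrc
    INSTALL_TESTONLY_PACKAGES=True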


 advocated to get it enabled by default in the past but was told that
 running unit tests on a devstack vm isn't the recommended workflow so
 they don't want to do that.


Hmm, really? devstack is for openstack dev, so any and all testing happens in
devstack
before you post the code, so I wonder who said that and why!
Maybe you should try again, citing this thread as an example


 
  Maybe it's a good idea to put a comment in requirements.txt stating that
 the
  following C libs need to be installed for the venv to work smoothly.
 That
  would help too for the short term.

 It's worth noting that you would need multiple entries for each lib
 since every distro tends to call them something different.

Agreed, hence I suggested that a comment should be added which will hint the
dev to install the right package
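
For example, something along these lines at the top of test-requirements.txt
(wording illustrative; the distro package names differ, e.g. mysql-devel on
Fedora/RHEL vs libmysqlclient-dev on Debian/Ubuntu):

    # NOTE: MySQL-python builds a C extension and needs the MySQL client
    # headers installed first (mysql-devel / libmysqlclient-dev); pip
    # cannot install those for you.
    MySQL-python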

thanx,
deepak



 
  On Sun, Sep 21, 2014 at 12:12 PM, Valeriy Ponomaryov 
  vponomar...@mirantis.com wrote:
 
  The dep MySQL-python is already in the test-requirements.txt file. As Andreas
  said, the second one, mysql-devel, is a C lib and cannot be installed via
 pip.
  So the project itself, as with all projects in OpenStack, cannot install it.
 
  C lib deps are handled by Devstack, if it is used. See:
  https://github.com/openstack-dev/devstack/tree/master/files/rpms
 
 
 https://github.com/openstack-dev/devstack/blob/2f27a0ed3c609bfcd6344a55c121e56d5569afc9/functions-common#L895
 
  Yes, Manila could have its files in the same way in
  https://github.com/openstack/manila/tree/master/contrib/devstack , but
  this lib already exists in the deps for other projects. So, I guess you
 used
  the Manila run_tests.sh file on a host without a devstack installation; in
 that
  case all other projects would fail in the same way.
 
  On Sun, Sep 21, 2014 at 2:54 AM, Alex Leonhardt 
 aleonhardt...@gmail.com
  wrote:
 
  And yet it's a dependency so I'm with Deepak and it should at least be
  mentioned in the prerequisites on a webpage somewhere .. :) I might
 even
  try and update/add that myself as it caught me out a few times too..
 
  Alex
   On 20 Sep 2014 12:44, Andreas Jaeger a...@suse.com wrote:
 
  On 09/20/2014 09:34 AM, Deepak Shetty wrote:
   Thanks, that worked.
  Any idea why it doesn't install it automatically and/or it isn't
  present
  in requirements.txt ?
  I thought that was the purpose of requirements.txt ?
 
  AFAIU requirements.txt has only python dependencies while
  mysql-devel is a C development package.
 
  Andreas
  --
   Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica:
 jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
 GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG
 Nürnberg)
  GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272
 A126
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Kind Regards
  Valeriy Ponomaryov
  www.mirantis.com
  vponomar...@mirantis.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-22 Thread Deepak Shetty
That's incorrect, as I said in my original mail.. I am using devstack+manila
and it wasn't very clear to me that mysql-devel needs to be installed and
it didn't get installed. I am on F20, not sure if that causes this; if
yes, then we need to debug and fix this.

Maybe it's a good idea to put a comment in requirements.txt stating that the
following C libs need to be installed for the venv to work smoothly. That
would help too for the short term.

On Sun, Sep 21, 2014 at 12:12 PM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 The dep MySQL-python is already in the test-requirements.txt file. As Andreas
 said, the second one, mysql-devel, is a C lib and cannot be installed via pip.
 So the project itself, as with all projects in OpenStack, cannot install it.

 C lib deps are handled by Devstack, if it is used. See:
 https://github.com/openstack-dev/devstack/tree/master/files/rpms

 https://github.com/openstack-dev/devstack/blob/2f27a0ed3c609bfcd6344a55c121e56d5569afc9/functions-common#L895

 Yes, Manila could have its files in the same way in
 https://github.com/openstack/manila/tree/master/contrib/devstack , but
 this lib already exists in the deps for other projects. So, I guess you used
 the Manila run_tests.sh file on a host without a devstack installation; in that
 case all other projects would fail in the same way.

 On Sun, Sep 21, 2014 at 2:54 AM, Alex Leonhardt aleonhardt...@gmail.com
 wrote:

 And yet it's a dependency so I'm with Deepak and it should at least be
 mentioned in the prerequisites on a webpage somewhere .. :) I might even
 try and update/add that myself as it caught me out a few times too..

 Alex
  On 20 Sep 2014 12:44, Andreas Jaeger a...@suse.com wrote:

 On 09/20/2014 09:34 AM, Deepak Shetty wrote:
  Thanks, that worked.
  Any idea why it doesn't install it automatically and/or it isn't
 present
  in requirements.txt ?
  I thought that was the purpose of requirements.txt ?

 AFAIU requirements.txt has only python dependencies while
 mysql-devel is a C development package.

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-22 Thread Deepak Shetty
Even better, whenever ./run_tests fails... maybe print a msg stating which
C libs need to be installed and have the user check the
same.. something like that would help too.

On Mon, Sep 22, 2014 at 11:59 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 That's incorrect, as I said in my original mail.. I am using
 devstack+manila and it wasn't very clear to me that mysql-devel needs to be
 installed and it didn't get installed. I am on F20, not sure if that causes
 this; if yes, then we need to debug and fix this.

 Maybe it's a good idea to put a comment in requirements.txt stating that
 the following C libs need to be installed for the venv to work smoothly.
 That would help too for the short term.

 On Sun, Sep 21, 2014 at 12:12 PM, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:

 The dep MySQL-python is already in the test-requirements.txt file. As Andreas
 said, the second one, mysql-devel, is a C lib and cannot be installed via
 pip. So the project itself, as with all projects in OpenStack, cannot install it.

 C lib deps are handled by Devstack, if it is used. See:
 https://github.com/openstack-dev/devstack/tree/master/files/rpms

 https://github.com/openstack-dev/devstack/blob/2f27a0ed3c609bfcd6344a55c121e56d5569afc9/functions-common#L895

 Yes, Manila could have its files in the same way in
 https://github.com/openstack/manila/tree/master/contrib/devstack , but
 this lib already exists in the deps for other projects. So, I guess you used
 the Manila run_tests.sh file on a host without a devstack installation; in that
 case all other projects would fail in the same way.

 On Sun, Sep 21, 2014 at 2:54 AM, Alex Leonhardt aleonhardt...@gmail.com
 wrote:

 And yet it's a dependency so I'm with Deepak and it should at least be
 mentioned in the prerequisites on a webpage somewhere .. :) I might even
 try and update/add that myself as it caught me out a few times too..

 Alex
  On 20 Sep 2014 12:44, Andreas Jaeger a...@suse.com wrote:

 On 09/20/2014 09:34 AM, Deepak Shetty wrote:
  Thanks, that worked.
  Any idea why it doesn't install it automatically and/or it isn't
 present
  in requirements.txt ?
  I thought that was the purpose of requirements.txt ?

 AFAIU requirements.txt has only python dependencies while
 mysql-devel is a C development package.

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] ./run_tests issue

2014-09-20 Thread Deepak Shetty
I keep hitting this issue in my F20 based devstack env ...

./run_tests.sh -V


.

gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
-D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
-D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(1,2,5,'final',1)
-D__version__=1.2.5 -I/usr/include/mysql -I/usr/include/python2.7 -c
_mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -g -pipe
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -D_GNU_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fPIC -g -static-libgcc
-fno-omit-frame-pointer -fno-strict-aliasing -DMY_PTHREAD_FASTMUTEX=1

_mysql.c:44:23: fatal error: my_config.h: No such file or directory

 #include "my_config.h"

   ^

compilation terminated.

error: command 'gcc' failed with exit status 1


Cleaning up...
Command /opt/stack/manila/.venv/bin/python -c import setuptools,
tokenize;__file__='/opt/stack/manila/.venv/build/MySQL-python/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))
install --record /tmp/pip-J7FYeG-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/opt/stack/manila/.venv/include/site/python2.7 failed with error code 1 in
/opt/stack/manila/.venv/build/MySQL-python
Traceback (most recent call last):
  File /opt/stack/manila/.venv/bin/pip, line 11, in module
sys.exit(main())
  File
/opt/stack/manila/.venv/lib/python2.7/site-packages/pip/__init__.py, line
185, in main
return command.main(cmd_args)
  File
/opt/stack/manila/.venv/lib/python2.7/site-packages/pip/basecommand.py,
line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 36:
ordinal not in range(128)
Command tools/with_venv.sh pip install --upgrade -r
/opt/stack/manila/requirements.txt -r
/opt/stack/manila/test-requirements.txt failed.
None



Version of different tools on my system...

[stack@devstack-large-vm manila]$ pip --version
pip 1.5.6 from /usr/lib/python2.7/site-packages (python 2.7)
[stack@devstack-large-vm manila]$ tox --version
1.7.2 imported from /usr/lib/python2.7/site-packages/tox/__init__.pyc
[stack@devstack-large-vm manila]$ virtualenv --version
1.11.6

=

Can anybody see what I am missing that causes ./run_tests.sh to fail during
the install/build of MySQL-python ?

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-20 Thread Deepak Shetty
Thanks, that worked.
Any idea why it doesn't install it automatically and/or why it isn't present in
requirements.txt ?
I thought that was the purpose of requirements.txt ?

On Sat, Sep 20, 2014 at 12:00 PM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 These should help:
 sudo yum -y install mysql-devel
 sudo pip install MySQL-python

 On Sat, Sep 20, 2014 at 9:15 AM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 I keep hitting this issue in my F20 based devstack env ...

 ./run_tests.sh -V

 
 .

 gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
 --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
 -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
 --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
 -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(1,2,5,'final',1)
 -D__version__=1.2.5 -I/usr/include/mysql -I/usr/include/python2.7 -c
 _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -g -pipe
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
 --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -D_GNU_SOURCE
 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fPIC -g -static-libgcc
 -fno-omit-frame-pointer -fno-strict-aliasing -DMY_PTHREAD_FASTMUTEX=1

 _mysql.c:44:23: fatal error: my_config.h: No such file or directory

  #include "my_config.h"

^

 compilation terminated.

 error: command 'gcc' failed with exit status 1

 
 Cleaning up...
 Command /opt/stack/manila/.venv/bin/python -c import setuptools,
 tokenize;__file__='/opt/stack/manila/.venv/build/MySQL-python/setup.py';exec(compile(getattr(tokenize,
 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))
 install --record /tmp/pip-J7FYeG-record/install-record.txt
 --single-version-externally-managed --compile --install-headers
 /opt/stack/manila/.venv/include/site/python2.7 failed with error code 1 in
 /opt/stack/manila/.venv/build/MySQL-python
 Traceback (most recent call last):
   File /opt/stack/manila/.venv/bin/pip, line 11, in module
 sys.exit(main())
   File
 /opt/stack/manila/.venv/lib/python2.7/site-packages/pip/__init__.py, line
 185, in main
 return command.main(cmd_args)
   File
 /opt/stack/manila/.venv/lib/python2.7/site-packages/pip/basecommand.py,
 line 161, in main
 text = '\n'.join(complete_log)
 UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 36:
 ordinal not in range(128)
 Command tools/with_venv.sh pip install --upgrade -r
 /opt/stack/manila/requirements.txt -r
 /opt/stack/manila/test-requirements.txt failed.
 None

 

 Version of different tools on my system...

 [stack@devstack-large-vm manila]$ pip --version
 pip 1.5.6 from /usr/lib/python2.7/site-packages (python 2.7)
 [stack@devstack-large-vm manila]$ tox --version
 1.7.2 imported from /usr/lib/python2.7/site-packages/tox/__init__.pyc
 [stack@devstack-large-vm manila]$ virtualenv --version
 1.11.6

 =

 Can anybody see what I am missing that causes ./run_tests.sh to fail during
 the install/build of MySQL-python ?

 thanx,
 deepak


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Debug data for the NFS v4 hang issue

2014-08-08 Thread Deepak Shetty
Per yesterday's IRC meeting, I have updated the debug data I had collected
in the github issue @

https://github.com/csabahenk/cirros/issues/9

It has data for both:
32bit nfs client accessing 64bit cirros nfs server
64bit nfs client accessing 64bit cirros nfs server

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Request to review cert-based-access-type blueprint

2014-07-18 Thread Deepak Shetty
Hi List,
I just proposed a new bp @

https://blueprints.launchpad.net/manila/+spec/cert-based-access-type

Looking for your feedback/comments.

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Query on docstrings and method names

2014-06-26 Thread Deepak Shetty
Hi All,
With respect to the comment made by xian-yang @
https://review.openstack.org/#/c/102496/1/manila/share/drivers/glusterfs.py

for _update_share_status and the docstring that the method has, which is
"Retrieve status info from share volume group."

I have a few questions based on the above...

1) share volume group in the docstring is incorrect, since it's a glusterfs
driver. But I think I know why it says volume group: probably because it
came from lvm.py to begin with. I see that all other drivers also say
volume group, though it may not be the right thing to say for their
respective cases.

Do we want to ensure that the docstrings are put in a way that's meaningful
to the driver ?

2) _update_share_status method - I see the same issue here.. it says the
same in all other drivers.. but as xian pointed out, it should rightfully be
called _update_share_stats. So should we wait for all drivers to follow suit,
or start changing the driver-specific code as and when we touch that part of
the code ? (a small sketch of the corrected version is below)
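
For instance, a minimal sketch of what the corrected glusterfs version could
look like (name per xian's suggestion, body elided):

    def _update_share_stats(self):
        """Retrieve stats info from the GlusterFS volume."""
        # (body unchanged; only the name and docstring are corrected)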

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][Manila][docs] Setting up Devstack with Manila on Fedora 20

2014-06-25 Thread Deepak Shetty
Hi List,
    I have created a new wiki page with the goal of documenting the steps
needed to set up DevStack with Manila on F20. I added some troubleshooting
tips based on my experience.

https://wiki.openstack.org/wiki/Manila/docs/Setting_up_DevStack_with_Manila_on_Fedora_20

Pls have a look and provide comments, if any.

The idea is to have this updated as and when new tips and/or corrections
are needed so that this can become a good reference for people starting on
Manila

thanx,
deepak

P.S. I added this page on https://wiki.openstack.org/wiki/Manila/docs/
under the `Fedora 20` link and removed the F19 link that was present before,
which is now outdated.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] GenericDriver cinder volume error during manila create

2014-06-16 Thread Deepak Shetty
I am trying devstack on an F20 setup with Manila sources.

When I try to do
manila create --name cinder_vol_share_using_nfs2 --share-network-id
36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2

I see the below error in c-vol, due to which, even though my service VM is
started, manila create errors out as the cinder volume is not getting
exported via iSCSI

2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a
6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: being created as
raw with specification: {'status': u'creating', 'volume_size': 2,
'volume_name': u'volume-8bfd
424d-9877-4c20-a9d1-058c06b9bdda'}
2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a6d5c
795095] Running cmd (subprocess): sudo cinder-rootwrap
/etc/cinder/rootwrap.conf lvcreate -n
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g from (pid=4
623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a
6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
(8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped when
creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2', 'tenant':
u'b65a066f32df4aca80
fa9a6d5c795095', 'user_identity': u'd9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095 - - -'}
2014-06-16 16:39:38.426 DEBUG cinder.volume.manager
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Volume
8bfd424d-9877-4c20-a9d1-058c06b9bdda: creating export from (pid=4623)
initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Creat
ing iscsi_target for: volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Crea
ted volume path
/opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda,
content:
<target iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda>
    backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
    lld iscsi
    IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe
</target>
from (pid=4623) create_iscsi_target
/opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c
795095] Running cmd (subprocess): sudo cinder-rootwrap
/etc/cinder/rootwrap.conf tgt-admin --update
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
a from (pid=4623) execute
/opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c
795095] Result was 107 from (pid=4623) execute
/opt/stack/cinder/cinder/openstack/common/processutils.py:167
2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799
d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Failed
to create iscsi target for volume
id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while
running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
Exit code: 107
Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1
-T
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda\nexited
with code: 107.\n'
Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport
endpoint is not connected\ntgtadm: failed to send request hdr to tgt
daemon, Transport endpoint is not connected\ntgtadm: failed to send request
hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to
send request hdr to tgt daemon, Transport endpoint is not connected\n'
2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed
to create iscsi target for volume
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda.
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher Traceback (most
recent call last):
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher 

[openstack-dev] [Nova][devstack] Unable to boot cirros-0.3.2-x86_64-uec image

2014-06-11 Thread Deepak Shetty
Hi,
  I am using the below cmd to boot the cirros-0.3.2-x86_64-uec image that is
present in devstack
by default...

 nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --key_name
mykey --security_group default myvm_nano
nova list - shows the instance as ACTIVE/Running

Taking the VNC console, I see that it is stuck at Booting from ROM.

Can someone help why the image is not booting ?

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] How to mock the LOG inside cinder driver

2014-06-03 Thread Deepak Shetty
deepakcs Hi, whats the right way to mock the LOG variable inside the
driver ? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
deepakcs and then doing...
deepakcs mock_logger.warning.assert_called_once() - which passes and is
expected to pass per my code
deepakcs but
deepakcs mock_logger.debug.assert_called_once() - shud fail , but this
also passes !
deepakcs any idea why ?

I feel that I am not mocking the LOG inside the driver correctly.

I also tried
   mock.patch.object(glusterfs.LOG, 'warning'),
mock.patch.object(glusterfs.LOG, 'debug')
as mock_logger_warn and mock_logger_debug respectively

But here too
.debug and .warning both pass.. while the expected result is for .warning
to pass and .debug to fail

So somehow I am unable to mock LOG properly
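
For reference, a minimal sketch of what the test is doing (the driver object
and the method that logs are placeholders for my actual test setup):

    import mock

    from cinder.volume.drivers import glusterfs

    with mock.patch.object(glusterfs, 'LOG') as mock_logger:
        driver._method_that_should_warn()  # hypothetical driver call
        mock_logger.warning.assert_called_once()  # passes, as expected
        mock_logger.debug.assert_called_once()    # should fail, but passes too!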

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Cinder] How to mock the LOG inside cinder driver

2014-06-03 Thread Deepak Shetty
Wrongly sent to Joshua only, hence fwding to the list.

--

Joshua,
  If my code logs warning, error, or debug based on diff exceptions or
conditions, it's good to test them and have a unit test around it
so that we can catch scenarios where we modified code that ideally should
have just put a warning but wrongly put a debug/error. That's my only
intention here


On Tue, Jun 3, 2014 at 11:54 PM, Joshua Harlow harlo...@yahoo-inc.com
wrote:

  Why is mocking the LOG object useful/being used?

  Testing functionality which depends on LOG triggers/calls imho is bad
 practice (and usually means something needs to be refactored).

  LOG statements, and calls should be expected to move/be removed *often*
 so testing functionality in tests with them seems like the wrong approach.

  My 2 cents.

   From: Deepak Shetty dpkshe...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 3, 2014 at 9:16 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Cinder] How to mock the LOG inside cinder driver

  deepakcs Hi, whats the right way to mock the LOG variable inside
 the driver ? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
 deepakcs and then doing...
 deepakcs mock_logger.warning.assert_called_once() - which passes and is
 expected to pass per my code
 deepakcs but
 deepakcs mock_logger.debug.assert_called_once() - shud fail , but this
 also passes !
 deepakcs any idea why ?

  I feel that I am not mocking the LOG inside the driver correctly.

 I also tried
mock.patch.object(glusterfs.LOG, 'warning'),
 mock.patch.object(glusterfs.LOG, 'debug')
  as mock_logger_warn and mock_logger_debug respectively

  But here too
  .debug and .warning both pass.. while the expected result is for
 .warning to pass and .debug to fail

  So somehow I am unable to mock LOG properly

 thanx,
 deepak


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [solved] How to mock the LOG inside cinder driver

2014-06-03 Thread Deepak Shetty
The below issue was resolved (thanks to akerr on IRC).
It seems assert_called_once() is not a real function of mock, so it silently
does nothing instead of asserting.
Need to use assertTrue(mock_func.called), and that's working for me.
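
For the archives, a minimal sketch of the gotcha (behaviour of the mock
library as of this writing, where assert_called_once() does not exist):

    import mock

    m = mock.Mock()
    m.warning('boo')

    # Accessing any attribute on a Mock auto-creates a child Mock, so this
    # typo'd "assertion" is just a call on an auto-created attribute; it
    # never raises, whether or not m.debug was ever called:
    m.debug.assert_called_once()

    # Reliable checks: the .called flag, or a real assert helper:
    assert m.warning.called
    assert not m.debug.called
    m.warning.assert_called_once_with('boo')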

thanx,
deepak


On Tue, Jun 3, 2014 at 9:46 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 deepakcs Hi, whats the right way to mock the LOG variable inside the
 driver ? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
 deepakcs and then doing...
 deepakcs mock_logger.warning.assert_called_once() - which passes and is
 expected to pass per my code
 deepakcs but
 deepakcs mock_logger.debug.assert_called_once() - shud fail , but this
 also passes !
 deepakcs any idea why ?

 I feel that I am not mocking the LOG inside the driver correctly.

 I also tried
mock.patch.object(glusterfs.LOG, 'warning'),
 mock.patch.object(glusterfs.LOG, 'debug')
 as mock_logger_warn and mock_logger_debug respectively

 But here too
 .debug and .warning both pass.. while the expected result is for
 .warning to pass and .debug to fail

 So somehow I am unable to mock LOG properly

 thanx,
 deepak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-28 Thread Deepak Shetty
Mitsuhiro,
  Few questions that come to my mind based on your proposal

1) There is a lot of manual work needed here.. like every time a new host is
added.. the admin needs to do FC zoning to ensure that the LU is visible by
the host. Also, the method you mentioned for refreshing (echo '---' > ...)
doesn't work reliably across all storage types, does it ?

2) In Slide 1-1 .. how (and who?) ensures that the compute nodes don't
step on each other in using the LVs ? In other words.. how is it ensured
that LV1 is not used by compute nodes 1 and 2 at the same time ?

3) In slide 1-2, you show that LU1 is seen as /dev/sdx on all the
nodes.. this is wrong.. it can be seen as anything (/dev/sdx on the control
node, sdn on compute 1, sdz on compute 2), so assuming sdx on all nodes is
wrong.
How are these different device names handled.. in short, how does compute
node 2 know that LU1 is actually sdn and not sdz (assuming you had > 1 LUs
provisioned) ? See also the sketch after these questions.

4) What about multipath ? In most prod envs.. the FC storage will be
multipathed.. hence you will actually see sdx and sdy on each node, and you
actually need to use the mpathN device (which is multipathed to sdx and sdy)
and NOT the sd? device to take advantage of the customer's multipath env. How
do the nodes know which mpath? device to use and which mpath? device maps to
which LU on the array ?

5) Doesn't this new proposal also cause the compute nodes to be physically
connected (via FC) to the array, which means more wiring and the need for FC
HBAs on compute nodes ? With LVMiSCSI, we don't need FC HBAs on compute nodes,
so you are actually adding the cost of an FC HBA to each compute node and
slowly turning a commodity system into a non-commodity one ;-) (in a way)

6) Last but not the least... since you are using 1 BIG LU on the array to
host multiple volumes, you cannot possibly take advantage of the premium,
efficient snapshot/clone/mirroring features of the array, since they are at
the LU level, not at the LV level. LV snapshots have limitations (as mentioned
by you in the other thread) and are always inefficient compared to array
snapshots. Why would someone want to use a less efficient method when they
have invested in an expensive array ?
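
(A small sketch for question 3, with a made-up WWN and paths: the stable way
to find which sd? device a given LU landed on would be the by-id symlinks
udev creates, rather than assuming the same /dev/sdx name everywhere.)

    # Python sketch; /dev/disk/by-id names are stable across hosts,
    # while the /dev/sdX names they resolve to are not.
    import os

    lu1 = '/dev/disk/by-id/wwn-0x60060160a0b01234'  # hypothetical WWN of LU1
    print(os.path.realpath(lu1))  # e.g. /dev/sdn on compute 1, /dev/sdz on compute 2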

thanx,
deepak



On Tue, May 20, 2014 at 9:01 PM, Mitsuhiro Tanino
mitsuhiro.tan...@hds.comwrote:

  Hello All,



 I’m proposing a feature of LVM driver to support LVM on a shared LU.

 The proposed LVM volume driver provides these benefits.
   - Reduce hardware-based storage workload by offloading the workload to
 software-based volume operations.
   - Provide quicker volume creation and snapshot creation without storage
 workloads.
   - Enable cinder to use any kind of shared storage volume without a specific
 cinder storage driver.

   - Better I/O performance using direct volume access via Fibre channel.



 In the attachment pdf, following contents are explained.

   1. Detail of Proposed LVM volume driver

   1-1. Big Picture

   1-2. Administrator preparation

   1-3. Work flow of volume creation and attachment

   2. Target of Proposed LVM volume driver

   3. Comparison of Proposed LVM volume driver



 Could you review the attachment?

 Any comments, questions, additional ideas would be appreciated.





 Also there are blueprints, wiki and patches related to the slide.

 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

 https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage


 https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder

 https://review.openstack.org/#/c/92479/

 https://review.openstack.org/#/c/92443/



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

