[openstack-dev] [infra] [devstack] [smaug] gate-smaug-dsvm-fullstack-nv is failed with exit code: 2

2016-05-31 Thread xiangxinyong
Hello team,


The gate-smaug-dsvm-fullstack-nv job failed with exit code 2.


The console.html [1] includes the following:

  Running devstack
  ERROR: the main setup script run by this job failed - exit code: 2

The devstacklog.txt.gz [2] includes the following:

  + functions-common:git_clone:533:   echo 'The /opt/stack/new/noVNC project was not found; if this is a gate job, add'
  The /opt/stack/new/noVNC project was not found; if this is a gate job, add
  + functions-common:git_clone:534:   echo 'the project to the $PROJECTS variable in the job definition.'
  the project to the $PROJECTS variable in the job definition.
  + functions-common:git_clone:535:   die 535 'Cloning not allowed in this configuration'
  + functions-common:die:186:   local exitcode=0

I guess the problem is related to this file [3].
Could someone help? Thanks very much.
[1] 
http://logs.openstack.org/29/321329/2/check/gate-smaug-dsvm-fullstack-nv/c734eea/console.html
[2] 
http://logs.openstack.org/29/321329/2/check/gate-smaug-dsvm-fullstack-nv/c734eea/logs/devstacklog.txt.gz
[3] 
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/smaug.yaml
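If my guess about [3] is right, the fix might be something like this in the
job's builder section (a rough sketch only; I am not sure of the exact project
name for noVNC in the gate, or whether disabling the service is the better
route):

  - shell: |
      # either have the gate pre-clone the missing project (if it is mirrored) ...
      export PROJECTS="openstack/noVNC $PROJECTS"
      # ... or avoid needing noVNC at all by disabling the novnc service
      export DEVSTACK_LOCAL_CONFIG="disable_service n-novnc"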


Best Regards,
  xiangxinyong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Horizon in devstack is broken, rechecks are futile

2016-05-31 Thread Timur Sufiev
The final patch https://review.openstack.org/#/c/321640/ has just merged;
Horizon in Devstack is fully functional again and integration tests should be
passing from this moment on. If any of your patches are still failing due to
the dsvm-integration job's multiple failures, please rebase them.
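If it helps, rebasing a Gerrit change from the command line looks roughly like
this (the change number below is just an example):

  git review -d 321234        # download your change locally
  git fetch origin
  git rebase origin/master    # resolve any conflicts
  git review                  # upload the rebased patch set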

On Fri, May 27, 2016 at 4:59 PM Brant Knudson  wrote:

> On Fri, May 27, 2016 at 8:39 AM, Timur Sufiev 
> wrote:
>
>> The root cause of Horizon issue has been identified and fixed at
>> https://review.openstack.org/#/c/321639/
>> The next steps are to release a new version of the django-openstack-auth
>> library (which the above fix belongs to), update global-requirements (if
>> it's not automatic; I'm not very familiar with the details of release
>> management for OpenStack components), update horizon requirements from
>> global requirements, and then merge the final patch
>> https://review.openstack.org/#/c/321640/ - this time into the horizon repo.
>> Once all that is done, gate should be unblocked.
>>
>> Optimistic ETA is by tonight.
>>
>> On Wed, May 25, 2016 at 10:57 PM Timur Sufiev 
>> wrote:
>>
>>> Dear Horizon contributors,
>>>
>>> The test job dsvm-integration has been failing for a genuine reason for the
>>> last ~24 hours. Please do not recheck your patches if you see that almost
>>> all integration tests fail (and only these tests) - it won't help. The fix
>>> for the django_openstack_auth issue, which was uncovered by the recent
>>> devstack change (see https://bugs.launchpad.net/horizon/+bug/1585682), is
>>> being worked on. Stay tuned; there will be another notification when
>>> rechecks become meaningful again.
>>>
>>
>>
> Thanks for working on this. It will help us eventually get to a devstack
> where keystone and potentially the rest of the API servers are listening on
> paths rather than on ports. I had to fix a similar issue in tempest.
>
> To request a release, send a review to update
> http://git.openstack.org/cgit/openstack/releases/tree/deliverables/mitaka/django-openstack-auth.yaml
> with the new library version and commit hash. You'll have to create a new
> yaml file for newton since there hasn't been a release yet.
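> A deliverable entry is shaped roughly like this (the version and hash below
> are only placeholders, not the actual values to use):
>
>   releases:
>     - version: x.y.z
>       projects:
>         - repo: openstack/django_openstack_auth
>           hash: <commit sha to release>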
>
> --
> - Brant
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-31 Thread Daniel P. Berrange
On Tue, May 24, 2016 at 01:59:17PM -0400, Sean Dague wrote:
> The team working on live migration testing started with an experimental
> job on Ubuntu 16.04 to try to be using the latest and greatest libvirt +
> qemu under the assumption that a set of issues we were seeing are
> solved. The short answer is, it doesn't look like this is going to work.
> 
> We run tests on a bunch of different clouds. Those clouds expose
> different cpu flags to us. These are not standard things that map to
> "Haswell". It means live migration in the multinode cases can hit cpus
> with different flags. So we found the requirement was to come up with a
> least common denominator of cpu flags, which we call gate64, and push
> that into the libvirt cpu_map.xml in devstack, and set whenever we are
> in a multinode scenario.
> (https://github.com/openstack-dev/devstack/blob/master/tools/cpu_map_update.py)
>  Not ideal, but with libvirt 1.2.2 it works fine.
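> (For anyone unfamiliar, a cpu_map.xml model entry is shaped roughly like the
> following -- illustrative only, not the exact gate64 definition devstack
> writes:)
>
>     <model name='gate64'>
>       <model name='qemu64'/>
>       <feature name='lm'/>
>     </model>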
> 
> It turns out it works fine because libvirt *actually* seems to take the
> data from cpu_map.xml and do a translation to what it believes qemu will
> understand. On these systems apparently this turns into "-cpu
> Opteron_G1,-pse36"
> (http://logs.openstack.org/29/42529/24/check/gate-tempest-dsvm-multinode-full/5f504c5/logs/libvirt/qemu/instance-000b.txt.gz)
> 
> At some point between libvirt 1.2.2 and 1.3.1, this changed. Now libvirt
> seems to be passing our cpu_model directly to qemu, and assumes that as
> a user you will be responsible for writing all the <feature> stanzas to
> add/remove yourself. When libvirt sends 'gate64' to qemu, this explodes,
> as qemu has no idea what we are talking about.
> http://logs.openstack.org/34/319934/2/experimental/gate-tempest-dsvm-multinode-live-migration/b87d689/logs/screen-n-cpu.txt.gz#_2016-05-24_15_59_12_531
> 
> Unlike libvirt, which has a text file (xml) that configures the cpus
> that could exist in the world, qemu builds this in statically at compile
> time:
> http://git.qemu.org/?p=qemu.git;a=blob;f=target-i386/cpu.c;h=895a386d3b7a94e363ca1bb98821d3251e70c0e0;hb=HEAD#l694
> 
> 
> So, the existing cpu_map.xml workaround for our testing situation will
> no longer work.
> 
> So, we have a number of open questions:
> 
> * Have our cloud providers standardized enough that we might get away
> without this custom cpu model? (Have some of them done it and only use
> those for multinode?)
> * Is there any way to get this feature back in libvirt to do the cpu
> computation?
> * Would we have to build a whole nova feature around setting libvirt xml
>  to be able to test live migration in our clouds?
> * Other options?
> * Do we give up and go herd goats?

Rather than try to define our own custom CPU models, we can probably
just use one of the standard CPU models and then explicitly tell
libvirt which flags to turn off in order to get compatibility with
our cloud environments.

This is not currently possible with Nova, since our nova.conf option
only allows us to specify a bare CPU model. We would have to extend
nova.conf to allow us to specify a list of CPU features to add or
remove. Libvirt should then correctly pass these changes through
to QEMU.
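As a rough sketch, the guest XML we would then hand to libvirt would look
something like this (the model and feature names here are only illustrative):

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>Opteron_G1</model>
    <feature policy='disable' name='pse36'/>
  </cpu>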


Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-05-31 Thread Dmitry Tantsur

On 05/31/2016 10:25 AM, Tan, Lin wrote:

> Hi,
>
> Recently, I have been working on a spec [1] to recover nodes that get stuck
> in the deploying state, so I would really appreciate some feedback from you
> guys.
>
> Ironic nodes can be stuck in
> deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is
> reserved by a dead conductor (the exclusive lock was not released).
> Any further requests will be denied by ironic because it thinks the node
> resource is under the control of another conductor.
>
> To be clearer, let's narrow the scope and focus on the deploying state
> first. Currently, people have several choices to clear the reserved lock:
> 1. Restart the dead conductor.
> 2. Wait up to 2 or 3 minutes until _check_deploying_states() clears the lock.
> 3. Touch the DB manually to recover these nodes.
>
> Option two looks very promising, but it has some weaknesses:
> 2.1 It won't work if the dead conductor was renamed or deleted.
> 2.2 It won't work if the node's specific driver was not enabled on live
> conductors.
> 2.3 It won't work if the node is in maintenance (only a corner case).


We can and should fix all three cases.



> Definitely we should improve option 2, but there could be more issues I
> don't know about in more complicated environments.
> So my question is: do we still need a new command to recover these nodes more
> easily without accessing the DB, like this PoC [2]:
>   ironic-noderecover --node_uuids=UUID1,UUID2 \
>     --config-file=/etc/ironic/ironic.conf


I'm -1 to anything that silently removes the lock until I see a clear use 
case that cannot be addressed within Ironic itself. Such a utility may and 
will be abused.


I'm fine with anything that does not forcibly remove the lock by default.



> Best Regards,
>
> Tan
>
> [1] https://review.openstack.org/#/c/319812
> [2] https://review.openstack.org/#/c/311273/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] vitrage - how much links model in API is permanent?

2016-05-31 Thread Malin, Eylon (Nokia - IL)
Good to know.
Thank you very much.

-Original Message-
From: Afek, Ifat (Nokia - IL) [mailto:ifat.a...@nokia.com] 
Sent: Tuesday, May 31, 2016 11:15 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [vitrage] vitrage - how much links model in API is 
permanent?

Hi Eylon,

This dictionary is created by NetworkX, and is not about to be changed. You can 
definitely rely on this API to be permanent. And we should make sure we have a 
tempest test for verifying it.

Best regards,
Ifat.

> -Original Message-
> From: Malin, Eylon (Nokia - IL) [mailto:eylon.ma...@nokia.com]
> Sent: Monday, May 30, 2016 10:40 AM
> 
> Hi,
> 
> While calling /v1/topology/, the response has a links part, which is a
> list of dicts.
> Each dict has the following properties:
> 
>   is_deleted : Boolean
>   key : string
>   relationship_type : string
>   source : int
>   target : int
> 
> How permanent is that structure?
> Can I assume that these keys will be here for the long term?
> 
> Thank you
> 
> Eylon



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Tooling for recovering nodes

2016-05-31 Thread Tan, Lin
Hi,

Recently, I have been working on a spec [1] to recover nodes that get stuck 
in the deploying state, so I would really appreciate some feedback from you guys.

Ironic nodes can be stuck in 
deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is 
reserved by a dead conductor (the exclusive lock was not released).
Any further requests will be denied by ironic because it thinks the node 
resource is under the control of another conductor.

To be clearer, let's narrow the scope and focus on the deploying state 
first. Currently, people have several choices to clear the reserved lock:
1. Restart the dead conductor.
2. Wait up to 2 or 3 minutes until _check_deploying_states() clears the lock.
3. Touch the DB manually to recover these nodes (see the sketch below).
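For reference, option 3 today amounts to something like the following
(illustrative only, assuming ironic's default schema; not a recommendation):

  UPDATE nodes
     SET reservation = NULL,
         provision_state = 'deploy failed'
   WHERE uuid = '<node-uuid>'
     AND provision_state = 'deploying';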

Option two looks very promising, but it has some weaknesses:
2.1 It won't work if the dead conductor was renamed or deleted.
2.2 It won't work if the node's specific driver was not enabled on live 
conductors.
2.3 It won't work if the node is in maintenance (only a corner case).

Definitely we should improve option 2, but there could be more issues I 
don't know about in more complicated environments.
So my question is: do we still need a new command to recover these nodes more 
easily without accessing the DB, like this PoC [2]:
  ironic-noderecover --node_uuids=UUID1,UUID2 \
    --config-file=/etc/ironic/ironic.conf

Best Regards,

Tan


[1] https://review.openstack.org/#/c/319812
[2] https://review.openstack.org/#/c/311273/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] vitrage - how much links model in API is permanent?

2016-05-31 Thread Afek, Ifat (Nokia - IL)
Hi Eylon,

This dictionary is created by NetworkX, and is not about to be changed. You can 
definitely rely on this API to be permanent. And we should make sure we have a 
tempest test for verifying it.
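For example, a single entry in the links list looks roughly like this (the
values below are only illustrative):

  {
    "is_deleted": false,
    "key": "contains",
    "relationship_type": "contains",
    "source": 0,
    "target": 4
  }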

Best regards,
Ifat.

> -Original Message-
> From: Malin, Eylon (Nokia - IL) [mailto:eylon.ma...@nokia.com]
> Sent: Monday, May 30, 2016 10:40 AM
> 
> Hi,
> 
> While calling /v1/topology/, the response has a links part, which is a
> list of dicts.
> Each dict has the following properties:
> 
>   is_deleted : Boolean
>   key : string
>   relationship_type : string
>   source : int
>   target : int
> 
> How permanent is that structure?
> Can I assume that these keys will be here for the long term?
> 
> Thank you
> 
> Eylon



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-31 Thread Thierry Carrez

Clint Byrum wrote:

> I 100% support those who are managing bugs doing whatever they need
> to do to make sure users' issues are being addressed as well as can be
> done with the resources available. However, I would also urge everyone
> to remember that the bug tracker is not only a way for developers to
> manage the bugs, it is also a way for the community of dedicated users
> to interact with the project as a whole.


This is a classic dilemma in open source bug tracking: the data is 
worthwhile, but keeping it around is generally making the tool less 
usable as a task tracker to organize the work to be done. Most of it 
comes from the fact that we are using the same tool ("bugs") for bug 
reporting and task tracking, and those are different things. Most 
developers want to use a task tracker to organize and prioritize their 
work. They create "bugs" in Launchpad but what they are really doing is 
creating a task for them (or an immediate peer) to process later. They 
may look at bugs/tasks that someone outside the team creates, but that's 
a completely different workflow. So the tension here is that the tool 
presents unqualified user bugs in the same lists as qualified team tasks.


In a fully-controlled environment those tasks are separated. You have a 
bug reporting system, which is mostly a collection of symptoms. Specific 
squads of triagers work on verifying them, deduplicating them, giving 
them some criticality, and checking them again after every release. You 
also have a task tracking system, which is used by teams to organize 
their work and assign it between team members. Team members create tasks 
directly. They may look into the bug tracker for critical issues raised 
by triagers and create tasks to address some of those critical bugs.


This works well, but it supposes that you have a tool that enables those 
two workflows, and a triagers team to handle the first one. In open 
source communities it's generally hard to find people to work purely on 
symptoms triaging -- those who do tend to move to something more 
rewarding very quickly. And the tools generally handle the distinction 
between bug reporting and task tracking poorly... Which leads to the 
dilemma of throwing out unqualified symptoms data to keep the tool 
usable to organize work.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-05-31 Thread Julien Danjou
On Mon, May 30 2016, Jaesuk Ahn wrote:

> It seems like “clustering” and “high availability” of InfluxDB will be
> available only in the commercial version.
> Monasca is currently leveraging InfluxDB as a metrics and alarm database.
> Besides Vertica, InfluxDB is currently the only open source option to use.

Indeed, it's a shame that there's nobody developing an open source TSDB
based on open technologies that is used in OpenStack, which supports
high availability, clustering, and a ton of other features…

Wait… what about OpenStack Gnocchi?

  http://gnocchi.xyz/

:)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [keystone] rolling dogpile.core into dogpile.cache, removing namespace packaging (PLEASE REVIEW)

2016-05-31 Thread Thomas Goirand
On 05/31/2016 12:17 AM, Mike Bayer wrote:
> Hi all -
> 
> Just a heads up what's happening for dogpile.cache, in version 0.6.0 we
> are rolling the functionality of the dogpile.core package into
> dogpile.cache itself, and retiring the use of namespace package naming
> for dogpile.cache.
> 
> Towards retiring the use of namespace packaging, the magic
> "declare_namespace() / extend_path()" logic is being removed from the
> file dogpile/__init__.py from dogpile.cache, and the "namespace_package"
> directive being removed from setup.py.
> 
> However, the current plan is to leave the "dogpile.core" package entirely
> as is, and to no longer use the name "dogpile.core" within dogpile.cache
> at all; the constructs that dogpile.cache previously imported from
> "dogpile.core" are now imported from "dogpile" and "dogpile.util" within
> the dogpile.cache package itself.
> 
> The caveat here is that Python environments that have dogpile.cache
> 0.5.7 or earlier installed will also have dogpile.core 0.4.1 installed
> as well, and dogpile.core *does* still contain the namespace package
> verbiage as before.   From our testing, we don't see there being any
> problem with this, however, I know there are people on this list who are
> vastly more familiar than I am with namespace packaging and I would
> invite them to comment on this as well as on the gerrit review [1] (the
> gerrit invites anyone with a Github account to register and comment).
> 
> Note that outside of the OpenStack world, there are a very small number
> of applications that make use of dogpile.core directly.  From our
> grepping we can find no mentions of "dogpile.core" in any OpenStack
> requirements files. For these applications, if a Python environment
> already has dogpile.core installed, this would continue to be used;
> however dogpile.cache also includes a file dogpile/core.py which sets up
> a compatible namespace, so that applications which list only
> dogpile.cache in their requirements but make use of "dogpile.core"
> constructs will continue to work as before.
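> As a quick sketch, both of these import styles are expected to keep
> working (using NeedRegenerationException purely as an example construct):
>
>     # old style, resolved via the dogpile/core.py compatibility module
>     from dogpile.core import NeedRegenerationException
>
>     # new style, imported directly from dogpile
>     from dogpile import NeedRegenerationException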
> 
> I would ask that anyone reading this to please alert me to anyone, any
> project, or any announcement medium which may be necessary in order to
> ensure that anyone who needs to be made aware of these changes are aware
> of them and have vetted them ahead of time.   I would like to release
> dogpile.cache 0.6.0 by the end of the week if possible.  I will send
> this email a few more times to the list to make sure that it is seen.

FYI, in Debian, there's no direct dependency on dogpile.core. So this
will be a graceful change, I'll just upload the new dogpile.cache and
ask for the removal of dogpile.core.

Will it "just work" for all of OpenStack, through oslo.cache? Will it
also work with Mitaka?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [Tempest] Abandoned old code reviews

2016-05-31 Thread Yaroslav Lobankov
+1! But I think we can leave patches created in 2016 and abandon the others.

Regards,
Yaroslav Lobankov.

On Mon, May 30, 2016 at 8:20 PM, Ken'ichi Ohmichi 
wrote:

> Hi,
>
> There are many patches in the Tempest review queue that have not been
> updated even after getting negative feedback from reviewers or Jenkins.
> The Nova team is abandoning such patches, as in [1].
> I feel it would be nice to abandon such patches that have not been updated
> since the end of 2015.
> Any thoughts?
>
> [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/096112.html
>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for Commiters & Contributors for daisycloud-core project

2016-05-31 Thread Shake Chen
Hi Zhijiang

I think you can put Daisy into Docker, then use Ansible or Kolla to deploy
Daisy.



On Tue, May 31, 2016 at 9:43 AM,  wrote:

> Hi All,
>
> I would like to introduce to you a new OpenStack installer project,
> Daisy (project name: daisycloud-core). Daisy used to be a closed source
> project mainly developed by ZTE, but we have now made it an
> OpenStack-related project (http://www.daisycloud.org,
> https://github.com/openstack/daisycloud-core).
>
> Although it is not mature and is still under development, Daisy concentrates
> on deploying OpenStack fast and efficiently for large data centers with
> hundreds of nodes. In order to reach that goal, Daisy focuses on
> many features that may not be suitable for small clusters but are definitely
> conducive to the deployment of big clusters. Those features include, but are
> not limited to, the following:
>
> 1. Containerized OpenStack Services
> In order to speed up installation and upgrades as a whole, Daisy uses
> Kolla as the underlying deployment module to support containerized
> OpenStack services.
>
> 2. Multicast
> Daisy utilizes multicast as much as possible to speed up the imaging
> workflow during installation. For example, instead of using a centralized
> Docker registry while adopting Kolla, Daisy multicasts all Docker images to
> each node of the cluster, then creates and uses local registries on each
> node during the Kolla deployment process. The same thing can be done for OS
> imaging too.
>
> 3. Automatic Deployment
> Instead of letting users decide if a node can be provisioned and deserves
> to join the cluster, Daisy provides a characteristics-matching mechanism
> to recognize whether a new node has the same capabilities as a currently
> working compute node. If it does, Daisy will start deployment on that node
> right after it is discovered and make it a compute node with the same
> configuration as the currently working compute nodes.
>
> 4. Configuration Template
> Using a precise configuration file to describe a big dynamic cluster is not
> practical, and it cannot be reused when moving to another similar
> environment either. Daisy’s configuration template only describes the
> common parts of the cluster and a representative controller/compute node.
> It can be seen as a semi-finished configuration file that can be used in
> any similar environment. During deployment, users only have to fill in a
> few specific parameters to turn the configuration template into a final
> configuration file.
>
> 5. Your comments on anything else that can bring unique value to
> large data center deployment?
>
> As the project lead, I would like to get feedback from you about this new
> project. You are more than welcome to join this project!
>
> Thank you
> Zhijiang
>
>
> 
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Shake Chen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-31 Thread Blair Bethwaite
Hi Nikhil,

2000UTC might catch a few kiwis, but it's 6am everywhere on the east
coast of Australia, and even earlier out west. 0800UTC, on the other
hand, would be more sociable.

On 26 May 2016 at 15:30, Nikhil Komawar  wrote:
> Thanks Sam. We purposefully chose that time to accommodate some of our
> community members from the Pacific. I'm assuming it's just your case
> that's not working out for that time? So, hopefully other Australian/NZ
> friends can join.
>
>
> On 5/26/16 12:59 AM, Sam Morrison wrote:
>> I’m hoping some people from the Large Deployment Team can come along. It’s 
>> not a good time for me in Australia but hoping someone else can join in.
>>
>> Sam
>>
>>
>>> On 26 May 2016, at 2:16 AM, Nikhil Komawar  wrote:
>>>
>>> Hello,
>>>
>>>
>>> Firstly, I would like to thank Fei Long for bringing up a few operator
>>> centric issues to the Glance team. After chatting with him on IRC, we
>>> realized that there may be more operators who would want to contribute
>>> to the discussions to help us take some informed decisions.
>>>
>>>
>>> So, I would like to call for a 2 hour sync for the Glance team along
>>> with interested operators on Thursday June 9th, 2016 at 2000UTC.
>>>
>>>
>>> If you are interested in participating please RSVP here [1], and
>>> participate in the poll for the tool you'd prefer. I've also added a
>>> section for Topics and provided a template to document the issues clearly.
>>>
>>>
>>> Please be mindful of everyone's time and if you are proposing issue(s)
>>> to be discussed, come prepared with well documented & referenced topic(s).
>>>
>>>
>>> If you've feedback that you are not sure if appropriate for the
>>> etherpad, you can reach me on irc (nick: nikhil).
>>>
>>>
>>> [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
>>>
>>> --
>>>
>>> Thanks,
>>> Nikhil Komawar
>>> Newton PTL for OpenStack Glance
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
>
> Thanks,
> Nikhil
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

