I have a question about the agent that is part of cfn-init, which communicates
with Heat to indicate the config-done state, or a config tool agent such as
Chef or Puppet communicating with the Chef server.
Since the VM resides on the data network, how does it reach the Heat server
that is on the OpenStack management netwo
On Thu, Jan 23, 2014 at 7:22 PM, Ben Nemec wrote:
> On 2014-01-23 12:03, Florian Haas wrote:
>
> Ben,
>
> thanks for taking this to the list. Apologies for my brevity and for HTML,
> I'm on a moving train and Android Gmail is kinda stupid. :)
>
> I have some experience with the quirks of phone GMa
Hi all,
On Thu, Jan 09, 2014 at 01:13:53PM +, Derek Higgins wrote:
> It looks like we have some duplication and inconsistencies on the 3
> os-*-config elements in the tripleo repositories
>
> os-apply-config (duplication) :
>We have two elements that install this
> diskimage-builder/
Excerpts from Alexander Tivelkov's message of 2014-01-21 11:55:34 -0800:
> Hi folks,
>
> As we are moving towards incubation application, I took a closer look at
> what is going on with our repositories.
> And here is what I found. We currently have 11 repositories at stackforge:
>
>- murano-a
Excerpts from Prasad Vellanki's message of 2014-01-24 00:21:06 -0800:
> I have a question about the agent that is part of cfn-init, which
> communicates with Heat to indicate the config-done state, or a config tool
> agent such as Chef or Puppet communicating with the Chef server.
>
> Since the VM resides on the data
On 24 January 2014 22:26, Clint Byrum wrote:
>> This enormous amount of repositories adds too much infrastructural
>> complexity, and maintaining the changes in a consistent and reliable
>> manner becomes a really tricky task. We often have changes which require
>> modifying two or more reposit
Tim,
w.r.t. different tenants I might be missing something - why should policies
remain stored per-user? In general, when the user creates something, wouldn't
the user's policies (more like preferences/template) be applied to and saved
for the tenant/created elements they're active in? IMHO you
2014/1/24 Matt Riedemann :
> Stable is OK again apparently so for anyone else waiting on a response here,
> go ahead and 'recheck no bug' stable branch patches that were waiting for
> this.
Note that there are still sporadic "Timed out waiting for thing..." failures
e.g.
http://logs.openstack.or
Hi-
I need support on ways to contribute code to Neutron regarding the ML2
Mechanism drivers.
I have installed Jenkins and created accounts on GitHub and Launchpad.
Kindly guide me on
[1] How to configure Jenkins to submit the code for review?
[2] What is the process involved in pushing the code
On 01/24/2014 12:10 PM, trinath.soman...@freescale.com wrote:
> Hi-
>
> I need support on ways to contribute code to Neutron regarding the ML2
> Mechanism drivers.
>
> I have installed Jenkins and created accounts on GitHub and Launchpad.
>
> Kindly guide me on
>
> [1]
And it is worth mentioning the activity report in Stackalytics; the link to it
is located in the contribution summary block on the user's statistics screen.
The report looks like http://stackalytics.com/report/users/zaneb and contains
all reviews, posted patches, commits, emails and blueprints.
Thanks,
Ilya
(2014/01/22 2:56), Lucas Eznarriaga wrote:
> Hi,
>
>
> For step 3/5, is this the right procedure? Or is there a way to use a cmd to
> run all the tests and use a different mechanism to specify a filter for the
> tests to be run?
>
>
> I don't know if Tempest allows you to filter for the tests to
tempest defines some sets of tests like "smoke-serial" in tox.ini [1].
We can use "tox -e smoke-serial" or corresponding testr command like
testr run '(?!.*\[.*\bslow\b.*\])((smoke)|(^tempest\.scenario))'
to run a specific set of tests.
The command runs tests with "smoke" tag and all tests fr
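For what it's worth, that filter is just a Python regular expression matched against test ids, so its effect can be checked directly (the test names below are made up for illustration):

```python
import re

# The smoke filter quoted above: tests tagged "smoke" or under
# tempest.scenario, excluding anything tagged "slow".
smoke_filter = re.compile(
    r'(?!.*\[.*\bslow\b.*\])((smoke)|(^tempest\.scenario))')

def selected(test_id):
    """Return True if testr would run this test id under the filter."""
    return smoke_filter.search(test_id) is not None
```

The negative lookahead is what keeps slow-tagged scenario tests out of the smoke run.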
Hi-
While running the unit test case to test the ML2 mechanism driver, I got
this error.
Command: tox -epy27 --
neutron.tests.unit.ml2.drivers.test_fslsdn_mech.TestFslSdnMechanismDriver.test_create_network_postcommit
Error output:
...
...
byte-compiling
/root/neutron_icehouse/neutron-20
> >
> > Cool. I like this a good bit better as it avoids the reboot. Still, this is
> > a rather
> large amount of data to copy around if I'm only changing a single file in
> Nova.
> >
>
> I think in most cases transfer cost is worth it to know you're deploying what
> you tested. Also it is pret
Periodically I've seen people submit big coding style cleanups to Nova
code. These are typically all good ideas / beneficial, however, I have
rarely (perhaps even never?) seen the changes accompanied by new hacking
check rules.
The problem with not having a hacking check added *in the same commit*
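For readers who haven't written one: a hacking check is just a flake8-style plugin function that yields (offset, message) tuples per logical line. A minimal illustrative sketch (the rule number and check itself are made up, not an actual Nova rule):

```python
import re

def check_no_mutable_default_args(logical_line):
    """H9xx: flag mutable default arguments (illustrative example only).

    Hacking checks are generators: yield an (offset, message) tuple for
    each violation found on the logical line.
    """
    if re.search(r'def .+\((.+=\{\}|.+=\[\])', logical_line):
        yield (0, "H9xx: mutable default argument")
```

Shipping a check like this in the same commit as the cleanup is what stops the style problem from creeping back in.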
It may feel like it's been gate bug day all the days, but we would
really like to get people together for gate bug day on Monday, and get
as many people, including as many PTLs as possible, to dive into issues
that we are hitting in the gate.
We have 2 goals for the day.
** Fingerprint all the bu
> On 01/22/2014 12:17 PM, Dan Prince wrote:
> > I've been thinking a bit more about how TripleO updates are developing
> specifically with regards to compute nodes. What is commonly called the
> "update story" I think.
> >
> > As I understand it we expect people to actually have to reboot a compute
Hi Andreas -
Thank you for the reply. It helped me understand the groundwork required.
But then, I'm writing a new Mechanism driver (FSL SDN Mechanism driver) for ML2.
For submitting new file sets, can I go with Git, or do I require Jenkins for
adding the new code for review?
Kindly help me i
andrew,
what about having swift:// which defaults to the configured tenant and
auth url for what we now call swift-internal, and we allow for user
input to change tenant and auth url for what would be swift-external?
in fact, we may need to add the tenant selection in icehouse. it's a
pretty
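A rough sketch of how that defaulting might look (all names here are hypothetical, not Savanna's actual implementation): parse a swift:// URL and fall back to the configured internal tenant and auth url when the user supplies none:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical configured defaults for what is currently "swift-internal"
DEFAULT_TENANT = 'services'
DEFAULT_AUTH_URL = 'http://127.0.0.1:5000/v2.0'

def resolve_swift_url(url):
    """Return (container, obj, tenant, auth_url) for a swift:// URL.

    Query parameters override the configured defaults, which would cover
    the "swift-external" case without needing a separate prefix.
    """
    parsed = urlparse(url)
    if parsed.scheme != 'swift':
        raise ValueError('not a swift:// URL: %s' % url)
    query = parse_qs(parsed.query)
    tenant = query.get('tenant', [DEFAULT_TENANT])[0]
    auth_url = query.get('auth_url', [DEFAULT_AUTH_URL])[0]
    return parsed.netloc, parsed.path.lstrip('/'), tenant, auth_url
```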
Hi Trinath,
Jenkins is not directly related to proposing new code.
The process for contributing code is described in the links
Andreas pointed to. There is no difference even if you are writing
a new ML2 mech driver.
In addition to the above, Neutron now requires a third party testing
for all new
Hi Sylvain,
The change only makes the user have to supply a network ID if there is more
than one private network available (and the issue there is that otherwise the
assignment order in the Guest is random, which normally leads to all sorts of
routing problems).
I'm running a standard Devstack
So looking at the gate this morning, stable/* nova is failing on unit
test a lot. Russell has fixes for those things in master.
I'd ask the stable team to pull all the nova stable/* changes out of the
gate (grab the change, and push a new version, which will kick it back
to check) and rebase them
Things are still not good, but they are getting better.
Current Gate Stats:
* Gate Queue Depth - 79
* Check Queue Depth - 18
* Top of gate entered - ?? (we did a couple zuul restarts, so numbers
here are inaccurate)
* Gate Fail Categorization Rate: 73%
== Major Classes of Issues ==
The bigge
Hi Justin,
I can see the value of this, but I'm a bit wary of the metadata service
extending into a general API - for example I can see this extending into a
debate about what information needs to be made available about the instances
(would you always want all instances exposed, all details, e
Hello All,
I have not received any reply to my mail.
I will wait one more day for your comments and then proceed with a
check-in that removes the given file from python-troveclient.
Let me know your thoughts.
On Thu, Jan 23, 2014 at 1:43 AM, Nilakhya <
nilakhya.chatter...@globallogic.co
Hi Phil,
Le 24/01/2014 14:13, Day, Phil a écrit :
HI Sylvain,
The change only makes the user have to supply a network ID if there is
more than one private network available (and the issue there is that
otherwise the assignment order in the Guest is random, which normally
leads to all sorts
Hello, Nilakhya Chatterjee.
I would suggest you ping the trove-core team and ask them if there's any
need to keep it in the codebase.
Also, I suggest you analyze how the python-troveclient is installed while
building the dev env in the trove-integration project.
A quick search gave me the following results:
[novacli
I agree it's oddly inconsistent (you'll get used to that over time ;-) - but to
me it feels more like the validation is missing on the attach than that the
create should allow two VIFs on the same network. Since these are both
virtualised (i.e. share the same bandwidth, don't provide any additi
In going through the bug list, I spotted this one and would like to discuss
it:
"can't disable file injection for bare metal"
https://bugs.launchpad.net/ironic/+bug/1178103
There's a #TODO in Ironic's PXE driver to *add* support for file injection,
but I don't think we should do that. For the var
On Jan 24, 2014, at 7:50 AM, Matthew Farrellee wrote:
> andrew,
>
> what about having swift:// which defaults to the configured tenant and auth
> url for what we now call swift-internal, and we allow for user input to
> change tenant and auth url for what would be swift-external?
I like this
Hi Ceilometer guys,
We are implementing complex query functionality for Ceilometer. We got a
comment on our implementation that using JSON in a string to represent the
query filter expression is probably not the best solution.
The description of our current API design can be found here:
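To make the debate concrete, the kind of filter expression in question looks roughly like this (illustrative only, not the final API):

```python
import json

# An illustrative complex-query filter: cpu_util samples above 0.8
# for a single resource, expressed as a boolean expression tree.
filter_expr = {
    "and": [
        {"=": {"counter_name": "cpu_util"}},
        {">": {"counter_volume": 0.8}},
        {"=": {"resource_id": "resource-1"}},
    ]
}

# The criticized approach: the filter travels as JSON inside a string
# field, so clients end up double-encoding the request body.
payload = json.dumps({"filter": json.dumps(filter_expr)})
```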
I agree that I'd like to see a set of use cases for this. This is the second
time in as many days that I've heard about a desire to have such a thing but I
still don't think I understand any use cases adequately.
In the physical world it makes perfect sense, LACP, MLT,
Etherchannel/Portchannel,
>> Would it make more sense for an operator to configure a "time window", and
>> then let users choose a slot within a time window (and say there are a
>> finite number of slots in a time window). The slotting would be done behind
>> the scenes and a user would only be able to select a window,
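The slotting idea quoted above could be sketched like this (names and parameters are hypothetical): hash each instance into one of a finite number of evenly spaced slots inside the operator-configured window:

```python
import hashlib

def assign_slot(instance_id, window_start_hour, window_hours, slots):
    """Deterministically map an instance to an HH:MM start time inside
    the operator-defined backup window (illustrative sketch)."""
    digest = hashlib.sha256(instance_id.encode('utf-8')).hexdigest()
    slot = int(digest, 16) % slots          # stable slot per instance
    offset_minutes = (window_hours * 60 // slots) * slot
    hour = (window_start_hour + offset_minutes // 60) % 24
    return '%02d:%02d' % (hour, offset_minutes % 60)
```

Because the assignment is a pure function of the instance id, the scheduler needs no extra state to spread backups across the window.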
>what about having swift:// which defaults to the configured tenant and
auth url for what we now call swift-internal, and we allow for user input
to change tenant and auth url for what would be swift-external?
I like the proposal.
Andrew.
On Fri, Jan 24, 2014 at 4:50 AM, Matthew Farrellee wrot
Matt et al,
Yes, "swift-internal" was meant as a marker to distinguish it from
"swift-external" someday. I agree, this could be indicated by setting
other fields.
Little bit of implementation detail for scope:
In the current EDP implementation, SWIFT_INTERNAL_PREFIX shows up in
essentially
On Thu, Jan 23, 2014 at 4:07 PM, Florent Flament <
florent.flament-...@cloudwatt.com> wrote:
> I understand that not everyone may be interested in such feature.
>
> On the other hand, some (maybe shallow) Openstack users may be
> interested in setting quotas on users or projects. Also, this featur
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 01/23/2014 08:31 PM, Michael Basnight wrote:
>
> On Jan 23, 2014, at 5:10 PM, Mark McClain wrote:
>
>>
>> On Jan 23, 2014, at 5:02 PM, Russell Bryant
>> wrote:
>>
>>> Greetings,
>>>
>>> Last cycle we had A "feature proposal deadline" across so
On 01/23/2014 05:31 PM, Christopher Yeoh wrote:
>
>
>
>
> On Fri, Jan 24, 2014 at 8:34 am, Russell Bryant wrote:
>
> Greetings,
>
> Recently Sean Dague started some threads [1][2] about the future of XML
> support in Nova's compute API. Specifically
The gate being one of those things that we all use (and abuse) every day,
whatever project we work on, I wouldn't sleep well if I skipped this call. :-)
My fellow Cloudbasers ociuhandu and gsamfira and I are going to join in on
Monday.
We got our small share of “learning the hard way” on thi
On Fri, Jan 24, 2014 at 3:29 AM, Florian Haas wrote:
> On Thu, Jan 23, 2014 at 7:22 PM, Ben Nemec wrote:
> > On 2014-01-23 12:03, Florian Haas wrote:
> >
> > Ben,
> >
> > thanks for taking this to the list. Apologies for my brevity and for
> HTML,
> > I'm on a moving train and Android Gmail is k
On Fri, Jan 24, 2014 at 8:26 AM, Russell Bryant wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 01/23/2014 08:31 PM, Michael Basnight wrote:
>>
>> On Jan 23, 2014, at 5:10 PM, Mark McClain wrote:
>>
>>>
>>> On Jan 23, 2014, at 5:02 PM, Russell Bryant
>>> wrote:
>>>
Greetings,
On 01/24/2014 08:33 AM, CARVER, PAUL wrote:
I agree that I’d like to see a set of use cases for this. This is the
second time in as many days that I’ve heard about a desire to have such
a thing but I still don’t think I understand any use cases adequately.
In the physical world it makes perfect
Hi all!
Nikolay Starodubtsev
Software Engineer
Mirantis Inc.
Skype: dark_harlequine1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thanks to everyone who joined our weekly meeting.
Here are meeting minutes:
Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-24-15.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-24-15.01.txt
Log:
http://eavesdrop.openstack.
Good points - thank you. For arbitrary operations, I agree that it would
be better to expose a token in the metadata service, rather than allowing
the metadata service to expose unbounded amounts of API functionality. We
should therefore also have a per-instance token in the metadata, though I
do
Hi all!
While adding new features for the Climate 0.1 release, we have hit some
problems with novaclient. The problem is that novaclient 2.15.0 can't
shelve/unshelve instances, but this feature is in the master branch. Can
anyone say when novaclient will be updated?
Nikolay Starodubtsev
Software Engineer
Mi
On Fri, Jan 24, 2014 at 02:11:02PM +, Day, Phil wrote:
> I agree it's oddly inconsistent (you'll get used to that over time ;-)
> - but to me it feels more like the validation is missing on the attach
> than that the create should allow two VIFs on the same network. Since
> these are both vir
Hi All;
Jannis Leidel, author of Django-Compressor, which Horizon relies on, recently
sent out a message saying that he needs help maintaining/releasing
django_compressor:
https://twitter.com/jezdez/status/423559915660382209
If we have people willing to help upstream dependencies, this would be
Thanks to everyone who joined the Savanna meeting.
Here are the logs from the meeting:
Minutes:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-23-18.07.html
Log:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-23-18.07.log.html
--
Sincerely yours,
Sergey
It looks like more than 220 commits have been merged to the nova client since
the 2.15.0 version [1].
[1] https://github.com/openstack/python-novaclient/compare/2.15.0...master
On Fri, Jan 24, 2014 at 7:49 PM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:
> Hi all!
> While we add new features
>
>Will we be doing more complex things than "every day at some time"? ie,
>does the user base see value in configuring backups every 12th day of
>every other month? I think the schedule code is easy to write, but I
>fear that it will be hard to build a smarter scheduler that would only
>allow
Hi Sean,
Given that the swift failure happened once in the available logstash-recorded
history, do we still feel this is a major gate issue?
See:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogdGVzdF9ub2RlX3dyaXRlX3RpbWVvdXRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsI
https://review.openstack.org/#/c/66494/ was already approved, and it looks
like 0.4.2 is new enough.
On Fri, Jan 24, 2014 at 7:44 PM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:
> Hi all!
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequi
Le 23/01/2014 18:17, Day, Phil a écrit :
Just to be clear I'm not advocating putting any form of automated instance
life-cycle into Nova - I agree that belongs in an external system like Climate.
However, for a reservations model to work efficiently, it seems to me you need
two complementary ty
On Fri, Jan 24, 2014 at 7:40 AM, Sean Dague wrote:
> It may feel like it's been gate bug day all the days, but we would
> really like to get people together for gate bug day on Monday, and get
> as many people, including as many PTLs as possible, to dive into issues
> that we are hitting in the g
Looks like we need to review the prefixes and clean them up. After a first
look, I like the idea of using a common prefix for swift data.
On Fri, Jan 24, 2014 at 7:05 PM, Trevor McKay wrote:
> Matt et al,
>
> Yes, "swift-internal" was meant as a marker to distinguish it from
> "swift-external" so
On Fri, Jan 24, 2014 at 7:24 AM, Daniel P. Berrange wrote:
> Periodically I've seen people submit big coding style cleanups to Nova
> code. These are typically all good ideas / beneficial, however, I have
> rarely (perhaps even never?) seen the changes accompanied by new hacking
> check rules.
>
>
Hi Folks,
I have postponed this meeting to the week of February 10th on Thursday Feb
13th, so that there is enough time for people to plan to attend this meeting.
Meeting details will be discussed in the neutron meeting and will send out the
details.
Thanks
Swami
From: Vasudevan, Swaminathan
Yep, it will be nice to get 2.16.0 released. As I see it, 2.15.0 is from
September 2013 [1] - that's quite old now, I suppose.
[1] http://pypi.openstack.org/openstack/python-novaclient/
On Fri, Jan 24, 2014 at 8:13 PM, Sergey Lukjanov wrote:
> It looks like more than 220 commits was merged to the nova clie
Thank you Anne.
Mark
From: Anne Gentle [mailto:a...@openstack.org]
Sent: Thursday, January 23, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] nova-cert information
It's a known identified deficiency, and all we have is this:
ht
* Vishvananda Ishaya wrote:
>
> On Jan 16, 2014, at 1:28 PM, Jon Bernard wrote:
>
> > * Vishvananda Ishaya wrote:
> >>
> >> On Jan 14, 2014, at 2:10 PM, Jon Bernard wrote:
> >>
> >>>
> >>>
> As you’ve defined the feature so far, it seems like most of it could
> be implemented cl
Hi Sylvain,
Thanks for the clarification, I'd missed that it was where the public network
belonged to the same tenant (it's not a use case we run with).
So I can see that option [1] would make the validation work by (presumably) not
including the shared network in the list of networks, but loo
On 01/24/2014 11:18 AM, Peter Portante wrote:
> Hi Sean,
>
> Given the swift failure happened once in the available logstash recorded
> history, do we still feel this is a major gate issue?
>
> See:
> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogdGVzdF9ub2RlX3dyaXRlX3RpbWVv
Given your obviously much more extensive understanding of networking than
mine, I'm starting to move over to the "we shouldn't make this fix" camp.
Mostly because of this:
"CARVER, PAUL" wrote on 01/23/2014 08:57:10 PM:
> Putting a friendly helper in Horizon will help novice users and
> provide
> I haven't actually found where metadata caching is implemented, although the
> constructor of InstanceMetadata documents restrictions that really only make
> sense if it is. Anyone know where it is cached?
Here's the code that does the caching:
https://github.com/openstack/nova/blob/master/no
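For anyone following along, the pattern under discussion is per-instance caching of built metadata with a short time-to-live; a toy version (hypothetical names, not the actual Nova code) looks like:

```python
import time

class MetadataCache:
    """Toy per-instance metadata cache with a time-to-live (sketch)."""

    def __init__(self, ttl=15.0):
        self.ttl = ttl
        self._entries = {}  # instance_id -> (built_at, metadata)

    def get(self, instance_id, build_fn):
        entry = self._entries.get(instance_id)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]  # fresh enough: skip the expensive rebuild
        metadata = build_fn()
        self._entries[instance_id] = (time.time(), metadata)
        return metadata
```

The restriction the InstanceMetadata constructor documents follows from exactly this shape: anything computed at build time is frozen for the TTL.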
On Fri, Jan 24, 2014 at 12:55 PM, Day, Phil wrote:
> > I haven't actually found where metadata caching is implemented,
> although the constructor of InstanceMetadata documents restrictions that
> really only make sense if it is. Anyone know where it is cached?
>
> Here’s the code that does the
Joining in to provide our backgrounds. I'd be happy to help here too, since I
have a pretty solid background in using and developing caching solutions,
though mostly in the Java world (expertise in GemFire and Coherence,
developing the GridGain distributed cache).
Renat Akhmerov
@ Mirantis Inc.
On 23
Clint, Rob,
Thanks a lot for your input: that's really a good point, and we didn't
consider it before, though we definitely should have.
Team,
Let's discuss this topic again before making any final decisions.
--
Regards,
Alexander Tivelkov
2014/1/24 Robert Collins
> On 24 January 2014 22:26, Clin
>
>
>
> That's a pretty high rate of failure, and really needs investigation.
>
That's a great point, did you look into the logs of any of those jobs?
Thanks for bringing it to my attention.
I saw a few swift tests that would pop, I'll open bugs to look into those.
But the cardinality of the fa
Excerpts from Justin Santa Barbara's message of 2014-01-24 07:43:23 -0800:
> Good points - thank you. For arbitrary operations, I agree that it would
> be better to expose a token in the metadata service, rather than allowing
> the metadata service to expose unbounded amounts of API functionality.
>
> Good points - thank you. For arbitrary operations, I agree that it would be
> better to expose a token in the metadata service, rather than allowing the
> metadata service to expose unbounded amounts of API functionality. We
> should therefore also have a per-instance token in the metadata,
Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
>
> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to have
> MariaDB as the default MySQL-like DB.
>
> Can someone summarise the status of OpenStack in terms of
>
>
> -What MySQL-flavor is/are cur
On Fri, Jan 24, 2014 at 11:37 AM, Clay Gerrard wrote:
>>
>>
>> That's a pretty high rate of failure, and really needs investigation.
>
>
> That's a great point, did you look into the logs of any of those jobs?
> Thanks for bringing it to my attention.
>
> I saw a few swift tests that would pop, I'
Hi Sean,
In the last 7 days I see only 6 python27 based test failures:
http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWdlOlwiRVJST1I6ICAgcHkyNzogY29tbWFuZHMgZmFpbGVkXCIiLCJmaWVs
On 01/24/2014 11:47 AM, Clint Byrum wrote:
Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to have
MariaDB as the default MySQL-like DB.
Can someone summarise the status of OpenStack in terms of
-
Hi Justin,
It's nice to see someone bringing this kind of thing up. Seeding discovery is a
handy primitive to have.
Multicast is not generally used over the internet, so the comment about
removing multicast is not really justified, and any of the approaches that work
there could be used. Alter
thanks for all the feedback folks.. i've registered a bp for this...
https://blueprints.launchpad.net/savanna/+spec/swift-url-proto-cleanup
On 01/24/2014 11:30 AM, Sergey Lukjanov wrote:
Looks like we need to review prefixes and cleanup them. After the first
look I'd like the idea of using comm
Would it make sense to simply have the neutron metadata service re-export every
endpoint listed in keystone at /openstack/api/?
Thanks,
Kevin
From: Murray, Paul (HP Cloud Services) [pmur...@hp.com]
Sent: Friday, January 24, 2014 11:04 AM
To: OpenStack Development
Excerpts from Steven Dake's message of 2014-01-24 11:05:25 -0800:
> On 01/24/2014 11:47 AM, Clint Byrum wrote:
> > Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
> >> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
> >> have MariaDB as the default MySQL-lik
What do you consider "EDP internal", and how does it relate to the v1.1
or v2 API?
I'm ok with making it plugin independent. I'd just suggest moving it out
of /jobs and to something like /extra/config-hints/{type}, maybe along
with /extra/validations/config.
best,
matt
On 01/22/2014 06:25
On Fri, Jan 24, 2014, at 10:51 AM, John Griffith wrote:
> On Fri, Jan 24, 2014 at 11:37 AM, Clay Gerrard
> wrote:
> >>
> >>
> >> That's a pretty high rate of failure, and really needs investigation.
> >
> >
> > That's a great point, did you look into the logs of any of those jobs?
> > Thanks for b
Excerpts from Devananda van der Veen's message of 2014-01-24 06:15:12 -0800:
> In going through the bug list, I spotted this one and would like to discuss
> it:
>
> "can't disable file injection for bare metal"
> https://bugs.launchpad.net/ironic/+bug/1178103
>
> There's a #TODO in Ironic's PXE d
On 01/24/2014 02:02 PM, Peter Portante wrote:
> Hi Sean,
>
> In the last 7 days I see only 6 python27 based test
> failures:
> http://logstash.openstack.org/#eyJzZWFyY2giOiJwcm9qZWN0Olwib3BlbnN0YWNrL3N3aWZ0XCIgQU5EIGJ1aWxkX3F1ZXVlOmdhdGUgQU5EIGJ1aWxkX25hbWU6Z2F0ZS1zd2lmdC1weXRob24qIEFORCBtZXNzYWd
On Fri, Jan 24, 2014 at 2:05 PM, Steven Dake wrote:
> On 01/24/2014 11:47 AM, Clint Byrum wrote:
>
>> Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
>>
>>> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
>>> have MariaDB as the default MySQL-like DB.
>>>
>
On Fri, Jan 24, 2014 at 10:37 AM, Clay Gerrard wrote:
>
>>
>> That's a pretty high rate of failure, and really needs investigation.
>>
>
> That's a great point, did you look into the logs of any of those jobs?
> Thanks for bringing it to my attention.
>
> I saw a few swift tests that would pop,
Excerpts from Day, Phil's message of 2014-01-24 04:39:10 -0800:
> > On 01/22/2014 12:17 PM, Dan Prince wrote:
> > > I've been thinking a bit more about how TripleO updates are developing
> > specifically with regards to compute nodes. What is commonly called the
> > "update story" I think.
> > >
>
Excerpts from Day, Phil's message of 2014-01-24 04:24:11 -0800:
> > >
> > > Cool. I like this a good bit better as it avoids the reboot. Still, this
> > > is a rather
> > large amount of data to copy around if I'm only changing a single file in
> > Nova.
> > >
> >
> > I think in most cases trans
This is exactly my worry... at what point can I consider moving to MariaDB with
the expectation that the testing confidence is equivalent to that which is
currently available from MySQL ?
The on-disk format is not so much a concern but there are many potential subtle
differences in the API whi
>
> Well if you're on a Neutron private network then you'd only be DDOS-ing
> yourself.
> In fact I think Neutron allows broadcast and multicast on private
> networks, and
> as nova-net is going to be deprecated at some point I wonder if this is
> reducing
> to a corner case ?
Neutron may well re
Oh yeah, that's much better. I had found those eventually but had to dig
through all that other stuff :'(
Moving forward I think we can keep an eye on that page, open bugs for those
tests causing issue and dig in.
Thanks again!
-Clay
On Fri, Jan 24, 2014 at 11:37 AM, Sean Dague wrote:
> On 0
Correction, Monday Jan 27th.
My calendar widget was apparently still on May for summit planning...
On 01/24/2014 07:40 AM, Sean Dague wrote:
> It may feel like it's been gate bug day all the days, but we would
> really like to get people together for gate bug day on Monday, and get
> as many peop
Clint Byrum wrote:
>
> Heat has been working hard to be able to do per-instance limited access
> in Keystone for a while. A trust might work just fine for what you want.
>
I wasn't actually aware of the progress on trusts. It would be helpful
except (1) it is more work to have to create a separ
Oh shoot. That reminds me I needed to rebase the code I was working on.
And yes this changes things a little because we are using the same template
paths for the validation_rules as the base template which uses the manager
field on the datastore_version. This means that we need to make the path
ov
Hi Phil,
2014/1/24 Day, Phil
>
>
>
> So I can see that option [1] would make the validation work by
> (presumably) not including the shared network in the list of networks, but
> looking further into the code allocate_for_instance() uses the same call to
> decide which networks it needs to cr
Excerpts from Chuck Short's message of 2014-01-24 11:46:47 -0800:
> On Fri, Jan 24, 2014 at 2:05 PM, Steven Dake wrote:
>
> > On 01/24/2014 11:47 AM, Clint Byrum wrote:
> >
> >> Excerpts from Tim Bell's message of 2014-01-24 10:32:26 -0800:
> >>
> >>> We are reviewing options between MySQL and Ma
Excerpts from Tim Bell's message of 2014-01-24 11:55:02 -0800:
>
> This is exactly my worry... at what point can I consider moving to MariaDB
> with the expectation that the testing confidence is equivalent to that which
> is currently available from MySQL ?
>
> The on-disk format is not so muc
Murray, Paul (HP Cloud Services) wrote:
>
>
> Multicast is not generally used over the internet, so the comment about
> removing multicast is not really justified, and any of the approaches that
> work there could be used.
>
I think multicast/broadcast is commonly used 'behind the firewall', but
Fox, Kevin M wrote:
> Would it make sense to simply have the neutron metadata service
> re-export every endpoint listed in keystone at
> /openstack/api/?
>
Do you mean with an implicit token for read-only access, so the instance
doesn't need a token? That is a superset of my proposal, so it wou
Excerpts from Justin Santa Barbara's message of 2014-01-24 12:29:49 -0800:
> Clint Byrum wrote:
>
> >
> > Heat has been working hard to be able to do per-instance limited access
> > in Keystone for a while. A trust might work just fine for what you want.
> >
>
> I wasn't actually aware of the pr