On Thu, 15 Aug 2019 at 18:05, James E. Blair wrote:
>
> Hi,
>
> We have made the switch to begin storing all of the build logs from Zuul
> in Swift.
>
> Each build's logs will be stored in one of 7 randomly chosen Swift
> regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those
>
Thanks for the write up Eduardo. I thought you and Surya did a good job of
presenting and moderating those sessions.
Mark
On Wed, 21 Nov 2018 at 17:08, Eduardo Gonzalez wrote:
> Hi kollagues,
>
> During the Berlin Summit kolla team had a few talks and forum discussions,
> as well as
The 'ironic_cleaning_network' variable should be the name of a network in
neutron to be used for cleaning, rather than an interface name. If you're
using flat networking, this will just be 'the' network.
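As a sketch (in kolla-ansible this would normally go in globals.yml; the network name here is hypothetical):

```yaml
# globals.yml: the *name* of a neutron network used for node cleaning,
# not an interface name.
ironic_cleaning_network: "provision-net"
```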
Regards,
Mark
On Mon, 12 Nov 2018 at 01:23, Manuel Sopena Ballesteros <
manuel...@garvan.org.au> wrote:
> D
-queens
Documentation: https://kayobe.readthedocs.io
Thanks to everyone who contributed to this release!
Looking forward, we intend to catch up with the OpenStack release cycle, by
making a smaller release with support for OpenStack Rocky, then moving
straight onto Stein.
Cheers,
Mark
On Wed, 10 Oct 2018 at 08:08, Florian Engelmann <
florian.engelm...@everyware.ch> wrote:
> Am 10/9/18 um 1:47 PM schrieb Mark Goddard:
> >
> >
> > On Tue, 9 Oct 2018 at 12:03, Florian Engelmann
> > mailto:florian.engelm...@everyware.ch>>
>
> > w
On Tue, 9 Oct 2018 at 12:03, Florian Engelmann <
florian.engelm...@everyware.ch> wrote:
> Am 10/9/18 um 11:04 AM schrieb Mark Goddard:
> > Thanks for these suggestions Florian, there are some interesting ideas
> > in here. I'm a little concerned about the maintenan
been able to move quickly by providing a flexible configuration
mechanism that avoids the need to maintain support for every OpenStack
feature. Other thoughts inline.
Regards,
Mark
On Mon, 8 Oct 2018 at 11:15, Florian Engelmann <
florian.engelm...@everyware.ch> wrote:
> Hi,
>
> I woul
Hi Hongbin,
I'll add this to our meeting agenda for tomorrow, but I see no reason why
we should not make another queens series release.
Cheers,
Mark
On Sun, 7 Oct 2018 at 18:50, Hongbin Lu wrote:
> Hi Kolla team,
>
> I have a fixup on the configuration of Zun service
On Wed, 3 Oct 2018 at 17:10, James LaBarre wrote:
> On 10/2/18 10:37 AM, Mark Goddard wrote:
>
>
>
> On Tue, 2 Oct 2018 at 14:03, Jay Pipes wrote:
>
>> On 10/02/2018 08:58 AM, Mark Goddard wrote:
>> > Tenks is a project for managing 'virtual bare metal clu
On Tue, 2 Oct 2018 at 17:10, Jim Rollenhagen wrote:
> On Tue, Oct 2, 2018 at 11:40 AM Eric Fried wrote:
>
>> > What Eric is proposing (and Julia and I seem to be in favor of), is
>> > nearly the same as your proposal. The single difference is that these
>> > config templates or deploy templates
On Tue, 2 Oct 2018 at 14:03, Jay Pipes wrote:
> On 10/02/2018 08:58 AM, Mark Goddard wrote:
> > Hi,
> >
> > In the most recent Ironic meeting we discussed [1] tenks, and the
> > possibility of adding the project under Ironic governance. We agreed to
> > move t
does everyone think? Is this something that the ironic community could
or should take ownership of?
[1]
http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-10-01-15.00.log.html#l-170
Thanks,
Mark
change from second to second. The idea of passing temperature information
via traits sounded somewhat ridiculous to me. I think that might have been
the intent of the original poster to present a ridiculous example and hope
people understood. I hope nobody was taking it seriously. :-)
--
Mar
On Fri, 28 Sep 2018 at 22:07, melanie witt wrote:
> On Fri, 28 Sep 2018 15:42:23 -0500, Eric Fried wrote:
> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
> >>
> >>
> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote:
> >>> It's time somebody said this.
> >>>
> >>> Every time we turn a
To add some context around what I suspect is the reason for the most recent
incarnation of this debate, many Ironic users have a requirement to be able
to influence the configuration of a server at deploy time, beyond the
existing supported mechanisms. The classic example is hardware RAID - the
+1
On Tue, 25 Sep 2018 at 16:48, Eduardo Gonzalez wrote:
> Hi,
>
> I would like to propose Chason Chan to the kolla-ansible core team.
>
> Chason has been working on the addition of Vitrage roles, reworking the
> VPNaaS service, maintaining
> documentation as well as fixing many bugs.
>
> Voting will be
> There is a more general plan that will help, but it's not quite ready yet:
> https://review.openstack.org/#/c/504952/
>
> As such, I think we can't yet pull the plug on flavors including
> capabilities and passing them to Ironic, but (after a cycle of deprecation)
> I think we can no
/call-for-presentations
Cheers,
Mark
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo
Looks like the ironic and kolla rooms are next to each other this time, so
I can hop between them to help setup the conferencing etc.
On Fri, 7 Sep 2018 at 15:21, Mark Goddard wrote:
> Thanks for putting that together Eduardo. I've listed the sessions that I
> expect to attend below.
Thanks for putting that together Eduardo. I've listed the sessions that I
expect to attend below.
Mark
On Thu, 6 Sep 2018 at 17:40, Eduardo Gonzalez wrote:
> Hi folks,
> This is the schedule for Kolla Denver PTG. If someone has a hard conflict
> with any discussion please let me know
Thanks Erik, see you then.
Mark
On Wed, 29 Aug 2018 at 17:20, Erik McCormick
wrote:
> Hey Mark,
>
> Here's the link to the Ops etherpad
>
> https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018
>
> I added a listing for you, and we'll have a schedule out s
+1. I like it. Could also be a good fit for Kayobe's undercloud equivalent
at some point.
On Tue, 28 Aug 2018 at 18:51, Jim Rollenhagen
wrote:
> On Mon, Aug 27, 2018 at 12:09 PM, Dmitry Tantsur
> wrote:
>
>> Hi all,
>>
>> I would like propose the metalsmith library [1][2] for inclusion into
+1
On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen, wrote:
> ++
>
>
> // jim
>
> On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger wrote:
>
>> Greetings everyone!
>>
>> In our team meeting this week we stumbled across the subject of
>> promoting contributors to be sub-project's core reviewers.
>>
On Wed, 22 Aug 2018, 19:08 Erik McCormick,
wrote:
>
>
> On Wed, Aug 22, 2018, 1:52 PM Mark Goddard wrote:
>
>> Hello Kayobians,
>>
>> I thought it is about time to do another update.
>>
>
>
>
>
>> # PTG
>>
>> There won't
our software and documentation.
[1] https://review.openstack.org/#/c/568804
[2] https://review.openstack.org/562306
[3] https://review.openstack.org/572370
[4] https://review.openstack.org/592932
Cheers,
Mark
Yeah. I’ll post the retirement commits this week.
mark
> On Aug 18, 2018, at 13:39, Andreas Jaeger wrote:
>
> Mark, shall I start the retirement of astara now? I would appreciate a "go
> ahead" - unless you want to do it yourself...
>
> Andreas
>
>> On 2
Whether there is a physical PTG session or not, I'd certainly like to meet
up with other folks who are using and/or contributing to Kolla, let's be
sure to make time for that.
Mark
On 17 August 2018 at 12:54, Adam Harwell wrote:
> As one of the other two in the etherpad, I will say tha
As one of the lucky three kolleagues able to make the PTG, here's my
position (inline).
On 17 August 2018 at 11:52, Eduardo Gonzalez wrote:
> Fellow kolleages.
>
> In september is Denver PTG, as per the etherpad [0] only 3 contributors
> confirmed their presence in the PTG, we expected more
, since they are tied to the configuration layout and the use of a
kolla_toolbox container for executing keystone/DB ansible modules.
[1] https://review.openstack.org/587591
[2] https://review.openstack.org/587590
Mark
On 10 August 2018 at 10:59, Chandan kumar wrote:
> Hello,
>
> On Fri, Aug
Thanks for your work as PTL during the Rocky cycle Jeffrey. Hope you are
able to stay part of the community.
Cheers,
Mark
On 25 July 2018 at 04:48, Jeffrey Zhang wrote:
> Hi all,
>
> I just want to say I am not running for PTL for the Stein cycle. I have been
> involved in Kolla project
Hi,
I've been caught by this myself - by default s3api has the parameter:
dns_compliant_bucket_names = True
which will forbid _ in the bucket name. Just set this to False under
your [s3api] section (or the [swift3] section if it is called that in
your proxy pipeline).
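As a sketch, the relevant fragment of proxy-server.conf would look something like this (the exact section name depends on which middleware your pipeline uses):

```ini
[filter:s3api]
use = egg:swift#s3api
# Allow underscores (and other non-DNS-compliant names) in bucket names
dns_compliant_bucket_names = False
```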
regards
Mark
On 23
-guide.html#custom-log-filtering
Cheers,
Mark
On 14 July 2018 at 14:29, Sergey Glazyrin
wrote:
> Hello guys!
> We are migrating our product to kolla-ansible and as far as probably you
> know, it uses fluentd to control logs, etc. In non containerized openstack
> we use rsyslog
used when the system boots IronicPythonAgent to
provision the disk?
--
Mark
You must be the change you wish to see in the world. -- Mahatma Gandhi
Never let the future disturb you. You will meet it, if you have to, with
the same weapons of reason which today arm you against the present. --
Marcus
tack.org/#!/story/2001649
[8] https://storyboard.openstack.org/#!/story/2002096
[9] https://storyboard.openstack.org/#!/story/2001627
Cheers,
Mark
ign networks to
roles.
{%- for role in roles %}
8<-8<-8<-8<-8<-
Note that I had to do an else clause because jinja2 would not output the
newline that was outside of the for block.
Am I following the correct path to fix this issue?
--
Mark
ary: true
...
(undercloud) [stack@oscloud5 ~]$ grep External
templates/environments/network-environment.yaml
ExternalNetCidr: 9.114.118.0/24
ExternalNetworkVlanID: 10
ExternalAllocationPools: [{'start': '9.114.118.240', 'end':
'9.114.118.248'}]
ExternalInterfaceDefaultRoute: 9.114.118.2
+1
On 1 June 2018 at 08:55, Eduardo Gonzalez wrote:
> +1
>
> 2018-06-01 8:57 GMT+02:00 Michał Jastrzębski :
>
>> +1 from me:)
>>
>> On Thu, May 31, 2018, 11:40 PM Martin André wrote:
>>
>>> If Steve wrote half of kolla-cli then it's a no brainer to me. +1!
>>>
>>> On Thu, May 31, 2018 at 7:02
+1
On May 31, 2018 at 1:06:43 PM, Borne Mace (borne.m...@oracle.com) wrote:
Greetings all,
I would like to propose the addition of Steve Noyes to the kolla-cli
core reviewer team. Consider this nomination as my personal +1.
Steve has a long history with the kolla-cli and should be considered
;
>>>
>>>
>>> Regards,
>>>
>>> Duong
>>>
>>>
>>>
>>> *From:* Jeffrey Zhang [mailto:zhang.lei@gmail.com]
>>> *Sent:* Thursday, April 26, 2018 10:31 PM
>>> *To:* OpenStack Development Mailing List <o
=true' in config.
There are many paths we could take from there, but perhaps this would be
best discussed at the next PTG?
Cheers,
Mark
On Mon, 30 Apr 2018, 14:07 Jeffrey Zhang, <zhang.lei@gmail.com> wrote:
> Thanks hongbin
>
> In Kolla, one job is used to test multi Ope
On 20/04/18 04:54, Dean Troyer wrote:
On Thu, Apr 19, 2018 at 7:51 AM, Doug Hellmann <d...@doughellmann.com> wrote:
Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200:
Swift has had storage policies for a while now. These are enabled by
setting the 'X-Storage-Policy'
-*' type headers.
It seems to me that adding this would be highly desirable. Is it in the
pipeline? If not I might see how much interest there is at my end for
adding such - as (famous last words) it looks pretty straightforward to do.
regards
Mark
On Thu, 5 Apr 2018, 20:28 Martin André, wrote:
> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke
> wrote:
> > Hi all,
> >
> > This mail is to serve as a follow on to the discussion during yesterday's
> > team meeting[4], which was regarding the desire to
before recreating/restarting the container. COPY_ONCE is the king of
immutable here, but even for COPY_ALWAYS, this works as long as the
container doesn't restart while the config files are being written.
Mark
On 5 April 2018 at 21:41, Michał Jastrzębski <inc...@gmail.com> wrote:
> S
Thanks John,
I was leaning towards '2 is not quite enough' for parity, but wanted to
get a 2nd opinion. The level of detail and discussion in your answer is
very helpful, much appreciated!
Mark
On 05/04/18 08:25, John Dickinson wrote:
The answer always starts with "it depends..."
...hearing crickets - come on guys, I know you have some thoughts about
this :-) !
On 29/03/18 13:08, Mark Kirkwood wrote:
Hi,
We are looking at implementing EC Policies with similar durability to 3x
replication. Now naively this corresponds to m=2 (using notation from
previous thread
r: https://www.openstack.org/summit/vancouver-2018/sponsors/
Code of Conduct:
https://www.openstack.org/summit/vancouver-2018/code-of-conduct/
See you at the
documentation. I'd love to get some guidance about how to decide on the
'right amount' of parity!
Cheers
Mark
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http
On 14/03/18 13:33, Mark Kirkwood wrote:
Doing a bit more playing about leads me to think that for a *single
region* EC policy we can get a tighter lower bound on the number of
hosts: I'm calculating it as (k+m)/m.
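That bound can be sketched numerically; this is just my reading of the arithmetic, assuming fragments are spread as evenly as possible across hosts:

```python
import math

def min_hosts(k, m):
    """Lower bound ceil((k+m)/m) on hosts per region for a k+m EC policy:
    with fewer hosts, some host must hold more than m of the k+m fragments,
    so losing that host leaves fewer than k fragments to reassemble from."""
    return math.ceil((k + m) / m)

# A 4+2 policy (durability comparable to 3x replication) needs
# at least 3 hosts per region by this bound.
print(min_hosts(4, 2))  # -> 3
```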
I probably should have shared the reasoning rather than just plumping
On 14/03/18 11:47, Clay Gerrard wrote:
On Tue, Mar 13, 2018 at 3:05 PM, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:
To me this suggests that a certain minimum number of *hosts* per
region is needed for a given EC policy t
to
reassemble objects :-(
To me this suggests that a certain minimum number of *hosts* per region
is needed for a given EC policy to be durable in the event of host
outage (or destruction). Is this correct - or have I flubbed the
calculations?
regards
Mark
On 10/03/18 07:21, Clay Gerrard wrote:
On Thu, Mar 8, 2018 at 8:07 PM, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:
Are we supposed to do a bit of python for ourselves to use these?
(rubs hands ready to hack...)...
May
binary calling any
of this stuff (that I can see anyway). Are we supposed to do a bit of
python for ourselves to use these? (rubs hands ready to hack...)...
Cheers
Mark
[3] https://docs.openstack.org/infra/storyboard/
[4] https://storyboard.openstack.org/#!/project/928
[5] https://git.openstack.org/cgit/openstack/kayobe
[6] https://docs.openstack.org/infra/manual/developers.html
[7] https://docs.openstack.org/infra/zuul/
Cheers,
something.
Mark
[1] https://docs.openstack.org/releasenotes/python-ironicclient/queens.html
[2] https://bugs.launchpad.net/ironic/+bug/1753435
On 4 March 2018 at 23:32, Michael Still <mi...@stillhq.com> wrote:
> I think one might be a bug in the deploy guide then. It states:
>
> "
On the enroll state, you can move it to available via manageable by setting
the provision state to manage, then provide.
Try an ironic node-validate to diagnose the issue, and make sure the ipmi
credentials given can be used to query the nodes power state using ipmitool.
Mark
On 4 Mar 2018 9:42
Try setting the ironic_log_dir variable to /var/log/ironic, or setting
[default] log_dir to the same in ironic.conf.
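That is, something like the following in ironic.conf (the section is [DEFAULT] in oslo.config terms):

```ini
[DEFAULT]
log_dir = /var/log/ironic
```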
I'm surprised it's not logging to a file by default.
Mark
On 4 Mar 2018 8:33 p.m., "Michael Still" <mi...@stillhq.com> wrote:
> Ok, so I applied your patch an
The ILO hardware type was also not loading because the required management
and power interfaces were not enabled. The patch should address that but
please let us know if there are further issues.
Mark
On 4 Mar 2018 7:59 p.m., "Michael Still" <mi...@stillhq.com> wrote:
Replying t
Hi Michael,
If you're using the latest release of bifrost I suspect you're hitting
https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted a fix for
review.
For a workaround, modify /etc/ironic/ironic.conf, and set
enabled_hardware_types=ipmi.
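That is, in /etc/ironic/ironic.conf (the [DEFAULT] section is where this option normally lives):

```ini
[DEFAULT]
enabled_hardware_types = ipmi
```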
Cheers,
Mark
On 4 Mar 2018 5:50 p.m
and the Foundation staff will
be happy to help!
At Your Service,
Mark T. Voelker
> On Feb 26, 2018, at 11:22 PM, Waines, Greg <greg.wai...@windriver.com> wrote:
>
>
> · I have a commercial OpenStack product that I would like to claim
> com
+1
On 26 Feb 2018 7:58 p.m., "Eduardo Gonzalez" wrote:
> +1
>
> On Mon, Feb 26, 2018, 7:40 PM Paul Bourke wrote:
>
>> Hey Kolla,
>>
>> Hope you're all enjoying Dublin so far :) Some have expressed interest
>> in getting together for a team meal, how
I like the swift/HTTP proposal.
In kayobe we've considered how to customise the deployment process, with
hook points [1] looking like the right approach.
Another possibility when deploy steps [2] land could be to break up the
ansible deployment into multiple steps, and allow each step to be
depend upon it. I
guess the question is, for the supported values of kolla-ansible's
variables, should a minimal working deployment also be supported? Does this
logic inevitably lead to (1), or is it sustainable?
Mark
On 30 January 2018 at 12:54, Simon Leinen <simon.lei...@switch.ch> wrote:
differences in configuration)
* some config files are not in a format that can be easily merged (HAProxy,
dnsmasq, etc.)
These should be the exception, rather than the rule, however.
Mark
On 29 January 2018 at 14:12, Jeffrey Zhang <zhang.lei@gmail.com> wrote:
> Thank Paul for pointing
Looks like this should be resolved by
https://review.openstack.org/#/c/537453/.
Mark
On 26 January 2018 at 10:33, Mark Goddard <m...@stackhpc.com> wrote:
> Also seeing this for the u-c [1] and g-r [2] bumps for python-ironicclient
> 2.2.0. These are required in order to use the
Also seeing this for the u-c [1] and g-r [2] bumps for python-ironicclient
2.2.0. These are required in order to use the ironic node traits feature in
nova.
[1] https://review.openstack.org/#/c/538093
[2] https://review.openstack.org/#/c/538066/3
On 25 January 2018 at 11:15, Afek, Ifat (Nokia -
-traits.html
[2]
https://review.openstack.org/#/c/504952/7/specs/approved/config-template-traits.rst
[3] https://review.openstack.org/#/q/topic:bug/1722194+(status:open)
Thanks,
Mark (mgoddard)
.
mark
> On Jan 10, 2018, at 5:20 PM, Sean McGinnis <sean.mcgin...@gmx.com> wrote:
>
> While going through various repos looking at things to be cleaned up, I
> noticed the last commit for openstack/astara
> was well over a year ago. Based on this and the littl
/CloudArchive
Best Regards
Mark Baker
On 4 January 2018 at 10:52, Eduardo Gonzalez <dabar...@gmail.com> wrote:
> Hi João,
>
> It would be possible but there is not any container image with the
> nova-lxc code on it at the moment. (No binary rpm in RDO neither)
>
>
Hi,
I've registered the #openstack-kayobe channel for developer and operator
discussion of kayobe [1]. Jump on if you're using kayobe or interested in
doing so.
Mark
[1] https://github.com/stackhpc/kayobe
Alex Schultz <aschu...@redhat.com> wrote on 12/14/2017 09:24:54 AM:
> On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
> ... As I said previously, please post the
> patches ASAP so we can get eyes on these changes. Since this does
> have a
Alex Schultz <aschu...@redhat.com> wrote on 12/13/2017 04:29:49 PM:
> On Wed, Dec 13, 2017 at 3:22 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
> > What I have done at a high level is to rename the images into
architecture
> > specific
> > images. For example,
>
> I just need an understanding on the impact and the timeline. Replying
> here is sufficient.
>
> I assume since some of this work was sort of done earlier outside of
> tripleo and does not affect the default installation path that most
> folks will consume, it shouldn't be impacting to general
+1
Congrats, Jun! Thanks for all the hard work. Keep it up.
----- Original message -----
From: Xing Yang
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] Re: [manila] Nominating Zhong
doraproject.org/wiki/User:Hamzy/TripleO_mixed_undercloud_overcloud_try8
--
Mark
Joe Talerico wrote on 10/09/2017 11:26:14 AM:
> If you ssh to your compute node can you ping 172.16.0.14?
Just a quick note to let everyone know that I am no longer seeing this
problem.
I had to wait a while for lab services to update the firmware and then I
was
seeing IP
complexity.
Mark
[1] http://paste.openstack.org/show/623681/
[2] https://kayobe.readthedocs.io
On 13 October 2017 at 10:55, Sam Betts (sambetts) <sambe...@cisco.com>
wrote:
> There are multiple options for doing this, but I suggest avoiding manually
> plumbing anything into OVS as
rd ff:ff:ff:ff:ff:ff
inet 172.16.0.14/24 brd 172.16.0.255 scope global vlan50
valid_lft forever preferred_lft forever
inet6 fe80::4cd6:7eff:fed8:45da/64 scope link
valid_lft forever preferred_lft forever
--
Mark
Hi,
I am pleased to announce the availability of not one, but two releases of
kayobe[1]. Both provide numerous enhancements over previous releases.
Kayobe 2.0.0 is based on the Ocata OpenStack release, and Kayobe 3.0.0 on
Pike. Please see the release notes[2] for further information.
A bit about
this
community has already accomplished, and plan for all the things we will
accomplish together in the future.
Mark (sparkycollier)
> On Aug 30, 2017, at 8:24 AM, Thierry Carrez <thie...@openstack.org> wrote:
>
> Hello OpenStack community,
>
> I'm proud and excited to annou
Got it, thanks for explaining.
Mark
On 11 August 2017 at 10:46, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:
> Hi Mark,
>
> I do not propose to remove handling of plain http image references
> altogether, just remove the code pieces in glance service ut
Hi Pavlo,
#3 is used in Bifrost, where there is no Glance service but the default
driver is agent_ipmitool. The images are served by the local nginx service.
For example, taken from one ironic node:
'image_source': u'http://10.41.253.100:8080/deployment_image.qcow2'
Mark
On 10 August 2017
from organization
to organization).
At Your Service,
Mark T. Voelker
> On Aug 7, 2017, at 10:27 PM, symack <sym...@gmail.com> wrote:
>
> Hello Everyone,
>
> New to the cloud world and wondering how does OpenStack come to play with
> Cloud Foundry. A very high le
Hi Aimee,
I suspect the reason that ansible is owned by root in your setup is that
you ran scripts/env-setup.sh using sudo. Could you paste the errors seen
when running without sudo?
Mark
On 3 August 2017 at 23:09, Aimee Ukasick <aimeeu.opensou...@gmail.com>
wrote:
> Thanks Mark for
In my previous job we had to build a firewall solution for our OpenStack
control plane. Our research found that firewalld may have a habit of
'fighting' against the rules added by certain OpenStack services. This was
over a year ago, so things may have changed. We didn't pursue firewalld as
a
and would lead to the node's DHCP requests landing at the wrong neutron
DHCP server instance in some cases.
[1] https://bugs.launchpad.net/ironic/+bug/1666009
Mark
On 28 July 2017 at 12:29, Waines, Greg <greg.wai...@windriver.com> wrote:
> Thanks for the info Mark.
>
>
>
> A du
/drivers/k8s_coreos_v1/templates
[3] https://bugs.launchpad.net/ironic/+bug/1666009
Mark
On 17 July 2017 at 14:27, Waines, Greg <greg.wai...@windriver.com> wrote:
> When MAGNUM launches a VM or Ironic instance for a COE master or minion
> node, with the COE Image,
>
> Wha
On Thu, Jul 27, 2017 at 2:31 AM, Jean-Philippe Evrard <
jean-phili...@evrard.me> wrote:
>
> For ppl who aren't iptables experts, firewalld module brings a lot of
> readability.
> If we are doing the tasks equivalent with iptables, the readability will
> be brought in by variables (I mean variables
Hi Greg,
You're correct - magnum features support for running on top of VMs or
baremetal. Currently baremetal is supported for kubernetes on Fedora core
only[1]. There is a cluster template parameter 'server_type', which should
be set to 'BM' for baremetal clusters.
In terms of how this works
Hi,
Kolla-ansible went through this process a few years ago, and ended up with
a solution involving heka pulling logs from files in a shared docker volume
(kolla_logs). Heka was recently switched for fluentd due to the
disappearance of upstream support. I suspect kolla-kubernetes has been
through
I'll throw a second grenade in.
Kayobe[1][2] is an OpenStack deployment tool based on kolla-ansible that
sounds in some ways similar to what you're describing. It roughly
follows the TripleO undercloud/overcloud model, with Bifrost used to deploy
the overcloud. Kayobe augments kolla-ansible
etc)*but* now we are thinking about re-architecting
(plus more options exist now), it would make sense to revisit this area.
Best wishes
Mark