Hey James,
I'm unable to open this document :(
TP (Tomasz from Intel)
On Tue, Oct 18, 2016 at 2:35 PM, James Penick (via Google Docs) <
jpen...@gmail.com> wrote:
> James Penick has invited you to *edit* the following
> document:
> OpenStack upstream specs
>
Thanks Michael for the info, let me try Octavia first.
Regards,
Liping Mao
On 17/9/11, 05:30, "Michael Johnson" wrote:
Hi Liping,
FYI, Neutron LBaaS is no longer part of Neutron. Load balancing has
been consolidated under the Octavia project. I have added that
On Sun, Sep 10, 2017 at 10:47 PM, Emilien Macchi wrote:
> Today I found 2 issues:
>
> 1) https://bugs.launchpad.net/tripleo/+bug/1716256
>
>
Welcome to the Queens PTG!
Just a reminder that the Mistral PTG is at [1] and it includes the approximate
schedule. I’m saying “approximate” because, if needed, we can change it a
little if some important topics are raised unexpectedly. So I’d
suggest we quickly review the entire
Hi Liping,
FYI, Neutron LBaaS is no longer part of Neutron. Load balancing has
been consolidated under the Octavia project. I have added that tag to
the subject.
We currently do not have plans to add HA capabilities to the haproxy
namespace driver. The intention behind building the octavia
Today I found 2 issues:
1) https://bugs.launchpad.net/tripleo/+bug/1716256
http://logs.openstack.org/94/500794/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq/c3e3945/logs/undercloud/home/jenkins/overcloud_prep_containers.log.txt.gz#_2017-09-10_14_49_29
I think we need this
Fellow interop members,
I would like to request that we consider adding information on what was tested
for interop.
Specifically, if you are listed in distros and appliances, it would be good to
be able to specify what underlying HW was used for interop testing.
I do not propose that as the
The amount of time taken to create a trove instance (or cluster) is largely
related to how long it takes to create a nova instance of the same size.
So, how long does it take to create a nova instance of the same size on
your system?
Does your trove image install the database statically into the
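The question above ("how long does it take to create a nova instance of the same size on your system?") is easy to answer empirically by timing the create call. A minimal sketch; `create_instance` here is a placeholder stand-in for whatever client call you actually use (e.g. an openstacksdk create-and-wait), not Trove code:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - start

# Placeholder for a real create-and-wait call against your cloud.
def create_instance(name):
    time.sleep(0.01)  # stand-in for the actual provisioning wait
    return name

server, elapsed = timed(create_instance, "test-vm")
print(f"{server} created in {elapsed:.2f}s")
```

Comparing that number against the trove instance creation time tells you how much overhead trove itself adds.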
Hey all,
The schedule [0] has been updated with room information for the
policy-in-code effort. We'll be in Grays Peak on Level 3 on Monday and
Tuesday to help projects with the Queens goal [1].
[0] https://etherpad.openstack.org/p/policy-queens-ptg
[1]
Looks like we'll be in Telluride B, Atrium Level. I've updated the room
information in the etherpad [0].
[0] https://etherpad.openstack.org/p/keystone-queens-ptg
On 08/24/2017 02:25 PM, Lance Bragstad wrote:
> I've worked the topics into a schedule [0]. Monday and Tuesday are
> pretty general,
Looks like the Baremetal/VM SIG (#compute) will meet in Ballroom B,
Banquet Level. I've updated the etherpad with the room information [0].
[0] https://etherpad.openstack.org/p/queens-PTG-vmbm
On 09/07/2017 10:01 AM, Lance Bragstad wrote:
> I spoke with John a bit today in IRC and we have a
We released TripleO Pike RC2 today!
Here are some numbers:
6 blueprints implemented (from FFE)
70 bugs fixed
The release team will propose the final release based on this RC2 tag
over the next few days.
We'll continue to work on Pike stabilization and doing bugfix backports.
A new tag will be
[followup...]
On Sat, Sep 9, 2017 at 4:51 PM, Dean Troyer wrote:
> [0] https://review.openstack.org/#/c/485232
I rebased this one to see where we stand with just removing the
'volume' service type.
dt
--
Dean Troyer
dtro...@gmail.com
Hello everyone,
A new release candidate for tripleo-puppet-elements for the end of the Pike
cycle is available! You can find the source code tarball at:
https://tarballs.openstack.org/tripleo-puppet-elements/
Unless release-critical issues are found that warrant a release
candidate respin,
Hello everyone,
A new release candidate for tripleo-image-elements for the end of the Pike
cycle is available! You can find the source code tarball at:
https://tarballs.openstack.org/tripleo-image-elements/
Unless release-critical issues are found that warrant a release
candidate respin,
Hello everyone,
A new release candidate for tripleo-heat-templates for the end of the Pike
cycle is available! You can find the source code tarball at:
https://tarballs.openstack.org/tripleo-heat-templates/
Unless release-critical issues are found that warrant a release
candidate respin,
Probably https://github.com/openstack/kuryr-kubernetes
On Sun, Sep 10, 2017 at 4:29 PM, Gary Kotton wrote:
> Hi,
>
> I suggest that you take a look at https://wiki.openstack.org/wiki/Kuryr.
> This most probably already has the relevant watchers implemented.
>
> Thanks
>
>
Hi Neutron LBaaS team,
One quick question about HA LBaaS v2 with the haproxy namespace driver (not Octavia).
Do we have any plan to support haproxy HA with keepalived?
(Something similar to L3 HA.)
I see we have some patches that provide some level of HA [1][2], but it
still takes some time (minutes) to fail over.
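For context, the kind of active/standby failover keepalived provides (as in L3 HA) would look roughly like the sketch below. The interface name, VIP, and priorities are placeholders for illustration, not anything the namespace driver ships:

```
vrrp_script chk_haproxy {
    script "pidof haproxy"   # demote this node if haproxy dies
    interval 2
}

vrrp_instance lbaas_vip {
    state MASTER             # BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100             # lower value on the standby
    advert_int 1
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        10.0.0.100/24        # the load balancer VIP floats with VRRP
    }
}
```

With VRRP advertisements every second, failover of the VIP is typically seconds rather than minutes, which is the gap the question above is getting at.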
Hi,
I suggest that you take a look at https://wiki.openstack.org/wiki/Kuryr. This
most probably already has the relevant watchers implemented.
Thanks
Gary
From: Mridu Bhatnagar
Date: Sunday, September 10, 2017 at 1:40 PM
To: "openstack@lists.openstack.org"
Thanks for this most interesting and informative exchange.
On Fri, Sep 8, 2017 at 5:20 PM, Michael Bayer wrote:
> On Fri, Sep 8, 2017 at 8:53 AM, Kevin Benton wrote:
> > Since the goal of that patch is to deal with deadlocks, the retry can't
> > happen down
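The retry-on-deadlock pattern being discussed is what oslo.db's `wrap_db_retry` decorator provides; the point is that the retry has to wrap the whole top-level transaction, not happen further down inside it. A generic stand-in sketch (not the actual Neutron/oslo.db code; `DBDeadlock` is a placeholder for the driver's deadlock error):

```python
import functools
import random
import time

class DBDeadlock(Exception):
    """Stand-in for the database driver's deadlock error."""

def retry_on_deadlock(max_retries=3):
    """Retry the decorated function from the top, so each attempt
    starts with a clean transaction, with a small jittered backoff."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    time.sleep(random.uniform(0, 0.05 * (attempt + 1)))
        return wrapper
    return deco

calls = []

@retry_on_deadlock()
def update_port():
    calls.append(1)
    if len(calls) < 2:
        raise DBDeadlock()  # first attempt deadlocks, second succeeds
    return "ok"
```

Retrying lower in the call stack would re-run only part of the work inside a transaction that is already doomed, which is why it "can't happen down" there.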
Thanks Kevin, then I will remove the Fuel server and free the NIC occupied
by the PXE/Admin network on all controller and compute nodes.
Cheers,
Dan
2017-09-09 14:06 GMT+08:00 kevin parrikar :
> PXE/Admin network is used for rsyslog, nailgun agent-server
Hi
I am Mridu, a B.Tech computer science and engineering graduate. I am looking
forward to contributing to OpenStack in the coming Outreachy round. I am
interested in the topic
Add introspection HTTP REST points to the Kubernetes API watchers.
Required skills: Python
Optional skills: API design
Can someone
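As a rough illustration of what "introspection HTTP REST points" for the watchers could mean, here is a minimal self-contained sketch. The `WATCHERS` dict and the `/watchers` path are hypothetical; a real implementation would read live state from the kuryr-kubernetes watcher objects:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical watcher state; a real implementation would pull this
# from the running Kubernetes API watcher objects.
WATCHERS = {"pods": {"alive": True, "events_seen": 42}}

class IntrospectionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/watchers":
            body = json.dumps(WATCHERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), IntrospectionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

An operator could then check watcher health with something like `curl http://127.0.0.1:<port>/watchers`.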