In the spirit of "better late than never", here's a summary of our CI
Squad meeting.
Time: Thursdays, 15:30-16:30 UTC
Place: https://bluejeans.com/4113567798/
Configuration management in TripleO CI
==
There was a design meeting organized by Gabriele (thank
On Wed, Jan 25, 2017 at 3:42 PM, Sagi Shnaidman wrote:
> Hi, all
>
> I'd like to propose a slightly different approach to running experimental jobs in
> TripleO CI.
> As you know, we have OVB jobs and non-OVB jobs, and different pipelines for
> running these two types of jobs.
>
> What is the current flow:
> if
Hi, all
I'd like to propose a slightly different approach to running experimental jobs in
TripleO CI.
As you know, we have OVB jobs and non-OVB jobs, and different pipelines for
running these two types of jobs.
What is the current flow:
if you need to run experimental jobs, you write a comment with "check
experi
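For reference, the on-demand flow described above is driven by a Gerrit comment trigger in the Zuul v2 layout. A rough sketch of its shape follows; the comment regex, file path, and the omission of the job/report sections are simplifications and assumptions, not the actual project-config entries.

# Illustrative only: approximate shape of the on-demand "experimental"
# pipeline in a Zuul v2 layout; the real definition lives in project-config.
cat <<'EOF' >> zuul/layout.yaml
  - name: experimental
    manager: IndependentPipelineManager
    precedence: low
    trigger:
      gerrit:
        - event: comment-added
          comment: (?i)^(Patch Set [0-9]+:)?\s*check experimental\s*$
EOF

Jobs attached to this pipeline only run when someone leaves that review comment, which is why OVB capacity is not consumed on every patch set.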
Everybody interested in the TripleO CI and Quickstart is welcome to join
the weekly meeting:
Time: Thursdays, 15:30-16:30 UTC
Place: https://bluejeans.com/4113567798/
Here's this week's summary:
* There aren't any blockers or bottlenecks slowing down the transition
to the Quickstart based CI.
On Mon, Jan 16, 2017 at 3:08 AM, Dougal Matthews wrote:
>
>
> On 15 January 2017 at 20:24, Sagi Shnaidman wrote:
>>
>> Hi, all
>>
>> FYI, the periodic TripleO nonha jobs fail because of introspection
>> failure; there is an open bug in Mistral:
>>
>> Ironic introspection fails because of unexpected k
Hi,
as part of an effort to bring the success rate of tempest tests closer to
100% in tripleo-ci, we propose to replace the current periodic ha
tempest job with one that uses quickstart but tests in nonha.
We pushed a change in infra: https://review.openstack.org/420647
that will replace the cu
On 15 January 2017 at 20:24, Sagi Shnaidman wrote:
> Hi, all
>
> FYI, the periodic TripleO nonha jobs fail because of introspection
> failure; there is an open bug in Mistral:
>
> Ironic introspection fails because of the unexpected keyword "insecure"
> https://bugs.launchpad.net/tripleo/+bug/1656692
>
Hi, all
FYI, the periodic TripleO nonha jobs fail because of an introspection failure;
there is an open bug in Mistral:
Ironic introspection fails because of the unexpected keyword "insecure"
https://bugs.launchpad.net/tripleo/+bug/1656692
and it is marked as a promotion blocker.
Thanks
--
Best regards
Sagi Shna
On 01/13/2017 09:25 AM, Emilien Macchi wrote:
> On Fri, Jan 13, 2017 at 9:09 AM, Gabriele Cerami wrote:
>>
>> Hi,
>>
>> following a suggestion from Alan Pevec I'm proposing to stop using
>> "current" repo from dlrn and start using "consistent" instead.
>> The main difference should only be that
On Fri, Jan 13, 2017 at 9:09 AM, Gabriele Cerami wrote:
>
> Hi,
>
> following a suggestion from Alan Pevec I'm proposing to stop using
> "current" repo from dlrn and start using "consistent" instead.
> The main difference should only be that "consistent" is not affected by
> packages in ftbfs, so
Hi,
following a suggestion from Alan Pevec, I'm proposing to stop using the
"current" repo from dlrn and start using "consistent" instead.
The main difference should only be that "consistent" is not affected by
packages in ftbfs, so we're testing with a bit more stability.
This is the proposal
https:
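In practice the switch is just a matter of which symlinked repo the jobs (or a developer reproducing them) pull. A minimal sketch, assuming the usual trunk.rdoproject.org layout; verify the exact URLs before relying on them:

# Illustrative: point a test machine at the "consistent" dlrn repo
# instead of "current" (URL layout assumed, not taken from the proposal).
sudo curl -Lo /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-master/consistent/delorean.repo
# previously:
#   https://trunk.rdoproject.org/centos7-master/current/delorean.repo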
On Thu, Jan 12, 2017 at 12:26 PM, Attila Darazs wrote:
> We had our first meeting as the CI Squad today. We re-purposed our
> "Quickstart to Upstream Transitioning" meeting into the Squad meeting, so
> the topics were and will be focused on the transition for the next month or
> so.
>
> Everyb
We had our first meeting as the CI Squad today. We re-purposed our
"Quickstart to Upstream Transitioning" meeting into the Squad meeting,
so the topics were and will be focused on the transition for the next
month or so.
Everybody interested in the TripleO CI and Quickstart is welcome to j
On Fri, Jan 6, 2017 at 6:57 AM, Emilien Macchi wrote:
> I found it useful to share a status on what's going on in CI now.
>
> 1) We fixed ovb-updates job: https://review.openstack.org/#/c/416706/
> It should be stable again now. Please don't ignore it anymore (it was
> for a few days until last night
I found it useful to share a status on what's going on in CI now.
1) We fixed ovb-updates job: https://review.openstack.org/#/c/416706/
It should be stable again now. Please don't ignore it anymore (it was
for a few days until last night).
2) multinode & ovb-nonha are green and pretty stable.
3) ov
On Wed, Jan 4, 2017 at 11:22 AM, Attila Darazs wrote:
> On 01/04/2017 10:34 AM, Steven Hardy wrote:
>>
>> Hi Harry,
>>
>> On Tue, Jan 03, 2017 at 04:04:51PM -0500, Harry Rybacki wrote:
>>>
>>> Greetings All,
>>>
>>> Folks have been diligently working on the blueprint[1] to prepare
>>> TripleO-Quic
On 01/04/2017 10:34 AM, Steven Hardy wrote:
Hi Harry,
On Tue, Jan 03, 2017 at 04:04:51PM -0500, Harry Rybacki wrote:
Greetings All,
Folks have been diligently working on the blueprint[1] to prepare
TripleO-Quickstart (OOOQ)[2] and TripleO-Quickstart-Extras[3] for
their transition into TripleO-
Greetings Steve
On Wed, Jan 4, 2017 at 4:34 AM, Steven Hardy wrote:
> Hi Harry,
>
> On Tue, Jan 03, 2017 at 04:04:51PM -0500, Harry Rybacki wrote:
> > Greetings All,
> >
> > Folks have been diligently working on the blueprint[1] to prepare
> > TripleO-Quickstart (OOOQ)[2] and TripleO-Quickstart-
Hi Harry,
On Tue, Jan 03, 2017 at 04:04:51PM -0500, Harry Rybacki wrote:
> Greetings All,
>
> Folks have been diligently working on the blueprint[1] to prepare
> TripleO-Quickstart (OOOQ)[2] and TripleO-Quickstart-Extras[3] for
> their transition into TripleO-CI. Presently, our aim is to begin th
adding [ci] to the subject.
On Tue, Jan 3, 2017 at 4:04 PM, Harry Rybacki wrote:
> Greetings All,
>
> Folks have been diligently working on the blueprint[1] to prepare
> TripleO-Quickstart (OOOQ)[2] and TripleO-Quickstart-Extras[3] for
> their transition into TripleO-CI. Presently, our aim is to
Hi Emilien and all,
On 16.12.2016 01:26, Emilien Macchi wrote:
> On Thu, Dec 15, 2016 at 12:22 PM, Sven Anderson wrote:
>> Hi all,
>>
>> while I was waiting again for the CI to be fixed and didn't want to
>> torture it with additional rechecks, I wanted to find out how much of
>> our CI infrastr
On Fri, Dec 16, 2016 at 9:12 AM, Flavio Percoco wrote:
> On 14/12/16 21:44 -0500, Emilien Macchi wrote:
>
>> On Wed, Dec 14, 2016 at 7:22 PM, Wesley Hayutin
>> wrote:
>>
>>>
>>>
>>> On Fri, Dec 2, 2016 at 12:04 PM, Wesley Hayutin
>>> wrote:
>>>
Greetings,
I wanted to send a
On 14/12/16 21:44 -0500, Emilien Macchi wrote:
On Wed, Dec 14, 2016 at 7:22 PM, Wesley Hayutin wrote:
On Fri, Dec 2, 2016 at 12:04 PM, Wesley Hayutin wrote:
Greetings,
I wanted to send a status update on the quickstart based containerized
compute ci.
The work is here:
https://review.open
On Thu, Dec 15, 2016 at 12:22 PM, Sven Anderson wrote:
> Hi all,
>
> while I was waiting again for the CI to be fixed and didn't want to
> torture it with additional rechecks, I wanted to find out how much of
> our CI infrastructure we waste with rechecks. My assumption was that
> every recheck i
Neat, thanks Sven!
Here are the nova stats:
http://paste.openstack.org/show/592551/
--diana
On Thu, Dec 15, 2016 at 12:22 PM, Sven Anderson wrote:
> Hi all,
>
> while I was waiting again for the CI to be fixed and didn't want to
> torture it with additional rechecks, I wanted to find out,
Hi all,
while I was waiting again for the CI to be fixed and didn't want to
torture it with additional rechecks, I wanted to find out how much of
our CI infrastructure we waste with rechecks. My assumption was that
every recheck is a waste of resources based on a false negative, because
it render
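Sven's exact tooling isn't shown in the thread; as an illustration of how such numbers can be pulled, here is a rough sketch against the Gerrit SSH query API. The project name and the comment matching are assumptions for the example, not his method.

# Rough illustration only: count review comments containing "recheck"
# on merged tripleo-ci changes via the Gerrit SSH interface.
ssh -p 29418 review.openstack.org gerrit query \
    --format=JSON --comments "project:openstack-infra/tripleo-ci status:merged" \
  | grep -o '"message":"[^"]*recheck[^"]*"' \
  | wc -l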
On Wed, Dec 14, 2016 at 7:22 PM, Wesley Hayutin wrote:
>
>
> On Fri, Dec 2, 2016 at 12:04 PM, Wesley Hayutin wrote:
>>
>> Greetings,
>>
>> I wanted to send a status update on the quickstart based containerized
>> compute ci.
>>
>> The work is here:
>> https://review.openstack.org/#/c/393348/
>>
>
On Fri, Dec 2, 2016 at 12:04 PM, Wesley Hayutin wrote:
> Greetings,
>
> I wanted to send a status update on the quickstart based containerized
> compute ci.
>
> The work is here:
> https://review.openstack.org/#/c/393348/
>
> I had two passes on the morning of Nov 30 in a row, then later that day
On Tue, Dec 6, 2016 at 9:34 PM, Ian Main wrote:
> Wesley Hayutin wrote:
> > Greetings,
> >
> > I wanted to send a status update on the quickstart based containerized
> > compute ci.
> >
> > The work is here:
> > https://review.openstack.org/#/c/393348/
> >
> > I had two passes on the morning of N
Wesley Hayutin wrote:
> Greetings,
>
> I wanted to send a status update on the quickstart based containerized
> compute ci.
>
> The work is here:
> https://review.openstack.org/#/c/393348/
>
> I had two passes on the morning of Nov 30 in a row, then later that day the
> deployment started to fai
Giving a few updates here:
- we implemented option 1.a), which means that we moved the tripleo CI
scenario environments and pingtests into tripleo-heat-templates.
- we created tripleo-scenarioXXX-puppet jobs that run on some modules.
Some examples:
- puppet-gnocchi now runs tripleo-scenario001, that
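To make the consumption side concrete: once the scenario environments live in tripleo-heat-templates, a job (or a developer reproducing one) deploys by passing the scenario file directly. A sketch, with the path assumed from the proposed layout rather than taken from the merged change:

# Illustrative: deploy the service set covered by scenario001 by passing
# its environment from tripleo-heat-templates (path assumed, not verified).
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/ci/environments/scenario001-multinode.yaml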
Greetings,
I wanted to send a status update on the quickstart based containerized
compute ci.
The work is here:
https://review.openstack.org/#/c/393348/
I had two passes on the morning of Nov 30 in a row, then later that day the
deployment started to fail due to the compute node losing its networ
On 11/15/2016 10:41 AM, Gabriele Cerami wrote:
On 18 Oct, Gabriele Cerami wrote:
Hello,
after adding coverage in CI for HA IPv6 scenario here
https://review.openstack.org/363674 we wanted to add IPv6 testing on the
gates.
To not use any more resources the first suggestion to add IPv6 to the
g
On 11/22/2016 09:02 PM, Emilien Macchi wrote:
> == Context
>
> In Newton we added new multinode jobs called "scenarios".
> The challenge we tried to solve was "how to test the maximum of
> services without overloading the nodes that run tests".
>
> Each scenario deploys a set of services, whi
On Fri, Nov 25, 2016 at 7:22 AM, Gabriele Cerami wrote:
> On 22 Nov, Emilien Macchi wrote:
>> 1) Re-use experience from Puppet OpenStack CI and have environments
>> that are in a branched repository.
>> a) Move CI environments and pingtest into
>> tripleo-heat-templates/environments/ci/(scenarios|
On 22 Nov, Emilien Macchi wrote:
> 1) Re-use experience from Puppet OpenStack CI and have environments
> that are in a branched repository.
> a) Move CI environments and pingtest into
> tripleo-heat-templates/environments/ci/(scenarios|pingtest). This repo
> is branched and we could add a README to
On Thu, Nov 24, 2016 at 11:08 AM, Juan Antonio Osorio
wrote:
> I don't have a strong opinion about any option, as long as we have something
> in place I'm happy.
>
> But regarding option 1.A: what would be done for newton once these templates
> are moved to t-h-t. Would they be backported? What ab
I don't have a strong opinion about any option; as long as we have
something in place I'm happy.
But regarding option 1.A: what would be done for newton once these
templates are moved to t-h-t? Would they be backported? What about mitaka?
On 24 Nov 2016 17:55, "Carlos Camacho Gonzalez" wrote:
>
I think it would be cool to go with this option, +1 to 1.A
IMHO,
- Easier to read.
- Easier to maintain.
- We don't make backports, instead we guarantee backwards compatibility.
- We'll re-use experience from Puppet OpenStack CI.
On Wed, Nov 23, 2016 at 10:13 PM, Giulio Fidente wrote:
> hi Emilien,
>
>
hi Emilien,
thanks for putting some thought into this. We have a similar problem
testing RGW, which was only added in Newton.
On 11/23/2016 03:02 AM, Emilien Macchi wrote:
== Context
In Newton we added new multinode jobs called "scenarios".
The challenge we tried to solve was "how to test th
== Context
In Newton we added new multinode jobs called "scenarios".
The challenge we tried to solve was "how to test the maximum of
services without overloading the nodes that run tests".
Each scenario deploys a set of services, which allows us to
horizontally scale the number of scenarios to
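To illustrate the "set of services per scenario" idea: a scenario environment is essentially a resource_registry that switches on one slice of services. The entries below show the shape only; they are examples, not the contents of any shipped scenario file.

# Illustrative shape of a scenario environment (service entries are examples):
cat <<'EOF' > scenario00X-env.yaml
resource_registry:
  OS::TripleO::Services::CeilometerApi: ../puppet/services/ceilometer-api.yaml
  OS::TripleO::Services::GnocchiApi: ../puppet/services/gnocchi-api.yaml
  OS::TripleO::Services::AodhApi: ../puppet/services/aodh-api.yaml
EOF
# Another scenario would register a different set of services, so each job
# stays small enough to fit on the test nodes.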
On Tue, Nov 15, 2016 at 11:41 AM, Gabriele Cerami wrote:
> On 18 Oct, Gabriele Cerami wrote:
>> Hello,
>>
>> after adding coverage in CI for HA IPv6 scenario here
>> https://review.openstack.org/363674 we wanted to add IPv6 testing on the
>> gates.
>> To not use any more resources the first sugges
On 18 Oct, Gabriele Cerami wrote:
> Hello,
>
> after adding coverage in CI for HA IPv6 scenario here
> https://review.openstack.org/363674 we wanted to add IPv6 testing on the
> gates.
> To not use any more resources the first suggestion to add IPv6 to the
> gates was to make all HA jobs IPv6, mov
On 10/12/2016 05:58 PM, Dan Sneddon wrote:
I recently evaluated our needs for testing coverage for TripleO
isolated networking. I wanted to post my thoughts on the matter for
discussion, which will hopefully lead to a shared understanding of what
improvements we need to make. I think we can cov
On Tue, Oct 18, 2016 at 8:10 AM, Gabriele Cerami wrote:
> Hello,
>
> after adding coverage in CI for HA IPv6 scenario here
> https://review.openstack.org/363674 we wanted to add IPv6 testing on the
> gates.
> To not use any more resources the first suggestion to add IPv6 to the
> gates was to make
Hello,
after adding coverage in CI for HA IPv6 scenario here
https://review.openstack.org/363674 we wanted to add IPv6 testing on the
gates.
To avoid using any more resources, the first suggestion for adding IPv6 to the
gates was to make all HA jobs IPv6 and move network isolation testing for
IPv4 to non-ha j
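In deployment terms the proposed split mostly comes down to which network-isolation environment a job passes. A sketch, with file names recalled from tripleo-heat-templates at the time, and the site-specific nets file a hypothetical placeholder:

# Illustrative: an HA job exercising IPv6 network isolation deploys with the
# v6 environments, while non-ha jobs keep the IPv4 ones (paths assumed).
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation-v6.yaml \
  -e ~/network-environment-v6.yaml   # hypothetical site-specific nets file
# IPv4 jobs would instead pass environments/network-isolation.yaml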
Hi,
Over the week-end a patch was merged in puppetlabs-ntp, that broke
deployments on Puppet3.
While TripleO is still running Puppet 3 (it will use Puppet 4 very
soon), we need to find a workaround until then.
The bug is reported here:
https://bugs.launchpad.net/tripleo/+bug/1633713
And we're t
On Fri, Oct 14, 2016 at 2:01 PM, Wesley Hayutin wrote:
> Greetings,
>
> Hey everyone, I wanted to post a link to a blueprint I'm interested in
> discussing at summit with everyone. Please share your thoughts and comments
> in the spec / gerrit review.
>
> https://blueprints.launchpad.net/tripleo/
Greetings,
Hey everyone, I wanted to post a link to a blueprint I'm interested in
discussing at summit with everyone. Please share your thoughts and
comments in the spec / gerrit review.
https://blueprints.launchpad.net/tripleo/+spec/tripleo-third-party-ci-quickstart
Thank you!
On Thu, Oct 13, 2016 at 4:28 AM, Dan Sneddon wrote:
> I recently evaluated our needs for testing coverage for TripleO
> isolated networking. I wanted to post my thoughts on the matter for
> discussion, which will hopefully lead to a shared understanding of what
> improvements we need to make. I t
I recently evaluated our needs for testing coverage for TripleO
isolated networking. I wanted to post my thoughts on the matter for
discussion, which will hopefully lead to a shared understanding of what
improvements we need to make. I think we can cover the majority of
end-user requirements by tes
On 25/08/16 09:49, James Slagle wrote:
On Thu, Aug 25, 2016 at 5:40 AM, Derek Higgins wrote:
On 25 August 2016 at 02:56, Paul Belanger wrote:
On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
The latest recurring problem that is failing a lot of the nonha ssl
jobs in tripleo-ci i
Hi, all
with Derek's help we set up an OVB dev environment on the rh1/rh2 clouds, which
allows developers to run their patches in a real CI environment and debug
their issues there. If you have a problem with your patch in CI and
it works locally, you can reproduce and debug it in this environment.
Ple
Hi, all
FYI, jobs failed after the last image promotion because of a corrupted image;
it seems the last promotion job failed to upload it correctly and it didn't
match the md5. I've replaced it on the mirror server with the image from the previous
delorean hash run; it should be OK because we update them anyway and it
shou
On Thu, Sep 22, 2016 at 1:40 PM, Steven Hardy wrote:
> On Thu, Sep 22, 2016 at 04:36:30PM +0200, Gabriele Cerami wrote:
>> Hi,
>>
>> As reported on this bug
>>
>> https://bugs.launchpad.net/tripleo/+bug/1626483
>>
>> HA gate and periodic jobs for master and sometimes newton started to
>> fail for
On Thu, Sep 22, 2016 at 04:36:30PM +0200, Gabriele Cerami wrote:
> Hi,
>
> As reported on this bug
>
> https://bugs.launchpad.net/tripleo/+bug/1626483
>
> HA gate and periodic jobs for master and sometimes newton started to
> fail for errors related to memory shortage. Memory on undercloud
> ins
Hi,
On Thu, Sep 22, 2016 at 1:48 PM, James Slagle
wrote:
> On Thu, Sep 22, 2016 at 10:36 AM, Gabriele Cerami
> wrote:
> > Hi,
> >
> > As reported on this bug
> >
> > https://bugs.launchpad.net/tripleo/+bug/1626483
> >
> > HA gate and periodic jobs for master and sometimes newton started to
> >
On Thu, Sep 22, 2016 at 10:36 AM, Gabriele Cerami wrote:
> Hi,
>
> As reported on this bug
>
> https://bugs.launchpad.net/tripleo/+bug/1626483
>
> HA gate and periodic jobs for master and sometimes newton started to
> fail for errors related to memory shortage. Memory on undercloud
> instance was
On 09/22/2016 09:36 AM, Gabriele Cerami wrote:
Hi,
As reported on this bug
https://bugs.launchpad.net/tripleo/+bug/1626483
HA gate and periodic jobs for master and sometimes newton started to
fail with errors related to memory shortage. Memory on the undercloud
instance was increased to 8G less t
Hi,
As reported on this bug
https://bugs.launchpad.net/tripleo/+bug/1626483
HA gate and periodic jobs for master and sometimes newton started to
fail with errors related to memory shortage. Memory on the undercloud
instance was increased to 8G less than a month ago, so the problem
needs a different a
For those interested we now have a minimal way to reproduce the
MessagingTimeout in Mistral.
https://bugs.launchpad.net/mistral/+bug/1624284
It seems to be related to this change in Mistral:
https://github.com/openstack/mistral/commit/1b0f0cddd620a3785017bb28d432cb0030b627d7
And even more
So here's an update on the current situation:
Master / Newton
gate-tripleo-ci-centos-7-ovb-nonha
gate-tripleo-ci-centos-7-ovb-ha
The 2 jobs are supposed to pass, but some jobs are timing out in the RH1 cloud.
In order to reduce the timeouts, Ben ran:
heat-manage purge_deleted 3
nova-manage db archive_d
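For anyone repeating that cleanup on a loaded cloud, the general shape of the commands is below; the flags are recalled from the tooling of that era and the age/row values are examples, so double-check before running.

# Rough sketch of the cleanup referenced above (run on the cloud controller):
heat-manage purge_deleted -g days 3                    # drop stacks soft-deleted >3 days ago
nova-manage db archive_deleted_rows --max_rows 10000   # move deleted rows to shadow tables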
On Wed, Sep 14, 2016 at 10:13 PM, Emilien Macchi wrote:
> Hi,
>
> Just a heads-up before end of day:
>
> 1) multinode job is failing 80% of the time. James and I made some
> attempts to revert or fix things but we have not been successful until
> now.
> Everything is documented here: https://bugs.lau
Hi,
Just a heads-up before end of day:
1) multinode job is failing 80% of the time. James and I made some
attempts to revert or fix things but we have not been successful until
now.
Everything is documented here: https://bugs.launchpad.net/tripleo/+bug/1623606
2) ovb jobs are timing out during Net
On Thu, Aug 25, 2016 at 9:49 AM, James Slagle wrote:
> On Thu, Aug 25, 2016 at 5:40 AM, Derek Higgins wrote:
>> On 25 August 2016 at 02:56, Paul Belanger wrote:
>>> On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
The latest recurring problem that is failing a lot of the nonha
On Wed, Aug 24, 2016 at 9:56 PM, Paul Belanger wrote:
> I actually believe this problem highlights how large tripleo-ci has grown, and
> how much it is in need of a refactor. While we won't solve this problem today, I do think
> tripleo-ci is too monolithic today. I believe there is some discussion on
> tripleo-ci is to monolithic today. I believe there is some discussion on
> break
On Thu, Aug 25, 2016 at 5:40 AM, Derek Higgins wrote:
> On 25 August 2016 at 02:56, Paul Belanger wrote:
>> On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
>>> The latest recurring problem that is failing a lot of the nonha ssl
>>> jobs in tripleo-ci is:
>>>
>>> https://bugs.launchp
On 25 August 2016 at 02:56, Paul Belanger wrote:
> On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
>> The latest recurring problem that is failing a lot of the nonha ssl
>> jobs in tripleo-ci is:
>>
>> https://bugs.launchpad.net/tripleo/+bug/1616144
>> tripleo-ci: nonha jobs failing
On 24 August 2016 at 19:11, James Slagle wrote:
> The latest recurring problem that is failing a lot of the nonha ssl
> jobs in tripleo-ci is:
>
> https://bugs.launchpad.net/tripleo/+bug/1616144
> tripleo-ci: nonha jobs failing with Unable to establish connection to
> https://192.0.2.2:13004/v1/a9
On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
> The latest recurring problem that is failing a lot of the nonha ssl
> jobs in tripleo-ci is:
>
> https://bugs.launchpad.net/tripleo/+bug/1616144
> tripleo-ci: nonha jobs failing with Unable to establish connection to
> https://192.0.2
On 08/25/2016 01:51 AM, Steve Baker wrote:
Heat now has efficient polling of nested events, but it doesn't look
like tripleoclient is using that.
It's not clear if the current polling is contributing to the above issue
but I'd definitely recommend switching over.
was simple enough so here is a
On 25/08/16 06:11, James Slagle wrote:
The latest recurring problem that is failing a lot of the nonha ssl
jobs in tripleo-ci is:
https://bugs.launchpad.net/tripleo/+bug/1616144
tripleo-ci: nonha jobs failing with Unable to establish connection to
https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1
On Wed, Aug 24, 2016 at 2:11 PM, James Slagle wrote:
> The latest recurring problem that is failing a lot of the nonha ssl
> jobs in tripleo-ci is:
>
> https://bugs.launchpad.net/tripleo/+bug/1616144
> tripleo-ci: nonha jobs failing with Unable to establish connection to
> https://192.0.2.2:13004/
The latest recurring problem that is failing a lot of the nonha ssl
jobs in tripleo-ci is:
https://bugs.launchpad.net/tripleo/+bug/1616144
tripleo-ci: nonha jobs failing with Unable to establish connection to
https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1b1671ced2ff/stacks/overcloud/f9f6f712-8e8
On Fri, Aug 19, 2016 at 10:04 AM, Sagi Shnaidman wrote:
> Hi, Derek
>
> I suspect Sahara can cause it, it started to run on overcloud since my patch
> was merged: https://review.openstack.org/#/c/352598/
> I don't think it ever ran on jobs, because it was either improperly configured
> or disabled. A
Hi, Derek
I suspect Sahara could be causing it; it started to run on the overcloud since my
patch was merged: https://review.openstack.org/#/c/352598/
I don't think it ever ran on jobs, because it was either improperly configured
or disabled. And according to reports it's the most memory-consuming service on
overcl
On 19 August 2016 at 11:08, Giulio Fidente wrote:
> On 08/19/2016 11:41 AM, Derek Higgins wrote:
>>
>> On 19 August 2016 at 00:07, Sagi Shnaidman wrote:
>>>
>>> Hi,
>>>
>>> we have a problem again with not enough memory in HA jobs, all of them
>>> constantly fail in CI: http://status-tripleoci.r
On 08/19/2016 12:12 PM, Erno Kuvaja wrote:
On Fri, Aug 19, 2016 at 10:53 AM, Hugh Brock wrote:
On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins wrote:
On 19 August 2016 at 00:07, Sagi Shnaidman wrote:
Hi,
we have a problem again with not enough memory in HA jobs, all of them
constantly fail
On Fri, Aug 19, 2016 at 10:53 AM, Hugh Brock wrote:
> On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins wrote:
>> On 19 August 2016 at 00:07, Sagi Shnaidman wrote:
>>> Hi,
>>>
>>> we have a problem again with not enough memory in HA jobs, all of them
>>> constantly fail in CI: http://status-tripl
On 08/19/2016 11:41 AM, Derek Higgins wrote:
On 19 August 2016 at 00:07, Sagi Shnaidman wrote:
Hi,
we have a problem again with not enough memory in HA jobs, all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/
Have we any idea why we need more memory all of a sudden? For
On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins wrote:
> On 19 August 2016 at 00:07, Sagi Shnaidman wrote:
>> Hi,
>>
>> we have a problem again with not enough memory in HA jobs, all of them
>> constantly fail in CI: http://status-tripleoci.rhcloud.com/
>
> Have we any idea why we need more memo
On 19 August 2016 at 00:07, Sagi Shnaidman wrote:
> Hi,
>
> we have a problem again with not enough memory in HA jobs, all of them
> constantly fail in CI: http://status-tripleoci.rhcloud.com/
Have we any idea why we need more memory all of a sudden? For months
the overcloud nodes have had 5G of
Hi,
we have a problem again with not enough memory in HA jobs, all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/
I've created a patch that will increase it[1], but we need to increase it
right now on rh1.
I can't do it now, because unfortunately I'll not be able to watch thi
Hi, all
we have the current-tripleo repo[1] pointing to an old repository[2] which
contains a broken cinder[3] with a volume types bug[4]. That breaks all our CI
jobs, which cannot create the pingtest stack, because current-tripleo is the
main repo that is used in jobs[5].
Could we please move the link to any new
On Fri, Jul 22, 2016 at 4:53 PM, Emilien Macchi wrote:
> Hi,
>
> I started some work to have a CI job that will only deploy an undercloud.
> We'll save time and resources.
>
> I used storyboard: https://storyboard.openstack.org/#!/story/2000682
> and I invite our contributors to use it too when wo
Hi,
I started some work to have a CI job that will only deploy an undercloud.
We'll save time and resources.
I used storyboard: https://storyboard.openstack.org/#!/story/2000682
and I invite our contributors to use it too when working in TripleO
CI, it helps us to track our current work.
So far
All,
During today's rdo-ci scrum[1], I briefed the team on PoC work that
will bring trown's old PoC[2] review in line with the current state of
affairs of TripleO-Quickstart (OOOQS) and related ansible roles used
heavily in upstream CI[3].
To summarize, our goal is to leverage the power Sphinx[4]
Dan restarted Gearman, but CI is still failing on something else now:
qemu-img convert -f raw -O qcow2 /opt/stack/new/overcloud-full.raw
/opt/stack/new/overcloud-full.qcow2
qemu-img: error while writing sector 8217856: No space left on device
I still don't have access to anything but I hope it ca
Hi all,
CI is currently entirely red:
https://bugs.launchpad.net/tripleo/+bug/1594732
http://logs.openstack.org/11/333511/5/check-tripleo/gate-tripleo-ci-centos-7-nonha/16f72e8/console.html#_2016-06-25_15_43_35_798040
gear.Client.unknown - ERROR - Connection timed out waiting for a response to
A quick update on this. It appears that
https://review.rdoproject.org/r/1500 did indeed resolve the issue. There
have been no hits on the logstash query [1] since that merged.
[1]
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22cleaning%20directory%20and%20cloning
On 06/23/2016 02:56 PM, Dan Prince wrote:
> After discovering some regressions today we found what we think is a
> package build issue in our CI environment which might be the cause of
> our issues:
>
> https://bugs.launchpad.net/tripleo/+bug/1595660
>
> Specifically, there is a case where DLRN
After discovering some regressions today we found what we think is a
package build issue in our CI environment which might be the cause of
our issues:
https://bugs.launchpad.net/tripleo/+bug/1595660
Specifically, there is a case where DLRN might not be giving an error
code if build failures occur
On Tue, 2016-05-17 at 20:03 +0300, Sagi Shnaidman wrote:
> Hi,
> raising again the question about tempest running on TripleO CI as it
> was discussed in the last TripleO meeting.
>
> I'd like to get your attention that in these tests, which I ran just
> to ensure it works, there were bugs discove
Hi,
raising again the question of running tempest on TripleO CI, as it was
discussed in the last TripleO meeting.
I'd like to draw your attention to the fact that in these tests, which I ran just to
ensure it works, bugs were discovered, and these weren't corner cases
but real failures of TripleO inst
On 05/09/2016 02:32 PM, Clark Boylan wrote:
> On Mon, May 9, 2016, at 10:22 AM, Sagi Shnaidman wrote:
>> Hi, all
>>
>> I'd like to enable elastic recheck on TripleO CI and have submitted
>> patches
>> for refreshing the tracked logs [1] (please review) and for timeout case
>> [2].
>> But according
On Mon, May 9, 2016, at 10:22 AM, Sagi Shnaidman wrote:
> Hi, all
>
> I'd like to enable elastic recheck on TripleO CI and have submitted
> patches
> for refreshing the tracked logs [1] (please review) and for timeout case
> [2].
> But according to Derek's comment behind the timeout issue could be
Sorry, I missed the mentioned patches:
[1] Refresh log files for tripleo project:
https://review.openstack.org/#/c/312985/
[2] Add bug for TripleO timeouts: https://review.openstack.org/#/c/313038/
On Mon, May 9, 2016 at 8:22 PM, Sagi Shnaidman wrote:
> Hi, all
>
> I'd like to enable elastic rech
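For context on what such patches add: an elastic-recheck "query" is a small YAML file named after the Launchpad bug it fingerprints. A hypothetical example of the format follows; the bug number and query string are placeholders, not the content of [2].

# Hypothetical elastic-recheck query file; the filename is the Launchpad bug id.
cat <<'EOF' > queries/1234567.yaml
query: >
  message:"<signature string from the failing job's console log>"
  AND tags:"console"
EOF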
Hi, all
I'd like to enable elastic recheck on TripleO CI and have submitted patches
for refreshing the tracked logs [1] (please review) and for the timeout case
[2].
But according to Derek's comment, behind the timeout issue there could be multiple
issues and bugs, so I'd like to clarify: what are the criteria
On Mon, Apr 18, 2016 at 8:36 AM, Sagi Shnaidman wrote:
> For making clear all advantages and disadvantages, I've created a doc:
>
> https://docs.google.com/document/d/1HmY-I8OzoJt0SzLzs79hCa1smKGltb-byrJOkKKGXII/edit?usp=sharing
>
> Please comment.
>
> On Sun, Apr 17, 2016 at 12:14 PM, Sagi Shnai
To make all the advantages and disadvantages clear, I've created a doc:
https://docs.google.com/document/d/1HmY-I8OzoJt0SzLzs79hCa1smKGltb-byrJOkKKGXII/edit?usp=sharing
Please comment.
On Sun, Apr 17, 2016 at 12:14 PM, Sagi Shnaidman
wrote:
>
> Hi,
>
> John raised the issue - where should we