I tried to be quick and got it wrong. ;-)
Here are the working ways:
On Mon, Aug 6, 2018 at 3:49 PM, Attila Fazekas wrote:
> Please use ostestr or stestr instead of testr.
>
> $ git clone https://github.com/openstack/tempest
> $ cd tempest/
> $ stestr init
> $ stestr li
Please use ostestr or stestr instead of testr.
$ git clone https://github.com/openstack/tempest
$ cd tempest/
$ stestr list
$ ostestr -l #old way, also worked
These tools handle the config creation implicitly.
seen faster setup time, but I'll return to this in
another topic.
On Tue, Sep 26, 2017 at 6:16 PM, Michał Jastrzębski <inc...@gmail.com>
wrote:
> On 26 September 2017 at 07:34, Attila Fazekas <afaze...@redhat.com> wrote:
> > decompressing those registry tar.gz takes ~0
some disk limit.
On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski <inc...@gmail.com>
wrote:
> On 22 September 2017 at 17:21, Paul Belanger <pabelan...@redhat.com>
> wrote:
> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> On 2017-09-
The main offenders reported by devstack do not seem to explain the
growth visible on OpenStack Health [1].
The logs also started to disappear, which does not make it easy to figure out.
Which code/infra changes could be related?
h expected
to be solved within 1-2 cycles in the worst-case scenario.
If the above approach is proven not to work, we need to draw the line based on
the expected usage frequency.
On Wed, Sep 20, 2017 at 3:46 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:
> On 2017-09-20 15:17:28 +0
On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand wrote:
> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose ?
>>
>
> ... and
The gate-tempest-dsvm-neutron-full-ubuntu-xenial job is 20-30 min slower
than it is supposed to be / used to be.
The extra time has multiple causes, and it is not because we test more :( .
Usually we are just less smart than before.
A huge time increase is visible in devstack as well.
devstack is
tuning:
- Do you think it is a good idea to spam the users with internal data which
is useless for them unless they want to use it against you?
>
> On 07/24/2017 10:23 AM, Attila Fazekas wrote:
>
>> Thanks for your answer.
>>
>> The real question is do we agree in the
>&g
fide...@redhat.com>> wrote:
>>
>> Only a comment about the status in TripleO
>>
>> On 07/21/2017 12:40 PM, Attila Fazekas wrote:
>>
>> [...]
>>
>> > We should seriously consider using names instead of ip address also
>
envs) to keep
endpoint types ?
On Fri, Jul 21, 2017 at 1:37 PM, Giulio Fidente <gfide...@redhat.com> wrote:
> Only a comment about the status in TripleO
>
> On 07/21/2017 12:40 PM, Attila Fazekas wrote:
>
> [...]
>
> > We should seriously consider using na
Hi All,
I thought it was already a well-known fact that the endpoint types are there ONLY
for historical reasons; today they just exist to confuse anyone who tries
to deploy OpenStack,
but they are considered a deprecated concept and will die out sooner or
later.
The keystone v3 API already allows
Hi all,
A long time ago it was discussed to make the keystone HEAD responses
right [1], as the RFC [2][3] recommends:
" A response to the HEAD method is identical to what an equivalent
request made with a GET would have been, except it lacks a body. "
So, the status code needs to be identical
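To illustrate the rule above, here is a minimal Python sketch (not keystone code; the handler names, paths, and payloads are made-up assumptions) of the usual fix: route HEAD through the GET logic and drop only the body, so the status code and headers stay identical.

```python
def handle_get(path):
    """Pretend GET handler: returns (status, headers, body)."""
    if path == "/v3/users/abc":
        body = b'{"user": {"id": "abc"}}'
        return 200, {"Content-Type": "application/json",
                     "Content-Length": str(len(body))}, body
    return 404, {"Content-Type": "application/json"}, b'{"error": "not found"}'


def handle_head(path):
    """HEAD = GET without the body; status and headers must match."""
    status, headers, _body = handle_get(path)
    return status, headers, b""
```

With this shape, a HEAD on a missing resource returns 404 rather than some unrelated status, exactly because it reuses the GET path.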
In order to twist things even more ;-),
we should consider making tempest work in environments where the users,
instead of getting an IPv4 floating IP, are allowed to get a globally
routable
IPv6 range (prefix/subnet from a subnetpool).
Tempest should be able to do connectivity tests
+1, Totally agree.
Best Regards,
Attila
On Tue, May 16, 2017 at 10:22 AM, Andrea Frittoli wrote:
> Hello team,
>
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
>
> Over the past two cycle Fanglei has been steadily contributing to Tempest
> and
On Tue, Apr 18, 2017 at 11:04 AM, Arx Cruz wrote:
>
>
> On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy wrote:
>
>> On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:
>> > On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec
I wonder, can we switch to CINDER_ISCSI_HELPER="lioadm" ?
On Fri, Feb 10, 2017 at 9:17 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:
> I believe those are traces left by the reference implementation of cinder
> setting very high debug level on tgtd. I'm not sure if that's related or
7 at 6:59 AM Thomas Goirand <z...@debian.org> wrote:
>
>> On 02/01/2017 10:54 AM, Attila Fazekas wrote:
>> > Hi all,
>> >
>> > Typically we have two keystone service listening on two separate ports
>> > 35357 and 5000.
>> >
>&g
Hi all,
Typically we have two keystone service listening on two separate ports
35357 and 5000.
Historically one of the ports had limited functionality, but today I do not
see why we want
to have two separate services/ports from the same code base for similar
purposes.
Effectively we use double
Most negative tests are supposed to be very simple and we should not spend
too much time on them.
The right question:
Are we able to run 100 negative tests/sec?
Where is the time spent?
If we are able to solve the main issue,
probably we do not need to worry about how many negative tests we have.
NO: for any kind of extra quota service.
In other places I saw other reasons for a quota service or similar;
the actual cost of this approach is higher than most people would think, so NO.
Maybe a library,
but I do not want to see, for example, the bad pattern used in nova spread
everywhere.
On 12 May 2015 at 10:12, Attila Fazekas afaze...@redhat.com wrote:
If you can illustrate a test script that demonstrates the actual failing
of OS threads that does not occur greenlets here, that would make it
immediately apparent what it is you're getting at here.
http
- Original Message -
From: John Garbutt j...@johngarbutt.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Dan Smith d...@danplanet.com
Sent: Saturday, May 9, 2015 12:45:26 PM
Subject: Re: [openstack-dev] [all] Replace
- Original Message -
From: John Garbutt j...@johngarbutt.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Saturday, May 9, 2015 1:18:48 PM
Subject: Re: [openstack-dev] [nova] Service group foundations and features
On
- Original Message -
From: Mike Bayer mba...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Monday, May 11, 2015 9:07:13 PM
Subject: Re: [openstack-dev] [all] Replace mysql-python with mysqlclient
On 5/11/15 2:02 PM, Attila Fazekas wrote:
Not just with local
- Original Message -
From: Mike Bayer mba...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Monday, May 11, 2015 4:44:58 PM
Subject: Re: [openstack-dev] [all] Replace mysql-python with mysqlclient
On 5/11/15 9:58 AM, Attila Fazekas wrote:
- Original
-network
to Neutron migration work]
On 2015-04-21 03:19:04 -0400 (-0400), Attila Fazekas wrote:
[...]
IMHO OVS is less complex than netfilter (iptables, *tables);
if someone is able to deal with reading the netfilter rules, he should
be able to deal with OVS as well.
In a simple DevStack
How many compute nodes do you want to manage?
If it is less than ~1000, you do not need to care.
If you have more, just use an SSD with a good write IOPS value.
MySQL actually can be fast with enough memory and a good SSD.
Even faster than [1].
ZooKeeper as a technology is good; the current nova driver is not.
- Original Message -
From: Jeremy Stanley fu...@yuggoth.org
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, April 17, 2015 9:35:07 PM
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in
here:
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Use_Case
Best Regards
Chaoyi Huang ( joehuang )
-Original Message-
From: Attila Fazekas [mailto:afaze...@redhat.com]
Sent: Thursday, April 16, 2015 3:06 PM
To: OpenStack Development Mailing List (not for usage
purposes. Did you have something else
in mind to determine if an agent is alive?
On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
wrote:
I'm 99.9% sure that, for scaling above 100k managed nodes,
we do not really need to split OpenStack into multiple smaller OpenStacks
- Original Message -
From: Ken Giusti kgiu...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Thursday, April 16, 2015 4:47:50 PM
Subject: Re: [openstack-dev] [all] QPID incompatible with python 3 and
untested
have something else
in mind to determine if an agent is alive?
On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
wrote:
I'm 99.9% sure that, for scaling above 100k managed nodes,
we do not really need to split OpenStack into multiple smaller OpenStacks,
or use
are still alive for scheduling purposes. Did you have something else
in mind to determine if an agent is alive?
On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com
wrote:
I'm 99.9% sure that, for scaling above 100k managed nodes,
we do not really need to split OpenStack
- Original Message -
From: Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Sunday, April 12, 2015 4:17:29 AM
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
So IIUC
- Original Message -
From: Kevin L. Mitchell kevin.mitch...@rackspace.com
To: openstack-dev@lists.openstack.org
Sent: Friday, April 10, 2015 5:47:26 PM
Subject: Re: [openstack-dev] [nova][database][quotas] reservations table ??
On Fri, 2015-04-10 at 02:38 -0400, Attila Fazekas
).
(or just stop doing soft-delete.)
- Original Message -
From: Mike Bayer mba...@redhat.com
To: Attila Fazekas afaze...@redhat.com
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, March 13, 2015 5:04:21 PM
Subject: Re
- Original Message -
From: Rohan Kanade openst...@rohankanade.com
To: openstack-dev@lists.openstack.org
Sent: Monday, March 16, 2015 1:13:12 PM
Subject: [openstack-dev] [qa][tempest] Service tag blueprint incomplete
Hi,
I could find some tests in tempest are still not tagged
The archiving has had issues for a very long time [1];
something like this [2] is expected to replace it.
The archiving just moves trash to the other side of the desk;
usually, just permanently deleting everything that has been deleted
for more than 7 days is better for everyone.
For now, maybe just wiping
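As an illustration of that 7-day purge idea (sqlite3 for brevity; the `instances` table and its soft-delete columns are made up, not the nova schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Typical soft-delete layout: `deleted` flag plus a `deleted_at` timestamp.
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, "
             "deleted INTEGER DEFAULT 0, deleted_at TIMESTAMP)")
conn.execute("INSERT INTO instances VALUES (1, 1, datetime('now', '-30 days'))")
conn.execute("INSERT INTO instances VALUES (2, 1, datetime('now', '-1 day'))")
conn.execute("INSERT INTO instances VALUES (3, 0, NULL)")

# Permanently remove rows that have been soft-deleted for more than 7 days,
# instead of archiving them to shadow tables.
cur = conn.execute("DELETE FROM instances WHERE deleted != 0 "
                   "AND deleted_at < datetime('now', '-7 days')")
conn.commit()
```

Only the row soft-deleted 30 days ago is purged; recent soft-deletes and live rows survive.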
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: openstack-dev@lists.openstack.org
Sent: Wednesday, March 4, 2015 9:22:43 PM
Subject: Re: [openstack-dev] [nova] blueprint about multiple workers
supported in nova-scheduler
On 03/04/2015 01:51 AM, Attila Fazekas wrote
- Original Message -
From: Nikola Đipanov ndipa...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Tuesday, March 10, 2015 10:53:01 AM
Subject: Re: [openstack-dev] [nova] blueprint about multiple workers
supported in nova-scheduler
On 03/06/2015 03:19 PM, Attila Fazekas
- Original Message -
From: Attila Fazekas afaze...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Tuesday, March 10, 2015 12:48:00 PM
Subject: Re: [openstack-dev] [nova] blueprint about multiple workers
- Original Message -
From: Christopher Yeoh cbky...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Monday, March 9, 2015 1:04:15 PM
Subject: Re: [openstack-dev] [nova][api] Microversions. And why do we need
API
- Original Message -
From: Mike Bayer mba...@redhat.com
To: Attila Fazekas afaze...@redhat.com
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, March 6, 2015 2:20:45 AM
Subject: Re: [openstack-dev] [all] SQLAlchemy
Hi All,
This is a follow-up on [1].
Running the full tempest test suite in parallel without the
allow_tenant_isolation=True setting can cause random, not too obvious
failures, which have caused a lot of issues for tempest newcomers.
There are special use cases when you might want to disable it,
for
I agree with Jay.
The extension layer is also expensive in CPU usage,
and it also makes it more difficult to troubleshoot issues.
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
Sergey Nikitin
Looks like we need some kind of _per compute node_ mutex in the critical
section;
multiple schedulers MAY be able to schedule to two compute nodes at the same
time,
but not to the same compute node.
If we don't want to introduce another required component or
reinvent the wheel, there are
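A minimal, single-process Python sketch of the per-compute-node mutex idea (all names are illustrative; in a real multi-scheduler deployment the lock would have to live in the database or a coordination service, not in memory):

```python
import threading
from collections import defaultdict

_node_locks = defaultdict(threading.Lock)
_node_locks_guard = threading.Lock()  # protects lazy per-node lock creation


def claim_on_node(node, free, requested):
    """Claim `requested` units on `node`; only claims against the SAME
    node are serialized, so different nodes can be claimed in parallel."""
    with _node_locks_guard:
        lock = _node_locks[node]
    with lock:  # per-compute-node critical section
        if free[node] >= requested:
            free[node] -= requested
            return True
        return False
```

Two schedulers claiming on node1 and node2 never block each other; two racing claims on node1 are ordered, so the second one sees the already-reduced capacity.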
- Original Message -
From: Attila Fazekas afaze...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, March 6, 2015 4:19:18 PM
Subject: Re: [openstack-dev] [nova] blueprint about multiple workers
supported
Can you check whether this patch does the right thing [1]:
[1] https://review.openstack.org/#/c/112523/6
- Original Message -
From: Fredy Neeser fredy.nee...@solnet.ch
To: openstack-dev@lists.openstack.org
Sent: Friday, March 6, 2015 6:01:08 PM
Subject: [openstack-dev] [neutron] VXLAN with
:49 PM
Subject: Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming
features (was: [nova] blueprint about
multiple workers)
Mike Bayer mba...@redhat.com wrote:
Attila Fazekas afaze...@redhat.com wrote:
Hi,
I wonder what is the planned future
Hi,
I wonder what the planned future of the scheduling is.
The scheduler does a lot of high-field-count queries,
which are CPU expensive when you are using sqlalchemy-orm.
Has anyone tried to switch those operations to sqlalchemy-core?
The scheduler does a lot of things in the application, like
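For illustration, a hedged SQLAlchemy Core sketch of the kind of query meant above (the table and columns are made up): Core hands back lightweight row tuples and skips the ORM's per-row object construction in the hot path.

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
# Illustrative wide table, standing in for the many-column host-state rows.
compute_nodes = sa.Table(
    "compute_nodes", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("free_ram_mb", sa.Integer),
    sa.Column("free_disk_gb", sa.Integer),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(compute_nodes.insert(),
                 [{"id": i, "free_ram_mb": i * 512, "free_disk_gb": i * 10}
                  for i in range(1, 4)])
    # Core SELECT: no identity map, no attribute instrumentation per row.
    rows = conn.execute(
        sa.select(compute_nodes)
        .where(compute_nodes.c.free_ram_mb >= 1024)
        .order_by(compute_nodes.c.id)
    ).fetchall()
```

The same filter through the ORM would build a mapped object per matching row, which is where the CPU cost mentioned above comes from.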
- Original Message -
From: Attila Fazekas afaze...@redhat.com
To: Jay Pipes jaypi...@gmail.com
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org, Pavel Kholkin pkhol...@mirantis.com
Sent: Thursday, February 12, 2015 11:52:39 AM
Subject
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: Attila Fazekas afaze...@redhat.com
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org, Pavel
Kholkin pkhol...@mirantis.com
Sent: Wednesday, February 11, 2015 9:52:55 PM
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: Attila Fazekas afaze...@redhat.com
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org, Pavel
Kholkin pkhol...@mirantis.com
Sent: Tuesday, February 10, 2015 7:32:11 PM
:
Excerpts from Jay Pipes's message of 2015-02-09 10:15:10 -0800:
On 02/09/2015 01:02 PM, Attila Fazekas wrote:
I do not see why not to use `FOR UPDATE` even with multi-writer, or
does the retry/swap way really solve anything here?
snip
Have I missed something?
Yes. Galera does
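For context, a small sketch of what the retry/swap (compare-and-swap) alternative to `SELECT ... FOR UPDATE` looks like, using sqlite3 and an illustrative quotas table: the UPDATE succeeds only if the row still holds the value that was read, and a miss triggers a re-read and retry.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotas (project TEXT PRIMARY KEY, in_use INTEGER)")
conn.execute("INSERT INTO quotas VALUES ('demo', 5)")


def reserve(conn, project, amount, limit, retries=5):
    """Compare-and-swap reservation: no row lock, retry on a lost race."""
    for _ in range(retries):
        (in_use,) = conn.execute(
            "SELECT in_use FROM quotas WHERE project = ?", (project,)).fetchone()
        if in_use + amount > limit:
            return False  # over quota, no point retrying
        cur = conn.execute(
            "UPDATE quotas SET in_use = ? WHERE project = ? AND in_use = ?",
            (in_use + amount, project, in_use))  # swap only if value unchanged
        if cur.rowcount == 1:
            return True  # nobody raced us between the read and the write
    return False
```

With Galera the appeal is that this pattern turns a cross-node certification conflict into an ordinary retry instead of a blocking row lock.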
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: Attila Fazekas afaze...@redhat.com, OpenStack Development Mailing
List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Pavel Kholkin pkhol...@mirantis.com
Sent: Monday, February 9, 2015 7:15:10 PM
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: openstack-dev@lists.openstack.org, Pavel Kholkin pkhol...@mirantis.com
Sent: Wednesday, February 4, 2015 8:04:10 PM
Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody
should know about Galera
On
- Original Message -
From: Matthew Booth mbo...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Thursday, February 5, 2015 12:32:33 PM
Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody
should know about Galera
On 05/02/15 11:01, Attila Fazekas
I have a question related to deadlock handling as well.
Why is the DBDeadlock exception not caught generally for all api/rpc requests?
The MySQL recommendation regarding deadlocks [1]:
Normally, you must write your applications so that they are always
prepared to re-issue a transaction if
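As a sketch of that recommendation, a retry decorator along these lines could re-issue the transaction on deadlock (DBDeadlock here is a stand-in for the oslo.db exception; oslo.db's own `wrap_db_retry` provides comparable behavior):

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for oslo.db's DBDeadlock; illustrative only."""


def retry_on_deadlock(retries=3, delay=0.0):
    """Re-issue the wrapped transaction on deadlock, up to `retries` times."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == retries - 1:
                        raise  # give up after the last attempt
                    time.sleep(delay)  # brief backoff before retrying
        return wrapper
    return decorator
```

The point of catching it generally is exactly this: the caller never sees the transient deadlock, only the result of the eventually successful retry.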
used the same fixedip as the
test_rescued_vm_detach_volume.
Tempest could be stricter and fail the test suite at tearDownClass when the VM
moves to ERROR state at delete.
- Original Message -
From: Attila Fazekas afaze...@redhat.com
To: Ian Wienand iwien...@redhat.com
Cc
://review.openstack.org/#/c/146039/ .
- Original Message -
From: Ian Wienand iwien...@redhat.com
To: Attila Fazekas afaze...@redhat.com
Cc: Alvaro Lopez Ortega aort...@redhat.com, Jeremy Stanley
fu...@yuggoth.org, Sean Dague s...@dague.net,
dean Troyer dtro...@gmail.com
Sent: Friday
+1
- Original Message -
From: Marc Koderer m...@koderer.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Wednesday, November 26, 2014 7:58:06 AM
Subject: Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for Tempest
Hi All,
I have a `little` trouble with volume attachment stability.
The test_stamp_pattern test has been skipped for a long time; you
can see what would happen if it were enabled [1] now.
There is a workaround-style way of enabling that test [2].
I suspected the ACPI hot plug event is not
+1
- Original Message -
From: Matthew Treinish mtrein...@kortar.org
To: openstack-dev@lists.openstack.org
Sent: Tuesday, July 22, 2014 12:34:28 AM
Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
Hi Everyone,
I would like to propose 2 changes to the Tempest core
+1 for both!
- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, November 15, 2013 2:38:27 PM
Subject: [openstack-dev] [qa] Proposals for Tempest core
It's post