[OpenStack-Infra] Nodepool schema change

2016-06-22 Thread James E. Blair
Hi,

In nodepool master we just merged a change (add "--reason" to 'nodepool
hold' command) which requires a schema change.  To apply it, run the
following:

mysql> alter table node add column comment varchar(255) after state_time;
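
To confirm that the column landed where expected (a minimal sketch,
assuming you are connected to the same nodepool database), you can run:

mysql> show columns from node like 'comment';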

-Jim



Re: [OpenStack-Infra] Problem with Cross-Repo Dependencies in Zuul

2016-06-22 Thread James E. Blair
Artur Zarzycki writes:

> ...
> And now, as I understand it, zuul should recognize dependencies between
> patches. So I created patch1 in repo1 and patch2 in repo2 with Depends-On:
> I45137d1186caeafda0cee3504370d01ef3d9d271 (patch1)
> and I'm trying to merge patch2. I see that zuul runs the gate jobs for
> repo2, and above the job area I see the message:
> Queue: some/repo1, some/repo2
> When the job finishes successfully, the code from repo2 is merged.
>
> In zuul debug log I see:
> 2016-06-22 12:09:39,079 DEBUG zuul.DependentPipelineManager: Checking
> for changes needed by :
> 2016-06-22 12:09:39,080 DEBUG zuul.DependentPipelineManager:   No
> changes needed
> 2016-06-22 12:09:39,080 DEBUG zuul.DependentPipelineManager: Adding
> change  to queue  some/repo1, some/repo2>
>
> Could you please give me some hints about what I did wrong? Do I need to
> do something else to get it working?

From what you describe, I don't see anything configured incorrectly.
Those log entries suggest that Zuul did not recognize the Depends-On
header and associate the two changes.  First, I would check:

* You are running a version of Zuul that supports Depends-On
* The syntax of the Depends-On header is correct (the line in the commit
  message must match this regex; a quick way to check it is sketched below):
  "^Depends-On: (I[0-9a-f]{40})\s*$"

Then I would suggest looking in the debug log for entries near when Zuul
queried Gerrit for the change.  Assuming you are running git master, you
should see log messages like:

INFO zuul.source.Gerrit: Updating 
DEBUG zuul.source.Gerrit: Updating : Running query 
change:I45137d1186caeafda0cee3504370d01ef3d9d271 to find needed changes
DEBUG zuul.source.Gerrit: Updating : Getting 
commit-dependent change patch1
INFO zuul.source.Gerrit: Updating 

These entries indicate that when Zuul loaded the data for patch2, it parsed
the Depends-On header, saw the I..271 change id, queried for it, and found
patch1 as a result.
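
You can also run by hand the same kind of query Zuul issues, to confirm
that Gerrit resolves the change id (a minimal sketch; the user and host
below are placeholders for your Gerrit instance):

  ssh -p 29418 some_user@gerrit.example.org gerrit query --format JSON \
    'change:I45137d1186caeafda0cee3504370d01ef3d9d271'

That should print a JSON record for patch1 followed by a final stats line;
if only the stats line comes back, Zuul will not be able to resolve the
dependency either.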

> Can it cause problems if I download code with
>   git clone ssh://some_user@repo_url:29418/$ZUUL_PROJECT .
>   git fetch $ZUUL_URL/$ZUUL_PROJECT $ZUUL_REF
>   git checkout FETCH_HEAD
> instead of using zuul-cloner?

What you do in the job should not affect this problem; however, I would
strongly recommend using zuul-cloner, as it contains the collective wisdom
on the best way to prepare a git repo for testing.
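
For reference, the zuul-cloner equivalent of the clone/fetch/checkout above
looks roughly like this (a minimal sketch; the git base URL is a
placeholder, and it assumes the ZUUL_URL and ZUUL_REF variables Zuul
exports to the job):

  zuul-cloner --zuul-url $ZUUL_URL --zuul-ref $ZUUL_REF \
    https://git.example.org $ZUUL_PROJECT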

-Jim



[OpenStack-Infra] Problem with Cross-Repo Dependencies in Zuul

2016-06-22 Thread Artur Zarzycki

Hi,
I have zuul+gerrit+jenkins with check/gate/post jobs. The flow is:
- check jobs give Verified +1
- a core reviewer gives +2 and +W
- after +W, the gate job is triggered and, if it passes, the code is merged

Everything was working fine until I wanted to create cross-repo
dependencies. I created one shared gate job (it just returns success) and
added it to both repos.

So now I have:

  - name: some/repo1
    check:
      - verify-tox-bashate
    gate:
      - some-gate-job1
      - deps-handler-job

...
  - name: some/repo2
    check:
      - verify-tox-bashate
    gate:
      - some-gate-job2
      - deps-handler-job

And now, as I understand it, zuul should recognize dependencies between
patches. So I created patch1 in repo1 and patch2 in repo2 with Depends-On:
I45137d1186caeafda0cee3504370d01ef3d9d271 (patch1)
and I'm trying to merge patch2. I see that zuul runs the gate jobs for
repo2, and above the job area I see the message:

Queue: some/repo1, some/repo2

When the job finishes successfully, the code from repo2 is merged.

In zuul debug log I see:
2016-06-22 12:09:39,079 DEBUG zuul.DependentPipelineManager: Checking 
for changes needed by :
2016-06-22 12:09:39,080 DEBUG zuul.DependentPipelineManager:   No 
changes needed
2016-06-22 12:09:39,080 DEBUG zuul.DependentPipelineManager: Adding 
change  to queue some/repo1, some/repo2>


Could you please give me some hints about what I did wrong? Do I need to do
something else to get it working?


Can it cause problems if I download code with
  git clone ssh://some_user@repo_url:29418/$ZUUL_PROJECT .
  git fetch $ZUUL_URL/$ZUUL_PROJECT $ZUUL_REF
  git checkout FETCH_HEAD
instead of using zuul-cloner?

Thanks

--
Regards,
Artur Zarzycki




Re: [OpenStack-Infra] Infra priorities and spec cleanup

2016-06-22 Thread Joshua Hesketh
On Wed, Jun 22, 2016 at 6:33 PM, Thierry Carrez wrote:

> Jeremy Stanley wrote:
>
>> On 2016-06-21 17:34:07 +0000 (+0000), Jeremy Stanley wrote:
>>
>>> On 2016-06-21 18:16:49 +0200 (+0200), Thierry Carrez wrote:
>>>
 It hurts a lot when it's down because of so many services being served
 from
 it. We could also separate the published websites (status.o.o,
 governance.o.o, security.o.o, releases.o.o...) which require limited
 resources and grow slowly, from the more resource-hungry storage sites
 (logs.o.o, tarballs.o.o...).

>>>
>>> Agreed, that's actually a pretty trivial change, comparatively
>>> speaking.
>>>
>>
>> Oh, though it bears mention that the most recent extended outage
>> (and by far longest we've experienced in a while) would have been
>> just as bad either way. It had nothing to do with recovering
>> attached volumes/filesystems, but rather was a host outage at the
>> provider entirely outside our sphere of control. That sort of issue
>> can potentially happen with any of our servers/services no matter
>> how much we split them up.
>>
>
> I don't think it would have been just as bad... Even in the unlucky case
> where the VMs end up on the same machine and are all affected, IIUC
> rebuilding some of them would have been much faster if they were split up
> (less data to rsync)?



I believe only the VM was rsync'd to a new server and the cinder volumes
were reattached. What did take a while, though, was the fsck that ran after
it rebooted. This would have been faster for some services if they were on
a separate VM with smaller disks.

Another gain from separating the services is that the surface area affected,
should a node go down, is smaller. If we're lucky, only one service would go
down at a time (for example, job logs go down but tarballs stay up).

Cheers,
Josh





Re: [OpenStack-Infra] Infra priorities and spec cleanup

2016-06-22 Thread Thierry Carrez

Jeremy Stanley wrote:

On 2016-06-21 17:34:07 +0000 (+0000), Jeremy Stanley wrote:

On 2016-06-21 18:16:49 +0200 (+0200), Thierry Carrez wrote:

It hurts a lot when it's down because of so many services being served from
it. We could also separate the published websites (status.o.o,
governance.o.o, security.o.o, releases.o.o...) which require limited
resources and grow slowly, from the more resource-hungry storage sites
(logs.o.o, tarballs.o.o...).


Agreed, that's actually a pretty trivial change, comparatively
speaking.


Oh, though it bears mention that the most recent extended outage
(and by far longest we've experienced in a while) would have been
just as bad either way. It had nothing to do with recovering
attached volumes/filesystems, but rather was a host outage at the
provider entirely outside our sphere of control. That sort of issue
can potentially happen with any of our servers/services no matter
how much we split them up.


I don't think it would have been just as bad... Even in the unlucky case
where the VMs end up on the same machine and are all affected, IIUC
rebuilding some of them would have been much faster if they were split
up (less data to rsync)?


--
Thierry Carrez (ttx)
