On 23/03/2015 17:27, Yuri Weinstein wrote:
> Loic, done, pls review and edit.

Perfect. I did not realize it was organized to be viewed via

http://tracker.ceph.com/rb/master_backlog/ceph-releases

Very convenient.

> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <[email protected]>
> To: "Yuri Weinstein" <[email protected]>
> Cc: "Sage Weil" <[email protected]>, "Ceph Development" 
> <[email protected]>
> Sent: Monday, March 23, 2015 9:10:20 AM
> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> 
> 
> On 23/03/2015 16:44, Yuri Weinstein wrote:
>>
>>
>> Thx
>> YuriW
>>
>> ----- Original Message -----
>> From: "Loic Dachary" <[email protected]>
>> To: "Yuri Weinstein" <[email protected]>
>> Cc: "Sage Weil" <[email protected]>, "Ceph Development" 
>> <[email protected]>
>> Sent: Monday, March 23, 2015 8:40:02 AM
>> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>
>> Hi Yuri,
>>
>> On 23/03/2015 16:09, Yuri Weinstein wrote:
>>> "How will that go for the next run of upgrade/giant-x ?"
>>>
>>> I was thinking that as soon as, for example, this suite passed, #11189 gets 
>>> resolved, which thus indicates that it's ready for the hammer release cut. 
>>
>> If the following happens:
>>
>> * hammer: upgrade/giant-x runs and passes
>> * a dozen more commits are added because problems are fixed
>> * hammer: upgrade/giant-x runs and passes
>>
>> That leaves us with two issues with the same name but with different update 
>> dates. So if I look at the "hammer: upgrade/giant-x" issues in chronological 
>> order, I have a complete history of the successive runs and I can check the 
>> latest one to see how it went, or older ones if I need to dig into the history. 
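>>
>> (Listing that history could even be scripted against the Redmine REST API. 
>> Here is a minimal sketch; anonymous read access and the ceph-releases 
>> project identifier are assumptions on my part, and I filter on the subject 
>> client-side:
>>
>> import requests
>>
>> TRACKER = "http://tracker.ceph.com"
>> SUBJECT = "hammer: upgrade/giant-x"
>>
>> # status_id="*" asks Redmine for open and closed issues alike
>> r = requests.get(TRACKER + "/issues.json",
>>                  params={"project_id": "ceph-releases",  # assumed identifier
>>                          "status_id": "*",
>>                          "limit": 100})
>> r.raise_for_status()
>> # keep only the successive runs of this suite
>> runs = [i for i in r.json()["issues"] if i["subject"] == SUBJECT]
>> # ISO 8601 timestamps compare correctly as plain strings
>> for i in sorted(runs, key=lambda i: i["updated_on"]):
>>     print("%s #%d %s" % (i["updated_on"], i["id"], i["status"]["name"]))
>>
>> One line per run, oldest first.)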
>>
>> This is good :-)
>>
>> After hammer is released, the same will presumably happen for point 
>> releases. Instead of naming them "hammer: upgrade/giant-x", which would be 
>> confusing, I guess we could name them "v0.94.1: upgrade/giant-x". 
>>
>> Does that sound right ?
>> ============
>> Yes, alternatively we can name the set of those tasks "hammer v0.94.1"
> 
> Great !
> 
> Would you like me to add a section at 
> http://tracker.ceph.com/projects/ceph-releases/wiki/Wiki to summarize this 
> conversation ?
> 
>>
>> ============
>>>
>>> Thx
>>> YuriW
>>>
>>> ----- Original Message -----
>>> From: "Loic Dachary" <[email protected]>
>>> To: "Yuri Weinstein" <[email protected]>
>>> Cc: "Sage Weil" <[email protected]>, "Ceph Development" 
>>> <[email protected]>
>>> Sent: Sunday, March 22, 2015 5:35:19 PM
>>> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>>
>>>
>>>
>>> On 22/03/2015 17:16, Yuri Weinstein wrote:
>>>> Loic, I think the idea was to take a more process-driven approach to 
>>>> releasing hammer, e.g. keeping track of suites vs. results and open issues, 
>>>> so we can have a high-level view of status at any time before the final 
>>>> cut day.
>>>>
>>>> Do you have any suggestions or objections?
>>>
>>> Reading http://tracker.ceph.com/issues/11189 I see it has one run, then a 
>>> re-run of the failed tests, and it got resolved because all passed. The 
>>> title is hammer: upgrade/giant-x. How will that go for the next run of 
>>> upgrade/giant-x ?
>>>
>>> I use a Python snippet to display the errors in Redmine format 
>>> (http://workbench.dachary.org/dachary/ceph-workbench/issues/2):
>>>
>>> $ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
>>> ** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- 
>>> /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 
>>> CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" 
>>> PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage 
>>> /home/ubuntu/cephtest/archive/coverage timeout 3h 
>>> /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
>>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 
>>> 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml 
>>> rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml 
>>> test_rbd_api.yaml test_rbd_python.yaml} 
>>> 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 
>>> 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml 
>>> rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 
>>> distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
>>> ** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster 
>>> [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks 
>>> not synchronized" in cluster log*
>>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 
>>> 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 
>>> 3-upgrade-sequence/upgrade-all.yaml 
>>> 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml 
>>> rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 
>>> distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
>>> ** *Could not reconnect to [email protected]*
>>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 
>>> 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 
>>> 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 
>>> 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml 
>>> rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 
>>> distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
>>> ** *Could not reconnect to [email protected]*
>>> *** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 
>>> 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 
>>> 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 
>>> 6-next-mon/monb.yaml 8-next-mon/monc.yaml 
>>> 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml 
>>> distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
>>> ** *'sudo adjust-ulimits ceph-coverage 
>>> /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
>>> *** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 
>>> 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 
>>> 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 
>>> 6-next-mon/monb.yaml 8-next-mon/monc.yaml 
>>> 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml 
>>> distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
>>> ** *timed out waiting for admin_socket to appear after osd.13 restart*
>>> *** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 
>>> 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 
>>> 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml 
>>> rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 
>>> 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 
>>> 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml 
>>> snaps-many-objects.yaml} 
>>> distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186
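>>>
>>> (In essence the snippet boils down to something like the sketch below; the 
>>> paddles endpoint and the JSON field names are assumptions of mine, the real 
>>> script lives at the ceph-workbench URL above.
>>>
>>> import sys
>>> from collections import defaultdict
>>> import requests
>>>
>>> # Assumed paddles instance (the JSON API behind pulpito.ceph.com)
>>> PADDLES = "http://paddles.front.sepia.ceph.com"
>>> run = sys.argv[1]  # e.g. teuthology-2015-03-20_17:05:02-upgrade:giant-x-...
>>>
>>> jobs = requests.get(PADDLES + "/runs/" + run + "/jobs/").json()
>>> failures = defaultdict(list)  # group failed jobs by failure reason
>>> for job in jobs:
>>>     if job.get("status") == "fail":
>>>         failures[job.get("failure_reason", "unknown")].append(job)
>>>
>>> for reason, failed in failures.items():
>>>     print("** *%s*" % reason)  # Redmine list item, reason in bold
>>>     for job in failed:
>>>         url = "http://pulpito.ceph.com/%s/%s" % (run, job["job_id"])
>>>         print('*** "%s":%s' % (job["description"], url))  # Redmine link
>>>
>>> That is what produces the ** / *** lists above.)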
>>>
>>>>
>>>> Thx
>>>> YuriW
>>>>
>>>> ----- Original Message -----
>>>> From: "Loic Dachary" <[email protected]>
>>>> To: "Sage Weil" <[email protected]>
>>>> Cc: "Ceph Development" <[email protected]>
>>>> Sent: Sunday, March 22, 2015 1:54:06 AM
>>>> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>>>
>>>> Hi Sage,
>>>>
>>>> You have created a few hammer related tasks at 
>>>> http://tracker.ceph.com/projects/ceph-releases/issues . What did you have 
>>>> in mind ?
>>>>
>>>> Cheers
>>>>
>>>
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
