Of course! Using teuthology tasks to configure the node makes perfect sense.

Thanks!

On 09/03/2015 14:37, Andrew Schoen wrote:
> Loic,
> 
> After locking the node like normal, you can use teuthology to install the 
> kernel you need.  Just include the kernel stanza in your yaml file.  
> http://ceph.com/teuthology/docs/teuthology.task.html#teuthology.task.kernel.task
> 
> Something like this:
> 
>   interactive-on-error: true
> 
>   roles:
>      - [mon.0, client.0]
>   kernel:
>      branch: testing
>   tasks:
>      - interactive:
> 
> Use teuthology-lock --list-targets to get the connection information for your 
> newly locked node and add that to your yaml.
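> 
> That is, the job file would end up with a targets section as well; roughly 
> like this (the hostname and host key below are only placeholders, not a 
> real lock):
> 
>   targets:
>      ubuntu@vpm001.example.com: ssh-rsa AAAAB3NzaC1yc2EA...placeholder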
> 
> Best,
> Andrew
> 
> On Mar 8, 2015, at 7:50 AM, Loic Dachary <l...@dachary.org> wrote:
> 
>> Hi Andrew,
>>
>> After successfully locking a centos 6.5 VPS in the community lab with
>>
>> teuthology-lock --lock-many 1 --owner l...@dachary.org --machine-type vps 
>> --os-type centos --os-version 6.5
>>
>> it turns out that it has a 2.6.32 kernel by default. A more recent kernel is 
>> required to run the ceph-disk tests because they rely on /dev/loop handling 
>> partition tables as a regular disk would. After installing a 3.10 kernel 
>> from http://elrepo.org/tiki/kernel-lt and rebooting, it was no longer 
>> possible to reach the machine.
>>
>> The teuthology-suite command has a -k option which suggests there is a way to 
>> specify the kernel when provisioning a machine. The command
>>
>> ./virtualenv/bin/teuthology-suite --dry-run -k testing --priority 101 
>> --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira 
>> --distro ubuntu --email l...@dachary.org --owner l...@dachary.org 
>> --ceph firefly-backports
>>
>> shows lines like:
>>
>> 2015-03-08 13:43:26,432.432 INFO:teuthology.suite:dry-run: 
>> ./virtualenv/bin/teuthology-schedule --name 
>> loic-2015-03-08_13:43:06-rgw-firefly-backports-testing-basic-multi --num 1 
>> --worker multi --priority 101 --owner l...@dachary.org 
>> --description 'rgw/multifs/{clusters/fixed-2.yaml 
>> fs/btrfs.yaml rgw_pool_type/erasure-coded.yaml 
>> tasks/rgw_multipart_upload.yaml}' -- /tmp/schedule_suite_AQ2b6w 
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/clusters/fixed-2.yaml 
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/fs/btrfs.yaml 
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/rgw_pool_type/erasure-coded.yaml 
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/tasks/rgw_multipart_upload.yaml
>>
>> which shows the word testing as part of the job name. The 
>> https://github.com/ceph/teuthology/ page shows some more information about 
>> kernel choices but it's non-trivial to figure out how to translate that into 
>> something that could be used in the context of teuthology-lock.
>>
>> I'm not sure where to look and I would be grateful if you could give me a 
>> pointer in the right direction.
>>
>> Cheers
>>
>> -- 
>> Loïc Dachary, Artisan Logiciel Libre
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
