Based on further feedback, I'd like to include this interdiff:

diff --git a/doc/design-performance-tests.rst b/doc/design-performance-tests.rst
index b1b2e2b..0cf0ded 100644
--- a/doc/design-performance-tests.rst
+++ b/doc/design-performance-tests.rst
@@ -67,6 +67,12 @@ The following tests are added to the QA:
     return within a reasonable low timeout.
   * For the maximum amount of instances in the cluster, submit add-,
     remove- and list-tags jobs.
+  * Submit 200 `gnt-debug delay` jobs with a delay of 1 second. To
+    speed up submission, perform multiple job submissions in parallel.
+    Verify that submitting jobs doesn't significantly slow down during
+    the process. Verify that querying cluster information over CLI and
+    RAPI succeeds in a timely fashion with the delay jobs
+    running/queued.

 Parallel job execution performance
 ----------------------------------
@@ -96,6 +102,11 @@ be added to cover more real-world use-cases. Also, based on user
 requests, specially crafted performance tests modeling those workloads
 can be added too.

+Additionally, the correlation between job submission time and job
+queue size could be measured. To that end, a snapshot of the job
+queue could be taken before each job submission, so submission times
+can be related to the number of queued jobs.
+
 .. vim: set textwidth=72 :
 .. Local Variables:
 .. mode: rst

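Not part of the patch, but to illustrate the "submission must not slow
down" check from the interdiff, here is a minimal Python sketch. It is
hypothetical: `submit_job` is a stand-in for whatever actually submits
a `gnt-debug delay` job in the QA code, stubbed here so the sketch is
self-contained.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def submit_job(delay):
    """Stand-in (NOT the real QA helper) for submitting one
    `gnt-debug delay <delay>` job; here it just simulates a ~1ms RPC."""
    time.sleep(0.001)
    return delay


def timed_submissions(count, workers=10):
    """Submit `count` jobs from `workers` parallel threads and record
    how long each individual submission took."""
    durations = []

    def one(_):
        start = time.monotonic()
        submit_job(1)
        # list.append is atomic under the GIL, so no lock is needed
        durations.append(time.monotonic() - start)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(one, range(count)))
    return durations


def submission_slowdown(durations):
    """Ratio of the mean submission time in the second half of the run
    vs. the first half; a ratio near 1 means submission did not slow
    down as the queue filled up."""
    half = len(durations) // 2
    first = sum(durations[:half]) / half
    second = sum(durations[half:]) / (len(durations) - half)
    return second / first
```

In the real test the assertion would be that the ratio stays below some
agreed threshold, and that the CLI/RAPI queries issued meanwhile return
within their timeout.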
Cheers,
Thomas

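The job-queue snapshot idea from the "Future work" hunk could be
sketched as below. Again purely hypothetical: `queue_size` and
`submit_job` stand in for the real queue query and submission calls,
backed here by a fake in-memory queue so the sketch runs standalone.

```python
import time

_queue = []  # fake job queue for illustration only


def queue_size():
    """Stand-in for querying the current job queue length."""
    return len(_queue)


def submit_job():
    """Stand-in for a job submission; just grows the fake queue."""
    _queue.append(object())


def run(count):
    """Before each submission, snapshot the queue size; pair it with
    the measured submission time."""
    samples = []
    for _ in range(count):
        size = queue_size()
        start = time.perf_counter()
        submit_job()
        samples.append((size, time.perf_counter() - start))
    return samples


def correlate(samples):
    """Pearson correlation between queue size and submission time;
    a clearly positive value would indicate submission slows down as
    the queue grows."""
    n = len(samples)
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in samples)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5
```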

On Wed, Apr 23, 2014 at 9:52 AM, Thomas Thrainer <[email protected]> wrote:

> Some changes to the design:
>  * Instance remove jobs can't be submitted before create jobs are done, so
> rectify this.
>  * Add test of instance info jobs during heavy load on the cluster.
>  * Add some more test scenarios for the parallel instance creation test.
>
> @pudlak: Do you have comments on the design, since this was originally
> your idea?
>
> Cheers,
> Thomas
>
> Interdiff:
>
> diff --git a/doc/design-performance-tests.rst b/doc/design-performance-tests.rst
> index 1f804e0..b1b2e2b 100644
> --- a/doc/design-performance-tests.rst
> +++ b/doc/design-performance-tests.rst
> @@ -32,6 +32,10 @@ two areas:
>    * Parallel job execution performance. How well does Ganeti
>      parallelize jobs?
>
> +Jobs are submitted to the job queue in sequential order, but the
> +execution of the jobs runs in parallel. All job submissions must
> +complete within a reasonable timeout.
> +
>  In order to make it easier to recognize performance related tests, all
>  tests added in the context of this design get a description with a
>  "PERFORMANCE: " prefix.
> @@ -46,7 +50,7 @@ they are designed to run in a vcluster QA environment.
>  The following tests are added to the QA:
>
>    * Submit the maximum amount of instance create jobs in parallel. As
> -    soon as a creation job starts to run, submit a removal job for this
> +    soon as a creation job succeeds, submit a removal job for this
>      instance.
>    * Submit as many instance create jobs as there are nodes in the
>      cluster in parallel (for non-redundant instances). Removal jobs
> @@ -58,7 +62,9 @@ The following tests are added to the QA:
>    * For the maximum amount of instances in the cluster, submit multiple
>      list and info jobs in parallel.
>    * For the maximum amount of instances in the cluster, submit move
> -    jobs in parallel.
> +    jobs in parallel. While the move operations are running, get
> +    instance information using info jobs. Those jobs are required to
> +    return within a reasonable low timeout.
>    * For the maximum amount of instances in the cluster, submit add-,
>      remove- and list-tags jobs.
>
> @@ -76,10 +82,11 @@ The following tests are added to the QA:
>
>    * Submitting twice as many instance creation request as there are
>      nodes in the cluster, using DRBD as disk template. As soon as a
> -    creation job starts to run, submit a removal job for this instance.
> +    creation job succeeds, submit a removal job for this instance.
>    * Create an instance using DRBD. Fail it over, migrate it, recreate
> -    its disk and change its secondary node while creating an additional
> -    instance in parallel to each of those operations.
> +    its disk, change its secondary node, reboot it and reinstall it
> +    while creating an additional instance in parallel to each of those
> +    operations.
>
>  Future work
>  ===========
>
>
>
> On Tue, Apr 22, 2014 at 12:11 PM, Klaus Aehlig <[email protected]> wrote:
>
>> > +Job queue performance
>> > +---------------------
>> > +
>> > +Tests targeting the job queue should eliminate external factors (like
>> > +network/disk performance or hypervisor delays) as much as possible, so
>> > +they are designed to run in a vcluster QA environment.
>>
>> When testing the cost of Ganeti-internal communication only, it
>> should at least be ensured that daemons are not started in debug
>> mode; the amount of detail logged at debug level has increased
>> significantly, so for the test to be meaningful, writing the
>> debug-level entries to log files shouldn't be the bottleneck...
>>
>> --
>> Klaus Aehlig
>> Google Germany GmbH, Dienerstr. 12, 80331 Muenchen
>> Registergericht und -nummer: Hamburg, HRB 86891
>> Sitz der Gesellschaft: Hamburg
>> Geschaeftsfuehrer: Graham Law, Christine Elizabeth Flores
>>
>
>
>
> --
> Thomas Thrainer | Software Engineer | [email protected] |
>
> Google Germany GmbH
> Dienerstr. 12
> 80331 München
>
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg
> Geschäftsführer: Graham Law, Christine Elizabeth Flores
>



