"lau stephen" <stephen....@gmail.com> wrote in the news message
news:c26a05160906150443h44b8f434s46e6ab4215f61...@mail.gmail.com...
2009/6/14 Olaf Schmidt <s...@online.de>:

>> Each client could do (with a 1:10 relation of writes and reads)
>>
>> Loop 1000 times
>> With LoopCounter Modulo 7 (case-switch)
>> case 0: Insert one single record
>> case 1: Insert 10 records
>> case 2: Insert 100 records
>> case 3: Update one single Record
>> case 4: Update 10 affected Records
>> case 5: Update 100 affected Records
>> case 6: Delete one single Record
>> End of WritePart
>>
>> Followed by 10 different Read-Jobs,
>> each requesting a completely delivered
>> resultset, based on different where-clauses, etc.
>> reporting only the recordcount to the console or
>> into a file, together with the current timing
>>
>> (maybe fill up two different tables - and also include
>> some Joins for the read-direction-jobs)
>> End loop
>>

>Did you mean to do this test for one user?
>
>The request info:
>
>{
>        "method" : "execute",
>        "params" : [
>                {
>                        "dbfile" : 0,
>                        "user" : "foobar",
>                        "dbname" : "addrbook",
>                        "sql" : [
>                                "insert into addrbook values ( 1,
>\"f...@bar.com\" )",
>                                "select * from addrbook"
>                        ]
>                }
>        ],
>        "id" : "foobar"
>}
>
>The server will lock the user ( foobar ) while processing this request.
>If the server receives two requests for one user at the same time,
>it will process these two requests serially.

No, I meant a test which does not perform "multi-requests"
at the server side (thereby saving roundtrips), but instead
fires a load of single requests from the client side, in
order to measure the total requests per second the server
is able to deliver (under concurrency stress) for the given
"single-job volume".

The whole thing (as in the loop I've posted) would be
performed by 8 client scripts working in parallel - maybe
split over two client machines, each then running 4 parallel
script loops, all doing the same work (differing only in the
username, ranging from "user1" to "user8" for example).

So, what I posted was only the clientside "stress-loop"-
definition (roughly).
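As a rough sketch in Python, one such client script's stress loop could look like this - `send_rpc` is a hypothetical helper standing in for whatever transport carries a single JSON request, and the SQL statements are only placeholders modelled on the addrbook example:

```python
def run_client(send_rpc, user, loops=1000):
    """One client's stress loop: per outer iteration, one write-job
    chosen by LoopCounter mod 7, followed by ten single read-jobs
    (giving roughly the 1:10 writes-vs-reads ratio)."""
    # The seven rotating write-jobs; each is ONE request whose "sql"
    # array may contain several statements (statements are placeholders).
    writes = {
        0: ["insert into addrbook values (1, 'a@b.c')"],               # 1 insert
        1: ["insert into addrbook values (%d, 'x')" % n
            for n in range(10)],                                       # 10 inserts
        2: ["insert into addrbook values (%d, 'y')" % n
            for n in range(100)],                                      # 100 inserts
        3: ["update addrbook set mail = 'z' where id = 1"],            # 1 row
        4: ["update addrbook set mail = 'z' where id < 10"],           # ~10 rows
        5: ["update addrbook set mail = 'z' where id < 100"],          # ~100 rows
        6: ["delete from addrbook where id = 1"],                      # 1 delete
    }
    for i in range(loops):
        send_rpc(user, writes[i % 7])        # write-job: success/no-success only
        for r in range(10):                  # ten read-jobs, each a full resultset
            send_rpc(user, ["select * from addrbook where id > %d" % r])
```

Each call to `send_rpc` is meant as one complete client-to-server roundtrip, so request-per-second numbers fall straight out of counting the calls.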

I was only trying to formulate something like a simple
RPC- (or DB-) server "client-stress schema" that is not all
that complicated to implement in different languages (for
the client side) - and that could also be run in more or
less the same way against PostgreSQL, for example, since
the JSON-serialized job-definitions contain only SQL
statements - and deliver (JSON-serialized) resultsets
whenever a select statement comes in for the DB read
direction.
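For illustration, such a JSON-serialized job-definition could be assembled like this - the field names are copied from the request format quoted above, and the transport itself is left out:

```python
import json

def make_execute_request(user, sql_statements, request_id):
    """Builds the JSON body of one "execute" request in the quoted
    wire format; only the user, the SQL payload and the id vary
    between jobs."""
    return json.dumps({
        "method": "execute",
        "params": [{
            "dbfile": 0,
            "user": user,
            "dbname": "addrbook",
            "sql": sql_statements,
        }],
        "id": request_id,
    })
```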

Each of the clients (client scripts) will then have performed
the following (after finishing the outer main loop):

assuming the outer loop-count was 1000...

- 1000 different write-jobs (ranging from inserts through
  updates to simple deletes) - delivering only a "success"
  or "no success" as the result of the RPC.

- 10000 different Read-Jobs (Selects, delivering a resultset
   in the RPC-Result)

So, the server finally has to process a request count of
11000 single DB-related RPC jobs for each client ...
summing up to 88000 processed DB requests over all
8 concurrently working scripts (with a writes-vs-reads
ratio of 1:10).
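Spelled out as arithmetic:

```python
loops = 1000                    # outer loop count per client script
write_jobs = loops * 1          # one write-job per iteration (cases 0..6 rotate)
read_jobs = loops * 10          # ten read-jobs per iteration
per_client = write_jobs + read_jobs
total = per_client * 8          # eight concurrently working client scripts

print(per_client, total)        # -> 11000 88000
```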

The time of the last returning client script then sets the
total time the server was busy performing these 88000
concurrent RPC requests (or DB requests).
And that result could be used as an indicator of how such
an approach compares with a "real" DB server (although
such servers usually don't deliver their resultsets in a
JSON format over the sockets, so there's probably a
measurable serialization overhead for the JSON-based
requests - depending also a bit on whether these JSON
jobs are triggered from within JavaScript, or from e.g. a
simple C client that makes use of a faster JSON helper lib).
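With that measuring scheme, the headline number is simply the total request count divided by the wall-clock time of the slowest client. A sketch, with purely illustrative finish times:

```python
# Finish times (in seconds) reported by the 8 client scripts;
# these numbers are illustrative, not measurements.
finish_times = [118.2, 121.7, 119.9, 120.4, 122.1, 118.8, 121.0, 120.6]
total_requests = 88000

wall_clock = max(finish_times)   # the last returning script sets the total time
requests_per_second = total_requests / wall_clock
print(round(requests_per_second, 1))
```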

To "blend out" the client-side overhead a bit in that
test loop, the incoming resultsets from the performed
"DB-Select RPCs" shouldn't be processed any further at the
client side - maybe defining only the requirement to
perform the deserialization, so that the returned
resultsets are available in a "usable format" within the
client (meaning: deserialize into an array, then forget
this job - and the array - and perform the next one).
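That "deserialize, then forget" requirement could look like this on the client side - the exact JSON shape of a resultset isn't pinned down in the thread, so an array of rows is assumed here:

```python
import json

def consume_select_result(raw_json):
    """Deserializes one returned resultset into a plain Python list
    (the "usable format"), then keeps nothing but the row count -
    mirroring the "deserialize, then forget the job and the array"
    requirement, so only the recordcount gets reported."""
    rows = json.loads(raw_json)   # assumed shape: a JSON array of rows
    return len(rows)
```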

A possible PostgreSQL comparison client should then
also provide the results of the performed read selects in a
client-usable "container format" (an easy-to-access array -
or list - or object, whatever) before proceeding with
the next job - but the JSON protocol doesn't have to be
a necessity for such a PostgreSQL client implementation.


Regards,

Olaf Schmidt





_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
