What type of client do you use? Is it the JDBC thin driver?

It would be best if you could share the benchmark source code, so we can see
what queries you run, what flags you set on the queries, and so on.
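
For reference, a minimal sketch of what a JDBC thin driver connection for such
a benchmark might look like (the host name, table and query below are
placeholders, not taken from your setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ThinDriverCheck {
        public static void main(String[] args) throws Exception {
            // Ignite's JDBC thin driver listens on port 10800 by default.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:ignite:thin://ignite-host:10800");
                 Statement stmt = conn.createStatement();
                 // Placeholder query; the actual benchmark queries and their
                 // flags are what we would like to see.
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SOME_TABLE")) {
                while (rs.next())
                    System.out.println(rs.getLong(1));
            }
        }
    }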

On Thu, Jan 2, 2020 at 10:07 PM Rajan Ahlawat <rajan.ahla...@gmail.com>
wrote:

> If QPS > 2000, I am already using multiple hosts for the application that
> fires requests at the cache.
> If the benchmark were the bottleneck, we shouldn't see the drop from 2600
> to 2200 QPS when we go from a 1-node to a 3-node cluster.
>
> On Fri, Jan 3, 2020 at 11:24 AM Rajan Ahlawat <rajan.ahla...@gmail.com>
> wrote:
>
>> Hi Mikhail
>>
>> Could you please share the benchmark code with us?
>> I first fill around a million records into the cache, then fetch those
>> records randomly through direct cache service classes.
>>
>> Do you run queries against the same number of records each time?
>> Yes, 2600 QPS means it picks 2600 records randomly per second and runs
>> get queries over the SQL caches of different tables.
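>>
>> For illustration, a simplified sketch of such a random-get loop (the
>> configuration path, cache name and key range below are placeholders, not
>> the actual benchmark code) could look like this:
>>
>>     import java.util.concurrent.ThreadLocalRandom;
>>
>>     import org.apache.ignite.Ignite;
>>     import org.apache.ignite.IgniteCache;
>>     import org.apache.ignite.Ignition;
>>
>>     public class RandomGetLoop {
>>         public static void main(String[] args) {
>>             // Connect to the cluster; the Spring config path is a placeholder.
>>             try (Ignite ignite = Ignition.start("client-config.xml")) {
>>                 IgniteCache<Long, Object> cache = ignite.cache("SOME_SQL_CACHE");
>>
>>                 long totalRecords = 1_000_000L;
>>
>>                 // Roughly what "2600 random gets per second" boils down to.
>>                 for (int i = 0; i < 2_600; i++) {
>>                     long key = ThreadLocalRandom.current().nextLong(totalRecords);
>>                     cache.get(key);
>>                 }
>>             }
>>         }
>>     }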
>>
>> What host machines do you use for your nodes? When you say that you have
>> 5 nodes, does it mean that you use 5 dedicated machines, one for each node?
>> Yes, these are five dedicated Linux machines.
>>
>> Also, it might be that the benchmark itself is the bottleneck, so your
>> system can handle more QPS, but you need to run the benchmark from several
>> machines. Please try to use at least 2 hosts for the benchmark application
>> and check if there are any changes in QPS.
>> As you can see in the table, I have tried different combinations of nodes,
>> and with every increase in the number of nodes, the QPS of requests served
>> under 50 ms goes down.
>>
>>
>> On Fri, Jan 3, 2020 at 1:29 AM Mikhail Cherkasov <mcherka...@gridgain.com>
>> wrote:
>>
>>> Hi Rajan,
>>>
>>> Could you please share the benchmark code with us?
>>> Do you run queries against the same number of records each time?
>>> What host machines do you use for your nodes? When you say that you have
>>> 5 nodes, does it mean that you use 5 dedicated machines, one for each node?
>>> Also, it might be that the benchmark itself is the bottleneck, so your
>>> system can handle more QPS, but you need to run the benchmark from several
>>> machines. Please try to use at least 2 hosts for the benchmark application
>>> and check if there are any changes in QPS.
>>>
>>> Thanks,
>>> Mike.
>>>
>>> On Thu, Jan 2, 2020 at 2:49 AM Rajan Ahlawat <rajan.ahla...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> ---------- Forwarded message ---------
>>>> From: Rajan Ahlawat <rajan.ahla...@gmail.com>
>>>> Date: Thu, Jan 2, 2020 at 4:05 PM
>>>> Subject: Ignite partitioned mode not scaling
>>>> To: <user@ignite.apache.org>
>>>>
>>>>
>>>> We are moving from a replicated (1-node) cluster to a multi-node
>>>> partitioned cluster.
>>>> The assumption was that the maximum QPS we can reach would grow as nodes
>>>> are added to the cluster.
>>>> We compared the under-50 ms QPS stats of partitioned mode with an
>>>> increasing number of nodes in the cluster, and found that performance
>>>> actually degraded.
>>>> We are using Ignite key-value as well as SQL caches, with most of the
>>>> data in SQL caches; no persistence is being used.
>>>>
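>>>> A minimal sketch of this kind of partitioned cache setup with backups
>>>> (the cache name and key/value types below are placeholders, not our real
>>>> configuration):
>>>>
>>>>     import org.apache.ignite.Ignite;
>>>>     import org.apache.ignite.Ignition;
>>>>     import org.apache.ignite.cache.CacheMode;
>>>>     import org.apache.ignite.configuration.CacheConfiguration;
>>>>
>>>>     public class PartitionedCacheSetup {
>>>>         public static void main(String[] args) {
>>>>             // Starts a node with the default configuration.
>>>>             try (Ignite ignite = Ignition.start()) {
>>>>                 CacheConfiguration<Long, String> ccfg =
>>>>                     new CacheConfiguration<>("SOME_SQL_CACHE");
>>>>
>>>>                 // Partitioned mode spreads primary partitions across nodes.
>>>>                 ccfg.setCacheMode(CacheMode.PARTITIONED);
>>>>
>>>>                 // Number of backup copies per partition (0, 1 or 2 in the
>>>>                 // configurations compared below).
>>>>                 ccfg.setBackups(1);
>>>>
>>>>                 ignite.getOrCreateCache(ccfg);
>>>>             }
>>>>         }
>>>>     }
>>>>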
>>>> Please let us know what we are doing wrong, or what can be done to make
>>>> it scale.
>>>> Here are the results of the perf tests:
>>>>
>>>> *50 ms / 95th percentile comparison of partitioned mode*
>>>>
>>>> Response time in ms:
>>>>
>>>> cache mode (partitioned) | QPS  | read from sql table | read from sql table with join | read from sql table
>>>> 1-node                   | 2600 | 48                  | 46                            | 47
>>>> 3-node                   | 2190 | 50                  | 48                            | 49
>>>> 3-node-1-backup          | 2200 | 55                  | 53                            | 54
>>>> 5-node                   | 2000 | 54                  | 52                            | 53
>>>> 5-node-2-backup          | 1990 | 51                  | 49                            | 50
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Mikhail.
>>>
>>

-- 
Thanks,
Mikhail.
