Yes, of course I started by measuring the total iteration time. I found
that the throughput is about 200 Mb/s, then I started looking for a
bottleneck. Because the "downloading" time is less than the "waiting" time,
I concluded that the "waiting" step is the bottleneck, and that is why I
started this thread.
In this case I would not advise measuring how long it takes to get the
first byte. We could set the page size to 1 and get that byte very quickly,
but it wouldn't help you iterate over all the results quickly. The correct
measurement here would be the total iteration time. Provided that
object si
To be precise, it's not only about the first page; it's about getting the
next pages as well.
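The difference between "first byte arrives quickly" and "total iteration is fast" can be sketched with a toy cost model. All numbers and names here are hypothetical, not measurements from this thread: L is a per-request round-trip latency and t a per-entry transfer cost.

```java
// Toy cost model: time to first byte vs. total iteration time.
// Hypothetical parameters: latencyMs = per-request round trip,
// perEntryMs = per-entry transfer cost, entries = n, pageSize = p.
public class IterationCostModel {
    static double firstByteMs(double latencyMs, double perEntryMs) {
        return latencyMs + perEntryMs; // one request, one entry
    }

    static double totalMs(int entries, int pageSize, double latencyMs, double perEntryMs) {
        int requests = (entries + pageSize - 1) / pageSize; // ceil(n / p) round trips
        return requests * latencyMs + entries * perEntryMs;
    }

    public static void main(String[] args) {
        // Page size 1: the first byte arrives after ~10 ms, but the total
        // pays the round-trip latency once per entry.
        System.out.println(firstByteMs(10, 0.01));                // ~10.01 ms
        System.out.println(totalMs(1_000_000, 1, 10, 0.01));      // ~10,010,000 ms
        // Page size 10,000: first byte is no faster, total is far smaller.
        System.out.println(totalMs(1_000_000, 10_000, 10, 0.01)); // ~11,000 ms
    }
}
```

This is why a tiny page size can make the first-byte measurement look great while the total iteration time gets dramatically worse.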
Regarding the use case: in my client application I need to iterate over a
dataset stored in Apache Ignite as fast as possible. That means I need
maximal throughput for a simple "read all" operation.
Anton,
I still struggle to understand the problem. Delay in getting the first
page is not a problem on its own, but it might be a problem for your use
case. My question is: what is the use case and what is your goal?
Minimal latency? Maximal throughput? Getting the first result ASAP? Getting
al
BTW, here are measurements for the example I've been talking about:
Page size 5 Mb, waiting time 119.85 ± 6.72 ms
Page size 10 Mb, waiting time 157.70 ± 15.35 ms
Page size 20 Mb, waiting time 204.50 ± 19.18 ms
Page size 50 Mb, waiting time 264.70 ± 22.30 ms
Page size 100 Mb, waiting time 463.35 ± 17.12 ms
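Dividing each page size by its measured waiting time gives the effective throughput these numbers imply, under the simplifying assumption (stated above in the thread) that waiting dominates and downloading is comparatively negligible:

```java
// Effective throughput implied by the measured per-page "waiting" times,
// assuming waiting dominates: throughput = page size / waiting time.
public class ThroughputFromMeasurements {
    static double throughputMbPerSec(double pageSizeMb, double waitingMs) {
        return pageSizeMb / (waitingMs / 1000.0);
    }

    public static void main(String[] args) {
        // {page size in Mb, mean waiting time in ms} from the thread above.
        double[][] measured = {
            {5, 119.85}, {10, 157.70}, {20, 204.50}, {50, 264.70}, {100, 463.35}
        };
        for (double[] m : measured)
            System.out.printf("page %3.0f Mb -> %6.1f Mb/s%n",
                m[0], throughputMbPerSec(m[0], m[1]));
        // Even at 100 Mb pages this caps out near ~216 Mb/s, consistent
        // with the ~200 Mb/s observed on a 10 Gbit/s network.
    }
}
```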
I have already experimented with different page sizes and found that the
"downloading" time is relatively small compared to this "waiting" time, so
I've concluded that this "waiting" is the bottleneck, and that's why I'm
measuring and discussing it. In case of AWS, a 10 Gbit network allows us to
receive
I am not sure what the purpose of measuring the receive time of the first
byte is. Please try to measure the time to get the first page, or the whole
result set you are interested in.
The main purpose of the page size is to better utilize the network and
decrease the number of request-response round trips. If you are intere
Hi,
I prepared an example that reproduces what I'm talking about. Please take a
look:
https://github.com/dmitrievanthony/slow-scan-query-reproducer/blob/master/src/main/java/Client.java.
I calculate the time between when the request has been sent and when the
result is ready to be received (not fully received). And I
Reference:
org.apache.ignite.internal.processors.platform.client.cache.ClientCacheScanQueryRequest#process
On Thu, Aug 30, 2018 at 11:53 AM Vladimir Ozerov
wrote:
Hi Dmitriy,
Why do you think that the results are not fetched to the client at this
point? We respond to the client with the first page.
On Thu, Aug 23, 2018 at 5:22 PM dmitrievanthony
wrote:
I checked and it looks like the result is the same (or even worse: I get
1150 ms with page size 1000, but the reason might be other changes; my
previous measurements were done using 2.6).
--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
on
preparing the response, and I correspondingly spend this time on waiting.
This leads to the fact that I can't get throughput of more than 200 Mb/s
using a 10 Gbit/s network. It's very confusing.
So, the question: how to reduce Scan Query execution time in such a
configuration?
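One generic way to hide the per-page "waiting" described above is to keep the next page request in flight while the current page is being processed. The sketch below is not Ignite's API: `fetchPage` is a hypothetical stand-in for a paged Scan Query request, and the class just illustrates the pipelining idea with a single-thread prefetcher.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: overlap server-side page preparation ("waiting") with client-side
// processing by always keeping one page request in flight. fetchPage() is a
// hypothetical stand-in for a real paged query request.
public class PrefetchingReader {

    // Simulates the server producing one page of results.
    static List<Integer> fetchPage(int pageIndex, int pageSize, int total) {
        List<Integer> page = new ArrayList<>();
        int from = pageIndex * pageSize;
        int to = Math.min(from + pageSize, total);
        for (int i = from; i < to; i++)
            page.add(i);
        return page;
    }

    // Reads all entries; while page p is processed, page p+1 is being fetched.
    static List<Integer> readAll(int total, int pageSize) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<Integer> result = new ArrayList<>();
        try {
            int pages = (total + pageSize - 1) / pageSize;
            Future<List<Integer>> inFlight = pool.submit(() -> fetchPage(0, pageSize, total));
            for (int p = 0; p < pages; p++) {
                List<Integer> current = inFlight.get(); // wait only for the page in flight
                if (p + 1 < pages) {
                    final int next = p + 1;
                    inFlight = pool.submit(() -> fetchPage(next, pageSize, total)); // prefetch
                }
                result.addAll(current); // "process" the current page
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(readAll(10, 3)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

With this pattern the per-page wait is paid concurrently with processing rather than serially, which is the usual way to raise throughput when a single round trip per page is the bottleneck.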