Re: Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread Justin Ji
Ilya - 

Thank you for your kind help.
Do you mind sharing your server configuration? I re-ran with your
configuration, and it took more than 60 minutes to load 4000 records.

I also increased the data region size and checkpoint frequency; they improve
things a bit, but it is still too slow.

According to my test, the last 2000 records take most of the time.

Is this normal? Why does the last part of the records take so much time?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Access a cache loaded by DataStreamer with SQL

2019-03-01 Thread Mike Needham
I have looked at the documentation and the code samples, and nothing does
what I am trying to do.  I want to be able to use the DataStreamer to
load 3 or 4 TABLES in a cache for an application that we use.  If I create
the tables using CREATE TABLE syntax, how do I attach a DataStreamer to the
different caches if the cache name is PUBLIC for all of them?
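In case it is useful, each SQL-created table is backed by its own cache; PUBLIC is only the schema name. A sketch of attaching a streamer to one such cache (the table, cache, and type names here are made-up examples, not from this thread):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.binary.BinaryObject;

public class StreamIntoSqlTable {
    // Assumes a table created as:
    //   CREATE TABLE city (id BIGINT PRIMARY KEY, name VARCHAR)
    //   WITH "CACHE_NAME=cityCache,VALUE_TYPE=City";
    // Without the CACHE_NAME option, the backing cache is named "SQL_PUBLIC_CITY".
    public static void load(Ignite ignite) {
        try (IgniteDataStreamer<Long, BinaryObject> streamer =
                 ignite.dataStreamer("cityCache")) {
            // Build a binary value whose type name matches the table's VALUE_TYPE.
            BinaryObject city = ignite.binary().builder("City")
                .setField("name", "London")
                .build();

            streamer.addData(1L, city);
        }
    }
}
```

With CACHE_NAME set explicitly per table, each of the 3 or 4 tables gets its own streamer target.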

On Thu, Feb 28, 2019 at 8:13 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I have linked the documentation page, there are also some code examples in
> distribution.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Feb 28, 2019 at 17:10, Mike Needham :
>
>> Are there any examples that show the steps to do this correctly?  I
>> stumbled upon this but have no idea if it is the best way to do it.
>>
>> On Thu, Feb 28, 2019 at 6:27 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> There's no restriction on cache name but setting it up for the first
>>> time may be tricky indeed.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Wed, Feb 27, 2019 at 19:48, needbrew99 :
>>>
 OK, I was able to get it working.  Apparently the cache name has to be
 PUBLIC, and it will create a table based on the object definition that I
 have.



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

>>>
>>
>> --
>> *Some days it just not worth chewing through the restraints*
>>
>

-- 
*Some days it just not worth chewing through the restraints*


Re: Performance degradation in case of high volumes

2019-03-01 Thread Ilya Kasnacheev
Hello Antonio!

I can only observe 'timeout' checkpoints, which is good news: you are not
running out of checkpoint buffer.

Otherwise, maybe you are hitting an actual performance limit, i.e., your
system is saturated at this point. What is the total amount of data
per node at this time? What is the size of your cache entry?

I have also noted that your 2 nodes have wildly different config. The first
one has around 39G of data region, the second one has 8G.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 17:19, Antonio Conforti :

> Hello Ilya.
>
> I ran the test again from scratch with a fixed rate of 4000 msg/sec, with the
> environment variable IGNITE_MAX_INDEX_PAYLOAD_SIZE=66 and cache:
>
> 1) PARTITIONED
> 2) TRANSACTIONAL
> 3) persistence enabled
> 4) backup=0
> 5) indexes on key and value
> 6) Data region 8 GB
> 7) Checkpoint buffer size 2 GB
> 8) WAL mode LOG_ONLY
> 9) WAL archive disabled
> 10) Pages Write Throttling enabled
>
> During the test I could observe that the total checkpoint elapsed time grew
> while the platform processed all the messages without queueing. Throttling
> was logged in every checkpoint.
> After about 18 million entries the performance slowed down and queues
> formed. The total checkpoint elapsed time exceeded the checkpoint timeout
> (default).
> A lot of "Critical system error detected" messages were logged and the
> platform never recovered its performance.
> I also observed that the data region filled up after the performance
> slowdown and the checkpoint elapsed time exceeded the timeout.
>
> Attached are the logs and configuration files.
>
> log_ignite_190301_HOST1.gz
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2315/log_ignite_190301_HOST1.gz>
>
> log_ignite_190301_HOST2.gz
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2315/log_ignite_190301_HOST2.gz>
>
>
> While waiting for your suggestions, let me know if a reproducer project
> for the performance slowdown I observed would be useful.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there a mechanism that allows the user to evict cache entries that relate to an affinityKey.

2019-03-01 Thread Ilya Kasnacheev
Hello again!

As an added benefit, ScanQuery can filter entries on the data node for you
and return only the IDs needed for pruning.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 20:08, Ilya Kasnacheev :

> Hello!
>
> 1. I have no idea about your use case so it's up to you.
> 2. Please refer to ScanQuery(int) constructor.
> 3. ScanQueries are pretty solid; one partition should hold around 40k
> records assuming 1024 partitions per cluster, and that's peanuts.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Mar 1, 2019 at 17:57, Justin Ji :
>
>> Ilya -
>> First of all, thanks for your reply.
>> About your suggestion, I have some questions:
>> 1. Do you mean I should disable eviction policies?
>> 2. How do I scan a partition with ScanQuery? I did not find an example in
>> Ignite.
>> 3. If we have more than 40 million records, does this approach still
>> perform well?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Is there a mechanism that allows the user to evict cache entries that relate to an affinityKey.

2019-03-01 Thread Ilya Kasnacheev
Hello!

1. I have no idea about your use case so it's up to you.
2. Please refer to ScanQuery(int) constructor.
3. ScanQueries are pretty solid; one partition should hold around 40k
records assuming 1024 partitions per cluster, and that's peanuts.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 17:57, Justin Ji :

> Ilya -
> First of all, thanks for your reply.
> About your suggestion, I have some questions:
> 1. Do you mean I should disable eviction policies?
> 2. How do I scan a partition with ScanQuery? I did not find an example in
> Ignite.
> 3. If we have more than 40 million records, does this approach still
> perform well?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread Ilya Kasnacheev
Hello!

I assume we're still talking about your reproducer:
https://github.com/RedBlackTreei/streamer.git
With your code and a reduced data set of 2500, I get

Total time:628120ms when using

cacheCfg.setSqlIndexMaxInlineSize(64);
devIdIdx.setInlineSize(96);

as opposed to Total time:820821ms with your settings.

devIdIdx needs to be that large due to
https://issues.apache.org/jira/browse/IGNITE-11125 - it will also include _key :(
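For context, the two settings above would sit in the cache configuration roughly like this (a configuration sketch; the cache name, value type, and field are assumptions based on the reproducer discussion):

```java
import java.util.Collections;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class InlineSizeConfig {
    public static CacheConfiguration<String, Object> cacheConfig() {
        CacheConfiguration<String, Object> cacheCfg =
            new CacheConfiguration<>("streamerCache");

        // Default inline size for SQL indexes on this cache.
        cacheCfg.setSqlIndexMaxInlineSize(64);

        // Per-index override: devId needs a larger inline size because the
        // index also inlines _key (IGNITE-11125).
        QueryIndex devIdIdx = new QueryIndex("devId");
        devIdIdx.setInlineSize(96);

        QueryEntity entity = new QueryEntity(String.class.getName(), "DataPoint");
        entity.addQueryField("devId", String.class.getName(), null);
        entity.setIndexes(Collections.singletonList(devIdIdx));
        cacheCfg.setQueryEntities(Collections.singletonList(entity));

        return cacheCfg;
    }
}
```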

This may look slow, so generic optimizations might be needed:
- Have a larger data region and/or more nodes.
- If possible, load with WAL disabled (you can do that at runtime on a
per-cache basis).
- If not, have less frequent checkpoints and a larger checkpoint page buffer
size.
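The per-cache WAL toggle from the list above can be sketched as follows (the cache name is a placeholder; note that data written while WAL is off is not crash-safe until it is re-enabled):

```java
import org.apache.ignite.Ignite;

public class WalToggleSketch {
    public static void bulkLoad(Ignite ignite) {
        // Disable WAL for the cache before the bulk load; entries written in
        // this window are lost if the node crashes before WAL is re-enabled.
        ignite.cluster().disableWal("streamerCache");

        try {
            // ... run the IgniteDataStreamer load here ...
        }
        finally {
            // Re-enabling WAL triggers a checkpoint so the loaded data
            // becomes durable.
            ignite.cluster().enableWal("streamerCache");
        }
    }
}
```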

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 18:02, Justin Ji :

> I have tried to increase QueryIndex.setInlineSize and
> CacheConfiguration.setSqlIndexMaxInlineSize to 128, 256 and 512, but the
> performance became worse.
>
> Do I miss some configuration?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread Justin Ji
I have tried to increase QueryIndex.setInlineSize and
CacheConfiguration.setSqlIndexMaxInlineSize to 128, 256 and 512, but the
performance became worse.

Do I miss some configuration? 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread Ilya Kasnacheev
Hello!

From the shared logs it seems that you spend time building indexes (which
are possibly not inlined, as we discussed), and I can see nothing related to
backups here.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 17:55, Justin Ji :

> Thanks for your reply!
> 1. No, I did not use FULL_SYNC, because it waits for writes or commits to
> complete on all participating remote nodes (primary and backup), so it may
> lead to a drop in write performance, am I right? But I will try it.
> 2. Yes, please refer to the attachment; I dumped thread stacks of all three
> server nodes, and each node dumped four files.
> dump.zip
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there a mechanism that allows the user to evict cache entries that relate to an affinityKey.

2019-03-01 Thread Justin Ji
Ilya -
First of all, thanks for your reply.
About your suggestion, I have some questions:
1. Do you mean I should disable eviction policies?
2. How do I scan a partition with ScanQuery? I did not find an example in
Ignite.
3. If we have more than 40 million records, does this approach still perform
well?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread Justin Ji
Thanks for your reply!
1. No, I did not use FULL_SYNC, because it waits for writes or commits to
complete on all participating remote nodes (primary and backup), so it may
lead to a drop in write performance, am I right? But I will try it.
2. Yes, please refer to the attachment; I dumped thread stacks of all three
server nodes, and each node dumped four files.
dump.zip
  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-03-01 Thread Antonio Conforti
Hello Ilya.

I ran the test again from scratch with a fixed rate of 4000 msg/sec, with the
environment variable IGNITE_MAX_INDEX_PAYLOAD_SIZE=66 and cache:

1) PARTITIONED
2) TRANSACTIONAL
3) persistence enabled
4) backup=0
5) indexes on key and value
6) Data region 8 GB
7) Checkpoint buffer size 2 GB
8) WAL mode LOG_ONLY
9) WAL archive disabled
10) Pages Write Throttling enabled

During the test I could observe that the total checkpoint elapsed time grew
while the platform processed all the messages without queueing. Throttling
was logged in every checkpoint.
After about 18 million entries the performance slowed down and queues
formed. The total checkpoint elapsed time exceeded the checkpoint timeout
(default).
A lot of "Critical system error detected" messages were logged and the
platform never recovered its performance.
I also observed that the data region filled up after the performance slowdown
and the checkpoint elapsed time exceeded the timeout.

Attached are the logs and configuration files.

log_ignite_190301_HOST1.gz

  
log_ignite_190301_HOST2.gz

  

While waiting for your suggestions, let me know if a reproducer project
for the performance slowdown I observed would be useful.

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread Ilya Kasnacheev
Hello!

Do you use FULL_SYNC by any chance? Can you provide thread dumps during the
slowdown?

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 12:53, BinaryTree :

> Hi Igniters -
>
> I know backups will impact the performance of the cluster:
>
> If you use a PARTITIONED cache and the data loss is not critical for you
> (for example, when you have a backing cache store), consider disabling
> backups for the cache. When backups are enabled, the cache engine has to
> maintain a remote copy of each entry, which requires network exchange and
> is time-consuming.
>
> Because the data is important and cannot be lost, backups are necessary.
>
> But backups make DataStreamer performance decrease a lot. If backups are
> disabled, 40 million records can be loaded in 4 minutes, but when backup
> = 1, after loading 20 million records the speed decreases a lot; sometimes
> it takes more than 20 seconds to load 10 thousand records.
>
> Are there any configurations or methods that can improve the performance
> of DataStreamer?
>
> Related post:
>
>
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-tp21161.html
>
> I attached the thread dumps in this post.
>
> I also created a project to reproduce the problem; you can refer to:
>
> https://github.com/RedBlackTreei/streamer.git
>
>


Re: Is there a mechanism that allows the user to evict cache entries that relate to an affinityKey.

2019-03-01 Thread Ilya Kasnacheev
Hello!

You can run a ScanQuery on every partition periodically and remove old
entries along with their dependencies. This might be easier than the
eviction policy business. Just my 5c.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 1, 2019 at 15:41, BinaryTree :

> I have a cache that contains many datapoints; a datapoint looks like:
>
> dpId integer
> devId String
> name String
>
> Datapoints relate to devices; the relationship is one-to-many, and
> they are connected by devId, so devId is the affinity key.
>
> The cache key is:
>
> // key = devId + "_" + dpId
> private String key;
> @AffinityKeyMapped
> private String devId;
>
> public DpKey() {
> }
>
> For product purposes, we should keep the integrity of a set of datapoints,
> but when *eviction policies* are triggered, a part of the records that
> belong to a device may be evicted.
>
> So my question is :
>
> *Is there a mechanism that allows the user to evict cache entries that
> relate to an affinityKey?*
>
> If not, is there a convenient way to implement it, and how?
>
> Any advice will be appreciated.
>


Re: pre-load data (Apache ignite native persistence store or Cassandra) into two partitioned cache tables

2019-03-01 Thread Ilya Kasnacheev
Hello!

If you are using two fields as @AffinityKeyMapped, you can join other tables
which use the same two fields, BUT you can't join tables which use only the
first field as @AffinityKeyMapped, or only the second.

That's why you can join invoice_line to fact_purhcase_line - I guess they
both have invoiceId and factLineId and the annotation on both.

Regards,
-- 
Ilya Kasnacheev


Thu, Feb 28, 2019 at 22:38, xmw45688 :

> Can someone comment on the following questions in my previous post?
>
>
> 4. fact_purhcase_line, invoice and invoice_line via factLineId and
> invoiceId do not work; please see the annotation below
>
> public class InvoiceLineKey {
> /** Primary key. */
> private long id;
>
> /** Foreign key to fact_purhcase_line */
> @AffinityKeyMapped
> private long factLineId;
>
> /** Foreign key to invoice */
> @AffinityKeyMapped
> private long invoiceId;
>
>
> 5. I don't quite understand why the invoiceId affinity key mapping between
> invoice and invoice_line does not require a factLineId key mapping between
> fact_purchase_line and invoice_line.  Is this because of having the factId
> affinity key between purchase_fact and purchase_fact_line, and between
> purchase_fact and invoice?
>
> So I just have the following affinity key mappings:
>
> purchase_fact -> factId -> purchase_fact_line
> purchase_fact -> factId -> invoice
> invoice -> invoiceId -> invoice_line
>
> Interestingly, invoice_line joined to fact_purhcase_line works fine (see
> the queries below).  Can someone please shed some light on this?
>
> // expected
> SELECT count(*) from PARTITION.invoice inv, PARTITION.invoiceline il
> WHERE inv.id = il.invoiceid;
>
> // why does this query work? note there is a join on
> li.id = il.factLineId, which is not an affinity key mapping.
> SELECT count(*)
>  from PARTITION.factpurchaseline li, PARTITION.invoice inv,
> PARTITION.invoiceline il
> WHERE li.id = il.factlineid
>   AND inv.id = il.invoiceid
> ;
>
> // why does this query work? note there is a join on
> li.id = il.factLineId, which is not an affinity key mapping.
> SELECT count(*) from PARTITION.factpurchaseline li, PARTITION.invoiceline
> il
> WHERE li.id = il.factlineid
> ;
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Is there a mechanism that allows the user to evict cache entries that relate to an affinityKey.

2019-03-01 Thread BinaryTree
I have a cache that contains many datapoints; a datapoint looks like:
dpId integer
devId String
name String
Datapoints relate to devices; the relationship is one-to-many, and they
are connected by devId, so devId is the affinity key.

The cache key is:
// key = devId + "_" + dpId
private String key;
@AffinityKeyMapped
private String devId;

public DpKey() {
}
For product purposes, we should keep the integrity of a set of datapoints,
but when eviction policies are triggered, a part of the records that belong
to a device may be evicted.

So my question is :

Is there a mechanism that allows the user to evict cache entries that relate to 
an affinityKey?

If not, is there a convenient way to implement it, and how?

Any advice will be appreciated.

Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-03-01 Thread Andrey Mashenkov
Hi,

Most likely the heap size is too low.
Try increasing Xmx to 4 GB or higher, or avoid G1GC on small heaps,
as it is very sensitive to the amount of free heap memory.

It looks like you have a Visor node (or maybe Web Console) in the grid. Does
the OOM happen only when Visor is attached to the grid?
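For example, ignite.sh picks up extra JVM options from the JVM_OPTS environment variable, so the start command could become something like this (values are examples; heap, data region, and direct memory together must still fit in the 8 GB host):

```shell
# Sketch: raise the heap before starting the node; adjust the data
# region size down if the total no longer fits in physical RAM.
export JVM_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxDirectMemorySize=1g"
nohup $IGNITE_HOME/bin/ignite.sh grtip-config.xml > ignite.log 2>&1 &
```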

On Fri, Mar 1, 2019 at 7:17 AM James Wang 王升平 (edvance CN) <
james.w...@edvancesecurity.com> wrote:

> OS: 4C +8GB
> Data Region = 4GB
>
> start command:
> nohup $IGNITE_HOME/bin/ignite.sh -Xmx1024m -XX:+UseG1GC
> -XX:MaxDirectMemorySize=1G grtip-config.xml > ignite.log 2>&1 &
>
> How should I adjust the memory tuning?
>
> [21:29:22,777][SEVERE][mgmt-#33519%234-236-237-241%][GridJobWorker]
> Runtime error caught during grid runnable execution: GridJobWorker
> [createTime=1551360450243, startTime=1551360453071,
> finishTime=1551360483944, taskNode=TcpDiscoveryNode
> [id=a7839266-6396-4f7c-9ef7-c3a4b2355782, addrs=[0:0:0:0:0:0:0:1%lo,
> 127.0.0.1, 192.168.1.236], sockAddrs=[/192.168.1.236:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=56,
> intOrder=31, lastExchangeTime=1551151457630, loc=false,
> ver=2.7.0#20181201-sha1:256ae401, isClient=false], internal=true,
> marsh=BinaryMarshaller [], ses=GridJobSessionImpl [ses=GridTaskSessionImpl
> [taskName=o.a.i.i.v.service.VisorServiceTask, dep=GridDeployment
> [ts=1551094729510, depMode=SHARED,
> clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6,
> clsLdrId=3aef2742961-5a63d018-0eb4-493e-91a4-be6d41caff85, userVer=0,
> loc=true, sampleClsName=o.a.i.i.processors.cache.CachesRegistry,
> pendingUndeploy=false, undeployed=false, usage=0],
> taskClsName=o.a.i.i.v.service.VisorServiceTask,
> sesId=8a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782,
> startTime=1551360450899, endTime=9223372036854775807,
> taskNodeId=a7839266-6396-4f7c-9ef7-c3a4b2355782,
> clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, closed=false,
> cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false,
> internal=true, topPred=ContainsNodeIdsPredicate [],
> subjId=a7839266-6396-4f7c-9ef7-c3a4b2355782, mapFut=IgniteFuture
> [orig=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
> hash=327805292]], execName=null],
> jobId=9a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782],
> jobCtx=GridJobContextImpl
> [jobId=9a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782, timeoutObj=null,
> attrs={}], dep=GridDeployment [ts=1551094729510, depMode=SHARED,
> clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6,
> clsLdrId=3aef2742961-5a63d018-0eb4-493e-91a4-be6d41caff85, userVer=0,
> loc=true, sampleClsName=o.a.i.i.processors.cache.CachesRegistry,
> pendingUndeploy=false, undeployed=false, usage=0], finishing=true,
> masterLeaveGuard=false, timedOut=false, sysCancelled=false,
> sysStopping=false, isStarted=true, job=VisorServiceJob [], held=0,
> partsReservation=null, reqTopVer=null, execName=null]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> [21:29:22,781][SEVERE][nio-acceptor-tcp-comm-#28%234-236-237-241%][] JVM
> will be halted immediately due to the failure: [failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=java.lang.OutOfMemoryError: GC overhead limit
> exceeded]]
> [21:29:22,814][SEVERE][grid-timeout-worker-#23%234-236-237-241%][GridTimeoutProcessor]
> Error when executing timeout callback: CancelableTask
> [id=94ef2742961-0742c515-5b96-4aad-b07a-9f1ec60af5f3,
> endTime=1551360514795, period=3000, cancel=false, task=MetricsUpdater
> [prevGcTime=38806, prevCpuTime=2195727,
> super=o.a.i.i.managers.discovery.GridDiscoveryManager$MetricsUpdater@73766331
> ]]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
>
> James Wang / edvance
> Mobile/WeChat: +86 135 1215 1134
>
> This message contains information that is deemed confidential and
> privileged. Unless you are the addressee (or authorized to receive for the
> addressee), you may not use, copy or disclose to anyone the message or any
> information contained in the message. If you have received the message in
> error, please advise the sender by reply e-mail and delete the message.
>


-- 
Best regards,
Andrey V. Mashenkov


Re: How to avoid start multiple instances in single machine

2019-03-01 Thread Stephen Darlington
If you set the localPortRange to zero (a property on TcpDiscoverySpi), Ignite
will only start on the port number you specify. That way, if you bring up
another node, it will fail to start. Though automating how your environment is
configured so this could never happen would probably be a better idea!
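In code, that could look like this (a configuration sketch; 47500 is Ignite's default discovery port):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class SingleNodePerHostConfig {
    public static IgniteConfiguration config() {
        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setLocalPort(47500);   // default discovery port
        discoSpi.setLocalPortRange(0);  // bind exactly this port, no fallback

        // A second node started on the same machine now fails to bind
        // instead of silently picking the next free port.
        return new IgniteConfiguration().setDiscoverySpi(discoSpi);
    }
}
```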

Regards,
Stephen

> On 1 Mar 2019, at 05:18, jameswsp  wrote:
> 
> Hi Support,
> 
> I want to host only one instance/node per machine.
> 
> But I am worried that I might start another node on a machine by mistake.
> How can I configure Ignite to avoid this?
> 
> Thank you.
> James Wang
> M/WeChat: 135 1215 1134 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Backup make DataStreamer performance decreased a lot.

2019-03-01 Thread BinaryTree
Hi Igniters - 

I know backups will impact the performance of the cluster:

If you use a PARTITIONED cache and the data loss is not critical for you (for 
example, when you have a backing cache store), consider disabling backups for 
the cache. When backups are enabled, the cache engine has to maintain a remote 
copy of each entry, which requires network exchange and is time-consuming.

Because the data is important and cannot be lost, backups are necessary.

But backups make DataStreamer performance decrease a lot. If backups are
disabled, 40 million records can be loaded in 4 minutes, but when backup
= 1, after loading 20 million records the speed decreases a lot; sometimes it
takes more than 20 seconds to load 10 thousand records.

Are there any configurations or methods that can improve the performance of
DataStreamer?

Related post:

http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-tp21161.html

I attached the thread dumps in this post.

I also created a project to reproduce the problem; you can refer to:

https://github.com/RedBlackTreei/streamer.git

Re: Performance degradation in case of high volumes

2019-03-01 Thread Justin Ji
Thanks for your reply; the project was created, you can refer to:
https://github.com/RedBlackTreei/streamer.git

Related post:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-td21161.html

Looking forward to your reply!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-03-01 Thread Antonio Conforti
Hello Ilya,
you are right: going from 65681 to 82206 dirty pages, with the corresponding
checkpoint duration growing from 1.6 sec to 9.6 sec, is not catastrophic. I
just wanted to highlight a noticeable increase in dirty pages in a short time
despite the constant rate, suggesting that in the long run the performance
would degrade. Unfortunately I could not finish the test, but right now I'm
running a complete test: 4000 entries per second, trying to get to 60 million
entries.
I'll post the results as soon as possible.

Thanks again for your collaboration!

Antonio



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

