Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-04-28 Thread Denis Magda
Hi Shiva,

That behaviour is by design: it prevents global cluster performance degradation
or other outages. Have you tried applying my recommendation of turning off the
failure handler for these system threads?
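For reference, a minimal sketch of that configuration in Java (this example swaps in
NoOpFailureHandler, which disables reactions to *all* failure types, a broader change
than only ignoring the blocked system worker; adjust it to your setup):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.NoOpFailureHandler;

public class NodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // By default the failure handler stops/halts the node when a critical
        // system worker is reported as blocked (e.g. during a long DROP TABLE).
        // A no-op handler leaves the failure in the logs but keeps the node up.
        cfg.setFailureHandler(new NoOpFailureHandler());

        Ignite ignite = Ignition.start(cfg);
    }
}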

-
Denis


On Sun, Apr 28, 2019 at 10:28 AM shivakumar wrote:

> HI Denis,
>
> Is there any specific reason for the blocking of a critical thread, such as
> the CPU or the heap being full?
> We are hitting this issue again and again.
> Is there any other way to drop tables/caches?
> This looks like a critical issue.
>
> regards,
> shiva
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Use case for federated search

2019-04-28 Thread Enrique Medina Montenegro
Hi,

I came across Apache Ignite because I am struggling to find the best
solution to support a federated search from completely heterogeneous
sources. Let me elaborate a bit more.

I have two distinct sources from different providers, and I want my users to
be able to search across them in a federated way, applying criteria, sorting,
pagination and ordering to the merged results coming from both sources. While
at the beginning there will only be two sources, I expect to keep adding new
ones as users become familiar with the feature and find advantages in using it.

Therefore, would Apache Ignite be the proper technology for a temporary
federation mechanism that supports my use case? Has anyone done this
successfully? If so, would you mind sharing the approach?
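To make the question more concrete, here is a rough sketch of what I imagine a
federated query could look like in Ignite (Java; the caches "ProviderA"/"ProviderB"
and the Item type are purely hypothetical and assume both sources have already been
loaded into SQL-enabled Ignite caches):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class FederatedSearchSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Assumes two caches, "ProviderA" and "ProviderB", each loaded by its
        // own ingestion job and exposing a common SQL view of an Item type
        // (title, price) via QueryEntity or annotations.
        IgniteCache<Object, Object> providerA = ignite.cache("ProviderA");

        // One SQL query over the merged data: filtering, sorting and paging
        // apply to the union of both sources.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT title, price, 'A' AS src FROM \"ProviderA\".Item WHERE price < ? " +
            "UNION ALL " +
            "SELECT title, price, 'B' AS src FROM \"ProviderB\".Item WHERE price < ? " +
            "ORDER BY price LIMIT 20 OFFSET 0")
            .setArgs(100, 100);

        List<List<?>> page = providerA.query(qry).getAll();
        page.forEach(System.out::println);
    }
}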

Thanks,
Enrique Medina.


Re: cache update slow

2019-04-28 Thread Ilya Kasnacheev
Hello!

I don't think that we publish results.

Please take a look at our benchmarking approach at
https://apacheignite.readme.io/docs/perfomance-benchmarking

Regards,
-- 
Ilya Kasnacheev


Mon, 29 Apr 2019 at 05:57, Coleman, JohnSteven (Agoda) <
johnsteven.cole...@agoda.com>:

> Hi,
>
>
>
> Thanks for that observation. I increased the cache test to 100,000 entries
> and the average write rate is far better, at around 23K wps. It seems like
> there is a lot of latency on the first few hundred writes.
>
> Do you have any benchmarks published?
>
> John
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Friday, April 26, 2019 7:29 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: cache update slow
>
>
>
>
> Hello!
>
>
>
> I think that comparing steady state benchmarks of multi-million operations
> versus 500 operations is misleading.
>
>
>
> 500 operations is probably not enough to gain full benefits from e.g. JIT.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Fri, 26 Apr 2019 at 12:20, Coleman, JohnSteven (Agoda) <
> johnsteven.cole...@agoda.com>:
>
> Hi,
>
> Yes, comparing to DMA is an apples-and-oranges comparison, but it gives an
> idea of the relative gap in performance.
>
> A better comparison would be to a similar product such as NCache. They
> claim 20K wps*, i.e. 20 times faster than my Ignite results, but obviously
> I'd have to test my own scenario for a valid comparison. That is more like
> the kind of gap in performance I'd expect versus DMA. Then again, NCache is
> also quite a different product from Ignite, so it's hard to say.
>
> regards,
> John
>
> http://www.alachisoft.com/ncache/ncache-performance-benchmarks.html
>
> -Original Message-
> From: Maxim.Pudov 
> Sent: Friday, April 26, 2019 3:17 PM
> To: user@ignite.apache.org
> Subject: RE: cache update slow
>
>
> Glad you met your requirements. I think it is not fair to compare Ignite
> with direct memory access, so I can't really say whether this is a good
> result or not. In your case the .NET process starts a Java process and
> communicates with it via JNI [1]. Also, Ignite stores cache data off-heap,
> which requires serialisation [2].
>
> [1] https://apacheignite-net.readme.io/docs#section-ignite-and-ignitenet
> [2] https://apacheignite.readme.io/docs/durable-memory
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: cache update slow

2019-04-28 Thread Coleman, JohnSteven (Agoda)
Hi,

Thanks for that observation. I increased the cache test to 100,000 entries and the
average write rate is far better, at around 23K wps. It seems like there is a lot of
latency on the first few hundred writes.
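For reference, a minimal warm-up-then-measure sketch of what I am timing (shown in
Java here; my actual test runs through Ignite.NET, and the cache name and values are
placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PutThroughputSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("bench");

        // Warm-up: let JIT compilation, class loading and page allocation
        // happen outside the measured window.
        for (long i = 0; i < 10_000; i++)
            cache.put(i, "warmup-" + i);

        // Measured run: 100,000 single puts.
        long start = System.nanoTime();
        for (long i = 0; i < 100_000; i++)
            cache.put(i, "value-" + i);
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;

        System.out.printf("%.0f writes/sec%n", 100_000 / seconds);
    }
}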

Do you have any benchmarks published?

John

From: Ilya Kasnacheev 
Sent: Friday, April 26, 2019 7:29 PM
To: user@ignite.apache.org
Subject: Re: cache update slow


Hello!

I think that comparing steady state benchmarks of multi-million operations 
versus 500 operations is misleading.

500 operations is probably not enough to gain full benefits from e.g. JIT.

Regards,
--
Ilya Kasnacheev


Fri, 26 Apr 2019 at 12:20, Coleman, JohnSteven (Agoda) <johnsteven.cole...@agoda.com>:
Hi,

Yes, comparing to DMA is an apples-and-oranges comparison, but it gives an idea of
the relative gap in performance.

A better comparison would be to a similar product such as NCache. They claim
20K wps*, i.e. 20 times faster than my Ignite results, but obviously I'd have to
test my own scenario for a valid comparison. That is more like the kind of gap in
performance I'd expect versus DMA. Then again, NCache is also quite a different
product from Ignite, so it's hard to say.

regards,
John

http://www.alachisoft.com/ncache/ncache-performance-benchmarks.html

-Original Message-
From: Maxim.Pudov <pudov@gmail.com>
Sent: Friday, April 26, 2019 3:17 PM
To: user@ignite.apache.org
Subject: RE: cache update slow



Glad you met your requirements. I think it is not fair to compare Ignite with
direct memory access, so I can't really say whether this is a good result or
not. In your case the .NET process starts a Java process and communicates with it
via JNI [1]. Also, Ignite stores cache data off-heap, which requires
serialisation [2].

[1] https://apacheignite-net.readme.io/docs#section-ignite-and-ignitenet
[2] https://apacheignite.readme.io/docs/durable-memory



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




NATIVE PERSISTENCE: Cache data is destroyed after disable WAL and restart

2019-04-28 Thread Manu
Hi! 

I have a question: is it normal that, if WAL is deactivated for a persisted
cache and the server node(s) are restarted, the persisted content of the
cache is completely destroyed?

I need to disable WAL for large, heavy ingestion processes, but the ingestion
may fail (OS or machine crash), so the WAL is never re-enabled. In this
situation, if I restart a server node, the cache's persistence directory is
deleted and recreated, so the data is lost.

Thanks! 

This is the method responsible for this behaviour:
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.beforeCacheGroupStart

Process to reproduce it (see the sketch below):

1. Start one or more server nodes with native persistence enabled.
2. Create a natively persisted cache and store some data.
3. Disable WAL for the cache: ignite().cluster().disableWal("TheCacheName").
4. Restart the server node(s).
5. Check that the cache directory was deleted and recreated, and all data was lost.
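A minimal Java sketch of these steps (the cache name matches the example above;
the data region settings are only for illustration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalDisableRepro {
    public static void main(String[] args) {
        // 1. Server node with native persistence enabled on the default region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true);

        // 2. Create a persisted cache and store some data.
        ignite.getOrCreateCache("TheCacheName").put(1, "value");

        // 3. Disable WAL for the cache.
        ignite.cluster().disableWal("TheCacheName");

        // 4./5. Kill and restart the node, then check the cache directory:
        //       on 2.7.0 it is deleted and recreated, so the data is lost.
    }
}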

Call stack on server node start:

*org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.beforeCacheGroupStart*
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.registerCacheGroup
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.registerNewCache
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.processJoiningNode
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.onStart
*org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnStart*
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onReadyForRead
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetastorageReadyForRead
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead
org.apache.ignite.internal.IgniteKernal.start
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start
org.apache.ignite.internal.IgnitionEx.start0
org.apache.ignite.internal.IgnitionEx.startConfigurations
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.Ignition.start
org.apache.ignite.startup.cmdline.CommandLineStartup.main

Ignite version 2.7.0



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


JDBC client disconnected without any exception

2019-04-28 Thread shivakumar
Hi all,
I have created 3 tables and I am trying to ingest 50 crore (500 million) records
into each table in parallel, using a JDBC connection with a batch commit size of
5,000 records. After ingesting around 25 to 30 crore records, the program that
ingests the data over the JDBC connection stops without any exception; there are
no exceptions in the Ignite logs either, and there are no node restarts.
Is there any known issue similar to this?
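For context, the ingestion loop is roughly like the following sketch (thin JDBC
driver; the host, table and columns are placeholders, and the real job runs one
such loop per table in parallel):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchIngestSketch {
    public static void main(String[] args) throws Exception {
        // Thin JDBC driver; host and table/columns are placeholders.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO my_table (id, val) VALUES (?, ?)")) {

            for (long i = 0; i < 500_000_000L; i++) {   // 50 crore (500 million) rows
                ps.setLong(1, i);
                ps.setString(2, "row-" + i);
                ps.addBatch();

                // Flush every 5,000 rows -- the batch commit size described above.
                if ((i + 1) % 5_000 == 0)
                    ps.executeBatch();
            }

            ps.executeBatch();  // flush the remainder
        }
    }
}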

regards,
shiva
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-04-28 Thread shivakumar
HI Denis,

Is there any specific reason for the blocking of a critical thread, such as
the CPU or the heap being full?
We are hitting this issue again and again.
Is there any other way to drop tables/caches?
This looks like a critical issue.

regards,
shiva 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/