Re: number of way segments in wal

2019-12-10 Thread krkumar24061...@gmail.com
Hi Anton - Initially we had the WAL and WAL archive configured to different
folders; later we changed the config to the same folder and restarted the
cluster. Is that a problem?

Thanx and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Understanding Write/Read IOPS during Ignite data load through Streamer and JDBC SQL

2019-12-10 Thread krkumar24061...@gmail.com
Hi Ilya - Thanks for the reply. What you said is exactly what's happening, I
think. We do see the warning that pages will be rotated with disk and that
writes will slow down.

Here is the message that we see in the logs:

WARN ][data-streamer-stripe-22-#59][PageMemoryImpl] Page replacements
started, pages will be rotated with disk, this will affect storage
performance (consider increasing DataRegionConfiguration#setMaxSize).

We do have inline indexes on the SQL tables; that is probably contributing as
well.

Is there any solution other than increasing the off-heap size?
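
For reference, the size increase suggested by the warning is configured on the default data region. A minimal Spring XML sketch, assuming a 4 GB figure purely for illustration (not a recommendation from this thread):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <!-- assumed value; size the region to fit the hot data set -->
          <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```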

Thanx and Regards,
KR Kumar





Re: Adhoc Reporting use case (Ignite+Spark?)

2019-12-10 Thread sri hari kali charan Tummala
Check these examples.

https://aws.amazon.com/blogs/big-data/real-time-in-memory-oltp-and-analytics-with-apache-ignite-on-aws/

Spark + Ignite using a Thrift connection:

https://github.com/kali786516/ApacheIgnitePoc/blob/ef95cdd65ef2a91aeef59dd2b4ebafec4da19a17/src/main/scala/com/ignite/examples/spark/SparkClientConnectionTest.scala#L75


https://github.com/kali786516/ApacheIgnitePoc/blob/master/src/main/scala/com/ignite/examples/spark/SparkClientConnectionTest.scala#L75

Thanks
Sri

https://github.com/kali786516/ApacheIgnitePoc/blob/ef95cdd65ef2a91aeef59dd2b4ebafec4da19a17/src/main/scala/com/ignite/examples/spark/SparkClientConnectionTest.scala#L7


On Tuesday, December 10, 2019, Ilya Kasnacheev wrote:

> Hello!
>
> I think you will have to provide way more details about your case before
> getting any feedback.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 5 Dec 2019 at 00:07, Deepak:
>
>> Hello,
>>
>> I am trying out Ignite+Spark combination for my adhoc reporting use case.
>> Any suggestion/help on similar architecture would be helpful. Does it make
>> sense or I am going totally south ??
>>
>> thanks in advance
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 
Thanks & Regards
Sri Tummala


Re: Ignite Hanging on startup after OOME

2019-12-10 Thread Mitchell Rathbun (BLOOMBERG/ 731 LEX)
We are running in LOCAL cache mode with persistence enabled. We were able to 
trigger the crash with large putAll calls and low off-heap memory allocated 
(persistence is enabled). It was an IgniteOutOfMemoryException that occurred. I 
can try to get a thread dump and add it to this e-mail chain.

From: user@ignite.apache.org At: 12/10/19 09:30:47 To: user@ignite.apache.org
Subject: Re: Ignite Hanging on startup after OOME

Hi Mitchell,

What is the reproducer for the issue? Is my assumption correct: no
persistence, fill a small data region with data, crash, restart? Are you
hitting OutOfMemoryError or specifically IgniteOutOfMemoryException?

Are you able to capture JVM thread dump in this case?

Best regards,
Anton






Re: IgniteOutOfMemoryException in LOCAL cache mode with persistence enabled

2019-12-10 Thread Mitchell Rathbun (BLOOMBERG/ 731 LEX)
2 GB is not reasonable off-heap memory for our use case. In general, even if 
off-heap is very low, performance should just degrade and calls should become 
blocking; I don't think we should crash. Either way, the issue seems to be with 
putAll, not with concurrent updates of different caches in the same data 
region. If I use Ignite's DataStreamer API instead of putAll, I get much better 
performance and no OOM exception. Any insight into why this might be would be 
appreciated.

From: user@ignite.apache.org At: 12/10/19 11:24:35 To: Mitchell Rathbun 
(BLOOMBERG/ 731 LEX), user@ignite.apache.org
Subject: Re: IgniteOutOfMemoryException in LOCAL cache mode with persistence 
enabled

Hello!

10 MB is a very low-ball value for testing disk performance, considering how 
Ignite's WAL and checkpoints are structured. As already mentioned, it does not 
even work properly.

I recommend using a 2 GB value instead. Just load enough data that you can 
observe constant checkpoints.

Regards,
-- 
Ilya Kasnacheev


Wed, 4 Dec 2019 at 03:16, Mitchell Rathbun (BLOOMBERG/ 731 LEX):

For the requested full Ignite log, where would this be found if we are running 
in LOCAL mode? We are not explicitly running a separate Ignite node, and our 
work directory does not seem to have any logs.

From: user@ignite.apache.org At: 12/03/19 19:00:18 To: user@ignite.apache.org
Subject: Re: IgniteOutOfMemoryException in LOCAL cache mode with persistence 
enabled

For our configuration properties, our DataRegion initialSize and maxSize were 
set to 11 MB and persistence was enabled. For DataStorage, our pageSize was set 
to 8192 instead of 4096. For the cache, write-behind is disabled, on-heap 
caching is disabled, and the atomicity mode is ATOMIC.

From: user@ignite.apache.org At: 12/03/19 13:40:32 To: user@ignite.apache.org
Subject: Re: IgniteOutOfMemoryException in LOCAL cache mode with persistence 
enabled

Hi Mitchell,

Looks like it can be easily reproduced with low off-heap sizes; I tried with
simple puts and got the same exception:

class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Failed to
find a page for eviction [segmentCapacity=1580, loaded=619,
maxDirtyPages=465, dirtyPages=619, cpPages=0, pinnedInSegment=0,
failedToPrepare=620]
Out of memory in data region [name=Default_Region, initSize=10.0 MiB,
maxSize=10.0 MiB, persistenceEnabled=true] Try the following:
  ^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies

It looks like Ignite should issue a proper warning in this case, and a couple
of issues should be filed against Ignite JIRA.

Check out this article on the persistent store, available in the Ignite
confluence as well:
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-Checkpointing

I've managed to make a similar example work with a 20 MB region with a bit of
tuning, adding a couple of checkpoint-related properties to
org.apache.ignite.configuration.DataStorageConfiguration.

The whole idea is to trigger a checkpoint on timeout rather than on the
dirty-pages percentage threshold. The checkpoint page buffer size may not
exceed the data region size (10 MB here), and the buffer itself might overflow
during a checkpoint as well.
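
The actual property elements did not survive the archive; judging by the description (checkpoint on timeout, bounded checkpoint page buffer), a hedged sketch of what the tuning may have looked like (all values are assumptions):

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
  <!-- assumed: checkpoint every 5 s instead of waiting for the dirty-pages threshold -->
  <property name="checkpointFrequency" value="5000"/>
  <property name="defaultDataRegionConfiguration">
    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
      <property name="persistenceEnabled" value="true"/>
      <!-- assumed: keep the checkpoint page buffer within the tiny region -->
      <property name="checkpointPageBufferSize" value="#{5L * 1024 * 1024}"/>
    </bean>
  </property>
</bean>
```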

I assume the checkpoint is never triggered in this case because of
per-partition overhead: Ignite writes some metadata per partition, and it looks
like at least one meta page is utilized for each, so some amount of off-heap is
consumed by these meta pages. With the lowest possible region size, this can
exceed 3 MB for a cache with 1k partitions, and the 70% dirty-data-pages
threshold would never be reached.

However, I found another issue where it is not possible to save the meta page
at checkpoint begin; this reproduces on a 10 MB data region with the mentioned
storage configuration options.

Could you please describe your configuration if anything differs from the
defaults (page size, WAL mode, partition count) and the key/value types that
you use? And if possible, could you please attach the full Ignite log from the
node that suffered the IOOM?

As for data regions and caches, cache groups also play a role here. But
generally I would recommend going with one data region for all caches unless
you have a particular reason to have multiple regions. For example, if some
cache holds really important data that always needs to be available in durable
off-heap memory, then you should give that cache a separate data region, as I'm
not aware of a way to disallow evicting pages for a specific cache.

Cache groups documentation link:
https://apacheignite.readme.io/docs/cache-groups

By default (when a cache doesn't have the cacheGroup property defined), each
cache has its own cache group with the very same name, which lives in the data
region assigned to the cache.
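
A hedged sketch of the separate-region setup described above (region and cache names are invented for illustration):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <property name="dataRegionConfigurations">
        <list>
          <!-- dedicated durable region for the critical cache -->
          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="name" value="importantRegion"/>
            <property name="persistenceEnabled" value="true"/>
          </bean>
        </list>
      </property>
    </bean>
  </property>
  <property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
      <property name="name" value="importantCache"/>
      <!-- pin the cache to its own region -->
      <property name="dataRegionName" value="importantRegion"/>
    </bean>
  </property>
</bean>
```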

Re: H2 version security concern

2019-12-10 Thread Evgenii Zhuravlev
Hi,

There are plans to replace H2 with Calcite. You can read more about it on the
dev list; I've seen several threads on this topic there.

Evgenii


Tue, 10 Dec 2019 at 13:29, Sobolevsky, Vladik:

> Hi,
>
>
>
> It looks like all recent versions of Apache Ignite (ignite-indexing)
> depend on H2 version 1.4.197.
>
> This version has at least two CVEs:
>
> https://nvd.nist.gov/vuln/detail/CVE-2018-10054
>
> https://nvd.nist.gov/vuln/detail/CVE-2018-14335
>
>
>
> I do understand that not all of the above CVEs can be exploited, due to the
> way Ignite uses H2, but still: are there any plans to upgrade to a version
> that doesn't have them?
>
>
>
> Thank You,
>
> Vladik
>
>
>
>
>
>
>


H2 version security concern

2019-12-10 Thread Sobolevsky, Vladik
Hi,

It looks like all recent versions of Apache Ignite (ignite-indexing) depend on 
H2 version 1.4.197.
This version has at least two CVEs:
https://nvd.nist.gov/vuln/detail/CVE-2018-10054
https://nvd.nist.gov/vuln/detail/CVE-2018-14335

I do understand that not all of the above CVEs can be exploited, due to the way 
Ignite uses H2, but still: are there any plans to upgrade to a version that 
doesn't have them?

Thank You,
Vladik





Geometry via SQL?

2019-12-10 Thread richard.ows...@viasat.com
Is there an example of how to use the SQL interface to INSERT a GEOMETRY data
type? The GEOMETRY data type is supported through the SQL interface (I can
create a table with a GEOMETRY field and create the necessary index), but I
cannot INSERT a geometry.





Re: Topology version changing very frequently

2019-12-10 Thread Prasad Bhalerao
Can you please try to answer questions 1, 2 and 3?

In my logs I can see that the number of servers is also changing. The topology
version changes when the server count drops to 3 and then returns to 4. The
total number of server nodes is 4.

Thanks,
Prasad


On Tue 10 Dec, 2019, 11:45 PM akurbanov wrote:

> Hello,
>
> Are you able to share full logs from the server and client instances?
>
> In short: by default clients can reconnect to the cluster after the network
> connection was disrupted by anything (network issues, gc pauses etc.).
>
>
> https://apacheignite.readme.io/docs/clients-vs-servers#section-reconnecting-a-client
>
> Server drops client node from topology once failure detection timeout is
> reached, if you want a client to be stopped and segmented in this case, use
> property clientReconnectDisabled and set it to true, the sample is on the
> documentation page.
>
> Network timeout defines the timeout for different network operations, among
> the usages are client reconnect attempt timeout, connection establishing
> timeout on join attempt and sending messages.
>
> Best regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: JVM crashed - Failed to get field because type ID of passed object differs from type ID this BinaryField belongs to

2019-12-10 Thread akurbanov
Hello,

I would also recommend checking which objects are used here: grep through the
binary_meta and marshaller directories (in $IGNITE_HOME/work) to identify the
classes. You should see x.classname0 files where x equals each of the two type
IDs.

Are both classes present? Were they previously stored in the same cache?

Best regards,
Anton





Re: Topology version changing very frequently

2019-12-10 Thread akurbanov
Hello,

Are you able to share full logs from the server and client instances? 

In short: by default clients can reconnect to the cluster after the network
connection was disrupted by anything (network issues, gc pauses etc.). 

https://apacheignite.readme.io/docs/clients-vs-servers#section-reconnecting-a-client

The server drops a client node from the topology once the failure detection
timeout is reached. If you want the client to be stopped and segmented in this
case, set the clientReconnectDisabled property to true; a sample is on the
documentation page.
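
The property in question lives on IgniteConfiguration; a minimal sketch:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- stop (segment) the client instead of letting it reconnect -->
  <property name="clientReconnectDisabled" value="true"/>
</bean>
```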

The network timeout defines the timeout for various network operations; among
its usages are the client reconnect attempt timeout, the
connection-establishment timeout on a join attempt, and message sending.

Best regards,
Anton





Re: Client (weblogic) attempting to rejoin cluster causes shutdown.

2019-12-10 Thread Stanislav Lukyanov
OK, there is a lot to dig through here, but let me try establishing
simple things first.
1. If two nodes (client or server) specify the same cache in their
configuration, the configs must be identical.
2. If one node has a cache configuration, it will be shared with all
nodes automatically.
3. A client doesn't store data (except for LOCAL caches or near caches).

Try only specifying caches and data regions on your server.
Does it help?

Stan

On Tue, Dec 10, 2019 at 7:58 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Unfortunately it's hard to say without looking at complete logs from your
> nodes. There's too many questions and not enough clues.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 6 Dec 2019 at 03:56, Steven.Gerdes:
>
>> The production instance has issues with ignite heaping out, the solution
>> we
>> attempted to implement was to set the default data region to have swap
>> enabled and also set a eviction policy on the server with a maxMemorySize
>> such that it was much less then the Xmx jvm memory size.
>>
>> Testing locally with a dev version of our server (weblogic acting as
>> ignite
>> with client mode enabled) and the docker instance of ignite 2.7.6 it
>> appears
>> as though using this configuration does not solve ignites instability
>> issues.
>>
>> Many different configurations were attempted (for the full config see
>> bottom
>> of post). The desired configuration would be one which the client has no
>> cache and the server does all the caching. That was done with attempting
>> the
>> below on the server:
>>
>>   
>>
>>   
>>
>> > class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
>>   
>>
>>   
>> 
>>   
>>
>> With the above server configuration 3 attempts were made to the client
>> configuration:
>>
>> 1. Mirrored configuration on the client
>> 2. Similar configuration with maxSize set to 0 (an attempt at ensuring the
>> client didn't try to cache)
>> 3. enabling IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK and not having any
>> eviction policy for the client
>>
>> All three of these configurations resulted in the client weblogic to
>> disconnect from the cluster and finally to die (while attempting to
>> reconnect to ignite)
>>
>> Error from client before death:
>>   | Servlet failed with an Exception
>>   | java.lang.IllegalStateException: Grid is in invalid state to perform
>> this operation. It either not started yet or has already being or have
>> stopped [igniteInstanceName=null, state=STOPPING]
>>   | at
>>
>> org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:201)
>>   | at
>>
>> org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:95)
>>   | at
>> org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3886)
>>   | at
>>
>> org.apache.ignite.internal.IgniteKernal.transactions(IgniteKernal.java:2862)
>>   | at
>>
>> org.apache.ignite.cache.websession.CustomWebSessionFilter.init(CustomWebSessionFilter.java:273)
>>   | Truncated. see log file for complete stacktrace
>>
>> Error in ignitevisorcmd.sh:
>> SEVERE: Blocked system-critical thread has been detected. This can lead to
>> cluster-wide undefined behaviour [threadName=tcp-disco-msg-worker,
>> blockedFor=17s]
>> Dec 06, 2019 12:17:27 AM java.util.logging.LogManager$RootLogger log
>> SEVERE: Critical system error detected. Will be handled accordingly to
>> configured handler [hnd=StopNodeFailureHandler
>> [super=AbstractFailureHandler
>> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SEGMENTATION]]],
>> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
>> o.a.i.IgniteException: GridWorker [name=tcp-disco-msg-worker,
>> igniteInstanceName=null, finished=false, heartbeatTs=1575591430824]]]
>> class org.apache.ignite.IgniteException: GridWorker
>> [name=tcp-disco-msg-worker, igniteInstanceName=null, finished=false,
>> heartbeatTs=1575591430824]
>> at
>>
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>> at
>>
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>> at
>>
>> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
>> at
>>
>> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
>> at
>>
>> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$TimeoutWorker.body(GridTimeoutProcessor.java:221)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>> at java.lang.Thread.run(Thread.java:748)
>>
>> Sometimes in testing it was possible for the client to successfully
>> reconnect. But I could not see why it was inconsistent with this behavior.
>>
>> A separate test was conducted in which there was no eviction policy or
>> on-heap enabled on either the client or server. This seems to 

Re: number of way segments in wal

2019-12-10 Thread akurbanov
Hello,

One important question: if you can recall, did you start a clean cluster with
the WAL and WAL archive pointing to the very same directory, or did you stop
the node without WAL/PDS cleanup and then change this setting?

In other words, when this configuration was applied, were there real WAL
segment files already in the WAL folder?
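
For context, the WAL and WAL archive locations under discussion are set on DataStorageConfiguration; a sketch with distinct directories (the paths are assumptions), where pointing both properties at the same directory is the case being asked about:

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
  <!-- assumed paths; by default these are separate subdirectories of the work dir -->
  <property name="walPath" value="/data/ignite/wal"/>
  <property name="walArchivePath" value="/data/ignite/wal-archive"/>
</bean>
```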

Best regards,
Anton





Re: JVM crashed - Failed to get field because type ID of passed object differs from type ID this BinaryField belongs to

2019-12-10 Thread Ilya Kasnacheev
Hello!

This time the error is more specific, and it has to do with the composition of
your objects. It seems that you managed to put some objects into your cache
whose field types differ from the rest, and this interferes with index
building. This should normally be handled rather than causing data corruption,
but something could have slipped.

How do you populate the cache? Can you check which operations caused this
error and what data was passed to them?

Regards,
-- 
Ilya Kasnacheev


Fri, 6 Dec 2019 at 11:13, sgtech19:

> Hello,
>With and without native persistence, we have been getting the below
> exception in production.
>
>We upgraded to the latest Ignite version, 2.7.6, after reading through
> this post; still no luck. Any help is appreciated. Unfortunately I couldn't
> reproduce it in a local setup.
>
> Thanks
>
>
>
> http://apache-ignite-users.70518.x6.nabble.com/index-corrupted-error-org-apache-ignite-internal-processors-cache-persistence-tree-CorruptedTreeExcew-td29405.html
>
>
>Caused by: org.h2.jdbc.JdbcSQLException: General error: "class
>
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
> Runtime failure on row: Row@15e0373e[
>
>
>
>
>Caused by: org.apache.ignite.binary.BinaryObjectException:
> Failed to get
> field because type ID of passed object differs from type ID this
> BinaryField
> belongs to [expected=-998325920, actual=905211022]
> at
>
> org.apache.ignite.internal.binary.BinaryFieldImpl.fieldOrder(BinaryFieldImpl.java:287)
> ~[ignite-core-2.7.6.jar:2.7.6]
> at
>
> org.apache.ignite.internal.binary.BinaryFieldImpl.value(BinaryFieldImpl.java:109)
> ~[ignite-core-2.7.6.jar:2.7.6]
> at
>
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:260)
> ~[ignite-core-2.7.6.jar:2.7.6]
> at
>
> org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:156)
> ~[ignite-core-2.7.6.jar:2.7.6]
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Client (weblogic) attempting to rejoin cluster causes shutdown.

2019-12-10 Thread Ilya Kasnacheev
Hello!

Unfortunately it's hard to say without looking at complete logs from your
nodes. There's too many questions and not enough clues.

Regards,
-- 
Ilya Kasnacheev


Fri, 6 Dec 2019 at 03:56, Steven.Gerdes:

> The production instance has issues with Ignite running out of heap. The
> solution we attempted was to set the default data region to have swap
> enabled, and also to set an eviction policy on the server with a
> maxMemorySize much less than the Xmx JVM heap size.
>
> Testing locally with a dev version of our server (WebLogic acting as an
> Ignite client) and the Docker instance of Ignite 2.7.6, it appears
> that this configuration does not solve Ignite's instability
> issues.
>
> Many different configurations were attempted (for the full config see
> bottom
> of post). The desired configuration would be one which the client has no
> cache and the server does all the caching. That was done with attempting
> the
> below on the server:
>
>   
>
>   
>
>  class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
>   
>
>   
> 
>   
>
> With the above server configuration 3 attempts were made to the client
> configuration:
>
> 1. Mirrored configuration on the client
> 2. Similar configuration with maxSize set to 0 (an attempt at ensuring the
> client didn't try to cache)
> 3. enabling IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK and not having any
> eviction policy for the client
>
> All three of these configurations resulted in the WebLogic client
> disconnecting from the cluster and finally dying (while attempting to
> reconnect to Ignite).
>
> Error from client before death:
>   | Servlet failed with an Exception
>   | java.lang.IllegalStateException: Grid is in invalid state to perform
> this operation. It either not started yet or has already being or have
> stopped [igniteInstanceName=null, state=STOPPING]
>   | at
>
> org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:201)
>   | at
>
> org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:95)
>   | at
> org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3886)
>   | at
>
> org.apache.ignite.internal.IgniteKernal.transactions(IgniteKernal.java:2862)
>   | at
>
> org.apache.ignite.cache.websession.CustomWebSessionFilter.init(CustomWebSessionFilter.java:273)
>   | Truncated. see log file for complete stacktrace
>
> Error in ignitevisorcmd.sh:
> SEVERE: Blocked system-critical thread has been detected. This can lead to
> cluster-wide undefined behaviour [threadName=tcp-disco-msg-worker,
> blockedFor=17s]
> Dec 06, 2019 12:17:27 AM java.util.logging.LogManager$RootLogger log
> SEVERE: Critical system error detected. Will be handled accordingly to
> configured handler [hnd=StopNodeFailureHandler
> [super=AbstractFailureHandler
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SEGMENTATION]]],
> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
> o.a.i.IgniteException: GridWorker [name=tcp-disco-msg-worker,
> igniteInstanceName=null, finished=false, heartbeatTs=1575591430824]]]
> class org.apache.ignite.IgniteException: GridWorker
> [name=tcp-disco-msg-worker, igniteInstanceName=null, finished=false,
> heartbeatTs=1575591430824]
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
> at
>
> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
> at
>
> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
> at
>
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$TimeoutWorker.body(GridTimeoutProcessor.java:221)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:748)
>
> Sometimes in testing the client managed to reconnect successfully, but I
> could not see why this behavior was inconsistent.
>
> A separate test was conducted with no eviction policy and no on-heap caching
> enabled on either the client or the server. This seems to be more
> stable.
>
> Is there something incorrect in the configuration? Is there something
> missing that would let us use on-heap memory without it causing issues
> for our client?
>
>
>
>
> Appendix:
>
> Client Configuration:
> 
>
> http://www.springframework.org/schema/beans";
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="
> http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd";>
>class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
>   
> 
> 
>   
> class="org.apache.ignite.spi.
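
Much of the XML in this post was stripped by the list archive; a hedged reconstruction of the kind of server-side eviction setup described (the cache name and sizes are assumptions, not the poster's actual values):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="someCache"/> <!-- assumed name -->
  <!-- on-heap caching must be enabled for an eviction policy to apply -->
  <property name="onheapCacheEnabled" value="true"/>
  <property name="evictionPolicyFactory">
    <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
      <!-- assumed: well below the JVM Xmx, per the approach described -->
      <property name="maxMemorySize" value="#{256L * 1024 * 1024}"/>
    </bean>
  </property>
</bean>
```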

Re: What is the purpose of indexes in Cassandra table when they can not be queried?

2019-12-10 Thread Ilya Kasnacheev
Hello!

I will give an alternative explanation.

Even though Ignite will not go to the underlying database when executing its
own SQL, it has to use the underlying database for read-through, so the
underlying database should have an index on the cache key.

If you only ever do write-through, I think you can go without any keys or
indexes in some extreme cases.
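
The key/value mapping that drives read-through is declared in the Cassandra persistence descriptor; a hedged sketch matching the table from the question (element names follow the ignite-cassandra module's format; the POJO package is invented):

```xml
<persistence keyspace="ignite" table="custom_counter">
  <!-- primitive key mapped straight to the partition-key column -->
  <keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="username"/>
  <!-- POJO fields mapped to the remaining columns; class name assumed -->
  <valuePersistence class="com.example.CustomCounterIgnite" strategy="POJO"/>
</persistence>
```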

Regards,
-- 
Ilya Kasnacheev


Wed, 4 Dec 2019 at 00:52, Stefan Miklosovic
<stefan.mikloso...@instaclustr.com>:

> Hi,
>
> I am using Cassandra Ignite integration. In this integration, one is
> able to provide specific "mapping" in persistence xml so a key is e.g.
> a primitive and value is POJO.
>
> Now, I was quite surprised (as were a lot of other people, as I read
> more about this) that it is not possible to run SQL queries on data
> that is not in memory/cached: a query does not go down to the DB
> level when the cache is empty. One has to load everything into memory, or
> one has to know a key so the entry gets cached (and SQL queries can work on
> it).
>
> Do not get me wrong but once this is the reality, what is the purpose
> of secondary indexes in Cassandra table? For example, I have this
> POJO:
>
> @Builder
> @Getter
> @Setter
> @ToString
> @EqualsAndHashCode
> @AllArgsConstructor
> @NoArgsConstructor
> public class CustomCounterIgnite implements Serializable {
>
> @NotNull
> @QuerySqlField(index = true)
> private String username;
>
> @NotNull
> @QuerySqlField(index = true)
> private Integer counterone;
>
> @NotNull
> @QuerySqlField(index = true)
> private Integer countertwo;
> }
>
> It is said that in order to do selects/filtering on fields, you have to
> set index = true for those fields.
>
> But given how this SQL mechanism works (once the data is in memory, cached),
> why are indexes created in the database? What is the purpose if I
> can run SQL queries only on in-memory data? It should not matter whether
> indexes are created in Cassandra, because Ignite will never go to the
> database for queries.
>
> On initialisation, this is the output:
>
> create table if not exists "ignite"."custom_counter"
> (
>  "username" text,
>  "countertwo" int,
>  "counterone" int,
>  primary key (("username"))
> );
>
> Creating indexes for Cassandra table 'ignite.custom_counter'
> create index if not exists on "ignite"."custom_counter" ("countertwo");
> create index if not exists on "ignite"."custom_counter" ("counterone");
> Indexes for Cassandra table 'ignite.custom_counter' were successfully
> created
>
> Thanks!
>


Re: how to optimization this

2019-12-10 Thread Ilya Kasnacheev
Hello!

These threads are idle. Maybe you also have non-idle threads, but not these.

Yes, JVM reporting of thread state leaves a lot to be desired.

Regards,
-- 
Ilya Kasnacheev


Thu, 5 Dec 2019 at 09:38, ?青春狂-^:

> Hi,
> We are performance-testing a project. The project is built with an Ignite
> client configured via TcpDiscoverySpi, with only
> one Ignite node.
> We deployed two server machines: a web project with an Ignite client on
> server A, and Ignite 2.7.0 on server B.
>
> The operation is getting an object from Ignite.
> The TPS result is 4200; every get operation costs 4-5 ms.
> We monitored the project's JVM with jstack, found the
> most CPU-intensive pid, and
> saw many entries like this. How can we optimize this?
>
> "grid-nio-worker-tcp-comm-2-#26" #48 prio=5 os_prio=0
> tid=0x7f01ff0cd800 nid=0x43 runnable [0x7f01f9be2000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x00070185cb68> (a
> org.apache.ignite.internal.util.nio.SelectedSelectionKeySet)
> - locked <0x00070185cb88> (a
> java.util.Collections$UnmodifiableSet)
> - locked <0x00070185cb20> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2151)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:745)
>
> "grid-nio-worker-tcp-comm-1-#25" #47 prio=5 os_prio=0
> tid=0x7f01ff0cc800 nid=0x42 runnable [0x7f01f99e]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x00070181b9c8> (a
> org.apache.ignite.internal.util.nio.SelectedSelectionKeySet)
> - locked <0x00070181b9e8> (a
> java.util.Collections$UnmodifiableSet)
> - locked <0x00070181b980> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2151)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:745)
>
> "grid-nio-worker-tcp-comm-0-#24" #46 prio=5 os_prio=0
> tid=0x7f01ff0cc000 nid=0x41 runnable [0x7f01f9ae1000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x00070199d7c0> (a
> org.apache.ignite.internal.util.nio.SelectedSelectionKeySet)
> - locked <0x00070199f840> (a
> java.util.Collections$UnmodifiableSet)
> - locked <0x00070199d718> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2151)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:745)
>
>


Re: Adhoc Reporting use case (Ignite+Spark?)

2019-12-10 Thread Ilya Kasnacheev
Hello!

I think you will have to provide way more details about your case before
getting any feedback.

Regards,
-- 
Ilya Kasnacheev


чт, 5 дек. 2019 г. в 00:07, Deepak :

> Hello,
>
> I am trying out Ignite+Spark combination for my adhoc reporting use case.
> Any suggestion/help on similar architecture would be helpful. Does it make
> sense or am I going totally south?
>
> thanks in advance
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Understanding Write/Read IOPS during Ignite data load through Streamer and JDBC SQL

2019-12-10 Thread Ilya Kasnacheev
Hello!

When you are writing only, this means you can write to memory now and
persist it to disk later. This is fast.

When you are reading, this means that page replacement has started, you do
not have all data available in RAM and you have to read pages from disk
just to update them and put back.

This may be further compounded by under-inlined SQL indexes, so please check
that you don't have any. We have a warning about it in Ignite 2.7.
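
Other than adding RAM, the usual mitigation is to enlarge the data region so
hot pages stay in memory. A minimal Spring XML sketch (the 8 GB figure is an
assumption for illustration, not a recommendation for your workload):

```xml
<!-- Hypothetical fragment of an IgniteConfiguration bean;
     adjust maxSize to the RAM actually available on the host. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="persistenceEnabled" value="true"/>
                <!-- Raise the off-heap cap so page replacement starts later. -->
                <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```

For the index side, Ignite SQL also accepts an explicit inline size, e.g.
CREATE INDEX idx_name ON tbl (col) INLINE_SIZE 64, which reduces extra page
reads during index lookups.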

Regards,
-- 
Ilya Kasnacheev


чт, 5 дек. 2019 г. в 05:06, krkumar24061...@gmail.com <
krkumar24061...@gmail.com>:

> When we are writing the data to ignite thru streamer ( key, value) and
> Ignite
> JDBC ( into couple of SQL tables) we get very high throughput when read
> IOPS
> are low and Write IOPS are high and we get very low throughput when Reads
> and Writes are competing. So my question is: when I am writing to Ignite,
> which is a mostly write-intensive operation, why are the read IOPS going
> up? I understand you must be reading some data (metadata and indexes) but it
> gets to a point where it starts competing with writes and grabs almost
> half.
> I am attaching the graph which gives you some insights into the write/read
> pattern. I am curious what is causing the read IOPS go up and how can we
> avoid if possible ? Let me know what you guys think??
>
> In the diagram attached take a look blue ( write  ) and yellow ( read )
> IOPS
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2578/read-write-grapha.png>
>
>
>
> Thanx and Regards,
> KR Kumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Error when we shutdown the node during a rebalance

2019-12-10 Thread Ilya Kasnacheev
Hello!

What walMode do you happen to use? Can you share your nodes' configurations?

Regards,
-- 
Ilya Kasnacheev


ср, 4 дек. 2019 г. в 19:28, krkumar24061...@gmail.com <
krkumar24061...@gmail.com>:

> Hi - I am bumping into the following error frequently and causing the data
> loss whenever we shutdown the ignite node during data rebalance. I am
> shutting down the ignite in a safe mode i.e.
> Ignition.stop(false);
>
> Here is the stack trace :
>
>
>
> Caused by: class org.apache.ignite.IgniteCheckedException: WAL tail reached
> in archive directory, WAL segment file is corrupted.
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.validateTailReachedException(AbstractWalRecordsIterator.java:195)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.advance(AbstractWalRecordsIterator.java:172)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.onNext(AbstractWalRecordsIterator.java:123)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.onNext(AbstractWalRecordsIterator.java:52)
> at
>
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.nextX(GridCloseableIteratorAdapter.java:41)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$RestoreStateContext.next(GridCacheDatabaseSharedManager.java:4900)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$RestoreBinaryState.next(GridCacheDatabaseSharedManager.java:4977)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:2032)
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore(GridCacheDatabaseSharedManager.java:665)
>
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead(GridCacheDatabaseSharedManager.java:4730)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1048)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
> at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:66)
> at
>
> org.apache.ignite.IgniteSpringBean.afterSingletonsInstantiated(IgniteSpringBean.java:172)
> Caused by: class
>
> org.apache.ignite.internal.processors.cache.persistence.wal.WalSegmentTailReachedException:
> WAL segment tail reached. [idx=39758, isWorkDir=false,
>
> serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer@4525e9e8
> ]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.advanceRecord(AbstractWalRecordsIterator.java:254)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.advance(AbstractWalRecordsIterator.java:154)
> ... 22 more
> Caused by: class
>
> org.apache.ignite.internal.processors.cache.persistence.wal.WalSegmentTailReachedException:
> WAL segment tail reached. [ Expected next state:
> {Index=39758,Offset=33460408}, Actual state : {Index=0,Offset=0} ]
> recordType=PAGE_RECORD
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer.readPositionAndCheckPoint(RecordV2Serializer.java:260)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer.access$200(RecordV2Serializer.java:56)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer$2.readWithHeaders(RecordV2Serializer.java:116)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV1Serializer.readWithCrc(RecordV1Serializer.java:372)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer.readRecord(RecordV2Serializer.java:235)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.advanceRecord(AbstractWalRecordsIterator.java:243)
> Suppressed: class
>
> org.apache.ignite.internal.processors.cache.persistence.wal.crc.IgniteDataIntegrityViolationException:
> val: -500392315 writtenCrc: 0
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.io.FileInput$Crc32CheckingFileInput.close(FileInput.java:104)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.serialize

Re: IgniteOutOfMemoryException in LOCAL cache mode with persistence enabled

2019-12-10 Thread Ilya Kasnacheev
Hello!

10 MB is a very low-ball value for testing disk performance, considering how
Ignite's WAL/checkpoints are structured. As already mentioned, it does not
even work properly.

I recommend using a 2 GB value instead. Just load enough data so that you can
observe constant checkpoints.
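
A minimal sketch of such a region, sized at 2 GB, together with the
timeout-driven checkpoint tuning described in the quoted reply below
(property values are assumptions for illustration):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Trigger checkpoints by timeout (60 s here) rather than by the
             dirty-pages threshold, as suggested below. -->
        <property name="checkpointFrequency" value="60000"/>
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="persistenceEnabled" value="true"/>
                <property name="maxSize" value="#{2L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```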

Regards,
-- 
Ilya Kasnacheev


ср, 4 дек. 2019 г. в 03:16, Mitchell Rathbun (BLOOMBERG/ 731 LEX) <
mrathb...@bloomberg.net>:

> For the requested full ignite log, where would this be found if we are
> running using local mode? We are not explicitly running a separate ignite
> node, and our WorkDirectory does not seem to have any logs
>
> From: user@ignite.apache.org At: 12/03/19 19:00:18
> To: user@ignite.apache.org
> Subject: Re: IgniteOutOfMemoryException in LOCAL cache mode with
> persistence enabled
>
> For our configuration properties, our DataRegion initialSize and MaxSize
> was set to 11 MB and persistence was enabled. For DataStorage, our pageSize
> was set to 8192 instead of 4096. For Cache, write behind is disabled, on
> heap cache is disabled, and Atomicity Mode is Atomic
>
> From: user@ignite.apache.org At: 12/03/19 13:40:32
> To: user@ignite.apache.org
> Subject: Re: IgniteOutOfMemoryException in LOCAL cache mode with
> persistence enabled
>
> Hi Mitchell,
>
> Looks like it could be easily reproduced on low off-heap sizes, I tried
> with
> simple puts and got the same exception:
>
> class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Failed to
> find a page for eviction [segmentCapacity=1580, loaded=619,
> maxDirtyPages=465, dirtyPages=619, cpPages=0, pinnedInSegment=0,
> failedToPrepare=620]
> Out of memory in data region [name=Default_Region, initSize=10.0 MiB,
> maxSize=10.0 MiB, persistenceEnabled=true] Try the following:
> ^-- Increase maximum off-heap memory size
> (DataRegionConfiguration.maxSize)
> ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
> ^-- Enable eviction or expiration policies
>
> It looks like Ignite must issue a proper warning in this case and couple of
> issues must be filed against Ignite JIRA.
>
> Check out this article on persistent store available in Ignite confluence
> as
> well:
>
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+und
> er+the+hood#IgnitePersistentStore-underthehood-Checkpointing
>
> I've managed to make kind of similar example working with 20 Mb region with
> a bit of tuning, added following properties to
> org.apache.ignite.configuration.DataStorageConfiguration:
> /
> /
>
> The whole idea behind this is to trigger checkpoint on timeout rather than
> on too much dirty pages percentage threshold. The checkpoint page buffer
> size may not exceed data region size, which is 10 Mb, which might be
> overflown during checkpoint as well.
>
> I assume that checkpoint is never triggered in this case because of
> per-partition overhead: Ignite writes some meta per partition and it looks
> like that it is at least 1 meta page utilized for each which results in
> some
> amount of off-heap devoured by these meta pages. In the case with the
> lowest
> possible region size, this might consume more than 3 Mb for cache with 1k
> partitions and 70% dirty data pages threshold would never be reached.
>
> However, I found another issue when it is not possible to save meta page on
> checkpoint begin, this reproduces on 10 Mb data region with mentioned
> storage configuration options.
>
> Could you please describe the configuration if you have anything different
> from defaults (page size, wal mode, partitions count) and types of
> key/value
> that you use? And if it is possible, could you please attach full Ignite
> log
> from the node that has suffered from IOOM?
>
> As for the data region/cache, in reality you do also have cache groups here
> playing a role. But generally I would recommend you to go with one data
> region for all caches unless you have a particular reason to have multiple
> regions. As for example, you have some really important data in some cache
> that always needs to be available in durable off-heap memory, then you
> should have a separate data region for this cache as I'm not aware if there
> is possibility to disallow evicting pages for a specific cache.
>
> Cache groups documentation link:
> https://apacheignite.readme.io/docs/cache-groups
>
> By default (a cache doesn't have cacheGroup property defined) each cache
> has
> it's own cache group with the very same name, that lives in the data region
> specified or default data region. You might use them or not, depending on
> the goal you have: use when you want to reduce meta
> overhead/checkpoints/partition exchanges and share internal structures to
> save up space a bit, or do not use them if you want to speed up
> inserts/lookups by having it's own dedicated partition maps and B+ trees.
>
> Best regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
>


Re: How sql works for near cache

2019-12-10 Thread Stanislav Lukyanov
Not out of the box but you could use SQL or ScanQuery for that.

With SQL:
SELECT _key FROM mycache
(given that your cache is SQL-enabled).

With ScanQuery:
cache.query(new ScanQuery<K, V>(), Cache.Entry::getKey).getAll()
(substitute your actual key/value types for K and V)

Stan

On Wed, Dec 4, 2019 at 2:36 AM Hemambara  wrote:

> Is there any way we can get complete keyset or values from a near cache.
> Something like cache.keyset()
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How sql works for near cache

2019-12-10 Thread Ilya Kasnacheev
Hello!

You can use cache.localEntries(CachePeekMode.NEAR);

Regards,
-- 
Ilya Kasnacheev


ср, 4 дек. 2019 г. в 02:36, Hemambara :

> Is there any way we can get complete keyset or values from a near cache.
> Something like cache.keyset()
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Alter table issue

2019-12-10 Thread Ilya Kasnacheev
Hello!

I'm afraid the development of 3.0 have not started yet.

Please avoid re-creating columns that were previously dropped. Instead, use
new column name.

Regards,
-- 
Ilya Kasnacheev


вт, 3 дек. 2019 г. в 15:51, Shravya Nethula <
shravya.neth...@aline-consulting.com>:

> Hi Ivan,
>
> Thank you for the confirmation.
>
> In the link, I can see that the issue will be resolved in Ignite 3.0
> version.
> When will Ignite 3.0 be released? Is there any tentative date?
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
> --
> *From:* Ivan Pavlukhin 
> *Sent:* Tuesday, December 3, 2019 6:12 PM
> *To:* user@ignite.apache.org 
> *Subject:* Re: Alter table issue
>
> Hi Shravya,
>
> It is a know issue. You can find more details in ticket [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-6611
>
> вт, 3 дек. 2019 г. в 07:54, Shravya Nethula <
> shravya.neth...@aline-consulting.com>:
>
> Hi,
>
> I added a new column in an existing table using the following query:
> *ALTER TABLE person ADD COLUMN (order_id LONG)*
>
> Now, I am trying to change the datatype of the new column, so I tried
> executing the following queries:
>
> *ALTER TABLE person DROP COLUMN (order_id) *
>
> *ALTER TABLE person ADD COLUMN (order_id VARCHAR(64)) *
>
> Now when I am trying to insert a row with "varchar" value in "order_id"
> column, it is throwing the following error:
> *Error: Wrong value has been set
> [typeName=SQL_PUBLIC_PERSON_fc7e0bd5_d052_43c1_beaf_fb01b65f2f96,
> fieldName=ORDER_ID, fieldType=long, assignedValueType=String]*
>
> In this link: https://apacheignite-sql.readme.io/docs/alter-table
> The command does not remove actual data from the cluster which means that
> if the column 'name' is dropped, the value of the 'name' will still be
> stored in the cluster. This limitation is to be addressed in the next
> releases.
>
> I saw that there is a limitation in Alter Table. So in which release can
> this support be provided? What is the tentative date?
>
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


Re: Ignite native persistence

2019-12-10 Thread Ilya Kasnacheev
Hello!

When running your reproducer, I don't see any "method will have no effect"
messages; instead, I can see in the debugger that loadCache() is triggered and
then
com.gpc.rpm.IgniteCacheLoader.datagrid.store.spring.CacheSpringARInvoiceStore2#loadCache
is called.

Can you please describe the problem in more detail?

Regards,
-- 
Ilya Kasnacheev


пн, 2 дек. 2019 г. в 23:46, niamin :

> Tried with appending cache name as schema with no success. Attach is a link
> to my project:
> https://drive.google.com/open?id=1F53um8TeUK45U3SOW0_Vlj04S8DWyjKI
>
> Please take a look. I suspect it is something in my code.
>
> Thanks,
> Naushad
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache data not being stored on server nodes

2019-12-10 Thread Ilya Kasnacheev
Hello!

> When a put(key, value) call is made on the client would that put the data
in one of the servers automatically if the cache is configured in PARTITIONED
mode

Yes.

Regards,
-- 
Ilya Kasnacheev


пн, 2 дек. 2019 г. в 22:40, swattal :

> Hi Andrei,
>
> Thank you for the response. I do see expiration listener being called on
> the
> server node. What I don’t see is the value attached with Event. The key is
> present but the value is null. I would have expected the expired value to
> be
> present in the event as well.  I think i am puzzled over client behavior
> and
> am surely missing something. When a put(key, value) call is made on the
> client would that put the data in one of the servers automatically if the
> cache is configured in PARTITIONED mode or does the server have to listen for
> CacheEntryCreation event to add the key, value pair in memory?
>
> Thanks,
> Sumit
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite hanging (dead looping) at TCP discovery stage

2019-12-10 Thread Ilya Kasnacheev
Hello!

Is this the only node? I think it gets confused by connecting to an
unfamiliar address and finding itself there. It may also no longer see
its own address in its discovery responses.

I recommend setting igniteCfg.localHost with a "right" address, possibly
discoverySpi.localAddress too.
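
A sketch of pinning both addresses (192.168.2.87 is taken from your log as an
example; substitute the interface you actually want Ignite to use, and keep
your TcpDiscoverySharedFsIpFinder setup, omitted here):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Bind all Ignite components to one known-good interface. -->
    <property name="localHost" value="192.168.2.87"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- Optionally pin discovery to the same interface. -->
            <property name="localAddress" value="192.168.2.87"/>
            <!-- ipFinder configuration goes here as before. -->
        </bean>
    </property>
</bean>
```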

Regards,
-- 
Ilya Kasnacheev


ср, 4 дек. 2019 г. в 14:01, relax ken :

> Hi,
>
> We are running an ignite cluster by TcpDiscoverySharedFsIpFinder. It works
> well until we installed docker on the machine. After docker is installed,
> docker created a NAT
>
> Ethernet adapter vEthernet (DockerNAT):
>
>Connection-specific DNS Suffix  . :
>Link-local IPv6 Address . . . . . : fe80::516:7def:3395:bad0%22
>IPv4 Address. . . . . . . . . . . : 10.0.75.1
>Subnet Mask . . . . . . . . . . . : 255.255.255.0
>Default Gateway . . . . . . . . . :
>
> when we run ignite node again on this machine, it's dead looping at tcp
> discovery:
>
> INFO  [2019-12-04 10:48:13,583] org.apache.ignite.internal.IgniteKernal:
> Non-loopback local IPs: 10.0.75.1, 169.254.197.211, 169.254.219.50,
> 169.254.240.113, 169.254.254.204, 169.254.88.122, 192.168.2.87,
> 192.168.254.113, fe80:0:0:0:110d:3de2:3dee:f071%eth3, fe80:0
> :0:0:516:7def:3395:bad0%eth2, fe80:0:0:0:a054:3460:9c2b:c5d3%wifi1,
> fe80:0:0:0:ac2c:9472:ee11:db32%wifi2, fe80:0:0:0:b99c:8e2f:a213:b566%eth1,
> fe80:0:0:0:bcbe:513:3d05:fecc%wifi0, fe80:0:0:0:d85b:bbb6:21d8:3b0c%eth0,
> fe80:0:0:0:d901:489e:558a:587a%eth4
> INFO  [2019-12-04 10:48:13,583] org.apache.ignite.internal.IgniteKernal:
> Enabled local MACs: 00155D01E302, 00155D057947, 00FF22454205, 02004C4F4F50,
> 1E8BCA1FCA30, 2E8BCA1FCA30, 7085C25781CD, 7C8BCA1FCA30
> INFO  [2019-12-04 10:48:13,611]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Connection check
> threshold is calculated: 1
> INFO  [2019-12-04 10:48:13,613]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Successfully bound to
> TCP port [port=47500, localHost=0.0.0.0/0.0.0.0,
> locNodeId=801474c3-8b60-4adf-95d9-37d47baad031]
> INFO  [2019-12-04 10:48:13,626]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery accepted
> incoming connection [rmtAddr=/10.0.75.1, rmtPort=55732]
> INFO  [2019-12-04 10:48:13,632]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery spawning
> a new thread for connection [rmtAddr=/10.0.75.1, rmtPort=55732]
> INFO  [2019-12-04 10:48:13,632]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Started serving remote
> node connection [rmtAddr=/10.0.75.1:55732, rmtPort=55732]
> INFO  [2019-12-04 10:48:13,635]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Finished serving
> remote node connection [rmtAddr=/10.0.75.1:55732, rmtPort=55732
> WARN  [2019-12-04 10:48:15,582] org.apache.ignite.internal.GridDiagnostic:
> Default local host is unreachable. This may lead to delays on grid network
> operations. Check your OS network setting to correct it.
> INFO  [2019-12-04 10:48:15,640]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery accepted
> incoming connection [rmtAddr=/10.0.75.1, rmtPort=55733]
> INFO  [2019-12-04 10:48:15,640]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery spawning
> a new thread for connection [rmtAddr=/10.0.75.1, rmtPort=55733]
> INFO  [2019-12-04 10:48:15,640]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Started serving remote
> node connection [rmtAddr=/10.0.75.1:55733, rmtPort=55733]
> INFO  [2019-12-04 10:48:15,641]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Finished serving
> remote node connection [rmtAddr=/10.0.75.1:55733, rmtPort=55733
> INFO  [2019-12-04 10:48:17,644]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery accepted
> incoming connection [rmtAddr=/10.0.75.1, rmtPort=55734]
> INFO  [2019-12-04 10:48:17,644]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery spawning
> a new thread for connection [rmtAddr=/10.0.75.1, rmtPort=55734]
> INFO  [2019-12-04 10:48:17,644]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Started serving remote
> node connection [rmtAddr=/10.0.75.1:55734, rmtPort=55734]
> INFO  [2019-12-04 10:48:17,645]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Finished serving
> remote node connection [rmtAddr=/10.0.75.1:55734, rmtPort=55734
> INFO  [2019-12-04 10:48:19,648]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery accepted
> incoming connection [rmtAddr=/10.0.75.1, rmtPort=55735]
> INFO  [2019-12-04 10:48:19,648]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: TCP discovery spawning
> a new thread for connection [rmtAddr=/10.0.75.1, rmtPort=55735]
> INFO  [2019-12-04 10:48:19,648]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Started serving remote
> node connection [rmtAddr=/10.0.75.1:55735, rmtPort=55735]
> INFO  [2019-12-04 10:48:19,649]
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi: Finished serving
> remote node connection [rmtAddr=/10.0.75.1:55

[no subject]

2019-12-10 Thread contact
Hi



Re: Ignite Hanging on startup after OOME

2019-12-10 Thread akurbanov
Hi Mitchell,

What is the reproducer for the issue? Is my assumption correct: no
persistence, fill a small data region with data, crash, restart? Are you
hitting OutOfMemoryError or specifically IgniteOutOfMemoryException?

Are you able to capture JVM thread dump in this case?

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: can't use ignite get() method with error logs

2019-12-10 Thread akurbanov
Hello,

Could you please provide the full log for the case? The message seems to be
cut off. Ideally, provide a short reproducer for the thing that you are
trying to do.

The configuration itself is fine, there is nothing that may break the
cluster.

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Deadlock on concurrent calls to getAll and invokeAll on cache with read-through

2019-12-10 Thread peter108418
Thank you for looking into the issue, good to hear you were able to reproduce
it.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Mikael

Hi!

I guess you should forward that information to GridGain as web console 
is not part of Apache Ignite.


Mikael
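
In the meantime, a reverse proxy in front of the console can mitigate the
reported items. A minimal nginx sketch (host and Web Console port are
assumptions; proxy_cookie_flags requires nginx 1.19.3+, and the Secure flag
is only meaningful once the proxy terminates TLS):

```nginx
server {
    listen 80;

    # Reject TRACE/TRACK to mitigate cross-site tracing.
    if ($request_method ~ ^(TRACE|TRACK)$) { return 405; }

    location / {
        proxy_pass http://127.0.0.1:3000;  # assumed Web Console port
        # Mark session cookies Secure/HttpOnly on the way out.
        proxy_cookie_flags ~ secure httponly;
    }
}
```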

Den 2019-12-10 kl. 13:10, skrev Prasad Bhalerao:

Hi,

We found 3 vulnerabilities while scanning Grid Gain Web console 
application.


We are using HTTP and not HTTPS due to some issues on our side. 
Although vulnerabilities are of lower severity, but thought of 
reporting it here.


1) HTTP TRACE / TRACK Methods Enabled. (CVE-2004-2320, CVE-2010-0386, CVE-2003-1567)

2) Session Cookie Does Not Contain the "Secure" Attribute.
3) Web Server HTTP Trace/Track Method Support Cross-Site Tracing
Vulnerability. (CVE-2004-2320, CVE-2007-3008)


Can these be fixed?

Thanks,
Prasad


On Tue, Dec 10, 2019 at 4:39 PM Denis Magda wrote:


It's free software without limitations. Just download and use it.

-
Denis


On Tue, Dec 10, 2019 at 1:21 PM Prasad Bhalerao
<prasadbhalerao1...@gmail.com> wrote:

Hi,

Can apache ignite users use it for free in their production
environments?
What license does it fall under?

Thanks,
Prasad

On Fri, Oct 4, 2019 at 5:33 AM Denis Magda <dma...@apache.org> wrote:

Igniters,

There is good news. GridGain made its distribution of Web
Console
completely free. It goes with advanced monitoring and
management dashboard
and other handy screens. More details are here:

https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite

-
Denis



Re: Manage offset of KafkaStreamer

2019-12-10 Thread Andrei Aleksandrov

Hi,

Generally you can't do it using the Ignite KafkaStreamer API, because it is
not flexible enough to let you configure the KafkaConsumer used inside.


The underlying KafkaConsumer supports it via
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
but KafkaStreamer does not expose it. You can create a JIRA ticket for it.


However, you can try to change the offsets using Kafka scripts, as
described here:


https://stackoverflow.com/questions/29791268/how-to-change-start-offset-for-topic
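
For example, with Kafka 0.11+ the bundled tool can rewind a consumer group to
a given offset before the streamer starts (the group name, topic, and offset
below are placeholders; the group must be inactive while resetting):

```shell
# Hypothetical group/topic names; run from the Kafka installation directory.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group ignite-streamer-group --topic my-topic \
  --reset-offsets --to-offset 12345 --execute
```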

BR,
Andrei

12/3/2019 1:07 PM, ashishb888 пишет:

I want to start from a specific offset of the Kafka partition, it is possible
with KafkaStreamer?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Prasad Bhalerao
Hi,

We found 3 vulnerabilities while scanning Grid Gain Web console application.

We are using HTTP and not HTTPS due to some issues on our side. Although
vulnerabilities are of lower severity, but thought of reporting it here.

1) HTTP TRACE / TRACK Methods Enabled. (CVE-2004-2320, CVE-2010-0386, CVE-2003-1567)
2) Session Cookie Does Not Contain the "Secure" Attribute.
3) Web Server HTTP Trace/Track Method Support Cross-Site Tracing
Vulnerability. (CVE-2004-2320, CVE-2007-3008)

Can these be fixed?

Thanks,
Prasad


On Tue, Dec 10, 2019 at 4:39 PM Denis Magda  wrote:

> It's free software without limitations. Just download and use it.
>
> -
> Denis
>
>
> On Tue, Dec 10, 2019 at 1:21 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>> Hi,
>>
>> Can apache ignite users use it for free in their production environments?
>> What license does it fall under?
>>
>> Thanks,
>> Prasad
>>
>> On Fri, Oct 4, 2019 at 5:33 AM Denis Magda  wrote:
>>
>>> Igniters,
>>>
>>> There is good news. GridGain made its distribution of Web Console
>>> completely free. It goes with advanced monitoring and management
>>> dashboard
>>> and other handy screens. More details are here:
>>>
>>> https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite
>>>
>>> -
>>> Denis
>>>
>>


Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Denis Magda
It's free software without limitations. Just download and use it.

-
Denis


On Tue, Dec 10, 2019 at 1:21 PM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> Can apache ignite users use it for free in their production environments?
> What license does it fall under?
>
> Thanks,
> Prasad
>
> On Fri, Oct 4, 2019 at 5:33 AM Denis Magda  wrote:
>
>> Igniters,
>>
>> There is good news. GridGain made its distribution of Web Console
>> completely free. It goes with advanced monitoring and management dashboard
>> and other handy screens. More details are here:
>>
>> https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite
>>
>> -
>> Denis
>>
>


ValueExtractor support in Apache Ignite

2019-12-10 Thread Rastogi, Arjit (CWM-NR)
Hi All,

What is the Oracle Coherence ValueExtractor equivalent in Apache Ignite that
can enable us to create dynamic indexes by extracting elements of a Map/List
inside a POJO?

Use case:
Example: We want to create a cache of the following class:
class Employee {
    long employeeId;
    String employeeName;
    Map<String, String> identifierMap;
}

identifierMap example = [{"id1" : "123456"},{"id2" : "45678"}]

We want to put Employee objects in a cache and create an index on all entries
in identifierMap, such as "id1", "id2", etc.
We have achieved this using ValueExtractor in Oracle Coherence.

Thanks & Regards,
Arjit Rastogi

__

This email is intended only for the use of the individual(s) to whom it is 
addressed and may be privileged and confidential.
Unauthorised use or disclosure is prohibited. If you receive this e-mail in 
error, please advise immediately
and delete the original message. This message may have been altered without 
your or our knowledge
and the sender does not accept any liability for any errors or omissions in the 
message.

Emails are monitored by supervisory personnel in jurisdictions where monitoring 
is permitted. 
Such communications are retained and may be produced to regulatory authorities 
or others with legal rights to the information.


Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Prasad Bhalerao
Hi,

Can apache ignite users use it for free in their production environments?
What license does it fall under?

Thanks,
Prasad

On Fri, Oct 4, 2019 at 5:33 AM Denis Magda  wrote:

> Igniters,
>
> There is good news. GridGain made its distribution of Web Console
> completely free. It goes with advanced monitoring and management dashboard
> and other handy screens. More details are here:
>
> https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite
>
> -
> Denis
>


can't use ignite get() method with error logs

2019-12-10 Thread ???????-^
Hi:
when I use the ignite get() method in my Java project to connect to my
Ignite 2.7.6 on Linux, it prints logs and I can't use get(). The logs look
like this:



2019-12-10 16:58:03.402  WARN 1 --- [sys-#969] o.a.i.i.p.c.d.near.GridNearTxLocal   : The transaction was forcibly rolled back [version=GridCacheVersion [topVer=187426084, order=1576292371280, nodeOrder=3], concurrency=PESSIMISTIC, isolation=SERIALIZABLE, timeout=5000, duration=5006, label=null]
 



my ignite config like this: