DDL should work without prefixing the schema

2018-03-15 Thread Naveen
Hi

I am using ignite 2.3

I have created a table through SQL DDL as shown below:

CREATE TABLE MAP_ASSOCIATED (PARTY_ID VARCHAR, ASSOCIATED_LIST VARCHAR,
UPDATEDDATETIME TIMESTAMP, UPDATEDBY VARCHAR, PRIMARY KEY (PARTY_ID))WITH
"template=partitioned,backups=1,cache_name=MAP_ASSOCIATED,
value_type=com.ril.edif.model.MAP_ASSOCIATED";

The table schema is PUBLIC, so I can insert a record with the SQL below without
specifying the schema:

insert into MAP_ASSOCIATED (PARTY_ID, ASSOCIATED_LIST, UPDATEDDATETIME,
UPDATEDBY) values ('1','1',current_timestamp(),'1');

However, I have also created another table with the Java API, and it got
created with its own schema,
so I always have to prefix the table with its schema when running any DML:

INSERT INTO "MapDummyCache".MAP_DUMMY (ENTITY_ID, MAPPING_ID_LIST,
RELATIONSHIP, UPDATEDDATETIME, UPDATEDBY, SEQUENCE_NO, TUPLE_COUNT) values
('6','maplist6','rel5',sysdate,'upd5','5','5')


I tried to change the schema to PUBLIC via setSchemaName, but it still does not
seem to be working.

I would like the same DML query to work, without a schema prefix, for tables
created through SQL DDL and through the Java API.
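
For reference, a minimal sketch of what I am trying on the Java API side
(assuming CacheConfiguration.setSqlSchema is the right method to control the
schema; setSchemaName was my guess and may be wrong, and the MAP_DUMMY fields
below are only partly spelled out):

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class CreateCacheInPublicSchema {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Hypothetical cache backing the MAP_DUMMY table created via the Java API.
        CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("MapDummyCache");

        // Without this, the table lives in the "MapDummyCache" schema and every
        // DML statement needs the "MapDummyCache". prefix.
        cfg.setSqlSchema("PUBLIC");

        // A query entity makes the cache visible to SQL as a table.
        QueryEntity entity = new QueryEntity(String.class.getName(), "MAP_DUMMY");
        entity.setTableName("MAP_DUMMY");
        entity.addQueryField("ENTITY_ID", String.class.getName(), null);
        entity.addQueryField("MAPPING_ID_LIST", String.class.getName(), null);
        cfg.setQueryEntities(Collections.singletonList(entity));

        ignite.getOrCreateCache(cfg);
    }
}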

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re:Re:RE: Re: Node can not join cluster

2018-03-15 Thread Lucky
Well,  I've  solved this problem.
Thanks a lot.






Re: Topic based messaging

2018-03-15 Thread Dmitry Pavlov
Yes, you can have backup copies of data in caches.

At the same time you can have several message listeners set up on several
nodes.

As for the topic message instance itself, there is only one copy of a message
in the cluster, so a particular message may be lost in some cases of node
failure.

Is this what you need?
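
For reference, a minimal sketch of the topic-based messaging API as I
understand it (the topic name and payload here are made up for illustration):

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteBiPredicate;

public class TopicMessagingSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Listener registered on this node; returning true keeps the listener alive.
        ignite.message().localListen("myTopic", (IgniteBiPredicate<UUID, String>)(nodeId, msg) -> {
            System.out.println("Got message from " + nodeId + ": " + msg);
            return true;
        });

        // Send a message to the topic. The message itself exists in a single copy
        // and is not persisted, which is why it can be lost on node failure.
        ignite.message().send("myTopic", "hello");
    }
}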

Fri, Mar 16, 2018 at 8:57, piyush :

> Thanks.
> Can it have extra backup copies in Ignite Cluster (for HA or fault
> tolerance) ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Topic based messaging

2018-03-15 Thread piyush
Thanks.
Can it have extra backup copies in Ignite Cluster (for HA or fault
tolerance) ?  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Fwd: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-15 Thread Denis Magda
Igniters,

Please use your social media accounts to promote the release we were
working on for a while:
https://twitter.com/denismagda/status/974438862465351681

It will be great if you can translate the article into your native language
and publish it in your country of origin.

--
Denis

-- Forwarded message --
From: Denis Magda 
Date: Thu, Mar 15, 2018 at 5:09 PM
Subject: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and
Spark DataFrames
To: annou...@apache.org, pr...@apache.org, d...@ignite.apache.org


Usually, the Ignite community rolls out a new version once every 3 months, but
we had to make an exception for Apache Ignite 2.4, which took five months in
total.

We could easily blame Thanksgiving, Christmas and New Year holidays for the
delay and would be forgiven, but, in fact, we were forging the release you
can't just pass by.

Let's dive in and look for a big fish:
https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings

The full list of the changes can be found here:
https://ignite.apache.org/releases/2.4.0/release_notes.html

Ready to try? Then navigate to our downloads page:
https://ignite.apache.org/download.cgi

--
Denis


Understanding SQL CREATE TABLE 'WITH' Parameters?

2018-03-15 Thread joseheitor
Three questions regarding relationship of cached data to persisted data:

1) Do the settings of the parameters in the 'WITH' clause apply only to the
cached data, or also to the persisted data?

2) Can the parameters set in a CREATE TABLE 'WITH' clause instead be set
externally and globally in a config file (TEMPLATE, BACKUPS, etc.)? See the
sketch after this list.

3) In TEMPLATE=PARTITIONED: Is the persistent data stored to disk on a
particular node the same as that allocated to the node's cache? Or is all
data stored to disk on all nodes in the cluster, and only the cached data
partitioned between nodes?
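
On question 2, a minimal sketch of what I believe is the template mechanism: a
cache configuration registered on the node can be referenced by name from the
WITH clause, so settings such as backups live in code or config rather than in
every CREATE TABLE (the template name "myTemplate" is made up for
illustration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class RegisterSqlTemplate {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Register a cache template named "myTemplate" with the desired defaults.
        CacheConfiguration<Object, Object> template = new CacheConfiguration<>("myTemplate");
        template.setCacheMode(CacheMode.PARTITIONED);
        template.setBackups(1);
        ignite.addCacheConfiguration(template);

        // A CREATE TABLE statement can then refer to it:
        //   CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR)
        //   WITH "template=myTemplate";
    }
}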



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Topic based messaging

2018-03-15 Thread Dmitry Pavlov
Hi, I've used this feature as well; I've used it to build a request-processing
system with decoupled components.

Topic-based messaging does not support persistence, so if persistent messages
are required then it is better to use Ignite with Kafka.

Please see also related discussion at SO:
https://stackoverflow.com/questions/47022706/apache-ignite-vs-apache-kafka



Thu, Mar 15, 2018 at 20:14, Raymond Wilson :

> Yes, I use it. It works well.
>
> Sent from my iPhone
>
> > On 16/03/2018, at 2:56 AM, piyush  wrote:
> >
> > Has anybody used this feature ?
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: QueryEntity and inheritance

2018-03-15 Thread Ralph Benchetrit
Up

> 
> Hi,
> 
> I have some objects that I want to put in an Ignite cache and also want to
> query them using JDBC SQL.
> I have used QueryEntity to define the mapping between object model and
> relational model.
> 
> My problem is :
> 
> My objects use inheritance.
> The "Asset" object is the parent, and the "OTC" and "Security" objects inherit
> from Asset.
> 
> Asset
> |-OTC
> |-Security
> 
> I would like to have an SQL model that is represented with 3 tables:
> an ASSET table which will contain all attributes from the Asset object,
> an OTC table which will contain all specialized attributes from the OTC object,
> a SECURITY table which will contain all specialized attributes from the Security
> object.
> 
> And of course all data from the 3 tables has to be mapped to the same cache (I
> do not want to duplicate data in 2 caches).
> 
> Here is what I have done:
> 
> assetQueryEntity, OTCQueryEntity, SecurityQueryEntity are QueryEntity
> definition for each of the 3 tables :
> 
> LinkedList<QueryEntity> qeList = new LinkedList<>();
> 
> qeList .add(assetQueryEntity);
> qeList .add(OTCQueryEntity);
> qeList .add(SecurityQueryEntity);
> 
> CacheConfiguration cacheCfg = new CacheConfiguration<>("AssetCache");
> cacheCfg.setQueryEntities(qeList);
> IgniteCache cache = ignite.getOrCreateCache(cacheCfg);
> 
> 
> The result is that I can see 3 tables with the right definitions.
> I can select data from the OTC and SECURITY tables,
> but the ASSET table is empty.
> 
> Is there a way to do that?
> 
> Thanks for your help
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Kubernetes - Access Ignite Cluster Externally

2018-03-15 Thread Ryan Samo
Hey guys and gals!
I have created a development environment for Ignite 2.3 Native Persistence
on Kubernetes 1.9.3 and have it up and running successfully. I then
attempted to activate one of my clusters via a Java client call and
discovered that the TcpDiscoveryKubernetesIpFinder doesn't support the
"addresses" property, receiving the following error:

*Caused by: org.springframework.beans.NotWritablePropertyException: Invalid
property 'addresses' of bean class
[org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder]:
Bean property 'addresses' is not writable or has an invalid setter method.
Does the parameter type of the setter match the return type of the getter?*

It turns out that in the documentation for the
TcpDiscoveryKubernetesIpFinder class, there is a statement that says:

*"An application that uses Ignite client nodes as a gateway to the cluster
is required to be containerized as well. Applications and Ignite nodes
running outside of Kubernetes will not be able to reach the containerized
counterparts."*

I get that in most cases it's best to run all of the components from within
Kubernetes for security purposes, but our use case is to create an Ignite
cluster and then hit it from external clients. In digging through the
TcpDiscoveryKubernetesIpFinder code, I see that it inherits
TcpDiscoveryIpFinder which has the methods we need to specify Ignite server
addresses. With that being said, my questions are...

1.) Is there any development going on around the
TcpDiscoveryKubernetesIpFinder class to possibly add external client
connections outside of Kubernetes?

2.) If I decided to build my own version of the
TcpDiscoveryKubernetesIpFinder class that allows for external connections,
would that be broken in upcoming releases?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: AffinityKey Configuration in order to achieve multiple joins across caches

2018-03-15 Thread StartCoding
Hi Mike,

Thanks for your quick response. 

I am afraid denormalizing will not work for me because I have just given a
simple example. There are 16 tables which in that case would need to be joined
into a single entity. Replication was an approach I thought about, and we have
already considered the smaller tables for that. But there are 7 huge tables
which consist of 6M+ records each and would degrade performance if kept in
replicated caches.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Topic based messaging

2018-03-15 Thread Raymond Wilson
Yes, I use it. It works well. 

Sent from my iPhone

> On 16/03/2018, at 2:56 AM, piyush  wrote:
> 
> Has anybody used this feature ?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: AffinityKey Configuration in order to achieve multiple joins across caches

2018-03-15 Thread Mikhail
Hi, 

You can have only one affinity key in a class, so in your case you need to
choose the smallest table and make it replicated to avoid distributed joins.
Another option is to denormalize your data, for example to store
Class B and Class C in one class as one row.
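
For illustration, a minimal sketch of one affinity key class in this layout
(class and field names follow the example in this thread; the
@AffinityKeyMapped field is what drives colocation):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Key for entries of Class B: equality is on (field1, field5), but the
// partition is chosen by field1 only, so B rows land on the same node as the
// A row with the same field1. B cannot also be colocated with C via field5.
public class BKey {
    @AffinityKeyMapped
    private String field1;

    private String field5;

    public BKey(String field1, String field5) {
        this.field1 = field1;
        this.field5 = field5;
    }

    // hashCode()/equals() over both fields are required for a cache key.
    @Override public int hashCode() {
        return 31 * field1.hashCode() + field5.hashCode();
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof BKey))
            return false;
        BKey other = (BKey)o;
        return field1.equals(other.field1) && field5.equals(other.field5);
    }
}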

Thanks,
Mike.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Affinity Key column to be always part of the Primary Key

2018-03-15 Thread Mikhail
Hi Naveen


> If I do not have the affinity key column as part of the primary key, it does
> not allow me to create the table itself.

Could you please explain how it doesn't allow you to create the table? Are
there any exception/error messages?

Thanks,
Mike.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2018-03-15 Thread Dmitry Pavlov
Hi Alexey,

It may be a serious issue. Could you recommend an expert here who could pick
this up?

Sincerely,
Dmitriy Pavlov

Thu, Mar 15, 2018 at 19:25, Arseny Kovalchuk :

> Hi, guys.
>
> I've got a reproducer for a problem which is generally reported as "Caused
> by: java.lang.IllegalStateException: Failed to get page IO instance (page
> content is corrupted)". Actually it reproduces the resulting failure; I have no
> idea how the data got corrupted, but the cluster node doesn't want to
> start with this data.
>
> We got the issue again when some of the server nodes were restarted several
> times by Kubernetes. I suspect that the data got corrupted during such
> restarts. But the main behavior that we really want is that
> the cluster DOESN'T HANG during the next restart even if the data is corrupted!
> Anyway, there is no tool that can help correct such data, and as a
> result we wipe all data manually to start the cluster. So, having warnings
> about corrupted data in the logs and a cluster that just keeps working is the
> expected behavior.
>
> How to reproduce:
> 1. Download the data from here
> https://storage.googleapis.com/pub-data-0/data5.tar.gz (~200Mb)
> 2. Download and import Gradle project
> https://storage.googleapis.com/pub-data-0/project.tar.gz (~100Kb)
> 3. Unpack the data to the home folder, say /home/user1. You should get the
> path like */home/user1/data5*. Inside data5 you should have binary_meta,
> db, marshaller.
> 4. Open *src/main/resources/data-test.xml* and put the absolute path of
> unpacked data into *workDirectory* property of *igniteCfg5* bean. In this
> example it should be */home/user1/data5.* Do not edit consistentId!
> The consistentId is ignite-instance-5, so the real data is in
> the data5/db/ignite_instance_5 folder
> 5. Start application from ru.synesis.kipod.DataTestBootApp
> 6. Enjoy
>
> Hope it will help.
>
>
> ​
> Arseny Kovalchuk
>
> Senior Software Engineer at Synesis
> skype: arseny.kovalchuk
> mobile: +375 (29) 666-16-16 <+375%2029%20666-16-16>
> ​LinkedIn Profile ​
>
> On 26 December 2017 at 21:15, Denis Magda  wrote:
>
>> Cross-posting to the dev list.
>>
>> Ignite persistence maintainers please chime in.
>>
>> —
>> Denis
>>
> On Dec 26, 2017, at 2:17 AM, Arseny Kovalchuk 
>> wrote:
>>
>> Hi guys.
>>
>> Another issue when using Ignite 2.3 with native persistence enabled. See
>> details below.
>>
>> We deploy Ignite along with our services in Kubernetes (v 1.8) on
>> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
>> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>>
>> We put about 230 events/second into Ignite, 70% of events are ~200KB in
>> size and 30% are 5000KB. Smaller events have indexed fields and we query
>> them via SQL.
>>
>> The cluster is activated from a client node which also streams events
>> into Ignite from Kafka. We use custom implementation of streamer which uses
>> cache.putAll() API.
>>
>> We started cluster from scratch without any persistent data. After a
>> while we got corrupted data with the error message.
>>
>> [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
>> - Partition eviction failed, this can cause grid hang.
>> class org.apache.ignite.IgniteException: Runtime failure on search row:
>> Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
>> ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419,
>> face_last_name=null, face_list_id=null, channel=171, source=,
>> face_similarity=null, license_plate_number=null, descriptors=null,
>> cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854,
>> stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854,
>> persistent=false, face_first_name=null, license_plate_first_name=null,
>> face_full_name=null, level=0, module=Kpx.Synesis.Outdoor,
>> end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0,
>> human, 0, truck, 0, start_time=1513946618964, processed=false,
>> kafka_offset=111259, license_plate_last_name=null, armed=false,
>> license_plate_country=null, topic=MovingObject, comment=,
>> expiration=1514033024000, original_id=null, license_plate_lists=null], ver:
>> GridCacheVersion [topVer=125430590, order=1513955001926, nodeOrder=3] ][
>> 3008806055072854, MovingObject, Kpx.Synesis.Outdoor, 0, , 1513946618964,
>> 1513946624379, 171, 171, FALSE, FALSE, , FALSE, FALSE, 0, 0, 111259,
>> 1514033024000, (vehicle, 0, human, 0, truck, 0), null, null, null, null,
>> null, null, null, null, null, null, null, null ]
>> at org.apache.ignite.internal.pro
>> cessors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
>> at org.apache.ignite.internal.pro
>> cessors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
>> at org.apache.ignite.internal.pro
>> cessors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:216)
>> at 

Re: And again... Failed to get page IO instance (page content is corrupted)

2018-03-15 Thread Arseny Kovalchuk
Hi guys.

I've got a reproducer that may be related. See the comments at
http://apache-ignite-users.70518.x6.nabble.com/Partition-eviction-failed-this-can-cause-grid-hang-Caused-by-java-lang-IllegalStateException-Failed--tp19122p20524.html

Sergey Sergeev, just for reference, what kind of file system do you use
with Ignite's persistence?

​
Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
​LinkedIn Profile ​

On 9 March 2018 at 12:31, Sergey Sergeev  wrote:

> Hi Mikhail,
>
> Unfortunately, the problem has repeated itself on ignite-core-2.3.3
>
> 27.02.18 00:27:55 ERROR  GridCacheIoManager - Failed to process message
> [senderId=8f99c887-cd4b-4c38-a649-ca430040d535, messageType=class
> o.a.i.i.processors.cache.distributed.dht.atomic.GridNearAtom
> icUpdateResponse]
> org.apache.ignite.IgniteException: Runtime failure on bounds:
> [lower=null, upper=PendingRow []]
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .BPlusTree.find(BPlusTree.java:954) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .BPlusTree.find(BPlusTree.java:933) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffhe
> apManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:979)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.
> *GridCacheTtlManager.expire*(GridCacheTtlManager.java:197)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheUtils.
> unwindEvicts(GridCacheUtils.java:833) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.onMessageProcessed(GridCacheIoManager.java:1099)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.processMessage(GridCacheIoManager.java:1072)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.onMessage0(GridCacheIoManager.java:579) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.handleMessage(GridCacheIoManager.java:378)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.handleMessage(GridCacheIoManager.java:304)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.access$100(GridCacheIoManager.java:99) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er$1.onMessage(GridCacheIoManager.java:293) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.managers.communication.GridIoMana
> ger.invokeListener(GridIoManager.java:1555) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.managers.communication.GridIoMana
> ger.processRegularMessage0(GridIoManager.java:1183)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.managers.communication.GridIoMana
> ger.access$4200(GridIoManager.java:126) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.managers.communication.GridIoMana
> ger$9.run(GridIoManager.java:1090) ~[ignite-core-2.3.3.jar:2.3.3]
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
> Caused by: java.lang.IllegalStateException: Failed to get page IO
> instance (page content is corrupted)
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .io.IOVersions.forVersion(IOVersions.java:83)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .io.IOVersions.forPage(IOVersions.java:95) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.persistence.Cach
> eDataRowAdapter.initFromLink(CacheDataRowAdapter.java:148)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.persistence.Cach
> eDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at 
> org.apache.ignite.internal.processors.cache.tree.PendingRow.initKey(PendingRow.java:72)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.tree.PendingEntr
> iesTree.getRow(PendingEntriesTree.java:118) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.tree.PendingEntr
> iesTree.getRow(PendingEntriesTree.java:31) ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .BPlusTree$ForwardCursor.fillFromBuffer(BPlusTree.java:4539)
> ~[ignite-core-2.3.3.jar:2.3.3]
> at org.apache.ignite.internal.

Re: Large durable caches

2018-03-15 Thread Larry
Hi Alexey.

Were there any findings?  Any updates would be helpful.

Thanks,
-Larry

On Thu, Mar 8, 2018 at 3:48 PM, Dmitriy Setrakyan 
wrote:

> Hi Lawrence,
>
> I believe Alexey Goncharuk was working on improving this scenario. Alexey,
> can you provide some of your findings here?
>
> D.
>
> -- Forwarded message --
> From: lawrencefinn 
> Date: Mon, Mar 5, 2018 at 1:54 PM
> Subject: Re: Large durable caches
> To: user@ignite.apache.org
>
>
> BUMP. Can anyone verify this? If Ignite cannot scale in this manner that is
> fine, I'd just want to know if what I am seeing makes sense.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2018-03-15 Thread Arseny Kovalchuk
Hi, guys.

I've got a reproducer for a problem which is generally reported as "Caused
by: java.lang.IllegalStateException: Failed to get page IO instance (page
content is corrupted)". Actually it reproduces the result. I don't have an
idea how the data has been corrupted, but the cluster node doesn't want to
start with this data.

We got the issue again when some of server nodes were restarted several
times by kubernetes. I suspect that the data got corrupted during such
restarts. But the main functionality that we really desire to have, that
the cluster DOESN'T HANG during next restart even if the data is corrupted!
Anyway, there is no a tool that can help to correct such data, and as a
result we wipe all data manually to start the cluster. So, having warnings
about corrupted data in logs and just working cluster is the expected
behavior.

How to reproduce:
1. Download the data from here
https://storage.googleapis.com/pub-data-0/data5.tar.gz (~200Mb)
2. Download and import Gradle project
https://storage.googleapis.com/pub-data-0/project.tar.gz (~100Kb)
3. Unpack the data to the home folder, say /home/user1. You should get the
path like */home/user1/data5*. Inside data5 you should have binary_meta,
db, marshaller.
4. Open *src/main/resources/data-test.xml* and put the absolute path of
unpacked data into *workDirectory* property of *igniteCfg5* bean. In this
example it should be */home/user1/data5.* Do not edit consistentId!
The consistentId is ignite-instance-5, so the real data is in
the data5/db/ignite_instance_5 folder
5. Start application from ru.synesis.kipod.DataTestBootApp
6. Enjoy

Hope it will help.


​
Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
​LinkedIn Profile ​

On 26 December 2017 at 21:15, Denis Magda  wrote:

> Cross-posting to the dev list.
>
> Ignite persistence maintainers please chime in.
>
> —
> Denis
>
> On Dec 26, 2017, at 2:17 AM, Arseny Kovalchuk 
> wrote:
>
> Hi guys.
>
> Another issue when using Ignite 2.3 with native persistence enabled. See
> details below.
>
> We deploy Ignite along with our services in Kubernetes (v 1.8) on
> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>
> We put about 230 events/second into Ignite, 70% of events are ~200KB in
> size and 30% are 5000KB. Smaller events have indexed fields and we query
> them via SQL.
>
> The cluster is activated from a client node which also streams events into
> Ignite from Kafka. We use custom implementation of streamer which uses
> cache.putAll() API.
>
> We started cluster from scratch without any persistent data. After a while
> we got corrupted data with the error message.
>
> [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
> - Partition eviction failed, this can cause grid hang.
> class org.apache.ignite.IgniteException: Runtime failure on search row:
> Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
> ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419,
> face_last_name=null, face_list_id=null, channel=171, source=,
> face_similarity=null, license_plate_number=null, descriptors=null,
> cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854,
> stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854,
> persistent=false, face_first_name=null, license_plate_first_name=null,
> face_full_name=null, level=0, module=Kpx.Synesis.Outdoor,
> end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0,
> human, 0, truck, 0, start_time=1513946618964, processed=false,
> kafka_offset=111259, license_plate_last_name=null, armed=false,
> license_plate_country=null, topic=MovingObject, comment=,
> expiration=1514033024000, original_id=null, license_plate_lists=null], ver:
> GridCacheVersion [topVer=125430590, order=1513955001926, nodeOrder=3] ][
> 3008806055072854, MovingObject, Kpx.Synesis.Outdoor, 0, , 1513946618964,
> 1513946624379, 171, 171, FALSE, FALSE, , FALSE, FALSE, 0, 0, 111259,
> 1514033024000, (vehicle, 0, human, 0, truck, 0), null, null, null, null,
> null, null, null, null, null, null, null, null ]
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .BPlusTree.doRemove(BPlusTree.java:1787)
> at org.apache.ignite.internal.processors.cache.persistence.tree
> .BPlusTree.remove(BPlusTree.java:1578)
> at org.apache.ignite.internal.processors.query.h2.database.H2Tr
> eeIndex.remove(H2TreeIndex.java:216)
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Tab
> le.doUpdate(GridH2Table.java:496)
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Tab
> le.update(GridH2Table.java:423)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Index
> ing.remove(IgniteH2Indexing.java:580)
> at org.apache.ignite.interna

AffinityKey Configuration in order to achieve multiple joins across caches

2018-03-15 Thread StartCoding
Hi Team,


Below are my Java templates whose objects I want to store in the Ignite
caches:

Class A
{

  field1
  field2
  field3


}

Class B
{

  field1
  field4
  field5


}

Class C

{

  field5
  field6
  field7


}





I want to colocate data in such a way that all instances of Class A and
Class B stay together on the partition node where ClassAObj.field1 =
ClassBObj.field1.
Similarly, I want to make sure that all instances of Class B and Class C
stay together on the node where ClassBObj.field5 = ClassCObj.field5.
How can we achieve this using cache key configuration?


I tried like this

Defined Key classes

Class AKey

{

  field1
 


}
Class BKey

{

  field1
  field5
 


}

Class CKey

{

  field5
 


}


used the below property in the IgniteConfiguration (the XML snippet was
stripped from the archived message).

Is this the way I should map it, or is there some other way to do it? Any help
with this would be appreciated.
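
Since the XML did not survive, here is a minimal sketch of the kind of
configuration I mean, using CacheKeyConfiguration to mark the affinity field of
each key class (the package and class names are illustrative, not my exact
config):

import org.apache.ignite.cache.CacheKeyConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AffinityKeyConfigSketch {
    public static IgniteConfiguration config() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // For each key type, name the field whose value decides the partition.
        // AKey and BKey are colocated via field1; CKey uses field5, so B cannot
        // be colocated with both A and C at the same time.
        cfg.setCacheKeyConfiguration(
            new CacheKeyConfiguration("com.example.AKey", "field1"),
            new CacheKeyConfiguration("com.example.BKey", "field1"),
            new CacheKeyConfiguration("com.example.CKey", "field5"));

        return cfg;
    }
}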

Thanks
Saji



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Expiry Inconsistency with Native Persistence

2018-03-15 Thread Subash Chaturanga
Here’s my code.

public static void main(String[] args) throws InterruptedException {

    String region = "4GRegion";
    String cacheName = "bar2";
    String path = "C:\\dev\\dpsrc\\simple-java\\work";

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    DataRegionConfiguration regionCfg = new DataRegionConfiguration();
    regionCfg.setName(region);
    regionCfg.setInitialSize(10L * 1024 * 1024);
    regionCfg.setMaxSize(100L * 1024 * 1024);
    regionCfg.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU);
    regionCfg.setPersistenceEnabled(true);

    storageCfg.setDataRegionConfigurations(regionCfg);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setWorkDirectory(path);
    cfg.setDataStorageConfiguration(storageCfg);
    TcpDiscoverySpi spi = new TcpDiscoverySpi();
    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));
    spi.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(spi);

    Ignite ignite = Ignition.start(cfg);

    ignite.active(true);

    CacheConfiguration cc = new CacheConfiguration<>();
    cc.setDataRegionName(region);
    cc.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_ALL);
    cc.setName(cacheName);
    cc.setCacheMode(CacheMode.PARTITIONED);
    cc.setGroupName(cacheName);
    cc.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 15)));
    cc.setStatisticsEnabled(true);
    ignite.resetLostPartitions(Arrays.asList(new String[]{cacheName}));

    // ignite.getOrCreateCache(cc).put("A1", "V1");
    // ignite.getOrCreateCache(cc).put("A2", "V2");
    System.out.println(new Date().toString() + ">>>" + ignite.getOrCreateCache(cc).get("A1"));
    System.out.println(new Date().toString() + ">>>" + ignite.getOrCreateCache(cc).get("A2"));

    Thread.sleep(20 * 1000);

    System.out.println(new Date().toString() + ">>>" + ignite.getOrCreateCache(cc).get("A1"));
    System.out.println(new Date().toString() + ">>>" + ignite.getOrCreateCache(cc).get("A2"));
}



On Wed, Mar 14, 2018 at 5:07 PM vkulichenko 
wrote:

> Subash,
>
> This is weird, I'm doing exactly the same and not able to reproduce the
> issue. Can you share your whole test so that I can run it as-is?
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 
/subash


Re: Topic based messaging

2018-03-15 Thread piyush
Has anybody used this feature ?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite : Syntax error in SQL statement create table

2018-03-15 Thread Ilya Kasnacheev
Hello Guillaume!

Please share your pom.xml (or other source of dependencies), since something
is amiss here. In my projects the h2 dependency is of the proper version, 1.4.195.

Thanks,

-- 
Ilya Kasnacheev

2018-03-05 16:51 GMT+03:00 guillaume :

> Hello,
>
> Thank you for your response.
>
> I checked my dependency tree. What I do not understand is why
> apache-ignite-indexing 2.3.0 contains h2 1.4.193 if they are incompatible. I
> share a screenshot with you to illustrate my point.
> I use Maven.
>
>  t1656/Capture_du_2018-03-05_14-45-24.png>
>
> I also tried excluding h2 1.4.193 from apache-ignite-indexing and adding
> 1.4.195 to my project, and I get the following error:
>
> java.lang.ClassNotFoundException: org.h2.result.RowFactory
>
> Regards,
>
> Hochart Guillaume
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Affinity Key column to be always part of the Primary Key

2018-03-15 Thread Naveen
Do we have any update on this clarification regarding the affinity key?

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.3 vs 2.4 compatibility

2018-03-15 Thread Mikael
I made a backup of the files before I deleted them, so I guess I can
repeat it again; I will have a look at it.


Mikael

On 2018-03-15 at 12:18, Dmitry Pavlov wrote:

Hi,

It should work and data from 2.3 should be loaded by 2.4. Could you 
please share details?


Do you have logs, and entries saved in persistent store?

Sincerely,
Dmitriy Pavlov

Thu, Mar 15, 2018 at 12:56, Mikael :


Hi!

Is persistent storage compatible between 2.3 and 2.4? I upgraded to
2.4 and got a lot of exceptions at startup. When I deleted all the
persistence files and restarted, everything worked fine. No big deal,
I just wanted to know whether it should work with 2.3 persistence files
after upgrading to 2.4, or whether it's not expected to work.






Re: 2.3 vs 2.4 compatibility

2018-03-15 Thread Dmitry Pavlov
Hi,

It should work and data from 2.3 should be loaded by 2.4. Could you please
share details?

Do you have logs, and entries saved in persistent store?

Sincerely,
Dmitriy Pavlov

Thu, Mar 15, 2018 at 12:56, Mikael :

> Hi!
>
> Is persistent storage compatible between 2.3 and 2.4? I upgraded to
> 2.4 and got a lot of exceptions at startup. When I deleted all the
> persistence files and restarted, everything worked fine. No big deal,
> I just wanted to know whether it should work with 2.3 persistence files
> after upgrading to 2.4, or whether it's not expected to work.
>
>
>


2.3 vs 2.4 compatibility

2018-03-15 Thread Mikael

Hi!

Is persistent storage compatible between 2.3 and 2.4? I upgraded to
2.4 and got a lot of exceptions at startup. When I deleted all the
persistence files and restarted, everything worked fine. No big deal,
I just wanted to know whether it should work with 2.3 persistence files
after upgrading to 2.4, or whether it's not expected to work.





Re:RE: Re: Node can not join cluster

2018-03-15 Thread Lucky
Hi,
I load data from the database with the 192.168.63.36 node. The other nodes don't
load data. You can see this from the default-config_60.xml and
default-config.xml files.
I have provided all the files related to this.
Thank you.




At 2018-03-06 16:53:49, "Stanislav Lukyanov"  wrote:


Hi,

 

As Alex said before, from the log you’ve provided it’s hard to say much beyond
what’s in this message:

===

[09:06:26,657][WARNING][main][TcpDiscoverySpi] Node has not been connected to 
topology and will repeat join process. Check remote nodes logs for possible 
error messages. Note that large topology may require significant time to start. 
Increase 'TcpDiscoverySpi.networkTimeout' configuration property if getting 
this message on the starting nodes [networkTimeout=5000]

===

 

Can you provide other log files and share more details about what specifically 
you’re doing?

If the issue is reproducible in different environments, it would be helpful if 
you could share a reproducer project on GitHub.

 

Thanks,

Stan

 

ignite_Error_log.rar
Description: Binary data


Ignite cluster always throws java.lang.NoClassDefFoundError

2018-03-15 Thread 王 刚

Hi, guys. I ran into an issue in an Ignite cluster. I am using Ignite in Spring Boot
1.5.8 by adding the ignite-core 2.3.0 dependency. And the cluster uses a JDBC-based
discovery IP finder.

If I kill the spring-boot-ignite process and try to restart it, Ignite always throws
a NoClassDefFoundError: java.lang.NoClassDefFoundError:
ch/qos/logback/classic/spi/ThrowableProxy, or another class like
org/springframework/http/ResponseEntity. But I'm sure the class is in the jar
file. And after I remove all the files in /store/node, Ignite can start
successfully.

These are my pom file, configuration file and log file.




Sent from Mail for Windows 10


(The pom.xml was inlined here, but its XML tags were stripped in the archive.
The recoverable content is:

- Project: com.samples.vehicles:ignite:0.3.0-SNAPSHOT, packaging jar, name
  vehicles-ignite, description "ignite search engine", parent
  com.samples:vehicles:0.3.0-SNAPSHOT.
- Properties: UTF-8 encoding, Java 1.8, ignite.version 2.4.0, and a 2.12.0
  version property.
- Dependencies: com.samples.vehicles:common:0.3.0-SNAPSHOT,
  spring-cloud-starter-feign, spring-cloud-starter-eureka,
  spring-boot-starter-web, spring-boot-starter-jdbc,
  spring-boot-configuration-processor, camel-spring-boot-starter, camel-kafka,
  ignite-core, ignite-spring, ignite-indexing, ignite-slf4j (all at
  ${ignite.version}), logback-classic, mysql-connector-java, lombok,
  commons-dbutils:1.1, and h2.
- Build plugins: spring-boot-maven-plugin and maven-compiler-plugin 3.7.0 with
  source/target 1.8.)



IgniteConfig.java
Description: IgniteConfig.java


ignite.log
Description: ignite.log