java.lang.ClassNotFoundException: Failed to peer load class

2016-11-08 Thread wang alex
When I run the code below, I got "Failed to peer load class".

    String objectIds = "1vxzn3ifggm4o,1a47fmqipb1u3,z56f5kkwlfk3,tths3z5k5l38,79lzqlrd4cg6";
    for (String did : objectIds.split(",")) {
        calls.add(() -> {
            TLabObject object = RepositoryDao.getObject(did);
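
One common cause, if application classes such as RepositoryDao and TLabObject are not deployed on the server nodes, is inconsistent peer class loading. A minimal sketch, assuming Java-based configuration; the flag must be set the same way on every node:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class PeerClassLoadingConfig {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // Closures that reference application classes (e.g. RepositoryDao)
            // can only be transferred to remote nodes when peer class loading
            // is enabled, and the flag must match on every node.
            cfg.setPeerClassLoadingEnabled(true);

            try (Ignite ignite = Ignition.start(cfg)) {
                // submit the compute calls here
            }
        }
    }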

Re: getting Failed to find cache even though cache is there

2016-11-08 Thread vkulichenko
Hi, It looks like the cache object is unexpectedly serialized. Is it referenced by the loadCache predicate and/or its arguments? -Val
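
A sketch of the pattern Val seems to be hinting at, with hypothetical key/value types and cache name: instead of capturing an IgniteCache field in the loadCache predicate (which would force the cache object to be serialized with it), inject the node-local Ignite instance:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.lang.IgniteBiPredicate;
    import org.apache.ignite.resources.IgniteInstanceResource;

    class LoadFilter implements IgniteBiPredicate<Long, String> {
        // Injected on the node where the predicate runs; nothing to serialize.
        @IgniteInstanceResource
        private transient Ignite ignite;

        @Override public boolean apply(Long key, String val) {
            IgniteCache<Long, String> cache = ignite.cache("myCache");
            return !cache.containsKey(key);
        }
    }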

Re: Class objects are fetched as string when JDBC api is used

2016-11-08 Thread vkulichenko
To my knowledge, both the JDBC driver and especially SqlFieldsQuery should return objects, not strings. However, if you print out these objects and you didn't override toString, then you will get exactly what you showed in the output. Can this be the case? -Val
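
A minimal SqlFieldsQuery sketch with a hypothetical Person table, showing that each column comes back as a typed object rather than a string:

    import java.util.List;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class FieldsQueryExample {
        static void printRows(IgniteCache<?, ?> cache) {
            SqlFieldsQuery qry = new SqlFieldsQuery("select id, name from Person");

            for (List<?> row : cache.query(qry)) {
                Long id = (Long)row.get(0);        // typed column values,
                String name = (String)row.get(1);  // not strings
                System.out.println(id + " -> " + name);
            }
        }
    }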

Re: Problem with v for listening updates

2016-11-08 Thread vkulichenko
Hi, I tried to run your code and it works properly for me. Are you sure you're running the exact same version? Also note that you must make a full copy of the object before putting it back into the cache. If you try to avoid this, you can get unexpected behavior because you will mutate the actual
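
A sketch of the copy-before-put pattern Val describes, with a hypothetical TestVal class (a copy constructor stands in for whatever cloning mechanism the real class has):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    public class CopyBeforePut {
        static class TestVal {
            String val;
            TestVal(String val) { this.val = val; }
            TestVal(TestVal other) { this.val = other.val; } // full copy
        }

        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, TestVal> cache = ignite.getOrCreateCache("vals");

                cache.put(1, new TestVal("old"));

                // Copy the value before modifying it; mutating the instance
                // returned by get() in place can lead to unexpected behavior.
                TestVal copy = new TestVal(cache.get(1));
                copy.val = "new";
                cache.put(1, copy);
            }
        }
    }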

Re: Fwd: How to trigger out event when a specific field got updated?

2016-11-08 Thread vkulichenko
Hi, You get both the old and the new value in the event, so you can compare this field's values within your listener and act accordingly. -Val
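
One way to receive both values is a continuous query; a sketch with a hypothetical Account type and status field, reacting only when that field actually changed:

    import javax.cache.event.CacheEntryEvent;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.ContinuousQuery;

    public class FieldChangeListener {
        static class Account {
            String status;
        }

        static void listen(IgniteCache<Long, Account> cache) {
            ContinuousQuery<Long, Account> qry = new ContinuousQuery<>();

            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Long, ? extends Account> e : events) {
                    Account oldVal = e.getOldValue();
                    Account newVal = e.getValue();

                    // React only when the specific field changed.
                    if (oldVal != null && !oldVal.status.equals(newVal.status))
                        System.out.println("status changed for key " + e.getKey());
                }
            });

            // Keep the returned cursor if you need to stop listening later.
            cache.query(qry);
        }
    }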

Re: When writethrough processing, Persistent storage failed

2016-11-08 Thread vkulichenko
Hi, If there is a high load, I think there is a big chance of losing something in 10 seconds. However, you can try to increase the flushSize property, which controls the maximum number of entries kept in the queue. Note that in case you can't tolerate any data loss, you should use write-through instead
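
A sketch of the write-behind settings this appears to refer to (a cache store factory is assumed to be configured elsewhere; the numbers are illustrative):

    import org.apache.ignite.configuration.CacheConfiguration;

    public class WriteBehindSettings {
        static CacheConfiguration<Long, String> cacheConfig() {
            CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

            ccfg.setWriteThrough(true);
            ccfg.setWriteBehindEnabled(true);

            // Maximum number of entries kept in the write-behind queue
            // before they are flushed to the underlying store.
            ccfg.setWriteBehindFlushSize(20_000);

            // Flush at least every 10 seconds even if the queue is not full.
            ccfg.setWriteBehindFlushFrequency(10_000);

            return ccfg;
        }
    }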

Re: Query on using Ignite as persistence data and processing layer

2016-11-08 Thread vkulichenko
Hi, It's fine to use Ignite as the main and only data storage for your application, but Ignite is not a persistent storage. Data is in memory, so there is always a chance of data loss. If this is something that you can't live with, then do not rip and replace, but use Ignite with a persistence

Re: Need help on the data streaming.

2016-11-08 Thread ssrini9
Regarding "configure expirations for cache entries": what happens when a cache entry expires? Will the underlying query be executed again by the data streamer and the data loaded into the cache again? Please clarify.
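
For reference, a sketch of what "configure expirations for cache entries" typically looks like (cache name and duration are illustrative). An expired entry is simply evicted; it is reloaded on the next read only if read-through with a cache store is configured, not by the data streamer:

    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ExpirySettings {
        static CacheConfiguration<Long, String> cacheConfig() {
            CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("reportCache");

            // Entries expire 30 minutes after creation and are then evicted.
            ccfg.setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));

            return ccfg;
        }
    }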

Re: Failed to send time sync snapshot to remote node

2016-11-08 Thread vkulichenko
Hi, Please properly subscribe to the mailing list so that the community can receive email notifications for your messages. To subscribe, send empty email to user-subscr...@ignite.apache.org and follow simple instructions in the reply. Navneet Kumar wrote > I am connected with the remote ignite

Re: java.lang.ClassNotFoundException: Failed to peer load class

2016-11-08 Thread vkulichenko
Hi, Please properly subscribe to the mailing list so that the community can receive email notifications for your messages. To subscribe, send empty email to user-subscr...@ignite.apache.org and follow simple instructions in the reply. alex wrote > When I run the code below, I got "Failed to

Re: SQLQuery

2016-11-08 Thread vkulichenko
Devis, Having a link from one object to another is the only way to join them. You need to add a 'clientUpdateId' field to LinkObject, store each LinkObject as a separate entry and then join. Another way is to denormalize and add 'registrationId' to LinkObject and remove ClientUpdate type. This
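
A sketch of the first option (field names follow Devis' classes; clientUpdateId is the field Val suggests adding), assuming both types are configured as query types in the same cache:

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.SqlFieldsQuery;
    import org.apache.ignite.cache.query.annotations.QuerySqlField;

    public class LinkJoinSketch {
        static class ClientUpdate {
            @QuerySqlField(index = true)
            private String registrationId;
        }

        static class LinkObject {
            @QuerySqlField(index = true)
            private Integer objectId;

            // Added so each LinkObject entry can be joined back to its ClientUpdate.
            @QuerySqlField(index = true)
            private String clientUpdateId;
        }

        static void query(IgniteCache<?, ?> cache) {
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "select c.registrationId, l.objectId " +
                "from ClientUpdate c join LinkObject l " +
                "on c.registrationId = l.clientUpdateId");

            cache.query(qry).getAll().forEach(System.out::println);
        }
    }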

Re: What's the difference between EntryProcessor and distributed closure?

2016-11-08 Thread Tracyl
Thanks Alexey. By predicate/projection pushdown, I mean: currently I am storing a native Spark Row object as the value format of an IgniteCache. If I retrieve it as an IgniteRDD, I only want certain columns of that Row object rather than returning the entire Row and doing filter/projection at the Spark level. Do

Re: Need help on the data streaming.

2016-11-08 Thread vkulichenko
Srini, In this case you will have to implement a mechanism on the DB side that will use the Ignite API to update the cache. Ignite doesn't provide anything out of the box for this. Another way to handle this is to configure expirations for cache entries [1]. This way any entry will be eventually
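
A sketch of how expired entries would get refreshed on access, with a hypothetical ReportStore standing in for the query against the reporting DB (only load() matters here, since writes come from the other application):

    import javax.cache.Cache;
    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ReadThroughSettings {
        public static class ReportStore extends CacheStoreAdapter<Long, String> {
            @Override public String load(Long key) {
                return null; // run the underlying query for this key here
            }

            @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
                // no-op: writes happen through the other application
            }

            @Override public void delete(Object key) {
                // no-op
            }
        }

        static CacheConfiguration<Long, String> cacheConfig() {
            CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("reportCache");

            // With read-through enabled, a get() for an expired or missing key
            // calls ReportStore.load() and puts the result back into the cache.
            ccfg.setReadThrough(true);
            ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(ReportStore.class));

            return ccfg;
        }
    }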

Re: Need help on the data streaming.

2016-11-08 Thread ssrini9
Hi Val, Thank you for responding to my post. I agree with the idea of using Ignite as a read/write-through cache. However, in my application, writes to the persistent store happen through a different application. My application is a reporting application, where I need to stream the data, apply

Re: Problem with v for listening updates

2016-11-08 Thread Andry
Hi Vladislav, Thanks for the reply. The example I provided was just to explain our case. In our system we want to update several properties on an existing object in the cache without making a full copy of the object each time we extract it from the cache. We are using the default configuration for
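
This thread does not suggest it, but one common way to update a few properties without pulling the value out and copying it on the client is an entry processor, which runs on the node that owns the key. A sketch with a hypothetical TestVal:

    import javax.cache.processor.EntryProcessorException;
    import javax.cache.processor.MutableEntry;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.CacheEntryProcessor;

    public class InPlaceUpdate {
        static class TestVal {
            String val;
            int counter;
            TestVal(String val) { this.val = val; }
        }

        static void update(IgniteCache<Integer, TestVal> cache) {
            cache.invoke(1, new CacheEntryProcessor<Integer, TestVal, Void>() {
                @Override public Void process(MutableEntry<Integer, TestVal> entry, Object... args)
                    throws EntryProcessorException {
                    TestVal v = entry.getValue();
                    v.val = "new";
                    v.counter++;
                    entry.setValue(v); // setValue() is required for the change to be stored
                    return null;
                }
            });
        }
    }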

SQLQuery

2016-11-08 Thread devis76
Hi, can you help me with how to perform this query with Ignite SQL? I have these classes:

    class ClientUpdate {
        private final String registrationId;
        private final LinkObject[] objectLinks;
    }

    class LinkObject implements Serializable {
        private final Integer objectId;
        private

Re: Problem with v for listening updates

2016-11-08 Thread Vladislav Pyatkov
Hi Andry, It looks like a mistake:

    cache.put(1, new TestVal("old"));
    TestVal oldVal = cache.get(1);
    oldVal.val = "new";
    cache.put(1, oldVal);

You always need to create a new object. Try it like this:

    cache.put(1, new TestVal("old"));
    cache.put(1, new TestVal("new"));

Otherwise you can get

Re: What's the difference between EntryProcessor and distributed closure?

2016-11-08 Thread Alexey Goncharuk
Hi Tracyl, Can you describe in greater detail what you are trying to achieve? To my knowledge, predicate pushdown is a term usually used for map-reduce jobs. The concept of Ignite's jobs and tasks is more similar to fork-join rather than map-reduce semantics, so we could better help you if you

Re: Re: Re: ignite used too much memory

2016-11-08 Thread Andrey Mashenkov
Hi Shawn. It looks strange that each entry costs 10k on average, while you expect 50 bytes at most. You wrote that there are 300k entries, but I see 5000k keys in the heap. The grid configuration and a heap dump would be helpful to understand what holds all that data. Or it would be great to have a

Re: when I build ignite C++, it reports that "ignite/impl/interop/interop_target.h" does not exist?

2016-11-08 Thread Igor Sapego
Hi, I've checked - for some reason this file is indeed missing from the release package. It is present in the 1.7.0 tag in the git repo, though. You can get it here - [1] [1] - https://github.com/apache/ignite/releases/tag/1.7.0 Best Regards, Igor On Tue, Nov 8, 2016 at 6:45 AM, smile

Re: Using OFF_HEAP_TIERED and Replicated Heap continously grows eventually heap crash

2016-11-08 Thread Vladislav Pyatkov
Hi, Yes, you are right: if you decrease the property (sqlOnheapRowCacheSize), you sacrifice performance. But the default value is already quite large: public static final int DFLT_SQL_ONHEAP_ROW_CACHE_SIZE = 10 * 1024; You can try to analyse the heap dump in order to confirm the reason for the
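
For reference, a sketch of lowering that property (the setter name matches the Ignite 1.x CacheConfiguration; the value is illustrative):

    import org.apache.ignite.configuration.CacheConfiguration;

    public class SqlRowCacheSettings {
        static CacheConfiguration<Long, Object> cacheConfig() {
            CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("bigCache");

            // Default is DFLT_SQL_ONHEAP_ROW_CACHE_SIZE = 10 * 1024 rows; lowering it
            // trades some SQL performance for a smaller on-heap footprint.
            ccfg.setSqlOnheapRowCacheSize(2 * 1024);

            return ccfg;
        }
    }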

Re: can ignite C++ API support pub/sub or Listener ?

2016-11-08 Thread Igor Sapego
Hi, There is no such API in the C++ API yet. We are working on extending the C++ API though, so it may appear in the future. Best Regards, Igor On Tue, Nov 8, 2016 at 10:41 AM, smile wrote: > Hi, all >I used the ignite C++ API, and I find that it does not support pub/sub, > also not

Re: Kafka - Failed to stream a record with null key

2016-11-08 Thread austin solomon
Hi Roman, Thank you for the brief explanation. I will try to modify the source code of FileStreamSourceTask.java. Thanks & Regards, Austin

Query on using Ignite as persistence data and processing layer

2016-11-08 Thread chevy
Hi, I am looking at the feasibility of using Ignite as a persistence layer instead of a MySQL/Postgres DB where we do a lot of processing before sending data to our rest-api. 1. Is it good to use Ignite as a storage? 2. Is it efficient to do so much processing of data in Ignite? 3. What is the

Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

2016-11-08 Thread chevy
I went to the local Gradle repo and manually deleted the other H2 versions. It's working now. I have one more query: is there a way I can make 'ignite-http-rest' work with spring-boot? Last time I checked with one of the Ignite devs, they mentioned that I can't use it with Boot. Without the rest-api, I need