Re: Group By Query is slow : Apache Ignite 2.3.0

2017-12-20 Thread dkarachentsev
Hi Indranil,

These measurements are not fully correct. For example, select count(*) might
use only an index, and select * was not actually executed, because you need
to iterate over the cursor.
Also, by default a query is not parallelized on a single node, so a scan with
grouping runs sequentially in one thread.

Try rechecking your results on one node with query parallelism enabled:
CacheConfiguration.setQueryParallelism(8) [1].

And/or try 4 server nodes with 1 backup. You should get better numbers
because the query is spread across machines.
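For reference, a minimal sketch of enabling query parallelism in Spring XML (the cache name "A" is an assumption; [1] shows the equivalent Java setter):

```xml
<!-- Hedged sketch: the cache name "A" is an assumption. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="A"/>
    <!-- Run each SQL query in 8 threads on every node holding this cache. -->
    <property name="queryParallelism" value="8"/>
</bean>
```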

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setQueryParallelism(int)

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-20 Thread Dmitry Karachentsev

Crossposting to devlist.

Hi Igniters!

It might be a nice feature to have: getting a list of registered continuous
queries with the ability to deregister them.


What do you think?

Thanks!
-Dmitry

On 20.12.2017 at 16:59, fefe wrote:

For sanity checks or tests. I want to be sure that I haven't forgotten to
deregister any listener.

It's also a very important metric to see how many continuous
queries/listeners are currently running.







Re: NullPointerException in GridDhtPartitionDemander

2017-12-20 Thread aMark
Hi,

A client gets created for each request and connects to the cluster to read
the data. Once reading is done, the client exits. This explains the high
topology version.

Server nodes, though, are not created often.

We can try Ignite 2.3 in our next release, but we are close to our release
date, hence we can't try Ignite 2.3 at this point.

Thanks,





Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Denis Magda
Why are you giving only 5GB of RAM to every node then (referring to your data
region configuration)? You mentioned that it's fine to assign 15GB of RAM. Does
that mean there are other processes running on the server that use the rest of
the RAM heavily?

To make the troubleshooting of your problem more effective, please upload
your complete configuration and the code of the preloader that calls the Ignite
data streamer to GitHub and share it with us.

—
Denis

> On Dec 20, 2017, at 8:34 PM, Tejashwa Kumar Verma  
> wrote:
> 
> Hi Denis,
> 
> I dont know that i got your question correctly or not.
> But still attempting to ans.
> 
> For now i have 2 node cluster and both have 48-48 GB RAM available. And data 
> is not Preloaded .
> 
> 
> Thanks & Regards
> Tejas
> 
> On Thu, Dec 21, 2017 at 9:55 AM, Denis Magda  > wrote:
> Does it mean that you have 3 cluster nodes and all of them are running on a 
> single server? Is data preloaded from a different machine?
> 
> —
> Denis
> 
>> On Dec 20, 2017, at 8:09 PM, Tejashwa Kumar Verma > > wrote:
>> 
>> HI Alexey,
>> 
>> We have enough memory(around 48 GB) on server whereas allocation wise we are 
>> assigning/utilizing only 15GB memory.
>> 
>> 
>> @Denis, I have tried all the configs given in mentioned link. But its not 
>> helping out. 
>> 
>> 
>> Thanks & regards
>> Tejas
>> 
>> On Thu, Dec 21, 2017 at 5:44 AM, Denis Magda > > wrote:
>> Tejas,
>> 
>> The new memory architecture of Ignite 2.x might require an extra tuning. I 
>> find this doc as a good starting point of the scrutiny:
>> https://apacheignite.readme.io/docs/durable-memory-tuning 
>> 
>> 
>> —
>> Denis
>> 
>>> On Dec 20, 2017, at 10:43 AM, Tejashwa Kumar Verma 
>>> mailto:tejashwa.ve...@gmail.com>> wrote:
>>> 
>>> Yes, I have same cluster, env and no of nodes. 
>>> 
>>> I am using DataStreamer to load data. 
>>> 
>>> Thanks and Regards
>>> Tejas 
>>> 
>>> On 21 Dec 2017 12:11 am, "Alexey Kukushkin" >> > wrote:
>>> Tejas, how do you load the cache - are you using DataStreamer or SQL, JDBC 
>>> or put/putAll or something else? Can you confirm - are you saying you have 
>>> same cluster (same number of nodes and hardware) and after the upgrade the 
>>> cache load time increased from 40 to 90 minutes?
>> 
>> 
> 
> 



Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Tejashwa Kumar Verma
Hi Denis,

I don't know whether I got your question correctly or not, but I'm still
attempting to answer.

For now I have a 2-node cluster and both nodes have 48 GB RAM available. And
the data is not preloaded.


Thanks & Regards
Tejas

On Thu, Dec 21, 2017 at 9:55 AM, Denis Magda  wrote:

> Does it mean that you have 3 cluster nodes and all of them are running on
> a single server? Is data preloaded from a different machine?
>
> —
> Denis
>
> On Dec 20, 2017, at 8:09 PM, Tejashwa Kumar Verma <
> tejashwa.ve...@gmail.com> wrote:
>
> HI Alexey,
>
> We have enough memory(around 48 GB) on server whereas allocation wise we
> are assigning/utilizing only 15GB memory.
>
>
> @Denis, I have tried all the configs given in mentioned link. But its not
> helping out.
>
>
> Thanks & regards
> Tejas
>
> On Thu, Dec 21, 2017 at 5:44 AM, Denis Magda  wrote:
>
>> Tejas,
>>
>> The new memory architecture of Ignite 2.x might require an extra tuning.
>> I find this doc as a good starting point of the scrutiny:
>> https://apacheignite.readme.io/docs/durable-memory-tuning
>>
>> —
>> Denis
>>
>> On Dec 20, 2017, at 10:43 AM, Tejashwa Kumar Verma <
>> tejashwa.ve...@gmail.com> wrote:
>>
>> Yes, I have same cluster, env and no of nodes.
>>
>> I am using DataStreamer to load data.
>>
>> Thanks and Regards
>> Tejas
>>
>> On 21 Dec 2017 12:11 am, "Alexey Kukushkin" 
>> wrote:
>>
>>> Tejas, how do you load the cache - are you using DataStreamer or SQL,
>>> JDBC or put/putAll or something else? Can you confirm - are you saying you
>>> have same cluster (same number of nodes and hardware) and after the upgrade
>>> the cache load time increased from 40 to 90 minutes?
>>>
>>
>>
>
>


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Denis Magda
Does it mean that you have 3 cluster nodes and all of them are running on a 
single server? Is data preloaded from a different machine?

—
Denis

> On Dec 20, 2017, at 8:09 PM, Tejashwa Kumar Verma  
> wrote:
> 
> HI Alexey,
> 
> We have enough memory(around 48 GB) on server whereas allocation wise we are 
> assigning/utilizing only 15GB memory.
> 
> 
> @Denis, I have tried all the configs given in mentioned link. But its not 
> helping out. 
> 
> 
> Thanks & regards
> Tejas
> 
> On Thu, Dec 21, 2017 at 5:44 AM, Denis Magda  > wrote:
> Tejas,
> 
> The new memory architecture of Ignite 2.x might require an extra tuning. I 
> find this doc as a good starting point of the scrutiny:
> https://apacheignite.readme.io/docs/durable-memory-tuning 
> 
> 
> —
> Denis
> 
>> On Dec 20, 2017, at 10:43 AM, Tejashwa Kumar Verma > > wrote:
>> 
>> Yes, I have same cluster, env and no of nodes. 
>> 
>> I am using DataStreamer to load data. 
>> 
>> Thanks and Regards
>> Tejas 
>> 
>> On 21 Dec 2017 12:11 am, "Alexey Kukushkin" > > wrote:
>> Tejas, how do you load the cache - are you using DataStreamer or SQL, JDBC 
>> or put/putAll or something else? Can you confirm - are you saying you have 
>> same cluster (same number of nodes and hardware) and after the upgrade the 
>> cache load time increased from 40 to 90 minutes?
> 
> 



Re: Ignite 2.3 Swap Path configuration is causing issue

2017-12-20 Thread Denis Magda
Yes. Read more here: 
https://apacheignite.readme.io/docs/memory-configuration#section-data-regions 


—
Denis


> On Dec 20, 2017, at 8:24 PM, Tejashwa Kumar Verma  
> wrote:
> 
> Oh thanks Alexey, So if we are using ignite V2.X without enabling persistence 
> and if data is spilling over the MaxSize. Then where will that data go ?
> 
> On Thu, Dec 21, 2017 at 1:19 AM, Alexey Kukushkin  > wrote:
> Swapping feature was removed in Ignite v.2 since native persistence was 
> introduced. Use native persistence to "extend" memory.
> 



Re: Ignite 2.3 Swap Path configuration is causing issue

2017-12-20 Thread Tejashwa Kumar Verma
Oh, thanks Alexey. So if we are using Ignite 2.x without enabling
persistence and data is spilling over the maxSize, then where will that data
go?

On Thu, Dec 21, 2017 at 1:19 AM, Alexey Kukushkin  wrote:

> Swapping feature was removed in Ignite v.2 since native persistence was
> introduced. Use native persistence to "extend" memory.
>


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Tejashwa Kumar Verma
Hi Alexey,

We have enough memory (around 48 GB) on the server, whereas allocation-wise
we are assigning/utilizing only 15 GB.


@Denis, I have tried all the configs given in the mentioned link, but it's
not helping.


Thanks & regards
Tejas

On Thu, Dec 21, 2017 at 5:44 AM, Denis Magda  wrote:

> Tejas,
>
> The new memory architecture of Ignite 2.x might require an extra tuning. I
> find this doc as a good starting point of the scrutiny:
> https://apacheignite.readme.io/docs/durable-memory-tuning
>
> —
> Denis
>
> On Dec 20, 2017, at 10:43 AM, Tejashwa Kumar Verma <
> tejashwa.ve...@gmail.com> wrote:
>
> Yes, I have same cluster, env and no of nodes.
>
> I am using DataStreamer to load data.
>
> Thanks and Regards
> Tejas
>
> On 21 Dec 2017 12:11 am, "Alexey Kukushkin" 
> wrote:
>
>> Tejas, how do you load the cache - are you using DataStreamer or SQL,
>> JDBC or put/putAll or something else? Can you confirm - are you saying you
>> have same cluster (same number of nodes and hardware) and after the upgrade
>> the cache load time increased from 40 to 90 minutes?
>>
>
>


Re: “Failed to communicate with Ignite cluster" error when using JDBC Thin driver

2017-12-20 Thread gunman524
It should not be a version issue; the thin driver and cluster versions are
exactly the same.

It can happen on any query but cannot be reproduced 100% of the time; it only
happens sometimes.

In the log, I found "Caused by: java.io.IOException: Failed to read incoming
message (not enough data)."

At first I guessed the parameters being passed did not match what was defined
in the SQL, but after double-checking that is not the case.

So, any idea about it?





Re: Performance comparisons

2017-12-20 Thread Denis Magda
I would disagree with you that it doesn't make sense to re-run benchmarks.
Vendors usually publish artificial benchmarks that have nothing to do with a
specific use case and environment. A rule of thumb is to benchmark your own
data model and use case.

Anyway, those benchmarks were executed by GridGain, so direct all questions
to that company.

BTW, these results [1] were obtained after the Hazelcast allegation
discussed in the links you provided.

[1] 
http://dmagda.blogspot.com/2017/04/benchmarking-apache-ignite-still-keeps.html 


—
Denis

> On Dec 20, 2017, at 12:27 PM, Dmitri Bronnikov  
> wrote:
> 
> If 3-rd party runs benchmark even better. It doesn't make sense if everyone
> in the world has to run the same benchmarks. Division of labor makes sense.
> 
> Here's what I meant w.r.t. comparison vs Hazelcast.
> 
> https://www.gridgain.com/resources/blog/gridgain-confirms-apache-ignite-performance-2x-faster-hazelcast
> https://blog.hazelcast.com/fake-benchmark/
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Denis Magda
Tejas,

The new memory architecture of Ignite 2.x might require extra tuning. I find
this doc a good starting point for the scrutiny:
https://apacheignite.readme.io/docs/durable-memory-tuning 


—
Denis

> On Dec 20, 2017, at 10:43 AM, Tejashwa Kumar Verma  
> wrote:
> 
> Yes, I have same cluster, env and no of nodes. 
> 
> I am using DataStreamer to load data. 
> 
> Thanks and Regards
> Tejas 
> 
> On 21 Dec 2017 12:11 am, "Alexey Kukushkin"  > wrote:
> Tejas, how do you load the cache - are you using DataStreamer or SQL, JDBC or 
> put/putAll or something else? Can you confirm - are you saying you have same 
> cluster (same number of nodes and hardware) and after the upgrade the cache 
> load time increased from 40 to 90 minutes?



Re: Group By Query is slow : Apache Ignite 2.3.0

2017-12-20 Thread INDRANIL BASU
Hi,

There are 86486 records.
Time taken for select count(*) from A => 35 ms
Time taken for select mainId, count(*) from A group by mainId => 70 ms
Time taken for select * from A => 0 ms

I am doing a POC with Apache Ignite and I am very keen to use it in
production for live streaming and real-time in-memory fast queries. Group by
and top-100 are the 2 preferred queries.
I need the figures to be great to put up a case, hence any help will be
appreciated.
Thanks and regards,

-- Indranil Basu
 

On Wednesday 20 December 2017, 11:14:33 PM GMT+11, dkarachentsev 
 wrote:  
 
 Hi,

How many records your query returns without LIMIT? How long does it take to
select all records without grouping?

Thanks!
-Dmitry




Re: Performance comparisons

2017-12-20 Thread Dmitri Bronnikov
If a 3rd party runs the benchmark, even better. It doesn't make sense for
everyone in the world to run the same benchmarks. Division of labor makes
sense.

Here's what I meant w.r.t. comparison vs Hazelcast.

https://www.gridgain.com/resources/blog/gridgain-confirms-apache-ignite-performance-2x-faster-hazelcast
https://blog.hazelcast.com/fake-benchmark/





Re: Ignite 2.3 Swap Path configuration is causing issue

2017-12-20 Thread Alexey Kukushkin
The swapping feature was removed in Ignite 2.x when native persistence was
introduced. Use native persistence to "extend" memory.
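A hedged sketch of what enabling native persistence looks like in Spring XML (the size and region setup are assumptions, not from this thread; note that a 2.x cluster with persistence starts inactive and must be activated, e.g. via ignite.active(true)):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Pages that do not fit in maxSize are kept on disk
                         instead of the removed swap mechanism. -->
                    <property name="persistenceEnabled" value="true"/>
                    <property name="maxSize" value="#{5L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```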


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Alexey Kukushkin
The memory architecture changed in release 2, and one of the consequences is
that Ignite now allocates memory more eagerly. Do you have enough memory on
the servers? Is it possible that the amount of memory you allocated for
Ignite results in Ignite reserving too much RAM, so that the remaining RAM is
not enough for the OS and other apps, causing intensive swapping?


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Tejashwa Kumar Verma
Yes, I have same cluster, env and no of nodes.

I am using DataStreamer to load data.

Thanks and Regards
Tejas

On 21 Dec 2017 12:11 am, "Alexey Kukushkin" 
wrote:

> Tejas, how do you load the cache - are you using DataStreamer or SQL, JDBC
> or put/putAll or something else? Can you confirm - are you saying you have
> same cluster (same number of nodes and hardware) and after the upgrade the
> cache load time increased from 40 to 90 minutes?
>


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Alexey Kukushkin
Tejas, how do you load the cache - are you using DataStreamer or SQL, JDBC
or put/putAll or something else? Can you confirm - are you saying you have
same cluster (same number of nodes and hardware) and after the upgrade the
cache load time increased from 40 to 90 minutes?


Re: Performance comparisons

2017-12-20 Thread Denis Magda
The Ignite community does not publish performance benchmarks that compare
Ignite to other projects and products. That's against ASF ideology.

However, you can run benchmarks on your own following the doc below and we
will be happy to assist if you believe final results look suspicious and
more advanced tuning is required:
https://apacheignite.readme.io/docs/perfomance-benchmarking

As for the Hazelcast benchmarks, most likely you came across benchmarks from
a 3rd-party vendor. My suggestion would be to contact that vendor.

--
Denis






Re: Ignite 2.3 Swap Path configuration is causing issue

2017-12-20 Thread Tejashwa Kumar Verma
Hi Alexey,

Thanks for the quick reply.
You are suggesting that swapPath is not about the swap files used by
operating systems to extend RAM.

Please suggest a way in 2.3 to configure swap files to extend RAM.


Thanks & Regards
Tejas

On 20 Dec 2017 9:29 pm, "Alexey Kukushkin" 
wrote:

> My understanding is specifying a swapPath makes data region allocated in a
> memory mapped file. Otherwise the data region is stored in off-heap RAM.
> swapPath is not about swap files used by operating systems to extend RAM.
>
> According to my understanding the behaviour you described is correct.
>


Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Tejashwa Kumar Verma
Hi Alexey,

In this case I have already removed the swapPath configuration. Please see
the data region config.

It is still slow.


Thanks and Regards
Tejas

On 20 Dec 2017 9:44 pm, "Alexey Kukushkin" 
wrote:

> Specifying "swapPath" makes Ignite use memory mapped file to store data.
> Memory mapped files are slower than "raw" RAM.
>
> Remove "swapPath" setting to have same configuration as you had in Ignite
> 1.9 (data stored in off-heap RAM).
>


Re: Ignite service method cannot invoke for third time

2017-12-20 Thread arunkjn
Hi,

I am attaching a thread dump for all nodes-

1. data node - this hosts all our caches.
  https://pastebin.com/XeFdjtgM
2. service node - this has the concerned service whose method cannot be
called the third time
  https://pastebin.com/ukN21R4S
3. sample workflow node - this is a client node which is trying to call the
service method.
  https://pastebin.com/ViK3Vudd
4. consumer node- a server node which should be irrelevant
  https://pastebin.com/EYaFcLug
5. publisher node- a server node which should be irrelevant
  https://pastebin.com/1JEt9F7d
6. executor node- a server node which should be irrelevant
  https://pastebin.com/mRnNMTmM

Please let me know if you need any more detail.





Re: Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Alexey Kukushkin
Specifying "swapPath" makes Ignite use memory mapped file to store data.
Memory mapped files are slower than "raw" RAM.

Remove "swapPath" setting to have same configuration as you had in Ignite
1.9 (data stored in off-heap RAM).


Re: Ignite 2.3 Swap Path configuration is causing issue

2017-12-20 Thread Alexey Kukushkin
My understanding is specifying a swapPath makes data region allocated in a
memory mapped file. Otherwise the data region is stored in off-heap RAM.
swapPath is not about swap files used by operating systems to extend RAM.

According to my understanding the behaviour you described is correct.


Data load is very slow in ignite 2.3 compare to ignite 1.9

2017-12-20 Thread Tejashwa Kumar Verma
Hi All,

We are migrating from 1.9 to 2.3, and the respective configs follow.

*Cache Config in 1.9:*

[XML stripped by the list archive; visible fragments: backups = 0, indexed
types java.lang.String / net.juniper.cs.entity.InstallBase]

*Cache Config in 2.3:*

Here we have to configure a DataRegion, and then that data region needs to be
assigned to the Cache.

*Data Region Conf:*

[XML stripped by the list archive]

*Cache Conf:*

[XML stripped by the list archive; visible fragments: indexed types
java.lang.String / net.juniper.cs.entity.InstallBase]
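Since the archive stripped the XML, here is a hedged reconstruction of what such a 2.3 data-region-plus-cache configuration typically looks like (the region name, cache name, and sizes are assumptions; only backups=0 and the indexed types are visible in the original):

```xml
<!-- Data region; name and size are assumptions. -->
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="MyDataRegion"/>
    <property name="maxSize" value="#{15L * 1024 * 1024 * 1024}"/>
</bean>

<!-- Cache assigned to that region; mirrors the visible 1.9 fragments. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="InstallBaseCache"/>
    <property name="backups" value="0"/>
    <property name="dataRegionName" value="MyDataRegion"/>
    <property name="indexedTypes">
        <list>
            <value>java.lang.String</value>
            <value>net.juniper.cs.entity.InstallBase</value>
        </list>
    </property>
</bean>
```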



*In both cases we are loading data into off-heap memory, but we are still
observing a noticeable increase in the cache load time (from 40 min to 90
min).*
*Can anyone help me understand why this is happening?*

*Note:*
*Data Persistence is not enabled.*


Thanks & Regards
Tejas


Ignite 2.3 Swap Path configuration is causing issue

2017-12-20 Thread Tejashwa Kumar Verma
Hi All,

We are migrating from 1.9 to 2.3 and we are configuring Data regions for
Cache with '*swapPath*' configuration as below.

[XML stripped by the list archive; visible fragment: initialSize =
#{100L * 1024 * 1024}, i.e. 100 MB, plus the swapPath property]

Now, as per the configs, data should first go to RAM (up to the 5 GB
maxSize), and once it starts spilling over 5 GB the remaining data should go
to swap.
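A hedged sketch of the region configuration being described (the region name and path are assumptions; the 100 MB initialSize is visible in the stripped XML, and the 5 GB maxSize comes from the prose above):

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="SwapRegion"/>
    <property name="initialSize" value="#{100L * 1024 * 1024}"/>
    <property name="maxSize" value="#{5L * 1024 * 1024 * 1024}"/>
    <!-- Backs the region with a memory-mapped file under this path. -->
    <property name="swapPath" value="/path/to/swap"/>
</bean>
```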

*ISSUE:*
We are facing an issue here: whenever we start the data load with the
"*swapPath*" property configured, the entire data set goes directly to swap
and not to RAM, which does not look correct. Whereas if we remove the
"*swapPath*" property from the configuration, the data goes to RAM.

Please let me know whether my configs are correct or anything else is
required.


Thanks & Regards
Tejas


Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-20 Thread fefe
For sanity checks or tests. I want to be sure that I haven't forgotten to
deregister any listener.

It's also a very important metric to see how many continuous
queries/listeners are currently running.





Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-20 Thread dkarachentsev
Hi,

Currently it's not possible. What do you need such a possibility for?

Thanks!
-Dmitry





Re: “Failed to communicate with Ignite cluster" error when using JDBC Thin driver

2017-12-20 Thread dkarachentsev
Hi,

Is it possible that the version of the thin driver is different from the
version of the cluster nodes? Does it happen on specific queries, or could it
happen on any query?

Thanks!
-Dmitry





List of running Continuous queries or CacheEntryListener per cache or node

2017-12-20 Thread fefe
Is it possible to get a list of running continuous queries or
CacheEntryListeners for a given cache or a given node?





Re: Memory foot print of the ignite nThe background cache eviction process was unable to free [10] percent of the cache for Context

2017-12-20 Thread Alexey Popov
Hi Naveen,

The off-heap memory metric "Off-heap size" works strangely.
There are several tickets here
https://issues.apache.org/jira/browse/IGNITE-6814
https://issues.apache.org/jira/browse/IGNITE-5583
http://apache-ignite-users.70518.x6.nabble.com/Cache-size-in-Memory-td17226.html

You can get an estimate of off-heap memory usage by multiplying the page
count by the page size (4 KB by default in 2.3):
 ^-- PageMemory [pages=2043040]
So, it is about 2043040 * 4096 bytes ≈ 8.4 GB.

Does that look realistic for your data model?

Please note that all caches are stored off-heap. On-heap memory is used for
data transfer, "hot" caches on top of off-heap caches, etc.

Regarding your error: are you sure you use Ignite for the cache variable
here? It looks like org.apache.catalina.webresources.Cache is used instead of
Ignite in your code:

res = cache.get(custid).toString();

Thank you,
Alexey





Re: Group By Query is slow : Apache Ignite 2.3.0

2017-12-20 Thread dkarachentsev
Hi,

How many records does your query return without LIMIT? How long does it take
to select all records without grouping?

Thanks!
-Dmitry





Re: NullPointerException in GridDhtPartitionDemander

2017-12-20 Thread ilya.kasnacheev
Hello!

Can you please elaborate on why you have such a high number of topology
versions (69 in this case)? Can you please describe the life cycle of your
topology? Which nodes join, when, and which nodes leave, when?

Do you, by any chance, create client nodes for every request or small batch,
or even recreate server nodes often?

I also recommend trying the same scenario on 2.3 to see if there are any
differences. I'm not sure there will be.

Regards,





Re: Upgrade from 2.2.0 to 2.3.0 problem with BinaryObjectException

2017-12-20 Thread Denis Mekhanikov
Oh, it looks like this problem is already fixed:
https://issues.apache.org/jira/browse/IGNITE-6944
I checked: your project works when Ignite from the current master is used.

So you can just wait for the next release and switch to it.

Denis

Wed, 20 Dec 2017 at 13:10, Denis Mekhanikov :

> Hi Łukasz!
>
> This problem is caused by *@Cacheable* annotation on 
> *SampleRepo#getSampleEntity()
> *method.
> When you invoke it for the first time, its result is put into an Ignite
> cache. And for the second time the result is just taken from the cache, you
> probably know that.
>
> The problem is that *SampleEntity* contains a *key* field, which
> internally uses *SingletonImmutableList* class. This class has
> *writeReplace()* method, that alters the serialization. I guess, that
> lookup for this method was broken for *BinaryMarshaller* in 2.3 release.
>
> *SingletonImmutableList* has a transient field *element*. When you put
> this value into a cache, it is serialized, and value of this field is
> omitted. When you get this value from cache, this field is null, which
> causes the NPE.
>
> But actually this class should be serialized, using *writeReplace()*
> method. It works fine if you change marshaller to Optimized. To do it, add
> the following line to *CacheConfig#provideDevIgniteConfiguration():*
>
> cfg.setMarshaller(new OptimizedMarshaller(false));
>
> Note, that Optimized marshaller actually has a number of restrictions.
> Features like IgniteCache.withKeepBinary(), .NET, C++, ODBC won't work with
> it. It may also affect performance, especially if you use SQL.
>
> I'll investigate this problem further. I hope, it will be fixed by 2.4
> release.
>
> Denis
>
Tue, 19 Dec 2017 at 0:37, lukaszbyjos :
>
>> I have created repo for this error to easier recreate.
>>
>> https://github.com/Mistic92/ignite-bug
>>
>> When using 2.2.0 everything is ok. But after update to 2.3.0 I get error
>>
>> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to read
>> field [name=]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:168)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> ... 135 more
>> Caused by: java.lang.NullPointerException
>> at
>> com.google.common.collect.ImmutableList.hashCode(ImmutableList.java:571)
>> ~[guava-20.0.jar:?]
>> at java.util.Arrays.hashCode(Arrays.java:4146) ~[?:1.8.0_152]
>> at java.util.Objects.hash(Objects.java:128) ~[?:1.8.0_152]
>> at com.google.cloud.datastore.BaseKey.hashCode(BaseKey.java:204)
>> ~[google-cloud-datastore-1.8.0.jar:1.8.0]
>> at
>>
>> com.jmethods.catatumbo.DefaultDatastoreKey.hashCode(DefaultDatastoreKey.java:134)
>> ~[catatumbo-catatumbo-2.4.0.jar:2.4.0]
>> at java.util.HashMap.hash(HashMap.java:339) ~[?:1.8.0_152]
>> at java.util.HashMap.put(HashMap.java:612) ~[?:1.8.0_152]
>> at java.util.HashSet.add(HashSet.java:220) ~[?:1.8.0_152]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2093)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:679)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> at
>>
>> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
>> ~[ignite-core-2.3.0.jar:2.3.0]
>> ... 135 more
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Upgrade from 2.2.0 to 2.3.0 problem with BinaryObjectException

2017-12-20 Thread Denis Mekhanikov
Hi Łukasz!

This problem is caused by *@Cacheable* annotation on
*SampleRepo#getSampleEntity()
*method.
When you invoke it for the first time, its result is put into an Ignite
cache, and the second time the result is just taken from the cache, as you
probably know.

The problem is that *SampleEntity* contains a *key* field, which internally
uses the *SingletonImmutableList* class. This class has a *writeReplace()*
method that alters the serialization. I guess that lookup for this method was
broken in *BinaryMarshaller* in the 2.3 release.

*SingletonImmutableList* has a transient field, *element*. When you put this
value into a cache, it is serialized, and the value of this field is omitted.
When you get this value from the cache, the field is null, which causes the
NPE.

But actually this class should be serialized using the *writeReplace()*
method. It works fine if you change the marshaller to Optimized. To do so,
add the following line to *CacheConfig#provideDevIgniteConfiguration():*

cfg.setMarshaller(new OptimizedMarshaller(false));

Note that the Optimized marshaller has a number of restrictions. Features
like IgniteCache.withKeepBinary(), .NET, C++, and ODBC won't work with it. It
may also affect performance, especially if you use SQL.

I'll investigate this problem further. I hope it will be fixed by the 2.4
release.

Denis

Tue, 19 Dec 2017 at 0:37, lukaszbyjos :

> I have created repo for this error to easier recreate.
>
> https://github.com/Mistic92/ignite-bug
>
> When using 2.2.0 everything is ok. But after update to 2.3.0 I get error
>
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to read
> field [name=]
> at
>
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:168)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
> ~[ignite-core-2.3.0.jar:2.3.0]
> ... 135 more
> Caused by: java.lang.NullPointerException
> at
> com.google.common.collect.ImmutableList.hashCode(ImmutableList.java:571)
> ~[guava-20.0.jar:?]
> at java.util.Arrays.hashCode(Arrays.java:4146) ~[?:1.8.0_152]
> at java.util.Objects.hash(Objects.java:128) ~[?:1.8.0_152]
> at com.google.cloud.datastore.BaseKey.hashCode(BaseKey.java:204)
> ~[google-cloud-datastore-1.8.0.jar:1.8.0]
> at
>
> com.jmethods.catatumbo.DefaultDatastoreKey.hashCode(DefaultDatastoreKey.java:134)
> ~[catatumbo-catatumbo-2.4.0.jar:2.4.0]
> at java.util.HashMap.hash(HashMap.java:339) ~[?:1.8.0_152]
> at java.util.HashMap.put(HashMap.java:612) ~[?:1.8.0_152]
> at java.util.HashSet.add(HashSet.java:220) ~[?:1.8.0_152]
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2093)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:679)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
> ~[ignite-core-2.3.0.jar:2.3.0]
> at
>
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
> ~[ignite-core-2.3.0.jar:2.3.0]
> ... 135 more
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


“Failed to communicate with Ignite cluster" error when using JDBC Thin driver

2017-12-20 Thread gunman524
Hi guys,

When I use the Ignite JDBC thin driver to connect to the Ignite cluster, I
sometimes hit the errors below. Do you have any idea what causes this error?
Many thanks!


org.springframework.dao.DataAccessResourceFailureException: 
### Error updating database.  Cause: java.sql.SQLException: Failed to
communicate with Ignite cluster.
### The error may involve defaultParameterMap
### The error occurred while setting parameters
### SQL: MERGE INTO L_ISUP_APP_DRAFT_RESULT_HISTORY_T(ID,DRAFT_ID,USER_ID,APP_ID,DATA_STRUCTURE,PROJECT_ID,RESULT_ID,RESULT_URL,PROCESS_TYPE,STEP_ID,DRAFT_NAME,STATUS,DELETE_FLAG,CREATE_TIME,LAST_UPDATE_TIME,RDC_CODE,APP_DEPLOY_RDC,FROM_RDC,PARAM_STATUS) VALUES(?,?,?,?,?,?,?,?,'1',?,?,?,?,?,?,?,?,?,?);
### Cause: java.sql.SQLException: Failed to communicate with Ignite cluster.
; SQL []; Failed to communicate with Ignite cluster.; nested exception is
java.sql.SQLException: Failed to communicate with Ignite cluster.
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:105)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:75)
at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:447)
at com.sun.proxy.$Proxy1462.insert(Unknown Source)
at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:279)
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:56)
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:53)
at com.sun.proxy.$Proxy1466.igniteCreateDraftInfo(Unknown Source)
at com.huawei.isup.service.connect.serviceignite.IgniteAsyncService.igniteCreateDraftInfo(IgniteAsyncService.java:54)
at com.huawei.isup.service.connect.serviceignite.IgniteAsyncService$$FastClassBySpringCGLIB$$25701ede.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.interceptor.AsyncExecutionInterceptor$1.call(AsyncExecutionInterceptor.java:108)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Failed to communicate with Ignite cluster.
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:681)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
at org.apache.ignite.internal.jdbc.thin.JdbcThinPreparedStatement.executeWithArguments(JdbcThinPreparedStatement.java:252)
at org.apache.ignite.internal.jdbc.thin.JdbcThinPreparedStatement.execute(JdbcThinPreparedStatement.java:240)
at org.apache.tomcat.dbcp.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:197)
at org.apache.tomcat.dbcp.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:197)
at org.apache.ibatis.executor.statement.PreparedStatementHandler.update(PreparedStatementHandler.java:46)
at org.apache.ibatis.executor.statement.RoutingStatementHandler.update(RoutingStatementHandler.java:74)
at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:50)
at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:117)
at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:76)
at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:198)
at org.apache.ibatis.session.defaults.DefaultSqlSession.insert(DefaultSqlSession.java:185)
at sun.reflect.GeneratedMethodAccessor1403.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:434)
... 15 more
Caused by: java.io.I
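
The trace shows the failure originating in JdbcThinConnection.sendRequest, i.e. the thin driver's TCP link to the cluster dropped mid-request; the driver surfaces this as a plain SQLException rather than reconnecting on its own. A common generic workaround (not an Ignite API — the helper name `withRetry` and the `RetryingJdbc` class below are hypothetical, for illustration only) is to treat the operation as transient and retry it, reopening the connection between attempts:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

/** Hypothetical sketch: retry a JDBC operation when the thin-driver link drops. */
public class RetryingJdbc {
    public static <T> T withRetry(Callable<T> op, int maxAttempts) {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (SQLException e) {
                // "Failed to communicate with Ignite cluster" arrives as a plain
                // SQLException; remember it and retry. In real code you would also
                // close and reopen the Connection here before the next attempt.
                last = e;
            } catch (Exception e) {
                // Anything else is not a transient link failure: do not retry.
                throw new RuntimeException(e);
            }
        }
        throw new RuntimeException("gave up after " + maxAttempts + " attempts", last);
    }
}
```

Each attempt would open its own connection, e.g. `DriverManager.getConnection("jdbc:ignite:thin://host:10800")`, execute the MERGE, and close it; whether a retry is safe depends on the statement being idempotent (this MERGE keyed by ID looks like it is). If the pool (Tomcat DBCP here) is handing out stale connections, enabling connection validation in the pool is the more systematic fix.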