Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-05-23 Thread hulitao198758
Me too.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: confirm unsubscribe from user@ignite.apache.org

2019-05-23 Thread Puneeth S B Gowda
remove me

On Thu, 23 May 2019 at 22:00,  wrote:

> Hi! This is the ezmlm program. I'm managing the
> user@ignite.apache.org mailing list.
>
> To confirm that you would like
>
>puneeth.go...@travelcentrictechnology.com
>
> removed from the user mailing list, please send a short reply
> to this address:
>
>user-uc.1558629039.eildjccjkaeedjkooohm-puneeth.gowda=
> travelcentrictechnology@ignite.apache.org
>
> Usually, this happens when you just hit the "reply" button.
> If this does not work, simply copy the address and paste it into
> the "To:" field of a new message.
>
> I haven't checked whether your address is currently on the mailing list.
> To see what address you used to subscribe, look at the messages you are
> receiving from the mailing list. Each message has your address hidden
> inside its return path; for example, m...@xdd.ff.com receives messages
> with return path: -mary=xdd.ff@ignite.apache.org.
>
> Some mail programs are broken and cannot handle long addresses. If you
> cannot reply to this request, instead send a message to
>  and put the entire address listed above
> into the "Subject:" line.
>
>
> --- Administrative commands for the user list ---
>
> I can handle administrative requests automatically. Please
> do not send them to the list address! Instead, send
> your message to the correct command address:
>
> To subscribe to the list, send a message to:
>
>
> To remove your address from the list, send a message to:
>
>
> Send mail to the following for info and FAQ for this list:
>
>
>
> Similar addresses exist for the digest list:
>
>
>
> To get messages 123 through 145 (a maximum of 100 per request), mail:
>
>
> To get an index with subject and author for messages 123-456 , mail:
>
>
> They are always returned as sets of 100, max 2000 per request,
> so you'll actually get 100-499.
>
> To receive all messages with the same subject as message 12345,
> send a short message to:
>
>
> The messages should contain one line or word of text to avoid being
> treated as sp@m, but I will ignore their content.
> Only the ADDRESS you send to is important.
>
> You can start a subscription for an alternate address,
> for example "john@host.domain", just add a hyphen and your
> address (with '=' instead of '@') after the command word:
> 
>
> To stop subscription for this address, mail:
> 
>
> In both cases, I'll send a confirmation message to that address. When
> you receive it, simply reply to it to complete your subscription.
>
> If despite following these instructions, you do not get the
> desired results, please contact my owner at
> user-ow...@ignite.apache.org. Please be patient, my owner is a
> lot slower than I am ;-)
>

unsubscribe

2019-05-23 Thread Puneeth S B Gowda
-- 

Warm regards,


Puneeth S B Gowda, MBA  | Agile Operations Manager

HotelHub LLP
Phone: +91 80 6741 8730
Cell: +91 96 3209 6056
Email: puneeth.go...@hotelhub.com
Website: www.hotelhub.com 


Re: Which cache gets expiry policy when creating near cache?

2019-05-23 Thread John Smith
Also is there a difference between these two?

ignite.getOrCreateCache(cacheConfig, nearConfig).withExpiryPolicy();

AND

ignite.getOrCreateNearCache(name, nearConfig).withExpiryPolicy();
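For reference, a minimal sketch of the two call paths above (the cache name and the 5-minute policy are illustrative assumptions, not from this thread). In both variants, withExpiryPolicy() does not change the underlying cache; it returns a proxy whose own operations carry the given policy:

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearExpirySketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> cacheConfig = new CacheConfiguration<>("myCache");
        NearCacheConfiguration<Integer, String> nearConfig = new NearCacheConfiguration<>();

        // Variant 1: create (or get) the cache and its near cache in one call.
        IgniteCache<Integer, String> c1 = ignite.getOrCreateCache(cacheConfig, nearConfig)
            .withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));

        // Variant 2: attach a near cache to an already-existing cache by name.
        IgniteCache<Integer, String> c2 = ignite.getOrCreateNearCache("myCache", nearConfig)
            .withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));

        // Either way, only operations made through c1/c2 use the 5-minute policy;
        // the cache itself keeps whatever ExpiryPolicyFactory its configuration defines.
        c1.put(1, "a");
    }
}
```

This sketch requires the ignite-core jar and starts an embedded node, so treat it as an outline rather than a drop-in test.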





On Thu, 23 May 2019 at 11:27, Ilya Kasnacheev 
wrote:

> Hello!
>
> Yes, I guess so.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 23 May 2019 at 17:53, John Smith :
>
>> So then I should create my regular cache first... Set the expiry policy
>> on that and then create near cache on top of that?
>>
>> On Thu, 23 May 2019 at 08:48, Ilya Kasnacheev 
>> wrote:
>>
>>> Hello!
>>>
>>> It will be set on the cache proxy returned by the withExpiryPolicy() method
>>> (and will be applied to near cache, I guess, if this is implemented at all)
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Thu, 23 May 2019 at 00:18, John Smith :
>>>
 Hi, when we use Ignite getOrCreateNearCache().withExpiryPolicy()

 Will the expire policy be set on the underlying cache or the near cache?

>>>


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread joaquinsanroman
Hi,

Thank you very much for your help!

You mean that I need to run an Ignite client inside the host from which I want
to make the query? With this configuration, hdfs will access the cluster
through the local client, right?

I would like to access it without having to run the Ignite client locally, but
this solves my problem temporarily.

If I wanted to access it through IPC, what would I need to configure?

Kind regards.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread Ilya Kasnacheev
Hello!

Unfortunately I'm not familiar enough with IGFS to answer such a question
purely from configuration variables.

What does it say when it does not work? Have you tried starting Ignite
client in the same VM prior to the launch? If you don't have Ignite client,
you will have to use IPC.

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 18:19, joaquinsanroman :

> Hi,
>
> I set fs.igfs.igfs.endpoint.no_embed to false, but it still does not work.
> This is the current situation:
>
> [xxx@snnni0006 ~]$ hdfs getconf -confkey fs.igfs.igfs.endpoint.no_embed
> false
>
> [xxx@snnni0006 ~]$ hdfs getconf -confkey IgfsIpcEndpointConfiguration.host
> snnni0010
>
> [xxx@snnni0006 ~]$ hdfs getconf -confkey IgfsIpcEndpointConfiguration.port
> 10500
>
> Am I configuring something wrong?
>
> Regards.
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: apache ignite in client mode - memory leak?

2019-05-23 Thread mahesh76private
>>What happens if you trigger garbage collection?
It frees up some memory, then usage continues to increase (as shown in the
earlier message).


I haven't taken GC logs. Is there a quick way to capture them?
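For what it's worth, a quick way to look at GC activity with stock JDK tools; the process-name pattern is an assumption, check `jps` output yourself. JDK 8 flag names are shown in the comments; JDK 9+ replaced them with -Xlog:

```shell
# Find the Ignite client JVM (pattern is hypothetical).
PID=$(jps -l | awk '/ignite|YourApp/ {print $1; exit}')

# Heap occupancy and GC counts every second, 10 samples.
jstat -gcutil "$PID" 1000 10

# Request a full GC on demand to see how much heap is actually reclaimable.
jcmd "$PID" GC.run

# For persistent GC logs, add to the client's JVM options and restart:
#   JDK 8:  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log
#   JDK 9+: -Xlog:gc*:file=gc.log:time,uptime
```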






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: apache ignite in client mode - memory leak?

2019-05-23 Thread Ilya Kasnacheev
Hello!

Do you have GC logs? What happens if you trigger garbage collection?

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 18:25, mahesh76private :

> Hi,
>
> When Ignite is in client mode, it seems to steadily consume about 20MB of
> heap per minute. See the metrics Ignite logs below. Please explain.
>
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=dba51c3c, uptime=00:56:00.345]
> ^-- H/N/C [hosts=5, nodes=6, CPUs=40]
> ^-- CPU [cur=0.23%, avg=0.65%, GC=0%]
> ^-- PageMemory [pages=0]
> ^-- Heap [used=*1755MB*, free=57.14%, comm=1824MB]
> ^-- Off-heap [used=0MB, free=-1%, comm=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=0, qSize=0]
> 2019-05-23 15:22:06.886  INFO 33 --- [eout-worker-#23]
> org.apache.ignite.internal.IgniteKernal  :
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=dba51c3c, uptime=00:57:00.348]
> ^-- H/N/C [hosts=5, nodes=6, CPUs=40]
> ^-- CPU [cur=0.27%, avg=0.64%, GC=0%]
> ^-- PageMemory [pages=0]
> ^-- Heap [used=*1774MB*, free=56.68%, comm=1824MB]
> ^-- Off-heap [used=0MB, free=-1%, comm=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=0, qSize=0]
> 2019-05-23 15:23:07.019  INFO 33 --- [eout-worker-#23]
> org.apache.ignite.internal.IgniteKernal  :
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=dba51c3c, uptime=00:58:00.350]
> ^-- H/N/C [hosts=5, nodes=6, CPUs=40]
> ^-- CPU [cur=0.23%, avg=0.63%, GC=0%]
> ^-- PageMemory [pages=0]
> ^-- Heap [used=*1793MB*, free=56.21%, comm=1824MB]
> ^-- Off-heap [used=0MB, free=-1%, comm=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=0, qSize=0]
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Which cache gets expiry policy when creating near cache?

2019-05-23 Thread Ilya Kasnacheev
Hello!

Yes, I guess so.

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 17:53, John Smith :

> So then I should create my regular cache first... Set the expiry policy on
> that and then create near cache on top of that?
>
> On Thu, 23 May 2019 at 08:48, Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> It will be set on the cache proxy returned by the withExpiryPolicy() method
>> (and will be applied to near cache, I guess, if this is implemented at all)
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, 23 May 2019 at 00:18, John Smith :
>>
>>> Hi, when we use Ignite getOrCreateNearCache().withExpiryPolicy()
>>>
>>> Will the expire policy be set on the underlying cache or the near cache?
>>>
>>


apache ignite in client mode - memory leak?

2019-05-23 Thread mahesh76private
Hi, 

When Ignite is in client mode, it seems to steadily consume about 20MB of
heap per minute. See the metrics Ignite logs below. Please explain.


Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=dba51c3c, uptime=00:56:00.345]
^-- H/N/C [hosts=5, nodes=6, CPUs=40]
^-- CPU [cur=0.23%, avg=0.65%, GC=0%]
^-- PageMemory [pages=0]
^-- Heap [used=*1755MB*, free=57.14%, comm=1824MB]
^-- Off-heap [used=0MB, free=-1%, comm=0MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=0, qSize=0]
2019-05-23 15:22:06.886  INFO 33 --- [eout-worker-#23]
org.apache.ignite.internal.IgniteKernal  : 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=dba51c3c, uptime=00:57:00.348]
^-- H/N/C [hosts=5, nodes=6, CPUs=40]
^-- CPU [cur=0.27%, avg=0.64%, GC=0%]
^-- PageMemory [pages=0]
^-- Heap [used=*1774MB*, free=56.68%, comm=1824MB]
^-- Off-heap [used=0MB, free=-1%, comm=0MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=0, qSize=0]
2019-05-23 15:23:07.019  INFO 33 --- [eout-worker-#23]
org.apache.ignite.internal.IgniteKernal  : 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=dba51c3c, uptime=00:58:00.350]
^-- H/N/C [hosts=5, nodes=6, CPUs=40]
^-- CPU [cur=0.23%, avg=0.63%, GC=0%]
^-- PageMemory [pages=0]
^-- Heap [used=*1793MB*, free=56.21%, comm=1824MB]
^-- Off-heap [used=0MB, free=-1%, comm=0MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=0, qSize=0]




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread joaquinsanroman
Hi,

I set fs.igfs.igfs.endpoint.no_embed to false, but it still does not work.
This is the current situation:

[xxx@snnni0006 ~]$ hdfs getconf -confkey fs.igfs.igfs.endpoint.no_embed
false

[xxx@snnni0006 ~]$ hdfs getconf -confkey IgfsIpcEndpointConfiguration.host
snnni0010

[xxx@snnni0006 ~]$ hdfs getconf -confkey IgfsIpcEndpointConfiguration.port
10500

Am I configuring something wrong?

Regards.







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Which cache gets expiry policy when creating near cache?

2019-05-23 Thread John Smith
So then I should create my regular cache first... Set the expiry policy on
that and then create near cache on top of that?

On Thu, 23 May 2019 at 08:48, Ilya Kasnacheev 
wrote:

> Hello!
>
> It will be set on the cache proxy returned by the withExpiryPolicy() method
> (and will be applied to near cache, I guess, if this is implemented at all)
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 23 May 2019 at 00:18, John Smith :
>
>> Hi, when we use Ignite getOrCreateNearCache().withExpiryPolicy()
>>
>> Will the expire policy be set on the underlying cache or the near cache?
>>
>


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread Ilya Kasnacheev
Hello!

It seems that IGFS can try to use a client node from the same process if
fs.igfs.igfs.endpoint.no_embed is set to false. But you have it set to
true, in which case it will use IPC by host/port.
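Read that way, a minimal core-site.xml fragment would look like this (property names as used in this thread; values illustrative):

```xml
<!-- Sketch: wire the igfs:// scheme to Ignite and choose embedded vs IPC mode. -->
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>
<property>
  <!-- false = try an embedded in-process client; true = always go over IPC host:port -->
  <name>fs.igfs.igfs.endpoint.no_embed</name>
  <value>false</value>
</property>
```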

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 17:20, joaquinsanroman :

> Hi,
>
> Yes, this is what I need. When I run "hdfs dfs -ls igfs://igfs@" on an
> external
> node (which does not run an Ignite node), it should connect to a defined
> endpoint:port to do IGFS operations.
>
> I have checked the documentation and I found 2 properties (File system URI:
> https://apacheignite-fs.readme.io/docs/file-system):
>
> - IgfsIpcEndpointConfiguration.host
> - IgfsIpcEndpointConfiguration.port
>
> I have configured them in my hdfs core-site.xml but it continues with the
> same error.
>
> Regards!
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread joaquinsanroman
Hi,

Yes, this is what I need. When I run "hdfs dfs -ls igfs://igfs@" on an external
node (which does not run an Ignite node), it should connect to a defined
endpoint:port to do IGFS operations.

I have checked the documentation and I found 2 properties (File system URI:
https://apacheignite-fs.readme.io/docs/file-system):

- IgfsIpcEndpointConfiguration.host
- IgfsIpcEndpointConfiguration.port

I have configured them in my hdfs core-site.xml but it continues with the
same error.

Regards!





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [External]Re: Read/query TPS is decreasing after enabling mix load i.e. write services

2019-05-23 Thread Ilya Kasnacheev
Hello!

I recommend gathering thread dumps from the cluster while it is having
performance issues, and sharing those dumps with us.
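A minimal way to grab such dumps with stock JDK tools (the PID lookup pattern is an assumption; repeat on each node while the slowdown is happening):

```shell
# Capture several thread dumps from the Ignite server JVM, ~5s apart.
PID=$(jps -l | awk '/ignite/ {print $1; exit}')
for i in 1 2 3; do
  jstack -l "$PID" > "threads-$(hostname)-$i.txt"   # or: jcmd "$PID" Thread.print
  sleep 5
done
```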

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 10:52, Kamlesh Joshi :

> Hi ilya,
>
>
>
> We tried both LOG_ONLY and BACKGROUND, but the behavior remains the
> same. Is there any other parameter tweak that would help?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, May 22, 2019 9:14 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: [External]Re: Read/query TPS is decreasing after enabling
> mix load i.e. write services
>
>
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hello!
>
>
>
> If it is 'timeout' then it is likely not the reason of your issues.
>
>
>
> What is your walMode? Is it LOG_ONLY? If it's not, try to use LOG_ONLY
> here.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, 22 May 2019 at 16:30, Kamlesh Joshi :
>
> Hi Ilya,
>
>
>
> Looking at the logs, every checkpoint starts for the
> ‘timeout’ reason, not because the buffer is full. So do we still need to
> increase the checkpoint buffer, or will only changing the checkpoint
> frequency help?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, May 22, 2019 4:19 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: [External]Re: Read/query TPS is decreasing after enabling
> mix load i.e. write services
>
>
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hello!
>
>
>
> We recommend using SSD and not HDD with Ignite. Otherwise, try to increase
> size of Checkpoint Page Buffer - if you run out of this buffer, all
> activity will stop until checkpoint is finished. Maybe you also need to
> decrease time between checkpoints.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Tue, 7 May 2019 at 13:01, Kamlesh Joshi :
>
> Hi,
>
> Unfortunately, I can't share the reproducer.
> We are using Ignite Binary Objects for performing operations on the
> cluster (Get/Put). We have exposed these operations to other application
> (i.e. TIBCO BW services) which performs operations on the cluster.
>
> Thanks and Regards,
> Kamlesh Joshi
>
> -Original Message-
> From: Maxim.Pudov 
> Sent: Wednesday, April 17, 2019 8:18 PM
> To: user@ignite.apache.org
> Subject: [External]Re: Read/query TPS is decreasing after enabling mix
> load i.e. write services
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hi, could you share a reproducer of your problem? We are missing a lot of
> information here. The configuration of your nodes, cache configurations,
> what API you use to query and update the data.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
> "Confidentiality Warning: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> Virus Warning: Although the company has taken reasonable precautions to
> ensure no viruses are present in this email, the company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."

Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread Ilya Kasnacheev
Hello!

I don't understand what you are trying to do. Do you want igfs://igfs@/ to
spawn a client node that would connect to a cluster and do IGFS operations?

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 16:16, joaquinsanroman :

> Hi Ilya,
>
> I am not using SHMEM, because the client and the servers are on different
> hosts.
>
> If I use @localhost:10500, I will never connect, because no Ignite node is
> running on localhost.
>
> My intention is to connect to a cluster remotely.
>
> Do you know how to do it?
>
> Thank you very much,
> Regards.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: sessionFactory.getCache().evictCollectionData

2019-05-23 Thread Tomasz Prus
Interestingly, when I run evictCollectionData twice, it works.
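For context, a sketch of the eviction call being discussed (the entity and collection role names are hypothetical; signatures are from the Hibernate 5.3 org.hibernate.Cache API and may differ slightly in other versions):

```java
import org.hibernate.SessionFactory;

public class EvictSketch {
    // Evict one owner's cached collection. The role string is
    // "OwnerEntityName.collectionFieldName" (hypothetical example below).
    static void evictOrders(SessionFactory sessionFactory, Long customerId) {
        sessionFactory.getCache()
            .evictCollectionData("com.example.Customer.orders", customerId);
    }

    // Broader hammer: drop every cached collection region.
    static void evictAllCollections(SessionFactory sessionFactory) {
        sessionFactory.getCache().evictCollectionData();
    }
}
```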

Thu, 23 May 2019 at 13:39, Tomasz Prus wrote:

> Hello,
> I have configured Ignite cache with Hibernate 2L cache for to instances
> and almost everything works fine but when trying to evict collection data
> after new entity creation, seems that eviction doesn't work because there
> is no new entity in that evicted collection. My configs:
>
>  class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
> 
> 
> 
> 
> 
>  class="org.apache.ignite.configuration.NearCacheConfiguration"/>
> 
> 
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
> 
> 
>  value="org.hibernate.cache.spi.UpdateTimestampsCache"/>
> 
>
> 
>  value="org.hibernate.cache.internal.StandardQueryCache"/>
> 
> 
>  value="default-query-results-region"/>
> 
> 
>  value="default-update-timestamps-region"/>
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> 
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
>  value="228.10.10.157"/>
> 
> 
> 
> 
> 
> ...
>  destroy-method="close">
>  value="${hibernate.connection.driver_class}" />
> 
> 
>  />
>
> 
> 
> 
> 
> 
> 
>
> 
> 
>
> 
> 
>
> 
> 
>
> 
>
> 
>
>  class="org.springframework.orm.hibernate5.LocalSessionFactoryBean"
> depends-on="igniteInstance">
>
> 
> 
> 
>
> 
> 
> 
>  key="hibernate.session_factory_name">our-session-factory
>  key="hibernate.session_factory_name_is_jndi">false
> ${hibernate.dialect}
> false
> true
>  key="hibernate.connection.characterEncoding">UTF-8
> true
>  key="hibernate.cache.use_second_level_cache">true
>  key="hibernate.cache.region.factory_class">org.apache.ignite.cache.hibernate.HibernateRegionFactory
>  key="org.apache.ignite.hibernate.default_access_type">READ_WRITE
>  key="org.apache.ignite.hibernate.ignite_instance_name">myGrid
> 
> 
>
> If I use only one instance (one application), eviction works fine. Can you
> help me?
>


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread joaquinsanroman
Hi Ilya,

I am not using SHMEM, because the client and the servers are on different
hosts.

If I use @localhost:10500, I will never connect, because no Ignite node is
running on localhost.

My intention is to connect to a cluster remotely.

Do you know how to do it?

Thank you very much,
Regards.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: When the client frequently has FullGC, it blocks all requests from the server. "Possible starvation in striped pool"

2019-05-23 Thread Ilya Kasnacheev
Hello!

I think that this will only be mitigated by moving to some kind of thin
client. Optionally, you can try to move the thick client out of the JVM that
is having long GCs (into a separate JVM?).
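A sketch of the thin-client route (note: the Java thin client appeared in Ignite 2.5, newer than the 2.4.0 used in this thread; the address and cache name below are taken from the thread's config, the port is the default client connector port):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientSketch {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("10.16.133.179:10800"); // server's client connector port

        // A thin client is not a cluster topology member, so long GC pauses in
        // this JVM cannot stall the cluster the way a thick client's can.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<String, byte[]> cache =
                client.getOrCreateCache("QIPU_ENTITY_CACHE");
            cache.put("k", new byte[] {1, 2, 3});
        }
    }
}
```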

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 04:59, 赵剑 :

> Hello
> When the client frequently has FullGC, it blocks all requests from the
> server. I have tried modifying many server parameters to solve this problem.
> The modified parameters are as follows:
> slowClientQueueLimit
> socketWriteTimeout
> clientFailureDetectionTimeout
> failureDetectionTimeout
>
> When blocking occurs, the log shows a large number of "[2019-05-21T16:36:04,880][WARN
> ][grid-timeout-worker-#10343][G] >>> Possible starvation in striped pool."
>
> Please refer to the attachment for the full log, 10.110.118.53 in the log
> is the FullGC test node.
>
> What parameters can be modified to avoid similar problems? What
> adjustments do I need to make?
>
> Thank you very much.
>
> Ignite Version 2.4.0
>
> server config file:
>
> 
> http://www.springframework.org/schema/beans;
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>xmlns:util="http://www.springframework.org/schema/util;
>xsi:schemaLocation="http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd
> http://www.springframework.org/schema/util
> http://www.springframework.org/schema/util/spring-util.xsd
> ">
>  class="org.apache.ignite.configuration.IgniteConfiguration">
>  
>  
>  
> 
> 
>
> 
> 
>
> 
> 
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
> 
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
> 
> 
> 
> 
> 
> 
> 
> 
>  value="/home/qipu/production/apache-ignite-2.4.0/persistence"/>
>  value="/home/qipu/production/apache-ignite-2.4.0/wal"/>
>  value="/home/qipu/production/apache-ignite-2.4.0/wal/archive"/>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> 
> 
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 
> 
> 
> 10.16.133.179:47500..47509
> 10.16.133.180:47500..47509
> 10.16.133.181:47500..47509
> 10.16.133.182:47500..47509
> 10.16.133.183:47500..47509
> 10.16.133.184:47500..47509
> 10.16.133.185:47500..47509
> 10.16.133.186:47500..47509
> 10.16.133.187:47500..47509
> 10.16.133.188:47500..47509
> 
> 
> 
> 
> 
> 
> 
> 
>  value="/config/ignite-log4j2.xml"/>
> 
> 
> 
> 
>
>
>
> client code:
>
> IgniteCluster igniteCluster = IgniteCluster.valueOf("CLUSTER_A");
> boolean usePairedConnections = true;
> int messageQueueLimit = 20480;
> System.out.println("ignite.cluster: "+igniteCluster+" ,
> ignite.usePairedConnections: "+usePairedConnections+" ,
> ignite.messageQueueLimit: "+messageQueueLimit);
>
> Ignition.setClientMode(true);
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder();
>
>
> finder.setAddresses(Arrays.asList(igniteCluster.getConfig().getServer().split(",")));
>
> spi.setIpFinder(finder);
>
> TcpCommunicationSpi tcpCommunicationSpi = new
> TcpCommunicationSpi();
> tcpCommunicationSpi.setUsePairedConnections(usePairedConnections);
> tcpCommunicationSpi.setMessageQueueLimit(messageQueueLimit);
>
> cfg.setDiscoverySpi(spi);
> cfg.setCommunicationSpi(tcpCommunicationSpi);
> ignite = Ignition.start(cfg);
>
> igniteCache =
> ignite.getOrCreateCache(IgniteCacheName.valueOf("QIPU_ENTITY_CACHE").toString());
>
> // read operation
> byte[] value = cache.getAsync(key).get(500);
> // write operation
> cache.putAsync(entry.getKey(), entry.getValue()).get(putTimeOut);
>


Re: Issue with CacheQueryReadEvent's queryType

2019-05-23 Thread Ilya Kasnacheev
Hello!

Yes, using the JDBC thin driver is preferred, because a lot of development
work is happening around it.
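For illustration, a query over the JDBC thin driver looks like this (the URL, table, and columns are assumptions, not from this thread; the driver registers itself when the ignite-core jar is on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcThinSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM Person")) {
            while (rs.next())
                System.out.println(rs.getLong("id") + " " + rs.getString("name"));
        }
    }
}
```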

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 10:30, Garaude, Benjamin <
benjamin.gara...@wolterskluwer.com>:

> Hi,
>
>
>
> Thanks for your answer, I’ll file an issue and we’ll see.
>
>
>
> Just one question: when you say “everyone seems to be using JDBC”, you
> mean they are not using SQLFieldQuery, but plain JDBC queries using the
> ignite JDBC driver?
>
> Is that approach recommended over SQLFieldQueries?
>
>
>
> Regards,
>
>
>
>
>
> Benjamin GARAUDE
>
>
>
>
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, 22 May 2019 18:24
> *To:* user@ignite.apache.org
> *Subject:* Re: Issue with CacheQueryReadEvent's queryType
>
>
>
> Hello!
>
>
>
> It seems that we always report SQL for two-step queries (i.e. ones which
> are not simply a lookup by key).
>
>
>
> I think you need to live with that, however you can try and file an issue
> against JIRA. I doubt it will get much traction since everyone seems to be
> using JDBC anyway.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Tue, 21 May 2019 at 11:17, Garaude, Benjamin <
> benjamin.gara...@wolterskluwer.com>:
>
> Hello,
>
>
>
> I'm trying to listen locally events of type
> EventType.EVT_CACHE_QUERY_OBJECT_READ
>
>
>
> I enable the events with:
>
> ignite.events().enableLocal(EventType.EVT_CACHE_QUERY_OBJECT_READ);
>
>
>
> And I then register a local listener with:
>
> ignite.events().localListen(myListenerInstance,
> EventType.EVT_CACHE_QUERY_OBJECT_READ);
>
>
>
> It works fine except that when I execute a SqlFieldsQuery on a cache, the
> event I receive has the property queryType set to SQL and not SQL_FIELDS.
>
>
>
> I've created a test case reproducing this issue:
>
> https://github.com/bgaraude/IgniteTest/tree/master/ignite-query-event
> 
>
>
>
> Am I missing something?
>
>
>
> Benjamin
>
>


Re: Which cache gets expiry policy when creating near cache?

2019-05-23 Thread Ilya Kasnacheev
Hello!

It will be set on the cache proxy returned by the withExpiryPolicy() method
(and will be applied to near cache, I guess, if this is implemented at all)

Regards,
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 00:18, John Smith :

> Hi, when we use Ignite getOrCreateNearCache().withExpiryPolicy()
>
> Will the expire policy be set on the underlying cache or the near cache?
>


Re: Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread Ilya Kasnacheev
Hello!

I don't think you actually want to use SHMEM. How about just using
@localhost:10500?
-- 
Ilya Kasnacheev


Thu, 23 May 2019 at 13:15, joaquinsanroman :

> Hi,
>
> First of all, thank you very much for your help.
>
> I have configured one IGFS cluster without HDFS secondary filesystem
> because
> the intention is to use IGFS as an independent storage.
>
> The configuration file for all server nodes is the next:
>
> [The Spring XML configuration was stripped of its markup by the list archiver
> and cannot be reproduced here. From the surviving fragments it defined: an
> IgniteConfiguration bean with a ConnectorConfiguration, ZooKeeper-based
> discovery (a connection string listing three ZooKeeper hosts), a
> FileSystemConfiguration for IGFS with PARTITIONED, TRANSACTIONAL data and
> metadata caches, and an IgfsIpcEndpointConfiguration exposing the IPC
> endpoint on port 10500.]
>
> I can access IGFS through Java by using the same configuration file
> and adding Ignition.setClientMode(true).
> In Spark, I am also able to access data remotely by setting client mode and
> including the libraries in the classpath.
>
> My problem is when I try to get information or files directly through "hdfs
> dfs". I have set the properties inside the core-site.xml:
>
> fs.igfs.impl
> org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem
> Class name mapping
>
> fs.AbstractFileSystem.igfs.impl
> org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem
> Class name mapping
>
> fs.igfs.igfs.config_path
> /tmp/IGFSConfigZookeeper.xml
> Class name mapping
>
> fs.igfs.igfs.endpoint.no_embed
> true
> Class name mapping
>
> The /tmp/IGFSConfigZookeeper.xml has the same configuration of the ignite
> servers but including the property:
>
> 
>
> When I run the command "hdfs dfs -ls igfs://igfs@/", I get the error:
>
> ls: Failed to communicate with IGFS: Failed to connect to IGFS
> [endpoint=igfs://igfs@, attempts=[[type=SHMEM, port=10500,
> err=java.io.IOException: Failed to connect shared memory endpoint to port
> (is shared memory server endpoint up and running?): 10500], [type=TCP,
> host=127.0.0.1, port=10500, err=java.io.IOException: Failed to connect to
> endpoint [host=127.0.0.1, port=10500]]] (ensure that IGFS is running and
> have IPC endpoint enabled; ensure that ignite-shmem-1.0.0.jar is in Hadoop
> classpath if you use shared memory endpoint).
>
> Otherwise, it runs ok when I execute: hdfs dfs -ls igfs://igfs@/host1:10500/
>
>
> I think the problem is that I am not setting the client connection properties
> correctly in the /tmp/IGFSConfigZookeeper.xml file.
>
> Could you help me with the error?
>
> Thank you very much,
> Regards.
>
>
>
>
>


How to use transaction.commitAsync()?

2019-05-23 Thread kimec.ethome.sk
Let's assume I need to update an item in a cache and then invoke 
commitAsync(). Is the following a valid code pattern?


Transaction transaction = ignite.transactions().txStart();
cache.putAsync(key, value); // this
cache.put(key, value); // or this
transaction.commitAsync().listen(fut -> /* respond to the caller */);
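
[Editor's note: the listener passed to listen() may run on a different thread once the commit completes, so any response to the caller must be thread-safe. A minimal sketch of the same callback flow using plain JDK futures — not the Ignite API; all names here are illustrative:]

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class AsyncCommitPattern {
    public static void main(String[] args) throws Exception {
        AtomicReference<String> response = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);

        // Stand-in for cache.put(...) + transaction.commitAsync():
        // the "commit" completes asynchronously on a pool thread.
        CompletableFuture<Void> commitFuture =
            CompletableFuture.runAsync(() -> { /* commit work happens here */ });

        // Equivalent of commitAsync().listen(fut -> ...): react when done.
        commitFuture.whenComplete((ignored, err) -> {
            response.set(err == null ? "committed" : "failed: " + err);
            done.countDown();
        });

        // The listener may fire on another thread; wait for it before reading.
        if (!done.await(5, TimeUnit.SECONDS))
            throw new IllegalStateException("listener never fired");
        System.out.println(response.get());
    }
}
```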

Thanks!

Kamil


zk connect loss

2019-05-23 Thread wangsan
13:10:06.119 [main] ERROR   - Failed to resolve default logging config file:
config/java.util.logging.properties
13:10:37.097 [main] ERROR o.a.i.s.d.z.internal.ZookeeperClient  - Operation
failed with unexpected error, connection lost:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /search
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /search
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1130)
at
org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.exists(ZookeeperClient.java:280)
at
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.initZkNodes(ZookeeperDiscoveryImpl.java:792)
at
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoin(ZookeeperDiscoveryImpl.java:960)
at
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.joinTopology(ZookeeperDiscoveryImpl.java:778)
at
org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:696)
at
org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:474)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:915)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1720)
at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1033)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:66)





sessionFactory.getCache().evictCollectionData

2019-05-23 Thread Tomasz Prus
Hello,
I have configured Ignite as the Hibernate L2 cache for two instances, and
almost everything works fine. But when I try to evict collection data after
creating a new entity, eviction doesn't seem to work: the new entity is
missing from the evicted collection. My configs:

[The Hibernate/Spring XML configuration was stripped of its markup by the
list archiver. The surviving property values were: our-session-factory,
${hibernate.dialect}, UTF-8, org.apache.ignite.cache.hibernate.HibernateRegionFactory,
READ_WRITE, myGrid, plus several true/false flags whose property names were lost.]

If I use only one instance (one application), eviction works fine. Can you
help me?


Re: failed to get the security context object

2019-05-23 Thread Zaheer
Hi,

I am also trying to develop a security plugin for Ignite. The security context
in the case of a Visor call is null, and even SecurityContextHolder won't work.
Because:

1. *SecurityContextHolder* has a ThreadLocal variable holding the
*SecurityContext*. So it only works if your authenticate and authorize calls
happen on the same thread, as with a *REST* call. Try printing
Thread.currentThread().getName() in your calls and you will see what I mean.

2. When you connect Visor to the grid, the *authenticateNode* method is called.
After that, any call you make invokes only the *authorize* method, and only
if the plugin is configured on Visor. So *SecurityContextHolder.set()* happens
in *authenticateNode*, which runs on the *tcp-discovery-worker* thread, while
*SecurityContextHolder.get()* happens in *authorize*, which runs on a separate
thread depending on the Visor call. So here
*SecurityContextHolder* will not work.



For the cases of Visor, any server node, or a thick client joining the cluster,
the *SecurityContext* passed in is null. To overcome this, store the local
node's security context in a field in your plugin, say *localSecurityContext*,
representing the security context of the local node. You can try something like
this:

public class MySecurityProcessor extends GridProcessorAdapter implements
    DiscoverySpiNodeAuthenticator, GridSecurityProcessor, IgnitePlugin {

    private MySecurityContext localSecurityContext;

    public SecurityContext authenticateNode(ClusterNode node,
        SecurityCredentials cred) throws IgniteCheckedException {

        // Write your logic to authenticate the node and return its SecurityContext.

        // Check if the node is local, and store the security context in your
        // local field before returning:
        // if (node.isLocal()) localSecurityContext = ...;
    }

    public SecurityContext authenticate(AuthenticationContext
        authenticationContext) throws IgniteCheckedException {
        SecuritySubject secureSecuritySubject = new SecuritySubject(
            authenticationContext.subjectId(),
            authenticationContext.subjectType(),
            authenticationContext.credentials().getLogin(),
            authenticationContext.address()
        );
        // accessToken comes from your own authentication logic.
        SecurityContext securityContext =
            new MySecurityContext(secureSecuritySubject, accessToken);
        SecurityContextHolder.set(securityContext);
        return securityContext;
    }

    public void authorize(String name, SecurityPermission perm, SecurityContext
        securityCtx) throws SecurityException {
        System.out.println(SecurityContextHolder.get());
        System.out.println(securityCtx);
        // If the passed-in context is null, fall back to the local one:
        if (securityCtx == null) securityCtx = localSecurityContext;
        // ... do some authorization ...
    }

    // ...
}


Note that this will work only if *isGlobalNodeAuthentication* is true, because
only then is *authenticateNode* called on each joining node (instead of on the
coordinator), so you can save the context in the local field. The joining node
must also have the plugin configured for this to work.







Hadoop client configuration IGFS - hdfs dfs

2019-05-23 Thread joaquinsanroman
Hi, 

First of all, thank you very much for your help.

I have configured one IGFS cluster without HDFS secondary filesystem because
the intention is to use IGFS as an independent storage. 

The configuration file for all server nodes is the next: 


[The Spring XML configuration was stripped of its markup by the list archiver
and cannot be reproduced here. From the surviving fragments it defined: an
IgniteConfiguration bean with a ConnectorConfiguration, ZooKeeper-based
discovery (a connection string listing three ZooKeeper hosts), a
FileSystemConfiguration for IGFS with PARTITIONED, TRANSACTIONAL data and
metadata caches, and an IgfsIpcEndpointConfiguration exposing the IPC endpoint
on port 10500.]

I can access IGFS through Java by using the same configuration file
and adding Ignition.setClientMode(true).
In Spark, I am also able to access data remotely by setting client mode and
including the libraries in the classpath.

My problem is when I try to get information or files directly through "hdfs
dfs". I have set the properties inside the core-site.xml: 

fs.igfs.impl 
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem 
Class name mapping 

fs.AbstractFileSystem.igfs.impl 
org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem 
Class name mapping 

fs.igfs.igfs.config_path 
/tmp/IGFSConfigZookeeper.xml 
Class name mapping 

fs.igfs.igfs.endpoint.no_embed 
true 
Class name mapping
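
[Editor's note: written out as proper core-site.xml markup (the archiver
stripped the tags), those properties would look roughly like this; the values
are the ones given in this thread:]

```xml
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
</property>
<property>
  <name>fs.igfs.igfs.config_path</name>
  <value>/tmp/IGFSConfigZookeeper.xml</value>
</property>
<property>
  <name>fs.igfs.igfs.endpoint.no_embed</name>
  <value>true</value>
</property>
```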

The /tmp/IGFSConfigZookeeper.xml has the same configuration of the ignite
servers but including the property: 



When I run the command "hdfs dfs -ls igfs://igfs@/", I get the error: 

ls: Failed to communicate with IGFS: Failed to connect to IGFS
[endpoint=igfs://igfs@, attempts=[[type=SHMEM, port=10500,
err=java.io.IOException: Failed to connect shared memory endpoint to port
(is shared memory server endpoint up and running?): 10500], [type=TCP,
host=127.0.0.1, port=10500, err=java.io.IOException: Failed to connect to
endpoint [host=127.0.0.1, port=10500]]] (ensure that IGFS is running and
have IPC endpoint enabled; ensure that ignite-shmem-1.0.0.jar is in Hadoop
classpath if you use shared memory endpoint). 

Otherwise, it runs ok when I execute: hdfs dfs -ls igfs://igfs@/host1:10500/ 

I think the problem is that I am not setting the client connection properties
correctly in the /tmp/IGFSConfigZookeeper.xml file.
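
[Editor's note: one thing worth double-checking on the server side is the IPC
endpoint declaration inside FileSystemConfiguration. A sketch — the port is the
one from this thread, while binding the host to 0.0.0.0 so remote clients are
not restricted to 127.0.0.1 is an assumption to verify against your setup:]

```xml
<property name="ipcEndpointConfiguration">
    <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
        <property name="type" value="TCP"/>
        <property name="host" value="0.0.0.0"/>
        <property name="port" value="10500"/>
    </bean>
</property>
```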

Could you help me with the error? 

Thank you very much, 
Regards. 






JVM Halt on Null Pointer Exception in GridDhtTxAbstractEnlistFuture

2019-05-23 Thread garima.j
Hello,

We have a 3-node Apache Ignite 2.7 cluster in production, 128 GB RAM. A Spark
streaming service (with thick Ignite clients) writes data into an Ignite cache
in a transaction (first get, then put).
Then 3 Spark clients stopped, and NODE_FAILED events were received. Ignite
crashed on one node with the stack trace below:
 
[2019-05-23T13:57:04,976][WARN ][sys-stripe-5-#6][lock] Received near enlist
request from unknown node (will ignore) [txId=GridCacheVersion
[topVer=169659586, order=1558471024158, nodeOrder=23],
node=1be3bce3-7220-45bc-9863-4f16d97ea22b]
[2019-05-23T13:57:04,977][ERROR][sys-stripe-5-#6][GridCacheIoManager] Failed
processing message [senderId=1be3bce3-7220-45bc-9863-4f16d97ea22b,
msg=GridNearTxEnlistRequest [threadId=5872,
futId=c3170abca61-33b3ea8d-0a3e-44cb-83e6-032a37a9eed1, clientFirst=false,
miniId=1, subjId=1be3bce3-7220-45bc-9863-4f16d97ea22b,
topVer=AffinityTopologyVersion [topVer=101, minorTopVer=0],
lockVer=GridCacheVersion [topVer=169659586, order=1558471024158,
nodeOrder=23], mvccSnapshot=MvccSnapshotResponse [futId=1221240,
crdVer=1558179485875, cntr=110485182, opCntr=1, txs=[101051367, 110485176],
cleanupVer=101051361, tracking=0], timeout=5000, txTimeout=5000,
taskNameHash=0, op=UPSERT, needRes=false]]
java.lang.NullPointerException: null
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.<init>(GridDhtTxAbstractEnlistFuture.java:237)
~[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxEnlistFuture.<init>(GridDhtTxEnlistFuture.java:84)
~[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.processNearTxEnlistRequest(GridDhtTransactionalCacheAdapter.java:2061)
~[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.access$900(GridDhtTransactionalCacheAdapter.java:112)
~[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:229)
~[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:227)
~[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:380)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:306)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:101)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:295)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
[ignite-core-2.7.0.jar:2.7.0]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
[ignite-core-2.7.0.jar:2.7.0]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2019-05-23T13:57:05,043][ERROR][sys-stripe-5-#6][] Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]],
failureCtx=FailureContext [type=CRITICAL_ERROR,
err=java.lang.NullPointerException]]
java.lang.NullPointerException: null

Please help and let me know why this failure happened.





Re: Apache ignite rewrite badly SQL ???

2019-05-23 Thread Roman Kondakov

Hi!

Is your data collocated? To perform fast joins in a distributed system,
tables should be collocated on the join keys [1] to avoid
network latency.


Also, Ignite's SQL optimizer is currently not able to select the best
join order. If your query is slow, you should choose the join order
manually using the enforceJoinOrder flag [2].


[1] https://apacheignite.readme.io/docs/affinity-collocation

[2] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/SqlFieldsQuery.html#setEnforceJoinOrder-boolean-
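
[Editor's note: setting the flag from [2] on a query looks roughly like this —
a sketch against the Ignite Java API, using the tables from the question below;
the surrounding cache setup is omitted:]

```java
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select * from TC co " +
    "inner join TD dd on dd.eid = co.eid and dd.mid = co.mid " +
    "inner join TS sch on sch.eid = dd.eid and sch.mid = dd.mid")
    .setEnforceJoinOrder(true); // joins execute exactly in the written order

try (FieldsQueryCursor<List<?>> cur = cache.query(qry)) {
    cur.forEach(System.out::println);
}
```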



--
Kind Regards
Roman Kondakov

On 21.05.2019 20:38, yann.blaz...@externe.bnpparibas.com wrote:

Hello all, I have a big problem.

I have a lot of tables in my Ignite cluster, and one query takes 10
minutes when executed as a SqlFieldsQuery, but only 153 ms over JDBC from
DataGrip.


Let me explain; I had to change some names in the query so as not to divulge
information.



I have 3 tables:

TC with 557 000 rows
TD with 3753 rows
TS with 1500 rows.


I want to execute this query:

select * from TC co
  inner join TD dd on dd.eid = co.eid and dd.mid = co.mid
  inner join TS sch on sch.eid = dd.eid and sch.mid = dd.mid;

From DataGrip it takes 153 ms.

When I look into the logs, I see that the query has been rewritten like this:

SELECT *
FROM TC CO__Z0
 /*CONTRACT.__SCAN_ */ INNER JOIN TD DD__Z1
 /* batched:unicast TD_3: EID = CO__Z0.EID AND MID = CO__Z0.MID */ ON 1=1 /* WHERE (DD__Z1.EID = CO__Z0.EID) AND (DD__Z1.MID = CO__Z0.MID) */ 
INNER JOIN TS SCH__Z2

 /* batched:unicast TS_2: EID = DD__Z1.EID AND MID = DD__Z1.MID */ ON 1=1 
WHERE ((SCH__Z2.EID = DD__Z1.EID)
 AND (SCH__Z2.MID = DD__Z1.MID))
 AND ((DD__Z1.EID = CO__Z0.EID)
 AND (DD__Z1.MID = CO__Z0.MID))

Why these ON 1=1 clauses?

If I understand correctly, this produces a Cartesian product of my rows,
which would explain the 10 minutes! Why is the query rewritten like that?


Thanks for your help, regards

This message and any attachments (the "message") is
intended solely for the intended addressees and is confidential.
If you receive this message in error,or are not the intended 
recipient(s),

please delete it and any copies from your systems and immediately notify
the sender. Any unauthorized view, use that does not comply with its 
purpose,
dissemination or disclosure, either whole or partial, is prohibited. 
Since the internet
cannot guarantee the integrity of this message which may not be 
reliable, BNP PARIBAS
(and its subsidiaries) shall not be liable for the message if 
modified, changed or falsified.
Do not print this message unless it is necessary, consider the 
environment.





RE: [External]Re: Read/query TPS is decreasing after enabling mix load i.e. write services

2019-05-23 Thread Kamlesh Joshi
Hi ilya,

We tried both LOG_ONLY and BACKGROUND, but the behavior remains the same. Is
there any other parameter tweak that would help?

Thanks and Regards,
Kamlesh Joshi

From: Ilya Kasnacheev 
Sent: Wednesday, May 22, 2019 9:14 PM
To: user@ignite.apache.org
Subject: Re: [External]Re: Read/query TPS is decreasing after enabling mix load 
i.e. write services


The e-mail below is from an external source. Please do not open attachments or 
click links from an unknown or suspicious origin.
Hello!

If it is 'timeout' then it is likely not the reason of your issues.

What is your walMode? Is it LOG_ONLY? If it's not, try to use LOG_ONLY here.

Regards,
--
Ilya Kasnacheev


ср, 22 мая 2019 г. в 16:30, Kamlesh Joshi 
mailto:kamlesh.jo...@ril.com>>:
Hi Ilya,

Looking at the logs, every checkpoint starts for the ‘timeout’ reason, not
because the buffer is full. So do we still need to increase the checkpoint
buffer, or will changing the checkpoint frequency alone help?

Thanks and Regards,
Kamlesh Joshi

From: Ilya Kasnacheev 
mailto:ilya.kasnach...@gmail.com>>
Sent: Wednesday, May 22, 2019 4:19 PM
To: user@ignite.apache.org
Subject: Re: [External]Re: Read/query TPS is decreasing after enabling mix load 
i.e. write services


The e-mail below is from an external source. Please do not open attachments or 
click links from an unknown or suspicious origin.
Hello!

We recommend using SSD and not HDD with Ignite. Otherwise, try to increase size 
of Checkpoint Page Buffer - if you run out of this buffer, all activity will 
stop until checkpoint is finished. Maybe you also need to decrease time between 
checkpoints.
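
[Editor's note: the knobs mentioned above live in the data storage
configuration. A sketch — the 4 GB buffer and 3-minute frequency are example
values, not recommendations:]

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="walMode" value="LOG_ONLY"/>
        <!-- Time between checkpoints, in milliseconds (3 minutes here). -->
        <property name="checkpointFrequency" value="180000"/>
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="persistenceEnabled" value="true"/>
                <!-- Checkpoint page buffer, in bytes (4 GB here). -->
                <property name="checkpointPageBufferSize" value="4294967296"/>
            </bean>
        </property>
    </bean>
</property>
```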

Regards,
--
Ilya Kasnacheev


вт, 7 мая 2019 г. в 13:01, Kamlesh Joshi 
mailto:kamlesh.jo...@ril.com>>:
Hi,

Unfortunately, I cant share the reproducer.
We are using Ignite Binary Objects for performing operations on the cluster 
(Get/Put). We have exposed these operations to other application (i.e. TIBCO BW 
services) which performs operations on the cluster.

Thanks and Regards,
Kamlesh Joshi

-Original Message-
From: Maxim.Pudov mailto:pudov@gmail.com>>
Sent: Wednesday, April 17, 2019 8:18 PM
To: user@ignite.apache.org
Subject: [External]Re: Read/query TPS is decreasing after enabling mix load 
i.e. write services

The e-mail below is from an external source. Please do not open attachments or 
click links from an unknown or suspicious origin.

Hi, could you share a reproducer of your problem? We are missing a lot of 
information here. The configuration of your nodes, cache configurations, what 
API you use to query and update the data.




"Confidentiality Warning: This message and any attachments are intended only 
for the use of the intended recipient(s), are confidential and may be 
privileged. If you are not the intended recipient, you are hereby notified that 
any review, re-transmission, conversion to hard copy, copying, circulation or 
other use of this message and any attachments is strictly prohibited. If you 
are not the intended recipient, please notify the sender immediately by return 
email and delete this message and any attachments from your system.

Virus Warning: Although the company has taken reasonable precautions to ensure 
no viruses are present in this email, the company cannot accept responsibility 
for any loss or damage arising from the use of this email or attachment."

RE: Issue with CacheQueryReadEvent's queryType

2019-05-23 Thread Garaude, Benjamin
Hi,

Thanks for your answer, I’ll file an issue and we’ll see.

Just one question: when you say “everyone seems to be using JDBC”, do you mean 
they are not using SqlFieldsQuery, but plain JDBC queries via the Ignite JDBC 
driver?
Is that approach recommended over SqlFieldsQuery?

Regards,


Benjamin GARAUDE



From: Ilya Kasnacheev 
Sent: Wednesday, 22 May 2019 18:24
To: user@ignite.apache.org
Subject: Re: Issue with CacheQueryReadEvent's queryType

Hello!

It seems that we always report SQL for two-step queries (i.e. ones that are 
not simply a lookup by key).

I think you need to live with that, however you can try and file an issue 
against JIRA. I doubt it will get much traction since everyone seems to be 
using JDBC anyway.

Regards,
--
Ilya Kasnacheev


вт, 21 мая 2019 г. в 11:17, Garaude, Benjamin 
mailto:benjamin.gara...@wolterskluwer.com>>:
Hello,

I'm trying to listen locally events of type 
EventType.EVT_CACHE_QUERY_OBJECT_READ

I enable the events with:
ignite.events().enableLocal(EventType.EVT_CACHE_QUERY_OBJECT_READ);

And I then register a local listener with:
ignite.events().localListen(myListenerInstance, 
EventType.EVT_CACHE_QUERY_OBJECT_READ);

It works fine except that when I execute a SqlFieldsQuery on a cache, the event 
I receive has the property queryType set to SQL and not SQL_FIELDS.

I've created a test case reproducing this issue:
https://github.com/bgaraude/IgniteTest/tree/master/ignite-query-event

Am I missing something?

Benjamin