Re: Distributed Training in tensorflow

2019-01-10 Thread Mehdi Seydali
Yes, you are right. I have debated this a lot. I previously had the idea that,
if we already have DL4J (running over Spark), what would be the point of running
DL4J over Ignite? But after googling and discussing it with you, I think that
would be a waste of time. Spark is an in-memory computing platform, and so is
Ignite. In distributed deep learning we speed up learning by distributing the
model training. DL4J is a distributed deep learning framework, and I think
integrating it with Ignite would give no further speedup. My earlier thought was
that we could use IgniteRDD for a speedup, but I now understand that in deep
learning we rarely share data in a way that IgniteRDD would help with. Do you
agree with my interpretation? Do you have any comments?

On Wednesday, January 9, 2019, dmitrievanthony wrote:

> Let me also add that it depends on what you want to achieve. TensorFlow
> supports distributed training and does it on its own. But if you use
> pure TensorFlow you'll have to start TensorFlow workers manually and
> distribute data manually as well. You can do that, I mean start workers
> manually on the nodes the Ignite cluster occupies, or even on other nodes. It
> will work, perhaps well in some cases, and very well given an
> accurate manual setup.
>
> At the same time, Apache Ignite provides cluster management functionality
> for TensorFlow that allows workers to be started automatically on the same nodes
> where Apache Ignite keeps the data. From our perspective this is the most efficient
> way to set up a TensorFlow cluster on top of an Apache Ignite cluster, because it
> reduces data transfers. You can find more details about this in the
> readme: https://apacheignite.readme.io/docs/ignite-dataset and
> https://apacheignite.readme.io/docs/tf-command-line-tool.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite and spark for deep learning

2019-01-10 Thread Mehdi Seydali
I completely agree with you. I have discussed with our team many times that this
integration would not give any speedup, and I share your view on caching in
Ignite, because in deep learning there is nothing to share between jobs: every
job works independently on its own portion of the data. In your opinion, could
Ignite + Ignite ML + DL4J + Spark be an option? What benefit could we achieve
from that integration? Can't Ignite ML be used for deep learning on its own?

On Wednesday, January 9, 2019, zaleslaw wrote:

> Dear Mehdi Sey
>
> Yes, both platforms are used for in-memory computing, but they have
> different APIs, different feature histories, and different ways of
> integrating with well-known DL frameworks (like DL4j and TensorFlow).
>
> From my point of view, you get no speedup from an Ignite + Spark + DL4j
> integration.
>
> Caching data in Ignite as a backend for RDDs and DataFrames is first of all
> an acceleration of business logic based on SQL queries. The same does not
> hold for ML frameworks.
>
> We have no proof that using Ignite as a backend could speed up DL4j or
> MLlib algorithms.
>
> That is why we wrote our own ML library, which is better than
> MLlib and runs natively on Ignite.
>
> In my opinion, you should choose the Ignite + Ignite ML + TF integration, or
> Spark
> + DL4j, to solve your data science task (where you need neural networks).
>
>
>
>
>
>


Re: Web Console set up

2019-01-10 Thread Alexey Kuznetsov
Hi,

You should NOT download the Web Agent Docker image.
You should download the Web Agent from the running Web Console (there is a link
in the footer).
I created an issue to support your use case: IGNITE-10889
(Web Console should work with a Web Agent started from a Docker image).

-- 
Alexey Kuznetsov


Re: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

2019-01-10 Thread Max Barrios
I’m still seeing this error even when passing a value via ‘ref’.

Looks like some Ignite lib that can resolve the awsCredentials property is not
getting loaded when running in a Spark cluster.

Is there any comprehensive guidance for running a Spark app that uses Ignite in
a Spark cluster?

Sent from my iPhone

> On Jan 10, 2019, at 08:25, Stanislav Lukyanov  wrote:
> 
> Hi,
>  
> Were you able to solve this?
>  
> It seems that your config is actually fine… The feature was added by 
> https://issues.apache.org/jira/browse/IGNITE-4530.
>  
> Does it work if you replace `ref` with just a value?
> Like
> 
> 
> 
>  
> Stan
>  
> From: Max Barrios
> Sent: December 12, 2018 23:51
> To: user@ignite.apache.org
> Subject: Amazon S3 Based Discovery NOT USING BasicAWSCredentials
>  
> I am running Apache Ignite 2.6.0 in AWS and am using S3 Based Discovery,
>  
> However, I DO NOT want to embed AWS Access or Secret Keys in my ignite.xml
>  
> I have AWS EC2 Instance Metadata Service for my instances so that the creds 
> can be loaded from there. 
>  
> However, there's no guidance or documentation on how to do this. Is this even 
> supported?
>  
> For example, I want to do this:
> 
>   ...
>   
> 
>   
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
>   
>   
> 
>   
> 
>   
> 
>  
> 
>  class="com.amazonaws.auth.InstanceProfileCredentialsProvider">
> 
>  
> But I get this exception when I try the above:
>  
> Error setting property values; nested exception is 
> org.springframework.beans.NotWritablePropertyException: Invalid property 
> 'awsCredentialsProvider' of bean class 
> [org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder]: 
> Bean property 'awsCredentialsProvider' is not writable or has an invalid 
> setter method. Does the parameter type of the setter match the return type of 
> the getter?
>  
> If using an AWS Credentials Provider *is* supported, where are the bean 
> properties documented, so I can see what I may be doing wrong? Are there 
> any working examples for anything other than BasicAWSCredentials?
>  
> Please help. 
>  
> Max
>  


RE: Ignite 2.7 Persistence

2019-01-10 Thread gweiske
Thanks for the replies. Yes, subsequent queries are faster, but the time to
run the query the first time (i.e., to load the data into memory) after a
restart can be measured in hours and is significantly longer than loading
the data from a CSV file. That does not seem right.






Ignite Visor cache command freezes, when client node connects.

2019-01-10 Thread javadevmtl
Hi, using 2.7.3

I start my client as...

TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses.getList());
spi.setIpFinder(ipFinder);

igniteConfig.setDiscoverySpi(spi);

igniteConfig.setClientMode(true);

I then also create a cache dynamically as...

CacheConfiguration cacheCfg = new CacheConfiguration("DJTAZZ");

cacheCfg.setCacheMode(CacheMode.REPLICATED);
this.cache =
igniteClient.getIgniteInstance().getOrCreateCache(cacheCfg);

In ignitegridvisor.sh when I run the "cache" command it seems to freeze
until I disconnect my application.






Re: Text Query via SQL or REST API?

2019-01-10 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ : they have
implemented a solution for persisted Lucene indexes that supports SQL
search.






Re: Extra console output from logs.

2019-01-10 Thread javadevmtl
Nobody has experienced this? I'm not trying to disable logs; I'm just getting
double the output...





Web Console set up

2019-01-10 Thread newigniter
I am trying to set up the Ignite Web Console.
I pulled the Apache Ignite Web Console standalone Docker image and started it in
a Docker container.
I still need to set up the Apache Ignite Web Agent. When I download the Web
Agent and start it on my local PC everything works perfectly: the Web Console
connects to my cluster.

I am now trying to run the Web Agent in another Docker container so that I
don't have to start it locally.
I pulled the latest Web Agent Docker image version (2.7.0) and started that
image.

My logs say the following:
Connection established.
You are using an older version of the agent. Please reload agent

Not sure why I am getting this error; my Ignite version is also 2.7.
Tnx





Re: Ignite in Kubernetes not works correctly

2019-01-10 Thread Alena Laas
We are using Azure AKS cluster.

We kill pod using Kubernetes dashboard or through kubectl (kubectl delete
pods ), never mind, result is the same.

Maybe you need some more logs from us?

On Thu, Jan 10, 2019 at 7:28 PM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> What kind of environment are you using? A public cloud? Your own data
> centre? And how are you killing the pod?
>
> I fired up a cluster using Minikube and your configuration and it worked
> as far as I could see. (I deleted the pod using the dashboard, for what
> that’s worth.)
>
> Regards,
> Stephen
>
> On 10 Jan 2019, at 14:20, Alena Laas wrote:
>
>
>
> -- Forwarded message -
> From: Alena Laas 
> Date: Thu, Jan 10, 2019 at 5:13 PM
> Subject: Ignite in Kubernetes not works correctly
> To: 
> Cc: Vadim Shcherbakov 
>
>
> Hello!
> Could you please help with some problem with Ignite within Kubernetes
> cluster?
>
> When we start 2 Ignite nodes at the same time or use scaling for
> Deployment (from 1 to 2) everything is fine, both of them are visible
> inside Ignite cluster (we use web console to see it)
>
> But after we kill the pod with one node and it restarts, the node is no longer
> seen in the Ignite cluster. Moreover, the logs from this restarted node look
> sparse:
> [13:32:57] __ 
> [13:32:57] / _/ ___/ |/ / _/_ __/ __/
> [13:32:57] _/ // (7 7 // / / / / _/
> [13:32:57] /___/\___/_/|_/___/ /_/ /___/
> [13:32:57]
> [13:32:57] ver. 2.7.0#20181130-sha1:256ae401
> [13:32:57] 2018 Copyright(C) Apache Software Foundation
> [13:32:57]
> [13:32:57] Ignite documentation: http://ignite.apache.org
> [13:32:57]
> [13:32:57] Quiet mode.
> [13:32:57] ^-- Logging to file
> '/opt/ignite/apache-ignite/work/log/ignite-7d323675.0.log'
> [13:32:57] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
> [13:32:57] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
> or "-v" to ignite.{sh|bat}
> [13:32:57]
> [13:32:57] OS: Linux 4.15.0-1036-azure amd64
> [13:32:57] VM information: OpenJDK Runtime Environment 1.8.0_181-b13
> Oracle Corporation OpenJDK 64-Bit Server VM 25.181-b13
> [13:32:57] Please set system property '-Djava.net.preferIPv4Stack=true' to
> avoid possible problems in mixed environments.
> [13:32:57] Configured plugins:
> [13:32:57] ^-- None
> [13:32:57]
> [13:32:57] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
> [tryStop=false, timeout=0, super=AbstractFailureHandler
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED
> [13:32:58] Message queue limit is set to 0 which may lead to potential
> OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
> to message queues growth on sender and receiver sides.
> [13:32:58] Security status [authentication=off, tls/ssl=off]
>
> And the logs from the remaining node say that there are alternately 2 servers
> or 1 server, and this info keeps flipping back and forth:
> [14:02:05] Joining node doesn't have encryption data
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:02:15] Topology snapshot [ver=234, locNode=a5eb30e1, servers=2,
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:02:15] Topology snapshot [ver=235, locNode=a5eb30e1, servers=1,
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:02:20] Joining node doesn't have encryption data
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:02:30] Topology snapshot [ver=236, locNode=a5eb30e1, servers=2,
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:02:30] Topology snapshot [ver=237, locNode=a5eb30e1, servers=1,
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:02:35] Joining node doesn't have encryption data
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:02:45] Topology snapshot [ver=238, locNode=a5eb30e1, servers=2,
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:02:45] Topology snapshot [ver=239, locNode=a5eb30e1, servers=1,
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:02:50] Joining node doesn't have encryption data
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:03:00] Topology snapshot [ver=240, locNode=a5eb30e1, servers=2,
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:03:00] Topology snapshot [ver=241, locNode=a5eb30e1, servers=1,
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:03:06] Joining node doesn't have encryption data
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:03:16] Topology snapshot [ver=242, locNode=a5eb30e1, servers=2,
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:03:16] Topology snapshot [ver=243, locNode=a5eb30e1, servers=1,
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:03:21] Joining node doesn't have encryption data
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:03:31] Topology snapshot [ver=244, locNode=a5eb30e1, servers=2,
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:03:31] Topology snapshot [ver=245, locNode=a

Re: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

2019-01-10 Thread Max Barrios
I’m still seeing this problem

I’ll try the suggested approach 

Sent from my iPhone



Re: Partitions stuck in MOVING state after upgrade to 2.7

2019-01-10 Thread dilaz03
There is a problem with Kubernetes, because the scheduler can restart an Ignite
node at any time.

Thank you.





Re: Ignite in Kubernetes not works correctly

2019-01-10 Thread Stephen Darlington
What kind of environment are you using? A public cloud? Your own data centre? 
And how are you killing the pod?

I fired up a cluster using Minikube and your configuration and it worked as far 
as I could see. (I deleted the pod using the dashboard, for what that’s worth.)

Regards,
Stephen

> On 10 Jan 2019, at 14:20, Alena Laas  wrote:
> 
> 
> 
> -- Forwarded message -
> From: Alena Laas
> Date: Thu, Jan 10, 2019 at 5:13 PM
> Subject: Ignite in Kubernetes not works correctly
> To: user@ignite.apache.org
> Cc: Vadim Shcherbakov
> 
> 
> Hello!
> Could you please help with some problem with Ignite within Kubernetes cluster?
> 
> When we start 2 Ignite nodes at the same time or use scaling for Deployment 
> (from 1 to 2) everything is fine, both of them are visible inside Ignite 
> cluster (we use web console to see it)
> 
> But after we kill the pod with one node and it restarts, the node is no longer 
> seen in the Ignite cluster. Moreover, the logs from this restarted node look sparse:
> [13:32:57]__   
> [13:32:57]   /  _/ ___/ |/ /  _/_  __/ __/ 
> [13:32:57]  _/ // (7 7// /  / / / _/   
> [13:32:57] /___/\___/_/|_/___/ /_/ /___/  
> [13:32:57] 
> [13:32:57] ver. 2.7.0#20181130-sha1:256ae401
> [13:32:57] 2018 Copyright(C) Apache Software Foundation
> [13:32:57] 
> [13:32:57] Ignite documentation: http://ignite.apache.org 
> 
> [13:32:57] 
> [13:32:57] Quiet mode.
> [13:32:57]   ^-- Logging to file 
> '/opt/ignite/apache-ignite/work/log/ignite-7d323675.0.log'
> [13:32:57]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
> [13:32:57]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or 
> "-v" to ignite.{sh|bat}
> [13:32:57] 
> [13:32:57] OS: Linux 4.15.0-1036-azure amd64
> [13:32:57] VM information: OpenJDK Runtime Environment 1.8.0_181-b13 Oracle 
> Corporation OpenJDK 64-Bit Server VM 25.181-b13
> [13:32:57] Please set system property '-Djava.net.preferIPv4Stack=true' to 
> avoid possible problems in mixed environments.
> [13:32:57] Configured plugins:
> [13:32:57]   ^-- None
> [13:32:57] 
> [13:32:57] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler 
> [tryStop=false, timeout=0, super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED
> [13:32:58] Message queue limit is set to 0 which may lead to potential OOMEs 
> when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to 
> message queues growth on sender and receiver sides.
> [13:32:58] Security status [authentication=off, tls/ssl=off]
> 
> And the logs from the remaining node say that there are alternately 2 servers 
> or 1 server, and this info keeps flipping back and forth:
> [14:02:05] Joining node doesn't have encryption data 
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:02:15] Topology snapshot [ver=234, locNode=a5eb30e1, servers=2, 
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:02:15] Topology snapshot [ver=235, locNode=a5eb30e1, servers=1, 
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:02:20] Joining node doesn't have encryption data 
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:02:30] Topology snapshot [ver=236, locNode=a5eb30e1, servers=2, 
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:02:30] Topology snapshot [ver=237, locNode=a5eb30e1, servers=1, 
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:02:35] Joining node doesn't have encryption data 
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:02:45] Topology snapshot [ver=238, locNode=a5eb30e1, servers=2, 
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:02:45] Topology snapshot [ver=239, locNode=a5eb30e1, servers=1, 
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:02:50] Joining node doesn't have encryption data 
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:03:00] Topology snapshot [ver=240, locNode=a5eb30e1, servers=2, 
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:03:00] Topology snapshot [ver=241, locNode=a5eb30e1, servers=1, 
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:03:06] Joining node doesn't have encryption data 
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:03:16] Topology snapshot [ver=242, locNode=a5eb30e1, servers=2, 
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:03:16] Topology snapshot [ver=243, locNode=a5eb30e1, servers=1, 
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
> [14:03:21] Joining node doesn't have encryption data 
> [node=7d323675-bc0b-4507-affb-672b25766201]
> [14:03:31] Topology snapshot [ver=244, locNode=a5eb30e1, servers=2, 
> clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
> [14:03:31] Topology snapshot [ver=245, locNode=a5eb30e1, servers=1, 
> clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]

RE: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

2019-01-10 Thread Stanislav Lukyanov
Hi,

Were you able to solve this?

It seems that your config is actually fine… The feature was added by 
https://issues.apache.org/jira/browse/IGNITE-4530.

Does it work if you replace `ref` with just a value?
Like 




Stan

From: Max Barrios
Sent: December 12, 2018 23:51
To: user@ignite.apache.org
Subject: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

I am running Apache Ignite 2.6.0 in AWS and am using S3 Based Discovery,

However, I DO NOT want to embed AWS Access or Secret Keys in my ignite.xml

I have AWS EC2 Instance Metadata Service for my instances so that the creds can 
be loaded from there. 

However, there's no guidance or documentation on how to do this. Is this even 
supported?

For example, I want to do this:

  ...
  

  

  
  

  

  






But I get this exception when I try the above:

Error setting property values; nested exception is 
org.springframework.beans.NotWritablePropertyException: Invalid property 
'awsCredentialsProvider' of bean class 
[org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder]: Bean 
property 'awsCredentialsProvider' is not writable or has an invalid setter 
method. Does the parameter type of the setter match the return type of the 
getter?
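The configuration XML above was stripped by the mail archive. Based on the class
and property names in the exception, it presumably resembled the following
sketch (the bucket name is hypothetical, and the exact property layout is an
assumption):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
          <!-- Intended: pick up credentials from the EC2 instance metadata service -->
          <property name="awsCredentialsProvider" ref="aws.creds"/>
          <!-- hypothetical bucket name -->
          <property name="bucketName" value="my-discovery-bucket"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>

<bean id="aws.creds" class="com.amazonaws.auth.InstanceProfileCredentialsProvider"/>
```

Note that the `awsCredentialsProvider` setter only exists in Ignite versions
that include IGNITE-4530, which would explain the `NotWritablePropertyException`
on 2.6.0.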

If using an AWS Credentials Provider *is* supported, where are the bean 
properties documented, so I can see what I may be doing wrong? Are there 
any working examples for anything other than BasicAWSCredentials?

Please help. 

Max



Re: Partitions stuck in MOVING state after upgrade to 2.7

2019-01-10 Thread dilaz03
I see the following logs on node startup after a hard shutdown:
...
[exchange-worker-#40] DEBUG o.a.i.i.p.c.p.GridCacheDatabaseSharedManager -
Restored partition state (from WAL) [grp=test_events, p=1021,
state=MOVING, updCntr=303]
...

Some partitions are in the OWNING state, but many partitions are MOVING, and I
have to call 'cache -rlp' to clear this state. If I shut the node down after
deactivation, then everything is correct:
...
[exchange-worker-#40] DEBUG o.a.i.i.p.c.p.GridCacheDatabaseSharedManager -
Restored partition state (from page memory) [grp=test_events, p=0,
state=OWNING, updCntr=568124]
...

I think I'm missing something about the new version. How should I restore a
node after a crash?

Thank you.





RE: Question about add new nodes to ignite cluster.

2019-01-10 Thread Stanislav Lukyanov
Here "cache start" is rather internal wording.
It means "the cache adapter machinery will be initialized".

In case of ASYNC rebalancing the cache will first appear on the node as
existing but storing no data until it is rebalanced.

In practice, ASYNC rebalancing means that the node will start (Ignition.start() 
will return) immediately, without waiting for the rebalance.
SYNC rebalancing means that the node will start only after all data has been 
processed.

For example, say you have the code

Ignite ignite = Ignition.start(cfg);
System.out.println(ignite.cache("foo").get("k"));

where cache "foo" is a part of the configuration 'cfg'.
Here, if "foo" has ASYNC rebalancing the value will be printed immediately.
If "foo" has SYNC rebalancing the value will be printed only after the 
rebalancing has completed.
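For reference, the rebalance mode is configured per cache. A minimal Spring XML
sketch (the cache name is taken from the example above; this is an illustration,
not a complete configuration):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="foo"/>
  <!-- SYNC: Ignition.start() returns only after rebalancing completes.
       ASYNC (the default): the node starts immediately and loads data
       from other nodes in the background. -->
  <property name="rebalanceMode" value="SYNC"/>
</bean>
```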

Stan

From: Justin Ji
Sent: December 22, 2018 13:24
To: user@ignite.apache.org
Subject: RE: Question about add new nodes to ignite cluster.

Thanks for your replies!

I agree with "the node doesn’t serve any requests."

But the documents write that:

Asynchronous rebalancing mode. Distributed caches will start immediately and
will load all necessary data from other available grid nodes in the
background.

under Rebalance Modes
https://apacheignite.readme.io/docs/rebalancing

What does "start immediately" mean? And what are the differences between
SYNC and ASYNC modes?

Looking forward to your reply~

Justin






Re: error in running shared rdd in ignite

2019-01-10 Thread Ilya Kasnacheev
Hello!

I guess you should add these VM options:

--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED
--add-exports=java.base/sun.nio.ch=ALL-UNNAMED

as required for running under Java 9+.

In case of your IDE, please specify the JVM options as noted.
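For spark-submit runs, these options can presumably be passed to both the driver
and the executors via Spark's extra Java options (the flags are as above; the
class name, master URL, and jar path are taken from the quoted command below):

```shell
$SPARK_HOME/bin/spark-submit \
  --class "com.gridgain.RDDWriter" \
  --master spark://linux-client:7077 \
  --conf "spark.driver.extraJavaOptions=--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED --add-exports=java.base/sun.nio.ch=ALL-UNNAMED" \
  --conf "spark.executor.extraJavaOptions=--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED --add-exports=java.base/sun.nio.ch=ALL-UNNAMED" \
  target/ignite-spark-scala-1.0.jar
```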

Regards,
-- 
Ilya Kasnacheev


Sat, Jan 5, 2019 at 11:59, mehdi sey wrote:

> hi, I have a program for writing into an Ignite RDD. It reads data from a
> Spark RDD and caches it in an Ignite RDD. I run it from the command line on
> Ubuntu Linux, but in the middle of execution I encounter the error below. I
> checked in the Spark UI whether the job completed, and it did not: it failed.
> Why? I have attached the piece of code I wrote along with the command I ran.
>
> $SPARK_HOME/bin/spark-submit --class "com.gridgain.RDDWriter" --master
> spark://linux-client:7077 ~/spark\ and\ ignite\
> issue/ignite-and-spark-integration-master/ignite-rdd/ignite-spark-scala/target/ignite-spark-scala-1.0.jar
>
> 2019-01-05 11:47:02 WARN  Utils:66 - Your hostname, linux-client resolves
> to
> a loopback address: 127.0.1.1, but we couldn't find any external IP
> address!
> 2019-01-05 11:47:02 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind
> to another address
> 2019-01-05 11:47:03 WARN  NativeCodeLoader:62 - Unable to load
> native-hadoop
> library for your platform... using builtin-java classes where applicable
> 2019-01-05 11:47:03 INFO  SparkContext:54 - Running Spark version 2.4.0
> 2019-01-05 11:47:03 INFO  SparkContext:54 - Submitted application:
> RDDWriter
> 2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing view acls to: mehdi
> 2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing modify acls to:
> mehdi
> 2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing view acls groups
> to:
> 2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing modify acls groups
> to:
> 2019-01-05 11:47:03 INFO  SecurityManager:54 - SecurityManager:
> authentication disabled; ui acls disabled; users  with view permissions:
> Set(mehdi); groups with view permissions: Set(); users  with modify
> permissions: Set(mehdi); groups with modify permissions: Set()
> 2019-01-05 11:47:03 WARN  MacAddressUtil:136 - Failed to find a usable
> hardware address from the network interfaces; using random bytes:
> 88:26:00:23:5d:50:a0:61
> 2019-01-05 11:47:03 INFO  Utils:54 - Successfully started service
> 'sparkDriver' on port 36233.
> 2019-01-05 11:47:03 INFO  SparkEnv:54 - Registering MapOutputTracker
> 2019-01-05 11:47:03 INFO  SparkEnv:54 - Registering BlockManagerMaster
> 2019-01-05 11:47:03 INFO  BlockManagerMasterEndpoint:54 - Using
> org.apache.spark.storage.DefaultTopologyMapper for getting topology
> information
> 2019-01-05 11:47:03 INFO  BlockManagerMasterEndpoint:54 -
> BlockManagerMasterEndpoint up
> 2019-01-05 11:47:03 INFO  DiskBlockManager:54 - Created local directory at
> /tmp/blockmgr-6e47832e-855a-4305-a293-662379733b7f
> 2019-01-05 11:47:03 INFO  MemoryStore:54 - MemoryStore started with
> capacity
> 366.3 MB
> 2019-01-05 11:47:03 INFO  SparkEnv:54 - Registering OutputCommitCoordinator
> 2019-01-05 11:47:03 INFO  log:192 - Logging initialized @2024ms
> 2019-01-05 11:47:04 INFO  Server:351 - jetty-9.3.z-SNAPSHOT, build
> timestamp: unknown, git hash: unknown
> 2019-01-05 11:47:04 INFO  Server:419 - Started @2108ms
> 2019-01-05 11:47:04 INFO  AbstractConnector:278 - Started
> ServerConnector@5ba745bc{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
> 2019-01-05 11:47:04 INFO  Utils:54 - Successfully started service 'SparkUI'
> on port 4040.
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@606fc505{/jobs,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@2c30b71f{/jobs/json,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@1d81e101{/jobs/job,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@bf71cec
> {/jobs/job/json,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@22d6cac2{/stages,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@30cdae70{/stages/json,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@1654a892
> {/stages/stage,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@6c000e0c
> {/stages/stage/json,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@5f233b26{/stages/pool,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@44f9779c
> {/stages/pool/json,null,AVAILABLE,@Spark}
> 2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
> o.s.j.s.ServletContextHandler@6974a715{/storage,null,AVAILABLE,@Spark}
> 2019-01-

Critical system error detected

2019-01-10 Thread yangjiajun
Hello.

I have an Ignite 2.7 node with persistence enabled. I use it as a database. It
reported a "blocked system-critical thread has been detected" exception in the
logs. I guess it was blocked by drop table operations. Can someone help me find
out what happened to Ignite?

Here is the logs and config of ignite:
ignite-b2c59b25.rar
example-default1.xml

  

I deleted some logs because my company has an upload file size limit. I will try
to upload them in another post.





Re: Do we require to set MaxDirectMemorySize JVM parameter?

2019-01-10 Thread Stanislav Lukyanov
> In my case, I have configured swap storage 
> (https://apacheignite.readme.io/docs/swap-space) but *not* Ignite durable 
> memory. If DataRegion maxSize is say 100GB and my physical RAM is 50GB
> then 
> the swap file will be 100GB but Ignite will also use some portion (<50GB)
> of 
> the available physical RAM for off-heap cache data storage. 

I assume by Durable Memory you mean Native Persistence. They're
(confusingly) different - Durable Memory is just the name of the Ignite's
memory architecture, not necessarily with Persistence enabled.

> My question is about how to limit the size of this portion while still 
> allowing the DataRegion to specify a large swap file for use as overflow
> of 
> less regularly accessed data. 

I don't think it's possible. Just use Native Persistence instead - you'll
get the memory distribution that you want
(dataRegionConfiguration.maxSize=8gb will do the trick), and actual
persistence as a bonus.
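For illustration, a sketch of the suggested setup in Spring XML (the 8 GB figure
comes from the advice above; the region name is hypothetical):

```xml
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="defaultDataRegionConfiguration">
      <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
        <property name="name" value="Default_Region"/>
        <!-- cap off-heap RAM usage at 8 GB -->
        <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
        <!-- data exceeding maxSize overflows to disk instead of a swap file -->
        <property name="persistenceEnabled" value="true"/>
      </bean>
    </property>
  </bean>
</property>
```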

Stan





RE: Pain points of Ignite user community

2019-01-10 Thread Stanislav Lukyanov
Hi Rohan,

Sorry, the publishing took some time.
In case you’re still interested, here’s the article: 
https://www.gridgain.com/resources/blog/checklist-assembling-your-first-apacher-ignitetm-cluster

Thanks,
Stan

From: Rohan Honwade
Sent: November 29, 2018 8:15
To: user@ignite.apache.org
Subject: Re: Pain points of Ignite user community

Thank you Stan.

Denis, I don’t intend to speak for my employer. The content will be my personal 
opinion.

Regards,
Rohan


On Nov 28, 2018, at 8:05 PM, Stanislav Lukyanov  wrote:

Hi,
 
I expect a write-up on some of the Ignite pitfalls to be out soon – ping me 
next week.
 
Stan
 
From: Rohan Honwade
Sent: November 29, 2018, 0:42
To: user@ignite.apache.org
Subject: Pain points of Ignite user community
 
Hello,
 
I am currently creating some helpful blog articles for Ignite users. Can 
someone who is active on this mailing list or the StackOverflow Ignite section  
please let me know what are the major pain points that users face when using 
Ignite? 
 
Regards,
RH




RE: There is no property called StartSize in CacheConfiguration

2019-01-10 Thread Stanislav Lukyanov
The .Net page seems to be outdated. The startSize property isn’t there anymore.
Check out the main one - 
https://apacheignite-net.readme.io/docs/performance-tips.

Stan

From: Peter Sham
Sent: December 9, 2018, 8:22
To: user@ignite.apache.org
Subject: There is no property called StartSize in CacheConfiguration

I am reading performance tips on Ignite.Net 
(https://apacheignite-net.readme.io/docs/performance-tips#section-tune-cache-start-size)
 and upon "Tune Cache Start Size", there should be a property called 
"StartSize" in CacheConfiguration. But there is no such property. What should 
the configuration property for setting initial cache size? Cannot find it on 
API documentation. Anyone can help? 

Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: How to setup multi host node discovery

2019-01-10 Thread Ilya Kasnacheev
Hello!

Have you tried specifying both nodes' internal IPs in the configuration?

Can you provide log thereof?
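As a sketch of that suggestion, the ipFinder addresses list on both nodes could name both internal IPs (the addresses below are placeholders; substitute the real ones):

```xml
<property name="addresses">
  <list>
    <!-- Internal IPs of both EC2 nodes, with the discovery port range. -->
    <value>10.0.0.1:47500..47509</value>
    <value>10.0.0.2:47500..47509</value>
  </list>
</property>
```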

Regards,
-- 
Ilya Kasnacheev


Wed, Jan 9, 2019 at 18:25, newigniter :

> Tnx for your help. Below is my config. Did you mean something like this?
>
> <beans xmlns="http://www.springframework.org/schema/beans"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd">
>
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="dataStorageConfiguration">
>       <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="defaultDataRegionConfiguration">
>           <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>             <property name="persistenceEnabled" value="true" />
>           </bean>
>         </property>
>       </bean>
>     </property>
>
>     <property name="discoverySpi">
>       <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="ipFinder">
>           <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>             <property name="addresses">
>               <list>
>                 <value>127.0.0.1</value>
>                 <value>[ec2 ip address]:47500..47509</value>
>               </list>
>             </property>
>           </bean>
>         </property>
>       </bean>
>     </property>
>   </bean>
> </beans>
>
> [ec2 ip address]:47500..47509 is the ip address of the ec2 instance where
> first node was started. If I understood correctly, it is enough to provide
> only one ip address?
>
> I did that and using this configuration I started 2nd node.
>
> I connect to my first node and execute:
> ./control.sh --user ignite --password ignite --state -> CLUSTER ACTIVE
> ./control.sh --user ignite --password ignite --baseline -> only first node
> is found
> I connect to my second node and execute:
> ./control.sh --user ignite --password ignite --state -> CLUSTER IS INACTIVE
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Fwd: Ignite in Kubernetes does not work correctly

2019-01-10 Thread Alena Laas
-- Forwarded message -
From: Alena Laas 
Date: Thu, Jan 10, 2019 at 5:13 PM
Subject: Ignite in Kubernetes does not work correctly
To: 
Cc: Vadim Shcherbakov 


Hello!
Could you please help with a problem with Ignite within a Kubernetes
cluster?

When we start 2 Ignite nodes at the same time, or use scaling for the
Deployment (from 1 to 2), everything is fine: both of them are visible
inside the Ignite cluster (we use Web Console to see it).

But after we kill the pod with one node and it restarts, the node is no
longer visible in the Ignite cluster. Moreover, the logs from this
restarted node look sparse:
[13:32:57] __ 
[13:32:57] / _/ ___/ |/ / _/_ __/ __/
[13:32:57] _/ // (7 7 // / / / / _/
[13:32:57] /___/\___/_/|_/___/ /_/ /___/
[13:32:57]
[13:32:57] ver. 2.7.0#20181130-sha1:256ae401
[13:32:57] 2018 Copyright(C) Apache Software Foundation
[13:32:57]
[13:32:57] Ignite documentation: http://ignite.apache.org
[13:32:57]
[13:32:57] Quiet mode.
[13:32:57] ^-- Logging to file
'/opt/ignite/apache-ignite/work/log/ignite-7d323675.0.log'
[13:32:57] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[13:32:57] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or
"-v" to ignite.{sh|bat}
[13:32:57]
[13:32:57] OS: Linux 4.15.0-1036-azure amd64
[13:32:57] VM information: OpenJDK Runtime Environment 1.8.0_181-b13 Oracle
Corporation OpenJDK 64-Bit Server VM 25.181-b13
[13:32:57] Please set system property '-Djava.net.preferIPv4Stack=true' to
avoid possible problems in mixed environments.
[13:32:57] Configured plugins:
[13:32:57] ^-- None
[13:32:57]
[13:32:57] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED
[13:32:58] Message queue limit is set to 0 which may lead to potential
OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
to message queues growth on sender and receiver sides.
[13:32:58] Security status [authentication=off, tls/ssl=off]

And the logs from the remaining node alternate between reporting 2 servers
and 1 server:
[14:02:05] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:02:15] Topology snapshot [ver=234, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:02:15] Topology snapshot [ver=235, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:02:20] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:02:30] Topology snapshot [ver=236, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:02:30] Topology snapshot [ver=237, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:02:35] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:02:45] Topology snapshot [ver=238, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:02:45] Topology snapshot [ver=239, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:02:50] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:03:00] Topology snapshot [ver=240, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:03:00] Topology snapshot [ver=241, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:03:06] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:03:16] Topology snapshot [ver=242, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:03:16] Topology snapshot [ver=243, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:03:21] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:03:31] Topology snapshot [ver=244, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:03:31] Topology snapshot [ver=245, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:03:36] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:03:46] Topology snapshot [ver=246, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:03:46] Topology snapshot [ver=247, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:03:51] Joining node doesn't have encryption data
[node=7d323675-bc0b-4507-affb-672b25766201]
[14:04:01] Topology snapshot [ver=248, locNode=a5eb30e1, servers=2,
clients=0, state=ACTIVE, CPUs=16, offheap=40.0GB, heap=2.0GB]
[14:04:01] Topology snapshot [ver=249, locNode=a5eb30e1, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=20.0GB, heap=1.0GB]
[14:04:06] Joining node d

Re: Text Query question

2019-01-10 Thread Andrey Mashenkov
Hi,

Unfortunately, it doesn't look like an open source solution.

It is not clear how their indices are integrated with Ignite page memory.
If they do not use Ignite page memory, how do they survive failover
scenarios? No shared tests/test results are available.
Otherwise, I bet they have a headache each time they merge with a new
version of Ignite, Lucene, or the geo-index.

Anyway, their features are awesome, thanks.


On Wed, Jan 9, 2019 at 10:06 PM Manu  wrote:

> Hi! Take a look at
> https://github.com/hawkore/examples-apache-ignite-extensions/; they have
> implemented a solution for persisted Lucene and spatial indexes.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Migrate from 2.6 to 2.7

2019-01-10 Thread Ilya Kasnacheev
Hello!

Cross-posting: I have filed a blocker ticket about it.
https://issues.apache.org/jira/browse/IGNITE-10884

Regards,
-- 
Ilya Kasnacheev


Thu, Jan 3, 2019 at 03:24, Denis Magda :

> Are you using JDBC/ODBC drivers? Just want to know why it's hard to
> execute SQL queries outside of transactions.
>
> Can you switch to pessimistic transactions instead?
>
> --
> Denis
>
> On Wed, Jan 2, 2019 at 7:24 AM whiteman  wrote:
>
>> Hi guys,
>>
>> As far as I am concerned this is a breaking change. In Apache Ignite
>> 2.5 it was possible to have a SQL query inside an optimistic serializable
>> transaction. The point here is that the SQL query might not be part of
>> the transaction (no guarantees) but was at least performed. In 2.7 this
>> code won't work at all. The advice to move all SQL queries outside of
>> transactions is not possible in the real world; it would greatly increase
>> the complexity of the codebase. My question is whether there is a switch
>> for enabling the pre-2.7 behaviour.
>>
>> THanks,
>> Cheers,
>> D.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-10 Thread Ilya Kasnacheev
Hello!

I have filed a blocker ticket about it:
https://issues.apache.org/jira/browse/IGNITE-10884

Regards,
-- 
Ilya Kasnacheev


Wed, Jan 9, 2019 at 17:43, Prasad Bhalerao :

>
> Hi Ilya,
>
> I have created a reproducer for this issue and uploaded it to GitHub.
>
> GitHub project: https://github.com/prasadbhalerao1983/IgniteTestPrj.git
>
> Please run IgniteTransactionTester class to check the issue.
>
>
> Exception:
>
> Exception in thread "main" javax.cache.CacheException: Only pessimistic
> repeatable read transactions are supported at the moment.
>  at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
>  at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)
>  at
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)
>  at
> IgniteTransactionTester.testTransactionException(IgniteTransactionTester.java:53)
>  at IgniteTransactionTester.main(IgniteTransactionTester.java:38)
> Caused by: class
> org.apache.ignite.internal.processors.query.IgniteSQLException: Only
> pessimistic repeatable read transactions are supported at the moment.
>  at
> org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690)
>  at
> org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671)
>  at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793)
>  at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610)
>  at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315)
>  at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
>  at
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
>  at
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
>  at
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
>  at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
>  at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
>  at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685)
>
> Thanks,
>
> Prasad
>
>
>
> On Wed, Jan 9, 2019 at 6:22 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> It was discussed recently:
>> http://apache-ignite-users.70518.x6.nabble.com/Migrate-from-2-6-to-2-7-td25738.html
>>
>> I don't think you will be able to use SQL from transactions in Ignite
>> 2.7. While this looks like a regression, you will have to work around it
>> for now.
>>
>> Do you have a small reproducer for this issue? I could file a ticket if
>> you had. You can try to do it yourself, too.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> ср, 9 янв. 2019 г. в 15:33, Prasad Bhalerao > >:
>>
>>> Hi,
>>>
>>> My cache configuration is as follows. I am using TRANSACTIONAL and not
>>> TRANSACTIONAL_SNAPSHOT.
>>>
>>>
>>>
>>> private CacheConfiguration ipContainerIPV4CacheCfg() {
>>>
>>>   CacheConfiguration ipContainerIpV4CacheCfg = new 
>>> CacheConfiguration<>(CacheName.IP_CONTAINER_IPV4_CACHE.name());
>>>   
>>> ipContainerIpV4CacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>   ipContainerIpV4CacheCfg.setWriteThrough(ENABLE_WRITE_THROUGH);
>>>   ipContainerIpV4CacheCfg.setReadThrough(false);
>>>   ipContainerIpV4CacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>   
>>> ipContainerIpV4CacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>>   ipContainerIpV4CacheCfg.setBackups(1);
>>>   Factory storeFactory = 
>>> FactoryBuilder.factoryOf(IpContainerIpV4CacheStore.class);
>>>   ipContainerIpV4CacheCfg.setCacheStoreFactory(storeFactory);
>>>   ipContainerIpV4CacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, 
>>> IpContainerIpV4Data.class);
>>>   
>>> ipContainerIpV4CacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());
>>>   ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(84);
>>>   RendezvousAffinityFunction affinityFunction = new 
>>> RendezvousAffinityFunction();
>>>   affinityFunction.setExcludeNeighbors(true);
>>>   ipContainerIpV4CacheCfg.setAffinity(affinityFunction);
>>>   ipContainerIpV4CacheCfg.setStatisticsEnabled(true);
>>>
>>>   return ipContainerIpV4CacheCfg;
>>> }
>>>
>>>
>>> Thanks,
>>> Prasad
>>>
>>> On Wed, Jan 9, 2019 at 5:45 PM Павлухин Иван 
>>> wrote:
>>>
 Hi Prasad,

 > javax.cache.CacheException: Only pessimistic repeatable read
 transactions are supported at the moment.
 Exception mentioned by you should happen only for cache with
 TRANSACTIONAL_SNAPSHOT atomicity mode configured. Have you confi

Re: Ignite-benchmark- driver classname not found

2019-01-10 Thread Ilya Kasnacheev
Hello!

There have been no dedicated "OffHeap" benchmarks since 2.0. You can ignore
these failures.

I have created a ticket: https://issues.apache.org/jira/browse/IGNITE-10885

Regards,
-- 
Ilya Kasnacheev


Wed, Jan 9, 2019 at 22:28, radha jai :

> Hi,
>I ran the Ignite benchmarks on a VM. The Ignite version used is 2.6.0.
>cmd:  ./benchmark-run-all.sh ../config/benchmark-remote.properties
>Some of the benchmarks didn't run, saying:
>log4j:WARN No appenders could be found for logger
> (org.reflections.Reflections).
>log4j:WARN Please initialize the log4j system properly.
>   log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
> for more info.
> <19:23:39> Duplicate simple class names detected (use
> fully-qualified names for execution):
> <19:23:39>
>  org.apache.ignite.yardstick.cache.IgniteIoTestSendAllBenchmark
> <19:23:39>
>  org.apache.ignite.yardstick.io.IgniteIoTestSendRandomBenchmark
> ERROR: Could not find benchmark driver class name in classpath:
> IgnitePutTxOffHeapValuesBenchmark.
> Make sure class name is specified correctly and corresponding package is
> added to -p argument list.
> Type '--help' for usage.
>
>
> I couldn't run the benchmarks below due to the above error:
> IgnitePutTxOffHeapValuesBenchmark
> IgnitePutTxOffHeapBenchmark
> IgniteSqlQueryJoinOffHeapBenchmark
> IgnitePutOffHeapBenchmark
> IgnitePutOffHeapValuesBenchmark
> IgnitePutGetOffHeapValuesBenchmark
> IgniteSqlQueryOffHeapBenchmark
>
> Thanks
> With Regards
> Radha
>
>


Re: NullPointerException: Ouch! Argument cannot be null: key while performing cache.getAll

2019-01-10 Thread Ilya Kasnacheev
Hello!

Is it possible that you have specified a map with one of its keys as 'null'?

Some standard map implementations do not allow 'null' keys, but others
might.

Do you have reproducer code?
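As a quick guard, the key set can be checked for nulls before calling getAll(); the helper below is a hypothetical sketch, not part of the Ignite API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

public class KeyCheck {
    // Hypothetical helper: drop null keys so cache.getAll(keys) never sees one.
    // Ignite rejects a null key with "Ouch! Argument cannot be null: key".
    static <K> Set<K> nonNullKeys(Set<K> keys) {
        return keys.stream().filter(Objects::nonNull).collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        // HashSet permits null, so a null key can sneak into a getAll() call.
        Set<String> keys = new HashSet<>(Arrays.asList("k1", null, "k2"));
        Set<String> safe = nonNullKeys(keys);
        System.out.println(safe.size());         // prints 2
        System.out.println(safe.contains(null)); // prints false
    }
}
```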

Regards,
-- 
Ilya Kasnacheev


Thu, Jan 10, 2019 at 10:48, kotamrajuyashasvi :

> Hi
>
> I'm working on a project with Ignite as an in-memory cache and Cassandra
> as persistence for Ignite. I need to perform cache.getAll(...) on a set of
> POJO cache keys. On random runs I face the exception below.
>
> Failed to acquire lock for request: GridNearLockRequest
> [topVer=AffinityTopologyVersion [topVer=6, minorTopVer=1], miniId=1,
> dhtVers=[null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null, null, null, null, null, null, null, null,
> null, null, null, null, null], subjId=98637eda-1931-441f-a0b8-875162969ac0,
> taskNameHash=0, createTtl=-1, accessTtl=-1, flags=5, filter=null,
> super=GridDistributedLockRequest
> [nodeId=98637eda-1931-441f-a0b8-875162969ac0, nearXidVer=GridCacheVersion
> [topVer=158492748, order=1547015993291, nodeOrder=2], threadId=155,
> futId=569a5213861-cbfbf917-fcc5-410e-aaba-aea33f2f2f35, timeout=50,
> isInTx=true, isInvalidate=false, isRead=true, isolation=REPEATABLE_READ,
> retVals=[true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true, true, true, true, true, true, true, true,
> true, true, true, true, true], txSize=0, flags=0, keysCnt=100,
> super=GridDistributedBaseMessage [ver=GridCacheVersion [topVer=158492748,
> order=1547015993291, nodeOrder=2], committedVers=null, rolledbackVers=null,
> cnt=0, super=GridCacheIdMessage [cacheId=-379566268
> class org.apache.ignite.IgniteCheckedException:
> java.lang.NullPointerException: Ouch! Argument cannot be null: key
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:498)
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:400)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.loadMissingFromStore(GridDhtLockFuture.java:1054)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onComplete(GridDhtLockFuture.java:731)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onDone(GridDhtLockFuture.java:703)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onDone(GridDhtLockFuture.java:82)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451)
> at
>
> org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285)
> at
>
> org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:276)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.map(GridDhtLockFuture.java:966)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.onOwnerChanged(GridDhtLockFuture.java:655)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMvccManager.notifyOwnerChanged(GridCacheMvccManager.java:226)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMvccManager.access$200(GridCacheMvccManager.java:80)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMvccManager$3.onOwnerChanged(GridCacheMvccManager.java:163)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.checkOwnerChanged(GridCacheMapEntry.java:4108)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedCacheEntry.readyLock(GridDistributedCacheEntry.java:499)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLockFuture.readyLocks(GridDhtLockFuture.java:567)
> at
>
> org.apache.ignite.internal.processors.cache.distr

Re: Ignite kv/sql features

2019-01-10 Thread Ilya Kasnacheev
Hello!

1. Yes, it is possible.

2. It should be somewhat faster to read from a cache of smaller entities,
since the serialization overhead is lower.

3. I can't give exact times. Web Console should show rebalance progress,
as far as I understand. Not sure about the CLI.

4. You can use continuous queries, event listeners or cache store for that.
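Regarding question 1, SQL over a KV cache works only if the cache carries query metadata; a sketch of the XML form (the Person type and fields are made-up for illustration):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="personCache"/>
  <property name="queryEntities">
    <list>
      <bean class="org.apache.ignite.cache.QueryEntity">
        <!-- Entries put via the KV API become rows of a Person
             table that plain SQL queries can read back. -->
        <property name="keyType" value="java.lang.Long"/>
        <property name="valueType" value="org.example.Person"/>
        <property name="fields">
          <map>
            <entry key="name" value="java.lang.String"/>
            <entry key="city" value="java.lang.String"/>
          </map>
        </property>
      </bean>
    </list>
  </property>
</bean>
```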

Regards,
-- 
Ilya Kasnacheev


Thu, Jan 10, 2019 at 10:30, summasumma :

> Hi all,
>
> Can you please clarify the following possibilities in Ignite?
> 1. If I insert multiple entries via the KV store, is it possible to
> retrieve selected rows based on a particular column using a SQL query on
> the same cache? (i.e., insert using KV operations but read using SQL
> queries)
>
> 2. If I want to retrieve all the entries matching a single column (which
> is not the key), should I use a SCAN operation, or is it better to create
> another cache with that single column as the key and perform HGETALL on
> that separate cache? Assuming memory availability is not an issue, what is
> the performance impact of these methods?
>
> 3. In an Ignite cluster, if one node goes down, how much time does it take
> to rebalance the entries among the remaining nodes? Is there any CLI to
> validate whether the rebalance is over or not? As a client node, will it
> get event indications such as: a node in the cluster is down / rebalance
> started / rebalance done, etc.?
>
> 4. Is it possible to use compute functionality in Ignite to insert a
> single entry into multiple caches asynchronously? Say "key1, {val1, val2}"
> is the record; I want this element to be inserted into cache1 with key1 as
> the key for that row, and also inserted into another cache2 with val1 as
> the key. The idea is to make one insert request from the client to the
> Ignite server cluster, but have it result in multiple insertions into
> multiple tables based on, say, custom compute functionality on the server.
>
> Please clarify.
>
> Thanks,
> ...summa
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Iterating through native persistence entries before joining the cluster

2019-01-10 Thread Ilya Kasnacheev
Hello!

I think you should decouple workload readiness from joining the cluster.

You should let the node join the cluster first, then iterate the entries,
and only then allow workload on this node.
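That ordering can be sketched with a simple readiness gate (all names here are hypothetical; a plain Map stands in for the local cache and a callback stands in for the JNI call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BiConsumer;

public class WarmupGate {
    private final AtomicBoolean ready = new AtomicBoolean(false);

    // After the node has joined, replay every persisted entry into the
    // C++ side (via the JNI callback), then open the gate for workload.
    <K, V> void warmUp(Map<K, V> localEntries, BiConsumer<K, V> jniCallback) {
        localEntries.forEach(jniCallback);
        ready.set(true);
    }

    boolean isReady() {
        return ready.get();
    }

    public static void main(String[] args) {
        WarmupGate gate = new WarmupGate();
        List<String> replayed = new ArrayList<>();
        gate.warmUp(Map.of("k1", "v1", "k2", "v2"), (k, v) -> replayed.add(k));
        System.out.println(gate.isReady() + " " + replayed.size()); // prints: true 2
    }
}
```

Requests arriving before `isReady()` returns true would be rejected or queued, so the C++ memory is guaranteed to be in sync before any workload touches the node.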

Regards,
-- 
Ilya Kasnacheev


Thu, Jan 3, 2019 at 13:18, Lukas Polacek :

> Hi,
> in our use case we need to run some C++ code (via JNI) whenever something
> is pushed into the local Ignite cache. In other words, we need to have
> Ignite in sync with C++ memory. We have a local listener that listens to
> EVT_CACHE_OBJECT_PUT events and executes the C++ code, so everything is
> fine while the node is running. However, we use native persistence, so
> after a node restart, the local cache is read from the disk but the C++
> code hasn't been run for any cache entries, which means that Ignite and C++
> memory are out of sync.
>
> Iterating through the local cache entries is only possible once the node
> has already joined the cluster, but that's too late for us - it needs to be
> done before joining the cluster.
>
> I've managed to add a lifecycle bean event BEFORE_CLUSTER_JOIN (see
> http://apache-ignite-users.70518.x6.nabble.com/Register-listeners-before-joining-the-cluster-tc25944.html,
> a PR is hopefully coming soon), which is triggered before joining the
> cluster but at that point we cannot access the cache via ignite.cache(...).
> Is there a way to access all entries in the native persistence at that
> point or earlier? I'm also fine with modifying the Ignite source code if
> that's necessary (and simple enough), since we are just prototyping.
>


Re: Recovering from a data region OOM condition

2019-01-10 Thread colinc
Ignite DataRegionMetrics reports that memory *is* freed up when removing
items from the cache. However, Ignite continues to throw an OOM exception on
each subsequent cache removal. Cache puts are unsuccessful.

So although Ignite reports that the memory is free, it doesn't seem possible
to actually use it again following the OOM condition.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to detect sql schema changes and make update

2019-01-10 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/; they have
implemented a solution to detect changes on query entities and propagate the
changes across the cluster (fields, indexes, and re-indexation).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite 2.7 Persistence

2019-01-10 Thread Stanislav Lukyanov
Hi,

That’s right, Ignite nodes restart “cold”, meaning that they become operational 
without the data in RAM.
This allows them to restart as quickly as possible, but the price is that the 
first operations have to load data from disk, so performance will be much lower.

Here is a ticket to allow turning on a “hot restart” mode - 
https://issues.apache.org/jira/browse/IGNITE-10152.
There is also an improvement that allows loading the data of a specific 
partition manually in an efficient way - 
https://issues.apache.org/jira/browse/IGNITE-8873. If you iterate over all 
partitions after the node starts, it may shorten the warmup period.

Stan 

From: Glenn Wiebe
Sent: January 8, 2019, 18:02
To: user@ignite.apache.org
Subject: Re: Ignite 2.7 Persistence

I am new to Ignite, but as I understand it, after cluster restart, data is 
re-hydrated into memory as the nodes receive requests for their partitions' 
entries. So, a first query would be as slow as a distributed disk-based query. 
Subsequent queries should have some (depending on memory available) information 
in memory and thus faster. 

So, my question, is this the first query execution since startup?
Given that you have sufficient memory to hold this particular cache, I would 
expect subsequent query executions to take advantage of memory resident query 
processing.

Additionally, I took a quick look at whether Ignite stores in-memory 
aggregates (like counts) that could be returned without reading the actual 
data, as here, but could not find anything.

Good luck!

On Tue, Jan 8, 2019 at 7:55 AM gweiske  wrote:
I am using Ignite 2.7 with persistence enabled on a single VM with 128 GB RAM
in Azure and separate external HDD drives each for wal, walarchive and
storage. I loaded 20 GB of data/50,000,000 rows, then shut down Ignite and
restarted the hosting VM, started and activated Ignite and ran a simple
query
that requires sorting through all the data (SELECT DISTINCT  FROM 
;). The query has been running for hours now. Looking at the memory, instead
of the expected ~42 GB it is currently at 5.7GB (*slowly* increasing). Any
ideas why it might be that slow? 
The same scenario with SSD drives (this time 1 drive for wal and walarchive,
a second one for storage) finishes in about 5500 seconds (still slow).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Recovering from a data region OOM condition

2019-01-10 Thread colinc
I wrote a test for what happens in the case that a DataRegion runs out of
memory. I filled up a cache with records until I received the expected
IgniteOutOfMemoryException. Then I tried to remove entries from the cache -
expecting that memory would be freed up again.

What I found is that any cache operation such as remove() or clearAll() threw
a further IOOM exception. Although cache.size() decreased, it does not
appear that memory is freed up; it is not possible to add new entries to
the cache even after a clearAll().

Is this expected behaviour? What is the recommended approach for dealing
with an OOM condition - other than to avoid it in the first place?

Thanks!
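For reference, the usual way to keep a purely in-memory region from ever reaching this state is per-page eviction, so entries are dropped before the hard limit is hit. A sketch (region name, size, and threshold are illustrative):

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
  <property name="name" value="boundedRegion"/>
  <property name="maxSize" value="#{512L * 1024 * 1024}"/>
  <!-- Evict data pages once usage reaches 90% of maxSize,
       instead of failing with IgniteOutOfMemoryException. -->
  <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
  <property name="evictionThreshold" value="0.9"/>
</bean>
```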



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cluster of two nodes with minimal port use

2019-01-10 Thread Tobias König

Hi all,

nevermind, the problem was due to a misconfigured packet filter.

Cheers,
Tobias



On 1/8/19 3:13 PM, Tobias König wrote:

Hi there,

I'm trying to get an Ignite cluster consisting of two nodes to work 
that uses a minimal number of exposed ports. I'm new to Ignite, but my 
understanding is that it should suffice to pin each node to one specific 
port 1. for communication and 2. for discovery. The overall goal is to 
get a Docker cluster (with default bridged networking) working without 
Multicast and without --net=host.


However, I'm doing preliminary tests /without/ docker and am directly 
using my local machine (Node 1, IP 172.24.10.79) and a Raspberry Pi 
(Node 2, IP 172.24.10.83), and I can't get the cluster to work, 
because the discovery process doesn't succeed. I'm using a static IP 
finder in which I point each node to its corresponding counterpart.


XML-configuration of both nodes with the aforementioned minimal use of 
ports is attached inline.


If I start node 1 first and then node 2, no discovery process is 
initiated in the first minutes. If I start node 2 first and then node 
1, the discovery process is initiated but not completed successfully. 
I'll attach logs for the second case for both node 2 and 1.


Can somebody spot my configuration error?

Best regards and TIA,
Tobias



P.S. I was able to reproduce the error on two "regular" machines as 
well, without the use of a Raspberry Pi.



___

# ignite-config-node1.xml

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:3013</value>
                                <value>172.24.10.83:3013</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            </bean>
        </property>
    </bean>
</beans>



# ignite-config-node2.xml

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:3013</value>
                                <value>172.24.10.79:3013</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">