Re: Sharing Dataset Across Multiple Ignite Processes with Same Physical Page Mappings, SharedRDD

2018-01-26 Thread UmurD
One update to this thread: I realized that the redistribution from 2 nodes
with 50K keys each to 4 nodes with 25K keys each was happening because I was
not enforcing client mode on the Spark worker side. However, my question
still stands:

Does Ignite use shared memory (shmem) to manage the Shared RDD? Can I set up
Ignite servers so that a shared dataset/in-memory cache uses shared memory?

Sincerely,
Umur


UmurD wrote
> Val,
> 
> I would like to make one correction. Data could also be shared with Linux
> shared memory (like shm). It does not have to be through copy-on-write with
> read-only mapped pages. A shared dataset in shared memory across different
> processes also fits my use case.
> 
> Sincerely,
> Umur
> UmurD wrote
>> Hi Val,
>> 
>> Thanks for the quick response.
>> 
>> I am referring to how Virtual and Physical Memory works.
>> 
>> For more background, when a process is launched, it will be allocated a
>> virtual address space. This virtual memory will have a translation to the
>> physical memory you have on your computer. The pages allocated to a
>> process will have different permissions (read vs. read-write); some of
>> them will be exclusively mapped to the process they are assigned to,
>> while others will be shared.
>> 
>> A good example of shared physical pages is a shared library (it does not
>> have to be a library; I'm only providing that as an example). If I launch
>> two identical processes on the same machine, the shared libraries used by
>> these processes will have the same physical address (after translating
>> from virtual to physical addresses). This is because the library is
>> mapped read-only, and there is no need for two copies of the same library
>> if it is only being read. The processes will not get their own copies
>> until they attempt to write to the shared page. When one does, this
>> incurs a page fault, and the process is allocated its own (exclusive)
>> copy of the previously shared page for modification. This is called
>> Copy-On-Write (CoW).
>> 
>> The specific case I am looking for is this: when I launch 2 processes
>> (say Ignite, for the sake of the example) and load up a dataset to be
>> shared, I want these 2 processes to point to the same physical memory
>> space for the shared dataset (until one of them tries to modify it, of
>> course). In other words, I want the loaded dataset to have the same
>> physical address translation from each process's respective virtual
>> address. That is what I'm referring to when I talk about identical
>> physical page mappings.
>> 
>> This is for a research project I am conducting, so performance and
>> functionality are unimportant. The physical mapping is the only critical
>> component.
>> 
>> Sincerely,
>> Umur
>> vkulichenko wrote
>>> Umur,
>>> 
>>> When you talk about "physical page mappings", what exactly are you
>>> referring
>>> to? Can you please elaborate a bit more on what and why you're trying to
>>> achieve? What is the issue you're trying to solve?
>>> 
>>> -Val
>>> UmurD wrote
 Hello Apache Ignite Community,
 
 I am currently working with Ignite and Spark; I'm specifically
 interested in
 the Shared RDD functionality. I have a few questions and hope I can
 find
 answers here.
 
 Goal:
 I am trying to have a single physical page with multiple sharers
 (multiple
 processes map to the same physical page number) on a dataset. Is this
 achievable with Apache Ignite?
 
 Specifications:
 This is all running on Ubuntu 14.04 on an x86-64 machine, with
 Ignite-2.3.0.
 
 I will first introduce the simpler case using only Apache Ignite, and
 then
 talk about integration and data sharing with Spark. I appreciate the
 assistance.
 
 IGNITE NODES ONLY
 Approach:
 I am trying to utilize the Shared RDD of Ignite. Since I also need my data
 to persist after the Spark processes exit, I am deploying the Ignite cluster
 independently with the following command and config:
 
 '$IGNITE_HOME/bin/ignite.sh
 $IGNITE_HOME/examples/config/spark/example-shared-rdd.xml'. 
 
 I populate the Ignite nodes using:
 
 'mvn exec:java
 -Dexec.mainClass=org.apache.ignite.examples.spark.SharedRDDExample'. I
 modified this file to only populate the SharedRDD cache (partitioned) with
 100,000 (int, int) pairs.
 
 Finally, I observe the status of the ignite cluster using:
 
 '$IGNITE_HOME/bin/ignitevisorcmd.sh'
 
 Results:
 I can confirm that I have on average 50,000 (int, int) pairs per node,
 totaling 100,000 key-value pairs. The memory usage of my Ignite nodes also
 increases, confirming the populated RDD. However, when I compare the page
 maps of both Ignite nodes, I see that they are oblivious to each other's
 memory space and have different physical page mappings. Is it possible for
 me to set 
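As an aside on the mechanism being discussed in this thread: the effect of two mappings being backed by the same physical pages, with no copy made until something forces one, can be sketched with plain java.nio, which uses mmap underneath. The sketch below maps the same file region twice inside one process (a cross-process version would map the same path from each process); the file name and sizes are arbitrary, and whether Ignite itself shares pages this way between server processes is exactly the open question here.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMappingDemo {
    /** Maps the same file region twice and reads through the second
     *  mapping what was written through the first. Returns the value read. */
    static int demo() throws IOException {
        Path file = Files.createTempFile("shared-page-demo", ".bin");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Two independent shared mappings of the same 4 KiB region.
            // The kernel backs both with the same physical page-cache page
            // (a MAP_SHARED mapping), so a write through one mapping is
            // visible through the other without any copy being made.
            MappedByteBuffer a = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            MappedByteBuffer b = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            a.putInt(0, 42);    // write through mapping A...
            return b.getInt(0); // ...read it back through mapping B
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints 42
    }
}
```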

Re: ScanQuery with predicate always returns empty result.

2018-01-26 Thread Thomas Isaksen
Ah! I make silly mistakes too often. I will give it a try. I will also make a 
separate class instead of an inner class. Thanks!

--
Thomas Isaksen

From: ezhuravlev 
Sent: Friday, January 26, 2018 3:48:56 PM
To: user@ignite.apache.org
Subject: Re: ScanQuery with predicate always returns empty result.

Hi, in case when you use:

ScanQuery<Long, BinaryObject> scan = new ScanQuery<>((key, value) -> {
    System.out.println(key + " = " + value);
    return true;
})

I think the problem is that you created a ScanQuery whose type parameters
don't match your cache: your cache is IgniteCache<Long, BinaryObject>, so
the scan query should be ScanQuery<Long, BinaryObject> too.

In the case of the inner class, Ignite tries to serialize your outer class
as well, but it faces some problems in the process. Creating a separate
class instead of an inner one will help.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to activate cluster - table already exists

2018-01-26 Thread Thomas Isaksen
Hi, I will share next time for sure. I tend to break stuff so I'm guessing I 
will do it again. As far as I can remember I didn't change anything before 
starting again.

--
Thomas Isaksen

From: Michael Cherkasov 
Sent: Friday, January 26, 2018 8:48:58 PM
To: user@ignite.apache.org
Subject: Re: Failed to activate cluster - table already exists

Hi Thomas,

Please share a reproducer together with the work folder with us next time;
it's something that should be checked.

Did you change your QueryEntities between cluster restarts?

Thanks,
Mike.

2018-01-26 5:25 GMT-08:00 Thomas Isaksen 
>:
Hi Mikhail,

I don't know what happened but I deleted some folders under 
$IGNITE_HOME/work/db with the same name and the problem cleared.
I think maybe I killed Ignite before it could finish writing or something to 
that effect, which could have caused the problem.

./t

-Original Message-
From: Mikhail 
[mailto:michael.cherka...@gmail.com]
Sent: fredag 26. januar 2018 01.51
To: user@ignite.apache.org
Subject: Re: Failed to activate cluster - table already exists

Hi Thomas,

Looks like you can reproduce the issue with a unit test.

Could you please share it with us?

Thanks,
Mike.






Re: Failed to activate cluster - table already exists

2018-01-26 Thread Michael Cherkasov
Hi Thomas,

Please share a reproducer together with the work folder with us next time;
it's something that should be checked.

Did you change your QueryEntities between cluster restarts?

Thanks,
Mike.

2018-01-26 5:25 GMT-08:00 Thomas Isaksen :

> Hi Mikhail,
>
> I don't know what happened but I deleted some folders under
> $IGNITE_HOME/work/db with the same name and the problem cleared.
> I think maybe I killed Ignite before it could finish writing or something
> to that effect, which could have caused the problem.
>
> ./t
>
> -Original Message-
> From: Mikhail [mailto:michael.cherka...@gmail.com]
> Sent: fredag 26. januar 2018 01.51
> To: user@ignite.apache.org
> Subject: Re: Failed to activate cluster - table already exists
>
> Hi Thomas,
>
> Looks like you can reproduce the issue with a unit test.
>
> Could you please share it with us?
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: how to create instance of CacheManager of ignite

2018-01-26 Thread ak47
Denis
cachingProvider.getCacheManager(URI uri, ClassLoader clsLdr)
doesn't work. Were you able to make it work?







Re: Data lost when primary nodes down.

2018-01-26 Thread Ilya Kasnacheev
Hello!

In Ignite, caches are usually partitioned. This means that data is split
into some number (e.g. 128) of partitions, which are stored on different
nodes.
For example, in a three-node cluster, one node (10.5.42.95) will store 1/3
of the data in primary partitions and another 1/3 in backup partitions (if
you have backups = 1).
This ensures that if this node goes away, there's at least one copy of each
partition on the remaining ones.

But when you connect to this node with DBeaver or any other client, you can
access all the data. Everything is accessible, regardless of which node it
is stored on.
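For reference, the backup count mentioned above is a property of the cache configuration. A minimal sketch in the Spring XML style used elsewhere on this list (the cache name and the surrounding IgniteConfiguration are illustrative, not taken from this thread):

```xml
<!-- Partitioned cache keeping one backup copy of every partition,
     so the data survives the loss of any single node. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="backups" value="1"/>
</bean>
```

With backups = 1, a two-node cluster holds every entry twice, so either node can fail without data loss (once rebalancing has completed).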

Regards,

-- 
Ilya Kasnacheev

2018-01-25 4:27 GMT+03:00 rizal123 :

> Hi Denis,
>
> Would you mind clarifying my statement?
> I'm a little bit confused about primary and backup nodes, and where the data
> is stored.
>
> Thanks
>
>
>
>
>


RE: Question about data distribution

2018-01-26 Thread Stanislav Lukyanov
Hi,

How many data entries do you have? 
Are IDs that you use for affinity mapping evenly distributed among the entries?

Can you show the code that you use to define the affinity mapping and your 
cache configuration?

Also, what exactly do you mean by "node gets X IDs to work with"?
Do you mean that a node stores X IDs? How do you check that?

The data distribution across partitions (and, subsequently, nodes) is based on 
hashing,
so it has a probabilistic guarantee to be fairly even, given that the initial 
IDs are evenly distributed
and that the data set is large enough.
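The hashing argument above can be illustrated with a small stand-alone sketch. Note that this is not Ignite's real affinity function (RendezvousAffinityFunction uses a different hash and per-node weighting); it only shows why a handful of affinity keys can land very unevenly across nodes while a large key set evens out:

```java
import java.util.Arrays;

public class PartitionSketch {
    // Simplified stand-in for Ignite's affinity function: an affinity key
    // maps to a partition by hash. (Assumption: illustrative only.)
    static int partition(Object affinityKey, int partitions) {
        int h = affinityKey.hashCode();
        h ^= (h >>> 16);                    // mix the high bits down
        return Math.floorMod(h, partitions);
    }

    // Counts how many of idCount keys land on each node, with partitions
    // split round-robin across nodes (another simplification).
    static int[] distribute(int idCount, int partitions, int nodes) {
        int[] perNode = new int[nodes];
        for (int i = 0; i < idCount; i++) {
            perNode[partition("id-" + i, partitions) % nodes]++;
        }
        return perNode;
    }

    public static void main(String[] args) {
        // With ~10 distinct affinity keys, a skew like 9-vs-1 between two
        // nodes is entirely possible; hashing is only even in the limit.
        System.out.println("10 IDs:      " + Arrays.toString(distribute(10, 1024, 2)));
        // With many keys the law of large numbers takes over.
        System.out.println("100,000 IDs: " + Arrays.toString(distribute(100_000, 1024, 2)));
    }
}
```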

Thanks,
Stan

From: svonn
Sent: 26 January 2018 18:01
To: user@ignite.apache.org
Subject: Question about data distribution

Hi!

I have two server nodes and I've set up an AffinityKey mapping via some ID.
I'm streaming data from kafka to ignite and for my test data, about 5min
worth of data belongs to one ID, then the data for the next ID starts (real
data will mostly come in parallel). The cache I'm streaming the data into
has about 30min expiration policy.

I've noticed that the data seems to get very unevenly distributed.
One node sometimes gets 9 IDs to work with, while the other one only works
on a single ID.
Is that due to the fact that they aren't arriving simultaneously? Can this
behaviour be adjusted?

Best regards
svonn








Question about data distribution

2018-01-26 Thread svonn
Hi!

I have two server nodes and I've set up an AffinityKey mapping via some ID.
I'm streaming data from kafka to ignite and for my test data, about 5min
worth of data belongs to one ID, then the data for the next ID starts (real
data will mostly come in parallel). The cache I'm streaming the data into
has about 30min expiration policy.

I've noticed that the data seems to get very unevenly distributed.
One node sometimes gets 9 IDs to work with, while the other one only works
on a single ID.
Is that due to the fact that they aren't arriving simultaneously? Can this
behaviour be adjusted?

Best regards
svonn







Re: ScanQuery with predicate always returns empty result.

2018-01-26 Thread ezhuravlev
Hi, in case when you use:

ScanQuery<Long, BinaryObject> scan = new ScanQuery<>((key, value) -> {
    System.out.println(key + " = " + value);
    return true;
})

I think the problem is that you created a ScanQuery whose type parameters
don't match your cache: your cache is IgniteCache<Long, BinaryObject>, so
the scan query should be ScanQuery<Long, BinaryObject> too.

In the case of the inner class, Ignite tries to serialize your outer class
as well, but it faces some problems in the process. Creating a separate
class instead of an inner one will help.
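The inner-class problem is general Java serialization behaviour, not specific to Ignite: a non-static inner or anonymous class carries a hidden reference to its enclosing instance, which then has to be serialized too. A plain-JDK sketch of the same effect (class names are illustrative, not from the thread):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class InnerClassSerialization {
    // The outer class is deliberately NOT Serializable, like a typical test class.
    interface Pred extends Serializable { boolean apply(long k); }

    Pred anonymous() {
        // Anonymous inner class: captures a hidden reference to `this`.
        return new Pred() {
            @Override public boolean apply(long k) { return true; }
        };
    }

    // A static nested (or top-level) class has no such hidden reference.
    static class StandalonePred implements Pred {
        @Override public boolean apply(long k) { return true; }
    }

    static boolean serializes(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false; // the captured enclosing instance cannot be serialized
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(serializes(new InnerClassSerialization().anonymous())); // false
        System.out.println(serializes(new StandalonePred()));                      // true
    }
}
```

This is why moving the predicate into its own (or a static nested) class makes the query work.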

Evgenii





ScanQuery with predicate always returns empty result.

2018-01-26 Thread Thomas Isaksen
Hi

I have the following code:

IgniteCache<Long, BinaryObject> skipCache = 
IgniteUtil.getCache("skipCache").withKeepBinary();
ScanQuery<Long, BinaryObject> scan = new ScanQuery<>();
List<Cache.Entry<Long, BinaryObject>> result = skipCache.query(scan).getAll();

This returns all the data in my cache. However, once I start using a predicate 
I get no data:

ScanQuery<Long, BinaryObject> scan = new ScanQuery<>((key, value) -> {
System.out.println(key + " = " + value);
return true;
});
List<Cache.Entry<Long, BinaryObject>> result = skipCache.query(scan).getAll();

If I try to define the ScanQuery with an Inner class:

ScanQuery<Long, BinaryObject> scan = new ScanQuery<>(new 
IgniteBiPredicate<Long, BinaryObject>()
{
@Override
public boolean apply(Long aLong, BinaryObject binaryObject)
{
return true;
}
});

I get exception:

javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
Query execution failed: GridCacheQueryBean [qry=GridCacheQueryAdapter 
[type=SCAN, clsName=null, clause=null, 
filter=no.toyota.gatekeeper.test.ApplicationTest$1@67af833b, transform=null, 
part=null, incMeta=false, metrics=GridCacheQueryMetricsAdapter 
[minTime=9223372036854775807, maxTime=0, sumTime=0, avgTime=0.0, execs=0, 
completed=0, fails=0], pageSize=1024, timeout=0, keepAll=true, 
incBackups=false, dedup=false, 
prj=org.apache.ignite.internal.cluster.ClusterGroupAdapter@40d4009d, 
keepBinary=true, subjId=30d3339d-98d1-4b28-b86a-5eb8b440d7b4, taskHash=0], 
rdc=null, trans=null]

at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1287)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.next(GridCacheQueryFutureAdapter.java:171)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$5.onHasNext(GridCacheDistributedQueryManager.java:634)
at 
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
at 
org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
at 
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:114)
at 
no.toyota.gatekeeper.test.ApplicationTest.testMatch(ApplicationTest.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 

RE: Failed to activate cluster - table already exists

2018-01-26 Thread Thomas Isaksen
Hi Mikhail,

I don't know what happened but I deleted some folders under 
$IGNITE_HOME/work/db with the same name and the problem cleared.
I think maybe I killed Ignite before it could finish writing or something to 
that effect, which could have caused the problem.

./t

-Original Message-
From: Mikhail [mailto:michael.cherka...@gmail.com] 
Sent: fredag 26. januar 2018 01.51
To: user@ignite.apache.org
Subject: Re: Failed to activate cluster - table already exists

Hi Thomas,

Looks like you can reproduce the issue with a unit test.

Could you please share it with us?

Thanks,
Mike.





Re: a2cf190a-6a44-4b94-baea-c9b88a16922e, class org.apache.ignite.IgniteCheckedException:Failed to execute SQL query

2018-01-26 Thread ilya.kasnacheev
Hello!

Most likely it's an error in the SQL statement. Unfortunately, your stack trace
is not enough to get to the root of the problem. Logs from the nodes should
contain the relevant information. You can share your logs from the nodes, or
search them yourself for IgniteCheckedException and post your findings.

Regards,





Re: One problem about Cluster Configuration(cfg)

2018-01-26 Thread Andrey Mashenkov
Rick,

I can't reproduce the issue. The query "select+*+from+String" works fine
for me.

On Fri, Jan 26, 2018 at 12:57 PM,  wrote:

> Hello Andrey,
>
>
>
> 1.  I have checked that the cache setting is Partitioned and run two
> nodes again. These program codes are respectively as follows:
>
> One node(shell script)===
> ===
>
> 
>
> 
>
> 
>
> 
>
>   
>
> *  *
>
>   
>
> 
>
> java.lang.String
>
> java.lang.String
>
> 
>
>   
>
> 
>
> 
>
> 
>
> 
> ===
>
> The other node(maven java)=
> =
>
> *cacheConf**.setIndexedTypes(String.**class**, String.**class**)*;
>
>
>
> cacheConf.setCacheMode(CacheMode.*PARTITIONED*);
>
>
>
> *IgniteCache* cache = *igniteVar**.getOrCreateCache(**cacheConf**)*;
>
> 
> ===
>
> 2.  Yes, I can see the information of 2 servers* after I close the
> other node* in the following text.
>
> I think that both the One node and the other node are linked, *but I
> cannot put or get data into the oneCache.*
>
> 
> ===
>
> [26-01-2018 17:23:46][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Topology snapshot [ver=2, *servers=2, *clients=0, CPUs=4, heap=4.5GB]
>
>
>
> [26-Jan-2018 17:23:46][WARN 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Node FAILED: TcpDiscoveryNode [id=abe48607-8b7a-4413-9711-c7a14ecbd1a5,
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 127.0.0.1], sockAddrs=[ubuntu/
> 127.0.0.1:47501, /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501],
> discPort=47501, order=2, intOrder=2, lastExchangeTime=1516958573637,
> loc=false, ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
> 
> ===
>
> In addition, the following ports are created on localhost, as:
>
> *One node(shell script)*
>
> java 12266 root   TCP *:40447 (LISTEN)
>
> java 12266 root   TCP *:49187 (LISTEN)
>
> java 12266 root   TCP *:46451 (LISTEN)
>
> *java 12266 root  TCP *:47100 (LISTEN)*
>
> *java 12266 root  TCP *:11211 (LISTEN)*
>
> *java 12266 root  TCP *:8080 (LISTEN)*
>
> *java 12266 root  TCP *:47500 (LISTEN)*
>
> *The other node(maven java)*
>
> *java 12349 root  TCP *:47101 (LISTEN)*
>
> *java 12349 root  TCP *:11212 (LISTEN)*
>
> *java 12349 root  TCP *:8081 (LISTEN)*
>
> *java 12349 root  TCP *:47501 (LISTEN)*
>
>
>
> *However, I can not to print data from the oneCache **in the console of
> the maven java project.*
>
>
>
> 3.  I am not sure if url: http://127.0.0.1:8080/ignite?cmd=qryfldexe&pageSize=100&cacheName=oneCache&qry=select+*+from+String
>
> meets your command “SELECT _val FROM ” to show all data in oneCache via the
> REST API or not.
>
>
>
> Rick
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Friday, January 26, 2018 4:52 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: One problem about Cluster Configuration(cfg)
>
>
>
> Rick,
>
>
>
> 1. As you are able to put an entry into the cache, you should see the
> cache.get() result in the console.
>
> Please check that the cache is not Local, but Partitioned or Replicated.
>
>
>
> 2. Also check the topology version message. You should see 2 servers in it.
>
>
>
> 3. Try to select value explicitly via _val field. E.g. "SELECT _val FROM
> "
>
>
>
> On Fri, Jan 26, 2018 at 6:30 AM,  wrote:
>
> Hi Andrey,
>
>
>
> I was so pleased to hear from you.
>
>
>
> Please allow me to explain in detail my problem.
>
>
>
> In the following, there is my java code to put and get one KV data
> ("keyString", "valueString")  into the cache  ”oneCache”.
>
> 
> ===
>
> Ignite igniteVar = Ignition.*getOrStart*(cfg);
>
>
>
> *CacheConfiguration* cacheConf = *new* *CacheConfiguration*();
>
>
>
> cacheConf.setName("oneCache");
>
>
>
> *cacheConf**.setIndexedTypes(String.**class**, String.**class**)*;
>
>
>
> *IgniteCache* *cache** = **igniteVar**.getOrCreateCache(**cacheConf**)**;*
>
>
>
> *cache**.put(**"keyString"**, **"valueString"**);*
>
>
>
> *System.**out**.println(**cache**.get(**"keyString"**));*
>
> 
> ===
>
> *Although 

RE: One problem about Cluster Configuration(cfg)

2018-01-26 Thread linrick
Hello Andrey,


1.  I have checked that the cache setting is Partitioned and run two nodes 
again. These program codes are respectively as follows:
One node(shell 
script)==




  
  
  

java.lang.String
java.lang.String

  



===
The other node(maven 
java)==
cacheConf.setIndexedTypes(String.class, String.class);

cacheConf.setCacheMode(CacheMode.PARTITIONED);

IgniteCache cache = igniteVar.getOrCreateCache(cacheConf);
===

2.  Yes, I can see the information of 2 servers after I close the other 
node in the following text.

I think that both the One node and the other node are linked, but I cannot 
put or get data into the oneCache.
===
[26-01-2018 17:23:46][INFO 
][disco-event-worker-#28%null%][GridDiscoveryManager] Topology snapshot [ver=2, 
servers=2, clients=0, CPUs=4, heap=4.5GB]

[26-Jan-2018 17:23:46][WARN 
][disco-event-worker-#28%null%][GridDiscoveryManager] Node FAILED: 
TcpDiscoveryNode [id=abe48607-8b7a-4413-9711-c7a14ecbd1a5, 
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 127.0.0.1], 
sockAddrs=[ubuntu/127.0.0.1:47501, /0:0:0:0:0:0:0:1%lo:47501, 
/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
lastExchangeTime=1516958573637, loc=false, ver=1.9.0#20170302-sha1:a8169d0a, 
isClient=false]
===
In addition, the following ports are created on localhost, as:
One node(shell script)
java 12266 root   TCP *:40447 (LISTEN)
java 12266 root   TCP *:49187 (LISTEN)
java 12266 root   TCP *:46451 (LISTEN)
java 12266 root  TCP *:47100 (LISTEN)
java 12266 root  TCP *:11211 (LISTEN)
java 12266 root  TCP *:8080 (LISTEN)
java 12266 root  TCP *:47500 (LISTEN)
The other node(maven java)
java 12349 root  TCP *:47101 (LISTEN)
java 12349 root  TCP *:11212 (LISTEN)
java 12349 root  TCP *:8081 (LISTEN)
java 12349 root  TCP *:47501 (LISTEN)

However, I cannot print data from the oneCache in the console of the maven 
java project.


3.  I am not sure if 
url: http://127.0.0.1:8080/ignite?cmd=qryfldexe&pageSize=100&cacheName=oneCache&qry=select+*+from+String

meets your command “SELECT _val FROM ” to show all data in oneCache via the 
REST API or not.

Rick

From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: Friday, January 26, 2018 4:52 PM
To: user@ignite.apache.org
Subject: Re: One problem about Cluster Configuration(cfg)

Rick,

1. As you are able to put an entry into the cache, you should see the cache.get() 
result in the console.
Please check that the cache is not Local, but Partitioned or Replicated.

2. Also check the topology version message. You should see 2 servers in it.

3. Try to select value explicitly via _val field. E.g. "SELECT _val FROM "

On Fri, Jan 26, 2018 at 6:30 AM, 
> wrote:
Hi Andrey,

I was so pleased to hear from you.

Please allow me to explain in detail my problem.

In the following, there is my Java code to put and get one KV pair 
("keyString", "valueString") into the cache "oneCache".
===
Ignite igniteVar = Ignition.getOrStart(cfg);

CacheConfiguration cacheConf = new CacheConfiguration();

cacheConf.setName("oneCache");

cacheConf.setIndexedTypes(String.class, String.class);

IgniteCache cache = igniteVar.getOrCreateCache(cacheConf);

cache.put("keyString", "valueString");

System.out.println(cache.get("keyString"));
===
Although the code compiles well, the execution (put data and get data) does 
not work, as follows.


1.  Restful api: 
http://127.0.0.1:8080/ignite?cmd=qryfldexe&pageSize=100&cacheName=oneCache&qry=select+*+from+String



{"successStatus":0,"sessionToken":"","error":"","response":{"items":[],"last":true,"fieldsMetadata":[{"schemaName":"oneCache","typeName":"STRING","fieldName":"_KEY","fieldTypeName":"java.lang.String"},{"schemaName":"oneCache","typeName":"STRING","fieldName":"_VAL","fieldTypeName":"java.lang.String"}],"queryId":0}}


2.  The execution result of the other node (maven java) is shown in the 
attachment (JPG) for detail.

From the above information, I 

Re: CacheStoreAdapter write and delete are not being called by Ignite's GridCacheStoreManager

2018-01-26 Thread Pim D
Hi Slava,

Guess I overlooked that part, thanx!





RE: Binary type has different affinity key fields

2018-01-26 Thread Thomas Isaksen
Hi Slava

Thanks for pointing out my mistakes with the template. 
I have attached the java classes in question and the ignite config file that I 
am using .

I create the table using DDL as follows:

CREATE TABLE UserCache (
id bigint,
username varchar, 
password varchar,
PRIMARY KEY (username, password)
)
WITH "template=userCache, affinitykey=username, cache_name=UserCache, 
key_type=no.toyota.gatekeeper.ignite.key.CredentialsKey, 
value_type=no.toyota.gatekeeper.authenticate.Credentials";

Next I try to put one entry into my cache:
 
@Test
public void testIgnite()
{
    Ignition.setClientMode(true);
    Ignite ignite = Ignition.start("/config/test-config.xml");
    IgniteCache<CredentialsKey, Credentials> cache = 
        ignite.cache("UserCache");
    // this blows up
    cache.put(new CredentialsKey("foo","bar"), new 
        Credentials("foo","bar","resourceId"));
}

I am not sure my code is correct but I get the same error when I try to insert 
a row using SQL.

INSERT INTO UserCache (id,username,password) VALUES (1, 'foo','bar');

--
Thomas Isaksen

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: torsdag 25. januar 2018 17.39
To: user@ignite.apache.org
Subject: RE: Binary type has different affinity key fields

Hi Thomas,

The CREATE TABLE statement doesn't use your template because you specified the 
predefined one, 'partitioned' (template=partitioned).

In case of using replicated mode, the following property  does not make sense.

Could you please share a full reproducer? I will try it on my side and come back 
to you with my findings.
I mean the definitions of CredentialsKey and Credentials, and code that can be 
used to reproduce the exception you mentioned.

Best regards,
Slava.







CredentialsKey.java
Description: CredentialsKey.java


Credentials.java
Description: Credentials.java


test-config.xml
Description: test-config.xml


No transaction is currently active || Not allowed to create transaction on shared EntityManager - use Spring transactions or EJB CMT instead

2018-01-26 Thread rizal123
Hi,

First of all, yes, I know Apache Ignite does not support SQL transactions. I hope
this is not a showstopper for my POC.
I'm here to find another way.

1. I have function for update sequence table.

private int sequenceManual(String seqName) {
    int seq = 0;
    Query query;
    try {
        query = this.em.createNativeQuery("UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + "
            + INCREMENT + " WHERE SEQ_NAME = '" + seqName + "'");
        seq = (Integer) query.executeUpdate();
    } catch (Exception e) {
        LOG.error("An exception was thrown while Update Sequence " + seqName, e);
    }

    try {
        query = this.em.createNativeQuery("SELECT SEQ_COUNT FROM SEQUENCE WHERE SEQ_NAME = '"
            + seqName + "'");
        seq = (int) ((Number) query.getSingleResult()).longValue();
    } catch (Exception e) {
        LOG.error("An exception was thrown while Next Return Sequence " + seqName, e);
    }
    return seq;
}

With this code, I get an error:
javax.persistence.TransactionRequiredException: Exception Description: No
transaction is currently active


Then, I modified my code with Spring's @Transactional:
@Transactional(propagation=Propagation.REQUIRED)
private int sequenceManual(String seqName) {
. . . .

I have an error:
java.lang.IllegalStateException: Not allowed to create transaction on shared
EntityManager - use Spring transactions or EJB CMT instead

Is there any suggestion for running my update-sequence SQL?
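Both errors above are characteristic of proxy-based transaction management: Spring applies @Transactional by wrapping the bean in a proxy, so the annotation has no effect on private methods or on calls a bean makes to itself. A plain-JDK sketch (no Spring; all names are illustrative) of why self-invocation bypasses a wrapping proxy:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Seq { int next(); int nextTwice(); }

    static class SeqImpl implements Seq {
        int value = 0;
        public int next() { return ++value; }
        // Self-invocation: this.next() is a direct call on the target object,
        // so a proxy wrapped around it never sees these inner calls.
        public int nextTwice() { next(); return next(); }
    }

    static int intercepted = 0;

    // Wraps the target in a JDK dynamic proxy that counts interceptions
    // (in Spring, "begin/commit transaction" would happen here).
    static Seq wrap(Seq target) {
        InvocationHandler h = (proxy, method, args) -> {
            intercepted++;
            return method.invoke(target, args); // delegate to the real object
        };
        return (Seq) Proxy.newProxyInstance(Seq.class.getClassLoader(),
                new Class<?>[]{Seq.class}, h);
    }

    public static void main(String[] args) {
        Seq seq = wrap(new SeqImpl());
        seq.next();      // goes through the proxy: intercepted once
        seq.nextTwice(); // intercepted once; its inner next() calls bypass the proxy
        System.out.println(intercepted); // prints 2, not 4
    }
}
```

In Spring terms, a common way out is to move sequenceManual to a public method on a separate bean (or to use TransactionTemplate programmatically), so the transactional interception can actually apply.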





Re: One problem about Cluster Configuration(cfg)

2018-01-26 Thread Andrey Mashenkov
Rick,

1. As you are able to put entry to cache, you should see cache.get() result
in console.
Please, check cache is not Local, but Partitioned or Replicated.

2. Also check the topology version message. You should see 2 servers in it.

3. Try to select value explicitly via _val field. E.g. "SELECT _val FROM
"

On Fri, Jan 26, 2018 at 6:30 AM,  wrote:

> Hi Andrey,
>
>
>
> I was so pleased to hear from you.
>
>
>
> Please allow me to explain in detail my problem.
>
>
>
> In the following, there is my java code to put and get one KV data
> ("keyString", "valueString")  into the cache  ”oneCache”.
>
> 
> ===
>
> Ignite igniteVar = Ignition.*getOrStart*(cfg);
>
>
>
> *CacheConfiguration* cacheConf = *new* *CacheConfiguration*();
>
>
>
> cacheConf.setName("oneCache");
>
>
>
> *cacheConf**.setIndexedTypes(String.**class**, String.**class**)*;
>
>
>
> *IgniteCache* *cache** = **igniteVar**.getOrCreateCache(**cacheConf**)**;*
>
>
>
> *cache**.put(**"keyString"**, **"valueString"**);*
>
>
>
> *System.**out**.println(**cache**.get(**"keyString"**));*
>
> 
> ===
>
> *Although the compiler works well, the execution (put data and get data)
> does not run, as follows.*
>
>
>
> 1.  Restful api: http://127.0.0.1:8080/ignite?cmd=qryfldexe&pageSize=100&cacheName=oneCache&qry=select+*+from+String
>
>
>
> {"successStatus":0,"sessionToken":"","error":"","response":{"items":[],"last":true,"fieldsMetadata":[{"schemaName":"oneCache","typeName":"STRING","fieldName":"_KEY","fieldTypeName":"java.lang.String"},{"schemaName":"oneCache","typeName":"STRING","fieldName":"_VAL","fieldTypeName":"java.lang.String"}],"queryId":0}}
>
>
>
> 2.  The execution result of the other node(maven.project.java) is as
> the attachement(JPG) for detail.
>
>
>
> From the above information, I think that the oneCache exists, but there
> is no ("keyString", "valueString") data in oneCache.
>
>
>
> Rick
>
>
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Thursday, January 25, 2018 5:54 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: One problem about Cluster Configuration(cfg)
>
>
>
> Rick,
>
>
>
> Looks like ok.
>
> You ran 2 nodes, then you killed one, and the other node reported that the
> killed node was dropped from the grid.
>
>
>
> What the issue is?
>
>
>
> On Thu, Jan 25, 2018 at 12:38 PM,  wrote:
>
> Hi Andrey,
>
>
>
> 1.  There are no other running nodes when I triggered the two nodes.
>
>
>
> 2.  If I firstly triggered the One node(shell script), and then
> triggered the other node(maven.project.java).
>
> I closed the other node (maven.project.java) and *the One node was still
> running*. The program result of the One node shows that:
>
>
>
> [25-Jan-2018 17:32:25][WARN ][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
> Local node has detected failed nodes and started cluster-wide procedure. To
> speed up failure detection please see 'Failure Detection' section under
> javadoc for 'TcpDiscoverySpi'
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Added new node to topology: TcpDiscoveryNode 
> [id=664c870e-6b93-4328-a95b-9e04d5b4f59c,
> addrs=[0:0:0:0:0:0:0:1%lo,  127.0.0.1], sockAddrs=[ubuntu/ 127.0.0.1:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10,
> intOrder=6, lastExchangeTime=1516872738417, loc=false,
> ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> **Topology snapshot [ver=10, servers=2, clients=0, CPUs=4, heap=4.5GB]**
>
>
>
> [25-Jan-2018 17:32:25][WARN 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Node FAILED: TcpDiscoveryNode [id=664c870e-6b93-4328-a95b-9e04d5b4f59c,
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1], sockAddrs=[ubuntu/127.0.0.1:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10,
> intOrder=6, lastExchangeTime=1516872738417, loc=false,
> ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Topology snapshot *[ver=11, servers=1, clients=0, CPUs=4, heap=1.0GB]*
>
>
>
> Rick
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Thursday, January 25, 2018 5:10 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: One problem about Cluster Configuration(cfg)
>
>
>
> Hi Rick,
>
>
>
> Do you have a luck to resolve this?
>
> Or you still observe the issue when configuring ipFinder via API?
>
>
>
> On Thu, Jan 25, 2018 at 11:29 AM,  wrote:
>
> Hi all,
>
>
>
> By the way, I run two nodes on localhost, and the multicastGroup IP and
> port are the default settings in the example-cache.xml, as:
>
>