Re: Distributed transaction support in Ignite

2018-02-09 Thread Prasad Bhalerao
Hi,

I am using an Oracle database. I want to use the write-behind approach to
update the tables when the cache associated with a table is updated.

1. If I use colocated computing in my distributed transaction, will it also
commit/rollback the data in the Oracle table?

2. I want to start multiple Ignite nodes. When a node starts, it will load
all the configured caches. All the nodes have the same cache configuration,
i.e. every node starts loading the same caches at startup in PARTITIONED
cache mode.

Does Ignite execute the cache loader/CacheStore code (which fetches the
data from the Oracle tables) on the node that starts first and then
distribute the data to the other nodes in the cluster, or does every node
in the cluster execute the loader code and load only the data appropriate
to that node?

Thanks,
Prasad


On Feb 8, 2018 7:54 PM, "Ilya Lantukh"  wrote:

Hi Prasad,

Your approach is incorrect: the function that you pass into
ignite.compute().affinityRun(...) is executed outside of the transaction
scope. If you want to execute code on the affinity node to modify a value
in the cache, you should use the IgniteCache.invoke(...) method - it will
be part of the transaction.

Hope this helps.
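The distinction can be sketched in plain Java. This is a conceptual model only, not Ignite API - all class and method names below are invented for illustration: a mutation routed through the transaction's own entry hook, as IgniteCache.invoke(...) is, can be undone on rollback, while a detached runnable's write bypasses the transaction entirely and survives it.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Conceptual sketch, NOT Ignite API: models why rollback cannot undo
// changes made by a detached affinityRun(...) task.
class TxCacheSketch {
    private final Map<Integer, String> data = new HashMap<>();
    private final Deque<Runnable> undoLog = new ArrayDeque<>();

    void put(int key, String val) { data.put(key, val); }
    String get(int key) { return data.get(key); }

    // Analogue of IgniteCache.invoke(...): the previous value is recorded
    // in the transaction's undo log before the entry processor runs.
    void invoke(int key, UnaryOperator<String> processor) {
        String old = data.get(key);
        undoLog.push(() -> data.put(key, old));
        data.put(key, processor.apply(old));
    }

    // Analogue of affinityRun(...): the task mutates the cache directly,
    // bypassing the undo log, so rollback never sees this change.
    void affinityRun(Runnable task) { task.run(); }

    void rollback() {
        while (!undoLog.isEmpty()) undoLog.pop().run();
    }
}
```

Mutating one key through invoke(...) and another from the detached task, then rolling back, restores the first key but leaves the second modified - exactly the behavior observed in the original code.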

On Thu, Feb 8, 2018 at 5:14 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> Does Ignite support distributed transactions in the case of collocated
> computation?
>
> I started two Ignite nodes and then pushed the data into the cache using
> the following code. In this code I am rolling back the transaction at the
> end of the compute affinity run, but after the rollback the values in the
> cache are not restored to the previous version.
>
> Can anyone please help? Am I doing something wrong?
>
>
> public static void main(String[] args) throws Exception {
>     Ignition.setClientMode(true);
>
>     try (Ignite ignite = Ignition.start("ignite-configuration.xml")) {
>         IgniteCache<Integer, String> cache = ignite.getOrCreateCache("ipcache1");
>
>         for (int i = 0; i < 10; i++)
>             cache.put(i, Integer.toString(i));
>
>         for (int i = 0; i < 10; i++)
>             System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
>
>         System.out.println("Node Started");
>
>         final IgniteCache<Integer, String> cache1 = ignite.cache("ipcache1");
>         IgniteTransactions transactions = ignite.transactions();
>         Transaction tx = transactions.txStart(TransactionConcurrency.OPTIMISTIC,
>             TransactionIsolation.SERIALIZABLE);
>
>         for (int i = 0; i < 10; i++) {
>             int key = i;
>
>             ignite.compute().affinityRun("ipcache1", key,
>                 () -> {
>                     System.out.println("Co-located using affinityRun [key= " + key
>                         + ", value=" + cache1.localPeek(key) + ']');
>
>                     String s = cache1.get(key);
>                     s = s + "#Modified";
>                     cache1.put(key, s);
>                 }
>             );
>         }
>         tx.rollback();
>         System.out.println("RolledBack...");
>         for (int i = 0; i < 10; i++)
>             System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
>     }
> }
>
>
>
> Thanks,
> Prasad
>



-- 
Best regards,
Ilya


Re: Cat Example

2018-02-09 Thread Amir Akhmedov
I ran your code and connected to Ignite through DBeaver, and it shows a
single DOG table. I think it's some glitch with your DBeaver instance. In
general, you can run the H2 Console by enabling the IGNITE_H2_DEBUG_CONSOLE
JVM system property and check over there.
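For reference, one way to pass that parameter (paths and config file are placeholders; Ignite's ignite.sh forwards extra JVM arguments through JVM_OPTS - verify against your launch script):

```shell
# Enable Ignite's built-in H2 debug console for this node (placeholder paths).
export JVM_OPTS="-DIGNITE_H2_DEBUG_CONSOLE=true"
bin/ignite.sh config/example-ignite.xml
```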

FYI, Ignite supports C++[1] and .NET[2] platforms, just in case :)

[1] https://apacheignite-cpp.readme.io/docs
[2] https://apacheignite-net.readme.io/docs/

On Fri, Feb 9, 2018 at 5:15 PM, Williams, Michael <
michael.willi...@transamerica.com> wrote:

> So this seems to be the closest I can get to something that looks like
> DML – this does work, and the only odd thing is that table DOG shows up
> twice in DBeaver, which I can’t quite figure out. Is this the right way to
> go about getting data in? Sorry, I’m just getting started with Ignite and
> my background is more C++ than Java.
>
>
>
> Cat.java:
>
> import org.apache.ignite.cache.query.annotations.*;
>
> import java.io.*;
>
> public class Cat implements Serializable {
>     int legs;
>     String name;
>
>     Cat(int l, String n)
>     {
>         legs = l;
>         name = n;
>     }
>     Cat()
>     {
>         legs = 0;
>         name = "";
>     }
> }
>
> Main Class:
>
> import org.apache.ignite.IgniteDataStreamer;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.cache.QueryIndex;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.cache.query.SqlFieldsQuery;
> import org.apache.ignite.cache.query.QueryCursor;
>
> import java.util.*;
>
> public class Test {
>     public static void main(String[] args)
>     {
>         Ignite ignite = Ignition.start();
>         CacheConfiguration<Integer, Cat> cfg = new CacheConfiguration<>("CAT");
>         cfg.setCacheMode(CacheMode.REPLICATED);
>         cfg.setSqlEscapeAll(true);
>         //cfg.setIndexedTypes(Integer.class, Cat.class);
>         QueryEntity qe = new QueryEntity();
>         qe.setKeyType(Integer.class.getName());
>         qe.setValueType(Cat.class.getName());
>         LinkedHashMap<String, String> fields = new LinkedHashMap<>();
>         fields.put("legs", Integer.class.getName());
>         fields.put("name", String.class.getName());
>         qe.setFields(fields);
>         Collection<QueryIndex> indexes = new ArrayList<>(1);
>         indexes.add(new QueryIndex("legs"));
>         qe.setIndexes(indexes);
>         qe.setTableName("DOG");
>         cfg.setQueryEntities(Arrays.asList(qe));
>         cfg.setSqlSchema("PUBLIC");
>
>         try (IgniteCache<Integer, Cat> cache = ignite.getOrCreateCache(cfg))
>         {
>             try (IgniteDataStreamer<Integer, Cat> stmr = ignite.dataStreamer("CAT")) {
>                 for (int i = 0; i < 1_000_000; i++)
>                     stmr.addData(i, new Cat(i + 1, "Fluffy"));
>                 stmr.flush();
>             }
>
>             SqlFieldsQuery sql = new SqlFieldsQuery("select * from DOG LIMIT 10");
>             try (QueryCursor<List<?>> cursor = cache.query(sql)) {
>                 for (List<?> row : cursor)
>                     System.out.println("cat=" + row.get(0));
>             }
>         }
>         System.out.print("UP!");
>     }
> }
>
>
>
> *From:* Amir Akhmedov [mailto:amir.akhme...@gmail.com]
> *Sent:* Friday, February 09, 2018 5:08 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Cat Example
>
>
>
> Hi Mike,
>
> As of today Ignite DML does not support transactions, and every DML
> statement is executed atomically (Igniters, please correct me if I'm wrong).
>
> But you can still use the DataStreamer[1] to improve data loading, or you
> can use Ignite's Cache API with batch operations like putAll.
>
> [1] https://apacheignite.readme.io/docs/data-streamers
> 
>
>
>
> On Fri, Feb 9, 2018 at 2:56 PM, Williams, Michael  transamerica.com> wrote:
>
> Is it possible to stream data into a table created by a query? For
> example, consider the following modified example. If I had a Person object,
> how would I replace the insert loop to improve speed?
>
>
>
> Thanks,
>
> Mike
>
>
>
> import org.apache.ignite.Ignite;
>
> import org.apache.ignite.IgniteCache;
>
> import org.apache.ignite.Ignition;
>
> import org.apache.ignite.cache.query.SqlFieldsQuery;
>
> import org.apache.ignite.configuration.CacheConfiguration;
>
>
>
> public class Test {
>
> private static final String DU

RE: @SpringApplicationContextResource / ApplicationContext / getBeansOfType()

2018-02-09 Thread Navnet Kachroo
Found the problem - this issue was caused by having 
“org.springframework.boot:spring-boot-devtools” in the project dependencies. 
Things worked as expected once I removed this devtools dependency.
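A likely mechanism, for reference: spring-boot-devtools loads application classes through its restart class loader, so the Repository class seen by Ignite-managed code can be a different Class object from the one backing the beans, and the type match inside getBeansOfType() then fails silently even though getBeanDefinitionCount() still counts the beans. Removing (or excluding) the dependency puts everything back on one class loader - a hypothetical pom.xml fragment showing what to delete:

```xml
<!-- Remove or comment out this dependency so all classes
     are loaded by a single class loader: -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>
```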

Thanks,
NK

From: Michael Cherkasov [mailto:michael.cherka...@gmail.com]
Sent: Thursday, February 8, 2018 6:25 PM
To: user@ignite.apache.org
Subject: Re: @SpringApplicationContextResource / ApplicationContext / 
getBeansOfType()

Hi Navnet,

Could you please share a reproducer for this issue? Some small mvn based 
project on github or as zip archive that will show the issue.

Thanks,
Mike.


2018-02-08 15:00 GMT-08:00 NK 
mailto:navnet.kach...@revft.com>>:
Hi,

I have a Spring Boot app using Ignite 2.3.0.

I am invoking Ignite in a class called IgniteStarter using
"IgniteSpring.start(springAppCtx)" where springAppCtx is my app's Spring
Application Context.

When I look for beans of a specific type in the main IgniteStarter class, I
get the expected result. My code:
Collection<Repository> jdbcRepositories =
springAppCtx.getBeansOfType(Repository.class).values();

I have an IgniteService (bootstrapped by Ignite) where I need to use app
context. When I use the same code as above (getBeansOfType(...)) in the
IgniteService class, I don't get any beans.

In the Ignite service, I am using ApplicationContext using annotation
@SpringApplicationContextResource.

I am able to get a correct bean count using
springAppCtx.getBeanDefinitionCount() (so the context is set correctly), but
getBeansOfType(...) doesn't work.

Any pointers to why getBeansOfType(...) does not return anything on the
spring app context managed / set by Ignite?

Thanks,
NK



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: SpringTransactionManager and ChainedTransactionManager

2018-02-09 Thread smurphy
In actuality I rewrote the tests, removed the
AbstractTransactionalJUnit4SpringContextTests super class, and the
transactions worked. I also rewrote the test as an integration test on a
Tomcat server and confirmed that the transaction managers worked.

I did see the same javadoc comments that you mentioned and found that the
order was not important. Also, from the following article that is referenced
in Spring's transaction documentation:

ChainedTransactionManager is "a crude implementation of a transaction
manager [that] just links together a list of other transaction managers to
implement the transaction synchronization. If the business processing is
successful they all commit, and if not they all roll back."

Full article: 

https://www.javaworld.com/article/2077963/open-source-tools/distributed-transactions-in-spring--with-and-without-xa.html?page=2



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Cat Example

2018-02-09 Thread Williams, Michael
So this seems to be the closest I can get to something that looks like DML –
this does work, and the only odd thing is that table DOG shows up twice in
DBeaver, which I can’t quite figure out. Is this the right way to go about
getting data in? Sorry, I’m just getting started with Ignite and my background
is more C++ than Java.

Cat.java:


import org.apache.ignite.cache.query.annotations.*;

import java.io.*;

public class Cat implements Serializable {
int legs;
String name;

Cat(int l, String n)
{
legs = l;
name = n;
}
Cat()
{
legs = 0;
name = "";
}

}

Main Class:


import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.QueryCursor;

import java.util.*;

public class Test {
    public static void main(String[] args)
    {
        Ignite ignite = Ignition.start();
        CacheConfiguration<Integer, Cat> cfg = new CacheConfiguration<>("CAT");
        cfg.setCacheMode(CacheMode.REPLICATED);
        cfg.setSqlEscapeAll(true);
        //cfg.setIndexedTypes(Integer.class, Cat.class);
        QueryEntity qe = new QueryEntity();
        qe.setKeyType(Integer.class.getName());
        qe.setValueType(Cat.class.getName());
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("legs", Integer.class.getName());
        fields.put("name", String.class.getName());
        qe.setFields(fields);
        Collection<QueryIndex> indexes = new ArrayList<>(1);
        indexes.add(new QueryIndex("legs"));
        qe.setIndexes(indexes);
        qe.setTableName("DOG");
        cfg.setQueryEntities(Arrays.asList(qe));
        cfg.setSqlSchema("PUBLIC");

        try (IgniteCache<Integer, Cat> cache = ignite.getOrCreateCache(cfg))
        {
            try (IgniteDataStreamer<Integer, Cat> stmr = ignite.dataStreamer("CAT")) {
                for (int i = 0; i < 1_000_000; i++)
                    stmr.addData(i, new Cat(i + 1, "Fluffy"));
                stmr.flush();
            }

            SqlFieldsQuery sql = new SqlFieldsQuery("select * from DOG LIMIT 10");
            try (QueryCursor<List<?>> cursor = cache.query(sql)) {
                for (List<?> row : cursor)
                    System.out.println("cat=" + row.get(0));
            }
        }
        System.out.print("UP!");
    }
}

From: Amir Akhmedov [mailto:amir.akhme...@gmail.com]
Sent: Friday, February 09, 2018 5:08 PM
To: user@ignite.apache.org
Subject: Re: Cat Example

Hi Mike,
As of today Ignite DML does not support transactions, and every DML statement
is executed atomically (Igniters, please correct me if I'm wrong).
But you can still use the DataStreamer[1] to improve data loading, or you can
use Ignite's Cache API with batch operations like putAll.

[1] 
https://apacheignite.readme.io/docs/data-streamers

On Fri, Feb 9, 2018 at 2:56 PM, Williams, Michael 
mailto:michael.willi...@transamerica.com>> 
wrote:
Is it possible to stream data into a table created by a query? For example, 
consider the following modified example. If I had a Person object, how would I 
replace the insert loop to improve speed?

Thanks,
Mike

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class Test {
private static final String DUMMY_CACHE_NAME = "dummy_cache";
public static void main(String[] args)
{
Ignite ignite = Ignition.start();
{

// Create dummy cache to act as an entry point for SQL queries (new 
SQL API which do not require this
// will appear in future versions, JDBC and ODBC drivers do not 
require it already).
CacheConfiguration cacheCfg = new 
CacheConfiguration<>(DUMMY_CACHE_NAME).setSqlSchema("PUBLIC");
try (IgniteCache cache = ignite.getOrCreateCache(cacheCfg))
{
// Create reference City table based on REPLICATED template.
cache.query(new SqlFieldsQuery("CREATE TABLE city (id LONG 
PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"")).getAll();
// Create table based on PARTITIONED template with one backup.
cache.query(new SqlFieldsQuery("CRE

Re: Cat Example

2018-02-09 Thread Amir Akhmedov
Hi Mike,

As of today Ignite DML does not support transactions, and every DML
statement is executed atomically (Igniters, please correct me if I'm wrong).

But you can still use the DataStreamer[1] to improve data loading, or you
can use Ignite's Cache API with batch operations like putAll.

[1] https://apacheignite.readme.io/docs/data-streamers
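The putAll route mentioned above can be sketched without any Ignite API: group the rows into fixed-size maps and hand each map to one putAll call instead of issuing one cache operation per row. The helper below is illustrative only; all names are invented.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative helper: split a large entry set into fixed-size batches.
// In Ignite, each returned map would be passed to a single cache.putAll(batch).
class Batching {
    static <K, V> List<Map<K, V>> toBatches(Map<K, V> src, int batchSize) {
        List<Map<K, V>> batches = new ArrayList<>();
        Map<K, V> current = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : src.entrySet()) {
            current.put(e.getKey(), e.getValue());
            if (current.size() == batchSize) { // batch full: seal it, start a new one
                batches.add(current);
                current = new LinkedHashMap<>();
            }
        }
        if (!current.isEmpty())
            batches.add(current);              // trailing partial batch
        return batches;
    }
}
```

Each batch then costs one network round trip instead of batchSize round trips, which is where the speedup over a per-row INSERT loop comes from.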

On Fri, Feb 9, 2018 at 2:56 PM, Williams, Michael <
michael.willi...@transamerica.com> wrote:

> Is it possible to stream data into a table created by a query? For
> example, consider the following modified example. If I had a Person object,
> how would I replace the insert loop to improve speed?
>
>
>
> Thanks,
>
> Mike
>
>
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.query.SqlFieldsQuery;
> import org.apache.ignite.configuration.CacheConfiguration;
>
> public class Test {
>     private static final String DUMMY_CACHE_NAME = "dummy_cache";
>     public static void main(String[] args)
>     {
>         Ignite ignite = Ignition.start();
>         {
>             // Create dummy cache to act as an entry point for SQL queries
>             // (new SQL APIs which do not require this will appear in future
>             // versions; JDBC and ODBC drivers do not require it already).
>             CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>(DUMMY_CACHE_NAME).setSqlSchema("PUBLIC");
>             try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cacheCfg))
>             {
>                 // Create reference City table based on REPLICATED template.
>                 cache.query(new SqlFieldsQuery("CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"")).getAll();
>
>                 // Create table based on PARTITIONED template with one backup.
>                 cache.query(new SqlFieldsQuery("CREATE TABLE person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id)) WITH \"template=partitioned,backups=1\"")).getAll();
>
>                 // Create an index.
>                 cache.query(new SqlFieldsQuery("CREATE INDEX on Person (city_id)")).getAll();
>
>                 SqlFieldsQuery qry = new SqlFieldsQuery("INSERT INTO city (id, name) VALUES (?, ?)");
>
>                 cache.query(qry.setArgs(1L, "Forest Hill")).getAll();
>                 cache.query(qry.setArgs(2L, "Denver")).getAll();
>                 cache.query(qry.setArgs(3L, "St. Petersburg")).getAll();
>
>                 qry = new SqlFieldsQuery("INSERT INTO person (id, name, city_id) values (?, ?, ?)");
>                 for (long i = 0; i < 100_000; ++i)
>                 {
>                     cache.query(qry.setArgs(i, "John Doe", 3L)).getAll();
>                 }
>                 System.out.print("HI!");
>
> 
>
>
>
>
>
>
>
> *From:* Denis Magda [mailto:dma...@apache.org]
> *Sent:* Thursday, February 08, 2018 7:33 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Cat Example
>
>
>
> Hi Mike,
>
>
>
> If SQL indexes/configuration is set with the annotation and
> setIndexedTypes method then you have to use the type name (Cat in your
> case) as the SQL table name. It’s explained here:
>
> https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-annotation-based-configuration
>
>
>
> The cache name is used for IgniteCache APIs and other related methods.
>
>
>
> —
>
> Denis
>
>
>
> On Feb 8, 2018, at 3:48 PM, Williams, Michael  transamerica.com> wrote:
>
>
>
> Hi,
>
>
>
> Quick question, submitted a ticket earlier. How would I modify the below
> code such that, when viewed through Sql (dbeaver, eg) it behaves as if it
> had been created through a CREATE TABLE statement, where the name of the
> table was catCache? I’m trying to directly populate a series of tables that
> will be used downstream primarily through SQL. I’d like to be able to go
> into dBeaver, browse the tables, and see 10 cats named Fluffy, if this is
> working correctly.
>
> import org.apache.ignite.cache.query.annotations.*;
>
> import java.io.*;
>
>
>
> public class Cat implements Serializable  {
>
> @QuerySqlField
>
> int legs;
>
> @QuerySqlField
>
> String name;
>
>
>
> Cat(int l, String n)
>
> {
>
> legs = l;
>
> name = n;
>
> }
>
> }
>
>
>
>
>
> import org.apache.ig

Re: Versioning services

2018-02-09 Thread vkulichenko
Colin,

Unfortunately this is not possible at the moment, you need to restart nodes
to change service implementation. There is a feature request for improving
this: https://issues.apache.org/jira/browse/IGNITE-6069

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with Spring Cache on K8S, eviction problem

2018-02-09 Thread lukaszbyjos
No, on k8s I'm using partitioned. Currently I'm testing the case with
on-heap storage turned off. I don't know why, but it doesn't work the first
time it's called; when I call it a few minutes later, it works.
Maybe it's because of ATOMIC mode?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with Spring Cache on K8S, eviction problem

2018-02-09 Thread vkulichenko
Why are you using a local cache instead of partitioned in dev? If you have
local caches, then you will get exactly the behavior you described - you will
remove from one node but not from the others. Can this be the case?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Cat Example

2018-02-09 Thread Williams, Michael
Is it possible to stream data into a table created by a query? For example, 
consider the following modified example. If I had a Person object, how would I 
replace the insert loop to improve speed?

Thanks,
Mike

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class Test {
    private static final String DUMMY_CACHE_NAME = "dummy_cache";
    public static void main(String[] args)
    {
        Ignite ignite = Ignition.start();
        {
            // Create dummy cache to act as an entry point for SQL queries
            // (new SQL APIs which do not require this will appear in future
            // versions; JDBC and ODBC drivers do not require it already).
            CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>(DUMMY_CACHE_NAME).setSqlSchema("PUBLIC");
            try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cacheCfg))
            {
                // Create reference City table based on REPLICATED template.
                cache.query(new SqlFieldsQuery("CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"")).getAll();

                // Create table based on PARTITIONED template with one backup.
                cache.query(new SqlFieldsQuery("CREATE TABLE person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id)) WITH \"template=partitioned,backups=1\"")).getAll();

                // Create an index.
                cache.query(new SqlFieldsQuery("CREATE INDEX on Person (city_id)")).getAll();

                SqlFieldsQuery qry = new SqlFieldsQuery("INSERT INTO city (id, name) VALUES (?, ?)");

                cache.query(qry.setArgs(1L, "Forest Hill")).getAll();
                cache.query(qry.setArgs(2L, "Denver")).getAll();
                cache.query(qry.setArgs(3L, "St. Petersburg")).getAll();

                qry = new SqlFieldsQuery("INSERT INTO person (id, name, city_id) values (?, ?, ?)");
                for (long i = 0; i < 100_000; ++i)
                {
                    cache.query(qry.setArgs(i, "John Doe", 3L)).getAll();
                }
                System.out.print("HI!");




From: Denis Magda [mailto:dma...@apache.org]
Sent: Thursday, February 08, 2018 7:33 PM
To: user@ignite.apache.org
Subject: Re: Cat Example

Hi Mike,

If SQL indexes/configuration is set with the annotation and setIndexedTypes 
method then you have to use the type name (Cat in your case) as the SQL table 
name. It’s explained here:
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-annotation-based-configuration

The cache name is used for IgniteCache APIs and other related methods.

—
Denis


On Feb 8, 2018, at 3:48 PM, Williams, Michael 
mailto:michael.willi...@transamerica.com>> 
wrote:

Hi,

Quick question, submitted a ticket earlier. How would I modify the code below
such that, when viewed through SQL (DBeaver, e.g.) it behaves as if it had been
created through a CREATE TABLE statement, where the name of the table was
catCache? I’m trying to directly populate a series of tables that will be used
downstream primarily through SQL. I’d like to be able to go into DBeaver,
browse the tables, and see 10 cats named Fluffy, if this is working correctly.
import org.apache.ignite.cache.query.annotations.*;
import java.io.*;

public class Cat implements Serializable {
@QuerySqlField
int legs;
@QuerySqlField
String name;

Cat(int l, String n)
{
legs = l;
name = n;
}
}


import org.apache.ignite.Ignition;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.QueryCursor;
import java.util.List;
public class Test {
public static void main(String[] args)
{

Ignite ignite = Ignition.start();
CacheConfiguration<Integer, Cat> cfg = new CacheConfiguration<>("catCache");

cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setSqlEscapeAll(true);
cfg.setSqlSchema("PUBLIC");
cfg.setIndexedTypes(Integer.class, Cat.class);
try (IgniteCache<Integer, Cat> cache = ignite.getOrCreateCache(cfg))
{

Versioning services

2018-02-09 Thread colinc
I'm interested in using Ignite services as microservices and have seen Denis'
blog posts on the topic. In my case, I also have a requirement to perform
computations with data affinity. My idea is to call a node singleton service
locally from a distributed compute task.

The advantage of using the service is that it can spring-wire itself at
deployment time. A task may then result in the collaboration of a number of
local services to generate a result.

I have tried this and it seems to work well. My question is about updating
and versioning of services. In particular - is it possible to use the
DeploymentSpi with a specified classloader to deploy two different versions
of the same service - presumably registered with different service names? Or
is this designed only to work for tasks?

Regards,
Colin.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite with Spring Cache on K8S, eviction problem

2018-02-09 Thread lukaszbyjos
Hi. I have a k8s cluster with one Ignite server and a few services as clients.
I have a problem with evicting values using Spring annotations.
The apps share a cache "example-user", and when one service evicts a key,
another one still has the value.

Here you can find the cache config and an example repo for Spring:
https://gist.github.com/Mistic92/8649515ff026e24ca0870ed61739a17c

What should I change or do? Currently I don't have any idea :(




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Loading cache from Oracle Table

2018-02-09 Thread Prasad Bhalerao
Hi Vinokurov,

I was thinking of using a Java ExecutorService to submit the task (a task
that invokes the CacheStore.loadCache method). Is it OK to use an
ExecutorService, or does Ignite provide some way to submit the task?

Thanks,
Prasad

On Fri, Feb 9, 2018 at 6:04 PM, Vinokurov Pavel 
wrote:

> Hi Prasad,
>
> Within your implementation of CacheStore.loadCache you could use
> multiple threads to retrieve rows in batches.
> Note that each thread should use a different JDBC connection.
>
> 2018-02-09 13:57 GMT+03:00 Prasad Bhalerao :
>
>> Hi,
>>
>> I have multiple Oracle tables with more than 50 million rows. I want to
>> load those tables into the cache. To load the cache I am using the
>> CacheStore.loadCache method.
>>
>> Is there any way I can load a single table in a multithreaded way to
>> improve the loading performance?
>> What I actually want to do is:
>> 1) Get all the distinct keys from the table.
>> 2) Divide the list of keys into batches.
>> 3) Give each batch of keys to a separate thread, which will fetch the data
>> from the same table in parallel.
>> e.g. Thread T1 will fetch the data for keys 1 to 100, and thread T2 will
>> fetch the data for keys 101 to 200, and so on.
>>
>> Does ignite provide any mechanism to do this?
>>
>> Note: I do not have partitionId in my table.
>>
>>
>> Thanks,
>> Prasad
>>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>


Re: Loading cache from Oracle Table

2018-02-09 Thread Vinokurov Pavel
Hi Prasad,

Within your implementation of CacheStore.loadCache you could use
multiple threads to retrieve rows in batches.
Note that each thread should use a different JDBC connection.
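The batched parallel load can be sketched in plain Java. This is illustrative, not Ignite API: fetchBatch stands in for a per-thread JDBC query (e.g. SELECT ... WHERE id BETWEEN lo AND hi, each worker opening its own Connection), and in a real CacheStore.loadCache implementation each fetched row would be handed to the supplied closure rather than collected into a map. All names below are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.IntFunction;

// Illustrative sketch of a batched, multi-threaded table load.
// fetchBatch(lo) models one JDBC query for the key range starting at lo.
class ParallelLoader {
    static Map<Integer, String> load(int totalKeys, int batchSize, int threads,
                                     IntFunction<Map<Integer, String>> fetchBatch) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Map<Integer, String>>> futures = new ArrayList<>();
        for (int lo = 0; lo < totalKeys; lo += batchSize) {
            final int start = lo;
            // One task per key batch; each task would use its own JDBC connection.
            futures.add(pool.submit(() -> fetchBatch.apply(start)));
        }
        Map<Integer, String> result = new ConcurrentHashMap<>();
        try {
            for (Future<Map<Integer, String>> f : futures)
                result.putAll(f.get()); // gather the loaded rows
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return result;
    }
}
```

The same pattern works inside loadCache: submit one range query per batch, and have each worker push its rows through the loadCache callback as they arrive instead of accumulating them.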

2018-02-09 13:57 GMT+03:00 Prasad Bhalerao :

> Hi,
>
> I have multiple Oracle tables with more than 50 million rows. I want to
> load those tables into the cache. To load the cache I am using the
> CacheStore.loadCache method.
>
> Is there any way I can load a single table in a multithreaded way to
> improve the loading performance?
> What I actually want to do is:
> 1) Get all the distinct keys from the table.
> 2) Divide the list of keys into batches.
> 3) Give each batch of keys to a separate thread, which will fetch the data
> from the same table in parallel.
> e.g. Thread T1 will fetch the data for keys 1 to 100, and thread T2 will
> fetch the data for keys 101 to 200, and so on.
>
> Does ignite provide any mechanism to do this?
>
> Note: I do not have partitionId in my table.
>
>
> Thanks,
> Prasad
>



-- 

Regards

Pavel Vinokurov


Loading cache from Oracle Table

2018-02-09 Thread Prasad Bhalerao
Hi,

I have multiple Oracle tables with more than 50 million rows. I want to
load those tables into the cache. To load the cache I am using the
CacheStore.loadCache method.

Is there any way I can load a single table in a multithreaded way to
improve the loading performance?
What I actually want to do is:
1) Get all the distinct keys from the table.
2) Divide the list of keys into batches.
3) Give each batch of keys to a separate thread, which will fetch the data
from the same table in parallel.
e.g. Thread T1 will fetch the data for keys 1 to 100, and thread T2 will
fetch the data for keys 101 to 200, and so on.

Does ignite provide any mechanism to do this?

Note: I do not have partitionId in my table.


Thanks,
Prasad


Re: ignite support for multii data center replication

2018-02-09 Thread Roman Guseinov
Hi Rajesh,

If you want to split a single cluster across two data centers, this will
lead to a performance penalty due to high network latency.

If you are asking about data replication between clusters via WAN, Apache
Ignite doesn't support this at the moment. I think there are two options:
make your own implementation or use a commercial solution like
https://docs.gridgain.com/docs/data-center-replication.

Roman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/