Error in running wordcount hadoop example in ignite

2019-02-27 Thread mehdi sey
Hi,
I want to execute the Hadoop wordcount example on Apache Ignite. I have used
apache-ignite-hadoop-2.6.0-bin to execute MapReduce tasks. My
default-config.xml in the apache-ignite-hadoop-2.6.0-bin/config folder is as
below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Spring file for Ignite node configuration with IGFS and Apache
        Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    -->

    <!-- (remaining bean definitions were stripped by the mail archive) -->

</beans>
I have run an Ignite node with the below command, on the command line, on
Ubuntu Linux:
*/usr/local/apache-ignite-hadoop-2.6.0-bin/bin/ignite.sh
/usr/local/apache-ignite-hadoop-2.6.0-bin/config/default-config.xml*

After starting the Ignite node, I executed the Hadoop wordcount example, to
run on Ignite, with the below command:

*./hadoop jar
/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar
wordcount /input/hadoop output2*

But after executing the above command I encountered an error, as shown in the
attached image. Please help with solving this problem. I have also seen the
link below, but it did not help:
http://apache-ignite-users.70518.x6.nabble.com/NPE-issue-with-trying-to-submit-Hadoop-MapReduce-tc2146.html#a2183
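
For context: routing a MapReduce job to Ignite is normally done through the
Hadoop job configuration. A minimal sketch in Java, assuming the Ignite
Hadoop Accelerator's documented property names and its default endpoint
localhost:11211 (both are assumptions, not taken from this thread):

import org.apache.hadoop.conf.Configuration;

public class IgniteJobConf {
    public static Configuration igniteConf() {
        Configuration conf = new Configuration();
        // Route MapReduce jobs to Ignite instead of YARN/local.
        conf.set("mapreduce.framework.name", "ignite");
        // Ignite node that accepts jobs (11211 is the default port).
        conf.set("mapreduce.jobtracker.address", "localhost:11211");
        return conf;
    }
}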
 
 





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread BinaryTree
Thanks for your reply.
1. Yes, I have persistence.
2. I think the cache store is not the bottleneck, because the skipStore is 
enabled when loading data.
IgniteDataStreamer streamer = 
ignite.dataStreamer(IgniteCacheKey.DATA_POINT_NEW.getCode());
streamer.skipStore(true);
streamer.keepBinary(true);
streamer.perNodeBufferSize(1);
streamer.perNodeParallelOperations(32);





---------- Original Message ----------
From: "Ilya Kasnacheev"
Sent: Wednesday, February 27, 2019, 9:59 PM
To: "user"

Subject: Re: Ignite Data Streamer Hung after a period



Hello!



It's hard to say. Do you have persistence? Are you sure that the cache store is
not the bottleneck?


I would start with gathering thread dumps from whole cluster when in stuck 
state.


Regards,

-- 

Ilya Kasnacheev









On Wed, 27 Feb 2019 at 15:06, Justin Ji wrote:

Dmitry  - 
 
 I also encountered this problem.
 
 I used both persistence and indexing. When I loaded 20 million records, the
 loading speed became much slower than before, but the CPU of the Ignite
 server is low.
 
 

 
 
 Here is my cache configuration:
 
 CacheConfiguration cacheCfg = new CacheConfiguration();
 cacheCfg.setName(cacheName);
 cacheCfg.setCacheMode(CacheMode.PARTITIONED);
 cacheCfg.setBackups(1);
 cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
 
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
 cacheCfg.setWriteThrough(true);
 cacheCfg.setWriteBehindEnabled(true);
 cacheCfg.setWriteBehindFlushThreadCount(2);
 cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
 cacheCfg.setWriteBehindFlushSize(409600);
 cacheCfg.setWriteBehindBatchSize(1024);
 cacheCfg.setStoreKeepBinary(true);
 cacheCfg.setQueryParallelism(16);
 cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
 cacheCfg.setRebalanceThrottle(100);
 CacheKeyConfiguration cacheKeyConfiguration = new
 CacheKeyConfiguration(DpKey.class);
 cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
 
 List<QueryEntity> entities = Lists.newArrayList();
 
 QueryEntity entity = new QueryEntity(DpKey.class.getName(),
 DpCache.class.getName());
 entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
 
 LinkedHashMap<String, String> map = new LinkedHashMap<>();
 map.put("id", "java.lang.String");
 map.put("gmtCreate", "java.lang.Long");
 map.put("gmtModified", "java.lang.Long");
 map.put("devId", "java.lang.String");
 map.put("dpId", "java.lang.Integer");
 map.put("code", "java.lang.String");
 map.put("name", "java.lang.String");
 map.put("customName", "java.lang.String");
 map.put("mode", "java.lang.String");
 map.put("type", "java.lang.String");
 map.put("value", "java.lang.String");
 map.put("rawValue", byte[].class.getName());
 map.put("time", "java.lang.Long");
 map.put("status", "java.lang.Boolean");
 map.put("uuid", "java.lang.String");
 
 entity.setFields(map);
 QueryIndex devIdIdx = new QueryIndex("devId");
 devIdIdx.setName("idx_devId");
 devIdIdx.setInlineSize(128);
 List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
 entity.setIndexes(indexes);
 
 entities.add(entity);
 cacheCfg.setQueryEntities(entities);
 
 
 Can you give me some advice on where to start solving these problems?
 
 
 
 
 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: same cache cannot update twice in one transaction

2019-02-27 Thread xmw45688
Hi Ilya,

Since I'm using Cassandra as the data store, it raises the following exception
once MVCC is enabled:



class org.apache.ignite.IgniteCheckedException: Grid configuration parameter
invalid: readThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity
mode
at
org.apache.ignite.internal.processors.GridProcessorAdapter.assertParameter(GridProcessorAdapter.java:140)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.validate(GridCacheProcessor.java:527)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCacheContext(GridCacheProcessor.java:1543)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheContext(GridCacheProcessor.java:2324)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$null$fd62dedb$1(GridCacheProcessor.java:2163)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$5(GridCacheProcessor.java:2086)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$937cbe24$1(GridCacheProcessor.java:2160)




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What is the correct way of keeping ignite alive in a console app

2019-02-27 Thread Павлухин Иван
Hi,

Your application exits because the Ignite node is started in a
try-with-resources block, so the node is stopped upon leaving the try
block. If you write simply
public static void main(String[] args) {
  Ignition.start(igniteConfiguration);
}

application will continue running after main method completion.

On Wed, 27 Feb 2019 at 19:36, PBSLogitek wrote:
>
> Hello
>
> What is the best way to keep my app running after i have initialized my
> ignite instance?
>
>
> public static void main(String[] args) {
> try (Ignite ignite = Ignition.start(igniteConfiguration)) {
>
>
> }
>
> // How to wait here in a correct way to make ignite not exit the
> application
>
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: introspection of ignite data

2019-02-27 Thread Scott Cote
Thank you Jeff

972.900.1561
Scott Cote


From: Jeff Zemerick 
Sent: Wednesday, February 27, 2019 10:30 AM
To: user@ignite.apache.org
Subject: Re: introspection of ignite data

Scott,

I have used:

CacheConfiguration config = cache.getConfiguration(CacheConfiguration.class);
Collection<QueryEntity> entities = config.getQueryEntities();
for(QueryEntity e : entities) {
  System.out.println("Table: " + e.getTableName());
}

I'm new to Ignite so there's a chance it might not be the best way. :)

Jeff



On Tue, Feb 26, 2019 at 11:06 AM Scott Cote <sc...@etcc.com> wrote:
I am troubleshooting a SQL problem where I’m issuing a “select” statement and
the parser is not finding my table...

IgniteSqlException: Failed to parse query.  Table “FOOBOO” not found; SQL 
statement:\nselect * from FOOBOO [42102-197]

What API can I call against either an instance of IgniteCache or Ignite to
find the names of the tables that are present, if any?

I want to be able to troubleshoot from inside a Java debugger, where I have the
instances present, and/or later call an API for diagnostics.

TIA.

SCott



Re: Access a cache loaded by DataStreamer with SQL

2019-02-27 Thread needbrew99
OK, I was able to get it working. Apparently the cache name has to be PUBLIC,
and it will then create a table based on the object definition that I have.
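
A minimal sketch of the equivalent setup, assuming the value class carries
@QuerySqlField annotations (all names are illustrative; an explicit
setSqlSchema("PUBLIC") gives the same visibility as naming the cache PUBLIC):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class PublicSchemaSketch {
    static class Person {
        @QuerySqlField(index = true)
        String name; // becomes a queryable column
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
            cfg.setSqlSchema("PUBLIC");                    // expose the table to JDBC tools
            cfg.setIndexedTypes(Long.class, Person.class); // generates table PERSON
            IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);
            cache.put(1L, new Person());
        }
    }
}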



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


What is the correct way of keeping ignite alive in a console app

2019-02-27 Thread PBSLogitek
Hello

What is the best way to keep my app running after I have initialized my
Ignite instance?


public static void main(String[] args) {
try (Ignite ignite = Ignition.start(igniteConfiguration)) {


}

// How to wait here in a correct way to make ignite not exit the
application
   
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: introspection of ignite data

2019-02-27 Thread Jeff Zemerick
Scott,

I have used:

CacheConfiguration config =
cache.getConfiguration(CacheConfiguration.class);
Collection<QueryEntity> entities = config.getQueryEntities();
for(QueryEntity e : entities) {
  System.out.println("Table: " + e.getTableName());
}

I'm new to Ignite so there's a chance it might not be the best way. :)

Jeff



On Tue, Feb 26, 2019 at 11:06 AM Scott Cote  wrote:

> I am trouble shooting a sql problem where I’m issuing a “select” statement
> and the parser is not finding my table …..
>
>
>
> IgniteSqlException: Failed to parse query.  Table “FOOBOO” not found; SQL
> statement:\nselect * from FOOBOO [42102-197]
>
>
>
> What API can I call against either an instance of  IgniteCache or Ignite -
> to find the names of the tables that are present – if any.
>
>
>
> Want to be able to trouble shoot from inside a java debugger where I have
> the instances present – and/or later call an api for diagnostics.
>
>
>
> TIA.
>
>
>
> SCott
>
>
>


Re: ignite.net custom sql functions

2019-02-27 Thread Ilya Kasnacheev
Hello!

> I have found several examples in java code for registering the custom
function but nothing
showing how to do it in the xml config file.

// Preparing a cache configuration.
CacheConfiguration cfg = new CacheConfiguration();

// Registering the class that contains custom SQL functions.
cfg.setSqlFunctionClasses(MyFunctions.class);

should roughly convert to

<property name="sqlFunctionClasses">
    <list>
        <value>com.pany.MyFunctions</value>
    </list>
</property>

> Also i am unsure where the classpath will be given that this is all
running with a .net wrapper.

You can put your JAR and all its dependencies under $IGNITE_HOME\libs

Under Windows some flavor of IGNITE_HOME will be created for you
implicitly, but you can always specify your own IgniteHome, pointing to
unzipped binary Ignite release.

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 18:42, Pavel Tupitsyn wrote:

> Hi Wayne,
>
> You can write a service in Java that registers custom SQL function.
> Then call that service from .NET.
> Classpath is provided in IgniteConfiguration.JvmClasspath.
>
> See https://ptupitsyn.github.io/Ignite-Plugin/ to get some idea on how to
> combine Java-based Ignite stuff with Ignite.NET.
> Let me know if you need more details.
>
> Thanks,
> Pavel
>
> On Wed, Feb 27, 2019 at 5:54 PM wt  wrote:
>
>> Thanks Ilya
>>
>> So i need a little guidance here if you don't mind i am just throwing
>> together a quick test as a precursor to a new project.
>>
>> I have a .net ignite service wrapper
>>
>>
>> 
>>  var cfg = new IgniteConfiguration()
>> {
>> SpringConfigUrl = "/ignitecfg.xml",
>> DiscoverySpi = new TcpDiscoverySpi
>> {
>> IpFinder = new TcpDiscoveryStaticIpFinder
>> {
>> Endpoints = new[] { "127.0.0.1:47500..47509"
>> }
>> },
>> SocketTimeout = TimeSpan.FromSeconds(0.3)
>> }
>>
>> };
>>
>> Ignition.Start(cfg);
>>
>> -
>>
>> i am using the spring xml section to register the custom sql functions
>> which
>> i have written in java and compiled to a jar file. I have found several
>> examples in java code for registering the custom function but nothing
>> showing how to do it in the xml config file. Also i am unsure where the
>> classpath will be given that this is all running with a .net wrapper. Any
>> guidance here will be super awesome.
>>
>> thanks
>> Wayne
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Access a cache loaded by DataStreamer with SQL

2019-02-27 Thread Mike Needham
I have looked at the docs, and it is not clear to me how to set up a
queryable table that can be loaded with a DataStreamer. If I want to use the
DataStreamer and still have the data queryable from a third-party tool like
Tableau, what would I need to set up? I tried one approach and the table
showed up in DBeaver, but under a schema of Test (that was my cache name),
and I cannot query it because it says the schema is invalid. Is there some
config, when the cache is created, that needs to be set so that it also
generates tables that are queryable? Sorry for all the questions; I am trying
to do a quick proof of concept for our users and need to load 10 million
records initially into a cache and have them queryable from SQL interfaces.

On Wed, Feb 27, 2019 at 8:14 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Please refer to this page: https://apacheignite.readme.io/docs/indexes
>
> In short, you can use CREATE TABLE or GetOrCreateCache
> indexedTypes+annotations or QueryEntities. All of those are compatible with
> DataStreamer as long as type name and field/column names match.
>
> By default caches do not have corresponding tables.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Tue, 26 Feb 2019 at 21:35, Mike Needham wrote:
>
>> Hi All,
>>
>> I have a cache that I have loaded using the DataStreamer and can confirm
>> there is a cache created by using the ignitevisor utility  with the cache
>> command.  I cannot query it from any JDBC tools and am not sure why.  Do I
>> need to use a CREATE TABLE syntax in order for this to work instead of the
>> GetOrCreateCache<>(CacheName). Or is there some other thing on the config
>> side that I am missing?
>>
>> Any help appreciated as I am just starting to evaluate this for a project.
>>
>>
>> --
>> *Some days it just not worth chewing through the restraints*
>>
>

-- 
*Some days it just not worth chewing through the restraints*


Re: ignite.net custom sql functions

2019-02-27 Thread Pavel Tupitsyn
Hi Wayne,

You can write a service in Java that registers custom SQL function.
Then call that service from .NET.
Classpath is provided in IgniteConfiguration.JvmClasspath.

See https://ptupitsyn.github.io/Ignite-Plugin/ to get some idea on how to
combine Java-based Ignite stuff with Ignite.NET.
Let me know if you need more details.

Thanks,
Pavel

On Wed, Feb 27, 2019 at 5:54 PM wt  wrote:

> Thanks Ilya
>
> So i need a little guidance here if you don't mind i am just throwing
> together a quick test as a precursor to a new project.
>
> I have a .net ignite service wrapper
>
>
> 
>  var cfg = new IgniteConfiguration()
> {
> SpringConfigUrl = "/ignitecfg.xml",
> DiscoverySpi = new TcpDiscoverySpi
> {
> IpFinder = new TcpDiscoveryStaticIpFinder
> {
> Endpoints = new[] { "127.0.0.1:47500..47509" }
> },
> SocketTimeout = TimeSpan.FromSeconds(0.3)
> }
>
> };
>
> Ignition.Start(cfg);
>
> -
>
> i am using the spring xml section to register the custom sql functions
> which
> i have written in java and compiled to a jar file. I have found several
> examples in java code for registering the custom function but nothing
> showing how to do it in the xml config file. Also i am unsure where the
> classpath will be given that this is all running with a .net wrapper. Any
> guidance here will be super awesome.
>
> thanks
> Wayne
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite.net custom sql functions

2019-02-27 Thread wt
Thanks Ilya

So I need a little guidance here, if you don't mind; I am just throwing
together a quick test as a precursor to a new project.

I have a .NET Ignite service wrapper:



 var cfg = new IgniteConfiguration()
{
SpringConfigUrl = "/ignitecfg.xml",
DiscoverySpi = new TcpDiscoverySpi
{
IpFinder = new TcpDiscoveryStaticIpFinder
{
Endpoints = new[] { "127.0.0.1:47500..47509" }
},
SocketTimeout = TimeSpan.FromSeconds(0.3)
}

};

Ignition.Start(cfg);

-

I am using the Spring XML section to register the custom SQL functions, which
I have written in Java and compiled to a JAR file. I have found several
examples in Java code for registering a custom function, but nothing showing
how to do it in the XML config file. Also, I am unsure where the classpath
will be, given that this is all running with a .NET wrapper. Any guidance
here will be super awesome.

thanks
Wayne



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: On Multiple Endpoints Mode of JDBC Driver

2019-02-27 Thread Ilya Kasnacheev
Hello!

I doubt that you will have speed-up here compared to just using JDBC
(possibly with randomized endpoints list).

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 14:52, 李玉珏@163 <18624049...@163.com> wrote:

> The main consideration is that using JDBC interface, the existing code
> modification workload is small.
> On 2019/2/27, 5:31 PM, Stephen Darlington wrote:
>
> If you’re already using Ignite-specific APIs (IgniteCallable), why not use
> the other Ignite-native APIs for reading/writing/processing data? That way
> you can use affinity functions for load balancing where it makes sense and
> Ignite’s normal load balancing processing for general compute tasks.
>
> Regards,
> Stephen
>
> On 27 Feb 2019, at 06:00, 李玉珏@163 <18624049...@163.com> wrote:
>
> Hi,
> Since JDBC can't achieve multi-endpoint load balancing, we want to use
> affinityCall (...) mechanism to achieve load balancing, that is, to obtain
> and use JDBC Connection in IgniteCallable implementation.
> How to efficiently access and use JDBC Connection?
>
> -------- Forwarded Message --------
> Subject: Re: On Multiple Endpoints Mode of JDBC Driver
> Date: Tue, 26 Feb 2019 14:53:17 -0800
> From: Denis Magda
> Reply-To: d...@ignite.apache.org
> To: dev
>
> Hello,
>
> You provide a list of IP addresses for the sake of high-availability - if
> one of the servers goes down then the client will reconnect to the next IP
> automatically. There is no any load balancing in place presently. But! In
> the next Ignite version, we're planning to roll out partition-awareness
> support - the client will send a request to the nodes who hold the data
> needed for the request.
>
> -
> Denis
>
>
> On Tue, Feb 26, 2019 at 2:48 PM 李玉珏 
>  wrote:
>
> Hi,
>
> Does have load balancing function in Multiple Endpoints mode of JDBC
> driver?For example, "jdbc: ignite: thin://192.168.0.50:101,192.188.5.40:101, 
> 192.168.10.230:101"
> If not, will one node become the bottleneck of the whole system?
>
>
>
>
>
>


Re: Access a cache loaded by DataStreamer with SQL

2019-02-27 Thread Ilya Kasnacheev
Hello!

Please refer to this page: https://apacheignite.readme.io/docs/indexes

In short, you can use CREATE TABLE or GetOrCreateCache
indexedTypes+annotations or QueryEntities. All of those are compatible with
DataStreamer as long as type name and field/column names match.

By default caches do not have corresponding tables.
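
A minimal sketch of the QueryEntity route, assuming the streamed binary type
name matches the entity's value type (all names here are illustrative):

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class StreamerSqlSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            QueryEntity entity = new QueryEntity(Long.class.getName(), "Trade");
            LinkedHashMap<String, String> fields = new LinkedHashMap<>();
            fields.put("symbol", String.class.getName());
            fields.put("price", Double.class.getName());
            entity.setFields(fields);
            entity.setTableName("TRADE");

            CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("trades");
            cfg.setQueryEntities(Collections.singletonList(entity));
            cfg.setSqlSchema("PUBLIC"); // visible to JDBC tools without schema tricks
            ignite.getOrCreateCache(cfg);

            try (IgniteDataStreamer<Long, Object> streamer = ignite.dataStreamer("trades")) {
                streamer.keepBinary(true);
                // Type name "Trade" matches the QueryEntity, so rows land in TRADE.
                streamer.addData(1L, ignite.binary().builder("Trade")
                    .setField("symbol", "AAPL")
                    .setField("price", 172.5)
                    .build());
            }
        }
    }
}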

Regards,
-- 
Ilya Kasnacheev


On Tue, 26 Feb 2019 at 21:35, Mike Needham wrote:

> Hi All,
>
> I have a cache that I have loaded using the DataStreamer and can confirm
> there is a cache created by using the ignitevisor utility  with the cache
> command.  I cannot query it from any JDBC tools and am not sure why.  Do I
> need to use a CREATE TABLE syntax in order for this to work instead of the
> GetOrCreateCache<>(CacheName). Or is there some other thing on the config
> side that I am missing?
>
> Any help appreciated as I am just starting to evaluate this for a project.
>
>
> --
> *Some days it just not worth chewing through the restraints*
>


Re: Ignite Data streamer optimization

2019-02-27 Thread Ilya Kasnacheev
Hello!

Why do you have nodeParallelOperations of 1?

Do you really have issues with the default configuration?

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 08:59, ashishb888 wrote:

> Sure. But in my case I can not do so. Any other options for single threads?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread Ilya Kasnacheev
Hello!

It's hard to say. Do you have persistence? Are you sure that the cache store is
not the bottleneck?

I would start with gathering thread dumps from whole cluster when in stuck
state.

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 15:06, Justin Ji wrote:

> Dmitry  -
>
> I also encountered this problem.
>
> I used both persistence and indexing, when I loaded 20 million records, the
> loading speed became much slower than before, but the CPU of the ignite
> server is low.
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2000/WX20190227-200059.png>
>
>
> Here is my cache configuration:
>
> CacheConfiguration cacheCfg = new CacheConfiguration();
> cacheCfg.setName(cacheName);
> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
> cacheCfg.setBackups(1);
> cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
> cacheCfg.setWriteThrough(true);
> cacheCfg.setWriteBehindEnabled(true);
> cacheCfg.setWriteBehindFlushThreadCount(2);
> cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
> cacheCfg.setWriteBehindFlushSize(409600);
> cacheCfg.setWriteBehindBatchSize(1024);
> cacheCfg.setStoreKeepBinary(true);
> cacheCfg.setQueryParallelism(16);
> cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
> cacheCfg.setRebalanceThrottle(100);
> CacheKeyConfiguration cacheKeyConfiguration = new
> CacheKeyConfiguration(DpKey.class);
> cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
>
> List<QueryEntity> entities = Lists.newArrayList();
>
> QueryEntity entity = new QueryEntity(DpKey.class.getName(),
> DpCache.class.getName());
> entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
>
> LinkedHashMap<String, String> map = new LinkedHashMap<>();
> map.put("id", "java.lang.String");
> map.put("gmtCreate", "java.lang.Long");
> map.put("gmtModified", "java.lang.Long");
> map.put("devId", "java.lang.String");
> map.put("dpId", "java.lang.Integer");
> map.put("code", "java.lang.String");
> map.put("name", "java.lang.String");
> map.put("customName", "java.lang.String");
> map.put("mode", "java.lang.String");
> map.put("type", "java.lang.String");
> map.put("value", "java.lang.String");
> map.put("rawValue", byte[].class.getName());
> map.put("time", "java.lang.Long");
> map.put("status", "java.lang.Boolean");
> map.put("uuid", "java.lang.String");
>
> entity.setFields(map);
> QueryIndex devIdIdx = new QueryIndex("devId");
> devIdIdx.setName("idx_devId");
> devIdIdx.setInlineSize(128);
> List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
> entity.setIndexes(indexes);
>
> entities.add(entity);
> cacheCfg.setQueryEntities(entities);
>
>
> Can you give me some advice on where to start solving these problems?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: H2 SQL query optimiser strategy in Ignite

2019-02-27 Thread Ilya Kasnacheev
Hello!

There's an 'enforceJoinOrder' connection setting which prevents reordering of
joins, so if you manually write the query with the optimal join ordering then
it will perform well.
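
A sketch with the thin JDBC driver, where the flag is a URL parameter (host
and table names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EnforceJoinOrderExample {
    public static void main(String[] args) throws Exception {
        // enforceJoinOrder=true makes H2 keep the join order exactly as written.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT a.id FROM small_table a JOIN big_table b ON a.id = b.a_id")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}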

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 13:39, joseheitor wrote:

> Thanks, Ilya.
>
> The reason that I am trying to better understand the rationale is that I
> had
> a multi-join query that was performing badly on Ignite compared to
> PostgreSQL.
>
> When comparing the query plans from the two systems, I discovered that the
> Postgres query-planner had re-ordered the joins in my query, but H2
> (Ignite)
> had not (and does not assign a 'cost').
>
> When I explicitly re-ordered the joins in my query, to resemble the result
> of the Postgres plan - the performance on Ignite shot up to match that of
> Postgres.
>
> So my question is, whether there is any way to overcome this shortcoming
> ...
> for situations where we cannot evaluate the query layout manually, on an
> individual basis?
>
> Thanks,
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite.net custom sql functions

2019-02-27 Thread Ilya Kasnacheev
Hello!

They are not supported yet.

However, you can still configure your caches to use Java custom functions
and use them even if the rest of your project is .Net. I assume that
functions are usually self-contained.
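
For reference, the Java side is roughly this (function and cache names are
illustrative):

import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class MyFunctions {
    /** Callable from SQL as SQR(x) once the class is registered on a cache. */
    @QuerySqlFunction
    public static int sqr(int x) {
        return x * x;
    }

    // Sketch of registering the class on a cache configuration.
    static CacheConfiguration<?, ?> cacheWithFunctions() {
        return new CacheConfiguration<>("myCache")
            .setSqlFunctionClasses(MyFunctions.class);
    }
}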

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 12:13, wt wrote:

> good day
>
> I can't seem to find anything on the .net api for custom sql functions. Can
> anyone tell me if it is possible in .net as it is here in Java
>
> https://apacheignite-sql.readme.io/docs/custom-sql-functions
>
> Thanks
> Wayne
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance degradation in case of high volumes

2019-02-27 Thread Ilya Kasnacheev
Hello!

It is possible that your primary key does not fit into the index inline, and
walking the index tree will put unnecessary strain on persistent page memory.

In 2.7 there will be a warning about that in the log.

Please try to tune the IGNITE_MAX_INDEX_PAYLOAD_SIZE system property, since
it's the only available means of changing that value. Try putting something
like 32 or 64 there and see if it helps.
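
A sketch of setting it, assuming you start the node from Java (the equivalent
JVM flag would be -DIGNITE_MAX_INDEX_PAYLOAD_SIZE=64):

import org.apache.ignite.Ignition;

public class IndexPayloadSizeExample {
    public static void main(String[] args) {
        // Must be set before the node starts; affects newly created indexes.
        System.setProperty("IGNITE_MAX_INDEX_PAYLOAD_SIZE", "64");
        Ignition.start("config/default-config.xml");
    }
}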

Regards,
-- 
Ilya Kasnacheev


On Tue, 26 Feb 2019 at 20:24, Antonio Conforti wrote:

> Hello,
>
>
> I recap the scenario of benchmark:
>
> 1) Constant submission of 4000 entries per second where every entry is an
> add (the key contains a field updatetime and changes for every entry).
> 2) The benchmark starts with no data in cache and the entries are submitted
> from an Ignite client node in the cluster using the StreamVisitor.addData()
> method (located on HOST1).
> 3) The cluster is composed of a total of 8 ignite server nodes: server
> nodes
> with consistence ID 1,3,5,7 on HOST1 and server nodes with consistence ID
> 2,4,6,8 on HOST2;
>
>
> Below the general configuration:
>
> Cache configuration:
>
> 1. TRANSACTIONAL
> 2. partitioned
> 3. with backup 1 (and affinity with exclude neighbours enabled)
> 4. write synchronization mode FULL_ASYNC
> 5. indexed on key and value (and enabled for SQL inquiry)
>
>
> We have also configured:
> 1. failureDetectionTimeout to 12msec
> 2. Data region (only 1):
>    a. Persistence enabled
>    b. max size 8 GB
>    c. checkpointPageBufferSize 2 GB
> 3. WAL mode LOG_ONLY
> 4. disabled WAL archiving (WAL path and the WAL archive path set to the same value)
> 5. Pages Writes Throttling enabled
>
>
> I also ran a second scenario with the frequency set to 60 (10 minutes), as
> suggested, with direct I/O enabled and WAL mode set to NONE.
>
>
> For reasons of space, you can find the logs and configuration file of only
> server node 1 attached, for both scenarios, as described below:
>
> 1) folder 20190222: log and config for scenario 1)
> 2) folder 20190225: log and config for scenario 2)
> 3) IGN_DF_CMF_QUOTE_PK.java: key entity inserted in cache
> 4) IGN_DF_CMF_QUOTE.java: entity data inserted in cache
>
> case-Ignite.zip
> 
>
>
> If you need another benchmark with specific configuration, just let me
> know.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [Backup/Restore] Can we use copy data file from IGNITE_HOME/work/db/nodexxxxx/ to other place if Ignite process is running ?

2019-02-27 Thread Ilya Kasnacheev
Hello!

You can deactivate the cluster, copy the data, and then reactivate the cluster.
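
A sketch of that cycle from Java, assuming the Ignite 2.x activation API (the
copy itself is an ordinary file copy, indicated here only by a comment):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class BackupSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("config/default-config.xml");

        // Deactivate so no writes touch the persistence files.
        ignite.cluster().active(false);

        // ... copy IGNITE_HOME/work/db (and the WAL directories) elsewhere ...

        // Reactivate once the copy is finished.
        ignite.cluster().active(true);
    }
}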

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 15:39, James Wang 王升平 (edvance CN) <james.w...@edvancesecurity.com> wrote:

> Hi Ilya,
>
>
>
> If so, could you please provide some advice on how to backup ignite data
> OUT to disk.
>
>
>
> What do you mean about 3rd party extensions ?
>
>
>
> Our use case is: we don't have other RDBMS as 3 third party persistent, we
> all rely on IGNITE. I know GridGain have enterprise features, but we don’t
> have plan to purchase GridGain for the time being.
>
>
>
> Thank you.
>
>
>
> Best Regards,
>
> James Wang
>
> M/WeChat: +86 135 1215 1134
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, February 27, 2019 06:32 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: [Backup/Restore] Can we use copy data file from
> IGNITE_HOME/work/db/nodex/ to other place if Ignite process is running ?
>
>
>
> Hello!
>
>
>
> No, you cannot do this (without using 3rd party extensions, anyway).
>
>
>
> Regard,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> On Wed, 27 Feb 2019 at 11:00, James Wang 王升平 (edvance CN) <james.w...@edvancesecurity.com> wrote:
>
> Hi Support,
>
>
>
> I am thinking of backup / restore approach.
>
>
>
> Can we copy data file from IGNITE_HOME/work/db/nodex/ to other place
> if Ignite process is running ?
>
>
>
> If we restore the copied data, will be file be crashed  if ignite process
> was running when this file was being backup.
>
>
>
> Thank you,
>
>
>
> Best Regards,
>
> James Wang
>
> M/WeChat: +86 135 1215 1134
>
>
>
>
>
> This message contains information that is deemed confidential and
> privileged. Unless you are the addressee (or authorized to receive for the
> addressee), you may not use, copy or disclose to anyone the message or any
> information contained in the message. If you have received the message in
> error, please advise the sender by reply e-mail and delete the message.
>


RE: [Backup/Restore] Can we use copy data file from IGNITE_HOME/work/db/nodexxxxx/ to other place if Ignite process is running ?

2019-02-27 Thread edvance CN
Hi Ilya,

If so, could you please provide some advice on how to back up Ignite data out
to disk.

What do you mean by 3rd party extensions?

Our use case is: we don't have another RDBMS as a third-party persistence
layer; we rely entirely on Ignite. I know GridGain has enterprise features,
but we don't plan to purchase GridGain for the time being.

Thank you.

Best Regards,
James Wang
M/WeChat: +86 135 1215 1134

From: Ilya Kasnacheev <ilya.kasnach...@gmail.com>
Sent: Wednesday, February 27, 2019 06:32 PM
To: user@ignite.apache.org
Subject: Re: [Backup/Restore] Can we use copy data file from 
IGNITE_HOME/work/db/nodex/ to other place if Ignite process is running ?

Hello!

No, you cannot do this (without using 3rd party extensions, anyway).

Regard,
--
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 11:00, James Wang 王升平 (edvance CN) <james.w...@edvancesecurity.com> wrote:
Hi Support,

I am thinking of backup / restore approach.

Can we copy data file from IGNITE_HOME/work/db/nodex/ to other place if 
Ignite process is running ?

If we restore the copied data, will be file be crashed  if ignite process was 
running when this file was being backup.

Thank you,

Best Regards,
James Wang
M/WeChat: +86 135 1215 1134


This message contains information that is deemed confidential and privileged. 
Unless you are the addressee (or authorized to receive for the addressee), you 
may not use, copy or disclose to anyone the message or any information 
contained in the message. If you have received the message in error, please 
advise the sender by reply e-mail and delete the message.


Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread Justin Ji
Dmitry  - 

I also encountered this problem.

I used both persistence and indexing. When I loaded 20 million records, the
loading speed became much slower than before, but the CPU of the Ignite
server is low.


 

Here is my cache configuration:

CacheConfiguration cacheCfg = new CacheConfiguration();
cacheCfg.setName(cacheName);
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setBackups(1);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
cacheCfg.setWriteThrough(true);
cacheCfg.setWriteBehindEnabled(true);
cacheCfg.setWriteBehindFlushThreadCount(2);
cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
cacheCfg.setWriteBehindFlushSize(409600);
cacheCfg.setWriteBehindBatchSize(1024);
cacheCfg.setStoreKeepBinary(true);
cacheCfg.setQueryParallelism(16);
cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
cacheCfg.setRebalanceThrottle(100);
CacheKeyConfiguration cacheKeyConfiguration = new
CacheKeyConfiguration(DpKey.class);
cacheCfg.setKeyConfiguration(cacheKeyConfiguration);

List<QueryEntity> entities = Lists.newArrayList();

QueryEntity entity = new QueryEntity(DpKey.class.getName(),
DpCache.class.getName());
entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());

LinkedHashMap<String, String> map = new LinkedHashMap<>();
map.put("id", "java.lang.String");
map.put("gmtCreate", "java.lang.Long");
map.put("gmtModified", "java.lang.Long");
map.put("devId", "java.lang.String");
map.put("dpId", "java.lang.Integer");
map.put("code", "java.lang.String");
map.put("name", "java.lang.String");
map.put("customName", "java.lang.String");
map.put("mode", "java.lang.String");
map.put("type", "java.lang.String");
map.put("value", "java.lang.String");
map.put("rawValue", byte[].class.getName());
map.put("time", "java.lang.Long");
map.put("status", "java.lang.Boolean");
map.put("uuid", "java.lang.String");

entity.setFields(map);
QueryIndex devIdIdx = new QueryIndex("devId");
devIdIdx.setName("idx_devId");
devIdIdx.setInlineSize(128);
List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
entity.setIndexes(indexes);

entities.add(entity);
cacheCfg.setQueryEntities(entities);


Can you give me some advice on where to start solving these problems?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: On Multiple Endpoints Mode of JDBC Driver

2019-02-27 Thread 李玉珏
The main consideration is that, by using the JDBC interface, the changes
required to the existing code are small.


On 2019/2/27, 5:31 PM, Stephen Darlington wrote:
If you’re already using Ignite-specific APIs (IgniteCallable), why not 
use the other Ignite-native APIs for reading/writing/processing data? 
That way you can use affinity functions for load balancing where it 
makes sense and Ignite’s normal load balancing processing for general 
compute tasks.


Regards,
Stephen

On 27 Feb 2019, at 06:00, 李玉珏@163 <18624049...@163.com> wrote:


Hi,

Since JDBC can't achieve multi-endpoint load balancing, we want to 
use affinityCall (...) mechanism to achieve load balancing, that is, 
to obtain and use JDBC Connection in IgniteCallable implementation.

How to efficiently access and use JDBC Connection?

-------- Forwarded Message --------
Subject: Re: On Multiple Endpoints Mode of JDBC Driver
Date: Tue, 26 Feb 2019 14:53:17 -0800
From: Denis Magda
Reply-To: d...@ignite.apache.org
To: dev



Hello,

You provide a list of IP addresses for the sake of high-availability - if
one of the servers goes down then the client will reconnect to the 
next IP

automatically. There is no any load balancing in place presently. But! In
the next Ignite version, we're planning to roll out partition-awareness
support - the client will send a request to the nodes who hold the data
needed for the request.

-
Denis


On Tue, Feb 26, 2019 at 2:48 PM 李玉珏  wrote:


Hi,

Does have load balancing function in Multiple Endpoints mode of JDBC
driver?For example, "jdbc: ignite:thin://192.168.0.50:101,
192.188.5.40:101, 192.168.10.230:101"
If not, will one node become the bottleneck of the whole system?








Re: H2 SQL query optimiser strategy in Ignite

2019-02-27 Thread joseheitor
Thanks, Ilya.

The reason that I am trying to better understand the rationale is that I had
a multi-join query that was performing badly on Ignite compared to
PostgreSQL.

When comparing the query plans from the two systems, I discovered that the
Postgres query-planner had re-ordered the joins in my query, but H2 (Ignite)
had not (and does not assign a 'cost').

When I explicitly re-ordered the joins in my query, to resemble the result
of the Postgres plan - the performance on Ignite shot up to match that of
Postgres.

So my question is, whether there is any way to overcome this shortcoming ...
for situations where we cannot evaluate the query layout manually, on an
individual basis?

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [Backup/Restore] Can we use copy data file from IGNITE_HOME/work/db/nodexxxxx/ to other place if Ignite process is running ?

2019-02-27 Thread Ilya Kasnacheev
Hello!

No, you cannot do this (without using 3rd party extensions, anyway).

Regard,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 11:00, James Wang 王升平 (edvance CN) <james.w...@edvancesecurity.com> wrote:

> Hi Support,
>
>
>
> I am thinking of backup / restore approach.
>
>
>
> Can we copy data file from IGNITE_HOME/work/db/nodex/ to other place
> if Ignite process is running ?
>
>
>
> If we restore the copied data, will be file be crashed  if ignite process
> was running when this file was being backup.
>
>
>
> Thank you,
>
>
>
> Best Regards,
>
> James Wang
>
> M/WeChat: +86 135 1215 1134
>
>
>
>
>
> This message contains information that is deemed confidential and
> privileged. Unless you are the addressee (or authorized to receive for the
> addressee), you may not use, copy or disclose to anyone the message or any
> information contained in the message. If you have received the message in
> error, please advise the sender by reply e-mail and delete the message.
>


Re: Memory tuning [set sysctl -w vm.extra_free_kbytes]

2019-02-27 Thread Ilya Kasnacheev
Hello!

You can probably try to ignore this setting. I haven't heard of a case where
it would make a difference.

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 12:29, James Wang 王升平 (edvance CN) <james.w...@edvancesecurity.com> wrote:

> Hi Support,
>
>
>
> The doc mentions to set sysctl -w vm.extra_free_kbytes=124 for CentOS.
>
>
>
> But I find this is only related to CentOS 6. Please advise which shall we
> set for CentOS-7
>
>
>
> Thank you.
>
>
>
> Best Regards,
>
> James Wang
>
> M/WeChat: +86 135 1215 1134
>
>
>
>
>
> This message contains information that is deemed confidential and
> privileged. Unless you are the addressee (or authorized to receive for the
> addressee), you may not use, copy or disclose to anyone the message or any
> information contained in the message. If you have received the message in
> error, please advise the sender by reply e-mail and delete the message.
>


Re: H2 SQL query optimiser strategy in Ignite

2019-02-27 Thread Ilya Kasnacheev
Hello!

As far as my understanding goes, the H2 planner is sometimes fed constants
instead of actual statistics, so that it is able to plan queries for general
cases in the absence of precise statistics.

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 13:13, joseheitor wrote:

> Hi,
>
> H2 uses a cost-based query optimisation strategy for the query-planner. But
> in the case of Ignite, I suspect that this cannot be leveraged because the
> client node does not have access to data statistics (distribution, size,
> etc) throughout the various client nodes...
>
> So how is the query plan devised?
>
> Thanks,
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: web-console not displaying the latest version of ignite

2019-02-27 Thread radha
Thanks for the reply.
I am now able to see the proper web console version, i.e. 2.7.
I had taken the frontend from the old Ignite version, as you said. Since I
was working with both Ignite versions 2.5 and 2.7, I missed something while
compiling the web console.

regards
Krupa



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


H2 SQL query optimiser strategy in Ignite

2019-02-27 Thread joseheitor
Hi,

H2 uses a cost-based query optimisation strategy for the query-planner. But
in the case of Ignite, I suspect that this cannot be leveraged because the
client node does not have access to data statistics (distribution, size,
etc) throughout the various client nodes...

So how is the query plan devised?

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: On Multiple Endpoints Mode of JDBC Driver

2019-02-27 Thread Stephen Darlington
If you’re already using Ignite-specific APIs (IgniteCallable), why not use the 
other Ignite-native APIs for reading/writing/processing data? That way you can 
use affinity functions for load balancing where it makes sense and Ignite’s 
normal load balancing processing for general compute tasks.

Regards,
Stephen

> On 27 Feb 2019, at 06:00, 李玉珏@163 <18624049...@163.com> wrote:
> 
> Hi,
> 
> Since JDBC can't achieve multi-endpoint load balancing, we want to use 
> affinityCall (...) mechanism to achieve load balancing, that is, to obtain 
> and use JDBC Connection in IgniteCallable implementation.
> How to efficiently access and use JDBC Connection?
> 
> -------- Forwarded Message --------
> Subject: Re: On Multiple Endpoints Mode of JDBC Driver
> Date: Tue, 26 Feb 2019 14:53:17 -0800
> From: Denis Magda
> Reply-To: d...@ignite.apache.org
> To: dev
> 
> Hello,
> 
> You provide a list of IP addresses for the sake of high-availability - if
> one of the servers goes down then the client will reconnect to the next IP
> automatically. There is no any load balancing in place presently. But! In
> the next Ignite version, we're planning to roll out partition-awareness
> support - the client will send a request to the nodes who hold the data
> needed for the request.
> 
> -
> Denis
> 
> 
> On Tue, Feb 26, 2019 at 2:48 PM 李玉珏  
>  wrote:
> 
>> Hi,
>> 
>> Does have load balancing function in Multiple Endpoints mode of JDBC
>> driver?For example, "jdbc: ignite: thin://192.168.0.50:101,
>> 192.188.5.40:101, 192.168.10.230:101"
>> If not, will one node become the bottleneck of the whole system?
>> 
> 




Memory tuning [set sysctl -w vm.extra_free_kbytes]

2019-02-27 Thread edvance CN
Hi Support,

The doc mentions setting sysctl -w vm.extra_free_kbytes=124 for CentOS.

But I find this is only related to CentOS 6. Please advise what we should set
for CentOS 7.

Thank you.

Best Regards,
James Wang
M/WeChat: +86 135 1215 1134


This message contains information that is deemed confidential and privileged. 
Unless you are the addressee (or authorized to receive for the addressee), you 
may not use, copy or disclose to anyone the message or any information 
contained in the message. If you have received the message in error, please 
advise the sender by reply e-mail and delete the message.


Re: Do client nodes also have to define the cache configuration?

2019-02-27 Thread PBSLogitek
Thank you very much



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Do client nodes also have to define the cache configuration?

2019-02-27 Thread Ilya Kasnacheev
Hello!

No, it is not necessary. But you can supply additional configurations of
caches which will be started when client joins.
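
A minimal sketch, assuming a cache named "myCache" is defined only in the
server's XML:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientCacheAccess {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
        try (Ignite client = Ignition.start(cfg)) {
            // "myCache" exists only in the server XML; the client can still use it.
            IgniteCache<Integer, String> cache = client.cache("myCache");
            cache.put(1, "hello");
        }
    }
}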

Regards,
-- 
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 12:03, PBSLogitek wrote:

> Hello
>
> I have a simple question. I have a server a node and some clients
> connecting
> to it. In my server xml configuration i have defined all caches. Now my
> question is do i have to define those caches in my client xml config to or
> is it not necessary? Will i get an error when i try to access a cache from
> a
> client that is not defined in xml?
>
> Thx
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Do client nodes also have to define the cache configuration?

2019-02-27 Thread PBSLogitek
Hello

I have a simple question. I have a server node and some clients connecting
to it. In my server XML configuration I have defined all caches. Now my
question is: do I have to define those caches in my client XML config too,
or is it not necessary? Will I get an error when I try to access, from a
client, a cache that is not defined in the XML?

Thx



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: pre-load data (Apache ignite native persistence store or Cassandra) into two partitioned cache tables

2019-02-27 Thread xmw45688
Thanks for the reply. I got most of it working, thanks to the example
(https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/starschema/CacheStarSchemaExample.java)
provided by Ignite.

Here is my SQL for the POC (Cassandra DDL scripts):
create table ignitetest.dim_store (id bigint primary key, name varchar, addr
varchar, zip varchar);
create table ignitetest.dim_product (id bigint primary key, name varchar,
price double, qty int);
create table ignitetest.fact_purchase (id bigint primary key, productId
bigint,  storeId bigint, purchasePrice double);
create table ignitetest.fact_purchase_line(id bigint , factId bigint, line
int, linePrice double, lineQty int,  primary key (id, factId));
create table ignitetest.invoice (id bigint, factId bigint, productId bigint, 
storeId bigint, purchasePrice double, primary key (id));
create table ignitetest.invoice_line (id bigint, invoiceId bigint,
factLineId bigint, line int, price double, qty int, primary key (id,
invoiceId, factLineId));

I have the following affinity keys mapped:
purchase_fact -> factId -> purchase_fact_line
purchase_fact -> factId -> invoice
invoice -> invoiceId -> invoice_line

1. fact_purchase and fact_purchase_line via factId affinity works as expected.
2. fact_purchase and invoice via factId affinity works as expected.
3. invoice and invoice_line via invoiceId affinity works as expected.

However,
4. fact_purchase_line, invoice and invoice_line via factLineId and invoiceId
do not work; please see the annotation below.

public class InvoiceLineKey {
/** Primary key. */
private long id;

/** Foreign key to fact_purhcase_line */
@AffinityKeyMapped
private long factLineId;

/** Foreign key to invoice */
@AffinityKeyMapped
private long invoiceId;


5. I don't quite understand why the invoiceId affinity key mapped between
invoice and invoice_line does not require a factLineId key mapped between
fact_purchase_line and invoice_line. Is this because of having the factId
key affinity between purchase_fact and purchase_fact_line, and between
purchase_fact and invoice?

So I just have the following affinity keys mapped:

purchase_fact -> factId -> purchase_fact_line
purchase_fact -> factId -> invoice
invoice -> invoiceId -> invoice_line

Interestingly, joining invoice_line to fact_purchase_line works fine (see the
queries below). Can someone please shed some light on this?

// expected
SELECT count(*) from PARTITION.invoice inv, PARTITION.invoiceline il
WHERE inv.id = il.invoiceid;

// Why does this query work? Note there is a join on li.id = il.factlineid,
// which is not an affinity-key mapping.
SELECT count(*) 
 from PARTITION.factpurchaseline li, PARTITION.invoice inv,
PARTITION.invoiceline il
WHERE li.id = il.factlineid
  AND inv.id = il.invoiceid 
;

// Why does this query work? Note there is a join on li.id = il.factlineid,
// which is not an affinity-key mapping.
SELECT count(*) from PARTITION.factpurchaseline li, PARTITION.invoiceline il
WHERE li.id = il.factlineid
;




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[Backup/Restore] Can we use copy data file from IGNITE_HOME/work/db/nodexxxxx/ to other place if Ignite process is running ?

2019-02-27 Thread edvance CN
Hi Support,

I am thinking about a backup/restore approach.

Can we copy the data files from IGNITE_HOME/work/db/nodex/ to another place
while the Ignite process is running?

If we restore the copied data, will the files be corrupted if the Ignite
process was running while they were being backed up?

Thank you,

Best Regards,
James Wang
M/WeChat: +86 135 1215 1134


This message contains information that is deemed confidential and privileged. 
Unless you are the addressee (or authorized to receive for the addressee), you 
may not use, copy or disclose to anyone the message or any information 
contained in the message. If you have received the message in error, please 
advise the sender by reply e-mail and delete the message.