Re: UPDATE query with JOIN

2018-10-08 Thread Justin Ji
Thanks for your reply!
I'm sure I have indexed devId; here is the output of EXPLAIN:

[SELECT
__Z0.DEVID AS __C0_0,
__Z0.ISONLINE AS __C0_1,
__Z0.MQTTTIME AS __C0_2
FROM "device_online_status".T_DEVICE_ONLINE_STATUS __Z0
/* "device_online_status".T_DEVICE_ONLINE_STATUS_DEVID_ASC_IDX: DEVID
IN('1002', '6c072f7d599215dadfs0ya', '6cdff0d13a96d8cec0j8v6',
'6cf3cde4012b74b853qsoe', '6c0d48eb1718840a69yndq', '002200504301',
'vdevp150509677704164', '002yt001sf00sf0q', '6c2dd83eebd2723329ornu',
'6ce091736ee2cdef6c2gjc', '6c7510b6d2b42b9a46w9j3', '002yt001sf00sfrz',
'6c05c274a04cca4e00z1tp', '6c6baec455eac8bd17ozfn', '002yt001sfgwsfV3')
*/
WHERE __Z0.DEVID IN('1002', '6c072f7d599215dadfs0ya',
'6cdff0d13a96d8cec0j8v6', '6cf3cde4012b74b853qsoe',
'6c0d48eb1718840a69yndq', '002200504301', 'vdevp150509677704164',
'002yt001sf00sf0q', '6c2dd83eebd2723329ornu', '6ce091736ee2cdef6c2gjc',
'6c7510b6d2b42b9a46w9j3', '002yt001sf00sfrz', '6c05c274a04cca4e00z1tp',
'6c6baec455eac8bd17ozfn', '002yt001sfgwsfV3')]
[SELECT
__C0_0 AS DEVID,
__C0_1 AS ISONLINE,
__C0_2 AS MQTTTIME
FROM PUBLIC.__T0
/* "device_online_status"."merge_scan" */]

And the following code shows how I create the cache:


List<QueryEntity> entities = getQueryEntities();
cacheCfg.setQueryEntities(entities);

private List<QueryEntity> getQueryEntities() {
    List<QueryEntity> entities = Lists.newArrayList();

    // Configure the visible (queryable) fields.
    QueryEntity entity = new QueryEntity(String.class.getName(),
        DeviceStatusIgniteVO.class.getName());

    entity.setTableName(IgniteTableKey.T_DEVICE_ONLINE_STATUS.getCode());

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("devId", "java.lang.String");
    fields.put("isOnline", "java.lang.Boolean");
    fields.put("gmtModified", "java.lang.Long");
    fields.put("mqttTime", "java.lang.Long");
    entity.setFields(fields);

    // Configure the index information.
    List<QueryIndex> indexes = Lists.newArrayList(new QueryIndex("devId"));
    entity.setIndexes(indexes);

    entities.add(entity);

    return entities;
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re-deploying a continuous query after all server nodes restart

2018-10-08 Thread DanieleBosetti
Hi, thanks for replying!

I double-checked my case, and I was quite wrong: the client keeps
receiving updates from the CQ after the first server is restarted.
(Maybe it would be useful to add a case to the examples section, like
"ContinuousQueryFailover"? I could try doing that eventually.)


There is one thing that confuses me, though. I see the warning log below
("Client node was reconnected..") which advises listening to
EVT_CLIENT_NODE_RECONNECTED.
Does that mean that simply listening to this event is enough for the client
to recover from the disconnection, or is further action required when we
receive EVT_CLIENT_NODE_RECONNECTED (like closing the CQ and re-issuing it)?
Also, even in the presence of this message, my test case still passes (the
CQ keeps pushing data), so should I be worried when I see it?


"Client node was reconnected after it was already considered failed by the
server topology ..etc.." and
"All continuous queries and remote event listeners created by this client
will be unsubscribed,
consider listening to EVT_CLIENT_NODE_RECONNECTED event to restore them."
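
For reference, a minimal sketch of the restore approach the warning suggests
(not from this thread; the cache name, types and listener body are
assumptions, and EVT_CLIENT_NODE_RECONNECTED must be enabled via
IgniteConfiguration#setIncludeEventTypes):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.events.EventType;

public class CqReconnectListener {
    public static void register(Ignite ignite) {
        ignite.events().localListen(evt -> {
            // Re-create the continuous query from scratch after a reconnect.
            deployCq(ignite);
            return true; // keep the listener subscribed
        }, EventType.EVT_CLIENT_NODE_RECONNECTED);
    }

    private static void deployCq(Ignite ignite) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        qry.setLocalListener(evts -> evts.forEach(e ->
            System.out.println("Updated: " + e.getKey() + " -> " + e.getValue())));

        // Keep the returned cursor if you need to close the CQ later.
        ignite.<Integer, String>cache("myCache").query(qry);
    }
}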


Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Remote Listeners in C#.Net Ignite

2018-10-08 Thread Ilya Kasnacheev
Hello!

It seems that RemoteListen() is not implemented in C# yet. I suggest
implementing that part of the functionality on the Java side.
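
A rough Java-side sketch of subscribing to expiry events cluster-wide (the
cache name and filter are assumptions; EVT_CACHE_OBJECT_EXPIRED must be
enabled via IgniteConfiguration#setIncludeEventTypes):

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.lang.IgnitePredicate;

public class ExpiryListener {
    public static UUID listen(Ignite ignite) {
        // Local callback: runs on this node for every event the remote filter passes.
        IgniteBiPredicate<UUID, CacheEvent> locLsnr = (nodeId, evt) -> {
            System.out.println("Expired on node " + nodeId + ": " + evt.key());
            return true; // keep listening
        };

        // Remote filter: evaluated on the node where the event occurs.
        IgnitePredicate<CacheEvent> rmtFilter = evt -> "myCache".equals(evt.cacheName());

        return ignite.events().remoteListen(locLsnr, rmtFilter,
            EventType.EVT_CACHE_OBJECT_EXPIRED);
    }
}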

Regards,
-- 
Ilya Kasnacheev


Fri, 5 Oct 2018 at 15:53, Hemasundara Rao <hemasundara@travelcentrictechnology.com>:

> Hi All,
>
> I have a requirement: when a cache item expires, I need to capture the
> expiry event on all cluster nodes and run a resource cleanup process at
> item expiry.
> Could someone please help me with sample code showing how to do this?
> I am using LocalListen, but that fires only on the local node.
> Can someone give me sample code for RemoteListen (C#/.NET)?
> I need to listen to all cache item expiry events.
>
>
> Thanks and Regards,
>
> Hemasundara Rao Pottangi  | Senior Project Leader
>
> HotelHub LLP
> Phone: +91 80 6741 8700
> Cell: +91 99 4807 7054
> Email: hemasundara@hotelhub.com
> Website: www.hotelhub.com 


Re: Affinity function and partition aware database loading

2018-10-08 Thread Ilya Kasnacheev
Hello!

As far as I understand, you have affinity data in your rows, but it is not a
single column: it is computed from several columns.

In this case you can create your own AffinityFunction by inheriting from
RendezvousAffinityFunction.

In general, creating a new AffinityFunction from scratch is a monumental
task, but in this case you can just override the partition(Object) method
and make it return the partition number when passed your row object as
input. Note that the object might be in BinaryObject form at this point.
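
A minimal sketch of such an override (the field names and the partition math
are assumptions, not from this thread):

import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;

public class CompositeAffinityFunction extends RendezvousAffinityFunction {
    @Override public int partition(Object key) {
        if (key instanceof BinaryObject) {
            BinaryObject bo = (BinaryObject)key;

            // Hypothetical columns that together determine data placement.
            int region = bo.<Integer>field("regionCode");
            int store = bo.<Integer>field("storeId");

            // Combine the columns and map the result into [0, partitions()).
            return Math.floorMod(31 * region + store, partitions());
        }

        // Fall back to the default rendezvous behaviour for other keys.
        return super.partition(key);
    }
}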

Regards,
-- 
Ilya Kasnacheev


Mon, 8 Oct 2018 at 15:15, Maxim.Pudov:

> Hello,
>
> If you want to benefit from partition-aware data loading, you must set up
> an affinity key field; without it, SQL will do a full scan every time.
> If you don't plan to use SQL on Ignite's side, then you can set up a custom
> AffinityFunction
> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/AffinityFunction.html>
> in your CacheConfiguration
> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setAffinity-org.apache.ignite.cache.affinity.AffinityFunction->
> , which is not recommended due to the potential complexity of this task.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Blog post "Introduction to the Apache® Ignite™ community structure"

2018-10-08 Thread Dmitriy Pavlov
Hi Apache Ignite Enthusiasts,

I'm happy to announce a new post about Apache Ignite

https://www.gridgain.com/resources/blog/introduction-apacher-ignitetm-community-structure


- "Introduction to the Apache® Ignite™ community structure." This post is a
translated version of the Habr post. Thanks to Tom D. for translation of
original post.

There I explain the roles, rules, and benefits of participating in the
Apache Ignite community.

As always, comments and feedback are greatly appreciated.

Sincerely,
Dmitriy Pavlov


RE: Services not initializing in persistence enabled cluster nodes

2018-10-08 Thread Maxim.Pudov
Your case is well described in the docs:
https://apacheignite.readme.io/docs/baseline-topology#section-adding-new-node
I hope it helps.
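
In short, after the new node joins an active cluster, you extend the baseline
topology to include it. A minimal sketch (assuming the cluster is already
active):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCluster;

public class BaselineExpand {
    public static void expand(Ignite ignite) {
        IgniteCluster cluster = ignite.cluster();

        // Reset the baseline to the current set of server nodes, so that
        // persistent caches start rebalancing onto the new node.
        cluster.setBaselineTopology(cluster.forServers().nodes());
    }
}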



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Spark Ignite Data Load failing On Large Cache

2018-10-08 Thread ApacheUser
Hi, I am testing a large Ignite cache of 900GB on a 4-node VM (96GB RAM, 8
CPUs and 500GB SAN storage) Spark-Ignite cluster. It has happened two times:
after reaching 350GB+, one or two nodes stop processing the data load and the
load stops. Please advise; the cluster, server and client logs are below.


 


Server Logs:

[11:59:34] Topology snapshot [ver=121, servers=4, clients=9, CPUs=32,
offheap=1000.0GB, heap=78.0GB]
[11:59:34]   ^-- Node [id=F6605E96-47C9-479B-A840-03316500C9A3,
clusterState=ACTIVE]
[11:59:34]   ^-- Baseline [id=0, size=4, online=4, offline=0]
[11:59:34] Data Regions Configured:
[11:59:34]   ^-- default_mem_region [initSize=256.0 MiB, maxSize=20.0 GiB,
persistenceEnabled=true]
[11:59:34]   ^-- q_major [initSize=10.0 GiB, maxSize=30.0 GiB,
persistenceEnabled=true]
[11:59:34]   ^-- q_minor [initSize=10.0 GiB, maxSize=30.0 GiB,
persistenceEnabled=true]
[14:33:15,872][SEVERE][grid-nio-worker-client-listener-3-#33][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-3, igniteInstanceName=null,
finished=false, hashCode=254322881, interrupted=false,
runner=grid-nio-worker-client-listener-3-#33]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/64.102.213.190:10800, rmtAddr=/10.82.249.225:51449,
createTime=1538740798912, closeTime=0, bytesSent=397, bytesRcvd=302,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1538742789216,
lastSndTime=1538742789216, lastRcvTime=1538742789216, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true]]]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1085)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
[21:43:26,312][SEVERE][grid-nio-worker-client-listener-0-#30][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=0, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-0, igniteInstanceName=null,
finished=false, hashCode=2211598, interrupted=false,
runner=grid-nio-worker-client-listener-0-#30]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/64.102.213.190:10800, rmtAddr=/10.82.32.114:59525,
createTime=1538746249024, closeTime=0, bytesSent=2035, bytesRcvd=1532,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1538767916701,
lastSndTime=1538767916701, lastRcvTime=1538767916701, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true]]]
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1085)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
   

Spark Ignite Data Load failing On Large Cache

2018-10-08 Thread ApacheUser
Hi, I am testing a large Ignite cache of 900GB on a 4-node VM (96GB RAM, 8
CPUs and 500GB SAN storage) Spark-Ignite cluster. It has happened two times:
after reaching 350GB+, one or two nodes stop processing the data load and the
load stops. Please advise; details and the visor output are below.

visor> top
Hosts: 4 (all Linux amd64 3.10.0-862.11.6.el7.x86_64, 8 CPUs each)

  64.102.213.190 (MAC FA:16:3E:52:96:C4, CPU load 0.14 %):
    F6605E96(@n1) Server, 2760B50C(@n11) Client, 81855FF0(@n12) Client
  64.102.212.151 (MAC FA:16:3E:E5:27:36, CPU load 2.13 %):
    512609AB(@n0) Server, 72AA1490(@n5) Client, E218A964(@n6) Client
  64.102.213.13 (MAC FA:16:3E:C4:F4:98, CPU load 0.10 %):
    4470553B(@n2) Server, F0D1625A(@n7) Client, EF0C5A13(@n8) Client
  64.102.213.220 (MAC FA:16:3E:26:72:FD, CPU load 0.21 %):
    F44497FE(@n3) Server, DBA60939(@n4) Client, 65FA421F(@n9) Client, 8CBFE426(@n10) Client

Summary:
  Active         | true
  Total hosts    | 4
  Total nodes    | 13
  Total CPUs     | 32
  Avg. CPU load  | 0.61 %
  Avg. free heap | 71.00 %
  Avg. Up time   | 30:22:52
  Snapshot time  | 2018-10-08 14:19:47

visor> node

Select node from:
+----+--------------------------------+-----------+----------+------+----------+-----------+
| #  | Node ID8(@), IP                | Node Type | Up Time  | CPUs | CPU Load | Free Heap |
+----+--------------------------------+-----------+----------+------+----------+-----------+
| 0  | 512609AB(@n0), 64.102.212.151  | Server    | 30:23:14 | 8    | 4.33 %   | 36.00 %   |
| 1  | F6605E96(@n1), 64.102.213.190  | Server    | 30:23:10 | 8    | 0.90 %   | 56.00 %   |
| 2  | 4470553B(@n2), 64.102.213.13   | Server    | 30:23:07 | 8    | 0.20 %   | 78.00 %   |
| 3  | F44497FE(@n3), 64.102.213.220  | Server    | 30:23:03 | 8    | 0.17 %   | 44.00 %   |
| 4  | DBA60939(@n4), 64.102.213.220  | Client    | 14:21:12 | 8    | 0.17 %   | 66.00 %   |
| 5  | 72AA1490(@n5), 64.102.212.151  | Client    | 14:21:06 | 8    | 0.17 %   | 78.00 %   |
| 6  | E218A964(@n6), 64.102.212.151  | Client    | 14:21:07 | 8    | 0.17 %   | 71.00 %   |
| 7  | F0D1625A(@n7), 64.102.213.13   | Client    | 14:21:06 | 8    | 0.07 %   | 84.00 %   |
| 8  | EF0C5A13(@n8), 64.102.213.13   | Client    | 14:21:06 | 8    | 0.07 %   | 83.00 %   |
| 9  | 65FA421F(@n9), 64.102.213.220  | Client    | 14:21:07 | 8    | 0.10 %   | 64.00 %   |
| 10 | 8CBFE426(@n10), 64.102.213.220 | Client    | 14:21:06 | 8    | 0.13 %   | 76.00 %   |
| 11 | 2760B50C(@n11), 64.102.213.190 | Client    | 14:21:07 | 8    | 0.13 %   | 78.00 %   |
| 12 | 81855FF0(@n12), 64.102.213.190 | Client    | 14:21:06 | 8    | 0.10 %   | 81.00 %   |
+----+--------------------------------+-----------+----------+------+----------+-----------+
*Server

Re: Affinity function and partition aware database loading

2018-10-08 Thread Maxim.Pudov
Hello,

If you want to benefit from partition-aware data loading, you must set up an
affinity key field; without it, SQL will do a full scan every time.
If you don't plan to use SQL on Ignite's side, then you can set up a custom
AffinityFunction in your CacheConfiguration, which is not recommended due to
the potential complexity of this task.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ScanQuery failing on Client Mode

2018-10-08 Thread Ilya Kasnacheev
Hello!

Yes, it will cause problems if it is not available on the server node. Cache
key-value classes are not peer-loaded. You can still use withKeepBinary()
on the cache and run your queries on BinaryObjects.
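
A minimal sketch of the keepBinary approach (the cache name and field name
are placeholders):

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class BinaryScan {
    public static void scan(Ignite ignite) {
        // Values stay in binary form, so the value class does not need to
        // be on this node's classpath.
        IgniteCache<Object, BinaryObject> cache =
            ignite.cache("arCache").withKeepBinary();

        try (QueryCursor<Cache.Entry<Object, BinaryObject>> cur =
                 cache.query(new ScanQuery<>())) {
            for (Cache.Entry<Object, BinaryObject> e : cur)
                System.out.println(e.getValue().<String>field("status"));
        }
    }
}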

Regards,
-- 
Ilya Kasnacheev


Fri, 5 Oct 2018 at 18:50, kvenkatramtreddy:

> Hi,
>
> It is available; I can call other methods in the same service. Cache.query
> works on server nodes, but not on client nodes.
>
> However, ARCacheDelegateImpl is not available on the server node. Could
> that be causing the issue?
>
> One more question: can we run a few server nodes with persistence and a
> few without persistence in the same cluster?
>
> Thanks & Regards,
> Venkat
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: UPDATE query with JOIN

2018-10-08 Thread Evgenii Zhuravlev
Hi,

Are you sure that you have an index on the devId field?

Evgenii

Mon, 8 Oct 2018 at 12:23, Justin Ji:

> Hi Alexander -
>
> I have tried the SQL you suggested, but the performance got worse, and I
> do not know why.
>
> 1. "update t_device_module set isOnline=1, mqttTime=" +
> System.currentTimeMillis() / 1000 + " where devId in ('0001', '0002', ...,
> '2048')";
> This SQL takes about 600ms for 2048 records.
>
> 2. "update t_device_module t1 set t1.isOnline=1, t1.mqttTime=" +
> System.currentTimeMillis() / 1000 + " where t1.devId in (select table.devId
> from table(devId varchar = ?))";
>
> cache.query(new SqlFieldsQuery(sql2).setArgs(new
> Object[]{clientIds.toArray()}));
>
> clientIds is a list containing 2048 records (the same ones as above), but
> this query took about 2000ms.
>
> According to
> sql-performance-and-usability-considerations
> <https://apacheignite-sql.readme.io/docs/performance-and-debugging#sql-performance-and-usability-considerations>
> the second SQL is recommended, because the first one will not use
> indexes.
>
> My question is why the second one is slower than the first one.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: UPDATE query with JOIN

2018-10-08 Thread Justin Ji
Hi Alexander - 

I have tried the SQL you suggested, but the performance got worse, and I do
not know why.

1. "update t_device_module set isOnline=1, mqttTime=" +
System.currentTimeMillis() / 1000 + " where devId in ('0001', '0002', ...,
'2048')";
This SQL takes about 600ms for 2048 records.

2. "update t_device_module t1 set t1.isOnline=1, t1.mqttTime=" +
System.currentTimeMillis() / 1000 + " where t1.devId in (select table.devId
from table(devId varchar = ?))";

cache.query(new SqlFieldsQuery(sql2).setArgs(new
Object[]{clientIds.toArray()}));

clientIds is a list containing 2048 records (the same ones as above), but
this query took about 2000ms.

According to
sql-performance-and-usability-considerations
<https://apacheignite-sql.readme.io/docs/performance-and-debugging#sql-performance-and-usability-considerations>
the second SQL is recommended, because the first one will not use indexes.

My question is why the second one is slower than the first one.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BUG - .net decimal type creating ignite table produces double

2018-10-08 Thread Ilya Kasnacheev
Hello!

Is it a follow-up of
http://apache-ignite-users.70518.x6.nabble.com/net-decimal-being-stored-as-Other-in-ignite-td24227.html#a24239
?

If we are talking about the HTTP JSON REST API, then JSON does not have a
decimal type; it only has double. Therefore I imagine that numbers will be
represented as doubles.
This is the default behaviour of the ObjectMapper which we happen to use; it
is configurable, but I can't see how you could configure it with Ignite REST.

You could have your own ConnectorMessageInterceptor that outputs them
as strings instead, I guess.
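
Something like this, perhaps (a sketch only; whether the REST handler applies
the interceptor to your particular responses is something to verify):

import java.math.BigDecimal;
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.ConnectorMessageInterceptor;

// Sketch: render BigDecimal values as strings on the way out of the REST
// connector, so JSON serialization cannot degrade them to double.
public class DecimalAsStringInterceptor implements ConnectorMessageInterceptor {
    @Override public Object onReceive(Object obj) {
        return obj; // leave inbound values untouched
    }

    @Override public Object onSend(Object obj) {
        return obj instanceof BigDecimal ? obj.toString() : obj;
    }
}

// Registration (in your IgniteConfiguration):
// ConnectorConfiguration connCfg = new ConnectorConfiguration();
// connCfg.setMessageInterceptor(new DecimalAsStringInterceptor());
// cfg.setConnectorConfiguration(connCfg);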

Regards,
-- 
Ilya Kasnacheev


Thu, 4 Oct 2018 at 18:52, wt:

> version 2.6
>
> I have a tool that creates Ignite tables. It passes in a class with the
> type as decimal, but the type comes out as double when I query the
> metadata using REST in Ignite. The documentation explicitly states that C#
> decimal converts to java.math.BigDecimal, which is not a floating-point
> type like double.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: odbc caches - cannot browse

2018-10-08 Thread Ilya Kasnacheev
Hello!

The ODBC driver should allow you to list all *tables* in their schemas and
browse them. Note that you need to have tables, not just plain caches.

If it doesn't, please tell me what software you are using. Is it
Microsoft Excel? Which version?

Regards,
-- 
Ilya Kasnacheev


Fri, 5 Oct 2018 at 12:18, wt:

> Is there a reason why the ODBC driver can't show all the caches and browse
> their schemas like regular ODBC drivers?
>
> This is a regular ODBC driver allowing the user to navigate tables:
> odbc.png
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to schedule and ignite service to run every 5 seconds?

2018-10-08 Thread Ilya Kasnacheev
Hello!

ignite-schedule works with CRON expressions, using cron4j underneath, which
does not support granularity finer than one minute.
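
If you need sub-minute periods, plain JDK scheduling is usually enough. A
sketch (a suggestion, not an Ignite API; you manage the executor's lifecycle
yourself):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.ignite.Ignite;

public class FiveSecondTask {
    public static ScheduledExecutorService start(Ignite ignite) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();

        // Broadcast a job to all nodes every 5 seconds.
        exec.scheduleAtFixedRate(
            () -> ignite.compute().broadcast(() -> System.out.println("Running")),
            0, 5, TimeUnit.SECONDS);

        return exec; // call shutdown() when the node stops
    }
}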

Regards,
-- 
Ilya Kasnacheev


Mon, 8 Oct 2018 at 8:13, the_palakkaran:

> The default scheduling happens every 1 minute. Is it possible to override
> it and make it every 5 seconds?
>
> SchedulerFuture schedulerFuture = ignite.scheduler().scheduleLocal(
>     new Callable<Integer>() {
>         private int invocations = 0;
>
>         @Override public Integer call() {
>             invocations++;
>
>             ignite.compute().broadcast(new IgniteRunnable() {
>                 @Override public void run() {
>                     System.out.println("Running");
>                 }
>             });
>
>             return invocations;
>         }
>     },
>     "{5, 3} * * * * *" // Cron expression.
> );
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is it possible to schedule an ignite service?

2018-10-08 Thread Ilya Kasnacheev
Hello!

In the Service.execute() method you can simply implement a sleep/run loop;
you do not have to return from this method.
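
For example, a minimal sketch (the 5-second period and the work itself are
placeholders):

import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class PeriodicService implements Service {
    @Override public void init(ServiceContext ctx) {
        // allocate resources here if needed
    }

    @Override public void execute(ServiceContext ctx) throws Exception {
        // Keep doing periodic work until the service is cancelled.
        while (!ctx.isCancelled()) {
            doWork(); // placeholder for the actual periodic task

            Thread.sleep(5_000); // 5-second period
        }
    }

    @Override public void cancel(ServiceContext ctx) {
        // execute() observes ctx.isCancelled() and exits the loop
    }

    private void doWork() {
        System.out.println("Periodic work");
    }
}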

You can also use ignite-schedule for that, I guess.

Regards,
-- 
Ilya Kasnacheev


Mon, 8 Oct 2018 at 7:06, the_palakkaran:

> Hi,
>
> Is it possible to schedule an ignite service that is deployed on multiple
> nodes to run at periodic intervals?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there a way for ignite cache to know data update in DB and refresh ?

2018-10-08 Thread Ilya Kasnacheev
Hello!

In general, databases do not provide push-based APIs to notify you about
changes (Ignite itself being an exception here, with its Continuous Queries).

So the answer is "no" in general. It is not implemented, and it's not
obvious how it could be.

There are third-party solutions for a limited number of database options,
but nothing in Apache Ignite.
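
For contrast, this is roughly what Ignite's push API looks like (a sketch;
the cache name and types are assumed):

import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

public class CacheChangeListener {
    public static void listen(Ignite ignite) {
        IgniteCache<Long, String> cache = ignite.cache("myCache");

        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

        // Invoked on every insert/update of the cache.
        qry.setLocalListener(evts -> {
            for (CacheEntryEvent<? extends Long, ? extends String> e : evts)
                System.out.println("Changed: " + e.getKey() + " -> " + e.getValue());
        });

        cache.query(qry); // keep the returned cursor to stop the query later
    }
}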

Regards,
-- 
Ilya Kasnacheev


Thu, 4 Oct 2018 at 12:27, the_palakkaran:

> Hi,
>
> Is there a way in Ignite to know that the original value in the DB has
> been updated, so as to refresh it in a cache that has persistence enabled?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Does Ignite support GPB serialized data

2018-10-08 Thread Ilya Kasnacheev
Hello!

Ignite does not have special support for protocol buffers.

You are welcome to implement the Binarylizable or Externalizable interface on
your objects to specify serialization for them.

You can also specify a BinarySerializer for types that you do not control by
putting them into BinaryConfiguration.setTypeConfigurations() and using
that configuration with your IgniteConfiguration:
https://apacheignite.readme.io/docs/binary-marshaller#section-configuring-binary-objects
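
For instance, a wrapper that keeps the protobuf payload as opaque bytes might
look like this (a sketch; the envelope type is a placeholder, not an Ignite
API):

import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

public class GpbEnvelope implements Binarylizable {
    private byte[] payload; // bytes produced by Message#toByteArray()

    public GpbEnvelope() {
        // required for deserialization
    }

    public GpbEnvelope(byte[] payload) {
        this.payload = payload;
    }

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        // Store the protobuf-encoded payload as an opaque byte array.
        writer.writeByteArray("payload", payload);
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        payload = reader.readByteArray("payload");
    }

    public byte[] payload() {
        return payload;
    }
}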

Regards,
-- 
Ilya Kasnacheev


Wed, 3 Oct 2018 at 18:24, Michael Fong:

> Hi all,
>
> We have protocol-buffer-serialized binary data that we would like to store
> in Ignite, and we wonder if Ignite supports GPB serialization out of the
> box.
>
> If not, which serialization interface do we need to implement to customize
> and override it in the XML?
>
> Thanks in advance
>


Re: Spark 2.3 Structured Streaming With Ignite

2018-10-08 Thread Stephen Darlington
There’s a ticket and a patch, but it doesn’t work “out of the box” yet:

https://issues.apache.org/jira/browse/IGNITE-9357

Regards,
Stephen

> On 5 Oct 2018, at 19:53, ApacheUser  wrote:
> 
> Hi,
> 
> Is it possible to use Spark Structured Streaming with Ignite? I am getting
> a "Data source ignite does not support streamed writing" error.
> 
> Log trace:
> 
> 
> Exception in thread "main" java.lang.UnsupportedOperationException: Data
> source ignite does not support streamed writing
>at
> org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:319)
>at
> org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:293)
>at
> com.cisco.ccrc.spark.sparkIngite.ssincrmntl$.main(ssincrmntl.scala:90)
>at
> com.cisco.ccrc.spark.sparkIngite.ssincrmntl.main(ssincrmntl.scala)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:498)
>at
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
>at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
>at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
>at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
>at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 
> 
> Thanks
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




RE: Services not initializing in persistence enabled cluster nodes

2018-10-08 Thread PotatoGuo
Is there any way to add a new node to a persistent cluster? I want to
expand the cluster, because the number of servers may increase someday.
PS: Ignite is embedded in the project using Spring Boot.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: INSERT INTO ... SELECT kills server nodes

2018-10-08 Thread Evgenii Zhuravlev
Hi Alberto,

Can you share the full query, the configured indexes, and the configs
(both JVM and Ignite)?

Thanks,
Evgenii

Fri, 5 Oct 2018 at 22:00, Alberto Mancini:

> Hi,
> we are developing a PoC using Ignite and we faced a scary problem
> that I suppose is due to some naive misconfiguration.
>
> Long story short: we use persistence on a cluster on AWS, and to speed up
> test data loading we attempted a simple SQL query:
>
> INSERT INTO MyTable (
>   ID,
>   PARTITION_KEY,
>   ...
> )
> SELECT
>   ID,
>   'another_partition_key',
>   ...
>
> With enough rows this ends up killing nodes.
> With the default failureDetectionTimeout the server nodes
> end up offline; tuning the timeout, we get OOM.
>
> Obviously the query is used only for testing, but the fact that the server
> nodes stop is actually scary.
>
> Did we miss some obvious configuration that may keep the cluster healthy?
>
> Thanks,
>    A.
>


Re: Apache Ignite 2.6.0 JDBC Thin Client Multiple Endpoints Issues

2018-10-08 Thread Ilya Kasnacheev
Hello!

The address list is comma-separated and should not contain any spaces:

String igniteUrl="jdbc:ignite:thin://
192.168.0.117:10800,192.168.0.62:10800,192.168.0.115:10800";

JDBC seems to get confused when spaces appear between entries. In any case,
you should not expect naked spaces in URLs to work.

Regards,
-- 
Ilya Kasnacheev


Wed, 3 Oct 2018 at 16:52, arpiticx:

> Hi,
>
> I am running 3 Ignite nodes in a cluster and trying to connect using the
> JDBC thin driver with multiple endpoints, and I get an error when trying
> to get a connection. Here is my code for connecting to the Ignite cluster
> using the JDBC thin driver with multiple endpoints. Any help is
> appreciated.
>
> *Connection Pool Configuration*
>
> private static void initializeIgnitePool() {
>     try {
>         System.out.println("Configuring cache database connection pool");
>
>         // String igniteUrl = "jdbc:ignite:cfg://cache=default@file:$$home_path$$" + File.separator + "$$config_file_name$$";
>         // String igniteUrl = "jdbc:ignite:cfg://cache=default@file:/opt/apache-ignite-fabric-2.6.0-bin/config/icrm-ignite-config.xml";
>         String igniteUrl = "jdbc:ignite:thin://192.168.0.117:10800, 192.168.0.62:10800, 192.168.0.115:10800";
>
>         BasicDataSource bds = new BasicDataSource();
>
>         bds.setDriverClassName("org.apache.ignite.IgniteJdbcThinDriver");
>         bds.setUrl(igniteUrl);
>         // bds.setMaxActive(200);
>         // bds.setMaxWait(30);
>         bds.setMaxIdle(60);
>         bds.setTestOnBorrow(false);
>         bds.setLogAbandoned(false);
>         bds.setValidationQuery("select 1 ");
>
>         ds = bds;
>     }
>     catch (Exception e) {
>         e.printStackTrace();
>     }
> }
>
>
> *Code for inserting multiple records*
>
> for (int i = 0; i < 2000; i++) {
>     try {
>         Connection conn = pool.getConnection();
>
>         PreparedStatement ps = conn.prepareStatement(
>             "insert into test1 (id, fname) values (?, ?)");
>
>         ps.setInt(1, i);
>         ps.setString(2, "arpit-" + i);
>
>         int u = ps.executeUpdate();
>         System.out.println("Rows inserted: " + u);
>
>         ps.close();
>         conn.close();
>
>         Thread.sleep(1000);
>     }
>     catch (Exception e) {
>         System.out.println(i + " ==> " + e.getMessage() + " <==");
>     }
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Affinity questions

2018-10-08 Thread Pavel Vinokurov
James,

Please pay attention to the Query#setLocal(true) query configuration method.
It makes the query return only the records located on the local node.
The following code block iterates through the cache entries located on each
server node:

ignite.compute(ignite.cluster().forServers()).broadcast(new IgniteRunnable() {
    @Override public void run() {
        Ignite local = Ignition.localIgnite();

        System.out.println("Ignite node: " + local.cluster().localNode().id());

        IgniteCache<Object, Object> cache = local.cache("cache");

        QueryCursor<Cache.Entry<Object, Object>> query =
            cache.query(new ScanQuery<>().setLocal(true));

        Iterator<Cache.Entry<Object, Object>> it = query.iterator();

        while (it.hasNext()) {
            Cache.Entry<Object, Object> entry = it.next();
            // process the local entry here
        }
    }
});


Fri, 5 Oct 2018 at 22:45, James Dodson:

> Thanks Val.
> Is there a way I can verify this behavior - a way to query one node in
> particular to see what data is on each node?
>
> On Fri, Oct 5, 2018 at 2:04 PM vkulichenko 
> wrote:
>
>> James,
>>
>> That would be enough - everything with the same affinity key will be
>> stored
>> on the same node. That's assuming, of course, that both caches have equal
>> affinity configuration (same affinity function, same number of partitions,
>> etc.).
>>
>> -Val
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 

Regards

Pavel Vinokurov


Re: Re-deploying a continuous query after all server nodes restart

2018-10-08 Thread Evgenii Zhuravlev
Hi,

In this case, starting the new node B and restarting node A are exactly the
same thing: both are just starting a new node, because node A will come back
with a new node id. As the javadoc for CQ says:
"Note that in case query is distributed and a new node joins, it will get
the remote filter for the query during discovery process before it actually
joins topology, so no updates will be missed."

So, in the case of a node restart, if at least one node still has this CQ,
it will be deployed to the newly joined node.
Evgenii

Thu, 4 Oct 2018 at 19:16, DanieleBosetti:

> Hi all,
>
> I'd like to understand how to determine that a continuous query (CQ) needs
> to be re-created in case of multiple server restarts.
>
> My understanding for the following case is:
> If an Ignite client joins a cluster (composed of server-A only) and
> deploys a CQ to the cluster, server-A will push updates to the client (the
> CQ listens on a cache in replicated mode).
> When a new server node (server-B) joins the cluster, the CQ is deployed to
> the new server node.
> Server-A will push updates for that CQ to the client, and when server-A
> shuts down, server-B will continue pushing updates for the CQ.
> If server-A is started again, the CQ is not re-deployed to this node (does
> the client decide whether to re-deploy the CQ on new server nodes based on
> their node id?).
> If server-B is then shut down, I understand that the CQ we originally
> created (or the related client local listener) will stop receiving
> updates. (Is that correct?)
>
> How can the client determine that no server is "managing" its CQ? (And,
> since the server nodes are restarted in turn, the client will not receive
> any CLIENT_DISCONNECTED event.)
> Should the client listen to all NODE_JOIN and NODE_LEFT events to figure
> out that the servers left?
>
> (I searched the examples section, but I couldn't find how to "keep alive"
> a CQ.)
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>