Re: CountDownLatch issue in Ignite 2.6 version

2020-06-08 Thread Prasad Bhalerao
I just checked the Ignite doc for atomic configuration,
but it doesn't say that it is applicable to distributed data structures.

Is it really applicable to distributed data structures like the countdown latch?

On Tue 9 Jun, 2020, 7:26 AM Prasad Bhalerao wrote:
> Hi,
> I was under the impression that the countdown latch is implemented in a
> replicated cache, so when any number of nodes goes down it does not lose
> its state.
>
> Can you please explain why atomic data structures use only 1 backup when
> their state is so important?
>
> Can we force atomic data structures to use a replicated cache?
>
> Which cache does ignite use to store atomic data structures?
>
> Thanks
> Prasad
>
> On Mon 8 Jun, 2020, 11:58 PM Evgenii Zhuravlev  wrote:
>
>> Hi,
>>
>> By default, the cache that stores all atomic structures has only 1 backup,
>> so after losing all data for this particular latch, the latch is recreated.
>> To change the default atomic configuration, use
>> IgniteConfiguration.setAtomicConfiguration.
>>
>> Evgenii
>>
>> Sat, Jun 6, 2020 at 06:20, Akash Shinde :
>>
>>> *Issue:* The countdown latch gets reinitialized to its original value (4)
>>> when one or more (but not all) nodes go down. *(Partition loss happened)*
>>>
>>> We are using Ignite's distributed countdown latch to make sure that cache
>>> loading is completed on all server nodes. We do this to make sure that our
>>> Kafka consumers start only after cache loading is complete on all server
>>> nodes. This is the basic criterion which needs to be fulfilled before
>>> actual processing starts.
>>>
>>>
>>>  We have 4 server nodes and the countdown latch is initialized to 4. We use
>>> the cache.loadCache method to start the cache loading. When each server
>>> completes cache loading it reduces the count by 1 using the countDown
>>> method. So when all the nodes complete cache loading, the count reaches
>>> zero. When the count reaches zero we start Kafka consumers on all server
>>> nodes.
>>>
>>>  But we saw weird behavior in the prod env. 3 server nodes were shut down
>>> at the same time, but 1 node was still alive. When this happened the count
>>> was reinitialized to its original value, i.e. 4. I am not able to
>>> reproduce this in the dev env.
>>>
>>>  Is this a bug: when one or more (but not all) nodes go down, the count
>>> reinitializes back to its original value?
>>>
>>> Thanks,
>>> Akash
>>>
>>


Ignite Spring Integration

2020-06-08 Thread marble.zh...@coinflex.com
Hi Guru, 

I am integrating Spring with Ignite. When deploying a service with the
method deployNodeSingleton, I get a "no bean found" exception, shown below,
which leads to the suppressed exception: class
org.apache.ignite.IgniteCheckedException: Error occured during service
initialization: [locId=13760585-1342-47d0-b30f-30f62b23df12, name=WebSocketGridService]

Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException:
No bean named 'marketDao' available

I need help, thanks a lot.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite node log file setup

2020-06-08 Thread kay
Hello!

I start up 

sh ./ignite.sh -J-DgridName=testGridName -v ./config/config-cache.xml

and in config-cache.xml 


but server start failed.

Is it not proper to set igniteInstanceName?

log is here 
class org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.communication.GridIoManager]
 at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1067)
 at org.apache.ignite.Ignition.start(Ignition.java:349)
 at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
 Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
manager: GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.communication.GridIoManager]
 at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1965)
 at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1173)
 at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
 at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
 at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
 at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
 at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
 at org.apache.ignite.Ignition.start(Ignition.java:346)
 ... 1 more
 Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
SPI: TcpCommunicationSpi [connectGate=null,
connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@6622fc65,
chConnPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$4@299321e2,
enableForcibleNodeKill=false, enableTroubleshootingLog=false,
locAddr=42.1.188.128, locHost=intdev01/42.1.188.128, locPort=48722,
locPortRange=1, shmemPort=-1, directBuf=true, directSndBuf=false,
idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0,
slowClientQueueLimit=0, nioSrvr=GridNioServer [selectorSpins=0,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=org.apache.ignite.internal.util.nio.GridDirectParser@2f17e30d,
directMode=true], GridConnectionBytesVerifyFilter], closed=false,
directBuf=true, tcpNoDelay=true, sockSndBuf=32768, sockRcvBuf=32768,
writeTimeout=2000, idleTimeout=60, skipWrite=false, skipRead=false,
locAddr=intdev01/42.1.188.128:48722, order=LITTLE_ENDIAN, sndQueueLimit=0,
directMode=true,
mreg=org.apache.ignite.internal.processors.metric.MetricRegistry@71cf1b07,
rcvdBytesCntMetric=org.apache.ignite.internal.processors.metric.impl.LongAdderMetric@a9be6fa5,
sentBytesCntMetric=org.apache.ignite.internal.processors.metric.impl.LongAdderMetric@489b09ce,
outboundMessagesQueueSizeMetric=org.apache.ignite.internal.processors.metric.impl.LongAdderMetric@69a257d1,
sslFilter=null, msgQueueLsnr=null, readerMoveCnt=0, writerMoveCnt=0,
readWriteSelectorsAssign=false], shmemSrv=null, usePairedConnections=false,
connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false,
ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000,
boundTcpPort=48722, boundTcpShmemPort=-1, selectorsCnt=8, selectorSpins=0,
addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@181e731e[Count = 1],
stopping=false,
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@19648c40]
 at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
 at
org.apache.ignite.internal.managers.communication.GridIoManager.start(GridIoManager.java:435)
 at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1960)
 ... 11 more
 Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to
register SPI MBean: null
 at
org.apache.ignite.spi.IgniteSpiAdapter.registerMBean(IgniteSpiAdapter.java:421)
 at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.spiStart(TcpCommunicationSpi.java:2397)
 at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
 ... 13 more
 Caused by: javax.management.MalformedObjectNameException: Invalid character
':' in value part of property
 at javax.management.ObjectName.construct(ObjectName.java:618)
 at javax.management.ObjectName.<init>(ObjectName.java:1382)
 at
org.apache.ignite.internal.util.IgniteUtils.makeMBeanName(IgniteUtils.java:4719)
 at
org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4788)
 at

Re: Ignite visor use

2020-06-08 Thread Evgenii Zhuravlev
Ignite Visor starts a daemon node internally to connect to the cluster; it
doesn't matter how the cluster was started. All you need is to configure the
addresses of the cluster nodes in the config file, the same way as you do for
the other nodes in the cluster.

Evgenii

Mon, Jun 8, 2020 at 19:01, Prasad Bhalerao :

> Hi,
> I am starting ignite inside my java spring boot app. I do not use ignite
> shell script to start it.
> Can I use ignite visor in such case to connect to my ignite cluster?
>
> What all the minimum required scrips do I need to use ignite visor?
>
>
> Thanks,
> Prasad
>


Ignite visor use

2020-06-08 Thread Prasad Bhalerao
Hi,
I am starting ignite inside my java spring boot app. I do not use ignite
shell script to start it.
Can I use ignite visor in such case to connect to my ignite cluster?

What all the minimum required scrips do I need to use ignite visor?


Thanks,
Prasad


Re: CountDownLatch issue in Ignite 2.6 version

2020-06-08 Thread Prasad Bhalerao
Hi,
I was under the impression that the countdown latch is implemented in a
replicated cache, so when any number of nodes goes down it does not lose
its state.

Can you please explain why atomic data structures use only 1 backup when
their state is so important?

Can we force atomic data structures to use a replicated cache?

Which cache does ignite use to store atomic data structures?

Thanks
Prasad

On Mon 8 Jun, 2020, 11:58 PM Evgenii Zhuravlev wrote:
> Hi,
>
> By default, the cache that stores all atomic structures has only 1 backup,
> so after losing all data for this particular latch, the latch is recreated.
> To change the default atomic configuration, use
> IgniteConfiguration.setAtomicConfiguration.
>
> Evgenii
>
> Sat, Jun 6, 2020 at 06:20, Akash Shinde :
>
>> *Issue:* The countdown latch gets reinitialized to its original value (4)
>> when one or more (but not all) nodes go down. *(Partition loss happened)*
>>
>> We are using Ignite's distributed countdown latch to make sure that cache
>> loading is completed on all server nodes. We do this to make sure that our
>> Kafka consumers start only after cache loading is complete on all server
>> nodes. This is the basic criterion which needs to be fulfilled before
>> actual processing starts.
>>
>>
>>  We have 4 server nodes and the countdown latch is initialized to 4. We use
>> the cache.loadCache method to start the cache loading. When each server
>> completes cache loading it reduces the count by 1 using the countDown
>> method. So when all the nodes complete cache loading, the count reaches
>> zero. When the count reaches zero we start Kafka consumers on all server
>> nodes.
>>
>>  But we saw weird behavior in the prod env. 3 server nodes were shut down
>> at the same time, but 1 node was still alive. When this happened the count
>> was reinitialized to its original value, i.e. 4. I am not able to
>> reproduce this in the dev env.
>>
>>  Is this a bug: when one or more (but not all) nodes go down, the count
>> reinitializes back to its original value?
>>
>> Thanks,
>> Akash
>>
>


Re: Do I need Map entry classes in Ignite's classpath?

2020-06-08 Thread Andrew Munn
How can you update those k,v classes?  Do you have to take the cluster down
or can you do a rolling upgrade somehow?

On Mon, Jun 8, 2020 at 7:37 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> If these classes are used as key or value types, then yes. Key/value types
> are not peer loaded. Otherwise, you just need to enable peer class loading.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Jun 8, 2020 at 05:05, Andrew Munn :
>
>> Is there ever any reason for the classes of Objects being put in the Map
>> to be in the classpath for Ignite?  or does the cluster always handle that
>> automatically?
>>
>> Thanks,
>> Andrew
>>
>


Can you pass host name to ignite visor?

2020-06-08 Thread Andrew Munn
Is there any way to pass the cluster address to ignitevisorcmd.sh on the
command line or must it be set in the config.xml file used?


Re: Zookeeper discovery in mix environments.

2020-06-08 Thread akorensh
Limit your env to two nodes.
Try using --net=host for the docker containers. If that works, limit it by
using setLocalHost for TcpCommunicationSpi, and configure ZooKeeper to use
address translation or bind it to a specific address.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What does: Failed to send DHT near response mean?

2020-06-08 Thread akorensh
You can use setLocalHost to bind a node to one address only.
see:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setLocalHost-java.lang.String-
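
A minimal sketch of both options mentioned in this digest, assuming
programmatic configuration (the address is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class SingleAddressNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Bind discovery and communication for the whole node to a single address...
        cfg.setLocalHost("10.0.0.5");

        // ...or narrow it down for the communication SPI only.
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setLocalAddress("10.0.0.5");
        cfg.setCommunicationSpi(commSpi);

        Ignite ignite = Ignition.start(cfg);
    }
}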





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Countdown latch issue with 2.6.0

2020-06-08 Thread Alexandr Shapkin
Hi,

Have you tried to reproduce this behaviour with the latest release - 2.8.1?
If I remember correctly, there were some fixes regarding partition
counter mismatches, for example:
https://issues.apache.org/jira/browse/IGNITE-10078



-
Alex Shapkin
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Messages being Missed on Node Start

2020-06-08 Thread Alexandr Shapkin
Hi,

Yes, it seems that https://issues.apache.org/jira/browse/IGNITE-1410 is not
resolved, unfortunately.

Is it possible to listen for predefined events like
LifecycleEventType.AFTER_NODE_START or EVT_CLUSTER_ACTIVATED?

Alternatively, you might want to utilize Ignite data structures, like
AtomicLong to signal other nodes that the client is ready for processing.
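
A minimal sketch of the AtomicLong idea, assuming each node bumps a shared
counter when it is ready and other nodes poll it (the counter name and the
expected-node count are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;

public class ReadinessSignal {
    private static final String COUNTER = "nodesReady";

    // Called by a node once its own startup work is done.
    static void markReady(Ignite ignite) {
        IgniteAtomicLong ready = ignite.atomicLong(COUNTER, 0, true);
        ready.incrementAndGet();
    }

    // Checked (or polled) by nodes that wait for the others.
    static boolean allReady(Ignite ignite, int expectedNodes) {
        IgniteAtomicLong ready = ignite.atomicLong(COUNTER, 0, true);
        return ready.get() >= expectedNodes;
    }
}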

If none of the above suggestions works for you, can you provide an example
of why you need to register a custom event, and what you are trying to
achieve with it?



-
Alex Shapkin
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Question regarding topology

2020-06-08 Thread Alexandr Shapkin
Hello!

It's not clear whether the code snippet came from a client or a server. If
it's a client, then I'd recommend you try sending the keys and additional
data directly to a server node and performing the insertion directly on that
node. The Compute API could be an option here [1].

You might also want to check a thread dump or JFR to see where the possible
starvation is happening. It's also useful to check the resource utilization
prior to adding additional nodes.

[1] - https://apacheignite.readme.io/docs/compute-grid#ignitecompute
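
A minimal sketch of the Compute API option, assuming affinityRun is used to
run the insertion on the node that owns the key (cache name, key and value
types are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CollocatedInsert {
    static void insert(Ignite ignite, int key, String payload) {
        // The closure is executed on the primary node for the key, so the write
        // and any additional preparation happen next to the data.
        ignite.compute().affinityRun("myCache", key, () -> {
            Ignite local = Ignition.localIgnite(); // Ignite instance on the executing node
            IgniteCache<Integer, String> cache = local.cache("myCache");
            cache.put(key, payload);
        });
    }
}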



-
Alex Shapkin
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Zookeeper discovery in mix environments.

2020-06-08 Thread John Smith
I think the problem might also be that the client is running inside DC/OS
with docker containers. When I check the network topology, all connected
applications are reporting multiple addresses...
Here is an example of a single client... Is it possible to tell it to bind to
only one, or to use only one?

[visor topology table, reformatted: client node @n1 (Linux amd64 3.10.0-862.14.4.el7.x86_64) reporting the addresses xxx.xxx.0.17, xxx..0.1, xxx.xxx.100.1, xxx.xxx.100.2, 127.0.0.1, xxx.xxx.xxx.68, xxx.xxx.0.1, xxx.xxx.xxx.129, xxx.xxx.100.3]

On Mon, 8 Jun 2020 at 17:02, akorensh  wrote:

> In terms of TcpCommunicationSPI it might help:
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalAddress-java.lang.String-
>
> Also look into setting the localaddress:
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalAddress-java.lang.String-
>
> Doc for tcp communication spi:
> https://apacheignite.readme.io/docs/network-config#tcpcommunicationspi
>
>
>
> In your case, discovery communication is using ZooKeeper to exchange
> messages, and therefore you would need to configure ZooKeeper if you would
> like address resolution.
>
> The Ignite class ZookeeperClient is a wrapper around the native ZooKeeper
> class which handles all communication.
>
> ZookeeperClient wrapper class:
>
> https://github.com/apache/ignite/blob/master/modules/zookeeper/src/main/java/org/apache/ignite/spi/discovery/zk/internal/ZookeeperClient.java
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: What does: Failed to send DHT near response mean?

2020-06-08 Thread John Smith
It is. It seems to only happen when I scale the client. Just curious: is it
possible that the client is bound to a lot of addresses and that this is
causing communication issues?
When I look at the topology through ignite visor I see the client has
multiple possible addresses. Is it possible to tell the client to bind to or
use only one address?


[visor topology table, reformatted: client node @n1 (Linux amd64 3.10.0-862.14.4.el7.x86_64) reporting the addresses xxx.xxx.0.17, xxx..0.1, xxx.xxx.100.1, xxx.xxx.100.2, 127.0.0.1, xxx.xxx.xxx.68, xxx.xxx.0.1, xxx.xxx.xxx.129, xxx.xxx.100.3]

On Mon, 8 Jun 2020 at 17:14, akorensh  wrote:

> Hello,
>   It means that a node was not able to exchange message w/another node
> because of network or firewall issues. It lists the node in question inside
> the message.
>
>   Check to make sure that the node is available and is in working order.
>
>
> *Failed to send DHT near response *[futId=16392, nearFutId=2,
> node=*379b18e9-a03b-4956-9184-8c514924af2a*, res=GridDhtAtomicNearResponse
> [partId=522, futId=2, primaryId=6113a59a-b087-4e70-8541-d050eea8c934,
> errs=null,
> flags=]]","stackTrace":"org.apache.ignite.IgniteCheckedException:
> Failed to send message (node may have left the grid or TCP connection
> cannot
> be established due to firewall issues) [*node=TcpDiscoveryNode
> [id=379b18e9-a03b-4956-9184-8c514924af2a, addrs=[127.0.0.1, xxx.xxx.xxx.7],
> sockAddrs=[/127.0.0.1:0, /xxx.xxx.xxx.7:0*]
>
> Thanks, Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite memory memory-architecture with cache partitioned mode

2020-06-08 Thread Alexandr Shapkin
I just wanted to highlight that there is the Partition Awareness feature
available for thin clients.
When enabled, a client knows to which node a partition belongs and sends
requests directly to that node.

[1] -
https://apacheignite-net.readme.io/docs/thin-client#partition-awareness
[2] -
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
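
A minimal Java thin client sketch of the feature, assuming Ignite 2.8+ where
ClientConfiguration#setPartitionAwarenessEnabled is available (addresses and
cache name are illustrative):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class PartitionAwareThinClient {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("node1:10800", "node2:10800", "node3:10800")
            .setPartitionAwarenessEnabled(true);

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");

            // With partition awareness enabled, this get() is routed straight to
            // the node that owns the partition of key 42.
            String val = cache.get(42);
            System.out.println(val);
        }
    }
}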



-
Alex Shapkin
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What does: Failed to send DHT near response mean?

2020-06-08 Thread akorensh
Hello,
  It means that a node was not able to exchange message w/another node
because of network or firewall issues. It lists the node in question inside
the message.

  Check to make sure that the node is available and is in working order.
  

*Failed to send DHT near response *[futId=16392, nearFutId=2,
node=*379b18e9-a03b-4956-9184-8c514924af2a*, res=GridDhtAtomicNearResponse
[partId=522, futId=2, primaryId=6113a59a-b087-4e70-8541-d050eea8c934,
errs=null, flags=]]","stackTrace":"org.apache.ignite.IgniteCheckedException:
Failed to send message (node may have left the grid or TCP connection cannot
be established due to firewall issues) [*node=TcpDiscoveryNode
[id=379b18e9-a03b-4956-9184-8c514924af2a, addrs=[127.0.0.1, xxx.xxx.xxx.7],
sockAddrs=[/127.0.0.1:0, /xxx.xxx.xxx.7:0*]

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Zookeeper discovery in mix environments.

2020-06-08 Thread akorensh
In terms of TcpCommunicationSPI it might help:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalAddress-java.lang.String-

Also look into setting the localaddress: 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalAddress-java.lang.String-

Doc for tcp communication spi:
https://apacheignite.readme.io/docs/network-config#tcpcommunicationspi



In your case, discovery communication is using ZooKeeper to exchange
messages, and therefore you would need to configure ZooKeeper if you would
like address resolution.

The Ignite class ZookeeperClient is a wrapper around the native ZooKeeper
class which handles all communication.

ZookeeperClient wrapper class:
https://github.com/apache/ignite/blob/master/modules/zookeeper/src/main/java/org/apache/ignite/spi/discovery/zk/internal/ZookeeperClient.java







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Zookeeper discovery in mix environments.

2020-06-08 Thread John Smith
Would an address resolver work here with Zookeeper discovery? I think most of
my communication errors are due to my clients binding to multiple addresses.


What does: Failed to send DHT near response mean?

2020-06-08 Thread John Smith
Hi, I got the below errors on my client node. CLIENT=TRUE

I can see that it cannot communicate, but why is it trying to communicate
with 127.0.0.1? This client is running inside a docker container and the
cache nodes are running on dedicated VMs.

{"appTimestamp":"2020-06-08T19:26:46.209+00:00","threadName":"sys-stripe-2-#3%foo%","level":"WARN","loggerName":"org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi","message":"Connect
timed out (consider increasing 'failureDetectionTimeout' configuration
property) [addr=/xxx.xxx.xxx.7:47100, failureDetectionTimeout=1]"}
{"appTimestamp":"2020-06-08T19:26:46.218+00:00","threadName":"sys-stripe-2-#3%foo%","level":"ERROR","loggerName":"org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi","message":"Failed
to send message to remote node [node=TcpDiscoveryNode
[id=379b18e9-a03b-4956-9184-8c514924af2a, addrs=[127.0.0.1, xxx.xxx.xxx.7],
sockAddrs=[/127.0.0.1:0, /xxx.xxx.xxx.7:0], discPort=0, order=1184,
intOrder=597, lastExchangeTime=1591643806376, loc=false,
ver=2.7.0#20181130-sha1:256ae401, isClient=true], msg=GridIoMessage [plc=2,
topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0,
skipOnTimeout=false, msg=GridDhtAtomicNearResponse [partId=522, futId=2,
primaryId=6113a59a-b087-4e70-8541-d050eea8c934, errs=null,
flags=]]]","stackTrace":"org.apache.ignite.IgniteCheckedException: Failed
to connect to node (is node still alive?). Make sure that each ComputeTask
and cache Transaction has a timeout set in order to prevent parties from
waiting forever in case of network issues
[nodeId=379b18e9-a03b-4956-9184-8c514924af2a, addrs=[/xxx.xxx.xxx.7:47100,
/127.0.0.1:47100]]\n\tat
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3459)\n\tat
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2987)\n\tat
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2870)\n\tat
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2713)\n\tat
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2672)\n\tat
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1656)\n\tat
org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1731)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1170)\n\tat
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.sendDhtNearResponse(GridDhtAtomicCache.java:3492)\n\tat
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processDhtAtomicUpdateRequest(GridDhtAtomicCache.java:3364)\n\tat
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$600(GridDhtAtomicCache.java:135)\n\tat
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$7.apply(GridDhtAtomicCache.java:309)\n\tat
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$7.apply(GridDhtAtomicCache.java:304)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:380)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:306)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:101)\n\tat
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:295)\n\tat
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)\n\tat
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)\n\tat
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)\n\tat
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)\n\tat
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)\n\tat
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)\n\tat
java.lang.Thread.run(Thread.java:748)\n\tSuppressed:
org.apache.ignite.IgniteCheckedException: Failed to connect to address
[addr=/xxx.xxx.xxx.7:47100, err=Host is unreachable]\n\t\tat
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3462)\n\t\t...
25 common frames omitted\n\tCaused by: java.net.NoRouteToHostException:
Host is unreachable\n\t\tat
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)\n\t\tat

Re: embedded jetty & ignite

2020-06-08 Thread Denis Magda
Clay,

This 3-year-old blog series talks about a recommended architecture for
deployments with the Ignite service grid:

   - Part 1:
   https://dzone.com/articles/running-microservices-on-top-of-in-memory-data-gri
   - Part 2:
   
https://dzone.com/articles/running-microservices-on-top-of-in-memory-data-gri-1
   - Part 3:
   https://dzone.com/articles/microservices-on-top-of-an-in-memory-data-grid-par


That was before Kubernetes, Spring Boot REST, Quarkus, etc. became de-facto
standards for microservices architectures.

-
Denis


On Mon, Jun 8, 2020 at 12:08 PM Denis Magda  wrote:

> Clay,
>
> I assume embedding jetty as an ignite service, i.e., as an ignite server
>> node, is not desirable (still don't know the reason).
>
>
> Let's suppose that your cluster has 3 server/data nodes with application
> records distributed evenly - so that each node keeps ~33% of the whole data
> set. Next, if you deploy Jetty as an ignite service on those 3 nodes and
> assume that all application records are accessed frequently, then for ~33%
> of requests the service will serve local data from its own node, while for
> the remaining ~66% of requests it will be getting data from the other 2
> remote nodes. So, the performance advantages of this deployment strategy
> are insignificant (unless all the data is fully replicated).
>
> Also, if an Ignite service is deployed on a server node, then each time
> you need to update service logic you'll restart the server node and trigger
> cluster rebalancing that might be a time-consuming operation depending on
> data size. This might be a more prominent implication that you would not
> want to have in the production environment.
>
>
>
> -
> Denis
>
>
> On Sun, Jun 7, 2020 at 3:04 AM Clay Teahouse 
> wrote:
>
>> Denis -- Thank you for the recommendation.
>> From what I read, embedded ignite servers have implications for
>> scalability and write performance, and I cannot afford either.
>> I assume embedding jetty as an ignite service, i.e., as an ignite server
>> node, is not desirable (I still don't know the reason).
>> Yes, ultimately my goal is to deploy the ignite services and data nodes
>> as microservices.
>>
>> On Sat, Jun 6, 2020 at 11:25 PM Denis Magda  wrote:
>>
>>> Clay,
>>>
>>> Probably, such frameworks as Quarkus, Spring Boot, Micronaut would work
>>> as a better foundation for your microservices. As you know, those already
>>> go with embedded REST servers and you can always use Ignite clients to
>>> reach out to the cluster.
>>>
>>> Usually, Ignite servers are deployed in the embedded mode when you're
>>> dealing with ultra-low latency use case or doing web-sessions clustering:
>>>
>>> https://www.gridgain.com/docs/latest/installation-guide/deployment-modes#embedded-deployment
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Sat, Jun 6, 2020 at 9:03 AM Clay Teahouse 
>>> wrote:
>>>
 Hi Denis -- My main reason for embedding jetty as an ignite service was to
 have ignite manage the jetty instance, the same as it does for any other
 kind of service.

 On Thu, Jun 4, 2020 at 3:30 PM Denis Magda  wrote:

> Clay,
>
> Do you have any specific requirements in mind for the ignite service +
> jetty deployment? If possible, please tell us a bit more about your
> application.
>
> Generally, I would deploy Jetty separately and use load balancers when
> several instances of an application are needed.
>
> -
> Denis
>
>
> On Wed, Jun 3, 2020 at 3:20 PM Clay Teahouse 
> wrote:
>
>> Thank you, Denis. I'll research this topic further.
>>
>> Any recommendation for/against using jetty as an embedded servlet
>> container, in this case, say, deployed as an ignite service?
>>
>> On Fri, May 29, 2020 at 11:22 PM Denis Magda 
>> wrote:
>>
>>> Clay,
>>>
>>> Just start your Jetty server and deploy as many instances of your
>>> web app as needed. Inside the logic of those apps start Ignite server 
>>> nodes
>>> instances. Then, refer to this documentation page for session clustering
>>> configuration:
>>> https://apacheignite-mix.readme.io/docs/web-session-clustering
>>>
>>> Also, there were many related questions related to this topic. Try
>>> to search for specific by googling for "session clustering with ignite 
>>> and
>>> jetty".
>>>
>>> Let us know if further help is needed.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, May 29, 2020 at 6:57 PM Clay Teahouse <
>>> clayteaho...@gmail.com> wrote:
>>>
 thank you Denis.
 If I want to go with the first option, how would I deploy jetty as
 embedded server? Do I deploy it as an ignite service?
 How would I do session clustering in this case?

 On Fri, May 29, 2020 at 3:18 PM Denis Magda 
 wrote:

> Hi Clay,
>
> I wouldn't suggest using Ignite's Jetty instance for the
> 

Re: embedded jetty & ignite

2020-06-08 Thread Denis Magda
Clay,

I assume embedding jetty as an ignite service, i.e., as an ignite server
> node, is not desirable (still don't know the reason).


Let's suppose that your cluster has 3 server/data nodes with application
records distributed evenly - so that each node keeps ~33% of the whole data
set. Next, if you deploy Jetty as an ignite service on those 3 nodes and
assume that all application records are accessed frequently, then for ~33%
of requests the service will serve local data from its own node, while for
the remaining ~66% of requests it will be getting data from the other 2
remote nodes. So, the performance advantages of this deployment strategy are
insignificant (unless all the data is fully replicated).

Also, if an Ignite service is deployed on a server node, then each time you
need to update service logic you'll restart the server node and trigger
cluster rebalancing that might be a time-consuming operation depending on
data size. This might be a more prominent implication that you would not
want to have in the production environment.



-
Denis


On Sun, Jun 7, 2020 at 3:04 AM Clay Teahouse  wrote:

> Denis -- Thank you for the recommendation.
> From what I read, embedded ignite servers have implications for
> scalability and write performance, and I cannot afford either.
> I assume embedding jetty as an ignite service, i.e., as an ignite server
> node, is not desirable (I still don't know the reason).
> Yes, ultimately my goal is to deploy the ignite services and data nodes as
> microservices.
>
> On Sat, Jun 6, 2020 at 11:25 PM Denis Magda  wrote:
>
>> Clay,
>>
>> Probably, such frameworks as Quarkus, Spring Boot, Micronaut would work
>> as a better foundation for your microservices. As you know, those already
>> go with embedded REST servers and you can always use Ignite clients to
>> reach out to the cluster.
>>
>> Usually, Ignite servers are deployed in the embedded mode when you're
>> dealing with ultra-low latency use case or doing web-sessions clustering:
>>
>> https://www.gridgain.com/docs/latest/installation-guide/deployment-modes#embedded-deployment
>>
>>
>> -
>> Denis
>>
>>
>> On Sat, Jun 6, 2020 at 9:03 AM Clay Teahouse 
>> wrote:
>>
>>> Hi Denis -- My main reason for embedding jetty as an ignite service was
>>> to have ignite manage the jetty instance, the same as it does for any
>>> other kind of service.
>>>
>>> On Thu, Jun 4, 2020 at 3:30 PM Denis Magda  wrote:
>>>
 Clay,

 Do you have any specific requirements in mind for the ignite service +
 jetty deployment? If possible, please tell us a bit more about your
 application.

 Generally, I would deploy Jetty separately and use load balancers when
 several instances of an application are needed.

 -
 Denis


 On Wed, Jun 3, 2020 at 3:20 PM Clay Teahouse 
 wrote:

> Thank you, Denis. I'll research this topic further.
>
> Any recommendation for/against using jetty as an embedded servlet
> container, in this case, say, deployed as an ignite service?
>
> On Fri, May 29, 2020 at 11:22 PM Denis Magda 
> wrote:
>
>> Clay,
>>
>> Just start your Jetty server and deploy as many instances of your web
>> app as needed. Inside the logic of those apps start Ignite server nodes
>> instances. Then, refer to this documentation page for session clustering
>> configuration:
>> https://apacheignite-mix.readme.io/docs/web-session-clustering
>>
>> Also, there were many related questions related to this topic. Try to
>> search for specific by googling for "session clustering with ignite and
>> jetty".
>>
>> Let us know if further help is needed.
>>
>>
>> -
>> Denis
>>
>>
>> On Fri, May 29, 2020 at 6:57 PM Clay Teahouse 
>> wrote:
>>
>>> thank you Denis.
>>> If I want to go with the first option, how would I deploy jetty as
>>> embedded server? Do I deploy it as an ignite service?
>>> How would I do session clustering in this case?
>>>
>>> On Fri, May 29, 2020 at 3:18 PM Denis Magda 
>>> wrote:
>>>
 Hi Clay,

 I wouldn't suggest using Ignite's Jetty instance for the deployment
 of your services. Ignite's Jetty primary function is to handle REST
 requests specific to Ignite:
 https://apacheignite.readme.io/docs/rest-api

 Instead, deploy and manage your restful services separately. Then,
 if the goal is to do a web session clustering, deploy Ignite server 
 nodes
 in the embedded mode making the sessions' caches replicated. Otherwise,
 deploy the server nodes independently and reach the cluster out from 
 the
 restful services using existing Ignite APIs. This tutorial shows how 
 to do
 the latter with Spring Boot:
 https://www.gridgain.com/docs/tutorials/spring/spring_ignite_tutorial

 -
 Denis



Re: CountDownLatch issue in Ignite 2.6 version

2020-06-08 Thread Evgenii Zhuravlev
Hi,

By default, the cache that stores all atomic structures has only 1 backup, so
after losing all data for this particular latch, the latch is recreated. To
change the default atomic configuration, use
IgniteConfiguration.setAtomicConfiguration.

Evgenii
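
A minimal sketch of the AtomicConfiguration change mentioned above, assuming
the latch state should survive node failures (the backup count and cache mode
shown are illustrative choices, not defaults):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AtomicConfigStartup {
    public static void main(String[] args) {
        AtomicConfiguration atomicCfg = new AtomicConfiguration();

        // Either raise the number of backups for the internal atomics cache...
        atomicCfg.setBackups(3);
        // ...or make it fully replicated; with REPLICATED mode the backups setting is moot.
        atomicCfg.setCacheMode(CacheMode.REPLICATED);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setAtomicConfiguration(atomicCfg);

        Ignite ignite = Ignition.start(cfg);
    }
}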

Sat, Jun 6, 2020 at 06:20, Akash Shinde :

> *Issue:* The countdown latch gets reinitialized to its original value (4)
> when one or more (but not all) nodes go down. *(Partition loss happened)*
>
> We are using Ignite's distributed countdown latch to make sure that cache
> loading is completed on all server nodes. We do this to make sure that our
> Kafka consumers start only after cache loading is complete on all server
> nodes. This is the basic criterion which needs to be fulfilled before
> actual processing starts.
>
>
>  We have 4 server nodes and the countdown latch is initialized to 4. We use
> the cache.loadCache method to start the cache loading. When each server
> completes cache loading it reduces the count by 1 using the countDown
> method. So when all the nodes complete cache loading, the count reaches
> zero. When the count reaches zero we start Kafka consumers on all server
> nodes.
>
>  But we saw weird behavior in the prod env. 3 server nodes were shut down
> at the same time, but 1 node was still alive. When this happened the count
> was reinitialized to its original value, i.e. 4. I am not able to
> reproduce this in the dev env.
>
>  Is this a bug: when one or more (but not all) nodes go down, the count
> reinitializes back to its original value?
>
> Thanks,
> Akash
>


Re: Non Distributed Join between tables

2020-06-08 Thread manueltg89
Hello Ilya!

Thanks for your response. I've created a new project and it seems that it now
works correctly; I must have had a problem before. But I have another doubt
about the REPLICATED cache. I think that all nodes must have the same data in
this mode, is that true? With the online tool of Apache Ignite I run a query
against a selected node (I have two nodes); on node1 it works perfectly but on
node2 it returns empty results. Should it return the same results?

Thanks in advance.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node is unable to join cluster because it has destroyed caches

2020-06-08 Thread Ilya Kasnacheev
Hello!

It was disabled not just because of potential data loss but because the
cache was resurrected on such a start and could break the cluster.

Creating a cache per operation and destroying it afterwards is an
anti-pattern; it can cause all sorts of issues and is better avoided.

Regards,
-- 
Ilya Kasnacheev


Thu, Jun 4, 2020 at 01:26, xero :

> Hi,
> I tried your suggestion of using a NodeFilter but it is not solving this
> issue. Using a NodeFilter by consistent-id in order to create the cache on
> only one node is creating persistence information on every node:
>
> In the node for which the filter is true (directory size 75MB):
>
> //work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a//cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/
>
> In the node for which the filter is false  (directory size 8k):
>
> //work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a//cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/
>
> If the cache is destroyed while *any* of these nodes is down, it won't join
> the cluster again throwing the exception:
> /Caused by: class org.apache.ignite.spi.IgniteSpiException: Joining node
> has
> caches with data which are not presented on cluster, it could mean that
> they
> were already destroyed, to add the node to cluster - remove directories
> with
> the caches[tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c]/
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node log file setup

2020-06-08 Thread Ilya Kasnacheev
Hello!

${sys:} can use any system property which exists in the JVM.

If you put your gridName (or consistentId) into a system property, you can
refer to it from both the Ignite configuration (xml or otherwise) and the
log4j2 configuration.

Regards,
-- 
Ilya Kasnacheev
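
A minimal sketch of that approach, assuming the node is started from Java
(the property name and instance name are illustrative; with ignite.sh the
same property can be passed on the command line via -J-D):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NamedNodeStartup {
    public static void main(String[] args) {
        // Put the instance name into a system property before Ignite (and log4j2)
        // start, so a single log4j2.xml can reference it, e.g. with
        // fileName="${sys:IGNITE_HOME}/work/log/${sys:igniteInstanceName}.log".
        System.setProperty("igniteInstanceName", "cache-node-01");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName(System.getProperty("igniteInstanceName"));

        Ignite ignite = Ignition.start(cfg);
    }
}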


Mon, Jun 8, 2020 at 04:55, kay :

> Hello, Thank you for reply.
>
> I know that I can put the 'cache-node-01' into the filename.
>
> But I wanna use only 1 log4j2.xml.
>
> If I use   fileName="${sys:IGNITE_HOME}/work/log/cache-node-01-${sys:nodeId}.log"
>
> like this, I should make log4j2.xml file each node.
> So I asked a question Is there any way to get gridName parameter in
> log4j2.xml like '${sys:nodeId}'
>
> ex) ${sys:gridName} or ${sys:igniteName}
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Received metrics from unknown node

2020-06-08 Thread Ilya Kasnacheev
Hello!

Metrics were sent when the node was already segmented by the cluster, hence
"unknown".

Why it did not reconnect, I do not know. Did you collect a thread dump of
this client node by chance?

Regards,
-- 
Ilya Kasnacheev


Mon, Jun 8, 2020 at 09:53, VeenaMithare :

> Hi all,
>
> Yesterday we had some network issues and we observed the server log having
> messages like :
> 2020-06-07T21:46:35,368 DEBUG c.c.p.c.c.p.d.CustomTcpDiscoverySpi
> [tcp-disco-msg-worker-#2]: Received metrics from unknown node:
> 8baf933f-e2cc-43be-818c-de2fb1259194
>
> Once we figured which client node this consistent id belonged to, we saw
> this message on the client node logs. Please note this is the last 'ignite'
> message in the logs on the client node :
>
> 2020-06-07T16:06:32,961 WARN  c.c.p.c.c.p.d.CustomTcpDiscoverySpi
> [tcp-client-disco-msg-worker-#4%instancename%]: Local node was dropped from
> cluster due to network problems, will try to reconnect with new id after
> 1ms (reconnect delay can be changed using
> IGNITE_DISCO_FAILED_CLIENT_RECONNECT_DELAY system property)
> [newId=8baf933f-e2cc-43be-818c-de2fb1259194,
> prevId=d7674f40-6112-46a6-83f8-15656b01c66b, locNode=TcpDiscoveryNode
> [id=d7674f40-6112-46a6-83f8-15656b01c66b, addrs=[0:0:0:0:0:0:0:1%lo,
> a.b.c.d, 127.0.0.1, 192.168.61.150], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0,
> /127.0.0.1:0, hostname.companyname.local/a.b.c.d:0,
> multicast/192.168.61.150:0], discPort=0, order=193, intOrder=0,
> lastExchangeTime=1591527983474, loc=true, ver=2.7.6#20190911-sha1:21f7ca41,
> isClient=true], nodeInitiatedFail=c67403fd-812b-46b4-9c76-60f5052b57d7,
> msg=Client node considered as unreachable and will be dropped from cluster,
> because no metrics update messages received in interval:
> TcpDiscoverySpi.clientFailureDetectionTimeout() ms. It may be caused by
> network problems or long GC pause on client node, try to increase this
> parameter. [nodeId=d7674f40-6112-46a6-83f8-15656b01c66b,
> clientFailureDetectionTimeout=3]]
>
>
> My questions are below :
>
> 1. If there are no metrics received, should the node not be segmented?
> 2. This node did not get segmented. It took on a new node id. But was the
> node id not registered in the cluster? (As per the code in updateMetrics in
> ServerImpl.java - snapshot below.)
>
> 3. How do we monitor the cluster topology for these kind of scenarios ?
>
>
> ===
>
> private void updateMetrics(UUID nodeId,
> ClusterMetrics metrics,
> Map<Integer, CacheMetrics> cacheMetrics,
> long tsNanos)
> {
> assert nodeId != null;
> assert metrics != null;
>
> TcpDiscoveryNode node = ring.node(nodeId);
>
> if (node != null) {
>   ..
>
> }
> else if (log.isDebugEnabled())
> log.debug("Received metrics from unknown node: " + nodeId);
> }
> }
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Non Distributed Join between tables

2020-06-08 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer SQL script to observe that?

Regards,
-- 
Ilya Kasnacheev


Mon, Jun 8, 2020 at 10:46, manueltg89 :

> I have three tables in Apache Ignite; each table has an affinity key to
> another table. When I make a join between tables with direct relations this
> works perfectly, but if I make a non-distributed join between three tables
> it returns empty results. Is this behaviour normal? Could I do it in
> another way?
>
> tbl_a          tbl_b          tbl_c
> -----          -----          -----
> aff_b                         aff_b
>
> When I make: select * from tbl_a INNER JOIN tbl_b ON tbl_a.id = tbl_a.fk_id
> = tbl_b.id -> All OK
>
> When I make: select * from tbl_a INNER JOIN tbl_b ON tbl_a.id = tbl_a.fk_id
> = tbl_b.id INNER JOIN tbl_c.fk_id = tbl_b.id -> Return empty results.
>
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite SqlFieldQuery Unicode Characters Support

2020-06-08 Thread Ilya Kasnacheev
Hello!

I have met an error like this one, and my recommendation is that on all
nodes, client and server, you should set file.encoding to the same value
(UTF-8 usually, however CJK multi-byte encodings should also work).

Make sure all of your JVMs are launched with -Dfile.encoding=UTF-8.
ignite.sh should do that already. Make sure you are using the same on
Windows and Linux.

Regards,
-- 
Ilya Kasnacheev
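
A tiny sanity check that can be run on every JVM involved (client and server)
to confirm the setting took effect; purely illustrative:

import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        System.out.println("file.encoding   = " + System.getProperty("file.encoding"));
        System.out.println("default charset = " + Charset.defaultCharset());
    }
}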


Fri, Jun 5, 2020 at 14:31, Ravi Makwana :

> Hi,
>
> We are using Apache Ignite version 2.7 and we have a one-node cluster with
> 150 caches; out of the 150 we have 1 cache which contains Unicode characters
> such as the Chinese language.
>
> Server node is running on Linux OS & Client node is running on Windows OS.
>
> Client side we are using sqlquery & sqlfieldquery to fetch and put &
> data-streamer to insert data using .net ignite apis.
>
> In ignite cache,we have AddressFull property which contains Unicode
> characters like Chinese language so while inserting we have inserted
> specified data into AddressFull property which is mentioned below.
>
> *"北京市朝阳区安贞西里四区甲1号, 北京, 中国, 100011"*
>
> While fetching data from ignite cache we have observed that we are not
> getting inserted data properly.
>
> We have performed below mentioned sqlquery using SqlFieldQuery.
>
> SELECT HotelCode,AddressFull FROM HotelOtherLanguageMasterModel WHERE
> HotelCode = '12345'
>
> We got below mentioned result data for AddressFull property.
>
> *"北京市�阳区安贞西里四区甲1�, 北京, 中国, 100011"*
>
> Based on this test we have couple of questions on the same.
>
> 1) Is SqlFieldQuery .net api supporting Unicode characters?
> 2) If Yes, How do we overcome this issue?
>
> Regards,
> Ravi Makwana
>


Re: Index usage on Joins

2020-06-08 Thread Ilya Kasnacheev
Hello!

This sounds like a good application point of *enforceJoinOrder=true*.
Consider:
3: jdbc:ignite:thin://localhost> !connect
jdbc:ignite:thin://localhost?enforceJoinOrder=true
4: jdbc:ignite:thin://localhost>
4: jdbc:ignite:thin://localhost> EXPLAIN SELECT f.Date_key,
. . . . . . . . . . . . . . . .> loc.Location_name,
. . . . . . . . . . . . . . . .> SUM(f.Revenue)
. . . . . . . . . . . . . . . .>
. . . . . . . . . . . . . . . .> FROM DimensionProduct pr,
DimensionLocation loc, FactTableRevenue f
. . . . . . . . . . . . . . . .>
. . . . . . . . . . . . . . . .> WHERE pr._key = f.Product_Key AND loc._key
= f.Location_Key
. . . . . . . . . . . . . . . .> AND f.Date_Key = 20200604
. . . . . . . . . . . . . . . .> AND pr.Product_Name = 'Product 1'
. . . . . . . . . . . . . . . .> AND loc.Location_Name IN ('London',
'Paris')
. . . . . . . . . . . . . . . .>
. . . . . . . . . . . . . . . .> GROUP BY f.Date_Key, loc.Location_name;
PLAN  SELECT
F__Z2.DATE_KEY AS __C0_0,
LOC__Z1.LOCATION_NAME AS __C0_1,
SUM(F__Z2.REVENUE) AS __C0_2
FROM PUBLIC.DIMENSIONPRODUCT PR__Z0
/* PUBLIC.IX_PRODUCT_PRODUCT_NAME: PRODUCT_NAME = 'Product 1' */
/* WHERE PR__Z0.PRODUCT_NAME = 'Product 1'
*/
INNER JOIN PUBLIC.DIMENSIONLOCATION LOC__Z1
/* PUBLIC.IX_LOCATION_LOCATION_NAME: LOCATION_NAME IN('London',
'Paris') */
ON 1=1
/* WHERE LOC__Z1.LOCATION_NAME IN('London', 'Paris')
*/
INNER JOIN PUBLIC.FACTTABLEREVENUE F__Z2
/* PUBLIC.IX_REVENUE_DATE_PRODUCT_LOCATION: DATE_KEY = 20200604
AND PRODUCT_KEY = PR__Z0._KEY
AND LOCATION_KEY = LOC__Z1._KEY
 */
ON 1=1
WHERE (LOC__Z1.LOCATION_NAME IN('London', 'Paris'))
AND ((PR__Z0.PRODUCT_NAME = 'Product 1')
AND ((F__Z2.DATE_KEY = 20200604)
AND ((PR__Z0._KEY = F__Z2.PRODUCT_KEY)
AND (LOC__Z1._KEY = F__Z2.LOCATION_KEY
GROUP BY F__Z2.DATE_KEY, LOC__Z1.LOCATION_NAME

PLAN  SELECT
__C0_0 AS DATE_KEY,
__C0_1 AS LOCATION_NAME,
CAST(CAST(SUM(__C0_2) AS DOUBLE) AS DOUBLE) AS __C0_2
FROM PUBLIC.__T0
/* PUBLIC."merge_scan" */
GROUP BY __C0_0, __C0_1

2 rows selected (0,011 seconds)

Is this what you wanted? First we filter pr and loc by varchar, then join
this small result to the fact table using the secondary index.

Regards,
-- 
Ilya Kasnacheev
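
The same flag can be set on a JDBC thin connection; a minimal sketch reusing
the query from this thread (host and output handling are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EnforceJoinOrderQuery {
    public static void main(String[] args) throws Exception {
        // enforceJoinOrder=true keeps the FROM order, so the small dimension filters
        // run first and the large fact table is joined through its secondary index.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://localhost?enforceJoinOrder=true");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT f.Date_Key, loc.Location_Name, SUM(f.Revenue) " +
                 "FROM DimensionProduct pr, DimensionLocation loc, FactTableRevenue f " +
                 "WHERE pr._key = f.Product_Key AND loc._key = f.Location_Key " +
                 "AND f.Date_Key = 20200604 AND pr.Product_Name = 'Product 1' " +
                 "AND loc.Location_Name IN ('London', 'Paris') " +
                 "GROUP BY f.Date_Key, loc.Location_Name")) {
            while (rs.next())
                System.out.println(rs.getInt(1) + " " + rs.getString(2) + " " + rs.getDouble(3));
        }
    }
}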


Thu, Jun 4, 2020 at 16:58, njcstreet :

> Hi,
>
> I am evaluating Ignite for a data warehouse style system which would have a
> central very large "fact" table with potentially billions of records, and
> several "dimensions" that describe the data. The fact table would be
> partitioned as it is large, and the dimensions would be replicated across
> all nodes. I am using the latest version 2.8.
>
> My question is about index usage and joins. I need to join between the fact
> table (which has the numerical transaction values), and the dimensions
> which
> describe the data (such as product / location). However it seems that
> indexes on the fact table won't be used when joining. I understand that you
> can only use one index per table in a query, so I was hoping to use a group
> index for the query against the fact table, since there are a few
> attributes
> that users will always filter on. Here is an example schema (heavily
> simplified and with little data, but enough to demonstrate that the Explain
> plan is not using the index on the Fact table)
>
> CREATE TABLE IF NOT EXISTS FactTableRevenue (
>
> id int PRIMARY KEY,
>
> date_key int,
> product_key int,
> location_key int,
> revenue float
>
> ) WITH "template=partitioned,backups=0";
>
>
>
> CREATE TABLE IF NOT EXISTS DimensionProduct (
>
> id int PRIMARY KEY,
> product_name varchar
>
> ) WITH "TEMPLATE=REPLICATED";
>
>
>
> CREATE TABLE IF NOT EXISTS DimensionLocation (
>
> id int PRIMARY KEY,
> location_name varchar
>
> )WITH "TEMPLATE=REPLICATED";
>
>
>
> CREATE INDEX ix_product_product_name ON DimensionProduct(product_name);
> CREATE INDEX ix_location_location_name ON DimensionLocation(location_name);
> CREATE INDEX ix_revenue_date_product_location ON FactTableRevenue(date_key,
> product_key, location_key);
>
>
> INSERT INTO DimensionProduct (id, product_name) VALUES (1, 'Product 1');
> INSERT INTO DimensionProduct (id, product_name) VALUES (2, 'Product 2');
> INSERT INTO DimensionProduct (id, product_name) VALUES (3, 'Product 3');
>
> INSERT INTO DimensionLocation (id, location_name) VALUES (1, 'London');
> INSERT INTO DimensionLocation (id, location_name) VALUES (2, 'Paris');
> INSERT INTO DimensionLocation (id, location_name) VALUES (3, 'New York');
>
> INSERT INTO FactTableRevenue (id, date_key, product_key, location_key,
> revenue) VALUES
> (1, 20200604, 1, 1, 500);
>
> INSERT INTO FactTableRevenue (id, date_key, product_key, location_key,
> revenue) VALUES
> (2, 20200604, 1, 2, 700);
>
> INSERT INTO FactTableRevenue (id, date_key, product_key, location_key,
> revenue) VALUES
> (3, 20200604, 1, 3, 90);
>
> INSERT INTO FactTableRevenue (id, date_key, product_key, location_key,
> revenue) VALUES
> (4, 20200604, 

Re: Data Consistency Question

2020-06-08 Thread Ilya Kasnacheev
Hello!

Why do you open the connection inside the semaphore? You are supposed to open
the connection(s) in advance and only issue updates inside the semaphore.

The whole approach is questionable (there is no sense in using thin JDBC if
you already have an Ignite node), but this one is a killer. You also seem to
open a new connection every time.

Regards,
-- 
Ilya Kasnacheev
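
A minimal sketch of the structure being suggested, assuming an IgniteSemaphore
and a thin JDBC connection (both assumptions; the names, SQL and semaphore
parameters are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteSemaphore;

public class GuardedUpdates {
    private final Connection conn;      // opened once, up front
    private final IgniteSemaphore sem;  // cluster-wide semaphore

    GuardedUpdates(Ignite ignite) throws Exception {
        conn = DriverManager.getConnection("jdbc:ignite:thin://localhost");
        sem = ignite.semaphore("updateSem", 1, true, true);
    }

    void update(int id, String value) throws Exception {
        sem.acquire();
        try {
            // Only the update runs under the semaphore; the connection already exists.
            try (PreparedStatement ps =
                     conn.prepareStatement("UPDATE my_table SET val = ? WHERE id = ?")) {
                ps.setString(1, value);
                ps.setInt(2, id);
                ps.executeUpdate();
            }
        }
        finally {
            sem.release();
        }
    }
}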


Wed, Jun 3, 2020 at 09:17, adipro :

> Can someone please help regarding this issue?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Do I need Map entry classes in Ignite's classpath?

2020-06-08 Thread Ilya Kasnacheev
Hello!

If these classes are used as key or value types, then yes. Key/value types
are not peer loaded. Otherwise, you just need to enable peer class loading.

Regards,
-- 
Ilya Kasnacheev
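
A minimal sketch of enabling peer class loading, which must be set
consistently on all nodes (the deployment mode shown is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Peer class loading covers compute closures, listeners, etc.
        // Key/value classes are NOT peer loaded: put them on the server classpath
        // or work with them as BinaryObject without deserialization.
        cfg.setPeerClassLoadingEnabled(true);
        cfg.setDeploymentMode(DeploymentMode.SHARED);

        Ignite ignite = Ignition.start(cfg);
    }
}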


Mon, Jun 8, 2020 at 05:05, Andrew Munn :

> Is there ever any reason for the classes of Objects being put in the Map
> to be in the classpath for Ignite?  or does the cluster always handle that
> automatically?
>
> Thanks,
> Andrew
>


Re: Countdown latch issue with 2.6.0

2020-06-08 Thread Akash Shinde
Hi, I have created a JIRA ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-13132

Thanks,
Akash

On Sun, Jun 7, 2020 at 9:29 AM Akash Shinde  wrote:

> Can someone please help me with this issue.
>
> On Sat, Jun 6, 2020 at 6:45 PM Akash Shinde  wrote:
>
>> Hi,
>> Issue: The countdown latch gets reinitialized to its original value (4)
>> when one or more (but not all) nodes go down. (Partition loss happened)
>>
>> We are using Ignite's distributed countdown latch to make sure that cache
>> loading is completed on all server nodes. We do this to make sure that our
>> Kafka consumers start only after cache loading is complete on all server
>> nodes. This is the basic criterion which needs to be fulfilled before
>> actual processing starts.
>>
>>
>>  We have 4 server nodes and the countdown latch is initialized to 4. We use
>> the "cache.loadCache" method to start the cache loading. When each server
>> completes cache loading it reduces the count by 1 using the countDown
>> method. So when all the nodes complete cache loading, the count reaches
>> zero. When the count reaches zero we start Kafka consumers on all server
>> nodes.
>>
>>  But we saw weird behavior in the prod env. 3 server nodes were shut down
>> at the same time, but 1 node was still alive. When this happened the count
>> was reinitialized to its original value, i.e. 4. I am not able to
>> reproduce this in the dev env.
>>
>>  Is this a bug: when one or more (but not all) nodes go down, the count
>> reinitializes back to its original value?
>>
>> Thanks,
>> Akash
>>
>
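
For reference, a minimal sketch of the latch-based startup gate described
above (the configuration path, cache name and latch name are illustrative,
not taken from the original code):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCountDownLatch;
import org.apache.ignite.Ignition;

public class CacheLoadGate {
    public static void main(String[] args) {
        // Start (or obtain) a server node; the configuration path is illustrative.
        Ignite ignite = Ignition.start("config/config-cache.xml");

        // Create or get the latch: initial count 4 = number of server nodes,
        // autoDelete = false so it is kept until explicitly closed.
        IgniteCountDownLatch latch = ignite.countDownLatch("cacheLoadLatch", 4, false, true);

        // Trigger local cache loading (cache name and loadCache arguments are illustrative).
        ignite.cache("myCache").loadCache(null);

        // Signal that this node has finished loading.
        latch.countDown();

        // Wait until all 4 nodes have counted down, then start the Kafka consumers (not shown).
        latch.await();
    }
}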


Roadmap of new distributed SQL engine based on Calcite

2020-06-08 Thread Юрий
Dear Igniters,

Many of you have heard about the start of development of a new SQL engine
based on Calcite. Currently we are only at the beginning of the road and it
will require a lot of work to achieve the goal. In order to understand where
we are, which features are already done and which of them could be started
by you, it would be better to have a consolidated resource. For this reason
I've prepared a roadmap page [1], where we can track the progress of the new
SQL engine development.

Let's start filling in the page and providing rough estimates. Dear Ignite
user community, please share your suggestions as well.

[1]
https://cwiki.apache.org/confluence/display/IGNITE/Apache+Calcite-powered+SQL+Engine+Roadmap

-- 
Live with a smile! :D


Non Distributed Join between tables

2020-06-08 Thread manueltg89
I have three tables in Apache Ignite, each table has a affinity key to other
table, when I make a join between tables with direct relations this works
perfectly, but if I make a non distributed join between three tables this
return empty, is normal this behaviour?, Could I make in another way?

tbl_a  tbl_b   tbl_c
-  --   -
aff_baff_b

When I make: select * from tbl_a INNER JOIN tbl_b ON tbl_a.id = tbl_a.fk_id
= tbl_b.id -> All OK

When I make: select * from tbl_a INNER JOIN tbl_b ON tbl_a.id = tbl_a.fk_id
= tbl_b.id INNER JOIN tbl_c.fk_id = tbl_b.id -> Return empty results.

Thanks in advance.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Received metrics from unknown node

2020-06-08 Thread VeenaMithare
Hi all, 

Yesterday we had some network issues and we observed the server log having
messages like :
2020-06-07T21:46:35,368 DEBUG c.c.p.c.c.p.d.CustomTcpDiscoverySpi
[tcp-disco-msg-worker-#2]: Received metrics from unknown node:
8baf933f-e2cc-43be-818c-de2fb1259194

Once we figured which client node this consistent id belonged to, we saw
this message on the client node logs. Please note this is the last 'ignite'
message in the logs on the client node : 

2020-06-07T16:06:32,961 WARN  c.c.p.c.c.p.d.CustomTcpDiscoverySpi
[tcp-client-disco-msg-worker-#4%instancename%]: Local node was dropped from
cluster due to network problems, will try to reconnect with new id after
1ms (reconnect delay can be changed using
IGNITE_DISCO_FAILED_CLIENT_RECONNECT_DELAY system property)
[newId=8baf933f-e2cc-43be-818c-de2fb1259194,
prevId=d7674f40-6112-46a6-83f8-15656b01c66b, locNode=TcpDiscoveryNode
[id=d7674f40-6112-46a6-83f8-15656b01c66b, addrs=[0:0:0:0:0:0:0:1%lo,
a.b.c.d, 127.0.0.1, 192.168.61.150], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0,
/127.0.0.1:0, hostname.companyname.local/a.b.c.d:0,
multicast/192.168.61.150:0], discPort=0, order=193, intOrder=0,
lastExchangeTime=1591527983474, loc=true, ver=2.7.6#20190911-sha1:21f7ca41,
isClient=true], nodeInitiatedFail=c67403fd-812b-46b4-9c76-60f5052b57d7,
msg=Client node considered as unreachable and will be dropped from cluster,
because no metrics update messages received in interval:
TcpDiscoverySpi.clientFailureDetectionTimeout() ms. It may be caused by
network problems or long GC pause on client node, try to increase this
parameter. [nodeId=d7674f40-6112-46a6-83f8-15656b01c66b,
clientFailureDetectionTimeout=3]]


My questions are below : 

1. If there are no metrics received, should the node not be segmented?
2. This node did not get segmented. It took on a new node id. But was the
node id not registered in the cluster? (As per the code in updateMetrics in
ServerImpl.java - snapshot below.)

3. How do we monitor the cluster topology for these kind of scenarios ?


===

private void updateMetrics(UUID nodeId,
ClusterMetrics metrics,
Map<Integer, CacheMetrics> cacheMetrics,
long tsNanos)
{
assert nodeId != null;
assert metrics != null;

TcpDiscoveryNode node = ring.node(nodeId);

if (node != null) {
  ..

}
else if (log.isDebugEnabled())
log.debug("Received metrics from unknown node: " + nodeId);
}
}

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/