Inserting data into Ignite with Spark JDBC

2020-10-29 Thread Humphrey
Hello guys, this question has been asked on Stack Overflow, but no answer has
been provided yet.

I'm facing the same issue (trying to insert data into Ignite using spark.jdbc):
Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for
CREATE TABLE
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)

Code:

println("-- writing using jdbc --")
val prop = Properties()
prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"

df.write().apply {
    mode(SaveMode.Overwrite)
    format("jdbc")
    option("url", "jdbc:ignite:thin://127.0.0.1")
    option("dbtable", "comments")
    option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "last_name")
}.save()

The last option doesn't seem to work/help.
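
For reference, OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS() belongs to Ignite's
native Spark DataFrame integration (the "ignite" format), not to the generic
"jdbc" data source, which is presumably why the JDBC writer ignores it. A
minimal sketch of the native-format write path in Java (the config file name
"ignite-config.xml" is only a placeholder):

import org.apache.ignite.spark.IgniteDataFrameSettings;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class IgniteNativeWriteSketch {
    // Writes the DataFrame through Ignite's native Spark integration,
    // where the primary-key option is actually honoured.
    static void writeComments(Dataset<Row> df) {
        df.write()
          .format(IgniteDataFrameSettings.FORMAT_IGNITE())
          // Ignite client configuration file (placeholder path).
          .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "ignite-config.xml")
          // Target SQL table to create/populate.
          .option(IgniteDataFrameSettings.OPTION_TABLE(), "comments")
          // Primary key columns for the CREATE TABLE issued by the integration.
          .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "last_name")
          .mode(SaveMode.Overwrite)
          .save();
    }
}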





Re: High availability of local listeners for ContinuousQuery or Events

2020-10-29 Thread Igor Belyakov
Hi,

If the node that registered a continuous query fails, the continuous
query will be undeployed from the cluster. The cluster state won't
change.

It's not good practice to put business code in a remote filter.
Could you please share more details about your use case?
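
For context, the usual split keeps the remote filter as a cheap, side-effect-free
predicate and puts the business logic in the local listener; a minimal sketch
(the cache name and filter condition are invented for illustration):

import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQuerySketch {
    public static QueryCursor<?> listen(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Remote filter: a cheap, side-effect-free predicate evaluated on the data nodes.
        qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
            @Override public CacheEntryEventFilter<Integer, String> create() {
                return new CacheEntryEventFilter<Integer, String>() {
                    @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
                        return e.getValue() != null && e.getValue().startsWith("important");
                    }
                };
            }
        });

        // Local listener: the business logic runs on the node that registered the query.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                System.out.println("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
        });

        // The query stays registered for as long as the returned cursor remains open.
        return cache.query(qry);
    }
}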

Igor

On Thu, Oct 29, 2020 at 4:46 PM 38797715 <38797...@qq.com> wrote:

> Hi community,
>
> For local listeners registered for ContinuousQuery and Events, is there
> a corresponding high availability mechanism design? That is, if the node
> registering the local listener fails, what state will the cluster be in?
>
> If we do not register a local listener, but instead write the business code
> in the remote filter and return false, is this a good practice?
>
>
>


Re: Ignite Cluster Issue on 2.7.6

2020-10-29 Thread Andrei Aleksandrov

Hi,

Do you use a cluster with persistence? After the first activation, all your
data will be located on the nodes that were part of that first activation.

In this case, you should also track your baseline topology:

https://www.gridgain.com/docs/latest/developers-guide/baseline-topology

The baseline topology is the subset of nodes where your cache data is located.

The recommendations are the following:

1) You should activate the cluster only after all server nodes have been started.
2) If the topology changes, you must either restore the failed nodes or reset
the baseline topology to trigger partition reassignment and rebalancing.
3) If some new node should contain cache data, then you should add this node to
the baseline topology (see the sketch after this list):

using Java code:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#setBaselineTopology-java.util.Collection-

using the control utility:

https://www.gridgain.com/docs/latest/administrators-guide/control-script#adding-nodes-to-baseline-topology

4) If some node from the baseline can't be started (e.g. because its data on
disk was destroyed), it should be removed from the baseline:


https://www.gridgain.com/docs/latest/administrators-guide/control-script#removing-nodes-from-baseline-topology
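
As a rough illustration of the Java route mentioned in 3), here is a sketch that
resets the baseline to the server nodes that are currently online (it assumes
the cluster is already active, so run it only once the desired topology is up):

import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;

public class BaselineSketch {
    /** Resets the baseline topology to the server nodes that are currently alive. */
    public static void resetBaselineToCurrentServers(Ignite ignite) {
        // Collect the server nodes that are currently part of the cluster.
        Collection<ClusterNode> servers = ignite.cluster().forServers().nodes();

        // Replace the baseline with exactly these nodes; partitions are then
        // reassigned and rebalanced to match the new baseline.
        ignite.cluster().setBaselineTopology(servers);
    }
}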

If you are not using persistence, please provide additional information about
what "data is being added to the cache but not available to any of the
modules" means:


1) How you access the data
2) What you see in the logs

BR,
Andrei

10/29/2020 4:19 PM, Gurmehar Kalra wrote:


Hi,

I have two modules (Web and Engine) and want to share data between the
modules, but when I run Web and Engine together, data is added to the
cache but is not available to either of the modules.

Below is my Ignite config, which is the same in both modules:

config.setActiveOnStart(true);
config.setAutoActivationEnabled(true);
config.setIgniteHome(propertyReader.getProperty("spring.ignite.storage.path"));
config.setFailureHandler(new StopNodeOrHaltFailureHandler());
config.setDataStorageConfiguration(getDataStorageConfiguration());
config.setGridLogger(new JavaLogger(java.util.logging.Logger.getLogger(LOG.getClass().getCanonicalName())));

Ignite ignite = Ignition.start(config);
ignite.cluster().active(true);

All caches created have the below properties:

cache.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
cache.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cache.setCacheMode(CacheMode.REPLICATED);
cache.setGroupName("EngineGroup");

Both modules are running on the IP list:
127.0.0.1:47501, 127.0.0.1:47502, 127.0.0.1:47503, 127.0.0.1:47504


Please suggest..

Regards,

Gurmehar Singh





Re: Large Heap with lots of BinaryMetaDataHolders

2020-10-29 Thread Andrei Aleksandrov

Hello,

Let's start from the very beginning.

1) Could you please share the server and client config?
2) Java code of what you have in your client launcher application

I will try to investigate your case.

BR,
Andrew

10/28/2020 7:19 PM, ssansoy wrote:

Hi, could anyone please help us understand why the heap of a client app has such
large amounts of data pertaining to binary metadata?

Here it takes up 30 MB, but in our UAT environment we have approx. 50 caches.
The binary metadata that gets added to the client's heap equates to around
220 MB (even for a very simple app that doesn't do any subscriptions - it
just calls Ignition.start() to connect to the cluster).

It seems metadata is kept on the client for every cache whether the client app
needs it or not. Is there any way to tune this at all - e.g. knowing that a
particular client is only interested in a particular cache?

Screenshot:



Thanks





High availability of local listeners for ContinuousQuery or Events

2020-10-29 Thread 38797715

Hi community,

For local listeners registered for ContinuousQuery and Events, is there
a corresponding high availability mechanism design? That is, if the node
registering the local listener fails, what state will the cluster be in?

If we do not register a local listener, but instead write the business code in
the remote filter and return false, is this a good practice?





Ignite Cluster Issue on 2.7.6

2020-10-29 Thread Gurmehar Kalra
Hi,

I have two modules (Web and Engine) and want to share data between the modules,
but when I run Web and Engine together, data is added to the cache but is not
available to either of the modules.
Below is my Ignite config, which is the same in both modules:

config.setActiveOnStart(true);
config.setAutoActivationEnabled(true);

config.setIgniteHome(propertyReader.getProperty("spring.ignite.storage.path"));
config.setFailureHandler(new StopNodeOrHaltFailureHandler());
config.setDataStorageConfiguration(getDataStorageConfiguration());
config.setGridLogger(new JavaLogger(java.util.logging.Logger.getLogger(LOG.getClass().getCanonicalName())));

Ignite ignite = Ignition.start(config);
ignite.cluster().active(true);


All caches created have the below properties:
cache.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
cache.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cache.setCacheMode(CacheMode.REPLICATED);
cache.setGroupName("EngineGroup");

Both modules are running on the IP list:
127.0.0.1:47501, 127.0.0.1:47502, 127.0.0.1:47503, 127.0.0.1:47504
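
One thing not shown in the snippet above is the discovery configuration; for
both modules to join the same cluster they would typically need to share one,
for example a static-IP setup along the lines of this sketch (the IP finder
choice and the port range are assumptions based on the address list above):

import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class DiscoverySketch {
    /** Configures static-IP discovery so that both modules join one cluster. */
    public static void configureDiscovery(IgniteConfiguration config) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();

        // The same address list must be used by both modules.
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47501..47504"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);
        discoSpi.setLocalPort(47501);  // first discovery port a node tries to bind
        discoSpi.setLocalPortRange(4); // lets several local nodes bind successive ports

        config.setDiscoverySpi(discoSpi);
    }
}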

Please suggest..
Regards,
Gurmehar Singh




Re: removing ControlCenterAgent

2020-10-29 Thread Mekhanikov Denis
Hi!

The issue is that Control Center Agent puts its configuration into the
meta-storage.
Ignite has an issue with processing data in the meta-storage with a class that is
not present on all nodes: https://issues.apache.org/jira/browse/IGNITE-13642
Effectively it means that you can't remove control-center-agent from a cluster
that has worked with it previously.

You have a few options for solving it:
- Add control-center-agent to the classpath of all nodes and disable it using
management.sh --off. The classes and configuration will still be there, but it won't
do anything. You'll be able to remove the library after an upgrade to a version
that doesn't have this bug. Hopefully, it will be fixed in Ignite 2.9.1.

- Remove the metastorage directory from the persistence directory on all nodes.
This will remove the Control Center Agent configuration along with the
baseline topology history.
You will need to do that together with removing the control-center-agent
library.
NOTE that removing the metastorage is a dangerous operation and can lead to data
loss. I recommend using the first option if it works for you.
Make a copy of the persistence directories before removing anything. After the
removal and a restart, the baseline topology will be reset. Make sure that the
first activation leads to the same BLT as before the restart to avoid data loss.

Also note that Control Center doesn't support Ignite 2.9 yet. The agent for it 
is on its way. Currently only Ignite 2.8 is supported.

Denis

On 28.10.2020, 19:58, "Bastien Durel"  wrote:

Hello,

I'm running a 2.9.0 cluster with 2 nodes. I tried to use GridGain's
ControlCenterAgent to investigate a slowdown.

When I removed the agent files from the server (I don't like having to put
them in all clients), the second node cannot join the cluster when I
start it.

If I start node A, then node B, node B fails, but if I start node B,
then node A, node A fails.

If I put the agent files back, then all nodes can start, but clients
fail because they don't have the agent classes themselves.

When a node fails to start, it prints this log:


[17:52:45,265][INFO][tcp-disco-sock-reader-[2f3f6f3a 
192.168.43.29:39675]-#6%ClusterWA%-#50%ClusterWA%][TcpDiscoverySpi] Initialized 
connection with remote server node 
[nodeId=2f3f6f3a-accb-4708-a5cc-26d324a07816, rmtAddr=/192.168.43.29:39675]
[17:52:45,268][SEVERE][main][IgniteKernal%ClusterWA] Failed to start 
manager: GridManagerAdapter [enabled=true, 
name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: 
TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, 
marsh=JdkMarshaller 
[clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@39a8e2fa], 
reconCnt=10, reconDelay=2000, maxAckTimeout=60, soLinger=5, 
forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, 
skipAddrsRandomization=false]
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:302)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967)
at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1298)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2046)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114)
at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Unable to 
unmarshal key=metastorage.cluster.id.tag
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:2018)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1189)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462)
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299)
... 13 more
[17:52:45,271][SEVERE][main][IgniteKernal%ClusterWA] Got exception while 
starting (will rollback startup routine).
class org.apache.ignite.IgniteCheckedException: Failed to