Re: Intermittent Error

2021-06-28 Thread Aleksandr Shapkin
How did you create the cache? Most likely you want to use the PUBLIC one, 
or you should be able to access it using SQL_MYCACHE_TWSources.

Here are the docs regarding schema creation:
https://ignite.apache.org/docs/latest/SQL/schemas
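
The thread does not show how `conn` is created, but since the error comes from the
thin client, here is a rough sketch (an assumption, not the reporter's code) of running
the same query through the Java thin client with the schema set explicitly; the address
is illustrative and the schema names are the ones discussed above:

import java.util.List;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class SchemaQueryExample {
    public static void main(String[] args) throws Exception {
        ClientConfiguration clientCfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800"); // illustrative address

        try (IgniteClient client = Ignition.startClient(clientCfg)) {
            // Either query the default PUBLIC schema...
            SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM TWSources")
                .setSchema("PUBLIC");
            // ...or the schema backing the cache, e.g. .setSchema("\"myCache\"").

            List<List<?>> res = client.query(qry).getAll();
            System.out.println("Rows: " + res.size());
        }
    }
}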

> On 17 Jun 2021, at 13:28, Ilya Kasnacheev  wrote:
> 
> Hello!
> 
> What is the connection string of your JDBC connection?
> 
> Regards.
> -- 
> Ilya Kasnacheev
> 
> 
> On Wed, 16 Jun 2021 at 04:20, Moshe Rosten wrote:
> Greetings,
> 
> I'm attempting to retrieve a list of values from this query line:
> List<List<?>> res = conn.sqlQuery("select * from TWSources");
> It sometimes works perfectly fine, and at other times it gives me this error:
> org.apache.ignite.internal.client.thin.ClientServerError: Ignite failed to 
> process request [5]: Failed to set schema for DB connection for thread 
> [schema=myCache] (server status code [1])
> What could the problem be?
> 
> 



Re: Minimal config for client nodes

2021-06-03 Thread Aleksandr Shapkin
Hello!

I don't think there is a complete list documented anywhere, so it's better to 
check this manually.

Regarding peerClassLoading and similar configuration, you might check the 
#checkAttributes method in the source code:
https://github.com/apache/ignite/blob/b1529fbd7d42e3f3a5500a217a2b8b21acbb4189/modules/core/src/main/java/org/apache/ignite/internal/managers/discovery/GridDiscoveryManager.java

As for other settings, I can think of the security/SSL configuration for both 
client and server; it's also better to keep the network and discovery timeouts 
the same.
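
For illustration, a minimal client-node configuration along these lines (this is a
sketch, not an authoritative list; the discovery address is made up), carrying only the
discovery settings plus the attributes that must match the servers, such as
peerClassLoadingEnabled:

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class MinimalClientNode {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("server-host:47500..47509"));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)              // thick client node
            .setPeerClassLoadingEnabled(true) // must match the server nodes
            .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        Ignite client = Ignition.start(cfg);
        System.out.println("Joined topology of size " + client.cluster().nodes().size());
    }
}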



> On 3 Jun 2021, at 13:10, ict.management.trexon wrote:
> 
> Hello everybody.
> I'm trying to figure out the minimum configuration to pass to the client
> nodes.
> I thought client nodes don't need all the configuration of data regions,
> memory, persistence, consistentId, etc.
> 
> I thought that a minimal configuration, indicating at most how to connect
> to the cluster (IP, multicast, etc.), would be enough for the client nodes.
> 
> So I tested the client nodes with a minimal configuration and the server
> nodes with the full one, and it worked. But when I activate
> peerClassLoadingEnabled on the server side, even the clients that want to
> connect to that cluster fail, because they also require this flag to be
> set to true.
> 
> So my question is: what minimum configuration must a client node have in
> common with the server nodes?
> 
> 
> 



Re: Ignite throws "Failed to resolve class name" Exception

2021-06-03 Thread Aleksandr Shapkin
Hello!

It seems that you are trying to deploy a DTO using peer class loading; 
unfortunately, that's not possible. Peer class loading is mostly about task 
deployment, see 
https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading

To resolve this you need to have your CacheState class deployed on all nodes 
before deserialization happens, or to work with the cache in raw binary format:
https://ignite.apache.org/docs/latest/key-value-api/binary-objects
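
For example, a small sketch of the binary-object approach (the cache name and key
type are assumptions based on the snippet quoted below):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class BinaryReadSketch {
    // Read the cache without deserializing CacheState, so the class
    // does not have to be deployed on this node.
    static long readVersion(Ignite ignite, String tableName) {
        IgniteCache<String, BinaryObject> binCache =
            ignite.cache("cachestate").withKeepBinary();

        BinaryObject state = binCache.get(tableName);

        // Access fields by name instead of through the CacheState class.
        return state == null ? -1L : state.<Long>field("updateVersion");
    }
}

With withKeepBinary() the value stays in its serialized BinaryObject form, so the 
CacheState class never has to be resolved on the reading node.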


> On 26 May 2021, at 18:27, tsipporah22  wrote:
> 
> Hi Ilya,
> 
> Sorry for getting back to you late. I have the key-value classes on the server
> node. Peer class loading is enabled. I'm not getting this error consistently,
> so it's hard to reproduce. Below is the code snippet that throws the error.
> 
> First, I get the CacheState object with tableName as the key:
> 
> public class CacheState implements Serializable {
>     private static final long serialVersionUID = 1L;
> 
>     @QuerySqlField(index=true)
>     private String tableName;
> 
>     @QuerySqlField
>     private long updateVersion;
> 
>     public CacheState() {
>     }
> 
>     public CacheState(String tableName) {
>         this.tableName = tableName;
>     }
> 
>     public String getTableName() {
>         return tableName;
>     }
> 
>     public void setTableName(String tableName) {
>         this.tableName = tableName;
>     }
> 
>     public long getUpdateVersion() {
>         return updateVersion;
>     }
> 
>     public void setUpdateVersion(long updateVersion) {
>         this.updateVersion = updateVersion;
>     }
> }
> 
> And the error is thrown from this class:
> 
> public class WindowOptimizer {
>     private final Ignite ignite;
> 
>     private IgniteCache<String, CacheState> cacheStates;
> 
>     public void init() {
>         if (cacheStates == null) {
>             cacheStates =
>                 ignite.cache(CacheState.class.getSimpleName().toLowerCase());
>         }
>     }
> }
> 
>     private IgniteFuture<Collection<?>> updateCacheState(IgniteCompute compute,
>             BaselinePeriod period, OffsetDateTime now, WindowOptimizerCfg cfg) {
> 
>         final IgniteFuture<Collection<?>> future =
>             compute.broadcastAsync(updater);
>         future.listen(f -> {
> 
>             CacheState cacheState = cacheStates.get(tableName); // <--- this line throws the exception
> 
>         });
>     }
> 
> 
> Thanks,
> tsipporah
> 
> 
> 
> 



Re: help with control.sh script

2021-05-12 Thread Aleksandr Shapkin
Hello!

Most likely it’s the same connectivity issue that led to the lost partitions.

Do you have the full cmd output and some server logs?



> On 15 Apr 2021, at 21:15, facundo.maldonado wrote:
> 
> I'm testing a 32 nodes cluster with a partitioned cache with one backup.
> If 2 of them crashed (not if, when) I have the lost partitions problem.
> 
> Now I ssh to one of the nodes and execute *control.sh --baseline.*
> From every node other than the one marked as "coordinator" (?) I get this
> output:
> 
> 
> Failed to execute baseline command='collect'
> Failed to communicate with grid nodes (maximum count of retries reached).
> Connection to cluster failed. Failed to communicate with grid nodes (maximum
> count of retries reached).
> 
> Ok, I went to every node and did the same until I found the 'coordinator'.
> Once I got the failing nodes back online, I executed:
> *control.sh --cache reset_lost_partitions mycache*
> 
> To my surprise, I'm getting 
> 
> Connection to cluster failed. Failed to communicate with grid nodes (maximum
> count of retries reached).
> 
> So I started looking again for the node where that command actually works.
> 
> I'm sure I'm doing something wrong. Could someone help me?
> 
> 
> 
> 



Re: ClusterGroupEmptyException: Topology projection is empty.

2020-09-08 Thread Aleksandr Shapkin
Hi, is it possible to attach a full error stack trace? Also, which version are
you using?

There could be many places where cluster groups are required; please check
the docs for more details [1].

[1] - https://apacheignite.readme.io/docs/cluster-groups
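
For illustration only (not taken from the reporter's code), the exception typically
surfaces when a compute call targets a projection that currently contains no nodes,
e.g. before the second server has joined:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterGroup;

public class ProjectionCheckSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Projection over all nodes except the local one.
        ClusterGroup remotes = ignite.cluster().forRemotes();

        if (remotes.nodes().isEmpty()) {
            // Broadcasting here would throw ClusterGroupEmptyException:
            // "Topology projection is empty."
            System.out.println("No remote nodes yet, skipping broadcast.");
        }
        else {
            ignite.compute(remotes).broadcast(() -> System.out.println("Hello"));
        }
    }
}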

On Tue, 8 Sep 2020 at 15:35, kay wrote:

> Hello! I have 2 server nodes.
> When I start the first node, I sometimes get 'ClusterGroupEmptyException:
> Topology projection is empty.'
>
> What exactly does this exception mean?
> And what should I do to avoid this error? Or is it okay to ignore it?
>
> Thank you so much!
>
>
>
>
>


-- 
Alex.


Re: Node is unable to join cluster because it has destroyed caches

2020-06-02 Thread Aleksandr Shapkin
Hi,

Have you tried to put the temp cache into a different, non-persisted
memory region?

You can also try to use a node filter to control which nodes should store
the cache.
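
A rough sketch of the first suggestion (the region name is made up; "cache-tempX" is
taken from the exception below): temporary caches placed in an in-memory region, so
destroying them leaves no on-disk cache metadata that could block a restarted node
from rejoining:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TempCacheRegionSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Long-lived caches stay in the persisted default region.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // A separate in-memory region for short-lived caches.
        storageCfg.setDataRegionConfigurations(new DataRegionConfiguration()
            .setName("temp-region")
            .setPersistenceEnabled(false));

        Ignite ignite = Ignition.start(new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg));

        ignite.cluster().state(ClusterState.ACTIVE); // persistence requires activation

        CacheConfiguration<Long, Object> tempCfg =
            new CacheConfiguration<Long, Object>("cache-tempX").setDataRegionName("temp-region");

        IgniteCache<Long, Object> tempCache = ignite.getOrCreateCache(tempCfg);
        tempCache.put(1L, "subset row");

        ignite.destroyCache("cache-tempX"); // nothing persisted for this cache
    }
}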


On Tue, Jun 2, 2020, 19:24 xero  wrote:

> Hi Ignite team,
> 
> We have a use case where a small portion of the dataset must answer
> successive queries that could be relatively expensive. For this, we create a
> temporary cache with that small subset of the dataset and operate on that new
> cache. At the end of the process, that cache is destroyed. The caches are
> created *partitioned* and *persistence* is enabled.
> 
> The problem we are facing is the following. If, during the reboot of a node,
> one of those "temporal" caches is destroyed (which is very likely since they
> have a short life), the node is unable to rejoin the cluster because it
> contains information about a cache that no longer exists. This is the
> exception:
> 
> 2020-06-02T02:56:22.326+00:00 fa72f64b5d0f ignite: Caused by: class
> org.apache.ignite.spi.IgniteSpiException: Joining node has caches with data
> which are not presented on cluster, it could mean that they were already
> destroyed, to add the node to cluster - remove directories with the caches
> [cache-tempX, cache-tempY]
> 
> Is this an expected behavior? Is it possible to skip this validation? Is
> there any way to indicate that it is a temporary cache?
> 
> Thanks
>


Re: How to use different ExpiryPolicy for different keys on the same cache?

2019-10-19 Thread Aleksandr Shapkin
You can use the withExpiryPolicy method to create different cache facade instances over the same cache.

Something like
ignite.cache(name).withExpiryPolicy(plc1)
ignite.cache(name).withExpiryPolicy(plc2)
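
For instance, a slightly fuller sketch (cache name and TTLs are made up): two facades
over the same cache, each applying its own expiry policy to the entries written
through it.

import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class PerKeyExpirySketch {
    static void put(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        IgniteCache<Integer, String> shortTtl =
            cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 30)));
        IgniteCache<Integer, String> longTtl =
            cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.HOURS, 1)));

        shortTtl.put(1, "expires in 30 seconds");
        longTtl.put(2, "expires in 1 hour");
    }
}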


On Sat, Oct 19, 2019, 15:51 李玉珏@163 <18624049...@163.com> wrote:

> Hi,
>
> How to use different ExpiryPolicy for different keys on the same cache?
>
>