Re: Recommended HW on AWS EC2 - vertical vs horizontal scaling

2018-09-19 Thread aealexsandrov
Hi,

I left a list of some useful article links here:
http://apache-ignite-users.70518.x6.nabble.com/Slow-SQL-query-uses-only-a-single-CPU-td23553.html

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow SQL query uses only a single CPU

2018-09-19 Thread aealexsandrov
Hi,

You can look into the articles on the following wiki:

https://cwiki.apache.org/confluence/display/IGNITE/Design+Documents

The following blog also contains interesting information (some of it may be
out of date):

http://gridgain.blogspot.com

It contains a lot of information about how Ignite works under the hood. 

Regarding indexes and H2: Ignite uses H2 for query parsing, execution
planning, and indexing, but there seems to be no detailed documentation about
it beyond the official page:

https://apacheignite-sql.readme.io/docs/how-ignite-sql-works

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: AWS Cluster

2018-09-10 Thread aealexsandrov
Hello,

If your nodes don't see each other, check the following:

1) Check that the IP finder configuration of every node contains the IP
addresses of every AWS node in the cluster, for example:

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>178.0.0.1:47500..47501</value>
            <value>178.0.0.2:47500..47501</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>

where 178.0.0.1 and 178.0.0.2 are the addresses of the AWS machines.

2) Check that you configured the security group. This is required because you
must open the TCP ports used for Ignite communication:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Example:

Inbound:

Custom TCP Rule TCP 10800 - 10900 0.0.0.0/0 client
Custom TCP Rule TCP 47500 - 47600 0.0.0.0/0 discovery
Custom TCP Rule TCP 47100 - 47200 0.0.0.0/0 communication

Outbound:

All traffic All All 0.0.0.0/0
All traffic All All ::/0

3) Check that the ports are open in your operating system. For example, on
Windows you should add new rules to the firewall to open the ports (or
disable the firewall).

Once all the steps above are done and all IP addresses and ports are correct,
your C# code will work as expected.

BR,
Andrei
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Recommended HW on AWS EC2 - vertical vs horizontal scaling

2018-08-24 Thread aealexsandrov
Hi,

Ignite doesn't publish such benchmarks because results are very specific to
each use case and setup.

However, there are several common tips:

1) If you use EBS, try to avoid NVMe instance storage. It is fast, but it
does not appear to guarantee data durability; we have seen corruption of the
working directory on this type of device.
2) To get the best performance, you should have enough RAM to store your data
in Ignite off-heap memory.
3) Volume type: EBS Provisioned IOPS SSD (io1).

I suggest using x1 or x1e instances from
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/memory-optimized-instances.html
list.

Your choice will depend on your use case and expectations, but for example:

x1e.32xlarge
EBS volume type: io1
2 disks of 2 TB each

This provides the capability to store your data in memory on a single node,
with disk throughput around 14,000 MB/s.

Could you describe your use case in more detail?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime failure on bounds

2018-08-22 Thread aealexsandrov
Hi,

Sorry, I missed this conversation. To understand your problem, please provide
a reproducer, the cache configuration, or steps for how we can reproduce it,
or file an issue with all of this information.

At the moment it is not clear how to solve it.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite SQL- read Only User Creation

2018-08-22 Thread aealexsandrov
Hi,

Yes, when you use Ignite's advanced security (not the GridSecurityProcessor
interface), you can only manage users and passwords.
https://apacheignite.readme.io/docs/advanced-security provides simple
password authentication.

You can only:

1) Create a user - https://apacheignite-sql.readme.io/docs/create-user
2) Change a user's password -
https://apacheignite-sql.readme.io/docs/alter-user
3) Delete a user - https://apacheignite-sql.readme.io/docs/drop-user

If you require additional functionality, try to implement
GridSecurityProcessor.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite SQL- read Only User Creation

2018-08-21 Thread aealexsandrov
Hi,

I believe you are referring to
https://apacheignite.readme.io/docs/advanced-security

It provides the ability to create/drop/alter users, but only simple password
authentication. You can't set rules such as read-only access using it.

Alternatively, you can try to implement the GridSecurityProcessor interface
and integrate this logic there, or use a third-party plugin for Ignite that
already contains the security functionality.

Some information about how this can be done is available here:

http://apache-ignite-users.70518.x6.nabble.com/Authentication-for-Apache-Ignite-2-5-td22565.html

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how ignite c++ node set baselinetopology

2018-08-14 Thread aealexsandrov
Hi,

The C++ API doesn't contain methods to update the baseline topology, so in
this case you can use the Java API for it.

Add code that will listen to the EVT_NODE_JOINED event:

Ignite ignite = Ignition.ignite();

ignite.events().localListen(event -> {
    DiscoveryEvent e = (DiscoveryEvent)event;

    final long topVer = e.topologyVersion();

    ignite.cluster().setBaselineTopology(topVer);

    return true;
}, EventType.EVT_NODE_JOINED);

The whole example can be found here:

https://apacheignite.readme.io/docs/baseline-topology#section-triggering-rebalancing-programmatically

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: C++ client Exception occurred: Unexpected header during deserialization: 9

2018-08-14 Thread aealexsandrov
Hi,

It looks like something is wrong with your serialization scheme. If you are
going to store a C++ object, you should implement the read and write methods
as described here:

https://apacheignite-cpp.readme.io/docs/serialization#section-macros

Could you please provide the cache configuration and the implementation (.h,
.cpp) of the classes that are used as the key, the value, or part of the
key/value object? Also, please provide their serialization schemes.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-08 Thread aealexsandrov
Yes. If you don't want to store it as nested objects, you can move those
fields into the enclosing class:

class A {
    int a;
    B b;
}

class B {
    int b;
    int c;
}

You can change it to:

class A {
    int a;
    int b;
    int c;
}





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread aealexsandrov
From the Web Console sources I see the following mapping:

{"dbName": "BIT", "dbType": -7, "signed": {"javaType": "Boolean",
"primitiveType": "boolean"}},
{"dbName": "TINYINT", "dbType": -6,
"signed": {"javaType": "Byte", "primitiveType": "byte"},
"unsigned": {"javaType": "Short", "primitiveType": "short"}},
{"dbName": "SMALLINT", "dbType": 5,
"signed": {"javaType": "Short", "primitiveType": "short"},
"unsigned": {"javaType": "Integer", "primitiveType": "int"}},
{"dbName": "INTEGER", "dbType": 4,
"signed": {"javaType": "Integer", "primitiveType": "int"},
"unsigned": {"javaType": "Long", "primitiveType": "long"}},
{"dbName": "BIGINT", "dbType": -5, "signed": {"javaType": "Long",
"primitiveType": "long"}},
{"dbName": "FLOAT", "dbType": 6, "signed": {"javaType": "Float",
"primitiveType": "float"}},
{"dbName": "REAL", "dbType": 7, "signed": {"javaType": "Double",
"primitiveType": "double"}},
{"dbName": "DOUBLE", "dbType": 8, "signed": {"javaType": "Double",
"primitiveType": "double"}},
{"dbName": "NUMERIC", "dbType": 2, "signed": {"javaType":
"BigDecimal"}},
{"dbName": "DECIMAL", "dbType": 3, "signed": {"javaType":
"BigDecimal"}},
{"dbName": "CHAR", "dbType": 1, "signed": {"javaType": "String"}},
{"dbName": "VARCHAR", "dbType": 12, "signed": {"javaType": "String"}},
{"dbName": "LONGVARCHAR", "dbType": -1, "signed": {"javaType":
"String"}},
{"dbName": "DATE", "dbType": 91, "signed": {"javaType": "Date"}},
{"dbName": "TIME", "dbType": 92, "signed": {"javaType": "Time"}},
{"dbName": "TIMESTAMP", "dbType": 93, "signed": {"javaType":
"Timestamp"}},
{"dbName": "BINARY", "dbType": -2, "signed": {"javaType": "byte[]"}},
{"dbName": "VARBINARY", "dbType": -3, "signed": {"javaType": "byte[]"}},
{"dbName": "LONGVARBINARY", "dbType": -4, "signed": {"javaType":
"byte[]"}},
{"dbName": "NULL", "dbType": 0, "signed": {"javaType": "Object"}},
{"dbName": "OTHER", "dbType": 1111, "signed": {"javaType": "Object"}},
{"dbName": "JAVA_OBJECT", "dbType": 2000, "signed": {"javaType":
"Object"}},
{"dbName": "DISTINCT", "dbType": 2001, "signed": {"javaType":
"Object"}},
{"dbName": "STRUCT", "dbType": 2002, "signed": {"javaType": "Object"}},
{"dbName": "ARRAY", "dbType": 2003, "signed": {"javaType": "Object"}},
{"dbName": "BLOB", "dbType": 2004, "signed": {"javaType": "Object"}},
{"dbName": "CLOB", "dbType": 2005, "signed": {"javaType": "String"}},
{"dbName": "REF", "dbType": 2006, "signed": {"javaType": "Object"}},
{"dbName": "DATALINK", "dbType": 70, "signed": {"javaType": "Object"}},
{"dbName": "BOOLEAN", "dbType": 16, "signed": {"javaType": "Boolean",
"primitiveType": "boolean"}},
{"dbName": "ROWID", "dbType": -8, "signed": {"javaType": "Object"}},
{"dbName": "NCHAR", "dbType": -15, "signed": {"javaType": "String"}},
{"dbName": "NVARCHAR", "dbType": -9, "signed": {"javaType": "String"}},
{"dbName": "LONGNVARCHAR", "dbType": -16, "signed": {"javaType":
"String"}},
{"dbName": "NCLOB", "dbType": 2011, "signed": {"javaType": "String"}},
{"dbName": "SQLXML", "dbType": 2009, "signed": {"javaType": "Object"}}

I guess you can use it in your schemas.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread aealexsandrov
I am not fully sure, but according to the java.sql.Types specification you
can try to use the following for Java objects:

https://docs.oracle.com/javase/7/docs/api/java/sql/Types.html#JAVA_OBJECT
https://docs.oracle.com/javase/7/docs/api/java/sql/Types.html#OTHER
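As a quick sanity check, the numeric codes behind those two constants can be
read straight from java.sql.Types with plain JDK code (no Ignite needed):

```java
import java.sql.Types;

public class TypesDemo {
    public static void main(String[] args) {
        // JDBC type codes usable for arbitrary Java objects
        System.out.println("JAVA_OBJECT = " + Types.JAVA_OBJECT); // 2000
        System.out.println("OTHER = " + Types.OTHER);             // 1111
    }
}
```

These are the same codes that appear in the dbType column of the mapping
above.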


BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread aealexsandrov
Hi,

You have a POJO class, your.package.EquityClass, with the following fields:
Long equityID; 
private ListingCode firstCode; 
private String equityName; 
private String equityType; 
private String equityClass; 
private Set listings; 

and you are going to store it in Ignite. In this case you can configure it
like this:

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="CACHE_NAME"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="your.package.EquityClass"/>
                <property name="keyFieldName" value="equityID"/>
                <property name="fields">
                    <map>
                        <entry key="equityID" value="java.lang.Long"/>
                        <entry key="equityName" value="java.lang.String"/>
                        <entry key="equityType" value="java.lang.String"/>
                        <entry key="equityClass" value="java.lang.String"/>
                    </map>
                </property>
            </bean>
        </list>
    </property>
</bean>
After that you can use it like this:

IgniteCache<Long, EquityClass> cache = ignite.getOrCreateCache("CACHE_NAME");

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: big wal/archive file,but node file is small

2018-08-07 Thread aealexsandrov
Hi,

First of all, read about the WAL:

https://apacheignite.readme.io/docs/write-ahead-log
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALHistorySize

The maximum size of the WAL work directory is walSegmentSize * walSegments =
640 MB by default. Your WAL archive is large because it contains all the
information about previous segments.

It looks like the archive can't be removed safely until the following issue
is fixed; the fix should be available in Ignite 2.7:

https://issues.apache.org/jira/browse/IGNITE-7912

However, you can try to optimize your WAL usage:

1) Disable WAL archive logging:

https://apacheignite.readme.io/docs/write-ahead-log#section-disabling-wal-archiving

2) Turn off WAL logging during some operations. This is very useful while
loading data.

You can turn WAL logging off/on for a given cache using:

1) Java code:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#disableWal-java.lang.String-

2) SQL:

ALTER TABLE tablename NOLOGGING - turn off
ALTER TABLE tablename LOGGING - turn on
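For option 1), the linked section describes pointing the archive path at the
WAL directory itself. A minimal configuration sketch in the usual Spring XML
style (the paths are placeholders, not values from this thread):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Setting walArchivePath equal to walPath disables WAL archiving -->
            <property name="walPath" value="/ignite/wal"/>
            <property name="walArchivePath" value="/ignite/wal"/>
        </bean>
    </property>
</bean>
```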

BR,
Andrei




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: The problem after the ignite upgrade

2018-07-27 Thread aealexsandrov
Hi,

Could you please provide step-by-step instructions for how you upgraded the
cluster?

Please note that if you use persistence, there is an important feature added
in 2.4: baseline topology. A baseline topology is a set of Ignite server
nodes intended to store data. The nodes from the baseline topology are not
limited in terms of functionality and behave as regular server nodes that act
as a container for data and computations in Ignite.

It changes the activation process. You can read about it here:

https://apacheignite.readme.io/docs/baseline-topology

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Using SQL users to open jdbc connections

2018-07-26 Thread aealexsandrov
Hi,

Could you please show the command you use to create new users?

When you create a user in quotes ("test") using SQL:

CREATE USER "test" WITH PASSWORD 'test'

it is created exactly as written (in this case, test).

If you create a user without quotes (test):

CREATE USER test WITH PASSWORD 'test'

then the username is stored in uppercase (TEST).
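This quoting rule can be modeled with a tiny helper (a hypothetical
illustration, not an Ignite API): unquoted identifiers fold to uppercase,
quoted ones keep their case.

```java
public class UserNameRule {
    // Illustrative model of the SQL identifier rule:
    // "test" stays test; bare test becomes TEST.
    static String storedName(String identifier) {
        if (identifier.startsWith("\"") && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1); // keep case
        }
        return identifier.toUpperCase(); // unquoted: fold to uppercase
    }

    public static void main(String[] args) {
        System.out.println(storedName("\"test\"")); // test
        System.out.println(storedName("test"));     // TEST
    }
}
```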

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime failure on bounds

2018-07-18 Thread aealexsandrov
Hi,

I don't have the context of what you do in your code. However, I know that
several page corruption issues were fixed in the 2.6 release.

So there is no specific suggestion from my side.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread aealexsandrov
So you are saying that if I start the provided example, I will see the same
error? I can try to investigate the problem if I am able to reproduce the
same behavior.

Give me some time to take a look at this example.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread aealexsandrov
Hi,

First of all, you shouldn't use Spark 2.1 with Ignite, because you could get
conflicts between Spark versions.

From your log, when you used ignite-spark (which uses Spark 2.2), I see that
you have a problem with the Spring configuration:

class org.apache.ignite.IgniteException: Spring application context resource
is not injected.

I can't say why you hit it from the provided code lines.

Could you please provide a reproducer example on GitHub (or similar) to
analyze?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime failure on bounds

2018-07-18 Thread aealexsandrov
Hi,

If you saw "page content is corrupted" and upgraded to 2.6 after that, it's
possible that your persistence files are still broken.

The simplest fix is to clean your PDS (work/db) directory before upgrading to
2.6.

If the data is important, you can also try the following:

1) Stop the cluster.
2) Remove only the index.bin files from
work/db///index.bin.
3) Restart the cluster.
4) Wait for the message in the Ignite log that the indexes were rebuilt.

If that does not help, only cleaning the PDS will help.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread aealexsandrov
Sorry, I made a typo: the ignite-spark module depends on "spark-core_2.11",
not spark-core_2.10.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Logging in C++

2018-07-18 Thread aealexsandrov
Hi,
What did you mean by "remote logging"?

If you are asking how to configure standard Apache Ignite logging: the Ignite
Java node stores its log in a file located in the work directory. A C++
Ignite node is started as a wrapper around a Java node, and when you start it
you provide an XML configuration file. How to configure logging via XML is
described here (don't forget to add the required Java binaries to the path):

https://apacheignite.readme.io/docs#section-log4j2

If you are going to log something from your own C++ code, you can use an
existing solution for it, e.g. log4cpp, Pantheios, glog, etc. How they should
be configured is described on their official sites.

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Running Server Node in UNIX

2018-07-16 Thread aealexsandrov
Hi,

Did you add IGNITE_HOME to your environment?

https://apacheignite.readme.io/docs/getting-started#section-with-default-configuration

Even if you are creating a Maven project, you still need to download the
binaries and set IGNITE_HOME:

https://ignite.apache.org/download.cgi

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Authentication for Apache Ignite 2.5

2018-07-10 Thread aealexsandrov
Hi,

1) Regarding the advanced security provided by default:

https://apacheignite.readme.io/docs/advanced-security

It provides the ability to create/drop/alter users, but only simple password
authentication.

Note that it requires persistence and the authenticationEnabled property set
to true in the node configuration.

Could you please provide a step-by-step description of the case where you
enable security, enable persistence, and create a user via
https://apacheignite-sql.readme.io/docs/create-user, and it doesn't work?
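A minimal sketch of such a configuration in the usual Spring XML style
(property names taken from IgniteConfiguration and
DataStorageConfiguration; treat it as a starting point, not a complete
config):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Enables CREATE/ALTER/DROP USER and password authentication -->
    <property name="authenticationEnabled" value="true"/>
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Authentication requires persistence to be enabled -->
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```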

2) Regarding the GridSecurityProcessor interface:

You can take a look at the following thread, where you can see some problems
you may face:

http://apache-ignite-users.70518.x6.nabble.com/Custom-GridSecurityProcessor-plugin-question-td4942.html

Regarding hooking in the SecurityCredentialsProvider and SecurityCredentials:

There is no simple way to do it, but you can try to extend
IgniteConfiguration like this:


public class SecurityIgniteConfiguration extends IgniteConfiguration {
    private SecurityCredentialsProvider securityCredentialsProvider;

    public SecurityCredentialsProvider getSecurityCredentialsProvider() {
        return securityCredentialsProvider;
    }

    public void setSecurityCredentialsProvider(
        SecurityCredentialsProvider securityCredentialsProvider) {
        this.securityCredentialsProvider = securityCredentialsProvider;
    }
}

After that, in your security processor, do the following:

securityCred = ((SecurityIgniteConfiguration)
ctx.config()).getSecurityCredentialsProvider();

BR,
Andrei







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problems with unlocking multiply held cache locks.

2018-06-28 Thread aealexsandrov
Hi,

As I remember, Jon and I already discussed here:

http://apache-ignite-users.70518.x6.nabble.com/If-a-lock-is-held-by-another-node-IgniteCache-isLocalLocked-appears-to-return-incorrect-results-td22110.html#a22149

that the isLocalLocked method works incorrectly when several nodes are
started in the same JVM. I even filed an issue:

https://issues.apache.org/jira/browse/IGNITE-8833

In the current example, several nodes are started in the same JVM too, so
this method will not work properly.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite Rest API JOIN with multiple Caches

2018-06-27 Thread aealexsandrov
Hi,

Try using the qryfldexe command.

For example, the attached ss1.java creates two caches with the same
structure.

Now I am going to execute the following command:

SELECT * FROM "mycache1".Value V1 join "mycache2".Value V2 on V1.key=V2.key

Let's use the following converter to get the URI string:

https://meyerweb.com/eric/tools/dencoder/

The encoded command will be:

SELECT%20*%20FROM%20%22mycache1%22.Value%20V1%20join%20%22mycache2%22.Value%20V2%20on%20V1.key%3DV2.key
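The same encoding can be produced with plain JDK code instead of the
converter website. Note that URLEncoder targets form encoding, so it emits
'+' for spaces; those are swapped for %20 here:

```java
import java.net.URLEncoder;

public class EncodeQuery {
    // Percent-encodes a SQL string for use as a URL query parameter value.
    static String encode(String sql) throws Exception {
        return URLEncoder.encode(sql, "UTF-8").replace("+", "%20");
    }

    public static void main(String[] args) throws Exception {
        String sql = "SELECT * FROM \"mycache1\".Value V1 join "
            + "\"mycache2\".Value V2 on V1.key=V2.key";
        System.out.println(encode(sql));
    }
}
```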

Run the following in a browser:

http://127.0.0.1:8080/ignite?cmd=qryfldexe&pageSize=10&cacheName=mycache1&qry=SELECT%20*%20FROM%20%22mycache1%22.Value%20V1%20join%20%22mycache2%22.Value%20V2%20on%20V1.key%3DV2.key

Output:

{"successStatus":0,"error":null,"response":{"items":[[0,"Value 0",0,"Value
0"],[1,"Value 1",1,"Value 1"],[2,"Value 2",2,"Value 2"],[3,"Value
3",3,"Value 3"],[4,"Value 4",4,"Value 4"],[5,"Value 5",5,"Value
5"],[6,"Value 6",6,"Value 6"],[7,"Value 7",7,"Value 7"],[8,"Value
8",8,"Value 8"],[9,"Value 9",9,"Value
9"]],"last":false,"queryId":10,"fieldsMetadata":[{"schemaName":"mycache1","typeName":"VALUE","fieldName":"KEY","fieldTypeName":"java.lang.Integer"},{"schemaName":"mycache1","typeName":"VALUE","fieldName":"VALUE","fieldTypeName":"java.lang.String"},{"schemaName":"mycache2","typeName":"VALUE","fieldName":"KEY","fieldTypeName":"java.lang.Integer"},{"schemaName":"mycache2","typeName":"VALUE","fieldName":"VALUE","fieldTypeName":"java.lang.String"}]},"sessionToken":null}

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ClassCastException in Hibernate QueryCache

2018-06-27 Thread aealexsandrov
Hi,

I think the problem is that the following code doesn't work in Java:

public static class SerializableObject implements Serializable {}

and

Object[] array = new Object[1];

array[0] = new SerializableObject();

Serializable[] serializableArr = (Serializable[]) array;

Exception in thread "main" java.lang.ClassCastException: [Ljava.lang.Object;
cannot be cast to [Ljava.io.Serializable;

But the following code works fine:

SerializableObject[] array = new SerializableObject[1];

array[0] = new SerializableObject();

Serializable[] serializableArr = (Serializable[]) array;
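As an aside (a plain-JDK workaround, not something Hibernate or Ignite does):
copying the array instead of casting it works, because the copy's runtime
component type really is Serializable.

```java
import java.io.Serializable;
import java.util.Arrays;

public class CopyDemo {
    // Copies an Object[] into a fresh Serializable[]; unlike the cast, this
    // succeeds as long as every element is actually Serializable.
    static Serializable[] toSerializable(Object[] src) {
        return Arrays.copyOf(src, src.length, Serializable[].class);
    }

    public static void main(String[] args) {
        Object[] array = { "strings are Serializable" };
        Serializable[] ok = toSerializable(array);
        System.out.println(ok.length);
    }
}
```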

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Question on Ignite and Spark Structured Streaming Integration.

2018-06-27 Thread aealexsandrov
Hi,

As far as I know, there is no special sink function for streaming to Ignite.
Also, I don't see a "jdbc" format in the official Spark documentation (only
file, kafka, console, memory, and foreach):

https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks

However, you can integrate it with Ignite using the foreach sink and a custom
ForeachWriter implementation:

https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#using-foreach
https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/ForeachWriter.html

BR,
Andrei







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed Closures Apply method/c#

2018-06-25 Thread aealexsandrov
Very strange. By default there is no timeout; I will take a more careful
look.

Also, is it possible that you cancel the closure somehow?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Running Node removal from baseline

2018-06-25 Thread aealexsandrov
Hi,

Currently, if a node leaves the baseline topology (BLT), the cluster will
wait for it. If it can't return for a long time, the user should manually
remove it from the BLT, after which rebalancing happens as in this example:

https://apacheignite.readme.io/docs/baseline-topology#section-triggering-rebalancing-programmatically

However, there are discussions about automatically triggering rebalancing on
the Apache Ignite dev list, for example:

http://apache-ignite-developers.2346864.n4.nabble.com/Triggering-rebalancing-on-timeout-or-manually-if-the-baseline-topology-is-not-reassembled-td29299.html

I think you can ask about it there.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ClassCastException in Hibernate QueryCache

2018-06-25 Thread aealexsandrov
Hi,

Could you please provide some more details: the cache configuration, an
example of the code that failed, and logs?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed Closures Apply method/c#

2018-06-25 Thread aealexsandrov
Hi,

Could you please provide the exception?

As I see from the code, if you didn't manually set a timeout using the
withTimeout method, Long.MAX_VALUE should be used:

Long timeout = (Long)map.get(TC_TIMEOUT);

long timeout0 = timeout == null || timeout == 0 ? Long.MAX_VALUE : timeout;

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: running Apache Ignite in docker with cgroups

2018-06-22 Thread aealexsandrov
Hi,

Could you please provide your configuration files? How many nodes did you
start in your container?

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: setting baseline topology in kubernetes

2018-06-22 Thread aealexsandrov
Hi,

No, 11211 is the default Ignite TCP port. For every new node on the same host
it is incremented: 11211, 11212, 11213, etc.

Also, please check that you didn't override it:

https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html#setLocalPort-int-

And yes, the Ignite TCP port should be exposed for every node if you are
going to work with them from outside.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is there any way to remove a node from cluster safely?

2018-06-21 Thread aealexsandrov
Hi,

Unfortunately, in your environment you can't stop the node without losing
data. To stop one node safely, you need at least one backup.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: setting baseline topology in kubernetes

2018-06-21 Thread aealexsandrov
Hi,

All existing options of the control.sh tool are listed here:

https://apacheignite.readme.io/docs/baseline-topology#section-cluster-activation-tool

To connect to a specific host and port, use the --host and --port options
(defaults: 127.0.0.1 and 11211).

If you started Ignite in a container, you can run control.sh from inside the
container, or expose the required ports outside of your container and connect
to them.

BR,
Andrei 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite Rest API JOIN with multiple Caches

2018-06-20 Thread aealexsandrov
Hi,

The question was answered at
https://stackoverflow.com/questions/50950480/join-on-apache-ignite-rest-api/50950777#50950777.

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL Query full table scan, Node goes down

2018-06-20 Thread aealexsandrov
Hi,

Please read the documentation more carefully. The lazy flag should be set on
the SqlFieldsQuery object, on the node where you are going to run the query.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteCheckedException: Error while creating file page store caused by NullPointerException

2018-06-19 Thread aealexsandrov
As a workaround, you can try to add execution rights (as in your example) to
all files under the work directory.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Using Persistent ignite queues

2018-06-19 Thread aealexsandrov
Hi,

Could you please provide the following for investigation:

1) A thread dump taken at the moment the cluster hangs.
2) The code of your service.
3) Logs of the cluster nodes.
4) The configuration of the cluster.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteCheckedException: Error while creating file page store caused by NullPointerException

2018-06-19 Thread aealexsandrov
Hi,

Is this issue reproducible? Can you change the work directory path, or clear
your work directory and check again? Did you try to modify or copy anything
from the work directory?

Also, could you please provide the cluster configuration and a code
reproducer?

BR,
Andrei




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: If a lock is held by another node IgniteCache.isLocalLocked() appears to return incorrect results.

2018-06-19 Thread aealexsandrov
Hi,

I filed the following issue:

https://issues.apache.org/jira/browse/IGNITE-8833

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [ANNOUNCE] Apache Ignite 2.5.0 Released

2018-06-19 Thread aealexsandrov
Hi szj,

Could you please redirect your questions to a new thread? Also, don't forget
to provide the steps to reproduce and logs.

PS: I will check your case with ignitevisorcmd.sh and node restarting.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: If a lock is held by another node IgniteCache.isLocalLocked() appears to return incorrect results.

2018-06-19 Thread aealexsandrov
Hi,

It looks like a bug. Let me investigate it and file an issue.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite user create/modify trouble

2018-06-08 Thread aealexsandrov
https://issues.apache.org/jira/browse/IGNITE-8756



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite user create/modify trouble

2018-06-08 Thread aealexsandrov
Hi,

In my opinion, the documentation should be updated to describe how this is
designed in SQL. I will file an issue.

When you use SQL and need to create a user like TesT, just use quotes.
Without quotes, you will create TEST.

If you are going to change the password of the TesT user via SQL, use quotes
in the ALTER command as well. Without quotes, the command will try to find
TEST.

When you use JDBC, the username is not case sensitive.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite user create/modify trouble

2018-06-07 Thread aealexsandrov
Hi,

When you create a user in quotes ("test") using SQL:

CREATE USER "test" WITH PASSWORD 'test'

it is created exactly as written (in this case, test).

If you create a user without quotes (test):

CREATE USER test WITH PASSWORD 'test'

then the username is stored in uppercase (TEST).

The same applies to other DDL commands. So if you are going to change the
password of the default user, you should do:

ALTER USER "ignite" WITH PASSWORD 'test';

When you set the username via JDBC, there is no case transformation. Could
you please re-test your cases using quotes as well?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Clarification regarding nodeFilter

2018-05-29 Thread aealexsandrov
Hi,

I checked the following code:

Ignite ignite = IgnitionEx.start("examples/config/example-ignite.xml", "ignite-1");
Ignite ignite2 = IgnitionEx.start("examples/config/example-ignite.xml", "ignite-2");

ClusterGroup cg = ignite2.cluster().forPredicate(new IgnitePredicate<ClusterNode>() {
    Ignite filterIgnite;

    @Override public boolean apply(ClusterNode node) {
        System.out.println("ignite: " + (isNull(filterIgnite) ? null : filterIgnite.name()));
        return true;
    }

    @IgniteInstanceResource
    void setFilterIgnite(Ignite filterIgnite) {
        this.filterIgnite = filterIgnite;
    }
});

// Deploy services only on server nodes.
ignite.services(cg).deploy(new ServiceConfiguration()
    .setMaxPerNodeCount(1)
    .setName("my-service")
    .setService(new SimpleMapServiceImpl<>())
);

It has the same behavior as the nodeFilter predicate: it runs on the
coordinator node. And it looks like it should be stateless too, because it is
used in the ServiceConfiguration.

ignite: ignite-1
ignite: ignite-1
Service was initialized: my-service
Service was initialized: my-service
Executing distributed service: my-service
Executing distributed service: my-service
ignite: ignite-1
ignite: ignite-1
ignite: ignite-1
ignite: ignite-1

Maybe I missed something. Could you please provide the code that you propose
to use?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Need clarification on redeploy behavior

2018-05-28 Thread aealexsandrov
Hi,

There is no forced deploy/undeploy capability as far as I know.

As was mentioned in your other question:

http://apache-ignite-users.70518.x6.nabble.com/Clarification-regarding-nodeFilter-td21671.html

a node filter predicate should be stateless and should always return the same
value for the same node. This means that using a compute task in the return
statement, as you did, isn't correct.

Note that the nodeFilter predicate is part of the service configuration, so
you can't change it after service deployment (e.g. to add or remove a node
in the filter logic).

To change the filter logic, you should undeploy the service completely and
deploy it again with the new configuration.

BR,
Andrei






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Clarification regarding nodeFilter

2018-05-28 Thread aealexsandrov
Hi,

Please take a look at my answers below:

1) Yes. The node filter runs only on the coordinator node after the
reassignment process. It can be executed several times, and it is possible
that the Ignite instance is not injected. In my opinion, all of this
behavior should be documented.

2) A node filter predicate should be stateless and should always return the
same value for the same node. This means that using a compute task in the
return value, as you did, isn't correct.

As far as I know, there is no direct way to see whether a service is
deployed or undeployed at the moment. If all nodes from the nodeFilter
fail, Ignite will wait for them.

I found the following issue, which may help with this once it is
implemented: https://issues.apache.org/jira/browse/IGNITE-8365

You can try to track the service nodes by their IDs and react if all of
them go offline.

BR,
Andrei




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Clarification regarding nodeFilter

2018-05-28 Thread aealexsandrov
Hi,

Indeed, there is some unexpected behavior in how it is documented versus how
it actually works.

I filed the following issues related to service/cache deployment with node
filters:

https://issues.apache.org/jira/browse/IGNITE-8629 - There is no
documentation about on which node the IgnitePredicate will be executed
during service/cache deployment with a NodeFilter
https://issues.apache.org/jira/browse/IGNITE-8630 - Node filter
IgnitePredicate executes twice during deployment on a single-node cluster
https://issues.apache.org/jira/browse/IGNITE-8631 - IgniteInstanceResource
doesn't inject the Ignite instance into the IgnitePredicate during cache
deployment with a nodeFilter

I suggest you avoid compute tasks in this predicate for the moment, to
prevent problems with multiple executions of the apply method on the same
node.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Backup and Restore of Ignite Persistence

2018-05-09 Thread aealexsandrov
Hi,

Yes, Ignite stores the data in files. For every cache entry there is a
primary node and optionally several backup nodes. This can be configured in
CacheConfiguration.

You can read about it here:

https://apacheignite.readme.io/v1.1/docs/primary-and-backup-copies

For example, if you have 3 server nodes:

1) If you create a cache with backup count = 2, every entry will be stored
on all three nodes.
2) If you create a cache with backup count = 1, every entry will be stored
on two nodes.
3) If you create a cache with backup count = 0, every entry will be stored
on its primary node only.

If all nodes that contain some entry go down at the same time, that data
can be lost.
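The backup count from the examples above is set on the cache configuration. A minimal Spring XML sketch (the cache name is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- One backup copy: every entry is kept on two nodes. -->
    <property name="backups" value="1"/>
</bean>
```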

As for restoring data: you can set up a write-ahead log (WAL), which
provides a recovery mechanism for scenarios where a single node or the
whole cluster goes down. It is worth mentioning that in case of a crash or
restart, a cluster can always be recovered to the latest successfully
committed transaction by relying on the contents of the WAL.

You can read about it here:

https://apacheignite.readme.io/docs/write-ahead-log

BR,
Andrei






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What is the most efficient way to scan all data partitions?

2018-05-09 Thread aealexsandrov
Hi,

It's normal that writing to disk causes some degradation. You can apply
several optimizations to speed up writing to disk:

https://apacheignite.readme.io/docs/durable-memory-tuning
https://apacheignite.readme.io/docs/performance-tips

However, you can try to test your example with a tuned default data region
configuration. (The XML configuration attached to the original message is
not preserved in this archive.)
You can also set walMode to NONE, but in that case you can lose data.

Also, if you have more than one hard disk, you can think about separating
the data and the WAL:

https://apacheignite.readme.io/docs/durable-memory-tuning#section-separate-disk-device-for-wal
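A tuned configuration along the lines discussed above could look like the following sketch (the exact values from the original message were lost; walMode and maxSize here are assumptions, not the author's settings):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Assumed: LOG_ONLY trades some durability guarantees for write speed. -->
        <property name="walMode" value="LOG_ONLY"/>
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="persistenceEnabled" value="true"/>
                <!-- Assumed size: 4 GB of off-heap memory. -->
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```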

Regarding your example: it looks OK, but I don't see where you close your
streamer.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-05-09 Thread aealexsandrov
Hi,

qryexe doesn't work correctly. For some reason it ignores _key and _value in
2.4. You can use qryfldexe instead, as described here:

http://apache-ignite-users.70518.x6.nabble.com/Example-of-SQL-query-td21427.html

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Example of SQL query

2018-05-09 Thread aealexsandrov
Hi again,

We already discussed this recently. At the moment some things work
incorrectly in 2.4:

1) The REST API only supports String as key and value, and it looks like
you can't use an affinity key on strings for the same reason. However, you
can set it up using the @AffinityKeyMapped annotation:

public static class Key {
    @AffinityKeyMapped
    @QuerySqlField(index = true)
    private final String key;

    public Key(String key) {
        this.key = key;
    }

    public String getKey() {
        return key;
    }
}

And put it:

CacheConfiguration cfg = new CacheConfiguration<>("cache");

It should work like example from here:

http://apache-ignite-users.70518.x6.nabble.com/Inconsistency-reading-cache-from-code-and-via-REST-td21228.html#a21293

2) qryexe doesn't work correctly. For some reason it ignores the _key and
_value fields. It looks like this could be fixed in future releases.

Now I will show you some working examples of how to avoid the second
problem:

StartServerNode.java

  
StartClientNode.java

  

1) qryfldexe - does exactly what you want but has a different syntax.

Here,
select%20firstName%2C%20lastName%20from%20Person%20where%20_key%20%3D%201%20or%20_key%20%3D%203
is the URL-encoded form of: select firstName, lastName from Person where _key = 1 or _key = 3

http://127.0.0.1:8080/ignite?cmd=qryfldexe&pageSize=10&cacheName=Person&qry=select%20firstName%2C%20lastName%20from%20Person%20where%20_key%20%3D%201%20or%20_key%20%3D%203

{"successStatus":0,"error":null,"sessionToken":null,"response":{"items":[["John1","Doe1"],["John3","Doe3"]],"last":true,"fieldsMetadata":[{"schemaName":"Person","typeName":"PERSON","fieldName":"FIRSTNAME","fieldTypeName":"java.lang.String"},{"schemaName":"Person","typeName":"PERSON","fieldName":"LASTNAME","fieldTypeName":"java.lang.String"}],"queryId":4}}
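Building these URL-encoded query strings by hand is error-prone. A small stand-alone helper can do the encoding step (parameter names follow the REST examples above; the class and method names are just for illustration):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RestQueryUrl {
    // Build a qryfldexe REST URL, URL-encoding only the SQL text.
    static String qryFldExeUrl(String host, int port, String cacheName,
                               int pageSize, String sql) {
        // URLEncoder form-encodes spaces as '+'; swap them for '%20' to match
        // the style used in the examples. Literal '+' was already encoded
        // as '%2B' before this replacement, so it is safe.
        String encoded = URLEncoder.encode(sql, StandardCharsets.UTF_8)
            .replace("+", "%20");

        return "http://" + host + ":" + port + "/ignite?cmd=qryfldexe"
            + "&pageSize=" + pageSize
            + "&cacheName=" + cacheName
            + "&qry=" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(qryFldExeUrl("127.0.0.1", 8080, "Person", 10,
            "select firstName, lastName from Person where _key = 1 or _key = 3"));
    }
}
```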

2) The get command returns the value of a given key:

http://127.0.0.1:8080/ignite?cmd=get&cacheName=Person&key=3

{"successStatus":0,"affinityNodeId":"37f9d00d-8a7a-4db4-a1af-16471a548ce1","error":null,"sessionToken":null,"response":{"firstName":"John3","lastName":"Doe3","id":"3"}}

3) The getall command returns the values of several keys:

http://127.0.0.1:8080/ignite?cmd=getall&cacheName=Person&k1=1&k2=2

{"successStatus":0,"affinityNodeId":null,"error":null,"sessionToken":null,"response":{"1":{"firstName":"John1","lastName":"Doe1","id":"1"},"2":{"firstName":"John2","lastName":"Doe2","id":"2"}}}

Hope that it will help you.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BinaryObjectBuilder - setField - Boolean Type - Problem

2018-05-08 Thread aealexsandrov
Hi Jonathan,

I tested your example with the latest changes from the Apache Ignite
repository. Everything looks OK. My steps:

1)Start server node
2)Connect to server node using sqlline tool:

sqlline.bat --color=true --verbose=true -u
jdbc:ignite:thin://127.0.0.1:10800

3)Create new table:

0: jdbc:ignite:thin://127.0.0.1:10800> CREATE TABLE TEST (
. . . . . . . . . . . . . . . . . . .>F_ID varchar PRIMARY KEY,
. . . . . . . . . . . . . . . . . . .>F_INT INTEGER,
. . . . . . . . . . . . . . . . . . .>F_BOOLEAN BOOLEAN
. . . . . . . . . . . . . . . . . . .> ) WITH "CACHE_NAME=TEST,
key_type=java.lang.String, value_type=TEST";
No rows affected (0,277 seconds)

4)Start the client node with next code:

IgniteConfiguration cfg = new IgniteConfiguration();

Ignite ignite;

cfg.setClientMode(true);

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
List<String> addrs = Arrays.asList("127.0.0.1:47500..47501");
ipFinder.setAddresses(addrs);
TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discoSpi);

ignite = Ignition.getOrStart(cfg);

BinaryObjectBuilder binaryObjectBuilder =
ignite.binary().builder("TEST");

binaryObjectBuilder.setField("F_BOOLEAN", true);
binaryObjectBuilder.setField("F_INT", 1);
binaryObjectBuilder.setField("F_ID", "SomeId");

BinaryObject bo = binaryObjectBuilder.build();

System.out.println("Binary object [" + bo.toString() + "]");

And got next output:

Binary object [TEST [idHash=445288316, hash=-886521494, F_INT=1,
F_BOOLEAN=true, F_ID=SomeId]]

Could you please provide your Ignite version and a full code reproducer?

BR,
Andrei




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What does "Non heap" mean in the log?

2018-04-26 Thread aealexsandrov
Hi,

It's the non-heap memory that the JVM uses for its internal memory
management. You can read about it here:

https://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryUsage.html

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cluster/ClusterMetrics.html#getNonHeapMemoryCommitted--

Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-23 Thread aealexsandrov
Also, I will check why it's not working in the following case:

CacheConfiguration<String, String> cfg = new CacheConfiguration<>();

cfg.setIndexedTypes(String.class, String.class);
cfg.setName(CACHE_NAME);

IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);

and 

http://127.0.0.1:8080/ignite?cmd=qryexe&cacheName=StartNode&pageSize=1&type=String&arg1=2&qry=_key%20%3D%20%3F

Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-23 Thread aealexsandrov
Hi Michael,

You are trying to work with SQL queries. Let me give you an example of how
you can do it:

1) Start an Ignite server node.
2) Start an Ignite client node that will create the cache and put some values:

StartNode.java
  

3) Run the following command:

http://127.0.0.1:8080/ignite?cmd=qryexe&type=Value&pageSize=1&cacheName=mycache&arg1=15&qry=key%20%3D%20%3F

Here "key%20%3D%20%3F" is the URL-encoded form of "key = ?",

where ? is the value taken from arg1.

It does not work without arg1..argN (e.g. with a literal "key = 15"). It
looks like this depends on the parsing process; I will check.

The result should be like next:

{"successStatus":0,"error":null,"sessionToken":null,"response":{"items":[{"key":{"key":15},"value":{"value":"Value
15"}}],"last":true,"queryId":0,"fieldsMetadata":[]}}

All components of this query should be described here:

https://apacheignite.readme.io/docs/rest-api#section-sql-query-execute

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Effective Data through DataStream

2018-04-20 Thread aealexsandrov
Hi,

Could you please provide your code example, configuration and used Ignite
version?

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to start Ignite with a jar built from the source code

2018-04-20 Thread aealexsandrov
Hi,

The source code includes DEVNOTES.txt, which describes how to build Ignite.

I believe that next steps could help you:

1)git clone https://git-wip-us.apache.org/repos/asf/ignite.git
2)Update your source code.
3)Build ignite as described from DEVNOTES.txt:

1.Compile and install:

mvn clean install -Pall-java,all-scala,licenses -DskipTests

2.Javadoc generation (optional):
  
mvn initialize -Pjavadoc

3. Assemble the Apache Ignite fabric:

mvn initialize -Prelease

Look for apache-ignite-fabric-<version>-bin.zip in the ./target/bin directory.

4)After that you can use generated jar files.

5) The Ignite run command consists of the following:

"%JAVA_HOME%\bin\java.exe" %JVM_OPTS% %QUIET% %JMX_MON%
-DIGNITE_HOME="%IGNITE_HOME%" -DIGNITE_PROG_NAME="%PROG_NAME%" -cp "%CP%"
%MAIN_CLASS% "%CONFIG%"


where:

%JAVA_HOME% - path to your java installation
%JVM_OPTS% - options for JVM where ignite will be started
%QUIET% - -DIGNITE_QUIET=true (or false)
%IGNITE_HOME% - path to your Ignite installation (the
apache-ignite-fabric-<version>-bin folder from the unzipped
apache-ignite-fabric-<version>-bin.zip)
%PROG_NAME% - program name
%CP% - classpath. By default it should be like next:

C:\Ignite\2.4\apache-ignite-fabric-2.4.0-bin\libs\*;C:\Ignite\2.4\apache-ignite-fabric-2.4.0-bin\libs\ignite-indexing\*;C:\Ignite\2.4\apache-ignite-fabric-2.4.0-bin\libs\ignite-spring\*;C:\Ignite\2.4\apache-ignite-fabric-2.4.0-bin\libs\licenses\*

%MAIN_CLASS% - org.apache.ignite.startup.cmdline.CommandLineStartup for
command line (maybe you will create your own)
%CONFIG% - path to config file  (default -  config\default-config.xml)

The options list above isn't complete, but I hope it will help you build
and start Ignite.

Thank you,
Andrei

















--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-19 Thread aealexsandrov
Hi Michael,

I reproduced your problem. It's a known issue. Please take a look at this
response to find a workaround:

http://apache-ignite-users.70518.x6.nabble.com/How-to-use-rest-api-to-put-an-object-into-cache-td5897.html#a5955

Looks like this problem will be solved for basic types in upcoming releases:

https://issues.apache.org/jira/browse/IGNITE-3345 

Best regards,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inconsistency reading cache from code and via REST?

2018-04-18 Thread aealexsandrov
Hi Michail,

Could you please provide the following information:

1) Ignite version
2) your Ignite XML config
3) your reproducer (GridCacheTest) - it looks like it wasn't attached.

Also, if you are using Ignite 2.4, did you add your servers to the baseline
topology using control.sh before cluster activation?

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache ignite support for multi-tenancy

2018-04-18 Thread aealexsandrov
Hi,

You can create several separate caches, one per tenant, with the same
configuration. What is your use case?

Thanks,
Andrei 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to start Ignite with a jar built from the source code

2018-04-18 Thread aealexsandrov
Hi,

You can find the ignite.bat and ignite.sh scripts in the bin folder of the
source bundle. They are the best way to start Ignite with a custom
configuration, as described here:

https://apacheignite.readme.io/docs/getting-started#section-passing-configuration-file

Also you can take a look inside these scripts and use the run command from
them directly.

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to use bigbench test ignite performance

2018-04-16 Thread aealexsandrov
Hi,

As far as I know, Ignite provides its own benchmarks based on the Yardstick framework.

Documentation:

https://apacheignite.readme.io/docs/perfomance-benchmarking

yardstick:

https://github.com/gridgain/yardstick

The Ignite sources include the IgniteAbstractBenchmark class, which can be
used to create custom benchmarks.

Best Regards.
Andrei










--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problem of datastorage after compute finished

2018-04-13 Thread aealexsandrov
Hi Michael,

As I can see from your configuration, you are going to have a cluster with
two Ignite servers and a PARTITIONED cache.

You can't directly choose on which node your cache entries will be stored,
because Ignite uses its own affinity function, which maps keys to
partitions (not nodes).

In this case, data from your cache can be stored on both nodes.

What you can do:

1) Create two caches and set up node filters for them:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setNodeFilter-org.apache.ignite.lang.IgnitePredicate-

2) Set up your own AffinityKey. Ignite can then guarantee that entries
computed on node A are stored together on one node and entries computed on
node B on another, although there is no guarantee that these will be the
same nodes where they were computed. For example, something like this:

public static class EntityKey {
public EntityKey(String id, long nodeId) {
this.id = id;
this.nodeId = nodeId;
}

private String id;

// Node ID which will be used for affinity.
@AffinityKeyMapped
private long nodeId;

public long getNodeId() {
return nodeId;
}

public String getId() {
return id;
}

}

After that, use it as follows:

IgniteCache cache = ignite.getOrCreateCache(cfg);

cache.putIfAbsent(new EntityKey(hid, 0), monthValues); // from node A
cache.putIfAbsent(new EntityKey(hid, 1), monthValues); // from node B

Thanks,
Andrei








--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Customized affinity function

2018-04-11 Thread aealexsandrov
Hi Prasad,

The affinity function does not directly map keys to nodes; it maps keys to
partitions, and it takes care of balancing entries between nodes. To read
more, please take a look here:

https://apacheignite.readme.io/docs/affinity-collocation#affinity-function

If you are going to create a separate cache for every subscriber, then you
can try to set up node filters per subscriber:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setNodeFilter-org.apache.ignite.lang.IgnitePredicate-

If you have the same cache for all subscribers, then try to set up your own
AffinityKey (as described below):

IgniteCache<EntityKey, String> cache = ignite.getOrCreateCache(cfg);

for (int i = 0; i < 1_000_000; i++) {
    cache.put(new EntityKey(2 * i, 1L), "Value1" + i);     // sub id = 1
    cache.put(new EntityKey(2 * i + 1, 2L), "Value2" + i); // sub id = 2
}

where:

public static class EntityKey {
public EntityKey(long id, long subscriberId) {
this.id = id;
this.subscriberId = subscriberId;
}

private long id;

// Subscriber ID which will be used for affinity.
@AffinityKeyMapped
private long subscriberId;

public long getSubscriberId() {
return subscriberId;
}

public long getId() {
return id;
}
}
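The grouping effect of @AffinityKeyMapped can be sketched without Ignite. In this simplified model (an illustration only, not Ignite's actual RendezvousAffinityFunction), the partition is computed from the affinity field alone, so all keys sharing a subscriberId map to the same partition:

```java
public class AffinitySketch {
    static final int PARTS = 1024; // Ignite's default partition count

    // Simplified model: the partition is derived from the affinity field
    // (subscriberId) only, so every key of one subscriber lands together.
    static int partition(long subscriberId) {
        return (int) Math.abs(subscriberId % PARTS);
    }

    public static void main(String[] args) {
        // Keys (0, 1L) and (2, 1L) share subscriberId = 1 -> same partition,
        // regardless of the id field.
        System.out.println(partition(1L));
        System.out.println(partition(2L));
    }
}
```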

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with hive as 3rdparty persistence

2018-04-06 Thread aealexsandrov
Hi,

You can describe your own cacheConfiguration with a cache store that reads
and writes through to Hive via its JDBC driver. (The XML example attached to
the original message is not preserved in this archive.)

The Hive JDBC driver (%HIVE_JDBC_DRIVER%) you can find here:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_data-access/content/hive-jdbc-odbc-drivers.html
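A sketch of such a cache configuration in Spring XML (illustrative only — the cache name and the `hiveDataSource` bean are placeholders; the data source would be configured with org.apache.hive.jdbc.HiveDriver and a jdbc:hive2:// URL):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="hiveBackedCache"/>
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="true"/>
    <property name="cacheStoreFactory">
        <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
            <!-- Placeholder: a DataSource bean backed by the Hive JDBC driver. -->
            <property name="dataSourceBean" value="hiveDataSource"/>
            <property name="dialect">
                <bean class="org.apache.ignite.cache.store.jdbc.dialect.BasicJdbcDialect"/>
            </property>
        </bean>
    </property>
</bean>
```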

For more information read next:

https://apacheignite.readme.io/docs/3rd-party-store

Best Regards,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How suppress Ignite log message

2018-04-05 Thread aealexsandrov
Hello,

You can try to use next options:

-DIGNITE_NO_ASCII=true -DIGNITE_QUIET=false

1) IGNITE_NO_ASCII will turn off the Ignite ASCII banner:

if (System.getProperty(IGNITE_NO_ASCII) == null) {
String ver = "ver. " + ACK_VER_STR;

// Big thanks to: http://patorjk.com/software/taag
// Font name "Small Slant"
if (log.isInfoEnabled()) {
log.info(NL + NL +
">>>__    " + NL +
">>>   /  _/ ___/ |/ /  _/_  __/ __/  " + NL +
">>>  _/ // (7 7// /  / / / _/" + NL +
">>> /___/\\___/_/|_/___/ /_/ /___/   " + NL +

2) IGNITE_QUIET will hide the other information from your example:

if (log.isQuiet())
U.quiet(false, "OS: " + U.osString());

if (log.isQuiet())
U.quiet(false, "VM information: " + U.jdkString());

You can also try to change the appender options in the default log4j config
file:

^-- Logging by 'Log4JLogger [quiet=true,
config=/C:/GridGain/projects/gridgain/incubator-ignite/config/ignite-log4j.xml]'

Information about IGNITE_NO_ASCII, IGNITE_QUIET, and other
IgniteSystemProperties can be found here:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html

Best Regards,
Andrei




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Remote node ID is not as expected - New Node not coming up

2018-04-04 Thread aealexsandrov
Hi,

You can configure TcpCommunicationSpi:

TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
communicationSpi.setLocalAddress(localAddress);
communicationSpi.setLocalPort(cc.getCommunicationLocalPort());
communicationSpi.setLocalPortRange(cc.getPortRange());

Please take a look at documentation:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalPort-int-

Best Regards,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Redis KEYS command?

2018-03-28 Thread aealexsandrov
Hi Jose,

All available information about Ignite/Redis compatibility can be found here:

https://apacheignite.readme.io/docs/redis

If you want to stay informed about upcoming integrations, you can read
articles on the official site:

https://www.gridgain.com/search#!/?s=Redis

Best Regards,
Andrei 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re:Re: Re:Issue about ignite-sql limit of table quantity

2018-03-26 Thread aealexsandrov
Hi Fvyaba,

I investigated your example. In your code you create a new cache every time
you create a new table, and every new cache has some memory overhead. The
following code can help you get the average allocated memory:

try (IgniteCache cache = ignite.getOrCreateCache(defaultCacheCfg)) {
    for (int i = 1; i < 100; i++) {
        cache.query(new SqlFieldsQuery(String.format(
            "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));
        System.out.println("Count " + i + " -");
        for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
            System.out.println(">>> Memory Region Name: " + metrics.getName());
            System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
            System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
            System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
            System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
        }
    }
}

On my machine with default settings I got next:

>>> Memory Region Name: Default_Region
>>> Allocation Rate: 3419.9666
>>> Allocated Size Full: 840491008
>>> Allocated Size avg: 8489808
>>> Physical Memory Size: 840491008

So it's about 8 MB per cache (so with 3.2 GB you can create about 400
caches). I am not sure whether this is OK, but you can do the following to
avoid org.apache.ignite.IgniteCheckedException: Out of memory in data region:

1) Increase the maximum amount of off-heap memory available to the data
region (the maxSize property of DataRegionConfiguration; the XML example
attached to the original message is not preserved in this archive).
2) Use persistence, or swap space (the persistenceEnabled and swapPath
properties of DataRegionConfiguration; the XML example attached to the
original message is not preserved in this archive).
You can read more about it here:
https://apacheignite.readme.io/docs/distributed-persistent-store
https://apacheignite.readme.io/v1.0/docs/off-heap-memory

Please try to test the following code:

1) Add a persistent data region named "Default_Region" to your config (the
XML example attached to the original message is not preserved in this
archive).

2) Run the following:

public class Example {
    public static void main(String[] args) throws IgniteException {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            ignite.cluster().active(true);

            CacheConfiguration defaultCacheCfg =
                new CacheConfiguration<>("Default_cache").setSqlSchema("PUBLIC");

            defaultCacheCfg.setDataRegionName("Default_Region");

            try (IgniteCache cache = ignite.getOrCreateCache(defaultCacheCfg)) {
                for (int i = 1; i < 1000; i++) {
                    // remove the old table cache just in case
                    cache.query(new SqlFieldsQuery(String.format("DROP TABLE TBL_%s", i)));
                    // create a new table
                    cache.query(new SqlFieldsQuery(String.format(
                        "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));
                    System.out.println("Count " + i + " -");
                    for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
                        System.out.println(">>> Memory Region Name: " + metrics.getName());
                        System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
                        System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
                        System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
                        System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
                    }
                }
            }

            ignite.cluster().active(false);
        }
    }
}
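For reference, a data-region configuration matching what the code expects could look like the following sketch (the region name Default_Region comes from setDataRegionName above; the size is an assumed value, since the original attachment was lost):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="dataRegionConfigurations">
            <list>
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Default_Region"/>
                    <!-- Assumed: room for ~1000 table caches at ~8 MB each. -->
                    <property name="maxSize" value="#{10L * 1024 * 1024 * 1024}"/>
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </list>
        </property>
    </bean>
</property>
```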









--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Service Grid launching Compute Tasks

2018-03-22 Thread aealexsandrov
Hi Neeraj,

Generally you can launch compute tasks from the execute method, because they
will use separate thread-pool executors. Also, according to the
documentation, you may stay in the execute method until the cancel method is
called:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/services/Service.html

"Starts execution of this service. This method is automatically invoked
whenever an instance of the service is deployed on a grid node. Note that
service is considered deployed even after it exits the execute method and
can be cancelled (or undeployed) only by calling any of the cancel methods
on IgniteServices API. Also note that service is not required to exit from
execute method until cancel(ServiceContext) method was called."

Also, if you take a look at the implementation of GridServiceProcessor, you
can see the following:

// Start service in its own thread.
final ExecutorService exe = svcCtx.executor();

exe.execute(new Runnable() {
    @Override public void run() {
        try {
            svc.execute(svcCtx);
        }
        catch (InterruptedException | IgniteInterruptedCheckedException ignore) {
            if (log.isDebugEnabled())
                log.debug("Service thread was interrupted [name=" + svcCtx.name() +
                    ", execId=" + svcCtx.executionId() + ']');
        }
        catch (IgniteException e) {
            if (e.hasCause(InterruptedException.class) ||
                e.hasCause(IgniteInterruptedCheckedException.class)) {
                if (log.isDebugEnabled())
                    log.debug("Service thread was interrupted [name=" + svcCtx.name() +
                        ", execId=" + svcCtx.executionId() + ']');
            }
            else {
                U.error(log, "Service execution stopped with error [name=" + svcCtx.name() +
                    ", execId=" + svcCtx.executionId() + ']', e);
            }
        }
        catch (Throwable e) {
            log.error("Service execution stopped with error [name=" + svcCtx.name() +
                ", execId=" + svcCtx.executionId() + ']', e);

            if (e instanceof Error)
                throw (Error)e;
        }
        finally {
            // Suicide.
            exe.shutdownNow();
        }
    }
});

This means that you should manage your exceptions (e.g.
SocketTimeoutException for TCP connections) and watch out for possible
deadlocks.

If you have any problems with it, please send an example that reproduces
them.

Thank you,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

