Reply: why my ignite cluster went to compatibility mode

2018-08-17 Thread Huang Meilong
Thank you, Alex, but why did one of my Ignite nodes go down abnormally?


From: Alex Plehanov
Sent: August 17, 2018 15:58:07
To: user@ignite.apache.org
Subject: Re: why my ignite cluster went to compatibility mode

Hi, Huang,

This was already discussed here [1]. You probably ran Visor (a daemon node) to
join the cluster, which switches the cluster to compatibility mode. You can
restart the cluster to switch it back to the normal state. A fix (ticket
IGNITE-8774) will be available in Ignite 2.7.

[1]: 
http://apache-ignite-users.70518.x6.nabble.com/Node-with-BaselineTopology-cannot-join-mixed-cluster-running-in-compatibility-mode-td22200.html
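As a side note, whether any daemon node (such as Visor) is present in the topology can be checked from the Ignite API. This is a hedged sketch, not taken from the thread; it assumes a node started with the default configuration and that the `ClusterGroup.forDaemons()` projection (present in Ignite 2.x) is available:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;

public class DaemonCheck {
    public static void main(String[] args) {
        // Start (or join) a node with the default configuration.
        try (Ignite ignite = Ignition.start()) {
            // Daemon nodes (e.g. Visor) do not hold data but still join
            // discovery; in Ignite 2.6 their presence could flip a persistent
            // cluster into compatibility mode (fixed by IGNITE-8774 in 2.7).
            for (ClusterNode n : ignite.cluster().forDaemons().nodes())
                System.out.println("Daemon node in topology: " + n.id());
        }
    }
}
```

If this prints any node IDs, a daemon node is attached to the cluster.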

2018-08-17 10:36 GMT+03:00 Huang Meilong <ims...@outlook.com>:

Hi all,


I'm new to Ignite. I started an Ignite cluster with three nodes yesterday (with
the command: ./ignite.sh -v -np
/root/apache-ignite-fabric-2.6.0-bin/examples/config/persistentstore/examples-persistent-store.xml).
Today I found that one node was down without any log, and when I try to restart
the lost node, it says the cluster is in compatibility mode and the new node
cannot join. How can I restart this node?


"""

[15:19:14,299][INFO][tcp-disco-sock-reader-#5][TcpDiscoverySpi] Started serving 
remote node connection [rmtAddr=/172.16.157.129:34695, rmtPort=34695]
[15:19:14,414][SEVERE][tcp-disco-msg-worker-#3][TcpDiscoverySpi] 
TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node in 
order to prevent cluster wide instability.
class org.apache.ignite.IgniteException: Node with BaselineTopology cannot join 
mixed cluster running in compatibility mode
at 
org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.java:714)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883)
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1939)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2744)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
[15:19:14,423][SEVERE][tcp-disco-msg-worker-#3][] Critical system error 
detected. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
[type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Node with 
BaselineTopology cannot join mixed cluster running in compatibility mode]]
class org.apache.ignite.IgniteException: Node with BaselineTopology cannot join 
mixed cluster running in compatibility mode
at 
org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.java:714)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883)
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1939)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2744)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
[15:19:14,424][SEVERE][tcp-disco-msg-worker-#3][] JVM will be halted 
immediately due to the failure: [failureCtx=FailureContext 
[type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Node with 
BaselineTopology cannot join mixed cluster running in compatibility mode]]


"""


Thanks,

Huang





Reply: data loss using IgniteDataStreamer API

2018-08-16 Thread Huang Meilong
Thank you, I found that auto-flush is what I need.


From: Ilya Kasnacheev
Sent: August 16, 2018 21:22:10
To: user@ignite.apache.org
Subject: Re: data loss using IgniteDataStreamer API

Hello!

For starters, I don't see you call stmr.close() anywhere.

The data streamer is only guaranteed to write all data to the cache after it is
closed properly.

Regards,

--
Ilya Kasnacheev
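Ilya's point can be sketched as follows. This is a hedged example, not code from the thread: the cache name and key/value types are simplified placeholders, and `autoFlushFrequency` is the periodic-flush knob the original poster later found. Closing the streamer (here via try-with-resources) is what guarantees all buffered entries reach the cache:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerFlushExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Ensure the target cache exists before opening a streamer on it.
            ignite.getOrCreateCache("APM_METRIC_CACHE");

            // try-with-resources guarantees close(), which flushes all
            // remaining buffered entries to the cache.
            try (IgniteDataStreamer<Long, String> stmr =
                     ignite.dataStreamer("APM_METRIC_CACHE")) {
                // Optionally flush buffered data at most every second, even
                // before close() (the "auto-flush" behavior).
                stmr.autoFlushFrequency(1_000);

                for (long i = 0; i < 1_000_000; i++)
                    stmr.addData(i, "value-" + i);

                // An explicit flush() is also possible at any point.
                stmr.flush();
            } // close() here makes all streamed data visible to queries
        }
    }
}
```

Without the close (or a flush), the last partially filled buffers never leave the client, which matches the ~3,400 missing rows described below.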

2018-08-16 16:19 GMT+03:00 Huang Meilong <ims...@outlook.com>:

Hi all,


I'm using the IgniteDataStreamer API to ingest 1,000,000 records into a SQL
table cache, but only 996,626 records show up:


0: jdbc:ignite:thin://127.0.0.1/> select count(*) from APMMETRIC;
++
|COUNT(*)|
++
| 996626 |
++
1 row selected (0.057 seconds)
0: jdbc:ignite:thin://127.0.0.1/>



Does the Ignite data streamer lose data? Code snippet below:


IgniteDataStreamer<ApmMetricKey, ApmMetric> stmr =
    ignite.dataStreamer("APM_METRIC_CACHE");

long start = System.currentTimeMillis();
for (long l = 0L; l < 1000L; l++) {
    long start1 = System.currentTimeMillis();
    for (int j = 0; j < 1000; j++) {
        ApmMetric metric = new ApmMetric(l, "metric_" + j, "CLUSTER-XXX",
            "host-1", 80.0 + (j / 100.0));
        ApmMetricKey metricKey = new ApmMetricKey(l, "metric_" + j,
            "CLUSTER-XXX", "host-1");
        stmr.addData(metricKey, metric);
    }
    long end1 = System.currentTimeMillis();
    System.out.println("stream 1000 records cost " + (end1 - start1) + " ms.");
}
long end = System.currentTimeMillis();
System.out.println("stream 1000000 records cost " + (end - start) + " ms.");


public class ApmMetricKey implements Serializable {
    private Long timeStamp;
    private String metricName;
    private String clusterId;
    private String hostName;

    public Long getTimeStamp() {
        return timeStamp;
    }

    public void setTimeStamp(Long timeStamp) {
        this.timeStamp = timeStamp;
    }

    public String getMetricName() {
        return metricName;
    }

    public void setMetricName(String metricName) {
        this.metricName = metricName;
    }

    public String getClusterId() {
        return clusterId;
    }

    public void setClusterId(String clusterId) {
        this.clusterId = clusterId;
    }

    public String getHostName() {
        return hostName;
    }

    public void setHostName(String hostName) {
        this.hostName = hostName;
    }

    public ApmMetricKey() {}

    public ApmMetricKey(Long timeStamp, String metricName, String clusterId,
        String hostName) {
        this.timeStamp = timeStamp;
        this.metricName = metricName;
        this.clusterId = clusterId;
        this.hostName = hostName;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        ApmMetricKey that = (ApmMetricKey) o;
        return Objects.equals(timeStamp, that.timeStamp) &&
            Objects.equals(metricName, that.metricName) &&
            Objects.equals(clusterId, that.clusterId) &&
            Objects.equals(hostName, that.hostName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(timeStamp, metricName, clusterId, hostName);
    }

    @Override
    public String toString() {
        return "ApmMetricKey{" +
            "timeStamp=" + timeStamp +
            ", metricName='" + metricName + '\'' +
            ", clusterId='" + clusterId + '\'' +
            ", hostName='" + hostName + '\'' +
            '}';
    }
}





Reply: problem when streaming data to a sql table

2018-08-16 Thread Huang Meilong
I resolved it by setting KEY_TYPE and VALUE_TYPE.
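For the record, a hedged sketch of what such a fix can look like, reusing the MX names from the code quoted below; MXKey is an illustrative key class (not in the original), and the exact WITH parameters are an assumption based on this reply:

```java
// KEY_TYPE/VALUE_TYPE tie the SQL table to the binary type names that the
// data streamer writes, so streamed entries become visible to SQL queries.
// Without them, the table's implicit key/value types do not match the
// streamed objects and SQL sees no rows.
stmt.executeUpdate("CREATE TABLE IF NOT EXISTS MX (" +
    " timeStamp LONG, metricName VARCHAR, clusterId VARCHAR," +
    " hostName VARCHAR, metricValue FLOAT," +
    " PRIMARY KEY (timeStamp, metricName, clusterId, hostName))" +
    " WITH \"backups=1, CACHE_NAME=MX, KEY_TYPE=MXKey, VALUE_TYPE=MX\"");

// Then stream with key/value classes whose names match KEY_TYPE/VALUE_TYPE:
try (IgniteDataStreamer<MXKey, MX> stmr = ignite.dataStreamer("MX")) {
    MXKey key = new MXKey();   // hypothetical key class mirroring the PK columns
    MX mx = new MX();
    mx.setMetricValue(80.0);
    stmr.addData(key, mx);
}   // close() flushes the entry so the SQL query below can see it
```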


From: Huang Meilong
Sent: August 16, 2018 17:42:33
To: user@ignite.apache.org
Subject: problem when streaming data to a sql table


Hi,


I'm streaming data to a SQL table like this:


'''

public static void main(String[] args) throws Exception {
    Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
    Connection conn =
        DriverManager.getConnection("jdbc:ignite:thin://worker-1/");
    System.out.println("started jdbc connection...");

    // Create database tables
    Statement stmt = conn.createStatement();

    // Create table based on PARTITIONED template with one backup
    stmt.executeUpdate("drop table if EXISTS MX;");
    stmt.executeUpdate("CREATE TABLE IF NOT EXISTS MX (" +
        " timeStamp LONG, metricName VARCHAR, clusterId VARCHAR," +
        " hostName VARCHAR, metricValue FLOAT," +
        " PRIMARY KEY (timeStamp, metricName, clusterId, hostName)) " +
        " WITH \"backups=1, CACHE_NAME=MX, DATA_REGION=MX_24GB_Region\"");

    stmt.executeUpdate("CREATE INDEX idx_timestamp_v1 ON MX (timeStamp)");
    stmt.executeUpdate("CREATE INDEX idx_metric_name_v1 ON MX (metricName)");
    stmt.executeUpdate("CREATE INDEX idx_cluster_id_v1 ON MX (clusterId)");
    stmt.executeUpdate("CREATE INDEX idx_hostname_v1 ON MX (hostName)");

    System.out.println("created table...");

    Ignition.setClientMode(true);
    IgniteConfiguration cfg = new IgniteConfiguration();

    TcpDiscoveryVmIpFinder ipFinder1 = new TcpDiscoveryVmIpFinder();
    ipFinder1.setAddresses(Arrays.asList("worker-1:47500..47502",
        "worker-2:47500..47502", "worker-3:47500..47502"));

    TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
    TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
    ipFinder.setAddresses(Arrays.asList("worker-1:47500..47502",
        "worker-2:47500..47502", "worker-3:47500..47502"));
    discoverySpi.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(discoverySpi);
    //cfg.setActiveOnStart(true);

    Ignite ignite = Ignition.start(cfg);

    System.out.println("start ignite...");

    IgniteCache<String, MX> stmCache = ignite.getOrCreateCache("MX");

    IgniteDataStreamer<String, MX> stmr =
        ignite.dataStreamer(stmCache.getName());

    MX mx = new MX();
    mx.setClusterId("MX");
    mx.setMetricName("metric");
    mx.setHostName("host-1");
    mx.setMetricValue(80.0);
    stmr.addData("MX", mx);

    try (ResultSet rs =
             stmt.executeQuery("SELECT AVG(metricValue) FROM MX")) {
        System.out.println("Query result:");
        while (rs.next())
            System.out.println(">>>" + rs.getString(1));
    }
}


'''


After putting data into the streamer, the result set is empty when I execute a
query on that table.


class MX is as below:


public class MX implements Serializable {

    @QuerySqlField(index = true)
    private Long timesTamp;

    @QuerySqlField(index = true)
    private String metricName;

    @QuerySqlField(index = true)
    private String clusterId;

    @QuerySqlField(index = true)
    private String hostName;

    @QuerySqlField
    private Double metricValue;

    public Long getTimesTamp() {
        return timesTamp;
    }

    public void setTimesTamp(Long timesTamp) {
        this.timesTamp = timesTamp;
    }

    public String getMetricName() {
        return metricName;
    }

    public void setMetricName(String metricName) {
        this.metricName = metricName;
    }

    public String getClusterId() {
        return clusterId;
    }

    public void setClusterId(String clusterId) {
        this.clusterId = clusterId;
    }

    public String getHostName() {
        return hostName;
    }

    public void setHostName(String hostName) {
        this.hostName = hostName;
    }

    public Double getMetricValue() {
        return metricValue;
    }

    public void setMetricValue(Double metricValue) {
        this.metricValue = metricValue;
    }
}


Can anyone give me some suggestions? Thank you very much!





Reply: distributed-ddl extended-parameters section showing 404 page not found

2018-08-15 Thread Huang Meilong
Thank you, Slava.


It says that to specify an affinity key name we can use AFFINITY_KEY in the
Parameters section:

"

AFFINITY_KEY= - specifies an affinity key
(https://apacheignite.readme.io/docs/affinity-collocation) name which is a
column of the PRIMARY KEY constraint.

"


But in the example section, it uses affinitykey:


SQL (https://apacheignite-sql.readme.io/docs/create-table):

CREATE TABLE IF NOT EXISTS Person (
  id int,
  city_id int,
  name varchar,
  age int,
  company varchar,
  PRIMARY KEY (id, city_id)
) WITH "template=partitioned,backups=1,affinitykey=city_id, key_type=PersonKe



Which one works?


From: Вячеслав Коптилин
Sent: August 15, 2018 16:14:34
To: user@ignite.apache.org
Subject: Re: distributed-ddl extended-parameters section showing 404 page not found

Hello,

Yep, the link is broken, unfortunately.
It seems it should be the following 
https://apacheignite-sql.readme.io/docs/create-table#section-parameters

Thanks,
S.

Wed, Aug 15, 2018, 10:17, Huang Meilong <ims...@outlook.com>:

I found it here: https://apacheignite-sql.readme.io/docs/getting-started





"""

to set other cache configurations for the table, you should use the template
parameter and provide the name of the cache configuration previously
registered (via XML or code). See the extended parameters section
(https://apacheignite-sql.readme.io/docs/distributed-ddl#section-extended-parameters)
for more details.

"""


From: dkarachentsev <dkarachent...@gridgain.com>
Sent: August 15, 2018 14:48:56
To: user@ignite.apache.org
Subject: Re: distributed-ddl extended-parameters section showing 404 page not found
主题: Re: distributed-ddl extended-parameters section showing 404 page not found

Hi,

Where did you find it? It might be a broken link.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




distributed-ddl extended-parameters section showing 404 page not found

2018-08-14 Thread Huang Meilong
Hi,


Do you know what the extended parameters for distributed DDL are? This page
cannot be found:


https://apacheignite-sql.readme.io/docs/distributed-ddl#section-extended-parameters


Thanks