Re: WAL size control

2019-01-16 Thread Denis Magda
Justin, check this section; you can tune the WAL or disable WAL archiving in
general:
https://apacheignite.readme.io/docs/write-ahead-log#section-tuning-wal-archive
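
In code, those knobs live on DataStorageConfiguration; this is only a rough
sketch, and the paths and sizes below are placeholders, not recommended values:

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalTuningSketch {
    public static IgniteConfiguration config() {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setWalMode(WALMode.LOG_ONLY);          // less strict fsync mode
        storage.setWalSegmentSize(64 * 1024 * 1024);   // size of each WAL segment file
        storage.setWalPath("/data/ignite/wal");        // placeholder path
        // Per the linked page, pointing the archive path at the WAL path
        // effectively disables WAL archiving.
        storage.setWalArchivePath("/data/ignite/wal");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
}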

-
Denis


On Mon, Jan 14, 2019 at 7:06 PM Justin Ji  wrote:

> I have more than three hundred WAL segments in our folder
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


server always shutdown

2019-01-16 Thread hulitao198758
Starting the multi-node server side with ignite.sh is very unstable; the
process often terminates. I am currently running three cluster nodes on
three servers (three servers make up the cluster environment), and a server
node will often hang. I do not know what is going on.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

server ignite.sh always shutdown

2019-01-16 Thread hulitao198758
Starting the multi-node server side with ignite.sh is very unstable; the
process often terminates. I am currently running three cluster nodes on
three servers (three servers make up the cluster environment), and a server
node will often hang. I do not know what is going on.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data streamer has been closed.

2019-01-16 Thread yangjiajun
Hello.

Thanks for the reply. Unfortunately, I still get the exception after running my
test on 2.7 several times.


ilya.kasnacheev wrote
> Hello!
> 
> I can reproduce this problem, but then again, it does not seem to
> reproduce
> on 2.7. Have you considered upgrading?
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> ср, 16 янв. 2019 г. в 14:14, yangjiajun <

> 1371549332@

>>:
> 
>> Hello.
>>
>> I do  test on a ignite 2.6 node with persistence enabled and get an
>> exception:
>>
>>  Exception in thread "main" java.sql.BatchUpdateException: class
>> org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
>> at
>>
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
>> at java.lang.Thread.run(Unknown Source)
>>
>> Here is my test code:
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.PreparedStatement;
>> import java.sql.SQLException;
>> import java.util.Properties;
>>
>> /**
>>  * test insert data in streaming mode
>>  * */
>> public class InsertStreamingMode {
>>
>> private static Connection conn;
>>
>> public static void main(String[] args) throws Exception {
>>
>> initialize();
>>
>> close();
>> }
>>
>> public static void close() throws Exception {
>> conn.close();
>> }
>>
>> public static void initialize() throws Exception {
>> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
>> final String dbUrl =
>>
>> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
>> final Properties props = new Properties();
>> conn = DriverManager.getConnection(dbUrl, props);
>> initData();
>> overwriteData();
>> }
>>
>> private static void initData() throws SQLException{
>>
>> long start=System.currentTimeMillis();
>> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
>> ON").execute();
>>
>> String sql="insert INTO  city1(id,name,name1)
>> VALUES(?,?,?)";
>> PreparedStatement ps=conn.prepareStatement(sql);
>> for(int i=0;i<160;i++){
>> String s1=String.valueOf(Math.random());
>> String s2=String.valueOf(Math.random());
>> ps.setInt(1, i);
>> ps.setString(2, s1);
>> ps.setString(3, s2);
>> ps.execute();
>> }
>> conn.prepareStatement("set streaming off").execute();
>> long end=System.currentTimeMillis();
>> System.out.println(end-start);
>> }
>>
>> private static void overwriteData() throws SQLException{
>>
>> long start=System.currentTimeMillis();
>> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
>> ON").execute();
>>
>> String sql="insert INTO  city1(id,name,name1)
>> VALUES(?,?,?)";
>> PreparedStatement ps=conn.prepareStatement(sql);
>> for(int i=0;i<160;i++){
>> String s1="test";
>> String s2="test";
>> ps.setInt(1, i);
>> ps.setString(2, s1);
>> ps.setString(3, s2);
>> ps.execute();
>> }
>> conn.prepareStatement("set streaming off").execute();
>> long end=System.currentTimeMillis();
>> System.out.println(end-start);
>> }
>> }
>>
>> Here is the table:
>> CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
>> "template=replicated"
>>
>> The exception occurs on overwriteData method.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Concurrent merge into operations cause critical system error on ignite 2.7 node.

2019-01-16 Thread yangjiajun
Hello.

Here is my test code:


import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

/**
 * concurrent merge into
 * */
public class MergeInto {

    private final static String mergeSql1 =
            "merge INTO city2(id,name,name1) VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')";
    private final static String mergeSql2 =
            "merge INTO city2(id,name,name1) VALUES(2,'1','1'),(1,'1','1')";

    private static Connection conn;
    private static Connection conn1;

    public static void main(String[] args) throws Exception {

        initialize(false);

        testQuery();

        while (true) {

        }
    }

    public static void close() throws Exception {
        conn.close();
    }

    public static void initialize(boolean initData) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
        final String dbUrl =
                "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
        final Properties props = new Properties();
        conn = DriverManager.getConnection(dbUrl, props);
        conn1 = DriverManager.getConnection(dbUrl, props);
        if (initData) {
            initData();
        }
    }

    private static void initData() throws SQLException {

        long start = System.currentTimeMillis();
        conn.prepareStatement("set streaming on").execute();

        String sql = "insert INTO city2(id,name,name1) VALUES(?,?,?)";
        PreparedStatement ps = conn.prepareStatement(sql);
        for (int i = 0; i < 160; i++) {
            String s1 = String.valueOf(Math.random());
            String s2 = String.valueOf(Math.random());
            ps.setInt(1, i);
            ps.setString(2, s1);
            ps.setString(3, s2);
            ps.execute();
        }
        conn.prepareStatement("set streaming off").execute();
        long end = System.currentTimeMillis();
        System.out.println(end - start);
    }

    public static void testQuery() throws Exception {

        new Thread(new Runnable() {

            @Override
            public void run() {

                while (true) {

                    long startTime = System.currentTimeMillis();
                    try (Statement stmt = conn.createStatement()) {
                        stmt.executeUpdate(mergeSql1);
                    } catch (SQLException e1) {
                        e1.printStackTrace();
                    }

                    System.out.println("conn:" + (System.currentTimeMillis() - startTime));
                }

            }
        }).start();

        new Thread(new Runnable() {

            @Override
            public void run() {

                while (true) {
                    long startTime = System.currentTimeMillis();

                    try (Statement stmt = conn1.createStatement()) {
                        stmt.executeUpdate(mergeSql2);
                    } catch (SQLException e1) {
                        e1.printStackTrace();
                    }

                    System.out.println("conn1:" + (System.currentTimeMillis() - startTime));
                }
            }
        }).start();

    }
}


ilya.kasnacheev wrote
> Hello!
> 
> Can you provide a reproducer project which would reliably show this
> behavior?
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> ср, 16 янв. 2019 г. в 15:37, yangjiajun <

> 1371549332@

>>:
> 
>> Hello.
>>
>> Thanks for reply.I think these "Failed to process selector key" errors
>> cause
>> by the manual halt of my test application.I don't think network is a
>> problem.Since my test cases show that some operations cause trouble while
>> some others work fine.
>>
>>
>> ilya.kasnacheev wrote
>> > Hello!
>> >
>> > I can see multiple "Failed to process selector key" errors in your log.
>> > Are
>> > you sure that your nodes can communicate via network freely and without
>> > delay?
>> >
>> > Regards,
>> > --
>> > Ilya Kasnacheev
>> >
>> >
>> > ср, 16 янв. 2019 г. в 10:21, yangjiajun <
>>
>> > 1371549332@
>>
>> >>:
>> >
>> >> Hello.
>> >>
>> >> Please see the logs.
>> >>
>> >> ignite-8bdefd7a.zip
>> >> <
>> >>
>> http://apach

Re: MVCC and continuous query

2019-01-16 Thread Cindy Xing
Thanks Ilya.

Do we have an example showcasing the pub/sub programming model with MVCC
enabled? Basically, instead of polling, reacting when a specific key is updated.

Also, does the Ignite client API expose the version info?


Cindy



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Cache copy from another cache.

2019-01-16 Thread Mikhail
Hi Hemasundara,

I think this should be implemented at the application level. When the new
cache is ready, switch the cache name that your app uses so that all
operations go to the new cache. This might require some synchronization,
depending on your requirements.
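
A minimal sketch of that application-level switch, assuming a small wrapper
that all readers go through (the class and method names below are
illustrative, not an Ignite API):

import java.util.concurrent.atomic.AtomicReference;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class SwappableCache<K, V> {
    private final Ignite ignite;
    private final AtomicReference<String> activeName;

    public SwappableCache(Ignite ignite, String initialName) {
        this.ignite = ignite;
        this.activeName = new AtomicReference<>(initialName);
    }

    // All application reads/writes resolve the currently active cache here.
    public IgniteCache<K, V> cache() {
        return ignite.cache(activeName.get());
    }

    // Switch readers to the freshly loaded cache, then drop the old one.
    public void swapTo(String refreshedName) {
        String old = activeName.getAndSet(refreshedName);
        ignite.destroyCache(old);
    }
}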

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Thin client cannot retrieve data that was inserted with the regular Ignite client when using a composite key

2019-01-16 Thread Mikhail
Hi Roman,

it looks like a bug to me; at least I don't see anything wrong with your
test case. Also, I can reproduce the issue with the latest master version,
so I filed a ticket:
https://issues.apache.org/jira/browse/IGNITE-10960

Thank you for your report,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite_backup_restore_query

2019-01-16 Thread msuh


Are there any additional steps to take after "4. start cluster again" to
ensure that Ignite recognizes the new data? I've attempted several times to
copy the data and WAL directories (making sure that the consistent IDs are
the same) into a new node, but when looking through Visor, all the caches
indicate that 0 entries were ingested.
It just doesn't seem to be picking up the manually copied data in the
persistence storage directory.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite_backup_restore_query

2019-01-16 Thread Mikhail
Hi,

You can make snapshots manually (a small API sketch for steps 1 and 4 follows below):
1. Deactivate the cluster.
2. Stop all nodes.
3. Copy the work, WAL and data storage directories.
4. Start the cluster again.
To make live snapshots of your cluster you can use 3rd-party solutions like
this one: https://docs.gridgain.com/docs/data-snapshots
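
A sketch of steps 1 and 4 via the Java API, assuming the calls are issued
from a node or client that is already connected (the copy in steps 2-3
happens outside Ignite):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ManualSnapshotSteps {
    public static void deactivateBeforeCopy() {
        Ignite ignite = Ignition.ignite(); // handle to an already running node
        ignite.cluster().active(false);    // step 1: deactivate the cluster
        // steps 2-3: stop the nodes and copy work/wal/data directories externally
    }

    public static void activateAfterRestart() {
        Ignite ignite = Ignition.ignite();
        ignite.cluster().active(true);     // step 4: activate the cluster again
    }
}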

Thanks,
Mike.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


3rd party persistence with hive not updating hive with all records/entries in ignite

2019-01-16 Thread shivakumar
Hi,
I am trying to use Hive as a 3rd-party persistence store with write-behind
enabled, and I set these cache configurations using Spring XML.

Every 5000 ms interval, Ignite updates only one record/one row in Hive, even
though I ingested around 2000 records of data into Ignite.
Why do all 2000 records of data not go into Hive when the 5000 ms
flush-frequency interval is hit?
Is there any other parameter which affects these persistence store updates?
Any suggestions are appreciated!
Thanks
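
For reference, the write-behind parameters involved can also be expressed in
Java; the cache name and values below are placeholders, not the original XML
configuration, and the Hive-backed CacheStore itself is not shown:

import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfigSketch {
    public static CacheConfiguration<Long, Object> cacheConfig() {
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("hiveBackedCache");
        // plus ccfg.setCacheStoreFactory(...) pointing at the Hive-backed CacheStore
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushFrequency(5000); // flush at least every 5000 ms
        ccfg.setWriteBehindFlushSize(10240);     // also flush once this many entries are buffered
        ccfg.setWriteBehindBatchSize(512);       // max entries handed to the store per writeAll() call
        return ccfg;
    }
}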





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: ignite continuous query with XML

2019-01-16 Thread shivakumar
thanks stan!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Transactional cache in Atomic mode

2019-01-16 Thread msuh
Hi Mikhail,

Thanks for your answer.

1) So if Ignite implicitly puts each cache update in a transaction, does
that mean it's better performance-wise to wrap 100k updates in a single
transaction (we found that 100k was the optimal number of updates for a
single transaction) than to not explicitly put the updates in a transaction?
Would Ignite itself make the implicit transaction *only if* the explicit
Java transaction API has not been called?
2) I should've stated it more clearly. So I was testing code structured like
this:

ExecutorService executor = Executors.newFixedThreadPool(10);
for (each partition p in node) {
    executor.execute(() -> {
        try (Transaction tx = ignite.transactions().txStart(..)) {
            // update the entries in p
        }
    });
}

So we're wondering what the locking granularity is when we enter an
explicitly called transaction. Does it lock on a partition, a node, an
entry, or something else? And is this configurable per execution?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to wait for partition map exchange on cluster activation

2019-01-16 Thread Andrey Davydov
I created a small reproducer project. It is available at:

https://drive.google.com/file/d/1A2_i1YBI7OGtJM0b8bxuIJrTI61ZQXoR/view?usp=sharing

The archive contains the project plus some logs and dumps.

There is only one DemoTest.java to run to reproduce.
The test runs 3 Ignite nodes with a configuration very similar to our real
project, without any logic or additional services.

IMPORTANT: As far as I can tell from reproducing it, everything works fine
when I run the test on a local drive (an SSD in the case of my laptop), but
if I copy the project to an external HDD mounted over USB 3.0, the problem
is present. So it may be a race condition caused by slow IO. Initially I hit
the problem when I built my project on an external drive.

Andrey.



On Wed, Jan 9, 2019 at 8:43 PM Andrey Davydov 
wrote:

>
>
> Hello,
>
> I found in test logs of my project that Ignite warns about failed
> partition maps exchange. In test environment 3 Ignite 2.7 server nodes run
> in the same JVM8 on Win10, using localhost networking.
>
>
>
> 2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Affinity changes applied in 10 ms.
>
> 2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Affinity changes applied in 10 ms.
>
> 2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Full map updating for 5 groups performed in 4 ms.
>
> 2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Full map updating for 5 groups performed in 5 ms.
>
> 2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
> err=null]
>
> 2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
> err=null]
>
> 2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
> checkpointLockWait=0ms, checkpointLockHoldTime=6ms,
> walCpRecordFsyncDuration=248ms, pages=204, reason='node started']
>
> 2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
> checkpointLockWait=0ms, checkpointLockHoldTime=8ms,
> walCpRecordFsyncDuration=257ms, pages=204, reason='node started']
>
> 2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
> checkpointLockWait=0ms, checkpointLockHoldTime=22ms,
> walCpRecordFsyncDuration=289ms, pages=204, reason='node started']
>
> 2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms,
> pagesWrite=84ms, fsync=1992ms, total=3179ms]
>
> 2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1213ms,
> pagesWrite=79ms, fsync=2066ms, total=3358ms]
>
> 2019-01-09 20:15:30,998 [db-checkpoint-thread-#146%TestNode-2%] INFO
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
> - Checkpoint finished [cpId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1262ms,
> pagesWrite=79ms, fsync=2203ms, total=3

Re: Transactional cache in Atomic mode

2019-01-16 Thread Mikhail
Hi

>1)
You can do all operations on transactional caches without defining an explicit
transaction.
However, even if you don't start a transaction and, for example, put some data
into a transactional cache, Ignite itself will make an implicit transaction, so
transactionCache.put("key", "value") will update your cache without any
call to commit.
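
An illustrative sketch of the difference (cache and key names are arbitrary):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class ImplicitVsExplicitTx {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("txCache");
            ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<String, String> cache = ignite.getOrCreateCache(ccfg);

            // Implicit transaction: a single operation commits on its own.
            cache.put("key", "value");

            // Explicit transaction: several operations commit (or roll back) together.
            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put("k1", "v1");
                cache.put("k2", "v2");
                tx.commit();
            }
        }
    }
}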

>2)

ScanQuery isn't transactional, so I don't understand your question, because
ScanQuery doesn't acquire any locks.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Transactional cache in Atomic mode

2019-01-16 Thread msuh
Hi,

Two questions about transactionality and locking: 
1) I've been looking at https://apacheignite.readme.io/docs/transactions and
couldn't get a clear answer from the Ignite documentation so I hope to get
an answer here.
All of the caches we use are set to TRANSACTIONAL, as we will often need to
do a logical operation on a group of caches. However, there are cases we
want to have ATOMIC operations on a single cache to increase performance.

What I'm wondering is: if a cache is set to operate on a TRANSACTIONAL mode,
does it always need to be operated inside the IgniteTransactions
transactions = ignite.transactions(); Java transaction API? Would the
changes simply not commit if we were to update the cache without surrounding
it inside this IgniteTransactions context?

2) To increase the performance of a ScanQuery over a test cache of ~1 million
entries, I have set up a thread pool of 10 threads to perform a ScanQuery over
each partition - where each thread takes over one partition - in a cluster
that has a single server node. Each transaction had a commit size of 100k.
However, the entire operation failed because the threads could not acquire
the lock for the transaction, and the transactions of other threads timed
out. What is the locking granularity for each transaction (it seems to lock
the entire cache from what I've witnessed)? Is it possible to change the
locking level so that I can set the transaction to lock on a partition or
any other granularity?
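
For reference, a minimal per-partition scan sketch, without the transactional
wrapping discussed above (names and the pool size are arbitrary):

import javax.cache.Cache;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class PerPartitionScan {
    public static void scanAll(Ignite ignite, String cacheName) {
        IgniteCache<Object, Object> cache = ignite.cache(cacheName);
        int parts = ignite.affinity(cacheName).partitions();
        ExecutorService executor = Executors.newFixedThreadPool(10);
        for (int p = 0; p < parts; p++) {
            final int part = p;
            executor.execute(() -> {
                // Scan only the entries that belong to one partition.
                try (QueryCursor<Cache.Entry<Object, Object>> cur =
                         cache.query(new ScanQuery<Object, Object>().setPartition(part))) {
                    for (Cache.Entry<Object, Object> e : cur) {
                        // process / update the entry here
                    }
                }
            });
        }
        executor.shutdown();
    }
}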

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: failed to connect to secure ignite server from ignite web agent

2019-01-16 Thread Alexey Kuznetsov
Hi, Shivakumar!

If you want to connect to cluster with enabled security you need to
configure following properties on Web Agent:
"--node-login" and "--node-password" via command line or
"default.properties" file.

[1] For example: ignite-web-agent.sh  --node-login ignite --node-password
ignite

Please note that it is recommended to add a special user for that, for
example with the name "web_agent".

And in that case command [1] will look like:
ignite-web-agent.sh  --node-login  web_agent  --node-password
some_pwd_of_web_agent

-- 
Alexey Kuznetsov


NearCache

2019-01-16 Thread Grégory Jevardat de Fombelle
Hello

Is there any option to have a near cache in Ignite that stores unmarshalled
values instead of serialized ones? I ask this for performance reasons.
I noticed that for big cached objects, default Java deserialization is quite
expensive, around 2 seconds for a complex object larger than 100 MB. So in the
end, caching this kind of object in a near cache is not really interesting
given the deserialization penalty.
By contrast, when caching a reference to this object in a custom
application-level cache, such as a ThreadLocal map or even just a HashMap,
retrieval is almost instantaneous (on the order of 10-100 nanoseconds).

Note that I have not tested binary serialization, storing JSON-serialized
objects, or other custom serialization libraries such as protobuf or Kryo.
Also, I'm not sure whether it is possible to use custom libraries for
serialization of huge and complex object graphs.
Are there any options or architectural recommendations, and some benchmarks,
on this topic?

failed to connect to secure ignite server from ignite web agent

2019-01-16 Thread shivakumar
I am trying to connect to the Ignite server from the web agent, but it gives
the below exception:

[2019-01-15 15:54:13,684][INFO ][pool-1-thread-1][RestExecutor] Connected to
cluster [url=https://ignite-service.default.svc.cluster.local:8080]
[2019-01-15 15:54:13,720][WARN ][pool-1-thread-1][ClusterListener] Failed to
handle request - session token not found or invalid

I enabled authentication on the Ignite servers.
It looks like this is a known issue in 2.6 that was fixed in 2.7.
Can I get the JIRA ticket that was used to fix this issue in 2.7?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Failed to wait for partition map exchange on cluster activation

2019-01-16 Thread Andrey Davydov
No, I don’t start services during activation (because I got some problems with
it on an older Ignite version some months ago). There is one broadcasted
service in the system, and I start it manually just after activation.

I will provide dumps and more detailed logs a bit later.

Andrey.

From: Ilya Kasnacheev
Sent: 16 January 2019, 15:13
To: user@ignite.apache.org
Subject: Re: Failed to wait for partition map exchange on cluster activation

Hello!

Sorry, wrong thread:)

It is hard to say what happens here exactly. Can you collect several thread 
dumps during this prolonged activation, share them with us?

Do you have e.g. services? I was told that services would start during 
activation.

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 15:10, Ilya Kasnacheev :
Hello!

I can see multiple "Failed to process selector key" errors in your log. Are you 
sure that your nodes can communicate via network freely and without delay?

Regards,
-- 
Ilya Kasnacheev


Tue, 15 Jan 2019 at 20:12, Andrey Davydov :
Hello,

You can find full log there: 
https://drive.google.com/file/d/1FwCjsXMw5LQJnKO0x5GNJ2w9gVsDbXlc/view?usp=sharing

I can rerun tests with additional logging settings if needed

Andrey.






On Tue, Jan 15, 2019 at 6:23 PM Ilya Kasnacheev  
wrote:
Hello!

Can you please upload the full verbose log somewhere?

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jan 2019 at 20:43, Andrey Davydov :
 
Hello, 
I found in test logs of my project that Ignite warns about failed partition 
maps exchange. In test environment 3 Ignite 2.7 server nodes run in the same 
JVM8 on Win10, using localhost networking.
 
2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Affinity changes applied in 10 ms.
2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Affinity changes applied in 10 ms.
2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Full map updating for 5 groups performed in 4 ms.
2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Full map updating for 5 groups performed in 5 ms.
2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
err=null]
2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
err=null]
2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec, 
startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143], 
checkpointLockWait=0ms, checkpointLockHoldTime=6ms, 
walCpRecordFsyncDuration=248ms, pages=204, reason='node started']
2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b, 
startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143], 
checkpointLockWait=0ms, checkpointLockHoldTime=8ms, 
walCpRecordFsyncDuration=257ms, pages=204, reason='node started']
2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc, 
startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143], 
checkpointLockWait=0ms, checkpointLockHoldTime=22ms, 
walCpRecordFsyncDuration=289ms, pages=204, reason='node started']
2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec, pages=204, 
markPos=FileWALPointer [idx=0, fileOff=929726, len=31143], 
walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms, 
pagesWrite=84ms, fsync=1992ms, total=3179ms]
2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO  
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b, pages=204, 
markPos=FileWALPointer

Re: Data streamer has been closed.

2019-01-16 Thread Ilya Kasnacheev
Hello!

I can reproduce this problem, but then again, it does not seem to reproduce
on 2.7. Have you considered upgrading?

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 14:14, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I do  test on a ignite 2.6 node with persistence enabled and get an
> exception:
>
>  Exception in thread "main" java.sql.BatchUpdateException: class
> org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
> at java.lang.Thread.run(Unknown Source)
>
> Here is my test code:
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
> import java.sql.SQLException;
> import java.util.Properties;
>
> /**
>  * test insert data in streaming mode
>  * */
> public class InsertStreamingMode {
>
> private static Connection conn;
>
> public static void main(String[] args) throws Exception {
>
> initialize();
>
> close();
> }
>
> public static void close() throws Exception {
> conn.close();
> }
>
> public static void initialize() throws Exception {
> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> final String dbUrl =
>
> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
> final Properties props = new Properties();
> conn = DriverManager.getConnection(dbUrl, props);
> initData();
> overwriteData();
> }
>
> private static void initData() throws SQLException{
>
> long start=System.currentTimeMillis();
> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
> ON").execute();
>
> String sql="insert INTO  city1(id,name,name1)
> VALUES(?,?,?)";
> PreparedStatement ps=conn.prepareStatement(sql);
> for(int i=0;i<160;i++){
> String s1=String.valueOf(Math.random());
> String s2=String.valueOf(Math.random());
> ps.setInt(1, i);
> ps.setString(2, s1);
> ps.setString(3, s2);
> ps.execute();
> }
> conn.prepareStatement("set streaming off").execute();
> long end=System.currentTimeMillis();
> System.out.println(end-start);
> }
>
> private static void overwriteData() throws SQLException{
>
> long start=System.currentTimeMillis();
> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
> ON").execute();
>
> String sql="insert INTO  city1(id,name,name1)
> VALUES(?,?,?)";
> PreparedStatement ps=conn.prepareStatement(sql);
> for(int i=0;i<160;i++){
> String s1="test";
> String s2="test";
> ps.setInt(1, i);
> ps.setString(2, s1);
> ps.setString(3, s2);
> ps.execute();
> }
> conn.prepareStatement("set streaming off").execute();
> long end=System.currentTimeMillis();
> System.out.println(end-start);
> }
> }
>
> Here is the table:
> CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
> "template=replicated"
>
> The exception occurs on overwriteData method.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Concurrent merge into operations cause critical system error on ignite 2.7 node.

2019-01-16 Thread Ilya Kasnacheev
Hello!

Can you provide a reproducer project which would reliably show this
behavior?

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 15:37, yangjiajun <1371549...@qq.com>:

> Hello.
>
> Thanks for reply.I think these "Failed to process selector key" errors
> cause
> by the manual halt of my test application.I don't think network is a
> problem.Since my test cases show that some operations cause trouble while
> some others work fine.
>
>
> ilya.kasnacheev wrote
> > Hello!
> >
> > I can see multiple "Failed to process selector key" errors in your log.
> > Are
> > you sure that your nodes can communicate via network freely and without
> > delay?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > ср, 16 янв. 2019 г. в 10:21, yangjiajun <
>
> > 1371549332@
>
> >>:
> >
> >> Hello.
> >>
> >> Please see the logs.
> >>
> >> ignite-8bdefd7a.zip
> >> <
> >>
> http://apache-ignite-users.70518.x6.nabble.com/file/t2059/ignite-8bdefd7a.zip
> >
> >>
> >>
> >>
> >> ilya.kasnacheev wrote
> >> > Hello!
> >> >
> >> > Can you provide logs?
> >> >
> >> > Regards,
> >> > --
> >> > Ilya Kasnacheev
> >> >
> >> >
> >> > вс, 13 янв. 2019 г. в 18:05, yangjiajun <
> >>
> >> > 1371549332@
> >>
> >> >>:
> >> >
> >> >> Hello.
> >> >>
> >> >> I have a ignite 2.7 node with persistence enabled.I test concurrent
> >> merge
> >> >> into operations on it and find below concurrent operations can cause
> >> >> critical system error:
> >> >> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
> >> >> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
> >> >> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
> >> >> VALUES(2,'1','1'),(1,'1','1')".
> >> >>
> >> >> But the following concurrent operations seem no problem:
> >> >> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
> >> >> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
> >> >> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
> >> >> VALUES(1,'1','1'),(2,'1','1')".
> >> >>
> >> >> Is this a bug?
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >> >>
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2019-01-16 Thread rick_tem
Oh, that is great.  I wasn't aware of that.  Thanks for the link!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Concurrent merge into operations cause critical system error on ignite 2.7 node.

2019-01-16 Thread yangjiajun
Hello.

Thanks for the reply. I think these "Failed to process selector key" errors
are caused by the manual halt of my test application. I don't think the
network is a problem, since my test cases show that some operations cause
trouble while some others work fine.

ilya.kasnacheev wrote
> Hello!
> 
> I can see multiple "Failed to process selector key" errors in your log.
> Are
> you sure that your nodes can communicate via network freely and without
> delay?
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> ср, 16 янв. 2019 г. в 10:21, yangjiajun <

> 1371549332@

>>:
> 
>> Hello.
>>
>> Please see the logs.
>>
>> ignite-8bdefd7a.zip
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/t2059/ignite-8bdefd7a.zip>
>>
>>
>>
>> ilya.kasnacheev wrote
>> > Hello!
>> >
>> > Can you provide logs?
>> >
>> > Regards,
>> > --
>> > Ilya Kasnacheev
>> >
>> >
>> > вс, 13 янв. 2019 г. в 18:05, yangjiajun <
>>
>> > 1371549332@
>>
>> >>:
>> >
>> >> Hello.
>> >>
>> >> I have a ignite 2.7 node with persistence enabled.I test concurrent
>> merge
>> >> into operations on it and find below concurrent operations can cause
>> >> critical system error:
>> >> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
>> >> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
>> >> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
>> >> VALUES(2,'1','1'),(1,'1','1')".
>> >>
>> >> But the following concurrent operations seem no problem:
>> >> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
>> >> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
>> >> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
>> >> VALUES(1,'1','1'),(2,'1','1')".
>> >>
>> >> Is this a bug?
>> >>
>> >>
>> >>
>> >> --
>> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>> >>
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2019-01-16 Thread Ilya Kasnacheev
Hello!

You can have some caches persistent and some not persistent by having
several DataRegions, some of which have persistenceEnabled=true and some of
which do not, and specifying the DataRegion by name in the cache configuration.

Please see
https://apacheignite.readme.io/docs/memory-configuration#section-data-regions
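
For illustration, a minimal sketch of such a mixed setup (region and cache
names are arbitrary):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MixedPersistenceSketch {
    public static void main(String[] args) {
        DataRegionConfiguration persistentRegion = new DataRegionConfiguration()
            .setName("persistent-region")
            .setPersistenceEnabled(true);

        DataRegionConfiguration inMemoryRegion = new DataRegionConfiguration()
            .setName("in-memory-region")
            .setPersistenceEnabled(false);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(persistentRegion, inMemoryRegion);

        // Each cache picks its data region by name.
        CacheConfiguration<Integer, String> durableCache =
            new CacheConfiguration<Integer, String>("durableCache").setDataRegionName("persistent-region");
        CacheConfiguration<Integer, String> transientCache =
            new CacheConfiguration<Integer, String>("transientCache").setDataRegionName("in-memory-region");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg)
            .setCacheConfiguration(durableCache, transientCache);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // persistence requires explicit activation
    }
}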

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 13:15, rick_tem :

> Yes, we have a similar reluctance to use persistent store.  Ouruse case is
> that Gigs of data will be running through it with several caches that we
> don't necessarily want to keep around in a docker environment.  Some
> caches,
> however, we would like persistent.  Is there a plan to have persistence
> configurable at the cache level, or will it be all or nothing in the
> forseeable future?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Concurrent merge into operations cause critical system error on ignite 2.7 node.

2019-01-16 Thread Ilya Kasnacheev
Hello!

I can see multiple "Failed to process selector key" errors in your log. Are
you sure that your nodes can communicate via network freely and without
delay?

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 10:21, yangjiajun <1371549...@qq.com>:

> Hello.
>
> Please see the logs.
>
> ignite-8bdefd7a.zip
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2059/ignite-8bdefd7a.zip>
>
>
>
> ilya.kasnacheev wrote
> > Hello!
> >
> > Can you provide logs?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > вс, 13 янв. 2019 г. в 18:05, yangjiajun <
>
> > 1371549332@
>
> >>:
> >
> >> Hello.
> >>
> >> I have a ignite 2.7 node with persistence enabled.I test concurrent
> merge
> >> into operations on it and find below concurrent operations can cause
> >> critical system error:
> >> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
> >> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
> >> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
> >> VALUES(2,'1','1'),(1,'1','1')".
> >>
> >> But the following concurrent operations seem no problem:
> >> 1.Thread 1 executes "merge INTO  city2(id,name,name1)
> >> VALUES(1,'1','1'),(2,'1','1'),(3,'1','1')".
> >> 2.Thread 2 executes "merge INTO  city2(id,name,name1)
> >> VALUES(1,'1','1'),(2,'1','1')".
> >>
> >> Is this a bug?
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to wait for partition map exchange on cluster activation

2019-01-16 Thread Ilya Kasnacheev
Hello!

Sorry, wrong thread:)

It is hard to say what happens here exactly. Can you collect several thread
dumps during this prolonged activation, share them with us?

Do you have e.g. services? I was told that services would start during
activation.

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 15:10, Ilya Kasnacheev :

> Hello!
>
> I can see multiple "Failed to process selector key" errors in your log.
> Are you sure that your nodes can communicate via network freely and without
> delay?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вт, 15 янв. 2019 г. в 20:12, Andrey Davydov :
>
>> Hello,
>>
>> You can find full log there:
>> https://drive.google.com/file/d/1FwCjsXMw5LQJnKO0x5GNJ2w9gVsDbXlc/view?usp=sharing
>>
>> I can rerun tests with additional logging settings if needed
>>
>> Andrey.
>>
>>
>>
>>
>>
>>
>> On Tue, Jan 15, 2019 at 6:23 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Can you please upload the full verbose log somewhere?
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> ср, 9 янв. 2019 г. в 20:43, Andrey Davydov :
>>>


 Hello,

 I found in test logs of my project that Ignite warns about failed
 partition maps exchange. In test environment 3 Ignite 2.7 server nodes run
 in the same JVM8 on Win10, using localhost networking.



 2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Affinity changes applied in 10 ms.

 2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Affinity changes applied in 10 ms.

 2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Full map updating for 5 groups performed in 4 ms.

 2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Full map updating for 5 groups performed in 5 ms.

 2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
 minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
 err=null]

 2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
 - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
 minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
 err=null]

 2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
 startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
 checkpointLockWait=0ms, checkpointLockHoldTime=6ms,
 walCpRecordFsyncDuration=248ms, pages=204, reason='node started']

 2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
 startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
 checkpointLockWait=0ms, checkpointLockHoldTime=8ms,
 walCpRecordFsyncDuration=257ms, pages=204, reason='node started']

 2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
 startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
 checkpointLockWait=0ms, checkpointLockHoldTime=22ms,
 walCpRecordFsyncDuration=289ms, pages=204, reason='node started']

 2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
 pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
 walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms,
 pagesWrite=84ms, fsync=1992ms, total=3179ms]

 2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
 - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
 pages=204, markPos=FileWALPointer [idx=0, fileOff=9

Re: Failed to wait for partition map exchange on cluster activation

2019-01-16 Thread Ilya Kasnacheev
Hello!

I can see multiple "Failed to process selector key" errors in your log. Are
you sure that your nodes can communicate via network freely and without
delay?

Regards,
-- 
Ilya Kasnacheev


Tue, 15 Jan 2019 at 20:12, Andrey Davydov :

> Hello,
>
> You can find full log there:
> https://drive.google.com/file/d/1FwCjsXMw5LQJnKO0x5GNJ2w9gVsDbXlc/view?usp=sharing
>
> I can rerun tests with additional logging settings if needed
>
> Andrey.
>
>
>
>
>
>
> On Tue, Jan 15, 2019 at 6:23 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Can you please upload the full verbose log somewhere?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> ср, 9 янв. 2019 г. в 20:43, Andrey Davydov :
>>
>>>
>>>
>>> Hello,
>>>
>>> I found in test logs of my project that Ignite warns about failed
>>> partition maps exchange. In test environment 3 Ignite 2.7 server nodes run
>>> in the same JVM8 on Win10, using localhost networking.
>>>
>>>
>>>
>>> 2019-01-09 20:15:27,719 [sys-#164%TestNode-2%] INFO
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>>> - Affinity changes applied in 10 ms.
>>>
>>> 2019-01-09 20:15:27,719 [sys-#163%TestNode-1%] INFO
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>>> - Affinity changes applied in 10 ms.
>>>
>>> 2019-01-09 20:15:27,724 [sys-#164%TestNode-2%] INFO
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>>> - Full map updating for 5 groups performed in 4 ms.
>>>
>>> 2019-01-09 20:15:27,724 [sys-#163%TestNode-1%] INFO
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>>> - Full map updating for 5 groups performed in 5 ms.
>>>
>>> 2019-01-09 20:15:27,725 [sys-#163%TestNode-1%] INFO
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>>> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
>>> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
>>> err=null]
>>>
>>> 2019-01-09 20:15:27,725 [sys-#164%TestNode-2%] INFO
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:102
>>> - Finish exchange future [startVer=AffinityTopologyVersion [topVer=3,
>>> minorTopVer=1], resVer=AffinityTopologyVersion [topVer=3, minorTopVer=1],
>>> err=null]
>>>
>>> 2019-01-09 20:15:28,710 [db-checkpoint-thread-#157%TestNode-1%] INFO
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>>> - Checkpoint started [checkpointId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
>>> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
>>> checkpointLockWait=0ms, checkpointLockHoldTime=6ms,
>>> walCpRecordFsyncDuration=248ms, pages=204, reason='node started']
>>>
>>> 2019-01-09 20:15:28,713 [db-checkpoint-thread-#151%TestNode-0%] INFO
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>>> - Checkpoint started [checkpointId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
>>> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
>>> checkpointLockWait=0ms, checkpointLockHoldTime=8ms,
>>> walCpRecordFsyncDuration=257ms, pages=204, reason='node started']
>>>
>>> 2019-01-09 20:15:28,715 [db-checkpoint-thread-#146%TestNode-2%] INFO
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>>> - Checkpoint started [checkpointId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
>>> startPtr=FileWALPointer [idx=0, fileOff=929726, len=31143],
>>> checkpointLockWait=0ms, checkpointLockHoldTime=22ms,
>>> walCpRecordFsyncDuration=289ms, pages=204, reason='node started']
>>>
>>> 2019-01-09 20:15:30,788 [db-checkpoint-thread-#157%TestNode-1%] INFO
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>>> - Checkpoint finished [cpId=443748a9-c1a5-4b3b-96e4-04a0862829ec,
>>> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
>>> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1103ms,
>>> pagesWrite=84ms, fsync=1992ms, total=3179ms]
>>>
>>> 2019-01-09 20:15:30,858 [db-checkpoint-thread-#151%TestNode-0%] INFO
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>>> - Checkpoint finished [cpId=cbc928e1-4ecd-40ae-9791-c6ba20c3669b,
>>> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
>>> walSegmentsCleared=0, walSegmentsCovered=[], markDuration=1213ms,
>>> pagesWrite=79ms, fsync=2066ms, total=3358ms]
>>>
>>> 2019-01-09 20:15:30,998 [db-checkpoint-thread-#146%TestNode-2%] INFO
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:102
>>> - Checkpoint finished [cpId=ef4c3d02-ca01-4d67-8128-48d4dc99aabc,
>>> pages=204, markPos=FileWALPointer [idx=0, fileOff=929726, len=31143],
>>> walSegmentsCleared=

Data streamer has been closed.

2019-01-16 Thread yangjiajun
Hello.

I did a test on an Ignite 2.6 node with persistence enabled and got an
exception:

 Exception in thread "main" java.sql.BatchUpdateException: class
org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
at java.lang.Thread.run(Unknown Source)

Here is my test code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Properties;

/**
 * test insert data in streaming mode
 * */
public class InsertStreamingMode {

private static Connection conn;

public static void main(String[] args) throws Exception {

initialize();

close();
}

public static void close() throws Exception {
conn.close();
}

public static void initialize() throws Exception {
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
final String dbUrl =
"jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
final Properties props = new Properties();
conn = DriverManager.getConnection(dbUrl, props);
initData();
overwriteData();
}

private static void initData() throws SQLException{

long start=System.currentTimeMillis();
conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE 
ON").execute();

String sql="insert INTO  city1(id,name,name1) VALUES(?,?,?)";
PreparedStatement ps=conn.prepareStatement(sql);
for(int i=0;i<160;i++){
String s1=String.valueOf(Math.random());
String s2=String.valueOf(Math.random());
ps.setInt(1, i);
ps.setString(2, s1);
ps.setString(3, s2);
ps.execute();
}
conn.prepareStatement("set streaming off").execute();
long end=System.currentTimeMillis();
System.out.println(end-start);
}

private static void overwriteData() throws SQLException{

long start=System.currentTimeMillis();
conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE 
ON").execute();

String sql="insert INTO  city1(id,name,name1) VALUES(?,?,?)";
PreparedStatement ps=conn.prepareStatement(sql);
for(int i=0;i<160;i++){
String s1="test";
String s2="test";
ps.setInt(1, i);
ps.setString(2, s1);
ps.setString(3, s2);
ps.execute();
}
conn.prepareStatement("set streaming off").execute();
long end=System.currentTimeMillis();
System.out.println(end-start);
}
}

Here is the table:
CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
"template=replicated"

The exception occurs on overwriteData method.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Failing to create index on Ignite table column

2019-01-16 Thread Stanislav Lukyanov
Don’t think there is any Ignite API for that yet.

Stan

From: Shravya Nethula
Sent: 16 January 2019, 12:42
To: user@ignite.apache.org
Subject: RE: Failing to create index on Ignite table column

Hi Stan,

Thank you! This information is helpful.

Do you know any Ignite API through which I can get the indexes of a
particular table?

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



ignite_backup_restore_query

2019-01-16 Thread Prerana y
I was trying to back up and restore the Ignite persistence and WAL data.
Are there any steps available that can be followed to restore the data to
the pods?

Thanks and regards
   Prerana


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2019-01-16 Thread rick_tem
Yes, we have a similar reluctance to use the persistent store. Our use case is
that gigs of data will be running through it with several caches that we
don't necessarily want to keep around in a Docker environment. Some caches,
however, we would like to be persistent. Is there a plan to have persistence
configurable at the cache level, or will it be all or nothing for the
foreseeable future?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2019-01-16 Thread colinc
Thanks. We'll give Native Persistence another try.

Our reluctance to use it stems from the fact that if something goes wrong
with the storage, then additional production processes are required to
recover - a bad persistent store can cause the cluster to fail to start or
else propagate problems.

If your cache data is considered transient, then the recovery or upgrade
process is a straightforward restart.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Recovering from a data region OOM condition

2019-01-16 Thread colinc
We were using Ignite 2.4 (update pending). Ignite 2.5 and later seem to
treat OOM as a critical error by default and stop the node. The reproducer
below uses a failure handler to stop this from happening. It allocates a
100MB (configurable - 100MB is quite small) region and fills it up with
data. Afterwards, it attempts to clear data from the cache. Every cache
operation (even data removal) results in OOM errors.

package mytest;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.NoOpFailureHandler;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;
import org.junit.Test;

import javax.cache.CacheException;

public class MemoryTest {

    private static final String CACHE_NAME = "cache";
    private static final Logger logger = LogManager.getLogger(MemoryTest.class);
    private static final String DEFAULT_MEMORY_REGION = "Default_Region";

    private static final String I1_NAME = "IgniteMemoryMonitorTest1";

    private static final long MEM_SIZE = 100L * 1024 * 1024;

    @Test
    public void testOOM() {
        try (Ignite ignite = startIgnite(I1_NAME)) {
            fillDataRegion(ignite);
            IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME);

            // Clear all entries from the cache to free up memory
            cache.clear();  // Fails here
            cache.put("Key", "Value");
        }
    }

    private Ignite startIgnite(String instanceName) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName(instanceName);
        cfg.setDataStorageConfiguration(createDataStorageConfiguration());
        cfg.setFailureHandler(new NoOpFailureHandler());
        return Ignition.start(cfg);
    }

    private DataStorageConfiguration createDataStorageConfiguration() {
        return new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(
                        new DataRegionConfiguration()
                                .setName(DEFAULT_MEMORY_REGION)
                                .setInitialSize(MEM_SIZE)
                                .setMaxSize(MEM_SIZE)
                                .setMetricsEnabled(true));
    }

    private void fillDataRegion(Ignite ignite) {
        byte[] megabyte = new byte[1024 * 1024];

        int storedDataMB = 0;
        try {
            IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME);
            for (int i = 0; i < 200; i++) {
                cache.put(i, megabyte);
                storedDataMB++;
            }
        } catch (CacheException e) {
            logger.info("Out of memory: " + e.getClass().getSimpleName()
                    + " after " + storedDataMB + "MB", e);
        }
    }
}




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Thin client cannot retrieve data that was inserted with the regular Ignite client when using a composite key

2019-01-16 Thread Roman Shtykh
Thin client cannot retrieve data with the composite key when it is put by a 
regular (thick) Ignite client.

Is any additional configuration needed or is it a bug?


    import java.io.Serializable;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.ClientException;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;
    
    public class ThinClientGets {
    final static String CACHE_NAME = "testCache";
    
    public static void main(String[] args) {
    try (Ignite ignite = Ignition.start()) {
    try (IgniteCache c = 
ignite.getOrCreateCache(CACHE_NAME)) {
    c.put(new TestKey("a", "0"), 1);
    
    System.out.println("Val: " + c.get(new TestKey("a", "0")));
    }
    
    try (IgniteCache c = 
ignite.getOrCreateCache(CACHE_NAME + 2)) {
    c.put("k1", 1);
    
    System.out.println("Val: " + c.get("k1"));
    }
    
    // thin client
    System.out.println("-- Getting with a thin client --");
    ClientConfiguration cfg = new 
ClientConfiguration().setAddresses("127.0.0.1:10800");
    
    try (IgniteClient igniteClient = Ignition.startClient(cfg)) {
    ClientCache cache = 
igniteClient.cache(CACHE_NAME);
    
    // null here!
    System.out.println(cache.get(new TestKey("a", "0")));
    
    ClientCache cache2 = 
igniteClient.cache(CACHE_NAME + 2);
    
    System.out.println(cache2.get("k1"));
    }
    catch (ClientException e) {
    System.err.println(e.getMessage());
    }
    catch (Exception e) {
    System.err.format("Unexpected failure: %s\n", e);
    
    }
    }
    }
    
    static class TestKey implements Serializable {
    private final String a;
    
    private final String b;
    
    public TestKey(String a, String b) {
    this.a = a;
    this.b = b;
    }
    
    public String getA() {
    return a;
    }
    
    public String getB() {
    return b;
    }
    }
    }


--
Roman


RE: Failing to create index on Ignite table column

2019-01-16 Thread Shravya Nethula
Hi Stan,

Thank you! This information is helpful.

Do you know any Ignite API through which I can get the indexes of a
particular table?
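
For reference, one possible way (a sketch, not an official recipe) is to read the
QueryEntity of the table's backing cache. The assumptions here are that the table
was created via SQL DDL, so its cache is named SQL_PUBLIC_<TABLE>, and that the
table name below is a placeholder:

    // Sketch: list index definitions by reading the backing cache's QueryEntity.
    // Assumption: table PERSON was created via SQL, so its cache is SQL_PUBLIC_PERSON.
    // Indexes created dynamically via CREATE INDEX may or may not show up here,
    // depending on the Ignite version.
    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.QueryEntity;
    import org.apache.ignite.cache.QueryIndex;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ListIndexes {
        public static void main(String[] args) {
            Ignition.setClientMode(true);

            try (Ignite ignite = Ignition.start()) {
                CacheConfiguration<?, ?> ccfg = ignite.cache("SQL_PUBLIC_PERSON")
                    .getConfiguration(CacheConfiguration.class);

                for (QueryEntity entity : ccfg.getQueryEntities())
                    for (QueryIndex idx : entity.getIndexes())
                        System.out.println(idx.getName() + " -> " + idx.getFields());
            }
        }
    }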

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Baselined node rejoining crashes other baseline nodes - Duplicate KeyError

2019-01-16 Thread mahesh76private
Stan, thanks for the visibility. 

-1-
Over the last year, we moved through various Ignite versions: 2.4, 2.5 and now 2.7.
We always kept the work folder intact.
-2-
During development, we might have tried to create an index a second time (or more)
on a column that already had one. Could that cause confusion at the Ignite level,
especially in a multi-node scenario? Was something out of sync? Was a check missing?
-3-
Over time, we dropped and recreated the tables (and their indexes) several times.
Could something stale have been left behind in the work folder? We always used 2 or
more nodes.
-4-
Over time, we also saw issues with index creation. My colleague posted another
strange behaviour with index creation; see the issue here:
http://apache-ignite-users.70518.x6.nabble.com/Failing-to-create-index-on-Ignite-table-column-td26252.html#a26258
In summary, if we don't give index names, Ignite throws exceptions.
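
For illustration, a minimal sketch of the two CREATE INDEX forms in question (table
and column names are placeholders; the connection string assumes a local node with
the JDBC thin driver):

    // Placeholder sketch: index creation with and without an explicit index name.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IndexNaming {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
                 Statement stmt = conn.createStatement()) {

                // Index with an explicit name.
                stmt.execute("CREATE INDEX person_city_idx ON person (city_id)");

                // Index without a name -- the form that reportedly triggered
                // exceptions in the linked thread.
                stmt.execute("CREATE INDEX ON person (name)");
            }
        }
    }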
 

Something seems to be wrong with Ignite's index handling in a multi-node
environment.

Regarding your point 2 (the JIRA), absolutely, it makes sense not to crash the node
on this exception. We have about 100 GB of data (tables) in Ignite, and the only
workaround right now seems to be:

Boot node 1 and keep its work folder.
Boot node 2 after removing its work folder.

Although this works, it gives the cluster a downtime of about 1-2 hours, which is
not acceptable for our customers.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-16 Thread mahesh76private
OK.
We don't have 1000 users, so any sort of concurrency-related lock-up seems unlikely.

So a possible explanation is corruption of the tree. Below is our scenario:

-a-
We created a table with SQL; in the CREATE TABLE parameters, KEY_TYPE="..." and
VALUE_TYPE="..." are also set.
The intention is to retrieve data inserted into the SQL table via the key/value API
for some ML code using the Ignite framework.

-b-
Once the table was created, we created an index on the primary key column.
Question: is creating an index on the primary key column required? Internally, I
assume you already create one.

-c-
We started inserting rows into the table via SQL INSERT.

In this scenario, we got the LOCK_RETRIES issue.

Hope this gives you some clue as to why the issue occurred. Can you please also
address the question in b)?
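
For reference, a minimal sketch of the setup described in a)-c) above, with
placeholder table, column and type names (the connection string assumes a local
node with the JDBC thin driver):

    // Placeholder reconstruction of the scenario: SQL table with KEY_TYPE/VALUE_TYPE,
    // an extra index on the primary key column, then plain SQL inserts.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LockRetriesScenario {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
                 Statement stmt = conn.createStatement()) {

                // a) Table created via SQL with KEY_TYPE/VALUE_TYPE so rows can also
                //    be read through the key/value API (WRAP_KEY is set because the
                //    key is a single column but a custom KEY_TYPE is used).
                stmt.execute("CREATE TABLE IF NOT EXISTS ml_data (" +
                    "id BIGINT PRIMARY KEY, features VARCHAR) WITH " +
                    "\"KEY_TYPE=com.example.MlKey,VALUE_TYPE=com.example.MlRow,WRAP_KEY=true\"");

                // b) Extra index on the primary key column (the question is whether
                //    this is needed at all, since the PK is already indexed internally).
                stmt.execute("CREATE INDEX IF NOT EXISTS ml_data_id_idx ON ml_data (id)");

                // c) Plain SQL inserts; the LOCK_RETRIES failure appeared during this step.
                for (int i = 0; i < 1000; i++)
                    stmt.executeUpdate("INSERT INTO ml_data (id, features) VALUES (" + i + ", 'f" + i + "')");
            }
        }
    }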

regards
Mahesh
.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Cache copy from another cache.

2019-01-16 Thread Hemasundara Rao
Hi Ignite Team,
Is there a way to assign one cache to another cache (with the total data copied)?
We have a requirement to replace an existing cache with another cache.
Let us say we have Cache1 (loaded with data from the database).
Every day we want to replace Cache1 with another cache, Cache1_Refresh (freshly
loaded with the current data from the database to pick up data changes). After
Cache1_Refresh is completely loaded, we want to replace Cache1 with Cache1_Refresh
without affecting users of Cache1, and then destroy Cache1_Refresh.
Please suggest how to do this.
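
A minimal sketch of one possible approach, assuming String keys, that both caches
already exist, and that overwriting Cache1 in place (rather than an atomic swap) is
acceptable:

    // Sketch: stream the freshly loaded entries from Cache1_Refresh back into Cache1,
    // then drop the refresh cache. Keys deleted from the source DB are not removed here.
    import javax.cache.Cache;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;

    public class CacheRefresh {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<String, Object> refresh = ignite.cache("Cache1_Refresh");

                try (IgniteDataStreamer<String, Object> streamer = ignite.dataStreamer("Cache1");
                     QueryCursor<Cache.Entry<String, Object>> cur =
                         refresh.query(new ScanQuery<String, Object>())) {
                    // Existing keys in Cache1 must be overwritten, not skipped.
                    streamer.allowOverwrite(true);

                    for (Cache.Entry<String, Object> e : cur)
                        streamer.addData(e.getKey(), e.getValue());
                } // closing the streamer flushes any buffered entries

                ignite.destroyCache("Cache1_Refresh");
            }
        }
    }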

Thanks and regards,
Hemasundara Rao Pottangi  | Senior Project Leader

HotelHub LLP
Phone: +91 80 6741 8700
Cell: +91 99 4807 7054
Email: hemasundara@hotelhub.com
Website: www.hotelhub.com 


RE: Baselined node rejoining crashes other baseline nodes - Duplicate KeyError

2019-01-16 Thread Stanislav Lukyanov
Hi,

Left a comment in the issue.
In short, the problem is that you somehow got a duplicate index on one of your
nodes, even though that shouldn’t happen. We need to figure out how.

Can you tell us what you do with the cluster while it is running?
I’m particularly interested in any actions related to cache/table/index creation
and deletion.

Stan

From: mahesh76private
Sent: 16 January 2019 5:54
To: user@ignite.apache.org
Subject: Baselined node rejoining crashes other baseline nodes - Duplicate 
KeyError

I have two nodes on which we have 3 partitioned tables. Indexes are also built on
these tables.

For 24 hours the caches work fine. The tables are definitely distributed across
both nodes.

Node 2 reboots due to some issue, goes out of the baseline, then comes back and
rejoins the baseline. The other baseline nodes crash, and in the logs we see a
duplicate key error:

[10:38:35,437][INFO]tcp-disco-srvr-#2[TcpDiscoverySpi] TCP discovery
accepted incoming connection [rmtAddr=/192.168.1.7, rmtPort=45102]
[10:38:35,437][INFO]tcp-disco-srvr-#2[TcpDiscoverySpi] TCP discovery
spawning a new thread for connection [rmtAddr=/192.168.1.7, rmtPort=45102]
[10:38:35,437][INFO]tcp-disco-sock-reader-#12[TcpDiscoverySpi] Started
serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102]
[10:38:35,451][INFO]tcp-disco-sock-reader-#12[TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102
[10:38:35,457][SEVERE]tcp-disco-msg-worker-#3[TcpDiscoverySpi]
TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
in order to prevent cluster wide instability.
*java.lang.IllegalStateException: Duplicate key
at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223)
at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174)*


Logs and configurations are attached here:
https://issues.apache.org/jira/browse/IGNITE-8728
Please offer any suggestions.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Does set streaming off command flush data?

2019-01-16 Thread Stanislav Lukyanov
Yes. It closes the underlying streamers, which in turn flushes the data.
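
For example, a minimal sketch (assuming a local node with the JDBC thin driver and
an existing CITY(id, name) table):

    // Sketch: stream rows in, flush them by turning streaming off, then reuse the
    // same connection for a regular query.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StreamingOffFlush {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
                 Statement stmt = conn.createStatement()) {

                stmt.execute("SET STREAMING ON");

                for (int i = 0; i < 10000; i++)
                    stmt.executeUpdate("INSERT INTO CITY (id, name) VALUES (" + i + ", 'c" + i + "')");

                // Closes the underlying streamers and flushes the buffered rows.
                stmt.execute("SET STREAMING OFF");

                // The same connection can now be reused for ordinary statements.
                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM CITY")) {
                    rs.next();
                    System.out.println("Rows: " + rs.getLong(1));
                }
            }
        }
    }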

Stan

From: yangjiajun
Sent: 16 January 2019 11:41
To: user@ignite.apache.org
Subject: Does set streaming off command flush data?

Hello.

Ignite's docs say we should close the JDBC/ODBC connection so that all data is
flushed to the cluster when using streaming mode. Does the SET STREAMING OFF
command do the same, so that we can reuse the connection?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-16 Thread Stanislav Lukyanov
Yes.
You can also use an environment variable instead of the system property:
IGNITE_BPLUS_TREE_LOCK_RETRIES=10 ignite.sh …

Stan

From: mahesh76private
Sent: 16 January 2019 11:29
To: user@ignite.apache.org
Subject: RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

How do I set it?

Should I boot the Ignite node (ignite.sh) with the following switch?

java ...   -DIGNITE_BPLUS_TREE_LOCK_RETRIES=10

regards
Mahesh





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Does set streaming off command flush data?

2019-01-16 Thread yangjiajun
Hello.

Ignite's docs say we should close the JDBC/ODBC connection so that all data is
flushed to the cluster when using streaming mode. Does the SET STREAMING OFF
command do the same, so that we can reuse the connection?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-16 Thread mahesh76private
How do I set it?

Should I boot the Ignite node (ignite.sh) with the following switch?

java ...   -DIGNITE_BPLUS_TREE_LOCK_RETRIES=10

regards
Mahesh





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: ignite continuous query with XML

2019-01-16 Thread Stanislav Lukyanov
No, you have to use actual code. 
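
For example, a minimal sketch of a continuous query configured in code (cache name,
key/value types and the filter are placeholders):

    // Sketch: a continuous query with an optional initial query and a local listener.
    import javax.cache.Cache;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.ContinuousQuery;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;

    public class ContinuousQueryExample {
        public static void main(String[] args) throws Exception {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

                ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

                // Optional: return the entries that already match when the query starts.
                qry.setInitialQuery(new ScanQuery<Integer, String>((k, v) -> k > 10));

                // Called on this node for every update of the cache.
                qry.setLocalListener(evts ->
                    evts.forEach(e -> System.out.println("Updated: " + e.getKey() + " = " + e.getValue())));

                try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                    for (Cache.Entry<Integer, String> e : cur)
                        System.out.println("Existing: " + e.getKey() + " = " + e.getValue());

                    cache.put(15, "hello"); // triggers the local listener

                    Thread.sleep(1000); // give the listener time to fire before shutdown
                }
            }
        }
    }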

Stan

From: shivakumar
Sent: 16 January 2019 11:08
To: user@ignite.apache.org
Subject: ignite continuous query with XML

Is there a way to configure a continuous query using Spring XML? Is there any
example or reference for configuring a continuous query with XML?

Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-16 Thread Stanislav Lukyanov
It means that Ignite couldn’t find the place it needed in a B+ tree within 1000
attempts.
That could mean either that there is high contention on the tree (it changes a lot,
and one unlucky thread couldn’t keep up), or that the tree is corrupted.

Try setting a larger value for the IGNITE_BPLUS_TREE_LOCK_RETRIES property (e.g.
10).
If you still see the exception, it’s corruption; if you don’t, it’s contention.

Stan

From: mahesh76private
Sent: 16 January 2019 7:48
To: user@ignite.apache.org
Subject: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

On 2.7, we are regularly seeing the below message and then the nodes stop. 


[16:45:04,759][SEVERE][disco-event-worker-#63][] JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class o.a.i.IgniteCheckedException: Maximum number
of retries 1000 reached for Put operation (the tree may be corrupted).
Increase IGNITE_BPLUS_TREE_LOCK_RETRIES system property if you regularly see
this message (current value is 1000).]]


Can you please throw some light on what this error is?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



ignite continuous query with XML

2019-01-16 Thread shivakumar
Is there a way to configure a continuous query using Spring XML? Is there any
example or reference for configuring a continuous query with XML?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

RE: Is there a way to allow overwrite when set streaming on?

2019-01-16 Thread Stanislav Lukyanov
Use `SET STREAMING ON ALLOW_OVERWRITE ON`.
It’s a shame it’s not documented. Filed 
https://issues.apache.org/jira/browse/IGNITE-10952 for that.

Stan

From: yangjiajun
Sent: 16 January 2019 9:19
To: user@ignite.apache.org
Subject: Is there a way to allow overwrite when set streaming on?

Hello.

We can set streaming on while inserting data into Ignite using SQL. I want to
enable data overwrite in this mode. Is it possible?

https://apacheignite-sql.readme.io/docs/set
https://apacheignite.readme.io/docs/data-streamers#section-allow-overwrite



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/