Re: reading encrypted password from Ignite config file

2021-04-13 Thread shivakumar
Hi Ilya,
If I have to use a username/password, is it possible to implement the
SecurityCredentialsProvider interface?
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/plugin/security/SecurityCredentialsProvider.java
Is there any example of this?
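Something along these lines is what I have in mind -- just a sketch on my side; the
decrypt() helper and the property names are placeholders, not an Ignite API:

import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.plugin.security.SecurityCredentials;
import org.apache.ignite.plugin.security.SecurityCredentialsProvider;

/** Sketch: supply credentials at runtime instead of keeping clear text in the XML config. */
public class EncryptedCredentialsProvider implements SecurityCredentialsProvider {
    /** {@inheritDoc} */
    @Override public SecurityCredentials credentials() throws IgniteCheckedException {
        // Placeholder source: could be a system property, env variable or mounted secret.
        String login = System.getProperty("ignite.login", "ignite");
        String encrypted = System.getProperty("ignite.password.encrypted");

        return new SecurityCredentials(login, decrypt(encrypted));
    }

    /** Placeholder for whatever decryption scheme is used; not part of Ignite. */
    private String decrypt(String encrypted) {
        return encrypted; // no-op in this sketch
    }
}

(I assume an actual security plugin still has to consume such a provider; the sketch
only shows the interface usage.)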

regards,
Shiva





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


reading encrypted password from Ignite config file

2021-04-08 Thread shivakumar
Hi Igniters,
Does Ignite support reading an encrypted password from the Ignite config file?
Keeping a clear-text password in the Ignite config file is not a good practice.
If yes, how do we use it? If it is not supported by default, is it possible to
implement it with a plugin?
Any kind of help is greatly appreciated.

Regards,
Shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


streaming behaviour with streamingPerNodeBufferSize parameter

2020-12-10 Thread shivakumar
Hi,
I'm trying to use SQL streaming to load data into the Ignite cluster from my program
(INSERT SQL with streaming set to ON).
My program connects to Ignite over JDBC using the org.apache.ignite.IgniteJdbcThinDriver
driver (thin driver).
When I start my application with a very small record rate, I don't see the records
immediately inserted into my Ignite table; I only see them after a few records
(>10K) have been inserted, or when I close the JDBC connection.
When I searched the Ignite documentation about streaming, it mentions a parameter
streamingPerNodeBufferSize with a default value of 1024.
Is this parameter applicable in my scenario with the JDBC thin driver?
If it is applicable, what does 1024 mean? Is it 1024 KB (1 MB)?

Ref:
https://apacheignite-sql.readme.io/v2.5/docs/jdbc-client-driver#streaming-mode
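
For reference, a stripped-down version of what my program does looks roughly like this
(the connection URL, table and the FLUSH_FREQUENCY option are placeholders/assumptions
on my side):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class StreamingInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")) {
            try (Statement st = conn.createStatement()) {
                // Streaming is per connection; FLUSH_FREQUENCY (ms) should force periodic
                // flushes even while the per-node buffer is not full (my assumption from the docs).
                st.execute("SET STREAMING ON FLUSH_FREQUENCY 1000");
            }

            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO my_table (id, val) VALUES (?, ?)")) {
                for (int i = 0; i < 100_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "value-" + i);
                    ps.executeUpdate(); // buffered by the streamer, not applied immediately
                }
            }

            try (Statement st = conn.createStatement()) {
                st.execute("SET STREAMING OFF"); // flushes whatever is still buffered
            }
        }
    }
}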


Regards,
Shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Could not clear historyMap due to WAL reservation on cp

2020-12-08 Thread shivakumar
Hi Alexandr Shapkin,
I can reproduce this issue with the master-branch codebase as well
(reproducible on 2.7.6, 2.9 and master).
It looks like there is a major bug in the way old WAL segment clean-up is
implemented during checkpoint.
I'm not sure how connecting to visor causes this issue; is there any chance that
Visor takes some lock on the WAL segments?

I grepped for warning messages in the server logs and saw the "history map size is"
value growing: 80, 81, 82, ...
I don't know how WAL segment clean-up is implemented; do you suspect anything?
There are also no exceptions in the server logs.
I'm thinking of filing a bug, as an infinitely growing WAL is a major concern.



 
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:39:35,555Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
80"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:40:27,647Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
81"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:41:19,744Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
82"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:42:12,946Z","logger":"CheckpointPagesWriterFactory","timezone":"UTC","marker":"","log":"1
checkpoint pages were not written yet due to unsuccessful page write lock
acquisition and will be retried"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:42:12,946Z","logger":"CheckpointPagesWriterFactory","timezone":"UTC","marker":"","log":"2
checkpoint pages were not written yet due to unsuccessful page write lock
acquisition and will be retried"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:42:16,872Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
83"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:43:16,064Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
84"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:44:17,383Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
85"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:44:56,140Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
86"}
{"type":"log","host":"ignite-cluster-ignite-shiva-0","level":"WARN","systemid":"039c963b","system":"ignite-service","time":"2020-12-08T15:45:42,978Z","logger":"CheckpointHistory","timezone":"UTC","marker":"","log":"Could
not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=d9335266-e4d8-4c08-bc44-e08f191526d0, timestamp=1607438456546,
ptr=WALPointer [idx=300, fileOff=148642467, len=10370]], history map size is
87"}

RE: Could not clear historyMap due to WAL reservation on cp

2020-12-07 Thread shivakumar
Thanks for your reply, Alexandr Shapkin.

The ticket you shared, https://issues.apache.org/jira/browse/IGNITE-13373, is more
related to preloading (releaseHistoryForPreloading()).
The issue I am facing is that WAL segments which are no longer needed for crash
recovery after a checkpoint are not getting cleaned up. Do you think anything fixed
in the above Jira ticket is related to my issue?

I tried collecting logs; they are very large. I couldn't find any exceptions or
error messages, but I saw the warnings below:
"Partition states validation has failed for group"
"Could not clear historyMap due to WAL reservation on cp"

Please find the attached log files (not the full logs, but the parts where I saw the
above warnings): instance1.log, historymap.log

Shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Could not clear historyMap due to WAL reservation on cp

2020-12-02 Thread shivakumar
Hi,
I have deployed a 5 node Ignite cluster on K8s with persistence enabled
(version 2.9.0 on Java 11).
I started ingesting data into 3 tables. After ingesting a large amount of data using
JDBC batch insertion (around 20 million records into each of the 3 tables, with
backups set to 1), I connected to the visor shell (from a pod deployed just to run
the visor shell) using the same Ignite config file that is used for the Ignite
servers. After the visor shell connected to the cluster, the clean-up of unneeded WAL
records (which should run after each checkpoint) stopped, and the WAL started growing
linearly since there is continuous data ingestion. This makes the WAL disk run out of
space and the pods crash.

When I look at the logs there are continuous warning messages saying:
Could not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=e8bb9c22-0709-416f-88d6-16c5ca534024, timestamp=1606979158669,
ptr=FileWALPointer [idx=1468, fileOff=45255321, len=9857]], history map size
is 4


and if I look at the checkpoint finish messages:
Checkpoint finished [cpId=ca254956-5550-45d6-87c5-892b7e07b13b,
pages=494933, markPos=FileWALPointer [idx=1472, fileOff=72673736, len=9857],
walSegmentsCleared=0, walSegmentsCovered=[1470 - 1471], markDuration=464ms,
pagesWrite=8558ms, fsync=3597ms, total=14027ms]

Here you can see walSegmentsCleared=0, meaning no WAL segments were cleared even
after the checkpoint; I'm not sure what is causing this behaviour.

We are ingesting data at a very high rate (~25 MB/s).
Could someone please help with this issue?


regards,
Shiva





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: connecting to visor shell permanently stops unwanted WAL clean-up

2020-12-01 Thread shivakumar
With 100 GB of data in the persistence store this is easily reproducible; I'm not
sure what is causing it.
Can anyone let me know how visor gets the number of records from a cache, and what
could be blocking the WAL clean-up? I am attaching a WAL usage graph: without
connecting to visor the WAL usage was at most 6 GB, but once I connected visor it
started growing until I stopped data ingestion, and it never came back down from the
maximum.

[WAL usage graph attachment not preserved in the archive]



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


connecting to visor shell permanently stops unwanted WAL clean-up

2020-11-29 Thread shivakumar
I have noticed a strange behaviour when I connect to the visor shell after ingesting
a large amount of data into the Ignite cluster.
Below is the scenario:
I have deployed a 5 node Ignite cluster on K8s with persistence enabled
(version 2.9.0 on Java 11).
I started ingesting data into 3 tables. After ingesting a large amount of data using
JDBC batch insertion (around 20 million records into each of the 3 tables, with
backups set to 1), I connected to the visor shell (from a pod deployed just to run
the visor shell) using the same Ignite config file that is used for the Ignite
servers. After the visor shell connected to the cluster, the clean-up of unneeded WAL
records (which should run after each checkpoint) stopped, and the WAL started growing
linearly since there is continuous data ingestion. This makes the WAL disk run out of
space and the pods crash.
I have attached the config file which I used to deploy Ignite and to connect to the
Ignite visor shell.
Please let me know if I am doing something wrong.
The command used to connect the visor shell:
./ignitevisorcmd.sh -cfg=/opt/ignite/conf/ignite-config.xml
After visor connects, there are frequent logs with the following message:

Could not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=c7ef72fb-b701-427b-a383-f8035c7985c1, timestamp=1606590778899,
ptr=FileWALPointer [idx=26090, fileOff=24982860, len=9572]], history map
size is 38 


Could you please let me know if I am doing anything wrong, or if there is any known
issue around this?


Regards,
Shiva
 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: wal archive directory running out of space

2020-11-27 Thread shivakumar
Hi Facundo,
I tried disabling the WAL archive by setting the WAL archive path and the WAL path to
the same directory:

[XML configuration snippet stripped by the mailing list archive]

But I faced the same issue again. It ran fine for 10 hours; at that point I connected
to visor to check the number of records (around 200 million in total), and after
disconnecting from visor I connected to sqlline and ran select count(*) from table;.
I'm not sure which of these caused the WAL disk to fill up completely.
I deployed this Ignite cluster with Java 11.
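
For reference, the change I made corresponds roughly to this programmatic
configuration (paths are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NoWalArchiveSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        // Pointing the WAL and the WAL archive at the same directory disables archiving.
        storageCfg.setWalPath("/opt/ignite/wal");        // placeholder path
        storageCfg.setWalArchivePath("/opt/ignite/wal"); // same directory => no archive copies

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node runs with WAL archiving effectively disabled.
        }
    }
}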



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


wal archive directory running out of space

2020-11-26 Thread shivakumar
Hi

I deployed a 5 node Ignite 2.9.0 cluster on K8s with the configuration below (a rough
programmatic sketch of the storage-related part follows the list):
Total RAM per instance 64 GB 
JVM 32 GB
Default data region 12 GB
Persistence storage 500GB volume
WAL + WAL archive 30 GB volume
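
(A rough programmatic sketch of the storage-related part of that configuration; the
region size mirrors the list above, the region name and paths are placeholders, and
the 32 GB heap is a JVM option rather than Ignite configuration:)

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StorageConfigSketch {
    public static void main(String[] args) {
        DataRegionConfiguration dfltRegion = new DataRegionConfiguration();
        dfltRegion.setName("Default_Region");            // placeholder name
        dfltRegion.setMaxSize(12L * 1024 * 1024 * 1024); // 12 GB off-heap
        dfltRegion.setPersistenceEnabled(true);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDefaultDataRegionConfiguration(dfltRegion);
        storageCfg.setStoragePath("/persistence/ignite"); // 500 GB volume (placeholder path)
        storageCfg.setWalPath("/wal/ignite");             // 30 GB volume (placeholder path)
        storageCfg.setWalArchivePath("/wal/ignite/archive");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        // Heap (-Xms32g -Xmx32g) is set via JVM options, not here.
    }
}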
After this I started ingesting data into the 3 tables I created, using basic JDBC
batch insertions.
After around 14 hours it had generated around 100 GB of persistence data on each node
across the 3 tables (each having a backup of 1).
But suddenly 2 pods crashed, and when I checked the logs, *there were errors saying no
space was left on the storage volume* configured for WAL + WAL archive.
I'm not sure what exactly caused this, but I couldn't recover from the pod crash on
K8s as I cannot expand the volume attached to the Ignite pods.
The only operation I did when the pods crashed was select count(*) from table;
and there were around 21 crore (210 million) records in that table.

Is the WAL archive needed? How can I avoid this kind of issue, which leaves the
cluster in an unusable state?

Your help is greatly appreciated 

Thank you 
Shiva




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: nodes in the baseline topology is going to OFFLINE state

2019-10-18 Thread shivakumar
Hi Ilya Kasnacheev,
Is there any other way to gracefully shut down/restart the entire cluster?

regards,
shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


nodes in the baseline topology is going to OFFLINE state

2019-10-17 Thread shivakumar
Hi all,
I have an Ignite deployment on Kubernetes and I wanted to restart all nodes, so
I used the "kill -k" command from the visor shell.
This restarts all nodes, but once they rejoin the topology, a few nodes sometimes go
into the OFFLINE state [even though the nodes are up and running], and it looks like
this causes a split-brain or split-cluster scenario.


[ignite@ignite-shiva-visor-68cb697b5-qbccr bin]$ ./control.sh --user ignite
--password ignite --host ignite-service-br --baseline
Control utility [ver. 2.7.6#19700101-sha1:DEV]
2019 Copyright(C) Apache Software Foundation
User: ignite

Cluster state: inactive
Current topology version: 1

Baseline nodes:
ConsistentID=253094b4-877b-45ae-ad06-07e639befffc, STATE=ONLINE
ConsistentID=862e324b-f3d1-4198-92d0-0d1d2c4a2f88, STATE=OFFLINE
ConsistentID=86b5f451-ac4f-4479-9f6e-2db6ab5d11e7, STATE=OFFLINE

Number of baseline nodes: 3

Other nodes not found.
[ignite@ignite-shiva-visor-68cb697b5-qbccr bin]$ ./control.sh --user ignite
--password ignite --host ignite-service-br --baseline
Control utility [ver. 2.7.6#19700101-sha1:DEV]
2019 Copyright(C) Apache Software Foundation
User: ignite

Cluster state: inactive
Current topology version: 2

Baseline nodes:
ConsistentID=253094b4-877b-45ae-ad06-07e639befffc, STATE=OFFLINE
ConsistentID=862e324b-f3d1-4198-92d0-0d1d2c4a2f88, STATE=ONLINE
ConsistentID=86b5f451-ac4f-4479-9f6e-2db6ab5d11e7, STATE=ONLINE

Number of baseline nodes: 3

Other nodes not found.
[ignite@ignite-shiva-visor-68cb697b5-qbccr bin]$ ./control.sh --user ignite
--password ignite --host ignite-service-br --baseline
Control utility [ver. 2.7.6#19700101-sha1:DEV]
2019 Copyright(C) Apache Software Foundation
User: ignite

Cluster state: inactive
Current topology version: 2

Baseline nodes:
ConsistentID=253094b4-877b-45ae-ad06-07e639befffc, STATE=OFFLINE
ConsistentID=862e324b-f3d1-4198-92d0-0d1d2c4a2f88, STATE=ONLINE
ConsistentID=86b5f451-ac4f-4479-9f6e-2db6ab5d11e7, STATE=ONLINE

Number of baseline nodes: 3

Other nodes not found.
[ignite@ignite-shiva-visor-68cb697b5-qbccr bin]$ ./control.sh --user ignite
--password ignite --host ignite-service-br --baseline
Control utility [ver. 2.7.6#19700101-sha1:DEV]
2019 Copyright(C) Apache Software Foundation
User: ignite

Cluster state: inactive
Current topology version: 2

Baseline nodes:
ConsistentID=253094b4-877b-45ae-ad06-07e639befffc, STATE=OFFLINE
ConsistentID=862e324b-f3d1-4198-92d0-0d1d2c4a2f88, STATE=ONLINE
ConsistentID=86b5f451-ac4f-4479-9f6e-2db6ab5d11e7, STATE=ONLINE

Number of baseline nodes: 3

Other nodes not found.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Row count [select count(*) from table] not matching with the actual row count present in the table

2019-06-17 Thread shivakumar
Hi,
any idea on this issue?
I have created a Jira bug for it:
https://issues.apache.org/jira/browse/IGNITE-11917


regards,
shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Row count [select count(*) from table] not matching with the actual row count present in the table

2019-06-13 Thread shivakumar
To reproduce, create a sample table using the JDBC endpoint:

CREATE TABLE person(Id VARCHAR, birthTime TIMESTAMP, name VARCHAR, PRIMARY
KEY(Id)) WITH "TEMPLATE=templateEternal,CACHE_NAME=person,
KEY_TYPE=personKey,VALUE_TYPE=person";

 

and configure the cache expiry policy as below:

[XML cache-template configuration stripped by the mailing list archive]
With the above cache configuration, records start expiring at the end of 10 minutes.
Batch-insert around 1 records into the table; after 10 minutes the records start
expiring. But if you check the record count some time later
[select count(*) from person], most of the time it shows some non-zero number,
whereas selecting the rows instead of the count to see the actual data
[select * from person] returns zero rows.

Why does the count not become zero even though there is no data (rows) left in the
table?
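
For reference, the expiry template I'm referring to corresponds roughly to this
programmatic form (a sketch using the standard JCache expiry API; the template
registration is my assumption of the equivalent code):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryTemplateSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Template whose entries expire 10 minutes after creation.
            CacheConfiguration<Object, Object> tpl = new CacheConfiguration<>("templateEternal*");
            tpl.setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));

            // Register the template so CREATE TABLE ... WITH "TEMPLATE=templateEternal" picks it up.
            ignite.addCacheConfiguration(tpl);
        }
    }
}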

0: jdbc:ignite:thin://10...*:10800> select count(*) from person;
+------------+
|  COUNT(*)  |
+------------+
| 70         |
+------------+
1 row selected (0.004 seconds)
0: jdbc:ignite:thin://10...*:10800> select * from person;
+----+-----------+------+
| ID | BIRTHTIME | NAME |
+----+-----------+------+
+----+-----------+------+
No rows selected (0.015 seconds)
0: jdbc:ignite:thin://10...*:10800>



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Cache expiry policy is slow

2019-05-30 Thread shivakumar
Hi,
I have configured a cache expiry policy, using cache templates, for the cache I
created in Ignite, as below:

[XML cache-template configuration stripped by the mailing list archive]

According to this configuration, cache entries that are 10 minutes old should be
removed, but Ignite is taking longer than that to remove them.
This is my observation: after configuring the cache expiry policy as mentioned above,
I batch-ingest records into the table for 4 minutes (around 1 million records in 4
minutes). After 10 minutes it starts removing entries from the table, and the number
of records starts decreasing when I monitor it from the visor CLI. Since I configured
the expiry time as 10 minutes, all the entries should be removed by the end of the
14th minute (because I ingested data from the 0th minute to the 4th minute), but it
removes all the entries only by the end of the 20th minute.
Does any tuning need to be done, or am I missing some configuration?

regards,
shiva




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


ignite work directory: binary_meta and marshaller

2019-05-29 Thread shivakumar
Hi,
We have deployed Ignite in a Kubernetes environment and have set the Ignite work
directory (i.e., binary_meta and marshaller) to an ephemeral disk, which is not
preserved across a Kubernetes pod restart. We have sometimes seen
"*class org.apache.ignite.binary.BinaryObjectException: Cannot find metadata for
object with compact footer*: -146362649".
Is this exception caused by the Ignite work directory, where *binary_meta* and
*marshaller* are stored, being on an ephemeral disk rather than a persistent one?
Should we set the work directory to a mounted persistent volume in Kubernetes?
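
For clarity, what I'm considering is simply pointing the work directory at the
persistent mount, roughly like this (path is a placeholder):

import org.apache.ignite.configuration.IgniteConfiguration;

public class WorkDirSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // binary_meta and marshaller live under the work directory, so point it at a
        // mounted persistent volume instead of ephemeral pod storage.
        cfg.setWorkDirectory("/persistent/ignite/work"); // placeholder path
    }
}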

regards,
shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Failed to archive WAL segment

2019-04-29 Thread shivakumar
Hi all,

I have a 7 node Ignite cluster running on a Kubernetes platform. Each instance is
configured with 64 GB total RAM (32 GB heap + 12 GB default data region + the
remaining 18 GB for the Ignite process), 6 CPU cores, a 12 GB disk mount for
WAL + WAL archive, and a separate 1 TB disk mount for native persistence.

My problem is that one of the pods (Ignite instances) went into CrashLoopBackOff
state and is not recovering from the crash.

[root@ignite-stability-controller stability]# kubectl get pods | grep ignite-server
ignite-cluster-ignite-server-0        3/3   Running            5     3d19h
ignite-cluster-ignite-server-1        3/3   Running            5     3d19h
ignite-cluster-ignite-server-2        3/3   Running            5     3d19h
ignite-cluster-ignite-server-3        3/3   Running            5     3d19h
ignite-cluster-ignite-server-4        3/3   Running            5     3d19h
ignite-cluster-ignite-server-5        3/3   Running            5     3d19h
*ignite-cluster-ignite-server-6       2/3   CrashLoopBackOff   342   3d19h*
ignite-server-visor-5df679d57-p4rf4   1/1   Running            0     3d19h

If I check the logs of the crashed instance, they say (the logs are in a different
format):
:"INFO","systemid":"6f058db6","system":"ignite-service-st","time":"2019-04-29T06:47:41,149Z","logger":"FsyncModeFileWriteAheadLogManager","timezone":"UTC","marker":"","log":"Starting
to copy WAL segment [absIdx=50008, segIdx=8,
origFile=/opt/ignite/wal/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/0008.wal,
dstFile=/opt/ignite/wal/archive/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/00050008.wal]"}
{"type":"log","host":"ignite-cluster-ignite-server-6","level":"INFO","systemid":"6f058db6","system":"ignite-service-st","time":"2019-04-29T06:47:41,154Z","logger":"GridClusterStateProcessor","timezone":"UTC","marker":"","log":"Writing
BaselineTopology[id=1]"}
{"type":"log","host":"ignite-cluster-ignite-server-6","level":"ERROR","systemid":"6f058db6","system":"ignite-service-st","time":"2019-04-29T06:47:41,170Z","logger":"","timezone":"UTC","marker":"","log":"Critical
system error detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]],
failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
o.a.i.IgniteCheckedException: Failed to archive WAL segment
[srcFile=/opt/ignite/wal/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/0008.wal,
dstFile=/opt/ignite/wal/archive/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/00050008.wal.tmp]]]
class org.apache.ignite.IgniteCheckedException: Failed to archive WAL
segment
[srcFile=/opt/ignite/wal/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/0008.wal,
dstFile=/opt/ignite/wal/archive/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/00050008.wal.tmp]|
   
at
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileArchiver.archiveSegment(FsyncModeFileWriteAheadLogManager.java:1826)|

at
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileArchiver.body(FsyncModeFileWriteAheadLogManager.java:1622)|
  
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)| 
at java.lang.Thread.run(Thread.java:748)|Caused by:
java.nio.file.FileSystemException:
/opt/ignite/wal/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/0008.wal
->
/opt/ignite/wal/archive/node00-18d2aa89-7ae0-495b-a608-f28e8054e00f/00050008.wal.tmp:
No space left on device| at
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)| 
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)| 
at sun.nio.fs.UnixCopyFile.copyFile(UnixCopyFile.java:253)| at
sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:581)|at
sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:253)|
at java.nio.file.Files.copy(Files.java:1274)|   at
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileArchiver.archiveSegment(FsyncModeFileWriteAheadLogManager.java:1813)|

... 3 more"}
{"type":"log","host":"ignite-cluster-ignite-server-6","level":"WARN","systemid":"6f058db6","system":"ignite-service-st","time":"2019-04-29T06:47:41,171Z","logger":"FailureProcessor","timezone":"UTC","marker":"","log":"No
deadlocked threads detected."}

When I checked disk usage, the disk volume mounted for WAL + WAL archive is full:

Filesystem      Size  Used  Avail  Use%  Mounted on
overlay         158G  8.9G  142G   6%    /
tmpfs           63G   0     63G    0%    /dev
tmpfs           63G   0     63G    0%    /sys/fs/cgroup
/dev/vda1

JDBC client disconnected without any exception

2019-04-28 Thread shivakumar
Hi all,
I have created 3 tables and am trying to ingest (batch commits over a JDBC
connection) 50 crore (500 million) records into each table in parallel, with 5000
records per batch commit. After ingesting around 25 to 30 crore records, the program
that ingests data over the JDBC connection stops without any exception; there are no
exceptions in the Ignite logs either, and no node restarts.
Is there any known issue similar to this?

regards,
shiva
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Table not getting dropped

2019-04-22 Thread shivakumar
Hi all,
I created a table over a JDBC connection and batch-inserted around 13 crore
(130 million) records into it. I am now trying to drop the table from sqlline, but it
hangs for some time and gives a *java.sql.SQLException: Statement is closed*
exception. If I check the number of records using *select count(*) from Cell;* all 13
crore records still exist in that table, but if I try to drop it again it says the
table does not exist.

[root@ignite-st-controller bin]# ./sqlline.sh --verbose=true -u
"jdbc:ignite:thin://10.*.*.*:10800;user=ignite;password=ignite;"
issuing: !connect
jdbc:ignite:thin://10.*.*.*:10800;user=ignite;password=ignite; '' ''
org.apache.ignite.IgniteJdbcThinDriver
Connecting to jdbc:ignite:thin://10.*.*.*:10800;user=ignite;password=ignite;
Connected to: Apache Ignite (version 2.7.0#19700101-sha1:)
Driver: Apache Ignite Thin JDBC Driver (version
2.7.0#20181130-sha1:256ae401)
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
sqlline version 1.3.0
0: jdbc:ignite:thin://10.*.*.*:10800> DROP TABLE IF EXISTS CELL;
Error: Statement is closed. (state=,code=0)
java.sql.SQLException: Statement is closed.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.ensureNotClosed(JdbcThinStatement.java:862)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.getWarnings(JdbcThinStatement.java:454)
at sqlline.Commands.execute(Commands.java:849)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://10.*.*.*:10800>
0: jdbc:ignite:thin://10.*.*.*:10800> select count(*) from CELL;
+--------------+
|   COUNT(*)   |
+--------------+
| 131471437    |
+--------------+
1 row selected (4.564 seconds)
0: jdbc:ignite:thin://10.*.*.*:10800> DROP TABLE CELL;
Error: Table doesn't exist: CELL (state=42000,code=3001)
java.sql.SQLException: Table doesn't exist: CELL
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:750)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:475)
at sqlline.Commands.execute(Commands.java:823)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://10.*.*.*:10800> select count(*) from CELL;
+--------------+
|   COUNT(*)   |
+--------------+
| 131482007    |
+--------------+
1 row selected (1.264 seconds)
0: jdbc:ignite:thin://10.*.*.*:10800>




regards,
shiva






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Total cache entries count

2019-04-22 Thread shivakumar
Hi 

I have Ignite 2.7.0 running in a K8s environment. I ingested a large number of
records into a table with native persistence enabled, and I am monitoring the total
cache entries in that table using visor, by running *visor> cache -c=Cache_name*, as
well as sqlline, by running *select count(*) from table_name;*.
Both report around 15 crore (150 million) entries, and the 12 GB I configured for the
default data region/off-heap is full by this time.

My question is: is that count of 15 crore only the cache entries available in
*RAM/off-heap*, or the cache entries available in both *RAM/off-heap + disk
(persistence)*?

regards,
shiva 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


nodes are restarting when i try to drop a table created with persistence enabled

2019-04-15 Thread shivakumar
Hi all,
I created a table over a JDBC connection with native persistence enabled, in
partitioned mode, on 2 Ignite nodes (version 2.7.0) running in a Kubernetes
environment. I then ingested 150 records. When I try to drop the table, both pods
restart, one after the other.
Please find the attached thread dump logs.
After this, the drop statement is unsuccessful:

0: jdbc:ignite:thin://ignite-service.cign.svc> !tables
+-----------+--------------+--------------+--------------+----------+
| TABLE_CAT | TABLE_SCHEM  | TABLE_NAME   | TABLE_TYPE   | REMARKS  |
+-----------+--------------+--------------+--------------+----------+
|           | PUBLIC       | DEVICE       | TABLE        |          |
|           | PUBLIC       | DIMENSIONS   | TABLE        |          |
|           | PUBLIC       | CELL         | TABLE        |          |
+-----------+--------------+--------------+--------------+----------+
0: jdbc:ignite:thin://ignite-service.cign.svc> DROP TABLE IF EXISTS
PUBLIC.DEVICE;
Error: Statement is closed. (state=,code=0)
java.sql.SQLException: Statement is closed.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.ensureNotClosed(JdbcThinStatement.java:862)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.getWarnings(JdbcThinStatement.java:454)
at sqlline.Commands.execute(Commands.java:849)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://ignite-service.cign.svc> !quit
Closing: org.apache.ignite.internal.jdbc.thin.JdbcThinConnection
[root@vm-10-99-26-135 bin]# ./sqlline.sh --verbose=true -u
"jdbc:ignite:thin://ignite-service.cign.svc.cluster.local:10800;user=ignite;password=ignite;"
issuing: !connect
jdbc:ignite:thin://ignite-service.cign.svc.cluster.local:10800;user=ignite;password=ignite;
'' '' org.apache.ignite.IgniteJdbcThinDriver
Connecting to
jdbc:ignite:thin://ignite-service.cign.svc.cluster.local:10800;user=ignite;password=ignite;
Connected to: Apache Ignite (version 2.7.0#19700101-sha1:)
Driver: Apache Ignite Thin JDBC Driver (version
2.7.0#20181130-sha1:256ae401)
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
sqlline version 1.3.0
0: jdbc:ignite:thin://ignite-service.cign.svc> !tables
+-----------+--------------+--------------+--------------+----------+
| TABLE_CAT | TABLE_SCHEM  | TABLE_NAME   | TABLE_TYPE   | REMARKS  |
+-----------+--------------+--------------+--------------+----------+
|           | PUBLIC       | DEVICE       | TABLE        |          |
|           | PUBLIC       | DIMENSIONS   | TABLE        |          |
|           | PUBLIC       | CELL         | TABLE        |          |
+-----------+--------------+--------------+--------------+----------+
0: jdbc:ignite:thin://ignite-service.cign.svc> select count(*) from DEVICE;
+--------------+
|   COUNT(*)   |
+--------------+
| 150          |
+--------------+
1 row selected (5.665 seconds)
0: jdbc:ignite:thin://ignite-service.cign.svc>

ignite_thread_dump.txt (attachment)


shiva





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: sql exception while batch inserting records

2019-04-09 Thread shivakumar
Hi Ilya Kasnacheev,

The cluster is deployed with the 2.7.0 version of Ignite.
If I use the 2.6.0 version of the jar I don't get any exception, but with the 2.7.0
version of the jar I get this exception.

regards,
shiva




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


sql exception while batch inserting records

2019-04-08 Thread shivakumar
Hi all,
I'm getting the below exception while doing a batch insert.

*WARNING: Exception during batch send on streamed connection close
java.sql.BatchUpdateException: Streaming mode supports only INSERT commands
without subqueries.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1016)
at java.lang.Thread.run(Thread.java:748)*

Scenario:
Insert SQL statement:
InsertSql = "INSERT INTO PUBLIC.MY_TABLE (Id, time_stamp, second_time, last_time, "
    + "period, name, iid, ptype, pversion) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?);";

Ingestion code (java.sql.Connection/Statement/PreparedStatement):

Connection conn = DriverManager.getConnection(url, IGNITE_USER, IGNITE_PASS);
conn.setAutoCommit(false);

// Enable streaming mode for this connection.
Statement stmt = conn.createStatement();
stmt.execute("SET STREAMING ON;");
stmt.close();

// Build up the batch and send it in one executeBatch() call.
PreparedStatement pStmt = conn.prepareStatement(table.getInsertSql());
int n = 0;
while (n < generate_rows) {
    table.createBatch(pStmt); // adds one row's parameters to the batch
    n++;
}
int[] count = pStmt.executeBatch();
conn.commit();
pStmt.close();
conn.close();


When the final conn.close() is called, it throws the above SQL exception with the
*2.7.0 ignite-core jar*; with the *2.6.0 ignite-core jar* I don't get this exception.

thanks with regards,
shiva







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: is peerClassLoadingEnabled in client mode makes difference

2019-01-23 Thread shivakumar
Hi Stan,
Thanks for your reply!

In my case, I deployed Ignite in a Kubernetes environment and am starting a client
node on a VM which connects to the Ignite servers (the client is configured to use
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder as the
discovery mechanism, with the IP addresses of the Ignite server pods/containers).
I am placing the "keytype" and "valuetype" (custom key and value) class files on the
client node's classpath (under the libs folder of the Apache Ignite home from which I
start the client process), but the servers are not loading these classes and throw a
class-not-found exception for them. If I place those classes inside the pods, the
classes are loaded.
Why are the class files not loaded when I place them on the client side, even though
peerClassLoading is enabled on the client?
Am I missing something?

with thanks,
shiva



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


is peerClassLoadingEnabled in client mode makes difference

2019-01-23 Thread shivakumar
When peerClassLoadingEnabled is enabled on a client node that joins a cluster of
servers, and a class/jar is placed on the client's classpath, is it possible for the
servers to use those classes/jars?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd party persistence with hive not updating hive with all records/entries in ignite

2019-01-21 Thread shivakumar
Hi, here is my cache store configuration:

[Spring XML configuration stripped by the mailing list archive; only the beans schema
declarations and the discovery address y.y.y.y:47500..47509 survived.]

3rd party persistence with hive not updating hive with all records/entries in ignite

2019-01-16 Thread shivakumar
Hi,
I am trying to use Hive as a 3rd-party persistence store with write-behind enabled,
and I set these cache configurations using Spring XML:

[XML configuration snippet stripped by the mailing list archive]
Every 5000 ms interval Ignite updates only one record/one row in Hive, even though I
ingested around 2000 records into Ignite.
Why don't all 2000 records go into Hive when the 5000 ms flush-frequency interval is
hit?
Is there any other parameter that affects these persistence-store updates?
Any suggestions are appreciated!
Thanks
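
For reference, the write-behind parameters I'm referring to look roughly like this in
programmatic form (the 5000 ms flush frequency is what I use; the other values and the
cache name are placeholders, and the Hive-backed CacheStore factory is omitted):

import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindSketch {
    public static void main(String[] args) {
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushFrequency(5000); // flush at most every 5000 ms
        ccfg.setWriteBehindFlushSize(10240);     // ...or once this many entries are buffered
        ccfg.setWriteBehindBatchSize(512);       // entries passed to the store per writeAll() call
        // ccfg.setCacheStoreFactory(...) would point at the Hive-backed CacheStore.
    }
}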





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


failed to connect to secure ignite server from ignite web agent

2019-01-16 Thread shivakumar
I am trying to connect to the Ignite server from the web agent, but it gives the
below exception:

[2019-01-15 15:54:13,684][INFO ][pool-1-thread-1][RestExecutor] Connected to
cluster [url=https://ignite-service.default.svc.cluster.local:8080]
[2019-01-15 15:54:13,720][WARN ][pool-1-thread-1][ClusterListener] Failed to
handle request - session token not found or invalid

I enabled authentication on the Ignite servers.
It looks like this is a known issue in 2.6 that was fixed in 2.7.
Can I get the Jira ticket that was used to fix this issue in 2.7?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


ignite continuous query with XML

2019-01-16 Thread shivakumar
is there a way to configure continuous query using spring XML? is there any
example or reference for configuring continuous query with XML?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/