Nodes are restarting when I try to drop a table created with persistence enabled

2019-04-15 Thread shivakumar
Hi all,
I created a table over a JDBC connection with native persistence enabled, in
partitioned mode, on two Ignite nodes (version 2.7.0) running in a
Kubernetes environment. I then ingested 150 records. When I try to drop
the table, both pods restart one after the other.
Please find the thread-dump logs attached.
After this, the DROP statement is unsuccessful:

0: jdbc:ignite:thin://ignite-service.cign.svc> !tables
+-----------+-------------+------------+------------+---------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS |
+-----------+-------------+------------+------------+---------+
|           | PUBLIC      | DEVICE     | TABLE      |         |
|           | PUBLIC      | DIMENSIONS | TABLE      |         |
|           | PUBLIC      | CELL       | TABLE      |         |
+-----------+-------------+------------+------------+---------+
0: jdbc:ignite:thin://ignite-service.cign.svc> DROP TABLE IF EXISTS
PUBLIC.DEVICE;
Error: Statement is closed. (state=,code=0)
java.sql.SQLException: Statement is closed.
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.ensureNotClosed(JdbcThinStatement.java:862)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.getWarnings(JdbcThinStatement.java:454)
at sqlline.Commands.execute(Commands.java:849)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://ignite-service.cign.svc> !quit
Closing: org.apache.ignite.internal.jdbc.thin.JdbcThinConnection
[root@vm-10-99-26-135 bin]# ./sqlline.sh --verbose=true -u
"jdbc:ignite:thin://ignite-service.cign.svc.cluster.local:10800;user=ignite;password=ignite;"
issuing: !connect
jdbc:ignite:thin://ignite-service.cign.svc.cluster.local:10800;user=ignite;password=ignite;
'' '' org.apache.ignite.IgniteJdbcThinDriver
Connecting to
jdbc:ignite:thin://ignite-service.cign.svc.cluster.local:10800;user=ignite;password=ignite;
Connected to: Apache Ignite (version 2.7.0#19700101-sha1:)
Driver: Apache Ignite Thin JDBC Driver (version
2.7.0#20181130-sha1:256ae401)
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
sqlline version 1.3.0
0: jdbc:ignite:thin://ignite-service.cign.svc> !tables
+-----------+-------------+------------+------------+---------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS |
+-----------+-------------+------------+------------+---------+
|           | PUBLIC      | DEVICE     | TABLE      |         |
|           | PUBLIC      | DIMENSIONS | TABLE      |         |
|           | PUBLIC      | CELL       | TABLE      |         |
+-----------+-------------+------------+------------+---------+
0: jdbc:ignite:thin://ignite-service.cign.svc> select count(*) from DEVICE;
+----------+
| COUNT(*) |
+----------+
| 150      |
+----------+
1 row selected (5.665 seconds)
0: jdbc:ignite:thin://ignite-service.cign.svc>

ignite_thread_dump.txt

shiva

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Listen for cache changes

2019-04-15 Thread Stephen Darlington
You can use Continuous Queries to “listen” to changes in your caches: 
https://apacheignite.readme.io/docs/continuous-queries
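A minimal sketch of what that looks like in code (the cache name "myCache" and the Integer/String types are illustrative assumptions, not from the thread; SQL DML should also fire these events, since DML statements go through the underlying cache):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class CacheListener {
    static final ConcurrentLinkedQueue<String> SEEN = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(1);

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Invoked for every insert/update/remove on the cache that passes
            // the (optional) remote filter.
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : events) {
                    SEEN.add(e.getEventType() + ":" + e.getKey() + "=" + e.getValue());
                    latch.countDown();
                }
            });

            // query() registers the listener; closing the cursor deregisters it.
            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                cache.put(1, "one"); // fires the local listener
                latch.await(10, TimeUnit.SECONDS);
            }
        }

        System.out.println(SEEN);
    }
}
```

A remote filter (setRemoteFilterFactory) can cut down network traffic by dropping uninteresting events on the server side before they reach the listener.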

Regards,
Stephen

> On 15 Apr 2019, at 12:22, Mike Needham  wrote:
> 
> Hi All,
> 
> I have a cache that has 3 SQL tables in it.  There is a loop that listens to 
> a queue that has json strings on it that represent changes to the underlying 
> cache tables.  These are applied to the cache via INSERT/UPDATE/Delete SQL 
> statements.  How can I have Events triggered off of this so that people can 
> listen to those events to know when the cache has been updated?
> 
> -- 
> Don't be afraid to be wrong. Don't be afraid to admit you don't have all the 
> answers. Don't be afraid to say "I think" instead of "I know."




Listen for cache changes

2019-04-15 Thread Mike Needham
Hi All,

I have a cache that has 3 SQL tables in it.  There is a loop that listens
to a queue that has json strings on it that represent changes to the
underlying cache tables.  These are applied to the cache via
INSERT/UPDATE/Delete SQL statements.  How can I have Events triggered off
of this so that people can listen to those events to know when the cache
has been updated?

-- 
*Don't be afraid to be wrong. Don't be afraid to admit you don't have all
the answers. Don't be afraid to say "I think" instead of "I know."*


Re: Ignite DataStreamer Memory Problems

2019-04-15 Thread kellan
I'm confused. If the DataStreamer blocks until all data is loaded into remote
caches, and I'm only ever running a fixed number of DataStreamers (4 max),
each of which closes after reading a single file of more or less fixed length
(no more than 200MB each; i.e., I shouldn't have more than 800MB plus
additional Ignite metadata in my DataStreamers at any point), then I shouldn't
be seeing a gradual build-up of memory. But that's what I'm seeing.

Maybe I should have mentioned before that this is a persistent cache, and the
problem starts at some point after I've run out of memory in my data regions
(not immediately, but hours later).





Unable to get the securitycontext after implementing respective interfaces via ignite plugin

2019-04-15 Thread radha jai
Hi,
I have implemented the GridSecurityProcessor and I set the security-context
holder in the authenticate function as below:

public class MySecurityProcessor extends GridProcessorAdapter implements
    DiscoverySpiNodeAuthenticator, GridSecurityProcessor, IgnitePlugin {

    public SecurityContext authenticate(AuthenticationContext authenticationContext)
        throws IgniteCheckedException {
        SecuritySubject secureSecuritySubject = new SecuritySubject(
            authenticationContext.subjectId(),
            authenticationContext.subjectType(),
            authenticationContext.credentials().getLogin(),
            authenticationContext.address()
        );

        SecurityContext securityContext =
            new MySecurityContext(secureSecuritySubject, accessToken);

        SecurityContextHolder.set(securityContext);

        return securityContext;
    }

    public void authorize(String name, SecurityPermission perm, SecurityContext securityCtx)
        throws SecurityException {
        System.out.println(SecurityContextHolder.get());
        System.out.println(securityCtx);
        // do some authorization
        .
    }
    ..
}

In the plugin provider I am creating the component GridSecurityProcessor.

The server starts without throwing any errors, and the plugin provider also
starts.
Questions:
1. When I start Visor, it is able to connect to the Ignite server. If I
execute commands in Visor such as top, cache, etc., the authorize function
is called but the security context it receives is always NULL. How do I get
the security context? Also, when Visor connects, the authenticate function
is not called.
2. When a REST API call is made to create a cache, why is the authorize
function called twice, once from GridRestProcessor and once from
GridCacheProcessor? In this scenario I do get the security context from
SecurityContextHolder.get(), so there is no issue there.

regards
Radha


Can we use uuid generator in sql?

2019-04-15 Thread yangjiajun
Hello!

Is there any sql functions to generate uuids?





RE: [EXTERNAL] Re: Replace or Put after PutAsync causes Ignite to hang

2019-04-15 Thread James PRINCE
Hi,

Thanks for looking. You need to run two instances of the reproducer. Let the
first run until you can see "Wait" on the console, then run the second. For me,
the second instance won't get past the Replace call in either 2.6 or 2.7.

It's using the default config, with nothing set up over and above the code
you can see in the first post.

Thanks

-Original Message-
From: Alexandr Shapkin  
Sent: 15 April 2019 11:46
To: user@ignite.apache.org
Subject: [EXTERNAL] Re: Replace or Put after PutAsync causes Ignite to hang

Hi,

I took a look at the reproducer and it works just fine with different Ignite 
and .net versions.

Is there just a single Ignite server with the default config?




Re: Replace or Put after PutAsync causes Ignite to hang

2019-04-15 Thread Alexandr Shapkin
Hi,

I took a look at the reproducer and it works just fine with different Ignite
and .net versions.

Is there just a single Ignite server with the default config?


Re: efficient write through

2019-04-15 Thread Ilya Kasnacheev
Hello!

I'm not aware of any way to modify the object in the Cache Store, and I would
not recommend trying to do that. What's the problem with checking row
existence by id (i.e., the cache key)?
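As a concrete sketch of that idea, here is a write-through CacheStore keyed on the cache key: it tries an UPDATE first and falls back to INSERT only when no row matched, so no separate rowID field or exists-check with locks is needed. The table/column names and the JDBC URL are assumptions for illustration, and the value type is simplified to a single amount:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;

import org.apache.ignite.cache.store.CacheStoreAdapter;

/**
 * Write-through store: UPDATE first, INSERT only when no row matched.
 * The database row is keyed on the cache key itself.
 */
public class SimpleTransactionStore extends CacheStoreAdapter<Integer, Double> {
    private final String jdbcUrl; // JDBC URL of the backing database (assumption)

    public SimpleTransactionStore(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    @Override public void write(Cache.Entry<? extends Integer, ? extends Double> entry) {
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             PreparedStatement upd = con.prepareStatement(
                 "UPDATE simpletransaction SET charge_amount = ? WHERE id = ?")) {
            upd.setDouble(1, entry.getValue());
            upd.setInt(2, entry.getKey());

            if (upd.executeUpdate() == 0) { // no existing row: fall back to INSERT
                try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO simpletransaction (id, charge_amount) VALUES (?, ?)")) {
                    ins.setInt(1, entry.getKey());
                    ins.setDouble(2, entry.getValue());
                    ins.executeUpdate();
                }
            }
        }
        catch (SQLException e) {
            throw new CacheWriterException(e);
        }
    }

    @Override public Double load(Integer key) throws CacheLoaderException {
        return null; // read-through omitted for brevity
    }

    @Override public void delete(Object key) throws CacheWriterException {
        // omitted for brevity
    }
}
```

Note the UPDATE-then-INSERT pair can race if two writers insert the same new id concurrently; catching the duplicate-key error and retrying as UPDATE, or using the database's native upsert (MERGE on SQL Server, INSERT ... ON CONFLICT on Postgres), closes that gap.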

Regards,
-- 
Ilya Kasnacheev


Sat, 13 Apr 2019 at 06:53, Coleman, JohnSteven (Agoda) <
johnsteven.cole...@agoda.com>:

> Currently my writethrough executes a stored procedure, but it has to
> identify whether to insert or update which is inefficient as well as using
> locks and transaction. If I disable write behind caching my cache put
> processing slows by a factor of 10.
>
>
>
> I’m thinking of adding a rowID field to the cache values that the database
> will return and add to the value after writing, so that I will know if the
> value is already stored or is a new item and needs inserting.
>
>
>
> What approaches are recommended for ID fields? It would be nice if I
> didn’t have to have a different key value and row ID field, but catch 22 is
> I can’t wait for the Db to assign an ID when I put. At present I let my
> code assign the cache key value, maybe I should use a Guid rather than a
> sequence?
>
>
>
> Suggestions please.
>
>
>
> John
>
>
>
> CREATE PROCEDURE [dbo].[updateorinsert_simple_transaction]
>
>(@id int
>
>, @charge_amount decimal(18,6)
>
>, @fee_amount decimal(18,6)
>
>, @event_status tinyint)
>
> AS
>
> BEGIN TRANSACTION
>
> IF exists (select 1 FROM [dbo].[simpletransaction] WITH (updlock,
> serializable) WHERE id = @id)
>
> BEGIN
>
>UPDATE [dbo].[simpletransaction] SET charge_amount = @charge_amount
> , fee_amount = @fee_amount, event_status = @event_status
>
>WHERE id = @id
>
> END
>
> ELSE
>
> BEGIN
>
>INSERT INTO [dbo].[simpletransaction] (id, charge_amount,
> fee_amount, event_status)
>
>VALUES(@id, @charge_amount, @fee_amount, @event_status)
>
> END
>
> COMMIT TRANSACTION
>
> GO
>


Re: Ignite DataStreamer Memory Problems

2019-04-15 Thread Ilya Kasnacheev
Hello!

DataStreamer WILL block until all data is loaded in caches.

The recommendation here is probably to reduce perNodeParallelOperations(),
perNodeBufferSize() and perThreadBufferSize(), and to flush() your
DataStreamer frequently, to avoid data building up in the DataStreamer's
temporary structures. Or, if you have a few entries which are very large,
you can just use the Cache API to populate those.
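A sketch of that tuning (the cache name, buffer sizes, entry count and flush cadence are illustrative assumptions, not recommendations for any particular workload):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerTuning {
    /** Loads 10,000 entries with small buffers and periodic flushes; returns the cache size. */
    public static int run(String cacheName) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cacheName);

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer(cacheName)) {
                // Smaller buffers bound how much data can pile up inside the streamer.
                streamer.perNodeParallelOperations(4);
                streamer.perNodeBufferSize(256);
                streamer.perThreadBufferSize(256);

                for (long i = 0; i < 10_000; i++) {
                    streamer.addData(i, "value-" + i);

                    if (i % 1_000 == 0)
                        streamer.flush(); // blocks until buffered entries reach the caches
                }
            } // close() flushes the remainder and blocks until it is loaded

            return cache.size();
        }
    }

    public static void main(String[] args) {
        System.out.println(run("myCache"));
    }
}
```

The periodic flush() trades throughput for a hard cap on in-flight data, which is usually the right trade when the heap, not load speed, is the bottleneck.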

Regards,
-- 
Ilya Kasnacheev


Sun, 14 Apr 2019 at 18:45, kellan :

> I seem to be running into some sort of memory issues with my DataStreamers
> and I'd like to get a better idea of how they work behind the scenes to
> troubleshoot my problem.
>
> I have a cluster of 4 nodes, each of which is pulling files from S3 over an
> extended period of time and loading the contents. Each file opens a new
> DataStreamer, loads its contents, and closes the DataStreamer. At most 4
> DataStreamers are writing to 4 different caches simultaneously. A
> new DataStreamer isn't created until the last one on that thread is closed.
> I wait for the futures to complete, then close the DataStreamer. So far so
> good.
>
> After my nodes are running for a few hours, one or more inevitably ends up
> crashing. Sometimes the Java heap overflows and Java exits, and sometimes
> Java is killed by the kernel because of an OOM error.
>
> Here are my specs per node:
> Total Available Memory: 110GB
> Memory Assigned to All Data Regions: 50GB
> Total Checkpoint Page Buffers: 5GB
> Java Heap: 25GB
>
> Does DataStreamer.close block until data is loaded into the cache on remote
> nodes (I'm assuming it doesn't)? And if not, is there any way to monitor the
> progress of loading data into the cache on the remote nodes/replicas, so I
> can slow down my DataStreamers to keep pace?
>
>
>