Fastest way to remove in FIFO order

2018-11-15 Thread Alew

Hi!

I need to clean up old data and use this query for that:

DELETE FROM IgniteTransfer WHERE _KEY IN (SELECT TOP ? _KEY FROM 
IgniteTransfer WHERE AccountId = ? ORDER BY TimeStamp ASC)


But sometimes I get a warning

[00:05:47 WRN] Query execution is too long [time=6476 ms, sql='SELECT
_KEY,
_VAL
FROM "LocalTransfers2".IGNITETRANSFER
WHERE _KEY IN( SELECT
_KEY
FROM "LocalTransfers2".IGNITETRANSFER
WHERE ACCOUNTID = ?2
ORDER BY =TIMESTAMP LIMIT ?1 )', plan=
SELECT
    _KEY,
    _VAL
FROM "LocalTransfers2".IGNITETRANSFER
    /* "LocalTransfers2"."_key_PK": _KEY IN(SELECT
    _KEY
    FROM "LocalTransfers2".IGNITETRANSFER
    /++ 
"LocalTransfers2".IGNITETRANSFER_ACCOUNTID_ASC_TIMESTAMP_ASC_IDX: 
ACCOUNTID = ?2 ++/

    WHERE ACCOUNTID = ?2
    ORDER BY =TIMESTAMP
    LIMIT ?1)
 */
WHERE _KEY IN(
    SELECT
    _KEY
    FROM "LocalTransfers2".IGNITETRANSFER
    /* 
"LocalTransfers2".IGNITETRANSFER_ACCOUNTID_ASC_TIMESTAMP_ASC_IDX: 
ACCOUNTID = ?2 */

    WHERE ACCOUNTID = ?2
    ORDER BY =TIMESTAMP
    LIMIT ?1)
, parameters=[2000, 1001]]

6 seconds to remove 2000 elements is too much. What is the complexity of 
this operation?


What is the most efficient way to remove old data in FIFO order?

All inserts and deletes are handled by a single dedicated thread; reading is 
multithreaded.
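
For reference, here is a sketch of an alternative I could try (not benchmarked; 
the RemoveOldest helper and its names are only illustrative): select the oldest 
keys with a SqlFieldsQuery and remove them directly by key, so the outer IN (...) 
lookup is avoided.

using System.Collections.Generic;
using System.Linq;
using Apache.Ignite.Core.Cache;
using Apache.Ignite.Core.Cache.Query;

static class TransferCleanup
{
    // Sketch only: fetch the oldest keys for an account, then remove them by key.
    // Assumes the same cache and the AccountId + TimeStamp index from the config below.
    public static void RemoveOldest(ICache<long, IgniteTransfer> cache, int accountId, int count)
    {
        var qry = new SqlFieldsQuery(
            "SELECT _KEY FROM IgniteTransfer WHERE AccountId = ? ORDER BY TimeStamp ASC LIMIT ?",
            accountId, count);

        // Only the keys come back from the query; values are never materialized.
        IList<long> keys = cache.Query(qry)
            .Select(row => (long) row[0])
            .ToList();

        cache.RemoveAll(keys);
    }
}

This way only the keys travel through SQL and the removal itself is a plain 
key-based cache operation.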


Cache config

var query = new QueryEntity(typeof(long), typeof(IgniteTransfer))
{
    Fields = new[]
    {
        new QueryField { Name = nameof(IgniteTransfer.AccountId), FieldType = typeof(int) },
        new QueryField { Name = nameof(IgniteTransfer.TransferId), FieldType = typeof(long) },
        new QueryField { Name = nameof(IgniteTransfer.AssetId), FieldType = typeof(int) },
        new QueryField { Name = nameof(IgniteTransfer.Reason), FieldType = typeof(int) },
        new QueryField { Name = nameof(IgniteTransfer.TimeStamp), FieldType = typeof(long) },
        new QueryField { Name = nameof(IgniteTransfer.IsDeposit), FieldType = typeof(bool) },
    },
    Indexes = new[]
    {
        new QueryIndex
        {
            Fields = new[]
            {
                new QueryIndexField { Name = nameof(IgniteTransfer.AccountId) },
                new QueryIndexField { Name = nameof(IgniteTransfer.AssetId) }
            },
            IndexType = QueryIndexType.Sorted
        },
        new QueryIndex
        {
            Fields = new[]
            {
                new QueryIndexField { Name = nameof(IgniteTransfer.AccountId) },
                new QueryIndexField { Name = nameof(IgniteTransfer.Reason) }
            },
            IndexType = QueryIndexType.Sorted
        },
        new QueryIndex
        {
            Fields = new[]
            {
                new QueryIndexField { Name = nameof(IgniteTransfer.AccountId) },
                new QueryIndexField { Name = nameof(IgniteTransfer.TimeStamp) }
            },
            IndexType = QueryIndexType.Sorted
        }
    }
};

var localCacheCfg = new CacheConfiguration("LocalTransfers2", query)
{
    CacheMode = CacheMode.Local,
    AtomicityMode = CacheAtomicityMode.Atomic
};

var cache = ignite.GetOrCreateCache<long, IgniteTransfer>(localCacheCfg);



Re: .NET ContinuousQuery lose cache entries

2018-10-01 Thread Alew

Hi!

Tried, but the issue still exists.

https://monosnap.com/file/UUbYX4RUyXPZyPxKx97hTwSUSvNtye


On 01/10/2018 13:06, Ilya Kasnacheev wrote:

Hello!

Setting BufferSize to 1 seems to fix your reproducer's problem:

new ContinuousQuery(new CacheListener(skippedItems, doubleDelete), true) { BufferSize = 1 }



but the recommendation is to avoid doing anything synchronous in the 
Continuous Query listener's body. Better to offload any non-trivial processing 
to other threads that operate asynchronously.
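
For example, something along these lines (a rough sketch only, the class and 
member names are made up): the listener copies events onto a queue and returns 
immediately, while a separate consumer thread does the actual work.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using Apache.Ignite.Core.Cache.Event;

class AsyncCacheListener<TK, TV> : ICacheEntryEventListener<TK, TV>
{
    private readonly BlockingCollection<ICacheEntryEvent<TK, TV>> _queue =
        new BlockingCollection<ICacheEntryEvent<TK, TV>>();

    public AsyncCacheListener()
    {
        // The consumer runs outside the continuous query callback thread.
        Task.Run(() =>
        {
            foreach (var evt in _queue.GetConsumingEnumerable())
                Process(evt); // non-trivial work happens here
        });
    }

    public void OnEvent(IEnumerable<ICacheEntryEvent<TK, TV>> evts)
    {
        // Cheap hand-off; the callback itself stays fast.
        foreach (var evt in evts)
            _queue.Add(evt);
    }

    private void Process(ICacheEntryEvent<TK, TV> evt)
    {
        // ... actual processing ...
    }
}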


Regards,
--
Ilya Kasnacheev


Sat, 29 Sep 2018 at 5:18, Alew <mailto:ale...@gmail.com>:


Hi, attached a reproducer.
Turning the logs off makes the issue go away, but slow logging is not the only
cause; more nodes in the cluster lead to the same behaviour.
Who is responsible for the behaviour? Is it .NET, Java, bad docs, or me?

On 24/09/2018 20:03, Alew wrote:
> Hi!
>
> I need a way to consistently get all entries in a replicated cache and
> then all updates to them while the application is working.
>
> I use ContinuousQuery for it.
>
> var cursor = cache.QueryContinuous(new ContinuousQuery<..., byte[]>(new CacheListener(), true),
>     new ScanQuery<..., byte[]>()).GetInitialQueryCursor();
>
> But I have some issues with it.
>
> Sometimes the cursor returns only part of the entries in the cache and the
> cache listener does not return them either.
>
> Sometimes the cursor and the cache listener both return the same entry.
>
> The issue is somehow related to the amount of work the nodes have to do and
> the amount of time between the start of the publisher node and the
> subscriber node.
>
> There are more problems if the nodes start at the same time.
>
> Is there a reliable way to do it without controlling the order of node
> start and the pauses between them?





Re: .NET ContinuousQuery lose cache entries

2018-09-28 Thread Alew

Hi, attached a reproducer.
Turning the logs off makes the issue go away, but slow logging is not the only 
cause; more nodes in the cluster lead to the same behaviour.

Who is responsible for the behaviour? Is it .NET, Java, bad docs, or me?

On 24/09/2018 20:03, Alew wrote:

Hi!

I need a way to consistently get all entries in a replicated cache and 
then all updates to them while the application is working.


I use ContinuousQuery for it.

var cursor = cache.QueryContinuous(new ContinuousQuery<..., byte[]>(new CacheListener(), true),
    new ScanQuery<..., byte[]>()).GetInitialQueryCursor();

But I have some issues with it.

Sometimes the cursor returns only part of the entries in the cache and the 
cache listener does not return them either.


Sometimes the cursor and the cache listener both return the same entry.

The issue is somehow related to the amount of work the nodes have to do and 
the amount of time between the start of the publisher node and the subscriber node.


There are more problems if the nodes start at the same time.

Is there a reliable way to do it without controlling the order of node 
start and the pauses between them?







.NET ContinuousQuery lose cache entries

2018-09-24 Thread Alew

Hi!

I need a way to consistently get all entries in a replicated cache and 
then all updates to them while the application is working.


I use ContinuousQuery for it.

var cursor = cache.QueryContinuous(new ContinuousQuery<..., byte[]>(new CacheListener(), true),
    new ScanQuery<..., byte[]>()).GetInitialQueryCursor();

But I have some issues with it.

Sometimes the cursor returns only part of the entries in the cache and the 
cache listener does not return them either.


Sometimes the cursor and the cache listener both return the same entry.

The issue is somehow related to the amount of work the nodes have to do and the 
amount of time between the start of the publisher node and the subscriber node.


There are more problems if the nodes start at the same time.

Is there a reliable way to do it without controlling the order of node start 
and the pauses between them?





.NET java thread count keeps growing

2018-09-15 Thread Alew

Hi!

The Java thread count keeps growing and the application eventually crashes with 
an OOME. The application is very simple, it only reads values from a cache.


Any suggestions on how to debug this issue?

Ignite 2.4




Re: Can't connect VisualVM to Ignite 2.4, 2.6 process on windows

2018-09-14 Thread Alew

Hi!

Actually, there was no OOME before the VisualVM connect attempt. There is a 
point in time after Ignite starts after which it is no longer possible to 
connect VisualVM to Ignite.

I can connect them successfully on a different computer.
I think this is an environment-specific bug, somehow related to 
Windows + Hyper-V + an AMD CPU.


On 14/09/2018 11:54, Ilya Kasnacheev wrote:

Hello!

I don't think you can connect VisualVM to a JVM that is in an OutOfMemory state. 
You can try to use jmap to take a heap dump; failing that, try to 
increase the heap size and/or connect before it goes into OOM.
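
For example (standard jmap usage, not from the original mail; substitute the 
Ignite process PID):

jmap -dump:format=b,file=ignite-heap.hprof <pid>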


Regards,
--
Ilya Kasnacheev


Fri, 14 Sep 2018 at 0:43, Alew <mailto:ale...@gmail.com>:


Hi!

I was suggested to use VisualVM as a debugging tool but can't connect it
to the ignite process.
I start the bin/ignite.bat script and point VisualVM to the ignite process.
After that I get the default OS dialog "java platform se binary has stopped
working".
The cmd console contains:

[00:40:19,549][SEVERE][tcp-disco-ip-finder-cleaner-#8][TcpDiscoverySpi]

Runtime error caught during grid runnable execution: IgniteSpiThread
[name=tcp-disco-ip-finder-cleaner-#8]
java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:717)
 at

org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.requestAddresses(TcpDiscoveryMulticastIpFinder.java:499)
 at

org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.getRegisteredAddresses(TcpDiscoveryMulticastIpFinder.java:452)
 at

org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1828)
 at

org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.cleanIpFinder(ServerImpl.java:1938)
 at

org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.body(ServerImpl.java:1913)
 at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Exception in thread "tcp-disco-ip-finder-cleaner-#8"
java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:717)
 at

org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.requestAddresses(TcpDiscoveryMulticastIpFinder.java:499)
 at

org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.getRegisteredAddresses(TcpDiscoveryMulticastIpFinder.java:452)
 at

org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1828)
 at

org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.cleanIpFinder(ServerImpl.java:1938)
 at

org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.body(ServerImpl.java:1913)
Press any key to continue . . .

I can connect VisualVM to other apps successfully.

Thanks.





Can't connect VisualVM to Ignite 2.4, 2.6 process on windows

2018-09-13 Thread Alew

Hi!

I was suggested to use VisualVM as a debugging tool but can't connect it 
to the ignite process.

I start the bin/ignite.bat script and point VisualVM to the ignite process.
After that I get the default OS dialog "java platform se binary has stopped 
working".

The cmd console contains:

[00:40:19,549][SEVERE][tcp-disco-ip-finder-cleaner-#8][TcpDiscoverySpi] 
Runtime error caught during grid runnable execution: IgniteSpiThread 
[name=tcp-disco-ip-finder-cleaner-#8]

java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:717)
    at 
org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.requestAddresses(TcpDiscoveryMulticastIpFinder.java:499)
    at 
org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.getRegisteredAddresses(TcpDiscoveryMulticastIpFinder.java:452)
    at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1828)
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.cleanIpFinder(ServerImpl.java:1938)
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.body(ServerImpl.java:1913)
    at 
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Exception in thread "tcp-disco-ip-finder-cleaner-#8" 
java.lang.OutOfMemoryError: unable to create new native thread

    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:717)
    at 
org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.requestAddresses(TcpDiscoveryMulticastIpFinder.java:499)
    at 
org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder.getRegisteredAddresses(TcpDiscoveryMulticastIpFinder.java:452)
    at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1828)
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.cleanIpFinder(ServerImpl.java:1938)
    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$IpFinderCleaner.body(ServerImpl.java:1913)

Press any key to continue . . .

I can connect VisualVM to other apps successfully.

Thanks.


.NET. Is there a way to get JVM metrics from dotnet?

2018-09-13 Thread Alew

Hi!

I get OOMEs, and my current hypothesis is that there are too many threads 
on the Java side in the _thread_in_native state.
To make a repro I need to get JVM metrics from .NET, so I want to know 
how to do it.
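
The closest thing I have found so far is the cluster metrics API; a minimal 
sketch (assuming IClusterMetrics exposes the thread and heap counters I need):

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cluster;

// Sketch: read JVM-side counters through the local node's cluster metrics.
IIgnite ignite = Ignition.Start();
IClusterMetrics m = ignite.GetCluster().GetLocalNode().GetMetrics();

Console.WriteLine($"Current threads: {m.CurrentThreadCount}");
Console.WriteLine($"Max threads:     {m.MaximumThreadCount}");
Console.WriteLine($"Heap used/max:   {m.HeapMemoryUsed} / {m.HeapMemoryMaximum}");

If thread exhaustion really is the cause, CurrentThreadCount sampled over time 
should show the growth.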


Thanks.


Re: Readonly nodes

2018-04-10 Thread Alew
That is OK, I don't need Ignite to do authentication, just to prevent data 
modification.


Denis mentioned node filters, but I didn't get how they work.


On 11/04/2018 02:56, vkulichenko wrote:

Ignite doesn't have authentication/authorization capabilities at the moment;
however, there are certain plans for that as far as I know.

In the meantime, you can take a look at 3rd-party vendors like GridGain that
have paid offerings for this:
https://docs.gridgain.com/docs/security-and-audit

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





Re: Readonly nodes

2018-04-10 Thread Alew
I use Ignite as a session store and need to allow write access only for 
authentication services.



On 11/04/2018 00:45, Denis Magda wrote:
The filter excludes the nodes from the list of those which can store 
data. That's why those nodes would never receive any requests.


Considering your follow-up questions, I guess you are looking for 
multi-tenancy capabilities. Do you want to prevent some of your 
applications from updating data stored in the cluster?


--
Denis

On Tue, Apr 10, 2018 at 11:49 AM, Alew <mailto:ale...@gmail.com> wrote:


Hi, Denis

Thank you for your answer.

What effect will the filter have? Does it mean that a read-only node
doesn't have its own copy of the data? Is it like a partitioned cache?
If so, why is it read-only?



On 10/04/2018 02:37, Denis Magda wrote:

Hi,

Yes, you can tap into the NodeFilter interface, applying it to your
CacheConfiguration in a way similar to ServiceConfiguration, as
shown here:

https://apacheignite.readme.io/docs/service-grid#section-node-filter-based-deployment

--
Denis

On Mon, Apr 9, 2018 at 4:23 PM, Alew <mailto:ale...@gmail.com> wrote:

Hi!
 I have a replicated cache. Is there a way to forbid inserting
and updating data in the cache for some nodes in a cluster?

Regards









Re: Readonly nodes

2018-04-10 Thread Alew

Hi, Denis

Thank you for your answer.

What effect will the filter have? Does it mean that a read-only node doesn't 
have its own copy of the data? Is it like a partitioned cache? If so, why is 
it read-only?




On 10/04/2018 02:37, Denis Magda wrote:

Hi,

Yes, you can tap into the NodeFilter interface, applying it to your 
CacheConfiguration in a way similar to ServiceConfiguration, as shown here:

https://apacheignite.readme.io/docs/service-grid#section-node-filter-based-deployment

--
Denis

On Mon, Apr 9, 2018 at 4:23 PM, Alew <mailto:ale...@gmail.com> wrote:


Hi!
 I have a replicated cache. Is there a way to forbid inserting and
updating data in the cache for some nodes in a cluster?

Regards






Readonly nodes

2018-04-09 Thread Alew

Hi!
 I have a replicated cache. Is there a way to forbid inserting and updating 
data in the cache for some nodes in a cluster?


Regards