Re: Persistent dataregion config

2020-03-24 Thread Evgenii Zhuravlev
Hi Andrey,

After you enable persistence, Ignite itself is responsible for rotating data
between memory and disk, and the eviction mode and threshold for that process
cannot be changed. The pageEvictionMode and evictionThreshold parameters relate
to evicting data from Ignite completely (not from memory to disk) and are
intended for in-memory regions.

1. That is the total off-heap memory that will be used for this persistent region.
2. No, the off-heap usage for this region will be ${config.node.memory.max}.
Also keep in mind the heap and the checkpoint buffer size.
3. With pageEvictionMode enabled, data would be evicted from disk too.
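
For reference, a minimal sketch of an equivalent persistent region configured
through the Java API (the region name and sizes below are placeholders, not
taken from your setup); with persistence enabled the page eviction settings are
simply left at their defaults:

    // Sketch only (Ignite 2.7.x Java API); name and sizes are placeholders.
    DataRegionConfiguration regionCfg = new DataRegionConfiguration()
        .setName("persistent-region")
        .setPersistenceEnabled(true)              // Ignite rotates pages between RAM and disk itself
        .setInitialSize(512L * 1024 * 1024)       // e.g. 512 MB
        .setMaxSize(4L * 1024 * 1024 * 1024);     // off-heap cap, i.e. ${config.node.memory.max}
    // pageEvictionMode / evictionThreshold are intentionally not set for a persistent region.

    DataStorageConfiguration storageCfg = new DataStorageConfiguration()
        .setDataRegionConfigurations(regionCfg);

    IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(storageCfg);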

Evgenii

Tue, 24 Mar 2020 at 10:23, Andrey Davydov:

> Hello,
>
>
>
> Please help me to set up a data region properly.
>
>
>
> I would like to have a region which can store up to
> ${config.node.total.memory.max} bytes, but keep only a small part of the data in RAM.
>
>
>
> My current config (Ignite 2.7.6):
>
>
>
>         <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>
>             <property name="persistenceEnabled" value="true"/>
>
>             <property name="initialSize" value="${config.node.memory.initial}"/>
>
>             <property name="maxSize" value="${config.node.memory.max}"/>
>
>             <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
>
>             <property name="evictionThreshold" value="${config.node.memory.eviction.threshold}"/>
>
>         </bean>
>
>
>
> Please confirm three points:
>
>
>
>    1. The total size will be ${config.node.memory.max}
>    2. The total used RAM size will be approximately ${config.node.memory.max} *
>    ${config.node.memory.eviction.threshold}
>3. Data will not be evicted from disk, only from RAM
>
>
>
> Thanks.
>
> Andrey.
>
>
>


Persistent dataregion config

2020-03-24 Thread Andrey Davydov
Hello,

Please help me to set up a data region properly. I would like to have a region
which can store up to ${config.node.total.memory.max} bytes, but keep only a
small part of the data in RAM.

My current config (Ignite 2.7.6):

Please confirm three points:

1. The total size will be ${config.node.memory.max}
2. The total used RAM size will be approximately ${config.node.memory.max} * ${config.node.memory.eviction.threshold}
3. Data will not be evicted from disk, only from RAM

Thanks.
Andrey.


Re: Exporter usage of Ignite 2.8.0

2020-03-24 Thread Denis Magda
Kamlesh, Anton,

There are documentation pages that should answer your questions:
https://apacheignite.readme.io/docs/new-metrics#section-exporters

Just in case, looping in Nikolay who is a primary contributor to the
feature.

-
Denis


On Mon, Mar 23, 2020 at 4:02 AM Kamlesh Joshi  wrote:

> Hi Team,
>
>
>
> Can you help with how to use the inbuilt exporter in Ignite 2.8.0? I read in the
> release notes that '*Added monitoring API - an exporter of Ignite metrics
> to external recipients*'.
>
>
>
> Do we have to manually expose these exporters on some port, or are they
> exposed on a default port, so that they can be consumed directly by
> Prometheus?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>


Re: Versions of windows supported and meaning of log error message - This operating system has been tested less rigorously

2020-03-24 Thread Pavel Tupitsyn
This message is old and misleading, sorry.

- Yes, we test on Windows as well as on Linux
- Windows 10 is actually the most tested Windows version, I believe - all
Windows TeamCity agents are at version 10

> Can we go ahead with using ignite on windows?
Yes!


On Tue, Mar 24, 2020 at 8:00 PM rohankur  wrote:

> We are introducing Ignite in a product and expect it to be used by customers
> who will have Windows systems primarily.
> I see in this link
> https://apacheignite.readme.io/docs/getting-started
> the following list of OS
> Linux (any flavor),
> Mac OSX (10.6 and up)
> Windows (XP and up),
> Windows Server (2008 and up)
> Oracle Solaris
>
> However, when I start Ignite on Windows servers I see the following log
> message:
> *This operating system has been tested less rigorously: Windows 10 10.0
> amd64*. Our team will appreciate the feedback if you experience any
> problems
> running ignite in this environment.
>
> WARN  org.apache.ignite.internal.GridDiagnostic - *This operating system
> has
> been tested less rigorously: Windows Server 2008 6.0 amd64.*
>
> Is there a subset of Windows OS versions that have not been tested
> rigorously? Broadly speaking, what is the difference between rigorous and
> less rigorous testing?
> Can we go ahead with using ignite on windows?
>
> Thanks,
> Rohan
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Versions of windows supported and meaning of log error message - This operating system has been tested less rigorously

2020-03-24 Thread rohankur
We are introducing Ignite in a product and expect it to be used by customers who
will have Windows systems primarily.
I see in this link
https://apacheignite.readme.io/docs/getting-started
the following list of OS
Linux (any flavor),
Mac OSX (10.6 and up)
Windows (XP and up),
Windows Server (2008 and up)
Oracle Solaris

However, when I start Ignite on Windows servers I see the following log
message:
*This operating system has been tested less rigorously: Windows 10 10.0
amd64*. Our team will appreciate the feedback if you experience any problems
running ignite in this environment.

WARN  org.apache.ignite.internal.GridDiagnostic - *This operating system has
been tested less rigorously: Windows Server 2008 6.0 amd64.*

Is there a subset of Windows OS versions that have not been tested
rigorously? Broadly speaking, what is the difference between rigorous and
less rigorous testing?
Can we go ahead with using ignite on windows?

Thanks,
Rohan





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Near cache configuration for partitioned cache

2020-03-24 Thread Evgenii Zhuravlev
Hi,

I see that you have persistence, did you clean the persistence directory
before changing configuration?

Evgenii

Tue, 24 Mar 2020 at 02:33, Dominik Przybysz:

> Hi,
> I configured the client node as you described in your email, and the heap usage on
> the server nodes does not look as expected:
>
>
> Node 112132F5(@n3), 10.100.0.230: 4 CPUs, Heap Used 36.49 %, CPU Load 76.50 %, Up Time 00:13:10.385
>   Size (Primary / Backup): Total: 75069 (75069 / 0), Heap: 75069 (75069 / ), Off-Heap: 0 (0 / 0), Off-Heap Memory: 0
>   Hi: 19636833, Mi: 39403166, Rd: 5903, Wr: 0
>
> Node 74786280(@n2), 10.100.0.239: 4 CPUs, Heap Used 33.94 %, CPU Load 81.07 %, Up Time 01:06:23.896
>   Size (Primary / Backup): Total: 74817 (74817 / 0), Heap: 74817 (74817 / ), Off-Heap: 0 (0 / 0), Off-Heap Memory: 0
>   Hi: 22447160, Mi: 44987105, Rd: 67434265, Wr: 0
>
> Node 5AB7B5FD(@n0), 10.100.0.205: 4 CPUs, Heap Used 69.39 %, CPU Load 15.50 %, Up Time 00:52:54.529
>   Size (Primary / Backup): Total: 2706142 (1460736 / 1245406), Heap: 15 (15 / ), Off-Heap: 2556142 (1310736 / 1245406), Off-Heap Memory:
>   Hi: 43629857, Mi: 0, Rd: 43629857, Wr: 52347667
>
> Node 0608CF95(@n1), 10.100.0.206: 4 CPUs, Heap Used 42.24 %, CPU Load 17.07 %, Up Time 00:52:39.093
>   Size (Primary / Backup): Total: 2706142 (1395406 / 1310736), Heap: 15 (15 / ), Off-Heap: 2556142 (1245406 / 1310736), Off-Heap Memory:
>   Hi: 43644401, Mi: 0, Rd: 43644401, Wr: 52347791
>
> The 1st and 2nd entries are clients and the 3rd and 4th are server nodes.
> My client nodes have an LRU near cache with size 10 and I am querying the
> cache with 15 random data.
> But why are there heap entries on the server nodes?
>
> Tue, 24 Mar 2020 at 08:40, Dominik Przybysz wrote:
>
>> Hi,
>> exactly, I want to have the near cache only on client nodes. I will check your
>> advice about the dynamic cache.
>> I have two server nodes which keep the data, and I want to get the data from them
>> via my client nodes.
>> I am also curious what happened with the heap on the server nodes.
>>
>> Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev wrote:
>>
>>> Hi,
>>>
>>> Near Cache configuration in XML creates near caches for all nodes,
>>> including server nodes. As far as I understand, you want to have them on
>>> the client side only, right? If so, I'd recommend creating them dynamically:
>>> https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
>>>
>>> What kind of operations are you running? Are you trying to access data
>>> on a server from another server node? In any case, so many entries on the heap on
>>> server nodes look strange.
>>>
>>> Evgenii
>>>
>>> Mon, 23 Mar 2020 at 07:08, Dominik Przybysz:
>>>
 Hi,
 I am using Ignite 2.7.6 and I have 2 server nodes with one partitioned
 cache and configuration:

 
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

 >>> 

Re: Loading Data from RDMS to ignite using Data Streamer

2020-03-24 Thread akurbanov
Since this feels like a performance question, I would start by identifying the
bottleneck in this case.

How much data are you going to load from Oracle?

What consumes the most time while streaming: reading the data from Oracle, or the
streaming itself? How high is the resource usage while streaming (CPU/memory
consumption)? This will define what should be tuned or changed.

What are the current numbers for loadCache(null) vs DataStreamer?
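
For reference, a minimal sketch of the streaming side (the cache name, JDBC URL
and query below are placeholders, not taken from your setup):

    // Sketch only: tune the buffer/parallelism numbers for your environment.
    try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("personCache");
         Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/service");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT id, name FROM PERSON")) {

        streamer.perNodeBufferSize(1024);        // larger buffers usually help throughput
        streamer.perNodeParallelOperations(8);   // in-flight batches per node

        while (rs.next())
            streamer.addData(rs.getLong("id"), rs.getString("name"));
    } // close() flushes whatever is still buffered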

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Authenticate at cache-level

2020-03-24 Thread akurbanov
Hello,

Could you please elaborate on your use case: what do you mean by
"authentication at the cache level"? Do you mean the storage for users?

It is possible to create/store users via SQL in Ignite, but customizing
GridSecurityProcessor allows you to store users anywhere, including writing
custom authentication/authorization logic.
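
If SQL-managed users are what you are after: a minimal sketch, assuming native
authentication is enabled on a persistent cluster (host and credentials below are
placeholders; the default superuser is ignite/ignite):

    // Requires IgniteConfiguration.setAuthenticationEnabled(true) and persistence.
    try (Connection conn = DriverManager.getConnection(
            "jdbc:ignite:thin://127.0.0.1:10800", "ignite", "ignite");
         Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("CREATE USER reportuser WITH PASSWORD 'secret'");
    }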

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Exporter usage of Ignite 2.8.0

2020-03-24 Thread akurbanov
Hello,

Unfortunately, the documentation is not available yet on the website, but
you can use
org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi that
comes with ignite-opencensus in distribution:
$IGNITE_HOME/libs/optional/ignite-opencensus. 

The metric exporter should be registered in IgniteConfiguration, please see
the Java example:
https://github.com/nizhikov/ignite/blob/b362cfad309ec8f31c6cba172391c74589c9191f/modules/opencensus/src/test/java/org/apache/ignite/internal/processors/monitoring/opencensus/OpenCensusMetricExporterSpiTest.java

Prometheus:
https://opencensus.io/exporters/supported-exporters/java/prometheus/
Documentation waiting list:
http://apache-ignite-developers.2346864.n4.nabble.com/Ignite-2-8-documentation-td46008.html
IEP 35:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=112820392=sidebar
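
A minimal registration sketch (export period and Prometheus port below are
placeholders; the Prometheus wiring follows the OpenCensus documentation linked
above):

    // Requires ignite-opencensus plus the OpenCensus Prometheus stats exporter on the classpath.
    OpenCensusMetricExporterSpi exporterSpi = new OpenCensusMetricExporterSpi();
    exporterSpi.setPeriod(5_000); // push metrics to OpenCensus every 5 seconds

    IgniteConfiguration cfg = new IgniteConfiguration()
        .setMetricExporterSpi(exporterSpi);

    // Expose the collected stats over HTTP for Prometheus to scrape.
    PrometheusStatsCollector.createAndRegister();
    HTTPServer prometheusEndpoint = new HTTPServer("0.0.0.0", 8888, true);

    Ignite ignite = Ignition.start(cfg);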

Best regards,
Anton




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Server Node comes down when a node comes down and an update is issued within the failuredetectiontimeout

2020-03-24 Thread VeenaMithare
We have a 3-node server cluster (issue observed in 2.7.6; could not test in
2.8.0 because I am unable to bring up DBeaver in 2.8.0 with the security
plugin enabled:
http://apache-ignite-users.70518.x6.nabble.com/2-8-0-JDBC-Thin-Client-Unable-to-load-the-tables-via-DBeaver-td31681.html)

A 4th node joins as a client with a continuous query on Table A
(transaction mode = transactional).

Now, if I bring the client down and issue an update to Table A within
failureDetectionTimeout 3, I get the following error, and this error
brings the server down since it causes an *unhandled exception in a critical
thread*:

==
/*ERROR  [] - Critical system error detected. Will be handled accordingly to
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false,
timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
[type=CRITICAL_ERROR, err=java.lang.NoClassDefFoundError:
com/companyname/projectname/modulename/helper/ContinuousQueryHelper]]
java.lang.NoClassDefFoundError:
com/companyname/projectname/modulename/helper/ContinuousQueryHelper*/
at
com.companyname.projectname.modulename.helper.ContinuousQueryHelper$ModuleTableRemoteFilterFactory$1.evaluate(ContinuousQueryHelper.java:289)
~[?:?]
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.filter(CacheContinuousQueryHandler.java:833)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$2.onEntryUpdated(CacheContinuousQueryHandler.java:422)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:426)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerSet(GridCacheMapEntry.java:1584)
~[ignite-core-2.7.6.jar:2.7.6]
=
"(err) Failed to notify listener: GridDhtTxPrepareFuture Error"
===
Basically, the server tries to update the record in Table A and tries to
notify the client, since it had registered a continuous query for Table A.
But because the client node has been brought down, the remote filter factory
lambda is undeployed. Hence the server is no longer able to complete the
transaction.

*This also brings the server down.*

How can I resolve this issue?
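
For context, a rough sketch of the kind of registration the client performs
(the filter class, key/value types and cache variable below are hypothetical,
not the real ones from the project). The remote filter class reaches the servers
via peer class loading, so deploying it on every server's own classpath is one
way to keep it resolvable after the client leaves:

    // Hypothetical filter shipped to the servers; here it simply accepts every update.
    public class AcceptAllFilter implements CacheEntryEventSerializableFilter<Long, BinaryObject> {
        @Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends BinaryObject> evt) {
            return true;
        }
    }

    ContinuousQuery<Long, BinaryObject> qry = new ContinuousQuery<>();
    qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(AcceptAllFilter.class));
    qry.setLocalListener(evts -> evts.forEach(e -> System.out.println("Updated: " + e.getKey())));

    try (QueryCursor<Cache.Entry<Long, BinaryObject>> cur = tableACache.query(qry)) {
        // keep the cursor open while the continuous query should stay registered
    }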
===
Please find the complete stack trace for this error :


2020-03-13 17:13:40,145 sys-stripe-19-#20 ERROR [] - Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
[type=CRITICAL_ERROR, err=java.lang.NoClassDefFoundError:
com/companyname/projectname/Module/helper/ContinuousQueryHelper]]
java.lang.NoClassDefFoundError:
com/companyname/projectname/Module/helper/ContinuousQueryHelper
at
com.companyname.projectname.Module.helper.ContinuousQueryHelper$ModuleTableRemoteFilterFactory$1.evaluate(ContinuousQueryHelper.java:289)
~[?:?]
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.filter(CacheContinuousQueryHandler.java:833)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$2.onEntryUpdated(CacheContinuousQueryHandler.java:422)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:426)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerSet(GridCacheMapEntry.java:1584)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:741)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3646)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:475)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:425)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:3788)
~[ignite-core-2.7.6.jar:2.7.6]
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:3782)
~[ignite-core-2.7.6.jar:2.7.6]
at

Re: 2.8.0 : JDBC Thin Client : Unable to load the tables via DBeaver

2020-03-24 Thread VeenaMithare
Hi,
I created a ticket for the select operation, since IGNITE-12579 mentions only
the insert operation failure:
https://issues.apache.org/jira/browse/IGNITE-12833

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: No ignitevisorcmd.sh in Ignite 2.8

2020-03-24 Thread joaogoncalves
Hi again

It happened to be the same as  Thank you for your help

 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.8.0 : JDBC Thin Client : Unable to load the tables via DBeaver

2020-03-24 Thread VeenaMithare
Hi,

On further debugging, I found out that when security is enabled and I do
updates/selects from DBeaver, the security context returned by
ctx.security().securityContext() in the class GridIoManager, method
createGridIoMessage, is the security context of the thin client. The message
generated by createGridIoMessage is passed on to the next node and is used in
IgniteSecurityProcessor (withContext method) on that node:

    @Override public OperationSecurityContext withContext(UUID nodeId) {
        return withContext(secCtxs.computeIfAbsent(nodeId,
            uuid -> nodeSecurityContext(marsh,
                U.resolveClassLoader(ctx.config()), ctx.discovery().node(uuid))));
    }

The ctx.discovery().node(uuid) call used to determine the ClusterNode passed into
nodeSecurityContext() returns null, since the uuid is that of the remote client,
not the remote node. I feel this might be a bug and security handling for thin
clients might have broken. Could you advise?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

RE: Re: Ignite memory leaks in 2.8.0

2020-03-24 Thread Andrey Davydov
Sorry, it was a message for another mail thread, where another way of leaking in the
connection manager is discussed.

>> thread local logic of connection manager is mess

+100500, kill it with fire =))

Andrey.

From: Taras Ledkov
Sent: 24 March 2020, 11:09
To: user@ignite.apache.org
Subject: Re: Ignite memory leaks in 2.8.0

On Fri, Mar 20, 2020 at 11:51 PM Andrey Davydov wrote:

I found one more way for a leak and understand the reason:

this - value: org.apache.ignite.internal.IgniteKernal #1 <- grid
- class: org.apache.ignite.internal.GridKernalContextImpl, value: org.apache.ignite.internal.IgniteKernal #1 <- ctx
- class: org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor, value:

Re: Near cache configuration for partitioned cache

2020-03-24 Thread Dominik Przybysz
Hi,
I configured the client node as you described in your email, and the heap usage on
the server nodes does not look as expected:

Node 112132F5(@n3), 10.100.0.230: 4 CPUs, Heap Used 36.49 %, CPU Load 76.50 %, Up Time 00:13:10.385
  Size (Primary / Backup): Total: 75069 (75069 / 0), Heap: 75069 (75069 / ), Off-Heap: 0 (0 / 0), Off-Heap Memory: 0
  Hi: 19636833, Mi: 39403166, Rd: 5903, Wr: 0

Node 74786280(@n2), 10.100.0.239: 4 CPUs, Heap Used 33.94 %, CPU Load 81.07 %, Up Time 01:06:23.896
  Size (Primary / Backup): Total: 74817 (74817 / 0), Heap: 74817 (74817 / ), Off-Heap: 0 (0 / 0), Off-Heap Memory: 0
  Hi: 22447160, Mi: 44987105, Rd: 67434265, Wr: 0

Node 5AB7B5FD(@n0), 10.100.0.205: 4 CPUs, Heap Used 69.39 %, CPU Load 15.50 %, Up Time 00:52:54.529
  Size (Primary / Backup): Total: 2706142 (1460736 / 1245406), Heap: 15 (15 / ), Off-Heap: 2556142 (1310736 / 1245406), Off-Heap Memory:
  Hi: 43629857, Mi: 0, Rd: 43629857, Wr: 52347667

Node 0608CF95(@n1), 10.100.0.206: 4 CPUs, Heap Used 42.24 %, CPU Load 17.07 %, Up Time 00:52:39.093
  Size (Primary / Backup): Total: 2706142 (1395406 / 1310736), Heap: 15 (15 / ), Off-Heap: 2556142 (1245406 / 1310736), Off-Heap Memory:
  Hi: 43644401, Mi: 0, Rd: 43644401, Wr: 52347791

The 1st and 2nd entries are clients and the 3rd and 4th are server nodes.
My client nodes have an LRU near cache with size 10 and I am querying the cache
with 15 random data.
But why are there heap entries on the server nodes?

Tue, 24 Mar 2020 at 08:40, Dominik Przybysz wrote:

> Hi,
> exactly, I want to have the near cache only on client nodes. I will check your
> advice about the dynamic cache.
> I have two server nodes which keep the data, and I want to get the data from them
> via my client nodes.
> I am also curious what happened with the heap on the server nodes.
>
> Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev wrote:
>
>> Hi,
>>
>> Near Cache configuration in XML creates near caches for all nodes,
>> including server nodes. As far as I understand, you want to have them on
>> the client side only, right? If so, I'd recommend creating them dynamically:
>> https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
>>
>> What kind of operations are you running? Are you trying to access data on
>> a server from another server node? In any case, so many entries on the heap on
>> server nodes look strange.
>>
>> Evgenii
>>
>> Mon, 23 Mar 2020 at 07:08, Dominik Przybysz:
>>
>>> Hi,
>>> I am using Ignite 2.7.6 and I have 2 server nodes with one partitioned
>>> cache and configuration:
>>>
>>> 
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>        xsi:schemaLocation="
>>>        http://www.springframework.org/schema/beans
>>>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>>>
>>> >> class="org.apache.ignite.configuration.IgniteConfiguration">
>>> 
>>> >> class="org.apache.ignite.configuration.CacheConfiguration">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>>
>>> 
>>> >> 

Re: Python - Ignite 2.8.0 - java.lang.NullPointerException

2020-03-24 Thread dbutkovic
Hi Evgenii,
yesterday and over several previous days I was doing some testing on Ignite
2.8.0.
Yesterday I did a fresh install (rm $IGNITE_HOME/work) on the test instance, and
now, on a freshly started instance, I can't reproduce the problem.
For now we can close the case; if I manage to reproduce the problem again
I will reopen the post.

Best regards

Dren



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite memory leaks in 2.8.0

2020-03-24 Thread Taras Ledkov

Hi, Andrey

Hmm. There is ConnectionManager#cleanupConnections to close the connections
of terminated threads (run periodically, with a default timeout of 2000 ms).

So, if a detached connection is recycled after use and returned into
#threadConns, it should be closed after the owner thread has terminated.

Take a look at the test: H2ConnectionLeaksSelfTest#testConnectionLeaks

But as I wrote: the thread-local logic of the connection manager is a mess, hard to
understand and promising many troubles.

I think we have to change it.

On 23.03.2020 23:00, Andrey Davydov wrote:
It seems a detached connection NEVER becomes attached to a thread other than the
one it was born in, because the borrow method always returns the object related
to the caller thread. I.e. all detached connections born in a joined thread are
never collectable.


So a possible reproduce scenario: start a separate thread. Run in this
thread some logic that creates a detached connection, then finish and join the
thread. Remove the link to the thread. Repeat.
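
A minimal sketch of that scenario (the cache and query below are placeholders),
in case it helps to turn it into a reproducer:

    // Each iteration creates and joins a short-lived thread that runs a SQL query,
    // so any H2 connection bound to that thread is orphaned once the thread dies.
    for (int i = 0; i < 1_000; i++) {
        Thread worker = new Thread(() ->
            cache.query(new SqlFieldsQuery("SELECT COUNT(*) FROM TEST")).getAll());
        worker.start();
        worker.join(); // InterruptedException must be handled or declared by the caller
    }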


Mon, 23 Mar 2020, 15:49, Taras Ledkov:


Hi,

Thanks for your investigation.
The root cause is clear. What use case is causing the leak?

I've created an issue to remove the messy ThreadLocal logic from
ConnectionManager. [1]
We've done it in the GG Community Edition and it works OK.

[1]. https://issues.apache.org/jira/browse/IGNITE-12804

On 21.03.2020 22:50, Andrey Davydov wrote:

A simple diagnostic utility I use to detect these problems:

import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.internal.GridComponent;
import org.apache.ignite.internal.IgniteKernal;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class IgniteWeakRefTracker {

    private static final Logger LOGGER =
LogManager.getLogger(IgniteWeakRefTracker.class);

    private final String clazz;
    private final String testName;
    private final String name;
    private final WeakReference<Ignite> innerRef;
    private final List<WeakReference<GridComponent>> componentRefs = new ArrayList<>(128);

    private static final LinkedList<IgniteWeakRefTracker> refs = new LinkedList<>();

    private IgniteWeakRefTracker(String testName, Ignite ignite) {
        this.clazz = ignite.getClass().getCanonicalName();
        this.innerRef = new WeakReference<>(ignite);
        this.name = ignite.name();
        this.testName = testName;

        if (ignite instanceof IgniteKernal) {
            IgniteKernal ik = (IgniteKernal) ignite;
            List<GridComponent> components = ik.context().components();
            for (GridComponent c : components) {
                componentRefs.add(new WeakReference<>(c));
            }
        }
    }

    public static void register(String testName, Ignite ignite) {
        refs.add(new IgniteWeakRefTracker(testName, ignite));
    }

    public static void trimCollectedRefs() {

        List<IgniteWeakRefTracker> toRemove = new ArrayList<>();

        for (IgniteWeakRefTracker ref : refs) {
            if (ref.isIgniteCollected()) {
                LOGGER.info("Collected ignite: ignite {} from
test {}", ref.getIgniteName(), ref.getTestName());
                toRemove.add(ref);
                if (ref.igniteComponentsNonCollectedCount() != 0) {
                    throw new IllegalStateException("Non
collected components for collected ignite.");
                }
            } else {
                LOGGER.warn("Leaked ignite: ignite {} from test
{}", ref.getIgniteName(), ref.getTestName());
            }
        }

        refs.removeAll(toRemove);

        LOGGER.info("Leaked ignites count:  {}", refs.size());

    }

    public static int getLeakedSize() {
        return refs.size();
    }

    public boolean isIgniteCollected() {
        return innerRef.get() == null;
    }

    public int igniteComponentsNonCollectedCount() {
        int res = 0;

        for (WeakReference<GridComponent> cr : componentRefs) {
            GridComponent gridComponent = cr.get();
            if (gridComponent != null) {
                LOGGER.warn("Uncollected component: {}",
gridComponent.getClass().getSimpleName());
                res++;
            }
        }

        return res;
    }

    public String getClazz() {
        return clazz;
    }

    public String getTestName() {
        return testName;
    }

    public String getIgniteName() {
        return name;
    }

}


On Fri, Mar 20, 2020 at 11:51 PM Andrey Davydov <andrey.davy...@gmail.com> wrote:

I found one more way for leak and understand reason:


this     - value: 

Re: Query timeout

2020-03-24 Thread breathem
Hi, Taras.
We are trying to connect to the server via NetBeans 8.2 for development purposes.
In the Services tab we add the Ignite 2.8.0 JDBC driver (ignite-core-2.8.0.jar) and
choose org.apache.ignite.IgniteJdbcThinDriver.

To reproduce a long-running query, we create 2 tables:
create table if not exists TEST
(kField long not null, vField varchar(100) not null, rField varchar(100) not
null, primary key (kField))
WITH "template=REPLICATED, cache_name=TEST, key_type=KTEST,
value_type=VTEST"

create table if not exists TEST1
(kField long not null, vField varchar(100) not null, rField varchar(100) not
null, primary key (kField))
WITH "template=REPLICATED, cache_name=TEST1, key_type=KTEST1,
value_type=VTEST1"

Then we fill these tables with 1,000,000 random data rows, e.g.
int size = 1_000_000;

IgniteCallable<Integer> clb = new IgniteCallable<Integer>()
{
  @IgniteInstanceResource
  Ignite ignite;
  
  @Override
  public Integer call() throws Exception
  {
IgniteCache<BinaryObject, BinaryObject> test = ignite.cache("TEST").withKeepBinary();

if (test.size(CachePeekMode.ALL) > 0)
{
  test.clear();
}

IgniteBinary bin = ignite.binary();
long t;

for (int i = 0; i < size; i++)
{
  BinaryObjectBuilder k = bin.builder("KTEST");
  BinaryObjectBuilder v = bin.builder("VTEST");
  
  t = System.currentTimeMillis();
  
  k.setField("kField", t);
  v.setField("vField", "I am value string " + t);
  v.setField("rField", randomString(100));
  
  test.putAsync(k.build(), v.build());
}

return null;
  }
};

Then we connect to the server via NetBeans with
jdbc:ignite:thin://192.168.1.138:10800?queryTimeout=1

Then we run a long-running query:
select t.kField, t.vField, t1.vField from test t inner join test1 t1 on
t.vField = t1.vField;

In our case this query runs for ~73 seconds and is not cancelled.
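
For comparison, the same query can also be bounded from the Java API via
SqlFieldsQuery#setTimeout (a sketch; cache name as above):

    SqlFieldsQuery qry = new SqlFieldsQuery(
        "select t.kField, t.vField, t1.vField from test t inner join test1 t1 on t.vField = t1.vField")
        .setTimeout(1, TimeUnit.SECONDS);

    try (FieldsQueryCursor<List<?>> cur = ignite.cache("TEST").query(qry)) {
        cur.getAll(); // expected to fail with a cancellation error once the timeout elapses
    }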



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Near cache configuration for partitioned cache

2020-03-24 Thread Dominik Przybysz
Hi,
exactly, I want to have the near cache only on client nodes. I will check your
advice about the dynamic cache.
I have two server nodes which keep the data, and I want to get the data from them
via my client nodes.
I am also curious what happened with the heap on the server nodes.
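
In case it is useful while checking the dynamic approach, a minimal sketch
(key/value types and the eviction size are placeholders) of creating the near
cache on the client node only:

    // Run on the client node only; server nodes keep the data off-heap and get no near cache.
    NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
    nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

    IgniteCache<Integer, String> cache = clientIgnite.getOrCreateNearCache("cache1", nearCfg);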

Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev wrote:

> Hi,
>
> Near Cache configuration in XML creates near caches for all nodes,
> including server nodes. As far as I understand, you want to have them on
> the client side only, right? If so, I'd recommend creating them dynamically:
> https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
>
> What kind of operations are you running? Are you trying to access data on
> a server from another server node? In any case, so many entries on the heap on
> server nodes look strange.
>
> Evgenii
>
> Mon, 23 Mar 2020 at 07:08, Dominik Przybysz:
>
>> Hi,
>> I am using Ignite 2.7.6 and I have 2 server nodes with one partitioned
>> cache and configuration:
>>
>> 
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>        xsi:schemaLocation="
>>        http://www.springframework.org/schema/beans
>>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>>
>> > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>> > class="org.apache.ignite.configuration.CacheConfiguration">
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>> 
>> 
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>> 
>> ignite1:47100..47200
>> ignite2:47100..47200
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.configuration.ClientConnectorConfiguration">
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.configuration.DataStorageConfiguration">
>> 
>> > class="org.apache.ignite.configuration.DataRegionConfiguration">
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>>
>> > value="{{ignite_system_thread_pool_size}}"/>
>> > value="{{ignite_cluster_data_streamer_thread_pool_size}}"/>
>> 
>> 
>>
>> I loaded 1.5 million entries into the cluster via the data streamer.
>> I tested this topology without the near cache and everything was fine, but
>> when I tried to add a near cache to my client nodes, the server nodes started
>> to keep data on the heap and read RPS fell dramatically (from 150k RPS to 10k
>> RPS).
>>
>> My clients' configuration:
>>
>> 
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>        xsi:schemaLocation="
>>        http://www.springframework.org/schema/beans
>>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>> > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>> 
>> ignite1:47100..47200
>> ignite2:47100..47200
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.configuration.CacheConfiguration">
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> > class="org.apache.ignite.configuration.NearCacheConfiguration">
>> 
>> > class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> On visor i see:
>>
>> Nodes for: cache1(@c0)
>>
>> +=+
>> |   Node