Re: IgniteConfigurationFile does not load consistently

2017-12-26 Thread soodrah
I noticed the bean is defined as abstract. I failed to see that earlier; so in
the examples we get a default configuration, and the actual (concrete) bean can
add extra properties and be instantiated.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite-schedule latest version?

2017-12-26 Thread Alexey Kuznetsov
Hi,

I filed an issue about ignite-schedule some time ago [1],
but it has not been resolved yet.


[1] https://issues.apache.org/jira/browse/IGNITE-5565

On Wed, Dec 27, 2017 at 6:40 AM, vkulichenko 
wrote:

> Hi,
>
> You should specify the same version for all Ignite artifacts in your
> project. If you're using 2.3, ignite-schedule:2.3.0 should be used.
>
> Artifacts for this module are not deployed to Maven Central after 1.2 due to
> licensing restrictions (it uses LGPL dependencies). So you should either
> build from source, or try using 3rd party repositories [1].
>
> [1] https://ignite.apache.org/download.cgi#3rdparty
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Alexey Kuznetsov


Re: Reconnect after cluster shutdown fails

2017-12-26 Thread Wolfram Huesken

Hello Dmitry,

thank you very much, I'll try that!

Cheers
Wolfram

On 26/12/2017 22:50, dkarachentsev wrote:

Hi,

Discovery events are processed in a single thread, and cache creation uses
discovery custom messages. Trying to create cache in discovery thread will
lead to deadlock, because discovery thread will wait in your lambda instead
of processing messages.

To avoid it just start another thread in your listener.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: IgniteConfigurationFile does not load consistently

2017-12-26 Thread soodrah
Yes, you are right: it says the default instance is already started. Either way,
I am facing the following problem. On my Unix machine I have the binary
Ignite folder and I am trying to run an Ignite node there. To that end I
changed example-default.xml as follows, but the script throws the exception
below. All I did was add the cacheConfiguration element:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.CacheConfiguration">
                    <property name="queryEntities">
                        <list>
                            <bean class="org.apache.ignite.cache.QueryEntity">
                                <property name="keyType" value="i.model.LabelId"/>
                                <property name="valueType" value="i.model.Label"/>
                            </bean>
                        </list>
                    </property>
                </bean>
            </list>
        </property>

        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

Exception

org.apache.ignite.startup.cmdline.CommandLineStartup
examples/config/example-default.xml
class org.apache.ignite.IgniteException: Failed to find configuration in:
file:/home/username/apache-ignite-fabric-2.3.0-bin/examples/config/example-default.xml
at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:966)
at org.apache.ignite.Ignition.start(Ignition.java:350)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find
configuration in:
file:/home/username/apache-ignite-fabric-2.3.0-bin/examples/config/example-default.xml
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:116)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:673)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:874)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:783)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:653)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:622)
at org.apache.ignite.Ignition.start(Ignition.java:347)
... 1 more





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteConfigurationFile does not load consistently

2017-12-26 Thread vkulichenko
Ignition.start() never returns null; it either throws an exception or returns
a ready-to-use Ignite instance. Please check your code, and if you can't find
the reason, create a separate project that reproduces the issue and share
it with us somehow (e.g. via GitHub). That way we will be able to help you.
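
For reference, a minimal sketch of the expected contract; the config path and
cache name below are just examples:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class StartExample {
    public static void main(String[] args) {
        // Throws IgniteException on failure; never returns null.
        Ignite ignite = Ignition.start("examples/config/example-default.xml");

        // Safe to use immediately once start() returns.
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("test");
        cache.put(1, "ok");
    }
}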

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite-schedule latest version?

2017-12-26 Thread vkulichenko
Hi,

You should specify the same version for all Ignite artifacts in your
project. If you're using 2.3, ignite-schedule:2.3.0 should be used.

Artifacts for this module are not deployed to Maven central after 1.2 due to
licensing restrictions (it uses LGPL dependencies). So you should either
build from source, or try using 3rd party repositories [1].

[1] https://ignite.apache.org/download.cgi#3rdparty

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


ignite-schedule latest version?

2017-12-26 Thread NK
Hi, 

I am using ignite-core (v2.3) for a couple of features, and now I need to use
the cron-based scheduler (ignite.scheduler().scheduleLocal(...)). My code
(using scheduleLocal(...)) compiles fine against ignite-core.jar (v2.3), but at
runtime Ignite complains about needing an ignite-schedule jar on the
classpath.

I cannot find any ignite-schedule.jar v2.3; the latest version I found on
Maven for ignite-schedule is "1.2.0-incubating". My code works fine with
that version and does what it is supposed to, but I wanted to check whether
I was doing it right or missing something.

Is ignite-schedule:1.2.0-incubating the right version to use for
ignite.scheduler().scheduleLocal(...)?
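
For reference, here is a minimal sketch of how I invoke it (assuming the
ignite-schedule module is on the classpath):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.scheduler.SchedulerFuture;

public class CronExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        // Runs the closure on this node once a minute, per the cron pattern.
        SchedulerFuture<?> fut = ignite.scheduler().scheduleLocal(
            () -> System.out.println("tick"), "* * * * *");
    }
}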

Thanks,
NK



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"

2017-12-26 Thread soodrah
I figured this out... I still get the error, but it's not stopping Tomcat from
coming up.
I added the Spring Boot parent dependency to the POM:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.4.RELEASE</version>
</parent>

commented out all the hardcoded Spring dependencies, and added these to my
webapp module:

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-log4j</artifactId>
    <version>${ignite.version}</version>
</dependency>


Here is the error, which is now only a warning:

'ignite.cfg$child#0'
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
java.lang.ClassNotFoundException:
org.apache.ignite.logger.java.JavaLoggerFileHandler
java.lang.ClassNotFoundException:
org.apache.ignite.logger.java.JavaLoggerFileHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.util.logging.LogManager$5.run(LogManager.java:965)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.loadLoggerHandlers(LogManager.java:958)
at
java.util.logging.LogManager.initializeGlobalHandlers(LogManager.java:1578)
at java.util.logging.LogManager.access$1500(LogManager.java:145)
at
java.util.logging.LogManager$RootLogger.accessCheckedHandlers(LogManager.java:1667)
at java.util.logging.Logger.getHandlers(Logger.java:1777)
at
org.apache.ignite.logger.java.JavaLogger.findHandler(JavaLogger.java:399)
at 
org.apache.ignite.logger.java.JavaLogger.configure(JavaLogger.java:229)
at org.apache.ignite.logger.java.JavaLogger.(JavaLogger.java:170)
at org.apache.ignite.logger.java.JavaLogger.(JavaLogger.java:126)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initLogger(IgnitionEx.java:2390)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeConfiguration(IgnitionEx.java:2058)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1678)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1652)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1080)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:998)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:884)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:783)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:653)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:622)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at 
com.db.sl.ignite.IgniteNode.createAnIgniteInstance(IgniteNode.java:54)
at
com.db.sl.inventory.impl.IgniteConnection.connect(IgniteConnection.java:15)
at com.db.sl.inventory.InvServiceFactory.of(InvServiceFactory.java:26)
at
com.db.sl.inventory.config.SpringConfig.createInvService(SpringConfig.java:75)
at
com.db.sl.inventory.config.SpringConfig$$EnhancerBySpringCGLIB$$8436d1b8.CGLIB$createInvService$1()
at
com.db.sl.inventory.config.SpringConfig$$EnhancerBySpringCGLIB$$8436d1b8$$FastClassBySpringCGLIB$$79f5bf04.invoke()
at
org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at
org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358)
at
com.db.sl.inventory.config.SpringConfig$$EnhancerBySpringCGLIB$$8436d1b8.createInvService()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
at
org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1173)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1067)
at

Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread David Wimsey
Older versions of the JVM are not aware of the Linux cgroups used to limit
memory and CPUs for containers. They look at the host information and get
the wrong impression of their environment. Starting with 8u131 and in Java 9,
the JVM has been updated, assuming you enable the additional experimental
options (e.g. -XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap for the memory limit).

https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits

All JVM-based Docker images should use those options, but this is
especially important in Kubernetes, where memory and CPU limits are more
likely to be set (or even required in most clusters) than for some random
Docker container running on the desktop.


On Tue, Dec 26, 2017 at 11:39 AM, Stanislav Lukyanov  wrote:

> Hi Arseny,
>
>
>
> Both OpenJDK and Oracle JDK 8u151 should do.
>
> I’ve checked and it seems that it is indeed about the specific way of
> invoking Docker. By default it doesn’t restrict container to particular
> CPUs, but you can make it do that by using `--cpuset-cpus` flag.
>
>
>
> Example:
>
> Without cpuset (Docker host has 4 CPUs)
>
> >docker run -it java-docker bash
>
> root@8a2cd9d06695:/usr/test# java -version
>
> openjdk version "1.8.0_151"
>
> OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.
> 16.04.2-b12)
>
> OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
>
> root@8a2cd9d06695:/usr/test# jjs
>
> jjs> print(java.lang.Runtime.runtime.availableProcessors());
>
> 4
>
>
>
> With cpuset specifying the first two CPUs
>
> >docker run -it --cpuset-cpus=0,1 java-docker bash
>
> root@7c2723a9819e:/usr/test# jjs
>
> jjs> print(java.lang.Runtime.runtime.availableProcessors());
>
> 2
>
>
>
> Note also that by using cpuset you change the settings for the whole
> container, not just for the Ignite, so that other Java pools in the same
> JVM, e.g. parallel stream executor, should also work better.
>
>
>
> Unfortunately, I’m not familiar with Kubernetes, but the manual [1] says
> that you can enable the use of cpusets via a static CPU management policy.
>
>
>
> Hope that helps,
>
> Stan
>
>
>
> [1] https://kubernetes.io/docs/tasks/administer-cluster/cpu-
> management-policies/
>
>
>
> From: Arseny Kovalchuk
> Sent: 26 December 2017 17:37
> To: user@ignite.apache.org
> Cc: d...@ignite.apache.org
> Subject: Re: Runtime.availableProcessors() returns hardware's CPU count
> which is the issue with Ignite in Kubernetes
>
>
>
> Hi Stanislav.
>
>
> We use OpenJDK and use Alpine Linux based images. See java version below.
> In our environment availableProcessors returns CPU's for the host.
>
>
>
> Did you mean to try Oracle's JDK 8u151?
>
> arseny@kovalchuka-ubuntu:~/kipod-x$ ku exec ignite-instance-0 -ti bash
>
> bash-4.4# java -version
>
> openjdk version "1.8.0_151"
>
> OpenJDK Runtime Environment (IcedTea 3.6.0) (Alpine 8.151.12-r0)
>
> OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
>
> bash-4.4# jjs
>
> jjs> print(java.lang.Runtime.runtime.availableProcessors());
>
> 40
>
> jjs>
>
>
>
>
> ​
>
> Arseny Kovalchuk
>
>
>
> Senior Software Engineer at Synesis
>
> skype: arseny.kovalchuk
>
> mobile: +375 (29) 666-16-16 <+375%2029%20666-16-16>
>
> ​LinkedIn Profile ​
>
>
>
> On 26 December 2017 at 16:56, Yakov Zhdanov  wrote:
>
> Ilya, agree. I like IGNITE_AVAILABLE_CPU more.
>
>
> Yakov Zhdanov,
>
> www.gridgain.com
>
>
>
> 2017-12-26 16:36 GMT+03:00 Ilya Lantukh :
>
> Hi Yakov,
>
> I think that property IGNITE_NODES_PER_HOST, as you suggested, would be
> confusing, because users might want to reduce amount of available resources
> for ignite node not only because they run multiple nodes per host, but also
> because they run other software. Also, in my opinion all types of system
> resources (CPU, memory, network) shouldn't be scaled using the same value.
>
> So I'd prefer to have IGNITE_CONCURRENCY_LEVEL or
> IGNITE_AVAILABLE_PROCESSORS, as it was originally suggested.
>
>
>
> On Tue, Dec 26, 2017 at 4:05 PM, Yakov Zhdanov 
> wrote:
>
> Cross-posting to dev list.
>
> Guys,
>
> Suggestion below makes sense to me. Filed a ticket
> https://issues.apache.org/jira/browse/IGNITE-7310
>
> Perhaps, Arseny would like to provide a PR himself ;)
>
> --Yakov
>
>
> 2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk :
>
> > Hi guys.
> >
> > Ignite configures all thread pools, selectors, etc. basing on
> Runtime.availableProcessors()
> > which seems not correct in containerized environment. In Kubernetes with
> > Docker that method returns CPU count of a Node/machine, which is 64 in
> our
> > particular case. But those 64 CPU and their timings are shared between
> > other stuff on the node like other Pods and services. Appropriate value
> of
> > available cores for Pod is usually configured as CPU Resource and
> estimated
> > basing on 

Re: Memory leak in GridCachePartitionExchangeManager?

2017-12-26 Thread Michael Cherkasov
Hi,

I created a ticket for the issue you found:
https://issues.apache.org/jira/browse/IGNITE-7319

Thanks,
Mike.

2017-12-18 17:05 GMT+03:00 zbyszek :

>
> Dear all,
>
> I was wondering if this is a known issue which has a chance to be fixed in
> future or (I hope) it is me who missed something obvious in working with
> Ignite caches.
> I have a simple single-node test app (built to investigate a memory leak
> observed in our PROD deployment) that creates ca. 20 LOCAL caches per second
> with the config below:
>
> private IgniteCache createLocalCache(String name) {
>     CacheConfiguration cCfg = new CacheConfiguration<>();
>     cCfg.setName(name);
>     cCfg.setGroupName("localCaches"); // without a group the leak is much bigger!
>     cCfg.setStoreKeepBinary(true);
>     cCfg.setCacheMode(CacheMode.LOCAL);
>     cCfg.setOnheapCacheEnabled(false);
>     cCfg.setCopyOnRead(false);
>     cCfg.setBackups(0);
>     cCfg.setWriteBehindEnabled(false);
>     cCfg.setReadThrough(false);
>     cCfg.setReadFromBackup(false);
>     cCfg.setQueryEntities();
>     return ignite.createCache(cCfg).withKeepBinary();
> }
>
> The caches are placed in a queue and are picked up by a worker thread,
> which just destroys them after removing them from the queue.
> This setup seems to generate a memory leak of about 1 GB per day.
> When looking at heapdump, I see all space is occupied by instances of
> java.util.concurrent.ConcurrentSkipListMap$Node:
>
>
> Objects by class
>
> | Class                                                                                                  | Objects          | Shallow Size       | Retained Size            |
> | java.util.concurrent.ConcurrentSkipListMap$Node                                                        | 4,987,415  13 %  | 119,697,960  10 %  | ~ 1,204,893,605  100 %   |
> | org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$CancelableTask                      | 4,985,687  13 %  | 239,312,976  20 %  | ~   917,361,000   76 %   |
> | org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager$BackupCleaner | 4,985,680  13 %  | 119,656,320  10 %  | ~   558,390,752   46 %   |
> | org.jsr166.ConcurrentHashMap8                                                                          | 4,990,926  13 %  | 199,637,040  17 %  | ~   439,459,352   36 %   |
> | org.jsr166.LongAdder8                                                                                  | 4,992,416  13 %  | 159,757,312  13 %  | ~   159,757,312   13 %   |
> | org.apache.ignite.lang.IgniteUuid                                                                      | 4,989,306  13 %  | 119,743,344  10 %  | ~   119,745,456   10 %   |
> | java.util.concurrent.ConcurrentSkipListMap$Index                                                       | 2,488,987   7 %  |  59,735,688   5 %  | ~   119,502,384   10 %   |
> | java.util.concurrent.ConcurrentSkipListMap$HeadIndex                                                   |        49   0 %  |       1,568   0 %  | ~   106,991,832    9 %   |
> | org.jsr166.ConcurrentHashMap8$ValuesView                                                               | 4,985,368  13 %  |  79,765,888   7 %  | ~    79,765,888    7 %   |
> | java.util.HashMap$Node                                                                                 |    44,335   0 %  |   1,418,720   0 %  | ~    79,618,104    7 %   |
> | java.util.HashMap$Node[]                                                                               |    13,093   0 %  |   1,098,856   0 %  | ~    68,150,520    6 %   |
> | java.util.HashMap                                                                                      |    13,550   0 %  |     650,400   0 %  | ~    67,636,112    6 %   |
> | java.util.concurrent.ConcurrentSkipListMap                                                             |        10   0 %  |         480   0 %  | ~    59,830,768    5 %   |
>
>
> Merged paths to java.util.concurrent.ConcurrentSkipListMap$Node instances
> (first 5 levels) report no obvious dominator
> (at least no dominator from my test namespace):
>
>
> Merged paths
>
> | Name                                                       | Objects            | Retained Size          |
> | +--- (all ConcurrentSkipListMap$Node instances)            | 4,987,415  100 %   | 1,037,112,344  100 %   |
> |   +--- (objects without a dominator)                       | 1,245,699   25 %   | 1,037,015,992   99 %   |
> |     +--- java.util.concurrent.ConcurrentSkipListMap$Index  | ...                |                        |

Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2017-12-26 Thread Denis Magda
Cross-posting to the dev list.

Ignite persistence maintainers please chime in.

—
Denis

> On Dec 26, 2017, at 2:17 AM, Arseny Kovalchuk  
> wrote:
> 
> Hi guys.
> 
> Another issue when using Ignite 2.3 with native persistence enabled. See 
> details below.
> 
> We deploy Ignite along with our services in Kubernetes (v 1.8) on premises. 
> Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite version 
> 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD. 
> 
> We put about 230 events/second into Ignite, 70% of events are ~200KB in size 
> and 30% are 5000KB. Smaller events have indexed fields and we query them via 
> SQL.
> 
> The cluster is activated from a client node which also streams events into 
> Ignite from Kafka. We use custom implementation of streamer which uses 
> cache.putAll() API.
> 
> We started cluster from scratch without any persistent data. After a while we 
> got corrupted data with the error message.
> 
> [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%] 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
>  - Partition eviction failed, this can cause grid hang.
> class org.apache.ignite.IgniteException: Runtime failure on search row: 
> Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val: 
> ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419, 
> face_last_name=null, face_list_id=null, channel=171, source=, 
> face_similarity=null, license_plate_number=null, descriptors=null, 
> cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854, 
> stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854, 
> persistent=false, face_first_name=null, license_plate_first_name=null, 
> face_full_name=null, level=0, module=Kpx.Synesis.Outdoor, 
> end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0, human, 
> 0, truck, 0, start_time=1513946618964, processed=false, kafka_offset=111259, 
> license_plate_last_name=null, armed=false, license_plate_country=null, 
> topic=MovingObject, comment=, expiration=1514033024000, original_id=null, 
> license_plate_lists=null], ver: GridCacheVersion [topVer=125430590, 
> order=1513955001926, nodeOrder=3] ][ 3008806055072854, MovingObject, 
> Kpx.Synesis.Outdoor, 0, , 1513946618964, 1513946624379, 171, 171, FALSE, 
> FALSE, , FALSE, FALSE, 0, 0, 111259, 1514033024000, (vehicle, 0, human, 0, 
> truck, 0), null, null, null, null, null, null, null, null, null, null, null, 
> null ]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:216)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:496)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:423)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:580)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2334)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:461)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1453)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1416)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:1271)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
>   at 
> 

Re: Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"

2017-12-26 Thread soodrah
I am facing the same issue and have spent 2 days trying to understand it.
I would appreciate any help. Here is the synopsis:

1. sl-ignite-cache: dependencies below; it uses log4j2.xml.

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-slf4j</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>${log4j.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>${log4j.version}</version>
</dependency>


2. sl-inv-mgr-appsrv: this is the Spring Boot application. When I try to
start it I get the following error. This application has a dependency on
another module, i.e. sl-mr-common, which in turn depends on the ignite-cache
module from above. Here are the dependencies of appsrv:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
    <version>${spring.boot.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>${spring.boot.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <version>${spring.boot.version}</version>
</dependency>
<dependency>
    <groupId>com.db.sl</groupId>
    <artifactId>sl-inv-mgr-common-service</artifactId>
    <version>${sl-inv-mgr-common.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>3.0.5.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.springframework</groupId>
            <artifactId>spring-beans</artifactId>
        </exclusion>
    </exclusions>
</dependency>




Here is the stack trace. I tried using ignite-log4j, but that did not help.
As I said, I do have log4j2.xml in my resources folder.
2017-12-26 11:23:21.577  INFO 14992 --- [   main]
o.apache.catalina.core.StandardService   : Stopping service [Tomcat]
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
java.lang.ClassNotFoundException:
org.apache.ignite.logger.java.JavaLoggerFileHandler
java.lang.ClassNotFoundException:
org.apache.ignite.logger.java.JavaLoggerFileHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.util.logging.LogManager$5.run(LogManager.java:965)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.loadLoggerHandlers(LogManager.java:958)
at
java.util.logging.LogManager.initializeGlobalHandlers(LogManager.java:1578)
at java.util.logging.LogManager.access$1500(LogManager.java:145)
at
java.util.logging.LogManager$RootLogger.accessCheckedHandlers(LogManager.java:1667)
at java.util.logging.Logger.getHandlers(Logger.java:1777)
at
org.apache.ignite.logger.java.JavaLogger.findHandler(JavaLogger.java:399)
at 
org.apache.ignite.logger.java.JavaLogger.configure(JavaLogger.java:229)
at org.apache.ignite.logger.java.JavaLogger.(JavaLogger.java:170)
at org.apache.ignite.logger.java.JavaLogger.(JavaLogger.java:126)
at 
org.apache.ignite.IgniteJdbcDriver.(IgniteJdbcDriver.java:410)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at 
java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at java.sql.DriverManager$2.run(DriverManager.java:603)
at java.sql.DriverManager$2.run(DriverManager.java:583)
at java.security.AccessController.doPrivileged(Native Method)
at java.sql.DriverManager.loadInitialDrivers(DriverManager.java:583)
at java.sql.DriverManager.(DriverManager.java:101)
at
org.apache.catalina.loader.JdbcLeakPrevention.clearJdbcDriverRegistrations(JdbcLeakPrevention.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at

RE: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Stanislav Lukyanov
Hi Arseny,

Both OpenJDK and Oracle JDK 8u151 should do.
I’ve checked, and it seems that it is indeed about the specific way of invoking
Docker. By default it doesn’t restrict the container to particular CPUs, but you
can make it do so by using the `--cpuset-cpus` flag.

Example:
Without cpuset (Docker host has 4 CPUs)
>docker run -it java-docker bash
root@8a2cd9d06695:/usr/test# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
root@8a2cd9d06695:/usr/test# jjs
jjs> print(java.lang.Runtime.runtime.availableProcessors());
4

With cpuset specifying the first two CPUs
>docker run -it --cpuset-cpus=0,1 java-docker bash
root@7c2723a9819e:/usr/test# jjs
jjs> print(java.lang.Runtime.runtime.availableProcessors());
2

Note also that by using cpuset you change the settings for the whole container,
not just for Ignite, so other Java pools in the same JVM, e.g. the parallel
stream executor, should also behave better.

Unfortunately, I’m not familiar with Kubernetes, but the manual [1] says that 
you can enable the use of cpusets via a static CPU management policy.

Hope that helps,
Stan

[1] https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/

From: Arseny Kovalchuk
Sent: 26 December 2017 17:37
To: user@ignite.apache.org
Cc: d...@ignite.apache.org
Subject: Re: Runtime.availableProcessors() returns hardware's CPU count which is
the issue with Ignite in Kubernetes

Hi Stanislav.

We use OpenJDK on Alpine Linux based images. See the Java version below. In
our environment availableProcessors() returns the host's CPU count.

Did you mean to try Oracle's JDK 8u151?
arseny@kovalchuka-ubuntu:~/kipod-x$ ku exec ignite-instance-0 -ti bash
bash-4.4# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (IcedTea 3.6.0) (Alpine 8.151.12-r0)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
bash-4.4# jjs
jjs> print(java.lang.Runtime.runtime.availableProcessors());
40
jjs> 



Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

On 26 December 2017 at 16:56, Yakov Zhdanov  wrote:
Ilya, agree. I like IGNITE_AVAILABLE_CPU more.


Yakov Zhdanov,
www.gridgain.com

2017-12-26 16:36 GMT+03:00 Ilya Lantukh :
Hi Yakov,

I think that property IGNITE_NODES_PER_HOST, as you suggested, would be 
confusing, because users might want to reduce amount of available resources for 
ignite node not only because they run multiple nodes per host, but also because 
they run other software. Also, in my opinion all types of system resources 
(CPU, memory, network) shouldn't be scaled using the same value.

So I'd prefer to have IGNITE_CONCURRENCY_LEVEL or IGNITE_AVAILABLE_PROCESSORS, 
as it was originally suggested.

On Tue, Dec 26, 2017 at 4:05 PM, Yakov Zhdanov  wrote:
Cross-posting to dev list.

Guys,

Suggestion below makes sense to me. Filed a ticket
https://issues.apache.org/jira/browse/IGNITE-7310

Perhaps, Arseny would like to provide a PR himself ;)

--Yakov

2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk :

> Hi guys.
>
> Ignite configures all thread pools, selectors, etc. basing on 
> Runtime.availableProcessors()
> which seems not correct in containerized environment. In Kubernetes with
> Docker that method returns CPU count of a Node/machine, which is 64 in our
> particular case. But those 64 CPU and their timings are shared between
> other stuff on the node like other Pods and services. Appropriate value of
> available cores for Pod is usually configured as CPU Resource and estimated
> basing on different things taking performance into account. General idea,
> if you want to run several Pods on the same node, they all should request
> less resources then the node provides. So, we give 4-8 cores for Ignite
> instance in Kubernetes, but Ignite's thread pools are configured like they
> get all 64 CPUs, and in turn we get a lot of threads for the Pod with 4-8
> cores available.
>
> Now we manually set appropriate values for all available properties which
> relate to thread pools.
>
> Would it be correct to have one environment variable, say
> IGNITE_CONCURRENCY_LEVEL which will be used as a reference value for those
> configurations and by default equals to Runtime.availableProcessors()?
>
> Thanks.
>
> ​
> Arseny Kovalchuk
>
> Senior Software Engineer at Synesis
> skype: arseny.kovalchuk
> mobile: +375 (29) 666-16-16
> ​LinkedIn Profile ​
>



-- 
Best regards,
Ilya





Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-26 Thread Dmitry Karachentsev

Hi Nikolay,

I think it may be useful too. I will try to describe a possible API in a ticket.

Thanks!
-Dmitry
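
Purely as an illustration, one possible shape for such an API; all names here
are hypothetical, a sketch only, not a committed design:

import java.util.Collection;
import java.util.UUID;

// Hypothetical sketch of a "list and deregister routines" API; not part of Ignite.
public interface ContinuousRoutineRegistry {
    // Lists ids of continuous queries / listeners registered for the given cache.
    Collection<UUID> registeredRoutines(String cacheName);

    // Deregisters the routine with the given id.
    void deregister(UUID routineId);
}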

21.12.2017 13:18, Nikolay Izhikov wrote:

Hello, Dmitry.

I think it a great idea.

Do we have a feature to list all running ComputeTasks?

I personally think we should implement the ability to track all
user-provided tasks: CacheListener, ContinuousQuery, ComputeTasks,
etc.

On Thu, 21/12/2017 at 10:13 +0300, Dmitry Karachentsev wrote:

Crossposting to devlist.

Hi Igniters!

It might be a nice feature to have: get the list of registered
continuous queries with the ability to deregister them.

What do you think?

Thanks!
-Dmitry

20.12.2017 16:59, fefe пишет:

For sanity checks or tests. I want to be sure that I haven't forgot
to
deregister any listener.

Its also very important metric to see how many continuous
queries/listeners
are currently running.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/






Re: Ignite service method cannot invoke for third time

2017-12-26 Thread dkarachentsev
Hi,

Anonymous and inner classes hold a link to the outer class object and might
drag it into marshalling. When you make the class a static inner or a separate
class, you're explicitly saying that you don't need such links.

In thread dumps you need to look for waiting or blocked threads. In your
case, on the service node you may find that the service thread is waiting on
invoke():

"svc-#70" #102 prio=5 os_prio=0 tid=0x7fe820024800 nid=0x2c44 waiting on
condition [0x7fe7d51f4000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.invoke(GridDhtAtomicCache.java:785)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1338)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1320)
at
com.mediaiq.caps.platform.choreography.service.IgniteWorkflowServiceImpl.startWorkflow(IgniteWorkflowServiceImpl.java:165)
...

Cache operations are invoked on data nodes, so you may go to data node and
find:

"sys-stripe-5-#6" #15 prio=5 os_prio=0 tid=0x7fd96459b800 nid=0x29a7
waiting on condition [0x7fd94cf9c000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4512)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4493)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1326)
at
org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.get(GridCacheProxyImpl.java:329)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.getCollection(DataStructuresProcessor.java:1001)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.queue(DataStructuresProcessor.java:794)
at
org.apache.ignite.internal.processors.datastructures.GridCacheQueueProxy.readResolve(GridCacheQueueProxy.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readExternalizable(OptimizedObjectInputStream.java:549)
at
org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:917)
at
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:346)
at
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:199)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
...

Here OptimizedMarshaller tries to deserialize the EntryProcessor, but hangs on
deserializing GridCacheQueueProxy, a.k.a. IgniteQueue. Obviously you do not need
to marshal/unmarshal it, and the best solution here is to avoid its
serialization: remove it from the anonymous EntryProcessor's context, as
sketched below.
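
A minimal sketch of that refactoring; the class and key names are hypothetical,
not from the original code:

import javax.cache.processor.MutableEntry;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class WorkflowStart {
    // A static class holds no hidden link to an outer instance, so no
    // IgniteQueue (or other outer state) is dragged into marshalling.
    static class StartProcessor implements CacheEntryProcessor<String, String, Void> {
        @Override public Void process(MutableEntry<String, String> entry, Object... args) {
            entry.setValue("STARTED");
            return null;
        }
    }

    static void start(IgniteCache<String, String> cache) {
        cache.invoke("workflow-1", new StartProcessor());
    }
}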

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Reconnect after cluster shutdown fails

2017-12-26 Thread dkarachentsev
Hi,

Discovery events are processed in a single thread, and cache creation uses
discovery custom messages. Trying to create a cache in the discovery thread
will lead to a deadlock, because the discovery thread will wait in your
lambda instead of processing messages.

To avoid this, just start another thread in your listener, as sketched below.
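
A minimal sketch of that pattern; the event type and cache name are just
examples, and it assumes EVT_NODE_JOINED is enabled via
IgniteConfiguration.setIncludeEventTypes(...):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.EventType;

public class DiscoveryListenerExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        ignite.events().localListen(evt -> {
            // Do NOT call createCache() here: this runs in the discovery thread.
            new Thread(() -> ignite.getOrCreateCache("myCache")).start();
            return true; // keep the listener registered
        }, EventType.EVT_NODE_JOINED);
    }
}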

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Arseny Kovalchuk
Hi Stanislav.

We use OpenJDK on Alpine Linux based images. See the Java version below.
In our environment availableProcessors() returns the host's CPU count.

Did you mean to try Oracle's JDK 8u151?

arseny@kovalchuka-ubuntu:~/kipod-x$ ku exec ignite-instance-0 -ti bash
bash-4.4# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (IcedTea 3.6.0) (Alpine 8.151.12-r0)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
bash-4.4# jjs
jjs> print(java.lang.Runtime.runtime.availableProcessors());
40
jjs>


Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

On 26 December 2017 at 16:56, Yakov Zhdanov  wrote:

> Ilya, agree. I like IGNITE_AVAILABLE_CPU more.
>
> Yakov Zhdanov,
> www.gridgain.com
>
> 2017-12-26 16:36 GMT+03:00 Ilya Lantukh :
>
>> Hi Yakov,
>>
>> I think that property IGNITE_NODES_PER_HOST, as you suggested, would be
>> confusing, because users might want to reduce amount of available resources
>> for ignite node not only because they run multiple nodes per host, but also
>> because they run other software. Also, in my opinion all types of system
>> resources (CPU, memory, network) shouldn't be scaled using the same value.
>>
>> So I'd prefer to have IGNITE_CONCURRENCY_LEVEL or
>> IGNITE_AVAILABLE_PROCESSORS, as it was originally suggested.
>>
>> On Tue, Dec 26, 2017 at 4:05 PM, Yakov Zhdanov 
>> wrote:
>>
>>> Cross-posting to dev list.
>>>
>>> Guys,
>>>
>>> Suggestion below makes sense to me. Filed a ticket
>>> https://issues.apache.org/jira/browse/IGNITE-7310
>>>
>>> Perhaps, Arseny would like to provide a PR himself ;)
>>>
>>> --Yakov
>>>
>>> 2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk >> >:
>>>
>>> > Hi guys.
>>> >
>>> > Ignite configures all thread pools, selectors, etc. basing on
>>> Runtime.availableProcessors()
>>> > which seems not correct in containerized environment. In Kubernetes
>>> with
>>> > Docker that method returns CPU count of a Node/machine, which is 64 in
>>> our
>>> > particular case. But those 64 CPU and their timings are shared between
>>> > other stuff on the node like other Pods and services. Appropriate
>>> value of
>>> > available cores for Pod is usually configured as CPU Resource and
>>> estimated
>>> > basing on different things taking performance into account. General
>>> idea,
>>> > if you want to run several Pods on the same node, they all should
>>> request
>>> > less resources then the node provides. So, we give 4-8 cores for Ignite
>>> > instance in Kubernetes, but Ignite's thread pools are configured like
>>> they
>>> > get all 64 CPUs, and in turn we get a lot of threads for the Pod with
>>> 4-8
>>> > cores available.
>>> >
>>> > Now we manually set appropriate values for all available properties
>>> which
>>> > relate to thread pools.
>>> >
>>> > Would it be correct to have one environment variable, say
>>> > IGNITE_CONCURRENCY_LEVEL which will be used as a reference value for
>>> those
>>> > configurations and by default equals to Runtime.availableProcessors()?
>>> >
>>> > Thanks.
>>> >
>>> > ​
>>> > Arseny Kovalchuk
>>> >
>>> > Senior Software Engineer at Synesis
>>> > skype: arseny.kovalchuk
>>> > mobile: +375 (29) 666-16-16 <+375%2029%20666-16-16>
>>> > ​LinkedIn Profile ​
>>> >
>>>
>>
>>
>>
>> --
>> Best regards,
>> Ilya
>>
>
>


Re: "Failed to communicate with Ignite cluster" error when using JDBC Thin driver

2017-12-26 Thread dkarachentsev
Hi,

It's hard to say why this happens. I'm not familiar with MyBatis and don't
know whether it shares a JDBC connection between threads. It would be great
if you could provide a reproducible example that will help to debug the
issue.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteVisor showing the cache, but SQLLINE tables are not seen

2017-12-26 Thread slava.koptilin
Hello Naveen,

It seems that your type (Customer) has a reference to a type which cannot
be mapped to an SQL data type:

public class Customer {
    private SomeDataType field;
    ...
}

Unfortunately, the JDBC thin driver does not support such compound objects.
The list of data types available in Apache Ignite can be found here:
https://apacheignite-sql.readme.io/docs/data-types
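
A common workaround (a sketch, not your actual model) is to annotate only the
SQL-mappable fields and leave the compound field out of the SQL schema; the
cache must still register the type, e.g. via
CacheConfiguration.setIndexedTypes(Long.class, Customer.class):

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Customer {
    @QuerySqlField(index = true)
    private String name;        // visible to SQL / the thin driver

    @QuerySqlField
    private int age;            // visible to SQL / the thin driver

    private SomeDataType field; // stored with the object, but not an SQL column
}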

Thanks!





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error of start with multiple data regions

2017-12-26 Thread slava.koptilin
Hi,

I have reproduced the problem you described.
I will investigate this issue and file a JIRA ticket to track it.

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Yakov Zhdanov
Ilya, agree. I like IGNITE_AVAILABLE_CPU more.

Yakov Zhdanov,
www.gridgain.com

2017-12-26 16:36 GMT+03:00 Ilya Lantukh :

> Hi Yakov,
>
> I think that property IGNITE_NODES_PER_HOST, as you suggested, would be
> confusing, because users might want to reduce amount of available resources
> for ignite node not only because they run multiple nodes per host, but also
> because they run other software. Also, in my opinion all types of system
> resources (CPU, memory, network) shouldn't be scaled using the same value.
>
> So I'd prefer to have IGNITE_CONCURRENCY_LEVEL or
> IGNITE_AVAILABLE_PROCESSORS, as it was originally suggested.
>
> On Tue, Dec 26, 2017 at 4:05 PM, Yakov Zhdanov 
> wrote:
>
>> Cross-posting to dev list.
>>
>> Guys,
>>
>> Suggestion below makes sense to me. Filed a ticket
>> https://issues.apache.org/jira/browse/IGNITE-7310
>>
>> Perhaps, Arseny would like to provide a PR himself ;)
>>
>> --Yakov
>>
>> 2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk > >:
>>
>> > Hi guys.
>> >
>> > Ignite configures all thread pools, selectors, etc. basing on
>> Runtime.availableProcessors()
>> > which seems not correct in containerized environment. In Kubernetes with
>> > Docker that method returns CPU count of a Node/machine, which is 64 in
>> our
>> > particular case. But those 64 CPU and their timings are shared between
>> > other stuff on the node like other Pods and services. Appropriate value
>> of
>> > available cores for Pod is usually configured as CPU Resource and
>> estimated
>> > basing on different things taking performance into account. General
>> idea,
>> > if you want to run several Pods on the same node, they all should
>> request
>> > less resources then the node provides. So, we give 4-8 cores for Ignite
>> > instance in Kubernetes, but Ignite's thread pools are configured like
>> they
>> > get all 64 CPUs, and in turn we get a lot of threads for the Pod with
>> 4-8
>> > cores available.
>> >
>> > Now we manually set appropriate values for all available properties
>> which
>> > relate to thread pools.
>> >
>> > Would it be correct to have one environment variable, say
>> > IGNITE_CONCURRENCY_LEVEL which will be used as a reference value for
>> those
>> > configurations and by default equals to Runtime.availableProcessors()?
>> >
>> > Thanks.
>> >
>> > ​
>> > Arseny Kovalchuk
>> >
>> > Senior Software Engineer at Synesis
>> > skype: arseny.kovalchuk
>> > mobile: +375 (29) 666-16-16
>> > ​LinkedIn Profile ​
>> >
>>
>
>
>
> --
> Best regards,
> Ilya
>


Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Ilya Lantukh
Hi Yakov,

I think that the property IGNITE_NODES_PER_HOST, as you suggested, would be
confusing, because users might want to reduce the amount of resources available
to an Ignite node not only because they run multiple nodes per host, but also
because they run other software. Also, in my opinion, all types of system
resources (CPU, memory, network) shouldn't be scaled using the same value.

So I'd prefer to have IGNITE_CONCURRENCY_LEVEL or
IGNITE_AVAILABLE_PROCESSORS, as it was originally suggested.

On Tue, Dec 26, 2017 at 4:05 PM, Yakov Zhdanov  wrote:

> Cross-posting to dev list.
>
> Guys,
>
> Suggestion below makes sense to me. Filed a ticket
> https://issues.apache.org/jira/browse/IGNITE-7310
>
> Perhaps, Arseny would like to provide a PR himself ;)
>
> --Yakov
>
> 2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk :
>
> > Hi guys.
> >
> > Ignite configures all thread pools, selectors, etc. basing on
> Runtime.availableProcessors()
> > which seems not correct in containerized environment. In Kubernetes with
> > Docker that method returns CPU count of a Node/machine, which is 64 in
> our
> > particular case. But those 64 CPU and their timings are shared between
> > other stuff on the node like other Pods and services. Appropriate value
> of
> > available cores for Pod is usually configured as CPU Resource and
> estimated
> > basing on different things taking performance into account. General idea,
> > if you want to run several Pods on the same node, they all should request
> > less resources then the node provides. So, we give 4-8 cores for Ignite
> > instance in Kubernetes, but Ignite's thread pools are configured like
> they
> > get all 64 CPUs, and in turn we get a lot of threads for the Pod with 4-8
> > cores available.
> >
> > Now we manually set appropriate values for all available properties which
> > relate to thread pools.
> >
> > Would it be correct to have one environment variable, say
> > IGNITE_CONCURRENCY_LEVEL which will be used as a reference value for
> those
> > configurations and by default equals to Runtime.availableProcessors()?
> >
> > Thanks.
> >
> > ​
> > Arseny Kovalchuk
> >
> > Senior Software Engineer at Synesis
> > skype: arseny.kovalchuk
> > mobile: +375 (29) 666-16-16
> > ​LinkedIn Profile ​
> >
>



-- 
Best regards,
Ilya


RE: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Stanislav Lukyanov
Hi Arseny,

This behavior of `Runtime.availableProcessors()` is actually a recognized
HotSpot issue, see [1]. It was fixed not that long ago for JDK 9 and 8uX, and
I can see correct values returned in my Docker environment with JDK 8u151,
although I believe it depends on the specific way a container is configured.

Which Java version do you use? Can you try your code on JDK 8u151?

BTW see also [2] and [3] on more stuff to be fixed in JDK for better container 
support.

Thanks,
Stan

[1] https://bugs.openjdk.java.net/browse/JDK-6515172
[2] https://bugs.openjdk.java.net/browse/JDK-8146115
[3] https://bugs.openjdk.java.net/browse/JDK-8182070


From: Yakov Zhdanov
Sent: 26 December 2017 16:05
To: user@ignite.apache.org; d...@ignite.apache.org
Subject: Re: Runtime.availableProcessors() returns hardware's CPU count which is
the issue with Ignite in Kubernetes

Cross-posting to dev list.

Guys,

Suggestion below makes sense to me. Filed a ticket
https://issues.apache.org/jira/browse/IGNITE-7310

Perhaps, Arseny would like to provide a PR himself ;)

--Yakov

2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk :

> Hi guys.
>
> Ignite configures all thread pools, selectors, etc. basing on 
> Runtime.availableProcessors()
> which seems not correct in containerized environment. In Kubernetes with
> Docker that method returns CPU count of a Node/machine, which is 64 in our
> particular case. But those 64 CPU and their timings are shared between
> other stuff on the node like other Pods and services. Appropriate value of
> available cores for Pod is usually configured as CPU Resource and estimated
> basing on different things taking performance into account. General idea,
> if you want to run several Pods on the same node, they all should request
> less resources then the node provides. So, we give 4-8 cores for Ignite
> instance in Kubernetes, but Ignite's thread pools are configured like they
> get all 64 CPUs, and in turn we get a lot of threads for the Pod with 4-8
> cores available.
>
> Now we manually set appropriate values for all available properties which
> relate to thread pools.
>
> Would it be correct to have one environment variable, say
> IGNITE_CONCURRENCY_LEVEL which will be used as a reference value for those
> configurations and by default equals to Runtime.availableProcessors()?
>
> Thanks.
>
> ​
> Arseny Kovalchuk
>
> Senior Software Engineer at Synesis
> skype: arseny.kovalchuk
> mobile: +375 (29) 666-16-16
> ​LinkedIn Profile ​
>



Re: Error of start with multiple data regions

2017-12-26 Thread huangyuanqiang
CacheConfiguration cacheCfg = new CacheConfiguration();
cacheCfg.setName(CACHE_SERVER_INFO);
cacheCfg.setDataRegionName("memory");
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setCopyOnRead(false);
cacheCfg.setIndexedTypes(String.class, byte[].class);
this.server.getOrCreateCache(cacheCfg);



And

CacheConfiguration cacheCfg = new CacheConfiguration(COMMIT_LOG_PRE_NAME + groupid);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setIndexedTypes(Long.class, byte[].class);
this.server.getOrCreateCache(cacheCfg);
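
For reference, a minimal sketch of how the "memory" region referred to above
via setDataRegionName("memory") could be defined; the region size is an
assumption, not from my actual config:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionsExample {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDataRegionConfigurations(
            new DataRegionConfiguration()
                .setName("memory")                 // matched by cacheCfg.setDataRegionName("memory")
                .setMaxSize(512L * 1024 * 1024));  // 512 MB, assumed
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);
        Ignite server = Ignition.start(cfg);
    }
}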





huangyuanqiang
huangyuanqi...@elex-tech.com



> On 26 December 2017, at 20:43, slava.koptilin wrote:
> 
> Hello,
> 
> Data region configurations look good to me. I've just tried this config and
> it works.
> Could you provide more details? cache configurations etc
> 
> Thanks!
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Error of start with multiple data regions

2017-12-26 Thread slava.koptilin
Hello,

Data region configurations look good to me. I've just tried this config and
it works.
Could you provide more details? cache configurations etc

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2017-12-26 Thread Arseny Kovalchuk
Hi Andrey.

Thanks for the information. The issues look related to those we've got.
Looking forward to the fixes.

Regards.

Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

On 26 December 2017 at 14:49, Andrey Mashenkov 
wrote:

> Hi Arseny,
>
> Seems this is already fixed [1] in master, but seems there is another
> issue [2] and we are in the middle of fixing it.
> We've found there were some unsafe memory changing operations without lock.
>
>
> [1] https://issues.apache.org/jira/browse/IGNITE-6423
> [2] https://issues.apache.org/jira/browse/IGNITE-7278
>
> On Tue, Dec 26, 2017 at 1:02 PM, Arseny Kovalchuk <
> arseny.kovalc...@synesis.ru> wrote:
>
>> Hi guys.
>>
>> Another issue when using Ignite 2.3 with native persistence enabled. See
>> details below.
>>
>> We deploy Ignite along with our services in Kubernetes (v 1.8) on
>> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
>> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>>
>> We put about 230 events/second into Ignite, 70% of events are ~200KB in
>> size and 30% are 5000KB. Smaller events have indexed fields and we query
>> them via SQL.
>>
>> The cluster is activated from a client node which also streams events
>> into Ignite from Kafka. We use custom implementation of streamer which uses
>> cache.putAll() API.
>>
>> We started cluster from scratch without any persistent data. After a
>> while we got corrupted data with the error message.
>>
>> [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
>> - Partition eviction failed, this can cause grid hang.
>> class org.apache.ignite.IgniteException: Runtime failure on search row:
>> Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
>> ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419,
>> face_last_name=null, face_list_id=null, channel=171, source=,
>> face_similarity=null, license_plate_number=null, descriptors=null,
>> cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854,
>> stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854,
>> persistent=false, face_first_name=null, license_plate_first_name=null,
>> face_full_name=null, level=0, module=Kpx.Synesis.Outdoor,
>> end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0,
>> human, 0, truck, 0, start_time=1513946618964, processed=false,
>> kafka_offset=111259, license_plate_last_name=null, armed=false,
>> license_plate_country=null, topic=MovingObject, comment=,
>> expiration=1514033024000, original_id=null, license_plate_lists=null], ver:
>> GridCacheVersion [topVer=125430590, order=1513955001926, nodeOrder=3] ][
>> 3008806055072854, MovingObject, Kpx.Synesis.Outdoor, 0, , 1513946618964,
>> 1513946624379, 171, 171, FALSE, FALSE, , FALSE, FALSE, 0, 0, 111259,
>> 1514033024000, (vehicle, 0, human, 0, truck, 0), null, null, null, null,
>> null, null, null, null, null, null, null, null ]
>> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
>> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
>> at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:216)
>> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:496)
>> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:423)
>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:580)
>> at org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2334)
>> at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:461)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1453)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1416)
>> at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:1271)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374)
>> at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.

Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Arseny Kovalchuk
Thanks, Slava

Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

On 26 December 2017 at 15:13, slava.koptilin 
wrote:

> Hi Arseny,
>
> > Now we manually set appropriate values for all available properties which
> > relate to thread pools.
> For now, this is the only possible and correct way to set up thread pools in
> a virtualized/containerized environment.
>
> > Would it be correct to have one environment variable,
> > say IGNITE_CONCURRENCY_LEVEL, which would be used as a reference value
> > for those configurations and would default to
> > Runtime.availableProcessors()?
> Looks reasonable to me. I would suggest cross-posting this proposal to the
> Ignite dev list to see what the dev community thinks. (d...@ignite.apache.org)
>
> Best regards,
> Slava.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread slava.koptilin
Hi Arseny,

> Now we manually set appropriate values for all available properties which
> relate to thread pools.
For now, this is the only possible and correct way to set up thread pools in
a virtualized/containerized environment.

> Would it be correct to have one environment variable,
> say IGNITE_CONCURRENCY_LEVEL, which would be used as a reference value
> for those configurations and would default to
> Runtime.availableProcessors()?
Looks reasonable to me. I would suggest cross-posting this proposal to the
Ignite dev list to see what the dev community thinks. (d...@ignite.apache.org)

Best regards,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2017-12-26 Thread Andrey Mashenkov
Hi Arseny,

It seems this is already fixed [1] in master, but there seems to be another
issue [2], and we are in the middle of fixing it.
We've found there were some unsafe memory-modifying operations done without a
lock.


[1] https://issues.apache.org/jira/browse/IGNITE-6423
[2] https://issues.apache.org/jira/browse/IGNITE-7278

On Tue, Dec 26, 2017 at 1:02 PM, Arseny Kovalchuk <
arseny.kovalc...@synesis.ru> wrote:

> Hi guys.
>
> Another issue when using Ignite 2.3 with native persistence enabled. See
> details below.
>
> We deploy Ignite along with our services in Kubernetes (v 1.8) on
> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>
> We put about 230 events/second into Ignite, 70% of events are ~200KB in
> size and 30% are 5000KB. Smaller events have indexed fields and we query
> them via SQL.
>
> The cluster is activated from a client node, which also streams events into
> Ignite from Kafka. We use a custom streamer implementation that uses the
> cache.putAll() API.
>
> We started the cluster from scratch without any persistent data. After a
> while we got corrupted data with the following error message.
>
> [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
> - Partition eviction failed, this can cause grid hang.
> class org.apache.ignite.IgniteException: Runtime failure on search row:
> Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
> ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419,
> face_last_name=null, face_list_id=null, channel=171, source=,
> face_similarity=null, license_plate_number=null, descriptors=null,
> cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854,
> stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854,
> persistent=false, face_first_name=null, license_plate_first_name=null,
> face_full_name=null, level=0, module=Kpx.Synesis.Outdoor,
> end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0,
> human, 0, truck, 0, start_time=1513946618964, processed=false,
> kafka_offset=111259, license_plate_last_name=null, armed=false,
> license_plate_country=null, topic=MovingObject, comment=,
> expiration=1514033024000, original_id=null, license_plate_lists=null], ver:
> GridCacheVersion [topVer=125430590, order=1513955001926, nodeOrder=3] ][
> 3008806055072854, MovingObject, Kpx.Synesis.Outdoor, 0, , 1513946618964,
> 1513946624379, 171, 171, FALSE, FALSE, , FALSE, FALSE, 0, 0, 111259,
> 1514033024000, (vehicle, 0, human, 0, truck, 0), null, null, null, null,
> null, null, null, null, null, null, null, null ]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
> at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:216)
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:496)
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:423)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:580)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2334)
> at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:461)
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1453)
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1416)
> at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:1271)
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374)
> at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233)
> at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
> at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
> at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
> at org.apache.ignite.internal.util.IgniteUtils.

Re: Segmentation fault (JVM crash) while memory restoring on start with native persistance

2017-12-26 Thread Andrey Mashenkov
Hi Arseny,

This looks like a known issue that is not resolved yet [1],
but we can't be sure it is the same issue, as there is no stack trace in the
attached logs.


[1] https://issues.apache.org/jira/browse/IGNITE-7278

On Tue, Dec 26, 2017 at 12:54 PM, Arseny Kovalchuk <
arseny.kovalc...@synesis.ru> wrote:

> Hi guys.
>
> We've successfully tested Ignite as an in-memory solution, and it showed
> acceptable performance. But we cannot get an Ignite cluster with native
> persistence enabled to work stably. The first error we got is a segmentation
> fault (JVM crash) while restoring memory on start.
>
> [2017-12-22 11:11:51,992]  INFO [exchange-worker-#46%ignite-instance-0%] org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager: - Read checkpoint status [startMarker=/ignite-work-directory/db/ignite_instance_0/cp/1513938154201-8c574131-763d-4cfa-99b6-0ce0321d61ab-START.bin, endMarker=/ignite-work-directory/db/ignite_instance_0/cp/1513932413840-55ea1713-8e9e-44cd-b51a-fcad8fb94de1-END.bin]
> [2017-12-22 11:11:51,993]  INFO [exchange-worker-#46%ignite-instance-0%] org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager: - Checking memory state [lastValidPos=FileWALPointer [idx=391, fileOffset=220593830, len=19573, forceFlush=false], lastMarked=FileWALPointer [idx=394, fileOffset=38532201, len=19573, forceFlush=false], lastCheckpointId=8c574131-763d-4cfa-99b6-0ce0321d61ab]
> [2017-12-22 11:11:51,993]  WARN [exchange-worker-#46%ignite-instance-0%] org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager: - Ignite node stopped in the middle of checkpoint. Will restore memory state and finish checkpoint on node start.
> [CodeBlob (0x7f9b58f24110)]
> Framesize: 0
> BufferBlob (0x7f9b58f24110) used for StubRoutines (2)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  Internal Error (sharedRuntime.cpp:842), pid=221, tid=0x7f9b473c1ae8
> #  fatal error: exception happened outside interpreter, nmethods and
> vtable stubs at pc 0x7f9b58f248f6
> #
> # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build
> 1.8.0_151-b12)
> # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode linux-amd64
> compressed oops)
> # Derivative: IcedTea 3.6.0
> # Distribution: Custom build (Tue Nov 21 11:22:36 GMT 2017)
> # Core dump written. Default location: /opt/ignite/core or core.221
> #
> # An error report file with more information is saved as:
> # /ignite-work-directory/core_dump_221.log
> #
> # If you would like to submit a bug report, please include
> # instructions on how to reproduce the bug and visit:
> #   http://icedtea.classpath.org/bugzilla
> #
>
>
>
> Please find logs and configs attached.
>
> We deploy Ignite along with our services in Kubernetes (v 1.8) on
> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>
> We put about 230 events/second into Ignite, 70% of events are ~200KB in
> size and 30% are 5000KB. Smaller events have indexed fields and we query
> them via SQL.
>
> The cluster is activated from a client node, which also streams events into
> Ignite from Kafka. We use a custom streamer implementation that uses the
> cache.putAll() API.
>
> We got the error when we stopped and restarted the cluster. It happened on
> only one instance.
>
> The general question is:
>
> *Is it possible to tune (or extend) native persistence so that it simply
> reports an error about corrupted data, skips it, and continues to work
> without the corrupted part? That would let the cluster keep operating
> regardless of storage errors.*
>
>
> Arseny Kovalchuk
>
> Senior Software Engineer at Synesis
> skype: arseny.kovalchuk
> mobile: +375 (29) 666-16-16
> LinkedIn Profile
>



-- 
Best regards,
Andrey V. Mashenkov


Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Arseny Kovalchuk
Hi guys.

Ignite configures all thread pools, selectors, etc. based on
Runtime.availableProcessors(), which does not seem correct in a containerized
environment. In Kubernetes with Docker, that method returns the CPU count of
the node/machine, which is 64 in our particular case. But those 64 CPUs and
their time are shared with other stuff on the node, such as other Pods and
services. The appropriate number of available cores for a Pod is usually
configured as a CPU resource and estimated based on various considerations,
taking performance into account. The general idea is that if you want to run
several Pods on the same node, they should all request fewer resources than
the node provides. So we give 4-8 cores to an Ignite instance in Kubernetes,
but Ignite's thread pools are configured as if they had all 64 CPUs, and in
turn we get a lot of threads in a Pod that has only 4-8 cores available.

Now we manually set appropriate values for all available properties that
relate to thread pools.

Would it be correct to have one environment variable, say
IGNITE_CONCURRENCY_LEVEL, which would be used as a reference value for those
configurations and would default to Runtime.availableProcessors()?
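For illustration, a minimal sketch of this idea, assuming a hypothetical
IGNITE_CONCURRENCY_LEVEL environment variable (this is our proposal, not an
existing Ignite setting) and the standard IgniteConfiguration thread pool
setters:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConcurrencyAwareStartup {
    public static void main(String[] args) {
        // Hypothetical variable; falls back to the host-wide CPU count when unset.
        String env = System.getenv("IGNITE_CONCURRENCY_LEVEL");
        int concurrency = env != null
            ? Integer.parseInt(env)
            : Runtime.getRuntime().availableProcessors();

        IgniteConfiguration cfg = new IgniteConfiguration();

        // Size the main pools from a single reference value instead of
        // letting each of them default to the hardware CPU count.
        cfg.setPublicThreadPoolSize(concurrency);
        cfg.setSystemThreadPoolSize(concurrency);
        cfg.setStripedPoolSize(concurrency);
        cfg.setQueryThreadPoolSize(concurrency);
        cfg.setDataStreamerThreadPoolSize(concurrency);

        // The node keeps running with the pools sized above.
        Ignite ignite = Ignition.start(cfg);
    }
}

With such a helper, a Pod that is granted 8 cores can simply be started with
IGNITE_CONCURRENCY_LEVEL=8, regardless of how many CPUs the underlying node
has.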

Thanks.

Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2017-12-26 Thread Arseny Kovalchuk
Hi guys.

Here is another issue we hit when using Ignite 2.3 with native persistence
enabled. Details below.

We deploy Ignite along with our services in Kubernetes (v 1.8) on premises.
The Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite version
2.3. Each Pod mounts a PersistentVolume backed by CEPH RBD.

We put about 230 events/second into Ignite; 70% of the events are ~200KB in
size and 30% are ~5000KB. The smaller events have indexed fields, and we
query them via SQL.
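For context, a heavily trimmed, hypothetical sketch of what such an indexed
event class can look like; the field names are borrowed from the row dump
below, and the real KipodEvent class is not shown here:

import org.apache.ignite.cache.query.annotations.QuerySqlField;

// Illustration only: a cut-down event with a few SQL-visible fields.
public class KipodEvent {
    @QuerySqlField(index = true)
    private long id;

    @QuerySqlField(index = true)
    private int channel;

    @QuerySqlField(index = true)
    private long startTime;

    @QuerySqlField
    private String topic;
}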

The cluster is activated from a client node, which also streams events into
Ignite from Kafka. We use a custom streamer implementation that uses the
cache.putAll() API, along the lines of the sketch below.
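A minimal sketch of that streamer pattern, assuming the records have already
been read from Kafka and deserialized (the batch size and names are
illustrative, not our production code):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class PutAllStreamer {
    private static final int BATCH_SIZE = 500; // assumed; tune to payload size

    /** Drains key/value pairs (e.g. deserialized Kafka records) into a cache. */
    static <K, V> void stream(Ignite ignite, String cacheName,
        Iterator<Map.Entry<K, V>> records) {
        IgniteCache<K, V> cache = ignite.cache(cacheName);
        Map<K, V> batch = new HashMap<>();

        while (records.hasNext()) {
            Map.Entry<K, V> rec = records.next();
            batch.put(rec.getKey(), rec.getValue());

            // One putAll() round-trip per BATCH_SIZE entries instead of
            // one put() per entry.
            if (batch.size() >= BATCH_SIZE) {
                cache.putAll(batch);
                batch.clear();
            }
        }

        if (!batch.isEmpty())
            cache.putAll(batch); // flush the remainder
    }
}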

We started the cluster from scratch without any persistent data. After a
while we got corrupted data with the following error message.

[2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
- Partition eviction failed, this can cause grid hang.
class org.apache.ignite.IgniteException: Runtime failure on search row:
Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419,
face_last_name=null, face_list_id=null, channel=171, source=,
face_similarity=null, license_plate_number=null, descriptors=null,
cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854,
stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854,
persistent=false, face_first_name=null, license_plate_first_name=null,
face_full_name=null, level=0, module=Kpx.Synesis.Outdoor,
end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0,
human, 0, truck, 0, start_time=1513946618964, processed=false,
kafka_offset=111259, license_plate_last_name=null, armed=false,
license_plate_country=null, topic=MovingObject, comment=,
expiration=1514033024000, original_id=null, license_plate_lists=null], ver:
GridCacheVersion [topVer=125430590, order=1513955001926, nodeOrder=3] ][
3008806055072854, MovingObject, Kpx.Synesis.Outdoor, 0, , 1513946618964,
1513946624379, 171, 171, FALSE, FALSE, , FALSE, FALSE, 0, 0, 111259,
1514033024000, (vehicle, 0, human, 0, truck, 0), null, null, null, null,
null, null, null, null, null, null, null, null ]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:216)
at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:496)
at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:423)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:580)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2334)
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:461)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1453)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1416)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:1271)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted)
at

RE: Error of start with multiple data regions

2017-12-26 Thread Alexey Popov
Sorry, please ignore my response. :) I just misread your message.

Thank you,
Alexey

From: Alexey Popov
Sent: December 26, 2017 12:05
To: user@ignite.apache.org
Subject: Re: Error of start with multiple data regions

Hi,

Apache Ignite does not have this functionality out of the box, so the
community cannot help with your question.
You should ask this question directly to the company that provides the
multiple data regions solution.

Thank you,
Alexey





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Error of start with multiple data regions

2017-12-26 Thread Alexey Popov
Hi,

Apache Ignite does not have this functionality out of the box, so the
community cannot help with your question.
You should ask this question directly to the company that provides the
multiple data regions solution.

Thank you,
Alexey





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/