Re: Ignite grid stops after a few days of uptime

2019-01-30 Thread manish
Thanks Ilya.
Appreciate the quick response here



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


2.8 release date?

2019-01-30 Thread joseheitor
Hi Ignite Team,

When is version 2.8 expected to be released?

Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Default Cache template

2019-01-30 Thread mahesh76private
It works.

It isn't in your documentation though:
https://apacheignite.readme.io/docs/cache-template




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IGNITE-3180 : please add support for median, stddev, var in Ignite SQL

2019-01-30 Thread Ilya Kasnacheev
Hello!

As far as I understand, calculating a median across nodes is non-trivial:
you end up either pulling all the data to one place or using a complicated
iterative approach.

The same probably applies to stddev, though there may be shortcuts. Still,
it's not as easy as implementing a collocated algorithm.
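
As a stop-gap, variance and stddev can be derived on the client from SUM,
SUM of squares and COUNT, which are aggregates Ignite can combine across
nodes. A rough sketch (the table and column names are hypothetical, and this
is the population rather than sample variance):

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class StddevWorkaround {
    static double stddev(Ignite ignite, String cacheName) {
        // One row with three aggregates; each can be merged across nodes.
        List<?> row = ignite.cache(cacheName)
            .query(new SqlFieldsQuery(
                "SELECT SUM(val), SUM(val * val), COUNT(val) FROM MyTable"))
            .getAll().get(0);

        double sum = ((Number) row.get(0)).doubleValue();
        double sumSq = ((Number) row.get(1)).doubleValue();
        long cnt = ((Number) row.get(2)).longValue();

        double mean = sum / cnt;
        return Math.sqrt(sumSq / cnt - mean * mean);
    }
}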

Denis, should we perhaps delist the aggregate functions that aren't supported?

Regards,
-- 
Ilya Kasnacheev


ср, 30 янв. 2019 г. в 13:25, mahesh76private :

> The simple use case is as follows:
>
> In big data visualization, for drawing graphs, one needs to know the
> standard characteristics of a measure column (numeric column) to draw and
> plan out graphs.
>
>
> Ideally, we need this support in Ignite itself so that client code does
> not pull large columns (running into millions) to calculate metrics such
> as var, std, median, etc.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Tree is being concurrently destroyed: PendingEntries

2019-01-30 Thread Ilya Kasnacheev
Hello!

While I'm not completely sure, you could:
* Upgrade to 2.7 and hope that it has been fixed.
* Run more than one node in the cluster with enough backups so that the
cluster stays intact if one node fails (a minimal sketch is below).
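
A rough sketch of a cache configuration with one backup copy (the cache name
and value type are placeholders):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupCacheConfig {
    public static CacheConfiguration<Long, Object> cacheConfig() {
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setCacheMode(CacheMode.PARTITIONED);
        // One backup copy per partition keeps data available if a single node fails.
        ccfg.setBackups(1);

        return ccfg;
    }
}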

Regards,
-- 
Ilya Kasnacheev


пт, 25 янв. 2019 г. в 17:15, Karun Chand :

> Hi Weizhou,
>
> Your ignite.log file shows this error -
>
> [12:43:49,649][SEVERE][grid-nio-worker-tcp-comm-0-#33][TcpCommunicationSpi]
> Failed to process selector key [ses=GridSelectorNioSessionImpl
>
> Can you please provide more details about your query and definition?
>
> Regards,
> RH
>
> On Thu, Jan 24, 2019 at 11:16 PM Weizhou He 
> wrote:
>
>> Hi guys,
>>
>> I am using Ignite in cluster mode with SQL. Recently, I started creating and
>> dropping a table every 5 minutes. After a few hours, an error occurs and the
>> whole cluster seems to go down. The error log is shown below. Any idea what I
>> should do to avoid this error? Thanks a lot.
>>
>> *[13:55:09,678][SEVERE][ttl-cleanup-worker-#53][] Critical system error
>> detected. Will be handled accordingly to configured handler [hnd=class
>> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
>> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Tree
>> is being concurrently destroyed: PendingEntries]]*
>> *java.lang.IllegalStateException: Tree is being concurrently destroyed:
>> PendingEntries*
>> *at
>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.checkDestroyed(BPlusTree.java:945)*
>> *at
>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:955)*
>> *at
>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)*
>> *at
>> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1024)*
>> *at
>> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)*
>> *at
>> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)*
>> *at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)*
>> *at java.lang.Thread.run(Thread.java:748)*
>> *[13:55:09,679][SEVERE][ttl-cleanup-worker-#53][] JVM will be halted
>> immediately due to the failure: [failureCtx=FailureContext
>> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Tree
>> is being concurrently destroyed: PendingEntries]]*
>>
>


Re: ScanQuery for List<...>

2019-01-30 Thread Ilya Kasnacheev
Hello!

Can you create a simple reproducer project for this behavior? I could try
and debug it.
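
For example, a minimal, self-contained sketch along these lines would be
enough (Foo and the cache name are placeholders):

import java.util.Arrays;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanQueryListReproducer {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, List<Foo>> cache = ignite.getOrCreateCache("foos");

            cache.put(1, Arrays.asList(new Foo(1), new Foo(2)));

            // The filter should match the entry whose list contains an item with id == 2.
            int matched = cache.query(
                new ScanQuery<Integer, List<Foo>>(
                    (key, list) -> list.stream().anyMatch(f -> f.id == 2)))
                .getAll().size();

            System.out.println("Matched entries: " + matched);
        }
    }

    /** Placeholder value class. */
    static class Foo implements java.io.Serializable {
        final int id;

        Foo(int id) {
            this.id = id;
        }
    }
}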

Regards,
-- 
Ilya Kasnacheev


пн, 28 янв. 2019 г. в 14:19, AndrewV :

> Thanks a lot.
> I tried to create a CustomIgniteBiPredicate, but inside the "apply" method
> the List<Foo> is always empty.
>
> *Example with an empty list in the filter predicate:*
> public boolean apply(Integer key, List<Foo> list) {
>     // Here the list is always empty
>     for (Foo item : list) {
>         if (item.id == 2)
>             return true;
>     }
>     return false;
> }
>
> *But actually the cache contains data:*
> iCache.iterator().forEachRemaining(data -> {
>     List<Foo> list = data.getValue(); // Here the list is fetched correctly
> });
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Problem with Ignite OSGI in Karaf: self-depency in ignite-osgi bundle

2019-01-30 Thread Ilya Kasnacheev
Hello!

OSGi/Karaf don't see that much use, so bugs are certainly possible. If
you think there is a bug, you should file an issue in our JIRA and
propose a fix, ideally with a test case that highlights the issue.

Regards,
-- 
Ilya Kasnacheev


пт, 25 янв. 2019 г. в 18:33, Oleksii Mohylin :

> Hello
>
>
>
> I’ve got a problem using Ignite 2.7.0 in Apache Karaf. I believe the problem
> is caused by strange bundle metadata in ignite-osgi: it exports the package
> org.apache.ignite.osgi.classloaders and imports it at the same time. Here’s
> an extract from MANIFEST.MF:
>
>
>
> Import-Package: org.apache.ignite;version="[2.7,3)",org.apache.ignite.
>  configuration;version="[2.7,3)",org.apache.ignite.internal.util;versi
>  on="[2.7,3)",org.apache.ignite.internal.util.tostring;version="[2.7,3
>  )",org.apache.ignite.internal.util.typedef.internal;version="[2.7,3)"
>  ,org.apache.ignite.osgi.classloaders,org.osgi.framework;version="[1.7
>  ,2)"
> Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.8))"
> Fragment-Host: org.apache.ignite.ignite-core
> Export-Package: org.apache.ignite.osgi.classloaders;uses:="org.apache.
>  ignite.internal.util.tostring,org.osgi.framework";version="2.7.0",org
>  .apache.ignite.osgi;uses:="org.apache.ignite,org.apache.ignite.config
>  uration,org.apache.ignite.osgi.classloaders,org.osgi.framework";versi
>  on="2.7.0"
>
>
>
> I have no problem with the initial installation of my application into Karaf,
> but after I restart Karaf the ignite-osgi dependency is not resolved, and
> I see this exception in the log:
>
>
>
> org.osgi.framework.BundleException: Unable to resolve graphql-core [399](R
> 399.0): missing requirement [graphql-core [399](R 399.0)]
> osgi.wiring.package;
> (&(osgi.wiring.package=org.apache.ignite.osgi.classloaders)(version>=2.7.0)(!(version>=3.0.0)))
> [caused by: Unable to resolve org.apache.ignite.ignite-osgi [432](R 432.0):
> missing requirement [org.apache.ignite.ignite-osgi [432](R 432.0)]
> osgi.wiring.host;
> (&(osgi.wiring.host=org.apache.ignite.ignite-core)(bundle-version>=0.0.0))]
> Unresolved requirements: [[graphql-core [399](R 399.0)]
> osgi.wiring.package;
> (&(osgi.wiring.package=org.apache.ignite.osgi.classloaders)(version>=2.7.0)(!(version>=3.0.0)))]
>
>
>
> I made a set of simple bundles which model this situation (Bundle A
> imports packages from Bundle B and Bundle C, where Bundle C is a fragment of
> Bundle B and Bundle C has a similar setup of exports and imports) and
> confirmed that removing the self-import solves the problem.
>
>
>
> Then I hacked into the ignite-osgi JAR, manually edited MANIFEST.MF and
> removed the problematic import of org.apache.ignite.osgi.classloaders. The
> issue with the missing ignite-osgi dependency did not reproduce with the
> hacked bundle installed in Karaf instead of the bundle from Maven.
>
>
>
> This self-import looks like a bug to me. However, I can see that the
> import was introduced in version 2.4.0, which was released quite a long time
> ago, and that makes me think the import that is causing my problem might have
> been added on purpose rather than being a bug.
>
>
>
> Any ideas whether it’s a bug or not?
>
>
>


Re: Performance problems while running SQL query involving JOINS and ORDER BY eventually leading to heap OOME.

2019-01-30 Thread Ilya Kasnacheev
Hello!

Frankly speaking, I got lost in your verbal description of the tables and
indexes. Can you please provide the actual cache configurations or CREATE TABLE
statements (obfuscated if need be)?

Otherwise, my guess is that the query planner expects few rows to remain after
the WHERE clause, so it considers the index used in the WHERE clause (or in the
JOIN) more important than one on the sort column.
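
For instance, obfuscated DDL along these lines (all table, column and index
names are placeholders) would be enough to look at the plan:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainRepro {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Any cache works as an entry point for running DDL and EXPLAIN.
            IgniteCache<Object, Object> sql = ignite.getOrCreateCache("dummy");

            sql.query(new SqlFieldsQuery(
                "CREATE TABLE CACHE1 (ID LONG PRIMARY KEY, JOINCOL VARCHAR, COL1 VARCHAR, COL2 VARCHAR)")).getAll();
            sql.query(new SqlFieldsQuery(
                "CREATE TABLE CACHE2 (ID LONG PRIMARY KEY, JOINCOL VARCHAR)")).getAll();
            sql.query(new SqlFieldsQuery(
                "CREATE INDEX C1_COL1_COL2_IDX ON CACHE1 (COL1, COL2)")).getAll();

            // The plan shows which index serves the WHERE, JOIN and ORDER BY parts.
            sql.query(new SqlFieldsQuery(
                "EXPLAIN SELECT * FROM CACHE1 C1, CACHE2 C2 WHERE C1.JOINCOL = C2.JOINCOL " +
                "AND C1.COL1 = 'someValue' ORDER BY C1.COL2"))
                .getAll()
                .forEach(row -> System.out.println(row.get(0)));
        }
    }
}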

Regards,
-- 
Ilya Kasnacheev


чт, 24 янв. 2019 г. в 18:10, gourav10041996 :

> To simplify our use-case, we created two caches using the SQL query and
> loaded data consisting of about 4 million records and 60k records
> approximately, in the respective caches with INDEX created on all the
> columns. Ignite is set up to run on a single node, meaning all the data is
> present on the same node. The query used for testing / the one we are facing
> an issue with is of the form:
>
> SELECT * FROM CACHE1 C1, CACHE2 C2  WHERE  C1.JOINCol = C2.JOINCol AND
> C1.COL1 = 'someValue' ORDER BY C1.COL2
>
> Executing the above query causes the Ignite thread memory to rise
> extensively, eventually leading to a heap OOME. When the heap memory was
> increased to about 14 GB, we were able to get the results back, but the
> processing time of the query was too long, about 2-4 minutes (with 2 CPUs).
>
> We ran an EXPLAIN for the above query and found that the index on COL1 was
> used for the C1 cache and the index on JOINCol for the C2 cache. No index was
> used for the sorted column. We think the problem of 'slow querying and huge
> heap memory requirement' is caused by the absence of an index on the sorted
> column. Whenever there is a condition in the WHERE clause (in our example
> C1.COL1 = 'someValue'), Ignite uses an index for that column and no index is
> used for the ORDER BY column.
>
> And for our use case, it is imperative that we have a condition in the WHERE
> clause (to filter the data) and a join condition, apart from the ORDER BY
> clause.
>
> We tried a composite (multi-column) index on COL1, COL2, as per our use case.
>
> With the composite index ordered as (COL1, COL2), only COL1 was used.
>
> With the composite index ordered as (COL2, COL1), both COL1 and COL2 were
> used and the results were index-sorted. (But only in the absence of a
> separate index on COL1 does Ignite look at the ORDER BY column and use the
> composite index.) However, if we don't have a separate index on COL1, that
> again poses a problem, as COL1 is heavily used for filtering in all other
> queries, so an index on COL1 is necessary.
>
> To summarize, when there is a condition in the WHERE clause, Ignite uses the
> index on the WHERE clause column, so no index is used for the sort column.
> This results in severely degraded query performance, which can eventually
> bring our system down.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL Query | Cache | Partition - strange exception after 3 days in production

2019-01-30 Thread Ilya Kasnacheev
Hello!

It's hard to say what is happening here. Are you sure that this query didn't
genuinely time out? Can you provide full logs from the affected nodes?

Regards,
-- 
Ilya Kasnacheev


пн, 28 янв. 2019 г. в 14:57, Aat :

> Hello,
>
> After 3 days in production, when we try to execute this query we get an
> exception:
>
> var query =
> new SqlFieldsQuery("select Perimeter, sum(delta) from farVe" +
> " where Perimeter in('A','B')" +
> " and arDate='2019-01-25'" +
> " and UndlName='FTSE' GROUP BY Perimeter");
>
> This query worked well until this morning.
>
> Now in the app logs I have:
>  javax.cache.CacheException: Failed to execute map query on remote node
> [nodeId=673edfe7-aec7-4d1f-b476-3d4e0ef3ee98, errMsg=Failed to execute SQL
> query. General error: "class
> org.apache.ignite.binary.BinaryObjectException:
> Not enough data to read the value [position=0, requiredBytes=1,
> remainingBytes=0]"; SQL statement:
> SELECT
> __Z0.PERIMETER AS __C0_0,
> SUM(__Z0.DELTA) AS __C0_1
> FROM "farVe".FARVE __Z0
> WHERE (__Z0.UNDLNAME = 'FTS') AND ((__Z0.PERIMETER IN('A', 'B')) AND
> (__Z0.ARDATE = DATE '2019-01-25'))
> GROUP BY __Z0.PERIMETER [5-197]]
>
> And from a random node I get this message:
>
> [12:34:45,961][SEVERE][query-#24551][GridMapQueryExecutor] Failed to
> execute
> local query.
>   85601 class org.apache.ignite.cache.query.QueryCancelledException: The
> query was cancelled while executing.
>   85602 at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1426)
>
>   85603 at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1489)
>
>   85604 at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:930)
>
>   85605 at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:705)
>
>   85606 at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onMessage(GridMapQueryExecutor.java:240)
>
>   85607 at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor$2.onMessage(GridMapQueryExecutor.java:200)
>
>   85608 at
> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:2349)
>
>   85609 at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>
>   85610 at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>
>   85611 at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>
>   85612 at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
>
>   85613 at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
>   85614 at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>
>   85615 at java.lang.Thread.run(Thread.java:748)
>
> Infra:
> --  5 nodes
> -- version : 2.7
>
>
> cache definition:
> createCache("farVe", new CacheConfig FarVE>().init().setIndexedTypes(Long.class, FarVE.class));
>
>  // Specify cache mode and/or any other Ignite-specific configuration
> properties.
> setCacheMode(CacheMode.PARTITIONED);
>
> setStoreByValue(false)
> .setWriteThrough(false)
> .setReadThrough(false)
>
> .setBackups(1)
> .setWriteSynchronizationMode(FULL_SYNC)
>
> .setStatisticsEnabled(true)
> .setManagementEnabled(true);
>
>
> java class:
>
> @Data
> public class FarVE implements Serializable {
> @QuerySqlField(index = true)
> private LocalDate arDate;
>
> @QuerySqlField
> private Double delta;
>
> @QuerySqlField(index = true)
> private String perimeter;
>
> }
>
> Sorry if this error has already been raised, but I searched and did not find
> an answer.
>
> Aat,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite cache: Serializable vs Externalizable

2019-01-30 Thread Ilya Kasnacheev
Hello!

Why not Binarylizable? Or just avoid having either one.
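
For reference, a rough sketch of a Binarylizable value class (the class and
field names are just an illustration):

import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

public class Person implements Binarylizable {
    private int id;
    private String name;

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        // Fields are written by name into Ignite's binary format.
        writer.writeInt("id", id);
        writer.writeString("name", name);
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        id = reader.readInt("id");
        name = reader.readString("name");
    }
}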

Regards,
-- 
Ilya Kasnacheev


чт, 24 янв. 2019 г. в 14:42, ashishb888 :

> Which is better: Serializable or Externalizable?
> I have found Externalizable to be slower than Serializable. Can somebody
> help me with this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite grid stops after a few days of uptime

2019-01-30 Thread Ilya Kasnacheev
Hello!

This is a known issue https://issues.apache.org/jira/browse/IGNITE-7476

The recommendation is to either avoid destroying caches :) or upgrade to
Ignite 2.6.

Regards,
-- 
Ilya Kasnacheev


ср, 30 янв. 2019 г. в 07:47, manish :

> After our cluster has been up for 2-3 days, the grid on one of the two nodes
> stops without proper details.
> In logs I could see the below NPE.
>
> /o.a.i.s.d.tcp.TcpDiscoverySpi - TcpDiscoverSpi's message worker thread
> failed abnormally. Stopping the node in order to prevent cluster wide
> instability.
> java.lang.NullPointerException: null
>   at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$7.cacheMetrics(GridDiscoveryManager.java:1150)
>   at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMetricsUpdateMessage(ServerImpl.java:5077)
>   at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2647)
>   at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2447)
>   at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6648)
>   at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2533)
>   at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)/
>
>
> Can someone please provide some input on what is going wrong?
> We are using Ignite version 2.3.0, and the only change we made recently
> was to enable statistics on the cache and fetch the metrics from it.
>
> IgniteCache cache =
> ignite.cache(dictionary.getCacheName());
> metrics = cache.metrics();
>
> Thanks in advance
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Default Cache template

2019-01-30 Thread Вячеслав Коптилин
Hello,

You have to add '*' to the cache name as follows:

<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <!-- The trailing '*' marks this configuration as a template. -->
            <property name="name" value="SQLTABLE_TEMPLATE*"/>
        </bean>
    </list>
</property>
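
A minimal usage sketch, assuming the template above is registered (the table
definition itself is hypothetical); note that CREATE TABLE refers to the
template name without the trailing '*':

import org.apache.ignite.cache.query.SqlFieldsQuery;

// 'cache' is any existing IgniteCache instance, used here only as an entry point for DDL.
cache.query(new SqlFieldsQuery(
    "CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR) " +
    "WITH \"template=SQLTABLE_TEMPLATE\"")).getAll();
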
Thanks,
S.

вт, 29 янв. 2019 г. в 11:07, mahesh76private :

> Hi,
>
> I added the below to the node config.xml file. However, SQL table creation
> from the client side keeps complaining that the "SQLTABLE_TEMPLATE" template
> is not found.
>
>
> <property name="cacheConfiguration">
>     <list>
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             <property name="name" value="SQLTABLE_TEMPLATE"/>
>         </bean>
>     </list>
> </property>
>
> The only way this works is from Java code, when I use
> Ignite.addCacheConfiguration() and register the template with the
> cluster.
>
> What I need is to set the template in the node config XML and have it
> automatically register the template, with no need to set it
> explicitly.
>
> Please let me know if I am doing something wrong.
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IGNITE-3180 : please add support for median, stddev, var in Ignite SQL

2019-01-30 Thread mahesh76private
The simple use case is as follows:

In big data visualization, for drawing graphs, one needs to know the
standard characteristics of a measure column (numeric column) to draw and
plan out graphs. 


Ideally, we need this support in Ignite itself so that client code does
not pull large columns (running into millions) to calculate metrics such as
var, std, median, etc.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IGNITE-3180 : please add support for median, stddev, var in Ignite SQL

2019-01-30 Thread Stephen Darlington
The weird thing about this is that the documentation says they do exist:
https://apacheignite-sql.readme.io/docs/aggregate-functions

(They don’t.)

At the very least we need to update the documentation.

Regards,
Stephen

> On 30 Jan 2019, at 09:38, mahesh76private  wrote:
> 
> Looks like these are not supported currently; please add them in Ignite 2.8.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Ignite and dynamic linking

2019-01-30 Thread Igor Sapego
You should not call it before FreeLibrary(). Try calling it after.

Best Regards,
Igor


On Tue, Jan 29, 2019 at 5:02 PM F.D.  wrote:

> Hi Igor,
>
> thanks for your reply, I've added this code:
>
> Snippet
>
> void Ignition::DestroyJVM()
> {
>factoryLock.Enter();
>
>JniErrorInfo jniErr;
>
>    SharedPointer<JniContext> ctx(JniContext::Create(0, 0, JniHandlers(), &jniErr));
>
>IgniteError err;
>IgniteError::SetError(jniErr.code, jniErr.errCls, jniErr.errMsg, err);
>
>if(err.GetCode() == IgniteError::IGNITE_SUCCESS)
>   ctx.Get()->DestroyJvm();
>
>factoryLock.Leave();
> }
>
> And I call it before FreeLibrary(). Now when I call start, I get an
> unknown error. Any ideas?
>
> Thanks,
>F.D.
>
>
> On Mon, Jan 28, 2019 at 5:08 PM Igor Sapego  wrote:
>
>> Hi,
>>
>> Currently, Ignite creates a JVM instance internally on start, but
>> never stops it. Also, it currently cannot work with an already started
>> JVM.
>>
>> So when you start Ignite the first time, it loads the JVM; when you stop
>> and unload it, the JVM remains loaded in the process memory. When
>> you start Ignite again, it discovers that the JVM is already loaded, and
>> as it cannot work with a pre-loaded JVM, it just returns the error.
>>
>> To solve the issue, the following ticket should be implemented [1], but
>> currently it is not. As a workaround you may try to call
>> JNI_DestroyJavaVM() after you have unloaded Ignite; I'm not sure
>> of the result though. This is simply not a use case we have tested.
>>
>> [1] - https://issues.apache.org/jira/browse/IGNITE-4618
>>
>> Best Regards,
>> Igor
>>
>>
>> On Mon, Jan 28, 2019 at 3:49 PM F.D.  wrote:
>>
>>> Hi Igniters,
>>> I'm trying to use Ignite in a DLL (using C++) that is dynamically
>>> loaded. I wrapped the start/end/... methods behind a pure "C" interface that
>>> I export.
>>>
>>> It works quite well. I can call LoadLibrary and start an Ignite node.
>>> I can stop it and restart it again smoothly.
>>>
>>> The problem is when I call LoadLibrary and then FreeLibrary (up to
>>> here it works), but when I try to LoadLibrary again and start the
>>> node, I get the error: Failed to initialize JVM* [errCls=, errMsg=JVM
>>> already created.]*
>>>
>>> Do you have any idea why I get this error?
>>>
>>> Thanks,
>>>F.D.
>>>
>>


IGNITE-3180 : please add support for median, stddev, var in Ignite SQL

2019-01-30 Thread mahesh76private
Looks like these are not supported currently; please add them in Ignite 2.8.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: The cluster with persistence enable is stuck for 60 seconds when restarting a node. Is this normal?

2019-01-30 Thread Sergey Antonov
Hi!

"After I reduced the checkpointFrequency, the block time decrease to 10
seconds."
Good news!

"But I do not understand how the checkpointFrequency impacts the partition
map exchange, can someone explain it?"
Applying changes from the WAL since the last checkpoint (i.e. recovery) is part
of the PME process. So the more frequent the checkpoints, the fewer changes have
to be applied from the WAL. You can find more information about checkpointing
here [1].

[1] https://apacheignite.readme.io/docs/persistence-checkpointing
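
For reference, checkpoint frequency is set on the data storage configuration.
A minimal sketch (the 30-second value is purely illustrative, not a
recommendation):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CheckpointTuning {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // More frequent checkpoints mean fewer WAL changes to replay during PME.
        storageCfg.setCheckpointFrequency(30_000L);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}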

ср, 30 янв. 2019 г. в 11:48, Justin Ji :

> Thanks, Sergey!
>
> From the email that he sent to me (I do not know why the email doesn't
> display in this post), I know that the default checkpointFrequency (180000 ms)
> was too long, so it impacts the partition map exchange. After I reduced the
> checkpointFrequency, the block time decreased to 10 seconds.
>
> But I do not understand how the checkpointFrequency impacts the partition
> map exchange, can someone explain it?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
BR, Sergey Antonov


Re: The cluster with persistence enable is stuck for 60 seconds when restarting a node. Is this normal?

2019-01-30 Thread Justin Ji
Thanks, Sergey!

From the email that he sent to me (I do not know why the email doesn't
display in this post), I know that the default checkpointFrequency (180000 ms)
was too long, so it impacts the partition map exchange. After I reduced the
checkpointFrequency, the block time decreased to 10 seconds.

But I do not understand how the checkpointFrequency impacts the partition
map exchange, can someone explain it?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite 2.7.0 and Hadoop Accelerator

2019-01-30 Thread Petr Ivanov
Hi!

The Hadoop Accelerator binary (!) was dropped in the 2.7.0 release.
However, the code still exists and you can build it manually (until 3.0, I guess).


> On 29 Jan 2019, at 21:00, Sergio Hernández Martínez  
> wrote:
> 
> Hello Everybody,
> 
> After looking at the download page, I have one question.
> 
> On the download page there are binaries for Apache Ignite 2.7.0, but I don't see
> Hadoop Accelerator binaries for the 2.7.0 version.
> 
> Is IGFS for Hadoop deprecated?
> 
> Thank you very much!