Re: Data regions on client nodes

2018-07-20 Thread Valentin Kulichenko
Actually, I would go even further: only allocate a data region on a node
when the first cache assigned to this region is deployed on that node.
The issue is broader than client nodes and local caches: one can have
server nodes without any caches as well, running only services, for
example.

-Val

On Fri, Jul 20, 2018 at 6:30 PM Dmitriy Setrakyan 
wrote:

> Val, thanks for pointing this out.
>
> I would actually not allocate any off-heap memory on the client side unless
> we see LOCAL caches in the configuration. This is such a rare case that we
> can ignore it altogether.
>
> D.
>
> On Fri, Jul 20, 2018 at 3:59 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Folks,
> >
> > Currently we do not create any regions or allocate any offheap memory on
> > client nodes unless it's explicitly configured. This is good behavior;
> > however, there is a usability issue caused by the fact that many users
> > have the same config file for both servers and clients. This can lead to
> > unexpected excessive memory usage on the client side and forces users to
> > maintain two config files in most cases.
> >
> > At the same time, the only case when offheap memory can be required on a
> > client node is using LOCAL caches there, which is a very rare use case.
> >
> > Having said that, is it possible to allocate memory on a client node
> > dynamically ONLY if a local cache is created there? This would fix the
> > usability issue without limiting the use of local caches on the client side.
> >
> > -Val
> >
>


Re: Data regions on client nodes

2018-07-20 Thread Dmitriy Setrakyan
Val, thanks for pointing this out.

I would actually not allocate any off-heap memory on the client side unless
we see LOCAL caches in the configuration. This is such a rare case that we
can ignore it altogether.

D.

On Fri, Jul 20, 2018 at 3:59 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Folks,
>
> Currently we do not create any regions or allocate any offheap memory on
> client nodes unless it's explicitly configured. This is good behavior;
> however, there is a usability issue caused by the fact that many users have
> the same config file for both servers and clients. This can lead to
> unexpected excessive memory usage on the client side and forces users to
> maintain two config files in most cases.
>
> At the same time, the only case when offheap memory can be required on a
> client node is using LOCAL caches there, which is a very rare use case.
>
> Having said that, is it possible to allocate memory on a client node
> dynamically ONLY if a local cache is created there? This would fix the
> usability issue without limiting the use of local caches on the client side.
>
> -Val
>


Re: Apache Ignite 2.7: scope, time and release manager

2018-07-20 Thread Pavel Petroshenko
Hi Denis, Nikolay,

The proposed 2.7 release timing sounds reasonable to me.
Python [1], PHP [2], and Node.js [3] thin clients should take the train.

p.

[1] https://jira.apache.org/jira/browse/IGNITE-7782
[2] https://jira.apache.org/jira/browse/IGNITE-7783
[3] https://jira.apache.org/jira/browse/IGNITE-


On Fri, Jul 20, 2018 at 2:35 PM, Denis Magda  wrote:

> Igniters,
>
> Let's agree on the time and the scope of 2.7. As for the release manager,
> we had a conversation with Nikolay Izhikov and he decided to try the role
> out. Thanks, Nikolay!
>
> Nikolay, we need to prepare a page like that [1] once the release terms are
> defined.
>
> I propose that we roll out Ignite 2.7 at the end of September. Folks who are
> working on SQL, core, C++/.NET, thin clients, ML, service grid
> optimizations, and data structures, please list what you're ready to deliver.
>
>
> [1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.6
>


Apache Ignite 2.7: scope, time and release manager

2018-07-20 Thread Denis Magda
Igniters,

Let's agree on the time and the scope of 2.7. As for the release manager,
we had a conversation with Nikolay Izhikov and he decided to try the role
out. Thanks, Nikolay!

Nikolay, we need to prepare a page like that [1] once the release terms are
defined.

I propose that we roll out Ignite 2.7 at the end of September. Folks who are
working on SQL, core, C++/.NET, thin clients, ML, service grid
optimizations, and data structures, please list what you're ready to deliver.


[1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.6


Data regions on client nodes

2018-07-20 Thread Valentin Kulichenko
Folks,

Currently we do not create any regions or allocate any offheap memory on
client nodes unless it's explicitly configured. This is good behavior;
however, there is a usability issue caused by the fact that many users have
the same config file for both servers and clients. This can lead to
unexpected excessive memory usage on the client side and forces users to
maintain two config files in most cases.

At the same time, the only case when offheap memory can be required on a
client node is using LOCAL caches there, which is a very rare use case.

Having said that, is it possible to allocate memory on a client node
dynamically ONLY if a local cache is created there? This would fix the
usability issue without limiting the use of local caches on the client side.

-Val
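
[Editor's note: the shared-config situation described above typically looks like the
following Spring XML fragment. This is an illustrative sketch, not a config from the
thread; the region name and size are made up. When both servers and clients start
from the same file, the client would also reserve the configured off-heap region.]

```xml
<!-- Sketch of a data region in a shared config (illustrative values). -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region"/>
                <!-- 4 GB max region size; a client node starting from this same
                     file would reserve this off-heap memory as well. -->
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```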


[jira] [Created] (IGNITE-9046) Actualize dependency versions for Cassandra Cache Store

2018-07-20 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-9046:
--

 Summary: Actualize dependency versions for Cassandra Cache Store
 Key: IGNITE-9046
 URL: https://issues.apache.org/jira/browse/IGNITE-9046
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitriy Pavlov
 Fix For: 2.7


It is suggested:
A. To update the commons-beanutils version. This can be done via the property 
commons-beanutils.version in pom.xml:
change 1.9.2 to the latest version, currently 1.9.3
http://central.maven.org/maven2/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.pom
 

B. To update the Netty Project netty.version (currently 4.0.33.Final):
upgrade at least to 4.0.37.Final or later, or to 4.1.1.Final or later.

It is required to run RunAll to check that all tests pass, and to check the full 
build locally using build.sh.

It may also be required to run the release step to make sure a release candidate 
can be prepared.
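
[Editor's note: the two version bumps above would look roughly like this in the
parent pom.xml. A sketch only; the property names follow the ticket text, and the
exact names in the real build files may differ.]

```xml
<properties>
    <!-- A: bump commons-beanutils from 1.9.2 to 1.9.3 -->
    <commons-beanutils.version>1.9.3</commons-beanutils.version>
    <!-- B: bump Netty from 4.0.33.Final to at least 4.0.37.Final -->
    <netty.version>4.0.37.Final</netty.version>
</properties>
```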



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Spark DataFrame Partition Ordering Issue

2018-07-20 Thread Nikolay Izhikov
Hello, Stuart.

I will investigate this issue and return to you in a couple of days.

Fri, Jul 20, 2018, 17:59 Stuart Macdonald :

> Ignite Dev Community,
>
> I’m working with the Ignite 2.4+ Spark SQL DataFrame functionality and
> have run into what I believe to be a bug where Spark partition information
> is incorrect for non-trivial sizes of Ignite clusters.
>
> The partition array returned to Spark via
> org.apache.ignite.spark.impl.calcPartitions() needs to be in the order of
> the Spark partition numbers, but the function doesn’t make that guarantee
> and consistently fails for anything but very small Ignite clusters. Without
> the correct partition sequencing, Spark will throw errors such as:
>
> java.lang.IllegalArgumentException: requirement failed:
> partitions(0).partition == 3, but it should equal 0
> at scala.Predef$.require(Predef.scala:224)
> at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2$$anonfun$apply$3.apply(RDD.scala:255)
> at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2$$anonfun$apply$3.apply(RDD.scala:254)
> at
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:254)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
> at
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
> at
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
> at
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
> at
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
> at org.apache.spark.rdd.RDD.count(RDD.scala:1162)
> at
> org.apache.ignite.spark.IgniteSQLDataFrameSpec$$anonfun$1$$anonfun$apply$mcV$sp$11.apply$mcV$sp(IgniteSQLDataFrameSpec.scala:145)
>
> I’ve forked and committed a change which demonstrates this by increasing
> the number of servers in the Spark tests from 3 to 4, which causes the
> IgniteSQLDataFrameSpec test to start failing per the above. This commit also
> demonstrates the fix, which is to sort the Ignite node map before
> zipping:
>
>
> https://github.com/stuartmacd/ignite/commit/c9e7294c71de9e7b2bddfae671605a71260b80b3
>
> Can anyone help confirm this behaviour? Happy to create a jira and pull
> request for the proposed change.
>
> I believe this might also be related to another earlier report:
> http://apache-ignite-users.70518.x6.nabble.com/Getting-an-exception-when-listing-partitions-of-IgniteDataFrame-td22434.html
>
> Thanks,
> Stuart.
>
>


[jira] [Created] (IGNITE-9045) TxRecord is logged to WAL during node stop procedure

2018-07-20 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-9045:
---

 Summary: TxRecord is logged to WAL during node stop procedure
 Key: IGNITE-9045
 URL: https://issues.apache.org/jira/browse/IGNITE-9045
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Sergey Chugunov
 Fix For: 2.7


When the *IGNITE_WAL_LOG_TX_RECORDS* flag is set to true, special TxRecords are 
logged to the WAL on transaction state changes.

It turned out that during node stop, transaction futures (e.g. 
GridDhtTxPrepareFuture) change the transaction state, which is logged to the WAL.

This situation may violate transactional consistency and should be fixed: no 
writes to the WAL should be issued during node stop.
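
[Editor's note: a minimal, Ignite-free sketch of the invariant the ticket asks for.
All names here (WalStopGuardDemo, logTxRecord, nodeStopping) are hypothetical
illustrations, not the real Ignite API; the actual fix lives in the transaction futures.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class WalStopGuardDemo {
    // Hypothetical in-memory WAL; the real one is Ignite's write-ahead log.
    static final List<String> wal = new ArrayList<>();

    // Flag flipped when the node stop procedure begins.
    static final AtomicBoolean nodeStopping = new AtomicBoolean(false);

    // Invariant from the ticket: no WAL writes once node stop has begun.
    static boolean logTxRecord(String txState) {
        if (nodeStopping.get())
            return false; // suppress the write instead of violating consistency

        wal.add(txState);
        return true;
    }

    public static void main(String[] args) {
        logTxRecord("PREPARED");    // logged normally
        nodeStopping.set(true);     // node stop procedure begins
        logTxRecord("ROLLED_BACK"); // suppressed: no WAL write during stop
        System.out.println(wal);    // [PREPARED]
    }
}
```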



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Code duplicates in ssh tests

2018-07-20 Thread Dmitry Pavlov
Ok, I agree here that we can remove one test. Feel free to create an issue
and a PR if nobody else minds. Let us wait at least until Mon, Jul 23 before
merging.

Fri, Jul 20, 2018 at 17:57, Ivan Fedotov :

> Hi, Dmitry.
>
> I thought about the order of elements, but if we go deeper into the
> ignite.cluster().stopNodes() method, we can see that in IgniteClusterImpl
> [1] all node IDs will be collected into a HashSet in the forNodeIds method [2].
>
> So I think that in this case it's not important what we use initially, a HashSet
> or an ArrayList.
>
> [1]
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/cluster/IgniteClusterImpl.java#L250
> [2]
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/cluster/ClusterGroupAdapter.java#L454
>
>
> 2018-07-20 16:52 GMT+03:00 Dmitry Pavlov :
>
> > Hi Ivan,
> >
> > I can suppose that it is related to the order of elements. Is it reasonable to
> > keep 2 tests with 1 common method? One test would call this method with a
> > HashSet, and the other with an ArrayList.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > Fri, Jul 20, 2018 at 15:01, Ivan Fedotov :
> >
> > > Hi, Igniters!
> > >
> > > I’m working on the ssh module and found some code duplicates in
> > > IgniteProjectionStartStopRestartSelfTest.
> > >
> > > 1. Tests testRestartNodesByIds and testRestartNodesByIdsC fully
> > > duplicate each other [1]. I tried to find what differences they should
> > > have and looked at similar tests: testStopNodesByIds and testStopNodesByIdsC
> > > [2]. It relates to the second point.
> > >
> > > 2. The only difference is that in testStopNodesByIds we stop nodes by
> > > passing a HashSet of IDs and in testStopNodesByIdsC we stop them by passing
> > > an ArrayList of IDs. In my opinion it does not matter, because the stopNodes
> > > methods take a Collection as an argument and we can pass both a HashSet
> > > and an ArrayList to them. So I think that the code in these tests also
> > > duplicates each other.
> > >
> > > What do you think? Can we remove one of these tests in both cases?
> > >
> > >
> > > [1]
> > >
> > > https://github.com/apache/ignite/blob/master/modules/
> > ssh/src/test/java/org/apache/ignite/internal/
> > IgniteProjectionStartStopRestartSelfTest.java#L878
> > >
> > > [2]
> > >
> > > https://github.com/apache/ignite/blob/master/modules/
> > ssh/src/test/java/org/apache/ignite/internal/
> > IgniteProjectionStartStopRestartSelfTest.java#L659
> > >
> > >
> > > --
> > > Ivan Fedotov.
> > >
> > > ivanan...@gmail.com
> > >
> >
>
>
>
> --
> Ivan Fedotov.
>
> ivanan...@gmail.com
>


[GitHub] ignite pull request #4390: IGNITE-9039 AssertionError on cache stop

2018-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4390


---


Spark DataFrame Partition Ordering Issue

2018-07-20 Thread Stuart Macdonald
Ignite Dev Community,  

I’m working with the Ignite 2.4+ Spark SQL DataFrame functionality and have run 
into what I believe to be a bug where Spark partition information is incorrect 
for non-trivial sizes of Ignite clusters.  

The partition array returned to Spark via 
org.apache.ignite.spark.impl.calcPartitions() needs to be in the order of the 
Spark partition numbers, but the function doesn’t make that guarantee and 
consistently fails for anything but very small Ignite clusters. Without the 
correct partition sequencing, Spark will throw errors such as:

java.lang.IllegalArgumentException: requirement failed: partitions(0).partition 
== 3, but it should equal 0
at scala.Predef$.require(Predef.scala:224)
at 
org.apache.spark.rdd.RDD$$anonfun$partitions$2$$anonfun$apply$3.apply(RDD.scala:255)
at 
org.apache.spark.rdd.RDD$$anonfun$partitions$2$$anonfun$apply$3.apply(RDD.scala:254)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:254)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
at org.apache.spark.rdd.RDD.count(RDD.scala:1162)
at 
org.apache.ignite.spark.IgniteSQLDataFrameSpec$$anonfun$1$$anonfun$apply$mcV$sp$11.apply$mcV$sp(IgniteSQLDataFrameSpec.scala:145)

I’ve forked and committed a change which demonstrates this by increasing the 
number of servers in the Spark tests from 3 to 4, which causes the 
IgniteSQLDataFrameSpec test to start failing per the above. This commit also 
demonstrates the fix, which is to sort the Ignite node map before 
zipping:

https://github.com/stuartmacd/ignite/commit/c9e7294c71de9e7b2bddfae671605a71260b80b3

Can anyone help confirm this behaviour? Happy to create a jira and pull request 
for the proposed change.

I believe this might also be related to another earlier report: 
http://apache-ignite-users.70518.x6.nabble.com/Getting-an-exception-when-listing-partitions-of-IgniteDataFrame-td22434.html

Thanks,
Stuart.
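
[Editor's note: the check Spark performs (visible at RDD.scala:255 in the stack
trace) is that the i-th element of the partition array reports partition index i.
A minimal sketch of that invariant and of the sort-based fix; IgnitePartition is a
hypothetical stand-in for the real Spark Partition type.]

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PartitionOrderDemo {
    // Hypothetical stand-in for org.apache.spark.Partition.
    record IgnitePartition(int index) {}

    // Spark's requirement: parts.get(i).index() == i for all i.
    static boolean isValidOrder(List<IgnitePartition> parts) {
        for (int i = 0; i < parts.size(); i++)
            if (parts.get(i).index() != i)
                return false;
        return true;
    }

    // The proposed fix: sort by partition index before handing the array to Spark.
    static List<IgnitePartition> sortByIndex(List<IgnitePartition> parts) {
        List<IgnitePartition> sorted = new ArrayList<>(parts);
        sorted.sort(Comparator.comparingInt(IgnitePartition::index));
        return sorted;
    }

    public static void main(String[] args) {
        // An out-of-order result, as calcPartitions() can produce on larger clusters.
        List<IgnitePartition> parts = List.of(
            new IgnitePartition(3), new IgnitePartition(0),
            new IgnitePartition(1), new IgnitePartition(2));

        System.out.println(isValidOrder(parts));              // false
        System.out.println(isValidOrder(sortByIndex(parts))); // true
    }
}
```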



Re: Code duplicates in ssh tests

2018-07-20 Thread Ivan Fedotov
Hi, Dmitry.

I thought about the order of elements, but if we go deeper into the
ignite.cluster().stopNodes() method, we can see that in IgniteClusterImpl
[1] all node IDs will be collected into a HashSet in the forNodeIds method [2].

So I think that in this case it's not important what we use initially, a HashSet
or an ArrayList.

[1]
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/cluster/IgniteClusterImpl.java#L250
[2]
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/cluster/ClusterGroupAdapter.java#L454


2018-07-20 16:52 GMT+03:00 Dmitry Pavlov :

> Hi Ivan,
>
> I can suppose that it is related to the order of elements. Is it reasonable to
> keep 2 tests with 1 common method? One test would call this method with a
> HashSet, and the other with an ArrayList.
>
> Sincerely,
> Dmitriy Pavlov
>
> Fri, Jul 20, 2018 at 15:01, Ivan Fedotov :
>
> > Hi, Igniters!
> >
> > I’m working on the ssh module and found some code duplicates in
> > IgniteProjectionStartStopRestartSelfTest.
> >
> > 1. Tests testRestartNodesByIds and testRestartNodesByIdsC fully
> > duplicate each other [1]. I tried to find what differences they should
> > have and looked at similar tests: testStopNodesByIds and testStopNodesByIdsC
> > [2]. It relates to the second point.
> >
> > 2. The only difference is that in testStopNodesByIds we stop nodes by
> > passing a HashSet of IDs and in testStopNodesByIdsC we stop them by passing
> > an ArrayList of IDs. In my opinion it does not matter, because the stopNodes
> > methods take a Collection as an argument and we can pass both a HashSet
> > and an ArrayList to them. So I think that the code in these tests also
> > duplicates each other.
> >
> > What do you think? Can we remove one of these tests in both cases?
> >
> >
> > [1]
> >
> > https://github.com/apache/ignite/blob/master/modules/
> ssh/src/test/java/org/apache/ignite/internal/
> IgniteProjectionStartStopRestartSelfTest.java#L878
> >
> > [2]
> >
> > https://github.com/apache/ignite/blob/master/modules/
> ssh/src/test/java/org/apache/ignite/internal/
> IgniteProjectionStartStopRestartSelfTest.java#L659
> >
> >
> > --
> > Ivan Fedotov.
> >
> > ivanan...@gmail.com
> >
>



-- 
Ivan Fedotov.

ivanan...@gmail.com


[GitHub] ignite pull request #4395: IGNITE-9040 new FailureHandler for node segmentat...

2018-07-20 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/4395

IGNITE-9040 new FailureHandler for node segmentation special case, test for 
the root cause error



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9040

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4395.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4395


commit 1a76eab2912f7b8925c051c76763e87511d4
Author: Sergey Chugunov 
Date:   2018-07-20T14:27:05Z

IGNITE-9040 new FailureHandler for node segmentation special case, test for 
the root cause error




---


[GitHub] ignite pull request #4394: test

2018-07-20 Thread voipp
GitHub user voipp opened a pull request:

https://github.com/apache/ignite/pull/4394

test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/voipp/ignite IGNITE-999

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4394


commit 4584c2745245bc5cf3d62e4a082177a88135b1fe
Author: voipp 
Date:   2018-07-20T14:08:53Z

test




---


[jira] [Created] (IGNITE-9044) Update scala dependency version in Apache Ignite

2018-07-20 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-9044:
--

 Summary: Update scala dependency version in Apache Ignite
 Key: IGNITE-9044
 URL: https://issues.apache.org/jira/browse/IGNITE-9044
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitriy Pavlov
 Fix For: 2.7



*ignite-scalar*
scala.library.version=2.11.8, needs to be at least 2.11.12 or newer.

*ignite-scalar_2.10*
scala210.library.version=2.10.6, needs to be at least 2.10.7, probably newer.

*visor 2.10*
scala210.jline.version=2.10.4, needs to be at least 2.10.7, probably newer.

The impact would probably be wider.

We need to at least run RunAll and a local build.sh, and optionally the release 
candidate step on TC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Code duplicates in ssh tests

2018-07-20 Thread Dmitry Pavlov
Hi Ivan,

I can suppose that it is related to the order of elements. Is it reasonable to
keep 2 tests with 1 common method? One test would call this method with a
HashSet, and the other with an ArrayList.

Sincerely,
Dmitriy Pavlov

Fri, Jul 20, 2018 at 15:01, Ivan Fedotov :

> Hi, Igniters!
>
> I’m working on the ssh module and found some code duplicates in
> IgniteProjectionStartStopRestartSelfTest.
>
> 1. Tests testRestartNodesByIds and testRestartNodesByIdsC fully
> duplicate each other [1]. I tried to find what differences they should have
> and looked at similar tests: testStopNodesByIds and testStopNodesByIdsC
> [2]. It relates to the second point.
>
> 2. The only difference is that in testStopNodesByIds we stop nodes by
> passing a HashSet of IDs and in testStopNodesByIdsC we stop them by passing
> an ArrayList of IDs. In my opinion it does not matter, because the stopNodes
> methods take a Collection as an argument and we can pass both a HashSet and
> an ArrayList to them. So I think that the code in these tests also duplicates
> each other.
>
> What do you think? Can we remove one of these tests in both cases?
>
>
> [1]
>
> https://github.com/apache/ignite/blob/master/modules/ssh/src/test/java/org/apache/ignite/internal/IgniteProjectionStartStopRestartSelfTest.java#L878
>
> [2]
>
> https://github.com/apache/ignite/blob/master/modules/ssh/src/test/java/org/apache/ignite/internal/IgniteProjectionStartStopRestartSelfTest.java#L659
>
>
> --
> Ivan Fedotov.
>
> ivanan...@gmail.com
>


[jira] [Created] (IGNITE-9043) Map field is registered as Object in BinaryType

2018-07-20 Thread Denis Mekhanikov (JIRA)
Denis Mekhanikov created IGNITE-9043:


 Summary: Map field is registered as Object in BinaryType
 Key: IGNITE-9043
 URL: https://issues.apache.org/jira/browse/IGNITE-9043
 Project: Ignite
  Issue Type: Bug
  Components: binary
Affects Versions: 2.6
Reporter: Denis Mekhanikov
Assignee: Denis Mekhanikov
 Fix For: 2.7


When a binary type is registered during the first insertion without use of 
BinaryObject, fields of type {{Map}} are registered as {{Object}}.

This leads to inconvenience in further usage of this type over the {{BinaryObject}} 
interface.

The following code results in an exception:
{code:java}
public static void main(String[] args) {
    Ignite ignite = Ignition.start("config/ignite.xml");
    IgniteCache<Integer, ExamplePojo> cache = ignite.getOrCreateCache("cache");

    cache.put(1, new ExamplePojo());

    BinaryObject val = cache.<Integer, BinaryObject>withKeepBinary().get(1);

    Map<Integer, String> map = val.field("map");
    map.put(1, "1");

    BinaryObjectBuilder bldr = val.toBuilder();
    bldr.setField("map", map);

    bldr.build(); // Throws exception.
}

static class ExamplePojo {
    Map<Integer, String> map = new HashMap<>();
}
{code}
Stacktrace:
{noformat}
Exception in thread "main" class 
org.apache.ignite.binary.BinaryObjectException: Wrong value has been set 
[typeName=binary.BinaryObjectMapExample$ExamplePojo, fieldName=map, 
fieldType=Object, assignedValueType=Map]
at 
org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.checkMetadata(BinaryObjectBuilderImpl.java:428)
at 
org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:223)
at 
org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.build(BinaryObjectBuilderImpl.java:183)
at binary.BinaryObjectMapExample.main(BinaryObjectMapExample.java:26)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4393: IGNITE-8866 Retries to upload class

2018-07-20 Thread ezagumennov
GitHub user ezagumennov opened a pull request:

https://github.com/apache/ignite/pull/4393

IGNITE-8866 Retries to upload class



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8866

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4393


commit 40dc0cca5e010b8e75d15b6eab8e8978423ec4d4
Author: ezagumennov 
Date:   2018-07-20T12:36:33Z

IGNITE-8866 Retries to upload class




---


[GitHub] ignite pull request #4392: IGNITE-9041 AssertionError in TcpCommunicationSpi

2018-07-20 Thread SpiderRus
GitHub user SpiderRus opened a pull request:

https://github.com/apache/ignite/pull/4392

IGNITE-9041 AssertionError in TcpCommunicationSpi



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9041

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4392


commit e0af6ae9246225bdf4c179cca48ccd12a249f67f
Author: Alexey Stelmak 
Date:   2018-07-20T12:10:32Z

IGNITE-9041




---


[jira] [Created] (IGNITE-9042) Transaction with small timeout may lead to inconsistent partition state

2018-07-20 Thread Dmitriy Govorukhin (JIRA)
Dmitriy Govorukhin created IGNITE-9042:
--

 Summary: Transaction with small timeout may lead to inconsistent 
partition state
 Key: IGNITE-9042
 URL: https://issues.apache.org/jira/browse/IGNITE-9042
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitriy Govorukhin
 Attachments: Reproducer.java

A transaction with a small timeout may lead to inconsistent partition state. 
A reproducer is attached.

The problem is in GridDhtTxPrepareFuture.sendPrepareRequests(): if the timeout is 
reached during iteration over tx.dhtMap().values(), we do not send 
GridDhtTxPrepareRequest to some backups, so those backups will not know anything 
about the transaction and will not participate in the commit.
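
[Editor's note: the failure mode described above is the generic "abort notification
mid-loop" pattern. A minimal, Ignite-free sketch; notifyBackups and the timeout
parameter are hypothetical illustrations, not the real API.]

```java
import java.util.ArrayList;
import java.util.List;

public class PrepareTimeoutDemo {
    /**
     * Hypothetical sketch: send a prepare request to each backup, but stop
     * as soon as the transaction "timeout" fires. Backups after the cutoff
     * never learn about the transaction -- the bug described in the ticket.
     */
    static List<String> notifyBackups(List<String> backups, int timeoutAfter) {
        List<String> notified = new ArrayList<>();

        for (String backup : backups) {
            if (notified.size() >= timeoutAfter)
                break; // timeout reached mid-iteration: remaining backups are skipped

            notified.add(backup);
        }

        return notified;
    }

    public static void main(String[] args) {
        List<String> backups = List.of("backup-1", "backup-2", "backup-3");

        // With a "timeout" after one request, two backups miss the prepare phase.
        System.out.println(notifyBackups(backups, 1)); // [backup-1]
    }
}
```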



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9041) AssertionError in TcpCommunicationSpi

2018-07-20 Thread Alexey Stelmak (JIRA)
Alexey Stelmak created IGNITE-9041:
--

 Summary: AssertionError in TcpCommunicationSpi
 Key: IGNITE-9041
 URL: https://issues.apache.org/jira/browse/IGNITE-9041
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Stelmak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Desynchronization of true repo and github repo

2018-07-20 Thread Dmitry Pavlov
Yes, a very strange thing: I also checked for this commit in the mirror before,
and it has appeared only now. I have no clue about the reasons.

пт, 20 июл. 2018 г. в 15:02, Nikolay Izhikov :

> When I tried to push 8633d34e to master, the GitHub repo didn't contain Yury's
> commit.
>
> It appeared on GitHub only after my merge and push.
>
> Fri, Jul 20, 2018, 14:53 Dmitry Pavlov :
>
> > Hi Yury,
> >
> > it seems the commit has appeared now:
> >
> >
> https://github.com/apache/ignite/commit/26e405281792d38b5505cde22b5c6a91749c4990
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > Fri, Jul 20, 2018 at 14:02, Yury Babak :
> >
> > > Igniters,
> > >
> > > A few hours ago I pushed the commit
> > > <
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=commit;h=26e405281792d38b5505cde22b5c6a91749c4990
> > >
> > >
> > > into https://git-wip-us.apache.org/repos/asf/ignite
> > >
> > > But I don't see this commit in the GitHub repo; maybe we have some problem
> > > with synchronization between those two repos?
> > >
> > > Can someone check it?
> > >
> > > Regards,
> > > Yury
> > >
> > >
> > >
> > > --
> > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> > >
> >
>


Re: Desynchronization of true repo and github repo

2018-07-20 Thread Nikolay Izhikov
When I tried to push 8633d34e to master, the GitHub repo didn't contain Yury's
commit.

It appeared on GitHub only after my merge and push.

Fri, Jul 20, 2018, 14:53 Dmitry Pavlov :

> Hi Yury,
>
> it seems the commit has appeared now:
>
> https://github.com/apache/ignite/commit/26e405281792d38b5505cde22b5c6a91749c4990
>
> Sincerely,
> Dmitriy Pavlov
>
> Fri, Jul 20, 2018 at 14:02, Yury Babak :
>
> > Igniters,
> >
> > A few hours ago I pushed the commit
> > <
> >
> https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=commit;h=26e405281792d38b5505cde22b5c6a91749c4990
> >
> >
> > into https://git-wip-us.apache.org/repos/asf/ignite
> >
> > But I don't see this commit in the GitHub repo; maybe we have some problem
> > with synchronization between those two repos?
> >
> > Can someone check it?
> >
> > Regards,
> > Yury
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>


Code duplicates in ssh tests

2018-07-20 Thread Ivan Fedotov
Hi, Igniters!

I’m working on the ssh module and found some code duplicates in
IgniteProjectionStartStopRestartSelfTest.

1. Tests testRestartNodesByIds and testRestartNodesByIdsC fully
duplicate each other [1]. I tried to find what differences they should have
and looked at similar tests: testStopNodesByIds and testStopNodesByIdsC
[2]. It relates to the second point.

2. The only difference is that in testStopNodesByIds we stop nodes by
passing a HashSet of IDs and in testStopNodesByIdsC we stop them by passing
an ArrayList of IDs. In my opinion it does not matter, because the stopNodes
methods take a Collection as an argument and we can pass both a HashSet and
an ArrayList to them. So I think that the code in these tests also duplicates
each other.

What do you think? Can we remove one of these tests in both cases?


[1]
https://github.com/apache/ignite/blob/master/modules/ssh/src/test/java/org/apache/ignite/internal/IgniteProjectionStartStopRestartSelfTest.java#L878

[2]
https://github.com/apache/ignite/blob/master/modules/ssh/src/test/java/org/apache/ignite/internal/IgniteProjectionStartStopRestartSelfTest.java#L659
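
[Editor's note: the point about HashSet vs. ArrayList can be demonstrated in
isolation. Once a method declared over Collection copies its argument into a
HashSet, the caller's choice of collection type is unobservable. The forNodeIds
below is a simplified stand-in for the Ignite method, not the real implementation.]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class CollectionArgDemo {
    // Simplified stand-in for ClusterGroupAdapter.forNodeIds:
    // any Collection of IDs is copied into a HashSet.
    static Set<UUID> forNodeIds(Collection<UUID> ids) {
        return new HashSet<>(ids);
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID();
        UUID b = UUID.randomUUID();

        Set<UUID> fromSet = forNodeIds(new HashSet<>(Arrays.asList(a, b)));
        Set<UUID> fromList = forNodeIds(new ArrayList<>(List.of(a, b)));

        // Both call sites end up with the identical set of node IDs.
        System.out.println(fromSet.equals(fromList)); // true
    }
}
```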


-- 
Ivan Fedotov.

ivanan...@gmail.com


Re: Desynchronization of true repo and github repo

2018-07-20 Thread Dmitry Pavlov
Hi Yury,

it seems commit has appeared now:
https://github.com/apache/ignite/commit/26e405281792d38b5505cde22b5c6a91749c4990

Sincerely,
Dmitriy Pavlov

Fri, 20 Jul 2018 at 14:02, Yury Babak :

> Igniters,
>
> A few hours ago I pushed the commit
> <
> https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=commit;h=26e405281792d38b5505cde22b5c6a91749c4990>
>
> into https://git-wip-us.apache.org/repos/asf/ignite
>
> But I don't see this commit in the GitHub repo; maybe we have some problem with
> synchronization between those two repos?
>
> Can someone check it?
>
> Regards,
> Yury
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


[GitHub] ignite pull request #4378: IGNITE-8915: NPE on local sql query for client no...

2018-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4378


---


[GitHub] ignite pull request #4385: IGNITE-9021: Refactor vectors/matrix classes

2018-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4385


---


[GitHub] ignite pull request #4391: IGNITE-8892 add additional test.

2018-07-20 Thread zstan
GitHub user zstan opened a pull request:

https://github.com/apache/ignite/pull/4391

IGNITE-8892 add additional test.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8892-zstan

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4391


commit 36f743f739e34fada7db2826827e1aea1136b7a8
Author: Andrey V. Mashenkov 
Date:   2018-06-28T14:45:06Z

ignite-8892: Get rid of CacheQuery.keepAll flag.

commit 0ffd92dac85f5a7e75a1902b2b20ff643b058b4a
Author: Evgeny Stanilovskiy 
Date:   2018-07-20T09:33:46Z

IGNITE-8892 add additional test.




---


Desynchronization of true repo and github repo

2018-07-20 Thread Yury Babak
Igniters,

A few hours ago I pushed the commit

  
into https://git-wip-us.apache.org/repos/asf/ignite 

But I don't see this commit in the GitHub repo; maybe we have some problem with
synchronization between those two repos?

Can someone check it?

Regards,
Yury



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[jira] [Created] (IGNITE-9040) StopNodeFailureHandler is not able to stop node correctly on node segmentation

2018-07-20 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-9040:
---

 Summary: StopNodeFailureHandler is not able to stop node correctly 
on node segmentation
 Key: IGNITE-9040
 URL: https://issues.apache.org/jira/browse/IGNITE-9040
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Sergey Chugunov
Assignee: Sergey Chugunov
 Fix For: 2.7


When the *IGNITE_WAL_LOG_TX_RECORDS* flag is set, special TxRecords are logged to 
the WAL even on node stop.

With the STOP segmentation policy, *StopNodeFailureHandler* is used to stop the 
segmented node, and it marks the node's state as invalid. As a result, all write 
requests to the WAL fail.

So, as part of the stop-on-segmentation procedure, the node needs to log a TxRecord 
but cannot, because its state is marked as invalid. This leads to the stop procedure 
finishing incorrectly: some threads started by the node are not cleaned up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ML] Machine Learning Pipeline Improvement

2018-07-20 Thread Yury Babak
Alexey,

I like this idea; it should improve the usability of our ML module.

Regards,
Yury



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #4390: IGNITE-9039 AssertionError on cache stop

2018-07-20 Thread EdShangGG
GitHub user EdShangGG opened a pull request:

https://github.com/apache/ignite/pull/4390

IGNITE-9039 AssertionError on cache stop



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9039

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4390


commit a9cc8fbc0541b32c65e373f566777cebb33aac57
Author: Eduard Shangareev 
Date:   2018-07-20T10:38:27Z

IGNITE-9039 AssertionError on cache stop




---


[GitHub] ignite pull request #4389: Ignite 9038

2018-07-20 Thread dkarachentsev
GitHub user dkarachentsev opened a pull request:

https://github.com/apache/ignite/pull/4389

Ignite 9038



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9038

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4389


commit 26e405281792d38b5505cde22b5c6a91749c4990
Author: zaleslaw 
Date:   2018-07-19T22:47:12Z

IGNITE-9021: [ML] Refactor vectors to dence/sparse

this closes #4385

commit 9be9cebcd615b0241ca99be452a4af46f883f51f
Author: dkarachentsev 
Date:   2018-07-17T10:06:13Z

IGNITE-9038 - Node join serialization defaults




---


Re: GridCacheReplicatedFullApiMultithreadedSelfTest1 not used, not compile. Remove?

2018-07-20 Thread Dmitry Pavlov
Hi Maxim,

I think we should remove such code, and if nobody objects I can apply the PR on
Monday.

Ilya, please confirm you agree.

Sincerely,
Dmitriy Pavlov

Fri, 20 Jul 2018 at 13:16, Maxim Muzafarov :

> Igniters,
>
> I've come across a test in the Ignite code base that is fully commented out.
> You can check it yourself [1]. As it has not been used since 2014 and does
> not even compile, I suggest removing it.
>
> What do you think? Please share your thoughts.
>
> Full name:
>
> org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedFullApiMultithreadedSelfTest1
>
> [1]
>
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedFullApiMultithreadedSelfTest1.java
> --
> --
> Maxim Muzafarov
>


[jira] [Created] (IGNITE-9039) AssertionError on cache stop

2018-07-20 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-9039:
-

 Summary: AssertionError on cache stop
 Key: IGNITE-9039
 URL: https://issues.apache.org/jira/browse/IGNITE-9039
 Project: Ignite
  Issue Type: Bug
Reporter: Eduard Shangareev
Assignee: Eduard Shangareev


It was introduced by IGNITE-8955:

{code}
[2018-07-20 13:24:39,190][INFO 
][exchange-worker-#38%db.CheckpointBufferDeadlockTest0%][root] 
[GridStringLogger echo] class org.apache.ignite.IgniteCheckedException: 
Compound exception for CountDownFuture.   at 
org.apache.ignite.internal.util.future.CountDownFuture.addError(CountDownFuture.java:72)
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:46)
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:28)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:462)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$ClearSegmentRunnable.run(PageMemoryImpl.java:2757)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.AssertionError: Release pinned page: FullPageId 
[pageId=00013c43, effectivePageId=3c43, grpId=-1778028968]
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.releaseFreePage(PageMemoryImpl.java:1787)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.access$1900(PageMemoryImpl.java:1659)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$ClearSegmentRunnable.run(PageMemoryImpl.java:2748)
... 3 more
Suppressed: java.lang.AssertionError: Release pinned page: FullPageId 
[pageId=000128bc, effectivePageId=28bc, grpId=-1778028968]
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.releaseFreePage(PageMemoryImpl.java:1787)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.access$1900(PageMemoryImpl.java:1659)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$ClearSegmentRunnable.run(PageMemoryImpl.java:2748)
... 3 more
Suppressed: java.lang.AssertionError: Release pinned page: FullPageId 
[pageId=00013f1a, effectivePageId=3f1a, grpId=-1778028968]
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.releaseFreePage(PageMemoryImpl.java:1787)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.access$1900(PageMemoryImpl.java:1659)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$ClearSegmentRunnable.run(PageMemoryImpl.java:2748)
... 3 more
Suppressed: java.lang.AssertionError: Release pinned page: FullPageId 
[pageId=00012eb9, effectivePageId=2eb9, grpId=-1778028968]
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.releaseFreePage(PageMemoryImpl.java:1787)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.access$1900(PageMemoryImpl.java:1659)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$ClearSegmentRunnable.run(PageMemoryImpl.java:2748)
... 3 more
Suppressed: java.lang.AssertionError: Release pinned page: FullPageId 
[pageId=000120f2, effectivePageId=20f2, grpId=-1778028968]
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.releaseFreePage(PageMemoryImpl.java:1787)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.access$1900(PageMemoryImpl.java:1659)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$ClearSegmentRunnable.run(PageMemoryImpl.java:2748)
... 3 more
Suppressed: java.lang.AssertionError: Release pinned page: FullPageId 
[pageId=000115e9, effectivePageId=15e9, grpId=-1778028968]
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl$PagePool.releaseFreePage(PageMemoryImpl.java:1787)
at 

[jira] [Created] (IGNITE-9038) Node join serialization defaults

2018-07-20 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-9038:
---

 Summary: Node join serialization defaults
 Key: IGNITE-9038
 URL: https://issues.apache.org/jira/browse/IGNITE-9038
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.6
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.7


staticallyConfigured flag in CacheJoinNodeDiscoveryData.CacheInfo should be 
true by default to keep it consistent to previous protocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: data extractor

2018-07-20 Thread Dmitriy Govorukhin
Alexey,

1. The utility will extract raw payload bytes. If you want to build binary
objects or Java class instances, you will need binary/marshaller metadata.
If the two grids have different metadata, you should move the metadata as well
as the dumped data to construct binary objects on the other grid.
Do you have any ideas on how we can improve this approach?

2. I do not think I understood your idea; please explain in more
detail how you want to use the utility for checkpoint statistics.

3. For the first implementation, I prefer the simple *file path* approach: you
can specify a path as a parameter to some partition file, to a cache/group
directory, or to the root caches/groups directory.

4. I have not had time to work out how we will upload data to another grid.
Any ideas are welcome.


On Mon, Jul 2, 2018 at 5:34 PM Alexey Goncharuk 
wrote:

> Dmitriy,
>
> A few questions regarding the user cases for the utility:
> 1) Would I be able to read the extracted data from the dumped file without
> Ignite node binary/marshaller metadata? In other words, will I be able to
> move only the dumped file to another grid or will I need to move the
> metadata as well?
> 2) Are you planning to add a public API version of this utility as a part
> of Ignite? For example, if I am planning to run some statistics on a
> checkpointed data, will I be able to get some sort of an iterator to
> process this data?
> 3) How a user will choose which caches (cache groups) to process? Will the
> user need to provide a cache or cache ID (or either of them)? Will the
> utility be able to extract a single cache data from a cache group?
> 4) I think the upload part of the utility is missing some input parameters
> - for example, what cluster to connect to, what caches to upload to, etc.
>
Sat, 30 Jun 2018 at 22:38, Dmitriy Govorukhin <
> dmitriy.govoruk...@gmail.com>:
>
> > Igniters,
> >
> > I am working on IGNITE-7644
> >  (export all
> key-value
> > data from a persisted partition),
> > it will be command line tool for extracting data from Ignite partition
> > file without the need to start node.
> > The main motivation is to have a lifebuoy in case if a file has damage
> for
> > some reason.
> >
> > I suggest simple API and two commands for the first implementation:
> >
> > -c
> > --CRC [srcPath] - check CRC for all(or by type) pages in partition
> >
> > -e
> > --extract [srcPath] [outPath] - dump all survey data from partition to
> > another file with raw key/value pair format
> > (required graceful stop for a node, not necessary after --restore will be
> > implemented)
> >
> > Output file format see in attached, this format does not contain any
> index
> > inside but it is very simple and
> > flexible for future works with raw key/value data.
> >
> > Future features:
> > -u
> > --upload - reload raw key/value pairs to node
> >
> > -s
> > --status - check current node file status, need binary recovery or not
> > (node crash on the middle of a checkpoint)
> >
> > -r
> > --restore - restore binary consistency (finish checkpoint, required WAL
> > file for recovery)
> >
> > Let's start a discussion, any comments are welcome.
> >
> >
>
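The proposed `-c`/`--CRC` mode above boils down to reading the partition file page by page and validating a checksum per page. A purely illustrative sketch of such a scan, assuming a fixed page size and plain CRC32 (the real Ignite partition file layout, page size, and stored-CRC field are NOT modeled here):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.zip.CRC32;

// Illustrative sketch only: scans a file in fixed-size "pages" and
// computes a CRC32 per page, without starting any node.
public class PageCrcScan {
    static final int PAGE_SIZE = 4096; // assumption, not Ignite's actual default

    static long[] pageCrcs(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            int pages = (int) ((ch.size() + PAGE_SIZE - 1) / PAGE_SIZE);
            long[] crcs = new long[pages];
            ByteBuffer buf = ByteBuffer.allocate(PAGE_SIZE);
            for (int i = 0; i < pages; i++) {
                buf.clear();
                long pos = (long) i * PAGE_SIZE;
                // Read one page, tolerating partial reads and a short last page.
                while (buf.hasRemaining()) {
                    int n = ch.read(buf, pos + buf.position());
                    if (n < 0) break; // EOF: last page may be short
                }
                buf.flip();
                CRC32 crc = new CRC32();
                crc.update(buf);
                crcs[i] = crc.getValue();
            }
            return crcs;
        }
    }

    public static void main(String[] args) throws IOException {
        // Tiny demo: a file slightly longer than one page spans two pages.
        Path tmp = Files.createTempFile("partition", ".bin");
        Files.write(tmp, new byte[PAGE_SIZE + 1]);
        System.out.println(pageCrcs(tmp).length); // prints 2
    }
}
```

A real implementation would compare each computed CRC against the checksum stored in the page header and report the indices of mismatching pages.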


GridCacheReplicatedFullApiMultithreadedSelfTest1 not used, not compile. Remove?

2018-07-20 Thread Maxim Muzafarov
Igniters,

I've come across a test in the Ignite code base that is fully commented out.
You can check it yourself [1]. As it has not been used since 2014 and does not
even compile, I suggest removing it.

What do you think? Please share your thoughts.

Full name:
org.apache.ignite.internal.processors.cache.distributed.replicated.GridCacheReplicatedFullApiMultithreadedSelfTest1

[1]
https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/distributed/replicated/GridCacheReplicatedFullApiMultithreadedSelfTest1.java
-- 
--
Maxim Muzafarov


Re: [ML] Machine Learning Pipeline Improvement

2018-07-20 Thread Alexey Zinoviev
Yes, it makes the preprocessing easy and clear to read and understand.

In the API it will look like:

Model mdl = Pipeline.of(reading, featureExtracting, labelExtracting,
normalizing, encoding, scaling, logisticRegression)

where in .of(...) we can see the sequence of ML stages.
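For illustration, here is one way such a varargs stage-chaining API could be sketched. Everything here is a hypothetical stand-in (Pipeline, of, fit, and the toy stages), not the actual Ignite ML classes:

```java
import java.util.function.Function;

// Hypothetical sketch of a stage-chaining Pipeline.of(...) API.
// Stages are modeled as simple T -> T transforms; the real ML stages
// (extractors, normalizers, trainers) would carry richer types.
final class Pipeline<T> {
    private final Function<T, T> chain;

    private Pipeline(Function<T, T> chain) {
        this.chain = chain;
    }

    /** Composes the given stages left-to-right into one pipeline. */
    @SafeVarargs
    static <T> Pipeline<T> of(Function<T, T>... stages) {
        Function<T, T> chain = Function.identity();
        for (Function<T, T> stage : stages)
            chain = chain.andThen(stage);
        return new Pipeline<>(chain);
    }

    /** Runs the whole stage sequence on the input. */
    T fit(T input) {
        return chain.apply(input);
    }
}

public class PipelineDemo {
    public static void main(String[] args) {
        // Toy stand-ins for, e.g., normalizing and scaling stages.
        Pipeline<Double> p = Pipeline.of(x -> x + 1.0, x -> x * 2.0);
        System.out.println(p.fit(3.0)); // prints 8.0
    }
}
```

The point of the .of(...) shape is that the stage order is visible at a glance at the call site, matching the sequence listed above.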


Fwd: The Apache Ignite Book

2018-07-20 Thread Dmitry Pavlov
Hi Igniters,

We sometimes mention the lack of an overall view and documentation about Ignite,
and I guess that could change soon.

FYI, please find the forwarded message below, and I hope you will have a free
minute to review it.

At least I can see a free book sample available, and I found deep technical
details about the product.

Sincerely,
Dmitriy Pavlov

-- Forwarded message -
From: srecon 
Date: Fri, 20 Jul 2018 at 11:06
Subject: The Apache Ignite Book
To: 


Dear Igniters,
  we are happy to announce that a free sample chapter of our new title "The
Apache Ignite Book" has been published on leanpub. The full table of contents
of the book is also available at leanpub.
  This is an agile-published book, and the first portion of the book will be
published soon. We want this book to be the perfect guide for Ignite users.
So, any suggestions, comments, ideas, and criticism are welcome.

Best regards
  Shamim



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[jira] [Created] (IGNITE-9037) Apache Pulsar integration

2018-07-20 Thread Roman Shtykh (JIRA)
Roman Shtykh created IGNITE-9037:


 Summary: Apache Pulsar integration
 Key: IGNITE-9037
 URL: https://issues.apache.org/jira/browse/IGNITE-9037
 Project: Ignite
  Issue Type: New Feature
  Components: streaming
Reporter: Roman Shtykh


Streamer integration with [https://pulsar.incubator.apache.org/]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4388: test for Ignite 5980

2018-07-20 Thread 1vanan
GitHub user 1vanan opened a pull request:

https://github.com/apache/ignite/pull/4388

test for Ignite 5980



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/1vanan/ignite IGNITE-5980

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4388


commit 8916ac716a8d2f9c120de2f0eaf5969e57c70287
Author: Fedotov 
Date:   2018-07-20T08:01:09Z

repeat 100 times




---


[GitHub] ignite pull request #4375: tests for ignite 5980

2018-07-20 Thread 1vanan
Github user 1vanan closed the pull request at:

https://github.com/apache/ignite/pull/4375


---


[GitHub] ignite pull request #2957: IGNITE-5798

2018-07-20 Thread 1vanan
Github user 1vanan closed the pull request at:

https://github.com/apache/ignite/pull/2957


---


Re: Place Ignite TC helper to ASF Ignite supplementary git repo

2018-07-20 Thread Maxim Muzafarov
Dmitry,

At the last Moscow Apache Ignite Meetup, the very high threshold of entry into
Ignite code development was discussed. So, for me, placing MTCGA.Bot in the ASF
repository sounds reasonable and I'm voting for it. It would be a good starting
point for each new community member.

But I fully support Sergey's mail. We definitely should provide clear
documentation and information about the MTCGA.Bot project; it would be very
useful for new members. I will do my best to help you with it.


Thu, 19 Jul 2018 at 16:51, Sergey Chugunov :

> Hi Dmitriy Pavlov,
>
> MTCGA.Bot seems like a useful tool for analysis and monitoring of our
> test base, so I also support the idea of publishing its source code.
> When it is adopted by more community members, they may come up with ideas
> for improvements, so its sources should be available.
>
> Placing it in a separate repo seems reasonable to me but we should provide
> clear information about new repo and its purpose somewhere on wiki to make
> it visible to the community.
> Clear documentation on source code won't hurt as well.
>
> --
> Thanks,
> Sergey Chugunov
>
> On Thu, Jul 19, 2018 at 1:29 PM Dmitry Pavlov 
> wrote:
>
> > Hi Dmitriy,
> >
> > Yes, I'm going to create an INFRA ticket for a new ASF supplementary
> > repository for the project; I just want to be absolutely sure that the
> > community supports my plan.
> >
> > Or do you mean I need to create ticket to find out if domain
> > mtcga.ignite.apache.org is possible to create?
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> Thu, 19 Jul 2018 at 1:43, Dmitriy Setrakyan :
> >
> > > Dmitriy,
> > >
> > > I think you should file an INFRA ticket and ask if this is possible.
> > >
> > > D.
> > >
> > > On Wed, Jul 18, 2018 at 3:12 PM, Denis Magda 
> wrote:
> > >
> > > > Dmitriy,
> > > >
> > > > Thanks for clearing things up. No objections from my side then.
> > > >
> > > > Let's see what other Ignite fellows think on your proposal. Someone
> > might
> > > > have a different perspective.
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Wed, Jul 18, 2018 at 1:58 PM Dmitry Pavlov  >
> > > > wrote:
> > > >
> > > > > Hi Denis,
> > > > >
> > > > > It will make things simpler.
> > > > >
> > > > > 1) For example, any committer will be able to change the notification
> > > > > rules and fix the Bot if something goes wrong. Now it is in my GitHub
> > > > > repo. An ASF repo will guarantee that the code is always accessible
> > > > > to community members.
> > > > >
> > > > > 2) Being part of an ASF repo, the Bot will be a simple thing that a
> > > > > less experienced developer can start with. The Bot uses the latest
> > > > > Apache Ignite release as its DB with persistence enabled, so a bot
> > > > > developer becomes at least an Apache Ignite user, and at most a new
> > > > > contributor.
> > > > >
> > > > > If we agree to place this bot in ASF, the next step could be asking
> > > > > the Infra Team to provide a second-level apache domain, e.g.
> > > > > mtcga.ignite.apache.org for the web UI. I guess it would be a plus if
> > > > > our tool's code were available in an ASF repo rather than in some
> > > > > private git repo.
> > > > >
> > > > > Sincerely,
> > > > > Dmitriy Pavlov
> > > > >
Wed, 18 Jul 2018 at 23:03, Denis Magda :
> > > > >
> > > > > > Hi Dmitriy,
> > > > > >
> > > > > > The whole year has passed since this initiative launch, hell, the
> > > times
> > > > > > passes by :)
> > > > > >
> > > > > > What would be the benefits of having the tool in the Apache repo?
> > > > > > Does it simplify things for us?
> > > > > >
> > > > > > --
> > > > > > Denis
> > > > > >
> > > > > > On Wed, Jul 18, 2018 at 3:59 AM Dmitry Pavlov <
> > dpavlov@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Igniters,
> > > > > > >
> > > > > > > Almost 1 year has passed since Make Teamcity Green Again was
> > > > initially
> > > > > > > proposed. During this process we managed to get almost
> successful
> > > Run
> > > > > > Alls
> > > > > > > in master, but currently regressions still occur. We all tried
> a
> > > lot
> > > > of
> > > > > > > things: careful examination of PR tests, continuous monitoring
> of
> > > > > master,
> > > > > > > suite responsible contributor, tickets creation and so on.
> > > > > > >
> > > > > > > According to Igniter's feedback most productive thing was
> master
> > > > > > monitoring
> > > > > > > and timely fix of new failures. But contributor’s enthusiasm is
> > > > limited
> > > > > > and
> > > > > > > monitoring is not most enjoyable thing, so it's time to
> automate
> > > this
> > > > > > > activity. I’ve created MTCGA.Bot which sends emails about new
> > > > failures
> > > > > > and
> > > > > > > in addition has a couple of useful features.
> > > > > > >
> > > > > > > The Bot is being developed only based on your feedback. 30
> Ignite
> > > > > > > developers already tried it. I'm going to run short
> > > > > webinar/presentation
> > > > > > at
> > > > > > > Mon 23 July and tell more