Re: Persistence per memory policy configuration

2017-10-17 Thread Denis Magda
Ivan,

Please don’t forget to update all the persistence- and memory-pool-related
examples to the new configuration format. Let’s make sure none of our
examples prints out the deprecated class warning.

Denis

On Tuesday, October 17, 2017, Dmitriy Setrakyan  wrote:

> Thanks Ivan! Let's make sure that every property gets sufficient javadoc
> for our users to understand. We should also document this configuration on
> readme.

Re: Persistence per memory policy configuration

2017-10-17 Thread Dmitriy Setrakyan
Thanks Ivan! Let's make sure that every property gets sufficient javadoc
for our users to understand. We should also document this configuration on
readme.

On Tue, Oct 17, 2017 at 3:06 PM, Ivan Rakov  wrote:

> Dmitriy,
>
> Please check description of https://issues.apache.org/jira
> /browse/IGNITE-6030, I've updated it with actual list of properties.
>
> Best Regards,
> Ivan Rakov
>
>
> On 17.10.2017 21:46, Dmitriy Setrakyan wrote:
>
>> I am now confused. Can I please ask for the final configuration again?
>> What
>> will it look like?
>>

Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Dmitriy Setrakyan
Val, thanks for the review. Can I ask you to add the same comments to the
ticket?

On Tue, Oct 17, 2017 at 3:20 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Nikolay, Anton,
>
> I did a high level review of the code. First of all, impressive results!
> However, I have some questions/comments.
>
> 1. Why do we have org.apache.spark.sql.ignite package in our codebase? Can
> these classes reside under org.apache.ignite.spark instead?
> 2. IgniteRelationProvider contains multiple constants which I guess are
> some kind of config options. Can you describe the purpose of each of them?
> 3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
> implementations and what is the difference?
> 4. IgniteStrategy and IgniteOptimization are currently no-op. What are our
> plans on implementing them? Also, what exactly is planned in
> IgniteOptimization and what is its purpose?
> 5. I don't like that IgniteStrategy and IgniteOptimization have to be set
> manually on SQLContext each time it's created. This seems to be very error
> prone. Is there any way to automate this and improve usability?
> 6. What is the purpose of IgniteSparkSession? I see it's used
> in IgniteCatalogExample but not in IgniteDataFrameExample, which is
> confusing.
> 7. To create IgniteSparkSession we first create IgniteContext. Is it really
> needed? It looks like we can directly provide the configuration file; if
> IgniteSparkSession really requires IgniteContext, it can create it by
> itself under the hood. Actually, I think it makes sense to create a builder
> similar to SparkSession.builder(), it would be good if our APIs here are
> consistent with Spark APIs.
> 8. Can you clarify the query syntax
> in IgniteDataFrameExample#nativeSparkSqlFromCacheExample2?
> 9. Do I understand correctly that IgniteCacheRelation is for the case when
> we don't have SQL configured on Ignite side? I thought we decided not to
> support this, no? Or this is something else?
>
> Thanks!
>
> -Val

Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Valentin Kulichenko
Nikolay, Anton,

I did a high level review of the code. First of all, impressive results!
However, I have some questions/comments.

1. Why do we have org.apache.spark.sql.ignite package in our codebase? Can
these classes reside under org.apache.ignite.spark instead?
2. IgniteRelationProvider contains multiple constants which I guess are
some kind of config options. Can you describe the purpose of each of them?
3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
implementations and what is the difference?
4. IgniteStrategy and IgniteOptimization are currently no-op. What are our
plans on implementing them? Also, what exactly is planned in
IgniteOptimization and what is its purpose?
5. I don't like that IgniteStrategy and IgniteOptimization have to be set
manually on SQLContext each time it's created. This seems to be very error
prone. Is there any way to automate this and improve usability?
6. What is the purpose of IgniteSparkSession? I see it's used
in IgniteCatalogExample but not in IgniteDataFrameExample, which is
confusing.
7. To create IgniteSparkSession we first create IgniteContext. Is it really
needed? It looks like we can directly provide the configuration file; if
IgniteSparkSession really requires IgniteContext, it can create it by
itself under the hood. Actually, I think it makes sense to create a builder
similar to SparkSession.builder(), it would be good if our APIs here are
consistent with Spark APIs.
8. Can you clarify the query syntax
in IgniteDataFrameExample#nativeSparkSqlFromCacheExample2?
9. Do I understand correctly that IgniteCacheRelation is for the case when
we don't have SQL configured on Ignite side? I thought we decided not to
support this, no? Or this is something else?

Thanks!

-Val
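
Point 7 above suggests a builder mirroring Spark's SparkSession.builder().
Purely as an illustration of that idea — every name below is invented for
this sketch; nothing like this exists in the prototype yet:

```scala
// Hypothetical builder, modeled after SparkSession.builder().
// Method names are placeholders, not an actual Ignite API.
val spark = IgniteSparkSession.builder()
  .appName("ignite-df-example")
  .master("spark://master:7077")
  // IgniteContext would be created under the hood from the config file,
  // instead of the caller constructing it explicitly.
  .igniteConfig("config/example-ignite.xml")
  .getOrCreate()

// From here, regular Spark SQL against Ignite-backed tables,
// with IgniteStrategy/IgniteOptimization pre-registered by the builder.
spark.sql("SELECT name FROM person WHERE id = 1").show()
```

The point of the sketch is the usability argument in items 5 and 7: if the
builder registers the strategy and optimization rules itself, users cannot
forget to set them on each new SQLContext.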

On Tue, Oct 17, 2017 at 4:40 AM, Anton Vinogradov 
wrote:

> Sounds awesome.
>
> I'll try to review API & tests this week.
>
> Val,
> Your review still required :)
>
> On Tue, Oct 17, 2017 at 2:36 PM, Николай Ижиков 
> wrote:
>
> > Yes
> >
> > 17 окт. 2017 г. 2:34 PM пользователь "Anton Vinogradov" <
> > avinogra...@gridgain.com> написал:
> >
> > > Nikolay,
> > >
> > > So, it will be able to start regular Spark and Ignite clusters and,
> > > using peer classloading via spark-context, perform any DataFrame
> > > request, correct?
> > >
> > > On Tue, Oct 17, 2017 at 2:25 PM, Николай Ижиков <
> nizhikov@gmail.com>
> > > wrote:
> > >
> > > > Hello, Anton.
> > > >
> > > > An example you provide is a path to a master *local* file.
> > > > These libraries are added to the classpath for each remote node
> running
> > > > submitted job.
> > > >
> > > > Please, see documentation:
> > > >
> > > > http://spark.apache.org/docs/latest/api/java/org/apache/
> > > > spark/SparkContext.html#addJar(java.lang.String)
> > > > http://spark.apache.org/docs/latest/api/java/org/apache/
> > > > spark/SparkContext.html#addFile(java.lang.String)
> > > >
> > > >
> > > > 2017-10-17 13:10 GMT+03:00 Anton Vinogradov <
> avinogra...@gridgain.com
> > >:
> > > >
> > > > > Nikolay,
> > > > >
> > > > > > With Data Frame API implementation there are no requirements to
> > > > > > have any Ignite files on spark worker nodes.
> > > > >
> > > > > What do you mean? I see code like:
> > > > >
> > > > > spark.sparkContext.addJar(MAVEN_HOME +
> > > > > "/org/apache/ignite/ignite-core/2.3.0-SNAPSHOT/ignite-
> > > > > core-2.3.0-SNAPSHOT.jar")
> > > > >
> > > > > On Mon, Oct 16, 2017 at 5:22 PM, Николай Ижиков <
> > > nizhikov@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hello, guys.
> > > > > >
> > > > > > I have created example application to run Ignite Data Frame on
> > > > standalone
> > > > > > Spark cluster.
> > > > > > With Data Frame API implementation there are no requirements to
> > > > > > have any Ignite files on spark worker nodes.
> > > > > >
> > > > > > I ran this application on the free dataset: ATP tennis match
> > > > statistics.
> > > > > >
> > > > > > data - https://github.com/nizhikov/atp_matches
> > > > > > app - https://github.com/nizhikov/ignite-spark-df-example
> > > > > >
> > > > > > Valentin, do you have a chance to look at my changes?
> > > > > >
> > > > > >
> > > > > > 2017-10-12 6:03 GMT+03:00 Valentin Kulichenko <
> > > > > > valentin.kuliche...@gmail.com
> > > > > > >:
> > > > > >
> > > > > > > Hi Nikolay,
> > > > > > >
> > > > > > > Sorry for the delay on this, got a little swamped lately. I
> > > > > > > will do my best to review the code this week.
> > > > > > >
> > > > > > > -Val
> > > > > > >
> > > > > > > On Mon, Oct 9, 2017 at 11:48 AM, Николай Ижиков <
> > > > > nizhikov@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hello, Valentin.
> > > > > > >>
> > > > > > >> Did you have a chance to look at my changes?
> > > > > > >>
> > > > > > >> Now I think I have done almost all required features.
> > > > > > >> I want to make some performance test to ensure my
> 

Re: Persistence per memory policy configuration

2017-10-17 Thread Ivan Rakov

Dmitriy,

Please check description of 
https://issues.apache.org/jira/browse/IGNITE-6030, I've updated it with 
actual list of properties.


Best Regards,
Ivan Rakov

On 17.10.2017 21:46, Dmitriy Setrakyan wrote:

I am now confused. Can I please ask for the final configuration again? What
will it look like?

On Tue, Oct 17, 2017 at 1:16 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:


Agree with Ivan. If we implemented backward compatibility, this would be
completely counterintuitive behavior, so +1 to keep the behavior as is.

As for the swap path, I see nothing wrong with having it for in-memory
caches. This is a simple overflow mechanism that works fine if you do not
need persistence guarantees.

2017-10-16 21:00 GMT+03:00 Ivan Rakov :


*swapPath* is ok for me. It is also consistent with *walPath* and
*walArchivePath*.

Regarding persistencePath/storagePath, I don't like the idea of the path
for WAL being implicitly changed, especially when we have a separate
option for it.

WAL and storage files are already located under the same $IGNITE_HOME root.
From a user's perspective, there's no need to change the root for all
persistence-related directories as long as $IGNITE_HOME points to the
correct disk.
From a developer's perspective, this change breaks backwards compatibility.
Maintaining backwards compatibility in a fail-safe way (checking both
old-style and new-style paths) is complex and hard to maintain in the
codebase.

Best Regards,
Ivan Rakov

My vote is for *storagePath* and keeping behavior as is.


On 16.10.2017 16:53, Pavel Tupitsyn wrote:


Igniters, another thing to consider:

DataRegionConfiguration.SwapFilePath should be SwapPath,
since this is actually not a single file, but a directory path.

On Fri, Oct 13, 2017 at 7:53 PM, Denis Magda  wrote:

Seems I've got what you’re talking about.

I’ve tried to change the root directory (*persistencePath*) and saw that
only data/indexes were placed to it while wal stayed somewhere in my work
dir. It works counterintuitively and causes unproductive discussions, like
the one we are having about *persistencePath* vs. *storagePath*. Neither
name fits this behavior.

My suggestion is the following:
- *persistencePath* refers to the root path of all storage files
(data/indexes, wal, archive). If the path is changed, *all the files* will
be under the new directory unless *setWalPath* and *setWalArchivePath* are
set *explicitly*.
- *setWalPath* overrides the default location of WAL (which otherwise
defaults to setPersistencePath).
- *setWalArchivePath* overrides the default location of the archive (which
again defaults to setPersistencePath).

If we follow this approach, the configuration and behavior become obvious.

Thoughts?

—
Denis

On Oct 13, 2017, at 1:21 AM, Ivan Rakov  wrote:

Denis,

Data/index storage and WAL are located under the same root by default.
However, this is not mandatory: *storagePath* and *walPath* properties


can contain both absolute and relative paths. If paths are absolute,
storage and WAL can reside on different devices, like this:


storagePath: /storage1/NVMe_drive/storage

walPath: /storage2/Big_SSD_drive/wal


We even recommend this in the tuning guide:
https://apacheignite.readme.io/docs/durable-memory-tuning


That's why I think *persistencePath* is misleading.

Best Regards,
Ivan Rakov

On 13.10.2017 5:03, Dmitriy Setrakyan wrote:


On Thu, Oct 12, 2017 at 7:01 PM, Denis Magda  wrote:

From what I see after running an example they are under the same root
folder and in different subdirectories. The root folder should be defined
by setPersistencePath, as I guess.

If that is the case, then you are right. Then we should not have
storagePath or WalPath, and store them both under the "persistencePath"
root. However, I would need Alexey Goncharuk or Ivan Rakov to confirm this.
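
For reference, the layout discussed in this thread maps onto the data
storage configuration roughly as below. This is a sketch against the
2.3-era API as proposed in IGNITE-6030 — property names (storagePath,
walPath, walArchivePath, swapPath, persistenceEnabled) should be verified
against the released javadoc, and all paths are examples:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <!-- Data/index pages on one device... -->
      <property name="storagePath" value="/storage1/NVMe_drive/storage"/>
      <!-- ...WAL and its archive on another, set explicitly. -->
      <property name="walPath" value="/storage2/Big_SSD_drive/wal"/>
      <property name="walArchivePath" value="/storage2/Big_SSD_drive/wal/archive"/>
      <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="persistenceEnabled" value="true"/>
        </bean>
      </property>
      <!-- A purely in-memory region using swapPath (renamed from
           swapFilePath, since it is a directory) as a simple overflow
           mechanism without persistence guarantees. -->
      <property name="dataRegionConfigurations">
        <list>
          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="name" value="inMemoryWithSwap"/>
            <property name="swapPath" value="/storage3/swap"/>
          </bean>
        </list>
      </property>
    </bean>
  </property>
</bean>
```

Relative values resolve under $IGNITE_HOME, which is why keeping the
behavior as is (no shared persistencePath root) stays backwards compatible.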






Apache Ignite meetups and events around the world in October

2017-10-17 Thread Denis Magda
Igniters,

Over the next couple of weeks, Ignite experts and enthusiasts are going to 
present the project, covering various topics:
https://ignite.apache.org/events.html

Silicon Valley and Bay Area:

- Oct 18, Meetup at the Google Headquarters driven by Val - 
https://twitter.com/denismagda/status/920366393341591553

- Oct 19, Meetup at the General Electric Headquarters. Val and I are going to 
demo high-availability techniques and SQL
   https://twitter.com/denismagda/status/920365091014373377

- Oct 24 - 26, IMC Summit in San Francisco: https://www.imcsummit.org/us/ 


Europe:

-  Oct 19, Meetup in London to talk about Ignite ML capabilities - 
https://www.meetup.com/preview/Eurostaff-Big-Data/events/243650721

- Oct 24 - 26, Spark Summit in Dublin. Two talks were accepted there: 
  
https://spark-summit.org/eu-2017/events/fast-data-with-apache-ignite-and-apache-spark/

  
https://spark-summit.org/eu-2017/events/how-to-share-state-across-multiple-apache-spark-jobs-using-apache-ignite/


Swing by if you’re around!

—
Denis



Re: Adding sqlline tool to Apache Ignite project

2017-10-17 Thread Denis Magda
Looks good to me. Prachi will help us document the tool usage:
https://issues.apache.org/jira/browse/IGNITE-6656 


However, I can’t figure out how to see a table's structure (columns and their 
types, indexes with names and types) using SQLLine. I’ve tried !metadata with a 
variety of parameters but no luck. As for the !indexes and !tables commands, they 
just print out table names and secondary indexes, omitting columns, index 
types and *primary indexes*. Considering that Ignite doesn’t support the standard 
*describe* command, I assumed SQLLine would help us out. But how do I do this 
with SQLLine?

—
Denis

> On Oct 17, 2017, at 4:33 AM, Oleg Ostanin  wrote:
> 
> New example build with sqlline:
> 
> https://ci.ignite.apache.org/viewLog.html?buildId=894407=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3PrepareVote#!1rrb2,1esn4zrslm4po,-h8h0hn9vvvxp
> 
> 
> On Wed, Oct 11, 2017 at 1:00 AM, Denis Magda  wrote:
> 
>> Oleg,
>> 
>> Looks good to me. Please consider the notes left in the ticket. I want us
>> to prepare a script for Windows, review the language for help notice and
>> errors, put together documentation. Prachi will be able to help with the
>> editing and documentation.
>> 
>> —
>> Denis
>> 
>>> On Oct 9, 2017, at 10:13 AM, Oleg Ostanin  wrote:
>>> 
>>> New build with fixed argument parsing:
>>> https://ci.ignite.apache.org/viewLog.html?buildId=882282;
>> tab=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3Pre
>> pareVote#!1rrb2,1esn4zrslm4po,-h8h0hn9vvvxp
>>> 
>>> On Mon, Oct 9, 2017 at 5:38 PM, Denis Magda  wrote:
>>> 
 I think it’s a must have for the ticket resolution.
 
 Denis
 
 On Monday, October 9, 2017, Anton Vinogradov 
 wrote:
 
> Any plans to have ignitesql.bat?
> 
> On Mon, Oct 9, 2017 at 5:29 PM, Oleg Ostanin  > wrote:
> 
>> Another build with sqlline included:
>> https://ci.ignite.apache.org/viewLog.html?buildId=881120;
>> tab=artifacts=IgniteRelease_
>> XxxFromMirrorIgniteRelease3Pre
>> pareVote#!1rrb2,-wpvx2aopzexz,1esn4zrslm4po,-h8h0hn9vvvxp
>> 
>> On Sun, Oct 8, 2017 at 5:11 PM, Denis Magda  > wrote:
>> 
>>> No more doubts on my side. +1 for Vladimir’s suggestion.
>>> 
>>> Denis
>>> 
On Saturday, October 7, 2017, Dmitriy Setrakyan  wrote:

 I now tend to agree with Vladimir. We should always require that some
 address is specified. The help menu should clearly state how to connect
 to a localhost.

 D.

 On Sat, Oct 7, 2017 at 12:44 AM, Vladimir Ozerov  wrote:
 
> Denis,
>
> Default Ignite configuration uses multicast, this is why you do not need
> to change anything. An Ignite node is always both a server (listens) and
> a client (connects).
>
> This will not work for ignitesql, as this is a client. And in real
> deployments it will connect to remote nodes, not local. So the earlier we
> explain to the user how to do this, the better. This is why it should not
> work out of the box connecting to 127.0.0.1. No magic for users please.
>
> This is what the user will see (draft):
>
>   ./ignitesql.sh
>   Please specify the host: ignitesql.sh [host]; type --help for more
>   information.
>   ./ignitesql.sh 192.168.12.55
>   Connected successfully.
>
> Again, specifying parameters manually is not poor UX. This is excellent
> UX, as the user learns on his own how to connect to a node in 1 minute.
> Most command line tools work this way.
>
> Sat, Oct 7, 2017 at 7:12, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:
>
>> How does the binding happen? Can we bind to everything, like we do in
>> Ignite?
>>
>> On Fri, Oct 6, 2017 at 2:51 PM, Denis Magda  wrote:
>>
>>> Thought over 127.0.0.1 as a default host once again. The bad thing
>>> about it is that the user gets a lengthy exception stack trace if
>>> Ignite is not running locally, and not a small error message.
>>>
>>> What are the other opinions on this? Do we want to follow Vladimir’s
>>> suggestion forcing to set the host name/IP (port is 

Re: Clarification on DROP TABLE command needed

2017-10-17 Thread Dmitriy Setrakyan
Vladimir,

Is the cache purged synchronously or in the background?

D.

On Tue, Oct 17, 2017 at 12:10 AM, Vladimir Ozerov 
wrote:

> Hi Denis,
>
> 1) Yes, cache is destroyed and data is purged.
> 2) We acquire a table lock, so that DML and SELECT statements will be
> blocked until the operation is finished. An exception about the missing
> table will be thrown afterwards.
>
> On Tue, Oct 17, 2017 at 1:40 AM, Denis Magda  wrote:
>
> > Alex P., Vladimir,
> >
> > I’m finalizing DROP TABLE command documentation and unsure what happens
> > with the following once the command is issued:
> >
> > 1. Do we destroy the underlying cache and purge all its content?
> >
> > 2. What happens to DML operations targeting a table that is somewhere in
> > the middle of the drop process? Do we block such DML commands until the
> > DROP TABLE is over, or reject them right away with some special
> > exception?
> >
> > —
> > Denis
>
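
Per the answers above, the observable semantics can be illustrated with a
short SQL sequence (hypothetical table; the behavior described is taken
from this thread and has not been verified against a live cluster):

```sql
CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR) WITH "template=replicated";
INSERT INTO city (id, name) VALUES (1, 'Forest Hill');

-- DROP TABLE destroys the underlying cache and purges its data.
-- Concurrent DML/SELECT statements block on the table lock until the
-- drop completes, then fail with a "missing table" exception.
DROP TABLE city;

SELECT * FROM city;  -- fails: the table no longer exists
```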


[jira] [Created] (IGNITE-6656) SQLLine Documentation

2017-10-17 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-6656:
---

 Summary: SQLLine Documentation
 Key: IGNITE-6656
 URL: https://issues.apache.org/jira/browse/IGNITE-6656
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
Reporter: Denis Magda
Assignee: Prachi Garg
Priority: Blocker


SQLLine is officially used as the default command line tool for SQL connectivity 
in Ignite. Document the tool usage on a dedicated page:
https://apacheignite-sql.readme.io/docs/sqlline

Consider the following sections:
# The tool overview with a list of commands that are and are not supported by 
Ignite. The current list is here [1]. Split the list into 2 parts on readme.io 
(supported and not) and briefly describe each command.
# Connectivity section. Start a cluster and connect to it with the 
{{./ignitesql.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/}} 
script. Add a tab for the ignite.bat file as well.
# Usage section. Execute DDL and DML commands taken from the getting started 
guide, as it's done here [2]. When tables and indexes are created, run 
{{!metadata}} to show the structure.

Use private 2.3 builds generated by TC to make sure the doc is ready for the 
release.

[1] https://cwiki.apache.org/confluence/display/IGNITE/Overview+sqlline+tool
[2] https://apacheignite-sql.readme.io/v2.1/docs/sql-tooling



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Persistence per memory policy configuration

2017-10-17 Thread Dmitriy Setrakyan
I am now confused. Can I please ask for the final configuration again? What
will it look like?

On Tue, Oct 17, 2017 at 1:16 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Agree with Ivan. If we implemented backward compatibility, this would be
> completely counterintuitive behavior, so +1 to keep the behavior as is.
>
> As for the swap path, I see nothing wrong with having it for in-memory
> caches. This is a simple overflow mechanism that works fine if you do not
> need persistence guarantees.
>
> 2017-10-16 21:00 GMT+03:00 Ivan Rakov :
>
> > *swapPath* is ok for me. It is also consistent with *walPath* and
> > *walArchivePath*.
> >
> > Regarding persistencePath/storagePath, I don't like the idea when path
> for
> > WAL is implicitly changed, especially when we have separate option for
> it.
> > WAL and storage files are already located under same $IGNITE_HOME root.
> > From user perspective, there's no need to change root for all
> > persistence-related directories as long as $IGNITE_HOME points to the
> > correct disk.
> > From developer perspective, this change breaks backwards compatibility.
> > Maintaining backwards compatibility in fail-safe way (checking both
> > old-style and new-style paths) is complex and hard to maintain in the
> > codebase.
> >
> > Best Regards,
> > Ivan Rakov
> >
> > My vote is for *storagePath* and keeping behavior as is.
> >
> >
> > On 16.10.2017 16:53, Pavel Tupitsyn wrote:
> >
> >> Igniters, another thing to consider:
> >>
> >> DataRegionConfiguration.SwapFilePath should be SwapPath,
> >> since this is actually not a single file, but a directory path.
> >>
> >> On Fri, Oct 13, 2017 at 7:53 PM, Denis Magda  wrote:
> >>
> >> Seems I've got what you’re talking about.
> >>>
> >>> I’ve tried to change the root directory (*persistencePath*) and saw
> >>> that only data/indexes were placed in it, while the WAL stayed
> >>> somewhere in my work dir. It works counterintuitively and causes
> >>> unproductive discussions like the one we are in, arguing about
> >>> *persistencePath* vs. *storagePath*. Neither name fits this behavior.
> >>>
> >>> My suggestion will be the following:
> >>> - *persistencePath* refers to the path of all storage files
> >>> (data/indexes,
> >>> wal, archive). If the path is changed *all the files* will be under the
> >>> new
> >>> directory unless *setWalPath* and *setWalArchivePath* are set
> >>> *explicitly*.
> >>> - *setWalPath* overrides the default location of WAL (which is
> >>> setPersistencePath)
> >>> - *setWalArchivePath* overrides the default location of the archive
> >>> (which again defaults to setPersistencePath).
> >>>
> >>> If we follow this approach, the configuration and behavior become
> >>> intuitive.
> >>> Thoughts?
> >>>
> >>> —
> >>> Denis
> >>>
> >>> On Oct 13, 2017, at 1:21 AM, Ivan Rakov  wrote:
> 
>  Denis,
> 
>  Data/index storage and WAL are located under the same root by default.
>  However, this is not mandatory: *storagePath* and *walPath* properties
> 
> >>> can contain both absolute and relative paths. If paths are absolute,
> >>> storage and WAL can reside on different devices, like this:
> >>>
>  storagePath: /storage1/NMVe_drive/storage
> > walPath: /storage2/Big_SSD_drive/wal
> >
>  We even recommend this in tuning guide: https://apacheignite.readme.
> 
> >>> io/docs/durable-memory-tuning
> >>>
>  That's why I think *persistencePath* is misleading.
> 
>  Best Regards,
>  Ivan Rakov
> 
>  On 13.10.2017 5:03, Dmitriy Setrakyan wrote:
> 
> > On Thu, Oct 12, 2017 at 7:01 PM, Denis Magda 
> >
>  wrote:
> >>>
>   From what I see after running an example they are under the same root
> >> folder and in different subdirectories. The root folder should be
> >>
> > defined
> >>>
>  by setPersistencePath as I guess.
> >>
> >> If that is the case, then you are right. Then we should not have
> > storagePath or WalPath, and store them both under "persistencePath"
> >
>  root.
> >>>
>  However, I would need Alexey Goncharuk or Ivan Rakov to confirm this.
> >
> >
> >>>
> >
>
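For reference, the split the thread converges on (a common storage root, with WAL and its archive overridable onto a separate device) can be sketched in Java roughly as follows. This is a sketch only: it assumes the Ignite 2.3 DataStorageConfiguration property names discussed in IGNITE-6030, and the paths are the illustrative ones from Ivan's example; verify the setter names against the released javadoc.

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch only: assumes the 2.3 DataStorageConfiguration API from IGNITE-6030.
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Checkpoint storage (data/index pages) on one device...
storageCfg.setStoragePath("/storage1/NMVe_drive/storage");

// ...WAL and its archive explicitly overridden onto another.
storageCfg.setWalPath("/storage2/Big_SSD_drive/wal");
storageCfg.setWalArchivePath("/storage2/Big_SSD_drive/wal-archive");

cfg.setDataStorageConfiguration(storageCfg);
```

When the WAL setters are omitted, both directories stay under the same root, which matches the "keep behavior as is" outcome of the vote above.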


Re: ContinuousQueryWithTransformer implementation questions - 2

2017-10-17 Thread Николай Ижиков
Hello, guys.

Anton, Yakov, can you please share your wisdom and make a final review of
IGNITE-425?

Task: https://issues.apache.org/jira/browse/IGNITE-425
PR: https://github.com/apache/ignite/pull/2372

2017-09-18 18:33 GMT+03:00 Николай Ижиков :

> So, to sum up:
>
> 1) My solution reduces network communication.
> As far as I know, a lot of people want this feature in Ignite 2.x.
> It's impossible to get the API perfect right now; that would take months.
> My solution is ready right now! Let's merge it and refactor the whole
> Continuous Query API in 3.0.
>
> 2) The current API is bad, and, yes, my changes make it a little bit more
> complicated.
> But the added complexity is kept to a minimum, and the benefit far
> outweighs it.
>
> 2017-09-18 17:39 GMT+03:00 Николай Ижиков :
>
>> Vladimir,
>>
>> Here is a short summary of what is wrong with continuous query
>>
>>
>> OK.
>> I agree - these are problems of the current API.
>>
>> How can we fix them without merging ContinuousQueryWithTransformer?
>> How can we quickly design, discuss and implement a new API?
>> At the moment there is no ticket to start working on.
>> Moreover, we can't throw ContinuousQuery away until version 3.0.
>>
>> > What is worse, this interface is inconsistent with JCache event
>> listeners, which distinguish between create, update and delete events.
>>
>> Can't agree with you.
>>
>> 1. As far as I know, JCache doesn't have any Transformer concept.
>> 2. We can distinguish create, update and delete events in the transformer
>> and push that knowledge to the listener.
>>
>>
>>> What I see in your PR is that we add one more confusing concept - in
>>> addition to wrongly named "local listener" now we will also have
>>> "TransformedEventListener".
>>>
>>
>> I think the usage of the JCache API in Ignite-specific classes is one more
>> issue with the existing ContinuousQuery.
>> We should use an Ignite-only API for Ignite-only features and use
>> wrappers to provide external API support.
>>
>> For these reasons I would still prefer to think of better continuous
>>> queries design first instead of making current API even more complicated.
>>>
>>
>> I think the main purpose of any feature is to provide value to the user.
>> Transformers add value to the product because using a transformer can
>> lead to a significant performance win.
>>
>>
>>> Vladimir.
>>>
>>> On Mon, Sep 18, 2017 at 4:04 PM, Николай Ижиков 
>>> wrote:
>>>
>>> > Igniters,
>>> >
>>> > I discussed API of ContinuousQuery and ContinuousQueryWithTransformer
>>> with
>>> > Anton Vinogradov one more time.
>>> >
>>> > Since users of the regular ContinuousQuery already know the pros and
>>> > cons of using initialQuery, and to avoid complicating the API more and
>>> > more until 3.0, we agreed that the best choice is to stay with the
>>> > existing initialQuery that returns Cache.Entry for
>>> > ContinuousQueryWithTransformer.
>>> >
>>> > Notice that initialQuery is not required and can be null.
>>> >
>>> > Thoughts?
>>> >
>>> > 2017-09-15 1:45 GMT+03:00 Denis Magda :
>>> >
>>> > > Vladimir,
>>> > >
>>> > > If the API is so bad then it might take much more time to make up and
>>> > roll
>>> > > out the new. Plus, there should be a community member who is ready to
>>> > take
>>> > > it over. My suggestion would be to accept this contribution and
>>> initiate
>>> > an
>>> > > activity towards the new API if you like.
>>> > >
>>> > > Personally, I consider this API one of the clearest we have, based
>>> > > on my practical usage experience. I was aware of the initial query’s
>>> > > pitfalls, but isn’t that something we can document?
>>> > >
>>> > > —
>>> > > Denis
>>> > >
>>> > > > On Sep 12, 2017, at 6:04 AM, Vladimir Ozerov >> >
>>> > > wrote:
>>> > > >
>>> > > > My opinion is that our query API is a big piece of ... you know,
>>> > > > especially ContinuousQuery. A lot of concepts and features are
>>> > > > mixed in a single entity, which makes it hard to understand and
>>> > > > use. Let's finally deprecate ContinuousQuery and design a nice and
>>> > > > consistent API. E.g.:
>>> > > >
>>> > > > interface IgniteCache {
>>> > > >UUID addListener(CacheEntryListener listener)
>>> > > >void removeListener(UUID listenerId);
>>> > > > }
>>> > > >
>>> > > > This method sets a listener on all nodes, which will process
>>> > > > events locally, with no network communication. Now if you want
>>> > > > semantics similar to existing continuous queries, you use a
>>> > > > special entry listener type:
>>> > > >
>>> > > > class ContinuousQueryCacheEntryListener implements
>>> CacheEntryListener
>>> > {
>>> > > >ContinuousQueryRemoteFilter rmtFilter;
>>> > > >ContinuousQueryRemoteTransformer rmtTransformer;
>>> > > >ContinuousQueryLocalCallback locCb;
>>> > > > }
>>> > > >
>>> > > > Last, "initial query" concept should be dropped from "continuous
>>> query"
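The core benefit argued for in this thread, that a remote transformer ships only a derived value instead of the whole entry, can be illustrated with plain Java. This is not Ignite API: `remoteSide`, `TransformerSketch`, and the entry list are illustrative stand-ins for the remote filter/transformer pipeline.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

/**
 * Plain-Java sketch (no Ignite APIs) of the transformer idea from this
 * thread: the "remote" side applies a transformer and ships only the
 * derived value to the listener instead of the whole cache entry.
 */
public class TransformerSketch {
    /** Simulates the remote node: transforms each update before "sending". */
    static <K, V, T> List<T> remoteSide(List<Map.Entry<K, V>> updates,
                                        Function<Map.Entry<K, V>, T> transformer) {
        List<T> wire = new ArrayList<>();
        for (Map.Entry<K, V> e : updates)
            wire.add(transformer.apply(e)); // only the derived value crosses the wire
        return wire;
    }

    public static void main(String[] args) {
        List<Map.Entry<Integer, String>> updates = List.of(
            new AbstractMap.SimpleEntry<>(1, "alpha"),
            new AbstractMap.SimpleEntry<>(2, "bravo!"));

        // Transformer extracts only the value length; the local listener
        // never receives the (potentially large) value itself.
        List<Integer> received = remoteSide(updates, e -> e.getValue().length());

        System.out.println(received); // [5, 6]
    }
}
```

The smaller the transformed value is relative to the entry, the bigger the network win, which is the performance argument made above.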

[GitHub] ignite pull request #2873: Ignite 2.1.6.b2

2017-10-17 Thread mcherkasov
GitHub user mcherkasov opened a pull request:

https://github.com/apache/ignite/pull/2873

Ignite 2.1.6.b2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.1.6.b2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2873


commit e9139d4916a22acab22008663e89f963f3d3a6b5
Author: sboikov 
Date:   2017-08-02T14:25:31Z

Call updateRebalanceVersion after evictions (was broken in 
c6fbe2d82a9f56f96c94551b09e85a12d192f32e);

(cherry picked from commit b277682)

commit 615a582b83d9acdd5703995f52f97e70027e0639
Author: Dmitriy Shabalin 
Date:   2017-08-21T12:34:59Z

IGNITE-4728 Fixed get params for saved state.
(cherry picked from commit 97b3cef)

commit 891a9e0001bd0d77e0bafc3ad109d04d02a9b7ff
Author: sboikov 
Date:   2017-08-21T10:21:44Z

ignite-6124 Merge exchanges for multiple discovery events

(cherry picked from commit bebf299799712b464ee0e3800752ecc07770d9f0)

commit 96d86acdfe1a251758c6915b473ceb20514cbbad
Author: sboikov 
Date:   2017-08-21T13:19:18Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit e9f729fcb9ba7b555aed246a834e385ce2c88795
Author: Alexey Goncharuk 
Date:   2017-08-21T10:49:22Z

ignite-5872 Replace standard java maps for partition counters with more 
effective data structures

(cherry picked from commit 918c409)

commit 8e672d4a83cb20136d375818505cacccf6c9b4fd
Author: sboikov 
Date:   2017-08-21T13:39:29Z

Changed ignite version.

commit fd5d83c44624baea9466c591f119195cb092df4c
Author: Alexey Kuznetsov 
Date:   2017-08-21T15:06:36Z

IGNITE-6104 Fixed target.
(cherry picked from commit 8775d2c)

commit cca9117c805d144ff451694e7324720f8d466d94
Author: sboikov 
Date:   2017-08-21T15:39:12Z

ignite-5872 Fixed backward compatibility

commit dc9b2660723b325e1d07991cd0936ee27f624a81
Author: sboikov 
Date:   2017-08-21T15:39:36Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit 1072654c8762113bc49a658f2898475381b47832
Author: Sergey Chugunov 
Date:   2017-08-21T17:05:05Z

IGNITE-6052 class rename was reverted

commit cf97f3dd903549d3b5968ceb5ca9eda29ef521cf
Author: Dmitriy Govorukhin 
Date:   2017-08-21T17:04:04Z

ignite-6035 Clear indexes on cache clear/destroy

Signed-off-by: Andrey Gura 

commit 47b84ec62fb988faad438bdfde4ee48165a4065f
Author: Andrey Gura 
Date:   2017-08-21T17:36:56Z

Visor compilation issues are fixed.

commit 6e0fd7222e64b3a5e05204b300b62f60e3142e40
Author: Ilya Lantukh 
Date:   2017-08-21T18:12:41Z

ignite-gg-12639 Fill factor metric is not working with enabled persistence

Signed-off-by: Andrey Gura 

commit bbafa1e946121b7143101d4060effc48ed74a0b3
Author: Alexey Kuznetsov 
Date:   2017-08-22T03:59:15Z

IGNITE-6136 Web Console: implemented universal version check.
(cherry picked from commit 1a42e78)

commit 2a2e8407aaa99a56f79a861fc1e201dd183eea11
Author: vsisko 
Date:   2017-08-22T04:08:21Z

IGNITE-6131 Visor Cmd: Fixed "cache -a" command output cache entries count.
(cherry picked from commit 1f5054a)

commit db64729fc9ebb0217f06b0cf9d5e53ab8d657510
Author: sboikov 
Date:   2017-08-22T08:29:32Z

ignite-6124 Fixed NPE in GridDhtPartitionsExchangeFuture.topologyVersion 
after future cleanup.

(cherry picked from commit 2c9057a)

commit 5b7724714264c14cc10f4b25abc9234387224e4b
Author: Ilya Lantukh 
Date:   2017-08-22T08:50:35Z

Fixed javadoc format.

commit 785a85eb0155444b3eef48cf373a990dc4c8c6dd
Author: sboikov 
Date:   2017-08-22T09:24:03Z

ignite-5872 GridDhtPartitionsSingleMessage.partitionUpdateCounters should 
not return null.

commit 6b506e774c59b64fc6254ea151699c852620a408
Author: sboikov 
Date:   2017-08-22T09:24:21Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit 160d9b7c707efc359b4014aa1a481dc0fbbf596f
Author: Ilya Lantukh 
Date:   2017-08-22T11:10:10Z

Fixed flaky test.

commit 9ed4b72044ba1b2c105761b431625736166af7e7
Author: Alexey Goncharuk 
Date:   2017-08-01T09:25:25Z

master - Fixed visor compilation after merge

commit 16b819a6131c95a30d8dfaefbac6f6593826258b
Author: Ilya Lantukh 

Re: [Ignite 2.3 release]: Java 9 in compatibility mode

2017-10-17 Thread Dmitriy Setrakyan
On Tue, Oct 17, 2017 at 9:11 AM, Denis Magda  wrote:

> Considering the scope of the work to be done it sounds reasonable to move
> this task to the next release. Anyway, Java 9 was released just a couple of
> weeks ago and it’s normal to spend some time for the code adaptation
> avoiding any rush. We can release the next version by the end of the year
> which is ~2 months away.
>
>
I agree, let's push it to 2.4. However, I hope the community gets behind
the Java 1.9 support and our users will not have to wait for too long. End
of the year looks like a good target.


Re: [Ignite 2.3 release]: Java 9 in compatibility mode

2017-10-17 Thread Denis Magda
Considering the scope of the work to be done it sounds reasonable to move this 
task to the next release. Anyway, Java 9 was released just a couple of weeks 
ago and it’s normal to spend some time for the code adaptation avoiding any 
rush. We can release the next version by the end of the year which is ~2 months 
away.

—
Denis
 
> On Oct 17, 2017, at 5:50 AM, Vladimir Ozerov  wrote:
> 
> Guys,
> 
> I analyzed what was done by Evgeniy Zhuravlev to make Ignite work on Java 9
> (it was done long ago for Ignite 1.x):
> 1) We need to create two additional modules - one for Java 9 code, the
> other for Java 7/8. Each module will contain version-specific stuff. We
> will also need some additional logic to load one module or the other based
> on the Java version. Maybe some Maven magic will be required as well.
> 2) We need to fix a number of places:
> 2.1) IgnitionEx, GridDiscoveryManager - version resolution logic, minor
> thing
> 2.2) IgniteUtils.close - rely on deleted SharedSecrets.getJavaNetAccess().
> getURLClassPath()
> 2.3) ATOMIC cache - rely on deleted Unsafe.monitorEnter()/monitorExit()
> 2.4) GridRestProtocolAdapter - rely on deleted BASE64Encoder
> 2.5) DirectByteBufferStreamImpl - rely on no longer accessible DirectBuffer.
> address()
> 2.6) GridNioServer, PlatformMemoryPool - rely on refactored Cleaner.
> 3) We need to execute full set of tests on TC before release. This will
> require additional efforts on TC configuration.
> 
> I propose to move this task to AI 2.4 as it might take weeks.
> 
> Vladimir.
> 
> On Fri, Oct 13, 2017 at 2:33 AM, Denis Magda  wrote:
> 
>> Igniters,
>> 
>> Putting aside the compilation task [1] on Java 9, nobody can start
>> Ignite even in compatibility mode on that version of the JDK. That’s the
>> reason [2].
>> 
>> Let’s fix this for 2.3 by expanding the “if” clause to -
>> !U.jdkVersion().contains("1.9"))
>> 
>> Any objections?
>> 
>> [1] https://issues.apache.org/jira/browse/IGNITE-4615 <
>> https://issues.apache.org/jira/browse/IGNITE-4615>
>> [2] https://github.com/apache/ignite/blob/master/modules/
>> core/src/main/java/org/apache/ignite/internal/IgnitionEx.java#L186 <
>> https://github.com/apache/ignite/blob/master/modules/
>> core/src/main/java/org/apache/ignite/internal/IgnitionEx.java#L186>
>> 
>> —
>> Denis
>> 
>> 
>>> Begin forwarded message:
>>> 
>>> From: Paolo Di Tommaso 
>>> Subject: Re: Java 9
>>> Date: October 11, 2017 at 11:41:24 PM PDT
>>> To: u...@ignite.apache.org
>>> Reply-To: u...@ignite.apache.org
>>> 
>>> Hi Denis,
>>> 
>>> Not even in compatibility mode? I mean adding some --add-opens options <
>> https://docs.oracle.com/javase/9/migrate/toc.htm#
>> JSMIG-GUID-2F61F3A9-0979-46A4-8B49-325BA0EE8B66> to access
>> deprecated/internal apis?
>>> 
>>> Almost any existing Java app works this way. I've tried that, but
>> it seems Ignite is throwing an exception because the Java version number
>> does not match the expected pattern.
>>> 
>>> Any workaround ?
>>> 
>>> Cheers,
>>> Paolo
>>> 
>>> 
>>> On Thu, Oct 12, 2017 at 1:25 AM, Denis Magda  wrote:
>>> Hi Paolo,
>>> 
>>> There is some work to do to make Ignite running on Java 9:
>>> https://issues.apache.org/jira/browse/IGNITE-4615 <
>> https://issues.apache.org/jira/browse/IGNITE-4615>
>>> 
>>> Guess the version will be supported by the end of the year.
>>> 
>>> —
>>> Denis
>>> 
 On Oct 11, 2017, at 2:08 PM, Paolo Di Tommaso <
>> paolo.ditomm...@gmail.com > wrote:
 
 Hi,
 
 What is the minimal Ignite version that can run on Java 9?
 
 I'm trying Ignite 1.9 and I'm getting
 
 
 Caused by: java.lang.IllegalStateException: Ignite requires Java 7 or
>> above. Current Java version is not supported: 9
 at org.apache.ignite.internal.IgnitionEx.(
>> IgnitionEx.java:185)
 
 
 
 
 Thanks,
 Paolo
 
>>> 
>>> 
>> 
>> 
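Of the places Vladimir lists, item 2.4 (GridRestProtocolAdapter relying on the removed BASE64Encoder) has a straightforward standard-library replacement: java.util.Base64, available since Java 8 and unaffected by Java 9's encapsulation of internal APIs. A minimal sketch of the migration (the credential string is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] raw = "admin:secret".getBytes(StandardCharsets.UTF_8);

        // Old code used sun.misc.BASE64Encoder, which is gone in Java 9.
        // java.util.Base64 (Java 8+) is the supported replacement.
        String encoded = Base64.getEncoder().encodeToString(raw);
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);                              // YWRtaW46c2VjcmV0
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // admin:secret
    }
}
```

The other items (Unsafe monitor methods, SharedSecrets, DirectBuffer.address, Cleaner) have no drop-in stdlib equivalents, which is why the version-specific modules in item 1 are needed.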



[jira] [Created] (IGNITE-6655) gracefully handle AmazonS3Exception: Slow Down in TcpDiscoveryS3IpFinder

2017-10-17 Thread Konstantin Dudkov (JIRA)
Konstantin Dudkov created IGNITE-6655:
-

 Summary: gracefully handle AmazonS3Exception: Slow Down in 
TcpDiscoveryS3IpFinder
 Key: IGNITE-6655
 URL: https://issues.apache.org/jira/browse/IGNITE-6655
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.1
Reporter: Konstantin Dudkov
Assignee: Konstantin Dudkov


Currently, if we get an "AmazonS3Exception: Slow Down" in
TcpDiscoveryS3IpFinder, the node stops.

We should handle this situation with some kind of backoff algorithm.
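The kind of retry loop the ticket asks for can be sketched in plain Java. This is illustrative only, not the actual TcpDiscoveryS3IpFinder code: `withBackoff` and the simulated "Slow Down" failure are stand-ins, and real code would cap the delay and add jitter.

```java
import java.util.concurrent.Callable;

/**
 * Sketch of an exponential-backoff retry loop for transient throttling
 * errors. Names are illustrative; not the actual S3 IP finder code.
 */
public class BackoffSketch {
    static <T> T withBackoff(Callable<T> call, int maxRetries, long baseDelayMs)
            throws Exception {
        long delay = baseDelayMs;
        for (int attempt = 0; ; attempt++) {
            try {
                return call.call();
            }
            catch (Exception e) {
                if (attempt >= maxRetries)
                    throw e; // give up after maxRetries transient failures
                Thread.sleep(delay);
                delay *= 2; // exponential growth; real code would cap it and add jitter
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // simulate two throttled responses, then success
        String res = withBackoff(() -> {
            if (failures[0]-- > 0)
                throw new RuntimeException("Slow Down"); // stands in for AmazonS3Exception
            return "addresses";
        }, 5, 10);
        System.out.println(res);
    }
}
```

Instead of stopping the node on the first throttled response, the finder would retry the S3 listing with growing delays and fail only after the retry budget is exhausted.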





[jira] [Created] (IGNITE-6654) Ignite client can hang in case IgniteOOM on server

2017-10-17 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6654:
-

 Summary: Ignite client can hang in case IgniteOOM on server
 Key: IGNITE-6654
 URL: https://issues.apache.org/jira/browse/IGNITE-6654
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: cache, general
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov


Ignite client can hang in case IgniteOOM on server





New Compatibility Testing Framework module in the project

2017-10-17 Thread Vyacheslav Daradur
Hi, Igniters!

I would like to announce new Compatibility Testing Framework module in
the project.

This module has been recently included in the project [1].

The framework makes it possible to start and work with Ignite
instances of previously released versions.

The entire module is built on top of the Ignite Testing Framework,
especially on the multi-JVM-mode classes. There is a class,
IgniteCompatibilityAbstractTest, that provides methods to start Ignite
nodes of previously released versions in a separate JVM and allows
them to join the topology. The framework looks for artifacts of the
requested version in the local Maven repository; if they don’t exist
there, they will be downloaded and stored via Maven.

The main implemented API:
startGrid(name, version, configurationClosure);
startGrid(name, version, configurationClosure, postStartupClosure);

You simply specify the version of Ignite you want to start, define the
configuration in the configurationClosure and set the actions to run
on the started node in the postStartupClosure. It’s very easy to use
for writing unit tests; here is a simple example [2] that demonstrates
the main functionality.

I hope this framework helps us to make our project even better.

I want to thank Anton Vinogradov for his help with API design and
Dmitriy Pavlov for sharing first-time user experience [3] [4].


[1] https://issues.apache.org/jira/browse/IGNITE-5732 - Provide API to
test compatibility with old releases
[2] 
https://github.com/apache/ignite/blob/master/modules/compatibility/src/test/java/org/apache/ignite/compatibility/persistence/DummyPersistenceCompatibilityTest.java
[3] 
http://apache-ignite-developers.2346864.n4.nabble.com/Binary-compatibility-of-persistent-storage-tp22419p22913.html
[4] https://issues.apache.org/jira/browse/IGNITE-6285 - Enhance
persistent store paths on start

-- 
Best Regards, Vyacheslav D.


[jira] [Created] (IGNITE-6653) Check configuration equality between metastore and XML

2017-10-17 Thread Sergey Puchnin (JIRA)
Sergey Puchnin created IGNITE-6653:
--

 Summary: Check configuration equality between metastore and XML
 Key: IGNITE-6653
 URL: https://issues.apache.org/jira/browse/IGNITE-6653
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.3
Reporter: Sergey Puchnin


As another point of validation, we need to avoid joins of old-version nodes.





[jira] [Created] (IGNITE-6652) Gracefully deny joins of nodes with older version when baseline topology is set up

2017-10-17 Thread Sergey Puchnin (JIRA)
Sergey Puchnin created IGNITE-6652:
--

 Summary: Gracefully deny joins of nodes with older version when 
baseline topology is set up
 Key: IGNITE-6652
 URL: https://issues.apache.org/jira/browse/IGNITE-6652
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.3
Reporter: Sergey Puchnin


Baseline should save attributes for the AF (affinity function) to the metastore.





[jira] [Created] (IGNITE-6651) Baseline includes only attributes required for affinity function

2017-10-17 Thread Sergey Puchnin (JIRA)
Sergey Puchnin created IGNITE-6651:
--

 Summary: Baseline includes only attributes required for affinity 
function
 Key: IGNITE-6651
 URL: https://issues.apache.org/jira/browse/IGNITE-6651
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.3
Reporter: Sergey Puchnin


We need to design and implement an effective baseline topology format for 
metastore so that metastore does not grow too fast.





[GitHub] ignite pull request #2872: IGNITE-5195 : DataStreamer remap optimization.

2017-10-17 Thread ilantukh
GitHub user ilantukh opened a pull request:

https://github.com/apache/ignite/pull/2872

IGNITE-5195 : DataStreamer remap optimization.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5195

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2872


commit caa7f7c833a162a01edacb241634515cdce67ec2
Author: Ilya Lantukh 
Date:   2017-10-16T14:56:18Z

ignite-5195 : DataStreamer remap optimization.

commit 68b13fa248393c8ab75686c2956e84a2c5177839
Author: Ilya Lantukh 
Date:   2017-10-16T15:03:48Z

ignite-5195 : Added test.

commit e35b2d6508bd8b96c10d245253e066cdc2839a01
Author: Ilya Lantukh 
Date:   2017-10-17T13:50:53Z

ignite-5195 : Fixed failing tests, simplified solution.




---


[GitHub] ignite pull request #2862: IGNITE-5195 : DataStreamer remap optimization.

2017-10-17 Thread ilantukh
Github user ilantukh closed the pull request at:

https://github.com/apache/ignite/pull/2862


---


[GitHub] ignite pull request #2871: Ignite-gg-12960

2017-10-17 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/2871

Ignite-gg-12960



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-12960

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2871


commit 3b613407c056c01a4eda0c77612a08b8468886d5
Author: Alexey Goncharuk 
Date:   2017-08-02T08:25:08Z

IGNITE-5757 - Rent partitions on exchange completion

(cherry picked from commit c6fbe2d)

commit 762b533d3666903dc4119b5a5252373354bb98e1
Author: sboikov 
Date:   2017-08-21T12:22:18Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit e9139d4916a22acab22008663e89f963f3d3a6b5
Author: sboikov 
Date:   2017-08-02T14:25:31Z

Call updateRebalanceVersion after evictions (was broken in 
c6fbe2d82a9f56f96c94551b09e85a12d192f32e);

(cherry picked from commit b277682)

commit 615a582b83d9acdd5703995f52f97e70027e0639
Author: Dmitriy Shabalin 
Date:   2017-08-21T12:34:59Z

IGNITE-4728 Fixed get params for saved state.
(cherry picked from commit 97b3cef)

commit 891a9e0001bd0d77e0bafc3ad109d04d02a9b7ff
Author: sboikov 
Date:   2017-08-21T10:21:44Z

ignite-6124 Merge exchanges for multiple discovery events

(cherry picked from commit bebf299799712b464ee0e3800752ecc07770d9f0)

commit 96d86acdfe1a251758c6915b473ceb20514cbbad
Author: sboikov 
Date:   2017-08-21T13:19:18Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit e9f729fcb9ba7b555aed246a834e385ce2c88795
Author: Alexey Goncharuk 
Date:   2017-08-21T10:49:22Z

ignite-5872 Replace standard java maps for partition counters with more 
effective data structures

(cherry picked from commit 918c409)

commit 8e672d4a83cb20136d375818505cacccf6c9b4fd
Author: sboikov 
Date:   2017-08-21T13:39:29Z

Changed ignite version.

commit fd5d83c44624baea9466c591f119195cb092df4c
Author: Alexey Kuznetsov 
Date:   2017-08-21T15:06:36Z

IGNITE-6104 Fixed target.
(cherry picked from commit 8775d2c)

commit cca9117c805d144ff451694e7324720f8d466d94
Author: sboikov 
Date:   2017-08-21T15:39:12Z

ignite-5872 Fixed backward compatibility

commit dc9b2660723b325e1d07991cd0936ee27f624a81
Author: sboikov 
Date:   2017-08-21T15:39:36Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit 1072654c8762113bc49a658f2898475381b47832
Author: Sergey Chugunov 
Date:   2017-08-21T17:05:05Z

IGNITE-6052 class rename was reverted

commit cf97f3dd903549d3b5968ceb5ca9eda29ef521cf
Author: Dmitriy Govorukhin 
Date:   2017-08-21T17:04:04Z

ignite-6035 Clear indexes on cache clear/destroy

Signed-off-by: Andrey Gura 

commit 47b84ec62fb988faad438bdfde4ee48165a4065f
Author: Andrey Gura 
Date:   2017-08-21T17:36:56Z

Visor compilation issues are fixed.

commit 6e0fd7222e64b3a5e05204b300b62f60e3142e40
Author: Ilya Lantukh 
Date:   2017-08-21T18:12:41Z

ignite-gg-12639 Fill factor metric is not working with enabled persistence

Signed-off-by: Andrey Gura 

commit bbafa1e946121b7143101d4060effc48ed74a0b3
Author: Alexey Kuznetsov 
Date:   2017-08-22T03:59:15Z

IGNITE-6136 Web Console: implemented universal version check.
(cherry picked from commit 1a42e78)

commit 2a2e8407aaa99a56f79a861fc1e201dd183eea11
Author: vsisko 
Date:   2017-08-22T04:08:21Z

IGNITE-6131 Visor Cmd: Fixed "cache -a" command output cache entries count.
(cherry picked from commit 1f5054a)

commit db64729fc9ebb0217f06b0cf9d5e53ab8d657510
Author: sboikov 
Date:   2017-08-22T08:29:32Z

ignite-6124 Fixed NPE in GridDhtPartitionsExchangeFuture.topologyVersion 
after future cleanup.

(cherry picked from commit 2c9057a)

commit 5b7724714264c14cc10f4b25abc9234387224e4b
Author: Ilya Lantukh 
Date:   2017-08-22T08:50:35Z

Fixed javadoc format.

commit 785a85eb0155444b3eef48cf373a990dc4c8c6dd
Author: sboikov 
Date:   2017-08-22T09:24:03Z

ignite-5872 GridDhtPartitionsSingleMessage.partitionUpdateCounters should 
not return null.

commit 6b506e774c59b64fc6254ea151699c852620a408
Author: sboikov 
Date:   2017-08-22T09:24:21Z

Merge remote-tracking branch 'community/ignite-2.1.4' 

[jira] [Created] (IGNITE-6650) Introduce effective storage format for baseline topology

2017-10-17 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-6650:


 Summary: Introduce effective storage format for baseline topology
 Key: IGNITE-6650
 URL: https://issues.apache.org/jira/browse/IGNITE-6650
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.3
Reporter: Alexey Goncharuk


We need to design and implement an effective baseline topology format for 
metastore so that metastore does not grow too fast.





[GitHub] ignite pull request #2870: IGNITE-2766 Deterministically reopen cache after ...

2017-10-17 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/2870

IGNITE-2766 Deterministically reopen cache after client reconnect.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2766ex

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2870


commit 843566b74b3afdf78d3ede1a46adc927878c89e2
Author: Ilya Kasnacheev 
Date:   2017-10-17T12:53:05Z

IGNITE-2766 Deterministically reopen cache after client reconnect bis.




---


Re: [Ignite 2.3 release]: Java 9 in compatibility mode

2017-10-17 Thread Vladimir Ozerov
Guys,

I analyzed what was done by Evgeniy Zhuravlev to make Ignite work on Java 9
(it was done long ago for Ignite 1.x):
1) We need to create two additional modules - one for Java 9 code, the
other for Java 7/8. Each module will contain version-specific stuff. We
will also need some additional logic to load one module or the other based
on the Java version. Maybe some Maven magic will be required as well.
2) We need to fix a number of places:
2.1) IgnitionEx, GridDiscoveryManager - version resolution logic, minor
thing
2.2) IgniteUtils.close - rely on deleted SharedSecrets.getJavaNetAccess().
getURLClassPath()
2.3) ATOMIC cache - rely on deleted Unsafe.monitorEnter()/monitorExit()
2.4) GridRestProtocolAdapter - rely on deleted BASE64Encoder
2.5) DirectByteBufferStreamImpl - rely on no longer accessible DirectBuffer.
address()
2.6) GridNioServer, PlatformMemoryPool - rely on refactored Cleaner.
3) We need to execute full set of tests on TC before release. This will
require additional efforts on TC configuration.

I propose to move this task to AI 2.4 as it might take weeks.

Vladimir.

On Fri, Oct 13, 2017 at 2:33 AM, Denis Magda  wrote:

> Igniters,
>
> Putting aside the compilation task [1] on Java 9, nobody can start
> Ignite even in compatibility mode on that version of the JDK. That’s the
> reason [2].
>
> Let’s fix this for 2.3 by expanding the “if” clause to -
> !U.jdkVersion().contains("1.9"))
>
> Any objections?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4615 <
> https://issues.apache.org/jira/browse/IGNITE-4615>
> [2] https://github.com/apache/ignite/blob/master/modules/
> core/src/main/java/org/apache/ignite/internal/IgnitionEx.java#L186 <
> https://github.com/apache/ignite/blob/master/modules/
> core/src/main/java/org/apache/ignite/internal/IgnitionEx.java#L186>
>
> —
> Denis
>
>
> > Begin forwarded message:
> >
> > From: Paolo Di Tommaso 
> > Subject: Re: Java 9
> > Date: October 11, 2017 at 11:41:24 PM PDT
> > To: u...@ignite.apache.org
> > Reply-To: u...@ignite.apache.org
> >
> > Hi Denis,
> >
> > Not even in compatibility mode? I mean adding some --add-opens options <
> https://docs.oracle.com/javase/9/migrate/toc.htm#
> JSMIG-GUID-2F61F3A9-0979-46A4-8B49-325BA0EE8B66> to access
> deprecated/internal apis?
> >
> > Almost any existing Java app works this way. I've tried that, but
> it seems Ignite is throwing an exception because the Java version number
> does not match the expected pattern.
> >
> > Any workaround ?
> >
> > Cheers,
> > Paolo
> >
> >
> > On Thu, Oct 12, 2017 at 1:25 AM, Denis Magda  wrote:
> > Hi Paolo,
> >
> > There is some work to do to make Ignite running on Java 9:
> > https://issues.apache.org/jira/browse/IGNITE-4615 <
> https://issues.apache.org/jira/browse/IGNITE-4615>
> >
> > Guess the version will be supported by the end of the year.
> >
> > —
> > Denis
> >
> >> On Oct 11, 2017, at 2:08 PM, Paolo Di Tommaso <
> paolo.ditomm...@gmail.com > wrote:
> >>
> >> Hi,
> >>
> >> What is the minimal Ignite version that can run on Java 9?
> >>
> >> I'm trying Ignite 1.9 and I'm getting
> >>
> >>
> >> Caused by: java.lang.IllegalStateException: Ignite requires Java 7 or
> above. Current Java version is not supported: 9
> >>  at org.apache.ignite.internal.IgnitionEx.(
> IgnitionEx.java:185)
> >>
> >>
> >>
> >>
> >> Thanks,
> >> Paolo
> >>
> >
> >
>
>
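The fix Denis proposes boils down to relaxing the JDK version predicate so that Java 9 passes the startup check. A minimal sketch of such a predicate is below; the class name and exact condition are illustrative assumptions, not Ignite's actual `IgnitionEx` code (note that Java 9 reports a spec version of "9", not "1.9", which is why a "1.x"-shaped check rejects it):

```java
public class JdkVersionCheck {
    // Java 7/8 report a spec version of "1.7"/"1.8", while Java 9 reports "9".
    // A check expecting only the "1.x" pattern therefore fails on Java 9.
    static boolean isSupported(String specVersion) {
        return specVersion.startsWith("1.7")
            || specVersion.startsWith("1.8")
            || specVersion.equals("9");
    }

    public static void main(String[] args) {
        // Check the JVM we are actually running on.
        System.out.println(isSupported(System.getProperty("java.specification.version")));
    }
}
```

The same idea applies whether the check uses `java.specification.version` or a parsed `java.version` string, as long as the bare "9" form is accepted.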


[jira] [Created] (IGNITE-6649) Add EvictionPolicy factory support in IgniteConfiguration.

2017-10-17 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-6649:


 Summary: Add EvictionPolicy factory support in IgniteConfiguration.
 Key: IGNITE-6649
 URL: https://issues.apache.org/jira/browse/IGNITE-6649
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: cache
Reporter: Andrew Mashenkov
 Fix For: 2.4


For now, the only way to set an EvictionPolicy in IgniteConfiguration is to provide an 
EvictionPolicy instance. 
That looks error prone, as a user can easily share the instance between caches or 
cache reincarnations and get unexpected results.

E.g. it can cause an AssertionError if an EvictionPolicy instance is reused.
Steps to reproduce:

1. Create a CacheConfiguration object that will be reused.
2. Create and fill a cache.
3. Destroy the cache and create it again with the same CacheConfiguration object.
4. One of the next puts can fail with the stacktrace below.

The error is thrown when the EvictionPolicy tries to evict entries from a cache that 
has just been destroyed.

java.lang.AssertionError
at 
org.apache.ignite.internal.processors.cache.CacheEvictableEntryImpl.evict(CacheEvictableEntryImpl.java:71)
at 
org.apache.ignite.cache.eviction.lru.LruEvictionPolicy.shrink0(LruEvictionPolicy.java:275)
at 
org.apache.ignite.cache.eviction.lru.LruEvictionPolicy.shrink(LruEvictionPolicy.java:250)
at 
org.apache.ignite.cache.eviction.lru.LruEvictionPolicy.onEntryAccessed(LruEvictionPolicy.java:161)
at 
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.notifyPolicy(GridCacheEvictionManager.java:1393)
at 
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.touch(GridCacheEvictionManager.java:825)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.unlockEntries(GridDhtAtomicCache.java:3058)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1952)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1730)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.mapSingle(GridNearAtomicAbstractUpdateFuture.java:264)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:494)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:436)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:209)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1245)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:680)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2328)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2305)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1379)
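The hazard the ticket describes can be sketched with a simplified stand-in policy; the types below are NOT Ignite's actual EvictionPolicy API, they only illustrate why a per-cache factory is safer than one shared stateful instance:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Simplified stand-in for an eviction policy holding per-cache state.
class LruPolicy {
    final Deque<String> queue = new ArrayDeque<>();

    void onEntryAccessed(String key) {
        queue.remove(key);
        queue.addFirst(key);
    }
}

public class EvictionFactoryDemo {
    public static void main(String[] args) {
        // Shared instance: after the first cache is destroyed, its entries
        // still sit in the policy's queue when the cache is re-created,
        // so the policy may try to evict entries of a dead cache.
        LruPolicy shared = new LruPolicy();
        shared.onEntryAccessed("staleKey");
        System.out.println("shared policy queue after destroy: " + shared.queue);

        // Factory: every cache (re-)creation obtains a fresh, empty policy,
        // so no stale state can leak between cache incarnations.
        Supplier<LruPolicy> factory = LruPolicy::new;
        System.out.println("fresh policy queue: " + factory.get().queue);
    }
}
```

A `Factory<EvictionPolicy>` configuration property, analogous to the `Factory`-based properties JCache already uses, would give each cache its own instance.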
 




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2869: IGNITE-1804: Test is not flaky anymore

2017-10-17 Thread andrey-kuznetsov
GitHub user andrey-kuznetsov opened a pull request:

https://github.com/apache/ignite/pull/2869

IGNITE-1804: Test is not flaky anymore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrey-kuznetsov/ignite master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2869.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2869


commit ffc3d84245b7f40c6b7f91a5e799aa2cb31a9607
Author: Andrey Kuznetsov 
Date:   2017-10-17T12:25:33Z

IGNITE-1804: Reenabled 
GridCachePartitionedQueueCreateMultiNodeSelfTest.testTx: does not fail since 
v.2.0.




---


[GitHub] ignite pull request #2868: IGNITE 6648

2017-10-17 Thread oleg-ostanin
GitHub user oleg-ostanin opened a pull request:

https://github.com/apache/ignite/pull/2868

IGNITE 6648



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6648

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2868.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2868


commit c7a7e5a71fbac0cae5b38532784ce8f162018a2a
Author: oleg-ostanin 
Date:   2017-10-17T11:04:35Z

IGNITE-6648 added ML javadoc group in parent/pom.xml

commit 1241391f6a7bbb66980c3215a3a755fb16a2ba55
Author: oleg-ostanin 
Date:   2017-10-17T11:36:02Z

Merge branch 'ignite-2.3' of https://git-wip-us.apache.org/repos/asf/ignite 
into ignite-6648




---


[GitHub] ignite pull request #2864: IGNITE-6627 .NET: cache deserialization fails wit...

2017-10-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2864


---


Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Anton Vinogradov
Sounds awesome.

I'll try to review API & tests this week.

Val,
Your review is still required :)

On Tue, Oct 17, 2017 at 2:36 PM, Николай Ижиков 
wrote:

> Yes
>
> On Oct 17, 2017, at 2:34 PM, "Anton Vinogradov" <
> avinogra...@gridgain.com> wrote:
>
> > Nikolay,
> >
> > So, it will be able to start regular spark and ignite clusters and, using
> > peer classloading via spark-context, perform any DataFrame request,
> > correct?
> >
> > On Tue, Oct 17, 2017 at 2:25 PM, Николай Ижиков 
> > wrote:
> >
> > > Hello, Anton.
> > >
> > > An example you provide is a path to a master *local* file.
> > > These libraries are added to the classpath for each remote node running
> > > submitted job.
> > >
> > > Please, see documentation:
> > >
> > > http://spark.apache.org/docs/latest/api/java/org/apache/
> > > spark/SparkContext.html#addJar(java.lang.String)
> > > http://spark.apache.org/docs/latest/api/java/org/apache/
> > > spark/SparkContext.html#addFile(java.lang.String)
> > >
> > >
> > > 2017-10-17 13:10 GMT+03:00 Anton Vinogradov  >:
> > >
> > > > Nikolay,
> > > >
> > > > > With Data Frame API implementation there are no requirements to
> have
> > > any
> > > > > Ignite files on spark worker nodes.
> > > >
> > > > What do you mean? I see code like:
> > > >
> > > > spark.sparkContext.addJar(MAVEN_HOME +
> > > > "/org/apache/ignite/ignite-core/2.3.0-SNAPSHOT/ignite-
> > > > core-2.3.0-SNAPSHOT.jar")
> > > >
> > > > On Mon, Oct 16, 2017 at 5:22 PM, Николай Ижиков <
> > nizhikov@gmail.com>
> > > > wrote:
> > > >
> > > > > Hello, guys.
> > > > >
> > > > > I have created example application to run Ignite Data Frame on
> > > standalone
> > > > > Spark cluster.
> > > > > With Data Frame API implementation there are no requirements to
> have
> > > any
> > > > > Ignite files on spark worker nodes.
> > > > >
> > > > > I ran this application on the free dataset: ATP tennis match
> > > statistics.
> > > > >
> > > > > data - https://github.com/nizhikov/atp_matches
> > > > > app - https://github.com/nizhikov/ignite-spark-df-example
> > > > >
> > > > > Valentin, do you have a chance to look at my changes?
> > > > >
> > > > >
> > > > > 2017-10-12 6:03 GMT+03:00 Valentin Kulichenko <
> > > > > valentin.kuliche...@gmail.com
> > > > > >:
> > > > >
> > > > > > Hi Nikolay,
> > > > > >
> > > > > > Sorry for delay on this, got a little swamped lately. I will do
> my
> > > best
> > > > > to
> > > > > > review the code this week.
> > > > > >
> > > > > > -Val
> > > > > >
> > > > > > On Mon, Oct 9, 2017 at 11:48 AM, Николай Ижиков <
> > > > nizhikov@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> Hello, Valentin.
> > > > > >>
> > > > > >> Did you have a chance to look at my changes?
> > > > > >>
> > > > > >> Now I think I have done almost all required features.
> > > > > >> I want to make some performance test to ensure my implementation
> > > work
> > > > > >> properly with a significant amount of data.
> > > > > >> And I definitely need some feedback for my changes.
> > > > > >>
> > > > > >>
> > > > > >> 2017-10-09 18:45 GMT+03:00 Николай Ижиков <
> nizhikov@gmail.com
> > >:
> > > > > >>
> > > > > >>> Hello, guys.
> > > > > >>>
> > > > > >>> Which version of Spark do we want to use?
> > > > > >>>
> > > > > >>> 1. Currently, Ignite depends on Spark 2.1.0.
> > > > > >>>
> > > > > >>> * Can be run on JDK 7.
> > > > > >>> * Still supported: 2.1.2 will be released soon.
> > > > > >>>
> > > > > >>> 2. Latest Spark version is 2.2.0.
> > > > > >>>
> > > > > >>> * Can be run only on JDK 8+
> > > > > >>> * Released Jul 11, 2017.
> > > > > >>> * Already supported by huge vendors(Amazon for example).
> > > > > >>>
> > > > > >>> Note that in IGNITE-3084 I implement some internal Spark API.
> > > > > >>> So It will take some effort to switch between Spark 2.1 and 2.2
> > > > > >>>
> > > > > >>>
> > > > > >>> 2017-09-27 2:20 GMT+03:00 Valentin Kulichenko <
> > > > > >>> valentin.kuliche...@gmail.com>:
> > > > > >>>
> > > > >  I will review in the next few days.
> > > > > 
> > > > >  -Val
> > > > > 
> > > > >  On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda <
> dma...@apache.org
> > >
> > > > > wrote:
> > > > > 
> > > > >  > Hello Nikolay,
> > > > >  >
> > > > >  > This is good news. Finally this capability is coming to
> > Ignite.
> > > > >  >
> > > > >  > Val, Vladimir, could you do a preliminary review?
> > > > >  >
> > > > >  > Answering on your questions.
> > > > >  >
> > > > >  > 1. Yardstick should be enough for performance measurements.
> > As a
> > > > > Spark
> > > > >  > user, I will be curious to know what’s the point of this
> > > > > integration.
> > > > >  > Probably we need to compare Spark + Ignite and Spark + Hive
> or
> > > > > Spark +
> > > > >  > RDBMS cases.
> > > > >  >
> > > > >  > 2. If Spark community is reluctant let’s include the 

Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Николай Ижиков
Yes

On Oct 17, 2017, at 2:34 PM, "Anton Vinogradov" <
avinogra...@gridgain.com> wrote:

> Nikolay,
>
> So, it will be able to start regular spark and ignite clusters and, using
> peer classloading via spark-context, perform any DataFrame request,
> correct?
>
> On Tue, Oct 17, 2017 at 2:25 PM, Николай Ижиков 
> wrote:
>
> > Hello, Anton.
> >
> > An example you provide is a path to a master *local* file.
> > These libraries are added to the classpath for each remote node running
> > submitted job.
> >
> > Please, see documentation:
> >
> > http://spark.apache.org/docs/latest/api/java/org/apache/
> > spark/SparkContext.html#addJar(java.lang.String)
> > http://spark.apache.org/docs/latest/api/java/org/apache/
> > spark/SparkContext.html#addFile(java.lang.String)
> >
> >
> > 2017-10-17 13:10 GMT+03:00 Anton Vinogradov :
> >
> > > Nikolay,
> > >
> > > > With Data Frame API implementation there are no requirements to have
> > any
> > > > Ignite files on spark worker nodes.
> > >
> > > What do you mean? I see code like:
> > >
> > > spark.sparkContext.addJar(MAVEN_HOME +
> > > "/org/apache/ignite/ignite-core/2.3.0-SNAPSHOT/ignite-
> > > core-2.3.0-SNAPSHOT.jar")
> > >
> > > On Mon, Oct 16, 2017 at 5:22 PM, Николай Ижиков <
> nizhikov@gmail.com>
> > > wrote:
> > >
> > > > Hello, guys.
> > > >
> > > > I have created example application to run Ignite Data Frame on
> > standalone
> > > > Spark cluster.
> > > > With Data Frame API implementation there are no requirements to have
> > any
> > > > Ignite files on spark worker nodes.
> > > >
> > > > I ran this application on the free dataset: ATP tennis match
> > statistics.
> > > >
> > > > data - https://github.com/nizhikov/atp_matches
> > > > app - https://github.com/nizhikov/ignite-spark-df-example
> > > >
> > > > Valentin, do you have a chance to look at my changes?
> > > >
> > > >
> > > > 2017-10-12 6:03 GMT+03:00 Valentin Kulichenko <
> > > > valentin.kuliche...@gmail.com
> > > > >:
> > > >
> > > > > Hi Nikolay,
> > > > >
> > > > > Sorry for delay on this, got a little swamped lately. I will do my
> > best
> > > > to
> > > > > review the code this week.
> > > > >
> > > > > -Val
> > > > >
> > > > > On Mon, Oct 9, 2017 at 11:48 AM, Николай Ижиков <
> > > nizhikov@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> Hello, Valentin.
> > > > >>
> > > > >> Did you have a chance to look at my changes?
> > > > >>
> > > > >> Now I think I have done almost all required features.
> > > > >> I want to make some performance test to ensure my implementation
> > work
> > > > >> properly with a significant amount of data.
> > > > >> And I definitely need some feedback for my changes.
> > > > >>
> > > > >>
> > > > >> 2017-10-09 18:45 GMT+03:00 Николай Ижиков  >:
> > > > >>
> > > > >>> Hello, guys.
> > > > >>>
> > > > >>> Which version of Spark do we want to use?
> > > > >>>
> > > > >>> 1. Currently, Ignite depends on Spark 2.1.0.
> > > > >>>
> > > > >>> * Can be run on JDK 7.
> > > > >>> * Still supported: 2.1.2 will be released soon.
> > > > >>>
> > > > >>> 2. Latest Spark version is 2.2.0.
> > > > >>>
> > > > >>> * Can be run only on JDK 8+
> > > > >>> * Released Jul 11, 2017.
> > > > >>> * Already supported by huge vendors(Amazon for example).
> > > > >>>
> > > > >>> Note that in IGNITE-3084 I implement some internal Spark API.
> > > > >>> So It will take some effort to switch between Spark 2.1 and 2.2
> > > > >>>
> > > > >>>
> > > > >>> 2017-09-27 2:20 GMT+03:00 Valentin Kulichenko <
> > > > >>> valentin.kuliche...@gmail.com>:
> > > > >>>
> > > >  I will review in the next few days.
> > > > 
> > > >  -Val
> > > > 
> > > >  On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda  >
> > > > wrote:
> > > > 
> > > >  > Hello Nikolay,
> > > >  >
> > > >  > This is good news. Finally this capability is coming to
> Ignite.
> > > >  >
> > > >  > Val, Vladimir, could you do a preliminary review?
> > > >  >
> > > >  > Answering on your questions.
> > > >  >
> > > >  > 1. Yardstick should be enough for performance measurements.
> As a
> > > > Spark
> > > >  > user, I will be curious to know what’s the point of this
> > > > integration.
> > > >  > Probably we need to compare Spark + Ignite and Spark + Hive or
> > > > Spark +
> > > >  > RDBMS cases.
> > > >  >
> > > >  > 2. If Spark community is reluctant let’s include the module in
> > > >  > ignite-spark integration.
> > > >  >
> > > >  > —
> > > >  > Denis
> > > >  >
> > > >  > > On Sep 25, 2017, at 11:14 AM, Николай Ижиков <
> > > >  nizhikov@gmail.com>
> > > >  > wrote:
> > > >  > >
> > > >  > > Hello, guys.
> > > >  > >
> > > >  > > Currently, I’m working on integration between Spark and
> Ignite
> > > > [1].
> > > >  > >
> > > >  > > For now, I implement following:
> > > >  > 

Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Anton Vinogradov
Nikolay,

So, it will be able to start regular spark and ignite clusters and, using
peer classloading via spark-context, perform any DataFrame request, correct?

On Tue, Oct 17, 2017 at 2:25 PM, Николай Ижиков 
wrote:

> Hello, Anton.
>
> An example you provide is a path to a master *local* file.
> These libraries are added to the classpath for each remote node running
> submitted job.
>
> Please, see documentation:
>
> http://spark.apache.org/docs/latest/api/java/org/apache/
> spark/SparkContext.html#addJar(java.lang.String)
> http://spark.apache.org/docs/latest/api/java/org/apache/
> spark/SparkContext.html#addFile(java.lang.String)
>
>
> 2017-10-17 13:10 GMT+03:00 Anton Vinogradov :
>
> > Nikolay,
> >
> > > With Data Frame API implementation there are no requirements to have
> any
> > > Ignite files on spark worker nodes.
> >
> > What do you mean? I see code like:
> >
> > spark.sparkContext.addJar(MAVEN_HOME +
> > "/org/apache/ignite/ignite-core/2.3.0-SNAPSHOT/ignite-
> > core-2.3.0-SNAPSHOT.jar")
> >
> > On Mon, Oct 16, 2017 at 5:22 PM, Николай Ижиков 
> > wrote:
> >
> > > Hello, guys.
> > >
> > > I have created example application to run Ignite Data Frame on
> standalone
> > > Spark cluster.
> > > With Data Frame API implementation there are no requirements to have
> any
> > > Ignite files on spark worker nodes.
> > >
> > > I ran this application on the free dataset: ATP tennis match
> statistics.
> > >
> > > data - https://github.com/nizhikov/atp_matches
> > > app - https://github.com/nizhikov/ignite-spark-df-example
> > >
> > > Valentin, do you have a chance to look at my changes?
> > >
> > >
> > > 2017-10-12 6:03 GMT+03:00 Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com
> > > >:
> > >
> > > > Hi Nikolay,
> > > >
> > > > Sorry for delay on this, got a little swamped lately. I will do my
> best
> > > to
> > > > review the code this week.
> > > >
> > > > -Val
> > > >
> > > > On Mon, Oct 9, 2017 at 11:48 AM, Николай Ижиков <
> > nizhikov@gmail.com>
> > > > wrote:
> > > >
> > > >> Hello, Valentin.
> > > >>
> > > >> Did you have a chance to look at my changes?
> > > >>
> > > >> Now I think I have done almost all required features.
> > > >> I want to make some performance test to ensure my implementation
> work
> > > >> properly with a significant amount of data.
> > > >> And I definitely need some feedback for my changes.
> > > >>
> > > >>
> > > >> 2017-10-09 18:45 GMT+03:00 Николай Ижиков :
> > > >>
> > > >>> Hello, guys.
> > > >>>
> > > >>> Which version of Spark do we want to use?
> > > >>>
> > > >>> 1. Currently, Ignite depends on Spark 2.1.0.
> > > >>>
> > > >>> * Can be run on JDK 7.
> > > >>> * Still supported: 2.1.2 will be released soon.
> > > >>>
> > > >>> 2. Latest Spark version is 2.2.0.
> > > >>>
> > > >>> * Can be run only on JDK 8+
> > > >>> * Released Jul 11, 2017.
> > > >>> * Already supported by huge vendors(Amazon for example).
> > > >>>
> > > >>> Note that in IGNITE-3084 I implement some internal Spark API.
> > > >>> So It will take some effort to switch between Spark 2.1 and 2.2
> > > >>>
> > > >>>
> > > >>> 2017-09-27 2:20 GMT+03:00 Valentin Kulichenko <
> > > >>> valentin.kuliche...@gmail.com>:
> > > >>>
> > >  I will review in the next few days.
> > > 
> > >  -Val
> > > 
> > >  On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda 
> > > wrote:
> > > 
> > >  > Hello Nikolay,
> > >  >
> > >  > This is good news. Finally this capability is coming to Ignite.
> > >  >
> > >  > Val, Vladimir, could you do a preliminary review?
> > >  >
> > >  > Answering on your questions.
> > >  >
> > >  > 1. Yardstick should be enough for performance measurements. As a
> > > Spark
> > >  > user, I will be curious to know what’s the point of this
> > > integration.
> > >  > Probably we need to compare Spark + Ignite and Spark + Hive or
> > > Spark +
> > >  > RDBMS cases.
> > >  >
> > >  > 2. If Spark community is reluctant let’s include the module in
> > >  > ignite-spark integration.
> > >  >
> > >  > —
> > >  > Denis
> > >  >
> > >  > > On Sep 25, 2017, at 11:14 AM, Николай Ижиков <
> > >  nizhikov@gmail.com>
> > >  > wrote:
> > >  > >
> > >  > > Hello, guys.
> > >  > >
> > >  > > Currently, I’m working on integration between Spark and Ignite
> > > [1].
> > >  > >
> > >  > > For now, I implement following:
> > >  > >* Ignite DataSource implementation(IgniteRelationProvider)
> > >  > >* DataFrame support for Ignite SQL table.
> > >  > >* IgniteCatalog implementation for a transparent resolving
> of
> > >  ignites
> > >  > > SQL tables.
> > >  > >
> > >  > > Implementation of it can be found in PR [2]
> > >  > > It would be great if someone provides feedback for a
> prototype.
> > 

Re: Adding sqlline tool to Apache Ignite project

2017-10-17 Thread Oleg Ostanin
New example build with sqlline:

https://ci.ignite.apache.org/viewLog.html?buildId=894407=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3PrepareVote#!1rrb2,1esn4zrslm4po,-h8h0hn9vvvxp


On Wed, Oct 11, 2017 at 1:00 AM, Denis Magda  wrote:

> Oleg,
>
> Looks good to me. Please consider the notes left in the ticket. I want us
> to prepare a script for Windows, review the language for help notice and
> errors, put together documentation. Prachi will be able to help with the
> editing and documentation.
>
> —
> Denis
>
> > On Oct 9, 2017, at 10:13 AM, Oleg Ostanin  wrote:
> >
> > New build with fixed argument parsing:
> > https://ci.ignite.apache.org/viewLog.html?buildId=882282;
> tab=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3Pre
> pareVote#!1rrb2,1esn4zrslm4po,-h8h0hn9vvvxp
> >
> > On Mon, Oct 9, 2017 at 5:38 PM, Denis Magda  wrote:
> >
> >> I think it’s a must have for the ticket resolution.
> >>
> >> Denis
> >>
> >> On Monday, October 9, 2017, Anton Vinogradov 
> >> wrote:
> >>
> >>> Any plans to have ignitesql.bat?
> >>>
> >>> On Mon, Oct 9, 2017 at 5:29 PM, Oleg Ostanin  >>> > wrote:
> >>>
>  Another build with sqlline included:
>  https://ci.ignite.apache.org/viewLog.html?buildId=881120;
>  tab=artifacts=IgniteRelease_
> XxxFromMirrorIgniteRelease3Pre
>  pareVote#!1rrb2,-wpvx2aopzexz,1esn4zrslm4po,-h8h0hn9vvvxp
> 
>  On Sun, Oct 8, 2017 at 5:11 PM, Denis Magda  >>> > wrote:
> 
> > No more doubts on my side. +1 for Vladimir’s suggestion.
> >
> > Denis
> >
> > On Saturday, October 7, 2017, Dmitriy Setrakyan <
> >> dsetrak...@apache.org
> >>> >
> > wrote:
> >
> >> I now tend to agree with Vladimir. We should always require that
> >> some
> >> address is specified. The help menu should clearly state how to
> >>> connect
> > to
> >> a localhost.
> >>
> >> D.
> >>
> >> On Sat, Oct 7, 2017 at 12:44 AM, Vladimir Ozerov <
> >>> voze...@gridgain.com 
> >> >
> >> wrote:
> >>
> >>> Denis,
> >>>
> >>> Default Ignite configuration uses multicast, this is why you do
> >> not
> > need
> >> to
> >>> change anything. Ignite node is always both a server (listens)
> >> and
> >>> a
> >> client
> >>> (connects).
> >>>
> >>> This will not work for ignitesql, as this is a client. And in
> >> real
> >>> deployments it will connect to remote nodes, not local. So the
>  earlier
> > we
> >>> explain user how to do this, the better. This is why it should
> >> not
>  work
> >> out
> >>> of the box connecting to 127.0.0.1. No magic for users please.
> >>>
> >>> This is what user will see (draft):
>  ./ignitesql.sh
>  Please specify the host: ignitesql.sh [host]; type --help for
> >>> more
> >>> information.
>  ./ignitesql.sh 192.168.12.55
>  Connected successfully.
> >>>
> >>> Again, specifying parameters manually is not poor UX. This is
>  excellent
> >> UX,
> >>> as user learns on his own how to connect to a node in 1 minute.
> >>> Most
> >>> command line tools work this way.
> >>>
> >>> сб, 7 окт. 2017 г. в 7:12, Dmitriy Setrakyan <
> >>> dsetrak...@apache.org 
> >> >:
> >>>
>  How does the binding happen? Can we bind to everything, like we
> >>> do
>  in
>  Ignite?
> 
>  On Fri, Oct 6, 2017 at 2:51 PM, Denis Magda  >>> 
> >> > wrote:
> 
> > Thought over 127.0.0.1 as a default host once again. The bad
>  thing
> >>> about
> > it is that the user gets a lengthy exception stack trace if
>  Ignite
> >> is
>  not
> > running locally and not a small error message.
> >
> > What are the other opinions on this? Do we want to follow
> > Vladimir’s
> > suggestion forcing to set the host name/IP (port is optional)
> >>> for
> > the
>  sake
> > of usability or leaver 127.0.0.1 as default?
> >
> > —
> > Denis
> >
> >> On Oct 6, 2017, at 12:21 PM, Denis Magda <
> >> dma...@apache.org
> >>> 
> >> > wrote:
> >>
> >>> But, we need to support “help” (-h, -help) argument
> >> listing
>  all
> >> the
> > parameters accepted by the tools.
> >>
> >> Meant accepted by the ignitesql script only such as host
> >>> name.
> >>
> >> —
> >> Denis
> >>
> >>> On Oct 6, 2017, at 12:20 PM, Denis Magda <
> >> dma...@apache.org
> >>> 
> >> > wrote:
> >>>
> >>> Really nice, could click 
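The behavior Vladimir argues for — requiring an explicit host instead of silently defaulting to 127.0.0.1 — can be sketched as below. The class name, method, and message text are hypothetical illustrations, not the actual ignitesql implementation:

```java
public class IgniteSqlLauncher {
    // No implicit 127.0.0.1 default: an explicit host is required, and the
    // error message teaches the user how to connect in one line.
    static String handleArgs(String[] args) {
        if (args.length == 0 || args[0].isEmpty())
            return "Please specify the host: ignitesql.sh [host]; type --help for more information.";
        return "Connecting to " + args[0] + " ...";
    }

    public static void main(String[] args) {
        System.out.println(handleArgs(args));
    }
}
```

Run without arguments, the tool prints the usage hint instead of a lengthy connection-failure stack trace, which is the usability point raised in the thread.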

Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Николай Ижиков
Hello, Anton.

The example you provided is a path to a file that is *local* to the master.
These libraries are added to the classpath of each remote node running the
submitted job.

Please, see documentation:

http://spark.apache.org/docs/latest/api/java/org/apache/
spark/SparkContext.html#addJar(java.lang.String)
http://spark.apache.org/docs/latest/api/java/org/apache/
spark/SparkContext.html#addFile(java.lang.String)


2017-10-17 13:10 GMT+03:00 Anton Vinogradov :

> Nikolay,
>
> > With Data Frame API implementation there are no requirements to have any
> > Ignite files on spark worker nodes.
>
> What do you mean? I see code like:
>
> spark.sparkContext.addJar(MAVEN_HOME +
> "/org/apache/ignite/ignite-core/2.3.0-SNAPSHOT/ignite-
> core-2.3.0-SNAPSHOT.jar")
>
> On Mon, Oct 16, 2017 at 5:22 PM, Николай Ижиков 
> wrote:
>
> > Hello, guys.
> >
> > I have created example application to run Ignite Data Frame on standalone
> > Spark cluster.
> > With Data Frame API implementation there are no requirements to have any
> > Ignite files on spark worker nodes.
> >
> > I ran this application on the free dataset: ATP tennis match statistics.
> >
> > data - https://github.com/nizhikov/atp_matches
> > app - https://github.com/nizhikov/ignite-spark-df-example
> >
> > Valentin, do you have a chance to look at my changes?
> >
> >
> > 2017-10-12 6:03 GMT+03:00 Valentin Kulichenko <
> > valentin.kuliche...@gmail.com
> > >:
> >
> > > Hi Nikolay,
> > >
> > > Sorry for delay on this, got a little swamped lately. I will do my best
> > to
> > > review the code this week.
> > >
> > > -Val
> > >
> > > On Mon, Oct 9, 2017 at 11:48 AM, Николай Ижиков <
> nizhikov@gmail.com>
> > > wrote:
> > >
> > >> Hello, Valentin.
> > >>
> > >> Did you have a chance to look at my changes?
> > >>
> > >> Now I think I have done almost all required features.
> > >> I want to make some performance test to ensure my implementation work
> > >> properly with a significant amount of data.
> > >> And I definitely need some feedback for my changes.
> > >>
> > >>
> > >> 2017-10-09 18:45 GMT+03:00 Николай Ижиков :
> > >>
> > >>> Hello, guys.
> > >>>
> > >>> Which version of Spark do we want to use?
> > >>>
> > >>> 1. Currently, Ignite depends on Spark 2.1.0.
> > >>>
> > >>> * Can be run on JDK 7.
> > >>> * Still supported: 2.1.2 will be released soon.
> > >>>
> > >>> 2. Latest Spark version is 2.2.0.
> > >>>
> > >>> * Can be run only on JDK 8+
> > >>> * Released Jul 11, 2017.
> > >>> * Already supported by huge vendors(Amazon for example).
> > >>>
> > >>> Note that in IGNITE-3084 I implement some internal Spark API.
> > >>> So It will take some effort to switch between Spark 2.1 and 2.2
> > >>>
> > >>>
> > >>> 2017-09-27 2:20 GMT+03:00 Valentin Kulichenko <
> > >>> valentin.kuliche...@gmail.com>:
> > >>>
> >  I will review in the next few days.
> > 
> >  -Val
> > 
> >  On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda 
> > wrote:
> > 
> >  > Hello Nikolay,
> >  >
> >  > This is good news. Finally this capability is coming to Ignite.
> >  >
> >  > Val, Vladimir, could you do a preliminary review?
> >  >
> >  > Answering on your questions.
> >  >
> >  > 1. Yardstick should be enough for performance measurements. As a
> > Spark
> >  > user, I will be curious to know what’s the point of this
> > integration.
> >  > Probably we need to compare Spark + Ignite and Spark + Hive or
> > Spark +
> >  > RDBMS cases.
> >  >
> >  > 2. If Spark community is reluctant let’s include the module in
> >  > ignite-spark integration.
> >  >
> >  > —
> >  > Denis
> >  >
> >  > > On Sep 25, 2017, at 11:14 AM, Николай Ижиков <
> >  nizhikov@gmail.com>
> >  > wrote:
> >  > >
> >  > > Hello, guys.
> >  > >
> >  > > Currently, I’m working on integration between Spark and Ignite
> > [1].
> >  > >
> >  > > For now, I implement following:
> >  > >* Ignite DataSource implementation(IgniteRelationProvider)
> >  > >* DataFrame support for Ignite SQL table.
> >  > >* IgniteCatalog implementation for a transparent resolving of
> >  ignites
> >  > > SQL tables.
> >  > >
> >  > > Implementation of it can be found in PR [2]
> >  > > It would be great if someone provides feedback for a prototype.
> >  > >
> >  > > I made some examples in PR so you can see how API suppose to be
> >  used [3].
> >  > > [4].
> >  > >
> >  > > I need some advice. Can you help me?
> >  > >
> >  > > 1. How should this PR be tested?
> >  > >
> >  > > Of course, I need to provide some unit tests. But what about
> >  scalability
> >  > > tests, etc.
> >  > > Maybe we need some Yardstick benchmark or similar?
> >  > > What are your thoughts?
> >  > > Which scenarios should I consider in the 

[jira] [Created] (IGNITE-6648) ML javadoc is missing in 2.2 binary release

2017-10-17 Thread Oleg Ostanin (JIRA)
Oleg Ostanin created IGNITE-6648:


 Summary: ML javadoc is missing in 2.2 binary release   
 Key: IGNITE-6648
 URL: https://issues.apache.org/jira/browse/IGNITE-6648
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
Reporter: Oleg Ostanin
Assignee: Oleg Ostanin


ML javadoc is missing in binary releases.   




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #1037: IGNITE-3840 : Continue investigation: High memory...

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/1037


---


[GitHub] ignite pull request #968: Ignite-3698: ODBC: LEN(string) function

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/968


---


[GitHub] ignite pull request #2210: IGNITE-5593: Affinity change message leak on mass...

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/2210


---


[GitHub] ignite pull request #2532: IGNITE-5839: Unclear exception from BinaryObjectB...

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/2532


---


[GitHub] ignite pull request #2665: Ignite-gg-12773

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/2665


---


[GitHub] ignite pull request #2717: Ignite-gg-12717

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/2717


---


[GitHub] ignite pull request #2732: Ignite-gg-12751

2017-10-17 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/2732


---


[GitHub] ignite pull request #2867: Ignite 2.1.7

2017-10-17 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/2867

Ignite 2.1.7



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.1.7

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2867.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2867


commit e9139d4916a22acab22008663e89f963f3d3a6b5
Author: sboikov 
Date:   2017-08-02T14:25:31Z

Call updateRebalanceVersion after evictions (was broken in 
c6fbe2d82a9f56f96c94551b09e85a12d192f32e);

(cherry picked from commit b277682)

commit 615a582b83d9acdd5703995f52f97e70027e0639
Author: Dmitriy Shabalin 
Date:   2017-08-21T12:34:59Z

IGNITE-4728 Fixed get params for saved state.
(cherry picked from commit 97b3cef)

commit 891a9e0001bd0d77e0bafc3ad109d04d02a9b7ff
Author: sboikov 
Date:   2017-08-21T10:21:44Z

ignite-6124 Merge exchanges for multiple discovery events

(cherry picked from commit bebf299799712b464ee0e3800752ecc07770d9f0)

commit 96d86acdfe1a251758c6915b473ceb20514cbbad
Author: sboikov 
Date:   2017-08-21T13:19:18Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit e9f729fcb9ba7b555aed246a834e385ce2c88795
Author: Alexey Goncharuk 
Date:   2017-08-21T10:49:22Z

ignite-5872 Replace standard java maps for partition counters with more 
effective data structures

(cherry picked from commit 918c409)

commit 8e672d4a83cb20136d375818505cacccf6c9b4fd
Author: sboikov 
Date:   2017-08-21T13:39:29Z

Changed ignite version.

commit fd5d83c44624baea9466c591f119195cb092df4c
Author: Alexey Kuznetsov 
Date:   2017-08-21T15:06:36Z

IGNITE-6104 Fixed target.
(cherry picked from commit 8775d2c)

commit cca9117c805d144ff451694e7324720f8d466d94
Author: sboikov 
Date:   2017-08-21T15:39:12Z

ignite-5872 Fixed backward compatibility

commit dc9b2660723b325e1d07991cd0936ee27f624a81
Author: sboikov 
Date:   2017-08-21T15:39:36Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit 1072654c8762113bc49a658f2898475381b47832
Author: Sergey Chugunov 
Date:   2017-08-21T17:05:05Z

IGNITE-6052 class rename was reverted

commit cf97f3dd903549d3b5968ceb5ca9eda29ef521cf
Author: Dmitriy Govorukhin 
Date:   2017-08-21T17:04:04Z

ignite-6035 Clear indexes on cache clear/destroy

Signed-off-by: Andrey Gura 

commit 47b84ec62fb988faad438bdfde4ee48165a4065f
Author: Andrey Gura 
Date:   2017-08-21T17:36:56Z

Visor compilation issues are fixed.

commit 6e0fd7222e64b3a5e05204b300b62f60e3142e40
Author: Ilya Lantukh 
Date:   2017-08-21T18:12:41Z

ignite-gg-12639 Fill factor metric is not working with enabled persistence

Signed-off-by: Andrey Gura 

commit bbafa1e946121b7143101d4060effc48ed74a0b3
Author: Alexey Kuznetsov 
Date:   2017-08-22T03:59:15Z

IGNITE-6136 Web Console: implemented universal version check.
(cherry picked from commit 1a42e78)

commit 2a2e8407aaa99a56f79a861fc1e201dd183eea11
Author: vsisko 
Date:   2017-08-22T04:08:21Z

IGNITE-6131 Visor Cmd: Fixed "cache -a" command output cache entries count.
(cherry picked from commit 1f5054a)

commit db64729fc9ebb0217f06b0cf9d5e53ab8d657510
Author: sboikov 
Date:   2017-08-22T08:29:32Z

ignite-6124 Fixed NPE in GridDhtPartitionsExchangeFuture.topologyVersion 
after future cleanup.

(cherry picked from commit 2c9057a)

commit 5b7724714264c14cc10f4b25abc9234387224e4b
Author: Ilya Lantukh 
Date:   2017-08-22T08:50:35Z

Fixed javadoc format.

commit 785a85eb0155444b3eef48cf373a990dc4c8c6dd
Author: sboikov 
Date:   2017-08-22T09:24:03Z

ignite-5872 GridDhtPartitionsSingleMessage.partitionUpdateCounters should 
not return null.

commit 6b506e774c59b64fc6254ea151699c852620a408
Author: sboikov 
Date:   2017-08-22T09:24:21Z

Merge remote-tracking branch 'community/ignite-2.1.4' into ignite-2.1.4

commit 160d9b7c707efc359b4014aa1a481dc0fbbf596f
Author: Ilya Lantukh 
Date:   2017-08-22T11:10:10Z

Fixed flaky test.

commit 9ed4b72044ba1b2c105761b431625736166af7e7
Author: Alexey Goncharuk 
Date:   2017-08-01T09:25:25Z

master - Fixed visor compilation after merge

commit 16b819a6131c95a30d8dfaefbac6f6593826258b
Author: Ilya Lantukh 

---

Re: Integration of Spark and Ignite. Prototype.

2017-10-17 Thread Anton Vinogradov
Nikolay,

> With Data Frame API implementation there are no requirements to have any
> Ignite files on spark worker nodes.

What do you mean? I see code like:

spark.sparkContext.addJar(MAVEN_HOME +
"/org/apache/ignite/ignite-core/2.3.0-SNAPSHOT/ignite-core-2.3.0-SNAPSHOT.jar")

On Mon, Oct 16, 2017 at 5:22 PM, Николай Ижиков 
wrote:

> Hello, guys.
>
> I have created example application to run Ignite Data Frame on standalone
> Spark cluster.
> With Data Frame API implementation there are no requirements to have any
> Ignite files on spark worker nodes.
>
> I ran this application on the free dataset: ATP tennis match statistics.
>
> data - https://github.com/nizhikov/atp_matches
> app - https://github.com/nizhikov/ignite-spark-df-example
>
> Valentin, do you have a chance to look at my changes?
>
>
> 2017-10-12 6:03 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com
> >:
>
> > Hi Nikolay,
> >
> > Sorry for delay on this, got a little swamped lately. I will do my best
> to
> > review the code this week.
> >
> > -Val
> >
> > On Mon, Oct 9, 2017 at 11:48 AM, Николай Ижиков 
> > wrote:
> >
> >> Hello, Valentin.
> >>
> >> Did you have a chance to look at my changes?
> >>
> >> Now I think I have implemented almost all of the required features.
> >> I want to run some performance tests to ensure my implementation works
> >> properly with a significant amount of data.
> >> And I definitely need some feedback for my changes.
> >>
> >>
> >> 2017-10-09 18:45 GMT+03:00 Николай Ижиков :
> >>
> >>> Hello, guys.
> >>>
> >>> Which version of Spark do we want to use?
> >>>
> >>> 1. Currently, Ignite depends on Spark 2.1.0.
> >>>
> >>> * Can be run on JDK 7.
> >>> * Still supported: 2.1.2 will be released soon.
> >>>
> >>> 2. Latest Spark version is 2.2.0.
> >>>
> >>> * Can be run only on JDK 8+
> >>> * Released Jul 11, 2017.
> >>> * Already supported by major vendors (Amazon, for example).
> >>>
> >>> Note that in IGNITE-3084 I implemented some internal Spark API,
> >>> so it will take some effort to switch between Spark 2.1 and 2.2.
> >>>
> >>>
> >>> 2017-09-27 2:20 GMT+03:00 Valentin Kulichenko <
> >>> valentin.kuliche...@gmail.com>:
> >>>
>  I will review in the next few days.
> 
>  -Val
> 
>  On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda 
> wrote:
> 
>  > Hello Nikolay,
>  >
>  > This is good news. Finally this capability is coming to Ignite.
>  >
>  > Val, Vladimir, could you do a preliminary review?
>  >
>  > Answering on your questions.
>  >
>  > 1. Yardstick should be enough for performance measurements. As a
> Spark
>  > user, I will be curious to know what’s the point of this
> integration.
>  > Probably we need to compare Spark + Ignite and Spark + Hive or
> Spark +
>  > RDBMS cases.
>  >
>  > 2. If the Spark community is reluctant, let’s include the module in the
>  > ignite-spark integration.
>  >
>  > —
>  > Denis
>  >
>  > > On Sep 25, 2017, at 11:14 AM, Николай Ижиков <
>  nizhikov@gmail.com>
>  > wrote:
>  > >
>  > > Hello, guys.
>  > >
>  > > Currently, I’m working on integration between Spark and Ignite
> [1].
>  > >
>  > > For now, I have implemented the following:
>  > >* Ignite DataSource implementation (IgniteRelationProvider)
>  > >* DataFrame support for Ignite SQL table.
>  > >* IgniteCatalog implementation for transparent resolution of
>  > > Ignite SQL tables.
>  > >
>  > > Implementation of it can be found in PR [2]
>  > > It would be great if someone provides feedback for a prototype.
>  > >
>  > > I made some examples in the PR so you can see how the API is supposed
>  > > to be used [3], [4].
>  > >
>  > > I need some advice. Can you help me?
>  > >
>  > > 1. How should this PR be tested?
>  > >
>  > > Of course, I need to provide some unit tests. But what about
>  > > scalability tests, etc.?
>  > > Maybe we need some Yardstick benchmark or similar?
>  > > What are your thoughts?
>  > > Which scenarios should I consider in the first place?
>  > >
>  > > 2. Should we provide Spark Catalog implementation inside Ignite
>  codebase?
>  > >
>  > > The current implementation of the Spark Catalog is based on *internal
>  > > Spark API*.
>  > > Spark community seems not interested in making Catalog API public
> or
>  > > including Ignite Catalog in Spark code base [5], [6].
>  > >
>  > > *Should we include Spark internal API implementation inside Ignite
>  code
>  > > base?*
>  > >
>  > > Or should we consider including the Catalog implementation in some
>  > > external module?
>  > > That would be created and released outside Ignite? (We could still
>  > > support and develop it inside the Ignite community.)
> 

[GitHub] ignite pull request #2866: quick proof on concept impl

2017-10-17 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/2866

quick proof on concept impl



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6641

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2866.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2866


commit 831fb5dee0fba430eec56f5732f4a5bbd79d9339
Author: Sergey Chugunov 
Date:   2017-10-17T09:27:13Z

IGNITE-6641 quick proof on concept impl




---


Re: Wrong SQL statement

2017-10-17 Thread Taras Ledkov

Hi,

There is a ticket to track it: 
https://issues.apache.org/jira/browse/IGNITE-6111.



On 17.10.2017 11:16, Иван Федотов wrote:

Hello, Igniters!


Currently, if one tries to execute `INSERT INTO t1 VALUES(...);`, it fails with
“IgniteSQLException: Failed to parse query: INSERT INTO Person VALUES(?,?)

…

Caused by: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
parse query: INSERT INTO Person VALUES(?,?)

…

Caused by: org.h2.jdbc.JdbcSQLException: Неверное количество столбцов” (in English: “Invalid column count”)

It looks like a bug, because:


  1. H2 supports the format “Insert into t1 values()” [1]

  2. SQL-92 tells us it is a correct query (paragraph 13.8, insert
statement, syntax rules) [2].


So, I want to create a ticket to fix it. What do you think?


Reproducer:


public class IgniteSqlAllColumnsInsertTest extends GridCommonAbstractTest {
    public void testSqlInsert() throws Exception {
        try (Ignite ignite = startGrid(0)) {
            CacheConfiguration cacheCfg = new CacheConfiguration("CachePerson").setSqlSchema("PUBLIC");

            IgniteCache cache = ignite.getOrCreateCache(cacheCfg);

            cache.query(new SqlFieldsQuery("CREATE TABLE Person (Name varchar, Age int, primary key (Name))"));

            // Good query
            QueryCursor cursor = cache.query(
                new SqlFieldsQuery("INSERT INTO Person (Name, Age) VALUES (?,?)")
                    .setArgs("Alice", 23)
            );

            assertEquals(1L, cursor.getAll().get(0).get(0));

            // Bad query
            cursor = cache.query(
                new SqlFieldsQuery("INSERT INTO Person VALUES(?,?)")
                    .setArgs("Bob", 25)
            );

            assertEquals(1L, cursor.getAll().get(0).get(0));
        }
    }
}




[1] http://www.h2database.com/html/history.html


[2]http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Indexing fields of non-POJO cache values

2017-10-17 Thread Alexey Kuznetsov
Alexey G.,

AFAIK we are going to migrate to our own parser at some point.

On Tue, Oct 17, 2017 at 3:43 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Alexey K., looks like this will require significant changes in H2 (I cannot
> find anything on partial indexes there).
>
> Vladimir, any ideas?
>
> 2017-10-17 11:35 GMT+03:00 Alexey Kuznetsov :
>
> > Alexey G.,
> >
> > >  How these field extractors will be configured. QueryField and
> > QueryIndex are
> > already quite complex classes.
> > > Adding such a closure to configuration would complicate them even
> > further.
> > Maybe we can go the "JavaScript" way and pass a string with an expression
> > that will be parsed and evaluated later on the server side.
> >
> > > How these extractors will interact with future SQL drivers (my current
> > guess
> > - there is no way to define them in SQL)
> >
> > AFAIK RDBMSes support indexes on expressions.
> > For example: https://sqlite.org/expridx.html
> >
> > Make sense?
> >
> > On Tue, Oct 17, 2017 at 3:26 PM, Alexey Goncharuk <
> > alexey.goncha...@gmail.com> wrote:
> >
> > > I like this idea. In general case, this will not even require
> > > deserializing the cache value. Consider a binary tree implementation
> > with a
> > > binary object node {val, left, right}. In this case, it is impossible
> to
> > > have an index of min or max, but with Andrey's suggestion, these
> indexes
> > > are trivially extracted.
> > >
> > > Two things to consider:
> > >  * How these field extractors will be configured. QueryField and
> > QueryIndex
> > > are already quite complex classes. Adding such a closure to
> configuration
> > > would complicate them even further.
> > >  * How these extractors will interact with future SQL drivers (my
> current
> > > guess - there is no way to define them in SQL)
> > >
> > > Andrey, can you create a ticket and suggest an API design so we can
> > review
> > > it?
> > >
> > > Thanks,
> > > AG
> > >
> > > 2017-10-17 5:44 GMT+03:00 Andrey Kornev :
> > >
> > > > Of course it does, Dmitriy! However as I suggested below, the feature
> > > > should be optional. The current behavior (not requiring user classes
> on
> > > the
> > > > server, etc.) would remain the default one.
> > > >
> > > > Also, please realize that not everyone stores their data as POJOs or
> > uses
> > > > Ignite as a JDBC source -- the use cases that appear to have been the
> > > main
> > > > focus of Ignite community lately.
> > > >
> > > > Payloads with dynamic structures require more advanced mechanisms for
> > > > indexing, for example, to avoid the overhead of duplicating the
> > indexable
> > > > fields as top level fields of the BinaryObjects. In cases where the
> > cache
> > > > sizes are in tens of millions of entries, the ability to generate
> index
> > > > values on the fly rather than store them, would go a long way in
> terms
> > of
> > > > reducing memory utilization.
> > > >
> > > > If the Ignite community finds this feature generally useful, I'd be more
> > than
> > > > happy to contribute its implementation.
> > > >
> > > > Regards
> > > > Andrey
> > > >
> > > > 
> > > > From: Dmitriy Setrakyan 
> > > > Sent: Monday, October 16, 2017 6:14 PM
> > > > To: dev@ignite.apache.org
> > > > Subject: Re: Indexing fields of non-POJO cache values
> > > >
> > > > On Mon, Oct 16, 2017 at 12:35 PM, Andrey Kornev <
> > > andrewkor...@hotmail.com>
> > > > wrote:
> > > >
> > > > > [Crossposting to the dev list]
> > > > >
> > > > > Alexey,
> > > > >
> > > > > Yes, something like that, where the "reference"/"alias" is
> expressed
> > > as a
> > > > > piece of Java code (as part of QueryEntity definition, perhaps)
> that
> > is
> > > > > invoked by Ignite at the cache entry indexing time.
> > > > >
> > > > > My point is that rather than limiting indexable fields only to
> > > predefined
> > > > > POJO attributes (or BinaryObject fields) Ignite could adopt a more
> > > > general
> > > > approach by allowing users to designate an arbitrary piece of code (a
> > > > > lambda/closure) to be used as an index value extractor. In such
> case,
> > > the
> > > > > current functionality (extracting index values from POJO
> attributes)
> > > > > becomes just a special case that's supported by Ignite out of the
> > box.
> > > > >
> > > >
> > > > Andrey, this would require deserialization on the server side. It
> would
> > > > also require that user classes are present on the server side. Both
> of
> > > this
> > > > scenarios Ignite tries to avoid.
> > > >
> > > > Makes sense?
> > > >
> > >
> >
> >
> >
> > --
> > Alexey Kuznetsov
> >
>
> --
> Alexey Kuznetsov
>
>


Re: Indexing fields of non-POJO cache values

2017-10-17 Thread Alexey Goncharuk
Alexey K., looks like this will require significant changes in H2 (I cannot
find anything on partial indexes there).

Vladimir, any ideas?

2017-10-17 11:35 GMT+03:00 Alexey Kuznetsov :

> Alexey G.,
>
> >  How these field extractors will be configured. QueryField and
> QueryIndex are
> already quite complex classes.
> > Adding such a closure to configuration would complicate them even
> further.
> Maybe we can go the "JavaScript" way and pass a string with an expression that
> will be parsed and evaluated later on the server side.
>
> > How these extractors will interact with future SQL drivers (my current
> guess
> - there is no way to define them in SQL)
>
> AFAIK RDBMSes support indexes on expressions.
> For example: https://sqlite.org/expridx.html
>
> Make sense?
>
> On Tue, Oct 17, 2017 at 3:26 PM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > I like this idea. In general case, this will not even require
> > deserializing the cache value. Consider a binary tree implementation
> with a
> > binary object node {val, left, right}. In this case, it is impossible to
> > have an index of min or max, but with Andrey's suggestion, these indexes
> > are trivially extracted.
> >
> > Two things to consider:
> >  * How these field extractors will be configured. QueryField and
> QueryIndex
> > are already quite complex classes. Adding such a closure to configuration
> > would complicate them even further.
> >  * How these extractors will interact with future SQL drivers (my current
> > guess - there is no way to define them in SQL)
> >
> > Andrey, can you create a ticket and suggest an API design so we can
> review
> > it?
> >
> > Thanks,
> > AG
> >
> > 2017-10-17 5:44 GMT+03:00 Andrey Kornev :
> >
> > > Of course it does, Dmitriy! However as I suggested below, the feature
> > > should be optional. The current behavior (not requiring user classes on
> > the
> > > server, etc.) would remain the default one.
> > >
> > > Also, please realize that not everyone stores their data as POJOs or
> uses
> > > Ignite as a JDBC source -- the use cases that appear to have been the
> > main
> > > focus of Ignite community lately.
> > >
> > > Payloads with dynamic structures require more advanced mechanisms for
> > > indexing, for example, to avoid the overhead of duplicating the
> indexable
> > > fields as top level fields of the BinaryObjects. In cases where the
> cache
> > > sizes are in tens of millions of entries, the ability to generate index
> > > values on the fly rather than store them, would go a long way in terms
> of
> > > reducing memory utilization.
> > >
> > > If the Ignite community finds this feature generally useful, I'd be more
> than
> > > happy to contribute its implementation.
> > >
> > > Regards
> > > Andrey
> > >
> > > 
> > > From: Dmitriy Setrakyan 
> > > Sent: Monday, October 16, 2017 6:14 PM
> > > To: dev@ignite.apache.org
> > > Subject: Re: Indexing fields of non-POJO cache values
> > >
> > > On Mon, Oct 16, 2017 at 12:35 PM, Andrey Kornev <
> > andrewkor...@hotmail.com>
> > > wrote:
> > >
> > > > [Crossposting to the dev list]
> > > >
> > > > Alexey,
> > > >
> > > > Yes, something like that, where the "reference"/"alias" is expressed
> > as a
> > > > piece of Java code (as part of QueryEntity definition, perhaps) that
> is
> > > > invoked by Ignite at the cache entry indexing time.
> > > >
> > > > My point is that rather than limiting indexable fields only to
> > predefined
> > > > POJO attributes (or BinaryObject fields) Ignite could adopt a more
> > > general
> > > > approach by allowing users to designate an arbitrary piece of code (a
> > > > lambda/closure) to be used as an index value extractor. In such case,
> > the
> > > > current functionality (extracting index values from POJO attributes)
> > > > becomes just a special case that's supported by Ignite out of the
> box.
> > > >
> > >
> > > Andrey, this would require deserialization on the server side. It would
> > > also require that user classes are present on the server side. Both of
> > this
> > > scenarios Ignite tries to avoid.
> > >
> > > Makes sense?
> > >
> >
>
>
>
> --
> Alexey Kuznetsov
>


Re: Indexing fields of non-POJO cache values

2017-10-17 Thread Alexey Kuznetsov
Alexey G.,

>  How these field extractors will be configured. QueryField and QueryIndex are
already quite complex classes.
> Adding such a closure to configuration would complicate them even further.
Maybe we can go the "JavaScript" way and pass a string with an expression that
will be parsed and evaluated later on the server side.

> How these extractors will interact with future SQL drivers (my current guess
- there is no way to define them in SQL)

AFAIK RDBMSes support indexes on expressions.
For example: https://sqlite.org/expridx.html

Make sense?

On Tue, Oct 17, 2017 at 3:26 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> I like this idea. In general case, this will not even require
> deserializing the cache value. Consider a binary tree implementation with a
> binary object node {val, left, right}. In this case, it is impossible to
> have an index of min or max, but with Andrey's suggestion, these indexes
> are trivially extracted.
>
> Two things to consider:
>  * How these field extractors will be configured. QueryField and QueryIndex
> are already quite complex classes. Adding such a closure to configuration
> would complicate them even further.
>  * How these extractors will interact with future SQL drivers (my current
> guess - there is no way to define them in SQL)
>
> Andrey, can you create a ticket and suggest an API design so we can review
> it?
>
> Thanks,
> AG
>
> 2017-10-17 5:44 GMT+03:00 Andrey Kornev :
>
> > Of course it does, Dmitriy! However as I suggested below, the feature
> > should be optional. The current behavior (not requiring user classes on
> the
> > server, etc.) would remain the default one.
> >
> > Also, please realize that not everyone stores their data as POJOs or uses
> > Ignite as a JDBC source -- the use cases that appear to have been the
> main
> > focus of Ignite community lately.
> >
> > Payloads with dynamic structures require more advanced mechanisms for
> > indexing, for example, to avoid the overhead of duplicating the indexable
> > fields as top level fields of the BinaryObjects. In cases where the cache
> > sizes are in tens of millions of entries, the ability to generate index
> > values on the fly rather than store them, would go a long way in terms of
> > reducing memory utilization.
> >
> > If the Ignite community finds this feature generally useful, I'd be more than
> > happy to contribute its implementation.
> >
> > Regards
> > Andrey
> >
> > 
> > From: Dmitriy Setrakyan 
> > Sent: Monday, October 16, 2017 6:14 PM
> > To: dev@ignite.apache.org
> > Subject: Re: Indexing fields of non-POJO cache values
> >
> > On Mon, Oct 16, 2017 at 12:35 PM, Andrey Kornev <
> andrewkor...@hotmail.com>
> > wrote:
> >
> > > [Crossposting to the dev list]
> > >
> > > Alexey,
> > >
> > > Yes, something like that, where the "reference"/"alias" is expressed
> as a
> > > piece of Java code (as part of QueryEntity definition, perhaps) that is
> > > invoked by Ignite at the cache entry indexing time.
> > >
> > > My point is that rather than limiting indexable fields only to
> predefined
> > > POJO attributes (or BinaryObject fields) Ignite could adopt a more
> > general
> > > approach by allowing users to designate an arbitrary piece of code (a
> > > lambda/closure) to be used as an index value extractor. In such case,
> the
> > > current functionality (extracting index values from POJO attributes)
> > > becomes just a special case that's supported by Ignite out of the box.
> > >
> >
> > Andrey, this would require deserialization on the server side. It would
> > also require that user classes are present on the server side. Both of
> this
> > scenarios Ignite tries to avoid.
> >
> > Makes sense?
> >
>



-- 
Alexey Kuznetsov


Re: Indexing fields of non-POJO cache values

2017-10-17 Thread Alexey Goncharuk
I like this idea. In the general case, this will not even require
deserializing the cache value. Consider a binary tree implementation with a
binary object node {val, left, right}. In this case, it is impossible to
have an index on min or max, but with Andrey's suggestion, these indexes
are trivially extracted.
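As a plain-Java sketch of that idea (the extractor below is an ordinary closure; any wiring of such a closure into QueryEntity/QueryIndex is hypothetical, since no such API exists yet), the minimum of such a tree node can be computed on the fly rather than stored:

```java
import java.util.function.Function;

public class ExtractorSketch {
    // Plain tree node, standing in for a binary object {val, left, right}.
    static class Node {
        final int val;
        final Node left, right;
        Node(int val, Node left, Node right) { this.val = val; this.left = left; this.right = right; }
    }

    // Hypothetical "index value extractor": a closure that computes the
    // indexable value (here, the tree minimum) at indexing time instead of
    // requiring it to be a stored field of the object.
    static final Function<Node, Integer> MIN_EXTRACTOR = n -> {
        while (n.left != null)
            n = n.left; // the leftmost node holds the minimum
        return n.val;
    };

    public static void main(String[] args) {
        Node tree = new Node(5,
            new Node(2, new Node(1, null, null), null),
            new Node(8, null, null));
        System.out.println(MIN_EXTRACTOR.apply(tree)); // prints 1
    }
}
```

The same shape would cover the POJO-attribute case as a trivial closure, making today's behavior a special case of the general mechanism.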

Two things to consider:
 * How these field extractors will be configured. QueryField and QueryIndex
are already quite complex classes. Adding such a closure to configuration
would complicate them even further.
 * How these extractors will interact with future SQL drivers (my current
guess - there is no way to define them in SQL)

Andrey, can you create a ticket and suggest an API design so we can review
it?

Thanks,
AG

2017-10-17 5:44 GMT+03:00 Andrey Kornev :

> Of course it does, Dmitriy! However as I suggested below, the feature
> should be optional. The current behavior (not requiring user classes on the
> server, etc.) would remain the default one.
>
> Also, please realize that not everyone stores their data as POJOs or uses
> Ignite as a JDBC source -- the use cases that appear to have been the main
> focus of Ignite community lately.
>
> Payloads with dynamic structures require more advanced mechanisms for
> indexing, for example, to avoid the overhead of duplicating the indexable
> fields as top level fields of the BinaryObjects. In cases where the cache
> sizes are in tens of millions of entries, the ability to generate index
> values on the fly rather than store them, would go a long way in terms of
> reducing memory utilization.
>
> If the Ignite community finds this feature generally useful, I'd be more than
> happy to contribute its implementation.
>
> Regards
> Andrey
>
> 
> From: Dmitriy Setrakyan 
> Sent: Monday, October 16, 2017 6:14 PM
> To: dev@ignite.apache.org
> Subject: Re: Indexing fields of non-POJO cache values
>
> On Mon, Oct 16, 2017 at 12:35 PM, Andrey Kornev 
> wrote:
>
> > [Crossposting to the dev list]
> >
> > Alexey,
> >
> > Yes, something like that, where the "reference"/"alias" is expressed as a
> > piece of Java code (as part of QueryEntity definition, perhaps) that is
> > invoked by Ignite at the cache entry indexing time.
> >
> > My point is that rather than limiting indexable fields only to predefined
> > POJO attributes (or BinaryObject fields) Ignite could adopt a more
> general
> > approach by allowing users to designate an arbitrary piece of code (a
> > lambda/closure) to be used as an index value extractor. In such case, the
> > current functionality (extracting index values from POJO attributes)
> > becomes just a special case that's supported by Ignite out of the box.
> >
>
> Andrey, this would require deserialization on the server side. It would
> also require that user classes are present on the server side. Both of this
> scenarios Ignite tries to avoid.
>
> Makes sense?
>


Re: Persistence per memory policy configuration

2017-10-17 Thread Alexey Goncharuk
Agree with Ivan. If we implemented backward compatibility, this would be
completely counterintuitive behavior, so +1 to keep the behavior as is.

As for the swap path, I see nothing wrong with having it for in-memory
caches. This is a simple overflow mechanism that works fine if you do not
need persistence guarantees.
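For reference, the layout under discussion could be expressed roughly like this in the new configuration (a sketch only — setter names such as setStoragePath/setWalPath/setWalArchivePath/setSwapPath follow the naming proposed in this thread and may differ in the final API):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StorageConfigSketch {
    public static IgniteConfiguration configure() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Data/index files and WAL may live on different devices.
        storageCfg.setStoragePath("/storage1/nvme_drive/storage");
        storageCfg.setWalPath("/storage2/ssd_drive/wal");          // overrides the default location
        storageCfg.setWalArchivePath("/storage2/ssd_drive/wal/archive");

        // In-memory region with swap overflow (no persistence guarantees).
        DataRegionConfiguration inMemRegion = new DataRegionConfiguration();
        inMemRegion.setName("inMemoryWithSwap");
        inMemRegion.setSwapPath("/storage3/swap");                 // a directory, not a single file

        storageCfg.setDataRegionConfigurations(inMemRegion);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}
```

All paths shown are hypothetical; relative paths would resolve under $IGNITE_HOME as described below.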

2017-10-16 21:00 GMT+03:00 Ivan Rakov :

> *swapPath* is ok for me. It is also consistent with *walPath* and
> *walArchivePath*.
>
> Regarding persistencePath/storagePath, I don't like the idea of the WAL path
> being implicitly changed, especially when we have a separate option for it.
> WAL and storage files are already located under the same $IGNITE_HOME root.
> From the user's perspective, there's no need to change the root for all
> persistence-related directories as long as $IGNITE_HOME points to the
> correct disk.
> From the developer's perspective, this change breaks backwards compatibility.
> Maintaining backwards compatibility in a fail-safe way (checking both
> old-style and new-style paths) is complex and hard to keep correct in the
> codebase.
>
> Best Regards,
> Ivan Rakov
>
> My vote is for *storagePath* and keeping the behavior as is.
>
>
> On 16.10.2017 16:53, Pavel Tupitsyn wrote:
>
>> Igniters, another thing to consider:
>>
>> DataRegionConfiguration.SwapFilePath should be SwapPath,
>> since this is actually not a single file, but a directory path.
>>
>> On Fri, Oct 13, 2017 at 7:53 PM, Denis Magda  wrote:
>>
>> Seems I've got what you’re talking about.
>>>
>>> I’ve tried to change the root directory (*persistencePath*) and saw that
>>> only data/indexes were placed in it while the WAL stayed somewhere in my work
>>> dir. It works counterintuitively and causes unproductive discussions like
>>> the one we are in, arguing about *persistencePath* or *storagePath*. Neither name
>>> fits this behavior.
>>>
>>> My suggestion will be the following:
>>> - *persistencePath* refers to the path of all storage files
>>> (data/indexes,
>>> wal, archive). If the path is changed *all the files* will be under the
>>> new
>>> directory unless *setWalPath* and *setWalArchivePath* are set
>>> *explicitly*.
>>> - *setWalPath* overrides the default location of WAL (which is
>>> setPersistencePath)
>>> - *setWalArchivePath* overrides the default location of the archive
>>> (which, again, defaults to setPersistencePath).
>>>
>>> If we follow this approach, the configuration and behavior become clear.
>>> Thoughts?
>>>
>>> —
>>> Denis
>>>
>>> On Oct 13, 2017, at 1:21 AM, Ivan Rakov  wrote:

 Denis,

 Data/index storage and WAL are located under the same root by default.
 However, this is not mandatory: *storagePath* and *walPath* properties

>>> can contain both absolute and relative paths. If paths are absolute,
>>> storage and WAL can reside on different devices, like this:
>>>
 storagePath: /storage1/NMVe_drive/storage
> walPath: /storage2/Big_SSD_drive/wal
>
 We even recommend this in the tuning guide:
 https://apacheignite.readme.io/docs/durable-memory-tuning
>>>
 That's why I think *persistencePath* is misleading.

 Best Regards,
 Ivan Rakov

 On 13.10.2017 5:03, Dmitriy Setrakyan wrote:

> On Thu, Oct 12, 2017 at 7:01 PM, Denis Magda 
>
 wrote:
>>>
>>  From what I see after running an example they are under the same root
>> folder and in different subdirectories. The root folder should be
>> defined by setPersistencePath, as I guess.
>
> If that is the case, then you are right. Then we should not have
> storagePath or walPath, and store them both under the "persistencePath"
> root.
> However, I would need Alexey Goncharuk or Ivan Rakov to confirm this.


Wrong SQL statement

2017-10-17 Thread Иван Федотов
Hello, Igniters!


Currently, if one tries to execute `INSERT INTO t1 VALUES(...);`, it fails with
“IgniteSQLException: Failed to parse query: INSERT INTO Person VALUES(?,?)

…

Caused by: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
parse query: INSERT INTO Person VALUES(?,?)

…

Caused by: org.h2.jdbc.JdbcSQLException: Неверное количество столбцов” (in English: “Invalid column count”)

It looks like a bug, because:


 1. H2 supports the format “Insert into t1 values()” [1]

 2. SQL-92 tells us it is a correct query (paragraph 13.8, insert
statement, syntax rules) [2].


So, I want to create a ticket to fix it. What do you think?


Reproducer:


public class IgniteSqlAllColumnsInsertTest extends GridCommonAbstractTest {
    public void testSqlInsert() throws Exception {
        try (Ignite ignite = startGrid(0)) {
            CacheConfiguration cacheCfg = new CacheConfiguration("CachePerson").setSqlSchema("PUBLIC");

            IgniteCache cache = ignite.getOrCreateCache(cacheCfg);

            cache.query(new SqlFieldsQuery("CREATE TABLE Person (Name varchar, Age int, primary key (Name))"));

            // Good query
            QueryCursor cursor = cache.query(
                new SqlFieldsQuery("INSERT INTO Person (Name, Age) VALUES (?,?)")
                    .setArgs("Alice", 23)
            );

            assertEquals(1L, cursor.getAll().get(0).get(0));

            // Bad query
            cursor = cache.query(
                new SqlFieldsQuery("INSERT INTO Person VALUES(?,?)")
                    .setArgs("Bob", 25)
            );

            assertEquals(1L, cursor.getAll().get(0).get(0));
        }
    }
}




[1] http://www.h2database.com/html/history.html


[2]http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt


-- 
Ivan Fedotov.

ivanan...@gmail.com


Re: Clarification on DROP TABLE command needed

2017-10-17 Thread Vladimir Ozerov
Hi Denis,

1) Yes, the cache is destroyed and its data is purged.
2) We acquire a table lock, so DML and SELECT statements are blocked
until the operation is finished. An exception about the missing table is thrown
afterwards.
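The blocking behavior in (2) can be modeled with an ordinary read-write lock (a toy illustration of the protocol only; the class and method names are invented and this is not Ignite code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DropTableLockSketch {
    static final ReentrantReadWriteLock tableLock = new ReentrantReadWriteLock();
    static volatile boolean tableExists = true;

    // DML/SELECT take the shared lock: they block while DROP is in progress
    // and observe the missing table once it completes.
    static String select() {
        tableLock.readLock().lock();
        try {
            return tableExists ? "rows" : "error: table not found";
        } finally {
            tableLock.readLock().unlock();
        }
    }

    // DROP TABLE takes the exclusive lock: no DML/SELECT can run meanwhile.
    static void dropTable() {
        tableLock.writeLock().lock();
        try {
            tableExists = false; // destroy the cache and purge its data
        } finally {
            tableLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(select()); // prints "rows"
        dropTable();
        System.out.println(select()); // prints "error: table not found"
    }
}
```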

On Tue, Oct 17, 2017 at 1:40 AM, Denis Magda  wrote:

> Alex P., Vladimir,
>
> I’m finalizing DROP TABLE command documentation and unsure what happens
> with the following once the command is issued:
>
> 1. Do we destroy the underlying cache and purge all its content?
>
> 2. What happens to DML operations targeting a table that is somewhere in
> the middle of the drop process? Do we block such DML commands waiting while
> the DROP TABLE is over or kick them off right away with some special
> exception?
>
> —
> Denis