Re: Failed to start client node

2015-12-14 Thread Alexey Kuznetsov
Denis, after running mvn clean package, classnames.properties was not marked as
changed in IDEA.

On Mon, Dec 14, 2015 at 2:57 PM, Denis Magda  wrote:

> Alex,
>
> Have you updated and pushed classnames.properties? We have to do this
> otherwise the end user will have the same issues in runtime.
>
> Regards,
> Denis
>
>
> On 12/14/2015 10:41 AM, Alexey Kuznetsov wrote:
>
>> Thanks, Semyon.
>>
>> mvn clean package -DskipTests -DskipClientDocs -Dmaven.javadoc.skip=true
>>
>> Solved this issue.
>>
>> On Mon, Dec 14, 2015 at 1:01 PM, Semyon Boikov 
>> wrote:
>>
>> Alexey,
>>>
>>> You could get this error if classnames.properties was not regenerated
>>> after
>>> recent classes renaming,
>>>
>>>
>>> On Mon, Dec 14, 2015 at 7:45 AM, Alexey Kuznetsov <
>>> akuznet...@gridgain.com
>>> wrote:
>>>
>>> Igniters,

 My code that worked fine a couple of days ago started to fail with

>>> exception:
>>>
   java.lang.AssertionError: BinaryMetadataKey [typeId=1877955432]

 Also, if I fall back to OptimizedMarshaller, my code works fine.

 I created blocker: https://issues.apache.org/jira/browse/IGNITE-2148

 --
 Alexey Kuznetsov
 GridGain Systems
 www.gridgain.com


>>
>>
>


-- 
Alexey Kuznetsov
GridGain Systems
www.gridgain.com


Re: Using HDFS as a secondary FS

2015-12-14 Thread Denis Magda

Hi Ivan,

1) Yes, I think it makes sense to keep the old versions of the docs for as 
long as an old version is still in use by someone.


2) Absolutely, the time has come to add a corresponding article on readme.io. 
This is not the first time I've seen a question about HDFS as a 
secondary FS.
It has never been clear to me what exact steps to follow to enable such a 
configuration; our current suggestions look like a puzzle.
I'll assemble the puzzle on my side and prepare the article. Ivan, if you 
don't mind, I will reach out to you directly for any technical 
assistance if needed.


Regards,
Denis

On 12/14/2015 10:25 AM, Ivan V. wrote:

Hi, Valentin,

1) first of all, note that the author of the question is not using the latest
doc page; they reference
http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system .
This is version 1.0, while the latest is 1.5:
https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
appeared that some links from the latest doc version point to 1.0 doc
version. I fixed that in several places where I found that. Do we really
need old doc versions (1.0-1.4)?

2) our documentation (
http://apacheignite.gridgain.org/docs/secondary-file-system) does not
provide any special setup instructions to configure HDFS as secondary file
system in Ignite. Our docs assume that if a user wants to integrate with
Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop). It
looks like the page
http://apacheignite.gridgain.org/docs/secondary-file-system should be clearer
about the required configuration steps (in fact, setting the
HADOOP_HOME variable for the Ignite node process).

3) Hadoop jars are correctly found by Ignite if the following conditions
are met:
(a) The "Hadoop Edition" distribution is used (not a "Fabric" edition).
(b) Either HADOOP_HOME environment variable is set up (for Apache Hadoop
distribution), or file "/etc/default/hadoop" exists and matches the Hadoop
distribution used (BigTop, Cloudera, HDP, etc.)

The exact mechanism of the Hadoop classpath composition can be found in
files
IGNITE_HOME/bin/include/hadoop-classpath.sh
IGNITE_HOME/bin/include/setenv.sh .

The issue is discussed in https://issues.apache.org/jira/browse/IGNITE-372
, https://issues.apache.org/jira/browse/IGNITE-483 .
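
Putting points (2) and (3) together, the configuration such an article would
need to spell out might look like the following Spring fragment. This is a
sketch only: the bean class names are as of Ignite 1.5, and the IGFS name and
the HDFS host/port are placeholders.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="fileSystemConfiguration">
        <list>
            <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
                <property name="name" value="igfs"/>
                <!-- Read and write through to the secondary file system. -->
                <property name="defaultMode" value="DUAL_SYNC"/>
                <property name="secondaryFileSystem">
                    <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                        <!-- Placeholder HDFS URI. -->
                        <constructor-arg value="hdfs://namenode-host:9000/"/>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>
```

Even with this in place, the Hadoop JARs must still be on the node classpath,
i.e. the Hadoop Accelerator distribution with HADOOP_HOME set as described in
point (3).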

On Sat, Dec 12, 2015 at 3:45 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:


Igniters,

I'm looking at the question on SO [1] and I'm a bit confused.

We ship ignite-hadoop module only in Hadoop Accelerator and without Hadoop
JARs, assuming that user will include them from the Hadoop distribution he
uses. It seems OK for me when accelerator is plugged in to Hadoop to run
mapreduce jobs, but I can't figure out steps required to configure HDFS as
a secondary FS for IGFS. Which Hadoop JARs should be on classpath? Is user
supposed to add them manually?

Can someone with more expertise in our Hadoop integration clarify this? I
believe there is not enough documentation on this topic.

BTW, any ideas why user gets exception for JobConf class which is in
'mapred' package? Why map-reduce class is being used?

[1]

http://stackoverflow.com/questions/34221355/apache-ignite-what-are-the-dependencies-of-ignitehadoopigfssecondaryfilesystem

-Val





Re: Failed to start client node

2015-12-14 Thread Denis Magda

Alex,

Try building the package and replacing classnames.properties in the 
project with the classnames.properties from your output directory.
There should be changes in it. Otherwise everyone will have to do this 
clean step, and not everyone is reading this thread. :)


--
Denis

On 12/14/2015 11:08 AM, Alexey Kuznetsov wrote:

Denis, after mvn clean package classnames.properties was not marked as
changed in Idea.

On Mon, Dec 14, 2015 at 2:57 PM, Denis Magda  wrote:


Alex,

Have you updated and pushed classnames.properties? We have to do this
otherwise the end user will have the same issues in runtime.

Regards,
Denis


On 12/14/2015 10:41 AM, Alexey Kuznetsov wrote:


Thanks, Semyon.

mvn clean package -DskipTests -DskipClientDocs -Dmaven.javadoc.skip=true

Solved this issue.

On Mon, Dec 14, 2015 at 1:01 PM, Semyon Boikov 
wrote:

Alexey,

You could get this error if classnames.properties was not regenerated
after
recent classes renaming,


On Mon, Dec 14, 2015 at 7:45 AM, Alexey Kuznetsov <
akuznet...@gridgain.com
wrote:

Igniters,

My code that worked fine a couple of days ago started to fail with


exception:


   java.lang.AssertionError: BinaryMetadataKey [typeId=1877955432]

Also, if I fall back to OptimizedMarshaller, my code works fine.

I created blocker: https://issues.apache.org/jira/browse/IGNITE-2148

--
Alexey Kuznetsov
GridGain Systems
www.gridgain.com










Re: Using HDFS as a secondary FS

2015-12-14 Thread Valentin Kulichenko
Guys,

Why don't we include the ignite-hadoop module in Fabric? This user simply wants
to configure HDFS as a secondary file system to ensure persistence. Not
being able to do this in Fabric looks weird to me, and actually
I don't think this is a use case for Hadoop Accelerator.

-Val

On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda  wrote:

> Hi Ivan,
>
> 1) Yes, I think that it makes sense to have the old versions of the docs
> while an old version is still considered to be used by someone.
>
> 2) Absolutely, the time to add a corresponding article on the readme.io
> has come. It's not the first time I see the question related to HDFS as a
> secondary FS.
> Before and now it's not clear for me what exact steps I should follow to
> enable such a configuration. Our current suggestions look like a puzzle.
> I'll assemble the puzzle on my side and prepare the article. Ivan if you
> don't mind I would reaching you out directly asking for any technical
> assistance if needed.
>
> Regards,
> Denis
>
>
> On 12/14/2015 10:25 AM, Ivan V. wrote:
>
>> Hi, Valentin,
>>
>> 1) first of all note that the author of the question uses not the latest
>> doc page, namely
>> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system .
>> This is version 1.0, while the latest is 1.5:
>> https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
>> appeared that some links from the latest doc version point to 1.0 doc
>> version. I fixed that in several places where I found that. Do we really
>> need old doc versions (1.0 -1.4)?
>>
>> 2) our documentation (
>> http://apacheignite.gridgain.org/docs/secondary-file-system) does not
>> provide any special setup instructions to configure HDFS as secondary file
>> system in Ignite. Our docs assume that if a user wants to integrate with
>> Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
>> http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop). It
>> looks like the page
>> http://apacheignite.gridgain.org/docs/secondary-file-system should be
>> more
>> clear regarding the required configuration steps (in fact, setting up
>> HADOOP_HOME variable for Ignite node process).
>>
>> 3) Hadoop jars are correctly found by Ignite if the following conditions
>> are met:
>> (a) The "Hadoop Edition" distribution is used (not a "Fabric" edition).
>> (b) Either HADOOP_HOME environment variable is set up (for Apache Hadoop
>> distribution), or file "/etc/default/hadoop" exists and matches the Hadoop
>> distribution used (BigTop, Cloudera, HDP, etc.)
>>
>> The exact mechanism of the Hadoop classpath composition can be found in
>> files
>> IGNITE_HOME/bin/include/hadoop-classpath.sh
>> IGNITE_HOME/bin/include/setenv.sh .
>>
>> The issue is discussed in
>> https://issues.apache.org/jira/browse/IGNITE-372
>> , https://issues.apache.org/jira/browse/IGNITE-483 .
>>
>> On Sat, Dec 12, 2015 at 3:45 AM, Valentin Kulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>> Igniters,
>>>
>>> I'm looking at the question on SO [1] and I'm a bit confused.
>>>
>>> We ship ignite-hadoop module only in Hadoop Accelerator and without
>>> Hadoop
>>> JARs, assuming that user will include them from the Hadoop distribution
>>> he
>>> uses. It seems OK for me when accelerator is plugged in to Hadoop to run
>>> mapreduce jobs, but I can't figure out steps required to configure HDFS
>>> as
>>> a secondary FS for IGFS. Which Hadoop JARs should be on classpath? Is
>>> user
>>> supposed to add them manually?
>>>
>>> Can someone with more expertise in our Hadoop integration clarify this? I
>>> believe there is not enough documentation on this topic.
>>>
>>> BTW, any ideas why user gets exception for JobConf class which is in
>>> 'mapred' package? Why map-reduce class is being used?
>>>
>>> [1]
>>>
>>>
>>> http://stackoverflow.com/questions/34221355/apache-ignite-what-are-the-dependencies-of-ignitehadoopigfssecondaryfilesystem
>>>
>>> -Val
>>>
>>>
>


Re: Using HDFS as a secondary FS

2015-12-14 Thread Vladimir Ozerov
Valya,

Because we decide whether to load the Hadoop module based on its availability
on the classpath. And when the Hadoop module is loaded, certain restrictions are
applied to the configuration, e.g. peerClassLoadingEnabled must be false.
All this looks very inconvenient to me, but this is how things currently
work.

Vladimir.

On Mon, Dec 14, 2015 at 11:21 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Guys,
>
> Why don't we include ignite-hadoop module in Fabric? This user simply wants
> to configure HDFS as a secondary file system to ensure persistence. Not
> having the opportunity to do this in Fabric looks weird to me. And actually
> I don't think this is a use case for Hadoop Accelerator.
>
> -Val
>
> On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda  wrote:
>
> > Hi Ivan,
> >
> > 1) Yes, I think that it makes sense to have the old versions of the docs
> > while an old version is still considered to be used by someone.
> >
> > 2) Absolutely, the time to add a corresponding article on the readme.io
> > has come. It's not the first time I see the question related to HDFS as a
> > secondary FS.
> > Before and now it's not clear for me what exact steps I should follow to
> > enable such a configuration. Our current suggestions look like a puzzle.
> > I'll assemble the puzzle on my side and prepare the article. Ivan if you
> > don't mind I would reaching you out directly asking for any technical
> > assistance if needed.
> >
> > Regards,
> > Denis
> >
> >
> > On 12/14/2015 10:25 AM, Ivan V. wrote:
> >
> >> Hi, Valentin,
> >>
> >> 1) first of all note that the author of the question uses not the latest
> >> doc page, namely
> >> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system .
> >> This is version 1.0, while the latest is 1.5:
> >> https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
> >> appeared that some links from the latest doc version point to 1.0 doc
> >> version. I fixed that in several places where I found that. Do we really
> >> need old doc versions (1.0 -1.4)?
> >>
> >> 2) our documentation (
> >> http://apacheignite.gridgain.org/docs/secondary-file-system) does not
> >> provide any special setup instructions to configure HDFS as secondary
> file
> >> system in Ignite. Our docs assume that if a user wants to integrate with
> >> Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
> >> http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop). It
> >> looks like the page
> >> http://apacheignite.gridgain.org/docs/secondary-file-system should be
> >> more
> >> clear regarding the required configuration steps (in fact, setting up
> >> HADOOP_HOME variable for Ignite node process).
> >>
> >> 3) Hadoop jars are correctly found by Ignite if the following conditions
> >> are met:
> >> (a) The "Hadoop Edition" distribution is used (not a "Fabric" edition).
> >> (b) Either HADOOP_HOME environment variable is set up (for Apache Hadoop
> >> distribution), or file "/etc/default/hadoop" exists and matches the
> Hadoop
> >> distribution used (BigTop, Cloudera, HDP, etc.)
> >>
> >> The exact mechanism of the Hadoop classpath composition can be found in
> >> files
> >> IGNITE_HOME/bin/include/hadoop-classpath.sh
> >> IGNITE_HOME/bin/include/setenv.sh .
> >>
> >> The issue is discussed in
> >> https://issues.apache.org/jira/browse/IGNITE-372
> >> , https://issues.apache.org/jira/browse/IGNITE-483 .
> >>
> >> On Sat, Dec 12, 2015 at 3:45 AM, Valentin Kulichenko <
> >> valentin.kuliche...@gmail.com> wrote:
> >>
> >> Igniters,
> >>>
> >>> I'm looking at the question on SO [1] and I'm a bit confused.
> >>>
> >>> We ship ignite-hadoop module only in Hadoop Accelerator and without
> >>> Hadoop
> >>> JARs, assuming that user will include them from the Hadoop distribution
> >>> he
> >>> uses. It seems OK for me when accelerator is plugged in to Hadoop to
> run
> >>> mapreduce jobs, but I can't figure out steps required to configure HDFS
> >>> as
> >>> a secondary FS for IGFS. Which Hadoop JARs should be on classpath? Is
> >>> user
> >>> supposed to add them manually?
> >>>
> >>> Can someone with more expertise in our Hadoop integration clarify
> this? I
> >>> believe there is not enough documentation on this topic.
> >>>
> >>> BTW, any ideas why user gets exception for JobConf class which is in
> >>> 'mapred' package? Why map-reduce class is being used?
> >>>
> >>> [1]
> >>>
> >>>
> >>>
> http://stackoverflow.com/questions/34221355/apache-ignite-what-are-the-dependencies-of-ignitehadoopigfssecondaryfilesystem
> >>>
> >>> -Val
> >>>
> >>>
> >
>


Re: Client connect "hangs"

2015-12-14 Thread Denis Magda

Guys,

There is already a configuration property that lets a client complete its 
startup procedure even if there are no server nodes in the cluster: 
TcpDiscoverySpi.setForceServerMode.
The only side effect of this property is that the client node becomes 
part of the ring.

Is this property applicable, or do you want to support something different?
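
For reference, enabling the property looks like the following Spring fragment.
Property names are as of Ignite 1.5; treat this as a sketch rather than a
recommended setup:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="clientMode" value="true"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- Lets the client start even when no server nodes are alive,
                 at the cost of the client joining the ring. -->
            <property name="forceServerMode" value="true"/>
        </bean>
    </property>
</bean>
```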

--
Denis

On 12/12/2015 6:13 AM, Valentin Kulichenko wrote:

Dmitry,

How do you think, should we just change the behavior or make it
configurable?

-Val

On Fri, Dec 11, 2015 at 1:59 PM, Dmitriy Setrakyan 
wrote:


I agree that we have a consistency issue here. I am OK with the change.

On Fri, Dec 11, 2015 at 11:43 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:


Folks,

Currently there are two different ways how a client node behaves in case
there are no server nodes:

1. If it's trying to start, it will wait and block the thread that
called Ignition.start().
2. If server nodes left when it was already running, it will throw
disconnect exception on any API call.

It seems confusing to me (and not only to me, as far as I can see from

the

users' feedback). First of all, it's just inconsistent and requires
different processing for these different cases. Second of all, p.1 is

often

treated as a hang, but not as correct behavior. And it's getting worse

when

the node is started as a part of web application, because it blocks the
application startup process.

I think we should start a client node (or at least have a configurable
option) even if there are no servers yet. Until the first server joins,

it

will just throw disconnect exceptions.

Thoughts?

-Val





Re: Failed to start client node

2015-12-14 Thread Alexey Kuznetsov
Denis,

Thanks for pointing this out. The file has been rebuilt and pushed to ignite-1.5.

On Mon, Dec 14, 2015 at 3:14 PM, Denis Magda  wrote:

> Alex,
>
> Try to build a package and replace classnames.properties located in the
> project with the classnames.properties from your output directory.
> There should be the changes. Otherwise everyone has to do this clean step
> but not everyone is reading this thread. :)
>
> --
> Denis
>
>
> On 12/14/2015 11:08 AM, Alexey Kuznetsov wrote:
>
>> Denis, after mvn clean package classnames.properties was not marked as
>> changed in Idea.
>>
>> On Mon, Dec 14, 2015 at 2:57 PM, Denis Magda  wrote:
>>
>> Alex,
>>>
>>> Have you updated and pushed classnames.properties? We have to do this
>>> otherwise the end user will have the same issues in runtime.
>>>
>>> Regards,
>>> Denis
>>>
>>>
>>> On 12/14/2015 10:41 AM, Alexey Kuznetsov wrote:
>>>
>>> Thanks, Semyon.

 mvn clean package -DskipTests -DskipClientDocs -Dmaven.javadoc.skip=true

 Solved this issue.

 On Mon, Dec 14, 2015 at 1:01 PM, Semyon Boikov 
 wrote:

 Alexey,

> You could get this error if classnames.properties was not regenerated
> after
> recent classes renaming,
>
>
> On Mon, Dec 14, 2015 at 7:45 AM, Alexey Kuznetsov <
> akuznet...@gridgain.com
> wrote:
>
> Igniters,
>
>> My code that worked fine a couple of days ago started to fail with
>>
>> exception:
>
>java.lang.AssertionError: BinaryMetadataKey [typeId=1877955432]
>>
>> Also, if I fall back to OptimizedMarshaller, my code works fine.
>>
>> I created blocker: https://issues.apache.org/jira/browse/IGNITE-2148
>>
>> --
>> Alexey Kuznetsov
>> GridGain Systems
>> www.gridgain.com
>>
>>
>>

>>
>


-- 
Alexey Kuznetsov
GridGain Systems
www.gridgain.com


Re: Client connect "hangs"

2015-12-14 Thread Valentin Kulichenko
Denis,

Yes, this can be a workaround, but at the same time it makes things even
more confusing :) It means that client node behavior depends on
a property of the discovery SPI, while that property should influence only
the internals of the discovery protocol.

I think the client should always work in the same way: start without
blocking and then throw disconnect exceptions while there are no
servers. Currently this behavior depends on the presence of server nodes,
the forceServerMode flag and probably something else, which makes it unpredictable.

-Val

On Monday, December 14, 2015, Denis Magda  wrote:

> Guys,
>
> There is already a configuration property that lets to complete client's
> launching procedure even if there is no any server node in a cluster -
> TcpDiscoverySpi.setForceServerMode.
> The only side effect of this property is that a client node will become a
> part of the ring.
>
> Is this property applicable or you want to support something different?
>
> --
> Denis
>
> On 12/12/2015 6:13 AM, Valentin Kulichenko wrote:
>
>> Dmitry,
>>
>> How do you think, should we just change the behavior or make it
>> configurable?
>>
>> -Val
>>
>> On Fri, Dec 11, 2015 at 1:59 PM, Dmitriy Setrakyan > >
>> wrote:
>>
>> I agree that we have a consistency issue here. I am OK with the change.
>>>
>>> On Fri, Dec 11, 2015 at 11:43 AM, Valentin Kulichenko <
>>> valentin.kuliche...@gmail.com> wrote:
>>>
>>> Folks,

 Currently there are two different ways how a client node behaves in case
 there are no server nodes:

 1. If it's trying to start, it will wait and block the thread that
 called Ignition.start().
 2. If server nodes left when it was already running, it will throw
 disconnect exception on any API call.

 It seems confusing to me (and not only to me, as far as I can see from

>>> the
>>>
 users' feedback). First of all, it's just inconsistent and requires
 different processing for these different cases. Second of all, p.1 is

>>> often
>>>
 treated as a hang, but not as correct behavior. And it's getting worse

>>> when
>>>
 the node is started as a part of web application, because it blocks the
 application startup process.

 I think we should start a client node (or at least have a configurable
 option) even if there are no servers yet. Until the first server joins,

>>> it
>>>
 will just throw disconnect exceptions.

 Thoughts?

 -Val


>


[GitHub] ignite pull request: IGNITE-2147 fixed reload data in ui ace tabs

2015-12-14 Thread Dmitriyff
GitHub user Dmitriyff opened a pull request:

https://github.com/apache/ignite/pull/328

IGNITE-2147 fixed reload data in ui ace tabs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Dmitriyff/ignite ignite-2147

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/328.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #328


commit e17aab9aa83e54fc8fca709654241ba683385bfd
Author: Dmitriyff 
Date:   2015-12-14T08:40:21Z

IGNITE-2147 added fix to reload data on tabs

commit a2a45de09427c7c426d37cb529eeb6398e77075e
Author: Dmitriyff 
Date:   2015-12-14T08:43:14Z

Merge branch 'ignite-843-rc2' of https://github.com/apache/ignite into 
ignite-2147




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request: IGNITE-2124 - Do not notify DS manager for us...

2015-12-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/313




[GitHub] ignite pull request: IGNITE-1932 Fixed wrong value of BusyTimePerc...

2015-12-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/242




Re: [GitHub] ignite pull request: [Test] IgniteSemaphore's failover related tes...

2015-12-14 Thread Denis Magda

Saikat,

Thanks for fixing this compilation issue. I've merged the changes into 
master.


--
Denis

On 12/13/2015 11:48 AM, samaitra wrote:

GitHub user samaitra opened a pull request:

 https://github.com/apache/ignite/pull/326

 [Test] IgniteSemaphore's failover related tests are failing to compile

 


You can merge this pull request into a Git repository by running:

 $ git pull https://github.com/samaitra/ignite IGNITE-2144

Alternatively you can review and apply these changes as the patch at:

 https://github.com/apache/ignite/pull/326.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

 This closes #326
 


commit 10ff1914d487ef65760c3baafde087178d558f5c
Author: samaitra 
Date:   2015-12-13T08:45:06Z

 [Test] IgniteSemaphore's failover related tests are failing to compile








Introduction

2015-12-14 Thread Ilya Lantukh
Dear Igniters,

My name is Ilya Lantukh, I am a Java Developer from St. Petersburg, Russia
and I've recently joined the Apache Ignite community.
I hope to find here a lot of opportunities to improve my skills and
knowledge and I wish to be helpful in solving current problems and
developing new functionality.

-- 
Best regards,
Ilya


Re: Introduction

2015-12-14 Thread Sergi Vladykin
Welcome, Ilya!

Should you have any questions, feel free to ask here.

Sergi

2015-12-14 12:14 GMT+03:00 Ilya Lantukh :

> Dear Igniters,
>
> My name is Ilya Lantukh, I am a Java Developer from St. Petersburg, Russia
> and I've recently joined the Apache Ignite community.
> I hope to find here a lot of opportunities to improve my skills and
> knowledge and I wish to be helpful in solving current problems and
> developing new functionality.
>
> --
> Best regards,
> Ilya
>


Re: Work directory cleanup between tests

2015-12-14 Thread Yakov Zhdanov
I am against changing core logic just for the test environment.

How about cleaning up the disk storage after each test class completes?

--Yakov

2015-12-14 10:12 GMT+03:00 Vladimir Ozerov :

> Folks,
>
> As you know, our marshallers persist some data to disk, namely mappings
> from class name to class ID. For this reason some tests run with a clean
> work directory, while others run with a dirty one.
>
> This hides some marshaller bugs. E.g. after work dir cleanup I immediately
> received several failures in BinaryObjectBuilderSelfTest.
>
> I think we must fix this ASAP and ensure that all tests run in a clean
> environment. Probably in test environment we should store this metadata in
> a kind of node-bound HashMap or so.
>
> Thoughts?
>
> Vladimir.
>


Re: 1.5 GA

2015-12-14 Thread Romain Gilles
Hi Igniters,
Yes, but could you please make sure the release includes the Ignite OSGi modules?
Regards,

Romain

Le lun. 14 déc. 2015 à 07:27, Dmitriy Setrakyan  a
écrit :

> Igniters,
>
> Since we have releases 1.5-b1, the community has reported several issues,
> which were addressed or are being addressed.
>
> Would it be realistic to plan releasing GA by the end of the week?
>
> D.
>


[jira] [Created] (IGNITE-2152) .NET: Introduction page is missing.

2015-12-14 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-2152:
---

 Summary: .NET: Introduction page is missing.
 Key: IGNITE-2152
 URL: https://issues.apache.org/jira/browse/IGNITE-2152
 Project: Ignite
  Issue Type: Bug
  Components: interop
Affects Versions: ignite-1.4
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
 Fix For: 1.5






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] ignite pull request: IGNITE-2065 Binary renaming: Platforms

2015-12-14 Thread ashutakGG
GitHub user ashutakGG opened a pull request:

https://github.com/apache/ignite/pull/329

IGNITE-2065 Binary renaming: Platforms

https://issues.apache.org/jira/browse/IGNITE-2065

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ashutakGG/incubator-ignite 
ignite-2065-platforms

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/329.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #329


commit 717dab259e3f0287046ffcefa28cf9214ab65ff7
Author: sboikov 
Date:   2015-12-14T07:00:57Z

ignite-1.5 Fixed TcpDiscoveryMulticastIpFinder to request address on each 
getRegisteredAddresses call

commit 6d96bb6a3219255b62af826e6806b1aa564bb005
Author: vozerov-gridgain 
Date:   2015-12-14T07:08:00Z

Fix to GridBinaryWildcardsSelfTest.

commit 4ae6292cface325f21237db5d18ce77dee380072
Author: vozerov-gridgain 
Date:   2015-12-14T07:08:29Z

Fixed failure in BinaryObjectBuilderSelfTest.testCopyFromInnerObject.

commit ed27fbdab9d8307b4db427866ed1094526c117c8
Author: vozerov-gridgain 
Date:   2015-12-14T07:08:47Z

Merge remote-tracking branch 'origin/ignite-1.5' into ignite-1.5

commit 3db14482091d01e96e1ae40a15cb903c3adcddbd
Author: vozerov-gridgain 
Date:   2015-12-14T07:19:24Z

Fixed failure in BinaryObjectBuilderSelfTest.testSetBinaryObject.

commit acb57c5eb95d11ebde5557618226d80f25ac610c
Author: sboikov 
Date:   2015-12-14T07:40:57Z

ignite-1.5 Fixed NPE in IgniteKernal.dumpDebugInfo.

commit ec2a64714c176f3a5aca60a058686d3354dc6a76
Author: sboikov 
Date:   2015-12-14T07:41:18Z

Merge remote-tracking branch 'origin/ignite-1.5' into ignite-1.5

commit 484a3afd05b9ad5a524a28c517ddc0a6b9dffcbc
Author: sboikov 
Date:   2015-12-14T07:58:57Z

ignite-1.5 Added waitForCondition in tests with TTL.

commit 47f1ced214b4176b13293386c1e2042c9cc20b32
Author: sboikov 
Date:   2015-12-14T08:09:17Z

ignite-1.5 Fixed test to override correct 'getConfiguration' method.

commit 4d086734833d96b7e2403ce4f17f763a1c75caab
Author: sboikov 
Date:   2015-12-14T08:27:33Z

ignite-1.5 Use TcpDiscoveryVmIpFinder in test.

commit de0b1badbf34538e04e7cf4be5378ef929fb6201
Author: Alexey Kuznetsov 
Date:   2015-12-14T08:33:09Z

ignite-1.5 Updated classnames.properties

commit beb64c34c271b7eac1388179b11f042b34fa0d60
Author: sboikov 
Date:   2015-12-14T08:34:07Z

ignite-1.5 Increased test timeouts.

commit 72e5b9adfdcfcad4b0002bbfc1cf20fd3a0ed149
Author: sboikov 
Date:   2015-12-14T08:34:32Z

Merge remote-tracking branch 'origin/ignite-1.5' into ignite-1.5

commit a248c166ce60ec3d903529c874a408f7d977ea7c
Author: ashutak 
Date:   2015-12-14T09:25:48Z

ignite-2065: portable -> binary renaming (Fix platforms)






Re: CacheLoadOnlyStoreAdapter

2015-12-14 Thread Yakov Zhdanov
Renaming will break config compatibility. I don't think it is a good idea
to rename this, but we can update documentation to make things better.
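
For context, the pattern the adapter implements (as described in Val's quoted
message below: the user supplies the input records and a parsing function, and
the store fans the loading out across threads) can be sketched in plain Java.
The class below is a hypothetical standalone illustration of the idea, not
Ignite code.

```java
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.Collection;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

/** Hypothetical sketch of multithreaded "load-only" store logic. */
public class MultithreadedLoader<R, K, V> {
    private final int threads;

    public MultithreadedLoader(int threads) {
        this.threads = threads;
    }

    /** Parses input records on a thread pool and collects them into a map. */
    public Map<K, V> load(Collection<? extends R> input, Function<R, Map.Entry<K, V>> parse) {
        ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
        Queue<R> queue = new ConcurrentLinkedQueue<>(input);
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        // Each worker drains the shared queue, parsing records concurrently.
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                R rec;
                while ((rec = queue.poll()) != null) {
                    Map.Entry<K, V> e = parse.apply(rec);
                    cache.put(e.getKey(), e.getValue());
                }
            });
        }

        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return cache;
    }

    public static void main(String[] args) {
        MultithreadedLoader<String, String, Integer> loader = new MultithreadedLoader<>(4);

        // Each "DB row" is turned into a key/value pair by the user-supplied parser.
        Map<String, Integer> cache = loader.load(
            Arrays.asList("a=1", "b=2", "c=3"),
            rec -> {
                String[] parts = rec.split("=");
                return new AbstractMap.SimpleEntry<>(parts[0], Integer.parseInt(parts[1]));
            });

        System.out.println(cache.size()); // prints 3
    }
}
```

The naming question in this thread is about exactly this split: the user writes
only the iterator/parser part, everything else is the adapter's.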

--Yakov

2015-12-13 19:25 GMT+03:00 Vladimir Ozerov :

> +1
> I was really stuck when I saw the current name for the first time.
>
> On Sat, Dec 12, 2015 at 3:32 AM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Igniters,
> >
> > We have a class that is called CacheLoadOnlyStoreAdapter. This is a cache
> > store adapter that implements utility logic for multithreaded data
> loading,
> > so that user has to provide only the logic that queries DB and parses DB
> > row into object. Everything else is done automatically.
> >
> > It's a cool feature, but the name looks really confusing to me. It's
> > definitely not 'load-only' store, because it implements CacheWriter.
> >
> > How about we rename it to CacheMultithreadedLoadStoreAdapter? Any other
> > suggestions?
> >
> > -Val
> >
>


Re: Work directory cleanup between tests

2015-12-14 Thread Vladimir Ozerov
This doesn't work because there will still be interference between tests
within the same class.

The proposed change is minimal. We already have an interface, MarshallerContext.
All we need is to create a dummy implementation of it and change a single line
in the GridKernalContextImpl class.
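
The node-bound map idea amounts to keeping the class-name/ID registry in memory
rather than in the work directory. A minimal standalone sketch of that idea
follows; this is not the real MarshallerContext interface, whose exact
signatures differ.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Hypothetical test-only registry: mappings live per node instance, not on disk. */
public class InMemoryClassNameRegistry {
    private final ConcurrentMap<Integer, String> idToName = new ConcurrentHashMap<>();

    /** Registers a mapping; returns false on an ID collision with another class. */
    public boolean register(int typeId, String clsName) {
        String prev = idToName.putIfAbsent(typeId, clsName);
        return prev == null || prev.equals(clsName);
    }

    /** Returns the class name for the given type ID, or null if unknown. */
    public String className(int typeId) {
        return idToName.get(typeId);
    }

    public static void main(String[] args) {
        InMemoryClassNameRegistry reg = new InMemoryClassNameRegistry();

        System.out.println(reg.register(1877955432, "org.example.Foo")); // prints true
        System.out.println(reg.register(1877955432, "org.example.Bar")); // prints false
        System.out.println(reg.className(1877955432));                   // prints org.example.Foo
    }
}
```

Since each test (or node) creates its own instance, every run starts from a
clean environment without touching the work directory.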

On Mon, Dec 14, 2015 at 12:22 PM, Yakov Zhdanov  wrote:

> I am against changing core logic just for test env.
>
> How about cleaning up the disc storage after test class completes?
>
> --Yakov
>
> 2015-12-14 10:12 GMT+03:00 Vladimir Ozerov :
>
> > Folks,
> >
> > As you know, our marshallers persist some data to disk, namely mappings
> > from class name to class ID. For this reason some tests work with a clean
> > work directory, while others work with a dirty one.
> >
> > This hides some marshaller bugs. E.g., after a work dir cleanup I
> > immediately received several failures in BinaryObjectBuilderSelfTest.
> >
> > I think we must fix this ASAP and ensure that all tests run in a clean
> > environment. Probably in test environment we should store this metadata
> in
> > a kind of node-bound HashMap or so.
> >
> > Thoughts?
> >
> > Vladimir.
> >
>
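A test-only, node-bound context along the lines proposed could be as simple as a map-backed registry. The class and method signatures below are a simplified stand-in for illustration, not the real MarshallerContext interface:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class InMemoryMarshallerContext {
    // Node-local mapping from class ID to class name: nothing touches the
    // disk, so every test starts from a clean slate.
    private final ConcurrentMap<Integer, String> idToName = new ConcurrentHashMap<>();

    /** Registers a mapping; returns false on a conflicting ID. */
    public boolean registerClassName(int id, String clsName) {
        String old = idToName.putIfAbsent(id, clsName);
        return old == null || old.equals(clsName);
    }

    /** Resolves a class name previously registered under the given ID. */
    public String className(int id) {
        return idToName.get(id);
    }

    public static void main(String[] args) {
        InMemoryMarshallerContext ctx = new InMemoryMarshallerContext();
        System.out.println(ctx.registerClassName(1, "org.example.Foo")); // true
        System.out.println(ctx.registerClassName(1, "org.example.Bar")); // false: ID conflict
        System.out.println(ctx.className(1));                            // org.example.Foo
    }
}
```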


Re: Client connect "hangs"

2015-12-14 Thread Yakov Zhdanov
I am ok with the suggestion. Val, can you please file a ticket (or I guess we
already have one) and add your suggestion to it.

--Yakov

2015-12-14 11:47 GMT+03:00 Valentin Kulichenko <
valentin.kuliche...@gmail.com>:

> Denis,
>
> Yes, this can be a workaround, but at the same time it makes things even
> more confusing :) This means that client node behavior depends on
> some property on discovery SPI, while this property should influence only
> internals of discovery protocol.
>
> I think the client should always work in the same way: start without
> blocking and then throw disconnect exception if there are no
> servers. Currently this behavior depends on the presence of server nodes,
> the forceServerMode flag and probably something else, which makes it
> unpredictable.
>
> -Val
>
> On Monday, December 14, 2015, Denis Magda  wrote:
>
> > Guys,
> >
> > There is already a configuration property that lets a client complete its
> > launch procedure even if there is no server node in the cluster -
> > TcpDiscoverySpi.setForceServerMode.
> > The only side effect of this property is that a client node will become a
> > part of the ring.
> >
> > Is this property applicable or you want to support something different?
> >
> > --
> > Denis
> >
> > On 12/12/2015 6:13 AM, Valentin Kulichenko wrote:
> >
> >> Dmitry,
> >>
> >> How do you think, should we just change the behavior or make it
> >> configurable?
> >>
> >> -Val
> >>
> >> On Fri, Dec 11, 2015 at 1:59 PM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> >> >
> >> wrote:
> >>
> >> I agree that we have a consistency issue here. I am OK with the change.
> >>>
> >>> On Fri, Dec 11, 2015 at 11:43 AM, Valentin Kulichenko <
> >>> valentin.kuliche...@gmail.com> wrote:
> >>>
> >>> Folks,
> 
>  Currently there are two different ways how a client node behaves in
> case
>  there are no server nodes:
> 
>  1. If it's trying to start, it will wait and block the thread that
>  called Ignition.start().
>  2. If server nodes left when it was already running, it will throw
>  disconnect exception on any API call.
> 
>  It seems confusing to me (and not only to me, as far as I can see from
> 
> >>> the
> >>>
>  users' feedback). First of all, it's just inconsistent and requires
>  different processing for these different cases. Second of all, p.1 is
> 
> >>> often
> >>>
>  treated as a hang, but not as correct behavior. And it's getting worse
> 
> >>> when
> >>>
>  the node is started as a part of web application, because it blocks
> the
>  application startup process.
> 
>  I think we should start a client node (or at least have a configurable
>  option) even if there are no servers yet. Until the first server
> joins,
> 
> >>> it
> >>>
>  will just throw disconnect exceptions.
> 
>  Thoughts?
> 
>  -Val
> 
> 
> >
>
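For reference, the workaround Denis describes in this thread boils down to one discovery SPI property. A minimal Spring XML sketch (surrounding configuration omitted):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="clientMode" value="true"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- Client starts without blocking even if no server is up;
                 side effect: the client becomes part of the ring. -->
            <property name="forceServerMode" value="true"/>
        </bean>
    </property>
</bean>
```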


[jira] [Created] (IGNITE-2153) .NET: Package descriptions are missing in doxygen-generated files.

2015-12-14 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-2153:
---

 Summary: .NET: Package descriptions are missing in 
doxygen-generated files.
 Key: IGNITE-2153
 URL: https://issues.apache.org/jira/browse/IGNITE-2153
 Project: Ignite
  Issue Type: Bug
  Components: interop
Affects Versions: ignite-1.4
Reporter: Vladimir Ozerov
 Fix For: 1.5






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-2154) .NET: doxygen docs generation must be included into Maven build process.

2015-12-14 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-2154:
---

 Summary: .NET: doxygen docs generation must be included into Maven 
build process.
 Key: IGNITE-2154
 URL: https://issues.apache.org/jira/browse/IGNITE-2154
 Project: Ignite
  Issue Type: Task
  Components: interop
Affects Versions: ignite-1.4
Reporter: Vladimir Ozerov
Priority: Critical
 Fix For: 1.5








[GitHub] ignite pull request: IGNITE-2138 Introduced toBuilder() method.

2015-12-14 Thread agoncharuk
GitHub user agoncharuk opened a pull request:

https://github.com/apache/ignite/pull/330

IGNITE-2138 Introduced toBuilder() method.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agoncharuk/ignite ignite-2138

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #330


commit fa68666281823995f62d084accbebd396801c445
Author: Alexey Goncharuk 
Date:   2015-12-14T09:46:25Z

IGNITE-2138 Introduced toBuilder() method.






Re: Using HDFS as a secondary FS

2015-12-14 Thread Ivan V.
To enable just IGFS persistence there is no need to use HDFS (which requires a
Hadoop dependency, a configured HDFS cluster, etc.).
We have requests https://issues.apache.org/jira/browse/IGNITE-1120 and
https://issues.apache.org/jira/browse/IGNITE-1926 to implement the
persistence on top of the local file system, and we are already close to a
solution.

Regarding the secondary FS doc page (
http://apacheignite.gridgain.org/docs/secondary-file-system) I would
suggest adding the following text there:

If an Ignite node with a secondary file system is configured on a machine with
a Hadoop distribution, make sure Ignite is able to find the appropriate Hadoop
libraries: set the HADOOP_HOME environment variable for the Ignite process if
you're using the Apache Hadoop distribution, or, if you use another
distribution (HDP, Cloudera, BigTop, etc.), make sure the /etc/default/hadoop
file exists and has appropriate contents.

If an Ignite node with a secondary file system is configured on a machine
without a Hadoop distribution, you can manually add the necessary Hadoop
dependencies to the Ignite node classpath: these are the dependencies with
groupId "org.apache.hadoop" listed in the file modules/hadoop/pom.xml.
Currently they are:

   1. hadoop-annotations
   2. hadoop-auth
   3. hadoop-common
   4. hadoop-hdfs
   5. hadoop-mapreduce-client-common
   6. hadoop-mapreduce-client-core



On Mon, Dec 14, 2015 at 11:21 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Guys,
>
> Why don't we include ignite-hadoop module in Fabric? This user simply wants
> to configure HDFS as a secondary file system to ensure persistence. Not
> having the opportunity to do this in Fabric looks weird to me. And actually
> I don't think this is a use case for Hadoop Accelerator.
>
> -Val
>
> On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda  wrote:
>
> > Hi Ivan,
> >
> > 1) Yes, I think that it makes sense to have the old versions of the docs
> > while an old version is still considered to be used by someone.
> >
> > 2) Absolutely, the time to add a corresponding article on the readme.io
> > has come. It's not the first time I see the question related to HDFS as a
> > secondary FS.
> > Before and now it's not clear for me what exact steps I should follow to
> > enable such a configuration. Our current suggestions look like a puzzle.
> > I'll assemble the puzzle on my side and prepare the article. Ivan, if you
> > don't mind, I would reach out to you directly to ask for any technical
> > assistance if needed.
> >
> > Regards,
> > Denis
> >
> >
> > On 12/14/2015 10:25 AM, Ivan V. wrote:
> >
> >> Hi, Valentin,
> >>
> >> 1) first of all note that the author of the question uses not the latest
> >> doc page, namely
> >> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system .
> >> This is version 1.0, while the latest is 1.5:
> >> https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
> >> appeared that some links from the latest doc version point to 1.0 doc
> >> version. I fixed that in several places where I found that. Do we really
> >> need old doc versions (1.0 -1.4)?
> >>
> >> 2) our documentation (
> >> http://apacheignite.gridgain.org/docs/secondary-file-system) does not
> >> provide any special setup instructions to configure HDFS as secondary
> file
> >> system in Ignite. Our docs assume that if a user wants to integrate with
> >> Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
> >> http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop). It
> >> looks like the page
> >> http://apacheignite.gridgain.org/docs/secondary-file-system should be
> >> more
> >> clear regarding the required configuration steps (in fact, setting up
> >> HADOOP_HOME variable for Ignite node process).
> >>
> >> 3) Hadoop jars are correctly found by Ignite if the following conditions
> >> are met:
> >> (a) The "Hadoop Edition" distribution is used (not a "Fabric" edition).
> >> (b) Either HADOOP_HOME environment variable is set up (for Apache Hadoop
> >> distribution), or file "/etc/default/hadoop" exists and matches the
> Hadoop
> >> distribution used (BigTop, Cloudera, HDP, etc.)
> >>
> >> The exact mechanism of the Hadoop classpath composition can be found in
> >> files
> >> IGNITE_HOME/bin/include/hadoop-classpath.sh
> >> IGNITE_HOME/bin/include/setenv.sh .
> >>
> >> The issue is discussed in
> >> https://issues.apache.org/jira/browse/IGNITE-372
> >> , https://issues.apache.org/jira/browse/IGNITE-483 .
> >>
> >> On Sat, Dec 12, 2015 at 3:45 AM, Valentin Kulichenko <
> >> valentin.kuliche...@gmail.com> wrote:
> >>
> >> Igniters,
> >>>
> >>> I'm looking at the question on SO [1] and I'm a bit confused.
> >>>
> >>> We ship ignite-hadoop module only in Hadoop Accelerator and without
> >>> Hadoop
> >>> JARs, assuming that user will include them from the Hadoop distribution
> >>> he
> >>> uses. It seems OK for me when accelerator is plugged in to Hadoop to
> run
> >>> mapreduce jobs, but I can't figure out 

Abandoned ticket: Implement IgniteStormStreamer to stream data from Apache Storm

2015-12-14 Thread Denis Magda

Hi Chandresh,

Is there any chance you could complete the ticket from the subject this month?
https://issues.apache.org/jira/browse/IGNITE-429

If you're too busy right now, do you mind if someone else from the
community picks it up and completes it? We want to see the feature as a part
of the upcoming releases.


--
Denis


Re: 1.5 GA

2015-12-14 Thread Yakov Zhdanov
Romain,

OSGi support was pushed to 1.5 after b1 was built. You can build Ignite
binaries from the ignite-1.5 branch to test the functionality.

--Yakov

2015-12-14 12:21 GMT+03:00 Romain Gilles :

> Hi Igniters,
> Yes, but could you please provide a build that includes the Ignite OSGi modules?
> Regards,
>
> Romain
>
> On Mon, Dec 14, 2015 at 07:27, Dmitriy Setrakyan  wrote:
>
> > Igniters,
> >
> > Since we have released 1.5-b1, the community has reported several issues,
> > which were addressed or are being addressed.
> >
> > Would it be realistic to plan releasing GA by the end of the week?
> >
> > D.
> >
>


Re: Abandoned ticket: Implement IgniteStormStreamer to stream data from Apache Storm

2015-12-14 Thread chandresh pancholi
Sure,
Will try to finish this by end of this month.
On Dec 14, 2015 3:59 PM, "Denis Magda"  wrote:

> Hi Chandresh,
>
> Is there any chance you could complete the ticket from the subject this
> month?
> https://issues.apache.org/jira/browse/IGNITE-429
>
> If you're too busy right now, do you mind if someone else from the
> community picks it up and completes it? We want to see the feature as a part
> of the upcoming releases.
>
> --
> Denis
>


Re: 1.5 GA

2015-12-14 Thread Yakov Zhdanov
Guys,

Here is the list of open issues for 1.5
https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%20fixVersion%20%3D%201.5%20AND%20status%20!%3D%20closed%20ORDER%20BY%20status%20DESC%2C%20due%20ASC%2C%20priority%20DESC%2C%20created%20DESC

I see 58 items in this list, however 17 are resolved and 4 are in patch
available state.

The most critical issues are assigned to
Alexey Goncharuk
Semyon Boikov
Artem Shutak
Vladimir Ozerov
Pavel Tupitsyn
Vladimir Ershov

I think we can release GA when the mentioned guys finish their tickets. I
think this is possible to accomplish prior to Wed, Dec 16, evening. The
remaining tickets can be moved to a later release.

I also want to ask everyone having resolved tickets to look them through
and close or reassign properly:
Raul Kripalani
Cos Boudnik
Denis Magda
Igor Sapego

Thanks!

--Yakov

2015-12-14 9:26 GMT+03:00 Dmitriy Setrakyan :

> Igniters,
>
> Since we have released 1.5-b1, the community has reported several issues,
> which were addressed or are being addressed.
>
> Would it be realistic to plan releasing GA by the end of the week?
>
> D.
>


Debug Output in Prod

2015-12-14 Thread Yakov Zhdanov
Guys,

I noticed the following code in the repo several days ago
(org/apache/ignite/internal/portable/BinaryWriterExImpl.java:1810):


out.unsafeEnsure(1 + 4);

out.unsafeWriteByte(GridPortableMarshaller.HANDLE);
out.unsafeWriteInt(pos - old);

if (obj.getClass().isArray())
System.out.println("CASE!");

return true;

Couple of points here:

1. When putting debug output into production code, use U.debug(). This method
at least gives a deprecation warning, which can be caught so that debug
printouts do not get into the repo; it also prints a timestamp and the thread
name.
2. Can we change release build to fail if U.debug() is somewhere in
production code? Anton V, perhaps you know how to achieve that?

--Yakov


[jira] [Created] (IGNITE-2155) CacheStarSchemaExample isn't clear and it seems the output is wrong

2015-12-14 Thread Sergey Kozlov (JIRA)
Sergey Kozlov created IGNITE-2155:
-

 Summary: CacheStarSchemaExample isn't clear and it seems the 
output is wrong
 Key: IGNITE-2155
 URL: https://issues.apache.org/jira/browse/IGNITE-2155
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 1.5
Reporter: Sergey Kozlov
Assignee: Yakov Zhdanov
 Fix For: 1.5


I ran CacheStarSchemaExample and can't understand what it actually does or
what the output means (e.g., why are all IDs negative numbers)?
{noformat}
All purchases made at store1:
FactPurchase [id=-1604674364, productId=-1604674374, storeId=-1604674395, 
purchasePrice=10.0]
FactPurchase [id=-1604674362, productId=-1604674377, storeId=-1604674395, 
purchasePrice=12.0]
FactPurchase [id=-1604674349, productId=-1604674382, storeId=-1604674395, 
purchasePrice=25.0]
FactPurchase [id=-1604674346, productId=-1604674393, storeId=-1604674395, 
purchasePrice=28.0]
FactPurchase [id=-1604674343, productId=-1604674375, storeId=-1604674395, 
purchasePrice=31.0]
FactPurchase [id=-1604674342, productId=-1604674392, storeId=-1604674395, 
purchasePrice=32.0]
FactPurchase [id=-1604674337, productId=-1604674375, storeId=-1604674395, 
purchasePrice=37.0]
FactPurchase [id=-1604674332, productId=-1604674384, storeId=-1604674395, 
purchasePrice=42.0]
FactPurchase [id=-1604674325, productId=-1604674388, storeId=-1604674395, 
purchasePrice=49.0]
FactPurchase [id=-1604674323, productId=-1604674391, storeId=-1604674395, 
purchasePrice=51.0]
FactPurchase [id=-1604674317, productId=-1604674390, storeId=-1604674395, 
purchasePrice=57.0]
FactPurchase [id=-1604674316, productId=-1604674389, storeId=-1604674395, 
purchasePrice=58.0]
FactPurchase [id=-1604674311, productId=-1604674389, storeId=-1604674395, 
purchasePrice=63.0]
FactPurchase [id=-1604674305, productId=-1604674377, storeId=-1604674395, 
purchasePrice=69.0]
FactPurchase [id=-1604674300, productId=-1604674382, storeId=-1604674395, 
purchasePrice=74.0]
FactPurchase [id=-1604674296, productId=-1604674392, storeId=-1604674395, 
purchasePrice=78.0]
FactPurchase [id=-1604674295, productId=-1604674378, storeId=-1604674395, 
purchasePrice=79.0]
FactPurchase [id=-1604674293, productId=-1604674377, storeId=-1604674395, 
purchasePrice=81.0]
FactPurchase [id=-1604674289, productId=-1604674378, storeId=-1604674395, 
purchasePrice=85.0]
FactPurchase [id=-1604674285, productId=-1604674379, storeId=-1604674395, 
purchasePrice=89.0]
FactPurchase [id=-1604674281, productId=-1604674379, storeId=-1604674395, 
purchasePrice=93.0]
FactPurchase [id=-1604674371, productId=-1604674383, storeId=-1604674395, 
purchasePrice=3.0]
FactPurchase [id=-1604674370, productId=-1604674392, storeId=-1604674395, 
purchasePrice=4.0]
FactPurchase [id=-1604674368, productId=-1604674389, storeId=-1604674395, 
purchasePrice=6.0]
FactPurchase [id=-1604674363, productId=-1604674375, storeId=-1604674395, 
purchasePrice=11.0]
FactPurchase [id=-1604674357, productId=-1604674393, storeId=-1604674395, 
purchasePrice=17.0]
FactPurchase [id=-1604674355, productId=-1604674380, storeId=-1604674395, 
purchasePrice=19.0]
FactPurchase [id=-1604674353, productId=-1604674384, storeId=-1604674395, 
purchasePrice=21.0]
FactPurchase [id=-1604674348, productId=-1604674376, storeId=-1604674395, 
purchasePrice=26.0]
FactPurchase [id=-1604674340, productId=-1604674383, storeId=-1604674395, 
purchasePrice=34.0]
FactPurchase [id=-1604674339, productId=-1604674379, storeId=-1604674395, 
purchasePrice=35.0]
FactPurchase [id=-1604674336, productId=-1604674385, storeId=-1604674395, 
purchasePrice=38.0]
FactPurchase [id=-1604674335, productId=-1604674380, storeId=-1604674395, 
purchasePrice=39.0]
FactPurchase [id=-1604674333, productId=-1604674390, storeId=-1604674395, 
purchasePrice=41.0]
FactPurchase [id=-1604674331, productId=-1604674385, storeId=-1604674395, 
purchasePrice=43.0]
FactPurchase [id=-1604674329, productId=-1604674388, storeId=-1604674395, 
purchasePrice=45.0]
FactPurchase [id=-1604674327, productId=-1604674391, storeId=-1604674395, 
purchasePrice=47.0]
FactPurchase [id=-1604674322, productId=-1604674385, storeId=-1604674395, 
purchasePrice=52.0]
FactPurchase [id=-1604674321, productId=-1604674380, storeId=-1604674395, 
purchasePrice=53.0]
FactPurchase [id=-1604674319, productId=-1604674386, storeId=-1604674395, 
purchasePrice=55.0]
FactPurchase [id=-1604674315, productId=-1604674374, storeId=-1604674395, 
purchasePrice=59.0]
FactPurchase [id=-1604674314, productId=-1604674383, storeId=-1604674395, 
purchasePrice=60.0]
FactPurchase [id=-1604674312, productId=-1604674385, storeId=-1604674395, 
purchasePrice=62.0]
FactPurchase [id=-1604674302, productId=-1604674386, storeId=-1604674395, 
pur

[jira] [Created] (IGNITE-2156) .Net: add IgniteClientDisconnectedException to the .Net API

2015-12-14 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-2156:
---

 Summary: .Net: add IgniteClientDisconnectedException to the .Net 
API
 Key: IGNITE-2156
 URL: https://issues.apache.org/jira/browse/IGNITE-2156
 Project: Ignite
  Issue Type: Bug
  Components: interop
Affects Versions: ignite-1.4
Reporter: Denis Magda
Assignee: Vladimir Ozerov
 Fix For: 1.6


The Java API has IgniteClientDisconnectedException, which lets one take a
reference to the future and wait for client reconnection.

The .Net API should be improved by adding the same functionality:
https://apacheignite.readme.io/docs/clients-vs-servers#client-reconnect





[GitHub] ignite pull request: Ignite-2087

2015-12-14 Thread avinogradovgg
GitHub user avinogradovgg opened a pull request:

https://github.com/apache/ignite/pull/331

Ignite-2087



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/avinogradovgg/ignite ignite-2087

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/331.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #331


commit 84b9f58ebd1b07e88870c11973ce4e47d256a0b5
Author: Anton Vinogradov 
Date:   2015-12-14T12:32:30Z

Ignite-2087






[GitHub] ignite pull request: Ignite-2065

2015-12-14 Thread ashutakGG
Github user ashutakGG closed the pull request at:

https://github.com/apache/ignite/pull/325




[GitHub] ignite pull request: IGNITE-2065 Binary renaming: Platforms

2015-12-14 Thread ashutakGG
Github user ashutakGG closed the pull request at:

https://github.com/apache/ignite/pull/329




Re: Debug Output in Prod

2015-12-14 Thread Anton Vinogradov
Yakov,
We can simply grep the sources for U.debug(), but I'm not sure it helps.

Am I right that we have to reconfigure the log4j settings each time we want to
use U.debug()?

On Mon, Dec 14, 2015 at 3:01 PM, Yakov Zhdanov  wrote:

> Guys,
>
> I noticed the following code in repo several days
> ago(org/apache/ignite/internal/portable/BinaryWriterExImpl.java:1810):
>
>
> out.unsafeEnsure(1 + 4);
>
> out.unsafeWriteByte(GridPortableMarshaller.HANDLE);
> out.unsafeWriteInt(pos - old);
>
> if (obj.getClass().isArray())
> System.out.println("CASE!");
>
> return true;
>
> Couple of points here:
>
> 1. When putting debug output to production code use U.debug(). This method
> gives at least deprecation warning which can be caught and debug printouts
> will not get to the repo + it prints timestamp and thread name.
> 2. Can we change release build to fail if U.debug() is somewhere in
> production code? Anton V, perhaps you know how to achieve that?
>
> --Yakov
>


Re: 1.5 GA

2015-12-14 Thread Raul Kripalani
On Mon, Dec 14, 2015 at 10:36 AM, Yakov Zhdanov  wrote:

> osgi has been pushed to 1.5 after b1 has been built. You can build ignite
> binaries from ignite-1.5 branch to test the functionality.
>

Then the 1.5.0-b1 tag is incorrect. It points
to 9a14d6432932fc1a1fdf2ddd77dea920382efe8c which would have included OSGi.

Could you please fix the tag to point to the commit you actually built the
release from?

Thanks,

*Raúl Kripalani*
PMC & Committer @ Apache Ignite, Apache Camel | Integration, Big Data and
Messaging Engineer
http://about.me/raulkripalani | http://www.linkedin.com/in/raulkripalani
http://blog.raulkr.net | twitter: @raulvk


[jira] [Created] (IGNITE-2157) CacheClientBinaryPutGetExample javadoc should be updated

2015-12-14 Thread Vasilisa Sidorova (JIRA)
Vasilisa  Sidorova created IGNITE-2157:
--

 Summary: CacheClientBinaryPutGetExample javadoc should be updated
 Key: IGNITE-2157
 URL: https://issues.apache.org/jira/browse/IGNITE-2157
 Project: Ignite
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.5
 Environment: Apache Ignite 1.5.0-b2 build #105
Reporter: Vasilisa  Sidorova
Priority: Trivial
 Fix For: 1.5


The javadoc for CacheClientBinaryPutGetExample is obsolete. It contains:
{noformat}
* 
* Remote nodes should always be started with special configuration file which
 * enables the binary marshaller: {@code 'ignite.{sh|bat} 
examples/config/binary/example-ignite-binary.xml'}.
 * 
{noformat}

Instead, it should be:
* 
* Remote nodes should always be started with the following command:
* {@code 'ignite.{sh|bat} examples/config/example-ignite.xml'}.
* 
Please fix it.





Re: Debug Output in Prod

2015-12-14 Thread Yakov Zhdanov
No, it prints to stdout or to the log at INFO level. Let's postpone this for
now; however, I ask everyone to be attentive and remove debug output before
committing.

--Yakov

2015-12-14 15:46 GMT+03:00 Anton Vinogradov :

> Yakov,
> We can simply grep the sources for U.debug(), but I'm not sure it helps.
>
> Am I right that we have to reconfigure the log4j settings each time we want
> to use U.debug()?
>
> On Mon, Dec 14, 2015 at 3:01 PM, Yakov Zhdanov 
> wrote:
>
> > Guys,
> >
> > I noticed the following code in repo several days
> > ago(org/apache/ignite/internal/portable/BinaryWriterExImpl.java:1810):
> >
> >
> > out.unsafeEnsure(1 + 4);
> >
> > out.unsafeWriteByte(GridPortableMarshaller.HANDLE);
> > out.unsafeWriteInt(pos - old);
> >
> > if (obj.getClass().isArray())
> > System.out.println("CASE!");
> >
> > return true;
> >
> > Couple of points here:
> >
> > 1. When putting debug output to production code use U.debug(). This
> method
> > gives at least deprecation warning which can be caught and debug
> printouts
> > will not get to the repo + it prints timestamp and thread name.
> > 2. Can we change release build to fail if U.debug() is somewhere in
> > production code? Anton V, perhaps you know how to achieve that?
> >
> > --Yakov
> >
>


[jira] [Created] (IGNITE-2158) Null values in examples output

2015-12-14 Thread Ilya Suntsov (JIRA)
Ilya Suntsov created IGNITE-2158:


 Summary: Null values in examples output
 Key: IGNITE-2158
 URL: https://issues.apache.org/jira/browse/IGNITE-2158
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 1.5
 Environment: OS X 10.10.2
jdk 1.7
Reporter: Ilya Suntsov
Assignee: Yakov Zhdanov
 Fix For: 1.5


In the output of the following examples there are fields that have 'null' or
zero values (salary, orgId, resume):
1.CacheDummyStoreExample
{noformat}
[16:48:47] Ignite node started OK (id=64a079ad)
[16:48:47] Topology snapshot [ver=9, servers=3, clients=0, CPUs=8, heap=8.0GB]

>>> Cache store example started.
>>> Store loadCache for entry count: 10
>>> Loaded 10 keys with backups in 971ms.
Read value: null
Overwrote old value: null
Read value: Person [id=7754401276845136764, orgId=null, lastName=Newton, 
firstName=Isaac, salary=0.0, resume=null]
>>> Store put [key=7754401276845136764, val=Person [id=7754401276845136764, 
>>> orgId=null, lastName=Newton, firstName=Isaac, salary=0.0, resume=null], 
>>> xid=ab2b0c0a151-03aba62c--0009--0151a0c0307a]
Read value after commit: Person [id=7754401276845136764, orgId=null, 
lastName=Newton, firstName=Isaac, salary=0.0, resume=null]
[16:48:48] Ignite node stopped OK [uptime=00:00:01:38]
{noformat}

2. CacheJdbcStoreExample
{noformat}
[16:49:36] Ignite node started OK (id=f662db78)
[16:49:36] Topology snapshot [ver=11, servers=3, clients=0, CPUs=8, heap=8.0GB]

>>> Cache store example started.
>>> Loaded 0 values into cache.
>>> Loaded 1 keys with backups in 26ms.
>>> Store load [key=8980561285181288706]
Read value: null
Overwrote old value: null
Read value: Person [id=8980561285181288706, orgId=null, lastName=Newton, 
firstName=Isaac, salary=0.0, resume=null]
>>> Store write [key=8980561285181288706, val=Person [id=8980561285181288706, 
>>> orgId=null, lastName=Newton, firstName=Isaac, salary=0.0, resume=null]]
Read value after commit: Person [id=8980561285181288706, orgId=null, 
lastName=Newton, firstName=Isaac, salary=0.0, resume=null]
[16:49:36] Ignite node stopped OK [uptime=00:00:00:182]
{noformat}

3. CacheSpringStoreExample
{noformat}
[16:50:16] Ignite node started OK (id=670dd40a)
[16:50:16] Topology snapshot [ver=13, servers=3, clients=0, CPUs=8, heap=8.0GB]

>>> Cache store example started.
>>> Loaded 0 values into cache.
>>> Loaded 0 keys with backups in 33ms.
Read value: null
Overwrote old value: null
Read value: Person [id=8312945083421167351, orgId=null, lastName=Newton, 
firstName=Isaac, salary=0.0, resume=null]
>>> Store write [key=8312945083421167351, val=Person [id=8312945083421167351, 
>>> orgId=null, lastName=Newton, firstName=Isaac, salary=0.0, resume=null]]
Read value after commit: Person [id=8312945083421167351, orgId=null, 
lastName=Newton, firstName=Isaac, salary=0.0, resume=null]
[16:50:16] Ignite node stopped OK [uptime=00:00:00:469]
{noformat}







Re: 1.5 GA

2015-12-14 Thread Anton Vinogradov
Raul,

Tag 1.5.0-b1 fixed, please check.

On Mon, Dec 14, 2015 at 3:59 PM, Raul Kripalani  wrote:

> On Mon, Dec 14, 2015 at 10:36 AM, Yakov Zhdanov 
> wrote:
>
> > osgi has been pushed to 1.5 after b1 has been built. You can build ignite
> > binaries from ignite-1.5 branch to test the functionality.
> >
>
> Then the 1.5.0-b1 tag is incorrect. It points
> to 9a14d6432932fc1a1fdf2ddd77dea920382efe8c which would have included OSGi.
>
> Could you please fix the tag to point to the commit you actually built the
> release from?
>
> Thanks,
>
> *Raúl Kripalani*
> PMC & Committer @ Apache Ignite, Apache Camel | Integration, Big Data and
> Messaging Engineer
> http://about.me/raulkripalani | http://www.linkedin.com/in/raulkripalani
> http://blog.raulkr.net | twitter: @raulvk
>


[GitHub] ignite pull request: Ignite 2100

2015-12-14 Thread agoncharuk
GitHub user agoncharuk opened a pull request:

https://github.com/apache/ignite/pull/332

Ignite 2100



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agoncharuk/ignite ignite-2100

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/332.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #332


commit e84b268f8c3ff85a8e2af8c763b10b41a3659a18
Author: Alexey Goncharuk 
Date:   2015-12-09T17:29:12Z

IGNITE-2100 - Fixes for Externalizable classes and queries.

commit e98f073847bec5162f6939523f47be6acf2d085a
Author: Alexey Goncharuk 
Date:   2015-12-14T14:47:12Z

Merge branch 'ignite-1.5' of https://git-wip-us.apache.org/repos/asf/ignite 
into ignite-2100






Re: Mandatory requirement to run ExampleNodeStartup only for a several examples

2015-12-14 Thread Yakov Zhdanov
Guys,

There is currently a limitation: we cannot effectively deploy class
definitions over the discovery protocol. This is possible to fix, but needs
some more investigation.

As far as current situation with examples, I think that the following
examples should work with remote nodes started with server classpath
(yes/no). Sergey, can you please check if my vision is correct and let us
know:

Java
Y: org.apache.ignite.examples.binary.datagrid.CacheClientBinaryQueryExample
Y: org.apache.ignite.examples.datagrid.starschema.CacheStarSchemaExample
N: org.apache.ignite.examples.datagrid.store.auto.CacheAutoStoreExample
N: org.apache.ignite.examples.datagrid.store.dummy.CacheDummyStoreExample
N: org.apache.ignite.examples.datagrid.store.jdbc.CacheJdbcStoreExample
N: org.apache.ignite.examples.datagrid.store.spring.CacheSpringStoreExample
Y: org.apache.ignite.examples.datagrid.CacheQueryExample
N: org.apache.ignite.examples.servicegrid.ServicesExample - service cannot
be peer deployed now
Y: org.apache.ignite.examples.streaming.StreamTransformerExample
Y: org.apache.ignite.examples.streaming.StreamVisitorExample
Y: org.apache.ignite.examples.streaming.wordcount.QueryWords
Y: org.apache.ignite.examples.streaming.wordcount.StreamWords

Scala
Y: org.apache.ignite.scalar.examples.ScalarCacheQueryExample
Y: org.apache.ignite.scalar.examples.ScalarSnowflakeSchemaExample

Java8
Y: org.apache.ignite.examples.java8.streaming.StreamTransformerExample
Y: org.apache.ignite.examples.java8.streaming.StreamVisitorExample

LGPL:
N: org.apache.ignite.examples.datagrid.store.hibernate.
CacheHibernateStoreExample
--Yakov

2015-12-12 5:23 GMT+03:00 Alexey Kuznetsov :

> Dmitriy,
>
> As far as I know P2P works mainly for Compute
> See: https://apacheignite.readme.io/v1.5/docs/zero-deployment
>
> On Sat, Dec 12, 2015 at 4:47 AM, Dmitriy Setrakyan 
> wrote:
>
> > Why are we not allowing to deploy factory classes over P2P deployment?
> >
> > On Fri, Dec 11, 2015 at 7:29 AM, Alexey Kuznetsov <
> akuznet...@gridgain.com
> > >
> > wrote:
> >
> > > I can confirm examples developed/changed by me.
> > >
> > > >> org.apache.ignite.examples.datagrid.store.auto.CacheAutoStoreExample
> > > DO NOT ALLOW to start remote nodes via ignite.sh out of the box, but if
> > > user put example cache store factory class into \libs folder example
> will
> > > run.
> > >
> > > >>
> > >
> > >
> >
> org.apache.ignite.examples.binary.datagrid.store.auto.CacheBinaryAutoStoreExample
> > > This example ALLOW to start remote nodes via ignite.sh.
> > >
> > > --
> > > Alexey Kuznetsov
> > > GridGain Systems
> > > www.gridgain.com
> > >
> >
>
>
>
> --
> Alexey Kuznetsov
> GridGain Systems
> www.gridgain.com
>
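For reference, the peer class loading that the thread above discusses is switched on by a single flag on the node configuration; a minimal Spring XML sketch (this only helps for the compute task/closure cases, per the zero-deployment docs, not for cache store factories):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Lets compute task/closure class definitions be loaded on demand from the sending node. -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```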


[GitHub] ignite pull request: IGNITE-435.

2015-12-14 Thread younggyuchun80
GitHub user younggyuchun80 opened a pull request:

https://github.com/apache/ignite/pull/333

IGNITE-435.

fix the typo 'Curent' to 'Current'

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/younggyuchun80/ignite ignite-435

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/333.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #333


commit 226e245586afd5b1f81dfebe683b8619c8f2c95c
Author: YoungGyu Chun 
Date:   2015-12-14T14:45:33Z

IGNITE-435.
fix the typo 'Curent' to 'Current'






Re: Using HDFS as a secondary FS

2015-12-14 Thread Dmitriy Setrakyan
Ivan, I think this should be documented, no?

On Mon, Dec 14, 2015 at 2:25 AM, Ivan V.  wrote:

> To enable just an IGFS persistence there is no need to use HDFS (this
> requires Hadoop dependency, requires configured HDFS cluster, etc.).
> We have requests https://issues.apache.org/jira/browse/IGNITE-1120 ,
> https://issues.apache.org/jira/browse/IGNITE-1926 to implement the
> persistence upon the local file system, and we are already close to the solution.
>
> Regarding the secondary Fs doc page (
> http://apacheignite.gridgain.org/docs/secondary-file-system) I would
> suggest to add the following text there:
> 
> If an Ignite node with a secondary file system is configured on a machine with a
> Hadoop distribution, make sure Ignite is able to find the appropriate Hadoop
> libraries: set HADOOP_HOME environment variable for the Ignite process if
> you're using Apache Hadoop distribution, or, if you use another
> distribution (HDP, Cloudera, BigTop, etc.) make sure /etc/default/hadoop
> file exists and has appropriate contents.
>
> If an Ignite node with a secondary file system is configured on a machine without a
> Hadoop distribution, you can manually add the necessary Hadoop dependencies to
> Ignite node classpath: these are dependencies of groupId
> "org.apache.hadoop" listed in file modules/hadoop/pom.xml . Currently they
> are:
>
>1. hadoop-annotations
>2. hadoop-auth
>3. hadoop-common
>4. hadoop-hdfs
>5. hadoop-mapreduce-client-common
>6. hadoop-mapreduce-client-core
>
> 
>
> On Mon, Dec 14, 2015 at 11:21 AM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Guys,
> >
> > Why don't we include ignite-hadoop module in Fabric? This user simply
> wants
> > to configure HDFS as a secondary file system to ensure persistence. Not
> > having the opportunity to do this in Fabric looks weird to me. And
> actually
> > I don't think this is a use case for Hadoop Accelerator.
> >
> > -Val
> >
> > On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda 
> wrote:
> >
> > > Hi Ivan,
> > >
> > > 1) Yes, I think that it makes sense to have the old versions of the
> docs
> > > while an old version is still considered to be used by someone.
> > >
> > > 2) Absolutely, the time to add a corresponding article on the
> readme.io
> > > has come. It's not the first time I see the question related to HDFS
> as a
> > > secondary FS.
> > > Before and now it's not clear for me what exact steps I should follow
> to
> > > enable such a configuration. Our current suggestions look like a
> puzzle.
> > > I'll assemble the puzzle on my side and prepare the article. Ivan if
> you
> > > don't mind I would reaching you out directly asking for any technical
> > > assistance if needed.
> > >
> > > Regards,
> > > Denis
> > >
> > >
> > > On 12/14/2015 10:25 AM, Ivan V. wrote:
> > >
> > >> Hi, Valentin,
> > >>
> > >> 1) first of all note that the author of the question uses not the
> latest
> > >> doc page, namely
> > >> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system
> .
> > >> This is version 1.0, while the latest is 1.5:
> > >> https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
> > >> appeared that some links from the latest doc version point to 1.0 doc
> > >> version. I fixed that in several places where I found that. Do we
> really
> > >> need old doc versions (1.0 -1.4)?
> > >>
> > >> 2) our documentation (
> > >> http://apacheignite.gridgain.org/docs/secondary-file-system) does not
> > >> provide any special setup instructions to configure HDFS as secondary
> > file
> > >> system in Ignite. Our docs assume that if a user wants to integrate
> with
> > >> Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
> > >> http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop).
> It
> > >> looks like the page
> > >> http://apacheignite.gridgain.org/docs/secondary-file-system should be
> > >> more
> > >> clear regarding the required configuration steps (in fact, setting up
> > >> HADOOP_HOME variable for Ignite node process).
> > >>
> > >> 3) Hadoop jars are correctly found by Ignite if the following
> conditions
> > >> are met:
> > >> (a) The "Hadoop Edition" distribution is used (not a "Fabric"
> edition).
> > >> (b) Either HADOOP_HOME environment variable is set up (for Apache
> Hadoop
> > >> distribution), or file "/etc/default/hadoop" exists and matches the
> > Hadoop
> > >> distribution used (BigTop, Cloudera, HDP, etc.)
> > >>
> > >> The exact mechanism of the Hadoop classpath composition can be found
> in
> > >> files
> > >> IGNITE_HOME/bin/include/hadoop-classpath.sh
> > >> IGNITE_HOME/bin/include/setenv.sh .
> > >>
> > >> The issue is discussed in
> > >> https://issues.apache.org/jira/browse/IGNITE-372
> > >> , https://issues.apache.org/jira/browse/IGNITE-483 .
> > >>
> > >> On Sat, Dec 12, 2015 at 3:45 AM, Valentin Kulichenko <
> > >> valentin.kuliche...@gmail.com> wrote:
> > >>
> > >> Igniters,
> > >>>
> > >>> I'm looking at t

Re: Work directory cleanup between tests

2015-12-14 Thread Dmitriy Setrakyan
On Mon, Dec 14, 2015 at 1:30 AM, Vladimir Ozerov 
wrote:

> This doesn't work because there still will be interference between tests
> within the same class.


I agree with Yakov. How about cleaning this marshalling directory right
after the grid stops? Cleaning it more frequently is not justified, as the
marshalling directory should not be tampered with while the grid is running.
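A cleanup hook along those lines is a few lines of NIO file walking; a sketch (the directory location is an assumption, and in real code this would run right after the grid stop, e.g. after Ignition.stop()):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** Sketch: wipe a marshaller work directory once the grid has stopped. */
public class MarshallerDirCleaner {
    /** Recursively deletes {@code dir} if it exists (children before parents). */
    public static void clean(Path dir) throws IOException {
        if (!Files.exists(dir))
            return;

        try (Stream<Path> paths = Files.walk(dir)) {
            // Reverse order so files are deleted before their parent directories.
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                }
                catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```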


Re: Using HDFS as a secondary FS

2015-12-14 Thread Denis Magda
Yes, this will be documented tomorrow. I want to go through all the steps
myself, checking any other obstacles the user may face.
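For the article, the core of the configuration itself is one property on the IGFS bean; a sketch of the Spring snippet (the secondary file system class name and the HDFS URI below are quoted from memory of the 1.5 API, so they should be verified against the release before publishing):

```xml
<property name="fileSystemConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
            <property name="name" value="igfs"/>
            <!-- HDFS acts as the persistent secondary file system behind IGFS. -->
            <property name="secondaryFileSystem">
                <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgniteFsSecondaryFileSystem">
                    <constructor-arg value="hdfs://localhost:9000/"/>
                </bean>
            </property>
        </bean>
    </list>
</property>
```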

—
Denis

> On 14 Dec 2015, at 18:11, Dmitriy Setrakyan  wrote:
> 
> Ivan, I think this should be documented, no?
> 
> On Mon, Dec 14, 2015 at 2:25 AM, Ivan V.  wrote:
> 
>> To enable just an IGFS persistence there is no need to use HDFS (this
>> requires Hadoop dependency, requires configured HDFS cluster, etc.).
>> We have requests https://issues.apache.org/jira/browse/IGNITE-1120 ,
>> https://issues.apache.org/jira/browse/IGNITE-1926 to implement the
>> persistence upon local file system, and we already close to  the solution.
>> 
>> Regarding the secondary Fs doc page (
>> http://apacheignite.gridgain.org/docs/secondary-file-system) I would
>> suggest to add the following text there:
>> 
>> If Ignite node with secondary file system configured on a machine with
>> Hadoop distribution, make sure Ignite is able to find appropriate Hadoop
>> libraries: set HADOOP_HOME environment variable for the Ignite process if
>> you're using Apache Hadoop distribution, or, if you use another
>> distribution (HDP, Cloudera, BigTop, etc.) make sure /etc/default/hadoop
>> file exists and has appropriate contents.
>> 
>> If Ignite node with secondary file system configured on a machine without
>> Hadoop distribution, you can manually add necessary Hadoop dependencies to
>> Ignite node classpath: these are dependencies of groupId
>> "org.apache.hadoop" listed in file modules/hadoop/pom.xml . Currently they
>> are:
>> 
>>   1. hadoop-annotations
>>   2. hadoop-auth
>>   3. hadoop-common
>>   4. hadoop-hdfs
>>   5. hadoop-mapreduce-client-common
>>   6. hadoop-mapreduce-client-core
>> 
>> 
>> 
>> On Mon, Dec 14, 2015 at 11:21 AM, Valentin Kulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>> 
>>> Guys,
>>> 
>>> Why don't we include ignite-hadoop module in Fabric? This user simply
>> wants
>>> to configure HDFS as a secondary file system to ensure persistence. Not
>>> having the opportunity to do this in Fabric looks weird to me. And
>> actually
>>> I don't think this is a use case for Hadoop Accelerator.
>>> 
>>> -Val
>>> 
>>> On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda 
>> wrote:
>>> 
 Hi Ivan,
 
 1) Yes, I think that it makes sense to have the old versions of the
>> docs
 while an old version is still considered to be used by someone.
 
 2) Absolutely, the time to add a corresponding article on the
>> readme.io
 has come. It's not the first time I see the question related to HDFS
>> as a
 secondary FS.
 Before and now it's not clear for me what exact steps I should follow
>> to
 enable such a configuration. Our current suggestions look like a
>> puzzle.
 I'll assemble the puzzle on my side and prepare the article. Ivan if
>> you
 don't mind I would reaching you out directly asking for any technical
 assistance if needed.
 
 Regards,
 Denis
 
 
 On 12/14/2015 10:25 AM, Ivan V. wrote:
 
> Hi, Valentin,
> 
> 1) first of all note that the author of the question uses not the
>> latest
> doc page, namely
> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system
>> .
> This is version 1.0, while the latest is 1.5:
> https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
> appeared that some links from the latest doc version point to 1.0 doc
> version. I fixed that in several places where I found that. Do we
>> really
> need old doc versions (1.0 -1.4)?
> 
> 2) our documentation (
> http://apacheignite.gridgain.org/docs/secondary-file-system) does not
> provide any special setup instructions to configure HDFS as secondary
>>> file
> system in Ignite. Our docs assume that if a user wants to integrate
>> with
> Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
> http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop).
>> It
> looks like the page
> http://apacheignite.gridgain.org/docs/secondary-file-system should be
> more
> clear regarding the required configuration steps (in fact, setting up
> HADOOP_HOME variable for Ignite node process).
> 
> 3) Hadoop jars are correctly found by Ignite if the following
>> conditions
> are met:
> (a) The "Hadoop Edition" distribution is used (not a "Fabric"
>> edition).
> (b) Either HADOOP_HOME environment variable is set up (for Apache
>> Hadoop
> distribution), or file "/etc/default/hadoop" exists and matches the
>>> Hadoop
> distribution used (BigTop, Cloudera, HDP, etc.)
> 
> The exact mechanism of the Hadoop classpath composition can be found
>> in
> files
> IGNITE_HOME/bin/include/hadoop-classpath.sh
> IGNITE_HOME/bin/include/setenv.sh .
> 
> The issue is discussed in
> https://issues.apache.org/jira/browse/IGNIT

[GitHub] ignite pull request: ignite-2065

2015-12-14 Thread ashutakGG
GitHub user ashutakGG opened a pull request:

https://github.com/apache/ignite/pull/334

ignite-2065



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ashutakGG/incubator-ignite 
ignite-2065-requests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/334.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #334


commit 5fb61d6e3ebc553459a6fe61d19e1137f9d9cd8d
Author: ashutak 
Date:   2015-12-14T15:48:09Z

ignite-2065






[GitHub] ignite pull request: Fix Cache.localEntries() for binary

2015-12-14 Thread ashutakGG
Github user ashutakGG closed the pull request at:

https://github.com/apache/ignite/pull/317




Re: 1.5 GA

2015-12-14 Thread Raul Kripalani
On Mon, Dec 14, 2015 at 2:29 PM, Anton Vinogradov 
wrote:

> Tag 1.5.0-b1 fixed, please check.
>

Looks much better now. Thanks!

*Raúl Kripalani*
PMC & Committer @ Apache Ignite, Apache Camel | Integration, Big Data and
Messaging Engineer
http://about.me/raulkripalani | http://www.linkedin.com/in/raulkripalani
http://blog.raulkr.net | twitter: @raulvk


Re: Using HDFS as a secondary FS

2015-12-14 Thread Dmitriy Setrakyan
On Mon, Dec 14, 2015 at 7:28 AM, Denis Magda  wrote:

> Yes, this will be documented tomorrow. I want to go though all the steps
> by myself checking all other possible obstacles the user may face with.
>

Thanks, Denis!


>
> —
> Denis
>
> > On 14 Dec 2015, at 18:11, Dmitriy Setrakyan 
> wrote:
> >
> > Ivan, I think this should be documented, no?
> >
> > On Mon, Dec 14, 2015 at 2:25 AM, Ivan V. 
> wrote:
> >
> >> To enable just an IGFS persistence there is no need to use HDFS (this
> >> requires Hadoop dependency, requires configured HDFS cluster, etc.).
> >> We have requests https://issues.apache.org/jira/browse/IGNITE-1120 ,
> >> https://issues.apache.org/jira/browse/IGNITE-1926 to implement the
> >> persistence upon local file system, and we already close to  the
> solution.
> >>
> >> Regarding the secondary Fs doc page (
> >> http://apacheignite.gridgain.org/docs/secondary-file-system) I would
> >> suggest to add the following text there:
> >> 
> >> If Ignite node with secondary file system configured on a machine with
> >> Hadoop distribution, make sure Ignite is able to find appropriate Hadoop
> >> libraries: set HADOOP_HOME environment variable for the Ignite process
> if
> >> you're using Apache Hadoop distribution, or, if you use another
> >> distribution (HDP, Cloudera, BigTop, etc.) make sure /etc/default/hadoop
> >> file exists and has appropriate contents.
> >>
> >> If Ignite node with secondary file system configured on a machine
> without
> >> Hadoop distribution, you can manually add necessary Hadoop dependencies
> to
> >> Ignite node classpath: these are dependencies of groupId
> >> "org.apache.hadoop" listed in file modules/hadoop/pom.xml . Currently
> they
> >> are:
> >>
> >>   1. hadoop-annotations
> >>   2. hadoop-auth
> >>   3. hadoop-common
> >>   4. hadoop-hdfs
> >>   5. hadoop-mapreduce-client-common
> >>   6. hadoop-mapreduce-client-core
> >>
> >> 
> >>
> >> On Mon, Dec 14, 2015 at 11:21 AM, Valentin Kulichenko <
> >> valentin.kuliche...@gmail.com> wrote:
> >>
> >>> Guys,
> >>>
> >>> Why don't we include ignite-hadoop module in Fabric? This user simply
> >> wants
> >>> to configure HDFS as a secondary file system to ensure persistence. Not
> >>> having the opportunity to do this in Fabric looks weird to me. And
> >> actually
> >>> I don't think this is a use case for Hadoop Accelerator.
> >>>
> >>> -Val
> >>>
> >>> On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda 
> >> wrote:
> >>>
>  Hi Ivan,
> 
>  1) Yes, I think that it makes sense to have the old versions of the
> >> docs
>  while an old version is still considered to be used by someone.
> 
>  2) Absolutely, the time to add a corresponding article on the
> >> readme.io
>  has come. It's not the first time I see the question related to HDFS
> >> as a
>  secondary FS.
>  Before and now it's not clear for me what exact steps I should follow
> >> to
>  enable such a configuration. Our current suggestions look like a
> >> puzzle.
>  I'll assemble the puzzle on my side and prepare the article. Ivan if
> >> you
>  don't mind I would reaching you out directly asking for any technical
>  assistance if needed.
> 
>  Regards,
>  Denis
> 
> 
>  On 12/14/2015 10:25 AM, Ivan V. wrote:
> 
> > Hi, Valentin,
> >
> > 1) first of all note that the author of the question uses not the
> >> latest
> > doc page, namely
> >
> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system
> >> .
> > This is version 1.0, while the latest is 1.5:
> > https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
> > appeared that some links from the latest doc version point to 1.0 doc
> > version. I fixed that in several places where I found that. Do we
> >> really
> > need old doc versions (1.0 -1.4)?
> >
> > 2) our documentation (
> > http://apacheignite.gridgain.org/docs/secondary-file-system) does
> not
> > provide any special setup instructions to configure HDFS as secondary
> >>> file
> > system in Ignite. Our docs assume that if a user wants to integrate
> >> with
> > Hadoop, (s)he follows generic Hadoop integration instruction (e.g.
> > http://apacheignite.gridgain.org/docs/installing-on-apache-hadoop).
> >> It
> > looks like the page
> > http://apacheignite.gridgain.org/docs/secondary-file-system should
> be
> > more
> > clear regarding the required configuration steps (in fact, setting up
> > HADOOP_HOME variable for Ignite node process).
> >
> > 3) Hadoop jars are correctly found by Ignite if the following
> >> conditions
> > are met:
> > (a) The "Hadoop Edition" distribution is used (not a "Fabric"
> >> edition).
> > (b) Either HADOOP_HOME environment variable is set up (for Apache
> >> Hadoop
> > distribution), or file "/etc/default/hadoop" exists and matches the
> >>> Hadoop
> > distrib

[jira] [Created] (IGNITE-2159) Platforms examples couldn't be executed under 32-bit OS

2015-12-14 Thread Vasilisa Sidorova (JIRA)
Vasilisa  Sidorova created IGNITE-2159:
--

 Summary: Platforms examples couldn't be executed under 32-bit OS
 Key: IGNITE-2159
 URL: https://issues.apache.org/jira/browse/IGNITE-2159
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.5
 Environment: Win XP 32-bit, MS VS 2010, MS SDK 7.0, Apache Ignite 
1.5.0-b2 build #105
Reporter: Vasilisa  Sidorova
 Fix For: 1.6


-
DESCRIPTION
-
Platforms examples couldn't be executed under 32-bit OS

-
STEPS FOR REPRODUCE
-
# Open and build %IGNITE_HOME%\platforms\cpp\project\vs\ignite_x86.sln
# Open and build 
%IGNITE_HOME%\platforms\cpp\examples\project\vs\ignite-examples.sln (select the 
proper platform - x86)
# Run the solution

-
ACTUAL RESULT
-
The following error appears: "The procedure entry point InterlockedCompareExchange64 
could not be located in the dynamic link library KERNEL32.dll"

-
EXPECTED RESULT
-
The example should run successfully

-
ADDITIONAL INFO
-
# Also reproducible for .NET examples
# Wasn't reproducible with the same configuration about 4 months ago (around the 
Apache Ignite 1.3.3-rc1 release)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Using HDFS as a secondary FS

2015-12-14 Thread Denis Magda
Ivan,

Is there any reason why we don’t recommend using 
apache-ignite-hadoop-{version}/bin/setup-hadoop.sh/bat in our Hadoop 
Accelerator articles?

With setup-hadoop.sh I was able to build a valid classpath, automatically create 
symlinks to the accelerator's jars from Hadoop’s libs folder, and start an 
Ignite node that uses HDFS as a secondary FS in less than 10 minutes.

I just followed the instructions from 
apache-ignite-hadoop-{version}/HADOOP_README.txt. The instructions on 
readme.io look much more complex; they don’t mention setup-hadoop.sh/bat at 
all, forcing the end user to perform a manual setup.
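For reference, the classpath part of what the script automates can be sketched in plain Java; the directory layout below is an assumption based on a standard Apache Hadoop distribution, not the actual logic of hadoop-classpath.sh:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

/** Sketch of the Hadoop classpath composition that setup scripts perform. */
public class HadoopClasspath {
    /** Sub-directories of HADOOP_HOME that hold the needed jars (assumed layout). */
    private static final String[] LIB_DIRS = {
        "share/hadoop/common", "share/hadoop/common/lib",
        "share/hadoop/hdfs", "share/hadoop/mapreduce"
    };

    /** Builds a ':'-separated classpath of all jars found under the given Hadoop home. */
    public static String compose(File hadoopHome) {
        List<String> entries = new ArrayList<>();

        for (String dir : LIB_DIRS) {
            File[] jars = new File(hadoopHome, dir).listFiles((d, name) -> name.endsWith(".jar"));

            if (jars != null)
                for (File jar : jars)
                    entries.add(jar.getAbsolutePath());
        }

        return String.join(":", entries);
    }

    public static void main(String[] args) {
        // HADOOP_HOME drives the lookup, exactly as in the shell scripts.
        System.out.println(compose(new File(System.getenv().getOrDefault("HADOOP_HOME", "."))));
    }
}
```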

—
Denis

> On 14 Dec 2015, at 20:24, Dmitriy Setrakyan  wrote:
> 
> On Mon, Dec 14, 2015 at 7:28 AM, Denis Magda  wrote:
> 
>> Yes, this will be documented tomorrow. I want to go though all the steps
>> by myself checking all other possible obstacles the user may face with.
>> 
> 
> Thanks, Denis!
> 
> 
>> 
>> —
>> Denis
>> 
>>> On 14 Dec 2015, at 18:11, Dmitriy Setrakyan 
>> wrote:
>>> 
>>> Ivan, I think this should be documented, no?
>>> 
>>> On Mon, Dec 14, 2015 at 2:25 AM, Ivan V. 
>> wrote:
>>> 
 To enable just an IGFS persistence there is no need to use HDFS (this
 requires Hadoop dependency, requires configured HDFS cluster, etc.).
 We have requests https://issues.apache.org/jira/browse/IGNITE-1120 ,
 https://issues.apache.org/jira/browse/IGNITE-1926 to implement the
 persistence upon local file system, and we already close to  the
>> solution.
 
 Regarding the secondary Fs doc page (
 http://apacheignite.gridgain.org/docs/secondary-file-system) I would
 suggest to add the following text there:
 
 If Ignite node with secondary file system configured on a machine with
 Hadoop distribution, make sure Ignite is able to find appropriate Hadoop
 libraries: set HADOOP_HOME environment variable for the Ignite process
>> if
 you're using Apache Hadoop distribution, or, if you use another
 distribution (HDP, Cloudera, BigTop, etc.) make sure /etc/default/hadoop
 file exists and has appropriate contents.
 
 If Ignite node with secondary file system configured on a machine
>> without
 Hadoop distribution, you can manually add necessary Hadoop dependencies
>> to
 Ignite node classpath: these are dependencies of groupId
 "org.apache.hadoop" listed in file modules/hadoop/pom.xml . Currently
>> they
 are:
 
  1. hadoop-annotations
  2. hadoop-auth
  3. hadoop-common
  4. hadoop-hdfs
  5. hadoop-mapreduce-client-common
  6. hadoop-mapreduce-client-core
 
 
 
 On Mon, Dec 14, 2015 at 11:21 AM, Valentin Kulichenko <
 valentin.kuliche...@gmail.com> wrote:
 
> Guys,
> 
> Why don't we include ignite-hadoop module in Fabric? This user simply
 wants
> to configure HDFS as a secondary file system to ensure persistence. Not
> having the opportunity to do this in Fabric looks weird to me. And
 actually
> I don't think this is a use case for Hadoop Accelerator.
> 
> -Val
> 
> On Mon, Dec 14, 2015 at 12:11 AM, Denis Magda 
 wrote:
> 
>> Hi Ivan,
>> 
>> 1) Yes, I think that it makes sense to have the old versions of the
 docs
>> while an old version is still considered to be used by someone.
>> 
>> 2) Absolutely, the time to add a corresponding article on the
 readme.io
>> has come. It's not the first time I see the question related to HDFS
 as a
>> secondary FS.
>> Before and now it's not clear for me what exact steps I should follow
 to
>> enable such a configuration. Our current suggestions look like a
 puzzle.
>> I'll assemble the puzzle on my side and prepare the article. Ivan if
 you
>> don't mind I would reaching you out directly asking for any technical
>> assistance if needed.
>> 
>> Regards,
>> Denis
>> 
>> 
>> On 12/14/2015 10:25 AM, Ivan V. wrote:
>> 
>>> Hi, Valentin,
>>> 
>>> 1) first of all note that the author of the question uses not the
 latest
>>> doc page, namely
>>> 
>> http://apacheignite.gridgain.org/v1.0/docs/igfs-secondary-file-system
 .
>>> This is version 1.0, while the latest is 1.5:
>>> https://apacheignite.readme.io/docs/hadoop-accelerator. Besides, it
>>> appeared that some links from the latest doc version point to 1.0 doc
>>> version. I fixed that in several places where I found that. Do we
 really
>>> need old doc versions (1.0 -1.4)?
>>> 
>>> 2) our documentation (
>>> http://apacheignite.gridgain.org/docs/secondary-file-system) does
>> not
>>> provide any special setup instructions to configure HDFS as secondary
> file
>>> system in Ignite. Our docs assume that if a user wants to integrate
 with
>>> Hadoop, (s)he follows gen

[jira] [Created] (IGNITE-2160) Ignition.start() is blocked if there are no server nodes

2015-12-14 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-2160:
---

 Summary: Ignition.start() is blocked if there are no server nodes
 Key: IGNITE-2160
 URL: https://issues.apache.org/jira/browse/IGNITE-2160
 Project: Ignite
  Issue Type: Bug
Reporter: Valentin Kulichenko
Priority: Critical
 Fix For: 1.6


A node (server or client) should always start without blocking the thread that 
called {{Ignition.start()}}. The current behavior is confusing and undesirable - 
e.g., if a node is embedded into a web application, the whole application startup 
process gets stuck.

Additionally, if there are no servers, client node should throw 
{{IgniteClientDisconnectedException}} on all API calls. It already works this 
way if all servers leave while client is running.

@dev list discussion: 
http://apache-ignite-developers.2346864.n4.nabble.com/Client-connect-quot-hangs-quot-td5765.html



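Until such a fix lands, applications can keep startup responsive by moving the blocking call off the caller's thread; a self-contained sketch of that pattern (a placeholder stands in for Ignition.start() so the example runs without Ignite on the classpath):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

/** Sketch: keep web-app startup responsive by starting the node asynchronously. */
public class AsyncNodeStart {
    /** Placeholder for Ignition.start(cfg), which may block until servers are available. */
    static String startNode() {
        return "node-started";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();

        // E.g. a servlet context listener returns immediately; the node joins in background.
        Future<String> node = exec.submit(AsyncNodeStart::startNode);

        System.out.println("application startup continues...");

        // Await only at the point where the node is first actually needed.
        System.out.println(node.get(5, TimeUnit.SECONDS));

        exec.shutdown();
    }
}
```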


Re: Client connect "hangs"

2015-12-14 Thread Valentin Kulichenko
Yakov,

Here is the ticket: https://issues.apache.org/jira/browse/IGNITE-2160

-Val

On Mon, Dec 14, 2015 at 1:33 AM, Yakov Zhdanov  wrote:

> I am ok with the suggestion. Val, can you please file a ticket (or I guess
> we already should have one) and put your suggestion to it.
>
> --Yakov
>
> 2015-12-14 11:47 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Denis,
> >
> > Yes, this can be a workaround, but at the same time it makes things even
> > more confusing :) This means that client node behavior depends on
> > some property on discovery SPI, while this property should influence only
> > internals of discovery protocol.
> >
> > I think the client should always work in the same way: start without
> > blocking and then throw disconnect exception if there are no
> > servers. Currently this behavior depends on presence of server nodes,
> > forceServerMode flag and probably smth else, which makes it
> unpredictable.
> >
> > -Val
> >
> > On Monday, December 14, 2015, Denis Magda  wrote:
> >
> > > Guys,
> > >
> > > There is already a configuration property that allows a client node to
> > > complete its launch procedure even if there are no server nodes in a
> > > cluster - TcpDiscoverySpi.setForceServerMode.
> > > The only side effect of this property is that a client node will become
> > > a part of the ring.
> > >
> > > Is this property applicable or you want to support something different?
> > >
> > > --
> > > Denis
> > >
> > > On 12/12/2015 6:13 AM, Valentin Kulichenko wrote:
> > >
> > >> Dmitry,
> > >>
> > >> How do you think, should we just change the behavior or make it
> > >> configurable?
> > >>
> > >> -Val
> > >>
> > >> On Fri, Dec 11, 2015 at 1:59 PM, Dmitriy Setrakyan <
> > dsetrak...@apache.org
> > >> >
> > >> wrote:
> > >>
> > >> I agree that we have a consistency issue here. I am OK with the
> change.
> > >>>
> > >>> On Fri, Dec 11, 2015 at 11:43 AM, Valentin Kulichenko <
> > >>> valentin.kuliche...@gmail.com> wrote:
> > >>>
> > >>> Folks,
> > 
> >  Currently there are two different ways how a client node behaves in
> > case
> >  there are no server nodes:
> > 
> >  1. If it's trying to start, it will wait and block the thread
> that
> >  called Ignition.start().
> >  2. If server nodes left when it was already running, it will
> throw
> >  disconnect exception on any API call.
> > 
> >  It seems confusing to me (and not only to me, as far as I can see
> from
> > 
> > >>> the
> > >>>
> >  users' feedback). First of all, it's just inconsistent and requires
> >  different processing for these different cases. Second of all, p.1
> is
> > 
> > >>> often
> > >>>
> >  treated as a hang, but not as correct behavior. And it's getting
> worse
> > 
> > >>> when
> > >>>
> >  the node is started as a part of web application, because it blocks
> > the
> >  application startup process.
> > 
> >  I think we should start a client node (or at least have a
> configurable
> >  option) even if there are no servers yet. Until the first server
> > joins,
> > 
> > >>> it
> > >>>
> >  will just throw disconnect exceptions.
> > 
> >  Thoughts?
> > 
> >  -Val
> > 
> > 
> > >
> >
>
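For reference, the workaround discussed in the thread above is a single flag on the discovery SPI; a Spring sketch (with the documented side effect that the client becomes part of the ring):

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <!-- Lets a client start even when no servers are up yet. -->
        <property name="forceServerMode" value="true"/>
    </bean>
</property>
```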


Custom Java serialization and BinaryMarshaller

2015-12-14 Thread Vladimir Ozerov
Folks,

Currently BinaryMarshaller works in a very non-trivial way:
1) If a class is Serializable or Binarylizable, it is written in the binary
format and can be used without deserialization.
2) If a class implements Externalizable, it is written in the binary format, but
without field metadata.
3) If a class has writeObject/readObject methods, it is written with
OptimizedMarshaller, also without field metadata.

Classes from p.2 and p.3 must always be deserialized on the server side to allow
queries.

There was an idea to ignore Externalizable/writeObject/readObject and
always write object fields directly with binary format and metadata. And
let user fallback to default Java logic if needed.
I tried this approach today and it appears to be very unstable. Lots of
classes from the JDK and other libraries have custom serialization logic. As we
ignore it, this produces weird exceptions (such as NPE) which we cannot
handle and cannot give the user any advice on what is going on.

I think we should resolve this problem as follows:
1) Both Externalizable and writeObject/readObject cases *by default* are
handled in a similar way - with OptimizedMarshaller. *I.e. they are
deserialized on server by default.*
2) User can optionally fallback to binary format if he thinks it is safe.
But he must do it explicitly via configuration.
3) When we meet such classes for the first time, a warning is printed to the
user. Something like: "Binary format cannot be applied to the [class]
because it implements Externalizable interface; instances will be
deserialized on server (to enable binary format please implement
Binarylizable interface, or enable [property] or define custom serializer)".
4) Only one exception class is possible on the server - ClassNotFound. We
will throw it with sensible error message as well.

Thoughts?

Vladimir.
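For reference, the three cases above can be told apart with plain JDK reflection; a minimal sketch of such a classification (the names and ordering here are illustrative, not Ignite's actual marshaller internals):

```java
import java.io.Externalizable;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

/** Sketch: how a marshaller can classify a class before choosing a write path. */
public class SerializationKind {
    public static String classify(Class<?> cls) {
        if (Externalizable.class.isAssignableFrom(cls))
            return "EXTERNALIZABLE";    // case 2: binary format, no field metadata

        if (hasMethod(cls, "writeObject", ObjectOutputStream.class)
            || hasMethod(cls, "readObject", ObjectInputStream.class))
            return "CUSTOM_WRITE_READ"; // case 3: falls back to OptimizedMarshaller

        if (Serializable.class.isAssignableFrom(cls))
            return "BINARY";            // case 1: full binary format with field metadata

        return "UNSUPPORTED";
    }

    /** Checks whether the class itself declares the given single-argument method. */
    private static boolean hasMethod(Class<?> cls, String name, Class<?> arg) {
        try {
            cls.getDeclaredMethod(name, arg);
            return true;
        }
        catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```

The same check is what makes case 3 tricky: many JDK collections such as ArrayList declare writeObject/readObject, so they would hit the OptimizedMarshaller path.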


Re: Custom Java serialization and BinaryMarshaller

2015-12-14 Thread Dmitriy Setrakyan
Vladimir,

I am not sure I like the approach you are suggesting. I am thinking that by
“unstable” classes you are referring to classes like HashMap or ArrayList,
in which case providing field metadata for them does not make sense, and
deserializing them on the server side does not make sense either.
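The HashMap/ArrayList guess can be checked directly: both classes declare private writeObject hooks, which is what puts them into case 3. A minimal reflection probe (plain JDK, no Ignite dependency; class and method names here are just for illustration):

```java
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.HashMap;

// Sketch: common JDK collections carry private writeObject hooks,
// which is why ignoring writeObject/readObject breaks them.
public class JdkHooks {
    public static boolean hasWriteObject(Class<?> cls) {
        try {
            // Private custom-serialization hooks are still visible
            // via getDeclaredMethod.
            cls.getDeclaredMethod("writeObject", ObjectOutputStream.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasWriteObject(HashMap.class));   // true
        System.out.println(hasWriteObject(ArrayList.class)); // true
        System.out.println(hasWriteObject(Object.class));    // false
    }
}
```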

Let’s dig a bit deeper. Can you provide a list of the “unstable” classes?

D.

On Mon, Dec 14, 2015 at 1:30 PM, Vladimir Ozerov 
wrote:

> Folks,
>
> Currently BinaryMarshaller works in a very non-trivial way:
> 1) If class is Serializable or Binarylizable, it is written in binary
> format and can be used without deserialization.
> 2) If class implements Externalizable, it is written in binary format, but
> without fields metadata.
> 3) If class has writeObject/readObject methods, it is written with
> OptimizedMarshaller, also without fields metadata.
>
> Class from p.2 and p.3 must always be deserialized on server side to allow
> queries.
>
> There was an idea to ignore Externalizable/writeObject/readObject and
> always write object fields directly with binary format and metadata. And
> let user fallback to default Java logic if needed.
> I tried this approach today and it appears to be very unstable. Lots of
> classes from JDK and other libraries has custom serialization logic. As we
> ignore it, it produces weird exceptions (such as NPE) which we cannot
> handle and cannot give user any advice on what is going on.
>
> I think we should resolve this problem as follows:
> 1) Both Externalizable and writeObject/readObject cases *by default* are
> handled in a similar way - with OptimizedMarshaller. *I.e. they are
> deserialized on server by default.*
> 2) User can optionally fallback to binary format if he thinks it is safe.
> But he must do it explicitly via configuration.
> 3) When we met such classes for the first time, a warning is printed to the
> user. Something like: "Binary format cannot be applied to the [class]
> because it implements Externalizable interface; instances will be
> deserialized on server (to enable binary format please implement
> Binarylizable interface, or enable [property] or define custom
> serializer)".
> 4) Only one exception class is possible on the server - ClassNotFound. We
> will throw it with sensible error message as well.
>
> Thoughts?
>
> Vladimir.
>


[jira] [Created] (IGNITE-2161) We need to tune Ignite site css to make it look good for printing

2015-12-14 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-2161:


 Summary: We need to tune Ignite site css to make it look good for 
printing
 Key: IGNITE-2161
 URL: https://issues.apache.org/jira/browse/IGNITE-2161
 Project: Ignite
  Issue Type: Bug
  Components: website
Reporter: Alexey Kuznetsov






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-2162) Add dialect name to property name in 'secret.properties' file

2015-12-14 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-2162:
--

 Summary: Add dialect name to property name in 'secret.properties' 
file
 Key: IGNITE-2162
 URL: https://issues.apache.org/jira/browse/IGNITE-2162
 Project: Ignite
  Issue Type: Sub-task
Reporter: Pavel Konstantinov


Currently, for example for DB2, we generate the following secret.properties body:
{code}
ds.jdbc.server_name=YOUR_JDBC_SERVER_NAME
ds.jdbc.port_number=YOUR_JDBC_PORT_NUMBER
ds.jdbc.database_name=YOUR_JDBC_DATABASE_TYPE
ds.jdbc.driver_type=YOUR_JDBC_DRIVER_TYPE
ds.jdbc.username=YOUR_USER_NAME
ds.jdbc.password=YOUR_PASSWORD
{code}

This would be a bit better and clearer:
{code}
ds.jdbc.db2.server_name=YOUR_JDBC_SERVER_NAME
ds.jdbc.db2.port_number=YOUR_JDBC_PORT_NUMBER
ds.jdbc.db2.database_name=YOUR_JDBC_DATABASE_TYPE
ds.jdbc.db2.driver_type=YOUR_JDBC_DRIVER_TYPE
ds.jdbc.db2.username=YOUR_USER_NAME
ds.jdbc.db2.password=YOUR_PASSWORD
{code}
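A hypothetical consumer of the proposed naming could prefer the dialect-scoped key and fall back to the old flat one. The sketch below is illustrative only — the `jdbcProp` helper and its fallback behavior are assumptions for this ticket, not an existing Ignite API:

```java
import java.util.Properties;

// Hypothetical sketch for IGNITE-2162: resolving JDBC settings from
// dialect-prefixed property names (ds.jdbc.<dialect>.<key>), falling
// back to the old flat names for compatibility.
public class DialectProps {
    public static String jdbcProp(Properties props, String dialect, String key) {
        // Prefer the dialect-scoped name, fall back to the flat name.
        String scoped = props.getProperty("ds.jdbc." + dialect + "." + key);
        return scoped != null ? scoped : props.getProperty("ds.jdbc." + key);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("ds.jdbc.db2.server_name", "db2-host");
        props.setProperty("ds.jdbc.username", "legacy-user"); // old flat style

        System.out.println(jdbcProp(props, "db2", "server_name")); // db2-host
        System.out.println(jdbcProp(props, "db2", "username"));    // legacy-user
    }
}
```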



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Custom Java serialization and BinaryMarshaller

2015-12-14 Thread Vladimir Ozerov
Dima,

I do not think it is possible to get a list of such classes, because it can
be any class from any library. Can you explain what your concerns are?

On Tue, Dec 15, 2015 at 1:46 AM, Dmitriy Setrakyan 
wrote:

> Vladimir,
>
> I am not sure I like the approach you are suggesting. I am thinking that by
> “unstable” classes you are referring to classes like HashMap or ArrayList,
> in which case, providing field metadata for them does not make sense, as
> well as deserializing them on the server side does not make sense.
>
> Let’s dig a bit deeper. Can you provide a list of the “unstable” classes?
>
> D.
>
> On Mon, Dec 14, 2015 at 1:30 PM, Vladimir Ozerov 
> wrote:
>
> > Folks,
> >
> > Currently BinaryMarshaller works in a very non-trivial way:
> > 1) If class is Serializable or Binarylizable, it is written in binary
> > format and can be used without deserialization.
> > 2) If class implements Externalizable, it is written in binary format,
> but
> > without fields metadata.
> > 3) If class has writeObject/readObject methods, it is written with
> > OptimizedMarshaller, also without fields metadata.
> >
> > Class from p.2 and p.3 must always be deserialized on server side to
> allow
> > queries.
> >
> > There was an idea to ignore Externalizable/writeObject/readObject and
> > always write object fields directly with binary format and metadata. And
> > let user fallback to default Java logic if needed.
> > I tried this approach today and it appears to be very unstable. Lots of
> > classes from JDK and other libraries has custom serialization logic. As
> we
> > ignore it, it produces weird exceptions (such as NPE) which we cannot
> > handle and cannot give user any advice on what is going on.
> >
> > I think we should resolve this problem as follows:
> > 1) Both Externalizable and writeObject/readObject cases *by default* are
> > handled in a similar way - with OptimizedMarshaller. *I.e. they are
> > deserialized on server by default.*
> > 2) User can optionally fallback to binary format if he thinks it is safe.
> > But he must do it explicitly via configuration.
> > 3) When we met such classes for the first time, a warning is printed to
> the
> > user. Something like: "Binary format cannot be applied to the [class]
> > because it implements Externalizable interface; instances will be
> > deserialized on server (to enable binary format please implement
> > Binarylizable interface, or enable [property] or define custom
> > serializer)".
> > 4) Only one exception class is possible on the server - ClassNotFound. We
> > will throw it with sensible error message as well.
> >
> > Thoughts?
> >
> > Vladimir.
> >
>


Re: Custom Java serialization and BinaryMarshaller

2015-12-14 Thread Dmitriy Setrakyan
Vova,

I think I am beginning to see your point. However, my concern is that users
may wish to ignore Externalizable and use the Binary protocol. Is it
possible to have a configuration flag on per-cache basis?

Also, why delegate to OptimizedMarshaller for Externalizable classes? Can’t
we have this logic in the Binary marshaller?

D.

On Mon, Dec 14, 2015 at 9:58 PM, Vladimir Ozerov 
wrote:

> Dima,
>
> I do not think it is possible to get list of such classes because this can
> be any class from any library. Can you explain what is your concerns?
>
> On Tue, Dec 15, 2015 at 1:46 AM, Dmitriy Setrakyan 
> wrote:
>
> > Vladimir,
> >
> > I am not sure I like the approach you are suggesting. I am thinking that
> by
> > “unstable” classes you are referring to classes like HashMap or
> ArrayList,
> > in which case, providing field metadata for them does not make sense, as
> > well as deserializing them on the server side does not make sense.
> >
> > Let’s dig a bit deeper. Can you provide a list of the “unstable” classes?
> >
> > D.
> >
> > On Mon, Dec 14, 2015 at 1:30 PM, Vladimir Ozerov 
> > wrote:
> >
> > > Folks,
> > >
> > > Currently BinaryMarshaller works in a very non-trivial way:
> > > 1) If class is Serializable or Binarylizable, it is written in binary
> > > format and can be used without deserialization.
> > > 2) If class implements Externalizable, it is written in binary format,
> > but
> > > without fields metadata.
> > > 3) If class has writeObject/readObject methods, it is written with
> > > OptimizedMarshaller, also without fields metadata.
> > >
> > > Class from p.2 and p.3 must always be deserialized on server side to
> > allow
> > > queries.
> > >
> > > There was an idea to ignore Externalizable/writeObject/readObject and
> > > always write object fields directly with binary format and metadata.
> And
> > > let user fallback to default Java logic if needed.
> > > I tried this approach today and it appears to be very unstable. Lots of
> > > classes from JDK and other libraries has custom serialization logic. As
> > we
> > > ignore it, it produces weird exceptions (such as NPE) which we cannot
> > > handle and cannot give user any advice on what is going on.
> > >
> > > I think we should resolve this problem as follows:
> > > 1) Both Externalizable and writeObject/readObject cases *by default*
> are
> > > handled in a similar way - with OptimizedMarshaller. *I.e. they are
> > > deserialized on server by default.*
> > > 2) User can optionally fallback to binary format if he thinks it is
> safe.
> > > But he must do it explicitly via configuration.
> > > 3) When we met such classes for the first time, a warning is printed to
> > the
> > > user. Something like: "Binary format cannot be applied to the [class]
> > > because it implements Externalizable interface; instances will be
> > > deserialized on server (to enable binary format please implement
> > > Binarylizable interface, or enable [property] or define custom
> > > serializer)".
> > > 4) Only one exception class is possible on the server - ClassNotFound.
> We
> > > will throw it with sensible error message as well.
> > >
> > > Thoughts?
> > >
> > > Vladimir.
> > >
> >
>


translate document into korean

2015-12-14 Thread younggyu Chun
Dear igniters

I would like to translate the Ignite documents into Korean, as many Koreans
have recently become interested in the Ignite project. I introduced
my colleagues to Ignite, and they are planning to use Ignite as shared RDDs
for Spark and as a data grid instead of applying Oracle Coherence.

I am sure that Ignite will become one of the most popular projects in Korea
in the near future, so I would like to help the many Koreans who are
struggling to read documents written in English, which would
dramatically boost the popularity of Ignite in Korea.

Feel free to give any comments on the translation.

Best Regards,
Younggyu, Chun