[jira] [Updated] (IGNITE-5410) Invocation of HadoopDataOutStream#write(byte[], int, int) with zero len causes an AssertionError.

2017-06-05 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5410:

Description: 
Writing an array of zero length causes the following AssertionError:

{code}
java.lang.AssertionError: 0
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopOffheapBuffer.move(HadoopOffheapBuffer.java:95)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.move(HadoopDataOutStream.java:55)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopMultimapBase$AdderBase$1.move(HadoopMultimapBase.java:206)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.write(HadoopDataOutStream.java:70)
at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:187)
...
{code}

Suggested fix is to change the assertion to 
{code} assert size >= 0 : size; {code}

  was:
Writing an array of zero length causes the following AssertionError:

{code}
java.lang.AssertionError: 0
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopOffheapBuffer.move(HadoopOffheapBuffer.java:95)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.move(HadoopDataOutStream.java:55)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopMultimapBase$AdderBase$1.move(HadoopMultimapBase.java:206)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.write(HadoopDataOutStream.java:70)
at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:187)
...
{code}

Suggested fix is to change the assertion to 
{code} assert size > 0 : size; {code}


> Invocation of HadoopDataOutStream#write(byte[], int, int) with zero len 
> causes an AssertionError.
> -
>
> Key: IGNITE-5410
> URL: https://issues.apache.org/jira/browse/IGNITE-5410
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.1
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> Writing an array of zero length causes the following AssertionError:
> {code}
> java.lang.AssertionError: 0
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopOffheapBuffer.move(HadoopOffheapBuffer.java:95)
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.move(HadoopDataOutStream.java:55)
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopMultimapBase$AdderBase$1.move(HadoopMultimapBase.java:206)
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.write(HadoopDataOutStream.java:70)
>   at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:187)
> ...
> {code}
> Suggested fix is to change the assertion to 
> {code} assert size >= 0 : size; {code}
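As an illustration, a minimal self-contained sketch of such a bounds check (hypothetical class and field names; the real HadoopOffheapBuffer operates on off-heap memory pointers) shows why the assertion must accept zero:

```java
// A minimal sketch of the bounds check (hypothetical names). A
// zero-length write is a valid no-op, so the assertion must accept
// size == 0 instead of failing on it.
class OffheapBufferSketch {
    private long pos;        // current write position
    private final long end;  // end of the allocated region

    OffheapBufferSketch(long capacity) {
        end = capacity;
    }

    /**
     * Advances the position by {@code size} bytes and returns the old
     * position, or -1 if there is no room left. With the original
     * {@code assert size > 0} a zero-length write coming from
     * BytesWritable.write() tripped the assertion.
     */
    long move(long size) {
        assert size >= 0 : size; // was: assert size > 0 : size;

        long oldPos = pos;

        if (oldPos + size > end)
            return -1; // no room: the caller must allocate more space

        pos = oldPos + size;

        return oldPos;
    }

    public static void main(String[] args) {
        OffheapBufferSketch buf = new OffheapBufferSketch(16);

        // A zero-length move is now a no-op instead of an AssertionError.
        System.out.println(buf.move(0));  // prints 0 (old position)
        System.out.println(buf.move(16)); // prints 0 (old position)
        System.out.println(buf.move(1));  // prints -1 (buffer is full)
    }
}
```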



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5410) Invocation of HadoopDataOutStream#write(byte[], int, int) with zero len causes an AssertionError.

2017-06-05 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5410:

Summary: Invocation of HadoopDataOutStream#write(byte[], int, int) with 
zero len causes an AssertionError.  (was: Invocation of 
HadoopOutputStream.write() with an empty array causes an AssertionError.)

> Invocation of HadoopDataOutStream#write(byte[], int, int) with zero len 
> causes an AssertionError.
> -
>
> Key: IGNITE-5410
> URL: https://issues.apache.org/jira/browse/IGNITE-5410
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.1
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> Writing an array of zero length causes the following AssertionError:
> {code}
> java.lang.AssertionError: 0
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopOffheapBuffer.move(HadoopOffheapBuffer.java:95)
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.move(HadoopDataOutStream.java:55)
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopMultimapBase$AdderBase$1.move(HadoopMultimapBase.java:206)
>   at 
> org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.write(HadoopDataOutStream.java:70)
>   at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:187)
> ...
> {code}
> Suggested fix is to change the assertion to 
> {code} assert size > 0 : size; {code}





[jira] [Created] (IGNITE-5410) Invocation of HadoopOutputStream.write() with an empty array causes an AssertionError.

2017-06-05 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-5410:
---

 Summary: Invocation of HadoopOutputStream.write() with an empty 
array causes an AssertionError.
 Key: IGNITE-5410
 URL: https://issues.apache.org/jira/browse/IGNITE-5410
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Affects Versions: 2.1
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky
Priority: Minor


Writing an array of zero length causes the following AssertionError:

{code}
java.lang.AssertionError: 0
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopOffheapBuffer.move(HadoopOffheapBuffer.java:95)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.move(HadoopDataOutStream.java:55)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopMultimapBase$AdderBase$1.move(HadoopMultimapBase.java:206)
at 
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataOutStream.write(HadoopDataOutStream.java:70)
at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:187)
...
{code}

Suggested fix is to change the assertion to 
{code} assert size > 0 : size; {code}





[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-05-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026280#comment-16026280
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 5/26/17 1:33 PM:
--

[~vozerov], thanks, fixed.
One note: as I understand it, reading from the underlying secondary FS stream 
does not need synchronization; only obtaining this stream and closing it need 
to be synchronized.
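The locking policy described in this note can be sketched as follows (a hypothetical holder class for illustration, not the actual Ignite code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of the policy: obtaining the lazily created
// secondary-FS stream and closing it are synchronized, while the read
// itself goes to the stream without holding the lock.
class SecondaryStreamHolder {
    private final Object mux = new Object();

    private InputStream in; // lazily opened secondary-FS stream

    /** Obtaining (lazily opening) the stream is synchronized. */
    private InputStream stream() throws IOException {
        synchronized (mux) {
            if (in == null)
                in = openSecondary();

            return in;
        }
    }

    /** The read itself is not synchronized. */
    int read(byte[] buf, int off, int len) throws IOException {
        return stream().read(buf, off, len);
    }

    /** Closing is synchronized, so it cannot race with stream(). */
    void close() throws IOException {
        synchronized (mux) {
            if (in != null) {
                in.close();

                in = null;
            }
        }
    }

    // Stand-in for opening a stream on the secondary file system.
    InputStream openSecondary() {
        return new ByteArrayInputStream(new byte[] {1, 2, 3});
    }

    public static void main(String[] args) throws IOException {
        SecondaryStreamHolder holder = new SecondaryStreamHolder();

        byte[] buf = new byte[3];
        System.out.println(holder.read(buf, 0, 3)); // prints 3 (bytes read)

        holder.close();
    }
}
```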


was (Author: iveselovskiy):
[~vozerov], thanks, fixed.

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.1
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]





[jira] [Issue Comment Deleted] (IGNITE-5168) Make IGFS metrics available for user

2017-05-17 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5168:

Comment: was deleted

(was: tests)

> Make IGFS metrics available for user
> 
>
> Key: IGNITE-5168
> URL: https://issues.apache.org/jira/browse/IGNITE-5168
> Project: Ignite
>  Issue Type: New Feature
>  Components: igfs
>Affects Versions: 2.0
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> Now in Ignite the only way to see IGFS metrics is to get them 
> programmatically from the code. But customers need them to understand what is 
> cached in IGFS, so exposing the metrics would be useful. 
> There are options to expose the metrics via: 
> 1) an MBean; 
> 2) the console Visor; 
> 3) the web console ( https://apacheignite-tools.readme.io/docs/ignite-web-console 
> )





[jira] [Comment Edited] (IGNITE-5168) Make IGFS metrics available for user

2017-05-17 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014443#comment-16014443
 ] 

Ivan Veselovsky edited comment on IGNITE-5168 at 5/17/17 5:23 PM:
--

Pull request https://github.com/apache/ignite/pull/1966 exposes IGFS metrics 
through MXBeans (making the metrics observable via JConsole).


was (Author: iveselovskiy):
https://github.com/apache/ignite/pull/1966

> Make IGFS metrics available for user
> 
>
> Key: IGNITE-5168
> URL: https://issues.apache.org/jira/browse/IGNITE-5168
> Project: Ignite
>  Issue Type: New Feature
>  Components: igfs
>Affects Versions: 2.0
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> Now in Ignite the only way to see IGFS metrics is to get them 
> programmatically from the code. But customers need them to understand what is 
> cached in IGFS, so exposing the metrics would be useful. 
> There are options to expose the metrics via: 
> 1) an MBean; 
> 2) the console Visor; 
> 3) the web console ( https://apacheignite-tools.readme.io/docs/ignite-web-console 
> )





[jira] [Commented] (IGNITE-5131) Hadoop: update asm library to a version that can parse 1.8 bytecode

2017-05-15 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16010968#comment-16010968
 ] 

Ivan Veselovsky commented on IGNITE-5131:
-

https://github.com/apache/ignite/pull/1946

> Hadoop: update asm library to a version that can parse 1.8 bytecode
> ---
>
> Key: IGNITE-5131
> URL: https://issues.apache.org/jira/browse/IGNITE-5131
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> This question indicates that the asm bytecode parser 4.2 cannot parse 1.8 
> bytecode:
> http://stackoverflow.com/questions/34318028/apache-ignite-failed-to-load-job-class-class-org-apache-ignite-internal-proces
> {code}
> <dependency>
>     <groupId>org.ow2.asm</groupId>
>     <artifactId>asm-all</artifactId>
>     <version>4.2</version>
> </dependency>
> {code}
> Consider updating the asm library to a newer version that understands the 1.8 
> bytecode version (likely 5.2? 
> http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.ow2.asm%22%20AND%20a%3A%22asm-all%22
>  )
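The suggested update would then hypothetically look like this in the pom (version 5.2 taken from the Maven Central link above, not verified here):

```xml
<dependency>
    <groupId>org.ow2.asm</groupId>
    <artifactId>asm-all</artifactId>
    <version>5.2</version>
</dependency>
```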





[jira] [Comment Edited] (IGNITE-5193) Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME variables are set.

2017-05-11 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006452#comment-16006452
 ] 

Ivan Veselovsky edited comment on IGNITE-5193 at 5/11/17 4:39 PM:
--

Pull request: https://github.com/apache/ignite/pull/1928
Tests: 
http://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests_IgniteHadoop&branch_IgniteTests=pull%2F1928%2Fhead&tab=buildTypeStatusDiv


was (Author: iveselovskiy):
https://github.com/apache/ignite/pull/1928 

> Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME 
> variables are set.
> ---
>
> Key: IGNITE-5193
> URL: https://issues.apache.org/jira/browse/IGNITE-5193
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>  Labels: easyfix
> Fix For: 2.1
>
>
> Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
> variables are set (see the trace below). This is caused by the following gap 
> in Ignite logic: 
> {{org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists}}, 
> {{#isDirectory}} and {{#isReadable}} return {{true}} for an empty String 
> argument. For the unset location variables the value is an empty String, but 
> {{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists}} 
> gets {{true}}, so the location is considered valid. This is the cause of 
> the problem.
> {code}
> [06:09:42] Security status [authentication=off, tls/ssl=off]
> [06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
> rollback startup routine).
> java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
> Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:134)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
> at 
> org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:254)
> at 
> org.apache.ignite.internal.processors.igfs.IgfsImpl.<init>(IgfsImpl.java:186)
> at 
> org.apache.ignite.internal.processors.igfs.IgfsContext.<init>(IgfsContext.java:101)
> at 
> org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:128)
> at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1638)
> at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:900)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
> at 
> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at 
> org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to resolve 
> Hadoop JAR locations: Failed to get directory files [dir=]
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:456)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:419)
> ... 18 more
> Caused by: java.io.IOException: Failed to get directory files [dir=]
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils$SearchDirectory.files(HadoopClasspathUtils.java:344)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils.classpathForClassLoader(HadoopClasspathUtils.java:68)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:453)
> ... 19 more
> {code}
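The gap described above (empty location strings passing the existence checks) could be closed by rejecting empty strings before touching the file system; here is a self-contained sketch (class and method names are illustrative, not the actual HadoopClasspathUtils code):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical sketch of the missing guard: treat an empty location
// string (an unset HADOOP_XXX_HOME variable) as non-existing instead of
// querying the file system, where an empty path resolves to the current
// working directory and looks like an existing, readable directory.
class HadoopPathChecksSketch {
    static boolean exists(String path) {
        return path != null && !path.isEmpty() && Files.exists(Paths.get(path));
    }

    static boolean isDirectory(String path) {
        return exists(path) && Files.isDirectory(Paths.get(path));
    }

    static boolean isReadable(String path) {
        return exists(path) && Files.isReadable(Paths.get(path));
    }

    public static void main(String[] args) {
        // An unset variable yields "", which must not look like a valid dir.
        System.out.println(exists(""));      // prints false
        System.out.println(isDirectory("")); // prints false

        String tmp = System.getProperty("java.io.tmpdir");
        System.out.println(isDirectory(tmp)); // true on a normal system
    }
}
```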





[jira] [Updated] (IGNITE-5193) Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME variables are set.

2017-05-11 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5193:

Labels: easyfix  (was: )

> Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME 
> variables are set.
> ---
>
> Key: IGNITE-5193
> URL: https://issues.apache.org/jira/browse/IGNITE-5193
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>  Labels: easyfix
> Fix For: 2.1
>
>
> Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
> variables are set (see the trace below). This is caused by the following gap 
> in Ignite logic: 
> {{org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists}}, 
> {{#isDirectory}} and {{#isReadable}} return {{true}} for an empty String 
> argument. For the unset location variables the value is an empty String, but 
> {{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists}} 
> gets {{true}}, so the location is considered valid. This is the cause of 
> the problem.
> {code}
> [06:09:42] Security status [authentication=off, tls/ssl=off]
> [06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
> rollback startup routine).
> java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
> Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:134)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
> at 
> org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:254)
> at 
> org.apache.ignite.internal.processors.igfs.IgfsImpl.<init>(IgfsImpl.java:186)
> at 
> org.apache.ignite.internal.processors.igfs.IgfsContext.<init>(IgfsContext.java:101)
> at 
> org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:128)
> at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1638)
> at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:900)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
> at 
> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at 
> org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to resolve 
> Hadoop JAR locations: Failed to get directory files [dir=]
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:456)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:419)
> ... 18 more
> Caused by: java.io.IOException: Failed to get directory files [dir=]
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils$SearchDirectory.files(HadoopClasspathUtils.java:344)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils.classpathForClassLoader(HadoopClasspathUtils.java:68)
> at 
> org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:453)
> ... 19 more
> {code}





[jira] [Updated] (IGNITE-5193) Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME variables are set.

2017-05-10 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5193:

Description: 
Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
variables are set (see the trace below). This is caused by the following gap in 
Ignite logic: 
{{org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists}}, 
{{#isDirectory}} and {{#isReadable}} return {{true}} for an empty String 
argument. For the unset location variables the value is an empty String, but 
{{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists}} 
gets {{true}}, so the location is considered valid. This is the cause of the 
problem.

{code}
[06:09:42] Security status [authentication=off, tls/ssl=off]
[06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
rollback startup routine).
java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:134)
at 
org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
at 
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:254)
at 
org.apache.ignite.internal.processors.igfs.IgfsImpl.<init>(IgfsImpl.java:186)
at 
org.apache.ignite.internal.processors.igfs.IgfsContext.<init>(IgfsContext.java:101)
at 
org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:128)
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1638)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:900)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to resolve 
Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:456)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:419)
... 18 more
Caused by: java.io.IOException: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils$SearchDirectory.files(HadoopClasspathUtils.java:344)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils.classpathForClassLoader(HadoopClasspathUtils.java:68)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:453)
... 19 more
{code}

  was:
Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
variables are set (see the trace below). This is caused by the following gap in 
Ignite logic: 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists , 
#isDirectory , #isReadable return {true} for an empty String argument. For the 
unset location variables the value is an empty String, but 
{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists} gets 
{true}, so the location is considered valid. This is the cause of the 
problem.

{code}
[06:09:42] Security status [authentication=off, tls/ssl=off]
[06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
rollback startup routine).
java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:134)
at 
org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
at 
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopI

[jira] [Updated] (IGNITE-5193) Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME variables are set.

2017-05-10 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5193:

Description: 
Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
variables are set (see the trace below). This is caused by the following gap in 
Ignite logic: 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists , 
#isDirectory , #isReadable return {true} for an empty String argument. For the 
unset location variables the value is an empty String, but 
{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists} gets 
{true}, so the location is considered valid. This is the cause of the 
problem.

{code}
[06:09:42] Security status [authentication=off, tls/ssl=off]
[06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
rollback startup routine).
java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:134)
at 
org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
at 
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:254)
at 
org.apache.ignite.internal.processors.igfs.IgfsImpl.<init>(IgfsImpl.java:186)
at 
org.apache.ignite.internal.processors.igfs.IgfsContext.<init>(IgfsContext.java:101)
at 
org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:128)
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1638)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:900)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to resolve 
Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:456)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:419)
... 18 more
Caused by: java.io.IOException: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils$SearchDirectory.files(HadoopClasspathUtils.java:344)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils.classpathForClassLoader(HadoopClasspathUtils.java:68)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:453)
... 19 more
{code}

  was:
Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
variables are set (see the trace below). This is caused by the following gap in 
Ignite logic: 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists 
returns {true} for an empty String argument. For the unset location variables 
the value is an empty string, but 
{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists} gets 
{true}. This is the cause of the problem.

{code}
[06:09:42] Security status [authentication=off, tls/ssl=off]
[06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
rollback startup routine).
java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:134)
at 
org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
at 
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:254)
at 
org.apache.ignite.internal.process

[jira] [Created] (IGNITE-5193) Hadoop: Ignite node fails to start if some, but not all, HADOOP_XXX_HOME variables are set.

2017-05-10 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-5193:
---

 Summary: Hadoop: Ignite node fails to start if some, but not all, 
HADOOP_XXX_HOME variables are set.
 Key: IGNITE-5193
 URL: https://issues.apache.org/jira/browse/IGNITE-5193
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Affects Versions: 1.8
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky
 Fix For: 2.1


Ignite node fails to start if some, but not all, of the 3 HADOOP_XXX_HOME 
variables are set (see the trace below). This is caused by the following gap in 
Ignite logic: 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils#exists 
returns {true} for an empty String argument. For the unset location variables 
the value is an empty string, so 
{org.apache.ignite.internal.processors.hadoop.HadoopLocations#xExists} gets 
{true}. This is the cause of the problem.

{code}
[06:09:42] Security status [authentication=off, tls/ssl=off]
[06:17:23,822][ERROR][main][IgniteKernal] Got exception while starting (will 
rollback startup routine).
java.lang.RuntimeException: class org.apache.ignite.IgniteCheckedException: 
Failed to resolve Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:422)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.&lt;init&gt;(HadoopClassLoader.java:134)
at 
org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.commonClassLoader(HadoopHelperImpl.java:78)
at 
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:254)
at 
org.apache.ignite.internal.processors.igfs.IgfsImpl.&lt;init&gt;(IgfsImpl.java:186)
at 
org.apache.ignite.internal.processors.igfs.IgfsContext.&lt;init&gt;(IgfsContext.java:101)
at 
org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:128)
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1638)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:900)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to resolve 
Hadoop JAR locations: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:456)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.addHadoopUrls(HadoopClassLoader.java:419)
... 18 more
Caused by: java.io.IOException: Failed to get directory files [dir=]
at 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils$SearchDirectory.files(HadoopClasspathUtils.java:344)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClasspathUtils.classpathForClassLoader(HadoopClasspathUtils.java:68)
at 
org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.hadoopUrls(HadoopClassLoader.java:453)
... 19 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5168) Make IGFS metrics available for user

2017-05-04 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5168:

Description: 
Now in Ignite the only way to see IGFS metrics is to get them programmatically 
from the code. But customers need them to understand what is cached in IGFS, so 
exposing the metrics would be useful. 
There are options to expose the metrics via 
1) MBean 
2) console Visor 
3) web console ( https://apacheignite-tools.readme.io/docs/ignite-web-console )
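As a hedged illustration of option 1, metrics could be published as a platform MXBean so they show up in JConsole or Visor. The interface, attribute names, and values below are assumptions for the sketch, not the actual Ignite API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class IgfsMetricsExample {
    /** Interface name ending in "MXBean" makes it a compliant MXBean. */
    public interface IgfsMetricsMXBean {
        long getFilesCount();
        long getBytesRead();
    }

    /** Stub implementation with fixed values for the illustration. */
    public static class IgfsMetrics implements IgfsMetricsMXBean {
        public long getFilesCount() { return 42; }
        public long getBytesRead()  { return 1024; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical object name; a real integration would use Ignite's domain.
        ObjectName name = new ObjectName("org.example.igfs:type=IgfsMetrics");

        srv.registerMBean(new IgfsMetrics(), name);

        // Attribute names are derived from the getters: getFilesCount -> "FilesCount".
        System.out.println(srv.getAttribute(name, "FilesCount")); // prints 42
    }
}
```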

  was:
Now in Ignite the only way to see IGFS metrics is to get them programmatically 
from the code. But customers need them to understand what is cached in IGFS, so 
exposing the metrics would be useful. 
There are options to expose the metrics via MBean and/or via the console Visor 
interface.


> Make IGFS metrics available for user
> 
>
> Key: IGNITE-5168
> URL: https://issues.apache.org/jira/browse/IGNITE-5168
> Project: Ignite
>  Issue Type: New Feature
>  Components: IGFS
>Affects Versions: 2.0
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> Now in Ignite the only way to see IGFS metrics is to get them programmatically 
> from the code. But customers need them to understand what is cached in IGFS, 
> so exposing the metrics would be useful. 
> There are options to expose the metrics via 
> 1) MBean 
> 2) console Visor 
> 3) web console ( https://apacheignite-tools.readme.io/docs/ignite-web-console 
> )





[jira] [Updated] (IGNITE-5168) Make IGFS metrics available for user

2017-05-04 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5168:

Summary: Make IGFS metrics available for user  (was: Expose IGFS metrics 
via an MBean or console Visor.)

> Make IGFS metrics available for user
> 
>
> Key: IGNITE-5168
> URL: https://issues.apache.org/jira/browse/IGNITE-5168
> Project: Ignite
>  Issue Type: New Feature
>  Components: IGFS
>Affects Versions: 2.0
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> Now in Ignite the only way to see IGFS metrics is to get them programmatically 
> from the code. But customers need them to understand what is cached in IGFS, 
> so exposing the metrics would be useful. 
> There are options to expose the metrics via MBean and/or via the console Visor 
> interface.





[jira] [Created] (IGNITE-5168) Expose IGFS metrics via an MBean or console Visor.

2017-05-04 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-5168:
---

 Summary: Expose IGFS metrics via an MBean or console Visor.
 Key: IGNITE-5168
 URL: https://issues.apache.org/jira/browse/IGNITE-5168
 Project: Ignite
  Issue Type: New Feature
  Components: IGFS
Affects Versions: 2.0
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky


Now in Ignite the only way to see IGFS metrics is to get them programmatically 
from the code. But customers need them to understand what is cached in IGFS, so 
exposing the metrics would be useful. 
There are options to expose the metrics via MBean and/or via the console Visor 
interface.





[jira] [Issue Comment Deleted] (IGNITE-4862) NPE in reading data from IGFS

2017-05-03 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-4862:

Comment: was deleted

(was: https://github.com/apache/ignite/pull/1901)

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.1
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]





[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-05-03 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994991#comment-15994991
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 5/3/17 2:39 PM:
-

I prepared another version of the fix (pull request 
https://github.com/apache/ignite/pull/1901) that leaves the 2 problematic 
classes almost unsynchronized, but adds {{ synchronized (mux) {} }} where 
needed to use them in a concurrent context.  
This fix is also proven to work and looks simple, but to me it is less 
preferable because it does not provide good encapsulation and separation of 
responsibilities, so I would recommend using the 1st version of the fix. The 
{{ LazyValue }} class is just a general-purpose utility class that is to be 
placed in utilities. Other than that, the change is small and, what is more 
important, it makes the logic very clear.
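The {{ LazyValue }} utility mentioned above could look roughly like the hedged sketch below. Only the class name and its purpose come from the comment; the implementation (double-checked locking over a volatile field, computing the value at most once) is an assumption:

```java
import java.util.concurrent.Callable;

public class LazyValue<T> {
    private final Callable<T> factory;

    /** volatile is required for the double-checked locking idiom to be safe. */
    private volatile T val;

    public LazyValue(Callable<T> factory) {
        this.factory = factory;
    }

    /** Returns the value, computing it at most once even under concurrency. */
    public T get() throws Exception {
        T v = val;
        if (v == null) {
            synchronized (this) {
                v = val;
                if (v == null)
                    val = v = factory.call(); // invoked at most once
            }
        }
        return v;
    }
}
```

Callers would simply replace a mutable field plus ad-hoc synchronization with `lazy.get()`, which is what makes the first version of the fix read more clearly.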

  


was (Author: iveselovskiy):
I prepared another version of the fix (pull request 1901) that leaves the 2 
problematic classes unsynchronized, but adds {{ synchronized (mux) {} }} if 
needed to use them in concurrent context.  
This fix also proven to work and looks simple, but for me it looks less 
preferable, because it does not make good encapsulation and responsibility 
separation, so I would recommend to use 1st version of the fix. The  {{ 
LazyValue }} class is just generic purpose utility class , that is to be placed 
in utilities. Other than that , the change is small , and , what is more 
important,  very clear.

  

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.1
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]





[jira] [Commented] (IGNITE-4862) NPE in reading data from IGFS

2017-05-03 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994864#comment-15994864
 ] 

Ivan Veselovsky commented on IGNITE-4862:
-

https://github.com/apache/ignite/pull/1901

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.1
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]





[jira] [Updated] (IGNITE-5131) Hadoop: update asm library to a version that can parse 1.8 bytecode

2017-05-02 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-5131:

Description: 
This question indicates that the asm bytecode parser 4.2 cannot parse 1.8 bytecode:
http://stackoverflow.com/questions/34318028/apache-ignite-failed-to-load-job-class-class-org-apache-ignite-internal-proces

{code}
<dependency>
    <groupId>org.ow2.asm</groupId>
    <artifactId>asm-all</artifactId>
    <version>4.2</version>
</dependency>
{code}

Consider updating the asm lib to a newer version that understands 1.8 bytecode 
(likely 5.2? 
http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.ow2.asm%22%20AND%20a%3A%22asm-all%22
 )
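For illustration, the updated dependency would presumably look like the fragment below (assuming org.ow2.asm:asm-all:5.2 is the version picked from Maven Central; the exact version is still to be confirmed):

```xml
<dependency>
    <groupId>org.ow2.asm</groupId>
    <artifactId>asm-all</artifactId>
    <version>5.2</version>
</dependency>
```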


  was:
This question indicates that the asm bytecode parser 4.2 cannot parse 1.8 bytecode:
http://stackoverflow.com/questions/34318028/apache-ignite-failed-to-load-job-class-class-org-apache-ignite-internal-proces

<dependency>
    <groupId>org.ow2.asm</groupId>
    <artifactId>asm-all</artifactId>
    <version>4.2</version>
</dependency>

Consider updating the asm lib to a newer version that understands 1.8 bytecode 
(likely 5.2? 
http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.ow2.asm%22%20AND%20a%3A%22asm-all%22
 )




> Hadoop: update asm library to a version that can parse 1.8 bytecode
> ---
>
> Key: IGNITE-5131
> URL: https://issues.apache.org/jira/browse/IGNITE-5131
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> This question indicates that the asm bytecode parser 4.2 cannot parse 1.8 
> bytecode:
> http://stackoverflow.com/questions/34318028/apache-ignite-failed-to-load-job-class-class-org-apache-ignite-internal-proces
> {code}
> <dependency>
>     <groupId>org.ow2.asm</groupId>
>     <artifactId>asm-all</artifactId>
>     <version>4.2</version>
> </dependency>
> {code}
> Consider updating the asm lib to a newer version that understands 1.8 
> bytecode (likely 5.2? 
> http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.ow2.asm%22%20AND%20a%3A%22asm-all%22
>  )





[jira] [Created] (IGNITE-5131) Hadoop: update asm library to a version that can parse 1.8 bytecode

2017-05-02 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-5131:
---

 Summary: Hadoop: update asm library to a version that can parse 
1.8 bytecode
 Key: IGNITE-5131
 URL: https://issues.apache.org/jira/browse/IGNITE-5131
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky
Priority: Minor


This question indicates that the asm bytecode parser 4.2 cannot parse 1.8 bytecode:
http://stackoverflow.com/questions/34318028/apache-ignite-failed-to-load-job-class-class-org-apache-ignite-internal-proces

<dependency>
    <groupId>org.ow2.asm</groupId>
    <artifactId>asm-all</artifactId>
    <version>4.2</version>
</dependency>

Consider updating the asm lib to a newer version that understands 1.8 bytecode 
(likely 5.2? 
http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.ow2.asm%22%20AND%20a%3A%22asm-all%22
 )







[jira] [Comment Edited] (IGNITE-1925) Test HadoopSkipListSelfTest.testLevel flakily fails

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989130#comment-15989130
 ] 

Ivan Veselovsky edited comment on IGNITE-1925 at 4/28/17 4:47 PM:
--

[~vozerov] , the fix is that 
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList#randomLevel
 returns a value in the range 0..Infinity, not 0..32 or 0..64. This makes the 
function correct. The point is that the returned value should not be limited: 
with very low probability it may return any positive number. And the 
comprehensive statistical analysis performed in the test really reveals that: 
if you use the old #randomLevel function (now called "randomLevel32"), the 
test will start to fail with noticeable probability.
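A minimal sketch of the unbounded level generation described above (an illustration of the idea, not the actual Ignite implementation): each further level is reached with probability 1/2, so the result follows a geometric distribution with no artificial cap at 32 or 64.

```java
import java.util.Random;

public class RandomLevel {
    /**
     * Flip a fair coin until it comes up tails. Level L occurs with
     * probability 2^-(L+1): any level is possible, just exponentially rare,
     * which is exactly the distribution a skip list expects.
     */
    static int randomLevel(Random rnd) {
        int level = 0;
        while (rnd.nextBoolean())
            level++;
        return level;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        long sum = 0;
        int n = 1_000_000;
        for (int i = 0; i < n; i++)
            sum += randomLevel(rnd);
        // The distribution's mean is 1, so the sample mean lands close to 1.
        System.out.println((double) sum / n);
    }
}
```

Capping the level (as the old randomLevel32 effectively did) skews this distribution, which is what the statistical test detects.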




was (Author: iveselovskiy):
[~vozerov] , the fix is that  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList#randomLevel
 returns value in range 0..Infinity, not 0..32 or 0..64. This makes the 
function correct. The point is that the returned value should not be limited . 
With very low probability it may return any positive number. And comprehensive 
statistic analysis performed in the test really reveals that. 



> Test HadoopSkipListSelfTest.testLevel flakily fails
> ---
>
> Key: IGNITE-1925
> URL: https://issues.apache.org/jira/browse/IGNITE-1925
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.6
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> Test HadoopSkipListSelfTest.testLevel fails from time to time with ~ 3% 
> probability.
>  
> junit.framework.AssertionFailedError: null
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipListSelfTest.testLevel(HadoopSkipListSelfTest.java:83)





[jira] [Comment Edited] (IGNITE-1925) Test HadoopSkipListSelfTest.testLevel flakily fails

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989130#comment-15989130
 ] 

Ivan Veselovsky edited comment on IGNITE-1925 at 4/28/17 4:47 PM:
--

[~vozerov] , the fix is that 
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList#randomLevel
 returns a value in the range 0..Infinity, not 0..32 or 0..64. This makes the 
function correct. The point is that the returned value should not be limited: 
with very low probability it may return any positive number. And the 
comprehensive statistical analysis performed in the test really reveals that: 
if you use the old #randomLevel function (now called "randomLevel32"), the 
test will start to fail with noticeable probability.




was (Author: iveselovskiy):
[~vozerov] , the fix is that  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList#randomLevel
 returns value in range 0..Infinity, not 0..32 or 0..64. This makes the 
function correct. The point is that the returned value should not be limited . 
With very low probability it may return any positive number. And comprehensive 
statistic analysis performed in the test really reveals that: if you use old 
#randomLevel function (now it is called "renndomLevel32"), the test will start 
to fail with noticeable probability.



> Test HadoopSkipListSelfTest.testLevel flakily fails
> ---
>
> Key: IGNITE-1925
> URL: https://issues.apache.org/jira/browse/IGNITE-1925
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.6
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> Test HadoopSkipListSelfTest.testLevel fails from time to time with ~ 3% 
> probability.
>  
> junit.framework.AssertionFailedError: null
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipListSelfTest.testLevel(HadoopSkipListSelfTest.java:83)





[jira] [Resolved] (IGNITE-5044) JVM crash

2017-04-28 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky resolved IGNITE-5044.
-
Resolution: Duplicate

> JVM crash
> -
>
> Key: IGNITE-5044
> URL: https://issues.apache.org/jira/browse/IGNITE-5044
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.0
>Reporter: Sergey Kozlov
>Assignee: Ivan Veselovsky
>Priority: Critical
> Fix For: 2.1
>
> Attachments: grid.2.node.1.0.out.log, hs_err_pid4079.log
>
>
> Sometimes testing Apache Hadoop + Apache Hive kills the JVM.
> Take a look at the attached files.





[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/28/17 2:52 PM:
--

The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: {{set ignite.job.shared.classloader}} .

Note. As per my observations, {{ --hiveconf k=v}} option does not pass 
properties when used with {{beeline  -u jdbc:hive2://}} . At the same time {{ 
--hiveconf k=v}} works fine with {{hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/}} , as well as with {{hive  ...}} client (with 
local connection) . 


was (Author: iveselovskiy):
The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: {{set ignite.job.shared.classloader}} .

Note. As per my observations, {{--hiveconf k=v}} option does not pass 
properties when used with {{beeline  -u jdbc:hive2://}} . At the same time 
{{--hiveconf k=v}} works fine with {{hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/}} , as well as with {{hive  ...}} client (with 
local connection) . 

> JVM crash
> -
>
> Key: IGNITE-5044
> URL: https://issues.apache.org/jira/browse/IGNITE-5044
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.0
>Reporter: Sergey Kozlov
>Assignee: Ivan Veselovsky
>Priority: Critical
> Fix For: 2.1
>
> Attachments: grid.2.node.1.0.out.log, hs_err_pid4079.log
>
>
> Sometimes testing Apache Hadoop + Apache Hive kills the JVM.
> Take a look at the attached files.





[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/28/17 2:52 PM:
--

The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: {{set ignite.job.shared.classloader}} .

Note. As per my observations, {{\--hiveconf k=v}} option does not pass 
properties when used with {{beeline  -u jdbc:hive2://}} . At the same time 
{{\--hiveconf k=v}} works fine with {{hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/}} , as well as with {{hive  ...}} client (with 
local connection) . 


was (Author: iveselovskiy):
The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: {{set ignite.job.shared.classloader}} .

Note. As per my observations, {{ --hiveconf k=v}} option does not pass 
properties when used with {{beeline  -u jdbc:hive2://}} . At the same time {{ 
--hiveconf k=v}} works fine with {{hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/}} , as well as with {{hive  ...}} client (with 
local connection) . 

> JVM crash
> -
>
> Key: IGNITE-5044
> URL: https://issues.apache.org/jira/browse/IGNITE-5044
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.0
>Reporter: Sergey Kozlov
>Assignee: Ivan Veselovsky
>Priority: Critical
> Fix For: 2.1
>
> Attachments: grid.2.node.1.0.out.log, hs_err_pid4079.log
>
>
> Sometimes testing Apache Hadoop + Apache Hive kills the JVM.
> Take a look at the attached files.





[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/28/17 2:51 PM:
--

The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: {{set ignite.job.shared.classloader}} .

Note. As per my observations, {{--hiveconf k=v}} option does not pass 
properties when used with {{beeline  -u jdbc:hive2://}} . At the same time 
{{--hiveconf k=v}} works fine with {{hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/}} , as well as with {{hive  ...}} client (with 
local connection) . 


was (Author: iveselovskiy):
The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: "set ignite.job.shared.classloader" .

Note. As per my observations, {{ --hiveconf k=v }} option does not pass 
properties when used with {{beeline  -u jdbc:hive2:// }} . At the same time {{ 
--hiveconf k=v }} works fine with {{ hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/ }} , as well as with {{ hive  ... }} client (with 
local connection) . 

> JVM crash
> -
>
> Key: IGNITE-5044
> URL: https://issues.apache.org/jira/browse/IGNITE-5044
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.0
>Reporter: Sergey Kozlov
>Assignee: Ivan Veselovsky
>Priority: Critical
> Fix For: 2.1
>
> Attachments: grid.2.node.1.0.out.log, hs_err_pid4079.log
>
>
> Sometimes testing Apache Hadoop + Apache Hive kills the JVM
> Take a look at the attached file





[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/28/17 2:50 PM:
--

The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: "set ignite.job.shared.classloader" .

Note. As per my observations, {{ --hiveconf k=v }} option does not pass 
properties when used with {{beeline  -u jdbc:hive2:// }} . At the same time {{ 
--hiveconf k=v }} works fine with {{ hiveserver2 ; beeline  -u 
jdbc:hive2://localhost:1000/ }} , as well as with {{ hive  ... }} client (with 
local connection) . 


was (Author: iveselovskiy):
The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: "set ignite.job.shared.classloader" .

As per my observations, --hiveconf k=v option does not pass properties when 
used with "beeline  -u jdbc:hive2:// " . At the same time "--hiveconf k=v" 
works fine with "hiveserver2 ; beeline  -u jdbc:hive2://localhost:1000/ " , as 
well as with "hive  ..." with local connection . 



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/28/17 2:48 PM:
--

The task runs Okay when the Hive job is run with property 
{{ignite.job.shared.classloader=false}}  , 
for Hive this can be achieved with 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
So, this actually is a duplicate of IGNITE-4720 .

To query actual property value in Hive job, the following Hive query can be 
launched: "set ignite.job.shared.classloader" .

As per my observations, --hiveconf k=v option does not pass properties when 
used with "beeline  -u jdbc:hive2:// " . At the same time "--hiveconf k=v" 
works fine with "hiveserver2 ; beeline  -u jdbc:hive2://localhost:1000/ " , as 
well as with "hive  ..." with local connection . 


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}  , 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
This suggests that this may be duplicate of IGNITE-4720 .



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:22 PM:
--

Experiments in my local environment (very simplified, e.g. without IGFS) do not 
show the SIGSEGV crash, but do show this query failure (with a very strange 
NPE from Hive code).
But the following observation may be useful: the task runs okay when run with 
the property {{ignite.job.shared.classloader=false}}:
 {{  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. }} .  
This suggests that this may be a duplicate of IGNITE-4720.


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}   {{  {HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} .  This suggests that 
this may be duplicate of IGNITE-4720 .



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:22 PM:
--

Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}  , 
 {code}  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. 
{code} .  
This suggests that this may be duplicate of IGNITE-4720 .


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}  
 {{  HIVE_HOME/bin/hive --hiveconf ignite.job.shared.classloader=false .. }} .  
This suggests that this may be duplicate of IGNITE-4720 .



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:21 PM:
--

Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}  {{ ${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} .  This suggests that 
this may be duplicate of IGNITE-4720 .


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}} ( {{ ${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} ) .  This suggests that 
this may be duplicate of IGNITE-4720 .



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:21 PM:
--

Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}} ( {{ ${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} ) .  This suggests that 
this may be duplicate of IGNITE-4720 .


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}} ( {{ "${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} ) .  This suggests that 
this may be duplicate of IGNITE-4720 .



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:21 PM:
--

Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}   {{  {HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} .  This suggests that 
this may be duplicate of IGNITE-4720 .


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}}  {{ ${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} .  This suggests that 
this may be duplicate of IGNITE-4720 .



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:16 PM:
--

Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}} ( {{ "${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} ) .  This suggests that 
this may be duplicate of IGNITE-4720 .


was (Author: iveselovskiy):
Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}} ( {{ "${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} ) . 



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-26 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/26/17 7:15 PM:
--

Experiments in my local environment (very simplified, e.g. w/o IGFS) do not 
show the SIGSEGV crash, but also show this query failure (with a very strange 
NPE from hive code).
But the following observation may be useful: the task runs Okay when run with 
property {{ignite.job.shared.classloader=false}} ( {{ "${HIVE_HOME}/bin/hive 
--hiveconf ignite.job.shared.classloader=false ... }} ) . 


was (Author: iveselovskiy):
branch {{ignite-5044}}: looks like I was able to provide an Exception instead 
of crash, so the error is now much more graceful.



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-25 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/25/17 5:52 PM:
--

branch {{ignite-5044}}: it looks like I was able to raise an Exception instead 
of a crash, so the error is now much more graceful.


was (Author: iveselovskiy):
branch `ignite-5044`: looks like I was able to provide an Exception instead of 
crash, so the error is now much more graceful.



[jira] [Comment Edited] (IGNITE-5044) JVM crash

2017-04-25 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky edited comment on IGNITE-5044 at 4/25/17 5:51 PM:
--

branch `ignite-5044`: looks like I was able to provide an Exception instead of 
crash, so the error is now much more graceful.


was (Author: iveselovskiy):
branch {ignite-5044}: looks like I was able to provide an Exception instead of 
crash, so the error is now much more graceful.



[jira] [Commented] (IGNITE-5044) JVM crash

2017-04-25 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983334#comment-15983334
 ] 

Ivan Veselovsky commented on IGNITE-5044:
-

branch {ignite-5044}: looks like I was able to provide an Exception instead of 
crash, so the error is now much more graceful.



[jira] [Commented] (IGNITE-5044) JVM crash

2017-04-21 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978465#comment-15978465
 ] 

Ivan Veselovsky commented on IGNITE-5044:
-

The crashed thread stack trace: 
{code}
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 2352 C2 
org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMemory.readInt(J)I (5 
bytes) @ 0x7f5a054e269f [0x7f5a054e2680+0x1f]
j  
org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopDataInStream.readInt()I+11
j  org.apache.hadoop.io.BytesWritable.readFields(Ljava/io/DataInput;)V+7
j  
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopWritableSerialization.read(Ljava/io/DataInput;Ljava/lang/Object;)Ljava/lang/Object;+31
j  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopMultimapBase$ReaderBase.read(JJ)Ljava/lang/Object;+25
j  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList$Reader.readKey(J)Ljava/lang/Object;+45
j  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList$AdderImpl.cmp(Ljava/lang/Object;J)I+77
j  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList$AdderImpl.add(Ljava/lang/Object;Ljava/lang/Object;)J+231
j  
org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipList$AdderImpl.write(Ljava/lang/Object;Ljava/lang/Object;)V+9
j  
org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffleJob$PartitionedOutput.write(Ljava/lang/Object;Ljava/lang/Object;)V+301
j  
org.apache.ignite.internal.processors.hadoop.impl.v1.HadoopV1OutputCollector.collect(Ljava/lang/Object;Ljava/lang/Object;)V+30
j  
org.apache.ignite.internal.processors.hadoop.impl.v1.HadoopV1Task$1.collect(Ljava/lang/Object;Ljava/lang/Object;)V+23
j  
org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.collect(Lorg/apache/hadoop/io/BytesWritable;Lorg/apache/hadoop/io/Writable;)V+143
j  
org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.process(Ljava/lang/Object;I)V+668
J 2360 C2 
org.apache.hadoop.hive.ql.exec.Operator.forward(Ljava/lang/Object;Lorg/apache/hadoop/hive/serde2/objectinspector/ObjectInspector;)V
 (84 bytes) @ 0x7f5a054ec8bc [0x7f5a054ec840+0x7c]
j  
org.apache.hadoop.hive.ql.exec.TableScanOperator.process(Ljava/lang/Object;I)V+138
j  
org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(Ljava/lang/Object;)Z+18
j  
org.apache.hadoop.hive.ql.exec.MapOperator.process(Lorg/apache/hadoop/io/Writable;)V+66
j  
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(Ljava/lang/Object;Ljava/lang/Object;Lorg/apache/hadoop/mapred/OutputCollector;Lorg/apache/hadoop/mapred/Reporter;)V+80
j  
org.apache.ignite.internal.processors.hadoop.impl.v1.HadoopV1MapTask.run(Lorg/apache/ignite/internal/processors/hadoop/HadoopTaskContext;)V+313
j  
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run()V+59
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(Lorg/apache/ignite/internal/processors/hadoop/counter/HadoopPerformanceCounter;)V+76
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0()V+63
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call()Ljava/lang/Void;+4
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call()Ljava/lang/Object;+1
j  
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(Ljava/util/concurrent/Callable;)Ljava/lang/Object;+90
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call()Ljava/lang/Void;+27
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call()Ljava/lang/Object;+1
j  
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body()V+4
j  org.apache.ignite.internal.util.worker.GridWorker.run()V+82
j  java.lang.Thread.run()V+11

{code}


Maybe this is a consequence of a zero value returned from 
{{org.apache.ignite.internal.processors.hadoop.shuffle.streams.HadoopOffheapBuffer#move}}?
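
The zero-value hypothesis can be illustrated with a simplified sketch of the buffer pointer arithmetic. The class name, field layout, and initial position below are assumptions for illustration, not the actual Ignite source; the relaxed {{assert size >= 0}} follows the fix suggested in IGNITE-5410:

```java
// Hypothetical simplification of HadoopOffheapBuffer#move (layout is assumed).
public class OffheapBufferSketch {
    private long posPtr = 16;      // current write position (non-zero: 0 is the failure sentinel)
    private final long end = 1024; // end of the allocated region

    /** Returns the previous position on success, or 0 when out of room. */
    public long move(long size) {
        // The original assertion was `assert size > 0 : size;`, which fired
        // on zero-length writes; `>= 0` accepts them as a no-op move.
        assert size >= 0 : size;

        long oldPos = posPtr;
        long newPos = oldPos + size;

        if (newPos > end)
            return 0; // caller must switch to a fresh buffer

        posPtr = newPos;

        return oldPos;
    }
}
```

A zero returned here looks identical to the "out of room" case to the caller, which is why downstream code can end up dereferencing an invalid offheap address.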

> JVM crash
> -
>
> Key: IGNITE-5044
> URL: https://issues.apache.org/jira/browse/IGNITE-5044
> Project: Ignite
>  Issue Type: Bug
>  Components: general, hadoop
>Affects Versions: 2.0
>Reporter: Sergey Kozlov
>Assignee: Ivan Veselovsky
>Priority: Critical
> Fix For: 2.0
>
> Attachments: grid.2.node.1.0.out.log, hs_err_pid4079.log
>
>
> Sometimes testing Apache Hadoop + Apache Hive kills the JVM
> Take a look at the attached file





[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 2:45 PM:
--

The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}
After that, the problem was reproducible just after node start with an attempt 
to read a 44 MB file via IGFS (with {{./hadoop-ig fs -copyToLocal 
igfs://localhost:10500/tmp/myfile /tmp/}}) that exists on HDFS (the secondary 
FS) but is not yet present in the IGFS caches.
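
The configuration snippet above lost its XML in archiving. A hedged Java-based equivalent, assuming the standard IGFS {{FileSystemConfiguration}} setters and illustrative values (not the reporter's exact setup):

```java
import org.apache.ignite.configuration.FileSystemConfiguration;

// Sketch only: the prefetch value 8 is illustrative, not taken from the report.
public class IgfsPrefetchConfig {
    public static FileSystemConfiguration configure() {
        FileSystemConfiguration fsCfg = new FileSystemConfiguration();

        // A non-zero prefetchBlocks is required to reproduce the NPE.
        fsCfg.setPrefetchBlocks(8);

        // perNodeParallelBatchCount must be > 0 (16 is the default anyway).
        fsCfg.setPerNodeParallelBatchCount(16);

        return fsCfg;
    }
}
```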

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at 
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
at 
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}}
 was designed for single-threaded access, but in fact was accessed by multiple 
threads. That led to an unexpected situation where 
{{HadoopIgfsSecondaryFileSystemPositionedReadable#in}}
 returned null, which caused a number of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).
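
The shape of the fix can be sketched as follows. This is a minimal stand-in, not the actual Ignite class: the byte-array "stream", constructor, and method signature are assumptions; the point is only that synchronizing {{read}} prevents concurrent callers from observing a half-initialized underlying stream (the source of the NPE):

```java
// Hedged sketch of a positioned-readable made thread-safe via synchronization.
public class SafePositionedReadable {
    private final byte[] data; // stands in for the lazily opened secondary FS stream

    public SafePositionedReadable(byte[] data) {
        this.data = data;
    }

    /** Reads up to len bytes at pos; synchronized so two threads cannot
     *  race on the underlying stream's initialization and position. */
    public synchronized int read(long pos, byte[] buf, int off, int len) {
        if (pos >= data.length)
            return -1; // past end of data

        int n = Math.min(len, data.length - (int) pos);
        System.arraycopy(data, (int) pos, buf, off, n);

        return n;
    }
}
```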


was (Author: iveselovskiy):
The property {{prefetchBlocks}} must be set to non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}
After that the problem was reproducible just after node start with an attempt 
to read a 44Mb file via IGFS (with {{./hadoop-ig fs -copyToLocal 
igfs://localhost:10500/tmp/myfile /tmp/ }} ) that exists on HDFS (secondary 
Fs), but is yet missing in the IGFS caches.

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at 
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
at 
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that class 

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 2:45 PM:
--

The property {{prefetchBlocks}} must be set to non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}
After that the problem was reproducible just after node start with an attempt 
to read a 44Mb file via IGFS (with {{./hadoop-ig fs -copyToLocal 
igfs://localhost:10500/tmp/myfile /tmp/ }} ) that exists on HDFS (secondary 
Fs), but is yet missing in the IGFS caches.

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at 
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
at 
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}}
 was designed for single-threaded access, but in fact was accessed by multiple 
threads. That led to an unexpected situation where 
{{HadoopIgfsSecondaryFileSystemPositionedReadable#in}}
 returned null, which caused a number of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}
After that, the problem was reproducible right after node start by attempting 
to read a 44 MB file via IGFS (with {{./hadoop-ig fs -copyToLocal 
igfs://localhost:10500/tmp/myfile /tmp/}}) that exists on HDFS (the secondary 
file system) but is not yet present in the IGFS caches.
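The {code} block above lost its XML content in the mail archive, so the exact snippet is unrecoverable. Purely as a hedged illustration (the setter names come from Ignite's public {{FileSystemConfiguration}} API; the chosen values and the surrounding wiring are assumptions), equivalent settings could be expressed programmatically:

```java
// Hypothetical configuration fragment (not from the issue); requires
// ignite-core and ignite-hadoop on the classpath, so it is a sketch only.
FileSystemConfiguration fsCfg = new FileSystemConfiguration();

fsCfg.setPrefetchBlocks(8);             // must be non-zero to reproduce the NPE
fsCfg.setPerNodeParallelBatchCount(16); // default is 16; must stay > 0

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFileSystemConfiguration(fsCfg);
```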

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that class 

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 2:45 PM:
--

The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}
After that, the problem was reproducible right after node start by attempting 
to read a 44 MB file via IGFS (with {{./hadoop-ig fs -copyToLocal 
igfs://localhost:10500/tmp/myfile /tmp/}}) that exists on HDFS (the secondary 
file system) but is not yet present in the IGFS caches.

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
org.apache.ignite.internal.proc

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 12:04 PM:
---

The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem ({{perNodeParallelBatchCount}} should be > 0, but it is 16 by default):
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem:
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at 
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
at 
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that class 
`org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable`
 was designed fro single-threaded access, but in fact was accessed by multiple 
threads. That lead to unexpected situation, when 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in
 returned null , that caused a bunch of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread safe 
(underlying secondary file system input stream has its own mechanisms to 
preserve consistency upon multithreaded access.)

>

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 10:19 AM:
---

The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem:
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem:
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
>   

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 10:17 AM:
---

The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem:
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The following properties should be set to reproduce the problem:
{code}



{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 10:18 AM:
---

The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem:
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The property {{prefetchBlocks}} must be set to a non-zero value to reproduce the 
problem:
{code}

 .

{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
> 

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 10:12 AM:
---

The following properties should be set to reproduce the problem:
{code}



{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The following properties should be set to reproduce the problem:
{code}





{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable}} 
was designed for single-threaded access but was in fact accessed by multiple 
threads. That led to an unexpected situation where 
{{org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in}} 
returned null, which caused a cascade of further errors. 
Fixed by making {{HadoopIgfsSecondaryFileSystemPositionedReadable}} thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
>

[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-31 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/31/17 10:12 AM:
---

The following properties should be set to reproduce the problem:
{code}





{code}

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
    at org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
    at org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
    at org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
    at org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
    at org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
`org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable`
was designed for single-threaded access, but was in fact accessed by multiple 
threads. This led to an unexpected situation where 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in
returned null, which caused a cascade of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).
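The race described above can be illustrated with a minimal sketch (all names here are hypothetical, not the actual Ignite code): a stream field that is initialized lazily without synchronization can be observed as null by a concurrent reader, while guarding every access with a single lock removes the race.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Hedged sketch (illustrative only): a positioned-readable wrapper whose
 * underlying stream is opened lazily. If read() were not synchronized, two
 * threads could race on the lazy initialization of 'in' and one of them could
 * observe the field as null -- the kind of race behind the NPE above.
 */
class LazyPositionedReadable {
    private InputStream in; // lazily opened underlying stream; guarded by 'this'

    // Synchronizing the whole read keeps lazy initialization and use of 'in'
    // atomic, so no thread can ever see the field in a half-initialized state.
    public synchronized int read(long pos, byte[] buf, int off, int len) throws IOException {
        if (in == null)
            in = openStream(); // safe: runs under the same lock as every read

        in.skip(pos); // simplified positioned read: skip to position, then read

        return in.read(buf, off, len);
    }

    private InputStream openStream() {
        // Stand-in for opening a stream over the secondary file system.
        return new ByteArrayInputStream(new byte[] {1, 2, 3, 4});
    }
}
```

This is only a sketch of the synchronization approach; the real fix additionally relies on the secondary file system's own consistency guarantees for the stream itself.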


was (Author: iveselovskiy):
The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at 
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
at 
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
`org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable`
was designed for single-threaded access, but was in fact accessed by multiple 
threads. This led to an unexpected situation where 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in
returned null, which caused a cascade of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
> 

[jira] [Commented] (IGNITE-4862) NPE in reading data from IGFS

2017-03-30 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949662#comment-15949662
 ] 

Ivan Veselovsky commented on IGNITE-4862:
-

https://github.com/apache/ignite/pull/1706

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4862) NPE in reading data from IGFS

2017-03-30 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky edited comment on IGNITE-4862 at 3/30/17 4:58 PM:
--

The following exception happens on the Ignite node at the same time: 
{code}
Exception in thread "igfs-#50%null%" java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at 
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:419)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:357)
at 
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:346)
at 
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The problem is that the class 
`org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable`
was designed for single-threaded access, but was in fact accessed by multiple 
threads. This led to an unexpected situation where 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in
returned null, which caused a cascade of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).


was (Author: iveselovskiy):
The problem is that the class 
`org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable`
was designed for single-threaded access, but was in fact accessed by multiple 
threads. This led to an unexpected situation where 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in
returned null, which caused a cascade of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDesti

[jira] [Commented] (IGNITE-4862) NPE in reading data from IGFS

2017-03-30 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949291#comment-15949291
 ] 

Ivan Veselovsky commented on IGNITE-4862:
-

The problem is that the class 
`org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable`
was designed for single-threaded access, but was in fact accessed by multiple 
threads. This led to an unexpected situation where 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable#in
returned null, which caused a cascade of further errors. 
Fixed by making HadoopIgfsSecondaryFileSystemPositionedReadable thread-safe 
(the underlying secondary file system input stream has its own mechanisms to 
preserve consistency under multithreaded access).

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-4862) NPE in reading data from IGFS

2017-03-29 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-4862:
---

Assignee: Ivan Veselovsky

> NPE in reading data from IGFS
> -
>
> Key: IGNITE-4862
> URL: https://issues.apache.org/jira/browse/IGNITE-4862
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.9
>Reporter: Dmitry Karachentsev
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> {noformat}
> D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
> igfs:///cacheLink/test3.orc D:\test3.orc
> -copyToLocal: Fatal internal error
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> {noformat}
> Details in discussion: 
> [http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945503#comment-15945503
 ] 

Ivan Veselovsky edited comment on IGNITE-3542 at 3/28/17 4:40 PM:
--

[~vozerov], thanks -- that was a mistake, and it is now fixed.

Note that several tests fail on Windows agents (agent name `*_9090`), but this 
happens for reasons unrelated to this fix (a problem with OS-locked files on 
rename/move that also occurs on the `master` branch). Ticket IGNITE-4623 is 
intended to fix that.


was (Author: iveselovskiy):
[~vozerov], thanks -- that was a mistake, and it is now fixed.

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If the integrity check fails, it means that a concurrent file system update 
> occurred. We should always perform a retry in this case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945503#comment-15945503
 ] 

Ivan Veselovsky commented on IGNITE-3542:
-

[~vozerov], thanks -- that was a mistake, and it is now fixed.

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If the integrity check fails, it means that a concurrent file system update 
> occurred. We should always perform a retry in this case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4623) IGFS test suite should run successfully on Windows agents

2017-03-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945495#comment-15945495
 ] 

Ivan Veselovsky commented on IGNITE-4623:
-

I merged the last patch of IGNITE-3920 (pull request 1107) with master, but do 
not see the fix for the NPE there.
I included the NPE fix in the last patch for IGNITE-3542.

> IGFS test suite should run successfully on Windows agents
> -
>
> Key: IGNITE-4623
> URL: https://issues.apache.org/jira/browse/IGNITE-4623
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Veselovsky
>
> Currently on Windows agents (*_*_*_9090) there are ~560 failures related to 
> differences in local file system behavior on Windows.
> E.g. see 1.8.3: 
> http://ci.ignite.apache.org/viewLog.html?buildId=435288&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteGgfs
>  
> 2.0: 
> http://ci.ignite.apache.org/viewLog.html?buildId=435289&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteGgfs
>  .
> Most of the failures are caused by NPE in 
> org.apache.ignite.igfs.secondary.local.LocalIgfsSecondaryFileSystem#info 
> (line 370).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3920) Hadoop: remove PayloadAware interface.

2017-03-28 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945241#comment-15945241
 ] 

Ivan Veselovsky commented on IGNITE-3920:
-

Merged with the 'master' branch: https://github.com/apache/ignite/pull/1107 .



> Hadoop: remove PayloadAware interface.
> --
>
> Key: IGNITE-3920
> URL: https://issues.apache.org/jira/browse/IGNITE-3920
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.7
>Reporter: Vladimir Ozerov
>Assignee: Alexey Kuznetsov
> Fix For: 2.0
>
>
> When IGNITE-3376 is ready, we will be able to execute {{PROXY}} operations 
> directly from {{IgfsImpl}}. It means that we no longer need 
> {{HadoopPayloadAware}} interface, and we no longer need to pass {{IgfsPaths}} 
> to the client. 
> Let's remove them all together.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-1925) Test HadoopSkipListSelfTest.testLevel flakily fails

2017-03-27 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943845#comment-15943845
 ] 

Ivan Veselovsky commented on IGNITE-1925:
-

https://github.com/apache/ignite/pull/1678

> Test HadoopSkipListSelfTest.testLevel flakily fails
> ---
>
> Key: IGNITE-1925
> URL: https://issues.apache.org/jira/browse/IGNITE-1925
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.6
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> Test HadoopSkipListSelfTest.testLevel fails from time to time with ~ 3% 
> probability.
>  
> junit.framework.AssertionFailedError: null
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipListSelfTest.testLevel(HadoopSkipListSelfTest.java:83)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-1925) Test HadoopSkipListSelfTest.testLevel flakily fails

2017-03-20 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-1925:
---

Assignee: Ivan Veselovsky

> Test HadoopSkipListSelfTest.testLevel flakily fails
> ---
>
> Key: IGNITE-1925
> URL: https://issues.apache.org/jira/browse/IGNITE-1925
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.6
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
>
> Test HadoopSkipListSelfTest.testLevel fails from time to time with ~ 3% 
> probability.
>  
> junit.framework.AssertionFailedError: null
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipListSelfTest.testLevel(HadoopSkipListSelfTest.java:83)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-481) Add tests for Metrics to the file system tests infrastructure

2017-03-17 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930564#comment-15930564
 ] 

Ivan Veselovsky edited comment on IGNITE-481 at 3/17/17 7:29 PM:
-

Comments addressed; this should now be reviewed against the 2.0 branch.


was (Author: iveselovskiy):
Comments addressed; this should now be reviewed against the 2.0 branch.

> Add tests for Metrics to the file system tests infrastructure
> -
>
> Key: IGNITE-481
> URL: https://issues.apache.org/jira/browse/IGNITE-481
> Project: Ignite
>  Issue Type: Task
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> Need to add tests for org.apache.ignite.igfs.IgfsMetrics to the filesystem 
> tests.
> See org.apache.ignite.IgniteFileSystem#metrics , 
> org.apache.ignite.IgniteFileSystem#resetMetrics .



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-13 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905208#comment-15905208
 ] 

Ivan Veselovsky edited comment on IGNITE-3542 at 3/13/17 1:56 PM:
--

[~vozerov], 
1) The suggested fix does not prevent {{delete}} from being invoked on the 
underlying FS when the integrity check fails; we just retry and go to the next 
loop iteration in that case. This is exactly what this ticket requests.
2) The problem is that we rely on the natural assertion "a file does not exist 
after removal", and with the current implementation of IgfsImpl#exists() we can 
satisfy it only if the file is also deleted from the secondary file system upon 
removal. The reason is that if a file does not exist on the primary file system 
but exists on the secondary one, IgfsImpl#exists() returns *true*. Test 
org.apache.ignite.internal.processors.igfs.IgfsDualAbstractSelfTest#testDeletePathMissingPartially
 asserts this behavior directly.
It seems that throwing an exception in this case would violate the method 
contract, since the javadoc of org.apache.ignite.IgniteFileSystem#delete 
specifies throwing IgniteException only on error. 
So I would suggest staying with the behavior proposed in the patch -- it seems 
logical and consistent with the existing tests.
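The retry behavior discussed in point 1 can be sketched as follows (a minimal illustration with hypothetical names; the counter merely simulates a verifyIntegrity-style check that fails while concurrent updates race with the operation, and is not the real IgfsPathIds API):

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hedged sketch of the retry loop: when the integrity check of the resolved
 * path IDs fails (a concurrent file system update happened), the operation is
 * retried from scratch instead of proceeding or failing. The real code would
 * re-resolve the paths and re-verify them inside a transaction.
 */
class DeleteWithRetry {
    /** Returns the number of attempts it took until the integrity check held. */
    static int delete(AtomicInteger concurrentUpdatesLeft) {
        int attempts = 0;

        while (true) {
            attempts++;

            // Simulated verifyIntegrity(): fails while concurrent updates remain.
            boolean integrityOk = concurrentUpdatesLeft.getAndDecrement() <= 0;

            if (integrityOk) {
                // Integrity held: it is now safe to invoke delete on the
                // underlying file system and finish.
                return attempts;
            }

            // Integrity check failed: a concurrent update occurred -- retry.
        }
    }
}
```

The point of the pattern is that a failed integrity check is never treated as an error: it only signals that the resolved state is stale and the whole operation must be repeated.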


was (Author: iveselovskiy):
[~vozerov], 
1) The suggested fix does not prevent {{delete}} from being invoked on the 
underlying FS when the integrity check fails; we just retry and go to the next 
loop iteration in that case. This is exactly what this ticket requests.
2) The problem is that we rely on the natural assertion "a file does not exist 
after removal", and with the current implementation of IgfsImpl#exists() we can 
satisfy it only if the file is also deleted from the secondary file system upon 
removal. The reason is that if a file does not exist on the primary file system 
but exists on the secondary one, IgfsImpl#exists() returns *true*. Test 
org.apache.ignite.internal.processors.igfs.IgfsDualAbstractSelfTest#testDeletePathMissingPartially
 checks this behavior.
It seems that throwing an exception in this case would violate the method 
contract, since the javadoc of org.apache.ignite.IgniteFileSystem#delete 
specifies throwing IgniteException only on error. 
So I would suggest staying with the behavior proposed in the patch -- it seems 
logical and consistent with the existing tests.

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If the integrity check fails, it means that a concurrent file system update 
> occurred. We should always perform a retry in this case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-13 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905208#comment-15905208
 ] 

Ivan Veselovsky edited comment on IGNITE-3542 at 3/13/17 1:54 PM:
--

[~vozerov], 
1) The suggested fix does not prevent {{delete}} from being invoked on the 
underlying FS when the integrity check fails; we just retry and go to the next 
loop iteration in that case. This is exactly what this ticket requests.
2) The problem is that we rely on the natural assertion "a file does not exist 
after removal", and with the current implementation of IgfsImpl#exists() we can 
satisfy it only if the file is also deleted from the secondary file system upon 
removal. The reason is that if a file does not exist on the primary file system 
but exists on the secondary one, IgfsImpl#exists() returns *true*. Test 
org.apache.ignite.internal.processors.igfs.IgfsDualAbstractSelfTest#testDeletePathMissingPartially
 checks this behavior.
It seems that throwing an exception in this case would violate the method 
contract, since the javadoc of org.apache.ignite.IgniteFileSystem#delete 
specifies throwing IgniteException only on error. 
So I would suggest staying with the behavior proposed in the patch -- it seems 
logical and consistent with the existing tests.


was (Author: iveselovskiy):
[~vozerov], 
1) The suggested fix does not prevent {{delete}} from being invoked on the 
underlying FS when the integrity check fails; we just retry and go to the next 
loop iteration in that case. This is exactly what this ticket requests.
2) The problem is that we rely on the natural assertion "a file does not exist 
after removal", and with the current implementation of IgfsImpl#exists() we can 
satisfy it only if the file is also deleted from the secondary file system upon 
removal. The point is that if a file does not exist on the primary file system 
but exists on the secondary one, #exists() returns *true*.

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If the integrity check fails, it means that a concurrent file system update 
> occurred. We should always perform a retry in this case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-13 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905208#comment-15905208
 ] 

Ivan Veselovsky edited comment on IGNITE-3542 at 3/13/17 1:32 PM:
--

[~vozerov], 
1) The suggested fix does not prevent {{delete}} from being invoked on the 
underlying FS when the integrity check fails; we just retry and go to the next 
loop iteration in that case. This is exactly what this ticket requests.
2) The problem is that we rely on the natural assertion "a file does not exist 
after removal", and with the current implementation of IgfsImpl#exists() we can 
satisfy it only if the file is also deleted from the secondary file system upon 
removal. The point is that if a file does not exist on the primary file system 
but exists on the secondary one, #exists() returns *true*.





[jira] [Comment Edited] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-13 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905208#comment-15905208
 ] 

Ivan Veselovsky edited comment on IGNITE-3542 at 3/13/17 1:22 PM:
--

[~vozerov] , 
1) The suggested fix does not prevent {code}delete{code} from being invoked on 
the underlying Fs when the integrity check fails; we just retry and go to the 
next loop iteration in that case. This is exactly what this ticket requests.
2) The problem is that we rely on the natural assertion "the file does not 
exist after removal", and with the current implementation of IgfsImpl#exists() 
we can fulfill it only if the file is also deleted from the secondary file 
system upon removal. The issue is that if a file does not exist on the primary 
file system but exists on the secondary one, #exists() returns *true*.


was (Author: iveselovskiy):
[~vozerov] , 
1) The suggested fix does not prevent {code}delete{code} from being invoked on 
the underlying Fs when the integrity check fails; we just retry and go to the 
next loop iteration in that case. This is exactly what this ticket requests.
2) Agreed, I'm not sure what is best to do in the 
{code}!pathIds.allExists(){code} case -- deletion on the secondary Fs really 
looks strange, but that is existing functionality. Still, I guess we should not 
throw an Exception in this case, but rather return {code}new 
IgfsDeleteResult(false, null);{code}, shouldn't we? 



[jira] [Comment Edited] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-10 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905208#comment-15905208
 ] 

Ivan Veselovsky edited comment on IGNITE-3542 at 3/10/17 3:05 PM:
--

[~vozerov] , 
1) The suggested fix does not prevent {code}delete{code} from being invoked on 
the underlying Fs when the integrity check fails; we just retry and go to the 
next loop iteration in that case. This is exactly what this ticket requests.
2) Agreed, I'm not sure what is best to do in the 
{code}!pathIds.allExists(){code} case -- deletion on the secondary Fs really 
looks strange, but that is existing functionality. Still, I guess we should not 
throw an Exception in this case, but rather return {code}new 
IgfsDeleteResult(false, null);{code}, shouldn't we? 


was (Author: iveselovskiy):
[~vozerov] , 
1) The suggested fix does not prevent {code}delete{code} from being invoked on 
the underlying Fs when the integrity check fails; we just retry and go to the 
next loop iteration in that case. This is exactly what this ticket requests.
2) Agreed, I'm not sure what is best to do in the 
{code}!pathIds.allExists(){code} case -- deletion on the secondary Fs really 
looks strange, but that is existing functionality. But I guess we should not 
throw an Exception in this case, but rather return {code}new 
IgfsDeleteResult(false, null);{code}, shouldn't we? 



[jira] [Commented] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-03-10 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905208#comment-15905208
 ] 

Ivan Veselovsky commented on IGNITE-3542:
-

[~vozerov] , 
1) The suggested fix does not prevent {code}delete{code} from being invoked on 
the underlying Fs when the integrity check fails; we just retry and go to the 
next loop iteration in that case. This is exactly what this ticket requests.
2) Agreed, I'm not sure what is best to do in the 
{code}!pathIds.allExists(){code} case -- deletion on the secondary Fs really 
looks strange, but that is existing functionality. But I guess we should not 
throw an Exception in this case, but rather return {code}new 
IgfsDeleteResult(false, null);{code}, shouldn't we? 
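The retry behavior discussed in point 1 can be sketched as follows. This is a JDK-only illustration with hypothetical names, not the actual IgfsMetaManager code: the delete on the underlying Fs is only reached on an iteration whose integrity check passed; a failed check just triggers the next loop iteration.

```java
import java.util.function.BooleanSupplier;

/** Sketch of "integrity failure always leads to re-try". */
class DeleteRetrySketch {
    static boolean deleteWithRetry(BooleanSupplier verifyIntegrity, Runnable deleteOnFs, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (!verifyIntegrity.getAsBoolean())
                continue; // concurrent update detected: skip the delete, go to the next loop iteration
            deleteOnFs.run(); // only reached after a consistent snapshot was observed
            return true;
        }
        return false; // could not get a consistent snapshot within the attempt budget
    }
}
```

Under this shape, a concurrent file-system update can delay the delete but can never make it run against a stale view of the path ids.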



[jira] [Comment Edited] (IGNITE-4808) Add all Hadoop examples as Ignite unit tests with default multi-JVM execution mode

2017-03-10 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903553#comment-15903553
 ] 

Ivan Veselovsky edited comment on IGNITE-4808 at 3/10/17 1:51 PM:
--

Notes:
1) I had to copy the DistBbp sample, because it contains a bug that cannot be 
fixed otherwise. Also, the copying makes it possible to verify the sample results.
2) One problem revealed by the "Join" sample is addressed separately in 
IGNITE-4813.


was (Author: iveselovskiy):
Notes:
1) I had to copy the DistBbp sample, because it contains a bug that cannot be 
fixed otherwise. Also, the copying makes it possible to verify the sample results.
2) One problem revealed by the "Join" sample (TODO mark in method 
org.apache.ignite.internal.processors.hadoop.impl.HadoopGenericExampleTest.GenericHadoopExample#prepare
) likely needs to be debugged and fixed in the Ignite MR engine.

> Add all Hadoop examples as Ignite unit tests with default multi-JVM execution 
> mode
> --
>
> Key: IGNITE-4808
> URL: https://issues.apache.org/jira/browse/IGNITE-4808
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 2.0
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> Should have all Hadoop examples as Ignite unit tests with multi-JVM mode 
> being the default.





[jira] [Commented] (IGNITE-4813) Ignite map-reduce engine should set MRJobConfig.TASK_ATTEMPT_ID

2017-03-10 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904968#comment-15904968
 ] 

Ivan Veselovsky commented on IGNITE-4813:
-

Pull request: https://github.com/apache/ignite/pull/1610 

> Ignite map-reduce engine should set MRJobConfig.TASK_ATTEMPT_ID
> ---
>
> Key: IGNITE-4813
> URL: https://issues.apache.org/jira/browse/IGNITE-4813
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> Hadoop "join" example fails on Ignite with the error like this:
> {code}
>  Out: class org.apache.ignite.IgniteCheckedException: class 
> org.apache.ignite.IgniteCheckedException: null
> [14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2MapTask.run0(HadoopV2MapTask.java:102)
> [14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
> [14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
> [14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> java.lang.Thread.run(Thread.java:745)
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: Caused by: 
> java.lang.NullPointerException
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.<init>(TaskAttemptContextImpl.java:49)
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.hadoop.mapreduce.lib.join.Parser$WNode.createRecordReader(Parser.java:348)
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.hadoop.mapreduce.lib.join.Parser$CNode.createRecordReader(Parser.java:486)
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat.createRecordReader(CompositeInputFormat.java:143)
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2MapTask.run0(HadoopV2MapTask.java:69)
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: ... 12 
> more
> [14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: 
> {code}
> This is because 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Context sets the 
> job id and task id, but does not set task attempt id. In Hadoop this is done 
> in method org.apache.hadoop.mapred.Task#localizeConfiguration .





[jira] [Created] (IGNITE-4813) Ignite map-reduce engine should set MRJobConfig.TASK_ATTEMPT_ID

2017-03-10 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-4813:
---

 Summary: Ignite map-reduce engine should set 
MRJobConfig.TASK_ATTEMPT_ID
 Key: IGNITE-4813
 URL: https://issues.apache.org/jira/browse/IGNITE-4813
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Affects Versions: 1.8
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky
 Fix For: 2.0


Hadoop "join" example fails on Ignite with the error like this:

{code}
 Out: class org.apache.ignite.IgniteCheckedException: class 
org.apache.ignite.IgniteCheckedException: null
[14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2MapTask.run0(HadoopV2MapTask.java:102)
[14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
[14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
[14:27:29,636][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
[14:27:29,637][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
java.lang.Thread.run(Thread.java:745)
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: Caused by: 
java.lang.NullPointerException
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.<init>(TaskAttemptContextImpl.java:49)
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.hadoop.mapreduce.lib.join.Parser$WNode.createRecordReader(Parser.java:348)
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.hadoop.mapreduce.lib.join.Parser$CNode.createRecordReader(Parser.java:486)
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat.createRecordReader(CompositeInputFormat.java:143)
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   at 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2MapTask.run0(HadoopV2MapTask.java:69)
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out:   ... 12 more
[14:27:29,638][INFO ][Thread-3][jvm-a6fc1c46] PID-31907  Out: 
{code}

This is because 
org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Context sets the 
job id and task id, but does not set the task attempt id. In Hadoop this is done 
in the method org.apache.hadoop.mapred.Task#localizeConfiguration.
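As a sketch of the fix, the missing step is to place the task attempt id into the job configuration alongside the job id and task id. Here plain java.util.Properties stands in for Hadoop's Configuration; MRJobConfig.TASK_ATTEMPT_ID is the real key "mapreduce.task.attempt.id", while the method and sample id values are illustrative:

```java
import java.util.Properties;

/** Illustrates localizing the attempt id so consumers such as
 *  TaskAttemptContextImpl do not dereference a null attempt id. */
class AttemptIdSketch {
    static final String TASK_ATTEMPT_ID = "mapreduce.task.attempt.id"; // value of MRJobConfig.TASK_ATTEMPT_ID

    static void localizeConfiguration(Properties conf, String jobId, String taskId, String attemptId) {
        conf.setProperty("mapreduce.job.id", jobId);
        conf.setProperty("mapreduce.task.id", taskId);
        conf.setProperty(TASK_ATTEMPT_ID, attemptId); // the piece HadoopV2Context did not set
    }
}
```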





[jira] [Commented] (IGNITE-4808) Add all Hadoop examples as Ignite unit tests with default multi-JVM execution mode

2017-03-09 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903553#comment-15903553
 ] 

Ivan Veselovsky commented on IGNITE-4808:
-

Notes:
1) I had to copy the DistBbp sample, because it contains a bug that cannot be 
fixed otherwise. Also, the copying makes it possible to verify the sample results.
2) One problem revealed by the "Join" sample (TODO mark in method 
org.apache.ignite.internal.processors.hadoop.impl.HadoopGenericExampleTest.GenericHadoopExample#prepare
) likely needs to be debugged and fixed in the Ignite MR engine.



[jira] [Commented] (IGNITE-4808) Add all Hadoop examples as Ignite unit tests with default multi-JVM execution mode

2017-03-09 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903489#comment-15903489
 ] 

Ivan Veselovsky commented on IGNITE-4808:
-

Pull request: https://github.com/apache/ignite/pull/1589 
Builds: 
http://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests_IgniteHadoopMultiJvmExamples&branch_IgniteTests=pull/1589/head&tab=buildTypeStatusDiv
 



[jira] [Created] (IGNITE-4808) Add all Hadoop examples as Ignite unit tests with default multi-JVM execution mode

2017-03-09 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-4808:
---

 Summary: Add all Hadoop examples as Ignite unit tests with default 
multi-JVM execution mode
 Key: IGNITE-4808
 URL: https://issues.apache.org/jira/browse/IGNITE-4808
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Affects Versions: 2.0
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky


Should have all Hadoop examples as Ignite unit tests with multi-JVM mode being 
the default.






[jira] [Created] (IGNITE-4755) Print a warning when option 'ignite.job.shared.classloader' is on.

2017-02-27 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-4755:
---

 Summary: Print a warning when option 
'ignite.job.shared.classloader' is on.
 Key: IGNITE-4755
 URL: https://issues.apache.org/jira/browse/IGNITE-4755
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky








[jira] [Resolved] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-22 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky resolved IGNITE-4720.
-
Resolution: Won't Fix

The fix suggested by Vladimir O. is to leave the default option value 'true', 
but print a warning saying that it is potentially dangerous. The warning will 
be implemented in the 2.0 branch. 

> Sporadically fails for Hadoop
> -
>
> Key: IGNITE-4720
> URL: https://issues.apache.org/jira/browse/IGNITE-4720
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Irina Zaporozhtseva
>Assignee: Ivan Veselovsky
> Fix For: 1.9
>
>
> hadoop example aggregatewordcount under apache ignite hadoop edition grid 
> with 4 nodes for hadoop-2_6_4 and hadoop-2_7_2:
> aggregatewordcount returns 999712 instead of 100





[jira] [Commented] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-22 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15878214#comment-15878214
 ] 

Ivan Veselovsky commented on IGNITE-4720:
-

The actual reason for the failure is that the field 
org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase#aggregatorDescriptorList
holding the ValueAggregatorDescriptor-s is *static*, so when 
ignite.job.shared.classloader=true is used, this field is shared by several Map 
tasks, creating a data race.
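A JDK-only illustration of why a static field breaks under a shared classloader (the class below is a hypothetical stand-in, not the Hadoop original): the list is per-classloader, not per-task, so every map task initializing "its" descriptors actually appends to the same list.

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-in for ValueAggregatorJobBase: the descriptor list is static,
 *  i.e. shared by all tasks loaded through the same classloader. */
class SharedClassloaderSketch {
    static final List<String> aggregatorDescriptors = new ArrayList<>();

    /** Each task's setup appends; with a shared classloader the second
     *  task observes (and grows) the first task's state. */
    static void initTask(String descriptor) {
        aggregatorDescriptors.add(descriptor);
    }
}
```

With per-task classloaders each task would see its own freshly initialized list; with ignite.job.shared.classloader=true the tasks additionally race on the unsynchronized ArrayList when they run concurrently.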




[jira] [Commented] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-21 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876456#comment-15876456
 ] 

Ivan Veselovsky commented on IGNITE-4720:
-

Looks like the problem is caused by the following change: 
--
commit 30b869ddd32db637ee9ea8f13a115dd4bacc52fe
Author: devozerov 
Date:   Wed Dec 14 14:35:29 2016 +0300
   IGNITE-4426: Hadoop: tasks can share the same classloader. This closes #1344.
--

The following property allows one to *work around* the problem:  
-Dignite.job.shared.classloader=false  .



[jira] [Commented] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-20 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15874981#comment-15874981
 ] 

Ivan Veselovsky commented on IGNITE-4720:
-

1) Localized the problem to a single test (aggregatewordcount).
2) Proved that igfs:// does not play any role there: the test fails in the 
same way even with the file:// file system.



[jira] [Comment Edited] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-17 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15872091#comment-15872091
 ] 

Ivan Veselovsky edited comment on IGNITE-4720 at 2/17/17 6:37 PM:
--

Node logs show that some nodes failed and were excluded from the topology 
during the test. This happens at nearly the same time as the observed test 
failures:
{code} 

 [16:19:00,784][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] Connect 
timed out (consider increasing 'failureDetectionTimeout' configuration 
property) [addr=/127.0.0.1:47103, failureDetectionTimeout=10000]
130 [16:19:00,788][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] 
Connect timed out (consider increasing 'failureDetectionTimeout' configuration 
property) [addr=/172.25.2.17:47103, failureDetectionTimeout=10000]
131 [16:19:00,788][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] Failed 
to connect to a remote node (make sure that destination node is alive and 
operating system firewall is disabled on local and remote hosts) 
[addrs=[/127.0.0.1:47103, /172.25.2.17:47103]]
132 [16:19:00,789][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] 
TcpCommunicationSpi failed to establish connection to node, node will be 
dropped from cluster [rmtNode=TcpDiscoveryNode [id=ca2c554d-6d48-4a6
1-abe5-d1d188cc3f53, addrs=[127.0.0.1, 172.25.2.17], 
sockAddrs=[/172.25.2.17:47503, /127.0.0.1:47503], discPort=47503, order=4, 
intOrder=4, lastExchangeTime=1487337395028, loc=false, 
ver=1.8.3#20170217-sha1:92493562, isClient=false], err=class 
o.a.i.IgniteCheckedException: Failed to connect to node (is node still alive?). 
Make sure that each ComputeTask and cache Transaction has a timeout set in 
order to prevent parties from waiting forever in case of network issues 
[nodeId=ca2c554d-6d48-4a61-abe5-d1d188cc3f53, addrs=[/127.0.0.1:47103, 
/172.25.2.17:47103]], connectErrs=[class o.a.i.IgniteCheckedException: Failed 
to connect to address: /127.0.0.1:47103, class 
o.a.i.IgniteCheckedException: Failed to connect to address: /172.25.2.17:47103]]
133 [16:19:00,795][WARN ][disco-event-worker-#29%null%][GridDiscoveryManager] 
Node FAILED: TcpDiscoveryNode [id=ca2c554d-6d48-4a61-abe5-d1d188cc3f53, 
addrs=[127.0.0.1, 172.25.2.17], sockAddrs=[/172.25.2.17:47503, /
127.0.0.1:47503], discPort=47503, order=4, intOrder=4, 
lastExchangeTime=1487337395028, loc=false, ver=1.8.3#20170217-sha1:92493562, 
isClient=false]
134 [16:19:00,796][INFO ][disco-event-worker-#29%null%][GridDiscoveryManager] 
Topology snapshot [ver=5, servers=3, clients=0, CPUs=4, heap=4.4GB]

{code}

From the logs and configs it appears that igfs:// is used as the default file 
system, so we may need to run the same tests with e.g. the file:// file system 
to exclude IGFS.
Similar failures related to timeouts were observed in Ignite clusters under 
high load when running Map-Reduce jobs. Special configuration tuning (increased 
timeouts, etc.) was used to overcome the problem.

Looks like in the QA tests several map-reduce jobs run on the same cluster 
consecutively, so a previous job may affect a subsequent one. 



[jira] [Commented] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-17 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15872091#comment-15872091
 ] 

Ivan Veselovsky commented on IGNITE-4720:
-

Node logs show that some nodes failed and were excluded from the topology 
during the test. This happens at nearly the same time as the observed test 
failures:
{code} 

 [16:19:00,784][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] Connect 
timed out (consider increasing 'failureDetectionTimeout' configuration 
property) [addr=/127.0.0.1:47103, failureDetectionTimeout=10000]
130 [16:19:00,788][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] 
Connect timed out (consider increasing 'failureDetectionTimeout' configuration 
property) [addr=/172.25.2.17:47103, failureDetectionTimeout=10000]
131 [16:19:00,788][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] Failed 
to connect to a remote node (make sure that destination node is alive and 
operating system firewall is disabled on local and remote hosts) 
[addrs=[/127.0.0.1:47103, /172.25.2.17:47103]]
132 [16:19:00,789][WARN ][tcp-comm-worker-#1%null%][TcpCommunicationSpi] 
TcpCommunicationSpi failed to establish connection to node, node will be 
dropped from cluster [rmtNode=TcpDiscoveryNode [id=ca2c554d-6d48-4a6
1-abe5-d1d188cc3f53, addrs=[127.0.0.1, 172.25.2.17], 
sockAddrs=[/172.25.2.17:47503, /127.0.0.1:47503], discPort=47503, order=4, 
intOrder=4, lastExchangeTime=1487337395028, loc=false, 
ver=1.8.3#20170217-sha1:92493562, isClient=false], err=class 
o.a.i.IgniteCheckedException: Failed to connect to node (is node still alive?). 
Make sure that each ComputeTask and cache Transaction has a timeout set in 
order to prevent parties from waiting forever in case of network issues 
[nodeId=ca2c554d-6d48-4a61-abe5-d1d188cc3f53, addrs=[/127.0.0.1:47103, 
/172.25.2.17:47103]], connectErrs=[class o.a.i.IgniteCheckedException: Failed 
to connect to address: /127.0.0.1:47103, class 
o.a.i.IgniteCheckedException: Failed to connect to address: /172.25.2.17:47103]]
133 [16:19:00,795][WARN ][disco-event-worker-#29%null%][GridDiscoveryManager] 
Node FAILED: TcpDiscoveryNode [id=ca2c554d-6d48-4a61-abe5-d1d188cc3f53, 
addrs=[127.0.0.1, 172.25.2.17], sockAddrs=[/172.25.2.17:47503, /
127.0.0.1:47503], discPort=47503, order=4, intOrder=4, 
lastExchangeTime=1487337395028, loc=false, ver=1.8.3#20170217-sha1:92493562, 
isClient=false]
134 [16:19:00,796][INFO ][disco-event-worker-#29%null%][GridDiscoveryManager] 
Topology snapshot [ver=5, servers=3, clients=0, CPUs=4, heap=4.4GB]

{code}

From the logs and configs it appears that igfs:// is used as the default file 
system, so we may need to run the same tests with, e.g., the file:// file system 
to exclude IGFS.
Similar failures related to timeouts were observed in Ignite clusters under 
high load when running Map-Reduce jobs. Special configuration tuning (increased 
timeouts, etc.) was used to overcome the problem.

> Sporadically fails for Hadoop
> -
>
> Key: IGNITE-4720
> URL: https://issues.apache.org/jira/browse/IGNITE-4720
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Irina Zaporozhtseva
>Assignee: Ivan Veselovsky
> Fix For: 1.9
>
>
> hadoop example aggregatewordcount under apache ignite hadoop edition grid 
> with 4 nodes for hadoop-2_6_4 and hadoop-2_7_2:
> aggregatewordcount returns 999712 instead of 100



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-4720) Sporadically fails for Hadoop

2017-02-17 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-4720:
---

Assignee: Ivan Veselovsky

> Sporadically fails for Hadoop
> -
>
> Key: IGNITE-4720
> URL: https://issues.apache.org/jira/browse/IGNITE-4720
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Irina Zaporozhtseva
>Assignee: Ivan Veselovsky
> Fix For: 1.9
>
>
> hadoop example aggregatewordcount under apache ignite hadoop edition grid 
> with 4 nodes for hadoop-2_6_4 and hadoop-2_7_2:
> aggregatewordcount returns 999712 instead of 100





[jira] [Comment Edited] (IGNITE-4541) Create test for Hadoop running in external JVM

2017-02-03 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851566#comment-15851566
 ] 

Ivan Veselovsky edited comment on IGNITE-4541 at 2/3/17 6:30 PM:
-

Created 2 versions of this fix:

"Full" version: the whole test suite runs in multi-JVM mode; branch 
ignite-4541, PR 1490. The build passed on TC, but ~10 tests are disabled -- 2 
problems are still pending.
Tests: 
http://ci.ignite.apache.org/viewLog.html?buildId=443000&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteHadoop

"Light" version: only the TeraSort test runs in multi-JVM mode; branch 
ignite-4541b, PR 1497. Waiting for TC results, but tests pass locally. This is 
ready for review if tests are ok on TC. 
Tests: 
http://ci.ignite.apache.org/viewLog.html?buildId=443470&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteHadoop
 


was (Author: iveselovskiy):
Created 2 versions of this fix:
"Full" version: the whole test suite runs in multi-JVM mode; branch 
ignite-4541, PR 1490. The build passed on TC, but ~10 tests are disabled -- 2 
problems are still pending.
"Light" version: only the TeraSort test runs in multi-JVM mode; branch 
ignite-4541b, PR 1497. Waiting for TC results, but tests pass locally. This is 
ready for review if tests are ok on TC. 

> Create test for Hadoop running in external JVM
> --
>
> Key: IGNITE-4541
> URL: https://issues.apache.org/jira/browse/IGNITE-4541
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> *Problem*
> Currently we run all Hadoop tests in synthetic "single JVM" mode. This way we 
> miss lots of potential issues. We need to be able to run them in real 
> distributed mode, but on a single machine.
> *Simplified solution*
> 1) Start N external nodes
> 2) Submit a job to these nodes and validate the result
> We can start with simple Teragen->Terasort->Teravalidate scenario. 
> Please look through {{Process}} and {{ProcessBuilder}} classes usages - most 
> probably we already have all necessary infrastructure to start nodes in 
> external JVM.
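For illustration, the two steps above might be sketched as follows. This is a hedged sketch: `ExternalNodeLauncher`, the main class, and the config path are hypothetical placeholders, not Ignite's actual test infrastructure.

```java
import java.util.ArrayList;
import java.util.List;

public class ExternalNodeLauncher {
    /** Builds the java command line used to start one node in a separate JVM. */
    public static List<String> buildCommand(String javaBin, String classpath,
        String mainClass, String cfgPath) {
        List<String> cmd = new ArrayList<>();

        cmd.add(javaBin);
        cmd.add("-cp");
        cmd.add(classpath);
        cmd.add(mainClass);
        cmd.add(cfgPath);

        return cmd;
    }

    /** Starts the node process, forwarding its output to this JVM's console. */
    public static Process startNode(List<String> cmd) throws Exception {
        return new ProcessBuilder(cmd).inheritIO().start();
    }
}
```

Starting N nodes is then a loop over `startNode`, after which the test submits a job and validates the result.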





[jira] [Commented] (IGNITE-4541) Create test for Hadoop running in external JVM

2017-02-03 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851566#comment-15851566
 ] 

Ivan Veselovsky commented on IGNITE-4541:
-

Created 2 versions of this fix:
"Full" version: the whole test suite runs in multi-JVM mode; branch 
ignite-4541, PR 1490. The build passed on TC, but ~10 tests are disabled -- 2 
problems are still pending.
"Light" version: only the TeraSort test runs in multi-JVM mode; branch 
ignite-4541b, PR 1497. Waiting for TC results, but tests pass locally. This is 
ready for review if tests are ok on TC. 

> Create test for Hadoop running in external JVM
> --
>
> Key: IGNITE-4541
> URL: https://issues.apache.org/jira/browse/IGNITE-4541
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> *Problem*
> Currently we run all Hadoop tests in synthetic "single JVM" mode. This way we 
> miss lots of potential issues. We need to be able to run them in real 
> distributed mode, but on a single machine.
> *Simplified solution*
> 1) Start N external nodes
> 2) Submit a job to these nodes and validate the result
> We can start with simple Teragen->Terasort->Teravalidate scenario. 
> Please look through {{Process}} and {{ProcessBuilder}} classes usages - most 
> probably we already have all necessary infrastructure to start nodes in 
> external JVM.





[jira] [Commented] (IGNITE-4541) Create test for Hadoop running in external JVM

2017-02-02 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850316#comment-15850316
 ] 

Ivan Veselovsky commented on IGNITE-4541:
-

https://github.com/apache/ignite/pull/1490

> Create test for Hadoop running in external JVM
> --
>
> Key: IGNITE-4541
> URL: https://issues.apache.org/jira/browse/IGNITE-4541
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> *Problem*
> Currently we run all Hadoop tests in synthetic "single JVM" mode. This way we 
> miss lots of potential issues. We need to be able to run them in real 
> distributed mode, but on a single machine.
> *Simplified solution*
> 1) Start N external nodes
> 2) Submit a job to these nodes and validate the result
> We can start with simple Teragen->Terasort->Teravalidate scenario. 
> Please look through {{Process}} and {{ProcessBuilder}} classes usages - most 
> probably we already have all necessary infrastructure to start nodes in 
> external JVM.





[jira] [Commented] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-01-27 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15842600#comment-15842600
 ] 

Ivan Veselovsky commented on IGNITE-3542:
-

Vladimir, these failures happen because the build ran on a Windows agent. 
Similar failures can be observed on the pure "ignite-2.0" branch: 
http://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests_IgniteGgfs&tab=buildTypeStatusDiv&branch_IgniteTests=ignite-2.0
We created https://issues.apache.org/jira/browse/IGNITE-4623 to address that -- 
these failures are not related to the changes discussed in this ticket.


> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 1.9
>
>
> If integrity check failed, it means that some concurrent file system update 
> occurred. We should always perform re-try in this case.
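The always-retry behavior described above can be sketched roughly as follows; the class and method names here are hypothetical, not Ignite's actual API.

```java
import java.util.function.Supplier;

public class IntegrityRetry {
    /**
     * Retries op while the integrity check fails (i.e. a concurrent
     * file system update was detected), up to maxAttempts attempts.
     */
    public static <T> T retryOnConcurrentUpdate(Supplier<Boolean> verifyIntegrity,
        Supplier<T> op, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (verifyIntegrity.get())
                return op.get();
            // Integrity check failed: a concurrent update occurred; retry with fresh state.
        }

        throw new IllegalStateException("Failed after " + maxAttempts + " attempts.");
    }
}
```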





[jira] [Updated] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-01-27 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-3542:

Assignee: Vladimir Ozerov  (was: Ivan Veselovsky)

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
>Priority: Minor
> Fix For: 1.9
>
>
> If integrity check failed, it means that some concurrent file system update 
> occurred. We should always perform re-try in this case.





[jira] [Created] (IGNITE-4623) IGFS test suite should run successfully on Windows agents

2017-01-27 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-4623:
---

 Summary: IGFS test suite should run successfully on Windows agents
 Key: IGNITE-4623
 URL: https://issues.apache.org/jira/browse/IGNITE-4623
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Veselovsky
Assignee: Taras Ledkov


Currently on Windows agents (*_*_*_9090) there are ~560 failures related to 
differences in local file system behavior on Windows.
E.g. see 1.8.3: 
http://ci.ignite.apache.org/viewLog.html?buildId=435288&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteGgfs
2.0: 
http://ci.ignite.apache.org/viewLog.html?buildId=435289&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteGgfs

Most of the failures are caused by an NPE in 
org.apache.ignite.igfs.secondary.local.LocalIgfsSecondaryFileSystem#info (line 
370).





[jira] [Commented] (IGNITE-4541) Create test for Hadoop running in external JVM

2017-01-13 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821471#comment-15821471
 ] 

Ivan Veselovsky commented on IGNITE-4541:
-

Tests (at least 
org.apache.ignite.internal.processors.hadoop.impl.HadoopTeraSortTest) work 
using the {code} #isMultiJvm() { return true; } {code} override after very small 
corrections. 
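A minimal sketch of that override. Only the isMultiJvm() flag itself comes from the comment; the base-class and test names below are illustrative placeholders, not Ignite's real test classes.

```java
abstract class AbstractGridTest {
    /** Whether grid nodes should be started in separate JVM processes. */
    protected boolean isMultiJvm() {
        return false; // Default: single "synthetic" JVM mode.
    }
}

class TeraSortMultiJvmTest extends AbstractGridTest {
    /** {@inheritDoc} */
    @Override protected boolean isMultiJvm() {
        return true; // Run this test's nodes in external JVMs.
    }
}
```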

> Create test for Hadoop running in external JVM
> --
>
> Key: IGNITE-4541
> URL: https://issues.apache.org/jira/browse/IGNITE-4541
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> *Problem*
> Currently we run all Hadoop tests in synthetic "single JVM" mode. This way we 
> miss lots of potential issues. We need to be able to run them in real 
> distributed mode, but on a single machine.
> *Simplified solution*
> 1) Start N external nodes
> 2) Submit a job to these nodes and validate the result
> We can start with simple Teragen->Terasort->Teravalidate scenario. 
> Please look through {{Process}} and {{ProcessBuilder}} classes usages - most 
> probably we already have all necessary infrastructure to start nodes in 
> external JVM.





[jira] [Commented] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-01-11 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15818642#comment-15818642
 ] 

Ivan Veselovsky commented on IGNITE-3542:
-

Added retry functionality to the move and delete operations, as requested. 
However, I think this may slow down the operations because it creates more 
opportunities for starvation.  

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If integrity check failed, it means that some concurrent file system update 
> occurred. We should always perform re-try in this case.





[jira] [Commented] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-01-11 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15818623#comment-15818623
 ] 

Ivan Veselovsky commented on IGNITE-3542:
-

https://github.com/apache/ignite/pull/1421

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If integrity check failed, it means that some concurrent file system update 
> occurred. We should always perform re-try in this case.





[jira] [Commented] (IGNITE-3543) IGFS: Merge isRetryForSecondary() and verifyIntegrity() methods.

2017-01-11 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15818034#comment-15818034
 ] 

Ivan Veselovsky commented on IGNITE-3543:
-

https://github.com/apache/ignite/pull/1420

> IGFS: Merge isRetryForSecondary() and verifyIntegrity() methods.
> 
>
> Key: IGNITE-3543
> URL: https://issues.apache.org/jira/browse/IGNITE-3543
> Project: Ignite
>  Issue Type: Task
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> There are two methods with very similar semantics:
> 1) {{IgfsPathIds.verifyIntegrity}}
> 2) {{IgfsMetaManager.isRetryForSecondary}}
> The latter method ensures that if the path is incomplete, then the last existing 
> item does not have a reference to a child with the expected name but an unexpected ID. 
> Semantically this situation means that a concurrent update occurred. 
> Instead of having two identical methods, we should merge these checks into a 
> single method {{IgfsPathIds.verifyIntegrity}}.





[jira] [Commented] (IGNITE-4503) HadoopDirectDataInput must have boundary checks

2017-01-10 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814882#comment-15814882
 ] 

Ivan Veselovsky commented on IGNITE-4503:
-

https://github.com/apache/ignite/pull/1416

> HadoopDirectDataInput must have boundary checks
> ---
>
> Key: IGNITE-4503
> URL: https://issues.apache.org/jira/browse/IGNITE-4503
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> Otherwise we may end up with JVM crash in case of invalid implementation of 
> key/value deserialization routine.
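A hypothetical sketch of the kind of boundary check the ticket asks for: every read validates the remaining bytes first, so a broken key/value deserializer fails with an exception instead of reading past the buffer (which, with raw off-heap memory, could crash the JVM). The class and method names are illustrative, not Ignite's actual API.

```java
public class BoundedDataInput {
    private final byte[] buf;
    private int pos;

    public BoundedDataInput(byte[] buf) {
        this.buf = buf;
    }

    /** Throws if fewer than cnt bytes remain. */
    private void ensure(int cnt) {
        if (pos + cnt > buf.length)
            throw new IndexOutOfBoundsException("Attempt to read past the end " +
                "of buffer [pos=" + pos + ", cnt=" + cnt + ", size=" + buf.length + ']');
    }

    public byte readByte() {
        ensure(1);

        return buf[pos++];
    }

    public int readInt() {
        ensure(4);

        int res = ((buf[pos] & 0xFF) << 24) | ((buf[pos + 1] & 0xFF) << 16)
            | ((buf[pos + 2] & 0xFF) << 8) | (buf[pos + 3] & 0xFF);

        pos += 4;

        return res;
    }
}
```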





[jira] [Comment Edited] (IGNITE-4219) Hive job submission failed with exception ”java.io.UTFDataFormatException“

2017-01-09 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15811513#comment-15811513
 ] 

Ivan Veselovsky edited comment on IGNITE-4219 at 1/9/17 11:23 AM:
--

java.io.DataOutput#writeUTF exactly specifies this behavior in javadoc: 
{code}
 *If this number is larger than
 * 65535, then a UTFDataFormatException
 * is thrown. Otherwise, this length is written
 * to the output stream in exactly the manner
 * of the writeShort method;
 * after this, the one-, two-, or three-byte
 * representation of each character in the
 * string s is written.
{code}
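
A minimal demonstration of that limit. Note that the 65535-byte bound applies to the modified UTF-8 encoding of the string, not its character count; the helper class below is illustrative.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UTFDataFormatException;
import java.util.Arrays;

public class WriteUtfLimit {
    /** Tries to writeUTF a string of len ASCII chars; returns "ok" or the failure name. */
    public static String tryWriteUtf(int len) {
        char[] chars = new char[len];
        Arrays.fill(chars, 'a'); // Each 'a' is 1 byte in modified UTF-8.

        try (DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream())) {
            out.writeUTF(new String(chars));

            return "ok";
        }
        catch (UTFDataFormatException e) {
            return "UTFDataFormatException";
        }
        catch (IOException e) {
            return e.toString();
        }
    }
}
```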


was (Author: iveselovskiy):
ObjectOutputStream#writeUTF() exactly specifies this behavior in javadoc: 
{code}
 *If this number is larger than
 * 65535, then a UTFDataFormatException
 * is thrown. Otherwise, this length is written
 * to the output stream in exactly the manner
 * of the writeShort method;
 * after this, the one-, two-, or three-byte
 * representation of each character in the
 * string s is written.
{code}

> Hive job submission failed with exception ”java.io.UTFDataFormatException“
> --
>
> Key: IGNITE-4219
> URL: https://issues.apache.org/jira/browse/IGNITE-4219
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Reporter: Andrew Mashenkov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> Long property passing to Hadoop causes an exception:
> {code}
> Caused by: java.io.UTFDataFormatException 
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:2144)
>  
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:1987)
>  
>at java.io.ObjectOutputStream.writeUTF(ObjectOutputStream.java:865) 
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeUTFStringNullable(IgniteUtils.java:5029)
>  
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeStringMap(IgniteUtils.java:4989)
>  
>at 
> org.apache.ignite.internal.processors.hadoop.HadoopDefaultJobInfo.writeExternal(HadoopDefaultJobInfo.java:137)
> {code}





[jira] [Commented] (IGNITE-4219) Hive job submission failed with exception ”java.io.UTFDataFormatException“

2017-01-09 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15811513#comment-15811513
 ] 

Ivan Veselovsky commented on IGNITE-4219:
-

ObjectOutputStream#writeUTF() exactly specifies this behavior in javadoc: 
{code}
 *If this number is larger than
 * 65535, then a UTFDataFormatException
 * is thrown. Otherwise, this length is written
 * to the output stream in exactly the manner
 * of the writeShort method;
 * after this, the one-, two-, or three-byte
 * representation of each character in the
 * string s is written.
{code}

> Hive job submission failed with exception ”java.io.UTFDataFormatException“
> --
>
> Key: IGNITE-4219
> URL: https://issues.apache.org/jira/browse/IGNITE-4219
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Reporter: Andrew Mashenkov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> Long property passing to Hadoop causes an exception:
> {code}
> Caused by: java.io.UTFDataFormatException 
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:2144)
>  
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:1987)
>  
>at java.io.ObjectOutputStream.writeUTF(ObjectOutputStream.java:865) 
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeUTFStringNullable(IgniteUtils.java:5029)
>  
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeStringMap(IgniteUtils.java:4989)
>  
>at 
> org.apache.ignite.internal.processors.hadoop.HadoopDefaultJobInfo.writeExternal(HadoopDefaultJobInfo.java:137)
> {code}





[jira] [Updated] (IGNITE-4219) Hive job submission failed with exception ”java.io.UTFDataFormatException“

2017-01-09 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-4219:

Summary: Hive job submission failed with exception 
”java.io.UTFDataFormatException“  (was: Hive job submsiion failed with 
exception ”java.io.UTFDataFormatException“)

> Hive job submission failed with exception ”java.io.UTFDataFormatException“
> --
>
> Key: IGNITE-4219
> URL: https://issues.apache.org/jira/browse/IGNITE-4219
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Reporter: Andrew Mashenkov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> Long property passing to Hadoop causes an exception:
> {code}
> Caused by: java.io.UTFDataFormatException 
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:2144)
>  
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:1987)
>  
>at java.io.ObjectOutputStream.writeUTF(ObjectOutputStream.java:865) 
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeUTFStringNullable(IgniteUtils.java:5029)
>  
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeStringMap(IgniteUtils.java:4989)
>  
>at 
> org.apache.ignite.internal.processors.hadoop.HadoopDefaultJobInfo.writeExternal(HadoopDefaultJobInfo.java:137)
> {code}





[jira] [Assigned] (IGNITE-3543) IGFS: Merge isRetryForSecondary() and verifyIntegrity() methods.

2017-01-09 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-3543:
---

Assignee: Ivan Veselovsky

> IGFS: Merge isRetryForSecondary() and verifyIntegrity() methods.
> 
>
> Key: IGNITE-3543
> URL: https://issues.apache.org/jira/browse/IGNITE-3543
> Project: Ignite
>  Issue Type: Task
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> There are two methods with very similar semantics:
> 1) {{IgfsPathIds.verifyIntegrity}}
> 2) {{IgfsMetaManager.isRetryForSecondary}}
> The latter method ensures that if the path is incomplete, then the last existing 
> item does not have a reference to a child with the expected name but an unexpected ID. 
> Semantically this situation means that a concurrent update occurred. 
> Instead of having two identical methods, we should merge these checks into a 
> single method {{IgfsPathIds.verifyIntegrity}}.





[jira] [Assigned] (IGNITE-4219) Hive job submsiion failed with exception ”java.io.UTFDataFormatException“

2017-01-09 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-4219:
---

Assignee: Ivan Veselovsky

> Hive job submsiion failed with exception ”java.io.UTFDataFormatException“
> -
>
> Key: IGNITE-4219
> URL: https://issues.apache.org/jira/browse/IGNITE-4219
> Project: Ignite
>  Issue Type: Bug
>  Components: hadoop
>Reporter: Andrew Mashenkov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> Long property passing to Hadoop causes an exception:
> {code}
> Caused by: java.io.UTFDataFormatException 
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:2144)
>  
>at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeUTF(ObjectOutputStream.java:1987)
>  
>at java.io.ObjectOutputStream.writeUTF(ObjectOutputStream.java:865) 
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeUTFStringNullable(IgniteUtils.java:5029)
>  
>at 
> org.apache.ignite.internal.util.IgniteUtils.writeStringMap(IgniteUtils.java:4989)
>  
>at 
> org.apache.ignite.internal.processors.hadoop.HadoopDefaultJobInfo.writeExternal(HadoopDefaultJobInfo.java:137)
> {code}





[jira] [Assigned] (IGNITE-3542) IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.

2017-01-09 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-3542:
---

Assignee: Ivan Veselovsky

> IGFS: Ensure IgfsPathIds.verifyIntegrity() always lead to re-try.
> -
>
> Key: IGNITE-3542
> URL: https://issues.apache.org/jira/browse/IGNITE-3542
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
>Priority: Minor
> Fix For: 2.0
>
>
> If integrity check failed, it means that some concurrent file system update 
> occurred. We should always perform re-try in this case.





[jira] [Assigned] (IGNITE-4503) HadoopDirectDataInput must have boundary checks

2017-01-09 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky reassigned IGNITE-4503:
---

Assignee: Ivan Veselovsky

> HadoopDirectDataInput must have boundary checks
> ---
>
> Key: IGNITE-4503
> URL: https://issues.apache.org/jira/browse/IGNITE-4503
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Vladimir Ozerov
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> Otherwise we may end up with JVM crash in case of invalid implementation of 
> key/value deserialization routine.





[jira] [Comment Edited] (IGNITE-481) Add tests for Metrics to the file system tests infrastructure

2016-12-30 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15788256#comment-15788256
 ] 

Ivan Veselovsky edited comment on IGNITE-481 at 12/30/16 8:11 PM:
--

Fix corrected according to review results. 
Tests passed: 
http://172.25.1.150:8111/viewLog.html?buildTypeId=IgniteTests_IgniteGgfs&buildId=408878


was (Author: iveselovskiy):
Fix corrected according to review results. Tests passed.

> Add tests for Metrics to the file system tests infrastructure
> -
>
> Key: IGNITE-481
> URL: https://issues.apache.org/jira/browse/IGNITE-481
> Project: Ignite
>  Issue Type: Task
>  Components: IGFS
>Affects Versions: 1.6
>Reporter: Ivan Veselovsky
>Assignee: Vladimir Ozerov
>Priority: Minor
> Fix For: 2.0
>
>
> Need to add tests for org.apache.ignite.igfs.IgfsMetrics to the filesystem 
> tests.
> See org.apache.ignite.IgniteFileSystem#metrics , 
> org.apache.ignite.IgniteFileSystem#resetMetrics .





[jira] [Comment Edited] (IGNITE-4514) Test HadoopCommandLineTest.testHiveCommandLine fails

2016-12-30 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15788115#comment-15788115
 ] 

Ivan Veselovsky edited comment on IGNITE-4514 at 12/30/16 6:50 PM:
---

https://github.com/apache/ignite/pull/1397 
Tests passed: 
http://172.25.1.150:8111/viewLog.html?buildTypeId=IgniteTests_IgniteHadoop&buildId=408872&branch_IgniteTests=pull/1397/head
 


was (Author: iveselovskiy):
https://github.com/apache/ignite/pull/1397 

> Test HadoopCommandLineTest.testHiveCommandLine fails
> 
>
> Key: IGNITE-4514
> URL: https://issues.apache.org/jira/browse/IGNITE-4514
> Project: Ignite
>  Issue Type: Test
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Ivan Veselovsky
>Assignee: Vladimir Ozerov
> Fix For: 2.0
>
>
> test HadoopCommandLineTest.testHiveCommandLine reproducibly fails due to 
> failed assertion: 
> {code} 
> java.lang.AssertionError
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.readExternalSplit(HadoopV2TaskContext.java:505)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.getNativeSplit(HadoopV2TaskContext.java:483)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v1.HadoopV1MapTask.run(HadoopV1MapTask.java:75)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:257)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:201)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:569)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code} 
> The problem is that 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext is 
> loaded by the job class loader if the class loader is "shared" (see 
> org.apache.ignite.internal.processors.hadoop.HadoopJobProperty#JOB_SHARED_CLASSLOADER),
>  which is true by default. But the assertion in the test expects it to 
> be the task class loader, which can be true, but is false by default. 





[jira] [Commented] (IGNITE-4514) Test HadoopCommandLineTest.testHiveCommandLine fails

2016-12-30 Thread Ivan Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15788115#comment-15788115
 ] 

Ivan Veselovsky commented on IGNITE-4514:
-

https://github.com/apache/ignite/pull/1397 

> Test HadoopCommandLineTest.testHiveCommandLine fails
> 
>
> Key: IGNITE-4514
> URL: https://issues.apache.org/jira/browse/IGNITE-4514
> Project: Ignite
>  Issue Type: Test
>  Components: hadoop
>Affects Versions: 1.8
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
> Fix For: 2.0
>
>
> test HadoopCommandLineTest.testHiveCommandLine reproducibly fails due to 
> failed assertion: 
> {code} 
> java.lang.AssertionError
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.readExternalSplit(HadoopV2TaskContext.java:505)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.getNativeSplit(HadoopV2TaskContext.java:483)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v1.HadoopV1MapTask.run(HadoopV1MapTask.java:75)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:257)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:201)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
> at 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:569)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
> at 
> org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code} 
> The problem is that 
> org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext is 
> loaded by the job class loader if the class loader is "shared" (see 
> org.apache.ignite.internal.processors.hadoop.HadoopJobProperty#JOB_SHARED_CLASSLOADER),
>  which is true by default. But the assertion in the test expects it to 
> be the task class loader, which can be true, but is false by default. 




