[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2016-02-12 Thread Mithun Radhakrishnan (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145124#comment-15145124 ]

Mithun Radhakrishnan commented on HIVE-11470:
---------------------------------------------

Thanks for working on this, [~sushanth]!

> NPE in DynamicPartFileRecordWriterContainer on null part-keys.
> --------------------------------------------------------------
>
>                 Key: HIVE-11470
>                 URL: https://issues.apache.org/jira/browse/HIVE-11470
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog
>    Affects Versions: 1.2.0
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>             Fix For: 2.0.0, 1.2.2, 2.1.0
>
>         Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch
>
>
> When partitioning data using {{HCatStorer}}, one sees the following NPE if
> the dynamic-partition key has a null value:
> {noformat}
> 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.lang.NullPointerException
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256)
>     at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
>     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
>     at org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141)
>     at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110)
>     at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54)
>     at org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309)
>     at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
>     at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
>     at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>     at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471)
> ... 11 more
> {noformat}
> The reason is that {{DynamicPartitionFileRecordWriterContainer}} makes an
> unfortunate assumption (that part-key values are never null) when fetching a
> local file-writer instance:
> {code:title=DynamicPartitionFileRecordWriterContainer.java}
>   @Override
>   protected LocalFileWriter getLocalFileWriter(HCatRecord value)
>       throws IOException, HCatException {
>     OutputJobInfo localJobInfo = null;
>     // Calculate which writer to use from the remaining values - this needs to
>     // be done before we delete cols.
>     List<String> dynamicPartValues = new ArrayList<String>();
>     for (Integer colToAppend : dynamicPartCols) {
>       dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES!
>     }
>     ...
>   }
> {code}
> The code must check each part-key value for null and substitute
> {{__HIVE_DEFAULT_PARTITION__}}, or an equivalent default value; a sketch
> follows below.
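
For illustration, here is a minimal null-safe version of the loop above. This is a sketch of the guard the description asks for, not the committed HIVE-11470 patch: the class and method names are invented for the example, and the default partition name is hard-coded, whereas Hive normally takes it from {{hive.exec.default.partition.name}}.

{code:title=NullSafePartValues.java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NullSafePartValues {
  // Hard-coded for the sketch; Hive's writers would normally read this
  // value from hive.exec.default.partition.name in the job configuration.
  static final String DEFAULT_PARTITION_NAME = "__HIVE_DEFAULT_PARTITION__";

  // Converts each dynamic-partition column of a row to its string form,
  // substituting the default partition name when the column value is null.
  static List<String> dynamicPartValues(List<Object> row, List<Integer> dynamicPartCols) {
    List<String> values = new ArrayList<String>();
    for (Integer colToAppend : dynamicPartCols) {
      Object colValue = row.get(colToAppend);
      values.add(colValue == null ? DEFAULT_PARTITION_NAME : colValue.toString());
    }
    return values;
  }

  public static void main(String[] args) {
    List<Object> row = Arrays.<Object>asList("2015-07-30", null);
    // Prints [2015-07-30, __HIVE_DEFAULT_PARTITION__] instead of throwing an NPE.
    System.out.println(dynamicPartValues(row, Arrays.asList(0, 1)));
  }
}
{code}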



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2016-02-08 Thread Sushanth Sowmyan (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138181#comment-15138181 ]

Sushanth Sowmyan commented on HIVE-11470:
---------------------------------------------

Backported to branch-1.2 as well.



[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2016-02-08 Thread Sushanth Sowmyan (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138161#comment-15138161 ]

Sushanth Sowmyan commented on HIVE-11470:
---------------------------------------------

Pushed to branch-2.0 as well. Thanks!



[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2016-02-08 Thread Sergey Shelukhin (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138106#comment-15138106 ]

Sergey Shelukhin commented on HIVE-11470:
---------------------------------------------

Sure. Can you ping me when you commit it? I was about to cut another RC after
committing HIVE-13025.



[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2016-02-08 Thread Sushanth Sowmyan (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138098#comment-15138098 ]

Sushanth Sowmyan commented on HIVE-11470:
---------------------------------------------

[~sershe], can I get this on 2.0 as well, if you're still adding contenders? 
I've seen this bug appear on a couple of other reports I've had.



[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2016-01-21 Thread Sushanth Sowmyan (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15111541#comment-15111541 ]

Sushanth Sowmyan commented on HIVE-11470:
---------------------------------------------

+1, LGTM. The reported test failures do not seem to be related.



[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2015-12-23 Thread Hive QA (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070173#comment-15070173 ]

Hive QA commented on HIVE-11470:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12779124/HIVE-11470.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 9961 tests executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats2
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_columnstats_partlvl_multiple_part_clause
org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl.testMultiSessionMultipleUse
org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl.testSingleSessionMultipleUse
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveOperationType.checkHiveOperationTypeMatch
org.apache.hive.jdbc.TestSSL.testSSLVersion
org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles
org.apache.hive.spark.client.TestSparkClient.testCounters
org.apache.hive.spark.client.TestSparkClient.testErrorJob
org.apache.hive.spark.client.TestSparkClient.testJobSubmission
org.apache.hive.spark.client.TestSparkClient.testMetricsCollection
org.apache.hive.spark.client.TestSparkClient.testRemoteClient
org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6459/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6459/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6459/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12779124 - PreCommit-HIVE-TRUNK-Build


[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

2015-12-07 Thread Sushanth Sowmyan (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045886#comment-15045886 ]

Sushanth Sowmyan commented on HIVE-11470:
---------------------------------------------

Hi Mithun, thanks for the catch.

Since you're using HIVE_DEFAULT_PARTITION_VALUE to store null, could you please 
explicitly initialize it to null?

Also, I think the test framework was flaky a while back, but should be good now 
if you resubmit a .2.patch.
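
For concreteness, the request amounts to a declaration along these lines; the field name comes from the comment above, while the surrounding class and the modifiers are assumptions, not the actual patch.

{code}
class PatchSketch {
  // Explicit "= null" documents that the sentinel intentionally starts out
  // null (Java would default the field to null anyway; this is for clarity).
  private static final String HIVE_DEFAULT_PARTITION_VALUE = null;
}
{code}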
