Review Request 73507: TestIntegrationGitActionExecutor can fail due to "Address already in use"

2021-08-09 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/73507/
---

Review request for oozie.


Bugs: OOZIE-3633
https://issues.apache.org/jira/browse/OOZIE-3633


Repository: oozie-git


Description
---

While executing the unit tests I ran into an "Address already in use" failure when 
TestIntegrationGitActionExecutor ran:

java.net.BindException: Address already in use (Bind failed)
at java.net.PlainSocketImpl.socketBind(Native Method)
at 
java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at org.eclipse.jgit.transport.Daemon.start(Daemon.java:373)
at org.apache.oozie.action.hadoop.GitServer.start(GitServer.java:76)
... 
The root cause is likely the way the test case checks for a free port and then uses it.
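
The diff below contains the actual fix; purely as an illustration of the two usual 
remedies (a sketch, not the patch), the test could either let the OS pick an 
ephemeral port or retry the bind when the chosen port is already taken:

{code:java}
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

/**
 * Minimal sketch, not the actual patch: probing a port up front and binding it
 * later races with parallel tests, so either bind to port 0 (the OS picks a free
 * port) or retry on BindException.
 */
public final class TestPorts {

    private TestPorts() {
    }

    /** Let the OS assign an ephemeral port and report which one it chose. */
    static int findFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {   // 0 = "any free port"
            return probe.getLocalPort();
        }
    }

    /** Retry the bind a few times instead of failing the whole test run. */
    static ServerSocket bindWithRetry(int firstPort, int attempts) throws IOException {
        for (int i = 0; i < attempts; i++) {
            try {
                return new ServerSocket(firstPort + i);
            } catch (BindException alreadyInUse) {
                // the port was grabbed by another test or a lingering socket; try the next one
            }
        }
        throw new BindException("No free port found in range starting at " + firstPort);
    }
}
{code}

Note that findFreePort() still leaves a small window between closing the probe socket 
and the jgit Daemon binding, so retrying on BindException is the more robust of the two.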


Diffs
-

  sharelib/git/src/test/java/org/apache/oozie/action/hadoop/GitServer.java 
8398e5fef 
  
sharelib/git/src/test/java/org/apache/oozie/action/hadoop/TestIntegrationGitActionExecutor.java
 20368dabb 


Diff: https://reviews.apache.org/r/73507/diff/1/


Testing
---


Thanks,

Denes Bodo



Review Request 73427: Oozie server startup error when JDBC URL for a MySql DB with HA is used

2021-06-18 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/73427/
---

Review request for oozie.


Bugs: OOZIE-2136
https://issues.apache.org/jira/browse/OOZIE-2136


Repository: oozie-git


Description
---

Oozie cannot handle a comma (,) in the JDBC URL, which a MySQL high-availability URL 
uses to separate the hosts.
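
For illustration only (the property names below are examples, not necessarily what 
JPAService uses internally): a MySQL HA URL lists several hosts separated by commas, 
so any code that splits a "key=value,key=value" connection string blindly on ',' 
truncates the URL:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectionStringDemo {

    public static void main(String[] args) {
        // An HA MySQL URL legitimately contains a comma between the hosts.
        String url = "jdbc:mysql://db-host-1:3306,db-host-2:3306/oozie";
        String connection = "DriverClassName=com.mysql.jdbc.Driver,Url=" + url + ",Username=oozie";

        // Naive parsing: the URL is cut at its first comma.
        Map<String, String> broken = new LinkedHashMap<>();
        for (String pair : connection.split(",")) {
            String[] kv = pair.split("=", 2);
            broken.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        System.out.println(broken.get("Url"));  // jdbc:mysql://db-host-1:3306  <- truncated

        // Safer parsing: only treat a comma as a separator when a new "key=" follows it.
        Map<String, String> fixed = new LinkedHashMap<>();
        for (String pair : connection.split(",(?=[A-Za-z]+=)")) {
            String[] kv = pair.split("=", 2);
            fixed.put(kv[0], kv[1]);
        }
        System.out.println(fixed.get("Url"));   // the full HA URL survives
    }
}
{code}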


Diffs
-

  core/src/main/java/org/apache/oozie/service/JPAService.java 5c621b2a9 


Diff: https://reviews.apache.org/r/73427/diff/1/


Testing
---


Thanks,

Denes Bodo



Review Request 72711: Extend file system EL functions to use custom file system properties

2020-07-28 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72711/
---

Review request for oozie.


Bugs: OOZIE-3606
https://issues.apache.org/jira/browse/OOZIE-3606


Repository: oozie-git


Description
---

If we want to check file availability on an abfss file system within a decision node, 
and we can access that file system only through a password-protected key file, we are 
currently blocked, because we cannot allow configuring 
hadoop.security.credstore.java-keystore-provider.password-file globally.

This ticket is created to propose a solution for this issue:

- allow special file system configurations that apply only to the file system EL functions
- let them be configured globally via oozie-site.xml and overridden independently for 
every workflow
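
A rough sketch of the proposed layering, assuming a hypothetical property prefix (the 
real prefix introduced by the patch may differ):

{code:java}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

/**
 * Illustrative only: the prefix below is made up. The idea is that properties
 * under this prefix are applied only to the Configuration used by the file
 * system EL functions, first from oozie-site.xml and then from the workflow
 * configuration, so a single workflow can override the global defaults.
 */
final class FsElFunctionConf {

    private static final String PREFIX = "oozie.service.ELService.fs.conf.";

    static Configuration forElFunctions(Configuration base,
                                        Configuration oozieSite,
                                        Configuration workflowConf) {
        Configuration conf = new Configuration(base);
        copyPrefixed(oozieSite, conf);     // cluster-wide defaults, e.g. the credstore password file
        copyPrefixed(workflowConf, conf);  // per-workflow overrides win
        return conf;
    }

    private static void copyPrefixed(Configuration source, Configuration target) {
        for (Map.Entry<String, String> entry : source) {
            if (entry.getKey().startsWith(PREFIX)) {
                target.set(entry.getKey().substring(PREFIX.length()), entry.getValue());
            }
        }
    }
}
{code}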


Diffs
-

  core/src/main/java/org/apache/oozie/ErrorCode.java 8f4e21d03 
  core/src/main/java/org/apache/oozie/action/hadoop/FsELFunctions.java 
0f81d7633 
  core/src/main/resources/oozie-default.xml ab7b8d358 
  core/src/test/java/org/apache/oozie/action/hadoop/TestFsELFunctions.java 
7b8187e0f 
  docs/src/site/markdown/WorkflowFunctionalSpec.md 136f1d77c 


Diff: https://reviews.apache.org/r/72711/diff/1/


Testing
---


Thanks,

Denes Bodo



[jira] [Updated] (OOZIE-3578) MapReduce counters cannot be used over 120

2020-01-08 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3578:
--
Attachment: OOZIE-3578-003.patch

> MapReduce counters cannot be used over 120
> --
>
> Key: OOZIE-3578
> URL: https://issues.apache.org/jira/browse/OOZIE-3578
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
> Attachments: OOZIE-3578-001.patch, OOZIE-3578-002.patch, 
> OOZIE-3578-003.patch
>
>
> When we create a mapreduce action which then creates more than 120 counters 
> then the following exception is thrown:
> {noformat}
> org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
> org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:155)
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:264)
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:383)
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:859)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:820)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:817)
> java.security.AccessController.doPrivileged(Native Method)
> javax.security.auth.Subject.doAs(Subject.java:422)
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> org.apache.hadoop.mapreduce.Job.getCounters(Job.java:817)
> org.apache.hadoop.mapred.JobClient$NetworkedJob.getCounters(JobClient.java:379)
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.end(MapReduceActionExecutor.java:252)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:183)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:62)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:244)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:56)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {noformat}
> It turned out that if we use Oozie with Hadoop 3, the MR class {{Limits}} is 
> not initialised from the job configuration but with its default values:  
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java#L40
> Setting "mapreduce.job.counters.max" to 500 in mapred-site.xml or in 
> core-site.xml therefore has no effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OOZIE-3578) MapReduce counters cannot be used over 120

2020-01-07 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3578:
--
Attachment: OOZIE-3578-002.patch

> MapReduce counters cannot be used over 120
> --
>
> Key: OOZIE-3578
> URL: https://issues.apache.org/jira/browse/OOZIE-3578
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
> Attachments: OOZIE-3578-001.patch, OOZIE-3578-002.patch
>
>
> When we create a mapreduce action which then creates more than 120 counters 
> then the following exception is thrown:
> {noformat}
> org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
> org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:155)
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:264)
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:383)
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:859)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:820)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:817)
> java.security.AccessController.doPrivileged(Native Method)
> javax.security.auth.Subject.doAs(Subject.java:422)
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> org.apache.hadoop.mapreduce.Job.getCounters(Job.java:817)
> org.apache.hadoop.mapred.JobClient$NetworkedJob.getCounters(JobClient.java:379)
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.end(MapReduceActionExecutor.java:252)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:183)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:62)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:244)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:56)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {noformat}
> It turned out that if we use Oozie with Hadoop 3, the MR class {{Limits}} is 
> not initialised from the job configuration but with its default values:  
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java#L40
> Setting "mapreduce.job.counters.max" to 500 in mapred-site.xml or in 
> core-site.xml therefore has no effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OOZIE-3578) MapReduce counters cannot be used over 120

2020-01-07 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17009720#comment-17009720
 ] 

Denes Bodo commented on OOZIE-3578:
---

Thanks [~asalamon74] for letting me know. I did run an incorrect git command. 
Sorry.

> MapReduce counters cannot be used over 120
> --
>
> Key: OOZIE-3578
> URL: https://issues.apache.org/jira/browse/OOZIE-3578
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
> Attachments: OOZIE-3578-001.patch, OOZIE-3578-002.patch
>
>
> When we create a mapreduce action which then creates more than 120 counters 
> then the following exception is thrown:
> {noformat}
> org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
> org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:155)
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:264)
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:383)
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:859)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:820)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:817)
> java.security.AccessController.doPrivileged(Native Method)
> javax.security.auth.Subject.doAs(Subject.java:422)
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> org.apache.hadoop.mapreduce.Job.getCounters(Job.java:817)
> org.apache.hadoop.mapred.JobClient$NetworkedJob.getCounters(JobClient.java:379)
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.end(MapReduceActionExecutor.java:252)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:183)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:62)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:244)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:56)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {noformat}
> It turned out that if we use Oozie with Hadoop 3, the MR class {{Limits}} is 
> not initialised from the job configuration but with its default values:  
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java#L40
> Setting "mapreduce.job.counters.max" to 500 in mapred-site.xml or in 
> core-site.xml therefore has no effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OOZIE-3578) MapReduce counters cannot be used over 120

2020-01-07 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3578:
--
Attachment: OOZIE-3578-001.patch

> MapReduce counters cannot be used over 120
> --
>
> Key: OOZIE-3578
> URL: https://issues.apache.org/jira/browse/OOZIE-3578
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
> Attachments: OOZIE-3578-001.patch
>
>
> When we create a mapreduce action which then creates more than 120 counters 
> then the following exception is thrown:
> {noformat}
> org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
> org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
> org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:155)
> org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:264)
> org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:383)
> org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:859)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:820)
> org.apache.hadoop.mapreduce.Job$8.run(Job.java:817)
> java.security.AccessController.doPrivileged(Native Method)
> javax.security.auth.Subject.doAs(Subject.java:422)
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> org.apache.hadoop.mapreduce.Job.getCounters(Job.java:817)
> org.apache.hadoop.mapred.JobClient$NetworkedJob.getCounters(JobClient.java:379)
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.end(MapReduceActionExecutor.java:252)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:183)
> org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:62)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:244)
> org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:56)
> org.apache.oozie.command.XCommand.call(XCommand.java:291)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {noformat}
> It turned out that if we use Oozie with Hadoop 3, the MR class {{Limits}} is 
> not initialised from the job configuration but with its default values:  
> https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java#L40
> Setting "mapreduce.job.counters.max" to 500 in mapred-site.xml or in 
> core-site.xml therefore has no effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OOZIE-3578) MapReduce counters cannot be used over 120

2020-01-06 Thread Denes Bodo (Jira)
Denes Bodo created OOZIE-3578:
-

 Summary: MapReduce counters cannot be used over 120
 Key: OOZIE-3578
 URL: https://issues.apache.org/jira/browse/OOZIE-3578
 Project: Oozie
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0
Reporter: Denes Bodo


When we create a mapreduce action which then creates more than 120 counters 
then the following exception is thrown:
{noformat}
org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:155)
org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:264)
org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:383)
org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:859)
org.apache.hadoop.mapreduce.Job$8.run(Job.java:820)
org.apache.hadoop.mapreduce.Job$8.run(Job.java:817)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:422)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
org.apache.hadoop.mapreduce.Job.getCounters(Job.java:817)
org.apache.hadoop.mapred.JobClient$NetworkedJob.getCounters(JobClient.java:379)
org.apache.oozie.action.hadoop.MapReduceActionExecutor.end(MapReduceActionExecutor.java:252)
org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:183)
org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:62)
org.apache.oozie.command.XCommand.call(XCommand.java:291)
org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:244)
org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:56)
org.apache.oozie.command.XCommand.call(XCommand.java:291)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
{noformat}

It turned out that if we use Oozie with Hadoop 3, the MR class {{Limits}} is 
not initialised from the job configuration but with its default values:  
https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java#L40

Setting "mapreduce.job.counters.max" to 500 in mapred-site.xml or in 
core-site.xml therefore has no effect.
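
A minimal sketch of the idea (not the committed patch): Hadoop's 
Limits.init(Configuration) only takes effect the first time it runs in a JVM, so the 
raised limit has to be applied from a configuration that carries it before the first 
counter access (for example before Job.getCounters() is reached from 
MapReduceActionExecutor.end()):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.counters.Limits;

final class CounterLimitsBootstrap {

    /**
     * siteConf is expected to have mapred-site.xml/core-site.xml loaded, i.e. to
     * carry the raised "mapreduce.job.counters.max" (e.g. 500). Limits.init() is a
     * no-op once the class has been initialised, which is why setting the property
     * late has "no positive effect".
     */
    static void applyBeforeFetchingCounters(Configuration siteConf) {
        Limits.init(siteConf);
    }
}
{code}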



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-26 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982500#comment-16982500
 ] 

Denes Bodo commented on OOZIE-3561:
---

Thank you for the fix [~pbacsko]. I checked your change against the original workflow 
that caused the slowness, and it ran correctly and quickly. Unfortunately that 
workflow contains sensitive data, so I could not share it here, but the provided 
80-action workflow is almost the same as the original.

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
> Attachments: OOZIE-3561-002.patch, OOZIE-3561-003.patch, 
> OOZIE-3561_001.patch
>
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-26 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3561:
--
Attachment: OOZIE-3561-004.patch

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
> Attachments: OOZIE-3561-002.patch, OOZIE-3561-003.patch, 
> OOZIE-3561-004.patch, OOZIE-3561_001.patch
>
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-26 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3561:
--
Attachment: (was: OOZIE-3561-003.patch)

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
> Attachments: OOZIE-3561-002.patch, OOZIE-3561-003.patch, 
> OOZIE-3561_001.patch
>
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-26 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3561:
--
Attachment: OOZIE-3561-003.patch

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
> Attachments: OOZIE-3561-002.patch, OOZIE-3561-003.patch, 
> OOZIE-3561-003.patch, OOZIE-3561_001.patch
>
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-22 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3561:
--
Attachment: OOZIE-3561_001.patch

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
> Attachments: OOZIE-3561_001.patch
>
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-21 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979311#comment-16979311
 ] 

Denes Bodo commented on OOZIE-3561:
---

I managed to reproduce the issue by using only unit tests:
 # Create a workflow with similar content:
{noformat}









{noformat}
 # create a test like:
{code:java}
public void test40ActionsInARow() throws WorkflowException, IOException {
    LiteWorkflowAppParser parser = newLiteWorkflowAppParser();
    try {
        parser.validateAndParse(IOUtils.getResourceAsReader(
                "wf-actions-40.xml", -1), new Configuration());
    } catch (final WorkflowException e) {
        e.printStackTrace();
        Assert.fail("This workflow has to be correct.");
    }
}
{code}

With 40 actions, the check couldn't finish within 10 minutes.

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-21 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979281#comment-16979281
 ] 

Denes Bodo commented on OOZIE-3561:
---

cc [~pbacsko], [~rkanter]

> Forkjoin validation is slow when there are many actions in chain
> 
>
> Key: OOZIE-3561
> URL: https://issues.apache.org/jira/browse/OOZIE-3561
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: performance
>
> In case we have a workflow which has, let's say, 80 actions after each other:
> {{a1 -> a2 -> ... a80}}
> then the validator code "never" finishes.
> Currently the validation (as I understand it) does a depth-first check from 
> the start node and runs in roughly O(n!) time. This is supported by the fact that 
> splitting this huge workflow into two 40-element workflows gives roughly 
> 2 × 40! validation steps instead of ~80! steps.
> Guys, could you please confirm or disprove my theory?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OOZIE-3561) Forkjoin validation is slow when there are many actions in chain

2019-11-21 Thread Denes Bodo (Jira)
Denes Bodo created OOZIE-3561:
-

 Summary: Forkjoin validation is slow when there are many actions 
in chain
 Key: OOZIE-3561
 URL: https://issues.apache.org/jira/browse/OOZIE-3561
 Project: Oozie
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0
Reporter: Denes Bodo
Assignee: Denes Bodo


In case we have a workflow which has, let's say, 80 actions after each other:
{{a1 -> a2 -> ... a80}}
then the validator code "never" finishes.

Currently the validation (as I understand it) does a depth-first check from the 
start node and runs in roughly O(n!) time. This is supported by the fact that 
splitting this huge workflow into two 40-element workflows gives roughly 
2 × 40! validation steps instead of ~80! steps.
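
A toy sketch of one possible remedy, with made-up names (this is not the actual 
LiteWorkflowValidator code): caching which nodes have already been validated turns the 
repeated re-exploration into a single walk over the chain:

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class ForkJoinCheckSketch {

    interface Node {
        String name();
        List<Node> transitions();
    }

    // key: node name + the fork it is being validated under
    private final Map<String, Boolean> validated = new HashMap<>();

    void validate(Node node, String currentFork) {
        String key = node.name() + "#" + currentFork;
        if (validated.containsKey(key)) {
            return;                        // already checked on an equivalent path
        }
        validated.put(key, Boolean.TRUE);  // mark before descending to cut re-exploration
        // ... per-node fork/join checks would go here ...
        for (Node next : node.transitions()) {
            validate(next, currentFork);
        }
    }
}
{code}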

Guys, could you please confirm or disprove my theory?

Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-07 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945715#comment-16945715
 ] 

Denes Bodo commented on OOZIE-3529:
---

Thanks [~asalamon74] for your review comments. I think all the failures are 
completely unrelated to my change.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, OOZIE-3529.007.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the s3 file system as a secondary file system see the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutp

[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-06 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.007.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, OOZIE-3529.007.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the s3 file system as a secondary file system see the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.a

[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-04 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.006.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, id.pig, job.properties, workflow.xml
>
>
> Many customers who use the s3 file system as a secondary file system see the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apac

Re: Review Request 71168: Oozie not supported for s3 as filesystem

2019-10-04 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/
---

(Updated Oct. 4, 2019, 8:48 a.m.)


Review request for oozie and Andras Salamon.


Bugs: OOZIE-3529
https://issues.apache.org/jira/browse/OOZIE-3529


Repository: oozie-git


Description
---

Many customers who use the s3 file system as a secondary file system see an 
"UnsupportedOperationException: Accessing local file system is not allowed" 
error when Oozie tries to submit the Yarn application.
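
As far as can be inferred from the review discussion below, the change introduces 
per-scheme properties such as oozie.service.HadoopAccessorService.fs.s3a whose value 
is a comma-separated list of key=value pairs applied to the Configuration for that 
file system. A rough sketch of that shape (the actual HadoopAccessorService code may 
differ):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class SchemeConfSketch {

    /** Apply e.g. oozie.service.HadoopAccessorService.fs.s3a to the given conf. */
    static void applySchemeProperties(Configuration conf, String scheme) {
        String raw = conf.get("oozie.service.HadoopAccessorService.fs." + scheme);
        if (raw == null || raw.trim().isEmpty()) {
            return;
        }
        for (String pair : raw.split(",")) {  // commas inside values are not supported (see OOZIE-3547)
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                conf.set(kv[0].trim(), kv[1].trim());
            }
        }
    }
}
{code}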


Diffs (updated)
-

  core/pom.xml d2a211a89 
  core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
0b53a3611 
  core/src/main/resources/oozie-default.xml f33c7b938 
  core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
89ce18550 
  core/src/test/java/org/apache/oozie/test/XFsTestCase.java c0f3c6959 
  docs/src/site/markdown/AG_HadoopConfiguration.md ab71d7cb6 


Diff: https://reviews.apache.org/r/71168/diff/4/

Changes: https://reviews.apache.org/r/71168/diff/3-4/


Testing
---


Thanks,

Denes Bodo



Re: Review Request 71168: Oozie not supported for s3 as filesystem

2019-10-04 Thread Denes Bodo


> On July 26, 2019, 4:05 p.m., Andras Salamon wrote:
> > core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java
> > Lines 638 (patched)
> > <https://reviews.apache.org/r/71168/diff/1/?file=2157981#file2157981line638>
> >
> > The property name contains the uri scheme, so it will not be possible 
> > to list all the possible properties in the oozie-default.xml. It means that 
> > we will get lots of "Invalid configuration defined" warning messages (see 
> > OOZIE-2338).
> > 
> > Could you modify ConfigurationService.verifyConfigurationName() just 
> > like in OOZIE-2338 to ignore the warnings.
> 
> Denes Bodo wrote:
> I think this kind of work is out of scope of this fix. Also, the number of 
> file systems that need custom properties is quite small, so in my 
> opinion defining the options in oozie-default.xml with a description will help 
> the user set the configuration more easily.
> 
> Andras Salamon wrote:
> OK, please open an upstream jira about this limitation (Oozie will print 
> out warnings if someone wants to set filesystem properties other than s3a) and 
> link it to this jira and also to OOZIE-2338.

https://issues.apache.org/jira/browse/OOZIE-3546


> On July 26, 2019, 4:05 p.m., Andras Salamon wrote:
> > core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java
> > Lines 646 (patched)
> > <https://reviews.apache.org/r/71168/diff/1/?file=2157981#file2157981line646>
> >
> > What if a property value contains ","? This solution does not allow 
> > that. I'm not familiar with the fs properties. Can we safely assume that 
> > there is no , in the values?
> 
> Denes Bodo wrote:
> This is documented now: we do not allow commas in property values 
> or in property names.
> 
> Andras Salamon wrote:
> OK, we can live with this limitation for now. Could you please also add a 
> sentence to the oozie-default.xml description stating that we don't allow commas in 
> the property values? 
> Also please open a new upstream jira which will fix that (no need to 
> solve the jira in the foreseeable future, but it shows that we plan to do it).

https://issues.apache.org/jira/browse/OOZIE-3547


- Denes


-------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/#review216891
---


On Oct. 1, 2019, 8:09 a.m., Denes Bodo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71168/
> ---
> 
> (Updated Oct. 1, 2019, 8:09 a.m.)
> 
> 
> Review request for oozie and Andras Salamon.
> 
> 
> Bugs: OOZIE-3529
> https://issues.apache.org/jira/browse/OOZIE-3529
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> Many customers who use the s3 file system as a secondary file system see an 
> "UnsupportedOperationException: Accessing local file system is not allowed" 
> error when Oozie tries to submit the Yarn application.
> 
> 
> Diffs
> -
> 
>   core/pom.xml d2a211a89 
>   core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
> 0b53a3611 
>   core/src/main/resources/oozie-default.xml f33c7b938 
>   core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
> 89ce18550 
>   core/src/test/java/org/apache/oozie/test/XFsTestCase.java c0f3c6959 
>   docs/src/site/markdown/AG_HadoopConfiguration.md ab71d7cb6 
> 
> 
> Diff: https://reviews.apache.org/r/71168/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denes Bodo
> 
>



[jira] [Created] (OOZIE-3547) Oozie does not allow comma in custom file system properties

2019-10-04 Thread Denes Bodo (Jira)
Denes Bodo created OOZIE-3547:
-

 Summary: Oozie does not allow comma in custom file system 
properties
 Key: OOZIE-3547
 URL: https://issues.apache.org/jira/browse/OOZIE-3547
 Project: Oozie
  Issue Type: Bug
  Components: core
Affects Versions: 5.2.0
Reporter: Denes Bodo


In OOZIE-3529 we introduced the ability to set custom file system properties. The 
first implementation does not allow commas in either keys or values.

This Jira is intended to track the work to allow commas in such properties.
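
One possible direction, purely illustrative and not a committed design: accept a 
backslash-escaped comma ("\,") inside values and split only on unescaped commas:

{code:java}
import java.util.ArrayList;
import java.util.List;

final class EscapedCommaSplit {

    static List<String> split(String raw) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < raw.length(); i++) {
            char c = raw.charAt(i);
            if (c == '\\' && i + 1 < raw.length() && raw.charAt(i + 1) == ',') {
                current.append(',');   // escaped comma stays inside the value
                i++;
            } else if (c == ',') {
                parts.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }
}
{code}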



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (OOZIE-3546) Oozie prints out warning messages when non-s3 file system custom properties are set

2019-10-04 Thread Denes Bodo (Jira)
Denes Bodo created OOZIE-3546:
-

 Summary: Oozie prints out warning messages when non-s3 file system 
custom properties are set
 Key: OOZIE-3546
 URL: https://issues.apache.org/jira/browse/OOZIE-3546
 Project: Oozie
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0
Reporter: Denes Bodo


In OOZIE-3529 we introduced a way to provide custom file system properties like 
{{oozie.service.HadoopAccessorService.fs.s3a}}. If a file system other than s3a is 
specified, Oozie prints a warning log message stating 
{{Invalid configuration defined}}.

This Jira is filed to track the fix for the above behaviour: either we put all 
the file systems into oozie-default.xml, or we implement a pattern-style check like 
in OOZIE-2338.
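
A sketch of the pattern-style option, with an illustrative regex and method name (not 
Oozie's actual ConfigurationService.verifyConfigurationName() code):

{code:java}
import java.util.Set;
import java.util.regex.Pattern;

final class KnownPropertyPatterns {

    private static final Pattern PER_SCHEME_FS_PROPERTY =
            Pattern.compile("oozie\\.service\\.HadoopAccessorService\\.fs\\.[a-z0-9]+");

    /** True if the name is listed in oozie-default.xml or matches a known per-scheme pattern. */
    static boolean isKnown(String name, Set<String> namesFromOozieDefault) {
        return namesFromOozieDefault.contains(name)
                || PER_SCHEME_FS_PROPERTY.matcher(name).matches();
    }
}
{code}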



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Review Request 71168: Oozie not supported for s3 as filesystem

2019-10-01 Thread Denes Bodo


> On Sept. 27, 2019, 10:19 a.m., Andras Salamon wrote:
> > docs/src/site/markdown/AG_HadoopConfiguration.md
> > Lines 89 (patched)
> > <https://reviews.apache.org/r/71168/diff/2/?file=2166642#file2166642line89>
> >
> > Instead of "now" please use "since 5.2.0". It will be quite helpful if 
> > someone wants to find the first version which supports this feature. (It's 
> > a bit confusing before the 5.2.0 release, but better in the long run).

I totally agree with you, @asalamon74.


> On Sept. 27, 2019, 10:19 a.m., Andras Salamon wrote:
> > docs/src/site/markdown/AG_HadoopConfiguration.md
> > Lines 97 (patched)
> > <https://reviews.apache.org/r/71168/diff/2/?file=2166642#file2166642line97>
> >
> > I think you should use \`\` instead of \`\`\`\`\`\` if you want 
> > an inline code block.
> > 
> > A space is missing: thefile

Thank you for noticing it. IDEA rendered it correctly for me, but I agree with 
you.


- Denes


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/#review217962
---


On Oct. 1, 2019, 8:09 a.m., Denes Bodo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71168/
> ---
> 
> (Updated Oct. 1, 2019, 8:09 a.m.)
> 
> 
> Review request for oozie and Andras Salamon.
> 
> 
> Bugs: OOZIE-3529
> https://issues.apache.org/jira/browse/OOZIE-3529
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> Many customers who use the s3 file system as a secondary file system see an 
> "UnsupportedOperationException: Accessing local file system is not allowed" 
> error when Oozie tries to submit the Yarn application.
> 
> 
> Diffs
> -
> 
>   core/pom.xml d2a211a89 
>   core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
> 0b53a3611 
>   core/src/main/resources/oozie-default.xml f33c7b938 
>   core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
> 89ce18550 
>   core/src/test/java/org/apache/oozie/test/XFsTestCase.java c0f3c6959 
>   docs/src/site/markdown/AG_HadoopConfiguration.md ab71d7cb6 
> 
> 
> Diff: https://reviews.apache.org/r/71168/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denes Bodo
> 
>



Re: Review Request 71168: Oozie not supported for s3 as filesystem

2019-10-01 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/
---

(Updated Oct. 1, 2019, 8:09 a.m.)


Review request for oozie and Andras Salamon.


Bugs: OOZIE-3529
https://issues.apache.org/jira/browse/OOZIE-3529


Repository: oozie-git


Description
---

Many customers who use the s3 file system as a secondary file system see an 
"UnsupportedOperationException: Accessing local file system is not allowed" 
error when Oozie tries to submit the Yarn application.


Diffs (updated)
-

  core/pom.xml d2a211a89 
  core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
0b53a3611 
  core/src/main/resources/oozie-default.xml f33c7b938 
  core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
89ce18550 
  core/src/test/java/org/apache/oozie/test/XFsTestCase.java c0f3c6959 
  docs/src/site/markdown/AG_HadoopConfiguration.md ab71d7cb6 


Diff: https://reviews.apache.org/r/71168/diff/3/

Changes: https://reviews.apache.org/r/71168/diff/2-3/


Testing
---


Thanks,

Denes Bodo



[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-01 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.005.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, id.pig, 
> job.properties, workflow.xml
>
>
> Many customers who use the s3 file system as a secondary file system see the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apac

[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-26 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.004.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.

[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-25 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.003.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSyst

[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-25 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.002.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, id.pig, 
> job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.j

Re: Review Request 71168: Oozie not supported for s3 as filesystem

2019-09-25 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/
---

(Updated Sept. 25, 2019, 3:41 p.m.)


Review request for oozie.


Bugs: OOZIE-3529
https://issues.apache.org/jira/browse/OOZIE-3529


Repository: oozie-git


Description
---

Many customers who use the s3 file system as a secondary one experience the 
"UnsupportedOperationException: Accessing local file system is not allowed" 
error when Oozie tries to submit the Yarn application.


Diffs (updated)
-

  core/pom.xml d2a211a89 
  core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
0b53a3611 
  core/src/main/resources/oozie-default.xml f33c7b938 
  core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
89ce18550 
  core/src/test/java/org/apache/oozie/test/XFsTestCase.java c0f3c6959 
  docs/src/site/markdown/AG_HadoopConfiguration.md ab71d7cb6 


Diff: https://reviews.apache.org/r/71168/diff/2/

Changes: https://reviews.apache.org/r/71168/diff/1-2/


Testing
---


Thanks,

Denes Bodo



Re: Review Request 71168: Oozie not supported for s3 as filesystem

2019-09-25 Thread Denes Bodo


> On July 26, 2019, 4:05 p.m., Andras Salamon wrote:
> > core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java
> > Lines 638 (patched)
> > <https://reviews.apache.org/r/71168/diff/1/?file=2157981#file2157981line638>
> >
> > The property name contains the uri scheme, so it will not be possible 
> > to list all the possible properties in the oozie-default.xml. It means that 
> > we will get lots of "Invalid configuration defined" warning messages (see 
> > OOZIE-2338).
> > 
> > Could you modify ConfigurationService.verifyConfigurationName() just 
> > like in the OOZIE-2338 do ingore the warnings.

I think this kind of work is out of scope for this fix. Also, the number of 
file systems which need custom properties is quite small, so in my opinion 
defining the options in oozie-default.xml with a description will help the 
user set the configuration more easily.


> On July 26, 2019, 4:05 p.m., Andras Salamon wrote:
> > core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java
> > Lines 646 (patched)
> > <https://reviews.apache.org/r/71168/diff/1/?file=2157981#file2157981line646>
> >
> > What if a property value contains ","? This solution does not allow 
> > that. I'm not familiar with the fs properties. Can we safely assume that 
> > there is no , is the values?

This is documented now: we will not allow commas in property values or in 
property names.


> On July 26, 2019, 4:05 p.m., Andras Salamon wrote:
> > core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java
> > Lines 648 (patched)
> > <https://reviews.apache.org/r/71168/diff/1/?file=2157981#file2157981line648>
> >
> > We should not expect that the entry contains an = sign. Without the = 
> > we get ArrayOutOfBoundsException.
> > 
> > What if it's a=b=c? Should we just ignore the =c part? At least we 
> > should print out a warning. Maybe we can just give an error and ignore the 
> > whole parameter.
> > 
> > The solution also assumes that there are no = character in the value.

You are correct. In case a property entry contains more than one equals sign, 
everything from the second one onward becomes part of the *value*. Please see 
the updated patch.


> On July 26, 2019, 4:05 p.m., Andras Salamon wrote:
> > core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java
> > Lines 358 (patched)
> > <https://reviews.apache.org/r/71168/diff/1/?file=2157982#file2157982line358>
> >
> > Please add a few properties to this empty configuration. It would 
> > improve this test.

I do not really think we should add more properties here. The purpose of the 
test is to check that the configuration object is left untouched.


- Denes


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/#review216891
---


On July 26, 2019, 1:43 p.m., Denes Bodo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71168/
> ---
> 
> (Updated July 26, 2019, 1:43 p.m.)
> 
> 
> Review request for oozie.
> 
> 
> Bugs: OOZIE-3529
> https://issues.apache.org/jira/browse/OOZIE-3529
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> Many customer who uses s3 file system as secondary one experiences 
> "UnsupportedOperationException: Accessing local file system is not allowed" 
> error when Oozie tries to submit the Yarn application.
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
> 0b53a3611 
>   core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
> 89ce18550 
> 
> 
> Diff: https://reviews.apache.org/r/71168/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denes Bodo
> 
>



[jira] [Commented] (OOZIE-3542) Handle better old Hdfs implementations in ECPolicyDisabler

2019-09-11 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927321#comment-16927321
 ] 

Denes Bodo commented on OOZIE-3542:
---

Thanks [~kmarton] for your comments. [~zsombor], if you do not mind, I've 
uploaded the requested changes.

> Handle better old Hdfs implementations in ECPolicyDisabler
> --
>
> Key: OOZIE-3542
> URL: https://issues.apache.org/jira/browse/OOZIE-3542
> Project: Oozie
>  Issue Type: Bug
>  Components: tools
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
> Attachments: OOZIE-3542-2.patch, OOZIE-3542-3.patch
>
>
> Currently, ECPolicyDisabler checks if the local hdfs implementation has the 
> necessary methods to get and set erasure coding policy. However, if the 
> namenode implementation is old, it could throw a 
> org.apache.hadoop.ipc.RemoteException with 
> RpcErrorCodeProto.ERROR_NO_SUCH_METHOD value in it.
> In this case, ECPolicyDisabler fails, and prevents the installation to 
> succeed.
> This case should be handled just like, when erasure coding is not supported.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (OOZIE-3542) Handle better old Hdfs implementations in ECPolicyDisabler

2019-09-11 Thread Denes Bodo (Jira)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3542:
--
Attachment: OOZIE-3542-3.patch

> Handle better old Hdfs implementations in ECPolicyDisabler
> --
>
> Key: OOZIE-3542
> URL: https://issues.apache.org/jira/browse/OOZIE-3542
> Project: Oozie
>  Issue Type: Bug
>  Components: tools
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
> Attachments: OOZIE-3542-2.patch, OOZIE-3542-3.patch
>
>
> Currently, ECPolicyDisabler checks if the local hdfs implementation has the 
> necessary methods to get and set erasure coding policy. However, if the 
> namenode implementation is old, it could throw a 
> org.apache.hadoop.ipc.RemoteException with 
> RpcErrorCodeProto.ERROR_NO_SUCH_METHOD value in it.
> In this case, ECPolicyDisabler fails, and prevents the installation to 
> succeed.
> This case should be handled just like, when erasure coding is not supported.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893753#comment-16893753
 ] 

Denes Bodo commented on OOZIE-3529:
---

I know that documentation is missing. If the approach is acceptable then I'll 
document it. Thanks for your comments.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: OOZIE-3529.001.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop

Review Request 71168: Oozie not supported for s3 as filesystem

2019-07-26 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71168/
---

Review request for oozie.


Bugs: OOZIE-3529
https://issues.apache.org/jira/browse/OOZIE-3529


Repository: oozie-git


Description
---

Many customers who use the s3 file system as a secondary one experience the 
"UnsupportedOperationException: Accessing local file system is not allowed" 
error when Oozie tries to submit the Yarn application.


Diffs
-

  core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
0b53a3611 
  core/src/test/java/org/apache/oozie/service/TestHadoopAccessorService.java 
89ce18550 


Diff: https://reviews.apache.org/r/71168/diff/1/


Testing
---


Thanks,

Denes Bodo



[jira] [Assigned] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo reassigned OOZIE-3529:
-

Assignee: Denes Bodo

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: OOZIE-3529.001.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:11

[jira] [Updated] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3529:
--
Attachment: OOZIE-3529.001.patch

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: OOZIE-3529.001.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
>

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891802#comment-16891802
 ] 

Denes Bodo commented on OOZIE-3529:
---

What can go wrong if we put {{fs.s3a.fast.upload.buffer=bytebuffer}} into 
core-site.xml globally?

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
&g

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891793#comment-16891793
 ] 

Denes Bodo commented on OOZIE-3529:
---

As I experienced, when the following two properties are set in the workflow.xml 
action configuration, S3AFileSystem will not use the local file system as 
temporary storage.
{code:xml}

oozie.launcher.fs.s3a.fast.upload.buffer
bytebuffer


oozie.launcher.fs.s3a.impl.disable.cache
true

{code}

Conclusion for now:
This can be a workaround if customers want to have the CVE fixed and accept 
modifying their workflows.
It cannot be the final solution, as it requires modifying every job 
configuration.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3

[jira] [Created] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-15 Thread Denes Bodo (JIRA)
Denes Bodo created OOZIE-3529:
-

 Summary: Oozie not supported for s3 as filesystem
 Key: OOZIE-3529
 URL: https://issues.apache.org/jira/browse/OOZIE-3529
 Project: Oozie
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0, 4.3.1
Reporter: Denes Bodo
 Attachments: id.pig, job.properties, workflow.xml

Many customers who use the s3 file system as a secondary one experience the 
following error when Oozie tries to submit the Yarn application:
{noformat}
2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
action [streaming-node]. ErrorType [ERROR], ErrorCode 
[UnsupportedOperationException], Message [UnsupportedOperationException: 
Accessing local file system is not allowed]
org.apache.oozie.action.ActionExecutorException: UnsupportedOperationException: 
Accessing local file system is not allowed
at 
org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
at 
org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
at 
org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
at 
org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
at org.apache.oozie.command.XCommand.call(XCommand.java:287)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException: Accessing local file system 
is not allowed
at 
org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
at 
org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
at 
org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
at 
org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1026)
at 
org.apache.oozie.action.hadoop.LauncherMapperHelper.setupLauncherInfo(LauncherMapperHelper.java:156)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1033)
... 12 more
{noformat}
Does anybody have any idea how we could modify the RawLocalFileSystem class to 
make it a bit less strict?
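
For illustration only, one conceivable way to make it "less strict" would be to 
guard the refusal with a configuration switch instead of always throwing. The 
class and the property name below are purely hypothetical, and this is not the 
direction the patches attached to this ticket take (they make the file system 
properties configurable on the Oozie side instead).

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.RawLocalFileSystem;

/** Hypothetical sketch only: allow local FS access unless explicitly forbidden. */
public class LenientRawLocalFileSystem extends RawLocalFileSystem {

    // Hypothetical property name, not an existing Hadoop or Oozie setting.
    private static final String ALLOW_LOCAL_FS = "fs.raw.local.access.allowed";

    @Override
    public void initialize(URI uri, Configuration conf) throws IOException {
        if (!conf.getBoolean(ALLOW_LOCAL_FS, true)) {
            // Current strict behaviour: refuse any local file system access.
            throw new UnsupportedOperationException("Accessing local file system is not allowed");
        }
        // Relaxed behaviour: fall back to the normal local file system initialization.
        super.initialize(uri, conf);
    }
}
{code}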

Thanks to Soumitra Sulav for the repro workflow.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (OOZIE-3459) Oozie cannot be built using Java 11

2019-05-07 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834743#comment-16834743
 ] 

Denes Bodo commented on OOZIE-3459:
---

[~kmarton] Thank you for your suggestion and offer. When this ticket was 
created I did not foresee such a number of issues. It is a good idea to track 
the ones found so far with this ticket as an umbrella.

> Oozie cannot be built using Java 11
> ---
>
> Key: OOZIE-3459
> URL: https://issues.apache.org/jira/browse/OOZIE-3459
> Project: Oozie
>  Issue Type: Bug
>  Components: core, fluent-job
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Priority: Major
>
> Using OpenJDK 11 I am not able to build Oozie using {{mvn clean install}}.
> I found two issues:
>  * Fluent job API build fails due to Jaxb2 maven plugin.
>  * No {{com.sun.tools.}} package is available so *TestMetricsInstrumentation* 
> will not work.
>  * Maven surefire plugin has to be updated. It works with 3.0.0-M3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-2972) Server goes inconsistent when prepare war called with secure without SSL

2019-04-16 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818771#comment-16818771
 ] 

Denes Bodo commented on OOZIE-2972:
---

[~asalamon74] Am I right that, with Tomcat eliminated and Jetty used instead, 
this situation is no longer possible? Thanks

> Server goes inconsistent when prepare war called with secure without SSL
> 
>
> Key: OOZIE-2972
> URL: https://issues.apache.org/jira/browse/OOZIE-2972
> Project: Oozie
>  Issue Type: Bug
>  Components: security
>Affects Versions: 4.3.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> When prepare-war with secure is called by some user by mistake on a Oozie 
> Server which is not configured with SSL causes inconsistent state. Oozie 
> Server runs fine but the oozie clients are failed with Authentication failure 
> status 302. Checking curl verbose, Oozie Server redirects client to https 
> port even though it is not listening. We need to validate the prepare-war 
> command when SSL is not configured instead of going to inconsistent state.
> Repro:
> {code}
> Oozie Server without SSL
> /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war -secure
> Start Oozie Server
>  curl -ikv -L --negotiate -u: 
> http://prabhuzeppelin2.openstacklocal:11000/oozie/v1/admin/status
> * About to connect() to prabhuzeppelin2.openstacklocal port 11000 (#0)
> *   Trying 172.26.93.73... connected
> * Connected to prabhuzeppelin2.openstacklocal (172.26.93.73) port 11000 (#0)
> > GET /oozie/v1/admin/status HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 
> > zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: prabhuzeppelin2.openstacklocal:11000
> > Accept: */*
> > 
> < HTTP/1.1 302 Found
> HTTP/1.1 302 Found
> < Server: Apache-Coyote/1.1
> Server: Apache-Coyote/1.1
> < Pragma: No-cache
> Pragma: No-cache
> < Cache-Control: no-cache
> Cache-Control: no-cache
> < Expires: Thu, 01 Jan 1970 00:00:00 UTC
> Expires: Thu, 01 Jan 1970 00:00:00 UTC
> < Location: https://prabhuzeppelin2.openstacklocal:11443/oozie/v1/admin/status
> Location: https://prabhuzeppelin2.openstacklocal:11443/oozie/v1/admin/status
> < Content-Length: 0
> Content-Length: 0
> < Date: Tue, 27 Jun 2017 11:05:45 GMT
> Date: Tue, 27 Jun 2017 11:05:45 GMT
> < 
> * Connection #0 to host prabhuzeppelin2.openstacklocal left intact
> * Issue another request to this URL: 
> 'https://prabhuzeppelin2.openstacklocal:11443/oozie/v1/admin/status'
> * About to connect() to prabhuzeppelin2.openstacklocal port 11443 (#1)
> *   Trying 172.26.93.73... Connection refused
> * couldn't connect to host
> * Closing connection #1
> curl: (7) couldn't connect to host
> * Closing connection #0
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-2231) Upgrade curator to latest version 2.13.0

2019-04-11 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815420#comment-16815420
 ] 

Denes Bodo commented on OOZIE-2231:
---

[~kmarton] Curator 2.x supports Zookeeper 3.4.x, while Curator 3.x supports 
only Zookeeper 3.5.x. It can be risky from Oozie's side to replace the 
supported ZK version, but using the same dependency as Hadoop means 
reliability. As I see in the Hadoop and Hive pom files they use Curator 
2.12.0, so Oozie should use the same in my opinion. I cannot see that any of 
these use Curator 3.
+1 to Curator 2.12.0 or 2.13.0

> Upgrade curator to latest version 2.13.0
> 
>
> Key: OOZIE-2231
> URL: https://issues.apache.org/jira/browse/OOZIE-2231
> Project: Oozie
>  Issue Type: Bug
>  Components: HA
>Reporter: Purshotam Shah
>Assignee: Julia Kinga Marton
>Priority: Blocker
> Fix For: 5.2.0
>
> Attachments: OOZIE-2231-00.patch, OOZIE-2231-01.patch, 
> OOZIE-2231-02.patch, OOZIE-2231-02.patch, OOZIE-2231-03.patch, 
> OOZIE-2231-04.patch, OOZIE-2231-05.patch, OOZIE-2231-06.patch
>
>
> It have some fix related to InterProcessReadWriteLock, ChildReaper, 
> LeaderSelector which we use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OOZIE-3328) Create Hive compatibility action executor to run hive actions using beeline

2019-04-05 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810778#comment-16810778
 ] 

Denes Bodo edited comment on OOZIE-3328 at 4/5/19 12:55 PM:


The suggested implementation needs Hive 3 as a dependency because of the 
beeline-site.xml location it uses. I am uploading the diff here and to 
[RB|https://reviews.apache.org/r/70406/]. CC [~matijhs]


was (Author: dionusos):
The suggested implementation needs Hive3 as dependency due to the used 
beeline-site.xml location. I am uploading the diff here and to 
[RB|https://reviews.apache.org/r/70406/].

> Create Hive compatibility action executor to run hive actions using beeline
> ---
>
> Key: OOZIE-3328
> URL: https://issues.apache.org/jira/browse/OOZIE-3328
> Project: Oozie
>  Issue Type: Task
>  Components: action, core
>Affects Versions: 5.0.0, 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
>  Labels: features, usability
> Attachments: OOZIE-3328.001.patch
>
>
> If I am correct then Hive will not support HiveCli for long and Oozie may 
> have to handle this.
> A new executor shall be created which can understand the original hive action 
> format while this executor shall run the action using beeline.
> What are your thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3328) Create Hive compatibility action executor to run hive actions using beeline

2019-04-05 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3328:
--
Attachment: OOZIE-3328.001.patch

> Create Hive compatibility action executor to run hive actions using beeline
> ---
>
> Key: OOZIE-3328
> URL: https://issues.apache.org/jira/browse/OOZIE-3328
> Project: Oozie
>  Issue Type: Task
>  Components: action, core
>Affects Versions: 5.0.0, 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
>  Labels: features, usability
> Attachments: OOZIE-3328.001.patch
>
>
> If I am correct then Hive will not support HiveCli for long and Oozie may 
> have to handle this.
> A new executor shall be created which can understand the original hive action 
> format while this executor shall run the action using beeline.
> What are your thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3328) Create Hive compatibility action executor to run hive actions using beeline

2019-04-05 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810778#comment-16810778
 ] 

Denes Bodo commented on OOZIE-3328:
---

The suggested implementation needs Hive 3 as a dependency because of the 
beeline-site.xml location it uses. I am uploading the diff here and to 
[RB|https://reviews.apache.org/r/70406/].

> Create Hive compatibility action executor to run hive actions using beeline
> ---
>
> Key: OOZIE-3328
> URL: https://issues.apache.org/jira/browse/OOZIE-3328
> Project: Oozie
>  Issue Type: Task
>  Components: action, core
>Affects Versions: 5.0.0, 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
>  Labels: features, usability
>
> If I am correct then Hive will not support HiveCli for long and Oozie may 
> have to handle this.
> A new executor shall be created which can understand the original hive action 
> format while this executor shall run the action using beeline.
> What are your thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OOZIE-3204) Oozie cannot run HBase code in Java action

2019-04-05 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo resolved OOZIE-3204.
---
Resolution: Invalid

> Oozie cannot run HBase code in Java action 
> ---
>
> Key: OOZIE-3204
> URL: https://issues.apache.org/jira/browse/OOZIE-3204
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>
> After having custom raw file system implementation, HBase (java) action 
> cannot run:
> {noformat}
> 018-03-28 06:55:46,372  WARN HbaseCredentials:523 - 
> SERVER[ctr-e138-1518143905142-137559-01-03.hwx.site] USER[hrt_qa] 
> GROUP[-] TOKEN[] APP[tpch_query1] JOB[002-180328065157516-oozie-oozi-W] 
> ACTION[002-180328065157516-oozie-oozi-W@tpch_query1] Exception in 
> receiving hbase credentials
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>   at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:68)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials$1.run(HbaseCredentials.java:86)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials$1.run(HbaseCredentials.java:84)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials.obtainToken(HbaseCredentials.java:83)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials.addtoJobConf(HbaseCredentials.java:56)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.setCredentialTokens(JavaActionExecutor.java:1338)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1178)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1424)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:65)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:331)
>   at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:260)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:178)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>   ... 24 more
> Caused by: java.lang.ExceptionInInitializerError
>   at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:903)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:648)
>   ... 29 more
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is 

[jira] [Assigned] (OOZIE-3204) Oozie cannot run HBase code in Java action

2019-04-05 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo reassigned OOZIE-3204:
-

Assignee: Denes Bodo

> Oozie cannot run HBase code in Java action 
> ---
>
> Key: OOZIE-3204
> URL: https://issues.apache.org/jira/browse/OOZIE-3204
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>
> After having custom raw file system implementation, HBase (java) action 
> cannot run:
> {noformat}
> 018-03-28 06:55:46,372  WARN HbaseCredentials:523 - 
> SERVER[ctr-e138-1518143905142-137559-01-03.hwx.site] USER[hrt_qa] 
> GROUP[-] TOKEN[] APP[tpch_query1] JOB[002-180328065157516-oozie-oozi-W] 
> ACTION[002-180328065157516-oozie-oozi-W@tpch_query1] Exception in 
> receiving hbase credentials
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>   at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:68)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials$1.run(HbaseCredentials.java:86)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials$1.run(HbaseCredentials.java:84)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials.obtainToken(HbaseCredentials.java:83)
>   at 
> org.apache.oozie.action.hadoop.HbaseCredentials.addtoJobConf(HbaseCredentials.java:56)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.setCredentialTokens(JavaActionExecutor.java:1338)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1178)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1424)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:65)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:331)
>   at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:260)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:178)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>   ... 24 more
> Caused by: java.lang.ExceptionInInitializerError
>   at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:903)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:648)
>   ... 29 more
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 

[jira] [Commented] (OOZIE-3459) Oozie cannot be built using Java 11

2019-04-01 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16806797#comment-16806797
 ] 

Denes Bodo commented on OOZIE-3459:
---

[~nobigo] I think JDK 11 is fine with OpenJPA, according to the unit tests. However, 
when I tried Oozie 4.3.1 with JAVA_HOME set to JDK 11, I got the following:
{noformat}
Error: Could not connect to the database: 
org.apache.oozie.service.ServiceException: E0100: Could not initialize service 
[org.apache.oozie.service.HadoopAccessorService], failure to login: for 
principal: oozie/ctr-e139-1542663976389-92586-01-02.hwx.s...@example.com 
from keytab /etc/security/keytabs/oozie.service.keytab 
javax.security.auth.login.LoginException: Message stream modified (41)

Stack trace for the error was (for debug purposes):
--
java.lang.Exception: Could not connect to the database: 
org.apache.oozie.service.ServiceException: E0100: Could not initialize service 
[org.apache.oozie.service.HadoopAccessorService], failure to login: for 
principal: oozie/ctr-e139-1542663976389-92586-01-02.hwx.s...@example.com 
from keytab /etc/security/keytabs/oozie.service.keytab 
javax.security.auth.login.LoginException: Message stream modified (41)
at 
org.apache.oozie.tools.OozieDBCLI.validateConnection(OozieDBCLI.java:968)
at org.apache.oozie.tools.OozieDBCLI.createDB(OozieDBCLI.java:193)
at org.apache.oozie.tools.OozieDBCLI.run(OozieDBCLI.java:131)
at org.apache.oozie.tools.OozieDBCLI.main(OozieDBCLI.java:79)
Caused by: org.apache.oozie.service.ServiceException: E0100: Could not 
initialize service [org.apache.oozie.service.HadoopAccessorService], failure to 
login: for principal: 
oozie/ctr-e139-1542663976389-92586-01-02.hwx.s...@example.com from keytab 
/etc/security/keytabs/oozie.service.keytab 
javax.security.auth.login.LoginException: Message stream modified (41)
at 
org.apache.oozie.service.HadoopAccessorService.kerberosInit(HadoopAccessorService.java:244)
at 
org.apache.oozie.service.HadoopAccessorService.init(HadoopAccessorService.java:143)
at 
org.apache.oozie.service.HadoopAccessorService.init(HadoopAccessorService.java:114)
at 
org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.tools.OozieDBCLI.getJdbcConf(OozieDBCLI.java:180)
at 
org.apache.oozie.tools.OozieDBCLI.createConnection(OozieDBCLI.java:956)
at 
org.apache.oozie.tools.OozieDBCLI.validateConnection(OozieDBCLI.java:964)
... 3 more
Caused by: org.apache.hadoop.security.KerberosAuthException: failure to login: 
for principal: 
oozie/ctr-e139-1542663976389-92586-01-02.hwx.s...@example.com from keytab 
/etc/security/keytabs/oozie.service.keytab 
javax.security.auth.login.LoginException: Message stream modified (41)
at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
at 
org.apache.oozie.service.HadoopAccessorService.kerberosInit(HadoopAccessorService.java:236)
... 12 more
Caused by: javax.security.auth.login.LoginException: Message stream modified 
(41)
at 
jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:781)
at 
jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:592)
at 
java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:726)
at 
java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
at 
java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
at 
java.base/javax.security.auth.login.LoginContext.login(LoginContext.java:574)
at 
org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.login(UserGroupInformation.java:1926)
at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1837)
... 15 more
Caused by: KrbException: Message stream modified (41)
at 
java.security.jgss/sun.security.krb5.KrbKdcRep.check(KrbKdcRep.java:83)
at 
java.security.jgss/sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:158)
at 
java.security.jgss/sun.security.krb5.KrbAsRep.decryptUsingKeyTab

[jira] [Updated] (OOZIE-3459) Oozie cannot be built using Java 11

2019-04-01 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3459:
--
Description: 
Using OpenJDK 11 I am not able to build Oozie using {{mvn clean install}}.

I found three issues:
 * Fluent job API build fails due to Jaxb2 maven plugin.
 * No {{com.sun.tools.}} package is available so *TestMetricsInstrumentation* 
will not work.
 * The Maven surefire plugin has to be updated; it works with 3.0.0-M3.

  was:
Using OpenJDK 11 I am not able to build Oozie using {{mvn clean install}}.

I found two issues:
 * Fluent job API build fails due to Jaxb2 maven plugin.
 * No {{com.sun.tools.}} package is available so *TestMetricsInstrumentation* 
will not work.


> Oozie cannot be built using Java 11
> ---
>
> Key: OOZIE-3459
> URL: https://issues.apache.org/jira/browse/OOZIE-3459
> Project: Oozie
>  Issue Type: Bug
>  Components: core, fluent-job
>Affects Versions: 5.1.0
>    Reporter: Denes Bodo
>Priority: Major
>
> Using OpenJDK 11 I am not able to build Oozie using {{mvn clean install}}.
> I found three issues:
>  * Fluent job API build fails due to Jaxb2 maven plugin.
>  * No {{com.sun.tools.}} package is available so *TestMetricsInstrumentation* 
> will not work.
>  * The Maven surefire plugin has to be updated; it works with 3.0.0-M3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OOZIE-3459) Oozie cannot be built using Java 11

2019-03-29 Thread Denes Bodo (JIRA)
Denes Bodo created OOZIE-3459:
-

 Summary: Oozie cannot be built using Java 11
 Key: OOZIE-3459
 URL: https://issues.apache.org/jira/browse/OOZIE-3459
 Project: Oozie
  Issue Type: Bug
  Components: core, fluent-job
Affects Versions: 5.1.0
Reporter: Denes Bodo


Using OpenJDK 11 I am not able to build Oozie using {{mvn clean install}}.

I found two issues:
 * Fluent job API build fails due to Jaxb2 maven plugin.
 * No {{com.sun.tools.}} package is available so *TestMetricsInstrumentation* 
will not work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OOZIE-148) GH-126: PurgeCommand should purge the workflow jobs w/o end_time

2019-03-21 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797895#comment-16797895
 ] 

Denes Bodo edited comment on OOZIE-148 at 3/21/19 7:51 AM:
---

I think this can be closed. See OOZIE-1401.
cc [~kmarton] [~asalamon74]


was (Author: dionusos):
I think this can be closed. See OOZIE-1401.
cc [~BoglarkaEgyed] [~asalamon74]

> GH-126: PurgeCommand should purge the workflow jobs w/o end_time
> 
>
> Key: OOZIE-148
> URL: https://issues.apache.org/jira/browse/OOZIE-148
> Project: Oozie
>  Issue Type: Bug
>Reporter: Hadoop QA
>Priority: Major
>
> Currently, PurgeCommand is not working with those workflow jobs with 
> end_time=null. This command needs to take care of those jobs as well. It 
> could be done by checking created_time if end_time is not available.
> The current query:
> select w from WorkflowJobBean w where w.endTimestamp < :endTime
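A minimal sketch of the extended query suggested above, assuming the bean exposes the
creation time as createdTimestamp (the actual field name in WorkflowJobBean may differ):

{code}
select w from WorkflowJobBean w
 where (w.endTimestamp is not null and w.endTimestamp < :endTime)
    or (w.endTimestamp is null and w.createdTimestamp < :endTime)
{code}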



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-148) GH-126: PurgeCommand should purge the workflow jobs w/o end_time

2019-03-21 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797895#comment-16797895
 ] 

Denes Bodo commented on OOZIE-148:
--

I think this can be closed. See OOZIE-1401.
cc [~BoglarkaEgyed] [~asalamon74]

> GH-126: PurgeCommand should purge the workflow jobs w/o end_time
> 
>
> Key: OOZIE-148
> URL: https://issues.apache.org/jira/browse/OOZIE-148
> Project: Oozie
>  Issue Type: Bug
>Reporter: Hadoop QA
>Priority: Major
>
> Currently, PurgeCommand is not working with those workflow jobs with 
> end_time=null. This command needs to take care of those jobs as well. It 
> could be done by checking created_time if end_time is not available.
> The current query:
> select w from WorkflowJobBean w where w.endTimestamp < :endTime



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-1974) SSH Action doesn't handle compound commands eg: cmd1 && cmd2 and stuck in [PREP] stage

2019-03-08 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787848#comment-16787848
 ] 

Denes Bodo commented on OOZIE-1974:
---

[~michalisk] Do you think OOZIE-2126 can be related to your issue? Thanks

> SSH Action doesn't handle compound commands eg: cmd1 && cmd2 and stuck in 
> [PREP] stage
> --
>
> Key: OOZIE-1974
> URL: https://issues.apache.org/jira/browse/OOZIE-1974
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: trunk
>Reporter: Michalis Kongtongk
>Priority: Major
>
> example WF that will fail:
> {code}
>  
>  
>  
>  
> oozie-u...@somedomain.com 
> kinit 
> oozie-u...@somedomain.com 
> -k 
> -t 
> /home/oozie-user/oozie.keytab 
>  
> hdfs 
> dfs 
> -put 
> /tmp/random-file.txt 
> /tmp/random-file.txt 
>  
>  
>  
>  
>  
> Action failed, error 
> message[${wf:errorMessage(wf:lastErrorNode())}] 
>  
>  
> 
> {code}
> Workaround is to execute the compound command in subshell eg: $(cmd1 && cmd2) 
> {code}
>  
>  
>  
>  
> oozie-u...@somedomain.com 
> $(kinit 
> oozie-u...@somedomain.com 
> -k 
> -t 
> /home/oozie-user/oozie.keytab 
>  
> hdfs 
> dfs 
> -put 
> /tmp/random-file.txt 
> /tmp/random-file.txt 
> ) 
>  
>  
>  
>  
>  
> Action failed, error 
> message[${wf:errorMessage(wf:lastErrorNode())}] 
>  
>  
> 
> {code}
> Stack trace "org.apache.oozie.command.CommandException: E0800: Action it is 
> not running its in [PREP] state,"
> {code}
> 2014-08-05 23:29:49,721 INFO org.apache.oozie.action.ssh.SshActionExecutor: 
> SERVER[192-168-88-213.lunix.lan] USER[mko] GROUP[-] TOKEN[] APP[Ssh-copy] 
> JOB[008-140805224842389-oozie-oozi-W] 
> ACTION[008-140805224842389-oozie-oozi-W@Ssh] start() begins 
> 2014-08-05 23:29:49,723 INFO org.apache.oozie.action.ssh.SshActionExecutor: 
> SERVER[192-168-88-213.lunix.lan] USER[mko] GROUP[-] TOKEN[] APP[Ssh-copy] 
> JOB[008-140805224842389-oozie-oozi-W] 
> ACTION[008-140805224842389-oozie-oozi-W@Ssh] Attempting to copy ssh base 
> scripts to remote host [m...@192-168-88-213.lunix.lan] 
> 2014-08-05 23:29:52,691 INFO org.apache.oozie.servlet.CallbackServlet: 
> SERVER[192-168-88-213.lunix.lan] USER[-] GROUP[-] TOKEN[-] APP[-] 
> JOB[008-140805224842389-oozie-oozi-W] 
> ACTION[008-140805224842389-oozie-oozi-W@Ssh] callback for action 
> [008-140805224842389-oozie-oozi-W@Ssh] 
> 2014-08-05 23:29:52,714 ERROR 
> org.apache.oozie.command.wf.CompletedActionXCommand: 
> SERVER[192-168-88-213.lunix.lan] USER[-] GROUP[-] TOKEN[] APP[-] 
> JOB[008-140805224842389-oozie-oozi-W] 
> ACTION[008-140805224842389-oozie-oozi-W@Ssh] XException, 
> org.apache.oozie.command.CommandException: E0800: Action it is not running 
> its in [PREP] state, action [008-140805224842389-oozie-oozi-W@Ssh] 
> at 
> org.apache.oozie.command.wf.CompletedActionXCommand.eagerVerifyPrecondition(CompletedActionXCommand.java:77)
>  
> at org.apache.oozie.command.XCommand.call(XCommand.java:251) 
> at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>  
> at java.lang.Thread.run(Thread.java:662) 
> 2014-08-05 23:29:52,714 WARN 
> org.apache.oozie.service.CallableQueueService$CallableWrapper: 
> SERVER[192-168-88-213.lunix.lan] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] 
> ACTION[-] exception callable [callback], E0800: Action it is not running its 
> in [PREP] state, action [008-140805224842389-oozie-oozi-W@Ssh] 
> org.apache.oozie.command.CommandException: E0800: Action it is not running 
> its in [PREP] state, action [008-140805224842389-oozie-oozi-W@Ssh] 
> at 
> org.apache.oozie.command.wf.CompletedActionXCommand.eagerVerifyPrecondition(CompletedActionXCommand.java:77)
>  
> at org.apache.oozie.command.XCommand.call(XCommand.java:251) 
> at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>  
> at java.lang.Thread.run(Thread.java:662) 
> 2014-08-05 23:29:57,262 INFO org.apache.oozie.action.ssh.SshActionExecutor: 
> SERVER[192-168-88-213.lunix.lan] USER[mko] GROUP[-] TOKEN[] APP[Ssh-copy] 
> JOB[008-140805224842389-oozie-oozi-W] 
> ACTION[008-140805224842389-oozie-oozi-W@Ssh] start() ends
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 66656: Exclusion pattern for sharelib.

2019-03-06 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66656/#review213472
---



Please use the final modifier where possible in the new code.


core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java
Lines 1164-1165 (patched)
<https://reviews.apache.org/r/66656/#comment299404>

In case of an error we should throw an exception. If this is not a real 
error, then I think we should use WARN level.



core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java
Lines 1173 (patched)
<https://reviews.apache.org/r/66656/#comment299405>

Are you sure it is not WARN?



core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java
Lines 1177 (patched)
<https://reviews.apache.org/r/66656/#comment299406>

Are you sure it is not WARN?



core/src/main/java/org/apache/oozie/action/hadoop/ShareLibExcluder.java
Lines 46 (patched)
<https://reviews.apache.org/r/66656/#comment299407>

... pattern is not ...



core/src/main/java/org/apache/oozie/action/hadoop/ShareLibExcluder.java
Lines 62 (patched)
<https://reviews.apache.org/r/66656/#comment299411>

maybeExcludePattern can be final



core/src/main/java/org/apache/oozie/action/hadoop/ShareLibExcluder.java
Lines 64 (patched)
<https://reviews.apache.org/r/66656/#comment299410>

excludePattern can be final



core/src/main/java/org/apache/oozie/action/hadoop/ShareLibExcluder.java
Lines 74 (patched)
<https://reviews.apache.org/r/66656/#comment299408>

please be consistent with the *final* modifier in parameter lists: use it 
everywhere or nowhere.



core/src/main/java/org/apache/oozie/action/hadoop/ShareLibExcluder.java
Lines 86 (patched)
<https://reviews.apache.org/r/66656/#comment299409>

What does "skipping" mean? I would like to see a more detailed message about 
what happens in this case.



core/src/main/java/org/apache/oozie/service/ShareLibService.java
Lines 624 (patched)
<https://reviews.apache.org/r/66656/#comment299412>

What can cause that IOException?



core/src/test/java/org/apache/oozie/action/hadoop/TestJavaActionExecutorLibAddition.java
Lines 431 (patched)
<https://reviews.apache.org/r/66656/#comment299403>

please remove trailing whitespaces


- Denes Bodo


On March 5, 2019, 4:54 p.m., Mate Juhasz wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66656/
> ---
> 
> (Updated March 5, 2019, 4:54 p.m.)
> 
> 
> Review request for oozie, András Piros and Denes Bodo.
> 
> 
> Bugs: OOZIE-1624
> https://issues.apache.org/jira/browse/OOZIE-1624
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> OOZIE-1624 Exclusion pattern for sharelib.
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java 
> 231b38ea 
>   core/src/main/java/org/apache/oozie/action/hadoop/ShareLibExcluder.java 
> PRE-CREATION 
>   core/src/main/java/org/apache/oozie/service/ShareLibService.java b88dab3a 
>   
> core/src/test/java/org/apache/oozie/action/hadoop/ActionExecutorTestCase.java 
> 05511e4c 
>   
> core/src/test/java/org/apache/oozie/action/hadoop/TestJavaActionExecutor.java 
> 6383e814 
>   
> core/src/test/java/org/apache/oozie/action/hadoop/TestJavaActionExecutorLibAddition.java
>  PRE-CREATION 
>   core/src/test/java/org/apache/oozie/action/hadoop/TestShareLibExcluder.java 
> PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/66656/diff/7/
> 
> 
> Testing
> ---
> 
> Tested on a cluster
> 
> 
> Thanks,
> 
> Mate Juhasz
> 
>



[jira] [Updated] (OOZIE-1702) [build] Fix Javadoc warnings

2019-03-01 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-1702:
--
Attachment: OOZIE-1702-002.patch

> [build] Fix Javadoc warnings
> 
>
> Key: OOZIE-1702
> URL: https://issues.apache.org/jira/browse/OOZIE-1702
> Project: Oozie
>  Issue Type: Wish
>  Components: build
>Affects Versions: trunk
>Reporter: Mona Chitnis
>Assignee: Denes Bodo
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: OOZIE-1702-001.patch, OOZIE-1702-002.patch
>
>
> A lot of warnings are thrown during Oozie compilation, complaining about 
> javadoc mistakes, missing links etc among probably other severe ones. This 
> clutters the output. This JIRA is to fix these warnings



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 70072: Reduce warnings thrown while building

2019-03-01 Thread Denes Bodo
://reviews.apache.org/r/70072/diff/3-4/


Testing
---

mvn javadoc:javadoc does not show warning


Thanks,

Denes Bodo



Re: Review Request 70072: Reduce warnings thrown while building

2019-03-01 Thread Denes Bodo
 does not show warning


Thanks,

Denes Bodo



Re: Review Request 70072: Reduce warnings thrown while building

2019-03-01 Thread Denes Bodo
 does not show warning


Thanks,

Denes Bodo



[jira] [Commented] (OOZIE-1702) [build] Fix Javadoc warnings

2019-02-28 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780646#comment-16780646
 ] 

Denes Bodo commented on OOZIE-1702:
---

The purpose of my patch is to eliminate the 100+ javadoc warnings. The interesting 
thing is that when I checked with *mvn javadoc:javadoc*, they were all gone. I'll 
check it later.

> [build] Fix Javadoc warnings
> 
>
> Key: OOZIE-1702
> URL: https://issues.apache.org/jira/browse/OOZIE-1702
> Project: Oozie
>  Issue Type: Wish
>  Components: build
>Affects Versions: trunk
>Reporter: Mona Chitnis
>Assignee: Denes Bodo
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: OOZIE-1702-001.patch
>
>
> A lot of warnings are thrown during Oozie compilation, complaining about 
> javadoc mistakes, missing links etc among probably other severe ones. This 
> clutters the output. This JIRA is to fix these warnings



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (OOZIE-1702) Reduce warnings thrown while building

2019-02-28 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo reassigned OOZIE-1702:
-

Assignee: Denes Bodo

> Reduce warnings thrown while building
> -
>
> Key: OOZIE-1702
> URL: https://issues.apache.org/jira/browse/OOZIE-1702
> Project: Oozie
>  Issue Type: Wish
>  Components: build
>Affects Versions: trunk
>Reporter: Mona Chitnis
>Assignee: Denes Bodo
>Priority: Minor
>  Labels: documentation, newbie
>
> A lot of warnings are thrown during Oozie compilation, complaining about 
> javadoc mistakes, missing links etc among probably other severe ones. This 
> clutters the output. This JIRA is to fix these warnings



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Review Request 70072: Reduce warnings thrown while building

2019-02-28 Thread Denes Bodo
/StringBlobValueHandler.java 
c2e246a6e 
  
core/src/main/java/org/apache/oozie/executor/jpa/WorkflowActionDeleteJPAExecutor.java
 810f0e663 
  
core/src/main/java/org/apache/oozie/executor/jpa/WorkflowActionRetryManualGetJPAExecutor.java
 ed2400b11 
  
core/src/main/java/org/apache/oozie/executor/jpa/WorkflowActionSubsetGetJPAExecutor.java
 9ae364639 
  
core/src/main/java/org/apache/oozie/executor/jpa/WorkflowInfoWithActionsSubsetGetJPAExecutor.java
 3ceda38c0 
  
core/src/main/java/org/apache/oozie/executor/jpa/WorkflowJobsDeleteJPAExecutor.java
 01fe53495 
  
core/src/main/java/org/apache/oozie/executor/jpa/WorkflowsJobGetJPAExecutor.java
 9e68f49c6 
  core/src/main/java/org/apache/oozie/jms/ConnectionContext.java 4a76dfbe2 
  core/src/main/java/org/apache/oozie/jms/JMSExceptionListener.java 75751eba4 
  core/src/main/java/org/apache/oozie/local/LocalOozie.java 6475f33ed 
  core/src/main/java/org/apache/oozie/service/AuthorizationService.java 
6f72c4193 
  core/src/main/java/org/apache/oozie/service/CallableQueueService.java 
a94260002 
  core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java 
05cdc555a 
  core/src/main/java/org/apache/oozie/service/JMSTopicService.java a28330b6b 
  core/src/main/java/org/apache/oozie/service/JPAService.java cefcb43b8 
  
core/src/main/java/org/apache/oozie/service/MetricsInstrumentationService.java 
b57a2a5c6 
  core/src/main/java/org/apache/oozie/service/ProxyUserService.java 4bfd52779 
  core/src/main/java/org/apache/oozie/service/SLAStoreService.java 02899ccee 
  core/src/main/java/org/apache/oozie/service/SchedulerService.java 81fbc0d19 
  core/src/main/java/org/apache/oozie/service/SchemaService.java 14d1eeb14 
  core/src/main/java/org/apache/oozie/service/Services.java 739160542 
  core/src/main/java/org/apache/oozie/service/StoreService.java 7868e021a 
  core/src/main/java/org/apache/oozie/service/URIHandlerService.java c4a370179 
  core/src/main/java/org/apache/oozie/service/WorkflowAppService.java 17d1d2e1e 
  core/src/main/java/org/apache/oozie/service/WorkflowStoreService.java 
72e0fe480 
  core/src/main/java/org/apache/oozie/service/XLogService.java 1ac82f5a8 
  core/src/main/java/org/apache/oozie/servlet/BaseAdminServlet.java 85610eb51 
  core/src/main/java/org/apache/oozie/servlet/BaseJobServlet.java 31456503a 
  core/src/main/java/org/apache/oozie/servlet/BaseJobsServlet.java f351ae7a6 
  core/src/main/java/org/apache/oozie/servlet/V0JobsServlet.java 0ff9c6aa5 
  core/src/main/java/org/apache/oozie/servlet/V1AdminServlet.java 8eb6ee1a0 
  core/src/main/java/org/apache/oozie/servlet/V1JobServlet.java 9acd5719a 
  core/src/main/java/org/apache/oozie/servlet/V1JobsServlet.java a582b1b12 
  core/src/main/java/org/apache/oozie/servlet/V2JobServlet.java c2b90c179 
  core/src/main/java/org/apache/oozie/sla/SLACalcStatus.java 3d7d6e8a5 
  core/src/main/java/org/apache/oozie/sla/SLACalculator.java db7186511 
  core/src/main/java/org/apache/oozie/sla/SLAOperations.java 390500341 
  core/src/main/java/org/apache/oozie/sla/listener/SLAEventListener.java 
ab771043f 
  core/src/main/java/org/apache/oozie/store/OozieSchema.java ea785d397 
  core/src/main/java/org/apache/oozie/store/SLAStore.java 34f47fb83 
  core/src/main/java/org/apache/oozie/store/Store.java b60f0225d 
  core/src/main/java/org/apache/oozie/store/WorkflowStore.java 573bcdd2e 
  core/src/main/java/org/apache/oozie/util/BufferDrainer.java 304fd6d5d 
  core/src/main/java/org/apache/oozie/util/ConfigUtils.java af54145c9 
  core/src/main/java/org/apache/oozie/util/XConfiguration.java d6e59a63d 
  core/src/main/java/org/apache/oozie/util/db/FailingDBHelperForTest.java 
af55b068a 
  core/src/main/java/org/apache/oozie/util/db/SqlStatement.java 1229ad20b 
  core/src/main/java/org/apache/oozie/workflow/WorkflowInstance.java 931364280 
  core/src/main/java/org/apache/oozie/workflow/lite/DBLiteWorkflowLib.java 
22c14fce7 
  core/src/main/java/org/apache/oozie/workflow/lite/LiteWorkflowAppParser.java 
a767124b2 


Diff: https://reviews.apache.org/r/70072/diff/1/


Testing
---

mvn javadoc:javadoc does not show warning


Thanks,

Denes Bodo



[jira] [Updated] (OOZIE-1702) Reduce warnings thrown while building

2019-02-28 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-1702:
--
Attachment: OOZIE-1702-001.patch

> Reduce warnings thrown while building
> -
>
> Key: OOZIE-1702
> URL: https://issues.apache.org/jira/browse/OOZIE-1702
> Project: Oozie
>  Issue Type: Wish
>  Components: build
>Affects Versions: trunk
>Reporter: Mona Chitnis
>Assignee: Denes Bodo
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: OOZIE-1702-001.patch
>
>
> A lot of warnings are thrown during Oozie compilation, complaining about 
> javadoc mistakes, missing links etc among probably other severe ones. This 
> clutters the output. This JIRA is to fix these warnings



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 70042: Oozie Spark action replaces path symlink # to %23

2019-02-26 Thread Denes Bodo


> On Feb. 26, 2019, 12:08 p.m., Andras Salamon wrote:
> > sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
> > Lines 461 (patched)
> > <https://reviews.apache.org/r/70042/diff/2/?file=2126600#file2126600line461>
> >
> > If neither positive nor negative is present, then the assertTrue above 
> > should give us an error.
> > 
> > If I understand correctly we reach this fail if the collection has no 
> > --files / --archives item.

Just in case --files or --archives changes, the test will let us know. I 
tried to provide a meaningful error message; I hope I managed to.


- Denes


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70042/#review213221
---


On Feb. 26, 2019, 4:41 p.m., Denes Bodo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70042/
> ---
> 
> (Updated Feb. 26, 2019, 4:41 p.m.)
> 
> 
> Review request for oozie.
> 
> 
> Bugs: OOZIE-3440
> https://issues.apache.org/jira/browse/OOZIE-3440
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> 
> 
> Diffs
> -
> 
>   
> sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java
>  28d9c5cc4 
>   
> sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
>  7ccd26ad5 
> 
> 
> Diff: https://reviews.apache.org/r/70042/diff/3/
> 
> 
> Testing
> ---
> 
> Unit tests run
> 
> 
> Thanks,
> 
> Denes Bodo
> 
>



Re: Review Request 70042: Oozie Spark action replaces path symlink # to %23

2019-02-26 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70042/
---

(Updated Feb. 26, 2019, 4:41 p.m.)


Review request for oozie.


Bugs: OOZIE-3440
https://issues.apache.org/jira/browse/OOZIE-3440


Repository: oozie-git


Description
---

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], 
main() threw exception, File 
file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist


Diffs (updated)
-

  
sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java
 28d9c5cc4 
  
sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
 7ccd26ad5 


Diff: https://reviews.apache.org/r/70042/diff/3/

Changes: https://reviews.apache.org/r/70042/diff/2-3/


Testing
---

Unit tests run


Thanks,

Denes Bodo



[jira] [Updated] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-26 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3440:
--
Attachment: OOZIE-3440-003.patch

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 2.7.3
> Spark 2 - 2.3.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch, OOZIE-3440-002.patch, 
> OOZIE-3440-003.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-25 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777664#comment-16777664
 ] 

Denes Bodo commented on OOZIE-3440:
---

Thanks [~asalamon74] for your comments, and thanks [~nobigo] for the testing. 
I've made the requested changes; please see the review board. Thanks.

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 2.7.3
> Spark 2 - 2.3.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch, OOZIE-3440-002.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 70042: Oozie Spark action replaces path symlink # to %23

2019-02-25 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70042/
---

(Updated Feb. 25, 2019, 1:42 p.m.)


Review request for oozie.


Bugs: OOZIE-3440
https://issues.apache.org/jira/browse/OOZIE-3440


Repository: oozie-git


Description
---

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], 
main() threw exception, File 
file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist


Diffs (updated)
-

  
sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java
 28d9c5cc4 
  
sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
 7ccd26ad5 


Diff: https://reviews.apache.org/r/70042/diff/2/

Changes: https://reviews.apache.org/r/70042/diff/1-2/


Testing
---

Unit tests run


Thanks,

Denes Bodo



Re: Review Request 70042: Oozie Spark action replaces path symlink # to %23

2019-02-25 Thread Denes Bodo


> On Feb. 25, 2019, 11:04 a.m., Andras Salamon wrote:
> > sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
> > Lines 398 (patched)
> > <https://reviews.apache.org/r/70042/diff/1/?file=2126418#file2126418line398>
> >
> > There are two ways to specify the files:
> > 
> > --files 
> > --files=
> > 
> > Could you please also test the second type.

I do not think it is worth creating parameterized tests just for these 4 cases.


> On Feb. 25, 2019, 11:04 a.m., Andras Salamon wrote:
> > sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
> > Lines 410-414 (patched)
> > <https://reviews.apache.org/r/70042/diff/1/?file=2126418#file2126418line410>
> >
> > I think it would be possible to create better assertions here without 
> > converting the List to String. 
> > 
> > Could you please also add assert messages.

I cannot see a better check for an array list. Any ideas are welcome.


- Denes


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70042/#review213151
---


On Feb. 25, 2019, 1:42 p.m., Denes Bodo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70042/
> ---
> 
> (Updated Feb. 25, 2019, 1:42 p.m.)
> 
> 
> Review request for oozie.
> 
> 
> Bugs: OOZIE-3440
> https://issues.apache.org/jira/browse/OOZIE-3440
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> 
> 
> Diffs
> -
> 
>   
> sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java
>  28d9c5cc4 
>   
> sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
>  7ccd26ad5 
> 
> 
> Diff: https://reviews.apache.org/r/70042/diff/2/
> 
> 
> Testing
> ---
> 
> Unit tests run
> 
> 
> Thanks,
> 
> Denes Bodo
> 
>



[jira] [Updated] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-25 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3440:
--
Attachment: OOZIE-3440-002.patch

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 2.7.3
> Spark 2 - 2.3.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch, OOZIE-3440-002.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16775154#comment-16775154
 ] 

Denes Bodo commented on OOZIE-3440:
---

I've updated the environment info:
HDP 2.6.5, Hadoop 2.7.3 and Spark 2 (2.3.0). Oozie populates the Spark 
command's --files option with a string containing "%23". This can be 
reproduced using unit tests.
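
A minimal standalone sketch of the encoding behaviour, assuming only Hadoop's 
org.apache.hadoop.fs.Path on the classpath; the demo class below is made up for 
illustration and is not the actual SparkArgsExtractor fix:

{code}
import org.apache.hadoop.fs.Path;

// Hypothetical demo class, not part of Oozie.
public class SymlinkEncodingDemo {
    public static void main(String[] args) {
        String withSymlink = "/etc/spark2/conf/hive-site.xml#hive-site.xml";

        // Path(String) treats everything after the authority as the path
        // component, so java.net.URI percent-encodes the '#' to %23:
        System.out.println(new Path(withSymlink).toUri());
        // prints /etc/spark2/conf/hive-site.xml%23hive-site.xml

        // Splitting the symlink fragment off before building the Path and
        // re-appending it afterwards keeps the '#' intact:
        int hash = withSymlink.indexOf('#');
        String file = withSymlink.substring(0, hash);
        String symlink = withSymlink.substring(hash + 1);
        System.out.println(new Path(file).toUri() + "#" + symlink);
        // prints /etc/spark2/conf/hive-site.xml#hive-site.xml
    }
}
{code}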

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 2.7.3
> Spark 2 - 2.3.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3440:
--
Environment: 
Hadoop 2.7.3
Spark 2 - 2.3.0

  was:
Hadoop 3
Spark 2 - 2.3.0


> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 2.7.3
> Spark 2 - 2.3.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16775146#comment-16775146
 ] 

Denes Bodo edited comment on OOZIE-3440 at 2/22/19 1:39 PM:


[~nobigo] have you used spark 1 or spark 2? Can you please share the Hadoop 
version too? Thanks

cc [~Hari Matta]


was (Author: dionusos):
[~nobigo] have you used spark 1 or spark 2? Can you please share the Hadoop 
version too? Thanks

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 3
> Spark 2 - 2.3.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3440:
--
Environment: 
Hadoop 3
Spark 2 - 2.3.0

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
> Environment: Hadoop 3
> Spark 2 - 2.3.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16775146#comment-16775146
 ] 

Denes Bodo commented on OOZIE-3440:
---

[~nobigo] have you used spark 1 or spark 2? Can you please share the Hadoop 
version too? Thanks

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3440:
--
Attachment: OOZIE-3440-001.patch

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
> Attachments: OOZIE-3440-001.patch
>
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Review Request 70042: Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70042/
---

Review request for oozie.


Bugs: OOZIE-3440
https://issues.apache.org/jira/browse/OOZIE-3440


Repository: oozie-git


Description
---

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], 
main() threw exception, File 
file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
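
For context, the same %23 encoding can be reproduced with a plain JDK snippet. 
This is an illustration only -- it is not Oozie or patch code, and it is not 
necessarily the exact code path taken in SparkArgsExtractor:

```java
import java.io.File;
import java.net.URI;

// A '#' in a local path is percent-encoded by File.toURI(), because '#' is not
// a legal character inside the path component of a URI.
public class SymlinkFragmentDemo {
    public static void main(String[] args) {
        File f = new File("/etc/spark2/conf/hive-site.xml#hive-site.xml");
        URI uri = f.toURI();
        // On a Unix-like system this prints:
        // file:/etc/spark2/conf/hive-site.xml%23hive-site.xml
        System.out.println(uri);
    }
}
```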


Diffs
-

  
sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java
 28d9c5cc4 
  
sharelib/spark/src/test/java/org/apache/oozie/action/hadoop/TestSparkArgsExtractor.java
 7ccd26ad5 


Diff: https://reviews.apache.org/r/70042/diff/1/


Testing
---

Unit tests run


Thanks,

Denes Bodo



[jira] [Updated] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3440:
--
Description: 
When we provide in a hive action:
 * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
 * /etc/spark2/conf/hive-site.xml#hive-site.xml or
* hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
 * /etc/spark2/conf/hive-site.xml#hive-site.xml

we get the following error:
{code}
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], 
main() threw exception, File 
file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
{code}
The culprit seems to be 
https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
 .

Please help me confirm if this is a bug or not. Meanwhile I am creating a 
fix/workaround to this.

  was:
When we provide in a hive action:
 * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
 * /etc/spark2/conf/hive-site.xml#hive-site.xml or
 * /etc/spark2/conf/hive-site.xml#hive-site.xml

we get the following error:
{code}
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], 
main() threw exception, File 
file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
{code}
The culprit seems to be 
https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
 .

Please help me confirm if this is a bug or not. Meanwhile I am creating a 
fix/workaround to this.

Summary: Oozie Spark action replaces path symlink # to %23  (was: Oozie 
Spark action replaces local path symlink # to %23)

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3440) Oozie Spark action replaces path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774908#comment-16774908
 ] 

Denes Bodo commented on OOZIE-3440:
---

Hello [~nobigo],
thanks for your answer. We tried hdfs:// as well; I have just extended the 
description. That fails too.

> Oozie Spark action replaces path symlink # to %23
> -
>
> Key: OOZIE-3440
> URL: https://issues.apache.org/jira/browse/OOZIE-3440
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Affects Versions: 4.3.1, 5.1.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Major
>
> When we provide in a hive action:
>  * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml or
> * hdfs:///tmp/spark2/conf/hive-site.xml#hive-site.xml or
>  * /etc/spark2/conf/hive-site.xml#hive-site.xml
> we get the following error:
> {code}
> Failing Oozie Launcher, Main class 
> [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, File 
> file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
> {code}
> The culprit seems to be 
> https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
>  .
> Please help me confirm if this is a bug or not. Meanwhile I am creating a 
> fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (OOZIE-3440) Oozie Spark action replaces local path symlink # to %23

2019-02-22 Thread Denes Bodo (JIRA)
Denes Bodo created OOZIE-3440:
-

 Summary: Oozie Spark action replaces local path symlink # to %23
 Key: OOZIE-3440
 URL: https://issues.apache.org/jira/browse/OOZIE-3440
 Project: Oozie
  Issue Type: Bug
  Components: action
Affects Versions: 5.1.0, 4.3.1
Reporter: Denes Bodo
Assignee: Denes Bodo


When we provide in a hive action:
 * --files /etc/spark2/conf/hive-site.xml#hive-site.xml or
 * /etc/spark2/conf/hive-site.xml#hive-site.xml or
 * /etc/spark2/conf/hive-site.xml#hive-site.xml

we get the following error:
{code}
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], 
main() threw exception, File 
file:/etc/spark2/conf/hive-site.xml%23hive-site.xml does not exist
{code}
The culprit seems to be 
https://github.com/apache/oozie/blob/master/sharelib/spark/src/main/java/org/apache/oozie/action/hadoop/SparkArgsExtractor.java#L480L489
 .

Please help me confirm if this is a bug or not. Meanwhile I am creating a 
fix/workaround to this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 68317: Sqoop Action should support tez delegation tokens for hive-import

2019-02-19 Thread Denes Bodo


> On Feb. 18, 2019, 9:43 p.m., Peter Bacsko wrote:
> > sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/LauncherAM.java
> > Lines 94-96 (patched)
> > <https://reviews.apache.org/r/68317/diff/6/?file=2125751#file2125751line94>
> >
> > I think we already support Java8, so it can be a good practice to use 
> > lambdas. 
> > 
> > I recently faced a similar problem with System.getEnv(), I think this 
> > solution should be considered because it's more compact:
> > 
> > ```
> > import java.util.function.Function;
> > ...
> > private static Function envProvider = System::getenv;
> > ...
> > @VisibleForTesting
> > static void setEnvProvider(Function envProvider) {
> > LauncherAM.envProvider = envProvider;
> > }
> > ...
> > String envValue = envProvider.apply(key);
> > ```
> > 
> > Then in the test code, you can just override like that:
> > 
> > ```
> > LauncherAM.setEnvProvider(s -> { return dummyEnv.get(s); });
> > ```
> > 
> > So we don't need that tiny helper class, just replace the method 
> > reference with a new one.

Thanks Peter for your comment. I agree that we should use the latest technology 
in Oozie. However, in this specific case we need two functions: one for 
System.getenv(KEY) and one for System.getenv(). The latter is only used to print 
out the key-value pairs and does not affect anything else, but I think we should 
neither read the system environment in two different ways nor keep two separate 
functions for it. A rough sketch of what I mean is below.
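
A test-oriented sketch, with made-up names for illustration only (this is not 
the SystemEnvironment class from the patch):

```java
import java.util.Collections;
import java.util.Map;

// Wraps both forms of environment access behind one object so tests can
// replace it with a map-backed fake instead of touching the real environment.
class EnvAccess {
    String get(String key) {          // covers System.getenv(KEY)
        return System.getenv(key);
    }

    Map<String, String> getAll() {    // covers System.getenv(), used only for logging
        return System.getenv();
    }
}

// In a test, both methods can be overridden together:
class FakeEnvAccess extends EnvAccess {
    private final Map<String, String> values;

    FakeEnvAccess(Map<String, String> values) {
        this.values = Collections.unmodifiableMap(values);
    }

    @Override
    String get(String key) {
        return values.get(key);
    }

    @Override
    Map<String, String> getAll() {
        return values;
    }
}
```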


- Denes


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68317/#review212905
---


On Feb. 15, 2019, 12:33 p.m., Denes Bodo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68317/
> ---
> 
> (Updated Feb. 15, 2019, 12:33 p.m.)
> 
> 
> Review request for oozie and Andras Salamon.
> 
> 
> Bugs: OOZIE-3326
> https://issues.apache.org/jira/browse/OOZIE-3326
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> 
> At present, hive-import will fail to start a tez session in secure 
> environment.
> 
> 
> Diffs
> -
> 
>   sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/LauncherAM.java 
> 63afd91d3 
>   
> sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/LauncherMain.java 
> b6599f7f3 
>   
> sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/SystemEnvironment.java
>  PRE-CREATION 
>   sharelib/sqoop/src/main/java/org/apache/oozie/action/hadoop/SqoopMain.java 
> 27f9306a0 
>   
> sharelib/sqoop/src/test/java/org/apache/oozie/action/hadoop/TestSqoopMain.java
>  d6f96d546 
> 
> 
> Diff: https://reviews.apache.org/r/68317/diff/6/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denes Bodo
> 
>



[jira] [Commented] (OOZIE-3326) Sqoop Action should support tez delegation tokens for hive-import

2019-02-18 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771626#comment-16771626
 ] 

Denes Bodo commented on OOZIE-3326:
---

Thanks [~asalamon74] for your comments and help.

> Sqoop Action should support tez delegation tokens for hive-import
> -
>
> Key: OOZIE-3326
> URL: https://issues.apache.org/jira/browse/OOZIE-3326
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Reporter: Brian Goerlitz
>Assignee: Brian Goerlitz
>Priority: Major
> Attachments: OOZIE-3326_003.patch, OOZIE-3326_004.patch, 
> OOZIE-3326_005.patch, OOZIE-3326_006.patch, OOZIE-3326_007.patch, 
> OZIE-3326_001.patch, OZIE-3326_002.patch
>
>
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> At present, hive-import will fail to start a tez session in secure 
> environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3326) Sqoop Action should support tez delegation tokens for hive-import

2019-02-18 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3326:
--
Attachment: OOZIE-3326_006.patch

> Sqoop Action should support tez delegation tokens for hive-import
> -
>
> Key: OOZIE-3326
> URL: https://issues.apache.org/jira/browse/OOZIE-3326
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Reporter: Brian Goerlitz
>Assignee: Brian Goerlitz
>Priority: Major
> Attachments: OOZIE-3326_003.patch, OOZIE-3326_004.patch, 
> OOZIE-3326_005.patch, OOZIE-3326_006.patch, OZIE-3326_001.patch, 
> OZIE-3326_002.patch
>
>
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> At present, hive-import will fail to start a tez session in secure 
> environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3326) Sqoop Action should support tez delegation tokens for hive-import

2019-02-17 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770834#comment-16770834
 ] 

Denes Bodo commented on OOZIE-3326:
---

I managed to fix the issue found by FindBugs and updated the patches. 
Unfortunately I cannot see any review on Review Board. Have you pushed the 
"Publish" button?

> Sqoop Action should support tez delegation tokens for hive-import
> -
>
> Key: OOZIE-3326
> URL: https://issues.apache.org/jira/browse/OOZIE-3326
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Reporter: Brian Goerlitz
>Assignee: Brian Goerlitz
>Priority: Major
> Attachments: OOZIE-3326_003.patch, OOZIE-3326_004.patch, 
> OOZIE-3326_005.patch, OZIE-3326_001.patch, OZIE-3326_002.patch
>
>
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> At present, hive-import will fail to start a tez session in secure 
> environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3326) Sqoop Action should support tez delegation tokens for hive-import

2019-02-15 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3326:
--
Attachment: OOZIE-3326_004.patch

> Sqoop Action should support tez delegation tokens for hive-import
> -
>
> Key: OOZIE-3326
> URL: https://issues.apache.org/jira/browse/OOZIE-3326
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Reporter: Brian Goerlitz
>Assignee: Brian Goerlitz
>Priority: Major
> Attachments: OOZIE-3326_003.patch, OOZIE-3326_004.patch, 
> OZIE-3326_001.patch, OZIE-3326_002.patch
>
>
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> At present, hive-import will fail to start a tez session in secure 
> environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 68317: Sqoop Action should support tez delegation tokens for hive-import

2019-02-15 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68317/
---

(Updated Feb. 15, 2019, 12:33 p.m.)


Review request for oozie and András Piros.


Bugs: OOZIE-3326
https://issues.apache.org/jira/browse/OOZIE-3326


Repository: oozie-git


Description
---

SqoopMain needs to support tez delegation tokens for hive-imports. 
Implementation is similar to that of HiveMain and Hive2Main.

At present, hive-import will fail to start a tez session in secure environment.


Diffs (updated)
-

  sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/LauncherAM.java 
63afd91d3 
  sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/LauncherMain.java 
b6599f7f3 
  
sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/SystemEnvironment.java
 PRE-CREATION 
  sharelib/sqoop/src/main/java/org/apache/oozie/action/hadoop/SqoopMain.java 
27f9306a0 
  
sharelib/sqoop/src/test/java/org/apache/oozie/action/hadoop/TestSqoopMain.java 
d6f96d546 


Diff: https://reviews.apache.org/r/68317/diff/3/

Changes: https://reviews.apache.org/r/68317/diff/2-3/


Testing
---


Thanks,

Denes Bodo



[jira] [Updated] (OOZIE-3326) Sqoop Action should support tez delegation tokens for hive-import

2019-02-15 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3326:
--
Attachment: OOZIE-3326_003.patch

> Sqoop Action should support tez delegation tokens for hive-import
> -
>
> Key: OOZIE-3326
> URL: https://issues.apache.org/jira/browse/OOZIE-3326
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Reporter: Brian Goerlitz
>Assignee: Brian Goerlitz
>Priority: Major
> Attachments: OOZIE-3326_003.patch, OZIE-3326_001.patch, 
> OZIE-3326_002.patch
>
>
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> At present, hive-import will fail to start a tez session in secure 
> environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3326) Sqoop Action should support tez delegation tokens for hive-import

2019-02-15 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769103#comment-16769103
 ] 

Denes Bodo commented on OOZIE-3326:
---

[~gezapeti] what do you think about this change? The change has already been 
tested in a production environment. Thanks.

> Sqoop Action should support tez delegation tokens for hive-import
> -
>
> Key: OOZIE-3326
> URL: https://issues.apache.org/jira/browse/OOZIE-3326
> Project: Oozie
>  Issue Type: Bug
>  Components: action
>Reporter: Brian Goerlitz
>Assignee: Brian Goerlitz
>Priority: Major
> Attachments: OZIE-3326_001.patch, OZIE-3326_002.patch
>
>
> SqoopMain needs to support tez delegation tokens for hive-imports. 
> Implementation is similar to that of HiveMain and Hive2Main.
> At present, hive-import will fail to start a tez session in secure 
> environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-2689) Spark options --keytab and --principal is not working from Spark Action

2019-02-14 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769031#comment-16769031
 ] 

Denes Bodo commented on OOZIE-2689:
---

Is it still a valid ticket? I think it works in 4.3, 4.3.1 and 5.x.

[~andras.piros] [~gezapeti]

> Spark options --keytab and --principal is not working from Spark Action
> ---
>
> Key: OOZIE-2689
> URL: https://issues.apache.org/jira/browse/OOZIE-2689
> Project: Oozie
>  Issue Type: Bug
>Reporter: Peter Cseh
>Priority: Major
>
> Spark job running longer than Kerberos ticket lifetime are failing because 
> the --principal and --keytab options are not working in Spark Action. 
> We're getting messages like
> {quote}
> Delegation Token can be issued only with kerberos or web authentication
> {quote}
> It's possible to work around these issue using the Shell Action and giving 
> these options to spark-submit, but it's not a good thing to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 69668: OOZIE 2949 - Escape quotes whitespaces in Sqoop field

2019-01-21 Thread Denes Bodo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69668/#review212175
---



lgtm

- Denes Bodo


On Jan. 16, 2019, 4:38 p.m., Andras Salamon wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/69668/
> ---
> 
> (Updated Jan. 16, 2019, 4:38 p.m.)
> 
> 
> Review request for oozie, András Piros, Denes Bodo, Peter Cseh, and Kinga 
> Marton.
> 
> 
> Repository: oozie-git
> 
> 
> Description
> ---
> 
> OOZIE 2949 - Escape quotes whitespaces in Sqoop  field
> 
> 
> Diffs
> -
> 
>   core/src/main/java/org/apache/oozie/ErrorCode.java e274b9d70 
>   core/src/main/java/org/apache/oozie/action/hadoop/ShellSplitter.java 
> PRE-CREATION 
>   
> core/src/main/java/org/apache/oozie/action/hadoop/ShellSplitterException.java 
> PRE-CREATION 
>   core/src/main/java/org/apache/oozie/action/hadoop/SqoopActionExecutor.java 
> 556f2cfd1 
>   core/src/main/resources/oozie-default.xml 6c7fc9ddc 
>   core/src/test/java/org/apache/oozie/action/hadoop/TestShellSplitter.java 
> PRE-CREATION 
>   docs/src/site/markdown/DG_SqoopActionExtension.md b186c5ab7 
>   
> sharelib/sqoop/src/test/java/org/apache/oozie/action/hadoop/TestSqoopActionExecutor.java
>  edfe0c739 
> 
> 
> Diff: https://reviews.apache.org/r/69668/diff/2/
> 
> 
> Testing
> ---
> 
> Unit tests locally
> 
> 
> Thanks,
> 
> Andras Salamon
> 
>



[jira] [Commented] (OOZIE-2949) Escape quotes whitespaces in Sqoop field

2019-01-08 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737179#comment-16737179
 ] 

Denes Bodo commented on OOZIE-2949:
---

Code review:
 * Does ShellSplitter need to be a static inner class? Wouldn't a normal class 
in a separate file mean cleaner code?
 * Can you please extract the special characters used by the state machine into 
ShellSplitter constants? (A rough sketch below shows the kind of constants I 
mean.)
 * In TestSqoopActionExecutor.java:242 you commented out Java code. Was that 
intentional, and can it be deleted?
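
For the second point, a minimal sketch of what I mean by extracting the 
state-machine characters into constants. This is an illustration only, not the 
actual ShellSplitter from the patch, and it ignores backslash escaping:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Minimal quote-aware splitter: spaces separate tokens only outside quotes.
final class SimpleQuoteAwareSplitter {
    private static final char DOUBLE_QUOTE = '"';
    private static final char SINGLE_QUOTE = '\'';
    private static final char TOKEN_SEPARATOR = ' ';

    static List<String> split(String command) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        char openQuote = 0;                          // 0 = not inside quotes
        for (char c : command.toCharArray()) {
            if (openQuote == 0 && (c == DOUBLE_QUOTE || c == SINGLE_QUOTE)) {
                openQuote = c;                       // start of a quoted section
            } else if (openQuote != 0 && c == openQuote) {
                openQuote = 0;                       // end of the quoted section
            } else if (openQuote == 0 && c == TOKEN_SEPARATOR) {
                if (current.length() > 0) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }
}
{code}

With such a splitter, the quoted query after a --query option stays one token 
instead of being split on every space.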

 

> Escape quotes whitespaces in Sqoop  field
> --
>
> Key: OOZIE-2949
> URL: https://issues.apache.org/jira/browse/OOZIE-2949
> Project: Oozie
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Peter Cseh
>Assignee: Andras Salamon
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: OOZIE-2949-0.patch, OOZIE-2949-01.patch, 
> OOZIE-2949-02.patch
>
>
> The current behavior of the Sqoop action is:
> {noformat}
> The Sqoop command can be specified either using the command element or 
> multiple arg elements.
> When using the command element, Oozie will split the command on every space 
> into multiple arguments.
> When using the arg elements, Oozie will pass each argument value as an 
> argument to Sqoop.
> {noformat}
> This prevents the user to simply copy-paste the command worked in the shell 
> into the workflow.xml.
> We should split the  field by taking quotes into account, similar to 
> what OOZIE-2391
> did for the Spark action's  field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-2949) Escape quotes whitespaces in Sqoop field

2019-01-08 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737161#comment-16737161
 ] 

Denes Bodo commented on OOZIE-2949:
---

Regarding changing the default behaviour: could we just put a switch into 
oozie-default.xml where users can declare which splitting mode they want to 
use? I think current users would be afraid of an upgrade because of the many 
possible changes to their workflows. New users, however, should be required to 
use the new way. Does that sound crazy?

> Escape quotes whitespaces in Sqoop  field
> --
>
> Key: OOZIE-2949
> URL: https://issues.apache.org/jira/browse/OOZIE-2949
> Project: Oozie
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Peter Cseh
>Assignee: Andras Salamon
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: OOZIE-2949-0.patch, OOZIE-2949-01.patch, 
> OOZIE-2949-02.patch
>
>
> The current behavior of the Sqoop action is:
> {noformat}
> The Sqoop command can be specified either using the command element or 
> multiple arg elements.
> When using the command element, Oozie will split the command on every space 
> into multiple arguments.
> When using the arg elements, Oozie will pass each argument value as an 
> argument to Sqoop.
> {noformat}
> This prevents the user to simply copy-paste the command worked in the shell 
> into the workflow.xml.
> We should split the  field by taking quotes into account, similar to 
> what OOZIE-2391
> did for the Spark action's  field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3218) Oozie Sqoop action with command splits the select clause into multiple parts due to delimiter being space

2019-01-04 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16734035#comment-16734035
 ] 

Denes Bodo commented on OOZIE-3218:
---

I prefer handling the quotes. In my opinion, using quotes is common in a Linux 
environment when whitespace has to be written. The implementation could also be 
more maintainable, since we would not have to watch out for newly created 
option/token borders.

> Oozie Sqoop action with command splits the select clause into multiple parts 
> due to delimiter being space
> -
>
> Key: OOZIE-3218
> URL: https://issues.apache.org/jira/browse/OOZIE-3218
> Project: Oozie
>  Issue Type: Bug
>  Components: action, workflow
>Affects Versions: 3.3.2, 4.1.0, 4.2.0, 4.3.0, 5.0.0
> Environment: Hortonworks Hadoop HDP-2.6.4.x release 
>  oozie admin -version: Oozie server build version: 4.2.0.2.6.4.0-91
>Reporter: Mahesh Balakrishnan
>Assignee: Mahesh Balakrishnan
>Priority: Major
> Attachments: OOZIE-3218-2.patch, OOZIE-3218-3.patch, OOZIE-3218.patch
>
>
> When running a Oozie Sqoop action which has command with {{--query}} in place 
> the query is split into multiple parts causing {{"Unrecognized argument:"}} 
> and in-turn fails.
> {code:xml}
> 
> ${resourceManager}
> ${nameNode}
> import --verbose --connect jdbc:mysql://test.openstacklocal/db 
> --query select * from abc where $CONDITIONS --username test --password test 
> --driver com.mysql.jdbc.Driver -m 1 
> 
> 
> {code}
>  
> Oozie Launcher logs:
> {noformat}
>  Sqoop command arguments :
>  import
>  --verbose
>  --connect
>  jdbc:mysql://test.openstacklocal/db
>  --query
>  "select
>  *
>  from
>  abc
>  where
>  $CONDITIONS"
>  --username
>  hive
>  --password
>  
>  --driver
>  com.mysql.jdbc.Driver
>  -m
>  1
>  Fetching child yarn jobs
>  tag id : oozie-a1bbe03a0983b9e822d12ae7bb269ee3
>  2791 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to 
> ResourceManager at hdp263-3.openstacklocal/172.26.105.248:8050
>  Child yarn jobs are found - 
>  =
> >>> Invoking Sqoop command line now >>>
> 3172 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not 
> been set in the environment. Cannot check for additional configuration.
>  3172 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not 
> been set in the environment. Cannot check for additional configuration.
>  3218 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 
> 1.4.6.2.6.4.0-91
>  3218 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 
> 1.4.6.2.6.4.0-91
>  3287 [main] DEBUG org.apache.sqoop.tool.BaseSqoopTool - Enabled debug 
> logging.
>  3287 [main] DEBUG org.apache.sqoop.tool.BaseSqoopTool - Enabled debug 
> logging.
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Error parsing 
> arguments for import:
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Error parsing 
> arguments for import:
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: *
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: *
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: from
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: from
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: abc
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: abc
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: where
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: where
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: $CONDITIONS"
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: $CONDITIONS"
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: --username
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: --username
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: abc
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: abc
>  3288 [main] ERROR org.apache.sqoop.tool.BaseSqoopTool - Unrecognized 
> argument: --passwor

[jira] [Updated] (OOZIE-3186) Oozie is unable to use configuration linked using jceks://file/...

2019-01-03 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3186:
--
Attachment: OOZIE-3186-003.patch

> Oozie is unable to use configuration linked using jceks://file/...
> --
>
> Key: OOZIE-3186
> URL: https://issues.apache.org/jira/browse/OOZIE-3186
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0b1, 5.0.0, 4.3.1
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: usability
> Fix For: trunk
>
> Attachments: OOZIE-3186-001.patch, OOZIE-3186-002.patch, 
> OOZIE-3186-003.patch
>
>
> When Oozie is used with Ambari, the next configuration makes Oozie fail to 
> start:
> {noformat}
> 
>     hadoop.security.credential.provider.path
>     jceks://file/.../oozie-site.jceks
> 
> {noformat}
> Value should have *localjceks://* instead of *jceks://*. But Ambari does not 
> let change this value. I propose change the url when Oozie loads it.
>  
> Stacktrace, when the issue occurs:
> {code:java}
> org.apache.oozie.service.ServiceException: E0103: Could not load service 
> classes, Could not load password for [oozie.service.JPAService.jdbc.password] 
> at org.apache.oozie.service.Services.loadServices(Services.java:309) at 
> org.apache.oozie.service.Services.init(Services.java:213) at 
> org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
>  at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
>  at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779) 
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780) 
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583) at 
> org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944) at 
> org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779) at 
> org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505) at 
> org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322) at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325) at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
>  at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069) at 
> org.apache.catalina.core.StandardHost.start(StandardHost.java:822) at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061) at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525) at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761) at 
> org.apache.catalina.startup.Catalina.start(Catalina.java:595) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at 
> org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: 
> java.lang.IllegalArgumentException: Could not load password for 
> [oozie.service.JPAService.jdbc.password] at 
> org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:615)
>  at 
> org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:602)
>  at org.apache.oozie.service.JPAService.init(JPAService.java:147) at 
> org.apache.oozie.service.Services.setServiceInternal(Services.java:386) at 
> org.apache.oozie.service.Services.setService(Services.java:372) at 
> org.apache.oozie.service.Services.loadServices(Services.java:305) ... 26 more 
> Caused by: java.lang.reflect.InvocationTargetException at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:608)
>  ... 31 more Caused by: java.lang.UnsupportedOperationException: Accessing 
> local file system is not allowed at 
> org.apache.hadoop.fs.RawLocalFileS

[jira] [Updated] (OOZIE-3194) Oozie should set proper permissions to sharelib after upload

2019-01-03 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3194:
--
Attachment: OOZIE-3194-v3.patch

> Oozie should set proper permissions to sharelib after upload
> 
>
> Key: OOZIE-3194
> URL: https://issues.apache.org/jira/browse/OOZIE-3194
> Project: Oozie
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 4.3.0
>    Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
> Fix For: 4.3.1
>
> Attachments: OOZIE-3194-v1.patch, OOZIE-3194-v2.patch, 
> OOZIE-3194-v3.patch
>
>
> When user changes HDFS umask that does not support world readability to newly 
> created files, upgrading sharelib causes failing Oozie jobs.
> I suggest set the required permissions to sharelib after upgrading it. Files 
> to 544 and directories to 755.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (OOZIE-3120) maven-assembly-plugin fails when bumped from 2.2.1

2018-11-27 Thread Denes Bodo (JIRA)


 [ 
https://issues.apache.org/jira/browse/OOZIE-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Bodo updated OOZIE-3120:
--
Attachment: OOZIE-3120-002.patch

> maven-assembly-plugin fails when bumped from 2.2.1
> --
>
> Key: OOZIE-3120
> URL: https://issues.apache.org/jira/browse/OOZIE-3120
> Project: Oozie
>  Issue Type: Task
>Affects Versions: 4.3.0
>Reporter: Artem Ervits
>Assignee: Artem Ervits
>Priority: Major
> Attachments: OOZIE-3120-001.patch, OOZIE-3120-002.patch
>
>
> maven-assembly plugin 3.1.0 is available, version 2.2.1 is old, with upgrade 
> to even 2.2.2 build fails with
> {noformat}
> [INFO] --- maven-assembly-plugin:3.1.0:single (default-cli) @ oozie-main ---
> [INFO] Reading assembly descriptor: src/main/assemblies/empty.xml
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Oozie Main .. FAILURE [  0.972 
> s]
> [INFO] Apache Oozie Client  SKIPPED
> [INFO] Apache Oozie Share Lib Oozie ... SKIPPED
> [INFO] Apache Oozie Share Lib HCatalog  SKIPPED
> [INFO] Apache Oozie Share Lib Distcp .. SKIPPED
> [INFO] Apache Oozie Core .. SKIPPED
> [INFO] Apache Oozie Share Lib Streaming ... SKIPPED
> [INFO] Apache Oozie Share Lib Pig . SKIPPED
> [INFO] Apache Oozie Share Lib Hive  SKIPPED
> [INFO] Apache Oozie Share Lib Hive 2 .. SKIPPED
> [INFO] Apache Oozie Share Lib Sqoop ... SKIPPED
> [INFO] Apache Oozie Examples .. SKIPPED
> [INFO] Apache Oozie Share Lib Spark ... SKIPPED
> [INFO] Apache Oozie Share Lib . SKIPPED
> [INFO] Apache Oozie Docs .. SKIPPED
> [INFO] Apache Oozie WebApp  SKIPPED
> [INFO] Apache Oozie Tools . SKIPPED
> [INFO] Apache Oozie MiniOozie . SKIPPED
> [INFO] Apache Oozie Server  SKIPPED
> [INFO] Apache Oozie Distro  SKIPPED
> [INFO] Apache Oozie ZooKeeper Security Tests .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1.872 s
> [INFO] Finished at: 2017-11-10T13:38:17-05:00
> [INFO] Final Memory: 31M/437M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.1.0:single (default-cli) on 
> project oozie-main: No formats specified in the execution parameters or the 
> assembly descriptor. -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> ERROR, Oozie distro creation failed
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3355) Regex based search option for searching workflows in the WFM-View of Ambari

2018-10-05 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639515#comment-16639515
 ] 

Denes Bodo commented on OOZIE-3355:
---

[~andras.piros] I have never seen the Workflow Manager code, but [~matijhs] and 
I will take a look at what can be done.

> Regex based search option for searching workflows in the WFM-View of Ambari
> ---
>
> Key: OOZIE-3355
> URL: https://issues.apache.org/jira/browse/OOZIE-3355
> Project: Oozie
>  Issue Type: New Feature
>  Components: bundle, coordinator, core, tools, ui, workflow
>Affects Versions: 4.2.0
>Reporter: Krishnadevan Purushothaman
>Priority: Critical
>
> *Challenge faced :*
> _{color:#d04437}In the WFM view of ambari, there is no Filter option 
> available to search for the Workflows. In order to search for the desired 
> workflow, one has to type the full name of workflow,coordinator,bundles else 
> it does not return anything which becomes a time-consuming job.{color}_
> *Feature description:*
> _{color:#14892c}There is a need for Regex based filter option in order to 
> search the workflows without the need of entering the complete name of the 
> workflow,coordinator,bundles rather by just typing the first three letters of 
> the work post which it should populate suggestions based on the first three 
> letters of the workflow,coordinator,bundles through which we can refine and 
> optimize the searching mechanism.{color}_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3355) Regex based search option for searching workflows in the WFM-View of Ambari

2018-10-04 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16638145#comment-16638145
 ] 

Denes Bodo commented on OOZIE-3355:
---

As I understand it, Ambari's Workflow Manager uses the Oozie REST API as is. I 
am not sure this issue can be solved efficiently without Oozie's support: 
querying all the job names from Oozie would not require any change in Oozie, 
but it would generate heavy DB queries and a large volume of data transferred 
over REST.

However, it would be a great feature in Oozie, too, if it supported some 
wildcard-like search, for example regex, or simply SQL's "%" and "_" 
characters. In the latter case, only Oozie would be affected, not Ambari.

> Regex based search option for searching workflows in the WFM-View of Ambari
> ---
>
> Key: OOZIE-3355
> URL: https://issues.apache.org/jira/browse/OOZIE-3355
> Project: Oozie
>  Issue Type: New Feature
>  Components: bundle, coordinator, core, tools, ui, workflow
>Affects Versions: 4.2.0
>Reporter: Krishnadevan Purushothaman
>Priority: Critical
>
> *Challenge faced :*
> _{color:#d04437}In the WFM view of ambari, there is no Filter option 
> available to search for the Workflows. In order to search for the desired 
> workflow, one has to type the full name of workflow,coordinator,bundles else 
> it does not return anything which becomes a time-consuming job.{color}_
> *Feature description:*
> _{color:#14892c}There is a need for Regex based filter option in order to 
> search the workflows without the need of entering the complete name of the 
> workflow,coordinator,bundles rather by just typing the first three letters of 
> the work post which it should populate suggestions based on the first three 
> letters of the workflow,coordinator,bundles through which we can refine and 
> optimize the searching mechanism.{color}_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (OOZIE-3120) maven-assembly-plugin fails when bumped from 2.2.1

2018-09-18 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16618752#comment-16618752
 ] 

Denes Bodo edited comment on OOZIE-3120 at 9/18/18 9:21 AM:


PreCommit build has error:
{code:java}
[ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.12:check 
(default-cli) on project oozie-main: Too many files with unapproved license: 1 
See RAT report in: 
/home/jenkins/jenkins-slave/workspace/PreCommit-OOZIE-Build/target/rat.txt 
- [Help 1]{code}
And several test errors due to {code}java.net.ConnectException: Connection 
refused{code}


was (Author: dionusos):
PreCommit build has error:

{code}
[ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.12:check 
(default-cli) on project oozie-main: Too many files with unapproved license: 1 
See RAT report in: 
/home/jenkins/jenkins-slave/workspace/PreCommit-OOZIE-Build/target/rat.txt 
- [Help 1]
{code}

> maven-assembly-plugin fails when bumped from 2.2.1
> --
>
> Key: OOZIE-3120
> URL: https://issues.apache.org/jira/browse/OOZIE-3120
> Project: Oozie
>  Issue Type: Task
>Affects Versions: 4.3.0
>Reporter: Artem Ervits
>Assignee: Artem Ervits
>Priority: Major
> Attachments: OOZIE-3120-001.patch
>
>
> maven-assembly plugin 3.1.0 is available, version 2.2.1 is old, with upgrade 
> to even 2.2.2 build fails with
> {noformat}
> [INFO] --- maven-assembly-plugin:3.1.0:single (default-cli) @ oozie-main ---
> [INFO] Reading assembly descriptor: src/main/assemblies/empty.xml
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Oozie Main .. FAILURE [  0.972 
> s]
> [INFO] Apache Oozie Client  SKIPPED
> [INFO] Apache Oozie Share Lib Oozie ... SKIPPED
> [INFO] Apache Oozie Share Lib HCatalog  SKIPPED
> [INFO] Apache Oozie Share Lib Distcp .. SKIPPED
> [INFO] Apache Oozie Core .. SKIPPED
> [INFO] Apache Oozie Share Lib Streaming ... SKIPPED
> [INFO] Apache Oozie Share Lib Pig . SKIPPED
> [INFO] Apache Oozie Share Lib Hive  SKIPPED
> [INFO] Apache Oozie Share Lib Hive 2 .. SKIPPED
> [INFO] Apache Oozie Share Lib Sqoop ... SKIPPED
> [INFO] Apache Oozie Examples .. SKIPPED
> [INFO] Apache Oozie Share Lib Spark ... SKIPPED
> [INFO] Apache Oozie Share Lib . SKIPPED
> [INFO] Apache Oozie Docs .. SKIPPED
> [INFO] Apache Oozie WebApp  SKIPPED
> [INFO] Apache Oozie Tools . SKIPPED
> [INFO] Apache Oozie MiniOozie . SKIPPED
> [INFO] Apache Oozie Server  SKIPPED
> [INFO] Apache Oozie Distro  SKIPPED
> [INFO] Apache Oozie ZooKeeper Security Tests .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1.872 s
> [INFO] Finished at: 2017-11-10T13:38:17-05:00
> [INFO] Final Memory: 31M/437M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.1.0:single (default-cli) on 
> project oozie-main: No formats specified in the execution parameters or the 
> assembly descriptor. -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> ERROR, Oozie distro creation failed
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OOZIE-3120) maven-assembly-plugin fails when bumped from 2.2.1

2018-09-18 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16618752#comment-16618752
 ] 

Denes Bodo commented on OOZIE-3120:
---

PreCommit build has error:

{code}
[ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.12:check 
(default-cli) on project oozie-main: Too many files with unapproved license: 1 
See RAT report in: 
/home/jenkins/jenkins-slave/workspace/PreCommit-OOZIE-Build/target/rat.txt 
- [Help 1]
{code}

> maven-assembly-plugin fails when bumped from 2.2.1
> --
>
> Key: OOZIE-3120
> URL: https://issues.apache.org/jira/browse/OOZIE-3120
> Project: Oozie
>  Issue Type: Task
>Affects Versions: 4.3.0
>Reporter: Artem Ervits
>Assignee: Artem Ervits
>Priority: Major
> Attachments: OOZIE-3120-001.patch
>
>
> maven-assembly plugin 3.1.0 is available, version 2.2.1 is old, with upgrade 
> to even 2.2.2 build fails with
> {noformat}
> [INFO] --- maven-assembly-plugin:3.1.0:single (default-cli) @ oozie-main ---
> [INFO] Reading assembly descriptor: src/main/assemblies/empty.xml
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Oozie Main .. FAILURE [  0.972 
> s]
> [INFO] Apache Oozie Client  SKIPPED
> [INFO] Apache Oozie Share Lib Oozie ... SKIPPED
> [INFO] Apache Oozie Share Lib HCatalog  SKIPPED
> [INFO] Apache Oozie Share Lib Distcp .. SKIPPED
> [INFO] Apache Oozie Core .. SKIPPED
> [INFO] Apache Oozie Share Lib Streaming ... SKIPPED
> [INFO] Apache Oozie Share Lib Pig . SKIPPED
> [INFO] Apache Oozie Share Lib Hive  SKIPPED
> [INFO] Apache Oozie Share Lib Hive 2 .. SKIPPED
> [INFO] Apache Oozie Share Lib Sqoop ... SKIPPED
> [INFO] Apache Oozie Examples .. SKIPPED
> [INFO] Apache Oozie Share Lib Spark ... SKIPPED
> [INFO] Apache Oozie Share Lib . SKIPPED
> [INFO] Apache Oozie Docs .. SKIPPED
> [INFO] Apache Oozie WebApp  SKIPPED
> [INFO] Apache Oozie Tools . SKIPPED
> [INFO] Apache Oozie MiniOozie . SKIPPED
> [INFO] Apache Oozie Server  SKIPPED
> [INFO] Apache Oozie Distro  SKIPPED
> [INFO] Apache Oozie ZooKeeper Security Tests .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1.872 s
> [INFO] Finished at: 2017-11-10T13:38:17-05:00
> [INFO] Final Memory: 31M/437M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.1.0:single (default-cli) on 
> project oozie-main: No formats specified in the execution parameters or the 
> assembly descriptor. -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> ERROR, Oozie distro creation failed
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

