[
https://issues.apache.org/jira/browse/FALCON-787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14166060#comment-14166060
]
Balu Vellanki commented on FALCON-787:
--------------------------------------
I tested end-to-end using the hdfs-replication recipe on a machine with umask 077, and the recipe e2e test fails. When I submit the recipe using FalconCLI:
{code}
[hrt_qa@falcon-balu-6-3 falcon]$ ./bin/falcon recipe -name hdfs-replication-balu
Copied WF to: hdfs://172.18.145.72:8020/user/hrt_qa/falcon/recipes/hdfs-replication-balu/hdfs-replication-balu-workflow.xml
Generated process file to be scheduled:
<process XML here>
Completed recipe processing
Stacktrace:
org.apache.falcon.client.FalconCLIException: Bad Request;dryRun failed on cluster primaryCluster
at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1090)
at org.apache.falcon.cli.FalconCLI.recipeCommand(FalconCLI.java:1003)
at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:203)
at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:144)
Caused by: org.apache.falcon.client.FalconCLIException: Bad Request;dryRun failed on cluster primaryCluster
at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1132)
at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:686)
at org.apache.falcon.client.FalconClient.validate(FalconClient.java:311)
at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1087)
... 3 more
[hrt_qa@falcon-balu-6-3 falcon]$
{code}
This is caused by the /tmp/ staging directory that the Oozie coordinator XML is written to: it is created by the falcon user with permissions 700 (because of umask 077), so user hrt_qa cannot access it during dryRun.
{code}
Caused by: org.apache.falcon.FalconException: AUTHENTICATION : E0507 : E0507: Could not access to [hdfs://172.18.145.72:8020/tmp/falconhdfs-replication-balu1412899852231/DEFAULT/coordinator.xml], Permission denied: user=hrt_qa, access=EXECUTE, inode="/tmp/falconhdfs-replication-balu1412899852231":falcon:hdfs:drwx------
{code}
This needs to be fixed.
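A minimal sketch of one possible fix, assuming the staging directory is created through the Hadoop FileSystem API (the class and method below are illustrative, not the actual Falcon code): create the temporary directory with explicit permissions rather than letting the server's umask apply, so the dryRun user can read the generated coordinator.xml.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class StagingDirSketch {
    // Create the /tmp staging dir with explicit 755 permissions so users
    // other than "falcon" (e.g. hrt_qa) can traverse it during dryRun.
    public static Path createReadableStagingDir(Configuration conf, String dirName)
            throws java.io.IOException {
        FileSystem fs = FileSystem.get(conf);
        Path staging = new Path("/tmp/" + dirName);
        fs.mkdirs(staging, new FsPermission((short) 0755));
        // mkdirs still applies the client umask to the requested permission,
        // so set it again explicitly to override umask 077.
        fs.setPermission(staging, new FsPermission((short) 0755));
        return staging;
    }
}
{code}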
> FalconCLI - Submit recipe failed
> ---------------------------------
>
> Key: FALCON-787
> URL: https://issues.apache.org/jira/browse/FALCON-787
> Project: Falcon
> Issue Type: Bug
> Components: client
> Affects Versions: 0.6
> Reporter: Balu Vellanki
> Assignee: Sowmya Ramesh
> Fix For: 0.6
>
> Attachments: FALCON-787-v1.patch, FALCON-787.patch
>
>
> Attempted to submit the falcon hdfs-replication recipe without setting
> falcon.recipe.path in client.properties. Expected Falcon to look under the
> falcon.home directory for the recipe:
> {code}
> [hrt_qa@falcon-balu-6-3 falcon]$ ./bin/falcon recipe -name hdfs-replication
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Recipe template file does not exist : null/hdfs-replication-template.xml
> at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1049)
> at org.apache.falcon.cli.FalconCLI.recipeCommand(FalconCLI.java:1003)
> at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:203)
> at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:144)
> {code}
> In this case falcon.home is not set. A better behavior would be to require the
> user to set falcon.recipe.path before using the recipe CLI.
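> A minimal fail-fast sketch of such a check (the class name and exception used
> here are illustrative assumptions, not the actual Falcon client code):
> {code}
> import java.io.FileInputStream;
> import java.util.Properties;
>
> public class RecipePathCheck {
>     // Load client.properties and insist that falcon.recipe.path is set,
>     // instead of later resolving "null/hdfs-replication-template.xml".
>     public static String requireRecipePath(String clientPropertiesFile) throws Exception {
>         Properties props = new Properties();
>         try (FileInputStream in = new FileInputStream(clientPropertiesFile)) {
>             props.load(in);
>         }
>         String recipePath = props.getProperty("falcon.recipe.path");
>         if (recipePath == null || recipePath.trim().isEmpty()) {
>             throw new IllegalStateException(
>                 "falcon.recipe.path is not set in client.properties; set it to the "
>                 + "directory containing <recipe>-template.xml and <recipe>-workflow.xml");
>         }
>         return recipePath;
>     }
> }
> {code}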
> Error 2: After setting falcon.recipe.path in client.properties and
> restarting Falcon, I saw this error:
> {code}
> [hrt_qa@falcon-balu-6-3 falcon]$ ./bin/falcon recipe -name hdfs-replication
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Recipe workflow file does not exist : /recipes/hdfs-replication/hdfs-replication-workflow.xml
> Submitted process entity: /tmp/falcon-recipe-14121951222843863920744826423593.xml
> at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1090)
> at org.apache.falcon.cli.FalconCLI.recipeCommand(FalconCLI.java:1003)
> at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:203)
> at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:144)
> Caused by: java.lang.Exception: Recipe workflow file does not exist : /recipes/hdfs-replication/hdfs-replication-workflow.xml
> at org.apache.falcon.recipe.RecipeTool.validateArtifacts(RecipeTool.java:139)
> at org.apache.falcon.recipe.RecipeTool.run(RecipeTool.java:74)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.falcon.recipe.RecipeTool.main(RecipeTool.java:60)
> {code}
> The FalconCLI.twiki doc should be updated to state that "recipename-workflow.xml"
> must be present in the falcon.recipe.path directory.
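> For illustration (the directory below is a made-up example path), the expected
> layout would be:
> {code}
> # client.properties
> falcon.recipe.path=/home/hrt_qa/falcon/recipes/hdfs-replication
>
> # that directory must contain:
> #   hdfs-replication-template.xml
> #   hdfs-replication-workflow.xml
> {code}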
> Once I manually fixed the above error and tried to resubmit the recipe, it
> failed with the following error.
> {code}
> [hrt_qa@falcon-balu-6-3 falcon]$ ./bin/falcon recipe -name hdfs-replication
> recipeWfPathName: falcon/recipes/hdfs-replication/hdfs-replication-workflow.xml
> Completed disaster recovery
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;dryRun failed on cluster primaryCluster
> Submitted process entity: /tmp/falcon-recipe-1412196250999979012201450070295.xml
> at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1090)
> at org.apache.falcon.cli.FalconCLI.recipeCommand(FalconCLI.java:1003)
> at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:203)
> at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:144)
> Caused by: java.io.FileNotFoundException: File hdfs://172.18.145.72:8020/user/falcon/falcon/recipes/hdfs-replication/hdfs-replication-workflow.xml does not exist.
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:697)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
> at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
> at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
> {code}