doSpnegoSequence(KerberosAuthenticator.java:261)
... 48 more
> Replication Job throws GSSException on secure cluster
> ---
>
> Key: FALCON-505
> URL: https://issues.apache.org/jira/browse/FALCON-505
>
Created JIRA: https://issues.apache.org/jira/browse/FALCON-505
Venkat
On Tuesday, July 15, 2014 4:58 AM, Seetharam Venkatesh wrote:
Venkat, can you please create a bug for this issue. Thanks!
On Mon, Jul 14, 2014 at 10:58 AM, Venkat R wrote:
> After much debugging, we had to add
Venkat R created FALCON-505:
---
Summary: Replication Job throws GSSException on secure cluster
Key: FALCON-505
URL: https://issues.apache.org/jira/browse/FALCON-505
Project: Falcon
Issue Type: Bug
Venkat, can you please create a bug for this issue. Thanks!
On Mon, Jul 14, 2014 at 10:58 AM, Venkat R wrote:
> After much debugging, we had to add the following property to oozie's
> hadoop confs in order to obtain Kerberos tokens between secure hadoop
> clusters. Maybe this can be added
Arpit,
Yes, I have copied the hadoop cluster config under
oozie/conf/hadoop-conf-cluster-1 as explained in the JIRA. However, I needed to
add the following property (a correction to my previous email) in
oozie/conf/hadoop-conf-cluster-1/mapred-site.xml:
mapreduce.job.hdfs-servers
webh
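A minimal sketch of how that entry might look in
oozie/conf/hadoop-conf-cluster-1/mapred-site.xml, assuming the truncated value
lists the NameNodes with the webhdfs scheme; the hosts and ports below are
placeholders, not taken from this thread:

  <!-- Hypothetical mapred-site.xml entry; hosts and ports are placeholders. -->
  <property>
    <name>mapreduce.job.hdfs-servers</name>
    <value>webhdfs://source-nn.example.com:50070,webhdfs://target-nn.example.com:50070</value>
  </property>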
Hmm, I don't recall having to set that. However, take a look at the first
comment here:
https://issues.apache.org/jira/browse/FALCON-389
I had to make sure the NN and RM properties were set, but that was when we had
to do HCat replication.
On Sun, Jul 13, 2014 at 10:28 PM, Venkat R wrote:
> After much
After much debugging, we had to add the following property to Oozie's hadoop
confs in order to obtain Kerberos tokens between secure hadoop clusters. Maybe
this can be added to the generated Oozie job?
oozie.launcher.mapreduce.job.hdfs-servers
${nameNode1},${nameNode2}
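A sketch of how this might appear if injected into the replication action's
<configuration> block in the generated workflow; the placement is an assumption
for illustration, not Falcon's confirmed output:

  <!-- Hypothetical placement in the replication action's configuration;
       ${nameNode1} and ${nameNode2} resolve to the two NameNode URIs. -->
  <property>
    <name>oozie.launcher.mapreduce.job.hdfs-servers</name>
    <value>${nameNode1},${nameNode2}</value>
  </property>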
Venkat
On Friday
Venkatesh,
* Switched to using webhdfs
* Using Hadoop-2/YARN with security enabled
* Both the source and target oozie-site.xml point to the oozie/conf/hadoop-conf
of all the clusters they talk to.
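One common way to express that per-cluster mapping is Oozie's
HadoopAccessorService property in oozie-site.xml; a sketch, where the NameNode
authority and conf directory are placeholders:

  <!-- Hypothetical oozie-site.xml mapping; authority and directory names
       are placeholders. -->
  <property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <value>*=hadoop-conf,source-nn.example.com:8020=hadoop-conf-cluster-1</value>
  </property>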
Venkat
On Friday, July 11, 2014 9:50 AM, Seetharam Venkatesh wrote:
This issue has been documented in the twiki page:
* Use of hftp as the scheme for the read-only interface in the cluster entity [[
https://issues.apache.org/jira/browse/HADOOP-10215][will not work in Oozie]]
The alternative is to use the webhdfs scheme instead; it's been tested
with DistCp.
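In the cluster entity, the read-only interface using webhdfs would look roughly
like this; the endpoint host, port, and version are placeholders:

  <!-- Hypothetical readonly interface in a Falcon cluster entity;
       endpoint and version are placeholders. -->
  <interface type="readonly" endpoint="webhdfs://source-nn.example.com:50070"
             version="2.2.0"/>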
Have you
I am able to run Oozie jobs on both the clusters (primaryCluster and
backupCluster, both secured).
I'm also able to run the hdfs -ls command against primaryCluster from the
backupCluster Oozie/Falcon machine.
It's the replication job that kicks off on a backupCluster compute node that
fails when tryin
Hmm, we have been running this setup and it works for us. Are you able to
run any other job through Oozie (without Falcon)? If so, can you do the
following:
kinit as some user and make the following call using curl:
curl --negotiate -u : "http://eat1-nertznn01.grid.linkedin.com:50070/webhdfs/v1/?op
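Spelled out as a full sequence, assuming the truncated query was op=LISTSTATUS
(my guess) and using a placeholder principal:

  # Hypothetical verification sequence; the principal and the op= parameter
  # are assumptions, not from the thread.
  kinit user@EXAMPLE.COM
  curl --negotiate -u : "http://eat1-nertznn01.grid.linkedin.com:50070/webhdfs/v1/?op=LISTSTATUS"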
Hi Arpit,
The jersey-server and jersey-core jars were missing; I copied them to
WEB-INF, and the coordinator is able to talk to the source cluster NameNode to
identify the new dirs and kick off the workflow.
But the workflow fails with a similar exception as with hftp (unable to get the
token) -- exc
This looks like the Oozie war file is missing some jars that hadoop needs.
What version of hadoop are you running and how did you do the Oozie war
setup?
On Thursday, July 10, 2014, Venkat R wrote:
> Switched to webhdfs, but the co-ordinator keeps failing with the following
> exception and think
Switched to webhdfs, but the co-ordinator keeps failing with the following
exception and thinks the data on the other side is not present. I am running
the Apache version of Oozie (4.0.1).
Any thoughts?
Venkat
ACTION[006-140710220847349-oozie-oozi-C@1] Error,
java.lang.NoClassDefFoundError: C
ok, will try now and see.
On Thursday, July 10, 2014 2:37 PM, Arpit Gupta wrote:
From the stack trace it looks like you are using hftp. We ran into issues when
running tests against secure hadoop + hftp:
https://issues.apache.org/jira/browse/HDFS-5842
I recommend switching the readonly interface to webhdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Jul 10, 20
Hi Arpit,
I have provided that in both cluster definitions, like below:
On Thursday, July 10, 2014 2:16 PM, Arpit Gupta wrote:
You need to provide the NN principal in the cluster.xml for each cluster. The
following property needs to be provided in each cluster's xml:
dfs.namenode.kerberos.principal
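In a Falcon cluster entity this is typically carried in the properties section;
a sketch with a placeholder principal value:

  <!-- Hypothetical properties block in each cluster entity;
       the principal value is a placeholder. -->
  <properties>
    <property name="dfs.namenode.kerberos.principal" value="nn/_HOST@EXAMPLE.COM"/>
  </properties>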
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Jul 10, 2014, at 2:08 PM, Venkat R wrote:
> Using the demo exam
Using the demo example. There is a replication job that copies a dataset from
the Source to the Target cluster by launching a REPLICATION job on the Target
Oozie cluster. But it fails with the following GSSException:
I have added both the oozie servers (one each for the source and target
clusters) to the core-site.xm