Hi,
RS Legacy is a pure Java-based implementation. You can probably look at the
encoding/decoding logic in the GitHub repo:
https://github.com/Jerry-Xin/hadoop/blob/master/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoderLegacy.java
https://github.c
Hi,
dfs.namenode.http-address is the fully-qualified HTTP address for each
NameNode to listen on. Similar to the rpc-address configuration, set the
addresses for both NameNodes' HTTP servers (Web UI) to listen on; you can then
browse the status of the Active/Standby NN in a web browser. Also, HDFS supports
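For reference, here is a rough sketch of how those per-NameNode HTTP address keys look when set programmatically; the "mycluster" nameservice, NameNode IDs and hostnames below are hypothetical placeholders, and in a real deployment these entries live in hdfs-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HttpAddressSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // One HTTP (Web UI) address per NameNode of the HA pair.
    conf.set("dfs.nameservices", "mycluster");
    conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
    conf.set("dfs.namenode.http-address.mycluster.nn1", "nn1.example.com:50070");
    conf.set("dfs.namenode.http-address.mycluster.nn2", "nn2.example.com:50070");
    System.out.println(conf.get("dfs.namenode.http-address.mycluster.nn1"));
  }
}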
Hi Jian Feng,
Could you please check your code for any possibility of simultaneous
access to the same file? This situation mostly happens when multiple
clients try to access the same file.
Code reference: https://github.com/apache/hadoop/blob/branch-2.2/hadoop-hdfs-project/hadoop-hdfs/s
I could see "Ticket cache: KEYRING:persistent:1004:1004" in your env.
May be KEYRING persistent cache setting is causing trouble, Kerberos
libraries to store the krb cache in a location and the Hadoop libraries
can't seem to access it.
Please refer these links,
https://community.hortonworks.com/q
>>> For the namenode httpserver start failure, please check Rakesh's comments.
>>>
>>> This is probably due to some missing configuration.
>>>
>>> Could you please re-check the ssl-server.xml,
>> Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
Could you please check that the Kerberos principal name is specified correctly
in "hdfs-site.xml"; it is used to authenticate against Kerberos.
If the keytab file defined in "hdfs-site.xml" doesn't exist or
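If it helps, here is a minimal sketch to verify the principal/keytab pair programmatically; the principal and keytab path below are hypothetical, so substitute your own values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Fails with a LoginException if the keytab is missing/unreadable or the
    // principal doesn't match an entry inside the keytab.
    UserGroupInformation.loginUserFromKeytab(
        "nn/nn1.example.com@EXAMPLE.COM",
        "/etc/security/keytabs/nn.service.keytab");
    System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
  }
}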
at 11:22 AM, Shashi Vishwakarma <
shashi.vish...@gmail.com> wrote:
> Thanks Rakesh.
>
> Just one last question: is there any Java API available for recursively
> applying an ACL, or do I need to iterate over all folders of the dir and
> apply the ACL for each?
>
> Thanks
> Shashi
>
It looks like '/user/test3' is owned by "hdfs", which is denying access while
performing operations as the "shashi" user. One idea is to recursively set the
ACL on sub-directories and files as follows:
hdfs dfs -setfacl -R -m default:user:shashi:rwx /user
The -R option can be used to
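To answer the Java API part of the question: afaik there is no single recursive-ACL call in the FileSystem API, so you would iterate yourself. A rough sketch (the user name and path are just examples; note that default-scoped entries are only valid on directories):

import java.io.IOException;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class RecursiveAclSketch {
  // Walks the tree depth-first, applying access entries everywhere and
  // default entries only on directories (default ACLs are invalid on files).
  static void applyRecursively(FileSystem fs, Path path,
      List<AclEntry> accessEntries, List<AclEntry> defaultEntries)
      throws IOException {
    fs.modifyAclEntries(path, accessEntries);
    if (fs.getFileStatus(path).isDirectory()) {
      fs.modifyAclEntries(path, defaultEntries);
      for (FileStatus child : fs.listStatus(path)) {
        applyRecursively(fs, child.getPath(), accessEntries, defaultEntries);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    AclEntry access = new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.USER).setName("shashi")
        .setPermission(FsAction.ALL).build();
    AclEntry dflt = new AclEntry.Builder().setScope(AclEntryScope.DEFAULT)
        .setType(AclEntryType.USER).setName("shashi")
        .setPermission(FsAction.ALL).build();
    applyRecursively(fs, new Path("/user"),
        Collections.singletonList(access), Collections.singletonList(dflt));
  }
}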
, Rakesh Radhakrishnan
wrote:
> Have you taken multiple thread dumps (jstack) and observed the operations
> being performed during this period of time? Perhaps there is a high
> chance that it is searching for data blocks which it can move around to
> balance the cluster.
>
> Cou
’t see the ENUM HADOOP_CLASSPATH in Yarn API ..
>
>
>
> --Senthil
>
> *From:* Rakesh Radhakrishnan [mailto:rake...@apache.org]
> *Sent:* Friday, August 26, 2016 8:26 PM
> *To:* kumar, Senthil(AWF)
> *Cc:* user.hadoop
> *Subject:* Re: java.lang.NoSuchFieldError: HADOOP_CLAS
Hi Francis,
There could be connection fluctuations between ZKFC and the ZK server;
I've observed the following message in your logs. I'd suggest you start
by analyzing all your ZooKeeper servers' log messages and check the ZooKeeper
cluster status during this period. BTW, could you tell me the ZK
Hi Senthil,
There might be a case of including the wrong version of a jar file. Could you
please check the "Environment.HADOOP_CLASSPATH" enum constant in the
"org.apache.hadoop.yarn.api.ApplicationConstants" class in your Hadoop
jar file? I think it is throwing "NoSuchFieldError" as it's not seeing th
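As a quick, hedged check, you can print which jar the class is actually loaded from at runtime (no cluster needed, just the same classpath your job uses):

import org.apache.hadoop.yarn.api.ApplicationConstants;

public class ClasspathCheck {
  public static void main(String[] args) {
    // Prints the jar file that ApplicationConstants was loaded from.
    System.out.println(ApplicationConstants.class
        .getProtectionDomain().getCodeSource().getLocation());
    // Throws NoSuchFieldError here if that jar is too old to contain the constant.
    System.out.println(ApplicationConstants.Environment.HADOOP_CLASSPATH.name());
  }
}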
Hi Akash,
In general "GSSException: No valid credentials provided" means you don’t
have valid Kerberos credentials. I'm suspecting some issues related to
spnego, could you please revisit all of your kerb related configurations,
probably you can start from the below configuration. Please share
*-si
Yes, it is possible to enable HA mode and Automatic Failover in
a federated namespace. The following are some quick references; I feel
it's worth reading these blogs to get more insight into this. I think you
can start prototyping a test cluster with this and post your queries to
this forum i
mance benchmarks showing what bandwidth we can sustain with shared vs
>> dedicated storage for the journal nodes?
>>
>> Thank you,
>> Konstantinos
>>
>> On Fri, Aug 12, 2016 at 2:26 PM, Rakesh Radhakrishnan wrote:
Hi Konstantinos,
The typical deployment is three Journal Nodes (JNs); you can collocate two
of the three JNs on the same machines where the NameNodes (2 NNs) are running.
The third one can be deployed on the machine where a ZK server is
running (assuming the ZK cluster has 3 nodes). I'd recommend having a dedica
Hi Atri,
Do you mean something like: jconsole [processID]?
AFAIK, local JMX uses the local filesystem. I hope your processes are
running under the same user, to ensure there are no permission issues. Also,
could you please check the %TEMP% and %TMP% environment variables and make sure
YOUR_USER_NAME
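As a hedged sanity check, you can list which local JVMs the attach mechanism can see; this is roughly what jconsole relies on (on JDK 8 this needs tools.jar from the same JDK on the classpath):

import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

public class ListLocalJvms {
  public static void main(String[] args) {
    // JVMs started by another user, or with a different %TMP%/hsperfdata
    // directory, typically won't show up here (and not in jconsole either).
    for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
      System.out.println(vmd.id() + "  " + vmd.displayName());
    }
  }
}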
Hi Atri,
I suspect the problem is due to the space in the path -> "Program Files".
Instead of C:\Program Files\Java\jdk1.8.0_101, please copy the JDK dir to
C:\java\jdk1.8.0_101 and try once.
Rakesh
Intel
On Mon, Aug 8, 2016 at 4:34 PM, Atri Sharma wrote:
> Hi All,
>
> I am trying to run a compiled Hado
Hey Aneela,
I've filtered the below output from your log messages. It looks like you
have a "/ranger" directory under the root directory and directory listing is
working fine.
*Found 1 items*
*drwxr-xr-x - hdfs supergroup 0 2016-08-02 14:44 /ranger*
I think it's putting all the log messa
> Thanks in advance for your help.
>
> Bhagaban
>
> On Mon, Aug 1, 2016 at 6:07 PM, Rakesh Radhakrishnan
> wrote:
>
>> Hi Bhagaban,
>>
>> Perhaps, you can try "Apache Sqoop" to transfer data to Hadoop from
>> Teradata. Apache Sqoop provides an effici
Hi Bhagaban,
Perhaps you can try "Apache Sqoop" to transfer data to Hadoop from
Teradata. Apache Sqoop provides an efficient approach for transferring
large volumes of data between Hadoop-related systems and structured data
stores. It allows support for a data store to be added as a so-called connector and
If I remember correctly, Huawei also adopted the QJM component. I hope @Vinay
has discussed this internally at Huawei before starting this e-mail
discussion thread. I'm +1 for removing the bkjm contrib from the trunk
code.
Also, there are quite a few open sub-tasks under the HDFS-3399 umbrella jira,
whic
can
> display to the client, do you think striping would still help?
> Is there a possibility that, since I know that all the segments of the HD
> image would always be read together, by striping and distributing it on
> different nodes, I am ignoring its spatial/temporal localit
Ren wrote:
> Thanks for your reply. So The clients can be located at any machine that
> has the HDFS client library, correct?
>
> On Fri, Jul 22, 2016 at 10:50 AM, Rakesh Radhakrishnan wrote:
>
>> Hi Kun,
>>
>> HDFS won't start any client side object(for
Hi Kun,
HDFS won't start any client-side object (for example,
DistributedFileSystem). I can say: HDFS client -> user applications access
the file system using the HDFS client, a library that exports the HDFS file
system interface. Perhaps you can visit the API docs,
https://hadoop.apache.org/docs/r2.6
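For example, a bare-bones client program might look like the sketch below; the fs.defaultFS value and the file path are hypothetical, and any machine with the Hadoop client jars and network access to the NameNode/DataNodes can run it.

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
    // FileSystem.get() returns the client-side object (DistributedFileSystem)
    // that talks to the NameNode and DataNodes over RPC.
    try (FileSystem fs = FileSystem.get(conf);
         BufferedReader reader = new BufferedReader(
             new InputStreamReader(fs.open(new Path("/tmp/example.txt"))))) {
      System.out.println(reader.readLine());
    }
  }
}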
have come to cold , I don't need to tell it what exactly files/dirs
> need to be move now ?
> Of course I should tell it what files/dirs need to monitoring.
>
> 2016-07-20 12:35 GMT+08:00 Rakesh Radhakrishnan :
>
>> >>>I have another question is , hdfs mover (A New
ta from hot to cold automatically ? It use algorithm
> like LRU、LFU ?
>
> 2016-07-19 19:55 GMT+08:00 Rakesh Radhakrishnan :
>
>> >>>>Is that mean I should config dfs.replication with 1 ? if more than
>> one I should not use *Lazy_Persist* policies ?
>>
>
Hi Alexandr,
Since you powered off the Active NN machine, during fail-over the SNN timed out
connecting to this machine and fencing failed. Typically, fencing methods
should be configured so as not to allow multiple writers to the same shared
storage. It looks like you are using 'QJM' and it supports the f
ve just installed it. ZKFC became work properly!
>
> Best regards,
> Alexandr
>
> On Tue, Jul 19, 2016 at 5:29 PM, Rakesh Radhakrishnan
> wrote:
>
>> Hi Alexandr,
>>
>> I could see the following warning message in your logs and is the reason
>> for unsuc
Hi Alexandr,
I could see the following warning message in your logs, and it is the reason
for the unsuccessful fencing. Could you please check 'fuser' command execution
on your system?
2016-07-19 14:43:23,705 WARN org.apache.hadoop.ha.SshFenceByTcpPort:
PATH=$PATH:/sbin:/usr/sbin fuser -v -k -n tcp 8020
Is that mean I should config dfs.replication with 1 ? if more than one
I should not use *Lazy_Persist* policies ?
The idea of the Lazy_Persist policy is that, while writing blocks, one replica
will be placed in memory first and then lazily persisted to DISK. It
doesn't mean that you are not
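A small sketch of how the policy could be applied from Java (the path is hypothetical; roughly equivalent to 'hdfs storagepolicies -setStoragePolicy -path <dir> -policy LAZY_PERSIST'):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LazyPersistSketch {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS on the classpath points at HDFS,
    // otherwise the cast below will fail.
    FileSystem fs = FileSystem.get(new Configuration());
    // Files written under this directory get one replica in RAM_DISK first,
    // and are lazily flushed to DISK in the background.
    ((DistributedFileSystem) fs).setStoragePolicy(
        new Path("/tmp/hot-data"), "LAZY_PERSIST");
  }
}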
>>> I couldn't find folder *conf* in *hadoop home*.
Could you check the %HADOOP_HOME%/etc/hadoop/hadoop-env.cmd path? Maybe the
U:/Desktop/hadoop-2.7.2/etc/hadoop/hadoop-env.cmd location.
Typically HADOOP_CONF_DIR will be set to %HADOOP_HOME%/etc/hadoop. Could
you check the "HADOOP_CONF_DIR" env variable valu
Hi Sandeep,
This alert could be triggered if the NN operations exceed a certain
threshold value. Sometimes an increase in RPC processing time increases
the length of the call queue and results in this situation. Could you please
provide more details about the client operations you are performing an
Hi Sandeep,
Please go through the web page
"https://hadoop.apache.org/mailing_lists.html" and you can subscribe by
following the steps described there.
Regards,
Rakesh
On Mon, Jul 18, 2016 at 8:32 AM, sandeep vura wrote:
> Hi Team,
>
> please add my email id in subscribe list.
>
> Regards,
> Sandeep.v
>
your information, while executing the restore command the table is
> restored successfully without this kind of issue.
>
> So could you please explain how to solve this exception?
>
> Looking forward to your response,
>
> Thanks,
> Matheskrishna
>
>
> On Mon, Jul 11, 20
.
Regards,
Rakesh
On Mon, Jul 11, 2016 at 6:13 PM, Rakesh Radhakrishnan
wrote:
> Hi,
>
> Hope you are executing 'distcp' command from the secured cluster. Are you
> executing the command from a non-super user? Please explain me the
> command/way you are executing t
Hi,
Hope you are executing the 'distcp' command from the secured cluster. Are you
executing the command as a non-super user? Please explain the
command/way you are executing it, so I can understand how you are entering the
"super user credentials" and the -D command line args.
Also, please share your hdf
Hi Aneela,
IIUC, the Namenode and Datanode use the _HOST pattern in their principals, and
you need to create separate principals for NN and DN if they run on different
machines. I hope the below explanation will help you.
"dfs.namenode.kerberos.principal" is typically set to nn/_HOST@REALM. Each
Namenode will
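To illustrate the substitution, here is a small sketch using Hadoop's SecurityUtil; the realm and hostnames below are made up.

import org.apache.hadoop.security.SecurityUtil;

public class HostPatternSketch {
  public static void main(String[] args) throws Exception {
    // Hadoop replaces _HOST with the node's own hostname at service start-up,
    // so every NN/DN logs in with its own per-host principal.
    System.out.println(SecurityUtil.getServerPrincipal(
        "nn/_HOST@EXAMPLE.COM", "nn1.example.com")); // nn/nn1.example.com@EXAMPLE.COM
    System.out.println(SecurityUtil.getServerPrincipal(
        "dn/_HOST@EXAMPLE.COM", "dn7.example.com")); // dn/dn7.example.com@EXAMPLE.COM
  }
}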
is JIRA gives you some
>> background. https://issues.apache.org/jira/browse/HADOOP-4487
>>
>>
>>
>> After the setup of your cluster, if you are still having issues with
>> other services or HDFS – This is a diagnostic tool that can help you.
>> https://github.com/stevelough
Hi,
Could you please check that the Kerberos principal name is specified correctly
in "hdfs-site.xml"; it is used to authenticate against Kerberos. If using the
_HOST variable in hdfs-site.xml, ensure that the hostname is getting resolved
and it matches the principal name.
If the keytab file defined in "hdfs-