Hi All,
Currently I use Eclipse to compile and debug the source code, and I configured
a "Remote Java Application" launch to debug the source code in Eclipse. For
example, I can debug the client-side code when I run the command
"./bin/hdfs dfs -mkdir test"; it goes through FsShell --->
DistributedFileSystem
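For reference, client-side commands such as FsShell can be debugged the same way by adding a JDWP agent to HADOOP_OPTS in hadoop-env.sh. A sketch only; the port 8000 is an arbitrary choice, and suspend=y makes the JVM wait until Eclipse attaches:

```shell
# hadoop-env.sh: make hadoop/hdfs client commands wait for a debugger on port 8000
export HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000"
```

With this set, running "./bin/hdfs dfs -mkdir test" pauses at startup until a "Remote Java Application" session attaches to port 8000.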
On Apr 19, 2016 at 12:21 PM, Vinayakumar B wrote:
> Hi Kun Ren,
>
> You can follow the below steps.
> 1. configure HADOOP_NAMENODE_OPTS="-Xdebug
> -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=3988" in
> hadoop-env.sh
> 2. Start Namenode
> 3. Now Namenode will listen for a remote debugger on port 3988, and you can
> attach to it from Eclipse.
Vinayakumar B wrote:
>
> -Vinay
>
> -- Forwarded message --
> From: Vinayakumar B
> Date: Tue, Apr 19, 2016 at 11:47 PM
> Subject: Re: Eclipse debug HDFS server side code
> To: Kun Ren
>
>
> 1. Since you are debugging remote code, you can't change
Thanks a lot.
I will explore these further.
On Tue, Apr 19, 2016 at 2:37 PM, Vinayakumar B wrote:
> Usually the NameNode console logs will be in the .out file.
>
> -vinay
> On Apr 20, 2016 12:03 AM, "Kun Ren" wrote:
>
>> Hi Vinay,
>>
>> Thanks a
Hi All,
I compiled the source code and used Eclipse to debug it remotely. I want
to see the debug information in the log, so I changed the log level for
some classes. For example, I changed FsShell's log level to DEBUG
(changed it from http://localhost:50070/logLevel), then I add the
fo
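A change made through the /logLevel servlet is in-memory only and is lost on restart; the persistent equivalent is an entry in log4j.properties. A sketch, assuming the stock Hadoop 2.x log4j setup:

```properties
# etc/hadoop/log4j.properties: enable DEBUG for FsShell only
log4j.logger.org.apache.hadoop.fs.FsShell=DEBUG
```

The same transient change can also be made from the CLI with `hadoop daemonlog -setlevel <host:httpport> org.apache.hadoop.fs.FsShell DEBUG`.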
confirm that Hadoop 2.7.2 supports HDFS Federation, but by default there is
only one NameNode; is this correct? Meanwhile, do you think it is possible
to configure HDFS Federation in pseudo-distributed mode on one node?
Thanks so much in advance.
Best,
Kun Ren
> in one node by setting different HTTP/RPC ports and fsimage/edits
> directories for each NameNode. I haven't tried this, so it may not be
> possible.
>
> Regards,
> Akira
>
> On 4/28/16 09:30, Kun Ren wrote:
>
>> Hi Genius,
>>
>> I have two questions about
> ...on one node, I probably will change this,
> right?
> No. If you configure NameNode federation, you need to set
> "dfs.nameservices" and "dfs.namenode.http-address.[nameservice ID]".
> Please see
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/Federation.html
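Following the suffixed-property scheme described above, a minimal hdfs-site.xml sketch for a two-NameNode federation on one node; the nameservice IDs ns1/ns2 and all port numbers are arbitrary choices, not values from this thread:

```xml
<property><name>dfs.nameservices</name><value>ns1,ns2</value></property>

<property><name>dfs.namenode.rpc-address.ns1</name><value>localhost:8020</value></property>
<property><name>dfs.namenode.http-address.ns1</name><value>localhost:50070</value></property>
<property><name>dfs.namenode.name.dir.ns1</name><value>/tmp/hadoop/ns1/name</value></property>

<property><name>dfs.namenode.rpc-address.ns2</name><value>localhost:8021</value></property>
<property><name>dfs.namenode.http-address.ns2</name><value>localhost:50071</value></property>
<property><name>dfs.namenode.name.dir.ns2</name><value>/tmp/hadoop/ns2/name</value></property>
```

Each NameNode gets its own RPC/HTTP port and its own name directory, which is what allows both to run on a single machine.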
Hi Genius,
I want to intercept the requests in the processRpcRequest() method of the
Listener component in Server.java. For example, to intercept the
"mkdirs" and "append" requests, I try to get the method name and
parameters before this line:
callQueue.put(call);
Currently
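A self-contained sketch of the interception pattern being described here, with a hypothetical Call class standing in for Hadoop's Server.Call (the real class and its fields differ; this only illustrates inspecting a call just before callQueue.put(call)):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class InterceptSketch {
    // Hypothetical stand-in for org.apache.hadoop.ipc.Server.Call
    static class Call {
        final String methodName;
        final Object[] params;
        Call(String methodName, Object... params) {
            this.methodName = methodName;
            this.params = params;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Call> callQueue = new LinkedBlockingQueue<>();
        Call call = new Call("mkdirs", "/user/test");

        // Interception point: inspect the call before it is enqueued,
        // mirroring the spot just before callQueue.put(call) in Server.java.
        if (call.methodName.equals("mkdirs") || call.methodName.equals("append")) {
            System.out.println("intercepted: " + call.methodName);
        }
        callQueue.put(call);
        System.out.println("queued: " + callQueue.size());
    }
}
```

In the real Server.java, the method name is not a plain field; it has to be extracted from the RPC request payload, which is what the question above is asking about.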
Hi Genius,
Does HDFS Federation support cross-NameNode operations?
For example:
./bin/hdfs dfs -cp input1/a.xml input2/b.xml
Suppose that input1 belongs to NameNode 1 and input2 belongs to NameNode 2;
does Federation support this operation? And if not, why?
Thanks.
> ...works with federation. The command would copy the file
> from NameNode 1/block pool 1 to NameNode 2/block pool 2.
>
> --Chris Nauroth
>
> From: Kun Ren
> Date: Wednesday, May 25, 2016 at 8:57 AM
> To: "user@hadoop.apache.org"
> Subject: HDFS Federation
>
> Hi G
> ...satisfy
> its promise that rename is atomic. There are more details about this in
> the ViewFs guide.
>
>
> http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/ViewFs.html
>
> --Chris Nauroth
>
> From: Kun Ren
> Date: Wednesday, May 25, 2016 at 11:0
> ...uses
> different file systems, then typically the rename either degrades to a
> non-atomic copy-delete or the call simply fails fast.
>
> --Chris Nauroth
>
> From: Kun Ren
> Date: Wednesday, May 25, 2016 at 1:58 PM
>
> To: Chris Nauroth
> Cc: "user@hadoop.apache.org"
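The ViewFs guide linked above describes client-side mount tables; a minimal core-site.xml sketch for two mount points is shown below. The /my and /your paths echo the setup described in this thread, while the cluster name clusterX and the nameservice names ns1/ns2 are assumptions (a host:port authority works in place of a nameservice ID):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./my</name>
  <value>hdfs://ns1/my</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./your</name>
  <value>hdfs://ns2/your</value>
</property>
```

With this in place, clients see a single namespace rooted at viewfs://clusterX, and each path is routed to its backing NameNode.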
Hi Genius,
I just configured HDFS Federation and tried to use it (2 NameNodes: one is
for /my, the other is for /your). When I run the command:
hdfs dfs -ls /,
I can get:
-r-xr-xr-x - hadoop hadoop 0 2016-06-05 20:05 /my
-r-xr-xr-x - hadoop hadoop 0 2016-06-05 20:05 /your
This
On Sun, Jun 5, 2016 at 1:10 PM, Kun Ren wrote:
>
>> Hi Genius,
>>
>> I just configured HDFS Federation, and try to use it(2 namenodes, one is
>> for /my, another is for /your). When I run the command:
>> hdfs dfs -ls /,
>>
>> I can get:
>> -
Hi Genius,
I understand that we use commands to start the NameNode and DataNode, but I
don't know how HDFS starts the client side and creates the client-side
objects (like DistributedFileSystem) and the client-side RPC server. Could
you please point out how HDFS starts the client-side daemon?
If the clien
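There is no client-side daemon; the client objects are created in-process when an application asks for a FileSystem. A sketch of that pattern, assuming the Hadoop 2.7 client API is on the classpath (this will not compile or run without the Hadoop jars):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With fs.defaultFS = hdfs://..., this instantiates DistributedFileSystem,
        // which opens an RPC *client* connection to the NameNode; no server or
        // daemon process is started on the client machine.
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(new Path("/tmp/example"));  // goes to the NameNode over ClientProtocol
        fs.close();
    }
}
```

FsShell does the same thing internally: each `hdfs dfs` command is a short-lived JVM that builds a FileSystem, issues RPCs, and exits.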
> http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#The_Communication_Protocols
>
> Regards,
> Rakesh
> Intel
>
> On Fri, Jul 22, 2016 at 7:21 PM, Kun Ren wrote:
>
>> Hi Genius,
>>
>> I understand that we use the command to start namenode and datan
> ...write/read/delete files
> etc via client.
>
> Thanks,
> Rakesh
>
>
> On Fri, Jul 22, 2016 at 8:38 PM, Kun Ren wrote:
>
>> Thanks for your reply. So the clients can be located on any machine that
>> has the HDFS client library, correct?
>>
>> On Fri,