Thank you for your comment.
I understand that stack traces are very useful for debugging and troubleshooting.
I think that changing the LogLevel has a big influence on other output.
So a separate control over whether stack traces are written to the log file is necessary.
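For reference, the only existing knob is the daemon-wide log level, which can be
flipped at runtime with the daemonlog tool (host, port and logger name below are
just placeholders):
  hadoop daemonlog -setlevel namenode-host:50070 org.apache.hadoop.hdfs.StateChange DEBUG
But that raises the verbosity of every message from that logger, which is exactly
the influence on others mentioned above.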
I'll register a patch in JIRA.
Regards,
Shinichi Yamashita
I think the client-side configuration will take effect.
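To make "client side" concrete, here is a minimal sketch assuming the Hadoop 1.x
API (dfs.block.size is the 1.x property name; 2.x renamed it to dfs.blocksize,
and the path is just an example):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ClientBlockSize {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Set on the client; per this thread, this is what the write path uses,
      // not the value in the namenode's hdfs-site.xml.
      conf.setLong("dfs.block.size", 64L * 1024 * 1024); // 64MB
      FileSystem fs = FileSystem.get(conf);
      // Files created through this FileSystem get 64MB blocks.
      fs.create(new Path("/tmp/blocksize-demo.txt")).close();
    }
  }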
Shumin
On Jul 12, 2013 11:50 AM, "Shalish VJ" wrote:
> Hi,
>
>
> Suppose the block size set in the configuration file at the client side is 64MB,
> the block size set in the configuration file at the name node side is 128MB, and
> the block size set in the configuration
Hi Shalish,
The client-side conf will take precedence. Further, you can use the FileSystem
API, which can set the block size per file:
create(Path f, boolean overwrite, int bufferSize, short replication, long
blockSize)
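For example, a sketch using that overload (the file name and sizes are just
examples):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PerFileBlockSize {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());
      FSDataOutputStream out = fs.create(
          new Path("/tmp/large-blocks.dat"),
          true,                  // overwrite
          4096,                  // bufferSize
          (short) 3,             // replication
          128L * 1024 * 1024);   // blockSize: overrides any conf value
      out.writeUTF("hello");
      out.close();
    }
  }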
Cheers,
Subroto Sanyal
On Jul 13, 2013, at 9:10 PM, Shalish VJ wrote:
> Hi,
>
> Are you sure?
Hi,
Are you sure?
Or have you ever tried it out?
Please advise.
From: Azuryy Yu
To: user@hadoop.apache.org
Sent: Saturday, July 13, 2013 3:19 PM
Subject: Re: Hadoop property precedence
The conf that the client is running on will take effect.
On Jul 13, 2013 4:42 PM, "Kiran Dangeti" wrote:
Hi Andrea,
For copying the full sky map to each node, look up the distributed cache.
It works by placing the sky map file on HDFS; each task will then pull it
down locally when needed (see the sketch below). For feeding the input data into Hadoop, what format is
it in currently? One simple way would be to have a text file with
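To illustrate the distributed cache part, a minimal sketch assuming the Hadoop
1.x mapreduce API (the sky map path, class names and key/value types are
hypothetical):

  import java.io.IOException;
  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;

  public class SkyMapJob {
    public static class SkyMapper extends Mapper<LongWritable, Text, Text, Text> {
      private Path localSkyMap;

      @Override
      protected void setup(Context context) throws IOException {
        // Each task gets a node-local copy of every cached file.
        Path[] cached = DistributedCache.getLocalCacheFiles(context.getConfiguration());
        localSkyMap = cached[0]; // open and parse it here, once per task
      }
      // map() would then look up each input record against the sky map.
    }

    public static void main(String[] args) throws Exception {
      Job job = new Job(new Configuration(), "skymap-lookup");
      // The file must already be on HDFS before the job is submitted.
      DistributedCache.addCacheFile(new URI("/data/skymap.dat"), job.getConfiguration());
      job.setJarByClass(SkyMapJob.class);
      job.setMapperClass(SkyMapper.class);
      // ... set input/output paths and formats as usual, then job.waitForCompletion(true);
    }
  }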
Hello,
I have downloaded Hadoop 1.1.2, the current stable version, from the Apache
web site. Then I simply used the command:
ant -Dcompile-core=true
to try to build the Hadoop core.
After the successful compilation, I used the diff tool to try to compare
the hadoop-core.jar that I just
The conf that the client is running on will take effect.
On Jul 13, 2013 4:42 PM, "Kiran Dangeti" wrote:
> Shalish,
>
> The default block size is 64MB, which is good at the client end. Make sure
> it is the same in the conf at your end as well. You can increase the size of
> each block to 128MB or greater; the only thing you will see is that the
> processing will be faster, but in the end there may be a chance of losing data.
Shalish,
The default block size is 64MB, which is good at the client end. Make sure
it is the same in the conf at your end as well. You can increase the size of
each block to 128MB or greater; the only thing you will see is that the
processing will be faster, but in the end there may be a chance of losing data.
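If in doubt about which value actually took effect, you can ask HDFS for the
block size of a file you just wrote, e.g. (a sketch; the path is hypothetical):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class CheckBlockSize {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());
      // Reports the block size the cluster actually applied to this file.
      long bs = fs.getFileStatus(new Path("/tmp/blocksize-demo.txt")).getBlockSize();
      System.out.println("block size = " + bs / (1024 * 1024) + "MB");
    }
  }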
Thanks,
K
Hi, please help me on this. It's urgent.
From: Shalish VJ
To: hadoop-mailerlist
Sent: Friday, July 12, 2013 10:20 PM
Subject: Hadoop property precedence
Hi,
Suppose the block size set in the configuration file at the client side is 64MB,
and the block size set in the config