Hello,
Is the S3 block file system still supported? I read in the AWS docs that the S3 block
file system is deprecated; not sure if the doc is in error (or just old).
Thanks,
Madhu
Looks like you missed the '#' at the beginning of the line.
Feel free to set HADOOP_LOG_DIR in that script or elsewhere
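To make the suggestion concrete, here is a minimal sketch of setting HADOOP_LOG_DIR in hadoop-env.sh; the path is purely illustrative (not from the thread), and any writable directory works:

```shell
# Hypothetical example: uncomment the export in hadoop-env.sh and point it
# at a writable directory of your choice (path below is illustrative).
export HADOOP_LOG_DIR=/var/log/hadoop
echo "$HADOOP_LOG_DIR"
```

The daemons pick this up the next time they are restarted.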
On 6/22/11 1:02 PM, "Jack Craig" wrote:
>Hi Folks,
>
>In the hadoop-env.sh, we find, ...
>
># Where log files are stored. $HADOOP_HOME/logs by default.
># export HADOOP_LOG_DIR=${HADOOP_
Try this command:
which hadoop
Make sure you're modifying the correct Hadoop config file.
Second, you can also try
echo $JAVA_HOME
And see if it is set.
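The two checks above can be combined into a small sketch that prints where (or whether) the `hadoop` launcher is found on the PATH and what JAVA_HOME currently is; the "not-found" / "<unset>" placeholders are my own, not from the thread:

```shell
# Sketch of the sanity checks suggested above: locate the hadoop launcher
# (if any) and show the JDK the shell currently sees.
hadoop_path=$(command -v hadoop || echo "not-found")
echo "hadoop: $hadoop_path"
echo "JAVA_HOME: ${JAVA_HOME:-<unset>}"
```

If `hadoop` resolves to an unexpected install, you may have been editing the config files of a different copy of Hadoop.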
On 6/11/11 6:20 AM, "J3VS" wrote:
>
>I'm trying to set up Hadoop on Fedora 15. I have set JAVA_HOME (i.e.
>configured Linux to see my
>
>On Fri, Jun 10, 2011 at 7:24 PM, Madhu Ramanna wrote:
>
>> Hello,
>>
>> What is the most optimal way to compress several files already in
>>hadoop ?
>>
>>
Hello,
What is the optimal way to compress several files already in Hadoop?
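One era-appropriate approach (CDH3 / 0.20.x) for files already in HDFS is an identity job with compressed output, e.g. via Hadoop Streaming. The cluster command below is shown as comments since it needs a running cluster, and the paths are illustrative assumptions; the runnable part just shows locally what the gzip codec does per file:

```shell
# Sketch: identity streaming job that rewrites the input with compressed
# output (input/output paths and jar location are illustrative assumptions):
#
#   hadoop jar hadoop-streaming.jar \
#     -D mapred.output.compress=true \
#     -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
#     -mapper cat -reducer cat \
#     -input /user/madhu/in -output /user/madhu/in-gz
#
# Locally, the per-file effect of the gzip codec is just gzip framing:
tmpdir=$(mktemp -d)
printf 'some sample records\n' > "$tmpdir/part-00000"
gzip "$tmpdir/part-00000"
ls "$tmpdir"    # part-00000.gz
rm -rf "$tmpdir"
```

Note that gzip output is not splittable, so for large files a container format such as SequenceFiles with block compression is often preferred.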
reading in your Reducer code by any chance? MTOF may
>> not be thread-safe in the release you're using. Using MultipleOutputs
>> is recommended right now, if this is the cause/case.
>>
>> On Wed, Jun 8, 2011 at 7:58 PM, Madhu Ramanna
>>wrote:
>>> Hel
Hello,
We're using CDH3b3 (Hadoop 0.20.2). In our MapReduce jobs we've extended
MultipleTextOutputFormat to override checkOutputSpecs() and
generateFileNameForKeyValue(), returning a
relative path based on the key. I don't have multiple jobs running with the same
output directory. When I rerun it succe