Harsh,
Can we load a file into HDFS with a replication factor of one and lock the file?
Regards
Abhishek
On Feb 22, 2013, at 1:03 AM, Harsh J ha...@cloudera.com wrote:
HDFS does not have such a client-side feature, but your applications
can use Apache Zookeeper to coordinate and implement this on
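For reference, a rough sketch of such a ZooKeeper-based lock, here using the Curator recipes library as one possible client (the connection string and lock path are hypothetical):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class HdfsFileLock {
        public static void main(String[] args) throws Exception {
            // Connect to the ZooKeeper ensemble (address is hypothetical).
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zkhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();
            // One lock znode per HDFS file you want to guard.
            InterProcessMutex lock = new InterProcessMutex(client, "/locks/myfile");
            lock.acquire();
            try {
                // Read or update the HDFS file here while holding the lock.
            } finally {
                lock.release();
                client.close();
            }
        }
    }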
Hi Abhishek,
I fail to understand what you mean by that, but HDFS generally has no
client-exposed file locking on reads. There are leases preventing
multiple writers to a single file, but nothing on the read side.
Replication of the blocks under a file is a different concept and is
completely
Harsh,
As a part of my use case, my work was to read a file from HDFS and update a
value in it; the value gets incremented for every source load (load id).
Say source load 1 is started; the value in the file would be incremented by 1.
When source load 2 is started, the value in the file could
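For what it's worth, a minimal sketch of that read-increment-rewrite pattern (the path and class name are hypothetical; since HDFS files are write-once, the file is overwritten rather than updated in place, and without an external lock such as the ZooKeeper one above, two concurrent loads can still clobber each other):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LoadIdCounter {
        /** Reads the current load id, writes back id + 1, and returns the new id. */
        public static int incrementLoadId(FileSystem fs, Path file) throws IOException {
            int current = 0;
            if (fs.exists(file)) {
                BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(file)));
                try {
                    current = Integer.parseInt(in.readLine().trim());
                } finally {
                    in.close();
                }
            }
            // HDFS files cannot be updated in place: overwrite with the new value.
            PrintWriter out = new PrintWriter(fs.create(file, true));
            try {
                out.println(current + 1);
            } finally {
                out.close();
            }
            return current + 1;
        }
    }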
Hi!
I am at present using Hadoop 1.1.1.
On the MRUnit website [
https://cwiki.apache.org/confluence/display/MRUNIT/MRUnit+Tutorial] it is
stated that the MRUnit testing framework is based on JUnit and can test
MapReduce programs written for the 0.20, 0.23.x, 1.0.x, and 2.x versions of Hadoop.
I
Hello Varsha,
Sorry for the confusion.
From Maven you can use something like the following. The latest version
will do, but you need to use the classifier hadoop1:

<dependency>
  <groupId>org.apache.mrunit</groupId>
  <artifactId>mrunit</artifactId>
  <version>0.9.0-incubating</version>
  <classifier>hadoop1</classifier>
</dependency>
If you
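For reference, a minimal MRUnit test against that artifact might look like the following (WordCountMapper is a hypothetical new-API mapper emitting (word, 1) pairs):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    public class WordCountMapperTest {
        private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

        @Before
        public void setUp() {
            // WordCountMapper is assumed to be your own mapper class.
            mapDriver = MapDriver.newMapDriver(new WordCountMapper());
        }

        @Test
        public void emitsOneCountPerWord() throws Exception {
            mapDriver.withInput(new LongWritable(0), new Text("cat cat"))
                     .withOutput(new Text("cat"), new IntWritable(1))
                     .withOutput(new Text("cat"), new IntWritable(1))
                     .runTest();
        }
    }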
Hi
I have updated the tutorial with maven discussion.
https://cwiki.apache.org/confluence/display/MRUNIT/MRUnit+Tutorial
Please share whatever other doubts you have while using MRUnit, and I can
update the tutorial accordingly.
Thanks,
Jagat Singh
On Fri, Feb 22, 2013 at 7:49 PM, Jagat Singh
Thank you, Jagat! I would also like to know which IDE to use to develop
MapReduce programs for Hadoop 1.1.1.
The Eclipse plugin is not available for this version.
Regards,
Varsha
On Fri, Feb 22, 2013 at 2:28 PM, Jagat Singh jagatsi...@gmail.com wrote:
Hi
I have updated the tutorial with maven
Thanks Vinod for the wonderful pdf.
It explains how security can be achieved via Kerberos.
Is there any other way to implement security in Hadoop (without using Kerberos)?
On Fri, Feb 22, 2013 at 2:39 AM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
You should read the Hadoop security
Hi Brice,
To add disk space to your datanode you simply need to add another drive,
then add it to the dfs.data.dir (or dfs.datanode.data.dir) entry. After a
datanode restart, Hadoop will start to use it.
It will not balance the existing data between the directories. It will
continue to add to the
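For example, with a second drive mounted, the hdfs-site.xml entry might look like this (the paths are hypothetical; on 1.x the property name is dfs.data.dir):

    <property>
      <name>dfs.data.dir</name>
      <value>/data/disk1/dfs/data,/data/disk2/dfs/data</value>
    </property>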
Just want to add to JM's point.
If you already have the balancer running in the cluster every day, that will help the new
drive(s) get balanced.
P
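For reference, the balancer can also be kicked off by hand with "hadoop balancer -threshold 10", where the threshold is the allowed per-datanode deviation from average cluster usage, in percent.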
From: Jean-Marc Spaggiari
jean-m...@spaggiari.org
Reply-To: user@hadoop.apache.org
Date: Fri, 22 Feb
Hi Mayur,
How are you installing the package? Can you install it with dpkg --install?
Are you trying with another command?
I googled the error (*Use of uninitialized value $ENV{HOME} in
concatenation (.) or string at /usr/bin/lintian*) and found many references
to it. You might want to take a look
Hello,
In my Java Hadoop job, I have reset the reuse variable to -1,
hence a JVM will process multiple tasks.
I have also seen to it that instead of writing to the job context, the
keys and values are accumulated in a hashtable.
When the bytes written to this table reach BUFSIZE (e.g. 150MB)
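For anyone following along, a minimal sketch of that JVM-reuse setting with the old mapred API (the hashtable buffering and BUFSIZE flush logic are application-specific and not shown):

    import org.apache.hadoop.mapred.JobConf;

    public class JvmReuseSetup {
        public static void enableUnlimitedReuse(JobConf conf) {
            // -1 means each task JVM is reused for an unlimited number of tasks.
            conf.setNumTasksToExecutePerJvm(-1);
            // Equivalent to: conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);
        }
    }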
Hello Vikas,
Nowhere does the exception say it failed to connect to the remote machine or HDFS, and you
can run from Windows or Linux; that's not an issue. Please check your code
once again or post your code!
If you still doubt the connection to the remote machine, make a jar out of your
wordcount and give it a chance in
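For example, something like "hadoop jar wordcount.jar WordCount hdfs://namenode:9000/input hdfs://namenode:9000/output", where the jar name, driver class, and paths are placeholders for your own.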
Could you please clarify: are you opening the file in your mapper code and
reading from there?
Thanks
Hemanth
On Friday, February 22, 2013, Lucas Bernardi wrote:
Hello there, I'm trying to use Hadoop MapReduce to process an open file. The
writing process writes a line to the file and syncs
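For context, a rough sketch of that writer side using the Hadoop 1.x sync() call (the path is hypothetical, and on 1.x the visibility of flushed bytes to concurrent readers also depends on the cluster's append/sync configuration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncingWriter {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataOutputStream out = fs.create(new Path("/tmp/stream.txt"));
            out.writeBytes("a line\n");
            // Push the written bytes to the datanodes so readers can see them.
            out.sync();
            out.close();
        }
    }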