I am using Hadoop version 0.20.203.0.
I have set dfs.support.append to true and am using the append method.
It is working, but I need to know how stable it is to deploy and use on
production clusters.
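Concretely, this is roughly what I am doing (a minimal sketch; the class
name, path, and payload are made up):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // Mirrors the dfs.support.append=true setting mentioned above.
            conf.setBoolean("dfs.support.append", true);
            FileSystem fs = FileSystem.get(conf);
            // Path is illustrative; the file must already exist.
            FSDataOutputStream out = fs.append(new Path("/data/events.log"));
            out.write("one more record\n".getBytes("UTF-8"));
            out.close();
        }
    }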
Regards,
Jagaran
From: jagaran das
To: common-user@hadoop.a
Hi All,
Is append to an existing file now supported in Hadoop for production
clusters?
If yes, please let me know in which version and how.
Thanks
Jagaran
On Jun 12, 2011, at 11:15 PM, Nitesh Kaushik wrote:
> Dear Sir/Madam,
>
> I am Nitesh Kaushik, working with an institution dealing in
> satellite images. I am trying to read bulk images
> using Hadoop, but unfortunately that is not working, and I am not
> getting any clue how to work with images in Hadoop.
On Jun 11, 2011, at 6:20 AM, J3VS wrote:
>
> I'm trying to set up Hadoop on Fedora 15. I have set JAVA_HOME (i.e. configured
> Linux to see my Java installation) and I have altered hadoop-env.sh so that
> JAVA_HOME is set to my JDK folder,
Check for typos.
Something else you can
On Jun 13, 2011, at 5:52 AM, Steve Loughran wrote:
>
> Unless your cluster is bigger than Facebook's, you have too many small files
>
>
+1
(I'm actually sort of surprised the NN is still standing with only 24 MB. The
GC logs would be interesting to look at.)
I'd also likely increase the block size.
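Purely as a sketch (dfs.block.size is the 0.20.x property name; the value
and path are invented), raising the block size for new files looks roughly
like:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BigBlockWriter {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // 256 MB is an illustrative value; default is 64 MB.
            conf.setLong("dfs.block.size", 256L * 1024 * 1024);
            FileSystem fs = FileSystem.get(conf);
            // Files created through this FileSystem use the larger block size.
            FSDataOutputStream out = fs.create(new Path("/data/packed-files.seq"));
            out.close();
        }
    }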
Dear Sir/Madam,
I am Nitesh Kaushik, working with an institution dealing in
satellite images. I am trying to read bulk images
using Hadoop, but unfortunately that is not working, and I am not
getting any clue how to work with images in Hadoop.
Hadoop is working
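(One common sketch of an approach, not taken from this thread: pack the
images into a SequenceFile of filename/bytes pairs so map tasks can stream
them; every name and path below is illustrative.)

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class ImagePacker {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Pack local images into one HDFS SequenceFile: name -> bytes.
            SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, new Path("/data/images.seq"),
                Text.class, BytesWritable.class);
            for (File img : new File("/local/images").listFiles()) {
                byte[] bytes = new byte[(int) img.length()];
                FileInputStream in = new FileInputStream(img);
                in.read(bytes); // fine for a sketch; loop for robustness
                in.close();
                writer.append(new Text(img.getName()), new BytesWritable(bytes));
            }
            writer.close();
        }
    }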
Hi guys,
I intend to record the write pattern of a job using the following
record: <...>, <...>. In order to obtain
this, I was thinking of maintaining a global buffer
(Collection) and adding to the buffer whenever there is a
write called via the OutputFormat class.
But I am not really able to figure out
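For what it's worth, a rough sketch of that idea as a delegating
RecordWriter (all class and field names below are hypothetical, and note
that a static buffer is only "global" within one task's JVM, since tasks
run in separate JVMs):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.mapreduce.RecordWriter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Wraps the OutputFormat's normal writer and records each write() call.
    public class RecordingWriter<K, V> extends RecordWriter<K, V> {
        // Per-task-JVM buffer; a truly job-wide record needs external storage.
        static final List<String> BUFFER =
            Collections.synchronizedList(new ArrayList<String>());
        private final RecordWriter<K, V> delegate;

        public RecordingWriter(RecordWriter<K, V> delegate) {
            this.delegate = delegate;
        }

        @Override
        public void write(K key, V value) throws IOException, InterruptedException {
            BUFFER.add(key + " @ " + System.currentTimeMillis());
            delegate.write(key, value);
        }

        @Override
        public void close(TaskAttemptContext context)
                throws IOException, InterruptedException {
            delegate.close(context);
        }
    }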
On 6/13/11 6:23 AM, "Joey Echeverria" wrote:
> This feature doesn't currently work. I don't remember the JIRA for it, but
> there's a ticket which will allow a reader to read from an HDFS file before
> it's closed. In that case, you implement a queue by having the producer write
> to the end of the file and the reader read from the beginning of the file.
Thank you very much.
I see.
At 2011-06-13 19:23:59,"Joey Echeverria" wrote:
>This feature doesn't currently work. I don't remember the JIRA for it, but
>there's a ticket which will allow a reader to read from an HDFS file before
>it's closed. In that case, you implement a queue by having the producer write
>to the end of the file and the reader read from the beginning of the file.
On 06/13/2011 07:52 AM, Loughran, Steve wrote:
>> On 06/10/2011 03:23 PM, Bible, Landy wrote:
>> I'm currently running HDFS on Windows 7 desktops. I had to create a
>> hadoop.bat that provided the same functionality as the shell scripts, and
>> some Java Service Wrapper configs to run the DataNodes and NameNode as
>> Windows services.
On 06/12/2011 03:01 AM, Raja Nagendra Kumar wrote:
Hi,
I see Hadoop needs Unix (or Cygwin on Windows) to run.
It would be much nicer if Hadoop got away from the shell scripts through
appropriate Ant scripts or a Java admin-console kind of model. Then it
would become lighter for development.
On 06/10/2011 05:31 PM, si...@ugcv.com wrote:
I would add more RAM for sure, but there's a hardware limitation. What if
the motherboard
couldn't support more than ... say 128 GB? It seems I can't keep adding RAM
to resolve it.
Compressed pointers: do you mean turning on the JVM's compressed references
(-XX:+UseCompressedOops)?
I did
On 06/10/2011 03:23 PM, Bible, Landy wrote:
Hi Raja,
I'm currently running HDFS on Windows 7 desktops. I had to create a hadoop.bat
that provided the same functionality as the shell scripts, and some Java
Service Wrapper configs to run the DataNodes and NameNode as Windows services.
Once I
Dear All,
I am trying to write a test case for my MapReduce job, which uses the new API
in 0.20 (the mapreduce package rather than the mapred package). The
mapper uses the distributed cache, so I could not use MRUnit to do the
test. I thought I could use the ClusterMapReduceTestCase to
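In case it helps frame the question, here is the bare-bones shape I had in
mind (a sketch only; createJobConf() and getFileSystem() are the
mini-cluster helpers as I understand the 0.20 mapred test harness, and
everything else is illustrative):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.ClusterMapReduceTestCase;
    import org.apache.hadoop.mapred.JobConf;

    // ClusterMapReduceTestCase spins up in-process DFS and MR clusters
    // (JUnit 3 style); setUp()/tearDown() in the base class manage them.
    public class MyJobTest extends ClusterMapReduceTestCase {
        public void testJob() throws Exception {
            JobConf conf = createJobConf(); // points at the mini cluster
            // Stage input, add files to the DistributedCache via conf,
            // then submit the job (e.g. with JobClient.runJob(conf)).
            getFileSystem().mkdirs(new Path("/test/input"));
        }
    }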
This feature doesn't currently work. I don't remember the JIRA for it, but
there's a ticket which will allow a reader to read from an HDFS file before
it's closed. In that case, you implement a queue by having the producer write
to the end of the file and the reader read from the beginning of the file.
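In sketch form (path and buffer size invented, and contingent on that
ticket landing so readers can see unclosed data), the reader side of the
queue would look something like:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Tails an HDFS file from a remembered offset, queue-consumer style.
    public class QueueTail {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            Path queue = new Path("/queues/events"); // made-up path
            long offset = 0; // consumer position; persist it to resume
            FSDataInputStream in = fs.open(queue);
            in.seek(offset);
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                offset += n;
                // process buf[0..n) here
            }
            in.close();
        }
    }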
Eric,
Is the problem reported as something like "webapps not found on
CLASSPATH"? I see this on mapreduce and hdfs projects at times, but
common tests usually run fine out of the box for me.
If it is indeed that, this conversation may help solve it or provide an
answer: http://search-hadoop.com/m/gLWel
Hi,
I've imported the Hadoop common trunk into Eclipse (.classpath and
.project created via ant eclipse).
ant test builds fine (0 failures).
When I run the JUnit tests from Eclipse (right-click on the test folder,
"Run as test"), there are many failures...
Is there some env (-D...) to give when r