Hi Habeeb,
Try turning off safemode with the following command:
bin/hadoop dfsadmin -safemode leave
Safemode is on, so no modifications can be made to the filesystem or
blocks (HDFS is in read-only mode); then restart.
Hope you find this useful.
Thank you.
On Mon, Jun 18, 2012 at 3:22 PM, Habeeb
Refer to this:
http://www.cloudera.com/blog/2009/09/apache-hadoop-log-files-where-to-find-them-in-cdh-and-what-info-they-contain/
On Fri, Jun 15, 2012 at 1:49 PM, cldo cldo datk...@gmail.com wrote:
Where are the Hadoop job history log files?
Thanks.
--
https://github.com/zinnia-phatak-dev/Nectar
Hi,
Maybe the namenode is down. Please look into the namenode logs.
On Thu, Jun 14, 2012 at 9:37 PM, Yongwei Xing jdxyw2...@gmail.com wrote:
Hi all
My Hadoop has been running well for some days. Suddenly,
http://localhost:50070 is not accessible. It gives a message like the one below.
HTTP ERROR 404
Hello. I'm trying to test the new patch, 'Allow setting of end-of-record
delimiter for TextInputFormat'.
TextInputFormat may now split lines with delimiters other than newline,
by specifying a configuration parameter textinputformat.record.delimiter
[MAPREDUCE-2254]
Now I'm
Hi,
it may be a stupid question, but in my application I could do without sort
by keys. If only reducers could be told to start their work on the first
maps that they see, my processing would begin to show results much earlier,
before all the mappers are done. Now, eventually, all mappers will
Hi,
The record delimiter is not specified while copying the file, but when
you run the MapReduce job. Just copy the file and specify the
delimiter at the time of the job run.
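For illustration only, here is a minimal Python sketch of what setting textinputformat.record.delimiter does (this is not Hadoop's actual TextInputFormat code, and read_records is a made-up helper): the input is cut into records on the configured delimiter instead of on newlines, so a single record may span several physical lines.

```python
# Sketch: splitting an input stream into records on a custom delimiter,
# analogous to TextInputFormat once textinputformat.record.delimiter is set.
def read_records(data: str, delimiter: str = "\n"):
    """Yield one record per delimiter occurrence (delimiter not included)."""
    start = 0
    while True:
        end = data.find(delimiter, start)
        if end == -1:
            if start < len(data):
                yield data[start:]  # trailing record with no final delimiter
            return
        yield data[start:end]
        start = end + len(delimiter)

# With the default newline delimiter each line is one record;
# with a custom delimiter such as "|", records can span lines.
records = list(read_records("a|b\nc|d", delimiter="|"))
```

In the actual job driver the equivalent move is setting the configuration parameter before submission, e.g. conf.set("textinputformat.record.delimiter", "|").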
Best Regards,
Sonal
Crux: Reporting for HBase https://github.com/sonalgoyal/crux
Nube Technologies
On 06/18/2012 10:19 AM, Mark Kerzner wrote:
If only reducers could be told to start their work on the first
maps that they see, my processing would begin to show results much earlier,
before all the mappers are done.
The sort/shuffle phase isn't just about ordering the keys, it's about
John,
that sounds very interesting, and I may implement such a workflow, but can
I write back to HDFS in the mapper? In the reducer it is a standard
context.write(), but it is a different context.
Thank you,
Mark
On Mon, Jun 18, 2012 at 9:24 AM, John Armstrong j...@ccri.com wrote:
On
Mark,
Instead of the mapper writing intermediate data that usually goes to the
reducers, the mapper can write directly to HDFS if the job is map-only.
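The map-only flow described above can be sketched roughly as follows (plain Python, not the Hadoop API; run_map_only and the part-file naming here are illustrative assumptions): each map task writes its output straight to a part file in the output directory, with no shuffle or reduce stage in between.

```python
import os
import tempfile

# Sketch of a map-only job: each map task writes its results directly to
# the output directory (one part file per task); there is no shuffle/reduce.
def run_map_only(splits, map_fn, out_dir):
    for task_id, split in enumerate(splits):
        part = os.path.join(out_dir, "part-m-%05d" % task_id)
        with open(part, "w") as out:
            for record in split:
                for key, value in map_fn(record):
                    out.write("%s\t%s\n" % (key, value))

# Hypothetical mapper: emit (word, 1) for every word in a record.
out_dir = tempfile.mkdtemp()
run_map_only([["a b"], ["c c"]], lambda r: [(w, 1) for w in r.split()], out_dir)
```

In the real Java API the analogous step is setting the number of reduce tasks to zero (job.setNumReduceTasks(0)), after which the mapper's context.write() output goes straight to the job's OutputFormat.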
According to http://hadoop.apache.org/common/docs/r0.20.1/streaming.html
Mapper-Only Jobs
Often, you may want to process input data using a
On 06/18/2012 10:40 AM, Mark Kerzner wrote:
that sounds very interesting, and I may implement such a workflow, but
can I write back to HDFS in the mapper? In the reducer it is a standard
context.write(), but it is a different context.
Both Mapper.Context and Reducer.Context descend from
Thank you for the great instructions!
Mark
On Mon, Jun 18, 2012 at 9:53 AM, John Armstrong j...@ccri.com wrote:
On 06/18/2012 10:40 AM, Mark Kerzner wrote:
that sounds very interesting, and I may implement such a workflow, but
can I write back to HDFS in the mapper? In the reducer it is a
Hello Ravi,
Thanks for your response.
I got started by running Rumen and generating the required trace file.
However, while trying to run Gridmix with the following command,
java -classpath $JAR_CLASSPATH org.apache.hadoop.mapred.gridmix.Gridmix
-generate 10m ~/Desktop/test_gridmix_output
Hi all,
Can anyone give an idea of how to move data from Hadoop to Cassandra using the
bulk output format class (a MapReduce job)?
Regards
Abhi
Sent from my iPhone
All hadoop contributors/experts,
I am trying to simulate a split brain in our installation. There are a few
things we want to know:
1. Does data corruption happen?
2. If yes to #1, how do we recover from it?
3. What are the corrective steps to take in this situation, e.g. killing one
namenode?
So
On Mon, Jun 4, 2012 at 8:35 PM, Robert Evans ev...@yahoo-inc.com wrote:
I am happy to announce that I was able to get the license on the Yahoo!
Hadoop tutorial updated from Creative Commons Attribution 3.0 Unported
License to Apache 2.0. I have filed HADOOP-8477
Hi ,
Can you update this link in the Jira which Robert created, so that we can merge
whatever we can into the code examples.
Thanks,
Jagat Singh
---
Sent from Mobile, short and crisp.
On 19-Jun-2012 8:34 AM, JAGANADH G jagana...@gmail.com wrote:
On Mon, Jun 4, 2012 at 8:35 PM, Robert Evans
On Tue, Jun 19, 2012 at 8:44 AM, Jagat Singh jagatsi...@gmail.com wrote:
Hi ,
Can you update this link in the Jira which Robert created, so that we can merge
whatever we can into the code examples.
Hi Jagat
Done
Best regards
--
JAGANADH G
http://jaganadhg.in
Thank you,
Would you be willing to volunteer for editing some documentation as well, along
with code?
If you go through the links in the Jira you can get an idea of the work done
till now, and then you can join in accordingly.
Just email me.
Thanks,
Jagat Singh
---
Sent from Mobile, short and crisp.
On
On Tue, Jun 19, 2012 at 9:18 AM, Jagat Singh jagatsi...@gmail.com wrote:
Thank you,
Would you be willing to volunteer for editing some documentation as well, along
with code?
If you go through the links in the Jira you can get an idea of the work done
till now, and then you can join in accordingly.
Just
Thanks Harsh,
The problem is that the testcase will fail when we run it as the root user,
which our Jenkins does.
I don't know whether that is a problem.
Amith
From: Harsh J [ha...@cloudera.com]
Sent: Monday, June 11, 2012 8:50 PM
To: