e. Is 1.0.0 ready for production usage?
>
> thanks,
>
> stan
--
Mahadev Konar
Hortonworks Inc.
http://hortonworks.com/
Bccing common-user and ccing mapred-user. Please use the correct
mailing lists for your questions.
You can use -Dstream.map.output.field.separator=
to specify the separator.
The link below should have more information.
http://hadoop.apache.org/common/docs/r0.20.205.0/streaming.html#Custom
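For example (the streaming jar path and the input/output/mapper names
below are just placeholders, adjust them for your setup):

  hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
      -D stream.map.output.field.separator=. \
      -D stream.num.map.output.key.fields=4 \
      -input /user/me/input \
      -output /user/me/output \
      -mapper /bin/cat \
      -reducer /bin/cat

Here the mapper output is split on '.'; stream.num.map.output.key.fields
is only needed if you want the key to span more than one field.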
Moving it to mapreduce-list.
Sophie,
This could just be a bug in 0.23. 0.23 does not have
jobtrackers/tasktrackers.
Could you see if you can recreate this? If yes, please do file a jira on
this.
thanks
mahadev
On Mon, Dec 19, 2011 at 12:22 PM, Raj V wrote:
> Sophie
>
>
> Are the clocks in syn
Patai,
Did you take a look at Ambari?
http://incubator.apache.org/projects/ambari.html
You might want to get on their dev mailing list to find out more and
see if you want to join hands.
thanks
mahadev
On Mon, Nov 21, 2011 at 5:56 PM, Patai Sangbutsarakum
wrote:
> Besides SCM from Cloudera, and f
Arun,
This was fixed a week ago or so. Here's the infra ticket.
https://issues.apache.org/jira/browse/INFRA-3960
You should be able to add new contributors now.
thanks
mahadev
On Sun, Oct 16, 2011 at 9:36 AM, Arun C Murthy wrote:
> I've tried, and failed, many times recently to add 'contribut
Hi Prashanth,
Sorry to disappoint :). That isn't true.
Folks in HDFS are working on this currently.
Sanjay has uploaded a design doc in case you want to check it out:
https://issues.apache.org/jira/browse/HDFS-1623
thanks
mahadev
On Sun, Jul 24, 2011 at 11:44 PM, Prashant wrote:
> On 07/19/
Jim,
you can use FileUtil.copy() methods to copy files.
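Roughly something like this (a quick sketch, the paths are made up):

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.FileUtil;
  import org.apache.hadoop.fs.Path;

  public class CopyExample {
    public static void main(String[] args) throws IOException {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      // made-up source/destination paths, just to show the call
      Path src = new Path("/user/jim/input/part-00000");
      Path dst = new Path("/user/jim/backup/part-00000");
      // copy within the same FileSystem; false = don't delete the source
      FileUtil.copy(fs, src, fs, dst, false, conf);
    }
  }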
Hope that helps.
--
thanks
mahadev
@mahadevkonar
On Fri, May 13, 2011 at 2:00 PM, lohit wrote:
> There is no FileSystem API to copy.
> You could try
> hadoop dfs -cp
>
> which basically reads the file and writes to new file.
> The c
You should be able to use bin/start-mapred.sh and bin/start-dfs.sh
separately. The script bin/start-mapred.sh needs to run on the
jobtracker node; it looks at the slaves file on that node and sshes
to all the slave nodes to start the tasktrackers. start-dfs.sh does
the same on the namenode.
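So in practice it is just (assuming conf/slaves lists your slave
hostnames on both nodes):

  # on the namenode host
  bin/start-dfs.sh

  # on the jobtracker host
  bin/start-mapred.sh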
http://hadoop.apache.org/common/docs/current/commands_manual.html#fsck
Fsck should be of help.
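For example (run it as the HDFS superuser):

  bin/hadoop fsck / -files -blocks -locations

That should show which blocks are missing or corrupt and keeping the
namenode below its safemode threshold.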
thanks
mahadev
On Fri, Feb 11, 2011 at 11:53 AM, Edupuganti, Sandhya
wrote:
> Our Namenode is going into Safemode after every restart. It reports the ratio
> to be .98xxx whereas it is looking for 0.
Keith,
I can't see how you can do it. The status is set in PipeMapred.java,
which is the streaming plugin that talks to the MapReduce framework
and generates this status message. I can't think of any way of hacking
around it except having your own streaming jar :).
thanks
mahadev
On Fri, Feb 4, 2011 a
Hi Roger,
Please use Cloudera's mailing list for communications regarding Cloudera
distributions.
Thanks
mahadev
On 12/15/10 10:43 AM, "Roger Smith" wrote:
> If you would like MR-1938 patch (see link below), "Ability for having user's
> classes take precedence over the system classes for tas
Hi Praveen,
Looks like it's your namenode that's still in safemode.
http://wiki.apache.org/hadoop/FAQ
The safemode feature in the namenode waits until a certain threshold of
HDFS blocks has been reported by the datanodes before letting clients
make edits to the namespace. It us
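To check where you are, and the threshold it is waiting for (property
name from the 0.20-era hdfs-site.xml; 0.999 is the usual default):

  bin/hadoop dfsadmin -safemode get

  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0.999</value>
  </property>

hadoop dfsadmin -safemode leave will force it out, but it is better to
first find out why blocks are not being reported.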
Hi Thomas,
There are a couple of projects inside Yahoo! that use ZooKeeper as an
event manager for feed processing.
I am a little bit unclear on your example below. As I understand it:
1. There are 1 million feeds that will be stored in HBase.
2. A MapReduce job will be run on these feeds to f
Hi Rakhi,
Currently Hadoop mapred/hdfs/common do not use ZooKeeper. There are some
plans to use it in the JobTracker and NameNode, but they are still being
discussed in the community. There are some JIRAs on Hadoop that talk about
it.
http://issues.apache.org/jira/browse/MAPREDUCE-737
http://is