Hello Sir,
I'm currently evaluating hadoop for windows. I would like to know the following
1. Is it possible for us to use hadoop without Cygwin as of now? If not, how
feasible is it to modify the scripts to support windows?
2. Does the efficiency decrease on account of the fact that hadoop
Please see the 'Windows Users' section on
http://wiki.apache.org/hadoop/QuickStart.
On Mon, Apr 21, 2008 at 11:48 PM, Anish Damodaran
[EMAIL PROTECTED] wrote:
Hello Sir,
I'm currently evaluating hadoop for windows. I would like to know the
following
1. Is it possible for us to use hadoop
Doug Cutting wrote:
public String toString();
Doug
That's it.
Many thanks.
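For context, the usual pattern is to subclass ArrayWritable so it knows the
value class, and that subclass is also where a toString() like the one Doug
mentions goes. A minimal sketch (the class name IntArrayWritable is just
illustrative):

  import org.apache.hadoop.io.ArrayWritable;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Writable;

  public class IntArrayWritable extends ArrayWritable {
    public IntArrayWritable() {
      super(IntWritable.class);        // tells readFields() which Writable to instantiate
    }

    public String toString() {         // readable output instead of Object's default
      StringBuffer sb = new StringBuffer();
      for (Writable w : get()) {
        if (sb.length() > 0) sb.append(" ");
        sb.append(w.toString());
      }
      return sb.toString();
    }
  }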
Hi,
The number of map tasks is supposed to be greater than the number of
machines, so in your configuration 6 map tasks is fine. The problem is
probably elsewhere. Have you changed the code for word count?
Please ensure that the example code is unchanged and your configuration
is
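(If it helps: the map count is only a hint anyway. Something like the sketch
below, with the rest of the word count wiring elided, is all it takes to ask
for 6 maps, and the framework may still pick a different number from the
input splits.)

  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class SixMaps {
    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(SixMaps.class);
      conf.setNumMapTasks(6);   // a hint only; actual maps follow the input splits
      // ... mapper/reducer classes and input/output paths exactly as in the
      //     unmodified WordCount example ...
      JobClient.runJob(conf);
    }
  }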
Joydeep Sen Sarma wrote:
as opposed to 200 boxes all not being able to talk to the Namenode? or the jobtracker?
i think this is a topic that requires a little nuance. if there's a small
cluster and a reliable (netapp) filer - then getting jars off it seems like a
good alternative to consider.
As far as I know, you need cygwin to install and run hadoop. The fact
that you are using cygwin to run hadoop has almost negligible impact on
the performance and efficiency of the hadoop cluster. Cygwin is mostly
needed for the install and configuration scripts. There are a few small
portions of
On 4/22/08 7:12 AM, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:
I am getting this annoying error message every time I start
bin/start-all.sh with one single node
command-line: line 0: Bad configuration option: ConnectTimeout
Do you know what could be the issue? I cannot find it in the FAQs,
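For what it's worth, that message usually comes from an ssh that is too old to
understand the ConnectTimeout option. Assuming HADOOP_SSH_OPTS is set in your
conf/hadoop-env.sh (the distribution ships a commented example), one workaround
is to drop that option; the other is to upgrade OpenSSH:

  # conf/hadoop-env.sh -- illustrative; keep only options your ssh understands
  # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
  export HADOOP_SSH_OPTS="-o SendEnv=HADOOP_CONF_DIR"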
Okay, things appear to be fixed now.
Jeremy
On 4/20/08, Jeremy Zawodny [EMAIL PROTECTED] wrote:
Not yet... there seem to be a lot of cooks in the kitchen on this one, but
we'll get it fixed.
Jeremy
On 4/19/08, Cole Flournoy [EMAIL PROTECTED] wrote:
Any news on when the videos are
Is it possible, and if so how, to add to the input path after mapping has
begun? More specifically, say my Map process creates more files that need
to be mapped and you don't want to have to keep re-initiating Map/Reduce
processes. I tried simply creating files in the InputPath
directory. I have also
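For what it's worth, input splits are fixed when a job is submitted, so a
running job will not pick up files added to the input path afterwards. One
workaround is to drive a loop from the client and submit a fresh pass over
whatever the previous pass produced. A rough sketch against the old
org.apache.hadoop.mapred API (the static path helpers are named as in later
0.1x releases, and ExpandingMapper is hypothetical):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class ChainedExpansion {
    public static void main(String[] args) throws Exception {
      Path input = new Path(args[0]);
      for (int round = 0; round < 5; round++) {        // fixed cap just for the sketch
        Path output = new Path(args[1] + "/round-" + round);
        JobConf conf = new JobConf(ChainedExpansion.class);
        conf.setJobName("expand-" + round);
        conf.setNumReduceTasks(0);                     // map-only pass
        // conf.setMapperClass(ExpandingMapper.class); // hypothetical mapper that emits new work
        FileInputFormat.setInputPaths(conf, input);
        FileOutputFormat.setOutputPath(conf, output);
        JobClient.runJob(conf);                        // blocks until this round finishes
        // Real code would check a job counter or the output size here and stop
        // once a round produces nothing new.
        input = output;                                // this round's output feeds the next
      }
    }
  }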
On 4/22/08 12:23 PM, Mika Joukainen [EMAIL PROTECTED] wrote:
All right, I have to rephrase: I'd like to have a storage system for files which
are inserted by the users. Users are going to use normal human-operable software
entities ;) The system is going to have: fault tolerance, parallelism etc. ==
HDFS,
Hi Jeremy,
Any chance that these videos could be made in a downloadable format rather
than thru Y!'s player?
For example I'm traveling right now and would love to watch the rest of the
presentations but the next few hours I won't have an internet connection.
So, my request won't help me, but
Hi
Sorry for my ignorance, but I am trying to understand if I can use
Hadoop and Map/Reduce to process video files and images. Encoding and
transcoding videos is an example of what I would like to do.
Thank you for your patience.
Regards
Roland
Yes you can.
One issue is typically that Linux-based video codecs are not as numerous as
Windows-based codecs, so you may be a bit limited as to what kinds of video
you can process.
Also, most video processing and transcoding is embarrassingly parallel at
the file level with little need for
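To make that concrete, one common pattern is a map-only job whose input is
just a text file listing the video paths, with each map call shelling out to
whatever encoder is installed on the nodes. A rough sketch using the old
mapred API (the ffmpeg command line and the output naming are purely
illustrative):

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.Mapper;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reporter;

  // Map-only transcoding: each input line is the path of a video on shared storage.
  public class TranscodeMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> out, Reporter reporter)
        throws IOException {
      String src = value.toString().trim();
      String dst = src + ".transcoded.mp4";            // illustrative naming only
      // Shell out to an encoder present on every node; ffmpeg is just an example.
      Process p = Runtime.getRuntime().exec(new String[] { "ffmpeg", "-i", src, dst });
      try {
        int rc = p.waitFor();
        out.collect(new Text(src), new Text(rc == 0 ? "ok" : "failed:" + rc));
      } catch (InterruptedException e) {
        throw new IOException("transcode interrupted for " + src);
      }
    }
  }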
logs/hadoop-root-datanode-R61-neptun.out
Maybe this will help you:
I am guessing - from the log file name above - that your hostname has
underscores/dashes (e.g. R61-neptune). Could you try to use a hostname
without underscores or dashes? (e.g. R61neptune or even simply 'hadoop')
I had the
hi,
Can I submit a map-reduce job without creating the jar file (and without
using the $HADOOP_HOME/bin/hadoop script)? I looked into the hadoop script and
it invokes the org.apache.hadoop.util.RunJar class. Do I have to do the
same thing this class does if I
don't want to use the
Grool might help you.
It allows you to write full-on Java-based map-reduce programs using a very
simple scripting interface. It has a jar with all of the code needed, but
your scripts wouldn't have to be packaged into anything. As a scripting
language, it is very easy to integrate and the
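For what it's worth, RunJar itself only unpacks the jar, puts it on the
classpath, and invokes the jar's main class; the actual submission happens
through JobClient. So if your job's classes are already on the cluster nodes'
classpath (or you point the conf at a jar some other way), you can submit
straight from your own code, provided the Hadoop conf directory is on the
client classpath so the job finds the JobTracker. A rough sketch (the class
name and the commented-out mapper/reducer are made up):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class DirectSubmit {
    public static void main(String[] args) throws Exception {
      // JobConf(Class) only records which jar, if any, contains that class;
      // with no jar the classes must already be on every node's classpath.
      JobConf conf = new JobConf(DirectSubmit.class);
      conf.setJobName("submitted-without-bin-hadoop");
      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class);
      // conf.setMapperClass(MyMapper.class);     // hypothetical mapper/reducer
      // conf.setReducerClass(MyReducer.class);
      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));
      JobClient.runJob(conf);    // the same call a jar's main() normally ends up making
    }
  }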
Has anyone tried setting the number of reduces to zero and getting the map output
as the final output?
I tried doing the same but my map output does not end up in the specified output
path for mapred.
Let me know if someone has already done that. I am not able to find out
where my map outputs are written.
Vibhooti Verma wrote:
Has anyone tried setting the number of reduces to zero and getting the map output
as the final output?
Look at the RandomWriter example
(src/examples/org/apache/hadoop/examples/RandomWriter.java).
Amar
I tried doing the same but my map output does not end up in the specified
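For reference, when the number of reduces is zero the map output goes straight
through the OutputFormat into the job's output directory (as part-00000,
part-00001, ...) rather than to the intermediate map output location, so the
output path still has to be set. A minimal sketch of the relevant calls (the
mapper is hypothetical):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class MapOnlyJob {
    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(MapOnlyJob.class);
      conf.setJobName("map-only");
      conf.setNumReduceTasks(0);               // no reduce phase: map output is the final output
      // conf.setMapperClass(MyMapper.class);  // hypothetical mapper
      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // part files land here
      JobClient.runJob(conf);
    }
  }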
On Apr 22, 2008, at 11:01 AM, Thomas Cramer wrote:
Is it possible, and if so how, to add to the input path after mapping has
begun? More specifically, say my Map process creates more files that need
to be mapped and you don't want to have to keep re-initiating
Map/Reduce processes. I tried simply