Hi,
In the old API's FileInputFormat (org.apache.hadoop.mapred.FileInputFormat)
code, the mapred.max.split.size parameter is not used. Is there any other way
we can change the number of mappers in the old API?
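A hedged sketch of what usually works with the old API (the property value and the map-count figure below are illustrative, not prescriptive): raise the minimum split size to get fewer, larger splits, and optionally set the map-count hint.
import org.apache.hadoop.mapred.JobConf;
public class MapperCount {
  public static JobConf configure(Class<?> jobClass) {
    JobConf conf = new JobConf(jobClass);
    // Raising the split-size floor yields fewer, larger splits:
    conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);
    // A hint only; the InputFormat's getSplits() makes the final decision:
    conf.setNumMapTasks(10);
    return conf;
  }
}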
--
https://github.com/zinnia-phatak-dev/Nectar
Original blog post
http://vorlsblog.blogspot.com/2010/05/running-hadoop-on-windows-without.html
On Thu, Jul 5, 2012 at 6:57 AM, Ravi Shankar Nair
ravishankar.n...@gmail.com wrote:
A document on installing Hadoop on Windows without installing CYGWIN is
available here
hi,
The current implementation of MapWritable only supports a HashMap. But for my
application I need a LinkedHashMap, since the order of keys is important to
me. I am trying to customize MapWritable to accommodate a custom
implementation, but whenever I make a change to the Writable, all the
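One way to sidestep customizing MapWritable itself is a small order-preserving Writable of your own. A minimal sketch, assuming Text keys and values (the class name LinkedMapWritable is hypothetical):
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
public class LinkedMapWritable implements Writable {
  // LinkedHashMap preserves insertion order across serialization.
  private final Map<Text, Text> map = new LinkedHashMap<Text, Text>();
  public Map<Text, Text> get() { return map; }
  public void write(DataOutput out) throws IOException {
    out.writeInt(map.size());
    for (Map.Entry<Text, Text> e : map.entrySet()) {
      e.getKey().write(out);
      e.getValue().write(out);
    }
  }
  public void readFields(DataInput in) throws IOException {
    map.clear();
    int size = in.readInt();
    for (int i = 0; i < size; i++) {
      Text k = new Text();
      k.readFields(in);
      Text v = new Text();
      v.readFields(in);
      map.put(k, v);
    }
  }
}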
Hi,
It's possible in Map/Reduce. Look into the code here:
https://github.com/zinnia-phatak-dev/Nectar/tree/master/Nectar-regression/src/main/java/com/zinnia/nectar/regression/hadoop/primitive/mapreduce
2012/6/21 Subir S subir.sasiku...@gmail.com
Hi,
Is it possible to implement transpose
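For reference, a minimal sketch of one common approach (not taken from the Nectar repo linked above): store the sparse matrix as row<TAB>col<TAB>value lines and swap the coordinates in the map; an identity reducer then writes the transposed entries.
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class TransposeMapper extends Mapper<LongWritable, Text, Text, Text> {
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Input line: row <TAB> col <TAB> value
    String[] parts = value.toString().split("\t");
    // Emit with row and column swapped; the shuffle groups by the new row.
    context.write(new Text(parts[1] + "\t" + parts[0]), new Text(parts[2]));
  }
}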
Refer to this:
http://www.cloudera.com/blog/2009/09/apache-hadoop-log-files-where-to-find-them-in-cdh-and-what-info-they-contain/
On Fri, Jun 15, 2012 at 1:49 PM, cldo cldo datk...@gmail.com wrote:
Where are the Hadoop job history log files?
Thanks.
--
https://github.com/zinnia-phatak-dev/Nectar
Hi,
Maybe the namenode is down. Please look into the namenode logs.
On Thu, Jun 14, 2012 at 9:37 PM, Yongwei Xing jdxyw2...@gmail.com wrote:
Hi all
My Hadoop has been running well for some days. Suddenly,
http://localhost:50070 is not accessible. It gives a message like the one below.
HTTP ERROR 404
Hi,
I am trying to call a Hive query from a Reducer, but am getting the following error:
Exception in thread "Thread-10" java.lang.NoClassDefFoundError:
org/apache/thrift/TBase
at java.lang.ClassLoader.defineClass1(Native Method)
at
Hi,
You can go through the code of this project
(https://github.com/zinnia-phatak-dev/Nectar) to understand how complex
algorithms are implemented using M/R.
On Fri, May 18, 2012 at 12:16 PM, Ravi Joshi ravi.josh...@yahoo.com wrote:
I am writing my own map and reduce method for
On Fri, May 4, 2012 at 7:40 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Please see:
http://hbase.apache.org/book.html#dfs.datanode.max.xcievers
On Fri, May 4, 2012 at 5:46 AM, madhu phatak phatak@gmail.com
wrote:
Hi,
We are running a three-node cluster. For the last two days
Hi,
We are running a three-node cluster. For the last two days, whenever we copy
a file to HDFS, it throws java.io.IOException: Bad connect ack with
firstBadLink. I searched on the net but was not able to resolve the issue.
The following is the stack trace from the datanode log:
2012-05-04 18:08:08,868 INFO
Hi,
In the write method, use writeInt() rather than the write() method. It should
solve your problem.
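The usual failure mode, sketched with a hypothetical CountWritable: DataOutput.write(int) emits only the low-order byte, so values above 255 are silently truncated, while writeInt()/readInt() round-trip the full 32-bit value.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
public class CountWritable implements Writable {
  private int count;
  public void write(DataOutput out) throws IOException {
    out.writeInt(count);   // correct: writes all 4 bytes
    // out.write(count);   // bug: writes only the low-order byte
  }
  public void readFields(DataInput in) throws IOException {
    count = in.readInt();
  }
}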
On Mon, Apr 30, 2012 at 10:40 PM, Keith Thompson kthom...@binghamton.eduwrote:
I have been running several MapReduce jobs on some input text files. They
were working fine earlier and then I suddenly
As per Oracle, going forward OpenJDK will be the official Oracle JDK for
Linux, which means OpenJDK will be the same as the official one.
On Tue, Dec 20, 2011 at 9:12 PM, hadoopman hadoop...@gmail.com wrote:
http://www.omgubuntu.co.uk/2011/12/java-to-be-removed-
Please check the contents of /etc/hosts for the hostname and IP address mapping.
On Thu, Apr 12, 2012 at 11:11 PM, Sujit Dhamale sujitdhamal...@gmail.comwrote:
Hi Friends,
I am getting an UnknownHostException while executing the Hadoop word count
program. I am getting the below details from the JobTracker web page:
Hi,
I am working on a Hadoop project where I want to make an automated build run
M/R test cases on a real Hadoop cluster. As of now it seems we can only
unit test M/R through MiniDFSCluster/MiniMRCluster/MRUnit. None of these
runs the test cases on a real Hadoop cluster. Is there any other framework or any
Hi,
I am using the following code to generate a cross product in Hadoop.
package com.example.hadoopexamples.joinnew;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
Hi,
Mappers run in parallel, so without a reducer it is not possible to ensure
the sequence.
On Fri, Jan 20, 2012 at 2:32 AM, Mapred Learn mapred.le...@gmail.comwrote:
This is my question too.
What if I want output to be in same order as input without using reducers.
Thanks,
JJ
Sent from my
Hi ,
Nectar has already implemented multiple linear regression. You can look into
the code here: https://github.com/zinnia-phatak-dev/Nectar
On Fri, Jan 13, 2012 at 11:24 AM, Saurabh Bajaj
saurabh.ba...@mu-sigma.comwrote:
Hi All,
Could someone guide me how we can do a multiple linear
Hi,
1. Stop the job tracker and task trackers. - bin/stop-mapred.sh
2. Disable namenode safemode - bin/hadoop dfsadmin -safemode leave
3. Start the job tracker and tasktrackers again - bin/start-mapred.sh
On Fri, Jan 13, 2012 at 5:20 AM, Ravi Prakash ravihad...@gmail.com wrote:
Courtesy
Hi Shreya,
Image files are binary files. Use the SequenceFile format to store the images
in HDFS and SequenceFileInputFormat to read the bytes. You can use a
TwoDArrayWritable to store the matrix for an image.
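A minimal sketch of the packing step (the /images.seq path and the command-line file list are assumptions): write each image into a SequenceFile as a (filename, raw bytes) pair so that many small binaries stay HDFS-friendly.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
public class ImagePacker {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path("/images.seq"), Text.class, BytesWritable.class);
    try {
      for (String name : args) { // local image paths on the command line
        byte[] bytes = java.nio.file.Files.readAllBytes(
            java.nio.file.Paths.get(name));
        writer.append(new Text(name), new BytesWritable(bytes));
      }
    } finally {
      IOUtils.closeStream(writer);
    }
  }
}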
On Mon, Apr 2, 2012 at 3:36 PM, Sujit Dhamale sujitdhamal...@gmail.comwrote:
Shreya can u please Explain
Hi,
All commands invoke the FsShell.java code. As far as I know, you have to
change the source code and rebuild it to support custom commands.
On Sun, Apr 1, 2012 at 2:11 PM, JAX jayunit...@gmail.com wrote:
Hi guys: I wanted to make some custom Hadoop fs commands. Is this
feasible/practical? In
Hi,
Can you tell which version of Hadoop you are using? It seems like
duplicate jars are on the classpath.
2012/1/23 Aleksandar Hudić aleksandar.hu...@gmail.com
Hi
I am trying to set up a node and test the word count example, and I have a
problem with the last few steps.
After I pack the classes in the jar
Hi ,
The security features of 1.0.0 are the same as in the 0.20.203 version, so you
should be able to find the documentation under the 0.20.203 version.
On Fri, Jan 20, 2012 at 4:03 PM, renuka renumetuk...@gmail.com wrote:
Hadoop 1.0.0 was released in Dec 2011, and its
There is also Nectar. https://github.com/zinnia-phatak-dev/Nectar
On Sat, Feb 4, 2012 at 12:49 AM, praveenesh kumar praveen...@gmail.comwrote:
You can also use the R-hadoop package, which allows you to run R statistical
algorithms on Hadoop.
Thanks,
Praveenesh
On Fri, Feb 3, 2012 at 10:54 PM, Harsh
Hi Mohit,
HDFS is in safe mode, which is a read-only mode. Run the following command to
get out of safemode:
bin/hadoop dfsadmin -safemode leave
On Thu, Mar 15, 2012 at 5:54 AM, Mohit Anchlia mohitanch...@gmail.comwrote:
When I run a client to create files in Amazon HDFS, I get this error. Does
Hi,
You can use Java APIs to compile custom Java code and create jars. For
example, look at this code from Sqoop:
/**
* Licensed to Cloudera, Inc. under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding
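Separately from the Sqoop code quoted above, a minimal sketch of the underlying JDK facility (the source path is illustrative; getSystemJavaCompiler() returns null on a JRE without the compiler):
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
public class RuntimeCompile {
  public static void main(String[] args) {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    // Returns 0 on success; the .class file lands next to the source.
    int result = compiler.run(null, null, null, "/tmp/Generated.java");
    System.out.println(result == 0 ? "compiled" : "failed");
    // The resulting classes can then be packed with java.util.jar.JarOutputStream.
  }
}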
Hi,
I am trying to access files on HDFS through HFTP. When I run the following
code from Eclipse, it works fine.
FsUrlStreamHandlerFactory factory =
new org.apache.hadoop.fs.FsUrlStreamHandlerFactory();
java.net.URL.setURLStreamHandlerFactory(factory);
URL hdfs = new
Hi Owen O'Malley,
Thank you for that instant reply. It's working now. Can you explain in a
little more detail what you mean by "input to reducer is reused"?
On Tue, Mar 20, 2012 at 11:28 AM, Owen O'Malley omal...@apache.org wrote:
On Mon, Mar 19, 2012 at 10:52 PM, madhu phatak phatak@gmail.com
Hi,
Seems like HDFS is in safemode.
On Fri, Mar 16, 2012 at 1:37 AM, Mohit Anchlia mohitanch...@gmail.comwrote:
This is actually just a Hadoop job over HDFS. I am assuming you also know why
this is erroring out?
On Thu, Mar 15, 2012 at 1:02 PM, Gopal absoft...@gmail.com wrote:
On
Hi All,
I am using Hadoop 0.20.2. I am observing some strange behavior of Java
Collections. I have the following code in the reducer:
public void reduce(Text text, Iterator<Text> values,
OutputCollector<Text, Text> collector, Reporter reporter)
throws IOException {
// TODO
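A likely explanation, judging only from the symptom described: Hadoop reuses the same Writable instance for every value the iterator yields, so storing the reference stores one object many times. A sketch of the usual fix, copying each value before keeping it:
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
public class CopyingReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {
  public void reduce(Text key, Iterator<Text> values,
      OutputCollector<Text, Text> collector, Reporter reporter)
      throws IOException {
    List<Text> copied = new ArrayList<Text>();
    while (values.hasNext()) {
      // The framework recycles the Text it hands out, so copy it.
      copied.add(new Text(values.next()));
    }
    for (Text v : copied) {
      collector.collect(key, v);
    }
  }
}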
Hi,
Use the JobTracker web UI at master:50030 and the NameNode web UI at
master:50070.
On Fri, Feb 10, 2012 at 9:03 AM, Wq Az azq...@gmail.com wrote:
Hi,
Is there a quick way to check this?
Thanks ahead,
Will
--
Join me at http://hadoopworkshop.eventbrite.com/
Yes, you can. Please make sure all the Hadoop jars and the conf directory are
in the classpath.
On Thu, Feb 9, 2012 at 7:02 AM, Sanjeev Verma sanjeev.x.ve...@gmail.comwrote:
This is based on my understanding and no real-life experience, so I'm going to
go out on a limb here :-)... assuming that you are planning
Hi,
I have the following issue in Hadoop 0.20.2. When I try to use inheritance
with WritableComparables, the job fails. For example, if I create a base
Writable called Shape:
public abstract class ShapeWritable<T> implements WritableComparable<T>
{
}
Then extend this for a concrete class
Hi,
Please make sure that your CustomWritable has a default constructor.
On Sat, Mar 3, 2012 at 4:56 AM, Mark question markq2...@gmail.com wrote:
Hello,
I'm trying to debug my code through eclipse, which worked fine with
given Hadoop applications (eg. wordcount), but as soon as I run it
On Wed, Feb 29, 2012 at 11:34 PM, W.P. McNeill bill...@gmail.com wrote:
I can perform HDFS operations from the command line, like hadoop fs -ls
/. Doesn't that mean that the datanode is up?
No. That is just a metadata lookup, which comes from the NameNode. Try to cat
some file, like hadoop fs
Hi,
Please look inside the $HADOOP_HOME/contrib/datajoin folder of the 0.20.2
version. You will find the jar there.
On Sat, Feb 11, 2012 at 1:09 AM, Bing Li lbl...@gmail.com wrote:
Hi, all,
I am starting to learn advanced Map/Reduce. However, I cannot find the
class DataJoinMapperBase in my downloaded
Hi,
Only HDFS should be enough.
On Fri, Nov 25, 2011 at 1:45 AM, Thanh Do than...@cs.wisc.edu wrote:
hi all,
in order to run DFSIO in my cluster,
do i need to run JobTracker, and TaskTracker,
or just running HDFS is enough?
Many thanks,
Thanh
--
Join me at
Hi Mohit ,
A and B refer to two different output files (the multi-part name). The file
names will be seq-A* and seq-B*. It's similar to the r in part-r-0
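A sketch of the setup that produces such names with the old-API MultipleOutputs (the exact separator in the generated file names varies by version; the details here are assumptions from the javadoc):
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.lib.MultipleOutputs;
public class MultiOutputSetup {
  public static void configure(JobConf conf) {
    // One multi-named output "seq" that fans out into per-name parts:
    MultipleOutputs.addMultiNamedOutput(conf, "seq",
        SequenceFileOutputFormat.class, Text.class, Text.class);
  }
  // Inside the reducer, with MultipleOutputs mos = new MultipleOutputs(conf):
  //   mos.getCollector("seq", "A", reporter).collect(key, value); // seq-A files
  //   mos.getCollector("seq", "B", reporter).collect(key, value); // seq-B files
}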
On Tue, Feb 28, 2012 at 11:37 AM, Mohit Anchlia mohitanch...@gmail.comwrote:
Thanks that's helpful. In that example what is A and B referring
You can use FileSystem.getFileStatus(Path p), which gives you the block size
specific to a file.
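A minimal sketch of that lookup (the path is taken from the command line):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class BlockSizeLookup {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path(args[0]));
    System.out.println("block size: " + status.getBlockSize() + " bytes");
  }
}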
On Tue, Feb 28, 2012 at 2:50 AM, Kai Voigt k...@123.org wrote:
hadoop fsck filename -blocks is something that I think of quickly.
Hi,
-libjars doesn't always work. A better way is to create a runnable jar with
all dependencies (if the number of dependencies is small), or you have to keep
the jars in the lib folder of Hadoop on all machines.
On Wed, Feb 22, 2012 at 8:13 PM, Ioan Eugen Stan stan.ieu...@gmail.comwrote:
Hello,
I'm
Hi,
Find the Maven definitions for the Hadoop core jars here:
http://search.maven.org/#browse|-856937612
On Tue, Feb 21, 2012 at 10:48 PM, Mohit Anchlia mohitanch...@gmail.comwrote:
I am trying to search for dependencies that would help me get started with
developing map reduce in eclipse and I
Hi Mohit,
fs is a generic filesystem command which can point to any file system, like
LocalFileSystem, HDFS, etc., but dfs is specific to HDFS. So when you use fs,
it can copy from the local file system to HDFS; but when you specify dfs, the
source file has to be on HDFS.
On Tue, Feb 21, 2012 at 10:46 PM, Mohit Anchlia
Hi,
Did you format the HDFS?
On Tue, Feb 21, 2012 at 7:40 PM, Shi Yu sh...@uchicago.edu wrote:
Hi Hadoopers,
We are experiencing a strange problem on Hadoop 0.20.203.
Our cluster has 58 nodes; everything is started from a fresh
HDFS (we deleted all local folders on the datanodes and
Hi,
Just make sure that the DataNode is up. Look into the datanode logs.
On Sun, Feb 19, 2012 at 10:52 PM, W.P. McNeill bill...@gmail.com wrote:
I am running in pseudo-distributed mode on my Mac and just upgraded from
0.20.203.0 to 1.0.0. The web interface for HDFS which was working in
0.20.203.0
Hi,
This may be an issue with the namenode not being correctly formatted.
On Sat, Feb 18, 2012 at 1:50 PM, Ben Cuthbert bencuthb...@ymail.com wrote:
All, sometimes when I start up my Hadoop I get the following error:
12/02/17 10:29:56 INFO namenode.NameNode: STARTUP_MSG:
Hi,
It's better to use the hostnames rather than the IP addresses. If you use
hostnames, the task_attempt URL will contain the hostname rather than
localhost.
On Fri, Feb 17, 2012 at 10:52 PM, Keith Wiley kwi...@keithwiley.com wrote:
What property or setup parameter determines the URLs displayed
#I_want_to_make_a_large_cluster_smaller_by_taking_out_a_bunch_of_nodes_simultaneously._How_can_this_be_done.3F
)
- Alex
On Sat, Dec 17, 2011 at 12:06 PM, madhu phatak phatak@gmail.com
wrote:
Hi,
I am trying to add nodes dynamically to a running Hadoop cluster. I started
the tasktracker and datanode on the node. It works fine. But when some node
tries to fetch
Hi,
I am trying to add nodes dynamically to a running Hadoop cluster. I started
the tasktracker and datanode on the node. It works fine. But when some node
tries to fetch values (for the reduce phase), it fails with an unknown host
exception. When I add a node to a running cluster, do I have to add its hostname to
Disable iptables and try again.
On Fri, Aug 5, 2011 at 2:20 PM, Manish manish.iitg...@gmail.com wrote:
Hi,
Has anybody been able to run Hadoop standalone mode on Fedora 15?
I have installed it correctly. It runs through the map but gets stuck in the
reduce. It fails with the error mapred.JobClient
It should. What's the input value class for the reducer that you are setting in the Job?
2011/7/30 Daniel,Wu hadoop...@163.com
Thanks Joey,
It works, but one place I don't understand:
1: in the map
extends Mapper<Text, Text, Text, IntWritable>
so the output value is of type IntWritable
2: in the
Sorry for the earlier reply. Is your combiner outputting Text,Text
key/value pairs?
On Wed, Aug 3, 2011 at 5:26 PM, madhu phatak phatak@gmail.com wrote:
It should. What's the input value class for the reducer that you are setting in the Job?
2011/7/30 Daniel,Wu hadoop...@163.com
Thanks Joey
Maybe Maven is not able to connect to the central repository because of the proxy.
On Fri, Jul 29, 2011 at 2:54 PM, Arun K arunk...@gmail.com wrote:
Hi all !
I have downloaded hadoop-0.21. I am behind my college proxy.
I get the following error while building Mumak:
$cd
I had issues using IP addresses in the XML files. You can try to use host
names in place of the IP addresses.
On Thu, Jul 28, 2011 at 5:22 PM, Doan Ninh uitnetw...@gmail.com wrote:
Hi,
I run Hadoop on 4 Ubuntu 11.04 machines on VirtualBox.
On the master node (192.168.1.101), I configure fs.default.name =
Thank you. Will have a look at it.
On Wed, Jul 27, 2011 at 3:28 PM, Steve Loughran ste...@apache.org wrote:
On 27/07/11 05:55, madhu phatak wrote:
Hi
I am submitting the job as follows
java -cp
Nectar-analytics-0.0.1-SNAPSHOT.jar:/home/hadoop/hadoop-for-nectar/hadoop-0.21.0/conf
It's a problem of multiple versions of the same jar.
On Thu, Jul 21, 2011 at 5:15 PM, Steve Loughran ste...@apache.org wrote:
On 20/07/11 07:16, Juwei Shi wrote:
Hi,
We faced a problem loading the logging class when starting the name node. It
seems that Hadoop cannot find commons-logging-*.jar
us know if it then picks up the proper config (right now,
it's using the local mode).
On Wed, Jul 27, 2011 at 10:25 AM, madhu phatak phatak@gmail.com
wrote:
Hi
I am submitting the job as follows
java -cp
Nectar-analytics-0.0.1-SNAPSHOT.jar:/home/hadoop/hadoop-for-nectar/hadoop
Hi,
I am working on an open source project,
Nectar (https://github.com/zinnia-phatak-dev/Nectar), where
I am trying to create the Hadoop jobs depending upon the user input. I was
using the Java Process API to run the bin/hadoop shell script to submit the
jobs. But it seems not a good way because the process
in the class path of the application from where you want to submit
the
job.
You can refer to these docs for more info on the Job APIs:
http://hadoop.apache.org/mapreduce/docs/current/api/org/apache/hadoop/mapreduce/Job.html
Devaraj K
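A sketch of what such programmatic submission can look like with the new Job API, assuming the cluster's *-site.xml files and the Hadoop jars are on the client classpath (the class name, job name, and paths here are illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class SubmitFromCode {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // reads *-site.xml from the classpath
    Job job = new Job(conf, "nectar-job");
    job.setJarByClass(SubmitFromCode.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.submit(); // returns immediately
    // or: job.waitForCompletion(true) to block and print progress
  }
}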
-Original Message-
From: madhu phatak
message / stack trace? Could you also
paste your JT logs?
On Tue, Jul 26, 2011 at 4:05 PM, madhu phatak phatak@gmail.com
wrote:
Hi
I am using the same APIs, but I am not able to run the jobs by just adding
the configuration files and jars. It never creates a job in Hadoop, it
just
at 4:32 PM, madhu phatak phatak@gmail.com
wrote:
I am using JobControl.addJob() to add a job, running the job control in
a separate thread, and using JobControl.allFinished() to see whether all jobs
have completed or not. Does this work the same as Job.submit()?
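For reference, a sketch of that pattern with the old-API job control classes (error handling omitted; the polling interval is arbitrary):
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;
public class FlowRunner {
  public static void run(JobConf conf) throws Exception {
    JobControl control = new JobControl("flow");
    control.addJob(new Job(conf));
    Thread t = new Thread(control); // the controller submits ready jobs
    t.setDaemon(true);
    t.start();
    while (!control.allFinished()) {
      Thread.sleep(1000); // poll until every job completes
    }
    control.stop();
  }
}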
On Tue, Jul 26, 2011 at 4:08 PM, Harsh J ha
Hi,
We released Nectar, the first open source predictive modeling framework on Apache Hadoop.
Please check it out.
Info page http://zinniasystems.com/zinnia.jsp?lookupPage=blogs/nectar.jsp
Git Hub https://github.com/zinnia-phatak-dev/Nectar/downloads
Regards,
Madhukara Phatak, Zinnia Systems
thank you
On Sun, Jul 24, 2011 at 11:47 AM, Mark Kerzner markkerz...@gmail.comwrote:
Congratulations, looks very interesting.
Mark
On Sun, Jul 24, 2011 at 1:15 AM, madhu phatak phatak@gmail.com
wrote:
Hi,
We released Nectar, the first open source predictive modeling framework on
Apache Hadoop.
The white paper associated with the framework can be found here:
http://zinniasystems.com/downloads/sample.jsp?fileName=Distributed_Computing_in_Business_Analytics.pdf
On Sun, Jul 24, 2011 at 11:49 AM, madhu phatak phatak@gmail.com wrote:
thank you
On Sun, Jul 24, 2011 at 11:47 AM, Mark Kerzner
Hadoop: The Definitive Guide also talks about migration.
On Jun 28, 2011 8:31 PM, Shi Yu sh...@uchicago.edu wrote:
On 6/28/2011 7:12 AM, Prashant Sharma wrote:
Hi ,
I have my source code written in 0.19.1 Hadoop API and want to shift
it to newer API 0.20.20. Any clue on good documentation on
When you launch a program using the bin/hadoop command, full cluster info is
available to your program, like the name node, data nodes, etc. Here you're
just submitting the binary, and the startup is done by Hadoop rather than by
you running ./a.out
On Jun 29, 2011 1:48 AM, jitter chickych...@gmail.com wrote:
hi i m
The console will tell you how much time was taken by the job.
On Jul 5, 2011 8:26 AM, sangroya sangroyaa...@gmail.com wrote:
Hi,
I am trying to monitor the time to complete the map phase and reduce
phase in Hadoop. Is there any way to measure the time taken to
complete the map and reduce phases in a cluster?
Can you check the tasktracker logs for the full stack trace?
On Thu, Jun 30, 2011 at 12:24 PM, Paolo Castagna
castagna.li...@googlemail.com wrote:
Hi,
I am using Apache Whirr to set up a Hadoop cluster on EC2 using Hadoop
0.22.0 SNAPSHOT (nightly) builds from Jenkins. For details, see [1,2].
What is the log content? It's the best place to see what's going wrong. If
you provide the logs, it's easy to point out the problem.
On Tue, Jun 21, 2011 at 9:06 AM, Kumar Kandasami
kumaravel.kandas...@gmail.com wrote:
Hi Ziyad:
Do you see any errors on the log file ?
I have installed CDH3 in
.
Avi
-Original Message-
From: madhu phatak [mailto:phatak@gmail.com]
Sent: Tuesday, June 21, 2011 12:33 PM
To: common-user@hadoop.apache.org
Subject: Re: Help with adjusting Hadoop configuration files
The utilization of the cluster depends upon the number of jobs and the number of mappers.
HDFS should be available to the DataNodes in order to run the jobs, and bin/hdfs
just uses the Hadoop jobs to access HDFS on the datanodes. So if you want to read
a file from HDFS inside a job, you have to start the data nodes when the cluster
comes up.
On Fri, Jun 17, 2011 at 4:12 PM, punisher punishe...@hotmail.it
HDFS does not support appending, I think. I'm not sure about Pig; if you are
using Hadoop directly, you can zip the files and use the zip as the input to
the jobs.
On Fri, Jun 17, 2011 at 6:56 AM, Xiaobo Gu guxiaobo1...@gmail.com wrote:
Please refer to FileUtil.copyMerge.
On Fri, Jun 17, 2011 at 8:33
It's related to the amount of memory available to the Java Virtual Machine
that is created for Hadoop jobs.
On Fri, Jun 17, 2011 at 1:18 AM, Harsh J ha...@cloudera.com wrote:
The 'heap size' is a Java/program and memory (RAM) thing; unrelated to
physical disk space that the HDFS may occupy (which
I think the jar has some issues where it's not able to read the main class
from the manifest. Try unjarring the jar, see in META-INF/MANIFEST.MF what the
main class is, and then run as follows:
bin/hadoop jar hadoop-*-examples.jar <fully qualified main class> grep input
output 'dfs[a-z.]+'
On Thu, Jun 16, 2011
It's better to merge the library with your code. Otherwise you have to copy
the library to the lib folder of Hadoop on every node in the cluster. libjars
is not working for me either. I used the Maven Shade plugin (in Eclipse) to
get the merged jar.
On Wed, Jun 15, 2011 at 12:20 AM, Mehmet Tepedelenlioglu
Define your own custom RecordReader; it's efficient.
On Sun, Jun 12, 2011 at 10:12 AM, Harsh J ha...@cloudera.com wrote:
Mark,
I may not have gotten your question exactly, but you can do further
processing inside of your FileInputFormat derivative's RecordReader
implementation (just
to
HDFS-1060, HADOOP-6239 and HDFS-744?
Are there other issues related to append besides the ones above?
Tks, Eric
https://issues.apache.org/jira/browse/HDFS-265
On 21/06/11 12:36, madhu phatak wrote:
It's not stable. There are some bugs pending
wrote:
Hi,
The block size is configured to 128 MB; I've read that it is recommended to
increase it in order to get better performance.
What value do you recommend setting it to?
Avi
-Original Message-
From: madhu phatak [mailto:phatak@gmail.com]
Sent: Tuesday, June 21, 2011 12:54
I think Hive is best suited for your use case; it gives you a SQL-based
interface to Hadoop for these types of things.
On Fri, Jun 10, 2011 at 2:39 AM, Shi Yu sh...@uchicago.edu wrote:
Hi,
I have two datasets: dataset 1 has the format:
MasterKey1  SubKey1  SubKey2  SubKey3
You can use ControlledJob's addDependingJob to handle dependencies between
multiple jobs.
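A sketch of that wiring (ControlledJob lives in org.apache.hadoop.mapreduce.lib.jobcontrol; assuming Hadoop 0.21 or later):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;
public class DependentJobs {
  public static JobControl wire(Configuration confA, Configuration confB)
      throws Exception {
    ControlledJob jobA = new ControlledJob(confA);
    ControlledJob jobB = new ControlledJob(confB);
    jobB.addDependingJob(jobA); // jobB is held back until jobA succeeds
    JobControl control = new JobControl("chain");
    control.addJob(jobA);
    control.addJob(jobB);
    return control; // run on a thread and poll allFinished(), as usual
  }
}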
On Tue, Jun 7, 2011 at 4:15 PM, Adarsh Sharma adarsh.sha...@orkash.comwrote:
Harsh J wrote:
Yes, I believe Oozie does have Pipes and Streaming action helpers as well.
On Thu, Jun 2, 2011 at 5:05 PM,
See the task log on the slave to see why the task attempt is failing...
On Wed, Feb 16, 2011 at 7:29 PM, Nitin Khandelwal
nitin.khandel...@germinait.com wrote:
Hi,
I am using Hadoop 0.21.0. I am getting an exception as follows:
java.lang.Throwable: Child Error at
tasktracker log *
On Wed, Feb 16, 2011 at 8:00 PM, madhu phatak phatak@gmail.com wrote:
See the task log on the slave to see why the task attempt is failing...
On Wed, Feb 16, 2011 at 7:29 PM, Nitin Khandelwal
nitin.khandel...@germinait.com wrote:
Hi,
I am using Hadoop 0.21.0. I
Hadoop is not suited for real-time applications.
On Thu, Feb 17, 2011 at 9:47 AM, Karthik Kumar karthik84ku...@gmail.comwrote:
Can Hadoop be used for real-time applications such as banking solutions?
--
With Regards,
Karthik
IP addresses will not work. You have to put the hostnames in every
configuration file.
On Wed, Feb 9, 2011 at 2:01 PM, ursbrbalaji ursbrbal...@gmail.com wrote:
Hi Madhu,
The jobtracker logs show the following exception.
2011-02-09 16:24:51,244 INFO org.apache.hadoop.mapred.JobTracker:
You can use the web UI of the JobTracker to get that status.
On Sun, Feb 6, 2011 at 5:44 AM, ahmednagy ahmed_said_n...@hotmail.comwrote:
Dear All,
I need to know how much data transfer occurred among the nodes and how much
processing is happening during the job executions based on different
Please see the job tracker logs
On Tue, Feb 8, 2011 at 3:54 PM, ursbrbalaji ursbrbal...@gmail.com wrote:
Hi Prabhu,
I am facing exactly the same problem. I too followed the steps in the below
link.
Please let me know which configuration file was modified and what were the
changes.
Don't use start-all.sh; use the datanode daemon script to start the data node.
On Mon, Feb 7, 2011 at 11:52 PM, ahmednagy ahmed_said_n...@hotmail.comwrote:
Dear All,
Please help. I have tried to start the data nodes with ./start-all.sh on a
7-node cluster; however, I receive incompatible
Hi
You can write an InputFormat which creates input splits from multiple files.
It will solve your problem.
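Alternatively, a sketch of the packaged route in later Hadoop releases, where CombineTextInputFormat groups many small files into each split (the 128 MB cap is illustrative):
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
public class SmallFilesJob {
  public static Job configure() throws Exception {
    Job job = Job.getInstance();
    job.setInputFormatClass(CombineTextInputFormat.class);
    // Pack many small files into splits of up to 128 MB each.
    CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);
    return job;
  }
}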
On Wed, Feb 2, 2011 at 4:04 PM, Shuja Rehman shujamug...@gmail.com wrote:
Hi Folks,
I have hundreds of small XML files coming in each hour. The size varies
from 5 MB to 15 MB. As
Read the O'Reilly book Hadoop: The Definitive Guide. It points out the changes
between the new and old APIs.
On Thu, Feb 3, 2011 at 3:53 AM, Christian Kunz ck...@yahoo-inc.com wrote:
I don't know of a transition guide, but I found a tutorial based on the new
api:
Most Hadoop use cases involve processing large data. But in real-time
applications, the data provided by the user will be relatively small, in which
case it's not advised to use Hadoop.
On Tue, Feb 1, 2011 at 10:01 PM, Black, Michael (IS) michael.bla...@ngc.com
wrote:
Try this rather small C++
What exactly do you want to implement? From the question, it seems that you
want to compare the strings in two columns of a row.
2011/1/20 Rawan AlSaad rawan.als...@hotmail.com
Dear all,
I am looking for an example of a MapReduce Java implementation for string
comparison [pair-wise comparison].
The reducer will get the key/value pairs in sorted order. If you can generate
the keys in the order of the required sort, you can process them in a MapReduce job.
On Tue, Jan 25, 2011 at 6:21 PM, Harsh J qwertyman...@gmail.com wrote:
Vanilla Hadoop does not support this without the intermediate I/O
cost. You can
It may be SequenceFileInputFormat reading the key/value as Text,LongWritable
by default. So you can write something like:
SequenceFileInputFormat<Text, ByteWritable> sequenceInputFormat = new
SequenceFileInputFormat<Text, ByteWritable>();
job.setInputFormat(sequenceInputFormat.getClass());
On Fri, Jan 21, 2011 at 2:25
Maybe some datanode is down in the cluster. Check the datanode logs of the
nodes in the cluster.
On Thu, Jan 20, 2011 at 3:43 PM, Cavus,M.,Fa. Post Direkt
m.ca...@postdirekt.de wrote:
Hi,
I ran the wordcount example on my Hadoop cluster and got a "Could not
obtain block" exception. Does anyone know
+ Integer.parseInt(value.toString());
}
context.write(new Text(sum + ""), outputValue);
}
}
madhu phatak wrote:
Hi
I have a very large file of size 1.4 GB. Each line of the file is a number.
I want to find the sum of all those numbers.
I wanted to use NLineInputFormat
Hi
I have a very large file of size 1.4 GB. Each line of the file is a number.
I want to find the sum of all those numbers.
I wanted to use NLineInputFormat as the InputFormat, but it sends only one
line to the Mapper, which is very inefficient.
So can you guide me to write an InputFormat which splits
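A hedged note before writing a custom InputFormat: with the old API, NLineInputFormat can be told to hand each mapper N lines instead of one via mapred.line.input.format.linespermap (the 100,000 figure below is arbitrary):
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;
public class SumJobSetup {
  public static JobConf configure(Class<?> jarClass) {
    JobConf conf = new JobConf(jarClass);
    conf.setInputFormat(NLineInputFormat.class);
    // Send 100,000 lines to each mapper instead of the default 1.
    conf.setInt("mapred.line.input.format.linespermap", 100000);
    return conf;
  }
}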