Just configure a logging appender in the log4j settings and rerun the command.
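For example, a minimal console appender in log4j.properties, modeled on the stock Hadoop configuration (a sketch; adjust the level and pattern to taste):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n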
On Mar 5, 2015 12:30 AM, SP sajid...@gmail.com wrote:
Hello All,
Why am I getting this error every time I execute a command? It was working
fine with the CDH4 version. When I upgraded to the CDH5 version this message
started
some job scheduling API such as Oozie to make sure that the third job kicks off only
when file1 and file2 are available on HDFS (the same can be done by
a shell script or a JobControl implementation).
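For the JobControl route, a minimal sketch could look like this (the three Job objects are assumed to be configured elsewhere; class and group names are illustrative, not from the original thread):

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class JobChain {
    // job1 produces file1, job2 produces file2, job3 consumes both
    public static void runChain(Job job1, Job job2, Job job3) throws Exception {
        ControlledJob c1 = new ControlledJob(job1, null);
        ControlledJob c2 = new ControlledJob(job2, null);
        ControlledJob c3 = new ControlledJob(job3, null);
        c3.addDependingJob(c1); // job3 starts only after job1 succeeds
        c3.addDependingJob(c2); // ...and after job2 succeeds
        JobControl control = new JobControl("file1-file2-chain");
        control.addJob(c1);
        control.addJob(c2);
        control.addJob(c3);
        new Thread(control).start(); // JobControl implements Runnable
        while (!control.allFinished()) {
            Thread.sleep(1000); // poll until all jobs complete or fail
        }
        control.stop();
    }
}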
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
the NameNode was halted.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Mon, Jul 7, 2014 at 1:30 PM, cho ju il tjst...@kgrid.co.kr wrote:
The cluster is:
2 NameNodes (HA cluster), 3
You can read the input from some source (e.g. files) in the mapper's setup()
method and emit it via context.write() so that it can reach the
reducer.
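A minimal sketch of that pattern (the class name and config key are hypothetical; the side input could equally be read from a file in the distributed cache):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SideInputMapper extends Mapper<LongWritable, Text, Text, Text> {
    private String sideValue;

    @Override
    protected void setup(Context context) {
        // Hypothetical source: a value placed in the job configuration.
        sideValue = context.getConfiguration().get("side.input.value", "");
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Tag each record with the side value so it reaches the reducer.
        context.write(new Text(sideValue), value);
    }
}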
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
call context.write() to emit keys/values based on some conditions.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, Jun 18, 2014 at 6:14 PM, Shrivastava, Himnshu (GE Global
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, Jun 11, 2014 at 8:42 AM, Li Li fancye...@gmail.com wrote:
I have a namenode/jobtracker and 5 datanodes/tasktrackers. Hadoop version
I have not tried it yet, but you can use the old fsimage and edit logs to build a
new NameNode.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Fri, Jun 6, 2014 at 6:26 AM, ch huang justlo
that it can identify your filesystem correctly.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Fri, Jun 6, 2014 at 3:49 PM, Ekta Agrawal ektacloudst...@gmail.com
wrote:
Hi,
I am trying
You can follow this link to set up a Hadoop cluster easily:
http://java.dzone.com/articles/how-set-multi-node-hadoop
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Fri
Are you able to compile it using mvn compile -Pnative
-Dmaven.test.skip=true?
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, Jun 4, 2014 at 4:01 AM, Christian Convey
Use:
$ hadoop dfsadmin -report
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Sat, May 31, 2014 at 11:26 AM, ch huang justlo...@gmail.com wrote:
Hi, maillist:
i
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Tue, Jun 3, 2014 at 5:34 PM, Michael Segel msegel_had...@hotmail.com
wrote:
Just a quick question...
Suppose you have a M
The problem seems to be with Java 7; install Java 6 and retry.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, May 21, 2014 at 6:34 PM, Faisal Rabbani
faisalrabb
the rack goes down.
Hope this clears your doubt.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Sun, May 11, 2014 at 8:25 AM, jianan hu hujia...@gmail.com wrote:
Hi everyone,
See HDFS
Basic networking practice states that using IP addresses is not a good
practice for running production services. A properly administered
infrastructure will have working DNS and/or hostname resolution.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Put your data file on HDFS; alternatively, you can process the data directly
and save it to HBase, which is managed on HDFS.
If your sensor data is log data, you can use Flume to load that data
into HDFS directly.
Thanks
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
Are you using the aggregation/MapReduce feature of MongoDB or some scripting
language (Python) to emit the key/value pairs?
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Mon, May 12, 2014
Have you tried the HDFS API to create or write a file on HDFS? Does it
prevent you from opening multiple OutputStreams on the FileSystem?
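For reference, a minimal sketch of creating and writing a file through the HDFS API (the path is illustrative). Note that HDFS allows only one writer per file, so a second create() on the same path while this stream is open will fail:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/example.txt"); // illustrative path
        try (FSDataOutputStream out = fs.create(path)) {
            out.writeBytes("hello hdfs\n");
        }
        fs.close();
    }
}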
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
Point 2 is right. The framework first calls setup(), followed by map() for
each key/value pair in the InputSplit. Finally, cleanup() is called
irrespective of the number of records in the input split.
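The default Mapper.run() in the Hadoop source makes this order explicit (shown approximately; newer releases wrap the loop in a try/finally so cleanup() still runs on failure):

public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
}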
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
is different for both versions, so
make different directories for the two installations.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Tue, May 6, 2014 at 9:15 AM, Stanley Shi s
Hadoop does not ship a command-completion tool, and unfortunately no open-source
tool is available. Alternatively, you can create aliases and use them.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
ReconfigurationServlet is a utility for changing a node's
configuration. It reloads the configuration file, verifies whether the
changes are possible, and asks the admin to approve the change.
Look at the source code shipped with the Hadoop binaries.
Raj K Singh
Could anyone point me to the good documentation available for the
ApprovalTest API, which can give insight into writing MR unit tests?
Thanks
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
It seems that you are running Hadoop locally on Windows using Cygwin and
are having a permission problem.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Thu, Aug 15, 2013 at 7:10 PM, Pradeep Singh hadoop.guy0
Visit tinyURL.com, rarme.com, or path.im.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Thu, Aug 15, 2013 at 8:10 PM, Visioner Sadak visioner.sa...@gmail.com wrote:
Could you give a hint on how to use it?
On Thu, Aug
Not sure about Java, but JavaScript can really help you with that; you can
find my examples of URL-shortening JavaScript APIs on the web.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Thu, Aug 15, 2013 at 8:25 PM, Visioner
parameters.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Tue, Aug 13, 2013 at 4:49 PM, Pavan Sudheendra pavan0...@gmail.com wrote:
Hi,
I'm currently using maven to build the jars necessary for my
map-reduce program to run
Implement a RawComparator for your emitted keys to sort the output at the
reducer.
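A minimal sketch for Text keys, mirroring Hadoop's built-in Text.Comparator (the class name is illustrative); register it on the job with job.setSortComparatorClass(TextBytesComparator.class):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.io.WritableUtils;

public class TextBytesComparator extends WritableComparator {
    public TextBytesComparator() {
        super(Text.class);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        // Skip each key's vint length prefix, then compare the raw UTF-8
        // bytes directly, avoiding deserialization during the sort.
        int n1 = WritableUtils.decodeVIntSize(b1[s1]);
        int n2 = WritableUtils.decodeVIntSize(b2[s2]);
        return compareBytes(b1, s1 + n1, l1 - n1, b2, s2 + n2, l2 - n2);
    }
}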
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, Aug 14, 2013 at 1:21 AM, Sam Garrett s...@actionx.com wrote:
I am working
Increase your Java heap size.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Sat, Aug 10, 2013 at 12:17 AM, Jitendra Yadav jeetuyadav200...@gmail.com
wrote:
Hi,
I'm getting the errors below in the log file while starting
It seems to be a bug in Java; run a test jar on your cluster. Follow:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6727824
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Sat, Aug 10, 2013 at 1:34 PM, Jitendra Yadav
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Tue, Jul 23, 2013 at 12:24 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Sandeep,
You don't have to convert the data in order to copy it into the HDFS. But
you might have to think
Ping your NameNode to ascertain whether it is up or not.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, Jun 26, 2013 at 12:39 PM, varun kumar varun@gmail.com wrote:
Is your namenode working?
On Wed, Jun 26, 2013 at 12
You can use getInputFileBasedOutputFileName(JobConf job, String name),
which generates the output file name based on a given name and the input file
name.
Thanks
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Mon, Jun 3
By default, Hadoop keeps the intermediate values produced by the mapper on the local
file system; you can get a handle on it using
FileOutputFormat.getWorkOutputPath(context)
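A minimal sketch (class and file names are hypothetical) of writing a side file under the task's work output path; files written there are promoted to the job output directory only if the task attempt commits:

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SideFileMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Task-attempt work directory managed by the output committer.
        Path workDir = FileOutputFormat.getWorkOutputPath(context);
        FileSystem fs = workDir.getFileSystem(context.getConfiguration());
        try (FSDataOutputStream out = fs.create(new Path(workDir, "side-file.txt"))) {
            out.writeBytes("written by " + context.getTaskAttemptID() + "\n");
        }
    }
}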
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
Hadoop assumes that you have put the updated file into the input folder.
Raj K Singh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Fri, May 31, 2013 at 8:53 PM, Adamantios Corais
adamantios.cor...@gmail.com wrote:
I am new