Re: Multiple directories for dfs.name.dir

2012-11-17 Thread Bejoy KS
copy of fs image but no current edit log, as the NameNode would already have started a new edit log after the previous one was passed to the SNN for merging. Regards Bejoy KS Sent from handheld, please excuse typos. -Original Message- From: nagarjuna kanamarlapudi Date: Sat, 17 Nov 2012 18:40:3
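The thread's subject refers to dfs.name.dir accepting a comma-separated list of directories, so the NameNode keeps an identical metadata copy in each. A sketch of such an hdfs-site.xml entry; the paths below are illustrative, not from the thread:

```xml
<!-- hdfs-site.xml: the NameNode writes fsimage and edits to every listed
     directory, so pairing a local disk with an NFS mount keeps an
     off-box copy of the metadata. Paths are hypothetical. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/dfs/name,/mnt/nfs/dfs/name</value>
</property>
```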

Re: Multiple Aggregate functions in map reduce program

2012-10-05 Thread Bejoy KS
aggregated sum and count for each key. Regards Bejoy KS Sent from handheld, please excuse typos. -Original Message- From: iwannaplay games Date: Fri, 5 Oct 2012 12:32:28 To: user; ; hdfs-user Reply-To: u...@hadoop.apache.org Subject: Multiple Aggregate functions in map reduce program Hi All

Re: NameNode fails

2012-07-20 Thread Bejoy Ks
can be used. You can read more here http://wiki.apache.org/hadoop/NameNodeFailover Regards Bejoy KS On Fri, Jul 20, 2012 at 3:58 PM, wrote: > Thanks Mohammad :-), > > I just read this concept of secondary NameNode. Thank you for your reply. > Mohammad I am not finding the way

Re: NameNode fails

2012-07-20 Thread Bejoy KS
Hi Yogesh Please post the error logs/messages if you find any. Regards Bejoy KS Sent from handheld, please excuse typos. -Original Message- From: Date: Fri, 20 Jul 2012 07:21:24 To: ; Subject: RE: NameNode fails Thanks Bejoy, Mohammad and Vignesh :-). I have done the suggested

Re: NameNode fails

2012-07-19 Thread Bejoy KS
Hi Yogesh Is your dfs.name.dir pointing to the /tmp dir? If so, try changing that to any other dir. The contents of /tmp may get wiped out on OS restarts. Regards Bejoy KS Sent from handheld, please excuse typos. -Original Message- From: Date: Fri, 20 Jul 2012 06:20:02 To: Reply-To
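As the reply suggests, pointing dfs.name.dir at a durable location outside /tmp prevents the NameNode metadata from being wiped on reboot. A minimal hdfs-site.xml sketch; the path is an assumption for illustration, not from the thread:

```xml
<!-- hdfs-site.xml: keep NameNode metadata out of /tmp, which many OSes
     clear on restart. /var/lib/hadoop/dfs/name is a hypothetical path. -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name</value>
</property>
```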

Re: Loading data in hdfs

2012-07-19 Thread Bejoy Ks
Hi Prabhjot Yes, just use the filesystem commands: hadoop fs -copyFromLocal Regards Bejoy KS On Thu, Jul 19, 2012 at 3:49 PM, iwannaplay games wrote: > Hi, > > I am unable to use sqoop and want to load data in hdfs for testing, > Is there any way by which i can load my csv or t
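The snippet cuts off the command's arguments; the usual form, run against a live cluster, looks like the sketch below (file names are hypothetical):

```shell
# Copy a local CSV into HDFS (requires a running Hadoop cluster).
hadoop fs -copyFromLocal /local/path/data.csv /user/hadoop/data.csv

# hadoop fs -put is an equivalent alternative for local sources:
hadoop fs -put /local/path/data.csv /user/hadoop/data.csv
```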

Re: Hadoop filesystem directories not visible

2012-07-19 Thread Bejoy Ks
Sent: 19 July 2012 15:27 >> To: hdfs-user@hadoop.apache.org; bejoy.had...@gmail.com >> >> >> Subject: Re: Hadoop filesystem directories not visible >> >> >> >> Thanks Bejoy!! >> >> >> On Thu, Jul 19, 2012 at 3:22 PM, Bejoy KS wrote: &

Re: Hadoop filesystem directories not visible

2012-07-19 Thread Bejoy KS
Bejoy KS Sent from handheld, please excuse typos. -Original Message- From: "Yuvrajsinh Chauhan" Date: Thu, 19 Jul 2012 15:16:24 To: Reply-To: hdfs-user@hadoop.apache.org Subject: RE: Hadoop filesystem directories not visible Dear Saniya, I Second to you on this. Am also fi

Re: HDFS Installation / Configuration

2012-07-17 Thread Bejoy KS
Hi Yuvrajsinh In hdfs, the directory information exists as metadata on the NameNode. The data is stored as hdfs blocks on DataNodes. You can list the created dirs/files in hdfs using hadoop fs -ls / Regards Bejoy KS Sent from handheld, please excuse typos. -Original Message- From
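A sketch of the listing commands the reply points to (they need a running cluster; the subtree path is illustrative):

```shell
# List directories and files at the HDFS root:
hadoop fs -ls /

# Recurse into a subtree (old-style recursive flag from the 0.20-era CLI):
hadoop fs -lsr /user/hadoop
```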

Re: change hdfs block size for file existing on HDFS

2012-06-26 Thread Bejoy Ks
Hi Anurag, To add on, you can also change the replication of existing files by hadoop fs -setrep http://hadoop.apache.org/common/docs/r0.20.0/hdfs_shell.html#setrep On Tue, Jun 26, 2012 at 7:42 PM, Bejoy KS wrote: > Hi Anurag, > > The easiest option would be , in your map reduce jo
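Note that setrep changes replication, not block size; block size is fixed when a file is written, so "changing" it means rewriting the file with a new dfs.block.size. A hedged sketch under that assumption (paths and sizes are hypothetical):

```shell
# Change the replication factor of an existing file to 2:
hadoop fs -setrep 2 /user/hadoop/big.csv

# Block size is set at write time, so rewrite the file with the desired
# value (64 MB = 67108864 bytes here), e.g. via distcp with a -D override:
hadoop distcp -D dfs.block.size=67108864 /user/hadoop/big.csv /user/hadoop/big_64mb.csv
```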

Re: change hdfs block size for file existing on HDFS

2012-06-26 Thread Bejoy KS
Regards Bejoy KS Sent from handheld, please excuse typos.

Re: Hadoop Namenode Failover

2012-03-22 Thread Bejoy Ks
have a remote copy of the fs image on another server even if you completely lose the running NN. There is some awesome work going on for HA within the hadoop project itself; for details see https://issues.apache.org/jira/browse/HDFS-1623 Regards Bejoy KS On Thu, Mar 22, 2012 at 7:56 PM, Tibor Korocz wrote

Re: Hadoop Admin API Question

2012-03-18 Thread Bejoy Ks
Hi Clement You can get the full cluster status from the dfsadmin page at http://<namenode-host>:50070/dfshealth.jsp . Alternatively you can try hadoop dfsadmin -report to get the cluster details http://hadoop.apache.org/common/docs/current/commands_manual.html#dfsadmin Regards Bejoy.K.S On Sun, Mar 18,
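The CLI alternative mentioned in the reply, as it would be run against a live cluster:

```shell
# Prints cluster-wide capacity, DFS used/remaining, and a per-DataNode
# section including live and dead node status (requires a running cluster):
hadoop dfsadmin -report
```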

Re: Best practice to setup Sqoop,Pig and Hive for a hadoop cluster ?

2012-03-15 Thread Bejoy Ks
Hi Manu Please find my responses inline >I had read about we can install Pig, hive & Sqoop on the client node, no need to install it in cluster. What is the client node actually? Can I use my management-node as a client? On larger clusters we have a separate node that is outside the hadoop cluste

Re: Development Platform

2012-01-12 Thread Bejoy Ks
Hi Apurv AFAIK you can use hadoop jar as well as java jar to run a jar file. When you use java jar, only the files related to the JRE are loaded. With hadoop jar, all the configurations mentioned in hadoop configuration files such as core-site.xml, mapred-site.xml and hdfs-site.xml are loaded along with
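The distinction described above, expressed as the two invocations; the jar and class names are hypothetical:

```shell
# Plain JRE classpath; no Hadoop configuration files are picked up:
java -jar myjob.jar

# Puts the Hadoop jars on the classpath and loads core-site.xml,
# hdfs-site.xml and mapred-site.xml from the Hadoop conf directory:
hadoop jar myjob.jar com.example.MyJob /input /output
```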

Re: bin/hadoop fs -copyFromLocal fails when 1 datanode download

2012-01-05 Thread Bejoy Ks
Hi After you stopped one of your data nodes, did you check whether it was shown as a dead node in the hdfs report? You can view and confirm the same from http://namenodeHost:50070/dfshealth.jsp in the dead nodes list. The error could be because the datanode is not yet marked as dead. Reg

Re: Benchmark testing

2012-01-05 Thread Bejoy Ks
Hi Sheesha Basically, for benchmarking purposes there are multiple options available. We use job tracker metrics, pretty much all available from the job tracker web UI, to capture map reduce statistics like -Timings for atomic levels like map, sort and shuffle, reduce as well as e

Re: structured data split

2011-11-11 Thread Bejoy KS
Thanks Harsh !... 2011/11/11 Harsh J > Sorry Bejoy, I'd typed that URL out from what I remembered on my mind. > Fixed link is: http://wiki.apache.org/hadoop/HadoopMapReduce > > 2011/11/11 Bejoy KS : > > Thanks Harsh for correcting me with that wonderful piece of info

Re: structured data split

2011-11-11 Thread Bejoy KS
Hi Donal I don't have much exposure to the domain you are pointing to, but from a plain map reduce developer's perspective, here is my way of looking at processing such a data format with map reduce - If the data is kind of flowing in continuously, then I'd use Flume to collect t

Re: structured data split

2011-11-11 Thread Bejoy KS
ly and separately. > I just want to know how to deal with a structure (i.e. a word, a line) that > is split into two blocks. > > Cheers, > Donal > > On 11 Nov 2011 at 7:01 PM, Bejoy KS wrote: > >> Hi Donal >> You can configure your map tasks the way you like to process

Re: structured data split

2011-11-11 Thread Bejoy KS
Hi Donal You can configure your map tasks the way you like to process your input. If you have a file of size 100 MB, it would be divided into two input blocks and stored in hdfs (if your dfs.block.size is the default 64 MB). It is your choice how you process the same using map reduce - With th
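The split count in the reply's 100 MB example can be worked out directly: ceil(100 / 64) = 2 blocks, hence two default map input splits. A tiny sketch of that arithmetic:

```shell
# Blocks needed for a 100 MB file with a 64 MB dfs.block.size,
# using integer ceiling division:
FILE_MB=100
BLOCK_MB=64
BLOCKS=$(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))
echo "$BLOCKS"   # prints 2
```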

Re: Basic question on HDFS - MR

2011-10-18 Thread Bejoy KS
Hi Stuti You don't need to do anything manually to distribute your jar across Task Trackers (DNs). You place your jar in some dir in the LFS and specify your jar path in the hadoop jar command; hadoop then internally copies the jar to all the required task trackers. Also you can place the jar in
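A sketch of the invocation described above; the paths and class name are hypothetical, and Hadoop handles shipping the jar to the TaskTrackers itself:

```shell
# The jar lives on the local filesystem; the framework distributes it to
# every TaskTracker that runs a task of this job:
hadoop jar /home/user/myjob.jar com.example.MyJob /input /output

# Extra dependency jars can be shipped too, provided the main class goes
# through ToolRunner/GenericOptionsParser:
hadoop jar /home/user/myjob.jar com.example.MyJob -libjars /home/user/dep.jar /input /output
```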