Re: [Urgent] - CDH upgrade v5.4.5 to v5.8 - Cluster to different Cluster upgrade

2016-12-08 Thread Viswanathan J
ything v5 is same for DistCp. - Manoj Donga. On 8 Dec 2016 4:11 pm, "Viswanathan J" <jayamviswanat...@gmail.com> wrote: Hi Guys, currently we are running CDH version 5.4.5 and are planning to upgrade to CDH v5.8 in a different cluster.

[Urgent] - CDH upgrade v5.4.5 to v5.8 - Cluster to different Cluster upgrade

2016-12-08 Thread Viswanathan J
Hi Guys, currently we are running CDH version 5.4.5 and are planning to upgrade to CDH v5.8 in a different cluster. We plan to migrate data to the new cluster via DistCp after the cluster upgrade. *CDH (v5.4.5):* - Hadoop v2.6.0 - HBase v1.0.0 *CDH (v5.8):* - Hadoop v2.6.0 -
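
A typical DistCp invocation for this kind of cross-cluster migration (a sketch; the hostnames and paths are illustrative placeholders, not from the thread):

    # Run from the destination (v5.8) cluster so the newer MapReduce drives the copy
    hadoop distcp -update -p \
        hdfs://old-nn.example.com:8020/data \
        hdfs://new-nn.example.com:8020/data

Here -update skips files already identical at the destination, and -p preserves block size, replication, ownership, and permissions, which matters for HBase data directories.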

HBase region pre-split and store evenly

2016-04-06 Thread Viswanathan J
Hi, please help with region pre-splitting and with writing data evenly across all the region servers. -- Regards, Viswa.J
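
One common approach (a sketch; the table and column-family names are illustrative) is to create the table pre-split over a hashed key space from the HBase shell:

    # Pre-split into 16 regions across a hex key space
    create 'mytable', 'cf', {NUMREGIONS => 16, SPLITALGO => 'HexStringSplit'}

Pre-splitting alone is not enough for even writes: if row keys increase monotonically, all writes still hit one region, so a salt or hash prefix on the row key is usually needed as well.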

Re: HBase incremental backups

2016-04-04 Thread Viswanathan J
Hi Rajat, thanks for the update. HBase Export will impact the cluster performance, right? On Fri, Apr 1, 2016 at 5:24 PM, Rajat Dua <rajat.du...@gmail.com> wrote: HBase replication or HBase export utility, based on hardware support. On Friday 1 April 2016, Viswana

HBase incremental backups

2016-04-01 Thread Viswanathan J
Hi, which is the best approach for HBase incremental backup in a production cluster without any impact? Please help. -- Regards, Viswa.J
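
The Export utility can run incrementally by passing a time window, which keeps each backup pass small (a sketch; the table name, output path, and timestamps are placeholders):

    # Arguments: <table> <outputdir> [<versions> [<starttime> [<endtime>]]]
    hbase org.apache.hadoop.hbase.mapreduce.Export \
        mytable /backups/mytable-incr 1 1459468800000 1459555200000

Export is a MapReduce scan over the table, so it does add read load; running it off-peak, or using replication or snapshots as suggested upthread, limits the impact.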

Namenode log size keep growing - Hadoop v1.2.1

2015-07-28 Thread Viswanathan J
n our cluster recently we had an issue with the Namenode log file size; it keeps growing with the following type of log entries: 2015-07-28 13:37:38,730 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addToInvalidates: blk_-2946593971266165812 to 192.168.x.x:50010 2015-07-28 13:37:38,730 INFO
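
Those addToInvalidates lines come from the HDFS state-change logger, which logs at INFO and is very chatty during heavy deletes. One mitigation (a sketch; verify the logger name against your version's log4j.properties) is to raise its threshold:

    # conf/log4j.properties on the Namenode
    log4j.logger.org.apache.hadoop.hdfs.StateChange=WARN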

Exception in Jobtracker (java.lang.OutOfMemoryError: Java heap space)

2014-04-12 Thread Viswanathan J
Hi, I'm using Hadoop v1.2.1 and it has been running fine for a long time (3 months) without any issues. Suddenly I got the error below in the Jobtracker and jobs fail to run. Is this an issue in the JT, the TT, or Jetty? 2014-04-12 02:13:57,618 ERROR org.mortbay.log: EXCEPTION java.lang.OutOfMemoryError: Java
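
To see what is filling the heap, it helps to capture a heap dump at the moment of the OOME (a sketch; the dump path is illustrative, and HADOOP_JOBTRACKER_OPTS is the 1.x hook for JT-only JVM flags):

    # hadoop-env.sh
    export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS \
        -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hadoop/jt-oom.hprof"

The resulting .hprof file can be opened in jhat or Eclipse MAT to see whether retained job history, task statuses, or Jetty buffers dominate.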

Pig with Tez

2014-03-13 Thread Viswanathan J
Hi, will Apache Pig run with Tez?

Apache Tez supporting pig version

2014-03-13 Thread Viswanathan J
Hi, which Pig version supports Apache Tez? Will Pig 0.12 support Tez, or is it the yet-to-be-released v0.14? Please help.
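
Tez support shipped as a Pig execution mode in Pig 0.14; from that version onward, switching engines is a single flag:

    # Run a script on Tez instead of MapReduce (Pig 0.14+)
    pig -x tez myscript.pig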

Re: Hadoop2.x reading data

2014-03-13 Thread Viswanathan J
Thanks Harsh. On Mar 11, 2014 11:19 PM, Harsh J ha...@cloudera.com wrote: This is a Pig problem, not a Hadoop 2.x one. Can you please ask it at u...@pig.apache.org? You may have to subscribe to it first. On Tue, Mar 11, 2014 at 1:03 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi

Hadoop2.x reading data

2014-03-11 Thread Viswanathan J
Hi, I'm currently trying Hadoop 2.x and noticed that when I load a file in Pig, it reports the following while reading even though the file has multiple records; the weird thing is that if I DUMP the variable it shows the Pig tuples. Successfully read 0 records from: /tmp/sample.txt Any reason?

Hadoop2 LZO issue with pig

2014-03-08 Thread Viswanathan J
Hi, getting this issue in Hadoop 2.x with Pig: java.lang.Exception: java.lang.RuntimeException: java.io.IOException: No codec for file Caused by: java.io.IOException: No codec for file at
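
"No codec for file" usually means the LZO codec is not registered with Hadoop. A typical registration (a sketch; it assumes the hadoop-lzo jar and native libraries are already installed on every node):

    <!-- core-site.xml -->
    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
    </property>
    <property>
      <name>io.compression.codec.lzo.class</name>
      <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>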

Re: MR2 Job over LZO data

2014-03-07 Thread Viswanathan J
Hi, getting the below error while running a Pig job in Hadoop 2.x: Caused by: java.io.IOException: No codec for file found at com.twitter.elephantbird.mapreduce.input.MultiInputFormat.determineFileFormat(MultiInputFormat.java:176) at

Hadoop-2.2.0 and Pig-0.12.0 - error IBM_JAVA

2014-01-28 Thread Viswanathan J
Hi Guys, I'm running Hadoop 2.2.0 with Pig 0.12.0, and when I try to run any job I get the error below: *java.lang.NoSuchFieldError: IBM_JAVA* Is this because of the Java version, or a compatibility issue between Hadoop and Pig? I'm using Java version *1.6.0_31*. Please help me out. --
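
NoSuchFieldError: IBM_JAVA typically points at a stale hadoop-auth class on the classpath rather than at the JVM itself (an assumption about the cause, but the build flag below is real in Pig 0.12). Rebuilding Pig against the Hadoop 2 line so it does not carry Hadoop 1 classes is one fix worth trying:

    # From the Pig 0.12.0 source tree
    ant clean jar-withouthadoop -Dhadoopversion=23

Then run the resulting pig-withouthadoop jar so the cluster's own Hadoop 2.2.0 jars are picked up.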

System ulimit for hadoop jobtracker node

2013-12-17 Thread Viswanathan J
Hi, what ulimit value will be fair enough for a jobtracker node? If it's too high, will that cause thread blocking or any other issue in the jobtracker? Please help.
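
For reference, a common starting point on dedicated Hadoop master nodes (a sketch; the user name and values are illustrative, not a recommendation from the thread):

    # /etc/security/limits.conf
    hadoop  soft  nofile  32768
    hadoop  hard  nofile  32768
    hadoop  soft  nproc   32768
    hadoop  hard  nproc   32768

A high nofile/nproc ceiling does not by itself block threads; it only raises the limit the daemon may consume.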

Re: System ulimit for hadoop jobtracker node

2013-12-17 Thread Viswanathan J
harm. On Dec 17, 2013, at 9:10 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, what ulimit value will be fair enough for a jobtracker node? If it's too high, will that cause thread blocking or any issue in the jobtracker? Please help.

Getting following error in JT logs while running MR jobs

2013-12-16 Thread Viswanathan J
Hi, I'm getting the following error frequently while running MR jobs. ERROR org.apache.hadoop.mapred.TaskStatus: Trying to set finish time for task attempt_201312040159_126927_m_00_0 when no start time is set, stackTrace is : java.lang.Exception at

Re: Getting following error in JT logs while running MR jobs

2013-12-16 Thread Viswanathan J
On Mon, Dec 16, 2013 at 3:47 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, I'm getting the following error frequently while running MR jobs. ERROR org.apache.hadoop.mapred.TaskStatus: Trying to set finish time for task attempt_201312040159_126927_m_00_0 when no start

Hadoop Jobtracker job and UI hangs - Deadlock detection

2013-12-14 Thread Viswanathan J
Hi, JT memory reaches 6.68/8.89 GB, we are not able to submit jobs, and the UI is not loading at all, but I didn't see any JT OOM exceptions. I have taken a thread dump of the Jobtracker, which is as follows: Deadlock Detection: Can't print deadlocks:null Thread 25817: (state = BLOCKED)
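
A live thread dump is the right artifact here; one way to capture it (a sketch; the pid-file path varies by install):

    # Dump JT threads, including lock and deadlock info (-l)
    jstack -l $(cat /var/run/hadoop/hadoop-hadoop-jobtracker.pid) > jt-threads.txt

Taking two or three dumps a few seconds apart and comparing which threads stay BLOCKED on the same monitor usually identifies the lock holder.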

Compression LZO class not found issue in Hadoop-2.2.0

2013-12-06 Thread Viswanathan J
Hi Team, I have added the property in the mapred/core site XMLs and copied the hadoop-lzo jar into the Hadoop lib folder. I also installed the lzop and lzo-devel packages on CentOS. Still getting the LZO issue below in Hadoop-2.2.0: 3 AttemptID:attempt_1386352289515_0001_m_00_0 Info:Error:

Jobtracker fair scheduler

2013-11-21 Thread Viswanathan J
Hi, I'm running Hadoop 1.2.1 and all my jobs run in a single queue (Queue 1) all the time, even though I have configured the default queue and others. Why are jobs not scheduled across all the queues? Will running like this be an issue? Please help. Thanks,
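
With the fair scheduler on Hadoop 1.x, a job lands in whichever pool its pool-name property resolves to, which is the submitting user by default, so jobs must name a pool explicitly to spread across queues (a sketch; the jar and class names are placeholders):

    # Submit into a specific fair-scheduler pool
    hadoop jar myjob.jar MyJob -Dmapred.fairscheduler.pool=queue2 <args>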

Re: Jobtracker fair scheduler

2013-11-21 Thread Viswanathan J
Never wear your best trousers when you go out to fight for freedom and truth. - Henrik Ibsen. On Thursday, November 21, 2013 10:25 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, I'm running Hadoop 1.2.1 and all my jobs run in a single queue (Queue 1) all the time

Hadoop jobtracker OOME fix applied and no OOME but JT hung

2013-11-15 Thread Viswanathan J
Hi guys, I had a JT OOME on Hadoop 1.2.1 and applied the patch from the Apache contributors' fix for JIRA issue MAPREDUCE-5508. After applying that fix, the heap size still gradually increases, and after one week the jobtracker completely hangs and slows down, but without a JT OOME. No error

Re: Hadoop core jar class update

2013-10-25 Thread Viswanathan J
I'm using hadoop-1.2.1, as mentioned in the previous thread. On Oct 24, 2013 11:30 PM, Ravi Prakash ravi...@ymail.com wrote: Viswanathan, what version of Hadoop are you using? What is the change? On Wednesday, October 23, 2013 2:20 PM, Viswanathan J jayamviswanat...@gmail.com wrote

Apache hadoop - 1.2.1 source compilation through Maven or ant

2013-10-20 Thread Viswanathan J
Hi, please help me compile the Apache Hadoop source code using mvn or ant. I downloaded the latest stable Hadoop source and ran ant jar, but it does not compile and produces errors. -- Regards, Viswa.J
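
The 1.x line is Ant-based (branch-1 predates the Maven build), so a minimal working sequence looks like this (a sketch; it assumes a JDK and Ant are already installed):

    # From the root of the hadoop-1.2.1 source tree
    ant clean jar    # compiles and drops the core jar under build/

Maven only became the build tool from the 0.23/2.x line onward, which is why mvn targets fail on a 1.2.1 tree.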

Re: Apache hadoop - 1.2.1 source compilation through Maven or ant

2013-10-20 Thread Viswanathan J
, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, please help me compile the Apache Hadoop source code using mvn or ant. I downloaded the latest stable Hadoop source and ran ant jar, but it does not compile and produces errors. -- Regards, Viswa.J

Re: Apache hadoop - 1.2.1 source compilation through Maven or ant

2013-10-20 Thread Viswanathan J
://www.danielbit.com/blog/tools-for-linux/install-hadoop-on-ubuntu) On Sun, Oct 20, 2013 at 10:48 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi Ted, thanks for your response. I'm running on Ubuntu; the jar built successfully, but it was generated as a snapshot version. Also the source

Re: Apache hadoop - 1.2.1 source compilation through Maven or ant

2013-10-20 Thread Viswanathan J
in build.xml (for branch-1): <property name="version" value="1.3.0-SNAPSHOT"/> You can customize the version string. Mind telling us what improvement you're making? Cheers. On Sun, Oct 20, 2013 at 8:08 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, thanks, but I need to do some changes

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-14 Thread Viswanathan J
Hi guys, appreciate your response. Thanks, Viswa.J On Oct 12, 2013 11:29 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi Guys, but I can see the jobtracker OOME issue fixed in the Hadoop 1.2.1 version as per the Hadoop release notes below. Please check this URL, https

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-14 Thread Viswanathan J
.. and will keep your HDFS clean. On Monday, 14 October 2013 09:52:41 UTC+1, Viswanathan J wrote: Hi guys, appreciate your response. Thanks, Viswa.J On Oct 12, 2013 11:29 PM, Viswanathan J jayamvis...@gmail.com wrote: Hi Guys, but I can see the jobtracker OOME issue fixed in the Hadoop 1.2.1 version

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-14 Thread Viswanathan J
on the jobtracker page that the memory usage remains low over time. Antonios. On Mon, Oct 14, 2013 at 10:56 AM, Antwnis antw...@gmail.com wrote: After changing mapred-site.xml, you will have to restart the JobTracker to have the changes applied to it. On Mon, Oct 14, 2013 at 10:37 AM, Viswanathan J

Re: Hadoop Jobtracker cluster summary of heap size and OOME

2013-10-14 Thread Viswanathan J
memory. Thanks, Viswa. On Oct 15, 2013 7:30 AM, Arun C Murthy a...@hortonworks.com wrote: Please don't cross-post. HADOOP_HEAPSIZE of 1024 is too low. You might want to bump it up to 16G or more, depending on: * #jobs * the scheduler you use. Arun. On Oct 11, 2013, at 9:58 AM, Viswanathan J

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-12 Thread Viswanathan J
with previous versions. But you can try this if you have the memory, and see. In my case the issue was gone after I set it as above. Thanks, Reyane OUKPEDJO. On 11 October 2013 13:08, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, I'm running a 14-node Hadoop cluster with datanodes and tasktrackers

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-12 Thread Viswanathan J
a cron / jenkins task. I'll get you the configuration on Monday. On Friday, 11 October 2013 18:08:55 UTC+1, Viswanathan J wrote: Hi, I'm running a 14-node Hadoop cluster with datanodes and tasktrackers running on all nodes. *Apache Hadoop:* 1.2.1 It shows the heap size currently

Re: Hadoop Jobtracker heap size calculation and OOME

2013-10-12 Thread Viswanathan J
or am I missing anything? Please help. Appreciate your response. Thanks, Viswa.J On Oct 12, 2013 7:57 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Thanks Antonio, hope the memory leak issue will be resolved. It's really a nightmare every week. In which release will this issue be resolved

Hadoop Jobtracker cluster summary of heap size and OOME

2013-10-11 Thread Viswanathan J
Hi, I'm running a 14-node Hadoop cluster with tasktrackers running on all nodes. I have set the jobtracker default memory size in hadoop-env.sh: *HADOOP_HEAPSIZE=1024* and have set the mapred.child.java.opts value in mapred-site.xml as: <property><name>mapred.child.java.opts</name>
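
Following Arun's advice upthread, the daemon heap is raised in hadoop-env.sh (a sketch; the value is illustrative and should track job volume and the scheduler in use):

    # hadoop-env.sh on the JobTracker host; the value is in MB
    export HADOOP_HEAPSIZE=16384

Note that mapred.child.java.opts sizes the task JVMs on the workers, not the JobTracker daemon, so raising it does not help with JT OOMEs.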

Hadoop Jobtracker heap size calculation and OOME

2013-10-11 Thread Viswanathan J
Hi, I'm running a 14-node Hadoop cluster with datanodes and tasktrackers running on all nodes. *Apache Hadoop:* 1.2.1 It currently shows the heap size as follows: *Cluster Summary (Heap Size is 5.7/8.89 GB)* In the above summary, what does the *8.89* GB denote? Does *8.89* denote the maximum
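
In the 1.x JT UI the second figure is generally the JVM's maximum heap, i.e. what Runtime.maxMemory() reports as the effective -Xmx, so 5.7/8.89 GB reads as used versus ceiling.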

Mapreduce jobtracker recover property

2013-09-23 Thread Viswanathan J
Hi, I'm using Hadoop 1.2.1 in production HDFS, and I can see the following properties and values in the jobtracker job.xml: *mapreduce.job.restart.recover - true* *mapred.jobtracker.restart.recover - false* What is the difference, and which property will be used by the jobtracker?
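
In 1.x the two act at different scopes: mapred.jobtracker.restart.recover is the daemon-side switch that enables job recovery when the JT restarts, while mapreduce.job.restart.recover is a per-job flag that lets an individual job opt out. Enabling recovery is therefore a JT config change (a sketch):

    <!-- mapred-site.xml on the JobTracker -->
    <property>
      <name>mapred.jobtracker.restart.recover</name>
      <value>true</value>
    </property>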

Re: Mapreduce jobtracker recover property

2013-09-23 Thread Viswanathan J
-tom-white-3rd/chapter-5/running-on-a-cluster Regards, Shahab. On Mon, Sep 23, 2013 at 10:31 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, I'm using Hadoop 1.2.1 in production HDFS, and I can see the following properties and values in the jobtracker job.xml

Re: Hadoop Jobtracker OOME

2013-09-16 Thread Viswanathan J
Appreciate the response. On Sep 16, 2013 1:26 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi Guys, currently we are running a small Hadoop (1.2.1) cluster with 13 nodes; today we got an OutOfMemory error in the jobtracker: java.io.IOException: Call to nn:8020 failed on local exception

Hadoop Jobtracker OOME

2013-09-15 Thread Viswanathan J
Hi Guys, currently we are running a small Hadoop (1.2.1) cluster with 13 nodes; today we got an OutOfMemory error in the jobtracker: java.io.IOException: Call to nn:8020 failed on local exception: java.io.IOException: Couldn't set up IO streams at

Pig jars

2013-09-06 Thread Viswanathan J
In the latest Pig version there are two jars: pig.jar and pig-withouthadoop.jar. What is the difference between those jars?
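
The bundled pig.jar carries its own Hadoop classes and is self-sufficient for local runs, while pig-withouthadoop.jar expects the cluster's Hadoop jars on the classpath so the two versions cannot clash. A minimal local-mode run with the bundled jar (a sketch; the script name is a placeholder):

    java -cp pig.jar org.apache.pig.Main -x local myscript.pig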

Disk not equally utilized in HDFS data nodes

2013-09-05 Thread Viswanathan J
Hi, the data stored on the datanodes is not spread equally across all the data directories. We have 4x1 TB drives, but a huge amount of data is stored on a single disk on every node. How do we balance utilization across all the drives? This causes the HDFS storage size to grow very quickly even though
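
The 1.x datanode round-robins new blocks across every directory listed in dfs.data.dir, so the first thing to verify is that all four mounts are actually listed and writable (a sketch; the mount points are illustrative):

    <!-- hdfs-site.xml on each datanode -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data1/dfs/dn,/data2/dfs/dn,/data3/dfs/dn,/data4/dfs/dn</value>
    </property>

There is no intra-node disk rebalancer in this line of Hadoop; existing blocks stay where they are, so only new writes even out after the fix.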

Re: Disk not equally utilized in HDFS data nodes

2013-09-05 Thread Viswanathan J
of these paths exist and have some DN-owned directories under them. Please also keep the lists in CC/TO when replying; clicking Reply-to-all usually helps do this automatically. On Thu, Sep 5, 2013 at 11:16 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi Harsh, dfs.data.dir

Fwd: Pig GROUP operator - Data is shuffled and winds up together for the same grouping key

2013-08-29 Thread Viswanathan J
Appreciate the response. I'm facing this issue in prod. -- Forwarded message -- From: Viswanathan J jayamviswanat...@gmail.com Date: Thu, Aug 29, 2013 at 2:00 PM Subject: Pig GROUP operator - Data is shuffled and winds up together for the same grouping key To: u...@pig.apache.org

Hadoop Conf properties

2013-08-26 Thread Viswanathan J
As we are upgrading the HDFS cluster to Apache Hadoop 1.2.1, what are the key conf properties to configure for HDFS, MR, and the jobtracker, and what are the best practices? Thanks, Viswa.J

Re: Hadoop Conf properties

2013-08-26 Thread Viswanathan J
Appreciate the response. On Aug 27, 2013 7:47 AM, Viswanathan J jayamviswanat...@gmail.com wrote: As we are upgrading the HDFS cluster to Apache Hadoop 1.2.1, what are the key conf properties to configure for HDFS, MR, and the jobtracker, and what are the best practices? Thanks, Viswa.J

Re: Hadoop Conf properties

2013-08-26 Thread Viswanathan J
Hi Harsh, appreciate your response. On Aug 27, 2013 8:58 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Appreciate the response. On Aug 27, 2013 7:47 AM, Viswanathan J jayamviswanat...@gmail.com wrote: As we are upgrading the HDFS cluster to Apache Hadoop 1.2.1, what are the key

Re: Hadoop Conf properties

2013-08-26 Thread Viswanathan J
tuning advice from other users; perhaps sharing your deployment, use-case, and hardware details along with the existing configuration may help them suggest something you're missing. On Tue, Aug 27, 2013 at 10:41 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi Harsh, appreciate your response

Re: Pig upgrade

2013-08-24 Thread Viswanathan J
Had sent mail to the Pig user group but got no response. On Aug 24, 2013 10:47 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Thanks a lot. On Aug 24, 2013 10:38 AM, Harsh J ha...@cloudera.com wrote: The Apache Pig's own list at u...@pig.apache.org is the right place to ask this. On Sat, Aug

Re: Pig upgrade

2013-08-24 Thread Viswanathan J
Thanks Ted. On Aug 24, 2013 8:59 PM, Ted Yu yuzhih...@gmail.com wrote: New features can be found here: https://blogs.apache.org/pig/ I found the above URL through http://search-hadoop.com/m/ib1SlsHMtb1 Cheers. On Sat, Aug 24, 2013 at 8:21 AM, Viswanathan J jayamviswanat...@gmail.com

Re: Pig upgrade

2013-08-24 Thread Viswanathan J
, Aug 24, 2013 at 8:51 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Had sent mail to the Pig user group but got no response. On Aug 24, 2013 10:47 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Thanks a lot. On Aug 24, 2013 10:38 AM, Harsh J ha...@cloudera.com wrote

Hadoop upgrade

2013-08-23 Thread Viswanathan J
Hi, we are planning to upgrade our production HDFS cluster from 1.0.4 to 1.2.1. If I upgrade the cluster directly, will it affect the edits, fsimage, and checkpoints? Also, after the upgrade, will it read the blocks and files from the datanodes properly? Will a version-ID conflict occur with the NN?
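
The 1.x line supports an in-place upgrade that keeps the old storage layout around for rollback, which addresses the fsimage/edits and layout-version concerns (a sketch; back up dfs.name.dir before starting):

    stop-all.sh                        # stop the 1.0.4 cluster
    # install the 1.2.1 binaries, reusing the same conf, dfs.name.dir and dfs.data.dir
    start-dfs.sh -upgrade              # NN/DNs convert storage; the previous layout is retained
    hadoop dfsadmin -upgradeProgress status
    hadoop dfsadmin -finalizeUpgrade   # only after validation; this discards the rollback copy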

Pig upgrade

2013-08-23 Thread Viswanathan J
Hi, I'm planning to upgrade the Pig version from 0.8.0 to 0.11.0; I hope this is a stable release. What are the improvements, key features, benefits, and advantages of upgrading? Thanks, Viswa.J

Re: Pig upgrade

2013-08-23 Thread Viswanathan J
Thanks a lot. On Aug 24, 2013 10:38 AM, Harsh J ha...@cloudera.com wrote: The Apache Pig's own list at u...@pig.apache.org is the right place to ask this. On Sat, Aug 24, 2013 at 10:22 AM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, I'm planning to upgrade the Pig version from

Re: getting errors on datanode/tasktracker logs

2013-08-10 Thread Viswanathan J
Please go through the Hadoop-supported Java versions: http://wiki.apache.org/hadoop/HadoopJavaVersions On Aug 10, 2013 4:12 PM, Jagat Singh jagatsi...@gmail.com wrote: Mostly it would be an issue with Java; if things are working well in your prod with 1.6, I suggest you try with 1.6 and see how it goes.

Hadoop upgrade

2013-08-09 Thread Viswanathan J
Hi, planning to upgrade Hadoop from 1.0.3 to 1.1.2. What are the key features or advantages?

Re: Hadoop upgrade

2013-08-09 Thread Viswanathan J
/releasenotes.html On Fri, Aug 9, 2013 at 5:41 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, planning to upgrade Hadoop from 1.0.3 to 1.1.2. What are the key features or advantages?

Hadoop LZO compression class not found error in Pig 0.11

2013-07-27 Thread Viswanathan J
Hi all, I'm trying to upgrade the Pig version to pig-0.11 but am getting this error: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found. While submitting the query I got a success msg as follows, but the above error appears in the final stage: 2013-07-27
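
This error means Pig's JVM cannot see the hadoop-lzo classes even when the cluster has them. Exporting the jar and native-library locations before launching Pig usually resolves it (a sketch; the jar version and paths are illustrative):

    export PIG_CLASSPATH=/usr/lib/hadoop/lib/hadoop-lzo-0.4.17.jar:$PIG_CLASSPATH
    export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native
    pig myscript.pig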