Ganglia monitoring with Apache Hadoop 1.0.0

2012-08-31 Thread sandeep
yourgangliahost_1:8649,yourgangliahost_2:8649 Thanks, Sandeep sand...@cloudwick.com
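
For Hadoop 1.0.0 that wiring normally lives in conf/hadoop-metrics2.properties. A minimal sketch, assuming Ganglia 3.1 (GangliaSink30 would be the choice for older Ganglia releases) and the placeholder hostnames above:

    # Send all Hadoop metrics to Ganglia's gmond on its default port 8649
    *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
    *.sink.ganglia.period=10
    namenode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649
    datanode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649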

Re: Implement Triggers for HDFS

2012-07-26 Thread Sandeep Reddy P
s/3.2.0-incubating/docs/CoordinatorFunctionalSpec.html- > there are triggers which allow you to perform actions when new data > appears on the filesystem in a specified location. > > On 26 July 2012 03:08, Sandeep Reddy P > wrote: > > > Hi, > > Is it possible to i
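
In practice that means declaring the HDFS directory as an Oozie coordinator dataset and input-event: the coordinator holds the workflow until the data instance (often signaled by a done-flag file such as _SUCCESS) materializes, which is the closest thing to an HDFS trigger available in this stack.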

Re: Error:Hdfs Client for hadoop using native java api

2012-07-22 Thread Sandeep Reddy P
ple, what is the value for > > conf.get("fs.default.name")? > > > > Alternatively, you can set this property directly in your code: > > > > conf.set("fs.default.name", "hdfs://hadoop1.devqa.local:8020"); > > > > HTH, > > Min
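
Pulling that advice together into a self-contained sketch (the hostname and port are the ones quoted in the thread; adjust for your cluster):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HdfsClientCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Set the NameNode URI directly instead of relying on a
            // core-site.xml that may not be on the client's classpath.
            conf.set("fs.default.name", "hdfs://hadoop1.devqa.local:8020");
            FileSystem fs = FileSystem.get(conf);
            // Print the filesystem the client actually bound to; if this
            // says file:/// the configuration was not picked up.
            System.out.println("Connected to: " + fs.getUri());
        }
    }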

Re: Error:Hdfs Client for hadoop using native java api

2012-07-19 Thread Sandeep Reddy P
(some part of it if you don't want to throw the > full code here) and let us know which part of your code is throwing this > error. > > Regards > > ∞ > Shashwat Shriparv > > > > On Thu, Jul 19, 2012 at 6:46 PM, Sandeep Reddy P < > sandeepreddy.3...@gmail.com

Re: Error:Hdfs Client for hadoop using native java api

2012-07-19 Thread Sandeep Reddy P
Hi John, We have applications on Windows, so our devs need to connect to HDFS from Eclipse installed on Windows. I'm trying to put data from Windows into HDFS using Java code. On Thu, Jul 19, 2012 at 5:41 AM, John Hancock wrote: > Sandeep, > > I don't understand your s

Error:Hdfs Client for hadoop using native java api

2012-07-18 Thread Sandeep Reddy P
mLocalFile(LocalFileSystem.java:55) at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1292) at Hdfs.main(Hdfs.java:18) File not found. Can anyone please help me with this issue? -- Thanks, sandeep
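
The "File not found" here points at the local source, not HDFS: copyFromLocalFile reads the source through the client's local filesystem. A sketch of what Hdfs.java presumably does, with hypothetical paths (on Windows the source must be an absolute path the JVM can actually see):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class Hdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://hadoop1.devqa.local:8020");
            FileSystem fs = FileSystem.get(conf);
            // Hypothetical paths: the source file must exist locally, or
            // copyFromLocalFile fails exactly as in the stack trace above.
            Path src = new Path("C:/data/sample.txt");
            Path dst = new Path("/user/sandeep/sample.txt");
            fs.copyFromLocalFile(src, dst);
        }
    }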

Re: Hive Client build error

2012-07-11 Thread Sandeep Reddy P
help me resolve this? > > -- > Thanks, > sandeep > > -- Thanks, sandeep

Hive Client build error

2012-07-11 Thread Sandeep Reddy P
-archives.apache.org/mod_mbox/hive-user/201102.mbox/%3CB80E3C80BB6402419CAD9190E3BB749C259E9DC5@rpcoex01.rpcorp.local%3E https://issues.apache.org/jira/browse/HIVE-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:changehistory-tabpanel Can you please help me resolve this? -- Thanks, sandeep

Hive Client build error

2012-07-11 Thread Sandeep Reddy P
cdh3 downloads. Can anyone please help me resolve this issue? -- Thanks, sandeep

How to use Hive patch

2012-07-05 Thread Sandeep Reddy P
Hi, I'm using cdh3u4 RPMs with hive-0.7.1. Now I need the JDBC driver patch HIVE-2137. Can you please guide me on how to use this patch? Here is the link for the patch: https://issues.apache.org/jira/browse/HIVE-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel -- Thanks, sandeep
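
For an svn-era Hive patch like this, the usual recipe is to download the .patch attachment into a Hive 0.7.1 source checkout, apply it with patch -p0 < HIVE-2137.patch, and rebuild with ant package; with cdh3u4 RPMs the rebuilt Hive JDBC jar would then be swapped in for the packaged one. The exact paths and targets here are assumptions, since CDH's build layout can differ.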

Re: Hive/Hdfs Connector

2012-07-05 Thread Sandeep Reddy P
s how to run HQL that are generated from application > using Hive JDBC driver > > https://cwiki.apache.org/confluence/display/Hive/HiveClient > > Regards > Senthil > > On Thu, Jul 5, 2012 at 9:00 PM, Sandeep Reddy P < > sandeepreddy.3...@gmail.com > > wrote: > &g
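
The HiveClient wiki page cited above boils down to a few lines of JDBC. A sketch against the Hive 0.7-era driver, assuming the Thrift server is up (hive --service hiveserver) and that "hivehost" with the default port 10000 stands in for the real machine:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {
        public static void main(String[] args) throws Exception {
            // Driver class name used by Hive 0.7's JDBC driver.
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:hive://hivehost:10000/default", "", "");
            Statement stmt = con.createStatement();
            // Any generated HQL can be submitted the same way.
            ResultSet rs = stmt.executeQuery("SHOW TABLES");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            con.close();
        }
    }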

Re: Hive/Hdfs Connector

2012-07-05 Thread Sandeep Reddy P
Hi Michael, No I didn't; we have a 5-node cluster with Hive running on one of the nodes. On Thu, Jul 5, 2012 at 11:26 AM, Michael Segel wrote: > Have you tried using Hive's thrift server? > > On Jul 5, 2012, at 10:20 AM, Sandeep Reddy P wrote: > > > We use hive Jdb

Re: Hive/Hdfs Connector

2012-07-05 Thread Sandeep Reddy P
We use Hive JDBC drivers to connect to an RDBMS. But we need our application, which generates HQL, to connect directly to Hive. On Thu, Jul 5, 2012 at 11:12 AM, Bejoy KS wrote: > Hi Sandeep > > You can connect to hdfs from a remote machine if that machine is reachable > from the cluster,

Hive/Hdfs Connector

2012-07-05 Thread Sandeep Reddy P
Hive/Hdfs to RDBMS; instead we need our application to connect directly to Hive/Hdfs. -- Thanks, sandeep

Re: Hive error when loading csv data.

2012-06-26 Thread Sandeep Reddy P
> Sent from a remote device. Please excuse any typos... > > Mike Segel > > On Jun 26, 2012, at 8:58 PM, Sandeep Reddy P > wrote: > > > If i do that my data will be d|"abc|def"|abcd my problem is not solved > > > > On Tue, Jun 26, 2012 at 6:4

Re: Hive error when loading csv data.

2012-06-26 Thread Sandeep Reddy P
If I do that my data will be d|"abc|def"|abcd, so my problem is not solved. On Tue, Jun 26, 2012 at 6:48 PM, Michel Segel wrote: > Yup. I just didn't add the quotes. > > Sent from a remote device. Please excuse any typos... > > Mike Segel > > On Jun 26, 2012, at 4:

Re: Hive error when loading csv data.

2012-06-26 Thread Sandeep Reddy P
le with a custom InputFormat class > > that can handle this (Try using OpenCSV readers, they support this), > > instead of relying on Hive to do this for you. If you're successful in > > your approach, please also consider contributing something back to > > Hive/Pig to h
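
A small sketch of that OpenCSV suggestion: with | as the separator and a double quote as the quote character, the quoted pipe survives as field content rather than splitting the row. The class name and sample row are illustrations only.

    import au.com.bytecode.opencsv.CSVParser;

    public class PipeDelimitedDemo {
        public static void main(String[] args) throws Exception {
            // Separator '|', quote character '"'.
            CSVParser parser = new CSVParser('|', '"');
            String[] fields = parser.parseLine("d|\"abc|def\"|abcd");
            for (String field : fields) {
                System.out.println(field); // prints: d, then abc|def, then abcd
            }
        }
    }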

Hive error when loading csv data.

2012-06-26 Thread Sandeep Reddy P
When I open the same csv file with Microsoft Excel I get abc,def. How should I solve this error? -- Thanks, sandeep

Re: Help with java code to move data from local fs to HDFS

2012-06-22 Thread Sandeep Reddy P
Oh, no I didn't; it's working now. Thanks for the reply. On Fri, Jun 22, 2012 at 11:30 AM, Prajakta Kalmegh wrote: > Did you provide the src and dest paths (args[0] and args[1])? > > > On Fri, Jun 22, 2012 at 8:56 PM, Sandeep Reddy P < > sandeepreddy.3...@gmail.com>

Re: Help with java code to move data from local fs to HDFS

2012-06-22 Thread Sandeep Reddy P
Hi Prajakta, Awesome!! Thanks for the reply, but I got one more issue: Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0 at FileCopy.main(FileCopy.java:17) On Fri, Jun 22, 2012 at 11:13 AM, Prajakta Kalmegh wrote: > Hi Sandeep > > I think it

Help with java code to move data from local fs to HDFS

2012-06-22 Thread Sandeep Reddy P
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(dst), conf);
OutputStream out = fs.create(new Path(dst), new Progressable() {
    public void progress() { System.out.print("."); }
});
IOUtils.copyBytes(in, out, 4096, true);
} }
-- Thanks, sandeep
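
This looks like the classic copy-with-progress example, so here is a self-contained reconstruction with the imports restored and a guard against the ArrayIndexOutOfBoundsException from the earlier message; treat it as a sketch, not the poster's exact file:

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.util.Progressable;

    public class FileCopy {
        public static void main(String[] args) throws Exception {
            // Missing args[0]/args[1] is exactly what produced the
            // ArrayIndexOutOfBoundsException at FileCopy.java:17 above.
            if (args.length < 2) {
                System.err.println("Usage: FileCopy <local src> <hdfs dst>");
                System.exit(1);
            }
            String src = args[0];
            String dst = args[1];
            InputStream in = new BufferedInputStream(new FileInputStream(src));
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create(dst), conf);
            OutputStream out = fs.create(new Path(dst), new Progressable() {
                public void progress() {
                    System.out.print("."); // one dot per chunk written to HDFS
                }
            });
            IOUtils.copyBytes(in, out, 4096, true); // closes both streams
        }
    }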

Re: Hardware specs calculation for io

2012-06-13 Thread Sandeep Reddy P
Thanks for the reply, Matt. We have 6 TB of raw data and we are I/O bound. On Wed, Jun 13, 2012 at 11:44 AM, Matt Davies wrote: > Sandeep, > > I think one critical piece missing is whether or not you are counting the > 24 TB as raw or as replicated. In a standard environment with a rep
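
With HDFS's default replication factor of 3, 6 TB of raw data becomes roughly 18 TB on disk; adding headroom for MapReduce intermediate output and OS overhead (often estimated at another 25% or so) is what pushes such a cluster toward the 24 TB figure under discussion.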

Re: Hadoop on physical Machines compared to Amazon Ec2 / virtual machines

2012-05-31 Thread Sandeep Reddy P
If you ask me there is > no comparison possible if you have the datacenter space to host your > machines. > > Do you really need 10GbE? We're quite happy with 1GbE with no > over-subscription. > > Mathias. > -- Thanks, sandeep

Re: Small glitch with setting up two node cluster...only secondary node starts (datanode and namenode don't show up in jps)

2012-05-29 Thread sandeep
Can you check the logs for the NN and DN? Sent from my iPhone On May 27, 2012, at 1:21 PM, Rohit Pandey wrote: > Hello Hadoop community, > > I have been trying to set up a double node Hadoop cluster (following > the instructions in - > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux

Re: Map/Reduce Tasks Fails

2012-05-22 Thread Sandeep Reddy P
appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient). attempt_201205211504_0009_m_02_0: log4j:WARN Please initialize the log4j system properly. But other map tasks are running on the same datanode. Thanks, sandeep.
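
The log4j:WARN lines themselves just mean the task JVM came up without a log4j.properties on its classpath, so its messages had nowhere to go; they are usually a symptom of a misconfigured task classpath or working directory rather than the root cause of the failed attempt.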

Re: Map/Reduce Tasks Fails

2012-05-22 Thread Sandeep Reddy P
Raj, - Network Card: VMware generic Gigabit Network adapter. As long as these VMs are only talking to each other, the communication speed will be close to 1 Gb. The top output is from when the systems are idle.

Re: Map/Reduce Tasks Fails

2012-05-22 Thread Sandeep Reddy P
I got similar errors with Apache Hadoop 1.0.0. Thanks, Sandeep.

Re: Map/Reduce Tasks Fails

2012-05-22 Thread Sandeep Reddy P
Task Trackers table columns: Name | Host | # running tasks | Max Map Tasks | Max Reduce Tasks | Task Failures | Directory Failures | Node Health Status | Seconds Since Node Last Healthy | Total Tasks Since Start | Succeeded Tasks Since Start | Total Tasks Last Day | Succeeded Tasks Last Day | Total Tasks Last Hour | Su

FW: Help required regarding HDFS-1228 [0.20-append branch]

2011-03-23 Thread sandeep

Help required regarding HDFS-1228 [0.20-append branch]

2011-03-17 Thread sandeep
whether HDFS-679 is applicable for the same context or not. Thanks, sandeep

When namenode goes down while checkpointing and it is started again, checkpointing always fails

2011-02-19 Thread sandeep
namenode is started again, during checkpointing it will not divertFileStreams from edits to edits.new; as a result, checkpointing will always fail while validating the upload in FSImage.validateCheckpointUpload. Thanks, sandeep

confirmation required regarding the BACKUP-NAMENODE-BEHAVIOR when it is performing checkPoint

2011-02-08 Thread sandeep
CAN BE HANDLED Thanks & Regards, sandeep

RE: How to Achieve TaskTracker Decommission

2011-01-06 Thread sandeep
-----Original Message----- From: Allen Wittenauer [mailto:awittena...@linkedin.com] Sent: Thursday, January 06, 2011 10:08 PM To: Subject: Re: How to Achieve TaskTracker Decommission On Jan 6, 2011, at 3:35 AM, sandeep wrote: > Can anyone let me know what command I need to exec

How to Achieve TaskTracker Decommission

2011-01-06 Thread sandeep
[HADOOP-5643], but I was unable to find what command I need to execute. I have tried ./mapred jobtracker -decommission but it is not working. Please help me. Thanks, sandeep
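
For anyone finding this later: the mechanism HADOOP-5643 introduced appears to be exclude-file based rather than a jobtracker subcommand. Point mapred.hosts.exclude in mapred-site.xml at an exclude file, add the TaskTracker hostnames to that file, and run hadoop mradmin -refreshNodes on the JobTracker so it picks up the change.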

Regarding decommission progress status for datanode

2010-12-16 Thread sandeep
dfsadmin -report. Please let me know. Thanks, sandeep
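
For reference, bin/hadoop dfsadmin -report does expose per-datanode decommission state: a node mid-decommission shows "Decommission Status : Decommission in progress" and flips to "Decommissioned" once its blocks have been re-replicated elsewhere.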

Please let me know in which scenarios the following exception will occur on the hadoop-datanode side

2010-12-16 Thread sandeep
java.io.IOException: Block blk_129_3380 is not valid at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:962).
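
Broadly, FSDataset.getBlockFile raises "Block ... is not valid" when the datanode is asked for a block it no longer tracks in its block map: typically the block was deleted or invalidated (file removed, replica moved elsewhere) between a client obtaining block locations and actually reading, or the block files on disk are missing or corrupt.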

RE: Hadoop upgrade [Do we need to have same value for dfs.name.dir ] while upgrading

2010-12-15 Thread sandeep
Thanks Adarsh. I have done the following: for NEW_HADOOP_INSTALL (the new Hadoop version installation) I set the same values for dfs.name.dir and fs.checkpoint.dir that I had configured in OLD_HADOOP_INSTALL (the old Hadoop version installation). Now it is working. Thanks, sandeep

Hadoop upgrade [Do we need to have the same value for dfs.name.dir] while upgrading

2010-12-15 Thread sandeep
Inconsistent state exception as the dfs.name.dir is not present. My question is: while upgrading, do we need to keep the same old configuration values like dfs.name.dir, etc., or do I need to format the namenode first and then start upgrading? Please let me know. Thanks, sandeep
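
As the follow-up above confirms, the answer is to keep the old values: an upgrade (start-dfs.sh -upgrade) reads the existing on-disk image, so dfs.name.dir in the new installation must point at the same directories the old one used. Formatting the namenode would wipe the filesystem metadata rather than upgrade it.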