Non-DFS used Parameter

2010-08-17 Thread Adarsh Sharma
Hi all, I am not able to understand clearly what *Non-DFS Used* means in the Hadoop NameNode web UI. I think it is the extra space occupied by temporary map-reduce local files. Can anyone please tell me how to change that parameter and what it is comprised of. Thanks in advance.
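A note for anyone hitting this later: *Non-DFS Used* is derived, not set; it is roughly configured capacity minus DFS-used minus remaining, i.e. whatever the datanode disks hold besides HDFS blocks (logs, map-reduce spill files, OS files). The closest tunable is dfs.datanode.du.reserved, which sets aside per-volume space for non-DFS use. A minimal hdfs-site.xml sketch (the 10 GB figure is only an example):

```xml
<!-- hdfs-site.xml: reserve space per volume for non-DFS use
     (value in bytes; 10 GB here is an example, not a recommendation) -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```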

Configure Secondary Namenode

2010-08-18 Thread Adarsh Sharma
I am not able to find any command or parameter in core-default.xml to configure a secondary namenode on a separate machine. I have a 4-node cluster with the jobtracker, master, and secondary namenode on one machine, and the remaining 3 are slaves. Can anyone please tell me. Thanks in advance
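For reference, 0.20.x has no core-default.xml key for this; start-dfs.sh launches a SecondaryNameNode on every host named in conf/masters, and that daemon locates the primary through dfs.http.address. A sketch, assuming hostnames `master` and `snn-host` (both names are placeholders):

```
# conf/masters (one secondary namenode host per line)
snn-host
```

```xml
<!-- hdfs-site.xml on snn-host: HTTP address of the primary namenode,
     so the secondary can fetch the image and edits from it -->
<property>
  <name>dfs.http.address</name>
  <value>master:50070</value>
</property>
```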

Help in Subscribing Hadoop

2010-08-23 Thread Adarsh Sharma
Hi all, I want to subscribe to the hadoop mailing list. Please tell me the proper way to subscribe to it. Thanks in advance, Adarsh Sharma

How to know Hive version

2010-08-24 Thread Adarsh Sharma
Dear all, I'm using HadoopDB-SMS Hive. Can anyone please tell me how to determine its version, or the command to check it? We do have a command to check the Hadoop version: bin/hadoop version. Thanks

Minimum Hardware Required

2010-09-08 Thread Adarsh Sharma
Hi all, Can anybody please tell me the minimum hardware configuration required for master/slave nodes in a 4-node cluster. Thanks in advance

Re: Error starting namenode

2010-09-08 Thread Adarsh Sharma
Mark, just write down the following lines in the /etc/hosts file: ip-address hostname, e.g. 192.168.0.111 ws-test 192.168.0.165 rahul, same for all nodes. Mark wrote: I am getting the following errors from my datanodes when I start the namenode. 2010-09-08 14:17:40,690 INFO org.apache.

Tasktracker fails

2012-02-21 Thread Adarsh Sharma
Dear all, Today I am trying to configure hadoop-0.20.205.0 on a 4-node cluster. When I start my cluster, all daemons get started except the tasktracker; I don't know why the tasktracker fails with the following error logs. The cluster is in a private network. My /etc/hosts file contains all IP hostname reso

Re: Tasktracker fails

2012-02-22 Thread Adarsh Sharma
Any update on the below issue? Thanks Adarsh Sharma wrote: Dear all, Today I am trying to configure hadoop-0.20.205.0 on a 4-node cluster. When I start my cluster, all daemons get started except the tasktracker; I don't know why the tasktracker fails with the following error logs. Cluster

Error : conf/configuration :Failed to set setXIncludeAware(true) for parser

2010-09-17 Thread Adarsh Sharma
Dear all, I am trying to connect to Hive through my application but I am getting the following error : 12:03:10 ERROR conf.Configuration: Failed to set setXIncludeAware(true) for parser org.apache.xerces.jaxp.documentbuilderfactoryi...@e6c:java.lang.UnsupportedOperationException: This parse

Error

2010-09-17 Thread Adarsh Sharma
Dear all, I am trying to connect to Hive through my application but I am getting the following error : 12:03:10 ERROR conf.Configuration: Failed to set setXIncludeAware(true) for parser org.apache.xerces.jaxp.documentbuilderfactoryi...@e6c:java.lang.UnsupportedOperationException: This parse

Re: jobtracker: Cannot assign requested address

2010-09-21 Thread Adarsh Sharma
I am not sure, but try 2 things: 1. Just give your proper IP address in the value, e.g. 123.154.0.122:9001. 2. Change the port to 9001, and your /etc/hosts file must have these values for master and slaves: 123.154.0.122 hostname 123.154.0.111 hostname 123.154.0.112 hostname David Ro
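"Cannot assign requested address" usually means mapred.job.tracker names a host:port that does not resolve to a local interface on the jobtracker machine. A mapred-site.xml sketch, assuming the jobtracker host resolves as `master` (a placeholder name):

```xml
<!-- mapred-site.xml: this host:port must be an address the
     jobtracker machine actually owns -->
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>
```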

Read/Writing into HDFS

2010-09-30 Thread Adarsh Sharma
Dear all, I have set up a Hadoop cluster of 10 nodes. I want to know how we can read/write a file from HDFS (simply). Yes, I know there are commands; I read through all the HDFS commands. bin/hadoop fs -copyFromLocal requires that the file be on the local filesystem. But I want to know how we can re

Factorial in Map-Reduce

2010-10-12 Thread Adarsh Sharma
Hi, I am practising some programs in Map-Reduce such as WordCount, Word Search, Grep, etc. Now I want to know: is it possible to write a Map-Reduce program on Hadoop for finding the *factorial of a number*? In that case, how do we specify the InputFormat, what are the key-values, etc.? I made this program in Java but no
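The question maps naturally onto map-reduce even without Hadoop: split the range 1..n into chunks (the "input splits"), have each mapper multiply its own chunk, and reduce by multiplying the partial products. A stand-alone Python sketch of that decomposition (function and parameter names are my own; this is not Hadoop API code):

```python
from functools import reduce
from math import prod  # Python 3.8+

def factorial_mr(n, chunks=4):
    """Compute n! by a map/reduce-style decomposition of the range 1..n."""
    if n < 2:
        return 1
    step = (n // chunks) + 1
    # "InputFormat": split 1..n into chunks of consecutive integers.
    splits = [range(i, min(i + step, n + 1)) for i in range(1, n + 1, step)]
    # "Map": each split independently multiplies its own numbers.
    partials = [prod(s) for s in splits]
    # "Reduce": multiply the partial products together.
    return reduce(lambda a, b: a * b, partials, 1)

print(factorial_mr(10))  # prints 3628800
```

On a real cluster the key would be constant (everything goes to one reducer) and each map value would be the product of one split, which is why factorial gains little from Hadoop unless n is enormous.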

Re: Not able to execute MaxTemperature example

2010-10-19 Thread Adarsh Sharma
book.com/rohitmishra Hi Rohit, Map-Reduce programs are executed either by *setting up the Eclipse environment for Hadoop* in Eclipse or through the command line by making a jar of the program. Check the below link that clearly explains this :- http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html

Granting Permissions to HDFS

2010-10-28 Thread Adarsh Sharma
* commands. But I don't want to change users and groups. Can someone please help me achieve this, or say whether it is possible in HDFS or not. Thanks and Regards Adarsh Sharma

Granting Access

2010-10-28 Thread Adarsh Sharma
Hi all, As we all know, Hadoop considers the user who starts the hadoop cluster as the superuser. It grants that user full access to HDFS. But now I want to know how we can grant R/W access to a new user, e.g. Tom, to access HDFS. Is there any command, or can we write code for it? I rea

Re: Granting Access

2010-10-29 Thread Adarsh Sharma
n Fri, Oct 29, 2010 at 11:47 AM, Adarsh Sharma wrote: Hi all, As we all know, Hadoop considers the user who starts the hadoop cluster as the superuser. It grants that user full access to HDFS. But now I want to know how we can grant R/W access to a new user, e.g. Tom, to access HDFS. Is

Re: namenode -format does not read config file

2010-11-09 Thread Adarsh Sharma
nd Regards Adarsh Sharma

Deficiency in Hadoop

2010-11-11 Thread Adarsh Sharma
no historical accounting capability there. So I want to know whether it is worthwhile to use SGE with Hadoop in a production cluster or not. Please share your views. Thanks in advance Adarsh Sharma

Re: Deficiency in Hadoop

2010-11-11 Thread Adarsh Sharma
Steve Loughran wrote: On 11/11/10 11:02, Adarsh Sharma wrote: Dear all, Does anyone have experience with Hadoop integration with SGE (Sun Grid Engine)? It is open-source too (sge-6.2u5). Does SGE really overcome some of the deficiencies of Hadoop? According to an article

Hadoop and Eucalyptus

2010-11-18 Thread Adarsh Sharma
Dear all, Does anyone have experience configuring Hadoop on Eucalyptus? I googled a lot, but was not able to find any useful link. Looking forward to some help. Thanks Adarsh

Re: where is example of the configuration about multi nodes on one machine?

2010-11-30 Thread Adarsh Sharma
is best? Thanks & Regards Adarsh Sharma rahul patodi wrote: the last option I gave was to run hadoop in fully distributed mode, but you can run hadoop in pseudo-distributed mode: http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-pseudo-distributed.html or standalone mode: http://ha

Re: Abandoning Block

2010-12-06 Thread Adarsh Sharma
li ping wrote: Make sure the VMs can reach each other (e.g. IPtables), and that the DNS/IP is correct. On Mon, Dec 6, 2010 at 7:05 PM, Adarsh Sharma wrote: Dear all, I am facing the below problem while running Hadoop on VMs. I am using hadoop-0.20.2 with JDK6. My jobtracker log says that :-20

Re: Abandoning Block

2010-12-06 Thread Adarsh Sharma
ech (India) Private Limited, www.impetus.com Mob:09907074413 On Tue, Dec 7, 2010 at 10:38 AM, Adarsh Sharma wrote: li ping wrote: Make sure the VMs can reach each other (e.g. IPtables), and that the DNS/IP is correct. On Mon, Dec 6, 2010 at 7:05 PM, Adarsh Sharma wrote:

Reduce Error

2010-12-08 Thread Adarsh Sharma
/job_201012061426_0001/attempt_201012061426_0001_m_000292_0/output/file.out It states that it is not able to locate a file that is created in the mapred.local.dir of Hadoop. Thanks in advance for any sort of information regarding this. Best Regards Adarsh Sharma

Re: Reduce Error

2010-12-08 Thread Adarsh Sharma
Ted Yu wrote: Any chance mapred.local.dir is under /tmp and part of it got cleaned up? On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote: Dear all, Did anyone encounter the below error while running a job in Hadoop? It occurs in the reduce phase of the job

Re: Running not as "hadoop" user

2010-12-08 Thread Adarsh Sharma
Todd Lipcon wrote: The user who started the NN has superuser privileges on HDFS. You can also configure a supergroup by setting dfs.permissions.supergroup (default "supergroup") -Todd On Wed, Dec 8, 2010 at 9:34 PM, Mark Kerzner wrote: Hi, "hadoop" user has some advantages for running Ha
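The dfs.permissions.supergroup property Todd mentions goes in hdfs-site.xml; any user in that unix group is then treated as an HDFS superuser. A sketch, with `hdadmins` as an example group name:

```xml
<!-- hdfs-site.xml: members of this unix group become HDFS superusers
     ("hdadmins" is a placeholder; default is "supergroup") -->
<property>
  <name>dfs.permissions.supergroup</name>
  <value>hdadmins</value>
</property>
```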

Re: Reduce Error

2010-12-09 Thread Adarsh Sharma
accidentally deleted hadoop.tmp.dir in a node and whenever the reduce job was scheduled on that node that attempt would fail. On Wed, Dec 8, 2010 at 8:21 PM, Adarsh Sharma wrote: Raj V wrote: Go through the jobtracker, find the relevant node that handled attempt_201012061426_0001_m_000292_0 and

Hadoop on Cloud or Not

2010-12-09 Thread Adarsh Sharma
physical host just creates conflict for things like disk, ether and CPU that the virtual OS won't be aware of. Also, VM to disk performance is pretty bad right now, though that's improving. Thanks & Regards Adarsh Sharma

Re: exceptions copying files into HDFS

2010-12-13 Thread Adarsh Sharma
37/conf> cat core-site.xml fs.default.name hdfs://localhost r...@ritter:~/programs/hadoop-0.20.2+737/conf> cat hdfs-site.xml dfs.replication 1 Simply check via ssh that your slaves can connect to each other; ssh from one slave to another. Best Regards Adarsh Sharma

Libfb303.jar

2010-12-14 Thread Adarsh Sharma
olExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) I know this error occurs because libfb303.jar is in both the Hadoop and Hive lib directories. Can someone please tell me how to resolve this error. Thanks & Regards Adarsh Sharma

Re: Hadoop upgrade [Do we need to have same value for dfs.name.dir ] while upgrading

2010-12-15 Thread Adarsh Sharma
The error occurs due to a new-namespace issue in Hadoop. Did you copy dfs.name.dir and fs.checkpoint.dir to the new Hadoop directory? Formatting the namenode would cause you to lose all previous data. Best Regards Adarsh Sharma

Re: How to Speed Up Decommissioning progress of a datanode.

2010-12-15 Thread Adarsh Sharma
sravankumar wrote: Hi, Does anyone know how to speed up datanode decommissioning, and what are all the configurations related to decommissioning? How to speed up data transfer from the datanode getting decommissioned. Thanks & Regards, Sravan kumar. Chec

Thrift Error

2010-12-16 Thread Adarsh Sharma
e Please tell me why this occurs and how to resolve it. Thanks & Regards Adarsh Sharma

Re: Thrift Error

2010-12-16 Thread Adarsh Sharma
fb303.jar : /usr/lib/hive/lib/ See if this solves the problem. I have faced this issue earlier when accessing Hive over a thrift server. Thanks, Viral On Thu, Dec 16, 2010 at 2:12 AM, Adarsh Sharma wrote: Hi all, I have googled a lot about the below error but am not able to find the

Re: UI doesn't work

2010-12-27 Thread Adarsh Sharma
maha wrote: Hi, I get Error 404 when I try to use hadoop UI to monitor my job execution. I'm using Hadoop-0.20.2 and the following are parts of my configuration files. in Core-site.xml: fs.default.name hdfs://speed.cs.ucsb.edu:9000 in mapred-site.xml: mapred.job.tracker spe

Re: Retrying connect to server

2010-12-30 Thread Adarsh Sharma
ue occurred after configuring the Hadoop cluster. Reasons: 1. Your NameNode or JobTracker is not running. Verify through the web UI and the jps command. 2. DNS resolution. You must have IP hostname entries for all nodes in the /etc/hosts file. Best Regards Adarsh Sharma
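The /etc/hosts layout the reply describes, as a concrete sketch (hostnames and addresses are examples); the same entries should be present on every node:

```
# /etc/hosts on every cluster member
192.168.0.1   master
192.168.0.2   slave1
192.168.0.3   slave2
```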

HNY-2011

2011-01-03 Thread Adarsh Sharma
Dear all, A very-very Happy New Year 2011 to all. May God Bless all of us to solve future problems. Thanks and Regards Adarsh Sharma

Data for Testing in Hadoop

2011-01-03 Thread Adarsh Sharma
erent sizes (10 GB, 20 GB, 30 GB, 50 GB). I shall be grateful for this kindness. Thanks & Regards Adarsh Sharma

Re-Master Not Running Exception ( Hive/Hbase Integration )

2011-01-05 Thread Adarsh Sharma
it throws the below exception. I checked the sizes of hive_hbase_handler.jar, hbase-0.20.3.jar, and hbase-0.20.3.test.jar. They are the same. Please help. Best Regards Adarsh Sharma JVS On Dec 29, 2010, at 5:20 AM, Adarsh Sharma wrote: Dear all, I am following the wiki tutorial for confi

Re: Re-Master Not Running Exception ( Hive/Hbase Integration )

2011-01-05 Thread Adarsh Sharma
Adarsh Sharma wrote: From that wiki page: "If you are not using hbase-0.20.3, you will need to rebuild the handler with the HBase jar matching your version, and change the --auxpath above accordingly. Failure to use matching versions will lead to misleading connection failures su

Error in metadata: javax.jdo.JDOFatalDataStoreException

2011-01-05 Thread Adarsh Sharma
t Derby metastore ) and Hbase-0.20.3. Please tell me how this could be resolved. Also, I want to add one more thing: my Hadoop cluster is of 9 nodes, and 8 nodes act as datanodes, tasktrackers, and regionservers. Among these nodes I set zookeeper.quorum.property to have 5 datanodes. Could this be the issue? I don't know the number of servers needed for Zookeeper in fully distributed mode. Best Regards Adarsh Sharma

Hive/Hbase Integration Error

2011-01-06 Thread Adarsh Sharma
sktrackers and regionservers. Among these nodes I set zookeeper.quorum.property to have 5 datanodes. I don't know the number of servers needed for Zookeeper in fully distributed mode. Best Regards Adarsh Sharma

Re: How to Achieve TaskTracker Decommission

2011-01-06 Thread Adarsh Sharma
sandeep wrote: Hi, Can any one of you let me know what command I need to execute for decommissioning a TaskTracker? Datanode decommissioning I have achieved using hadoop dfsadmin -refreshNodes. Similar to HDFS, is there any command for MapReduce decommissioning? I have gone thr
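The map-reduce analogue of the HDFS exclude mechanism is mapred.hosts.exclude, a file listing tasktracker hostnames to decommission; newer releases reread it with `hadoop mradmin -refreshNodes`, while on 0.20 a jobtracker restart may be required. A mapred-site.xml sketch (the file path is an example):

```xml
<!-- mapred-site.xml: tasktrackers listed in this file are excluded
     (path is a placeholder; file holds one hostname per line) -->
<property>
  <name>mapred.hosts.exclude</name>
  <value>/home/hadoop/conf/mapred.exclude</value>
</property>
```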

Too-many fetch failure Reduce Error

2011-01-07 Thread Adarsh Sharma
nnelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522) Let's have some discussion. Thanks & Regards Adarsh Sharma

Re: Too-many fetch failure Reduce Error

2011-01-09 Thread Adarsh Sharma
$DiskErrorException: Could not find taskTracker/jobcache/job_201101071129_0001/attempt_201101071129_0001_m_12_0/output/file.out.index esteban. On Fri, Jan 7, 2011 at 06:47, Adarsh Sharma wrote: Dear all, I am researching about the below error and could not able to find the reason : Data Size

Re: ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Datanode state: LV = -19 CTime = 1294051643891 is newer than the namespace state: LV = -19 CTime = 0

2011-01-09 Thread Adarsh Sharma
and then format and start the cluster. This error occurs due to incompatibility in the metadata. Best Regards Adarsh Sharma

Re: TeraSort question.

2011-01-10 Thread Adarsh Sharma
If possible, please also post your configuration parameters like *dfs.data.dir*, *mapred.local.dir*, map and reduce parameters, java, etc. Thanks bharath vissapragada wrote: Ravi, Please post the figures and graphs .. Figures for large clusters (> 200 nodes) are certainly interesting .. Th

No locks available

2011-01-11 Thread Adarsh Sharma
lease help. Thanks & Regards Adarsh Sharma

Re: No locks available

2011-01-11 Thread Adarsh Sharma
Allen Wittenauer wrote: On Jan 11, 2011, at 2:39 AM, Adarsh Sharma wrote: Dear all, Yesterday I was working on a cluster of 6 Hadoop nodes ( Load data, perform some jobs ). But today when I start my cluster I came across a problem on one of my datanodes. Are you running

Re: Too-many fetch failure Reduce Error

2011-01-11 Thread Adarsh Sharma
Any update on this error? Thanks Adarsh Sharma wrote: Esteban Gutierrez Moguel wrote: Adarsh, Do you have in /etc/hosts the hostnames for masters and slaves? Yes, I know this issue. But do you think the error occurs while reading the output of the map? I want to know the proper

Re: When applying a patch, which attachment should I use?

2011-01-12 Thread Adarsh Sharma
I am also facing some issues, and I think applying hdfs-630-0.20-append.patch would solve my problem. I am trying to run a Hadoop/Hive/Hbase integration in fully distributed mode. But I am facing a Master Not Running E

Re: When applying a patch, which attachment should I use?

2011-01-13 Thread Adarsh Sharma
er servers ( HQuorumPeer ) or whether we need separate servers for it. My problem arises in running zookeeper. My Hbase is up and running in fully distributed mode too. With Best Regards Adarsh Sharma edward choi wrote: Dear Adarsh, My situation is somewhat different from yours as I am on

Why Hadoop is slow in Cloud

2011-01-16 Thread Adarsh Sharma
ts and if interested comment on it. Thanks & Regards Adarsh Sharma hadoop_testing_new.ods Description: application/vnd.oasis.opendocument.spreadsheet

No locks available

2011-01-17 Thread Adarsh Sharma
Dear all, I know this is a silly mistake, but I am not able to find the reason for the exception that causes one datanode to fail to start. I mounted /hdd2-1 of a physical machine into this VM and started the datanode and tasktracker. The datanode fails after a few seconds. Can someone tell me the root cause. Be

Re: No locks available

2011-01-17 Thread Adarsh Sharma
exception for one datanode. Can I know why it occurs? Thanx On Mon, Jan 17, 2011 at 1:43 PM, Adarsh Sharma wrote: Dear all, I know this is a silly mistake, but I am not able to find the reason for the exception that causes one datanode to fail to start. I mounted /hdd2-1 of a physical machine into

Re: No locks available

2011-01-17 Thread Adarsh Sharma
Harsh J wrote: Could you re-check your permissions on the $(dfs.data.dir)s for your failing DataNode versus the user that runs it? On Mon, Jan 17, 2011 at 6:33 PM, Adarsh Sharma wrote: Can I know why it occurs? Thanx Harsh, I know this issue and I cross-checked several times

Re: When applying a patch, which attachment should I use?

2011-01-17 Thread Adarsh Sharma
Thanx a Lot Edward, This information is very helpful to me. With Best Regards Adarsh Sharma edward choi wrote: Dear Adarsh, I have a single machine running Namenode/JobTracker/Hbase Master. There are 17 machines running Datanode/TaskTracker Among those 17 machines, 14 are running Hbase

Re: No locks available

2011-01-17 Thread Adarsh Sharma
Edward Capriolo wrote: On Mon, Jan 17, 2011 at 8:13 AM, Adarsh Sharma wrote: Harsh J wrote: Could you re-check your permissions on the $(dfs.data.dir)s for your failing DataNode versus the user that runs it? On Mon, Jan 17, 2011 at 6:33 PM, Adarsh Sharma wrote: Can i know

Stuck with the Issue : No Lock Available

2011-01-18 Thread Adarsh Sharma
ta folder of that VM. If someone knows the reason or anything about this issue, please be kind enough to suggest. I find it difficult because I followed the same steps on the other 2 VMs and they are running. Thanks & Regards Adarsh Sharma

Re: No locks available

2011-01-18 Thread Adarsh Sharma
Benjamin Gufler wrote: Hi, On 2011-01-17 14:28, Adarsh Sharma wrote: Edward Capriolo wrote: No locks available can mean that you are trying to use hadoop on a filesystem that does not support file-level locking. Are you trying to run your name node storage in NFS space? Yes Edward

Re: Why Hadoop is slow in Cloud

2011-01-18 Thread Adarsh Sharma
ay not - or may still - be an advantageous choice. Some reasons for the slowness would be highly helpful. Any guidance is appreciated. Context is king. Thanks & best Regards Adarsh Sharma On Mon, Jan 17, 2011 at 10:41 AM, Edward Capriolo wrote: Everything you emulate you cut X% perfor

Re: When applying a patch, which attachment should I use?

2011-01-20 Thread Adarsh Sharma
that won't be necessary unless your clusters are very heavily loaded. They also suggest that you give Zookeeper its own hard disk. But I haven't done that myself yet. (Hard disks cost money, you know.) So I'd say your cluster seems fine. But when you want to expand your cluster, you

Re: When applying a patch, which attachment should I use?

2011-01-20 Thread Adarsh Sharma
Extremely sorry, I forgot to attach the logs. Here they are: Adarsh Sharma wrote: Thanx Edward, today I looked at your suggestions and started working: edward choi wrote: Dear Adarsh, I have a single machine running Namenode/JobTracker/Hbase Master. There are 17 machines running Datanode

CUDA on Hadoop

2011-02-09 Thread Adarsh Sharma
troduction, configuring & running CUDA programs in a Hadoop cluster, any white papers or any sort of helpful information, please let me know through links or materials. I shall be grateful for any kindness. Thanks & Best Regards Adarsh Sharma

Re: CUDA on Hadoop

2011-02-09 Thread Adarsh Sharma
HDFS. Best Regards Adarsh Sharma Harsh J wrote: You can check-out this project which did some work for Hama+CUDA: http://code.google.com/p/mrcl/ On Wed, Feb 9, 2011 at 6:38 PM, Adarsh Sharma wrote: Dear all, I am going to work on a Project that includes " Working on CUDA in Hadoop E

Re: CUDA on Hadoop

2011-02-09 Thread Adarsh Sharma
He Chen wrote: Hi Sharma, I shared our slides about CUDA performance on Hadoop clusters. Feel free to modify them, but please mention the copyright! Chen On Wed, Feb 9, 2011 at 11:13 AM, He Chen > wrote: Hi Sharma I have some experience working on hybrid Had

Re: CUDA on Hadoop

2011-02-10 Thread Adarsh Sharma
hadoop+cuda page and refer to it. Yes, this will be very helpful for others too. But this much information is not sufficient; I need more. Best Regards Adarsh Sharma

Re: Hadoop in Real time applications

2011-02-17 Thread Adarsh Sharma
I think Facebook uses Hadoop and Cassandra for their analytics purposes. Thanks, Adarsh Michael Segel wrote: Uhm... 'Realtime' is relative. Facebook uses HBase for e-mail, right? Now isn't that a 'realtime' application? ;-) If you're talking about realtime as in like a controller? Or a syst

Library Issues

2011-02-23 Thread Adarsh Sharma
s as [hadoop@cuda1 ~]$ echo $PATH /usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/hadoop/project/hadoop-0.20.2/jcuda.jar:/usr/local/cuda/lib:/home/hadoop/bin [hadoop@cuda1 ~]$ I don't know how to resolve this error. Please help. Thanks & best Regards, Adarsh Sharma

Library Issue

2011-02-27 Thread Adarsh Sharma
hadoop-0.20.2]$ Actually I need to run map-reduce code, but first I want to see it run as a simple program; then I will go for it. Please guide me how to solve this issue, as the CLASSPATH is the same for all users. Thanks & best Regards, Adarsh Sharma

Re: Library Issue

2011-02-27 Thread Adarsh Sharma
sions issue with a device, not a Hadoop-related issue. Find a way to let users access the required devices (/dev/nvidiactl is what's reported in your ST, for starters). On Mon, Feb 28, 2011 at 12:05 PM, Adarsh Sharma wrote: Greetings to all, Today i came across a strange problem abou

Re: Library Issue

2011-02-28 Thread Adarsh Sharma
Harsh J wrote: You're facing a permissions issue with a device, not a Hadoop-related issue. Find a way to let users access the required devices (/dev/nvidiactl is what's reported in your ST, for starters). On Mon, Feb 28, 2011 at 12:05 PM, Adarsh Sharma wrote: Greetings to all

Setting java.library.path for map-reduce job

2011-02-28 Thread Adarsh Sharma
ount /user/hadoop/gutenberg /user/hadoop/output1 Please guide how to achieve this. Thanks & best Regards, Adarsh Sharma
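One common way to pass java.library.path to the task JVMs in 0.20 is mapred.child.java.opts in mapred-site.xml; a sketch, where the CUDA library path is only an example for this thread's setup:

```xml
<!-- mapred-site.xml: JVM options for each spawned task
     (-Xmx200m is the 0.20 default; the library path is a placeholder) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx200m -Djava.library.path=/usr/local/cuda/lib</value>
</property>
```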

Re: Setting java.library.path for map-reduce job

2011-02-28 Thread Adarsh Sharma
:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:156) Please check my attached mapred-site.xml Thanks & best regards, Adarsh Sharma Kaluskar, Sanjay wrote: You will probably have to use distcache to distribute your jar t

Re: Setting java.library.path for map-reduce job

2011-02-28 Thread Adarsh Sharma
ibrary(System.java:1028) at jcuda.driver.CUDADriver.(CUDADriver.java:909) at jcuda.CUDA.init(CUDA.java:62) at jcuda.CUDA.(CUDA.java:42) Thanks & best Regards, Adarsh Sharma On Mon, Feb 28, 2011 at 5:06 PM, Adarsh Sharma wrote: Thanks Sanjay, it seems I found the root c

Re: Setting java.library.path for map-reduce job

2011-02-28 Thread Adarsh Sharma
me how to make it available in it. Thanks & best Regards, Adarsh Sharma Thanks and Regards, Sonal <https://github.com/sonalgoyal/hiho>Hadoop ETL and Data Integration<https://github.com/sonalgoyal/hiho> Nube Technologies <http://www.nubetech.co> <http://in.linkedin.

Re: Unable to use hadoop cluster on the cloud

2011-03-03 Thread Adarsh Sharma
Hi Praveen, Check through ssh & ping whether your datanodes are communicating with each other or not. Cheers, Adarsh praveen.pe...@nokia.com wrote: Hello all, I installed hadoop-0.20.2 on physical machines and everything works like a charm. Now I installed hadoop using the same hadoop-install gz f

Cuda Program in Hadoop Cluster

2011-03-03 Thread Adarsh Sharma
has done it before and guide me how to do this. I attached the both files. Please find the attachment. Thanks & best Regards, Adarsh Sharma

Re: Unable to use hadoop cluster on the cloud

2011-03-06 Thread Adarsh Sharma
node2. You mention the HDFS commands. Simply check from datanode1 as: ssh datanode2_ip or ping datanode2_ip Best Rgds, Adarsh Praveen -Original Message- From: ext Adarsh Sharma [mailto:adarsh.sha...@orkash.com] Sent: Friday, March 04, 2011 12:12 AM To: common-user@hadoop.apache.org Su

Reason of Formatting Namenode

2011-03-09 Thread Adarsh Sharma
& best Regards, Adarsh Sharma

Re: Cuda Program in Hadoop Cluster

2011-03-09 Thread Adarsh Sharma
ng in Hadoop Cluster that clarifies my basic concepts so that I can program accordingly in future. Looking forward to some more guidance. Thanks once again for your wishes. With best Regards, Adarsh Sharma He Chen wrote: Hi, Adarsh Sharma For C code my friend employs hadoop streaming t

Re: Reason of Formatting Namenode

2011-03-09 Thread Adarsh Sharma
Thanks Harsh, that is why, if we format the namenode again after loading some data, the INCOMPATIBLE NAMESPACE IDs error occurs. Best Regards, Adarsh Sharma Harsh J wrote: Formatting the NameNode initializes the FSNameSystem in the dfs.name.dir directories, to prepare for use. The format co

Running Cuda Program through Hadoop Pipes

2011-03-11 Thread Adarsh Sharma
his would definitely help me. Thanks & best Regards, Adarsh Sharma #include #include #include #include #include "stdint.h" // <--- to prevent uint64_t errors! #include "hadoop/Pipes.hh" #include "hadoop/TemplateFactory.hh" #include "hadoop/Str

Re: Cuda Program in Hadoop Cluster

2011-03-11 Thread Adarsh Sharma
So it means it is impossible to run GPU code (myfile.cu) through Hadoop Pipes. The requirement is to run C++ code that includes some CUDA code (CUDA libraries & __global__ functions) in a Hadoop cluster. Thanks & best Regards, Adarsh Sharma Lance Norskog wrote: One

Not able to Run C++ code in Hadoop Cluster

2011-03-14 Thread Adarsh Sharma
org.apache.hadoop.mapred.Child.main(Child.java:170) I attached the code. Please find the attachment. Thanks & best Regards, Adarsh Sharma #include #include #include #include #include #include #include "stdint.h" // <--- to prevent uint64_t errors! #include "hadoop/Pipe

Re: Not able to Run C++ code in Hadoop Cluster

2011-03-15 Thread Adarsh Sharma
Is it possible to run C++/GPU code in the Map-Reduce framework through Hadoop Streaming? If there is a simple example, please let me know. Thanks & best Regards, Adarsh Sharma He Chen wrote: Agree with Keith Wiley, we use streaming also. On Mon, Mar 14, 2011 at 11:40 AM, Keith Wiley w

Job Configuration in Hadoop Pipes

2011-03-17 Thread Adarsh Sharma
and other parameters during run-time. Please guide how to do this. Thanks & best regards, Adarsh Sharma

Change Map-Red Parameters

2011-03-17 Thread Adarsh Sharma
. What I want is to run a Map-Reduce program that changes the mapred-site.xml parameters so that the changes get reflected in the next jobs. Please guide me if there is a way to do this. Thanks & best Regards Adarsh Sharma

Re: Change Map-Red Parameters

2011-03-18 Thread Adarsh Sharma
single client. Btw, if you are looking at spawning extra maps, you cannot control it; the mapreduce subsystem spawns them based on your file input format. HTH On 3/18/11 10:47 AM, "Adarsh Sharma" wrote: Thanks Sreekanth, your reply leads me to do the following steps: 1. Make

Read/Write xml file in Hadoop

2011-03-18 Thread Adarsh Sharma
Dear all, I am researching how to read/write an XML file through a C++ program in Hadoop Pipes. I need to achieve this as it is a requirement. Please guide me if there is a trick to do this. Thanks & best Regards, Adarsh

Hadoop Pipes Error

2011-03-29 Thread Adarsh Sharma
am in Hadoop Cluster but I don't know why this program fails with a Broken Pipe error. Thanks & best regards Adarsh Sharma CC = g++ HADOOP_INSTALL = /home/hadoop/project/hadoop-0.20.2 PLATFORM = Linux-amd64-64 CPPFLAGS = -m64 -I/home/hadoop/project/hadoop-0.20.2/c++/Linux-amd64-64/inclu

How to apply Patch

2011-03-30 Thread Adarsh Sharma
ards, Adarsh Sharma

Re: How to apply Patch

2011-03-30 Thread Adarsh Sharma
Sorry, just check the attachment now. Adarsh Sharma wrote: Dear all, Can someone please tell me how to apply a patch to the hadoop-0.20.2 package? I attached the patch. Please find the attachment. I just followed the below steps for Hadoop: 1. Download hadoop-0.20.2.tar.gz 2. Extract the file. 3. Set

Re: How to apply Patch

2011-03-30 Thread Adarsh Sharma
You can use that with a suitable -p(num) argument (man patch, for more info). On Thu, Mar 31, 2011 at 9:41 AM, Adarsh Sharma wrote: Dear all, Can someone please tell me how to apply a patch to the hadoop-0.20.2 package? I attached the patch. Please find the attachment. I just followed the below

Re: Hadoop Pipes Error

2011-03-30 Thread Adarsh Sharma
Any update on the below error? Please guide. Thanks & best Regards, Adarsh Sharma Adarsh Sharma wrote: Dear all, Today I faced a problem while running a map-reduce job in C++. I am not able to find the reason for the below error: 11/03/30 12:09:02 INFO mapred.JobCl

Hadoop Pipe Error

2011-03-30 Thread Adarsh Sharma
am in Hadoop Cluster but I don't know why this program fails with a Broken Pipe error. Thanks & best regards Adarsh Sharma CC = g++ HADOOP_INSTALL = /home/hadoop/project/hadoop-0.20.2 PLATFORM = Linux-amd64-64 CP

Re: How to apply Patch

2011-03-30 Thread Adarsh Sharma
Thanks a lot for such a deep explanation. I have done it now, but it doesn't help me with my original problem, for which I'm doing this. Please, if you have some idea, comment on it. I attached the problem. Thanks & best Regards, Adarsh Sharma Matthew Foley wrote: Hi Adar

[Fwd: Hadoop Pipe Error]

2011-03-30 Thread Adarsh Sharma
Sorry, as usual please find the attachment here. Thanks & best Regards, Adarsh Sharma --- Begin Message --- Dear all, Today I faced a problem while running a map-reduce job in C++. I am not able to find the reason for the below error: 11/03/30 12:09:02 INFO mapred.JobCl
