Where is the old Hadoop documentation for v0.22.0 and below?

2014-07-28 Thread Jane Wayne
Where can I get the old Hadoop documentation (e.g. cluster setup, XML configuration params) for Hadoop v0.22.0 and below? I downloaded the source and binary files but could not find the documentation as part of the archive file. On the home page at http://hadoop.apache.org/, I only see…

Re: Where is the old Hadoop documentation for v0.22.0 and below?

2014-07-28 Thread Konstantin Boudnik
I think your best bet is to check out the release tag for the 0.22 release and look for the docs there. Perhaps you might want to run 'ant docs', or whatever the target used to be back then. Cos. On Mon, Jul 28, 2014 at 04:06PM, Jane Wayne wrote: Where can I get the old Hadoop…
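For reference, the checkout-and-build steps might look like the sketch below; the SVN tag path and the 'ant docs' target are assumptions about the old layout, so check the tree's build.xml for the actual documentation target.

    # Hypothetical sketch: fetch the 0.22.0 release tag and build its docs.
    # The tag URL and the ant target are assumptions, not confirmed in the thread.
    svn checkout http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.22.0/ hadoop-0.22.0
    cd hadoop-0.22.0
    ant docs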

Image processing with Hadoop

2014-07-28 Thread Chhaya Vishwakarma
Hi, how can I store images in Hadoop/Hive and perform some processing on them? Is there any built-in library available to do so? How does Hadoop store images in HDFS? Reference: http://blog.cloudera.com/blog/2012/10/sneak-peek-into-skybox-imagings-cloudera-powered-satellite-system/ Regards, Chhaya

Re: Cannot compile a basic PutMerge.java program

2014-07-28 Thread Harsh J
Please run it in the same style. The 'java' binary accepts a -cp param too: java -cp $($HADOOP_HOME/bin/hadoop classpath):. PutMerge. On Mon, Jul 28, 2014 at 11:21 AM, R J rj201...@yahoo.com wrote: Thanks a lot! I could compile with the added classpath: $ javac -cp $($HADOOP_HOME/bin/hadoop…
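Putting the thread together, the full compile-and-run sequence would look like this (assuming PutMerge.java sits in the current directory):

    # Compile against the full Hadoop classpath, then run; the trailing ':.'
    # adds the current directory so 'java' can find PutMerge.class.
    javac -cp $($HADOOP_HOME/bin/hadoop classpath) PutMerge.java
    java -cp $($HADOOP_HOME/bin/hadoop classpath):. PutMerge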

Re: Question about sqoop command error

2014-07-28 Thread Harsh J
This appears to be a JDBC-side issue (not Sqoop's issue). You likely have a mix of different Oracle JDBC jars (non-DMS and DMS ones?) under $SQOOP_HOME/lib or other locations, and this is causing a class-loading conflict. On Mon, Jul 28, 2014 at 11:09 AM, R J rj201...@yahoo.com wrote: Thanks a lot…
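A quick way to check for the suspected conflict might be the listing below; the jar name patterns and the second location are guesses (Oracle's DMS builds are typically named like ojdbc6dms.jar):

    # List every Oracle JDBC jar Sqoop could pick up; more than one ojdbc
    # variant side by side (e.g. ojdbc6.jar and ojdbc6dms.jar) is the conflict.
    ls -l $SQOOP_HOME/lib/ojdbc*.jar
    ls -l $HADOOP_HOME/lib/ojdbc*.jar 2>/dev/null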

Re: Cannot compile a basic PutMerge.java program

2014-07-28 Thread Chris MacKenzie
Hi, I can probably help you out with that. I don't want to sound patronising though. What is your IDE, and have you included the Hadoop libraries in your jar? Regards, Chris MacKenzie telephone: 0131 332 6967 email: stu...@chrismackenziephotography.co.uk corporate:…

Performance on single-node and multi-node Hadoop

2014-07-28 Thread sindhu hosamane
Hello, I set up 2 datanodes on a single machine (an Ubuntu machine) as described in the thread http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201009.mbox/%3ca3ef3f6af24e204b812d1d24ccc8d71a03688...@mse16be2.mse16.exchange.ms%3E The Ubuntu machine has 2 processors and 8 cores.…
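The approach in the linked thread amounts to giving the second DataNode its own configuration directory with distinct ports and data directories; a rough sketch follows, in which the conf path is a placeholder and the script location (bin/ vs sbin/) depends on the release:

    # Start a second DataNode on the same host from its own conf dir; that
    # dir must override the data directories and the DataNode port settings.
    $HADOOP_HOME/bin/hadoop-daemon.sh --config /opt/hadoop/conf-dn2 start datanode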

Re: How to set up the conf folder

2014-07-28 Thread Ravindra
Hi, could you try putting this in .bash_profile: export HADOOP_CONF_DIR=/scratch/extra/cm469/hadoop-2.4.1/etc/hadoop/ Regards, Ravindra. On Wed, Jul 23, 2014 at 3:17 PM, Chris MacKenzie stu...@chrismackenziephotography.co.uk wrote: Hi, can anyone shed some light on this for me? Every time…

Re: How to set up the conf folder

2014-07-28 Thread Chris MacKenzie
Hi Ravindra, Thanks for replying, it’s much appreciated. That’s always been the case with my setup: export HADOOP_PREFIX=/scratch/extra/cm469/hadoop-2.4.1 export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop I think my issue is that I have not set yarn-env.sh up correctly. TBH I didn’t know it…
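For completeness, the relevant lines from this thread, plus the one setting that yarn-env.sh usually needs; the JAVA_HOME path below is a placeholder, not from the thread:

    # ~/.bash_profile (paths as given in this thread)
    export HADOOP_PREFIX=/scratch/extra/cm469/hadoop-2.4.1
    export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

    # $HADOOP_CONF_DIR/yarn-env.sh typically only needs an explicit JDK path
    export JAVA_HOME=/usr/lib/jvm/jdk1.7.0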

One datanode is down, then write/read starts failing

2014-07-28 Thread Satyam Singh
Hello, I have a Hadoop cluster setup of one namenode and two datanodes, and I continuously write/read/delete through HDFS on the namenode through a Hadoop client. When I kill one of the datanodes, one is still working, but writing is failing for all write requests. I want to…

Re: One datanode is down, then write/read starts failing

2014-07-28 Thread Wellington Chevreuil
Can you make sure you still have enough HDFS space once you kill this DN? If not, HDFS will automatically enter safemode if it detects there's no HDFS space available. The error messages in the logs should have some hints on this. Cheers. On 28 Jul 2014, at 16:56, Satyam Singh…
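Two quick checks along those lines, using standard HDFS admin commands:

    hdfs dfsadmin -safemode get   # reports whether the NameNode is in safe mode
    hdfs dfsadmin -report         # remaining capacity plus live/dead DataNodes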

Re: One datanode is down, then write/read starts failing

2014-07-28 Thread Satyam Singh
Yes, there is a lot of space available at that instant. I am not sure, but I have read somewhere that we must have live datanodes >= the replication factor given at the namenode at any point in time. If the number of live datanodes drops below the replication factor, then this write/read failure occurs. In my case I…

Re: Question about sqoop command error

2014-07-28 Thread R J
Thanks a lot. I removed the other JDBC jar files except for ojdbc6.jar under /home/catfish/scoop/sqoop-1.4.4.bin__hadoop-0.20/lib/. Now I ran the sqoop command again: $ /home/catfish/scoop/sqoop-1.4.4.bin__hadoop-0.20/bin/sqoop import --driver oracle.jdbc.driver.OracleDriver --connect…
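The truncated command was presumably shaped like the sketch below; the connect string, credentials, and table name are placeholders, not values from the thread:

    # Hedged sketch of an Oracle import; everything after --driver is a placeholder.
    /home/catfish/scoop/sqoop-1.4.4.bin__hadoop-0.20/bin/sqoop import \
      --driver oracle.jdbc.driver.OracleDriver \
      --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
      --username SCOTT -P \
      --table MYTABLE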

How do you create predictive models in Hadoop?

2014-07-28 Thread Adaryl Bob Wakefield, MBA
I’ve been working with predictive models for three years now. My models have been single-threaded and written against data in a non-distributed environment. I’m not certain how to translate my skills to Hadoop. Mahout, yes, but I don’t know Java, as I tend to work with Python (as do a lot of my…

Re: Cannot compile a basic PutMerge.java program

2014-07-28 Thread R J
Thank you. I compiled with the command: $ CLASSPATH=$(ls $HIVE_HOME/lib/hive-serde-*.jar):$(ls $HIVE_HOME/lib/hive-exec-*.jar):$(ls $HADOOP_HOME/hadoop-core-*.jar) $ javac -cp $CLASSPATH PutMerge.java $ ls PutMerge.class PutMerge.class Now I tried: $ java -cp $($HADOOP_HOME/bin/hadoop…
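Spelled out as a script, the recipe from this thread looks like the following; the exact jar names depend on the installed Hive and Hadoop versions:

    # Build a compile-time classpath from the Hive serde/exec jars and the
    # Hadoop core jar, then compile and run PutMerge.
    CLASSPATH=$(ls $HIVE_HOME/lib/hive-serde-*.jar):$(ls $HIVE_HOME/lib/hive-exec-*.jar):$(ls $HADOOP_HOME/hadoop-core-*.jar)
    javac -cp $CLASSPATH PutMerge.java
    java -cp $($HADOOP_HOME/bin/hadoop classpath):. PutMerge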

Re: One datanode is down, then write/read starts failing

2014-07-28 Thread Satyam Singh
@vikas I initially set 2, but after that I took one DN down. So you are saying that from the start I should set the replication factor to 1, even though I had 2 DNs active initially? If so, what is the reason? On 07/28/2014 10:02 PM, Vikas Srivastava wrote: What replication have you set for…

Re: One datanode is down, then write/read starts failing

2014-07-28 Thread Shahab Yunus
The reason is that when you write something to HDFS, it guarantees that it will be written to the specified number of replicas. So if your replication factor is 2 and one of your nodes (out of 2) is down, then it cannot guarantee the 'write'. The way to handle this is to have a cluster of more…

Re: One datanode is down, then write/read starts failing

2014-07-28 Thread hadoop hive
If you have 2 DNs live initially and replication set to 2, that is perfectly fine, but you killed one DN... there is no place to put the second replica of new files (or of old files), which causes the issue in writing blocks. On Jul 28, 2014 10:15 PM, Satyam Singh satyam.si...@ericsson.com wrote: @vikas I…
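A hedged sketch of the usual workaround while only one DN survives: lower the replication factor to match. The path and factor below are examples, not values from the thread:

    # Re-replicate existing files at factor 1 and wait for completion; new
    # files follow dfs.replication from hdfs-site.xml.
    hdfs dfs -setrep -R -w 1 /
    hdfs getconf -confKey dfs.replication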

Re: Hadoop 2.4.0 How to change Configured Capacity

2014-07-28 Thread hadoop hive
You need to add each disk inside the dfs.datanode.data.dir parameter. On Jul 28, 2014 5:14 AM, arthur.hk.c...@gmail.com wrote: Hi, I have installed Hadoop 2.4.0 with 5 nodes; each node physically has a 4T hard disk. When checking the configured capacity, I found it is about…
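After listing every disk (comma-separated) under dfs.datanode.data.dir in hdfs-site.xml and restarting the DataNodes, the change can be verified as below; the property name assumes Hadoop 2.x:

    # Confirm which directories the DataNode is configured to use, then
    # re-check the reported Configured Capacity.
    hdfs getconf -confKey dfs.datanode.data.dir
    hdfs dfsadmin -report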