Re: Max Parallel task executors

2015-11-06 Thread Chris Mawata
Also check that you have more than 31 blocks to process. On Nov 6, 2015 6:54 AM, "sandeep das" wrote: > Hi Varun, > > I tried to increase this parameter but it did not increase the number of > parallel tasks, but if it is decreased then YARN reduces the number of parallel > tasks. I'm a bit puzzled why its

Re: processing data evenly

2015-09-03 Thread Chris Mawata
Static only makes sense in the same JVM and classloader. In a distributed setting it is not useful On Sep 2, 2015 5:08 PM, "Arni Sumarlidason" wrote: > I'm having problems getting my data reduced evenly across nodes. > > -> map a 200,000 line single text file and output <0L,line> > -> custom part

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

2015-04-30 Thread Chris Mawata
You are running it with the Java command rather than hadoop jar ... Do you have a mechanism inside your Java code to find Hadoop, like creating your own Configuration? On Apr 30, 2015 1:54 AM, "Mahmood Naderan" wrote: > Hi, > when I run the following command, I get an ipc.client timeout error. > > [m

Re: Beginner Hadoop Code

2015-04-28 Thread Chris Mawata
> > > > On Tuesday, April 28, 2015 4:53 PM, Chris Mawata > wrote: > > > Looks like the framework is having difficulty instantiating your Mapper. > The problem is probably because you made it an instance inner class. Make > it a static nested class > public stati

Re: Beginner Hadoop Code

2015-04-28 Thread Chris Mawata
Looks like the framework is having difficulty instantiating your Mapper. The problem is probably because you made it an instance inner class. Make it a static nested class public static class MaxTemperatureMapper ... and the same for your reducer On Tue, Apr 28, 2015 at 4:27 AM, Anand Murali wrot
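The failure mode is visible without Hadoop at all: the framework creates your Mapper reflectively through a no-arg constructor, which an instance inner class does not have. A minimal single-JVM sketch (class names are illustrative, not from the thread):

```java
public class NestedClassDemo {
    class InnerMapper {}            // instance inner class: its constructor needs the outer instance
    static class StaticMapper {}    // static nested class: a genuine no-arg constructor exists

    // What MapReduce effectively does with your mapper class at task start.
    static boolean frameworkCanInstantiate(Class<?> c) {
        try {
            c.getDeclaredConstructor().newInstance();
            return true;
        } catch (ReflectiveOperationException e) {
            return false;  // NoSuchMethodException for the instance inner class
        }
    }

    public static void main(String[] args) {
        System.out.println(frameworkCanInstantiate(StaticMapper.class)); // true
        System.out.println(frameworkCanInstantiate(InnerMapper.class));  // false
    }
}
```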

Re: FW: A simple insert stuck in hive

2015-04-07 Thread Chris Mawata
If this is a paste you have a typo where you say recduse instead of reduce On Apr 7, 2015 6:47 PM, "Mich Talebzadeh" wrote: > Hi. > > > > I sent this to hive user group but it seems that it is more relevant to > map reduce operation. It is inserting a row into table via hive. So reduce > should n

RE: Does Hadoop 2.6.0 have job level blacklisting?

2015-03-30 Thread Chris Mawata
https://hadoop.apache.org/docs/r2.6.0/api/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.html#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest) > > > https://hadoop.apache.org/docs/r2.6.0/api/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.html > > > > Thanks & Re

Does Hadoop 2.6.0 have job level blacklisting?

2015-03-28 Thread Chris Mawata
. Is there still job level blacklisting as there was in earlier versions? Chris Mawata

Re: Split files into 80% and 20% for building model and prediction

2014-12-12 Thread Chris Mawata
How about doing something along the lines of bucketing: pick a field that is unique for each record and, if the hash of the field mod 10 is 7 or less, it goes in the 80% bin; otherwise into the 20% bin. Cheers Chris On Dec 12, 2014 1:32 AM, "unmesha sreeveni" wrote: > I am trying to divide my HDFS file into
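The bucketing idea can be sketched in plain Java; the helper and record names here are illustrative, and in a real job the test would sit in the mapper. `Math.floorMod` guards against negative hash codes:

```java
import java.util.ArrayList;
import java.util.List;

public class HashBucketSplit {
    // Route a record to the ~80% or ~20% bin by hashing a unique key field.
    static boolean isTrainingRecord(String uniqueKey) {
        int bucket = Math.floorMod(uniqueKey.hashCode(), 10);
        return bucket <= 7;  // buckets 0-7 (~80%) -> training, 8-9 (~20%) -> test
    }

    public static void main(String[] args) {
        List<String> train = new ArrayList<>(), test = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            String key = "record-" + i;  // stands in for the unique field
            (isTrainingRecord(key) ? train : test).add(key);
        }
        // Roughly 80/20; exact proportions depend on the hash distribution.
        System.out.println(train.size() + " / " + test.size());
    }
}
```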

Re: When schedulers consider x% of resources what do they mean?

2014-12-05 Thread Chris Mawata
resources for a queue means that 70% of the total memory > set for Hadoop in the cluster are available for all applications in that > queue. > > Heap sizes are part of the memory requirements for each container. > > HTH > +Vinod > > On Dec 5, 2014, at 5:41 AM, Chris Mawa

When schedulers consider x% of resources what do they mean?

2014-12-05 Thread Chris Mawata
or should it not be number of containers but sum of heap sizes? Cheers Chris Mawata

Re: planning a cluster

2014-07-22 Thread Chris Mawata
If you plan to use it to learn how to program for Hadoop then pseudo distributed (cluster of 1) will do. If you plan to use it to learn how to administer a cluster then 4 or 5 nodes will allow experiments with commissioning and decommissioning nodes, HA, Journaling, etc. If it is a proof of concept

Re: Is it a good idea to delete / move the default configuration xml file ?

2014-07-21 Thread Chris Mawata
ris Nauroth > Hortonworks > http://hortonworks.com/ > > > > On Mon, Jul 21, 2014 at 11:12 AM, Chris Mawata > wrote: > >> Aren't the *-default.xml files supposed to be inside the jars rather than >> loose files? >> Cheers >> Chris Mawata >> On Jul

Re: Is it a good idea to delete / move the default configuration xml file ?

2014-07-21 Thread Chris Mawata
Aren't the *-default.xml files supposed to be inside the jars rather than loose files? Cheers Chris Mawata On Jul 21, 2014 12:59 PM, "Chris Nauroth" wrote: > I recommend against deleting or moving *-default.xml, because these files > may be supplying reasonable default val

Re: Re: HDFS input/output error - fuse mount

2014-07-18 Thread Chris Mawata
usr/java/jdk-1.7* instead? I appreciate the help! > > > On Thu, Jul 17, 2014 at 11:11 PM, Chris Mawata > wrote: > >> Yet another place to check -- in the hadoop-env.sh file there is also a >> JAVA_HOME setting. >> Chris >> On Jul 17, 2014 9:46 PM, "and

Re: Re: HDFS input/output error - fuse mount

2014-07-17 Thread Chris Mawata
om:* andrew touchet >> *Date:* 2014-07-18 09:06 >> *To:* user >> *Subject:* Re: HDFS input/output error - fuse mount >> Hi Chris, >> >> I tried to mount /hdfs with java versions below but there was no change >> in output. >> jre-7u21 >>

Re: Re: HDFS input/output error - fuse mount

2014-07-17 Thread Chris Mawata
09:06 >> *To:* user >> *Subject:* Re: HDFS input/output error - fuse mount >> Hi Chris, >> >> I tried to mount /hdfs with java versions below but there was no change >> in output. >> jre-7u21 >> jdk-7u21 >> jdk-7u55 >> jdk1.6.0_31 >> j

Re: HDFS input/output error - fuse mount

2014-07-17 Thread Chris Mawata
Version 51 is Java 7. Chris On Jul 17, 2014 7:50 PM, "andrew touchet" wrote: > Hello, > > Hadoop package installed: > hadoop-0.20-0.20.2+737-33.osg.el5.noarch > > Operating System: > CentOS release 5.8 (Final) > > I am mounting HDFS from my namenode to another node with fuse. After > mounting to

Re: Configuration set up questions - Container killed on request. Exit code is 143

2014-07-17 Thread Chris Mawata
rds, > > Chris MacKenzie > telephone: 0131 332 6967 > email: stu...@chrismackenziephotography.co.uk > corporate: www.chrismackenziephotography.co.uk > <http://www.chrismackenziephotography.co.uk/> > <http://plus.google.com/+ChrismackenziephotographyCoUk/posts> > &

Re: Configuration set up questions - Container killed on request. Exit code is 143

2014-07-17 Thread Chris Mawata
Hi Chris MacKenzie, I have a feeling (I am not familiar with the kind of work you are doing) that your application is memory intensive. 8 cores per node and only 12GB is tight. Try bumping up the yarn.nodemanager.vmem-pmem-ratio Chris Mawata On Wed, Jul 16, 2014 at 11:37 PM, Chris
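For reference, the property mentioned goes in yarn-site.xml; a hedged sketch (the value shown is illustrative, the default is 2.1):

```xml
<!-- yarn-site.xml: allow containers more virtual memory per unit of physical
     memory before the NodeManager kills them (the exit code 143 symptom) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4.0</value>
</property>
```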

Re: Can someone shed some light on this ? - java.io.IOException: Spill failed

2014-07-16 Thread Chris Mawata
I would post the configuration files -- easier for someone to spot something wrong than to imagine what configuration would get you to that stacktrace. The part Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1405523201400_0006

Re: Copy hdfs block from one data node to another

2014-07-09 Thread Chris Mawata
ionality for my research, basically for fault > tolerance. I can calculate some failure probability for some data nodes > after certain unit of time. So I need to copy all the blocks reside on > these nodes to another nodes. > > Thanks > Yehia > > > On 7 July 2014 20:45, Chri

Re: Copy hdfs block from one data node to another

2014-07-07 Thread Chris Mawata
Can you outline why one would want to do that? The blocks are disposable so it is strange to manipulate them directly. On Jul 7, 2014 8:16 PM, "Yehia Elshater" wrote: > Hi All, > > How can copy a certain hdfs block (given the file name, start and end > bytes) from one node to another node ? > > T

Re: Need to evaluate the price of a Hadoop cluster

2014-07-03 Thread Chris Mawata
Some comments: three drives of 1 TB each will be better than one 3 TB drive. On a small cluster you cannot afford to reserve a whole machine for each master daemon. The NameNode and JobTracker will have to cohabit with DataNodes and TaskTrackers. As for pricing if it is for an institution yo

Re: Working of combiner in hadoop

2014-07-03 Thread Chris Mawata
The key/value pairs are processed by the mapper independently of each other. The combiner logic deals with the outputs from multiple key/value pairs, so that logic cannot be in the map method. On Jul 4, 2014 1:29 AM, "Chhaya Vishwakarma" < chhaya.vishwaka...@lntinfotech.com> wrote: > Hi, > >

Re: Bugs while installing apache hadoop 2.4.0

2014-07-02 Thread Chris Mawata
What is the hostname of your NameNode? How are you doing name resolution? On Jul 3, 2014 2:05 AM, "Ritesh Kumar Singh" wrote: > When I try to start dfs using start-dfs.sh I get this error message: > 14/07/03 11:03:21 WARN util.NativeCodeLoader: Unable to load native-hadoop > library for your plat

Re: hadoop directory can't add and remove

2014-07-02 Thread Chris Mawata
Also, investigate why it is happening. Usually it is a block replication issue like a replication factor greater than the number of DataNodes. Chris On Jul 2, 2014 9:32 AM, "hadoop hive" wrote: > Use Hadoop dfsadmin -safemode leave > > Then you can delete > On Jul 2, 2014

Re: hadoop directory can't add and remove

2014-07-02 Thread Chris Mawata
The NameNode is in safe mode so it is read only. On Jul 2, 2014 2:28 AM, "EdwardKing" wrote: > I want to remove hadoop directory,so I use hadoop fs -rmr,but it can't > remove,why? > > [hdfs@localhost hadoop-2.2.0]$ hadoop fs -ls > Found 2 items > drwxr-xr-x - hdfs supergroup 0 2014-07

Re: Partitioning and setup errors

2014-06-29 Thread Chris Mawata
It is the part about implementing the tool interface that is the issue. Does your driver class implement Tool? Chris On Jun 29, 2014 8:22 AM, "Chris MacKenzie" < stu...@chrismackenziephotography.co.uk> wrote: > HI Vinod, > > Thanks for your support. I’m packaging my application in Eclipse (Kepler)

Re: Partitioning and setup errors

2014-06-27 Thread Chris Mawata
Probably my fault. I was looking for the extends Configured implements Tool part. I will double check when I get home rather than send you on a wild goose chase. Cheers Chris On Jun 27, 2014 8:16 AM, "Chris MacKenzie" < stu...@chrismackenziephotography.co.uk> wrote: > Hi, > > I realise my previo

Re: group similar items using pairwise similar items

2014-06-27 Thread Chris Mawata
Since you say mutually similar, aren't you really looking for maximal cliques rather than connected components? Hi, I have a set of items and pairwise similar items. I want to group together items that are mutually similar. For ex : if *A B C D E F G* are the items I have the following pairwi

Re: Partitioning and setup errors

2014-06-27 Thread Chris Mawata
The new Configuration() is suspicious. Are you setting configuration information manually? Chris On Jun 27, 2014 5:16 AM, "Chris MacKenzie" < stu...@chrismackenziephotography.co.uk> wrote: > Hi, > > I realise my previous question may have been a bit naïve and I also > realise I am asking an awful

Re: grouping similar items toegther

2014-06-20 Thread Chris Mawata
1. We can't see your reduce algorithm so we can't tell you why the 'group' you think should work is not working. 2. The relation you have is not transitive so you will not have equivalence classes. Chris On Jun 20, 2014 2:51 PM, "parnab kumar" wrote: > Hi, > > I have a set of hashes. Each Has
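Point 2 can be illustrated with a tiny union-find sketch: grouping by a non-transitive pairwise relation gives connected components, not equivalence classes or cliques (plain Java, item names illustrative):

```java
public class ConnectedComponents {
    // Union-find with path compression. A~B and B~C puts A and C in one
    // group even if A and C are not themselves similar -- that is connected
    // components, not cliques, because the relation is not transitive.
    static int[] parent;

    static int find(int x) {
        return parent[x] == x ? x : (parent[x] = find(parent[x]));
    }

    static void union(int a, int b) {
        parent[find(a)] = find(b);
    }

    public static void main(String[] args) {
        parent = new int[] {0, 1, 2, 3};        // items: 0=A, 1=B, 2=C, 3=D
        union(0, 1);                            // A ~ B
        union(1, 2);                            // B ~ C
        System.out.println(find(0) == find(2)); // true: A,C grouped transitively
        System.out.println(find(0) == find(3)); // false: D is its own component
    }
}
```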

Re: IOException when using "dfs -put"

2014-04-04 Thread Chris Mawata
How many machines do you have? This could be because you re-formatted the NameNode and the versions are not matching. Your DataNode would then be rejected by the NameNode. Chris On Apr 4, 2014 2:58 PM, "Jitendra Yadav" wrote: > Use jps and check what all processes are running, is this a sing

Re: HA NN Failover question

2014-03-14 Thread Chris Mawata
Could you have also prevented the standby from communicating with Zookeeper? Chris On Mar 14, 2014 8:22 PM, "dlmarion" wrote: > I was doing some testing with HA NN today. I set up two NN with active > failover (ZKFC) using sshfence. I tested that its working on both NN by > doing 'kill -9 ' on th

Re: Wrong FS hdfs:/localhost:9000 ;expected file///

2014-02-25 Thread Chris Mawata
The hadoop command gives you a configuration object with the configurations that are in your XML files. In your Java code you are probably getting your FileSystem object from a blank Configuration when you don't use the hadoop command. Chris On Feb 24, 2014 7:37 AM, "Chirag Dewan" wrote: > Hi A

Re: Some extinted commands

2014-02-23 Thread Chris Mawata
Some of the shell scripts are now in the sbin directory. The documentation at the website that matches your version will save you a lot of grief. Chris On 2/23/2014 11:38 AM, Mahmood Naderan wrote: Hello We had an old document (I think it was hadoop 0.2) which stated these steps To start Ha

Re: Starting... -help needed

2014-01-27 Thread Chris Mawata
less the same config files that are in the v. 1.2 conf dir. On Mon, 2014-01-27 at 12:19 -0500, Chris Mawata wrote: Check if you have [hadoop]/etc/hadoop as the configuration directory is different in version 2.x On Jan 27, 2014 10:37 AM, "Thomas Bentsen" wrote: Hell

Re: Starting... -help needed

2014-01-27 Thread Chris Mawata
Check if you have [hadoop]/etc/hadoop as the configuration directory is different in version 2.x On Jan 27, 2014 10:37 AM, "Thomas Bentsen" wrote: > Hello everyone > > I have recently decided to try out the Hadoop complex. > > According to the getting started I am supposed to change the config in

Re: HIVE+MAPREDUCE

2014-01-21 Thread Chris Mawata
If you put the sentence "Need to load the data into hive table using mapreduce, using java" into your google search box you will get tons of information. On 1/21/2014 3:21 AM, Ranjini Rathinam wrote: Need to load the data into hive table using mapreduce, using java

Re: what all can be done using MR

2014-01-11 Thread Chris Mawata
write() of mapper? > > > On Wed, Jan 8, 2014 at 8:18 PM, Chris Mawata wrote: > >> Yes. >> Check out, for example, >> http://packtlib.packtpub.com/library/hadoop-mapreduce-cookbook/ch06lvl1sec66# >> >> >> >> On 1/8/2014 2:41 AM, unmesha sreeve

Re: Distributing the code to multiple nodes

2014-01-09 Thread Chris Mawata
copied the data. After copying the data >> there has been no updates on the log files. >> >> >> On Thu, Jan 9, 2014 at 5:08 PM, Chris Mawata wrote: >> >>> Do the logs on the three nodes contain anything interesting? >>> Chris >>> On Jan

Re: Distributing the code to multiple nodes

2014-01-09 Thread Chris Mawata
> > On Thu, Jan 9, 2014 at 2:11 PM, Ashish Jain wrote: > >> Hello Chris, >> >> I have now a cluster with 3 nodes and replication factor being 2. When I >> distribute a file I could see that there are replica of data available in >> other nodes. However when I r

Re: /home/r9r/hadoop-2.2.0/bin/hadoop: line 133: /usr/java/default/bin/java: No such file or directory

2014-01-08 Thread Chris Mawata
What is on the system path? (What do you get at the command console when you type echo $PATH?) Perhaps you have /usr/java/default/bin in there. On 1/8/2014 3:12 PM, Allen, Ronald L. wrote: Hello again, I'm trying to install Hadoop 2.2.0 on Redhat 2.6.32-358.23.2.el6.x86_64. I have untar-ed hadoop-2.2.0.tar

Re: what all can be done using MR

2014-01-08 Thread Chris Mawata
Yes. Check out, for example, http://packtlib.packtpub.com/library/hadoop-mapreduce-cookbook/ch06lvl1sec66# On 1/8/2014 2:41 AM, unmesha sreeveni wrote: Can we do aggregation with in Hadoop MR like find min,max,sum,avg of a column in a csv file. -- /Thanks & Regards/ / / Unmesha Sreeveni U.B/
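What such a job computes can be sketched in a single JVM (plain Java, names illustrative; in MapReduce the mapper emits the column value and a reducer or combiner folds these same aggregates):

```java
import java.util.Arrays;
import java.util.List;

public class CsvAggregate {
    // Min, max, sum, avg of one numeric CSV column, computed in one pass --
    // the same fold a reducer would perform over the mapper's emitted values.
    static double[] aggregate(List<String> csvLines, int column) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
        int n = 0;
        for (String line : csvLines) {
            double v = Double.parseDouble(line.split(",")[column]);
            min = Math.min(min, v);
            max = Math.max(max, v);
            sum += v;
            n++;
        }
        return new double[] {min, max, sum, sum / n};
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("a,3.0", "b,1.0", "c,2.0");
        System.out.println(Arrays.toString(aggregate(lines, 1))); // [1.0, 3.0, 6.0, 2.0]
    }
}
```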

Re: Distributing the code to multiple nodes

2014-01-08 Thread Chris Mawata
2 nodes and replication factor of 2 results in a replica of each block present on each node. This would allow the possibility that a single node would do the work and yet be data local. It will probably happen if that single node has the needed capacity. More nodes than the replication factor are

Re: issue about how to assiging map output to reducer?

2014-01-08 Thread Chris Mawata
Depends on the distribution of the keys and how the partitioner is assigning keys to reducers. (Remember that pairs with the same key have to go to the same reducer). Chris On Jan 8, 2014 2:33 AM, "ch huang" wrote: > hi,maillist: > i look the containers log from " hadoop fs -cat > /v
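The default partitioning rule Chris refers to can be sketched in plain Java; this mirrors the formula of Hadoop's default HashPartitioner, but as a single-JVM illustration, not the Hadoop class itself:

```java
public class PartitionDemo {
    // Same key always lands on the same reducer; skew in the keys therefore
    // means skew in reducer load. The mask clears the sign bit so a negative
    // hashCode still yields a valid partition index.
    static int partitionFor(String key, int numReducers) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        int r = 4;
        // Stable: the same key maps to the same partition every time.
        System.out.println(partitionFor("apple", r) == partitionFor("apple", r)); // true
    }
}
```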

Re: Newbie: How to set up HDFS file system

2014-01-07 Thread Chris Mawata
Read as much as you can about Hadoop. Installing Hadoop will install HDFS. On Jan 7, 2014 2:03 AM, "Ashish Jain" wrote: > Hello, > > Can someone provide some pointers on how to set up a HDFS file system? I > tried searching in the document but could not find anything with respect to > this. > > -

Re: Write an object into hadoop hdfs issue

2013-12-30 Thread Chris Mawata
Not unique to hdfs. The same thing would happen on your local file system or anywhere and any way you store the state of the object outside of the JVM. That is why singletons should not be serializable. Chris On Dec 30, 2013 5:46 AM, "unmesha sreeveni" wrote: > I am trying to write an object int
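The point generalizes to plain Java serialization: round-trip a "singleton" through a byte stream and you get a second, distinct instance, so the JVM-wide uniqueness is lost. A self-contained sketch (class names illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SingletonSerializationDemo {
    static class Singleton implements Serializable {
        static final Singleton INSTANCE = new Singleton();
        private Singleton() {}
    }

    // Serialize an object to bytes and read it back, as HDFS (or any storage
    // outside the JVM) would force you to do.
    static Object roundTrip(Object o) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new ObjectOutputStream(buf).writeObject(o);
        return new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        Object copy = roundTrip(Singleton.INSTANCE);
        System.out.println(copy == Singleton.INSTANCE); // false: no longer unique
    }
}
```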

Re: Object in mapreduce

2013-12-28 Thread Chris Mawata
Hi Unmesha, I would take a step back and look at the big picture before spending nights working on a non-starter. Without knowing what the ultimate goal is I can't give you a workaround or declare the basic architecture faulty but here are the things that stick out as fundamentally probl

Re: hadoop -Mapreduce

2013-12-21 Thread Chris Mawata
Each mapper is already running in its own JVM. Maybe if you explain why you want to start threads someone might have some suggestions. (If at all possible I suggest you avoid doing that) Chris On Dec 11, 2013 6:26 AM, "Ranjini Rathinam" wrote: > Hi, > > I am fresher to mapreduce concept, > I wou

Re: libjar and Mahout

2013-12-20 Thread Chris Mawata
In your hadoop command I see a space in the part ...-core-0.9-SNAPSHOT.jar /:/apps/mahout/trunk just after .jar Should it not be ...-core-0.9-SNAPSHOT.jar:/apps/mahout/trunk Chris On 12/20/2013 2:44 PM, Sameer Tilak wrote: Hi All, I am running Hadoop 1.0.3 -- probably will upgrade mid-next year

Re: Running Hadoop v2 clustered mode MR on an NFS mounted filesystem

2013-12-20 Thread Chris Mawata
Yong raises an important issue:  You have thrown out the I/O advantages of HDFS and also thrown out the advantages of data locality. It would be interesting to know why you are taking this approach. Chris On 12/20/2013 9:28 AM, java8964 wrote:

Re: External db

2013-12-15 Thread Chris Mawata
Hi Kishore, you are comparing apples and oranges. HDFS is a file system (you read and write files to it). The NoSQL datastores are more like a database. You can query and depending on the type of NoSQL database the querying can be more or less sophisticated e.g. "Give me a document that contai

Re: NO DataNode Daemon on a single node Hadoop 2.2 installation

2013-12-15 Thread Chris Mawata
file. but slave file has localhost. I managed to solve it. I had to delete the data dir of the namenode before I format. And then it worked fine. -- Best Regards, Karim Ahmed Awara On Sun, Dec 15, 2013 at 6:08 AM, Chris Mawata wrote: What do you hav

Re: Site-specific dfs.client.local.interfaces setting not respected for Yarn MR container

2013-12-15 Thread Chris Mawata
You might have better luck with an alternative approach to avoid having IPV6, which is to add to your hadoop-env.sh HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true" Chris On 12/14/2013 11:38 PM, Jeff Stuckman wrote: Hello, I have set up a two-node Hadoop cluster on Ubuntu 12.04 ru
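For reference, that line goes in hadoop-env.sh; a minimal sketch:

```shell
# hadoop-env.sh -- make the daemon JVMs prefer the IPv4 stack so they do not
# bind or resolve over IPv6 (appends to any HADOOP_OPTS already set).
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
```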

Re: NO DataNode Daemon on a single node Hadoop 2.2 installation

2013-12-14 Thread Chris Mawata
What do you have in your masters and slaves files? Chris On 12/14/2013 5:05 AM, Karim Awara wrote: Hi, After I run start-dfs.sh, I dont get a datanode daemon. And in the log file it generates this exception: "java.io.IOException: Failed on local exception: java.io.EOFException; Host Details

Re:Re: Would it be possible to install two version of Hadoop on the same cluster?

2013-12-13 Thread Chris Mawata
ether I could install two version of hadoop at the > same cluster > > > > > 2013/12/12 Chris Mawata > >> If you are planning to take measurements when both are running they >> won't be representative of either one. For instance, your network could >> look like

Re: hadoop 2.2 build

2013-12-13 Thread Chris Mawata
Try using the actual IP of your machine where you have the 127.0.1.1 Chris On Dec 13, 2013 10:20 PM, "Karim Awara" wrote: > Hi, > > I found them.. It says this error in the datanode log: > > "java.io.IOException: Failed on local exception: java.io.EOFException; > Host Details : local host is: "DE

Re: Would it be possible to install two version of Hadoop on the same cluster?

2013-12-12 Thread Chris Mawata
If you are planning to take measurements when both are running they won't be representative of either one. For instance, your network could look like a bottleneck when both are running even if it could handle each one on its own. Datanodes that would have had capacity allowing local processing

Re: which is better : form reducer or Driver

2013-11-06 Thread Chris Mawata
multiple mappers and 1 reducer. so which is best? On Tue, Nov 5, 2013 at 6:28 PM, Chris Mawata wrote: If you have multiple reducers you are doing it in parallel while in the driver it is surely single threaded so my bet would be on the

Re: which is better : form reducer or Driver

2013-11-05 Thread Chris Mawata
If you have multiple reducers you are doing it in parallel while in the driver it is surely single threaded so my bet would be on the reducers. Chris On 11/5/2013 6:15 AM, unmesha sreeveni wrote: I am emiting 'A' value and 'B' value from reducer. I need to do further calculations also. which

Re: Namenode / Cluster scaling issues in AWS environment

2013-11-03 Thread Chris Mawata
You might also consider federation. Chris On 11/3/2013 3:21 AM, Manish Malhotra wrote: Hi All, I'm facing issues in scaling a Hadoop cluster, I have following cluster config. 1. AWS Infrastructure. 2. 400 DN 3. NN : 120 gb memory, 10gb network,32 cores dfs.namenode

Re: Path exception when running from inside IDE.

2013-11-01 Thread Chris Mawata
What does the code look like? Chances are you are using a file:/// url Chris On 11/1/2013 12:59 PM, Omar@Gmail wrote: Hi, Running from inside IDE (intellij idea) getting exception, see below: In the program arguments I specify 'input output' Of course 'input' does exist in HDFS with data file

Re: Non data-local scheduling

2013-10-03 Thread Chris Mawata
Try playing with the block size vs split size. If the blocks are very large and the splits small then multiple splits correspond to the same block and if there are more splits than replicas you get rack local processing. On 10/3/2013 12:57 PM, André Hacker wrote: Hi, I have a 25 node cluster
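One way to make splits smaller than blocks, sketched as a config fragment (property name as in Hadoop 2.x; the value is illustrative, so verify against your version's defaults):

```xml
<!-- mapred-site.xml: cap split size below the block size so several splits
     map onto one block; with more splits than replicas, some tasks can only
     be rack-local rather than data-local -->
<property>
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <value>67108864</value> <!-- 64 MB splits against, e.g., 256 MB blocks -->
</property>
```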

Re: HDFS / Federated HDFS - Doubts

2013-10-02 Thread Chris Mawata
One more thing, Krishna, when using JounalNodes as opposed to the native file system for the metadata storage you do get replication. Chris On 10/2/2013 12:52 AM, Krishna Kumaar Natarajan wrote: Hi All, While trying to understand federated HDFS in detail I had few doubts and listing them d

Re: HDFS / Federated HDFS - Doubts

2013-10-02 Thread Chris Mawata
Don't know about question 4 but for the first three -- the metadata is in the memory of the namenode at runtime but is also persisted to disk (otherwise it would be lost if you shut down and re-start the namenode). The copy persisted to disk is on the native file system (not HDFS) and no is not

Re: Problems

2013-01-17 Thread Chris Mawata
n 1.6?". -andy On Thu, Jan 17, 2013 at 11:19 AM, Chris Mawata wrote: Do you know what causes 1.7 to fail? I am running 1.7 and so far have not done whatever it takes to make it fail. On 1/17/2013 1:46 PM, Leo Leung wrote: Use Sun/Oracle 1.6.0_32+ Build should be 20.7-b02+ 1.7 causes f

Re: Problems

2013-01-17 Thread Chris Mawata
alidated to the community. see http://wiki.apache.org/hadoop/HadoopJavaVersions JDK 1.6.0_32 to .38 seems safe -Original Message----- From: Chris Mawata [mailto:chris.maw...@gmail.com] Sent: Thursday, January 17, 2013 11:19 AM To: user@hadoop.apache.org Subject: Re: Problems Do you know wha

Re: Problems

2013-01-17 Thread Chris Mawata
Do you know what causes 1.7 to fail? I am running 1.7 and so far have not done whatever it takes to make it fail. On 1/17/2013 1:46 PM, Leo Leung wrote: Use Sun/Oracle 1.6.0_32+ Build should be 20.7-b02+ 1.7 causes failure and AFAIK, not supported, but you are free to try the latest vers