Re: Triangle (Raleigh, Durham, Chapel Hill) Area Hadoop Users Group First Meeting: July 20th

2010-07-09 Thread Abhishek Pratap
Hi Grant, The agenda for this meeting looks really interesting for newbies like me. Any chance you guys can record videos? I am assuming getting the talks live on air will be tougher. Thanks! -Abhi On Thu, Jul 8, 2010 at 1:07 PM, Grant Ingersoll wrote: > Interested in learning more about Hadoop or

Re: jni files

2010-07-09 Thread Allen Wittenauer
On Jul 9, 2010, at 2:09 AM, amit kumar verma wrote: > I think I got a solution. As I read more about Hadoop and JNI, I learned that > I need to copy the JNI files to HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-xxx. lib/native/xxx are for the native compression libraries. They are not for user-le

Triangle (Raleigh, Durham, Chapel Hill) Area Hadoop Users Group First Meeting: July 20th

2010-07-09 Thread Grant Ingersoll
Interested in learning more about Hadoop or finding out what’s going on with Hadoop in the Triangle? Come out for the first meeting of the Triangle Hadoop Users Group scheduled for 7pm on Tuesday July 20th at Bronto Software. Find out more at http://www.trihug.org Three presenters are lined up

java.lang.OutOfMemoryError: Java heap space

2010-07-09 Thread Shuja Rehman
Hi All, I am facing a hard problem. I am running a MapReduce job using streaming, but it fails with the following error: Caught: java.lang.OutOfMemoryError: Java heap space at Nodemapper5.parseXML(Nodemapper5.groovy:25) java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): su
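For a heap-space error like the one above, a common first step is to raise the per-task child JVM heap. A minimal sketch of the launch command, assuming the 0.20-era mapred.* property names; the input/output paths and streaming jar location are hypothetical:

```shell
# Raise the child JVM heap for each map/reduce task to 1 GB.
# mapred.child.java.opts is the 0.20-era property name; newer Hadoop
# releases split it into mapreduce.map.java.opts / mapreduce.reduce.java.opts.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*-streaming.jar \
  -D mapred.child.java.opts=-Xmx1024m \
  -input /user/shuja/input \
  -output /user/shuja/output \
  -mapper Nodemapper5.groovy
```

Note that generic `-D` options must come before the other streaming arguments for Hadoop's option parser to pick them up.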

Last day to submit your Surge 2010 CFP!

2010-07-09 Thread Jason Dixon
Today is your last chance to submit a CFP abstract for the 2010 Surge Scalability Conference. The event is taking place on Sept 30 and Oct 1, 2010 in Baltimore, MD. Surge focuses on case studies that address production failures and the re-engineering efforts that led to victory in Web Application

Re: jni files

2010-07-09 Thread amit kumar verma
Hi Hemanth, Thanks for your kind response and support. But what if I am using a third-party API that also uses java.io.File? I think there must be some way to use HDFS by default without changing the code! Thanks, Amit Kumar Verma, Verchaska Infotech Pvt. Ltd. On 07/09/2010 03:18 PM,

Re: jni files

2010-07-09 Thread Hemanth Yamijala
Amit, On Fri, Jul 9, 2010 at 3:00 PM, amit kumar verma wrote: > Hi Hemanth, > > Yeah, I have gone through the API documentation and there is no issue in > accessing files from HDFS, but my concern is what about the API which > already got developed without Hadoop. OK, what I mean, I developed an

Re: jni files

2010-07-09 Thread amit kumar verma
Hi Hemanth, Yeah, I have gone through the API documentation and there is no issue in accessing files from HDFS, but my concern is what about an API which was already developed without Hadoop. OK, what I mean is, I developed an application when I didn't know about Hadoop, but as now I need t
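The migration problem Amit describes (code written against java.io.File that now needs to read from HDFS) is usually handled by putting a thin interface in front of file access. A minimal, hypothetical sketch using only local I/O; an HDFS-backed implementation would wrap `FileSystem.get(conf).open(new Path(path))` from org.apache.hadoop.fs behind the same interface, leaving the call sites untouched:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Hypothetical abstraction: hide file access behind a small interface so
// java.io.File call sites can later be switched to an HDFS-backed
// implementation without touching the rest of the application.
interface FileStore {
    InputStream open(String path) throws IOException;
}

// Local-disk implementation, standing in for the pre-Hadoop code path.
// An HdfsFileStore would instead return FileSystem.get(conf).open(new Path(path)).
class LocalFileStore implements FileStore {
    public InputStream open(String path) throws IOException {
        return new FileInputStream(path);
    }
}

public class FileStoreDemo {
    // Reads the whole file through whichever store is supplied.
    static String readAll(FileStore store, String path) throws IOException {
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(store.open(path), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = r.readLine()) != null) sb.append(line).append('\n');
            return sb.toString();
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".txt");
        tmp.deleteOnExit();
        try (Writer w = new FileWriter(tmp)) { w.write("hello hdfs\n"); }
        System.out.print(readAll(new LocalFileStore(), tmp.getPath()));
    }
}
```

The design point is that swapping storage backends then becomes a one-line change at construction time rather than a rewrite of every read path.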

Re: jni files

2010-07-09 Thread Hemanth Yamijala
Amit, On Fri, Jul 9, 2010 at 2:39 PM, amit kumar verma wrote: > Hi Hemanth, > > The versions are the same, as I copied it to all client machines. > > I think I got a solution. As I read more about Hadoop and JNI, I learned > that I need to copy the JNI files to > HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-

Re: jni files

2010-07-09 Thread amit kumar verma
Hi Hemanth, The versions are the same, as I copied it to all client machines. I think I got a solution. As I read more about Hadoop and JNI, I learned that I need to copy the JNI files to HADOOP_INSTALLATION_DIR/lib/native/Linux-xxx-xxx. I thought my Linux machine was Linux-i386-32. Then I found in "org.a
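The platform directory name Amit is looking for (e.g. Linux-i386-32) is derived from JVM system properties. A sketch that approximates how Hadoop's org.apache.hadoop.util.PlatformName builds it; the exact formatting may differ between Hadoop versions, so treat this as an assumption:

```java
// Approximates Hadoop's platform string: os.name + "-" + os.arch + "-" + data model,
// e.g. "Linux-i386-32" or "Linux-amd64-64". Spaces are replaced with underscores,
// mirroring how the launch scripts normalize the value (an assumption here).
public class PlatformGuess {
    public static String platform() {
        String name = System.getProperty("os.name");
        String arch = System.getProperty("os.arch");
        String model = System.getProperty("sun.arch.data.model"); // "32" or "64" on HotSpot
        return (name + "-" + arch + "-" + model).replace(' ', '_');
    }

    public static void main(String[] args) {
        System.out.println(platform());
    }
}
```

Running this on the cluster nodes shows which lib/native subdirectory the JVM will actually look under.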

Re: jni files

2010-07-09 Thread Hemanth Yamijala
Hi, Possibly another silly question, but can you cross-check that the versions of Hadoop on the client and the server are the same? Thanks, Hemanth On Thu, Jul 8, 2010 at 10:57 PM, Allen Wittenauer wrote: > > On Jul 8, 2010, at 1:08 AM, amit kumar verma wrote: > >>     DistributedCache.addCacheFi