Finding EOF with libhdfs

2010-09-17 Thread Poole, Samuel [USA]
Is there an easy way to find EOF or the length of a file using libhdfs from C++? Also, if anyone has an example of reading a binary file from HDFS until EOF, that would be greatly appreciated. Thanks, Sam
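For reference, both questions can be sketched against the libhdfs C API (`hdfs.h`): `hdfsGetPathInfo` returns an `hdfsFileInfo` whose `mSize` field is the file length (and hence the EOF offset), and `hdfsRead` returns 0 once EOF is reached. The connection target and the file path below are placeholder assumptions, not values from the thread; this is a minimal sketch, not a tested program.

```c
#include <fcntl.h>
#include <stdio.h>
#include "hdfs.h"   /* libhdfs header shipped with Hadoop */

int main(void) {
    /* "default" picks up the configured namenode from the Hadoop
       config; both this and the path below are assumptions. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    const char *path = "/tmp/example.bin";   /* hypothetical file */

    /* File length (i.e. the EOF offset) via hdfsGetPathInfo. */
    hdfsFileInfo *info = hdfsGetPathInfo(fs, path);
    if (!info) { fprintf(stderr, "hdfsGetPathInfo failed\n"); return 1; }
    printf("length: %lld bytes\n", (long long)info->mSize);
    hdfsFreeFileInfo(info, 1);

    /* Read binary data until EOF: hdfsRead returns 0 at end of file,
       -1 on error, and may return fewer bytes than requested. */
    hdfsFile in = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
    if (!in) { fprintf(stderr, "hdfsOpenFile failed\n"); return 1; }

    char buf[4096];
    tSize n;
    tOffset total = 0;
    while ((n = hdfsRead(fs, in, buf, sizeof buf)) > 0) {
        total += n;               /* process buf[0..n) here */
    }
    if (n < 0) fprintf(stderr, "read error\n");
    printf("read %lld bytes total\n", (long long)total);

    hdfsCloseFile(fs, in);
    hdfsDisconnect(fs);
    return 0;
}
```

Compiling requires the Hadoop include and library paths and a JVM on the library path, since libhdfs drives the Java HDFS client through JNI.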

RE: Libhdfs use in pipes program

2010-06-17 Thread Poole, Samuel [USA]
-amd64-64/libhdfs.so.0, depending on your OS. Please also make sure that LD_LIBRARY_PATH contains your choice. On Thu, Jun 17, 2010 at 6:37 AM, Poole, Samuel [USA] wrote: > I am using CDH2. I have the package installed, it just isn't found > when running a pipes job. Does the Hadoop pi
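The advice above amounts to pointing the runtime loader at the directory containing `libhdfs.so.0` before launching the pipes job. The directory below is a guessed example of a typical Cloudera native-library location, not a path confirmed in the thread:

```shell
# Hypothetical install location -- adjust to wherever your Cloudera
# package put libhdfs.so.0 (find it with: find / -name 'libhdfs.so.0').
HDFS_NATIVE=/usr/lib/hadoop/lib/native/Linux-amd64-64
export LD_LIBRARY_PATH="$HDFS_NATIVE:$LD_LIBRARY_PATH"
echo "$LD_LIBRARY_PATH"
```

Setting this in the shell only affects locally launched processes; for tasks started by the tasktracker, the variable has to reach the task environment (e.g. via the job configuration) rather than the submitting shell.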

RE: Libhdfs use in pipes program

2010-06-17 Thread Poole, Samuel [USA]
Sent: Wednesday, June 16, 2010 11:21 PM To: common-user@hadoop.apache.org Subject: Re: Libhdfs use in pipes program Are you using CDH3 beta? I believe that you can copy that library from CDH2. David On 6/16/2010 10:51 AM, Poole, Samuel [USA] wrote: > Is it possible to use libhdfs in conjunct

Libhdfs use in pipes program

2010-06-16 Thread Poole, Samuel [USA]
Is it possible to use libhdfs in conjunction with Hadoop Pipes? At runtime, I am getting an error that says it can't find libhdfs.so.0. I deployed Hadoop using one of Cloudera's distribution packages. Any recommendations would be greatly appreciated. Sam Poole Booz Allen Hamilton

Hadoop for Independent Tasks not using Map/Reduce?

2009-08-19 Thread Poole, Samuel [USA]
I am new to Hadoop (I have not yet installed/configured), and I want to make sure that I have the correct tool for the job. I do not "currently" have a need for the Map/Reduce functionality, but I am interested in using Hadoop for task orchestration, task monitoring, etc. over numerous nodes in
