On Apr 9, 2010, at 13:43 , Allen Wittenauer wrote:

> 
> On Apr 9, 2010, at 1:22 PM, Keith Wiley wrote:
> 
>> My C++ pipes program needs to use a shared library.  What are my options?  
>> Can I install this on the cluster in a way that permits HDFS to access it 
>> from each node as needed?  Can I put it in the distributed cache such that 
>> attempts to link to the library find it in the cache?  Other options?
> 
> Distributed Cache is the way to go.

Is there any way to simply install all the necessary shared libraries on every 
node of the cluster so they're already there, ready, waiting...and properly 
linkable from an HDFS pipes job, so they don't have to be copied to the 
distributed cache and sent node-to-node around the cluster on every run?
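
For reference, here's roughly the per-run shipping I'd like to avoid. This is only a sketch; the library name, HDFS paths, and job details are placeholders, and the property names assume a 0.20-era configuration:

```shell
# One-time: put the shared library into HDFS so the cluster can see it.
hadoop fs -put libfoo.so /libs/libfoo.so

# Every run: pull the library through the distributed cache, symlink it
# into each task's working directory, and point the loader at it.
hadoop pipes \
  -D mapred.cache.files=hdfs:///libs/libfoo.so#libfoo.so \
  -D mapred.create.symlink=yes \
  -D mapred.child.env="LD_LIBRARY_PATH=." \
  -program hdfs:///bin/my_pipes_binary \
  -input /data/in \
  -output /data/out
```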

________________________________________________________________________________
Keith Wiley               kwi...@keithwiley.com               www.keithwiley.com

"What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio than
when I entered."
  -- Keith Wiley
________________________________________________________________________________
