On Apr 9, 2010, at 13:43 , Allen Wittenauer wrote:

> 
> On Apr 9, 2010, at 1:22 PM, Keith Wiley wrote:
> 
>> My C++ pipes program needs to use a shared library.  What are my options?  
>> Can I install it on the cluster in a way that lets the tasks on each node 
>> access it as needed?  Can I put it in the distributed cache so that 
>> attempts to link against the library find it there?  Other options?
> 
> Distributed Cache is the way to go.
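For a pipes job, the library can be shipped through Hadoop's generic options, which place it in the Distributed Cache and symlink it into each task's working directory.  A sketch only; `libfoo.so`, the input/output paths, and `bin/my_pipes_task` are placeholder names, not anything from this thread:

```shell
# Ship a local .so with the job via the generic -files option; the
# DistributedCache symlinks it into each task's working directory,
# so LD_LIBRARY_PATH=. lets the pipes binary find it at load time.
hadoop pipes \
  -files libfoo.so \
  -D mapred.child.env="LD_LIBRARY_PATH=." \
  -input in -output out \
  -program bin/my_pipes_task
```

Note that generic options (`-files`, `-D`) must appear before the pipes-specific options.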

Suppose the shared library is quite large (or there are numerous required shared 
libraries), so sending it (or them) to the distributed cache for every job is 
costly and tedious.  Is there any way to install them permanently on HDFS such 
that they are found when executing C++ pipes programs?

________________________________________________________________________________
Keith Wiley               kwi...@keithwiley.com               www.keithwiley.com

"And what if we picked the wrong religion?  Every week, we're just making God
madder and madder!"
  -- Homer Simpson
________________________________________________________________________________
