http://code.google.com/p/hdfs-fuse/
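
Once HDFS is mounted through something like hdfs-fuse above (or the
fuse-dfs contrib module), it appears as an ordinary directory, so code
that uses normal file APIs, including Mono's File.Open/File.Copy, can
work against that path unchanged. A rough POSIX illustration, assuming a
hypothetical mount point of /mnt/hdfs (use whatever path you actually mount):

/* Ordinary file I/O against a FUSE-mounted HDFS path.
 * /mnt/hdfs is a made-up mount point used only for illustration. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *msg = "written through the FUSE mount\n";
    FILE *f = fopen("/mnt/hdfs/tmp/hello.txt", "w");  /* looks like a local file */
    if (f == NULL) { perror("fopen"); return 1; }
    fwrite(msg, 1, strlen(msg), f);
    fclose(f);
    return 0;
}

(The FUSE layer has limitations, e.g. HDFS files are essentially
write-once, so in-place updates won't work through the mount either.)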

On Sun, Jan 10, 2010 at 7:36 AM, Aram Mkhitaryan
<[email protected]> wrote:
> ah, sorry, I forgot to mention that it's on the hdfs-user mailing list:
> [email protected]
>
>
> On Sun, Jan 10, 2010 at 7:17 PM, anilkr <[email protected]> wrote:
>>
>> Aram, where is the discussion about fuse-dfs? I could not find the link in
>> your reply.
>>
>> thanks
>>
>>
>>
>> Aram Mkhitaryan wrote:
>>>
>>> Here is a discussion with the subject 'fuse-dfs'; they discuss problems
>>> with mounting HDFS there, so you can probably ask your question in that
>>> thread.
>>>
>>> On Sun, Jan 10, 2010 at 7:06 PM, Aram Mkhitaryan
>>> <[email protected]> wrote:
>>>> I'm not an expert here, but I read somewhere that it's possible to
>>>> install a module on Linux that lets you mount an HDFS folder as a
>>>> standard Linux folder. If I'm not wrong, it was in one of Cloudera's
>>>> distributions; you can probably find something there.
>>>>
>>>>
>>>> On Sun, Jan 10, 2010 at 5:42 PM, anilkr <[email protected]> wrote:
>>>>>
>>>>> Thank you, Ryan.
>>>>> My C# code also runs on Linux (it uses the Mono framework).
>>>>> I understand that some bridging would be required. Since Hadoop exposes
>>>>> a C API for operating on the Hadoop filesystem, I am thinking of writing
>>>>> a wrapper in C and building it into a DLL (a shared library on Linux).
>>>>> My C# application would then call this library for its file operations,
>>>>> and the library would call the Hadoop C API to operate on the files.
>>>>>
>>>>> Do you think this would be a proper approach?
>>>>> Thanks again.
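>>>>>
>>>>> For illustration only, here is a rough sketch of what such a wrapper
>>>>> could look like on top of libhdfs (the Hadoop C API). The function names
>>>>> and the build line are my own assumptions, not an existing library; the
>>>>> exported functions are plain C so a Mono application could reach them
>>>>> via P/Invoke:
>>>>>
>>>>> /* hdfs_wrapper.c -- illustrative sketch, not a standard API.
>>>>>  * Rough build: gcc -shared -fPIC hdfs_wrapper.c -o libhdfswrapper.so -lhdfs -ljvm
>>>>>  * libhdfs needs the Hadoop jars and conf directory on CLASSPATH at run time. */
>>>>> #include <fcntl.h>
>>>>> #include "hdfs.h"
>>>>>
>>>>> static hdfsFS fs = NULL;
>>>>>
>>>>> /* Connect to the namenode; "default" uses fs.default.name from the config. */
>>>>> int wrapper_connect(const char *host, unsigned short port)
>>>>> {
>>>>>     fs = hdfsConnect(host, port);
>>>>>     return fs != NULL ? 0 : -1;
>>>>> }
>>>>>
>>>>> /* Create (or overwrite) a file and write a buffer to it; returns bytes written or -1. */
>>>>> int wrapper_write_file(const char *path, const char *buf, int len)
>>>>> {
>>>>>     hdfsFile f = hdfsOpenFile(fs, path, O_WRONLY | O_CREAT, 0, 0, 0);
>>>>>     if (f == NULL) return -1;
>>>>>     tSize written = hdfsWrite(fs, f, buf, len);
>>>>>     hdfsFlush(fs, f);
>>>>>     hdfsCloseFile(fs, f);
>>>>>     return (int) written;
>>>>> }
>>>>>
>>>>> void wrapper_disconnect(void)
>>>>> {
>>>>>     if (fs != NULL) hdfsDisconnect(fs);
>>>>> }
>>>>>
>>>>> On the C# side the library would then be declared with [DllImport] and
>>>>> called like any other native function.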
>>>>>
>>>>>
>>>>> Ryan Rawson wrote:
>>>>>>
>>>>>> The Hadoop filesystem is not a typical filesystem; it is RPC-oriented and
>>>>>> uses a thick client written in Java. Getting access to it from C# would
>>>>>> involve bridging to Java somehow; the C++ client does this.
>>>>>>
>>>>>> Most of the HBase devs use Mac or Linux boxes, so we aren't really experts
>>>>>> in Windows tech. Maybe the main Hadoop list could help you?
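>>>>>>
>>>>>> For what it's worth, here is a minimal sketch of that bridging from C
>>>>>> using libhdfs (illustrative only; the test path is made up). libhdfs
>>>>>> starts a JVM in-process via JNI, which is how the C side reaches the
>>>>>> Java client, so the Hadoop jars and configuration must be on CLASSPATH
>>>>>> at run time:
>>>>>>
>>>>>> #include <fcntl.h>
>>>>>> #include <stdio.h>
>>>>>> #include "hdfs.h"
>>>>>>
>>>>>> int main(void)
>>>>>> {
>>>>>>     hdfsFS fs = hdfsConnect("default", 0);   /* starts the embedded JVM */
>>>>>>     if (fs == NULL) { fprintf(stderr, "connect failed\n"); return 1; }
>>>>>>
>>>>>>     /* read a chunk from an (assumed) existing file */
>>>>>>     hdfsFile f = hdfsOpenFile(fs, "/tmp/testfile.txt", O_RDONLY, 0, 0, 0);
>>>>>>     if (f != NULL) {
>>>>>>         char buf[256];
>>>>>>         tSize n = hdfsRead(fs, f, buf, sizeof(buf));
>>>>>>         printf("read %d bytes\n", (int) n);
>>>>>>         hdfsCloseFile(fs, f);
>>>>>>     }
>>>>>>     hdfsDisconnect(fs);
>>>>>>     return 0;
>>>>>> }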
>>>>>>
>>>>>> On Jan 9, 2010 11:50 PM, "anilkr" <[email protected]> wrote:
>>>>>>
>>>>>>
>>>>>> Currently my application uses C# with Mono on Linux to talk to local
>>>>>> file systems (e.g. ext2, ext3). The basic operations are opening a file,
>>>>>> writing to / reading from the file, and closing/deleting the file. For
>>>>>> this I currently use the native C# APIs to operate on the file.
>>>>>>
>>>>>> My question is: if I install the Hadoop filesystem on my Linux box, what
>>>>>> changes do I need to make to my existing functions so that they talk to
>>>>>> the Hadoop filesystem for these basic file operations? Since the Hadoop
>>>>>> infrastructure is based on Java, how would a C# (Mono) application do
>>>>>> basic operations with Hadoop? Do the basic C# file APIs (like File.Open
>>>>>> or File.Copy) work with Hadoop filesystems too?
>>>>>>
>>>>>> Also, if I want to open a file, do I need to mount the Hadoop filesystem
>>>>>> programmatically? If so, how?
>>>>>>
>>>>>> Thanks,
>>>>>> Anil
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>>
>>
>
