Options for you:

http://www.markhneedham.com/blog/2008/08/29/c-thrift-examples/

Thrift is one of the main ways into HBase and HDFS. The link above is a blog
post with Thrift IDL examples for C#.
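If you go the Thrift route, the flow is: take a Thrift IDL, generate C# stubs
with the Thrift compiler, then talk to the HBase Thrift gateway over a socket.
A toy IDL just to illustrate the shape -- this is NOT the real HBase interface,
which ships as Hbase.thrift in the HBase distribution:

```thrift
// hypothetical toy service, for illustration only
service KeyValueStore {
  binary get(1: string table, 2: string row),
  void put(1: string table, 2: string row, 3: binary value)
}
```

Running the Thrift compiler with the C# generator (thrift --gen csharp) over a
file like this should emit client classes you can compile against Thrift's C#
runtime library.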

http://www.mono-project.com/using/relnotes/1.0-beta1.html

Mono has a Java bytecode to MSIL translator built in. It is not likely to give
high performance, though, and TBH I doubt it will work.

http://caffeine.berlios.de

Provides Mono-Java interop via JNI; no dynamic bytecode-to-MSIL translation. I
have never used it, and the project looks quite dead, but it might still do
what you want.
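For the libhdfs option Andy mentions in the thread below, the C side is small.
A minimal read sketch, assuming a working Hadoop install with libhdfs built and
CLASSPATH pointing at the Hadoop jars (error handling omitted, path is made up):

```c
#include <fcntl.h>
#include <stdio.h>
#include "hdfs.h"   /* ships with Hadoop; link against libhdfs and the JVM */

int main(void) {
    /* "default" picks up fs.default.name from the Hadoop config */
    hdfsFS fs = hdfsConnect("default", 0);
    hdfsFile in = hdfsOpenFile(fs, "/tmp/example", O_RDONLY, 0, 0, 0);
    char buf[4096];
    tSize n = hdfsRead(fs, in, buf, sizeof(buf));
    printf("read %d bytes\n", (int)n);
    hdfsCloseFile(fs, in);
    hdfsDisconnect(fs);
    return 0;
}
```

From C# you would P/Invoke entry points like these, but note this still pulls
a JVM into your process, per Andy's point below.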



-----Original Message-----
From: Andrew Purtell [mailto:[email protected]]
Sent: Sun 1/10/2010 8:45 PM
To: [email protected]
Subject: Re: Basic question about using C# with Hadoop filesystems
 
Just to clarify:

> On Windows especially context switching during I/O like that has a 
> high penalty.

should read

> Context switching during I/O like that has a penalty.

I know we are talking about Mono on Linux here; after all, the subject
is FUSE. I forgot to fix that statement before hitting 'send'. :-)



----- Original Message ----
> From: Andrew Purtell <[email protected]>
> To: [email protected]
> Sent: Sun, January 10, 2010 11:30:42 AM
> Subject: Re: Basic question about using C# with Hadoop filesystems
> 
> Bear in mind that hdfs-fuse has something like a 30% performance impact
> when compared with direct access via the Java API. The data path is
> something like:
> 
>     your app -> kernel -> libfuse -> JVM -> kernel -> HDFS
> 
>     HDFS -> kernel -> JVM -> libfuse -> kernel -> your app
> 
> On Windows especially context switching during I/O like that has a 
> high penalty. Maybe it would be better to bind the C libhdfs API
> directly via a C# wrapper (see http://wiki.apache.org/hadoop/LibHDFS).
> But, at that point, you have pulled the Java Virtual Machine into the
> address space of your process and are bridging between Java land and
> C# land over the JNI and the C# equivalent. So, at this point, why not
> just use Java instead of C#? Or, just use C and limit the damage to
> only one native-to-managed interface instead of two?
> 
> The situation will change somewhat when/if all HDFS RPC is moved to
> some RPC and serialization scheme which is truly language independent,
> i.e. Avro. I have no idea when or if that will happen. Even if that
> happens, as Ryan said before, the HDFS client is fat. Just talking
> the RPC gets you maybe 25% of the way toward a functional HDFS
> client. 
> 
> The bottom line is the Hadoop software ecosystem has a strong Java
> affinity. 
> 
>    - Andy
> 
> 
> 
> ----- Original Message ----
> > From: Jean-Daniel Cryans 
> > To: [email protected]
> > Sent: Sun, January 10, 2010 8:57:32 AM
> > Subject: Re: Basic question about using C# with Hadoop filesystems
> > 
> > http://code.google.com/p/hdfs-fuse/
> > 
> > On Sun, Jan 10, 2010 at 7:36 AM, Aram Mkhitaryan
> > wrote:
> > > ah, sorry, forgot to mention, it's in hdfs-user mailing list
> > > [email protected]



      

