Indeed, it would be a very nice interface to have (if anyone has some
free time)!
I know a few Caltech people who'd like to see how their WAN
transfer product (http://monalisa.cern.ch/FDT/) would work with HDFS;
if there were an HDFS NIO interface, playing around with HDFS and FDT
would be straightforward.
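For context on why FDT cares: tools like it push bytes around with Java NIO channels. A minimal sketch of the channel-to-channel copy pattern (the class and paths here are illustrative, not anything from FDT itself):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch: transferTo() lets the kernel move bytes between
// channels without staging them through a user-space buffer.
public class ChannelCopy {
    public static long copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            long pos = 0;
            long size = in.size();
            // transferTo may move fewer bytes than requested, so loop.
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
            return pos;
        }
    }
}
```

An HDFS NIO bridge would let the same channel-based code target HDFS paths without going through a FUSE mount.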
Snehal Nagmote wrote:
Can you please explain what adding an NIO bridge means, how it can be
done, and what the advantages would be in this case?
NIO: Java non-blocking IO. It's a standard API to talk to different
filesystems; support has been discussed in JIRA. If the DFS APIs were
accessible through NIO, existing Java tools could talk to HDFS directly.
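To make the idea concrete, here is what the pluggable-filesystem side of NIO (JSR 203, which later shipped as java.nio.file in Java 7) looks like, demonstrated with the JDK's built-in zip provider. An HDFS provider (hypothetical here) would register the same way and be reachable through the same Files/Path calls:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;

// The same Files/Path calls work against any filesystem that has a
// provider. Shown with the JDK's zip provider; an HDFS provider would
// slot in under its own URI scheme in exactly the same way.
public class NioBridgeDemo {
    public static String demo(Path zipFile) throws Exception {
        // "jar:file:/..." selects the zip filesystem provider.
        URI uri = URI.create("jar:" + zipFile.toUri());
        try (FileSystem zipfs = FileSystems.newFileSystem(
                uri, Collections.singletonMap("create", "true"))) {
            Path entry = zipfs.getPath("/hello.txt");
            Files.write(entry, "hello".getBytes(StandardCharsets.UTF_8));
            return new String(Files.readAllBytes(entry),
                    StandardCharsets.UTF_8);
        }
    }
}
```

The point of a bridge is exactly this substitutability: code written against FileSystem/Path doesn't care whether the provider is zip, local disk, or (hypothetically) HDFS.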
Edward Capriolo wrote:
It is a little more natural to connect to HDFS from apache tomcat.
This will allow you to skip the FUSE mounts and just use the HDFS-API.
I have modified this code to run inside tomcat.
http://wiki.apache.org/hadoop/HadoopDfsReadWriteExample
I will not testify to how well this setup will perform.
Hello Sir,
I am doing my MTech at IIIT Hyderabad, and in our project we have a similar
requirement of accessing HDFS from the Apache server (Tomcat) directly. Can
you please explain how to do this, with some example - probably the same
code you modified? Does it require the Hadoop installation directory to be
present on the Tomcat machine?
"Yes. IMHO GlusterFS advertises benchmarks vs Lustre."
You're right, I've found those now. Thanks for the reply - it helped.
Phil
>> but does Sun's Lustre follow in the steps of Gluster, then?
Yes. IMHO GlusterFS advertises benchmarks vs Lustre.
The main difference is that GlusterFS is a FUSE (userspace) filesystem,
while Lustre has to be patched into the kernel, or built as a module.
Brian ---
Can you share some performance figures for typical workloads with your
HDFS/FUSE setup? Obviously, latency is going to be bad, but throughput
will probably be reasonable... I'm curious to hear about concrete
latency/throughput numbers.
HDFS itself has some facilities for serving data over HTTP:
https://issues.apache.org/jira/browse/HADOOP-5010. YMMV.
On Mar 26, 2009, at 8:55 PM, phil cryer wrote:
> When you say that you have huge images, how big is "huge"?
Yes, we're looking at some images that are 100Megs in size, but
nothing like what you're speaking of. This helps me understand
Hadoop's usage better; unfortunately it won't be the fit I was hoping for.
Have you looked into MogileFS already? Seems like a good fit, based
on your description. This question has come up more than once here,
and MogileFS is an oft-recommended solution.
Norbert
On Mar 26, 2009, at 5:44 PM, Aaron Kimball wrote:
> In general, Hadoop is unsuitable for the application you're
> suggesting.
> Systems like Fuse HDFS do exist, though they're not widely used.
We use FUSE on a 270TB cluster to serve up physics data because the
client (2.5M lines of C++) doesn't speak HDFS natively.
In general, Hadoop is unsuitable for the application you're suggesting.
Systems like Fuse HDFS do exist, though they're not widely used. I don't
know of anyone trying to connect Hadoop with Apache httpd.
When you say that you have huge images, how big is "huge"? It might be
useful if these images