The HBase-Writer team is happy to announce that HBase-Writer 0.90.3 is
available for download:
http://code.google.com/p/hbase-writer/downloads/list
HBase-Writer 0.90.3 is a maintenance release that fixes library compatibility with older versions of Heritrix and HBase. More details may be
Sounds like it could be a SPOF.
On Thu, Jun 10, 2010 at 7:47 AM, hmar...@umbc.edu wrote:
Hey,
This is a really neat idea. If anyone has a way to do this, could you share?
I'll bet this could be very interesting! Thanks...
Best,
HAL
Hi,
I wanted to ask if it is possible to
...@gmail.com wrote:
On Wed, Jan 27, 2010 at 3:08 PM, Ryan Smith ryan.justin.sm...@gmail.com wrote:
If you just want to use hadoop jars in your maven projects, run your own
caching archive repository manager like Nexus.
What I really want is to publish my own projects with the correct
SS,
If you just want to use hadoop jars in your maven projects, run your own
caching archive repository manager like Nexus.
http://nexus.sonatype.org/
Deploy your hadoop and other 3rd party jars there, along with your own custom jars; then your maven projects can build using the jars from that repository.
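For illustration, a minimal sketch of publishing a jar to a Nexus hosted repository with the standard maven-deploy-plugin; the repository URL, repository id, and file/artifact names below are assumptions for a default local Nexus install, not values from this thread:

  # hypothetical example values; adjust coordinates, URL, and repositoryId
  # for your own Nexus instance
  mvn deploy:deploy-file \
    -Dfile=hadoop-0.20.2-core.jar \
    -DgroupId=org.apache.hadoop \
    -DartifactId=hadoop-core \
    -Dversion=0.20.2 \
    -Dpackaging=jar \
    -Durl=http://localhost:8081/nexus/content/repositories/thirdparty \
    -DrepositoryId=nexus-thirdparty

The repositoryId has to match a server entry with credentials in your ~/.m2/settings.xml.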
It would be great if someone could update the src code here:
http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Source+Code
On Wed, Oct 21, 2009 at 10:15 AM, Mark Vigeant mark.vige...@riskmetrics.com wrote:
Hi Oliver,
I ran into the same problem a few weeks ago. What you want
I have a question that I feel I should ask on this thread. Let's say you want to build a cluster where you will be doing very little map/reduce, mostly just storage and replication of data on HDFS. What would the hardware requirements be? No quad core? Less RAM?
Thanks
-Ryan
On Thu, Oct 1, 2009 at
Maybe someone can correct me if I'm wrong, but this is what I did to get libhdfs on 0.20.0 to build:
NOTE: on Debian, you need to apply a patch first:
https://issues.apache.org/jira/browse/HADOOP-5611
Compile libhdfs: ant compile-contrib -Dlibhdfs=1
Then to install libhdfs in the local hadoop lib:
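The install step was cut off above; a minimal sketch of what it might look like, assuming the default ant build layout (the output paths below are hypothetical and vary by platform and version):

  # hypothetical paths: copy the freshly built shared library into the
  # local hadoop lib directory so it can be found at runtime
  cp build/libhdfs/libhdfs.so* ${HADOOP_HOME}/lib/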
Hello everyone,
If I have a machine (DN) with 2 network cards, can I get double bandwidth for my data node in hadoop?
Or is the preferred solution to link-aggregate the 2 interfaces at the OS layer? Any thoughts on this would be appreciated.
Thanks in advance.
-Ryan
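For reference, OS-level link aggregation on Linux is usually done with bonding; a minimal sketch of a Debian-style /etc/network/interfaces stanza using the ifenslave package (the interface names, addresses, and mode below are assumptions, not settings from this thread):

  # hypothetical bonding config: eth0 + eth1 aggregated as bond0
  auto bond0
  iface bond0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode balance-rr
      bond-miimon 100

Whether a single DataNode transfer actually sees double bandwidth depends on the bonding mode: balance-rr can stripe one flow across both NICs, while 802.3ad generally balances across flows rather than within one.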
this, but then again maybe not. I was asking here so I knew the limitations before I started prototyping failure-recovery logic.
-Ryan
On Fri, Jul 24, 2009 at 7:05 AM, Steve Loughran ste...@apache.org wrote:
Ryan Smith wrote:
Todd, excellent info, thank you. I use Ganglia; I will set up Nagios
wrote:
Hi Ryan,
Sounds like HADOOP-5611:
https://issues.apache.org/jira/browse/HADOOP-5611
-Todd
On Tue, Jul 14, 2009 at 12:49 PM, Ryan Smith ryan.justin.sm...@gmail.com wrote:
Hello,
My problem was I didn't have g++ installed. :) So I installed g++ and re-ran:
ant
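For reference, on a Debian-based box the fix might look like this (package name and build target assumed from earlier in the thread):

  # assumed fix: install the C++ compiler, then rebuild libhdfs
  sudo apt-get install g++
  ant compile-contrib -Dlibhdfs=1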
I'm having problems dealing with my server manufacturer at the moment. Is there a good manufacturer to go with?
Any advice is helpful, thanks.
-Ryan