%s",
host, port));
this.fs = FileSystem.get(this.conf);
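[Editor's sketch] The snippet is cut off at the top, but it presumably formats an hdfs:// URI from host and port into the Configuration before calling FileSystem.get(). A core-site.xml fragment matching that pattern — host and port here are placeholders of mine, and fs.default.name is the 0.20-era key:

```xml
<!-- core-site.xml: tell clients where the NameNode lives (0.20-era key) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```

With this on the client CLASSPATH, a bare FileSystem.get(conf) resolves to the same NameNode without formatting the URI in code.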
---
I don't see anything wrong with the code above, unless I'm just missing
something undocumented...
Anyone, please?
Thanks in advance.
--
Bo
On Thu, Oct 22, 2009 at 1:23 PM, Bogdan M. Maryniuk wrote:
Hi!
I'm seeing quite odd Hadoop behavior. I wrote a client for my app that
simply tries to talk to HDFS and do stuff. The Hadoop version is
0.20.0. I still suspect CLASSPATH, but it would be nice to know the details.
So, here is part of the traceback:
On Wed, Oct 21, 2009 at 2:57 AM, Jakob Homan wrote:
> try here: http://www.cloudera.com/hadoop-world-nyc) this is included in the
> current security effort.
Hi, Jakob.
Thanks for the answer :-) Very interesting.
> You can certainly disable access to the web
> interfaces via a network routing trick
On Tue, Oct 20, 2009 at 10:44 PM, Todd Lipcon wrote:
> Hi John,
> You can see a short slide deck from the July HUG here that includes some
> info about what's new in 0.19 and 0.20:
>
> http://cloudera-todd.s3.amazonaws.com/hug-20090917.pdf
It is a very geeky document, because all I see is the
Hi!
Well, I have a kinda simple question, but I cannot spot a proper doc
for it: how do you guys restrict access to the web interfaces? :-)
Is it somewhere in Jetty, or is there no such feature? I am OK
with simple basic authentication, but I don't really like it when
others are staring at l
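[Editor's sketch] One common answer (my assumption, not something this thread confirms) is to firewall the UI ports rather than touch Jetty. A rough iptables sketch, assuming the 0.20 default ports (50070 for the NameNode UI, 50030 for the JobTracker UI) and a hypothetical admin subnet:

```shell
# Let only the admin subnet reach the Hadoop web UIs; drop everyone else.
# 50070/50030 are 0.20 defaults -- match them to your dfs.http.address
# and mapred.job.tracker.http.address settings.
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 50070 -j ACCEPT
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 50030 -j ACCEPT
iptables -A INPUT -p tcp --dport 50070 -j DROP
iptables -A INPUT -p tcp --dport 50030 -j DROP
```

This gives no per-user auth, but it keeps casual eyes off the UIs without patching Hadoop.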
On Mon, Oct 12, 2009 at 5:56 AM, Martin Hall wrote:
> The license on the beta product is open-ended and we're committed to a free
> version of the product in final release with at least as much functionality
> as you see in the product today.
>
> We're a business and have to find a way to make mon
On Sat, Oct 10, 2009 at 9:47 AM, Shevek wrote:
> We're rather proud to announce an updated beta release of Karmasphere
> Studio for Hadoop, a cross-platform desktop IDE for developing,
> debugging, deploying and monitoring applications based on Hadoop.
>
> [ ... ]
>
> Download it all for free from
On Mon, Sep 7, 2009 at 6:18 PM, Ted Yu wrote:
> We're using hadoop 0.20.0 to analyze large log files from web servers.
> I am looking for better HDFS support so that I don't have to copy log files
> from Linux File System over.
FUSE + syslog + FIFO?
--
Kind regards, BM
Things, that are stupid a
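[Editor's sketch] To unpack the FUSE + syslog + FIFO one-liner (my reading of it, with hypothetical hosts and paths): mount HDFS through the contrib fuse-dfs module, then let syslogd write through a named pipe into the mount:

```shell
# Mount HDFS via contrib/fuse-dfs (host/paths are illustrative)
fuse_dfs_wrapper.sh dfs://namenode.example.com:9000 /mnt/hdfs

# Create a FIFO and drain it into a file on the HDFS mount
mkfifo /var/log/hdfs.fifo
cat /var/log/hdfs.fifo >> /mnt/hdfs/logs/web.log &

# /etc/syslog.conf: send web-server logs into the FIFO
#   local0.*    |/var/log/hdfs.fifo
```

Whether fuse-dfs on 0.20-era HDFS handles this continuous write pattern gracefully is something to test before trusting it with real logs.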
On Mon, Aug 17, 2009 at 9:01 AM, Edward Capriolo wrote:
> Linux is the main target platform. If you choose another
> platform, you have more work for yourself.
Well, in some cases yes, as long as you have JNI... :-( That's why Sun
discourages people from using it and wants things done in plain Java.
Howeve
On Mon, Aug 17, 2009 at 12:48 AM, Edward Capriolo wrote:
> My quick fix was to turn off compression. I am probably the ONLY
> person on the internet trying to do this.
Well, yes... Because why the hell do you need that FreeBSD thing with
outdated and nearly unusable ZFS (although they claim they f
On Sat, Aug 15, 2009 at 5:55 AM, Tom Wheeler wrote:
> I'd expect performance between either OS on the same hardware to be
> pretty similar, but it's always hard to speculate on performance. The
> best option would be for you to do a proof of concept with a couple of
> machines so you can gauge what
On Fri, Aug 14, 2009 at 5:07 PM, Vuk Ercegovac wrote:
> I will be out of the office starting 08/14/2009 and will not return until
> 09/02/2009.
>
> I will be in Europe and plan to check email.
Awesome! IBM rules.
Now silly autoreply is gonna spam here on each message. :-(
--
Kind regards, BM
On Fri, Aug 14, 2009 at 2:21 PM, Todd Lipcon wrote:
>> Also make sure you
>> tuned TCP/IP stack, which is by default too conservative.
>>
>
> Any pointers on this?
You might start here: http://www.sean.de/Solaris/soltune.html
--
Kind regards, BM
Things, that are stupid at the beginning, rarely
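[Editor's sketch] The soltune link above is Solaris-specific; on Linux datanodes the analogous knobs are sysctls. A fragment of /etc/sysctl.conf with commonly raised values (the numbers are illustrative assumptions, not benchmarked recommendations):

```
# /etc/sysctl.conf -- widen TCP buffers and the accept backlog
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.somaxconn = 1024
# apply with: sysctl -p
```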
On Fri, Aug 14, 2009 at 12:27 PM, Jason Venner wrote:
> Anyone have any performance numbers for Solaris- or ZFS-based datanodes?
>
> The directory and inode cache sizes are a limiting factor for linux for
> large and busy datanodes.
Uhmm... I do run it on zoned OpenSolaris, but I don't have a real
On Wed, Aug 12, 2009 at 8:05 PM, tim robertson wrote:
> Is fedora a decent choice of OS for a new hadoop cluster? All our
> other stuff is fedora, but is there a strong case to move to
> something else?
None that is known to the world. For example, I am using OpenSolaris
and running Hadoop on
2009/7/17 Mathias De Maré :
> I'm using Hadoop 0.20.0 (semidistributed mode, or whatever it's called -- I
> can't look up the name, since the documentation on the site seems to be
> down), and I'm experiencing a JobTracker crash every time I start Hadoop.
What is the output of "hadoop dfsadmin -report"
you're a real idiot, man.
Hello, everybody.
Just installed the 0.20 version on 4 nodes. No matter how I configure it
(pretty much standard, though), it always says the configured capacity
is 0 KB with 100% of space used. Any attempt to put a file ends up with an
empty file of size 0. Just for the record, all tmp and HDFS image dirs are redirected
to a solid