Just a side note: I found that both the video and slides work well for me
from the below URL. However, it seems to depend on which proxy
I use. At work I use a proxy located in the US and it works well. At
home I use a proxy located in Europe and the slides cannot be downloaded (at
lea
Another way is being proposed in issue 3149. It is still not integrated, but
you can grab the bits and most likely use them without any change.
On Wed, May 14, 2008 at 10:32 PM, Arv Mistry <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I saw the note at the end of the message below: "Note that
> MultipleOutputForm
Have you tried http://research.yahoo.com/node/2104?
but I cannot download the video, and cannot even find the IE temp file. It seems
Yahoo has put some limit on it.
heyongqiang
2008-05-15
From: Cole Flournoy
Sent: 2008-05-15 03:48:20
To: core-user@hadoop.apache.org
Cc:
Subject: Re: Hadoop summit
My experience is to call Thread.sleep(100) after every N
(say 1000) DFS writes.
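The pattern described above can be sketched as follows. This is a minimal local illustration, not Hadoop API code; the function and parameter names are made up:

```python
import time

def throttled_writes(write_fn, records, batch=1000, pause=0.1):
    """Invoke write_fn once per record, sleeping `pause` seconds
    after every `batch` writes so the DFS client can catch up."""
    for i, rec in enumerate(records, start=1):
        write_fn(rec)
        if i % batch == 0:
            time.sleep(pause)

# Local demonstration with an in-memory sink instead of a real DFS stream.
out = []
throttled_writes(out.append, range(2500), batch=1000, pause=0.01)
print(len(out))  # 2500
```

In a real job, `write_fn` would wrap the write call on the stream returned by `FileSystem.create`.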
> -Original Message-
> From: Xavier Stevens [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, May 14, 2008 10:47 AM
> To: core-user@hadoop.apache.org
> Subject: FileSystem.create
>
> I'm having some probl
And the conf dir (/Users/bryanduxbury/hadoop-0.16.3/conf), I hope it
is similar to the one you are using for your hadoop installation.
I'm not sure I understand this. It isn't similar, it's the same as my
hadoop installation. I'm only operating on localhost at the moment.
I'm just trying
You could do this:
Open up bin/hadoop (it's a shell script). The last line is the one that executes
the corresponding hadoop class; instead of exec, make it echo and see everything
that is present in your classpath. Make sure your generated classpath matches
it. And the conf dir (/Users/bryanduxb
Thanks Runping.
But if that is the case, why did it take less time when I ran on a cluster
of size 1? It should have been the same irrespective of whether I am running
on a cluster of size 1 or more, right?
Thanks
Runping Qi wrote:
>
>
> Your diagnosis sounds reasonable.
> Since the mappers
Nobody has any ideas about this?
-Bryan
On May 13, 2008, at 11:27 AM, Bryan Duxbury wrote:
I'm trying to create a java application that writes to HDFS. I have
it set up such that hadoop-0.16.3 is on my machine, and the env
variables HADOOP_HOME and HADOOP_CONF_DIR point to the correct
res
That really depends on why the name node is in safemode.
If the reason is system startup in which only a few datanodes have reported
in, then the only problem is that some files may not be fully present.
If the reason is some sort of system corruption, it could be a really big
mistake to force t
What is the implication of manually forcing the name node to leave safemode?
What properties does HDFS lose by doing that?
One gain is that the file system becomes available for writes
immediately.
Cagdas
--
Best Regards, Cagdas Evren Gerede
Home Page: http://cagdasgerede.info
I am going to be arriving at SJC at 3PM.
Anybody want to get started early? I am sure that there is plenty to talk
about.
I hear that there is a Bennigan's just outside the office, but anywhere with
good beer and paper napkins should suffice.
On 5/14/08 12:22 PM, "Ajay Anand" <[EMAIL PROTECTE
They haven't been uploaded yet; we are begging and hoping that whoever has
them will post them somewhere. I second Veoh; hadoop rocks.
Cole
On Wed, May 14, 2008 at 4:11 PM, Otis Gospodnetic <
[EMAIL PROTECTED]> wrote:
> I tried finding those Hadoop videos on Veoh, but got 0 hits:
>
>http://w
I tried finding those Hadoop videos on Veoh, but got 0 hits:
http://www.veoh.com/search.html?type=v&search=hadoop
Got URL, Ted?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Ted Dunning <[EMAIL PROTECTED]>
> To: core-user@hadoop.apac
Hadoop 0.17 hasn't been released yet. I (or Mukund) am hoping to
call a vote this afternoon or tomorrow.
Nige
On May 14, 2008, at 12:36 PM, Jeff Eastman wrote:
I'm trying to bring up a cluster on EC2 using
(http://wiki.apache.org/hadoop/AmazonEC2) and it seems that 0.17 is the
version to
Hi Jeff,
There is no public 0.17 AMI yet - we need 0.17 to be released first.
So in the meantime you'll have to build your own.
Tom
On Wed, May 14, 2008 at 8:36 PM, Jeff Eastman
<[EMAIL PROTECTED]> wrote:
> I'm trying to bring up a cluster on EC2 using
> (http://wiki.apache.org/hadoop/AmazonEC2)
I'm trying to bring up a cluster on EC2 using
(http://wiki.apache.org/hadoop/AmazonEC2) and it seems that 0.17 is the
version to use because of the DNS improvements, etc. Unfortunately, I
cannot find a public AMI with this build. Is there one that I'm not
finding or do I need to create one?
Jeff
Was there ever any resolution as to whether there could be some type of webcam
conferencing, or at least a video recording of the meeting for people out of
town?
Thanks,
Cole
On Wed, May 14, 2008 at 3:22 PM, Ajay Anand <[EMAIL PROTECTED]> wrote:
> To clarify, this meeting is intended not just for hado
To clarify, this meeting is intended not just for hadoop users and
developers, but also for pig, mahout, hbase and related technologies.
Ajay
From: Ajay Anand
Sent: Wednesday, May 14, 2008 9:53 AM
To: '[EMAIL PROTECTED]'; '[EMAIL PROTECTED]';
'[EMAIL PROT
(conflict of interest on my part should be noted)
LOL :)
Use Veoh instead. Higher resolution. Higher uptime. Nicer embeds.
And the views get chewed up by hadoop instead of google's implementation!
(conflict of interest on my part should be noted)
On 5/14/08 10:43 AM, "Cole Flournoy" <[EMAIL PROTECTED]> wrote:
> Man, yahoo needs to get their act
I'm having some problems creating a new file on HDFS. I am attempting
to do this after my MapReduce job has finished, when I try to
combine all part-00* files into a single file programmatically. It's
throwing a LeaseExpiredException saying the file I just created doesn't
exist. Any idea wh
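As a local, non-HDFS analogue of the combine step described above (a sketch only; real code would go through Hadoop's FileSystem API, and the file names here are made up):

```python
import glob
import os
import tempfile

def merge_parts(src_dir, dst_path):
    """Concatenate all part-00* files in src_dir into dst_path,
    in sorted order, mimicking a merge of reduce outputs."""
    with open(dst_path, "wb") as dst:
        for part in sorted(glob.glob(os.path.join(src_dir, "part-00*"))):
            with open(part, "rb") as src:
                dst.write(src.read())

# Demonstration with two fake part files in a temp directory.
d = tempfile.mkdtemp()
for name, data in [("part-00000", b"alpha\n"), ("part-00001", b"beta\n")]:
    with open(os.path.join(d, name), "wb") as f:
        f.write(data)
merged = os.path.join(d, "merged")
merge_parts(d, merged)
```

(If shelling out is acceptable, Hadoop's `dfs -getmerge` shell command performs a similar concatenation from HDFS to a local file.)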
Man, Yahoo needs to get their act together with their video service (the
videos are still down)! Is there any way someone can upload these videos to
YouTube and provide a link?
Thanks,
Cole
On Wed, Apr 23, 2008 at 11:36 AM, Chris Mattmann <
[EMAIL PROTECTED]> wrote:
> Thanks, Jeremy. Appreciate
Hi,
I saw the note at the end of the message below: "Note that
MultipleOutputFormat is available in Hadoop-0.17"
Is 0.17 out yet? Can we output multiple files another way?
Cheers Arv
-Original Message-
From: Amar Kamat [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 08, 2008 4:56 AM
Agenda for the Hadoop user group meeting on Wednesday 5/21 6:00-7:30 pm
at Yahoo! Mission College:
- Hadoop .17 release - Sameer Paranjpye
- Mahout update - Jeff Eastman
- And plenty of opportunity for networking, discussions
and beer...
Look forward to seeing
I suggest mapping localhost to the actual IP in your /etc/hosts file and
running it again.
Akshar
On Wed, May 14, 2008 at 9:13 AM, Shimon <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I've set up a standalone hadoop server, and when I run
> bin/hadoop dfs namenode -format
>
> I get the following message
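A hypothetical /etc/hosts entry implementing the suggestion above. The IP address is a placeholder; substitute your machine's real address and hostname:

```
# /etc/hosts -- map localhost (and the hostname) to the machine's real IP
# 192.168.1.10 and myhost are placeholders
192.168.1.10   localhost   myhost
```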
Hi all,
I've set up a standalone hadoop server, and when I run
bin/hadoop dfs namenode -format
I get the following message (repeating 10 times):
ipc.Client: Retrying connect to server: localhost/127.0.0.1:5
My hadoop-site.xml file is as follows:
fs.default.name
localhos
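For reference, a minimal hadoop-site.xml of the shape being quoted would look roughly like this. Only the property name `fs.default.name` comes from the message above; the port was truncated in the original, so a placeholder is used:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- PORT is a placeholder; the original message's value was cut off -->
    <value>localhost:PORT</value>
  </property>
</configuration>
```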
Does the syslog output from a should-have-failed task contain
something like this?
java.lang.RuntimeException: PipeMapRed.waitOutputThreads():
subprocess failed with code 1
(In particular, I'm curious if it mentions the RuntimeException.)
Tasks that consume all their input and then exit non-
Hi,
I've tested this new option "-jobconf
stream.non.zero.exit.status.is.failure=true". It seems to work, but it is
still not good enough for me. When the mapper/reducer program has read all
input data successfully and fails after that, streaming still finishes
successfully, so there is no chance to know about
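A toy Python streaming mapper that reproduces the failure mode described above: it consumes all of its input successfully and only then exits non-zero. The mapper logic itself is made up:

```python
import sys

def map_lines(lines):
    """Toy mapper body: emit 'key<TAB>1' for each input line."""
    return ["%s\t1" % line.rstrip("\n") for line in lines]

def main():
    for rec in map_lines(sys.stdin):
        print(rec)
    # All input was consumed successfully, but the process then fails.
    # Without stream.non.zero.exit.status.is.failure=true, streaming can
    # still report overall job success in this case.
    sys.exit(1)

# main() is intentionally not invoked here so the sketch stays importable.
```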
Hi,
I have a working 0.15.3 install and am trying to upgrade to 0.16.4. I
want to start clean with an empty filesystem, so I just reformatted
the filesystem instead of using the upgrade option. When I run
start-all.sh, I get a null pointer exception originating from the
NetUtils.getServerAddress
Your diagnosis sounds reasonable.
Since the mappers of your optimized solution output 3 key/value pairs
for each input key/value pair, the map output size may be three times
the input size for each mapper. That size may exceed the value of
io.sort.mb in your configuration. If so, the mappers h
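Runping's sizing argument can be checked with a quick back-of-the-envelope calculation. The split size and io.sort.mb value below are assumptions (100 MB was a common io.sort.mb setting of that era):

```python
def spills_expected(input_mb, output_ratio, io_sort_mb):
    """Estimate a mapper's output size and whether it exceeds the
    in-memory sort buffer, which would force on-disk spills."""
    map_output_mb = input_mb * output_ratio
    return map_output_mb, map_output_mb > io_sort_mb

# A mapper reading a 128 MB split and emitting 3 records per input record:
out_mb, spills = spills_expected(128, 3, 100)
print(out_mb, spills)  # 384 True
```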
Hi,
I have been working on a problem where I have to process particular data
and return three varieties of data; then I have to process each of them
and store each variety of data in a separate file.
To solve the above problem, I have proposed two solutions. One I
called un-optimi