Hi Group,
Thanks for the effort placed into making Hadoop available, and for the tips
and suggestions posted to this mailing list.
I'm currently looking into using Hadoop and, along the way, writing a Ruby DSL
for Cascading (www.cascading.org).
Some homework has generated two related ideas. I'll
It seems that this is not the point. ;=(
And in this case, my cluster crashes very easily.
CP wrote:
> According to the configuration settings manual on this website
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
> I think it should be
>
> Master
> Slave1
Thanks, imcaptor and Kyle.
I added lucene-core-2.3.1.jar to lib. At first it couldn't run correctly, but
suddenly there were no errors the next day.
2008-10-16
chenlbspace
From: chenlbspace
Sent: 2008-10-10 16:35:40
To: core-user@hadoop.apache.org
Cc:
Subject: hadoop index org/apache/lucene/store/Direct
only the IPs of each node. For example:
192.168.52.129
192.168.49.40
192.168.55.104
192.168.49.148
They have totally different hardware configurations; does that matter?
Thx!
David
CP wrote:
> What is your configuration in $HADOOP_HOME/conf/slaves ?
>
> - Original Message -
> From: "David Wei"
What is your configuration in $HADOOP_HOME/conf/slaves ?
- Original Message -
From: "David Wei" <[EMAIL PROTECTED]>
To:
Sent: Thursday, October 16, 2008 11:05 AM
Subject: Why did I only get 2 live datanodes?
We had installed hadoop on 4 machines, and one of them has been chosen
to be master as well as a slave. The rest of the machines are configured as
slaves. But it is strange that we can only see 2 live nodes on the web
UI: http://192.168.52.129:50070/dfshealth.jsp (master machine). When we
try to refresh the
I ran into the same problem today. Found a solution that works:
bin/hadoop jar -libjars hadoop-xxx-index.jar -inputPaths...
E.g., in your particular case, your command will look like:
bin/hadoop jar -libjars lib/lucene-core-2.3.1.jar contrib/index/hadoop-0.17.1-index.jar -inputPaths src/contrib
-log
-C
On Oct 15, 2008, at 4:10 PM, Sriram Rao wrote:
Hi,
when using distcp, we often find that the distcp logs end up in the
root of the target directory. For instance,
bin/hadoop distcp src dest
We end up with: dest/_distcp_logs_...
Is there a way to (1) tell distcp to put the logs elsewhere and (2)
nuke the logs when done?
Is there a way to change the number of mappers from the Hadoop streaming command line?
I know I can change hadoop-default.xml:
<property>
  <name>mapred.map.tasks</name>
  <value>10</value>
  <description>The default number of map tasks per job. Typically set
  to a prime several times greater than number of available hosts.
  Ignored when mapred.job.tracker is "local".</description>
</property>
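For what it's worth, a minimal sketch of the command-line route, assuming an 0.18-era streaming jar path (adjust to your version): streaming accepts -jobconf name=value to override job properties per invocation, so hadoop-default.xml doesn't need to be touched. The input/output paths and mapper/reducer below are placeholders:
bin/hadoop jar contrib/streaming/hadoop-0.18.1-streaming.jar \
    -jobconf mapred.map.tasks=10 \
    -input myInput -output myOutput \
    -mapper /bin/cat -reducer /usr/bin/wc
Keep in mind that mapred.map.tasks is only a hint; the actual number of maps is largely determined by the number of input splits.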
Hello all,
I'm new to this list and to Hadoop too. I'm testing some basic
configurations before I start on my own experiments. I've installed
a Hadoop cluster of 2 machines as explained here:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
I'm not using U
Hi,
when using distcp, we often find that the distcp logs end up in the
root of the target directory. For instance,
bin/hadoop distcp src dest
We end up with: dest/_distcp_logs_...
Is there a way to (1) tell distcp to put the logs elsewhere and (2)
nuke the logs when done?
Sriram
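If I remember the options correctly, distcp answers (1) with a -log flag that redirects its logs to a directory of your choosing; the path below is hypothetical:
bin/hadoop distcp -log /tmp/distcp_logs src dest
For (2), there is no automatic cleanup that I know of, but the chosen log directory can simply be removed afterwards, e.g. bin/hadoop fs -rmr /tmp/distcp_logs.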
I'm trying to play with Chukwa, but I'm struggling to get anything going.
I've been operating off of the wiki entry (<
http://wiki.apache.org/hadoop/Chukwa_Quick_Start>), making revisions as I go
along. It's unclear to me how to 1) create an adapter and 2) start HICC
(see the wiki for more information).
Hi,
I didn't RSVP for this event. I would like to join with 2 of my colleagues.
Please let us know if we can.
Best
Bhupesh
On 10/15/08 11:56 AM, "Steve Gao" <[EMAIL PROTECTED]> wrote:
> I am excited to see the slides. Would you send me a copy? Thanks.
>
> --- On Wed, 10/15/08, Nishant Kh
I am excited to see the slides. Would you send me a copy? Thanks.
--- On Wed, 10/15/08, Nishant Khurana <[EMAIL PROTECTED]> wrote:
From: Nishant Khurana <[EMAIL PROTECTED]>
Subject: Re: Hadoop User Group (Bay Area) Oct 15th
To: core-user@hadoop.apache.org
Date: Wednesday, October 15, 2008, 9:45 AM
In the future, you will generally get a quicker answer to hbase
questions by subscribing to [EMAIL PROTECTED]
There are a couple of ways for a non-hadoop client to access
hbase:
- via Thrift
- via the REST interface
- install hbase on the client with configuration files that
point to your hbase
From my understanding of the problem, you can:
- keep the image binary data in sequence files
- copy the image whose similar images will be searched for to DFS with high replication (see the note after this list)
- in each map, calculate the similarity to the image
- output only the similar images from the map
- no need for a reduce step
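A note on the high-replication step above: the replication factor can be raised per file from the shell after copying, so that most nodes end up holding a local copy of the query image. A sketch with hypothetical paths and a replication factor of 10:
bin/hadoop fs -put query-image.jpg /images/query-image.jpg
bin/hadoop fs -setrep -w 10 /images/query-image.jpg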
Hi to all, I'm a new subscriber of the group, and I have started to work on a
hadoop-based project.
In our application, there is a huge number of images with a regular pattern,
differing in 4 parts/blocks.
The system takes an image as input and looks for a similar image, considering
whether all these 4 parts match
I would love to see the slides too. I am especially interested in
implementing database joins with MapReduce.
On Wed, Oct 15, 2008 at 7:24 AM, Johan Oskarsson <[EMAIL PROTECTED]> wrote:
> Since I'm not based in the San Francisco area, I would love to see the slides
> from this meetup uploaded somewhere
Thanks Steve..
will try working on the patch..
S.Chandravadana
Steve Loughran wrote:
>
> chandravadana wrote:
>> ya.. will write up in hadoop wiki..
>> is there a way other than copying from local filesystem to hdfs...
>> like writing directly to hdfs...?
>>
>
> Can you patch jfreechart to write directly to HDFS files?
Since I'm not based in the San Francisco area, I would love to see the slides
from this meetup uploaded somewhere. The database join
techniques talk especially sounds very interesting to me.
/Johan
Ajay Anand wrote:
> The next Bay Area User Group meeting is scheduled for October 15th at
> Yahoo! 2821 M
But another datanode's (node2) log file is different. It shows the following:
2008-10-15 10:42:48,659 WARN org.apache.hadoop.dfs.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: ProcessReport
from unregisterted node: 192.168.1.3:50010
at
org.apache.hadoop.dfs.FSNamesystem.
This is the content of node4's log file:
2008-10-15 16:18:59,406 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node4.cluster1domain/192.168.1.5
Please check the logs of the nodes that didn't come up.
On Wed, Oct 15, 2008 at 6:46 PM, ZhiHong Fu <[EMAIL PROTECTED]> wrote:
> Yes, thanks, I have tried as you suggested, and I have left safemode, but
> when I run bin/hadoop dfsadmin -report, there is still no datanode
> available.
>
> 2008/1
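In case it helps with checking those logs: by default each daemon writes to $HADOOP_HOME/logs, with file names built from the daemon type and host (exact names vary by setup). A sketch for a quick look at a datanode log:
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log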
Yes, thanks, I have tried as you suggested, and I have left safemode, but
when I run bin/hadoop dfsadmin -report, there is still no datanode
available.
2008/10/15 Prasad Pingali <[EMAIL PROTECTED]>
> hello,
> The report shows your dfs is not yet started. Sometimes it may take a
> minute or two to start dfs on a small cluster.
Amit k. Saha wrote:
On Wed, Oct 15, 2008 at 9:09 AM, David Wei <[EMAIL PROTECTED]> wrote:
It seems that we need to restart the whole hadoop system in order to add new
nodes to the cluster. Is there any solution that doesn't require
rebooting?
From what I know so far, you have to start the HD
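For what it's worth, a common way to add a node without restarting the whole cluster is to bring up the daemons on the new node itself, assuming Hadoop and the same conf files are already in place there. A sketch:
# run on the newly added node
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker
Adding the node to conf/slaves on the master afterwards ensures future start-all.sh/stop-all.sh runs include it.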
chandravadana wrote:
ya.. will write it up in the hadoop wiki..
is there a way other than copying from the local filesystem to hdfs...
like writing directly to hdfs...?
Can you patch jfreechart to write directly to HDFS files? That is the
only way to do it right now, unless you can mount the HDFS filesystem
hello,
The report shows your dfs is not yet started. Sometimes it may take a
minute or two to start dfs on a small cluster. Did you wait for some time for
dfs to start and leave safe mode?
- Prasad.
On Wednesday 15 October 2008 01:57:44 pm ZhiHong Fu wrote:
> Hello:
>
> I have installed
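For reference, a sketch of checking and leaving safe mode by hand, and then re-running the report:
bin/hadoop dfsadmin -safemode get
bin/hadoop dfsadmin -safemode leave
bin/hadoop dfsadmin -report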
Hi,
Is this scenario possible?
Suppose HBase runs on Hadoop and data is read/written to HBase.
Is it possible for a separate client program (one that does not run on
Hadoop) to read/write data from HBase?
--
Best Regards
S.Chandravadana
Hello:
I have installed hadoop on a cluster which has 7 nodes; one is the
namenode and the other 6 nodes are datanodes. At that time it ran
normally, and I also ran the wordcount example; it worked well.
But today I wanted to run a mapred application, and it reported an error. I
found some da