Where should I fix this? Why did it generate 0 records?
pvvpr wrote:
>
> Basically your indexes are empty since no URLs were generated and fetched.
> See this:
>
>> > - Generator: 0 records selected for fetching, exiting ...
>> > - Stopping at depth=0 - no more URLs to fetch.
>> > - No URLs
How can I get parameters from my mappers?
I can't do it with JobConf; I've tried that.
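For reference, the usual pattern in the old org.apache.hadoop.mapred API is to
set the value on the JobConf at submission time and read it back in the
mapper's configure() method. A minimal sketch, not tested against 0.12; the
parameter name my.param is made up, and the generic Mapper signature may need
de-generifying on older releases:

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.Mapper;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reporter;

  public class ParamMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {

    private String param;

    // Hadoop calls configure() once per task with the job's
    // configuration; per-job parameters are read back here.
    public void configure(JobConf job) {
      param = job.get("my.param", "default-value");
    }

    public void map(LongWritable key, Text value,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
      output.collect(new Text(param), value);
    }
  }

At submission time, set the parameter before running the job:
job.set("my.param", "some-value"); then JobClient.runJob(job).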
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Dyer
Sent: November 7, 2007 6:04
To: hadoop-user@lucene.apache.org;
[EMAIL PROTECTED]
Subject: Re: configuration for mappers?
Hi S
Billy,
Are you referring to snapshots of the entire DFS or of HBase?
---
Jim Kellerman, Senior Engineer; Powerset
> -Original Message-
> From: news [mailto:[EMAIL PROTECTED] On Behalf Of Billy
> Sent: Tuesday, December 18, 2007 4:29 PM
> To: hadoop-user@lucene.apache.org
> Subject: poin
look in src\contrib\hbase\src\test\org\apache\hadoop\hbase\TestHBaseCluster.java
On Dec 18, 2007 7:51 PM, ma qiang <[EMAIL PROTECTED]> wrote:
> Hi colleagues,
> After reading the API docs about HBase, I don't know how to
> manipulate HBase using the Java API. Would you please send me some
>
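A minimal sketch of writing and reading one cell with the 0.x-era client
classes; method names are recalled from that era's API and not verified
against a specific release, the table name webtable is assumed, and the test
file above is the authoritative reference:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HTable;
  import org.apache.hadoop.io.Text;

  public class HBaseHello {
    public static void main(String[] args) throws Exception {
      // Picks up hbase-site.xml from the classpath.
      HBaseConfiguration conf = new HBaseConfiguration();
      // Open an existing table.
      HTable table = new HTable(conf, new Text("webtable"));

      // Write one cell: lock the row, put the value, commit.
      long lockid = table.startUpdate(new Text("row1"));
      table.put(lockid, new Text("contents:"), "hello".getBytes());
      table.commit(lockid);

      // Read the cell back.
      byte[] bytes = table.get(new Text("row1"), new Text("contents:"));
      System.out.println(new String(bytes));
    }
  }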
Basically your indexes are empty since no URLs were generated and fetched.
See this:
> > - Generator: 0 records selected for fetching, exiting ...
> > - Stopping at depth=0 - no more URLs to fetch.
> > - No URLs to fetch - check your seed list and URL filters.
> > - crawl finished: crawled
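When the injector reports no URLs, the two things to check are the seed list
and conf/crawl-urlfilter.txt (the crawl command applies that filter). A sketch
with assumed paths and domain:

  urls/seed.txt:
    http://lucene.apache.org/

  conf/crawl-urlfilter.txt:
    +^http://([a-z0-9]*\.)*apache.org/

Each line in the filter file is a regex prefixed with + (accept) or - (reject);
the first pattern that matches a URL decides its fate, so a seed that matches
no + line is silently dropped.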
Hi colleagues,
After reading the API docs about HBase, I don't know how to
manipulate HBase using the Java API. Would you please send me some
examples?
Thank you!
Ma Qiang
Department of Computer Science and Engineering
Fudan University
Shanghai, P. R. China
I can't solve it; please help me.
jibjoice wrote:
>
> I am using nutch-0.9 and hadoop-0.12.2. When I run the command "bin/nutch
> crawl urls -dir crawled -depth 3", I get this error:
>
> - crawl started in: crawled
> - rootUrlDir = input
> - threads = 10
> - depth = 3
> - Injector: starting
> - Injector: cra
I've been looking around JIRA and cannot find an issue on snapshots. Is there
a snapshot-for-backup option in the works?
Say I want to do a backup of my data: I would run a snapshot and it would be
stored in the DFS as a backup file(s), but I could restore it if needed later
down the road if current
Thanks for that.
I had two blocks I had to delete with hadoop fsck / -delete because they
were corrupted, but I am unsure whether I lost data from HBase. It looks like
I still have data; I'm just not sure what the corrupted blocks were. If I did
lose some info, it was not much.
I would think there would be a
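For anyone else in this spot: before running with -delete, fsck can report
which files the bad blocks belong to, e.g. (flags as I recall them from this
era; run bin/hadoop fsck with no arguments to see the exact usage):

  bin/hadoop fsck / -files -blocks -locations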
Just to close the loop on this, and to make sure someone else doesn't have the
same problem: this turned out to be a case of cockpit error.
I had misread the documentation concerning
mapred.task.tracker.report.bindAddress and had set it to point to the master
node. I should have left this at its default.
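For reference, the property lives in hadoop-site.xml on each tasktracker; a
sketch of the entry, assuming the default of binding all interfaces (check
hadoop-default.xml in your release for the actual default):

  <property>
    <name>mapred.task.tracker.report.bindAddress</name>
    <value>0.0.0.0</value>
  </property>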
See if last item in FAQ fixes your issue Billy:
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ
St.Ack
Billy wrote:
I have tried to load HBase several times and it always keeps failing:
2007-12-18 14:21:45,062 FATAL org.apache.hadoop.hbase.HRegionServer: Replay
of hlog required. Forcing server resta
Try the following:
hql> create table webtable(
--> contents MAX_VERSIONS=10 COMPRESSION=BLOCK,
--> anchor MAX_LENGTH=256 BLOOMFILTER=COUNTING_BLOOMFILTER
--> VECTOR_SIZE=1 NUM_HASH=4);
* BLOOMFILTER=NONE|BLOOMFILTER|COUNTING_BLOOMFILTER|RETOUCHED_BLOOMFILTER
Thanks,
Edward.
--
I tried to enter some test tables in HBase. The example was taken from
http://wiki.apache.org/lucene-hadoop/Hbase/HbaseShell but it fails no
matter which bloomfilter option I choose. Is there a better tutorial?
Hbase> CREATE TABLE webtable (
--> contents MAX_VERSIONS=10 COMPRESSION=BLOCK,
M.Shiva wrote:
Hi,
We have followed the steps at
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
to run Hadoop on Linux as a multi-node cluster.
We're working with 2 nodes: one as master, the other as slave. We have started
the namenode, but the slave node fai
Hi,
We have followed the steps at
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
to run Hadoop on Linux as a multi-node cluster.
We're working with 2 nodes: one as master, the other as slave. We have started
the namenode, but the slave node fails.
Since we ar
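In case it helps, the master's conf/masters and conf/slaves files from that
tutorial typically look like the following (hostnames assumed; each name must
also resolve the same way on both machines, e.g. via /etc/hosts):

  conf/masters:
    master

  conf/slaves:
    master
    slave

If the namenode starts but the slave does not join, the datanode and
tasktracker logs under logs/ on the slave usually show why.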