[ http://issues.apache.org/jira/browse/HADOOP-153?page=comments#action_12375457 ]
Runping Qi commented on HADOOP-153:
---
+1
Exceptions in the map and reduce functions that are implemented by the user
should be handled by the user within the functions.
In
[ http://issues.apache.org/jira/browse/HADOOP-153?page=comments#action_12375455 ]
eric baldeschwieler commented on HADOOP-153:
Sounds good. The acceptable % should probably be configurable. I'd be
inclined to use something more like 1%. You c
Hmm,
The client is timing out when it is getting data? Maybe as long as
it is getting data, it should reset its timer? Maybe the server
should fail a client if it is busy? This would let you make an informed
decision.
On Apr 20, 2006, at 11:24 AM, paul sutter (JIRA) wrote:
[ http://
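Doug's suggestion above, resetting the timer as long as data is arriving, can be sketched as a deadline that slides forward on every successful read. This is only an illustrative sketch under that assumption, not Hadoop's actual IPC or copy code; the class and method names below are made up for the example.

```java
// Illustrative sketch of a progress-aware timeout: instead of one fixed
// deadline for a whole transfer, the deadline slides forward whenever
// bytes actually arrive. None of these names come from Hadoop itself.
public class SlidingTimeout {
    private final long timeoutMillis;
    private long deadline;

    public SlidingTimeout(long timeoutMillis, long now) {
        this.timeoutMillis = timeoutMillis;
        this.deadline = now + timeoutMillis;
    }

    /** Call after each read; chunks of size > 0 count as progress. */
    public boolean stillAlive(int bytesRead, long now) {
        if (bytesRead > 0) {
            deadline = now + timeoutMillis;  // progress: reset the timer
        }
        return now <= deadline;
    }

    public static void main(String[] args) {
        SlidingTimeout t = new SlidingTimeout(100, 0);
        System.out.println(t.stillAlive(10, 90));   // true: data arrived, timer resets
        System.out.println(t.stillAlive(0, 180));   // true: still within the new deadline
        System.out.println(t.stillAlive(0, 200));   // false: no progress for > 100 ms
    }
}
```

A fixed deadline would have failed the second read at t=180; the sliding deadline only fails a client that has made no progress for a full timeout window.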
Folks can say whether they'll attend at:
http://www.evite.com/app/publicUrl/[EMAIL PROTECTED]/nutch-1
Doug
(with apologies for multiple postings)
Dear Nutch users, Dear Nutch developers, Dear Hadoop developers,
we would love to invite you to the Nutch user meeting in San Francisco.
Date: Thursday, May 18th, 2006
Time: 7 PM.
Location: Cafe Du Soleil, 200 Fillmore St, San Francisco, CA 94117.
(Th
[ http://issues.apache.org/jira/browse/HADOOP-132?page=all ]
Sameer Paranjpye updated HADOOP-132:
Fix Version: 0.2
Version: 0.2
Description:
I'd like to propose adding an API for reporting performance metrics. I will
post some javadoc a
[ http://issues.apache.org/jira/browse/HADOOP-132?page=comments#action_12375438 ]
Doug Cutting commented on HADOOP-132:
-
+1 Looks good to me!
I think there's a typo in the overview, where you should have setMetric() you
instead have setGauge().
> An
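The API under review is attached as javadoc, so only the comment about setMetric() vs. setGauge() survives here. Purely as an illustration of the kind of interface being discussed, a minimal metrics record might look like the following; these names and signatures are hypothetical and are NOT the actual HADOOP-132 patch.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of a metrics-reporting record of the kind
// discussed in HADOOP-132; these names are NOT the actual patch's API.
public class MetricsRecord {
    private final Map<String, Number> values = new HashMap<>();

    /** Set the current value of a named metric. */
    public void setMetric(String name, Number value) {
        values.put(name, value);
    }

    public Number getMetric(String name) {
        return values.get(name);
    }

    public static void main(String[] args) {
        MetricsRecord rec = new MetricsRecord();
        rec.setMetric("mapsCompleted", 42);
        System.out.println(rec.getMetric("mapsCompleted"));  // 42
    }
}
```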
[ http://issues.apache.org/jira/browse/HADOOP-108?page=comments#action_12375436 ]
Igor Bolotin commented on HADOOP-108:
-
This problem hasn't happened to us since upgrading Hadoop.
Also, based on the description, this one looks like a duplicate of H
Reducer threw IOEOFException
-
Key: HADOOP-156
URL: http://issues.apache.org/jira/browse/HADOOP-156
Project: Hadoop
Type: Bug
Reporter: Runping Qi
A job was running with all the map tasks completed.
The reducers were appending the
Add a conf dir parameter to the scripts
---
Key: HADOOP-155
URL: http://issues.apache.org/jira/browse/HADOOP-155
Project: Hadoop
Type: Improvement
Components: conf
Reporter: Owen O'Malley
We'd like a conf_dir parameter o
fsck fails when there is no file in dfs
---
Key: HADOOP-154
URL: http://issues.apache.org/jira/browse/HADOOP-154
Project: Hadoop
Type: Bug
Components: dfs
Versions: 0.1.1
Reporter: Lei Chen
Priority: Trivial
[ http://issues.apache.org/jira/browse/HADOOP-132?page=all ]
David Bowen updated HADOOP-132:
---
Attachment: javadoc.tgz
Here is an updated API, incorporating the feedback I've received so far. The
main
changes are
(1) Ganglia support is included - thi
Hi hadoop developers,
I'm looking for a hint or inspiration for a problem I would love to
solve with the hadoop platform but it is not map reduce related.
My data structure is built from rows, and each row has a set of
columns and column values.
For example row key: cnn.com column keys: user
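The row/column structure described here (a row key mapping to column keys and values, much like Bigtable's data model) can be represented as a nested map. Below is a minimal sketch of that representation; the column keys and values are invented to continue the cnn.com example and are not from the original mail.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a row -> (column -> value) structure like the one
// described: each row key maps to a set of column keys with values.
// The keys below are invented to continue the cnn.com example.
public class RowStore {
    private final Map<String, Map<String, String>> rows = new HashMap<>();

    public void put(String rowKey, String columnKey, String value) {
        rows.computeIfAbsent(rowKey, k -> new HashMap<>()).put(columnKey, value);
    }

    public String get(String rowKey, String columnKey) {
        Map<String, String> row = rows.get(rowKey);
        return row == null ? null : row.get(columnKey);
    }

    public static void main(String[] args) {
        RowStore store = new RowStore();
        store.put("cnn.com", "user:alice", "visited");
        store.put("cnn.com", "lang", "en");
        System.out.println(store.get("cnn.com", "lang"));  // en
    }
}
```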
[ http://issues.apache.org/jira/browse/HADOOP-153?page=all ]
Doug Cutting updated HADOOP-153:
Fix Version: 0.2
> skip records that throw exceptions
> --
>
> Key: HADOOP-153
> URL: http://issues.apache.org
[ http://issues.apache.org/jira/browse/HADOOP-153?page=all ]
Doug Cutting updated HADOOP-153:
Version: 0.2
Assign To: Doug Cutting
> skip records that throw exceptions
> --
>
> Key: HADOOP-153
> URL
[ http://issues.apache.org/jira/browse/HADOOP-141?page=comments#action_12375411 ]
paul sutter commented on HADOOP-141:
A few timeouts would be fine. The problem is when the same files time out over
and over again, and progress ceases completely.
I was
[ http://issues.apache.org/jira/browse/HADOOP-153?page=comments#action_12375408 ]
[EMAIL PROTECTED] commented on HADOOP-153:
--
+1
This would be a generalization of the checksum handler that tries to skip
records when 'io.skip.checksum.errors' is se
[ http://issues.apache.org/jira/browse/HADOOP-69?page=all ]
Doug Cutting resolved HADOOP-69:
Fix Version: 0.2
Resolution: Fixed
Committed. Sorry, this fell off my radar. Thanks for the reminder. I fixed
something related in:
http://svn.apa
[ http://issues.apache.org/jira/browse/HADOOP-153?page=comments#action_12375405 ]
Sameer Paranjpye commented on HADOOP-153:
-
+1
This would be a cool feature to have. Perhaps the exceptions should also be
made visible at the jobtracker. An extension
skip records that throw exceptions
--
Key: HADOOP-153
URL: http://issues.apache.org/jira/browse/HADOOP-153
Project: Hadoop
Type: New Feature
Components: mapred
Reporter: Doug Cutting
MapReduce should skip records that throw
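The behavior proposed here (and +1'd in the comments above, with eric suggesting a configurable threshold around 1%) can be sketched independently of Hadoop's internals. The following is a hypothetical wrapper, not the actual mapred code: it catches a per-record exception, skips the offending record, and fails the task only once the failure fraction exceeds a configurable limit. All names here are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of "skip records that throw exceptions" (HADOOP-153).
// SkippingMapper and maxFailureFraction are illustrative, not Hadoop APIs.
public class SkippingMapper {
    public static <I, O> List<O> mapAll(List<I> records,
                                        Function<I, O> mapFn,
                                        double maxFailureFraction) {
        List<O> out = new ArrayList<>();
        int failures = 0;
        for (I rec : records) {
            try {
                out.add(mapFn.apply(rec));
            } catch (RuntimeException e) {
                failures++;  // log and skip the bad record
                if ((double) failures / records.size() > maxFailureFraction) {
                    throw new RuntimeException(
                        "too many bad records: " + failures, e);
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> recs = List.of("1", "2", "oops", "4");
        // Allow up to 50% failures; "oops" is skipped, the rest parse fine.
        List<Integer> parsed = mapAll(recs, Integer::parseInt, 0.5);
        System.out.println(parsed);  // [1, 2, 4]
    }
}
```

This is also how Sameer's comment fits in: the `failures` counter is exactly the kind of per-task statistic that could be surfaced at the jobtracker.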
[ http://issues.apache.org/jira/browse/HADOOP-69?page=comments#action_12375403 ]
Bryan Pendleton commented on HADOOP-69:
---
Bump: Any reason this patch hasn't been applied? It looks like it's still
possible for non-found blocks to return null, causing a
Speculative tasks not being scheduled
-
Key: HADOOP-152
URL: http://issues.apache.org/jira/browse/HADOOP-152
Project: Hadoop
Type: Bug
Components: mapred
Versions: 0.2
Environment: ~30 node Opteron cluster
Reporte
[ http://issues.apache.org/jira/browse/HADOOP-141?page=comments#action_12375401 ]
Doug Cutting commented on HADOOP-141:
-
Some timeouts during the copy phase may not be bad. If too many nodes are
transferring from a given node, then it may time out addi
[ http://issues.apache.org/jira/browse/HADOOP-115?page=all ]
Teppo Kurki updated HADOOP-115:
---
Attachment: hadoop-115_ReduceTask.patch
Patch including
TestReduceTask
- generates a bunch of SequenceFiles and reduces them by running a single
ReduceTask
- tw
Hi Doug,
I don't understand the problem here.
There is no real problem, just a question to better understand Hadoop.
My real problem is that the map and reduce tasks have to use the
same key and value classes.
Since changing this is a little more work, as far as I can tell,
I was thin