[ https://issues.apache.org/jira/browse/CASSANDRA-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13538664#comment-13538664 ]

Jonathan Ellis commented on CASSANDRA-5085:
-------------------------------------------

No, the fixes were to the Cassandra server as well as the InputFormat.
                
> Hadoop map reduce jobs work wrong with big wide rows 
> -----------------------------------------------------
>
>                 Key: CASSANDRA-5085
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5085
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 1.1.1
>         Environment: CentOS 6.0, 7-node cluster (24 GB RAM, 16 cores), 
> replication factor = 2
>            Reporter: tu nguyen khac
>
> I have a 7-node Cassandra (1.1.1) and Hadoop (1.0.3) cluster (a TaskTracker 
> is installed on every Cassandra node).
> My column family uses the wide-row pattern: one row contains about 200k 
> columns (max about 300k).
> My problem is that when I use Hadoop to run analytic jobs (counting the 
> occurrences of something in the data), the result I receive is wrong (the 
> count is lower than I expect on test records).
> Another strange thing: when we monitor the job on the JobTracker, the map 
> task progress is reported incorrectly (see my image below), and the number 
> of "Map input records" differs each time I rerun the job on the same data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
