[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-08-09 Thread Gijs S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431887#comment-13431887
 ] 

Gijs S commented on CASSANDRA-4229:
---

I ran into the same problem:

When using Cassandra 1.1.2, some MapReduce jobs run infinitely.

The problem is on the line that Bert Passek refers to:
https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob;f=src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java;h=afbda7f59cea8ecca2c7e84cfb8a04e05c0264ae;hb=dec5eedd38cb64093a965ce96b22c50bc5397b11#l338

Inserting the following statement after that line throws an exception, because 
the startToken is -1: partitioner.getTokenFactory().validate(startToken);

This happens in a setup with a cluster of 4 Cassandra nodes and a Hadoop job 
written with Cascalog (with a custom Cassandra tap for Cascading).

I haven't been able to reproduce the problem with the hadoop_word_count example.

> Infinite MapReduce Task while reading via ColumnFamilyInputFormat
> -
>
> Key: CASSANDRA-4229
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4229
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.1.0
> Environment: Debian Squeeze
>Reporter: bert Passek
> Attachments: screenshot.jpg
>
>
> Hi,
> we recently upgraded Cassandra from version 1.0.9 to 1.1.0. Since then we 
> cannot execute any Hadoop job that reads data from Cassandra via 
> ColumnFamilyInputFormat.
> A map task is created that runs infinitely. We are trying to read from a 
> super column family with roughly 1000 row keys.
> This is the output from the job interface, where we already have 17 million 
> map input records:
> Counter                              Map            Reduce       Total
> Map input records                    17.273.127     0            17.273.127
> Reduce shuffle bytes                 0              391          391
> Spilled Records                      3.288          0            3.288
> Map output bytes                     639.849.351    0            639.849.351
> CPU time spent (ms)                  792.750        7.600        800.350
> Total committed heap usage (bytes)   354.680.832    48.955.392   403.636.224
> Combine input records                17.039.783     0            17.039.783
> SPLIT_RAW_BYTES                      212            0            212
> Reduce input records                 0              0            0
> Reduce input groups                  0              0            0
> Combine output records               3.288          0            3.288
> Physical memory (bytes) snapshot     510.275.584    96.370.688   606.646.272
> Reduce output records                0              0            0
> Virtual memory (bytes) snapshot      1.826.496.512  934.473.728  2.760.970.240
> Map output records                   17.273.126     0            17.273.126
> We had to kill the job and go back to version 1.0.9, because 1.1.0 is not 
> usable for reading from Cassandra.
> Best regards 
> Bert Passek

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-06-29 Thread bert Passek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403834#comment-13403834
 ] 

bert Passek commented on CASSANDRA-4229:


Hello,

I spent some time debugging and found the reason for the problem described 
above.

This is the map function for reading from Cassandra:

protected void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> values, 
Context context);

If you consume the row-key, you must duplicate the ByteBuffer first; otherwise 
the RowIterator in ColumnFamilyRecordReader does not terminate correctly.

This is one of the exit conditions in StaticRowIterator:

startToken = partitioner.getTokenFactory()
                        .toString(partitioner.getToken(Iterables.getLast(rows).key));
if (startToken.equals(split.getEndToken()))
{
    // reached end of the split
    rows = null;
    return;
}

Without duplicating the row-key you will never reach it. This behaviour 
differs from earlier versions of Cassandra and should be clearly documented.
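
The root cause can be demonstrated with plain java.nio, independent of Hadoop 
or Cassandra: relative reads advance the buffer's position, so a map function 
that drains the framework's key buffer leaves nothing for the record reader to 
compute the row token from. A minimal sketch (the class and method names here 
are illustrative, not from the Cassandra source):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class KeyBufferDemo {
    // Simulates a map() body that consumes the key with relative reads,
    // which advances the buffer's position to its limit.
    static void consumeKey(ByteBuffer key) {
        byte[] copy = new byte[key.remaining()];
        key.get(copy); // relative read: position moves forward
    }

    public static void main(String[] args) {
        ByteBuffer key = ByteBuffer.wrap("row-key-1".getBytes(StandardCharsets.UTF_8));

        // Wrong: reading the shared buffer directly exhausts it, so code that
        // later derives the row's token from the same buffer sees zero bytes.
        consumeKey(key);
        System.out.println(key.remaining()); // prints 0

        key.rewind();

        // Right: operate on a duplicate; position/limit changes stay confined
        // to the duplicate, and the original buffer remains intact.
        consumeKey(key.duplicate());
        System.out.println(key.remaining()); // prints 9
    }
}
```

ByteBuffer.duplicate() shares the underlying bytes but keeps independent 
position, limit, and mark, which is exactly why duplicating before reading 
avoids the infinite loop described above.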

Best Regards

bert Passek





[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-06-01 Thread bert Passek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287346#comment-13287346
 ] 

bert Passek commented on CASSANDRA-4229:


I'm trying to build a setup to reproduce it; I will let you know.





[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273687#comment-13273687
 ] 

Jonathan Ellis commented on CASSANDRA-4229:
---

How do you want us to proceed?  Can you give us instructions to reproduce?





[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-09 Thread bert Passek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271643#comment-13271643
 ] 

bert Passek commented on CASSANDRA-4229:


Yes, I can reproduce it. Our development environment actually consists of a 
single node. The Hadoop job is very simple: it just reads data from Cassandra 
and writes it back to Cassandra.





[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271524#comment-13271524
 ] 

Jonathan Ellis commented on CASSANDRA-4229:
---

Can you reproduce on a single node?
