[ https://issues.apache.org/jira/browse/CASSANDRA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493597#comment-13493597 ]

Michael Kjellman edited comment on CASSANDRA-4813 at 11/9/12 2:03 AM:
----------------------------------------------------------------------

New problem now with the current patch:

2012-11-08 15:29:05,323 ERROR org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor: Error in ThreadPoolExecutor
java.lang.RuntimeException: java.io.IOException: Broken pipe
        at com.google.common.base.Throwables.propagate(Throwables.java:156)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89)
        at sun.nio.ch.IOUtil.write(IOUtil.java:60)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
        at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
        at java.nio.channels.Channels.writeFully(Channels.java:98)
        at java.nio.channels.Channels.access$000(Channels.java:61)
        at java.nio.channels.Channels$1.write(Channels.java:174)
        at com.ning.compress.lzf.LZFChunk.writeCompressedHeader(LZFChunk.java:77)
        at com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:132)
        at com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
        at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
        at org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:218)
        at org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:164)
        at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        ... 3 more
                
> Problem using BulkOutputFormat while streaming several SSTables 
> simultaneously from a given node.
> -------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4813
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4813
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>         Environment: I am using SLES 10 SP3, Java 6, 4 Cassandra + Hadoop 
> nodes, 3 Hadoop-only nodes (datanodes/tasktrackers), 1 namenode/jobtracker. 
> The machines used are Six-Core AMD Opteron(tm) Processor 8431, 24 cores and 
> 33 GB of RAM. I get the issue on both Cassandra 1.1.3 and 1.1.5, and I am 
> using Hadoop 0.20.2.
>            Reporter: Ralph Romanos
>            Assignee: Yuki Morishita
>            Priority: Minor
>              Labels: Bulkoutputformat, Hadoop, SSTables
>             Fix For: 1.2.0 rc1
>
>         Attachments: 4813.txt
>
>
> The issue occurs when simultaneously streaming SSTables from the same node to 
> a Cassandra cluster using sstableloader. It seems that Cassandra cannot 
> handle receiving SSTables from the same node simultaneously; when it 
> receives SSTables from two different nodes at the same time, everything works 
> fine. As a consequence, when using BulkOutputFormat to generate SSTables and 
> stream them to a Cassandra cluster, I cannot use more than one reducer per 
> node; otherwise I get a java.io.EOFException in the tasktracker's logs and a 
> java.io.IOException: Broken pipe in the Cassandra logs.
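For reference, the one-reducer-per-node workaround described in the report can be applied by capping the job's reduce task count. This is a hypothetical invocation, not taken from the report: the jar and class names are placeholders, and the node count of 4 matches the reporter's Cassandra cluster.

```shell
# Hypothetical Hadoop 0.20.x invocation: cap the number of reduce tasks at the
# number of Cassandra nodes (4 here), so no node runs more than one streaming
# reducer at a time. Jar, job class, and paths are placeholders.
hadoop jar my-bulkload-job.jar com.example.BulkLoadJob \
    -D mapred.reduce.tasks=4 \
    input_path output_path
```

Equivalently, the job driver can call `JobConf.setNumReduceTasks(4)` in the old (mapred) API that Hadoop 0.20.2 uses.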

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira