I would first ask if you could upgrade to the latest version of Cassandra
2.1.x (presently 2.1.2).

If the issue still occurs consistently, it would be interesting to turn up
logging on the client side and see if something is causing the client to
disconnect during the metadata refresh following the schema change. If this
yields further information, please raise the issue on the driver's user
mailing list.
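
If you happen to be using the Java driver with Logback behind SLF4J (just an
assumption on my part; adjust for your driver and logging setup), a minimal
sketch for turning up the driver's verbosity programmatically would be:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class DriverDebugLogging {
    public static void configure() {
        // Turn up logging for the whole driver package so that connection
        // open/close events and the metadata refresh after the schema change
        // show up in the client log.
        ((Logger) LoggerFactory.getLogger("com.datastax.driver.core")).setLevel(Level.DEBUG);
        // Connection-level details (including unexpected disconnects) are the
        // most interesting part here.
        ((Logger) LoggerFactory.getLogger("com.datastax.driver.core.Connection")).setLevel(Level.TRACE);
    }
}

Call configure() before building the Cluster; the same thing can of course be
done in logback.xml instead.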

Adam Holmberg

On Wed, Jan 28, 2015 at 8:19 PM, Saurabh Sethi <saurabh_se...@symantec.com>
wrote:

> I have a 3-node Cassandra 2.1.0 cluster, and from my unit test I am using
> the DataStax 2.1.4 driver to create a keyspace and then a column family
> within that keyspace.
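>
> In essence the test does the equivalent of the following (a simplified
> sketch, shown with the Java driver API purely for illustration; the contact
> point and table definition are placeholders, not the real test code):
>
> import com.datastax.driver.core.Cluster;
> import com.datastax.driver.core.Session;
>
> public class KeyspaceThenTableTest {
>     public static void main(String[] args) {
>         Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
>         Session session = cluster.connect();
>         session.execute("CREATE KEYSPACE testmaxcolumnskeyspace "
>             + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
>         // This next statement intermittently fails because the keyspace
>         // cannot be found:
>         session.execute("CREATE TABLE testmaxcolumnskeyspace.wide_row_test "
>             + "(id int PRIMARY KEY, value text)");
>         cluster.close();
>     }
> }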
>
> But I do not see the keyspace getting created, and the code that creates
> the column family fails because it cannot find the keyspace. I see the
> following in the system.log file:
>
> INFO  [SharedPool-Worker-1] 2015-01-28 17:59:08,472 MigrationManager.java:229 - Create new Keyspace: KSMetaData{name=testmaxcolumnskeyspace, strategyClass=SimpleStrategy, strategyOptions={replication_factor=1}, cfMetaData={}, durableWrites=true, userTypes=org.apache.cassandra.config.UTMetaData@370ad1d3}
> INFO  [MigrationStage:1] 2015-01-28 17:59:08,476 ColumnFamilyStore.java:856 - Enqueuing flush of schema_keyspaces: 512 (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:22] 2015-01-28 17:59:08,477 Memtable.java:326 - Writing Memtable-schema_keyspaces@1664717092(138 serialized bytes, 3 ops, 0%/0% of on/off-heap limit)
> INFO  [MemtableFlushWriter:22] 2015-01-28 17:59:08,486 Memtable.java:360 - Completed flushing /usr/share/apache-cassandra-2.1.0/bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-118-Data.db (175 bytes) for commitlog position ReplayPosition(segmentId=1422485457803, position=10514)
>
> This issue doesn't always happen. My test sometimes runs fine, but once it
> gets into this state, it stays there for a while and I can consistently
> reproduce it.
>
> Also, when this issue happens for the first time, I see the following
> error message in the system.log file:
>
> ERROR [SharedPool-Worker-1] 2015-01-28 15:08:24,286 ErrorMessage.java:218 - Unexpected exception during request
> java.io.IOException: Connection reset by peer
>         at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_05]
>         at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.8.0_05]
>         at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_05]
>         at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_05]
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375) ~[na:1.8.0_05]
>         at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:878) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:225) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:114) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
>
>
> Does anyone have any idea what might be going on here?
>
> Thanks,
> Saurabh
>
