[jira] [Created] (CASSANDRA-6363) CAS not applied on rows containing an expired ttl column
Michał Ziemski created CASSANDRA-6363: - Summary: CAS not applied on rows containing an expired ttl column Key: CASSANDRA-6363 URL: https://issues.apache.org/jira/browse/CASSANDRA-6363 Project: Cassandra Issue Type: Bug Components: Core Environment: Linux/x64 2.0.2 4-node cluster Reporter: Michał Ziemski

CREATE TABLE session ( id text, usr text, valid int, PRIMARY KEY (id) );
insert into session (id, usr) values ('abc', 'abc');
update session using ttl 1 set valid = 1 where id = 'abc';
(wait 1 sec)
And:
delete from session where id = 'DSYUCTCLSOEKVLAQWNWYLVQMEQGGXD' if usr ='demo';
Yields:
 [applied] | usr
-----------+-----
     False | abc
rather than applying the delete. Executing:
update session set valid = null where id = 'abc';
and again:
delete from session where id = 'DSYUCTCLSOEKVLAQWNWYLVQMEQGGXD' if usr ='demo';
positively deletes the row. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (CASSANDRA-6362) Test node changing IP address
Jonathan Ellis created CASSANDRA-6362: - Summary: Test node changing IP address Key: CASSANDRA-6362 URL: https://issues.apache.org/jira/browse/CASSANDRA-6362 Project: Cassandra Issue Type: Test Components: Core Reporter: Jonathan Ellis Assignee: Ryan McGuire Priority: Minor Fix For: 1.2.12, 2.0.3 Seeing a cluster running 1.2 where a node changing IP address totally confused things, logging "Token X changing ownership from to ..." which is the opposite of what we want to happen. Let's see if we can reproduce. [~jjordan] thinks switching from IP-based identity to hostid-based identity may have broken this.
[jira] [Updated] (CASSANDRA-6349) IOException in MessagingService.run() causes orphaned storage server socket
[ https://issues.apache.org/jira/browse/CASSANDRA-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Stepura updated CASSANDRA-6349: --- Attachment: CASSANDRA-2.0-6349.patch One of the options is to close the socket on IOException and continue. > IOException in MessagingService.run() causes orphaned storage server socket > --- > > Key: CASSANDRA-6349 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6349 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: cassandra 2.0+ > Reporter: Steven Halaka > Assignee: Mikhail Stepura > Attachments: CASSANDRA-2.0-6349.patch > > > The refactoring of reading the message header in MessagingService.run() vs > IncomingTcpConnection seems to mishandle IOException, as the loop is broken > and MessagingService.SocketThread never seems to get reinitialized. > To reproduce: telnet to port 7000 and send random data. This then prevents > any new or restarting node in the cluster from handshaking with this defunct > storage port.
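For illustration, the option described above (close the offending socket and keep accepting) has roughly this shape. This is a minimal sketch, not Cassandra's actual MessagingService code; the class, method names, and handshake magic value are all hypothetical:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of an accept loop that survives a bad client instead of letting an
// IOException break the loop and orphan the server socket.
class ResilientAcceptLoop {
    static final int MAGIC = 0xCA552DFA; // hypothetical handshake magic

    // Validate one incoming connection; returns true if the header read
    // cleanly, false (after closing only that socket) if it did not.
    static boolean handleConnection(Socket socket) {
        try {
            DataInputStream in = new DataInputStream(socket.getInputStream());
            return in.readInt() == MAGIC; // stand-in for the header read
        } catch (IOException e) {
            // Close only the offending connection; the accept loop continues.
            try { socket.close(); } catch (IOException ignored) { }
            return false;
        }
    }

    static void serve(ServerSocket server) {
        Thread t = new Thread(() -> {
            while (!server.isClosed()) {
                try {
                    handleConnection(server.accept());
                } catch (IOException e) {
                    // accept() itself failed: stop only on shutdown.
                    if (server.isClosed()) return;
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```

With this structure, the telnet-garbage repro from the issue would cost one connection rather than the whole storage port.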
[jira] [Created] (CASSANDRA-6361) Gossiper does not work correctly while upgrading c*1.1.9 to c*1.2.6
Boole Guo created CASSANDRA-6361: Summary: Gossiper does not work correctly while upgrading c*1.1.9 to c*1.2.6 Key: CASSANDRA-6361 URL: https://issues.apache.org/jira/browse/CASSANDRA-6361 Project: Cassandra Issue Type: Bug Components: Core Environment: Local env, 4 nodes, 1 cluster Reporter: Boole Guo

While upgrading c* from 1.1.9 to 1.2.6, one node cannot update its gossip info. The output is:

[mis@necaso01 bin]$ ./nodetool ring -h 10.16.40.35
Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: DC1
==
Address      Rack  Status  State   Load       Owns    Token
                                                      85070591730234615865843651857942052864
10.16.40.35  RAC1  Up      Normal  115.83 KB  75.00%  42535295865117307932921825928971026432
10.16.40.53  RAC1  Up      Normal  187.3 KB   25.00%  85070591730234615865843651857942052864

[mis@necaso01 bin]$ ./nodetool ring -h 10.16.40.30
Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: DC1
==
Address      Rack  Status  State   Load       Owns    Token
                                                      127605887595351923798765477786913079296
10.16.40.30  RAC1  Up      Normal  181.84 KB  25.00%  0
10.16.40.35  RAC1  Up      Normal  115.83 KB  25.00%  42535295865117307932921825928971026432
10.16.40.53  RAC1  Up      Normal  187.3 KB   25.00%  85070591730234615865843651857942052864
10.16.40.56  RAC1  Up      Normal  191.24 KB  25.00%  127605887595351923798765477786913079296

The nodetool gossipinfo output is:

[mis@necaso01 bin]$ ./nodetool gossipinfo -h 10.16.40.35
/10.16.40.30
  SEVERITY:0.0
  LOAD:186207.0
  SCHEMA:b7fa4fd0-081d-3466-a57a-0907365b556d
/10.16.40.53
  STATUS:NORMAL,85070591730234615865843651857942052864
  RELEASE_VERSION:1.2.6.1
  RACK:RAC1
  RPC_ADDRESS:10.16.40.53
  NET_VERSION:6
  SEVERITY:0.0
  DC:DC1
  LOAD:191797.0
  SCHEMA:b7fa4fd0-081d-3466-a57a-0907365b556d
  HOST_ID:669984c2-59f0-43db-af84-0981cf9187c5
/10.16.40.35
  STATUS:NORMAL,42535295865117307932921825928971026432
  RELEASE_VERSION:1.2.6.1
  RPC_ADDRESS:10.16.40.35
  RACK:RAC1
  NET_VERSION:6
  SEVERITY:0.0
  DC:DC1
  LOAD:118610.0
  SCHEMA:b7fa4fd0-081d-3466-a57a-0907365b556d
  HOST_ID:f5e600e5-38b1-48eb-ab14-61302cc43f70
/10.16.40.56
  SEVERITY:2.220446049250313E-16
  LOAD:195826.0
  SCHEMA:b7fa4fd0-081d-3466-a57a-0907365b556d
[jira] [Commented] (CASSANDRA-6357) Flush memtables to separate directory
[ https://issues.apache.org/jira/browse/CASSANDRA-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824269#comment-13824269 ] Aleksey Yeschenko commented on CASSANDRA-6357: -- The patch should handle the best-effort failure policy: if the flush directory gets blacklisted, try using one of the writable data dirs instead. > Flush memtables to separate directory > - > > Key: CASSANDRA-6357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6357 > Project: Cassandra > Issue Type: Improvement > Components: Core > Reporter: Patrick McFadin > Assignee: Jonathan Ellis > Attachments: 6357.txt > > > Flush writers are a critical element for keeping a node healthy. When several > compactions run on systems with low-performing data directories, IO comes at a > premium. Once the disk subsystem is saturated, write IO is blocked, which will > cause flush writer threads to back up. Since memtables are large blocks of > memory in the JVM, too much blocking can cause excessive GC over time, > degrading performance. In the worst case, this causes an OOM. > Since compaction is running on the data directories, my proposal is to create > a separate directory for flushing memtables. Potentially we can use the same > methodology of keeping the commit log separate and minimize disk contention > against the critical function of the flushwriter.
[jira] [Created] (CASSANDRA-6360) Make nodetool cfhistograms output easily understandable
Tyler Hobbs created CASSANDRA-6360: -- Summary: Make nodetool cfhistograms output easily understandable Key: CASSANDRA-6360 URL: https://issues.apache.org/jira/browse/CASSANDRA-6360 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Tyler Hobbs Assignee: Tyler Hobbs Priority: Trivial Almost nobody understands the cfhistograms output without googling it. By default, we shouldn't share an axis across all metrics. We can still provide the current format with a --compact option.
[jira] [Assigned] (CASSANDRA-6349) IOException in MessagingService.run() causes orphaned storage server socket
[ https://issues.apache.org/jira/browse/CASSANDRA-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-6349: - Assignee: Mikhail Stepura > IOException in MessagingService.run() causes orphaned storage server socket > --- > > Key: CASSANDRA-6349 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6349 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: cassandra 2.0+ > Reporter: Steven Halaka > Assignee: Mikhail Stepura > > The refactoring of reading the message header in MessagingService.run() vs > IncomingTcpConnection seems to mishandle IOException as the loop is broken > and MessagingService.SocketThread never seems to get reinitialized. > To reproduce: telnet to port 7000 and send random data. This then prevents > any new or restarting node in the cluster from handshaking with this defunct > storage port.
[jira] [Commented] (CASSANDRA-5906) Avoid allocating over-large bloom filters
[ https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824180#comment-13824180 ] Yuki Morishita commented on CASSANDRA-5906: --- Pushing this to be released in 2.1, after CASSANDRA-6356. > Avoid allocating over-large bloom filters > - > > Key: CASSANDRA-5906 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5906 > Project: Cassandra > Issue Type: Improvement > Components: Core > Reporter: Jonathan Ellis > Assignee: Yuki Morishita > Fix For: 2.1 > > > We conservatively estimate the number of partitions post-compaction to be the > total number of partitions pre-compaction. That is, we assume the worst-case > scenario of no partition overlap at all. > This can result in substantial memory wasted in sstables resulting from > highly overlapping compactions.
[jira] [Updated] (CASSANDRA-5906) Avoid allocating over-large bloom filters
[ https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-5906: -- Fix Version/s: (was: 2.0.3) 2.1 > Avoid allocating over-large bloom filters > - > > Key: CASSANDRA-5906 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5906 > Project: Cassandra > Issue Type: Improvement > Components: Core > Reporter: Jonathan Ellis > Assignee: Yuki Morishita > Fix For: 2.1 > > > We conservatively estimate the number of partitions post-compaction to be the > total number of partitions pre-compaction. That is, we assume the worst-case > scenario of no partition overlap at all. > This can result in substantial memory wasted in sstables resulting from > highly overlapping compactions.
[jira] [Updated] (CASSANDRA-6357) Flush memtables to separate directory
[ https://issues.apache.org/jira/browse/CASSANDRA-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6357: -- Tester: Quentin Conner Summary: Flush memtables to separate directory (was: Flush memtables to seperate directory) > Flush memtables to separate directory > - > > Key: CASSANDRA-6357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6357 > Project: Cassandra > Issue Type: Improvement > Components: Core > Reporter: Patrick McFadin > Assignee: Jonathan Ellis > Attachments: 6357.txt > > > Flush writers are a critical element for keeping a node healthy. When several > compactions run on systems with low-performing data directories, IO comes at a > premium. Once the disk subsystem is saturated, write IO is blocked, which will > cause flush writer threads to back up. Since memtables are large blocks of > memory in the JVM, too much blocking can cause excessive GC over time, > degrading performance. In the worst case, this causes an OOM. > Since compaction is running on the data directories, my proposal is to create > a separate directory for flushing memtables. Potentially we can use the same > methodology of keeping the commit log separate and minimize disk contention > against the critical function of the flushwriter.
[jira] [Updated] (CASSANDRA-6357) Flush memtables to seperate directory
[ https://issues.apache.org/jira/browse/CASSANDRA-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6357: -- Attachment: 6357.txt Patch that allows splitting out {{flush_directory}} to a separate volume, defaulting to /var/lib/cassandra/flush once patched. Note that it will default to the first data directory if you upgrade without specifying {{flush_directory}}, so you do need to remember to set it, because it won't warn you if you don't. > Flush memtables to seperate directory > - > > Key: CASSANDRA-6357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6357 > Project: Cassandra > Issue Type: Improvement > Components: Core > Reporter: Patrick McFadin > Attachments: 6357.txt > > > Flush writers are a critical element for keeping a node healthy. When several > compactions run on systems with low-performing data directories, IO comes at a > premium. Once the disk subsystem is saturated, write IO is blocked, which will > cause flush writer threads to back up. Since memtables are large blocks of > memory in the JVM, too much blocking can cause excessive GC over time, > degrading performance. In the worst case, this causes an OOM. > Since compaction is running on the data directories, my proposal is to create > a separate directory for flushing memtables. Potentially we can use the same > methodology of keeping the commit log separate and minimize disk contention > against the critical function of the flushwriter.
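As a concrete illustration, the configuration the patch describes would look roughly like the fragment below. This is a sketch only: the {{flush_directory}} key name and default path are taken from the patch description above and could change in review; the data directory path is the stock default.

```yaml
# Sketch of the proposed cassandra.yaml addition (key name from the attached
# patch, not a released option). Putting flushes on their own volume keeps
# memtable flushing off the spindles that compaction is saturating.
flush_directory: /var/lib/cassandra/flush

# Existing data directories, where compaction continues to run.
data_file_directories:
    - /var/lib/cassandra/data
```

Per the comment above, if {{flush_directory}} is left unset after upgrading, flushes silently fall back to the first data directory.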
[jira] [Updated] (CASSANDRA-6359) sstableloader does not free off-heap memory for index summary
[ https://issues.apache.org/jira/browse/CASSANDRA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-6359: --- Attachment: 0001-Free-off-heap-memory-when-releasing-index-summary.patch Attached patch (and [branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-6359]) properly closes the {{IndexSummary}} before releasing the reference. > sstableloader does not free off-heap memory for index summary > - > > Key: CASSANDRA-6359 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6359 > Project: Cassandra > Issue Type: Bug > Components: Tools > Reporter: Tyler Hobbs > Assignee: Tyler Hobbs > Priority: Minor > Attachments: > 0001-Free-off-heap-memory-when-releasing-index-summary.patch > > > Although sstableloader tells {{SSTableReaders}} to release their references > to the {{IndexSummary}} objects, the summary's {{Memory}} is never > {{free()}}'d, causing an off-heap memory leak.
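The shape of this class of bug and fix can be sketched as follows. All names here are hypothetical stand-ins, not Cassandra's actual {{IndexSummary}}/{{Memory}} code: a boolean flag stands in for the native allocation, and the point is that releasing the last reference must also free the off-heap memory, not merely drop the Java reference.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of tying off-heap cleanup to reference release. Dropping the Java
// reference alone would leak the native allocation, which is the bug above.
class OffHeapIndexSummary {
    private volatile boolean freed = false;          // stands in for Memory.free()
    private final AtomicInteger references = new AtomicInteger(1);

    boolean isFreed() { return freed; }

    void reference() { references.incrementAndGet(); }

    void release() {
        // Free the native allocation exactly once, when the final reference
        // goes away; omitting this step is the leak described above.
        if (references.decrementAndGet() == 0)
            freed = true;
    }
}
```

The GC never sees the off-heap allocation, so without an explicit free on final release the memory is unreachable from Java yet still resident.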
[jira] [Created] (CASSANDRA-6359) sstableloader does not free off-heap memory for index summary
Tyler Hobbs created CASSANDRA-6359: -- Summary: sstableloader does not free off-heap memory for index summary Key: CASSANDRA-6359 URL: https://issues.apache.org/jira/browse/CASSANDRA-6359 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Tyler Hobbs Assignee: Tyler Hobbs Priority: Minor Although sstableloader tells {{SSTableReaders}} to release their references to the {{IndexSummary}} objects, the summary's {{Memory}} is never {{free()}}'d, causing an off-heap memory leak.
[jira] [Updated] (CASSANDRA-6358) SSTable read meter sync not cancelled when reader is closed
[ https://issues.apache.org/jira/browse/CASSANDRA-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-6358: --- Attachment: 0001-Cancel-SSTR-read-meter-syncer-on-close.patch Attached patch (and [branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-6358]) cancels the scheduled task when the SSTableReader is closed. > SSTable read meter sync not cancelled when reader is closed > --- > > Key: CASSANDRA-6358 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6358 > Project: Cassandra > Issue Type: Bug > Components: Core > Reporter: Tyler Hobbs > Assignee: Tyler Hobbs > Fix For: 2.0.3 > > Attachments: 0001-Cancel-SSTR-read-meter-syncer-on-close.patch > > > We run a fixed-schedule task to sync the read meter for every SSTableReader > periodically. These tasks are not cancelled when the SSTR is closed.
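The fix direction described above has a familiar shape with {{java.util.concurrent}}: hold on to the {{ScheduledFuture}} returned when the periodic sync is scheduled, and cancel it in {{close()}}. A minimal sketch, with hypothetical names rather than the actual SSTableReader code:

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: a reader that cancels its periodic read-meter sync on close, so
// closed readers stop accumulating scheduled work on the shared executor.
class MeteredReader implements AutoCloseable {
    private final ScheduledFuture<?> syncTask;

    MeteredReader(ScheduledExecutorService executor, Runnable syncReadMeter) {
        // Periodic task, akin to the per-SSTableReader read meter sync.
        this.syncTask = executor.scheduleAtFixedRate(syncReadMeter, 1, 1, TimeUnit.SECONDS);
    }

    @Override
    public void close() {
        // The fix: without this cancel, the task outlives the reader forever.
        syncTask.cancel(false);
    }

    boolean syncCancelled() {
        return syncTask.isCancelled();
    }
}
```

Because the executor is typically shared and long-lived, uncancelled tasks keep both the scheduling slot and the closed reader reachable indefinitely.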
[jira] [Created] (CASSANDRA-6358) SSTable read meter sync not cancelled when reader is closed
Tyler Hobbs created CASSANDRA-6358: -- Summary: SSTable read meter sync not cancelled when reader is closed Key: CASSANDRA-6358 URL: https://issues.apache.org/jira/browse/CASSANDRA-6358 Project: Cassandra Issue Type: Bug Components: Core Reporter: Tyler Hobbs Assignee: Tyler Hobbs Fix For: 2.0.3 We run a fixed-schedule task to sync the read meter for every SSTableReader periodically. These tasks are not cancelled when the SSTR is closed.
git commit: SSTable/SSTableReader cleanup
Updated Branches: refs/heads/trunk fab27bd59 -> f388c9d69 SSTable/SSTableReader cleanup patch by yukim; reviewed by jbellis for CASSANDRA-6355 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f388c9d6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f388c9d6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f388c9d6 Branch: refs/heads/trunk Commit: f388c9d69b855f0c3b146864717a971034fd3dc5 Parents: fab27bd Author: Yuki Morishita Authored: Fri Nov 15 15:23:55 2013 -0600 Committer: Yuki Morishita Committed: Fri Nov 15 15:23:55 2013 -0600 -- .../cassandra/db/CollationController.java | 5 +- .../apache/cassandra/db/ColumnFamilyStore.java | 4 +- .../db/compaction/CompactionController.java | 3 +- .../cassandra/db/compaction/CompactionTask.java | 6 +- .../compaction/LeveledCompactionStrategy.java | 3 +- .../db/compaction/LeveledManifest.java | 14 ++-- .../cassandra/db/compaction/Upgrader.java | 9 +-- .../apache/cassandra/io/sstable/Component.java | 6 -- .../cassandra/io/sstable/KeyIterator.java | 2 +- .../apache/cassandra/io/sstable/SSTable.java| 55 ++- .../cassandra/io/sstable/SSTableMetadata.java | 2 +- .../cassandra/io/sstable/SSTableReader.java | 73 ++-- .../cassandra/io/sstable/SSTableWriter.java | 12 ++-- .../io/util/DataIntegrityMetadata.java | 4 +- .../LongLeveledCompactionStrategyTest.java | 3 +- .../cassandra/db/ColumnFamilyStoreTest.java | 2 +- .../LeveledCompactionStrategyTest.java | 5 +- .../cassandra/io/sstable/SSTableReaderTest.java | 2 +- 18 files changed, 88 insertions(+), 122 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f388c9d6/src/java/org/apache/cassandra/db/CollationController.java -- diff --git a/src/java/org/apache/cassandra/db/CollationController.java b/src/java/org/apache/cassandra/db/CollationController.java index 758d523..9896fde 100644 --- a/src/java/org/apache/cassandra/db/CollationController.java +++ 
b/src/java/org/apache/cassandra/db/CollationController.java @@ -27,7 +27,6 @@ import org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy; import org.apache.cassandra.db.filter.NamesQueryFilter; import org.apache.cassandra.db.filter.QueryFilter; import org.apache.cassandra.db.marshal.CounterColumnType; -import org.apache.cassandra.io.sstable.SSTable; import org.apache.cassandra.io.sstable.SSTableReader; import org.apache.cassandra.io.util.FileUtils; import org.apache.cassandra.tracing.Tracing; @@ -99,7 +98,7 @@ public class CollationController QueryFilter reducedFilter = new QueryFilter(filter.key, filter.cfName, namesFilter.withUpdatedColumns(filterColumns), filter.timestamp); /* add the SSTables on disk */ -Collections.sort(view.sstables, SSTable.maxTimestampComparator); +Collections.sort(view.sstables, SSTableReader.maxTimestampComparator); // read sorted sstables long mostRecentRowTombstone = Long.MIN_VALUE; @@ -219,7 +218,7 @@ public class CollationController * In othere words, iterating in maxTimestamp order allow to do our mostRecentTombstone elimination * in one pass, and minimize the number of sstables for which we read a rowTombstone. 
*/ -Collections.sort(view.sstables, SSTable.maxTimestampComparator); +Collections.sort(view.sstables, SSTableReader.maxTimestampComparator); List skippedSSTables = null; long mostRecentRowTombstone = Long.MIN_VALUE; long minTimestamp = Long.MAX_VALUE; http://git-wip-us.apache.org/repos/asf/cassandra/blob/f388c9d6/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index 73704d4..1b8a1bf 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -422,7 +422,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean Descriptor desc = sstableFiles.getKey(); Set components = sstableFiles.getValue(); -if (components.contains(Component.COMPACTED_MARKER) || desc.temporary) +if (desc.temporary) { SSTable.delete(desc, components); continue; @@ -1010,7 +1010,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean { if (operation != OperationType.CLEANUP || isIndex())
[jira] [Created] (CASSANDRA-6357) Flush memtables to seperate directory
Patrick McFadin created CASSANDRA-6357: -- Summary: Flush memtables to seperate directory Key: CASSANDRA-6357 URL: https://issues.apache.org/jira/browse/CASSANDRA-6357 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Patrick McFadin Flush writers are a critical element for keeping a node healthy. When several compactions run on systems with low-performing data directories, IO comes at a premium. Once the disk subsystem is saturated, write IO is blocked, which will cause flush writer threads to back up. Since memtables are large blocks of memory in the JVM, too much blocking can cause excessive GC over time, degrading performance. In the worst case, this causes an OOM. Since compaction is running on the data directories, my proposal is to create a separate directory for flushing memtables. Potentially we can use the same methodology of keeping the commit log separate and minimize disk contention against the critical function of the flushwriter.
[jira] [Commented] (CASSANDRA-6355) SSTable/SSTableReader cleanup
[ https://issues.apache.org/jira/browse/CASSANDRA-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824064#comment-13824064 ] Jonathan Ellis commented on CASSANDRA-6355: --- +1 > SSTable/SSTableReader cleanup > - > > Key: CASSANDRA-6355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6355 > Project: Cassandra > Issue Type: Task > Reporter: Yuki Morishita > Assignee: Yuki Morishita > Priority: Trivial > Fix For: 2.1 > > Attachments: 6355.txt > > > Trivial SSTable/SSTableReader cleanup: > * Remove SSTable Component constants (marked as TODO) > * Remove deprecated compaction marker component > * Don't reference SSTableReader from SSTable
[jira] [Created] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change
Yuki Morishita created CASSANDRA-6356: - Summary: Proposal: Statistics.db (SSTableMetadata) format change Key: CASSANDRA-6356 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356 Project: Cassandra Issue Type: Improvement Reporter: Yuki Morishita Assignee: Yuki Morishita Priority: Minor Fix For: 2.1 We have started to distinguish what is loaded onto the heap from Statistics.db and what is not. For now, ancestors are loaded as they are needed. The current serialization format is so ad hoc that adding new metadata that is not permanently held in memory is somewhat difficult and messy. I propose changing the serialization format so that a group of stats can be loaded as needed.
[jira] [Updated] (CASSANDRA-6355) SSTable/SSTableReader cleanup
[ https://issues.apache.org/jira/browse/CASSANDRA-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-6355: -- Attachment: 6355.txt also: https://github.com/yukim/cassandra/commits/6355 > SSTable/SSTableReader cleanup > - > > Key: CASSANDRA-6355 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6355 > Project: Cassandra > Issue Type: Task > Reporter: Yuki Morishita > Assignee: Yuki Morishita > Priority: Trivial > Fix For: 2.1 > > Attachments: 6355.txt > > > Trivial SSTable/SSTableReader cleanup: > * Remove SSTable Component constants (marked as TODO) > * Remove deprecated compaction marker component > * Don't reference SSTableReader from SSTable
[jira] [Created] (CASSANDRA-6355) SSTable/SSTableReader cleanup
Yuki Morishita created CASSANDRA-6355: - Summary: SSTable/SSTableReader cleanup Key: CASSANDRA-6355 URL: https://issues.apache.org/jira/browse/CASSANDRA-6355 Project: Cassandra Issue Type: Task Reporter: Yuki Morishita Assignee: Yuki Morishita Priority: Trivial Fix For: 2.1 Trivial SSTable/SSTableReader cleanup: * Remove SSTable Component constants (marked as TODO) * Remove deprecated compaction marker component * Don't reference SSTableReader from SSTable
[jira] [Commented] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs
[ https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824049#comment-13824049 ] Jonathan Ellis commented on CASSANDRA-2527: --- No. > Add ability to snapshot data as input to hadoop jobs > > > Key: CASSANDRA-2527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2527 > Project: Cassandra > Issue Type: New Feature > Reporter: Jeremy Hanna > Assignee: Brandon Williams > Labels: hadoop > Fix For: 2.1 > > > It is desirable to have immutable inputs to hadoop jobs for the duration of > the job. That way re-execution of individual tasks does not alter the output. > One way to accomplish this would be to snapshot the data that is used as > input to a job.
[jira] [Updated] (CASSANDRA-4511) Secondary index support for CQL3 collections
[ https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-4511: - Reviewer: Aleksey Yeschenko > Secondary index support for CQL3 collections > - > > Key: CASSANDRA-4511 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4511 > Project: Cassandra > Issue Type: Improvement > Affects Versions: 1.2.0 beta 1 > Reporter: Sylvain Lebresne > Assignee: Sylvain Lebresne > Fix For: 2.1 > > Attachments: 4511.txt > > > We should allow 2ndary indexes on collections. A typical use case would be > to add a 'tag set' to, say, a user profile and to query users based on > what tags they have.
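For illustration, the 'tag set' use case described above would look roughly like this in CQL. This is a sketch: the schema is hypothetical, and the CONTAINS syntax shown is the form collection indexing eventually took when it shipped in 2.1, not something guaranteed by this ticket's attached patch.

```
CREATE TABLE users (
    id text PRIMARY KEY,
    name text,
    tags set<text>
);

-- Index the collection column (hypothetical example schema)
CREATE INDEX ON users (tags);

-- Query users by a tag they carry
SELECT id, name FROM users WHERE tags CONTAINS 'cassandra';
```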
[jira] [Updated] (CASSANDRA-6333) ArrayIndexOutOfBound when using count(*) with over 10,000 rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6333: - Reviewer: Aleksey Yeschenko > ArrayIndexOutOfBound when using count(*) with over 10,000 rows > -- > > Key: CASSANDRA-6333 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6333 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.0.2, Ubuntu 12.04.3 LTS, Oracle Java 1.7.0_21 > Reporter: Tyler Tolley > Assignee: Sylvain Lebresne > Fix For: 2.0.3 > > Attachments: 6333.txt > > > We've been getting a "TSocket read 0 bytes" error when we try to run SELECT > count(*) FROM if the table has over 10,000 rows. > I've been able to reproduce the problem by using cassandra-stress to insert > different numbers of rows. When I insert under 10,000, the count is returned. > When I insert exactly 10,000, I get a message that my results were limited to > 10,000 by default. If I insert 10,001, I get the exception below. > {code} > ERROR [Thrift:4] 2013-11-12 09:54:04,850 CustomTThreadPoolServer.java (line > 212) Error occurred during processing of message.
> java.lang.ArrayIndexOutOfBoundsException: -1 > at java.util.ArrayList.elementData(ArrayList.java:371) > at java.util.ArrayList.remove(ArrayList.java:448) > at org.apache.cassandra.cql3.ResultSet.trim(ResultSet.java:92) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:848) > at > org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:196) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:163) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:57) > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:129) > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:145) > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:136) > at > org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1936) > at > org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4394) > at > org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4378) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:722) > {code}
[jira] [Commented] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs
[ https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823991#comment-13823991 ] Alex Liu commented on CASSANDRA-2527: - Is it possible to have snapshot data with all sstables merged/compacted into one sstable? > Add ability to snapshot data as input to hadoop jobs > > > Key: CASSANDRA-2527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2527 > Project: Cassandra > Issue Type: New Feature > Reporter: Jeremy Hanna > Assignee: Brandon Williams > Labels: hadoop > Fix For: 2.1 > > > It is desirable to have immutable inputs to hadoop jobs for the duration of > the job. That way re-execution of individual tasks does not alter the output. > One way to accomplish this would be to snapshot the data that is used as > input to a job.
[Cassandra Wiki] Update of "VirtualNodes/Balance" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "VirtualNodes/Balance" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/VirtualNodes/Balance?action=diff&rev1=8&rev2=9 Comment: statcounter TODO + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[Cassandra Wiki] Update of "VersionsAndBuilds" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "VersionsAndBuilds" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/VersionsAndBuilds?action=diff&rev1=63&rev2=64 Comment: statcounter Instructions for checking out the source code can always be found on the [[http://cassandra.apache.org/download|website]]. + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[jira] [Created] (CASSANDRA-6354) No cleanup of excess gossip connections
Rick Branson created CASSANDRA-6354: --- Summary: No cleanup of excess gossip connections Key: CASSANDRA-6354 URL: https://issues.apache.org/jira/browse/CASSANDRA-6354 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Priority: Minor While trying to cut off communication between two nodes, I noticed a production node had >300 active connections established to another node on the storage port. It looks like there's no check to keep these limited, so they'll just sit around forever.
[Cassandra Wiki] Update of "UUID" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "UUID" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/UUID?action=diff&rev1=1&rev2=2 Comment: statcounter == TimeUUIDType == The TimeUUIDType is used for a time based comparison. It uses a [[http://en.wikipedia.org/wiki/Universally_Unique_Identifier#Version_1_.28MAC_address.29|version 1 UUID]]. + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
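As the page notes, TimeUUIDType orders version 1 (time-based) UUIDs. For experimentation, Python's standard library can generate one:

```python
import uuid

# uuid.uuid1() builds a version 1 UUID from the current timestamp, a
# clock sequence, and (by default) the host's MAC address, so UUIDs
# generated later compare as later under time-based ordering.
u = uuid.uuid1()
print(u, "version:", u.version)
```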
[Cassandra Wiki] Update of "UseCases" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "UseCases" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/UseCases?action=diff&rev1=15&rev2=16 Comment: statcounter [[ThomasBoose/EERD model components to Cassandra Column family's]] + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[Cassandra Wiki] Update of "TimeBaseUUIDNotes" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "TimeBaseUUIDNotes" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/TimeBaseUUIDNotes?action=diff&rev1=2&rev2=3 Comment: statcounter 1. More than one instance of the Class is in the VM in different Class Loaders - this will be mitigated by each Class having its own sequence number. 1. There is no guarantee that two instances of a UUID in the same or different VMs will have a different sequence number - just a reasonable probability that they will. + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}
[Cassandra Wiki] Update of "ThriftInterface" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "ThriftInterface" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/ThriftInterface?action=diff&rev1=18&rev2=19 Comment: statcounter ColumnOrSuperColumn(column=Column(timestamp=1, name='fruit', value='apple'), super_column=None) }}} + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[Cassandra Wiki] Update of "ThriftExamples03" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "ThriftExamples03" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/ThriftExamples03?action=diff&rev1=3&rev2=4 Comment: statcounter } }}} + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[jira] [Comment Edited] (CASSANDRA-6353) Rename row/key "cache" settings in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823847#comment-13823847 ] Jeremiah Jordan edited comment on CASSANDRA-6353 at 11/15/13 5:48 PM: -- I guess partition cache is the wrong word there, (row cache stuff should be that), but I think partition_key_cache and partition_cache would make things easier on people. At the very least we should update the comments to reflect that the settings are for CQL3 partitions, not CQL3 rows. was (Author: jjordan): I guess partition cache is the wrong word there, (row cache stuff should be that), but I think partition_key_cache and partition_cache would make things easier on people > Rename row/key "cache" settings in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Priority: Minor > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_key_cache and row_cache to partition_cache, so that it matches with > CQL3 and what is actually getting cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-6353) Rename row/key "cache" settings in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823847#comment-13823847 ] Jeremiah Jordan commented on CASSANDRA-6353: I guess partition cache is the wrong word there, (row cache stuff should be that), but I think partition_key_cache and partition_cache would make things easier on people > Rename row/key "cache" settings in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Priority: Minor > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_cache, so that it matches with CQL3 and what is actually getting > cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6353) Rename row/key "cache" settings in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-6353: --- Description: We should rename all the key_cache settings in the cassandra.yaml to partition_key_cache and row_cache to partition_cache, so that it matches with CQL3 and what is actually getting cached. (was: We should rename all the key_cache settings in the cassandra.yaml to partition_cache, so that it matches with CQL3 and what is actually getting cached.) > Rename row/key "cache" settings in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Priority: Minor > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_key_cache and row_cache to partition_cache, so that it matches with > CQL3 and what is actually getting cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6353) Rename row/key "cache" settings in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-6353: --- Summary: Rename row/key "cache" settings in cassandra.yaml (was: Rename "key_cache" to "partition_cache" in cassandra.yaml) > Rename row/key "cache" settings in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Priority: Minor > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_cache, so that it matches with CQL3 and what is actually getting > cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-6353) Rename "key_cache" to "partition_cache" in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823845#comment-13823845 ] Sylvain Lebresne commented on CASSANDRA-6353: - For what it's worth, I'm not sure it's worth bothering. The key_cache doesn't cache partitions, so partition_cache is not really a better name. What it does is cache the per-sstable position of partition keys. So maybe partition_key_cache could be a new name, but then why not shorten it to just key_cache and not invalidate documentation everywhere? > Rename "key_cache" to "partition_cache" in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Priority: Minor > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_cache, so that it matches with CQL3 and what is actually getting > cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
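To make the naming debate concrete, the rename under discussion would map today's cassandra.yaml settings to new names roughly like the following. The right-hand names are hypothetical — they are the proposal being debated, and exist in no release:

```
# today (cassandra.yaml)       # proposed rename (hypothetical)
key_cache_size_in_mb:     ->   partition_key_cache_size_in_mb:
row_cache_size_in_mb:     ->   partition_cache_size_in_mb:
```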
[Cassandra Wiki] Update of "ThriftExamples" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "ThriftExamples" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/ThriftExamples?action=diff&rev1=93&rev2=94 Comment: statcounter }}} The Cassandra.Client object cannot be used concurrently by multiple threads (not thread safe). Each thread must use its own Cassandra.Client object. + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
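The usual pattern for that thread-safety caveat is thread-local clients. A sketch in Python, where `make_client()` is a hypothetical stand-in for however you construct and open a Cassandra.Client:

```python
import threading

def make_client():
    # Hypothetical stand-in for real client construction, e.g. opening
    # a Thrift transport and returning a Cassandra.Client bound to it.
    return object()

_local = threading.local()

def get_client():
    """Return this thread's private client, creating it on first use."""
    if not hasattr(_local, "client"):
        _local.client = make_client()
    return _local.client
```

Each thread that calls `get_client()` gets its own instance, so no client object is ever shared across threads.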
[jira] [Updated] (CASSANDRA-6353) Rename "key_cache" to "partition_cache" in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-6353: Priority: Minor (was: Major) > Rename "key_cache" to "partition_cache" in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Priority: Minor > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_cache, so that it matches with CQL3 and what is actually getting > cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6353) Rename "key_cache" to "partition_cache" in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-6353: --- Labels: lhf (was: ) > Rename "key_cache" to "partition_cache" in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_cache, so that it matches with CQL3 and what is actually getting > cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6353) Rename "key_cache" to "partition_cache" in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-6353: --- Description: We should rename all the key_cache settings in the cassandra.yaml to partition_cache, so that it matches with CQL3 and what is actually getting cached. > Rename "key_cache" to "partition_cache" in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan > Labels: lhf > > We should rename all the key_cache settings in the cassandra.yaml to > partition_cache, so that it matches with CQL3 and what is actually getting > cached. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6353) Rename "key_cache" to "partition_cache" in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-6353: --- Summary: Rename "key_cache" to "partition_cache" in cassandra.yaml (was: Rename "key" to "partition" in cassandra.yaml) > Rename "key_cache" to "partition_cache" in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan > Labels: lhf > -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering
[ https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823830#comment-13823830 ] Alex Liu commented on CASSANDRA-6348: - We should be able to auto-page through the 2i CF (for the native protocol), so if auto-paging ends in the middle of an index scan, the next page should start from where the index scan ended in the previous page. > TimeoutException throws if Cql query allows data filtering and index is too > big and it can't find the data in base CF after filtering > -- > > Key: CASSANDRA-6348 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6348 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Alex Liu >Assignee: Alex Liu > > If the index row is too big, and filtering can't find a matching CQL row in the base > CF, it keeps scanning the index row and retrieving the base CF until the index row > is scanned completely, which may take too long, and the thrift server returns > TimeoutException. This is one of the reasons why we shouldn't index a column > if the index is too big. > Merging multiple indexes can resolve the case where there are only EQUAL > clauses (CASSANDRA-6048 addresses it). > If the query has non-EQUAL clauses, we still need to do data filtering, which > might lead to a timeout exception. > We can either disable those kinds of queries or WARN the user that data > filtering might lead to a timeout exception or OOM. -- This message was sent by Atlassian JIRA (v6.1#6144)
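The auto-paging idea in the comment above amounts to keeping a resume position into the index row, so each page continues the scan where the previous one stopped instead of rescanning from the start. A pure-Python illustration of the mechanism (not Cassandra's actual 2i code):

```python
def scan_index_page(index_row, matches, start, page_size):
    """Scan index entries from `start`, returning up to `page_size`
    matching keys plus the position to resume from on the next page.

    `matches` stands in for the post-index filtering step against the
    base CF, which may reject most entries -- the source of the
    timeouts described in the issue above.
    """
    results = []
    pos = start
    while pos < len(index_row) and len(results) < page_size:
        key = index_row[pos]
        pos += 1
        if matches(key):
            results.append(key)
    return results, pos  # resume the next page at `pos`
```

Because each page does a bounded amount of scanning and hands back its resume offset, a long filtering run is spread over many requests rather than hitting one request's timeout.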
[jira] [Created] (CASSANDRA-6353) My guess would be we actually read the data, then throw it away
Jeremiah Jordan created CASSANDRA-6353: -- Summary: My guess would be we actually read the data, then throw it away Key: CASSANDRA-6353 URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 Project: Cassandra Issue Type: Bug Reporter: Jeremiah Jordan -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6353) Rename "key" to "partition" in cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-6353: --- Summary: Rename "key" to "partition" in cassandra.yaml (was: My guess would be we actually read the data, then throw it away) > Rename "key" to "partition" in cassandra.yaml > - > > Key: CASSANDRA-6353 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6353 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan > -- This message was sent by Atlassian JIRA (v6.1#6144)
[Cassandra Wiki] Update of "ThirdPartySupport" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "ThirdPartySupport" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/ThirdPartySupport?action=diff&rev1=41&rev2=42 Comment: statcounter (Other providers are welcome to add themselves to this publicly-editable page.) + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[Cassandra Wiki] Update of "Streaming_JP" by GehrigKunz
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification. The "Streaming_JP" page has been changed by GehrigKunz: https://wiki.apache.org/cassandra/Streaming_JP?action=diff&rev1=4&rev2=5 Comment: statcounter If there is no progress at all, or it seems slow, something is wrong. One thing to keep in mind is that the sending side can only send one file per stream, while the receiving node can receive multiple files concurrently. + {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}} +
[jira] [Comment Edited] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering
[ https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823819#comment-13823819 ] Alex Liu edited comment on CASSANDRA-6348 at 11/15/13 5:16 PM: --- It was tested against 1.2.11 release. was (Author: alexliu68): It was tested against on 1.2.11 release. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering
[ https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-6348: Since Version: 1.2.11 -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering
[ https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823819#comment-13823819 ] Alex Liu commented on CASSANDRA-6348: - It was tested against the 1.2.11 release. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-6275) 2.0.x leaks file handles
[ https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823814#comment-13823814 ] Michael Shuler commented on CASSANDRA-6275: --- My tests were on Ubuntu precise, same kernel as above, with JVM version 1.7_25. > 2.0.x leaks file handles > > > Key: CASSANDRA-6275 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6275 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: java version "1.7.0_25" > Java(TM) SE Runtime Environment (build 1.7.0_25-b15) > Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode) > Linux cassandra-test1 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT > 2012 x86_64 x86_64 x86_64 GNU/Linux >Reporter: Mikhail Mazursky >Assignee: Marcus Eriksson > Attachments: c_file-descriptors_strace.tbz, cassandra_jstack.txt, > leak.log, position_hints.tgz, slog.gz > > > Looks like C* is leaking file descriptors when doing lots of CAS operations. > {noformat} > $ sudo cat /proc/15455/limits > Limit  Soft Limit  Hard Limit  Units > Max cpu time  unlimited  unlimited  seconds > Max file size  unlimited  unlimited  bytes > Max data size  unlimited  unlimited  bytes > Max stack size  10485760  unlimited  bytes > Max core file size  0  0  bytes > Max resident set  unlimited  unlimited  bytes > Max processes  1024  unlimited  processes > Max open files  4096  4096  files > Max locked memory  unlimited  unlimited  bytes > Max address space  unlimited  unlimited  bytes > Max file locks  unlimited  unlimited  locks > Max pending signals  14633  14633  signals > Max msgqueue size  819200  819200  bytes > Max nice priority  0  0 > Max realtime priority  0  0 > Max realtime timeout  unlimited  unlimited  us > {noformat} > Looks like the problem is not in limits. 
> Before load test: > {noformat} > cassandra-test0 ~]$ lsof -n | grep java | wc -l > 166 > cassandra-test1 ~]$ lsof -n | grep java | wc -l > 164 > cassandra-test2 ~]$ lsof -n | grep java | wc -l > 180 > {noformat} > After load test: > {noformat} > cassandra-test0 ~]$ lsof -n | grep java | wc -l > 967 > cassandra-test1 ~]$ lsof -n | grep java | wc -l > 1766 > cassandra-test2 ~]$ lsof -n | grep java | wc -l > 2578 > {noformat} > Most opened files have names like: > {noformat} > java 16890 cassandra 1636r REG 202,17 88724987 > 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db > java 16890 cassandra 1637r REG 202,17 161158485 > 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db > java 16890 cassandra 1638r REG 202,17 88724987 > 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db > java 16890 cassandra 1639r REG 202,17 161158485 > 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db > java 16890 cassandra 1640r REG 202,17 88724987 > 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db > java 16890 cassandra 1641r REG 202,17 161158485 > 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db > java 16890 cassandra 1642r REG 202,17 88724987 > 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db > java 16890 cassandra 1643r REG 202,17 161158485 > 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db > java 16890 cassandra 1644r REG 202,17 88724987 > 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db > java 16890 cassandra 1645r REG 202,17 161158485 > 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db > java 16890 cassandra 1646r REG 202,17 88724987 > 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db > java 16890 cassandra 1647r REG 202,17 161158485 > 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db > java 16890 cassandra 1648r REG 
202,17 88724987 > 655520 /var/lib/cassandr
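A quick way to triage listings like the one above is to count how many descriptors reference each file. A small sketch that reduces `lsof`-style lines (assuming the path is the last whitespace-separated field, as in `lsof -n` output) to per-file counts:

```python
from collections import Counter

def fd_counts_by_path(lsof_lines):
    """Map each open file's path to the number of descriptors
    referencing it, given lines of `lsof -n`-style output.
    """
    return Counter(line.split()[-1] for line in lsof_lines if line.strip())
```

Run against `lsof -n -p <pid>` output, dozens or hundreds of entries for the same -Data.db file, as in the paxos SSTable listing above, is the leak signature.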
[jira] [Commented] (CASSANDRA-6275) 2.0.x leaks file handles
[ https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823806#comment-13823806 ] Pieter Callewaert commented on CASSANDRA-6275: -- I also have the problem on Ubuntu 12.04 (Linux de-cass00 3.8.0-30-generic #44~precise1-Ubuntu SMP Fri Aug 23 18:32:41 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux). I posted something on the mailing list because I was not sure it was a bug; more info here (in some cases I had a single deleted file open more than 50k times): http://www.mail-archive.com/user@cassandra.apache.org/msg32999.html A temporary fix was to raise the nofile limit to 1,000,000.
[jira] [Commented] (CASSANDRA-6275) 2.0.x leaks file handles
[ https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823805#comment-13823805 ] Duncan Sands commented on CASSANDRA-6275: - I originally saw the issue on Ubuntu 10.04 (kernel 2.6.32) and reproduced it on Ubuntu 13.10 (kernel 3.11.0).
[jira] [Commented] (CASSANDRA-6275) 2.0.x leaks file handles
[ https://issues.apache.org/jira/browse/CASSANDRA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823796#comment-13823796 ] Marcus Eriksson commented on CASSANDRA-6275: OK, this is what I have so far: I can also reproduce on an m1.medium in EC2 (Ubuntu 13.10, 3.11.x kernel). I cannot reproduce on my laptop (Debian squeeze) or my server (RHEL 6), both running 2.6.x kernels (jdk7u45 on all). It happens with trivial tables/data as well, so it seems unrelated to TTL, truncate, etc. Just starting Cassandra up shows ~50 open FDs for the same Data.db file.
> {noformat}
> $ sudo cat /proc/15455/limits
> Limit                     Soft Limit  Hard Limit  Units
> Max cpu time              unlimited   unlimited   seconds
> Max file size             unlimited   unlimited   bytes
> Max data size             unlimited   unlimited   bytes
> Max stack size            10485760    unlimited   bytes
> Max core file size        0           0           bytes
> Max resident set          unlimited   unlimited   bytes
> Max processes             1024        unlimited   processes
> Max open files            4096        4096        files
> Max locked memory         unlimited   unlimited   bytes
> Max address space         unlimited   unlimited   bytes
> Max file locks            unlimited   unlimited   locks
> Max pending signals       14633       14633       signals
> Max msgqueue size         819200      819200      bytes
> Max nice priority         0           0
> Max realtime priority     0           0
> Max realtime timeout      unlimited   unlimited   us
> {noformat}
> Looks like the problem is not in limits.
> Before load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 166
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 164
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 180
> {noformat}
> After load test:
> {noformat}
> cassandra-test0 ~]$ lsof -n | grep java | wc -l
> 967
> cassandra-test1 ~]$ lsof -n | grep java | wc -l
> 1766
> cassandra-test2 ~]$ lsof -n | grep java | wc -l
> 2578
> {noformat}
> Most opened files have names like:
> {noformat}
> java 16890 cassandra 1636r REG 202,17  88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java 16890 cassandra 1637r REG 202,17 161158485 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java 16890 cassandra 1638r REG 202,17  88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java 16890 cassandra 1639r REG 202,17 161158485 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java 16890 cassandra 1640r REG 202,17  88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java 16890 cassandra 1641r REG 202,17 161158485 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java 16890 cassandra 1642r REG 202,17  88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java 16890 cassandra 1643r REG 202,17 161158485 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java 16890 cassandra 1644r REG 202,17  88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db
> java 16890 cassandra 1645r REG 202,17 161158485 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db
> java 16890 cassandra 1646r REG 202,17  88724987 655520 /var/
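The pattern above, many read descriptors (1636r, 1637r, ...) all pointing at the same two paxos Data.db files, can be spotted mechanically from captured lsof output. A minimal sketch that tallies descriptors per file path; the class name and sample lines are illustrative, not part of Cassandra:

```java
import java.util.*;
import java.util.stream.*;

// Illustrative helper, not Cassandra code: count how many open file
// descriptors point at each path in (already captured) lsof output,
// so a leak like "~50 FDs for the same Data.db file" stands out.
class FdLeakCheck {
    static Map<String, Long> countByPath(List<String> lsofLines) {
        return lsofLines.stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty())
                // the file path is the last whitespace-separated column
                .map(line -> line.substring(line.lastIndexOf(' ') + 1))
                .collect(Collectors.groupingBy(path -> path, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "java 16890 cassandra 1636r REG 202,17 88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db",
            "java 16890 cassandra 1637r REG 202,17 161158485 655420 /var/lib/cassandra/data/system/paxos/system-paxos-jb-255-Data.db",
            "java 16890 cassandra 1638r REG 202,17 88724987 655520 /var/lib/cassandra/data/system/paxos/system-paxos-jb-644-Data.db");
        // descriptors per file; a healthy node keeps these counts small
        countByPath(sample).forEach((path, n) -> System.out.println(n + "  " + path));
    }
}
```

At the shell, something along the lines of `lsof -n -p <pid> | awk '{print $NF}' | sort | uniq -c | sort -rn | head` (hypothetical invocation) gives the same per-file counts directly.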
[jira] [Commented] (CASSANDRA-6352) Cluster does not respond to new SELECT query after a timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823787#comment-13823787 ] Ngoc Minh Vo commented on CASSANDRA-6352: - Thanks for your quick answer! We will wait for 2.0.3 release and confirm whether the issue is resolved. Best regards > Cluster does not repond to new SELECT query after a timeout > --- > > Key: CASSANDRA-6352 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6352 > Project: Cassandra > Issue Type: Bug > Environment: Windows7, C* v2.0.xx, 4-node cluster, JVM 1.7.0_45-b18 > Xmx16GB, Datastax Java Driver 1.0.4 and 2.0.0-beta2 >Reporter: Ngoc Minh Vo > Attachments: ErrorStack.txt > > > Hello, > We encounter the following issue three times. Here are the descriptions of > the issue: > - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with > BatchStatement (i.e.: batch of PreparedStatement). The performance is quite > impressive. > - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything > goes well. > - but when we use DJD v2.0.0-b2, we got an exception: > {quote} > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > {quote} > - afterward, no Select query works anymore: > -- all query via cqlsh failed with rpc_timeout > -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 > -- these queries have worked perfectly before the first select with DJD v2.0.0 > - nodetool status shows all nodes still Up and Normal > - nodetool flush still works on all nodes > Only a reboot of all nodes could solve the issue. 
> Unfortunately, we don't have any exploitable informations in log files: > Node1: the handshaking at 11:28:48 is strange because we didn't reboot any > node > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is > 5.06951175012658 (just-counted was 4.902669365509605). calculation took > 140ms for 57108 columns > INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 > OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 > INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) > INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) > Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 > ops) > {quote} > Node2: there is a hinted-handoff at 11:30:02... > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:25:32,897 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_identity') liveRatio is > 6.046071792095967 (just-counted was 5.493829833297251). calculation took 3ms > for 608 columns > INFO [HintedHandoff:1] 2013-11-15 11:30:02,656 HintedHandOffManager.java > (line 322) Started hinted handoff for host: > 2ce9f0a8-795c-4733-9d52-06057fcc690d with IP: /10.30.227.8 > INFO [HintedHandoff:1] 2013-11-15 11:30:12,663 HintedHandOffManager.java > (line 449) Timed out replaying hints to /10.30.227.8; aborting (0 delivered) > INFO [RMI TCP Connection(6)-10.30.224.229] 2013-11-15 11:35:20,096 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-hints@581765413(1028/10280 serialized/live bytes, 2 ops) > {quote} > It seems that the first Select query with DJD v2.0.0-b2 let the cluster in a > "pending"/"anormal" state and it no longer responds to future queries. > I know that without logs it will be hard to reproduce. 
> Thanks and regards, > Minh
[jira] [Commented] (CASSANDRA-6352) Cluster does not respond to new SELECT query after a timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823735#comment-13823735 ] Sylvain Lebresne commented on CASSANDRA-6352: - You're almost surely running into CASSANDRA-6299. It will be fixed in 2.0.3 (and is currently fixed on the cassandra-2.0 branch). > Cluster does not repond to new SELECT query after a timeout > --- > > Key: CASSANDRA-6352 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6352 > Project: Cassandra > Issue Type: Bug > Environment: Windows7, C* v2.0.xx, 4-node cluster, JVM 1.7.0_45-b18 > Xmx16GB, Datastax Java Driver 1.0.4 and 2.0.0-beta2 >Reporter: Ngoc Minh Vo > Attachments: ErrorStack.txt > > > Hello, > We encounter the following issue three times. Here are the descriptions of > the issue: > - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with > BatchStatement (i.e.: batch of PreparedStatement). The performance is quite > impressive. > - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything > goes well. > - but when we use DJD v2.0.0-b2, we got an exception: > {quote} > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > {quote} > - afterward, no Select query works anymore: > -- all query via cqlsh failed with rpc_timeout > -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 > -- these queries have worked perfectly before the first select with DJD v2.0.0 > - nodetool status shows all nodes still Up and Normal > - nodetool flush still works on all nodes > Only a reboot of all nodes could solve the issue. 
> Unfortunately, we don't have any exploitable informations in log files: > Node1: the handshaking at 11:28:48 is strange because we didn't reboot any > node > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is > 5.06951175012658 (just-counted was 4.902669365509605). calculation took > 140ms for 57108 columns > INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 > OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 > INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) > INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) > Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 > ops) > {quote} > Node2: there is a hinted-handoff at 11:30:02... > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:25:32,897 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_identity') liveRatio is > 6.046071792095967 (just-counted was 5.493829833297251). calculation took 3ms > for 608 columns > INFO [HintedHandoff:1] 2013-11-15 11:30:02,656 HintedHandOffManager.java > (line 322) Started hinted handoff for host: > 2ce9f0a8-795c-4733-9d52-06057fcc690d with IP: /10.30.227.8 > INFO [HintedHandoff:1] 2013-11-15 11:30:12,663 HintedHandOffManager.java > (line 449) Timed out replaying hints to /10.30.227.8; aborting (0 delivered) > INFO [RMI TCP Connection(6)-10.30.224.229] 2013-11-15 11:35:20,096 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-hints@581765413(1028/10280 serialized/live bytes, 2 ops) > {quote} > It seems that the first Select query with DJD v2.0.0-b2 let the cluster in a > "pending"/"anormal" state and it no longer responds to future queries. > I know that without logs it will be hard to reproduce. 
> Thanks and regards, > Minh
[jira] [Commented] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs
[ https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823729#comment-13823729 ] Jeremy Hanna commented on CASSANDRA-2527: - related: HBASE-8369 > Add ability to snapshot data as input to hadoop jobs > > > Key: CASSANDRA-2527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2527 > Project: Cassandra > Issue Type: New Feature >Reporter: Jeremy Hanna >Assignee: Brandon Williams > Labels: hadoop > Fix For: 2.1 > > > It is desirable to have immutable inputs to hadoop jobs for the duration of > the job. That way re-execution of individual tasks do not alter the output. > One way to accomplish this would be to snapshot the data that is used as > input to a job. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs
[ https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823726#comment-13823726 ] Jonathan Ellis commented on CASSANDRA-2527: --- Not really feasible; Hadoop is a special case since we can seq scan sstables without having to fully "open" them (sample indexes, populate key cache, etc) > Add ability to snapshot data as input to hadoop jobs > > > Key: CASSANDRA-2527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2527 > Project: Cassandra > Issue Type: New Feature >Reporter: Jeremy Hanna >Assignee: Brandon Williams > Labels: hadoop > Fix For: 2.1 > > > It is desirable to have immutable inputs to hadoop jobs for the duration of > the job. That way re-execution of individual tasks do not alter the output. > One way to accomplish this would be to snapshot the data that is used as > input to a job. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs
[ https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823722#comment-13823722 ] Jeremiah Jordan commented on CASSANDRA-2527: Now that we have the cql hadoop input format, maybe the better way to do this would be to add "USING SNAPSHOT xyz" to CQL, and let all selects be able to run against a snapshot. > Add ability to snapshot data as input to hadoop jobs > > > Key: CASSANDRA-2527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2527 > Project: Cassandra > Issue Type: New Feature >Reporter: Jeremy Hanna >Assignee: Brandon Williams > Labels: hadoop > Fix For: 2.1 > > > It is desirable to have immutable inputs to hadoop jobs for the duration of > the job. That way re-execution of individual tasks do not alter the output. > One way to accomplish this would be to snapshot the data that is used as > input to a job. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6352) Cluster does not respond to new SELECT query after a timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ngoc Minh Vo updated CASSANDRA-6352: Attachment: ErrorStack.txt Descriptions of our table and indexes: CREATE KEYSPACE myks WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' }; USE myks; CREATE TABLE data ( id text, date int, portfolio text, PRIMARY KEY (id) ); CREATE INDEX ON data(portfolio); CREATE INDEX ON data(date); And the query that failed in DJD v2.0.0-b2 SELECT * FROM data WHERE date=1 AND portfolio='a' ALLOW FILTERING; > Cluster does not repond to new SELECT query after a timeout > --- > > Key: CASSANDRA-6352 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6352 > Project: Cassandra > Issue Type: Bug > Environment: Windows7, C* v2.0.xx, 4-node cluster, JVM 1.7.0_45-b18 > Xmx16GB, Datastax Java Driver 1.0.4 and 2.0.0-beta2 >Reporter: Ngoc Minh Vo > Attachments: ErrorStack.txt > > > Hello, > We encounter the following issue three times. Here are the descriptions of > the issue: > - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with > BatchStatement (i.e.: batch of PreparedStatement). The performance is quite > impressive. > - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything > goes well. > - but when we use DJD v2.0.0-b2, we got an exception: > {quote} > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > {quote} > - afterward, no Select query works anymore: > -- all query via cqlsh failed with rpc_timeout > -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 > -- these queries have worked perfectly before the first select with DJD v2.0.0 > - nodetool status shows all nodes still Up and Normal > - nodetool flush still works on all nodes > Only a reboot of all nodes could solve the issue. 
> Unfortunately, we don't have any exploitable informations in log files: > Node1: the handshaking at 11:28:48 is strange because we didn't reboot any > node > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is > 5.06951175012658 (just-counted was 4.902669365509605). calculation took > 140ms for 57108 columns > INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 > OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 > INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) > INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) > Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 > ops) > {quote} > Node2: there is a hinted-handoff at 11:30:02... > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:25:32,897 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_identity') liveRatio is > 6.046071792095967 (just-counted was 5.493829833297251). calculation took 3ms > for 608 columns > INFO [HintedHandoff:1] 2013-11-15 11:30:02,656 HintedHandOffManager.java > (line 322) Started hinted handoff for host: > 2ce9f0a8-795c-4733-9d52-06057fcc690d with IP: /10.30.227.8 > INFO [HintedHandoff:1] 2013-11-15 11:30:12,663 HintedHandOffManager.java > (line 449) Timed out replaying hints to /10.30.227.8; aborting (0 delivered) > INFO [RMI TCP Connection(6)-10.30.224.229] 2013-11-15 11:35:20,096 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-hints@581765413(1028/10280 serialized/live bytes, 2 ops) > {quote} > It seems that the first Select query with DJD v2.0.0-b2 let the cluster in a > "pending"/"anormal" state and it no longer responds to future queries. > I know that without logs it will be hard to reproduce. 
> Thanks and regards, > Minh
[jira] [Comment Edited] (CASSANDRA-6352) Cluster does not respond to new SELECT query after a timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823712#comment-13823712 ] Ngoc Minh Vo edited comment on CASSANDRA-6352 at 11/15/13 2:57 PM: --- Descriptions of our table and indexes: {code} CREATE KEYSPACE myks WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' }; USE myks; CREATE TABLE data ( id text, date int, portfolio text, PRIMARY KEY (id) ); CREATE INDEX ON data(portfolio); CREATE INDEX ON data(date); {code} And the query that failed in DJD v2.0.0-b2 {code} SELECT * FROM data WHERE date=1 AND portfolio='a' ALLOW FILTERING; {code} was (Author: vongocminh): Descriptions of our table and indexes: CREATE KEYSPACE myks WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' }; USE myks; CREATE TABLE data ( id text, date int, portfolio text, PRIMARY KEY (id) ); CREATE INDEX ON data(portfolio); CREATE INDEX ON data(date); And the query that failed in DJD v2.0.0-b2 SELECT * FROM data WHERE date=1 AND portfolio='a' ALLOW FILTERING; > Cluster does not repond to new SELECT query after a timeout > --- > > Key: CASSANDRA-6352 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6352 > Project: Cassandra > Issue Type: Bug > Environment: Windows7, C* v2.0.xx, 4-node cluster, JVM 1.7.0_45-b18 > Xmx16GB, Datastax Java Driver 1.0.4 and 2.0.0-beta2 >Reporter: Ngoc Minh Vo > Attachments: ErrorStack.txt > > > Hello, > We encounter the following issue three times. Here are the descriptions of > the issue: > - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with > BatchStatement (i.e.: batch of PreparedStatement). The performance is quite > impressive. > - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything > goes well. 
> - but when we use DJD v2.0.0-b2, we got an exception: > {quote} > com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout > during read query at consistency ONE (1 responses were required but only 0 > replica responded) > {quote} > - afterward, no Select query works anymore: > -- all query via cqlsh failed with rpc_timeout > -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 > -- these queries have worked perfectly before the first select with DJD v2.0.0 > - nodetool status shows all nodes still Up and Normal > - nodetool flush still works on all nodes > Only a reboot of all nodes could solve the issue. > Unfortunately, we don't have any exploitable informations in log files: > Node1: the handshaking at 11:28:48 is strange because we didn't reboot any > node > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is > 5.06951175012658 (just-counted was 4.902669365509605). calculation took > 140ms for 57108 columns > INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 > OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 > INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) > INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) > Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 > ops) > {quote} > Node2: there is a hinted-handoff at 11:30:02... > {quote} > INFO [MemoryMeter:1] 2013-11-15 11:25:32,897 Memtable.java (line 444) > CFS(Keyspace='hector', ColumnFamily='pdl_identity') liveRatio is > 6.046071792095967 (just-counted was 5.493829833297251). 
calculation took 3ms > for 608 columns > INFO [HintedHandoff:1] 2013-11-15 11:30:02,656 HintedHandOffManager.java > (line 322) Started hinted handoff for host: > 2ce9f0a8-795c-4733-9d52-06057fcc690d with IP: /10.30.227.8 > INFO [HintedHandoff:1] 2013-11-15 11:30:12,663 HintedHandOffManager.java > (line 449) Timed out replaying hints to /10.30.227.8; aborting (0 delivered) > INFO [RMI TCP Connection(6)-10.30.224.229] 2013-11-15 11:35:20,096 > ColumnFamilyStore.java (line 734) Enqueuing flush of > Memtable-hints@581765413(1028/10280 serialized/live bytes, 2 ops) > {quote} > It seems that the first Select query with DJD v2.0.0-b2 let the cluster in a > "pending"/"anormal" state and it no longer responds to future queries. > I know that without logs it will be hard to reproduce. > Thanks and regards, > Minh -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (CASSANDRA-6352) Cluster does not respond to new SELECT query after a timeout
Ngoc Minh Vo created CASSANDRA-6352: --- Summary: Cluster does not repond to new SELECT query after a timeout Key: CASSANDRA-6352 URL: https://issues.apache.org/jira/browse/CASSANDRA-6352 Project: Cassandra Issue Type: Bug Environment: Windows7, C* v2.0.xx, 4-node cluster, JVM 1.7.0_45-b18 Xmx16GB, Datastax Java Driver 1.0.4 and 2.0.0-beta2 Reporter: Ngoc Minh Vo Hello, We encounter the following issue three times. Here are the descriptions of the issue: - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with BatchStatement (i.e.: batch of PreparedStatement). The performance is quite impressive. - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything goes well. - but when we use DJD v2.0.0-b2, we got an exception: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded) - afterward, no Select query works anymore: -- all query via cqlsh failed with rpc_timeout -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 -- these queries have worked perfectly before the first select with DJD v2.0.0 - nodetool status shows all nodes still Up and Normal - nodetool flush still works on all nodes Only a reboot of all nodes could solve the issue. Unfortunately, we don't have any exploitable informations in log files: Node1: the handshaking at 11:28:48 is strange because we didn't reboot any node {quote} INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is 5.06951175012658 (just-counted was 4.902669365509605). 
calculation took 140ms for 57108 columns INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 ColumnFamilyStore.java (line 734) Enqueuing flush of Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) {quote} Node2: there is a hinted-handoff at 11:30:02... {quote} INFO [MemoryMeter:1] 2013-11-15 11:25:32,897 Memtable.java (line 444) CFS(Keyspace='hector', ColumnFamily='pdl_identity') liveRatio is 6.046071792095967 (just-counted was 5.493829833297251). calculation took 3ms for 608 columns INFO [HintedHandoff:1] 2013-11-15 11:30:02,656 HintedHandOffManager.java (line 322) Started hinted handoff for host: 2ce9f0a8-795c-4733-9d52-06057fcc690d with IP: /10.30.227.8 INFO [HintedHandoff:1] 2013-11-15 11:30:12,663 HintedHandOffManager.java (line 449) Timed out replaying hints to /10.30.227.8; aborting (0 delivered) INFO [RMI TCP Connection(6)-10.30.224.229] 2013-11-15 11:35:20,096 ColumnFamilyStore.java (line 734) Enqueuing flush of Memtable-hints@581765413(1028/10280 serialized/live bytes, 2 ops) {quote} It seems that the first Select query with DJD v2.0.0-b2 let the cluster in a "pending"/"anormal" state and it no longer responds to future queries. I know that without logs it will be hard to reproduce. Thanks and regards, Minh -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6352) Cluster does not respond to new SELECT query after a timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ngoc Minh Vo updated CASSANDRA-6352: Description: Hello, We encounter the following issue three times. Here are the descriptions of the issue: - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with BatchStatement (i.e.: batch of PreparedStatement). The performance is quite impressive. - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything goes well. - but when we use DJD v2.0.0-b2, we got an exception: {quote} com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded) {quote} - afterward, no Select query works anymore: -- all query via cqlsh failed with rpc_timeout -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 -- these queries have worked perfectly before the first select with DJD v2.0.0 - nodetool status shows all nodes still Up and Normal - nodetool flush still works on all nodes Only a reboot of all nodes could solve the issue. Unfortunately, we don't have any exploitable informations in log files: Node1: the handshaking at 11:28:48 is strange because we didn't reboot any node {quote} INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is 5.06951175012658 (just-counted was 4.902669365509605). 
calculation took 140ms for 57108 columns INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 ColumnFamilyStore.java (line 734) Enqueuing flush of Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) {quote} Node2: there is a hinted-handoff at 11:30:02... {quote} INFO [MemoryMeter:1] 2013-11-15 11:25:32,897 Memtable.java (line 444) CFS(Keyspace='hector', ColumnFamily='pdl_identity') liveRatio is 6.046071792095967 (just-counted was 5.493829833297251). calculation took 3ms for 608 columns INFO [HintedHandoff:1] 2013-11-15 11:30:02,656 HintedHandOffManager.java (line 322) Started hinted handoff for host: 2ce9f0a8-795c-4733-9d52-06057fcc690d with IP: /10.30.227.8 INFO [HintedHandoff:1] 2013-11-15 11:30:12,663 HintedHandOffManager.java (line 449) Timed out replaying hints to /10.30.227.8; aborting (0 delivered) INFO [RMI TCP Connection(6)-10.30.224.229] 2013-11-15 11:35:20,096 ColumnFamilyStore.java (line 734) Enqueuing flush of Memtable-hints@581765413(1028/10280 serialized/live bytes, 2 ops) {quote} It seems that the first Select query with DJD v2.0.0-b2 let the cluster in a "pending"/"anormal" state and it no longer responds to future queries. I know that without logs it will be hard to reproduce. Thanks and regards, Minh was: Hello, We encounter the following issue three times. Here are the descriptions of the issue: - data are imported via Datastax Java driver (DJD) v2.0.0-b2 with BatchStatement (i.e.: batch of PreparedStatement). The performance is quite impressive. - if we query the cluster via cqlsh (C* 2.0.x) and DJD v1.0.4, everything goes well. 
- but when we use DJD v2.0.0-b2, we got an exception: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded) - afterward, no Select query works anymore: -- all query via cqlsh failed with rpc_timeout -- all query via DJD v1.0.4 failed with the same exception as the v2.0.0-b2 -- these queries have worked perfectly before the first select with DJD v2.0.0 - nodetool status shows all nodes still Up and Normal - nodetool flush still works on all nodes Only a reboot of all nodes could solve the issue. Unfortunately, we don't have any exploitable informations in log files: Node1: the handshaking at 11:28:48 is strange because we didn't reboot any node {quote} INFO [MemoryMeter:1] 2013-11-15 11:27:11,724 Memtable.java (line 444) CFS(Keyspace='hector', ColumnFamily='pdl_caching') liveRatio is 5.06951175012658 (just-counted was 4.902669365509605). calculation took 140ms for 57108 columns INFO [HANDSHAKE-/10.30.226.166] 2013-11-15 11:28:48,550 OutboundTcpConnection.java (line 386) Handshaking version with /10.30.226.166 INFO [RMI TCP Connection(4)-10.30.224.229] 2013-11-15 11:32:29,256 ColumnFamilyStore.java (line 734) Enqueuing flush of Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) INFO [FlushWriter:76] 2013-11-15 11:32:29,257 Memtable.java (line 328) Writing Memtable-sstable_activity@2142066849(0/0 serialized/live bytes, 24 ops) {quote} Node2: there is
[jira] [Updated] (CASSANDRA-6351) When dropping a CF, row cache is not invalidated
[ https://issues.apache.org/jira/browse/CASSANDRA-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6351: -- Reviewer: Jonathan Ellis Component/s: Core Priority: Minor (was: Trivial) Fix Version/s: 2.0.3 1.2.12 Assignee: Fabien Rousseau Issue Type: Bug (was: Improvement) > When dropping a CF, row cache is not invalidated > > > Key: CASSANDRA-6351 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6351 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Fabien Rousseau >Assignee: Fabien Rousseau >Priority: Minor > Fix For: 1.2.12, 2.0.3 > > Attachments: 0001-invalidate-row-cache-when-dropping-CF.patch > > > When dropping a ColumnFamily with row cache enabled, then row cache is not > invalidated for this CF. > This can be a bit annoying if the ColumnFamily is recreated because it will > be empty, but row cache won't. > Note : this is similar to a "TRUNCATE" command (and TRUNCATE does invalidate > the cache...) > Attached is patch which removes the rows of the currently dropped CF from row > cache. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-6351) When dropping a CF, row cache is not invalidated
[ https://issues.apache.org/jira/browse/CASSANDRA-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fabien Rousseau updated CASSANDRA-6351: --- Attachment: 0001-invalidate-row-cache-when-dropping-CF.patch > When dropping a CF, row cache is not invalidated > > > Key: CASSANDRA-6351 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6351 > Project: Cassandra > Issue Type: Improvement >Reporter: Fabien Rousseau >Priority: Trivial > Attachments: 0001-invalidate-row-cache-when-dropping-CF.patch > > > When dropping a ColumnFamily with row cache enabled, then row cache is not > invalidated for this CF. > This can be a bit annoying if the ColumnFamily is recreated because it will > be empty, but row cache won't. > Note : this is similar to a "TRUNCATE" command (and TRUNCATE does invalidate > the cache...) > Attached is patch which removes the rows of the currently dropped CF from row > cache. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (CASSANDRA-6351) When dropping a CF, row cache is not invalidated
Fabien Rousseau created CASSANDRA-6351: -- Summary: When dropping a CF, row cache is not invalidated Key: CASSANDRA-6351 URL: https://issues.apache.org/jira/browse/CASSANDRA-6351 Project: Cassandra Issue Type: Improvement Reporter: Fabien Rousseau Priority: Trivial Attachments: 0001-invalidate-row-cache-when-dropping-CF.patch When dropping a ColumnFamily with row cache enabled, the row cache is not invalidated for that CF. This can be a bit annoying if the ColumnFamily is recreated, because the new CF will be empty but the row cache won't be. Note: this is similar to a "TRUNCATE" command (and TRUNCATE does invalidate the cache...) Attached is a patch which removes the rows of the dropped CF from the row cache. -- This message was sent by Atlassian JIRA (v6.1#6144)
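The fix described in this ticket boils down to scanning the row cache for entries belonging to the dropped CF and evicting them, the same way TRUNCATE already does. A minimal, self-contained sketch of that idea (generic Java; `RowCacheSketch`, `CacheKey`, and `invalidateColumnFamily` are illustrative names, not Cassandra's actual classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model of a row cache keyed by (cfId, rowKey); not the real Cassandra code.
class RowCacheSketch {
    // Composite key: column family id + row key.
    record CacheKey(String cfId, String rowKey) {}

    private final Map<CacheKey, String> cache = new ConcurrentHashMap<>();

    void put(String cfId, String rowKey, String row) {
        cache.put(new CacheKey(cfId, rowKey), row);
    }

    String get(String cfId, String rowKey) {
        return cache.get(new CacheKey(cfId, rowKey));
    }

    // On DROP (as on TRUNCATE), evict every cached row belonging to the CF,
    // so a recreated CF with the same name does not serve stale rows.
    void invalidateColumnFamily(String cfId) {
        cache.keySet().removeIf(k -> k.cfId().equals(cfId));
    }

    int size() {
        return cache.size();
    }
}
```

The key point is that eviction is keyed on the CF identifier, so rows cached for other column families survive the drop.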
[1/3] git commit: Make the CL native protocol code match the one in 2.0
Updated Branches: refs/heads/trunk c41eedf22 -> fab27bd59 Make the CL native protocol code match the on in 2.0 patch by slebresne; reviewed by jasobrown for CASSANDRA-6347 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d7b5671 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d7b5671 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d7b5671 Branch: refs/heads/trunk Commit: 9d7b5671ff725cb5e2749bac763bc6f5f5fd99dd Parents: 4c08800 Author: Sylvain Lebresne Authored: Fri Nov 15 15:36:22 2013 +0100 Committer: Sylvain Lebresne Committed: Fri Nov 15 15:36:22 2013 +0100 -- CHANGES.txt| 2 ++ src/java/org/apache/cassandra/db/ConsistencyLevel.java | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d7b5671/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 39a88f8..9ee6657 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -18,6 +18,8 @@ * Fix missing one row in reverse query (CASSANDRA-6330) * Fix reading expired row value from row cache (CASSANDRA-6325) * Fix AssertionError when doing set element deletion (CASSANDRA-6341) + * Make CL code for the native protocol match the one in C* 2.0 + (CASSANDRA-6347) 1.2.11 http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d7b5671/src/java/org/apache/cassandra/db/ConsistencyLevel.java -- diff --git a/src/java/org/apache/cassandra/db/ConsistencyLevel.java b/src/java/org/apache/cassandra/db/ConsistencyLevel.java index 25fb25b..4d72767 100644 --- a/src/java/org/apache/cassandra/db/ConsistencyLevel.java +++ b/src/java/org/apache/cassandra/db/ConsistencyLevel.java @@ -48,7 +48,7 @@ public enum ConsistencyLevel ALL (5), LOCAL_QUORUM(6, true), EACH_QUORUM (7), -LOCAL_ONE (8, true); +LOCAL_ONE (10, true); private static final Logger logger = LoggerFactory.getLogger(ConsistencyLevel.class);
[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Conflicts: src/java/org/apache/cassandra/db/ConsistencyLevel.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09b2470b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09b2470b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09b2470b Branch: refs/heads/trunk Commit: 09b2470b33c47d14e4f6ef6befd4eb8f9ec5aacf Parents: 6fe83cd 9d7b567 Author: Sylvain Lebresne Authored: Fri Nov 15 15:41:38 2013 +0100 Committer: Sylvain Lebresne Committed: Fri Nov 15 15:41:38 2013 +0100 -- CHANGES.txt| 2 ++ src/java/org/apache/cassandra/db/ConsistencyLevel.java | 1 - 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/09b2470b/CHANGES.txt -- diff --cc CHANGES.txt index 17e97d5,9ee6657..b6d2e73 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -42,42 -18,11 +42,44 @@@ Merged from 1.2 * Fix missing one row in reverse query (CASSANDRA-6330) * Fix reading expired row value from row cache (CASSANDRA-6325) * Fix AssertionError when doing set element deletion (CASSANDRA-6341) + * Make CL code for the native protocol match the one in C* 2.0 +(CASSANDRA-6347) -1.2.11 +2.0.2 + * Update FailureDetector to use nanontime (CASSANDRA-4925) + * Fix FileCacheService regressions (CASSANDRA-6149) + * Never return WriteTimeout for CL.ANY (CASSANDRA-6032) + * Fix race conditions in bulk loader (CASSANDRA-6129) + * Add configurable metrics reporting (CASSANDRA-4430) + * drop queries exceeding a configurable number of tombstones (CASSANDRA-6117) + * Track and persist sstable read activity (CASSANDRA-5515) + * Fixes for speculative retry (CASSANDRA-5932, CASSANDRA-6194) + * Improve memory usage of metadata min/max column names (CASSANDRA-6077) + * Fix thrift validation refusing row markers on CQL3 tables (CASSANDRA-6081) + * Fix insertion of collections with CAS (CASSANDRA-6069) + * Correctly send metadata on SELECT 
COUNT (CASSANDRA-6080) + * Track clients' remote addresses in ClientState (CASSANDRA-6070) + * Create snapshot dir if it does not exist when migrating + leveled manifest (CASSANDRA-6093) + * make sequential nodetool repair the default (CASSANDRA-5950) + * Add more hooks for compaction strategy implementations (CASSANDRA-6111) + * Fix potential NPE on composite 2ndary indexes (CASSANDRA-6098) + * Delete can potentially be skipped in batch (CASSANDRA-6115) + * Allow alter keyspace on system_traces (CASSANDRA-6016) + * Disallow empty column names in cql (CASSANDRA-6136) + * Use Java7 file-handling APIs and fix file moving on Windows (CASSANDRA-5383) + * Save compaction history to system keyspace (CASSANDRA-5078) + * Fix NPE if StorageService.getOperationMode() is executed before full startup (CASSANDRA-6166) + * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212) + * Add reloadtriggers command to nodetool (CASSANDRA-4949) + * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139) + * Fix sstable loader (CASSANDRA-6205) + * Reject bootstrapping if the node already exists in gossip (CASSANDRA-5571) + * Fix NPE while loading paxos state (CASSANDRA-6211) + * cqlsh: add SHOW SESSION command (CASSANDRA-6228) +Merged from 1.2: + * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114) * Add a warning for small LCS sstable size (CASSANDRA-6191) * Add ability to list specific KS/CF combinations in nodetool cfstats (CASSANDRA-4191) * Mark CF clean if a mutation raced the drop and got it marked dirty (CASSANDRA-5946) http://git-wip-us.apache.org/repos/asf/cassandra/blob/09b2470b/src/java/org/apache/cassandra/db/ConsistencyLevel.java -- diff --cc src/java/org/apache/cassandra/db/ConsistencyLevel.java index 4fffc8a,4d72767..cbb4bb1 --- a/src/java/org/apache/cassandra/db/ConsistencyLevel.java +++ b/src/java/org/apache/cassandra/db/ConsistencyLevel.java @@@ -37,7 -37,7 +37,6 @@@ import org.apache.cassandra.locator.Abs import 
org.apache.cassandra.locator.NetworkTopologyStrategy; import org.apache.cassandra.transport.ProtocolException; -- public enum ConsistencyLevel { ANY (0),
[3/3] git commit: Merge branch 'cassandra-2.0' into trunk
Merge branch 'cassandra-2.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fab27bd5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fab27bd5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fab27bd5 Branch: refs/heads/trunk Commit: fab27bd59792b1e00cf7b6bc6035f2a86891bd9c Parents: c41eedf 09b2470 Author: Sylvain Lebresne Authored: Fri Nov 15 15:42:09 2013 +0100 Committer: Sylvain Lebresne Committed: Fri Nov 15 15:42:09 2013 +0100 -- CHANGES.txt| 2 ++ src/java/org/apache/cassandra/db/ConsistencyLevel.java | 1 - 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/fab27bd5/CHANGES.txt --
[1/2] git commit: Make the CL native protocol code match the one in 2.0
Updated Branches: refs/heads/cassandra-2.0 6fe83cdac -> 09b2470b3 Make the CL native protocol code match the on in 2.0 patch by slebresne; reviewed by jasobrown for CASSANDRA-6347 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d7b5671 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d7b5671 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d7b5671 Branch: refs/heads/cassandra-2.0 Commit: 9d7b5671ff725cb5e2749bac763bc6f5f5fd99dd Parents: 4c08800 Author: Sylvain Lebresne Authored: Fri Nov 15 15:36:22 2013 +0100 Committer: Sylvain Lebresne Committed: Fri Nov 15 15:36:22 2013 +0100 -- CHANGES.txt| 2 ++ src/java/org/apache/cassandra/db/ConsistencyLevel.java | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d7b5671/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 39a88f8..9ee6657 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -18,6 +18,8 @@ * Fix missing one row in reverse query (CASSANDRA-6330) * Fix reading expired row value from row cache (CASSANDRA-6325) * Fix AssertionError when doing set element deletion (CASSANDRA-6341) + * Make CL code for the native protocol match the one in C* 2.0 + (CASSANDRA-6347) 1.2.11 http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d7b5671/src/java/org/apache/cassandra/db/ConsistencyLevel.java -- diff --git a/src/java/org/apache/cassandra/db/ConsistencyLevel.java b/src/java/org/apache/cassandra/db/ConsistencyLevel.java index 25fb25b..4d72767 100644 --- a/src/java/org/apache/cassandra/db/ConsistencyLevel.java +++ b/src/java/org/apache/cassandra/db/ConsistencyLevel.java @@ -48,7 +48,7 @@ public enum ConsistencyLevel ALL (5), LOCAL_QUORUM(6, true), EACH_QUORUM (7), -LOCAL_ONE (8, true); +LOCAL_ONE (10, true); private static final Logger logger = LoggerFactory.getLogger(ConsistencyLevel.class);
[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0
Merge branch 'cassandra-1.2' into cassandra-2.0 Conflicts: src/java/org/apache/cassandra/db/ConsistencyLevel.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09b2470b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09b2470b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09b2470b Branch: refs/heads/cassandra-2.0 Commit: 09b2470b33c47d14e4f6ef6befd4eb8f9ec5aacf Parents: 6fe83cd 9d7b567 Author: Sylvain Lebresne Authored: Fri Nov 15 15:41:38 2013 +0100 Committer: Sylvain Lebresne Committed: Fri Nov 15 15:41:38 2013 +0100 -- CHANGES.txt| 2 ++ src/java/org/apache/cassandra/db/ConsistencyLevel.java | 1 - 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/09b2470b/CHANGES.txt -- diff --cc CHANGES.txt index 17e97d5,9ee6657..b6d2e73 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -42,42 -18,11 +42,44 @@@ Merged from 1.2 * Fix missing one row in reverse query (CASSANDRA-6330) * Fix reading expired row value from row cache (CASSANDRA-6325) * Fix AssertionError when doing set element deletion (CASSANDRA-6341) + * Make CL code for the native protocol match the one in C* 2.0 +(CASSANDRA-6347) -1.2.11 +2.0.2 + * Update FailureDetector to use nanontime (CASSANDRA-4925) + * Fix FileCacheService regressions (CASSANDRA-6149) + * Never return WriteTimeout for CL.ANY (CASSANDRA-6032) + * Fix race conditions in bulk loader (CASSANDRA-6129) + * Add configurable metrics reporting (CASSANDRA-4430) + * drop queries exceeding a configurable number of tombstones (CASSANDRA-6117) + * Track and persist sstable read activity (CASSANDRA-5515) + * Fixes for speculative retry (CASSANDRA-5932, CASSANDRA-6194) + * Improve memory usage of metadata min/max column names (CASSANDRA-6077) + * Fix thrift validation refusing row markers on CQL3 tables (CASSANDRA-6081) + * Fix insertion of collections with CAS (CASSANDRA-6069) + * Correctly send metadata on 
SELECT COUNT (CASSANDRA-6080) + * Track clients' remote addresses in ClientState (CASSANDRA-6070) + * Create snapshot dir if it does not exist when migrating + leveled manifest (CASSANDRA-6093) + * make sequential nodetool repair the default (CASSANDRA-5950) + * Add more hooks for compaction strategy implementations (CASSANDRA-6111) + * Fix potential NPE on composite 2ndary indexes (CASSANDRA-6098) + * Delete can potentially be skipped in batch (CASSANDRA-6115) + * Allow alter keyspace on system_traces (CASSANDRA-6016) + * Disallow empty column names in cql (CASSANDRA-6136) + * Use Java7 file-handling APIs and fix file moving on Windows (CASSANDRA-5383) + * Save compaction history to system keyspace (CASSANDRA-5078) + * Fix NPE if StorageService.getOperationMode() is executed before full startup (CASSANDRA-6166) + * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212) + * Add reloadtriggers command to nodetool (CASSANDRA-4949) + * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139) + * Fix sstable loader (CASSANDRA-6205) + * Reject bootstrapping if the node already exists in gossip (CASSANDRA-5571) + * Fix NPE while loading paxos state (CASSANDRA-6211) + * cqlsh: add SHOW SESSION command (CASSANDRA-6228) +Merged from 1.2: + * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114) * Add a warning for small LCS sstable size (CASSANDRA-6191) * Add ability to list specific KS/CF combinations in nodetool cfstats (CASSANDRA-4191) * Mark CF clean if a mutation raced the drop and got it marked dirty (CASSANDRA-5946) http://git-wip-us.apache.org/repos/asf/cassandra/blob/09b2470b/src/java/org/apache/cassandra/db/ConsistencyLevel.java -- diff --cc src/java/org/apache/cassandra/db/ConsistencyLevel.java index 4fffc8a,4d72767..cbb4bb1 --- a/src/java/org/apache/cassandra/db/ConsistencyLevel.java +++ b/src/java/org/apache/cassandra/db/ConsistencyLevel.java @@@ -37,7 -37,7 +37,6 @@@ import org.apache.cassandra.locator.Abs import 
org.apache.cassandra.locator.NetworkTopologyStrategy; import org.apache.cassandra.transport.ProtocolException; -- public enum ConsistencyLevel { ANY (0),
git commit: Make the CL native protocol code match the one in 2.0
Updated Branches: refs/heads/cassandra-1.2 4c08800b4 -> 9d7b5671f Make the CL native protocol code match the on in 2.0 patch by slebresne; reviewed by jasobrown for CASSANDRA-6347 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d7b5671 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d7b5671 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d7b5671 Branch: refs/heads/cassandra-1.2 Commit: 9d7b5671ff725cb5e2749bac763bc6f5f5fd99dd Parents: 4c08800 Author: Sylvain Lebresne Authored: Fri Nov 15 15:36:22 2013 +0100 Committer: Sylvain Lebresne Committed: Fri Nov 15 15:36:22 2013 +0100 -- CHANGES.txt| 2 ++ src/java/org/apache/cassandra/db/ConsistencyLevel.java | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d7b5671/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 39a88f8..9ee6657 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -18,6 +18,8 @@ * Fix missing one row in reverse query (CASSANDRA-6330) * Fix reading expired row value from row cache (CASSANDRA-6325) * Fix AssertionError when doing set element deletion (CASSANDRA-6341) + * Make CL code for the native protocol match the one in C* 2.0 + (CASSANDRA-6347) 1.2.11 http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d7b5671/src/java/org/apache/cassandra/db/ConsistencyLevel.java -- diff --git a/src/java/org/apache/cassandra/db/ConsistencyLevel.java b/src/java/org/apache/cassandra/db/ConsistencyLevel.java index 25fb25b..4d72767 100644 --- a/src/java/org/apache/cassandra/db/ConsistencyLevel.java +++ b/src/java/org/apache/cassandra/db/ConsistencyLevel.java @@ -48,7 +48,7 @@ public enum ConsistencyLevel ALL (5), LOCAL_QUORUM(6, true), EACH_QUORUM (7), -LOCAL_ONE (8, true); +LOCAL_ONE (10, true); private static final Logger logger = LoggerFactory.getLogger(ConsistencyLevel.class);
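The one-line diff above changes LOCAL_ONE's native-protocol code from 8 to 10 on the 1.2 branch so it matches 2.0, where codes 8 and 9 are already taken by SERIAL and LOCAL_SERIAL. A sketch of the resulting wire-code mapping (a simplified enum, not the actual `ConsistencyLevel` class, which also carries DC-local flags):

```java
// Illustrative sketch of the native-protocol consistency-level wire codes
// after CASSANDRA-6347: LOCAL_ONE serializes as 10 on both branches.
enum CL {
    ANY(0), ONE(1), TWO(2), THREE(3), QUORUM(4), ALL(5),
    LOCAL_QUORUM(6), EACH_QUORUM(7),
    SERIAL(8), LOCAL_SERIAL(9), // already used by 2.0, hence the clash with the old LOCAL_ONE(8)
    LOCAL_ONE(10);              // was 8 on the 1.2 branch before this patch

    final int code;

    CL(int code) {
        this.code = code;
    }

    // Decode a wire code back to a consistency level, as the protocol layer must.
    static CL fromCode(int code) {
        for (CL cl : values())
            if (cl.code == code)
                return cl;
        throw new IllegalArgumentException("Unknown consistency level code " + code);
    }
}
```

This is why the mismatch mattered: a 2.0 coordinator decoding code 8 from a driver built against 1.2's table would have read SERIAL where the client meant LOCAL_ONE.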
[jira] [Updated] (CASSANDRA-6333) ArrayIndexOutOfBound when using count(*) with over 10,000 rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-6333: Attachment: 6333.txt OK, this is due to the fact that SP.getRangeSlice might return more results than asked for (due to reconciliation; you need more than one node), which was confusing the pager logic. Attaching a patch so that the pager trims the result in that case. > ArrayIndexOutOfBound when using count(*) with over 10,000 rows > -- > > Key: CASSANDRA-6333 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6333 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.0.2, Ubuntu 12.04.3 LTS, Oracle Java 1.7.0_21 >Reporter: Tyler Tolley >Assignee: Sylvain Lebresne > Fix For: 2.0.3 > > Attachments: 6333.txt > > > We've been getting a TSocket read 0 bytes error when we try to run SELECT > count(*) FROM if the table has over 10,000 rows. > I've been able to reproduce the problem by using cassandra-stress to insert > different numbers of rows. When I insert under 10,000, the count is returned. > When I insert exactly 10,000, I get a message that my results were limited to > 10,000 by default. If I insert 10,001, I get the exception below. > {code} > ERROR [Thrift:4] 2013-11-12 09:54:04,850 CustomTThreadPoolServer.java (line > 212) Error occurred during processing of message. 
> java.lang.ArrayIndexOutOfBoundsException: -1 > at java.util.ArrayList.elementData(ArrayList.java:371) > at java.util.ArrayList.remove(ArrayList.java:448) > at org.apache.cassandra.cql3.ResultSet.trim(ResultSet.java:92) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:848) > at > org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:196) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:163) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:57) > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:129) > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:145) > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:136) > at > org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1936) > at > org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4394) > at > org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4378) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:722) > {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
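The fix described above amounts to trimming the reconciled result set back down to the requested page size before the pager does its count arithmetic, so the "extra" rows from reconciliation can't push an index out of bounds. A minimal sketch of that trimming step (generic Java; `PageTrimmer` and `trimToPageSize` are hypothetical names, not the attached patch):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative: if replica reconciliation hands back more rows than the page
// asked for, keep only the first pageSize rows; the surplus will be re-fetched
// by the next page rather than confusing the pager's bookkeeping.
class PageTrimmer {
    static <T> List<T> trimToPageSize(List<T> rows, int pageSize) {
        if (rows.size() <= pageSize)
            return rows;
        // Copy the prefix so the caller gets an independent, correctly-sized list.
        return new ArrayList<>(rows.subList(0, pageSize));
    }
}
```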
[jira] [Resolved] (CASSANDRA-6095) INSERT query adds new value to collection type
[ https://issues.apache.org/jira/browse/CASSANDRA-6095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne resolved CASSANDRA-6095. - Resolution: Cannot Reproduce Thanks for confirming it. > INSERT query adds new value to collection type > -- > > Key: CASSANDRA-6095 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6095 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Ngoc Minh Vo >Assignee: Sylvain Lebresne > > Hello, > I don't know if somebody has reported this regression in v2.0.1: INSERT query > adds new value to collection type (eg. List) instead of replacing it. > CQL3 docs: > http://cassandra.apache.org/doc/cql3/CQL.html#collections > {quote} > Note: An INSERT will always replace the entire list. > {quote} > We do not encounter this issue with v1.2.9. > Could you please have a look at the issue? > Thanks for your help. > Best regards, > Minh -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (CASSANDRA-6095) INSERT query adds new value to collection type
[ https://issues.apache.org/jira/browse/CASSANDRA-6095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823666#comment-13823666 ] Ngoc Minh Vo commented on CASSANDRA-6095: - It looks like the issue is resolved in v2.0.2. Thanks for your help. > INSERT query adds new value to collection type > -- > > Key: CASSANDRA-6095 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6095 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Ngoc Minh Vo >Assignee: Sylvain Lebresne > > Hello, > I don't know if somebody has reported this regression in v2.0.1: an INSERT query > adds a new value to a collection type (e.g. List) instead of replacing it. > CQL3 docs: > http://cassandra.apache.org/doc/cql3/CQL.html#collections > {quote} > Note: An INSERT will always replace the entire list. > {quote} > We do not encounter this issue with v1.2.9. > Could you please have a look at the issue? > Thanks for your help. > Best regards, > Minh -- This message was sent by Atlassian JIRA (v6.1#6144)
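The semantics this ticket hinges on, per the CQL3 docs quoted above: an INSERT of a list column replaces the whole list, while an UPDATE of the form `SET col = col + [x]` appends. A toy model of the distinction (generic Java; `ListColumn` is an illustrative name, not a Cassandra class):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model of CQL3 list-column semantics per the documentation:
// INSERT replaces the entire list; "col = col + [elem]" appends.
class ListColumn {
    private final Map<String, List<String>> rows = new HashMap<>();

    // INSERT: the entire list value is replaced.
    void insert(String key, List<String> value) {
        rows.put(key, new ArrayList<>(value));
    }

    // UPDATE ... SET col = col + [elem]: append semantics.
    void append(String key, String elem) {
        rows.computeIfAbsent(key, k -> new ArrayList<>()).add(elem);
    }

    List<String> get(String key) {
        return rows.get(key);
    }
}
```

The bug report was that v2.0.1 behaved like `append` on an INSERT; the reporter confirms v2.0.2 restores the replace behavior.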
[jira] [Updated] (CASSANDRA-6350) Timestamp-with-timezone type
[ https://issues.apache.org/jira/browse/CASSANDRA-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6350: -- Priority: Minor (was: Major) Issue Type: Wish (was: Bug) Summary: Timestamp-with-timezone type (was: cqlsh: shows the timestamp column value in local time instead of inserted timestamp value) Edited title to reflect what you are really asking for. > Timestamp-with-timezone type > > > Key: CASSANDRA-6350 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6350 > Project: Cassandra > Issue Type: Wish > Components: Core >Reporter: Ramkumar S >Priority: Minor > > Create a table with a timestamp column. > Insert a value from US time Zone. > Try querying the value from a different time zone like India. > The timestamp column value shown in the select query result is converted to > Indian Local time instead of showing the actual value. > This becomes a problem when we want to narrow down the query using where > condition. -- This message was sent by Atlassian JIRA (v6.1#6144)
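The behavior the reporter observed is rendering, not storage: a Cassandra timestamp is a zone-less instant (millis since epoch), and cqlsh formats that same instant in the client's local zone. A small self-contained demonstration (plain `java.time`; the class and method names are illustrative):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Illustrative: one stored instant, two different client-side renderings.
// The stored value never changes; only the display zone does.
class TimestampDisplay {
    static String render(long epochMillis, String zone) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm")
                .withZone(ZoneId.of(zone))
                .format(Instant.ofEpochMilli(epochMillis));
    }
}
```

Rendering epoch millisecond 0 in "UTC" versus "Asia/Kolkata" yields different strings for the same stored value, which is exactly why WHERE clauses should compare the underlying instant (or an explicit UTC literal) rather than a locally-formatted string.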
[jira] [Commented] (CASSANDRA-6348) TimeoutException is thrown if a CQL query allows data filtering, the index is too big, and the data can't be found in the base CF after filtering
[ https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823528#comment-13823528 ] Sylvain Lebresne commented on CASSANDRA-6348: - What version is that test case against? Because requiring ALLOW FILTERING is definitely the intent of the following code from SelectStatement:
{noformat}
// Make sure this query is allowed (note: only key range can involve filtering underneath)
if (!parameters.allowFiltering && stmt.isKeyRange)
{
    // We will potentially filter data if either:
    //  - Have more than one IndexExpression
    //  - Have no index expression and the column filter is not the identity
    if (stmt.metadataRestrictions.size() > 1
        || (stmt.metadataRestrictions.isEmpty() && !stmt.columnFilterIsIdentity()))
        throw new InvalidRequestException("Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. "
                                        + "If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING");
}
{noformat}
> TimeoutException is thrown if a CQL query allows data filtering, the index is too > big, and the data can't be found in the base CF after filtering > -- > > Key: CASSANDRA-6348 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6348 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Alex Liu >Assignee: Alex Liu > > If the index row is too big and filtering can't find a matching CQL row in the base > CF, it keeps scanning the index row and retrieving from the base CF until the index row > is scanned completely, which may take too long, and the thrift server returns a > TimeoutException. This is one of the reasons why we shouldn't index a column > if the index is too big. > Merging multiple indexes can resolve the case where there are only EQUAL > clauses (CASSANDRA-6048 addresses this). > If the query has non-EQUAL clauses, we still need to do data filtering, which > might lead to a timeout exception. 
> We can either disable those kinds of queries or WARN the user that data > filtering might lead to a timeout exception or OOM. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)
[ https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Mazursky updated CASSANDRA-4687: Reproduced In: 2.0.2 Just ran into this issue too with C* 2.0.2. {noformat} ERROR [ReadStage:124] 2013-11-15 12:59:48,680 CassandraDaemon.java (line 187) Exception in thread Thread[ReadStage:124,5,main] java.lang.AssertionError: DecoratedKey(6601594501494835072, VERY_LONG_HEX_STRING) != DecoratedKey(-5016939311527297185, 2f313439) in /var/lib/cassandra/data/keyspace/table/keyspace-table-jb-1-Data.db at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:114) at org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:62) at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:87) at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62) at org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:120) at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53) at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1467) at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1286) at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332) at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55) at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) {noformat} > Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk) > --- > > Key: CASSANDRA-4687 > URL: 
https://issues.apache.org/jira/browse/CASSANDRA-4687 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: CentOS 6.3 64-bit, Oracle JRE 1.6.0.33 64-bit, single > node cluster >Reporter: Leonid Shalupov >Priority: Minor > Attachments: 4687-debugging.txt > > > Under heavy write load sometimes cassandra fails with assertion error. > git bisect leads to commit 295aedb278e7a495213241b66bc46d763fd4ce66. > works fine if global key/row caches disabled in code. > {quote} > java.lang.AssertionError: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk) in > /var/lib/cassandra/data/...-he-1-Data.db > at > org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:60) > at > org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67) > at > org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79) > at > org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256) > at > org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64) > at > org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1345) > at > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1207) > at > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1142) > at org.apache.cassandra.db.Table.getRow(Table.java:378) > at > org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69) > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:819) > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1253) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > {quote} -- This message was sent by 
Atlassian JIRA (v6.1#6144)