[jira] [Comment Edited] (CASSANDRA-15124) Virtual tables API endpoints for sidecar

2019-05-16 Thread Chris Lohfink (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841875#comment-16841875
 ] 

Chris Lohfink edited comment on CASSANDRA-15124 at 5/17/19 3:31 AM:


Review feedback from [~andrew.tolbert] added:

* Somewhat confusing that the title for the threadpools endpoint is "Thread stats"; I accidentally found myself typing /api/v1/threadstats. Can probably just name it "Thread pools".
* "List of many of the thread pools in Cassandra and their state" reads a bit awkwardly; could just borrow the wording from the tpstats docs: "Provides usage statistics of thread pools."
* With the JSON API, one thing that might be nice, instead of returning an array of arrays, would be to key by a field and return a JSON object whose keys are the values of the primary field, i.e. it would be easier to do obj['tombstone_warn_threshold'] than to parse through the array to find the relevant key. Would require a further transformation but could be nice (see change in http://localhost:9043/docs/swagger/#/visibility/settings; a sketch of the idea follows after this list).
* Same for threadpools: key off the name, e.g. result['MutationStage'] (change in http://localhost:9043/docs/swagger/#/visibility/threadpools).
* When the sidecar starts, it would be nice to log something indicating how to get to the docs endpoint.
* CQLSession.address is public when you probably didn't mean it to be, and some field variables could be made final.
* At CQLSession line 159, log the exception instead of just its message so you get a full stack trace (logger.error("Cassandra configuration is incorrect.", e);).




was (Author: cnlwsu):
Review feedback from [~andrew.tolbert] added:

* Somewhat confusing that the title for threadpools endpoint is Thread stats, 
accidentally found myself typing in /api/v1/threadstats.  Can probably just 
name it Thread pools
* List of many of the thread pools in Cassandra and their state seems kinda 
awkward, could just steal this from tpstats docs: Provides usage statistics of 
thread pools.
* With the json API, one thing that might be nice is that instead of returning 
an array of arrays, would be to just key by a field and return a json object 
with the keys being the value of the primary field.  I.E. it'd be easier to do 
obj['tombstone_warn_threshold'] than to parse through the array to find the 
relevant key.  Would require further transformation but could be nice.
* same for threadpools to key off name result['MutationStage']
* When start sidecar would be nice to log something indicating how to get to 
the docs endpoint.
* CQLSession.address is public when you probably didn't mean it to be, and some 
field variables could be made final.
* At CQLSession line 159, should log the exception instead of the message so 
you can get a full stack trace (logger.error("Cassandra configuration is 
incorrect.", e);)



> Virtual tables API endpoints for sidecar
> 
>
> Key: CASSANDRA-15124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15124
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Sidecar
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Expose the existing virtual tables in sidecar api



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15124) Virtual tables API endpoints for sidecar

2019-05-16 Thread Chris Lohfink (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-15124:
--
Reviewers: Andy Tolbert, Dinesh Joshi, Vinay Chella  (was: Dinesh Joshi, 
Vinay Chella)

> Virtual tables API endpoints for sidecar
> 
>
> Key: CASSANDRA-15124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15124
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Sidecar
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Expose the existing virtual tables in sidecar api






[jira] [Commented] (CASSANDRA-15124) Virtual tables API endpoints for sidecar

2019-05-16 Thread Chris Lohfink (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841875#comment-16841875
 ] 

Chris Lohfink commented on CASSANDRA-15124:
---

Review feedback from [~andrew.tolbert] added:

* Somewhat confusing that the title for threadpools endpoint is Thread stats, 
accidentally found myself typing in /api/v1/threadstats.  Can probably just 
name it Thread pools
* List of many of the thread pools in Cassandra and their state seems kinda 
awkward, could just steal this from tpstats docs: Provides usage statistics of 
thread pools.
* With the json API, one thing that might be nice is that instead of returning 
an array of arrays, would be to just key by a field and return a json object 
with the keys being the value of the primary field.  I.E. it'd be easier to do 
obj['tombstone_warn_threshold'] than to parse through the array to find the 
relevant key.  Would require further transformation but could be nice.
* same for threadpools to key off name result['MutationStage']
* When start sidecar would be nice to log something indicating how to get to 
the docs endpoint.
* CQLSession.address is public when you probably didn't mean it to be, and some 
field variables could be made final.
* At CQLSession line 159, should log the exception instead of the message so 
you can get a full stack trace (logger.error("Cassandra configuration is 
incorrect.", e);)



> Virtual tables API endpoints for sidecar
> 
>
> Key: CASSANDRA-15124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15124
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Sidecar
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Expose the existing virtual tables in sidecar api






[jira] [Commented] (CASSANDRA-10190) Python 3 support for cqlsh

2019-05-16 Thread Patrick Bannister (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841844#comment-16841844
 ] 

Patrick Bannister commented on CASSANDRA-10190:
---

[~spo...@gmail.com], this has been progressing, and we'd like your review and 
feedback. I have a branch from cassandra trunk that will make cqlsh, cqlshlib, 
and the accompanying Python unit tests work for Python 2.7 and 3.6: 
https://github.com/ptbannister/cassandra/tree/10190-rebase-20190329

If the cqlsh and cqlshlib changes are good, then we'd circle back to update the 
dtests too, but in a separate PR.

> Python 3 support for cqlsh
> --
>
> Key: CASSANDRA-10190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Tools
>Reporter: Andrew Pennebaker
>Assignee: Patrick Bannister
>Priority: Normal
>  Labels: cqlsh
> Attachments: coverage_notes.txt
>
>
> Users who operate in a Python 3 environment may have trouble launching cqlsh. 
> Could we please update cqlsh's syntax to run in Python 3?
> As a workaround, users can setup pyenv, and cd to a directory with a 
> .python-version containing "2.7". But it would be nice if cqlsh supported 
> modern Python versions out of the box.






[jira] [Comment Edited] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-05-16 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841835#comment-16841835
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-15013 at 5/17/19 12:54 AM:
--

Incorporated the feedback from your branch (naming and TODOs) and from the JIRA comments.
Here is the updated change: 
https://github.com/sumanth-pasupuleti/cassandra/commit/45e31829e839d7e74b08566d7e501a46ed818330.

A couple of major changes:
* Dispatcher never queries the map to get the EndpointPayloadTracker; instead it uses the reference it already has.
* FlushItem gets a reference to the corresponding Dispatcher, so it calls releaseItem on the right Dispatcher.
* I implemented tryRef and release to manage the refCount on EndpointPayloadTracker, which "should" be thread safe (a rough sketch of the pattern is below).


All UTs and DTests pass.
https://circleci.com/workflow-run/bb6b2eb6-daa6-41c1-9a3d-44b53bc7fb50
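
As a rough illustration of the tryRef/release pattern mentioned above (a minimal sketch under assumptions; the class name and cleanup hook are hypothetical, not the actual EndpointPayloadTracker code):

{code:java}
// Hypothetical sketch of a thread-safe tryRef/release reference count.
// tryRef() refuses to resurrect a tracker whose count already hit zero,
// and release() runs cleanup exactly once when the last reference goes away.
import java.util.concurrent.atomic.AtomicInteger;

final class RefCountedTracker
{
    private final AtomicInteger refCount = new AtomicInteger(1);

    boolean tryRef()
    {
        while (true)
        {
            int current = refCount.get();
            if (current == 0)
                return false;                        // already released; caller must create a new tracker
            if (refCount.compareAndSet(current, current + 1))
                return true;
        }
    }

    void release()
    {
        if (refCount.decrementAndGet() == 0)
            onLastRelease();                         // e.g. remove the tracker from the shared map
    }

    private void onLastRelease()
    {
        // cleanup hook, intentionally left empty in this sketch
    }
}
{code}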



was (Author: sumanth.pasupuleti):
Incorporated the feedback from your branch (naming and TODOs) and from the jira 
comments.
Here is the updated change: 
https://github.com/sumanth-pasupuleti/cassandra/commit/45e31829e839d7e74b08566d7e501a46ed818330.

A couple of major changes
* Dispatcher would never query the map for getting EndpointPayloadTracker, 
rather it uses the reference it already has.
* FlushItem gets a reference to the corresponding Dispatcher, so it calls 
releaseItem on the right Dispatcher.

All UTs and DTests pass.
https://circleci.com/workflow-run/bb6b2eb6-daa6-41c1-9a3d-44b53bc7fb50


> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-05-16 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841835#comment-16841835
 ] 

Sumanth Pasupuleti commented on CASSANDRA-15013:


[~benedict] incorporated the feedback from your branch (naming and TODOs) and 
from the jira comments.
Here is the updated change: 
https://github.com/sumanth-pasupuleti/cassandra/commit/45e31829e839d7e74b08566d7e501a46ed818330.

A couple of major changes
* Dispatcher would never query the map for getting EndpointPayloadTracker, 
rather it uses the reference it already has.
* FlushItem gets a reference to the corresponding Dispatcher, so it calls 
releaseItem on the right Dispatcher.

All UTs and DTests pass.
https://circleci.com/workflow-run/bb6b2eb6-daa6-41c1-9a3d-44b53bc7fb50


> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Comment Edited] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-05-16 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841835#comment-16841835
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-15013 at 5/17/19 12:53 AM:
--

Incorporated the feedback from your branch (naming and TODOs) and from the jira 
comments.
Here is the updated change: 
https://github.com/sumanth-pasupuleti/cassandra/commit/45e31829e839d7e74b08566d7e501a46ed818330.

A couple of major changes
* Dispatcher would never query the map for getting EndpointPayloadTracker, 
rather it uses the reference it already has.
* FlushItem gets a reference to the corresponding Dispatcher, so it calls 
releaseItem on the right Dispatcher.

All UTs and DTests pass.
https://circleci.com/workflow-run/bb6b2eb6-daa6-41c1-9a3d-44b53bc7fb50



was (Author: sumanth.pasupuleti):
[~benedict] incorporated the feedback from your branch (naming and TODOs) and 
from the jira comments.
Here is the updated change: 
https://github.com/sumanth-pasupuleti/cassandra/commit/45e31829e839d7e74b08566d7e501a46ed818330.

A couple of major changes
* Dispatcher would never query the map for getting EndpointPayloadTracker, 
rather it uses the reference it already has.
* FlushItem gets a reference to the corresponding Dispatcher, so it calls 
releaseItem on the right Dispatcher.

All UTs and DTests pass.
https://circleci.com/workflow-run/bb6b2eb6-daa6-41c1-9a3d-44b53bc7fb50


> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any checks on queue size, nor with any checks on netty outbound buffer to 
> check the isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.






[jira] [Updated] (CASSANDRA-14654) Reduce heap pressure during compactions

2019-05-16 Thread Jeff Jirsa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14654:
---
Fix Version/s: (was: 4.x)
   4.0

> Reduce heap pressure during compactions
> ---
>
> Key: CASSANDRA-14654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14654
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>  Labels: Performance, pull-request-available
> Fix For: 4.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Small partition compactions are painfully slow with a lot of overhead per 
> partition. There also tends to be an excess of objects created (ie 
> 200-700mb/s) per compaction thread.
> The EncoderStats walks through all the partitions and with mergeWith it will 
> create a new one per partition as it walks the potentially millions of 
> partitions. In a test scenario of about 600byte partitions and a couple 100mb 
> of data this consumed ~16% of the heap pressure. Changing this to instead 
> mutably track the min values and create one in a EncodingStats.Collector 
> brought this down considerably (but not 100% since the 
> UnfilteredRowIterator.stats() still creates 1 per partition).
> The KeyCacheKey makes a full copy of the underlying byte array in 
> ByteBufferUtil.getArray in its constructor. This is the dominating heap 
> pressure as there are more sstables. By changing this to just keeping the 
> original it completely eliminates the current dominator of the compactions 
> and also improves read performance.
> Minor tweak included for this as well for operators when compactions are 
> behind on low read clusters is to make the preemptive opening setting a 
> hotprop.






[jira] [Updated] (CASSANDRA-15124) Virtual tables API endpoints for sidecar

2019-05-16 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15124:
-
Reviewers: Dinesh Joshi, Vinay Chella

> Virtual tables API endpoints for sidecar
> 
>
> Key: CASSANDRA-15124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15124
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Sidecar
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Expose the existing virtual tables in sidecar api






[jira] [Updated] (CASSANDRA-14629) Abstract Virtual Table for very large result sets

2019-05-16 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14629:
-
Reviewers: Dinesh Joshi, Vinay Chella  (was: Dinesh Joshi)

> Abstract Virtual Table for very large result sets
> -
>
> Key: CASSANDRA-14629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14629
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL, Legacy/Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Low
>  Labels: pull-request-available, virtual-tables
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For virtual tables that are very large we cannot use existing 
> abstractvirtualtable since it would OOM the node possibly. An example would 
> be a table to view the internal cache contents or to view contents of 
> sstables.






[jira] [Commented] (CASSANDRA-15041) UncheckedExecutionException if authentication/authorization query fails

2019-05-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841705#comment-16841705
 ] 

Per Otterström commented on CASSANDRA-15041:


So, this is mostly a cosmetic issue in the sense that there is no way to 
resolve the underlying problem - that not enough replicas are available to read 
from the system_auth tables. Still, this generates error messages that cause 
operators to jump, and Cassandra isn't behaving well towards clients when this 
happens.

I've updated auth_test.py to use consistent cache settings and created a few new test cases to reproduce this issue in the scenario where not enough replicas are available. They reveal unwanted behavior, with small variations across releases. In short:
 * TC1 will trigger Cassandra to perform a background update of cached credentials/roles/permissions entries.
 * TC2 will trigger authorization when cached entries have passed both update-interval and validity (blocking update).
 * TC3 will trigger authorization when the cache is disabled.

Link to dtest 
[patch|https://github.com/apache/cassandra-dtest/compare/master...eperott:cassandra-15041].

I'm expecting Cassandra to fail gracefully on TC1, possibly with a warning, but 
no stack trace.

I'm expecting TC2 and TC3 to reject the request with an exception indicating not-authorized|unavailable|timeout; I'm not sure which makes the most sense. In any case, TC2 and TC3 should fail the same way and there should be no errors or stack traces in the log.

Results on different releases:
 * 4.0: TC2 fail
 * 3.11: TC1, TC2, TC3 and existing test_login fail
 * 3.0: TC1, TC2 and TC3 fail
 * 2.2: TC1, TC2 and TC3 fail

4.0 generally behaves better since a ticket similar to this one was fixed in CASSANDRA-13113. The reason TC2 fails is that the request actually will be authorized, even though the cached entries should have timed out. From what I can tell, the cache is handing out stale entries.

The reason test_login fails on the 3.11 branch is that we have been caching credentials since 3.4.

All new test cases fail on 3.11, 3.0 and 2.2 as reported in this ticket.

So far I've made no attempt to work on a fix for this. Before we dive into 
that, I'd like to get some feedback on the dtests and my findings above.

TC2 and TC3 currently expect an UnavailableException and a message similar to "Cannot achieve consistency level QUORUM" on the client side, simply because this is the behavior in the 4.0 branch. I feel this might confuse users a bit, as the exception and message relate to the internal lookup on the system_auth.* tables rather than the actual query sent to the cluster. Would it make more sense to throw back an UnauthorizedException?

[~beobal] and [~ifesdjeen], you were both much involved in CASSANDRA-13113. 
What are your thoughts on this?

> UncheckedExecutionException if authentication/authorization query fails
> ---
>
> Key: CASSANDRA-15041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15041
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Authorization
>Reporter: Per Otterström
>Priority: Normal
>
> If cache update for permissions/credentials/roles fails with 
> UnavailableException this comes back to client as UncheckedExecutionException.
> Stack trace on server side:
> {noformat}
> ERROR [Native-Transport-Requests-1] 2019-03-04 16:30:51,537 
> ErrorMessage.java:384 - Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203) 
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3937) 
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) 
> ~[guava-18.0.jar:na]
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>  ~[guava-18.0.jar:na]
> at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:97) 
> ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>  ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:439) 
> ~[apache-cassandra-3.11.4.jar:3.11.4]
> at 
> org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:368)
>  

[jira] [Commented] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-16 Thread Panneerselvam (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841331#comment-16841331
 ] 

Panneerselvam commented on CASSANDRA-15128:
---

Got it. So Cassandra supports "OpenJDK + HotSpot VM" but it doesn't support "OpenJDK + OpenJ9 VM".

Am I correct?

OpenJ9 is not officially supported at the moment. Is there any plan to support the OpenJ9 VM as well, maybe in the future?
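
For context, the NumberFormatException in the stack trace quoted below appears to come from parsing a VM version component that is not a plain integer on OpenJ9; a minimal, hypothetical illustration (not the actual jamm code):

{code:java}
// Hypothetical repro: OpenJ9 reports a VM version like "openj9-0.12.1",
// whose first dot-separated component is "openj9-0" and is not an integer,
// so Integer.parseInt throws the NumberFormatException seen in the trace.
public class OpenJ9VersionParseRepro
{
    public static void main(String[] args)
    {
        String vmVersion = "openj9-0.12.1";                // value reported by OpenJ9 in this ticket
        String firstComponent = vmVersion.split("\\.")[0]; // "openj9-0"
        Integer.parseInt(firstComponent);                  // throws java.lang.NumberFormatException
    }
}
{code}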

 

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to setup Apache Cassandra DB 3.11.4 version in my Windows 8 
> system  and getting below error while starting the Cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing HotSpot jdk 1.8 . 
> Are we not supporting openjdk1.8 or only the issue with the particular 
> version (1.8.0_202).
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
>     at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at 
> org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
>     at 
> org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.java:31)
> {code}
>  
>  






[jira] [Commented] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-16 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841224#comment-16841224
 ] 

Aleksey Yeschenko commented on CASSANDRA-15128:
---

We support OpenJDK 1.8.0_202 just fine, it's OpenJ9 that's not officially 
supported at the moment.

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to setup Apache Cassandra DB 3.11.4 version in my Windows 8 
> system  and getting below error while starting the Cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing HotSpot jdk 1.8 . 
> Are we not supporting openjdk1.8 or only the issue with the particular 
> version (1.8.0_202).
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
>     at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at 
> org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
>     at 
> org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.java:31)
> {code}
>  
>  






[jira] [Comment Edited] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-16 Thread Panneerselvam (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841023#comment-16841023
 ] 

Panneerselvam edited comment on CASSANDRA-15128 at 5/16/19 6:35 AM:


*+Please find the details below before and after the jdk change.+*

 

*+Before changing to oracle jdk :+*   Caused Issue 

 

C:\Users\panneer>java -version

openjdk version "1.8.0_202"

OpenJDK Runtime Environment (build 1.8.0_202-b08)

Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Windows 8.1 amd64-64-Bit 
Compressed References 20190205_265 (JIT enabled, AOT enabled)

OpenJ9   - 90dd8cb40

OMR  - d2f4534b

JCL  - d002501a90 based on jdk8u202-b08)

 

*+After changing to oracle jdk:+*    Working version

 

C:\Users\panneer>java -version

java version "1.8.0_141"

Java(TM) SE Runtime Environment (build 1.8.0_141-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)


was (Author: panneerboss):
*+Please find the details below before and after the jdk change.+*

 

*+Before changing to oracle jdk :+  * Caused Issue++

 

C:\Users\panneer>java -version

openjdk version "1.8.0_202"

OpenJDK Runtime Environment (build 1.8.0_202-b08)

Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Windows 8.1 amd64-64-Bit 
Compressed References 20190205_265 (JIT enabled, AOT enabled)

OpenJ9   - 90dd8cb40

OMR  - d2f4534b

JCL  - d002501a90 based on jdk8u202-b08)

 

*+After changing to oracle jdk:+*   ++  Working version

 

C:\Users\panneer>D:\Softwares\Java\jdk1.8.0_141\bin\java -version

java version "1.8.0_141"

Java(TM) SE Runtime Environment (build 1.8.0_141-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to setup Apache Cassandra DB 3.11.4 version in my Windows 8 
> system  and getting below error while starting the Cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing HotSpot jdk 1.8 . 
> Are we not supporting openjdk1.8 or only the issue with the particular 
> version (1.8.0_202).
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> 

[jira] [Commented] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-16 Thread Panneerselvam (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841023#comment-16841023
 ] 

Panneerselvam commented on CASSANDRA-15128:
---

*+Please find the details below before and after the jdk change.+*

 

*+Before changing to oracle jdk :+  * Caused Issue++

 

C:\Users\panneer>java -version

openjdk version "1.8.0_202"

OpenJDK Runtime Environment (build 1.8.0_202-b08)

Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Windows 8.1 amd64-64-Bit 
Compressed References 20190205_265 (JIT enabled, AOT enabled)

OpenJ9   - 90dd8cb40

OMR  - d2f4534b

JCL  - d002501a90 based on jdk8u202-b08)

 

*+After changing to oracle jdk:+*   ++  Working version

 

C:\Users\panneer>D:\Softwares\Java\jdk1.8.0_141\bin\java -version

java version "1.8.0_141"

Java(TM) SE Runtime Environment (build 1.8.0_141-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to setup Apache Cassandra DB 3.11.4 version in my Windows 8 
> system  and getting below error while starting the Cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing HotSpot jdk 1.8 . 
> Are we not supporting openjdk1.8 or only the issue with the particular 
> version (1.8.0_202).
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
>     at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at 
> org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
>     at 
> org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.java:31)
> 

[jira] [Updated] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-16 Thread maxwellguo (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maxwellguo updated CASSANDRA-15129:
---
Attachment: CASSANDRA-15129.txt

> Cassandra unit test with compression occurs BUILD FAILED 
> -
>
> Key: CASSANDRA-15129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: maxwellguo
>Priority: Normal
> Attachments: CASSANDRA-15129.txt
>
>
> under cassandra source code dir ,when I run the command : ant 
> test-compression will occurs npe exception . 
> {panel:title= log}
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
>  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit-timeout]
> [junit-timeout] Null Test:Caused an ERROR
> [junit-timeout] 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout] java.lang.ClassNotFoundException: 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout]   at java.lang.Class.forName0(Native Method)
> [junit-timeout]   at java.lang.Class.forName(Class.java:264)
> [junit-timeout]
> [junit-timeout]
> [junit-timeout] Test 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
>  FAILED
> {panel}
> for we use ant test-compression ,then the unit dir's test and the 
> dristributed dir test will be run with compression configure.  but in the 
> build.xml configure for testlist-compression macrodef, only unit test dir was 
> as the input .and the target test-compression use two fileset dir 
> "test.unit.src" and "test.distributed.src" , so when comes to distributed 
> dir's test with compression ,there occurs an CLASSANOT FOUND exception . 






[jira] [Commented] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-16 Thread maxwellguo (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841019#comment-16841019
 ] 

maxwellguo commented on CASSANDRA-15129:


I tried to fix this problem with a small change to build.xml but failed, as I am not familiar with ant. I have two suggestions: 1. run the compression tests for the unit and distributed dirs separately, rather than under one test-compression target; 2. delete the line  since we have dtests, and the compression test for the distributed dir can be moved there. [~ifesdjeen] [~aweisberg] 

> Cassandra unit test with compression occurs BUILD FAILED 
> -
>
> Key: CASSANDRA-15129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: maxwellguo
>Priority: Normal
>
> under cassandra source code dir ,when I run the command : ant 
> test-compression will occurs npe exception . 
> {panel:title= log}
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
>  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit-timeout]
> [junit-timeout] Null Test:Caused an ERROR
> [junit-timeout] 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout] java.lang.ClassNotFoundException: 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout]   at java.lang.Class.forName0(Native Method)
> [junit-timeout]   at java.lang.Class.forName(Class.java:264)
> [junit-timeout]
> [junit-timeout]
> [junit-timeout] Test 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
>  FAILED
> {panel}
> for we use ant test-compression ,then the unit dir's test and the 
> dristributed dir test will be run with compression configure.  but in the 
> build.xml configure for testlist-compression macrodef, only unit test dir was 
> as the input .and the target test-compression use two fileset dir 
> "test.unit.src" and "test.distributed.src" , so when comes to distributed 
> dir's test with compression ,there occurs an CLASSANOT FOUND exception . 






[jira] [Updated] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-16 Thread maxwellguo (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maxwellguo updated CASSANDRA-15129:
---
Description: 
Under the Cassandra source code dir, when I run the command "ant test-compression", an NPE occurs.

{panel:title= log}
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
[junit-timeout]
[junit-timeout] Null Test:  Caused an ERROR
[junit-timeout] 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] java.lang.ClassNotFoundException: 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] at java.lang.Class.forName0(Native Method)
[junit-timeout] at java.lang.Class.forName(Class.java:264)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
 FAILED
{panel}

When we run ant test-compression, both the unit dir's tests and the distributed dir's tests are run with the compression configuration. But in build.xml, the testlist-compression macrodef only takes the unit test dir as input, while the test-compression target uses two fileset dirs, "test.unit.src" and "test.distributed.src". So when it comes to the distributed dir's tests with compression, a CLASS NOT FOUND exception occurs.






  was:
under cassandra source code dir ,when I run the command : ant test-compression 
will occurs npe exception . 

{panel:title=My title}
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
[junit-timeout]
[junit-timeout] Null Test:  Caused an ERROR
[junit-timeout] 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] java.lang.ClassNotFoundException: 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] at java.lang.Class.forName0(Native Method)
[junit-timeout] at java.lang.Class.forName(Class.java:264)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
 FAILED
{panel}

for we use ant test-compression ,then the unit dir's test and the dristributed 
dir test will be run with compression configure.  but in the build.xml 
configure for testlist-compression macrodef, only unit test dir was as the 
input .and the target test-compression use two fileset dir "test.unit.src" and 
"test.distributed.src" , so when comes to distributed dir's test with 
compression ,there occurs an CLASSANOT FOUND exception . 







> Cassandra unit test with compression occurs BUILD FAILED 
> -
>
> Key: CASSANDRA-15129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: maxwellguo
>Priority: Normal
>
> under cassandra source code dir ,when I run the command : ant 
> test-compression will occurs npe exception . 
> {panel:title= log}
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
>  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit-timeout]
> [junit-timeout] Null Test:Caused an ERROR
> [junit-timeout] 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout] 

[jira] [Created] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-16 Thread maxwellguo (JIRA)
maxwellguo created CASSANDRA-15129:
--

 Summary: Cassandra unit test with compression occurs BUILD FAILED 
 Key: CASSANDRA-15129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
 Project: Cassandra
  Issue Type: Bug
  Components: Test/unit
Reporter: maxwellguo


under cassandra source code dir ,when I run the command : ant test-compression 
will occurs npe exception . 

{panel:title=My title}
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
[junit-timeout]
[junit-timeout] Null Test:  Caused an ERROR
[junit-timeout] 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] java.lang.ClassNotFoundException: 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] at java.lang.Class.forName0(Native Method)
[junit-timeout] at java.lang.Class.forName(Class.java:264)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
 FAILED
{panel}

for we use ant test-compression ,then the unit dir's test and the dristributed 
dir test will be run with compression configure.  but in the build.xml 
configure for testlist-compression macrodef, only unit test dir was as the 
input .and the target test-compression use two fileset dir "test.unit.src" and 
"test.distributed.src" , so when comes to distributed dir's test with 
compression ,there occurs an CLASSANOT FOUND exception . 







