Please disregard that part of my sentence.
To be more precise, I should have said something like "He could compact 10
SSTables, each of which contains a 15GB partition."
What I wanted to say is that we can store many more rows (and columns) in a
partition than before 3.6.
2016-10-15 15:34 GMT+09:00 Kant Kodali :
> "
I understand Secondary Indexes in general are inefficient on high
cardinality columns but since SASI is built from scratch I wonder if the
same argument applies there? If not, Why? Because I believe primary keys in
Cassandra are indeed indexed and since Primary key is supposed to be the
column with
"Robert said he could treat safely 10 15GB partitions at his presentation"
This sounds like there is there is a row limit too not only columns??
If I am reading this correctly 10 15GB partitions means 10 partitions
(like 10 row keys, thats too small) with each partition of size 15GB.
(thats like
"Robert said he could treat safely 10 15GB partitions at his presentation"
This sounds like there is there is a row limit too not only columns??
If I am reading this correctly 10 15GB partitions means 10 partitions
(like 10 row keys, thats too small) with each partition of size 15GB.
(thats like
Are you sure you aren't using batches? Batches assign the same timestamp
to all of their inserts, which can lead to unexpected behavior.
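To see why a shared timestamp is surprising, here is a minimal sketch of last-write-wins reconciliation. This is an illustrative model, not driver or server code: the helper name `reconcile` is hypothetical, but the tie-breaking rule it models (higher timestamp wins; on an exact timestamp tie, the cell values themselves are compared) is how Cassandra resolves conflicting writes, so one of two same-timestamp inserts silently loses.

```python
# Minimal model of Cassandra's cell reconciliation (last-write-wins).
# `reconcile` is a hypothetical helper for illustration only; real
# reconciliation happens server-side, not in the driver.

def reconcile(cell_a, cell_b):
    """Return the winning (timestamp, value) pair of two conflicting cells."""
    ts_a, val_a = cell_a
    ts_b, val_b = cell_b
    if ts_a != ts_b:
        # The higher timestamp always wins, regardless of value.
        return cell_a if ts_a > ts_b else cell_b
    # Timestamp tie: the value comparison decides. This is the surprise
    # with batches that stamp every insert with one client timestamp.
    return cell_a if val_a > val_b else cell_b

# Two inserts from one batch share timestamp 1000; "b" wins the tie.
print(reconcile((1000, "a"), (1000, "b")))  # (1000, 'b')
# A later timestamp beats an earlier one even with a "smaller" value.
print(reconcile((2000, "a"), (1000, "z")))  # (2000, 'a')
```

If both writes in the batch target the same primary key and column, only one value survives, which can look like a lost insert from the application's point of view.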
On Fri, Oct 14, 2016 at 9:45 PM Vladimir Yudovin
wrote:
> Did you try the same queries with the Java driver without using prepared
> statements?
>
>
> Best regards, Vlad
Thanks to CASSANDRA-11206, I think we can have much larger partitions than
before 3.6.
(Robert said he could treat safely 10 15GB partitions at his presentation.
https://www.youtube.com/watch?v=N3mGxgnUiRY)
But is there still a 2B-column limit in the Cassandra code?
If so, out of curiosity, I'd like
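For context on that limit, here is a back-of-the-envelope sketch. The often-quoted ceiling is roughly 2 billion cells per partition; how many rows that allows depends on how many non-key columns each row carries. The numbers and the helper `max_rows` are illustrative assumptions, not anything from the Cassandra codebase:

```python
# Back-of-the-envelope check of the commonly cited ~2 billion cells
# per partition ceiling. `max_rows` is a hypothetical helper for
# illustration; the figures are rough, not exact code limits.

CELL_LIMIT = 2_000_000_000  # the commonly quoted 2B-cell ceiling

def max_rows(columns_per_row):
    """Rows that fit under the cell limit, assuming one cell per column."""
    return CELL_LIMIT // columns_per_row

print(max_rows(1))    # 2000000000 rows of a single column
print(max_rows(10))   # 200000000 rows with 10 columns each
print(max_rows(100))  # 20000000 rows with 100 columns each
```

So even if large partitions become practical on the storage side, a wide row model still spends that per-partition cell budget quickly.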
Did you try the same queries with the Java driver without using prepared statements?
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.
Launch your cluster in minutes.
On Fri, 14 Oct 2016 15:13:38 -0400, Aoi Kadoya wrote ---
Hi all,
Many people here have trouble with repair, so I would like to share my
experience with backporting CASSANDRA-12580 "Fix merkle tree size
calculation" (thanks Paulo!) to our C* 2.1.16. I was expecting some minor
improvements, but the results are impressive on some tables.
Becaus
Hi Vladimir,
In fact, I am having difficulty reproducing this issue with cqlsh.
This issue was reported to me by one of our developers, who is using a
client application built on the Cassandra Java driver 3.0.3. (We're using
DSE 5.0.1.)
app A:
2016-10-11 13:28:23,014 [TRACE] [core.QueryLogger.NORMAL]
I resolved this by doing more rolling restarts on the nodes that had this
WARN; it just took more restarts than I thought it would.
Annoying!
On Wed, Oct 12, 2016 at 1:08 PM, Yucheng Liu wrote:
> *Env: * apache cassandra 2.1.8, 6-nodes
>
> *Problem: *one node had kernel panic and cras
Hi Jean,
I had the same problem. I removed those lines from the /etc/init.d/cassandra
template (we use Chef to deploy), and now the HeapDumpPath is not overridden
anymore. The same goes for -XX:ErrorFile.
Best,
Romain
On Tuesday, 4 October 2016 at 9:25, Jean Carlo wrote:
Yes, we did it.
So if th
Thank you for the update.
The repair fails with the error 'Failed creating merkle tree' but does not give
any additional details.
With -pr running on all DC nodes, we see a peer connection reset error, which
then results in a hung repair process even though the TCP connection settings
look
On 3.10.2016 at 16:25, Edward Capriolo wrote:
The phrase is defensible, but that is the root of the problem. Take,
for example, a skateboard.
"A skateboard is like a bike because it has wheels and you ride on it."
That is true and defensibly true. :) However, with not much more