Thanks for your help.
I've added those flags, as well as some others I saw in another thread that
redirect stdout to a file. What information do you need?
2014-01-29 Benedict Elliott Smith belliottsm...@datastax.com:
It's possible the time attributed to GC is actually spent somewhere
Once we set nodes to act as virtual nodes, is there a way to revert to
manually assigned tokens?
I have two nodes for testing; there I set 'num_tokens: 256' and left the
initial_token line commented out. Virtual nodes worked fine.
But then I tried to switch back by commenting out the 'num_tokens' line and
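For reference, the settings being toggled here live in cassandra.yaml; a minimal sketch of the two configurations (the token value below is a hypothetical example, not one from this cluster):

```yaml
# cassandra.yaml -- vnode configuration (use one block or the other, not both)

# Virtual nodes: let Cassandra pick 256 token ranges per node
num_tokens: 256
# initial_token:            # leave commented when using vnodes

# Manually assigned single token (example value only)
# num_tokens: 1
# initial_token: 85070591730234615865843651857942052864
```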
You should expect to see lines of output like:
vmop [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
0.436: Deoptimize [ 10 0 0 ] [ 0 0 0 0 0 ]
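For anyone following along: output in that shape comes from HotSpot's safepoint statistics. A sketch of the JVM options that typically produce it (added to cassandra-env.sh or the JVM command line; availability depends on the JVM version, and these flags were removed in later JDKs):

```
-XX:+PrintSafepointStatistics
-XX:PrintSafepointStatisticsCount=1
-XX:+PrintGCApplicationStoppedTime
```

The "Deoptimize" line above is one safepoint operation; the bracketed columns are thread counts and per-phase times in milliseconds.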
Hello all
I've read some materials on the net about Cassandra anti-patterns, among
which the very large wide-row anti-pattern is mentioned.
The main rationales for avoiding very wide rows are:
1) fragmentation of data across multiple SSTables when the row is very wide,
leading to very slow reads by
4 nodes, byte-ordered, LCS, 3 compaction executors, replication factor 1.
Code is version 2.0.4 but with the patch for CASSANDRA-6638
(https://issues.apache.org/jira/browse/CASSANDRA-6638). However,
no cleanup is run, so the patch should not play a role.
4 node cluster is started and insert/queries are done up
The join with auto bootstrap itself had finished, so I restarted the added
node. During the restart I saw a message indicating that something was wrong with
this row and sstable.
Of course, in my case I did not drop sstable from another node. But I did
decommission and add the node, so that is
Hey,
When adding a new data center to our production C* datacenter using the
procedure described in [1], some of our application requests were returning
null/empty values. Rebuild was not complete in the new datacenter, so my
guess is that some requests were being directed to the brand new
On Fri, Jan 31, 2014 at 6:52 AM, DuyHai Doan doanduy...@gmail.com wrote:
4) hard limit of 2*10⁹ columns per physical row
b. maximum number of items to be processed is 24*10⁶, far below the hard
limit of 2*10⁹ columns so point 4) does not apply either
Before discarding this point, try
On Fri, Jan 31, 2014 at 5:08 AM, Víctor Hugo Oliveira Molinar
vhmoli...@gmail.com wrote:
Once we set nodes to act as virtual nodes, is there a way to revert to
manually assigned tokens?
On a given node? My understanding is that there is no officially supported
way. You now have 256 contiguous
Durable writes have already been disabled for the entire keyspace.
I'll run a bench on a 24*10⁶ wide row and give feedback soon
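For context, disabling durable writes (which skips the commit log) is a keyspace-level option in CQL; the keyspace name and replication settings below are hypothetical, just to show the shape of the statement:

```cql
-- Disable durable writes on an existing keyspace (hypothetical name)
ALTER KEYSPACE bench_ks WITH durable_writes = false;

-- Or set it at creation time:
CREATE KEYSPACE bench_ks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
  AND durable_writes = false;
```

Note that with durable_writes = false, data not yet flushed to SSTables is lost on a crash, which is usually acceptable only for benchmarks and rebuildable data.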
On Fri, Jan 31, 2014 at 7:55 PM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Jan 31, 2014 at 6:52 AM, DuyHai Doan doanduy...@gmail.com wrote:
4) hard limit of
fyi
-- Forwarded message --
From: Vivek Mishra vivek.mis...@impetus.co.in
Date: Sat, Feb 1, 2014 at 1:18 AM
Subject: {kundera-discuss} Kundera 2.10 released
To: kundera-disc...@googlegroups.com
Hi All,
We are happy to announce the Kundera 2.10
The only drawback of ultra-wide rows I can see is point 1). But if I use
leveled compaction with a sufficiently large value for sstable_size_in_mb
(let's say 200 MB), will my read performance be impacted as the row grows?
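The compaction setting being asked about is configured per table in CQL (2.0-era syntax); the keyspace and table names below are hypothetical:

```cql
-- Leveled compaction with a 200 MB target SSTable size (illustrative values)
ALTER TABLE mykeyspace.wide_rows
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 200};
```

With LCS, a read of a given row touches at most one SSTable per level, which is why a larger sstable_size_in_mb can reduce the number of SSTables a wide row is spread across.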
For this use case, you would want to use SizeTieredCompaction and
Thanks Nat for your ideas.
This could be as simple as adding year and month to the primary key (in
the form 'yyyymm'). Alternatively, you could add this to the partition key in
the table definition. Either way, it then becomes pretty easy to re-generate
these based on the query parameters.
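The bucketing idea described above can be sketched in CQL; all table and column names here are hypothetical, assuming a time-series-style workload:

```cql
-- Composite partition key (sensor_id, yyyymm) bounds any one physical row
-- to a month of data; rows are clustered by event time within each bucket.
CREATE TABLE metrics.events (
    sensor_id  text,
    yyyymm     int,          -- e.g. 201401, derived from the event date
    event_time timestamp,
    value      double,
    PRIMARY KEY ((sensor_id, yyyymm), event_time)
);

-- Queries then supply the bucket, regenerated from the query parameters:
-- SELECT * FROM metrics.events
--  WHERE sensor_id = 's1' AND yyyymm = 201401
--    AND event_time >= '2014-01-01' AND event_time < '2014-02-01';
```

A query spanning several months must fan out over the corresponding buckets, which is the usual trade-off of this pattern.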
The thing is