> The write path (not replicate on write) for counters involves a read,
>
I'm afraid you got it wrong. The read done during counter writes *is* done
by the replicate on write tasks. Though really, the replicate on write tasks
are just one part of the counter write path (they are not "not the write
p
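Roughly, a counter increment reads the local shard's current clock and value, merges in the delta, and writes the merged shard back, which is why the read path is involved at all. A toy model of that read-merge-write cycle (the shard layout and names here are illustrative, not Cassandra's actual structures):

```python
# Toy counter-shard model: each node keeps (clock, value) for its own shard.
# An increment must READ the local shard first to bump its clock and merge
# the delta -- this local read is what the replicate-on-write stage performs.

def increment(shards, node_id, delta):
    clock, value = shards.get(node_id, (0, 0))    # local read before write
    shards[node_id] = (clock + 1, value + delta)  # merged shard written back
    return shards

def total(shards):
    # The counter's observed value is the sum over every node's shard.
    return sum(value for _, value in shards.values())

shards = {}
increment(shards, "node1", 5)
increment(shards, "node1", 3)
increment(shards, "node2", 2)
```

Because each increment rewrites a merged shard rather than appending a delta, the read cannot be skipped the way it can for plain column appends.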
Thanks Jeremiah, those are great suggestions.
Unfortunately, I have done a full repair and compaction on that CF, but the
ranged tombstones remain.
-Jeff
On Wed, Jul 3, 2013 at 7:54 PM, Jeremiah D Jordan wrote:
> To force clean out a tombstone.
>
> 1. Stop doing deletes on the CF, or switch t
To force clean out a tombstone.
1. Stop doing deletes on the CF, or switch to performing all deletes at ALL
2. Run a full repair of the cluster for that CF.
3. Change GC grace to be small, like 5 seconds or something for that CF
Either:
4. Find all sstables which have that row key in them using ss
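The repair and gc_grace steps above can be sketched as commands; keyspace and CF names are placeholders, and the ALTER syntax is the CQL3 form of the era:

```
# Step 2: full repair of that CF on the cluster
nodetool repair my_keyspace my_cf

# Step 3: shrink gc_grace_seconds for that CF (via cqlsh):
#   ALTER TABLE my_keyspace.my_cf WITH gc_grace_seconds = 5;

# Then compact so tombstones older than gc_grace can actually be purged
nodetool compact my_keyspace my_cf
```

Remember to restore gc_grace_seconds afterwards, since a small value risks deleted data resurrecting if a node misses the delete and is repaired late.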
We are on 1.2.5 with a 4 node cluster (RF 3) and have a cql3 wide row
table. Each row has about 2000 columns. While running some test data
through it, it started throwing rpc_timeout errors when returning a couple
specific rows (with Consistency ONE).
After hunting through sstable2json results a
Can someone remind me why replicate on write tasks might be related to the
high disk I/O? My understanding is the replicate on write involves sending
the update to other nodes, so it shouldn't involve any disk activity --
disk activity would be during the mutation/write phase.
The write path (not
On 3 July 2013 22:18, Sávio Teles wrote:
> We were able to implement ByteOrderedPartitioner on Cassandra 1.1 and
> insert an object on a specific machine.
>
> However, with Cassandra 1.2 and vnodes we can't use
> ByteOrderedPartitioner
> to insert an object on a specific machine.
We were able to implement ByteOrderedPartitioner on Cassandra 1.1 and insert
an object on a specific machine.
However, with Cassandra 1.2 and vnodes we can't use
ByteOrderedPartitioner
to insert an object on a specific machine.
2013/7/3 Richard Low
> On 3 July 2013 21:04, Sávio
On 3 July 2013 21:04, Sávio Teles wrote:
We're using ByteOrderedPartitioner to programmatically choose the machine
> on which an object will be inserted.
>
> How can I use ByteOrderedPartitioner with vnodes on Cassandra 1.2?
>
Don't. Managing tokens with ByteOrderedPartitioner is very hard anyway,
bu
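The appeal of ByteOrderedPartitioner is that a key's token is the key bytes themselves, so placement follows directly from key prefixes. A toy illustration (ring tokens and node names are invented) of how a key maps to an owner, and why vnodes break the trick: with vnodes each node owns many small scattered ranges instead of one contiguous one, so no single key prefix pins a machine:

```python
import bisect

# Hypothetical single-token-per-node ring: each node owns the key range up to
# its token. With ByteOrderedPartitioner the token of a key is just its bytes.
ring = [(b"g", "node1"), (b"n", "node2"), (b"t", "node3"), (b"\xff", "node4")]
tokens = [t for t, _ in ring]

def owner(key: bytes) -> str:
    # First node whose token is >= the key owns it (wrapping at the end).
    i = bisect.bisect_left(tokens, key)
    return ring[i % len(ring)][1]
```

Under this model a key starting with "a" always lands on node1, which is exactly the per-machine placement the original poster relied on in 1.1.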
We're using ByteOrderedPartitioner to programmatically choose the machine
on which an object will be inserted.
How can I use ByteOrderedPartitioner with vnodes on Cassandra 1.2?
--
Regards,
Sávio S. Teles de Oliveira
voice: +55 62 9136 6996
http://br.linkedin.com/in/savioteles
Master's student in
I've been running cassandra a while, and have used the PHP api and
cassandra-cli, but never gave cqlsh a shot.
I'm not quite getting it. My most simple CF is a dumping ground for
testing things created as:
create column family stats;
I was putting random stats I was computing in it. All keys, co
On Wed, Jul 3, 2013 at 6:02 AM, Hiller, Dean wrote:
>
> We loaded 5 million columns into a single row and when accessing the
first 30k and last 30k columns we saw no performance difference. We tried
just loading 2 rows from the beginning and end and saw no performance
difference. I am sure rever
For what it's worth, I did this when I had this problem. It didn't work
out for me. Perhaps I did something wrong.
On Wed, Jul 3, 2013 at 11:06 AM, Robert Coli wrote:
> On Wed, Jul 3, 2013 at 7:04 AM, ifjke wrote:
>
>> I found that one of my cassandra nodes died recently (machine hangs). I
>
On Wed, Jul 3, 2013 at 7:04 AM, ifjke wrote:
> I found that one of my cassandra nodes died recently (machine hangs). I
> restarted the node and ran a nodetool repair; while running it threw a
> org.apache.cassandra.io.compress.CorruptBlockException. Is there any
> way to recover from this
On Wed, Jul 3, 2013 at 9:59 AM, Andrew Bialecki wrote:
> 2. I'm assuming in our case the cause is incrementing counters because
> disk reads are part of the write path for counters and are not for
> appending columns to a row. Does that logic make sense?
>
That's a pretty reasonable assumption if
In one of our load tests, we're incrementing a single counter column as
well as appending columns to a single row (essentially a timeline). You can
think of it as counting the instances of an event and then keeping a
timeline of those events. The ratio of increments to "appends" is 1:1.
When we
thanks Nick, I'll give it a try
Regards
--
Cyril SCETBON
On Jul 3, 2013, at 5:16 PM, Nick Bailey wrote:
OpsCenter uses http auth so the credentials will be saved by your browser.
There are a couple things you could do.
* Clear the local data/cache on your browser
*
We are using Cassandra 1.2 Embedded in a production environment.
We are seeing some issues with these lines:
SocketAddress socket = remoteSocket.get();
assert socket != null;
ThriftClientState cState = activeSocketSessions.get(socket);
The connection is maintained per thread via remoteSocket. How
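For context, the pattern those lines implement is per-thread session tracking: the remote socket address is stashed in a ThreadLocal and used as the key into a shared session map. A minimal self-contained sketch (class, field, and method names are illustrative, not Cassandra's actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy per-thread session lookup: each connection thread records its remote
// address in a ThreadLocal, then resolves its session state through a shared
// map. Names here are illustrative only.
public class SessionTracker {
    private static final ThreadLocal<String> remoteSocket = new ThreadLocal<>();
    private static final Map<String, String> activeSessions = new ConcurrentHashMap<>();

    public static void connect(String addr) {
        remoteSocket.set(addr);
        activeSessions.put(addr, "state-for-" + addr);
    }

    public static String currentSession() {
        String socket = remoteSocket.get();
        if (socket == null)                  // explicit check rather than a bare
            throw new IllegalStateException( // assert, which is off unless the
                "no socket bound to thread"); // JVM runs with -ea
        return activeSessions.get(socket);
    }
}
```

One thing this sketch highlights: a bare `assert socket != null` is silently disabled unless the JVM is started with `-ea`, so a null socket on an unexpected thread can surface later as a confusing NullPointerException instead.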
OpsCenter uses http auth so the credentials will be saved by your browser.
There are a couple things you could do.
* Clear the local data/cache on your browser
* Open your browser in private browsing/incognito mode
* Manually enter credentials into the url: http://<username>:<password>@<host>:<port>/
-Nick
On Wed, Jul 3,
Hi together,
I found that one of my cassandra nodes died recently (machine hangs). I
restarted the node and ran a nodetool repair; while running it threw
a org.apache.cassandra.io.compress.CorruptBlockException. Is there any
way to recover from this? Or would it be best to delete the nodes
Hi all,
I have a Cassandra Cluster running and we recently duplicated the cluster.
After following all the steps, the cassandra clients started failing with the
following message:
"AuthenticationException(why='Username and/or password are incorrect')"
The problem is that even I can't log in to
Shubham,
You are right; my point is that for non-schema-update thrift calls you can
tune the consistency level used.
bye.
On Wed, Jul 3, 2013 at 10:10 AM, Shubham Mittal wrote:
> hi Alexis,
>
> Even if I create keyspaces, column families using cassandra-cli, the
> column creation and insertio
hi Alexis,
Even if I create keyspaces, column families using cassandra-cli, the column
creation and insertion work will still need thrift calls.
On Wed, Jul 3, 2013 at 6:05 PM, Alexis Rodríguez wrote:
> That repo for libcassandra works for cassandra 0.7.x due to changes in
> the thrift interfa
On Wed, Jul 3, 2013 at 2:06 AM, Silas Smith wrote:
> Franc,
> We manage our schema through the Astyanax driver. It runs in a listener at
> application startup. We read a self-defined schema version, update the
> schema if needed based on the version number, and then write the new schema
> version
We loaded 5 million columns into a single row and when accessing the first 30k
and last 30k columns we saw no performance difference. We tried just loading 2
rows from the beginning and end and saw no performance difference. I am sure
reverse sort is there for a reason though. In what context
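One context where reverse sort does matter is "newest first" timelines: declaring the clustering order up front lets a plain LIMIT query return the most recent columns in storage order, instead of issuing a reversed slice. A hedged CQL3 sketch (table and column names are invented):

```
CREATE TABLE timeline (
    key   text,
    ts    timeuuid,
    event text,
    PRIMARY KEY (key, ts)
) WITH CLUSTERING ORDER BY (ts DESC);

-- Most recent 30000 events, read in storage order (no reversal needed):
SELECT * FROM timeline WHERE key = 'user1' LIMIT 30000;
```

With the default ascending order the same query would need ORDER BY ts DESC, which reads the row from the far end; the reversed comparator moves that cost to write-time layout instead.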
Hi,
When we connect to Opscenter using an account, we do not see any disconnect
button to connect under another account.
Thanks
--
Cyril SCETBON
That repo for libcassandra works for cassandra 0.7.x due to changes in the
thrift interface we have faced some problems in the past.
Maybe you can take a look at my fork of libcassandra,
https://github.com/axs-mvd/libcassandra, which we are using with cassandra 1.1.11.
Besides that, I recommend th
Hey,
I found out that the problem is caused by this line :
c->createKeyspace(ks_def);
because the below code works fine.
#include <...> (header names were stripped when the message was archived)
using namespace std;
using namespace libcas
I wonder if one particular node is having trouble; when you notice the
missing column, what happens if you execute the read manually from cqlsh or
cassandra-cli independently directly on each node?
On Wed, Jul 3, 2013 at 2:00 AM, Blake Eggleston wrote:
> Hi All,
>
> We're having a problem with o
No Jordan, the cassandra version I have is Cassandra 1.1.12.
On Wed, Jul 3, 2013 at 5:21 PM, Shubham Mittal wrote:
> This is the gdb output
>
> [Thread debugging using libthread_db enabled]
> terminate called after throwing an instance of
> 'org::apache::cassandra::InvalidRequestException'
> wh
This is the gdb output
[Thread debugging using libthread_db enabled]
terminate called after throwing an instance of
'org::apache::cassandra::InvalidRequestException'
what(): Default TException.
Program received signal SIGABRT, Aborted.
0x770a0b25 in raise () from /lib/libc.so.6
On