cfid is internal; I'm not sure if it is exposed anywhere. You may see it in
DEBUG-level logs at startup.
> 1744830464
is a very high number; it starts at 1000 and increments by one. Something seems
odd.
The error you are seeing may be a symptom of a diverged schema. Check for
schema agreement u
I still have wrong results (I simulated an event 5 times and it was
counted 3 times by some counters, 4 or 5 times by others).
I also have wrong results with counters in 1.0.8: many times updates to a
counter column are simply lost, and sometimes counters go backwards
even if our app uses only
Well, if what you're saying is true, then the help is inconsistent:
> It is also valid to specify the fully-qualified class name to a class that
> extends org.apache.cassandra.db.marshal.AbstractType.
[default@unknown] help assume;
assume <column_family> comparator as <type>;
assume <column_family> sub_comparator as <type>;
assume <column_family> valid
I don't think that's possible with the cli. You'd have to embellish
CliClient.Function
I actually have a custom type, I put the BytesType in the example to
demonstrate the issue is not with my custom type.
-- Drew
On Mar 23, 2012, at 6:46 PM, Dave Brosius wrote:
> I think you want
>
> assume UserDetails validator as bytes;
>
>
>
> On 03/23/2012 08:09 PM, Drew Kutcharian wrote
I think you want
assume UserDetails validator as bytes;
On 03/23/2012 08:09 PM, Drew Kutcharian wrote:
Hi Everyone,
I'm having an issue with cassandra-cli's assume command with a custom type. I
tried it with the built-in BytesType and got the same error:
[default@test] assume UserDetails validator as
org.apache.cassandra.db.marshal.BytesType;
Syntax error at position 35: missing EOF at '.'
I al
Hi Viktor,
We are only deleting columns. Rows are never deleted.
We are continually adding new columns that are then deleted. Existing
columns (deleted or otherwise) are never updated.
Can you provide any pointers as to what I should investigate to determine
what is occurring? Our application
Hello,
I am a bit confused about how to store and retrieve columns in reversed
order.
Currently I store comments for every blog post in a wide row per post.
I want to store and retrieve comments for each blog post in
reversed/descending order for efficiency as we display comments in
descending o
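One common approach, sketched here on the assumption that comments are keyed
by TimeUUID (the column family name and validators below are illustrative),
is to declare the comparator reversed so that a default slice returns the
newest comments first:

  create column family BlogComments
    with comparator = 'ReversedType(TimeUUIDType)'
    and default_validation_class = 'UTF8Type';

A plain slice on the row then yields comments in descending time order.
Alternatively, keep a normal comparator and set reversed=true on the
SliceRange at query time.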
The main issue turned out to be a bug in our code whereby we were writing a
lot of new columns to the same row key instead of a new row key, turning
what we expected to be a skinny rowed CF into a CF with one very, very wide
row. These writes on the single key were putting pressure on the 3 nodes
h
I'm dealing with a similar issue, with an additional complication. We are
collecting time series data, and the amount of data per time period varies
greatly. We will collect and query event data by account, but the biggest
account will accumulate about 10,000 times as much data per time period as
t
Should not.
Scenario 1, write & delete in one memtable
T1 write column
T2 delete row
T3 flush memtable, sstable 1 contains empty row tombstone
T4 row tombstone expires
T5 compaction/cleanup, row disappears from sstable 2
Scenario 2, write & delete different sstables
T1 write column
T2 flush memta
Example:
T1 < T2 < T3
at T1 write column
at T2 delete row
at T3 (> tombstone expiration) do compact (T1 + T2) and drop the expired
tombstone
will the column from T1 be alive again?
cqlsh> select * from whois.ipbans;
KEY,80.65.56.165
KEY,204.229.100.77
KEY,75.144.148.1
KEY,111.191.88.7
'int' object has no attribute 'replace'
cqlsh>
It's a counter CF:
create column family ipbans
with column_type = 'Standard'
and comparator = 'AsciiType'
and default_validation_class = '
> You are explaining that if I have an expired row tombstone and there exists
> a later timestamp on this row, that tombstone is not deleted? If it works that
> way, it will never be deleted.
Exactly. It is merged with the new one.
Example 1: a row with one column in an sstable; delete the row, not a column.
I wonder why the memtable estimations are so bad.
1. Is it not possible to run them more often? There should be some limit:
run the live/serialized calculation at least once per hour. It takes just
a few seconds.
2. Why not use data from the FlushWriter to update the estimations? The
flusher knows the number of ops a
Hello
I am new to Cassandra and when I run tpstats on my node (Cassandra 1.0.7) I get
the following output:
Pool Name                  Active   Pending   Completed   Blocked   All time blocked
ReadStage                       0         0          12         0                  0
RequestResp
During compaction of the selected sstables, Cassandra checks the whole column
family for the latest timestamp of the column/row, including the other
sstables and the memtable.
You are explaining that if I have an expired row tombstone and there exists a
later timestamp on this row, that tombstone is not deleted
Yes, continued deletions of the same columns/rows will prevent removing them
from the final sstable upon compaction, due to the new timestamps.
You get a sliding tombstone gc-grace period in that case.
During compaction of selected sstables Cassandra checks the whole Column Family
for the latest time
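For anyone wanting to tune that window: the grace period being discussed is
the per-column-family gc_grace attribute (a value in seconds). A hedged cli
sketch, with the column family name being illustrative:

  update column family MyCF with gc_grace = 86400;

Note that lowering gc_grace increases the risk of deleted data reappearing if
repair does not complete within the window.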
Yup, all repairs are complete. I'm reading at a CL of ONE pretty much
everywhere.
Caleb Rackliffe | Software Developer
M 949.981.0159 | ca...@steelhouse.com
From: aaron morton <aa...@thelastpickle.com>
Reply-To: "user@cassandra.apache.org