My five cents:
Token and key are not the same. It was like that a long time ago (a single MD5
token implied a single key).
If you want ordering, you can probably arrange your data so that you can
retrieve it in an ordered fashion.
For example, long ago I had a single column family with a single key and about
2-3 M
Hi all,
I'm testing the new CqlStorage() with Cassandra 1.2.8 and Pig 0.11.1.
I am using this sample test data:
http://frommyworkshop.blogspot.com.es/2013/07/hadoop-map-reduce-with-cassandra.html
And I can load and dump data right with this script:
rows = LOAD
Oops, I made a mistake: I thought I was paging on the partition key when I
was actually paging on columns. There is no need for token(), and columns are ordered.
Sorry for bothering those who read this; it was a PEBCAK.
Alain
2013/8/21 Alain RODRIGUEZ arodr...@gmail.com
Hi, I am sorry about digging this
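The point Alain lands on -- that columns within a partition are stored in sorted order, so paging over them needs no token() arithmetic -- can be sketched with a toy function (names and structure are mine, purely illustrative, not Cassandra code):

```python
import bisect

def page_columns(sorted_columns, page_size, after=None):
    # Columns within a partition are stored in sorted order, so paging
    # just resumes after the last column name seen -- no token()
    # needed (token() is for paging across partitions on the ring).
    start = 0 if after is None else bisect.bisect_right(sorted_columns, after)
    return sorted_columns[start:start + page_size]
```

Each page call passes the last column of the previous page as `after`; the sort order guarantees no column is skipped or repeated.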
Hello,
I am having a problem with a node in a test environment I have at
Amazon. I am using Cassandra 1.2.3 on Amazon EC2. Here is my nodetool ring
output:
$ nodetool ring
Note: Ownership information does not include topology; for complete
information, specify a keyspace
Datacenter: us-east
Isn't this the log file from 10.0.0.146? So 10.0.0.146 sees that
10.0.0.111 is up, then sees it as dead, and in the log we can see it bind with this
line:
INFO 12:16:23,108 Binding thrift service to
ip-10-0-0-146.ec2.internal/10.0.0.146:9160
What does the log file look
Ufff...
I almost won a Darwin Award there.
Thanks!
[]s
2013/8/22 Hiller, Dean dean.hil...@nrel.gov
Isn't this the log file from 10.0.0.146??? And this 10.0.0.146 sees that
10.0.0.111 is up, then sees it dead and in the log we can see it bind with
this line
INFO 12:16:23,108 Binding
On Thu, Aug 22, 2013 at 10:24 AM, Jay Svc jaytechg...@gmail.com wrote:
In our cluster, the commit log is getting filled up as write progresses.
It is noticed that once the commit log is flushed to SSTable the commit log
files are not removed/deleted. The result of that the commit log volume is
Hi Users,
In our cluster, the commit log is filling up as writes progress. We
noticed that once the commit log is flushed to an SSTable, the commit log
files are not removed/deleted. As a result, the commit log volume is
filling up with commit log files.
Any reason why commit
It's DSE 3.1, Cassandra 2.1.
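For context, commit log segment recycling is bounded by settings in cassandra.yaml; a sketch of the relevant knobs (values here are illustrative defaults, not a recommendation for this cluster):

```yaml
# cassandra.yaml -- commit log sizing (illustrative values)
commitlog_directory: /var/lib/cassandra/commitlog
commitlog_segment_size_in_mb: 32      # size of each segment file
commitlog_total_space_in_mb: 4096     # above this, oldest segments are
                                      # flushed so their files can be recycled
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
```

Segments are only eligible for removal once every memtable holding their mutations has been flushed, so a rarely written table can pin old segments.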
On Thu, Aug 22, 2013 at 10:28 AM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 22, 2013 at 10:24 AM, Jay Svc jaytechg...@gmail.com wrote:
In our cluster, the commit log is getting filled up as write progresses.
It is noticed that once the commit log is
Hello,
Has anyone used an AWS VPC for a Cassandra cluster? The static private IPs
in a VPC should be helpful for node replacement.
Please share any related experiences, or suggest ideas for static IPs on EC2
for Cassandra.
-Rashmi
On Thu, Aug 22, 2013 at 11:13 AM, rash aroskar rashmi.aros...@gmail.comwrote:
Has anyone used aws VPC for cassandra cluster? The static private ips of
VPC must be helpful in case of node replacement.
Please share any experiences related or suggest ideas for static ips in
ec2 for cassandra.
Hi,
Create an ENI, set the IP, and attach it to the Cassandra instance:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
[]'s
--
Julio Quierati
User Linux #492973
OpenPGP Key: 0xC9A064FA578E0D60
8D70 B111 ECE9 D3E9 E661 305B C9A0 64FA 578E 0D60
2013/8/22 Robert Coli rc...@eventbrite.com
On Sun, Aug 18, 2013 at 3:19 PM, Rodrigo Felix
rodrigofelixdealme...@gmail.com wrote:
I've noticed that, at least in my environment (Cassandra 1.1.12 running
on Amazon EC2), decommission operations take about 3-4 minutes, while
bootstrap can take more than 20 minutes.
What is the reason
On Fri, Aug 16, 2013 at 4:46 PM, Keith Freeman 8fo...@gmail.com wrote:
I have a 3-node cluster running 1.2.8, and with no clients connected (for
about an hour) OpsCenter is showing a heartbeat-like pattern for total
writes in the Cluster Reads/Writes panel on the dashboard, ranging from
about
Hi Nick,
token and key are not same. it was like this long time ago (single MD5
assumed single key)
True. That reminds me to run a test with the latest 1.2 instead of our
current 1.0!
if you want ordered, you probably can arrange your data in a way so you
can get it in ordered fashion.
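The token/key distinction above can be made concrete: under RandomPartitioner, a row's position on the ring is the MD5 digest of its key, so a token names a ring position rather than a key, and distinct keys could in principle collide on one token. A rough sketch (the function name and details are mine, not Cassandra's actual implementation):

```python
import hashlib

def random_partitioner_token(key: bytes) -> int:
    # RandomPartitioner places a row by the MD5 digest of its key.
    # The token is the digest read as a (non-negative) big integer,
    # giving a position in roughly [0, 2**127].
    digest = hashlib.md5(key).digest()
    return abs(int.from_bytes(digest, byteorder="big", signed=True))
```

The same key always maps to the same token, but the mapping is not reversible: given a token you cannot recover the key, which is why "paging by token" and "paging by key" are different operations.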
Hi, Cassandra experts,
Currently we are running a major compaction (triggered daily by a cron job), as
our application keeps creating new columns in each row, with old columns
automatically expiring/being deleted by TTL.
We are going to switch to LeveledCompactionStrategy, and we are
wondering if we
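The write pattern Brian describes -- new columns arriving while old ones expire by TTL -- can be modeled with a toy liveness check (a sketch of the TTL rule only; the names and shape are mine, and real expired columns linger on disk as tombstone-like data until compaction purges them):

```python
def live_columns(columns, now):
    # columns: {name: (value, write_ts, ttl_seconds or None)}
    # A column is live while now < write_ts + ttl; a ttl of None
    # means the column never expires.
    return {name: v for name, v in columns.items()
            if v[2] is None or now < v[1] + v[2]}
```

This is why the daily major compaction helps in the size-tiered case: it is what actually reclaims the space held by expired columns.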
Yep - OpsCenter stores its own data in Cassandra, thus the activity.
You could also turn on debug logging for StorageProxy on one of the nodes
if you really want to know.
On Thu, Aug 22, 2013 at 6:18 PM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Aug 16, 2013 at 4:46 PM, Keith Freeman
We've also noticed fairly poor streaming performance during a bootstrap
operation, albeit with 1.2.x. Streaming takes much longer than the physical
hardware capacity would allow, even with the limits set high or disabled:
https://issues.apache.org/jira/browse/CASSANDRA-5726
On Sun, Aug 18, 2013 at 6:19 PM,
No. Leveled tables cannot be manually compacted.
On Thursday, August 22, 2013, brianchang brian_chan...@yahoo.com wrote:
Hi, Cassandra experts,
Currently we are running major compaction (triggered daily by cron job),
as
our application continue creating new columns in each row with old
Thanks much, Edward!
One more follow-up question: can we freely switch back to
SizeTieredCompactionStrategy later (and also resume running the major compaction
cron job) if we find that LeveledCompactionStrategy does not end up giving better
performance (e.g., if we should experience those intensive I/O
On Wed, Aug 14, 2013 at 10:56 PM, Faraaz Sareshwala
fsareshw...@quantcast.com wrote:
- All writes invalidate the entire row (updates throw out the cached
row)
This is not correct. Writes are added to the row if it is in the row
cache. If it's not in the row cache, the row is not
On Thu, Aug 22, 2013 at 5:24 PM, brianchang brian_chan...@yahoo.com wrote:
One more follow-up question: can we freely switch back to
SizeTieredCompactionStrategy later (and also resume running major
compaction
cron job), if we find LeveledCompactionStrategy does not end up with better
Thanks much Rob!
Brian
If you are using off-heap memory for the row cache, "all writes invalidate the
entire row" should be correct.
Boris
On Fri, Aug 23, 2013 at 8:32 AM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Aug 14, 2013 at 10:56 PM, Faraaz Sareshwala
fsareshw...@quantcast.com wrote:
- All writes
We are using 1.0. Our observation is that if you are using a secondary index,
building the secondary index after streaming is time-consuming, and the
bootstrap needs to wait for the secondary index build to
complete.
I am not sure whether this also applies to 1.1/1.2. You could set the log
After a bit of searching, I think I've found the answer I've been looking for.
I guess I didn't search hard enough before sending out this email. Thank you
all for the responses.
According to the DataStax documentation [1], there are two types of row cache
providers:
row_cache_provider
Hello,
I also have some doubts about changing to leveled compaction:
1) Is this change computationally expensive? My SSTables have around 7 GB of
data; I'm afraid the nodes won't handle the pressure of the compactions, maybe
dying from OOM or suffering extremely high latency during the compactions...