scylladb

2015-11-05 Thread tommaso barbugli
Hi guys, did anyone already try Scylladb (yet another fastest NoSQL database in town) and has some thoughts/hands-on experience to share? Cheers, Tommaso

Re: scylladb

2015-11-05 Thread Jon Haddad
Nope, no one I know. Let me know if you try it I'd love to hear your feedback. > On Nov 5, 2015, at 9:22 AM, tommaso barbugli wrote: > > Hi guys, > > did anyone already try Scylladb (yet another fastest NoSQL database in town) > and has some thoughts/hands-on experience to share? > > Cheers,

Re: scylladb

2015-11-05 Thread Carlos Rolo
I will not try it until multi-DC is implemented. More than a month has passed since I last looked, so it could possibly be in place by now; if so, I may take some time to test it. Regards, Carlos Juzarte Rolo Cassandra Consultant Pythian - Love your data rolo@pythian | Twitter: @cjrolo | Linkedin: *lin

Re: scylladb

2015-11-05 Thread Dani Traphagen
As of two days ago, they say they've got it @cjrolo. https://github.com/scylladb/scylla/wiki/RELEASE-Scylla-0.11-Beta On Thursday, November 5, 2015, Carlos Rolo wrote: > I will not try until multi-DC is implemented. More than an month has > passed since I looked for it, so it could possibly be

why cassanra max is 20000/s on a node ?

2015-11-05 Thread 郝加来
Hi everyone, I set up Cassandra 2.2.3 on a single node. The machine's environment is OpenJDK 1.8.0, 512 GB memory, a 128-core CPU, and 3 TB of SSD. The token num is 256 on the node. The program uses DataStax driver 2.1.8 and 5 threads to insert data into Cassandra on the same machine; the data's capacity is 6G

Re: scylladb

2015-11-05 Thread Carlos Rolo
Something to do on an expected rainy weekend. Thanks for the information. Regards, Carlos Juzarte Rolo Cassandra Consultant Pythian - Love your data rolo@pythian | Twitter: @cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo * Mobile: +351 91 891 81 0

Replication Factor Change

2015-11-05 Thread Yulian Oifa
Hello to all. I am planning to change the replication factor from 1 to 3. Will it cause data read errors while the nodes repair? Best regards Yulian Oifa

Re: Can't save Opscenter Dashboard

2015-11-05 Thread Kai Wang
It happens again after I reboot another node. This time I see errors in agent.log. It seems to be related to the previous dead node. INFO [clojure-agent-send-off-pool-2] 2015-11-05 09:48:41,602 Attempting to load stored metric values. ERROR [clojure-agent-send-off-pool-2] 2015-11-05 09:48:41,61

Re: Question for datastax java Driver

2015-11-05 Thread Eric Stevens
In short: yes, but it's not a good idea. To do it, you want to look into WhiteListPolicy for your load balancing policy. If your WhiteListPolicy contains only the same host(s) that you added as contact points, then the client will only connect to those hosts. However it's probably not a good idea f
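The pinning Eric describes can be sketched without a live cluster. The snippet below is a minimal, driver-free simulation of what a whitelist load balancing policy does (the real Java driver class is `WhiteListPolicy`, which wraps a child policy such as `RoundRobinPolicy`); the host addresses here are hypothetical.

```python
# Simulates the filtering a whitelist load balancing policy performs:
# of all hosts discovered via gossip, only those in the whitelist are
# eligible as query coordinators. Host addresses are hypothetical.
def eligible_coordinators(discovered_hosts, whitelist):
    allowed = set(whitelist)
    return [h for h in discovered_hosts if h in allowed]

discovered = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # full cluster, via gossip
contact_points = ["10.0.0.1"]                      # doubling as the whitelist

print(eligible_coordinators(discovered, contact_points))  # ['10.0.0.1']
```

With the Java driver this corresponds roughly to passing `new WhiteListPolicy(childPolicy, whitelistedAddresses)` to `Cluster.builder().withLoadBalancingPolicy(...)`; as the thread notes, pinning the client to one host sacrifices the driver's failover and token awareness.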

Cassandra 2.0 Batch Statement for timeseries schema

2015-11-05 Thread Sachin Nikam
I currently have a keyspace with a table definition that looks like this: CREATE TABLE orders (order_id bigint PRIMARY KEY, order_blob text); This table will have a write load of ~40-100 tps and a read load of ~200-400 tps. We are now considering adding another table definition which closely

Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread Jack Krupansky
I don't know what current numbers are, but last year the idea of getting 1 million writes per second on a 96 node cluster was considered a reasonable achievement. That would be roughly 10,000 writes per second per node and you are getting twice that. See: http://www.datastax.com/1-million-writes
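The per-node arithmetic behind the benchmark Jack cites is straightforward:

```python
# Rough per-node write rate implied by the "1 million writes/s on 96 nodes"
# benchmark referenced in the thread.
cluster_writes_per_sec = 1_000_000
nodes = 96
per_node = cluster_writes_per_sec / nodes
print(round(per_node))  # ~10417 writes/s per node
```

So the 20,000/s the original poster observes on a single node is already roughly double that reference figure.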

Re: Does datastax java driver works with ipv6 address?

2015-11-05 Thread Eric Stevens
The server is binding to the IPv4 "all addresses" reserved address (0.0.0.0), but binding it as IPv4 over IPv6 (:::0.0.0.0), which does not have the same meaning as the IPv6 all addresses reserved IP (being ::, aka 0:0:0:0:0:0:0:0). My guess is you have an IPv4 address of 0.0.0.0 in rpc_addres
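A sketch of the relevant `cassandra.yaml` setting, assuming (as Eric guesses) the node is currently binding the IPv4 wildcard; treat this as an illustrative fragment, not the poster's actual configuration:

```yaml
# Current guess: IPv4 wildcard, which a dual-stack JVM reports as :::0.0.0.0
rpc_address: 0.0.0.0

# To listen on the IPv6 "all addresses" wildcard (::) instead:
# rpc_address: "::"

# Note: with any wildcard rpc_address, broadcast_rpc_address must be set
# to a concrete address that clients can actually reach.
```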

Re: Replication Factor Change

2015-11-05 Thread Eric Stevens
If you switch reads to CL=LOCAL_ALL, you should be able to increase RF, then run repair, and after repair is complete, go back to your old consistency level. However, while you're operating at ALL consistency, you have no tolerance for a node failure (but at RF=1 you already have no tolerance for
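Why ALL is required during the transition is a pigeonhole argument: immediately after raising RF, only the original replica holds each row, so a read must contact enough replicas to be guaranteed to include it. A small numeric sketch:

```python
def min_replicas_to_read(rf, replicas_with_data):
    """Smallest number of replicas a read must contact to be guaranteed
    to include at least one replica that has the row (pigeonhole)."""
    return rf - replicas_with_data + 1

# Just after changing RF 1 -> 3, before repair finishes, only 1 replica has data:
print(min_replicas_to_read(rf=3, replicas_with_data=1))  # 3, i.e. CL=ALL

# After repair completes, all 3 replicas have the data:
print(min_replicas_to_read(rf=3, replicas_with_data=3))  # 1, i.e. CL=ONE is safe again
```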

Re: Cassandra 2.0 Batch Statement for timeseries schema

2015-11-05 Thread DuyHai Doan
""Get me the count of orders changed in a given sequence-id range"" --> Can you give an example of SELECT statement for this query ? Because given the table structure, you have to provide the shard-and-date partition key and I don't see how you can know this value unless you create as many SELECT

RE: Replication Factor Change

2015-11-05 Thread aeljami.ext
Hello, if the current CL = ONE, be careful in production at the time of the replication factor change: 3 nodes will be queried while data is still being transferred ==> so data read errors! From: Yulian Oifa [mailto:oifa.yul...@gmail.com] Sent: Thursday, November 5, 2015 16:02 To: user@cassandra.apache.org Subject

Re: Cassandra 2.0 Batch Statement for timeseries schema

2015-11-05 Thread Eric Stevens
If you're talking about logged batches, these absolutely have an impact on performance of about 30%. The whole batch will succeed or fail as a unit, but throughput will go down and load will go up. Keep in mind that logged batches are atomic but are not isolated - i.e. it's totally possible to ge
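For reference, a logged batch in CQL is the statements wrapped in BEGIN BATCH ... APPLY BATCH; the batch log is what buys the all-or-nothing guarantee at the ~30% throughput cost Eric mentions, while readers may still observe partial application (atomic, not isolated). A small helper that assembles one; the table and column names are hypothetical:

```python
def logged_batch(statements):
    """Assemble a CQL logged batch: atomic as a unit, but not isolated."""
    body = "\n".join(s.rstrip(";") + ";" for s in statements)
    return f"BEGIN BATCH\n{body}\nAPPLY BATCH;"

cql = logged_batch([
    "INSERT INTO orders (order_id, order_blob) VALUES (42, 'blob')",
    "INSERT INTO orders_by_day (shard, day, order_id) VALUES (0, '2015-11-05', 42)",
])
print(cql)
```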

Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread Eric Stevens
> 512G memory , 128core cpu This seems dramatically oversized for a Cassandra node. You'd do *much* better to have a much larger cluster of much smaller nodes. On Thu, Nov 5, 2015 at 8:25 AM Jack Krupansky wrote: > I don't know what current numbers are, but last year the idea of getting 1 > m

Re: Replication Factor Change

2015-11-05 Thread Yulian Oifa
Hello, OK, I got it, so I should set CL to ALL for reads; otherwise data may be retrieved from nodes that do not yet have the current record. Thanks for the help. Yulian Oifa On Thu, Nov 5, 2015 at 5:33 PM, Eric Stevens wrote: > If you switch reads to CL=LOCAL_ALL, you should be able to increase RF, >

Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread Tyler Hobbs
> > the program use datastax driver 2.1.8 and use 5 thread to insert data to > cassandra on the same machine The client with five threads is probably your bottleneck. Try running the cassandra stress tool for comparison. You should see at least double the throughput. On Thu, Nov 5, 2015 at 9:5
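Tyler's diagnosis can be made quantitative with Little's law: a synchronous client's throughput is bounded by threads divided by per-request latency. With 5 threads, a plausible ~0.25 ms localhost round trip caps out right at the observed 20,000/s. The latency figure here is an illustrative assumption, not a measurement:

```python
def max_throughput(threads, latency_s):
    # Little's law for synchronous clients: at most `threads` requests are
    # in flight at once, so ops/s <= threads / per-request latency.
    return threads / latency_s

print(max_throughput(threads=5, latency_s=0.00025))  # 20000.0 ops/s
```

Doubling the client threads (or switching to asynchronous execution) should move the ceiling, which is exactly what comparing against cassandra-stress would reveal.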

Re: Does nodetool cleanup clears tombstones in the CF?

2015-11-05 Thread Robert Coli
On Wed, Nov 4, 2015 at 12:56 PM, K F wrote: > Quick question, in order for me to purge tombstones on particular nodes if > I run nodetool cleanup will that help in > purging the tombstones from that node? > cleanup is for removing data from ranges the node no longer owns. It is unrelated to t

cassandra-stress and "op rate"

2015-11-05 Thread Herbert Fischer
Hi, I'm doing some hardware benchmarks for Cassandra and trying to figure out what is the best setup with the hardware options I have. I'm testing a single-node Cassandra with three different setups: - 1 HDD for commit log and 6 HDDs for data - 1 HDD for commit log and 1 HDD for data

What are the repercussions of a restart during anticompaction?

2015-11-05 Thread Bryan Cheng
Hey list, Tried to find an answer to this elsewhere, but turned up nothing. We ran our first incremental repair after a large dc migration two days ago; the cluster had been running full repairs prior to this during the migration. Our nodes are currently going through anticompaction, as expected.

Re: Does nodetool cleanup clears tombstones in the CF?

2015-11-05 Thread K F
Thanks Rob, I will look into the checksstablegarbage utility. However, I don't want to run a major compaction as that would result in too big an sstable. Regards, K F From: Robert Coli To: "user@cassandra.apache.org" ; K F Sent: Thursday, November 5, 2015 1:53 PM Subject: Re: Does nodetool

Re: Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread 郝加来
Cassandra is designed for clusters with lots of nodes, right, I know that. But a single node's throughput is only 20000/s? And all tables' total throughput is 20000/s? So I think it is a single thread handling all tables' commands. Normally, a database's total throughput across all tables is ab

Re: Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread 郝加来
Right, but we want a node's throughput to be above a million, so if the system has fifty tables, a single table can achieve 20000/s. 郝加来 From: Eric Stevens Date: 2015-11-05 23:56 To: user@cassandra.apache.org Subject: Re: why cassanra max is 20000/s on a node ? > 512G memory , 128core cpu Thi

Re: Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread Venkatesh Arivazhagan
I agree with Tyler! Have you tried increasing the client threads from 5 to a higher number? On Nov 5, 2015 6:46 PM, "郝加来" wrote: > right , > but wo want a node 's throught is above million , so if the system hava > fifty table , a single table can achive 20000/s . > > > --

Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread Graham Sanderson
Agreed too. It also matters what you are inserting… if you are inserting to the same (or small set of) partition key(s) you will be limited because writes to the same partition key on a single node are atomic and isolated. > On Nov 5, 2015, at 8:49 PM, Venkatesh Arivazhagan > wrote: > > I agr

Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread Graham Sanderson
Also it sounds like you are reading the data from a single file - the problem could easily be with your load tool; try (as someone suggested) using cassandra-stress > On Nov 5, 2015, at 9:06 PM, Graham Sanderson wrote: > > Agreed too. It also matters what you are inserting… if you are inserting

Re: Re: why cassanra max is 20000/s on a node ?

2015-11-05 Thread 郝加来
Hi, writes to the same partition key on a single node are atomic and isolated? Sorry, I haven't read the source code, but I think Cassandra is single-threaded over the whole keyspace, not per partition key, and that the same keyspace is atomic and isolated. Because when the client inserts data into tables a and b ,

Fwd: store avro to cassandra

2015-11-05 Thread Lu Niu
Hi, Cassandra users, my data is in Avro format and the schema is huge. Is there any way that I can automatically convert the Avro schema to a schema that Cassandra could use? Also, is there an API with which I could store and fetch the data? Thank you! Best, Lu