Hi,
thank you for replying.
Unfortunately, the computer DevCenter is running on doesn’t have Internet
access (for security reasons). As a result, I can’t use the pom.xml.
Furthermore, I’ve tried running a Groovy program whose classpath included the
DevCenter (2.x) lib directory, but to no avail.
Get the JARs from the Cassandra lib folder and put them on your build path.
Alternatively, use a Maven project with a pom.xml to download the driver
directly from the repository.
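For reference, a minimal pom.xml dependency sketch for the Maven route. The coordinates below are for the DataStax Java driver 3.x line; the exact version number is an assumption, so pick whichever release matches your cluster:

```xml
<!-- Add inside the <dependencies> section of your pom.xml -->
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>3.4.0</version>
</dependency>
```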
Thanks and Regards,
Goutham Reddy Aenugu.
On Sat, Mar 10, 2018 at 9:30 AM Philippe de Rochambeau
wrote:
> Hello,
> has anyone tried
Hello,
has anyone tried running CQL queries from a Java program using the jars
provided with DevCenter?
Many thanks.
Philippe
I think it's case by case, depending on whether you're chasing read
performance, write performance, or both.
Ours is used by an application whose read requests are about 10 times the
writes; the application wants read performance and doesn't care about writes,
so we use four 380 GB SSDs per node (about 1.5 TB per node), read
My 1.5 TB bound is for high read and write throughput with hundreds of nodes,
specifically where quick bootstraps and repairs matter when adding or
replacing nodes.
The lower the density, the faster it is to add nodes.
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Mar 9, 2018, 11:30
Anyway, I can't find an approximate method to calculate storage per row for
Cassandra records.
For example right now my table schema is as follow:
c1 tinyint,
c2 smallint,
c3 bigint,
c4 int,
c5 int,
c6 boolean
primary key ((c1,c2,c3),c4,c5)
According to the CQL data types document, this should
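As a rough starting point, the fixed-width CQL types in the schema above can simply be summed. This is a sketch of the raw value bytes only; it deliberately ignores Cassandra's per-cell, clustering, and partition overheads, which vary between the pre-3.0 and 3.0+ storage engines:

```java
import java.util.stream.IntStream;

public class RowSizeEstimate {
    // Fixed sizes in bytes of the CQL types in the schema above:
    // tinyint=1, smallint=2, bigint=8, int=4, int=4, boolean=1
    static int rawValueBytesPerRow() {
        int[] sizes = {1, 2, 8, 4, 4, 1};
        return IntStream.of(sizes).sum();
    }

    public static void main(String[] args) {
        // Raw value payload per row, before any storage-engine overhead
        System.out.println("raw value bytes per row: " + rawValueBytesPerRow());
    }
}
```

The real on-disk size per row will be larger once cell timestamps, clustering headers, and compression are accounted for, so treat this as a lower bound.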
You can use variable-length zig-zag coding to encode an integer if using
a blob. It is used in Avro and Protocol Buffers.
Some examples:
value  hex
    0  00
   -1  01
    1  02
   -2  03
    2  04
  ...
  -64  7f
   64  80 01
  ...
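The table above can be reproduced with a short sketch: zig-zag maps signed integers to unsigned ones so that small magnitudes get small codes, and the result is then written as a base-128 varint, least-significant group first, as Protocol Buffers does:

```java
public class ZigZag {
    // Zig-zag encode: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
    static long zigZag(long n) {
        return (n << 1) ^ (n >> 63);
    }

    // Base-128 varint, least-significant 7-bit group first;
    // the high bit of each byte is a continuation flag.
    static String toVarintHex(long v) {
        StringBuilder sb = new StringBuilder();
        do {
            long b = v & 0x7f;
            v >>>= 7;
            if (v != 0) b |= 0x80;  // more groups follow
            sb.append(String.format("%02x ", b));
        } while (v != 0);
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        long[] samples = {0, -1, 1, -2, 2, -64, 64};
        for (long n : samples) {
            System.out.println(n + " -> " + toVarintHex(zigZag(n)));
        }
        // prints, matching the table above:
        // 0 -> 00, -1 -> 01, 1 -> 02, -2 -> 03, 2 -> 04,
        // -64 -> 7f, 64 -> 80 01
    }
}
```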
On Sat, 10 Mar 2018, 07:52 onmstester onmstester,
wrote:
> I've found out