tombstones per slice' when running cfstats
Wouldn't that suggest a delete-heavy workload rather than an update-heavy one?
On Mon, Jul 6, 2015 at 5:21 PM Robert Coli <rc...@eventbrite.com> wrote:
On Mon, Jul 6, 2015 at 4:19 PM, Venkatesh Kandaswamy <ve...@walmartlabs.com> wrote:
Hello,
I cannot find documentation on the last two parameters given by cfstats
below. It looks like most examples show 0 for these metrics, but our table has
large numbers. What do these mean?
Read Count: 60601817
Read Latency: 6.107321156707232 ms.
Write Count: 185864222
Write Latency:
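For quick ad-hoc analysis, figures like the ones above can be pulled apart with a short script. A minimal sketch in Python; the `stats_text` string simply hard-codes the excerpt quoted above, and the unit handling assumes the exact ` ms.` suffix shown there:

```python
# Minimal sketch: parse a cfstats-style excerpt (format assumed from the
# output quoted above) and derive a simple workload indicator.
stats_text = """\
Read Count: 60601817
Read Latency: 6.107321156707232 ms.
Write Count: 185864222
"""

stats = {}
for line in stats_text.splitlines():
    key, _, value = line.partition(":")
    # Strip the trailing " ms." unit so every value parses as a number.
    stats[key.strip()] = float(value.strip().rstrip(".").replace(" ms", ""))

# A write/read ratio well above 1 points at a write-dominated workload.
ratio = stats["Write Count"] / stats["Read Count"]
print(f"write/read ratio: {ratio:.2f}")  # -> write/read ratio: 3.07
```

Roughly three writes per read, which fits an update/insert-heavy table; it does not by itself distinguish updates from deletes, which is where the tombstone counts come in.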
hing? I guess I am missing something very
fundamental and I cannot figure it out from the manuals or the source code
for CqlInputFormat and CqlRecordReader.
Does anyone have working sample code they can share?
————
Venky Kandaswamy
925-200-7124
On 6/29/15, 8:46
save me a few hours of digging
through the code).
On 6/29/15, 8:40 PM, "Venkatesh Kandaswamy" wrote:
>I was going through the WordCount example in the latest 2.1.7 Apache C*
>source and there i
. Is there a reason why this
was removed? Yet the example still includes it, which confuses me. Please shed
some light if you know the answer.
On 6/29/15, 1:15 PM, "Venkatesh Kandaswamy" wrote:
All,
I converted one of my C* programs to Hadoop 2.x and the DataStax C* drivers for
2.1.0. The original (Hadoop 1.x) program worked fine when we set
InputCQLPageRowSize and InputSplitSize to reasonable values. For example, if we
had 60K rows, a row size of 100 and split size of 1 will
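For what it's worth, the interaction between the two settings can be sketched numerically. This is a hedged illustration only: it assumes InputSplitSize counts rows per Hadoop input split and InputCQLPageRowSize is the number of rows fetched per CQL page, which matches how the names are used in this thread but is not confirmed here.

```python
import math

# Hedged sketch of the split/page arithmetic from the example above.
# Assumptions (not confirmed by the thread): InputSplitSize = rows per
# Hadoop input split, InputCQLPageRowSize = rows fetched per CQL page.
total_rows = 60_000
page_row_size = 100  # InputCQLPageRowSize in the example
split_size = 1       # InputSplitSize in the example

num_splits = math.ceil(total_rows / split_size)        # one mapper per split
pages_per_split = math.ceil(split_size / page_row_size)

print(num_splits, pages_per_split)  # -> 60000 1
```

Under these assumptions, a split size of 1 would turn every row into its own single-row split (and mapper task), which hints at why choosing reasonable values matters.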