Hello Sean,

here are my schema and RF:

-------------------------------------------------------------------------
CREATE KEYSPACE my_keyspace
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '1'}
    AND durable_writes = true;

CREATE TABLE my_keyspace.my_table (
    pkey text,
    event_datetime timestamp,
    agent text,
    ft text,
    ftt text,
    some_id bigint,
    PRIMARY KEY (pkey, event_datetime)
) WITH CLUSTERING ORDER BY (event_datetime DESC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
        'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64',
        'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 90000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

-------------------------------------------------------------------------

The queries I make are very simple:

select pkey, event_datetime, ft, some_id, ftt
from my_keyspace.my_table where pkey = ? limit ?;

and

insert into my_keyspace.my_table (event_datetime, pkey, agent, some_id, ft, ftt)
values (?, ?, ?, ?, ?, ?);
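
For reference, this is roughly how I execute them from Node.js (a minimal
sketch assuming a recent version of the cassandra-driver package; the contact
point and DC name are from my cluster, error handling omitted):

const cassandra = require('cassandra-driver');

// localDataCenter must match the DC name used in the keyspace replication.
const client = new cassandra.Client({
  contactPoints: ['10.8.0.10'],
  localDataCenter: 'DC1',
  keyspace: 'my_keyspace',
  queryOptions: { consistency: cassandra.types.consistencies.localOne }
});

const selectQuery =
  'select pkey, event_datetime, ft, some_id, ftt ' +
  'from my_table where pkey = ? limit ?';
const insertQuery =
  'insert into my_table (event_datetime, pkey, agent, some_id, ft, ftt) ' +
  'values (?, ?, ?, ?, ?, ?)';

// { prepare: true } prepares each statement once and reuses it afterwards.
async function readRows(pkey, limit) {
  const rs = await client.execute(selectQuery, [pkey, limit], { prepare: true });
  return rs.rows;
}

async function writeRow(eventDatetime, pkey, agent, someId, ft, ftt) {
  await client.execute(insertQuery,
    [eventDatetime, pkey, agent, someId, ft, ftt], { prepare: true });
}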

About the Retry policy, the answer is yes for writes: when a write fails I
store the data somewhere else and, after a period, I try to write it to
Cassandra again. This way I manage to store almost all my data. When the
problem is on the read side, though, I don't apply any Retry policy (and
that is exactly my problem).
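
For the read side I was considering a custom policy like this minimal sketch
for the Node.js driver (the retry count of 2 is an arbitrary choice of mine,
and I have not tested this exact code):

const { policies } = require('cassandra-driver');

// Retry read/write timeouts a couple of times before surfacing the error.
// Retrying the writes is only safe because these inserts are idempotent.
class TimeoutRetryPolicy extends policies.retry.RetryPolicy {
  onReadTimeout(info, consistency, received, blockFor, isDataPresent) {
    return info.nbRetry < 2
      ? this.retryResult(consistency) // try again at the same consistency
      : this.rethrowResult();         // give up and surface the error
  }
  onWriteTimeout(info, consistency, received, blockFor, writeType) {
    return info.nbRetry < 2
      ? this.retryResult(consistency)
      : this.rethrowResult();
  }
}

// Wired into the Client options:
// new cassandra.Client({ ..., policies: { retry: new TimeoutRetryPolicy() } });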


Thanks
Marco


On Fri, 21 Dec 2018 at 17:18, Durity, Sean R <
sean_r_dur...@homedepot.com> wrote:

> Can you provide the schema and the queries? What is the RF of the keyspace
> for the data? Are you using any Retry policy on your Cluster object?
>
> Sean Durity
>
>
>
> *From:* Marco Gasparini <marco.gaspar...@competitoor.com>
> *Sent:* Friday, December 21, 2018 10:45 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Writes and Reads with high latency
>
> hello all,
>
> I have 1 DC of 3 nodes running Cassandra 3.11.3 (Java 1.8.0_191), and I
> read and write with consistency level ONE.
>
> Every day, many Node.js programs send data to the Cassandra cluster via the
> Node.js cassandra-driver. I get about 600k requests per day, and each
> request causes the server to:
>
> 1_ READ some data from Cassandra (by an id; usually I get 3 records),
>
> 2_ DELETE one of those records,
>
> 3_ WRITE the new data into Cassandra.
>
> So every day I make many deletes; roughly, each request looks like the
> sketch below.
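>
> In code, each request does roughly this (a simplified sketch with the
> Node.js cassandra-driver; "client" is a connected Client, error handling
> omitted, and which record gets deleted is decided by application logic):
>
> async function handleRequest(id, newRow) {
>   // 1_ READ the records for this id (usually ~3 come back)
>   const rs = await client.execute(
>     'select pkey, event_datetime, ft, some_id, ftt ' +
>     'from my_keyspace.my_table where pkey = ? limit ?',
>     [id, 3], { prepare: true });
>
>   // 2_ DELETE one of them; the full primary key is needed,
>   // and every delete writes a tombstone
>   const victim = rs.rows[rs.rows.length - 1];
>   await client.execute(
>     'delete from my_keyspace.my_table where pkey = ? and event_datetime = ?',
>     [victim.pkey, victim.event_datetime], { prepare: true });
>
>   // 3_ WRITE the new data into Cassandra
>   await client.execute(
>     'insert into my_keyspace.my_table ' +
>     '(event_datetime, pkey, agent, some_id, ft, ftt) values (?, ?, ?, ?, ?, ?)',
>     [newRow.event_datetime, newRow.pkey, newRow.agent,
>      newRow.some_id, newRow.ft, newRow.ftt],
>     { prepare: true });
> }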
>
> Every day I find errors like:
>
> "All host(s) tried for query failed. First host tried, 10.8.0.10:9042
> <https://urldefense.proofpoint.com/v2/url?u=http-3A__10.8.0.10-3A9042_&d=DwMFaQ&c=MtgQEAMQGqekjTjiAhkudQ&r=aC_gxC6z_4f9GLlbWiKzHm1vucZTtVYWDDvyLkh8IaQ&m=Y2zNzOyvqOiHqZ5yvB1rO_X6C-HivNjXYN0bLLL-yZQ&s=2v42cyvuxcXJ0oMfUrRcY-kRno1SkM4CTEMi4n1k0Wo&e=>:
> Host considered as DOWN. See innerErrors...."
>
> "Server timeout during write query at consistency LOCAL_ONE (0 peer(s)
> acknowledged the write over 1 required)...."
>
> "Server timeout during write query at consistency SERIAL (0 peer(s)
> acknowledged the write over 1 required)...."
>
> "Server timeout during read query at consistency LOCAL_ONE (0 peer(s)
> acknowledged the read over 1 required)...."
>
> nodetool tablehistograms tells me this:
>
> Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
>                            (micros)      (micros)         (bytes)
> 50%             8.00         379.02       1955.67          379022           8
> 75%            10.00         785.94     155469.30          654949          17
> 95%            12.00       17436.92     268650.95         1629722          35
> 98%            12.00       25109.16     322381.14         2346799          42
> 99%            12.00       30130.99     386857.37         3379391          50
> Min             0.00           6.87         88.15             104           0
> Max            12.00       43388.63     386857.37        20924300         179
>
> At the 99th percentile the write and read latencies are pretty high, but I
> don't know how to improve them.
>
> I can provide more statistics if needed.
>
> Is there any improvement I can make to Cassandra's configuration in order
> not to lose any data?
>
> Thanks
>
> Regards
>
> Marco
>