Re:RE: Re:RE: data not beening syncd in different nodes

2018-09-02 Thread David Ni


Hi Kyrylo,
 The fact is that whenever I query from node 1 it returns about 60 rows; from 
node 2, about 140 rows; from node 3, about 30 rows; from node 4, about 140 rows; 
from node 5, about 70 rows; and from node 6, about 140 rows. I have tried many 
times, and the result from each individual node is always the same, but it 
differs from the other nodes.





At 2018-08-31 18:08:01, "Kyrylo Lebediev"  
wrote:


TTL 60 seconds – small value (even smaller than compaction window). This means 
that even if all replicas are consistent, data is deleted really quickly so 
that results may differ even for 2 consecutive queries. How about this theory?

 

CL in your driver – depends on which CL is default for your particular driver.

 

Regards,

Kyrill

 

From: David Ni 
Sent: Friday, August 31, 2018 12:53 PM
To: user@cassandra.apache.org
Subject: Re:RE: data not beening syncd in different nodes

 

Hi Kyrylo

I have already tried consistency QUORUM and ALL; the result is still the same.

The Java code that writes data to Cassandra does not set a CL. Does this mean 
the default CL is ONE?

The tpstats output is below. There are some dropped mutations, but the count 
has not grown over a long period.

Pool Name                     Active  Pending  Completed  Blocked  All time blocked
MutationStage                      0        0  300637997        0                 0
ViewMutationStage                  0        0          0        0                 0
ReadStage                          0        0    4357929        0                 0
RequestResponseStage               0        0  306954791        0                 0
ReadRepairStage                    0        0     472027        0                 0
CounterMutationStage               0        0          0        0                 0
MiscStage                          0        0          0        0                 0
CompactionExecutor                 0        0   17976139        0                 0
MemtableReclaimMemory              0        0      53018        0                 0
PendingRangeCalculator             0        0         11        0                 0
GossipStage                        0        0   59889799        0                 0
SecondaryIndexManagement           0        0          0        0                 0
HintsDispatcher                    0        0          7        0                 0
MigrationStage                     0        0        101        0                 0
MemtablePostFlush                  0        0      41470        0                 0
PerDiskMemtableFlushWriter_0       0        0      52779        0                 0
ValidationExecutor                 0        0         80        0                 0
Sampler                            0        0          0        0                 0
MemtableFlushWriter                0        0      40301        0                 0
InternalResponseStage              0        0         70        0                 0
AntiEntropyStage                   0        0        352        0                 0
CacheCleanupExecutor               0        0          0        0                 0
Native-Transport-Requests          0        0  158242159        0             13412

 

Message type      Dropped
READ                    0
RANGE_SLICE             0
_TRACE                  1
HINT                    0
MUTATION               34
COUNTER_MUTATION        0
BATCH_STORE             0
BATCH_REMOVE            0
REQUEST_RESPONSE        0
PAGED_RANGE             0
READ_REPAIR             0

 

And yes, we are inserting data with TTL = 60 seconds.

We have 200 vehicles, and each one updates this table every 5 or 10 seconds.
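As a rough sanity check (back-of-envelope only, assuming each update inserts one distinct row per vehicle and the query is not otherwise restricted), the steady-state number of live rows would be:

```python
vehicles = 200
ttl = 60  # seconds

for interval in (5, 10):  # update period per vehicle, in seconds
    live = vehicles * ttl // interval  # rows still inside the TTL window
    print(f"update every {interval}s -> about {live} live rows")
```

If the counts a query actually returns are far below this estimate, the query is presumably touching only a subset of the data, or rows are expiring or missing on some replicas.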

 


At 2018-08-31 17:10:50, "Kyrylo Lebediev"  
wrote:



Looks like you’re querying the table at CL = ONE which is default for cqlsh.

If you run cqlsh on nodeX it doesn’t mean you retrieve data from this node. 
What this means is that nodeX will be coordinator, whereas actual data will be 
retrieved from any node, based on token range + dynamic snitch data (which, I 
assume, you use as it’s turned on by default).

Which CL do you use when you write data?

Try querying using CL = QUORUM or ALL. What’s your result in this case?
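In cqlsh, that might look like the following (the keyspace and table names are placeholders):

```
cqlsh> CONSISTENCY QUORUM;
Consistency level set to QUORUM.
cqlsh> SELECT count(*) FROM my_keyspace.my_table;
```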

If you run ‘nodetool tpstats’ across all the nodes, are there dropped mutations?

 

 

As you use TimeWindowCompactionStrategy, do you insert data with TTL?

These buckets seem too small to me: 'compaction_window_size': '2', 
'compaction_window_unit': 'MINUTES'.

Do you have such a huge amount of writes so that such bucket size makes sense?

 

Regards,

Kyrill

 

From: David Ni 
Sent: Friday, August 31, 2018 11:39 AM
To: user@cassandra.apache.org
Subject: data not beening syncd in different nodes

 

Hi Experts,

I am using 3.9 

Re: Large sstables

2018-09-02 Thread shalom sagges
If there are a lot of droppable tombstones, you could also run User Defined
Compaction on that (and on other) SSTable(s).

This blog post explains it well:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
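As described in the linked post, a user-defined compaction is triggered through the CompactionManager MBean over JMX. A sketch using jmxterm (the jar version, host/port, and SSTable filename below are placeholders):

```shell
echo "run -b org.apache.cassandra.db:type=CompactionManager \
  forceUserDefinedCompaction mykeyspace-mytable-ka-1234-Data.db" | \
  java -jar jmxterm-1.0.2-uber.jar -l localhost:7199
```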

On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <
mohamadrezarosta...@gmail.com> wrote:

> Hi, Dear Vitali
> The best option for you is to migrate the data to a new table and change
> the partition key pattern for a better distribution of data, so your
> sstables become smaller. But if your data already has a good distribution
> and is really big, you must add a new server to your datacenter. Changing
> the compaction strategy carries some risk.
>
> > On Shahrivar 8, 1397 AP, at 19:54, Jeff Jirsa  wrote:
> >
> > Either of those are options, but there’s also sstablesplit to break it
> up a bit
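For reference, an sstablesplit invocation might look like this (paths and the target size are placeholders; check `sstablesplit -h` for the flags on your version):

```shell
# Stop the Cassandra node first: sstablesplit must not run against a live node.
tools/bin/sstablesplit --no-snapshot -s 50 \
  /var/lib/cassandra/data/my_keyspace/my_table-*/mc-*-big-Data.db
```

Here `-s 50` is the target size of the split output files in MB.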
> >
> > Switching to LCS can be a problem depending on how many sstables
> /overlaps you have
> >
> > --
> > Jeff Jirsa
> >
> >
> >> On Aug 30, 2018, at 8:05 AM, Vitali Dyachuk  wrote:
> >>
> >> Hi,
> >> Some of the sstables got too big (100 GB and more), so they are no
> >> longer compacting, and some of the disks are running out of space. I'm
> >> running C* 3.0.17, RF3, with 10 disks/JBOD and STCS.
> >> What are my options? Completely delete all data on this node and rejoin
> >> it to the cluster, or change the compaction strategy to LCS and then
> >> run repair?
> >> Vitali.
> >>
> >
> > -
> > To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> > For additional commands, e-mail: user-h...@cassandra.apache.org
> >
>
>
>
>