Dear everyone,

I am having trouble with heavy write load on Cassandra.
Please advise me if there is a good solution.
Cassandra is running on a single server; the version is Cassandra 1.1.7.
The behavior was the same with Cassandra 1.1.8.

<Load procedure> 
1. Apply a certain amount of write load from a Thrift client.
2. The disk fills up and exceptions begin to occur.
   (However, the Cassandra process keeps running.)
3. Stop Cassandra in this state.
4. Archive the files stored in the SSTable directory with tar, and expand the
   disk space.
5. Restart Cassandra. (During the restart sequence, data is restored to the
   SSTables from the commit log.)

The commit log was stored in a different location from the SSTables, and
that disk had plenty of free space.

6. Check the contents of the data restored to the SSTables.


<Request for advice>
<Question 1>
After the disk filled up, even once writes from the Thrift client had been
stopped, log messages saying that a Memtable was about to be flushed to disk
continued to appear at intervals of about one second.
(No exception is raised, but it is unknown whether the writes succeeded.)
- Why do these log messages keep appearing? (What is the presumed cause?)

<Question 2>
After the restart, a large amount of the data in the SSTables had been lost.
(In particular, the data from the first part of the write load was lost.)

- Is this an unavoidable phenomenon when operating a single server?
- What is the cause, and what countermeasures are available?
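For reference, one thing that may be relevant to durability here (my own
understanding of the 1.1.x defaults, not verified against this deployment):
with periodic commit-log sync, acknowledged writes are only fsynced to the
commit log at an interval, so some recent writes can be lost on a crash.

```yaml
# cassandra.yaml (1.1.x defaults, as I understand them)
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
```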


<Question 3> (changing viewpoint)
If the disk where Cassandra writes the commit log runs out of space, a core
dump occurs and the Cassandra process goes down.
- Capacity planning is important, of course, but is there any way to detect
  an impending process crash in advance?
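As one idea for the last point (a minimal sketch, not something we run in
production): a small watchdog that warns before the commit-log disk fills,
using only the Python standard library. The directory path and the 80%
threshold below are assumptions, not values taken from our configuration.

```python
import os
import shutil


def disk_usage_ratio(path):
    """Return the fraction of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


def check_commitlog_disk(path, warn_ratio=0.8):
    """Return True if the disk holding `path` is above the warning threshold."""
    return disk_usage_ratio(path) >= warn_ratio


if __name__ == "__main__":
    # Hypothetical commit-log location; adjust to the actual data layout.
    COMMITLOG_DIR = "/var/lib/cassandra/commitlog"
    if os.path.isdir(COMMITLOG_DIR) and check_commitlog_disk(COMMITLOG_DIR):
        print("WARNING: commit-log disk above 80% - act before it fills")
```

Run periodically (e.g. from cron), this would give advance warning before the
commit-log disk is exhausted, rather than discovering the problem from a core
dump after the process has already gone down.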
----------
Hiroshi KIse
Hitachi, Ltd., Information & Telecommunication System Company
