> - Why does this log keep appearing? (Is this expected?)
Were you noticing file times changing?
The log files are recycled so it may have been that or from the 10 second 
commit log fsync.
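
For reference, the periodic fsync mentioned above is controlled in cassandra.yaml. A sketch of the relevant settings (the 10-second value is the usual default for the periodic mode, but check your own config):

```yaml
# cassandra.yaml -- commit log durability settings (defaults may vary by version)
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000   # fsync the commit log every 10 seconds
```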

Can you provide more details on what you saw?

> <Question2>
> After a restart, a large amount of the data in the SSTables was lost.
> (In particular, the data written at the beginning of the load run was lost.)
Did you put the SSTables back?
Once data is committed to the SSTable the relevant parts of the commit log are 
marked as no longer necessary. Once the commit log segment is recycled that 
data is gone.

> If the disk where Cassandra writes the commit log runs out of space, a core
> dump occurs and the Cassandra process goes down.

I don't think Cassandra would shut down in that case, though that may have 
changed. It would probably block the writes.

> Although resource planning is important, is there any way to detect an
> impending process crash beforehand?
Not sure what you mean here.
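
If the question is about detecting that disk space is about to run out before the process dies, one common approach is an external monitor (e.g. run from cron) watching the commit log volume. A minimal sketch in Python; the directory path and 1 GiB threshold are placeholder assumptions, not Cassandra defaults:

```python
import os
import shutil

def check_free_space(path, min_free_bytes):
    """Return True if the filesystem holding `path` still has at least
    `min_free_bytes` available."""
    usage = shutil.disk_usage(path)
    return usage.free >= min_free_bytes

# Placeholder location of the commit log volume; adjust for your install.
COMMITLOG_DIR = "/var/lib/cassandra/commitlog"

# Warn before the disk actually fills, e.g. when under 1 GiB remains free.
if os.path.isdir(COMMITLOG_DIR) and not check_free_space(COMMITLOG_DIR, 1 << 30):
    print("WARNING: commit log disk is nearly full")
```

Hooking a check like this into whatever monitoring you already run gives you warning before writes start failing, rather than after the process dies.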

Cheers
-----------------
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 8/01/2013, at 23:59, hiroshi.kise...@hitachi.com wrote:

> 
> Dear everyone 
> 
> I am struggling with a write-load problem in Cassandra.
> Please advise if there is a good solution.
> Cassandra runs on a single server; the version in question is 1.1.7.
> Cassandra 1.1.8 behaved the same way.
> 
> <Load procedure>
> 1. Write data from a Thrift client under a certain amount of load.
> 2. The disk fills up and exceptions begin to occur.
>   (However, the Cassandra process keeps running.)
> 3. Stop Cassandra in this state.
> 4. Archive the files stored in the SSTable directory with tar, and expand
>    the disk space.
> 5. Restart Cassandra. (During the restart, data is replayed into the
>    SSTables from the commit log.)
> 
> The commit log was stored in a different location from the SSTables, and
> that disk had plenty of free space.
> 
> 6. The contents of the data restored to the SSTables were checked.
> 
> 
> <Request for advice>
> <Question1>
> After the disk filled up, even after writing from the Thrift client was
> stopped, log messages about flushing Memtables to disk continued to be
> output about once per second.
> (No exception appears, but it is unclear whether the writes succeeded.)
> - Why does this log keep appearing? (Is this expected?)
> 
> <Question2>
> After a restart, a large amount of the data in the SSTables was lost.
> (In particular, the data written at the beginning of the load run was lost.)
> 
> - Is this unavoidable when running on a single server?
> - What is the cause, and what countermeasures are there?
> 
> 
> <Request 3> (from a different viewpoint)
> If the disk where Cassandra writes the commit log runs out of space, a core
> dump occurs and the Cassandra process goes down.
> - Although resource planning is important, is there any way to detect an
>   impending process crash beforehand?
> ----------
> Hiroshi KIse
> Hitachi, Ltd., Information & Telecommunication System Company
