On Wed, Jan 18, 2012 at 9:14 AM, Harsh Bora <ha...@linux.vnet.ibm.com> wrote:
> On 01/10/2012 10:11 PM, Stefan Hajnoczi wrote:
>>> +            unused = fwrite(&record, ST_V2_REC_HDR_LEN, 1, trace_fp);
>>>             writeout_idx += num_available;
>>>         }
>>>
>>>         idx = writeout_idx % TRACE_BUF_LEN;
>>> -        while (get_trace_record(idx,&record)) {
>>> -            trace_buf[idx].event = 0; /* clear valid bit */
>>> -            unused = fwrite(&record, sizeof(record), 1, trace_fp);
>>> -            idx = ++writeout_idx % TRACE_BUF_LEN;
>>> +        while (get_trace_record(idx,&recordptr)) {
>>> +            unused = fwrite(recordptr, recordptr->length, 1, trace_fp);
>>> +            writeout_idx += recordptr->length;
>>> +            g_free(recordptr);
>>> +            recordptr = (TraceRecord *)&trace_buf[idx];
>>>
>>> +            recordptr->event = 0;
>>> +            idx = writeout_idx % TRACE_BUF_LEN;
>>>         }
>>
>>
>> I'm wondering if it's worth using a different approach here.  Writing
>> out individual records has bothered me.
>>
>> If we have a second buffer, as big as trace_buf[], then a function can
>> copy out all records and make them available in trace_buf again:
>>
>> /**
>>  * Copy completed trace records out of the ring buffer
>>  *
>>  * @idx    offset into trace_buf[]
>>  * @buf    destination buffer
>>  * @len    size of destination buffer
>>  * @ret    the number of bytes consumed
>>  */
>> size_t consume_trace_records(unsigned int idx, void *buf, size_t len);
>>
>> That means consume gobbles up as many records as it can:
>>  * Until it reaches an invalid record which has not been written yet
>>  * Until it reaches the end of trace_buf[] (the caller can call again
>> with idx wrapped to 0)
>>
>> After copying into buf[] it clears the valid bit in trace_buf[].
>>
>> Then the loop which calls consume_trace_records() does a single
>> fwrite(3) and increments idx/writeout_idx.
>>
>> The advantage of this is that we write many records in one go, and I
>> think it splits up the write-out steps a little more cleanly than what
>> we've done previously.
>>
>
> I think this optimization can be introduced as a separate patch later.
> Let me know if you think otherwise.

Yes, that could be done later.  However, there is something incorrect
here.  Threads will continue to write trace records into the ring
buffer while the write-out thread is doing I/O.  Think about what
happens when writer threads overtake the write-out index modulo the
ring buffer size.  Since records are variable-length, the write-out
thread's next index could point into the middle of an overwritten
record.  That means the ->length field is junk; we may crash if we
use it.
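
To make the idea concrete, here is a minimal sketch of
consume_trace_records().  It assumes trace_buf[] is a byte array of
size TRACE_BUF_LEN, that TraceRecord.event doubles as the valid flag
(0 means not yet written), and that records never wrap across the end
of trace_buf[]; those are assumptions for illustration, not what the
patch implements today:

static size_t consume_trace_records(unsigned int idx, void *buf,
                                    size_t len)
{
    uint8_t *dst = buf;
    size_t consumed = 0;

    /* Stop before reading a header that would run past the buffer */
    while (idx + ST_V2_REC_HDR_LEN <= TRACE_BUF_LEN) {
        TraceRecord *record = (TraceRecord *)&trace_buf[idx];

        if (record->event == 0) {
            break;  /* record not written yet */
        }
        if (idx + record->length > TRACE_BUF_LEN) {
            break;  /* end of trace_buf[], caller retries with idx 0 */
        }
        if (consumed + record->length > len) {
            break;  /* destination buffer full */
        }

        memcpy(dst + consumed, record, record->length);
        consumed += record->length;
        idx += record->length;
        record->event = 0;  /* clear valid bit so the slot is reusable */
    }

    return consumed;
}

Note that this sketch still suffers from the overtake problem above:
nothing stops writers from lapping the reader and rewriting a record
between the event check and the memcpy().  Fixing that probably needs
a writer-side counter that the reader can compare against.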

>>>
>>>         fflush(trace_fp);
>>> @@ -231,7 +196,7 @@ void st_set_trace_file_enabled(bool enable)
>>>         static const TraceRecord header = {
>>>             .event = HEADER_EVENT_ID,
>>>             .timestamp_ns = HEADER_MAGIC,
>>> -            .x1 = HEADER_VERSION,
>>> +            .length = HEADER_VERSION,
>>
>>
>> Hmm...this is kind of ugly (see the comment about using .length
>> above), but in this case most parsers will have a special case
>> anyway to check the magic number.  We need to use the .length field
>> because historically that's where the version is located.
>>
>
> So, let's keep the version here, right?

Yes, it's necessary to do .length = HEADER_VERSION.
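
For what it's worth, the reader-side special case might look like
this.  This is just a sketch: it reuses ST_V2_REC_HDR_LEN and the
HEADER_* constants from the patch and assumes the TraceRecord layout
above; the function itself is made up for illustration:

/* Read and validate the trace file header record */
static int read_trace_header(FILE *fp)
{
    TraceRecord header;

    if (fread(&header, ST_V2_REC_HDR_LEN, 1, fp) != 1) {
        return -1;  /* short read */
    }
    if (header.event != HEADER_EVENT_ID ||
        header.timestamp_ns != HEADER_MAGIC) {
        return -1;  /* magic mismatch, not a trace file */
    }
    if (header.length != HEADER_VERSION) {
        return -1;  /* the version lives in .length, historically */
    }
    return 0;
}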

Stefan
