[ 
https://issues.apache.org/jira/browse/TS-3744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636450#comment-14636450
 ] 

Pavel Vazharov edited comment on TS-3744 at 7/22/15 7:48 AM:
-------------------------------------------------------------

Thank you for the response. 

May I ask about something that I still don't understand in the suggested scheme?
Suppose we receive a huge amount of data in the transformation, i.e. we expect 
multiple IMMEDIATE events from the upstream/server VIO. I'll try to present it 
in steps (a rough sketch of the handler follows the list).
1. We set up the transformation and a buffer (with two independent readers, one 
for the client and one for the cache) into which we write the input data from 
the transformation.
2. We receive the first IMMEDIATE event from the upstream.
2.1. We copy the incoming data into our intermediate buffer, marking it as 
consumed for the upstream reader. Finally, we do a TSContCall to the upstream 
VIO's continuation with event VCONN_WRITE_READY to inform it that we have 
consumed the data and are waiting for more.
2.2. We set up the downstream/client VIO, copy the data from the intermediate 
buffer into the downstream buffer, mark the data as consumed for the 
client-side intermediate reader, and finally reenable the downstream VIO so 
that we receive a WRITE_READY event from it later.
2.3. We start the cache write (TSCacheWrite). Usually we receive the 
CACHE_OPEN_WRITE event inside that call, because it is reentrant. Here we set 
up the cache write VIO and copy the data from the intermediate buffer into the 
cache write buffer. Then we mark the data as consumed for the cache-side 
intermediate reader. The last thing here is to reenable the cache write VIO so 
that we receive a WRITE_READY event from it later.
3. Let's say we receive a WRITE_READY event in the cache write continuation 
before we receive a new IMMEDIATE event from the upstream, i.e. the cache is 
ready for new data but we don't have any from the server yet. As far as I have 
tested, this WRITE_READY event usually arrives inside the TSCacheWrite call, 
when we reenable the cache write VIO. We can't do anything here, so we just 
return from the event handler.
4. We receive a new IMMEDIATE event from the upstream/server VIO.
4.1. We copy the new data into the intermediate buffer, marking it as consumed 
for the upstream reader. Then we do a TSContCall to the upstream VIO's 
continuation with event VCONN_WRITE_READY.
HERE STARTS THE PART THAT I DON'T UNDERSTAND. I'll skip the writing to the 
client because it's not important (IMO) and works as expected.
We know that the cache VIO is ready for new data. We copy the data into its 
buffer using the cache-side intermediate reader. However, as far as I 
understand, we need to reenable the cache VIO after that, in order to tell it 
to consume the data and send us VCONN_WRITE_READY later. But the problem is 
that we are inside a different continuation and we can't reenable the cache 
VIO from there.
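
To make steps 4 and 4.1 concrete, here is a rough, simplified sketch of what 
the handling of a new IMMEDIATE event looks like on my side. The StreamData 
structure and all names are purely illustrative, and error handling, 
WRITE_COMPLETE handling and locking are omitted:

    #include <ts/ts.h>

    /* Illustrative bookkeeping structure; the field names are made up
     * for this example. */
    typedef struct {
      TSIOBuffer       inter_buf;     /* intermediate buffer (step 1) */
      TSIOBufferReader client_reader; /* intermediate reader, client side */
      TSIOBufferReader cache_reader;  /* intermediate reader, cache side */
      TSVIO            client_vio;    /* downstream/client VIO (step 2.2) */
      TSVIO            cache_vio;     /* cache write VIO, set up in the
                                       * cache continuation (step 2.3) */
    } StreamData;

    /* Called from the transformation handler on TS_EVENT_IMMEDIATE. */
    static void
    handle_upstream_immediate(TSCont transform_contp)
    {
      StreamData *data = (StreamData *)TSContDataGet(transform_contp);
      TSVIO input_vio  = TSVConnWriteVIOGet(transform_contp);
      TSIOBufferReader upstream_reader = TSVIOReaderGet(input_vio);

      /* 4.1. Copy the new data into the intermediate buffer and mark it
       * as consumed for the upstream reader. */
      int64_t avail = TSIOBufferReaderAvail(upstream_reader);
      TSIOBufferCopy(data->inter_buf, upstream_reader, avail, 0);
      TSIOBufferReaderConsume(upstream_reader, avail);
      TSVIONDoneSet(input_vio, TSVIONDoneGet(input_vio) + avail);

      /* Tell the upstream that we consumed the data and wait for more. */
      TSContCall(TSVIOContGet(input_vio), TS_EVENT_VCONN_WRITE_READY,
                 input_vio);

      /* ... copying to the client VIO and reenabling it is skipped ... */

      /* The cache VIO is already ready for new data, so copy from the
       * cache-side intermediate reader into its buffer. */
      int64_t cache_avail = TSIOBufferReaderAvail(data->cache_reader);
      TSIOBufferCopy(TSVIOBufferGet(data->cache_vio), data->cache_reader,
                     cache_avail, 0);
      TSIOBufferReaderConsume(data->cache_reader, cache_avail);

      /* This is the call that crashes: we are inside the transformation
       * continuation, not the cache write continuation that owns
       * cache_vio. */
      TSVIOReenable(data->cache_vio);
    }

In other words, everything up to the final TSVIOReenable() works; the reenable 
itself is the operation I don't know how to issue safely from this 
continuation.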

I know that we can just copy the data into the intermediate buffer and stream 
it to the client simultaneously, and later, when all of the data from the 
upstream has been received, do the cache write with the whole amount; this 
works as expected. I'm just not sure how well that works for the case with a 
huge amount of data from the server, because, as far as I understand, it keeps 
the whole body alive in memory, and that could be a problem.
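
For completeness, a sketch of the cache write continuation for that "write 
everything at the end" variant (again, the names reuse the illustrative 
StreamData above and error handling is skipped):

    /* Handler for the cache write continuation; its data is the same
     * illustrative StreamData. TSCacheWrite(cache_contp, key) is issued
     * once the upstream has delivered all of its data. */
    static int
    deferred_cache_write_handler(TSCont cache_contp, TSEvent event,
                                 void *edata)
    {
      StreamData *data = (StreamData *)TSContDataGet(cache_contp);

      switch (event) {
      case TS_EVENT_CACHE_OPEN_WRITE: {
        /* The cache-side intermediate reader still holds the whole body,
         * so we can hand it to the cache VIO in one go. */
        TSVConn cache_vconn = (TSVConn)edata;
        int64_t total       = TSIOBufferReaderAvail(data->cache_reader);
        data->cache_vio     = TSVConnWrite(cache_vconn, cache_contp,
                                           data->cache_reader, total);
        break;
      }
      case TS_EVENT_VCONN_WRITE_READY:
        /* All data is already in the buffer; keep the write going.
         * Reenabling here is fine, because we are inside the cache
         * VIO's own continuation. */
        TSVIOReenable(data->cache_vio);
        break;
      case TS_EVENT_VCONN_WRITE_COMPLETE:
        TSVConnClose(TSVIOVConnGet(data->cache_vio));
        break;
      default:
        break;
      }
      return 0;
    }

The price of this variant is that the full body has to stay buffered in memory 
until the upstream finishes.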

Thanks again,
Pavel.



> Crash (Seg Fault) when reenabling a VIO from a continuator which is different 
> from the VIO's continuator.
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: TS-3744
>                 URL: https://issues.apache.org/jira/browse/TS-3744
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Plugins, TS API
>    Affects Versions: 5.3.0
>            Reporter: Pavel Vazharov
>             Fix For: 6.1.0
>
>
> Hi,
> I'm trying to create an ATS plugin which uses the API for cache writes 
> (TSCacheWrite, TSVConnWrite). For the write part, from a transformation, I'm 
> trying to stream the data to both the client and the cache at the same time. 
> The problem described below can, IMO, be summarized as: a crash when 
> reenabling a VIO from a continuation which is different from the VIO's 
> continuation.
> Here is the backtrace of the crash. 
> traffic_server: Segmentation fault (Address not mapped to object [0x28])
> traffic_server - STACK TRACE: 
> /usr/local/bin/traffic_server(_Z19crash_logger_invokeiP9siginfo_tPv+0x8e)[0x4ad13e]
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x2b9092c2a340]
> /usr/local/bin/traffic_server(_ZN7CacheVC8reenableEP3VIO+0x28)[0x6db868]
> /home/freak82/ats/src/plugins/ccontent/ccontent.so(+0x29e5)[0x2b9096bce9e5]
> /home/freak82/ats/src/plugins/ccontent/ccontent.so(+0x3094)[0x2b9096bcf094]
> /usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x120)[0x767ea0]
> /usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x81b)[0x768aab]
> /usr/local/bin/traffic_server(main+0xee6)[0x495436]
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x2b909387dec5]
> /usr/local/bin/traffic_server[0x49ba6f]
> It's as if the VIO mutex->thread_holding or the VIO object itself is in some 
> inappropriate or invalid state. The VIO has the same memory address as the 
> originally created one and its continuation is not explicitly destroyed 
> (TSContDestroy). The associated buffer and reader are also alive.
> I'm not sure if the thing that I'm trying to do (writing "in parallel") is 
> possible with the current API, by design. Is it possible/allowed by design to 
> copy bytes into one VIO's buffer and reenable that VIO from another 
> continuation, i.e. not the VIO's own continuation?
> If it is possible, am I doing something wrong, or is this a bug?
> Basically, I'm trying to do it in the following way. The explanation skips 
> the error handling.
> 1. On transformation start, on the first EVENT_IMMEDIATE from the upstream, 
> the code initializes the client stream (TSIOBuffer, TSIOBufferReader and 
> TSVIO, as in the null-transform plugin) and then starts the cache write 
> (TSCacheWrite) with a created and digested cache key (TSCacheKey).
> 2. On EVENT_CACHE_OPEN_WRITE, the code initializes the cache stream 
> (TSIOBuffer, TSIOBufferReader and TSVIO) in the same way as the client 
> stream, but using the TSCont and TSVConn passed with the event data. So far, 
> this works as expected.
> 3. Both continuation callbacks, for the transformation and for the cache 
> write, handle the WRITE_READY and WRITE_COMPLETE events. The transformation 
> callback also handles EVENT_IMMEDIATE to know when there is more data from 
> the upstream.
> I was thinking to mark each stream as ready when the corresponding callback 
> receives WRITE_READY, and when both streams are ready, to copy the available 
> data from the upstream to them, then reenable both streams and the upstream. 
> Then, when new data is available from the upstream, to copy it again once 
> both streams become ready, and so on.
> Usually the first writes/copies and reenables are made from inside 
> TSCacheWrite, because it's reentrant and generates WRITE_READY for the cache 
> continuation. These operations succeed. The problem is that the plugin causes 
> a crash in ATS when it tries to reenable the cache VIO from inside the 
> transform continuation.
> I tried passing all the data from the upstream to the client first, copying 
> it (TSIOBufferCopy) in parallel to a temporary buffer, then initiating the 
> cache write at the end of the transformation and writing the data from that 
> buffer to the cache VIO (similarly to the metalink plugin). This also works 
> as expected.
> Thanks,
> Pavel.


