[ https://issues.apache.org/jira/browse/TS-4735?focusedWorklogId=27676&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-27676 ]

ASF GitHub Bot logged work on TS-4735:
--------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Sep/16 01:24
            Start Date: 01/Sep/16 01:24
    Worklog Time Spent: 10m 
      Work Description: Github user kshri23 commented on the issue:

    https://github.com/apache/trafficserver/pull/872
  
    James, 
    I believe this patch doesn't address just a symptom; it addresses a
fundamental flaw in the startup code: a race condition. As I mentioned in the
bug description, this issue is not caused by the time required for the initial
message exchange, but by TS-4646, where repeated and unnecessary messages are
sent at a frequency that exactly matches mgmt_read_timeout. Agreed, by fixing
TS-4646 we will not hit this. But the design of waiting for up to 10,000
messages before yielding is flawed; we cannot do that, because traffic_cop
expects a few things from traffic_server.
    
    The only reason I decided to address the issue this way is the precedent
set by 'timeout' in the same class: initially, timeout is set to '0', and once
startup is complete, it is set back to the configured value.
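    For illustration only, a minimal sketch of that pattern (hypothetical
class and member names, not the actual ProcessManager code): the poll timeout
starts at 0 while traffic_server is still starting up, and is restored to the
configured value once initialization completes.
{noformat}
// Illustrative sketch only; names are hypothetical, not Traffic Server code.
#include <atomic>

class MgmtPoller {
public:
  explicit MgmtPoller(int configured_timeout_secs)
    : configured_timeout(configured_timeout_secs), timeout(0) {}

  // The poll loop sleeps for currentTimeout() seconds between iterations;
  // during startup that is 0, so the loop yields back immediately.
  int currentTimeout() const { return timeout; }

  // Called once the main thread has received its configuration and finished
  // initialization: restore the configured timeout.
  void startupComplete() { timeout = configured_timeout; }

private:
  int configured_timeout;
  std::atomic<int> timeout;   // starts at 0 during startup
};
{noformat}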
    
    I am confused by your reference to TS-4646. As I mentioned there, TS-4646
should happen all the time, and I think it does; that is easy to verify by
enabling debug logs. It's just that TS-4646 does not result in TS-4735 in all
cases. It is a race condition.
    
    However, on our VMs it happened all the time. We have been using this patch
to solve the issue, and it has been working for the past few months. Please let
me know if you have any concerns.


Issue Time Tracking
-------------------

    Worklog Id:     (was: 27676)
    Time Spent: 50m  (was: 40m)

> Possible deadlock on traffic_server startup
> -------------------------------------------
>
>                 Key: TS-4735
>                 URL: https://issues.apache.org/jira/browse/TS-4735
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 6.2.0
>            Reporter: Shrihari
>            Assignee: Shrihari
>             Fix For: 7.0.0
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> As part of startup, traffic_server creates two threads (to begin with):
> 1. The main thread (1) blocks until it is signaled by another thread.
> 2. Thread 2 polls for messages from traffic_manager.
> traffic_server is waiting for a message from traffic_manager that contains
> all the configuration required for it to proceed with initialization. Hence,
> it is critical that the main thread (1) wait until it gets the configuration.
> Thread 2, which polls for messages from traffic_manager, works like this:
> {noformat}
> for (;;) {
>   if (pmgmt->require_lm) {        <--- Always true (when using traffic_cop)
>     pmgmt->pollLMConnection();    <--- | for (count = 0; count < 10000; count++)
>                                        |   num = mgmt_read_timeout(...)  <--- Blocking call; returns 0 if
>                                        |                                      nothing was received for 1 second
>                                        |   if !num: break                <--- Break out of the loop and
>                                        |                                      return from the function
>                                        |   else: read(fd), add to event queue, continue the loop,
>                                        |         back to fetching another message
>   }
>   pmgmt->processEventQueue();     <--- Process the messages received in pollLMConnection()
>   pmgmt->processSignalQueue();
>   mgmt_sleep_sec(pmgmt->timeout);
> }
> {noformat}
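> For context, a minimal sketch of the main thread's side of this handshake
> (hypothetical names, not the actual Traffic Server code): the main thread (1)
> blocks until thread 2's processEventQueue() signals that the configuration
> message has been processed.
> {noformat}
> #include <condition_variable>
> #include <mutex>
>
> static std::mutex mtx;
> static std::condition_variable cv;
> static bool config_received = false;   // set once the config message is processed
>
> // Main thread (1): blocks here until signaled.
> void wait_for_configuration() {
>   std::unique_lock<std::mutex> lock(mtx);
>   cv.wait(lock, [] { return config_received; });
> }
>
> // Called from thread 2 after processEventQueue() handles the configuration
> // message. If pollLMConnection() never returns, this is never reached and
> // the main thread stays blocked -- the deadlock described below.
> void signal_configuration_received() {
>   {
>     std::lock_guard<std::mutex> lock(mtx);
>     config_received = true;
>   }
>   cv.notify_one();
> }
> {noformat}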
> RCA:
> There are two problems here:
> 1. If we look at the code above, pollLMConnection() might not return for a
> very long time if it keeps receiving messages. As a result, we may never call
> processEventQueue(), which processes the received messages. And unless we
> process the messages, we cannot signal the main thread (1), which is still
> blocked, to continue. Hence traffic_server may not complete initialization
> for a very long time.
> 2. The second problem is: why is traffic_server receiving so many messages at
> boot-up? The problem lies in the configuration. In 6.2.x, we replaced
> 'proxy.process.ssl.total_success_handshake_count' with
> 'proxy.process.ssl.total_success_handshake_count_in'. In order to provide
> backwards compatibility, we defined the old stat in stats.config.xml. The
> caveat is that, since this stat is defined in stats.config.xml,
> traffic_manager assumes the responsibility of updating it. According to the
> code:
> {noformat}
> if (i_am_not_owner_of(stat)) : send traffic_server a notify message.
> {noformat}
> Ideally, this code should not be triggered, because traffic_manager does own
> the stat. However, ownership in the code is determined solely by the string
> name: if the name contains 'process', the stat is considered to be owned by
> traffic_server (see the sketch below). This leads to an interesting scenario
> where traffic_manager keeps updating its own stat and sends unnecessary
> events to traffic_server. These updates happen every 1 second (thanks James
> for helping me understand this period), which is the same as our timeout in
> traffic_server. Due to 'Problem 1', this can prevent traffic_server from
> processing any messages for up to 10,000 seconds! (Just imagine the case
> where each message is received just before the 1-second timeout expires.)
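> For illustration only, a minimal C++ sketch of this kind of name-based
> ownership check (hypothetical function names, not the actual traffic_manager
> code), showing why the old SSL stat is misclassified:
> {noformat}
> #include <cstdio>
> #include <cstring>
>
> // Hypothetical: ownership inferred purely from the stat's name. Anything
> // containing "process" is treated as a traffic_server stat.
> static bool owned_by_traffic_server(const char *stat_name) {
>   return std::strstr(stat_name, "process") != nullptr;
> }
>
> int main() {
>   // The backwards-compatibility stat defined in stats.config.xml, so it is
>   // actually updated (and effectively owned) by traffic_manager.
>   const char *old_stat = "proxy.process.ssl.total_success_handshake_count";
>
>   // The name contains "process", so traffic_manager concludes it is not the
>   // owner and sends traffic_server a notify message on every update --
>   // roughly once per second.
>   if (!owned_by_traffic_server(old_stat)) {
>     std::printf("traffic_manager keeps %s to itself\n", old_stat);
>   } else {
>     std::printf("notify traffic_server about %s\n", old_stat);
>   }
>   return 0;
> }
> {noformat}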
> I saw this happening 100% of the time on a VM but 0% of the time on a
> physical box. I don't have any other results as of now, though.



