Looking at your remap.config in that repo, I see that all the mappings are
commented out. Was that intentional?
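
If it wasn't, re-enabling a map rule should let requests match again. For
example, un-commenting (or re-adding) a rule along the lines of the one from
earlier in the thread (the cachekey parameters below are taken from your
earlier message, so adjust them as needed):

map / http://127.0.0.1 @plugin=cachekey.so @pparam=--include-params=p0,p1 @pparam=--sort-params=true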

On Thursday, January 10, 2019, 8:28:53 PM PST, Hobin Yoon <[email protected]> wrote:
 
 Yes, they are also ATS nodes and they worked fine when requests were made to 
them directly.
I uploaded the config files here, if you could take a look: 
https://gitlab.com/hobinyoon/trafficserver-config
Hobin


On Thu, Jan 10, 2019 at 7:31 PM Miles Libbey <[email protected]> wrote:

Typically that error means the request doesn't match a rule in
remap.config. Is the error coming from the first hop or one of the
123.123.123.[1-4] nodes? That is, if 123.123.123.[1-4] are ATS nodes,
are they configured to accept the requests they are getting?
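
For them to accept those requests, each of those nodes needs a rule in its own
remap.config that matches the forwarded requests. A minimal sketch, assuming
origin.example.com stands in for your real origin:

map / http://origin.example.com/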

On Thu, Jan 10, 2019 at 2:51 PM Hobin Yoon <[email protected]> wrote:
>
> With parent.config
>   dest_domain=. scheme=http parent="123.123.123.1:80,123.123.123.2:80" round_robin=consistent_hash go_direct=false
>
> I'm getting
>
> Not Found on Accelerator
> ________________________________
> Description: Your request on the specified host was not found. Check the 
> location and try again.
>
> I must be missing something ...
>
> Hobin
>
> On Thu, Jan 10, 2019 at 1:18 PM Hobin Yoon <[email protected]> wrote:
>>
>> Alan, that is the only map rule we have with the varying number of cache 
>> nodes. During the down time, ATS doesn't return "HTTP/1.1 200 OK" for the 
>> requests. I didn't check what it returned.
>>
>> Miles, I'll check out parent.config!
>>
>> Hobin
>>
>> On Thu, Jan 10, 2019 at 1:13 PM Alan Carroll <[email protected]> 
>> wrote:
>>>
>>> It could be an artifact of reloading plugin configurations if you have a 
>>> lot of remap rules with plugins, although internally ATS should do the load 
>>> and then swap the configuration. During the down time, does ATS process any 
>>> traffic, or is there traffic but no caching?
>>>
>>> On Thu, Jan 10, 2019 at 10:53 AM Miles Libbey <[email protected]> wrote:
>>>>
>>>> We don't experience downtime when using traffic_ctl config reload (we
>>>> use that ~daily).
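>>>>
>>>> For reference, the reload itself is a single command, and traffic_ctl
>>>> config status should report whether the running configuration is current:
>>>>
>>>> traffic_ctl config reload
>>>> traffic_ctl config status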
>>>>
>>>> We don't use the balancer plugin. Instead, we use parent.config
>>>> (https://docs.trafficserver.apache.org/en/8.0.x/admin-guide/files/parent.config.en.html)
>>>> to achieve the same consistent hash. Your config would translate to
>>>>
>>>> remap.config
>>>> map / http://127.0.0.1 @plugin=cachekey.so @pparam=--include-params=p0,p1 @pparam=--sort-params=true
>>>>
>>>> parent.config
>>>> dest_domain=. scheme=http parent="123.123.123.1:80,123.123.123.2:80,123.123.123.3:80,123.123.123.4:80" round_robin=consistent_hash go_direct=false
>>>> dest_domain=. scheme=https parent="123.123.123.1:443,123.123.123.2:443,123.123.123.3:443,123.123.123.4:443" round_robin=consistent_hash go_direct=false
>>>>
>>>> miles
>>>>
>>>> On Wed, Jan 9, 2019 at 10:53 PM Hobin Yoon <[email protected]> wrote:
>>>> >
>>>> > Hi,
>>>> >
>>>> > We are noticing there is quite a bit of delay when we reload the config 
>>>> > with traffic_ctl config reload. The delay is up to about 30 seconds, 
>>>> > during which period we don't get any caching. We are using a consistent
>>>> > hashing plugin. The number of nodes changes dynamically between 5 and 30.
>>>> >
>>>> > Here is an example balancer (consistent hash) configuration in 
>>>> > remap.config.
>>>> >
>>>> > map / http://127.0.0.1 @plugin=cachekey.so @pparam=--include-params=p0,p1 @pparam=--sort-params=true @plugin=balancer.so @pparam=--policy=hash,key @pparam=123.123.123.1 @pparam=123.123.123.2 @pparam=123.123.123.3 @pparam=123.123.123.4 ...
>>>> >
>>>> > Is this downtime normal? How do you guys avoid the service downtime 
>>>> > while reconfiguring the cache nodes in the cluster?
>>>> >
>>>> > Hobin
>>>> >
>>>
>>> --
>>> Beware the fisherman who's casting out his line in to a dried up riverbed.
>>> Oh don't try to tell him 'cause he won't believe. Throw some bread to the 
>>> ducks instead.
>>> It's easier that way. - Genesis : Duke : VI 25-28

  
