Thanks Miles, that's helpful

I think the issue here is that what I want isn't a parent-child cache 
hierarchy but peer caches. I'm looking into the header_rewrite plugin to see 
if I can do something there to break the loop.
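
Something like this, maybe -- tag requests before they're forwarded to the 
peer, and refuse requests that come back already tagged. (The header name and 
config filename are made up, and I haven't tested any of this yet.)

remap.config:
map https://www.proxy.example.com https://www.example.com/ @plugin=header_rewrite.so @pparam=break_loop.conf

break_loop.conf:
# refuse requests that already passed through the peer
cond %{READ_REQUEST_HDR_HOOK} [AND]
cond %{HEADER:X-Peer-Seen} ="1"
set-status 508

# tag requests before forwarding them upstream
cond %{SEND_REQUEST_HDR_HOOK}
set-header X-Peer-Seen "1"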

Josh Gitlin
Principal DevOps Engineer
[email protected]<mailto:[email protected]>

PINNACLE 21
www.pinnacle21.com

On May 7, 2020, at 3:31 PM, Miles Libbey 
<[email protected]<mailto:[email protected]>> wrote:

For our cache hierarchy we do:
child:
map inbound inbound
parent.config: dest_domain=inbound scheme=http parent="..." go_direct=false

parent:
map inbound origin

We do it this way because:
- We can keep https throughout the hierarchy -- the children and
parents can use the same certificates. We sometimes go to external
sources for origin -- for instance, a cloud storage provider -- and we
wouldn't be able to get a cert for those domains.
- We can have as many or as few layers of hierarchy as we want. Want 4
layers? Make the first 3 children and the last a parent.
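
For example, a 4-layer setup under this scheme might look like the following
(hostnames are made up):

layers 1-3 (children), on each box:
remap.config:   map https://inbound.example.com https://inbound.example.com
parent.config:  dest_domain=inbound.example.com scheme=https parent="next-layer.example.com:443" go_direct=false

layer 4 (parent):
remap.config:   map https://inbound.example.com https://origin.example.com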

But, we have distinct hardware for children and parents -- we've not
tried to have a machine act both as a child and a parent at the same
time. I suppose you'd turn off loop detection, list the other machine
in the parent.config's parent section, and "self" in the secondary
ring?
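
If I'm remembering the parent.config syntax right, that might look something
like this on cache A (hostnames hypothetical, and again, we haven't tried
running a box as both child and parent):

parent.config:
dest_domain=inbound.example.com scheme=https parent="cache-b.example.com:443" secondary_parent="cache-a.example.com:443" round_robin=consistent_hash go_direct=false

with the mirror image (B primary, A secondary) on cache B.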
miles

On Thu, May 7, 2020 at 12:05 PM Josh Gitlin 
<[email protected]<mailto:[email protected]>> wrote:

The more I dig into this, the more I realize I have gone horribly wrong 
somewhere, as I seem to have just created an infinite parent proxy loop. So I 
may need to RTFM again to fix this broken design! :)

Josh Gitlin
Principal DevOps Engineer
[email protected]<mailto:[email protected]>

PINNACLE 21
www.pinnacle21.com

On May 7, 2020, at 1:51 PM, Josh Gitlin <[email protected]> wrote:

Hello,

Apologies if this was covered in the docs or a previous message; I couldn't 
find an answer in my search.

I am having an issue with remapping and parent caching. I have two Apache 
Traffic Server instances for HA, and each one has the other configured as its 
parent cache. The goal is a shared cache, because the two instances sit 
behind a load balancer with leastconn distribution.

I am seeing an issue where cache misses on server B get forwarded to server A 
with the remapped URL, and server A refuses to serve them because it does not 
recognize the URL in its remap config (error: ERR_INVALID_URL). I know I can 
resolve this by simply adding the original URL to the remap config, but that 
felt like the wrong fix.

Contents of remap.config now:

map http://www.proxy.example.com http://www.example.com/
map https://www.proxy.example.com https://www.example.com/


Proposed fix to my config:

map http://www.proxy.example.com http://www.example.com/
map https://www.proxy.example.com https://www.example.com/
map http://www.example.com http://www.example.com/
map https://www.example.com https://www.example.com/

Is this the "right" way to fix this issue? The duplication makes me feel like 
there must be a better way...

Josh Gitlin
Principal DevOps Engineer
[email protected]

PINNACLE 21
www.pinnacle21.com


