On 1 March 2011 00:09, Gordon Mohr <[email protected]> wrote:
> The quite-possibly-nutty idea has occurred to me of auto-generating a VCL
> that maps each of about 18 million artifacts (incoming URLs) to 1, 2, or 3 of
> what are effectively 621 backend locations. (The mapping is essentially
> arbitrary.)
>
> Essentially, it would be replacing a squid url_rewrite_program.
>
> Am I likely to hit any hard VCL implementation limits (in
> depth-of-conditional-nesting, overall size, VCL compilation overhead, etc.)
> if my VCL is ~100-200MB in size?
>
> Am I overlooking some other more simple way to have varnish consult an
> arbitrary mapping (something similar to a squid url_rewrite_program)?
>
> Thanks for any warnings/ideas.
With that many entries, I expect you'll find that lookups will be quite slow, as there are no index structures in VCL and it compiles down to simple procedural C code. I think you'd be better off integrating with an external database library for the lookup. This blog post shows how to search for values in an XML file: http://www.enrise.com/2011/02/mobile-device-detection-with-wurfl-and-varnish/ but I expect you'll see better performance using sqlite or bdb.

Laurence

_______________________________________________
varnish-misc mailing list
[email protected]
http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
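The external-database lookup Laurence suggests could be sketched as follows. This is an illustrative Python sketch of the SQLite side only (an indexed path-to-backend table), not the Varnish integration itself; in practice the lookup would live in a VMOD or an external rewrite helper. The table name, column names, and sample data are all assumptions, not from the thread.

```python
import sqlite3

# Illustrative in-memory database; a real deployment would use an on-disk
# file populated once from the 18-million-entry mapping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artifact_map (path TEXT PRIMARY KEY, backend TEXT)")
conn.executemany(
    "INSERT INTO artifact_map VALUES (?, ?)",
    [("/artifact/0001", "backend_017"),
     ("/artifact/0002", "backend_233"),
     ("/artifact/0003", "backend_598")],
)

def lookup_backend(path):
    """Return the backend location for an incoming URL path, or None."""
    row = conn.execute(
        "SELECT backend FROM artifact_map WHERE path = ?", (path,)
    ).fetchone()
    return row[0] if row else None

print(lookup_backend("/artifact/0002"))  # backend_233
print(lookup_backend("/unknown"))        # None
```

The PRIMARY KEY gives SQLite a B-tree index, so each lookup is O(log n) rather than the linear scan a generated if/else chain in VCL would compile to.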
