I think I owe Liam at least a bottle of Scotch: I found some old code I had copied that was producing XHTML with the HTML DTD as the REST response. I replaced that with plain HTML as the result type, and that seems to have resolved my responsiveness problems.
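As a minimal illustration of why that DOCTYPE mattered, here is a hypothetical check (my naming, not the actual code) for whether a response body carries a DOCTYPE at all. An XHTML DOCTYPE that points at an external DTD can make downstream parsers fetch that DTD over the network on every request, which is exactly the kind of hidden slowness involved here:

```python
# Hypothetical helper, not the actual service code: detect whether a
# REST response body starts with a DOCTYPE declaration.
def has_doctype(body: str) -> bool:
    return body.lstrip().upper().startswith("<!DOCTYPE")

xhtml = ('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" '
         '"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html/>')
plain_html = "<html><body>ok</body></html>"

print(has_doctype(xhtml), has_doctype(plain_html))  # → True False
```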
I was also able to optimize my index creation and lookup significantly:

  *   Use node IDs for all lookup keys (was using URI/database paths)
  *   Eliminate one lookup by adding the referencing map to my where-used table (I had been recording just the individual topicrefs, from which I can get the map, but that requires a second action to get the topicref's containing document element)
  *   Short-circuit long processes when I know I'm done looking for things (I had a map used 700 times by 750 other maps, but I know how many top-level maps there are, so when its list of containing top-level maps includes all of them, stop looking)

So my server is starting to work as I need it to…

Cheers,
E.
_____________________________________________
Eliot Kimber
Sr Staff Content Engineer
O: 512 554 9368  M: 512 554 9368
servicenow.com<https://www.servicenow.com>

From: Liam R. E. Quin <l...@fromoldbooks.org>
Date: Sunday, February 6, 2022 at 12:08 AM
To: Eliot Kimber <eliot.kim...@servicenow.com>, basex-talk@mailman.uni-konstanz.de <basex-talk@mailman.uni-konstanz.de>
Subject: Re: [basex-talk] Managing/Debugging Server Load and Performance

On Sun, 2022-02-06 at 03:34 +0000, Eliot Kimber wrote:
>   *   Using the JRE provided with Oxygen, allocated with 4GB (we are
> also using this server to run Oxygen via scripting and it needs 8GB
> to handle our insanely huge DITA maps)

Make sure you have e.g. 64 gigabytes or more of swap configured; free -h will tell you this.

>   *   Set parallel to 4 (to match the number of cores, but just
> guessing that this is a useful setting based on the docs)

Check /proc/cpuinfo (e.g., less /proc/cpuinfo) and you'll probably find it can run 8 threads.

> I'm seeing some apparent occasional slowness on pages that should not
> be slow (don't reflect long-running queries or huge data volumes)

Make sure there are no XML catalogs or DTDs to be fetched externally - or, if there are catalogs, e.g. used with fn:transform(), that those catalog files do NOT start with a doctype that causes a network fetch of a DTD...

> but I'm not really sure how to diagnose it or even verify that I've
> succeeded in giving BaseX all the resources it needs.

Maybe in an ssh/terminal window, keep "top" running while you fetch a page, and see if the system gets really busy.

Note also the CentOS system is probably using a hard drive, not an SSD, so file access may be slower - make sure you have indexes!

Hope this helps at least a little,

--
Liam Quin, https://www.delightfulcomputing.com/
Available for XML/Document/Information Architecture/XSLT/
XSL/XQuery/Web/Text Processing/A11Y training, work & consulting.
Barefoot Web-slave, antique illustrations: http://www.fromoldbooks.org
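As an aside, the short-circuit optimization described in the first message above can be sketched like this (a minimal Python sketch with made-up names, not the actual BaseX/XQuery code): once the set of top-level maps found so far covers every known top-level map, there is nothing left to discover, so we stop scanning the remaining use sites.

```python
# Hypothetical sketch of the short-circuit: stop the where-used scan as
# soon as every possible top-level ancestor map has already been seen.
def top_level_users(uses, all_top_level):
    """uses: iterable of top-level map IDs, one per place the map is used."""
    all_top = set(all_top_level)
    found = set()
    for top in uses:            # potentially hundreds of use sites
        found.add(top)
        if found == all_top:    # short-circuit: every top-level map seen
            break
    return found

# e.g. 700 use sites but only 3 top-level maps: the loop can stop after 3.
print(top_level_users([1, 2, 3] + [1] * 697, {1, 2, 3}))  # → {1, 2, 3}
```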