Folks, TGIF and FYI -- some more detailed logging and feedback from office
staff have opened this performance-problem can of worms further.

Last Monday my main suspect was SQL Server, but it may be innocent and
actually a victim itself. Dozens of SQL timeout exceptions are logged, but
they're all over the place, even on trivial single-row selects that should
complete in a blink. Office staff report unpredictable "slow-downs" in
different apps. Web app users are suffering random pauses as well, as we
can see them clicking buttons multiple times in frustration (thereby
causing other problems).
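
If it helps narrow things down, a rough timing wrapper around our data
calls would at least tell us whether the wait is in opening the connection,
in running the query, or somewhere else entirely. A minimal sketch along
these lines -- the connection string, query and Console logging are just
placeholders for whatever we already use, and -2 is the SqlException
number ADO.NET reports for a client-side timeout:

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    static class QueryTimer
    {
        // Placeholder helper: runs a query and logs how long the connection
        // open and the full round trip took, flagging client timeouts separately.
        public static void TimedQuery(string connectionString, string sql)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    var openMs = sw.ElapsedMilliseconds;   // time spent just getting a connection

                    using (var cmd = new SqlCommand(sql, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { }          // consume rows
                    }
                    Console.WriteLine($"open={openMs}ms total={sw.ElapsedMilliseconds}ms  {sql}");
                }
            }
            catch (SqlException ex) when (ex.Number == -2) // -2 = client-side command timeout
            {
                Console.WriteLine($"TIMEOUT after {sw.ElapsedMilliseconds}ms  {sql}");
                throw;
            }
        }
    }

If the "open" numbers are the ones blowing out, SQL Server probably is the
victim and the delay is in the connection/network path rather than the
queries themselves.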

So this problem is system-wide in an in-house computer centre. A meeting
with the hardware/network support guy revealed many links in the chain.
There are a dozen workstations, several servers, VMs, multiple networks and
routers, two high-speed external internet connections, NAS boxes, terminal
services, Sophos and anti-malware running, etc. There are so many places to
investigate that we're all a bit befuddled at the moment and are looking
for angles of attack. I'm only involved in the office apps, which are
probably innocent victims, but I'm in the loop to help if I can.

This reminds me of the 80s on mainframes again, but finding performance
problems back then was reasonably straightforward because there weren't so
many parts of the clockwork to examine. These days it's like trying to
debug an atomic clock.


*Greg K*

-- 
ozdotnet mailing list 
To manage your subscription, access archives: https://codify.mailman3.com/ 