Re: sample robots.txt to reduce WWW load

2024-04-08 Thread Eric Wong
Eric Wong wrote:
> I've unleashed the bots again and let them run rampant on the
> https://80x24.org/lore/ HTML pages.  Will need to add malloc
> tracing on my own to generate reproducible results to prove it's
> worth adding to glibc malloc...

Unfortunately, mwrap w/ tracing is expensive enough to affect
memory use:  slower request/response processing due to slower
malloc means larger queues build up.

Going to have to figure out lower-overhead tracing mechanisms if
I actually want to prove size classes work to reduce
fragmentation.
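
For cheaper tracing, per-size-class counters bumped with relaxed
atomics in an LD_PRELOAD shim may already be enough: no per-call
logging, no backtraces.  A rough sketch (not mwrap itself; bucket
granularity and allocator bootstrap details are hand-waved):

  /* trace_malloc.c: count allocations per power-of-two size bucket.
   * build: cc -shared -fPIC -o trace_malloc.so trace_malloc.c -ldl
   * use:   LD_PRELOAD=./trace_malloc.so some-command
   */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void *(*real_malloc)(size_t);
  static _Atomic unsigned long buckets[48];

  __attribute__((destructor))
  static void dump(void)
  {
      for (int i = 0; i < 48; i++)
          if (buckets[i])
              fprintf(stderr, "<= 2^%d: %lu\n", i, buckets[i]);
  }

  void *malloc(size_t size)
  {
      int b = 0;

      /* dlsym may allocate via calloc; that's fine here, since
       * only malloc is interposed */
      if (!real_malloc)
          real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
      while (b < 47 && ((size_t)1 << b) < size)
          b++;
      atomic_fetch_add_explicit(&buckets[b], 1, memory_order_relaxed);
      return real_malloc(size);
  }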



Re: sample robots.txt to reduce WWW load

2024-04-03 Thread Eric Wong
Konstantin Ryabitsev wrote:
> On Mon, Apr 01, 2024 at 01:21:45PM +, Eric Wong wrote:
> > Performance is still slow, and crawler traffic patterns tend to
> > do bad things with caches at all levels, so I've regretfully had
> > to experiment with robots.txt to mitigate performance problems.
> 
> This has been the source of grief for us, because aggressive bots don't appear
> to be paying any attention to robots.txt, and they are fudging their
> user-agent string to pretend to be a regular browser. I am dealing with one
> that is hammering us from China Mobile IP ranges and is currently trying to
> download every possible snapshot of torvalds/linux, while pretending to be
> various versions of Chrome.

Ouch, that's from cgit doing `git archive` on every single commit?
Yeah, that's a PITA and not something varnish can help with :/

I suppose you're already using some nginx knobs to throttle
or limit requests from their IP ranges?
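
e.g. the limit_req machinery.  Below is an untested sketch with a
placeholder zone name, rate, backend address and CIDR range:

  # http {} context: track request rate per client address
  limit_req_zone $binary_remote_addr zone=crawlers:10m rate=1r/s;

  server {
      # snapshot tarballs are the expensive part of cgit
      location ~ /snapshot/ {
          limit_req zone=crawlers burst=5 nodelay;
          limit_req_status 429;
          # or drop the worst ranges outright:
          # deny 192.0.2.0/24;
          proxy_pass http://127.0.0.1:8080;
      }
  }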

It's been years since I've used nginx myself, but AFAIK nginx
buffering is either full (buffer the entire response before
sending) or off entirely.  IOW, there's no lazy buffering that
sends whatever it can right away and only falls back to buffering
when the client is the bottleneck.

I recommend "proxy_buffering off" in nginx for
public-inbox-{httpd,netd} since the lazy buffering done by our
Perl logic is ideal for git-{archive,http-backend} trickling to
slow clients.  This ensures the git memory hogs finish as
quickly as possible while we trickle the output to slow (or
throttled) clients with minimal memory overhead.
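
Concretely, something like this in the nginx server block (a
sketch; the location and listener address are placeholders for
wherever public-inbox-httpd or -netd is listening):

  location / {
      proxy_pass http://127.0.0.1:8080;  # public-inbox-httpd/-netd
      proxy_buffering off;    # let the Perl side buffer lazily
      proxy_http_version 1.1;
  }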

When I run cgit nowadays, it's _always_ being run by
public-inbox-{httpd,netd} to get this lazy buffering behavior.
Previously, I used another poorly-marketed (epoll|kqueue)
multi-threaded Ruby HTTP server to get the same lazy buffering
behavior (I still rely on that server, rather than nginx, for
HTTPS since I don't yet have a Perl reverse proxy).

All that said, PublicInbox::WwwCoderepo (JS-free cgit
replacement + inbox integration UI) only generates archive links
for tags and not every single commit.

> So, while I welcome having a robots.txt recommendation, it kinda assumes that
> robots will actually play nice and won't try to suck down as much as possible
> as quickly as possible for training some LLM-du-jour.

robots.txt actually made a significant difference before I
started playing around with jemalloc-inspired size classes for
malloc in glibc[1] and mwrap-perl[2].

I've unleashed the bots again and let them run rampant on the
https://80x24.org/lore/ HTML pages.  Will need to add malloc
tracing on my own to generate reproducible results to prove it's
worth adding to glibc malloc...
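
Roughly, the size-class idea looks like this (a sketch of generic
jemalloc-style rounding, not the actual patches in [1] or [2]):
requests get rounded up to one of four sizes per power-of-two
range, so freed chunks are far more likely to match later requests
and internal waste stays bounded (roughly 25% worst case beyond
the smallest class):

  #include <stddef.h>

  /* round a request up to one of 4 sizes per power-of-two range */
  static size_t size_class(size_t n)
  {
      size_t p = 16, step;

      if (n <= 16)
          return 16;
      while (p < n)       /* smallest power of two >= n */
          p *= 2;
      step = p / 8;       /* e.g. 160, 192, 224, 256 when p == 256 */
      return (n + step - 1) / step * step;
  }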

[1] https://public-inbox.org/libc-alpha/20240401191925.M515362@dcvr/
[2] https://80x24.org/mwrap-perl/20240403214222.3258695-...@80x24.org/



Re: sample robots.txt to reduce WWW load

2024-04-03 Thread Konstantin Ryabitsev
On Mon, Apr 01, 2024 at 01:21:45PM +, Eric Wong wrote:
> Performance is still slow, and crawler traffic patterns tend to
> do bad things with caches at all levels, so I've regretfully had
> to experiment with robots.txt to mitigate performance problems.

This has been the source of grief for us, because aggressive bots don't appear
to be paying any attention to robots.txt, and they are fudging their
user-agent string to pretend to be a regular browser. I am dealing with one
that is hammering us from China Mobile IP ranges and is currently trying to
download every possible snapshot of torvalds/linux, while pretending to be
various versions of Chrome.

So, while I welcome having a robots.txt recommendation, it kinda assumes that
robots will actually play nice and won't try to suck down as much as possible
as quickly as possible for training some LLM-du-jour.

/end rant

-K