An additional reason to do it with Racket's HTTP libraries is so that
one bot someday going crazy doesn't make a devops/sysadmin see that it's
Racket, and think unhappy thoughts about Racket. :)
Agreed.
Best regards,
Dmitry
BTW, it is often (not always) good practice for a Web crawler/scraper to
set a `User-Agent` header specific to the particular program.
This convention dates from when the Web was much more cooperative, but a
lot of people still do it.
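For example, here is a minimal sketch of sending a custom User-Agent through get-pure-port's optional list of raw header strings (the crawler name and contact address below are made up):

```racket
#lang racket
(require net/url)

;; Hypothetical crawler identity; the convention is to name the
;; program and include a contact so an admin can reach you.
(define crawler-headers
  '("User-Agent: my-racket-scraper/0.1 (+mailto:admin@example.com)"))

;; get-pure-port accepts an optional list of header strings.
(define (fetch-page url-string)
  (port->string
   (get-pure-port (string->url url-string) crawler-headers)))
```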
Neil,
Thank you so much, I did not know that I could set the
headers so easily in get-pure-port.
It turned out that the server expects an "Accept:" field
in the request (but does not care much about its value).
So the following code works:
#lang racket
(require net/url)
(let* ((url (string->url
             "https://datacenter.iers.org/data/latestVersion/9_FINALS.ALL_IAU2000_V2013_019.txt"))
       ;; any Accept value seems to be fine
       (port (get-pure-port url '("Accept: */*"))))
  (port->string port))
I looked at this really quickly; it wasn't my first two guesses, but
here's a workaround for which I can't immediately say why it works (and
something else odd might also be going on with that site)...
The cause seems to be that the site is extremely sensitive to the HTTP
request headers (*not* the SSL certificates).
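One way to probe that sensitivity is to compare the server's status line with and without a given header. A sketch using get-impure-port, which leaves the status line and response headers in the stream:

```racket
#lang racket
(require net/url)

;; Fetch just the HTTP status line for a URL, sending the given
;; extra request headers. HTTP lines end in CRLF, hence the
;; 'return-linefeed read mode.
(define (status-line url-string headers)
  (define p (get-impure-port (string->url url-string) headers))
  (define line (read-line p 'return-linefeed))
  (close-input-port p)
  line)

;; Compare, e.g.:
;; (status-line "https://datacenter.iers.org/data/latestVersion/9_FINALS.ALL_IAU2000_V2013_019.txt" '())
;; (status-line "https://datacenter.iers.org/data/latestVersion/9_FINALS.ALL_IAU2000_V2013_019.txt" '("Accept: */*"))
```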
Hello,
My workflow broke because of some mess with SSL certificates
or something. I do not understand what is going on or how to fix it.
Could anybody please give me an idea?
Here is a link:
https://datacenter.iers.org/data/latestVersion/9_FINALS.ALL_IAU2000_V2013_019.txt
It opens fine in a browser.