On 7.02.2022 at 17:23, Mike Fischer wrote:
On 06.02.2022 at 22:48, Brian Brombacher <br...@planetunix.net> wrote:
At this point I would reconfigure httpd to use two separate ports (80, 81) for
each site, or two local IP addresses (::1, ::2; I wouldn’t personally do this,
I would go multi-port), and then use PF rules to forward port 80 on em0 as
usual, and forward port 80 on em1 to rdomain 0, port 81 (example port).
You mean: have only one instance of httpd listen on IPs in rdomain 0 for
different ports and use PF to forward packets for IPs in rdomain 1 to these
IP/port combinations in rdomain 0?
I’ll give that a try in the next few days…
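For concreteness, here is a rough sketch of what I have in mind, assuming em1
carries the rdomain 1 address, httpd runs in rdomain 0 and listens on
2001:db8::1, with port 81 as the secondary port (addresses, interface names
and the port are placeholders; whether rdr-to plus rtable is actually enough
to move the packets into rdomain 0 is exactly what I need to test):

    # /etc/httpd.conf -- one httpd instance, running in rdomain 0
    server "site-a.example.com" {
            listen on 2001:db8::1 port 80
            root "/htdocs/site-a"
    }
    server "site-b.example.com" {
            listen on 2001:db8::1 port 81
            root "/htdocs/site-b"
    }

    # /etc/pf.conf
    # em0: let port 80 through as usual
    pass in on em0 inet6 proto tcp to port 80
    # em1 (rdomain 1): redirect port 80 to the rdomain 0 listener on port 81
    pass in on em1 inet6 proto tcp to port 80 rdr-to 2001:db8::1 port 81 rtable 0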
All of this is beyond the scope of a normal setup. I would usually just do as
described by others and rely on hostname rather than IP for httpd to process
requests. If for some reason this isn’t feasible, I’d be curious why.
This is mainly for learning. In a production setup I’d agree that this seems
much too complicated. Also, HTTPS would generally be used, which allows SNI to
select the virtual host. For services other than HTTPS that might be more
difficult.
There might be actual use cases for this in home/small office settings though.
A business internet line should have a static prefix.
On Feb 6, 2022, at 4:51 PM, Brian Brombacher <br...@planetunix.net> wrote:
From your posts I know why you don’t want to use hostnames.
Not quite true. I do use DNS and for practical applications I also use HTTPS and SNI. But
DNS is secondary and sometimes adds another layer of complexity. Also SNI is not
available for services not secured by SSL/TLS to my knowledge. E.g. in my example for a
web server on port 80 the hostname comes into play only to resolve the IP. The actual
request would be "GET / HTTP/1.1" — no hostname in sight.
Actually the request is:
GET / HTTP/1.1
Host: example.com
The Host header is REQUIRED by the HTTP/1.1 specification:
https://datatracker.ietf.org/doc/html/rfc2616#section-14.23
HTTPS also sends the Host header, but SNI is still used to choose the correct
certificate.
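That Host header is also what makes plain-HTTP name-based hosting work: httpd
picks the server block whose name matches the Host header, so several sites
can share one address and port. A minimal sketch (names and paths are made up):

    server "a.example.com" {
            listen on * port 80
            root "/htdocs/a"
    }
    server "b.example.com" {
            listen on * port 80
            root "/htdocs/b"
    }

You can check which block answers with e.g.
curl -H "Host: b.example.com" http://192.0.2.1/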
I can see utility in using different IPs for different sites if you don’t
want to advertise that the sites are related by their IP.
Yes, though in truth having the same prefix would be unavoidable and would let
an outsider know that the services are related in some way. It would leave open
whether the services are using the same host though.
Not really, since that IP could point to a load balancer or reverse proxy
instead of the end server.
Like I wrote this is mainly for learning at the moment. I am somewhat amazed at
the subtle differences between IPv4 and IPv6. IPv6 is obviously not just IPv4
with more address space. My approach is to figure out how things work and what
is possible, then for practical applications decide whether a particular
solution is too complicated to maintain or to set up, or too fragile to be of
long term use.
I wouldn't spend time learning about hosting on a dynamic prefix - it's not
really what you would do in the real world. Just set static IPs and pretend
that they don't change, for the sake of learning.
Or maybe your ISP could give you a static prefix.
As for privacy my aim is to be able to leak as little information as possible
to reduce any attack surface. Naturally when hosting a service on the public
Internet the service itself is exposed. That can’t be helped. But anything not
directly related to the service should IMHO stay hidden as much as possible.
If you have a.example.com with A record 1.2.3.4 and AAAA record
2001:db8::dead:beef, and b.example.com with A record 1.2.3.4 and AAAA record
2001:db8::c0:ffee, then a potential attacker can already tell that either:
- 2001:db8::dead:beef and 2001:db8::c0:ffee are the same machine, or
- 1.2.3.4 is a reverse proxy or load balancer, possibly serving more sites
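Written out as zone-file records (same example addresses), that pattern is:

    a.example.com.  IN  A     1.2.3.4
    a.example.com.  IN  AAAA  2001:db8::dead:beef
    b.example.com.  IN  A     1.2.3.4
    b.example.com.  IN  AAAA  2001:db8::c0:ffee

The shared 1.2.3.4 is what ties the two names together.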
Or you could even use something like Cloudflare to hide your IP - then your
service will share an IP with probably hundreds of other (unrelated) services,
so the IP will not tell an attacker anything.
Thanks!
Mike
--
Łukasz Moskała