You need to install the certificate on the proxy as well as on your clients.
I don't normally cache ISOs; I only cache Windows updates. I have situations
where I need to serve the same updates repeatedly, and caching them rather
than downloading them over and over again saves on bandwidth and costs. I
wish you the best of luck, however you do need certificates to be able to
cache HTTPS. Without certificates it will not function, so you must own (or
manage) the devices as well for this to work. Windows is a different story,
as the updates come over plain HTTP, so they can be cached without
intercepting TLS. I hope that helps if you're using a transparent proxy.
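
For reference, the Squid wiki's SSL-Bump examples create the CA roughly like
this (a sketch only; the file names, key size and lifetime below are just
examples, adjust them to your setup):

# self-signed CA that Squid will use to mint per-site certificates
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 \
    -extensions v3_ca -keyout myCA.pem -out myCA.pem

# DER copy of the public certificate for importing into each client's
# trusted root store
openssl x509 -in myCA.pem -outform DER -out myCA.der

Every client you want to bump then needs myCA.der installed as a trusted
root certificate.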
Sent from my iPhone

> On Apr 12, 2024, at 09:30, PinPin Poola <pinpinpo...@hotmail.com> wrote:
> 
> 
> Hi Jonathan,
> 
> No, I didn't have a refresh_pattern for .ISO/etc, so thank you. BTW, what are 
> the "43800 100% 129600" values?
> 
> I realised that I had not actually configured "SSL Bump" in that last 
> /etc/squid/squid.conf file I posted, as the access.log showed my https 
> connections as being tunnelled. 🙁
> 
> I have tried to enable SSL Bump as best I understand how to and my squid.conf 
> now looks like:
> 
> acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
> acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
> acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
> acl localnet src 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
> acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
> acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
> acl SSL_ports port 443
> acl Safe_ports port 80          # http
> acl Safe_ports port 21          # ftp
> acl Safe_ports port 443         # https
> acl Safe_ports port 70          # gopher
> acl Safe_ports port 210         # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280         # http-mgmt
> acl Safe_ports port 488         # gss-http
> acl Safe_ports port 591         # filemaker
> acl Safe_ports port 777         # multiling http
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost manager
> http_access deny manager
> http_access allow localhost
> http_access allow localnet
> http_access deny to_localhost
> http_access deny to_linklocal
> include /etc/squid/conf.d/*.conf
> http_access deny all
> http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
> sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB
> ssl_bump peek all
> ssl_bump splice all
> coredump_dir /var/spool/squid
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> refresh_pattern .               0       20%     4320
> refresh_pattern -i \.(rar|jar|gz|tgz|tar|bz2|iso)(\?|$)     43800 100% 129600
> shutdown_lifetime 10 seconds
> maximum_object_size 35 GB
> cache_mem 256 MB
> maximum_object_size_in_memory 512 KB
> cache_replacement_policy heap LFUDA
> range_offset_limit -1
> quick_abort_min -1 KB
> cache_dir aufs /var/spool/squid 150000 16 256 min-size=1048576
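> 
> One thing I was not sure about: from what I have read, the certificate
> database used by the security_file_certgen helper has to be initialised
> once before Squid starts, with the -s path matching the sslcrtd_program
> line above. Something along these lines (adjust the user to whatever your
> Squid runs as):
> 
> /usr/lib/squid/security_file_certgen -c -s /var/lib/ssl_db -M 4MB
> chown -R proxy:proxy /var/lib/ssl_db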
> 
> I read in one blog that the cache_dir had to be listed after 
> maximum_object_size so I moved it. 
> 
> I also reduced the cache_dir min-size value from 1 GB to 1 MB for testing
> and switched to a smaller .ISO file, as I was getting bored waiting for the
> big one to download repeatedly.
> 
> So now:
> 
> 1) An HTTPS download works, but is still tunnelled, as mentioned above:
> 
> root@client1 [ /tmp ]# wget -e https_proxy=10.40.1.250:3128 --ca-certificate 
> ~/myCA.pem 
> https://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> --2024-04-12 15:42:44--  
> https://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> Connecting to 10.40.1.250:3128... connected.
> Proxy request sent, awaiting response... 200 OK
> Length: 1016070144 (969M) [application/x-iso9660-image]
> Saving to: ‘ubuntu-18.04.6-live-server-amd64.iso’
> 
> ubuntu-18.04.6-live-server-amd64.iso            
> 100%[=======================================================================================================>]
>  969.00M  20.0MB/s    in 53s
> 
> 2024-04-12 15:43:37 (18.4 MB/s) - ‘ubuntu-18.04.6-live-server-amd64.iso’ 
> saved [1016070144/1016070144]
> 
> and the access.log entry looks like this:
> 
> 1712936617.285  52629 10.40.1.2 TCP_TUNNEL/200 1017438604 CONNECT 
> releases.ubuntu.com:443 - HIER_DIRECT/185.125.190.40 -
> 
> 
> 2) A new http download works and is cached to disk now:
> 
> root@client1 [ /tmp ]# wget -e http_proxy=10.40.1.250:3128 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> --2024-04-12 15:44:15--  
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> Connecting to 10.40.1.250:3128... connected.
> Proxy request sent, awaiting response... 200 OK
> Length: 1016070144 (969M) [application/x-iso9660-image]
> Saving to: ‘ubuntu-18.04.6-live-server-amd64.iso.1’
> 
> ubuntu-18.04.6-live-server-amd64.iso.1          
> 100%[=======================================================================================================>]
>  969.00M  16.0MB/s    in 52s
> 
> 2024-04-12 15:45:07 (18.6 MB/s) - ‘ubuntu-18.04.6-live-server-amd64.iso.1’ 
> saved [1016070144/1016070144]
> 
> and the access.log entry looks like this:
> 
> 1712936707.689  52198 10.40.1.2 TCP_MISS/200 1016070508 GET 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso - 
> HIER_DIRECT/185.125.190.40 application/x-iso9660-image
> 
> 
> 3) A subsequent http download of the same file does pull it from cache:
> 
> root@client1 [ /tmp ]# wget -e http_proxy=10.40.1.250:3128 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> --2024-04-12 15:45:23--  
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> Connecting to 10.40.1.250:3128... connected.
> Proxy request sent, awaiting response... 200 OK
> Length: 1016070144 (969M) [application/x-iso9660-image]
> Saving to: ‘ubuntu-18.04.6-live-server-amd64.iso.2’
> 
> ubuntu-18.04.6-live-server-amd64.iso.2          
> 100%[=======================================================================================================>]
>  969.00M  30.4MB/s    in 36s
> 
> 2024-04-12 15:45:58 (27.0 MB/s) - ‘ubuntu-18.04.6-live-server-amd64.iso.2’ 
> saved [1016070144/1016070144]
> 
> and the access.log entry looks like this:
> 
> 1712936758.943  35825 10.40.1.2 TCP_HIT/200 1016070518 GET 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso - 
> HIER_NONE/- application/x-iso9660-image
> 
> 
> I am making progress; I just need to understand where I am going wrong with
> SSL Bump for HTTPS connections. Why is it still tunnelling? If I fix that, I
> think it will cache (and serve from cache) the HTTPS downloads too.
> #fingerscrossed
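> 
> From what I have pieced together so far (not yet tried), "peek all" followed
> by "splice all" never actually bumps anything, so the CONNECT is simply
> passed through. The examples I have found use the at_step ACL and then bump,
> roughly:
> 
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all
> 
> Does that look like the right direction?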
> 
> Any suggestions or decent web blogs/etc on how to configure it?
> 
> Have a great weekend,
> 
> Many Thanks
> Pin
> 
> From: Jonathan Lee <jonathanlee...@gmail.com>
> Sent: 12 April 2024 15:10
> To: PinPin Poola <pinpinpo...@hotmail.com>
> Cc: squid-users@lists.squid-cache.org <squid-users@lists.squid-cache.org>
> Subject: Re: [squid-users] Squid Cache 6.9 on Ubuntu 22.04.3 LTS. Not caching 
> large files to disk.
>  
> Do you have a refresh_pattern for .ISO to do this? By default the cache does
> not cache .ISO files; you have to add a custom refresh pattern for them.
> 
> Something like this 
> 
> refresh_pattern -i \.(rar|jar|gz|tgz|tar|bz2|iso)(\?|$)  43800 100% 129600   # RAR | JAR | GZ | TGZ | TAR | BZ2 | ISO
> 
> ~~~~~