[squid-users] Rate limiting outbound connections with http_access?

2023-07-13 Thread Mike Glover
Hi,

My project makes user-initiated requests to a selection of HTTPS APIs. I'm
using Squid 5.7 as a forward proxy with SSL bumping to aggressively cache
results, and it's working great for that.

One of the APIs (let's call it 'foobar.org') has a strict 1 request per second
limit. I would like to throttle outbound requests from my server using squid.*

I've written a simple external ACL program (rate_limit.py) that works as a 
throttle, and I've hooked it up like this in my config:

acl delayhosts dstdomain foobar.org

external_acl_type rate1 ttl=0 children-max=1 children-startup=1 %ACL \
./rate_limit.py
acl 1ps external rate1

acl putdelay annotate_transaction needs_delay=1
acl checkdelay all-of !CONNECT delayhosts putdelay
acl getdelay note needs_delay
acl dodelay all-of getdelay 1ps

# dodelay can and should move somewhere after the cache check
http_access allow checkdelay dodelay
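
For reference, the helper itself is tiny. Mine differs in details, but a
minimal sketch of the idea (serialized by children-max=1, so a simple sleep
is enough to space requests out; ttl=0 keeps Squid from caching the answers)
looks like this:

#!/usr/bin/env python3
# rate_limit.py -- sketch of an external ACL helper that paces 'OK' answers.
# Squid runs exactly one copy (children-max=1), so module-level state is safe.
import sys
import time

MIN_INTERVAL = 1.0   # seconds between allowed requests
last = 0.0

while True:
    line = sys.stdin.readline()   # one lookup per line; content doesn't matter
    if not line:                  # Squid closed the pipe; exit cleanly
        break
    wait = MIN_INTERVAL - (time.monotonic() - last)
    if wait > 0:
        time.sleep(wait)          # hold the ACL check until a slot is free
    last = time.monotonic()
    sys.stdout.write("OK\n")
    sys.stdout.flush()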

This is almost what I'm looking for.**  The problem is that the delay happens
before the cache check, so I'm needlessly throttling requests that I can serve
locally.

I can't find any hook post-cache-check that will accept a slow ACL.  Does such 
a thing exist in squid?

Best,

-mg

* Yes, perhaps this would be simpler with iptables.  I'm not currently using
iptables in this project, I'm not terribly familiar with it, and everything
else works happily unprivileged, so even a slightly kludgy solution in squid
would be preferable (at this stage, at least) to learning, configuring,
monitoring, and debugging another component.

** And yes, still better than iptables right now.



Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-27 Thread Mike Yates
Thanks Alex,

Let me ask this then ...

I just want squid to redirect any requests (http for instance) to a
specific external url, so for instance http://mysquidserver:80 to
http://externalserver:80 ...

Does that help?

I'm just not sure what the minimal conf file I would need to achieve this
would look like ...

On Mon, Sep 27, 2021 at 9:23 AM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 9/27/21 8:44 AM, Mike Yates wrote:
>
> > Sorry Alex but if using postman I just post to the internal URL with no
> > certificates and everything works fine. All I'm trying to do is post to
> > the squid server that will then redirect the post to the external url.
> > It's that simple ..
>
> Unfortunately, since I am not familiar with postman, I cannot convert
> the pending questions about protocols the server software uses into
> questions about your postman configuration. Hopefully, somebody else on
> the list can do that for you. We also need to make sure that what you do
> in postman matches what your servers are actually doing (as far as
> communication protocols are concerned) -- the API server may support
> several protocols, and it is possible, at least in theory, that postman
> tests use a different set of communication protocols compared to the
> actual servers you need to handle.
>
> Without answers to those pending (or similar) questions and without
> traffic samples, it is very difficult to guess what is actually going on
> in your environment. And without that knowledge, it is not clear how to
> configure Squid to do what you want.
>
> I know your environment sounds simple to you, but since you want more
> than an "It is simple, just use Squid" answer, we need protocol-level
> details to give specific advice.
>
> Alex.
>
> > On Sat, Sep 25, 2021 at 11:55 AM Alex Rousskov wrote:
> >
> > On 9/25/21 5:23 AM, Mike Yates wrote:
> > > There are no certificates to worry about, the api is expecting a
> > > token
> > > to be included in the payload of the call.   So all squid needs to
> > > do is
> > > accept the post from the internal server and pass that post to the
> > > external servers url including the payload.
> >
> > > I hope that helps.
> >
> > Not yet. Forget about payload for a second. Our problems are still
> > at a
> > much higher level: In your examples, you used https://... URLs. Do
> > internal servers make HTTPS requests (i.e. HTTP requests over SSL/TLS
> > connections)? If yes, then why are you saying that there are no
> > certificates to worry about? TLS connections normally involve
> > certificate validation!..
> >
> > Perhaps those internal servers make plain HTTP requests, and you used
> > "https://...; URLs in your examples by accident?
> >
> > BTW, if you do not know the answers to some of the questions, please
> > just say so -- there is nothing wrong with that. If you can share a
> > small packet capture of a single request/response (in libpcap
> > format),
> > that may reduce the number of questions we have to ask.
> >
> > Alex.
> >
> >
> > > On Fri, Sep 24, 2021, 18:01 Alex Rousskov wrote:
> > >
> > > On 9/24/21 5:26 PM, Mike Yates wrote:
> > > > Ok so let's say the new server outside the dmz has a
> > > > different name. I
> > > > need a squid server configuration that will just forward the
> > > > api calls
> > > > to an external address.  So my internal servers will still
> > > > point to Fred
> > > > ( which is now a squid server and has access to the outside
> > > > world) and
> > > > will then forward the requests to the new server I have in
> > > > the cloud.
> > > > Long story short I just need a pass through squid server.
> > >
> > > Will those internal servers trust the certificate you
> > > configure Squid
> > > with? In your example, you used "https://...". That usually
> > > means the
> > > internal servers are going to validate the server certificate.
> > > Can you
> > > make them trust the Squid certificate? Or does the API
> > > communication
> > > have to be signed by a fred.mydomain.com
> > > certificate that you do not
> > > control?
> > >
> > > The other pending question is whether those internal servers are
> > > configured to use a proxy (see the previous email on this thread) or
> > > always talk directly to (what they think is) the API service?

Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-27 Thread Mike Yates
Hi Grant,

So my idea is to install a single squid server and redirect the internal
servers to that url instead of the original one. Squid will then redirect
the post to the correct external server, as it is installed on a server
that has external access.  I hope this is possible.

On Fri, Sep 24, 2021 at 5:59 PM Grant Taylor 
wrote:

> On 9/24/21 3:26 PM, Mike Yates wrote:
> > Ok so let's say the new server outside the dmz has a different name.
>
> Are you going to re-configure the clients to use the new / different
> name?  Or do you need to re-configure either the intermediate Squid or
> the target; Fred, also running squid, to translate from the old API
> hostname to the new / different hostname?
>
> > I need a squid server configuration that will just forward the api
> > calls to an external address.  So my internal servers will still point
> > to Fred ( which is now a squid server and has access to the outside
> > world) and will then forward the requests to the new server I have in
> > the cloud.
>
> Are there two Squid servers in play now that Fred is running Squid?
>
> Is there a proxy server, Squid or otherwise, between clients and Fred?
> Or is Fred the Squid server that you were referencing in your emails?
>
> > Long story short I just need a pass through squid server.
>
> I suspect you might need to do more than simply pass the requests
> through.  It sounds to me like you need to translate requests for
> https://fred.mydomain.com/api/event to https://dmz.mydomain.com/api/event.
>
> It seems to me like you are going to want to configure Squid on Fred to
> act as a Reverse Proxy (Accelerator).
>
> Link - Reverse Proxy Mode
>   - https://wiki.squid-cache.org/SquidFaq/ReverseProxy
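>
> As a rough sketch (untested, and assuming the origin speaks TLS on
> port 443; the cert path is made up, adjust names to your environment),
> the accelerator setup boils down to something like:
>
>   https_port 443 accel defaultsite=fred.mydomain.com cert=/etc/squid/fred.pem
>   cache_peer dmz.mydomain.com parent 443 0 no-query originserver tls name=api
>   acl api_site dstdomain fred.mydomain.com
>   cache_peer_access api allow api_site
>   http_access allow api_site
>   http_access deny all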
>
>
>
> --
> Grant. . . .
> unix || die
>


Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-27 Thread Mike Yates
Sorry Alex but if using postman I just post to the internal URL with no
certificates and everything works fine. All I'm trying to do is post to the
squid server that will then redirect the post to the external url.  It's
that simple ..

On Sat, Sep 25, 2021 at 11:55 AM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 9/25/21 5:23 AM, Mike Yates wrote:
> > There are no certificates to worry about, the api is expecting a token
> > to be included in the payload of the call.   So all squid needs to do is
> > accept the post from the internal server and pass that post to the
> > external servers url including the payload.
>
> > I hope that helps.
>
> Not yet. Forget about payload for a second. Our problems are still at a
> much higher level: In your examples, you used https://... URLs. Do
> internal servers make HTTPS requests (i.e. HTTP requests over SSL/TLS
> connections)? If yes, then why are you saying that there are no
> certificates to worry about? TLS connections normally involve
> certificate validation!..
>
> Perhaps those internal servers make plain HTTP requests, and you used
> "https://...; URLs in your examples by accident?
>
> BTW, if you do not know the answers to some of the questions, please
> just say so -- there is nothing wrong with that. If you can share a
> small packet capture of a single request/response (in libpcap format),
> that may reduce the number of questions we have to ask.
>
> Alex.
>
>
> > On Fri, Sep 24, 2021, 18:01 Alex Rousskov wrote:
> >
> > On 9/24/21 5:26 PM, Mike Yates wrote:
> > > Ok so let's say the new server outside the dmz has a different
> > > name. I
> > > need a squid server configuration that will just forward the api
> > > calls
> > > to an external address.  So my internal servers will still point
> > > to Fred
> > > ( which is now a squid server and has access to the outside world)
> > > and
> > > will then forward the requests to the new server I have in the
> > > cloud.
> > > Long story short I just need a pass through squid server.
> >
> > Will those internal servers trust the certificate you configure Squid
> > with? In your example, you used "https://...". That usually means
> > the
> > internal servers are going to validate the server certificate. Can
> > you
> > make them trust the Squid certificate? Or does the API communication
> > have to be signed by a fred.mydomain.com
> > certificate that you do not
> > control?
> >
> > The other pending question is whether those internal servers are
> > configured to use a proxy (see the previous email on this thread) or
> > always talk directly to (what they think is) the API service?
> >
> > Alex.
> >
> >
> > > On Fri, Sep 24, 2021, 17:18 Alex Rousskov wrote:
> > >
> > > On 9/24/21 5:09 PM, Mike Yates wrote:
> > > > I have a bunch of internal machines that do not have internet
> > > > access and
> > > > any one of them is sending api post requests to another
> > > > system on prem
> > > > and having no issues ….
> > > >
> > > >
> > > >
> > > > Example would be https://fred.mydomain.com/api/event
> > > >
> > > >
> > > >
> > > > Now the problem becomes the fred server is being moved to
> > > > the cloud so
> > > > the same https://fred.mydomain.com/api/event is still valid but none of my
> > > > internal servers can see fred and I don’t have access to the
> > > > backend
> > > > servers to change their api calls.
> > >
> > > AFAICT from your summary, "moved to cloud" here means that the
> > > API
> > > protocol stays the same, the API server domain name stays the
> > > same, the
> > > API URL path stays the same, but the IP address of that domain
> > > name will
> > > change. Please clarify if that conclusion is wrong.
> > >
> > > If it is correct, then it is not clear how the change of an IP
> > > address
> > > would affect those making API requests using the domain name,
> > > and what
> > > role Squid is playing here.
> > >
> > > Alex.
> > >
> >
>
>


Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-25 Thread Mike Yates
There are no certificates to worry about, the api is expecting a token to
be included in the payload of the call.   So all squid needs to do is
accept the post from the internal server and pass that post to the external
servers url including the payload.

I hope that helps.

On Fri, Sep 24, 2021, 18:01 Alex Rousskov 
wrote:

> On 9/24/21 5:26 PM, Mike Yates wrote:
> > Ok so let's say the new server outside the dmz has a different name. I
> > need a squid server configuration that will just forward the api calls
> > to an external address.  So my internal servers will still point to Fred
> > ( which is now a squid server and has access to the outside world) and
> > will then forward the requests to the new server I have in the cloud.
> > Long story short I just need a pass through squid server.
>
> Will those internal servers trust the certificate you configure Squid
> with? In your example, you used "https://...". That usually means the
> internal servers are going to validate the server certificate. Can you
> make them trust the Squid certificate? Or does the API communication
> have to be signed by a fred.mydomain.com certificate that you do not
> control?
>
> The other pending question is whether those internal servers are
> configured to use a proxy (see the previous email on this thread) or
> always talk directly to (what they think is) the API service?
>
> Alex.
>
>
> > On Fri, Sep 24, 2021, 17:18 Alex Rousskov wrote:
> >
> > On 9/24/21 5:09 PM, Mike Yates wrote:
> > > I have a bunch of internal machines that do not have internet
> > > access and
> > > any one of them is sending api post requests to another system on
> > > prem
> > > and having no issues ….
> > >
> > >
> > >
> > > Example would be https://fred.mydomain.com/api/event
> > >
> > >
> > >
> > > Now the problem becomes the fred server is being moved to the
> > > cloud so
> > > the same https://fred.mydomain.com/api/event is still valid but none of my
> > > internal servers can see fred and I don’t have access to the backend
> > > servers to change their api calls.
> >
> > AFAICT from your summary, "moved to cloud" here means that the API
> > protocol stays the same, the API server domain name stays the same,
> > the
> > API URL path stays the same, but the IP address of that domain name
> > will
> > change. Please clarify if that conclusion is wrong.
> >
> > If it is correct, then it is not clear how the change of an IP
> > address
> > would affect those making API requests using the domain name, and
> > what
> > role Squid is playing here.
> >
> > Alex.
> >
>
>


Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-24 Thread Mike Yates
Ok so let's say the new server outside the dmz has a different name. I need
a squid server configuration that will just forward the api calls to an
external address.  So my internal servers will still point to Fred ( which
is now a squid server and has access to the outside world) and will then
forward the requests to the new server I have in the cloud.  Long story
short I just need a pass through squid server.

On Fri, Sep 24, 2021, 17:18 Alex Rousskov 
wrote:

> On 9/24/21 5:09 PM, Mike Yates wrote:
> > I have a bunch of internal machines that do not have internet access and
> > any one of them is sending api post requests to another system on prem
> > and having no issues ….
> >
> >
> >
> > Example would be https://fred.mydomain.com/api/event
> >
> >
> >
> > Now the problem becomes the fred server is being moved to the cloud so
> > the same https://fred.mydomain.com/api/event is still valid but none of
> > my
> > internal servers can see fred and I don’t have access to the backend
> > servers to change their api calls.
>
> AFAICT from your summary, "moved to cloud" here means that the API
> protocol stays the same, the API server domain name stays the same, the
> API URL path stays the same, but the IP address of that domain name will
> change. Please clarify if that conclusion is wrong.
>
> If it is correct, then it is not clear how the change of an IP address
> would affect those making API requests using the domain name, and what
> role Squid is playing here.
>
> Alex.
>


[squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-24 Thread Mike Yates
I have a bunch of internal machines that do not have internet access and any 
one of them is sending api post requests to another system on prem and having 
no issues …. 

 

Example would be https://fred.mydomain.com/api/event

 

Now the problem becomes the fred server is being moved to the cloud so the same
https://fred.mydomain.com/api/event is
still valid but none of my internal servers can see fred and I don’t have access
to the backend servers to change their api calls.

 

I have looked at various ways to configure this in squid and I’m afraid I’m a 
little lost on how my conf file should look..

 

Any suggestions would be very very welcome ..

 

Thanks in advance .. 

 

Mike 



Re: [squid-users] measuring latency of squid in different scenarios

2020-10-02 Thread Mike Rumph
Hello Rafal,

I have run some performance tests with WRK for Squid running as a proxy to
a backend Apache httpd server.
This gives an example of latency measurements for Squid.
-
https://github.com/mrumph/futurewei-ecosystems/blob/master/tests/wrk/squid_proxy.txt

Maybe this will be useful for you.
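
If you just want a quick per-request number rather than a full benchmark,
curl can report timings through the proxy as well, e.g. (adjust the proxy
host and URL to your setup):

curl -x http://proxyhost:3128 -s -o /dev/null \
     -w 'total=%{time_total}s connect=%{time_connect}s ttfb=%{time_starttransfer}s\n' \
     http://www.example.com/

Run it once with -x and once without to compare proxied vs. direct fetch
times.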

Thanks,

Mike Rumph

On Thu, Oct 1, 2020 at 2:45 AM Rafał Stanilewicz  wrote:

>  Hi Gabriel,
>
> thank you very much, I confirm I downloaded successfully the document, and
> I'm going to read it carefully, although it will take me some time.
>
> Still, my second question remains: is there any way of measuring the time
> of getting some resource through squid?
>
> Best regards,
>
> Rafal Stanilewicz
>
> On Wed, 30 Sep 2020 at 14:23, Service MV  wrote:
>
>> Below I leave the link. I think that with this you could achieve your
>> goal. In this project there are more things that you might not want to use,
>> or maybe you do. To begin with, I believe it is a good start.
>>
>>
>>- High availability load balancing frontend between users and backend
>>proxy nodes.
>>- VIP (floating IP) for the load balancers.
>>- Automatic configuration script for internal routing.
>>- Proxy pool with integrated Kerberos and LDAP authentication in
>>Active Directory
>>- Domain, IP, and port filtering
>>- Active Directory group browsing permissions
>>- Navigation reports by cost centers and/or individual users
>>- Bandwidth usage control per user.
>>
>>
>> https://drive.google.com/file/d/1L3HiYs0LXaDZJOEHXVz8WrRFeJXXUBzU/view?usp=sharing
>>
>> For any question you may have, please reply with a copy to SQUID's mailing
>> list, in order to share with the community of users information that they
>> may find useful.
>>
>> Best regards,
>> Gabriel
>>
>> On Wed, Sep 30, 2020 at 05:12, Rafał Stanilewicz (
>> r...@fomar.com.pl) wrote:
>>
>>> Hi Gabriel,
>>>
>>> although I do not know Spanish, a few of my friends do. Also, the most
>>> important pieces will be code samples, which do not need translation. So if
>>> you would be so kind as to share the manual with me, I'd appreciate it very
>>> much!
>>>
>>> Rafal
>>>
>>> On Tue, 29 Sep 2020 at 23:07, Service MV  wrote:
>>>
>>>> Hi Rafal, if you wish I've a manual written in SPANISH for building a VM
>>>> with Debian 10.5 running SQUID compiled from source, with kerberos and LDAP
>>>> authentication, plus AD group authorizations.
>>>>
>>>> I haven't had time to translate it into English yet.
>>>> Let me know if it works for you and I'll share it with you.
>>>>
>>>> Best regards,
>>>> Gabriel
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Sep 28, 2020 at 10:19, Rafał Stanilewicz 
>>>> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I'm planning the deployment of web proxy in my environment. It's not
>>>>> very big, around 80 typical windows 10 workstations, active directory, 
>>>>> plus
>>>>> some DMZ servers. For now, there is very basic L7 inspection on the edge
>>>>> firewall.
>>>>>
>>>>> I plan to use two separate squid instances, one for explicit proxy
>>>>> traffic, forced by AD GPO settings, and second for traffic still being 
>>>>> sent
>>>>> directly to the Internet (as several applications we use tend to ignore 
>>>>> the
>>>>> system proxy settings). The first instance will use (hopefully) AD
>>>>> authentication, while the second will use only srcIP-based rules. I will 
>>>>> be
>>>>> grateful for any comments, what should I focus on, or some quirks - I've
>>>>> never deployed squid from scratch.
>>>>>
>>>>> But my main point of writing is:
>>>>>
>>>>> I'd like to get some numbers about squid-introduced latency of getting
>>>>> some particular web resource. Is there any benchmarking program I could
>>>>> use? I'd like to see what is the current latency of getting the resource
>>>>> without any proxying, then of getting the same resource with explicit 
>>>>> proxy
>>>>> settings, then of implicit (intercepting) proxy option, as well as for
>>>>> different options of caching.
>>>>>
>>>>> How should I start? Is there any software I can use to measure that,
>>>>> besides analysis of HAR files?
>>>>>
>>>>> So far, I used squid only in home environment, and without a need for
>>>>> granular measurement.
>>>>>
>>>>> Best regards,
>>>>>
>>>>> Rafal Stanilewicz
>>>>>
>>>>>
>>>>
>>>
>>> --
>>> Before you print, think about the environment.
>>>
>>
>
> --
> Before you print, think about the environment.


Re: [squid-users] squid-users Digest, Vol 58, Issue 31

2019-06-30 Thread Mike Golf
I'm looking for help modifying the stock squid config file. Within the
GUI I can bypass the proxy completely (HTTP + HTTPS) for certain LAN
IP's; however this will also stop them from accessing the cached HTTP
data. I don't want this; rather, I want the IP addresses in the range of
192.168.1.2 - 192.168.1.200 to be excluded from HTTPS caching but
still able to access/cache with the HTTP proxy. I don't know how
to modify the standard configuration files to allow this; PFSense will
bypass (HTTP + HTTPS) any IP I add to "Bypass Proxy for These Source
IPs".

I specified these IP's as DHCP just for a bit of context, since my
personal devices 192.168.1.200-192.168.1.254 are statically assigned
devices which I was going to deploy the CA's on. I wanted to avoid
having to deploy CA's to every single device which makes up my DHCP
range. It won't be fun having to install CA's on someone's device every
time a guest asks me for my WiFi password. Regarding SSL I made a
mistake on this: I just offhandedly generalized all HTTPS stuff as
"SSL" since I'm just used to people saying TLS/SSL when they refer to
HTTPS.

I'm running the HTTP proxy in transparent mode and I've included the
current configuration I'm using for reference. Could you walk me
through how I would go about modifying the configuration file? I'm not
too familiar with squid terminology, so could you please explain it to
me like I'm 5 (ELI5). I don't know how to structure the directives and
ACL's to allow this, since the GUI menu uses a "blanket"
configuration for whatever you input; I need help with specifying the
custom options.
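
The closest pattern I've found while reading the squid wiki is something
like the following (my untested guess; I don't know where it would go in
the pfSense-generated file):

acl nobump src 192.168.1.2-192.168.1.200
ssl_bump splice nobump
ssl_bump bump all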



# This file is automatically generated by pfSense
# Do not edit manually !

http_port 192.168.1.1:3128
http_port 127.0.0.1:3128 intercept
icp_port 0
digest_generation off
dns_v4_first off
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname localhost
cache_mgr admin@localhost
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger

logfile_rotate 1
debug_options rotate=1
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/24
forwarded_for delete
via off
httpd_suppress_version_string on
uri_whitespace strip


cache_mem 2048 MB
maximum_object_size_in_memory 20480 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
minimum_object_size 0 KB
maximum_object_size 256 MB
cache_dir aufs /var/squid/cache 36864 16 256
offline_mode off
cache_swap_low 90
cache_swap_high 95
cache allow all
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440  20%  10080
refresh_pattern ^gopher:  1440  0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
refresh_pattern . 0  20%  4320


#Remote proxies


# Setup some default acls
# ACLs all, manager, localhost, and to_localhost are predefined.
acl allsrc src all
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 3128 3129 1025-65535
acl sslports port 443 563

acl purge method PURGE
acl connect method CONNECT

# Define protocols used for redirects
acl HTTP proto HTTP
acl HTTPS proto HTTPS
http_access allow manager localhost

http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports

# Always allow localhost connections
http_access allow localhost

request_body_max_size 0 KB
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_initial_bucket_level 100
delay_access 1 allow allsrc

# Reverse Proxy settings


# Custom options before auth


# Setup allowed ACLs
# Allow local network(s) on interface(s)
http_access allow localnet
# Default block all to be sure
http_access deny allsrc


>
> Today's Topics:
>
>  1. Re: Bypassing SSL Man In the Middle Filtering For Certain LAN
>  IP's (Amos Jeffries)
>
>
> --
>
> Message: 1
> Date: Sun, 30 Jun 2019 18:36:19 +1200
> From: Amos Jeffries 
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Bypassing SSL Man In the Middle Filtering
>  For Certain LAN IP's
> Message-ID:

[squid-users] Bypassing SSL Man In the Middle Filtering For Certain LAN IP's

2019-06-29 Thread Mike Golf
Hi All,

I've setup a squid proxy server on my PFSense router. Is there any way of
bypassing HTTPS/SSL filtering for certain LAN IP's? I have IP addresses
192.168.1.0-192.168.1.200 allocated through DHCP and I want these devices
to bypass SSL interception but not the standard HTTP proxy.

Since most modern sites use HTTPS by default, HTTP caching isn't that
effective anymore; however, I want my personal devices to use the SSL proxy
so I can get the fastest possible browsing experience without having to
install certificate authorities on my guests' devices which use the DHCP
range.


Re: [squid-users] SslBump Peek and Splice using Squid-4.1-5 in Amazon1 Linux with Squid Helpers

2018-12-18 Thread Mike Quentel
Resending this message using corrected subject line...

Many thanks Enrico and Amos for the advice you each shared. I have
incorporated the suggested changes into squid.conf and almost have the
desired results in Squid.

Squid is successfully blocking access to TLS sites by an IP address,
but not the same sites using the domain names.

For example, Squid blocking the IP address (taken from nslookup) of
www.google.com successfully works (ERR_ACCESS_DENIED):
https://172.217.1.4

But, attempting to access https://www.google.com will still download
the page (200).

How can I force Squid to block the TLS FQDN versions of web sites that
are not in the white list?

This is the updated squid.conf:

---

visible_hostname squid

http_port 3129 intercept

sslcrtd_children 10

acl CONNECT method CONNECT

acl url_domains dstdomain .amazonaws.com
acl url_domains dstdomain .docker.io
acl url_domains dstdomain .docker.com
acl url_domains dstdomain .congiu.com

https_port 3130 ssl-bump intercept generate-host-certificates=on dynamic_cert_mem_cache_size=100MB cert=/etc/squid/squid.pem
acl SSL_ports port 443
http_access allow SSL_ports

acl tls_servers ssl::server_name .amazonaws.com
acl tls_servers ssl::server_name .docker.io
acl tls_servers ssl::server_name .docker.com
acl tls_servers ssl::server_name .congiu.com
acl tls_servers ssl::server_name .fedoraproject.org
acl tls_servers ssl::server_name mirror.csclub.uwaterloo.ca
acl tls_servers ssl::server_name .sumologic.com

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

always_direct allow url_domains
sslproxy_cert_error allow all

http_access deny CONNECT !SSL_Ports
http_access allow url_domains
http_access allow tls_servers
http_access deny all

cache deny all

ssl_bump peek step1 all
ssl_bump peek step2 tls_servers
ssl_bump splice step3 tls_servers
ssl_bump stare step2
ssl_bump bump step3
ssl_bump terminate step2 all

# debug_options ALL,1 80,5
debug_options ALL,1 33,4

---

Thanks, Mike Quentel

On Tue, 11 Dec 2018 at 18:08,  wrote:
>
>
>
> Today's Topics:
>
>1. SslBump Peek and Splice using Squid-4.1-5 in Amazon1  Linux
>   with Squid Helpers (Mike Quentel)
>2. Re: SslBump Peek and Splice using Squid-4.1-5 in  Amazon1
>   Linux with Squid Helpers (Enrico Heine)
>3. Re: SslBump Peek and Splice using Squid-4.1-5 in Amazon1
>   Linux with Squid Helpers (Amos Jeffries)
>
>
> ------
>
> Message: 1
> Date: Tue, 11 Dec 2018 10:41:56 -0500
> From: Mike Quentel 
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] SslBump Peek and Splice using Squid-4.1-5 in
> Amazon1 Linux with Squid Helpers
> Message-ID:
> 
> Content-Type: text/plain; charset="UTF-8"
>
> Hi, I have been unsuccessfully trying to get Squid-4.1-5 in AWS
> (Amazon 1 Linux) to allow transparent proxy of certain domains, as
> well as IPs associated with those domains, whilst rejecting everything
> else.
>
> I have been referencing documentation at
> https://wiki.squid-cache.org/Features/SslPeekAndSplice
>
> Version of Squid: 4.1-5 for Amazon 1 Linux available at
> http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/ (many thanks to
> @elico for these packages) specifically, the following:
>
> 1) http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-4.1-5.amzn1.x86_64.rpm
> 2) http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-helpers-4.1-5.amzn1.x86_64.rpm
>
> Example of tests that I am running:
>
> 1) curl -kv https://service.us2.sumologic.com (EXPECTED: successfully
> accessed; OBSERVED: successfully accessed)
> 2) curl -kv https://54.149.155.70 (EXPECTED: successfully accessed
> because it resolves to service.us2.sumologic.com; OBSERVED:
> "Certificate does not match domainname"  [No Error] (TLS code:
> SQUID_X509_V_ERR_DOMAIN_MISMATCH))
> 3) curl -kv https://www.google.com (EXPECTED: failed to access;
> OBSERVED: failed to access)
> 4) curl -kv https://172.217.13.164 (EXPECTED: failed to access;
> OBSERVED: "Certificate does not match domainname"  [No Error] (TLS
> code: SQUID_X509_V_ERR_DOMAIN_MISMATCH))
>
> Below is the latest version of the squid.conf b

Re: [squid-users] squid-users Digest, Vol 52, Issue 13

2018-12-18 Thread Mike Quentel
Many thanks Enrico and Amos for the advice you each shared. I have
incorporated the suggested changes into squid.conf and almost have the
desired results in Squid.

Squid is successfully blocking access to TLS sites by an IP address,
but not the same sites using the domain names.

For example, Squid blocking the IP address (taken from nslookup) of
www.google.com successfully works (ERR_ACCESS_DENIED):
https://172.217.1.4

But, attempting to access https://www.google.com will still download
the page (200).

How can I force Squid to block the TLS FQDN versions of web sites that
are not in the white list?

This is the updated squid.conf:

---

visible_hostname squid

http_port 3129 intercept

sslcrtd_children 10

acl CONNECT method CONNECT

acl url_domains dstdomain .amazonaws.com
acl url_domains dstdomain .docker.io
acl url_domains dstdomain .docker.com
acl url_domains dstdomain .congiu.com

https_port 3130 ssl-bump intercept generate-host-certificates=on dynamic_cert_mem_cache_size=100MB cert=/etc/squid/squid.pem
acl SSL_ports port 443
http_access allow SSL_ports

acl tls_servers ssl::server_name .amazonaws.com
acl tls_servers ssl::server_name .docker.io
acl tls_servers ssl::server_name .docker.com
acl tls_servers ssl::server_name .congiu.com
acl tls_servers ssl::server_name .fedoraproject.org
acl tls_servers ssl::server_name mirror.csclub.uwaterloo.ca
acl tls_servers ssl::server_name .sumologic.com

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

always_direct allow url_domains
sslproxy_cert_error allow all

http_access deny CONNECT !SSL_Ports
http_access allow url_domains
http_access allow tls_servers
http_access deny all

cache deny all

ssl_bump peek step1 all
ssl_bump peek step2 tls_servers
ssl_bump splice step3 tls_servers
ssl_bump stare step2
ssl_bump bump step3
ssl_bump terminate step2 all

# debug_options ALL,1 80,5
debug_options ALL,1 33,4

---

Thanks, Mike Quentel

On Tue, 11 Dec 2018 at 18:08,  wrote:
>
>
>
> Today's Topics:
>
>1. SslBump Peek and Splice using Squid-4.1-5 in Amazon1  Linux
>   with Squid Helpers (Mike Quentel)
>2. Re: SslBump Peek and Splice using Squid-4.1-5 in  Amazon1
>   Linux with Squid Helpers (Enrico Heine)
>3. Re: SslBump Peek and Splice using Squid-4.1-5 in Amazon1
>   Linux with Squid Helpers (Amos Jeffries)
>
>
> ------
>
> Message: 1
> Date: Tue, 11 Dec 2018 10:41:56 -0500
> From: Mike Quentel 
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] SslBump Peek and Splice using Squid-4.1-5 in
> Amazon1 Linux with Squid Helpers
> Message-ID:
> 
> Content-Type: text/plain; charset="UTF-8"
>
> Hi, I have been unsuccessfully trying to get Squid-4.1-5 in AWS
> (Amazon 1 Linux) to allow transparent proxy of certain domains, as
> well as IPs associated with those domains, whilst rejecting everything
> else.
>
> I have been referencing documentation at
> https://wiki.squid-cache.org/Features/SslPeekAndSplice
>
> Version of Squid: 4.1-5 for Amazon 1 Linux available at
> http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/ (many thanks to
> @elico for these packages) specifically, the following:
>
> 1) 
> http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-4.1-5.amzn1.x86_64.rpm
> 2) 
> http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-helpers-4.1-5.amzn1.x86_64.rpm
>
> Example of tests that I am running:
>
> 1) curl -kv https://service.us2.sumologic.com (EXPECTED: successfully
> accessed; OBSERVED: successfully accessed)
> 2) curl -kv https://54.149.155.70 (EXPECTED: successfully accessed
> because it resolves to service.us2.sumologic.com; OBSERVED:
> "Certificate does not match domainname"  [No Error] (TLS code:
> SQUID_X509_V_ERR_DOMAIN_MISMATCH))
> 3) curl -kv https://www.google.com (EXPECTED: failed to access;
> OBSERVED: failed to access)
> 4) curl -kv https://172.217.13.164 (EXPECTED: failed to access;
> OBSERVED: "Certificate does not match domainname"  [No Error] (TLS
> code: SQUID_X509_V_ERR_DOMAIN_MISMATCH))
>
> Below is the latest version of the squid.conf being used. Apologies
> for any obvious errors--new to Squid

[squid-users] SslBump Peek and Splice using Squid-4.1-5 in Amazon1 Linux with Squid Helpers

2018-12-11 Thread Mike Quentel
Hi, I have been unsuccessfully trying to get Squid-4.1-5 in AWS
(Amazon 1 Linux) to allow transparent proxy of certain domains, as
well as IPs associated with those domains, whilst rejecting everything
else.

I have been referencing documentation at
https://wiki.squid-cache.org/Features/SslPeekAndSplice

Version of Squid: 4.1-5 for Amazon 1 Linux available at
http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/ (many thanks to
@elico for these packages) specifically, the following:

1) 
http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-4.1-5.amzn1.x86_64.rpm
2) 
http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-helpers-4.1-5.amzn1.x86_64.rpm

Example of tests that I am running:

1) curl -kv https://service.us2.sumologic.com (EXPECTED: successfully
accessed; OBSERVED: successfully accessed)
2) curl -kv https://54.149.155.70 (EXPECTED: successfully accessed
because it resolves to service.us2.sumologic.com; OBSERVED:
"Certificate does not match domainname"  [No Error] (TLS code:
SQUID_X509_V_ERR_DOMAIN_MISMATCH))
3) curl -kv https://www.google.com (EXPECTED: failed to access;
OBSERVED: failed to access)
4) curl -kv https://172.217.13.164 (EXPECTED: failed to access;
OBSERVED: "Certificate does not match domainname"  [No Error] (TLS
code: SQUID_X509_V_ERR_DOMAIN_MISMATCH))

Below is the latest version of the squid.conf being used. Apologies
for any obvious errors--new to Squid here. I have been grappling with
this for weeks, with many iterations of squid.conf so any advice is
greatly appreciated; many thanks in advance.

---

visible_hostname squid

host_verify_strict off

# Handling HTTP requests
http_port 3128
http_port 3129 intercept

sslcrtd_children 10

acl CONNECT method CONNECT

# AWS services domain
acl allowed_http_sites dstdomain .amazonaws.com
# docker hub registry
acl allowed_http_sites dstdomain .docker.io
acl allowed_http_sites dstdomain .docker.com
acl allowed_http_sites dstdomain www.congiu.net

# Handling HTTPS requests
# https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=100MB cert=/etc/squid/squid.pem
https_port 3130 intercept ssl-bump dynamic_cert_mem_cache_size=100MB cert=/etc/squid/squid.pem
acl SSL_port port 443

# AWS services domain
acl allowed_https_sites ssl::server_name .amazonaws.com
# docker hub registry
acl allowed_https_sites ssl::server_name .docker.io
acl allowed_https_sites ssl::server_name .docker.com

# project specific
acl allowed_https_sites ssl::server_name www.congiu.net
acl allowed_https_sites ssl::server_name mirrors.fedoraproject.org
acl allowed_https_sites ssl::server_name mirror.csclub.uwaterloo.ca

# nslookup resolved IPs for collectors.sumologic.com
# workaround solution to support sumologic collector
acl allowed_https_sites ssl::server_name .sumologic.com
# THE FOLLOWING TWO LINES DO NOT SEEM TO WORK AS EXPECTED
# acl allowed_https_sites ssl::server_name --server-provided service.sumologic.com sslflags=DONT_VERIFY_PEER
# acl allowed_https_sites ssl::server_name --server-provided service.us2.sumologic.com sslflags=DONT_VERIFY_PEER

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump peek step1 all
ssl_bump peek step2 allowed_https_sites
# http://lists.squid-cache.org/pipermail/squid-users/2018-September/019150.html
ssl_bump bump
ssl_bump splice step3 allowed_https_sites
ssl_bump bump
ssl_bump terminate step2 all

http_access allow CONNECT

# http_access allow SSL_port

http_access deny CONNECT !allowed_https_sites
http_access deny CONNECT !allowed_http_sites
http_access allow allowed_https_sites
http_access allow allowed_http_sites
http_access deny all

cache deny all

debug_options "ALL,9"


Re: [squid-users] [squid-announce] Squid 4.2 is available

2018-08-16 Thread Mike Surcouf
I hung onto CentOS 6 for a while but it’s no longer secure enough.  You really 
ought to move versions.

I would prefer to see Eliezer's efforts used to make 4.2 available in the stable
repo.

Thanks

Mike

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Eliezer Croitoru
Sent: 14 August 2018 01:43
To: 'Dan Charlesworth'
Cc: 'squid-users'
Subject: Re: [squid-users] [squid-announce] Squid 4.2 is available

Well… there is a need to move forward to CentOS 7 if possible.
Squid 4.x has a couple of compiler compatibility requirements which I do not
remember, but they were mentioned in the wiki and the release notes.

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Dan Charlesworth
Sent: Tuesday, August 14, 2018 2:45 AM
To: squid-users 
Subject: Re: [squid-users] [squid-announce] Squid 4.2 is available

I'd be all over any Squid 4 RPMs for EL6, for what that's worth.

I had downloaded your source RPM for EL7 at one point and tried to build one 
for EL6. Dealing with the compiler issues was a bit beyond me though, sadly.

On Tue, 14 Aug 2018 at 05:46, Eliezer Croitoru  wrote:
I need to test it but I didn't have plans to release the 4.X branch for CentOS 6.
It takes me time to test it and I hope I will have more time for it.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
 On Behalf Of Walter H.
Sent: Saturday, August 11, 2018 12:47 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] [squid-announce] Squid 4.2 is available

On 10.08.2018 07:41, Amos Jeffries wrote:
> The Squid HTTP Proxy team is very pleased to announce the availability
> of the Squid-4.2 release!
>
>

will there be a RPM for latest CentOS 6 available?

Walter




--
Getbusi
p +61 3 6165 1555
e d...@getbusi.com
w getbusi.com



Re: [squid-users] Exchange OWA 2016 behind squid

2018-07-11 Thread Mike Surcouf
I am sure Amos won't mind me saying, but nginx is the right tool for that
scenario.
Squid is a great forward proxy and I use it for our network, but for incoming
connections nginx is more flexible and designed for the job.

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Pedro Guedes
Sent: 11 July 2018 12:41
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Exchange OWA 2016 behind squid

Hi

I have been reading some material on this and
trying to reverse proxy with squid on a different ssl port,
like 2020, and then connect to port 443 on the exchange.

All the examples keep the configs on the 443 port, the same
on squid and on exchange.

It looks like it is not possible to have squid listening on a different
port than 443 and then connecting to port 443 on
exchange.

Is this true?
By the architecture, is it not possible to make exchange owa
work on a different port than 443?






[squid-users] Squid 4.1 for CentOS rpms

2018-07-03 Thread Mike Surcouf
Hi Eliezer

I have been using your repos on CentOS for many years; thank you for your hard
work.
Are you planning a stable repo for v4 now that it's out?

Many Thanks

Mike




Re: [squid-users] When will Squid 3.5.26 be available on Debian?

2017-06-28 Thread Mike Surcouf
Just to say I have been using Eliezer's centos repo for a few years, as the
centos/rhel repos are always slow to react to new versions.
I think Eliezer's repos are well respected out there.

Regards

Mike

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Enrico Heine
Sent: 28 June 2017 15:44
To: Eliezer Croitoru
Cc: squid-us...@squid-cache.org
Subject: Re: [squid-users] When will Squid 3.5.26 be available on Debian?

Dear Eliezer,

thank you for the offer, but unfortunately there is a trust issue if it's not
from the official repo, and I always build from source, grabbing the debian
source and build rules. This way the possibility of overlooking a needed
adjustment for debian is very low, and it is very handy to do so. I have many
squid servers running in a large environment.

best regards,
Enrico

Am 2017-06-28 16:20, schrieb Eliezer Croitoru:
> Hey Enrico,
> 
> I didn't get any response from users about the debian package I am
> releasing, probably because it's not officially fully tested.
> You can try to use the repo:
> http://ngtech.co.il/repo/debian/jessie/
> 
> or download manually the deb package:
> - http://ngtech.co.il/repo/debian/jessie/amd64/squid_3.5.26_amd64.deb
> - http://ngtech.co.il/repo/debian/jessie/i386/squid_3.5.26_i386.deb
> 
> If you are up for testing it.
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
> On Behalf Of Enrico Heine
> Sent: Wednesday, June 28, 2017 14:02
> To: squid-us...@squid-cache.org
> Subject: [squid-users] When will Squid 3.5.26 be available on Debian?
> 
> Hello together,
> 
> does anybody know when Squid 3.5.26 will enter Debian testing or unstable?
> 
> Best regards,
> Flashdown


Re: [squid-users] CentOS6 and squid34 package ...

2017-05-25 Thread Mike
Walter, what I've found is that when compiling squid 3.5.x and higher, the
compile options change. Also remember that many of the options that were
available with 3.1.x are deprecated and likely will not work with 3.4.x
and higher.
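
For example, 3.1.x's '--enable-basic-auth-helpers=...' and
'--enable-ntlm-auth-helpers=...' became '--enable-auth-basic=...' and
'--enable-auth-ntlm=...' in later releases; you can see exactly that
difference in the two "squid -v" outputs quoted below.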


The other issue is that squid is only supposed to be handling HTTP and
HTTPS traffic, not FTP. Trying to use it as an FTP proxy will need a
different configuration than the standard HTTP/Secure proxy.



Mike


On 5/25/2017 14:07 PM, Walter H. wrote:

On 25.05.2017 12:50, Amos Jeffries wrote:

On 25/05/17 20:19, Walter H. wrote:

Hello

what is the essential difference between the default squid package 
and this squid34 package,


Run "squid -v" to find out if there are any build options different. 
Usually its just two alternative versions from the vendor.



Squid Cache: Version 3.4.14
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--enable-internal-dns' 
'--disable-strict-error-checking' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' 
'--enable-auth-basic=LDAP,MSNT,NCSA,PAM,SMB,POP3,RADIUS,SASL,getpwnam,NIS,MSNT-multi-domain' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=file_userip,LDAP_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' 
'--enable-http-violations' '--with-aio' '--with-default-user=squid' 
'--with-filedescriptors=16384' '--with-dl' '--with-openssl' 
'--with-pthreads' '--disable-arch-native' 
'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 'CXXFLAGS=-O2 -g 
-pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'


and

Squid Cache: Version 3.1.23
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--enable-internal-dns' 
'--disable-strict-error-checking' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' 
'--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth' 
'--enable-ntlm-auth-helpers=smb_lm,no_check,fakeauth' 
'--enable-digest-auth-helpers=password,ldap,eDirectory' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' 
'--enable-http-violations' '--with-aio' '--with-default-user=squid' 
'--with-filedescriptors=16384' '--with-dl' '--with-openssl' 
'--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 'LDFLAGS=-pie' 
'CXXFLAGS=-O2 -g -pipe

Re: [squid-users] kerb auth groups KV note acl config

2017-03-16 Thread Mike Surcouf
Ok, I see Markus' code moved into the main package for 4.
Quick question: his code in there seems almost identical to 3.5 (at least on
the github mirror).
Currently the cache is on Centos v6 and I use Eliezer's excellent rpms.

Do you think this will work with squid and squid-helpers 3.5.23?

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: 16 March 2017 10:54
To: Mike Surcouf; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] kerb auth groups KV note acl config

On 16/03/2017 11:12 p.m., Mike Surcouf wrote:
> @Amos
> 
> Thanks for this
> 
> so to recap if I currently have
> 
> auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
> auth_param negotiate children 20
> auth_param negotiate keep_alive on
> 
> external_acl_type InternetAccessBanking %LOGIN 
> /usr/lib64/squid/ext_kerberos_ldap_group_acl -u 
> ldaps://aesdc02.surcouf.local:636 -b cn=SSSUsers,dc=surcouf,dc=local  -g 
> InternetAccessBanking
> 
> I could replace it by
> 
> auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
> auth_param negotiate children 20
> auth_param negotiate keep_alive
> 
> acl InternetAccessBanking note group S-1-5-21-123456789-123456789-123456789-1234
> 
> 
> Note where S-1-5-21-123456789-123456789-123456789-1234 is the SID for the 
> group InternetAccessBanking
> 
> 

Yes.

Amos



Re: [squid-users] kerb auth groups KV note acl config

2017-03-16 Thread Mike Surcouf
@Amos

Thanks for this

so to recap if I currently have

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
auth_param negotiate children 20
auth_param negotiate keep_alive on

external_acl_type InternetAccessBanking %LOGIN 
/usr/lib64/squid/ext_kerberos_ldap_group_acl -u 
ldaps://aesdc02.surcouf.local:636 -b cn=SSSUsers,dc=surcouf,dc=local  -g 
InternetAccessBanking

I could replace it by

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
auth_param negotiate children 20
auth_param negotiate keep_alive

acl InternetAccessBanking note group S-1-5-21-123456789-123456789-123456789-1234


Note where S-1-5-21-123456789-123456789-123456789-1234 is the SID for the group 
InternetAccessBanking


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: 16 March 2017 09:24
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] kerb auth groups KV note acl config

On 15/03/2017 10:18 p.m., Mike Surcouf wrote:
> This is bulleted as a new feature for v4.
> Yet there is no way to test this without a quick reply letting me know the 
> basic usage.
> Anyone got a snippet on how this is set up?
> 

[ For TL;DR skip to the end of this mail. All this first block is
just describing how it works. ]


This should be doable with Squid-3.4+ or at least 3.5. It requires only
the note ACL in squid plus a helper that sends group= response annotations.

It is marked as v4 because that is where the first helper with such
support is bundled. You can run that helper with older Squid, for
example by downloading Markus' latest release and building your own helper.


An auth helper which supports it does not need anything configured by
you. It will "just work" (or not if it lacks annotation support). That
part is just a matter of finding out / ensuring your auth helper
provides the group kv-pairs. The usual command-line tests can probably
show that.

The auth helper by Markus should be producing a set of group=X
annotations automatically, one for each group the user is a member of.
Where the X is what AD calls a "SID" value representing a unique ID for
each group.


After those are received by Squid the note ACL type can be used in
squid.conf to match any of them quickly without an external helper
lookup for the group details. That enables reliable group ACLs anywhere
in squid.conf where they were previously at the mercy of external helper
result timeouts.
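
As a minimal sketch (the SID below is a made-up placeholder):

  acl BankingUsers note group S-1-5-21-111111111-222222222-333333333-1234
  http_access allow BankingUsers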


In the absence of that input from the auth helper, an external_acl_type
helper or *any* helper really :-) can also send the same annotations to
Squid - with the same note ACL config later.

In its current form this is obviously most useful if you know the SID
that group names map to and can configure the note ACL appropriately. I
am hopeful that other helpers may be able to produce named groups or
such. But the values are likely to be specific to whatever the auth
system can provide.


For group lookup and comparison by name (the 'old' way) you can still
use an external helper. As I understand it AD requires two lookups; one
to find the user's SID memberships and one to find the group name->SID
mapping for the group(s) being checked - then compare. The first is not
needed if the SID (%note{group}) is passed to the helper instead of
username (%LOGIN).
 This part does require v4, and has not been much tested to see where
the %note format code works for external_acl_type helpers (and where
not). YMMV.
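
A rough, untested sketch of that variant (the helper path is a hypothetical
placeholder, and whether a given helper accepts a SID as its input is
exactly the untested part):

  external_acl_type grp_by_sid %note{group} /path/to/your/sid_group_helper
  acl BankingBySid external grp_by_sid S-1-5-21-111111111-222222222-333333333-1234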

IIRC Markus was waiting on support for %note{group} format code on
external_acl_type config lines. But that happened a long while back now.



> -Original Message-
> From: Mike Surcouf
> 
> Outputting the groups as KV pairs on auth in AD environments seems like a 
> great performance enhancement and will allow me to ditch my ldap lookups.
> Is there any docs on how to set this up?
> Even looking at the source I can't seem to work it out.
> I would like to test and potentially contribute to the DOCS although I am 
> only a git user and bazaar would be new to me so I may just post my 
> experience in this thread.
> 
> From what I can see I need to set up a note acl but I am unsure of the key 
> names etc.

Correct. The key name is "group" ;-)


> 
> A short example would be great.
> 

As far as I am aware it should look like this:

  acl blah note group SID-12345-762576257263
  request_max_size 1 MB blah

Maybe also the -m flag on the ACL definition if recent changes merged
the group notes into a list.

HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] kerb auth groups KV note acl config

2017-03-15 Thread Mike Surcouf
This is bulleted as a new feature for v4.
Yet there is no way to test this without a quick reply letting me know the 
basic usage.
Anyone got a snippet on how this is set up?

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Mike Surcouf
Sent: 07 March 2017 15:21
To: 'squid-users@lists.squid-cache.org'
Subject: [squid-users] kerb auth groups KV note acl config

Outputting the groups as KV pairs on auth in AD environments seems like a 
great performance enhancement and will allow me to ditch my ldap lookups.
Is there any docs on how to set this up?
Even looking at the source I can't seem to work it out.
I would like to test and potentially contribute to the DOCS although I am only 
a git user and bazaar would be new to me so I may just post my experience in 
this thread.

From what I can see I need to set up a note acl but I am unsure of the key names 
etc.

A short example would be great.

Thanks

Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] microsoft edge and proxy auth not working

2017-03-10 Thread Mike Surcouf
Are the browsing machines domain joined?
If so, and you are just talking about joining the squid proxies to the domains 
for auth delegation to the DCs, this is greatly simplified with realmd now.
Could probably be scripted quite easily.
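For example, roughly (the domain and join account below are placeholders):

  realm discover ad.example.local
  realm join --user=joinaccount ad.example.local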

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Rietzler, Markus (RZF, Aufg 324 / )
Sent: 10 March 2017 09:53
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] microsoft edge and proxy auth not working

Kerberos has been on the wishlist for a very long time. 
One reason was: the setup is a bit complicated and we do have 150 proxies in 
our subsidiaries, so we would need 150 different Kerberos setups with 150 trusts 
and tickets and certificates etc. We are working on having it replaced someday...

thanxs

> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Mike Surcouf
> Sent: Thursday, 9 March 2017 18:58
> To: 'Rafael Akchurin'; Amos Jeffries; 
> squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] microsoft edge and proxy auth not working
> 
> Hi Rafael
> 
> Is there any reason you can't use Kerberos.
> Note you will need to create a keytab but the setup is not that hard 
> and is in the docs.
> I use it very successfully on a Windows AD network.
> 
> auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
> auth_param negotiate children 20
> auth_param negotiate keep_alive on
> 
> Thanks
> 
> Mike
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
> On Behalf Of Rafael Akchurin
> Sent: 09 March 2017 17:01
> To: Amos Jeffries; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] microsoft edge and proxy auth not working
> 
> Hello Amos, Markus, all,
> 
> Just as a side note - I also suffered from this error some time before 
> with Edge and our custom NTLM relay to domain controllers (run as auth 
> helper by Squid). The strange thing it went away after installing some
> (unknown) Windows update.
> 
> I do have the "auth_param ntlm keep_alive off" in the config though.
> 
> It all makes me quite suspicious the error was/is in Edge or in my 
> curly hands.
> 
> Best regards,
> Rafael Akchurin
> Diladele B.V.
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
> On Behalf Of Amos Jeffries
> Sent: Thursday, March 9, 2017 5:12 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] microsoft edge and proxy auth not working
> 
> On 8/03/2017 11:28 p.m., Rietzler, Markus (RZF, Aufg 324 /
> ) wrote:
> > i should add that we are using squid 3.5.24.
> >
> 
> Try with "auth_param ntlm keep_alive off". Recently the browsers have 
> been needing that.
> 
> Though frankly I am surprised if Edge supports NTLM at all. It was 
> deprecated in April 2006 and MS announced removal was being actively 
> pushed in all their software since Win7.
> 
> >
> >> -Original Message-
> >> From: Rietzler, Markus
> >>
> >> we have some windows 10 clients using microsoft edge browser.
> >> access to internet is only allowed for authenticated users. we are 
> >> using samba/winbind auth
> >>
>> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
>> auth_param ntlm children 64 startup=24 idle=12
>> auth_param ntlm keep_alive on
>> acl auth_user proxy_auth REQUIRED
> >>
> >> on windows 10 clients with IE11 it is working (with ntlm automatic
> >> auth) on the same machine, with Microsoft edge I get TCP_Denied/407
> message.
> >> seems I only get one single TCP_DENIED/407 line in accesslog and an 
> >> auth dialog pops up. I have disabled basic auth via ntlm.
> >> shouldn't there be 3 lines for proxy auth? with IE11 I see those 
> >> three lines (2x TCP_DENIED/407 and 1x TCP_MISS/200), no popup at all.
> 
> Not specifically. There should be 1+ for NTLM. Success with NTLM shows
> 2+. Failure shows 1 or 3 or infinite loop (hello Safari and Firefox 
> 30-ish).
> 
> 
> >>
> >> winbind/samba itself seems to work, as I can do an user auth 
> >> against apache with winbind/samba - even over some squid proxies 
> >> with connection-auth allowed. but not for proxy-auth.
> >> is there any option in squid.conf which prevents Edge to do a 
> >> successful auth?
> 
> If other software succeeds then the only thing that might be related 
> is the keep-alive option mentioned above. Otherwise the problem is in 
> Edge itself.

Re: [squid-users] microsoft edge and proxy auth not working

2017-03-09 Thread Mike Surcouf
Ah OK, sorry.
I am curious why you have a reason to use NTLM over Kerberos? :-)

-Original Message-
From: Rafael Akchurin [mailto:rafael.akchu...@diladele.com] 
Sent: 09 March 2017 18:01
To: Mike Surcouf
Cc: Amos Jeffries; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] microsoft edge and proxy auth not working

Hello Mike,

I specifically was debugging our NTLM implementation with Edge :)

Kerberos works just fine, you are correct.

Best regards,
Rafael Akchurin

> Op 9 mrt. 2017 om 18:57 heeft Mike Surcouf <mi...@surcouf.co.uk> het volgende 
> geschreven:
> 
> Hi Rafael
> 
> Is there any reason you can't use Kerberos.
> Note you will need to create a keytab, but the setup is not that hard and is in 
> the docs.
> I use it very successfully on a Windows AD network.
> 
> auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
> auth_param negotiate children 20
> auth_param negotiate keep_alive on
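> 
> As a rough sketch, the keytab step with msktutil looks something like this 
> (hostnames and paths are placeholders; see the wiki for the full recipe):
> 
>   msktutil -c -s HTTP/proxy.example.local -k /etc/squid/HTTP.keytab \
>     --computer-name proxy-http --server dc.example.local --verbose
>   export KRB5_KTNAME=/etc/squid/HTTP.keytab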
> 
> Thanks
> 
> Mike
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
> On Behalf Of Rafael Akchurin
> Sent: 09 March 2017 17:01
> To: Amos Jeffries; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] microsoft edge and proxy auth not working
> 
> Hello Amos, Markus, all,
> 
> Just as a side note - I also suffered from this error some time before with 
> Edge and our custom NTLM relay to domain controllers (run as auth helper by 
> Squid). The strange thing it went away after installing some (unknown) 
> Windows update.
> 
> I do have the "auth_param ntlm keep_alive off" in the config though.
> 
> It all makes me quite suspicious the error was/is in Edge or in my curly 
> hands.
> 
> Best regards,
> Rafael Akchurin
> Diladele B.V.
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
> On Behalf Of Amos Jeffries
> Sent: Thursday, March 9, 2017 5:12 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] microsoft edge and proxy auth not working
> 
> On 8/03/2017 11:28 p.m., Rietzler, Markus (RZF, Aufg 324 /
> ) wrote:
>> i should add that we are using squid 3.5.24.
>> 
> 
> Try with "auth_param ntlm keep_alive off". Recently the browsers have been 
> needing that.
> 
> Though frankly I am surprised if Edge supports NTLM at all. It was deprecated 
> in April 2006 and MS announced removal was being actively pushed in all their 
> software since Win7.
> 
>> 
>>> -Original Message-
>>> From: Rietzler, Markus
>>> 
>>> we have some windows 10 clients using microsoft edge browser.
>>> access to internet is only allowed for authenticated users. we are 
>>> using samba/winbind auth
>>> 
>>> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
>>> auth_param ntlm children 64 startup=24 idle=12
>>> auth_param ntlm keep_alive on
>>> acl auth_user proxy_auth REQUIRED
>>> 
>>> on windows 10 clients with IE11 it is working (with ntlm automatic
>>> auth) on the same machine, with Microsoft edge I get TCP_Denied/407 message.
>>> seems I only get one single TCP_DENIED/407 line in accesslog and an 
>>> auth dialog pops up. I have disabled basic auth via ntlm.
>>> shouldn't there be 3 lines for proxy auth? with IE11 I see those 
>>> three lines (2x TCP_DENIED/407 and 1x TCP_MISS/200), no popup at all.
> 
> Not specifically. There should be 1+ for NTLM. Success with NTLM shows
> 2+. Failure shows 1 or 3 or infinite loop (hello Safari and Firefox 30-ish).
> 
> 
>>> 
>>> winbind/samba itself seems to work, as I can do an user auth against 
>>> apache with winbind/samba - even over some squid proxies with 
>>> connection-auth allowed. but not for proxy-auth.
>>> is there any option in squid.conf which prevents Edge to do a 
>>> successful auth?
> 
> If other software succeeds then the only thing that might be related is the 
> keep-alive option mentioned above. Otherwise the problem is in Edge itself.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] kerb auth groups KV note acl config

2017-03-09 Thread Mike Surcouf
@Markus

I would really like to give this a go.
Good to get some people using this stuff

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Mike Surcouf
Sent: 07 March 2017 15:21
To: 'squid-users@lists.squid-cache.org'
Subject: [squid-users] kerb auth groups KV note acl config

Outputting the groups as KV pairs on auth in AD environments seems like a 
great performance enhancement and will allow me to ditch my ldap lookups.
Is there any docs on how to set this up?
Even looking at the source I can't seem to work it out.
I would like to test and potentially contribute to the DOCS although I am only 
a git user and bazaar would be new to me so I may just post my experience in 
this thread.

From what I can see I need to set up a note acl but I am unsure of the key names 
etc.

A short example would be great.

Thanks

Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] kerb auth groups KV note acl config

2017-03-07 Thread Mike Surcouf
Outputting the groups as KV pairs on auth in AD environments seems like a 
great performance enhancement and will allow me to ditch my ldap lookups.
Is there any docs on how to set this up?
Even looking at the source I can't seem to work it out.
I would like to test and potentially contribute to the DOCS although I am only 
a git user and bazaar would be new to me so I may just post my experience in 
this thread.

From what I can see I need to set up a note acl but I am unsure of the key names 
etc.

A short example would be great.

Thanks

Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New to proxies

2016-04-20 Thread Mike
These are code words: they're looking to set up proxies to bypass 
filters, corporate networks, school blocks, and other setups designed to 
restrict their use (restrictions they agreed to by using those limited 
networks). Another possibility is a scammer/spammer using a virus with a 
proxy to reroute all sales-based search terms through their compromised 
links instead of legitimate searches and retailers, sending the users to 
sellers of fake items in China and the Middle East.



On 4/20/2016 9:02 AM, Antony Stone wrote:

On Wednesday 20 April 2016 at 14:34:07, cjwengler wrote:


I use the proxies for my sneaker program and I need one proxy per account
for that.

Why?


Sometimes I run up to 1000 accounts.

Do you have 1000 IP addresses?


The proxies are used for purchasing sneakers and clothing on sites such as
Nike, Adidas, Supreme, Footlocker, Eastbay, Champs, Finishline, etc.

And, er, why are proxies, indeed multiple ones, needed for this?


Antony.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 3.5 vs 4.0

2016-04-04 Thread Mike
Is there any list or page with any comparison information, say for the 2 
latest versions 3.5.16 and 4.0.8 beta? I understand many of the fixes 
coming out are being done for both, but so far I do not see any 
information that describes any benefit to using 4.0 over 3.5. Any help 
would be appreciated.


Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid with ICAP filter?

2016-03-19 Thread Mike Summers
We have a situation where we need to filter compressed HTTP traffic through
an ICAP service, logging failures (4xx) or passing the original compressed
payload to its target destination on 2xx.

Something like this:

   - Incoming compressed HTTP
   - Decompress and forward to ICAP service
   - Log and discard if ICAP service returns 4xx
   - Send original, compressed payload to destination if ICAP returns 2xx

Is that an appropriate use for Squid? If so what sort of configuration
commands would we use? We're not certain where to begin.

Thanks in advance.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ICAP filter?

2016-03-19 Thread Mike Summers
Thanks Alex.

You are correct, the message bodies are compressed (gzip). For reasons
unknown the ICAP service can't or won't deal with compressed data. Also
correct, the ICAP service is a black box for us.

Much thanks for the response, it gives us a place to start.

--Mike


On Thu, Mar 17, 2016 at 2:47 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 03/17/2016 12:25 PM, Mike Summers wrote:
> > We have a situation where we need to filter compressed HTTP traffic
> > through an ICAP service, logging failures (4xx) or passing the original
> > compressed payload to its target destination on 2xx.
> >
> > Something like this:
> >
> >   * Incoming compressed HTTP
> >   * Decompress and forward to ICAP service
> >   * Log and discard if ICAP service returns 4xx
> >   * Send original, compressed payload to destination if ICAP returns 2xx
> >
> > Is that an appropriate use for Squid? If so what sort of configuration
> > commands would we use? We're not certain where to begin.
>
> I do not know what you mean by "compressed HTTP". If compressed HTTP
> means something like "HTTP where message bodies contain zipped or
> gzipped content", then you can accomplish the above by sandwiching your
> ICAP service between two eCAP services in a single adaptation chain.
>
>   http://www.squid-cache.org/Doc/config/adaptation_service_chain/
>   http://www.squid-cache.org/Doc/config/icap_service/
>   http://www.squid-cache.org/Doc/config/ecap_service/
>
> Without going into many necessary details, the overall scheme may work
> similar to this:
>
>  0. Squid receives "compressed" message M.z.
>
>  1. eCAP decompression service gets message M.z from Squid,
> decompresses M.z body, and
> sends the decompressed message M back to Squid.
>
>  2. Your ICAP service gets message M and either blocks or allows it.
>
>  3. If message M was allowed in #2,
> eCAP compression service gets message M from Squid,
> compresses M body, and
> sends the compressed M.z back to Squid.
>
>  4. Squid forwards M.z to the next hop.
>
> The above can be done using standard eCAP/ICAP interfaces and squid.conf
> directives without reinventing the wheel, provided your ICAP service is
> compatible with Squid. Certain performance optimizations are possible
> with more work (e.g., the eCAP services may cache and reuse the
> compressed version of the message).
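>
> A minimal squid.conf sketch of such a chain (the eCAP module URIs and all
> service names here are hypothetical placeholders, not shipped defaults):
>
>   ecap_service unzipSvc reqmod_precache ecap://example.org/unzip
>   icap_service checkSvc reqmod_precache icap://127.0.0.1:1344/check
>   ecap_service rezipSvc reqmod_precache ecap://example.org/zip
>   adaptation_service_chain unpackCheckRepack unzipSvc checkSvc rezipSvc
>   adaptation_access unpackCheckRepack allow all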
>
> If you want to reinvent the wheel by writing an ICAP client, then you
> can write a single eCAP or ICAP service that talks directly to your ICAP
> service, without using Squid adaptation chains. From Squid point of
> view, there will be just one eCAP or ICAP service doing everything.
>
> Needless to say that adding decompression support to the original ICAP
> service itself would be the fastest and simplest option (but it requires
> modifying the existing ICAP service code which I am guessing you cannot
> or do not want to do).
>
>
> HTH,
>
> Alex.
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with ICAP filter?

2016-03-18 Thread Mike Summers
Hi Eliezer,

We're a couple of contractors trying to prove to a potential customer that
we can build the app they want that integrates with their ICAP service.

Our guess is they really don't want to do business, as they're not
forthcoming about the nature of the ICAP service other than "it won't accept
compressed data".

I suspect once we overcome all of the 'objections' the real issue will
surface.

--Mike

On Thu, Mar 17, 2016 at 3:09 PM, Eliezer Croitoru <elie...@ngtech.co.il>
wrote:

> Hey Mike,
>
> What do you mean by black box to us? who is us?
>
> Eliezer
>
> On 17/03/2016 21:52, Mike Summers wrote:
>
>> Thanks Alex.
>>
>> You are correct, the message bodies are compressed (gzip). For reasons
>> unknown the ICAP service can't or won't deal with compressed data. Also
>> correct, the ICAP service is a black box for us.
>>
>> Much thanks for the response, it gives us a place to start.
>>
>> --Mike
>>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Modelling behaviour of old version of squid with the latest using rules ?

2016-02-17 Thread Mike Corlett
Hi all,

I recently went to demo a website at a company site that used an old
version of squid (I think 2.x), and all the PATCH posts to our website
from a browser were turned into METHOD_OTHER, which broke the website for
us and nothing worked.
I want to be able to recreate this rule so we can build an in-house squid
server to test against things like this, and just wondered if I can map
request types to simulate this behaviour, so that we can tweak our
website to work with older versions of squid.

Obviously I could just download and install an old version of squid, but
that would mean suffering the security problems associated with old
versions, so I wondered if this one rule could be modelled. So far I've
worked out how to totally block PATCH requests (see the sketch below), but
that's not really good enough.
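
A minimal sketch of such a blocking rule (just a method ACL):

  acl PATCH method PATCH
  http_access deny PATCH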

Any help welcome !

Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Compile install Squid, configure default options.

2016-01-12 Thread Mike

When I used CentOS 7 (a variation of it), this is what I had to use:

   yum -y install perl gcc gcc-c++ autoconf automake make
   yum -y install epel-release   # has a few packages we need below
   yum -y install libxml2-devel libcap-devel avr-gcc-c++
   yum -y install libtool-ltdl-devel openssl-devel
   yum -y install ksh perl-Crypt-OpenSSL-X509

I prefer separate lines with only a few packages installed per line, since 
if there's a problem with one, it is more likely to show an error rather 
than be buried.



With 3.5.5 they made some changes, so for 3.5.5 and newer certain 
configure options that previously worked (as far back as 3.1.x) no longer 
do. This is for 64-bit; there are a few small differences for a 32-bit OS.


   ./configure '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
   '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
   '--datadir=/usr/share' '--includedir=/usr/include'
   '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
   '--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
   '--infodir=/usr/share/info' '--exec_prefix=/usr'
   '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
   '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
   '--with-logdir=$(localstatedir)/log/squid'
   '--with-pidfile=$(localstatedir)/run/squid.pid'
   '--disable-dependency-tracking' '--enable-follow-x-forwarded-for'
   '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
   '--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
   '--enable-ident-lookups' '--enable-linux-netfilter'
   '--enable-removal-policies=heap,lru' '--enable-snmp'
   '--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2'
   '--enable-esi' '--enable-ssl' '--enable-ssl-crtd' '--enable-icmp'
   '--with-aio' '--with-default-user=squid'
   '--with-filedescriptors=1024' '--with-dl' '--with-openssl'
   '--with-pthreads' '--with-included-ltdl' '--disable-arch-native'
   'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
   -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
   'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
   -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
   -fPIC' 'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'



There are small variations in CentOS that make it different from other 
linux operating systems, so when I've had issues with missing configure 
options, I installed the available version from yum, then went through 
one by one and found what I needed, mirrored it to an extent for 
building from source. I also added my ssl based options.
I have 2 different CentOS 7 based systems running squid with no problems 
using this setup.
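
A quick sketch of that mirroring step, in case it helps:

   yum -y install squid   # the distro package, temporarily
   squid -v               # prints the distro's "configure options:" line
   yum -y remove squid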


Mike





On 1/12/2016 13:34 PM, Billy.Zheng(zw963) wrote:

Or just tell me that this worked and it is fine, and I will be very happy to use it.

btw: When I first installed, ./configure passed, but make failed
because I had not installed gcc-c++. I had to install gcc-c++ and reconfigure,
and then make passed. I think it would be better if ./configure could detect
that gcc-c++ is not installed.

What C++ compiler did you have installed instead of gcc-c++ ?

I use CentOS 7.0 in a VPS.

I just followed the squid document from here: 
http://wiki.squid-cache.org/SquidFaq/CompilingSquid

with the following operations:

[root@vultr squid-3.5.12]# yum install -y perl gcc autoconf automake make sudo 
wget
[root@vultr squid-3.5.12]# yum install openssl-devel
[root@vultr squid-3.5.12]# g++
-bash: g++: command not found

and then ran my new config. Thanks for the guide.

 ./configure --build=x86_64-linux-gnu \
 --prefix=/usr \
 --exec-prefix=/usr \
 '--bindir=${prefix}/bin' \
 '--sbindir=${prefix}/sbin' \
 '--libdir=${prefix}/lib64' \
 '--libexecdir=${prefix}/lib64/squid' \
 '--includedir=${prefix}/include' \
 '--datadir=${prefix}/share/squid' \
 '--mandir=${prefix}/share/man' \
 '--infodir=${prefix}/share/info' \
 --localstatedir=/var \
 '--with-logdir=${localstatedir}/log/squid' \
 '--with-pidfile=${localstatedir}/run/squid.pid' \
 '--with-swapdir=${localstatedir}/spool/squid' \
 --sysconfdir=/etc/squid \
 --with-openssl \
 --with-default-user=squid \
 --with-filedescriptors=16384

It worked, and ended with the Makefile created.

It seems no C++ compiler was included initially; ./configure did not detect
that on this OS, so no error occurred.

When I ran make, it told me `g++: command not found'

[root@vultr squid-3.5.12]# make
Making all in compat
make[1]: Entering directory `/root/squid-3.5.12/compat'
source='assert.cc' object='assert.lo' libtool=yes \
DEPDIR=.deps depmode=none /bin/sh ../cfgaux/depcomp \
/bin/sh ../libtool  --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H   -I.. 
-I../include -I../lib -I../src -I../include   -I../libltdl-c -o assert.lo 
assert.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I

Re: [squid-users] Squid 32-bit (2.7.2) much faster than Squid 64-bit (3.5.11)

2015-12-11 Thread Mike

I believe one possible issue is here:
max_filedescriptors 3200
Squid and the OS need to work together on this: setting it in squid 
without the matching setting in the OS causes delays and slow performance. 
I would suggest commenting this out and restarting the squid service to 
see if that helps.


I've found that entry does not work well on Windows, but it should on 
Linux (see the sketch below). Also, my company moved away from Windows 
Server because of similar and other unrelated issues, so we are now 
Linux-only (except for one out of hundreds of servers).
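
On Linux, the matching OS-side piece would be roughly this (a sketch; the 
exact method varies by distro):

   # /etc/security/limits.conf
   squid  soft  nofile  3200
   squid  hard  nofile  3200
   # or, in the shell that starts squid:
   ulimit -n 3200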


Mike



On 12/10/2015 19:16 PM, Patrick Flaherty wrote:


Hello,

Just following up on my slow 3.5.11 Squid server. I loaded the 32-bit 
2.7.2 version on the same box and it's so much faster for me, 4 to 
5 times faster on the same machine. Any help 
appreciated. Amos, I think I cleaned up my 3.5.11 squid.conf properly. 
I think my 2.7.2 squid.conf needs work.


See below Startup Cache logs from both 3.5.11 and 2.7.2 and also the 
squid.conf files from 3.5.11 and 2.7.2.


Thank You,

Patrick

Squid 3.5.11 Startup Cache Log:

2015/12/10 19:50:09 kid1| Current Directory is 
/cygdrive/c/Windows/system32


2015/12/10 19:50:09 kid1| Starting Squid Cache version 3.5.11 for 
x86_64-unknown-cygwin...


2015/12/10 19:50:09 kid1| Service Name: squid

2015/12/10 19:50:09 kid1| Process ID 1968

2015/12/10 19:50:09 kid1| Process Roles: worker

2015/12/10 19:50:09 kid1| With 3200 file descriptors available

2015/12/10 19:50:09 kid1| Initializing IP Cache...

2015/12/10 19:50:09 kid1| parseEtcHosts: /etc/hosts: (2) No such file 
or directory


2015/12/10 19:50:09 kid1| DNS Socket created at [::], FD 5

2015/12/10 19:50:09 kid1| DNS Socket created at 0.0.0.0, FD 6

2015/12/10 19:50:09 kid1| Adding nameserver 172.16.50.9 from squid.conf

2015/12/10 19:50:09 kid1| Adding nameserver 172.16.50.13 from squid.conf

2015/12/10 19:50:09 kid1| Logfile: opening log 
daemon:/var/log/squid/access.log


2015/12/10 19:50:09 kid1| Logfile Daemon: opening log 
/var/log/squid/access.log


2015/12/10 19:50:09 kid1| WARNING: no_suid: setuid(0): (22) Invalid 
argument


2015/12/10 19:50:09 kid1| Store logging disabled

2015/12/10 19:50:09 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 
objects


2015/12/10 19:50:09 kid1| Target number of buckets: 1008

2015/12/10 19:50:09 kid1| Using 8192 Store buckets

2015/12/10 19:50:09 kid1| Max Mem  size: 262144 KB

2015/12/10 19:50:09 kid1| Max Swap size: 0 KB

2015/12/10 19:50:09 kid1| Using Least Load store dir selection

2015/12/10 19:50:09 kid1| Current Directory is 
/cygdrive/c/Windows/system32


2015/12/10 19:50:09 kid1| Finished loading MIME types and icons.

2015/12/10 19:50:09 kid1| HTCP Disabled.

2015/12/10 19:50:09 kid1| Squid plugin modules loaded: 0

2015/12/10 19:50:09 kid1| Adaptation support is off.

2015/12/10 19:50:09 kid1| Accepting HTTP Socket connections at 
local=[::]:3130 remote=[::] FD 10 flags=9


2015/12/10 19:50:11 kid1| storeLateRelease: released 0 objects

---

Squid 2.7.2 Startup Cache Log:

2015/12/10 19:50:38| Starting Squid Cache version 2.7.STABLE8 for 
i686-pc-winnt...


2015/12/10 19:50:38| Running as Squid-Proxy-2.7.2 Windows System 
Service on Windows Server 2008


2015/12/10 19:50:38| Service command line is:

2015/12/10 19:50:38| Process ID 2644

2015/12/10 19:50:38| With 2048 file descriptors available

2015/12/10 19:50:38| With 2048 CRT stdio descriptors available

2015/12/10 19:50:38| Windows sockets initialized

2015/12/10 19:50:38| Using select for the IO loop

2015/12/10 19:50:38| Performing DNS Tests...

2015/12/10 19:50:38| Successful DNS name lookup tests...

2015/12/10 19:50:38| DNS Socket created at 0.0.0.0, port 50961, FD 5

2015/12/10 19:50:38| Adding DHCP nameserver 172.16.50.9 from Registry

2015/12/10 19:50:38| Adding DHCP nameserver 172.16.50.13 from Registry

2015/12/10 19:50:38| Adding DHCP nameserver 4.2.2.3 from Registry

2015/12/10 19:50:38| Adding domain  from Registry

2015/12/10 19:50:38| User-Agent logging is disabled.

2015/12/10 19:50:38| Referer logging is disabled.

2015/12/10 19:50:38| logfileOpen: opening log C:/squid/var/logs/access.log

2015/12/10 19:50:38| Unlinkd pipe opened on FD 8

2015/12/10 19:50:38| Swap maxSize 102400 + 65536 KB, estimated 12918 
objects


2015/12/10 19:50:38| Target number of buckets: 645

2015/12/10 19:50:38| Using 8192 Store buckets

2015/12/10 19:50:38| Max Mem  size: 65536 KB

2015/12/10 19:50:38| Max Swap size: 102400 KB

2015/12/10 19:50:38| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec


2015/12/10 19:50:38| logfileOpen: opening log c:/squid/var/logs/store.log

2015/12/10 19:50:38| Rebuilding storage in C:/Squid/var/cache/squid 
(DIRTY)


2015/12/10 19:50:38| Using Least Load store dir selection

2015/12/10 19:50:38| Current Directory is C:\squid\sbin

2015/12/10 19:50:38| Loaded Icons.

2015/12/10 19:50:38

Re: [squid-users] centos 6 install

2015-11-27 Thread Mike
Alex, I've had issues with his RPMs as well (using CentOS 6.4, 6.5 and 
6.6 with various squid versions from 3.4.x to the latest 3.5.11), so I just 
compile, and now that I have it down it works well. Of the 7 RPMs of his 
I've tried over the past year or two, none has worked; they always have 
various errors, permission problems, and/or don't have all the compile 
options CentOS and Scientific Linux want.


Mike


On 11/26/2015 17:00 PM, Alex Samad wrote:

Hi

I am trying to upgrade from the centos squid to the squid one
  rpm -qa | grep squid
squid-3.1.23-9.el6.x86_64
rpm -Uvh squid-3.5.11-1.el6.x86_64.rpm


getting this error
error: unpacking of archive failed on file
/usr/share/squid/errors/zh-cn: cpio: rename failed - Is a directory


ls -l
drwxr-xr-x. 2 root root 4096 Sep 16 13:05 zh-cn
lrwxrwxrwx. 1 root root7 Nov 27 09:57 zh-cn;56578e40 -> zh-hans
lrwxrwxrwx. 1 root root7 Nov 27 09:58 zh-cn;56578e77 -> zh-hans

going to remove the directory and try re installing
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Pass client DNS requests

2015-11-11 Thread Mike

On 11/11/2015 8:52 AM, Matus UHLAR - fantomas wrote:

On 10.11.15 17:03, Patrick Flaherty wrote:
Again I'm fairly new to Squid but loving it. We enforce that only certain 
domains be accessible via the whitelist directive. Is there a way to pass DNS 
requests through the proxy for resolution? We are currently using Windows 
host entries.


No. Squid is an HTTP proxy; it's not a DNS proxy.
Use a DNS server or DNS proxy for that.

Squid cannot, but you can use an external DNS server, either at the same 
location or elsewhere.
You can set up another server (or two) with your own DNS (we use PowerDNS, 
or pDNS), and then add the entry in squid.conf to use that DNS server. 
We have several set up this way.


The squid.conf entry would be like this:

dns_nameservers 11.22.33.44 11.22.33.45

Then on the DNS server just create entries for rerouted or blocked 
sites. I would suggest looking at the powerdns groups and mailing list 
for more details on this.


Mike

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with SMP, CARP and a forwarding loop

2015-11-01 Thread Mike . Hodgkinson
Also noticed the typo in my backend config 
http_port 127.0.01:400${process_number}
should have been
http_port 127.0.0.1:400${process_number}

However this change did not help with getting cached results, still goes 
direct.

Mike Hodgkinson
Internal Support Engineer

Mobile  +64 21 754 339
Phone  +64 4 462 5064
Email   mike.hodgkin...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz  




From:   mike.hodgkin...@solnet.co.nz
To: Amos Jeffries <squ...@treenet.co.nz>, 
squid-users@lists.squid-cache.org
Date:   02/11/2015 10:57 a.m.
Subject:Re: [squid-users] Squid with SMP, CARP and a forwarding 
loop
Sent by:"squid-users" <squid-users-boun...@lists.squid-cache.org>



Thank you Amos, I was pretty brain drained by the time I posted so please 
excuse my pasting slip up. 

I tried adding  no-netdb-exchange and no-digest to the cache_peer lines, 
the first one eliminated the forward loop warnings but I am still 
experiencing the first request coming from the cache and then subsequent 
requests going direct. Also I did disable the tproxy port. 

I now suspect some sort of logic bug in the code, as the cache logs show 
carp.cc is not called a second time before peer_select.cc on the 
second attempt. Unfortunately my programming skills are poor and I have 
limited time to look at this issue. 

For now to work-around this behaviour I will use the never_direct 
directive, but if you would like to investigate further I have provided 
level 2 and 3 debug cache logs that you could look at. 
https://droplet-wlg.solnetsolutions.co.nz/public.php?service=files=8a3b73eff46a9cf1a91829c0b9d0016a
 


Cheers 

Mike Hodgkinson 
Internal Support Engineer 

Mobile  +64 21 754 339 
Phone  +64 4 462 5064 
Email   mike.hodgkin...@solnet.co.nz 

Solnet Solutions Limited
Level 12, Solnet House 
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140 

www.solnet.co.nz   




From:Amos Jeffries <squ...@treenet.co.nz> 
To: 
Date:30/10/2015 11:03 p.m. 
Subject:Re: [squid-users] Squid with SMP, CARP and a forwarding 
loop 
Sent by:"squid-users" <squid-users-boun...@lists.squid-cache.org> 



On 30/10/2015 1:45 p.m., Mike.Hodgkinson wrote:
> I have been attempting to setup a squid forward proxy with one frontend 
> and two backends as per configuration example 
> http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
> 
> My issue is that only the first attempt comes from the cache and then 
> additional requests are downloaded direct by the frontend instead of 
from 
> the backend caches. I suspect it is due to a detected forwarding loop 
> which shows up in the logs:
> 
> 2015/10/30 13:07:49.239 kid1| 44,3| peer_select.cc(137) peerSelect: 
> e:=XIWV/0x7f7bfee2e730*2 http://127.0.0.1:40
> 02/squid-internal-dynamic/netdb
> 2015/10/30 13:07:49.239 kid1| 20,3| store.cc(466) lock: peerSelect 
locked 
> key 64AAA11C8DEF57153B10BA2C9D2F3D60 e:=XIWV/0x7f7bfee2e730*3
> 2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(441) peerSelectFoo: 
GET 
> 127.0.0.1
> 2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(468) peerSelectFoo: 
> peerSelectFoo: direct = DIRECT_YES (forwarding loop detected)
> 2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(477) peerSelectFoo: 
> peerSelectFoo: direct = DIRECT_YES
> 2015/10/30 13:07:49.240 kid1| 44,2| peer_select.cc(258) 
> peerSelectDnsPaths: Find IP destination for: 
> http://127.0.0.1:4002/squid-internal-dynamic/netdb' via 127.0.0.1
> 
> I can force the backend caches to be used successfully with this option 
> "never_direct allow all" however I would like to resolve the underlying 
> issue.

The above is an internal Squid request from the frontend to its backends.
This one specifically is going direct due to a hack in the code.

You can avoid it by adding "no-netdb-exchange" to the cache_peer lines.
I'm not sure if that will affect the CARP selection since these requests
are one of the types feeding into the peer up/down/overloaded
monitoring. With "no-query" also in use you will be left with the client
HTTP traffic being the only source of that data which carp depends on.

Or using that "never_direct allow all" will override that code hack.

> 
> I have no iptables configured on this server and have made sure the 
> environment variable http_proxy is not set. Also I have tested this on 
> Squid 3.4.8 and 3.5.10 on Debian.

Since you have no iptables rules configured the traffic arriving in port
3129 will be completely borked.

Either, remove that port 3129 line from the frontend config and use port
3128 for testing until you are ready to setup TPROXY properly;

Or, setup the TPROXY iptables and routing rules and test the proxy
exactly as the clients would be using it

Re: [squid-users] Squid with SMP, CARP and a forwarding loop

2015-11-01 Thread Mike . Hodgkinson
Thank you Amos, I was pretty brain drained by the time I posted so please 
excuse my pasting slip up.

I tried adding  no-netdb-exchange and no-digest to the cache_peer lines, 
the first one eliminated the forward loop warnings but I am still 
experiencing the first request coming from the cache and then subsequent 
requests going direct. Also I did disable the tproxy port.

I now suspect some sort of logic bug in the code, as the cache logs show 
carp.cc is not called a second time before peer_select.cc on the 
second attempt. Unfortunately my programming skills are poor and I have 
limited time to look at this issue.

For now to work-around this behaviour I will use the never_direct 
directive, but if you would like to investigate further I have provided 
level 2 and 3 debug cache logs that you could look at.
https://droplet-wlg.solnetsolutions.co.nz/public.php?service=files=8a3b73eff46a9cf1a91829c0b9d0016a

Cheers

Mike Hodgkinson
Internal Support Engineer

Mobile  +64 21 754 339
Phone  +64 4 462 5064
Email   mike.hodgkin...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz  




From:   Amos Jeffries <squ...@treenet.co.nz>
To: 
Date:   30/10/2015 11:03 p.m.
Subject:Re: [squid-users] Squid with SMP, CARP and a forwarding 
loop
Sent by:"squid-users" <squid-users-boun...@lists.squid-cache.org>



On 30/10/2015 1:45 p.m., Mike.Hodgkinson wrote:
> I have been attempting to setup a squid forward proxy with one frontend 
> and two backends as per configuration example 
> http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
> 
> My issue is that only the first attempt comes from the cache and then 
> additional requests are downloaded direct by the frontend instead of 
from 
> the backend caches. I suspect it is due to a detected forwarding loop 
> which shows up in the logs:
> 
> 2015/10/30 13:07:49.239 kid1| 44,3| peer_select.cc(137) peerSelect: 
> e:=XIWV/0x7f7bfee2e730*2 http://127.0.0.1:40
> 02/squid-internal-dynamic/netdb
> 2015/10/30 13:07:49.239 kid1| 20,3| store.cc(466) lock: peerSelect 
locked 
> key 64AAA11C8DEF57153B10BA2C9D2F3D60 e:=XIWV/0x7f7bfee2e730*3
> 2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(441) peerSelectFoo: 
GET 
> 127.0.0.1
> 2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(468) peerSelectFoo: 
> peerSelectFoo: direct = DIRECT_YES (forwarding loop detected)
> 2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(477) peerSelectFoo: 
> peerSelectFoo: direct = DIRECT_YES
> 2015/10/30 13:07:49.240 kid1| 44,2| peer_select.cc(258) 
> peerSelectDnsPaths: Find IP destination for: 
> http://127.0.0.1:4002/squid-internal-dynamic/netdb' via 127.0.0.1
> 
> I can force the backend caches to be used successfully with this option 
> "never_direct allow all" however I would like to resolve the underlying 
> issue.

The above is an internal Squid request from the frontend to its backends.
This one specifically is going direct due to a hack in the code.

You can avoid it by adding "no-netdb-exchange" to the cache_peer lines.
I'm not sure if that will affect the CARP selection since these requests
are one of the types feeding into the peer up/down/overloaded
monitoring. With "no-query" also in use you will be left with the client
HTTP traffic being the only source of that data which carp depends on.

Or using that "never_direct allow all" will override that code hack.

> 
> I have no iptables configured on this server and have made sure the 
> environment variable http_proxy is not set. Also I have tested this on 
> Squid 3.4.8 and 3.5.10 on Debian.

Since you have no iptables rules configured the traffic arriving in port
3129 will be completely borked.

Either, remove that port 3129 line from the frontend config and use port
3128 for testing until you are ready to setup TPROXY properly;

Or, setup the TPROXY iptables and routing rules and test the proxy
exactly as the clients would be using it.
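
For reference, the usual TPROXY rule set looks roughly like this (a sketch 
with the conventional mark/table/port values; adjust to your setup):

  iptables -t mangle -N DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100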


> 
> My config is below:
> #/etc/squid/squid.conf#
> debug_options = ALL,3
> cachemgr_passwd **

NOTE: if that was your actual password you now need to change it.

> acl localnet src 10.1.0.0/16
> acl localnet src 10.2.0.0/16
> acl localnet src 192.168.0.0/23
> acl localnet src fe80::/10
> acl squid_servers src 10.1.209.0/24

 See below...


> #/etc/squid/squid-frontend.conf#
> http_port 3128
> http_port 3129 tproxy
> http_access allow localhost manager
> http_access deny manager
> http_access allow localhost
> http_access allow localnet
> http_access allow squid_servers

With squid_servers IP range being entirely within "localnet" this
"http_access allow squid_servers" line is not doing anything.
You can simplify by removing it.


> htcp_acc

[squid-users] Squid with SMP, CARP and a forwarding loop

2015-10-29 Thread Mike . Hodgkinson
I have been attempting to setup a squid forward proxy with one frontend 
and two backends as per configuration example 
http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster

My issue is that only the first attempt comes from the cache and then 
additional requests are downloaded direct by the frontend instead of from 
the backend caches. I suspect it is due to a detected forwarding loop 
which shows up in the logs:

2015/10/30 13:07:49.239 kid1| 44,3| peer_select.cc(137) peerSelect: 
e:=XIWV/0x7f7bfee2e730*2 http://127.0.0.1:40
02/squid-internal-dynamic/netdb
2015/10/30 13:07:49.239 kid1| 20,3| store.cc(466) lock: peerSelect locked 
key 64AAA11C8DEF57153B10BA2C9D2F3D60 e:=XIWV/0x7f7bfee2e730*3
2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(441) peerSelectFoo: GET 
127.0.0.1
2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(468) peerSelectFoo: 
peerSelectFoo: direct = DIRECT_YES (forwarding loop detected)
2015/10/30 13:07:49.240 kid1| 44,3| peer_select.cc(477) peerSelectFoo: 
peerSelectFoo: direct = DIRECT_YES
2015/10/30 13:07:49.240 kid1| 44,2| peer_select.cc(258) 
peerSelectDnsPaths: Find IP destination for: 
http://127.0.0.1:4002/squid-internal-dynamic/netdb' via 127.0.0.1

I can force the backend caches to be used successfully with this option 
"never_direct allow all" however I would like to resolve the underlying 
issue.

I have no iptables configured on this server and have made sure the 
environment variable http_proxy is not set. Also I have tested this on 
Squid 3.4.8 and 3.5.10 on Debian.

My config is below:
#/etc/squid/squid.conf#
debug_options = ALL,3
cachemgr_passwd eight22 all
acl localnet src 10.1.0.0/16
acl localnet src 10.2.0.0/16
acl localnet src 192.168.0.0/23
acl localnet src fe80::/10
acl squid_servers src 10.1.209.0/24
acl SSL_ports port 443  # https
acl SSL_ports port 8443 # Unifi/Non-standard https
acl SSL_ports port 5222 # Jabber
acl SSL_ports port 10000# Webmin
acl SSL_ports port 10443# Non-standard https
acl SSL_ports port 18080# PMX
acl SSL_ports port 28443# PMX
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
workers 3
if ${process_number} = 1
include /etc/squid/squid-frontend.conf
else
include /etc/squid/squid-backend.conf
endif
http_access deny all
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

#/etc/squid/squid-frontend.conf#
http_port 3128
http_port 3129 tproxy
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow localnet
http_access allow squid_servers
htcp_access allow squid_servers
htcp_access deny all
cache_peer 127.0.0.1 parent 4002 0 carp login=PASS name=backend-kid2 
no-query
cache_peer 127.0.0.1 parent 4003 0 carp login=PASS name=backend-kid3 
no-query
prefer_direct off
nonhierarchical_direct off
memory_replacement_policy heap LRU
cache_mem 2048 MB
access_log /var/log/squid3/frontend.access.log
cache_log /var/log/squid3/frontend.cache.log
visible_hostname frontend.cloud.solnet.nz

#/etc/squid/squid-backend.conf#
http_port 127.0.01:400${process_number}
http_access allow localhost
cache_mem 5 MB
cache_replacement_policy heap LFUDA
maximum_object_size 1 GB
cache_dir rock /cache/rock 20480 max-size=32768
cache_dir aufs /cache/${process_number} 20480 128 128 min-size=32769
visible_hostname backend${process_number}.cloud.solnet.nz
access_log /var/log/squid3/backend${process_number}.access.log
cache_log /var/log/squid3/backend${process_number}.cache.log

I did have visible_hostname set to backend.cloud.solnet.nz but that did 
not help either.

#/var/log/squid3/frontend.access.log#
1446163673.780491 10.1.209.33 TCP_MISS/200 756381 GET 
http://asylum-inc.net/WoT/2013-03-03_6.jpg - CARP/127.0.0.1 image/jpeg
1446163676.750   1580 10.1.209.33 TCP_MISS/200 756224 GET 
http://asylum-inc.net/WoT/2013-03-03_6.jpg - HIER_DIRECT/69.73.181.160 
image/jpeg
1446163681.498   3059 10.1.209.33 TCP_MISS/200 756224 GET 
http://asylum-inc.net/WoT/2013-03-03_6.jpg - HIER_DIRECT/69.73.181.160 
image/jpeg

Any assistance is appreciated.

Cheers

Mike Hodgkinson
Internal Support Engineer

Mobile  +64 21 754 339
Phone  +64 4 462 5064
Email   mike.hodgkin...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz  


[squid-users] Fw: new message

2015-10-27 Thread Mike Marchywka
Hey!

 

New message, please read <http://www.autler-kfz.at/thinking.php?hs8c>

 

Mike Marchywka

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Fw: new message

2015-10-27 Thread Mike Marchywka
Hey!

 

New message, please read <http://kitchendesignvirginia.com/meaning.php?5wcs>

 

Mike Marchywka

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] dns failover failing with 3.4.7

2015-07-30 Thread Mike

On 7/27/2015 17:25 PM, Amos Jeffries wrote:

On 28/07/2015 8:38 a.m., Mike wrote:

Running into an issue, using the squid.conf entry
dns_nameservers 72.x.x.x 72.x.y.y

These are different servers (under our control) for the purpose of
filtering than listed in resolv.conf (which are out of our control, used
for server IP routing by upstream host).

The problem we found this weekend is that if the primary listed dns
server is unavailable, squid fails to use the secondary listed server.
Instead it displays "unable to connect" type messages with all
websites.

Details please. How do you know the secondary is not even being tried?

What is Squid getting back from the primary when its down ?
  or just dns_timeout being hit?

Add this to squid.conf to get a cache.log trace of the DNS activity:
   debug_options 78,6


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Amos,

If it was using the secondary server listed, connections to almost all 
websites would not be failing to load if primary was down. For the test 
we temporarily took the primary DNS server offline (per the example 
above 72.x.x.x), and no websites would load unless it was in the squid 
cache, but any elements that required additional data failed to load 
causing formatting issues with the displayed website. If we swap the 
order, listing the secondary before the first IP (per the example above), as

dns_nameservers 72.x.y.y 72.x.x.x
it works the same way: take the .y.y server down and it refuses to use 
the secondary listed IP .x.x for DNS, instead displaying the "website 
could not be displayed" error in the browsers. We even tried another test 
(per the example above), dns_nameservers 72.x.x.x 8.8.8.8, then let it run 
for an hour or so. Then we took down the primary, which means it should 
use the secondary google IP of 8.8.8.8, but it doesn't; it goes right back 
to the "website could not be displayed" error in the browsers.
I was wondering if this might be a bug. This is happening on multiple 
servers, one has squid 3.4.7, another has 3.4.6, and the problem occurs on both.


Thanks
Mike




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] dns failover failing with 3.4.7

2015-07-30 Thread Mike

On 7/30/2015 16:30 PM, Amos Jeffries wrote:

On 31/07/2015 3:48 a.m., Mike wrote:

On 7/27/2015 17:25 PM, Amos Jeffries wrote:

On 28/07/2015 8:38 a.m., Mike wrote:

Running into an issue, using the squid.conf entry
dns_nameservers 72.x.x.x 72.x.y.y

These are different servers (under our control) for the purpose of
filtering than listed in resolv.conf (which are out of our control, used
for server IP routing by upstream host).

The problem we found this weekend is that if the primary listed dns
server is unavailable, squid fails to use the secondary listed server.
Instead it displays "unable to connect" type messages with all
websites.

Details please. How do you know the secondary is not even being tried?

What is Squid getting back from the primary when its down ?
   or just dns_timeout being hit?

Add this to squid.conf to get a cache.log trace of the DNS activity:
debug_options 78,6


Amos,

If it was using the secondary server listed, connections to almost all
websites would not be failing to load if primary was down. For the test
we temporarily took the primary DNS server offline (per the example
above 72.x.x.x), and no websites would load unless it was in the squid
cache, but any elements that required additional data failed to load
causing formatting issues with the displayed website. If we swap the
setting to the secondary with the first IP (per the example above) as
dns_nameservers 72.x.y.y 72.x.x.x
and it works the same way, take the .y.y down and it refuses to use
the secondary listed IP .x.x for DNS, instead displays the website
could not be displayed error in the browsers. We even tried another test
(per the example above) dns_nameservers 72.x.x.x 8.8.8.8 then let it run
for an hour or so. Then we took down the primary which means it should
use the secondary google IP of 8.8.8.8, but it doesn't, goes right back
to the website could not be displayed error in the browsers.
I was wondering if this might be a bug. This is happening on multiple
servers, one has squid 3.4.7, another has 3.4.6 and problem occurs on both.


Thank you exactly the kind of answer I was looking for question #1.
(Evidence that the problem is what you think it is before digging for a
cause).

Kind of answers Q2 as "nothing", implying that Q3 is "yes, dns_timeout is
happening".


  Is your dns_timeout (default 30 sec *total* DNS lookup timeout) larger
than your dns_retransmit_interval (default 5 sec per-query timeout) setting?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

We do not have a dns_timeout or dns_retransmit_interval entry in squid.conf; 
we only use the dns_nameservers entry and allow the default timeout 
timeframes, since few websites should take much longer than that to load, 
and if one does, it is a misspelled URL, a foreign server (which few of our 
customers use), or a site likely having issues anyway.
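
For the record, my understanding is that the defaults amount to roughly this 
(a sketch, reusing the example IPs from above):

dns_nameservers 72.x.x.x 72.x.y.y
dns_retransmit_interval 5 seconds
dns_timeout 30 seconds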


I suspect this may be a bug with squid 3.4.x, since this issue happened 
on 2 different squid servers, one with 3.4.6 and another with 3.4.7. Yet on the 
backups to each, one has 3.5.1 and the other has 3.5.6 (I updated it today), 
and they are not affected by this; both of these squid v3.5.x servers 
properly see the primary is not reachable and use the secondary DNS IP.


Thanks Amos,


Mike

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] dns failover failing with 3.4.7

2015-07-27 Thread Mike

Running into an issue, using the squid.conf entry
dns_nameservers 72.x.x.x 72.x.y.y

These are different servers (under our control) for the purpose of 
filtering than listed in resolv.conf (which are out of our control, used 
for server IP routing by upstream host).


The problem we found this weekend is that if the primary listed dns 
server is unavailable, squid fails to use the secondary listed server. 
Instead it displays "unable to connect" type messages with all 
websites.


How do we fix this so that if the primary fails it goes to the secondary
(and possibly a tertiary)?


Thanks in advance
Mike

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Mike

I see a few issues.

1. The report from the log shows a 192.168.*.* address, common LAN IP

Then in the squid.conf:
2. You have the wvdial destination as 10.1.*.* addresses, which is a
completely different internal network.
Typically there will be no internal routing or communication between a
192.168.*.* address and a 10.*.*.* address without a custom routing
server with 2 network connections, one on each subnet, acting as
the DNS intermediary for routing. Otherwise, for network/internet
connections, the computer/browser sees its own IP range as the local
network, and everything else, including 10.*.*.*, as an external address
out on the internet. I would suggest getting both the browsing computer
and the server on the same subnet, as in 192.168.122.x or 10.1.4.x;
otherwise these issues are likely to continue.


3. Next in the squid.conf is http_port, which should be the port number
only, no IP address, and especially not 0.0.0.0, which can cause conflicts
with squid 3.x versions. Best bet is to use just the port, as in
http_port 3128, or in your case http_port 8080, which is the port
(together with the server IP found in ifconfig) the browser will use to
connect through the squid server.
4. The "bypass local network" option means any connection attempt to a
local network IP will not use the proxy. This goes back to the 2 different
subnets. One option is to enter a proxy exception for 10.*.*.* (if the
websense server is using a 10.x.x.x IP address).



Mike


On 7/24/2015 10:35 AM, Jagannath Naidu wrote:

Dear List,

I have been working on this for last two weeks, but never got it 
resolved.


We have an application server (SERVER) in our local network and a
desktop application (CLIENT). The application picks up its proxy settings
from IE. We also have a websense proxy server.


case 1: when there is no proxy set
application works. No logs in squid server access.log

case 2: when the proxy ip address is set and "bypass local network" is checked
application works. No logs in squid server access.log

case 3: when the proxy ip address is set to the websense proxy server,
"bypass local network" UNCHECKED
application works. We don't have access to the websense server and hence
cannot check its logs



case 4: when the proxy ip address is set to the squid proxy server's ip
address, "bypass local network" UNCHECKED

application does not work :-(. Below are the logs.


1437751240.149      7 192.168.122.1 TCP_MISS/404 579 GET http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application - HIER_DIRECT/10.1.4.46 text/html
1437751240.992     94 192.168.122.1 TCP_DENIED/407 3757 CONNECT 0.client-channel.google.com:443 - HIER_NONE/- text/html
1437751240.996      0 192.168.122.1 TCP_DENIED/407 4059 CONNECT 0.client-channel.google.com:443 - HIER_NONE/- text/html
1437751242.327      5 192.168.122.1 TCP_MISS/404 579 GET http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/10.1.4.46 text/html
1437751244.777      1 192.168.122.1 TCP_MISS/503 4048 POST http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal - HIER_NONE/- text/html


squid -v
Squid Cache: Version 3.3.8
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--disable-strict-error-checking' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-eui' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' 
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' 
'--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' 
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl' 
'--with-openssl' '--with-pthreads' 
'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic

Re: [squid-users] acl for redirect

2015-07-02 Thread Mike
We have a DNS guru on staff, and editing resolv.conf in this manner
does not work (we tested it to make sure). Looks like we will be using an
older desktop to set up a basic DNS server and then point squid at it for
the redirect.




Mike


On 7/2/2015 2:06 AM, Stuart Henderson wrote:

On 2015-07-01, Mike mcsn...@afo.net wrote:

This is a proxy server, not a DNS server, and does not connect to a DNS
server that we have any control over... The primary/secondary DNS is
handled through the primary host (Cox) for all of our servers, so we do
not want to alter it for all of our several hundred servers, just these 4
(maybe 6).
I was originally thinking of modifying the resolv.conf, but again that is
internal DNS used by the server itself. The users will have their own
DNS settings, causing it to either ignore our settings, or go right back to
the "Website cannot be displayed" errors due to the DNS loop.

resolv.conf would work, or you can use dns_nameservers in squid.conf and
point just squid (if you want) to a private resolver configured to hand
out the forcesafesearch address.

When a proxy is used, the client defers name resolution to the proxy, you
don't need to change DNS on client machines to do this.
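
(As a concrete sketch, assuming dnsmasq as that private resolver; the IP
below is a placeholder, use whatever forcesafesearch.google.com resolves
to for you:)

# /etc/dnsmasq.conf on the private resolver
# (replace the IP with the current answer from: host forcesafesearch.google.com)
address=/google.com/216.239.38.120

# squid.conf -- point only squid at that resolver
dns_nameservers 127.0.0.1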

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-07-01 Thread Mike
Rafael, we're trying to keep the setups lean and primarily just deal
with google and youtube, not all websites. ICAP processes add a
whole new layer of complexity and usually cover all websites, not just
the few.


On 6/30/2015 16:17 PM, Rafael Akchurin wrote:

Hello Mike,

May be it is time to take a look at ICAP/eCAP protocol implementations which 
target specifically this problem - filtering within the *contents* of the 
stream on Squid?

Best regards,
Rafael


Marcus,

This is multiple servers used for thousands of customers across North
America, not an office, so changing from a proxy to a DNS server is not
an option, since we would also be required to change DNS settings for all
several thousand of our customers.


On 6/30/2015 17:30 PM, Marcus Kool wrote:

I suggest to read this:
https://support.google.com/websearch/answer/186669

and look at option 3 of section 'Keep SafeSearch turned on for your 
network'


Marcus 


Such a pain; there is no reason our everyday searches should be
encrypted.



Mike


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Mike
Sent: Tuesday, June 30, 2015 10:49 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] acl for redirect

Scratch that (my previous email to this list): google disabled their insecure
sites when used as part of a redirect. We as individual users can use that url
directly in the browser (
http://www.google.com/webhp?nord=1 ), but any google page load starts with the
secure page, causing that redirect to fail... The newer 3.1.2 e2guardian SSL
MITM requires options (like a DER certificate file) that cannot be used with
thousands of existing users on our system, so squid may be our only option.

Another issue right now is that google is using a VPN-style internal redirect
on their server, so e2guardian (shown in its log) sees the
https://www.google.com:443 CONNECT, passes along TCP_TUNNEL/200
www.google.com:443 to squid (shown in the squid log), and after that it is
directly between google and the browser, not allowing e2guardian nor squid to
see further urls from google (such as search terms) for the rest of that
specific session. Users can click news, maps, images, videos, and NONE of
these are passed along to the proxy.

So my original question still stands, how to set an acl for google urls that 
references a file with blocked terms/words/phrases, and denies it if those 
terms are found (like a black list)?

Another option I thought of: since the meta content in the page code,
including the title, is passed along, is there a way to have it scan the
header or title content as part of the acl content scan process?


Thanks
Mike


On 6/26/2015 13:29 PM, Mike wrote:

Nevermind... I found another fix within e2guardian:

etc/e2guardian/lists/urlregexplist

Added this entry:
# Disable Google SSL Search
# allows e2g to filter searches properly
"^https://www.google.[a-z]{2,6}(.*)"->"http://www.google.com/webhp?nord=1"


This means whenever google.com or www.google.com is typed in the
address bar, it loads the insecure page and allows e2guardian to
properly filter whatever search terms they type in. This does break
other aspects such as google toolbars, using the search bar at upper
right of many browsers with google as the set search engine, and other
ways, but that is an issue we can live with.

On 26/06/2015 2:36 a.m., Mike wrote:

Amos, thanks for info.

The primary settings being used in squid.conf:

http_port 8080
# this port is what will be used for SSL Proxy on client browser
http_port 8081 intercept

https_port 8082 intercept ssl-bump connection-auth=off
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-
RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M
16MB sslcrtd_children 50 startup=5 idle=1 ssl_bump server-first all
ssl_bump none localhost


Then e2guardian uses 10101 for the browsers, and uses 8080 for
connecting to squid on the same server.

Doesn't matter. Due to TLS security requirements Squid ensures the TLS
connection is re-encrypted on the outgoing side.


I am doubtful the nord trick works anymore, since Google's own documentation
for schools states that one must install a MITM proxy that does the
traffic filtering - e2guardian is not one of those. IMO you should
convert your e2guardian config into Squid ACL rules that can be
applied to the bumped traffic without forcing http://

But if nord does work, so should the deny_info in Squid. Something
like this probably:

   acl google dstdomain .google.com
   deny_info 301:http://%H%R?nord=1  google

   acl GwithQuery urlpath_regex \?
   deny_info 301:http://%H%R&nord=1  GwithQuery

   http_access deny google GwithQuery
   http_access deny google


Amos

___
squid-users mailing list
squid

Re: [squid-users] acl for redirect

2015-07-01 Thread Mike
This is a proxy server, not a DNS server, and does not connect to a DNS
server that we have any control over... The primary/secondary DNS is
handled through the primary host (Cox) for all of our servers, so we do
not want to alter it for all of our several hundred servers, just these 4
(maybe 6).
I was originally thinking of modifying the resolv.conf, but again that is
internal DNS used by the server itself. The users will have their own
DNS settings, causing it to either ignore our settings, or go right back to
the "Website cannot be displayed" errors due to the DNS loop.


So finding a way to redirect in squid should be the better route for us,
since DNS is not an option.

Essentially: www.google.com -> forcesafesearch.google.com

Mike

On 7/1/2015 11:11 AM, Marcus Kool wrote:

The article does not say to change from a proxy to a DNS server.
Instead, it says to add an entry for google to your own DNS server 
(the one that Squid uses) and continue to use your proxy.


Marcus


Marcus,

This is multiple servers used for thousands of customers across North
America, not an office, so changing from a proxy to a DNS server is
not an option, since we would also be required to change DNS settings
for all several thousand of our customers.

On 6/30/2015 17:30 PM, Marcus Kool wrote:

I suggest to read this:
https://support.google.com/websearch/answer/186669

and look at option 3 of section 'Keep SafeSearch turned on for your 
network'


Marcus




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-06-30 Thread Mike
Scratch that (my previous email to this list): google disabled their
insecure sites when used as part of a redirect. We as individual users
can use that url directly in the browser (
http://www.google.com/webhp?nord=1 ), but any google page load starts
with the secure page, causing that redirect to fail... The newer 3.1.2
e2guardian SSL MITM requires options (like a DER certificate file) that
cannot be used with thousands of existing users on our system, so squid
may be our only option.


Another issue right now is that google is using a VPN-style internal
redirect on their server, so e2guardian (shown in its log) sees the
https://www.google.com:443 CONNECT, passes along TCP_TUNNEL/200
www.google.com:443 to squid (shown in the squid log), and after that it is
directly between google and the browser, not allowing e2guardian nor
squid to see further urls from google (such as search terms) for the
rest of that specific session. Users can click news, maps, images, videos,
and NONE of these are passed along to the proxy.


So my original question still stands, how to set an acl for google urls 
that references a file with blocked terms/words/phrases, and denies it 
if those terms are found (like a black list)?


Another option I thought of: since the meta content in the page code,
including the title, is passed along, is there a way to have it scan the
header or title content as part of the acl content scan process?



Thanks
Mike


On 6/26/2015 13:29 PM, Mike wrote:

Nevermind... I found another fix within e2guardian:

etc/e2guardian/lists/urlregexplist

Added this entry:
# Disable Google SSL Search
# allows e2g to filter searches properly
"^https://www.google.[a-z]{2,6}(.*)"->"http://www.google.com/webhp?nord=1"



This means whenever google.com or www.google.com is typed in the 
address bar, it loads the insecure page and allows e2guardian to 
properly filter whatever search terms they type in. This does break 
other aspects such as google toolbars, using the search bar at upper 
right of many browsers with google as the set search engine, and other 
ways, but that is an issue we can live with.


On 26/06/2015 2:36 a.m., Mike wrote:

Amos, thanks for info.

The primary settings being used in squid.conf:

http_port 8080
# this port is what will be used for SSL Proxy on client browser
http_port 8081 intercept

https_port 8082 intercept ssl-bump connection-auth=off
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost


Then e2guardian uses 10101 for the browsers, and uses 8080 for
connecting to squid on the same server.

Doesn't matter. Due to TLS security requirements Squid ensures the TLS
connection is re-encrypted on the outgoing side.


I am doubtful the nord trick works anymore, since Google's own documentation
for schools states that one must install a MITM proxy that does the traffic
filtering - e2guardian is not one of those. IMO you should convert your
e2guardian config into Squid ACL rules that can be applied to the bumped
traffic without forcing http://

But if nord does work, so should the deny_info in Squid. Something like
this probably:

  acl google dstdomain .google.com
  deny_info 301:http://%H%R?nord=1  google

  acl GwithQuery urlpath_regex \?
  deny_info 301:http://%H%R&nord=1  GwithQuery

  http_access deny google GwithQuery
  http_access deny google


Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect - re Fred

2015-06-26 Thread Mike
Yes we already have that version installed, that is the version having 
these issues.


[root@Server1 ~]# e2guardian -v
e2guardian 3.0.4


On 6/26/2015 3:40 AM, FredB wrote:

Mike, you can also try the dev branch
https://github.com/e2guardian/e2guardian/tree/develop
SSLMITM works now. The request from the client is intercepted, a spoofed
certificate is supplied for the target site, and an encrypted connection is
made back to the client. A separate encrypted connection to the target
server is set up. The resulting decrypted http stream is then filtered as
normal.

https://github.com/e2guardian/e2guardian/blob/develop/notes/ssl_mitm

Fred
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect - re Amos

2015-06-26 Thread Mike

Amos,

I would like to use e2guardian if possible, and after checking it out,
http://www.google.com/webhp?nord=1 does force the insecure page, but the
entries attempted so far just cause all searches to loop back to that same
url instead of passing them along.


We could use a regex option in squid, but since we want the rest of the
sites to be handled normally through e2guardian, what acl entries would
we use to set it up to only take effect on google.com? Essentially: if
dstdomain = google.com, then use acl blocklist /etc/squid/badwords.
I have not used a 2-layer or referring acl setup before, but until now
never needed to.
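
(Something like this, perhaps? Just a sketch; the badwords file path and
its contents are hypothetical, one regex per line:)

acl google dstdomain .google.com
acl badwords url_regex -i "/etc/squid/badwords"
# deny only when both match: a google host AND a blocked pattern in the URL
http_access deny google badwords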


Thank you so much for the help!

Mike


On 6/26/2015 0:29 AM, Amos Jeffries wrote:

On 26/06/2015 2:36 a.m., Mike wrote:

Amos, thanks for info.

The primary settings being used in squid.conf:

http_port 8080
# this port is what will be used for SSL Proxy on client browser
http_port 8081 intercept

https_port 8082 intercept ssl-bump connection-auth=off
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost


Then e2guardian uses 10101 for the browsers, and uses 8080 for
connecting to squid on the same server.

Doesn't matter. Due to TLS security requirements Squid ensures the TLS
connection is re-encrypted on the outgoing side.


I am doubtful the nord trick works anymore, since Google's own documentation
for schools states that one must install a MITM proxy that does the traffic
filtering - e2guardian is not one of those. IMO you should convert your
e2guardian config into Squid ACL rules that can be applied to the bumped
traffic without forcing http://

But if nord does work, so should the deny_info in Squid. Something like
this probably:

  acl google dstdomain .google.com
  deny_info 301:http://%H%R?nord=1 google

  acl GwithQuery urlpath_regex \?
  deny_info 301:http://%H%R&nord=1 GwithQuery

  http_access deny google GwithQuery
  http_access deny google


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-06-26 Thread Mike

Nevermind... I found another fix within e2guardian:

etc/e2guardian/lists/urlregexplist

Added this entry:
# Disable Google SSL Search
# allows e2g to filter searches properly
"^https://www.google.[a-z]{2,6}(.*)"->"http://www.google.com/webhp?nord=1"

This means whenever google.com or www.google.com is typed in the address 
bar, it loads the insecure page and allows e2guardian to properly filter 
whatever search terms they type in. This does break other aspects such 
as google toolbars, using the search bar at upper right of many browsers 
with google as the set search engine, and other ways, but that is an 
issue we can live with.



On 6/26/2015 5:12 AM, Amos Jeffries wrote:

On 26/06/2015 8:40 p.m., FredB wrote:

Mike, you can also try the dev branch
https://github.com/e2guardian/e2guardian/tree/develop
SSLMITM works now. The request from the client is intercepted, a spoofed
certificate is supplied for the target site, and an encrypted connection is
made back to the client. A separate encrypted connection to the target
server is set up. The resulting decrypted http stream is then filtered as
normal.

If that order of operations is correct then the e2guardian devs have made
the same mistake we made back in Squid-3.2: client-first bumping opens a
huge security vulnerability - by hiding issues on the server connection
from the client, it enables attackers to hijack the server connection
invisibly. This is the reason the more-difficult-to-get-working
server-first and peek-n-splice modes exist and are almost mandatory in
Squid today.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Website causing 3.5.5 squid crash

2015-06-06 Thread Mike
Running Scientific Linux 6.6 (based on CentOS); compiled squid 3.5.5
with no errors.

The issue happens with both squid 3.5.5 and the newer 3.5.5-20150528-r13841.
After starting the squid service and running for a while, any attempt to
access a website like www.nwfdailynews.com causes squid to crash and
restart. The associated squid log entries from the time of the crash show
several TCP_SWAPFAIL_MISS entries:


1433608644.053117 192.168.2.110 TCP_SWAPFAIL_MISS/200 17246 GET 
http://launch.newsinc.com/77/css/NdnEmbed.css - 
HIER_DIRECT/184.51.234.134 text/css


Other normal use, including facebook and other websites, does not have any
problems, so it has to be something relating to this site's setup causing
the crash. We are using this on a testing server for now but hoping to
roll it out to production level, used by a few thousand people.


These are the only modified entries in squid:

-
http_port 3128
# this port is what will be used for SSL Proxy on client browser
http_port 3129 intercept
https_port 3130 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB 
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key 
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost

#Also for log watching to check for errors, add these lines:

cache_log /var/log/squid/cache.log
cache_effective_user squid
debug_options ALL,0
logfile_rotate 10
cache_mgr myem...@myemail.com
pinger_enable off
visible_hostname A7750

# Uncomment and adjust the following to add a disk cache directory.
cache_dir aufs /var/cache/squid 1 32 512
-

Let me know anything else you may need or suggestions.

Mike

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New server_name acl causes fatal error starting Squid 3.5.4

2015-05-27 Thread Mike
Stanford Prescott stan.prescott at gmail.com writes:

 
 
 Never mind. I figured the acl out. I was using someone else's
instructions, which accidentally left out the double colon in
ssl::server_name, using just a single colon.


I am getting the same thing as you except I don't have the mistake you 
did. I literally copied your line into my config and it's still bombing 
out.

2015/05/27 14:38:25| FATAL: Invalid ACL type 'ssl::server_name'
FATAL: Bungled /etc/squid/squid.conf line 52: acl nobumpSites 
ssl::server_name .wellsfargo.com
Squid Cache (Version 3.5.4): Terminated abnormally.
CPU Usage: 0.006 seconds = 0.002 user + 0.004 sys
Maximum Resident Size: 24112 KB
Page faults with physical i/o: 0

I'm about to just give up on squid..losing my mind. Any ideas?


 
 
 On Wed, May 20, 2015 at 12:36 PM, Stanford Prescott stan.prescott 
at gmail.com wrote:
 
 
 After a diversion getting SquidClamAV working, I am back to trying to
get peek and splice working. I am trying to put together information
from previous recommendations I have received. Right now, I can't get
the server_name acl working. When I put this in my squid.conf:
 acl nobumpSites ssl:server_name .example.com
 I get a fatal error starting squid using that acl, saying the acl is
Bungled.
 Is the form of the acl incorrect?
 
 
 
 
 
 
 
 ___
 squid-users mailing list
 squid-users at lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TCP_DENIED and TCP_MISS_ABORTED

2015-02-25 Thread Mike
We have recently been seeing this error on squid where one site that our 
users need access to is not loading at all.


1424889858.688  0 127.0.0.1 TCP_DENIED/407 3968 GET 
http://www.afa.net/ - HIER_NONE/- text/html
1424889878.725  20014 127.0.0.1 TCP_MISS_ABORTED/000 0 GET 
http://www.afa.net/ testuser1 HIER_DIRECT/66.210.221.116


[root@xeserver squid]# squid -v
Squid Cache: Version 3.4.7

Attempted to add an acl:
acl allowafa dstdomain .afa.net .afastore.net
http_access allow allowafa

but this did not fix it.

I understand the /407, as it relates to http access, means proxy
authentication is required, which is what every customer does when the
browser is opened, so authentication is already done and active on
the server; otherwise other websites would not be loading either.


All other sites we need access to work fine, it is just something about 
this one... Any suggestions?


Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Building 3.5.1 without libcom_err?

2015-02-23 Thread Mike Mitchell
Is there a way to build 3.5.1 without libcom_err?
On my old Redhat system (2.6.18-128.1.1.el5) I get compilation failures unless 
I remove all references to libcom_err.

Here's a snippet from the config log:

configure:24277: checking for krb5.h
configure:24277: result: yes
configure:24277: checking com_err.h usability
configure:24277: g++ -c -g -O2 conftest.cpp >&5
conftest.cpp:110:21: error: com_err.h: No such file or directory
configure:24277: $? = 1
configure: failed program was:
| /* confdefs.h */
...

configure:24330: checking for error_message in -lcom_err
configure:24355: g++ -o conftest -g -O2 -g conftest.cpp -lcom_err -lrt -ldl
-ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lkrb5 -lk5crypto -lcom_err
>&5
/usr/bin/ld: skipping incompatible /usr/lib/libcom_err.so when searching for 
-lcom_err
/usr/bin/ld: skipping incompatible /usr/lib/libcom_err.a when searching for 
-lcom_err
/usr/bin/ld: cannot find -lcom_err
collect2: ld returned 1 exit status


Later when I try to build squid I get the same incompatible 
/usr/lib/libcom_err.so error message and the build stops.

If I hand-edit the Makefiles in the various directories and remove -lcom_err, 
the build succeeds and the executables run properly.

I run configure with --with-krb5-config=no --without-mit-krb5 
--without-heimdal-krb5 --without-gnutls

But it still tries linking in the krb libraries and the com_err library.

Any suggestions?

Mike Mitchell
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] request_body_max_size on transparent proxy

2015-02-23 Thread Mike Mitchell

I'm trying to POST large files (>1MB) through a squid 3.5.2 proxy set up to
intercept connections.

The client is including an 'Expect: 100-continue' header, and sends all headers 
in a single network packet.
POSTs of content smaller than 1MB go through, but larger POSTs do not.
The client's TCP connection is being reset without squid sending any sort of 
error page.
Nothing is logged in squid -- not in the access log, not in the cache log.  
It's as if that request never happened.
The client just gets a closed connection.

I'm running with the default 'request_body_max_size', it is not specified in my 
configuration.
That should mean unlimited for the request body.

If I configure the client to explicitly use the same proxy on a different, 
non-transparent port, the large POSTs go through correctly.  It is as if 
request_body_max_size does not function on a port marked 'transparent'.

Has anyone else seen this problem?
I've found one reference to it in my searches, 
http://nerdanswer.com/answer.php?q=336233

Mike Mitchell

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2015-01-23 Thread Mike

For a Red Hat/CentOS based OS, selinux causes that.

The fix I found in this case:

Before the below “audit2allow” command will work, you will need to
install the needed tool for selinux:

* yum -y install policycoreutils-python
(which will also install a few other dependencies).

To temporarily set selinux to permissive:

* echo 0 > /selinux/enforce

To re-enable after it is fixed:
* echo 1 > /selinux/enforce

Check /var/log/audit/audit.log for the type=AVC entries relating to
ssl_crtd (an easy way is: grep AVC audit.log | less).


To find out WHY it is happening in selinux, use this:
grep ssl_crtd /var/log/audit/audit.log | audit2allow -w


Start in the /tmp/ folder since we will not need these files for long.

* grep ssl_crtd /var/log/audit/audit.log | audit2allow -m ssl_crtdlocal > ssl_crtdlocal.te
- outputs the suggested settings into the file ssl_crtdlocal.te, which
we will review below with “cat”

* cat ssl_crtdlocal.te
- to review the created file and show what will be done in selinux
* grep ssl_crtd /var/log/audit/audit.log | audit2allow -M ssl_crtdlocal
- Note the capital M; this Makes the needed file, ready for selinux to
import, and then the next command below actually enables it.

* semodule -i ssl_crtdlocal.pp
- Used to enable the new policy in selinux

As long as it is now working properly, can delete the *.te and *.pp 
files created in the /tmp/ folder.


Now all of this is moot if selinux is not used, so there may well be
other explanations, but this at least covers RedHat based OS's with
selinux. I documented all of this since our servers ran into the same
issue due to selinux, and this was how we resolved it.



Mike



On 1/22/2015 6:17 AM, HackXBack wrote:

hello,
every day I find this error and my cache stops.

Then I remove the ssl database and restart squid.

The next day the problem happens again.
I am using squid 3.4.11.

What may cause this problem?

thanks.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question on throughput

2014-10-15 Thread Mike

On 10/15/2014 3:41 AM, Jacques Kruger wrote:


Hi,

I’ve implemented my fair share of squid proxies over the past couple
of years and I’ve always been able to find a solution in the mail
archive, but this time around I’m stumped. This is the first time I’ve
used squid with a fast (in our context) internet connection,
specifically a 4G connection that the provider claims can run up to
100Mbps. Claims aside, my real-world testing is not what I’m
expecting. I’ve used two squid instances, one on PFsense (2.7.9) and
one on Windows (2.7Stable8), and compared the throughput to a
connection without squid. What I’ve found, when testing with
www.speedtest.net, is that the throughput is roughly half with squid
compared to a direct connection. I’ve left the configuration pretty
much at default and have tried to tweak it, both without success.


What are the directives that have the most effect on throughput?

Regards,

Jacques Kruger



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Throughput on squid 2.x (plus the linux kernel from that timeframe) is
limited, as we recently found out on one of our servers. In my past
testing, Windows with cygwin is even further limited. In our case,
with a small ISP-level 200Mbps connection, the best our customers could
get with their systems was 20Mbps through a linux 2.0.x kernel and squid 2.3.
Same server with an updated OS (Scientific Linux 6.5 with latest updates),
same connection, using compiled squid 3.4.7, with typically up to 4000
customers connecting at any given moment: customers with 50Mbps
connections (some of the fastest home connections) were seeing less than
a 20% drop, even spanning across the entire US.



Mike

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Best way to deny access to URLs in Squid 3.3.x?

2014-10-14 Thread Mike

On 10/14/2014 12:37 PM, Mirza Dedic wrote:
Just curious, what are some of you doing in your Squid environment as 
far as URL filtering goes? It seems there are a few options out 
there.. squidguard... dansguardian.. plain block lists.


What is the best practice to implement some sort of block list into 
squid? I've found urlblacklist.com that has a pretty good broken down 
URL block list by category, what would be the best way to go.. use 
dansguardian with this list or set it up in squid.conf as an acl 
dstdomain and feed in the block list file without calling an external 
helper application?


Thanks.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


We have used dansguardian before, but there is a newer updated fork by 
some of the original crew called e2guardian that can also handle some 
SSL urls via blacklisting (as long as squid is also setup with ssl-bump 
in 3.4.x).
Otherwise, within squid itself, the dstdomain and dstdom_regex acls
are an option, but those do not provide much for filtering the content of
the websites themselves.



Mike
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] server failover/backup

2014-08-18 Thread Mike
Question: when we copy the /etc/squid/passwd file itself from server 1
to server 2, and use the same squid authentication, why does server 2
not accept the usernames and passwords in the file that work on
server 1?

Is that file encrypted by server 1?
Do we need to create a new passwd file from scratch on server 2, and use
a script to import the entries from server 1 into that new passwd file?


The main differences:
Server 1 is 64 bit OS Fedora 8 using squid Version 2.6.STABLE19
Server 2 is recently installed OS with 32 bit CentOS 6.5 i686 (due to 
hardware being 32bit), squid 3.4.5.


Does that 64 versus 32 bit file setup and creation make an impact? Or 
how about the 2.6.x versus 3.4.x?


The squid.conf specifics, older server 1:

auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive on

acl ourCustomers proxy_auth REQUIRED
http_access allow ourCustomers



The squid.conf specifics, newer OS server 2:

auth_param basic program 
/usr/src/squid-3.4.5/helpers/basic_auth/NCSA/basic_ncsa_auth 
/etc/squid/passwd

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive on

acl ourCustomers proxy_auth REQUIRED
http_access allow ourCustomers

http_access deny all


Thanks!
Mike


Re: [squid-users] Re: server failover/backup

2014-08-18 Thread Mike

On 8/18/2014 4:27 PM, nuhll wrote:

Question: why u spam my thread?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667249.html
Sent from the Squid - Users mailing list archive at Nabble.com.

This is an email list. I created a new email to
squid-users@squid-cache.org for assistance from anyone that uses the
email list. I was told some time ago that Nabble is not recommended,
since it does not always place messages in the proper thread layout on
the email list, so it is best used via email, not the website.






Re: [squid-users] server failover/backup

2014-08-18 Thread Mike

On 8/18/2014 6:56 PM, Amos Jeffries wrote:

1) long passwords encrypted with DES.

The current release's Squid NCSA helper checks the length of DES passwords
and rejects them if they are more than 8 characters long, instead of
silently truncating and accepting bad input.

If your users have long passwords and you encrypted them into the
original file with DES, then they need to be upgraded. Logging in with
only the first 8 characters of their password should still work with DES.


Thanks Amos.
That seemed to be the issue.
I did some digging and we found we had to use MD5 when recreating the
user/pass file, using htpasswd -m -b /etc/squid/passwd user pass,
and didn't have to change anything in squid.conf. The basic_ncsa_auth
helper automatically picks up the MD5 hashes in the new file and the issue
is resolved.
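
(For anyone hitting the same thing later, the recreation sequence looks
roughly like this; usernames and passwords are placeholders:)

# -c creates the file (first user only); -m forces MD5; -b takes the
# password on the command line
htpasswd -c -m -b /etc/squid/passwd user1 longpassword1
htpasswd -m -b /etc/squid/passwd user2 longpassword2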


Thanks again
Mike



Re: [squid-users] HTTP/HTTPS transparent proxy doesn't work

2014-08-13 Thread Mike
 Generic Routing Encapsulation (GRE). PPTP provides a low-cost, 
private connection to a corporate network through the Internet. PPTP 
works well for people who work from home or travel and need to access 
their corporate networks. It is often used to access a Microsoft Remote 
Access Server (RAS).


Mike




Re: [squid-users] Re: HTTP/HTTPS transparent proxy doesn't work

2014-08-13 Thread Mike

On 8/13/2014 12:52 PM, agent_js03 wrote:

Awesome, so if I change my squid.conf accordingly, do I redirect all traffic
to port 3128 or do I redirect http to 3129 and https to 3130 accordingly?







Just use the primary port 3128 for all explicitly configured proxy
requests. Think of 3128 as the gateway, and the other 2 as the internal
routing ports.


http_port 3129 intercept will intercept the insecure http requests;
https_port 3130 intercept ssl-bump... will intercept the secure site
requests.
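
(The matching firewall redirects would be something like this sketch,
assuming iptables on the squid box itself with LAN traffic arriving on
eth0; adjust the interface and networks to your setup:)

# explicitly configured browsers keep using 3128; intercepted traffic
# is steered to the intercept ports instead
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3130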




Re: [squid-users] Squid exiting on its own at sys startup

2014-07-09 Thread Mike
Unfortunately no, because each system has minor differences; the desired
rules to be used by squid vary based on other programs and
interactions within the system. This is why I just typed them out here,
so others can figure out why squid or pinger or ssl_crtd is getting
caught by selinux, and how to at least allow it through selinux for
that specific system.


These rules are not set up via setsebool; instead they are installed
directly via semodule.




Mike

On 7/8/2014 8:30 PM, Eliezer Croitoru wrote:

Hey Mike,

I was wondering if you have these Selinux rules in binary or another 
format(src) which I can try to use and package them in RPM?


Thanks,
Eliezer

On 06/27/2014 12:08 AM, Mike wrote:

After some deeper digging, it seems selinux was only temporarily
disabled (via  echo 0 /selinux/enforce), not disabled in the primary
config file. But this actually allowed me to track down a fix to keep
using selinux (which we definitely need for server security). I am going
to add it here for others that may run into the same problem (in RedHat,
CentOS and Scientific Linux) and how to fix it. This allows us to use
ssl-bump with selinux. I had one where pinger was also having an issue
so I am including it here.
Scientific Linux 6.5 (would also work for RedHat and CentOS 6)
squid 3.4.5 and 3.4.6

Edit /etc/selinux/config and change to “permissive”. Then cycle the
audit logs:
cd /var/log/audit/
mv audit.log audit.log.0
touch audit.log

Thenreboot the system and let selinux come back up and catch the items
in its log (usually ssl_crtd and pinger) located at
/var/log/audit/audit.log. Many times squid will try to start but end up
with “the ssl_crtd helpers are crashing too quickly” which will shut the
squid service down.

* Install the needed tool for selinux: yum install policycoreutils-python
  (which will also install a few other needed dependencies).

ssl_crtd: Start in the /tmp/ folder since we will not need these files
for long.

* grep ssl_crtd /var/log/audit/audit.log | audit2allow -m ssl_crtdlocal > ssl_crtdlocal.te
  - outputs the suggested settings into the file ssl_crtdlocal.te, which
    we will review below with “cat”
* cat ssl_crtdlocal.te
  - to review the created file and show what will be done
* grep ssl_crtd /var/log/audit/audit.log | audit2allow -M ssl_crtdlocal
  - Note the capital M; this makes the needed file, ready for selinux to
    import, and then the next command below actually enables it.
* semodule -i ssl_crtdlocal.pp

Now for pinger (if needed):

* grep pinger /var/log/audit/audit.log | audit2allow -m pingerlocal > pingerlocal.te
* cat pingerlocal.te
  - to review the created file and show what will be done
* grep pinger /var/log/audit/audit.log | audit2allow -M pingerlocal
* semodule -i pingerlocal.pp

After those are entered, go back in and edit /etc/selinux/config and
change to “enforcing”. Reboot the system one more time and watch the
logs for any other entries relating to squid like “ssl_crtd” or “pinger”
(look at the comm="ssl_crtd" aspect) to see if any other squid based
items need an allowance:

* type=AVC msg=audit(1403808338.272:24): avc: denied { read } for
  pid=1457 comm="ssl_crtd" name="index.txt" dev=dm-0 ino=5376378
  scontext=system_u:system_r:squid_t:s0
  tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file

-OR-

* type=SYSCALL msg=audit(1403808338.272:24): arch=c03e syscall=2
  success=yes exit=3 a0=cfe2e8 a1=0 a2=1b6 a3=0 items=0 ppid=1454
  pid=1457 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500
  egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295
  comm="ssl_crtd" exe="/usr/lib64/squid/ssl_crtd"
  subj=system_u:system_r:squid_t:s0 key=(null)



Thanks all
Mike






Re: [squid-users] Squid v3.4.6 SMP errors

2014-07-09 Thread Mike
Running into this issue on one powerful system. The OS (Scientific Linux
6.5) sees 16 CPU cores (2 CPU sockets, each with 4 cores +
Hyperthreading). The unusual part is that this same setup works fine on
another system with dual core + HT using 3 workers.


I tried to set up the SMP options in squid.conf that work on other
systems, but not on this one. I first tried with 7 workers, then 3, but
neither worked; I kept getting the error mentioned at the bottom of
this message. Only if I use the standard cache setup does it work without
a problem. I use odd numbers with the cpu_affinity_map so the
parent/coordinator can use the first core, and then the kids will be tied
to the other cores mentioned. This allows more single-affinity processes
to use the first core as needed with minimal i/o impact.

The /var/cache/squid (and all subfolders) shows ownership as squid:squid
Also worth a mention: selinux is disabled.

Squid.conf basics with ssl-bump:


http_port 8080
# above port is what will be used for SSL Proxy on client browser
http_port 8081 intercept
https_port 8082 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB 
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key 
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost
cache_log /var/log/squid/cache.log
cache_effective_user squid
debug_options ALL,0
logfile_rotate 10
cache_mgr zz...@z.net
pinger_enable off

The SMP-related items (everything else is fairly standard); here are the
2 options I tried:

workers 3
cpu_affinity_map process_numbers=1,2,3 cores=2,3,4

workers 7
cpu_affinity_map process_numbers=1,2,3,4,5,6,7 cores=2,3,4,9,10,11,12

I used cores 2-4 and 9-12, since 5-8 are the first CPU's Hyperthread cores.
CPU0 - core: 1-4, HT: 5-8
CPU1 - core: 9-12, HT: 13-16

and the related cache_dir entries; with workers 7 the process numbers
went up to 8 in the same manner (1 for coord, 7 for workers). Showing
them commented since that is how it currently sits:


#if ${process_number} = 1
#cache_dir ufs /var/cache/squid/1 1 32 512
#endif

#if ${process_number} = 2
#cache_dir ufs /var/cache/squid/2 1 32 512
#endif

#if ${process_number} = 3
#cache_dir ufs /var/cache/squid/3 1 32 512
#endif

#if ${process_number} = 4
#cache_dir ufs /var/cache/squid/4 1 32 512
#endif


The error:

(squid-coord-8): Ipc::Mem::Segment::attach failed to 
mmap(/squid-squid-page-pool.shm): (22) Invalid argument


Which then kills the squid kid processes, resulting in "process 1234 will
not be restarted due to repeated, frequent failures".


Now I saw mentions on the squid page 
http://wiki.squid-cache.org/Features/SmpScale

with this info, which did not work:

Add the following line to your /etc/fstab file:

shm  /dev/shm  tmpfs  nodev,nosuid,noexec  0 0

After that use (as root):

mount shm


The only other thing I can think of is for process_numbers: does that
need to count to workers +1 (for the coord/parent)? So 4 or 8 in my
case? I have it as 3 on another working system with no problems.

Any help is greatly appreciated.

Mike


Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port

2014-06-29 Thread Mike

Here are my entries for ssl-bump:

http_port 3128
http_port 3129 intercept
https_port 3130 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB 
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key


In many cases you will need to recreate the certificates, as copying them
over does not always work, or they are tied to that specific machine via
encryption.
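
(Recreating them on the new box is quick; a sketch with openssl, using
the file names from the config above -- the key size, lifetime and
subject are placeholders:)

openssl req -new -newkey rsa:2048 -sha256 -days 3650 -nodes -x509 \
    -subj "/CN=squid proxy CA" \
    -keyout /etc/squid/ssl/squid.key -out /etc/squid/ssl/squid.pem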


Also, it helps to set the proxy on different ports such as 3128 or 8080
instead of trying to use 80 and 443; those are for server-based
websites, not proxies, and generally cause more problems in the long
run. Most servers see an incoming connection to port 80 or 443 and try
to respond via Apache.



Mike


On 6/29/2014 1:30 PM, John Gardner wrote:

I wonder if some of you can help me in figuring out an issue.  For the
last three years, we've had a Squid Reverse Proxy running on
Oracle Linux 5 (64 bit) with version 2.6 of Squid (which came with the
distro) and it's been a total success and never missed a beat.

Now, I realised that this version is getting old, so I thought I would
install a more recent version and get some more features as well.
I installed the 32 bit version of Eliezer's 3.4.3 RPM and managed to
get everything back up and running successfully.  However, when
I was testing this environment I noticed that every so often in the
log I got a FATAL: Received Segment Violation...dying. message and
then Squid just stopped responding. So, I then decided to build a version
6 instance of Oracle Linux and then install the 64 bit 3.4.3 RPM on it,
copying over all of the config and certificates.

Now I've got a new problem: although Squid now starts successfully
when I only put http_port into the squid.conf, when I add https_port
entries I get the following message:

FATAL: No valid signing SSL certificate configured for https_port
10.x.x.95:443 and Squid terminates.

Does anyone know why I'm getting this issue?  Would it be because in
moving from OEL 5 to OEL 6 I've also moved from OpenSSL 0.9.8 to
OpenSSL 1.0 and the certificate formats are now different, or is it
something else?

All help greatly appreciated.

John





Re: [squid-users] What is a reasonable size for squid.conf?

2014-06-27 Thread Mike
My squid.conf is 3380 bytes and 99 total lines, with around 35 lines
blank or commented out. If you have been upgrading from any 3.1 or older
squid, those had a LOT of unnecessary lines in there for TAG-related
entries and excess documentation of every little line.


Mike


On 6/27/2014 2:51 PM, Owen Crow wrote:

I am running a non-caching reverse proxy using version 3.3.10.

My squid.conf is currently clocking in at 60k lines (not including
comments or blank lines). Combined with the conf files in my conf.d
directory, I have a total of 89k lines of configuration.

I have definitely noticed -k reconfigure calls taking on the order
of 20 seconds to run when it used to be less than a couple seconds.
(Same results with -k test).

I've tried searching for anything related to max lines and similar,
but it usually talks about squid.conf configuration options and not
the file itself.

If this is not documented per se, are there any anecdotal examples
that have this many lines or more? I only see this growing over time.

Thanks,
Owen





Re: [squid-users] Suggested init.d startup script.

2014-06-26 Thread Mike
Yes, we always disable selinux, or at least change it to the non-blocking
permissive mode, until the server is ready for development.


Mike


On 6/26/2014 12:57 AM, Eliezer Croitoru wrote:

Can you verify if SELINUX is enabled\enforced?
If so change it to disabled as a basic test to the ssl_crtd issue.

Eliezer

On 06/26/2014 07:37 AM, Mike wrote:

I am looking for suggestions on a newer or slightly altered startup
script for use with squid 3.4.5 and CentOS based system (Scientific
Linux 6.5).

The issue is that after a system reboot, during startup, the ssl_crtd
helpers crash, causing squid to not load on startup. Yet we can do a
service squid start immediately after it stops, and it starts and
works fine until the next reboot. I suspect there is something needed in
the script to avert this issue since it is a newer squid. I tried the
one that came with 3.4.5 (squid.rc) but it is not functioning
properly on this system.
We have tried a delay script of up to 2 minutes and that is not helping;
any initial startup still has the same problem.

This is a remote server and we need it to work on startup without
needing to spend extra time via SSH after it reboots to start it up every
time, especially once we roll this out to the 5 other servers. I've
checked squid.out, cache.log and the other squid and system related logs,
and none of them give us any idea of why it is doing this only at
startup.


12 seconds after initial startup attempt and multiple ssl_crtd helper
crashes:
Jun 25 23:25:47 i3540 (squid-1): The ssl_crtd helpers are crashing too
rapidly, need help!
Jun 25 23:25:47 i3540 squid[1674]: Squid Parent: (squid-1) process 1762
exited with status 1
Jun 25 23:25:47 i3540 squid[1674]: Squid Parent: (squid-1) process 1762
will not be restarted due to repeated, frequent failures
Jun 25 23:25:47 i3540 squid[1674]: Exiting due to repeated, frequent
failures

Then after we do a service squid start:
Jun 25 23:26:24 i3540 squid[1810]: Squid Parent: will start 1 kids
Jun 25 23:26:25 i3540 squid[1810]: Squid Parent: (squid-1) process 1812
started

and no more crashes.

I have tried at least 3 or 4 versions online and none of them work.
Either they do not work properly with service squid start or there are
other issues.

My current squid init script was borrowed from a previous version
(3.1.10). Again, everything works except the ssl_crtd crashing ONLY on
startup after a reboot:

=

#!/bin/bash
# chkconfig: - 90 25
# pidfile: /var/run/squid.pid
# config: /etc/squid/squid.conf
#
### BEGIN INIT INFO
# Provides: squid
# Short-Description: starting and stopping Squid Internet Object Cache
# Description: Squid - Internet Object Cache. Internet object caching is \
#   a way to store requested Internet objects (i.e., data available \
#   via the HTTP, FTP, and gopher protocols) on a system closer to the \
#   requesting site than to the source. Web browsers can then use the \
#   local Squid cache as a proxy HTTP server, reducing access time as \
#   well as bandwidth consumption.
### END INIT INFO


PATH=/usr/bin:/sbin:/bin:/usr/sbin
export PATH

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

if [ -f /etc/sysconfig/squid ]; then
 . /etc/sysconfig/squid
fi

# don't raise an error if the config file is incomplete
# set defaults instead:
SQUID_OPTS=${SQUID_OPTS:-}
SQUID_PIDFILE_TIMEOUT=${SQUID_PIDFILE_TIMEOUT:-20}
SQUID_SHUTDOWN_TIMEOUT=${SQUID_SHUTDOWN_TIMEOUT:-60}
SQUID_CONF=${SQUID_CONF:-/etc/squid/squid.conf}
SQUID_PIDFILE_DIR=/var/run/squid
SQUID_USER=squid
SQUID_DIR=squid

# determine the name of the squid binary
[ -f /usr/sbin/squid ] && SQUID=squid

prog=$SQUID

# determine which one is the cache_swap directory
CACHE_SWAP=`sed -e 's/#.*//g' $SQUID_CONF | \
 grep cache_dir | awk '{ print $3 }'`

RETVAL=0

probe() {
 # Check that networking is up.
     [ "${NETWORKING}" = "no" ] && exit 1

     [ `id -u` -ne 0 ] && exit 4

 # check if the squid conf file is present
 [ -f $SQUID_CONF ] || exit 6
}

start() {
#   echo 1 minute startup delay - to give ssl_crtd time to restart
properly
#   sleep 60
 # Check if $SQUID_PIDFILE_DIR exists and if not, lets create it
and give squid permissions.
 if [ ! -d $SQUID_PIDFILE_DIR ] ; then mkdir $SQUID_PIDFILE_DIR
; chown -R $SQUID_USER.$SQUID_DIR $SQUID_PIDFILE_DIR; fi
 probe

     parse=`$SQUID -k parse -f $SQUID_CONF 2>&1`
 RETVAL=$?
 if [ $RETVAL -ne 0 ]; then
         echo -n $"Starting $prog: "
 echo_failure
 echo
 echo $parse
 return 1
 fi
 for adir in $CACHE_SWAP; do
 if [ ! -d $adir/00 ]; then
             echo -n "init_cache_dir $adir... "
             $SQUID -z -F -f $SQUID_CONF >> /var/log/squid/squid.out 2>&1
 fi
 done
 echo -n $Starting

Re: [squid-users] Squid exiting on its own at sys startup

2014-06-26 Thread Mike
OS is CentOS based Scientific Linux 6.5. Squid is version 3.4.6 (updated 
today) but was happening as well with 3.4.5.


This happens only after a reboot, so there has to be an issue in the
/etc/init.d/squid startup script causing this. Something on initial
startup is causing it to start and then immediately exit with status 0.
Subsequent startup attempts then cause the ssl_crtd helpers to crash, so
I want to prevent that initial automated exit with status 0.

A manual service squid start allows it to start without a problem.
We even tried a delayed secondary startup in /etc/rc.local pointing to a 
basic (chmod +x) script that says

#!/bin/bash
sleep 60
service squid start

but that doesn't help, the exact same thing happens when it tries to 
start, so I suspect something in the init.d script.


Permissions are all set, selinux is disabled.

From /var/log/messages:

Jun 26 11:41:05 cogicm01 squid[1544]: Squid Parent: will start 1 kids
Jun 26 11:41:05 cogicm01 squid[1544]: Squid Parent: (squid-1) process 
1547 started
Jun 26 11:41:05 cogicm01 squid[1544]: Squid Parent: (squid-1) process 
1547 exited with status 0

Jun 26 11:41:10 cogicm01 squid[1561]: Squid Parent: will start 1 kids
Jun 26 11:41:10 cogicm01 squid[1561]: Squid Parent: (squid-1) process 
1563 started
Jun 26 11:41:10 cogicm01 squid[1561]: Squid Parent: (squid-1) process 
1563 exited with status 0

Jun 26 11:41:15 cogicm01 squid[1566]: Squid Parent: will start 1 kids
Jun 26 11:41:15 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1568 started
Jun 26 11:41:15 cogicm01 (squid-1): The ssl_crtd helpers are crashing 
too rapidly, need help!
Jun 26 11:41:16 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1568 exited with status 1
Jun 26 11:41:19 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1577 started
Jun 26 11:41:19 cogicm01 (squid-1): The ssl_crtd helpers are crashing 
too rapidly, need help!
Jun 26 11:41:19 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1577 exited with status 1
Jun 26 11:41:22 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1610 started
Jun 26 11:41:22 cogicm01 (squid-1): The ssl_crtd helpers are crashing 
too rapidly, need help!
Jun 26 11:41:22 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1610 exited with status 1
Jun 26 11:41:25 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1617 started
Jun 26 11:41:25 cogicm01 (squid-1): The ssl_crtd helpers are crashing 
too rapidly, need help!
Jun 26 11:41:25 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1617 exited with status 1
Jun 26 11:41:28 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1624 started
Jun 26 11:41:29 cogicm01 (squid-1): The ssl_crtd helpers are crashing 
too rapidly, need help!
Jun 26 11:41:29 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1624 exited with status 1
Jun 26 11:41:29 cogicm01 squid[1566]: Squid Parent: (squid-1) process 
1624 will not be restarted due to repeated, frequent failures
Jun 26 11:41:29 cogicm01 squid[1566]: Exiting due to repeated, frequent 
failures



Based on my last email, I adjusted things since the script kept trying to 
remove a pid folder that is there but empty, whereas the pid file itself 
is within the normal /var/run/ folder, not the /var/run/squid/ folder. 
This meant that on shutdown or service restart it was not removing the 
old pid file. So I adjusted the script to remove the .pid file on stop or 
restart, which works. But the above issue on system startup remains.
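
In script terms, that stop/restart cleanup amounts to something like this
(a sketch of the adjustment described above; it assumes the pid file
really does land at /var/run/squid.pid):

# clear the stale pid file from /var/run/ and anything left in the
# (otherwise empty) /var/run/squid/ folder on stop or restart
rm -f /var/run/squid.pid
rm -rf /var/run/squid/*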


The init script:

#!/bin/bash
# chkconfig: - 90 25
# pidfile: /var/run/squid.pid
# config: /etc/squid/squid.conf
#
### BEGIN INIT INFO
# Provides: squid
# Short-Description: starting and stopping Squid Internet Object Cache
# Description: Squid - Internet Object Cache. Internet object caching is \
#   a way to store requested Internet objects (i.e., data available \
#   via the HTTP, FTP, and gopher protocols) on a system closer to the \
#   requesting site than to the source. Web browsers can then use the \
#   local Squid cache as a proxy HTTP server, reducing access time as \
#   well as bandwidth consumption.
### END INIT INFO


PATH=/usr/bin:/sbin:/bin:/usr/sbin
export PATH

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

if [ -f /etc/sysconfig/squid ]; then
. /etc/sysconfig/squid
fi

# don't raise an error if the config file is incomplete
# set defaults instead:
SQUID=squid
SQUID_OPTS=${SQUID_OPTS:-}
SQUID_PIDFILE_TIMEOUT=${SQUID_PIDFILE_TIMEOUT:-20}
SQUID_SHUTDOWN_TIMEOUT=${SQUID_SHUTDOWN_TIMEOUT:-30}
SQUID_CONF=${SQUID_CONF:-/etc/squid/squid.conf}
SQUID_PIDFILE_DIR=/var/run/squid
SQUID_USER=squid
SQUID_DIR=squid

# determine the name of the squid binary
[ -f /usr/sbin/squid ] && SQUID=squid

prog=$SQUID

# determine which one is the cache_swap directory
CACHE_SWAP=`sed -e 's/#.*//g' $SQUID_CONF | \
grep cache_dir | awk '{ print $3 }'`

RETVAL=0

probe() {
 

Re: [squid-users] Squid exiting on its own at sys startup

2014-06-26 Thread Mike
After some deeper digging, it seems selinux was only temporarily 
disabled (via "echo 0 > /selinux/enforce"), not disabled in the primary 
config file. But this actually allowed me to track down a fix to keep 
using selinux (which we definitely need for server security). I am going 
to add it here for others that may run into the same problem (on RedHat, 
CentOS and Scientific Linux), with how to fix it. This allows us to use 
ssl-bump with selinux. I had one system where pinger was also having an 
issue, so I am including that here as well.

Scientific Linux 6.5 (would also work for RedHat and CentOS 6)
squid 3.4.5 and 3.4.6

Edit /etc/selinux/config and change to “permissive”. Then cycle the 
audit logs:

cd /var/log/audit/
mv audit.log audit.log.0
touch audit.log

Then reboot the system and let selinux come back up and catch the items 
in its log (usually ssl_crtd and pinger) located at 
/var/log/audit/audit.log. Many times squid will try to start but end up 
with “the ssl_crtd helpers are crashing too quickly” which will shut the 
squid service down.


 * Install the needed tool for selinux: yum install
   policycoreutils-python (which will also install a few other needed
   dependencies).

ssl_crtd: Start in the /tmp/ folder since we will not need these files for 
long.

 * grep ssl_crtd /var/log/audit/audit.log | audit2allow -m ssl_crtdlocal > ssl_crtdlocal.te
   o outputs the suggested settings into the file ssl_crtdlocal.te,
     which we will review below in “cat”
 * cat ssl_crtdlocal.te # to review the created file and show what will
   be done
 * grep ssl_crtd /var/log/audit/audit.log | audit2allow -M ssl_crtdlocal
   o Note the capital M: this makes the needed file, ready for
     selinux to import, and then the next command below actually
     enables it.
 * semodule -i ssl_crtdlocal.pp

Now for pinger (if needed):

 * grep pinger /var/log/audit/audit.log | audit2allow -m pingerlocal > pingerlocal.te
 * cat pingerlocal.te # to review the created file and show what will
   be done
 * grep pinger /var/log/audit/audit.log | audit2allow -M pingerlocal
 * semodule -i pingerlocal.pp
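
Putting the ssl_crtd piece together, the sequence above runs as one shell
session like this (same commands as above; ssl_crtdlocal is just the
module name chosen here):

cd /tmp
# review what audit2allow proposes before loading anything
grep ssl_crtd /var/log/audit/audit.log | audit2allow -m ssl_crtdlocal > ssl_crtdlocal.te
cat ssl_crtdlocal.te
# build the loadable policy module (capital -M) and install it
grep ssl_crtd /var/log/audit/audit.log | audit2allow -M ssl_crtdlocal
semodule -i ssl_crtdlocal.pp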

After those are entered, go back in and edit /etc/selinux/config and 
change to “enforcing”. Reboot the system one more time and watch the 
logs for any other entries relating to squid like “ssl_crtd” or “pinger” 
(look at the comm="ssl_crtd" aspect) to see if any other squid based 
items need an allowance:


 * type=AVC msg=audit(1403808338.272:24): avc: denied { read } for
   pid=1457 comm="ssl_crtd" name="index.txt" dev=dm-0 ino=5376378
   scontext=system_u:system_r:squid_t:s0
   tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file

   -OR-

 * type=SYSCALL msg=audit(1403808338.272:24): arch=c000003e syscall=2
   success=yes exit=3 a0=cfe2e8 a1=0 a2=1b6 a3=0 items=0 ppid=1454
   pid=1457 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500
   egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295
   comm="ssl_crtd" exe="/usr/lib64/squid/ssl_crtd"
   subj=system_u:system_r:squid_t:s0 key=(null)



Thanks all
Mike


[squid-users] Suggested init.d startup script.

2014-06-25 Thread Mike
 ]; then
RETVAL=1
break
fi
sleep 10 && echo -n "."
timeout=$((timeout+1))
done
fi
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$SQUID
[ $RETVAL -eq 0 ] && echo_success
[ $RETVAL -ne 0 ] && echo_failure
echo
return $RETVAL
}

stop() {
echo -n $"Stopping $prog: "
$SQUID -k check -f $SQUID_CONF >> /var/log/squid/squid.out 2>&1
RETVAL=$?
if [ $RETVAL -eq 0 ] ; then
$SQUID -k shutdown -f $SQUID_CONF &
rm -f /var/lock/subsys/$SQUID
timeout=0
while : ; do
[ -f /var/run/squid.pid ] || break
if [ $timeout -ge $SQUID_SHUTDOWN_TIMEOUT ]; then
echo
return 1
fi
sleep 2 && echo -n "."
timeout=$((timeout+2))
done
echo_success
echo
else
echo_failure
if [ ! -e /var/lock/subsys/$SQUID ]; then
RETVAL=0
fi
echo
fi
rm -rf $SQUID_PIDFILE_DIR/*
return $RETVAL
}

reload() {
$SQUID $SQUID_OPTS -k reconfigure -f $SQUID_CONF
}

restart() {
stop
rm -rf $SQUID_PIDFILE_DIR/*
start
}

condrestart() {
[ -e /var/lock/subsys/squid ] && restart || :
}

rhstatus() {
status $SQUID && $SQUID -k check -f $SQUID_CONF
}


case $1 in
start)
start
;;

stop)
stop
;;

reload|force-reload)
reload
;;

restart)
restart
;;

condrestart|try-restart)
condrestart
;;

status)
rhstatus
;;

probe)
probe
;;

*)
echo $"Usage: $0 {start|stop|status|reload|force-reload|restart|try-restart|probe}"
exit 2
esac

exit $?

=


Any help on this would be appreciated


Mike



Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-19 Thread Mike
I got it figured out... it was the acl rules at the top... since the 
server IP was within the company network and I was testing and accessing 
from outside, it was not working. So at least for testing I needed to 
add my own IP as part of the acl localnet... after that it worked 
perfectly.


Thanks to everyone for their help.


Mike


On 6/18/2014 4:59 PM, Mike wrote:
I (think) got it figured out... seemed that port 3128 was the 
problem... not sure why this provider blocks that port but as soon as 
I changed the squid.conf http_port to 8080, it worked right away.


Thanks for everyones help!

Mike


On 6/18/2014 12:35 PM, Mike wrote:
I compiled source 3.4.5 from squid-cache.org with all the needed 
rules and it is still refusing all connections.
OS (on all 3 tested systems) is Scientific Linux 6.5, kernel is 
2.6.32.431.17.1.el6. The latest squid version available in their repo 
is 3.1.10 which does not have the needed SSL related options.


No unusual errors with configure, make or make install.




So any suggestions or other items to check?


Mike

On 6/17/2014 12:34 AM, Amos Jeffries wrote:

On 17/06/2014 10:30 a.m., Mike wrote:

Running into another issue, not sure whats going on here.

ALL HTTPS connections are being denied. Temporarily, selinux is disabled
and firewall is off. We have it working on 2 other servers with same OS,
same kernel, same settings but it is just this one that refuses to allow
connections to HTTPS sites.

We went with this version since none of the other rpms (3.4x and newer)
we could find included the ssl_crtd without manually compiling the
entire thing, which we wanted to stay away from if possible, due to ease
of updating squid at some point down the road on many servers without
having to recompile on dozens (or maybe hundreds by then) when it comes
time.

The cache.log shows no errors. squid -k parse shows no errors.

[root@servername $]# yum info squid
Loaded plugins: security
Installed Packages
Name: squid
Arch: x86_64
Epoch   : 7
Version : 3.5.0.001
Release : 1.el6
Size: 8.2 M
Repo: installed

[root@servername $]# squid -v
Squid Cache: Version 3.HEAD-20140127-r13248

Hi Mike,
  that package is several months old now and this sounds like one of the
bugs now fixed. I'm sending Eliezer a request to update the package, you
may want to do so as well.

I don't see any http_access lines at all in the below config file. Squid
security policy is closed by default, so if you omit all access
permissions nothing is permitted.



 From access.log:
TCP_DENIED/403 3742 CONNECT www.facebook.com:443 - HIER_NONE/- 
text/html

TCP_DENIED/403 3733 CONNECT startpage.com:443 - HIER_NONE/- text/html
TCP_DENIED/403 3736 CONNECT www.google.com:443 - HIER_NONE/- text/html

Rules are same as previously mentioned:

# Squid normally listens to port 3128
http_port 3128
http_port 3129 intercept
https_port 3130 intercept ssl-bump connection-auth=off
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db 
-M 16MB

sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost
always_direct allow all

visible_hostname x.xx.net
cache_mgr x...@xx.net
dns_nameservers xx.xx.xx.xx yy.yy.yy.yy zz.zz.zz.zz
hosts_file /etc/hosts

#cache_access_log /dev/null
#cache_store_log none
#cache_log /dev/null
# acl blacklist dstdomain -i /etc/squid/domains
# http_access deny blacklist


#  Below line is for troubleshooting only, comment out when sys goes to
production
cache_access_log /var/log/squid/access.log

The above line should be:
   access_log /var/log/squid/access.log

Also, the cache_log and debug_options lines should remain like this in
production if at all possible. You can start Squid with the -s command
line option to pipe the cache critical messages to syslog but Squid
should always have a cache.log for a backup troubleshooting information
source.


cache_store_log /var/log/squid/store.log
cache_log /var/log/squid/cache.log
debug_options ALL,0

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 1 32 512
cache_effective_user squid

The cache store (store.log) shows a lot of entries like this:
RELEASE -1  10808232E705173EC05BDDACC2C6F47F   ? ?
? ? ?/? ?/? ? ?

Not to worry, temporary files used as disk-backing store for some
transactions. We have not yet fully removed the need for this type of
file from Squid.


HTH
Amos









Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-18 Thread Mike
I compiled source 3.4.5 from squid-cache.org with all the needed rules 
and it is still refusing all connections.
OS (on all 3 tested systems) is Scientific Linux 6.5, kernel is 
2.6.32.431.17.1.el6. The latest squid version available in their repo is 
3.1.10 which does not have the needed SSL related options.


No unusual errors with configure, make or make install.
-
./configure '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-follow-x-forwarded-for' 
'--enable-auth' 
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group' '--enable-cache-digests' 
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools' 
'--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' 
'--enable-linux-netfilter' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2' 
'--enable-esi' '--enable-ssl' '--enable-ssl-crtd' '--enable-icmp' 
'--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
'--with-dl' '--with-openssl' '--with-pthreads' '--with-included-ltdl' 
'--disable-arch-native' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe 
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'

-

It works fine on the 2 test servers, yet once again on the production 
server it is not working. Iptables is turned off, selinux is disabled. 
The rules and the settings are all the same as previously mentioned below, 
and yes, we kept the default acl rules.


All traffic (both secure and insecure) is being stopped like this in the 
access.log:


TCP_DENIED/403 3725 GET http://yahoo.com/ - HIER_NONE/- text/html
TCP_DENIED/403 3811 GET http://www.squid-cache.org/ - HIER_NONE/- text/html

TCP_DENIED/403 3674 CONNECT local.nixle.com:443 - HIER_NONE/- text/html
TCP_DENIED/403 3677 CONNECT www.facebook.com:443 - HIER_NONE/- text/html
TCP_DENIED/403 3671 CONNECT www.google.com:443 - HIER_NONE/- text/html

Now the strange part is when we enable dansguardian via port 10101, all 
insecure traffic works just fine.



Before running configure, make and make install, I grabbed all the needed 
packages:


 * wget http://www.squid-cache.org/Versions/v3/3.4/squid-3.4.5.tar.gz

Usual build chain:

 * yum install perl gcc autoconf automake make wget

Extra pkgs for CentOS/SL based installs:

 * yum install libxml2-devel libcap-devel gcc gcc-c++ avr-gcc-c++
   libtool-ltdl-devel openssl-devel ksh perl-Crypt-OpenSSL-X509.x86_64

During the build I got no unusual errors in the configure, make or make 
install process.




Here is the full squid.conf:


# Mike 20140618 commented unneeded networks
# acl localnet src 10.0.0.0/8   # RFC1918 possible internal network
acl localnet src 66.xx.0.0/16  # our internal network
# acl localnet src 172.16.0.0/12# RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
# acl localnet src fc00::/7   # RFC 4193 local private network range
# acl localnet src fe80::/10  # RFC 4291 link-local (directly 
plugged) machines


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny

Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-18 Thread Mike
I got it figured out... seemed that port 3128 was the problem... not 
sure why this provider blocks that port but as soon as I changed the 
squid.conf http_port to 8080, it worked right away.


Thanks for everyones help!

Mike









Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-16 Thread Mike

Running into another issue, not sure whats going on here.

ALL HTTPS connections are being denied. Temporarily, selinux is disabled 
and firewall is off. We have it working on 2 other servers with same OS, 
same kernel, same settings but it is just this one that refuses to allow 
connections to HTTPS sites.


We went with this version since none of the other rpms (3.4x and newer) 
we could find included the ssl_crtd without manually compiling the 
entire thing, which we wanted to stay away from if possible, due to ease 
of updating squid at some point down the road on many servers without 
having to recompile on dozens (or maybe hundreds by then) when it comes 
time.


The cache.log shows no errors. squid -k parse shows no errors.

[root@servername $]# yum info squid
Loaded plugins: security
Installed Packages
Name: squid
Arch: x86_64
Epoch   : 7
Version : 3.5.0.001
Release : 1.el6
Size: 8.2 M
Repo: installed

[root@servername $]# squid -v
Squid Cache: Version 3.HEAD-20140127-r13248
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-eui' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group' '--enable-cache-digests' 
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools' 
'--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' 
'--enable-linux-netfilter' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-ssl-crtd' 
'--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2' '--enable-esi' 
'--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
'--with-dl' '--with-openssl' '--with-pthreads' '--disable-arch-native' 
'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe 
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'




From access.log:
TCP_DENIED/403 3742 CONNECT www.facebook.com:443 - HIER_NONE/- text/html
TCP_DENIED/403 3733 CONNECT startpage.com:443 - HIER_NONE/- text/html
TCP_DENIED/403 3736 CONNECT www.google.com:443 - HIER_NONE/- text/html

Rules are same as previously mentioned:

# Squid normally listens to port 3128
http_port 3128
http_port 3129 intercept
https_port 3130 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB 
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost
always_direct allow all

visible_hostname x.xx.net
cache_mgr x...@xx.net
dns_nameservers xx.xx.xx.xx yy.yy.yy.yy zz.zz.zz.zz
hosts_file /etc/hosts

#cache_access_log /dev/null
#cache_store_log none
#cache_log /dev/null
# acl blacklist dstdomain -i /etc/squid/domains
# http_access deny blacklist


#  Below line is for troubleshooting only, comment out when sys goes to 
production

cache_access_log /var/log/squid/access.log
cache_store_log /var/log/squid/store.log
cache_log /var/log/squid/cache.log
debug_options ALL,0

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 1 32 512
cache_effective_user squid

The cache store (store.log) shows a lot of entries like this:
RELEASE -1  10808232E705173EC05BDDACC2C6F47F   ? ? 
? ? ?/? ?/? ? ?


So any idea why this one system is always showing TCP_DENIED on the 
secure sites despite having the same settings as other servers at the same 
location?


Thanks,
Mike


Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-13 Thread Mike

On 6/13/2014 10:02 AM, Alex Rousskov wrote:

On 06/12/2014 08:36 PM, Mike wrote:


So then next question is how do I know for sure ssl-bump is working?

A simple test is to look at the root CA certificate shown by the browser
at the *top* of the certificate chain for a secure (https) site. Please
note that you should not be looking at the site certificate. You should
be looking at the certificate that was used to sign the site certificate
(or the certificate that was used to sign the certificate that was used
to sign the site certificate, etc. -- go to the root of the certificate
chain).

If that root certificate is yours, then the site was bumped. If it is an
official root CA from a well-known company, the site was not bumped.

To check SslBump for many sites, you have to examine Squid logs which is
more difficult, especially if you test this with a mix of secure and
insecure traffic.


HTH,

Alex.
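
One way to run that check from the command line instead of a browser (an
illustrative example, not from the thread; run it from a client whose
traffic actually passes through the proxy, against any https host):

# print the issuer of the certificate the client actually receives;
# if bumping is working, this names your local CA, not a public one
openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer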

If that's the case then ssl-bump is not working. The root certificates 
all show the mainstream companies, Digicert, Godaddy, Verisign, etc.



Mike


[squid-users] Issues with ssl-bump in 3.HEAD

2014-06-12 Thread Mike
I have been racking my brain trying to get this working and each time, 
it refuses to connect to secure sites. In the end we need a working 
squid proxy for SSL connections within the company network which will 
serve over 1000 users (thus the larger 8MB cert cache size). We already 
have the insecure HTTP proxy working fine (thus the use of port 3129 
below).
Since it will be SSL based, I know it needs https_port (not http_port), 
ssl-bump, and intercept (required by ssl-bump). The https_port and 
ssl-bump documentation also mentioned the preference for sslflags (which 
may or may not be working in 3.HEAD) and cipher.


OS is Scientific Linux 6.5 (based on CentOS) fully up to date with yum. 
Server is quad core 3.4GHz, 8GB DDR3 with no other uses (like web 
server, etc).
SELinux has been set to permissive mode so it only reports, doesn't 
block the needed connections (although I also tested with it disabled 
and made no difference).

[root@localhost ~]# sestatus
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: permissive
Mode from config file: permissive
Policy version: 24
Policy from config file: targeted

Essential squid.conf lines (I have tested it with and without the 
sslflags, does not impact it working or not working):


https_port 3129 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=8MB 
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key 
sslflags=DELAYED_AUTH 
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 8MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost

Local certs have been created and self signed, and the .der cert has 
been imported into the test browser (Firefox 30.0).


Squid info (includes the needed '--enable-ssl' '--enable-ssl-crtd' 
'--with-openssl'):


[root@localhost ~]# squid -v
Squid Cache: Version 3.HEAD-20140127-r13248
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-eui' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group' '--enable-cache-digests' 
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools' 
'--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' 
'--enable-linux-netfilter' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-ssl-crtd' 
'--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2' '--enable-esi' 
'--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
'--with-dl' '--with-openssl' '--with-pthreads' '--disable-arch-native' 
'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe 
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'



In the end testing with only the SSL proxy set to this server via port 
3129, it tries loading the secure website for 2-3 minutes and then times 
out. Checking top, it shows squid running at 12.1g VIRT, 2.0g RES, 
54.5% of MEM (server has 8GB) and using 100% of CPU2. The 
../squid/access.log and cache_access.log shows no new entries at all. We 
had to disable the cache.log (cache_log /dev/null) as it continuously 
recorded everything and quickly took up all the space on the 80GB hard 
drive.


So the question is what is going wrong that it is refusing to let ANY 
secure site load and how can we get this resolved?

We greatly appreciate any help on this.

Mike


Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-12 Thread Mike

On 6/12/2014 2:06 PM, Guy Helmer wrote:

If I understand correctly, you are attempting to use port 3129 as a forward 
proxy. If so, you shouldn’t need the “intercept” option on 3129, and you should 
change it to http_port since squid will be directly


Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-12 Thread Mike

On 6/12/2014 4:11 PM, Amos Jeffries wrote:


So the question is what is going wrong that it is refusing to
let ANY secure site load and how can we get this resolved? We
greatly appreciate any help on this.

The answer is those cache.log messages you disabled.

Start by re-enabling

Re: [squid-users] Issues with ssl-bump in 3.HEAD

2014-06-12 Thread Mike
So then next question is how do I know for sure ssl-bump is working? So 
far all the certificates match the servers, and I am able to use acls 
like 'acl blacklist1 dstdom_regex -i /etc/blacklists/dom_bl' to block 
domains, even if it is a secure site... but overall I have not figured 
out how to tell if ssl-bump is actually working or not.


Thank you

Mike


On 6/12/2014 6:04 PM, Eliezer Croitoru wrote:

On 06/13/2014 01:04 AM, Mike wrote:

So I re-add it for testing:

http_port 3128
http_port 3129 intercept ssl-bump... blah blah

You cannot use this and the cache.log will tell you that...
Try to setup the server like this:
http_port 3128
http_port 13129 intercept
https_port 13130 intercept ssl-bump ...

With just basic settings.
And still it looks like a loop, so what are the iptables rules you are 
using?


Eliezer
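
For reference, an intercept setup of that shape is normally paired with
nat-table REDIRECT rules along these lines (an illustrative sketch, not
Eliezer's actual rules; the LAN interface name is an assumption):

# on the squid box: divert LAN web traffic arriving on eth1 to the intercept ports
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 13129
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-ports 13130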





[squid-users] Re: BUG 3279: HTTP reply without Date

2014-06-03 Thread Mike Mitchell
I followed the advice found here:
  http://www.mail-archive.com/squid-users@squid-cache.org/msg95078.html

Switching to diskd from aufs fixed the crashes for me.
I still get 
  WARNING: swapfile header inconsistent with available data
messages in the log.  They appear within an hour of starting with a clean cache.
When I clean the cache I stop squid, rename the cache directory, create a new 
cache directory,
start removing the old cache directory, then run squid -z before starting 
squid.

I run the following commands:

/etc/init.d/squid stop
sleep 5
rm -f /var/squid/core*
rm -f /var/squid/swap.state*
rm -rf /var/squid/cache.hold
mv /var/squid/cache /var/squid/cache.hold
rm -rf /var/squid/cache.hold &
squid -z
/etc/init.d/squid start

I'm running on a Red Hat Linux VM.
Here is the output of 'uname -rv':
   2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010

Squid Cache: Version 3.4.5-20140514-r13135
configure options: '--with-maxfd=16384' '--enable-storeio=diskd' 
'--enable-removal-policies=heap' '--enable-delay-pools' '--enable-wccpv2' 
'--enable-auth-basic=DB NCSA NIS POP3 RADIUS fake getpwnam' 
'--enable-auth-digest=file' '--with-krb5-config=no' '--disable-auth-ntlm'  
'--disable-auth-negotiate' '--disable-external-acl-helpers' '--disable-ipv6' 
--enable-ltdl-convenience

Mike Mitchell





[squid-users] Re: BUG 3279: HTTP reply without Date

2014-06-02 Thread Mike Mitchell
I too see this problem with squid 3.4.5  under aufs.  I switched 100+ Linux 
caches over from aufs to diskd and no longer see crashes.

Mike Mitchell



[squid-users] Re: swapfile header inconsistent

2014-05-22 Thread Mike Mitchell
Amos Jeffries wrote:
 Are you using the StoreID or SMP features of Squid?
  Is there another Squid instance running on the same box and perhaps
 altering the cache?
 
 Amos

The answer is no to both questions.  I am not using StoreID or SMP features.  
There is not another Squid instance running.

I see this behavior regularly on all of my caches.  Most are running 3.3.12, 
but I've started switching to 3.4.5 in hopes of reducing the 'isEmpty()' 
crashes.  The 'isEmpty()' crashes are preceded by  'missing date' messages (bug 
3279).
 
Mike Mitchell


[squid-users] swapfile header inconsistent

2014-05-21 Thread Mike Mitchell
I'm running squid 3.4.5-20140514-r13135

I started switching over to diskd from aufs because I was tired of all the 
is_empty() crashes.
I stopped squid, removed the cache directory and swapfile completely, then 
started squid with the '-z' option to rebuild the cache directory.

Within a half-hour my cache.log file started reporting lines like:

2014/05/21 11:09:26 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:09:56 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:10:49 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:11:04 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:11:19 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:11:34 kid1| WARNING: swapfile header inconsistent with available 
data
2014/05/21 11:12:04 kid1| WARNING: swapfile header inconsistent with available 
data

At least it is not crashing.  This instance was started with a clean cache.

Mike Mitchell






Re: [squid-users] IPv6 + Intercept proxy

2013-10-23 Thread Mike Cardwell
* on the Wed, Oct 23, 2013 at 05:14:00PM +1300, Amos Jeffries wrote:

   For starters NAT has never been transparent proxy. NAT is the lazy 
 admin's replacement, using the proxy IP on outbound to avoid having to 
 set up proper routing rules.
 For the real Transparent Proxy use TPROXY interception (TPROXY being 
 an abbreviation of "transparent proxy").

Thanks. I was not aware of TPROXY. That sounds like a superior solution.
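
For anyone finding this thread later, a TPROXY setup pairs a tproxy-flagged
port in squid.conf with mangle-table rules and a policy route, roughly like
this (a sketch following the Squid wiki's TPROXY feature page; the mark,
table number and port are illustrative):

# squid.conf
http_port 3129 tproxy

# netfilter: mark intercepted flows and hand them to the local stack
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

# route marked packets via loopback so squid can accept them
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100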

-- 
Mike Cardwell  https://grepular.com/ http://cardwellit.com/
OpenPGP Key35BC AF1D 3AA2 1F84 3DC3  B0CF 70A5 F512 0018 461F
XMPP OTR Key   8924 B06A 7917 AAF3 DBB1  BF1B 295C 3C78 3EF1 46B4


signature.asc
Description: Digital signature


[squid-users] IPv6 + Intercept proxy

2013-10-22 Thread Mike Cardwell
http://wiki.squid-cache.org/Features/IPv6#NAT_Interception_Proxy_.28aka_.22Transparent.22.29
says:

  "NAT simply does not exist in IPv6. By Design."

This is no longer true as of Linux 3.7 + IPTables 1.4.17.
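
With those versions, the kernel side of the interception can be expressed
the same way as for IPv4, e.g. (an illustrative rule; assumes squid
listening with intercept on port 3129):

ip6tables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129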

I wanted to introduce a transparent caching web proxy on my network,
however most of my clients are dual IP stack. As it stands, if I use
Squid, whenever those clients connect to an IPv6 address instead of
an IPv4 address, they will bypass the caching proxy.

Is there a plan to make the intercept argument to http_port work
with IPv6?

P.S. Sorry if this email comes through twice. I sent it from the wrong
address last time.

-- 
Mike Cardwell  https://grepular.com/ http://cardwellit.com/
OpenPGP Key35BC AF1D 3AA2 1F84 3DC3  B0CF 70A5 F512 0018 461F
XMPP OTR Key   8924 B06A 7917 AAF3 DBB1  BF1B 295C 3C78 3EF1 46B4


signature.asc
Description: Digital signature


RE: [squid-users] parent request order

2013-06-26 Thread Mike Mitchell
I use

cache_peer pp01 parent 3128 0 name=dp round-robin weight=100
cache_peer pp02 parent 3128 0 name=p1 round-robin
cache_peer pp03 parent 3128 0 name=p2 round-robin

This puts the three parents into a round-robin pool, but weights pp01
much heavier.  pp01 will be chosen over pp02 and pp03 unless
pp01 stops responding.

Squid resets the counters used for comparison every five minutes,
so you don't have to worry about pp01 accumulating so many
requests that the other parents are used.

There is one problem with this.  The counters are reset to zero,
and the comparison is done by dividing the count by the weight.
Zero divided by a large number is still zero, same as the other
parents.  Every five minutes all the parents are equally preferred,
until each parent gets one request.
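
To make the arithmetic concrete, the selection effectively compares
weighted counts like this (a simplified sketch, not Squid's actual code):

/* prefer the peer with the lowest requests-per-weight ratio */
if (p->rr_count / p->weight < best->rr_count / best->weight)
    best = p;
/* right after a reset to zero, 0/100 == 0/1 == 0, so every peer
   ties and the weight has no effect until each takes one request */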

I have patched my version of squid so that it resets the counters to
one instead of zero.
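
To put numbers on it: just after a reset to zero, pp01 scores 0/100 = 0 while
pp02 and pp03 score 0/1 = 0 -- a three-way tie, so the weight is ignored until
each parent has taken a request.  Resetting to one instead gives pp01 a score
of 1/100 = 0.01 against 1/1 = 1 for the others, and the weighted preference
holds from the first request.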

The patch follows:

*** src/cache_cf.cc.orig    2013-04-26 23:07:29.0 -0400
--- src/cache_cf.cc    2013-05-03 16:41:03.0 -0400
***************
*** 2044,2049 ****
--- 2044,2050 ----
  p->icp.port = CACHE_ICP_PORT;
  p->weight = 1;
  p->basetime = 0;
+ p->rr_count = 1;
  p->stats.logged_state = PEER_ALIVE;

  if ((token = strtok(NULL, w_space)) == NULL)
*** src/neighbors.cc.orig    2013-04-26 23:07:29.0 -0400
--- src/neighbors.cc    2013-05-07 11:15:25.0 -0400
***************
*** 421,427 ****
  {
  peer *p = NULL;
  for (p = Config.peers; p; p = p->next) {
! p->rr_count = 0;
  }
  }

--- 421,427 ----
  {
  peer *p = NULL;
  for (p = Config.peers; p; p = p->next) {
! p->rr_count = 1;
  }
  }

Mike Mitchell


From: T Ls [t...@pries.pro]
Sent: Monday, June 24, 2013 4:15 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] parent request order

On 24.06.2013 21:51, Marcus Kool wrote:

> ... Can't you make a setup where S1 uses P2 if P1 fails?

No, this mapping is fixed.

> In an old thread I read that Squid has a configuration option FIRST_UP_PARENT,
> so it can be configured to use P1, and P2 only if P1 is not available.

Yep, this kind of prioritization is exactly what I'm looking for; I'm
going to search for this FIRST_UP_PARENT option tomorrow.


Thanks so far.

Thomas





[squid-users] RE: Squid CPU 100% infinite loop

2013-06-24 Thread Mike Mitchell
It appears that moving to 3.3.5-20130607-r12573 from 3.2.11-20130524-r11822
has eliminated my problem.  I have seen a few unexplainable spikes in CPU
usage, but they haven't lasted long and squid has remained responsive.
I've been running 3.3.5-20130607-r12573 for just over two weeks without a
problem.

Mike Mitchell
__
From: Stuart Henderson [s...@spacehopper.org]
Sent: Friday, June 21, 2013 9:57 AM
To: squid-users@squid-cache.org
Subject: Re: Squid CPU 100% infinite loop

On 2013-05-28, Stuart Henderson s...@spacehopper.org wrote:
> On 2013-05-17, Alex Rousskov rouss...@measurement-factory.com wrote:
>> On 05/17/2013 01:28 PM, Loïc BLOT wrote:
>>
>>> I have found the problem. In fact it's the problem mentioned in my
>>> last mail: the Squid FD limit was reached, but Squid didn't report
>>> an FD-limit problem every time the freeze appeared, which made
>>> debugging difficult.
>>
>> Squid should warn when it runs out of FDs. If it does not, it is a
>> bug. If you can reproduce this, please open a bug report in bugzilla
>> and post relevant logs there.
>>
>> FWIW, I cannot confirm or deny whether reaching FD limit causes what
>> you call an infinite loop -- there was not enough information in your
>> emails to do that. However, if reaching FD limit causes high CPU
>> usage, it is a [minor] bug.
>
> I've just hit this one; ktrace shows that it's in a tight loop doing
> sched_yield(). I'll try to reproduce on a non-production system and open
> a ticket if I get more details.

I haven't reproduced this in squid yet, but I recently hit a case with
similar symptoms in another threaded program on OpenBSD, which got stuck in a
loop on sched_yield if it received a signal while forking; this has now been
fixed in the thread library. So if anyone knows how to reproduce it, please
try again after updating src/lib/librthread/rthread_fork.c to r1.8.






[squid-users] RE: Squid CPU 100% infinite loop

2013-06-12 Thread Mike Mitchell
The FD limit is 16384.  During the day I see peak utilization around
8,000.  At night the utilization is less than 1,000.  During the four
hours the CPU rises from 10% to 100% the FD utilization stays
less than 1,000.

Again, I have not seen this problem under load, only while
squid is relatively idle.  It usually starts around 10:00 PM, and
is not related to log rotation.

# uname -a
Linux pxsrv03 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 
x86_64 GNU/Linux

# /opt/squid/bin/squid -v
Squid Cache: Version 3.3.5-20130607-r12573
configure options:  '--prefix=/opt/squid' '--with-maxfd=16384' 
'--with-pthreads' '--enable-storeio=aufs' '--enable-removal-policies=heap' 
'--disable-external-acl-helpers' '--disable-ipv6' --enable-ltdl-convenience
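
For what it's worth, checking the limits the way Loïc suggests below looks
like this (a sketch; 'nobody' is the account squid runs as here):

  # effective FD limit for the user squid runs as
  su -s /bin/sh nobody -c 'ulimit -n'

  # what squid itself reports
  squidclient mgr:info | grep -i 'file desc'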


Mike Mitchell

On 11/06/2013 10:42:32 -0700, Loïc BLOT wrote:
> Hello Mike,
> please look at the number of system file descriptors opened, the squid
> limit, and the squid user limit. I have this problem on 3.2 and 3.3
> because squid was at the FD limit. (Look at the system FD limit for
> squid: run ulimit -n as the squid user.)
> --
> Best regards,
> Loïc BLOT,
> UNIX systems, security and network expert
> http://www.unix-experience.fr


[squid-users] RE: Squid CPU 100% infinite loop

2013-06-11 Thread Mike Mitchell
I dropped the cache size to 150 GB instead of 300 GB.  Cached object count 
dropped
from ~7 million to ~3.5 million.  After a week I saw one occurrence of the same 
problem.
CPU usage climbed steadily over 4 hours from 10% to 100%, then squid became
unresponsive for 20 minutes.  After that it picked up as if nothing had 
happened -- no
error messages in any logs, no restarts, no core dumps.

I'm now testing again using version 3.3.5-20130607-r12573 instead of 
3.2.11-20130524-r11822.
I've left everything else the same, with the cache size still at 150 GB.

Mike Mitchell

On 30/05/2013 08:43:24 -0700, Ron Wheeler wrote:

> Some ideas here.
> http://www.freeproxies.org/blog/2007/10/03/squid-cache-disk-io-performance-enhancements/
> http://www.gcsdstaff.org/roodhouse/?p=2784
>
> You might try dropping your disk cache to 50 GB and see what happens.
>
> I am not sure that caching 7 million pages gives you much of an advantage
> over 1 million. The 1,000,001st most popular page probably does not come up
> that often, and by the time you get down to the page that is 7,000,000th in
> the list of most accessed pages, you are not seeing much demand for that page.
>
> Probably most of the cache is just accessed once.
>
> Your cache_mem looks low. That is not related to your problem, but raising it
> would improve performance a lot. Getting a few thousand of the most active
> pages in memory is worth a lot more than 6 million of the least active pages
> sitting on a disk.
>
> I am not a big squid expert but have run squid for a long time.
>
> Ron



[squid-users] Squid CPU 100% infinite loop

2013-05-30 Thread Mike Mitchell
What garbage collection parameters can I change?
I'm not using authentication, so the default
   auth_param digest nonce_garbage_interval 5 minutes
doesn't really apply.
I also run with
   client_db off
so the default
  authenticate_cache_garbage_interval 1 hour
doesn't apply either.

The lock-up happens randomly across the four servers.  I can go several
days without a lock-up.  I've only seen one lock-up in a night.  Over the
last two nights I had lock-ups both nights, but on different servers.

# squid -v
Squid Cache: Version 3.2.11-20130524-r11822
configure options:  '--prefix=/opt/squid' '--with-maxfd=16384' 
'--with-pthreads' '--enable-storeio=aufs' '--enable-removal-policies=heap'  
'--disable-external-acl-helpers' '--disable-ipv6' --enable-ltdl-convenience

Here are the relevant parts of the configuration:

acl CIDR_A  src 10.0.0.0/8
ident_lookup_access allow CIDR_A
http_port 3128
cache_mem 1024 MB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /cache/squid 323368 64 253 max-size=512000
maximum_object_size 500 KB
cache_swap_low 95
cache_swap_high 97
cache_store_log none
client_idle_pconn_timeout 5 seconds
check_hostnames on
allow_underscore on
dns_defnames on
dns_v4_first on
ipcache_size 4096
fqdncache_size 8192
client_db off

I have about 7,000,000 objects in the cache.  During the four
hours the CPU rises from 10% to 100%, the number of objects
does not change by very much.  The cache utilization sits at
95% during the four hours.



[squid-users] RE: Squid CPU 100% infinite loop

2013-05-29 Thread Mike Mitchell
I've hit something similar.  I have four identically configured systems with 
16K squid FD limit, 24 GB RAM, 300 GB cache directory.  I've seen the same 
failure randomly on all four systems.  During the day the squid process handles
over 100 requests/second, with a peak FD usage around 8K FDs.  In the evenings
the load drops to about 20 requests/second, with an FD usage around 1K FDs.
CPU usage hovers below 10% during this time.
Randomly one of the four systems will start increasing its CPU usage.  It takes 
about 4 hours to go from less than 10% to 100%.  During the four hours the FD 
usage stays at 1K and the request rate stays right around 20 requests/second.  
Once the CPU reaches 100% the squid service stops responding.  About 20 minutes 
later it starts responding again with CPU levels back down below 10%.  There is 
nothing in the cache log to indicate a problem.  The squid process did not core 
dump, nor did the parent restart a child.

I have not seen the problem during the day, only after the load drops.  The 
hangs do not coincide with the scheduled log rotates.  The one last night 
recovered a half-hour before the log rotated at 2:00 AM.

Every one of my hangs has been preceded by a rise in CPU usage, and squid
recovers on its own without logging anything.

I have a script that does
  GET cache_object://localhost/info
  GET cache_object://localhost/counters
every five minutes and puts the interesting (to me) bits into RRD files.
Obviously the script fails during the 20 minutes the squid process is 
non-responsive.
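
In outline the script is just this (a sketch with made-up file names;
squidclient's mgr: interface fetches the same cache_object URLs):

  #!/bin/sh
  # pull one counter out of the cache manager and store it in RRD
  INFO=`squidclient -h localhost -p 3128 mgr:info`
  FD=`echo "$INFO" | awk -F: '/file desc currently in use/ {print $2+0}'`
  rrdtool update /var/rrd/squid_fd.rrd N:$FD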


From: Stuart Henderson [s...@spacehopper.org]
Sent: Tuesday, May 28, 2013 12:01 PM
To: squid-users@squid-cache.org
Subject: Re: Squid CPU 100% infinite loop

On 2013-05-17, Alex Rousskov rouss...@measurement-factory.com wrote:
> On 05/17/2013 01:28 PM, Loïc BLOT wrote:
>
>> I have found the problem. In fact it's the problem mentioned in my
>> last mail: the Squid FD limit was reached, but Squid didn't report
>> an FD-limit problem every time the freeze appeared, which made
>> debugging difficult.
>
> Squid should warn when it runs out of FDs. If it does not, it is a
> bug. If you can reproduce this, please open a bug report in bugzilla
> and post relevant logs there.
>
> FWIW, I cannot confirm or deny whether reaching FD limit causes what
> you call an infinite loop -- there was not enough information in your
> emails to do that. However, if reaching FD limit causes high CPU
> usage, it is a [minor] bug.

I've just hit this one; ktrace shows that it's in a tight loop doing
sched_yield(). I'll try to reproduce on a non-production system and open
a ticket if I get more details.






[squid-users] Can squid be a fully transparent proxy ?

2013-01-17 Thread Holmes, Michael A (Mike)
Basically, can squid be the endpoint for TCP connections, and establish a new 
outgoing TCP connection to the destination server?

Mike



[squid-users] RE: exceeding cache_dir size

2013-01-16 Thread Mike Mitchell
The patch did not have the desired effect.
I still exceeded the disk space specified on the partition.
After many
  diskHandleWrite: FD 35: disk write error: (28) No space left on device
messages, squid terminated with
  WARNING: swapfile header inconsistent with available data
  FATAL: Received Segment Violation...dying.

Mike Mitchell

From: Mike Mitchell
Sent: Monday, January 14, 2013 3:35 PM
To: squid-users@squid-cache.org
Subject: RE: exceeding cache_dir size

I'm using a belt-and-suspenders approach.
I've installed 3.2.6 with the patch from 
http://bugs.squid-cache.org/show_bug.cgi?id=3686
My cache_dir line now looks like

  cache_dir aufs /cache/squid/aufs 3800 15 253 max-size=134217728
  maximum_object_size 131072 KB
  cache_swap_state /var/squid/swap.state

I now specify a max-size on the cache_dir line, and I've moved the
swap.state file to a different disk partition.

So far I've updated 14 of my 61 proxy servers running 3.2.
I have another 53 that are stuck on 2.7STABLE9.

Mike Mitchell




[squid-users] RE: exceeding cache_dir size

2013-01-16 Thread Mike Mitchell
Turns out the cache directory was filled up with
core files caused by bug #3732.
http://bugs.squid-cache.org/show_bug.cgi?id=3732

I had compiled with --disable-ipv6, yet the core file
shows that Ip::Address::GetAddrInfo() was called
with force set to zero and m_SocketAddr.sin6_addr
set to all zeros.  This fails the (force == AF_UNSPEC && IsIPv4())
test, causing an assert.

Yet another patch to try

From: Mike Mitchell
Sent: Wednesday, January 16, 2013 11:54 AM
To: squid-users@squid-cache.org
Subject: RE: exceeding cache_dir size

The patch did not have the desired effect.
I still exceeded the disk space specified on the partition.
After many
  diskHandleWrite: FD 35: disk write error: (28) No space left on device
messages, squid terminated with
  WARNING: swapfile header inconsistent with available data
  FATAL: Received Segment Violation...dying.

Mike Mitchell





[squid-users] RE: exceeding cache_dir size

2013-01-14 Thread Mike Mitchell
I'm using a belt-and-suspenders approach.
I've installed 3.2.6 with the patch from 
http://bugs.squid-cache.org/show_bug.cgi?id=3686
My cache_dir line now looks like

  cache_dir aufs /cache/squid/aufs 3800 15 253 max-size=134217728
  maximum_object_size 131072 KB
  cache_swap_state /var/squid/swap.state

I now specify a max-size on the cache_dir line, and I've moved the
swap.state file to a different disk partition.

So far I've updated 14 of my 61 proxy servers running 3.2.
I have another 53 that are stuck on 2.7STABLE9.
 
Mike Mitchell




[squid-users] exceeding cache_dir size

2013-01-09 Thread Mike Mitchell
I'm having problems with squid 3.2.5 exceeding the cache_dir size.
I have a 5 GB disk partition with nothing else on it, with a cache_dir
size of 3800 MB:

cache_dir aufs /cache/squid/aufs 3800 15 253
maximum_object_size 131072 KB

Today I found squid had terminated and the /cache partition was
100% full.

After a little investigation in the cache directory I found this file:
# ls -l 02/8C/00027F4A
-rw-r----- 1 nobody nobody 915697664 Jan  8 21:15 02/8C/00027F4A

Very strange, a 900 MB file stored when I have a maximum_object_size of 128 MB.

Here's the header of the file, with the initial binary data stripped out:

http://152.2.63.23/WUNC HTTP/1.0 200 OK
Content-Type: application/x-mms-framed
Server: Cougar/9.00.00.3372
Date: Tue, 08 Jan 2013 10:14:18 GMT
Pragma: no-cache, client-id=3433994619, xResetStrm=1, features=broadcast, 
timeout=6, AccelBW=350, AccelDuration=2, Speed=1.000
Cache-Control: no-cache
Last-Modified: Tue, 08 Jan 2013 10:14:18 GMT
Supported: com.microsoft.wm.srvppair, com.microsoft.wm.sswitch, 
com.microsoft.wm.predstrm, com.microsoft.wm.fastcache

It is streaming audio from a local radio station.

I'm guessing that since there isn't a Content-Length header, squid will store
the data until it all arrives, then flush it from disk later on.  This is a
problem because my swap.state files are on the same partition.  When squid
can no longer write to swap.state because of the full disk, it terminates.

The only solution is to move the swap.state files, but that is
counter to the recommendation in the squid.conf.documented file:

#  TAG: cache_swap_state
#   Location for the cache swap.state file. This index file holds
#   the metadata of objects saved on disk.  It is used to rebuild
#   the cache during startup.  Normally this file resides in each
#   'cache_dir' directory, but you may specify an alternate
#   pathname here.  Note you must give a full filename, not just
#   a directory. Since this is the index for the whole object
#   list you CANNOT periodically rotate it!
...
#   them).  We recommend you do NOT use this option.  It is
#   better to keep these index files in each 'cache_dir' directory.

Since it is possible to have files much larger than maximum_object_size
in the cache_dir directory, there is always a possibility of running out
of space.  Not being able to write swap.state causes squid to abort,
which leads me to believe that swap.state should never be on the
same partition as the cache_dir directory.
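
In squid.conf terms that means something like this (paths are just examples;
cache_swap_state is the directive quoted above):

  # keep swap.state on a different partition from the cache_dir
  cache_dir aufs /cache/squid/aufs 3800 15 253
  cache_swap_state /var/squid/swap.state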

Mike Mitchell



[squid-users] RE: Memory leak in 3.2.5

2012-12-25 Thread Mike Mitchell
With the ident patch the memory leaks are at manageable levels.
It looks like I'm leaking HttpHeaderEntry, Short String, and ConnOpener
structures.

After 1,000,000 requests I have 740,968 HttpHeaderEntry structures in use,
with 1,600,341 Short Strings in use.  The two go hand-in-hand, as the
HttpHeaderEntry structure allocates two Short Strings.

I also have 22,314 ConnOpener structures in use.  I found one leak of
ConnOpener structures in the internalDNS routines.  I doubt it is large
enough to account for the over 22,000 structures in 1,000,000 queries.
The leak would only happen if the DNS query switched to TCP and the
TCP connection failed.

Mike Mitchell


