[squid-users] Re: Service Times

2014-01-15 Thread Nyamul Hassan
Hi,

We were trying to find out a correlation between our CACTI graphs and
the Service Times that Squid.cgi shows.

Squid version is 2.7.STABLE9.

Here are links to the following two images:

Squid.CGI - Median Service Times
http://116.193.170.3:8181/proxy24-cacti-service-times.png

CACTI - Squid - Service Times
http://116.193.170.3:8181/proxy24-squidcgi-service-times.png

Can someone help us correlate the data?
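In case it helps to reproduce the comparison: the numbers squid.cgi displays
come from the cache manager, which can also be polled on the command line; a
minimal sketch, assuming squidclient is installed and the proxy listens on
127.0.0.1:3128:

  # 5-minute median service times, as shown by cachemgr/squid.cgi
  squidclient -h 127.0.0.1 -p 3128 mgr:5min

  # raw counters, which Cacti-style pollers typically graph
  squidclient -h 127.0.0.1 -p 3128 mgr:counters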

Thank you in advance for your assistance.

Regards
HASSAN


[squid-users] Re: Squid 2.7.STABLE5 hangs with "HEAD" request in LOG

2014-01-15 Thread 4eversr
Squid Cache: Version 3.1.12

configure options:  '--prefix=/usr' '--sysconfdir=/etc/squid'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var'
'--libexecdir=/usr/sbin' '--datadir=/usr/share/squid'
'--mandir=/usr/share/man' '--libdir=/usr/lib64'
'--sharedstatedir=/var/squid' '--with-logdir=/var/log/squid'
'--with-swapdir=/var/cache/squid' '--with-pidfile=/var/run/squid.pid'
'--with-dl' '--enable-storeio'
'--enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads'
'--enable-removal-policies=heap,lru' '--enable-icmp' '--enable-delay-pools'
'--enable-esi' '--enable-icap-client' '--enable-useragent-log'
'--enable-referer-log' '--enable-kill-parent-hack' '--enable-arp-acl'
'--enable-ssl' '--enable-forw-via-db' '--enable-cache-digests'
'--enable-linux-netfilter' '--with-large-files' '--enable-underscores'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=DB,LDAP,MSNT,NCSA,PAM,POP3,SASL,SMB,YP,getpwnam,multi-domain-NTLM,squid_radius_auth'
'--enable-ntlm-auth-helpers=fakeauth,no_check,smb_lm'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-digest-auth-helpers=eDirectory,ldap,password'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-ntlm-fail-open' '--enable-stacktraces'
'--enable-x-accelerator-vary' '--with-default-user=squid'
'CFLAGS=-fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector
-funwind-tables -fasynchronous-unwind-tables -g -fPIE -fPIC
-fno-strict-aliasing' 'LDFLAGS=-pie' 'CXXFLAGS=-fmessage-length=0 -O2 -Wall
-D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables
-fasynchronous-unwind-tables -g -fPIC -fno-strict-aliasing'
--with-squid=/usr/src/packages/BUILD/squid-3.1.12

I will test with IE11 later today.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-STABLE5-hangs-with-HEAD-request-in-LOG-tp4664219p4664288.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] a fine deal......

2014-01-15 Thread Alexandre Chappaz
Hello,

that's it, I'm selling my car; here's the ad:

http://www.leboncoin.fr/voitures/603661288.htm

lots of people are replying wanting to ship the beast off to Africa, but if
I can make someone in France happy and close the deal without any
hassle, that suits me! So please pass it along.


Regards


Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread m . shahverdi
Ok, so what should I do if I want to pass SSH requests through squid?



> On 01/13/2014 06:28 AM, m.shahve...@ece.ut.ac.ir wrote:
>
>> Which protocols does squid not support exactly?
>
> In the context of this discussion, Squid officially supports only the HTTP
> protocol [optionally encrypted using SSL or TLS]. Unofficially (for
> now), Squid can also support native FTP requests.
>
> All other protocols are not supported.
>
> However, please note that there are many protocol "layers". It is
> difficult to define exactly which layer we are discussing. For example,
> it is possible to stream video using HTTP requests that wrap around the
> actual video streaming protocol OR, as Amos said, it is possible to
> tunnel other protocols through a forward Squid proxy using an HTTP
> CONNECT request.
>
>
> Hope this clarifies,
>
> Alex.
>
>
>
>>> On 2014-01-13 01:06, m.shahve...@ece.ut.ac.ir wrote:
>>>> Hi,
>>>> What does squid do against unsupported request protocols such as audio
>>>> and video streaming, for example RTMP or RTSP?
>>>> Thanks
>>>
>>> non-HTTP protocols are expected to use the CONNECT tunnel mechanism of
>>> HTTP. Software which does so is relayed by Squid (subject to access
>>> control configuration).
>>>
>>> Software which sends such protocols directly to the proxy or over port
>>> 80 is rejected.
>>>
>>> Amos




Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Antony Stone
On Wednesday 15 January 2014 at 14:04:19, m.shahve...@ece.ut.ac.ir wrote:

> Ok, so what should I do if I want to pass SSH requests through squid?

Why would you want to do this, or indeed expect it to be possible?

What benefit do you expect to get from passing SSH through Squid, instead of 
just routing SSH directly over the network as usual?



Regards,


Antony.

-- 
"The tofu battle I saw last weekend was quite brutal."

 - Marija Danute Brigita Kuncaitis

 Please reply to the list;
   please don't CC me.


Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread m . shahverdi
I want to pass all traffic through Squid, not only traffic received
on port 80, and handle it in some way. Now when I do so, SSH
requests freeze without any response!



> On Wednesday 15 January 2014 at 14:04:19, m.shahve...@ece.ut.ac.ir wrote:
>
>> Ok, so what should I do if I want to pass SSH requests through squid?
>
> Why would you want to do this, or indeed expect it to be possible?
>
> What benefit do you expect to get from passing SSH through Squid,
> instead of just routing SSH directly over the network as usual?
>
>
>
> Regards,
>
>
> Antony.
>
> --
> "The tofu battle I saw last weekend was quite brutal."
>
>  - Marija Danute Brigita Kuncaitis
>
>  Please reply to the list;
>  please don't CC me.
>




Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Leonardo Rodrigues


If your SSH client can use an HTTPS proxy, then it will probably 
work without major changes, as connections will be proxied as CONNECT 
ones. In the case of the CONNECT method, squid already works almost as a 
passthrough proxy.
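
On the wire such a tunnel is just a plain HTTP CONNECT exchange; a minimal
sketch (hostname hypothetical):

  CONNECT ssh.example.com:22 HTTP/1.1
  Host: ssh.example.com:22

  HTTP/1.1 200 Connection established

  ...after which raw SSH bytes flow in both directions...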


If your SSH client cannot use an HTTPS proxy, then you probably won't 
be able to do that, simply because squid cannot handle the SSH protocol.


Please note that 'I want to pass all traffic through squid' is 
simply the wrong approach. Squid is NOT a multi-purpose proxy; it's an 
HTTP/HTTPS proxy and, as an HTTPS proxy, it can deal with CONNECT 
connections, which can be used to tunnel some other traffic. This 
ability to deal with 'other traffic' is VERY different from 
imagining it can deal with ANY traffic.




On 15/01/14 11:17, m.shahve...@ece.ut.ac.ir wrote:

I want to pass all traffic through Squid, not only traffic received
on port 80, and handle it in some way. Now when I do so, SSH
requests freeze without any response!




--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Leonardo Rodrigues

On 15/01/14 11:04, m.shahve...@ece.ut.ac.ir wrote:

Ok, so what should I do if I want to pass SSH requests through squid?


Using an SSH client that can proxy requests through an HTTP/HTTPS 
proxy should do it. If your client can't do that, then it probably won't 
be possible, as squid does not recognize the SSH protocol (and never intended 
to).



--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





AW: [squid-users] ask three times authentication

2014-01-15 Thread Rietzler, Markus (RZF, SG 324 / )
I wonder why there are popups at all. NTLM should work without 
any popups. 
Which browser do you use? IE?

Could you try to discard the group-check auth?
We are using NTLM but everyone is allowed after authentication, so we do not 
use external_acl_type.


we only use

acl auth_user proxy_auth REQUIRED
http_access allow auth_user all


> -Original Message-
> From: Usuário do Sistema [mailto:maico...@ig.com.br]
> Sent: Tuesday, 14 January 2014 13:27
> To: Eliezer Croitoru
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] ask three times authentication
> 
> Thank you,
> 
> From 2.6 to 3.1.10, was there any other change in the system?
> 
>  yes, I have moved my Squid from a machine running Red Hat 5.9
> to another machine running CentOS 6.5
> 
> the issue seems to be something about authentication
> compatibility between the browser and the new Squid version 3.1.10
> 
> I still have the old machine. I have done some tests, and from a client
> machine, when I point the browser at the old proxy, everything works.
> But strangely, I use the same squid.conf on both the old proxy machine
> and the new one, so why does the authentication pop-up appear
> three times only on the new proxy running Squid version 3.1.10?
> 
> My question is: is there any known problem with authentication in
> Squid version 3.1.10?
> 
> Follow my squid.conf.
> 
> 
> 
> #
> # Squid.conf - AD authentication
> #
> #
> 
> ## Authentication
> 
> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-
> ntlmssp
> auth_param ntlm children 50
> auth_param ntlm keep_alive on
> 
> #auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-
> basic
> #auth_param basic children 30
> 
> ## Commented out
> 
> auth_param basic realm Acesso a Internet teste SA
> auth_param basic credentialsttl 2 hours
> 
> authenticate_cache_garbage_interval 1 hour
> authenticate_ttl 120 seconds
> 
> external_acl_type NT_global_group children=50 %LOGIN
> /usr/lib64/squid/squid_unix_group
> 
> ## SQSTAT
> 
> 
> acl ntlm_users proxy_auth REQUIRED
> 
> #cache_store_log none
> #cache_log /var/log/squid/cache.log
> #cache_log none
> #request_entities on
> 
> # debug_options rotate=16 ALL,1
> #debug_options ALL,9
> #debug_options ALL,1 33,2
> #debug_options ALL
> 
> 
> visible_hostname proxy.teste.com
> http_port 8080
> http_port 127.0.0.1:3128
> hierarchy_stoplist cgi-bin ?
> 
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> 
> access_log /var/log/squid/access.log squid
> 
> refresh_pattern ^ftp:  1440  20%  10080
> refresh_pattern ^gopher:  1440  0%  1440
> refresh_pattern .  0  20%  4320
> 
> ie_refresh on
> 
> max_filedesc 4096
> 
> 
> ###
> # Cache parameters - DO NOT CHANGE #
> ###
> 
> #cache_dir aufs /var/spool/squid 6000 16 256
> #cache_dir ufs /var/spool/squid 5000 64  1024
> #cache_dir ufs /var/spool/squid 2048 64 64
> 
> diskd_program   /usr/lib64/squid/diskd-daemon
> 
> cache_dir diskd /var/spool/squid/1  1000 16 128 Q1=64 Q2=72
> cache_dir diskd /var/spool/squid/2  1000 16 128 Q1=64 Q2=72
> cache_dir diskd /var/spool/squid/3  1000 16 128 Q1=64 Q2=72
> cache_dir diskd /var/spool/squid/4  1000 16 128 Q1=64 Q2=72
> 
> 
> #This stops squid from holding onto ram that it is no longer actively
> using.
> memory_pools off
> 
> #Buffers the write-out to log files. This can increase performance
> slightly
> buffered_logs on
> 
> cache_mem 1024 MB
> 
> half_closed_clients off
> cache_swap_low 80%
> cache_swap_high 100%
> 
> maximum_object_size 10 MB
> maximum_object_size_in_memory 2048 KB
> 
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap GDSF
> 
> ###
> 
> ftp_passive on
> acl ftp_21 port 21
> 
> 
> #
> # Default rules
> #
> 
> 
> 
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 20 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl Safe_ports port 10080 # HTTP port of the remote teste units
> acl Safe_ports port 8181 # Publishing
> acl Safe_ports port 10082 # DBMessenger
> acl Safe_ports port 9082
> acl ftp proto FTP
> acl CONNECT method CONNECT
> 
> 
> #
> # Origins
> ###

Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Helmut Hullen
Hello, m.shahverdi,

you wrote on 15.01.14:

> I want to pass all traffic through squid, not only traffic
> received on port 80, and handle it in some way.

Sorry - I can't see any benefit in that desired configuration.

Best regards!
Helmut



Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread m . shahverdi
So what do you mean by "an SSH client that can proxy requests through an
HTTP/HTTPS proxy" exactly?


> On 15/01/14 11:04, m.shahve...@ece.ut.ac.ir wrote:
>> Ok, so what should I do if I want to pass SSH requests through squid?
>>
>>
>  Using an SSH client that can proxy requests through an HTTP/HTTPS
> proxy should do it. If your client can't do that, then it probably won't
> be possible, as squid does not recognize the SSH protocol (and never intended
> to).
>
>
> --
>
>
>   Atenciosamente / Sincerely,
>   Leonardo Rodrigues
>   Solutti Tecnologia
>   http://www.solutti.com.br
>
>   Minha armadilha de SPAM, NÃO mandem email
>   gertru...@solutti.com.br
>   My SPAMTRAP, do not email it
>
>
>
>




[squid-users] Re: Squid 2.7.STABLE5 hangs with "HEAD" request in LOG

2014-01-15 Thread 4eversr
Hi,

amazingly, IE11 works perfectly. No "HEAD" request appears in the Squid
logs at all, and the video loads in seconds.

But this is no real solution for our IE9 / IE10 problem: we are
forced to use IE10 (or IE9) because we must use some web applications which
are not yet supported on IE11.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-STABLE5-hangs-with-HEAD-request-in-LOG-tp4664219p4664298.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Laurikainen, Tuukka
Hi,

Maybe what you are trying to achieve here is a combined proxy and firewall. 
That could work: all HTTP traffic would be handled by Squid, and all other traffic 
would be filtered (for example by iptables) and just routed (not proxied) through 
the server. These can be combined into a single server.
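
A minimal sketch of that split, assuming Squid runs on the same box with an
interception-mode http_port, and with hypothetical interface names and port:

  # web traffic from the LAN goes to Squid's intercept port
  iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3129
  # everything else is simply routed and NATed out
  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE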

Regards,

Tuukka

-Original Message-
From: m.shahve...@ece.ut.ac.ir [mailto:m.shahve...@ece.ut.ac.ir] 
Sent: Wednesday, January 15, 2014 2:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid and unsupported request protocols

I want to pass all traffic through Squid, not only traffic received on 
port 80, and handle it in some way. Now when I do so, SSH requests 
freeze without any response!



> On Wednesday 15 January 2014 at 14:04:19, m.shahve...@ece.ut.ac.ir wrote:
>
>> Ok, so what should I do if I want to pass SSH requests through squid?
>
> Why would you want to do this, or indeed expect it to be possible?
>
> What benefit do you expect to get from passing SSH through Squid,
> instead of just routing SSH directly over the network as usual?
>
>
>
> Regards,
>
>
> Antony.
>
> --
> "The tofu battle I saw last weekend was quite brutal."
>
>  - Marija Danute Brigita Kuncaitis
>
>  Please reply to the list;
>  please don't CC me.
>




Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Leonardo Rodrigues

On 15/01/14 12:06, m.shahve...@ece.ut.ac.ir wrote:

So what do you mean by "an SSH client that can proxy requests through an
HTTP/HTTPS proxy" exactly?


I mean exactly what I wrote ... if you have an SSH client that can 
proxy requests through an HTTP/HTTPS proxy, then you can use SSH through 
squid. If your SSH client can't do that, which I bet it can't, then you 
cannot do it.


My SSH client, for instance, which is ZOC for Mac, only supports 
SOCKS proxy servers, not HTTP/HTTPS ones. So, in my case, I wouldn't be 
able, with ZOC, to proxy SSH requests through squid.


There's really no other way to write what I wrote; it's plain and 
clear.
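
For what it's worth, OpenSSH can be pushed through an HTTP proxy with an
external helper; a minimal sketch, assuming OpenBSD-style netcat and a
hypothetical proxy address:

  ssh -o ProxyCommand='nc -X connect -x proxy.example.com:3128 %h %p' user@ssh.example.com

Note that the proxy must also permit CONNECT to port 22; Squid's default
'http_access deny CONNECT !SSL_ports' rule denies it, so an extra ACL would
be needed on the Squid side.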



--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





AW: [squid-users] TIMEOUT_DIRECT after enabling siblings

2014-01-15 Thread Grooz, Marc (regio iT)
I use Squid 3.1.19. I have four squid boxes. All of them have direct Internet 
access. To share the disk cache of each proxy, I configured them as siblings of 
each other. 

On Squid A:

cache_peer B sibling 8080 3130 proxy-only
cache_peer C sibling 8080 3130 proxy-only
cache_peer D sibling 8080 3130 proxy-only

and of course I configured a cache_peer_access rule to prevent request loops.

For cache queries I use ICP.

After I configured the cache_peer lines, I got the error messages with TIMEOUT_DIRECT 
to random destinations. 



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, 13 January 2014 22:59
To: squid-users@squid-cache.org
Subject: Re: [squid-users] TIMEOUT_DIRECT after enabling siblings

On 2014-01-14 02:34, Grooz, Marc (regio iT) wrote:
> Hi,
> 
> is see some of squid request with TIMEOUT_DIRECT/IP_Address in squid 
> log after enabling siblings.
> 
> Any idea?
> 
> Kind regards
> 
> Marc

Any other details which might narrow this down?
squid version(s), what you mean by "enabled siblings", whether the squid box in 
question has TCP issues contacting the mentioned IP, any messages appearing in 
cache.log, things like that could help.

Amos


[squid-users] Immediate "This page can't be displayed" on HTTPS requests (UNCLASSIFIED)

2014-01-15 Thread Raczek, Alan J CTR USARMY SEC (US)
Classification: UNCLASSIFIED
Caveats: NONE

We are running Squid 2.7 on a Windows Server 2003 machine. With a few
different HTTPS URL's we are getting an instantaneous "This page can't be
displayed" in Internet Explorer, doesn't matter what version of IE. In
Mozilla Firefox we get "the connection has timed out" . Doesn't even think
about it. I tried a few registry hacks having to do with bad proxy timeouts
with Windows but no luck. From the cache log I see that the proxy is
"allowing" the URL and it gets a hit (we whitelist everything). I am at a
loss as to what is happening. Other HTTPS URLs work; many other websites
work through the proxy, and they DO seem to take a little time to bring up
the site, which tells me Squid is doing the handshaking for sure. 
 

WHAT'S UP??

 
PS our network setup:
LAN - proxy server - ASA 5510 - Internet
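
One way to see exactly what the proxy returns for a failing site is to drive
the request by hand with curl from a client (proxy address and URL
hypothetical):

  curl -v -x 192.168.1.10:3128 https://failing-site.example.com/

The -v output shows the CONNECT request and the proxy's reply, which
separates an immediate proxy-side refusal from a stalled TLS handshake.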


***
* Alan Raczek *

* Principal Network Engineer  *   
* CACI*
* Work: (443) 395-5133*
* Cell: (732) 245-4351*
* alan.racz...@us.army.mil*
***

 


Classification: UNCLASSIFIED
Caveats: NONE






Re: [squid-users] Immediate "This page can't be displayed" on HTTPS requests (UNCLASSIFIED)

2014-01-15 Thread Jakob Curdes



We are running Squid 2.7 on a Windows Server 2003 machine. With a few
different HTTPS URL's we are getting an instantaneous "This page can't be
displayed" in Internet Explorer, doesn't matter what version of IE. In
Mozilla Firefox we get "the connection has timed out" . Doesn't even think
about it.
I currently observe a similar problem in Firefox and sometimes also in 
IE WITHOUT using a proxy: for some sites, e.g. Google, I sometimes get 
"the request is redirected in a way that it can never be terminated" 
(translation from German). Then again, things work as expected. So, two 
questions: a) is that the same problem you are seeing? and b) what's 
going on?


JC


RE: [squid-users] Immediate "This page can't be displayed" on HTTPS requests (UNCLASSIFIED)

2014-01-15 Thread Raczek, Alan J CTR USARMY SEC (US)
Classification: UNCLASSIFIED
Caveats: NONE

Sir,

No, that is not the same issue. Some HTTPS sites work, some don't. The
browser does not even try to think about a response, it just throws
the "This page can't be displayed" message in IE. And our proxy is the only
means for Internet access, so we can't go without it.

..ar

-Original Message-
From: Jakob Curdes [mailto:j...@info-systems.de] 
Sent: Wednesday, January 15, 2014 10:46 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Immediate "This page can't be displayed" on HTTPS
requests (UNCLASSIFIED)


> We are running Squid 2.7 on a Windows Server 2003 machine. With a few 
> different HTTPS URL's we are getting an instantaneous "This page can't 
> be displayed" in Internet Explorer, doesn't matter what version of IE. 
> In Mozilla Firefox we get "the connection has timed out" . Doesn't 
> even think about it.
I currently observe a similar problem in Firefox and sometimes also in IE
WITHOUT using a proxy: for some sites, e.g. Google, I sometimes get "the
request is redirected in a way that it can never be terminated" 
(translation from German). Then again, things work as expected. So, two
questions: a) is that the same problem you are seeing? and b) what's going
on?

JC

Classification: UNCLASSIFIED
Caveats: NONE






[squid-users] Re: Squid 3.1.12 hangs with "HEAD" request in LOG

2014-01-15 Thread babajaga
So it looks like squid3.1.12 does not handle HEAD properly. Having a short
glance at the old squid bugs, I did not see anything directly relevant to
this. Anyway, as it is an old version, no bug fixes will be done, I guess.
So I would suggest you to upgrade at least to very last 3.1.xx and check
again. As a supporting action, you might get rid of any configure-pptions,
you do not really need: The less code, the less probability of having a bug
inside :-)
Or, at least first get rid of unused squid-features before upgrading to
3.1.very-last. 
And, if that does not help, upgrade to 3.4
In case, problem with HEAD still persists, you will be able to file an
official bug.

My therory still is, that, because of a bug, your squid simply waits for the
211k of real data, which never will show up. 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-12-hangs-with-HEAD-request-in-LOG-tp4664219p4664305.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Immediate "This page can't be displayed" on HTTPS requests (UNCLASSIFIED)

2014-01-15 Thread babajaga
Although being a "fan" of MS, I would assume a problem of
squid2.7/Windows-specific. 
Because I still have several squid2.7/ubuntu running, and can not remember
such a problem for my Windows-users. 
But I am using persistent server-conns with very last squid2.7



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Immediate-This-page-can-t-be-displayed-on-HTTPS-requests-UNCLASSIFIED-tp4664302p4664306.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] ICP and HTCP and StoreID

2014-01-15 Thread Niki Gorchilov
Hi All,

I know that, according to the wiki
(http://wiki.squid-cache.org/Features/StoreID), ICP & HTCP are not
supported: “URL queries received from cache_peer siblings are not
passed through the StoreID helper. So the resulting store/cache lookup
will MISS on URLs normally altered by StoreID.”

Still, lab tests show that the cache peer is queried with the StoreID-
altered URL, so this part is actually working. At least in version
3.4.1.

The real problem is that when a sibling replies with UDP_HIT, the actual
HTTP request sent to this peer uses the altered URL instead of the
original one.

Here's a simple example:

1. Peer A receives request for URL like
http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback?
2. Peer A's StoreID helper normalizes the URL to
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
3. Peer A makes an ICP/HTCP query to Peer B for
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
4. Peer B replies with "UDP_HIT/000 0 HTCP_TST
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839";
5. Peer A creates HTTP connection to Peer B and makes "GET
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
HTTP/1.1"
6. Peer B, tries to connect to the non-existent host
c.youtube.com.squid.internal and as a result generates HTTP 504 error.
7. Peer A receives "TCP_MISS/504 4181 GET
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
- HIER_NONE/- text/html" and goes directly to youtube "TCP_MISS/200
237938 GET http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback?
- HIER_DIRECT/r2---sn-bavc5aoxu-nv4e.googlevideo.com
application/octet-stream"

The question is how to force squid to use the original URL when making
the TCP/HTTP connection to the peer after a UDP_HIT. The UDP request should
keep using the normalized/rewritten URL.

Thanks in advance for any ideas.

Best,
Niki


Re: [squid-users] cache not working?

2014-01-15 Thread spiderslack
Hi, after various tests I got it working. The problem is the header with 
the option Cache-Control: no-cache, for example.

My doubt is: is it possible to alter the header to allow caching?

I tried using the option "cache allow all", but websites sending the 
Cache-Control option still were not cached.
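
Old Squid releases could be told to ignore such headers via refresh_pattern
options; a minimal sketch, assuming Squid 2.7 or an early 3.1 where options
like ignore-no-cache still exist (they were removed in later releases), and
noting that this deliberately violates HTTP:

  refresh_pattern -i \.(gif|jpg|png|css|js)$ 1440 40% 10080 override-expire ignore-reload ignore-no-cache ignore-private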


Regards

On 01/06/2014 12:03 AM, Eliezer Croitoru wrote:

Hey Spider,

Are you sure you are wrong?
What version of squid are you using?
What is the result for the same request when you use "curl" or "wget"?
In order to cache the request you are talking about, there is a need to 
make sure that the request and the response both support caching and 
allow it.


There are many cases in which a file must not be cached, as indicated 
by the server response or the client request, and squid obeys 
that.
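
A quick way to make that check from the command line; a sketch with a
hypothetical proxy address and URL:

  # dump only the response headers, once through the proxy and once direct
  curl -s -D - -o /dev/null -x 192.168.1.1:3128 http://www.example.com/test.html
  curl -s -D - -o /dev/null http://www.example.com/test.html

Comparing Cache-Control, Expires and Vary between the two usually explains a
permanent TCP_MISS.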


We can determine it manually by looking at the request and response or 
maybe you can even try the tool redbot:

http://redbot.org/

It is very simple to use.
Feel free to just ask about the subject.

Eliezer

On 06/01/14 04:30, spiderslack wrote:

Hi all

I am setting up a proxy with squid and realized that it is not caching,
or my understanding is incorrect; please check my setup below.

visible_hostname galileu
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl manager url_regex -i ^cache_object:///squid-internal-mgr/
acl localhost src 192.168.1.0/24
http_access allow manager localhost
http_access deny manager
http_access allow localhost manager
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128
cache_dir ufs /var/squid/cache/squid 1000 16 256
coredump_dir /var/squid/cache/squid

cache allow all

the command "cache allow all" was just to test but still did not work


I tried to access a site with static content, where the HTML is a simple
thing like "test", but it does not work.

In the logs I see only TCP_MISS, never TCP_HIT. Is this correct?

according to the official squid website

http://wiki.squid-cache.org/SquidFaq/SquidLogs#Squid_result_codes

TCP_MISS: The response object delivered was the network response object.
TCP_HIT: The response object delivered was the local cache object.

1388784386.986    130 192.168.1.112 TCP_MISS/200 399 GET
http:///~leandro/test.html   - HIER_DIRECT/xxx.xxx.xxx.xxx text/html
1388784387.105     65 192.168.1.112 TCP_MISS/200 399 GET
http:///~leandro/test.html   - HIER_DIRECT/xxx.xxx.xxx.xxx text/html
1388784387.278     84 192.168.1.112 TCP_MISS/200 399 GET
http:///~leandro/test.html   - HIER_DIRECT/xxx.xxx.xxx.xxx text/html


any idea where I am going wrong?









Re: [squid-users] ask three times authentication

2014-01-15 Thread Usuário do Sistema
Thanks for your tips.

But I worked around the issue another way: I rolled back to my old
machine, which has Squid version 2.6, and everything is working.



2014/1/15 Rietzler, Markus (RZF, SG 324 / ):
> I wonder why there are popups at all. NTLM should work 
> without any popups.
> Which browser do you use? IE?
>
> Could you try to discard the group-check auth?
> We are using NTLM but everyone is allowed after authentication, so we do not 
> use external_acl_type.
>
>
> we only use
>
> acl auth_user proxy_auth REQUIRED
> http_access allow auth_user all
>
>
>> -Original Message-
>> From: Usuário do Sistema [mailto:maico...@ig.com.br]
>> Sent: Tuesday, 14 January 2014 13:27
>> To: Eliezer Croitoru
>> Cc: squid-users@squid-cache.org
>> Subject: Re: [squid-users] ask three times authentication
>>
>> Thank you,
>>
>> From 2.6 to 3.1.10, was there any other change in the system?
>>
>>  yes, I have moved my Squid from a machine running Red Hat 5.9
>> to another machine running CentOS 6.5
>>
>> the issue seems to be something about authentication
>> compatibility between the browser and the new Squid version 3.1.10
>>
>> I still have the old machine. I have done some tests, and from a client
>> machine, when I point the browser at the old proxy, everything works.
>> But strangely, I use the same squid.conf on both the old proxy machine
>> and the new one, so why does the authentication pop-up appear
>> three times only on the new proxy running Squid version 3.1.10?
>>
>> My question is: is there any known problem with authentication in
>> Squid version 3.1.10?
>>
>> Follow my squid.conf.
>>
>>
>> 
>> #
>> # Squid.conf - AD authentication
>> #
>> #
>>
>> ## Authentication
>>
>> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-
>> ntlmssp
>> auth_param ntlm children 50
>> auth_param ntlm keep_alive on
>>
>> #auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-
>> basic
>> #auth_param basic children 30
>>
>> ## Commented out
>>
>> auth_param basic realm Acesso a Internet teste SA
>> auth_param basic credentialsttl 2 hours
>>
>> authenticate_cache_garbage_interval 1 hour
>> authenticate_ttl 120 seconds
>>
>> external_acl_type NT_global_group children=50 %LOGIN
>> /usr/lib64/squid/squid_unix_group
>>
>> ## SQSTAT
>>
>>
>> acl ntlm_users proxy_auth REQUIRED
>>
>> #cache_store_log none
>> #cache_log /var/log/squid/cache.log
>> #cache_log none
>> #request_entities on
>>
>> # debug_options rotate=16 ALL,1
>> #debug_options ALL,9
>> #debug_options ALL,1 33,2
>> #debug_options ALL
>>
>>
>> visible_hostname proxy.teste.com
>> http_port 8080
>> http_port 127.0.0.1:3128
>> hierarchy_stoplist cgi-bin ?
>>
>> acl QUERY urlpath_regex cgi-bin \?
>> cache deny QUERY
>> acl apache rep_header Server ^Apache
>>
>> access_log /var/log/squid/access.log squid
>>
>> refresh_pattern ^ftp:  1440  20%  10080
>> refresh_pattern ^gopher:  1440  0%  1440
>> refresh_pattern .  0  20%  4320
>>
>> ie_refresh on
>>
>> max_filedesc 4096
>>
>>
>> ###
>> # Cache parameters - DO NOT CHANGE #
>> ###
>>
>> #cache_dir aufs /var/spool/squid 6000 16 256
>> #cache_dir ufs /var/spool/squid 5000 64  1024
>> #cache_dir ufs /var/spool/squid 2048 64 64
>>
>> diskd_program   /usr/lib64/squid/diskd-daemon
>>
>> cache_dir diskd /var/spool/squid/1  1000 16 128 Q1=64 Q2=72
>> cache_dir diskd /var/spool/squid/2  1000 16 128 Q1=64 Q2=72
>> cache_dir diskd /var/spool/squid/3  1000 16 128 Q1=64 Q2=72
>> cache_dir diskd /var/spool/squid/4  1000 16 128 Q1=64 Q2=72
>>
>>
>> #This stops squid from holding onto ram that it is no longer actively
>> using.
>> memory_pools off
>>
>> #Buffers the write-out to log files. This can increase performance
>> slightly
>> buffered_logs on
>>
>> cache_mem 1024 MB
>>
>> half_closed_clients off
>> cache_swap_low 80%
>> cache_swap_high 100%
>>
>> maximum_object_size 10 MB
>> maximum_object_size_in_memory 2048 KB
>>
>> cache_replacement_policy heap LFUDA
>> memory_replacement_policy heap GDSF
>>
>> ###
>>
>> ftp_passive on
>> acl ftp_21 port 21
>>
>> 
>> #
>> # Default rules
>> #
>> 
>>
>>
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 20 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # compa

[squid-users] Re: ICP and HTCP and StoreID

2014-01-15 Thread babajaga
Interesting question. Did you compare this behaviour to squid2.7 using
storeurl?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ICP-and-HTCP-and-StoreID-tp4664307p4664310.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Is there a precedence in the allowed sites ACL ? (UNCLASSIFIED)

2014-01-15 Thread Raczek, Alan J CTR USARMY SEC (US)
Classification: UNCLASSIFIED
Caveats: NONE


Just curious whether there is an order in which Squid tries to match a site in
the allowed-sites
ACL. Top down? 

...Alan



***
* Alan Raczek *

* Principal Network Engineer  *   
* CACI*
* Work: (443) 395-5133*
* Cell: (732) 245-4351*
* alan.racz...@us.army.mil*
***

 



Classification: UNCLASSIFIED
Caveats: NONE






[squid-users] Auth loop for non ActiveDirectory members

2014-01-15 Thread Christian Scholz

Hello everyone,

I'm new on this list, so I'd like to introduce myself briefly. My 
name is Christian and I work in an IT department.
Currently I'm setting up a squid3 (3.1.20-2.2) proxy connected to 
MS Active Directory. Kerberos, NTLM and Basic authentication are already 
working fine.


Now I have problems setting up the ACLs. Computers and users which are 
members of the domain have no problems authenticating.
But when I use a computer which is not part of the Active Directory, the 
auth dialog pops up again and again. I've tried it with Firefox, 
Internet Explorer and Google Chrome. With Firefox I have to type in the 
credentials for every request; for google.com that means 10 times or so. 
With IE and Google Chrome the user can't authenticate even if the 
credentials are correct.


Concerning the acls I use the following:

  # Authentication required, otherwise Pop-Up
  acl Authenticated_Users proxy_auth REQUIRED
  http_access deny !Authenticated_Users

  acl Internet_Users external ldap_group Internet_Users
  http_access allow Internet_Users

  http_access deny all

Under http://wiki.squid-cache.org/Features/Authentication I've read the 
part about auth loops, but I'm not sure if I've understood it 
correctly. My understanding is that an ACL based on proxy_auth, 
proxy_auth_regex, or an external type using %LOGIN shouldn't be the last 
entry on an http_access line, like I've done above. But then the following 
example should be correct:


  acl Authenticated_Users proxy_auth REQUIRED
  acl dummy_acl src 254.254.254.254/32

  http_access deny !Authenticated_Users dummy_acl
  http_access allow Authenticated_Users all


  http_access deny all

Is there anything else that I'm doing wrong? I am grateful for any 
help.






Re: [squid-users] Re: ICP and HTCP and StoreID

2014-01-15 Thread Nikolai Gorchilov
On Wed, Jan 15, 2014 at 8:35 PM, babajaga  wrote:
> Interesting question  Did you compare this behaviour to squid2.7 using
> storeurl ?

Nope. I just tried 3.4.2. Same result - both UDP and TCP requests go
with altered URLs.


Re: [squid-users] ask three times authentication

2014-01-15 Thread Amos Jeffries

On 2014-01-16 06:54, Usuário do Sistema wrote:

Thanks for your tips.

But I worked around the issue another way: I rolled back to my old
machine, which has Squid version 2.6, and everything is working.



Perhaps instead of rolling backwards to 2.6 you could roll forwards and 
try the latest versions?


3.4.2 is the current stable and supported Squid version. The 3.1.* and 
2.7.* releases are all very old and filled with hundreds of known and 
now-fixed bugs, and a handful of major security vulnerabilities.


You can find information on updated RPMs in our wiki 
http://wiki.squid-cache.org/KnowledgeBase/RedHat or 
http://wiki.squid-cache.org/KnowledgeBase/CentOS.


Amos



Re: [squid-users] Is there a precedence in the allowed sites ACL ? (UNCLASSIFIED)

2014-01-15 Thread Leonardo Rodrigues

On 15/01/14 17:08, Raczek, Alan J CTR USARMY SEC (US) wrote:


Just curious whether there is an order in which Squid tries to match a site in
the allowed-sites
ACL. Top down?


Yeah ... basically top down.

http://wiki.squid-cache.org/SquidFaq/SquidAcl#Access_Lists

http_access allow|deny acl AND acl AND ...
OR
http_access allow|deny acl AND acl AND ...
OR
...


The allow/deny action will be enforced only if ALL the ACLs on the 
line match. On an http_access line with 3 ACLs, for example, if two match 
and the third does not, the action will not be enforced.


Note that not enforcing an 'allow' rule is different from denying. 
Not enforcing a 'deny' rule, by the same logic, is different from allowing.


If an http_access action is not enforced, Squid will evaluate the next 
http_access line, until it reaches the end of all http_access rules.
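
A small sketch of that evaluation order (ACL names and addresses are
hypothetical):

  acl office src 10.1.0.0/16
  acl worktime time MTWHF 08:00-18:00
  acl badsites dstdomain .example.net

  # enforced only when BOTH office AND worktime match
  http_access allow office worktime
  # consulted only if the line above was not enforced
  http_access deny badsites
  http_access deny all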




--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] cache not working?

2014-01-15 Thread spiderslack

Hi all

I am trying:

http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator

Thanks :)

On 01/15/2014 12:02 PM, spiderslack wrote:
Hi, after various tests I got it working. The problem is the header with 
the option Cache-Control: no-cache, for example.

My doubt is: is it possible to alter the header to allow caching?

I tried using the option "cache allow all", but websites sending the 
Cache-Control option still were not cached.


Regards

On 01/06/2014 12:03 AM, Eliezer Croitoru wrote:

Hey Spider,

Are you sure you are wrong?
What version of squid are you using?
What is the result for the same request when you use "curl" or "wget"?
In order to cache the request you are talking about, there is a need 
to make sure that the request and the response both support caching and 
allow it.


There are many cases in which a file must not be cached, as indicated 
by the server response or the client request, and squid obeys 
that.


We can determine it manually by looking at the request and response 
or maybe you can even try the tool redbot:

http://redbot.org/

It is very simple to use.
Feel free to just ask about the subject.

Eliezer

On 06/01/14 04:30, spiderslack wrote:

Hi all

I am setting up a proxy with squid and realized that it is not caching,
or my understanding is incorrect; please check my setup below.

visible_hostname galileu
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly 
plugged)

machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl manager url_regex -i ^cache_object:///squid-internal-mgr/
acl localhost src 192.168.1.0/24
http_access allow manager localhost
http_access deny manager
http_access allow localhost manager
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128
cache_dir ufs /var/squid/cache/squid 1000 16 256
coredump_dir /var/squid/cache/squid

cache allow all

the command "cache allow all" was just to test but still did not work


I tried to access a site with static content, where the HTML is a simple
thing like "test", but it does not work.

In the logs I see only TCP_MISS, never TCP_HIT. Is this correct?

according to the official squid website

http://wiki.squid-cache.org/SquidFaq/SquidLogs#Squid_result_codes

TCP_MISS: The response object delivered was the network response 
object.

TCP_HIT: The response object delivered was the local cache object.

1388784386.986    130 192.168.1.112 TCP_MISS/200 399 GET
http:///~leandro/test.html   - HIER_DIRECT/xxx.xxx.xxx.xxx text/html

1388784387.105     65 192.168.1.112 TCP_MISS/200 399 GET
http:///~leandro/test.html   - HIER_DIRECT/xxx.xxx.xxx.xxx text/html

1388784387.278     84 192.168.1.112 TCP_MISS/200 399 GET
http:///~leandro/test.html   - HIER_DIRECT/xxx.xxx.xxx.xxx text/html



any idea where I am going wrong?













[squid-users] SmpScale + ERROR: No forward-proxy ports configured

2014-01-15 Thread Will Roberts

Hi,

I'm working with an SmpScale configuration with 2 workers defined. Each 
worker has its own set of unique ports that it listens on. The 
coordinator process doesn't have any http_port lines and generates tons 
of these warnings:


ERROR: No forward-proxy ports configured

That doesn't seem great, am I missing something? I'm sure I can work 
around this by giving it some dummy port to listen on, but I'd rather 
not if it doesn't really need it.


Thanks,
--Will


[squid-users] Re: ICP and HTCP and StoreID

2014-01-15 Thread Niki Gorchilov
Actually, it is working. I found two mistakes in my config - a typo in
a cache_peer_access directive and the absence of 'allow-miss' in the
cache_peer definition.

After fixing them, inter-cache communication works only with the
altered URLs, but this still does the job:
- if UDP is MISS, the originating peer makes a TCP connection to the
destination server and caches the result
- if UDP is HIT, the request is forwarded via the sibling with the modified
URL, but the sibling handles it without problems
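
The corrected sibling definition would look roughly like this (hostname
hypothetical); allow-miss stops Squid from sending only-if-cached to the
sibling:

  cache_peer peerb.example.com sibling 8080 3130 proxy-only allow-miss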

UDP HIT request example:
peer B: UDP_HIT/000 0 HTCP_TST
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.712704-950271
- HIER_NONE/- -
peer B: TCP_HIT/200 237948 GET
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.712704-950271
- HIER_NONE/- application/octet-stream
peer A: TCP_MISS/200 237948 GET
http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback? -
SIBLING_HIT/peerb application/octet-stream

UDP MISS request example:
peer B: UDP_MISS/000 0 HTCP_TST
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.2138112-2375679
- HIER_NONE/- -
peer A: TCP_MISS/200 237938 GET
http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback? -
HIER_DIRECT/r2---sn-bavc5aoxu-nv4e.googlevideo.com
application/octet-stream

Case closed!

On Wed, Jan 15, 2014 at 6:22 PM, Niki Gorchilov  wrote:
> Hi All,
>
> I know, according to wiki
> (http://wiki.squid-cache.org/Features/StoreID) ICP & HTCP are not
> supported. “URL queries received from cache_peer siblings are not
> passed through StoreID helper. So the resulting store/cache lookup
> will MISS on URLs normally altered by StoreID.”
>
> Still, lab tests show that the cache peer is queried with the StoreID-
> altered URL, so this part is actually working. At least in version
> 3.4.1.
>
> The real problem is that when a sibling replies with UDP_HIT, the actual
> HTTP request sent to this peer uses the altered URL instead of the
> original one.
>
> Here's a simple example:
>
> 1. Peer A receives request for URL like
> http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback?
> 2. Peer A's StoreID helper normalizes the URL to
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
> 3. Peer A makes an ICP/HTCP query to Peer B for
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
> 4. Peer B replies with "UDP_HIT/000 0 HTCP_TST
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839";
> 5. Peer A creates HTTP connection to Peer B and makes "GET
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
> HTTP/1.1"
> 6. Peer B, tries to connect to the non-existent host
> c.youtube.com.squid.internal and as a result generates HTTP 504 error.
> 7. Peer A receives "TCP_MISS/504 4181 GET
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
> - HIER_NONE/- text/html" and goes directly to youtube "TCP_MISS/200
> 237938 GET http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback?
> - HIER_DIRECT/r2---sn-bavc5aoxu-nv4e.googlevideo.com
> application/octet-stream"
>
> The question is how to force squid to use the original URL when making
> the TCP/HTTP connection to the peer after a UDP_HIT. The UDP request should
> keep using the normalized/rewritten URL.
>
> Thanks in advance for any ideas.
>
> Best,
> Niki


Re: [squid-users] SmpScale + ERROR: No forward-proxy ports configured

2014-01-15 Thread Nathan Hoad
Hi Will,

Why are you giving each worker a unique set of ports? Typically you
configure one set of ports for all workers, and let the operating
system handle the underlying machinery of sharing the ports across the
worker processes.

See here for more details:
http://wiki.squid-cache.org/Features/SmpScale#Who_decides_which_worker_gets_the_request.3F
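
In the usual SMP setup the port is shared by all workers, so the top of
squid.conf reduces to something like this sketch:

  workers 2
  # every worker accepts on the same port; the OS spreads the connections
  http_port 3128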

Thanks,

Nathan.
--
Nathan Hoad
Software Developer
www.getoffmalawn.com


On Thu, Jan 16, 2014 at 9:19 AM, Will Roberts  wrote:
> Hi,
>
> I'm working with an SmpScale configuration with 2 workers defined. Each
> worker has its own set of unique ports that it listens on. The coordinator
> process doesn't have any http_port lines and generates tons of these
> warnings:
>
> ERROR: No forward-proxy ports configured
>
> That doesn't seem great, am I missing something? I'm sure I can work around
> this by giving it some dummy port to listen on, but I'd rather not if it
> doesn't really need it.
>
> Thanks,
> --Will


Re: [squid-users] SmpScale + ERROR: No forward-proxy ports configured

2014-01-15 Thread Will Roberts

Nathan,

I used to run two squids on my servers which provide very similar but 
different services. I'm using SmpScale to simplify that configuration 
into a single squid instance with 2 workers with slightly different 
configurations. Honestly, there's a one-line difference; here's the top of 
my squid.conf:


workers 2

if ${process_number} = 1
http_port 80
endif

if ${process_number} = 2
http_port 81
hosts_file /etc/squid3/hosts
endif


I realize that in the future the DNS cache may be shared between 
workers, but for the moment it isn't, so this setup works :)


So I could add something like this to make squid happy, and it won't 
even open the port:


if ${process_number} = 3
http_port 64000
endif

But that seems like a hack to me.

--Will


On 01/15/2014 06:14 PM, Nathan Hoad wrote:

Hi Will,

Why are you giving each worker a unique set of ports? Typically you
configure one set of ports for all workers, and let the operating
system handle the underlying machinery of sharing the ports across the
worker processes.

See here for more details:
http://wiki.squid-cache.org/Features/SmpScale#Who_decides_which_worker_gets_the_request.3F

Thanks,

Nathan.
--
Nathan Hoad
Software Developer
www.getoffmalawn.com


On Thu, Jan 16, 2014 at 9:19 AM, Will Roberts  wrote:

Hi,

I'm working with an SmpScale configuration with 2 workers defined. Each
worker has its own set of unique ports that it listens on. The coordinator
process doesn't have any http_port lines and generates tons of these
warnings:

ERROR: No forward-proxy ports configured

That doesn't seem great, am I missing something? I'm sure I can work around
this by giving it some dummy port to listen on, but I'd rather not if it
doesn't really need it.

Thanks,
--Will




[squid-users] Issue with Web Traffic through IPSEC Tunnel to a Squid Proxy

2014-01-15 Thread RKGD512
Hi All-
So I have a need to direct all web traffic through an IPSEC Tunnel to a
Squid Proxy server on the other end of the tunnel.

Sounds complicated, but the concept is really easy; however, I am having
issues.  

So let me gather as much info as I can:

*Location 1 Subnet:* 192.168.1.0/24
*Location 1 Router 1:* Netgear WNR2000v3 running Firmware: DD-WRT v24-sp2
(02/09/12) std 
*Location 1 Router 2:* TPLink TL-R600VPN - VPN Router Housing the IPSEC
Tunnel
 
*Location 2 Subnet:* 192.168.100.0/24
*Location 2 Router 1:* Linksys WRT310Nv2 running Firmware: DD-WRT v24-sp2
(08/12/10) std-nokaid-small 
*Location 2 Router 2:* TPLink TL-R600VPN - VPN Router Housing the IPSEC
Tunnel

Location 1's proxy server is housed on VMware Workstation Version 10 with
Centos 6.4 Minimal with squid proxy installed.

*Description of Issue:* When I enter the proxy server info in the system proxy
settings and open a webpage, the page sits there until it times out.  It never
displays anything.  I can see that the proxy server is receiving the
request, but the client going from Location 2 to Location 1's proxy server is
unable to browse the internet.

Now the funny thing is, as a test I created the same proxy on Location 2's
side, and Location 1 can browse the internet fine; I can tell from
whatismyip.com as well as from the logs that everything is fine.  I checked all
required firewalls (iptables) and squid configs.  I even tried turning off
iptables on the router as well as on the proxy server and included
"http_access allow all", with no success.

Why it works in one direction and not the other, I have no idea.  I validated
every hop's config and they are all identical in their firewall settings and
squid proxy settings.  

Any help would be greatly appreciated!

Showing configs below:

Here's the squid Config:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1 192.168.2.0/24 192.168.100.0/24
192.168.1.0/24
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost

http_access deny all

http_port 80

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid

refresh_pattern ^ftp:  1440  20%  10080
refresh_pattern ^gopher:  1440  0%  1440
refresh_pattern -i (/cgi-bin/|\?)  0  0%  0
refresh_pattern .  0  20%  4320


Here are some logs to show the request is hitting the squid server:
&user_id=150566193&nid=2&ts=1389816137 - NONE/- text/html
1389816227.699 58 192.168.100.73 TCP_MISS/200 360 GET
http://notify4.dropbox.com/subscribe? - DIRECT/108.160.162.51 text/plain
1389816279.774  0 192.168.100.73 TCP_MEM_HIT/301 736 GET
http://google.com/ - NONE/- text/html
1389816279.934136 192.168.100.73 TCP_MISS/302 1186 GET
http://www.google.com/ - DIRECT/74.125.239.17 text/html
1389816285.846   5857 192.168.100.73 TCP_MISS/200 3539 CONNECT
www.google.com:443 - DIRECT/74.125.239.17 -
1389816288.123  0 192.168.100.73 TCP_MEM_HIT/301 736 GET
http://google.com/ - NONE/- text/html
1389816288.207 42 192.168.100.73 TCP_MISS/302 1186 GET
http://www.google.com/ - DIRECT/74.125.239.17 text/html
1389816294.935   6671 192.168.100.73 TCP_MISS/200 3539 CONNECT
www.google.com:443 - DIRECT/74.125.239.17 -
1389816378.040  60130 192.168.100.73 TCP_MISS/200 3828 CONNECT
client-lb.dropbox.com:443 - DIRECT/108.160.165.83 -
1389816387.059  60128 192.168.100.73 TCP_MISS/200 4242 CONNECT
d.dropbox.com:443 - DIRECT/108.160.165.189 -
1389816408.033 180281 192.168.100.73 TCP_MISS/200 3828 CONNECT
client-lb.dropbox.com:443 - DIRECT/108.160.166.9 -
1389816422.068  0 192.168.100.73 NONE/400 3874 GET
/subscribe?host_int=819546594&ns_map=241516770_170677946892514,261374389_5265891279285,241514999_1122846426610167&user_id=150566193&nid=2&ts=1389816421
- NONE/- text/html

*IPTables on squid server:*
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 192.168.100.0/24 -p tc

[squid-users] Re: Issue with Web Traffic through IPSEC Tunnel to a Squid Proxy

2014-01-15 Thread RKGD512
I noticed this could be complicated to the readers so I have drew up a Visio
Diagram to illustrate the flow so my question and issue is better
represented.

 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Issue-with-Web-Traffic-through-IPSEC-Tunnel-to-a-Squid-Proxy-tp4664319p4664322.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] SmpScale + ERROR: No forward-proxy ports configured

2014-01-15 Thread Amos Jeffries

On 2014-01-16 11:19, Will Roberts wrote:

Hi,

I'm working with an SmpScale configuration with 2 workers defined.
Each worker has its own set of unique ports that it listens on. The
coordinator process doesn't have any http_port lines and generates
tons of these warnings:

ERROR: No forward-proxy ports configured

That doesn't seem great, am I missing something? I'm sure I can work
around this by giving it some dummy port to listen on, but I'd rather
not if it doesn't really need it.


Something strange going on here with your Coordinator. That error is 
only produced when actively generating a response that needs to embed a 
URI for some resource served by Squid.


What is your coordinator doing that needs it to be aware of the worker 
service port(s) so often?


Amos


Re: [squid-users] SmpScale + ERROR: No forward-proxy ports configured

2014-01-15 Thread Will Roberts

On 01/15/2014 07:32 PM, Amos Jeffries wrote:
Something strange going on here with your Coordinator. That error is 
only produced when actively generating a response that needs to embed 
a URI for some resource served by Squid.


What is your coordinator doing that needs it to be aware of the worker 
service port(s) so often?


Amos,

I think it's from the mime handling. I get 177 of those errors, and 
there are 177 non-comment lines in mime.conf. If I make 
/usr/share/squid3/mime.conf an empty file, then I get none.


--Will


Re: [squid-users] Auth loop for non ActiveDirectory members

2014-01-15 Thread Brett Lymn
On Wed, Jan 15, 2014 at 08:40:16PM +0100, Christian Scholz wrote:
> 
> Is there anything other that I'm doing wrong? I am grateful for any 
> help.
> 

Modify the settings in the browser.  You need to turn off "Enable
Integrated Windows Authentication" in the Internet options.

-- 
Brett Lymn
This email has been sent on behalf of one of the following companies within the 
BAE Systems Australia group of companies:

BAE Systems Australia Limited - Australian Company Number 008 423 005
BAE Systems Australia Defence Pty Limited - Australian Company Number 006 
870 846
BAE Systems Australia Logistics Pty Limited - Australian Company Number 086 
228 864

Our registered office is Evans Building, Taranaki Road, Edinburgh Parks,
Edinburgh, South Australia, 5111. If the identity of the sending company is
not clear from the content of this email please contact the sender.

This email and any attachments may contain confidential and legally
privileged information.  If you are not the intended recipient, do not copy or
disclose its content, but please reply to this email immediately and highlight
the error to the sender and then immediately delete the message.



Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Alex Rousskov
On 01/15/2014 07:06 AM, m.shahve...@ece.ut.ac.ir wrote:

> So what do you mean by "an SSH client that can proxy requests through an
> HTTP/HTTPS proxy" exactly?

One way to rephrase the above would be "an SSH client that can be
configured to use an HTTP proxy as the next hop for SSH traffic (wrapped
in HTTP)". A manual page for that client would mention the corresponding
--http-proxy option or equivalent, for example.

Another way to rephrase the above is "an SSH client that, when properly
configured, emits HTTP requests".
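
As a concrete illustration (a sketch only, with placeholder names):
OpenSSH can be pointed at an HTTP proxy through ProxyCommand, using a
netcat build that speaks the proxy CONNECT method:

# ~/.ssh/config - proxy.example.com:3128 is a placeholder
Host example-host
    ProxyCommand nc -X connect -x proxy.example.com:3128 %h %p

Note that a default Squid denies CONNECT to anything but the SSL ports,
so the proxy side would also need something like "acl SSL_ports port 22"
(or a dedicated ACL) before this can work - whether that is wise is a
local policy question.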


I do agree with others that in the vast majority of cases Squid should
not see traffic from SSH clients. YMMV, but most likely you need to
adjust your deployment requirements.


Hope this clarifies,

Alex.



>> On 15/01/14 11:04, m.shahve...@ece.ut.ac.ir wrote:
>>> Ok, so what should I do if I want to pass SSH requests through squid?
>>>
>>>
>>  Using an SSH client that can proxy requests through an HTTP/HTTPS
>> proxy should do it. If your client can't do that, then it probably won't
>> be possible, as squid does not recognize the SSH protocol (and never
>> intended to).



Re: [squid-users] Re: ICP and HTCP and StoreID

2014-01-15 Thread Alex Rousskov
On 01/15/2014 03:31 PM, Niki Gorchilov wrote:
> Actually, it is working. [...] inter-cache communication is working only with
> altered URLs, but this still does the job:
> - If the UDP reply is a MISS, the originating peer makes a TCP connection to
> the destination server and caches the result
> - If the UDP reply is a HIT, the request is forwarded via the sibling with the
> modified URL, but the sibling handles the request without problems
> 
> UDP HIT request example:
> peer B: UDP_HIT/000 0 HTCP_TST
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.712704-950271
> - HIER_NONE/- -
> peer B: TCP_HIT/200 237948 GET
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.712704-950271
> - HIER_NONE/- application/octet-stream
> peer A: TCP_MISS/200 237948 GET
> http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback? -
> SIBLING_HIT/peerb application/octet-stream
> 
> UDP MISS request example:
> peer B: UDP_MISS/000 0 HTCP_TST
> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.2138112-2375679
> - HIER_NONE/- -
> peer A: TCP_MISS/200 237938 GET
> http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback? -
> HIER_DIRECT/r2---sn-bavc5aoxu-nv4e.googlevideo.com
> application/octet-stream
> 
> Case closed!

Store ID is not a URL (even if it is often convenient to think that
way). If Squid sends requests with StoreIDs, Squid is broken (even if it
happens to usually work in your particular case). Consider the situation
where the peer returned UDP_HIT but then purged the cached entry. What
will it do with the c.youtube.com.squid.internal request?

Somebody who cares about this _and_ can reproduce it should file a bug
report, but please double-check that the requests are indeed using Store
IDs, using cache.log debugging or a packet trace. Do not just assume that
Squid will never log a Store ID instead of a URL.
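
For the packet-trace route, something along these lines would settle it
(a sketch; the peer host name is a placeholder):

tcpdump -s 0 -w peer-traffic.pcap host peerb and tcp port 80

Then check whether the GET lines in the capture carry the
*.squid.internal Store ID or the original googlevideo.com URL.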


Thank you,

Alex.


> On Wed, Jan 15, 2014 at 6:22 PM, Niki Gorchilov  wrote:
>> Hi All,
>>
>> I know that, according to the wiki
>> (http://wiki.squid-cache.org/Features/StoreID), ICP & HTCP are not
>> supported: “URL queries received from cache_peer siblings are not
>> passed through the StoreID helper. So the resulting store/cache lookup
>> will MISS on URLs normally altered by StoreID.”
>>
>> Still, lab tests show that the cache peer is queried with the
>> StoreID-altered URL, so this part is actually working, at least in
>> version 3.4.1.
>>
>> The real problem is that when a sibling replies with UDP_HIT, the actual
>> HTTP request sent to that peer uses the altered URL instead of the
>> original one.
>>
>> Here's a simple example:
>>
>> 1. Peer A receives request for URL like
>> http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback?
>> 2. Peer A's StoreID helper normalizes the URL to
>> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
>> 3. Peer A makes an ICP/HTCP query to Peer B for
>> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
>> 4. Peer B replies with "UDP_HIT/000 0 HTCP_TST
>> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839"
>> 5. Peer A creates HTTP connection to Peer B and makes "GET
>> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
>> HTTP/1.1"
>> 6. Peer B tries to connect to the non-existent host
>> c.youtube.com.squid.internal and as a result generates HTTP 504 error.
>> 7. Peer A receives "TCP_MISS/504 4181 GET
>> http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.950272-1187839
>> - HIER_NONE/- text/html" and goes directly to youtube "TCP_MISS/200
>> 237938 GET http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback?
>> - HIER_DIRECT/r2---sn-bavc5aoxu-nv4e.googlevideo.com
>> application/octet-stream"
>>
>> The question is how to force Squid to use the original URL when making
>> the TCP/HTTP connection to the peer after a UDP_HIT. The UDP request
>> should keep using the normalized/rewritten URL.
>>
>> Thanks in advance for any ideas.
>>
>> Best,
>> Niki



Re: [squid-users] Re: ICP and HTCP and StoreID

2014-01-15 Thread Eliezer Croitoru

Hey There,

Just note that the StoreID wiki page was written during design and testing.
I can think of a way to make Squid do what you are talking about.

Eliezer

On 16/01/14 00:31, Niki Gorchilov wrote:

Actually, it is working. I found two mistakes in my config - a typo in
the cache_peer_access directive and the absence of 'allow-miss' in the
cache_peer definition.

After fixing them, inter-cache communication is working only with
altered URLs, but this still does the job:
- If the UDP reply is a MISS, the originating peer makes a TCP connection to
the destination server and caches the result
- If the UDP reply is a HIT, the request is forwarded via the sibling with the
modified URL, but the sibling handles the request without problems

UDP HIT request example:
peer B: UDP_HIT/000 0 HTCP_TST
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.712704-950271
- HIER_NONE/- -
peer B: TCP_HIT/200 237948 GET
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.712704-950271
- HIER_NONE/- application/octet-stream
peer A: TCP_MISS/200 237948 GET
http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback? -
SIBLING_HIT/peerb application/octet-stream

UDP MISS request example:
peer B: UDP_MISS/000 0 HTCP_TST
http://c.youtube.com.squid.internal/videoplayback/09ebf166f4892e4f.140.2138112-2375679
- HIER_NONE/- -
peer A: TCP_MISS/200 237938 GET
http://r2---sn-bavc5aoxu-nv4e.googlevideo.com/videoplayback? -
HIER_DIRECT/r2---sn-bavc5aoxu-nv4e.googlevideo.com
application/octet-stream

Case closed!




[squid-users] squid 3.3.11 on SLES11 SP3 and couldn't squid -k reconfigure ?

2014-01-15 Thread Josef Karliak

  Good morning,
  I have squid 3.3.11.xx on SLES11 SP3, all OK, but I often use "squid  
-k reconfigure" after changing some configuration (adding forbidden  
domains and so on). But there is a problem - the command mentioned  
above gives me an error:

proxy:/etc/squid # squid -k reconfigure
squid: ERROR: No running copy

  But the proxy is running; we can browse the internet. What went wrong?
  Thanks for a kick in the right direction, and best regards
  J.Karliak.
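
The usual cause of "No running copy": "squid -k" locates the running
instance through the pid file named by pid_filename in whatever
squid.conf it parses, so the command may be reading a different config
or pid path than the daemon was started with. A minimal check, with the
common defaults as placeholders - adjust to the local install:

squid -k reconfigure -f /etc/squid/squid.conf
grep pid_filename /etc/squid/squid.conf
ls -l /var/run/squid.pid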

--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policy and checking. If you have problems delivering email to me, start
using the email origin verification methods mentioned above. Thank you.


This message was sent using IMP, the Internet Messaging Program.