[squid-users] delay pool negative value

2014-07-01 Thread Grooz, Marc (regio iT)
Hi,

If I watch the delay pool with squidclient mgr:delay, what does a negative
value in the current field mean?

Is there a description of the values in that output?
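For context, a minimal class 1 delay pool (one aggregate bucket) might be
configured like this; the ACL name and byte rates are illustrative, not taken
from Marc's setup:

acl throttled src 10.0.0.0/8
delay_pools 1
# class 1 = one aggregate bucket shared by all matching clients
delay_class 1 1
# refill the bucket at 64000 bytes/s, cap it at 128000 bytes
delay_parameters 1 64000/128000
delay_access 1 allow throttled
delay_access 1 deny all

squidclient mgr:delay then lists, per pool, the configured restore/max values
and the current bucket level.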

Kind regards

marc



[squid-users] squidguard on special port

2014-02-17 Thread Grooz, Marc (regio iT)
Hi Squid Usergroup,

I want a redirector like squidguard to be consulted only if a client connects 
on port 3128; on port 8080 the request should be passed through without 
rewriting. Is that possible with squid?

Kind regards

Marc


Re: [squid-users] squidguard on special port

2014-02-17 Thread Grooz, Marc (regio iT)
My suggestion was:

http_port 3128 name=squidguard

url_rewrite_access allow squidguard
url_rewrite_access deny all

or

http_port 8080 name=unfiltered

url_rewrite_access allow !unfiltered

Is that right?
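For comparison, a complete sketch of the first variant; url_rewrite_access 
takes ACL names rather than name= labels, so a myportname ACL is the assumed 
missing piece (the ACL name to_squidguard is made up):

http_port 3128 name=squidguard
http_port 8080 name=unfiltered

# myportname matches the name= label of the http_port a request arrived on
acl to_squidguard myportname squidguard

url_rewrite_access allow to_squidguard
url_rewrite_access deny all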

-Original Message-
From: n...@gorchilov.com [mailto:n...@gorchilov.com] On behalf of Nikolai 
Gorchilov
Sent: Monday, 17 February 2014 12:27
To: Grooz, Marc (regio iT)
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squidguard on special port

Hi, Marc,

Yes, it is possible. RTFM about myport/myportname ACL at 
http://www.squid-cache.org/Doc/config/acl/

Best,
Niki

On Mon, Feb 17, 2014 at 12:48 PM, Grooz, Marc (regio iT) 
marc.gr...@regioit.de wrote:
 Hi Squid Usergroup,

 I want a redirector like squidguard to be consulted only if a client connects 
 on port 3128; on port 8080 the request should be passed through without 
 rewriting. Is that possible with squid?

 Kind regards

 Marc


Re: [squid-users] TIMEOUT_DIRECT after enabling siblings

2014-01-15 Thread Grooz, Marc (regio iT)
I use Squid 3.1.19. I have four squid boxes, all of which have direct Internet 
access. To share each proxy's disk cache, I configured them as siblings of 
each other. 

On Squid A:

cache_peer B sibling 8080 3130 proxy-only
cache_peer C sibling 8080 3130 proxy-only
cache_peer D sibling 8080 3130 proxy-only

and of course I configured a cache_peer_access rule to prevent a request loop.

For cache lookups I use ICP.

After I configured the cache_peer entries I got the TIMEOUT_DIRECT error 
messages for random destinations.
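A loop-prevention rule of the kind mentioned above might look like this on 
Squid A; the addresses in sibling_boxes are placeholders for the real IPs of 
B, C and D:

# never forward a request that came from a sibling back to the siblings
acl sibling_boxes src 192.0.2.2 192.0.2.3 192.0.2.4
cache_peer_access B deny sibling_boxes
cache_peer_access C deny sibling_boxes
cache_peer_access D deny sibling_boxes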



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, 13 January 2014 22:59
To: squid-users@squid-cache.org
Subject: Re: [squid-users] TIMEOUT_DIRECT after enabling siblings

On 2014-01-14 02:34, Grooz, Marc (regio iT) wrote:
 Hi,
 
 I see some squid requests with TIMEOUT_DIRECT/IP_address in the squid 
 log after enabling siblings.
 
 Any idea?
 
 Kind regards
 
 Marc

Any other details which might narrow this down?
Squid version(s), what you mean by "enabling siblings", whether the squid box in 
question has TCP issues contacting the mentioned IP, any messages appearing in 
cache.log; things like that could help.

Amos


[squid-users] TIMEOUT_DIRECT after enabling siblings

2014-01-13 Thread Grooz, Marc (regio iT)
Hi,

I see some squid requests with TIMEOUT_DIRECT/IP_address in the squid log after 
enabling siblings.

Any idea?

Kind regards

Marc




[squid-users] ##palin Re: [squid-users] #Can't access certain webpages

2013-11-26 Thread Grooz, Marc (regio iT)
Hi Kinkie,

Yes, I made a capture but don't see the cause.

I'll send you my traces.

Kind regards.

Marc

-Original Message-
From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Monday, 25 November 2013 15:45
To: Grooz, Marc (regio iT)
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] #Can't access certain webpages

On Mon, Nov 25, 2013 at 3:21 PM, Grooz, Marc (regio iT) marc.gr...@regioit.de 
wrote:
 Hi,

 Currently I use Squid 3.3.8 and I can't use/access two webservers through 
 squid. If I bypass squid these websites work great.

 One of these websites is a file upload/download site with a generated 
 download link. When I upload a file I see the following squid log entries:

 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
 .
 .
 TCP_MISS_ABORTED/000 0 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?

 And the download link never gets generated.


 In the second case you never get a webpage back from squid. If I use lynx 
 from the command line of the squid system, the webpage loads.
 With a tcpdump I see that when squid makes the request, the webserver 
 doesn't answer.

Well, this is consistent with the behavior in squid's logs.
Have you tried accessing the misbehaving server from a client running on the 
squid box, and comparing the differences in the network traces?


-- 
/kinkie




[squid-users] ##palin Re: [squid-users] #Can't access certain webpages

2013-11-26 Thread Grooz, Marc (regio iT)
In my first case:

Squid request:

-MGET 
/cgi-bin/upload_status.cgi?uid=060950223627&files=:iso-27001-router-security-audit-checklist.xls&ok=1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: http://xyz/
Accept-Language: de-DE
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Host: xyz
X-Forwarded-For: unknown, unknown
Cache-Control: max-age=0
Connection: keep-alive

Webserver answer:
[-MHTTP/1.1 200 OK
Date: Mon, 25 Nov 2013 12:48:57 GMT
Server: Apache/2.2.22 (Linux/SUSE)
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html

 Squid sends the first request again and again.

Direct request without squid:

Gm/GET /cgi-bin/upload_status.cgi?uid=318568766743&files=:aukirche.JPG&ok=1 
HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: http://xyz/
Accept-Language: de-DE
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Host: xyz
DNT: 1
Connection: Keep-Alive

Webserver answer:
GmHTTP/1.1 200 OK
Date: Tue, 26 Nov 2013 10:36:25 GMT
Server: Apache/2.2.22 (Linux/SUSE)
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html

Website gets displayed.



In my second case:

Squid request:

SGET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: de-DE
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
If-Modified-Since: Tue, 26 Nov 2013 10:52:01 GMT
DNT: 1
Host: xyz
Pragma: no-cache
X-Forwarded-For: unknown, unknown
Cache-Control: max-age=259200
Connection: keep-alive

 No answer from the host.

Direct request without squid:

S   GET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: de-DE
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Host: xyz
If-Modified-Since: Tue, 26 Nov 2013 10:52:01 GMT
DNT: 1
Connection: Keep-Alive

 Successful answer from the webserver.

Kind regards, Marc


-Original Message-
From: Grooz, Marc (regio iT) [mailto:marc.gr...@regioit.de] 
Sent: Tuesday, 26 November 2013 11:55
To: Kinkie
Cc: squid-users@squid-cache.org
Subject: [squid-users] ##palin Re: [squid-users] #Can't access certain webpages

Hi Kinkie,

Yes, I made a capture but don't see the cause.

I'll send you my traces.

Kind regards.

Marc

-Original Message-
From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Monday, 25 November 2013 15:45
To: Grooz, Marc (regio iT)
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] #Can't access certain webpages

On Mon, Nov 25, 2013 at 3:21 PM, Grooz, Marc (regio iT) marc.gr...@regioit.de 
wrote:
 Hi,

 Currently I use Squid 3.3.8 and I can't use/access two webservers through 
 squid. If I bypass squid these websites work great.

 One of these websites is a file upload/download site with a generated 
 download link. When I upload a file I see the following squid log entries:

 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
 .
 .
 TCP_MISS_ABORTED/000 0 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?

 And the download link never gets generated.


 In the second case you never get a webpage back from squid. If I use lynx 
 from the command line of the squid system, the webpage loads.
 With a tcpdump I see that when squid makes the request, the webserver 
 doesn't answer.

Well, this is consistent with the behavior in squid's logs.
Have you tried accessing the misbehaving server from a client running on the 
squid box, and comparing the differences in the network traces?


-- 
/kinkie




[squid-users] ##palin Re: [squid-users] #Can't access certain webpages

2013-11-26 Thread Grooz, Marc (regio iT)
I've got it. I set the option forwarded_for from off to delete and now both 
websites get displayed through squid.
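For reference, the change amounts to this single directive; delete strips the 
X-Forwarded-For header entirely, whereas off sends the literal value unknown, 
which is the "X-Forwarded-For: unknown, unknown" visible in the traces above:

# squid.conf: remove X-Forwarded-For from outgoing requests completely
forwarded_for delete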

Kind regards
Marc


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 26 November 2013 13:11
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ##palin Re: [squid-users] #Can't access certain 
webpages

On 27/11/2013 1:00 a.m., Grooz, Marc (regio iT) wrote:
 In my first case:
 
 Squid request:
 
 -MGET /cgi-bin/upload_status.cgi?uid=060950223627&files=:iso-27001-router-security-audit-checklist.xls&ok=1 HTTP/1.1
 Accept: text/html, application/xhtml+xml, */*
 
 Webserver answer:
 [-MHTTP/1.1 200 OK
 Date: Mon, 25 Nov 2013 12:48:57 GMT

 Squid sends the first request again and again.
 
 Direct request without squid:
 
 Gm/GET /cgi-bin/upload_status.cgi?uid=318568766743&files=:aukirche.JPG&ok=1 
 HTTP/1.1
 
 Webserver answer:
 GmHTTP/1.1 200 OK

 
 Website gets displayed.
 


Are those -M Gm/ characters really in front of the GET method name and the 
HTTP/1.1 response version label?

It looks like you may be receiving SOCKS protocol traffic.

Amos




[squid-users] #Can't access certain webpages

2013-11-25 Thread Grooz, Marc (regio iT)
Hi,

Currently I use Squid 3.3.8 and I can't use/access two webservers through 
squid. If I bypass squid these websites work great.

One of these websites is a file upload/download site with a generated 
download link. When I upload a file I see the following squid log entries:

TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
.
.
TCP_MISS_ABORTED/000 0 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?

And the download link never gets generated.


In the second case you never get a webpage back from squid. If I use lynx 
from the command line of the squid system, the webpage loads.
With a tcpdump I see that when squid makes the request, the webserver 
doesn't answer.

Any ideas or suggestions? 

Kind regards

Marc




Re: [squid-users] Vary object loop

2013-10-15 Thread Grooz, Marc (regio iT)
Thanks, so nothing to worry about.

Kind regards Marc

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 15 October 2013 04:03
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Vary object loop

On 15/10/2013 1:08 a.m., Grooz, Marc (regio iT) wrote:
 Hi,

 what does that message mean in squid 3.3.9?

 varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://...' 
 'accept-encoding=identity,gzip,deflate'
 clientProcessHit: Vary object loop!

 Kind regards

 Marc

It means the same thing in any Squid version.

* The object at the given URL uses the Vary header feature of HTTP.
  - Squid indexes the cache store by URL, so to deal with such features we 
place a special vary marker object in the cache at the index where the URL 
lookup can find it. That object indicates that Vary is used by this resource 
and which Vary: header key is to be used. Squid then must perform a second 
cache lookup with the new key to find an object relevant to the current 
client. The object found by the second lookup is expected to be the real 
response.

* In your case the second lookup discovered a loop back to the same or another 
vary object.
  - Squid is just warning you that the cache is corrupted by that loop and will 
fetch a new object from the network.
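As a made-up illustration of the mechanism: a response such as

HTTP/1.1 200 OK
Content-Type: text/html
Vary: Accept-Encoding

makes Squid store a vary marker object under the plain URL, while the real 
response is stored under a second key built from the URL plus the request's 
Accept-Encoding value; that second key is where the 
'accept-encoding=identity,gzip,deflate' string in the warning comes from.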


Amos




[squid-users] Vary object loop

2013-10-14 Thread Grooz, Marc (regio iT)
Hi,

what does that message mean in squid 3.3.9?

varyEvaluateMatch: Oops. Not a Vary object on second attempt, 'http://...' 
'accept-encoding=identity,gzip,deflate'
clientProcessHit: Vary object loop!

Kind regards

Marc




Re: [squid-users] remote= in Squid Log

2013-08-28 Thread Grooz, Marc (regio iT)
When I set http_port to 1.1.1.1:3128, then only local= is equal to 1.1.1.1, 
but which directive sets remote=?

Marc

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, 28 August 2013 06:44
To: squid-users@squid-cache.org
Subject: Re: [squid-users] remote= in Squid Log

On 28/08/2013 12:45 a.m., Grooz, Marc (regio iT) wrote:
 Hi,

 Does anybody know what the remote= in this Squid Log entry means?

 Accepting HTTP Socket connections at local=127.0.0.1:3128 remote=[::]
 FD 8 flags=9

It is the wildcard IP address. It is equivalent to the * in netstat listings 
for LISTENING addresses.
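As an illustration (addresses assumed, not from Marc's logs): a listener 
configured as

http_port 127.0.0.1:3128

is logged at startup as local=127.0.0.1:3128 remote=[::] because a listening 
socket has no peer yet. remote= is therefore not set by any directive; it is 
filled in with the client's address only once a connection is accepted.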

Amos



[squid-users] remote= in Squid Log

2013-08-27 Thread Grooz, Marc (regio iT)
Hi,

Does anybody know what the remote= in this Squid Log entry means?

Accepting HTTP Socket connections at local=127.0.0.1:3128 remote=[::] FD 8 
flags=9

Kind regards

Marc



[squid-users] Squid auth question

2013-01-07 Thread Grooz, Marc (regio iT)
Hi ,

I've got a question about an external_acl. We use our own external helper 
to check whether a user is in a particular group and then assign a special 
outgoing IP address.

Here is an example:

external_acl_type HELPER ttl=3600 negative_ttl=300 children=10
concurrency=0 cache=0 grace=0 protocol=2.5 %SRC /path/to/helper

acl group1 external HELPER group1
acl group2 external HELPER group2

http_access allow group1 
tcp_outgoing_address 1.2.3.4 group1

http_access allow group2 
tcp_outgoing_address 1.2.3.5 group2

In the helper protocol I notice that squid tries to recheck users that 
belong to group2 against group1 every 10 minutes, even when they are 
already allowed in group2. Is there an option to tell squid to remember 
successful authentications?
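A hedged observation: the correction below says the rechecks happen every 5 
minutes, which matches negative_ttl=300 (300 seconds) in the definition above, 
so these may simply be expired negative cache entries for the group1 lookup 
rather than true reauthentication. A sketch of the knob to test with, value 
illustrative:

# cache failed lookups for an hour instead of 5 minutes
external_acl_type HELPER ttl=3600 negative_ttl=3600 children=10
concurrency=0 cache=0 grace=0 protocol=2.5 %SRC /path/to/helper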

Kind regards

Marc


Re: [squid-users] Squid auth question

2013-01-07 Thread Grooz, Marc (regio iT)
Sorry, I have to correct myself: squid reauthenticates every 5 minutes, not 
10.

Kind regards

Marc Grooz

-Original Message-
From: Grooz, Marc (regio iT) [mailto:marc.gr...@regioit.de] 
Sent: Monday, 7 January 2013 15:27
To: squid-users@squid-cache.org
Subject: [squid-users] Squid auth question

Hi ,

I've got a question about an external_acl. We use our own external helper to
check whether a user is in a particular group and then assign a special
outgoing IP address.

Here is an example:

external_acl_type HELPER ttl=3600 negative_ttl=300 children=10
concurrency=0 cache=0 grace=0 protocol=2.5 %SRC /path/to/helper

acl group1 external HELPER group1
acl group2 external HELPER group2

http_access allow group1
tcp_outgoing_address 1.2.3.4 group1

http_access allow group2
tcp_outgoing_address 1.2.3.5 group2

In the helper protocol I notice that squid tries to recheck users that belong
to group2 against group1 every 10 minutes, even when they are already allowed
in group2. Is there an option to tell squid to remember successful
authentications?

Kind regards

Marc

