On 2014-07-01 21:23, Roberto PATRICOLO wrote:
Hi all,
I'm new to this environment, so I have a problem related to an application in
an environment that uses NTLM authentication. This kind of authentication is
not supported by the software I'm using, so support told me that the best way
to solve the issue is installing a Squid proxy server in
On 3/05/2014 11:46 a.m., Soporte Técnico wrote:
> Hi people, I had a couple of FreeBSD boxes running Squid 2.7 in transparent
> mode; I recently saw the zph configuration directive, I didn't know about it.
>
> The question is: does ZPH override the source IP of a cached HIT object?
No. ZPH does nothing
I had no MikroTik hardware running together, so I had firewall rules (ipfw)
tha
Hi Amos, thank you very much for your detailed reply. I appreciate the time
you've taken.
:)
I was thinking of a script where Squid pipes downloads of type .iso, .exe, or
.img to a download manager like Axel or aria2 (segmented downloading). The
download manager would act independently.
User request c
On 2014-03-30 23:10, Fahim Mohammed wrote:
Hi all,
Could you please give me some tips / point me in the direction of how I can
achieve the following?
I need Squid to redirect certain download types to a download manager like
aria2 or axel, which will download the file using segmented downloading and
pass the file to Squid or to the user
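One way to hand selected downloads to an external manager is a url_rewrite_program helper. The sketch below is only illustrative: it assumes the classic Squid 3.x helper protocol (one request per line on stdin, a rewritten URL or a blank line on stdout), and the extension list and the local queue service URL are hypothetical, not anything Squid or aria2 provides.

```python
#!/usr/bin/env python3
"""Hypothetical url_rewrite_program helper sketch for download-type URLs."""
import sys
from urllib.parse import urlparse

# Extensions we want a download manager to handle (illustrative list).
SEGMENTED_EXTENSIONS = (".iso", ".exe", ".img")

def wants_download_manager(url):
    """Return True when the URL path ends in one of the target extensions."""
    path = urlparse(url).path.lower()
    return path.endswith(SEGMENTED_EXTENSIONS)

def main():
    # Squid feeds "URL ip/fqdn ident method ..." per line; reply per line.
    for line in sys.stdin:
        fields = line.split()
        url = fields[0] if fields else ""
        if wants_download_manager(url):
            # Redirect to a hypothetical local service that queues the
            # download in aria2/axel and serves the file when complete.
            sys.stdout.write("http://127.0.0.1:8080/queue?url=%s\n" % url)
        else:
            sys.stdout.write("\n")  # empty line means "no change"
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

The helper would be wired in with `url_rewrite_program /path/to/helper.py`; the queueing service itself is the part this sketch leaves entirely open.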
Hey there,
Try a couple of things:
- curl http://XimageXlocation
- REDbot on the same image (http://redbot.org/)
Please attach the access.log output.
I suspect that there is a misunderstanding of how it works.
Eliezer
On 10/11/2013 03:23 PM, Amos Jeffries wrote:
On 12/10/2013 1:05 a.m., Calode
On 12/10/2013 1:46 a.m., Calode wrote:
Please correct me if I am wrong!?
The Cache-Control is: public, max-age=21600
If I understand correctly, the refresh patterns for such a case are to be
handled with: ignore-auth and override-expire
There is no auth to begin with.
Public has no mean
On 12/10/2013 1:05 a.m., Calode wrote:
Hi guys,
I am testing Squid 3.4.0.2 on some jpg from ytimg, and it seems I can't make
Squid cache them. No matter what I tried, it just doesn't cache!
My conf:
acl => acl rewritedoms url_regex -i \.ytimg\.com.*.(jpg|png)
I tried a bunch of refresh patterns...
refresh => refresh_pattern \
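A refresh_pattern that actually overrides origin cache controls looks roughly like the sketch below. The pattern, times, and flag set are illustrative only; some flags have been added or removed across Squid 3.x releases, and forcing caching this way violates HTTP.

```
# Illustrative only: treat ytimg images as fresh for up to a week,
# overriding Expires and reload requests (violates HTTP).
refresh_pattern -i \.ytimg\.com/.*\.(jpg|png)$ 1440 80% 10080 \
    override-expire ignore-reload ignore-private
```

Note that refresh_pattern rules are matched in order, so a rule like this must appear before the catch-all `refresh_pattern . 0 20% 4320` line.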
Thanks very much, Amos.
I will do more math on the statistics I have captured, now keeping in mind
what you said, just to get a better understanding of the behaviour of these
proxies.
As for the code diving, I'm not sure I have the necessary skills.
Thanks,
Carlos Defoe
On Fri, Mar 29, 2013
On 30/03/2013 12:48 a.m., Carlos Defoe wrote:
Hello,
I'm investigating cache manager statistics (60 min).
In one sample, I have:
client_http.requests = 22.785904/sec
client_http.hits = 6.354478/sec
client_http.errors = 0.006388/sec
server.http.requests = 16.956939/sec
server.http.errors = 0.00/sec
aborted_requests = 0.094992/sec
My thoug
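The ratios implied by those counters can be computed directly; this is plain arithmetic on the figures quoted in the sample above.

```python
# Rates from the cachemgr sample quoted above (per second, 60 min window).
client_requests = 22.785904
client_hits = 6.354478
server_requests = 16.956939

hit_ratio = client_hits / client_requests        # fraction answered from cache
fetch_ratio = server_requests / client_requests  # fraction forwarded upstream

print(round(hit_ratio * 100, 1))    # request hit ratio, percent
print(round(fetch_ratio * 100, 1))  # upstream fetch ratio, percent
```

Note the two fractions can sum to more than 100% because revalidation hits generate both a client hit and an upstream request.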
Hi Amos,
Thanks for your help. By adding "generate-host-certificates=on" to the config
I could see the host servers' certificates being mimicked:
https_port 3129 intercept generate-host-certificates=on
cert=/etc/squid/ssl_cert/myCA.pem ssl-bump
Regards,
Prasanna
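The working line above is usually paired with a certificate cache size and an ssl_bump rule. The sketch below is Squid 3.3-era only; option spellings changed in later releases, and the paths and size are illustrative.

```
# Sketch for Squid 3.3 (ssl_bump modes were renamed in later versions).
https_port 3129 intercept ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
ssl_bump server-first all
```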
On 2/15/13, Amos Jeff
On 15/02/2013 2:23 a.m., Prasanna Venkateswaran wrote:
Hi,
I have been trying to set up Squid to intercept https traffic without client
(read: browser proxy) changes. I am using the latest Squid 3.3.1. When I
actually open an https site I still see the certificate with the parameters
I provided (for myCA.pem) and I don't see any of the
Hi all,
I will make use of your suggestions, but this is not just Netflix related;
basically, whatever site I visit I get this error:
LookupHostIP: Given Non-IP 'signup.netflix.com': Name or service not known
with the hostname at hand varying, of course.
Regards
On Tue, Dec 18, 2012 at
On 18/12/2012 1:31 p.m., Joshua B. wrote:
Netflix doesn't work through Squid.
The only option you have to allow Netflix to work through a proxied
environment, without adding exceptions on all your clients, is to put this
code in your configuration file:
acl netflix dstdomain .netflix.com
cache deny netflix
That allows Netflix to fully
Hi,
I am trying to set up an HTTPS transparent proxy with the latest stable Squid
compiled with --enable-ssl. The problem is that the Squid server returns a
"connection refused" error, but the thing is that it was trying to connect to
itself. I also checked using tcpdump, and actually no https requests are l
Hi friends,
I built a Squid reverse proxy for our web server behind it.
It's done for http.
But I don't know how to apply the SSL certificate I bought from GoDaddy.
Someone give me some help, please!
This is my reverse proxy config:
## reverse proxy ##
http_port 8008
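For the certificate question, the usual shape is an https_port line in accel mode pointing at the PEM files. This is a sketch only: the hostname, paths, and peer address are illustrative, and the GoDaddy intermediate bundle typically has to be combined with the site certificate (how depends on the Squid version).

```
## reverse proxy over TLS (illustrative hostname and paths) ##
https_port 443 accel defaultsite=www.example.com \
    cert=/etc/squid/certs/example.com.crt \
    key=/etc/squid/certs/example.com.key
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=webserver
```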
On 27/06/2012 9:41 p.m., Ming-Ching Tiew wrote:
I have a configuration where if I start Squid with -N, it works. But if I run
it without that, I get "child started", "child exited" a few times and
eventually the parent process dies too. Because there is nothing in between
the 'started' and 'exited' of the child process, I have no clues a
On 20.06.2012 03:48, Diego Maciel Gomes wrote:
Hi all!
This is my first post. I have a doubt about how to use the max_user_ip ACL.
Well, I put it in my squid.conf, look:
acl max_user max_user_ip -s 1
http_access deny max_user
I'm running Squid 3.0.STABLE25.
Please consider an upgrade. Ser
I saw that max_user_ip doesn't show up in yellow font. Is it a problem? Mayb
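max_user_ip only has a username to count against once proxy authentication is in place, so the two lines quoted above normally sit alongside an auth setup. A hedged sketch follows; the helper path and name are illustrative and differ between Squid versions (older releases ship it as ncsa_auth).

```
# max_user_ip needs an authenticated username to count IPs against.
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl authed proxy_auth REQUIRED
acl max_user max_user_ip -s 1   # -s: strictly deny further IPs
http_access deny max_user
http_access allow authed
http_access deny all
```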
Gentlemen,
I'm here again asking for your help. I currently have 02 proxy servers doing
RR balancing with heart... But I have been facing a big problem with my
users' accounts: they are being blocked automatically by Active Directory,
probably due to a high number of handshake attempts
We need help resolving this bug building Squid 3.2 on OpenIndiana:
http://bugs.squid-cache.org/show_bug.cgi?id=3463
Is anybody able to test whether this still exists in the latest Squid-3.2
daily snapshot?
And if so, provide an answer to the request for info in comment #2 of that bug.
Amos
[mailto:jferreira...@gmail.com]
Sent: 15 March 2012 10:52 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Help-me
Hello,
I'm trying to configure Squid 3.1.19 on CentOS 6.0 authenticating with Active
Directory; the helper is Negotiate authentication with Kerberos.
Infrastructure:
Squid: 3.1.19
Operating System: Windows Server 2008 R2 and CentOS 6.0
Other software: Winbind and Kerberos.
Problem: Every ti
Hello again,
Does anyone else have any ideas on this?
Thank you,
James
- Original Message -
From: "James Ashton"
To: "Amos Jeffries"
Cc: squid-users@squid-cache.org
Sent: Tuesday, March 13, 2012 8:44:54 AM
Subject: Re: [squid-users] Help with a tcp_miss/200 issue
T
"
To: squid-users@squid-cache.org
Sent: Monday, March 12, 2012 10:39:13 PM
Subject: Re: [squid-users] Help with a tcp_miss/200 issue
On 13.03.2012 03:13, James Ashton wrote:
> Any thoughts guys?
>
> This has me baffled. I am digging through list archives, but nothing
> relevant so far
On 13.03.2012 03:13, James Ashton wrote:
Any thoughts, guys?
This has me baffled. I am digging through the list archives, but nothing
relevant so far.
I figure it has to be a response header issue; I just don't see it.
Could be. You will need to know the headers being sent into Squid
"squid1.ke
- Original Message -
From: "James Ashton"
To: squid-users@squid-cache.org
Sent: Friday, March 9, 2012 9:45:07 AM
Subject: [squid-users] Help with a tcp_miss/200 issue
Hello all,
I am trying to improve caching/acceleration on a series of WordPress sites.
Almost all objects are being cached at this point other than the page HTML
itself; all I am getting there is TCP_MISS/200 log lines.
The request is a GET for the URL http://planetphotoshop.com
At the moment
On 7 February 2012 08:45, Stephen McGuinness wrote:
> I am trying to force the users behind my proxy into a human-interaction ACL
> at a certain time every night. I have it working pretty well, but there is
> still traffic that is not getting blocked.
>
> From what I can figure out
On 11/02/2012 19:03, João Paulo Ferreira wrote:
Is there any way to know what parameters were used by the yum installation?
2012/2/11 Andrew Beverley:
> On Sat, 2012-02-11 at 11:36 -0200, João Paulo Ferreira wrote:
>> Does anyone know how I can recompile the Squid that was installed using
>> yum (CentOS)?
> I've never used yum, but y
On Sat, 2012-02-11 at 11:36 -0200, João Paulo Ferreira wrote:
> Does anyone know how I can recompile the Squid that was installed using
> yum (CentOS)?
I've never used yum, but you should be able to recompile by downloading
the packaged sources. The following page will probably help:
http
Hello,
Does anyone know how I can recompile the Squid that was installed using yum
(CentOS)?
I need to change the parameter --with-filedescriptors=16384 to 10.
Thank you
--
Best regards,
João Paulo Ferreira
Computer Science Student
+55 (71) 9297-1260
jferreira...@gmail.co
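--with-filedescriptors sets a compile-time table size, but the operating-system limit of the running process matters just as much. A quick, portable way to inspect the current limit (standard Python on any platform with the resource module):

```python
import resource

# Soft limit = currently enforced; hard limit = ceiling it can be raised to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("file descriptors: soft=%d hard=%d" % (soft, hard))
```

If the OS soft limit is lower than Squid's built-in value, the build-time number never takes effect; raising it is done via ulimit or /etc/security/limits.conf rather than by recompiling.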
I am trying to force the users behind my proxy into a human-interaction ACL
at a certain time every night. I have it working pretty well, but there is
still traffic that is not getting blocked.
From what I can figure out so far, if connections are active before the time
ACL kicks in,
Thanks guys! <--- noob! It worked! I did try apt-get purge by itself; I
wasn't aware that I needed to also include the names of the packages, but IT
WORKED! THANK YOU! Well, tomorrow I'll get back to compiling, and I will
check out that link of prerequisites for building Squid, Amos.
THANKS AGAIN! BACK IN PR
On 31/12/2011 10:45 p.m., Pieter De Wit wrote:
On 31/12/2011 11:56 p.m., someone wrote:
THANK YOU for your response, Peter.
deviant:/home/devadmin# dpkg -l | grep squid
ii  sarg       2.2.5-2          squid analysis report generator
rc  squid      2.7.STABLE9-2.1  Internet object cache (WWW proxy cache)
ii  squid-cgi
On 31/12/2011 21:32, someone wrote:
OK, I rm -rf'd all directories named squid from my box, thinking that
attempting to do a fresh install afterwards would fix everything. NOPE, and
wtf, apparently the install binary won't recreate the directories now, yay!
wtf symlink madness. Any suggestions how to just get squid to reinstall
from apt w
Hi all,
I was looking through the archives and kind of found some answers, but I
wanted to make sure. I had a few questions, actually.
1) It looks like Squid supports a single-forest, multiple-domain setup; I
found the following thread:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Single-Forest
On Fri, 23 Sep 2011, Bill Arlofski wrote:
unsubscribe
Every message from the list contains how to unsubscribe in the message
headers. Please take a look there for where to send your unsubscribe request.
--
John Hardin KA7OHZ    http://www.impsec.org/~jhardin/
jhar...@imps
unsubscribe
On Tue, 20 Sep 2011 22:31:29 +, Momen, Mazdak wrote:
Hi,
We're configuring our site, which is hosted on an IIS server, to use our
Squid server as a proxy (using the defaultProxy setting in machine.config).
I'm trying to decipher the access log, or rather understand why it is using
the same IP address. For example, one entry is:
1316557568.358 149
On 10/09/11 19:03, Kumar P wrote:
Hi dear,
I am Kumar; here is my Squid configuration file (Squid v3.0).
I would like to give specific users access to specific web content, but with
this configuration file, if I give permission for a specific user to access
the tutorial, social networking is blocked but movies are accessible.
On Wed, 8 Jun 2011 20:02:20 -0300, Soporte Técnico wrote:
Hi Amos, thanks for the help. Two questions:
acl even src 192.168.0.0/0.0.0.1
would that work?
No. A client IPv4 address with the first 31 bits erased to zero will not
match 192.168.*
Is there any detailed information about sourcehash? I tested befo
sourcehash.
Jorge.
-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, 8 June 2011 12:25 a.m.
To: squid-users@squid-cache.org
Subject: Re: [squid-users] help with acl src and par or impar ip (odd number
or even number, I think)
On Tue, 7 Jun
On Tue, 7 Jun 2011 13:35:33 -0300, Soporte Técnico wrote:
I have 2 parents for my main Squid (FreeBSD).
I want to load-balance so that one parent gets the even ("par") IPs and the
other the odd ("impar") IPs.
Ex.
acl pares src 192.168.0.10, 192.168.0.12, 192.168.0.14, 192.168.0.16
…
192.168.0.254
acl imp
I suppose you are using a network with a /24 mask. I think it would be better
to use ACLs with a /25 mask, saying that the first network is 192.168.0.0/25
and the second network is 192.168.0.128/25. That way your first network could
be 192.168.0.1-192.168.0.126 and your second network would be
192.168.0.128-
I have 2 parents for my main Squid (FreeBSD).
I want to load-balance so that one parent gets the even ("par") IPs and the
other the odd ("impar") IPs.
Ex.
acl pares src 192.168.0.10, 192.168.0.12, 192.168.0.14, 192.168.0.16
192.168.0.254
acl impares src 192.168.0.11, 192.168.0.13, 192.168.0.15, 192.168.0.17
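The odd/even split being asked for is just the parity of the last octet, and since (per the replies in this thread) a wildcard mask like 0.0.0.1 will not express it in a src ACL, the addresses end up being enumerated. Generating the two lists is trivial; this is an illustrative sketch, not part of Squid.

```python
def last_octet_parity(ip):
    """Return 'par' (even) or 'impar' (odd) for an IPv4 address's last octet."""
    return "par" if int(ip.rsplit(".", 1)[1]) % 2 == 0 else "impar"

# Generate the two enumerated ACL lists for 192.168.0.10 - 192.168.0.254.
pares = ["192.168.0.%d" % n for n in range(10, 255) if n % 2 == 0]
impares = ["192.168.0.%d" % n for n in range(10, 255) if n % 2 == 1]
```

The resulting lists can be written one address per line into files referenced with `acl pares src "/etc/squid/pares.txt"`, which scales better than inline lists.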
On 26/05/11 23:41, Camilo Cadena wrote:
hi,
the iptables configuration I'm using:
iptables -t nat -P PREROUTING ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
You have nothing involving NAT interception for the pro
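The point in the reply above is that MASQUERADE alone does not divert client port-80 traffic into Squid; interception needs a REDIRECT (or DNAT) rule in the nat PREROUTING chain. A sketch, assuming clients arrive on the wifi interface and Squid listens on an intercept port; the interface name and port are illustrative:

```
# Divert client HTTP to Squid's intercept port (wlan0 and 3128 illustrative).
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
# Keep the existing NAT for everything else.
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
```

The matching Squid side would be an `http_port 3128 intercept` line (`transparent` in older releases).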
On 26/05/11 20:48, Camilo Cadena wrote:
hi, my name is Camilo and I've just finished my Squid and iptables
configuration.
The thing is, I have an Ubuntu Server 10.04 with Squid on it and a 3G
router. So, first I built my wifi network, after that I set up a static IP
address for my server, and at the end I installed Squid. When I use an appl
On 17/05/11 21:57, Le Trung Kien wrote:
> Hi, I use both HEAD and GET and always get a MISS for invalid URLs, but
> with valid URLs a HIT is still returned.
> Kien Le.
You said "v4"; which exact release version number is this?
NP: and if it's older than 3.1.9, can you try and see if an upgrade to
3.1.10
On 13/05/11 17:36, Le Trung Kien wrote:
I have just added the hard Expires value, but still MISS:
squidclient -m HEAD http://invalid_URL
HTTP/1.0 404 Not Found
Cache-Control: public
Content-Length: 1635
Content-Type: text/html
Expires: Sat, 14 May 2011 16:00:00 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Fri, 13 May 2011 05
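The fix visible in this exchange replaced the earlier invalid `Expires: 1000` with a proper HTTP-date, which is the only absolute form caches can parse. Generating one in RFC 1123 format is a one-liner in standard Python (the one-hour offset is illustrative):

```python
import time
from email.utils import formatdate

# An absolute Expires value must be an RFC 1123 HTTP-date, e.g. 1 hour ahead.
expires = formatdate(time.time() + 3600, usegmt=True)
print("Expires:", expires)
```

Relative lifetimes are more robustly expressed with `Cache-Control: max-age=3600`, which takes precedence over Expires anyway.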
On 13/05/11 16:34, Le Trung Kien wrote:
I have just modified the HTTP response header of the IIS servers:
squidclient -m HEAD http://invalid_URL
HTTP/1.0 404 Not Found
Cache-Control: public
Content-Length: 1731
Content-Type: text/html
Expires: 1000
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Fri, 13 May 2011 04:30:09 GMT
X-Cache: MI
On 13/05/11 15:55, Le Trung Kien wrote:
Hi,
On the origin servers I'm using IIS 6.0, and 404b.html is the page returned
when a client requests a non-existing page.
I attempted to add a header like this to that page:
The page cannot be found
This header is the same on all pages generated by our web applications and
could be cached.
On 12/05/11 17:10, Le Trung Kien wrote:
I realized that the server replies with both 403 and 404.
About 404: I don't know how to cache a "404 File Not Found" reply from the
origin servers; should I add a default error page to the web application for
invalid URLs?
I tested and saw that the cache misses on those URLs because we don't have
a default error
On Wed, 11 May 2011 10:01:59 +0700, Le Trung Kien wrote:
Hi, I checked "negative_ttl" and it's definitely not in my squid.conf :)
And I'm also checking our origin servers for 404 and 30x return codes, in
case they don't work properly.
I have one more question, just to be sure: will Squid remember invalid URLs
(for a moment) and return the error page without val
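On the "will Squid remember invalid URLs" question: negative caching of error replies is controlled by the negative_ttl directive mentioned above. A hedged sketch; note that newer Squid 3 releases default it to 0 because a non-zero value violates HTTP:

```
# Remember 4xx/5xx error replies for 5 minutes instead of refetching each
# time. Non-zero values violate HTTP; the default is 0 in newer Squid 3.
negative_ttl 5 minutes
```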
On 10/05/11 20:58, Le Trung Kien wrote:
Hi, we're trying Squid v3 as a reverse proxy, and find that we receive too
many requests for old invalid URLs from clients; this makes our Squid caches
slow down our origin servers by attempting to send requests to retrieve
information from them.
This is from our squ
On Mon, 9 May 2011 18:15:09 +0200, Luiz Gomes wrote:
> Hello everybody,
> I have a problem with the squid.conf file.
> If I open an ssh tunnel for http_proxy, Squid blocks all the content I
> have set. That's GREAT!
how? why?
> ssh user@remote.server -L 3128
> setting the browser with HTTP proxy: localhost, port: 3128
> Nice, I have acl and http_allow/deny working fine!!
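The `-L 3128` form quoted above is abbreviated; OpenSSH's local-forward option names both the local port and the remote destination. A sketch of the full invocation, with illustrative hostnames and the assumption that Squid listens on port 3128 on the remote server:

```
# Forward local port 3128 to the Squid instance on the remote server.
ssh -L 3128:localhost:3128 user@remote.server
# Then point the browser at HTTP proxy localhost, port 3128, as described.
```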
Hi,
I'm trying to configure Kerberos authentication for Squid. I'm running Squid
3.1.12 and Windows 2008 R2 SP2. I have followed the Kerberos authentication
guide on squid-cache.org and many other guides, but I always end up with
these logs in my cache.log. My client browser keeps prompting for username/
Hi Amos,
First, big thanks. By putting "forwarded_for transparent" and "via off" in
the config, the host info at www.whatismyip.com is removed, and there is
also no email-view problem at hotmail or live.com any more. All this
configuration works perfectly with Squid as the router.
But the problem is not solved with the router using WCCP2.
At L
On Sun, 17 Apr 2011 23:21:44 +0600, AZHAR CHOWDHURY wrote:
Hi Amos,
OK, it was my fault that I posted before running it in the real network with
WCCP. We are running Squid+TPROXY under policy-based routing without any
major trouble (please see below for the problem we are facing).
This week we will move Squid from PBR to WCCP. The mentioned example is
based on vlan dot1q, let
On 17/04/11 05:14, AZHAR CHOWDHURY wrote:
Hi,
I am following http://wiki.squid-cache.org/Features/Tproxy4 strictly but
failed to configure it with a Cisco router & WCCP2.
My setup is as follows:
Client PCs >---[Core switch]>>---[Edge Cisco router with WCCP2]--->Internet
On 20/03/11 06:38, Jim Binder wrote:
Think I finally figured it out... It was internal routing, as I had expected.
Remember, eth0 (inside), eth1 (admin), eth2 (inet)...
The issue was that I had two interfaces on the same network, 192.168.1.x
(br0 and eth1): one being the bridge (br0) and th
On Sat, 19 Mar 2011 21:10:51 -, Steve wrote:
Hi all,
I wonder if anyone can help. I am using Squid 3.2.0.4 compiled with
--disable-ipv6. Whenever I access hosts that have AAAA records as well as A
records, an attempt is made to access them using IPv6. My network doesn't
have IPv6, so it is no surprise the attempts to connect to IPv6
destin
Think I finally figured it out... It was internal routing, as I had expected.
Remember, eth0 (inside), eth1 (admin), eth2 (inet)...
The issue was that I had two interfaces on the same network, 192.168.1.x
(br0 and eth1): one being the bridge (br0) and the other being the admin
interface I wa
On 16/03/11 22:03, Jim Binder wrote:
Amos,
Back at it again tonight -- so, when you did this (and I'm assuming you have
-- maybe incorrectly ;) ), how many NICs did you have enabled?
I've only had login with one client machine briefly that was doing it.
Worked perfectly. The rest, including
Also, for grins, I just moved to Ubuntu 11.04 with the same config and
tested with both 2.7.STABLE9 and 3.HEAD, and I still get it to work.
It's running on L
Amos,
Thanks for the follow-up and for the reminder about SELinux, but at this
point I have it off (I don't think I need to relabel after turning it off --
anyone know?).
I'm at a loss too -- starting to add more debugging logic (maybe I will even
instrument a kernel) to see if I can figure out what's goi
On Tue, 15 Mar 2011 07:41:28 -0700, Jim Binder wrote:
If I try to add the route, both fail with a "file exists" error:
[root@fw01 ~]# ip route add local 0.0.0.0/0 dev eth0 table 100
RTNETLINK answers: File exists
[root@fw01 ~]# ip route add local 0.0.0.0/0 dev eth2 table 100
RTNETLINK answers: File exists
James S. Binder
Vice President, Engineering
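The "File exists" errors above suggest the local route is being added twice, on different NICs. The TPROXY recipe on the wiki page cited earlier in this archive adds it once, on the loopback device, paired with an fwmark rule; a sketch of those standard commands (table and mark numbers are the wiki's conventional values):

```
# Route TPROXY-marked traffic into the local stack once, via lo.
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```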
On 15/03/11 20:22, Jim Binder wrote:
Trying this one more time to see if anyone might know what's wrong in
getting my transparent bridging with Squid to work.
Config... pings work through the box (so the bridge is working; however, the
3129 socket never pops with an HTTP request).
Admin on eth1, Internet on eth0 and inside (client) i
Osmany,
I can help you, but I think it is better to do this off-list.
You can send to my private email:
- the latest version of the script, and
- the unedited relevant lines from access.log
Marcus
Osmany wrote:
Thanks for the reply. OK, so now I've modified the script with your
suggestion, and I get this in my access.log:
http://dnl-16.geo.kaspersky.com/ftp://dnl-kaspersky.quimefa.cu:2122/Updates/.com/index/u0607g.xml.klz
I'm pretty sure this is not working for the clients. I'm looking for it to
return som
Osmany,
Look in access.log; it should say what is happening.
I expect this:
... TCP_MISS/301 GET http://kaspersky
... TCP_MISS/200 GET ftp://dnl-kaspersky.quimefa.cu:2122/Updates
And does the client use Squid for the ftp protocol?
Also, the RE matches too many strings.
I recommend to r
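"The RE matches too many strings" can be made concrete: an unanchored pattern used for rewriting also fires on URLs that merely contain the substring, which is how mangled results like the one in the access.log above arise. A small illustration; both patterns are hypothetical, not the poster's actual script.

```python
import re

# Hypothetical rewrite match: send Kaspersky update fetches to a mirror.
loose = re.compile(r"kaspersky.com")  # unanchored, and the dot is unescaped
strict = re.compile(r"^http://dnl-\d+\.geo\.kaspersky\.com/")  # anchored prefix

urls = [
    "http://dnl-16.geo.kaspersky.com/index/u0607g.xml.klz",
    "http://example.com/why-kaspersky-com-matters.html",
]
print([bool(loose.search(u)) for u in urls])   # [True, True]  -- too broad
print([bool(strict.search(u)) for u in urls])  # [True, False] -- intended
```

Anchoring the pattern at the start of the URL and escaping the dots keeps a rewrite rule from firing on look-alike URLs.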
1 - 100 of 792 matches