On 23/10/2012 5:07 a.m., Ben wrote:
Hi,
My Squid 3.2.3 (latest version) restarts automatically with the error
"FATAL: Bungled (null) line 192: icap_retry deny all". What could be the
reason behind this problem, and how do I resolve it?
Did you ./configure using --enable-icap-client ?
Amos
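(For context: that FATAL message appears when squid.conf contains ICAP directives such as icap_retry but the binary was built without ICAP support. A minimal rebuild sketch, assuming the usual source-tarball layout and no other ./configure options:)

```shell
# Rebuild with the ICAP client compiled in, so that icap_retry
# and the other icap_* directives in squid.conf can be parsed.
# (Paths and any extra ./configure options are assumptions.)
cd squid-3.2.3
./configure --enable-icap-client
make
sudo make install
```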
Thank you for replying so quickly. I'll upgrade my squid.
However, in this case we are using 2673 of 4096, and Squid is stuck because
the number of reserved file descriptors rises from 100 to 1400.
The normal situation is
File descriptor usage for squid:
Maximum number of
Hi,
On 23/10/2012 5:07 a.m., Ben wrote: [...]
Did you ./configure using --enable-icap-client ?
Yes, i
On 23/10/2012 8:10 p.m., Ben wrote:
Hi,
On 23/10/2012 5:07 a.m., Ben wrote: [...]
Did you ./configure using --enable-icap-client ?
Hello all,
I am trying out 3.3.0.1 beta on Fedora 16 64-bit (kernel 3.4.11-1.fc16.x86_64
#1 SMP).
I have created an RPM using the same spec file and patches as for 3.2.1 (which
I have been using for a month without any issues).
In squid.conf, I have max_filedescriptors 4096
When I start squid
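(A minimal sketch of the descriptor setting in play; the 4096 value is from the message above, the comment is an assumption about the usual setup:)

```
# squid.conf: ask the OS for up to 4096 file descriptors at startup.
# The OS-level limit (ulimit -n) must be at least this high as well,
# or Squid will be capped below this value.
max_filedescriptors 4096
```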
On 23/10/2012 7:53 p.m., RODRIGUEZ CEBERIO, Iñigo wrote:
Thank you for replying so quickly. I'll upgrade my squid.
However, in this case we are using 2673 of 4096, and Squid is stuck because
the number of reserved file descriptors rises from 100 to 1400.
Oh right yes, Squid will
This problem starts randomly and recovers randomly too (after restarting,
rebooting, etc.). We restart httpd and it works fine for 2 minutes, then the
problem starts again; we reboot the whole server and the problem appears
again; sometimes the problem disappears after removing the cache directory and
Hello guys,
I'm going crazy!
I am a student and I am working with Squid on a project about content
delivery networks.
While setting up the system (all in IPv6) I have run into some issues.
To set it up I followed http://wiki.squid-cache.org/Features/Tproxy4 because
I need a transparent proxy and
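(For reference, the wiki page mentioned above sets up TPROXY with rules along these lines. This is a hedged IPv6 sketch; the port, mark, and table numbers are the wiki's examples, not verified against this poster's setup:)

```
# Mark packets belonging to existing sockets and divert them locally
ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# Redirect new port-80 flows to Squid's tproxy port without NAT
ip6tables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# Route marked packets to the local machine
ip -6 rule add fwmark 1 lookup 100
ip -6 route add local ::/0 dev lo table 100
```

with a matching `http_port 3129 tproxy` line in squid.conf.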
On 23/10/2012 9:17 p.m., RODRIGUEZ CEBERIO, Iñigo wrote:
This problem starts randomly and recovers randomly too [...]
On 23/10/2012 10:21 p.m., alberto.desi wrote:
Hello guys, [...]
I'm trying to build a working deb package in Debian Squeeze with SSL support.
When I add the line
--enable-ssl \
to ./squid3-3.1.6/debian/rules I get an error; without this line,
(...) ./squid3-3.1.6# debuild -us -uc -b
works fine. What is wrong?
make[3]: Entering directory
Yes, I've read the mail but I think that it is better to post it here... ;-)
The main problem is that if I rewrite the $url (to localhost [::1], or to
[5001::52] / [3001::52], which are the addresses of the interfaces) it doesn't
work. If I rewrite $url with the 302: code in front it works... but the
behavior
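(The "302: in front" behaviour can be sketched as a tiny url_rewrite_program helper. This is a hypothetical example, not the poster's script: the addresses are the ones from the message, but the mapping itself is invented. In the Squid 3.1-era helper protocol, each input line starts with the URL, and the helper answers with a replacement URL, optionally prefixed with "302:".)

```python
import sys

def rewrite(line):
    # Parse the first token (the URL) from a url_rewrite_program
    # input line: "URL client/fqdn ident method ...".
    fields = line.split()
    if not fields:
        return ""
    url = fields[0]
    # Prefixing the answer with "302:" makes Squid send an HTTP
    # redirect to the client instead of silently fetching the new
    # URL, which is the behaviour the poster found to work.
    # (Hypothetical mapping between the two interface addresses.)
    if "[3001::52]" in url:
        return "302:" + url.replace("[3001::52]", "[5001::52]", 1)
    # An unchanged answer leaves the request untouched.
    return url

if __name__ == "__main__":
    for raw in sys.stdin:
        print(rewrite(raw.strip()))
        sys.stdout.flush()
```

Hooked up in squid.conf with a `url_rewrite_program /path/to/helper.py` line.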
Hi,
On 23/10/2012 8:10 p.m., Ben wrote: [...]
Did you ./configure using --enable-icap-client ?
On Mon, Oct 22, 2012 at 10:40 PM, Amos Jeffries squ...@treenet.co.nz wrote:
If I am reading that correctly you are saying the ICMPv6 'too big' packets
are not going to Squid, but to the client machine?
I will have to try and run a tcpdump on the edge router itself when I
get off work today, but
we are using squid 3.2.3 with smp workers.
in http://wiki.squid-cache.org/Features/SmpScale it is written that workers can
share logs.
the documentation also mentions that one should upgrade to the faster and
better logfile daemon.
we are using the following (log) config:
workers 3
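(The per-worker arrangement discussed in the replies could look like this in squid.conf; the log path is an assumption, and ${process_number} is the macro the SMP documentation provides for the worker's kid number:)

```
workers 3
# One access log per worker, written through the logfile daemon
# helper; ${process_number} expands to each worker's number.
access_log daemon:/var/log/squid/access-${process_number}.log squid
```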
The maintainers did not include libssl-dev and devscripts in the build-dep
dependencies, so building with SSL support is impossible without them.
Fortunately I am not the only one with this problem, so I found the answer:
http://www.banym.de/linux/build-squid-with-enable-ssl-on-debian
Regards.
Bartosz.
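(In command form, the fix described above amounts to something like the following; the package names are from the message, the rest is assumed:)

```shell
# Install the build dependencies the package metadata omits,
# then rebuild the package.
sudo apt-get install libssl-dev devscripts
cd squid3-3.1.6
debuild -us -uc -b
```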
hi,
we want to use squid with smp workers.
workers are running fine. now logrotate also works (although not as expected;
see my other mail [squid-users] question of understanding: squid smp/workers
and logfiles; it works only with an access_log for each worker, not one
single one).
now there is only
On 10/23/2012 1:53 PM, Matthew Goff wrote:
I don't know if Squid had already processed the packets for re-writing
before Wireshark displays them or not, so I'll check a tcpdump at the
router itself to see where it originally directed the packet to before
my Squid box had any chance to mangle it.
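(The capture described above could be done along these lines; the interface name is an assumption, and the ip6[40] byte is the ICMPv6 type only when no extension headers precede it:)

```
# ICMPv6 "Packet Too Big" is type 2; capture it on the router's
# upstream interface (interface name assumed).
tcpdump -ni eth0 'icmp6 and ip6[40] == 2'
```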
Hi,
On 23/10/2012 8:10 p.m., Ben wrote: [...]
Did you ./configure using --enable-icap-client ?
Hi everyone,
is it possible to have squid use the same Source Port to connect to the
Webserver as the client uses to connect to squid ?
My problem is the following setup:
Various Citrix Server
URL Filtering with Identity Awareness
Squid 3.1 as Cache Proxy
I had to
On 10/23/2012 8:55 PM, alexander@heidelberg.de wrote:
Any help is appreciated:)
Best Regards
Alex
Take a peek at TPROXY.
If you can share your squid.conf you can get better help.
(Note: your email looks bad, with lots of spaces.)
Regards,
Eliezer
--
Eliezer Croitoru
On Mon, Oct 22, 2012 at 10:40 PM, Amos Jeffries squ...@treenet.co.nz wrote:
If I am reading that correctly you are saying the ICMPv6 'too big' packets
are not going to Squid, but to the client machine?
Which would make it a TPROXY bug, since the outbound connection from Squid
is where the MTU
On 23.10.2012 23:08, alberto.desi wrote:
Yes, I've read the mail but I think that it is better to post it
here... ;-) [...]
On 24.10.2012 01:32, Rietzler, Markus (RZF, SG 324 /
RIETZLER_SOFTWARE) wrote:
we are using squid 3.2.3 with smp workers. [...]
On 24.10.2012 03:38, Rietzler, Markus (RZF, SG 324 /
RIETZLER_SOFTWARE) wrote:
hi,
we want to use squid with smp workers. [...]
On 24.10.2012 07:55, Alexander.Eck wrote:
Hi everyone,
is it possible to have squid use the same Source Port to connect to the
Webserver as the client uses to connect to squid ?
No. One gets errors when bind() is used on an already open port.
connect() and sendto() do not supply the OS
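(Amos's point about bind() can be demonstrated directly: once one socket holds a local address and port, a second bind() to the same pair fails with EADDRINUSE. A small self-contained Python sketch, using loopback as a stand-in:)

```python
import socket

def second_bind_result():
    # First socket takes an ephemeral port on loopback, standing in
    # for the client's already-open connection.
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s1.bind(("127.0.0.1", 0))
        port = s1.getsockname()[1]
        # Second socket (the proxy, in this analogy) tries to reuse
        # the exact same source address and port.
        try:
            s2.bind(("127.0.0.1", port))
            return "bound"
        except OSError:
            return "EADDRINUSE"
    finally:
        s1.close()
        s2.close()
```

This is why a proxy cannot simply mirror the client's source port on its outbound connection.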
Hi,
How do I cancel this mailing list?
Thanks
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Wednesday, October 24, 2012 8:35 AM
Subject: Re: [squid-users] Squid 3.1 Client Source Port Identity Awareness
On 24.10.2012 07:55,
On 24.10.2012 13:43, Kavin Xiao wrote:
Hi,
How do I cancel this mailing list?
Thanks
Follow the instructions right next to the ones you used to sign up:
http://www.squid-cache.org/Support/mailing-lists.html
Amos