Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Adrian Chadd
First thing - don't use ufs, use aufs.



Adrian

On Tue, Apr 29, 2008, Usrbich wrote:
> 
> Hi2all!
> 
> My users are experiencing problems with squid a few hours after it starts.
> I have the following configuration: P4 3GHz, 1.1 GB RAM, CentOS, Squid 2.6.
> This is a virtual machine and also a DNS server.
> The number of active users at one time is about 40-50. The problem is, when
> I start Squid, it works fine for a couple of hours, and then from the client
> side pages stop downloading for 10-20 secs, like everything stops, then it
> starts back up, and so on. When it stops, I hit the refresh button and it
> starts to download again. At that time, my free memory is around 400MB,
> that's some 45%, it isn't swapping, and CPU is low. I believe my
> configuration is wrong and need some help tuning it. Parameters are mostly
> at default values. So, I attach my squid.conf:
> 
> http_port 10.19.2.3:8080 
> 
> hierarchy_stoplist cgi-bin ?
> 
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> 
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> 
> cache_mem 32 MB
> 
> cache_swap_low 90
> cache_swap_high 95
> 
> maximum_object_size 4096 KB
> 
> memory_replacement_policy lru
> 
> cache_dir ufs /var/spool/squid 1500 16 256
> 
> access_log /var/log/squid/access.log squid
> 
> cache_log /var/log/squid/cache.log
> 
> cache_store_log /var/log/squid/store.log
> 
> pid_filename /var/run/squid.pid
> 
> check_hostnames on
> 
> dns_nameservers 10.19.2.3 195.29.149.196
> 
> hosts_file /etc/hosts
> 
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> 
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80# http
> acl Safe_ports port 21# ftp
> acl Safe_ports port 443   # https
> acl Safe_ports port 70# gopher
> acl Safe_ports port 210   # wais
> acl Safe_ports port 1025-65535# unregistered ports
> acl Safe_ports port 280   # http-mgmt
> acl Safe_ports port 488   # gss-http
> acl Safe_ports port 591   # filemaker
> acl Safe_ports port 777   # multiling http
> acl CONNECT method CONNECT
> 
> http_access allow all
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> 
> acl zbw_network src 10.19.0.0/16
> 
> http_access allow zbw_network
> http_access allow localhost
> http_access deny all
> 
> http_reply_access allow all
> 
> icp_access allow all
> 
> cache_mgr [EMAIL PROTECTED]
> 
> mail_from [EMAIL PROTECTED]
> 
> mail_program postfix
> 
> visible_hostname nameserver.zbw.intranet
> 
> snmp_port 1234
> 
> delay_class 1 2
> 
> delay_access 1 allow zbw_network
> delay_access 1 deny all
> 
> delay_parameters 1 -1/-1 128000/164
> 
> coredump_dir /var/spool/squid
> -- 
> View this message in context: 
> http://www.nabble.com/Surfing-hangs-after-period-of-time-tp16976682p16976682.html
> Sent from the Squid - Users mailing list archive at Nabble.com.

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Usrbich

I will try this. Considering my hardware configuration and the rest of the
parameters, do I need to change anything else, like cache_mem or the rest of
the cache_dir parameters?



Adrian Chadd wrote:
> 
> First thing - don't use ufs, use aufs.
> 
> 
> 
> Adrian
> 
> On Tue, Apr 29, 2008, Usrbich wrote:
>> [original message and squid.conf snipped]

-- 
View this message in context: 
http://www.nabble.com/Surfing-hangs-after-period-of-time-tp16976682p16977727.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Adrian Chadd
On Wed, Apr 30, 2008, Usrbich wrote:
> 
> I will try this. Considering my hardware configuration and the rest of the
> parameters, do I need to change anything else, like cache_mem or the rest
> of the cache_dir parameters?

It looks fine. I forgot about the delay pools stuff - try disabling delay
pools in your config and see if that changes anything. But start with changing
ufs -> aufs.



Adrian

> 
> 
> 
> Adrian Chadd wrote:
> > 
> > First thing - don't use ufs, use aufs.
> > 
> > 
> > 
> > Adrian
> > 
> > On Tue, Apr 29, 2008, Usrbich wrote:
> >> [original message and squid.conf snipped]

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] rtmp protocol

2008-04-30 Thread sonjaya
Dear all

I have set up squid with delay pools, but some users are using the rtmp
protocol on port 443 to download .flv files.
I tried to look in access.log but nothing is recorded - how come?

So my question is: how do I make the rtmp protocol join the delay pools, or
should I forward the rtmp protocol to squid?

Thanks

-- 
sonjaya
http://sicute.blogspot.com


Re: [squid-users] Reverse proxy with URL rewriting

2008-04-30 Thread Mathieu Kretchner

Have you visited this URL:
http://wiki.squid-cache.org/SquidFaq/ReverseProxy

I tried this but it doesn't work on my server and nobody here knows why!
If you can try it and give me feedback, that would be great.

Thanks.

Sylvain Beaux wrote:

Hi all,

I have squid installed and it works in reverse-proxy mode.
I would like to add two new backend servers, but those require URL
rewriting. The following diagram shows how the user will request the
servers:


client <-> | squid | <-> backend servers
   http://squid.ext.com/serv1  ->  http://server1.intranet.com/
   http://squid.ext.com/serv2  ->  http://server2.intranet.com/

I read in the FAQ that I need to use an external script/program which
modifies URLs on the fly.

But are there other possibilities to rewrite URLs, I mean native squid
capabilities?
If not, is it a feature scheduled on the roadmap?

The problem is that we have to implement different scripts depending on
whether we use a Linux or an NT system, so we can't be OS-agnostic.

Thanks,

Sylvain Beaux





Re: [squid-users] TCP connection failed - problem

2008-04-30 Thread Henrik Nordstrom
On Wed 2008-04-30 at 16:11 +1000, myocella wrote:
> I'm working on setting up Squid as proxy + cache on Linux (OpenSuSE
> 10.3) to serve
> around 300 concurrent connections. The proxy was working well for a
> few hours (or less),
> and then it started showing "TCP connection to xxx.xxx.xxx.xxx/8080
> failed" messages
> in cache.log file.

Check if there are any relevant messages in syslog.

> Does anyone know how to fix this?

First step is to figure out what is causing it.

It's most likely not Squid itself.

Regards
Henrik



Re: [squid-users] Reverse proxy with URL rewriting

2008-04-30 Thread Henrik Nordstrom

On Thu 2008-05-01 at 00:34 +0200, Sylvain Beaux wrote:

> I read in the FAQ that I need to use an external script/program which
> modifies URLs on the fly.

Yes.

> But are there other possibilities to rewrite URLs, I mean native squid
> capabilities?

There is some built-in rewrite capability in the upcoming 2.7 release.

But it's quite trivial to make the needed helper. A simple perl program
does it nicely (add one substitution line per rewrite):

#!/usr/bin/perl -p
# unbuffer output so squid gets each rewritten line immediately
BEGIN { $|=1; }
# one substitution per rewrite; "next" skips any remaining rules
# (the -p loop still prints the modified line)
s%^http://www1.example.com%http://intranet.example.com/www1% && next;
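
The helper could then be hooked into squid.conf along these lines (the path
is a placeholder; url_rewrite_program is the squid 2.6 directive name, while
squid 2.5 calls it redirect_program):

```
url_rewrite_program /usr/local/bin/rewrite.pl
url_rewrite_children 5
```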

But be warned that the web server in this kind of setup doesn't realize
that the externally requested URL is quite different from its own view
of the URL, and this will cause problems for any absolute URLs found in
the content, and on redirections sent by the web server, such as when you
request a folder without the trailing /.

> The problem is that we have to implement different scripts depending on
> whether we use a Linux or an NT system, so we can't be OS-agnostic.

Perl runs fine on both...

Regards
Henrik



[squid-users] Squid not caching text/html mediawiki pages

2008-04-30 Thread Marco
Hi,

I'm using Squid 2.5 with Mediawiki 1.11.

My squid.conf follows the instructions given at
http://meta.wikimedia.org/wiki/Squid_caching

The problem is that the text/html wiki pages are never cached (MISS), while
images are cached correctly (access.log shows a HIT on them).

The Squid related Mediawiki variables are set as follows:
$wgUseSquid = true;
$wgSquidServers = array('127.0.0.1');

How can I make Squid cache the html wiki pages too?
Any suggestions?

Thanks,
MH



Re: [squid-users] squid 2.4 and support.microsoft.com

2008-04-30 Thread Amos Jeffries

Les F wrote:

I am running squid 2.4 (not by choice); it's part of my Sidewinder firewall.

Users are complaining because they cannot get to support.microsoft.com.

I found a workaround that is good for 2.6 but won't work in 2.4:

acl support.microsoft.com dstdomain support.microsoft.com
header_access Accept-Encoding deny support.microsoft.com


The header_access line isn't supported in 2.4.

I am working on getting a separate (and newer) squid online, but until then
are there any rules I could apply in 2.4 that would solve my problem?



If you have any newer caches available to peer from, the solution
mentioned here may help:


 http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Testing transparent squid in VM

2008-04-30 Thread Amos Jeffries

Wundy wrote:


Amos Jeffries-2 wrote:
 
You should be able to use just:


  iptables -t nat -A PREROUTING -s ! 192.168.0.12 -p tcp --dport 80 -j REDIRECT --to-port 3128

  iptables -t nat -A POSTROUTING -j MASQUERADE



At this point I have added the iptables commands:
  iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128

  iptables -t nat -A POSTROUTING -j MASQUERADE

but it does nothing so far.


The "-s ! 192.168.0.12" is important (assuming squid is running on
192.168.0.12) to prevent forwarding loops - i.e. probably those timeouts
you mention squid having.





Amos Jeffries-2 wrote:

squid.conf:
   http_port 3128 transparent



In my squid.conf I haven't adjusted many things. You can look at it here,
should there be any more problems:
http://www.nabble.com/file/p16962017/squid.conf squid.conf
I did however have to enable ip4_forward since that was off.
I'm not that familiar with my debian distro, so stuff like that is helpful.


Ah forwarding. That kicked me the other day when a kernel upgrade turned 
it off.


Check your run-time setting: /proc/sys/net/ipv4/ip_forward should be '1'
  ( echo 1 >/proc/sys/net/ipv4/ip_forward )

The persistent settings are in /etc/sysctl.conf
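
The matching persistent line in /etc/sysctl.conf is:

```
net.ipv4.ip_forward = 1
```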

NAT might do with a check as well.
  lsmod  - look for something matching: *_nat



At this point squid behaves as follows:
the browser without proxy settings does not find squid and doesn't give a
web page.
If I point the browser towards the proxy server then any address I open
loads VERY VERY slowly and times out after a few mins.

Amos Jeffries-2 wrote:

If that still won't work:
  - Ensure that your squid has ONLY one transparent option 
(--enable-linux-netfilter) configured.

  - Check that squid is receiving requests (access.log or cache.log)
  - Check squid has access outbound (usually cache.log)
  - Check whether NAT is failing (cache.log)


squid is receiving requests if I point the browser to the proxy server,
otherwise nothing.



Okay, so this may seem simple, but is port-80 traffic from the browser
even going through the squid box naturally?

Take a look at the routing table on the browser's machine and check. The
default gateway is the machine all its traffic goes through. That should
be either the squid machine itself, or another machine which has been set
up to route the port-80 traffic to squid properly.


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Amos Jeffries

Usrbich wrote:

[original problem description snipped]

http_port 10.19.2.3:8080 


hierarchy_stoplist cgi-bin ?

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

cache_mem 32 MB


Pretty low for a machine with 1+ GB RAM.
You could probably bump this up to 128 or 256 without trouble. That 
would let a lot more happen in memory and bypass any storage slow-down.
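
For example, combining this with Adrian's ufs -> aufs suggestion from
earlier in the thread (illustrative values, not tuned for this machine):

```
cache_mem 256 MB
cache_dir aufs /var/spool/squid 1500 16 256
```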




cache_swap_low 90
cache_swap_high 95

maximum_object_size 4096 KB

memory_replacement_policy lru

cache_dir ufs /var/spool/squid 1500 16 256

access_log /var/log/squid/access.log squid

cache_log /var/log/squid/cache.log

cache_store_log /var/log/squid/store.log

pid_filename /var/run/squid.pid

check_hostnames on

dns_nameservers 10.19.2.3 195.29.149.196

hosts_file /etc/hosts

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT




http_access allow all


First thing:
  with the above line, none of the controls you wrote below will ever work.
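
For reference, a working order could look like this (a sketch using only
the ACLs already defined in this config; http_access rules are evaluated
top-down and the first match wins, so a blanket "allow all" at the top
disables everything after it):

```
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow zbw_network
http_access allow localhost
http_access deny all
```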



http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl zbw_network src 10.19.0.0/16

http_access allow zbw_network
http_access allow localhost
http_access deny all

http_reply_access allow all

icp_access allow all

cache_mgr [EMAIL PROTECTED]

mail_from [EMAIL PROTECTED]

mail_program postfix

visible_hostname nameserver.zbw.intranet

snmp_port 1234

delay_class 1 2

delay_access 1 allow zbw_network
delay_access 1 deny all

delay_parameters 1 -1/-1 128000/164

coredump_dir /var/spool/squid


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Amos Jeffries

Usrbich wrote:

Hi2all!

My users are experiencing problems with squid a few hours after it starts.
I have the following configuration: P4 3GHz, 1.1 GB RAM, CentOS, Squid 2.6. This
is a virtual machine and also a DNS server.
The number of active users at one time is about 40-50. The problem is, when I
start Squid, it works fine for a couple of hours, and then from the client side
pages stop downloading for 10-20 secs, like everything stops, then it starts
back up, and so on. When it stops, I hit the refresh button and it starts to
download again. At that time, my free memory is around


Hmm, just a random thought.
  Does this VM have any controls which might suspend the squid OS or 
network capabilities for any reason?


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] rtmp protocol

2008-04-30 Thread Amos Jeffries

sonjaya wrote:

Dear all

I have set up squid with delay pools, but some users are using the rtmp
protocol on port 443 to download .flv files.
I tried to look in access.log but nothing is recorded - how come?

So my question is: how do I make the rtmp protocol join the delay pools, or
should I forward the rtmp protocol to squid?

Thanks



HTTP != RTMP.

Squid is NOT a general-purpose proxy. It is a WEB proxy.

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Squid not caching text/html mediawiki pages

2008-04-30 Thread Amos Jeffries

Marco wrote:

Hi,

I'm using Squid 2.5 with Mediawiki 1.11.

My squid.conf follows the instructions given at
http://meta.wikimedia.org/wiki/Squid_caching

The problem is that the text/html wiki pages are never cached (MISS), while
images are cached correctly (access.log shows a HIT on them).

The Squid related Mediawiki variables are set as follows:
$wgUseSquid = true;
$wgSquidServers = array('127.0.0.1');

How can I make Squid cache the html wiki pages too?
Any suggestions?



Step 1: upgrade to 2.6

Step 2: configure squid as a proper accelerator for your site
  http://wiki.squid-cache.org/SquidFaq/ReverseProxy
  or if you want to use their site, I've updated the bottom section 
about squid 2.6 to show a config with at least a medium-level of security.
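
As a rough illustration of step 2, a minimal squid 2.6 accelerator setup
might look like this (the hostname, and Apache serving the wiki on port
8080, are assumptions for the example, not details from this thread):

```
http_port 80 accel defaultsite=wiki.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=wikiBackend
acl wikiSite dstdomain wiki.example.com
http_access allow wikiSite
cache_peer_access wikiBackend allow wikiSite
```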


Step 3: check that the page Cache-Control headers are set to allow caching.
  http://www.ircache.net/cgi-bin/cacheability.py

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Fwd: HTTP Transparent Proxy on OpenBSD 4.2

2008-04-30 Thread Amos Jeffries

Chris Benesch wrote:

Hi,

First of all, you should change "any to any" to something more restrictive,
like "10.0.0.0/8 to any". I don't think squid needs to read the packet filter
device; I've got a similar setup with 4.1 and it doesn't need to access the
packet filter directly.


Squid uses system calls to connect up ip-filter and ioctls for PF.
It does this at the highest priority it has available (root when able, 
or the effective-user).

If anything has changed in 4.2 to break this, we'd like to know.



To make OpenBSD reload the configuration file, the easiest way is to just
issue a pfctl -e -f /etc/pf.conf and it should reload the rules.  Just to
make sure you can do pfctl -d; pfctl -e -f /etc/pf.conf.  It will stop then
start pf again.

-Original Message-
From: Indunil Jayasooriya [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 28, 2008 8:38 PM

To: squid-users
Subject: [squid-users] Fwd: HTTP Transparent Proxy on OpenBSD 4.2


 What command do I have to issue to complete this task with PF on OpenBSD
 4.2? What should I do?

 Configuring pf
 The pf configuration is /etc/pf.conf. The file is documented in
 pf.conf(5). This is a minimal example of the required rdr rule. Make
 sure you also allow the redirected connections to pass, they'll have
 destination address 127.0.0.1 when the filter rules are evaluated.
 Redirection does not automatically imply passing. Also, the proxy must
 be able to establish outgoing connections to external web servers.

 int_if="gem0"
 ext_if="kue0"

 rdr on $int_if inet proto tcp from any to any port www -> 127.0.0.1 port 3128

 pass in on $int_if inet proto tcp from any to 127.0.0.1 port 3128 keep state
 pass out on $ext_if inet proto tcp from any to any port www keep state

 Note that squid needs to open /dev/pf in order to query the packet
 filter. The default permissions for this file allow access only to
 root. squid is running as user _squid, group _squid, so one way to
 allow access to squid is by changing the group ID of the file to
 _squid and make it group-accessible:

 # chgrp _squid /dev/pf
 # chmod g+rw /dev/pf

 pls click below URL for more

 http://www.benzedrine.cx/transquid.html


 --
 Thank you
 Indunil Jayasooriya






--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Usrbich

I think there's no control, why do you ask?


Amos Jeffries-2 wrote:
> 
> Usrbich wrote:
>> [original message snipped]
> 
> Hmm, just a random thought.
>Does this VM have any controls which might suspend the squid OS or 
> network capabilities for any reason?
> 
> Amos
> -- 
> Please use Squid 2.6.STABLE19 or 3.0.STABLE4
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Surfing-hangs-after-period-of-time-tp16976682p16982391.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] NO_CACHE

2008-04-30 Thread Tiago Durante
Hi!

Well, I don't think the ACL's name will make any difference... :)

Let me explain what I want. I have this site, let's say "tiago.com", and
I don't want squid to cache it. It isn't by my internal network that I'll
know what to cache or not; rather, I know the external hosts that I
shouldn't cache.

So, what I need is to tell my squid: "Dude, please don't cache
'tiago.com', ok!?" :)

I'll show all the ways that I've tried...

# 1
acl dontcachemrsquid dstdomain tiago.com
cache deny dontcachemrsquid

# 2
acl dontcachemrsquid dstdomain tiago.com
no_cache deny dontcachemrsquid

# 3
acl dontcachemrsquid src 10.1.1.0/24
cache deny dontcachemrsquid

# 4
acl dontcachemrsquid src 10.1.1.0/24
no_cache deny dontcachemrsquid
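
A quick way to check whether the deny is taking effect (a sketch; in the
default squid log format, field 4 of access.log carries the cache result
code, and the sample log entries below are made up):

```shell
# build a tiny made-up sample in the default access.log format
cat > /tmp/access.log.sample <<'EOF'
1209559977.471 252 10.1.1.5 TCP_MISS/200 801 GET http://tiago.com/a.gif - DIRECT/1.2.3.4 image/gif
1209559978.080 2 10.1.1.5 TCP_MISS/200 765 GET http://tiago.com/b.gif - DIRECT/1.2.3.4 image/gif
EOF

# count cache result codes for the domain; with a working "cache deny"
# you should only ever see TCP_MISS here, never TCP_HIT
grep 'tiago\.com' /tmp/access.log.sample | awk '{print $4}' | sort | uniq -c
```

Running the same grep/awk pipeline against the real /var/log/squid/access.log
while browsing the site would show any TCP_HIT entries, i.e. objects still
being served from cache.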


But I don't know if it's working... It doesn't seem like it, at least,
because the page, which is a horrible system made by some people here,
can't work with caching and gets all crazy and unformatted when reached
through squid. :(

What should I see in access.log? Is anybody using such a configuration?

Thanks a lot you all

All the best!

-- 
Tiago Durante

,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,
Perseverance is the hard work you do after you
get tired of doing the hard work you already did.
-- Newt Gingrich


>  Why are you naming it all_cache? It seems confusing, since it's the opposite
> of what you are wanting and not what is inside it either.
>
>  How about "acl localnet src 192.168.1.0/24" ?
>
>  Amos
>  --
>  Please use Squid 2.6.STABLE19 or 3.0.STABLE4
>


[squid-users] 2.6.STABLE19 and 2.6.STABLE20 missing from mirrors

2008-04-30 Thread Joshua Root

I sent this to info@, but have received no reply yet, so I figured I'd post
it here as well in case the person who checks that address is busy or
something.

I notice that ftp://ftp.squid-cache.org/pub/squid-2/STABLE/ seems to
have stopped being updated after the 2.6.STABLE18 release. Consequently,
none of the mirrors have 2.6.STABLE19 or 2.6.STABLE20.

- Josh



[squid-users] Squid sends TCP_DENIED/407 even on already authenticated users

2008-04-30 Thread Julio Cesar Gazquez
Hi.

We are starting to deploy digest based authentication on a large network, and 
we found a weird problem: Sometimes authenticated requests are answered by 
TCP_DENIED/407 responses.

Below is a sample from the access log:

1209559977.471252 192.168.2.223 TCP_MISS/200 801 GET 
http://www.deautos.com/img/top02.gif lboullo0 FIRST_UP_PARENT/localhost 
image/gif
1209559977.640 67 192.168.2.223 TCP_MISS/200 9208 GET 
http://www.deautos.com/img/tmp/img_comprar.jpg lboullo0 
FIRST_UP_PARENT/localhost image/jpeg
1209559977.647 50 192.168.2.223 TCP_MISS/200 9565 GET 
http://www.deautos.com/img/tmp/img_vender.jpg lboullo0 
FIRST_UP_PARENT/localhost image/jpeg
1209559977.656 77 192.168.2.223 TCP_MISS/200 5629 GET 
http://www.deautos.com/img/tmp/txt_comprar.jpg lboullo0 
FIRST_UP_PARENT/localhost image/jpeg
1209559977.657 63 192.168.2.223 TCP_MISS/200 655 GET 
http://www.deautos.com/img/img_flechita.gif lboullo0
FIRST_UP_PARENT/localhost image/gif
1209559978.080  2 192.168.2.223 TCP_DENIED/407 2765 GET 
http://www.deautos.com/img/img_flechita_blink.gif
lboullo0 NONE/- text/html
1209559978.163 87 192.168.2.223 TCP_MISS/200 2772 GET 
http://www.deautos.com/img/img_vender02.gif lboullo0
 FIRST_UP_PARENT/localhost image/gif
1209559978.219 97 192.168.2.223 TCP_MISS/200 707 GET 
http://www.deautos.com/img/img_flechita_blink.gif lboullo0 
FIRST_UP_PARENT/localhost image/gif

As you can see, the user is happily sending authenticated requests, yet at one 
point it receives a 407 response. 

We are not really sure, but this doesn't seem ok. Worst of all, in certain
cases this seems to be the cause of IE7 asking for authentication again.

We tried everything we could think of: raising the auth children limit,
disabling DansGuardian, and googling around, with no luck. Below is the auth
configuration.

=snip
auth_param digest program /usr/lib/squid/digest_ldap_auth 
  -b ou=People,ou=proxy,ou=Servers,o=MCR -u uid 
  -A l -D cn=nss,o=MCR -w x -e -v 3 -h ldap.pm.rosario.gov.ar

auth_param digest realm Clave Navegacion Internet
auth_param digest children 10
=snip

-- 
Julio César Gázquez
Area Seguridad Informática -- Int. 736
Municipalidad de Rosario


Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Usrbich

I have configured the parameters like you told me, but the same thing still
happens; now I have only about 30 MB of RAM left, and it isn't swapping yet...
Please help :-(


Usrbich wrote:
> 
> I think there's no control, why do you ask?
> 
> 
> Amos Jeffries-2 wrote:
>> 
>> Usrbich wrote:
>>> Hi2all!
>>> 
>>> My users are experiencing problems with squid few hours after it starts.
>>> I have following configuration: P4 3GHz, 1.1 GB RAM, CentOS, Squid 2.6.
>>> This
>>> is a virtual machine and also a DNS server.
>>> Number of active users at one time is about 40-50. The problem is, when
>>> I
>>> start Squid, it works fine for couple of hours, and then the behaviour
>>> from
>>> the client side is that pages stop to download 10-20 secs, like
>>> everything
>>> stops, then it starts back and so on. When it stops, I hit refresh
>>> button
>>> and then it starts to download again. In that time, my free memory is
>>> around
>> 
>> Hmm, just a random thought.
>>Does this VM have any controls which might suspend the squid OS or 
>> network capabilities for any reason?
>> 
>> Amos
>> -- 
>> Please use Squid 2.6.STABLE19 or 3.0.STABLE4
>> 
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Surfing-hangs-after-period-of-time-tp16976682p16988138.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] SSL Accel - Reverse Proxy

2008-04-30 Thread Tory M Blue
I was wondering if there is a way for Squid to pass some basic
information to the server indicating that the original request was secure,
so that the backend server will respond correctly.

Right now Squid handles the SSL and passes the request back to the server
via standard HTTP, and the application's check causes basically a loop,
because it wants to see the client using SSL and not standard HTTP.

This is only an issue with hostnames that are reachable on both 80 and 443,
as the application needs to know that someone came in secured and that the
Squid box will respond in kind.

Am I missing something basic? I'm not seeing it in the information
Squid currently passes. Otherwise the application could key off
the originating destination port or similar.
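
One approach worth sketching (an assumption on my part, not something stated in this thread): Squid 2.6's https_port accepts a front-end-https option, which makes Squid add a "Front-End-Https: On" header to requests it forwards, so the origin application can detect that the client connection was SSL. The certificate paths, hostname, and peer address below are placeholders.

```
# Hedged sketch of an SSL-accelerator setup; paths and hosts are placeholders.
# front-end-https=on adds "Front-End-Https: On" to forwarded requests so the
# backend can tell the original client connection was SSL.
https_port 443 accel cert=/etc/squid/cert.pem key=/etc/squid/key.pem \
    defaultsite=www.example.com front-end-https=on

# Forward to the origin over plain HTTP; the header above carries the
# "was SSL" signal for the application to key off.
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=webserver
```

The backend then checks for the Front-End-Https header instead of the raw scheme, avoiding the redirect loop described above.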

Thanks
Tory


Re: [squid-users] 2.6.STABLE19 and 2.6.STABLE20 missing from mirrors (solved)

2008-04-30 Thread Joshua Root

Joshua Root wrote:

> I notice that ftp://ftp.squid-cache.org/pub/squid-2/STABLE/ seems to
> have stopped being updated after the 2.6.STABLE18 release. Consequently,
> none of the mirrors have 2.6.STABLE19 or 2.6.STABLE20.


And, in reply to my own post, Duane has now added the missing files.

- Josh


Re: [squid-users] rtmp protocol

2008-04-30 Thread Paul Bertain
RTMP is Adobe's protocol used for streaming.  As Amos says, RTMP !=  
HTTP but it is going to be delivered over Ports 80 & 443, as you have  
already seen.  Adobe Flash Media Server contains a caching component,  
I believe, so if you control the content, you might want to look into  
that (rather expensive) option.


Paul

On Apr 30, 2008, at 5:12 AM, Amos Jeffries wrote:


> sonjaya wrote:
>> Dear all
>> I have setup squid with delay pools , but some user using rtmp
>> protocol and using port 443 port for download  file .flv
>> I try to see in access.log  but nothing  recorder , how come ...?
>> so my question how to make rtmp protocol to join in delay pools  or
>> should i forward rtmp protocol to squid ?
>> Thank's
>
> HTTP != RTMP.
>
> Squid is NOT a general-purpose proxy. It is a WEB proxy.
>
> Amos
> --
> Please use Squid 2.6.STABLE19 or 3.0.STABLE4




Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Usrbich

In cache.log, all I get is these messages:
2008/04/30 23:58:25| clientReadRequest: FD 121 (10.19.14.58:2014) Invalid
Request
2008/04/30 23:58:25| clientReadRequest: FD 112 (10.19.14.58:2013) Invalid
Request
2008/04/30 23:58:37| clientReadRequest: FD 170 (10.19.13.54:1317) Invalid
Request
2008/04/30 23:58:38| clientReadRequest: FD 121 (10.19.20.55:1235) Invalid
Request
2008/04/30 23:58:41| clientReadRequest: FD 169 (10.19.13.54:1318) Invalid
Request
2008/04/30 23:58:41| clientReadRequest: FD 169 (10.19.13.54:1319) Invalid
Request
2008/04/30 23:58:50| clientReadRequest: FD 126 (10.19.15.55:1662) Invalid
Request

Is that the cause of my timeout problem?

Is this startup output ok?
Memory usage for squid via mallinfo():
        total space in arena:   13768 KB
        Ordinary blocks:        13023 KB    265 blks
        Small blocks:               0 KB      5 blks
        Holding blocks:           244 KB      1 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:     744 KB
        Total in use:           13267 KB 95%
        Total free:               745 KB  5%


Usrbich wrote:
> 
> I have configured parameters like you told me, but still, same thing
> happens, but now, i have only about 30 MB's of RAM left, it isn't swapping
> yet... Please help :-(
> 
> 
> Usrbich wrote:
>> 
>> I think there's no control, why do you ask?
>> 
>> 
>> Amos Jeffries-2 wrote:
>>> 
>>> Usrbich wrote:
>>>> Hi2all!
>>>>
>>>> My users are experiencing problems with squid few hours after it starts.
>>>> I have following configuration: P4 3GHz, 1.1 GB RAM, CentOS, Squid 2.6. This
>>>> is a virtual machine and also a DNS server.
>>>> Number of active users at one time is about 40-50. The problem is, when I
>>>> start Squid, it works fine for couple of hours, and then the behaviour from
>>>> the client side is that pages stop to download 10-20 secs, like everything
>>>> stops, then it starts back and so on. When it stops, I hit refresh button
>>>> and then it starts to download again. In that time, my free memory is around
>>> 
>>> Hmm, just a random thought.
>>>Does this VM have any controls which might suspend the squid OS or 
>>> network capabilities for any reason?
>>> 
>>> Amos
>>> -- 
>>> Please use Squid 2.6.STABLE19 or 3.0.STABLE4
>>> 
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Surfing-hangs-after-period-of-time-tp16976682p16992490.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Squid-2.6.STABLE19: logfile_rotate 180 not working when changed from 90

2008-04-30 Thread [EMAIL PROTECTED]
Hi,

 I have been using Squid with Linux for many years, currently Squid-2.6.STABLE19.

 Recently I changed "logfile_rotate" from 90 to 180. I have
already used squid -k reconfigure and I have already stopped and
started Squid again.

 After changing it, I expected Squid to rotate files until 180 was
reached, but that does not happen. It stopped at 93 for both
access.log and cache.log.

 There is at least 24G ( from 38G) of free space in the logs
partition ( ext3 ). There are just 190 files in squid logs directory.

 There is no error in cache.log when squid -k rotate is called.
Computer clock is ok ( date and time ).

 There is free memory and when squid -k rotate happens, there is
almost zero requests to the proxy.

 Everything appears to be working fine, including a lot of ACLs,
authentication helpers, delay pools and squidGuard redirection.

 Have I missed something?

 Any suggestion?

 Thank you.

Regards,


Cássio


Re: [squid-users] Squid-2.6.STABLE19: logfile_rotate 180 not working when changed from 90

2008-04-30 Thread Adrian Chadd
I've not heard of any issues with logfile_rotate failing during a reconfigure.
Please log a bug report with bugzilla.


Adrian

On Wed, Apr 30, 2008, [EMAIL PROTECTED] wrote:
> Hi,
> 
>  I am using Squid-2.6.STABLE19 with Linux for many years.
> 
>  Recently I changed "logfile_rotate" from 90 to 180. I have
> already used squid -k reconfigure and I have already stopped and
> started Squid again.
> 
>  After changing, I expected that squid started to rotate files
> until 180 was reached, but it does not happen. It stopped at 93, to
> access.log and cache.log
> 
>  There is at least 24G ( from 38G) of free space in the logs
> partition ( ext3 ). There are just 190 files in squid logs directory.
> 
>  There is no error in cache.log when squid -k rotate is called.
> Computer clock is ok ( date and time ).
> 
>  There is free memory and when squid -k rotate happens, there is
> almost zero requests to the proxy.
> 
>  Everything appears to be working fine, including a lot of ACLs,
> authentication helpers, delay pools and squidGuard redirection.
> 
>  Have I missed something?
> 
>  Any suggestion?
> 
>  Thank you.
> 
> Regards,
> 
> 
> Cássio

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Surfing hangs after period of time

2008-04-30 Thread Adrian Chadd
On Wed, Apr 30, 2008, Usrbich wrote:
> 
> In cache.log, all I get is this messages:
> 2008/04/30 23:58:25| clientReadRequest: FD 121 (10.19.14.58:2014) Invalid
> Request
> 2008/04/30 23:58:25| clientReadRequest: FD 112 (10.19.14.58:2013) Invalid
> Request
> 2008/04/30 23:58:37| clientReadRequest: FD 170 (10.19.13.54:1317) Invalid
> Request
> 2008/04/30 23:58:38| clientReadRequest: FD 121 (10.19.20.55:1235) Invalid
> Request
> 2008/04/30 23:58:41| clientReadRequest: FD 169 (10.19.13.54:1318) Invalid
> Request
> 2008/04/30 23:58:41| clientReadRequest: FD 169 (10.19.13.54:1319) Invalid
> Request
> 2008/04/30 23:58:50| clientReadRequest: FD 126 (10.19.15.55:1662) Invalid
> Request
> 
> Is that the cause of my timeout problem?

It'd be nice to know what that is, but no, that in itself shouldn't hang
browsing activities.

> Is this startup output ok?
> Memory usage for squid via mallinfo():
>   total space in arena:   13768 KB
>   Ordinary blocks:        13023 KB    265 blks
>   Small blocks:               0 KB      5 blks
>   Holding blocks:           244 KB      1 blks
>   Free Small blocks:          0 KB
>   Free Ordinary blocks:     744 KB
>   Total in use:           13267 KB 95%
>   Total free:               745 KB  5%

Well, squid is using bugger all memory then.

Debugging this will require a little more effort. You will probably
have to begin by looking at your server stats to determine what else
is going on. You may want to run a system call tracer on Squid while it is
slowed down, to see what it's doing.

This sort of stuff is precisely why I suggest people graph as much about their
Squid servers as they can!



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] rtmp protocol

2008-04-30 Thread Adrian Chadd
Is there a protocol spec somewhere?



Adrian


On Wed, Apr 30, 2008, Paul Bertain wrote:
> RTMP is Adobe's protocol used for streaming.  As Amos says, RTMP !=  
> HTTP but it is going to be delivered over Ports 80 & 443, as you have  
> already seen.  Adobe Flash Media Server contains a caching component,  
> I believe, so if you control the content, you might want to look into  
> that (rather expensive) option.
> 
> Paul
> 
> On Apr 30, 2008, at 5:12 AM, Amos Jeffries wrote:
> 
> >sonjaya wrote:
> >>Dear all
> >>I have setup squid with delay pools , but some user using rtmp
> >>protocol and using port 443 port for download  file .flv
> >>I try to see in access.log  but nothing  recorder , how come ...?
> >>so my question how to make rtmp protocol to join in delay pools  or
> >>should i forward rtmp protocol to squid ?
> >>Thank's
> >
> >HTTP != RTMP.
> >
> >Squid is NOT a general-purpose proxy. It is a WEB proxy.
> >
> >Amos
> >-- 
> >Please use Squid 2.6.STABLE19 or 3.0.STABLE4

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] rtmp protocol

2008-04-30 Thread Paul Bertain
I've been looking myself, but they (Adobe) keep it pretty tightly
held. The most common Google hit is Routing Table Maintenance
Protocol from AppleTalk, which is completely unrelated.


Adobe people don't really know much about it and I can't get to the
developers. When I call Adobe and ask about RTMP, I get shuffled
around to just about every department.


There is Red5, an open-source reverse-engineering project, which
may have someone with more info:

- http://osflash.org/rtmp <-- Brief description
- http://osflash.org/red5 <-- OS Project
- http://swik.net/RTMP <-- Refers to Red5 again

I was hoping it was RTSP-like because it is related to streaming but  
from what I can tell, it is not really that similar.


Paul

On Apr 30, 2008, at 7:52 PM, Adrian Chadd wrote:


> Is there a protocol spec somewhere?
>
> Adrian
>
> On Wed, Apr 30, 2008, Paul Bertain wrote:
>> RTMP is Adobe's protocol used for streaming.  As Amos says, RTMP !=
>> HTTP but it is going to be delivered over Ports 80 & 443, as you have
>> already seen.  Adobe Flash Media Server contains a caching component,
>> I believe, so if you control the content, you might want to look into
>> that (rather expensive) option.
>>
>> Paul
>>
>> On Apr 30, 2008, at 5:12 AM, Amos Jeffries wrote:
>>
>>> sonjaya wrote:
>>>> Dear all
>>>> I have setup squid with delay pools , but some user using rtmp
>>>> protocol and using port 443 port for download  file .flv
>>>> I try to see in access.log  but nothing  recorder , how come ...?
>>>> so my question how to make rtmp protocol to join in delay pools  or
>>>> should i forward rtmp protocol to squid ?
>>>> Thank's
>>>
>>> HTTP != RTMP.
>>>
>>> Squid is NOT a general-purpose proxy. It is a WEB proxy.
>>>
>>> Amos
>>> --
>>> Please use Squid 2.6.STABLE19 or 3.0.STABLE4
>
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -




Re: [squid-users] Squid-2.6.STABLE19: logfile_rotate 180 not working when changed from 90

2008-04-30 Thread [EMAIL PROTECTED]
Adrian,

 Excuse me, but just to be sure (my English is not good and
sometimes I can be confusing/confused): squid -k reconfigure does not
fail or complain.

 When squid -k rotate runs (every day at 00:01), according to
cache.log, squid does not fail or complain, but log files are not
rotated as expected.

 Is it still a bug report candidate?

 Thank you.

Regards,

Cássio

On Wed, Apr 30, 2008 at 11:34 PM, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> I've not heard of any issues with logfile_rotate failing during a reconfigure.
>  Please log a bug report with bugzilla.
>
>
>  Adrian
>
>
>
>  On Wed, Apr 30, 2008, [EMAIL PROTECTED] wrote:
>  > Hi,
>  >
>  >  I am using Squid-2.6.STABLE19 with Linux for many years.
>  >
>  >  Recently I changed "logfile_rotate" from 90 to 180. I have
>  > already used squid -k reconfigure and I have already stopped and
>  > started Squid again.
>  >
>  >  After changing, I expected that squid started to rotate files
>  > until 180 was reached, but it does not happen. It stopped at 93, to
>  > access.log and cache.log
>  >
>  >  There is at least 24G ( from 38G) of free space in the logs
>  > partition ( ext3 ). There are just 190 files in squid logs directory.
>  >
>  >  There is no error in cache.log when squid -k rotate is called.
>  > Computer clock is ok ( date and time ).
>  >
>  >  There is free memory and when squid -k rotate happens, there is
>  > almost zero requests to the proxy.
>  >
>  >  Everything appears to be working fine, including a lot of ACLs,
>  > authentication helpers, delay pools and squidGuard redirection.
>  >
>  >  Have I missed something?
>  >
>  >  Any suggestion?
>  >
>  >  Thank you.
>  >
>  > Regards,
>  >
>  >
>  > Cássio
>
>  --
>  - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid 
> Support -
>  - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
>


Re: [squid-users] rtmp protocol

2008-04-30 Thread Amos Jeffries
> I've been looking myself but they (Adobe) keep it pretty tightly
> held.  The most common google hit is Routing Table Maintenance
> Protocol from AppleTalk, which is completely unrelated.
>
> Adobe people don't really know much about it and I can't get to the
> Developers.  When I call Adobe and ask about RTMP, I get shuffled
> around to just about every department.
>
> There is Red5, an Open Source reverse engineering project out there
> who may have someone that has more info:
> - http://osflash.org/rtmp <-- Brief description
> - http://osflash.org/red5 <-- OS Project
> - http://swik.net/RTMP <-- Refers to Red5 again
>
> I was hoping it was RTSP-like because it is related to streaming but
> from what I can tell, it is not really that similar.
>
> Paul

Thank you. Assuming the protocol spec is accurate:
  http://osflash.org/documentation/rtmp

The protocol port is NOT 80 or 443, but apparently 1935. And, like other
streaming protocols, it would require proprietary or hacked client
software to interpret.

If you want to proxy it you should look for a general-purpose proxy or SOCKS.

If you want to block it, block port 1935 at your firewall.

Amos


>
> On Apr 30, 2008, at 7:52 PM, Adrian Chadd wrote:
>
>> Is there a protocol spec somewhere?
>>
>>
>>
>> Adrian
>>
>>
>> On Wed, Apr 30, 2008, Paul Bertain wrote:
>>> RTMP is Adobe's protocol used for streaming.  As Amos says, RTMP !=
>>> HTTP but it is going to be delivered over Ports 80 & 443, as you have
>>> already seen.  Adobe Flash Media Server contains a caching component,
>>> I believe, so if you control the content, you might want to look into
>>> that (rather expensive) option.
>>>
>>> Paul
>>>
>>> On Apr 30, 2008, at 5:12 AM, Amos Jeffries wrote:
>>>
>>>> sonjaya wrote:
>>>>> Dear all
>>>>> I have setup squid with delay pools , but some user using rtmp
>>>>> protocol and using port 443 port for download  file .flv
>>>>> I try to see in access.log  but nothing  recorder , how come ...?
>>>>> so my question how to make rtmp protocol to join in delay pools  or
>>>>> should i forward rtmp protocol to squid ?
>>>>> Thank's
>>>>
>>>> HTTP != RTMP.
>>>>
>>>> Squid is NOT a general-purpose proxy. It is a WEB proxy.
>>>>
>>>> Amos
>>>> --
>>>> Please use Squid 2.6.STABLE19 or 3.0.STABLE4
>>
>> --
>> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial
>> Squid Support -
>> - $25/pm entry-level VPSes w/ capped bandwidth charges available in
>> WA -
>
>




[squid-users] custom squid message

2008-04-30 Thread E. TRaas
Hi,

How can I adjust the Squid message when a certain site is blocked?

Thanks in advance
E. Traas
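
[This thread has no reply in this digest. A common approach, sketched here as an assumption rather than an answer from the list, is Squid's deny_info directive, which maps a custom error page to the ACL that triggered the denial. The ACL name, domain, and template name below are placeholders.]

```
# Hedged sketch: serve a custom error page when the "blocked_sites" ACL
# denies a request. ACL name, domain, and template name are placeholders.
acl blocked_sites dstdomain .example-blocked.com
http_access deny blocked_sites

# ERR_BLOCKED_SITE must be a template file in Squid's error-page directory
# (e.g. /usr/share/squid/errors/English/ on many installs).
deny_info ERR_BLOCKED_SITE blocked_sites
```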