Re: [squid-users] ext4 vs reiserfs

2011-06-27 Thread Amos Jeffries

On 28/06/11 18:36, Mohsen Pahlevanzadeh wrote:

Dear all,

I don't know whether to use ext4 or reiserFS for squid.
Which has higher performance?

Yours,
Mohsen


http://wiki.squid-cache.org/BestOsForSquid

  Halfway down the page.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3


Re: [squid-users] Squid SDK

2011-06-27 Thread Amos Jeffries

On 28/06/11 16:21, Mohsen Pahlevanzadeh wrote:

Hi,

We must write a program that, along with normal tasks, has to do a
variety of jobs, but I need to PURGE and insert cache objects.


Filling a glass of water is what I would call a "normal task". It does 
not involve doing anything with a Squid cache. I know you are meaning 
something entirely different.


So I repeat my question:
  What are these "normal tasks" and "variety of jobs"? And how will 
they benefit from playing with the objects stored by Squid?





So, we decided to code the PURGE and insert ourselves, so that our program doesn't depend on
another program such as squidclient.


Fine. *If* that is the appropriate way to do one of the not-mentioned 
tasks or jobs you are wanting.


The closest thing to an "API" is the request text structure defined in 
RFC 2616. You are likely to find libraries for whatever language you are 
using that can generate and send HTTP requests as needed.
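
For illustration only, here is a minimal sketch of sending such a PURGE request from code with a generic HTTP library. The proxy address, port and URL are assumptions, and Squid must be configured to permit the PURGE method (via an acl and http_access rule) before it will honour the request:

  import http.client

  # Minimal sketch: ask the proxy to purge one cached URL.
  # "127.0.0.1" and 3128 are assumed values for the Squid host and http_port.
  conn = http.client.HTTPConnection("127.0.0.1", 3128)
  conn.request("PURGE", "http://example.com/some/object.txt")
  resp = conn.getresponse()
  print(resp.status, resp.reason)   # typically 200 if purged, 404 if the object was not cached
  conn.close()

Inserting ("pushing") objects is different: in practice there is no separate insert method, you simply fetch the URL through the proxy with a normal cacheable GET and let Squid store the response.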


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3


[squid-users] ext4 vs reiserfs

2011-06-27 Thread Mohsen Pahlevanzadeh
Dear all,

I don't know whether to use ext4 or reiserFS for squid.
Which has higher performance?

Yours,
Mohsen




Re: [squid-users] Memory issues

2011-06-27 Thread Amos Jeffries

On 27/06/11 21:02, Go Wow wrote:

Pls find below the link to excel file containing memory info from
squid cache manager.

https://www.yousendit.com/download/MFo3c0w5bTh0TW14dnc9PQ



Shows Squid using 4MB of RAM.



Now my squid.conf looks like this, is this okay?



Looks fine now.




Are you sure it is Squid consuming that memory? It's possibly another
application.
  If you are sure it is Squid, please upgrade to a later version. There were
some memory overuse issues fixed between 3.1.8 and 3.1.11.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3


Re: [squid-users] Memory issues

2011-06-27 Thread Go Wow
Any info for me regarding my last post?

On 27 June 2011 13:02, Go Wow  wrote:
> Pls find below the link to excel file containing memory info from
> squid cache manager.
>
> https://www.yousendit.com/download/MFo3c0w5bTh0TW14dnc9PQ
>
> Now my squid.conf looks like this, is this okay?
>
> auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d -s 
> GSS_C_NO_NAME
> auth_param negotiate children 10
> auth_param negotiate keep_alive on
> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 8
> auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
> auth_param basic credentialsttl 4 hour
> auth_param basic casesensitive off
> auth_param basic children 7
> auth_param basic realm DOMAIN
> authenticate_cache_garbage_interval 10 seconds
> authenticate_ttl 0 seconds
> acl ad-auth proxy_auth REQUIRED
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
> acl allow_localnet dst 192.168.100.0/24 192.168.18.0/24
> acl allow_localdomain dstdomain .domain.com
> acl local_net_dst dst  192.168.127.0/24
> acl local_net_src src  192.168.137.0/24
> acl Unsafe_Ports port 5050 843 5100 5101 5000-5010 9085
> acl Unsafe_Ports port 1863
> acl Unsafe_Ports port 5222
> acl SSL_ports port 443
> acl Safe_ports port 80 53 443 3268 88 5060 5061 5062 5075 5076 5077
> 50636 587 50389 58941 110 995 993 143 389 636 119 25 465 135 102 3000
> # http
> acl Safe_ports port 21          # ftp
> acl Safe_ports port 443         # https
> acl Safe_ports port 70          # gopher
> acl Safe_ports port 210         # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280         # http-mgmt
> acl Safe_ports port 488         # gss-http
> acl Safe_ports port 591         # filemaker
> acl Safe_ports port 777         # multiling http
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> http_access deny Unsafe_Ports
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_access allow allow_localnet
> http_access allow allow_localdomain
> http_access allow ad-auth
> http_access deny all
> http_port 3128
> hierarchy_stoplist cgi-bin ?
> cache_dir aufs /var/squid/cache 128 16 256
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern -i (/cgi-bin/|\?)    0       0%      0
> refresh_pattern .               0       20%     4320
> redirect_program /usr/local/bin/squidGuard -c
> /usr/local/squidGuard/squidGuard.conf
> redirect_children 15
> icp_access deny all
> htcp_access deny all
> cache_mem 128 MB
> access_log /var/log/squid/access.log squid
> icp_port 3130
> pipeline_prefetch off
> cache_mgr m...@domain.com
> cachemgr_passwd password all
> #delay_pools 2
> #delay_class 1 4
> #delay_class 2 4
> #delay_access 1 allow local_net_src
> #delay_access 2 allow local_net_dst
> #delay_parameters 1 -1/-1 -1/-1 -1/-1 51200/51200
> #delay_parameters 2 -1/-1 -1/-1 -1/-1 -1/-1
> #delay_initial_bucket_level 75
> httpd_suppress_version_string on
> forwarded_for off
> hosts_file /etc/hosts
> cache_replacement_policy heap LFUDA
> cache_swap_low 90
> cache_swap_high 95
> maximum_object_size_in_memory 50 KB
> memory_pools off
> maximum_object_size 50 MB
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> log_icp_queries off
> client_db off
> buffered_logs on
> half_closed_clients off
>
> On 26 June 2011 16:19, Amos Jeffries  wrote:
>> On 26/06/11 21:24, Go Wow wrote:
>>>
>>> Hi,
>>>
>>>  I'm using squid 3.1.8 on centos 5.4 with 3.8GB RAM and Dual Core
>>> Processor. My swap is been used and 50% of RAM is used by cache&
>>> buffers. Below link has one week's memory&  CPU utilization
>>> information in form of graph.
>>>
>>> Memory usage -->  http://img.myph.us/Cr8.jpg
>>> CPU usage -->  http://img.myph.us/PgM.jpg
>>>
>>> I'm worried as to why the usage of swap is coming into picture,
>>> logically if Swap is used then I need to increase the RAM but this
>>> machine is serving only 12 users.
>>>
>>>  My squid.conf is here
>>>
>>> auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d -s
>>> GSS_C_NO_NAME
>>> auth_param negotiate children 10
>>> auth_param negotiate keep_alive on
>>> auth_param ntlm program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp
>>> auth_param ntlm children 8
>>> auth_param basic program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-basic
>>> auth_param basic credentialsttl 4 hour
>>> auth_param basic casesensitive off
>>> auth_param basic children 7
>>> auth_param basic realm DOMAINNAME
>>> authenticate_cache_garbage_interval 10 seconds
>>> authenticate_ttl 0 seconds
>>> acl ad-auth proxy_auth REQUIRED
>>> acl manager proto cache_object
>>> acl localhost src 127.0.0.1/32
>>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
>>> acl allow_localnet dst 192.168.110.0/24 192.168.188.0/24
>>> acl allow_localdomai

Re: [squid-users] Squid SDK

2011-06-27 Thread Mohsen Pahlevanzadeh
Hi,

We must write a program that, along with normal tasks, has to do a
variety of jobs, but I need to PURGE and insert cache objects.
So, we decided to code the PURGE and insert ourselves, so that our program doesn't depend on
another program such as squidclient.

Yours,
Mohsen
On Tue, 2011-06-28 at 11:30 +1200, Amos Jeffries wrote:
> On Mon, 27 Jun 2011 17:12:32 +0430, Mohsen Pahlevanzadeh wrote:
> > I know it, and compiled it, But can i get hook or i must hack it for 
> > a
> > syscall? i need to on demand delete Object from cache same PURGE, But
> > want to use it in my code.
> > After it, I need to Push to cache in my code.
> > Can you get me name of those func instead of hack?
> 
>  There is no SDK. No syscalls. No API to embed squid into other 
>  software. Squid *is* the top level software.
> 
>  To perform cache object erasures use HTTP protocol to send PURGE 
>  requests. Or use HTCP protocol to send CLR request packets.
> 
>  Since you seem to say that PURGE does not work for you, you may want to 
>  look at HTCP CLR. But both have almost the same operation.
> 
> 
> 
>  It sounds like you are attempting to write some form of external 
>  manipulator for the Squid disk storage. Yes?
>   before we go into details for help ... Why? What end goal will this 
>  achieve for you?
> 
> 
>  Amos
> 
> > --mohsen
> > On Mon, 2011-06-27 at 09:09 -0300, Leonardo Rodrigues wrote:
> >> squid is a completly open-source project, you can simply grab its
> >> entire source code and do whatever modifications you need to 
> >> acchieve
> >> your goals.
> >>
> >>  if you're using squid installed by your distro or download in 
> >> some
> >> binary format and dont have its source, you can go to
> >>
> >> www.squid-cache.org
> >>
> >>  and download it !
> >>
> >> Em 27/06/11 04:33, Mohsen Pahlevanzadeh escreveu:
> >> > Dear all,
> >> >
> >> > I know that squid doesn't release its SDK, but i need to its
> >> > syscall.What i do? Do you know good way for using squid syscall?
> >> >
> >>
> >>
> 





RE: [squid-users] Strange 503 on https sites [ipv6 edition]

2011-06-27 Thread Jenny Lee

> Ouch! Add these at least:
> $IPT6 -A INPUT -j REJECT
> $IPT6 -A OUTPUT -j REJECT
> $IPT6 -A FORWARD -j REJECT
> 
> 
> > $IPT6 -P INPUT DROP
> > $IPT6 -P OUTPUT DROP
> > $IPT6 -P FORWARD DROP
> > fi
> >
> 
> And *that* is exactly the type of false "disable" I was talking about.
> 
> Squid and other software will attempt to open an IPv6 socket(). As long 
> as the IPv6 modules are loaded in the kernel that will *succeed*. At 
> first glance this is fine, IPv4 can still come and go through that 
> socket.
> 
> - In TCP they might then try to bind() to an IPv6, that *succeeds*. 
> [bingo! IPv6 enabled and working. Squid will use it.]
> Then try to connect() to an IPv6. That also "succeeds" (partially). 
> But the firewall DROP prevents the SYN packet ever going anywhere. Up to 
> *15 minutes* later TCP will timeout.
> 
> - In UDP things get even stranger. It expects no response, so send() 
> to both IPv4 and IPv6 will *succeed*.
> 
> Does the DNS error "No Servers responding;;" sound all too familiar? 
> then you or a transit network is most likely using DROP somewhere on 
> UDP, TCP or ICMP.

Unlikely to happen, because we inserted ipv6 disable mechanisms in 50 different 
places. And that was the last line, just in case nothing else worked.

If it came to that part, it is a moot point whether it is dropped or rejected. We 
have bigger problems.

From a client point of view, or in testing, I agree with you. REJECT should be used to 
inform failing clients. Otherwise DROPs will cause lengthy delays.

But on internet-facing production systems, DROP should be used.

- Less network traffic when there are attacks
- More secure
- Immune to spoofing and reflection scans on other systems
- Immune to probes

But as I mentioned, my rules should be considered in the whole context of 
disabling ipv6, whereas the OP's issue might very well be these very DROP rules 
that I advocate.

My intention was to post useful info for those who are trying to disable ipv6 on 
RHEL, rather than to find a solution to the OP's squid problems, which is your expertise.

I surely will be bothering you with bugs and mistakes about ipv6 once I compile 
squid with it... But I don't expect that to be before 2020 or until I am left 
as the last person on earth who is not supporting ipv6.

Jenny

PS: I have never seen these "IPV6 DROPPED" entries over the years in logs.  
  

RE: [squid-users] Strange 503 on https sites [ipv6 edition]

2011-06-27 Thread Amos Jeffries

On Tue, 28 Jun 2011 00:20:04 +, Jenny Lee wrote:

NP: (rant warning) if you followed most any online tutorial for
disabling IPv6 in RHEL. Most only go so far as to make the kernel 
drop
IPv6 packets. Rather than actually turning the OFF kernel control 
which
would inform the relevant software that it cannot use IPv6 ports. So 
it

sends a packet, and waits... and waits...
(and yes I know you are connecting to an IPv4 host. Linux "hybrid
stack" which Squid uses can use IPv6 sockets to contact IPv4 space).


It probably is because ipv6 is no longer a module and built into 
kernel.


Most online tutorials would not be working or half-working.

Proper way to disable ipv6 virus in rhel6 is:

/boot/grub/grub.conf
ipv6.disable=1

/etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1

/etc/modprobe.conf
/etc/modprobe.d/local.conf
alias net-pf-10 off
alias ipv6 off

/etc/sysconfig/network
NETWORKING_IPV6=off

echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6

chkconfig ip6tables off

/etc/sysconfig/network-scripts/ifcfg-eth0
make sure ipv6 DNS entries are removed


Doing all above would disable ipv6 both in RHEL5 and RHEL6. Instead
of thinking what is what and what works or not, I run this everywhere
and it covers all my machines.


Yes, that is correct.

This bit is what Squid IPv6 support detection tests and relies on:
"
 /etc/sysctl.conf
  net.ipv6.conf.all.disable_ipv6 = 1

  echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
"




I also run this just in case ipv6 is enabled somewhere, it is 
dropped:


#!/bin/bash
if [ -d "/proc/sys/net/ipv6/conf" ];then
IPT6=/sbin/ip6tables

# Flush all
$IPT6 -F ; $IPT6 -F FORWARD ; $IPT6 -X ; $IPT6 -Z ;

$IPT6 -A INPUT   -j LOG --log-prefix "IPv6 INPUT DROPPED: "
$IPT6 -A OUTPUT  -j LOG --log-prefix "IPv6 OUTPUT DROPPED: "
$IPT6 -A FORWARD -j LOG --log-prefix "IPv6 FORWARD DROPPED: "


Ouch! Add these at least:
  $IPT6 -A INPUT -j REJECT
  $IPT6 -A OUTPUT -j REJECT
  $IPT6 -A FORWARD -j REJECT



$IPT6 -P INPUT DROP
$IPT6 -P OUTPUT DROP
$IPT6 -P FORWARD DROP
fi



And *that* is exactly the type of false "disable" I was talking about.

Squid and other software will attempt to open an IPv6 socket(). As long 
as the IPv6 modules are loaded in the kernel that will *succeed*. At 
first glance this is fine, IPv4 can still come and go through that 
socket.


 - In TCP they might then try to bind() to an IPv6, that *succeeds*. 
[bingo! IPv6 enabled and working. Squid will use it.]
 Then try to connect() to an IPv6. That also "succeeds" (partially). 
But the firewall DROP prevents the SYN packet ever going anywhere. Up to 
*15 minutes* later TCP will timeout.


 - In UDP things get even stranger. It expects no response, so send() 
to both IPv4 and IPv6 will *succeed*.


Does the DNS error "No Servers responding;;" sound all too familiar? 
Then you or a transit network is most likely using DROP somewhere on 
UDP, TCP or ICMP.
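
A minimal sketch of how that difference looks from the application side (the 2001:db8::1 address is from the documentation prefix and purely illustrative); without an explicit timeout, a DROPped SYN just hangs until the OS gives up:

  import socket

  s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)  # succeeds while the IPv6 module is present
  s.settimeout(5)                                          # cap the wait; the OS default can be minutes
  try:
      s.connect(("2001:db8::1", 443))
      print("connected")
  except socket.timeout:
      print("SYN silently discarded somewhere (DROP behaviour)")
  except OSError as err:
      print("immediate error back (REJECT behaviour):", err)
  finally:
      s.close()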





Little bit old school perhaps, but I don't have knowledge about this
ipv6 and I would rather have it disabled until I learn it instead of
keeping my machines open for another vector of attack.


Treat it like you do IPv4. Preferably with a REJECT if you are using 
the same port in IPv4 but don't want to enable that service yet. DROP if 
you want to DoS the remote-end software (i.e. responding to an attack by 
letting the remote end think it's working even as you discard 
everything).




You might not agree with me but this minimalistic approach "Don't use
it now, don't keep it" saved me many times over the years.

Hope someone finds this helpful.

Jenny


DISCLAIMER: Use at your own risk. I am not responsible if it blows up
your house, bites your dog, does your wife.


see above.

Amos


RE: [squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-06-27 Thread Jenny Lee

> Dear Jenny and Amos,
>
> I thought it worth mentioning that I too am having troubles with the
> ACL processing of the "request_header_access User-Agent" configuration
> directive. It seems like Jenny's issue is the same one I am seeing.
>
> Using a "src" ACL in the directive doesn't work when you have a cache
> peer. The ACL is only ever checked to see if the IP address
> 255.255.255.255 exists in the list.
>
> I know this was only reported recently, but I wanted to know if there
> was a fix in the works or if Amos is still waiting for a fix to be
> submitted.
>
> Thanks and Best Regards,
>
> Sean Butler
 
I hired developer time for a private patch. It seems to be working now. I will get you 
the patch once it is all ready.
 
Jenny 

RE: [squid-users] Strange 503 on https sites [ipv6 edition]

2011-06-27 Thread Jenny Lee

> NP: (rant warning) if you followed most any online tutorial for 
> disabling IPv6 in RHEL. Most only go so far as to make the kernel drop 
> IPv6 packets. Rather than actually turning the OFF kernel control which 
> would inform the relevant software that it cannot use IPv6 ports. So it 
> sends a packet, and waits... and waits...
> (and yes I know you are connecting to an IPv4 host. Linux "hybrid 
> stack" which Squid uses can use IPv6 sockets to contact IPv4 space).

It is probably because ipv6 is no longer a module but is built into the kernel. 
 
Most online tutorials would not work, or would only half-work.

The proper way to disable the ipv6 "virus" in rhel6 is:

/boot/grub/grub.conf
ipv6.disable=1
 
/etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
 
/etc/modprobe.conf
/etc/modprobe.d/local.conf
alias net-pf-10 off
alias ipv6 off

/etc/sysconfig/network
NETWORKING_IPV6=off

echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
 
chkconfig ip6tables off

/etc/sysconfig/network-scripts/ifcfg-eth0
make sure ipv6 DNS entries are removed
 
 
Doing all of the above disables ipv6 in both RHEL5 and RHEL6. Instead of working out 
what is what and what works or not, I run this everywhere and it covers all my 
machines.
 
I also run this so that, in case ipv6 is enabled somewhere, it is dropped:
 
#!/bin/bash
if [ -d "/proc/sys/net/ipv6/conf" ];then
IPT6=/sbin/ip6tables 
 
# Flush all
$IPT6 -F ; $IPT6 -F FORWARD ; $IPT6 -X ; $IPT6 -Z ;
 
$IPT6 -A INPUT   -j LOG --log-prefix "IPv6 INPUT DROPPED: "
$IPT6 -A OUTPUT  -j LOG --log-prefix "IPv6 OUTPUT DROPPED: "
$IPT6 -A FORWARD -j LOG --log-prefix "IPv6 FORWARD DROPPED: "
$IPT6 -P INPUT DROP
$IPT6 -P OUTPUT DROP
$IPT6 -P FORWARD DROP
fi
 
 
A little bit old school perhaps, but I don't have much knowledge about this ipv6 and I 
would rather have it disabled until I learn it, instead of keeping my machines 
open to another attack vector. 
 
You might not agree with me, but this minimalistic approach of "don't use it now, 
don't keep it" has saved me many times over the years.
 
Hope someone finds this helpful.
 
Jenny
 
 
DISCLAIMER: Use at your own risk. I am not responsible if it blows up your 
house, bites your dog, does your wife.
 
 
  

Re: [squid-users] Strange 503 on https sites

2011-06-27 Thread Amos Jeffries

On Mon, 27 Jun 2011 15:40:10 +0800, ICT Department wrote:

Hi,



I am very confused now as to why 99% of https access has 503, even yahoo
which is very fast..

This problem arises when my network is at peak use. This problem arises when
I upgraded my connection from
Copper connection 4mbps to Fiber optic 6mbps.  Hope could someone point me
to the right direction.   Thank you.



503 is "Service Unavailable". On CONNECT requests, for Squid, that means the 
TCP connection to that IP address could not be opened. The 59 second 
duration of those requests indicates a TCP setup timeout is happening.
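
(For reference, the squid.conf directive governing that wait is connect_timeout, which commonly defaults to 1 minute, consistent with the ~59 second entries. A hedged illustration only; the value shown is not a recommendation:

  connect_timeout 30 seconds
)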


The next thing I'd look at is PMTU issues between you and that server.


Squid-3.1 does IPv6. So if you have that incorrectly disabled, Squid 
could be failing to connect to that IPv4-only destination over an IPv6 
socket.
NP: (rant warning) this applies if you followed almost any online tutorial 
for disabling IPv6 in RHEL. Most only go so far as to make the kernel drop 
IPv6 packets, rather than actually turning OFF the kernel control which 
would inform the relevant software that it cannot use IPv6 ports. So it 
sends a packet, and waits... and waits...
 (and yes, I know you are connecting to an IPv4 host. The Linux "hybrid 
stack" which Squid uses can use IPv6 sockets to contact IPv4 space).




Access.log

1309159630.003  59632 192.168.100.33 TCP_MISS/503 0 CONNECT
124.102.69.115:443 - DIRECT/124.102.69.115 -

1309159630.003  59629 192.168.100.33 TCP_MISS/503 0 CONNECT
140.127.205.122:443 - DIRECT/140.127.205.122 -

1309159632.000  59480 192.168.100.33 TCP_MISS/503 0 CONNECT
218.226.219.106:443 - DIRECT/218.226.219.106 -

1309159632.000  59996 192.168.10.105 TCP_MISS/503 0 CONNECT
login.yahoo.com:443 - DIRECT/124.108.120.31 -

1309159636.001  59997 192.168.100.84 TCP_MISS/503 0 CONNECT
www.facebook.com:443 - DIRECT/69.171.228.11 -

1309159644.000  59906 192.168.100.58 TCP_MISS/503 0 CONNECT
us.data.toolbar.yahoo.com:443 - DIRECT/98.137.53.23 -

1309159656.002  59085 192.168.100.33 TCP_MISS/503 0 CONNECT
118.167.16.72:443 - DIRECT/118.167.16.72 -



My squid is compiled with

Squid Cache: Version 3.1.12

configure options:  '--build=i686-redhat-linux-gnu'
'--host=i686-redhat-linux-gnu' '--target=i386-redhat-linux-gnu'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin'

'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--includedir=/usr/include'
'--libdir=/usr/lib' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/usr/com'
'-mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr'
'--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' 
'--localstatedir=/var'

'--datadir=/usr/share' '--sysconfdir=/etc/squid'
'--enable-removal-policies=heap,lru' 
'--enable-storeio=aufs,diskd,ufs'

'--enable-ssl' '--with-openssl=/usr/kerberos' '--enable-delay-pools'
'--enable-linux-netfilter' '--with-pthreads'
'--enable-ntlm-auth-helpers=fakeauth'

'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-digest-auth-helpers=password' 
'--with-winbind-auth-challenge'

'--enable-useragent-log' '--enable-referer-log'
'--disable-dependency-tracking' 
'--enable-cachemgr-hostname=localhost'

'--enable-underscores' '--enable-useragent_log'

'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain
-NTLM,SASL' '--enable-cache-digests' '--disable-ident-lookups'
'--with-large-files' '--enable-gnuregex' 
'--disable-follow-x-forwarded-for'

'--enable-fd-config' '--with-maxfd=16384' '--enable-internal-dns'
'build_alias=i686-redhat-linux-gnu' 
'host_alias=i686-redhat-linux-gnu'

'target_alias=i386-redhat-linux-gnu' --with-squid=/root/squid-3.1.12
--enable-ltdl-convenience



Amos


Re: [squid-users] Squid SDK

2011-06-27 Thread Amos Jeffries

On Mon, 27 Jun 2011 17:12:32 +0430, Mohsen Pahlevanzadeh wrote:
I know it, and compiled it, But can i get hook or i must hack it for 
a

syscall? i need to on demand delete Object from cache same PURGE, But
want to use it in my code.
After it, I need to Push to cache in my code.
Can you get me name of those func instead of hack?


There is no SDK. No syscalls. No API to embed squid into other 
software. Squid *is* the top level software.


To perform cache object erasures use HTTP protocol to send PURGE 
requests. Or use HTCP protocol to send CLR request packets.


Since you seem to say that PURGE does not work for you, you may want to 
look at HTCP CLR. But both have almost the same operation.




It sounds like you are attempting to write some form of external 
manipulator for the Squid disk storage. Yes?
 before we go into details for help ... Why? What end goal will this 
achieve for you?



Amos


--mohsen
On Mon, 2011-06-27 at 09:09 -0300, Leonardo Rodrigues wrote:

squid is a completly open-source project, you can simply grab its
entire source code and do whatever modifications you need to 
acchieve

your goals.

 if you're using squid installed by your distro or download in 
some

binary format and dont have its source, you can go to

www.squid-cache.org

 and download it !

Em 27/06/11 04:33, Mohsen Pahlevanzadeh escreveu:
> Dear all,
>
> I know that squid doesn't release its SDK, but i need to its
> syscall.What i do? Do you know good way for using squid syscall?
>






Re: [squid-users] Squid DNS Issues

2011-06-27 Thread Amos Jeffries

On Mon, 27 Jun 2011 08:05:59 +0300, Richard Zulu wrote:

Hey,
I have squid version 3.1.9 working as a web forward proxy serving
close to 500 users with over 54000 requests every other day.
However, recently it has been failing to communicate with the DNS server
completely, which leads to few requests being completed.
This has led to a long queue of requests waiting to be completed,
which eventually causes squid to hang.
Shifting the same users to another squid cache causes similar
problems. What could be the issue here?
Some of the errors generated in cache.log are below:



getsockopt(SO_ORIGINAL_DST) failed on FD 128:


 NAT failure.

Could be a couple of things. Some seriously bad, and some only trivial.

 * On Linux, if you allow non-NAT clients to access a port marked 
"intercept" or "transparent". The ports for direct client->proxy and NAT 
connections need to be separate, and the NAT one firewalled away so it 
can't be accessed directly. See the squid wiki config examples for DNAT 
or REDIRECT for the iptables "mangle" rules that protect against these 
security vulnerabilities. (A rough sketch of the original-destination 
lookup itself follows this list.)

 http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
 http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

 * On OpenBSD 4.7 or later (which may or may not need some patches) it can be 
the same as Linux. Or, if the OS has partial but broken SO_ORIGINAL_DST 
support, the warning shows up but means only that the OS is broken.


 * On other non-Linux systems it is a Squid bug. Means nothing, but I 
want to get it fixed/silenced.
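
As promised above, a rough sketch (not Squid's actual code) of what the Linux lookup amounts to; the listening port is an assumption, and SO_ORIGINAL_DST has no named constant in Python's socket module, so its raw value is used:

  import socket, struct

  SO_ORIGINAL_DST = 80                       # Linux netfilter getsockopt option number

  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  srv.bind(("0.0.0.0", 3129))                # assumed intercept port
  srv.listen(5)
  conn, peer = srv.accept()
  try:
      # On a NAT-redirected connection this recovers the client's original
      # destination; on a direct (non-NAT) connection the call fails, which is
      # the situation behind the getsockopt(SO_ORIGINAL_DST) warning above.
      raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
      port = struct.unpack("!H", raw[2:4])[0]
      addr = socket.inet_ntoa(raw[4:8])
      print("original destination was", addr, "port", port)
  except OSError as err:
      print("not a NAT-intercepted connection:", err)
  finally:
      conn.close()
      srv.close()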




squidaio_queue_request: WARNING - Queue congestion


http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion



urlParse: URL too large (12404 bytes)


Exactly what it says. URL is too big for Squid to handle. There should 
be a 4xx status sent back to the client so it can retry or whatever.




statusIfComplete: Request not yet fully sent "POST

http://person.com/ims.manage.phtml?__mp[name]=ims:manage&action=bugreport&js_id=47&";


Server or client disconnected halfway through a POST request.



 WARNING: unparseable HTTP header field {Web Server}


http://wiki.squid-cache.org/KnowledgeBase/UnparseableHeader

Amos


[squid-users] Why doesn't REQUEST_HEADER_ACCESS work properly with aclnames?

2011-06-27 Thread Sean Butler
Dear Jenny and Amos,

I thought it worth mentioning that I too am having troubles with the
ACL processing of the "request_header_access User-Agent" configuration
directive.  It seems like Jenny's issue is the same one I am seeing.

Using a "src" ACL in the directive doesn't work when you have a cache
peer.  The ACL is only ever checked to see if the IP address
255.255.255.255 exists in the list.

I know this was only reported recently, but I wanted to know if there
was a fix in the works or if Amos is still waiting for a fix to be
submitted.

Thanks and Best Regards,

Sean Butler


Re: [squid-users] Squid SDK

2011-06-27 Thread Leonardo Rodrigues


i think you can analyse the squidclient command line utility, which 
is in the squid source code, and find out what the '-m PURGE' option 
calls ... that would be what you need.


you can use that utility for PURGing URLs from the command line, for 
example:


squidclient -m PURGE http://whatever/path/file.txt


and i'm sorry i cannot assist you any further than that ... i'm 
really not a developer, i don't have a clue what the function names are 
or how to help you with more tech details.




Em 27/06/11 09:42, Mohsen Pahlevanzadeh escreveu:

I know it, and compiled it, But can i get hook or i must hack it for a
syscall? i need to on demand delete Object from cache same PURGE, But
want to use it in my code.
After it, I need to Push to cache in my code.
Can you get me name of those func instead of hack?




--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






RE: [squid-users] Reverse Proxy - order of cache_peer_access rules

2011-06-27 Thread Nick Duda
I'm no pro at squid-cache, but I do run a handful of reverse proxies doing 
things similar to what you want. You might want to consider using url_regex? 
Maybe something along the lines of this:

http_port 80 accel defaultsite=www.example1.com vhost

cache_peer 10.0.0.3 parent 80 0 no-query originserver name=server3
cache_peer 10.0.0.1 parent 80 0 no-query originserver name=server1 
cache_peer 10.0.0.2 parent 80 0 no-query originserver name=server2

acl site3 url_regex -i ^http://www.example1.com/\?a=1&o=16188 ^http://www.example2.com/\?a=1&o=16188
acl site1 url_regex -i ^http://www.example1.com
acl site2 url_regex -i ^http://www.example2.com

cache_peer_access server3 allow site3
cache_peer_access server1 allow site1
cache_peer_access server2 allow site2
cache_peer_access server3 deny all
cache_peer_access server1 deny all
cache_peer_access server2 deny all

http_access allow site3
http_access allow site1
http_access allow site2

- Nick

-Original Message-
From: Oskar Stolc [mailto:oskar.st...@gmail.com] 
Sent: Sunday, June 26, 2011 5:41 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Reverse Proxy - order of cache_peer_access rules

Hi,

I am trying to set up a Squid reverse proxy, but it does not want to work 
according to my expectations.

I am serving two sites:
- www.example1.com
- www.example2.com

I have 3 backend servers:
- 10.0.0.1
- 10.0.0.2
- 10.0.0.3

I want Squid to send the
- www.example1.com queries to server 10.0.0.1
- www.example2.com queries to server 10.0.0.2
- if the query contains an o=16188 HTTP parameter I want Squid to send it to 
10.0.0.3 regardless of domain

Example:
- http://www.example1.com/?a=1&b=2 - goes to 10.0.0.1
- http://www.example2.com/?a=1&b=2 - goes to 10.0.0.2
- http://www.example1.com/?a=1&o=16188 - goes to 10.0.0.3
- http://www.example2.com/?a=1&o=16188 - goes to 10.0.0.3

My configuration looks like this:

acl site1 dstdomain www.example1.com
acl site2 dstdomain www.example2.com

acl ocode_param urlpath_regex o=16188

http_access allow site1
http_access allow site2

http_port 80 accel defaultsite=www.example1.com vhost

cache_peer 10.0.0.1 parent 80 0 no-query originserver name=server1
cache_peer 10.0.0.2 parent 80 0 no-query originserver name=server2
cache_peer 10.0.0.3 parent 80 0 no-query originserver name=server3

cache_peer_access server3 allow ocode_param

cache_peer_access server1 allow site1
cache_peer_access server2 allow site2

cache_peer_access server1 deny all
cache_peer_access server2 deny all
cache_peer_access server3 deny all


The problem is that the queries with o=16188 don't go to 10.0.0.3, but are 
routed to 10.0.0.1 or 10.0.0.2 instead (based on domain). Does it mean the 
cache_peer_access rules are not "first match first win"
rules? Should I re-order them? How?

I've tried this on Squid 2.6 on CentOS5.6 and Squid 3.1 on Fedora15, both 
behave the same.

Please help, any suggestions appreciated.

Thanks,
Oskar


Re: [squid-users] Squid SDK

2011-06-27 Thread Mohsen Pahlevanzadeh

I know it, and have compiled it, but can I get a hook, or must I hack it for a
syscall? I need to delete an object from the cache on demand, the same as PURGE, but
I want to do it from my own code.
After that, I also need to push objects into the cache from my code.
Can you give me the names of those functions instead of having to hack it?
--mohsen
On Mon, 2011-06-27 at 09:09 -0300, Leonardo Rodrigues wrote:
> squid is a completly open-source project, you can simply grab its 
> entire source code and do whatever modifications you need to acchieve 
> your goals.
> 
>  if you're using squid installed by your distro or download in some 
> binary format and dont have its source, you can go to
> 
> www.squid-cache.org
> 
>  and download it !
> 
> Em 27/06/11 04:33, Mohsen Pahlevanzadeh escreveu:
> > Dear all,
> >
> > I know that squid doesn't release its SDK, but i need to its
> > syscall.What i do? Do you know good way for using squid syscall?
> >
> 
> 





Re: [squid-users] Squid SDK

2011-06-27 Thread Leonardo Rodrigues


squid is a completely open-source project; you can simply grab its 
entire source code and make whatever modifications you need to achieve 
your goals.


if you're using squid as installed by your distro, or downloaded in some 
binary format, and don't have its source, you can go to


www.squid-cache.org

and download it !

Em 27/06/11 04:33, Mohsen Pahlevanzadeh escreveu:

Dear all,

I know that squid doesn't release its SDK, but i need to its
syscall.What i do? Do you know good way for using squid syscall?




--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Memory issues

2011-06-27 Thread Go Wow
Please find below the link to an Excel file containing memory info from
the squid cache manager.

https://www.yousendit.com/download/MFo3c0w5bTh0TW14dnc9PQ

Now my squid.conf looks like this, is this okay?

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d -s GSS_C_NO_NAME
auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 8
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic credentialsttl 4 hour
auth_param basic casesensitive off
auth_param basic children 7
auth_param basic realm DOMAIN
authenticate_cache_garbage_interval 10 seconds
authenticate_ttl 0 seconds
acl ad-auth proxy_auth REQUIRED
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl allow_localnet dst 192.168.100.0/24 192.168.18.0/24
acl allow_localdomain dstdomain .domain.com
acl local_net_dst dst  192.168.127.0/24
acl local_net_src src  192.168.137.0/24
acl Unsafe_Ports port 5050 843 5100 5101 5000-5010 9085
acl Unsafe_Ports port 1863
acl Unsafe_Ports port 5222
acl SSL_ports port 443
acl Safe_ports port 80 53 443 3268 88 5060 5061 5062 5075 5076 5077
50636 587 50389 58941 110 995 993 143 389 636 119 25 465 135 102 3000
# http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny Unsafe_Ports
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow allow_localnet
http_access allow allow_localdomain
http_access allow ad-auth
http_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
cache_dir aufs /var/squid/cache 128 16 256
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?)    0       0%      0
refresh_pattern .               0       20%     4320
redirect_program /usr/local/bin/squidGuard -c
/usr/local/squidGuard/squidGuard.conf
redirect_children 15
icp_access deny all
htcp_access deny all
cache_mem 128 MB
access_log /var/log/squid/access.log squid
icp_port 3130
pipeline_prefetch off
cache_mgr m...@domain.com
cachemgr_passwd password all
#delay_pools 2
#delay_class 1 4
#delay_class 2 4
#delay_access 1 allow local_net_src
#delay_access 2 allow local_net_dst
#delay_parameters 1 -1/-1 -1/-1 -1/-1 51200/51200
#delay_parameters 2 -1/-1 -1/-1 -1/-1 -1/-1
#delay_initial_bucket_level 75
httpd_suppress_version_string on
forwarded_for off
hosts_file /etc/hosts
cache_replacement_policy heap LFUDA
cache_swap_low 90
cache_swap_high 95
maximum_object_size_in_memory 50 KB
memory_pools off
maximum_object_size 50 MB
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
client_db off
buffered_logs on
half_closed_clients off

On 26 June 2011 16:19, Amos Jeffries  wrote:
> On 26/06/11 21:24, Go Wow wrote:
>>
>> Hi,
>>
>>  I'm using squid 3.1.8 on centos 5.4 with 3.8GB RAM and Dual Core
>> Processor. My swap is been used and 50% of RAM is used by cache&
>> buffers. Below link has one week's memory&  CPU utilization
>> information in form of graph.
>>
>> Memory usage -->  http://img.myph.us/Cr8.jpg
>> CPU usage -->  http://img.myph.us/PgM.jpg
>>
>> I'm worried as to why the usage of swap is coming into picture,
>> logically if Swap is used then I need to increase the RAM but this
>> machine is serving only 12 users.
>>
>>  My squid.conf is here
>>
>> auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d -s
>> GSS_C_NO_NAME
>> auth_param negotiate children 10
>> auth_param negotiate keep_alive on
>> auth_param ntlm program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-ntlmssp
>> auth_param ntlm children 8
>> auth_param basic program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic
>> auth_param basic credentialsttl 4 hour
>> auth_param basic casesensitive off
>> auth_param basic children 7
>> auth_param basic realm DOMAINNAME
>> authenticate_cache_garbage_interval 10 seconds
>> authenticate_ttl 0 seconds
>> acl ad-auth proxy_auth REQUIRED
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32
>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
>> acl allow_localnet dst 192.168.110.0/24 192.168.188.0/24
>> acl allow_localdomain dstdomain .domain.com
>> acl local_net_dst dst  192.168.117.0/24
>> acl local_net_src src  192.168.117.0/24
>> acl Unsafe_Ports port 5050 843 5100 5101 5000-5010 9085
>> acl Unsafe_Ports port 1863
>> acl Unsafe_Ports port 5222
>> acl SSL_ports port 443
>> acl Safe_ports port 80 53 3268 88 5060 5061 5062 5075

[squid-users] Strange 503 on https sites

2011-06-27 Thread ICT Department
Hi,

 

I am very confused now as to why 99% of https access gets a 503, even yahoo
which is very fast.

This problem arises when my network is at peak use. It started when
I upgraded my connection from a 4mbps copper connection to 6mbps fiber optic.
I hope someone could point me in the right direction. Thank you.

 

Access.log

1309159630.003  59632 192.168.100.33 TCP_MISS/503 0 CONNECT
124.102.69.115:443 - DIRECT/124.102.69.115 -

1309159630.003  59629 192.168.100.33 TCP_MISS/503 0 CONNECT
140.127.205.122:443 - DIRECT/140.127.205.122 -

1309159632.000  59480 192.168.100.33 TCP_MISS/503 0 CONNECT
218.226.219.106:443 - DIRECT/218.226.219.106 -

1309159632.000  59996 192.168.10.105 TCP_MISS/503 0 CONNECT
login.yahoo.com:443 - DIRECT/124.108.120.31 -

1309159636.001  59997 192.168.100.84 TCP_MISS/503 0 CONNECT
www.facebook.com:443 - DIRECT/69.171.228.11 -

1309159644.000  59906 192.168.100.58 TCP_MISS/503 0 CONNECT
us.data.toolbar.yahoo.com:443 - DIRECT/98.137.53.23 -

1309159656.002  59085 192.168.100.33 TCP_MISS/503 0 CONNECT
118.167.16.72:443 - DIRECT/118.167.16.72 -

 

My squid is compiled with

Squid Cache: Version 3.1.12

configure options:  '--build=i686-redhat-linux-gnu'
'--host=i686-redhat-linux-gnu' '--target=i386-redhat-linux-gnu'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--includedir=/usr/include'
'--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com'
'-mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--localstatedir=/var'
'--datadir=/usr/share' '--sysconfdir=/etc/squid'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs,diskd,ufs'
'--enable-ssl' '--with-openssl=/usr/kerberos' '--enable-delay-pools'
'--enable-linux-netfilter' '--with-pthreads'
'--enable-ntlm-auth-helpers=fakeauth'
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge'
'--enable-useragent-log' '--enable-referer-log'
'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost'
'--enable-underscores' '--enable-useragent_log'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain
-NTLM,SASL' '--enable-cache-digests' '--disable-ident-lookups'
'--with-large-files' '--enable-gnuregex' '--disable-follow-x-forwarded-for'
'--enable-fd-config' '--with-maxfd=16384' '--enable-internal-dns'
'build_alias=i686-redhat-linux-gnu' 'host_alias=i686-redhat-linux-gnu'
'target_alias=i386-redhat-linux-gnu' --with-squid=/root/squid-3.1.12
--enable-ltdl-convenience




[squid-users] Squid SDK

2011-06-27 Thread Mohsen Pahlevanzadeh
Dear all,

I know that squid doesn't release an SDK, but I need its
syscalls. What should I do? Do you know a good way to use squid syscalls?

Yours,
Mohsen




