Re: [squid-users] Forwarding HTTP and HTTPS Traffic to an Upstream Proxy using Cache_Peer on separate ports

2008-02-20 Thread Tony Dodd
On Wed, 20 Feb 2008 19:57:45 -
"Ric Lonsdale" <[EMAIL PROTECTED]> wrote:

 
> However, the Finjan appliance listens on port 8080 for standard HTTP
> traffic, but listens on 8443 for HTTPS (SSL) traffic, and squid
> returns the following error with this setup.
> 
> FATAL: ERROR: cache_peer 10.198.1.2 specified twice

> cache_peer 10.198.1.2 parent 8080 7 no-query
> cache_peer 10.198.1.2 parent 8443 7 no-query
> acl httptraffic proto HTTP
> acl httpstraffic proto HTTPS
> http_access allow httptraffic
> http_access allow httpstraffic
> cache_peer_access 10.198.1.2 allow httptraffic 
> cache_peer_access 10.198.1.2 allow SSL_ports
> never_direct allow all
> 
> Is it possible to change the squid.conf settings to send HTTP and
> HTTPS requests to the same upstream Finjan appliance, but on separate
> ports?

You'll be wanting to do the following:

cache_peer 10.198.1.2 parent 8080 7 no-query name=finjanhttp
cache_peer 10.198.1.2 parent 8443 7 no-query name=finjanhttps

cache_peer_access finjanhttp allow httptraffic
cache_peer_access finjanhttps allow httpstraffic
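Putting the name= peers together with the ACLs from Ric's original post, a complete sketch might look like the following (untested; adjust addresses and ACLs to your setup):

```
acl httptraffic proto HTTP
acl httpstraffic proto HTTPS
http_access allow httptraffic
http_access allow httpstraffic

cache_peer 10.198.1.2 parent 8080 7 no-query name=finjanhttp
cache_peer 10.198.1.2 parent 8443 7 no-query name=finjanhttps

cache_peer_access finjanhttp allow httptraffic
cache_peer_access finjanhttps allow httpstraffic
never_direct allow all
```

The name= option exists precisely so that two cache_peer lines can point at the same host on different ports without triggering the "specified twice" fatal error.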

hth
Tony


Re: [squid-users] Squid 3.0 Stable1 with MySql Logging

2008-02-20 Thread Marcello Romani

Cassiano Martin wrote:

Hi Marcello!

I'm interested in rewriting my squid importer to use this new feature; it
will be much faster, as far as I can see. How does squid behave when there
is a large flow of data between squid and the log-reader daemon? I wrote a
Squid quota daemon, but it reads the log from the text file and updates a
mysql table. It's fast, but when there is a lot of data it takes some
time, especially when updating the quota table, and some users can go over
their limit.


Sorry for asking this, but could you show me the socket part of your code?
I have not searched about this myself (yes, I'm lazy :-) and I don't have
a newer squid version to test.

Thanks!
Cassiano Martin

Marcello Romani wrote:

Marcello Romani wrote:

Adrian Chadd wrote:

On Mon, Feb 18, 2008, Marcello Romani wrote:

Hi, I have some experience in Perl and mysql. I can't guarantee a 
timely implementation, but I'm interested in this feature and I'm 
willing to contribute.


Could you give me some pointers for where to start ?


look for "logfile_daemon" in squid-2.7 snapshots. It's relatively easy
from there.



Adrian



I've written a very small perl script which reads stdin and dumps it 
to a text file. I've set it to be the logfile_daemon, but it seems it 
doesn't get called.


I've not found much info on this configuration directive.

I must be missing something stupid...

Can you help me? Thanks




Ok I found it. I think the comments in the squid.conf file should 
contain a paragraph like the following:


#   And priority could be any of:
#   err, warning, notice, info, debug.
#
#   To log the request via a logfile-writing daemon, specify a filepath
#   of "daemon", e.g.:
#   access_log daemon:/var/logs/access.log squid
#   You may also want to change the default value of logfile_daemon to
#   point to a custom logfile-writing daemon

HTH






There is no "socket part" of any code. I'll post my logfile daemon
implementation in Perl as soon as I clean it up a little (right now it's
very primitive and virtually non-configurable, but it works ;-). In the
meantime, here's what I did:


1) download the squid source code for the latest 2.7 series;
2) look at src/logfile-daemon.c to learn how a logfile daemon should
work (it's actually quite simple);
3) write a script which continuously reads from stdin and does something
with each line it reads (the first byte is the command, the rest is the
actual log line), e.g. prints it to a file or splits it into fields and
stores them in a mysql table :-)
4) configure --prefix=/somewhere/in/your/home/dir, then make && make
install, and apply the config I mentioned in my previous post.
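Step 3 above can be sketched like this (in Python rather than Perl, purely for illustration; the command letters are assumptions based on a reading of src/logfile-daemon.c in the 2.7 source, where 'L' means "write this log line" and 'R' means "rotate" — verify against your snapshot before relying on them):

```python
import sys

def handle_line(line, out):
    """Dispatch one daemon-protocol line: the first byte is the command,
    the rest is the payload (e.g. one access.log line)."""
    if not line:
        return ""
    cmd, payload = line[0], line[1:]
    if cmd == "L":      # 'L': append the log line to our output
        out.write(payload)
    elif cmd == "R":    # 'R': rotate; this sketch just flushes
        out.flush()
    # other commands (flush, reopen, ...) are omitted in this sketch
    return cmd

def main(path="/var/log/squid/access-from-daemon.log"):
    # Squid feeds commands on our stdin, one per line, forever.
    with open(path, "a") as out:
        for line in sys.stdin:
            handle_line(line, out)
            out.flush()

if __name__ == "__main__":
    main()
```

Instead of out.write() you would split the payload into fields and insert them into a MySQL table, as Cassiano's quota daemon does.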


After this, you won't be able to call yourself lazy anymore :-)

--
Marcello Romani
Responsabile IT
Ottotecnica s.r.l.
http://www.ottotecnica.com


[squid-users] YAHOO:MSN:GOOGLE

2008-02-20 Thread Tarak Ranjan
hi list,
I have one squid server running on FC6, using SQUID 2.6.STABLE16. It is
running fine, no issues with it. Now I want to make SQUID more effective.
I want my squid proxy to detect:

1) which files users are downloading or uploading
2) for chat [yahoo, msn, google]: which files users are sending or
receiving, which chat rooms they are entering, and the duration of
their messenger logins.

I want to track all of this using Squid. Has anyone implemented this
kind of thing? Any help or link would be really appreciated.


Thanks & Regards,

Tarak





RE: [squid-users] Squid currently not working.

2008-02-20 Thread Adam Carter
> Are you running it as root?

I'd say he is. I have a Fedora 8 box (squid is not actually used on
this box AFAIK):

[EMAIL PROTECTED] ~]$ service squid start
sed: can't read /etc/squid/squid.conf: Permission denied
init_cache_dir /var/spool/squid... /etc/init.d/squid: line 68:
/var/log/squid/squid.out: Permission denied
Starting squid: /etc/init.d/squid: line 72: /var/log/squid/squid.out:
Permission denied
   [FAILED]
[EMAIL PROTECTED] ~]$ su
Password: 
[EMAIL PROTECTED] cartera]# service squid start
init_cache_dir /var/spool/squid... Starting squid: .   [  OK  ]
[EMAIL PROTECTED] cartera]# 

Steve, can you post the output of 'netstat -anp | grep 81' (it should
find nothing).


Re: [squid-users] Squid currently not working.

2008-02-20 Thread Adrian Chadd
Are you running it as root?




adrian

On Wed, Feb 20, 2008, Steve Billig wrote:
> I just don't see why it would not be working now, when I had no problems
> with it a while back, other than actually getting it running. Before, I had
> it set up so that it worked, and it did work on port 81. For some reason
> it doesn't want to work now. I would try another port, but only a few
> standard ports are not blocked at my school, like ports 81
> and 3389. This is why I chose port 81.
> 
> If I am reading the information right from that command, nothing else
> is running on port 81. Although it came up with quite a few things and
> it could also be that they are running on it.
> 
> As I said above, I just don't see why it wouldn't like the port now...
> 
> On Wed, Feb 20, 2008 at 8:20 PM, Adam Carter <[EMAIL PROTECTED]> wrote:
> > > FATAL: Cannot open HTTP Port
> >  > Squid Cache (Version 2.6.STABLE16): Terminated abnormally.
> >  >
> >
> > > Supposedly by what this says, the port can't be opened. I made sure
> >  > that the firewall had it opened and that my router was forwarding it.
> >
> >  It's not a firewall thing; it's the operating system not allowing squid to
> >  open that port. Either the port is already in use, or squid doesn't have
> >  the correct privileges to open the port. Typically you need to be root to
> >  open a port <1024.
> >
> >  As root, use 'netstat -anp | grep 81' to check if its in use and what is
> >  using it. I use port 8080 for squid;
> >  rix adam # netstat -anp | grep 8080
> >  tcp0  0 192.168.1.4:80800.0.0.0:*
> >  LISTEN  11852/(squid)
> >  rix adam #
> >
> >
> 
> 
> 
> -- 
> -Steve

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid currently not working.

2008-02-20 Thread Steve Billig
I just don't see why it would not be working now, when I had no problems
with it a while back, other than actually getting it running. Before, I had
it set up so that it worked, and it did work on port 81. For some reason
it doesn't want to work now. I would try another port, but only a few
standard ports are not blocked at my school, like ports 81
and 3389. This is why I chose port 81.

If I am reading the information right from that command, nothing else
is running on port 81. Although it came up with quite a few things and
it could also be that they are running on it.

As I said above, I just don't see why it wouldn't like the port now...

On Wed, Feb 20, 2008 at 8:20 PM, Adam Carter <[EMAIL PROTECTED]> wrote:
> > FATAL: Cannot open HTTP Port
>  > Squid Cache (Version 2.6.STABLE16): Terminated abnormally.
>  >
>
> > Supposedly by what this says, the port can't be opened. I made sure
>  > that the firewall had it opened and that my router was forwarding it.
>
>  It's not a firewall thing; it's the operating system not allowing squid to
>  open that port. Either the port is already in use, or squid doesn't have
>  the correct privileges to open the port. Typically you need to be root to
>  open a port <1024.
>
>  As root, use 'netstat -anp | grep 81' to check if its in use and what is
>  using it. I use port 8080 for squid;
>  rix adam # netstat -anp | grep 8080
>  tcp0  0 192.168.1.4:80800.0.0.0:*
>  LISTEN  11852/(squid)
>  rix adam #
>
>



-- 
-Steve


[squid-users] confused config on default squid.conf

2008-02-20 Thread J. Peng
Hello members,

Below is the piece from the default squid.conf:


#  TAG: nonhierarchical_direct
#   By default, Squid will send any non-hierarchical requests
#   (matching hierarchy_stoplist or not cacheable request type) direct
#   to origin servers.
#
#   If you set this to off, Squid will prefer to send these
#   requests to parents.
#
#   Note that in most configurations, by turning this off you will only
#   add latency to these request without any improvement in global hit
#   ratio.
#
#   If you are inside an firewall see never_direct instead of
#   this directive.
#
#Default:
# nonhierarchical_direct on

#  TAG: prefer_direct
#   Normally Squid tries to use parents for most requests. If you for some
#   reason like it to first try going direct and only use a parent if
#   going direct fails set this to on.
#
#   By combining nonhierarchical_direct off and prefer_direct on you
#   can set up Squid to use a parent as a backup path if going direct
#   fails.
#
#   Note: If you want Squid to use parents for all requests see
#   the never_direct directive. prefer_direct only modifies how Squid
#   acts on cacheable requests.
#
#Default:
# prefer_direct off


I'm confused about this statement:

#   By combining nonhierarchical_direct off and prefer_direct on you
#   can set up Squid to use a parent as a backup path if going direct
#   fails.


Why is it "nonhierarchical_direct off and prefer_direct on"?
I think it should be "nonhierarchical_direct on and prefer_direct on".

Thanks for your kind help.
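For what it's worth, the comment's combination does make sense: with the default nonhierarchical_direct on, non-hierarchical requests always go direct and never reach a parent, so no fallback is possible for them. Turning it off, together with prefer_direct on, is what makes the parent a pure backup path. A minimal sketch (the parent hostname is hypothetical):

```
# Try direct first; use the parent only when going direct fails.
prefer_direct on
nonhierarchical_direct off
cache_peer parent.example.com parent 3128 0 no-query default
```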


RE: [squid-users] Squid currently not working.

2008-02-20 Thread Adam Carter
> FATAL: Cannot open HTTP Port
> Squid Cache (Version 2.6.STABLE16): Terminated abnormally.
> 
> Supposedly by what this says, the port can't be opened. I made sure
> that the firewall had it opened and that my router was forwarding it.

Its not a firewall thing, its the operating system not allowing squid to
open that port. Either the port is already in use, or squid doesn't have
the correct privilages to open the port. Typically you need to be root
open a port <1024.

As root, use 'netstat -anp | grep 81' to check if its in use and what is
using it. I use port 8080 for squid;
rix adam # netstat -anp | grep 8080
tcp0  0 192.168.1.4:80800.0.0.0:*
LISTEN  11852/(squid)
rix adam #



RE: [squid-users] What is the ICAP chain exactly?

2008-02-20 Thread S.KOBAYASHI
Alex,
Thank you very much, and sorry for my poor English.
The meaning of "chain" is almost clear to me now, but I have a few more
questions.
What I meant by "not only processing ICAP" is as below:
an ICAP chain only has the ability to connect to the next service; or,
before going to the next service, squid or the previous service can control
whether to connect to the next service or not (bypass).
icap_service_1 ---> icap_service_2 ---> and so on (only sequential processing).
Or
icap_service_1 ---> icap_service_3 ---> ... (icap_service_1 can choose the
next service.)

Does that make sense to you?

Thanks a lot,
Seiji Kobayashi

-Original Message-
From: Alex Rousskov [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 20, 2008 3:38 PM
To: S.KOBAYASHI
Cc: 'Squid Users'
Subject: Re: [squid-users] What is the ICAP chain exactly?


On Wed, 2008-02-20 at 12:43 +0900, S.KOBAYASHI wrote:

> I've read some documents, but I don't yet understand how an ICAP chain
> works or how to set it up in squid.conf.
> I wonder if squid can control the sequence of connecting to several ICAP
> services.
> It's not only processing ICAP services in a row, but also controlling the
> sequence of ICAP services.
> Do my thoughts make sense? Is it possible?

For a given HTTP message, Squid 3.0 can select zero or one ICAP service.

Future Squid versions will probably be able to select a list of ICAP
services for a given HTTP message. The services would then be applied
one after another, in a chain- or pipe-like manner, where the output of
the previous service becomes the input of the next one. Developers are
looking for those who need that kind of functionality.

Sorry, I do not know what you mean by "not only processing ICAP services
in a row, but also controls sequence of ICAP services". An example might
help.

Thank you,

Alex.
P.S. Squid2 with some ICAP patches can support ICAP service chaining,
but poorly.
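For reference, in Squid 3.0 service selection is done per-ACL rather than by chaining; a minimal squid.conf sketch (service name, port, and URI here are hypothetical) looks like:

```
icap_enable on
icap_service svc_req reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_req svc_req
icap_access class_req allow all
```

At most one matching service is applied to a given HTTP message, which is the "zero or one ICAP service" behaviour described above.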





Re: [squid-users] SARG - deny_info problem

2008-02-20 Thread Amos Jeffries
>
> Hi all.
> A few days ago my server died, yesterday I installed and configured again.
> all fine except one deny_info.
> Here is my config:
>
> acl adv url_regex -i "/etc/squid/deny/banner.block"
>
> http_access deny adv
> deny_info http://192.168.1.50/spam/nospam.gif adv
>
> when trying to use the proxy, squid dies suddenly.
>
> here is the cache log:
>
> 2008/02/20 10:14:52| errorTryLoadText:
> '/usr/share/squid/errors/http://192.168.1.50/spam/nospam.gif': (2) No such
> file or dir
> 2008/02/20 10:14:52| errorTryLoadText:
> '/etc/squid/errors/http://192.168.1.50/spam/nospam.gif': (2) No such file
> or
> directory
> FATAL: failed to find or read error text file.
> Squid Cache (Version 2.5.STABLE1): Terminated abnormally.

2.5! Since you have to reinstall anyway, why not try an upgrade in the
process? I suspect there were a few patches on that _really_ old release,
most of which are likely to be in 2.6 now.

Amos




Re: [squid-users] Squid currently not working.

2008-02-20 Thread Steve Billig
Yeah, I found it; sorry about that. These are the last few entries at
the bottom of the log file.


2008/02/19 20:08:18| Starting Squid Cache version 2.6.STABLE16 for
i386-redhat-linux-gnu...
2008/02/19 20:08:18| Process ID 2051
2008/02/19 20:08:18| With 1024 file descriptors available
2008/02/19 20:08:18| Using epoll for the IO loop
2008/02/19 20:08:18| DNS Socket created at 0.0.0.0, port 32769, FD 5
2008/02/19 20:08:18| Adding domain hsd1.pa.comcast.net. from /etc/resolv.conf
2008/02/19 20:08:18| Adding nameserver 68.87.64.146 from /etc/resolv.conf
2008/02/19 20:08:18| Adding nameserver 68.87.75.194 from /etc/resolv.conf
2008/02/19 20:08:18| User-Agent logging is disabled.
2008/02/19 20:08:18| Referer logging is disabled.
2008/02/19 20:08:18| Unlinkd pipe opened on FD 10
2008/02/19 20:08:18| Swap maxSize 102400 KB, estimated 7876 objects
2008/02/19 20:08:18| Target number of buckets: 393
2008/02/19 20:08:18| Using 8192 Store buckets
2008/02/19 20:08:18| Max Mem  size: 8192 KB
2008/02/19 20:08:18| Max Swap size: 102400 KB
2008/02/19 20:08:18| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/02/19 20:08:18| Rebuilding storage in /var/spool/squid (DIRTY)
2008/02/19 20:08:18| Using Least Load store dir selection
2008/02/19 20:08:18| Set Current Directory to /var/spool/squid
2008/02/19 20:08:18| Loaded Icons.
2008/02/19 20:08:19| commBind: Cannot bind socket FD 12 to *::
(13) Permission denied
FATAL: Cannot open HTTP Port
Squid Cache (Version 2.6.STABLE16): Terminated abnormally.
CPU Usage: 0.042 seconds = 0.029 user + 0.013 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2524 KB
Ordinary blocks: 2446 KB  2 blks
Small blocks:   0 KB  1 blks
Holding blocks:   244 KB  1 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  77 KB
Total in use:2690 KB 97%
Total free:77 KB 3%

Going by what this says, the port can't be opened. I made sure
that the firewall had it opened and that my router was forwarding it.

Thanks a lot for the help.


-- 
-Steve


RE: [squid-users] Squid currently not working.

2008-02-20 Thread Adam Carter
> Where are the log files that I am supposed to be looking at?

They are defined in squid.conf, e.g. on my system:

[EMAIL PROTECTED] ~ $ grep cache.log /etc/squid/squid.conf
#  TAG: cache_log
cache_log /var/log/squid/cache.log
#   cache.log log file is written with stdio functions, and as such
#   message to cache.log.  You can allow responses from unknown
#   If set to "warn" then a warning will be emitted in cache.log
[EMAIL PROTECTED] ~ $



Re: [squid-users] Squid currently not working.

2008-02-20 Thread Steve Billig
Where are the log files that I am supposed to be looking at?

On Wed, Feb 20, 2008 at 7:59 AM, Steve Billig <[EMAIL PROTECTED]> wrote:
> > > cache_log seems like a good place to start looking.
>  > >
>  > > What OS is this?
>  > >
>
> Currently I am not home to check the logs, but I do know that it is Fedora 8,
>
>  > Also, does your currently logged in user have the root privileges needed
>  > to start squid and access all its required files, etc.?
>  >
>  > Amos
>  >
>  >
>
>
> Yes, this account does have root privileges, considering it is the
>  root account that I am trying to run it off of.
>
>  > Which port is squid on?
>  The port it is currently set to, or at least the line that sets it, is
>  "http_port 81". I did allow this port in Fedora's firewall and I also
>  forwarded it through my router.
>
>  I will check the logs when I get home. VNC won't let me connect to my
>  server from school. : /
>
>
>  --
>  -Steve
>



-- 
-Steve


Re: [squid-users] Problems accessing www.oracle.com (squid-2.5.STABLE6-3.4E.12/ CentOS 4.3)

2008-02-20 Thread Amos Jeffries
> Hi,
>
> We have been using squid for several years with great success and
> excellent reliability. I would like to thank all those who have
> contributed to this cracking app.
>
> Up until yesterday we could browse to www.oracle.com via squid.
>
> In the past we overcame difficulties accessing this site by disabling
> caching:
>
> acl ORA1 dstdomain .oracle.com
> acl ORA2 dstdomain .oracle.nl
> acl QUERY urlpath_regex cgi-bin \?
> no_cache deny QUERY ORA1 ORA2
>
> But now this no longer works. I verified that squid can access the site.
> Connectivity is 100%. I've disabled tcp_ecn on the linux box, but still
> no joy. Literally every other website can be browsed perfectly.
>
> If someone has any suggestions as to how I can fix this I would be most
> grateful.

Sounds like something has changed at Oracle's end that squid 2.5 can't handle.

It may be the HTTP/1.1 chunked-encoding brokenness that has been causing many
problems these last few months.

Use squidclient to see what is coming and going to squid, or check your
cache.log for details of what squid is doing on those requests.
Amos




[squid-users] Proxy intermission issue

2008-02-20 Thread Jeremy Kim
Hello,

Our squid proxy works fine most of the time, but a couple of times during
the day it freezes or slows down, causing connections to time out or take
a really long time to reach the website. Then it is fine again.

I did check the cache logs and there weren't any warnings about
median response times.

Would anyone know what might be the cause of this? My cache space is
currently 95% full, but even if I allocate more space to my cache, I still
have the same problem.

For example, my cache space was only 35% full before and I still had this
problem.

Jeremy



[squid-users] Forwarding HTTP and HTTPS Traffic to an Upstream Proxy using Cache_Peer on separate ports

2008-02-20 Thread Ric Lonsdale
Hi,

I am testing a product called Finjan, which is a website
anti-virus/malicious code checker that potentially blocks websites. I'm
using Squid.2.6-STABLE-12 on an IBM x345 server, with RedHat ES3.0, and have
configured the following fields to make the Finjan appliance a parent proxy
to my Squid setup.

However, the Finjan appliance listens on port 8080 for standard HTTP
traffic, but listens on 8443 for HTTPS (SSL) traffic, and squid returns the
following error with this setup.

FATAL: ERROR: cache_peer 10.198.1.2 specified twice

Squid Cache (Version 2.6.STABLE12): Terminated abnormally.
CPU Usage: 0.006 seconds = 0.001 user + 0.005 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Aborted

cache_peer 10.198.1.2 parent 8080 7 no-query
cache_peer 10.198.1.2 parent 8443 7 no-query
acl httptraffic proto HTTP
acl httpstraffic proto HTTPS
http_access allow httptraffic
http_access allow httpstraffic
cache_peer_access 10.198.1.2 allow httptraffic 
cache_peer_access 10.198.1.2 allow SSL_ports
never_direct allow all

Is it possible to change the squid.conf settings to send HTTP and HTTPS
requests to the same upstream Finjan appliance, but on separate ports?

Thanks, Ric





RE: [squid-users] Transparent Proxy not working in 3.0 STable1

2008-02-20 Thread WRIGHT Alan
Totally correct Amos

I rebuilt with netfilter only and works great, thanks

Alan


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: 14 February 2008 22:04
To: WRIGHT Alan
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Transparent Proxy not working in 3.0 STable1

> Hi Folks,
>
> I have installed squid 3.0 stable 1 and have configured it for
> transparent mode.
>
> Somehow it doesn't seem to work correctly.
>
> When it runs, it shows that it is running in transparent mode, but then
> when HTTP requests hit the box it gives the WARNING: Transparent
> proxying not supported. The web browser shows an error page, but it comes
> from the squid itself (Error: HTTP 400 Bad Request - Invalid URL.).
>
> When I configured the build, I used the tproxy and the netfilter options
> for transparent proxying as I wasn't sure which one I needed.

At present only one transparency option will work and build. The tproxy
configure option is for kernels patched with the TPROXY patch from
Balabit. The netfilter option is for standard kernels using iptables NAT
REDIRECT.

You will need to pick the one that applies to you and re-build squid.

>
> Does anyone have a clue why it will not run in transparent mode.
>
> I am pretty sure my iptables is OK

It probably is, but when squid is configured with multiple transparency
options it prefers the more transparent option (TPROXY is the only
completely transparent one).

It sounds like you need to drop the tproxy option.

Amos

>
> Here is what the trace shows:
>
> No.  Time       Source           Destination      Protocol  Info
>  20  12.102354  192.168.26.128   192.168.130.250  HTTP      GET / HTTP/1.1
>
> Frame 20 (493 bytes on wire, 493 bytes captured)
> Ethernet II, Src: 00:0c:29:e8:3d:07, Dst: 00:0c:29:01:ce:bc
> Internet Protocol, Src Addr: 192.168.26.128 (192.168.26.128), Dst Addr:
> 192.168.130.250 (192.168.130.250)
> Transmission Control Protocol, Src Port: 44418 (44418), Dst Port: http
> (80), Seq: 1, Ack: 1, Len: 427
> Hypertext Transfer Protocol
> GET / HTTP/1.1\r\n
> Host: 192.168.130.250\r\n
> User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.1)
> Gecko/20060313 Fedora/1.5.0.1-9 Firefox/1.5.0.1 pango-text\r\n
> Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\n
> Accept-Language: en-us,en;q=0.5\r\n
> Accept-Encoding: gzip,deflate\r\n
> Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
> Keep-Alive: 300\r\n
> Connection: keep-alive\r\n
> \r\n
>
> No.  Time       Source           Destination      Protocol  Info
>  22  12.157274  192.168.130.250  192.168.26.128   HTTP      HTTP/1.0 400 Bad Request (text/html)[Short Frame]
>
> Frame 22 (1514 bytes on wire, 500 bytes captured)
> Ethernet II, Src: 00:0c:29:01:ce:bc, Dst: 00:0c:29:e8:3d:07
> Internet Protocol, Src Addr: 192.168.130.250 (192.168.130.250), Dst
> Addr: 192.168.26.128 (192.168.26.128)
> Transmission Control Protocol, Src Port: http (80), Dst Port: 44418
> (44418), Seq: 1, Ack: 428, Len: 1448
> Hypertext Transfer Protocol
> HTTP/1.0 400 Bad Request\r\n
> Server: squid/3.0.STABLE1\r\n
> Mime-Version: 1.0\r\n
> Date: Thu, 14 Feb 2008 04:44:37 GMT\r\n
> Content-Type: text/html\r\n
> Content-Length: 1447\r\n
> Expires: Thu, 14 Feb 2008 04:44:37 GMT\r\n
> X-Squid-Error: ERR_INVALID_URL 0\r\n
> X-Cache: MISS from localhost.localdomain\r\n
> Via: 1.0 localhost.localdomain (squid/3.0.STABLE1)\r\n
> Proxy-Connection: close\r\n
> \r\n
>
> TIA
>
> Alan
>
>
>
>
>





[squid-users] squid_ldap_group - doubt

2008-02-20 Thread Luis Claudio Botelho - Chefe de Tecnologia e Redes

Hi,

I'm trying to test squid_ldap_group.
The scenario is:

dn: CN=lbotelho,OU=Funcionarios,OU=Usuarios,DC=FEINET,DC=FEI,DC=EDU,DC=BR
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: lbotelho
memberOf: CN=funcionarios,CN=Users,DC=FEINET,DC=FEI,DC=EDU,DC=BR
memberOf: CN=Rede,OU=Funcionarios,OU=Usuarios,DC=FEINET,DC=FEI,DC=EDU,DC=BR
memberOf: CN=Domain Admins,CN=Users,DC=FEINET,DC=FEI,DC=EDU,DC=BR
name: lbotelho
sAMAccountName: lbotelho

I got these results through ldapsearch command.

But when I try to run squid_ldap_group, I received an ERR.
Here is the syntax:


./squid_ldap_group -d -P -b "dc=feinet,dc=fei,dc=edu,dc=br" -v 3 -D 
"cn=proxy_user,ou=funcionarios,ou=usuarios,dc=feinet,dc=fei,dc=edu,dc=br" -w 
"123456" -f"(&(uid=%v)(member=%g))" -h 172.16.0.13


After the command above, I entered with


lbotelho Rede


And the result is

Connected OK
group filter '(&(uid=lbotelho)(member=Rede))', searchbase 
'dc=feinet,dc=fei,dc=edu,dc=br'

ERR


I tried a lot of other variations (searching on www.squid-cache.org), but
it didn't work. To sum up: I know that I'm doing something wrong, but I
don't know how to solve it.


If someone has something that can help, it would be very nice.

Thanks a lot!


Luis Claudio Botelho
Chefe de Tecnologia e Redes
Coordenadoria Geral de Informática
Centro Universitário da FEI
São Bernardo do Campo - SP
4353-2900 ramal 2117

"The great secret of life is to spend it in something that endures more than 
itself"

"In the box was written: Windows NT, 2000 or better. So I installed Linux"
"Knowing is not enough, we must apply. Willing is not enough, we must do." 





Re: [squid-users] SARG - deny_info problem

2008-02-20 Thread Adrian Chadd
On Wed, Feb 20, 2008, eXtremer wrote:

> FATAL: failed to find or read error text file.
> Squid Cache (Version 2.5.STABLE1): Terminated abnormally.

Squid-2.5.STABLE1? Are you sure what you're doing is supported?




adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] www.cmhc.ca site doesn't load

2008-02-20 Thread Adrian Chadd
On Wed, Feb 20, 2008, Shoebottom, Bryan wrote:
> Adrian,
> 
> Thank you for the suggestions, the problem is with timestamping and
> window scaling.  When I disable both of these, the site works.  Now I am
> debating whether I should do this or have this single site bypass the
> cache entirely.
> Disabling timestamping looks like it's no big deal, but disabling window
> scaling looks like it stops TCP negotiation of window sizes larger than
> 64K.  I am looking at this as a big negative, but would appreciate your
> thoughts as you are more experienced with caching technology.
> 

It's unfortunate, but this is the sort of thing which needs to be done
on various internet-connected services. The problem is that the Window Scaling
stuff can be "slightly off" and still work mostly OK, and this confuses
debugging by "does it work or not" methods (i.e., does the user complain).

You could try setting the scaling factor to 2.

Ideally some logic could be put into Squid to identify when a connection
fails/hangs and retry that peer with window scaling/timestamps/PMTU disabled,
flagging that peer/network as "busted".
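For reference, "disabling both of these" on Linux comes down to two kernel sysctls (the values below are the disable settings discussed in this thread; they apply box-wide, so weigh the throughput cost before making them permanent):

```
# /etc/sysctl.conf
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_timestamps = 0
```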



adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid 3.0 Stable1 with MySql Logging

2008-02-20 Thread Adrian Chadd
On Wed, Feb 20, 2008, Marcello Romani wrote:

> Ok I found it. I think the comments in the squid.conf file should 
> contain a paragraph like the following:
> 
> #   And priority could be any of:
> #   err, warning, notice, info, debug.
> #
> #   To log the request via a logfile writing daemon, specify a filepath 
> of "daemon", e.g.:
> #   access_log daemon:/var/logs/access.log squid
> #   You may also want to change the default value of logfile_daemon to 
> point to
> #   a custom logfile-writing daemon

Thanks. I've been meaning to update the documentation for a while; I've just
had other things take my time. I'll try to sneak something more comprehensive
in before 2.7 is (finally) released.



adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Problems accessing www.oracle.com (squid-2.5.STABLE6-3.4E.12/ CentOS 4.3)

2008-02-20 Thread Adrian Chadd
Did you start with tcpdump, to see what the network is doing?



Adrian


On Wed, Feb 20, 2008, Shaun Mccullagh wrote:
> Hi,
> 
> We have been using squid for several years with great success and
> excellent reliability. I would like to thank all those who have
> contributed to this cracking app.
> 
> Up until yesterday we could browse to www.oracle.com via squid.
> 
> In the past we overcame difficulties accessing this site by disabling
> caching:
> 
> acl ORA1 dstdomain .oracle.com
> acl ORA2 dstdomain .oracle.nl
> acl QUERY urlpath_regex cgi-bin \?
> no_cache deny QUERY ORA1 ORA2
> 
> But now this no longer works. I verified that squid can access the site.
> Connectivity is 100%. I've disabled tcp_ecn on the linux box, but still
> no joy. Literally every other website can be browsed perfectly.
> 
> If someone has any suggestions as to how I can fix this I would be most
> grateful.
> 
> Thx
> 
> Shaun
> 
> 
> 
> 
> 
> A disclaimer applies to this e-mail message; it can be found at
> http://www.xb.nl/disclaimer.html
> 
> 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid 3.0 Stable1 with MySql Logging

2008-02-20 Thread Marcello Romani

Marcello Romani wrote:

Adrian Chadd wrote:

On Mon, Feb 18, 2008, Marcello Romani wrote:

Hi, I have some experience in Perl and mysql. I can't guarantee a 
timely implementation, but I'm interested in this feature and I'm 
willing to contribute.


Could you give me some pointers for where to start ?


look for "logfile_daemon" in squid-2.7 snapshots. It's relatively easy
from there.



Adrian



I've written a very small perl script which reads stdin and dumps it to 
a text file. I've set it to be the logfile_daemon, but it seems it 
doesn't get called.


I've not found much info on this configuration directive.

I must be missing something stupid...

Can you help me? Thanks




Ok I found it. I think the comments in the squid.conf file should 
contain a paragraph like the following:


#   And priority could be any of:
#   err, warning, notice, info, debug.
#
#   To log the request via a logfile-writing daemon, specify a filepath
#   of "daemon", e.g.:
#   access_log daemon:/var/logs/access.log squid
#   You may also want to change the default value of logfile_daemon to
#   point to a custom logfile-writing daemon

HTH

--
Marcello Romani
Responsabile IT
Ottotecnica s.r.l.
http://www.ottotecnica.com


RE: [squid-users] www.cmhc.ca site doesn't load

2008-02-20 Thread Shoebottom, Bryan
Adrian,

Thank you for the suggestions, the problem is with timestamping and
window scaling.  When I disable both of these, the site works.  Now I am
debating whether I should do this or have this single site bypass the
cache entirely.
Disabling timestamping looks like it's no big deal, but disabling window
scaling looks like it stops TCP negotiation of window sizes larger than
64K.  I am looking at this as a big negative, but would appreciate your
thoughts as you are more experienced with caching technology.


--
Thanks,

Bryan Shoebottom CCNA
Network & Systems Analyst
Network Services & Computer Operations
Fanshawe College
Phone:  (519) 452-4430 x4904
Fax:  (519) 453-3231
[EMAIL PROTECTED]


-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 16, 2008 8:13 AM
To: Shoebottom, Bryan
Cc: Adrian Chadd; squid-users@squid-cache.org
Subject: Re: [squid-users] www.cmhc.ca site doesn't load

On Sat, Feb 16, 2008, Shoebottom, Bryan wrote:
> I thought of this because I've had this problem in the past with sites
> like hotmail.  But when I configure the browser for the cache server
> itself and bypass WCCP, I have the same problem.  I was hoping the
> community would be able to tell me if they have any difficulties with
> this site.  Then I could begin to compare configurations.

Various people have issues with these sorts of things. Generally it's
because of stuff like ECN, PMTU discovery, Window Scaling/Timestamping,
etc.



Adrian

> 
> 
> 
> 
> -Original Message-
> From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
> Sent: Friday, February 15, 2008 6:17 PM
> To: Shoebottom, Bryan
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] www.cmhc.ca site doesn't load
> 
> Start by using a packet sniffer and see if you can determine why the
> TCP sessions are hanging.
> 
> It may be WCCPv2 interception. It depends on how you've set it up to
the
> Cat6k.
> 
> 
> 
> Adrian
> 
> On Fri, Feb 15, 2008, Shoebottom, Bryan wrote:
> > Hello,
> > 
> > I am having problems getting to www.cmhc.ca through our cache
servers.
> > We have a 2.6S4 and a 3.0S1 server running transparently with WCCPv2
> and
> > Cisco cat6k equipment.  I have tried to get to the site through the
> > transparent configuration, and with each cache configured in my
> browser,
> > but the site takes a long time to come up (over 10min, I haven't
> stayed
> > around to watch) if it ever completes in any situation.  If I bypass
> the
> > caches completely, I can bring up the site with no problems.
> > There are no errors in cache.log and access.log only shows an entry
> when
> > something finally loads in the browser (i.e. when the icon shows up
> > after 5min, I see the request for favicon.ico).  Since the site
> doesn't
> > load when the browser is configured for a cache, WCCP shouldn't be
the
> > issue.
> > 
> > Can anyone replicate this or have a solution?  If you need any more
> > info, please let me know.
> > 
> > 
> > 
> 



RE: [squid-users] Problems accessing www.oracle.com (squid-2.5.STABLE6-3.4E.12/ CentOS 4.3)

2008-02-20 Thread Shaun Mccullagh

Hi Joop

-Original Message-
From: J Beris [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 20, 2008 1:32 PM
To: Shaun Mccullagh; squid-users@squid-cache.org
Subject: RE: [squid-users] Problems accessing www.oracle.com
(squid-2.5.STABLE6-3.4E.12/ CentOS 4.3)

Hi,

> Up until yesterday we could browse to www.oracle.com via squid.

I am also behind a Squid proxy here (squid-2.6.STABLE6-0.6) and
www.oracle.com works perfectly...
 
> But now this no longer works. I verified that squid can 
> access the site.
> Connectivity is 100%. I've disabled tcp_ecn on the linux box, 
> but still no joy. Literally every other website can be 
> browsed perfectly.

What happens when you go to the site? Time-out? Error message?

The browser waits for data indefinitely. There is no timeout, or error
message.

What do access.log and cache.log tell you about www.oracle.com?

Access.log:
10.31.17.3 - - [20/Feb/2008:14:41:23 +0100] "GET http://www.oracle.com/
HTTP/1.1" 302 495 TCP_MISS:DIRECT

No entry appears in cache.log.

Thanks for speedy response, very much appreciated.

Shaun


Joop

 
This message has been scanned for viruses and other dangerous
content by MailScanner and appears to be clean.
MailScanner by http://www.prosolit.nl
Professional Solutions for IT










Re: [squid-users] Squid currently not working.

2008-02-20 Thread Steve Billig
> > cache_log seems like a good place to start looking.
> >
> > What OS is this?
> >
Currently I am not home to check the logs, but I do know that it is Fedora 8.

> Also, does your currently logged-in user have the root privileges needed
> to start squid and access all its required files etc.
>
> Amos
>
>
Yes, this account does have root privileges, since it is the
root account that I am trying to run it from.

> Which port is squid on?
The config line currently says "http_port 81". I did allow this port in
Fedora's firewall and I also forwarded it through my router.

I will check the logs when I get home. VNC won't let me connect to my
server from school. : /


-- 
-Steve


RE: [squid-users] Problems accessing www.oracle.com (squid-2.5.STABLE6-3.4E.12/ CentOS 4.3)

2008-02-20 Thread J Beris
Hi,

> Up until yesterday we could browse to www.oracle.com via squid.

I am also behind a Squid proxy here (squid-2.6.STABLE6-0.6) and
www.oracle.com works perfectly...
 
> But now this no longer works. I verified that squid can 
> access the site.
> Connectivity is 100%. I've disabled tcp_ecn on the linux box, 
> but still no joy. Literally every other website can be 
> browsed perfectly.

What happens when you go to the site? Time-out? Error message?
What do access.log and cache.log tell you about www.oracle.com?

Joop

 
Dit bericht is gescand op virussen en andere gevaarlijke
inhoud door MailScanner en lijkt schoon te zijn.
Mailscanner door http://www.prosolit.nl
Professional Solutions fot IT



[squid-users] SARG - deny_info problem

2008-02-20 Thread eXtremer

Hi all.
A few days ago my server died; yesterday I reinstalled and configured it again.
All is fine except one deny_info.
Here is my config:

acl adv url_regex -i "/etc/squid/deny/banner.block"

http_access deny adv
deny_info http://192.168.1.50/spam/nospam.gif adv

When trying to use the proxy, squid suddenly dies.

here is the cache log:

2008/02/20 10:14:52| errorTryLoadText:
'/usr/share/squid/errors/http://192.168.1.50/spam/nospam.gif': (2) No such
file or dir
2008/02/20 10:14:52| errorTryLoadText:
'/etc/squid/errors/http://192.168.1.50/spam/nospam.gif': (2) No such file or
directory
FATAL: failed to find or read error text file.
Squid Cache (Version 2.5.STABLE1): Terminated abnormally.


Opening http://192.168.1.50/spam/nospam.gif directly works fine; I see the
icon.


Until my proxy server died, the deny_info worked fine.
What's the problem now? Waiting for a reply.
Thanks in advance.




Re: [squid-users] delay_parameters: What is difference between aggregate, network and individual bucket?

2008-02-20 Thread Yong Bong Fong

thanks a lot Amos!

Amos Jeffries wrote:

Yong Bong Fong wrote:

Dear friends,

 I am just confused about the usage of the aggregate, network and 
individual buckets.
If I'm not mistaken, the aggregate bucket is like a public bucket that 
all users share, and an individual bucket is one specific to each user?


Say if I set delay_parameters as follows:
delay_parameters 2 32000/32000 8000/8000 600/8000
then, how does it allocate the bucket limitation to each user?


aggregate bucket
  - ALL traffic has to be within the parameters.

network bucket (/24, /16, /network-size)
  - traffic per /n network as a whole network.
squid may handle more than one /n network at once.

individual bucket
  - each IP address must have its traffic matching these settings.



> delay_parameters 2 32000/32000 8000/8000 600/8000

 - No individual IP can get more than 600bytes/sec. Slow clients are 
given a bit of leeway to grab up to 8000byte chunks to compensate for 
up to 13sec network delays.


 - No network of class  may use more than 
8000bytes/sec.
   ie 12 IPs can connect at full rate; any more start to cut the others' 
speeds down.


 - Absolute max cap is set at 32000bytes/sec.
ie 48 IPs total can connect at full individual rate, before slowing.
ie 4 network blocks may reach full rate before affecting each 
other's speed.
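The worked numbers above correspond to a class-3 pool (aggregate, network and individual buckets). A complete hypothetical squid.conf fragment using them, where the `localnet` ACL is an assumption standing in for your client range:

```
# hypothetical class-3 delay pool applying the limits discussed above
acl localnet src 192.168.0.0/16
delay_pools 1
delay_class 1 3
delay_access 1 allow localnet
delay_access 1 deny all
delay_parameters 1 32000/32000 8000/8000 600/8000
```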



Amos


[squid-users] Problems accessing www.oracle.com (squid-2.5.STABLE6-3.4E.12/ CentOS 4.3)

2008-02-20 Thread Shaun Mccullagh
Hi,

We have been using squid for several years with great success and
excellent reliability. I would like to thank all those who have
contributed to this cracking app.

Up until yesterday we could browse to www.oracle.com via squid.

In the past we overcame difficulties accessing this site by disabling
caching:

acl ORA1 dstdomain .oracle.com
acl ORA2 dstdomain .oracle.nl
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY ORA1 ORA2
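(For reference, squid-2.6 and later spell the no_cache directive as plain `cache`; an equivalent sketch of the exclusion above would be:)

```
acl ORA1 dstdomain .oracle.com
acl ORA2 dstdomain .oracle.nl
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY ORA1 ORA2
```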

But now this no longer works. I verified that squid can access the site.
Connectivity is 100%. I've disabled tcp_ecn on the linux box, but still
no joy. Literally every other website can be browsed perfectly.

If someone has any suggestions as to how I can fix this I would be most
grateful.

Thx

Shaun





Op dit e-mailbericht is een disclaimer van toepassing, welke te vinden is op 
http://www.xb.nl/disclaimer.html





Re: [squid-users] Digest Authentication in Squid through LDAP in Windows 2003 DC

2008-02-20 Thread Amos Jeffries

Luis Claudio Botelho - Chefe de Tecnologia e Redes wrote:

Hi Amos Jeffries,
Thank you for your cooperation..

So I used one of the links you sent to me, configured the tests in shell 
scripts, and they are OK.
But when I put it into squid.conf, I can't authenticate. I tried, but it 
still keeps asking me for a user and password in the web browser.


These are my lines in squid.conf:
==
auth_param digest realm squid-valencia
auth_param digest children 5
auth_param digest program /usr/lib/squid/digest_ldap_auth -b 
"ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br" -u "cn" -A 
"l" -D 
"cn=Proxy_User,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br" 
-w "123456" -e -v 3 -h 172.16.0.13 -d

==

I think that it's right. And I don't know if my problem is now in another 
line:


==
external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group -R 
-b "dc=feinet,dc=fei,dc=edu,dc=br" -D 
"cn=proxy_user,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br" 
-w "123456" -f 
"(&(objectclass=person)(memberof=cn=%a,ou=Funcionarios,ou=Usuarios,dc=feinet,dc=fei,dc=edu,dc=br))" 
-h 172.16.0.13

==

This external_acl_type works fine with basic, and I'm not sure that it's 
the right way to use external_acl_type with digest authentication.
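For what it's worth, none of the fragments above show the ACL glue that actually triggers the authentication challenge; a hedged sketch of the usual pattern, with hypothetical acl and group names:

```
# hypothetical names; the group must match a real directory group
acl proxy_users external ldap_group Proxy_Users
http_access allow proxy_users
http_access deny all
```

The %LOGIN token in the external_acl_type forces authentication when the acl is evaluated, whichever auth scheme (basic or digest) is configured.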


If you could help me once again, it would be very nice.


Sorry. I don't know LDAP myself. All I can do is post the links and hope 
they are helpful.


Amos



Thank you again!

Regards,

Luis - FEI - Brazil



- Original Message - From: "Amos Jeffries" <[EMAIL PROTECTED]>
To: "Luis Claudio Botelho - Chefe de Tecnologia e Redes" 
<[EMAIL PROTECTED]>

Cc: 
Sent: Monday, February 18, 2008 8:26 PM
Subject: Re: [squid-users] Digest Authentication in Squid through LDAP 
in Windows 2003 DC




Hi,

Please, I need some help about Digest Authentication.
We made a new server in our enterprise, using "Fedora 7" (64 bits).
We have Squid 3, installed, and we need to authenticate our users in one
of
the DC's (Windows 2003 Server DC).
The problem:
We started configuring Squid with basic authentication; it worked fine,
but
we got the user's password through "Ethereal Software". This is a 
problem

here, because we have a lot of students and teachers that we need to
guarantee security to them and against them.
So we tried "digest authentication", and our problem started. Our tests
failed, and we didn't find any documentation about how to implement
"digest_ldap_auth" to check the username and password.
We don't know if our idea about digest authentication is right or wrong.
We
imagine that we can simply authenticate in "Windows 2003 Server DC" (as
basic authentication does), without storing the user's password on the
Linux server. Is that possible? If yes, where can I find instructions about 
how

to
use it?
If you can help us about this, and even if our idea about digest
authentication between Squid and Windows 2003 Server is wrong, it 
would be

very nice.
I would like to thank you for your time, and sorry for any 
inconvenience.


Regards,



There is a help how-to in the wiki
http://wiki.squid-cache.org/KnowledgeBase/Using_the_digest_LDAP_authetication_helper 



There are also some other auth mechanisms that may be useful to you:

http://wiki.squid-cache.org/NegotiateAuthentication

http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM

Amos








--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


RE: [squid-users] DNS-based reverse proxy peer selection, 2.5 vs 2.6

2008-02-20 Thread Sven Edge
>From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
>Sven Edge wrote:
>> 
>> Poking around the source for the squid-2.6.STABLE17 release currently in
>> Fedora, there appears to be another source of DIRECT_NO besides a
>> never_direct, in peer_select.c.
>> http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid/src/peer_select.c
>> I've got version 1.131, where there's an "if
>> (request->flags.accelerated)" that can cause a DIRECT_NO, 
>but the most
>> recent version 1.134 has changed that. Not sure what the 
>code's testing
>> for in either version, but from the commit comment it sounds 
>like up to
>> now 2.6 was deliberately blocking direct access when in accelerator
>> mode. 
>> 
>> Maybe it's just a case of waiting for the next release?
>
>Aha, sounds like that yes. Fortunately Stable 18 is out already so if 
>the change was included there you could use that one.
>Otherwise the 2.6 daily snapshot should be stable enough to use, just 
>with a little testing required to be sure of it.

FYI, if
http://www.squid-cache.org/cgi-bin/cvsweb.cgi/squid3/src/peer_select.cc
is where squid 3.0 comes from, that doesn't have the same change
applied.

Thanks for your help. :)

Sven


Re: [squid-users] Random image generator w/ reverse-proxy

2008-02-20 Thread Amos Jeffries

Keith M. Richard wrote:

Amos,

I have a slightly older version of squid and it is setup as an
accelerator. Let me give you the layout. 


Domain name: www.my-company.org
Domain IP: 204.public address
DMZ IP Addr: 172.220.201.135 (squid server)
Internal IP: 192.1.0.59 (Web Server)


Then the line you configured:
  http_port 192.1.0.59: defaultsite=www.my-company.org

is wrong. It should be:

  http_port  defaultsite=www.my-company.org

both peers need easy names to ref, so changing these to:

  cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl
  login=PASS name=httpsWeb

  cache_peer 192.1.0.59 parent  0 no-query originserver
  name=imgServlet


also adding this:

  acl myWebsite dstdomain www.my-company.org
  acl imgServletPort myport 

  cache_peer_access httpsWeb deny imgServletPort
  cache_peer_access httpsWeb allow myDomain

  cache_peer_access imgServlet allow imgServletPort myDomain


if you want to be a pure-accelerator also add these:
  never_direct allow all
  http_access deny !myDomain

Squid will now perform acceleration without using DNS.


SQUID: Loads with the -D for no DNS and the host file has an entry for
192.1.0.59 as www.my-company.org.


-D just means the DNS servers are not tested before use. Squid still 
needs to resolve things it needs while running, and does.


The host file entry should lock that domain name never to be looked up 
remotely though.



I see below you are using 2.6s6. That is new enough that the 
internal-looping comments still stand. So does the solution.


For an acceleration part squid should only be listening on a bare port 
or 204.public-address:port combo.




Here is a dump from my cache.log from the last restart of squid:
2008/02/18 16:32:29| Starting Squid Cache version 2.6.STABLE6 for
i686-redhat-linux-gnu...
2008/02/18 16:32:29| Process ID 23575
2008/02/18 16:32:29| With 1024 file descriptors available
2008/02/18 16:32:29| Using epoll for the IO loop
2008/02/18 16:32:29| DNS Socket created at 0.0.0.0, port 32938, FD 5
2008/02/18 16:32:29| Adding domain groupbenefits.org from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| User-Agent logging is disabled.
2008/02/18 16:32:29| Referer logging is disabled.
2008/02/18 16:32:29| Unlinkd pipe opened on FD 10
2008/02/18 16:32:29| Swap maxSize 1024 KB, estimated 787692 objects
2008/02/18 16:32:29| Target number of buckets: 39384
2008/02/18 16:32:29| Using 65536 Store buckets
2008/02/18 16:32:29| Max Mem  size: 8192 KB
2008/02/18 16:32:29| Max Swap size: 1024 KB
2008/02/18 16:32:29| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/02/18 16:32:29| Rebuilding storage in /var/cache/squid (CLEAN)
2008/02/18 16:32:29| Using Least Load store dir selection
2008/02/18 16:32:29| Current Directory is /
2008/02/18 16:32:29| Loaded Icons.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port 80, FD 12.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port , FD 13.
2008/02/18 16:32:29| Accepting HTTPS connections at 0.0.0.0, port 443,
FD 14.
2008/02/18 16:32:29| Accepting ICP messages at 0.0.0.0, port 3130, FD
15.
2008/02/18 16:32:29| WCCP Disabled.
2008/02/18 16:32:29| Configuring Parent 192.1.0.59/443/0
2008/02/18 16:32:29| Configuring Parent 192.1.0.59//0
2008/02/18 16:32:29| Ready to serve requests.

All I really want to do is setup a http accelerator for this internal
website. I have read everything I can find about this and I guess I do
not understand the options. I do know that the option in the squid.conf
change rapidly and I am not running the newest version. I am running the
version that is loaded on my Red Hat server. I have downloaded the
newest version and am planning an upgrade very soon, but I am needing to
get this going first.

Thanks,
Keith

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Monday, February 18, 2008 5:13 PM
To: Keith M. Richard
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Random image generator w/ reverse-proxy


All,

I have a web page on my site that has a randomly generated image (alpha
numeric picture) to allow users to register. I am using squid as an
accelerator in my DMZ to this internal web server. Right now the image is
coded as an unsecured (http) link/servlet on port , which is just a
random port. This is embedded in an HTTPS page. If I don't use squid it
works, but through squid it fails to display the image.
I have checked the firewall and it is properly configured.
When I check the firewall's log, it shows the request to  from the
outside, but those same requests are never passed through squid for some
reason. I have also run Wireshark on the squid server to capture the
traffic as users made requests and I see the TCP [SYN] from

Re: [squid-users] strtocFile WARNING: empty ACL

2008-02-20 Thread Amos Jeffries

adrian.wells wrote:

adrian.wells wrote:

SQUID 2.5 stable12


Please try a later version of squid.

Will do once sorted ;-)


The trouble with that is, if it's an old bug, we could spend a week tracing 
it only to find the solution is an upgrade.


After an upgrade the free-support people like myself can usually 
identify the log messages etc. Or at worst trace things down in the code 
itself.


2.5 has not been actively debugged in at least two years, so most of us 
are very rusty on it, if we ever learned it at all.






Have just built a SuSE linux box and installed squid, as I have done 
many times, copied the conf and ACL text files from a running proxy 
and get the following list of errors for each ...


2008/02/19 10:27:42| strtokFile: /etc/squid/etc/girls.txt
2008/02/19 10:27:42| aclParseAclLine: WARNING: empty ACL: 
auid/etc/girls.txt"


I'm thinking at first glance that "auid" might be a problem.
It's usually a sign of non-ascii characters cut-n-pasted into the 
squid.conf text.
At this stage I've only removed a few unrequired ACL's, not edited the 
text in any way.
While waiting for a reply, I've started rebuilding the box from scratch 
as it only takes a few minutes.

I will check the text for anomalies in the meantime - thanks ;-)

P.S. I've lost the original thread, but would like to thank you for 
helping with a previous problem; you suggested that the ISP might be 
blocking ports, and they were! 3128!


I don't recall sorry. So many emails coming and going while I try to 
escape other work.


- I have tried 3129 and it works fine. Are 
there any recommended ports to use with squid other than 80 or 8080?


Not really. The admin work for configuration when using non-standard 
ports is pretty much fixed. It won't change much whether you use 1 or 65534.




Regards

Adrian




If I copy and paste the address in the error into a file browser, I get 
to see the file!


This works perfectly on the previous machine running the same version 
of SuSE & Squid!

I have re-installed squid.
Can anyone please offer a solution?

Kind regards
Adrian

girls.txt is just a list of MAC addresses
e.g.
00:1B:77:8A:D5:CF # Wir User Name
00:1B:24:7E:CB:B1 # LAN User Name
etc.


# Groups follow...
#

acl girls arp "/etc/squid/etc/girls.txt"
acl temp arp "/etc/squid/etc/temp.txt"

acl boys arp "/etc/squid/etc/boys.txt"
acl staff arp "/etc/squid/etc/staff.txt"
#
# Times follow...
#
acl 24Hr time M T W H F A S 00:00-23:59





Amos








Re: [squid-users] original server down

2008-02-20 Thread Amos Jeffries

J. Peng wrote:

This is my squid and its compile options:

$ /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr/local/squid-2.6'
'--enable-async-io=256' '--enable-epoll' '--disable-carp'
'--enable-removal-policies=heap lru' '--disable-wccp'
'--disable-wccpv2' '--enable-kill-parent-hack' '--with-large-files'
'--with-maxfd=65535' '--disable-ident-lookups'

thanks!

On Wed, Feb 20, 2008 at 5:06 PM, Amos Jeffries <[EMAIL PROTECTED]> wrote:

J. Peng wrote:
 > hello,
 >
 > How to handle the case where the original server is down? I use squid
 > for reverse-proxy, if I add some lines like:
 >
 > cache_peer parentcache.example.com   parent  80 0
 > cache_peer childcache2.example.com   sibling  80 0
 > cache_peer childcache3.example.com   sibling  80 0
 >
 > if original server was down, does squid go to query other caches like above?


Yes.


 > Will squid get cache MISS object from its sibling?


Yes, IFF the sibling has it fresh.


 >  or it can only get
 > cache MISS object from the parent?


Sources by preference I think are:
 local cache
 parent
 sibling
 direct


Amos


Re: [squid-users] Problem changing log files path

2008-02-20 Thread Amos Jeffries

Marco wrote:

hi to all,
i'm using OpenSuSE 10.2 with Squid 2.6 and i need to modify the path 
were to store cache.log, store.log and access.log.
Actually my system use the default directory (/var/log/squid); i would 
like to move to /cache/squid_log (a sub-dir into

the /cache squid directory).
i have modified the file /etc/squid/squid.conf whith the line below,

cache_log /cache/squid_log/store.log
access_log /cache/squid_log/access.log squid
cache_store_log /cache/squid_log/store.log


This is probably not needed. You can set it to "none".
Using the same filename as cache_log is not good either way.
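Putting the two comments together, the corrected lines would look something like this (same hypothetical directory; make sure the squid user can write there):

```
# cache_log gets its own file; store.log is usually not needed
cache_log /cache/squid_log/cache.log
access_log /cache/squid_log/access.log squid
cache_store_log none
```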



and then I restarted squid, but when I try to browse the internet, in 
the /cache/squid_log directory I don't see cache.log, store.log and 
access.log but strange files (index.html, etc.)... like URLs.


Are you sure that directory was empty before you started squid?



Where is my mistake?
Thanks to all



Amos


Re: [squid-users] squid access.log transparent debian 4

2008-02-20 Thread Amos Jeffries

mostafa faridi wrote:
I installed squid 2.6 on Debian 4 and I think everything is good, but when 
I type




Packaged squid or self-built? Which release number?



  tail -f /var/log/squid/access.log

it does not show me anything, and I think squid does not write the log.
this is my IPtables rule

*# Generated by iptables-save v1.3.6 on Tue Feb 19 18:08:20 2008
*nat
:PREROUTING ACCEPT [117:16009]
:POSTROUTING ACCEPT [2:111]
:OUTPUT ACCEPT [23:3703]
-A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
# Completed on Tue Feb 19 18:08:20 2008
# Generated by iptables-save v1.3.6 on Tue Feb 19 18:08:20 2008
*filter
:INPUT ACCEPT [1186:117352]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [550:173868]
-A FORWARD -i eth1 -j ACCEPT
COMMIT
# Completed on Tue Feb 19 18:08:20 2008

Where is my mistake?


NAT looks okay to me.

Might be your squid build (needs --enable-linux-netfilter) or something 
in the config.


What does /var/log/squid/access.log say?
What is your config (without the comments)?
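One thing worth checking alongside the NAT rules: on Squid 2.6 the port receiving the REDIRECTed traffic must itself be flagged for interception in squid.conf. A minimal sketch, assuming the default port 3128:

```
# squid-2.6 syntax; squid-3.1+ spells this option 'intercept'
http_port 3128 transparent
```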

Amos


Re: [squid-users] original server down

2008-02-20 Thread J. Peng
This is my squid and its compile options:

$ /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr/local/squid-2.6'
'--enable-async-io=256' '--enable-epoll' '--disable-carp'
'--enable-removal-policies=heap lru' '--disable-wccp'
'--disable-wccpv2' '--enable-kill-parent-hack' '--with-large-files'
'--with-maxfd=65535' '--disable-ident-lookups'


thanks!

On Wed, Feb 20, 2008 at 5:06 PM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>
> J. Peng wrote:
>  > hello,
>  >
>  > How to handle the case where the original server is down? I use squid
>  > for reverse-proxy, if I add some lines like:
>  >
>  > cache_peer parentcache.example.com   parent  80 0
>  > cache_peer childcache2.example.com   sibling  80 0
>  > cache_peer childcache3.example.com   sibling  80 0
>  >
>  > if original server was down, does squid go to query other caches like 
> above?
>  > Will squid get cache MISS object from its sibling? or it can only get
>  > cache MISS object from the parent?
>
>  Which version of squid? They all have different capabilities for this.
>
>
>  Amos
>


Re: [squid-users] original server down

2008-02-20 Thread Amos Jeffries

J. Peng wrote:

hello,

How do I handle the case where the original server is down? I use squid
for reverse-proxy, if I add some lines like:

cache_peer parentcache.example.com   parent  80 0
cache_peer childcache2.example.com   sibling  80 0
cache_peer childcache3.example.com   sibling  80 0

if original server was down, does squid go to query other caches like above?
Will squid get cache MISS object from its sibling? or it can only get
cache MISS object from the parent?


Which version of squid? They all have different capabilities for this.


Amos


Re: [squid-users] delay_parameters: What is difference between aggregate, network and individual bucket?

2008-02-20 Thread Amos Jeffries

Yong Bong Fong wrote:

Dear friends,

 I am just confused about the usage of the aggregate, network and individual 
buckets.
If I'm not mistaken, the aggregate bucket is like a public bucket that all 
users share, and an individual bucket is one specific to each user?


Say if I set delay_parameters as follows:
delay_parameters 2 32000/32000 8000/8000 600/8000
then, how does it allocate the bucket limitation to each user?


aggregate bucket
  - ALL traffic has to be within the parameters.

network bucket (/24, /16, /network-size)
  - traffic per /n network as a whole network.
squid may handle more than one /n network at once.

individual bucket
  - each IP address must have its traffic matching these settings.



> delay_parameters 2 32000/32000 8000/8000 600/8000

 - No individual IP can get more than 600bytes/sec. Slow clients are 
given a bit of leeway to grab up to 8000byte chunks to compensate for up 
to 13sec network delays.


 - No network of class  may use more than 
8000bytes/sec.
   ie 12 IPs can connect at full rate; any more start to cut the others' 
speeds down.


 - Absolute max cap is set at 32000bytes/sec.
ie 48 IPs total can connect at full individual rate, before slowing.
ie 4 network blocks may reach full rate before affecting each 
other's speed.



Amos


Re: [squid-users] Squid 3.0 Stable1 with MySql Logging

2008-02-20 Thread Marcello Romani

Adrian Chadd wrote:

On Mon, Feb 18, 2008, Marcello Romani wrote:

Hi, I have some experience in Perl and mysql. I can't guarantee a timely 
implementation, but I'm interested in this feature and I'm willing to 
contribute.


Could you give me some pointers for where to start ?


look for "logfile_daemon" in squid-2.7 snapshots. It's relatively easy from
there.



Adrian



I've written a very small perl script which reads stdin and dumps it to 
a text file. I've set it to be the logfile_daemon, but it seems it 
doesn't get called.


I've not found much info on this configuration directive.

I must be missing something stupid...

Can you help me? Thanks
