Re: [squid-users] pac and dat woes

2007-05-11 Thread Amos Jeffries

David Gameau wrote:

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 


On Fri, May 11, 2007, David Gameau wrote:
I've made WPAD work but I've not made it work with a DHCP 
configuration. I've done mine with DNS.


Does anyone here have an example of a WPAD+DHCP configuration? If so I'd
like to talk to you and document it on the Wiki.


Here's what we use to support WPAD+DHCP:
[From dhcpd.conf, in the global section of the file]
  option option-252 code 252 = text;
  option option-252 "http://wpad.example.com/wpad.dat\n";

Note that IE6 truncates the answer it gets (by dropping the
last character), which is why you need to include something
like '\n'.

I'm not sure whether Firefox supports DHCP for its autodiscovery.

Hm! How interesting. Do you have any tech references for that IE6
WPAD behaviour?

Adrian


I can't find the singular authoritative source for the problem.
However, this is probably the best explanation I could find.
[from http://homepages.tesco.net/J.deBoynePollard/FGA/web-browser-auto-proxy-configuration.html]

  One caveat: Microsoft's Internet Explorer version 6.01 expects the
   string in option 252 to be NUL-terminated. As such, it unconditionally
   strips off the final octet of the string before using it. Earlier versions
   of Microsoft's Internet Explorer do not do this. To satisfy all versions,
   simply explicitly include a NUL as the last octet of the string.



I can't find any notes for it in my configs, but I'm sure I recall 
finding something even more devious than a single-octet truncation being 
done. My memory is of finding it would truncate proxy.pac -> proxy.pa, 
and others like 2007-proxy.pac -> 2007-pro.


Amos


Re: [squid-users] proxy.pac config

2007-05-11 Thread Pitti, Raul



Adrian Chadd wrote:

On Thu, May 10, 2007, K K wrote:

On 5/10/07, Adrian Chadd [EMAIL PROTECTED] wrote:
There are plenty of examples of proxy.pac-based load balancing and 
failover.

It's important to keep in mind that some PAC behavior, including
failover, is different for different browsers and browser versions --
this particularly applies to IE, which, for example, caches everything
about the PAC, including failed proxies, and won't forget until the
iexplore.exe process ends and is restarted.
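
For reference, a minimal failover PAC sketch looks like the following
(hostnames and ports are placeholders); how the browser walks the fallback
list is exactly the version-dependent behaviour described above:

  function FindProxyForURL(url, host) {
      // The browser tries proxy1 first, falls back to proxy2 if it is
      // unreachable, and finally goes direct.
      return "PROXY proxy1.example.com:3128; " +
             "PROXY proxy2.example.com:3128; " +
             "DIRECT";
  }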


You can turn that cache behaviour off. I'll hunt around for the instructions
to tell IE not to cache proxy.pac lookups and add it to the documentation.


pls. look at this .reg file
http://www.globaltecsa.com/squid/IE-auto-proxy-cache.reg
hope this helps!
RP
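
For reference, a .reg fragment along those lines (assuming the file above
flips the documented EnableAutoProxyResultCache value; its exact contents
are a guess) would look something like:

  Windows Registry Editor Version 5.00

  [HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings]
  "EnableAutoProxyResultCache"=dword:00000000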



(P.S. Have you heard about the magical PAC refresh option in Microsoft's 
IEAK?)


Nope! Please tell.



Adrian




--

Raúl Pittí Palma, Eng.

Global Engineering and Technology S.A.
mobile (507)-6616-0194
office (507)-390-4338
Republic of Panama
www.globaltecsa.com


Re: [squid-users] proxy.pac config

2007-05-11 Thread Adrian Chadd
On Fri, May 11, 2007, Pitti, Raul wrote:

 pls. look at this .reg file
 http://www.globaltecsa.com/squid/IE-auto-proxy-cache.reg
 hope this helps!

What's it do? Does it turn off the proxy result cache?



Adrian



Re: [squid-users] Add Specific Header ?

2007-05-11 Thread Matus UHLAR - fantomas
On 09.05.07 19:36, Seonkyu Park wrote:
 My Apache web server adds some headers when .avi files are downloaded.
...
 <FilesMatch "\.(avi)$">
    Header set Cache-Control "no-store, no-cache, must-revalidate"
 </FilesMatch>

<Files *.avi> should be a bit more efficient.

But why do you want all videos not to be cached? Do you need to generate
outgoing traffic?

 I use squid for server accelerator.

why?

 How do I configure Squid.conf same as apache.conf ?

what do you want squid to do? To cache the files, but not to allow other
proxies to cache them?
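
If the aim is simply that Squid itself should not cache the .avi files, a
minimal squid.conf sketch (2.6 syntax; the acl name is arbitrary) would be:

  acl AVI urlpath_regex -i \.avi$
  cache deny AVI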

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
There's a long-standing bug relating to the x86 architecture that
allows you to install Windows.   -- Matthew D. Fuller


Re: [squid-users] Don't cache Youtube or flash video

2007-05-11 Thread Matus UHLAR - fantomas
On 09.05.07 19:36, Gilbert Ng wrote:
 I have a problem watching YouTube and other Flash video on our LAN.
 If I disable our squid it is smooth, but when we start it, I need much
 more time to download the video.

Of course - it's not cached by squid. If it were cached by squid, it would
load faster.

 I have tried to use no_cache but it's no use at all:
 
 acl QUERY urlpath_regex cgi-bin \?
 acl NOCACHE dstdomain .youtube.com$
 acl SWF url_regex .swf$
 no_cache deny SWF
 no_cache deny QUERY
 no_cache deny NOCACHE

So you want to disable caching of those files, so the downloads will take
more time even with squid running?

(Note that no_cache actually controls what gets cached; it was renamed to
just cache in 2.6. no_cache deny something disables caching of something.)
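
For the record, the same rules written with the renamed 2.6 directive would
be (a sketch of the syntax only, not a recommendation to disable caching):

  acl QUERY urlpath_regex cgi-bin \?
  acl SWF url_regex -i \.swf$
  cache deny SWF
  cache deny QUERY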

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
How does cat play with mouse? cat /dev/mouse


[squid-users] squid 3.0 crashes

2007-05-11 Thread robert
I used squid 3.0 for two weeks and everything was OK.
Now the cache is 30GB and it crashes:
it starts OK, works for 2-4 seconds and then
crashes without a message in the logs.
The only thing I did was start it with 16834
file descriptors.

Has anybody had this problem?



[squid-users] trusted squid caching

2007-05-11 Thread Jeff Chua


Is there an option to allow the connection to be held open between two 
squid servers for a long duration?


I want to reduce the initial setup cost of connecting remotely to an Apache 
server with wget, since wget doesn't support HTTP/1.1 persistent connections, 
and a wget session does not stay open after it is done.


I'm assuming it's faster to establish a session with the local squid, and let 
the remote squid handle the request on behalf of the local squid.
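
The squid.conf knobs involved, assuming the local squid forwards everything
to the remote squid as a parent (the hostname is a placeholder, and whether
this actually saves much setup latency is untested), would be roughly:

  # local squid: send all requests to the remote squid and keep the
  # server-side connections to it open between requests
  cache_peer remote-squid.example.com parent 3128 0 no-query
  never_direct allow all
  server_persistent_connections on
  client_persistent_connections on
  pconn_timeout 120 seconds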


Possible?

Thanks,
Jeff


Re: [squid-users] Don't cache Youtube or flash video

2007-05-11 Thread zulkarnain
--- Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:
 On 09.05.07 19:36, Gilbert Ng wrote:
  I got a problem on watching youtube or those flash video in our LAN.
  If I disable our squid, it is smooth but when we start it, I need much
  more time to download the video.
 
 of course - it's not cached by squid. IF it's cached by squid, it loads
 faster.
 

Which version of squid supports youtube caching?


   



Re: [squid-users] Median Response Time

2007-05-11 Thread Alexandre Correa

Seems it's working fine now...

One curious thing...

When squid is under high load (peak time) the median time is very
low... sometimes 30ms!!!

:)

thanks

regards !!!

On 5/9/07, Alexandre Correa [EMAIL PROTECTED] wrote:

Seems this problem happens after I lower my cache_mem ...

I'm using 64MB of cache_mem ..

I turned on server_persistent_connection and set the timeout to 120 seconds...

[EMAIL PROTECTED] [/home/alexandre]# squidclient -U xxx -W xxx mgr:5min
| grep client_http.all_median_svc_time

client_http.all_median_svc_time = 0.186992 seconds


If I turn server_persistent_connection to OFF, I get:

[EMAIL PROTECTED] [/home/alexandre]# squidclient -U xxx -W xxx mgr:5min
| grep client_http.all_median_svc_time
client_http.all_median_svc_time = 0.321543 seconds

cache_mem increased to 128mb and server_persistent_connection ON

[EMAIL PROTECTED] [/home/alexandre]# squidclient -U xxx -W xxx mgr:5min
| grep client_http.all_median_svc_time
client_http.all_median_svc_time = 0.142521 seconds

and.. this time is still dropping..

I will try cache_mem with 256MB next...

thanks all :)

regards !!!

On 5/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 Wed 2007-05-09 at 02:20 -0300, Alexandre Correa wrote:

  How does squid calculate this median time? I'm confused about it :P

 It's the median of the response times of the requests completed in the
 last 20 minutes.

 response time of a request is measured from when Squid starts trying to
 connect to the server until the last byte of the response has been
 received.

 Regards
 Henrik




--

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net




--

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] Squid configuration problems

2007-05-11 Thread seb
Hi,

I am trying to produce a squid setup as depicted in
www.cdal.co.uk/Proxy2.png 

2 squid instances, one running on port 3128 (frontend) and another on
port 3030 (backend).

3 instances of DansGuardian running on 8080, 8081 and 8082 which act as
cache peers to the frontend squid.

 The frontend (no-caching) squid uses NTLM authentication to
authenticate users. Then, based upon their group (using wbinfo_group.pl),
it determines which cache peer they are allowed to access.

My cache peers are defined as:

cache_peer students.local parent 8080 0 proxy-only no-query
no-netdb-exchange no-digest
cache_peer staff.local parent 8081 0 proxy-only no-query
no-netdb-exchange no-digest
cache_peer special.local parent 8082 0 proxy-only no-query
no-netdb-exchange no-digest

students.local, staff.local and special.local are all entries
in /etc/hosts, all resolving to the local machine.

I have managed to get the DansGuardian instances and the backend squid to
work, as these can be tested individually.

NTLM authentication is working, as users' names are resolved in
access.log.

My problem seems to be located in the external_acl_type: when this is
commented out along with the other dependent acls, the squid process starts
up; otherwise the following error is generated:

FATAL: Bungled squid.3128.conf line 1863: acl special external
ntlm_group it
Squid Cache (Version 2.6.STABLE5): Terminated abnormally.

The problem doesn't seem to be with this particular line in the config: when
it is commented out, the next line (also an external ntlm_group acl) fails
with a similar error.

My acls are defined as:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https 
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT

acl special external ntlm_group it
acl staff external ntlm_group Staff
acl students external ntlm_group Students

acl ntlm_users proxy_auth REQUIRED

With an external acl of:

external_acl_type ntlm_group concurrency=0 children=5 ttl=0 %LOGIN /usr/lib/squid/wbinfo_group.pl

My cache_peer_access rules are defined as:

never_direct allow all

#cache_peer_access students.local allow all

cache_peer_access special.local allow special
cache_peer_access special.local deny all

cache_peer_access students.local allow students
cache_peer_access students.local deny all

cache_peer_access staff.local allow staff
cache_peer_access staff.local deny all 

The commented out line is in place to check that the connection between
squid and its peers works.

My http_access is defined as:

http_access allow ntlm_users

When I run the wbinfo_group.pl script manually from the command line the
script returns OK as expected and also gets the correct SID/GID when in
debug mode.

The system will be locked down using iptables to prevent users from
switching to the backend squid and thus skipping the authentication
procedure; however, during testing and to avoid complexity, iptables is
off.

I am using Squid 2.6STABLE5, which is the packaged version from the
Ubuntu repositories, with the following output for -version

Squid Cache: Version 2.6.STABLE5
configure options: '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid'
'--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid'
'--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter'
'--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap'
'--enable-snmp' '--enable-delay-pools' '--enable-htcp'
'--enable-cache-digests' '--enable-underscores' '--enable-referer-log'
'--enable-useragent-log' '--enable-auth=basic,digest,ntlm'
'--enable-carp' '--with-large-files' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux'

I am running Ubuntu Feisty Fawn 7.04. I have tried to work through this
problem by looking at the FAQs and googling but to no avail. 

Any help would be much appreciated.

Cheers,
--
Sebastian Harrington
Infrastructure Officer
Longhill High School

e: seb {at} longhill _dot_ brighton-hove _dot_ sch _dot_ uk 


Re: [squid-users] Load balancing algorithms for an accelerator

2007-05-11 Thread Sean Walberg

On 5/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:


 Is there any way to balance based on least connections, or something else?

Not today, but probably quite easy to add.


How would I go about getting this on a developer's radar screen?  I
don't think this is something I could do myself.

Thanks,

Sean

--
Sean Walberg [EMAIL PROTECTED]http://ertw.com/


Re: [squid-users] Delay pools throttle inbound, not outbound

2007-05-11 Thread Henrik Nordstrom
Thu 2007-05-10 at 17:06 -0700, Justin Dossey wrote:

 Shouldn't delay pools affect the connection between the proxy and the
 Internet when Squid is in web accelerator mode?

Probably, but it's not what it was designed for. Patches welcome
however.

Regards
Henrik




Re: [squid-users] Squid configuration problems

2007-05-11 Thread Henrik Nordstrom
Fri 2007-05-11 at 10:06 +0100, seb wrote:

 My problem seems to be located in the external_acl_type as when this is
 commented out along with other dependent acls the squid process starts
 up, otherwise the following error is generated:

The external_acl_type directive must go before any acl's trying to use
that helper.
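
In other words, using the configuration posted earlier, the ordering in
squid.3128.conf should look roughly like this:

  # helper definition first...
  external_acl_type ntlm_group concurrency=0 children=5 ttl=0 %LOGIN /usr/lib/squid/wbinfo_group.pl

  # ...then the acls that reference it
  acl special  external ntlm_group it
  acl staff    external ntlm_group Staff
  acl students external ntlm_group Students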

Regards
Henrik




Re: [squid-users] Bandwidth Requirements

2007-05-11 Thread Dustin Berube



[EMAIL PROTECTED] wrote:

I am looking at implementing squid for one of my clients and have a
question regarding bandwidth usage. In this scenario I will have multiple
locations with very few PCs, approximately 2-3 machines per location.

If I setup a main squid server in one of my main locations with a
standard DSL connection (3.0Mbps down and 512K up) and VPN the stores
into that main server, will I notice a large delay when waiting for
pages to load?

My second question is: if I use that scenario, will the internet traffic
all flow through the proxy, or will it just check the URL and then
use the default route, which will be the local internet connection?

Thanks in advance.

Dustin



Um, the best use of Squid is to reduce usage of slow links like your 512K
up. If the clients are on the other end of that link from squid then you
really need a great reason to force them to use it.

On the information you have given, the answers are definitely, and maybe.
But some info on what you are trying to do may change that.



My reason for forcing them to use a proxy server is to set up URL 
filtering and a URL blacklist to block common spyware sites, porn sites, 
MySpace and YouTube from access on company computers. I am not really 
concerned with caching sites, but rather with blocking access to the 
previously mentioned types of sites. An example of one of the locations where 
I would have to use a VPN tunnel is a kiosk they have in a mall. There are 
only 3 computers total at that location and no room to drop in a 
dedicated squid box.


Re: [squid-users] Squid configuration problems

2007-05-11 Thread seb
On Fri, 2007-05-11 at 14:33 +0200, Henrik Nordstrom wrote:
 The external_acl_type directive must go before any acl's trying to use
 that helper.

Thanks, that stopped it from 'bungling'.

Now when I try to access a website I get the following in the cache.log:

2007/05/11 14:15:48| Failed to select source for
'http://www.cdal.co.uk/'
2007/05/11 14:15:48|   always_direct = 0
2007/05/11 14:15:48|never_direct = 1
2007/05/11 14:15:48|timedout = 0
2007/05/11 14:15:48| Failed to select source for
'http://toolbarqueries.google.co.uk/search?client=navclient-autogoogleip=F;64.233.183.103;0iqrn=rvtDorig=0nCIhie=UTF-8oe=UTF-8querytime=OWfeatures=Rank:q=info:http%3a%2f%2fwww%2ecdal%2eco%2eukch=721257453023'
2007/05/11 14:15:48|   always_direct = 0
2007/05/11 14:15:48|never_direct = 1
2007/05/11 14:15:48|timedout = 0

Which sounds as if the external acl is not returning correctly or being
called correctly?

As per my earlier message the wbinfo_group.pl works fine independently
of squid and is being called using the following directive:

 external_acl_type ntlm_group concurrency=0 children=5 ttl=0 %LOGIN /usr/lib/squid/wbinfo_group.pl

calling wbinfo_group.pl returns:
[EMAIL PROTECTED]:~# /usr/lib/squid/wbinfo_group.pl
seb it
OK
seb Staff
OK
seb admin
ERR

I've done some quick googling and all errors of this nature seem to be
about dead parents, yet the cache.log makes no mention of this. I
believe it is to do with the above.

Any help or strategies appreciated.

Cheers,
--
Sebastian Harrington
Infrastructure Officer
Longhill High School

e: seb {at} longhill _dot_ brighton-hove _dot_ sch _dot_ uk  


[squid-users] Two interfaces

2007-05-11 Thread Omar M
Hello everyone:

I've been looking for an answer to this question... Is it possible to have one
squid serving two different interfaces? To explain: I have three interfaces
in my server, eth0, eth1 and eth2. The configuration is something like:

eth0 (Internet)

eth1 (network 1) 192.168.4.X 

eth2 (network 2) 192.168.104.X 

Could I serve both networks using one squid or do I need two squids?

Thank you guys.

Regards.

Omar M



Re: [squid-users] Don't cache Youtube or flash video

2007-05-11 Thread Matus UHLAR - fantomas
On 11.05.07 03:16, zulkarnain wrote:
 --- Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:
  On 09.05.07 19:36, Gilbert Ng wrote:
   I got a problem on watching youtube or those flash video in our LAN.
   If I disable our squid, it is smooth but when we start it, I need much
   more time to download the video.

  of course - it's not cached by squid. IF it's cached by squid, it loads
  faster.

 Which version of squid supports youtube caching?

I guess all. It just needs to get the request.

You said that with the intercepting squid turned on all videos load faster;
that's probably because they are cached.

With the setup you proposed, the videos would be forced not to be cached, so
I'm asking why you want to do that...
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
- Have you got anything without Spam in it?
- Well, there's Spam egg sausage and Spam, that's not got much Spam in it.


Re: [squid-users] Two interfaces

2007-05-11 Thread Omar M
Thank you very much!!!

Thanks :D

I'll try it right away.

Regards.

Omar M

On Fri, 2007-05-11 at 11:03 -0300, Alexandre Correa wrote:
 You can set up squid to listen on two or more IPs. Simple:
 
 http_port 192.168.5.1:3128
 http_port 192.168.104.1:3128
 
 squid will listen on these 2 ports :)
 
 Regards !
 
 On 5/11/07, Omar M [EMAIL PROTECTED] wrote:
  Hello everyone:
 
  I've been looking for this question...Is possible to have one squid
  resolving two different interfaces? Explaining...I have three interfaces
  in my server, eth0, eth1 and eth2. The configuration is something like:
 
  eth0 (Internet)
 
  eth1 (network 1) 192.168.4.X
 
  eth2 (network 2) 192.168.104.X
 
  Could I resolve both networks using one squid or do I need two squids?
 
  Thank you guys.
 
  Regrets.
 
  Omar M
 
 
 
 



Re: [squid-users] Load balancing algorithms for an accelerator

2007-05-11 Thread Adrian Chadd
On Fri, May 11, 2007, Sean Walberg wrote:
 On 5/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 
  Is there any way to balance based on least connections, or something 
 else?
 
 Not today, but probably quite easy to add.
 
 How would I go about getting this on a developer's radar screen?  I
 don't think this is something I could do myself.

You can submit a Wishlist request. I can add it to the Wiki. You can attach
a bounty, or you can say you'll donate to the Squid project on completion.




Adrian



Re: [squid-users] difference with java applets behaviors

2007-05-11 Thread Lionel Déruaz
  The same websites work fine on my old proxy (squid 2.5 STABLE9), and that
  proxy also uses NTLM authentication.
  Do you know what could have changed in squid that explains this different
  behaviour?

 Same Samba version? The bulk of the NTLM authentication is done by
 Samba, Squid just acts as a relay between the client and Samba..

 The only difference I can think of in Squid is that there is slight
 change in persistent connection management. You can try

   auth_param ntlm keep_alive off

 to see if that makes any difference.


Bad news. No changes.

However, it may be linked to this message I see in cache.log since we
upgraded the proxy:

[2007/05/11 08:04:02, 1] libsmb/ntlmssp.c:ntlmssp_update(259)
  got NTLMSSP command 3, expected 1


Re: [squid-users] Two interfaces

2007-05-11 Thread Alexandre Correa

You can set up squid to listen on two or more IPs. Simple:

http_port 192.168.5.1:3128
http_port 192.168.104.1:3128

squid will listen on these 2 ports :)
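
On the access side, a matching http_access sketch for the two subnets
(the /24 netmasks are an assumption about your networks) would be:

  acl net1 src 192.168.4.0/255.255.255.0
  acl net2 src 192.168.104.0/255.255.255.0
  http_access allow net1
  http_access allow net2
  http_access deny all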

Regards !

On 5/11/07, Omar M [EMAIL PROTECTED] wrote:

Hello everyone:

I've been looking for this question...Is possible to have one squid
resolving two different interfaces? Explaining...I have three interfaces
in my server, eth0, eth1 and eth2. The configuration is something like:

eth0 (Internet)

eth1 (network 1) 192.168.4.X

eth2 (network 2) 192.168.104.X

Could I resolve both networks using one squid or do I need two squids?

Thank you guys.

Regrets.

Omar M





--

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid log with date ext

2007-05-11 Thread Nicolás Ruiz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Adrian Chadd wrote:
 On Thu, May 10, 2007, Kinkie wrote:
 On 5/10/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Hi,

  Is it possible to generate the squid log file with a date extension
  (like /var/log/squid/access.log-`date +%Y%m%d`) in real time (I mean, not
  generated by logrotate)?
 Currently it's not possible.
 You can rename old files after rotating them with squid -k rotate;
 it's a relatively simple exercise in shell scripting.
 
 And if someone writes it up I'd be happy to include it in the base 
 distribution.
 The trick: use head -1 and tail -1 on the rotated logfile to figure out its
 time span, then rename the logfile to that..

I use the following combination of crontab and script to rename and
produce a calamaris report

# Rotate squid logs
0 4 * * * /usr/sbin/squid -k rotate
# And generate report of freshly rotated log
5 4 * * * /usr/local/bin/Daily_Calamaris

Daily_Calamaris:

#!/bin/bash
WHEN=`/bin/date -d yesterday +%Y%m%d`
LOGDIR=/var/spool/squid
OUTDIR=/var/spool/squid
LOGFILE=${LOGDIR}/squid_access.log.0
NEWFILE=${LOGDIR}/squid_access.log.${WHEN}
OUTFILE_TXT=${OUTDIR}/squid_report_${WHEN}.txt
OUTFILE_HTML=${OUTDIR}/squid_report_${WHEN}.html
/bin/mv ${LOGFILE} ${NEWFILE}
/usr/bin/nice -n 19 \
  /bin/cat ${NEWFILE} | \
  /usr/bin/calamaris -a -n --domain-report 50 \
  --requester-report 100 > ${OUTFILE_TXT}
/usr/bin/nice -n 19 \
  /bin/cat ${NEWFILE} | \
  /usr/bin/calamaris -F html -a -n --domain-report 50 \
  --requester-report 100 > ${OUTFILE_HTML}


- -
What Adrian is asking for could be done with something like (is there a
better way to get the time string than using perl?)

#!/bin/bash
LOGDIR=/var/spool/squid
OUTDIR=/var/spool/squid
LOGFILE=${LOGDIR}/squid_access.log.0
FROM=`head -1 ${LOGFILE} | cut -d. -f1 | perl -e \
  '($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) =
  localtime(<STDIN> - 4 * 60 * 60 );
  printf("%04d:%02d:%02d-%02d:%02d:%02d\n",
 $year+1900,
 $mon+1,
 $mday,
 $hour,
 $min,
 $sec)'`
TO=`tail -1 ${LOGFILE} | cut -d. -f1 | perl -e \
  '($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) =
  localtime(<STDIN> - 4 * 60 * 60 );
  printf("%04d:%02d:%02d-%02d:%02d:%02d\n",
 $year+1900,
 $mon+1,
 $mday,
 $hour,
 $min,
 $sec)'`
NEWFILE=${LOGDIR}/squid_access.log.${FROM}--${TO}
OUTFILE_TXT=${OUTDIR}/squid_report_${FROM}--${TO}.txt
OUTFILE_HTML=${OUTDIR}/squid_report_${FROM}--${TO}.html
/bin/mv ${LOGFILE} ${NEWFILE}
/usr/bin/nice -n 19 \
  /bin/cat ${NEWFILE} | \
  /usr/bin/calamaris -a -n --domain-report 50 \
  --requester-report 100 > ${OUTFILE_TXT}
/usr/bin/nice -n 19 \
  /bin/cat ${NEWFILE} | \
  /usr/bin/calamaris -F html -a -n --domain-report 50 \
  --requester-report 100 > ${OUTFILE_HTML}

- 
Note that the site is in timezone GMT-4 while the server clock is set to UTC;
that's why (a) the crontab runs at 4 am (midnight local time) and (b) I
have to compute the localtime for <STDIN> - 4 * 60 * 60.
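
As for the parenthetical question above about avoiding perl: if GNU date is
available, the same timestamps could be produced with date -d @epoch (a
sketch, applying the same 4-hour shift):

  FROM=`date -u -d @$(( $(head -1 ${LOGFILE} | cut -d. -f1) - 4 * 60 * 60 )) +%Y:%m:%d-%H:%M:%S`
  TO=`date -u -d @$(( $(tail -1 ${LOGFILE} | cut -d. -f1) - 4 * 60 * 60 )) +%Y:%m:%d-%H:%M:%S`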

Hope it helps


 
 
 
 Adrian
 
 

- --
A: Because it destroys the flow of conversation.
Q: Why is top posting dumb?
- --
Juan Nicolás Ruiz| Corporación Parque Tecnológico de Mérida
 | Centro de Cálculo Cientifico ULA
[EMAIL PROTECTED]   | Avenida 4, Edif. Gral Masini, Ofic. B-32
+58-(0)274-252-4192  | Mérida - Edo. Mérida. Venezuela
PGP Key fingerprint = CDA7 9892 50F7 22F8 E379  08DA 9A3B 194B D641 C6FF
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGRJ8hmjsZS9ZBxv8RAl9lAJ41K0kjG2VfiM+Wls72N99ke7QIpQCbBmk0
89Wv/ADxXLJ/tt8OGGYSJP4=
=HjSv
-END PGP SIGNATURE-



Re: [squid-users] proxy.pac config

2007-05-11 Thread K K

On 5/11/07, Adrian Chadd [EMAIL PROTECTED] wrote:

You can turn that cache behaviour off. I'll hunt around for the instructions
to tell IE not to cache proxy.pac lookups and add it to the documentation.


That'd be handy.


 (P.S. Have you heard about the magical PAC refresh option in Microsoft's
 IEAK?)

Nope! Please tell.


Inside Internet Explorer Administration Kit, you can build a custom
installer for IE6 or IE7 and tune just about everything remotely
related to IE.  Great for a corporate deployment, or for the OP's
question about forcing PAC settings to all desktops.

One of the options you can control is Connections Customization.
When you check this in the first menu, after going through a dozen or
so dialogs, deep in Stage 4 you will reach Connection Settings.
This gives you the option to Import the current connection settings
from this machine, and a button for Modify Settings.  If you use
this button, it will open the connections menu, just like under IE,
but there are extra options visible which never normally appear,
including an Advanced button next to the PAC url.

This reveals new options for PAC, including refresh time; changes here
are effective immediately on your local machine.  Once you exit IEAK,
the Advanced button vanishes from the control panel, but the
settings remain in effect -- if you set a proxy URL and refresh time
in the Brigadoon Advanced tab then choosing a new URL in the normal
connection setting window is ineffective.

There's probably a registry hack you could find to accomplish the same
results, and then just push down a .REG file to all the clients.

Kevin


Re: [squid-users] Load balancing algorithms for an accelerator

2007-05-11 Thread leongmzlist
You can set up an IPVS load balancer in front of your squid pool.  I
use it to load balance my 10 squid servers.  See
http://www.linuxvirtualserver.org/
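
A minimal ipvsadm sketch of that setup, using the least-connection scheduler
that was asked about earlier in the thread (the VIP and backend addresses are
placeholders):

  # virtual service on the VIP, least-connection scheduling
  ipvsadm -A -t 192.0.2.10:80 -s lc
  # two squid backends, forwarded via NAT (masquerading)
  ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m
  ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -m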



mike

At 07:10 AM 5/11/2007, Adrian Chadd wrote:

On Fri, May 11, 2007, Sean Walberg wrote:
 On 5/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:

  Is there any way to balance based on least connections, or something
 else?
 
 Not today, but probably quite easy to add.

 How would I go about getting this on a developer's radar screen?  I
 don't think this is something I could do myself.

You can submit a Wishlist request. I can add it to the Wiki. You can attach
a bounty, or you can say you'll donate to the Squid project on completion.




Adrian




[squid-users] PEM error on SSL

2007-05-11 Thread Jason Hitt
So far everything works great on http, working on https now. Even SSH and SNMP 
are working well.

I've exported my cert from my IIS server and it's in .pfx file format. I renamed 
the file to .pem but wasn't sure whether that would work. When I launch squid 
with -N I get the following:

Failed to acquire SSL certificate 'usr/local/squid/var/cert.pem'  error: 
0906D06C: PEM routines: PEM_read_bio : no start line. 

I appeal to the Gods of Squid to give me guidance. This is all I need to be 
done.

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 07, 2007 3:40 PM
To: Jason Hitt
Cc: Squid Users
Subject: RE: [squid-users] FW: failure notice

Mon 2007-05-07 at 15:20 -0500, Jason Hitt wrote:
 The viconnect FAQ still references the old http_accel lines.
 http://viconnect.visolve.com/vic7/modules/knowledgebase/faqsearch.php?
 productid=22contentid=78nodeid=squidn08visid . The squid-cache FAQ 
 doesn't, but it doesn't make any sense to me 
 http://wiki.squid-cache.org/SquidFaq/ReverseProxy
 
 All I want to do is have a very basic vanilla https server reverse 
 proxied with Squid. I'll get the .pem cert but I can't even get squid 
 to start up as it is. Any help would be GREATLY appreciated.

You'll need to give a cert (and key) to https_port. And if the origin server is 
also https then use the ssl option on cache_peer.

Configuration is the same as for http, but with the changes above to use https 
instead of http.. so it's just

https_port 443 cert=/path/to/cert.pem key=/path/to/cert_key.pem accel 
defaultsite=the.official.name 

cache_peer ip.of.webserver parent 443 0 no-query originserver ssl


The certificate key needs to be stored unencrypted, or you will need to start 
Squid in foreground mode (-N option) to be able to enter the key encryption 
password.

Regards
Henrik


Re: [squid-users] PEM error on SSL

2007-05-11 Thread Chris Robertson

Jason Hitt wrote:

So far everything works great on http, working on https now. Even SSH and SNMP 
are working well.

I've exported my cert from my IIS server and it's in .pfx file format. I renamed 
the file to .pem but wasn't sure whether that would work. When I launch squid with -N I 
get the following:

Failed to acquire SSL certificate 'usr/local/squid/var/cert.pem'  error: 0906D06C: PEM routines: PEM_read_bio : no start line. 

I appeal to the Gods of Squid to give me guidance. This is all I need to be done.
  


In this case, Google is your friend...

http://www.google.com/search?hl=enq=pfx+pem+SSL

openssl pkcs12 -in mycert.pfx -out mycert.pem -nodes

Chris


Re: [squid-users] Squid Authentication + ldap/samba

2007-05-11 Thread Henrik Nordstrom
Fri 2007-05-11 at 11:30 +0100, Duarte Lázaro wrote:

 But with NTLM I cannot (I think) restrict a user by an attribute; if
 the user gets authenticated he has net access.

You can. But it's two different things. Don't mix up authentication and
authorization.

The purpose of authentication is solely to verify the identity of the
user. You then use this identity in authorization to grant or deny
access.

authentication is done by auth_param settings, and triggered by acls
based on the user name.

authorization is done by http_access, by using acls matching users and
what they are allowed to do.


 Basic/Digest (squid_ldap_auth/group) are more flexible, because you can
 use a filter and restrict by attribute. The problem is that browsers are
 always prompting for a password although the password can be stored.

You can still use squid_ldap_group with NTLM if you run a Windows Active
Directory.
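
A sketch of that combination, loosely based on the common Active Directory
example (the DNs, hostname, bind password and group name are all placeholders
for your domain):

  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

  external_acl_type ad_group %LOGIN /usr/lib/squid/squid_ldap_group -R \
      -b "dc=example,dc=local" \
      -D "cn=squid,cn=Users,dc=example,dc=local" -w "secret" \
      -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,cn=Users,dc=example,dc=local))" \
      -h dc.example.local

  acl InternetUsers external ad_group InternetAccess
  http_access allow InternetUsers
  http_access deny all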

Digest is a bit troublesome in that you can not use a user directory
backend, and must have a local digest password file on the proxy.

Regards
Henrik




Re: [squid-users] Unable to download files over 2GB of size

2007-05-11 Thread Henrik Nordstrom
Wed 2007-05-09 at 14:55 -0400, Chris Nighswonger wrote:
 On 5/9/07, Chris Nighswonger [EMAIL PROTECTED] wrote:
  On 5/9/07, Adrian Chadd [EMAIL PROTECTED] wrote:
  
   Could you please do the tcpdump? I'd like to document exactly how/why
   its busted in an article in the Wiki.
 
 Here it is. I'm not sure if the list filters attachments. If it is
 missing, let me know how to get it to you.

This looks like the download started, but then the browser stopped
reading the response (hang or something) without closing the connection.
TCP buffers filled while waiting for the browser to process the data.

Regards
Henrik




Re: [squid-users] PEM error on SSL

2007-05-11 Thread Henrik Nordstrom
Fri 2007-05-11 at 13:48 -0500, Jason Hitt wrote:

 I've exported my cert from my IIS server and it's in .pfx file format. I
 renamed the file to .pem but wasn't sure whether that would work. When I launch squid
 with -N I get the following:

Won't work. You need to convert the certificate to PEM format first. I
don't remember the exact details, but there is a howto out on the net
somewhere explaining how to do this in order to export a certificate
from IIS to Apache.

Until you figure out how to properly extract the certificate you can
always try with a self-signed dummy certificate.

  openssl req -new -x509 -out self_signed_cert.pem -keyout self_signed_key.pem 
-nodes

Regards
Henrik




RE: [squid-users] Unable to download files over 2GB of size

2007-05-11 Thread Sathyan, Arjonan

Henrik,

 If anyone is in doubt and having this problem, try using squidclient to
 fetch the file

  squidclient 'http://...' > http_response

As per your suggestion I tried to pull the http_response, but there are no
HTTP headers in it... It has only Connection to  failed

# smbclient http://ftp.suse.com/pub/suse/i386/current/iso/SUSE-10.0-EvalDVD-i386-GM.iso > http_response

# more http_response

Connection to  failed

 then send me the HTTP headers found in the beginning of http_response.

 alternatively if that does not convince you, capture the data stream
 Squid -> MSIE with tcpdump -p -i any -w data.pcap host ip.of.client

I have captured the data stream using tcpdump 
tcpdump -p -i any -w data.pcap host 10.249.192.76

Kindly let me know how I can send this file to you...

Regards,
Sathyan Arjunan
Unix Support | +1 408-962-2500 Extn : 22824
Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any
correspondence alike to ensure your email are being replied in timely
manner

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 4:49 AM
To: Chris Nighswonger
Cc: Tim Bates; squid-users@squid-cache.org
Subject: Re: [squid-users] Unable to download files over 2GB of size

Tue 2007-05-08 at 18:48 -0400, Chris Nighswonger wrote:

 Maybe it is a regression?
 
 I built my STABLE12 from source and did an install over top of
STABLE9.

From what I can tell Squid-2.6.STABLE12 works just fine and the problem
is most likely in MSIE.

It's also kind of confirmed by the fact Firefox works fine. 

If anyone is in doubt and having this problem, try using squidclient to
fetch the file

  squidclient 'http://...' > http_response

then send me the HTTP headers found in the beginning of http_response.


alternatively if that does not convince you, capture the data stream
Squid -> MSIE with tcpdump -p -i any -w data.pcap host ip.of.client
and send that to me or feel free to analyze with ethereal/wireshark to
see if the problem is caused by Squid or IE.

Regards
Henrik


[squid-users] wpad + dhcp issues: please reproduce

2007-05-11 Thread Adrian Chadd
Hi,

I'm talking to one of the Internet Explorer team members about IE6 and the WPAD
DHCP string issue that people have noticed.

He says the coders couldn't spot anything obvious, and the method for
grabbing DHCP parameters changed with IE7. So, for those of you who run IE6 and
know about the WPAD+DHCP URL issue: could you please reproduce it and let me
know all the details?




Adrian



RE: [squid-users] proxy.pac config

2007-05-11 Thread SSCR Internet Admin
That's really informative and I'll try this one out.  At least 75% of my
network uses IE, so I have to manually edit the other 25%, which use Firefox and
Safari (the Mac users are Spanish, so I'd better review my Spanish 101, hehe).

Last night in bed, thinking this over, I came up with an idea.  When a user
tries to browse directly (port 80), iptables should redirect that traffic to
a specific part of your site which magically configures the browser to
use the PAC.  So no user intervention or manual config would be needed; I guess
Firefox could be configured automatically as well.
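
The usual building block on the interception side is an iptables REDIRECT rule
on the gateway (a sketch; the interface, subnet and proxy port are placeholders,
and note this intercepts the traffic at the proxy rather than actually rewriting
the browser's settings):

  iptables -t nat -A PREROUTING -i eth1 -s 192.168.0.0/24 -p tcp --dport 80 \
           -j REDIRECT --to-port 3128

With Squid 2.6 that pairs with a transparent option on the corresponding
http_port line.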

Just my two cents; who knows, maybe someone has already done this (not me, I
only understand programming algorithms, I'm not into coding).

-Original Message-
From: K K [mailto:[EMAIL PROTECTED] 
Sent: Saturday, May 12, 2007 2:04 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] proxy.pac config

On 5/11/07, Adrian Chadd [EMAIL PROTECTED] wrote:
 You can turn that cache behaviour off. I'll hunt around for the
instructions
 to tell IE not to cache proxy.pac lookups and add it to the documentation.

That'd be handy.

  (P.S. Have you heard about the magical PAC refresh option in Microsoft's
  IEAK?)

 Nope! Please tell.

Inside Internet Explorer Administration Kit, you can build a custom
installer for IE6 or IE7 and tune just about everything remotely
related to IE.  Great for a corporate deployment, or for the OP's
question about forcing PAC settings to all desktops.

One of the options you can control is Connections Customization.
When you check this in the first menu, after going through a dozen or
so dialogs, deep in Stage 4 you will reach Connection Settings.
This gives you the option to Import the current connection settings
from this machine, and a button for Modify Settings.  If you use
this button, it will open the connections menu, just like under IE,
but there are extra options visible which never normally appear,
including an Advanced button next to the PAC url.

This reveals new options for PAC, including refresh time; changes here
are effective immediately on your local machine.  Once you exit IEAK,
the Advanced button vanishes from the control panel, but the
settings remain in effect -- if you set a proxy URL and refresh time
in the Brigadoon Advanced tab then choosing a new URL in the normal
connection setting window is ineffective.

There's probably a registry hack you could find to accomplish the same
results, and then just push down a .REG file to all the clients.

Kevin




[squid-users] Active Directory Account + Local Account to access Internet

2007-05-11 Thread Sathyan, Arjonan

Hi,

I am using Squid with Active Directory authentication. Everything works
fine for me... Now I would like to know whether there is a way to keep
local system users (non-AD users) from being blocked when they access the
Internet.

Right now, only AD users have access to the internet. The goal is to get a
detailed log of all users who access the internet (domain users + local
system users).
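
One commonly used layout for that (sketched here on the assumption that
Samba's ntlm_auth and the bundled ncsa_auth helper are available; the paths
and password file are placeholders) is to offer NTLM for the AD accounts plus
basic authentication against a local password file for everyone else:

  # AD users authenticate via Samba's ntlm_auth
  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 5

  # local (non-AD) users fall back to basic auth against a local password file
  auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/local_passwd
  auth_param basic children 5
  auth_param basic realm Squid proxy
  auth_param basic credentialsttl 2 hours

  acl authenticated proxy_auth REQUIRED
  http_access allow authenticated
  http_access deny all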

Maybe I'm thinking of something idiotic...

Regards,
Sathyan Arjunan