Re: [squid-users] SQUID 3 + ICAP + DansGuardian

2008-02-13 Thread Alex Rousskov
On Wed, 2008-02-13 at 19:36 +0100, TOUZEAU DAVID wrote:

> I'm trying to implement dansGuardian as an ICAP server with Squid 3,
> using the ICAP protocol.
> After googling, I didn't find any howto that explains whether it is
> possible, or what settings need to change for this kind of
> implementation.

I have not configured dansGuardian specifically, but if dansGuardian
supports ICAP, then you need to configure Squid to support ICAP
(--enable-icap-client) and to send HTTP requests and/or responses to
the dansGuardian process.

dansGuardian may have specific requirements for ICAP service URLs, and
you may need more creative ACLs than "all", but the basic setup would
look like this (assuming dansGuardian listens on 127.0.0.1, port 1344):

icap_enable on
icap_service service_req reqmod_precache 0 icap://127.0.0.1:1344/request
icap_service service_resp respmod_precache 0 icap://127.0.0.1:1344/response
icap_class class_req service_req
icap_class class_resp service_resp
icap_access class_req allow all
icap_access class_resp allow all

If you need to send just requests or just responses, then you only need
one of each of the icap_service, icap_class, and icap_access lines.
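
For example, a request-only setup would reduce to a sketch like this (same
assumed listener address and "all" ACL as above):

icap_enable on
icap_service service_req reqmod_precache 0 icap://127.0.0.1:1344/request
icap_class class_req service_req
icap_access class_req allow all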

These options are a bad inheritance from Squid2. They are far from
intuitive and are likely to change in the future.

You may want to search squid.conf.default or online docs for other
Squid3 squid.conf options that start with icap_.

HTH,

Alex.




Re: [squid-users] SQUID cache proxy with SSL (Version 3.0 STABLE-1)

2008-02-13 Thread Chris Robertson

Tomer Brand wrote:

Amos, I removed the max-age from the HTTP headers and SQUID kept the file
in the cache directory.
The problem is that only the first request needs to authenticate.

Is there any way I can configure SQUID / back-end server to force the
authentication on each request but will serve the data from the cache
dir?

Thank you.
  


Use "Cache-Control: Must Revalidate" instead of "Cache-Control: Public".

Chris


Re: [squid-users] squid ver 3 ssl cache proxy

2008-02-13 Thread Chris Robertson

Tomer Brand wrote:

I have noticed that the SSL cache fails only when the back end
server requires authentication to serve the data.
Can anyone please tell me if what I am trying to do is supported by squid?

Thank you.
  


Well, usually a request that requires authentication has an HTTP 
response header of "Cache-Control: private". See if you can get your 
back end server to change that to "Cache-Control: must-revalidate", 
which will still require authentication to download the object, but will 
allow serving it from the cache upon successful authentication.
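
For instance, the back-end response could carry headers along these lines
(the status line and other headers here are assumed for illustration):

HTTP/1.1 200 OK
Cache-Control: must-revalidate
Content-Type: application/octet-stream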


Chris


Re: [squid-users] Error > What is this ?

2008-02-13 Thread Amos Jeffries
> Hi
>
> anyone know this problems :
>
> 1202922662.095  0 192.168.50.200 TCP_DENIED/400 1374 NONE
> error:unsupported-request-method - NONE/- text/html

Unsupported request method? Yes, we know about it.

Some application is making a request via HTTP that squid has not been
programmed or configured to accept. The HTTP methods are usually GET, POST,
PUT, CONNECT, and a pile of others from the RFC that squid can handle. There
is also an extension set (possibly more than one) that new programs may use,
defined elsewhere.

All current versions of squid (2.6+ and 3.0) have an extension_methods
configuration option that lets you declare up to 20 of these extension
methods as allowed through your squid but non-cachable.
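
For example, a minimal squid.conf sketch (the directive name as documented
for Squid 2.6; the method names are illustrative WebDAV-style extensions,
not a recommendation):

# allow these extra request methods through squid (illustrative list)
extension_methods REPORT MERGE MKACTIVITY CHECKOUT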

Squid 3.1 has a major design change to accept all the extension methods as
non-special non-cachable web requests.

Amos




Re: [squid-users] About my squid.conf

2008-02-13 Thread Amos Jeffries
> Here on my simple server, the squid works fine, but after I posted a
> message about radio, Amos said:
>
> " Squid is actually an
> interceptor, not fully transparent. When they go down clients can expect
> 'Unable to Connect' errors. "
>
> And, this is true. When my squid goes down, my clients can't surf
> because squid is not working.
>
> I don't have another server, and I don't need one either.
>
> I only need to control my clients' navigation on the internet.
>
> So, if possible, I would like someone to look at my squid.conf and tell
> me if it is good or needs improvement.
>
> Thanks for all.
>
>  My squid.conf:
>
>   http_port 10.0.0.250:3128 transparent
>
>   icp_port 0
>
>   cache_mem 128 MB
>   cache_swap_low 90
>   cache_swap_high 95
>   cache_dir ufs /usr/local/squid/var/cache 1024 16 256
>   cache_access_log /usr/local/squid/var/logs/access.log
>   cache_log /usr/local/squid/var/logs/cache.log
>   cache_store_log none
>   maximum_object_size_in_memory 1 MB
>   maximum_object_size 100 MB
>   minimum_object_size 0 MB
>
>   pid_filename /usr/local/squid/var/logs/squid.pid
>
>   visible_hostname squid.provider.com.br
>
>   cache_effective_user squidaemon
>   cache_effective_group squid
>
>   acl autologinDSA dst 10.0.0.250/32
>
>   acl diretor src 10.0.0.55/32
>   acl recepcao src 10.0.0.57/32
>   acl financeiro src 10.0.0.56/32
>   acl suporte src 10.0.0.248/32
>   acl suporte2 src 10.0.0.13/32
>
>   acl vip1 src 10.0.1.0/28
>   acl vip2 src 10.0.2.0/28
>   acl vip3 src 10.0.3.0/28
>   acl vip4 src 10.0.4.0/28
>
>   acl forbidden_words url_regex -i "/usr/local/squid/etc/forbidden_words"
>   acl forbidden_down url_regex -i "/usr/local/squid/etc/forbidden_down"
>
>  external_acl_type checkip children=40 % SRC
> /usr/local/mwsystem/squid/sbin/checkv2.sh

There should be no gap: it must be "%SRC", not "% SRC".

>
>  acl checkblock external checkip
>
>   acl all src 0.0.0.0/0.0.0.0
>   acl localnet src 10.0.0.0/16
>   acl localhost src 127.0.0.0/32
>   acl method_control proto cache_object
>
>   http_access allow method_control localhost
>   http_access deny method_control
>
>   http_access allow autologinDSa
>
>   http_access deny checkblock !autologinDSA
>
>   http_access allow diretor
>   http_access allow diretor forbidden_down

If s/he is allowed all access, no need to bother with regex.

>
>   http_access allow recepcao autologinDSA

If s/he is allowed all access, no need to bother with some destinations.

>   http_access allow recepcao
>
>   http_access deny financeiro
>
>   http_access allow suporte
>   http_access allow suporte2
>
>   http_access deny forbidden_words
>   http_access deny forbidden_down
>
>   http_access allow vip1
>   http_access allow vip2
>   http_access allow vip3
>   http_access allow vip4
>
>   http_access deny localnet !autologinDSA
>   http_access deny all
>   http_access deny localnet

Only the middle one of those three is needed.
For some reason there is no allow for the checkblock people.

They get authenticated, then nothing matches for them until the final
"deny all".

Amos




[squid-users] 64-bit squid source

2008-02-13 Thread J. Peng
I found that a 32-bit squid can use a maximum of about 1.8 GB of memory.
Does a 64-bit squid support much more memory than that limit?
Where can I get 64-bit squid source? Thanks!


[squid-users] Keith Almli has invited you to open a Google mail account

2008-02-13 Thread Keith Almli
I've been using Gmail and thought you might like to try it out. Here's
an invitation to create an account.

---

Keith Almli has invited you to open a free Gmail account.

To accept this invitation and register for your account, visit
http://mail.google.com/mail/a-5287f755b2-ee0949c25c-ec04292774

Once you create your account, Keith Almli will be notified with
your new email address so you can stay in touch with Gmail!

If you haven't already heard about Gmail, it's a new search-based webmail
service that offers:

- Over 2,700 megabytes (two gigabytes) of free storage
- Built-in Google search that instantly finds any message you want
- Automatic arrangement of messages and related replies into
  "conversations"
- Powerful spam protection using innovative Google technology
- No large, annoying ads--just small text ads and related pages that are
  relevant to the content of your messages

To learn more about Gmail before registering, visit:
http://mail.google.com/mail/help/benefits.html

And, to see how easy it can be to switch to a new email service, check
out our new switch guide: http://mail.google.com/mail/help/switch/

We're still working every day to improve Gmail, so we might ask for your
comments and suggestions periodically.  We hope you'll like Gmail.  We
do.  And, it's only going to get better.

Thanks,

The Gmail Team

(If clicking the URLs in this message does not work, copy and paste them
into the address bar of your browser).


Re: [squid-users] Trouble downloading large files with Squid

2008-02-13 Thread mista_eng

Hey guys, just to update this thread in case anyone else comes looking with
the same issue, I think I found the problem. It turns out that I forgot that
Dansguardian comes with ClamAV, an antivirus scanner. 

I was looking through the /etc/dansguardian/dansguardian.conf file when I saw
the options for ClamAV down near the end. It has several settings that concern
scanned file size, one of them at around 400MB. I'll disable ClamAV
entirely, as I do not need it, and will check to see if it is indeed the
culprit.
-- 
View this message in context: 
http://www.nabble.com/Trouble-downloading-large-files-with-Squid-tp15277650p15471910.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] squid ver 3 ssl cache proxy

2008-02-13 Thread Chris Robertson

Tomer Brand wrote:

Hi,

I am trying to configure squid 3 to function as an SSL cache proxy.
My current status:
1. SQUID receives HTTPS requests
2. Performs SSL termination and downloads the data via HTTP from my back
end server
3. The data is stored in the cache directory

The second time I ask for the same file, squid deletes the file that
exists in the cache and downloads it again from the back end server.

Below is my squid.conf file. Can anyone tell me what I missed?

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhostdomain src 10.10.10.10
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port # SQUID port
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow all
  

Hmmm.  Perhaps it would be better to make an acl like...

acl myHost dst 10.10.10.10

...and then instead of "http_access allow all" use "http_access allow 
myHost".  Maybe using vhost and vport doesn't open you to abuse when you 
have a cache_peer with the originserver directive, but it sure makes me 
nervous...

icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
https_port  vhost vport cert=/home/tomer/Desktop/certificate.pem key=/home/tomer/Desktop/key.pem
http_port  vhost vport
cache_peer 10.10.10.10 parent 8050 0 originserver default login=PASS 
cache_dir ufs /usr/local/squid/var/cache 100 16 256

maximum_object_size 2097000 KB # A bit below 2 GB - SQUID maximum file size
  


Laughable, considering you have a 100 MB cache partition.  :o)


hierarchy_stoplist cgi-bin ?
access_log /usr/local/squid/var/logs/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
icp_port 3130
coredump_dir /usr/local/squid/var/cache
visible_hostname ubuntu
  


I see you have further information in another email...

Chris


Re: [squid-users] The requested URL could not be retrieved: invalid url

2008-02-13 Thread Chris Robertson

Dave Coventry wrote:

On Feb 9, 2008 4:20 PM, Adrian Chadd wrote:
  

Ah, that bit's more difficult. ;) What authentication scheme are you after?



I was hoping to do it through Samba. What Authentication scheme would
you suggest...
  


The problem arises from the fact that the browser has no knowledge that 
it is passing through a proxy.  It asks the origin web server for a page 
and is confronted with a request for proxy authentication.  What should 
it do?


There are a few suggestions in the mailing list archives, including 
cookie-based authentication, and IP based authentication (I know that 
2.6 has a session helper included in the source that would be a good 
base for this), but no solutions.  Perhaps things are different when 
utilizing NTLM authentication...


Chris


Re: [squid-users] Re: Proxy parent failover

2008-02-13 Thread Amos Jeffries
> On Feb 12, 2008 7:21 PM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>> Josh wrote:
>> > Sorry for the re-post, keyboard went crazy :/
>> >
>> > A little schema of what i want to do:
>> >
>> > Squid proxy --- Proxy Parent 1 ---Link1--- Internet
>> >  |
>> >  |---FO--- Proxy Parent 2 ---Link2--- Internet
>> >
>> > if Link1 is available,
>> > Force squid proxy to go through parent 1 only
>> > if Link1 is not available,
>> > Force squid proxy to go through parent 2 only
>> >
>> > I can configure squid with multiple parents but it'll use them both at
>> > the same time.
>> > I couldn't figure out if there's a way to configure squid with
>> > multiple parents in "failover" mode...
>> >
>> > Hope you can give me some hints...
>>
>> Squid has a mode FIRST_UP_PARENT which is exactly what you describe.
>> I believe it's the default unless you configure another selection method.
>> So what exactly do you have in your squid.conf for the cache_peer lines?
>> and what release of squid is this in?
>>
>> Amos
>> --
>> Please use Squid 2.6STABLE17+ or 3.0STABLE1+
>> There are serious security advisories out on all earlier releases.
>>
>
> Hi,
>
> Thanks for the replies.
> Please find below my configuration file for Squid Version 2.6.STABLE16.
> So I would need to add a cache_peer line to my conf:
> 
> cache_peer 10.X.X.X parent 8080 0 default no-query no-digest no-netdb-exchange
> cache_peer 10.Y.Y.Y parent 8080 0 no-query no-digest no-netdb-exchange
> 
>
> All the requests will go to 10.X.X.X unless it can't reach, am i
> correct to say that ?

I believe so:
  10.X.X.X
  10.Y.Y.Y
  DIRECT
  10.X.X.X default/last-resort (skipped? already tried)
  --> report failure.

Amos

>
> Thanks again,
> Josh
>
> squid.conf:
> --
> http_port 8080
> icp_port 0
> cache_peer 10.X.X.X parent 8080 0 default no-query no-digest no-netdb-exchange
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> cache_mem 1536 MB
> cache_swap_low 90
> cache_swap_high 95
> maximum_object_size 4096 KB
> maximum_object_size_in_memory 50 KB
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap GDSF
> cache_dir aufs /usr/local/squid/cache 6 16 256
> access_log /usr/local/squid/logs/access.log squid
> hosts_file /etc/hosts
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern . 0 20% 4320
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> half_closed_clients off
> shutdown_lifetime 1 seconds
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443  # https
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 8080
> acl purge method PURGE
> acl CONNECT method CONNECT
> acl snmppublic snmp_community public
> acl corpnet dstdomain .corp.local
> http_access allow manager localhost
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access allow CONNECT SSL_ports
> http_access allow Safe_Ports
> http_access deny all
> httpd_suppress_version_string on
> visible_hostname proxy
> memory_pools off
> log_icp_queries off
> client_db off
> buffered_logs on
> never_direct deny corpnet
> never_direct allow all
> snmp_port 3401
> snmp_access allow snmppublic
> snmp_access deny all
> snmp_incoming_address 127.0.0.1
> coredump_dir /usr/local/squid/logs
> pipeline_prefetch on
>




Re: [squid-users] squid-2.7 and youtube caching

2008-02-13 Thread Adrian Chadd
On Wed, Feb 13, 2008, pokeman wrote:
> 
> 2.7 where is download Link :) 

www.squid-cache.org/Versions/v2/2.7/



adrian



Re: [squid-users] Mem Cache flush

2008-02-13 Thread Adrian Chadd
On Wed, Feb 13, 2008, Jacobi, Michael CIV NSWCCD Philadelphia, 3411 wrote:
> I am very interested in caching Windows updates - How soon will this be
> available??

Free? When I get time to sort it out. Thing is, much like caching Youtube,
the target keeps shifting and I'm not willing to keep these rules updated
on short notice for free. Customers will get access to rule updates as soon
as things change.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] mime_table in squid 3.0 stable

2008-02-13 Thread maik . linnemann
Dear List,

I'm trying to get squid-3.0.STABLE1, built from source, running on SuSE
Linux Enterprise Server 10 SP1 (2.6.16.46-0.12-smp, x86_64) and I am seeing
mysterious behavior.

Whenever I try to use my own conf and parse it via the -k parse switch I
get an error as follows:

FATAL: MIME Config Table squid//usr/local/squid/etc/mime.conf: (20) No such
file or directory

I haven't configured the mime_table setting in my conf, and mime.conf is in
its default place, which is /usr/local/squid/etc/mime.conf

If you have a look at the error output, squid assumes that the mime.conf
is under squid//<>

Does anyone have a clue where squid got that from?

Recompiling doesn't help, nor does giving the value manually in my
squid.conf... it is the same error...

If I parse the default config file the error isn't present... Could this be
because the parser processes the configs successively? I have compiled the
source with heap, which is not compatible with the default config file...

Thanks folks..





HITCON AG
Maik Linnemann
Gartenstrasse 208
48147 Münster
0251/2801-206 (Phone)
0251/2801-280 (Fax)
0170/6364-205 (Mobile)
mailto:[EMAIL PROTECTED]
http://www.hitcon.de

Members of the Board: Helmut Holtstiege, Tobias Helling
Chairman of the Supervisory Board: Hans-Hermann Schumacher

Registered office: Münster
Register court: Amtsgericht Münster, HRB 5177

member of http://www.grouplink.de



Re: [squid-users] bittorrent behind squid

2008-02-13 Thread Amos Jeffries
>
> Man,
>
> you need to set up some firewall rules to run torrents. I think torrents
> do not fully support HTTP connections; they use alternate protocols.

Only the .torrent file download is sure to be performed over HTTP. The
rest of the connection attempts are determined by the .torrent file
contents added by the seed server.

In order to get all torrent downloads through Squid you need torrent
client software with HTTP-proxy capabilities, configured to use your
squid proxy.

Amos

>
> Arun Shrimali wrote:
>>
>> Dear All,
>>
>> I am having client (fedora 8) behind the squid proxy (with
>> authentication). I am trying to download the files through bittorrent
>> (Transmission) client at fedora client PC.
>> Can anybody help me how to configure client (or squid at server) to
>> download the files.
>>
>> regards
>>
>> Arun
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/bittorrent-behind-squid-tp15450228p15462439.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
>
>




Re: [squid-users] Mem Cache flush

2008-02-13 Thread pokeman

i saw a lot of cache hit results today. i set up my cache drives at 7000 mb
and now the limit is full. what happens now if another object needs to be
cached? how does squid expire unused and old objects?
i have not set any low/high water marks. i already posted my conf; also i am
using ZPH. here is my squid HIT info:
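
For reference, a minimal sketch of the water-mark directives in question
(90/95 are the values seen in other configs on this list, not a tuned
recommendation):

# start evicting objects when the cache passes 90% full,
# and evict more aggressively above 95%
cache_swap_low 90
cache_swap_high 95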

Squid Object Cache: Version 2.6.STABLE18
Start Time: Tue, 12 Feb 2008 16:56:17 GMT
Current Time:   Wed, 13 Feb 2008 19:46:18 GMT
Connection information for squid:
Number of clients accessing cache:  0
Number of HTTP requests received:   10290978
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   6391.9
Average ICP messages per minute since start:0.0
Select loop called: 82844943 times, 1.166 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 44.6%, 60min: 45.0%
Byte Hit Ratios:5min: 19.8%, 60min: 23.5%
Request Memory Hit Ratios:  5min: 32.9%, 60min: 31.9%
Request Disk Hit Ratios:5min: 31.6%, 60min: 34.3%
Storage Swap size:  45131052 KB
Storage Mem size:   524224 KB
Mean Object Size:   21.58 KB
Requests given to unlinkd:  339
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.24524  0.25890
Cache Misses:  0.44492  0.46965
Cache Hits:0.00286  0.00379
Near Hits: 0.24524  0.25890
Not-Modified Replies:  0.00091  0.00091
DNS Lookups:   0.00464  0.00372
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:96600.326 seconds
CPU Time:   21118.258 seconds
CPU Usage:  21.86%
CPU Usage, 5 minute avg:35.46%
CPU Usage, 60 minute avg:   37.29%
Process Data Segment Size via sbrk(): 914716 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena:  914716 KB
Ordinary blocks:   900701 KB 856201 blks
Small blocks:   0 KB  0 blks
Holding blocks: 20680 KB 13 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   14014 KB
Total in use:  921381 KB 99%
Total free: 14014 KB 1%
Total size:935396 KB
Memory accounted for:
Total accounted:   763842 KB
memPoolAlloc calls: 1198206486
memPoolFree calls: 1191587209
File descriptor usage for squid:
Maximum number of file descriptors:   32768
Largest file desc currently in use:   1554
Number of file desc currently in use: 1342
Files queued for open:   0
Available number of file descriptors: 31426
Reserved number of file descriptors:   100
Store Disk files open:   0
IO loop method: epoll
Internal Data Structures:
2116617 StoreEntries
111018 StoreEntries with MemObjects
110528 Hot Object Cache Items
2091341 on-disk objects


Adrian Chadd wrote:
> 
> On Wed, Feb 13, 2008, pokeman wrote:
>> 
>> thanks, i just switched my cache drives to aufs. can you explain to me in
>> detail what other changes to make in my squid.conf for high cache results?
>> we have almost a 45 mb link, 30 mb for proxy services. can i add more hard
>> drives for caching, or just tweak my squid and linux kernel? remember we are
>> using RHEL ES 4. i know bsd gives high availability but we can't use it
> 
> You can just convert diskd to aufs, yes, as long as it's compiled in.
> It just requires a restart to be safe.
> 
> You then need to grab some logfile statistics stuff from the internet
> and see what content is being cached and what isn't being cached.
> Then you can decide what to look to cache. :)
> 
> 
> 
> ADrian
> 
>> 
>> Adrian Chadd wrote:
>> > 
>> > G'day,
>> > 
>> > A few notes.
>> > 
>> > * Diskd isn't stable, and won't be until I commit my next set of
>> patches
>> >   to 2.7 and 3.0; use aufs for now.
>> > 
>> > * Caching windows updates will be possible in Squid-2.7. It'll require
>> > some
>> >   rules and a custom rewrite helper.
>> > 
>> > * 3.0 isn't yet as fast as 2.6 or 2.7.
>> > 
>> > 
>> > Adrian
>> > 
>> > On Tue, Feb 12, 2008, pokeman wrote:
>> >> 
>> >> Well I experience with squid cache not good works on heavy load I 4 core
>> >> processor machine with 7 scsi drives 4 gb ram average work load in peak
>> >> hours 3000 users 30 mb bandwidth on that machine using RHEL ES 4. I search
>> >> many articles on high cache performance specially windows update these days
>> >> very headache to save PSF extension i heard In squid release 3.0 for better
>

[squid-users] Solved [squid-users] Using multiple "redirect_program" commands

2008-02-13 Thread Jörg Hoffmann
Yeah, I did this myself in PHP. It does a lookup in the MySQL database
for disallowed URLs and, if nothing is found, it queries squidguard.
After that it returns the result.

So I did both things I wanted in one script.
Thanks anyway :)

Jörg H.


-Original Message-
From: Alex Rousskov [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 13 February 2008 17:49
To: Jörg Hoffmann
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Using multiple "redirect_program" commands

On Tue, 2008-02-12 at 06:49 +0100, Jörg Hoffmann wrote:

> is there a way to use multiple "redirect_program" commands to use
> squidguard and another blacklist-tool at the same time?

You should be able to chain redirectors by writing a wrapper redirector
program that Squid knows about and uses. The wrapper will pass URLs to
other redirectors and post-process the results as needed before
returning them to Squid.

Alex.




[squid-users] Error > What is this ?

2008-02-13 Thread Phibee Network Operation Center

Hi

anyone know this problems :

1202922662.095  0 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.119  1 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.151  0 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.191  0 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.234  0 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.273  0 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.322  1 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.400  1 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html
1202922662.443  0 192.168.50.200 TCP_DENIED/400 1374 NONE 
error:unsupported-request-method - NONE/- text/html


Thanks bye






RE: [squid-users] Mem Cache flush

2008-02-13 Thread Jacobi, Michael CIV NSWCCD Philadelphia, 3411
I am very interested in caching Windows updates - How soon will this be
available??

Mike Jacobi

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 13, 2008 1:53
To: pokeman
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Mem Cache flush

G'day,

A few notes.

* Diskd isn't stable, and won't be until I commit my next set of patches
  to 2.7 and 3.0; use aufs for now.

* Caching windows updates will be possible in Squid-2.7. It'll require some
  rules and a custom rewrite helper.

* 3.0 isn't yet as fast as 2.6 or 2.7.


Adrian

On Tue, Feb 12, 2008, pokeman wrote:
> 
> Well I experience with squid cache not good works on heavy load I 4 core
> processor machine with 7 scsi drives 4 gb ram average work load in peak
> hours 3000 users 30 mb bandwidth on that machine using RHEL ES 4. I search
> many articles on high cache performance specially windows update these days
> very headache to save PSF extension i heard In squid release 3.0 for better
> performance but why squid developers couldn't find solution for cache
> windows update in 2.6 please suggest me if I am doing something wrong in my
> squid.conf
> 
> 
> http_port 3128 transparent
> range_offset_limit 0 KB
> cache_mem 512 MB
> pipeline_prefetch on
> shutdown_lifetime 2 seconds
> coredump_dir /var/log/squid
> ignore_unknown_nameservers on
> acl all src 0.0.0.0/0.0.0.0
> acl ourusers src 192.168.100.0/24
> hierarchy_stoplist cgi-bin ?
> maximum_object_size 16 MB
> minimum_object_size 0 KB
> maximum_object_size_in_memory 64 KB
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap GDSF
> cache_dir diskd /cache1 7000 16 256
> cache_dir diskd /cache2 7000 16 256
> cache_dir diskd /cache3 7000 16 256
> cache_dir diskd /cache4 7000 16 256
> cache_dir diskd /cache5 7000 16 256
> cache_dir diskd /cache6 7000 16 256
> cache_dir diskd /cache7 7000 16 256
> cache_access_log none
> cache_log /var/log/squid/cache.log
> cache_store_log none
> dns_nameservers 127.0.0.1
> refresh_pattern windowsupdate.com/.*\.(cab|exe|dll) 43200 100% 43200
> refresh_pattern download.microsoft.com/.*\.(cab|exe|dll) 43200 100% 43200
> refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|psf) 43200 100% 43200
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern cgi-bin 0 0% 0
> refresh_pattern \? 0 0% 4320
> refresh_pattern . 0 20% 4320
> negative_ttl 1 minutes
> positive_dns_ttl 24 hours
> negative_dns_ttl 1 minutes
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443 563
> acl Safe_ports port 1195 1107 1174 1212 1000
> acl Safe_ports port 80  # http
> acl Safe_ports port 82  # http
> acl Safe_ports port 81  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 563 # https, snews
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow ourusers
> http_access deny all
> http_reply_access allow all
> cache allow all
> icp_access allow ourusers
> icp_access deny all
> cache_mgr [EMAIL PROTECTED]
> visible_hostname CE-Fariya
> dns_testnames localhost
> reload_into_ims on
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> log_fqdn off
> half_closed_clients off
> client_db off
> ipcache_size 16384
> ipcache_low 90
> ipcache_high 95
> fqdncache_size 8129
> log_icp_queries off
> strip_query_terms off
> store_dir_select_algorithm round-robin
> client_persistent_connections off
> server_persistent_connections on
> persistent_request_timeout 1 minute
> client_lifetime 60 minutes
> pconn_timeout 10 seconds
> 
> 
> 
> Adrian Chadd wrote:
> > 
> > On Thu, Jan 31, 2008, Chris Woodfield wrote:
> >> Interesting. What sort of size threshold do you see where
performance  
> >> begins to drop off? Is it just a matter of larger objects reducing

> >> hitrate (due to few objects being cacheable in memory) or a
bottleneck  
> >> in squid itself that causes issues?
> > 
> > It's a bottleneck in the Squid code which makes accessing the n'th 4k
> > chunk in memory take O(N) time.
> > 
> > It's one of the things I'd like to fix after Squid-2.7 is released.
> > 
> > 
> > 
> > Adrian
> > 
> > 
> > 
> 
> -- 
> View this message in context:
http://www.nabble.com/Mem-Cache-flush-tp14951540p15449954.html
> Sent from the Squid - Users mailing list archive at Nabble.com.

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -

[squid-users] SQUID 3 + ICAP + DansGuardian

2008-02-13 Thread TOUZEAU DAVID

Dear all

I'm trying to implement dansGuardian as an ICAP server with Squid 3,
using the ICAP protocol.
After googling, I didn't find any howto that explains whether it is
possible, or what settings need to change for this kind of
implementation.

Does anybody have experience with this?

Best regards


--
David Touzeau -- Linux Ubuntu 7.04 feisty 
FreePascal-Lazarus,perl,delphi,php artica for postfix management console 
(http://www.artica.fr) icq:160018849


Re: [squid-users] bittorrent behind squid

2008-02-13 Thread pokeman

Man,

you need to set up some firewall rules to run torrents. I think torrents do
not fully support HTTP connections; they use alternate protocols.

Arun Shrimali wrote:
> 
> Dear All,
> 
> I am having client (fedora 8) behind the squid proxy (with
> authentication). I am trying to download the files through bittorrent
> (Transmission) client at fedora client PC.
> Can anybody help me how to configure client (or squid at server) to
> download the files.
> 
> regards
> 
> Arun
> 
> 

-- 
View this message in context: 
http://www.nabble.com/bittorrent-behind-squid-tp15450228p15462439.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] About my squid.conf

2008-02-13 Thread pokeman

well, there are many options:

1. you set maximum_object_size 100 MB but your cache drive defines only
1024 MB, which is around 1 GB. in my experience with squid, it is unlikely
that one user downloads a file larger than 32 MB and another user then
downloads the same file; that just wastes your cache drive. if you really
need this, upgrade your drive to around 10 GB.

2. switch your cache drive from UFS to AUFS.
3. use refresh_patterns for your frequently used pages, grepped from your
logs (see the sketch below).
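
For example, a hedged refresh_pattern sketch for common static objects
(the pattern and times are illustrative, not tuned values):

# cache common static files for up to a day (illustrative values)
refresh_pattern -i \.(gif|jpg|png|css|js)$ 1440 50% 1440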


Anderson dos Santos Donda wrote:
> 
> Here on my simple server, the squid works fine, but after I posted a
> message about radio, Amos said:
> 
> " Squid is actually an
> interceptor, not fully transparent. When they go down clients can expect
> 'Unable to Connect' errors. "
> 
> And, this is true. When my squid goes down, my clients can't surf
> because squid is not working.
> 
> I don't have another server, and I don't need one either.
> 
> I only need to control my clients' navigation on the internet.
> 
> So, if possible, I would like someone to look at my squid.conf and tell
> me if it is good or needs improvement.
> 
> Thanks for all.
> 
>  My squid.conf:
> 
>   http_port 10.0.0.250:3128 transparent
> 
>   icp_port 0
> 
>   cache_mem 128 MB
>   cache_swap_low 90
>   cache_swap_high 95
>   cache_dir ufs /usr/local/squid/var/cache 1024 16 256
>   cache_access_log /usr/local/squid/var/logs/access.log
>   cache_log /usr/local/squid/var/logs/cache.log
>   cache_store_log none
>   maximum_object_size_in_memory 1 MB
>   maximum_object_size 100 MB
>   minimum_object_size 0 MB
> 
>   pid_filename /usr/local/squid/var/logs/squid.pid
> 
>   visible_hostname squid.provider.com.br
> 
>   cache_effective_user squidaemon
>   cache_effective_group squid
> 
>   acl autologinDSA dst 10.0.0.250/32
> 
>   acl diretor src 10.0.0.55/32
>   acl recepcao src 10.0.0.57/32
>   acl financeiro src 10.0.0.56/32
>   acl suporte src 10.0.0.248/32
>   acl suporte2 src 10.0.0.13/32
> 
>   acl vip1 src 10.0.1.0/28
>   acl vip2 src 10.0.2.0/28
>   acl vip3 src 10.0.3.0/28
>   acl vip4 src 10.0.4.0/28
> 
>   acl forbidden_words url_regex -i "/usr/local/squid/etc/forbidden_words"
>   acl forbidden_down url_regex -i "/usr/local/squid/etc/forbidden_down"
> 
>  external_acl_type checkip children=40 % SRC
> /usr/local/mwsystem/squid/sbin/checkv2.sh
> 
>  acl checkblock external checkip
> 
>   acl all src 0.0.0.0/0.0.0.0
>   acl localnet src 10.0.0.0/16
>   acl localhost src 127.0.0.0/32
>   acl method_control proto cache_object
> 
>   http_access allow method_control localhost
>   http_access deny method_control
> 
>   http_access allow autologinDSa
> 
>   http_access deny checkblock !autologinDSA
> 
>   http_access allow diretor
>   http_access allow diretor forbidden_down
> 
>   http_access allow recepcao autologinDSA
>   http_access allow recepcao
> 
>   http_access deny financeiro
> 
>   http_access allow suporte
>   http_access allow suporte2
> 
>   http_access deny forbidden_words
>   http_access deny forbidden_down
> 
>   http_access allow vip1
>   http_access allow vip2
>   http_access allow vip3
>   http_access allow vip4
> 
>   http_access deny localnet !autologinDSA
>   http_access deny all
>   http_access deny localnet
> 
> 

-- 
View this message in context: 
http://www.nabble.com/About-my-squid.conf-tp15458475p15462290.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] ANN: New archive for squid mailing lists

2008-02-13 Thread Ryan Grimm

Hello,

Last November Jason Hunter and I launched a site called MarkMail
(http://markmail.org) for archiving and searching mailing lists. We launched
the site with roughly 4,000,000 messages from Apache. Since that time we've
added lists from MySQL, PHP, Ruby and many more, including Squid. Given that
we front our site with Squid, this only seemed appropriate. You can easily
restrict your searches to just the Squid lists by visiting
http://squid.markmail.org directly.


As you'll see with the chart on the home page, one of our goals with  
the site has been to give you a high level view of what's going on  
with the list.  We do this by letting you know how many messages per  
month match your query, what lists those messages were posted to, who  
posted them, how many had attachments and of what type, etc.  You can  
also use this information to easily refine your search results. A  
quick look tells us that Henrik Nordstrom is by far the most frequent  
poster to this list with 16,531 messages to date.


Another goal has been interactivity. We did a lot with keyboard  
shortcuts. You can hit "n" and "p" to move to the next and previous  
result, "j" and "k" move up and down the thread view, 's' will take  
you to the search box and many others that you might find naturally.  
There are a lot of little things like this. Plus, if your result message
includes an attachment, it is visible directly inside the browser, even
Office and PDF files (there actually are a few of them on the list). We
also look inside attachments for your search terms.


Here are a few tips for using the site:

* Search using keywords as well as from:, subject:, extension:, and  
list: constraints.


* The GUI doesn't yet expose it, but you can negate any search term.

* You can restrict your query based on date by selecting a region on  
the graph.
   You can also use the more powerful date: query constraint, for  
more info check out: http://markmail.blogspot.com/2008/01/give-us-date-and-well-search-it.html


* Remember to use "n" and "p" keyboard shortcuts as a time saver for  
navigating search results.


* You're going to want JavaScript enabled

I hope you all find the archive useful and let me know if you have any  
questions or feedback.


--Ryan


Re: [squid-users] Using multiple "redirect_program" commands

2008-02-13 Thread Alex Rousskov
On Tue, 2008-02-12 at 06:49 +0100, Jörg Hoffmann wrote:

> is there a way to use multiple "redirect_program" commands to use squidguard
> and another blacklist-tool at the same time?

You should be able to chain redirectors by writing a wrapper redirector
program that Squid knows about and uses. The wrapper will pass URLs to
other redirectors and post-process the results as needed before
returning them to Squid.
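
As a rough illustration, here is a minimal wrapper sketch in Python, assuming
the classic line-based redirector protocol (the URL is the first token on
each stdin line, and the answer is echoed to stdout). The paths, blacklist
file, and block page are hypothetical:

#!/usr/bin/env python
# Hypothetical chaining redirector: check a local blacklist first, then
# delegate untouched requests to squidGuard. Paths, blacklist format and
# block page are illustrative assumptions.
import subprocess
import sys

BLOCK_PAGE = "http://proxy.example.com/blocked.html"  # assumed block page
BLACKLIST = set(l.strip() for l in open("/etc/squid/blacklist.txt"))

# squidGuard speaks the same line-based stdin/stdout protocol,
# so run it as a child helper and relay lines through it.
sg = subprocess.Popen(["/usr/bin/squidGuard"],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                      universal_newlines=True)

while True:
    line = sys.stdin.readline()
    if not line:
        break
    parts = line.split()
    if not parts:
        continue
    if parts[0] in BLACKLIST:
        # rewrite blacklisted URLs ourselves
        sys.stdout.write(BLOCK_PAGE + "\n")
    else:
        # otherwise hand the whole request line to squidGuard and
        # relay its answer (a rewritten URL, or a blank line) back
        sg.stdin.write(line)
        sg.stdin.flush()
        sys.stdout.write(sg.stdout.readline())
    sys.stdout.flush()

Squid would then point at the wrapper instead of squidGuard, e.g.
redirect_program /usr/local/bin/redirect-wrapper.py (path assumed).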

Alex.




Re: [squid-users] I can't use left panell of new Squid wiki site

2008-02-13 Thread Kinkie
On Feb 13, 2008 9:15 AM, S.KOBAYASHI <[EMAIL PROTECTED]> wrote:
> Hi Amos,
>
> It's not a very serious problem. It worked fine on FireFox.

Hi Seiji.

MSIE7 didn't like some javascript code I've now disabled when that
browser is detected.
It should work fine now.

   /kinkie


Re: [squid-users] Re: Re: Re: Cache for mp3 and ogg in memory...

2008-02-13 Thread Joel Jaeggli

Matus UHLAR - fantomas wrote:

If they're all listening to different remote sources at different times 
then there's no point in caching it...

But if I read the logs, there are around 150-180 files which are heard
all the time, and in total they amount to around 800 MByte. So creating a
RAM cache of 1 GByte would do wonders...


I still have no idea what URIs are requested by users when they are
listening to those songs. In that case I can't do anything but guess...


try:
http://streaming.uoregon.edu:8000/

If they are live streams (internet radio) then the fact that the filename 
is the same every time isn't going to make it cacheable... The file is 
just the mountpoint for the stream... Every user joining the stream 
will start at a different temporal location, so you cannot just serve 
what you already cached; rather you have to serve what's currently being 
streamed. If you have multiple clients listening to the same stream then 
they're going to need approximately the same data at the same time, 
varying buffer depths and the fact that tcp is being used 
notwithstanding. That's why I suggested a relay if you have 
particularly popular content that you wish to neck down to one stream.


Eons ago this is probably something that would have been solved with ip 
multicast (which you might call a degenerate form of network caching), 
but interdomain multicast deployment never really achieved critical mass.


It is plausible, I suppose, to implement internet radio support in squid 
which would recognize two clients connected to the same URL, and 
replicate the incoming payload from one to the other. That wouldn't 
require any disk i/o at all.




[squid-users] WG: Problem with speed and userauthentication

2008-02-13 Thread Stefan Vogel
Hello,

I have a really weird issue here, and I have to say: I have no more 
ideas... :-(

We have a Squid 2.5STABLE12 and use LDAP authentication against Active 
Directory.
For this authentication we are using the LDAP_GROUP and LDAP_AUTH 
authenticators.
This has been working for months without real issues.

The problem now:
We have two groups, both with around 1000 members (and a few more groups, 
but smaller).

The following tests were both done on the same machine, directly after 
each other.
If I run a speed test with a user in group A, I get download and upload 
speeds that are OK to me.
BUT
If I run the same test with a user from group B, the download rate is only 
around 50% of the group A user's, and the upload rate is even down below 
10% of the A-group speed.

I have tried changing the order in which the groups are checked, but the 
effect is still unchanged.

So that's where I am, and as I said, I'm out of ideas. Does anyone have 
any idea?

Regards

Stefan


Re: [squid-users] Bluecoat > Squid

2008-02-13 Thread David Massey
Cool, thanks. I guess I will find another way to talk with the family
back home. Thanks for even responding.

On Feb 13, 2008 10:17 AM, Jakob Curdes <[EMAIL PROTECTED]> wrote:
> The problem is that if the Bluecoat AV series is working there, these
> systems can even intercept and decode SSL traffic, so if the traffic is
> (correctly) recognized as illegitimate proxy traffic I have no idea how
> you could prevent that. Proxy traffic is easily distinguishable from
> other http traffic as the protocols are different, regardless of which
> ports you are running the proxy on. You might be able to set up an SSH
> tunnel depending on site policies. Mind you, I am not suggesting that
> this would be a legal or sensible thing to do.
>
> Yours,
> Jakob Curdes
>
>


[squid-users] youtube URL changes

2008-02-13 Thread Adrian Chadd
Youtube have changed their content URLs again; here's a sample youtube URL:


1202912186.003   9796 192.168.1.138 TCP_MISS/200 925739 GET 
http://dal-v85.dal.youtube.com/get_video?video_id=OLf_mFWdRJI&signature=2664965981E5AB1C438AA56224A2C0401A149CF9.9B9F0E33475F18862385FD9BF5375E1E3A4B8E09&ip=xxx.xx.xx.xx&ipbits=16&expire=1202933775&key=1

I'll publish an update for my support clients over the weekend.

Is anyone from Google here? I'd love to chat with the SRE team involved with
Youtube stuff.

Thanks,



Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] squid-2.7 and youtube caching

2008-02-13 Thread pokeman

2.7 where is download Link :) 



Adrian Chadd wrote:
> 
> G'day everyone,
> 
> For those of you who would like to try caching youtube content (as best it
> can be cached right now), please install the squid-2.7 snapshot, make
> sure its working, and then head over to the Wiki.
> 
> http://wiki.squid-cache.org/Features/StoreUrlRewrite/RewriteScript
> 
> Now, this (and other!) stuff may not be properly cached yet if you're
> running
> Squid in a transparent proxy mode. I'm looking into that particular issue.
> But it certainly works for normal explicitly configured caches.
> 
> Let me know how it goes!
> 
> 
> 
> 
> Adrian
> 
> -- 
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
> Support -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
> 
> 

-- 
View this message in context: 
http://www.nabble.com/squid-2.7-and-youtube-caching-tp14979279p15458802.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Re: reverse proxy headache

2008-02-13 Thread Visolve Squid

Hello,

The latest Squid version is squid-2.6.STABLE18. You can configure a reverse 
proxy easily with squid-2.6.


Reverse proxy configuration in squid-2.5 :
http_port 80 # Port of Squid proxy
httpd_accel_host 172.16.1.115 # IP address of web server
httpd_accel_port 80 # Port of web server
httpd_accel_single_host on # Forward uncached requests to single host
httpd_accel_with_proxy on
httpd_accel_uses_host_header off

For more details visit at 
http://www.visolve.com/squid/whitepapers/reverseproxy.php#What_is_Reverse_Proxy_Cache


Reverse proxy configuration in squid-2.6 :
http_port 80 vhost
cache_peer <hostname> parent <port> 0 no-query originserver

Example:
http_port 80 vhost
cache_peer proxy.nour.net.sa parent 8080 0 no-query originserver

For more Details: http://www.visolve.com/squid/squid26/contents.php

Thanks,
-Visolve Squid Team
www.visolve.com/squid/



dirtybugg wrote:

Hi, please help me, I am new to squid. I have squid 2.5; my squid.conf is
below. Please help, I am not able to browse our internet.

#Default:
# http_port 3128
http_port 8080

#Default:
# none
#cache_peer proxy.saudi.net.sa parent 8080 3130 default no-query
#cache_peer 62.149.115.12 parent 8080 3130 default no-query
cache_peer proxy.nour.net.sa parent 8080 3130 default no-query

#Default:
# cache_dir ufs /var/spool/squid 100 16 256
cache_dir ufs /cache1 8000 16 256
cache_dir ufs /cache2 8000 16 256

#Default:
# cache_access_log /var/log/squid/access.log
cache_access_log /var/log/squid/access.log

#Default:
# pid_filename /var/run/squid.pid
pid_filename /var/run/squid.pid

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

acl snmpsaudiedi snmp_community rtgg0v1

#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks
acl user_networks src 192.168.19.0/24
acl svr_networks src 192.168.17.0/24
acl dmz_networks src 62.149.115.128/25

http_access allow user_networks
http_access allow svr_networks
http_access allow dmz_networks
icp_access allow user_networks
icp_access allow svr_networks
icp_access allow dmz_networks

# And finally deny all other access to this proxy
http_access allow localhost
http_access deny all

#Default:
# http_reply_access allow all
#
#Recommended minimum configuration:
#
# Insert your own rules here.
#
#
# and finally allow by default
http_reply_access allow all

#  TAG: icp_access
#   Allowing or Denying access to the ICP port based on defined
#   access lists
#
#   icp_access  allow|deny [!]aclname ...
#
#   See http_access for details
#
#Default:
# icp_access deny all
#
#Allow ICP queries from everyone
icp_access allow all

#Default:
# none
visible_hostname proxy1

#Example:
# snmp_access allow snmppublic localhost
# snmp_access deny all
#
#Default:
# snmp_access deny all
snmp_access allow snmpsaudiedi user_networks
snmp_access deny all
  




[squid-users] Dynamic ACL, ways to management inet access with time/day

2008-02-13 Thread Serj A. Androsov
Hi there,

There are some net ranges (computer classes) which should have
internet access only at certain times and dates.

I am trying to find a way for squid to recognize these net ranges and then
grant them access without any authorization.

I think it may look like this:

My App (time/date sheduler)
|
|
Autogenerated external file one time per day
like this I think:
--
acl class1_time time MTWHFAS 01:00-02:59
acl class2_time time MTWHF 02:00-05:59

acl class1_net src 192.168.1.0/24
acl class2_net src 192.168.2.0/24

http_access allow class1_net class1_time
http_access allow class2_net class2_time
--
|
|
Squid

There are some questions:

I don't much want to use the squid reconfigure method.
How can I dynamically connect this acl to squid?

If that is not possible, what is a better way to solve this task? :)


[squid-users] About my squid.conf

2008-02-13 Thread Anderson dos Santos Donda
Here on my simple server, the squid works fine, but after I posted a
message about radio, Amos said:

" Squid is actually an
interceptor, not fully transparent. When they go down clients can expect
'Unable to Connect' errors. "

And, this is true. When my squid goes down, my clients can't surf
because squid is not working.

I don't have another server, and I don't need one either.

I only need to control my clients' navigation on the internet.

So, if possible, I would like someone to look at my squid.conf and tell
me if it is good or needs improvement.

Thanks for all.

 My squid.conf:

  http_port 10.0.0.250:3128 transparent

  icp_port 0

  cache_mem 128 MB
  cache_swap_low 90
  cache_swap_high 95
  cache_dir ufs /usr/local/squid/var/cache 1024 16 256
  cache_access_log /usr/local/squid/var/logs/access.log
  cache_log /usr/local/squid/var/logs/cache.log
  cache_store_log none
  maximum_object_size_in_memory 1 MB
  maximum_object_size 100 MB
  minimum_object_size 0 MB

  pid_filename /usr/local/squid/var/logs/squid.pid

  visible_hostname squid.provider.com.br

  cache_effective_user squidaemon
  cache_effective_group squid

  acl autologinDSA dst 10.0.0.250/32

  acl diretor src 10.0.0.55/32
  acl recepcao src 10.0.0.57/32
  acl financeiro src 10.0.0.56/32
  acl suporte src 10.0.0.248/32
  acl suporte2 src 10.0.0.13/32

  acl vip1 src 10.0.1.0/28
  acl vip2 src 10.0.2.0/28
  acl vip3 src 10.0.3.0/28
  acl vip4 src 10.0.4.0/28

  acl forbidden_words url_regex -i "/usr/local/squid/etc/forbidden_words"
  acl forbidden_down url_regex -i "/usr/local/squid/etc/forbidden_down"

 external_acl_type checkip children=40 % SRC
/usr/local/mwsystem/squid/sbin/checkv2.sh

 acl checkblock external checkip

  acl all src 0.0.0.0/0.0.0.0
  acl localnet src 10.0.0.0/16
  acl localhost src 127.0.0.0/32
  acl method_control proto cache_object

  http_access allow method_control localhost
  http_access deny method_control

  http_access allow autologinDSa

  http_access deny checkblock !autologinDSA

  http_access allow diretor
  http_access allow diretor forbidden_down

  http_access allow recepcao autologinDSA
  http_access allow recepcao

  http_access deny financeiro

  http_access allow suporte
  http_access allow suporte2

  http_access deny forbidden_words
  http_access deny forbidden_down

  http_access allow vip1
  http_access allow vip2
  http_access allow vip3
  http_access allow vip4

  http_access deny localnet !autologinDSA
  http_access deny all
  http_access deny localnet


Re: [squid-users] Mem Cache flush

2008-02-13 Thread Adrian Chadd
On Wed, Feb 13, 2008, pokeman wrote:
> 
> thanks, i just switched my cache drives to aufs. can you explain to me in
> detail what other changes to make in my squid.conf for high cache results?
> we have almost a 45 mb link, 30 mb for proxy services. can i add more hard
> drives for caching, or just tweak my squid and linux kernel? remember we are
> using RHEL ES 4. i know bsd gives high availability but we can't use it

You can just convert diskd to aufs, yes, as long as it's compiled in.
It just requires a restart to be safe.

You then need to grab some logfile statistics stuff from the internet
and see what content is being cached and what isn't being cached.
Then you can decide what to look to cache. :)
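
As a rough sketch of that kind of log analysis (assuming the native squid
access.log format, where the fourth field is the result code):

# rough sketch: tally squid result codes from the native access.log format
from collections import Counter

codes = Counter()
for line in open("/usr/local/squid/logs/access.log"):
    fields = line.split()
    if len(fields) > 3:
        # the fourth field looks like "TCP_MISS/200"; keep the result code
        codes[fields[3].split("/")[0]] += 1

for code, count in codes.most_common():
    print(code, count)

A high TCP_MISS count on URLs you expect to be cacheable is a good starting
point for deciding what to chase.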



ADrian

> 
> Adrian Chadd wrote:
> > 
> > G'day,
> > 
> > A few notes.
> > 
> > * Diskd isn't stable, and won't be until I commit my next set of patches
> >   to 2.7 and 3.0; use aufs for now.
> > 
> > * Caching windows updates will be possible in Squid-2.7. It'll require
> > some
> >   rules and a custom rewrite helper.
> > 
> > * 3.0 isn't yet as fast as 2.6 or 2.7.
> > 
> > 
> > Adrian
> > 
> > On Tue, Feb 12, 2008, pokeman wrote:
> >> 
> >> Well I experience with squid cache not good works on heavy load I 4 core
> >> processor machine with 7 scsi drives 4 gb ram average work load in peak
> >> hours 3000 users 30 mb bandwidth on that machine using RHEL ES 4. I search
> >> many articles on high cache performance specially windows update these days
> >> very headache to save PSF extension i heard In squid release 3.0 for better
> >> performance but why squid developers couldn't find solution for cache
> >> windows update in 2.6 please suggest me if I am doing something wrong in my
> >> squid.conf
> >> 
> >> 
> >> http_port 3128 transparent
> >> range_offset_limit 0 KB
> >> cache_mem 512 MB
> >> pipeline_prefetch on
> >> shutdown_lifetime 2 seconds
> >> coredump_dir /var/log/squid
> >> ignore_unknown_nameservers on
> >> acl all src 0.0.0.0/0.0.0.0
> >> acl ourusers src 192.168.100.0/24
> >> hierarchy_stoplist cgi-bin ?
> >> maximum_object_size 16 MB
> >> minimum_object_size 0 KB
> >> maximum_object_size_in_memory 64 KB
> >> cache_replacement_policy heap LFUDA
> >> memory_replacement_policy heap GDSF
> >> cache_dir diskd /cache1 7000 16 256
> >> cache_dir diskd /cache2 7000 16 256
> >> cache_dir diskd /cache3 7000 16 256
> >> cache_dir diskd /cache4 7000 16 256
> >> cache_dir diskd /cache5 7000 16 256
> >> cache_dir diskd /cache6 7000 16 256
> >> cache_dir diskd /cache7 7000 16 256
> >> cache_access_log none
> >> cache_log /var/log/squid/cache.log
> >> cache_store_log none
> >> dns_nameservers 127.0.0.1
> >> refresh_pattern windowsupdate.com/.*\.(cab|exe|dll) 43200 100% 43200
> >> refresh_pattern download.microsoft.com/.*\.(cab|exe|dll) 43200 100% 43200
> >> refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|psf) 43200 100% 43200
> >> refresh_pattern ^ftp: 1440 20% 10080
> >> refresh_pattern ^gopher: 1440 0% 1440
> >> refresh_pattern cgi-bin 0 0% 0
> >> refresh_pattern \? 0 0% 4320
> >> refresh_pattern . 0 20% 4320
> >> negative_ttl 1 minutes
> >> positive_dns_ttl 24 hours
> >> negative_dns_ttl 1 minutes
> >> acl manager proto cache_object
> >> acl localhost src 127.0.0.1/255.255.255.255
> >> acl to_localhost dst 127.0.0.0/8
> >> acl SSL_ports port 443 563
> >> acl Safe_ports port 1195 1107 1174 1212 1000
> >> acl Safe_ports port 80  # http
> >> acl Safe_ports port 82  # http
> >> acl Safe_ports port 81  # http
> >> acl Safe_ports port 21  # ftp
> >> acl Safe_ports port 443 563 # https, snews
> >> acl Safe_ports port 70  # gopher
> >> acl Safe_ports port 210 # wais
> >> acl Safe_ports port 1025-65535  # unregistered ports
> >> acl Safe_ports port 280 # http-mgmt
> >> acl Safe_ports port 488 # gss-http
> >> acl Safe_ports port 591 # filemaker
> >> acl Safe_ports port 777 # multiling http
> >> acl CONNECT method CONNECT
> >> http_access allow manager localhost
> >> http_access deny manager
> >> http_access deny !Safe_ports
> >> http_access deny CONNECT !SSL_ports
> >> http_access allow ourusers
> >> http_access deny all
> >> http_reply_access allow all
> >> cache allow all
> >> icp_access allow ourusers
> >> icp_access deny all
> >> cache_mgr [EMAIL PROTECTED]
> >> visible_hostname CE-Fariya
> >> dns_testnames localhost
> >> reload_into_ims on
> >> quick_abort_min 0 KB
> >> quick_abort_max 0 KB
> >> log_fqdn off
> >> half_closed_clients off
> >> client_db off
> >> ipcache_size 16384
> >> ipcache_low 90
> >> ipcache_high 95
> >> fqdncache_size 8129
> >> log_icp_queries off
> >> strip_query_terms off
> >> store_dir_select_algorithm round-robin
> >> client_persistent_connections off
> >> server_persistent_connections on
> >> persistent_request_timeout 1 minute
> >> client_lifetime 60 minutes
> >> pconn_timeout 10 seconds
> >> 
> >> 

RE: [squid-users] SQUID cache proxy with SSL (Version 3.0 STABLE-1)

2008-02-13 Thread Tomer Brand
Amos, I removed the max-age from the HTTP headers and Squid kept the file
in the cache directory.
The problem is that only the first request needs to authenticate.
Is there any way I can configure Squid / the back-end server to force
authentication on each request but still serve the data from the cache
dir?

Thank you.

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 12, 2008 13:43 PM
To: Tomer Brand
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] SQUID cache proxy with SSL (Version 3.0
STABLE-1)

Tomer Brand wrote:
> Hi,
> 
> I am trying to configure squid to function as an SSL cache proxy for an
> authenticated object (using login=PASS in the cache_peer directive).
> To do that I've added the "cache-control=public, must-revalidate,
> max-age=0" directive to the back-end server whose files I would like to
> cache.
> This works great for me when configuring a non-SSL (port ) Squid-based
> proxy. However, when I access the proxy using SSL (port  below) the
> cached file is deleted every time and the cache is not used.

Which is what max-age=0 means: "never use again". I think if you leave
that off it will do an IMS on each request.
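
For reference, the back-end response header would then be the one from
the original post minus the max-age=0 part, i.e. something like:

Cache-Control: public, must-revalidate

With that, Squid may keep a copy of the object but has to revalidate it
(with an If-Modified-Since / If-None-Match request) on each access
before serving it from the cache.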

> 
> Squid receives HTTPS requests, performs the SSL termination as
> expected, gets the data from the back-end server and saves the file to
> the cache directory.
> 
> Then I send Squid an HTTP request asking for the same file. Squid
> serves the data from the cache.
> The next step is to request the same file with HTTPS. This time Squid
> clears that file from the cache and downloads it from the back-end
> server.
> 
> So I used Wireshark to identify the difference:
> Squid sends the HTTP request with an If-None-Match header, while the
> HTTPS request doesn't contain this header.
> 
> Anyone got any ideas?
> 
> below is my squid.conf file:
> 
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl localhostdomain src 10.10.10.10
> acl to_localhost dst 127.0.0.0/8
> acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl SSL_ports port 443
> acl Safe_ports port 80# http
> acl Safe_ports port 21# ftp
> acl Safe_ports port 443   # https
> acl Safe_ports port 70# gopher
> acl Safe_ports port 210   # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280   # http-mgmt
> acl Safe_ports port 488   # gss-http
> acl Safe_ports port 591   # filemaker
> acl Safe_ports port 777   # multiling http
> acl Safe_ports port   # Squid port
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localnet
> http_access allow all
> icp_access allow localnet
> icp_access deny all
> htcp_access allow localnet
> htcp_access deny all
> https_port  vhost vport cert=/home/tomer/Desktop/certificate.pem key=/home/tomer/Desktop/key.pem
> http_port  vhost vport
> cache_peer 10.10.10.10 parent 8050 0 originserver default login=PASS 
> cache_dir ufs /usr/local/squid/var/cache 100 16 256 
> maximum_object_size 2097000 KB # A bit below 2 GB - Squid maximum file size
> hierarchy_stoplist cgi-bin ?
> access_log /usr/local/squid/var/logs/access.log squid 
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> refresh_pattern ^ftp: 1440  20%   10080
> refresh_pattern ^gopher:  1440  0%1440
> refresh_pattern .   0 20%   4320
> icp_port 3130
> coredump_dir /usr/local/squid/var/cache
> visible_hostname ubuntu
> 


-- 
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Re: Re: Re: Cache for mp3 and ogg in memory...

2008-02-13 Thread Adrian Chadd
On Wed, Feb 13, 2008, Matus UHLAR - fantomas wrote:

> Maybe the data are never fetched from disk: always fetched from the
> remote server and stored to the cache, but never read back from there.
> 
> Or maybe Squid's memory usage is too big and the machine swaps...

Bit hard to say without access to the box, don't you think? :)

(which reminds me, I have YouTube stuff to update this weekend...)

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Chaining Redirectors

2008-02-13 Thread Adrian Chadd
On Wed, Feb 13, 2008, Solomon Asare wrote:
> Hi All,
> please, how do you chain redirectors?
> 
> Googling gave me wrapzap+zapchain. Are there
> alternatives, especially ones native to Squid?

Well, you could just use an external wrapper program which feeds each
request through the other helpers in turn.

Or someone could code up native support for multiple URL redirectors in
Squid itself; the code involved is actually not that difficult.
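
A minimal sketch of such a wrapper in Python is below. It assumes the
classic one-line redirector protocol (Squid writes "URL ip/fqdn ident
method" to the helper's stdin and reads back a rewritten URL, or a blank
line for "no change"); the helper paths are hypothetical placeholders,
not shipped programs:

#!/usr/bin/env python
# chain-redirectors.py: feed each request through redirector A, then B.
# The helper paths below are placeholders for your real redirectors.
import subprocess
import sys

HELPERS = ["/usr/local/bin/redirector-a", "/usr/local/bin/redirector-b"]

def main():
    # Start each real redirector once and keep it running.
    procs = [subprocess.Popen([h], stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True, bufsize=1)
             for h in HELPERS]
    for line in sys.stdin:
        fields = line.rstrip("\n").split(None, 1)
        if not fields:
            continue
        url = fields[0]
        meta = (" " + fields[1]) if len(fields) > 1 else ""
        # Pass the (possibly already rewritten) URL through each helper,
        # keeping the original request metadata on every hop.
        for p in procs:
            p.stdin.write(url + meta + "\n")
            p.stdin.flush()
            answer = p.stdout.readline().strip()
            if answer:                  # blank line means "no rewrite"
                url = answer.split()[0]
        sys.stdout.write(url + "\n")    # final answer back to Squid
        sys.stdout.flush()

if __name__ == "__main__":
    main()

You would then point squid.conf at the wrapper instead of at either
redirector directly, e.g. with url_rewrite_program (redirect_program on
older releases); the wrapper's path is again hypothetical.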




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Mem Cache flush

2008-02-13 Thread pokeman

thanks, I just switched my cache drives to aufs. Can you explain to me
in detail what other changes I should make in my squid.conf for better
cache results? We have almost a 45 Mb link, with 30 Mb for proxy
services. Can I add more hard drives for caching, or just tune my
Squid and the Linux kernel? Remember we are using RHEL ES 4. I know
BSD gives high availability, but we can't use

Adrian Chadd wrote:
> 
> G'day,
> 
> A few notes.
> 
> * Diskd isn't stable, and won't be until I commit my next set of patches
>   to 2.7 and 3.0; use aufs for now.
> 
> * Caching windows updates will be possible in Squid-2.7. It'll require
> some
>   rules and a custom rewrite helper.
> 
> * 3.0 isn't yet as fast as 2.6 or 2.7.
> 
> 
> Adrian
> 
> On Tue, Feb 12, 2008, pokeman wrote:
>> 
>> Well, my experience with Squid's cache under heavy load is not good. I
>> have a 4-core machine with 7 SCSI drives and 4 GB RAM; the average
>> workload in peak hours is 3000 users and 30 Mb of bandwidth, running
>> RHEL ES 4. I have searched many articles on high cache performance,
>> especially for Windows Update; saving the PSF extension is a real
>> headache these days. I heard Squid release 3.0 performs better, but
>> why couldn't the Squid developers find a solution for caching Windows
>> Update in 2.6? Please tell me if I am doing something wrong in my
>> squid.conf.
>> 
>> 
>> http_port 3128 transparent
>> range_offset_limit 0 KB
>> cache_mem 512 MB
>> pipeline_prefetch on
>> shutdown_lifetime 2 seconds
>> coredump_dir /var/log/squid
>> ignore_unknown_nameservers on
>> acl all src 0.0.0.0/0.0.0.0
>> acl ourusers src 192.168.100.0/24
>> hierarchy_stoplist cgi-bin ?
>> maximum_object_size 16 MB
>> minimum_object_size 0 KB
>> maximum_object_size_in_memory 64 KB
>> cache_replacement_policy heap LFUDA
>> memory_replacement_policy heap GDSF
>> cache_dir diskd /cache1 7000 16 256
>> cache_dir diskd /cache2 7000 16 256
>> cache_dir diskd /cache3 7000 16 256
>> cache_dir diskd /cache4 7000 16 256
>> cache_dir diskd /cache5 7000 16 256
>> cache_dir diskd /cache6 7000 16 256
>> cache_dir diskd /cache7 7000 16 256
>> cache_access_log none
>> cache_log /var/log/squid/cache.log
>> cache_store_log none
>> dns_nameservers 127.0.0.1
>> refresh_pattern windowsupdate.com/.*\.(cab|exe|dll) 43200 100% 43200
>> refresh_pattern download.microsoft.com/.*\.(cab|exe|dll) 43200 100% 43200
>> refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|psf) 43200 100% 43200
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern cgi-bin         0       0%      0
>> refresh_pattern \?              0       0%      4320
>> refresh_pattern .               0       20%     4320
>> negative_ttl 1 minutes
>> positive_dns_ttl 24 hours
>> negative_dns_ttl 1 minutes
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443 563
>> acl Safe_ports port 1195 1107 1174 1212 1000
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 82  # http
>> acl Safe_ports port 81  # http
>> acl Safe_ports port 21  # ftp
>> acl Safe_ports port 443 563 # https, snews
>> acl Safe_ports port 70  # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow ourusers
>> http_access deny all
>> http_reply_access allow all
>> cache allow all
>> icp_access allow ourusers
>> icp_access deny all
>> cache_mgr [EMAIL PROTECTED]
>> visible_hostname CE-Fariya
>> dns_testnames localhost
>> reload_into_ims on
>> quick_abort_min 0 KB
>> quick_abort_max 0 KB
>> log_fqdn off
>> half_closed_clients off
>> client_db off
>> ipcache_size 16384
>> ipcache_low 90
>> ipcache_high 95
>> fqdncache_size 8129
>> log_icp_queries off
>> strip_query_terms off
>> store_dir_select_algorithm round-robin
>> client_persistent_connections off
>> server_persistent_connections on
>> persistent_request_timeout 1 minute
>> client_lifetime 60 minutes
>> pconn_timeout 10 seconds
>> 
>> 
>> 
>> Adrian Chadd wrote:
>> > 
>> > On Thu, Jan 31, 2008, Chris Woodfield wrote:
>> >> Interesting. What sort of size threshold do you see where performance  
>> >> begins to drop off? Is it just a matter of larger objects reducing  
>> >> hitrate (due to few objects being cacheable in memory) or a bottleneck  
>> >> in squid itself that causes issues?
>> > 
>> > It's a bottleneck in the Squid code which makes accessing the nth 4 KB
>> > chunk in memory take O(n) time.
>> > 
>> > It's one of the things I'd like to fix after Squid-2.7 is released.
>> > 
>> > 
>> > 
>> > Adrian
>> > 
>> > 
>> > 
>> 
>>

Re: [squid-users] Re: Re: Re: Cache for mp3 and ogg in memory...

2008-02-13 Thread Matus UHLAR - fantomas
> On Mon, Feb 11, 2008, Michelle Konzack wrote:
> > But if I read the logs, there are around 150-180 files which are heard
> > all the time, and in total they come to around 800 MByte. So creating a
> > RAM cache of 1 GByte would do wonders...

On 13.02.08 09:55, Adrian Chadd wrote:
> Then you should look at why your operating system disk caching isn't
> helping you out here.

Maybe the data are never fetched from disk: always fetched from the
remote server and stored to the cache, but never read back from there.

Or maybe Squid's memory usage is too big and the machine swaps...
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Microsoft dick is soft to do no harm


Re: [squid-users] Re: Re: Re: Cache for mp3 and ogg in memory...

2008-02-13 Thread Matus UHLAR - fantomas
> On 2008-02-10 08:47:14, Joel Jaeggli wrote:
> > If you have more than one listener for the same stream, you relay it to 

On 11.02.08 01:01, Michelle Konzack wrote:
> Which is not the case, since OGG/MP3 files can be requested individually.

By using the same URI, or by a different URI each time a song is requested?
Either of these can cause problems for HTTP caching...

> > a local streaming server (icecast2) and provide them with a link to the 
> > local source...
> 
> But this means you have really heavy disk I/O. And imagine you have only
> 4 people/clients listening to different OGG/MP3 files: the read head will
> never stand still and will never read sequentially.

4 users listening to 160 kbit/s songs in parallel will make ... 640 kbit/s
of traffic. Most disks can handle that :) Of course, if Squid is configured
to fetch more data than the user requests (see the quick_abort settings)
and users tend to switch songs quickly, that may cause problems.
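
For instance, to make Squid drop a fetch as soon as the client aborts
(the values here just illustrate the directives mentioned above):

quick_abort_min 0 KB
quick_abort_max 0 KB

With both set to 0 KB an aborted transfer is never continued in the
background; a larger quick_abort_min would instead let Squid finish
nearly-complete objects so they can become cache hits.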

> This kills all hard drives... And yes, I could use SCSI drives, but a
> RAID-5 of at least 3 drives plus a hot spare and a controller would cost
> around 5000 Euro, nearly 16 times the price of the computer itself.

Never use RAID-5 for things like a Squid cache. Better to have no RAID at
all: use single drives, with one cache_dir on each.
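
Assuming, say, two independent disks mounted at /cache1 and /cache2 (the
mount points are illustrative), that would look like:

cache_dir aufs /cache1 7000 16 256
cache_dir aufs /cache2 7000 16 256

Squid then spreads objects across the directories by itself, without any
RAID-5 write penalty.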

> > If they're all listening to different remote sources at different times 
> > then there's no point in caching it...
> 
> But if I read the logs, there are around 150-180 files which are heard
> all the time, and in total they come to around 800 MByte. So creating a
> RAM cache of 1 GByte would do wonders...

I still have no idea what URIs are requested by users when they are
listening to those songs. In such a case I can't do anything but guess...

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux is like a teepee: no Windows, no Gates and an apache inside...


[squid-users] Chaining Redirectors

2008-02-13 Thread Solomon Asare
Hi All,
please, how do you chain redirectors?

Googling gave me wrapzap+zapchain. Are there
alternatives, especially ones native to Squid?

I use 2.6.

Regards,
solomon.



Re: [squid-users] Mem Cache flush

2008-02-13 Thread Hugo Santander Ballestin
unsubscribe

On Wed, 2008-02-13 at 15:52 +0900, Adrian Chadd wrote:
> [... full quote of Adrian's reply and the squid.conf trimmed; the same
> text appears in the "Mem Cache flush" thread above ...]
-- 
Hugo Santander Ballestín
SADIE

RE: [squid-users] I can't use left panell of new Squid wiki site

2008-02-13 Thread S.KOBAYASHI
Hi Amos,

It's not a very serious problem. It worked fine in Firefox.

Thank you very much,
Seiji

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 13, 2008 2:16 PM
To: S.KOBAYASHI
Cc: 'Squid Users'
Subject: Re: [squid-users] I can't use left panell of new Squid wiki site

> Hello guys,
>
> I'm happy to hear the squid wiki site was improved. However, I have a
> bit of a problem.
> I cannot use the left-side panel of the wiki site: I can't click items
> such as Search or Login. Nothing seems to be there, although some text
> is visible.
> I'm using IE7. Does anyone know of IE7 settings or known problems with
> IE7?

I've entered what I can see of the problem into a bug report.
If there is anything else, please add details.

http://www.squid-cache.org/bugs/show_bug.cgi?id=2221

Amos




Re: [squid-users] Bluecoat > Squid

2008-02-13 Thread Jakob Curdes
The problem is that if the Bluecoat AV series is working there, these 
systems can even intercept and decode SSL traffic, so if the traffic is 
(correctly) recognized as illegitimate proxy traffic, I have no idea how 
you could prevent that. Proxy traffic is easily distinguishable from 
other HTTP traffic, as the protocols are different regardless of which 
ports the proxy runs on. You might be able to set up an SSH tunnel, 
depending on site policies. Mind you, I am not suggesting that this 
would be a legal or sensible thing to do.


Yours,
Jakob Curdes