Re: [squid-users] unvalidated objects.

2013-12-10 Thread Amos Jeffries
On 8/12/2013 11:37 a.m., Hussam Al-Tayeb wrote:
 I have something like this:
 2013/12/08 00:18:50| Done reading /home/squid swaplog (293760 entries)
 2013/12/08 00:18:50| Finished rebuilding storage from disk.
 2013/12/08 00:18:50|   293760 Entries scanned
 2013/12/08 00:18:50|        0 Invalid entries.
 2013/12/08 00:18:50|        0 With invalid flags.
 2013/12/08 00:18:50|   293760 Objects loaded.
 2013/12/08 00:18:50| 0 Objects expired.
 2013/12/08 00:18:50| 0 Objects cancelled.
 2013/12/08 00:18:50| 0 Duplicate URLs purged.
 2013/12/08 00:18:50| 0 Swapfile clashes avoided.
 2013/12/08 00:18:50|   Took 1.00 seconds (294266.73 objects/sec).
 2013/12/08 00:18:50| Beginning Validation Procedure
 2013/12/08 00:18:50|   262144 Entries Validated so far.
 2013/12/08 00:18:50|   Completed Validation Procedure
 2013/12/08 00:18:50|   Validated 293759 Entries
 2013/12/08 00:18:50|   store_swap_size = 18411592.00 KB
 2013/12/08 00:18:50| storeLateRelease: released 0 objects
 
 This means 1 object (293760 - 293759 = 1) was not validated.
 
 - Can squid still eventually automatically purge that 1 object from
 disk through aging or something?
 - Any way to extract through some debug option what that object is?
 
 Yes I know it is just one file but I would like to keep the cache clean.

I don't think your validation algorithm is doing anything at all right
now. The actual validation procedure is to open every disk object and
verify the details of its existence and size. That takes a relatively
long time and is only performed if you start Squid with the -F command
line option. Otherwise it is only performed as-needed by live traffic
(TCP_SWAPFAIL_MISS gets logged if live validation fails).

The difference of 1 is a bit odd though.

Amos


Re: [squid-users] Configuring Cache_Dir Size

2013-12-10 Thread Amos Jeffries
On 9/12/2013 10:57 p.m., Linux Zoom wrote:
 Dear all,
 
 We have server for caching with the following specifications:
 OS CentOS 6.4 64-bit
 Kernel 2.6.32-358.el6.x86_64
 CPU Intel(R) Xeon(R) CPU  E5640  @ 2.67GHz  8 Cores
 RAM 8 GB
 hard disk space 850 GB
 
 we installed squid-3.1.10-19.el6_4.x86_64 and we want to know the
 optimal way to configure the cache_dir size, L1, and L2.


Configure it however you like. The traffic on your network is what
determines what is best, so we can't tell you what to do very well (if
we could predict it, this would be automated already).

L1 and L2 on modern filesystems can be any multiple of 2. They exist to
allow manual avoidance of files-per-directory limitations on some
systems. The popular values tend to be powers of 2, such as 16/64,
16/128, 32/256, 64/256, etc.
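
For illustration, a minimal cache_dir line using one of those pairs (the
path and the 100000 MB size are placeholder assumptions, not
recommendations):

# type, path, size in MB, then L1 (first-level dirs) and L2 (dirs per L1)
cache_dir aufs /var/spool/squid 100000 64 256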

The general recommendation is to aim for storage of around 7 days worth
of traffic.

With an 850 GB disk I would allocate:
* a 50GB rock cache_dir for objects sized between 4KB and 32KB.
* a 10GB rock cache_dir for objects under 4KB.
* a 600GB AUFS cache_dir for objects over 32KB.
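
As a sketch only, that layout could look like the following (the paths
and the exact min-size/max-size splits are assumptions; rock stores need
Squid 3.2 or later, not the 3.1.10 mentioned above):

# objects under 4KB
cache_dir rock /cache/rock-small 10240 max-size=4096
# objects between 4KB and 32KB
cache_dir rock /cache/rock-mid 51200 min-size=4097 max-size=32768
# everything larger goes to AUFS
cache_dir aufs /cache/aufs 614400 64 256 min-size=32769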

Amos


Re: [squid-users] Streams through http proxy wont work with android devices

2013-12-10 Thread Amos Jeffries
On 10/12/2013 11:19 p.m., Prabu Gj wrote:
 Hi,
 
Yes, it is going through Squid; I am getting logs in the tcpdump
 trace. The same has been attached for reference.
 

Okay, yes, there is traffic going between Squid and the Android.

Where the trace cuts in we get the tail end of a 121KB data exchange
which runs slowly through the entire time of the trace.

Then 4 very short ones, around 1200-1300 bytes each, delivered to Squid,
followed by a FIN abort from the Android within 0.1 - 1.0 seconds,
before anything comes back from Squid.

Followed by another connection setup with a 1231 byte request from
Android to Squid which results in over 121KB of response by the end of
the trace, continuing after it ends.

All in all it looks like Android and Squid are exchanging traffic. So
what exactly "won't work"?

Amos


[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-10 Thread iishiii
Thanks a lot, Amos!
Please help me a little more: what steps exactly should I perform for
SMP?
How should I break up my squid.conf, and what more configuration is
required?


And: how do I tune Squid for getting greater hit rates? Tell me some
steps to try.







Re: [squid-users] Configuring Cache_Dir Size

2013-12-10 Thread Eliezer Croitoru

Hey Amos,

So it's a sum of 660GB of cache, while leaving the disk more space for
other FS needs?
Also, there was a calculation, which I do not remember, that for X
objects on disk there is a need for Y memory to be used by the Squid
index.

Does it apply in this case?

Thanks,
Eliezer

On 10/12/13 10:16, Amos Jeffries wrote:

The general recommendation is to aim for storage of around 7 days worth
of traffic.

With an 850 GB disk I would allocate:
* a 50GB rock cache_dir for objects sized between 4KB and 32KB.
* a 10GB rock cache_dir for objects under 4KB.
* a 600GB AUFS cache_dir for objects over 32KB.

Amos




[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-10 Thread Dr.x
iishiii wrote
 Thanks a lot, Amos!
 Please help me a little more: what steps exactly should I perform for
 SMP?
 How should I break up my squid.conf, and what more configuration is
 required?


 And: how do I tune Squid for getting greater hit rates? Tell me some
 steps to try.

hi,

look, you are right.

The SMP scaling wiki is somewhat difficult and not suitable for a
beginner to understand.

Luckily you can ask on the mailing list and gain more understanding.
First of all, let me ask you some questions:

why do you need SMP?
what is the OS of your system?
do you have a high load, and your Squid can't handle more traffic?
what are your server machine details?
how much memory, how many CPU cores and how many hard disks do you have?
==
so,
as a start,
to start with SMP, your Squid version should be higher than 3.2.x as I
remember, so I advise you to compile Squid 3.3.9 or 3.3.10 or higher,
and enable the rock store when you compile it.
then
you should read this wiki:
http://wiki.squid-cache.org/Features/SmpScale

and look at the bottom of the page; you will find 3 frequent problems we
hit when dealing with SMP. You should do as explained there, because
otherwise SMP may not start well.

after that, and after making sure SMP is fine,

you can add to squid.conf:
workers 3 (start with 3 workers)
and start without any cache_dir, as sketched below.
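
A minimal sketch of that starting point (the ${process_number}
conditional is optional and shown only as an example of per-worker
settings):

workers 3
# per-worker settings can be wrapped in conditionals if needed, e.g.:
# if ${process_number} = 1
# http_port 3129
# endif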


When you do the above, tell me, so that you can continue to the next
steps of tuning parameters and enhancing Squid.



again,
my understanding is not so good, but I tried to help you as much as I
could; Mr Amos has a lot of answers to the next questions.

I wish I helped you.

regards

Ahmd



-
Dr.x


RE: [squid-users] Using trusted fake CA cert for ssl-bump on http_port

2013-12-10 Thread Shinoj Gangadharan
Does the certificate match the key? Is there a passphrase on the key? If
yes, please remove the passphrase. Are you able to get it working with
generate-host-certificates=off?
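
A quick way to check the first two points from a shell (the cert and key
paths are taken from the config quoted below):

# the two digests must match if the cert and key belong together
openssl x509 -noout -modulus -in /etc/ssl/demoCA/CA/cacert.pem | openssl md5
openssl rsa -noout -modulus -in /etc/ssl/demoCA/CA/cacert.key | openssl md5
# strip a passphrase from the key, if one is set
openssl rsa -in /etc/ssl/demoCA/CA/cacert.key -out cacert-nopass.key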

Regards,
Shinoj.

 -Original Message-
 From: Sridhar N [mailto:sridhar.narasim...@live.com]
 Sent: Monday, December 09, 2013 6:20 PM
 To: squid-users@squid-cache.org
 Subject: RE: [squid-users] Using trusted fake CA cert for ssl-bump on
 http_port

 
  From: sgangadha...@wavecrest.gi
  Date: Mon, 9 Dec 2013 11:55:42 +0530
 
  Hi Sridhar,
 
  I don’t see the following in your config file :
 
  sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
  sslcrtd_children 50
 
  always_direct allow all
 
 
  /var/lib/ssl_db should be owned by squid. This is where the generated
  certificates will be stored. This folder is created by using the
 command:
 
  ssl_crtd -c -s /var/lib/ssl_db
 

 Thanks. I added those lines, still getting the same problem though.

 What else might be going on ?

 root@ubuntu:~# squid -k parse
 2013/12/09 18:17:57| Startup: Initializing Authentication Schemes ...
 2013/12/09 18:17:57| Startup: Initialized Authentication Scheme 'basic'
 2013/12/09 18:17:57| Startup: Initialized Authentication Scheme 'digest'
 2013/12/09 18:17:57| Startup: Initialized Authentication Scheme
'negotiate'
 2013/12/09 18:17:57| Startup: Initialized Authentication Scheme 'ntlm'
 2013/12/09 18:17:57| Startup: Initialized Authentication.
 2013/12/09 18:17:57| Processing Configuration File:
/usr/local/etc/squid.conf
 (depth 0)
 2013/12/09 18:17:57| Processing: acl localnet src 10.0.0.0/8  # RFC1918 possible internal network
 2013/12/09 18:17:57| Processing: acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 2013/12/09 18:17:57| Processing: acl localnet src 192.168.0.0/16  # RFC1918 possible internal network
 2013/12/09 18:17:57| Processing: acl localnet src fc00::/7  # RFC 4193 local private network range
 2013/12/09 18:17:57| Processing: acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
 2013/12/09 18:17:57| Processing: acl SSL_ports port 443
 2013/12/09 18:17:57| Processing: acl Safe_ports port 80  # http
 2013/12/09 18:17:57| Processing: acl Safe_ports port 21  # ftp
 2013/12/09 18:17:57| Processing: acl Safe_ports port 443  # https
 2013/12/09 18:17:57| Processing: acl Safe_ports port 70  # gopher
 2013/12/09 18:17:57| Processing: acl Safe_ports port 210  # wais
 2013/12/09 18:17:57| Processing: acl Safe_ports port 1025-65535  # unregistered ports
 2013/12/09 18:17:57| Processing: acl Safe_ports port 280  # http-mgmt
 2013/12/09 18:17:57| Processing: acl Safe_ports port 488  # gss-http
 2013/12/09 18:17:57| Processing: acl Safe_ports port 591  # filemaker
 2013/12/09 18:17:57| Processing: acl Safe_ports port 777  # multiling http
 2013/12/09 18:17:57| Processing: acl CONNECT method CONNECT
 2013/12/09 18:17:57| Processing: http_access deny !Safe_ports
 2013/12/09 18:17:57| Processing: http_access allow localhost manager
 2013/12/09 18:17:57| Processing: http_access deny manager
 2013/12/09 18:17:57| Processing: http_access allow localnet
 2013/12/09 18:17:57| Processing: http_access allow localhost
 2013/12/09 18:17:57| Processing: http_access allow all
 2013/12/09 18:17:57| Processing: http_port 4128 ssl-bump generate-host-certificates=on cert=/etc/ssl/demoCA/CA/cacert.pem key=/etc/ssl/demoCA/CA/cacert.key
 2013/12/09 18:17:57| Processing: ssl_bump server-first all
 2013/12/09 18:17:57| Processing: sslcrtd_program /usr/local/libexec/ssl_crtd -s /usr/local/var/lib/ssl_db
 2013/12/09 18:17:57| Processing: sslcrtd_children 5
 2013/12/09 18:17:57| Processing: always_direct allow all
 2013/12/09 18:17:57| Processing: coredump_dir /usr/local/var/cache/squid
 2013/12/09 18:17:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
 2013/12/09 18:17:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
 2013/12/09 18:17:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 2013/12/09 18:17:57| Processing: refresh_pattern . 0 20% 4320
 2013/12/09 18:17:57| Initializing https proxy context
 2013/12/09 18:17:57| Initializing http_port [::]:4128 SSL context
 2013/12/09 18:17:57| Using certificate in /etc/ssl/demoCA/CA/cacert.pem
 2013/12/09 18:17:57| storeDirWriteCleanLogs: Starting...
 2013/12/09 18:17:57|   Finished.  Wrote 0 entries.
 2013/12/09 18:17:57|   Took 0.00 seconds (  0.00 entries/sec).
 FATAL: No valid signing SSL certificate configured for http_port [::]:4128
 Squid Cache (Version 3.3.10): Terminated abnormally.
 CPU Usage: 0.008 seconds = 0.008 user + 0.000 sys
 Maximum Resident Size: 25808 KB
 Page faults with physical i/o: 0


Re: [squid-users] logformat codes

2013-12-10 Thread Brendan Kearney
On Mon, 2013-12-09 at 23:12 +0900, Alan wrote:
 On Thu, Dec 5, 2013 at 9:41 AM, Brendan Kearney bpk...@gmail.com wrote:
  I am wondering if there is a logformat code that can be used to log the
  URL (domain.tld or host.domain.tld) independently of the URI
  (/path/to/file.ext?parameter). I am using %ru, which gives me the URL
  and URI in one string. %rp seems to be the URI, but I am not using that
  right now and can only go by what I am reading in the docs.
 
  I am looking to log the URL in a separate field from the URI so that,
  in a database of the log entries, the URL can be indexed for better
  search and reporting performance. Is there an easy way to accomplish this?
 
 Why don't you use a trigger? That is what I do.

Amos, don't spend it all in one place.

Alan, while I do see that a trigger could digest the whole string and
parse out the pieces I want, I see this functionality as possibly being
a help to anyone running Squid and not running a database.
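
For those who are running a database, a minimal sketch of the trigger
approach Alan mentions, assuming a hypothetical MySQL table access_log
with url and url_host columns:

DELIMITER //
CREATE TRIGGER set_url_host BEFORE INSERT ON access_log
FOR EACH ROW
BEGIN
  -- drop the scheme, then keep everything before the first '/'
  SET NEW.url_host = SUBSTRING_INDEX(SUBSTRING_INDEX(NEW.url, '://', -1), '/', 1);
END//
DELIMITER ;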



[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-10 Thread iishiii
Dear all,

My Squid version is 3.4.0.3 (installed via RPM from the Squid FTP site).
My OS is CentOS 6.5.
My server machine is a quad-core 3.0 GHz Xeon (two processors).
RAM: 8 GB
Hard drive: 500 GB, 7200rpm

I have almost 100 users.

I have 4x DSL links (4MB each).

I am using MikroTik for load balancing the 4 WAN links and created a
rule to direct all port 80 traffic to Squid.

Squid is working and giving logs, but all entries are TCP_MISS/200.
There is only a very rare chance that any object gets a hit... I want to
configure SMP so that Squid can give more hits.





Re: [squid-users] external_acl doesn't work after upgrade debian 6 squid 2.7 to debian 7 squid 3.1.20

2013-12-10 Thread Amos Jeffries

On 2013-12-11 06:25, Thomas Stegbauer wrote:

Hi Amos,

sorry for double post.
It looks like only one of the external_acl scripts gets the SRC
addresses sent.

As I modified the script to write each input line out to a file, there
is only one file filling with IP addresses.
All the other logs have a size of 0 bytes.

What could be the reason for that?


Possibly your script is operating fast enough that Squid does not need 
to send lookups to more than 1 helper process. Check the cachemgr 
external_acl report to see how many requests are being sent to each 
helper.
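
For reference, that report can also be pulled from the command line (the
action name here matches the "External ACL Statistics" report shown
later in this thread; port 3128 is an assumption):

squidclient -p 3128 mgr:external_acl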


Amos



Re: [squid-users] external_acl doesn't work after upgrade debian 6 squid 2.7 to debian 7 squid 3.1.20

2013-12-10 Thread Thomas Stegbauer



Am 03.09.2013 13:40, schrieb Amos Jeffries:

On 3/09/2013 4:25 a.m., Thomas Stegbauer wrote:


Hi,

I upgraded a Debian 6.0 with Squid 2.7 to Debian 7.1 with Squid 3.1.20.

We have an external ACL included like this:
external_acl_type phpauthscript protocol=2.5 children=100 ttl=0
negative_ttl=0 %SRC /etc/squid3/authscript

The script updates a MySQL database with the hits of a user, where the
user gets looked up by the client IP.

This worked fine on Debian 5 and Debian 6 with squid 2.7.

But on Debian 7 this stops working, as the authscript dies because it
does not get the IP address.


Um. SRC always exists, so something else is going on here. What is
cache.log saying to you when the helper dies?

Amos


Hi Amos,

Sorry for my delay. It seems my webapp is unable to create correct
plain-text emails.


Today I modified the external_acl PHP script to save its STDIN to a
file, and also the startup in squid.conf.
It looks like only one process (the first?) gets the SRC via STDIN. The
other processes don't get anything.


The script from my colleague starts like the listing below.

In cache.log I see only "Lesefehler Test ts" (read error), which is the
correct behaviour if it doesn't get the client IP.


Any ideas?


Thomas



#!/usr/bin/php
<?php
require("/etc/squid3/config_squid.inc.php");   // load configuration

// connect to the database
$db = mysql_pconnect($mysqlhost,$mysqluser,$mysqlpw) or die("Keine Verbindung zur Datenbank");

mysql_select_db($mysqldb);

$ZUFALL = rand();
$handle = fopen("/tmp/squid3-auth.".$ZUFALL.".log", "a+");

// endless loop waiting for input handed over by Squid
while(1) {
    // read one line from STDIN
    $line = trim(fgets(STDIN));
    fwrite($handle, $line);
    // read error? terminate the script
    if(!$line) {
        fclose($handle);
        die("Lesefehler Test ts");
        // with Squid3, no IP arrives in 50% of cases?
        // echo(sprintf($authok,"nobody"));
    }
    $ip = $line;

    // query the auth table to check whether the user is logged in
    $q = mysql_query("SELECT uname,ttl FROM ".$mysqlauthtable." WHERE ip='$ip'");

    $array = mysql_fetch_array($q);

...


Re: [squid-users] external_acl doesn't work after upgrade debian 6 squid 2.7 to debian 7 squid 3.1.20

2013-12-10 Thread Eliezer Croitoru

Hey Thomas,

Is there any way of obtaining a fake script written in PHP?

I have fake Bash, Perl and Python scripts, and I was looking for
scripts/code in these languages next:

tcl, ada, erlang, php, lua, others.

I remember that there was a PHP script that someone tested in the past:

http://www.squid-cache.org/mail-archive/squid-users/201209/0374.html

I know that Bash, Ruby and other code works fine.

Eliezer

On 10/12/13 23:52, Thomas Stegbauer wrote:

snip





[squid-users] Fwd: in-transit objects and more new requests for them

2013-12-10 Thread mohamad pen
Greetings everyone,

I made a request for a big file, with a URL of, let's say,
url.ca/big_file.tgz, from one client machine through Squid, and while it
was pending-store, a little while later I made another request from
another client machine for the same file. Here is the output of:

squidclient -p 1210 mgr:vm_objects 2>&1 | grep -i -B 6 -A 5 big_file.tgz


KEY 2328788B9DA67070750EBB434D057E4A
STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED
LV:1386699538 LU:1386699538 LM:1362573412 EX:-1
4 locks, 1 clients, 1 refs
Swap Dir -1, File 0X
GET http://url.ca/big_file.tgz
inmem_lo: 227638121
inmem_hi: 228071561
swapout: 0 bytes queued

KEY 7A2D8851C8F8B8E9BC56183DC1C9C26D
--
KEY 7F6FB74FB7AA69E5DB9FCA5002A64C0A
STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED
LV:1386700716 LU:1386700716 LM:1362573412 EX:-1
4 locks, 1 clients, 1 refs
Swap Dir -1, File 0X
GET http://url.ca/big_file.tgz
inmem_lo: 8295840
inmem_hi: 8316000
swapout: 0 bytes queued

As you can see, Squid creates two entries, one for each of them. I then
tested the server at url.ca and figured out that Squid creates two
separate downloads, one for each request, while they are exactly the
same. How can I make Squid use the pending in-transit object for every
request instead of creating a new download for each of them?

I know it might be a security risk, but it could be very useful for
special cases where security does not matter.
regards


Re: [squid-users] external_acl doesn't work after upgrade debian 6 squid 2.7 to debian 7 squid 3.1.20

2013-12-10 Thread Amos Jeffries

On 2013-12-11 11:29, Thomas Stegbauer wrote:

Am 10.12.2013 22:49, schrieb Amos Jeffries:

On 2013-12-11 06:25, Thomas Stegbauer wrote:

Hi Amos,

sorry for double post.
It looks like only one of the external_acl scripts gets the SRC
addresses sent.
As I modified the script to write each input line out to a file, there
is only one file filling with IP addresses.
All the other logs have a size of 0 bytes.

What could be the reason for that?


Possibly your script is operating fast enough that Squid does not need
to send lookups to more than 1 helper process. Check the cachemgr
external_acl report to see how many requests are being sent to each 
helper.


Amos



Hi Amos,

Thank you very much for the answer.
Squid is asking, because if it were not, the process wouldn't die and
log the error in cache.log.


With PHP that is not strictly true. I've found versions in Debian which
time out on script execution duration and halt the helper, leading to
your "logs have a size of 0 bytes" state.


I'm getting a bit fuzzy nowadays since I dropped PHP myself as a helper
language some months back. But IIRC there was at least one version
where, if I/O happened frequently enough, it kept running for longer. If
you are lucky you may have that version of PHP.




Also, in cachemgr.cgi I see no requests answered?

External ACL Statistics: phpauthscript
Cache size: 0
program: /etc/squid3/authscript
number active: 4 of 5 (0 shutting down)
requests sent: 0
replies received: 0
queue length: 0
avg service time: 0 msec

#   FD  PID # Requests  Flags   Time    Offset  Request
1   14  47608   0   0.000   0   (none)
2   30  47609   0   0.000   0   (none)
3   41  47610   0   0.000   0   (none)
4   49  47611   0   0.000   0   (none)

Flags key:

   B = BUSY
   W = WRITING
   C = CLOSING
   S = SHUTDOWN PENDING

what could be the reason?


No requests *sent* to the helper. Which is what we are looking for,
indicating that the aforementioned timeout is probably the cause of
this.


If your helper is not doing anything beyond looking up the SRC IP in a
database and sending some associated username back to Squid, you could
try the SQL_session external ACL helper bundled with 3.3 and later.
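
For reference, a sketch of how that bundled helper is typically wired up
(the helper path, DSN and credentials are placeholder assumptions; check
the helper's own --help for the exact options shipped with your
version):

external_acl_type sql_session ttl=60 %SRC /usr/lib/squid3/ext_sql_session_acl \
    --dsn "DBI:mysql:database=squid" --user squid --password secret
acl db_session external sql_session
http_access allow db_session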


Amos


Re: [squid-users] Fwd: in-transit objects and more new requests for them

2013-12-10 Thread Eliezer Croitoru

Hey mohamad,

The key is meant for searching and finding something in the DB.
As you can see there are two keys; it might be a HEAD and a GET request,
or another thing.


Squid has a testing branch for a function called collapsed forwarding,
which you can try; you might find that it is what you are looking for.


What version of Squid are you using?

Eliezer

On 11/12/13 00:47, mohamad pen wrote:

snip





Re: [squid-users] Fwd: in-transit objects and more new requests for them

2013-12-10 Thread Amos Jeffries

On 2013-12-11 11:47, mohamad pen wrote:

greeting every one


snip


how to
make squid to use the pending in-transit object for every requests
instead of create a new download for them.

I know it might be a security risk, but it might be very usefull for
special cases that security does not matter.
regards


This is called "collapsed forwarding" in Squid-2. Squid-3 does not
support it yet, but there is work almost completed to fix that.


If you are happy to use the development code, it can be found in the
Launchpad bzr branch lp:~measurement-factory/squid/collapsed-fwd .
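
For readers still on Squid-2, the feature there is a single squid.conf
directive (not accepted by the 3.x releases of this era):

collapsed_forwarding on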


Amos



[squid-users] max_user_ip not working in 3.4?

2013-12-10 Thread Romeo Mihalcea
I have these boxes running an early 3.0 version and I decided to upgrade
to the latest, which I did. It compiled and is working, except for one
rule which spits out an error (it was working fine till now):

acl concurrent_browsing max_user_ip -s 5
http_access deny concurrent_browsing



The error:

FATAL: Bungled /etc/squid/config/squid.conf line 19: acl
concurrent_browsing max_user_ip -s 5

Is max_user_ip dropped or replaced, or does 3.4 have a bug?

Thanks.


[squid-users] Re: Reverse proxy always misses cache items (dynamic pages)

2013-12-10 Thread juan_fla
Thank you Amos. It's been a while, but I could finally get back to this
issue.

I updated my squid.conf per your recommendations (set up a cache_dir,
changed debug_options, and also used strip_query_terms off to log full
URLs with parameters).



debug_options 11 2

http_access allow manager localhost

http_port 3129 accel defaultsite=mydomain.org 
cache_peer 127.0.0.1 parent 3130 0 no-query originserver  name=myAccel
login=PASS forceddomain=mydomain.org

acl our_sites dstdomain mydomain.org
acl our_sites2 dstdomain localhost
http_access allow our_sites
http_access allow our_sites2

http_access allow localhost 

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

cache_peer_access myAccel allow our_sites
cache_peer_access myAccel allow our_sites2

cache_dir ufs /u/home/mydomain/opt/squid/var/cache 100 16 256



I tried browsing pages (from a browser on a remote client) and also
fetching the headers through telnet. This is from access.log:


1386731129.654     28 127.0.0.1 TCP_MISS/304 342 GET http://localhost:3129/index.php?title=Main_Page - FIRSTUP_PARENT/127.0.0.1 -
1386731130.074     44 127.0.0.1 TCP_REFRESH_UNMODIFIED/304 291 GET http://localhost:3129/load.php?debug=false&lang=en&modules=mediawiki.legacy.commonPrint%2Cshared%7Cskins.monobook&only=styles&skin=monobook&* - FIRSTUP_PARENT/127.0.0.1 -
1386731130.194    172 127.0.0.1 TCP_REFRESH_MODIFIED/200 11439 GET http://localhost:3129/load.php?debug=false&lang=en&modules=startup&only=scripts&skin=monobook&* - FIRSTUP_PARENT/127.0.0.1 text/javascript
1386731136.877     33 127.0.0.1 TCP_MISS/304 342 GET http://localhost:3129/index.php?title=Hygiene - FIRSTUP_PARENT/127.0.0.1 -
1386731139.501     83 127.0.0.1 TCP_MISS/304 342 GET http://localhost:3129/index.php?title=Main_Page - FIRSTUP_PARENT/127.0.0.1 -
1386731182.627     29 127.0.0.1 TCP_MISS/200 472 HEAD http://localhost:3129/index.php - FIRSTUP_PARENT/127.0.0.1 text/html

Notice the two calls to index.php?title=Main_Page, both missed. Also,
the last call is a HEAD request, and the headers basically show:

HTTP/1.1 200 OK
Date: Wed, 11 Dec 2013 02:54:15 GMT
Server: Apache
X-Powered-By: PHP/5.2.6
X-Content-Type-Options: nosniff
Vary: Accept-Encoding,Cookie
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: private, must-revalidate, max-age=0
Content-Language: en
Last-Modified: Thu, 21 Nov 2013 03:44:42 GMT
Content-Type: text/html; charset=UTF-8
X-Cache: MISS from tourmaline.tilted.net
Via: 1.1 tourmaline.tilted.net (squid/3.3.8)
Transfer-Encoding: chunked

At this point, access.log continues showing all files as misses. I
really don't understand what I need to do in order to have squid cache
when using localhost, or how to try for a hit using mydomain, and I
don't seem to get any information from the headers.

Thanks in advance for your help





[squid-users] Issue when SSL bump bypass some domains

2013-12-10 Thread Neddy, NH. Nam
Hi,

I've installed squid 3.4 STABLE for forward proxying with ssl-bump
(following the Squid wiki). Everything is fine until clients visit https
pages which have bad certificates (i.e. self-signed).

My configuration to tell Squid to bypass those:

acl bypass-ssl dstdomain *.website.com

ssl_bump none bypass-ssl
ssl_bump server-first all

The result is that Squid bypasses the ACL but still does ssl-bump, and
the client still receives a generated cert from Squid.

Is this right? I expected ssl_bump not to terminate SSL because of those
directives. If so, what should I do? I highly appreciate your comments.

Thanks,
~Neddy


Re: [squid-users] Issue when SSL bump bypass some domains

2013-12-10 Thread Alex Rousskov
On 12/10/2013 09:13 PM, Neddy, NH. Nam wrote:
 Hi,
 
 I've installed squid 3.4 STABLE for forward proxying with ssl-bump
 (following the Squid wiki). Everything is fine until clients visit https
 pages which have bad certificates (i.e. self-signed).
 
 My configuration to tell Squid to bypass those:
 
 acl bypass-ssl dstdomain *.website.com
 
 ssl_bump none bypass-ssl
 ssl_bump server-first all


OK, but please note that the above only works if

a) The CONNECT request is using a domain name;

or

b) The CONNECT request is using an IP address. Squid can get a domain
name by doing a reverse DNS lookup on that IP address _and_ the result
of that reverse lookup is the domain name you expect and not some
internal/irrelevant/different domain.

In many cases, neither (a) nor (b) are true.


 The result is that Squid bypasses the ACL but still does ssl-bump, and
 the client still receives a generated cert from Squid.

Sorry, the above sentence is unclear, especially the "Squid bypasses
the ACL" part. You may want to rephrase.


 I expected ssl_bump not to terminate SSL because of those
 directives. If so, what should I do?

Yes, if bypass-ssl matches, Squid should not terminate SSL.


Here is the suggested troubleshooting plan.

1) Collect the CONNECT request that violates your expectations. Use
debug_options ALL,2 in squid.conf, packet capture, custom access.log,
whatever works best for you. Once you have the request, you can repeat
it if needed, in isolation, using tools like nc, curl, wget, etc.

2) Determine whether that CONNECT request is using an IP address for the
tunnel destination. If CONNECT is using a domain name, should the
bypass-ssl match that domain? If bypass-ssl should match but does not,
report a bug.

3) If the CONNECT request is using an IP address, perform a reverse DNS
lookup yourself, using the same DNS resolver that Squid is using. The
"dig" or even "host" command may be used for that in most cases. Do you
get a
DNS answer with a domain name? Should that domain name match your
bypass-ssl ACL? If bypass-ssl should match in this case but does not,
report a bug.
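
For example, the reverse lookup in step 3 might look like this from a
shell (the IP and resolver addresses below are placeholders):

# reverse DNS lookup against the same resolver Squid uses
dig -x 203.0.113.80 @192.0.2.53 +short
# or
host 203.0.113.80 192.0.2.53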

The above plan does not cover all possibilities, but is a good start.

If you need to report a bug, change debug_options to ALL,9; reproduce
the problem using a single request (with no other traffic going through
Squid); and post the compressed cache.log.


Good luck,

Alex.



Re: [squid-users] Re: Reverse proxy always misses cache items (dynamic pages)

2013-12-10 Thread Amos Jeffries
On 11/12/2013 4:16 p.m., juan_fla wrote:

 Notice two calls to index.php?title=Main_Page , both missed. Also last call
 is a HEAD request, and the headers basically show:
 
 HTTP/1.1 200 OK
 Date: Wed, 11 Dec 2013 02:54:15 GMT
 Server: Apache
 X-Powered-By: PHP/5.2.6
 X-Content-Type-Options: nosniff
 Vary: Accept-Encoding,Cookie
 Expires: Thu, 01 Jan 1970 00:00:00 GMT
 Cache-Control: private, must-revalidate, max-age=0
 Content-Language: en
 Last-Modified: Thu, 21 Nov 2013 03:44:42 GMT
 Content-Type: text/html; charset=UTF-8
 X-Cache: MISS from tourmaline.tilted.net
 Via: 1.1 tourmaline.tilted.net (squid/3.3.8)
 Transfer-Encoding: chunked
 
 At this point, access.log continues showing all missed files. I really don't
 understand what I need to do in order to have squid cache using localhost,
 or to try to hit using mydomain, and I don't seem to get any information
 from the headers.
 

They also say that the actual object at this URL changes with any
difference in the Cookie and Accept-Encoding headers provided by the
client. Because that Vary list contains Cookie, the object will usually
be a MISS.

The headers also say the object is a private object, only to be
delivered to the one client and not to be shared with others. Squid does
not cache private information by default (and since you are running a
reverse-proxy it's probably best not to use the available override).

Amos


Re: [squid-users] squid 3.4.1 and basic auth

2013-12-10 Thread Dmitry Melekhov

On 11.12.2013 10:31, Dmitry Melekhov wrote:

Will contact rejik developer.



By the way, there is already an updated version...

http://rejik.ru/bb_rus/viewtopic.php?f=1&t=1196



Re: [squid-users] Issue when SSL bump bypass some domains

2013-12-10 Thread Neddy, NH. Nam
Hi Alex,

Excuse me, as I'm not a native English speaker. And thanks for your good
points.

I changed debug to ALL,9; that's huge, but I found what's wrong with my
setup:

2013/12/11 13:50:06.914 kid1| Acl.cc(156) matches: checking bypass-ssl
2013/12/11 13:50:06.914 kid1| DomainData.cc(131) match:
aclMatchDomainList: checking 'www.website.com'
2013/12/11 13:50:06.914 kid1| DomainData.cc(135) match:
aclMatchDomainList: 'www.website.com' NOT found

And looking back at my config, I should use dstdom_regex instead of
dstdomain if I want to use a wildcard here.
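
For reference, either of the following forms should match
www.website.com (website.com being the placeholder domain above); the
leading-dot dstdomain form also matches subdomains and avoids a regex:

acl bypass-ssl dstdomain .website.com
acl bypass-ssl dstdom_regex -i (^|\.)website\.com$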

Again, thanks for your valuable comment.
~Neddy,

On Wed, Dec 11, 2013 at 12:50 PM, Alex Rousskov
rouss...@measurement-factory.com wrote:
snip



Re: [squid-users] max_user_ip not working in 3.4?

2013-12-10 Thread Eliezer Croitoru

Hey Romeo,

The main issue is that the -s option is not being parsed, while just a
plain 5 will not break squid -k parse.
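
If the strict behaviour can be given up for now, a possible (untested)
workaround suggested by that observation is to drop the flag:

acl concurrent_browsing max_user_ip 5
http_access deny concurrent_browsing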


Can you file a bug at Bugzilla so we can follow up?

Eliezer

On 11/12/13 04:52, Romeo Mihalcea wrote:

I have these boxes running an early 3.0 version and I decided to upgrade
to the latest, which I did. It compiled and is working, except for one
rule which spits out an error (it was working fine till now):

acl concurrent_browsing max_user_ip -s 5
http_access deny concurrent_browsing



The error:

FATAL: Bungled /etc/squid/config/squid.conf line 19: acl
concurrent_browsing max_user_ip -s 5

Is max_user_ip dropped or replaced, or does 3.4 have a bug?

Thanks.




Re: [squid-users] Re: squid 3.3.x and machines that aren't domain members

2013-12-10 Thread Eugene M. Zheganin
Hi.

On 23.07.2013 07:50, Brendan Kearney wrote:

 your home machine, is it part of the domain that the work proxies are
 authenticating against? You would never be able to retrieve a Kerberos
 ticket from the domain to use for authentication to the proxies if your
 home machine is not part of the domain. As for NTLM, you should be able
 to use the proxies if they force auth and support NTLM. You may need to
 configure your browser to use integrated Windows authentication. IE and
 Firefox have different configs that have to be set up for each to work
 with proxies that force authentication.

 You may need to turn integrated Windows authentication off too, in the
 case where you are not part of the domain. Otherwise the user "bob"
 with a password of "blah" on the workgroup kitchen PC will be
 presenting his creds to the proxies and will never be allowed to browse.

 From the errors, it seems that no ticket is presented by your client. I
 don't see anything about NTLM. You may have fallen into the "valid
 failure" scenario, where the proxy and browser both support and agree to
 NEGOTIATE / Kerberos auth, but your client cannot supply valid
 credentials (in the form of a Kerberos ticket), and therefore you are
 not authenticated and not allowed to surf. You do not fall through to
 the next auth type supported because the agreed-upon auth method
 returned an appropriate failure.

 To get past that, and use an alternate auth method such as NTLM, you
 need to configure your browser to not use Kerberos auth. Again, IE and
 Firefox will differ in how you configure that.

So, about this problem.

Does anyone have a working method of authorizing Windows browsers on
such a proxy ? I can easily install another, just for machines that
aren't joined domain, but I kinda dislike this solution. Okkam's razor,
you know this stuff. Furthermore, I'm upgrading my old 3.2 squids to
3.3, and I like the way 3.3 is working, except this thing.

I tried to play with FF's options,. but didn't succeed - squid keeps
rejecting the authentication. I have basic auth also running, and, if
Escape is pressed on a NTLM/SPNEGO popup, a basic auth popup appears,
but FF for some reason still tried to authenticate using NTLM/SPNEGO.

Thanks.
Eugene.


Re: [squid-users] squid 3.4.1 and basic auth

2013-12-10 Thread Eliezer Croitoru

Hey Dmitry,

I was wondering about this piece of software.
There is a redirector and a DBL, and I am not sure I understood what it
does.


They have a redirector, a DBL, and another thing?

Can it be tested to do a redirect like
302:http://redirection.example.com/request?test ?

Eliezer

On 11/12/13 08:34, Dmitry Melekhov wrote:

On 11.12.2013 10:31, Dmitry Melekhov wrote:

Will contact rejik developer.



By the way, there is already an updated version...

http://rejik.ru/bb_rus/viewtopic.php?f=1&t=1196