[squid-users] problem with authentication

2013-02-26 Thread Fuhrmann, Marcel
Hi list,

I've followed this guide (semi) successfully:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory
Everything seems to work (DNS resolution, NTP, wbinfo), but Internet Explorer on the
clients brings up an authentication window. The problem is that a) it doesn't accept
any credentials, and b) I don't want this window at all.
Can somebody give me a hint?

Proxy:
CentOS 6.3 x64
Squid  3.1.10

ADS:
Windows Server 2008 R2

Clients:
Windows 7
Windows Terminalserver 2003
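
For reference, the authentication part of my squid.conf follows the wiki example and
currently looks roughly like this (helper path, principal and ACL names are
placeholders for my real values, so treat it as a sketch rather than the exact file):

# Kerberos/Negotiate helper as shipped with the CentOS 6 squid package
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -s HTTP/proxy.example.local@EXAMPLE.LOCAL
auth_param negotiate children 10
auth_param negotiate keep_alive on

# require authentication for all clients
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all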


Thanks for help


--
Marcel


Re: [squid-users] sharepoint pinning issue?

2013-02-26 Thread Alexandre Chappaz
Hi,

I have found some time to go further in the investigation, and here is
the status right now.
The behaviour is the same with only one Squid (no upstream server), and
it is also the same if I use Squid as a reverse proxy for the SharePoint
server.

As I read in some threads about this subject, the behaviour differs
depending on the browser / OS.
IE on XP works perfectly, whereas FF on XP or Linux asks for
authentication in a loop whenever trying to upload a relatively big
file to the server.
Activating the debug level and trying to analyse the headers of the
packets gave me quite a headache! What we can see is that the HTTP
headers and techniques used for POSTing files are totally different
between browsers.
I would love some help on analysing these logs.
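
For the record, the debug settings I raised were along these lines (just a sketch;
section 11 is the HTTP protocol code, section 33 the client side):

# squid.conf: keep everything at level 1, raise only the HTTP-related sections
debug_options ALL,1 11,2 33,2
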
Attached are 2 files:
capture_IE_XP : upload of a file works
capture_FF_XP : upload of a file does not

I can provide access to the SharePoint server & reverse proxy if
someone has time to jump in.
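
For anyone who wants to look at this, the forwarding side is wired roughly like the
sketch below (addresses and the peer name are placeholders, not the exact config):

# client-facing port; connection-oriented auth (NTLM/Negotiate) is allowed,
# which is what causes client connections to be pinned
http_port 3128 connection-auth=on

# SharePoint as origin server behind the reverse proxy; connection-auth=on
# lets Squid pin the authenticated client connection to this peer
cache_peer 192.168.100.XX parent 80 0 no-query originserver connection-auth=on name=sharepoint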

Best regards
Alex

2013/2/13 Amos Jeffries :
> On 13/02/2013 3:49 a.m., Alexandre Chappaz wrote:
>>
>> Hi,
>>
>> I know this is a subject that has been put on the table many times,
>> but I wanted to share with you my experience with squid + sharepoint.
>>
>> Squid Cache: Version 3.2.7-20130211-r11781
>>
>> I am having an issue with authentication:
>> when accessing the SharePoint server, I do get a login/pw popup, I can
>> log in and see some of the pages behind, but when doing some operation,
>> even though I am supposed to be logged in, the authentication popup
>> comes back.
>> Here is what I find in the access log:
>> 1360679927.561 43 X.X.X.X TCP_MISS/200 652 GET
>> http://saralex.hd.free.fr/_layouts/images/selbg.png -
>> FIRSTUP_PARENT/192.168.100.XX image/png
>
>
> URL #1. No authentication required. non-pinned connection used.
>
>
>> 1360679928.543 37 X.X.X.X TCP_MISS/401 542 GET
>> http://saralex.hd.free.fr/_layouts/listform.aspx? -
>> PINNED/192.168.100.XX -
>
>
> URL #2. Sent to upstream on already authenticated+PINNED connection.
> Upstream server requires further authentication details.
>  --> authentication challenge?
>
>
>> 1360679928.665 58 X.X.X.X TCP_MISS/401 795 GET
>> http://saralex.hd.free.fr/_layouts/listform.aspx? -
>> PINNED/192.168.100.XX -
>
>
> URL #2 repeated. Sent to upstream on already authenticated+PINNED
> connection. Upstream server requires further authentication details.
>  --> possibly authentication handshake request?
>
>
>> 1360679928.753 229 X.X.X.X TCP_MISS/200 20625 GET
>> http://saralex.hd.free.fr/_layouts/images/fgimg.png -
>> FIRSTUP_PARENT/192.168.100.XX image/png
>
>
> URL #3. No authentication required. non-pinned connection used.
>
>
>> 1360679928.788 68 X.X.X.X TCP_MISS/302 891 GET
>> http://saralex.hd.free.fr/_layouts/listform.aspx? -
>> PINNED/192.168.100.XX text/html
>
>
> URL #2 repeated. Sent to upstream on already authenticated+PINNED
> connection. Upstream server redirects the client to another URL.
>  --> authentication credentials accepted.
>
>
>> 1360679928.921 45 X.X.X.X TCP_MISS/401 542 GET
>> http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? -
>> PINNED/192.168.100.XX -
>
>
> URL #4. Sent to upstream on already authenticated+PINNED connection.
> Upstream server requires further authentication details.
>  --> authentication challenge?
>
>
>> 1360679929.019 47 X.X.X.X TCP_MISS/401 795 GET
>> http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? -
>> PINNED/192.168.100.XX -
>
>
> URL #4 repeated. Sent to upstream on already authenticated+PINNED
> connection. Upstream server requires further authentication details.
>  --> possibly authentication handshake request?
>
>
>> 1360679929.656 81 X.X.X.X TCP_MISS/200 1986 GET
>> http://saralex.hd.free.fr/_layouts/images/loadingcirclests16.gif -
>> FIRSTUP_PARENT/192.168.100.XX image/gif
>
>
> URL #5. no authentication required. non-pinned connection used.
>
>
>> 1360679930.417   1322 X.X.X.X TCP_MISS/200 130496 GET
>> http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? -
>> PINNED/192.168.100.XX text/html
>
>
> URL #4 repeated. Sent to upstream on already authenticated+PINNED
> connection. Upstream server provides the display response.
>  --> authentication credentials accepted.
>
>
>> 1360679934.618 53 X.X.X.X TCP_MISS/401 542 GET
>> http://saralex.hd.free.fr/_layouts/iframe.aspx? -
>> PINNED/192.168.100.XX -
>> 1360679934.729 51 X.X.X.X TCP_MISS/401 795 GET
>> http://saralex.hd.free.fr/_layouts/iframe.aspx? -
>> PINNED/192.168.100.XX -
>>
>> could this be a pinning issue?
>>
>> Is V2.7 STABLE managing these things in a nicer way?
>
>
> Unknown. But I doubt it. This is Squid using a PINNED connection to relay
> traffic to an upstream server. That upstream server is rejecting the client's
> delivered credentials after each object. There is no sign of proxy
> authentication taking place; this re-challenge business is all between
> client and upstream server.
>
> You need to look at whether these connections are being pinned the

[squid-users] squid3 SMP storage

2013-02-26 Thread jiluspo
Good day,

We have been using storeurl of squid2, which has worked perfectly for several
years.

But it came to the point where the CPU is congested. After the squid3
Store-ID feature was finished we tested it, but we can't deploy due to the SMP
storage limitation (http://wiki.squid-cache.org/Features/SmpScale). With squid3
HEAD, one worker can't outperform the last version of squid 2.7, so we reverted
to squid 2.7. We did plan to remove some code from squid 2.7 just to lower CPU
usage.

Are there any other storage types we could use in squid3 that can be
shared between workers?



[squid-users] Re: About bottlenecks (Max number of connections, etc.)

2013-02-26 Thread Manuel
After rebuilding the rpm without the with-maxfd=16384 and installing it in
two very different servers there are "32768 file descriptors available" for
Squid in each server.

No idea why there are not more file descriptors available. The OS config
seems to be correct with regards to file descriptors available in both
servers. Here is an example from one of them:

[root@anything ]# cat /proc/sys/fs/file-max
100451
[root@anything ]# ulimit -Hn
65535
[root@anything ]# ulimit -Sn
65535
[root@anything ]# cat /proc/sys/fs/file-max
100451
[root@anything ]# sysctl fs.file-max
fs.file-max = 100451
[root@anything ]# service squid stop
Stopping squid: .  [  OK  ]
[root@anything ]# su - squid
This account is currently not available.
[root@anything ]# service squid start
Starting squid: .  [  OK  ]
[root@anything ]# sysctl fs.file-nr
fs.file-nr = 1152   0   100451

Also, max_filedesc 98304 is set at the end of squid.conf, which clearly works
because when it is removed there are only 1026 file descriptors or so. ulimit -HSn
98304 is also added to the beginning of /etc/init.d/squid.
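
That is, roughly this (paths as on my system):

# end of /etc/squid/squid.conf
max_filedesc 98304

# top of /etc/init.d/squid, before squid is launched
ulimit -HSn 98304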

As you can see, there is now no with-maxfd=16384 when squid -v is used:
Squid Cache: Version 2.6.STABLE21
configure options:  '--host=x86_64-redhat-linux-gnu'
'--build=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--exec_prefix=/usr' '--bindir=/usr/sbin'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-arp-acl'
'--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru'
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl'
'--with-openssl=/usr/kerberos' '--enable-delay-pools'
'--enable-linux-netfilter' '--with-pthreads'
'--enable-ntlm-auth-helpers=SMB,fakeauth'
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge'
'--enable-useragent-log' '--enable-referer-log'
'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost'
'--enable-underscores'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL'
'--enable-cache-digests' '--enable-ident-lookups'
'--enable-follow-x-forwarded-for' '--enable-wccpv2' '--enable-fd-config'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'target_alias=x86_64-redhat-linux' 'CFLAGS=-D_FORTIFY_SOURCE=2 -fPIE -Os -g
-pipe -fsigned-char' 'LDFLAGS=-pie'


The only thing I can think of to try is to rebuild the rpm again, but with
with-maxfd=98304 (instead of simply removing with-maxfd=16384). Also, I will
probably try soon with a more recent version of Squid (because of the better
performance, and in order to see whether that limit of 32768 disappears or
not).
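
In other words, something like this in the spec file's configure call (the Squid 3
option name differs, if I am not mistaken):

# Squid 2.6/2.7
./configure ... --with-maxfd=98304
# Squid 3.x equivalent
./configure ... --with-filedescriptors=98304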

Any ideas?

Thank you in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650p4658732.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: store_avg_object_size -- default needs updating?

2013-02-26 Thread Alex Rousskov
On 02/25/2013 07:19 PM, Linda W wrote:
> Alex Rousskov wrote:
>> On 02/18/2013 04:01 PM, Linda W wrote:
>>> Has anyone looked at their average cached object size
>>> lately?
>>>
>>> At one point, I assume due to measurements, squid
>>> set a default to 13KB / item.
>>>
>>> About 6 or so years ago, I checked mine out:
>>> (cd /var/cache/squid;
>>> cachedirs=( $(printf "%02X " {0..63}) )
>>> echo $[$(du -sk|cut -f1)/$(find ${cachedirs[@]} -type f |wc -l)]
>>> )
>>> --- got 47K, or over 3x the default.
>>>
>>> Did it again recently:
>>> 310K/item average.
>>>
>>> Is the average size of web items going up or are these peculiar to
>>> my users' browser habits (or auto-update programs from windows
>>> going through cache, etc...).
>>
>> According to stats collected by Google in May 2010, the mean size of a
>> GET response was about 7KB:
>> https://developers.google.com/speed/articles/web-metrics
>>
>> Note that the median GET response size was less than 3KB. I doubt things
>> have changed that much since then.
> ---
> I'm pretty sure that google's stats would NOT be representative
> of the net as a whole.  Google doesn't serve content -- the service
> indexes content -- and the indices of content are going to be significantly
> smaller than the content being indexed -- especially when pictures or other
> non-text files are included.

I think you misunderstood what those "google stats" are about. They are
not about Google servers or services. They are collected from
Google-unrelated web sites around the world [that Google robots visit to
index them].



>> Google stats are biased because they are collected by Googlebot.
>> However, if you look at fresh HTTP archive stats, they seem to give a
>> picture closer to 2010 Google stats than to yours:
>> http://httparchive.org/trends.php#bytesTotalreqTotal
>>
>> (I assume you need to divide bytesTotal by reqTotal to get mean response
>> size of about 14KB).


>   But I'll betcha they don't have any download sites on their
> top list. 


You lost that bet (they do have download sites), but you are right (they
do not count what you consider "downloads"):

>> http://httparchive.org/about.php#listofurls
>> How is the list of URLs generated?
>> 
>> Starting in November 2011, the list of URLs is based solely on the
>> Alexa Top 1,000,000 Sites (zip).

For example, download.com (#200), filehippo.com (#600), last.fm (#908),
and iso.com (#20619) are on that Alexa list (FWIW). However:

>> What are the limitations of this testing methodology (using lists)?
>> 
>> The HTTP Archive examines each URL in the list, but does not crawl
>> the website's other pages. Although these lists of websites (Fortune
>> 500 and Alexa Top 500 for example) are well known, the entire
>> website doesn't necessarily map well to a single URL.

so they probably do not get to download very large objects during
their crawl.


> Add in 'downloads.suse.org' and see how the numbers tally.

Alexa rates suse.org as #20'046'765 in the world so it was not included
in the 1M "top sites" HTTP archive sample, but it is probably not
popular enough to significantly affect statistics of an average Squid.


> Seriously -- stats that cut off anything > 4M are going to strongly
> bias things.

Yes, of course. For example, Google stats do not show any responses
exceeding 35MB, which means they were not downloading really large files.

If you find an unbiased source of information we can use to adjust
default averages, please post.


Thank you,

Alex.



Re: [squid-users] squid3 SMP storage

2013-02-26 Thread Alex Rousskov
On 02/26/2013 09:28 AM, jiluspo wrote:

>   Are there any other storage types we could use in squid3 that can be
> shared between workers?

Memory cache and Rock disk cache can be shared among workers.
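
A minimal sketch of what that looks like in squid.conf (sizes and paths are
placeholders; note that rock entries are limited to the slot size, 32 KB by
default, in current releases):

workers 4
# the shared memory cache is used by all workers
cache_mem 1024 MB
memory_cache_shared on
# a rock cache_dir is shared among all workers
cache_dir rock /var/cache/squid/rock 16384 max-size=32768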

Alex.



[squid-users] comm_open: socket failure: (13) Permission denied

2013-02-26 Thread franckm . 2005
Squid does not want to work on Debian squeeze armel, kernel 2.6.34. I have
searched but not found a solution.
Do you have any ideas?

root@serveur:~# /etc/init.d/squid start
Starting Squid HTTP proxy: squid.
root@serveur:~# squid[760]: Squid Parent: child process 763 started
squid[760]: Squid Parent: child process 763 exited due to signal 6
squid[760]: Squid Parent: child process 765 started
squid[760]: Squid Parent: child process 765 exited due to signal 6
squid[760]: Squid Parent: child process 767 started
squid[760]: Squid Parent: child process 767 exited due to signal 6
squid[760]: Squid Parent: child process 769 started
squid[760]: Squid Parent: child process 769 exited due to signal 6
squid[760]: Squid Parent: child process 771 started
squid[760]: Squid Parent: child process 771 exited due to signal 6
squid[760]: Exiting due to repeated, frequent failures

root@serveur:~# tail /var/log/squid/cache.log
2013/02/26 00:06:20| Starting Squid Cache version 2.7.STABLE9 for
arm-debian-linux-gnu...
2013/02/26 00:06:20| Process ID 1449
2013/02/26 00:06:20| With 1024 file descriptors available
2013/02/26 00:06:20| Using epoll for the IO loop
2013/02/26 00:06:20| comm_open: socket failure: (13) Permission denied
FATAL: Could not create a DNS socket
Squid Cache (Version 2.7.STABLE9): Terminated abnormally.
CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Maximum Resident Size: 6608 KB
Page faults with physical i/o: 0

root@serveur:~# ping google.com
PING google.com (74.125.230.225): 56 data bytes
64 bytes from 74.125.230.225: icmp_seq=0 ttl=56 time=25.000 ms
64 bytes from 74.125.230.225: icmp_seq=1 ttl=56 time=29.000 ms
64 bytes from 74.125.230.225: icmp_seq=2 ttl=56 time=26.000 ms
64 bytes from 74.125.230.225: icmp_seq=3 ttl=56 time=27.000 ms
^C--- google.com ping statistics ---
5 packets transmitted, 4 packets received, 20% packet loss
round-trip min/avg/max/stddev = 25.000/26.750/29.000/1.479 ms

Best regards,
Franck



[squid-users] ext_time_quota_acl default rule

2013-02-26 Thread David Touzeau

Dear

I would like to create a default rule in the ext_time_quota_acl helper.
According to the config file format in man.8, you must match an identity with a
quota, e.g.:


john 8h / 1d

In our case, we need to specify an "all" token as a default rule,
e.g.:

* 24h/4d


Is it possible to do that?



Re: [squid-users] ext_time_quota_acl default rule

2013-02-26 Thread Amos Jeffries

On 27/02/2013 8:13 p.m., David Touzeau wrote:

> Dear
>
> I would like to create a default rule in the ext_time_quota_acl helper.
> According to the config file format in man.8, you must match an identity
> with a quota, e.g.:
>
> john 8h / 1d
>
> In our case, we need to specify an "all" token as a default rule,
> e.g.:
>
> * 24h/4d
>
> Is it possible to do that?


The present version of the helper does not support that.

It can be made to do so fairly easily. Patches welcome (or sponsorship for
coding - mail me directly if you want that).
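
For reference, the helper is normally wired up along these lines (paths and ACL
names below are examples rather than a drop-in config):

# squid.conf
external_acl_type time_quota ttl=60 %LOGIN /usr/lib/squid3/ext_time_quota_acl /etc/squid3/time_quota.conf
acl within_quota external time_quota
http_access deny !within_quota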


Amos


Re: [squid-users] comm_open: socket failure: (13) Permission denied

2013-02-26 Thread Amos Jeffries

On 27/02/2013 7:29 p.m., franckm.2...@free.fr wrote:

> Squid does not want to work on Debian squeeze armel, kernel 2.6.34. I have
> searched but not found a solution.
> Do you have any ideas?
>
> root@serveur:~# tail /var/log/squid/cache.log
> 2013/02/26 00:06:20| Starting Squid Cache version 2.7.STABLE9 for
> arm-debian-linux-gnu...


Time for an upgrade. The squeeze squid3 package is not too bad, although 
I would recommend using the squid3 package from the Wheezy distribution 
if you can.



> 2013/02/26 00:06:20| Process ID 1449
> 2013/02/26 00:06:20| With 1024 file descriptors available
> 2013/02/26 00:06:20| Using epoll for the IO loop
> 2013/02/26 00:06:20| comm_open: socket failure: (13) Permission denied
> FATAL: Could not create a DNS socket


There is your reason. Strange though. Squid uses a random outgoing port 
for DNS to avoid this type of problem amongst other things.
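
If it is not obvious from the config, a quick way to see exactly which socket()
call is being refused is something like this (binary path is the Debian default;
adjust as needed):

# run Squid in the foreground and trace only network-related syscalls
strace -f -e trace=network /usr/sbin/squid -N -d1 2>&1 | grep -B2 EACCES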


Amos