[squid-users] linux oom situation

2013-12-30 Thread Oguz Yilmaz
Hello,

I have a continuous OOM & panic situation, still unresolved, originating
from squid. I am not sure the system even fills up all the RAM (36GB). Why
did this system trigger this OOM situation? Is it about some other kind of
memory? highmem? lowmem? stack size?

Best Regards,



Kernel 3.10.24
Squid 3.1.14-12
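
For context: the oom-killer line below reports an order-3 allocation
(gfp_mask=0x42d0, i.e. 8 contiguous pages) from inside
tcp_sendmsg/sk_page_frag_refill, so lowmem fragmentation rather than total
RAM exhaustion may be the trigger. Commands like these can show that (a
diagnostic sketch, not output from this box):

# lowmem/highmem split as seen by a 32-bit kernel
free -lm
# free pages per order; empty high-order columns indicate fragmentation
cat /proc/buddyinfo
# possible mitigation: keep a larger emergency reserve (value is an example)
sysctl -w vm.min_free_kbytes=65536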

Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked
oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0
Dec 27 09:19:05 2013 kernel: : [277622.359069] squid cpuset=/ mems_allowed=0
Dec 27 09:19:05 2013 kernel: : [277622.359074] CPU: 9 PID: 15533 Comm:
squid Not tainted 3.10.24-1.lsg #1
Dec 27 09:19:05 2013 kernel: : [277622.359076] Hardware name: Intel
Thurley/Greencity, BIOS 080016  10/05/2011
Dec 27 09:19:05 2013 kernel: : [277622.359078]  0003 e377b280 e03c3c38 c06472d6 e03c3c98 c04d2d96 c0a68f84 e377b580
Dec 27 09:19:05 2013 kernel: : [277622.359089]  42d0 0003  e03c3c64 c04abbda e42bd318  e03c3cf4
Dec 27 09:19:05 2013 kernel: : [277622.359096]  42d0 0001 0247  e03c3c94 c04d3d5f 0001 0042
Dec 27 09:19:05 2013 kernel: : [277622.359105] Call Trace:
Dec 27 09:19:05 2013 kernel: : [277622.359116]  [] dump_stack+0x16/0x20
Dec 27 09:19:05 2013 kernel: : [277622.359121]  [] dump_header+0x66/0x1c0
Dec 27 09:19:05 2013 kernel: : [277622.359127]  [] ? __delayacct_freepages_end+0x3a/0x40
Dec 27 09:19:05 2013 kernel: : [277622.359131]  [] ? zone_watermark_ok+0x2f/0x40
Dec 27 09:19:05 2013 kernel: : [277622.359135]  [] check_panic_on_oom+0x37/0x60
Dec 27 09:19:05 2013 kernel: : [277622.359138]  [] out_of_memory+0x92/0x250
Dec 27 09:19:05 2013 kernel: : [277622.359144]  [] ? wakeup_kswapd+0xda/0x120
Dec 27 09:19:05 2013 kernel: : [277622.359148]  [] __alloc_pages_nodemask+0x68e/0x6a0
Dec 27 09:19:05 2013 kernel: : [277622.359155]  [] sk_page_frag_refill+0x7e/0x120
Dec 27 09:19:05 2013 kernel: : [277622.359160]  [] tcp_sendmsg+0x387/0xbf0
Dec 27 09:19:05 2013 kernel: : [277622.359166]  [] ? put_prev_task_fair+0x1f/0x350
Dec 27 09:19:05 2013 kernel: : [277622.359173]  [] ? longrun_init+0x2b/0x30
Dec 27 09:19:05 2013 kernel: : [277622.359177]  [] ? tcp_tso_segment+0x380/0x380
Dec 27 09:19:05 2013 kernel: : [277622.359182]  [] inet_sendmsg+0x4a/0xa0
Dec 27 09:19:05 2013 kernel: : [277622.359186]  [] sock_aio_write+0x116/0x130
Dec 27 09:19:05 2013 kernel: : [277622.359191]  [] ? hrtimer_try_to_cancel+0x3c/0xb0
Dec 27 09:19:05 2013 kernel: : [277622.359197]  [] do_sync_write+0x68/0xa0
Dec 27 09:19:05 2013 kernel: : [277622.359202]  [] vfs_write+0x190/0x1b0
Dec 27 09:19:05 2013 kernel: : [277622.359206]  [] SyS_write+0x53/0x80
Dec 27 09:19:05 2013 kernel: : [277622.359211]  [] sysenter_do_call+0x12/0x22
Dec 27 09:19:05 2013 kernel: : [277622.359213] Mem-Info:
Dec 27 09:19:05 2013 kernel: : [277622.359215] DMA per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359218] CPU 0: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359220] CPU 1: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359222] CPU 2: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359224] CPU 3: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359226] CPU 4: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359228] CPU 5: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359230] CPU 6: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359232] CPU 7: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359234] CPU 8: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359236] CPU 9: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359238] CPU 10: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359240] CPU 11: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359242] CPU 12: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359244] CPU 13: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359246] CPU 14: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359248] CPU 15: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359250] CPU 16: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359253] CPU 17: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359255] CPU 18: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359258] CPU 19: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359260] CPU 20: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359262] CPU 21: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359264] CPU 22: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359266] CPU 23: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359268] Normal per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359270] CPU 0: hi: 186, btch:

[squid-users] caching Smooth Streaming

2013-09-20 Thread Oguz Yilmaz
Hello

Do you have any working practice for caching MS Smooth Streaming videos?
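
What I have sketched so far, untested, assuming the default URL layout where
fragments are requested as .../QualityLevels(...)/Fragments(...):

# treat Smooth Streaming fragments as cacheable for up to a day (placeholder times)
refresh_pattern -i /Fragments\( 1440 90% 43200
refresh_pattern -i \.isml?/ 1440 90% 43200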

Oguz


Re: [squid-users] getting https pages from peer on ssl-bump mode

2012-10-08 Thread Oguz Yilmaz
Thank you for detailed explanations.




On Mon, Oct 8, 2012 at 1:25 AM, Amos Jeffries  wrote:
> On 08.10.2012 01:16, Oguz Yilmaz wrote:
>>
>> I am trying with ssl-bump. I am using squid 3.1.21.
>>
>> First of all I got the CANNOT FORWARD error page. When I debugged, I found:
>>
>> 2012/10/07 14:27:49.380| fwdConnectStart: Ssl bumped connections
>> through parrent proxy are not allowed
>> 2012/10/07 14:27:49.380| forward.cc(286) fail: ERR_CANNOT_FORWARD
>> "Service Unavailable"
>>
>>
>> Then I added an always_direct rule and was able to reach the https site.
>>
>> acl HTTPS proto HTTPS
>> always_direct allow HTTPS
>>
>>
>> According to the message above and a reply from Amos in another thread,
>> squid stopped fetching https over peers because "it does not re-encrypt
>> the ssl connection for the peer". Fetching https pages through peers was
>> the previous behaviour, and I do not understand why squid cannot get
>> pages from peers instead of going direct. I assume it is about software
>> architecture.
>
>
> Without SSL-bump the HTTPS is seen by Squid as a CONNECT request with
> encrypted binary data. The CONNECT request and data can be safely sent to a
> peer and the reply shunted straight back to the client. This is otherwise
> known as a blind tunnel / binary tunnel through the HTTP proxy.
>  This can be done safely whether the peer supports SSL or not. Or even
> whether your proxy supports SSL or not.
>
>
> With SSL-bump the CONNECT wrapper request is removed and the encrypted data
> decrypted. THEN Squid handles the decrypted request almost as if it were sent
> in to a https_port. Squid does not support adding the CONNECT request
> wrapper back when passing the request to non-SSL peers. If the request were
> relayed out to a peer this would result in HTTPS being decrypted then sent "in
> the clear" to any peers - seriously breaking the security of HTTPS. (You say
> earlier Squid did that; we know, that got fixed pretty soon after it was
> found, but there are still a few releases which do it.)
>  In reverse-proxy we can safely assume that peers are part of a trusted
> backend system for the reverse-proxy. For the corporate situations where
> SSL-Bump is used we CANNOT make that assumption safely even if the peer has
> the SSL connection options configured and must, for now, block relaying to
> peers.
>
>
>
>
>>
>> Is this the current situation (3.HEAD)? Are there any projects to
>> implement fetching SSL pages over peers?
>
>
> All current Squid releases share the above behaviour. SSL-Bump in 3.1 was
> experimental and rather limited in what it can do. I recommend using at
> least 3.2 for less client annoyance, preferably use 3.3 for the best
> SSL-Bump behaviour (server-first bumping fixes a few other security
> problems).
>
> As to work underway; I made some effort to work towards re-wrapping CONNECT
> on outbound requests for another project unrelated to SSL-Bump but sharing
> the same requirement. It is still in the planning stages with no timeline
> for any code. Any contributions toward that would be welcome.
>
>
>
>> Because this mode obliges me
>> to choose between:
>> a- doing https filtering in squid and not forwarding https to
>> dansguardian (I use https domain-name filtering on DG)
>> b- not doing https filtering and continuing with https domain-name
>> filtering on dansguardian.
>>
>
> So far as I'm aware anything you can do in DG can also be done in Squid. So
> (a) is your best option.
>
>
>
>>
>>
>> 2012/10/07 14:35:50.142| peerSelectCallback: https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| Failed to select source for
>> 'https://www.haberturk.com/'
>> 2012/10/07 14:35:50.142|   always_direct = -1
>
>
> Hmm. -1 here is strange. It means some lookup (authentication, or IDENT or
> external ACL) is being waited for. Scan your whole config for always_direct
> lines and check their order carefully.
>
> The "always_direct allow HTTPS" should have produced "1" there and made your
> Squid use DNS results for www.haberturk.com instead of ERR_CANNOT_FORWARD.
>
>
> Amos
>
>
>> 2012/10/07 14:35:50.142|never_direct = 0
>> 2012/10/07 14:35:50.142|timedout = 0
>> 2012/10/07 14:35:50.142| fwdStartComplete: https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| fwdStartFail: https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| forward.cc(286) fail: ERR_CANNOT_FORWARD
>> "Service Unavailable"
>> https://www.haberturk.com/
>> 2012/10/07 14:35:50.142| StoreEntry::unlock: key

[squid-users] getting https pages from peer on ssl-bump mode

2012-10-07 Thread Oguz Yilmaz
I am trying with ssl-bump. I am using squid 3.1.21.

First of all I got the CANNOT FORWARD error page. When I debugged, I found:

2012/10/07 14:27:49.380| fwdConnectStart: Ssl bumped connections
through parrent proxy are not allowed
2012/10/07 14:27:49.380| forward.cc(286) fail: ERR_CANNOT_FORWARD
"Service Unavailable"


Then I added an always_direct rule and was able to reach the https site.

acl HTTPS proto HTTPS
always_direct allow HTTPS


According to the message above and a reply from Amos in another thread,
squid stopped fetching https over peers because "it does not re-encrypt
the ssl connection for the peer". Fetching https pages through peers was
the previous behaviour, and I do not understand why squid cannot get
pages from peers instead of going direct. I assume it is about software
architecture.

Is this the current situation (3.HEAD)? Are there any projects to
implement fetching SSL pages over peers? Because this mode obliges me
to choose between:
a- doing https filtering in squid and not forwarding https to
dansguardian (I use https domain-name filtering on DG)
b- not doing https filtering and continuing with https domain-name
filtering on dansguardian.
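
If I go with (a), the https domain-name filtering I currently do on DG would
look roughly like this in squid (a sketch with a placeholder domain list):

acl CONNECT method CONNECT
acl denied_ssl_domains dstdomain .example.com
# deny the raw CONNECT as well as the bumped (decrypted) requests
http_access deny CONNECT denied_ssl_domains
http_access deny denied_ssl_domains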



2012/10/07 14:35:50.142| peerSelectCallback: https://www.haberturk.com/
2012/10/07 14:35:50.142| Failed to select source for
'https://www.haberturk.com/'
2012/10/07 14:35:50.142|   always_direct = -1
2012/10/07 14:35:50.142|never_direct = 0
2012/10/07 14:35:50.142|timedout = 0
2012/10/07 14:35:50.142| fwdStartComplete: https://www.haberturk.com/
2012/10/07 14:35:50.142| fwdStartFail: https://www.haberturk.com/
2012/10/07 14:35:50.142| forward.cc(286) fail: ERR_CANNOT_FORWARD
"Service Unavailable"
https://www.haberturk.com/
2012/10/07 14:35:50.142| StoreEntry::unlock: key
'31F6E0CCC4924D82F5F0070DE997' count=2
2012/10/07 14:35:50.142| FilledChecklist.cc(168) ~ACLFilledChecklist:
ACLFilledChecklist destroyed 0x91502d0
2012/10/07 14:35:50.142| ACLChecklist::~ACLChecklist: destroyed 0x91502d0
2012/10/07 14:35:50.142| forward.cc(164) ~FwdState: FwdState destructor starting
2012/10/07 14:35:50.142| Creating an error page for entry 0x9152990
with errorstate 0x91504a0 page id 13
2012/10/07 14:35:50.142| StoreEntry::lock: key
'31F6E0CCC4924D82F5F0070DE997' count=3
2012/10/07 14:35:50.142| errorpage.cc(1075) BuildContent: No existing
error page language negotiated for ERR_CANNOT_FORWARD. Using default
error file.

Best Regards,



--
Oguz YILMAZ


[squid-users] squid 3.2.2 suddenly dies in first request from cache_peer

2012-10-07 Thread Oguz Yilmaz
 plugin modules loaded: 0
2012/10/07 13:13:47 kid1| Accepting HTTP Socket connections at
local=0.0.0.0:3129 remote=[::] FD 9 flags=9
2012/10/07 13:13:48 kid1| storeLateRelease: released 0 objects






--
Oguz YILMAZ


[squid-users] bump server first

2012-10-03 Thread Oguz Yilmaz
Hello,

I just wanted to learn the status of the ssl_bump server-first patch. Is it
in trunk? For testing, should I apply the patch directly, or is it possible
to get it from trunk?

Thank you,

--
Oguz YILMAZ


Re: [squid-users] prevent attempts to go outside for any noncached objects during offline_mode on

2012-09-10 Thread Oguz Yilmaz
--
Oguz YILMAZ


On Mon, Sep 10, 2012 at 4:24 PM, Eliezer Croitoru  wrote:
> On 9/10/2012 2:48 PM, Oguz Yilmaz wrote:
>>
>> Hi,
>>
>> I have read about the workings of offline_mode on the list. It is an
>> aggressive caching parameter which prevents refreshing of already
>> cached objects and serves them from the cache. However, if the object is
>> not cached, squid still tries to fetch it directly or through a cache
>> peer. Do you have any suggestions for implementing a "completely offline"
>> state with current squid parameters, to prevent attempts to fetch
>> uncached objects as well?
>>
> This feature is not part of squid anymore.
> There are other solutions for this kind of cache out there.

I am willing to hear about those.

> squid tries to be a more http-friendly cache than just "cache all and ask
> questions later".
>
> If you have a case where you need to cache more objects, squid can have
> stricter cache acls for that.

What I am asking is not that. I only ask that squid not forward requests
anywhere when the content is not cached (not available). (This adds a lot
of latency while offline_mode is on and the internet is disconnected.)
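
The closest I have found so far is refusing cache misses outright, e.g. (a
sketch; miss_access turns every would-be fetch into an immediate error page
instead of a slow timeout):

offline_mode on
# serve only what is already cached; never go direct or to a peer
miss_access deny all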

> If you do have a specific case we can maybe try recommend you how to
> implement it with squid.
> But you need first to understand what you do before doing it.

I am clear on what I want :) and am waiting for your kind recommendations.

Regards,

>
> Good luck,
> Eliezer
>
>> Regards,
>>
>>
>>
>> --
>> Oguz YILMAZ
>>
>
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il


[squid-users] prevent attempts to go outside for any noncached objects during offline_mode on

2012-09-10 Thread Oguz Yilmaz
Hi,

I have read about the workings of offline_mode on the list. It is an
aggressive caching parameter which prevents refreshing of already
cached objects and serves them from the cache. However, if the object is
not cached, squid still tries to fetch it directly or through a cache
peer. Do you have any suggestions for implementing a "completely offline"
state with current squid parameters, to prevent attempts to fetch
uncached objects as well?

Regards,



--
Oguz YILMAZ


Re: [squid-users] Enhancing NTLM Authentication to Remote Site Active Directory server

2011-11-02 Thread Oguz Yilmaz
--
Oguz YILMAZ



On Wed, Nov 2, 2011 at 1:44 AM, Amos Jeffries  wrote:
> On Tue, 1 Nov 2011 11:53:34 +0200, Oguz Yilmaz wrote:
>>
>> Hi,
>>
>> We use NTLM authentication with Squid in some setups. In those setups the
>> local machine joins Active Directory and the squid ntlm_auth helper
>> authenticates through the local samba service. Users transparently
>> authenticate through the NTLM authentication handshake on HTTP without
>> entering any password in their browser.
>>
>> However, in some cases branch offices have no local Active Directory
>> copy. The branch office is connected to the headquarters through an IPSEC
>> vpn. I can join the branch office samba to the headquarters Active
>> Directory domain and set up NTLM authentication on Squid correctly.
>>
>> This setup has a weakness inherited from high latency, packet loss, or
>> some other things about samba that I don't know. 3-4 times a day
>> users get prompted with a username/password authentication popup in
>> their browser. Sometimes this recovers naturally in a few minutes.
>> However, in some cases it requires rejoining the domain (wbinfo -t
>> gives an error and wbinfo -l cannot list users).
>>
>> I have made some tunings in samba:
>>
>>   getwd cache = yes
>>   winbind cache time = 3000
>>   ldap connection timeout = 10
>>   ldap timeout = 120
>>
>> This decreased the error rate to 1 per day.
>>
>> Which other tunings can I do on samba and squid? I need your experiences.
>
> Firstly, the validation lag is internal to the authentication system, which
> consists of the helper and everything it does and uses. There is nothing
> squid can do about the auth system's internal lag. As indicated by the fact
> that tweaking samba resolved a lot of the problem.
>

Right.

>
> There are a few workarounds to avoid doing the validations by Squid though.
>
> Firstly and most preferred is to move to Negotiate/Kerberos authentication.
> It is more than twice as efficient as NTLM and offers modern security
> algorithms for much higher security.
>
>

Does Negotiate/Kerberos auth support transparent authentication for
client browsers? What is the replacement for ntlm challenge/response?

Is this the right page to start?:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
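
From that page, the helper setup looks roughly like this (a sketch; the
helper path, SPN and realm are placeholders for our domain, untested here):

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.my.dom@MY.DOM
auth_param negotiate children 20
auth_param negotiate keep_alive on
acl kerb_auth proxy_auth REQUIRED
http_access allow kerb_auth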

> NTLM authentication handshake is done once per TCP connection, and applies
> only to that connection. So credentials can only be "cached" for as long as
> that TCP connection is active/persisting.
>
> Which should indicate what the fix is:
>  Get persistent connections to the clients staying up as long as possible.
> At present that means you need the latest HTTP/1.1 supporting Squid to
> maximize the keep-alive feature compliance. I recommend 3.2.0.8 if you are
> able, otherwise the latest 3.1 series, or 2.7.STABLE9 (in that order of
> preference). Avoid 3.0, 2.6 and older Squid series.
>
>>
>> Best Regards,
>>
>>
>> squid.conf:
>>
>> auth_param ntlm program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-ntlmssp
>> auth_param ntlm children 20
>> auth_param ntlm keep_alive off
>
> keep-alive ON.
>
>
> Also check:
>  client_persistent_connections ON
>
>
>>
>> auth_param basic program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic
>> auth_param basic children 20
>> auth_param basic realm Squid AD Auth
>> auth_param basic credentialsttl 2 hours
>> auth_param basic casesensitive off
>>
>
> Basic is checked once per request, with credentialsttl being how often
> the backend gets checked for updates to the yes/no answer. You may be able
> to extend the credentialsttl longer for fewer backend checks. Impact of this
> tweak depends on how much of the client software is failing to support NTLM
> and choosing Basic though.
>
> Amos
>


[squid-users] Enhancing NTLM Authentication to Remote Site Active Directory server

2011-11-01 Thread Oguz Yilmaz
Hi,

We use NTLM authentication with Squid in some setups. In those setups the
local machine joins Active Directory and the squid ntlm_auth helper
authenticates through the local samba service. Users transparently
authenticate through the NTLM authentication handshake on HTTP without
entering any password in their browser.

However, in some cases branch offices have no local Active Directory
copy. The branch office is connected to the headquarters through an IPSEC
vpn. I can join the branch office samba to the headquarters Active
Directory domain and set up NTLM authentication on Squid correctly.

This setup has a weakness inherited from high latency, packet loss, or
some other things about samba that I don't know. 3-4 times a day
users get prompted with a username/password authentication popup in
their browser. Sometimes this recovers naturally in a few minutes.
However, in some cases it requires rejoining the domain (wbinfo -t
gives an error and wbinfo -l cannot list users).

I have made some tunings in samba:

   getwd cache = yes
   winbind cache time = 3000
   ldap connection timeout = 10
   ldap timeout = 120

This decreased the error rate to 1 per day.

Which other tunings can I do on samba and squid? I need your experiences.

Best Regards,


squid.conf:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20
auth_param ntlm keep_alive off

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 20
auth_param basic realm Squid AD Auth
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off



/etc/samba/smb.conf:

[global]
   netbios name = SQUID
   realm = MY.DOM
   workgroup = my.dom
   security = ads
   encrypt passwords = yes
   password server = 172.16.5.10
   log level = 3
   log file = /var/log/samba.log
   ldap ssl = no
   idmap uid = 1-2
   idmap gid = 1-2

   winbind separator = /
   winbind enum users = yes
   winbind enum groups = yes
   winbind use default domain = yes

   domain master = no
   local master = no
   preferred master = no

   template shell = /sbin/nologin

   getwd cache = yes
   winbind cache time = 3000
   ldap connection timeout = 10
   ldap timeout = 120



/etc/krb5.conf:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MY.DOM
 default_tkt_enctypes = rc4-hmac des-cbc-crc
 default_tgs_enctypes = rc4-hmac des-cbc-crc
# dns_lookup_realm = false
# dns_lookup_kdc = false

 dns_lookup_realm = false
 dns_lookup_kdc = false
[realms]
 MY.DOM = {
  kdc = 172.16.5.10
  admin_server = 172.16.5.10
  default_domain = MY.DOM
 }

[domain_realm]
 .ronesans.hol = MY.DOM
  ronesans.hol = MY.DOM



--
Oguz YILMAZ


[squid-users] https proxying with squid and on the fly local CA produced site certificates

2011-06-14 Thread Oguz Yilmaz
At the moment I redirect https (tcp 443) requests into a squid https_port.
It works as expected: it terminates the ssl connection with the ssl
certificate installed in Squid, and then proxies the traffic.

In this setup, end users get invalid-certificate errors, of course.

I want to establish a local CA and install the public certificate of
this local CA on the end users' client PCs. Squid should take the target
domain name and create an ssl certificate for that target domain with the
local CA on the fly. Because the CA public certificate is installed under
Trusted Root Certification Authorities on the client PC, the client IE
will not give any errors, will trust the site certificate, and this
provides truly transparent https proxying.

An open source tool, imspector, does the same setup successfully for
another purpose.

I am trying to find a way to implement such a setup with squid and I need
your kind comments.
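
Manually, the per-site issuing step I want squid to automate would look like
this with openssl (a sketch with placeholder names):

# one-time: create the local CA that client PCs will trust
openssl genrsa -out /etc/squid/certs/ca.key 2048
openssl req -new -x509 -days 1825 -key /etc/squid/certs/ca.key \
  -subj "/O=Local Filter/CN=Local Filtering CA" -out /etc/squid/certs/ca.crt

# per target domain, on the fly: issue a certificate whose CN matches the site
openssl genrsa -out site.key 1024
openssl req -new -key site.key -subj "/CN=www.example.com" -out site.csr
openssl x509 -req -in site.csr -CA /etc/squid/certs/ca.crt \
  -CAkey /etc/squid/certs/ca.key -CAcreateserial -days 365 -out site.crt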

Best regards,


--
Oguz YILMAZ


Re: [squid-users] Re: maxconn acl with acl_uses_indirect_client

2011-01-28 Thread Oguz Yilmaz
OK.

How can I apply per-"indirect client" connection limiting in squid?

--
Oguz YILMAZ



On Fri, Jan 28, 2011 at 3:19 PM, Amos Jeffries  wrote:
> On 29/01/11 01:57, Oguz Yilmaz wrote:
>>
>> I think I have found it in client_db:
>> it confirms that client_db records the "client address", not the "indirect
>> client address", even if "acl_uses_indirect_client=on":
>>
>
> maxconn controls how many TCP links from each client may be connected to
> Squid. In the case of indirect clients they always have zero TCP links
> directly to Squid.
>
> ... updating our docs to mention this.
>
>
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.10
>  Beta testers wanted for 3.2.0.4
>


[squid-users] Re: maxconn acl with acl_uses_indirect_client

2011-01-28 Thread Oguz Yilmaz
I think I have found it in client_db:
it confirms that client_db records the "client address", not the "indirect
client address", even if "acl_uses_indirect_client=on":


 mgr:client_list

HTTP/1.0 200 OK
Server: squid/3.1.9
Mime-Version: 1.0
Date: Fri, 28 Jan 2011 12:57:35 GMT
Content-Type: text/plain
Expires: Fri, 28 Jan 2011 12:57:35 GMT
Last-Modified: Fri, 28 Jan 2011 12:57:35 GMT
X-Cache: MISS from localhost.localdomain
X-Cache-Lookup: MISS from localhost.localdomain:3129
Via: 1.0 localhost.localdomain (squid/3.1.9)
Connection: close

Cache Clients:
Address: 127.0.0.1
Name:localhost.localdomain
Currently established connections: 36
ICP  Requests 0
HTTP Requests 217
TCP_MISS 216 100%
TCP_DENIED 1   0%

TOTALS
ICP : 0 Queries, 0 Hits (  0%)
HTTP: 217 Requests, 0 Hits (  0%)



Squid is (squid/3.1.9).
The previous proxy is Dansguardian, and users have the dansguardian port in
their proxy configuration.

--
Oguz YILMAZ



On Fri, Jan 28, 2011 at 2:52 PM, Oguz Yilmaz  wrote:
> To sum up, I think the maxconn acl directive does not use indirect
> client addresses even when "acl_uses_indirect_client=on".
>
>
> follow_x_forwarded_for allow all
> acl_uses_indirect_client on
> client_db on
> acl maxconn-per-client maxconn 2
> acl client-192.168.0.1 src 192.168.0.1/32
> http_access deny maxconn-per-client client-192.168.0.1
>
>
> With such a configuration, when I debug squid through cache.log, it returns
> true for 192.168.0.1 (that is, acl_uses_indirect_client works), but it
> never returns true for "acl maxconn-per-client maxconn 2" even when it
> should.
>
> To verify, I added "client_ip_max_connections 2" just after the "client_db
> on" line.
>
> In the log I see
>
> 2011/01/28 14:44:41| WARNING: 127.0.0.1:35383 attempting more than 2
> connections.
> 2011/01/28 14:44:41| httpAccept: FD 13: accept failure: (0) Success
>
> To verify, I checked mgr:info:
>        Number of clients accessing cache:      1
> (network is about 25 PCs)
>
> This makes me think that client_db records the client as
> 127.0.0.1, the previous proxy's IP, even though I enabled
> acl_uses_indirect_client.
>
> 1- Is this true?
> 2- How can I see the client_db database?
> 3- How can I apply per-"indirect client" connection limiting in squid?
>
>
> Note:
> This configuration works correctly with the indirect client IP address, so
> I assume "acl_uses_indirect_client on" is working.
> follow_x_forwarded_for allow all
> acl_uses_indirect_client on
> client_db on
> acl oguz src 192.168.0.170/255.255.255.255
> tcp_outgoing_address 172.16.1.1 oguz
>
> Best Regards,
>
> --
> Oguz YILMAZ
>


[squid-users] maxconn acl with acl_uses_indirect_client

2011-01-28 Thread Oguz Yilmaz
To sum up, I think the maxconn acl directive does not use indirect
client addresses even when "acl_uses_indirect_client=on".


follow_x_forwarded_for allow all
acl_uses_indirect_client on
client_db on
acl maxconn-per-client maxconn 2
acl client-192.168.0.1 src 192.168.0.1/32
http_access deny maxconn-per-client client-192.168.0.1


With such a configuration, when I debug squid through cache.log, it returns
true for 192.168.0.1 (that is, acl_uses_indirect_client works), but it
never returns true for "acl maxconn-per-client maxconn 2" even when it
should.

To verify, I added "client_ip_max_connections 2" just after the "client_db on" line.

In the log I see

2011/01/28 14:44:41| WARNING: 127.0.0.1:35383 attempting more than 2
connections.
2011/01/28 14:44:41| httpAccept: FD 13: accept failure: (0) Success

To verify, I checked mgr:info:
Number of clients accessing cache:  1
(network is about 25 PCs)

This makes me think that client_db records the client as
127.0.0.1, the previous proxy's IP, even though I enabled
acl_uses_indirect_client.

1- Is this true?
2- How can I see the client_db database?
3- How can I apply per-"indirect client" connection limiting in squid?
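
So far the only view of the client_db I have found is the cache manager
report (a sketch of the command, assuming squid listens on 3129):

squidclient -p 3129 mgr:client_list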


Note:
This configuration works correctly with the indirect client IP address, so
I assume "acl_uses_indirect_client on" is working.
follow_x_forwarded_for allow all
acl_uses_indirect_client on
client_db on
acl oguz src 192.168.0.170/255.255.255.255
tcp_outgoing_address 172.16.1.1 oguz

Best Regards,

--
Oguz YILMAZ


Re: [squid-users] The method for SSL Mitm Proxying without browser warnings

2010-12-14 Thread Oguz Yilmaz
The error on IE is "The security certificate presented by this web site
was issued for a different web site's address." I think it is about the
need for wildcard certificates. Is it?
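
If so, re-issuing the filtering certificate with a wildcard CN would look
like this (a sketch; a wildcard only matches one domain's subdomains, not
every site):

openssl req -new -key /etc/squid/certs/sslfilter.key -x509 -days 1825 \
  -subj "/C=TR/O=Customer IT/CN=*.example.com" \
  -out /etc/squid/certs/sslfilter.crt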

--
Oguz YILMAZ



On Wed, Dec 15, 2010 at 8:52 AM, Oguz Yilmaz  wrote:
> On Tue, Dec 14, 2010 at 9:13 PM, Michael Leong
>  wrote:
>> One of the features of SSL is to detect the MITM you're doing.  You need to
>> manually add the squid cert on each browser as a trusted CA to prevent those
>> warnings.
>
> Actually I have added the cert to the IE Intermediate Certification
> Authorities and Trusted Root Certificates stores. The error continues. Could
> it be that the name of the site does not match the certificate's "issued
> for" field? How can I create the right certificate?
>
>>
>>
>>
>> On 12/14/2010 12:31 AM, Oguz Yilmaz wrote:
>>
>> Dear all,
>>
>> I have enabled my proxy for transparent SSL MITM proxying. Traffic for
>> destination tcp 443 is DNAT'ed to localhost:8443 through iptables.
>> This part is working: I am able to browse internet sites. For each
>> SSL site, the browser gives a MITM warning once. It should, of
>> course.
>> However, I want to learn how to remove any warning by
>> manually adding a certificate to the trusted key store of Internet
>> Explorer or Firefox.
>>
>> Squid conf param:
>> https_port 8443 cert=/etc/squid/certs/sslfilter.crt
>> key=/etc/squid/certs/sslfilter.key protocol=https accel vhost
>> defaultsite=google.com
>>
>> The way I have created the certificate and key:
>>
>> openssl genrsa -rand
>> /proc/apm:/proc/cpuinfo:/proc/dma:/proc/filesystems:/proc/interrupts:/proc/ioports:/proc/pci:/proc/rtc:/proc/uptime
>> 1024 > /etc/squid/certs/sslfilter.key
>>
>> cat << EOF | openssl req -new -key /etc/squid/certs/sslfilter.key
>> -x509 -days 1825 -out /etc/squid/certs/sslfilter.crt
>> TR
>> ANK
>> Ankara
>> Info
>> Customer IT
>> SSL Filtering Proxy
>> supp...@domain
>> EOF
>>
>>
>> Regards,
>>
>> --
>> Oguz YILMAZ
>>
>


Re: [squid-users] The method for SSL Mitm Proxying without browser warnings

2010-12-14 Thread Oguz Yilmaz
On Tue, Dec 14, 2010 at 9:13 PM, Michael Leong
 wrote:
> One of the features of SSL is to detect the MITM you're doing.  You need to
> manually add the squid cert on each browser as a trusted CA to prevent those
> warnings.

Actually I have added the cert to the IE Intermediate Certification
Authorities and Trusted Root Certificates stores. The error continues. Could
it be that the name of the site does not match the certificate's "issued
for" field? How can I create the right certificate?

>
>
>
> On 12/14/2010 12:31 AM, Oguz Yilmaz wrote:
>
> Dear all,
>
> I have enabled my proxy for transparent SSL MITM proxying. Traffic for
> destination tcp 443 is DNAT'ed to localhost:8443 through iptables.
> This part is working: I am able to browse internet sites. For each
> SSL site, the browser gives a MITM warning once. It should, of
> course.
> However, I want to learn how to remove any warning by
> manually adding a certificate to the trusted key store of Internet
> Explorer or Firefox.
>
> Squid conf param:
> https_port 8443 cert=/etc/squid/certs/sslfilter.crt
> key=/etc/squid/certs/sslfilter.key protocol=https accel vhost
> defaultsite=google.com
>
> The way I have created the certificate and key:
>
> openssl genrsa -rand
> /proc/apm:/proc/cpuinfo:/proc/dma:/proc/filesystems:/proc/interrupts:/proc/ioports:/proc/pci:/proc/rtc:/proc/uptime
> 1024 > /etc/squid/certs/sslfilter.key
>
> cat << EOF | openssl req -new -key /etc/squid/certs/sslfilter.key
> -x509 -days 1825 -out /etc/squid/certs/sslfilter.crt
> TR
> ANK
> Ankara
> Info
> Customer IT
> SSL Filtering Proxy
> supp...@domain
> EOF
>
>
> Regards,
>
> --
> Oguz YILMAZ
>
>


[squid-users] The method for SSL Mitm Proxying without browser warnings

2010-12-14 Thread Oguz Yilmaz
Dear all,

I have enabled my proxy for transparent SSL MITM proxying. Traffic for
destination tcp 443 is DNAT'ed to localhost:8443 through iptables.
This part is working: I am able to browse internet sites. For each
SSL site, the browser gives a MITM warning once. It should, of
course.
However, I want to learn how to remove any warning by
manually adding a certificate to the trusted key store of Internet
Explorer or Firefox.

Squid conf param:
https_port 8443 cert=/etc/squid/certs/sslfilter.crt
key=/etc/squid/certs/sslfilter.key protocol=https accel vhost
defaultsite=google.com

The way I have created the certificate and key:

openssl genrsa -rand
/proc/apm:/proc/cpuinfo:/proc/dma:/proc/filesystems:/proc/interrupts:/proc/ioports:/proc/pci:/proc/rtc:/proc/uptime
1024 > /etc/squid/certs/sslfilter.key

cat << EOF | openssl req -new -key /etc/squid/certs/sslfilter.key
-x509 -days 1825 -out /etc/squid/certs/sslfilter.crt
TR
ANK
Ankara
Info
Customer IT
SSL Filtering Proxy
supp...@domain
EOF


Regards,

--
Oguz YILMAZ


Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Oguz Yilmaz
--
Oguz YILMAZ



On Tue, Nov 30, 2010 at 10:46 AM, Amos Jeffries  wrote:
> On 30/11/10 21:23, Oguz Yilmaz wrote:
>>
>> On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries
>>  wrote:
>>>
>>> On 30/11/10 04:04, Oguz Yilmaz wrote:
>>>>
>>>> Graham,
>>>>
>>>> This is the best explanation I have seen of the ongoing upload problem
>>>> in proxy chains where squid is one link in the chain.
>>>>
>>>> On our systems we use Squid 3.0.STABLE25. In front of squid, a
>>>> dansguardian (DG) proxy filters traffic. Results of my tests:
>>>>
>>>> 1-
>>>> DG+Squid 2.6.STABLE12: no problem uploading
>>>> DG+Squid 3.0.STABLE25: problematic
>>>> DG+Squid 3.1.8: problematic
>>>> DG+Squid 3.2.0.2: problematic
>>>>
>>>> 2- We mostly have problems with sites that show web-based upload status
>>>> viewers, like rapidshare, youtube, etc.
>>>>
>>>> 3- If Squid is the only proxy, there is no problem uploading.
>>>>
>>>> 4- read_ahead_gap 16 KB does not resolve the problem
>>>>
>>>>
>>>> Dear Developers,
>>>>
>>>> Can you propose some other workarounds for us to test? The problem is
>>>> encountered on the most active sites of the net, unfortunately.
>>>
>>> This sounds like the same problem as
>>> http://bugs.squid-cache.org/show_bug.cgi?id=3017
>>
>
> Sorry, crossing bug reports in my head.
>
> This one is closer to the suck-everything behaviour you have seen:
> http://bugs.squid-cache.org/show_bug.cgi?id=2910
>
> both have an outside chance of working.
>

I have tried the proposed patch (BodyPipe.h); however, it does not work.
Note: my system is Linux-based.

>>
>> In my tests, no NTLM auth was used.
>> The browser has a proxy configuration targeting DG, and DG uses squid as
>> its parent proxy. If you think it will work, I can try the patch
>> attached to the bug report.
>> Uploads stop at about 1MB, so is it about SQUID_TCP_SO_RCVBUF?
>
> AIUI, Squid is supposed to read SQUID_TCP_SO_RCVBUF + read_ahead_gap and
> wait for some of that to pass on to the server before grabbing some more.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.9
>  Beta testers wanted for 3.2.0.3
>


Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Oguz Yilmaz
On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries  wrote:
> On 30/11/10 04:04, Oguz Yilmaz wrote:
>>
>> Graham,
>>
>> This is the best explanation I have seen of the ongoing upload problem
>> in proxy chains where squid is one link in the chain.
>>
>> On our systems we use Squid 3.0.STABLE25. In front of squid, a
>> dansguardian (DG) proxy filters traffic. Results of my tests:
>>
>> 1-
>> DG+Squid 2.6.STABLE12: no problem uploading
>> DG+Squid 3.0.STABLE25: problematic
>> DG+Squid 3.1.8: problematic
>> DG+Squid 3.2.0.2: problematic
>>
>> 2- We mostly have problems with sites that show web-based upload status
>> viewers, like rapidshare, youtube, etc.
>>
>> 3- If Squid is the only proxy, there is no problem uploading.
>>
>> 4- read_ahead_gap 16 KB does not resolve the problem
>>
>>
>> Dear Developers,
>>
>> Can you propose some other workarounds for us to test? The problem is
>> encountered on the most active sites of the net, unfortunately.
>
> This sounds like the same problem as
> http://bugs.squid-cache.org/show_bug.cgi?id=3017


In my tests, no NTLM auth was used.
The browser has a proxy configuration targeting DG, and DG uses squid as
its parent proxy. If you think it will work, I can try the patch
attached to the bug report.
Uploads stop at about 1MB, so is it about SQUID_TCP_SO_RCVBUF?
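
One directive that looks related is the cap on how much request body squid
buffers from the client (a sketch; I have not confirmed it changes this
behaviour):

# keep client-side request buffering small so the upload stays in step
# with what has actually been sent to the web site
client_request_buffer_max_size 64 KB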


>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.9
>  Beta testers wanted for 3.2.0.3
>


Re: [squid-users] squid-3.1 client POST buffering

2010-11-29 Thread Oguz Yilmaz
Graham,

This is the best explanation I have seen of the ongoing upload problem
in proxy chains where squid is one link in the chain.

On our systems we use Squid 3.0.STABLE25. In front of squid, a
dansguardian (DG) proxy filters traffic. Results of my tests:

1-
DG+Squid 2.6.STABLE12: no problem uploading
DG+Squid 3.0.STABLE25: problematic
DG+Squid 3.1.8: problematic
DG+Squid 3.2.0.2: problematic

2- We mostly have problems with sites that show web-based upload status
viewers, like rapidshare, youtube, etc.

3- If Squid is the only proxy, there is no problem uploading.

4- read_ahead_gap 16 KB does not resolve the problem


Dear Developers,

Can you propose some other workarounds for us to test? The problem is
encountered on the most active sites of the net, unfortunately.


Best Regards,

--
Oguz YILMAZ


On Thu, Nov 25, 2010 at 6:36 PM, Graham Keeling  wrote:
>
> Hello,
>
> I have upgraded to squid-3.1 recently, and found a change of behaviour.
> I have been using dansguardian in front of squid.
>
> It appears to be because squid now buffers uploaded POST data slightly
> differently.
> In versions < 3.1, it would take some data, send it through to the website,
> and then ask for some more.
> In the 3.1 version, it appears to take as much from the client as it can without
> waiting for what it has already got to be uploaded to the website.
>
> This means that dansguardian quickly uploads all the data into squid, and
> then waits for a reply, which is a long time in coming because squid still
> has to upload everything to the website.
> And then dansguardian times out on squid after two minutes.
>
>
> I noticed the following squid configuration option. Perhaps what I need is
> a similar thing for buffering data sent from the client.
>
> #  TAG: read_ahead_gap  buffer-size
> #       The amount of data the cache will buffer ahead of what has been
> #       sent to the client when retrieving an object from another server.
> #Default:
> # read_ahead_gap 16 KB
>
> Comments welcome!
>
> Graham.
>


Re: [squid-users] Marking outgoing connections with a mark acc. to client IP.

2010-08-27 Thread Oguz Yilmaz
Andrew,

I will be glad to help with testing.


--
Oguz YILMAZ



On Fri, Aug 27, 2010 at 8:43 PM, Andrew Beverley  wrote:
> On Fri, 2010-08-27 at 21:05 +1200, Amos Jeffries wrote:
>> Oguz Yilmaz wrote:
>> > Is it possible for Squid to mark outgoing connections with a mark
>> > indicating the requester of that connection? I want to try this approach
>> > for user-based time quotas. My aim is to catch connections according to
>> > the mark through iptables accounting features, apply various time and
>> > bandwidth quotas per day/week/month, and apply different tc classes to
>> > the traffic.
>>
>> Not yet. All the current Squid can do is set TOS via tcp_outgoing_tos.
>>
>> Netfilter MARK support is only just being worked on now. It's close to
>> passing our QA audit process and should be in one of our upcoming releases.
>>
>
> Sorry for the delay, hopefully I'll get the next patch candidate in this
> weekend :)
>
> The work I've been doing has only been a MARK add-on to the QOS
> functionality, not the tcp_outgoing_tos feature. Guess I'd better add
> that as well...
>
> Oguz - would you be available to assist with testing?
>
> Andy
>
>
>


[squid-users] Marking outgoing connections with a mark acc. to client IP.

2010-08-27 Thread Oguz Yilmaz
Is it possible for Squid to mark outgoing connections with a mark
indicating the requester of that connection? I want to try this approach
for user-based time quotas. My aim is to catch connections according to
the mark through iptables accounting features, apply various time and
bandwidth quotas per day/week/month, and apply different tc classes to
the traffic.
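
If direct marking is not possible, the fallback I am considering is TOS,
which iptables can also match on (a sketch with placeholder addresses):

acl lan_users src 192.168.0.0/24
tcp_outgoing_tos 0x20 lan_users
# then match in netfilter, e.g.:
# iptables -t mangle -A POSTROUTING -m tos --tos 0x20 -j MARK --set-mark 1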

Best Regards,

--
Oguz YILMAZ


Re: [squid-users] Squid NTLM authentication against Windows 2008 R2 AD

2010-05-08 Thread Oguz Yilmaz
Actually it is about Samba. I solved the same problem after
upgrading to Samba 3.5. If you are using an rpm-based distro, you may
use http://enterprisesamba.org/index.php?id=123.

Regards,
Oguz.

On Sat, May 8, 2010 at 3:22 AM, Mike Diggins  wrote:
>
> My organization is about to upgrade our Windows 2000 AD to Windows 2008 R2.
> I use winbind in Samba 3.0.30 with Squid and NTLM to authenticate my users.
> Today I joined my test Squid server to the Windows 2008 R2 domain for
> testing. Joining the domain worked but authentication using the Samba tools
> (ntlm_auth/wbinfo) does not. I've read this may be a known problem. I've
> also read that upgrading to Samba 3.3x might fix it. Can anyone confirm this
> is a known issue and what options there are to fix it?
>
> I understand this isn't a Squid problem, but I figured someone here must have
> run into this by now.
>
> -Mike
>
>