[squid-users] IP FORWARDER

2010-04-16 Thread •̪●
ISP > RB750 (MikroTik) > Squid server > hub/switch > users

How can I make the RB750 see the users' IP addresses?
Right now the RB750 only sees the Squid server's IP.


Re: [squid-users] Outsource error pages

2010-04-16 Thread Amos Jeffries

lupuscramus wrote:

Hello,

I want to harmonize the error pages of my network. The error pages come from 
different servers, and I want to centralize all the error pages on one shared 
platform.


Which makes all your systems dependent on the one server providing them. 
Under conditions when an error is occurring and needs to be presented 
... think carefully. VERY, VERY carefully.


One situation you need to consider most carefully is that 
ERR_ACCESS_DENIED is presented by Squid on every single bad request from 
relay port scans or DDoS attacks.




To do so, I would like to outsource the error pages of Squid. More precisely, I 
would like to outsource the ERR_ACCESS_DENIED page, by giving a URL, for 
example.


deny_info with a URL and an ACL is useful when we define the ACL ourselves, 
but when someone tries a wrong URL, the page displayed by default is 
ERR_ACCESS_DENIED.


So, I can't use deny_info for this.


Squid 2.x also has the error_map feature, which redirects to a fixed URL 
based only on the status code.
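For reference, the two directives look roughly like this. The hostnames and 
the ACL are placeholders, and the error_map syntax is sketched from memory, 
so check your release's squid.conf documentation before relying on it:

  # deny_info: replace the page shown when a named ACL denies access
  acl blocked dstdomain .blocked.example
  http_access deny blocked
  deny_info http://errors.example.net/denied.html blocked

  # error_map (Squid 2.x only): redirect based on the HTTP status code
  error_map http://errors.example.net/403.html 403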


I would question your actual need to centralize, though. If it's merely a 
matter of company branding, Squid-3.1 provides the errpages.css config 
where you can style the display of Squid-generated pages.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Squid Deployment for ISP's

2010-04-16 Thread Amos Jeffries

Ananth wrote:

Dear Team,

 I have configured Squid 3.1 on Fedora Core 12.

my hardware configuration:

CPU INFO: Intel(R) Xeon(R) CPU E5440  @ 2.83GHz
RAM : 8 GB
HDD : 160 GB

The problem I am facing is that when my HTTP requests go above 130 hits per
second, pages start loading slowly and time out; I can't even access the
cache manager. If the HTTP hit rate is below 130 hits per second it is
fine.  Please check if my configuration is correct. Sorry for my poor
English.

Thanks,
Ananth B.R.



Looks fairly good. There are a few tweaks I'll mention inline.


my configuration is as follows:

### Start of squid.conf #created by ANANTH#
cache_effective_user squid
cache_effective_group squid

http_port 3128 transparent

cache_dir ufs /var/spool/squid 16384 16 256

cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 7
emulate_httpd_log on


emulate_httpd_log does a little bit of extra work to generate dates etc.
If you can use the native squid log format, it's faster.

"emulate_httpd_log on" is also deprecated in favor of setting the 
"common" format type on access_log lines.
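For example, the deprecated directive can be replaced with an explicit format 
name on the access_log line (path taken from the config above):

  # native squid format (fastest):
  access_log /var/log/squid/access.log squid
  # Apache-style, equivalent to "emulate_httpd_log on":
  access_log /var/log/squid/access.log common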




cache_mem 2 GB
maximum_object_size_in_memory 512 KB


Memory objects are faster than disk ones in Squid, and 3.x does not have 
the large-object-size failures that 2.x has.
The more you can serve from memory in the newer Squid, the faster those 
requests are finished and new ones can be handled.



memory_replacement_policy lru
cache_replacement_policy lru


heap tends to be the replacement policy favored by high-performance 
people. It's up to you though.
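If the heap policies were compiled in (./configure --enable-removal-policies=heap), 
that change would look something like:

  # GDSF favors hit ratio by keeping small popular objects
  memory_replacement_policy heap GDSF
  # LFUDA favors byte hit ratio by keeping large popular objects
  cache_replacement_policy heap LFUDA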



maximum_object_size 64 MB

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


Drop the above three lines. They are doing extra work that is not really 
needed.




hosts_file /etc/hosts

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 40% 4320

#acl all src 0.0.0.0/0

##Define your network below

acl mynetwork src 192.168.106.0/24   # cbinetwork private
acl mynetwork src 192.168.107.0/24   # cbinetwork private
acl mynetwork src 192.168.110.0/24   # cbinetwork private
acl mynetwork src 192.168.120.0/24   # cbinetwork private
acl mynetwork src 192.168.121.0/24   # cbinetwork private
acl mynetwork src 192.168.130.0/24   # cbinetwork private
acl mynetwork src 192.168.150.0/24   # cbinetwork private
acl mynetwork src 192.168.151.0/24   # cbinetwork private
acl mynetwork src 10.100.101.0/24   # cbinetwork private
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8
acl to_localhost dst ::1/128
acl purge method PURGE
acl CONNECT method CONNECT

acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https

acl Safe_ports port 1025-65535 #unregistered ports

acl SSL_ports port 443 563

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access allow mynetwork
# http_access deny all


For peak performance I'd order the above lines a little differently and 
remove some. Give these a test out:


  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost
  http_access deny manager
  http_access deny purge
  http_access allow mynetwork
  # http_access deny all


http_reply_access allow all
icp_access allow mynetwork

# icp_access deny all

visible_hostname proxy.xxx.xx

coredump_dir /var/spool/squid

 End of squid.conf ##


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Log Rotate

2010-04-16 Thread Amos Jeffries

Matt wrote:

I am running squid3-3.1.0.15-1 on CentOS.  I have this for log rotate:



NP: the 3.1 betas are all now unsupported. Please upgrade to 3.1.1 or later.


/etc/logrotate.d/squid

/var/log/squid/access.log /var/log/squid/cache.log /var/log/squid/store.log {
daily
missingok
nocompress
noolddir
sharedscripts
postrotate
DATE=`/bin/date --date=yesterday +%y%m%d`
LOGDIR="/var/log/squid"
/usr/sbin/squid -k rotate 2>/dev/null || true


Going by the package name, "squid3" probably means the binary is also 
named "squid3" instead of "squid".
 That means your squid may not be receiving the rotate signal. Run it 
without that "2>/dev/null ||true" and see if any errors are happening.



sleep 10
for LOGFILE in ${LOGDIR}/*.log; do
  [ -f ${LOGFILE}.1 ] && mv ${LOGFILE}.1 ${LOGFILE}-${DATE}


Part of the problem is this:
   the loop only considers files with the extension .log,
   but a file is only dated if it has the extension .1  !!!


  [ -f ${LOGFILE}-${DATE} ] && /bin/gzip ${LOGFILE}-${DATE}
done
/usr/bin/find ${LOGDIR}/ -type f -name "*.log-*.gz" -mtime +30
-exec rm -f {} \;


What does this do? erase the file just now created and zipped?


endscript
}

It never seems to rotate the logs though.  Any idea why?




As a side note:
  The new log daemon could fairly easily gain the ability to write those 
dated files directly, based on the individual log timestamp.
  If you or anyone else using this type of script wants to delve into 
the helper code and make it so, I'm happy to mentor the effort. I just 
don't have time to do it myself yet.
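Until then, the dating-and-compressing part of such a postrotate script could 
be restructured so that both squid-style (.0) and logrotate-style (.1) 
rotations are handled and the result matches the later "*.log-*.gz" cleanup. 
A sketch only, with directory and date handling as in Matt's script:

```shell
#!/bin/sh
# date_and_zip_logs LOGDIR DATE
# Rename each rotated log (name.log.0 or name.log.1) to name.log-DATE
# and gzip it, so old copies can later be matched as "*.log-*.gz".
date_and_zip_logs() {
    logdir=$1
    date=$2
    for logfile in "$logdir"/*.log; do
        # handle both squid -k rotate (.0) and logrotate (.1) suffixes
        for ext in 0 1; do
            if [ -f "${logfile}.${ext}" ]; then
                mv "${logfile}.${ext}" "${logfile}-${date}"
                gzip -f "${logfile}-${date}"
            fi
        done
    done
}
```

Called as `date_and_zip_logs /var/log/squid 100416`, this turns access.log.1 
into access.log-100416.gz, which the find ... -name "*.log-*.gz" -mtime +30 
cleanup will then match.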


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
Yeah, maybe one of their network admins
is subscribed to this mailing list :)

Thanks anyway

On Sat, Apr 17, 2010 at 10:08 AM, Amos Jeffries  wrote:
> Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’̵͇̿̿=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ wrote:
>>
>>
>> I didn't do anything;
>> suddenly it's solved.
>>
>> Can anyone explain?
>
> Sounds like Facebook had an internal problem which Facebook fixed.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.1
>



-- 
-=-=-=-=
sigh... idle again, idle again


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Amos Jeffries

Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’̵͇̿̿=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ wrote:


I didn't do anything;
suddenly it's solved.

Can anyone explain?


Sounds like Facebook had an internal problem which Facebook fixed.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Kerberos Authentication in Relation to Connect ACLs

2010-04-16 Thread Amos Jeffries

GIGO . wrote:

I get the following error whenever I try to use Squid (currently I am trying 
to use it from the AD machine, which is also the KDC, with Squid providing 
authentication):

Access Denied:

Access control configuration prevents your request from being allowed at this 
time. Please contact your service provider if you feel this is incorrect.
(No authentication pop-up; I just get this error whenever I try to open any 
webpage.)

However, I don't think I have made any settings that would block users. I am 
not sure what is happening; please guide me. Is it something to do with the 
CONNECT method ACLs?
 


Only if you somehow used CONNECT.

Did you use the CONNECT method to make the failing request?

 
acl CONNECT method CONNECT

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny request to unknown ports
http_access deny !Safe_ports
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
#Allow access from localhost
http_access allow localhost
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
http_access deny !auth
http_access allow auth
http_access deny all
 


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Build fail for versions squid-3.0.STABLE21 and later on IBM PowerPC64

2010-04-16 Thread Steve
Amos

Many thanks. I have tried the latest snapshot and all is fine. Thanks
for the swift fix.

Steve

On Fri, Apr 16, 2010 at 2:46 AM, Amos Jeffries  wrote:
> Steve wrote:
>>
>> Hi
>> This is my first post to such a list so please go easy with me and I
>> hope I have found the correct place. I have a problem compiling any
>> release after  version squid-3.0.STABLE20 on IBM Power ppc64 running
>> Red Hat Enterprise Linux AS release 4 (Nahant Update 8).
>> I have been using Squid on this platform for about 5 years and been
>> compiling previous releases without any issues. I have done the usual
>> googling and searching of the list but it seems I am unique in
>> experiencing this problem!
>>
>> The key errors I am seeing are displayed in the back end of the log
>> extract below and seem to relate to "rfc1738.c: In function
>> `rfc1738_unescape'" and the warnings generated relating to comparison
>> is always false.
>> Compiler is gcc version 3.4.6 20060404 (Red Hat 3.4.6-11).
>>
>> mv -f .deps/md5.Tpo .deps/md5.Po
>> gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
>> -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
>> -Wmissing-declarations -Wcomments -Wall -g -O2 -MT radix.o -MD -MP -MF
>> .deps/radix.Tpo -c -o radix.o radix.c
>> mv -f .deps/radix.Tpo .deps/radix.Po
>> gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
>> -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
>> -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1035.o -MD -MP
>> -MF .deps/rfc1035.Tpo -c -o rfc1035.o rfc1035.c
>> mv -f .deps/rfc1035.Tpo .deps/rfc1035.Po
>> gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
>> -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
>> -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1123.o -MD -MP
>> -MF .deps/rfc1123.Tpo -c -o rfc1123.o rfc1123.c
>> mv -f .deps/rfc1123.Tpo .deps/rfc1123.Po
>> gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
>> -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
>> -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1738.o -MD -MP
>> -MF .deps/rfc1738.Tpo -c -o rfc1738.o rfc1738.c
>> rfc1738.c: In function `rfc1738_unescape':
>> rfc1738.c:209: warning: comparison is always false due to limited
>> range of data type
>> rfc1738.c:212: warning: comparison is always false due to limited
>> range of data type
>> make[2]: *** [rfc1738.o] Error 1
>> make[2]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'make:
>> *** [all-recursive] Error 1
>> Making install in lib
>> make[1]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib'
>> Making all in libTrie
>> make[2]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
>> make  all-recursive
>> make[3]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
>> Making all in src
>> make[4]: Entering directory
>> `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/src'
>> make[4]: Nothing to be done for `all'.
>> make[4]: Leaving directory
>> `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/src'
>> Making all in test
>> make[4]: Entering directory
>> `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/test'
>> make[4]: Nothing to be done for `all'.
>> make[4]: Leaving directory
>> `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/test'
>> make[4]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
>> make[4]: Nothing to be done for `all-am'.
>> make[4]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
>> make[3]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
>> make[2]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
>> make[2]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib'
>> gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
>> -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
>> -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1738.o -MD -MP
>> -MF .deps/rfc1738.Tpo -c -o rfc1738.o rfc1738.c
>> rfc1738.c: In function `rfc1738_unescape':
>> rfc1738.c:209: warning: comparison is always false due to limited
>> range of data type
>> rfc1738.c:212: warning: comparison is always false due to limited
>> range of data type
>> make[2]: *** [rfc1738.o] Error 1
>> make[2]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'
>> make: *** [install-recursive] Error 1
>>
>> I get the same for  squid-3.0.STABLE21, squid-3.0.STABLE22,
>> squid-3.0.STABLE23, squid-3.0.STABLE24 and squid-3.0.STABLE25.
>> Squid-3.1.1 also will not compile for me.
>>
>> This suggests to me that lib/rfc1738.c is an area to look at and I
>> find that the file is different between releases. If I copy the
>> version from squid-3.0.STABLE20 to squid-3.0.STABLE21 and try again
>> then the build works. I don't know what the knock on

Re: [squid-users] Problem downloading files greater then 2 GB

2010-04-16 Thread Jacques Beaudoin

Sorry, I made a typo; I'm testing with Firefox 3.6.3.

On 2010-04-16 15:57, Jacques Beaudoin wrote:

Thanks, Alex, for taking the time.

I tried downloading the openSUSE 11.2 DVD with Firefox 3.6;

I got the same result: the download stops after 2 GB.

My Squid cache is in a ramdrive using tmpfs; maybe that's the cause.

I'm now doing another download, with no Squid cache, using Firefox 3.6.3.

I'll keep you posted.

Best Regards,

 Jacques


On 2010-04-16 00:07, Alex Braunegg wrote:

Hi,

This is a known issue with Internet Explorer - as you didn't detail what
browser you're using.

The issue - if you are using IE - has nothing to do with Squid; it is a
well-known IE issue and the download will still fail when Squid is not in
use. If you use any browser other than IE the issue should not occur.

If you are not using IE - what's the URL so some tests can be performed?

Best Regards,

Alex

-Original Message-
From: Jacques Beaudoin [mailto:jacques-beaud...@cspi.qc.ca]
Sent: Friday, 16 April 2010 1:24 PM
To: squid-users@squid-cache.org
Cc: jacques-beaud...@cspi.qc.ca
Subject: [squid-users] Problem downloading files greater then 2 GB

Hi,

I'm using version 3.1.1 of Squid on a SUSE 10.2 server

and my users cannot download files greater than 2 GB.

I saw some postings via Google but cannot find a solution

for my problem.

Greetings









Re: [squid-users] Problem downloading files greater then 2 GB

2010-04-16 Thread Jacques Beaudoin

Thanks for responding.

I'm doing more testing, and I'll give you all the information.

Greetings

Jacques



On 2010-04-16 01:50, davs...@gmail.com wrote:

Hi,
Please tell me your OS version, kernel version, physical memory, and your
sysctl.conf file. Also your squid.conf with the comment (hash) lines removed.
--Original Message--
From: Jacques Beaudoin
To: squid-users@squid-cache.org
Cc: jacques-beaud...@cspi.qc.ca
Subject: [squid-users] Problem downloading files greater then 2 GB
Sent: Apr 16, 2010 8:54 AM

Hi,

I'm using version 3.1.1 of Squid on a suse 10.2 server

and I my users cannot download files greater then 2 GB.

I saw some posting via Google but cannot find a solution

for my problem

Greetings


Sent from BlackBerry® on Airtel




Re: [squid-users] Problem downloading files greater then 2 GB

2010-04-16 Thread Jacques Beaudoin

Thanks, Alex, for taking the time.

I tried downloading the openSUSE 11.2 DVD with Firefox 3.6;

I got the same result: the download stops after 2 GB.

My Squid cache is in a ramdrive using tmpfs; maybe that's the cause.

I'm now doing another download, with no Squid cache, using Firefox 3.6.3.

I'll keep you posted.

Best Regards,

 Jacques


On 2010-04-16 00:07, Alex Braunegg wrote:

Hi,

This is a known issue with Internet Explorer - as you didn't detail what
browser you're using.

The issue - if you are using IE - has nothing to do with Squid; it is a
well-known IE issue and the download will still fail when Squid is not in
use. If you use any browser other than IE the issue should not occur.

If you are not using IE - what's the URL so some tests can be performed?

Best Regards,

Alex

-Original Message-
From: Jacques Beaudoin [mailto:jacques-beaud...@cspi.qc.ca]
Sent: Friday, 16 April 2010 1:24 PM
To: squid-users@squid-cache.org
Cc: jacques-beaud...@cspi.qc.ca
Subject: [squid-users] Problem downloading files greater then 2 GB

Hi,

I'm using version 3.1.1 of Squid on a SUSE 10.2 server

and my users cannot download files greater than 2 GB.

I saw some postings via Google but cannot find a solution

for my problem.

Greetings


   




Re: [squid-users] Squid Deployment for ISP's

2010-04-16 Thread Matt
> CPU INFO: Intel(R) Xeon(R) CPU E5440  @ 2.83GHz
> RAM : 8 GB
> HDD : 160 GB
>
> The problem I am facing is that when my HTTP requests go above 130 hits per
> second, pages start loading slowly and time out; I can't even access the
> cache manager. If the HTTP hit rate is below 130 hits per second it is
> fine.  Please check if my configuration is correct. Sorry for my poor
> English.

How is your file descriptor usage?

Matt


[squid-users] Log Rotate

2010-04-16 Thread Matt
I am running squid3-3.1.0.15-1 on CentOS.  I have this for log rotate:

/etc/logrotate.d/squid

/var/log/squid/access.log /var/log/squid/cache.log /var/log/squid/store.log {
daily
missingok
nocompress
noolddir
sharedscripts
postrotate
DATE=`/bin/date --date=yesterday +%y%m%d`
LOGDIR="/var/log/squid"
/usr/sbin/squid -k rotate 2>/dev/null || true
sleep 10
for LOGFILE in ${LOGDIR}/*.log; do
  [ -f ${LOGFILE}.1 ] && mv ${LOGFILE}.1 ${LOGFILE}-${DATE}
  [ -f ${LOGFILE}-${DATE} ] && /bin/gzip ${LOGFILE}-${DATE}
done
/usr/bin/find ${LOGDIR}/ -type f -name "*.log-*.gz" -mtime +30
-exec rm -f {} \;
endscript
}

It never seems to rotate the logs though.  Any idea why?

Matt


Re: [squid-users] Squid Deployment for ISP's

2010-04-16 Thread Leonardo Rodrigues

On 16/04/2010 11:57, Ananth wrote:

Dear Team,

  I have configured Squid 3.1 on Fedora Core 12.

my hardware configuration:

CPU INFO: Intel(R) Xeon(R) CPU E5440  @ 2.83GHz
RAM : 8 GB
HDD : 160 GB

   


160 GB is a common SATA disk size. Are you using a single SATA disk to 
hold the cache_dir AND the logs for a Squid system running at 130 
requests/second? If you answered YES, then you're probably having 
I/O problems !!!


Try disabling the logs, and maybe even the cache_dir (set a null 
cache_dir), and see if things get better. If yes, then you really should 
get a decent I/O subsystem for this heavily loaded Squid box.
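As a sketch of that test (paths are Ananth's; note that a "null" cache_dir 
needs --enable-storeio=null in Squid 2.x, while in 3.1 simply omitting every 
cache_dir line runs Squid without a disk cache - verify against your build):

  # temporarily disable logging:
  access_log none
  cache_store_log none

  # Squid 2.x memory-only cache:
  #   cache_dir null /tmp
  # Squid 3.1: comment out / remove all cache_dir lines instead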





--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






RE: [squid-users] Re: Re: Re: Creating a kerberos Service Principal.

2010-04-16 Thread GIGO .

Markus,
 
Now what should I do? Why this browser behaviour, though I have confirmed 
that Windows Integrated Authentication is checked? The IE version can do 
Kerberos, and the DNS name is given as the proxy. The only thing missing is 
the DNS reverse lookup settings on my domain controller/DNS. I checked on 
two clients. I have a virtual environment built on VMware.

How do I move forward from here?
 

> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Fri, 16 Apr 2010 15:18:27 +0100
> Subject: [squid-users] Re: Re: Re: Creating a kerberos Service Principal.
> 
> Hi Bilal,
> 
> In your case the browser is returning an NTLM token, not a Kerberos token, 
> which is why squid_kerb_auth will deny access.
> 
> Regards
> Markus
> 
> "GIGO ."  wrote in message 
> news:snt134-w155de8e05828b08d15c09ab9...@phx.gbl...
> 
> Dear Nick,
> 
> This was the result of my klist -k command:
> 
> [r...@squidlhrtest log]# klist -k /etc/squid/HTTP.keytab
> Keytab name: FILE:/etc/squid/HTTP.keytab
> KVNO Principal
>  
> --
> 2 HTTP/vdc.v.com...@v.com.pk
> 2 HTTP/vdc.v.com...@v.com.pk
> 2 HTTP/vdc.v.com...@v.com.pk
> ---
> 
> i recreated the spn as follows in my new lab ( domaincontroller name is now 
> vdc.v.local and proxyname is squidLhrTest)
> msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.local -h squidLhrTest.v.local -k 
> /etc/squid/HTTP.keytab --computer-name squid-http --upn 
> HTTP/squidLhrTest.v.local --server vdc.v.local --verbose
> 
> 
> 
> However whenever a client try to access the internet this error appears:
> 
> CacheHost: squidLhrTest
> ErrPage: ERR_CACHE_ACCESS_DENIED
> Err: [none]
> TimeStamp: Fri, 16 Apr 2010 10:43:51 GMT
> ClientIP: 10.1.82.54
> HTTP Request:
> GET /isapi/redir.dll?prd=ie&ar=hotmail HTTP/1.1
> Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, 
> application/x-shockwave-flash, */*
> Accept-Language: en-us
> User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
> Accept-Encoding: gzip, deflate
> Proxy-Connection: Keep-Alive
> Host: www.microsoft.com
> Proxy-Authorization: Negotiate 
> TlRMTVNTUAABB4IIogAFASgKDw==
> 
> 
> 
> Thank you so much for your consideration, Nick. Yes, despite lots of 
> effort I have not been able to get this thing to work, and I am frustrated 
> now. But along the way I have at least learnt many things :)
> 
> 
> 
> regards,
> 
> Bilal Aslam
> 
> 
>> From: nick.cairncr...@condenast.co.uk
>> To: gi...@msn.com
>> Date: Fri, 16 Apr 2010 09:39:11 +0100
>> Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.
>>
>> Bilal,
>>
>> I understand your frustration! First off: What happens when you klist -k 
>> /etc/squid/HTTP.keytab
>> As I understand it, shouldn't you be specifying the spn as 
>> HTTP/yoursquidproxy and not your DC? You want to be able to authenticate 
>> from the squid proxy, using the HTTP service to the squid-http computer 
>> account.
>>
>> Nick
>>
>>
>>
>>
>>
>> On 16/04/2010 08:43, "GIGO ." wrote:
>>
>>
>>
>> Dear Nick/Markus,
>>
>> I am totally lost and am not sure what to do; I need your help, please. 
>> The problem is that my Kerberos authentication is not working. In my 
>> virtual environment I have two machines, one configured as the Domain 
>> Controller and the other as the Squid proxy. I am trying to use the 
>> internet from my domain controller (Internet Explorer 7, and the DNS name 
>> is given instead of the IP). However, it only pops up an authentication 
>> window and never works like it should.
>>
>>
>>
>>
>> I have setup the squid authentication as follows:
>>
>>
>> Steps:
>>
>> I copied the squid_kerb_auth files to correct directory. (SELinux is 
>> enabled)
>>
>> cp -r squid_kerb_auth /usr/libexec/squid/
>>
>> I then Installed the msktutil software
>>
>> step No 1: i changed my krb5.conf file as follows;
>>
>> krb5.conf-
>> [logging]
>> default = FILE:/var/log/krb5libs.log
>> kdc = FILE:/var/log/krb5kdc.log
>> admin_server = FILE:/var/log/kadmind.log
>> [libdefaults]
>> default_realm = V.COM.PK
>> dns_lookup_realm = no
>> dns_lookup_kdc = no
>> ticket_lifetime = 24h
>> forwardable = yes
>> default_keytab_name= /etc/krb5.keytab
>> ; for windows 2003
>> default_tgs_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
>> default_tkt_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
>> permitted_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
>> [realms]
>> V.LOCAL = {
>> kdc = vdc.v.com.pk:88
>> admin_server = vdc.v.com.pk:749
>> default_domain = v.com.pk
>> }
>> [domain_realm]
>> .linux.home = V.COM.PK
>> .v.com.pk=V.COM.PK
>> v.local=V.COM.PK
>>
>> [appdefaults]
>> pam = {
>> debug = false
>> ticket_lifetime = 36000
>> renew_lifetime = 36000
>> forwardable = true
>> krb4_convert = false
>> }
>>
>> Step 2: I verified the settings in resolv.conf & hos

Re: [squid-users] Squid Deployment for ISP's

2010-04-16 Thread Jose Ildefonso Camargo Tolosa
Hi!

It has been a long time since I last saw a large number of users behind
just one Squid proxy (8 years or so).  Anyway, from what I can remember,
there were a couple of interesting points: the number of open files, and
the number of simultaneous connections.  I had to tune kernel (proc),
system (ulimit) and Squid parameters back then.

Anyway, if I find my really old notes I may be able to give some more
useful info; in the meantime, analyze these points.
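Those three layers might be checked and tuned along these lines (the values 
are illustrative only, and max_filedescriptors is the Squid 3.x directive 
name - check your version's documentation):

  # kernel-wide file handle ceiling:
  sysctl -w fs.file-max=100000

  # per-process limit, set in squid's init script before it starts:
  ulimit -n 16384

  # squid.conf (needs a matching system limit to take effect):
  max_filedescriptors 16384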

I hope this helps,

Ildefonso Camargo

On Fri, Apr 16, 2010 at 10:27 AM, Ananth  wrote:
> Dear Team,
>
>  I have configure squid 3.1 on Fedora core 12.
>
> my hardware configuration:
>
> CPU INFO: Intel(R) Xeon(R) CPU E5440  @ 2.83GHz
> RAM : 8 GB
> HDD : 160 GB
>
> The problem I am facing is that when my HTTP requests go above 130 hits per
> second, pages start loading slowly and time out; I can't even access the
> cache manager. If the HTTP hit rate is below 130 hits per second it is
> fine.  Please check if my configuration is correct. Sorry for my poor
> English.
>
> Thanks,
> Ananth B.R.
>
> my configuration is as follows:
>
> ### Start of squid.conf #created by ANANTH#
> cache_effective_user squid
> cache_effective_group squid
>
> http_port 3128 transparent
>
> cache_dir ufs /var/spool/squid 16384 16 256
>
> cache_access_log /var/log/squid/access.log
> cache_log /var/log/squid/cache.log
> cache_store_log none
> logfile_rotate 7
> emulate_httpd_log on
>
> cache_mem 2 GB
> maximum_object_size_in_memory 512 KB
> memory_replacement_policy lru
> cache_replacement_policy lru
> maximum_object_size 64 MB
>
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> no_cache deny QUERY
>
> hosts_file /etc/hosts
>
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern . 0 40% 4320
>
> #acl all src 0.0.0.0/0
>
> ##Define your network below
>
> acl mynetwork src 192.168.106.0/24   # cbinetwork private
> acl mynetwork src 192.168.107.0/24   # cbinetwork private
> acl mynetwork src 192.168.110.0/24   # cbinetwork private
> acl mynetwork src 192.168.120.0/24   # cbinetwork private
> acl mynetwork src 192.168.121.0/24   # cbinetwork private
> acl mynetwork src 192.168.130.0/24   # cbinetwork private
> acl mynetwork src 192.168.150.0/24   # cbinetwork private
> acl mynetwork src 192.168.151.0/24   # cbinetwork private
> acl mynetwork src 10.100.101.0/24   # cbinetwork private
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl localhost src ::1/128
> acl to_localhost dst 127.0.0.0/8
> acl to_localhost dst ::1/128
> acl purge method PURGE
> acl CONNECT method CONNECT
>
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
>
> acl Safe_ports port 1025-65535 #unregistered ports
>
> acl SSL_ports port 443 563
>
> http_access allow manager localhost
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
>
> http_access allow localhost
> http_access allow mynetwork
> # http_access deny all
> http_reply_access allow all
> icp_access allow mynetwork
>
> # icp_access deny all
>
> visible_hostname proxy.xxx.xx
>
> coredump_dir /var/spool/squid
>
>  End of squid.conf ##
>


[squid-users] Re: Re: need help port 80

2010-04-16 Thread Heinz Diehl
On 14.04.2010, Amos Jeffries wrote: 

> I don't know why you people insist on sticking .* before and aft of the
> pattern.

Because the definition says "-i regexp", and it seems most natural to use
the complete regexp. It may well be a somewhat "neural reflex" of people
thinking way too complicated :-)

> When that is processed in by Squid it becomes:
>   .*.*Firefox.*.*

Argh, yes. Thanks for pointing this out.
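The substring behaviour is easy to demonstrate with grep, whose unanchored 
matching behaves the same way here as Squid's regex ACLs (the user-agent 
string below is just an example):

```shell
ua='Mozilla/5.0 (X11; Linux) Gecko/20100401 Firefox/3.6.3'

# An unanchored pattern already matches anywhere in the string:
printf '%s\n' "$ua" | grep -c 'Firefox'       # prints 1

# Wrapping it in .* matches exactly the same input, just more slowly:
printf '%s\n' "$ua" | grep -c '.*Firefox.*'   # prints 1
```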



Re: [squid-users] Squid HTTP Keytab SPN question

2010-04-16 Thread Nick Cairncross
Hi Khaled,

It would appear that this was a freak error caused by my particular test 
machine/account. Testing from other test machines and accounts proved that it 
was working.

A reboot resolved it.

Thanks,

Nick


On 15/04/2010 12:00, "Khaled Blah"  wrote:

Hi Nick,

I believe a decrypt integrity check failure implies that the wrong key is
being used to decrypt the user's Kerberos ticket. The KVNO might be
correct but the key is not.

I am using "net" to create a keytab. It's rather easy, simply create a
smb.conf if you don't have one already for the "auth1" account
(Netbios name = AUTH1), then do "net ads join" and then "net ads
keytab add http". This will cause net to create a keytab with the
correct keys and the correct KVNO.

Regards,
Khaled

2010/4/15 Nick Cairncross :
> Hi Khaled,
>
> The reason is that I am also running Samba, which periodically and sometimes 
> 'randomly' updates the machine account in AD (squid1) and throws out the 
> KVNO, and thus the exported squid keytab HTTP.keytab becomes invalid. Using a 
> different account (auth1) means I can run a cron job to run msktutil to 
> update the keytab and keep the KVNO/keytab in sync, and not touching the 
> actual host computer account.
>
> I have got the separate account working up to the point that the cache.log 
> now just reports a Decrypt integrity check failed. I am prompted for my 
> username and password. Entering this allows me to get on the internet and 
> cache.log shows my username. I understand the error message to be an 
> 'incorrect password' type of message but it doesn't quite make sense..
>
> Any pointers from the list?
>
> Nick
>
>
>
>
>
> On 15/04/2010 02:47, "Khaled Blah"  wrote:
>
> Hi Nick,
>
> what I don't get in your question is this: if squid is already joined
> to your domain as squid1, why create another machine account auth1?
> Maybe I missed out on something.
>
> Your msktutil parameters look fine though.
>
> Regards,
> Khaled
>
> 2010/4/14 Nick Cairncross :
>> Hi,
>>
>> I'd like confirmation of something is possible, but first best to detail 
>> what I want:
>>
>> I want to use a separate computer account to authenticate my users against. 
>> I know that this requires an HTTP.keytab and computer in AD with SPN. I 
>> would like to use MKTSUTIL for this.
>> If my proxy server is called SQUID1 and is already happily joined to the 
>> domain then I need to create a new machine account which I will call AUTH1.
>>
>> 1) Do I need to create a DNS entry for AUTH1 (with the same IP as SQUID1)?
>> 2) If so, do I need just an A record?
>> 3) I have evidently got confused over the msktutil switches and values and 
>> so I'm specifying something wrong. What have I done? See below...
>>
>> I used this command after a kinit myusername:
>> msktutil -c -b "CN=COMPUTERS" -s HTTP/squid1.[mydomain] iz -k 
>> /etc/squid/HTTP.keytab --computer-name auth1 --upn HTTP/squid1 --server dc1 
>> -verbose
>>
>> This created the computer account auth1 in the computers ou, added 
>> HTTP/squid1.mydomain to SPN and HTTP/squid1.mydom...@mydomain to the UPN.
>> It also created the keytab HTTP.keytab. Klist reports:
>>
>>   2 HTTP/squid1.[mydoma...@[mydomain]
>>   2 HTTP/squid1.[mydoma...@[mydomain]
>>   2 HTTP/squid1.[mydoma...@[mydomain]
>>
>> However cache.log shows this when I then fire up me IE
>>
>> 2010/04/14 14:52:46| authenticateNegotiateHandleReply: Error validating user 
>> via Negotiate. Error returned 'BH gss_acquire_cred() failed: Unspecified GSS 
>> failure.  Minor code may provide more information. No principal in keytab 
>> matches desired name'
>>
>> Thanks as always,
>> Nick
>>
>>
>>
>>
>> ** Please consider the environment before printing this e-mail **
>>
>> The information contained in this e-mail is of a confidential nature and is 
>> intended only for the addressee.  If you are not the intended addressee, any 
>> disclosure, copying or distribution by you is prohibited and may be 
>> unlawful.  Disclosure to any party other than the addressee, whether 
>> inadvertent or otherwise, is not intended to waive privilege or 
>> confidentiality.  Internet communications are not secure and therefore Conde 
>> Nast does not accept legal responsibility for the contents of this message.  
>> Any views or opinions expressed are those of the author.
>>
>> Company Registration details:
>> The Conde Nast Publications Ltd
>> Vogue House
>> Hanover Square
>> London W1S 1JU
>>
>> Registered in London No. 226900
>>
>
>

[squid-users] Squid Deployment for ISP's

2010-04-16 Thread Ananth
Dear Team,

 I have configured squid 3.1 on Fedora Core 12.

my hardware configuration:

CPU INFO: Intel(R) Xeon(R) CPU E5440  @ 2.83GHz
RAM : 8 GB
HDD : 160 GB

The problem I am facing is that when my HTTP requests go above 130 hits per
second, pages start loading slowly and time out; I can't even access the
cache manager. If the HTTP hit rate is below 130 hits per second it is
fine. Please check whether my configuration is correct. Sorry for my poor
English.

Thanks,
Ananth B.R.

my configuration is as follows:

### Start of squid.conf #created by ANANTH#
cache_effective_user squid
cache_effective_group squid

http_port 3128 transparent

cache_dir ufs /var/spool/squid 16384 16 256

cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 7
emulate_httpd_log on

cache_mem 2 GB
maximum_object_size_in_memory 512 KB
memory_replacement_policy lru
cache_replacement_policy lru
maximum_object_size 64 MB

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

hosts_file /etc/hosts

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 40% 4320

#acl all src 0.0.0.0/0

##Define your network below

acl mynetwork src 192.168.106.0/24   # cbinetwork private
acl mynetwork src 192.168.107.0/24   # cbinetwork private
acl mynetwork src 192.168.110.0/24   # cbinetwork private
acl mynetwork src 192.168.120.0/24   # cbinetwork private
acl mynetwork src 192.168.121.0/24   # cbinetwork private
acl mynetwork src 192.168.130.0/24   # cbinetwork private
acl mynetwork src 192.168.150.0/24   # cbinetwork private
acl mynetwork src 192.168.151.0/24   # cbinetwork private
acl mynetwork src 10.100.101.0/24   # cbinetwork private
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8
acl to_localhost dst ::1/128
acl purge method PURGE
acl CONNECT method CONNECT

acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https

acl Safe_ports port 1025-65535 #unregistered ports

acl SSL_ports port 443 563

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access allow mynetwork
# http_access deny all
http_reply_access allow all
icp_access allow mynetwork

# icp_access deny all

visible_hostname proxy.xxx.xx

coredump_dir /var/spool/squid

### End of squid.conf ###
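For the load described (130+ requests/second), two changes to the configuration above are commonly suggested on this list. This is a sketch under assumptions, not a drop-in fix: `aufs` requires a Squid build with `--enable-storeio=aufs`, and the right `cache_mem` depends on how much RAM the index and OS need.

```
# Threaded disk I/O (aufs) avoids blocking the main Squid process on a
# busy cache the way the plain ufs store does
cache_dir aufs /var/spool/squid 16384 16 256

# Leave headroom: Squid's process size is cache_mem plus the in-memory
# index (roughly 10-14 MB of RAM per GB of cache_dir) plus OS buffers
cache_mem 1 GB
```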


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●

I didn't do anything; suddenly

it's solved.
Can anyone explain?

On Fri, Apr 16, 2010 at 9:48 PM, Jeff Peng  wrote:
> On Fri, Apr 16, 2010 at 10:45 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
> ̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>> mobile version
>> without any image
>>
>> so... can you help me how to make it out ?
>
>
> sure... squid-2.7 can send a http1.1 request instead.
> but I forgot the directive name, try searching "http11" in squid.conf.
>
> Jeff.
>



-- 
-=-=-=-=
hix nganggur maning... nganggur maning


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Jeff Peng
On Fri, Apr 16, 2010 at 10:45 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
> mobile version
> without any image
>
> so... can you help me how to make it out ?


Sure... squid-2.7 can send an HTTP/1.1 request instead,
but I forgot the directive name; try searching for "http11" in squid.conf.

Jeff.
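The directive Jeff is thinking of appears to be the HTTP/1.1 support added in squid-2.7; a hedged sketch of the relevant squid.conf line (check the squid.conf.default shipped with your build before relying on the name):

```
# squid-2.7: speak HTTP/1.1 on server-side connections so origin servers
# (like facebook.com) do not treat the proxy as a legacy HTTP/1.0 client
server_http11 on
```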


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
The mobile version,
without any images.

So... can you help me work out how to fix it?
Because I never hit a problem like this before.

On Fri, Apr 16, 2010 at 9:44 PM, Jeff Peng  wrote:
> On Fri, Apr 16, 2010 at 10:40 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
> ̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>> i dont understand///
>>
>> but it works ( facebook not auto direct to m.facebook ) when i dont use squid
>>
>
>
> coz squid sent http1.0 request to facebook, that's different from a
> normal current browser.
> btw, what's the content of m.facebook.com, wap version of it?
>
> Jeff
>



-- 
-=-=-=-=
hix nganggur maning... nganggur maning


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Jeff Peng
On Fri, Apr 16, 2010 at 10:40 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
> i dont understand///
>
> but it works ( facebook not auto direct to m.facebook ) when i dont use squid
>


Because Squid sent an HTTP/1.0 request to facebook.com, which is different
from what a normal current browser sends.
By the way, what is the content of m.facebook.com, a WAP version of the site?

Jeff


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
I don't understand,

but it works (facebook does not auto-redirect to m.facebook) when I don't use Squid.

On Fri, Apr 16, 2010 at 9:37 PM, Jeff Peng  wrote:
> I'd like to help do a test for you, but here the gov blocks
> facebook.com so I can't access it.
> It seems this is a server side effection, less about squid.
> Just by guess, facebook see the http1.0 request which is coming from
> squid, for that it returns a 302 redirection.
>
> Jeff.
>
> On Fri, Apr 16, 2010 at 10:29 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
> ̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>> (Request-Line)  GET / HTTP/1.1
>> Host    facebook.com
>> User-Agent      Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1)
>> Gecko/20090616 Firefox/3.5
>> Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>> Accept-Language en-us,en;q=0.5
>> Accept-Encoding gzip,deflate
>> Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
>> Keep-Alive      300
>> Proxy-Connection        keep-alive
>>
>> (Request-Line)  GET / HTTP/1.1
>> Host    www.facebook.com
>> User-Agent      Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1)
>> Gecko/20090616 Firefox/3.5
>> Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>> Accept-Language en-us,en;q=0.5
>> Accept-Encoding gzip,deflate
>> Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
>> Keep-Alive      300
>> Proxy-Connection        keep-alive
>> Cookie  datr=1271428112-0af7243a3fb527ce26b19102066411b0215fb7586958b1e89
>>
>>
>>
>>
>>
>>
>> On Fri, Apr 16, 2010 at 9:25 PM, Jeff Peng  wrote:
>>> On Fri, Apr 16, 2010 at 10:22 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
>>> ̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
 (Status-Line)   HTTP/1.0 302 Moved Temporarily
 Cache-Control   private, no-store, no-cache, must-revalidate,
 post-check=0, pre-check=0
 Expires Sat, 01 Jan 2000 00:00:00 GMT
 Location        http://m.facebook.com/?w2m
>>>
>>>
>>> It seems the server returns a 302 for redirection to m.facebook.com.
>>> What's the request header?
>>>
>>> Jeff.
>>>
>>
>>
>>
>> --
>> -=-=-=-=
>> hix nganggur maning... nganggur maning
>>
>



-- 
-=-=-=-=
hix nganggur maning... nganggur maning


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Jeff Peng
I'd like to help run a test for you, but here the government blocks
facebook.com, so I can't access it.
It seems this is a server-side effect, not much to do with Squid.
Just a guess: facebook sees the HTTP/1.0 request coming from
Squid and returns a 302 redirect because of it.

Jeff.

On Fri, Apr 16, 2010 at 10:29 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
> (Request-Line)  GET / HTTP/1.1
> Host    facebook.com
> User-Agent      Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1)
> Gecko/20090616 Firefox/3.5
> Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> Accept-Language en-us,en;q=0.5
> Accept-Encoding gzip,deflate
> Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
> Keep-Alive      300
> Proxy-Connection        keep-alive
>
> (Request-Line)  GET / HTTP/1.1
> Host    www.facebook.com
> User-Agent      Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1)
> Gecko/20090616 Firefox/3.5
> Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> Accept-Language en-us,en;q=0.5
> Accept-Encoding gzip,deflate
> Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
> Keep-Alive      300
> Proxy-Connection        keep-alive
> Cookie  datr=1271428112-0af7243a3fb527ce26b19102066411b0215fb7586958b1e89
>
>
>
>
>
>
> On Fri, Apr 16, 2010 at 9:25 PM, Jeff Peng  wrote:
>> On Fri, Apr 16, 2010 at 10:22 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
>> ̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>>> (Status-Line)   HTTP/1.0 302 Moved Temporarily
>>> Cache-Control   private, no-store, no-cache, must-revalidate,
>>> post-check=0, pre-check=0
>>> Expires Sat, 01 Jan 2000 00:00:00 GMT
>>> Location        http://m.facebook.com/?w2m
>>
>>
>> It seems the server returns a 302 for redirection to m.facebook.com.
>> What's the request header?
>>
>> Jeff.
>>
>
>
>
> --
> -=-=-=-=
> hix nganggur maning... nganggur maning
>


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
(Request-Line)  GET / HTTP/1.1
Host    facebook.com
User-Agent  Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1)
Gecko/20090616 Firefox/3.5
Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive  300
Proxy-Connection        keep-alive

(Request-Line)  GET / HTTP/1.1
Host    www.facebook.com
User-Agent  Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1)
Gecko/20090616 Firefox/3.5
Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive  300
Proxy-Connection        keep-alive
Cookie  datr=1271428112-0af7243a3fb527ce26b19102066411b0215fb7586958b1e89






On Fri, Apr 16, 2010 at 9:25 PM, Jeff Peng  wrote:
> On Fri, Apr 16, 2010 at 10:22 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
> ̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>> (Status-Line)   HTTP/1.0 302 Moved Temporarily
>> Cache-Control   private, no-store, no-cache, must-revalidate,
>> post-check=0, pre-check=0
>> Expires Sat, 01 Jan 2000 00:00:00 GMT
>> Location        http://m.facebook.com/?w2m
>
>
> It seems the server returns a 302 for redirection to m.facebook.com.
> What's the request header?
>
> Jeff.
>



-- 
-=-=-=-=
hix nganggur maning... nganggur maning


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Jeff Peng
On Fri, Apr 16, 2010 at 10:22 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿
̿̿’\̵͇̿̿\=(•̪●)‏ ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
> (Status-Line)   HTTP/1.0 302 Moved Temporarily
> Cache-Control   private, no-store, no-cache, must-revalidate,
> post-check=0, pre-check=0
> Expires Sat, 01 Jan 2000 00:00:00 GMT
> Location        http://m.facebook.com/?w2m


It seems the server returns a 302 for redirection to m.facebook.com.
What's the request header?

Jeff.


[squid-users] Re: Re: Re: Creating a kerberos Service Principal.

2010-04-16 Thread Markus Moeller

Hi Bilal,

 Is the squidadmin user a member of the UnixAdmins group?

Regards
Markus

"GIGO ."  wrote in message 
news:snt134-w374039f11c582486d8169b9...@phx.gbl...


Dear Markus/all,


I am unable to create the keytab using msktutil; please help me out. I followed
the following steps:


1. I created an OU and named it UnixOU.
2. I created a group account in the UnixOU and named it UnixAdmins.
3. I made my Windows account bilal_admin part of the UnixAdmins group.
4. I set the UnixOU to be managed by UnixAdmins.
5. Then I synchronized the time of the Squid machine and Active Directory.
6. My fully qualified domain name is v.local and the NetBIOS name is V.
7. My domain controller name is vdc (fqdn=vdc.v.local).
8. The following lines were changed in krb5.conf, with the rest left
untouched.


  [libdefaults]
   default_realm=V.LOCAL


   [realms]

   V.LOCAL = {
kdc = vdc.v.local:88
admin_server = kerberos.example.com:749 (this was not changed;
does it matter at the keytab-creation step?)

default_domain = example.com (unchanged)
}
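For comparison, here is a minimal krb5.conf sketch with the leftover example.com values replaced to match the realm described in steps 6-7 above. This is a hedged guess at the intended setup; it assumes the domain controller vdc.v.local also answers kadmin requests on the default port 749:

```
[libdefaults]
    default_realm = V.LOCAL

[realms]
    V.LOCAL = {
        kdc = vdc.v.local:88
        admin_server = vdc.v.local:749
        default_domain = v.local
    }

[domain_realm]
    .v.local = V.LOCAL
    v.local = V.LOCAL
```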




Then I ran the following commands to create the keytab:

kinit squidad...@v.local


msktutil -c -b "OU=unixPrincipals" -s HTTP/v.local -h 
squidLhrTest.v.local -k /etc/squid/HTTP.keytab --computer-name 
squid-http --upn HTTP/v.local --server vdc.v.local --verbose


Output of the Command:

-- init_password: Wiping the computer password structure
-- finalize_exec: Determining user principal name
-- finalize_exec: User Principal Name is: HTTP/v.lo...@v.local
-- create_fake_krb5_conf: Created a fake krb5.conf file: 
/tmp/.mskt-3550krb5.conf

-- get_krb5_context: Creating Kerberos Context
-- try_machine_keytab: Using the local credential cache: 
/tmp/.mskt-3550krb5_ccache
-- try_machine_keytab: krb5_get_init_creds_keytab failed (Client not found 
in Kerberos database)

-- try_machine_keytab: Unable to authenticate using the local keytab
-- try_ldap_connect: Connecting to LDAP server: vdc.v.local
-- try_ldap_connect: Connecting to LDAP server: vdc.v.local
SASL/GSSAPI authentication started
SASL username: squidad...@v.local
SASL SSF: 56
SASL installing layers
-- ldap_get_base_dn: Determining default LDAP base: dc=v,dc=local
Warning: No DNS entry found for squidLhrTest.v.local
-- get_short_hostname: Determined short hostname: squidLhrTest-v-local
-- finalize_exec: SAM Account Name is: squid-http$
Updating all entries for squidLhrTest.v.local in the keytab 
/etc/squid/HTTP.keytab

-- try_set_password: Attempting to reset computer's password
-- ldap_check_account: Checking that a computer account for squid-http$ 
exists

No computer account for squid-http found, creating a new one.
Error: ldap_add_ext_s failed (Insufficient access)
Error: ldap_check_account failed (No CSI structure available)
Error: set_password failed
-- krb5_cleanup: Destroying Kerberos Context
-- ldap_cleanup: Disconnecting from LDAP server
-- init_password: Wiping the computer password structure


Please help me resolve this issue.

regards,

Bilal Aslam






To: squid-users@squid-cache.org
From: hua...@moeller.plus.com
Date: Fri, 9 Apr 2010 08:10:19 +0100
Subject: [squid-users] Re: Re: Creating a kerberos Service Principal.

Hi Bilal,

I create a new OU in Active Directory like OU=UnixPrincipals,DC=... I
then create a Windows Group UnixAdministrators and add the Windows account
of the UnixAdministrators to it. Finally I change the permissions on the
OU=UnixPrincipals so that the members of the group UnixAdministrators have
full rights (or limited rights ) for objects under this OU.

Regards
Markus

"GIGO ." wrote in message
news:snt134-w395b3433738667ded2186eb9...@phx.gbl...

Markus could not get you please can you elaborate a bit.


thank you all!

regards,

Bilal



To: squid-users@squid-cache.org
From: hua...@moeller.plus.com
Date: Thu, 8 Apr 2010 20:04:30 +0100
Subject: [squid-users] Re: Creating a kerberos Service Principal.

BTW You do not need Administrator rights. You can set permission for
different Groups on OUs for example for Unix Kerberos Admins.

Markus

"Khaled Blah" wrote in message
news:n2j4a3250ab1004080957id2f4a051xb31445428c62b...@mail.gmail.com...
Hi Bilal,

1. ktpass and msktutil practically do the same, they create keytabs
which include the keys that squid will need to decrypt the ticket it
receives from the user. However ktpass only creates a file which you
will then have to securely transfer to your proxy server so that squid
can access it. Using msktutil on your proxy server, you can get the
same keytab without having to transfer it. Thus, msktutil saves you
some time and hassle. AFAIR both need "Administrator" rights, which
means the account used for ktpass/msktutil needs to be a member of the
Administrator group.


2. To answer this question, one would need more information about your
network and your setup. Basically, mixing any other authenticat

Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
(Status-Line)   HTTP/1.0 302 Moved Temporarily
Cache-Control   private, no-store, no-cache, must-revalidate,
post-check=0, pre-check=0
Expires Sat, 01 Jan 2000 00:00:00 GMT
Location        http://m.facebook.com/?w2m
Pragma  no-cache
Content-Type    text/html; charset=utf-8
Content-Length  0
X-Cache MISS from beetz
X-Cache-Lookup  MISS from beetz:2210
Via 1.0 beetz:2210 (squid/2.7.STABLE6)
Connection  keep-alive
Proxy-Connection        keep-alive



That only happens when I use Squid (not transparent, because I want
to test it first).


On Fri, Apr 16, 2010 at 9:12 PM, Jeff Peng  wrote:
> On Fri, Apr 16, 2010 at 8:36 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’\̵͇̿̿\=(•̪●)‏
> ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>> On Fri, Apr 16, 2010 at 7:34 PM, Jeff Peng  wrote:
>>> Have a package-capuring tool like HttpWatch to see what happened in the 
>>> process.
>>
>> what is the name ? i use ubuntu 9.10
>
> for IE it's httpwatch.
> for firefox (windows or linux) there is another similar one "httpfox".
>
> Jeff.
>



-- 
-=-=-=-=
hix nganggur maning... nganggur maning


[squid-users] Re: Re: Re: Creating a kerberos Service Principal.

2010-04-16 Thread Markus Moeller

Hi Bilal,

In your case the browser is returning an NTLM token, not a Kerberos token, 
which is why squid_kerb_auth will deny access.
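You can see this directly in the request Bilal posted: the Proxy-Authorization blob is base64, and an NTLM token always begins with the literal signature "NTLMSSP" (a Kerberos/SPNEGO blob would typically start with "YII..."). A quick check, using the blob exactly as quoted in the trace:

```shell
# Decode the start of the Negotiate blob from Bilal's trace;
# the first seven bytes spell out the NTLMSSP signature
printf 'TlRMTVNTUAABB4IIogAFASgKDw==' | base64 -d | head -c 7
# prints: NTLMSSP
```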


Regards
Markus

"GIGO ."  wrote in message 
news:snt134-w155de8e05828b08d15c09ab9...@phx.gbl...


Dear Nick,

This was the result of my klist -k command:

[r...@squidlhrtest log]# klist -k /etc/squid/HTTP.keytab
Keytab name: FILE:/etc/squid/HTTP.keytab
KVNO Principal
 --
2 HTTP/vdc.v.com...@v.com.pk
2 HTTP/vdc.v.com...@v.com.pk
2 HTTP/vdc.v.com...@v.com.pk
---

I recreated the SPN as follows in my new lab (the domain controller name is now 
vdc.v.local and the proxy name is squidLhrTest):
msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.local -h squidLhrTest.v.local -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn 
HTTP/squidLhrTest.v.local --server vdc.v.local --verbose




However, whenever a client tries to access the internet this error appears:

CacheHost: squidLhrTest
ErrPage: ERR_CACHE_ACCESS_DENIED
Err: [none]
TimeStamp: Fri, 16 Apr 2010 10:43:51 GMT
ClientIP: 10.1.82.54
HTTP Request:
GET /isapi/redir.dll?prd=ie&ar=hotmail HTTP/1.1
Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, 
application/x-shockwave-flash, */*

Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Host: www.microsoft.com
Proxy-Authorization: Negotiate 
TlRMTVNTUAABB4IIogAFASgKDw==




Thank you so much for your consideration, Nick. Yes, despite a lot of 
effort I have not been able to get this thing to work, and am frustrated now. 
However, along the way I at least learnt many things :)




regards,

Bilal Aslam


















From: nick.cairncr...@condenast.co.uk
To: gi...@msn.com
Date: Fri, 16 Apr 2010 09:39:11 +0100
Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.

Bilal,

I understand your frustration! First off: what happens when you run klist -k 
/etc/squid/HTTP.keytab?
As I understand it, shouldn't you be specifying the SPN as 
HTTP/yoursquidproxy and not your DC? You want to be able to authenticate 
from the squid proxy, using the HTTP service, against the squid-http computer 
account.


Nick





On 16/04/2010 08:43, "GIGO ." wrote:



Dear Nick/Markus,

I am totally lost and am not sure what to do; I need your help please. 
The problem is that my Kerberos authentication is not working. In my 
virtual environment I have two machines, one configured as Domain 
Controller and the other as the Squid proxy. I am trying to use the 
internet from my domain controller (Internet Explorer 7, and the DNS name 
is given instead of the IP). However, it only pops up an authentication 
window and never works like it should.





I have setup the squid authentication as follows:


Steps:

I copied the squid_kerb_auth files to correct directory. (SELinux is 
enabled)


cp -r squid_kerb_auth /usr/libexec/squid/

I then Installed the msktutil software

step No 1: i changed my krb5.conf file as follows;

krb5.conf-
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = V.COM.PK
dns_lookup_realm = no
dns_lookup_kdc = no
ticket_lifetime = 24h
forwardable = yes
default_keytab_name= /etc/krb5.keytab
; for windows 2003
default_tgs_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
default_tkt_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
permitted_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
[realms]
V.LOCAL = {
kdc = vdc.v.com.pk:88
admin_server = vdc.v.com.pk:749
default_domain = v.com.pk
}
[domain_realm]
.linux.home = V.COM.PK
.v.com.pk=V.COM.PK
v.local=V.COM.PK

[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}

Step 2: I verified the settings in resolv.conf & hosts file
--etc/resolv.conf---
nameserver 10.1.82.51 (My domain conroller and DNS)

/etc/hosts 
file

127.0.0.1 squidLhrTest localhost.localdomain localhost
10.1.82.52 squidLhrTest.v.com.pk
::1 localhost6.localdomain6 localhost6
---


Step 3:
i created the keytab as follows:
kinit administra...@v.local

msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.com.pk -h 
squidLhrTest.v.com.pk -k /etc/squid/HTTP.keytab --computer-name 
squid-http --upn HTTP/vdc.v.com.pk --server vdc.v.com.pk --verbose


Out put of my command:

[r...@squidlhrtest msktutil-0.3.16]# msktutil -c -b "CN=COMPUTERS" -s 
HTTP/vdc.v.com.pk -h squidLhrTest.v.com.pk -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn 
HTTP/vdc.v.com.pk --server vdc.v.com.pk --verbose

-- init_password: Wiping the

[squid-users] Re: Squid HTTP Keytab SPN question

2010-04-16 Thread Markus Moeller

Hi Nick,

 You do not need a DNS entry for AUTH1. By default squid_kerb_auth uses 
HTTP/gethostbyaddr(gethostbyname(hostname())), which means it canonicalises 
the hostname. You can change this by using the -S option.


 When you use msktutil you have to make sure that you do not have two 
entries in AD with the same SPN. If you still have a samba account in AD 
with the HTTP/ entry, delete the SPN with setspn -d ... .
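Two hedged commands for checking both points. setspn runs on a domain-joined Windows machine, and the -Q search option requires the 2008-era setspn (older versions only list per-account with -L); the hostnames below follow this thread's example:

```
REM On the AD side: search for accounts carrying the SPN
setspn -Q HTTP/squid1.mydomain

REM If a stale account also holds it, delete the duplicate SPN from it:
REM setspn -d HTTP/squid1.mydomain oldaccount
```

On the proxy itself, `klist -k /etc/squid/HTTP.keytab` shows which principals the keytab actually contains, so the two views can be compared.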


Regards
Markus



"Nick Cairncross"  wrote in message 
news:c7eb8a2c.1f285%nick.cairncr...@condenast.co.uk...

Hi,

I'd like confirmation that something is possible, but first it's best to detail 
what I want:


I want to use a separate computer account to authenticate my users against. 
I know that this requires an HTTP.keytab and computer in AD with SPN. I 
would like to use msktutil for this. 
If my proxy server is called SQUID1 and is already happily joined to the 
domain then I need to create a new machine account which I will call AUTH1.


1) Do I need to create a DNS entry for AUTH1 (with the same IP as SQUID1)?
2) If so, do I need just an A record?
3) I have evidently got confused over the msktutil switches and values and 
so I'm specifying something wrong. What have I done? See below...


I used this command after a kinit myusername:
msktutil -c -b "CN=COMPUTERS" -s HTTP/squid1.[mydomain] iz -k 
/etc/squid/HTTP.keytab --computer-name auth1 --upn HTTP/squid1 --server 
dc1 -verbose


This created the computer account auth1 in the computers ou, added 
HTTP/squid1.mydomain to SPN and HTTP/squid1.mydom...@mydomain to the UPN.

It also created the keytab HTTP.keytab. Klist reports:

  2 HTTP/squid1.[mydoma...@[mydomain]
  2 HTTP/squid1.[mydoma...@[mydomain]
  2 HTTP/squid1.[mydoma...@[mydomain]

However, cache.log shows this when I then fire up my IE:

2010/04/14 14:52:46| authenticateNegotiateHandleReply: Error validating user 
via Negotiate. Error returned 'BH gss_acquire_cred() failed: Unspecified GSS 
failure.  Minor code may provide more information. No principal in keytab 
matches desired name'


Thanks as always,
Nick








Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Jeff Peng
On Fri, Apr 16, 2010 at 8:36 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’\̵͇̿̿\=(•̪●)‏
ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
> On Fri, Apr 16, 2010 at 7:34 PM, Jeff Peng  wrote:
>> Have a package-capuring tool like HttpWatch to see what happened in the 
>> process.
>
> what is the name ? i use ubuntu 9.10

For IE it's HttpWatch.
For Firefox (Windows or Linux) there is a similar one, "HttpFox".

Jeff.
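As an alternative on Ubuntu (where HttpWatch is Windows-only), a command-line capture of the proxy traffic also shows the redirect. A sketch, assuming the browser is pointed at Squid on its default port 3128:

```
# Print packet payloads (-A) without truncation (-s 0) for all
# traffic to or from the proxy port, on any interface
sudo tcpdump -i any -A -s 0 'tcp port 3128'
```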


[squid-users] Re: squid_kerb_auth multiple GET request

2010-04-16 Thread Markus Moeller
In theory you can, but it has to be implemented in the client (e.g. the 
browser).


Regards
Markus

"Tiery DENYS"  wrote in message 
news:h2kfdcc38011004140653p92fd561fv81febc7501188...@mail.gmail.com...

Hi,

I am using squid with squid_kerb_auth plugin for authentication on a
kerberized network.
Squid listen on port 3128 and clients use this proxy.

The transparent authentication works pretty well, but if I look at the
network flow, I see that for each website request the client makes two
requests:
1) a normal GET request, to which Squid says "proxy authentication required"
2) a second GET request carrying the service ticket (TGS)

Is it possible for clients to send the service ticket in the first request
automatically?

Thanks in advance,

Tiery






Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
On Fri, Apr 16, 2010 at 7:34 PM, Jeff Peng  wrote:
> Have a package-capuring tool like HttpWatch to see what happened in the 
> process.

What is its name? I use Ubuntu 9.10.
But at other websites there is no problem.

>
>
>
> On Fri, Apr 16, 2010 at 6:56 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’\̵͇̿̿\=(•̪●)‏
> ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
>> dear all
>> i have another problem
>>
>> i use squid 2.x latest
>>
>> if i use squid
>> when i open facebook.com
>>
>> always directly to m.facebook.com
>>
>> if im not use squid it's normal...
>>
>> --
>> -=-=-=-=
>> hix nganggur maning... nganggur maning
>>
>



-- 
-=-=-=-=
hix nganggur maning... nganggur maning


Re: [squid-users] facebook always direct to m.facebook

2010-04-16 Thread Jeff Peng
Use a packet-capturing tool like HttpWatch to see what happened in the process.



On Fri, Apr 16, 2010 at 6:56 PM, Die~~ ٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’\̵͇̿̿\=(•̪●)‏
ɹɐzǝupɐɥʞ ɐzɹıɯ  wrote:
> dear all
> i have another problem
>
> i use squid 2.x latest
>
> if i use squid
> when i open facebook.com
>
> always directly to m.facebook.com
>
> if im not use squid it's normal...
>
> --
> -=-=-=-=
> hix nganggur maning... nganggur maning
>


[squid-users] facebook always direct to m.facebook

2010-04-16 Thread •̪●
Dear all,
I have another problem.

I use squid 2.x latest.

When I use Squid
and open facebook.com,

it always redirects to m.facebook.com.

When I'm not using Squid it's normal...

-- 
-=-=-=-=
hix nganggur maning... nganggur maning


RE: [squid-users] Re: Re: Creating a kerberos Service Principal.

2010-04-16 Thread GIGO .

Dear Nick,
 
This was the result of my klist -k command:

[r...@squidlhrtest log]# klist -k /etc/squid/HTTP.keytab
Keytab name: FILE:/etc/squid/HTTP.keytab
KVNO Principal
 --
2 HTTP/vdc.v.com...@v.com.pk
2 HTTP/vdc.v.com...@v.com.pk
2 HTTP/vdc.v.com...@v.com.pk
---

I recreated the SPN as follows in my new lab (the domain controller name is now 
vdc.v.local and the proxy name is squidLhrTest):
msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.local -h squidLhrTest.v.local -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn 
HTTP/squidLhrTest.v.local --server vdc.v.local --verbose
 
 
 
However, whenever a client tries to access the internet this error appears:
 
CacheHost: squidLhrTest
ErrPage: ERR_CACHE_ACCESS_DENIED
Err: [none]
TimeStamp: Fri, 16 Apr 2010 10:43:51 GMT
ClientIP: 10.1.82.54
HTTP Request:
GET /isapi/redir.dll?prd=ie&ar=hotmail HTTP/1.1
Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, 
application/x-shockwave-flash, */*
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Host: www.microsoft.com
Proxy-Authorization: Negotiate 
TlRMTVNTUAABB4IIogAFASgKDw==

 
 
Thank you so much for your consideration, Nick. Yes, despite a lot of effort 
I have not been able to get this thing to work, and am frustrated now. 
However, along the way I at least learnt many things :)
 
 
 
regards,
 
Bilal Aslam
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
> From: nick.cairncr...@condenast.co.uk
> To: gi...@msn.com
> Date: Fri, 16 Apr 2010 09:39:11 +0100
> Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.
>
> Bilal,
>
> I understand your frustration! First off: What happens when you klist -k 
> /etc/squid/HTTP.keytab
> As I understand it, shouldn't you be specifying the spn as 
> HTTP/yoursquidproxy and not your DC? You want to be able to authenticate from 
> the squid proxy, using the HTTP service to the squid-http computer account.
>
> Nick
>
>
>
>
>
> On 16/04/2010 08:43, "GIGO ." wrote:
>
>
>
> Dear Nick/Markus,
>
> I am totally lost in translation and am not sure what to do i need your help 
> please. The problem is that my kerberos authentication is not working. In my 
> virtual environment i have two machines one configured as Domain Controller 
> and the other one as SquidProxy. I am trying to use the internet from my 
> domain controller( internet explorer 7 & DNS name is given instead of the 
> ip). However it only popup a authentication window and never works like it 
> should.
>
>
>
>
> I have setup the squid authentication as follows:
>
>
> Steps:
>
> I copied the squid_kerb_auth files to correct directory. (SELinux is enabled)
>
> cp -r squid_kerb_auth /usr/libexec/squid/
>
> I then Installed the msktutil software
>
> step No 1: i changed my krb5.conf file as follows;
>
> krb5.conf-
> [logging]
> default = FILE:/var/log/krb5libs.log
> kdc = FILE:/var/log/krb5kdc.log
> admin_server = FILE:/var/log/kadmind.log
> [libdefaults]
> default_realm = V.COM.PK
> dns_lookup_realm = no
> dns_lookup_kdc = no
> ticket_lifetime = 24h
> forwardable = yes
> default_keytab_name= /etc/krb5.keytab
> ; for windows 2003
> default_tgs_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
> default_tkt_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
> permitted_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
> [realms]
> V.LOCAL = {
> kdc = vdc.v.com.pk:88
> admin_server = vdc.v.com.pk:749
> default_domain = v.com.pk
> }
> [domain_realm]
> .linux.home = V.COM.PK
> .v.com.pk=V.COM.PK
> v.local=V.COM.PK
>
> [appdefaults]
> pam = {
> debug = false
> ticket_lifetime = 36000
> renew_lifetime = 36000
> forwardable = true
> krb4_convert = false
> }
>
> Step 2: I verified the settings in resolv.conf & hosts file
> --etc/resolv.conf---
> nameserver 10.1.82.51 (My domain conroller and DNS)
>
> /etc/hosts 
> file
> 127.0.0.1 squidLhrTest localhost.localdomain localhost
> 10.1.82.52 squidLhrTest.v.com.pk
> ::1 localhost6.localdomain6 localhost6
> ---
>
>
> Step 3:
> i created the keytab as follows:
> kinit administra...@v.local
>
> msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.com.pk -h squidLhrTest.v.com.pk 
> -k /etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/vdc.v.com.pk 
> --server vdc.v.com.pk --verbose
>
> Out put of my command:
>
> [r...@squidlhrtest msktutil-0.3.16]# msktutil -c -b "CN=COMPUTERS" -s 
> HTTP/vdc.v.com.pk -h squidLhrTest.v.com.pk -k /etc/squid/HTTP.keytab 
> --computer-name squid-http --upn HTTP/vdc.v.com.pk --server vdc.v.com.pk 
> --verbose
> -- init_password: Wiping the computer pass
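Two things stand out in the quoted configuration (editor's observations, not confirmed in the thread): the krb5.conf mixes realm names (default_realm is V.COM.PK while the [realms] stanza defines V.LOCAL and kinit was run against @v.local), and the keytab was created with the SPN of the domain controller (HTTP/vdc.v.com.pk) rather than of the proxy itself. Nick's suggestion, sketched with the hostnames from this thread (verify the flags against your msktutil version), would key the SPN and UPN to the proxy host:

msktutil -c -b "CN=COMPUTERS" \
    -s HTTP/squidLhrTest.v.com.pk \
    -h squidLhrTest.v.com.pk \
    -k /etc/squid/HTTP.keytab \
    --computer-name squid-http \
    --upn HTTP/squidLhrTest.v.com.pk \
    --server vdc.v.com.pk --verbose

Browsers request a ticket for the hostname they are configured to use as the proxy, so the keytab must hold keys for that name.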

[squid-users] squid 2.x latest

2010-04-16 Thread •̪●
1271413127.489    166 192.168.0.50 TCP_MISS/200 634 GET
http://openx.detik.com/delivery/lg.php? - DIRECT/203.190.241.40
image/gif
1271413127.540    224 192.168.0.50 TCP_MISS/200 6178 GET
http://www.detik.com/images/content/2010/04/16/317/Nokia-C3-C6-and-E5(softsailor)dalam-200.jpg
- DIRECT/203.190.241.43 image/jpeg
1271413127.602    273 192.168.0.50 TCP_MISS/200 634 GET
http://openx.detik.com/delivery/lg.php? - DIRECT/203.190.241.40
image/gif
1271413127.707    291 192.168.0.50 TCP_MISS/200 634 GET
http://openx.detik.com/delivery/lg.php? - DIRECT/203.190.242.71
image/gif
1271413127.829    216 192.168.0.50 TCP_MISS/200 634 GET
http://openx.detik.com/delivery/lg.php? - DIRECT/203.190.241.40
image/gif
1271413128.034    105 192.168.0.50 TCP_MISS/200 634 GET
http://openx.detik.com/delivery/lg.php? - DIRECT/203.190.241.40
image/gif
1271413128.137    157 192.168.0.50 TCP_MISS/200 634 GET
http://openx.detik.com/delivery/lg.php? - DIRECT/203.190.242.71
image/gif


Why does it still MISS?


even though I have set:
maximum_object_size 102400 KB
minimum_object_size 0 KB
cache_mem 256 MB
cache_swap_low 70%
cache_swap_high 99%

What's wrong?
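A likely explanation, though the thread itself does not answer the question: every MISSed URL in the log carries a query string (lg.php?...), and the stock Squid 2.x squid.conf refuses to cache such dynamic URLs by default:

# Default rules shipped in the Squid 2.x squid.conf; while these are
# active, URLs containing "cgi-bin" or "?" are never cached:
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

Removing or relaxing these lines lets Squid cache query-string URLs when the origin's Cache-Control/Expires headers allow it. Note that ad-delivery beacons such as lg.php usually send headers forbidding caching anyway, so repeated MISSes on those URLs can be expected regardless; the size and memory directives quoted above do not affect this.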


Re: [squid-users] Re: Yahoo mail Display problem

2010-04-16 Thread Kinkie
> - Original Message 
> From: goody goody 
> To: squid-users@squid-cache.org
> Sent: Thu, April 15, 2010 12:16:38 PM
> Subject: Yahoo mail Display problem
>
> Hi,
>
> I am running Squid 2.5 on FreeBSD 5.4-RELEASE, and have been for a number
> of years; it was working very well.

Hi Goody.
  2.5 is a really OLD version of Squid (as in: YEARS old). The most
up-to-date versions are 2.7 and 3.1.1, and they contain countless
improvements and fixes; using those versions you are also more likely to
get help. If you can consider upgrading, please do so.


-- 
/kinkie


Re: [squid-users] Problem downloading file Greater then 2 GB

2010-04-16 Thread Kinkie
On Fri, Apr 16, 2010 at 5:18 AM, Jacques Beaudoin
 wrote:
> Hi,
>
> I'm using version 3.1.1 of Squid on a SUSE 10.2 server,
> and my users cannot download files greater than 2 GB.

Hi Jacques,
  could you please post the output of the command "squid -v"?
Thanks!
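For context (an editor's note, not part of Kinkie's reply): downloads that fail right at the 2 GB boundary usually point to a build with a 32-bit off_t, and the configure options listed by squid -v show whether large-file support was compiled in. A minimal sketch of checking that output, with an illustrative sample string (the real string comes from running squid -v on the server):

```python
def has_large_file_support(squid_v: str) -> bool:
    """Heuristic check of `squid -v` output for >2 GB object support."""
    # Builds generally need --with-large-files (or, on Squid 2.x,
    # --enable-large-cache-files) unless the platform's off_t is
    # 64-bit by default.
    flags = ("--with-large-files", "--enable-large-cache-files")
    return any(flag in squid_v for flag in flags)

# Illustrative output only; run `squid -v` to get the real value.
sample = ("Squid Cache: Version 3.1.1\n"
          "configure options: '--prefix=/usr' '--with-large-files'")
print(has_large_file_support(sample))  # True
```

If the flag is absent from the real output, rebuilding Squid with large-file support enabled is the usual remedy.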

-- 
/kinkie