[squid-users] Re: Squid 3.1 with Tproxy and WCCP on Cisco 3550

2013-10-28 Thread Dr.x
hi ,

Since you are using tproxy, you cannot use the well-known web-cache service in this case. You need to define two dynamic WCCP services instead, so your WCCP settings will look similar to:
===
wccp2_router x.x.x.x
wccp_version 2
wccp2_forwarding_method 2
wccp2_return_method 1
wccp2_assignment_method 1
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80
wccp2_rebuild_wait off


After that, restart Squid.


Now remove the web-cache service, because we don't need it:
MLS#conf t
MLS(config)#no ip wccp web-cache
Then, in the global config of your MLS, put:
MLS#conf t
MLS(config)#ip wccp 80
MLS(config)#ip wccp 90
=
interface FastEthernet0/15
 description PPTP-Server
 no switchport
 ip address X.X.X.X 255.255.255.252 
 ip wccp 80 redirect in
 ip wccp 90 redirect out
=

interface GigabitEthernet0/2
 description ***Squid-Proxy***
 no switchport
 ip address X.X.X.X 255.255.255.248 
 ip wccp redirect exclude in


Do the above, then send me the Squid logs from:
 /var/log/squid/cache.log
 /var/log/squid/access.log
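To pull the WCCP handshake lines out of cache.log quickly, something like the following works (log path taken from above; adjust if your install differs):

```shell
# Show recent WCCP-related lines from Squid's cache.log, if the file exists.
log=/var/log/squid/cache.log
if [ -r "$log" ]; then
  grep -i wccp "$log" | tail -n 20   # WCCP "Here I am" / "I see you" handshake messages
else
  echo "no readable cache.log at $log"
fi
```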
==
Also send me the output of these commands on your Cisco MLS:
#debug ip wccp packets
and
#show ip wccp
===

Do the above and let me know.

If your CPU usage gets high, tell me; I may have a better solution for you.

regards



-----
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-with-Tproxy-and-WCCP-on-Cisco-3550-tp4662987p4662989.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: something not being understood in ,workers , squid proces , cores mapping

2013-10-28 Thread Dr.x
hi Alex,
I'm happy to hear that from you.

Nice update to the wiki.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/something-not-being-understood-in-workers-squid-proces-cores-mapping-tp4662942p4662990.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.1 with Tproxy and WCCP on Cisco 3550

2013-10-29 Thread Dr.x
Oh, it seems your MLS does not support dynamic WCCP services:

http://www.cisco.com/en/US/docs/switches/lan/catalyst3550/software/release/12.1_19_ea1/configuration/guide/swwccp.html


regards





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-with-Tproxy-and-WCCP-on-Cisco-3550-tp4662987p4662993.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.1 with Tproxy and WCCP on Cisco 3550

2013-10-29 Thread Dr.x
hi ,
Not all multilayer switches have full WCCP capability. Tproxy requires the dynamic services we defined above, but the 3550 MLS does not support them.


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-with-Tproxy-and-WCCP-on-Cisco-3550-tp4662987p4662995.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: does rock type deny being dedicated to specific process ??

2013-10-29 Thread Dr.x


hi Amos,
===
Question 1:

About aufs cache_dirs and SMP: you say aufs does not work with SMP, but my question is more specific. I see that aufs dirs cannot be shared between workers, but they do work if we assign a specific cache_dir to a specific process.

In my opinion this could partially solve our CPU-load problem, because without SMP all cache_dirs, aufs or not, shared one process, and that process could not benefit from all the cores (up till now, this is my understanding).

So, to restate my question: we should not say that aufs dirs do not work with SMP at all. To be accurate, aufs does not work with shared workers, but we can still benefit from SMP by binding each aufs cache_dir instance to its own worker and core, and as a result partially spread the CPU load.

I just want to understand this issue; please correct me if the information I posted here is wrong.

===

Amos Jeffries-2 wrote
> PS:  if you want to experiment, you could try given the frontend and
> backend config two slightly different cache_dir lines. So the frontend
> has a "read-only" flag but otherwise identical settings. In theory that
> would make the frontend able to HIT on the rock cache, but only the
> backends able to store things there.
> 
> Amos

I don't understand yet: why add a cache_dir to the frontend? Did you mean to put the cache_dir into Squid process #1?

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/does-rock-type-deny-being-dedicated-to-specific-process-tp4662919p4662997.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: SQUID in TPROXY - do not resolve

2013-10-30 Thread Dr.x
hi Amos,

Is there a method to make Squid enforce its own DNS answer and ignore the client's DNS answer?

=
I mean, if client X resolved 1.1.1.1 and Squid resolved 2.2.2.2, I want the client to go to 2.2.2.2, not to 1.1.1.1.
=

regards





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SQUID-in-TPROXY-do-not-resolve-tp4662819p4663009.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.2 SMP Problem

2013-10-30 Thread Dr.x
hi all,
I tried that on the latest stock CentOS 6.4 kernel, but it gives me:

[root@squid ~]#  sysctl -w net.local.dgram.recvspace=262144
error: "net.local.dgram.recvspace" is an unknown key

What does that mean?

I am trying to use the CentOS 6.4 kernel without recompiling it, but I get kid registration timeouts, and I am guessing this is the cause!
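For what it's worth, "unknown key" usually means the running kernel simply does not expose that sysctl: net.local.dgram.recvspace is a *BSD name. A sketch of where the closest Linux-side knobs live (assuming a stock Linux kernel):

```shell
# Linux queues UNIX-domain datagrams by message count, not bytes; this is the knob:
cat /proc/sys/net/unix/max_dgram_qlen
# General socket buffer sizes live under net.core instead:
cat /proc/sys/net/core/rmem_default
```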

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-2-SMP-Problem-tp4658906p4663020.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: SQUID in TPROXY - do not resolve

2013-10-30 Thread Dr.x
hi Amos,

My request is: I don't want to install SquidGuard on my machine; I want to rely on Squid's own DNS instead.

I mean I want to point Squid at Norton DNS, and in that case, if the client's DNS result and Squid's DNS result do not match, the client's request must be blocked!

I've tried:

client_dst_passthru off
host_verify_strict on

but no luck; the client can still bypass the web filtering.

I mean the client is supposed to visit the destination IP that results from Squid's DNS resolving, not the IP from its own resolving! But up till now, although I put the two directives above, the client still visits the IP resulting from its own DNS resolving!
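For reference, the Squid side of "use the proxy's resolver" can be sketched like this (these directive names are real Squid 3.2+ options; the resolver addresses are placeholders for the Norton DNS IPs):

```
dns_nameservers x.x.x.x y.y.y.y   # make Squid itself resolve via Norton DNS
client_dst_passthru off           # connect to Squid's resolved IP, not the client's original destination
host_verify_strict on             # deny requests whose Host header fails verification
```

Whether this fully prevents bypass depends on the interception mode: with tproxy the client has already connected to the IP it resolved before Squid ever sees the request.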


Wish you could clarify.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SQUID-in-TPROXY-do-not-resolve-tp4662819p4663024.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: SQUID ERROR

2013-10-30 Thread Dr.x
hi, we don't see any attachments.

Please post your squid.conf and cache.log inline.




regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SQUID-ERROR-tp4663015p4663016.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: wccp2 does not working

2013-11-01 Thread Dr.x
hi ,

Sokvantha wrote

Did you find a solution to your problem?

regards






-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/wccp2-does-not-working-tp4659056p4663050.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] frequent "TCP_MISS_ABORTED" is it harmfull ???

2013-11-01 Thread Dr.x
hi, I use rock with SMP and I have a very low hit ratio!

I also see log entries of:

TCP_MISS_ABORTED

for example:
1383254455.257   4846 x.x.x.x TCP_MISS_ABORTED/000 0 GET
http://imgcdn.ptvcdn.net/pan_img/appDownload/PandoraService/Service_Info.xml
===
Does this mean there is degradation, or is it a normal log entry?

regards






-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/frequent-TCP-MISS-ABORTED-is-it-harmfull-tp4663051.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: frequent "TCP_MISS_ABORTED" is it harmfull ???

2013-11-01 Thread Dr.x


hi Amos,

I pumped about 500 users through it and got horrible results: slow browsing and YouTube interruptions.

I measured the interruption times while watching YouTube and found they correlate with the "TCP_MISS_ABORTED" entries.

Again, I have no NATting in my network.

Where should I start troubleshooting?

Is it likely something in the Squid config file, or is it just related to networking issues?

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/frequent-TCP-MISS-ABORTED-is-it-harmfull-tp4663051p4663068.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: frequent "TCP_MISS_ABORTED" is it harmfull ???

2013-11-01 Thread Dr.x
Eliezer Croitoru-2 wrote
hi ,

Thanks a lot.

I tested without SMP and it gave the same problem!

As I remember, last time I attached Squid to a sub-interface of the router, not to a physical interface!

But note that when I point my browser at Squid's ip:port directly from a remote machine, there are no interruptions and no TCP_MISS_ABORTED!

Let me test connecting Squid directly to the router and I will give you the results.

regards





-----
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/frequent-TCP-MISS-ABORTED-is-it-harmfull-tp4663051p4663072.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] how enhance browsing quality for top ten sites on my squid ??!!

2013-11-01 Thread Dr.x
hi ,

From the cache manager we have the top ten sites.

My question is: how can I make Squid optimize those sites? For example, how can I make Squid cache them in cache_mem instead of a cache_dir? ("In my opinion, serving from RAM is better than serving from disk.")

Are there refresh_patterns I could add for those specific sites (the top ten visited)?
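The memory-cache knobs involved can be sketched like this (values are illustrative, not tuned recommendations):

```
cache_mem 4096 MB                      # size of the in-RAM object cache
maximum_object_size_in_memory 512 KB   # largest object eligible to stay in cache_mem
memory_cache_mode always               # the default: any cachable response may live in RAM
```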



Any advice?

Wish you could clarify.


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-enhance-browsing-quality-for-top-ten-sites-on-my-squid-tp4663073.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] WARNING: Could not determine this machines public hostname. Please configure one or set 'visible_hostname'.

2013-11-01 Thread Dr.x
hi ,
With SMP, I am trying to confine ACL processing to a specific process. I mean that ACL verification wastes a lot of CPU, and I think that dedicating a specific process to ACL verification will load-balance my squid.conf across my processes.

Here is my config:

I can't start Squid!

WARNING: Could not determine this machines public hostname. Please configure
one or set 'visible_hostname'.

I get this error although I put visible_hostname inside the process conditional!
Here are the SMP-related parts of the config:
##
if ${process_number} = 4
visible_hostname squid
acl blockkeywords dstdomain "/etc/squid/deny.acl"
http_access deny blockkeywords
http_port 127.0.0.1:4008
http_access allow localhost manager
endif
###
workers 4
cpu_affinity_map process_numbers=1,2,3 cores=1,3,5
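One likely cause, sketched as an assumption: every Squid process checks visible_hostname at startup, so directives that all processes need must sit outside the per-process conditional, keeping only the per-process parts inside it:

```
visible_hostname squid                               # global: every worker needs this
workers 4
cpu_affinity_map process_numbers=1,2,3 cores=1,3,5
if ${process_number} = 4
http_port 127.0.0.1:4008
acl blockkeywords dstdomain "/etc/squid/deny.acl"
http_access deny blockkeywords
http_access allow localhost manager
endif
```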



Here is the squid -k parse output:

2013/11/01 16:37:29| Processing: refresh_pattern . 0 20% 4320
2013/11/01 16:37:29| WARNING: 'squid' rDNS test failed: (0) No error.
2013/11/01 16:37:29| WARNING: Could not determine this machines public
hostname. Please configure one or set 'visible_hostname'.
2013/11/01 16:37:29| WARNING: 'squid' rDNS test failed: (0) No error.
2013/11/01 16:37:29| WARNING: Could not determine this machines public
hostname. Please configure one or set 'visible_hostname'.
2013/11/01 16:37:29| WARNING: 'squid' rDNS test failed: (0) No error.
2013/11/01 16:37:29| WARNING: Could not determine this machines public
hostname. Please configure one or set 'visible_hostname'.



Why?!



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-Could-not-determine-this-machines-public-hostname-Please-configure-one-or-set-visible-hostna-tp4663074.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how enhance browsing quality for top ten sites on my squid ??!!

2013-11-01 Thread Dr.x
Alex Rousskov wrote
> On 11/01/2013 01:26 PM, Dr.x wrote:
> 
>> from cache manager we have top ten sites ,
>> 
>> my question is how to let squid optimize those sites  ?? 
>> 
>> as an example , i mean how to let squid use cache mem for cahcing them
>> not
>> use cache dir   ???
> 
> 
> You may experiment with the memory_cache_mode directive, but most likely
> the default is what you want. The two caches (memory and disk) are not
> exclusive of each other -- the same entry may be in both caches at the
> same time. Squid will use the [faster] memory cache when it can.
> 
> If you verified that a popular object is usually returned from disk
> while your memory_cache_mode and memory cache size restrictions allow
> for that object to be cached and preserved in memory, then there is
> probably a bug somewhere.
> 
> 
> HTH,
> 
> Alex.


hi Alex,
Again, modifying cache_mem is just a suggestion from me; it is not mandatory. What I am asking is: given my top ten sites, which methods can I use to enhance browsing of those sites and make their caching better?

That's it!

Would it be better to lengthen the object freshness lifetime for those specific sites? ("Just a suggestion from me, and it may be wrong.")


-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-enhance-browsing-quality-for-top-ten-sites-on-my-squid-tp4663073p4663080.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: WARNING: Could not determine this machines public hostname. Please configure one or set 'visible_hostname'.

2013-11-01 Thread Dr.x
Alex Rousskov wrote
> On 11/01/2013 02:39 PM, Dr.x wrote:
> 
>> in smp , im trying to let the acl load on a specific process , i mean
>> that 
>> the acl verification watse alot of cpu ,
> 
> ACL verification _wastes_ CPU only if you do not need those ACLs to be
> verified. If that is the case, simply remove them. In all other cases,
> the CPU cycles are not wasted but spent on doing useful work.
> 
> 
>> and i think that if i use a
>> specific process for acl verification will  make a loabd balance of
>> squid.conf on  my multi process.
> 
> Sorry, I do not follow your logic. Restricting some work such as ACL
> processing to one CPU is more likely to make the "balance" worth, not
> better, right? Why do you want to do that?
> 
> Is your current CPU core balance a bottleneck? Have you solved all other
> Squid-related problems? If you cannot answer "yes" to both questions,
> then you are most likely doing what some call "premature optimization".
> Focus on removing what you think is the performance bottleneck instead.
> If you do not know where the bottleneck is, then focus on finding it.
> 
> 
> Meanwhile, until you reach a very good level of Squid understanding, it
> is best to use the same squid.conf for all Squid workers. While certain
> exceptions to this rule of thumb are technically possible, random
> attempts at giving different workers different configurations will
> usually result in a broken Squid.
> 
> 
>> here is Smp related configs :
>> ##
>> if ${process_number} = 4
>> visible_hostname squid
>> acl blockkeywords dstdomain "/etc/squid/deny.acl"
>> http_access deny blockkeywords
>> http_port 127.0.0.1:4008
>> http_access allow localhost manager
>> endif
>> ###
>> workers 4
>> cpu_affinity_map process_numbers=1,2,3 cores=1,3,5
>> 
> 
> 
>> 2013/11/01 16:37:29| WARNING: Could not determine this machines public
>> hostname. Please configure one or set 'visible_hostname'.
>> 
>> 
>> why ?!!
> 
> Because you have not configured visible_hostname for any of your Squid
> processes except process #4.
> 
> BTW, you also have not configured any access rules for most of your
> workers which will result in them denying all requests they get.
> 
> 
> HTH,
> 
> Alex.

hi ,
I just want to keep ACL checking away from process #1.

With the config above, I pumped about 500 users and monitored my cores. I found that process #1 sits at about 30% while all the others sit at about 10%.

I think that if I keep ACL verification away from process #1, it will be better for Squid!

Is my suggestion better, or should ACL verification be shared across all processes?





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-Could-not-determine-this-machines-public-hostname-Please-configure-one-or-set-visible-hostna-tp4663074p4663081.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: AW: [squid-users] Vary object loop

2013-11-02 Thread Dr.x
hi Amos,
I don't know if it is related, but I tried this with SMP: when I remove disk caching and depend only on memory caching, I get the Vary object loop!

I don't know whether it is related to disabling disk caching or not. When I bring back disk caching ("rock"), I don't see this error, but I do have slowness in Squid!

Can you explain why?

regards






-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Vary-object-loop-tp4662627p4663097.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: WARNING: Could not determine this machines public hostname. Please configure one or set 'visible_hostname'.

2013-11-02 Thread Dr.x
Alex Rousskov wrote
> http://wiki.squid-cache.org/Features/SmpScale#Will_similar_workers_receive_similar_amount_of_work.3F
> 
> (but that is not the only theory in existence and YMMV).
> 
> HTH,
> 
> Alex.


hi Alex, thanks a lot for the reply.

It was actually a misconfiguration: I was supposed to set visible_hostname for the other Squid processes too!

Again, regarding core/worker distribution:

I did visit
http://wiki.squid-cache.org/Features/SmpScale#Will_similar_workers_receive_similar_amount_of_work.3F
again and again, but I do not understand it 100%.

I did not find a clear answer to "will similar workers have similar CPU?", but as I understood it, the developers made a patch that optimizes the CPU distribution.

Still, in the results you posted there were clear differences in worker core utilization.

Do your results mean that we cannot get a fully even load distribution across cores?

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-Could-not-determine-this-machines-public-hostname-Please-configure-one-or-set-visible-hostna-tp4663074p4663099.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] load tpoxy wccp on multiple interfaces by smp ?

2013-11-02 Thread Dr.x
hi ,

It's just an idea for an update: we have 6000 users, 96 GB of RAM, 24 CPU cores, and Dell R720 hardware. I want to use SMP and handle them with Squid.

Q1: From the experience of users who have tried Squid SMP, can my hardware handle the 6000 users?
===
Q2: Can SMP let me use tproxy on 2 interfaces and share the caching workers across the two interfaces?

I mean my server will have eth1 and eth2 connected to the router:

eth1 is x.x.x.x
eth2 is y.y.y.y

Squid will be listening with tproxy on x.x.x.x and also with tproxy on y.y.y.y, and each interface will have its own WCCP service number, meaning several WCCP services will be active.

Again, I want to do this because one interface cannot carry more than 1 G of traffic ("my router cannot handle more than 1 G"), so I need 2 interfaces to distribute the network load.

Can Squid SMP handle what I want, without bugs?
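The layout described above might look like this in squid.conf (addresses, port, and service numbers are placeholders; whether WCCP+TPROXY balances cleanly across two interfaces is exactly what would need testing):

```
http_port x.x.x.x:3129 tproxy     # interception port bound to eth1's address
http_port y.y.y.y:3129 tproxy     # interception port bound to eth2's address
wccp2_router z.z.z.z              # placeholder router address
wccp2_service dynamic 80          # service pair for traffic redirected via eth1
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 90          # service pair for traffic redirected via eth2
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80
```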





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/load-tpoxy-wccp-on-multiple-interfaces-by-smp-tp4663100.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: frequent "TCP_MISS_ABORTED" is it harmfull ???

2013-11-02 Thread Dr.x
Amos Jeffries-2 wrote
> 
> It is bad for user experience, since it means they had some reason to 
> abort. It also wastes one socket FD on your proxy server, including the 
> memory resources necessary to track that connection on your machine and 
> every router along the path between it and the client.

hi Amos,

Was logging of TCP_MISS_ABORTED added in Squid 3.3.9?

I monitored a Squid 3.1.x serving 2000 users and never saw TCP_MISS_ABORTED in its logs. Does that mean my server with 2000 users has no aborted misses? ("I think that's impossible.")

I mean, maybe some new log states were added in Squid 3.3.9?

Please clarify.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/frequent-TCP-MISS-ABORTED-is-it-harmfull-tp4663051p4663101.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: frequent "TCP_MISS_ABORTED" is it harmfull ???

2013-11-02 Thread Dr.x
hi Amos,

So, in summary, I can say that it is normal behavior.


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/frequent-TCP-MISS-ABORTED-is-it-harmfull-tp4663051p4663104.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: load tpoxy wccp on multiple interfaces by smp ?

2013-11-03 Thread Dr.x
Amos Jeffries-2 wrote
> On 3/11/2013 5:22 p.m., Dr.x wrote:
>> hi ,
>>
>> its just an updating idea ,
>> we have 6000 users and we have 96 G ram and 24 CPU cores and DELR720
>> hardware ,
>> actually i want to use smp and want to handle them by squid
>> Q1-from the user experience who tried squid smp , can my hardware handle
>> the
>> 6000 users 
> 
> No. It can handle some amount of requests/sec and traffic/sec. But 
> "users" is not related to proxy capacity.
> 
> 6000 users doing 1 req/day, even the footstool under my desk can handle 
> that load.
> 6000 users doing ~150 req/sec each concurrently, you need a monster 
> amount of CPU to handle that load.


hi Amos, regarding the answer "no":

Currently I have a Squid server without SMP that handles 2500 users without slowness, with caching and with ACL web filtering, and it only occupies a few cores of my CPU.

Here is a screenshot of my Dell R720 server handling what I described above:
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4663110/584988478.png>

Although I took the snapshot outside of rush hour, you can see that only about 5 of the roughly 24 cores are busy and the others are always idle!

The question is why.

As we know, Squid cannot use all cores without SMP; but in my opinion, if a server without SMP could handle 2500 users, it should handle at least 5000 users with SMP.

Please clarify!

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/load-tpoxy-wccp-on-multiple-interfaces-by-smp-tp4663100p4663110.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: load tpoxy wccp on multiple interfaces by smp ?

2013-11-03 Thread Dr.x
hi Amos,

What is the maximum req/sec that Squid with SMP on my 24-core CPU can handle in my case?
 




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/load-tpoxy-wccp-on-multiple-interfaces-by-smp-tp4663100p4663112.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: load tpoxy wccp on multiple interfaces by smp ?

2013-11-03 Thread Dr.x
Eliezer Croitoru-2 wrote
> On 11/03/2013 12:41 PM, Dr.x wrote:
>> hi amos ,
>>
>> wts the maximum req/sec squid with smp of 24 cores cpu can handle in my
>> case
>> ?
> Just wondering to myself, what is the CPU of the machine?
> it's not about maximum but rather using this amount of CPU..
> you will need lots of workers to handle these cores so if you do have a 
> SMP that works on these CPU I would start with 3 workers to make sure I 
> understand how it all fits together and then go up into the 10 cores... 
> since each worker should be able to take about 900 requests per sec.
> 6000 users will be a lot of traffic that should be added little by 
> little to see how the load is balanced over the CPU HDD etc.
> Note that the cachemgr interface can give you couple good statistics to 
> get started with.
> 
> Regards,
> Eliezer
>>
>>
>>
>>
>>
>> -
>> Dr.x
>> --
>> View this message in context:
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/load-tpoxy-wccp-on-multiple-interfaces-by-smp-tp4663100p4663112.html
>> Sent from the Squid - Users mailing list archive at Nabble.com.
>>

hi ,
Thanks a lot; that seems like a good start. I will try it and give you a reply with results.

About your question above, my machine's specifications:

PowerEdge R720 technical specification
Form factor: 2U rack
Processors: Intel Xeon processor E5-2600 product family
Processor sockets: 2
Internal interconnect: 2 x Intel QuickPath Interconnect (QPI) links; 6.4 GT/s; 7.2 GT/s; 8.0 GT/s
Cache: 2.5 MB per core; core options: 2, 4, 6, 8
Chipset: Intel C600








-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/load-tpoxy-wccp-on-multiple-interfaces-by-smp-tp4663100p4663115.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: load tpoxy wccp on multiple interfaces by smp ?

2013-11-03 Thread Dr.x
Eliezer Croitoru-2 wrote
> On 11/03/2013 02:25 PM, Dr.x wrote:
>> Feature
>> *
>> PowerEdge R720 technical specification
>> Form factor 2U rack
>> Processors Intel ® Xeon ® processor E5-2600 product family
>> Processor sockets 2
>>   Internal interconnect2 x Intel QuickPath Interconnect (QPI) links; 6.4
>> GT/s; 7.2 GT/s; 8.0 GT/s
>>   Cache 2.5MB per core; core options: 2, 4, 6, 8
>> Chipset Intel C600
>> *
> I assume it's the 8 cores and doubles the threads which is probably what 
> you do have in hands.
> 
> "cat /proc/cpuinfo"
> should give the exact model of the CPU.
> so if it's 2 SOCKETS it means 16 real cores with shared 2.5MB cache per 
> couple cores unless there are new CPUs out there that INTEL doesn't 
> provide data on.
> 
> it's a very powerful machine!!
> 16 cores should handle about 11-12k requests per sec and even more 
> without any slowdown from the CPU and ram.
> when it comes to HDD it's another levels of speed which slows down 
> couple things.
> 
> Again adding little by little users on this monster should give you the 
> bigger picture on how to manage this beast.
> 
> Eliezer


What nice feedback from you; you really encouraged me to start with Squid 3.3.9 now!
But please have a look and verify: is it 16 or 24 real cores?

Here is the /proc/cpuinfo result:

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 45
model name  : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
stepping: 7
microcode   : 0x70b
cpu MHz : 2299.853
cache size  : 15360 KB
physical id : 0
siblings: 12
core id : 0
cpu cores   : 6
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes xsave avx lahf_lm ida arat xsaveopt pln pts dts tpr_shadow vnmi
flexpriority ept vpid
bogomips: 4599.70
clflush size: 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model   : 45
model name  : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
stepping: 7
microcode   : 0x70b
cpu MHz : 2299.853
cache size  : 15360 KB
physical id : 1
siblings: 12
core id : 0
cpu cores   : 6
apicid  : 32
initial apicid  : 32
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes xsave avx lahf_lm ida arat xsaveopt pln pts dts tpr_shadow vnmi
flexpriority ept vpid
bogomips: 4600.03
clflush size: 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 2
vendor_id   : GenuineIntel
cpu family  : 6
model   : 45
model name  : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
stepping: 7
microcode   : 0x70b
cpu MHz : 2299.853
cache size  : 15360 KB
physical id : 0
siblings: 12
core id : 1
cpu cores   : 6
apicid  : 2
initial apicid  : 2
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes xsave avx lahf_lm ida arat xsaveopt pln pts dts tpr_shadow vnmi
flexpriority ept vpid
bogomips: 4599.70
clflush size: 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 3
vendor_id   : GenuineIntel
cpu family  : 6
model   : 45
model name  : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
stepping: 7
microcode   : 0x70b
cpu MHz : 2
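From the cpuinfo above (two distinct "physical id" values, "cpu cores : 6", "siblings : 12"), this looks like 2 sockets x 6 cores = 12 real cores, with Hyper-Threading exposing 24 logical CPUs. A quick way to compute it from the standard /proc/cpuinfo fields:

```shell
# Sockets = distinct "physical id" values; real cores = sockets * "cpu cores";
# logical CPUs = number of "processor" entries (includes Hyper-Threads).
sockets=$(grep 'physical id' /proc/cpuinfo | sort -u | wc -l); [ "$sockets" -eq 0 ] && sockets=1
cores=$(awk -F: '/cpu cores/ {gsub(/ /, "", $2); print $2; exit}' /proc/cpuinfo); cores=${cores:-1}
logical=$(grep -c '^processor' /proc/cpuinfo)
echo "$sockets socket(s) x $cores core(s) = $((sockets * cores)) real cores; $logical logical CPUs"
```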

[squid-users] Re: load tpoxy wccp on multiple interfaces by smp ?

2013-11-03 Thread Dr.x
Eliezer Croitoru-2 wrote
> On 11/03/2013 09:52 PM, Eliezer Croitoru wrote:
>> It's an accurate description but it's more then nothing)
> Typo fix: it's not an accurate.
> 
> Eliezer

Well, thanks a lot for your time and reply.

I don't mind testing the machine and seeing the performance. I will tell you about the results.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/load-tpoxy-wccp-on-multiple-interfaces-by-smp-tp4663100p4663123.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] alot of "vary object loop" logs in squid 3.3.9 and slow performance

2013-11-05 Thread Dr.x
_swap_high 99
###
server_persistent_connections off
client_persistent_connections off
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 95
fqdncache_size 65535
ipcache_size 65535
ipcache_low 98
ipcache_high 99
#
### WCCP2 Config#
wccp2_router x.x.x.x
wccp_version 2
wccp2_rebuild_wait off
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
wccp2_service dynamic 92
wccp2_service_info 92 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 93
wccp2_service_info 93 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80
### 
cache_effective_user squid
cache_effective_group squid
###
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
###

Are these normal log entries or harmful ones?


==
I compiled Squid 3.3.9 with these options:
Squid Cache: Version 3.3.9
configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
'--includedir=/include' '--mandir=/share/man' '--infodir=/share/info'
'--sysconfdir=/etc' '--enable-cachemgr-hostname=drx' '--localstatedir=/var'
'--libexecdir=/lib/squid' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules' '--srcdir=.'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--disable-translation'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-filedescriptors=131072' '--with-large-files'
'--with-default-user=squid' '--enable-linux-netfilter'
'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS='
'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -Wall -O2' --enable-ltdl-convenience 

==

With the config above,
I have pumped only 500 users through the proxy, and I see slow browsing and
interrupted YouTube playback.
I tested without Squid and found that performance is better, with no YouTube
interruptions.

I wish to know what is wrong, and to fix this issue before pumping more
users into Squid.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/alot-of-vary-object-loop-logs-in-squid-3-3-9-and-slow-performance-tp4663137.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] squid regestiartion time out when loading long acl to all processess !!

2013-11-06 Thread Dr.x
hi,

with squid 3.3.9 & SMP,

I am using a long ACL, and it is shared among all of Squid's workers,

but I find in the logs that some kids time out during registration.
===
If I load this ACL into only one process ==> no problems with registration,
but then I cannot guarantee it is applied to everyone accessing Squid!


I think this issue is similar to (wccp2_rebuild_wait off) when using rock
with WCCP!

Any solutions?

regards



-----
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-regestiartion-time-out-when-loading-long-acl-to-all-processess-tp4663148.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid regestiartion time out when loading long acl to all processess !!

2013-11-06 Thread Dr.x
Eliezer Croitoru-2 wrote
> Hey Dr,
> 
> On 11/06/2013 12:34 PM, Dr.x wrote:
>> hi  ,
>>
>> with squid 3.3.9 & smp ,
>>
>> i m using long acl  and it is shared amon all workers of squid
>>
>> but i find in logs that some kids  have timeout registration.
> Registration???
> What do you mean by that?
> 
> You need the acls to be in all process if needed..
> if you will be more specific about the structure of the acls We might 
> understand the situation and maybe will be able to find a way to resolve
> it.
> 
> Eliezer
> 
>> ===
>> if i load this acl to one process==> no problems in registration
>> ,but i cant guarantee that applied to every one accessing squid !!
>>
>>
>> i think this issue similar to (wccp_rebuild off)  when using rock with
>> wccp
>> !
>>
>> any solutions ?
>>
>> regards
>>
>>
>>
>> -
>> Dr.x
>> --
>> View this message in context:
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-regestiartion-time-out-when-loading-long-acl-to-all-processess-tp4663148.html
>> Sent from the Squid - Users mailing list archive at Nabble.com.
>>

I mean that in cache.log I see
"FATAL: kid2 registration timed out"
Only some of my kids show this issue!

About the ACL:
it is as below.
Its size is about 20 MB, and it is used for web filtering.
=
acl blockkeywords dstdomain "/etc/squid/no.acl"
http_access deny blockkeywords

This is a very long ACL!
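As a quick sanity check on an ACL file that large, something like the following can help (a sketch: `/etc/squid/no.acl` is the real path from the config above, but a tiny sample file stands in here so the commands are runnable anywhere):

```shell
# Sanity-check a large dstdomain ACL before loading it into every worker.
# A small sample file stands in for /etc/squid/no.acl so this runs as-is.
ACL=/tmp/no.acl
printf '.example.com\n.example.org\n.example.com\n' > "$ACL"

wc -l < "$ACL"          # number of entries; a 20 MB file holds a huge list
sort "$ACL" | uniq -d   # duplicates that Squid warns about while parsing
```

Duplicate or overlapping entries make the dstdomain parse slower, and a slow parse may lengthen the startup window in which each SMP kid has to register with the coordinator.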



regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-regestiartion-time-out-when-loading-long-acl-to-all-processess-tp4663148p4663150.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] is there any thing wrong from cache manager logs ?!!

2013-11-06 Thread Dr.x
063 seconds
CPU Usage:  18.29%
CPU Usage, 5 minute avg:14.41%
CPU Usage, 60 minute avg:   14.62%
Process Data Segment Size via sbrk(): 49108 KB
Maximum Resident Size: 742736 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena:   49504 KB
Ordinary blocks:44876 KB   2850 blks
Small blocks:   0 KB  0 blks
Holding blocks:218964 KB 24 blks
Free Small blocks:  0 KB
Free Ordinary blocks:4628 KB
Total in use:4628 KB 2%
Total free:  4628 KB 2%
Total size:268468 KB
Memory accounted for:
Total accounted:27981 KB  10%
memPool accounted:  27981 KB  10%
memPool unaccounted:   240487 KB  90%
memPoolAlloc calls:42
memPoolFree calls:   12558488
File descriptor usage for squid:
Maximum number of file descriptors:   393216
Largest file desc currently in use:879
Number of file desc currently in use: 1348
Files queued for open:   0
Available number of file descriptors: 391868
Reserved number of file descriptors:   300
Store Disk files open:   0
Internal Data Structures:
   748 StoreEntries
   748 StoreEntries with MemObjects
 12137 Hot Object Cache Items
 0 on-disk objects

==
regards



-----
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
hi,
After removing the ACL I have the same scenario:

Squid starts out excellent, but after some time

traffic decreases, YouTube playback is interrupted, and browsing is slow!

That is my issue.






-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663165.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
hi all ,
here is  cpu and memory when squid is off :
top command :

top - 12:31:32 up 1 day,  3:47,  2 users,  load average: 0.02, 0.02, 0.05
Tasks: 155 total,   1 running, 154 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.3%us,  0.0%sy,  0.0%ni, 93.7%id,  6.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu1  :  0.3%us,  0.3%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.3%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu3  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu5  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si, 
0.0%st
Mem:   8182336k total,  7340332k used,   842004k free,   168252k buffers
Swap:0k total,0k used,0k free,  6706284k cached

[root@squid ~]# free -m
 total   used   free sharedbuffers cached
Mem:  7990   7069921  0164   6549
-/+ buffers/cache:355   7635
Swap:0  0  0

[root@squid ~]# free
 total   used   free sharedbuffers cached
Mem:   81823367238932 943404  0 1684726706540
-/+ buffers/cache: 3639207818416
Swap:0  0  0

Is there memory swapping going on?
I wish to say that my Squid machine also works as a DNS server, and Squid's
DNS is pointed at the machine's own IP.

I mean that it is Squid plus Squid's DNS on the same machine at the same time.

I suspect that this may play a part in the degradation.


Anyway, I will run Squid, monitor the CPUs and RAM, and provide the cache
manager info before and after the slowdown and degradation.


Anyway, I wish to make sure that my system's status is not bad before
running Squid.
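As a cross-check of the free output above: swap total is 0 kB, so the box cannot be swapping at all, and most of the "used" memory is page cache. A minimal sketch using the exact numbers from the free output (all in kB):

```shell
# Recompute the "-/+ buffers/cache" free column from the raw free(1)
# numbers pasted above: memory held as buffers and page cache is
# reclaimable, so it counts as effectively free for Squid.
total=8182336; used=7238932; buffers=168472; cached=6706540
free=$((total - used))
effective_free=$((free + buffers + cached))
echo "effective free: $effective_free kB"
```

This prints 7818416 kB, matching the "-/+ buffers/cache" line above, i.e. roughly 7.6 GB actually available despite the small "free" column.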

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663166.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
umber of file descriptors:   400
Store Disk files open:   2
Internal Data Structures:
   497 StoreEntries
   496 StoreEntries with MemObjects
  9772 Hot Object Cache Items
 12352 on-disk objects



Generated Thu, 07 Nov 2013 18:46:37 GMT, by cachemgr.cgi/3.3.9@squid
==
Cache Manager menu
sample_start_time = 1383849714.405043 (Thu, 07 Nov 2013 18:41:54 GMT)
sample_end_time = 1383850017.332992 (Thu, 07 Nov 2013 18:46:57 GMT)
client_http.requests = 90.849991/sec
client_http.hits = 6.38/sec
client_http.errors = 1.43/sec
client_http.kbytes_in = 70.30/sec
client_http.kbytes_out = 4938.722996/sec
client_http.all_median_svc_time = 0.253008 seconds
client_http.miss_median_svc_time = 0.287199 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.090797 seconds
client_http.hit_median_svc_time = 0.00 seconds
server.all.requests = 79.706659/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 4931.719662/sec
server.all.kbytes_out = 69.639993/sec
server.http.requests = 79.706659/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 4931.719662/sec
server.http.kbytes_out = 69.639993/sec
server.ftp.requests = 0.00/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 0.00/sec
server.ftp.kbytes_out = 0.00/sec
server.other.requests = 0.00/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 0.00/sec
server.other.kbytes_out = 0.00/sec
icp.pkts_sent = 0.00/sec
icp.pkts_recv = 0.00/sec
icp.queries_sent = 0.00/sec
icp.replies_sent = 0.00/sec
icp.queries_recv = 0.00/sec
icp.replies_recv = 0.00/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.00/sec
icp.kbytes_sent = 0.00/sec
icp.kbytes_recv = 0.00/sec
icp.q_kbytes_sent = 0.00/sec
icp.r_kbytes_sent = 0.00/sec
icp.q_kbytes_recv = 0.00/sec
icp.r_kbytes_recv = 0.00/sec
icp.query_median_svc_time = 0.00 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 5.415243 seconds
unlink.requests = 0.00/sec
page_faults = 0.00/sec
select_loops = 9473.238553/sec
select_fds = 3062.906428/sec
average_select_fd_period = 0.00/fd
median_select_fds = -0.75
swap.outs = 23.526664/sec
swap.ins = 0.49/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 5.51/sec
syscalls.disk.opens = 0.37/sec
syscalls.disk.closes = 0.37/sec
syscalls.disk.reads = 0.87/sec
syscalls.disk.writes = 23.64/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 0.00/sec
syscalls.sock.accepts = 154.756643/sec
syscalls.sock.sockets = 131.653323/sec
syscalls.sock.connects = 83.856659/sec
syscalls.sock.binds = 83.043326/sec
syscalls.sock.closes = 231.946647/sec
syscalls.sock.reads = 1084.103251/sec
syscalls.sock.writes = 1738.646543/sec
syscalls.sock.recvfroms = 23.476662/sec
syscalls.sock.sendtos = 37.269994/sec
cpu_time = 54.046000 seconds
wall_time = 1200.000237 seconds
cpu_usage = 4.503832%

Generated Thu, 07 Nov 2013 18:47:06 GMT, by cachemgr.cgi/3.3.9@squid

=



Just a hint: free memory before starting Squid is about 1 GB, and after some
time free memory drops to about 200 MB.
Could this be the problem?
I did my best to keep Squid away from memory problems.
=
Currently I have stopped Squid, and I want to fix the issue before using the
other servers and pumping in more users.

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663169.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 63772
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 131072
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 63772
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
[root@squid ~]# ulimit -n
131072
[root@squid ~]# 

> Are there any clues in the cache.log?

At first: no logs, no errors. After some time I get "closing ... due to
lifetime timeout" messages on YouTube videos.

Best Regards,
Eliezer





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663172.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
.


> Select loops:
> * 1K/sec under the fast traffic period
> * relaying 3.5MB/sec
> 
> * 7K/sec and 9K/sec in the periods you indicate as slow
> * relaying 4.7MB/sec
> 
> => hints that Squid is looping once per packet or so.
> 
> 
> Amos

Something I am not understanding:
if you look at the graph,
you will note that the "out" traffic is smaller than the "in" traffic.


I do not understand why.



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663174.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
Eliezer Croitoru-2 wrote
> OK so after Amos did the calculations (Thanks) I assume that using lsof 
> will give us more clue about it.
> The first thing to do is to start a ssh session into the server with the 
> setting:
> ServerAliveInterval 30
> Added into the /etc/ssh/ssh_config (on the client side)
> 
> When you do have this session running you wont have any troubles running 
> top or any other basic tests on the server while there is a degradation.
> 
> Now this command is what you will need in order to make the suspicious 
> more accurate and maybe lead to something that can help you:
> "lsof -u squid -a -i 4 -n -P"
> (squid is the default username on centos for the proxy user)
> 
> Dont try to run this command just like this out of the blue since the 
> output can be more then 60k lines long..
> 
> You should try to throw it into a file at the tmp dir so this:
> "lsof -u squid -a -i 4 -n -P >/tmp/tmp_lsof.1"
> Should be safe.
> The next thing is to find out how many FD are in sum and how many are 
> ESTABLISHED etc.. so run these:
> ##START
> lsof -u squid -a -i 4 -n -P >/tmp/tmp_lsof.1
> cat /tmp/tmp_lsof.1 |wc -l
> cat /tmp/tmp_lsof.1 |grep UDP |wc -l
> cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
> cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
> cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
> cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
> cat /tmp/tmp_lsof.1 |grep ":53" |wc -l
> cat /tmp/tmp_lsof.1 |grep ":TPROXYPORT" |wc -l
> ##END
> (TPROXYPORT IS the port from squid.conf)
> 
> Once you have all the above results before the degradation in it and 
> after we might have a clue about the source of the problem and whether 
> it comes from too much FD which are not being used but causing the 
> system to loop throw lots of them.

Hi. First of all, am I facing a loop?

Anyway, I have made another test.
Before the degradation, in the first 5 minutes of Squid:
[root@squid ~]# cat /tmp/tmp_lsof.1 |wc -l
3398
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep UDP |wc -l
6
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
3134
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
1
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
8
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":53" |wc -l
73
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":3129" |wc -l 
5
[root@squid ~]# lsof -u squid -a -i 4 -n -P >/tmp/tmp_lsof.1
cat /tmp/tmp_lsof.1 |wc -l
[root@squid ~]# cat /tmp/tmp_lsof.1 |wc -l
3366
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep UDP |wc -l
6
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
3118
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
2
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
8
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":53" |wc -l
82
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":3129" |wc -l 
5
==

After the degradation, with the system close to dead:
[root@squid ~]# cat /tmp/tmp_lsof.1 |wc -l
3505
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep UDP |wc -l
6
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
3262
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
5
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
8
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":53" |wc -l
57
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":3129" |wc -l 
4
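The repeated grep|wc pipelines above can be collapsed into a single awk pass that counts every TCP state at once (a sketch: a tiny fabricated lsof-style snapshot is used so it runs anywhere; in practice point awk at /tmp/tmp_lsof.1):

```shell
# Count every connection state in one pass instead of one grep per state.
cat > /tmp/lsof_sample <<'EOF'
squid 100 squid 12u IPv4 TCP 10.0.0.1:3129->10.0.0.2:51000 (ESTABLISHED)
squid 100 squid 13u IPv4 TCP 10.0.0.1:3129 (LISTEN)
squid 100 squid 14u IPv4 TCP 10.0.0.1:40000->93.184.216.34:80 (ESTABLISHED)
EOF
awk '{gsub(/[()]/, "", $NF); states[$NF]++}
     END {for (s in states) print s, states[s]}' /tmp/lsof_sample
```

The same one-liner run against the real snapshot gives all the counts from the before/after comparisons in one shot.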




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663180.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
Amos Jeffries-2 wrote
> On 2013-11-08 11:26, Dr.x wrote:
>> .
>> 
>> 
>>> Select loops:
>>> * 1K/sec under the fast traffic period
>>> * relaying 3.5MB/sec
>>> 
>>> * 7K/sec and 9K/sec in the periods you indicate as slow
>>> * relaying 4.7MB/sec
>>> 
>>> => hints that Squid is looping once per packet or so.
>>> 
>>> 
>>> Amos
>> 
>> something not being understood ,
>> if u look at graph
>> u will note that "out" traffic is samller than "in" traffic  
>> 
>> 
>> not understanding why 
>> 
> 
> Think about what is "in" and "out" on that graph?
> Keep in mind that for each request Squid is handling two TCP 
> connections, receiving and sending on both. Also performing HIT's on ~4% 
> total HTTP traffic.
> 
> 
> Amos

Hi Amos, I am not talking about the difference between in and out;
I am wondering why the "in" is higher than the "out".

Shouldn't "out" be higher than "in" (as a result of the hit ratio)?

I mean, if I want to calculate what I am saving, I compute (out - in), but
in my case that is negative.

I have other good Squid machines, and they show "out" values much higher
than their "in" values!



regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663181.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-07 Thread Dr.x
Eliezer Croitoru-2 wrote
> OK so the next step since I didn't noticed couple things(IPV6 bindings)
> 
> On 11/08/2013 01:21 AM, Dr.x wrote:
>> anyway , ive made another test
>> before the degredation , 1st 5 minutes of squid
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |wc -l
>> 3398
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep UDP |wc -l
>> 6
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
>> 3134
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
>> 0
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
>> 1
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
>> 8
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":53" |wc -l
>> 73
>> [root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":3129" |wc -l
>> 5
> A small thing:
> The above last line did not made any sense to me since there is no 
> possible way for 3k established connections while only 5 of them are 
> sockets for tproxy..
> 
> SO The lsof command(from before) is showing only ipv4 sockets which are 
> probably the outgoing connections from squid towards the net..
> Change it to show all TCP connections like this:
> ##START
> lsof -u squid -a -i TCP -n -P >/tmp/tmp_lsof.1
> cat /tmp/tmp_lsof.1 |wc -l
> cat /tmp/tmp_lsof.1 |grep UDP |wc -l
> cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
> cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
> cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
> cat /tmp/tmp_lsof.1 |grep ":80" |wc -l
> cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
> cat /tmp/tmp_lsof.1 |grep ":TPROXYPORT" |wc -l
> ##END
> (TPROXYPORT IS the port from squid.conf)
> 
> Which maybe will show more info..
> If you want to see how many TCP FD are being used on the whole system use:
> ##END
> lsof  -i TCP -n -P  >/tmp/tmp_lsof.2
> cat /tmp/tmp_lsof.2 |wc -l
> cat /tmp/tmp_lsof.2 |grep UDP |wc -l
> cat /tmp/tmp_lsof.2 |grep ESTABLISHED |wc -l
> cat /tmp/tmp_lsof.2 |grep TIME_WAIT |wc -l
> cat /tmp/tmp_lsof.2 |grep CLOSE_WAIT |wc -l
> cat /tmp/tmp_lsof.2 |grep LISTEN |wc -l
> cat /tmp/tmp_lsof.2 |grep ":80" |wc -l
> cat /tmp/tmp_lsof.2 |grep ":TPROXYPORT" |wc -l
> ##END
> 
> These lines have a small addition which can help understand the 
> situation a bit more.
> 
> Since you do have about 3k connections


[root@squid ~]# lsof -u squid -a -i TCP -n -P >/tmp/tmp_lsof.1
[root@squid ~]# cat /tmp/tmp_lsof.1 |wc -l
2023
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep UDP |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
1841
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
1
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":80" |wc -l
2009
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
8
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":3129" |wc -l 
4
[root@squid ~]# 
[root@squid ~]# lsof  -i TCP -n -P  >/tmp/tmp_lsof.2
cat /tmp/tmp_lsof.2 |wc -l
cat /tmp/tmp_lsof.2 |grep UDP |wc -l
cat /tmp/tmp_lsof.2 |grep ESTABLISHED |wc -l
cat /tmp/tmp_lsof.2 |grep TIME_WAIT |wc -l
cat /tmp/tmp_lsof.2 |grep CLOSE_WAIT |wc -l
cat /tmp/tmp_lsof.2 |grep LISTEN |wc -l
cat /tmp/tmp_lsof.2 |grep ":80" |wc -l
cat /tmp/tmp_lsof.2 |grep ":3129" |wc -l
[root@squid ~]# cat /tmp/tmp_lsof.2 |wc -l
2170
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep UDP |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep ESTABLISHED |wc -l
1982
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep TIME_WAIT |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep CLOSE_WAIT |wc -l
1
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep LISTEN |wc -l
28
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep ":80" |wc -l
2133
[root@squid ~]# cat /tmp/tmp_lsof.2 |grep ":3129" |wc -l 
4
[root@squid ~]# ^C

==
After some time:

[root@squid ~]# ^C
[root@squid ~]# lsof -u squid -a -i TCP -n -P >/tmp/tmp_lsof.1
[root@squid ~]# cat /tmp/tmp_lsof.1 |wc -l
2250
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep UDP |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ESTABLISHED |wc -l
2078
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep TIME_WAIT |wc -l
0
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep CLOSE_WAIT |wc -l
4
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":80" |wc -l
2236
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep LISTEN |wc -l
8
[root@squid ~]# cat /tmp/tmp_lsof.1 |grep ":3129" |wc -l 
4
[root@squid ~]# lsof  -i TCP -n -P  >/tmp/tmp_lsof.2
cat /tmp/tmp_lsof.2 |wc -l
cat /tmp/tmp_lsof.2 |grep UDP |wc -l
cat /tmp/tmp_lsof.2 |grep ESTABLISHED |wc -l
cat /tmp/tmp_lsof.2 |grep TIME_WAIT |wc -l
cat /tmp/tmp_lsof.2 |grep CLOSE_WAIT |wc -l
cat /tmp/tmp_lsof.2 |grep LISTEN |wc -l

[squid-users] Re: alot of "vary object loop" logs in squid 3.3.9

2013-11-08 Thread Dr.x
Alex Rousskov wrote
> On 11/05/2013 01:49 AM, Dr.x wrote:
> 
>> Im trying to use smp on 4 cores of server of 8 g ram
>> 4 workers and no disk caching
>> Only depending on hot mem caching
>> 
>> Don’t know why there is log if cache.log about  "vary object loop"
> 
> 
> The "Vary object loop!" warning with shared memory caching may be the
> side effect of Squid bug #3806. See comment#9 for more details and a
> possible fix: http://bugs.squid-cache.org/show_bug.cgi?id=3806#c9
> 
> 
> HTH,
> 
> Alex.

hi,
Does the latest version of Squid (4.x) suffer from the same bug?

I just want to say that if I define a cache_dir, whatever type it is, I do
not see these annoying logs!

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/alot-of-vary-object-loop-logs-in-squid-3-3-9-and-slow-performance-tp4663137p4663192.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-09 Thread Dr.x
Eliezer Croitoru-2 wrote
> OK so back to the issue in hands.
> it's not squid directly and there is not much load on the server since 
> the sockets in use are about 3-4k.
> I will need the output of:
> cat /proc/net/sockstat
> cat /proc/net/sockstat6
> 
> Which will make it clear that it is either is or not the basic sockets 
> issue.
> Next thing after that is:
> Strange OS logs?
> 
> also another thing we will might get a clue is tcp tunings.
> sysctl -a |grep tcp
> 
> As I told you in the chat there are couple major things to consider.
> the sysctl values that I was talking about from a machine that knows to 
> work with 1GB ram and can take some load until now.
> # sysctl -a |grep tcp
> net.ipv4.tcp_abc = 0
> net.ipv4.tcp_abort_on_overflow = 0
> net.ipv4.tcp_adv_win_scale = 1
> net.ipv4.tcp_allowed_congestion_control = cubic reno
> net.ipv4.tcp_app_win = 31
> net.ipv4.tcp_available_congestion_control = cubic reno
> net.ipv4.tcp_base_mss = 512
> net.ipv4.tcp_challenge_ack_limit = 100
> net.ipv4.tcp_congestion_control = cubic
> net.ipv4.tcp_cookie_size = 0
> net.ipv4.tcp_dma_copybreak = 4096
> net.ipv4.tcp_dsack = 1
> net.ipv4.tcp_early_retrans = 2
> net.ipv4.tcp_ecn = 2
> net.ipv4.tcp_fack = 1
> net.ipv4.tcp_fastopen = 0
> net.ipv4.tcp_fastopen_key = 2cc99571-9aad9d28-8b81bbcc-48a576b6
> net.ipv4.tcp_fin_timeout = 60
> net.ipv4.tcp_frto = 2
> net.ipv4.tcp_frto_response = 0
> net.ipv4.tcp_keepalive_intvl = 75
> net.ipv4.tcp_keepalive_probes = 9
> net.ipv4.tcp_keepalive_time = 7200
> net.ipv4.tcp_limit_output_bytes = 131072
> net.ipv4.tcp_low_latency = 0
> net.ipv4.tcp_max_orphans = 4096
> net.ipv4.tcp_max_ssthresh = 0
> net.ipv4.tcp_max_syn_backlog = 128
> net.ipv4.tcp_max_tw_buckets = 4096
> net.ipv4.tcp_mem = 22593  30127   45186
> net.ipv4.tcp_moderate_rcvbuf = 1
> net.ipv4.tcp_mtu_probing = 0
> net.ipv4.tcp_no_metrics_save = 0
> net.ipv4.tcp_orphan_retries = 0
> net.ipv4.tcp_reordering = 3
> net.ipv4.tcp_retrans_collapse = 1
> net.ipv4.tcp_retries1 = 3
> net.ipv4.tcp_retries2 = 15
> net.ipv4.tcp_rfc1337 = 0
> net.ipv4.tcp_rmem = 4096  87380   6291456
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_slow_start_after_idle = 1
> net.ipv4.tcp_stdurg = 0
> net.ipv4.tcp_syn_retries = 6
> net.ipv4.tcp_synack_retries = 5
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_thin_dupack = 0
> net.ipv4.tcp_thin_linear_timeouts = 0
> net.ipv4.tcp_timestamps = 1
> net.ipv4.tcp_tso_win_divisor = 3
> net.ipv4.tcp_tw_rtcpecycle = 0
> net.ipv4.tcp_tw_reuse = 0
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_wmem = 4096  16384   4194304
> net.ipv4.tcp_workaround_signed_windows = 0
> ##END
> 
> I have seen couple tunings in the past like:
> https://www.jcputter.co.za/centos/squid-proxy-optimization-tweaks/
> 
> But I never needed to actually use them.
> There is a nice doc from IBM:
> ftp://public.dhe.ibm.com/linux/pdfs/Tuning_for_Web_Serving_on_RHEL_64_KVM.pdf
> 
> Which I have seen but never completed to fully read and understood yet.
> In page 16 there is a nice way to adjust a webserver in the size like 
> the machines mentioned in page 11+12.
> It adds to a machine with:
> 24 cores
> about 90GB of ram and correct me if I am wrong.
> 
> http://www.cyberciti.biz/files/linux-kernel/Documentation/networking/ip-sysctl.txt
> 
> Has some nice description of each value which should be considered 
> before deciding on a drastic change.
> 
> If you did any changes to any of these OS variables please share any of 
> them so we would maybe understand what is happening.
> This is where RESTART is considered a nice move that will force the os 
> to defaults (plus considering /etc/sysctl.conf modifications).
> 
> I do know that you have self compiled kernel on\for CentOS 6.4.
> The current one from CentSO branch is 2.6.X.
> 
> Newer kernel can give lots of things if built right.
> If you have used a specific way to build the kernel I will be happy to 
> see it or at-least the .config for this kernel build.
> 
> Thanks,
> Eliezer

Hi,
First of all, thanks a lot for your interest and help through to the last step.
Actually, my problem was not due to CPU or RAM;
it was due to the kernel.
You mentioned the kernel logs. About that, you were 100% right.

My kernel log says, in huge numbers:
*TCP: out of memory -- consider tuning tcp_mem*

Revising my sysctl, it was found that there was a mistake; it was corrected
by changing the tcp_mem value to a bigger value so that it supports a large
number of TCP memory requests.

In sysctl, it was edited to => net.ipv4.tcp_mem = 6085248 8113664 12170496

I know it is very large relative to the small number of users, but I think I
will use it when I migrate my w
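For anyone else hitting "TCP: out of memory -- consider tuning tcp_mem": note that net.ipv4.tcp_mem is measured in pages (usually 4 KiB each), not bytes. A hedged sketch of sizing the high watermark from a byte budget (the 24 GiB figure is purely illustrative, not a recommendation):

```shell
# net.ipv4.tcp_mem takes three values -- low, pressure, high -- in PAGES.
# Convert an illustrative TCP-memory budget of 24 GiB into pages:
PAGE=$(getconf PAGESIZE)                      # typically 4096 bytes
HIGH=$(( 24 * 1024 * 1024 * 1024 / PAGE ))    # 6291456 with 4 KiB pages
echo "high watermark: $HIGH pages"
# Apply at runtime (as root) and persist across reboots:
#   sysctl -w net.ipv4.tcp_mem="$((HIGH / 2)) $((HIGH * 2 / 3)) $HIGH"
#   echo "net.ipv4.tcp_mem = $((HIGH / 2)) $((HIGH * 2 / 3)) $HIGH" >> /etc/sysctl.conf
```

Working backwards, the 12170496-page high value chosen in this thread corresponds to roughly 46 GiB with 4 KiB pages, which explains why it looks huge relative to the user count.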

[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-10 Thread Dr.x
Amos Jeffries-2 wrote
> On 2013-11-08 12:29, Dr.x wrote:
>> Amos Jeffries-2 wrote
>>> On 2013-11-08 11:26, Dr.x wrote:
>>>> .
>>>> 
>>>> 
>>>>> Select loops:
>>>>> * 1K/sec under the fast traffic period
>>>>> * relaying 3.5MB/sec
>>>>> 
>>>>> * 7K/sec and 9K/sec in the periods you indicate as slow
>>>>> * relaying 4.7MB/sec
>>>>> 
>>>>> => hints that Squid is looping once per packet or so.
>>>>> 
>>>>> 
>>>>> Amos
>>>> 
>>>> something not being understood ,
>>>> if u look at graph
>>>> u will note that "out" traffic is samller than "in" traffic  
>>>> 
>>>> 
>>>> not understanding why 
>>>> 
>>> 
>>> Think about what is "in" and "out" on that graph?
>>> Keep in mind that for each request Squid is handling two TCP
>>> connections, receiving and sending on both. Also performing HIT's on 
>>> ~4%
>>> total HTTP traffic.
>>> 
>>> 
>>> Amos
>> 
>> hi amos , im not talkign about t the difference  in , out ,
>> im wondering why the "in" is higher than "out" ???
>> 
>> shouldnt the "out" higher than "in" (  as a result of hit ration) ?
>> 
> 
> That depends on what they are measuring. Which is why I asked.
> 
> 
>> i mean if i want to calculate wt im saving , i say (out-in)but in my 
>> case
>> its in -ve  
>> 
> 
> IF you measure "in" as being traffic on LAN interface and "out" as being 
> traffic on WAN interface they could very well be negative.
> 
> If you measure "in" as being packets into the box from any interface, 
> and "out" as being packets leaving the Squid box. It could very well be 
> *either* positive or negative.
> 
> If you are measuring only one interface, they will again be *either* 
> positive or negative depending on the interface.
> 
> 
> So ... what are "in" and "out" measuring *exactly* ??
> 
> 
> Amos

Well, I must be misunderstanding something about the bandwidth-saving
calculation!
The question now is:
how do I calculate the bandwidth that Squid is saving?
regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663208.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.10 is available

2013-11-10 Thread Dr.x
Good job!




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-10-is-available-tp4663209p4663210.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: is there any thing wrong from cache manager logs ?!!

2013-11-11 Thread Dr.x
Amos Jeffries-2 wrote
> The full calculation is:
> 
>bytes from client
>  - bytes to server
> 
>  + bytes to client
>  - bytes from server
> 
>  = bandwidth saving/loss.
> 
> 
> Current Squid only surfaces the metrics necessary to calculate that
> yourself in the "utilization" cachemgr report (the last section has
> totals) or SNMP counters
> (http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs) under
> "Per-Protocol Statistics".
> 
> 
> Squid adds headers and sometimes chunked encoding overheads and strips
> some garbage from the headers as things go through. So even on a
> non-caching proxy there is a difference which may add up to a savings OR
> a loss.
> 
> 
> Amos
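The quoted formula can be sketched in a few lines of Python; the byte counts in the example are made-up numbers, not taken from any real cachemgr report:

```python
def bandwidth_saving(bytes_from_client, bytes_to_server,
                     bytes_to_client, bytes_from_server):
    """Bandwidth saving (positive) or loss (negative), per the formula
    quoted above: request-side delta plus response-side delta."""
    return ((bytes_from_client - bytes_to_server)
            + (bytes_to_client - bytes_from_server))

# Made-up example: Squid delivered 120 GB to clients while fetching
# only 100 GB from servers; request traffic was roughly symmetric.
saving = bandwidth_saving(bytes_from_client=2_000_000_000,
                          bytes_to_server=2_100_000_000,
                          bytes_to_client=120_000_000_000,
                          bytes_from_server=100_000_000_000)
print(saving)  # 19900000000 bytes saved
```

A negative result is a loss, which can legitimately happen even on a non-caching proxy because of added headers and chunked-encoding overhead.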

Hi Amos, thanks a lot.

I'm trying to understand why the calculation is exactly the formula you
posted above. What does "bytes from client" mean precisely?
From the client to what? From the client to Squid?

==
Can I calculate the saving as a rate?

I mean I want to calculate the saving as bandwidth (a rate) rather than
as total bytes.

I also want to see this saving every 60 seconds; is that possible?
===
I use MRTG monitors, and usually I have basic SNMP on the interface.
I look at the traffic rates "in" and "out" and compute
speed(in) - speed(out) = speed saved (I know it's not accurate).
My question is: does my bandwidth-saving rate calculation contain a large
error? Assume I have 700 Mbps pumped through Squid.
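The per-60-seconds idea is generic counter sampling: read any byte counter twice, 60 s apart, and convert the delta to bits per second. A minimal sketch (the sample counter values are placeholders, not real Squid SNMP readings):

```python
def rate_bps(counter_prev, counter_now, interval_s=60):
    """Average bit rate over the sampling interval, computed from two
    samples of a monotonically increasing byte counter."""
    return (counter_now - counter_prev) * 8 / interval_s

# Placeholder samples taken 60 s apart:
in_rate = rate_bps(1_000_000_000, 1_600_000_000)   # 80,000,000 bps in
out_rate = rate_bps(4_000_000_000, 4_450_000_000)  # 60,000,000 bps out
saving_rate = in_rate - out_rate                   # 20,000,000 bps saved
```

This is essentially what MRTG does with interface counters, so the main source of error is which counters you sample, not the arithmetic.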
===
regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/is-there-any-thing-wrong-from-cache-manager-logs-tp4663156p4663214.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: RPM for Squid 3.3.10 is OUT.

2013-11-11 Thread Dr.x
Eliezer Croitoru-2 wrote
> I am happy to release the new RPM for squid version 3.3.10 (links at 
> the bottom of the article).
> 
> The new release includes the big addition of the cache_dir type *rock*; 
> big thanks to Alex Rousskov for his work on rock, ssl-bump, and many 
> other small and big things that make squid what it is!
> 
> What is the *rock* cache_dir type? What does it give me?
> Speed! And SMP support for cache_dir.
> 
> A small introduction to FileSystems and Squid:
> Squid has used UFS\AUFS-type cache directories for a very long time, 
> working hard to beat the OS and filesystem limits in order to allow 
> millions of objects\files to be cached.
> 
> The UFS type can be used on reiserFS, ext4, or any other FS you can 
> think of that is supported by the OS.
> Each and every FS has its limits; reiserFS, for example, was designed 
> to work with lots of small\tiny files and does that very nicely.
> 
> However well perfected, a FS is still a general-purpose *FileSystem*, 
> and its design directly affects its performance.
> For example, creating a file can be quite cheap on one FS, while 
> erasing a file can be a very CPU- and I\O-intensive task on another.
> If you are interested in understanding a bit more about FS complexity 
> you can watch Ric Wheeler at his video and presentation:
> * video: 
> http://video.linux.com/videos/one-billion-files-pushing-scalability-limits-of-linux-file-systems
> * or: http://www1.ngtech.co.il/squid/videos/37.webm
> 
> * pdf: 
> http://www.redhat.com/summit/2011/presentations/summit/decoding_the_code/thursday/wheeler_t_0310_billion_files_2011.pdf
> * or: 
> http://www1.ngtech.co.il/squid/fs/wheeler_t_0310_billion_files_2011.pdf
> 
> 
> What heavy lifting do the FS and Squid need to handle?
> UFS\AUFS actually uses the filesystem for storage. Take, for example, 
> 200 requests per second of which 50 are not even cacheable: that 
> leaves 150 requests per second to be placed in files on the OS's 
> filesystem.
> At 100 requests per second (yes, I reduced it..), 3600 seconds per 
> hour means the creation of about 360,000 files on the FS per hour for 
> a tiny Small Office squid instance.
> Some Squid systems sit on a very big machine with more than one 
> instance, each handling more than 500 requests per second; there the 
> growth can be about 14,400,000 files per hour.
> 
> It does sound like a very big number, but a MegaByte is about 1 
> million bytes and today we are talking about speeds which exceed 
> 10Gbps..
> 
> So another design is needed in order to store all these HTTP objects, 
> and that is what rock comes to unleash.
> 
> In the next release I will try to describe it in more depth.
> 
> * note that the examples do demonstrate the ideas in a wild way.
> 
> The RPMS at:
> http://www1.ngtech.co.il/rpm/centos/6/x86_64/
> 
> The package includes 3 RPMs: one for the squid core and helpers, one 
> for debugging, and one for the init script.
> http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-3.3.10-1.el6.x86_64.rpm
> http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-sysvinit-3.3.10-1.el6.x86_64.rpm
> http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-debuginfo-3.3.10-1.el6.x86_64.rpm
> 
> Each and every one of them has an asc file which contains a PGP 
> signature and MD5/SHA1/SHA2 (SHA256, SHA384, SHA512) hashes.
> 
> I also released the SRPM which is very simple at:
> http://www1.ngtech.co.il/rpm/centos/6/x86_64/SRPM/squid-3.3.10-1.el6.src.rpm
> 
> * I hope to release, in the next few weeks, an RPM of a 3.HEAD build 
> for ALPHA testers of the newest bug fixes and squid improvements.
> 
> * Sorry that the i686 release is not out yet, but since I do not have 
> an i686 OS running, it will be added to the repo later.
> 
> Eliezer
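The file-creation arithmetic in the announcement can be sanity-checked with a quick sketch:

```python
def files_per_hour(cacheable_requests_per_sec):
    """Roughly one cacheable request = one file created on a UFS/AUFS
    cache_dir, so scale the per-second rate up to an hour."""
    return cacheable_requests_per_sec * 3600

print(files_per_hour(100))   # 360000 -- small-office instance
print(files_per_hour(4000))  # 14400000 -- busy multi-instance box
```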


Nice news.
I would like to ask about mount options related to rock:
are they critical for performance?
I read the wiki, but it does not cover them.

As an example: a machine with 7 SSD hard disks, each with 90 GB of storage,
and about 4000 req/sec on Squid with SMP.

Is Squid 3.3.10 better than Squid 3.3.9 for rock support and speed?
If it's not a big update, I'd prefer to stay with 3.3.9.


regards





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-10-is-available-tp4663209p4663229.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] WARNING: unparseable HTTP header field {:: }

2013-11-12 Thread Dr.x
Hi,
is this a harmful log?

2013/11/11 02:20:12 kid2| WARNING: unparseable HTTP header field {:: }
2013/11/11 02:20:13 kid1| ctx: exit level  0
2013/11/11 02:20:13 kid1| ctx: enter level  0:
'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=38827&campaignid=232&cids=232&bids=38827&zoneid=220681&retarget_matches=null&tid=1075526134_220681_a90622ba5df04921Bd03a7abab3f6328&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=78291847'
2013/11/11 02:20:13 kid1| WARNING: unparseable HTTP header field {:: }
2013/11/11 02:20:13 kid1| ctx: exit level  0
2013/11/11 02:20:13 kid1| ctx: enter level  0:
'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=6573&campaignid=232&cids=232&bids=6573&zoneid=131033&retarget_matches=null&tid=711430930_131033_1820daa33ce9444aAf695c9465d9ea5a&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=16754765'
2013/11/11 02:20:13 kid1| WARNING: unparseable HTTP header field {:: }
2013/11/11 02:20:13 kid2| ctx: exit level  0
2013/11/11 02:20:13 kid2| ctx: enter level  0:
'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=38827&campaignid=232&cids=232&bids=38827&zoneid=220681&retarget_matches=null&tid=6614988552_220681_b6c5cff7d82042ccB86be4cfb6e8595e&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=20837268'
2013/11/11 02:20:13 kid2| WARNING: unparseable HTTP header field {:: }
2013/11/11 02:20:13 kid2| ctx: exit level  0
2013/11/11 02:20:13 kid2| ctx: enter level  0:
'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=6573&campaignid=232&cids=232&bids=6573&zoneid=131033&retarget_matches=null&tid=33051520_131033_4fd6080af4a846df8ba0ef5c3694d699&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=33770260'
2013/11/11 02:20:13 kid2| WARNING: unparseable HTTP header field {:: }
2013/11/11 02:20:13 kid2| ctx: exit level  0
2013/11/11 02:20:13 kid2| ctx: enter level  0:
'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=6573&campaignid=232&cids=232&bids=6573&zoneid=131033&retarget_matches=null&tid=133013941_131033_c61cb783eaab4af98630849e954798b2&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=67402879'
2013/11/11 02:20:13 kid2| WARNING: unparseable HTTP header field {:: }
2013/11/11 02:20:13 kid2| ctx: exit level  0
2013/11/11 02:20:13 kid2| ctx: enter level  0:
'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=38827&campaignid=232&cids=232&bids=38827&zoneid=220681&retarget_matches=null&tid=1952756553_220681_c5b7aec4567a4a65Bb1ef7ec7e718012&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=08604172'
2013/11/11 02:20:13 kid2| WARNING: unparseable HTTP header field {:: }
===

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-unparseable-HTTP-header-field-tp4663232.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] squid cache manager question and snmp with smp question

2013-11-12 Thread Dr.x
hi , 
from cache manager :
Cache information for squid:
Hits as % of all requests:  5min: 11.7%, 60min: 11.0%
Hits as % of bytes sent:5min: 0.6%, 60min: -0.3%
Memory hits as % of hit requests:   5min: 20.0%, 60min: 13.9%
Disk hits as % of hit requests: 5min: 11.6%, 60min: 9.5%
Storage Swap size:  28703904 KB
Storage Swap capacity:  70.1% used, 29.9% free
Storage Mem size:   1024000 KB
*Storage Mem capacity:  100.0% used,  0.0% free*
Mean Object Size:   32.00 KB
Requests given to unlinkd:  0

I'm asking about:
Storage Mem capacity:   100.0% used,  0.0% free
Why is it 100%?

Q1 - Does that mean Squid has consumed 100% of the configured cache_mem
value?

Q2 - Are the results in the cache manager's "general run time information"
calculated as totals over all processes?
==
Q3 - About SNMP with SMP:
what do I need to configure in squid.conf?

Do I need to configure SNMP for each instance?

I want to mention that I configured it as below:
acl snmppublic snmp_community xxx
snmp_port 3401
snmp_access allow snmppublic localhost
snmp_access allow snmppublic all
snmp_incoming_address 0.0.0.0
snmp_outgoing_address 0.0.0.0
###

I got results in my MRTG, but I'm not sure about them.
I took the Squid MIB file, converted it, and put it into my MRTG.

Since I'm using SMP here, I'm not sure about the results.


I also received some suspicious logs:

2013/11/08 16:51:26 kid3| snmpHandleUdp: FD 20 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:26 kid1| snmpHandleUdp: FD 19 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:26 kid3| snmpHandleUdp: FD 19 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid2| snmpHandleUdp: FD 20 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid3| snmpHandleUdp: FD 19 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid1| snmpHandleUdp: FD 19 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid3| snmpHandleUdp: FD 20 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid1| snmpHandleUdp: FD 19 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid3| snmpHandleUdp: FD 19 recvfrom: (11) Resource
temporarily unavailable
2013/11/08 16:51:51 kid1| snmpHandleUdp: FD 20 recvfrom: (11) Resource
temporarily unavailable

Not sure if this is harmful.

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-cache-manager-question-and-snmp-with-smp-question-tp4663233.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: WARNING: unparseable HTTP header field {:: }

2013-11-12 Thread Dr.x
Ralf Hildebrandt wrote
> * Amos Jeffries <

> squid3@.co

> >:
>> On 12/11/2013 9:08 p.m., Dr.x wrote:
>> > hi ,
>> > is that harmfull log ??
>> > 
>> > 2013/11/11 02:20:12 kid2| WARNING: unparseable HTTP header field {:: }
>> > 2013/11/11 02:20:13 kid1| ctx: exit level  0
>> > 2013/11/11 02:20:13 kid1| ctx: enter level  0:
>> >
>> 'http://vap2iad3.lijit.com/www/delivery/lg.php?bannerid=38827&campaignid=232&cids=232&bids=38827&zoneid=220681&retarget_matches=null&tid=1075526134_220681_a90622ba5df04921Bd03a7abab3f6328&channel_ids=,&fpr=c874c715b2faad8885ad1254850d8d74&loc=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&referer=http%3A%2F%2Fforum.mobilism.org%2Fviewtopic.php%3Ff%3D1292%26t%3D652520&cb=78291847'
>> > 2013/11/11 02:20:13 kid1| WARNING: unparseable HTTP header field {:: }
>> 
>> It means the response to the URL shown contains corrupted HTTP headers.
>> Something outside the HTTP protocol has been injected, so Squid will
>> drop the header; if relaxed_header_parser is disabled, the whole
>> response is dropped.
> 
> Since I'm also seeing that, I'd guess lijit.com is having issues.
> 
> -- 
> Ralf Hildebrandt   Charite Universitätsmedizin Berlin

> ralf.hildebrandt@

> Campus Benjamin Franklin
> http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
> Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155

Well, if this error is only for the lijit.com website, I can stop
redirecting that website to Squid and stop worrying about it.

But if it affects a lot of sites, I will try to solve it :)

Anyway, almost all log entries of this type belong to lijit.com only.

If I find similar logs for other sites, I will post them here.

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-unparseable-HTTP-header-field-tp4663232p4663239.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid cache manager question and snmp with smp question

2013-11-12 Thread Dr.x


> Because your cache is busy and using all the memory you gave it for
> caching objects (cache_mem). This is normal for memory cache, there is
> no reason to hold it lower than 100% since there is no delay in deleting
> things when they need to be..

===> Well, no worries then :)


>> Q2- does the result in cache manager in "general run time information" is
>> calculated as total for all processes ???
> 
> Um. Is cache_mem set to 1000 MB in on worker? or would that be the sum
> for all the workers with cache_mem ?

==> I think cache_mem applies per worker;
I mean, if I set it to 1000, the total cache_mem given to Squid becomes
(1000 × number of processes).


>> ==
>> Q3
>>  ABOUT snmp with smp
>> wt i need to configure in squid.conf ??
> 
> Squid must be built with --enable-snmp.
> 
> Also, snmp_port and snmp_access directives must be configured.
> 
>> 
>> do i need to configure snmp for each instance ???
> 
> http://wiki.squid-cache.org/Features/SmpScale#What_can_workers_share.3F

===> Well, the wiki says SNMP is shared by the workers; do you mean we
don't need to configure it per worker?




> Drop those last two lines about address. The first one is doing nothing
> useful. The second one will cause failures.

==> OK, I will.



> From the config manual:
> "
>   NOTE, snmp_incoming_address and snmp_outgoing_address can not have
>   the same value since they both use the same port.
> "
> 
>> ###
>> 
>> i had results in my mrtg , but not sure of the results ,
>> i got squid mib file  and converted it to oidb file and  put it in m y
>> mrtg
>> .
>> 
>> by here im using smp , not sure from the results 
>> 
>> 
>> i also revived some suspicious logs :
>> 
>> 2013/11/08 16:51:26 kid3| snmpHandleUdp: FD 20 recvfrom: (11) Resource
>> temporarily unavailable
> 
> 
> We are still trying to figure this one out. It seems not to be harmful
> particularly, except a waste of effort somewhere.

=> Well, noted.


regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-cache-manager-question-and-snmp-with-smp-question-tp4663233p4663241.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] alot of annoying logs in squid 3.1.6 disappeared in squid 3.3.9 !!

2013-11-12 Thread Dr.x
These logs flood my cache.log:

*1st log:*
Unknown capability type in WCCPv2 Packet (5).

*2nd log:*
NULL
{Accept: */*
Content-Type: application/x-www-form-urlencoded
2013/11/12 00:53:21| WARNING: HTTP header contains NULL characters {Accept:
*/*
Content-Type: application/x-www-form-urlencoded}
NULL
{Accept: */*
Content-Type: application/x-www-form-urlencoded
2013/11/12 00:53:21| WARNING: HTTP header contains NULL characters {Accept:
*/*
Content-Type: application/x-www-form-urlencoded}
NULL
=
But I'm happy that I don't see them in squid 3.3.9.



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/alot-of-annoying-logs-in-squid-3-1-6-disappeared-in-squid-3-3-9-tp4663264.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] (smp-rock )store rebuilding take too much time while squid has started !!

2013-11-13 Thread Dr.x
Hi,
when I restart Squid I get:
Store rebuilding is 0.94% complete

Note that although I gave Squid only small storage (10 GB), it takes a lot
of time to rebuild!

Also, I am not making Squid wait for the rebuild before WCCP starts, which
means Squid will register with WCCP while the store has not been rebuilt
yet.

The questions are:
can I make Squid rebuild the store faster?

Should I stop WCCP redirection from the router to Squid until the rebuild
is complete? Or is it normal to let Squid start and redirect users while
the rebuild hasn't finished yet?



regards 



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/smp-rock-store-rebuilding-take-too-much-time-while-squid-has-started-tp4663266.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: (smp-rock )store rebuilding take too much time while squid has started !!

2013-11-14 Thread Dr.x
Alex Rousskov wrote
> On 11/13/2013 02:04 AM, Dr.x wrote:
>> hi ,
>> when i restart squid
>>  ihave
>> Store rebuilding is 0.94% complete
>> 
>> note that although im getting small storage to squid (10 G) it takes alot
>> of
>> time to rebuild !!!
>> 
>> also , im disableing wccp rebuild during starr , which mean that squid
>> will
>> start with wccp and the store hasnt rebuilt yet
>> 
>> the question is :
>> can i let squid do  rebuild it faster ???
> 
> There is currently no configuration options to tune rebuild speed except
> foreground rebuild. In general, there is a trade-off between rebuild
> speed and resources remaining for HTTP traffic handling, but Squid lacks
> options to control that trade-off (IIRC). There are also bugs that make
> Squid less responsive that necessary during rebuild. Finally, Rock
> storage rebuilds slower than clean UFS rebuild (but probably faster than
> dirty UFS rebuild).
> 
> Going forward, my plan is:
> 
> 1) Fix rebuild responsiveness bugs (in progress).
> 
> 2) Optimize Rock rebuild (may require Rock db format change and will
> probably be done after Large Rock integration).
> 
> 3) Fix WCCP handling during SMP disk cache rebuild (there is no sponsor
> for this fix at this time though, and I am not sure this bug has been
> officially filed even, so this may take a while unless somebody
> volunteers).
> 
> 
> 
>> should i stop wccp redirection to squid from router untill rebuild is
>> completed ?? or it is just normal issue to let squid start and redirect
>> users and the rebuilt hasnt finished yet?
> 
> Bugs notwithstanding (e.g., see #3 above), there is no right or wrong
> choice here. Some folks prefer Squid to handle traffic while their cache
> is being loaded. Others prefer Squid to start handling traffic with the
> fully loaded cache. The best option depends on the deployment
> environment and optimization goals.
> 
> 
> Cheers,
> 
> Alex.

Well, from my tests,
I found it is better to stop redirecting users from the router until the
rebuild finishes, because the rebuild process consumes a lot of CPU
resources.

If we add the rebuild's CPU consumption to the workers' CPU consumption,
it may cause failures, with CPU usage exceeding 100%.

So I just wait for the rebuild, and then redirect users from the router to
Squid again.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/smp-rock-store-rebuilding-take-too-much-time-while-squid-has-started-tp4663266p4663302.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] logrotate with SMP problem !!!

2013-11-17 Thread Dr.x
disposeName will be /var/log/squid/access.log.1
creating new /var/log/squid/access.log mode = 0640 uid = 500 gid = 4
renaming /var/log/squid/cache.log to /var/log/squid/cache.log.1
disposeName will be /var/log/squid/cache.log.1
creating new /var/log/squid/cache.log mode = 0640 uid = 500 gid = 4
running postrotate script
removing old log /var/log/squid/access.log.1
error: error opening /var/log/squid/access.log.1: No such file or directory
[root@squid logrotate.d]# ls -l /var/log/squid/
total 7680
-rw-r----- 1 squid squid    8439 Nov 17 10:05 access.log
-rw-r----- 1 squid squid   13443 Nov 17 10:05 access.log.0
-rw-r----- 1 squid squid       0 Nov 17 10:04 access.log.2
-rw-r----- 1 squid squid       0 Nov 17 10:04 access.log.3
-rw-r----- 1 squid adm         0 Nov 17 10:04 access.log.4
-rw-r----- 1 squid squid 7805576 Nov 17 10:04 access.log.6
-rw-r----- 1 squid squid    1665 Nov 17 10:04 cache.log
-rw-r----- 1 squid squid     280 Nov 17 10:04 cache.log.3
-rw-r----- 1 squid squid   14342 Nov 17 10:04 cache.log.5
[root@squid logrotate.d]# 

Why are the numbered files still there?

*Not sure whether this is related to running Squid with SMP!*

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/logrotate-with-SMP-problem-tp466.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: logrotate with SMP problem !!!

2013-11-17 Thread Dr.x
Eliezer Croitoru-2 wrote
> Hey,
> 
> What is the content of the file "/etc/logrotate.d/squid"
> 
> Eliezer
> 
> On 17/11/13 17:09, Dr.x wrote:
>> hi all ,
>> ive configured squid to be logrotated from script in
>> /etc/logrotate.d/squid
>> but  ,
>> logrotating is not working  fine !
>>
>>
>>
>> logrotate is rotating the logs ,
>> but ..
>>
>> it keeping  files with numbered names !!!


[root@squid ~]# cat /etc/logrotate.d/squid
/var/log/squid/*.log {
daily
rotate 0
missingok
create 640 squid adm
sharedscripts
postrotate
test ! -e /var/run/squid.pid || /usr/sbin/squid -k rotate

endscript
}
[root@squid ~]# 



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/logrotate-with-SMP-problem-tp466p4663336.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: logrotate with SMP problem !!!

2013-11-17 Thread Dr.x
Helmut Hullen wrote
> Hello, Dr.x,
> 
> you wrote on 17.11.13:
> 
>> ive configured squid to be logrotated from script in
>> /etc/logrotate.d/squid but  ,
>> logrotating is not working  fine !
> 
> Sorry - "logrotate" works fine, but it works in another way than you  
> want it to work.
> 
>> as we see , i have access.log.1 ,access.log.2 ,access.log.3
>> access.log.6
> 
>> i dont want the numbered values here ,
> 
> Then you need another program than "logrotate".
> 
> Best regards!
> Helmut

The same config above works fine on a Debian machine!
On Debian, no renamed files remain; all are deleted.

I copied the same config.




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/logrotate-with-SMP-problem-tp466p4663337.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: logrotate with SMP problem !!!

2013-11-18 Thread Dr.x
Amos Jeffries-2 wrote
> On 2013-11-18 07:22, Dr.x wrote:
>> Helmut Hullen wrote
>>> Hello, Dr.x,
>>> 
>>> you wrote on 17.11.13:
>>> 
>>>> ive configured squid to be logrotated from script in
>>>> /etc/logrotate.d/squid but  ,
>>>> logrotating is not working  fine !
>>> 
>>> Sorry - "logrotate" works fine, but it works in another way than you
>>> want it to work.
>>> 
>>>> as we see , i have access.log.1 ,access.log.2 ,access.log.3
>>>> access.log.6
>>> 
>>>> i dont want the numbered values here ,
>>> 
>>> Then you need another program than "logrotate".
>>> 
>>> Best regards!
>>> Helmut
>> 
>> the same config above works fine on a debian machine os !!!
>> on debian os , no renamed files exist , all are deleted .
>> 
>> ive copied the same config  
>> 
> 
> When you set the directive in squid.conf:
>logfile_rotate 0
> 
> Then squid does *NOT* do any log file name changing. All it does is 
> close its connection to the logs and open a new connection to the 
> un-numbered file.
> 
> If you have wrongly set logfile_rotate to a number other than 0 (the 
> default is 10 unless you configure it to something else). Or if you have 
> wrongly used access.log.${process_number} to create log files then you 
> will end up with numbered files.
> 
> Note: Debian packages contain hard-coded changes that integrate Squid 
> with logrotate, including a logrotate script and the squid.conf default 
> "logfile_rotate 0".
> 
> Check for those two problems. If they do not exist in your squid.conf 
> then the problem is outside of Squid. Possibly you have two logrotate 
> scripts managing squid log files?
>grep -R "squid" /etc/logrotate.d/
> 
> Amos

Hi Amos,
you are absolutely right!
I added the "logfile_rotate 0" directive to squid.conf,
and now it is working well.
Here is my test:
[root@squid ~]# logrotate -f -v /etc/logrotate.d/squid
reading config file /etc/logrotate.d/squid
reading config info for /var/log/squid/*.log 

Handling 1 logs

rotating pattern: /var/log/squid/*.log  forced from command line (no old
logs will be kept)
empty log files are rotated, old logs are removed
considering log /var/log/squid/access.log
  log needs rotating
considering log /var/log/squid/cache.log
  log needs rotating
rotating log /var/log/squid/access.log, log->rotateCount is 0
dateext suffix '-20131118'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/squid/access.log.1 to /var/log/squid/access.log.2
(rotatecount 1, logstart 1, i 1), 
old log /var/log/squid/access.log.1 does not exist
renaming /var/log/squid/access.log.0 to /var/log/squid/access.log.1
(rotatecount 1, logstart 1, i 0), 
old log /var/log/squid/access.log.0 does not exist
log /var/log/squid/access.log.2 doesn't exist -- won't try to dispose of it
rotating log /var/log/squid/cache.log, log->rotateCount is 0
dateext suffix '-20131118'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/squid/cache.log.1 to /var/log/squid/cache.log.2
(rotatecount 1, logstart 1, i 1), 
old log /var/log/squid/cache.log.1 does not exist
renaming /var/log/squid/cache.log.0 to /var/log/squid/cache.log.1
(rotatecount 1, logstart 1, i 0), 
old log /var/log/squid/cache.log.0 does not exist
log /var/log/squid/cache.log.2 doesn't exist -- won't try to dispose of it
renaming /var/log/squid/access.log to /var/log/squid/access.log.1
disposeName will be /var/log/squid/access.log.1
creating new /var/log/squid/access.log mode = 0640 uid = 500 gid = 4
renaming /var/log/squid/cache.log to /var/log/squid/cache.log.1
disposeName will be /var/log/squid/cache.log.1
creating new /var/log/squid/cache.log mode = 0640 uid = 500 gid = 4
running postrotate script
*removing old log /var/log/squid/access.log.1
removing old log /var/log/squid/cache.log.1*
[root@squid ~]# ls -l /var/log/squid/
total 304
-rw-r----- 1 squid adm 305024 Nov 18 05:07 access.log
-rw-r----- 1 squid adm   1945 Nov 18 05:07 cache.log
[root@squid ~]# du -sh /var/log/squid/
584K    /var/log/squid/


As you see, the numbered files were removed.

Thanks a lot.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/logrotate-with-SMP-problem-tp466p4663341.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] SMP-Rock-frequent FATAL: Received Segment Violation...dying."only on kid3"

2013-11-18 Thread Dr.x
hi ,

i have smp with config as below :
###SMP oPTIONS#
dns_v4_first on 
workers 3

cpu_affinity_map process_numbers=1,2,3,4 cores=2,4,6,8
==
cache_dir rock /CACHE1 2 max-size=32768 swap-timeout=350
max-swap-rate=350
cache_dir rock /CACHE2 2 max-size=32768 swap-timeout=350
max-swap-rate=350
==
*To be honest, I ran "killall -9 squid" a lot while Squid was starting and
building its files!*
I also just added the "logfile_rotate 0" directive;
that is what I remember doing before I saw this new error in cache.log!

But today, after those modifications, I see a lot of "FATAL: Received
Segment Violation...dying.", especially for kid3!
It seems kid3 gets killed, then starts again, gets killed, then starts
again, and so on.
I don't know whether I should remove the logrotate directive, or whether
the rock db got damaged and is causing this dying signal.













-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SMP-Rock-frequent-FATAL-Received-Segment-Violation-dying-only-on-kid3-tp4663349.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: SMP-Rock-frequent FATAL: Received Segment Violation...dying."only on kid3"

2013-11-19 Thread Dr.x
Hi,
1 - I removed the rock contents from my folders /CACHE1 & /CACHE2
2 - made sure that Squid was not running
3 - ran squid -z
4 - started Squid

I will monitor it for 48 hours and see the result.

About mount options:


[root@squid ~]# cat /etc/fstab 
# /etc/fstab
# Created by anaconda on Tue Oct 29 12:35:49 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=51e30005-ffc4-4ba1-844f-54ac00cada80 /   ext4   
defaults1 1
tmpfs   /dev/shmtmpfs   defaults0 0
devpts  /dev/ptsdevpts  gid=5,mode=620  0 0
sysfs   /syssysfs   defaults0 0
proc/proc   procdefaults0 0
###
shm/dev/shmtmpfsnodev,nosuid,noexec00
/dev/sdb2 /CACHE1 ext4 noatime,barrier=0,data=writeback,commit=100 0 2
=
The folder /CACHE2 is on the same disk as the operating system.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SMP-Rock-frequent-FATAL-Received-Segment-Violation-dying-only-on-kid3-tp4663349p4663356.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: SMP-Rock-frequent FATAL: Received Segment Violation...dying."only on kid3"

2013-11-21 Thread Dr.x
  Total in use:   58338 KB 7%
Total free: 58338 KB 7%
Total size:812592 KB
Memory accounted for:
Total accounted:60183 KB   7%
memPool accounted:  60183 KB   7%
memPool unaccounted:   752409 KB  93%
memPoolAlloc calls:80
memPoolFree calls:  1998262623
File descriptor usage for squid:
Maximum number of file descriptors:   655360
Largest file desc currently in use:   1643
Number of file desc currently in use: 2677
Files queued for open:   0
Available number of file descriptors: 652683
Reserved number of file descriptors:   500
Store Disk files open:   2
Internal Data Structures:
  2194 StoreEntries
  1029 StoreEntries with MemObjects
 32000 Hot Object Cache Items
651168 on-disk objects



But I noted a count of 1 for:

*Page faults with physical i/o: 1*

Is that normal?

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SMP-Rock-frequent-FATAL-Received-Segment-Violation-dying-only-on-kid3-tp4663349p4663428.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] how much memory needed for rock dir ?

2013-12-01 Thread Dr.x
hi ,
does a rock dir need memory according to the same rule as aufs?

is about 14 MB per 1 GB of rock storage needed on a 64-bit build,
or more?


squid hung after about 7 days of working fine, and I am not sure
whether the growth of my rock hard disk is the reason or not.
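
If the aufs rule of thumb carries over to rock (an assumption; roughly 14 MB of index RAM per GB of disk cache on 64-bit builds, as the question above suggests), a back-of-envelope estimate looks like:

```
 10 GB rock dir  ->  10 x 14 MB  =~ 140 MB of index RAM
100 GB rock dir  -> 100 x 14 MB  =~ 1.4 GB of index RAM
(cache_mem and per-connection buffers come on top of this)
```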



regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-much-memory-needed-for-rock-dir-tp4663617.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-10 Thread Dr.x
iishiii wrote
> Thanks a lot Amos !!
> please help me little more in way exactly what steps should i perform for
> SMP 
> how should i break my squid conf and what more configuration is required. 
> 
> 
> and . How to tune squid for getting greater hit rates... tell me some
> steps to try.

hi,
look,

you are right.

the SMP scaling page on the wiki is somewhat difficult and not suitable for a
beginner to understand.

luckily, you can ask on the mailing list and gain more understanding.
1st of all, let me ask you some questions:

why do you need SMP?
what is the OS of your system?
do you have high load that your squid cannot handle?
what are your server machine details?
how much memory, how many CPU cores, and how many hard disks do you have?
==
so,
as a start:
to use SMP, your squid version should be 3.2.x or higher, as I
remember,
so I advise you to compile squid 3.3.9, 3.3.10 or higher, and
enable the "rock" store when you compile it.
then
you should read this wiki:
http://wiki.squid-cache.org/Features/SmpScale

at the bottom of the page you will find the 3 most frequent problems we
deal with in SMP;
follow the workarounds as explained, because otherwise SMP may not start cleanly.

after that,
once you have made sure SMP is fine,

you can add to squid.conf:
workers 3   => "start with 3"
and start without a cache_dir.
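
a minimal first squid.conf along those lines might look like this (a sketch only; the port and the client subnet are placeholders, and cache_dir is deliberately left out for the first test):

```
# squid.conf - minimal SMP trial run, no disk cache yet
workers 3

http_port 3128

acl localnet src 192.168.0.0/16    # placeholder: your client subnet
http_access allow localnet
http_access deny all

cache_mem 256 MB                   # per-worker memory cache to start with
```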


when you have done the above, tell me, so that you can continue to the next
steps of tuning parameters and enhancing squid.



again,
my understanding is not so good, but I have tried to help you as best I can;
Mr Amos has a lot of the answers to the next questions.

I hope this helped.

regards

Ahmd



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-use-multi-instances-SMP-in-squid-tp4663724p4663753.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-11 Thread Dr.x
hi,
in my opinion,
100 users is not so many http requests,
and SMP is not the right solution here.

you can tune squid parameters and optimize squid; you do not need SMP.

SMP is limited to a 32 KB object size, and in my opinion it will get fewer
hits than squid without SMP.

SMP can help when you have a multi-core machine, a huge number of http
requests, and want to balance those http requests among your cores.


and again, here I am asking Amos about SMP:

can we get more bandwidth saving after using SMP? assume we fix all other
squid parameters and want to make a comparison.


assume I have a server without SMP that saves about 40 Mbps;

can I reach this value after SMP?


with my best regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-use-multi-instances-SMP-in-squid-tp4663724p4663780.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-11 Thread Dr.x
try to increase the cache_mem values:

try 

#  MEMORY CACHE OPTIONS
cache_mem 4096 MB
maximum_object_size_in_memory 2 MB

then test performance




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-use-multi-instances-SMP-in-squid-tp4663724p4663781.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how to use multi instances (SMP) in squid

2013-12-11 Thread Dr.x
Eliezer Croitoru-2 wrote
> Hey dr,
> 
> On 11/12/13 10:24, Dr.x wrote:
>>
>> and again ,  here , im asking Amos about SMP>>>>>
>>
>> can we after using SMP to get more BW saving ?? assume we fixed all other
>> parameters of squid and want to make comparison .
> I am not sure I understand your question.
> 
>> assume i have server without smp and save about 40 MBps
>>
>> can i reach this value after SMP ???
> 
>  From the question My only conclusion is that the complexity of SMP was 
> not understood yet.
> 
> Squid is a single process application.
> In 3.2 what we try to do is to let the admin configure a cache service 
> using the basic single process idea while extending the configuration to 
> allow pre-defining of multi process squid service.
> 
> Each and every squid instance can and should communicate with other 
> instances in order to achieve the result which is MutliCPU utilization.
> 
> I would try to say that looking at squid instances is more like "how do 
> I configure a cluster of squids?"
> 
> Once the understanding of how a cluster should be configured the answer 
> to your question will be answered in a sec.
> 
> All the bests,
> Eliezer

but I think I have a good understanding of SMP squid;
"you may have misunderstood me"

the reason I asked is:

because I am talking about the maximum object size being 32 KB;

that is why I think the hit ratio and bandwidth saving with SMP will be less
than with normal squid.

and again, this is just a questionnaire.



again,
Eliezer,
thanks for your reply.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-use-multi-instances-SMP-in-squid-tp4663724p4663783.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] after a long time squid hangs, no logs, kills the internet, and I must restart it !!!!!

2013-12-16 Thread Dr.x
###
http_access allow mysubnet

cache_mgr a@a 
cachemgr_passwd a all
###
# add user authentication and similar options here
http_access allow manager localhost
http_access allow manager localip
http_access allow manager mysubnet
http_access allow mysubnet manager
http_access deny manager
###
cache_mem 512 MB
maximum_object_size 10 MB
##
# the frontend requires a different name to the backend
###
cache_swap_low 90
cache_swap_high 95
###
#
quick_abort_pct 95
fqdncache_size 65535
ipcache_size 65535
###
ipcache_low 98
ipcache_high 99
#
### WCCP2 Config#
wccp2_router xxx
wccp2_rebuild_wait off   
wccp_version 2
wccp2_return_method 2
wccp2_service dynamic 92
wccp2_service_info 92 protocol=tcp flags=src_ip_hash priority=250 ports=80
wccp2_service dynamic 93
wccp2_service_info 93 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80
##
cache_effective_user squid
cache_effective_group squid
###
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
### 
http_access deny all
##
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

#cache_dir rock /CACHE1 2 max-size=32768 swap-timeout=350 max-swap-rate=350
cache_dir rock /CACHE1 2 max-size=32768 swap-timeout=350
#
# slow-after-some-time troubleshooting
memory_pools off 
pconn_timeout 2 minutes
persistent_request_timeout 1 minute 
#
read_ahead_gap 128 KB
###
#rotating logs#
logfile_rotate 0

===

now I have restarted squid and it is fine for the moment.

I wish to find a solution next time and determine the reason for that hang.

with my best regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/after-long-time-squid-hang-no-logs-kill-the-internet-and-i-must-restart-it-tp4663878.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] FATAL: Received Segment Violation...dying.

2013-12-20 Thread Dr.x
hi all,

I have SMP and it works fine at the start of squid,

but after some time,

one of my kids has to be killed and restarted again, and so on.

here is what is shown in cache.log:
*FATAL: Received Segment Violation...dying.
CPU Usage: 75.134 seconds = 52.767 user + 22.367 sys
Maximum Resident Size: 739872 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():*

I do not know why this "dying" occurs.
as you see, the kid was killed and it started again!
after some time, squid hangs and I must kill it and start it again.

==

FATAL: Received Segment Violation...dying.
CPU Usage: 75.134 seconds = 52.767 user + 22.367 sys
Maximum Resident Size: 739872 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:   85848 KB
Ordinary blocks:82297 KB737 blks
Small blocks:   0 KB  1 blks
Holding blocks: 72988 KB  8 blks
Free Small blocks:  0 KB
Free Ordinary blocks:3550 KB
Total in use:  155285 KB 181%
Total free:  3550 KB 4%
2013/12/19 10:35:52 kid2| Starting Squid Cache version 3.3.9 for
i486-pc-linux-gnu...
2013/12/19 10:35:52 kid2| Process ID 20001
2013/12/19 10:35:52 kid2| Process Roles: worker
2013/12/19 10:35:52 kid2| With 131072 file descriptors available
2013/12/19 10:35:52 kid2| Initializing IP Cache...
2013/12/19 10:35:52 kid2| DNS Socket created at [::], FD 7
2013/12/19 10:35:52 kid2| DNS Socket created at 0.0.0.0, FD 8
2013/12/19 10:35:52 kid2| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2013/12/19 10:35:52 kid2| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2013/12/19 10:35:52 kid2| Logfile: opening log
stdio:/var/log/squid/access.log
2013/12/19 10:35:52 kid2| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2013/12/19 10:35:52 kid2| Store logging disabled
2013/12/19 10:35:52 kid2| Swap maxSize 0 + 1536000 KB, estimated 118153
objects
2013/12/19 10:35:52 kid2| Target number of buckets: 5907
2013/12/19 10:35:52 kid2| Using 8192 Store buckets
2013/12/19 10:35:52 kid2| Max Mem  size: 1536000 KB [shared]
2013/12/19 10:35:52 kid2| Target number of buckets: 5907
2013/12/19 10:35:52 kid2| Using 8192 Store buckets
2013/12/19 10:35:52 kid2| Max Mem  size: 1536000 KB [shared]
2013/12/19 10:35:52 kid2| Max Swap size: 0 KB
2013/12/19 10:35:52 kid2| Using Least Load store dir selection
2013/12/19 10:35:52 kid2| Current Directory is /root
2013/12/19 10:35:52 kid2| Loaded Icons.
2013/12/19 10:35:52 kid2| HTCP Disabled.
2013/12/19 10:35:52 kid2| Squid plugin modules loaded: 0
2013/12/19 10:35:52 kid2| Adaptation support is off.
2013/12/19 10:35:52 kid2| Finished rebuilding storage from disk.
2013/12/19 10:35:52 kid2| 0 Entries scanned
2013/12/19 10:35:52 kid2| 0 Invalid entries.
2013/12/19 10:35:52 kid2| 0 With invalid flags.
2013/12/19 10:35:52 kid2| 0 Objects loaded.
2013/12/19 10:35:52 kid2| 0 Objects expired.
2013/12/19 10:35:52 kid2| 0 Objects cancelled.
2013/12/19 10:35:52 kid2| 0 Duplicate URLs purged.
2013/12/19 10:35:52 kid2| 0 Swapfile clashes avoided.
2013/12/19 10:35:52 kid2|   Took 0.02 seconds (  0.00 objects/sec).
2013/12/19 10:35:52 kid2| Beginning Validation Procedure
2013/12/19 10:35:52 kid2| 0 Duplicate URLs purged.
2013/12/19 10:35:52 kid2| 0 Swapfile clashes avoided.
2013/12/19 10:35:52 kid2|   Took 0.02 seconds (  0.00 objects/sec).
2013/12/19 10:35:52 kid2| Beginning Validation Procedure
2013/12/19 10:35:52 kid2|   Completed Validation Procedure
2013/12/19 10:35:52 kid2|   Validated 0 Entries
2013/12/19 10:35:52 kid2|   store_swap_size = 0.00 KB
2013/12/19 10:35:52 kid2| Accepting HTTP Socket connections at local=xx
remote=[::] FD 16 flags=1
2013/12/19 10:35:52 kid2| Accepting TPROXY spoofing HTTP Socket connections
at local=x:xx remote=[::] FD 17 flags=17
2013/12/19 10:35:53 kid2| storeLateRelease: released 0 objects




how do I troubleshoot this issue?
any help would be appreciated.
regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/FATAL-Received-Segment-Violation-dying-tp4663955.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] SECURITY ALERT: Host header forgery detected on local

2013-12-20 Thread Dr.x

hi,

I have these logs and am wondering whether they are harmful.
I am using squid 3.3.9:
Squid Cache: Version 3.3.9
==
2013/12/20 15:41:00.747 kid1| SECURITY ALERT: Host header forgery detected
on local=10.10.0.50:80 remote=x.x.x.x FD 573 flags=17 (local IP does not
match any domain IP)
2013/12/20 15:41:00.747 kid1| SECURITY ALERT: By user agent: 
2013/12/20 15:41:00.747 kid1| SECURITY ALERT: on URL:
client9.dropbox.com:443
2013/12/20 15:41:00.747 kid1| abandoning local=10.10.0.50:80 remote=x.x.x.x
FD 573 flags=17
 kid1| SECURITY ALERT: Host header forgery detected on local=10.10.0.50:80
remote=x.x.x.x FD 163 flags=17 (local IP does not match any domain IP)
2013/12/20 15:41:29.611 kid1| SECURITY ALERT: By user agent: 
2013/12/20 15:41:29.611 kid1| SECURITY ALERT: on URL: d.dropbox.com:443
2013/12/20 15:41:29.611 kid1| abandoning local=10.10.0.50:80 remote=x.x.x.x
FD 163 flags=17
===

please clarify whether this log indicates something harmful.
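
for reference, the alert means squid detected a mismatch between the intercepted destination IP and the IPs that the Host header's domain resolves to (common with CDN-hosted services like Dropbox), and refused to reuse that response. one usual mitigation (a sketch, not a guaranteed fix) is to have squid resolve names through the same DNS server the clients use, so both sides see the same CDN addresses:

```
# squid.conf - use the same resolver as the clients
# (placeholder address; substitute your network's actual DNS server)
dns_nameservers 10.0.0.53
```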


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SECURITY-ALERT-Host-header-forgery-detected-on-local-tp4663961.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Unbalanced Cpu cores with squid 3.4.3 with centos 6.4 64 bit

2014-02-12 Thread Dr.x
xx
> acl eliezer src 
> acl localip src x
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> ##
> # Recommended minimum Access Permission configuration:
> # Deny requests to certain unsafe ports
> ##
> ##
> http_access deny !Safe_ports
> http_access allow localnet
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> ###
> ###SMP oPTIONS#
> dns_v4_first on 
> workers 4
> 
> #cpu_affinity_map process_numbers=1,2,3,4 cores=2,4,6,8
> #http_port 127.0.0.1:400${process_number}
> #http_port ::1:66000
> #
> visible_hostname 
> 
> Filterring##
> acl blockkeywords dstdomain "/etc/squid/xxx"
> http_access deny blockkeywords
> #
> cache_log /var/log/squid/cache.log
> access_log /var/log/squid/access.log 
> ### 
> http_port ff:
> http_port dd:eee tproxy
> 
> http_access allow mysubnet
> 
> cache_mgr x  
> cachemgr_passwd  all
> ###
> # add user authentication and similar options here
> http_access allow manager localhost
> http_access allow manager localip
> http_access allow manager mysubnet
> http_access allow mysubnet manager
> http_access deny manager
> ###
> cache_mem 3000 MB
> maximum_object_size_in_memory 10 MB
> #
> quick_abort_pct 95
> fqdncache_size 65535
> ipcache_size 65535
> ###
> ipcache_low 98
> ipcache_high 99
> #
> ### WCCP2 Config#
> wccp2_router x.x.x.x 
> wccp_version 1
> wccp2_forwarding_method 2
> wccp2_return_method 2
> wccp2_assignment_method 2
> wccp2_service dynamic 60
> wccp2_service_info 60 protocol=tcp flags=src_ip_hash priority=250 ports=80
> wccp2_service dynamic 70
> wccp2_service_info 70 protocol=tcp flags=dst_ip_hash,ports_source priority=250 ports=80
> ##
> cache_effective_user squid
> cache_effective_group squid
> ###
> memory_replacement_policy heap GDSF
> cache_replacement_policy heap LFUDA
> #
> dns_nameservers x.x.x.x 8.8.8.8
> ### 
> http_access deny all
> ##
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> refresh_pattern .               0       20%     4320
> 
> memory_pools off 
> pconn_timeout 2 minutes
> persistent_request_timeout 1 minute 
> #
> max_filedesc 131072



*any help,
any suggestions?*






-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Unbalaned-Cpu-cores-with-squid-3-4-3-with-centos-6-4-64-bit-tp4664748.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: question about large rock

2014-02-13 Thread Dr.x
hi all,
I am asking about memory caching.
we know that we have a restriction of 32 KB with SMP rock, which results
in poor bandwidth utilization.


now,

what about the memory object size?
is it also 32 KB?
I mean, if I get a server, rely on RAM, have 128 GB of RAM, and do not
want to use hard disks,

do we still have a 32 KB maximum object size in RAM in this case?

what is the best implementation of SMP with rock plus aufs, so that we get
maximum bandwidth utilization and maximum hit ratio?


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/question-about-large-rock-tp4664469p4664768.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] deep understanding of squid smp with disker and worker

2014-02-13 Thread Dr.x
hi all,
assume I have:
4 workers (I used "workers 4" in squid.conf)
2 diskers (I used two rock dirs)
1 coordinator

assume that the google website has been cached on the hard disks (disk1 and
disk2, which are represented by the disker processes).


assume that user X is directed to squid and requests google.com.
now,
who will answer the user?
the worker? or the disker?

well, I will answer this question myself and need to make sure that my
answer is correct.

as I understand it, the workers, not the diskers, are responsible for
answering the request. the worker will first try to serve it from memory; if
that fails, it will go to the diskers (responsible for disk caching and
rock) and try to serve it from the hard disks.
it will go to one disker, and if the object is not found there it will go to
the 2nd disker, and so on.
if the object is not found in memory or on a disker, squid will try to cache
it, if its size is less than or equal to 32 KB!




up till now, is my answer correct?
I have more questions.

regards

regards 



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/deep-understanding-of-squid-smp-with-disker-and-worker-tp4664780.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: deep understanding of squid smp with disker and worker

2014-02-13 Thread Dr.x
hi alex,
good.
now, is the 32 KB limitation on disk objects only?
or
on both (memory and disk)?




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/deep-understanding-of-squid-smp-with-disker-and-worker-tp4664780p4664783.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: deep understanding of squid smp with disker and worker

2014-02-15 Thread Dr.x
Alex Rousskov wrote
> On 02/13/2014 12:45 PM, Dr.x wrote:
>> now the limitation of 32kB is on  disk objects  only ??
>> or
>> both (memory or  disk )
> 
> 
> Shared memory cache prior to Large Rock support has the 32KB limit.
> Shared memory cache after Large Rock support does not have that limit.
> Non-shared memory cache (any version) does not have that limit.
> 
> Alex.

hi Alex, thanks a lot for the explanation.
now, how do I enable the large rock store? from the wiki I found it in squid
3.5, but the last release is 3.4.x?
can I enable it with my version, 3.4.3?
===

another question I have:
I have implemented squid with both aufs and rock.
I have 3 aufs hard disks.
I created 6 workers
and mapped workers 2, 4, 6 to the aufs hard disks aufs1, aufs2, aufs3; I
also mapped the CPU cores of workers 2, 4, 6 to cores 8, 10, 12 of my CPU.

now, what I noted is:
the workers are not balanced.
I mean that the http traffic is not balanced from the OS to the workers,
so one worker will receive more than the others, and so on.

what I mean can be summarized by the disk utilization over time.
as I said, I have 3 workers mapped to 3 aufs hard disks.
now when I use df -h, I have:
[root@squid ~]# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/sda1   117G   14G   98G  13% /
tmpfs16G  2.0G   14G  13% /dev/shm
shm  16G  2.0G   14G  13% /dev/shm
*
/dev/sdb1   117G   13G   99G  12% /ssd1
/dev/sdc1   117G   19G   93G  17% /ssd2
/dev/sdd1   235G  2.7G  220G   2% /ssd3
*
/dev/sde1   275G  772M  260G   1% /sata1 ==>rock
/dev/sdf1    99G  765M   93G   1% /sata2 ==>rock

keep an eye on the bold lines; those are my 3 aufs hard disks. you will
see that:
/dev/sdb1   117G   13G   99G  12% /ssd1
/dev/sdc1   117G   19G   93G  17% /ssd2 ===> most requests received
/dev/sdd1   235G  2.7G  220G   2% /ssd3 ===> fewest requests received

another question:

can I monitor cache.log for each aufs instance?
I configured access.log and cache.log for each aufs instance, but only
access.log worked.
I cannot monitor cache.log per aufs instance, yet I can monitor cache.log
for the rock store?


can you guide me to the best mount options and the best aufs subdirectory
layout, so that I get the best byte hit ratio and save more of my bandwidth?

again, I have 24 cores in my server and want to get the benefit of them.

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/deep-understanding-of-squid-smp-with-disker-and-worker-tp4664780p4664835.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: deep understanding of squid smp with disker and worker

2014-02-15 Thread Dr.x
ANOTHER QUESTION:

does the choice of CPU numbers in the mapping make a difference?

as an example,
I have 10 workers.
does mapping workers 1,2,3,4,5,6,7,8,9,10 to CPUs 1,2,3,4,5,6,7,8,9,10
have a different effect from
mapping
them to CPUs 2,4,6,8,10,12,14,16,18,20,
or
to CPUs 1,3,5,7,9,11,13,15,17,19?

also, on the other side: if I have 3 aufs hard disks and need to map each
aufs process to a worker,
does the choice of which worker serves which aufs hard disk make a difference?

what is the best method to use, and what do I have to avoid?
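
for what it is worth, the mapping in question is expressed with cpu_affinity_map; a sketch (the core numbers are placeholders; whether consecutive or alternating core numbers are hyperthread siblings depends on your CPU topology, so check /proc/cpuinfo first):

```
# squid.conf - pin 10 workers to 10 distinct cores (sketch)
# core numbering is a placeholder; verify which logical CPUs share a
# physical core with: egrep 'processor|core id|physical id' /proc/cpuinfo
workers 10
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10 cores=1,2,3,4,5,6,7,8,9,10
```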

regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/deep-understanding-of-squid-smp-with-disker-and-worker-tp4664780p4664836.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Unbalanced Cpu cores with squid 3.4.3 with centos 6.4 64 bit

2014-02-15 Thread Dr.x
hi alex ,
[root@squid ~]# egrep 'cpu cores|physical id' /proc/cpuinfo | sort -u 
cpu cores   : 6
physical id : 0
physical id : 1

what I was meaning is the low byte hit ratio,
the low bandwidth saving.

when using normal squid, 1 instance with 3 hard disks of 90 GB each,
I was saving about 30-40 Mbps.

now,
with the same load, the same users and the same bandwidth,
I am only saving about 10-12 Mbps!

that is what I was meaning.

the question is being asked frequently:

how do I implement squid so that it handles more requests (SMP) and makes
more bandwidth saving?
I think the biggest problem of SMP is the shared-cache limitation of memory
and the rock store to 32 KB objects.


why not direct the new squid development efforts at increasing the
shared-memory and shared-object limit to about 1 MB?

this is a big problem in my view and needs to be fixed.

again, we all appreciate your efforts and hard work in updating squid and
fixing bugs,
but I think this issue needs priority with the squid developers.


again, great thanks to you (Alex) and to the squid developer team.


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Unbalaned-Cpu-cores-with-squid-3-4-3-with-centos-6-4-64-bit-tp4664748p4664838.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Unbalanced Cpu cores with squid 3.4.3 with centos 6.4 64 bit

2014-02-15 Thread Dr.x
hi amos, thanks a lot,
but something has changed here in cache manager since I added aufs and
mapped workers to the aufs hard disks.
I found that:

Mean Object Size:   37.85 KB


previously it was always 32 KB.

does that mean that squid takes the mean object size averaged over the
workers and puts it here in cache manager?

NOTE THAT for the AUFS workers I put max object size = 10 MB;
I mean, I would have expected to see something higher than 37.85 KB?

can you explain that?
regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Unbalaned-Cpu-cores-with-squid-3-4-3-with-centos-6-4-64-bit-tp4664748p4664849.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: deep understanding of squid smp with disker and worker

2014-02-15 Thread Dr.x
hi Mr Amos,
thanks a lot for the clarification.
but again,
about the squid code at
http://www.squid-cache.org/Versions/v3/3.HEAD/squid-3.HEAD-20140214-r13275.tar.gz
now I am asking:

do I need to configure anything else so that it supports large rock?
do I need to modify squid.conf? or the compile options? or install more
dependencies?

I am very enthusiastic to test large rock ASAP.
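
assuming a large-rock-capable build (3.HEAD at the time, later released as squid 3.5), the relevant squid.conf change is reportedly the slot-size option on a rock cache_dir; a sketch with placeholder sizes:

```
# squid.conf - large-rock cache_dir (sketch; all sizes are placeholders)
# 16384 = cache size in MB; slot-size sets the on-disk slot granularity,
# and max-size may exceed 32 KB because large objects span multiple slots
cache_dir rock /CACHE1 16384 slot-size=16384 max-size=4194304
```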

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/deep-understanding-of-squid-smp-with-disker-and-worker-tp4664780p4664850.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: Fwd: [squid-users] Performance tuning of SMP + Large rock

2014-02-15 Thread Dr.x
@Rajiv Desai

have you found an increase in bandwidth saving since you started using large rock?
if so,
how much difference did you find?

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Performance-tuning-of-SMP-Large-rock-tp4664765p4664851.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Worker I/O push queue overflow: ipcIo7.30506r9

2014-02-16 Thread Dr.x
hi all,
I have implemented aufs together with rock,
and I got the message below in the rock store's cache.log!

what does it mean?

I have done the following:

I have 5 aufs hard disk dirs
and 2 rock hard disk dirs.

I have the configuration below:

workers 8
#dns_v4_first on
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9 cores=2,4,6,8,10,12,14,16,18
#
if ${process_number} = 4
include /etc/squid/aufs1.conf
endif
###
if ${process_number} = 2
include /etc/squid/aufs2.conf
endif

if ${process_number} = 6
include /etc/squid/aufs3.conf
endif
#
if ${process_number} = 7
include /etc/squid/aufs4.conf
endif
#
if ${process_number} = 8
include /etc/squid/aufs5.conf
endif
===

each aufsN.conf has its own aufs cache_dir in it.

but after all of that,
I still have low bandwidth saving.
are these errors harmful?

*Worker I/O push queue overflow: ipcIo7.30506r9
*


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Worker-I-O-push-queue-overflow-ipcIo7-30506r9-tp4664857.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: What are recommended settings for optimal sharing of cache between SMP workers?

2014-02-18 Thread Dr.x
I am having doubts:
without SMP, with the same traffic and the same users, I can save 40 Mbps,

but with SMP, with a combination of aufs and rock (32 KB max object size),
I can only save 20 Mbps.


I am wondering: will large rock heal this?

or should I return to aufs and wait until squid releases a version that has
a bigger object size?

bandwidth saving is a big issue for me and must be achieved!





-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/What-are-recommended-settings-for-optimal-sharing-of-cache-between-SMP-workers-tp4664909p4664915.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: What are recommended settings for optimal sharing of cache between SMP workers?

2014-02-18 Thread Dr.x
Amos Jeffries-2 wrote
> On 19/02/2014 12:12 a.m., Dr.x wrote:
>> im doubting ,
>> without smp with same traffic and same users i can save 40Mbps
>> 
>> but in smp with combination of aufs with rock (32KB max obj size)
>> i can only save 20Mbps
>> 
>> 
>> im wondering does large rock will  heal me ?
>> 
> 
> How many Squid processes are you currently needing to service those
> users traffic?
> 
> If the number is >1 then the answer is probably yes.
> 
>  * Each worker should have same HIT ratio from AUFS cached objects. Then
> the shared Rock storage should increase HIT ratio some for workers which
> would not normally see those small objects.
> 
> 
>> or return to aufs and wait untill squid relase version that has bigger
>> object size ?
>> 
>> bw saving is a big issue to me and must be done !!!
>> 
> 
> Your choice there.
> 
> FYI: The upcoming Squid series with large-rock support is not planned to
> be packaged for another 3-6 months.
> 
> HTH
> Amos

hi amos,
I have about 900 req/sec, and I think I need 4 or 5 workers at maximum.
I have 24 cores.
from the old squid that was saving 40-45 Mbps I found the mean object size:
  Mean Object Size:   *142.30 KB*

I found that 142 KB is close to 100 KB;

I mean, if I used large rock, would it enhance the byte hit ratio?
do you agree with me?

now, regarding using aufs with rock:

I currently have 5 aufs hard disks, each with a conf file, an aufs dir and a
max object size.

now, what is the best implementation of SMP?

should I use if-statements and map each worker to an aufs process?

I am not sure which is best.

surely you can give me advice to start with.


also, can I use large rock now?
regards




-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/What-are-recommended-settings-for-optimal-sharing-of-cache-between-SMP-workers-tp4664909p4664921.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] static caching for specific website for specific time

2014-02-19 Thread Dr.x
hi all,
I found that sites like .com, as an example, are the sites most viewed
by customers.
the question is: how do I use a squid refresh_pattern so that it caches that
whole website for about 5 minutes, then goes and caches it again, and
so on?


if I do that, I am sure the byte hit ratio will be better
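
a sketch of such a refresh_pattern (the domain regex is a placeholder; note that refresh_pattern heuristics only apply to responses without explicit Expires/Cache-Control freshness information, so it cannot force-cache everything by itself):

```
# squid.conf - treat matching objects as fresh for ~5 minutes (sketch)
# syntax: refresh_pattern [-i] regex min-minutes percent max-minutes
refresh_pattern -i \.example\.com/ 5 0% 5
```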

:)


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/static-caching-for-specific-website-for-specific-time-tp4664943.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Question in adding banner for ads by squid ?!!

2014-02-21 Thread Dr.x
hi all,
Project Description:
I want a proxy server that can insert a banner (customizable, like a
frame, so I can insert a banner in flash, or HTML code with js) at the
top of every web page.
is that possible with squid?


do I need to be a programmer to do it?

can somebody on the mailing list assist me with it?
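
for context, squid itself does not rewrite response bodies; the usual route is to pass responses through an external ICAP (or eCAP) service that injects the HTML. a sketch of the squid.conf side only, assuming a hypothetical banner-injecting ICAP server already listening on localhost:1344 (that server must be written or installed separately):

```
# squid.conf - hand responses to an ICAP RESPMOD service (sketch)
# service name and URI are placeholders for a separately-built ICAP server
icap_enable on
icap_service banner_svc respmod_precache icap://127.0.0.1:1344/respmod
adaptation_access banner_svc allow all
```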

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Question-in-adding-banner-for-ads-by-squid-tp4664976.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Worker I/O push queue overflow: ipcIo7.30506r9

2014-02-21 Thread Dr.x
hi alex ,

how we will share aufs between workers ???

as i understand , 

aufs shouldnt  be shared with workers ,

if that happen , squid will not be stable and each worker will make
different modifiaction on the same aufs .


actually i tried aufs without mapping to process , i mean aufs with shared
workers , but squid couldnt start !!

can u correct me plz ?



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Worker-I-O-push-queue-overflow-ipcIo7-30506r9-tp4664857p4664977.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] why can squid block https when I point my browser to the port, but not when it is transparent?

2014-07-27 Thread Dr.x
hi all ,

i have 2 questions.


1- why when i make a normal squid with normal http port , and i direct my
browser to ip/port it can block https facebook


while 
if it was transparent proxy it cant block https facebook ??

im talking about im configuraing normal http proxy not https !

wish a clarification.
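
(background on question 1, for what it is worth: in explicit-proxy mode the browser sends squid a CONNECT request that names the destination host, which an ACL can match; in interception mode squid only sees opaque TCP to port 443. a sketch of the explicit-mode block, with placeholder domains:)

```
# squid.conf - block HTTPS to given domains in explicit-proxy mode (sketch)
# this works because the browser's CONNECT request carries the hostname
acl blocked_ssl dstdomain .facebook.com .youtube.com
http_access deny CONNECT blocked_ssl
```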


2- now, if I use ssl-bump and transparent tproxy with https... can I buy a
trusted certificate, install it on squid, and have the users not see a
"certificate not trusted" message?


I mean: in a production network with many users, I need to block https
youtube/facebook while keeping transparent tproxy.



I wish for help.

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/why-squid-can-block-https-when-i-point-my-browser-to-port-and-cant-when-its-transparent-tp4667069.html
Sent from the Squid - Users mailing list archive at Nabble.com.