On 5/30/2013 5:52 PM, Alex Rousskov wrote:
On 05/30/2013 08:29 AM, Eliezer Croitoru wrote:
On 5/30/2013 5:24 PM, Alex Rousskov wrote:
Yes, provided you use SMP Squid and Rock cache_dir.
This is a great feature, but most cache objects will be larger than what
the rock cache_dir offers, no?
In
On 01/06/2013 10:11, csn233 wrote:
A side by side
comparison of what behaviour your proxy has when StoreID is used and when it
is not will probably also be useful in figuring it out.
What I've noticed so far is:
1. Running the same video on 1 browser at a time - no problem, and I
see HITs aft
Hey Ricardo,
If you can build an RPM and store it, it will be helpful for many people.
It will also add redundancy to my RPM and provide an alternative to mine.
http://www1.ngtech.co.il/rpm/centos/6/x86_64/
If you want the SRPM, this is where mine is stored:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/SRP
wxrwxrwt. 2 root root 40 Jun 1 12:16 .
(maybe I am doing the whole shm thing wrong)
Btw I will test your package this morning (it is monday morning here in
Brazil now) and tell you how it goes.
--
Att...
Ricardo Felipe Klein
klein@gmail.com
On Mon, Jun 3, 2013 at 7:58 AM, Eliezer C
In general tproxy works on:
Fedora (any version 10+)
Centos (5.9+)
Ubuntu (9.10+)
Gentoo (for a very long time)
Debian (5+)
Slax (XX)
etc..
Lots of systems work; you just don't know how to configure them...
What routing settings have you used??
Take a look at this script and check that the needed modules exist.
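The routing side usually looks something like the sketch below (a hedged example of the common Linux TPROXY setup; the squid port 3129 and firewall mark 1 are assumptions, adjust them to your own config):

```shell
#!/bin/sh
# Route packets carrying mark 1 to the local machine (TPROXY requirement).
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Mark packets belonging to existing proxy sockets, then intercept new
# port-80 flows into squid's tproxy port (3129 here, an assumption).
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
  --tproxy-mark 0x1/0x1 --on-port 3129
```

These commands need root and the xt_TPROXY/xt_socket kernel modules loaded, which is what the module check in the script is about.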
Sorry Satish Thareja,
This post is outdated since Squid is no longer at 2.5/2.6 but at 3.3.
If you share more lines from your squid.conf we can try to help you.
If you can share the access.log we can try to understand.
Please share IPs etc..
If you are getting a 403 it means that the server is rejecting you
Can you pastebin the script here:
http://www1.ngtech.co.il/paste/
Just to put it to the eyes of the public..
Eliezer
On 6/5/2013 2:48 AM, Ricardo Klein wrote:
I think the problem is that the squid init script from fedora/centos has
a shorter timeout than needed and fails to kill all squid processes (and
On 6/5/2013 3:06 PM, csn233 wrote:
On Sun, Jun 2, 2013 at 3:04 AM, Eliezer Croitoru wrote:
Try this setup and let me know the results.
Eliezer
Thanks for your reply. Well, looks like it wasn't the setup, but the
fact that l have introduced the problem by changing the helper and not
del
On 6/6/2013 4:25 PM, csn233 wrote:
On Wed, Jun 5, 2013 at 11:57 PM, Eliezer Croitoru wrote:
StoreID is not directly related to cache_dir, so it's not supposed to affect
a dirty cache_dir.
StoreID uses the output from the helper as the key to store the cached
copy in cache_dir, does it not?
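Yes. A minimal sketch of such a helper follows (the line protocol assumed here and the YouTube pattern are illustrations of the idea, not the authors' actual helper; verify both against your Squid version's documentation):

```python
#!/usr/bin/env python3
# Minimal Store-ID helper sketch. The assumed line protocol
# ("[channel-ID] URL ..." in, "[channel-ID] OK store-id=..." / "ERR" out)
# and the YouTube URL pattern are illustrations only.
import re
import sys

# Hypothetical normalization: videoplayback URLs that differ only in the
# CDN hostname collapse onto one key derived from the video id.
YT = re.compile(r'^https?://[^/]+\.youtube\.com/videoplayback\?.*\bid=([\w-]+)')

def store_id(url):
    """Return a canonical store key for url, or None to leave it unchanged."""
    m = YT.match(url)
    if m:
        return 'http://video-srv.youtube.com.SQUIDINTERNAL/id=' + m.group(1)
    return None

def main():
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        # With helper concurrency enabled the first token is a channel-ID.
        if parts[0].isdigit() and len(parts) > 1:
            channel, url = parts[0] + ' ', parts[1]
        else:
            channel, url = '', parts[0]
        key = store_id(url)
        sys.stdout.write('%sOK store-id=%s\n' % (channel, key) if key
                         else '%sERR\n' % channel)
        sys.stdout.flush()

# As a Squid helper the script would end with:
#   if __name__ == '__main__':
#       main()
```

Squid then uses the returned store-id, not the request URL, as the cache key, so the same object fetched from different CDN hostnames lands on one cached copy.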
On 6/7/2013 5:11 PM, Amos Jeffries wrote:
In the cases where I needed something related to DNS, I used a customized
external helper for the specific task.
Squid has been using the internal DNS resolver for too long for me to
remember there ever was an external one.
Eliezer
On 6/9/2013 9:08 AM, babajaga wrote:
Amos,
One subtle and noteworthy difference between Squid-2 and Squid-3 which is
highlighted by this feature is that refresh_pattern applies its regex
argument against the Store ID key and not the transaction URL. So using the
Store-ID feature to alter the v
I have seen http://www.squidblacklist.org/ which is a very nice idea, but
I am wondering if squid.conf and other squid products are the right
choice for every place.
For a mission critical proxy server you will need to prevent any
"reload" of the proxy which can cause a *small* download corruption
On 6/9/2013 3:52 PM, Marcus Kool wrote:
I do not understand the performance figure. Can you give more details ?
Best regards,
Marcus
Yes indeed.
The performance of an ICAP service is on another level from a helper's,
since it has concurrency capability built in.
One request doesn't affect another on
On 6/9/2013 6:59 PM, Alex Rousskov wrote:
On 06/09/2013 03:29 AM, Eliezer Croitoru wrote:
Would you prefer a filtering based on a reload or a persistent DB like
mongoDB or tokyo tyrant?
I would prefer to improve Squid so that reconfiguration has no
disrupting effects on traffic, eliminating
On 6/9/2013 8:28 PM, Squidblacklist wrote:
On Sun, 09 Jun 2013 20:05:53 +0300
Eliezer Croitoru wrote:
On 6/9/2013 9:30 PM, Alex Rousskov wrote:
On 06/09/2013 10:34 AM, Marcus Kool wrote:
And yes, improving Squid to have no service disruption during a reconfigure
would be a great feature.
Are you aiming at "minimise service disruption window" or going for
"never disrupt service" (unless a very importa
What version of squid?
What is the output of "squid -v"?
Eliezer
On 6/10/2013 7:44 PM, Philip Munaawa wrote:
Hi,
I have configured squid with a rock cache_dir & wccp. When I use
wccp2_rebuild_wait on, squid does not send out 'HERE IAM' messages to
the router.
It appears squid is waiting for t
Then turn the rebuild wait off and see the results.
If it works, great; if not, we can think about what and how, etc.
Eliezer
Try to kill any squid processes that might still be running in the background.
Also, what OS are you running squid on?
Self-compiled or from a repository?
Eliezer
On 6/10/2013 8:24 PM, Philip Munaawa wrote:
Hi,
I have configured squid 3.3.5 with wccp deployment mode.
If I disable wccp (no ip wccp web-cache) on the
Please share these solutions with us..
I was working on a KV DB using Tokyo Cabinet, Tokyo tyrant, MongoDB,
Redis and more.
If you have something that already exists and can be used, I will be happy
to leave this job to the pros.
Eliezer
On 6/11/2013 12:03 AM, Jose-Marcio Martins wrote:
Wel
On 6/11/2013 10:56 AM, Alex Domoradov wrote:
Amos, any idea?
Just asking:
do you understand the difference between a TCP_HIT and a
TCP_REFRESH_UNMODIFIED??
Looking at the wiki again it seems like it would be helpful just to see
this specific section:
http://wiki.squid-cache.org/SquidFaq/Squid
On 6/11/2013 3:36 PM, Marcus Kool wrote:
On 06/11/2013 09:09 AM, Jose-Marcio Martins wrote:
On 06/11/2013 12:50 PM, Marcus Kool wrote:
There is a big misunderstanding:
in the old days when the only URL filter was squidguard, Squid had
to be reloaded in order for
squidguard to reload its d
On what OS?
Also, what is the output of ulimit -Ha and ulimit -Sa?
Eliezer
On 6/11/2013 6:32 PM, Mike Mitchell wrote:
I dropped the cache size to 150 GB instead of 300 GB. Cached object count
dropped
from ~7 million to ~3.5 million. After a week I saw one occurrence of the same
proble
On 6/11/2013 8:42 PM, Marcus Kool wrote:
On 06/11/2013 11:57 AM, Jose-Marcio Martins wrote:
You should use "degraded service" instead of "interruption of service".
In the first part of this thread the discussion was about interruption
of service of the web proxy.
With ufdbGuard as URL filt
On 6/11/2013 11:24 PM, Guillermo Javier Nardoni - Grupo GERYON wrote:
Hello everyone,
We have this situation and we tried a lot of configurations without success.
• 1000 Customers
• 4 Caches BOX running Squid 2.7 on Debian Squeeze
• Caches are full-meshed to each other
• Every Squid is running
On 6/12/2013 1:30 AM, Alex Rousskov wrote:
On 06/11/2013 02:49 PM, Eliezer Croitoru wrote:
There is a small "bug": when StoreID is being used, the proxy asks the
sibling only for the StoreID URL in its ICP requests.
If you ask me, I think that it should work this way
No, it
On 6/12/2013 12:43 AM, Matthew Ceroni wrote:
We are running squid as our primary proxy here at my office.
What I am noticing is that connectivity is fine but every now and then
the browser sits with "Sending request". If I hop on the proxy and
view the access log I don't see it logging my reque
On 6/12/2013 11:28 AM, Amos Jeffries wrote:
On 12/06/2013 3:42 p.m., Chris Bennett wrote:
I'm using squid HEAD published by Eliezer in his repo. I think I've
stumbled upon bug 3806 in HEAD. While examining why a particular URL
wasn't returning a HIT, I can't work out why it isn't being served
Hey Amos,
I am unsure about one thing.
in a case of carp array the related documents are:
-
http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+10.+Talking+to+Other+Squids/10.9+Cache+Array+Routing+Protocol/
- http://docs.huihoo.com/gnu_linux/squid/html/x2398.html
-
On 6/12/2013 1:27 PM, Amos Jeffries wrote:
On 12/06/2013 9:18 p.m., Eliezer Croitoru wrote:
On 6/12/2013 11:28 AM, Amos Jeffries wrote:
Who was the one that worked on vary code in the past?
Henrik mainly, and maybe the guys who did HTCP. I've dabbled a bit but
my best guess was test
Hey,
There was a bug that is related to LOAD on a server.
Your server is a monster!!
squid 3.1.12 cannot even use the amount of CPU you have on this machine,
as far as I can tell, unless you have a couple of clever
ideas up your sleeve (routing, marking, etc..)
To make sure what the
wreqmod [down,!opt]
2013/06/13 11:09:33.530| essential ICAP service is suspended:
icap://10.122.125.48:1344/wwreqmod [down,susp,fail11]
What does down,!opt or down,susp,fail11 mean?
thanks!
Peter
On Thu, Jun 13, 2013 at 2:41 AM, Eliezer Croitoru wrote:
Hey,
There was a bug that is related
I am unsure, but I am almost sure you need to compile TPROXY support into
the FreeBSD kernel; it does not work out of the box.
I might be imagining it, but this is how it was the last time I tried.
Eliezer
On 6/12/2013 7:36 PM, Georgios Androulidakis wrote:
Hello,
I am trying to use the TPROXY feature in
Hey,
The problem you are having is not directly connected to squid.
It might be connected to something else.
Try a setting of "allow all" and also show us the logs from access.log,
and check the cache.log to make sure there is no problem with anything else.
If it's the ntlm or any other netwo
On 6/15/2013 8:57 PM, Bilal J.Mahdi wrote:
Dear all
Which OS is better for squid.
Debian 7 or UBUNTU 10.04 ??
Best Regards ~ Bilal J.Mahdi
Ubuntu 10.04.
If you can use 12.04, go for it.
Eliezer
On 06/15/2013 06:38 PM, Amos Jeffries wrote:
On 16/06/2013 3:34 a.m., CACook wrote:
On the topic of anonymity and help with anonymous proxy configuration;
Sadly it *is* the one topic you are most likely never to get people
openly posting lots of details about. The ones who know most are
un
On 06/17/2013 12:01 PM, Nuno Fernandes wrote:
When I send traffic that I expect to be intercepted to Squid, I get
the following errors in the log file (and a TCP RST from squid):
ERROR: No forward-proxy ports configured
NF getsockopt(SO_ORIGINAL_DST) failed on local=10.174.14.75:80
remote=107.3.
On 06/17/2013 09:49 PM, Beto Moreno wrote:
Hi.
Is posible to allow to access facebook but just to our company page?
Thanks!!!
Possible: yes.
Worth it: no.
You are better off using some existing API on a dedicated interface that
allows only specific access to a specific facebook API.
but what do y
if is possible to open FB but just to the
company page.
This is why this came into my mind.
On Mon, Jun 17, 2013 at 1:04 PM, Eliezer Croitoru wrote:
I took this from the ACL docs:
acl aclname rep_mime_type [-i] mime-type ...
# regex match against the mime type of the reply received by
# squid. Can be used to detect file download or some
# types HTTP tunneling requests. [fast]
# NOTE: This has no
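A minimal sketch of how that ACL might be used in squid.conf (the ACL name and MIME patterns are made-up examples; per the note in the docs it only works on the reply path, i.e. in http_reply_access rules, not http_access):

```
# Deny replies that carry an executable-looking Content-Type
acl exe_reply rep_mime_type -i ^application/octet-stream$ ^application/x-msdownload$
http_reply_access deny exe_reply
http_reply_access allow all
```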
On 06/19/2013 01:27 PM, Amos Jeffries wrote:
On 19/06/2013 8:33 p.m., Eliezer Croitoru wrote:
i took this from the acls docs:
acl aclname rep_mime_type [-i] mime-type ...
# regex match against the mime type of the reply received by
# squid. Can be used to detect file download or
On 06/20/2013 11:54 AM, babajaga wrote:
Yes.
Assuming, you have different store_dir, squid.conf etc.
I have 8 squid2.7 running on one server.
However, I copied the squid2.7 binary to 8 different binaries. Do not know,
whether this really is necessary or not.
I did not copy the helper binaries, l
On what browser do you see this problem?
Eliezer
On 06/20/2013 01:13 PM, Tom Tom wrote:
Hi Amos
I can actually see this abandoning-messages just for CONNECT-Requests
(and as much as I have investigated -> only for these).
Sometimes, there is a 407 before, and sometimes there is just a
CONNECT
ETag??? Vary??
There was some small talk about a bug in the Vary handling, but I am unsure
how it was tested.
Can we test this thing??
Eliezer
On 06/20/2013 08:17 PM, Paul Browne wrote:
Hi Amos,
I tried with a GMT setting and unfortunately it still does not cache as you can
see below:
curl -vvv -
debug_options ALL,2
should give the needed information from cache.log.
Eliezer
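For reference, a hedged squid.conf sketch (raising only one debug section is also possible; section 20 being the storage manager is my recollection of squid's debug-sections list, so verify it for your version):

```
# Moderate verbosity for everything:
debug_options ALL,2
# Or keep the noise down and raise only one section, e.g. storage:
debug_options ALL,1 20,5
```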
On 06/21/2013 03:28 AM, Amos Jeffries wrote:
On 21/06/2013 8:48 a.m., Eliezer Croitoru wrote:
ETAG???Vary??
There was a small talk about a bug in the vary thing But I am unsure
how it was tested.
Can we test this
What version of squid??
What is the output of 'squid -v'?
Also, please tell us what the purpose of the service is:
is this service for ssl-bumping or just an ssl reverse proxy?
There should be something in the cache.log if there is a problem
binding a port.
This is for squid 3.1
http://wiki.squid-cache.org/Feat
I have seen things about RealTime computer systems.
One of the finest descriptions I have seen from: http://www.ecrts.org/ is:
The technical committee on real-time systems promotes state-of-the-art
research and research for applications with temporal constraints,
real-time systems. Such computin
On 06/21/2013 03:43 PM, babajaga wrote:
Squid is 100% one of the systems that tries and succeed on these
specific tasks. <
A bit too optimistic. I did a lot of assembler programming (incl. device
drivers for special HW) for RT-systems (16bit/32bit), using OS, which were
especially designed to ha
On 06/21/2013 02:45 PM, Daniele Segato wrote:
Hi,
is there a way to clear the cache without stopping / rebooting squid?
I usually stop squid, remove the caches from the filesystem (rm -rf
/path/where/the/cache/is), restart squid.
Is there a way to clear that cache without the restart?
Thanks,
On 06/21/2013 10:18 PM, babajaga wrote:
Depends on what the definition of RT is...
RT should be something reliable for a human to use in realtime
What do you think? <
You are talking about "Online Systems" with response times, considered to be
"immediate" for human beings. Which is in the
So now RT is a pretty regular, everyday thing.
Also, people do not try to maximize CPU time but to give more
within their CPU time.
It's like "I have 3M cycles, just use them"; if you are efficient you
can indeed use 2M cycles and it will do whatever the task is.
So there are Mission Critical S
Hey TLS,
There is a solution for that, of course.
You should use some external_acl helper that will check each and every one
of the parent proxies with a test like "www.example.com",
where the external_acl should receive some specific data that
will not be served from cache...
take a look at
tions move from one backend to the other.
My basic question is: "why would you ever need to rm -rf the cache_dir??"
What version of squid are you using that gives you corruption of the
cache_dir or the DB?
Eliezer
On 06/25/2013 04:47 PM, Daniele Segato wrote:
On 06/21/2013 03:26 PM, El
mos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 21 June 2013 01:28
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid icap respmod is not caching
On 21/06/2013 8:48 a.m., Eliezer Croitoru wrote:
ETAG???Vary??
There was a small talk about a bug in the vary thing But I am unsure
how it
I added a couple of new patterns to my helper examples
at: http://www1.ngtech.co.il/paste/1015/
I added google apps binary caching.
It's not 100%, and there is also a small "bug" in it, so feel free to fix it.
If you see that one of the sites is down, it's ok; that's what happens to
sites, they go down..
The basics...
Good hardware..
If you have good hardware there is not much you need to tune.
How many users?
Has anyone had the chance to compare a MIPS vs INTEL GB lan card?
Eliezer
On 06/26/2013 11:41 AM, guzzzi wrote:
Hallo,
im restarting to set up a Squid Proxy with Dansguardian + NTLM. At the
Hey,
Consider what Amos suggested... using Kerberos rather than NTLM.
From what I understand this machine can easily take the load with basic
settings.
Dansguardian is another story...
It's a more in-depth proxy which will consume more CPU and will do a
thing or two more than squidguard.
The only
I would suggest you read this:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm
Not just because you need to read it, but because there are options you
don't use, which makes it a bit weird,
like:
auth_param ntlm use_ntlm_negotiate on
It should work for you.
also a nice reference is h
Is this your first time with squid?
I would happily try to redirect you to another thread, but if you
prefer, I would gladly guide you through the difference between a forward
proxy and an intercept/transparent proxy.
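The port modes that mark that difference can be sketched in squid.conf (3.x syntax; the port numbers are arbitrary examples):

```
http_port 3128                 # forward proxy: browsers are configured to use it
http_port 3129 intercept       # NAT interception: firewall redirects port 80 here
http_port 3130 tproxy          # TPROXY: fully transparent, keeps the client IP
```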
It's very simple:
you have managed to run squid but never understood what it does.
How did you set up the transparent proxy?
Do you know you need ssl-bump in order to block https traffic?
Also, it's limited only to specified ports..
Eliezer
On 06/27/2013 09:29 PM, javed_samtiah wrote:
Hi, I have configured transparent proxy with squid 3.3.5 stable edition. I am
unable to bloc
On 06/25/2013 08:51 AM, Reid Watson wrote:
Hi,
We seem to be having a strange issue with SQUID caching an object once it has
been purged - this issue does not occur with all objects within the cache..
Quick Overview
- Four Apache Server with Squid Installed
- RedHat 5.7 i386
- squid-3.1.8-1.e
here is a patch
required to resolve this problem, could you please repost it again in
response to this message?
My openssl packages are both versioned 1.0.0-27.el6.4.2.x86_64, the same
version Chris reported in another post in this thread.
Thanks!
Peter
On 05/21/2013 10:28 AM, Eliezer Croitoru
Well, it still works for me...
You can try other scripts that are on the net.
Eliezer
On 07/01/2013 09:03 PM, normunds wrote:
hi el
I tested your script with squid3.head . Youtube is not being cached.
I am working on a nice CDN thingy, and it seems like squid doesn't like to
cache a specific file; I am unsure of the source of the situation.
http://image.slidesharecdn.com/rhintrotoglusterfsoct11final-111028123627-phpapp01/95/slide-1-728.jpg
The above picture should be valid for a very lo
ng 2
bytes for '23B82F82AA4091F88B08F16AAD124879'
Which indicates that the public key is being updated by Vary headers,
which looks a bit odd..
What is this??
accept-encoding="gzip,%20deflate"
Thanks,
Eliezer
On 07/03/2013 10:21 AM, Eliezer Croitoru wrote:
I am working on a ni
Hey Markus,
What is the problem you are facing again?
Are you trying to detect whether the proxy is working or not??
Do you want 4 proxies to be in fail-over mode??? For HA??
I would imagine you have a PC with 4 proxies as cache_peers, and you
would like to not have an option for a connectio
Hey Markus,
"Why" is a very hard question without all the cards laid out in front of me.
I will not tell you what has changed from 2.7 to 3.2, since almost
everybody knows that 2.7=C, 3.2=C/C++, and some other stuff.
But I can understand your problem and I can offer you a couple of things.
First use my
sable Pragma Headers without modifying the request using ICAP?
Eliezer
On 07/03/2013 03:56 PM, Amos Jeffries wrote:
On 3/07/2013 7:21 p.m., Eliezer Croitoru wrote:
I am working on a nice CDN thingy and it seems like squid dosn't like
to cache a specific file which I am unsure what the sourc
On 07/01/2013 08:32 AM, neeraj kharbanda wrote:
hi
store-id.pl available at
https://tempat-sampah.googlecode.com/svn/store-id.pl
doesnt cache youtube contents on squid3.HEAD 12839
--
Well, I am sorry it doesn't work for you, but the basic script is actually
using an output syntax which uses more
Hey,
Squid does use URL regex in every form there is.
The only problem is that SSL connections are transported as they
are, without decryption.
If you do want to filter URLs, there are options such as squidguard as a
helper for squid.
All the squid-related pieces that you need are SSLBUMP with dy
try this if you want to try something new.
https://github.com/elico/squid-helpers/blob/master/squid_helpers/store-id.pl
Eliezer
On 07/07/2013 03:50 PM, Eliezer Croitoru wrote:
> On 07/01/2013 08:32 AM, neeraj kharbanda wrote:
>> hi
>>
>> store-id.pl available
Happy for the responses.
On 07/08/2013 09:34 AM, Alan wrote:
> On Mon, Jul 8, 2013 at 6:25 AM, Eliezer Croitoru wrote:
>> try this if you want to try something new.
>> https://github.com/elico/squid-helpers/blob/master/squid_helpers/store-id.pl
>>
>> Eliezer
>
I am proud to release the new RPM for squid version 3.3.6 at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/
The package includes 3 RPMs: one for the squid core and helpers, another
for debugging, and the third for the init script.
http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-3.3.6-1.el6.x8
e to be
served from cache or at least to be treated with 304 towards the server
and still it stays as a full regular request and then as a TCP_MISS.
If there is a need to file a bug just let me know and I will file it
with all the details.
If not please help me make sure I understood right the logi
On 07/08/2013 09:34 AM, Alan wrote:
> On Mon, Jul 8, 2013 at 6:25 AM, Eliezer Croitoru wrote:
>> try this if you want to try something new.
>> https://github.com/elico/squid-helpers/blob/master/squid_helpers/store-id.pl
>>
>> Eliezer
Hey Alan,
I have just updated the
On 07/09/2013 07:32 AM, Amos Jeffries wrote:
> On 8/07/2013 6:34 p.m., Alan wrote:
>> On Mon, Jul 8, 2013 at 6:25 AM, Eliezer Croitoru
>> wrote:
>>> try this if you want to try something new.
>>> https://github.com/elico/squid-helpers/blob/master/squid_helper
Thanks Amos,
I am very happy there is a helper which is more plural than my ruby helper.
I do understand the differences between ruby and other languages.
I think that, compared to all the helpers that already existed, the one
that I wrote gives a great example of how we can describe things in a
way
On 07/10/2013 05:54 PM, Nishant Sharma wrote:
> Hi,
>
> I have two parent proxies configured. Parent 1 is on a faster link while
> Parent 2 is on a DSL.
>
> Squid 3.1.20 is the child proxy while Parent proxies are 3.1.6.
>
> I have some domains which need higher priority and should be failed-ove
On 07/11/2013 07:22 AM, Amos Jeffries wrote:
> On 11/07/2013 2:30 p.m., Amos Jeffries wrote:
>> On 11/07/2013 12:59 a.m., aasto...@inwind.it wrote:
>>> Hello,
>>> I have some problem with encoded URL, like this
>>>
>>> http://www.xyz.net/download.php/235507/%5BTwistys%5D%202013-07-10%20Teal%
>>>
>>
Hey,
Where did you get the 3.3.6? Was it self-compiled??
I want to enhance my RPM to support SMP by default.
What are the options now?
Create a directory and change ownership to the proxy user? (Amos..)
I will probably add it to the 3.3.7 RPM next week.
Eliezer
On 07/11/2013 10:23 AM, x-man
I have been testing some URLs for cachability for quite some time.
It seems like there are different methods of requesting the same file,
which lead to different reactions in squid, and I want to be 100% sure of
the cause of the *problem* before I run to a conclusion, since
I am not 100% sure.
>>>
>>> Is it really still not possible to compile 3.3.5 with --enable-ssl-crtd
>>> on a RedHat or CentOS platform without having to patch the source code?
>>> I had hoped that upgrading to 6.4 would solve this problem, but that
>>> does not seem t
On 07/11/2013 05:03 PM, Amos Jeffries wrote:
> On 11/07/2013 11:40 p.m., Eliezer Croitoru wrote:
>> I have been testing quite some time some urls for cachability.
>> It seems like there are different methods to request the same file which
>> leads to different reaction in squ
On 07/11/2013 07:44 PM, Amos Jeffries wrote:
> On 12/07/2013 3:20 a.m., Eliezer Croitoru wrote:
>> On 07/11/2013 05:03 PM, Amos Jeffries wrote:
>>> On 11/07/2013 11:40 p.m., Eliezer Croitoru wrote:
>>>> I have been testing quite some time some urls for cachabili
Hey there,
You are not updated yet, but there is a feature in squid 3.HEAD
called StoreID that allows youtube caching.
In order to cache YT and other things on squid and to de-duplicate
content you can use the following doc:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
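For reference, wiring a Store-ID helper into squid.conf looks roughly like this (a sketch only: the helper path and ACL domains are made-up examples, and the directive names are from the 3.HEAD/3.4 feature, so check your version's documentation):

```
acl yt_domains dstdomain .youtube.com .googlevideo.com
store_id_program /usr/local/bin/store-id.pl
store_id_children 10 startup=2 idle=1 concurrency=0
store_id_access allow yt_domains
store_id_access deny all
```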
Squid 3.3.7 is out, and there was a new leak that was fixed and might have
caused the problem you are referring to.
If you have used my RPM, there is an update to 3.3.6 which does not include
the latest patches; a 3.3.7 with all the patches will probably be out
next week since it builds fine.
What version
m the
> source packages into the extracted archives and adjusting accordingly
> (i.e. modifying Changelog and deleting old patches). I tried wheezy, but
> the OpenSSL 1.0.1 horribly breaks *loads* of sites when using SSLBump.
>
> Cheers
>
> Alex
>
>
>
> On 11/07/13 2
When you compile, a squid.conf.default file is created
in the installation directory.
Just make sure the squid.conf.* files are renamed or removed from the
etc directory.
Eliezer
On 07/12/2013 11:26 AM, x-man wrote:
> Ye, it's a self compiled .. that's why I got some issues :)
Hey,
Can you please share more info from the cache.log??
Once we have that we can understand more.
If you can get a more verbose log using debug_options ALL,2 or ALL,3 it
will be much more helpful than a single line.
If you are up to help debug it please refer to:
http://wiki.squid-cache.org/Squi
Network or kernel level.
You can try to look at the MTU and other stuff,
to make sure there is no DNS interception, etc.
To me it seems like a network issue.
Eliezer
On 07/12/2013 10:06 PM, Grant wrote:
>>> Sorry to top-post. Any more ideas with this?
>>
>> No sorry. Everything else that usually
If it's a CONNECT that squid recognizes, it seems to me that whatever
happens is on the network level.
Squid 2.7 is old but handles CONNECT pretty easily, as far as I know.
Can you share more about the scenario and the network so we can actually
try to think and help you?
Eliezer
On 07/13/2013 01:03 AM, Squid
The RPM will be available after a couple of tests.
The RPM includes all the ssl-bump helpers needed.
You might want to take a look at:
http://wiki.squid-cache.org/Features/DynamicSslCert
which gives you almost everything.
The only difference is that the needed file is at lib64/squid
or something else.
The
If you can share your squid.conf,
we can try to give you a hand with squid -k parse.
In case it's an intercept proxy, stop the interception, make sure the
settings are fine, and go on..
The version 3.3.8 RPM will be out soon, so you could wait a bit for it to
get out.
Eliezer
On 07/13/2013 11:11 PM
On 07/15/2013 06:14 PM, Grant wrote:
>> Network or kernel level.
>> > You can try to look at the MTU and other stuff.
>> > to make sure there is no DNS interception etc.
>> > For me it seems like a network issue.
> If it's a network issue, could you be any more specific as far as what
> to look for
Hey,
If you insist on serving the local port 80 from the same server, I would
say you need to make sure the servers are listening on the correct port
using:
netstat -ntlp
what is the output??
Eliezer
On 07/17/2013 12:00 AM, jc.yin wrote:
> I'm not sure if what I've done is correct but I've tried
Hey,
I would say if you want a replica of the AD data use LDAP and not
anything else.
I havn't tried samba4 yet So I cant tell you a thing about it and what
it suppose to do.
If you have some description of samba4 replication thing you just told
me about I would be happy to hear more about it.
Wi
On 07/15/2013 11:32 PM, Grant wrote:
Network or kernel level.
> You can try to look at the MTU and other stuff.
> to make sure there is no DNS interception etc.
> For me it seems like a network issue.
>>> If it's a network issue, could you be any more specific as far as what
>>> to