On Mar 26, 2008, at 3:06 PM, Henrik Nordstrom wrote:
On Tue, 2008-03-25 at 18:13 -0700, Ric wrote:
Even then you have the same problem. A public response is a cache hit
even if the request carries authentication.
Umm... only if it contains a public cache control token. That's the
hi all,
i want to allow one user (192.168.1.10) to be able to access ONLY the website
domain.com at any time, but on Friday 12pm to 2pm, i want to allow
all users (including 192.168.1.1) to be able to access all websites.
can i set it with the settings below, and will it work?
squid.conf
acl
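For reference, a squid.conf sketch for this kind of rule could look like the following (the acl names and the /24 network are illustrative; squid evaluates http_access rules top-down, so the time-based rule goes first):

```
# everyone may browse freely on Friday 12:00-14:00
acl allusers  src 192.168.1.0/24
acl fri_lunch time F 12:00-14:00
http_access allow allusers fri_lunch

# otherwise 192.168.1.10 may reach only domain.com
acl oneuser  src 192.168.1.10
acl onlysite dstdomain .domain.com
http_access allow oneuser onlysite

http_access deny all
```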
hi all,
sorry ... typo error ... what i meant is: on Friday 12pm to 2pm, i want to
allow all users (including 192.168.1.10) to be able to access all websites.
Regards,
Kenny
- Original Message -
From: Kenny Lee [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Thursday, March 27,
thanks for your reply
1. the version i used is 2.6.STABLE19
$ squid/sbin/squid -v
Squid Cache: Version 2.6.STABLE19
2. the os is red hat enterprise edition 4 update 4, and the file
system of the cache dir is ext3; the cache_dir is coss:
the cache_dir line in squid.conf:
cache_dir coss /cache/coss 8000
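For reference, the coss cache_dir type in squid 2.6 accepts tuning options beyond the size; a sketch (the values here are illustrative, not a recommendation for this setup):

```
cache_dir coss /cache/coss 8000 max-size=131072 block-size=512 membufs=15
```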
Hi everyone. This is the first time I have sent an e-mail to this group.
I'm sorry that my English is poor, so if you can't understand me, I
hope you will ask and I will be happy to explain.
Now I will describe my question.
After I edit my squid.conf, when I run the command /etc/init.d/squid
stop, it always
it should probably be:
cache_mem 150 MB
(note the space between the 150 and the 'MB')
On 3/27/08, lei li [EMAIL PROTECTED] wrote:
Hi everyone. This is the first time I have sent an e-mail to this group.
I'm sorry that my English is poor, so if you can't understand me, I
hope you will ask and I will
Hi, I've done as Chris suggests and used
form action="http://192.168.60.254/cgi-bin/auth.cgi"
name="login"
with the following result:
http://tinypic.com/view.php?pic=2cz6b87&s=3
As you can see the root "http://192.168.60.254" has been removed and
squid is reporting an error because /cgi-bin/auth.cgi
Dear all,
I'm running several instances of squid on my university CAN and have
recently discovered some annoying things about the cache, so I kindly
ask for a little help with this.
Every time the squid process is cleanly stopped and started again, it
finds the cache dirty and starts to rebuild
As I've said before, COSS needs some fixes to rebuild by a method other than
reading the whole disk in at once.
If someone wants to do it, let me know. If someone would like to help sponsor
me for the month or so it would take, then please let me know.
If a customer of mine decides they want it, I'll do
Seems reasonable.. nothing beats just trying it out tho... :)
Kinkie
On Thu, Mar 27, 2008 at 8:14 AM, Kenny Lee [EMAIL PROTECTED] wrote:
hi all,
sorry ... typo error ... what i meant is: on Friday 12pm to 2pm, i want to
allow all users (including 192.168.1.10) to be able to access all websites.
Thanks for quick reply, Adrian. As of myself, I'm not much into fixing stuff.
If more people are already interested in this functionality, you
should consider doing a fundraiser or something :)
thanks again and regards,
On Thu, Mar 27, 2008 at 12:30 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
On Thu, Mar 27, 2008, Simonas Kareiva wrote:
Thanks for quick reply, Adrian. As of myself, I'm not much into fixing stuff.
If more people are already interested in this functionality, you
should consider doing a fundraiser or something :)
There's no need for a fundraiser; I started a company
Hi everyone,
I am having a problem making Windows Update work through my squid
proxy. All the websites are working fine, even the HTTPS websites, but
when I try to use Windows Update this message appears (
http://support.microsoft.com/kb/817144 ):
Thank you for your interest in obtaining
Hi,
have you already tested the solution on the squid FAQ page?
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
regards,
Daniel
Yes, I've got the solution for Windows Update in my squid.conf.
What happens is not that the computer can't access the Windows Update
websites; it's just that it doesn't accept the OS (I think).
Daniel Becker wrote:
Hi,
have you already tested the solution on the squid FAQ page?
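For context, the FAQ fix being referred to generally revolves around letting squid fetch whole objects for Windows Update's ranged requests; a sketch of the usual directives (values are illustrative):

```
# fetch the full object even when the client asks for a byte range
range_offset_limit -1
# allow the large update files to be cached
maximum_object_size 200 MB
# keep downloading aborted transfers so the object completes
quick_abort_min -1 KB
```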
On Thu, Mar 27, 2008, Jose Noto wrote:
Hi everyone,
I am having a problem making Windows Update work through my squid
proxy. All the websites are working fine, even the HTTPS websites, but
when I try to use Windows Update this message appears (
http://support.microsoft.com/kb/817144 ):
Guillaume Chartrand wrote:
Have you solved the forwarding loop?
Nope. If I understand correctly, the loop is when my squid box itself tries
to go to the web and the router redirects it back to itself. Is that what
it means?
That's how I interpret it.
If so, I will try to modify my ACL on my router to not
I've setup squidGuard and it works pretty well. What I would like to
do is to have squidGuard log when somebody tries to go to a specific
targetgroup but allow them access rather than doing a redirect.
I can only seem to get it to either log and block access or allow
access but not log.
Ooops... the acl should be
acl {
default {
pass Pornography Warez all
redirect http://cache1.server/cgi-bin/squidGuard.cgi?url=%u
}
}
It still doesn't do what I want though
Quoting Dennis B. Hopp [EMAIL PROTECTED]:
I've setup
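One possibility, sketched here on the assumption that squidGuard's `log` directive inside a destination group records matches even when the request is ultimately passed: attach the log to the dest group and pass that group explicitly, so it is still evaluated:

```
dest Pornography {
    domainlist Pornography/domains
    log        pornaccess.log
}

acl {
    default {
        # evaluating Pornography triggers its log; "all" then passes the rest
        pass Pornography all
    }
}
```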
Quoting Dennis B. Hopp [EMAIL PROTECTED]:
I've setup squidGuard and it works pretty well. What I would like to
do is to have squidGuard log when somebody tries to go to a specific
targetgroup but allow them access rather than doing a redirect.
I can only seem to get it to either log and
Dennis,
A negation (!) is needed if you want Pornography NOT to pass.
The pass line should be:
pass !Pornography !Warez all
-Marcus
PS: if you do not block proxies, users still have access to all pornography
Dennis B. Hopp wrote:
Ooops... the acl should be
acl {
default {
I have an OpenSuse 10.2 box that runs Samba / OpenLDAP as a PDC, as well as
Squid with delay pools to limit bandwidth dependent upon user, group, time
of day and machine. I have managed to get everything working and
authenticating correctly using smb_ldap_auth and smb_ldap_group. However, I
would
Hi All,
I've been running a reverse proxy here for some time, quite successfully.
However now I am looking into blocking certain useragents from accessing
the various sites currently being served by Squid.
I have created a new acl using the following:
acl badbrowsers browser
Quoting Marcus Kool [EMAIL PROTECTED]:
Dennis,
A negation (!) is needed if you want Pornography NOT to pass.
The pass line should be:
pass !Pornography !Warez all
I know that. I was trying to get it to pass but log. Every free
blacklist that I have used seems to use porn as the
Hi All,
Unfortunately my grasp of regular expressions is extremely limited,
verging on non-existent.
Would anyone have an example that I could use to block libwww for example?
Best regards
Robert
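A sketch of what that could look like in squid.conf: the `browser` acl type takes a regular expression matched against the User-Agent header, and `-i` makes it case-insensitive. The exact patterns below are illustrative (libwww-perl sends a User-Agent beginning with "libwww"):

```
acl badbrowsers browser -i ^libwww ^wget ^curl
http_access deny badbrowsers
```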
Hi,
I hate replying to my own posts. It always seems like, having spent a few
hours
Aside from the slight RAID5 performance drawback
and the RAID0 failure case drawbacks, I thought the
main performance issue with any RAID under squid
was with aufs only having a single writer thread,
as compared to giving squid multiple writer
threads if you mount the disks individually.
Of
Is squid -z idempotent?
In other words, is there any issue with a startup script that runs
squid -z before every startup, even if the cache directory has
already been generated in a previous startup?
Ric
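A sketch of a startup script that guards the `-z` step rather than relying on idempotence. The assumption (not from the thread) is that a ufs/aufs cache_dir counts as initialized once its first-level swap directory "00" exists:

```shell
#!/bin/sh
# Sketch: run "squid -z" only when the cache has not been initialized yet.
# Assumption: a ufs/aufs cache_dir is initialized once its first-level
# swap directory "00" exists; this check does not apply to coss.
needs_init() {
    # $1 = cache_dir path; true when the swap directories are missing
    [ ! -d "$1/00" ]
}

CACHE_DIR="${CACHE_DIR:-/var/spool/squid}"
if needs_init "$CACHE_DIR"; then
    echo "would run: squid -z"    # replace echo with the real command
else
    echo "cache already initialized"
fi
```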
On Thu, 2008-03-27 at 00:02 -0700, Ric wrote:
So with either authentication method, the only way to cache a split
view and guarantee that authenticated requests don't get the cached
version is via a Vary header. And excluding the authenticated version
from the cache then just becomes
Paul Bryson wrote:
Kinkie wrote:
This is not a code-writing activity; it's rather about having
experience in one specific usage scenario and being willing to to
share it with others.
Then I will add what I can to the wiki when I get a chance. That really
is the limit of my abilities.
I've
I noticed that Squid doesn't appear to support spaces in path names,
even if the path is quoted.
Are there any plans to enable quoted path names?
Ric
Paul Bryson wrote:
I've added a page with some ideas about creating a Squid install CD.
http://wiki.squid-cache.org/Features/SquidAppliance
How much of this seems realistic for someone to be able to put together?
None of it sounds impossible. I hereby take a step back from the line of
On Mar 27, 2008, at 2:02 PM, Henrik Nordstrom wrote:
On Thu, 2008-03-27 at 00:02 -0700, Ric wrote:
So with either authentication method, the only way to cache a split
view and guarantee that authenticated requests don't get the cached
version is via a Vary header. And excluding the
Tim Bates wrote:
* GUI - definitely not needed. Waste of space. You can do a pseudo-GUI
in text modes anyway (which I would suggest doing for initial config).
That was my thought too.
* I would personally suggest having a live-CD version with no disk cache
if possible. Some people may want
On Tue, 2008-03-25 at 15:07 +, paul cooper wrote:
so is what i want to do actually possible ?
unixlogin emma logged into VT7
unixlogin andrew - VT8
web page request from either - squid requests login
For trusted stations you can make use of the ident service to tell Squid
which user
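A sketch of how that can look in squid.conf (usernames and the local network are illustrative; the trusted stations must run an ident/RFC 1413 daemon for the lookup to return anything):

```
acl localnet src 192.168.0.0/24
ident_lookup_access allow localnet

# match on the username reported by the station's ident daemon
acl emma   ident emma
acl andrew ident andrew
http_access allow emma
http_access allow andrew
http_access deny all
```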
On Thu, 2008-03-27 at 00:03 +, Richard Wall wrote:
I'm not sure how relevant this is to your discussion. I don't know how
RAID0 performance is expected to compare to RAID5.
RAID0 is not a RAID level; it's an administrative performance
tweak. Its performance for Squid is on the
On Thu, 2008-03-27 at 12:21 +1100, Adam Carter wrote:
Following on from my comment above, a single 20gig RAID0 cache_dir is
probably not that much different to two 10gig cache_dirs on single
disks. If using aufs then the RAID0 would only run as a single thread
so that may adversely affect
On Wed, 2008-03-26 at 14:50 +0100, Daniel Becker wrote:
in the access.log of the first proxy it appears correct:
TCP_MISS/000 1557 CONNECT a248.e.akamai.net:443 -
FIRST_UP_PARENT/192.168.100.11
but in the log of the upstream proxy it looks like:
TCP_MISS/404 0 CONNECT http:443 - DIRECT/-
On Wed, 2008-03-26 at 19:55 -0300, Pablo GarcĂa wrote:
Hi, is there any way I can simply ignore the If-Modified-Since header
that comes in the request and always return 200 OK with the content
attached?
Why?
Regards
Henrik
On Thu, 2008-03-27 at 13:49 -0700, Ric wrote:
Is squid -z idempotent?
Depends on the cache_dir type. For aufs/ufs/diskd it is.
Regards
Henrik
On Wed, 2008-03-26 at 10:13 +0200, Dave Coventry wrote:
Chris, regarding the 302 redirection and the use of %s, where can I
find information on this?
http://www.squid-cache.org/Versions/v2/2.6/cfgman/deny_info.html
I've tried:
deny_info 302:http://192.168.60.254/login.html lan
Should
On Thu, 2008-03-27 at 13:47 +, Jose Noto wrote:
I am having a problem making Windows Update work through my squid
proxy. All the websites are working fine, even the HTTPS websites, but
when I try to use Windows Update this message appears (
http://support.microsoft.com/kb/817144 ):
On Thu, 2008-03-27 at 14:38 -0700, Ric wrote:
I noticed that Squid doesn't appear to support spaces in path names,
even if the path is quoted.
Where?
Regards
Henrik
On Tue, 2008-03-25 at 12:24 -0600, troxlinux wrote:
there is not much information. You have it implemented; what OS do you have?
The basic details needed to get going is found in the C-ICAP install
documentation:
http://c-icap.sourceforge.net/install.html
The full details on the possible parameters
On Wed, 2008-03-26 at 11:24 -0300, c0re dumped wrote:
Hello,
Is there a new x-forwarded-for patch to be used on squid3 ?
http://devel.squid-cache.org/projects.html#follow_xff
but it hasn't been updated in quite some time (years), and probably
doesn't work too well with current squid3...
On Mar 27, 2008, at 5:36 PM, Henrik Nordstrom wrote:
On Thu, 2008-03-27 at 14:38 -0700, Ric wrote:
I noticed that Squid doesn't appear to support spaces in path names,
even if the path is quoted.
Where?
pid_filename, a quoted path results in no pid file.
cache_access_log, a quoted
On Mar 27, 2008, at 5:25 PM, Henrik Nordstrom wrote:
On Thu, 2008-03-27 at 13:49 -0700, Ric wrote:
Is squid -z idempotent?
Depends on the cache_dir type. For aufs/ufs/diskd it is.
Okay, so not for coss type then? Thanks, that's good to know.
So in the coss case, what happens if squid
On Thu, Mar 27, 2008, Paul Bryson wrote:
Paul Bryson wrote:
Kinkie wrote:
This is not a code-writing activity; it's rather about having
experience in one specific usage scenario and being willing to to
share it with others.
Then I will add what I can to the wiki when I get a chance. That
On Thu, Mar 27, 2008, Neil Harkins wrote:
As for Squid handling a JBOD single disk failure,
not stacking up more reads on an (assumed) failed
disk would be great, but the process still needs to
be killed to get rid of those that blocked before it
noticed and to replace the disk, right?
yes it works great ... thank you
- Original Message -
From: Kinkie [EMAIL PROTECTED]
To: Kenny Lee [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Thursday, March 27, 2008 7:09 PM
Subject: Re: [squid-users] Access Control (Need Help)
Seems reasonable.. nothing beats
On Thu, Mar 27, 2008 at 1:59 AM, Marcus Kool
[EMAIL PROTECTED] wrote:
snip
Only one cache directory per disk is recommended while you have 4 cache
directories on one file system. Consider dropping 2 COSS cache directories
so that you have 1 COSS and 1 AUFS.
Yep, I understand. Unfortunately
People: on my server box I am using squid as an http accelerator; the
setup is as follows.
The flow of requests from users should be like this:
squid listens on public ip port 80 --- apache (127.0.0.1:80) ---
RewriteRule for apache to --- zope:8080/plonesite
Important NOTE: for the last couple of
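For reference, a squid 2.6 accelerator front-end for a chain like that is usually a pair of directives along these lines (a sketch; the site name is a placeholder for the real public hostname):

```
# accept requests on the public port and treat them as accelerated
http_port 80 accel defaultsite=www.example.com vhost

# forward everything to the local apache as the origin server
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=apache
cache_peer_access apache allow all
```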