Re: [squid-users] Squid 3.5.7, cache_swap_high, bug or not bug ?

2015-08-24 Thread Stakres
Hi Amos,
The patch has been running for 3 days and seems to be working fine.
Can we expect the next Squid build to include the patch ?

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-5-7-cache-swap-high-bug-or-not-bug-tp4672750p4672835.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] refresh_pattern and same objects

2015-08-21 Thread Stakres
Amos,

With this type of config, we'll keep all stale but popular objects in the cache.
I think we need dedicated options:
save_big_file on/off
save_big_file_min_size 128 MB
save_big_file_max_time 1 year

It would be clearer and more precise; can we count on these options soon ?

Bye fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-and-same-objects-tp4672792p4672806.html


Re: [squid-users] refresh_pattern and same objects

2015-08-21 Thread Stakres
Amos,

We do use "cache_replacement_policy heap LFUDA", so it should do the job as
you explained, right ?
If I understand you correctly, we should also use something like
"max_stale 1 year", correct ?
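For reference, a minimal squid.conf sketch of the combination being discussed (illustrative values; assumes Squid 3.5, where both directives exist):

```
# Prefer keeping frequently-used objects (LFUDA weighs hit frequency),
# and allow serving stale objects long after expiry:
cache_replacement_policy heap LFUDA
max_stale 1 year
```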

Thanks in advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-and-same-objects-tp4672792p4672805.html


Re: [squid-users] Squid 3.5.7, cache_swap_high, bug or not bug ?

2015-08-21 Thread Stakres
Amos,

In the meantime, I was thinking about another point:
We know there are at least 2 limits on cache_dirs: the maximum size and the
16+ million entries (filemap bits).
cache_swap_high should take care of both.
Example: trigger if the used cache reaches 95% of the space, or if the
filemap bits reach 95% of the 16+ million entries.

If the used cache hits 100%, Squid may crash a little later, but once Squid
reaches 16777216 entries it crashes immediately and then loops in crashes.

See what I mean ?
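A minimal Python sketch of the suggestion: treat a cache_dir as "near its limit" when either disk usage or filemap usage crosses a threshold (the example numbers come from the cache manager report quoted further down in this archive; the function and threshold are illustrative, not Squid code):

```python
# Sketch: a cache_dir has two hard limits -- disk space and the
# 2^24 filemap-entry cap -- so a watermark check should watch both.

FILEMAP_MAX = 1 << 24  # 16,777,216 entries per cache_dir

def near_limit(used_kb, max_kb, entries, threshold=0.95):
    """True if disk usage or entry count reaches the threshold."""
    return used_kb / max_kb >= threshold or entries / FILEMAP_MAX >= threshold

# Disk side is at ~99.9%, filemap side at ~47%: the disk check fires.
print(near_limit(used_kb=67045256, max_kb=67108864, entries=7962720))  # True
```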

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-5-7-cache-swap-high-bug-or-not-bug-tp4672750p4672803.html


Re: [squid-users] refresh_pattern and same objects

2015-08-21 Thread Stakres
Hi Amos,
Is it possible to have a dedicated option in Squid to keep objects in the
cache when they are regularly used, even after they have expired ?
Cleaning small expired files (<16 KB) is not a problem, but we must keep big
files in the cache if they are often used.
There are many "small" ISPs with 2, 4 or 8 Mbps of bandwidth, and big files
are a problem if they have to be downloaded fresh every month (with a max age
of 1 month).
Keeping a local copy by hand is not the right approach here: there are too
many possible big objects and they are hard to manage. Squid should be able
to do it itself.

I'm thinking of something like this:
save_big_file on/off
save_big_file_min_size 128 MB
save_big_file_max_time 1 year

Would it be something you could implement in Squid ?
I'm sure it would work very well.

bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-and-same-objects-tp4672792p4672802.html


Re: [squid-users] Squid 3.5.7, cache_swap_high, bug or not bug ?

2015-08-21 Thread Stakres
Hi Amos,
Thanks for the explanation.
We'll try to apply the patch and will test it with the customer; I'll keep
you posted ASAP...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-5-7-cache-swap-high-bug-or-not-bug-tp4672750p4672800.html


[squid-users] refresh_pattern by type mime

2015-08-20 Thread Stakres
Hi All,

There is an existing case in Bugzilla
(http://bugs.squid-cache.org/show_bug.cgi?id=1913) about this request, and
it seems a good idea:
refresh_pattern by MIME type

It would be very nice to have this feature in Squid, to define different
min/max times per MIME type.
We could give scripts/HTML/CSS/etc. a short lifetime, and
images/video/audio/applications/etc. a long one...

Squid team, what is your opinion on that ?
Is it maybe already on the roadmap for the next 3.5.x build, or for 4.x ?
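In the meantime, since refresh_pattern matches the URL with a regex, the closest approximation is to key off file extensions (a sketch with illustrative min/percent/max values; extensions only approximate the real MIME type):

```
# short lifetime for page furniture, long for media
# (format: regex  min-minutes  percent  max-minutes)
refresh_pattern -i \.(js|css|html?)$           0     20%   1440
refresh_pattern -i \.(jpe?g|png|gif|mp3|mp4)$  10080 90%   43200
```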

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-by-type-mime-tp4672793.html


[squid-users] refresh_pattern and same objects

2015-08-20 Thread Stakres
Hi All,

Maybe someone already has the info...
With a refresh_pattern of 1 week max: if the same object is "visited" (served
from the Squid cache) every day, will the object be deleted 1 week after it
first entered the cache, or will Squid add +1 week each time the object is
served from the cache ?

My issue: if we cache a big object (Windows Update, Chrome, etc.) for 1 week
or 6 months, do we have to download it again once the initial time is over ?
Or can we expect the same big object to stay available from the cache for a
very long time, as long as it is visited at least once before the time limit
runs out ?

A Windows 10 update is an object of about 2.6 GB; if the max time is 1 month,
I don't want to re-download that size monthly when it's used daily...

See what I mean ?

Bye fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-and-same-objects-tp4672792.html


Re: [squid-users] Squid 3.5.7, cache_swap_high, bug or not bug ?

2015-08-20 Thread Stakres
Hi Amos,

Any update ?

This morning, it was crazy:
*Percent used: 100.76%*
How is it possible ?

Then the squid has crashed and it's now cleaning objects.
I can understand new objects could be added faster than the squid is able to
clean older objects, but it seems there is something wrong in the cleaning
process.
The wiki says "/As swap utilization gets close to high-water mark object
eviction becomes more aggressive./". From my point of view the "aggressive"
is not aggressive enough... 
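A minimal Python sketch of the documented watermark behaviour (this is an illustration of the wiki's description, not Squid's actual eviction code; the linear ramp is an assumption):

```python
# Between cache_swap_low and cache_swap_high, eviction pressure ramps up;
# at or above the high-water mark it should be maximal.

def eviction_pressure(percent_used, swap_low=75, swap_high=80):
    """0.0 = idle, 1.0 = evict as fast as possible."""
    if percent_used <= swap_low:
        return 0.0
    if percent_used >= swap_high:
        return 1.0
    return (percent_used - swap_low) / (swap_high - swap_low)

# The reading above is already far past the high-water mark:
print(eviction_pressure(100.76))  # 1.0
```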

Is a patch/workaround possible soon ?
Thanks in advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-5-7-cache-swap-high-bug-or-not-bug-tp4672750p4672791.html


Re: [squid-users] Squid 3.5.7, cache_swap_high, bug or not bug ?

2015-08-18 Thread Stakres
Hi Amos,

New check, 30 sec ago, same server:
Store Directory #0 (aufs): /cachesdg/spool
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 67108864 KB
Current Size: 67045256.00 KB
*Percent Used: 99.91%*
Filemap bits in use: 7962720 of 8388608 (95%)
Filesystem Space in use: 65048176/575775620 KB (11%)
Filesystem Inodes in use: 7595323/36569088 (21%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 1.08 days

squid.conf:
...
cache_dir aufs /cachesdg/spool 65536 16 256 min-size=0 max-size=16384
...

Traffic is 700 Mbps; the server has 8 real cores (nothing virtual here) and
64 GB of memory.
Load is 3, CPU is at 40%.

So the cache will be full within 30 minutes... then what ?
Should I be ready to see Squid crash ?

Please check the Squid code ASAP, thanks in advance.

Bye fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-5-7-cache-swap-high-bug-or-not-bug-tp4672750p4672758.html


[squid-users] Squid 3.5.7, cache_swap_high, bug or not bug ?

2015-08-18 Thread Stakres
Hi All,

I'm facing a weird situation with Squid *3.5.7*; have a look:
Store Directory #0 (aufs): /cachesdg/spool
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 67108864 KB
Current Size: 66288408.00 KB
*Percent Used: 98.78%*
Filemap bits in use: 7858031 of 8388608 (94%)
Filesystem Space in use: 63582908/575775620 KB (11%)
Filesystem Inodes in use: 7428375/36569088 (20%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 1.06 days

In the squid.conf:
...
cache_swap_low 75
cache_swap_high *80*
...

Is that normal ?
I should see Squid evicting objects to respect the 80% high-water mark, am I
wrong ?


Thanks in advance for your comments...

Bye fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-5-7-cache-swap-high-bug-or-not-bug-tp4672750.html


Re: [squid-users] 2015/07/28 22:04:49 kid1| assertion failed: filemap.cc:50: "capacity_ <= (1 << 24)"

2015-07-28 Thread Stakres
Hi Amos,

/cache_dir aufs /cachesde/spool1 1560132 16 256 min-size=0 max-size=32768/

Will this bug be fixed in the near future, or do we have to increase
max-size to 128 KB or more ?
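A back-of-envelope Python check of why this cache_dir can trip the 2^24-entry filemap assertion (the ~16 KB average object size is an assumption, taken as half of max-size=32768; the calculation is illustrative, not Squid's sizing logic):

```python
# The filemap.cc assertion enforces at most 2^24 objects per cache_dir.
FILEMAP_MAX = 1 << 24          # "capacity_ <= (1 << 24)"

dir_size_mb = 1560132          # from the cache_dir line above (~1.5 TB)
avg_object_kb = 16             # assumed average cached object size

potential_objects = dir_size_mb * 1024 // avg_object_kb
# ~99.8 million potential slots vs a 16.7 million entry cap:
print(potential_objects > FILEMAP_MAX)  # True
```

Raising max-size (so the average object is larger) lowers the potential object count, which is why increasing it is the usual workaround.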

Fred





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/2015-07-28-22-04-49-kid1-assertion-failed-filemap-cc-50-capacity-1-24-tp4672516p4672520.html


[squid-users] 2015/07/28 22:04:49 kid1| assertion failed: filemap.cc:50: "capacity_ <= (1 << 24)"

2015-07-28 Thread Stakres
Hi All,

Squid 3.5.6 with AUFS.
Any idea why this error happens ?
/2015/07/28 22:04:49 kid1| assertion failed: filemap.cc:50: "capacity_ <= (1
<< 24)"/

Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/2015-07-28-22-04-49-kid1-assertion-failed-filemap-cc-50-capacity-1-24-tp4672516.html


Re: [squid-users] AUFS vs. DISKS

2015-07-21 Thread Stakres
Hi Fred,

No error, no crash.
Only some warnings:
2015/07/21 11:21:02 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
But we can live with these warnings; Squid will take care of the missing
objects...
Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672352.html


Re: [squid-users] How to get the correct size of a denied object ?

2015-07-20 Thread Stakres
Amos,
How do you get the real size at the moment for a normal object ?
Just do the same.
I suppose you get the size from the headers, right ?

If we know the object is denied, we issue a HEAD request to learn the size
and use it in the log.
As the object will be blocked, we don't care if this action takes 3 seconds
or more; the user will not get the object anyway...
See what I mean ?

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-get-the-correct-size-of-a-denied-object-tp4672332p4672343.html


Re: [squid-users] How to get the correct size of a denied object ?

2015-07-20 Thread Stakres
Antony,
I had this idea too, but that way we "lose" the info in the access.log (I
mean it is not overwritten) and there is no real-time effect, if you see
what I mean...
An alternative could be to catch the TCP_DENIED with a helper, but I have
not found a way yet; I think it cannot be done like that.

The easy way would be to have a special "%
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-get-the-correct-size-of-a-denied-object-tp4672332p4672334.html


[squid-users] How to get the correct size of a denied object ?

2015-07-20 Thread Stakres
Hi All,

As you know, when an object is denied by an ACL or anything else, the size
logged for the object is the size of the ERR_* page.
Is there a way to log the correct/real size of the blocked object ?

I know the URL is denied before Squid fetches the object from the Internet,
but it would be nice to have a special action/option that writes the real
size to the access.log instead of the ERR page size.
We don't care about the size of the ERR page here; knowing the real size of
the denied object is much more important. It is not about the size we
blocked, but about how much we did not have to download, and that is
valuable data for clients...

Is it possible to plan a solution for the next build ?
Just get the size from the headers, deny the object, then write the correct
size to the access.log.
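A hypothetical Python sketch of the proposed behaviour: read the size from a HEAD response's Content-Length and log that instead of the ERR page size. The `denied_object_size` helper is invented for illustration; the actual HEAD request and logging plumbing are left out:

```python
# Given the headers from a HEAD probe of a denied URL, pick the size to
# write to access.log (0 when the origin did not report a length).

def denied_object_size(head_headers):
    """Return the Content-Length to log for a denied request, 0 if unknown."""
    for name, value in head_headers.items():
        if name.lower() == "content-length":
            return int(value)
    return 0

print(denied_object_size({"Content-Length": "10486356"}))  # 10486356
```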

Thanks in advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-get-the-correct-size-of-a-denied-object-tp4672332.html


Re: [squid-users] AUFS vs. DISKS

2015-07-20 Thread Stakres
Hi Amos,
Here is the cache.log to check:
http://utimg.unveiltech.com/tmp/amos-cache.tgz

Fred,
I compared the two diskd.cc source files, Squid 3.4.8 and 3.5.6, both
official: no difference.
So using the diskd from 3.4 with the 3.5 does not seem to be a good idea;
the result should be the same.

Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672331.html


Re: [squid-users] AUFS vs. DISKS

2015-07-16 Thread Stakres
Hi,

By "cache.log saying objects are not found" I meant
"DiskThreadsDiskFile::openDone: (2) No such file or directory".
(I no longer had the exact message in mind...)
Yes, we still see this message, but it disappears within about 30 minutes,
so it is not a problem for us or our clients.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672297.html


Re: [squid-users] AUFS vs. DISKS

2015-07-16 Thread Stakres
Fred,
The AUFS works for us; we switched all our clients back to AUFS from DISKD.
Yes, there is some queue congestion at Squid restart (30 minutes max), but
as Amos said, Squid re-adapts its internal values to fit the traffic; I can
confirm that point.
After a while the queue congestion disappears, but we see many messages in
the cache.log saying objects are not found; here I think we don't really
care, as Squid is smart enough to correct its index file...

We have ISPs with 1-2 Gbps of bandwidth; no complaints anymore, only "Thx
guys, great job !"


Amos, and the whole Squid team:
You have fixed the issues with AUFS (isEmpty, ...), so thanks to all of you,
because with a previously unstable aufs, a still-slow diskd and a buggy
rock, it was not easy to get a stable Squid cache with good performance.

bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672294.html


Re: [squid-users] AUFS vs. DISKS

2015-07-16 Thread Stakres
Hi Fred,
Same results on our side...
Does it mean we should take the diskd engine from 3.4.x and use it with
3.5.x ?
It would be a good try, to see if it works.

bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672291.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Fred,

Not sure we'll have free time to test the previous 3.4; we now have dozens
of boxes to upgrade manually to 3.5.6...
Yes, we do use the original Squid 3.5.6 package, no build mix here.

Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672268.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Fred,
We have upgraded 4 big ISPs to the latest 3.5.6 with AUFS; the feedback is
very good. I can tell you clients see a big (positive) change here.
We use the same settings in squid.conf but AUFS instead of DISKD; the
difference is crazy...

In the past we moved to diskd due to too many errors with aufs (isEmpty,
etc...); now it seems all these errors have been fixed (welcome to the new
ones...).
So, again, I am only talking about diskd and TCP_HIT; the other flags are
fine for us.
Fred.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672259.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Hi Fred,
Tests from my side:
DISKD with TCP_HIT objects: 564 KB/s with wget, the same URL you tested.
AUFS with TCP_HIT objects: 47.8 MB/s; same wget, same Squid, same URL, same
everything.

Wget with AUFS:
Length: 10095849 (9.6M) [application/x-msdos-program]
Saving to: `youtube_downloader_hd_setup-2.9.9.23.exe'
100%[==>] 10,095,849  47.9M/s   in 0.2s
2015-07-15 15:48:29 (47.9 MB/s) - `youtube_downloader_hd_setup-2.9.9.23.exe'
saved

All,
We switched some ISPs from DISKD to AUFS this morning; the "queue
congestion" messages appear at the beginning, then disappear from the
cache.log. For how long, nobody knows...

Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672247.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Fred,
(Guys, there are 2 French Freds here, but we are not the same person.)

Did you check the TCP_HIT response times with diskd ?
During our tests we saw that it is sometimes better to download the object
from the Internet again rather than use the one from the cache; we got
better response times...

Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672235.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Amos,

We're using the latest 3.5.6 build, and we have not yet planned new tests
with Rock. We were a bit disappointed with it, so we're not really "hot" to
spend more time testing it.

We're OK with the diskd mode, except for TCP_HIT objects (50+ times slower).
We did the tests on a basic server: an i3 with 4 GB of memory and a 2.5"
80 GB disk; not a rocket, but good enough to tell bad speed from good ;)
Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672233.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Hi Fred,

We did the tests with 1 hard disk only (for testing); we used 150 req/sec
and the load was around 0.7-0.8.
No, the response times are crazy with DISKD TCP_HIT (20+ sec instead of
0.5 sec with AUFS), but it concerns TCP_HIT only; the other flags are
correct with DISKD.

I'll try "noatime"...

Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672231.html




Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Fred,
Welcome to the club... 




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672227.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Hi Amos,

Sorry, but the Rock mode is totally bugged; it is the worst mode to use
here.
We did tons of tests (small, medium and big rock caches); they all crash,
process after process. We have definitively abandoned the Rock mode as long
as it keeps giving the same results.

So it seems we'll have to switch all boxes from diskd to aufs, but I think
we'll survive.
Anyway, we liked diskd because we saw good stability, but the HIT objects
are really too slow and all my clients are complaining; that's why we ran
many tests yesterday and found these times...

Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672226.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Yury,

You mean that having DISKD 52 times slower than AUFS on Linux is normal ?
I cannot believe that, incredible !

I could understand double or triple, but here we're talking about 50+
times...

Fred.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672214.html


Re: [squid-users] AUFS vs. DISKS

2015-07-15 Thread Stakres
Yuri,

Debian 7 or 8, tested on both...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672212.html


[squid-users] AUFS vs. DISKS

2015-07-14 Thread Stakres
Hi All,

I'm facing a weird issue with the DISKD cache_dir model and I would like to
have your expertise here.

Here is the result of a cached object with an AUFS cache_dir:
1436916227.603    462 192.168.1.88 00:0c:29:6e:2c:99 TCP_HIT/200 10486356
GET http://proof.ovh.net/files/10Mio.dat - HIER_NONE/-
application/octet-stream 0x30

Now, here is the same object from the same Squid box but using the DISKD
cache_dir:
1436916293.648  24281 192.168.1.88 00:0c:29:6e:2c:99 TCP_HIT/200 10486356
GET http://proof.ovh.net/files/10Mio.dat - HIER_NONE/-
application/octet-stream 0x30

Do you see something weird ?
This is the same Squid (3.5.5); I just changed from AUFS to DISKD and
restarted Squid...

Same object from the cache, but *0.462 sec* with AUFS and *24.281 sec* with
DISKD.
52 times faster with AUFS; why ?

Any idea how to speed diskd up, or at least reduce the gap ?
I could understand that the response times would not be identical, but this
is the Grand Canyon !
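The "52 times" figure follows directly from the two elapsed times quoted above (a quick Python check; access.log's second field is elapsed time in milliseconds):

```python
# Elapsed times for the same TCP_HIT object, from the log lines above:
elapsed_aufs_ms = 462     # 0.462 s via AUFS
elapsed_diskd_ms = 24281  # 24.281 s via DISKD

ratio = elapsed_diskd_ms / elapsed_aufs_ms
print(round(ratio, 1))  # 52.6
```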

My cache_dir option used in the test:
cache_dir diskd /var/spool/squid3w1 190780 16 256 min-size=0
max-size=293038080


Thanks in advance for your input...

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraries/CDNs Booster

2015-07-08 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5,
with Videos, Music, Images, Libraries and CDNs.

New version 2.545
- July 8th 2015.
- Apple Music - new!
- Google Music - new!
- and more ...
More details at https://svb.unveiltech.com

Enjoy!

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4672107.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-04 Thread Stakres
Hi Amos,

We did tons of tests with the latest Squid versions, and this is not the
behaviour we see with "host_verify_strict off" and "client_dst_passthru
off".
With those 2 options off, we see a lot of ORIGINAL_DST that we should not
see if we follow your explanations, so it seems there is a bug somewhere ?

Can you check on your side (tproxy or not, same behaviour) ? Thanks in
advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672054.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-03 Thread Stakres
Amos,
OK, I got your points.

What I don't understand is:
- The DNS records do not match. Squid does the DNS request by itself,
downloads the object, delivers it to the client and flags it ORIGINAL_DST,
right ?
- Same request from another client, same path: it will be the same object,
flagged ORIGINAL_DST too.
- Again and again... each time the same fresh object...
- Why do we repeat the same action if we deliver the same object each time ?
It makes me crazy...

Here I mean with "*client_dst_passthru off*" and "*host_verify_strict
off*".
I understand that "host_verify_strict on" must act as you explain, no
problem.

Squid re-checks the DNS because there is an issue, then downloads and
delivers the object. If Squid delivers the object, it should be able to
cache it with "*client_dst_passthru off*" and "*host_verify_strict off*".

I agree Squid must respect CVE-2009-0801, but you/we should deal with it
nicely, not just apply it bluntly...

The right way should be:
Does Squid think the object is OK to be delivered ?
Yes: deliver it and cache it.
No: block it and don't cache it.

See what I mean ?
(Sorry to keep pushing on this topic, but it's highly important...)

Fred.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672048.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-03 Thread Stakres
Amos,
You said Squid checks the original DNS from the headers, then does its own
DNS resolution to verify that the two match.
So if there is no match, Squid makes the request to the Internet based on
the DNS records it found itself.
If I'm right, that is the current behaviour, correct ?

What we could do is keep the same flow; but since Squid downloaded the
object based on its own DNS records, it means the object is the correct
one. So keep all the details from Squid's own lookup and push the object
into the cache (if cacheable).

user request -> Squid checks the DNS is OK (corrects it if needed) -> Squid
downloads the right object and caches it.
user request -> Squid checks the DNS is OK (corrects it if needed) -> Squid
serves it from its cache.

Again, if Squid requests the right object based on its own DNS lookups, it
will deliver the good one to clients.
So we should not see ORIGINAL_DST anymore...
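The check being debated can be sketched in a few lines of Python (an illustration of the CVE-2009-0801 host-verification idea, not Squid's actual code; the toy resolver and IPs are invented):

```python
# The intercepted destination IP must be among the IPs the proxy itself
# resolves for the Host header; otherwise the response is tagged
# ORIGINAL_DST and kept out of the shared cache.

def host_verify(host, dst_ip, resolve):
    """resolve(host) -> list of IPs the proxy trusts for that host."""
    return dst_ip in resolve(host)

# Toy resolver standing in for the proxy's own DNS lookup:
dns = {"www.google.com": ["216.58.0.1", "216.58.0.2"]}
resolve = lambda h: dns.get(h, [])

# A client aimed at a spoofed server fails the check:
print(host_verify("www.google.com", "192.168.0.2", resolve))  # False
```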

And when I see the architecture Yuri has to build to avoid ORIGINAL_DST,
I'm sure all Squid users would be happy.

Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672044.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-03 Thread Stakres
Hi Amos,
Can we expect a workaround that allows the object into the cache when the
DNS record is corrected by Squid, instead of getting an ORIGINAL_DST ?
If Squid corrects the request, it means the URL will be good, so we should
be able to cache the object.

Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672041.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Stakres
Hi Yury,

In your installation, with your devices... At home I do the same as you,
but I'm not an ISP.

Here the issue is that end users can use different DNS servers that the
ISPs cannot control.
At home or in an enterprise, the admin can control which DNS servers the
devices use; in an ISP environment we cannot control or manage that, end
users do what they want.
2 different worlds, not the same rules, sorry.

Fred






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672024.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Stakres
Hi Amos,

"/You can get around it somewhat by having the ISP resolvers use each other
same as proxy chains do./"
This is impossible to do in a multi-level ISP architecture, because each ISP may
use any DNS servers (Google, Level3, etc.). From the original end user to the
last ISP hop, the request could be using an IP address that Squid cannot know
about.

"/Consider some malicious server at 192.168.0.2 responding with an infected
JPG to all requests. An infected server contains a script that fetches the
Google icon from 192.168.0.2 using Host:www.google.com. /"
Totally agree with you, but what we/you could do is replace the original DNS
records from the headers with the records Squid finds, and allow the cache hit.
Here, Squid only applies the correct DNS but denies caching the object.
If Squid corrects the DNS, it means the object should be safe (normally), so it
should accept saving the object into the cache (partial object or not), right?

So, fixing a wrong DNS record is a good thing, I agree, but why do you deny the
caching when the request was corrected?

What about when the end user is pinned to a special DNS server (home-made,
exotic server, etc.)? Here the ISP cannot increase the savings percentage, and
that percentage is the top priority for the ISP; that's why he needs solutions
like the Squid products.

Do you think we could have a workaround that fixes the wrong DNS record from the
headers (Squid's action) and still caches the object? Or does it not make sense
because of other security issues?

I read many forums where admins are requesting this behaviour; I'm sure we/you
can find a nice solution for all of us.

Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672022.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Stakres
Hi Amos,

216.58.220.36 != www.google.com ??? 
Have a look: http://www.ip-adress.com/whois/216.58.220.36, this is Google.

Depending on the DNS server used, the IP can change; we know that, especially
due to BGP.

In the case where the client is an ISP providing internet to smaller ISPs, with
different DNS servers between them and their end users, I understand that
because of ORIGINAL_DST Squid will check the headers, and if the DNS records do
not match, Squid will not cache, even with a StoreID engine, because there are
too many different DNS servers in the loop (users -> small ISP -> big ISP ->
squid -> internet). Am I right?

So, the result is a very poor 9% saving where we could expect around 50%
saving. 

Can you plan, for a next build, a workaround that accepts the original DNS
record from the headers, and checks DNS only if the headers do not contain any
DNS record?
I understand Squid should provide some protections, but here we should have the
possibility to turn these protections ON/OFF.
Or do we need to downgrade to Squid 2.7/3.0?

ISPs need to cache a lot, security is not their main issue.

Thanks in advance.
Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672020.html


Re: [squid-users] TProxy and client_dst_passthru

2015-07-01 Thread Stakres
Hi,

I'm back to this post because it still does not work.
You explained: "OFF - Squid selects a (possibly new, or not) IP to be used as
the server (logs DIRECT)." Sorry to say, that is not what the Squid actually
does.
We have set the client_dst_passthru directive to OFF and here is the result:
TCP_MISS/206 72540 GET
http://www.google.com/dl/chrome/win/B6585D9F8CF5DBD2/43.0.2357.130_chrome_installer.exe
- ORIGINAL_DST/216.58.220.36

Is there a way to totally disable the DNS verification done by Squid?

Thanks 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672013.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Hi,

Could the issue be related to that ?
TCP_MEM_HIT/200 224442 GET
http://squid1/8b26b519d740afd8ec698b6af06efd8e17c6e5b6:8182/squid-internal-periodic/store_digest
- HIER_NONE/- application/cache-digest

Is it normal to see the store digest as a MEM_HIT?
I tell Squid not to reply with a HIT:
acl cachedigest rep_mime_type application/cache-digest
store_miss deny cachedigest
send_hit deny cachedigest

In my Squids, the maximum object size in memory is 64 KB:
maximum_object_size_in_memory 64 KB
How can a 224 KB object be stored in memory 

More exploring, more weird... 

Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671972.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Amos,
Yes, a similar case here to bug 4223.
Reading bug 4223, we can see your comment that "Non-cacheable objects should
never be added to the digest."
In my setup there is no restriction: ICP is fully open, the Squid servers
(3.5.5) are compiled with the digest option, so everything is in place to allow
ICP/digest connections and exchanges.
So why do the servers think they have the objects, especially when they are not
cacheable and not cached?

To me, it seems the servers think they have the object but they don't, so they
reply with a 404, translated into a 504 to the Squid client because of the
sibling architecture.
I could understand it being a bug, but shouldn't the Squid client see the 504
and then request the object from the internet?

At the moment I use "retry_on_error on" as a workaround, but I'm not sure it
fixes all the 504s.

Then, having a dedicated cable between the Squid servers is not a realistic
solution; my ISP will not see that as a serious solution 

Last point: the no-tproxy option on a parent-type cache_peer does not work, we
tried that.
We applied that option one month ago and the internet sees the Squid IP, not
the original client IP address, so maybe another bug here... 

Fred.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671970.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Amos,
We used this example from the wiki:
http://wiki.squid-cache.org/Features/CacheHierarchy
We can see that a sibling/sibling architecture is possible, right?

Here we cannot have a "cache_peer parent" architecture, as the tproxy
information (the original user IP) would be lost at the parent level; you wrote
this in a previous thread of mine.
So we must have sibling/sibling with the two Squid servers, both in tproxy.

Now, if there is a better architecture, with correct settings, to build a
tproxy sibling/sibling setup, please share 
My ISP is nice and I can arrange that with him...

Thanks in advance.

Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671966.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Hi,
I disabled the sibling on both Squid servers; we got one 504:
TCP_MISS/504 361 GET
http://rate.msfsob.com/review?h=www.searchhomeremedy.com -
HIER_DIRECT/8.25.35.129
A wget on this URL gives a 404, so here we can say the object does not exist;
the TCP_MISS/504 seems a correct answer.
But no new 504s... the ISP link is 500 Mbps.
If we enable the sibling, we get one or two 504s every second on both Squids.

any idea ?

Fred.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671964.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Anthony, Amos,

The two Squids are each kid and parent of the other (both siblings).
So, when one asks the second, they play the kid -> parent roles, am I right?

Here is the flow:
Squid1 queries Squid2 and gets this:
... user-ip TCP_MISS/504 708 GET
http://code.jquery.com/ui/1.10.3/jquery-ui.js - CD_SIBLING_HIT/10.1.1.2 ...
Squid2 replies this to Squid1:
... squid1-ip TCP_MISS/504 478 GET
http://code.jquery.com/ui/1.10.3/jquery-ui.js - HIER_NONE/- text/html

I did many wget runs on "unavailable" objects through both Squid servers, and I
can fetch the objects correctly.
I'll disable the sibling to check whether we still get TCP_MISS/504; I'll keep
you posted...

As the two Squids are in production at the ISP datacenter, I'm not sure I can
enable debugging...
They have the same route, connected to two MikroTiks (squid1 -> mikrotik1,
squid2 -> mikrotik2), both MikroTiks connected to the same Cisco router.
The mangle rules, DNS and gateway on the MikroTiks are the same.

Fred.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671962.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Hi Antony,

Correct, the kid contacts the parent, which gets a 504 and replies the same to
the kid. That's why I suspect the parent tries to download by itself instead of
replying to the kid that it does not have the object, so that the kid would do
a fresh download from the internet.

examples:
TCP_MISS/504 708 GET http://w.sharethis.com/button/buttons.js -
CD_SIBLING_HIT/x.x.x.x
TCP_MISS/504 708 GET http://www.googletagmanager.com/gtm.js?id=GTM-NNVXD6 -
CD_SIBLING_HIT/x.x.x.x
TCP_MISS/504 478 GET http://dl4.offercdn.com/2613/styleVideo.css -
HIER_NONE/-
TCP_MISS/504 708 GET http://www.indianrail.gov.in/seat_Avail.html -
CD_SIBLING_HIT/x.x.x.x
etc...
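To see whether the 504s really correlate with sibling hits, the trimmed log excerpts above can be tallied with a quick helper. This is not a Squid tool, just an illustrative script of mine; the sample lines are the shortened excerpts quoted in this post (real access.log lines carry timestamp, duration and client IP fields in front):

```python
# Tally (status, hierarchy) pairs from trimmed access.log excerpts to see
# which hierarchy codes the TCP_MISS/504 entries cluster under.
from collections import Counter

LOG_LINES = [
    "TCP_MISS/504 708 GET http://w.sharethis.com/button/buttons.js - CD_SIBLING_HIT/x.x.x.x",
    "TCP_MISS/504 478 GET http://dl4.offercdn.com/2613/styleVideo.css - HIER_NONE/-",
    "TCP_MISS/504 708 GET http://www.indianrail.gov.in/seat_Avail.html - CD_SIBLING_HIT/x.x.x.x",
]

def tally(lines):
    """Count (status, hierarchy) pairs from simplified log excerpts where the
    status code is the first field and hierarchy/peer is the last field."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        status = fields[0]                     # e.g. "TCP_MISS/504"
        hierarchy = fields[-1].split("/")[0]   # e.g. "CD_SIBLING_HIT"
        counts[(status, hierarchy)] += 1
    return counts

print(tally(LOG_LINES))
```

Run against a full day's log, a strong skew toward CD_SIBLING_HIT would support the suspicion that the digest/sibling path is producing the 504s.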

The kid receives these answers and does nothing more; the result is that the
browser (client) does not get all the objects needed for a correct page...
Sometimes the page is blank because a JS script, CSS file, etc. is missing.

There are two Squids, siblings of each other.
Squid1 (10.1.1.1):
cache_peer 10.1.1.2 sibling 8182 8183 proxy-only no-tproxy
Squid2 (10.1.1.2):
cache_peer 10.1.1.1 sibling 8182 8183 proxy-only no-tproxy

Both are in tproxy (settings from the Squid wiki), ICP is enabled with each
squid.

if you need more details, feel free to ask 

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671957.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-30 Thread Stakres
Hi Amos,
Yep, I did not modify the transaction TTL.
Here it seems the parent (sibling mode) tries to do the request itself but hits
an error (504 Gateway Timeout); it should answer the kid that it does not have
the object (TCP_MISS), and then the kid should download the object from the
internet.
With this error, I suspect the kid accepts the "error" and does nothing more,
replying to the browser (user) something like "sorry, the object is missing".

How can I force the kid to download the object from the internet when the
parent replies with such an answer?

bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671955.html


Re: [squid-users] TCP_MISS/504 in cache_peer

2015-06-29 Thread Stakres
Hi Amos,

1. What does the 504 mean?
2. How can I extend the transaction TTL?

Thanks :o)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944p4671946.html


[squid-users] TCP_MISS/504 in cache_peer

2015-06-29 Thread Stakres
Hi All,
Does anyone have an idea why this happens?
*TCP_MISS/504 708 GET http://www.myexample.com/images/menu_hover_left.png -
CD_SIBLING_HIT/x.x.x.x*

I can see the TCP_MISS with the SIBLING only...
I've been looking into this issue for several days and it's driving me crazy 

thanks in advance.
bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-504-in-cache-peer-tp4671944.html


[squid-users] TCP_MISS_ABORTED/000 with SIBLING

2015-06-15 Thread Stakres
Hi All,

Weird issue with 2 Squid 3.5.5 in sibling mode, here is the trace:
.. TCP_MISS_ABORTED/000 0 GET http://www.greatandhra.com/ -
SIBLING_HIT/x.x.x.x

Any idea ?
Thanks in advance for your inputs...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-MISS-ABORTED-000-with-SIBLING-tp4671735.html


Re: [squid-users] Squid cache_peer in tproxy

2015-06-08 Thread Stakres
Hi Amos,

OK, that confirms our tests.

Is there a way to do this:
users -> squid1 tproxy -> squid2/squid3 tproxy -> internet (seeing the user
IPs)?
Or is it impossible?

In the wiki, there is a "no-tproxy" option on cache_peer; it would be nice to
have a "keep-tproxy" 

bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cache-peer-in-tproxy-tp4671602p4671621.html


[squid-users] Squid cache_peer in tproxy

2015-06-08 Thread Stakres
Hi All,

We're facing a weird issue with the cache_peer and tproxy.

Squid 3.5.4
users -> squid1 -> squid2/squid3 -> internet

squid1:
http_port 3128
http_port 3129 tproxy
icp_port 3130
cache_peer 192.168.1.2 parent 3128 3130  proxy-only weighted-round-robin
background-ping no-digest 
cache_peer 192.168.1.3 parent 3128 3130  proxy-only weighted-round-robin
background-ping no-digest 

squid2:
http_port 3128
icp_port 3130
cache_peer 192.168.1.3 sibling 3128 3130 proxy-only no-digest

squid3:
http_port 3128
icp_port 3130
cache_peer 192.168.1.2 sibling 3128 3130 proxy-only no-digest

The user IPs are correct at squid1 but we lose them at squid2/squid3; there we
see only the squid1 IP.
Where are we going wrong?
Thanks for your inputs 

bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cache-peer-in-tproxy-tp4671602.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraries/CDNs Booster

2015-06-01 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New  version 2.528   
- *June 1st 2015*.
- New websites
- YouTube mobile app improved
More details on  https://svb.unveiltech.com   

Enjoy

Bye Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4671470.html


Re: [squid-users] How to cache Chrome Installer ?

2015-05-19 Thread Stakres
Hi Yuri,
The URL does not change; I use the same URL for the tests:
http://r8---sn-n4g-jqbe.gvt1.com/edgedl/chrome/win/776B03BEAFB2810D/42.0.2311.152_chrome_installer.exe

It should work with StoreID.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-cache-Chrome-Installer-tp4671271p4671286.html


Re: [squid-users] How to cache Chrome Installer ?

2015-05-19 Thread Stakres
Hi Amos,

By deleting the "Vary: *" header from the reply, we should be able to cache the
object, correct?
And by de-duplicating on the URL up to the "?", we get a single object.
We do the same for YouTube, which also carries "user" data, so I don't see why
it cannot work with this Chrome installer URL... 
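The de-duplication described above can be sketched as a URL normalizer of the kind a StoreID helper would use. This is an illustrative sketch, not a tested production rule: the regex, the `chrome_store_id` name and the `gvt1.com.squid.internal` namespace are my assumptions (`.squid.internal` is the conventional suffix for synthetic store IDs). It collapses the varying `r8---sn-...` host shard and strips the query string, so every mirror of the same installer maps to one cache key:

```python
# Hypothetical StoreID-style normalizer for Chrome installer downloads.
# The leading "rN---sn-..." hostname shard and any query string vary per
# client; the path identifies the object itself.
import re

def chrome_store_id(url):
    m = re.match(r"https?://[^/]*\.gvt1\.com/(edgedl/chrome/.+?)(\?.*)?$", url)
    if m:
        # Collapse the varying host shard into one namespace, drop the query.
        return "http://gvt1.com.squid.internal/" + m.group(1)
    return None  # not one of ours; the helper would pass the URL through

print(chrome_store_id(
    "http://r8---sn-n4g-jqbe.gvt1.com/edgedl/chrome/win/776B03BEAFB2810D/"
    "42.0.2311.152_chrome_installer.exe"))
```

In a real deployment this function would sit inside a helper speaking Squid's store-ID protocol (read `channel-ID URL ...` lines on stdin, answer `OK store-id=...`), wired up via `store_id_program`.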

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-cache-Chrome-Installer-tp4671271p4671279.html


Re: [squid-users] How to cache Chrome Installer ?

2015-05-18 Thread Stakres
Hi Yuri,

Do you get a TCP_HIT with your rules?
On my side, I get this: *X-Cache: MISS* from blablabla...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-cache-Chrome-Installer-tp4671271p4671273.html


[squid-users] How to cache Chrome Installer ?

2015-05-18 Thread Stakres
Hi All,

Has any of you already cached this object?
*http://r8---sn-n4g-jqbe.gvt1.com/edgedl/chrome/win/776B03BEAFB2810D/42.0.2311.152_chrome_installer.exe*

I know this is a dynamic object served by Google; we tried with StoreID but
have not yet been able to get a TCP_HIT from Squid.
If you have any idea, let me/us know; thanks in advance.

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-cache-Chrome-Installer-tp4671271.html


Re: [squid-users] Cache peers with different load

2015-05-17 Thread Stakres
Hi Amos,
Thanks for the explanations 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cache-peers-with-different-load-tp4671204p4671249.html


Re: [squid-users] Cache peers with different load

2015-05-11 Thread Stakres
Hi Amos,

OK, got it.
But why such a big gap between the two parents?
The three Squids are on the same range, connected to the same switch, all on
1 Gb NICs.
No problem if there is some difference of a few MB, but here it's 10+ times
more between the two parents 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cache-peers-with-different-load-tp4671204p4671207.html


[squid-users] Cache peers with different load

2015-05-11 Thread Stakres
Hi All,

A crazy thing I cannot understand:
- 3 squid 3.5.4

the child (172.10.1.1) is like that:
cache_peer 172.10.1.2 parent 8182 8183 proxy-only weighted-round-robin
background-ping no-tproxy
cache_peer 172.10.1.3 parent 8182 8183 proxy-only weighted-round-robin
background-ping no-tproxy

ICP is allowed on the 3 squids.

Traffic is not equal, not balanced as I expected:
cache .2 holds 200 MB after 10 min;
cache .3 holds 4 GB in the same time.

Sure I'm missing something here, but what ? 
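One possible explanation, sketched below under an assumption: with `weighted-round-robin` plus `background-ping`, Squid biases parent selection by the measured RTT of each peer, so even a modest latency difference compounds into a large traffic skew. The code is an illustration of that proportional-selection idea, not Squid's actual peer-selection algorithm, and the RTT figures are invented:

```python
# Illustration: proportional selection weighted by 1/RTT, the kind of bias a
# background-ping weighted-round-robin scheme can produce. Not Squid's code.

def pick_counts(rtts_ms, requests):
    """Distribute `requests` across peers proportionally to 1/RTT weights."""
    weights = {peer: 1.0 / rtt for peer, rtt in rtts_ms.items()}
    total = sum(weights.values())
    return {peer: round(requests * w / total) for peer, w in weights.items()}

# A 20x RTT difference yields a ~20x traffic skew, the same order as the
# 200 MB vs 4 GB observation above (RTT values are made up).
print(pick_counts({"172.10.1.2": 20.0, "172.10.1.3": 1.0}, 2100))
```

If this is the cause, checking the per-peer RTTs in `mgr:server_list` (or pinning explicit `weight=` values on the two `cache_peer` lines) would confirm or correct the imbalance.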

Thanks in advance for your input...
Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cache-peers-with-different-load-tp4671204.html


Re: [squid-users] Number of clients accessing cache: 0

2015-05-04 Thread Stakres
Hi Amos,
Well, as usual, you found the reason:
"client_db" was off; now it shows the numbers...

Thanks Amos.
Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Number-of-clients-accessing-cache-0-tp4671102p4671117.html


[squid-users] Number of clients accessing cache: 0

2015-05-04 Thread Stakres
Hi All,
It seems the number of connected clients has always been 0 (zero) since 3.5.3...

We have tested with 10+ different, simultaneous client IPs and the number
always shows 0.
Latest tested build, the official 3.5.4: still 0 clients accessing the
cache...

Is there something wrong here ?

Here is a sample:
/usr/local/squid3/bin/squidclient -h 127.0.0.1 -p 3128 mgr:info |grep
"Number of"
Number of clients accessing cache:  0
Number of HTTP requests received:   9189
Number of ICP messages received:8581
Number of ICP messages sent:8625
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Number of file desc currently in use:   30


Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Number-of-clients-accessing-cache-0-tp4671102.html


Re: [squid-users] assertion failed: comm.cc:557: "F->flags.open"

2015-04-26 Thread Stakres
Hi Nathan,
Thanks for the reply. I applied the latest build (squid.3.5.3-r13808) and am
waiting for the client to confirm whether it is fixed or not 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/assertion-failed-comm-cc-557-F-flags-open-tp4670788p4670935.html


[squid-users] assertion failed: comm.cc:557: "F->flags.open"

2015-04-17 Thread Stakres
Hi All,

Does anyone have a trick for this error message in the cache.log?
*assertion failed: comm.cc:557: "F->flags.open"*

Squid *3.5.3-20150415-r13798*.
Configured with diskd, tproxy and ssl_bump.

When Squid hits this error it restarts itself, which breaks browsing for a
while.

Thanks in advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/assertion-failed-comm-cc-557-F-flags-open-tp4670788.html


Re: [squid-users] Random SSL bump DB corruption

2015-04-15 Thread Stakres
Hi Amos,

Good news !
Waiting for the new build, we'll test and keep you posted...

Best regards.
Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Random-SSL-bump-DB-corruption-tp4670289p4670757.html


Re: [squid-users] Random SSL bump DB corruption

2015-04-14 Thread Stakres
Hi Guy,

Thanks for answering :o)
Based on the Bugzilla entry it is fixed, but the fix is not yet available.
Anyway, that's very good news.
Let's wait for the next build.

Thanks for your help.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Random-SSL-bump-DB-corruption-tp4670289p4670725.html


Re: [squid-users] ***SPAM*** Re: Random SSL bump DB corruption

2015-04-14 Thread Stakres
Hi All,

No reply?
Do we have to live with this mega/crazy bug?
Is there someone on the Squid team able to have a look at this problem, or does
nobody care?

Thanks in advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Random-SSL-bump-DB-corruption-tp4670289p4670723.html


Re: [squid-users] ***SPAM*** Re: Random SSL bump DB corruption

2015-04-13 Thread Stakres
Hi Amos, All,

We have done as you indicated, but the index.txt is still corrupted; have a
look:
*V  250406120057Z   2C564651B40D1F4F6CAFFF06EA8B201580E3B678
unknown
/CN=173.194.65.84+Sign=signTrusted+SignHash=SHA256
V   250406120057Z   71664F2E27C4E321B2A4F59EA3971D70C298DAFB
unknown
/CN=173.194.112.167+Sign=signTrusted+SignHash=SHA256
V   250406120057Z   45205F68B7AD2DEEBE408E55FF8DADE5C88F6A99
unknown
/CN=74.125.136.188+Sign=signTrusted+SignHash=SHA256
V   250406120057Z   6CFF07CBB3BE016022AF2EA75BA56EC99FD8256E
unknown
/CN=173.194.112.177+Sign=signTrusted+SignHash=SHA256
V   250406120057Z   5829213872C31284E8853C079BF51A0E50F89CF8
unknown
/CN=173.194.112.166+Sign=signTrusted+SignHash=SHA256
V   250406120057Z   06BC95EACDEEC116E11B1B6CE66C9179C4251D6E
unknown
/CN=173.194.112.164+Sign=signTrusted+SignHash=SHA256
V   250406120057Z   7ECFE99D51088BD0692A7439EBAFEDA38A29BC  unknown
/CN=173.194.112.185+Sign=signTrusted+SignHash=SHA256
V   15062300Z   49F18ABCB410F18BE715AF26AEEB0EE4E1D89DC6
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.google.com+Sign=signTrusted+SignHash=SHA256
V   15062300Z   3A2A74F1431B28F9E268B8762706F69597D11447
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.googleapis.com+Sign=signTrusted+SignHash=SHA256
V   15062300Z   695CC4B75B9F38E29836BB211432FF8286966313
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.google-analytics.com+Sign=signTrusted+SignHash=SHA256
V   15062300Z   21C204121B238D4B48F0196F4D644B7A5F775574
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=www.google.com+Sign=signTrusted+SignHash=SHA256
HA256
*

What's the *HA256* at the end of the file ?

here is the squid.conf (3.5.3):
sslproxy_capath /etc/ssl/certs
acl sslstep1 at_step SslBump1
ssl_bump peek sslstep1
ssl_bump bump all
ssl_bump splice all
sslcrtd_program /usr/local/squid3/lib/ssl_crtd -s /var/lib/ssl_db -M 8MB
sslcrtd_children 16 startup=5 idle=1

Squid crashes every 1-2 hours.
It seems ssl_crtd fails while writing data to the index.txt.

Thanks for your help.
Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Random-SSL-bump-DB-corruption-tp4670289p4670708.html


Re: [squid-users] ***SPAM*** Re: Random SSL bump DB corruption

2015-04-09 Thread Stakres
Yuri,

 

We are trying this:

- tproxy

- ssl_bump bump all

It does not work.

 

We have followed the Squid wiki regarding the iptables rules, sysctl, etc.

If we use "ssl_bump server-first all" instead of "ssl_bump bump all", it works:
the HTTPS is decrypted.

So is tproxy compatible with the new Squid 3.5.x ssl_bump options?

 

Bye Fred

 

From: Yuri Voinov [via Squid Web Proxy Cache] 
[mailto:ml-node+s1019090n4670662...@n4.nabble.com] 
Sent: Thursday 9 April 2015 15:03
To: Stakres
Subject: Re: ***SPAM*** Re: Random SSL bump DB corruption

 


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

I think, first you can try the new stage-based SSL bump with 3.5.x. To do that
you must identify the problem sites.

If there are no results, you can simply bypass the problem sites without
bumping.

A wholesale server-first bump, on Squid 3.5.x especially, is not such a good
idea, I think. Especially on provider-level proxies.

09.04.15 19:09, Vdoctor wrote:
> Yuri,
>
> So what's next?
>
> Do you mean we must "do-not-ssl-bump" the wrong certificates?
>
> And if a certificate not yet identified is requested by a user, will it
> crash the Squid?
>
> Any idea how to fix that issue?
>
> Thanks in advance.
>
> Bye Fred
>
> From: Yuri Voinov [[hidden email]]
> Sent: Thursday 9 April 2015 15:04
> To: Vdoctor; [hidden email]
> Subject: Re: ***SPAM*** Re: [squid-users] Random SSL bump DB corruption
>
> - From my experience, it may occur as a result of forming a fake
> certificate of zero length (in the case where Squid cannot complete its
> formation for any reason).
>
> In turn, the formation of such a certificate occurs in particular due to an
> error in the Squid code or in the server certificate's characteristics. In
> particular, one of these servers is iTunes.
>
> 09.04.15 19:00, Vdoctor wrote:
> > Yury,
> >
> > I checked the source code (3.4/3.5) of ssl_crtd; the default size is
> > 2048.
> >
> > "-b fs_block_size  File system block size in bytes. Needed for
> > processing the natural size of certificates on disk. Default value is
> > 2048 bytes."
> >
> > /**
> >  \ingroup ssl_crtd
> >  * This is the external ssl_crtd process.
> >  */
> > int main(int argc, char *argv[])
> > {
> >     try {
> >         size_t max_db_size = 0;
> >         size_t fs_block_size = 2048;
> >
> > But the crazy thing is that the index.txt (last line) is wrong, not
> > complete. It seems the tool writes/saves wrong data; that's why it
> > becomes corrupted and crashes the Squid.
> >
> > We have tried with a single ssl_crtd in the squid.conf, then one per
> > worker; the same corruption.
> >
> > Bye Fred
> >
> > -Original message-
> > From: squid-users
> > [[hidden ema
Re: [squid-users] Random SSL bump DB corruption

2015-04-09 Thread Stakres
Hi Yuri,

We have checked the sslproxy_capath; all certificates are updated.
OpenSSL is: OpenSSL 1.0.1e 11 Feb 2013 (Debian 7.8)

One additional point: the self-signed certificate is 1024-bit, could that be
the problem?
Maybe we need to use ssl_crtd with the option "-b 1024"; what do you think?

example of corrupted db:
*V  250402155004Z   7307E4A4E7FC6483C2B1D533821A7D2356DF1B88
unknown
/CN=r2---sn-q4f7sn7z.googlevideo.com+Sign=signTrusted+SignHash=SHA256
V   250402155004Z   2D1FC87E26AC4D8AB1E6F3B45E2C69EB36C7F8D3
unknown
/CN=seal.verisign.com+Sign=signTrusted+SignHash=SHA256
6
*

Squid crashes when the index.txt becomes corrupted... weird...
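Incidentally, the records above follow the tab-separated OpenSSL-style CA index layout (status, expiry timestamp, serial, name, subject). A quick way to spot a truncated trailing record like the stray "6" above is to check the field count. A rough sketch; the five-field assumption comes from the samples quoted in this thread, not from the ssl_crtd source:

```python
# Sketch: spot truncated records in an ssl_crtd index.txt.
# Assumption: a valid record has 5 tab-separated fields
# (status, expiry, serial, name, subject) and the subject starts
# with "/", as in the samples quoted in this thread.

def find_truncated_records(text):
    bad = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if not line.strip():
            continue  # ignore blank lines
        fields = line.split("\t")
        if len(fields) < 5 or not fields[-1].startswith("/"):
            bad.append((lineno, line))
    return bad

sample = (
    "V\t250402155004Z\t7307E4A4E7FC6483C2B1D533821A7D2356DF1B88\tunknown\t"
    "/CN=r2---sn-q4f7sn7z.googlevideo.com+Sign=signTrusted+SignHash=SHA256\n"
    "V\t250402155004Z\t2D1FC87E26AC4D8AB1E6F3B45E2C69EB36C7F8D3\tunknown\t"
    "/CN=seal.verisign.com+Sign=signTrusted+SignHash=SHA256\n"
    "6\n"  # the stray trailing fragment reported above
)

print(find_truncated_records(sample))  # -> [(3, '6')]
```

Running something like this against index.txt right after a crash would at least confirm whether a partial write is what Squid is choking on.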

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Random-SSL-bump-DB-corruption-tp4670289p4670656.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Random SSL bump DB corruption

2015-04-06 Thread Stakres
Hi All, Yury,

Facing the same problem at the moment with Squid 3.5.3, around 150 req/sec.
Squid crashes 5 minutes later with the error.

index.txt:
V   15062300Z   7EE07E84896D06865495B87A061C4C55D03E428D
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.appspot.com+Sign=signTrusted+SignHash=SHA256
V   15061700Z   4E50C8790541265060E8796852D2E1D2878D7089
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=google.com+Sign=signTrusted+SignHash=SHA256
V   15061700Z   16802607779EC137D972E9731A3D8DD1D65F1819
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=accounts.google.com+Sign=signTrusted+SignHash=SHA256
V   15061700Z   736D922E14C3E8E573141AC6E3E79C4218B1B541
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.google-analytics.com+Sign=signTrusted+SignHash=SHA256
V   15061700Z   0A1D58F2065EA701CD60D874325AFB4D76602922
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.googleusercontent.com+Sign=signTrusted+SignHash=SHA256
V   15061700Z   213231FB70E633CA37606F717BBD1A92AEA97D7B
unknown
/C=US/ST=California/L=Mountain View/O=Google
Inc/CN=*.google.com+Sign=signTrusted+SignHash=SHA256
SHA256

The last line is wrong 

Tested with 1 worker, 1 DISKD cache.

https_port 8189 intercept ssl-bump generate-host-certificates=on
cert=/etc/squid3/mycert.pem key=/etc/squid3/mycert.pem
sslproxy_capath /etc/ssl/certs
ssl_bump server-first all
sslcrtd_program /usr/local/squid3/lib/ssl_crtd -s /var/lib/ssl_db -M 16MB
sslcrtd_children 32 startup=5 idle=1

/var/lib/ssl_db has the correct permissions, checked many times.
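When the DB does get corrupted, the usual recovery is to recreate it from scratch. A sketch of the steps, reusing the paths and -M size from the config above; the service name and the run-as user are assumptions that vary per install:

```shell
# Sketch: rebuild the ssl_crtd certificate database after corruption.
service squid3 stop
rm -rf /var/lib/ssl_db
/usr/local/squid3/lib/ssl_crtd -c -s /var/lib/ssl_db -M 16MB  # -c creates a fresh DB
chown -R proxy:proxy /var/lib/ssl_db  # adjust to the user Squid runs as
service squid3 start
```

Already-issued fake certificates are simply regenerated on demand, so the only cost is a brief burst of ssl_crtd work after the restart.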

Any idea ?

Bye Fred





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Random-SSL-bump-DB-corruption-tp4670289p4670630.html


Re: [squid-users] TProxy and client_dst_passthru

2015-04-06 Thread Stakres
Hi Amos,

We have done additional tests in production with ISPs, and ORIGINAL_DST
traffic in tproxy cannot be cached.
In normal mode (not tproxy), ORIGINAL_DST can be cached, no problem.
But once in tproxy (http_port 3128 tproxy), no way: it's impossible to get a
TCP_HIT.

We have played with client_dst_passthru and host_verify_strict, in many
on/off combinations.
By setting client_dst_passthru ON and host_verify_strict OFF, we can reduce
the number of ORIGINAL_DST entries (generating DNS "alerts" in the cache.log),
but it causes issues with HTTPS websites (Facebook, Hotmail, Gmail, etc...).
We have also tried many DNS servers (internal and/or external); same issue.

I read what you explained in your previous email, but it seems there is
something weird.
The problem is that ORIGINAL_DST can be up to 25% of the traffic on some
installations, meaning this part is "out of control" in terms of cache
potential.

All help is welcome here 
Thanks in advance.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4670629.html


[squid-users] assertion failed: Read.cc:205: "params.data == data"

2015-04-01 Thread Stakres
Hi All,

Strange problem while surfing, Squid 3.5.3, 64-bit, Debian 7.8:
*2015/04/01 19:19:06 kid3| assertion failed: Read.cc:205: "params.data ==
data"*

caches:
workers 3
cache_dir rock /var/spool/squid3r1 166400 min-size=0 max-size=65536
swap-timeout=500 max-swap-rate=200/sec
if ${process_number} = 1
cache_dir diskd /var/spool/squid3w1 453282 16 256 min-size=65536
max-size=1250301952
endif
if ${process_number} = 2
cache_dir diskd /var/spool/squid3w2 453282 16 256 min-size=65536
max-size=1250301952
endif
if ${process_number} = 3
cache_dir diskd /var/spool/squid3w3 453282 16 256 min-size=65536
max-size=1250301952
endif

/dev/shm exists, correct rights.
/var/run/squid exists, correct rights.

Anyone with experience about this error ?

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/assertion-failed-Read-cc-205-params-data-data-tp4670624.html


Re: [squid-users] Open Squid Box - FREE

2015-03-19 Thread Stakres
Hi Amos,

This is not a LiveCD, this is a *complete solution* including Squid, a web
console, statistics, graphs, the StoreID plugin, etc...
An open solution for people who need an all-in-one system ready and running
in 10 minutes max...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Open-Squid-Box-FREE-tp4670502p4670504.html


[squid-users] Open Squid Box - FREE

2015-03-19 Thread Stakres
*WAN Optimization and Internet Acceleration in Open Source*.
OpenSquidBox is an open-source, pre-configured Squid Proxy Cache server under
Linux that can be installed within a few minutes.
It's an ISO software appliance that can be loaded on any hardware or virtual
appliance.
It contains a pre-installed and pre-configured 64-bit Linux OS and Squid Proxy
Cache software, and includes a graphical web console for easy configuration
and management of your cache server.
Installing the ISO file on your own hardware/software appliance takes only a
few minutes.
No extra manual installation or configuration is required.
Your cache server is then immediately ready to work.
An easily customizable solution for those who need to install a cache server
rapidly or want to learn and practice Squid Cache with a nice open-source
graphical web console.

Dedicated website about  *  OpenSquidBox* 

*Startup Users*:
You are not yet an expert in Linux or Squid Cache, but you need something
ready to go to work/play with.
You cannot invest time to investigate how to install, set up and configure it.

*Advanced Admins*:
You need to set up a new proxy cache server but you do not have time to
install and configure it.
You need something ready-to-use to install on your hardware appliance.
Within a few minutes you have something installed and working.
A worry-free solution.

*Professionals*:
You are looking for a software appliance solution to deploy at your
customers' sites.
You need something ready-to-use to install on your hardware appliance.
Get an immediate solution within a few minutes.
An easily configurable solution.

*Main Features*
ISO software appliance solution ready to download
ISO file already containing a pre-configured Linux OS
Contains the most popular Squid Proxy Cache software, pre-configured
Easy to install on your own hardware appliance
64-bit OS and proxy cache server
Installation in a few minutes
No extra manual installation or configuration required
Works on hardware or virtual appliances
Already pre-configured with default settings
Includes a graphical web console for easy configuration & management:
Modern graphical console
Realtime and Mbps graphs
No need to manually configure settings files
Rapid access to configuration with the web console
Easily customizable solution
Ready-to-use solution
Good solution to learn and practice Squid Proxy Cache
Open-source solution ("Root" account is provided for free)

Version 1.03 - March 19th 2015
The ISO is now available to all in open source, including a 7-day trial of
the SquidVideoBooster plugin

*Installation*:
- Download the ISO
- Burn a CD or USB stick
- Boot on the CD/USB and install
- Once installed, go to the web console: http://opensquidbox-ip-address:81

Feel free to comment, make suggestions, or improve it...
Enjoy,
Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Open-Squid-Box-FREE-tp4670502.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-03-13 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New version 2.39 - March 13th 2015.
- New websites
- Tiny bugs fixed
More details on  https://svb.unveiltech.com   

Enjoy 

Bye Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4670396.html


Re: [squid-users] TProxy and client_dst_passthru

2015-03-03 Thread Stakres
Hi Eliezer,

Well, we have done many tests with Squid (3.1 to 3.5.x): disabling
"client_dst_passthru" (off) will stop the DNS entry check as explained in the
wiki; the option directly acts on the "ORIGINAL_DST" flag.
As you know, ORIGINAL_DST switches optimization off (e.g. StoreID), so it's
not possible to cache the URL (e.g. http://cdn2.example.com/mypic.png).

In non-tproxy/NAT mode, client_dst_passthru works perfectly by disabling the
DNS entry check, so optimization is done correctly.
But in tproxy/NAT, client_dst_passthru has no effect; we see ORIGINAL_DST in
the logs.

So, maybe I'm totally wrong here and client_dst_passthru is not related to
ORIGINAL_DST, or there is an explanation why client_dst_passthru does not act
in tproxy/NAT...
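For reference, the setup under discussion maps onto these squid.conf directives; this is only a sketch of the test configuration described above, not a recommendation:

```
# tproxy interception with the directive under test
http_port 3128 tproxy
client_dst_passthru off   # stops the ORIGINAL_DST relay in plain mode; appears ignored under tproxy
host_verify_strict off
```

With this in place, requests still logged as ORIGINAL_DST are the ones that bypass StoreID and the rest of the caching path.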

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4670194.html


[squid-users] TProxy and client_dst_passthru

2015-03-03 Thread Stakres
Hi All,

Does someone know why the "*client_dst_passthru*" does not work in TProxy
mode ?

From the Squid wiki, we can read that:
"/Regardless of this option setting, when dealing with intercepted
traffic Squid will verify the Host: header and any traffic which
fails Host verification will be treated as if this option were ON/."

With a normal (non-intercept) http_port the option works fine, but it does
not act on TProxy...

Thanks in advance for your feedback 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189.html


Re: [squid-users] Resolution Locker Plugin for Squid Proxy Cache 3.x

2015-02-22 Thread Stakres
Hi All,

New build 2.07:
- YouTube
- Vevo
- Vimeo
- iMDB
- Dailymotion
- Break.com
- Apple Trailers

Ask for your free 1-year license on the website 

Bye Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Resolution-Locker-Plugin-for-Squid-Proxy-Cache-3-x-tp4669489p4670018.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-02-22 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New version 2.35 - February 22nd 2015.
- New websites
More details on https://svb.unveiltech.com

Enjoy 

Bye Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4670015.html


Re: [squid-users] Resolution Locker Plugin for Squid Proxy Cache 3.x

2015-02-11 Thread Stakres
Hi All,

New build 2.06:
- YouTube
- Vevo
- iMDB
- Dailymotion
- Break.com
- Apple Trailers

Ask for your free 1-year license on the website 

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Resolution-Locker-Plugin-for-Squid-Proxy-Cache-3-x-tp4669489p4669749.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-02-09 Thread Stakres
 Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New version 2.33 - February 8th 2015.
- New websites
More details on https://svb.unveiltech.com

Enjoy

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4669653.html


Re: [squid-users] Resolution Locker Plugin for Squid Proxy Cache 3.x

2015-02-04 Thread Stakres
Hi All,

New build 2.05 including Dailymotion...

Still a free 1-year license on the website 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Resolution-Locker-Plugin-for-Squid-Proxy-Cache-3-x-tp4669489p4669549.html


[squid-users] Resolution Locker Plugin for Squid Proxy Cache 3.x

2015-02-02 Thread Stakres
Hi All,

SquidVideoLocker for Squid Proxy Cache is a plugin that locks and limits
video resolutions of YouTube, Vevo and iMDB. The plugin takes into account
all video formats, all resolutions from 144p up to 2160p.

Reduce video resolutions from YouTube, Vevo and iMDB to the lowest available
resolution in order to save a significant amount of bandwidth for other
usages.
Once installed on your Squid Proxy Cache, each time one of your users tries
to view a web video from YouTube, Vevo or iMDB, the plugin will lock and
limit the video resolution in order to reduce the size of the audio/video to
download and free precious bandwidth for other users.

Visit our dedicated website  SquidVideoLocker   
and get a *free 1 year license*.

Version 2.01 - February 2nd 2015
- Standalone program with no additional libraries needed (no external
libraries anymore)
- Independent Squid plugin that does not need the Cloud API anymore
- Compatible with all Linux distributions (tested under Debian/Ubuntu/CentOS
64-bit and Debian 32-bit)

Installation:
- Uncompress ut-reslocker.2.01.tgz to /etc/squid, depending on your
compilation options
- Depending on your platform (32-bit or 64-bit), verify that the permissions
of the binary ("/etc/squid/[proc]/ut-reslocker") are 0755
- Then, edit your squid.conf to add the following:
#  #
# SquidVideoLocker
url_rewrite_program /etc/squid/ut-reslocker
# You could increase the number of children if you manage huge traffic
url_rewrite_children 8
- Finally, reload Squid: squid -k reconfigure
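For readers who have not written a url_rewrite_program before: Squid feeds the helper one request per line on stdin (a numeric channel-ID first when helper concurrency is enabled, then the URL and some metadata) and expects one answer line per request, where ERR means "no rewrite". A minimal sketch in Python; the video-URL rule is invented for illustration and is not what ut-reslocker actually does:

```python
import sys

def rewrite(url):
    # Illustrative rule only: force a hypothetical low-resolution
    # query parameter on a made-up video URL pattern.
    if "example-video-cdn.test/videoplayback" in url and "quality=" not in url:
        return url + ("&" if "?" in url else "?") + "quality=low"
    return None  # no rewrite for this URL

def handle_line(line):
    # Squid sends: URL [extras...]; with concurrency enabled,
    # a numeric channel-ID comes first and must be echoed back.
    parts = line.strip().split()
    channel = ""
    if parts and parts[0].isdigit():
        channel = parts[0] + " "
        parts = parts[1:]
    url = parts[0] if parts else ""
    new = rewrite(url)
    return channel + ('OK rewrite-url="%s"' % new if new else "ERR")

if __name__ == "__main__":
    for line in sys.stdin:  # one request per line, answer immediately
        sys.stdout.write(handle_line(line) + "\n")
        sys.stdout.flush()
```

The unbuffered flush after every answer matters: Squid waits for the helper's reply before releasing the request.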

Enjoy 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Resolution-Locker-Plugin-for-Squid-Proxy-Cache-3-x-tp4669489.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-01-28 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New version 2.31 including new websites and bug fixes - January 28th 2015.
More details on https://svb.unveiltech.com

Enjoy

Bye Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4669395.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-01-19 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New version 2.27 including new websites - January 19th 2015.
More details on https://svb.unveiltech.com

Enjoy 

Bye Fred 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4669159.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-01-05 Thread Stakres
Yuri,

Do not worry, it takes more than that to offend me :-)
Yes, regexp is great, but it is not clear for everyone; I mean they have to
understand regexp and speak Perl...

Nice to see you reach 70% with your rules. I really doubt 70% is achievable
with those simple rules, but I'm ready to believe you.
Maybe you could use our plugin for some tests; maybe you'll reach 80+%, and
maybe you'll abandon your script... 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668936.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-01-05 Thread Stakres
Hi Yuri,

Does the "we don't need" mean "you don't need", or do you speak for all
users of Squid ?

We have done tons of tests with "storeid_file_rewrite"; sorry to tell you it
does not achieve 70% because:
- The program you provide is bare, I mean there is only 1 example
- Admins have to check hundreds of websites to correctly add them into the
process
- Not everyone knows the Perl language
- Many websites are specific in their URLs, like YouTube, Akamai, etc...

So, I perfectly understand the "/we don't need any hardcoded commercial
analogue./", but correct me if I'm wrong: you're not alone on the planet,
correct ?
There are admins looking for a complete solution because they have no free
time to spend doing the de-duplication themselves. My message is to inform
them; if you're not interested, no problem, but some want to be
updated.
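For admins who do want to build their own de-duplication with storeid_file_rewrite or a custom Store-ID helper, the core idea is only this: map URLs that serve identical bytes onto one canonical cache key. A toy sketch of that mapping; the numbered-mirror hostname pattern is invented and far too naive for real CDNs like YouTube or Akamai:

```python
import re

# Toy de-duplication rule: collapse numbered mirrors such as
# cdn1.example.com, cdn2.example.com, ... onto one store-id key.
# The hostname pattern is invented for illustration only.
MIRROR = re.compile(r"^http://cdn\d+\.example\.com/(.+)$")

def store_id(url):
    m = MIRROR.match(url)
    if m:
        # ".squid.internal" keeps the synthetic key out of real DNS space
        return "http://cdn.example.com.squid.internal/" + m.group(1)
    return None  # no canonical key: cache under the original URL

print(store_id("http://cdn7.example.com/lib/jquery.min.js"))
# -> http://cdn.example.com.squid.internal/lib/jquery.min.js
```

The hard part, as said above, is not this mapping but maintaining hundreds of per-site patterns as the sites change.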

Thanks in advance to keep my post safe.
If this type of message is not accepted in this forum, ok just let me know.

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668933.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2015-01-05 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache for Videos, Music,
Images, Libraries and CDNs.

By default your existing Squid Proxy Cache cannot properly cache most
popular multi-media websites like YouTube, Netflix, Facebook, DailyMotion,
Vimeo, Vevo, Google Maps & Apps, Apple, Tumblr, Yandex, etc...
Now there is a quick and easy solution: within 5 minutes you can load this
Squid add-on to significantly improve the caching possibilities and
performance of your Squid Proxy Cache.
More than *600+ websites* are already supported and can be cached
automatically in order to get the best out of your existing Internet
bandwidth.

List of all supported web sites (600+)
SquidVideoBooster works with Squid v2.7, v3.4 and v3.5.

New version 2.24 including new websites - January 5th 2015.
More details on  https://svb.unveiltech.com   

Enjoy 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668929.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2014-12-21 Thread Stakres
Hi All,

New build version 2.20 - December 21st 2014
- New option "*-ytd*" to enable the caching function with YouTube Downloader
tools
- New websites added

More details on https://svb.unveiltech.com

Enjoy...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668803.html


Re: [squid-users] Determining unique clients in Squid

2014-12-19 Thread Stakres
Hi Veiko,

 

Correct me if I'm wrong: you need to use Squid with HTTPS decryption and to
cache a maximum of objects (mainly big ones), am I right?

Regarding the private/public objects, I cannot answer here as I don't see
what your project is; also, I'm not a member of the Squid team, so I'm not
aware of all the tricks :-)

 

So, do you plan to install Squid for your users' regular internet traffic, or
do you have special restrictions for a specific Squid installation?

Ready to help, but I need more details on what you want to do :-)

 

Bye Fred

 

De : Veiko Kukk [via Squid Web Proxy Cache]
[mailto:ml-node+s1019090n4668773...@n4.nabble.com] 
Envoyé : vendredi 19 décembre 2014 15:43
À : Stakres
Objet : Determining unique clients in Squid

 

Hi, 

I have been trying to understand how Squid determines different clients, but
it is not clear from the documentation. I guess this does not depend entirely
on IP address, right? Otherwise all clients behind NAT would be considered a
single client.

The reason behind this is that I'd like to configure a forward proxy for
(mostly) binary file caching. All requests have Authorization headers (an API
key) and come from a single IP address (localhost, a Python application, not
a generic web browser).

client -> squid (ssl_bump to see inside https) -> remote cloud storage

http://wiki.squid-cache.org/SquidFaq/InnerWorkings#What_are_private_and_public_keys.3F
"Private objects are associated with only a single client whereas a 
public object may be sent to multiple clients at the same time." 
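On the Authorization-header point above: beyond Squid's private/public distinction, HTTP itself forbids a shared cache from reusing a response to a request that carried Authorization unless the response explicitly allows it; RFC 7234 section 3.2 names public, must-revalidate and s-maxage as the directives that lift the ban. A rough sketch of that rule, with deliberately simplified Cache-Control parsing:

```python
def shared_cache_may_store(has_authorization, cache_control):
    # Simplified RFC 7234 section 3.2 check: a response to a request
    # with an Authorization header is storable by a shared cache only
    # when Cache-Control explicitly permits it.
    directives = [d.strip().split("=")[0].lower()
                  for d in cache_control.split(",") if d.strip()]
    if not has_authorization:
        return True  # other storability rules still apply; ignored here
    return any(d in ("public", "must-revalidate", "s-maxage")
               for d in directives)

print(shared_cache_may_store(True, "max-age=3600"))          # -> False
print(shared_cache_may_store(True, "public, max-age=3600"))  # -> True
```

So with an API key on every request, the origin's responses would need one of those directives before Squid may serve them as cache hits.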

I wonder if it would be possible to use Squid for effectively cache 
larger objects locally with this type of configuration? 

Best regards, 
Veiko 





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Determining-unique-clients-in-Squid-tp4668773p4668780.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2014-12-17 Thread Stakres
Hi All,

New build 2.17 with additional websites...

Enjoy 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668738.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2014-12-14 Thread Stakres
Hi All,

New build 2.16 handling new websites in de-duplication...

Enjoy 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668707.html


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2014-12-12 Thread Stakres
Hi Ahmed,

I cannot answer concerning the SMP 32KB caching limitation; Amos and/or
Eliezer should be the right persons here to answer you.

Regarding the SquidVideoBooster, this is a plugin for Squid. If your Squid
supports SMP, then SquidVideoBooster will work.

Reminder, the SquidVideoBooster is a plugin to de-duplicate similar URLs
(video, CDNs etc...) with Squid. The SquidVideoBooster will not fix
misconfigurations or issues with Squid...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraries-CDNs-Booster-tp4668683p4668693.html


Re: [squid-users] Help with Windows updates

2014-12-12 Thread Stakres
Hi JP,

Have you tried the SquidVideoBooster ?
It supports Squid 2.7, 3.4 and 3.5, including Windows Update and hundreds of
other websites.
https://sourceforge.net/projects/squidvideosbooster

Link to the news.

Dedicated website about SquidVideoBooster and licensing:
http://www.unveiltech.com/indexsquidvideobooster.php

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Help-with-Windows-updates-tp4668681p4668685.html


[squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraris/CDNs Booster

2014-12-12 Thread Stakres
Hi All,

Just to let you know the SquidVideoBooster is now compatible with Squid 2.7,
3.4 and 3.5 to speed up your Video, Music, Image files, Libraries (Jquery,
Bootstrap, etc...), Software Updates (Windows Update, Apple, Android,
etc...), Smartphone/Tablet Apps, CDNs.
It'll help you to save more than 50% of your bandwidth...
The plugin is available from
https://sourceforge.net/projects/squidvideosbooster/

The plugin takes into account 600+ web sites
(http://www.unveiltech.com/videosboost.php) including YouTube, DailyMotion,
Vimeo, Vevo, iMDB, Netflix, Windows Update, Anti-Virus updates, etc...

Read the readme.txt for the new compatibility with Squid 2.7.

Enjoy 

Dedicated website about SquidVideoBooster and licensing:
http://www.unveiltech.com/indexsquidvideobooster.php

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-3-4-and-3-5-Videos-Music-Images-Libraris-CDNs-Booster-tp4668683.html


Re: [squid-users] Squid 3.4.x Videos/Music Booster

2014-12-11 Thread Stakres
Hi All,

New build 2.12 available.
- New websites added
- New option "-shakir" to boost speedtest.net

Enjoy 

Dedicated website about SquidVideoBooster and licensing:
http://www.unveiltech.com/indexsquidvideobooster.php

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-x-Videos-Music-Booster-tp4666154p4668677.html


Re: [squid-users] Squid 3.4.x Videos/Music Booster

2014-12-08 Thread Stakres
Eliezer,

Good to know 

We indicate in the readme.txt the distributions we have tested:
*Note: Currently SquidVideoBooster is available for Linux Debian, Ubuntu,
CentOS, Suse, etc...
Specific Linux distributions can be provided on demand.*


Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-x-Videos-Music-Booster-tp4666154p4668647.html

