Introducing myself
Hello, I have been using Squid for about 3 years now for normal web caching and URL filtering in our company. I have written a small Perl script to be used as a logfile daemon, which stores the access log entries in a MySQL database. I'm currently using it in our company, and I've also received requests from a few people who were looking for a MySQL access log solution for Squid. So far the script is working well, but mine is a low-traffic scenario (15 clients), so I'd like to see it tested in more demanding environments to uncover bugs or performance bottlenecks. Also, some of the views contained in the DDL script accompanying the daemon were written as an exercise, while others provided useful information for my own requirements. The list of views could be expanded or modified based on other people's requests. The last line on my TODO list is how to deal with database growth. Currently there is no consolidation routine, so the database has to be cleaned by hand. I have some experience with MySQL and PostgreSQL, so I'm planning to write a version of the script for PostgreSQL as well. -- Marcello Romani
Re: Introducing myself
Marcello Romani wrote: [...] Welcome. I'm working on the port of LogDaemon into 3.2. I look forward to working with you and it as the first LD helper. MySQL is precisely one of the tools I'm looking at testing the port with. Back at the beginning of LogDaemon in 2.7 there was a MySQL helper as well. I have not heard much since; presumably it is happily working in places. If that was not you, we'd best check up and see about merging the script concepts rather than adding a new one. If you are intending and able to stick around and support the script, I believe MySQL logging is one of the helpers we could find great use for bundling. Amos -- Please be using Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13 Current Beta Squid 3.1.0.6
Re: Introducing myself
Amos Jeffries wrote: [...] Back at the beginning of LogDaemon in 2.7 there was a MySQL helper as well. I have not heard much since; presumably it is happily working in places. If that was not you, we'd best check up and see about merging the script concepts rather than adding a new one. I'll have to look closely, but I believe what you are mentioning is not my script. If you are intending and able to stick around and support the script, I believe MySQL logging is one of the helpers we could find great use for bundling. I have developed the Perl script in my spare time, so I might not be as timely as I should, but I'm willing to support and expand it. I'm not a C or C++ expert, but if you feel that it's better to have an LD helper written in C or C++ rather than Perl, I might also look into that. -- Marcello Romani
Re: Introducing myself
Marcello Romani wrote: [...] Turns out it was you in March 2008. I grabbed a copy of your early script and was playing around with it. http://www.mail-archive.com/squid-us...@squid-cache.org/msg53342.html Ouch! You're right. The fact is, I've also seen an LD helper written in C somewhere... The only other MySQL stuff I've seen was Arthur Tumanyan's shaga patch in 2007, which was for integrated MySQL support before LD came along. It might be interesting to compare the SQL tables etc., but that code is not relevant to this. Adrian wrote the initial access.log output into LD form using the original C from inside Squid. I'm not aware of any other LD around yet. [...] Ok, perfect. I have registered the project on SourceForge, but I've not yet committed anything there. Is that the right way to host the script and its future developments? The project is here: http://sourceforge.net/projects/squid-mysql-log/ Up to you. That's how squid_kerb_auth did it. The binary name, if we bundle with Squid, will end up being log_mysql_daemon.pl though. Amos
Re: Introducing myself
Amos Jeffries wrote: [...] Turns out it was you in March 2008. I grabbed a copy of your early script and was playing around with it. http://www.mail-archive.com/squid-us...@squid-cache.org/msg53342.html Ouch! You're right. The fact is, I've also seen an LD helper written in C somewhere... [...] The only issues we have with coding language are that it must be widely usable/useful, and someone must be able to keep up with bug fixes. Perl passes on both. Ok, perfect. I have registered the project on SourceForge, but I've not yet committed anything there. Is that the right way to host the script and its future developments? The project is here: http://sourceforge.net/projects/squid-mysql-log/ -- Marcello Romani
Couldn't get Header value -- Error occurred: ICAP protocol error
Hi All, I am new to the eCAP library and working on the sample content adapter ecap_adapter_sample-0.0.2. My packages and system specs are: 1. OS: CentOS 5.0 2. Squid: 3.1.0.6 3. libecap: 0.0.2. I am trying to get the header field for a browser request, say Host: www.google.com, where the name is Host and the value is www.google.com. My code is:

    void Adapter::Xaction::start() {
        Must(hostx);
        if (hostx->virgin().body()) {
            receivingVb = opOn;
            hostx->vbMake(); // ask host to supply virgin body
        } else {
            receivingVb = opNever;
        }
        libecap::shared_ptr<libecap::Message> adapted = hostx->virgin().clone();
        Must(adapted != 0);
        adapted->header().removeAny(libecap::headerContentLength);
        // add a custom header
        static const libecap::Name name("X-Ecap");
        const libecap::Header::Value value =
            libecap::Area::FromTempString(libecap::MyHost().uri());
        adapted->header().add(name, value);
        cout << "1. I am at " << __LINE__ << endl;
        const libecap::Name name2("Host");
        libecap::Header::Value value1;
        cout << "2. I am at " << __LINE__ << endl;
        value1 = adapted->header().value(name2);
        if (!adapted->body()) {
            sendingAb = opNever; // there is nothing to send
            lastHostCall()->useAdapted(adapted);
        } else {
            hostx->useAdapted(adapted);
        }
    }

When execution reaches the line cout << "2. I am at " << __LINE__ << endl;, the browser displays the following error: -- ICAP protocol error. The system returned: [No Error] This means that some aspect of the ICAP communication failed. Some possible problems are: * The ICAP server is not reachable. * An illegal response was received from the ICAP server. -- Please suggest what I am doing wrong? I think there is a bug in the value() function. Any help would be greatly appreciated. Thanks in advance. Ali Muhammad -- View this message in context: http://www.nabble.com/Couldn%27t-get-Header-valueError-occured-%22ICAP-protocol-error%22-tp22969759p22969759.html Sent from the Squid - Development mailing list archive at Nabble.com.
Re: Introducing myself
Marcello Romani wrote: [...] I'll have to look closely, but I believe what you are mentioning is not my script. Turns out it was you in March 2008. I grabbed a copy of your early script and was playing around with it. http://www.mail-archive.com/squid-us...@squid-cache.org/msg53342.html [...] I have developed the Perl script in my spare time, so I might not be as timely as I should, but I'm willing to support and expand it. I'm not a C or C++ expert, but if you feel that it's better to have an LD helper written in C or C++ rather than Perl, I might also look into that. The only issues we have with coding language are that it must be widely usable/useful, and someone must be able to keep up with bug fixes. Perl passes on both. Amos
Re: Introducing myself
Amos Jeffries wrote: [...] Up to you. That's how squid_kerb_auth did it. The binary name, if we bundle with Squid, will end up being log_mysql_daemon.pl though. Hmmm... I think it's preferable to have consistent names. I'll look into renaming the registered project or creating a new one. -- Marcello Romani
Re: Couldn't get Header value -- Error occurred: ICAP protocol error
On 04/09/2009 06:18 AM, Ali Muhammad Qaim Khani wrote: Hi All, I am new to the eCAP library and working on the sample content adapter ecap_adapter_sample-0.0.2. [...] I am trying to get the header field for a browser request, say Host: www.google.com, where the name is Host and the value is www.google.com. I have made some suggestions when I answered the same question at https://answers.launchpad.net/ecap/+question/66951 Please follow those suggestions. Until the culprit is found, you can post either here or on Launchpad, whichever works best for you. Launchpad eCAP answers are probably easier for other folks to find in the future than a squid-dev thread. Thank you, Alex. [...]
SourceLayout: where to put Delay*?
Hello, Where should we put the following traffic shaping-related stuff?

    CommonPool.h         DelayId.cc          DelaySpec.cc
    CompositePoolNode.h  DelayId.h           DelaySpec.h
    DelayBucket.cc       DelayIdComposite.h  DelayTagged.cc
    DelayBucket.h        DelayPool.cc        DelayTagged.h
    DelayConfig.cc       DelayPool.h         DelayUser.cc
    DelayConfig.h        DelayPools.h        DelayUser.h
    DelayVector.cc       NullDelayId.cc      delay_pools.cc
    DelayVector.h        NullDelayId.h       HttpHeaderTools.cc

Here are a few options: shaping/ tshaping/ tshape/ shape/ delay/ delays/ quota/ quotas/ Please keep in mind that DelayPools is just a mechanism for some traffic shaping and quota control features, so perhaps it is better not to use delay*/ as the directory name. Thank you, Alex.
Re: SourceLayout: where to put Delay*?
On Thu, Apr 9, 2009 at 11:42 PM, Alex Rousskov rouss...@measurement-factory.com wrote: Hello, Where should we put the following traffic shaping-related stuff? [...] Here are a few options: shaping/ tshaping/ tshape/ shape/ delay/ delays/ quota/ quotas/ Please keep in mind that DelayPools is just a mechanism for some traffic shaping and quota control features so perhaps it is better not to use delay*/ as the directory name. I find quota/ is the most fitting. -- /kinkie
Re: SourceLayout: where to put Delay*?
I vote for: trafficcontrol/ tcontrol/ tc/ ? - Original Message - From: Kinkie gkin...@gmail.com To: squid-dev@squid-cache.org Sent: Friday, April 10, 2009 10:08 AM Subject: Re: SourceLayout: where to put Delay*? On Thu, Apr 9, 2009 at 11:42 PM, Alex Rousskov rouss...@measurement-factory.com wrote: Hello, Where should we put the following traffic shaping-related stuff? [...] Here are a few options: shaping/ tshaping/ tshape/ shape/ delay/ delays/ quota/ quotas/ Please keep in mind that DelayPools is just a mechanism for some traffic shaping and quota control features so perhaps it is better not to use delay*/ as the directory name. I find quota/ is the most fitting. -- /kinkie
Re: /bzr/squid3/trunk/ r9625: SourceLayout: src/base, take 1 -- moved remaining Async* files to src/base/
Alex Rousskov wrote: revno: 9625 committer: Alex Rousskov rouss...@measurement-factory.com branch nick: trunk timestamp: Thu 2009-04-09 16:46:45 -0600 message: SourceLayout: src/base, take 1 -- moved remaining Async* files to src/base/ Looks like TEST_CALL_SOURCES should die now and clear up a bit of complexity. Amos
Re: SourceLayout: where to put Delay*?
Pieter De Wit wrote: [...] I vote for: trafficcontrol/ tcontrol/ tc/ ? Kinkie wrote: I find quota/ is the most fitting. My vote is for src/quotas/ - shaping/ also covers the IP-layer QoS controls. - trafficcontrol/ I find too long and ambiguous (ACLs and adaptation are traffic controls too). - delays/ is, as you mention, not right. And eventually renaming the internal stuff from Delay* to something like ClientSpeed*, since things under ServerSpeed* and ClientQuota* are already on the books. But don't bother with any re-work now; it's all needing very extensive changes for IPv6, CIDR, parsing, etc. in 3.2. Amos
Re: Couldn't get Header value -- Error occurred: ICAP protocol error
Alex Rousskov wrote: [...] I have made some suggestions when I answered the same question at https://answers.launchpad.net/ecap/+question/66951 Please follow those suggestions. Until the culprit is found, you can post either here or on Launchpad, whichever works best for you. [...] Thanks, Alex, for the quick response. I will keep posting my questions on the Launchpad eCAP answers forum. Thanks, Ali Muhammad