Re: Squid-3.2 status update
Hi Amos,

I vote for keeping it in, possibly with a warning that it is a new feature. We provide the rope and highlight the risks; it's then up to each individual administrator to decide whether to take them or not, depending on their unique deployment scenario.

Thanks everyone for your hard work!

Kinkie

On Thu, Sep 22, 2011 at 6:27 AM, Amos Jeffries squ...@treenet.co.nz wrote:
> SMP rock storage changes hit 3.2 in r11342 a few minutes ago. Expect some
> new bugs to arrive and some to die mysteriously. Thank you very much for
> the years of hard work and stress getting this to happen, Alex.
>
> As I understand it so far (Alex will likely follow up and correct me on
> some of this):
>
> * Installations with no workers are expected to see only small benefits,
>   via the existing storage code streamlining and bug fixes.
>
> * Installations with workers will automatically get the SMP shared memory
>   caching. This seems to be the newest, most experimental part of the whole
>   update. So fair warning: expect new bugs in this area.
>
> * Installations choosing to explicitly configure the rock cache_dir type
>   get that. This is the older part, with prior production use as a 3.1
>   branch. Some changes were made for SMP support, so there is a potential
>   risk of new bugs, but hopefully not very many.
>
> Project details are at http://wiki.squid-cache.org/Features/RockStore,
> although it lacks a config how-to at present.
>
> This marks the end of the feature additions to 3.2 (StringNG is a
> performance boost which may still happen in the background later). The main
> focus now is on bugs, bugs, performance, polishing up, and did I mention
> the 291 bugs? before we can call it stable. The next formal package will be
> in ~10 days at the regular end-of-month packaging. Let's see how good we
> can all make it.
>
> Also, we have to decide now whether or not to drop COSS support from 3.2,
> given that rock storage fills the same architectural niche of efficient
> in-memory, disk-backed persistent storage for small objects on
> high-performance systems. Votes please?
> (if you want to keep it please say why)
>
> Amos
> --
> Please be using
>   Current Stable Squid 2.7.STABLE9 or 3.1.15
>   Beta testers wanted for 3.2.0.12

--
/kinkie
Re: [PATCH] Fix session helper crashing too rapidly
On Thu, 2011-09-22 at 11:10 +1200, Amos Jeffries wrote:
> > However, it still leaves the question: what is the best way to log errors
> > from the helper. At the moment, even with the patch applied, a user will
> > still get "The helpers are crashing too rapidly, need help" if they don't
> > specify a concurrency value. I was trying to fix this, as I spent hours
> > locating the problem!
>
> The correct way is with "ERROR:" or, in this case, "FATAL:" messages sent
> to stderr from the helper. The user will get N of that message over a
> 30-second period, then Squid's "helpers dying" message and abort.
>
> Why the message is not showing up in your cache.log is a mystery. Perhaps
> it was buffered and discarded by the segfault? Something like that?

I'm still not getting anything, even with the segfault bug fixed. Could you
point me in the direction of the code where this should happen (sorry, I
can't find it), so that I can try and work out why it's not happening for me
please?

Thanks for your reply to my other post.

Andy
Re: [PATCH] Fix session helper crashing too rapidly
On 22/09/11 18:32, Andrew Beverley wrote:
> On Thu, 2011-09-22 at 11:10 +1200, Amos Jeffries wrote:
> > > However, it still leaves the question: what is the best way to log
> > > errors from the helper. At the moment, even with the patch applied, a
> > > user will still get "The helpers are crashing too rapidly, need help"
> > > if they don't specify a concurrency value. I was trying to fix this, as
> > > I spent hours locating the problem!
> >
> > The correct way is with "ERROR:" or, in this case, "FATAL:" messages sent
> > to stderr from the helper. The user will get N of that message over a
> > 30-second period, then Squid's "helpers dying" message and abort.
> >
> > Why the message is not showing up in your cache.log is a mystery. Perhaps
> > it was buffered and discarded by the segfault? Something like that?
>
> I'm still not getting anything, even with the segfault bug fixed. Could you
> point me in the direction of the code where this should happen (sorry, I
> can't find it), so that I can try and work out why it's not happening for
> me please?
>
> Thanks for your reply to my other post.
>
> Andy

src/ipc.cc function ipcCreate() sets up the pipes and links stderr to the
debugs() log pipe.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.15
  Beta testers wanted for 3.2.0.12
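Since ipcCreate() relays a helper's stderr into the debugs() log pipe, the fix on the helper side is simply to write prefixed diagnostics to stderr. The sketch below is a hypothetical, minimal helper loop illustrating that convention (the request format and the "BH" reply are assumptions for illustration, not Squid's actual session helper code):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Handle one concurrency-protocol style request line ("<channel-id> <payload>").
// Anything that goes wrong is reported on stderr with an ERROR:/FATAL: prefix,
// which ipcCreate() relays into cache.log.
std::string handleRequest(const std::string &line)
{
    std::istringstream in(line);
    int channel;
    std::string payload;
    if (!(in >> channel >> payload)) {
        // diagnostic goes to stderr, never stdout (stdout is the protocol channel)
        std::cerr << "ERROR: helper: cannot parse request: " << line << std::endl;
        return "BH";
    }
    std::ostringstream out;
    out << channel << " OK";  // echo the channel id back with the verdict
    return out.str();
}
```

A real helper would call this in a loop over std::getline(std::cin, line), printing the result followed by a newline and flushing after each reply.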
Re: [PATCH] Bug fix: (ssl_crtd): Cannot add certificate to db when updating expired cert
If there are no objections I will commit this patch to trunk. It fixes a bug
which may cause problems in some installations...

On 09/15/2011 04:15 PM, Tsantilas Christos wrote:
> When the ssl_crtd helper needs to add a fresh certificate to the database
> but finds an expired certificate already stored, ssl_crtd deletes the
> expired certificate file from disk before adding the fresh one. However,
> the addition still fails because the expired certificate was not removed
> from the database indexes.
>
> This fix:
> - Adds code to update database indexes upon deletion of a row.
> - Polishes the certificate deletion code to avoid duplication.
>
> This is a Measurement Factory project.
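The bug pattern can be illustrated with a toy model (not the actual ssl_crtd code; all names here are hypothetical): the database keeps the on-disk file *and* in-memory indexes, so deleting only the file leaves a stale index entry and a later add with the same key still collides.

```cpp
#include <cstdio>   // std::remove
#include <map>
#include <set>
#include <string>

// Illustrative miniature of an indexed certificate store.
struct MiniCertDb {
    std::map<std::string, std::string> bySerial;  // serial -> cert file path
    std::set<std::string> byName;                 // index on common name

    bool deleteCert(const std::string &serial, const std::string &name) {
        std::map<std::string, std::string>::iterator it = bySerial.find(serial);
        if (it == bySerial.end())
            return false;
        std::remove(it->second.c_str()); // delete the file on disk ...
        bySerial.erase(it);              // ... AND keep every index in sync,
        byName.erase(name);              // otherwise the next add still fails
        return true;
    }
};
```

The fix described above is exactly the "AND keep every index in sync" half: before it, only the file-removal step happened.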
[PATCH] Fix CertificateDB locking scheme
Fix CertificateDB locking scheme

Currently we lock every file that is going to be accessed by the
CertificateDB code, even when it is not really needed because of a more
general lock.

This patch:
- Replaces the old FileLocker class with the Lock/Locker pair of classes.
- Removes most of the locks in CertificateDB, leaving only two: one for the
  main database and one for the file containing the current serial number.

This is a Measurement Factory project.

=== modified file 'src/ssl/certificate_db.cc'
--- src/ssl/certificate_db.cc	2011-09-15 16:34:52 +0000
+++ src/ssl/certificate_db.cc	2011-09-22 09:25:48 +0000
@@ -1,69 +1,125 @@
 /*
  * $Id$
  */
 #include "config.h"
 #include "ssl/certificate_db.h"
+#if HAVE_ERRNO_H
+#include <errno.h>
+#endif
 #if HAVE_FSTREAM
 #include <fstream>
 #endif
 #if HAVE_STDEXCEPT
 #include <stdexcept>
 #endif
 #if HAVE_SYS_STAT_H
 #include <sys/stat.h>
 #endif
 #if HAVE_SYS_FILE_H
 #include <sys/file.h>
 #endif
 #if HAVE_FCNTL_H
 #include <fcntl.h>
 #endif
 
-Ssl::FileLocker::FileLocker(std::string const &filename)
-        :fd(-1)
+#define HERE "(ssl_crtd) " << __FILE__ << ':' << __LINE__ << ": "
+
+Ssl::Lock::Lock(std::string const &aFilename) :
+        filename(aFilename),
+#if _SQUID_MSWIN_
+        hFile(INVALID_HANDLE_VALUE)
+#else
+        fd(-1)
+#endif
+{
+}
+
+bool Ssl::Lock::locked() const
 {
 #if _SQUID_MSWIN_
+    return hFile != INVALID_HANDLE_VALUE;
+#else
+    return fd != -1;
+#endif
+}
+
+void Ssl::Lock::lock()
+{
+
+#if _SQUID_MSWIN_
     hFile = CreateFile(TEXT(filename.c_str()), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
-    if (hFile != INVALID_HANDLE_VALUE)
-        LockFile(hFile, 0, 0, 1, 0);
+    if (hFile == INVALID_HANDLE_VALUE)
 #else
     fd = open(filename.c_str(), 0);
-    if (fd != -1)
-        flock(fd, LOCK_EX);
+    if (fd == -1)
 #endif
+        throw std::runtime_error("Failed to open file " + filename);
+
+#if _SQUID_MSWIN_
+    if (!LockFile(hFile, 0, 0, 1, 0))
+#else
+    if (flock(fd, LOCK_EX) != 0)
+#endif
+        throw std::runtime_error("Failed to get a lock of " + filename);
 }
 
-Ssl::FileLocker::~FileLocker()
+void Ssl::Lock::unlock()
 {
 #if _SQUID_MSWIN_
     if (hFile != INVALID_HANDLE_VALUE) {
         UnlockFile(hFile, 0, 0, 1, 0);
         CloseHandle(hFile);
+        hFile = INVALID_HANDLE_VALUE;
     }
 #else
     if (fd != -1) {
         flock(fd, LOCK_UN);
         close(fd);
+        fd = -1;
     }
 #endif
+    else
+        throw std::runtime_error("Lock is already unlocked for " + filename);
+}
+
+Ssl::Lock::~Lock()
+{
+    if (locked())
+        unlock();
+}
+
+Ssl::Locker::Locker(Lock &aLock, const char *aFileName, int aLineNo):
+        weLocked(false), lock(aLock), fileName(aFileName), lineNo(aLineNo)
+{
+    if (!lock.locked()) {
+        lock.lock();
+        weLocked = true;
+    }
+}
+
+Ssl::Locker::~Locker()
+{
+    if (weLocked)
+        lock.unlock();
 }
 
 Ssl::CertificateDb::Row::Row()
         :   width(cnlNumber)
 {
     row = new char *[width + 1];
     for (size_t i = 0; i < width + 1; i++)
         row[i] = NULL;
 }
 
 Ssl::CertificateDb::Row::~Row()
 {
     if (row) {
         for (size_t i = 0; i < width + 1; i++) {
             delete[](row[i]);
         }
         delete[](row);
     }
 }
@@ -113,60 +169,60 @@
 int Ssl::CertificateDb::index_name_cmp(const char **a, const char **b)
 {
     return(strcmp(a[Ssl::CertificateDb::cnlName], b[CertificateDb::cnlName]));
 }
 
 const std::string Ssl::CertificateDb::serial_file("serial");
 const std::string Ssl::CertificateDb::db_file("index.txt");
 const std::string Ssl::CertificateDb::cert_dir("certs");
 const std::string Ssl::CertificateDb::size_file("size");
 const size_t Ssl::CertificateDb::min_db_size(4096);
 
 Ssl::CertificateDb::CertificateDb(std::string const &aDb_path, size_t aMax_db_size, size_t aFs_block_size)
         :  db_path(aDb_path),
         serial_full(aDb_path + "/" + serial_file),
         db_full(aDb_path + "/" + db_file),
         cert_full(aDb_path + "/" + cert_dir),
         size_full(aDb_path + "/" + size_file),
         db(NULL),
         max_db_size(aMax_db_size),
         fs_block_size(aFs_block_size),
+        dbLock(db_full),
+        dbSerialLock(serial_full),
         enabled_disk_store(true)
 {
     if (db_path.empty() && !max_db_size)
         enabled_disk_store = false;
     else if ((db_path.empty() && max_db_size) || (!db_path.empty() && !max_db_size))
         throw std::runtime_error("ssl_crtd is missing the required parameter. There should be -s and -M parameters together.");
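The Lock/Locker split this patch introduces is a classic scoped-guard pattern. A stripped-down model (class and member names follow the patch; the bodies here are illustrative stand-ins for the real flock()/LockFile() calls) shows the intended usage: Lock holds the long-lived lock state, and a Locker in a nested scope becomes a no-op when an outer caller already holds the lock, so helper functions can take the guard unconditionally without deadlocking or double-unlocking.

```cpp
#include <stdexcept>

// Long-lived lock state (real code wraps a file descriptor / HANDLE).
class Lock {
    bool isLocked;
public:
    Lock() : isLocked(false) {}
    bool locked() const { return isLocked; }
    void lock() {
        if (isLocked)
            throw std::runtime_error("already locked");
        isLocked = true;   // real code: open() + flock(fd, LOCK_EX)
    }
    void unlock() {
        if (!isLocked)
            throw std::runtime_error("already unlocked");
        isLocked = false;  // real code: flock(fd, LOCK_UN) + close(fd)
    }
};

// Scoped guard: acquires only if nobody up the call chain holds the lock.
class Locker {
    bool weLocked;
    Lock &lock;
public:
    explicit Locker(Lock &aLock) : weLocked(false), lock(aLock) {
        if (!lock.locked()) {
            lock.lock();
            weLocked = true;  // only the outermost Locker will release
        }
    }
    ~Locker() {
        if (weLocked)
            lock.unlock();
    }
};
```

This is what lets the patch keep just two Lock objects (dbLock, dbSerialLock) while many code paths create Lockers freely.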
Re: SSL Bump Certificate Blacklist
On 09/20/2011 05:05 PM, Alex Rousskov wrote:
> On 09/20/2011 05:46 AM, Fabian Hugelshofer wrote:
> > On 09/16/2011 04:35 PM, Alex Rousskov wrote:
> > > On 09/16/2011 12:55 AM, Fabian Hugelshofer wrote:
> > > > What would you use to identify the certificates?
> > > > - Serial number?
> > > > - Serial number and common name?
> > > > - Serial number and issuer?
> > > > - Fingerprint (might not be available in each case)?
> > >
> > > I hope somebody already answered these important questions in the
> > > general CRL context!
> >
> > CRLs use the serial number in combination with the issuer. The issuer
> > name and the serial are probably not sufficient, as there can be multiple
> > CAs with the same name (certificates with different validity, different
> > trust chains, ...).
> >
> > If we implement a custom blacklist, we can be flexible and support all of
> > the above. To start with, I guess we should block all certificates with
> > DigiNotar in the chain. What are the next steps? Would you like me to
> > open a Squid bug?
>
> I believe the rule is that new features should be documented on the wiki,
> but opening a feature request bug report would not hurt, I guess.
>
> Can you summarize the minimum number of new ACL types that you think we
> should support to make this work? I know we want issuer name and serial
> number. Do we want notBefore and notAfter matching? Anything else?

Feature request opened: http://bugs.squid-cache.org/show_bug.cgi?id=3359

> Do we need to be able to test a match against a certificate at the Nth
> position in the chain? Or can we always test all certificates in the
> chain?

Like Amos wrote, one would have to go through the whole trust chain.

Regards,

Fabian
Re: SSL Bump Certificate Blacklist
On 09/22/2011 09:02 AM, Fabian Hugelshofer wrote:
> Feature request opened: http://bugs.squid-cache.org/show_bug.cgi?id=3359
>
> > Do we need to be able to test a match against a certificate at the Nth
> > position in the chain? Or can we always test all certificates in the
> > chain?
>
> Like Amos wrote, one would have to go through the whole trust chain.

Good. That makes configuration support easier, as all options will apply to
each certificate in the chain.

Thank you,

Alex.
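The agreed approach (match issuer plus serial, checking every certificate in the chain rather than only the leaf) can be sketched as follows; the struct, function, and the values in the usage note are all hypothetical illustrations, not Squid's eventual ACL implementation:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical identity key for a blacklisted certificate, per the CRL-style
// scheme discussed above: issuer DN + serial number.
struct CertId {
    std::string issuer;
    std::string serial;
};

// Walk the WHOLE trust chain (leaf, intermediates, root) and flag the
// connection if any link matches a blacklist entry -- e.g. the DigiNotar case,
// where a compromised CA can appear anywhere above the leaf.
bool chainBlacklisted(const std::vector<CertId> &chain,
                      const std::vector<CertId> &blacklist)
{
    for (size_t i = 0; i < chain.size(); ++i)          // every link, not just chain[0]
        for (size_t j = 0; j < blacklist.size(); ++j)
            if (chain[i].issuer == blacklist[j].issuer &&
                chain[i].serial == blacklist[j].serial)
                return true;
    return false;
}
```

Because the check applies uniformly to each position, no "Nth certificate" configuration knob is needed, which matches Alex's observation above.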
Re: Squid-3.2 status update
On 09/21/2011 10:27 PM, Amos Jeffries wrote:
> SMP rock storage changes hit 3.2 in r11342 a few minutes ago. Expect some
> new bugs to arrive and some to die mysteriously.
>
> * installations with no workers are expected to only see small benefits via
>   the existing storage code streamlining and bug fixes.

And even small benefits are not currently guaranteed. There are probably more
new bugs than improvements if you are not into SMP caching, especially in
corner cases. Non-SMP setups were not the focus of this work.

> * installations with workers will automatically get the SMP shared memory
>   caching. This seems to be the newest, most experimental part of the whole
>   update. So fair warning: expect new bugs in this area.

If you use SMP caching and do not want shared memory caching, see
memory_cache_shared in squid.conf.

From a developer's point of view, the biggest change is that shared caching
requires using the memory cache via Store APIs instead of the process-global
store_table whenever you are curious about something related to caching.
Currently, store_table is still used for in-transit objects and possibly
other half-cached entries (leading to bugs). We still need to wean a lot of
Squid code from store_table. Eventually, that global should be removed.

> * installations choosing to explicitly configure the rock cache_dir type
>   get that. This is the older part with prior production use as a 3.1
>   branch. Some changes made for SMP support. So a potential risk of new
>   bugs, but hopefully not very many.

I would say that a _lot_ of changes were made for SMP support. The underlying
database design and Store API code is mostly the same, but the v3.1 code used
blocking I/O within the same Squid process. The now-official code is meant
for use with disker kid processes, which means a lot of internal changes and
complications.

There are also quite a few feature holes. For example, Rock cache_dir hot
reconfiguration is not yet supported. Statistics reporting is lacking. Squid
uses a lot more memory for shared I/O pages than it needs, and there is no
good way to control that.

> Project details are http://wiki.squid-cache.org/Features/RockStore.
> Although lacking config how-to at present.

If you use Rock store, use the round-robin cache_dir selector. If you use
Rock store and your disks are close to 100% utilized, you probably need to
use the max-swap-rate and swap-timeout options. Remember that diskers cannot
slow down workers, so diskers can be overwhelmed with swap requests.

> Also, we have to decide now whether or not to drop COSS support from 3.2,
> given that rock storage fills the same architectural niche of efficient
> in-memory, disk-backed persistent storage for small objects on high
> performance systems. Votes please?
> (if you want to keep it please say why)

I would propose the following decision tree:

1. If it can compile and start now, keep it.
2. If it could compile and start before the Rock changes, keep it.
3. Otherwise, drop it from trunk in one year and do not require COSS updates
   when something it uses changes.

Personally, I wish COSS was available and worked great in Squid3, but nobody
seems to want it badly enough to do something about it. We should keep it as
long as there is a reasonable chance somebody will pick it up. When we
started working on Rock Store, comparing COSS with ufs was very useful, but
the Rock interpretation of the Store APIs is now a better starting point for
any new store, IMO.

HTH,

Alex.
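Since the wiki page admittedly lacks a config how-to, Alex's tuning advice (round-robin dir selection; max-swap-rate and swap-timeout on busy disks) might look roughly like this in squid.conf. This is a hedged sketch only: every path and number below is a placeholder, not a recommendation, and the exact option syntax should be checked against the shipped squid.conf.documented.

```
# hypothetical example -- sizes, paths and rates are placeholders
workers 4

# per Alex's advice: use the round-robin cache_dir selector with Rock
store_dir_select_algorithm round-robin

# a Rock cache_dir: <path> <size-in-MB>; Rock targets small objects,
# max-size caps the object size stored here.
# max-swap-rate/swap-timeout protect nearly-saturated disks, since
# diskers cannot slow down workers and can be overwhelmed otherwise.
cache_dir rock /var/cache/squid/rock 512 max-size=32768 max-swap-rate=200 swap-timeout=300
```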
Re: [PATCH] Ignore and warn about attempts to reconfigure static Rock store options.
On 09/20/2011 10:17 AM, Dmitry Kurochkin wrote:

On Tue, 20 Sep 2011 19:41:31 +0400, Dmitry Kurochkin
dmitry.kuroch...@measurement-factory.com wrote:

On Tue, 20 Sep 2011 09:08:33 -0600, Alex Rousskov
rouss...@measurement-factory.com wrote:

On 09/20/2011 04:30 AM, Dmitry Kurochkin wrote:

Hi Alex, Amos.

On Mon, 19 Sep 2011 23:07:43 -0600, Alex Rousskov
rouss...@measurement-factory.com wrote:

On 09/19/2011 08:10 PM, Amos Jeffries wrote:

On Tue, 20 Sep 2011 02:40:13 +0400, Dmitry Kurochkin wrote:

The attached patch does some polishing for Rock store cache_dir
reconfiguration. Some Rock store options cannot be changed dynamically: path,
size, and max-size. Before the change, there were no checks during
reconfigure to prevent changing these options. This may lead to Rock cache
corruption and other bugs. The patch adds the necessary checks to the Rock
store code. If the user tries to change an option that cannot be updated
dynamically, a warning is reported and the value is left unchanged.

Of itself the patch looks okay. BUT... IMO _path_ should never be
re-configurable.

Good point! I wonder whether admins expect cache_dir lines to be tied to
paths (e.g., changing cache_dir order changes nothing) or to position (i.e.,
changing the path can be supported). We should document what we actually
support (but that documentation change, if needed, is outside this patch's
scope).

If rock store can't handle multiple cache_dir entries of type rock with
unique paths, there is something deeper wrong.

It can. Nothing special here.

AFAIK the path should be the unique key to identify different cache_dir
entries. Changing the path means a completely different dir is now being
referenced. If not already loaded, the dir needs to be loaded as if on
startup. If any existing dirs are no longer referenced, the old cache_dir
details need discarding.

I tend to agree: path-matching is probably better than position-matching.

Dmitry, could you please check whether the current code maps cache_dir lines
to SwapDir objects using the cache_dir position in squid.conf or using
cache_dir paths? If it is the latter, you can remove the path-related warning
from the patch, right?

Cache dir mapping uses paths (though it ignores case). So it works as you
would expect. I knew that when I was working on the patch and I still
included the path check, because it feels to me like we should check the path
if it is passed to SwapDir::reconfigure(). Otherwise we should not pass the
path to SwapDir::reconfigure() at all. (There is also an index parameter
which should be checked or removed as well, following the same arguments.)

What do you think about:
* remove the path check from this patch
* prepare another patch that removes the index and path parameters from
  SwapDir::reconfigure()

Sounds like a good plan to me.

Attached patch without the path check.

Oops, forgot to remove unneeded changes in src/fs/rock/RockSwapDir.h.

Committed as trunk r11754.

Thank you,

Alex.
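The ignore-and-warn behavior this patch adds can be sketched generically. The helper below is hypothetical (Squid's actual checks live in the Rock SwapDir reconfigure code, not in a template): during reconfigure, a statically-fixed option's new value is compared against the running value; on mismatch Squid warns and keeps the old value instead of risking cache corruption.

```cpp
#include <iostream>

// Returns true if the option is unchanged (nothing to do), false if the
// attempted change was rejected. The running value is never modified:
// static options such as a Rock cache_dir's path, size, and max-size
// simply cannot change while Squid is running.
template <typename T>
bool reconfigureStatic(const char *name, const T &current, const T &fromConfig)
{
    if (current != fromConfig) {
        std::cerr << "WARNING: '" << name
                  << "' cannot be changed dynamically; keeping the old value"
                  << std::endl;
        return false;  // new value ignored
    }
    return true;       // same value: a no-op reconfigure
}
```

A caller would invoke this once per static option while parsing the reconfigured cache_dir line, before applying any dynamic options.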
Re: [PATCH] Remove SwapDir::reconfigure() arguments since they are not used.
On 09/20/2011 08:42 PM, Amos Jeffries wrote:

On Tue, 20 Sep 2011 20:32:56 +0400, Dmitry Kurochkin wrote:

Hi all.

This is a small cleanup patch discussed recently in another thread. Note it
depends on rock-reconfigure-t3.patch.

Before the change, SwapDir::reconfigure() took index and path arguments, but
neither of them was actually used: neither index nor path can be changed
during reconfigure, and both are available as SwapDir members. So there is no
reason to have these arguments.

Regards,

Dmitry

+1. Please commit asap.

Committed as trunk r11755.

Thank you,

Alex.
Re: [PATCH] Fix cache_dir type check during reconfiguration.
On 09/19/2011 07:44 PM, Amos Jeffries wrote:

On Tue, 20 Sep 2011 02:38:08 +0400, Dmitry Kurochkin wrote:

Hello.

This is a small and simple fix for cache_dir reconfiguration.
SwapDir::type() returns C strings, which should be compared with strcmp(3)
instead of checking pointers for equality.

Regards,

Dmitry

Eek! +1, please commit.

Committed as trunk r11756.

Alex.
Build failed in Jenkins: 3.HEAD-i386-OpenBSD #1126
See http://build.squid-cache.org/job/3.HEAD-i386-OpenBSD/1126/changes

Changes:

[Amos Jeffries] Workaround: disable translation of error-details.txt
  po2txt was translating sub-strings inside name: fields. This is a
  TEMPORARY measure to prevent breaking any more installations.

[Alex Rousskov] Fixed SwapDir::reconfigure() arguments broken by my poor
  conflict resolution.

[Alex Rousskov] Fix cache_dir type check during reconfiguration.
  SwapDir::type() returns C strings which should be compared with strcmp(3)
  instead of checking pointers for equality.

[Alex Rousskov] Remove SwapDir::reconfigure() arguments since they are not
  used. Before the change, SwapDir::reconfigure() took index and path
  arguments, but none of them was actually used: neither index nor path can
  be changed during reconfigure. And both index and path are available as
  SwapDir members, so there is no reason to have these arguments.

[Alex Rousskov] Ignore and warn about attempts to reconfigure static Rock
  store options. Some Rock store options cannot be changed dynamically: path,
  size, and max-size. Before the change, there were no checks during
  reconfigure to prevent changing these options. This may lead to Rock cache
  corruption and other bugs. The patch adds necessary checks to Rock store
  code. If the user tries to change an option that cannot be updated
  dynamically, a warning is reported and the value is left unchanged.

--
[...truncated 1019 lines...]
checking for string.h... (cached) yes
checking for memory.h... (cached) yes
checking for strings.h... (cached) yes
checking for inttypes.h... (cached) yes
checking for stdint.h... (cached) yes
checking for unistd.h... (cached) yes
checking for dlfcn.h... (cached) yes
checking for objdir... (cached) .libs
checking if ccache gcc supports -fno-rtti -fno-exceptions... (cached) no
checking for ccache gcc option to produce PIC... (cached) -fPIC -DPIC
checking if ccache gcc PIC flag -fPIC -DPIC works... (cached) yes
checking if ccache gcc static flag -static works... (cached) yes
checking if ccache gcc supports -c -o file.o... (cached) yes
checking if ccache gcc supports -c -o file.o... (cached) yes
checking whether the ccache gcc linker (/usr/bin/ld) supports shared libraries... yes
checking whether -lc should be explicitly linked in... (cached) yes
checking dynamic linker characteristics... openbsd4.9 ld.so
checking how to hardcode library paths into programs... immediate
checking for shl_load... (cached) no
checking for shl_load in -ldld... (cached) no
checking for dlopen... (cached) yes
checking whether a program can dlopen itself... (cached) yes
checking whether a statically linked program can dlopen itself... (cached) no
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking which extension is used for runtime loadable modules... (cached) .so
checking which variable specifies run-time module search path... (cached) LD_LIBRARY_PATH
checking for the default library search path... (cached) /usr/lib
checking for library containing dlopen... (cached) none required
checking for dlerror... (cached) yes
checking for shl_load... (cached) no
checking for shl_load in -ldld... (cached) no
checking for dld_link in -ldld... (cached) no
checking for _ prefix in compiled symbols... (cached) no
checking whether deplibs are loaded by dlopen... (cached) yes
checking for argz.h... (cached) no
checking for error_t... (cached) no
checking for argz_add... (cached) no
checking for argz_append... (cached) no
checking for argz_count... (cached) no
checking for argz_create_sep... (cached) no
checking for argz_insert... (cached) no
checking for argz_next... (cached) no
checking for argz_stringify... (cached) no
checking whether libtool supports -dlopen/-dlpreopen... (cached) yes
checking for unistd.h... (cached) yes
checking for dl.h... (cached) no
checking for sys/dl.h... (cached) no
checking for dld.h... (cached) no
checking for mach-o/dyld.h... (cached) no
checking for dirent.h... (cached) yes
checking for closedir... (cached) yes
checking for opendir... (cached) yes
checking for readdir... (cached) yes
checking for strlcat... (cached) yes
checking for strlcpy... (cached) yes
configure: updating cache /tmp/config.cache.16653
configure: creating ./config.status
config.status: creating Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands

=== configuring in lib/libTrie (http://build.squid-cache.org/job/3.HEAD-i386-OpenBSD/ws/btlayer-00-default/lib/libTrie)
configure: running /bin/sh ../../.././test-suite/../lib/libTrie/configure --disable-option-checking '--prefix=/usr/local/squid' 'CC=ccache gcc' 'CFLAGS=' 'CXX=ccache g++' 'CXXFLAGS=' '--enable-ltdl-convenience' --cache-file=/tmp/config.cache.16653
Jenkins build is back to normal : 3.HEAD-i386-OpenBSD #1127
See http://build.squid-cache.org/job/3.HEAD-i386-OpenBSD/1127/