Re: [squid-users] Mingw(patch for long file pointers) --with-large-files
The store URL mismatch messages mean an object wasn't fetched from the cache, but the client won't notice the difference - it will just be a MISS. I've seen this crop up when storeurl rewrite rules change and generate different backend names for existing objects.

As for COSS, I don't suggest running it on Windows; I have never tested it there.

As for low-priority transfers: if your OS supports setting the TOS on an already-established TCP connection, then it wouldn't be difficult to patch Squid to reset the TOS for that socket mid-flight on quick_abort.

2c,
Adrian

2008/8/20 chudy [EMAIL PROTECTED]:

Even using the 2.7.STABLE4 version (binary for Windows) with newly created swap files, it is still the same. I've been using the storeurl and aufs features since squid HEAD; now that I'm trying to use COSS, these warnings came up.

Henrik Nordstrom-5 wrote:

On Sun 2008-08-17 at 20:41 -0700, chudy wrote:

One thing I've been seeing is warnings about failing to unpack meta data, which I never saw with aufs.

Did you wipe your cache when changing the file size API? 32-bit and 64-bit caches may be incompatible.

Regards
Henrik

...or maybe storeurl is not final, because storeurl mismatches when the content is stored in memory and revalidated. On second thought, there is no need to use storeurl on smaller objects, since speed is our concern, and the objects that usually give the meta data warnings are the smaller ones. I've tried "storeurl_access deny" on content smaller than maximum_object_size_in_memory and it seems to work fine, but I still need confirmation.

On another note, I've been wondering: if an object's download is cancelled by the client, can Squid continue downloading it, but at the lowest bandwidth priority? Is that possible, or is there any workaround to make it happen? Setting quick_abort_max to -1 (correct me if I'm wrong) keeps using the same bandwidth, and it would be total congestion if these files are videos.
It would be really nice if the continued download ran at the lowest priority, and even better if a client retry brought the priority back to normal.

--
View this message in context: http://www.nabble.com/Mingw%28patch-for-long-file-pointers%29---with-large-files-tp19025674p19070570.html
Sent from the Squid - Users mailing list archive at Nabble.com.
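Adrian's suggestion above - resetting the TOS on an established connection when a transfer is demoted to low priority - can be sketched outside of Squid. A minimal Python sketch (Squid itself is C; this only demonstrates the OS-level mechanism, not a Squid patch): on Linux and most Unixes, IP_TOS may be changed at any point in a socket's lifetime, including mid-connection. The 0x08 value (the classic IPTOS_THROUGHPUT bit) is an assumption standing in for whatever "low priority" marking your traffic shaper classifies on.

```python
import socket

# Hypothetical "low priority" TOS marking for aborted-but-continuing
# transfers. 0x08 is IPTOS_THROUGHPUT; any value your shaper matches on
# would do. (Linux clears the two low ECN bits on TCP sockets, so values
# like 0x02 would be masked to 0 - 0x08 is unaffected.)
LOW_PRIORITY_TOS = 0x08

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# IP_TOS can be set after socket creation, and equally after connect() -
# which is what a quick_abort hook inside Squid would need to rely on.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LOW_PRIORITY_TOS)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # → 8
s.close()
```

The actual bandwidth reduction would then be done by the network (a shaper or router queueing on the TOS/DSCP field); Squid would only mark the traffic.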
Re: [squid-users] Mingw(patch for long file pointers) --with-large-files
Hi,

At 14.41 18/08/2008, chudy wrote:

Using MinGW to compile squid --with-large-files, following "Patch MinGW for long file pointers" (http://mdsh.com/wiki/jsp/Wiki?Mplayer:build%20on%20MinGWhighlight=build) and the edits [cut]. I just want confirmation that I did the right thing. For now Squid is running fine with:

./configure --enable-win32-service --enable-storeio=aufs,coss --enable-removal-policies=heap,lru --enable-snmp --disable-wccp --disable-wccpv2 --enable-large-cache-files --prefix=c:/squid --with-large-files --enable-err-languages=english --enable-cachemgr-hostname=server

I've attached my squid.conf and the store_rewrite and url_rewrite helpers:
http://www.nabble.com/file/p19025674/squid.conf squid.conf
http://www.nabble.com/file/p19025674/test.pl test.pl
http://www.nabble.com/file/p19025674/rewrite.pl rewrite.pl

One thing I've been seeing is warnings about failing to unpack meta data, which I never saw with aufs - and the same warnings appear with the 2.7 STABLE version when using COSS.

This patch could be incomplete. I don't know how the MinGW internals are arranged, so I think that you should ask about this on the mingw-users mailing list.

On the Squid side, there is probably a conflicting definition in squid_mswin.h at line 174.

Regards

Guido

-
Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1
10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135
Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/
Re: [squid-users] Mingw(patch for long file pointers) --with-large-files
On Sun 2008-08-17 at 20:41 -0700, chudy wrote:

One thing I've been seeing is warnings about failing to unpack meta data, which I never saw with aufs.

Did you wipe your cache when changing the file size API? 32-bit and 64-bit caches may be incompatible.

Regards
Henrik
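Henrik's point about 32-bit and 64-bit caches being incompatible can be illustrated with a toy example. This is not Squid's actual swap metadata layout - just a sketch of why a header written by a build with a 32-bit size field cannot be unpacked by a build expecting a 64-bit one, which surfaces as "failed to unpack meta data" style warnings:

```python
import struct

# Toy swap-metadata header: (timestamp, object size).
# Hypothetical layouts, not Squid's real on-disk format:
HEADER_32 = "<lL"  # 4-byte size field, as an old 32-bit file API build might write
HEADER_64 = "<lQ"  # 8-byte size field, as a --with-large-files build expects

record = struct.pack(HEADER_32, 1219017660, 4096)  # written by the old build

# The large-files build reads the same bytes with the wider layout and fails:
try:
    struct.unpack(HEADER_64, record)
except struct.error as e:
    print("unpack failed:", e)
```

This is why wiping the cache (or recreating the swap files) after changing the file size API is the usual remedy: the on-disk records simply have a different shape.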
[squid-users] Mingw(patch for long file pointers) --with-large-files
Using MinGW to compile squid --with-large-files, following "Patch MinGW for long file pointers" (http://mdsh.com/wiki/jsp/Wiki?Mplayer:build%20on%20MinGWhighlight=build), I edited the io.h file:

    #ifdef __MSVCRT__
    _CRTIMP __int64 __cdecl _filelengthi64(int);
    _CRTIMP long __cdecl _findfirsti64(const char*, struct _finddatai64_t*);
    _CRTIMP int __cdecl _findnexti64(long, struct _finddatai64_t*);
    _CRTIMP __int64 __cdecl _lseeki64(int, __int64, int);
    _CRTIMP __int64 __cdecl _telli64(int);
    - removed

and the stat.h file:

    #if defined (__MSVCRT__)
    - removed
    struct _stati64 {
        _dev_t st_dev;
        _ino_t st_ino;
        unsigned short st_mode;
        short st_nlink;
        short st_uid;
        short st_gid;
        _dev_t st_rdev;
        __int64 st_size;
        time_t st_atime;
        time_t st_mtime;
        time_t st_ctime;
    };
    struct __stat64 {
        _dev_t st_dev;
        _ino_t st_ino;
        _mode_t st_mode;
        short st_nlink;
        short st_uid;
        short st_gid;
        _dev_t st_rdev;
        __int64 st_size;
        __time64_t st_atime;
        __time64_t st_mtime;
        __time64_t st_ctime;
    };
    #endif /* __MSVCRT__ */
    - up to this line

I just want confirmation that I did the right thing. For now Squid is running fine with:

./configure --enable-win32-service --enable-storeio=aufs,coss --enable-removal-policies=heap,lru --enable-snmp --disable-wccp --disable-wccpv2 --enable-large-cache-files --prefix=c:/squid --with-large-files --enable-err-languages=english --enable-cachemgr-hostname=server

I've attached my squid.conf and the store_rewrite and url_rewrite helpers:
http://www.nabble.com/file/p19025674/squid.conf squid.conf
http://www.nabble.com/file/p19025674/test.pl test.pl
http://www.nabble.com/file/p19025674/rewrite.pl rewrite.pl

One thing I've been seeing is warnings about failing to unpack meta data, which I never saw with aufs - and the same warnings appear with the 2.7 STABLE version when using COSS.
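The storeurl setup described in this thread, including the later idea of excluding smaller objects from rewriting, might look roughly like the fragment below. The directive names (storeurl_rewrite_program, storeurl_rewrite_children, storeurl_access) are Squid 2.7's; the ACL definition is hypothetical, since squid.conf has no response-size ACL - here a urlpath_regex matching typically-small static files stands in for "content smaller than maximum_object_size_in_memory", and the helper path matches the attachment name above:

```
# Squid 2.7 storeurl setup (sketch, not the poster's actual config)
storeurl_rewrite_program c:/squid/etc/rewrite.pl
storeurl_rewrite_children 5

# Hypothetical stand-in for "smaller objects": file types that usually fall
# below maximum_object_size_in_memory are skipped by the rewriter.
acl smaller_content urlpath_regex -i \.(gif|png|css|js|ico)$
storeurl_access deny smaller_content
storeurl_access allow all
```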