Re: [PATCH/RFC v3] Optionally allow binaries to run against older SQLite
On Friday, November 04, 2011 1:24 AM, "Jonathan Nieder" wrote: > Daniel Shahaf wrote: > > On Wednesday, November 02, 2011 4:25 PM, "Jonathan Nieder" > > wrote: > > >> I'm not very happy about putting -DSVN_SQLITE_COMPAT_VERSION in CFLAGS > >> --- does subversion have a config.h somewhere? > > > > http://s.apache.org/xy-problem --- Why do you think you need config.h? > > Sorry, that was cryptic of me. The -DSVN_SQLITE_COMPAT_VERSION looked > lonely on the command line not surrounded by settings like -DHAVE_INT > and -DHAVE_PRINTF --- I was afraid there might be some other idiom I > was missing. > > I'm happy keeping it in CFLAGS. > There is the svn_config_private.h idiom. There is also the ${target}_CFLAGS Makefile variable, generated by the build/*/*.py infrastructure. I don't feel that throwing it in CFLAGS is intuitively wrong, in our build system. > Your other comments looked reasonable, too. Details: > > - there were no unpatched instances of SQLITE_VERSION_NUMBER, but >there was one unpatched instance of SQLITE_VERSION. So now I do > > -DSVN_SQLITE_MIN_VERSION_NUMBER=1002003 > -DSVN_SQLITE_MIN_VERSION="1.2.3" > Okay. > - it would be simple to make the configure script take care of the >default for SVN_SQLITE_MIN_VERSION_NUMBER instead of the header >doing so, but I prefer the latter since it means you can use >"gcc -c" directly to build without remembering the -D >option and still have a chance of the result working. So I've >left that alone for now. > +1 > - assignment to $2 in sqlite.m4 could be replaced by > > ver_num=`expr ...` > $2=$ver_num > >or: > > result_var="$2" > ... > eval "$result_var=\$ver_num" > >Left untouched for now. > What I actually had in mind was: f() { echo foo } bar=`f` That's primarily because I'm not used to the $1=* idiom (in its various forms as you suggest above). If it's a standard idiom for configure script writing I wouldn't feel strongly to change it. > [[[ > Introduce a --enable-sqlite-compatibility-version=X.Y.Z option for > ./configure to allow people building Subversion to specify how old the > system being deployed on might be. Setting the compatibility version > to an older version turns on code in subversion that works around > infelicities in older versions and relaxes the version check ("SQLite > compiled for 3.7.4, but running with 3.7.3") at initialization time. > > If the compat version is set to a version before the minimum supported > version (3.6.18), the build will fail early so the person building can > adjust her expectations. > Looks good. > The default for the compat version is the currently installed version, > so there should be no change in behavior for users not passing this > option to the configure script. > I'll read/review the rest later.
[PATCH/RFC v3] Optionally allow binaries to run against older SQLite
Daniel Shahaf wrote: > On Wednesday, November 02, 2011 4:25 PM, "Jonathan Nieder" > wrote: >> I'm not very happy about putting -DSVN_SQLITE_COMPAT_VERSION in CFLAGS >> --- does subversion have a config.h somewhere? > > http://s.apache.org/xy-problem --- Why do you think you need config.h? Sorry, that was cryptic of me. The -DSVN_SQLITE_COMPAT_VERSION looked lonely on the command line not surrounded by settings like -DHAVE_INT and -DHAVE_PRINTF --- I was afraid there might be some other idiom I was missing. I'm happy keeping it in CFLAGS. Your other comments looked reasonable, too. Details: - best to define SVN_SQLITE_MIN_VERSION_NUMBER in the header that uses it, instead of relying on sqlite.c being the only source file that does version checks. Fixed. - there were no unpatched instances of SQLITE_VERSION_NUMBER, but there was one unpatched instance of SQLITE_VERSION. So now I do -DSVN_SQLITE_MIN_VERSION_NUMBER=1002003 -DSVN_SQLITE_MIN_VERSION="1.2.3" - it would be simple to make the configure script take care of the default for SVN_SQLITE_MIN_VERSION_NUMBER instead of the header doing so, but I prefer the latter since it means you can use "gcc -c" directly to build without remembering the -D option and still have a chance of the result working. So I've left that alone for now. - assignment to $2 in sqlite.m4 could be replaced by ver_num=`expr ...` $2=$ver_num or: result_var="$2" ... eval "$result_var=\$ver_num" Left untouched for now. [[[ Introduce a --enable-sqlite-compatibility-version=X.Y.Z option for ./configure to allow people building Subversion to specify how old the system being deployed on might be. Setting the compatibility version to an older version turns on code in subversion that works around infelicities in older versions and relaxes the version check ("SQLite compiled for 3.7.4, but running with 3.7.3") at initialization time. If the compat version is set to a version before the minimum supported version (3.6.18), the build will fail early so the person building can adjust her expectations. The default for the compat version is the currently installed version, so there should be no change in behavior for users not passing this option to the configure script. * subversion/include/private/svn_dep_compat.h (SVN_SQLITE_MIN_VERSION_NUMBER, SVN_SQLITE_MIN_VERSION): Set to SQLITE_VERSION_NUMBER, SQLITE_VERSION if undefined. (SQLITE_VERSION_AT_LEAST): Check SVN_SQLITE_MIN_VERSION_NUMBER instead of SQLITE_VERSION_NUMBER. * subversion/libsvn_subr/sqlite.c (init_sqlite): Check sqlite version against SVN_SQLITE_MIN_VERSION_NUMBER instead of SQLITE_VERSION_NUMBER. * configure.ac: Provide a --enable-sqlite-compatibility-version switch that sets SVN_SQLITE_MIN_VERSION_NUMBER and SVN_SQLITE_MIN_VERSION. * build/ac-macros/sqlite.m4 (SVN_SQLITE_VERNUM_PARSE): Make it reusable (in particular for configure.ac), by taking a version string and a variable to store the corresponding version number as arguments. (SVN_SQLITE_MIN_VERNUM_PARSE): Simplify by calling SVN_SQLITE_VERNUM_PARSE. (SVN_SQLITE_PKG_CONFIG): Adapt SVN_SQLITE_VERNUM_PARSE call to the new calling convention. 
]]] Index: subversion/include/private/svn_dep_compat.h === --- subversion/include/private/svn_dep_compat.h (revision 1197399) +++ subversion/include/private/svn_dep_compat.h (working copy) @@ -107,6 +107,32 @@ #endif /* SERF_VERSION_AT_LEAST */ +/** + * By default, if libsvn is built against one version of SQLite + * and then run using an older version, svn will error out: + * + * svn: Couldn't perform atomic initialization + * svn: SQLite compiled for 3.7.4, but running with 3.7.3 + * + * That can be annoying when building on a modern system in order + * to deploy on a less modern one. So these constants allow one + * to specify how old the system being deployed on might be. + * For example, + * + * EXTRA_CFLAGS += -DSVN_SQLITE_MIN_VERSION_NUMBER=3007003 + * EXTRA_CFLAGS += '-DSVN_SQLITE_MIN_VERSION="3.7.3"' + * + * turns on code that works around infelicities in older versions + * as far back as 3.7.3 and relaxes the check at initialization time + * to permit them. + * + * @since New in 1.8. + */ +#ifndef SVN_SQLITE_MIN_VERSION_NUMBER +#define SVN_SQLITE_MIN_VERSION_NUMBER SQLITE_VERSION_NUMBER +#define SVN_SQLITE_MIN_VERSION SQLITE_VERSION +#endif /* SVN_SQLITE_MIN_VERSION_NUMBER */ + /** * Check at compile time if the SQLite version is at least a certain * level. * @param major The major version component of the version checked @@ -120,7 +146,7 @@ */ #ifndef SQLITE_VERSION_AT_LEAST #define SQLITE_VERSION_AT_LEAST(major,minor,patch) \ -((major*1000000 + minor*1000 + patch) <= SQLITE_VERSION_NUMBER) +((major*1000000 + minor*1000 + patch) <= SVN_SQLITE_MIN_VERSION_NUMBER)
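For readers not familiar with SQLite's version encoding, here is a minimal, self-contained sketch (not part of the patch) of how the new macros compose, assuming the standard encoding major*1000000 + minor*1000 + patch and a hypothetical build whose headers are 3.7.4 but whose compatibility target is 3.7.3:

    /* Illustrative only: simplified copies of the macros from the patch.
     * SQLITE_VERSION_NUMBER encodes major*1000000 + minor*1000 + patch,
     * so 3007003 means 3.7.3. */
    #include <stdio.h>

    #define SQLITE_VERSION_NUMBER 3007004         /* what the build-time headers say */
    #define SVN_SQLITE_MIN_VERSION_NUMBER 3007003 /* stands in for the -D flag in CFLAGS;
                                                     without it, the patch defaults this
                                                     to SQLITE_VERSION_NUMBER */

    /* The compile-time feature check is now keyed to the oldest SQLite the
     * binary promises to run against, not the one it was built against. */
    #define SQLITE_VERSION_AT_LEAST(major, minor, patch) \
      (((major)*1000000 + (minor)*1000 + (patch)) <= SVN_SQLITE_MIN_VERSION_NUMBER)

    int main(void)
    {
      printf("built against %d, compatible back to %d\n",
             SQLITE_VERSION_NUMBER, SVN_SQLITE_MIN_VERSION_NUMBER);
      printf("3.7.4 features usable: %d\n", SQLITE_VERSION_AT_LEAST(3, 7, 4));   /* prints 0 */
      printf("3.6.18 features usable: %d\n", SQLITE_VERSION_AT_LEAST(3, 6, 18)); /* prints 1 */
      return 0;
    }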
Re: [PATCH] commit --include-externals (v2)
Neels, Question from a user standpoint: would it be possible to request this behavior as default via some config file option? Regards, Alexey. On Thursday, November 03, 2011 06:33:56 am Neels J Hofmeyr wrote: > I've rinsed and improved my proposed feature dubbed > svn commit --include-externals > (Related issues: #1167, #3563) > > I hope this will cut a much clearer path through the jungle that is > externals behavior. Now I'm hoping for some reviews! > > The idea is to have file and dir externals behave the same way during > commit, and to provide a way to recursively commit all externals (that are > from the same repository and are not revision-pegged). > > Who'd have guessed, there are a few corners that would be good to have > others' opinions on. > > (To: CMike and Bert because you two were involved in the original > discussion: http://svn.haxx.se/dev/archive-2011-08/0617.shtml ) > > Thanks! > ~Neels
[RFC/PATCH] Re: auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
Stefan Sperling wrote: > On Thu, Nov 03, 2011 at 02:54:04AM -0500, Jonathan Nieder wrote: >> START: auth-test >> svn_tests: E26: svn_auth_get_platform_specific_client_providers >> should return an array of 5 providers >> FAIL: lt-auth-test 1: test retrieving platform-specific auth providers [...] > Try setting LD_LIBRARY_PATH so that libs from your custom Subversion build > are found when Subversion's kwallet and gnome-keyring plugins are dlopen()ed. Nice! As long as I run "make all" first (to make sure the kwallet and gnome-keyring plugins were built), setting LD_LIBRARY_PATH works. Here's a rough patch to make that second step unnecessary. [[[ * Makefile.in (check): Set LD_LIBRARY_PATH so the just-built versions of auth plugins are used when running the test suite. Otherwise, auth-test can easily fail because some auth provider it expects to find is missing. ]]] Index: Makefile.in === --- Makefile.in (revision 1197345) +++ Makefile.in (working copy) @@ -444,6 +444,10 @@ check-javahl: check-apache-javahl check-all-javahl: check-apache-javahl check-tigris-javahl +gnome_auth_dir = $(abs_builddir)/subversion/libsvn_auth_gnome_keyring/.libs +kwallet_auth_dir = $(abs_builddir)/subversion/libsvn_auth_kwallet/.libs +auth_plugin_dirs = $(gnome_auth_dir):$(kwallet_auth_dir) + # "make check CLEANUP=true" will clean up directories for successful tests. # "make check TESTS=subversion/tests/cmdline/basic_tests.py" # will perform only basic tests (likewise for other tests). @@ -484,6 +488,7 @@ check: bin @TRANSFORM_LIBTOOL_SCRIPTS@ $(TEST_DEPS flags="--list --milestone-filter=$(MILESTONE_FILTER) \ --mode-filter=$(MODE_FILTER) --log-to-stdout $$flags";\ fi;\ + LD_LIBRARY_PATH='$(auth_plugin_dirs):$(LD_LIBRARY_PATH)' \ $(PYTHON) $(top_srcdir)/build/run_tests.py \ --config-file $(top_srcdir)/subversion/tests/tests.conf \ $$flags \
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
On 11/03/2011 05:10 PM, Jonathan Nieder wrote: > Why would that be? When someone dumps in 20 bits of data from a > strong, in-hardware, random number source, even if the PRNG is utterly > stupid, it can have an unguessable 20 bits of internal state. After > reading enough random numbers, I will have enough information to guess > the seed and learn what comes next. If you want to attack a PRNG, you need very little of the output state--only enough to distinguish between the possible values of the generator seed. What you do need is for the generator seed to be partially guessable; otherwise, you will be trying to brute-force a 128-bit or 256-bit seed, which is impractical. If I somehow know the initial generator state, and you reseed your generator with only 20 unguessable bits, I will be able to determine those bits using 20 bits of output and 2^20 effort (which is easy), and then I will know all of the generator state again. However, if you reseed with enough unguessable bits that I can't brute-force them, it doesn't matter how much output I see; I will never again be able to determine the internal state. For the Fortuna generator, for instance, if I discover a way to determine the generator state solely by observing the output, then I will also have discovered a plaintext recovery attack against AES-256. For more, see chapter 9 of _Cryptography Engineering_. > A good PRNG helps mitigate that somewhat More than "somewhat". Any PRNG which doesn't have the above properties in its generator is insecure for any cryptographic purpose, and would be considered a security bug in the operating system. In another message, Peter Samuelson wrote: > apr_time_now() has microsecond resolution. It has microsecond precision but not necessarily microsecond accuracy. For instance, http://fixunix.com/kernel/37-gettimeofday-resolution-linux.html suggests that two requests arriving within a 10ms window could get the same nonce.
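To make the 2^20 figure concrete, here is a toy, self-contained sketch (a deliberately weak LCG, not APR's generator and not Fortuna) of how an attacker who knows the algorithm and sees a little output can recover a 20-bit reseed by brute force:

    /* Toy demonstration of the brute-force argument above: the "PRNG" here
     * is a trivial LCG, and the unknown reseed is only 20 bits, so about
     * 2^20 guesses recover it from two observed outputs. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t lcg_next(uint32_t *state)
    {
      *state = *state * 1664525u + 1013904223u;   /* classic LCG constants */
      return *state;
    }

    int main(void)
    {
      uint32_t secret = 0x5a3c7;        /* the 20 "unguessable" bits */
      uint32_t s = secret, out1, out2, guess;

      out1 = lcg_next(&s);              /* output the attacker observes */
      out2 = lcg_next(&s);

      for (guess = 0; guess < (1u << 20); guess++)   /* ~2^20 work */
        {
          uint32_t t = guess;
          if (lcg_next(&t) == out1 && lcg_next(&t) == out2)
            {
              printf("recovered seed 0x%05x (actual 0x%05x)\n",
                     (unsigned) guess, (unsigned) secret);
              break;
            }
        }
      return 0;
    }

With a real generator the same search applies to the reseed; the defense described above is to reseed with enough bits at once that the search space is out of reach.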
Re: [PATCH] Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
Peter Samuelson wrote: > The problem is that svnserve is often used in 'inetd' mode, one > connection per process. I've recommended that for years, as being > easier and less hassle than managing a separate daemon. Given inetd > mode, your approach will exhaust /dev/random just as fast as the status > quo. True. Well, to avoid exhausting /dev/random, one needs some fallback source for the nonce. There are two cases: - in 'inetd' mode, I guess the pid could work. - in non-inetd mode, an in-process PRNG would work well. Using the timestamp as nonce feels a little silly, given that the challenge is going to look like Though it's better than just using nonce=0, say (since some hard-to-predict amount of time passes between when the nonce and timestamp are generated for the challenge). Times like these make me wish that more systems had erandom or some other interface for lower-quality random numbers. The random(4) manpage says: If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys. APR was fixed accordingly in version 1.3.3 (see r652830, "configure.in: Prefer /dev/urandom over /dev/random"). Running $ strings /usr/lib/libapr-1.so.0.4.5 | grep dev /dev/zero /dev/urandom /dev/null I see that Debian has the fix. If refining the Debian-specific code here, please consider patching the upstream code for the !APR_HAS_RANDOM case, too. That way, your changes can get a wider audience and more review, it is more likely that the code will be updated when assumptions of the surrounding code change, and people on those rare platforms that lack a secure random number source would benefit as well. Sorry to have been taking so long to figure this one out.
Re: [PATCH] Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
On Thursday, November 03, 2011 4:26 PM, "Peter Samuelson" wrote: > > [Jonathan Nieder] > > state = apr_random_standard_new(pool); > > > > for (;;) { > > while (apr_random_secure_ready(state) == APR_ENOTENOUGHENTROPY) > > { > > apr_generate_random_bytes(buf, sizeof(buf)); > > apr_random_add_entropy(state, buf, sizeof(buf)); > > } > > > > apr_random_secure_bytes(state, ret, n); > > yield; > > } > > The problem is that svnserve is often used in 'inetd' mode, one > connection per process. I've recommended that for years, as being > easier and less hassle than managing a separate daemon. Given inetd > mode, your approach will exhaust /dev/random just as fast as the status > quo. > > (Also, and this is minor, those functions didn't exist in apr 0.9, for > those poor souls still needing to build Subversion on platforms with > Apache 2.0. I don't _think_ we've ever dropped support for our > original apr platform, have we?) Yes, but I'm not sure that means we can't use the above code when we detect a new APR at build time. I mean, APR_VERSION_AT_LEAST() exists for a reason. > -- > Peter Samuelson | org-tld!p12n!peter | http://p12n.org/ >
Re: [PATCH] Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
[Jonathan Nieder] > state = apr_random_standard_new(pool); > > for (;;) { > while (apr_random_secure_ready(state) == APR_ENOTENOUGHENTROPY) > { > apr_generate_random_bytes(buf, sizeof(buf)); > apr_random_add_entropy(state, buf, sizeof(buf)); > } > > apr_random_secure_bytes(state, ret, n); > yield; > } The problem is that svnserve is often used in 'inetd' mode, one connection per process. I've recommended that for years, as being easier and less hassle than managing a separate daemon. Given inetd mode, your approach will exhaust /dev/random just as fast as the status quo. (Also, and this is minor, those functions didn't exist in apr 0.9, for those poor souls still needing to build Subversion on platforms with Apache 2.0. I don't _think_ we've ever dropped support for our original apr platform, have we?) -- Peter Samuelson | org-tld!p12n!peter | http://p12n.org/
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
Greg Hudson wrote: > As for the second threat, at least on Linux, /dev/random output still > comes from the PRNG. It just keeps an internal counter and blocks when > the PRNG has "run out of" its guess at estimated input entropy. (This > is exceedingly silly, because a PRNG doesn't "use up" input entropy as > it generates results; either it has unguessable internal state or it > doesn't.) Why would that be? When someone dumps in 20 bits of data from a strong, in-hardware, random number source, even if the PRNG is utterly stupid, it can have an unguessable 20 bits of internal state. After reading enough random numbers, I will have enough information to guess the seed and learn what comes next. A good PRNG helps mitigate that somewhat, of course (especially if unfriendly people don't have access to the entire random number stream and information about when it was seeded). More practically speaking, the benefit of /dev/random's entropy accounting on Linux is to annoy people enough to get them to find a good source of random data. When I am generating a PGP key, I am grateful for the notification. Less so when picking a nonce for svn access. Another downside of using either /dev/random or /dev/urandom is that it involves revealing information about the random number state, which could hurt the security of other applications if there is not enough entropy.
[PATCH] Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
Daniel Shahaf wrote: > On Thursday, November 03, 2011 12:44 AM, "Jonathan Nieder" > wrote: >> Another possibility would be to enhance >> apr's random number source API to allow requesting random bytes >> without so much entropy (erandom/frandom) or without blocking on lack >> of entropy (urandom). > > Fixing this by extending APR's API sounds good. Would the change be > backportable to APR 1.4.x too? I think a proper fix is possible without changing APR's API. The calling sequence is something like this: state = apr_random_standard_new(pool); for (;;) { while (apr_random_secure_ready(state) == APR_ENOTENOUGHENTROPY) { apr_generate_random_bytes(buf, sizeof(buf)); apr_random_add_entropy(state, buf, sizeof(buf)); } apr_random_secure_bytes(state, ret, n); yield; } Justification: that is how /dev/random seems to be meant to be used (as source for a random seed, with userspace taking over after that). Actually, using apr_random_insecure_bytes without seeding the generator would be about as good, as far as I can tell. What is important is that a given pair is not likely to be repeated. Complications: - This involves application-local state. Making the PRNG state per-thread would be fussy. And on the other hand if it's global, synchronizing between threads could hurt performance. - Could be controversial on platforms "where the built-in strong random number generator works just fine without blocking applications all the time" (but aren't good random numbers a scarce resource almost everywhere?). [...] > Something tells me that when a cryptographic protocol calls for random > numbers then a quasiconstant or known value wouldn't do instead. Right. It's not as bad as it sounds because abusing this requires (1) intercepting and replaying challenge responses in real time (2) getting your tuple to match the one from the challenge you intercepted. On affected installations, both the nonce and timestamp are timestamps with 1-microsecond resolution. but it certainly looks like a problem to me. So how about this to start, to avoid providing people on platforms with !APR_HAS_RANDOM with a false sense of security? [[[ * subversion/libsvn_ra_svn/cram.c (make_nonce): Refuse to build if APR_HAS_RANDOM is unset, instead of falling back to using a timestamp as nonce. ]]] Index: subversion/libsvn_ra_svn/cram.c === --- subversion/libsvn_ra_svn/cram.c (revision 1197294) +++ subversion/libsvn_ra_svn/cram.c (working copy) @@ -117,18 +117,10 @@ return svn_ra_svn_flush(conn, pool); } -/* If we can, make the nonce with random bytes. If we can't... well, - * it just has to be different each time. The current time isn't - * absolutely guaranteed to be different for each connection, but it - * should prevent replay attacks in practice. */ +/* Make the nonce with random bytes. */ static apr_status_t make_nonce(apr_uint64_t *nonce) { -#if APR_HAS_RANDOM return apr_generate_random_bytes((unsigned char *) nonce, sizeof(*nonce)); -#else - *nonce = apr_time_now(); - return APR_SUCCESS; -#endif } svn_error_t *svn_ra_svn_cram_server(svn_ra_svn_conn_t *conn, apr_pool_t *pool,
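As a sketch (my reading of the "about as good" option above, not a proposed patch), the insecure-bytes variant would be seeded once from the system source and then draw nonces from the in-process generator, assuming a long-running (non-inetd) server:

    /* Sketch only: per-process nonce generation that touches the system
     * random source once (to seed) and then uses APR's in-process generator,
     * so /dev/random is not drained on every connection.  Error handling is
     * abbreviated; inetd mode would still end up seeding once per connection. */
    #include <apr_general.h>
    #include <apr_random.h>

    static apr_status_t
    make_nonce_from_prng(apr_uint64_t *nonce, apr_random_t *prng)
    {
      while (apr_random_insecure_ready(prng) == APR_ENOTENOUGHENTROPY)
        {
          unsigned char buf[32];
          apr_status_t rv = apr_generate_random_bytes(buf, sizeof(buf));
          if (rv != APR_SUCCESS)
            return rv;
          apr_random_add_entropy(prng, buf, sizeof(buf));
        }
      return apr_random_insecure_bytes(prng, nonce, sizeof(*nonce));
    }

    int main(void)
    {
      apr_pool_t *pool;
      apr_random_t *prng;
      apr_uint64_t nonce;

      apr_initialize();
      apr_pool_create(&pool, NULL);
      prng = apr_random_standard_new(pool);

      make_nonce_from_prng(&nonce, prng);   /* one call per connection */

      apr_pool_destroy(pool);
      apr_terminate();
      return 0;
    }

The thread-safety concern raised above still applies: this per-process state would need either a lock or per-thread generators.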
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
[Daniel Shahaf] > Something tells me that when a cryptographic protocol calls for > random numbers then a quasiconstant or known value wouldn't do > instead. It might still provide some security guarantees but I > wouldn't assume it would provide all guarantees of the > correctly-executed protocol. It doesn't call for random numbers. It calls for a nonce. The nonce is not secret - it is passed in plaintext to the client - it just has to be different every time, to prevent replay. Given the nature of cryptographic hashes, it doesn't even have to be _very_ different to be useless to an attacker. To attack the challenge in CRAM, you have to attack _two_ checksums, outer and then inner. Even one bit of entropy scrambles the inner checksum, and _that_ scrambles the outer. apr_time_now() has microsecond resolution. There's no way a single svnserve process will handle two connections in the same microsecond. (I note that even at gigabit speeds, two adjacent network packets will be about a microsecond apart.) That leaves multiple processes calling apr_time_now in the same microsecond. So I think I'll add a getpid() in there somewhere. (The Debian patch, I mean. Probably not useful here.) > > Would some other simple fix (e.g., calling > > "lrand48") make sense and be generally useful? Well, rand() and friends have to be seeded _somehow_, usually from time of day, so that doesn't seem much better than apr_time_now(). > No objection here, but doesn't APR already use lrand48() if it's available? If that's true, we shouldn't even be having this conversation at all. The problem is that /dev/random was being exhausted. -- Peter Samuelson | org-tld!p12n!peter | http://p12n.org/
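For concreteness, here is a rough sketch (my guess at the shape of such a Debian-side tweak, not the actual patch) of mixing the pid into the timestamp fallback so two processes answering in the same microsecond still produce different nonces:

    /* Sketch of a timestamp+pid fallback nonce for the !APR_HAS_RANDOM /
     * inetd case.  Not the actual Debian patch; shown only to illustrate
     * the "add a getpid() in there somewhere" idea. */
    #include <apr_time.h>
    #include <unistd.h>            /* getpid() */

    static apr_status_t make_nonce_fallback(apr_uint64_t *nonce)
    {
      apr_uint64_t now = (apr_uint64_t) apr_time_now();   /* microseconds */
      apr_uint64_t pid = (apr_uint64_t) getpid();

      /* XOR the pid into the upper bits of the microsecond timestamp so
       * that two processes started in the same microsecond still differ. */
      *nonce = now ^ (pid << 48);
      return APR_SUCCESS;
    }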
Re: segfault when applying patch
Philip Martin writes: > What you need is a patch file that modifies two targets. One part > of the patch gets applied and deletes a file, another part of the patch > gets skipped, because the path is outside the working copy, say. The > skipped target has local_abspath NULL and delete_empty_dirs will crash. I've raised http://subversion.tigris.org/issues/show_bug.cgi?id=4049 -- uberSVN: Apache Subversion Made Easy http://www.uberSVN.com
Re: segfault when applying patch
Stefan Küng writes: > A few crash dumps sent for TSVN 1.7 show a segfault when applying a patch. > > in libsvn_client/patch.c, line 2703: > > target_info = APR_ARRAY_IDX(targets_info, i, patch_target_info_t *); > parent = svn_dirent_dirname(target_info->local_abspath, iterpool); > > the target_info->local_abspath is NULL, so svn_dirent_dirname() then > segfaults. > > I haven't figured out why target_info->local_abspath is NULL though, > but maybe you have an idea? What you need is a patch file that modifies two targets. One part of the patch gets applied and deletes a file, another part of the patch gets skipped, because the path is outside the working copy, say. The skipped target has local_abspath NULL and delete_empty_dirs will crash. -- uberSVN: Apache Subversion Made Easy http://www.uberSVN.com
Re: svn commit: r1197093 - /subversion/branches/showing-merge-info/subversion/svn/mergeinfo-cmd.c
2011/11/4 Bert Huijben : > > >> -Original Message- >> From: julianf...@apache.org [mailto:julianf...@apache.org] >> Sent: donderdag 3 november 2011 14:08 >> To: comm...@subversion.apache.org >> Subject: svn commit: r1197093 - /subversion/branches/showing-merge- >> info/subversion/svn/mergeinfo-cmd.c >> >> Author: julianfoad >> Date: Thu Nov 3 13:08:09 2011 >> New Revision: 1197093 >> >> URL: http://svn.apache.org/viewvc?rev=1197093&view=rev >> Log: >> On the 'showing-merge-info' branch: Factor out some code. >> >> * subversion/svn/mergeinfo-cmd.c >> Factory out the summary mode code as a separate function. > > 'Factory out'? > Typo? > > (I don't see a factory pattern) > Factor out ... (As in "Refactoring" or "finding common factor") Best regards, Konstantin Kolinko
Re: Editor v2 - suggestions and queries
On Thu, Nov 3, 2011 at 14:10, Julian Foad wrote: >... > @@ -177,18 +177,25 @@ > * \n\n > * Just before each callback invocation is carried out, the @a cancel_func > * that was passed to svn_editor_create() is invoked to poll any > * external reasons to cancel the sequence of operations. Unless it > * overrides the cancellation (denoted by #SVN_ERR_CANCELLED), the driver > * aborts the transmission by invoking the svn_editor_abort() callback. > * Exceptions to this are calls to svn_editor_complete() and > * svn_editor_abort(), which cannot be canceled externally. > * > + * ### JAF: It should check for cancellation before 'complete'. > + * Imagine the user runs 'svn commit large-new-file' and then > + * tries to cancel it, and imagine svn_editor_add() does not > + * internally respond to cancellation. If the driver then calls > + * 'complete' without intercepting the cancellation, the user > + * would not be happy to see that commit being completed. That would be a bug in the add, which is not checking. I see no reason to work around a bug. If the caller wants to abort the editor, then it should be able to do so, regardless of the cancellation setting. >... > @@ -203,76 +210,111 @@ > * > * Driving Order Restrictions > * In order to reduce complexity of callback receivers, the editor callbacks > * must be driven in adherence to these rules: > * > * - svn_editor_add_directory() -- Another svn_editor_add_*() call must > * follow for each child mentioned in the @a children argument of any > * svn_editor_add_directory() call. > * > + * ### JAF: ... must follow at any time after the parent add_directory(), > + * and not before it. Huh? The statement already says the children must follow. I don't understand this seeming redundancy. >... > * In other words, if there are two calls coming in on the same path, the > * first of them has to be svn_editor_set_props(). > * > + * ### JAF: The set_* functions treat the properties of a node as > + * separate from the node's text or symlink target, whereas the > + * add_* functions treat the properties as an integral part of each > + * kind of node. This seems like a needless difference. It would make > + * sense for set_text() to take the properties with the text, and let Sure, that makes sense. Then the set_props() can lose the COMPLETE argument, and we add the restriction that set_text() and set_target() MUST NOT be called before/after the set_props() call. We could also state that props==NULL means "no change". "No properties" (the empty set) would be designated by an empty hash. > + * the implementation worry about whether the props and/or the text > + * have actually changed (as well as how to transmit the changes > + * most efficiently). For symlinks, include the properties with the > + * 'set target' call (rename to 'set symlink') and for directories Keep it set target. You are not setting the symlink, you are setting the *target* of the symlink. This is the nomenclature used everywhere already. > + * introduce a 'set directory' function, and remove the set_props() > + * function. This would eliminate the exception in the "Once Rule". Sure. >... > * - The ancestor of an added or modified node may not be deleted. The > * ancestor may not be moved (instead: perform the move, *then* the edits). > + * > + * ### JAF: Ancestor == parent dir? Any ancestor, not just the parent. > + * > + * ### JAF: The ancestor of a node that is modified or added or copied-here > + * or moved-here ...? Sure. > + * > + * ### JAF: The ancestor of ... [can | may not?] 
be copied or moved. The statement says the ancestor may not be moved. Since it says nothing about copying: sure, that is allowed (as you'd expect, so we don't have to call it out). > * - svn_editor_delete() must not be used to replace a path -- i.e. > * svn_editor_delete() must not be followed by an svn_editor_add_*() on > * the same path, nor by an svn_editor_copy() or svn_editor_move() with > * the same path as the copy/move target. > + * > + * ### JAF: Nor followed by set_*(). You cannot call set_*() on a missing node, so there is no reason to state this. > + * ### JAF: Nor followed by copy() or move() with the same path as the > + * copy/move source? You cannot copy/move a missing node. The point of the above statement is to enforce that replacement is not allowed using a delete, followed by something that (re)constructs a node at that path. You don't need to (re)define the fact that set, copy, move must have a valid source. > * Instead of a prior delete call, the add/copy/move callbacks should be > * called with the @a replaces_rev argument set to the revision number of > * the node at this path that is being replaced. Note that the path and > * revision number are the key to finding any other information about the > * replaced no
RE: svn commit: r1197093 - /subversion/branches/showing-merge-info/subversion/svn/mergeinfo-cmd.c
> -Original Message- > From: julianf...@apache.org [mailto:julianf...@apache.org] > Sent: donderdag 3 november 2011 14:08 > To: comm...@subversion.apache.org > Subject: svn commit: r1197093 - /subversion/branches/showing-merge- > info/subversion/svn/mergeinfo-cmd.c > > Author: julianfoad > Date: Thu Nov 3 13:08:09 2011 > New Revision: 1197093 > > URL: http://svn.apache.org/viewvc?rev=1197093&view=rev > Log: > On the 'showing-merge-info' branch: Factor out some code. > > * subversion/svn/mergeinfo-cmd.c > Factory out the summary mode code as a separate function. 'Factory out'? Typo? (I don't see a factory pattern) Bert
segfault when applying patch
Hi, A few crash dumps sent for TSVN 1.7 show a segfault when applying a patch. in libsvn_client/patch.c, line 2703: target_info = APR_ARRAY_IDX(targets_info, i, patch_target_info_t *); parent = svn_dirent_dirname(target_info->local_abspath, iterpool); the target_info->local_abspath is NULL, so svn_dirent_dirname() then segfaults. I haven't figured out why target_info->local_abspath is NULL though, but maybe you have an idea? Stefan -- ___ oo // \\ "De Chelonian Mobile" (_,\/ \_/ \ TortoiseSVN \ \_/_\_/>The coolest Interface to (Sub)Version Control /_/ \_\ http://tortoisesvn.net
Re: Patch ping, 5 years later: removing properties
The introduction of the --git option to 'svn diff' opens the door for different "flavors" for 'svn diff' output. Would you be interested in a new 'svn diff --patch' flavor, which generates only diff output that's suitable for consumption with GNU patch? Implied by --patch would be no property diff output and the display of copied items as additions (like --show-copies-as-adds does). Of course, --no-property-diff works for me, too. :-) On 11/03/2011 03:05 PM, Alexey Neyman wrote: > Hi all, > > Sorry for a long delay :) 5 years ago, I sent a patch to the mailing list > that > would allow to specify what to include/exclude from the diff output: > > > http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&dsMessageId=742594 > > There were two problems this patch tried to address: > > 1. There was no way to exclude properties from the diff output. > > 2. There was no way to obtain a list of modified/added/deleted files and > whether the content and/or properties have been modified. > > I got a response from Ben Collins-Sussman that the patch I sent is superseded > by --summarize option which was implemented in the trunk (which would > eventually become 1.4): > > http://svn.haxx.se/dev/archive-2006-06/0202.shtml > > Actually, --summarize only addresses the issue #2, but not the issue #1. As > to > issue #1, ability to disable property diffs is still missing from Subversion. > Common wisdom suggests running the diff through 'filterdiff --clean': > > http://stackoverflow.com/questions/2755848/how-do-you-get-subversion-diff- > summary-to-ignore-mergeinfo-properties > http://stackoverflow.com/questions/402522/is-there-a-metadata-exclusion- > filter-for-the-svn-diff-command > > But this is still unreliable: if property text includes a diff-like chunk, it > will go through. > > What is the sentiment about implementing --no-property-diff option to 'svn > diff'? I can implement it if there's agreement such option is needed. > > Regards, > Alexey. -- C. Michael Pilato CollabNet <> www.collab.net <> Distributed Development On Demand signature.asc Description: OpenPGP digital signature
Patch ping, 5 years later: removing properties
Hi all, Sorry for a long delay :) 5 years ago, I sent a patch to the mailing list that would allow one to specify what to include/exclude from the diff output: http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&dsMessageId=742594 There were two problems this patch tried to address: 1. There was no way to exclude properties from the diff output. 2. There was no way to obtain a list of modified/added/deleted files and whether the content and/or properties have been modified. I got a response from Ben Collins-Sussman that the patch I sent is superseded by the --summarize option which was implemented in the trunk (which would eventually become 1.4): http://svn.haxx.se/dev/archive-2006-06/0202.shtml Actually, --summarize only addresses issue #2, but not issue #1. As to issue #1, the ability to disable property diffs is still missing from Subversion. Common wisdom suggests running the diff through 'filterdiff --clean': http://stackoverflow.com/questions/2755848/how-do-you-get-subversion-diff-summary-to-ignore-mergeinfo-properties http://stackoverflow.com/questions/402522/is-there-a-metadata-exclusion-filter-for-the-svn-diff-command But this is still unreliable: if property text includes a diff-like chunk, it will go through. What is the sentiment about implementing a --no-property-diff option to 'svn diff'? I can implement it if there's agreement that such an option is needed. Regards, Alexey.
Re: Editor v2 - suggestions and queries
Only a couple of nit-pick comments. Generally, since there is already discussion in the file itself, I'd support committing additional discussion directly, and then discussing it on the mailing list if needed. -Hyrum On Thu, Nov 3, 2011 at 1:10 PM, Julian Foad wrote: > Here, in the form of a patch, are many suggestions and queries I've > made on Ev2. I'm not intimately familiar with the design and goals of > it, I'm just responding to how it's currently written up in the header > file, with some memory of past discussions. Discussion welcomed. > > [[[ > Index: subversion/include/svn_editor.h > === > --- subversion/include/svn_editor.h (revision 1197094) > +++ subversion/include/svn_editor.h (working copy) > @@ -177,18 +177,25 @@ > *\n\n > *Just before each callback invocation is carried out, the @a > cancel_func > *that was passed to svn_editor_create() is invoked to poll any > *external reasons to cancel the sequence of operations. Unless it > *overrides the cancellation (denoted by #SVN_ERR_CANCELLED), the > driver > *aborts the transmission by invoking the svn_editor_abort() callback. > *Exceptions to this are calls to svn_editor_complete() and > *svn_editor_abort(), which cannot be canceled externally. > * > + *### JAF: It should check for cancellation before 'complete'. > + *Imagine the user runs 'svn commit large-new-file' and then > + *tries to cancel it, and imagine svn_editor_add() does not > + *internally respond to cancellation. If the driver then calls > + *'complete' without intercepting the cancellation, the user > + *would not be happy to see that commit being completed. > + * > * - @b Receive: While the driver invokes operations upon the editor, the > *receiver finds its callback functions called with the information > *to operate on its tree. Each actual callback function receives those > *arguments that the driver passed to the "driving" functions, plus > these: > *- @a baton: This is the @a editor_baton pointer originally passed to > * svn_editor_create(). It may be freely used by the callback > * implementation to store information across all callbacks. > *- @a scratch_pool: This temporary pool is cleared directly after > * each callback returns. See "Pool Usage". > @@ -203,76 +210,111 @@ > * > * Driving Order Restrictions > * In order to reduce complexity of callback receivers, the editor > callbacks > * must be driven in adherence to these rules: > * > * - svn_editor_add_directory() -- Another svn_editor_add_*() call must > * follow for each child mentioned in the @a children argument of any > * svn_editor_add_directory() call. > * > + * ### JAF: ... must follow at any time after the parent > add_directory(), > + * and not before it. > + * > * - svn_editor_set_props() > * - The @a complete argument must be TRUE if no more calls will follow > on > * the same path. @a complete must always be TRUE for directories. > * - If @a complete is FALSE, and: > * - if @a relpath is a file, this must (at some point) be followed by > * an svn_editor_set_text() call on the same path. > * - if @a relpath is a symlink, this must (at some point) be followed > by > * an svn_editor_set_target() call on the same path. > * > * - svn_editor_set_text() and svn_editor_set_target() must always occur > * @b after an svn_editor_set_props() call on the same path, if any. > * > * In other words, if there are two calls coming in on the same path, the > * first of them has to be svn_editor_set_props(). 
> * > + * ### JAF: The set_* functions treat the properties of a node as > + * separate from the node's text or symlink target, whereas the > + * add_* functions treat the properties as an integral part of each > + * kind of node. This seems like a needless difference. It would make > + * sense for set_text() to take the properties with the text, and let > + * the implementation worry about whether the props and/or the text > + * have actually changed (as well as how to transmit the changes > + * most efficiently). For symlinks, include the properties with the > + * 'set target' call (rename to 'set symlink') and for directories > + * introduce a 'set directory' function, and remove the set_props() > + * function. This would eliminate the exception in the "Once Rule". > + * > * - Other than the above two pairs of linked operations, a path should > * never be referenced more than once by the add_* and set_* and the > * delete operations (the "Once Rule"). The source path of a copy (and > * its children, if a directory) may be copied many times, and are > * otherwise subject to the Once Rule. The destination path of a copy > * or move may have set_* operations applied, but not add_* or d
Editor v2 - suggestions and queries
Here, in the form of a patch, are many suggestions and queries I've made on Ev2. I'm not intimately familiar with the design and goals of it, I'm just responding to how it's currently written up in the header file, with some memory of past discussions. Discussion welcomed. [[[ Index: subversion/include/svn_editor.h === --- subversion/include/svn_editor.h (revision 1197094) +++ subversion/include/svn_editor.h (working copy) @@ -177,18 +177,25 @@ *\n\n *Just before each callback invocation is carried out, the @a cancel_func *that was passed to svn_editor_create() is invoked to poll any *external reasons to cancel the sequence of operations. Unless it *overrides the cancellation (denoted by #SVN_ERR_CANCELLED), the driver *aborts the transmission by invoking the svn_editor_abort() callback. *Exceptions to this are calls to svn_editor_complete() and *svn_editor_abort(), which cannot be canceled externally. * + *### JAF: It should check for cancellation before 'complete'. + *Imagine the user runs 'svn commit large-new-file' and then + *tries to cancel it, and imagine svn_editor_add() does not + *internally respond to cancellation. If the driver then calls + *'complete' without intercepting the cancellation, the user + *would not be happy to see that commit being completed. + * * - @b Receive: While the driver invokes operations upon the editor, the *receiver finds its callback functions called with the information *to operate on its tree. Each actual callback function receives those *arguments that the driver passed to the "driving" functions, plus these: *- @a baton: This is the @a editor_baton pointer originally passed to * svn_editor_create(). It may be freely used by the callback * implementation to store information across all callbacks. *- @a scratch_pool: This temporary pool is cleared directly after * each callback returns. See "Pool Usage". @@ -203,76 +210,111 @@ * * Driving Order Restrictions * In order to reduce complexity of callback receivers, the editor callbacks * must be driven in adherence to these rules: * * - svn_editor_add_directory() -- Another svn_editor_add_*() call must * follow for each child mentioned in the @a children argument of any * svn_editor_add_directory() call. * + * ### JAF: ... must follow at any time after the parent add_directory(), + * and not before it. + * * - svn_editor_set_props() * - The @a complete argument must be TRUE if no more calls will follow on * the same path. @a complete must always be TRUE for directories. * - If @a complete is FALSE, and: * - if @a relpath is a file, this must (at some point) be followed by * an svn_editor_set_text() call on the same path. * - if @a relpath is a symlink, this must (at some point) be followed by * an svn_editor_set_target() call on the same path. * * - svn_editor_set_text() and svn_editor_set_target() must always occur * @b after an svn_editor_set_props() call on the same path, if any. * * In other words, if there are two calls coming in on the same path, the * first of them has to be svn_editor_set_props(). * + * ### JAF: The set_* functions treat the properties of a node as + * separate from the node's text or symlink target, whereas the + * add_* functions treat the properties as an integral part of each + * kind of node. This seems like a needless difference. It would make + * sense for set_text() to take the properties with the text, and let + * the implementation worry about whether the props and/or the text + * have actually changed (as well as how to transmit the changes + * most efficiently). 
For symlinks, include the properties with the + * 'set target' call (rename to 'set symlink') and for directories + * introduce a 'set directory' function, and remove the set_props() + * function. This would eliminate the exception in the "Once Rule". + * * - Other than the above two pairs of linked operations, a path should * never be referenced more than once by the add_* and set_* and the * delete operations (the "Once Rule"). The source path of a copy (and * its children, if a directory) may be copied many times, and are * otherwise subject to the Once Rule. The destination path of a copy * or move may have set_* operations applied, but not add_* or delete. * If the destination path of a copy or move is a directory, then its * children are subject to the Once Rule. The source path of a move * (and its child paths) may be referenced in add_*, or as the * destination of a copy (where these new, copied nodes are subject to * the Once Rule). * * - The ancestor of an added or modified node may not be deleted. The * ancest
Issue 4048 update not marking directories complete
I'm looking at issue 4048 (with an eye on issue 3993, see end of mail) http://subversion.tigris.org/issues/show_bug.cgi?id=4048 There is code in update_editor.c:close_edit /* The editor didn't even open the root; we have to take care of some cleanup stuffs. */ if (! eb->root_opened && *eb->target_basename == '\0') { /* We need to "un-incomplete" the root directory. */ SVN_ERR(svn_wc__db_temp_op_end_directory_update(eb->db, eb->anchor_abspath, scratch_pool)); } This marks the anchor complete when the editor drive is: set_target_revision close_edit and this happens when the server is 1.0.x and the update makes no changes. The problem is that _end_directory_update asserts that the directory is incomplete and that is not always true. Newer servers drive the client: set_target_revision open_directory close_directory close_edit so don't trigger this code. It's easy enough to fix the assert to handle status normal as well as incomplete, but I'm not sure that really fixes the problem. The code in close_edit is old, from before 1.0, but I'm not sure it ever worked. It seems to me that just making the anchor complete is not enough, if anything it should be making the whole tree to the depth of the update complete. If we just complete the anchor then we see things like: $ svn st wc !wc !wc/B $ svn up wc At revision 4. $ svn st wc !wc/B $ svn up wc/B At revision 4. !wc/B where the rest of the tree remains incomplete. (I see this with a 1.0 client, a 1.6 client and a 1.7 client that is patched to avoid the assert.) I'm tempted to make a quick fix that simply removes the call to _end_directory_update altogether. This will fix the assert. It will mean that a "no-op" update will not mark an incomplete anchor complete, but since it doesn't mark the rest of the tree complete that doesn't seem like a major regression. There is a related issue, 3993 http://subversion.tigris.org/issues/show_bug.cgi?id=3993 where a serf bug triggers a similar sort of problem on googlecode servers. If we were to fix 4048 by marking the whole tree complete it may make 3993 worse. -- uberSVN: Apache Subversion Made Easy http://www.uberSVN.com
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
On 11/03/2011 01:44 AM, Jonathan Nieder wrote: > What do you think? Is forcing !APR_HAS_RANDOM and just using > apr_time_now() as Debian currently does safe, or does it expose users > to a security risk? I suspect it makes the server vulnerable to a replay attack. The right answer is to use /dev/urandom. Using /dev/random has highly questionable advantages over using /dev/urandom, and it's unfortunate that APR only provides an interface to one and not the other. A longer analysis: if a system has collected even a small amount of entropy (128 bits) relative to what an attacker can guess since boot, it can generate an infinite amount of pseudo-random data without risk of vulnerability, if it uses a suitable PRNG function. The actual dangers are that (1) the system has not accumulated enough entropy, and maybe we should wait until it has, or (2) the system has a bad PRNG function. Using /dev/random does not protect against either threat very effectively. As for the first threat, it's very difficult to mitigate because a system cannot generally estimate its entropy very well. It throws possible entropy events into a pool and mixes them together, but it doesn't have a very good measure of how guessable those events were. PRNG algorithms like Fortuna seek to guarantee that the PRNG will eventually reach an unguessable state (by ensuring that more and more possible entropy is used each time the internal state is updated, until eventually an update happens that the attacker can't brute-force), but they can't tell you when they've reached that point. As for the second threat, at least on Linux, /dev/random output still comes from the PRNG. It just keeps an internal counter and blocks when the PRNG has "run out of" its guess at estimated input entropy. (This is exceedingly silly, because a PRNG doesn't "use up" input entropy as it generates results; either it has unguessable internal state or it doesn't.) An application can only protect against a poor system PRNG by implementing its own generator, and it's far simpler to declare it the system's responsibility to fix its PRNG if there's a security issue associated with it.
[PATCH] commit --include-externals (v2)
I've rinsed and improved my proposed feature dubbed svn commit --include-externals (Related issues: #1167, #3563) I hope this will cut a much clearer path through the jungle that is externals behavior. Now I'm hoping for some reviews! The idea is to have file and dir externals behave the same way during commit, and to provide a way to recursively commit all externals (that are from the same repository and are not revision-pegged). Who'd have guessed, there are a few corners that would be good to have others' opinions on. (To: CMike and Bert because you two were involved in the original discussion: http://svn.haxx.se/dev/archive-2011-08/0617.shtml ) Thanks! ~Neels Introduce the commit argument --include-externals. Most prominent changes: - File externals are no longer committed by default. - When option --include-externals is passed to 'svn commit', *all* externals reached by recursion are included in the commit (dir & file alike). - Dir externals nested "behind" unversioned dirs or "foreign" WCs are now reached by recursion. Previous default was to include all file externals, not dirs. On the API level, separate dir/file arguments allow for this previous behavior, mainly for keeping svn_client_commit5()'s behavior unchanged. See also http://svn.haxx.se/dev/archive-2011-08/0620.shtml ### The commit API can't handle file externals inside unversioned dirs properly yet. Additional SQL currently deliberately excludes them (see STMT_SELECT_COMMITTABLE_EXTERNALS_BELOW). * subversion/include/private/svn_wc_private.h (svn_wc__committable_external_info_t): New struct for: (svn_wc__committable_externals_below): New function. * subversion/include/svn_client.h, * subversion/libsvn_client/commit.c (svn_client_commit6): New function, adding INCLUDE_FILE_EXTERNALS and INCLUDE_DIR_EXTERNALS arguments over svn_client_commit5(). (svn_client_commit5): Call svn_client_commit6() with INCLUDE_FILE_EXTERNALS = TRUE and INCLUDE_DIR_EXTERNALS = FALSE. * subversion/libsvn_client/commit_util.c (harvest_committables): New argument IS_EXPLICIT_TARGET; pass as FALSE upon recursion. (svn_client__harvest_committables): Pass IS_EXPLICIT_TARGET as TRUE: initial harvest_committables() call. (harvest_copy_committables): Pass IS_EXPLICIT_TARGET as TRUE, but has no effect since, inside harvest_committables(), COPY_MODE will be TRUE and all file externals will be excluded anyway. * subversion/libsvn_wc/externals.c (svn_wc__committable_externals_below): New function. * subversion/libsvn_wc/wc_db.h, * subversion/libsvn_wc/wc_db.c (svn_wc__db_committable_externals_below): New function for STMT_SELECT_COMMITTABLE_EXTERNALS_BELOW. * subversion/libsvn_wc/wc-queries.sql (STMT_SELECT_COMMITTABLE_EXTERNALS_BELOW): New SQL. * subversion/svn/commit-cmd.c (svn_cl__commit): Call svn_client_commit6() instead of &5(). * subversion/svn/cl.h (svn_cl__opt_state_t): Add INCLUDE_EXTERNALS for --include-externals. * subversion/svn/main.c (svn_cl__longopt_t, svn_cl__options, svn_cl__cmd_table, main): Add OPT_INCLUDE_EXTERNALS, enable --include-externals for 'commit'. * subversion/tests/cmdline/externals_tests.py (include_externals): New test, PASS. (include_immediate_dir_externals): New test, XFAIL. (test_list): Append above two tests. 
Index: subversion/include/private/svn_wc_private.h === --- subversion/include/private/svn_wc_private.h (revision 1197104) +++ subversion/include/private/svn_wc_private.h (working copy) @@ -131,6 +131,49 @@ svn_wc__read_external_info(svn_node_kind apr_pool_t *result_pool, apr_pool_t *scratch_pool); +/** See svn_wc__committable_externals_below(). */ +typedef struct svn_wc__committable_external_info_t { + + /* The local absolute path where the external should be checked out. */ + const char *local_abspath; + + /* The relpath part of the source URL the external should be checked out + * from. */ + const char *repos_relpath; + + /* The root URL part of the source URL the external should be checked out + * from. */ + const char *repos_root_url; + + /* Set to either svn_node_file or svn_node_dir. */ + svn_node_kind_t kind; + +} svn_wc__committable_external_info_t; + +/* Add svn_wc__committable_external_info_t* items to *EXTERNALS, describing + * 'committable' externals checked out below LOCAL_ABSPATH. Recursively find + * all nested externals (externals defined inside externals). + * + * In this context, a 'committable' external belongs to the same repository as + * LOCAL_ABSPATH, is not revision-pegged and is currently checked out in the + * WC. (Local modifications are not tested for.) + * + * *EXTERNALS must be initialized either to NULL or to a pointer created with + * apr_array_make(..., sizeof(svn_wc__committable_external_info_t *)). If + * *EXTERNALS is initialized to NULL, an array will be allocated fro
Re: auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
I (Julian Foad) wrote: > > Jonathan Nieder wrote: > > > svn_tests: E26: > > > svn_auth_get_platform_specific_client_providers should > > > return an array of 5 providers > > > FAIL: lt-auth-test 1: test retrieving > > > platform-specific auth providers > > I get this too when I run "make davautocheck" but not with > "make check" or "make svnserveautocheck". [...] Ah, but you ran "make check". I jumped to a conclusion seeing this bug in my own testing, and although the fix I made is good, it's not fixing what you were seeing. > Before running the tests, I set > LD_LIBRARY_PATH="$OBJ_DIR/subversion/libsvn_auth_gnome_keyring/.libs". That's something I do in my own script before running the tests. As an alternative, I tried the configure option "--enable-local-library-preloading" that Philip mentioned, and it seems to work for me. - Julian
Re: 1.7.0 assert on svn_client_checkout with E235000
On 11/02/2011 05:42 PM, Barry Scott wrote: > I wish that the canonical stuff was inside the svn_client_XXX API calls and > not > a burdon on callers. To my mind the svn.exe API and the svn_client_XXX should > accept the same strings and either operate or return an error. Avoiding the > asserts from using svn_client_XXX is the lions share of the work of getting > svn binding sane. Two problems with such a move: - libsvn_client isn't the only API called from code outside of the Subversion core distribution. What does to auto-canonicalization in libsvn_client do if passing the same uncanonical path to libsvn_wc will then crash? - canonicalizing inside the API means the API must therefore assume that input paths are not canonical. That means performing the same canonicalization over and over again each time the API is called. That's wasteful. Better to ask the highest level of code to make its input conform once (we even offer the functions to do so!) and then take advantage of known-good input from then on. -- C. Michael Pilato CollabNet <> www.collab.net <> Distributed Development On Demand signature.asc Description: OpenPGP digital signature
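For binding authors following this thread, here is a small sketch (not from the thread; the canonicalization and checkout functions are the public 1.7 C API, error handling elided) of the "canonicalize once at the boundary" pattern being described:

    /* Sketch: the caller canonicalizes its raw inputs once at the API
     * boundary, then passes known-canonical paths down to libsvn_client. */
    #include "svn_dirent_uri.h"
    #include "svn_client.h"

    static svn_error_t *
    do_checkout(svn_client_ctx_t *ctx, const char *raw_url,
                const char *raw_path, apr_pool_t *pool)
    {
      const char *url = svn_uri_canonicalize(raw_url, pool);
      const char *path = svn_dirent_canonicalize(raw_path, pool);
      svn_opt_revision_t head = { svn_opt_revision_head, { 0 } };
      svn_revnum_t result_rev;

      return svn_client_checkout3(&result_rev, url, path, &head, &head,
                                  svn_depth_infinity,
                                  FALSE /* ignore_externals */,
                                  FALSE /* allow_unver_obstructions */,
                                  ctx, pool);
    }

The caller pays the canonicalization cost once at the edge, and everything below can keep asserting canonical input, which is the trade-off described above.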
Re: auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
Daniel Shahaf wrote: > Jonathan Nieder wrote: > > svn_tests: E26: svn_auth_get_platform_specific_client_providers > > should return an array of 5 providers > > FAIL: lt-auth-test 1: test retrieving platform-specific auth providers > > > > Indeed, instrumenting the test, we learn that the actual number of > > providers returned is 1. > > > > Known problem? > > Buildbots are green and I don't remember this one > before. Can you set a breakpoint in the test and check > which four providers aren't returned? > (Or, if that's easier, which one provider _is_ returned.) I get this too when I run "make davautocheck" but not with "make check" or "make svnserveautocheck". The "svn.simple" provider is the only one returned. I build and test in a separate directory from the source tree. I build with Gnome-keyring support. Before running the tests, I set LD_LIBRARY_PATH="$OBJ_DIR/subversion/libsvn_auth_gnome_keyring/.libs". But when "auth-test" gets run, it finds that LD_LIBRARY_PATH is set to "/subversion/libsvn_ra_neon/.libs: /subversion/libsvn_ra_local/.libs: /subversion/libsvn_ra_svn/.libs" Lo! and behold: svnserveautocheck.sh sets LD_LIBRARY_PATH="...:$LD_LIBRARY_PATH" whereas davautocheck.sh (and dav-mirror-autocheck.sh) overwrites the old value. Blame... me. I inserted the ":$LD_LIBRARY_PATH" into svnserveautocheck.sh back in 2009, and I failed to spot the same thing being needed in these other places. Thanks for the report. Fixed in r1197065. - Julian
Re: [Subversion Wiki] Update of "CopySvnsyncMirrorToMaster" by PhilipMartin
On Thursday, November 03, 2011 10:29 AM, "Apache Wiki" wrote: > Dear Wiki user, > > You have subscribed to a wiki page or wiki category on "Subversion Wiki" for > change notification. > > The "CopySvnsyncMirrorToMaster" page has been changed by PhilipMartin: > http://wiki.apache.org/subversion/CopySvnsyncMirrorToMaster?action=diff&rev1=7&rev2=8 > > > The master repository has corruption in some of its revision files, but the > mirror repository does not. > ([[http://subversion.tigris.org/issues/show_bug.cgi?id=3845|Issue 3845]]). > > - Master and slave are still live since commits that don't access the corrupt > data still work. > + Master and slave are still live since commits that don't access the corrupt > data still work. We want to copy the slave to the master with minimal > downtime. > Thanks for making this requirement explicit. > == The Repair == > - Ensure the master repository is off line and the other preconditions are > met. Keep a backup of the master repository before starting to fix it. > + 1. Create a new repository that will become the new master. > + 2. Use 'svnsync init' to setup the slave as the temporary source for the > new master. > + 3. At this point 'svnsync synchronize' can be used to bring the new master > up-to-date, but rsync is probably faster. > + * rsync 'db/current' > + * rsync 'db' excluding 'db/current', db/locks', 'db/transactions', > 'db/txn-protorevs', 'db/revprops' and 'db/rep-cache.db' > + * make slave read-only > + * rsync 'db/rep-cache.db' > + * make slave read-write > + 4. Adjust svn:sync-last-merged-rev to the youngest revision in the new > master and run 'svnsync sync'. You'd need to mkdir revprops/$youngest_shard/ for the sync to work. > + 5. Disable revprop changes on the master, or make the master read-only. > + 6. Copy 'db/revprops' from the master into the new master. > + 7. Make the master read-only if not already. Suggestion: document at the top of the file that this procedure is for Subversion 1.7 and earlier. They may or may not continue to work as we bump FS formats. > + 8. Ensure slave is up-to-date and run 'svnsync sync' to get any final > commits. You deleted the svn:sync-* revprops in step 6. > + 9. Delete svn:sync-* revprops from new master r0. > + 10. Take master offline. > + 11. Replace master 'db' with 'db' from new master. > + 12. Check whether 'db/fsfs.conf' needs to be adjusted. > + 13. Bring master back online. > > - Copy, by any convenient method (such as rsync or by physically shipping a > disk), the following parts of the mirror repository into the master > repository: > - > - * the 'db' subdirectory > - * optionally omit 'db/locks', 'db/transactions', 'db/txn-protorevs' as > they're not needed > - * optionally omit 'db/revprops' if you want to keep the master's revprops > - > - Then delete 'db/locks/*', 'db/transactions/*' and 'db/txn-protorevs/*'. > - > - Check whether the settings in 'db/fsfs.conf' need to be adjusted or merged > with the previous (backed up) version of this file. > - >
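A rough shell rendering of the rsync portion of step 3 above; host names, repository paths, and the read-only mechanism are placeholders, and the point being illustrated is the ordering (db/current first, db/rep-cache.db only while the slave is read-only):

    SLAVE=slave:/srv/svn/mirror        # hypothetical mirror location
    NEWMASTER=/srv/svn/new-master      # hypothetical new master

    # copy db/current first, so the new repository never claims a younger
    # revision than the revision data copied afterwards
    rsync -a "$SLAVE/db/current" "$NEWMASTER/db/current"

    # copy the rest of db, excluding transient data, revprops and rep-cache
    rsync -a \
      --exclude=/current --exclude=/locks --exclude=/transactions \
      --exclude=/txn-protorevs --exclude=/revprops --exclude=/rep-cache.db \
      "$SLAVE/db/" "$NEWMASTER/db/"

    # with the slave switched to read-only, copy rep-cache.db on its own,
    # then make the slave read-write again
    rsync -a "$SLAVE/db/rep-cache.db" "$NEWMASTER/db/rep-cache.db"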
Re: auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
Jonathan Nieder writes: > The failure is described in tests.log: > > START: auth-test > svn_tests: E26: svn_auth_get_platform_specific_client_providers > should return an array of 5 providers > FAIL: lt-auth-test 1: test retrieving platform-specific auth providers > END: auth-test > ELAPSED: auth-test 0:00:00.179133 > > Indeed, instrumenting the test, we learn that the actual number of > providers returned is 1. > > Known problem? Yes. The kwallet and gnome keyring providers are loaded dynamically and the loader will not look for providers in the right place. There are several ways to get the test to PASS: - If you install in the final destination before running the tests then the providers will be loaded from the install dir. - If you add --enable-local-library-preloading the libtool files will be edited to preload the providers. We have had some problems in the past with this editing breaking the scripts. - Or you can set LD_PRELOAD yourself. -- uberSVN: Apache Subversion Made Easy http://www.uberSVN.com
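Roughly, those three options look like this from an out-of-tree build directory (the paths and the $OBJ_DIR variable are assumptions borrowed from elsewhere in this thread; check "./configure --help" for the exact flag spelling, and note that Philip suggests LD_PRELOAD where the third line uses LD_LIBRARY_PATH, which affects the same dlopen() search):

    # 1. install first, so the dlopen()ed kwallet/gnome-keyring providers
    #    are found under the install prefix
    make install && make check

    # 2. let libtool preload the providers (requires re-running configure)
    ../configure --enable-local-library-preloading [other options...]
    make check

    # 3. point the run-time loader at the freshly built providers yourself
    LD_LIBRARY_PATH="$OBJ_DIR/subversion/libsvn_auth_gnome_keyring/.libs:$LD_LIBRARY_PATH" \
      make check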
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
On Thursday, November 03, 2011 11:42 AM, "Stefan Sperling" wrote: > On Thu, Nov 03, 2011 at 12:01:58PM +0200, Daniel Shahaf wrote: > > Something tells me that when a cryptographic protocol calls for random > > numbers then a quasiconstant or known value wouldn't do instead. > > Put more bluntly, if protocol designers bothered with putting a random > number into their protocol, implementors must assume that designers had > a good reason for the number to be *random*. Using the current time instead > of a random number is breaking the protocol implementation. > > I mean, seriously, it's not like Debian didn't have a track record > of breaking security with custom patches. Remember the ssh keys debacle? > I am amazed to learn such a patch exists in Debian's Subversion packages. > I think this patch should be pulled from Debian's Subversion packages > immediately. > Yes, in general if you don't know why a random number was used you'd better not make it any less random. But please don't rush to conclusions without studying the concrete protocol.
Re: [Subversion Wiki] Update of "CopySvnsyncMirrorToMaster" by JulianFoad
On Thursday, November 03, 2011 10:34 AM, "Philip Martin" wrote: > Daniel Shahaf writes: > > Agreed that the data in the known-good copy should replace the data in > > other copies > > > > Disagreed that we should recommend rsync'ing db/ to a portable disk over > > hotcopy to such a disk > > hotcopy to a portable disk would be one way, but the admin may want to > rsync over the network. You know that rsync of a live repository won't always create a correct mirror and that it's not officially supported. Shouldn't there be a warning on the page mentioning that rsync'ing is not recommended, relies on implementation details, and had better be used only on non-live repositories?
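For the record, the supported way to take a consistent copy of a (possibly live) repository is svnadmin hotcopy; one would then ship or rsync the copy rather than the live db/ directory. Paths here are illustrative:

    svnadmin hotcopy /srv/svn/repos /mnt/portable-disk/repos-backup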
Re: Subversion 1.7.1 SRPM building tools and utilities
On Thu, Nov 03, 2011 at 12:06:51PM +0200, Daniel Shahaf wrote: > On Wednesday, November 02, 2011 9:09 PM, "Nico Kadel-Garcia" > wrote: > > As it stands, simply deleting the existing rhel-3, rhel-4, and rhel-5 > > legacy building bundles and leaving a pointer to the archive patch > > would probably be enough to allow others to use it as necessary. > > > > If they're broken (and I agree that overwriting dotfiles constitutes a > bug), and if no one uses them, then yes we could delete them. (Though > perhaps someone will apply your patch before then.) I would prefer to delete them and put a README into the directory that points at the new home of Nico's package, wherever it will end up being. I don't think our repository is a good home for it, because other packages don't live here anymore either.
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
On Thu, Nov 03, 2011 at 12:01:58PM +0200, Daniel Shahaf wrote: > > As a result, since November, 2005, svnserve on Debian has been patched > > to avoid calling apr_generate_random_bytes() and just use the current > > time as a nonce. That's ugly. So I'd like your advice. > > > > The random number is used by the CRAM-MD5 auth mechanism in svnserve > > (and could be used by other programs calling svn_ra_svn_cram_server > > from libsvn_ra_svn-1, though I doubt anyone does that). One > > possibility would be to have an internal random number source > > initialized from a strong random source at startup, to avoid drawing > > on the entropy pool so much. Another possibility would be to enhance > > apr's random number source API to allow requesting random bytes > > without so much entropy (erandom/frandom) or without blocking on lack > > of entropy (urandom). > > > > Fixing this by extending APR's API sounds good. Would the change be > backportable to APR 1.4.x too? Whatever is done, please do not introduce problems on operating systems where the built-in strong random number generator works just fine without blocking applications all the time. > > What do you think? Is forcing !APR_HAS_RANDOM and just using > > apr_time_now() as Debian currently does safe, or does it expose users > > to a security risk? > > Something tells me that when a cryptographic protocol calls for random > numbers then a quasiconstant or known value wouldn't do instead. Put more bluntly, if protocol designers bothered with putting a random number into their protocol, implementors must assume that designers had a good reason for the number to be *random*. Using the current time instead of a random number is breaking the protocol implementation. I mean, seriously, it's not like Debian didn't have a track record of breaking security with custom patches. Remember the ssh keys debacle? I am amazed to learn such a patch exists in Debian's Subversion packages. I think this patch should be pulled from Debian's Subversion packages immediately.
Re: [Subversion Wiki] Update of "CopySvnsyncMirrorToMaster" by JulianFoad
Daniel Shahaf writes: > Agreed that the data in the known-good copy should replace the data in > other copies > > Disagreed that we should recommend rsync'ing db/ to a portable disk over > hotcopy to such a disk hotcopy to a portable disk would be one way, but the admin may want to rsync over the network. -- uberSVN: Apache Subversion Made Easy http://www.uberSVN.com
Re: auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
On Thu, Nov 03, 2011 at 02:54:04AM -0500, Jonathan Nieder wrote: > The failure is described in tests.log: > > START: auth-test > svn_tests: E26: svn_auth_get_platform_specific_client_providers > should return an array of 5 providers > FAIL: lt-auth-test 1: test retrieving platform-specific auth providers > END: auth-test > ELAPSED: auth-test 0:00:00.179133 > > Indeed, instrumenting the test, we learn that the actual number of > providers returned is 1. > > Known problem? > > Jonathan Try setting LD_LIBRARY_PATH so that libs from your custom Subversion build are found when Subversion's kwallet and gnome-keyring plugins are dlopen()ed.
Re: Subversion 1.7.1 SRPM building tools and utilities
On Wednesday, November 02, 2011 9:09 PM, "Nico Kadel-Garcia" wrote: > On Wed, Nov 2, 2011 at 7:05 AM, Stefan Sperling wrote: > > On Sun, Oct 30, 2011 at 04:45:51PM -0400, Nico Kadel-Garcia wrote: > >> [ Accidentally sent this to old dev address, resending ] > >> > >> The attached patch is a pretty thorough rewrite of the SRPM building > >> utilities, and spec files, for Subversion-1.7.1. It includes a number > >> of components: > > > > Thanks for sending this. > > ... > > Would it make more sense to version your package at rpmforge? > > I'm trying to get it in at RPMforge; I've submitted packages > successfully there before. The problem is that I don't personally have > a reliable public FTP site to post the bulky diffs in, and the > complete rewrite of the .spec file is kind of bulky to submit via > email, and I'm not hearing a lot back from that repository. I've > already submitted it to Fedora. Publishing it to the dev mailing list > gave me a good place to submit it so the RPMforge maintainers can get > at it. > dev@svn is not a pastebin :P > My big concern with what's in the Subversion source tree is that > the existing RPM building structure is dangerous. It overwrites the > builder's $HOME/.rpmmacros file. And it leaves multiple, unusable > ".spec" files scattered in the repository. The new structure has *one* > file, a .spec.in file, that gets tweaked and a clean SRPM built locally, > without overwriting .rpmmacros. Building from SRPM is left as an > exercise for the builder: I personally use "mock" in order to build > clean RPMs without having to mess up my development environment, and > to make sure that I haven't left out dependencies. > > As it stands, simply deleting the existing rhel-3, rhel-4, and rhel-5 > legacy building bundles and leaving a pointer to the archive patch > would probably be enough to allow others to use it as necessary. > If they're broken (and I agree that overwriting dotfiles constitutes a bug), and if no one uses them, then yes we could delete them. (Though perhaps someone will apply your patch before then.)
Re: [RFC] ra_svn::make_nonce: how to cope with entropy shortages?
On Thursday, November 03, 2011 12:44 AM, "Jonathan Nieder" wrote: > Hi, > > In December, 2004, Alex Jacques reported[1]: > > > If there is little entropy available then svnserve can hang for up to > > several minutes waiting on /dev/random. This is similar to the problem > > listed here: > > > > http://svnbook.red-bean.com/en/1.1/apb.html#svn-ap-b-sect-1.2.14 > > As a result, since November, 2005, svnserve on Debian has been patched > to avoid calling apr_generate_random_bytes() and just use the current > time as a nonce. That's ugly. So I'd like your advice. > > The random number is used by the CRAM-MD5 auth mechanism in svnserve > (and could be used by other programs calling svn_ra_svn_cram_server > from libsvn_ra_svn-1, though I doubt anyone does that). One > possibility would be to have an internal random number source > initialized from a strong random source at startup, to avoid drawing > on the entropy pool so much. Another possibility would be to enhance > apr's random number source API to allow requesting random bytes > without so much entropy (erandom/frandom) or without blocking on lack > of entropy (urandom). > Fixing this by extending APR's API sounds good. Would the change be backportable to APR 1.4.x too? > What do you think? Is forcing !APR_HAS_RANDOM and just using > apr_time_now() as Debian currently does safe, or does it expose users > to a security risk? Something tells me that when a cryptographic protocol calls for random numbers then a quasiconstant or known value wouldn't do instead. It might still provide some security guarantees but I wouldn't assume it would provide all guarantees of the correctly-executed protocol. > Would some other simple fix (e.g., calling > "lrand48") make sense and be generally useful? > No objection here, but doesn't APR already use lrand48() if it's available? > [1] http://bugs.debian.org/285708 >
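For readers following along, a sketch of the mechanism under discussion (not a verbatim copy of libsvn_ra_svn's nonce code): with APR_HAS_RANDOM the nonce comes from apr_generate_random_bytes(), and forcing !APR_HAS_RANDOM, as the Debian patch effectively does, degrades it to the current time:

    #include <apr_general.h>   /* apr_generate_random_bytes() */
    #include <apr_time.h>      /* apr_time_now() */
    #include "svn_error.h"

    /* Hedged sketch of a CRAM-MD5 nonce source. */
    static svn_error_t *
    make_nonce_sketch(apr_uint64_t *nonce)
    {
    #if APR_HAS_RANDOM
      /* Strong randomness; on Linux this may block if the entropy pool
         is exhausted, which is the hang reported in the Debian bug. */
      apr_status_t status = apr_generate_random_bytes((unsigned char *)nonce,
                                                      sizeof(*nonce));
      if (status)
        return svn_error_wrap_apr(status, "Can't get random data");
    #else
      /* Fallback: a quasi-predictable value, which is what the thread
         argues is unsafe for a protocol that asks for randomness. */
      *nonce = apr_time_now();
    #endif
      return SVN_NO_ERROR;
    }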
Re: auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
On Thursday, November 03, 2011 2:54 AM, "Jonathan Nieder" wrote: > Hi, > > Building r1196980: > > svn export ~/src/subversion svn-test > cd svn-test > ./autogen.sh > mkdir BUILD > cd BUILD > PYTHON=python RUBY=ruby \ > ../configure --prefix=$HOME/opt/subversion \ > --mandir=\$${prefix}/share/man \ > --with-apr=/usr \ > --with-apr-util=/usr \ > --with-neon=/usr \ > --with-serf=/usr \ > --with-berkeley-db=:::db \ That works? We should mention it in the help output... > --with-sasl=/usr \ > --with-editor=editor \ > --with-ruby-sitedir=/usr/lib/ruby \ > --with-swig=/usr \ > --with-kwallet --with-gnome-keyring \ > --enable-javahl --without-jikes \ > --with-jdk=/usr/lib/jvm/default-java \ > --with-junit=/usr/share/java/junit.jar \ > --with-apxs=/usr/bin/apxs2 --disable-mod-activation > make check > > I get the following result: > > Running tests in auth-test [1/89].FAILURE > Running tests in cache-test [2/89]success > [... lots of successes snipped ...] > > The failure is described in tests.log: > > START: auth-test > svn_tests: E26: svn_auth_get_platform_specific_client_providers > should return an array of 5 providers > FAIL: lt-auth-test 1: test retrieving platform-specific auth providers > END: auth-test > ELAPSED: auth-test 0:00:00.179133 > > Indeed, instrumenting the test, we learn that the actual number of > providers returned is 1. > > Known problem? > Buildbots are green and I don't remember this one before. Can you set a breakpoint in the test and check which four providers aren't returned? (Or, if that's easier, which one provider _is_ returned.) > Jonathan >
Re: [PATCH/RFC v2] Optionally allow binaries to run against older SQLite
On Wednesday, November 02, 2011 4:25 PM, "Jonathan Nieder" wrote: > Hi again, > > Here's a hopefully saner patch. Thanks much for the quick feedback on > the previous incarnation. > > I'm not very happy about putting -DSVN_SQLITE_COMPAT_VERSION in CFLAGS > --- does subversion have a config.h somewhere? > http://s.apache.org/xy-problem --- Why do you think you need config.h? Anyway: have a look at the other build/ac-macros/ files for precedents. The makefile supports EXTRA_CFLAGS for the user to add his own CFLAGS at 'make' time. > I can split this into three patches (one to expose > SVN_SQLITE_VERNUM_PARSE from build/ac-macros/sqlite.m4, one > introducing the SVN_SQLITE_MIN_VERSION_NUMBER handling in code, and > another to wire it up into the configure script) if that's wanted. > Let me know. > Will do. Thanks. > Thanks a lot, > Jonathan > > [[[ > If libsvn is built against one version of sqlite3 and then run using > an older version, currently svn will error out: > > svn: Couldn't perform atomic initialization > svn: SQLite compiled for 3.7.4, but running with 3.7.3 > > Not all sqlite3 updates include ABI changes that are relevant to > Subversion, though. This can be annoying when building on a modern > system in order to deploy on a less modern one. > > This patch introduces a new --enable-sqlite-compatibility-version=X.Y.Z The above line should be the first line of the log message: describe the change first and the historical reasons for it either later or in code comments. > option for ./configure to allow people building Subversion to > specify how old the system being deployed on might be, to address > that. Setting the compatibility version to an older version turns > on code in subversion that works around infelicities in those > older versions and relaxes the version check at initialization > time. > > If the compat version is set to a version before the minimum supported > version (3.6.18), the build will fail early so the person building can > adjust her expectations. > > The default for the compat version is the currently installed version, > so there should be no change in behavior for users not passing this > option to the configure script. > Sounds sane. > * subversion/include/private/svn_dep_compat.h (SQLITE_VERSION_AT_LEAST): > Check SVN_SQLITE_MIN_VERSION_NUMBER instead of SQLITE_VERSION_NUMBER. > > * subversion/libsvn_subr/sqlite.c > (SVN_SQLITE_MIN_VERSION_NUMBER): Set to SQLITE_VERSION_NUMBER if > undefined. > (init_sqlite): Check sqlite version against SVN_SQLITE_MIN_VERSION_NUMBER > instead of SQLITE_VERSION_NUMBER. > > * configure.ac: Provide a --enable-sqlite-compatibility-version switch > that sets SVN_SQLITE_MIN_VERSION_NUMBER. > > * build/ac-macros/sqlite.m4 > (SVN_SQLITE_VERNUM_PARSE): Make it reusable (in particular for > configure.ac), by taking a version string and a variable to store the > corresponding version number as arguments. > (SVN_SQLITE_MIN_VERNUM_PARSE): Simplify by calling > SVN_SQLITE_VERNUM_PARSE. > (SVN_SQLITE_PKG_CONFIG): Adapt SVN_SQLITE_VERNUM_PARSE call to the new > calling convention. 
> > Found by: Joao Palhoto Matos > http://bugs.debian.org/608925 > ]]] > > Index: subversion/include/private/svn_dep_compat.h > === > --- subversion/include/private/svn_dep_compat.h (revision 1196775) > +++ subversion/include/private/svn_dep_compat.h (working copy) > @@ -120,7 +120,7 @@ typedef apr_uint32_t apr_uintptr_t; > */ > #ifndef SQLITE_VERSION_AT_LEAST > #define SQLITE_VERSION_AT_LEAST(major,minor,patch) \ > -((major*1000000 + minor*1000 + patch) <= SQLITE_VERSION_NUMBER) > +((major*1000000 + minor*1000 + patch) <= SVN_SQLITE_MIN_VERSION_NUMBER) > #endif /* SQLITE_VERSION_AT_LEAST */ > What if SVN_SQLITE_MIN_VERSION_NUMBER isn't #define'd yet? Below you have an #ifndef guard, so presumably you need one here too. > #ifdef __cplusplus > Index: subversion/libsvn_subr/sqlite.c > === > --- subversion/libsvn_subr/sqlite.c (revision 1196775) > +++ subversion/libsvn_subr/sqlite.c (working copy) > @@ -50,6 +50,10 @@ >#include > #endif > > +#ifndef SVN_SQLITE_MIN_VERSION_NUMBER > + #define SVN_SQLITE_MIN_VERSION_NUMBER SQLITE_VERSION_NUMBER > +#endif > + > #if !SQLITE_VERSION_AT_LEAST(3,6,18) > #error SQLite is too old -- version 3.6.18 is the minimum required version > #endif > @@ -606,7 +610,7 @@ static volatile svn_atomic_t sqlite_init_state = 0 > static svn_error_t * > init_sqlite(void *baton, apr_pool_t *pool) > { > - if (sqlite3_libversion_number() < SQLITE_VERSION_NUMBER) > + if (sqlite3_libversion_number() < SVN_SQLITE_MIN_VERSION_NUMBER) I don't have a working copy handy. Are there instances of SQLITE_VERSION_NUMBER that _didn't_ get changed to SVN_SQLITE_MIN_VERSION_NUMBER? > { >
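To make the missing-guard question above concrete, here is a sketch of how the fallback define and the macro could live together in the header that uses them (illustrative placement, not a patch against the real svn_dep_compat.h; the 1000000 multiplier follows SQLite's X*1000000 + Y*1000 + Z version-number scheme):

    #include <sqlite3.h>

    /* Default to the headers we are compiling against, so a plain
       "gcc -c" without -DSVN_SQLITE_MIN_VERSION_NUMBER still works. */
    #ifndef SVN_SQLITE_MIN_VERSION_NUMBER
    #define SVN_SQLITE_MIN_VERSION_NUMBER SQLITE_VERSION_NUMBER
    #endif

    /* Feature checks key off the oldest SQLite we promise to run
       against, not the one present at build time. */
    #ifndef SQLITE_VERSION_AT_LEAST
    #define SQLITE_VERSION_AT_LEAST(major,minor,patch) \
      ((major*1000000 + minor*1000 + patch) <= SVN_SQLITE_MIN_VERSION_NUMBER)
    #endif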
auth-test fails (E200006: svn_auth_get_platform_specific_client_providers should return an array of 5 providers)
Hi, Building r1196980: svn export ~/src/subversion svn-test cd svn-test ./autogen.sh mkdir BUILD cd BUILD PYTHON=python RUBY=ruby \ ../configure --prefix=$HOME/opt/subversion \ --mandir=\$${prefix}/share/man \ --with-apr=/usr \ --with-apr-util=/usr \ --with-neon=/usr \ --with-serf=/usr \ --with-berkeley-db=:::db \ --with-sasl=/usr \ --with-editor=editor \ --with-ruby-sitedir=/usr/lib/ruby \ --with-swig=/usr \ --with-kwallet --with-gnome-keyring \ --enable-javahl --without-jikes \ --with-jdk=/usr/lib/jvm/default-java \ --with-junit=/usr/share/java/junit.jar \ --with-apxs=/usr/bin/apxs2 --disable-mod-activation make check I get the following result: Running tests in auth-test [1/89].FAILURE Running tests in cache-test [2/89]success [... lots of successes snipped ...] The failure is described in tests.log: START: auth-test svn_tests: E26: svn_auth_get_platform_specific_client_providers should return an array of 5 providers FAIL: lt-auth-test 1: test retrieving platform-specific auth providers END: auth-test ELAPSED: auth-test 0:00:00.179133 Indeed, instrumenting the test, we learn that the actual number of providers returned is 1. Known problem? Jonathan
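To poke at the single failure without rerunning the whole suite, something along these lines should work (the TESTS make variable and the test runner's options are from memory; double-check with --help):

    # re-run just this test program through the harness
    make check TESTS=subversion/tests/libsvn_subr/auth-test

    # or run the libtool wrapper directly from the build tree and
    # list/run individual tests
    cd subversion/tests/libsvn_subr
    ./auth-test --list
    ./auth-test --verbose 1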