Re: [PATCH v2] New dumpstream parser to check version number

2010-08-19 Thread Daniel Shahaf
(read the last review hunk first)

Ramkumar Ramachandra wrote on Thu, Aug 19, 2010 at 11:17:55 +0530:
 [[[
 * subversion/libsvn_repos/load.c
   (svn_repos_parse_dumpstream2, svn_repos_parse_dumpstream3): Rename
   the older function and add a version_number argument; error out if
   there's a version mismatch.
 
 * subversion/include/svn_repos.h
   (svn_repos_parse_dumpstream2, svn_repos_parse_dumpstream3): Add new
   function and mark svn_repos_parse_dumpstream2 as deprecated.
 
 * subversion/svnrdump/load_editor.c
   (drive_dumpstream_loader): Update to use the new API, and call it
   with version_number 3.
 ]]]
 
 Index: subversion/svnrdump/load_editor.c
 ===================================================================
 --- subversion/svnrdump/load_editor.c (revision 986884)
 +++ subversion/svnrdump/load_editor.c (working copy)
 @@ -540,8 +540,10 @@ drive_dumpstream_loader(svn_stream_t *stream,
pb = parse_baton;
  
SVN_ERR(svn_ra_get_repos_root2(session, &(pb->root_url), pool));
 -  SVN_ERR(svn_repos_parse_dumpstream2(stream, parser, parse_baton,
 -  NULL, NULL, pool));
 +  SVN_ERR(svn_repos_parse_dumpstream3(stream, parser, parse_baton,
 +  NULL, NULL,
 +  SVN_REPOS_DUMPFILE_FORMAT_VERSION,

Wrong macro: when we introduce dumpfile format 4, this will become 4,
but I think you want it to remain 3 even then, right?

The alternative is to add a new (feature-oriented) macro whose value
will *not* change.  See libsvn_fs_fs/fs.h:90 and below.

 +  pool));
  
return SVN_NO_ERROR;
  }
 Index: subversion/include/svn_repos.h
 ===
 --- subversion/include/svn_repos.h   (revision 986884)
 +++ subversion/include/svn_repos.h   (working copy)
 @@ -2286,6 +2286,7 @@ svn_repos_node_from_baton(void *edit_baton);
  /* The RFC822-style headers in our dumpfile format. */
  #define SVN_REPOS_DUMPFILE_MAGIC_HEADER "SVN-fs-dump-format-version"
  #define SVN_REPOS_DUMPFILE_FORMAT_VERSION   3
 +#define SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION -1
  #define SVN_REPOS_DUMPFILE_UUID "UUID"
  #define SVN_REPOS_DUMPFILE_CONTENT_LENGTH "Content-length"
  
 @@ -2646,6 +2647,11 @@ typedef svn_repos_parse_fns2_t svn_repos_parser_fn
   * @a cancel_baton as argument to see if the client wishes to cancel
   * the dump.
   *
 + * If @a exact_version is SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION,

#define macroname SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION

In doxygen, you can write #macroname or @c macroname. (IIRC the first
makes a link and the second makes constant-width font; untested.)


 + * it is ignored and the dumpstream is parsed without this
 + * information. Else, the function checks the dumpstream's version
 + * number, and errors out if there's a mismatch.
 + *

Sometimes, in situations like this we guarantee exactly which error code
will be returned, for the benefit of API users who want to trap a
specific (non-fatal) error condition.  See svn_ra_open4() for an
example.

   * This parser has built-in knowledge of the dumpfile format, but only
   * in a general sense:
   *
 @@ -2661,17 +2667,33 @@ typedef svn_repos_parse_fns2_t svn_repos_parser_fn
   * This is enough knowledge to make it easy on vtable implementors,
   * but still allow expansion of the format:  most headers are ignored.
   *
 - * @since New in 1.1.
 + * @since New in 1.7.
   */
  svn_error_t *
 -svn_repos_parse_dumpstream2(svn_stream_t *stream,
 +svn_repos_parse_dumpstream3(svn_stream_t *stream,
  const svn_repos_parse_fns2_t *parse_fns,
  void *parse_baton,
  svn_cancel_func_t cancel_func,
  void *cancel_baton,
 +int exact_version,
  apr_pool_t *pool);
  
 +/**
 + * Similar to svn_repos_parse_dumpstream3(), but with @a exact_version
 + * always set to SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION.
 + *
 + * @since New in 1.1.
 + * @deprecated Provided for backward compatibility with the 1.6 API.
 + */

Looks good.

 +SVN_DEPRECATED
 +svn_error_t *svn_repos_parse_dumpstream2(svn_stream_t *stream,
 +const svn_repos_parse_fns2_t *parse_fns,
 +void *parse_baton,

Style nitpick:
The parameters aren't aligned properly.  (the previous declaration does
it correctly.)

 +svn_cancel_func_t cancel_func,
 +void *cancel_baton,
 +apr_pool_t *pool);
  
 +
  /**
   * Set @a *parser and @a *parse_baton to a vtable parser which commits new
   * revisions to the fs in @a repos.  The constructed parser will treat
 Index: subversion/libsvn_repos/load.c
 

Re: [RFC/PATCH] Create a fresh svn_repos_parse_fns3_t

2010-08-19 Thread Daniel Shahaf
Ramkumar Ramachandra wrote on Thu, Aug 19, 2010 at 11:12:14 +0530:
 Hi,
 
 I sent a patch a while ago for svn_repos_parse_dumpstream3. While I
 wait for approval, this is an RFC patch describing my future plan once
 that patch gets approved. It can be described tersely as "While at it
 (adding the new callback), fix everything that's wrong with the
 struct." I'm planning to fix a few other things as well, but this is
 the basic sketch. See load_editor.c for the use case: I actually
 stuffed a scratch pool into the parse_baton so I could use it
 everywhere.
 
 -- Ram

I derived the diff between the existing/proposed structs.  I think this
form of the RFC will be easier to review:

[[[
--- svn_repos_parse_fns2_t  2010-08-19 10:23:09.0 +0300
+++ svn_repos_parse_fns3_t  2010-08-19 10:23:08.0 +0300
@@ -1,10 +1,20 @@
 /**
- * A vtable that is driven by svn_repos_parse_dumpstream2().
+ * A vtable that is driven by svn_repos_parse_dumpstream3().
  *
- * @since New in 1.1.
+ * @since New in 1.7.
  */
-typedef struct svn_repos_parse_fns2_t
+typedef struct svn_repos_parse_fns3_t
 {
 +  /** The parser has parsed the version information from the header
 +   *  of the dumpstream within the parsing session represented by
 +   *  @a parse_baton. The version information is available in @a
+   *  version, and a scratch pool @a pool is available for any
+   *  temporary allocations.
+   */
+  svn_error_t *(*dumpstream_version_record)(int version,
+void *parse_baton,
+apr_pool_t *pool);
+
   /** The parser has discovered a new revision record within the
* parsing session represented by @a parse_baton.  All the headers are
 * placed in @a headers (allocated in @a pool), which maps const
@@ -36,21 +46,34 @@
   void *revision_baton,
   apr_pool_t *pool);
 
-  /** For a given @a revision_baton, set a property @a name to @a value. */
+  /** For a given @a revision_baton, set a property @a name to @a
+   *  value. Scratch pool @a pool is available for any temporary
+   *  allocations.
+   */
   svn_error_t *(*set_revision_property)(void *revision_baton,
 const char *name,
-const svn_string_t *value);
+const svn_string_t *value,
+apr_pool_t *pool);
 
-  /** For a given @a node_baton, set a property @a name to @a value. */
+  /** For a given @a node_baton, set a property @a name to @a
+   *  value. Scratch pool @a pool is available for any temporary
+   *  allocations.
+   */
   svn_error_t *(*set_node_property)(void *node_baton,
 const char *name,
-const svn_string_t *value);
-
-  /** For a given @a node_baton, delete property @a name. */
-  svn_error_t *(*delete_node_property)(void *node_baton, const char *name);
+const svn_string_t *value,
+apr_pool_t *pool);
 
-  /** For a given @a node_baton, remove all properties. */
-  svn_error_t *(*remove_node_props)(void *node_baton);
+  /** For a given @a node_baton, delete property @a name. Scratch pool
+* @a pool is available for any temporary allocations.
+*/
+  svn_error_t *(*delete_node_property)(void *node_baton,
+   const char *name,
+   apr_pool_t *pool);
+
+  /** For a given @a node_baton, remove all properties. Scratch pool
+  @a pool is available for any temporary allocations. */
+  svn_error_t *(*remove_node_props)(void *node_baton, apr_pool_t *pool);
 
   /** For a given @a node_baton, receive a writable @a stream capable of
* receiving the node's fulltext.  After writing the fulltext, call
@@ -59,9 +82,12 @@
* If a @c NULL is returned instead of a stream, the vtable is
* indicating that no text is desired, and the parser will not
* attempt to send it.
+   *
+   * Scratch pool @a pool is available for any temporary allocations.
*/
   svn_error_t *(*set_fulltext)(svn_stream_t **stream,
-   void *node_baton);
+   void *node_baton,
+   apr_pool_t *pool);
 
   /** For a given @a node_baton, set @a handler and @a handler_baton
* to a window handler and baton capable of receiving a delta
@@ -71,21 +97,26 @@
* If a @c NULL is returned instead of a handler, the vtable is
* indicating that no delta is desired, and the parser will not
* attempt to send it.
+   *
+   * Scratch pool @a pool is available for any temporary allocations.
*/
   svn_error_t *(*apply_textdelta)(svn_txdelta_window_handler_t *handler,
   void **handler_baton,
-  void 

Re: svn commit: r986865 - /subversion/trunk/notes/wc-ng/node-data

2010-08-19 Thread Philip Martin
Greg Stein gst...@gmail.com writes:

 But that said, there is an argument for combining all three conceptual
 tables into one. Is that what you guys were suggesting?

Yes.  The tables are so similar.  For example, base_node's
repos_id/repos_relpath/revnum and the working_node's
copyfrom_id/copyfrom_relpath/copyfrom_revnum are both a sort of
repos-node-rev.  For op_depth 0 the repos-node-rev is always set,
there is pristine content and it's the same node in the repository and
wc.  For other op_depth the repos-node-rev is optional, copies have
it, adds don't; but when it exists it means much the same: the node
has pristine content.  op_depth tells us whether the repos-node-rev is
a copy or a base and that's exactly what op_depth is for.

Now op_depth 0 can be split out into a separate base_node table, our
current model, but during our meeting we were wondering if that is
necessary or useful.

We do have to have all fields at all op_depth, and some are not always
valid.  dav_cache only applies to op_depth 0, translated_size only
applies to the greatest op_depth for any local_relpath, etc.; but
most of the fields are common.  Even dav_cache might apply to higher
levels, perhaps it could be useful for the copyfrom?

-- 
Philip


Re: svn diff optimization to make blame faster?

2010-08-19 Thread Johan Corveleyn
On Wed, Aug 18, 2010 at 3:44 PM, Johan Corveleyn jcor...@gmail.com wrote:
 On Wed, Aug 18, 2010 at 12:49 PM, Stefan Sperling s...@elego.de wrote:
 Can you show a profiler run that illustrates where the client is
 spending most of its time during diff? That would probably help with
 getting opinions from people, because it saves them from spending time
 doing this research themselves.
 You've already hinted at svn_diff__get_tokens() in another mail, but
 a real profiler run would show more candidates.

 Sorry, I'm still very much a beginner in C (I've been programming for
 10 years, but mainly in Java). Especially the tooling is a steep part
 of the learning curve :-), so I don't know (how to use) a profiler
 yet. Any suggestions? I'm on Windows (XP), using VCE 2008, and used
 cygwin to compare with GNU diff.

I googled around a bit for C profilers on Windows. I found several
which seem useful:
- very sleepy (http://www.codersnotes.com/sleepy/sleepy)
- Shiny (http://sourceforge.net/projects/shinyprofiler/)
- AMD CodeAnalyst (http://developer.amd.com/cpu/CodeAnalyst/Pages/default.aspx)
- AQTime - not free, but has a trial of 30 days
(http://www.automatedqa.com/products/aqtime/)

Before I dive in and try them out: any preference/favorites from the
windows devs on this list? Or other suggestions?

Further, some additional context to the manual-profiling numbers: see below.

 For the time being, I've helped myself using apr_time_now() from
 apr_time.h and printf statements. Not terribly accurate and probably
 somewhat overheadish, but it gives me some hints about the
 bottlenecks.

 FWIW, below is the output of a recent run with my instrumented trunk
 build. I've instrumented svn_diff_diff in diff.c and
 svn_diff__get_tokens in token.c. I checked out bigfile.xml from a
 repository, and edited a single line of it in my working copy. The two
 statements Starting diff and Diff finished are the first and last
 statements inside the svn_diff_diff function. These are numbers from
 the second run (any following runs show approximately the same
 numbers).

 $ time svn diff bigfile.xml
 Starting diff ... (entered svn_diff_diff in diff.c)
  - calling svn_diff__get_tokens for svn_diff_datasource_original
      == svn_diff__get_tokens datasource_open: 0 usec
      == svn_diff__get_tokens while loop: 265625 usec
         calls to datasource_get_next_token: 62500 usec
                svn_diff__tree_insert_token: 171875 usec
                                       rest: 15625 usec
  - done: 281250 usec
  - calling svn_diff__get_tokens for svn_diff_datasource_modified
      == svn_diff__get_tokens datasource_open: 234375 usec
      == svn_diff__get_tokens while loop: 312500 usec
         calls to datasource_get_next_token: 62500 usec
                svn_diff__tree_insert_token: 171875 usec
                                       rest: 46875 usec
  - done: 562500 usec
  - calling svn_diff__lcs
  - done: 0 usec
  - calling svn_diff__diff
  - done: 0 usec
 Diff finished in 875000 usec (875 millis)
 Index: bigfile.xml
 ===
 [snip]

 real    0m1.266s
 user    0m0.015s
 sys     0m0.031s

For comparison: a debug build from trunk, with only one
instrumentation spot (at start and end of svn_diff_diff):
$ time svn diff bigfile.xml
Starting diff ... (entered svn_diff_diff in diff.c)
Diff finished in 703125 usec (703 millis)
[snip]
real0m1.109s
user0m0.015s
sys 0m0.031s

So the instrumentation in the critical loop probably costs me around
150-200 ms.

A release build will also probably be faster, but I have no time to
make one and test it now. If I time my 1.6.5 build from tigris.org,
that's still a lot faster:
$ time svn diff bigfile.xml
[snip]
real0m0.468s
user0m0.015s
sys 0m0.031s

Maybe that can be totally attributed to the build (release vs. debug),
or maybe 1.6.5 was faster at svn diff than trunk is?

 Some observations:
 - svn_diff__get_tokens takes most of the time
 - for some reason, in the case of datasource_modified, the call to
 datasource_open takes a long time (234 ms). In case of
 datasource_original, it's instantaneous. Maybe because of translation,
 ... of the pristine file? But I'd think that would be the original??
 - apart from that, most of the time goes to the while loop in
 svn_diff__get_tokens.
 - Inside the while loop, most time is spent on calls to
 svn_diff__tree_insert_token (which compares tokens (=lines) to insert
 them into some tree structure). Calls to datasource_get_next_token
 also take some time.

 I'm not too sure of the accuracy of those last numbers with my simple
 profiling method, because it's the accumulated time of calls inside a
 while loop (with 61000 iterations).


 For completeness, the same diff with /usr/bin/diff (under cygwin), of
 the edited bigfile.xml vs. the original, as of the second diff run:

 $ ls -l settings.xml
 -rwxr-xr-x+ 1 User None 1447693 2010-08-17 14:20 bigfile.xml
 $ time diff 

RE: svn commit: r986865 - /subversion/trunk/notes/wc-ng/node-data

2010-08-19 Thread Bert Huijben


 -Original Message-
 From: Philip Martin [mailto:philip.mar...@wandisco.com]
 Sent: donderdag 19 augustus 2010 3:39
 To: Greg Stein
 Cc: Bert Huijben; dev@subversion.apache.org; phi...@apache.org
 Subject: Re: svn commit: r986865 - /subversion/trunk/notes/wc-ng/node-
 data
 
 Greg Stein gst...@gmail.com writes:
 
  But that said, there is an argument for combining all three conceptual
  tables into one. Is that what you guys were suggesting?
 
 Yes.  The tables are so similar.  For example, base_node's
 repos_id/repos_relpath/revnum and the working_node's
 copyfrom_id/copyfrom_relpath/copyfrom_revnum are both a sort of
 repos-node-rev.  For op_depth 0 the repos-node-rev is always set,
 there is pristine content and it's the same node in the repository and
 wc.  For other op_depth the repos-node-rev is optional, copies have
 it, adds don't; but when it exists it means much the same: the node
 has pristine content.  op_depth tells us whether the repos-node-rev is
 a copy or a base and that's exactly what op_depth is for.
 
 Now op_depth 0 can be split out into a separate base_node table, our
 current model, but during our meeting we were wondering if that is
 necessary or useful.
 
 We do have to have all fields at all op_depth, and some are not always
 valid.  dav_cache only applies to op_depth 0, translated_size only
 applies to the greatest op_depth for any local_relpath, etc.; but
 most of the fields are common.  Even dav_cache might apply to higher
 levels, perhaps it could be useful for the copyfrom?

How would this handle deleted nodes in one layer (then some overlays) and
then calling _read_children(). I think that would become a union/select over
multiple layers? We already had some performance issues there in the past
and I hope this only makes this query easier. (SELECT DISTINCT name where
parent_relpath=? or something)

Before this new idea I expected that we didn't have to query the NODE_DATA
if you were just querying _read_info() for kind and status. So for those two
most common fields I didn't expect any slowdown over the current model. 
With moving everything in one table we will need the sqlite index for
optimization in a few more cases to keep the same speed. (I think SQLite can
handle this for us as one of the nice features of using a real database, but
nevertheless, I think we should try to verify this before moving everything
into one table)

Bert



RE: svn diff optimization to make blame faster?

2010-08-19 Thread Bert Huijben


 -Original Message-
 From: Johan Corveleyn [mailto:jcor...@gmail.com]
 Sent: donderdag 19 augustus 2010 4:55
 To: Subversion Development
 Subject: Re: svn diff optimization to make blame faster?
 


 
 I googled around a bit for C profilers on Windows. I found several
 which seem useful:
 - very sleepy (http://www.codersnotes.com/sleepy/sleepy)
 - Shiny (http://sourceforge.net/projects/shinyprofiler/)
 - AMD CodeAnalyst
 (http://developer.amd.com/cpu/CodeAnalyst/Pages/default.aspx)
 - AQTime - not free, but has a trial of 30 days
 (http://www.automatedqa.com/products/aqtime/)

Microsoft and Intel have some advanced profilers integrated in their C
products. (Only a bit of experience with those)

I had good results with very sleepy. When you have the PDB files available
it is as simple as just running the application and you get an easy to
understand result. (Make sure you have a recent version; the older versions
only allowed attaching to an already running process)

Bert



Re: svn commit: r986865 - /subversion/trunk/notes/wc-ng/node-data

2010-08-19 Thread Philip Martin
Bert Huijben b...@vmoo.com writes:

 How would this handle deleted nodes in one layer (then some overlays) and
 then calling _read_children(). I think that would become a union/select over
 multiple layers? We already had some performance issues there in the past
 and I hope this only makes this query easier. (SELECT DISTINCT name where
 parent_relpath=? or something)

 Before this new idea I expected that we didn't have to query the NODE_DATA
 if you were just querying _read_info() for kind and status. So for those two
 most common fields I didn't expect any slowdown over the current model. 
 With moving everything in one table we will need the sqlite index for
 optimization in a few more cases to keep the same speed. (I think SQLite can
 handle this for us as one of the nice features of using a real database, but
 nevertheless, I think we should try to verify this before moving everything
 into one table)

I'm not an SQL expert, much less an SQLite expert, however BASE_NODE
is still available by adding op_depth=0 to the query.  WORKING_NODE is
a bit more complicated as one needs to get the biggest op_depth > 0, so
select op_depth > 0, order by op_depth and limit to 1.  Obviously we
will have to include op_depth in the SQLite index.

In cases such as _read_info where both BASE_NODE and WORKING_NODE are
required we can ask for the biggest op_depth first and if this turns
out to be zero then we find out that there is no WORKING_NODE and get
the BASE_NODE with one query.  For unmodified nodes this might be
faster than separate BASE/WORKING.

I'm not sure how _read_children would be affected.  SELECT DISTINCT
probably allows us to count them, but I don't know how to construct
the query to return the greatest op_depth for each name.

-- 
Philip


Re: [PATCH v2] New dumpstream parser to check version number

2010-08-19 Thread Ramkumar Ramachandra
Hi Daniel,

Daniel Shahaf writes:
  -  SVN_ERR(svn_repos_parse_dumpstream2(stream, parser, parse_baton,
  -  NULL, NULL, pool));
  +  SVN_ERR(svn_repos_parse_dumpstream3(stream, parser, parse_baton,
  +  NULL, NULL,
  +  SVN_REPOS_DUMPFILE_FORMAT_VERSION,
 
 Wrong macro: when we introduce dumpfile format 4, this will become 4,
 but I think you want it to remain 3 even then, right?

 Actually, I'd want the svnrdump tests to fail so someone (or me)
immediately fixes it to use version 4: it'll probably be a performance
boost as well.

  +#define SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION -1
   #define SVN_REPOS_DUMPFILE_UUID "UUID"
   #define SVN_REPOS_DUMPFILE_CONTENT_LENGTH "Content-length"
   
  @@ -2646,6 +2647,11 @@ typedef svn_repos_parse_fns2_t svn_repos_parser_fn
* @a cancel_baton as argument to see if the client wishes to cancel
* the dump.
*
  + * If @a exact_version is SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION,
 
 #define macroname SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION
 
 In doxygen, you can write #macroname or @c macroname. (IIRC the first
 makes a link and the second makes constant-width font; untested.)

Ok, thanks for the pointer. Fixed.

  + * it is ignored and the dumpstream is parsed without this
  + * information. Else, the function checks the dumpstream's version
  + * number, and errors out if there's a mismatch.
  + *
 
 Sometimes, in situations like this we guarantee exactly which error code
 will be returned, for the benefit of API users who want to trap a
 specific (non-fatal) error condition.  See svn_ra_open4() for an
 example.

Oh. Is this necessary though? I'm going to move out this code into a
callback soon; users can handle it there after that.

  +SVN_DEPRECATED
  +svn_error_t *svn_repos_parse_dumpstream2(svn_stream_t *stream,
  +const svn_repos_parse_fns2_t *parse_fns,
  +void *parse_baton,
 
 Style nitpick:
 The parameters aren't aligned properly.  (the previous declaration does
 it correctly.)

Fixed.

 I'm not sure what we gain by committing this patch when you already have
 a parse_fns3_t patch outstanding.  (I haven't looked at that thread
 yet.)  Wouldn't it make more sense to first commit that patch, than to
 commit this one, then that one, and then a revision of that one to use
 the new API?

That patch is still an RFC, and it's unlikely to be approved soon I
think. If I were able to send a series, it would roughly look like
this:
1. Create parse_dumpstream3 to include the logic for checking equality
   in version.
2. Note the lack of flexibility in 1, and create a new struct (the
   other patch).
3. Modify parse_dumpstream3 to use the new struct, and move out the
   logic for checking the version into a fresh callback.

 Note: it's acceptable to post patches that depend on previous patches.
 (So you could write this patch in terms of parse_fns3_t directly.)
 
 There are a number of ways to manage the interdependencies... (you
 mentioned quilt on IRC; I know some folks here use Mercurial patch
 queues and similar tricks)

Oh, ok. I'll learn to use Quilt then. Using Git to stage is a bit of
an overkill.

-- Ram

-- >8 --
Index: subversion/include/svn_repos.h
===
--- subversion/include/svn_repos.h  (revision 986884)
+++ subversion/include/svn_repos.h  (working copy)
@@ -2286,6 +2286,7 @@ svn_repos_node_from_baton(void *edit_baton);
 /* The RFC822-style headers in our dumpfile format. */
 #define SVN_REPOS_DUMPFILE_MAGIC_HEADER "SVN-fs-dump-format-version"
 #define SVN_REPOS_DUMPFILE_FORMAT_VERSION   3
+#define SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION -1
 #define SVN_REPOS_DUMPFILE_UUID "UUID"
 #define SVN_REPOS_DUMPFILE_CONTENT_LENGTH "Content-length"
 
@@ -2646,6 +2647,11 @@ typedef svn_repos_parse_fns2_t svn_repos_parser_fn
  * @a cancel_baton as argument to see if the client wishes to cancel
  * the dump.
  *
+ * If @a exact_version is #SVN_REPOS_DUMPFILE_FORMAT_UNKNOWN_VERSION,
+ * it is ignored and the dumpstream is parsed without this
+ * information. Else, the function checks the dumpstream's version
+ * number, and errors out if there's a mismatch.
+ *
  * This parser has built-in knowledge of the dumpfile format, but only
  * in a general sense:
  *
@@ -2661,16 +2667,31 @@ typedef svn_repos_parse_fns2_t svn_repos_parser_fn
  * This is enough knowledge to make it easy on vtable implementors,
  * but still allow expansion of the format:  most headers are ignored.
  *
- * @since New in 1.1.
+ * @since New in 1.7.
  */
 svn_error_t *
-svn_repos_parse_dumpstream2(svn_stream_t *stream,
+svn_repos_parse_dumpstream3(svn_stream_t *stream,
 const svn_repos_parse_fns2_t *parse_fns,
 void 

Re: svn_client_status5() and depth

2010-08-19 Thread Stefan Küng

On 18.08.2010 21:58, Bert Huijben wrote:

I've checked r986510 and I like to add some comments:

In libsvn_client/deprecated.c, svn_client_status4() you pass FALSE for
the new sticky param. But that's wrong: in 1.6.x and previous versions,
the depth was always respected/enforced. That's why I noticed the
changed behavior in the first place!
Shouldn't you pass TRUE there?


Not filtering the nodes was never the intended behavior. It was a
bug, but had useful behavior for your specific case: explicitly pulling
unavailable nodes into the working copy.


I've checked the original docs of the svn_client_status4() API. It 
nowhere mentions that the intended behavior is to show what an 'svn up' 
would do. Actually 'svn up' isn't mentioned at all:

https://svn.apache.org/repos/asf/subversion/tags/1.6.0/subversion/include/svn_client.h

So, no matter if the old behavior is not correct: you can't just change
the behavior and tell people "sorry, that was a bug". It wasn't
documented that way, so people start to rely on what's documented and on
how it actually works.

If you change the behavior, the API is not compatible anymore.

It's perfectly fine to do that with svn_client_status5() if it gets 
documented that way - after all, that's what new APIs are for: not just 
changing the params but also the behavior if necessary.

But the older (existing) APIs must not change their behavior.

What's the harm in leaving the behavior in the old APIs even though it's 
not what you intended? Please read the docs for the API again - that 
intention isn't documented so the old API works as documented.


Stefan

--
   ___
  oo  // \\  De Chelonian Mobile
 (_,\/ \_/ \ TortoiseSVN
   \ \_/_\_/The coolest Interface to (Sub)Version Control
   /_/   \_\ http://tortoisesvn.net


Re: svn commit: r986510 - in /subversion/trunk/subversion: bindings/javahl/native/SVNClient.cpp include/svn_client.h libsvn_client/delete.c libsvn_client/deprecated.c libsvn_client/externals.c libsvn_

2010-08-19 Thread Daniel Shahaf
rhuij...@apache.org wrote on Tue, Aug 17, 2010 at 22:25:09 -:
 +++ subversion/trunk/subversion/include/svn_client.h Tue Aug 17 22:25:09 2010
 @@ -2162,6 +2166,7 @@ svn_client_status5(svn_revnum_t *result_
 +   svn_boolean_t depth_as_sticky,

s/depth_as_sticky/depth_is_sticky/g ?


Re: svn commit: r986510 - in /subversion/trunk/subversion: bindings/javahl/native/SVNClient.cpp include/svn_client.h libsvn_client/delete.c libsvn_client/deprecated.c libsvn_client/externals.c libsvn_

2010-08-19 Thread Stefan Küng

On 19.08.2010 19:10, Daniel Shahaf wrote:

rhuij...@apache.org wrote on Tue, Aug 17, 2010 at 22:25:09 -:

+++ subversion/trunk/subversion/include/svn_client.h Tue Aug 17 22:25:09 2010
@@ -2162,6 +2166,7 @@ svn_client_status5(svn_revnum_t *result_
+   svn_boolean_t depth_as_sticky,


s/depth_as_sticky/depth_is_sticky/g ?


From what I learned, that's correct: it works *as* if the operation is 
sticky, but it still is a readonly operation.


Stefan

--
   ___
  oo  // \\  De Chelonian Mobile
 (_,\/ \_/ \ TortoiseSVN
   \ \_/_\_/The coolest Interface to (Sub)Version Control
   /_/   \_\ http://tortoisesvn.net


Re: [PATCH v2] New dumpstream parser to check version number

2010-08-19 Thread Daniel Shahaf
Ramkumar Ramachandra wrote on Thu, Aug 19, 2010 at 21:28:37 +0530:
 Hi Daniel,
 
 Daniel Shahaf writes:
   -  SVN_ERR(svn_repos_parse_dumpstream2(stream, parser, parse_baton,
   -  NULL, NULL, pool));
   +  SVN_ERR(svn_repos_parse_dumpstream3(stream, parser, parse_baton,
   +  NULL, NULL,
   +  SVN_REPOS_DUMPFILE_FORMAT_VERSION,
  
  Wrong macro: when we introduce dumpfile format 4, this will become 4,
  but I think you want it to remain 3 even then, right?
 
 Actually, I'd want the svnrdump tests to fail so someone (or me)
 immediately fixes it to use version 4: it'll probably be a performance
 boost as well.
 

Feel free to suggest a test that will remind us to patch up v4 support
into svnrdump.  However, as the quoted patch is written, svnrdump will
start croaking on v3 dumpfiles as soon as v4 is defined.  And I do not
think that's acceptable.

(Limit case: if I just go today and write a patch that defines v4 dumps
to be *exactly the same as* v3 dumps, then svnrdump will start to die on
v3 dumps.)

   + * it is ignored and the dumpstream is parsed without this
   + * information. Else, the function checks the dumpstream's version
   + * number, and errors out if there's a mismatch.
   + *
  
  Sometimes, in situations like this we guarantee exactly which error code
  will be returned, for the benefit of API users who want to trap a
  specific (non-fatal) error condition.  See svn_ra_open4() for an
  example.
 
 Oh. Is this necessary though? I'm going to move out this code into a
 callback soon; users can handle it there after that.
 
  I'm not sure what we gain by committing this patch when you already have
  a parse_fns3_t patch outstanding.  (I haven't looked at that thread
  yet.)  Wouldn't it make more sense to first commit that patch, than to
  commit this one, then that one, and then a revision of that one to use
  the new API?
 
 That patch is still an RFC, and it's unlikely to be approved soon I
 think. If I were able to send a series, it would roughly look like
 this:
 1. Create parse_dumpstream3 to include the logic for checking equality
in version.
 2. Note the lack of flexibility in 1, and create a new struct (the
other patch).
 3. Modify parse_dumpstream3 to use the new struct, and move out the
logic for checking the version into a fresh callback.
 
  Note: it's acceptable to post patches that depend on previous patches.
  (So you could write this patch in terms of parse_fns3_t directly.)

So we first create parse_dumpstream3() and then fix it a second later?
I'd rather just revv the parse_fns3_t API (with the other patch) and
then touch parse_dumpstream3() once.  Does that make sense to you?


dev@ and users@ need more moderators?

2010-08-19 Thread Daniel Shahaf
So, artagnon mailed yesterday a note to users@, CCing me.  And now,
23 hours later, I notice it isn't on the list yet.

Joe Schaefer from ASF Infra confirms that there are quite a few legitimate
messages pending moderation on the us...@subversion.apache.org list.  In
other words, it seems that the moderators aren't approving messages to go
through.

Current moderators, is that indeed the situation?

@all, do we need more moderators on us...@?

Daniel
(BCCing users-owner@)


Re: dev@ and users@ need more moderators?

2010-08-19 Thread Daniel Shahaf
Daniel Shahaf wrote on Thu, Aug 19, 2010 at 20:51:08 +0300:
 @all, do we need more moderators on us...@?

By the way, exactly the same people moderate dev@ and us...@.  (i.e.,
users-owner@ and dev-owner@ both go to the same 4 people.)  So I imagine
that, inasmuch as there is a problem, it would apply to both lists.

Daniel
(the dev@ moderation queue currently contains only one message, which is
only an hour old)


Re: svnrdump: The BIG update

2010-08-19 Thread 'Daniel Shahaf'
(sorry for the delay; didn't want to reply while sleepy)

Bert Huijben wrote on Tue, Aug 17, 2010 at 09:30:08 -0700:
 
 
  -Original Message-
  From: Ramkumar Ramachandra [mailto:artag...@gmail.com]
  Sent: dinsdag 17 augustus 2010 9:09
  To: Daniel Shahaf
  Cc: Subversion-dev Mailing List
  Subject: Re: svnrdump: The BIG update
  
  Hi Daniel,
  
  Daniel Shahaf writes:
   Ramkumar Ramachandra wrote on Thu, Aug 12, 2010 at 12:17:34 +0530:
  The dump functionality is also complete- thanks to Stefan's review
  and
  MANY others for cleaning it up. It's however hit a brick wall now
  because of missing headers in the RA layer. Until I (or someone
 else)
  figures out how to fix the RA layer, we can't do better than the
 XFail
  copy-and-modify test I've committed.

 Part of the diff there is lack of SHA-1 headers --- which is unavoidable
 until editor is revved --- but part of it is a missing
 Text-copy-source-md5.
 Why don't you output that information --- doesn't the editor give it to you?
   
Afaik, no. I don't see Text-copy-source-* anywhere in the RA
layer. Maybe I'm not looking hard enough?
   
  
   Hmm.  It seems you're right.  So you might have to use two RA sessions in
   parallel...
  
   (and then, you might have to have the user authenticate twice?)
  
  Hm, I also have to find out if it's allowed. The commit_editor doesn't
  allow it for instance. Besides, it's a very inelegant solution- I'd
  rather fix the RA layer than do this.
 
 @Daniel, what would adding these headers add?
 
 The extra headers are for making it easier to detect corruptions by checking
 them along the transfer. 
 
 If we are just doing additional work to add headers via a different process,
 it slows the dumping down more than a bit, and it doesn't make the dump file
 any safer because it uses a different process to obtain the header. 
 I think you would have to obtain the source of the copyfrom and get some
 checksum from that; maybe you can do that without transferring the file
 again, but I'm not sure about that.
 

I'm a bit surprised, but indeed I don't see a way to obtain the checksum
via svn_ra.h.  (The word 'checksum' doesn't appear there, and it isn't
included in svn_dirent_t either.)  I wonder how we got away without
having it...
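
(For context, the corruption check such headers enable is just a
recomputation on the receiving side; a minimal sketch with hypothetical
names, not the svn_repos loader itself:)

```python
import hashlib

def verify_text_content(content, expected_md5):
    """Recompute and compare an MD5 checksum, as a loader would do with
    an optional Text-content-md5 dumpfile header.

    Hypothetical helper for illustration only.
    """
    actual = hashlib.md5(content).hexdigest()
    if actual != expected_md5:
        raise ValueError("checksum mismatch: expected %s, got %s"
                         % (expected_md5, actual))
```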

 (And without the added headers the process is already as safe as svnsync.).
 
 Yes, we can add more and more processing to also get those new Sha1 headers
 by recalculating them while dumping, but the idea for svnrdump was to create
 a fast and secure way to dump and load repositories... not an incredibly
 slow one that has to transfer files multiple times just to make all the
 optional headers match the output of svnadmin.
 
 Those headers were made optional for a reason: you don't always have them. 
 And different conversion processes have different headers available.
 Svnadmin looks at the FS layer for dumping, so it sees different things than
 an RA layer api. E.g. the dump in svnadmin has to create diffs from
 fulltexts itself, while svnrdump has diffs and must apply these itself to
 get full texts. The checksums have a similar mangling. The FS has access to
 some of the checksums and recalculates others for you. (See the performance
 drop in 1.6 of svnadmin dump)
 

Okay, agreed.  I assumed the editor would provide the copyfrom's
checksum for free (or, at least, that svn_ra_stat() would provide it),
but of course I won't suggest to add those copyfrom-checksum headers if
calculating them is as expensive as it now appears to be.

 There is a similar case at the import side. Applying commits can't check all
 the checksums, but the really important ones are already handled. Svnrdump
 dump and svnrdump load are a nice match.
 
   Bert
 

Thanks for doubting,

Daniel



Re: dev@ and users@ need more moderators?

2010-08-19 Thread Greg Stein
I normally moderate messages through quite actively, but have been
remiss this week. IOW, normally messages shouldn't ever hang out for
very long. I slacked off this week, thinking others would pick up the
slack. Guess I was wrong :-P

More moderators never hurts, of course. For those wondering, I think
the moderation load is 5-10 messages a day. A simple Reply-All and
send is all that is required (to moderate through, and to put the
person on the allow list so future messages do not require
moderation).

Cheers,
-g

On Thu, Aug 19, 2010 at 13:52, Daniel Shahaf d...@daniel.shahaf.name wrote:
 Daniel Shahaf wrote on Thu, Aug 19, 2010 at 20:51:08 +0300:
 @all, do we need more moderators on us...@?

 By the way, exactly the same people moderate dev@ and us...@.  (i.e.,
 users-owner@ and dev-owner@ both go to the same 4 people.)  So I imagine
 that, inasmuch as there is a problem, it would apply to both lists.

 Daniel
 (the dev@ moderation queue currently contains only one message, which is
 only an hour old)



Re: Bug: resolve --accept theirs-conflict does not properly handle paths

2010-08-19 Thread Philip Martin
Doug Reeder reeder...@gmail.com writes:

 svn, version 1.6.4 (r38063)

That's old.

compiled Aug  7 2009, 11:08:17
 running under OS X 10.5.8 on a Intel Core 2 Duo (i.e. a 64-bit
 processor)

 resolve --accept theirs-conflict fails on a CONFLICTED-PATH with a
 directory component, but works fine when CONFLICTED-PATH is just a
 filename:

 ~/webOS/workspace/OutlineTrackerRecurD svn resolve --accept theirs-conflict javascripts/storage.js
 svn: warning: Can't open file 'storage.js.merge-left.r448': No such
 file or directory

Fixed by r891009?

-- 
Philip


Re: Bug: resolve --accept theirs-conflict does not properly handle paths

2010-08-19 Thread Stefan Sperling
On Thu, Aug 19, 2010 at 08:24:38PM +0100, Philip Martin wrote:
 Doug Reeder reeder...@gmail.com writes:
 
  svn, version 1.6.4 (r38063)
 
 That's old.
 
 compiled Aug  7 2009, 11:08:17
  running under OS X 10.5.8 on a Intel Core 2 Duo (i.e. a 64-bit
  processor)
 
  resolve --accept theirs-conflict fails on a CONFLICTED-PATH with a
  directory component, but works fine when CONFLICTED-PATH is just a
  filename:
 
  ~/webOS/workspace/OutlineTrackerRecurD svn resolve --accept theirs-conflict javascripts/storage.js
  svn: warning: Can't open file 'storage.js.merge-left.r448': No such
  file or directory
 
 Fixed by r891009?

Yes, it looks like that bug. The fix was released in 1.6.9.

Stefan


Subversion 1.6.12 and Python 2.7

2010-08-19 Thread Barry Warsaw
Hello folks,

I'm a core Python developer and an Ubuntu developer, and I'm currently working
on adding Python 2.7 support to Ubuntu.  I'm in the process of investigating
build problems with various packages against Python 2.7.  One such package is
Subversion 1.6.12, which appears to build against Python 2.7 but fails a
handful of Python binding tests.  I can see from this message

http://article.gmane.org/gmane.comp.version-control.subversion.devel/120567/match=python+2.7

the same set of failing tests.  Building against Python 2.6 has no problems.

I've looked at the tests but don't understand them well enough for any obvious
problems to jump out at me.  I was unable to find any open issues tracking
these failures.

Does anybody have any additional information?  Is anybody working on fixing
the failing Python 2.7 tests?  Is there an open bug on the issue?

I'd be happy to work with folks to put together a set of patches to fix the
Python 2.7 bindings, and get them into Debian and Ubuntu.  For now, I've added
the patch below which just disables the offending tests when the Python 2.7
bindings are built, so it's not particularly satisfying.

I am not on this mailing list so please do CC me or contact me directly
off-list.

Cheers,
-Barry

=== modified file 'subversion/bindings/swig/python/tests/client.py'
--- subversion/bindings/swig/python/tests/client.py 2009-11-17 02:16:23 +0000
+++ subversion/bindings/swig/python/tests/client.py 2010-08-18 22:07:19 +0000
@@ -7,6 +7,31 @@
   REPOS_PATH, REPOS_URL
 from urlparse import urljoin
 
+
+
+# XXX 2010-08-18 barry
+# Skip the test when run under Python 2.7.
+#
+# These tests are apparently known failures.  No fix or bug number is yet
+# available, and this is the only reference I've been able to find.
+#
+# http://article.gmane.org/gmane.comp.version-control.subversion.devel/120567/match=python+2.7
+
+def XXX_py27_skip(function):
+  import sys
+  from functools import wraps
+  if sys.version_info < (2, 7):
+    @wraps(function)
+    def wrapper(*args, **kws):
+      return function(*args, **kws)
+    return wrapper
+  else:
+    @wraps(function)
+    def wrapper(*args, **kws):
+      print >> sys.stderr, 'SKIP', function.__name__, 'PYTHON 2.7'
+    return wrapper
+
+
 class SubversionClientTestCase(unittest.TestCase):
   """Test cases for the basic SWIG Subversion client layer"""
 
@@ -116,6 +141,7 @@
 temp_client_ctx = None
 self.assertEqual(test_object2(), None)
 
+  @XXX_py27_skip
   def test_checkout(self):
  """Test svn_client_checkout2."""
 
@@ -131,6 +157,7 @@
 client.checkout2(REPOS_URL, path, rev, rev, True, True,
 self.client_ctx)
 
+  @XXX_py27_skip
   def test_info(self):
  """Test svn_client_info on an empty repository"""
 
@@ -147,6 +174,7 @@
 self.assertEqual(self.info.URL, REPOS_URL)
 self.assertEqual(self.info.repos_root_URL, REPOS_URL)
 
+  @XXX_py27_skip
   def test_mkdir_url(self):
  """Test svn_client_mkdir2 on a file:// URL"""
  dir = urljoin(REPOS_URL+"/", "dir1")
@@ -155,6 +183,7 @@
 self.assertEqual(commit_info.revision, 13)
 self.assertEqual(self.log_message_func_calls, 1)
 
+  @XXX_py27_skip
   def test_mkdir_url_with_revprops(self):
  """Test svn_client_mkdir3 on a file:// URL, with added revprops"""
  dir = urljoin(REPOS_URL+"/", "some/deep/subdir")
@@ -164,6 +193,7 @@
 self.assertEqual(commit_info.revision, 14)
 self.assertEqual(self.log_message_func_calls, 1)
 
+  @XXX_py27_skip
   def test_log3_url(self):
  """Test svn_client_log3 on a file:// URL"""
  dir = urljoin(REPOS_URL+"/", "trunk/dir1")
@@ -180,12 +210,14 @@
   self.assert_(dir in self.changed_paths)
   self.assertEqual(self.changed_paths[dir].action, 'A')
 
+  @XXX_py27_skip
   def test_uuid_from_url(self):
  """Test svn_client_uuid_from_url on a file:// URL"""
 self.assert_(isinstance(
  client.uuid_from_url(REPOS_URL, self.client_ctx),
  types.StringTypes))
 
+  @XXX_py27_skip
   def test_url_from_path(self):
  """Test svn_client_url_from_path for a file:// URL"""
 self.assertEquals(client.url_from_path(REPOS_URL), REPOS_URL)
@@ -200,6 +232,7 @@
 
 self.assertEquals(client.url_from_path(path), REPOS_URL)
 
+  @XXX_py27_skip
   def test_uuid_from_path(self):
  """Test svn_client_uuid_from_path."""
 rev = core.svn_opt_revision_t()
@@ -218,11 +251,13 @@
 self.assert_(isinstance(client.uuid_from_path(path, wc_adm,
 self.client_ctx), types.StringTypes))
 
+  @XXX_py27_skip
   def test_open_ra_session(self):
  """Test svn_client_open_ra_session()."""
   client.open_ra_session(REPOS_URL, self.client_ctx)
 
 
+  @XXX_py27_skip
   def test_info_file(self):
  """Test svn_client_info on working copy file and remote files."""
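
As an aside on the skip decorator in the patch above: Python 2.7's unittest
itself ships skip decorators, so the same effect can be had without a
hand-rolled wrapper. A sketch (not part of Barry's patch; the test body is a
placeholder):

```python
import sys
import unittest

class SubversionClientTestCase(unittest.TestCase):
    # unittest.skipIf arrived in Python 2.7, which is exactly the
    # interpreter these tests currently fail under, so it is always
    # available when the skip is needed.
    @unittest.skipIf(sys.version_info >= (2, 7),
                     "known binding failures under Python >= 2.7")
    def test_checkout(self):
        pass  # a real test body would exercise svn_client_checkout2
```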
 





[PATCH] add 'svnadmin verify' pass to hot-backup.py.in

2010-08-19 Thread Leo Davis
 Hello,

I've found it useful to verify my backups using 'svnadmin verify' and
thought it would make a good addition to hot-backup.py.

[[[
* tools/backup/hot-backup.py.in
  Added command line option --verify.  If flag is present, the
  hotcopy will be verified.
  Added a new pass between the existing steps 3 and 4 (hotcopy and
  compress) that invokes 'svnadmin verify' on the hotcopy. If
  verification fails, the script exits with an error.

Cheers,
Leo

Index: tools/backup/hot-backup.py.in
===
--- tools/backup/hot-backup.py.in   (revision 987343)
+++ tools/backup/hot-backup.py.in   (working copy)
@@ -106,6 +106,7 @@
zip  : Creates a compressed zip file.
zip64: Creates a zip64 file (can be > 2GB).
   --num-backups=NNumber of prior backups to keep around (0 to keep all).
+  --verify   Verify the hotcopy
   --help  -h Print this help message and exit.
 
  % (scriptname,))
@@ -114,6 +115,7 @@
 try:
  opts, args = getopt.gnu_getopt(sys.argv[1:], "h?", ["archive-type=",
  "num-backups=",
+  "verify",
  "help"])
 except getopt.GetoptError, e:
  sys.stderr.write("ERROR: %s\n\n" % e)
@@ -122,12 +124,15 @@
   sys.exit(2)
 
 archive_type = None
+verify_copy = False
 
 for o, a in opts:
   if o == "--archive-type":
     archive_type = a
   elif o == "--num-backups":
     num_backups = int(a)
+  elif o == "--verify":
+    verify_copy = True
   elif o in ("-h", "--help", "-?"):
     usage()
     sys.exit()
@@ -266,8 +271,19 @@
 else:
  print("Done.")
 
+### Step 4: Verify the hotcopy
+if verify_copy:
+  print("Verifying hotcopy...")
+  err_code = subprocess.call(["svnadmin", "verify", "-q",
+                              backup_subdir])
+  if err_code != 0:
+    sys.stderr.write("Verifying the hotcopy `%s' failed.\n" % backup_subdir)
+    sys.stderr.flush()
+    sys.exit(err_code)
+  else:
+    print("Done.")
 
-### Step 4: Make an archive of the backup if required.
+### Step 5: Make an archive of the backup if required.
 if archive_type:
   archive_path = backup_subdir + archive_map[archive_type]
  err_msg = ""
@@ -321,7 +337,7 @@
 print("Archive created, removing backup '" + backup_subdir + "'...")
 safe_rmtree(backup_subdir, 1)
 
-### Step 5: finally, remove all repository backups other than the last
+### Step 6: finally, remove all repository backups other than the last
 ### NUM_BACKUPS.
 
if num_backups > 0:
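
The verification pass in the patch boils down to a single subprocess call;
a standalone sketch of the same idea (the dry_run switch is an addition for
illustration, not part of the patch):

```python
import subprocess

def verify_backup(backup_subdir, dry_run=False):
    """Run 'svnadmin verify' on a backup directory; return its exit code.

    With dry_run=True, return the command line instead of running it
    (handy when no Subversion installation is available).
    """
    cmd = ["svnadmin", "verify", "-q", backup_subdir]
    if dry_run:
        return cmd
    return subprocess.call(cmd)
```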


Re: Bug: resolve --accept theirs-conflict does not properly handle paths

2010-08-19 Thread Doug Reeder
My apologies; it was the other machine that had the up-to-date copy of  
Subversion, and I failed to double-check.



On Aug 19, 2010, at 4:20 PM, Stefan Sperling wrote:


On Thu, Aug 19, 2010 at 08:24:38PM +0100, Philip Martin wrote:

Doug Reeder reeder...@gmail.com writes:


svn, version 1.6.4 (r38063)


That's old.


  compiled Aug  7 2009, 11:08:17
running under OS X 10.5.8 on a Intel Core 2 Duo (i.e. a 64-bit
processor)

resolve --accept theirs-conflict fails on a CONFLICTED-PATH with a
directory component, but works fine when CONFLICTED-PATH is just a
filename:

~/webOS/workspace/OutlineTrackerRecurD svn resolve --accept theirs-conflict javascripts/storage.js
svn: warning: Can't open file 'storage.js.merge-left.r448': No such
file or directory


Fixed by r891009?


Yes, it looks like that bug. The fix was released in 1.6.9.

Stefan


Doug Reeder
reeder...@gmail.com
http://reeder29.livejournal.com/
https://twitter.com/reeder29

https://twitter.com/hominidsoftware
http://outlinetracker.com


Re: [PATCH] add 'svnadmin verify' pass to hot-backup.py.in

2010-08-19 Thread C. Michael Pilato
On 08/19/2010 04:25 PM, Leo Davis wrote:
  Hello,
 
 I've found it useful to verify my backups using 'svnadmin verify' and
 thought it would make a good addition to hot-backup.py.
 
 [[[
 * tools/backup/hot-backup.py.in
   Added command line option --verify.  If flag is present, the
   hotcopy will be verified.
   Added a new pass between the existing steps 3 and 4 (hotcopy and
   compress) that invokes 'svnadmin verify' on the hotcopy. If
   verification fails, the script exits with an error.
 ]]]

I like it.  Committed in r987379, with some minor tweaks (mostly just
referring to the new backup as a "backup" rather than as a "hotcopy" for
consistency with the rest of the tool's messaging).

Thanks, Leo!

-- 
C. Michael Pilato cmpil...@collab.net
CollabNet  www.collab.net  Distributed Development On Demand





Re: [PATCH v2] New dumpstream parser to check version number

2010-08-19 Thread Ramkumar Ramachandra
Hi Daniel,

Daniel Shahaf writes:
  That patch is still an RFC, and it's unlikely to be approved soon I
  think. If I were able to send a series, it would roughly look like
  this:
  1. Create parse_dumpstream3 to include the logic for checking equality
 in version.
  2. Note the lack of flexibility in 1, and create a new struct (the
 other patch).
  3. Modify parse_dumpstream3 to use the new struct, and move out the
 logic for checking the version into a fresh callback.
  
   Note: it's acceptable to post patches that depend on previous patches.
   (So you could write this patch in terms of parse_fns3_t directly.)
 
 So we first create parse_dumpstream3() and then fix it a second later?
 I'd rather just revv the parse_fns3_t API (with the other patch) and
 then touch parse_dumpstream3() once.  Does that make sense to you?

Okay, got it. I'll post a series soon.

-- Ram