[PATCH] New XFail tests for issue 3609

2011-01-22 Thread Noorul Islam K M

Log 

[[[

New XFail tests for issue 3609.

* subversion/tests/cmdline/mergeinfo_tests.py
  (mergeinfo_url_special_characters, test_list),
  subversion/tests/cmdline/prop_tests.py
  (props_url_special_characters, test_list),
  subversion/tests/cmdline/merge_tests.py
  (merge_url_special_characters, test_list),
  subversion/tests/cmdline/log_tests.py
  (log_url_special_characters, test_list),
  subversion/tests/cmdline/copy_tests.py
  (copy_url_special_characters, test_list),
  subversion/tests/cmdline/blame_tests.py
  (blame_url_special_characters, test_list):
New XFail tests.

Patch by: Noorul Islam K M noorul{_AT_}collab.net
]]]
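For context, a minimal sketch (plain Python with a placeholder repository URL, not part of the patch) of why the '%2E' suffix used throughout these tests is interesting: '%2E' is the URI escape for '.', so a correct client should treat the escaped URL as naming the same directory as its parent path.

```python
# Hypothetical illustration of the '%2E' escape exercised by the tests;
# the repository URL here is a placeholder, not taken from the patch.
from urllib.parse import unquote

repo_url = 'file:///tmp/repos'
special_url = repo_url + '/%2E'

# '%2E' decodes to '.', so special_url names the same directory
# as repo_url once the escape is resolved.
decoded = unquote(special_url)
print(decoded)  # file:///tmp/repos/.
```

Issue 3609 is about svn subcommands mishandling such escaped URLs, which is what each XFail test above probes for its respective subcommand.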

Index: subversion/tests/cmdline/mergeinfo_tests.py
===================================================================
--- subversion/tests/cmdline/mergeinfo_tests.py (revision 1062140)
+++ subversion/tests/cmdline/mergeinfo_tests.py (working copy)
@@ -479,6 +479,18 @@
 adjust_error_for_server_version(''),
 ['4', '5'], A_path, A_COPY_path + '@PREV', '--show-revs', 'eligible')
 
+def mergeinfo_url_special_characters(sbox):
+  "special characters in svn mergeinfo URL"
+
+  sbox.build()
+  wc_dir = sbox.wc_dir
+  special_url = sbox.repo_url + '/%2E'
+
+  svntest.actions.run_and_verify_svn(None, None, [], 'ps', SVN_PROP_MERGEINFO,
+                                     '/:1', wc_dir)
+  svntest.actions.run_and_verify_mergeinfo(adjust_error_for_server_version(),
+                                           ['1'], special_url, wc_dir)
+
 
 # Run the tests
 
@@ -494,6 +506,7 @@
   SkipUnless(recursive_mergeinfo, server_has_mergeinfo),
   SkipUnless(mergeinfo_on_pegged_wc_path,
  server_has_mergeinfo),
+  XFail(mergeinfo_url_special_characters),
  ]
 
 if __name__ == '__main__':
Index: subversion/tests/cmdline/prop_tests.py
===================================================================
--- subversion/tests/cmdline/prop_tests.py  (revision 1062140)
+++ subversion/tests/cmdline/prop_tests.py  (working copy)
@@ -2335,6 +2335,31 @@
   if ((len(expected_output) * 3) - 6) != len(pg_stdout_redir):
    raise svntest.Failure("Redirected pg -vR has unexpected duplicates")
 
+def props_url_special_characters(sbox):
+  "set/get/list/del with special characters in URL"
+
+  sbox.build()
+  wc_dir = sbox.wc_dir
+  special_url = sbox.repo_url + '/%2E'
+  
+  svntest.actions.enable_revprop_changes(sbox.repo_dir)
+
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'propset', '--revprop', '-r', '0',
+                                     'cash-sound', 'cha-ching!', special_url)
+
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'propget', '--revprop', '-r', '0',
+                                     'cash-sound', special_url)
+
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'proplist', '--revprop', '-r', '0',
+                                     special_url)
+
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'propdel', '--revprop', '-r', '0',
+                                     'cash-sound', special_url)
+
 
 # Run the tests
 
@@ -2380,6 +2405,7 @@
   obstructed_subdirs,
   atomic_over_ra,
   propget_redirection,
+  XFail(props_url_special_characters),
  ]
 
 if __name__ == '__main__':
Index: subversion/tests/cmdline/merge_tests.py
===================================================================
--- subversion/tests/cmdline/merge_tests.py (revision 1062140)
+++ subversion/tests/cmdline/merge_tests.py (working copy)
@@ -16402,6 +16402,33 @@
   if not os.access(beta_path, os.X_OK):
    raise svntest.Failure("beta is not marked as executable after commit")
 
+def merge_url_special_characters(sbox):
+  "special characters in svn merge URL"
+
+  sbox.build()
+  wc_dir = sbox.wc_dir
+  a_dir = os.path.join(wc_dir, 'A')
+  new_file = os.path.join(a_dir, 'new file')
+
+  # Make r2.
+  svntest.main.file_append(new_file, "Initial text in the file.\n")
+  svntest.main.run_svn(None, 'add', new_file)
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'ci', '-m', 'r2', wc_dir)
+
+  # Make r3.
+  svntest.main.file_append(new_file, "Next line of text in the file.\n")
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'ci', '-m', 'r3', wc_dir)
+
+  os.chdir(wc_dir)
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'up')
+
+  special_url = sbox.repo_url + '/A' + '/%2E'
+  svntest.actions.run_and_verify_svn(None, None, [],
+                                     'merge', '-r3:2', special_url)
+
 

Re: My take on the diff-optimizations-bytes branch

2011-01-22 Thread Johan Corveleyn
On Wed, Jan 19, 2011 at 1:19 AM, Stefan Fuhrmann
stefanfuhrm...@alice-dsl.de wrote:
 On 18.01.2011 12:56, Johan Corveleyn wrote:

 On Mon, Jan 17, 2011 at 3:12 AM, Johan Corveleyn jcor...@gmail.com
  wrote:

 On Sat, Jan 8, 2011 at 6:50 AM, Stefan Fuhrmann
 stefanfuhrm...@alice-dsl.de  wrote:

 [ ... snip ... ]

 But I think, the stack variable is certainly helpful
 and easy to do.

 Ok, I've done this (locally, still have to clean up a little and then
 commit). It gives a nice 10% speedup of prefix/suffix scanning, which
 is great.

 I have a couple of small questions though, about other small
 optimizations you did:

 Index: subversion/libsvn_diff/diff_file.c
 ===================================================================
 --- subversion/libsvn_diff/diff_file.c  (revision 1054044)
 +++ subversion/libsvn_diff/diff_file.c  (working copy)

 ...

 @@ -258,10 +259,10 @@

       for (i = 0; i < files_len; i++)                    \
         {                                                \
  -      if (all_files[i]->curp < all_files[i]->endp - 1) \
  -        all_files[i]->curp++;                          \
  +      if (all_files[i].curp + 1 < all_files[i].endp)   \
  +        all_files[i].curp++;                           \

 Just curious: why did you change that condition (from 'X < Y - 1' to
 'X + 1 < Y')?

 You mention in your log message: minor re-arrangements in arithmetic
 expressions to maximize reuse of results (e.g. in
 INCREMENT_POINTERS).

 Why does this help, and what's the potential impact?

 There is a hidden common sub-expression.
 Variant 1 in pseudo-code:

 lhs = all_files[i].curp;
 temp1 = all_files[i].endp;
 rhs = temp1 - 1;
 if (lhs < rhs)
 {
  temp2 = lhs + 1;
  all_files[i].curp = temp2;
 }

 Variant 2:

 lhs = all_files[i].curp;
 temp = lhs + 1;
 rhs = all_files[i].endp;
 if (temp < rhs)
  all_files[i].curp = temp;

 The problem is that the compiler cannot figure that out
 on its own because (x+1) and (y-1) don't overflow / wrap
 around for the same values. In particular, 0 < (0-1) is true
 and (0+1) < 0 is false in unsigned arithmetic.

Thanks for the explanation. So, I understand why this should be more
efficient, except ... that it's not, for some reason. At least not
after testing it on my x86 machine.

It's about 1% slower, tested with two test files generated with the
gen-big-files.py script (sent in the other thread), with 1,000,000
prefix and suffix lines, and 500 mismatching lines in the middle:

$ ./gen-big-files.py -1 big1.txt -2 big2.txt -p 100 -s 100

So for now, I didn't commit this change. Maybe it's better on other
platforms, so it may still make sense to do it, but then again, at
around 1% it's probably not that important ...


 Second question:

 +find_identical_prefix_slow(svn_boolean_t *reached_one_eof,
 +                           apr_off_t *prefix_lines,
 +                           struct file_info file[], apr_size_t file_len,
 +                           apr_pool_t *pool)
 +{
 +  svn_boolean_t had_cr = FALSE;
    svn_boolean_t is_match, reached_all_eof;
    apr_size_t i;
 +  apr_off_t lines = 0;

    *prefix_lines = 0;
     for (i = 1, is_match = TRUE; i < file_len; i++)
  -    is_match = is_match && *file[0]->curp == *file[i]->curp;
  +    is_match = is_match && *file[0].curp == *file[i].curp;
 +
    while (is_match)
      {
        /* ### TODO: see if we can take advantage of
           diff options like ignore_eol_style or ignore_space. */
        /* check for eol, and count */
  -      if (*file[0]->curp == '\r')
 +      if (*file[0].curp == '\r')
          {
 -          (*prefix_lines)++;
 +          lines++;

 Here you added a local variable 'lines', which is incremented in the
 function, and copied to *prefix_lines at the end (you do the same in
 find_identical_prefix_fast). The original code just sets and
 increments *prefix_lines directly. So, out of curiosity: why does
 this help, and how much?

 Incrementing the temporary variable requires one
 indirection less than the original code and uses only
 one instead of two registers (in typical cases).

 The 'how much?' part depends on compiler / optimizer
 and even more on the target platform (x86 has very
 few registers to keep data in flight; x64 has about twice
 as many).

Ok, this apparently sped it up by around 1% on my machine, so I
committed it in r1062275.

Thanks.
Cheers,
-- 
Johan