Re: Upgrading a very old SVN version

2017-12-14 Thread Andreas Mohr
On Wed, Dec 13, 2017 at 05:19:50PM -0500, Nico Kadel-Garcia wrote:
> On Wed, Dec 13, 2017 at 2:27 PM, Mark Phippard  wrote:
> > Step 1 is very safe and easy and you are unlikely to encounter problems.
> > Step 2 is more of an unknown.  There are various bugs that existed in older
> > versions that allowed some data to be stored in repository in format that
> > was in violation of what was intended.  Newer versions of Subversion detect
> > and enforce those rules better.  If you have any of this data you might get
> > errors when loading the repository to the new format.  If you do, you can
> > search the archives of this list to find answers on how to proceed.
> 
> Jumping that far between versions, I'd *expect* trouble. The
> repository is basically a file-system based database. I'd urge *not*
> updating that in place.

Are we talking:
- full update, with certain suitable (last-minor stable) interim versions used?
- full update, going from earth to mars?

Possibly you're recommending "avoiding *both* variants of such a large jump,
if possible".


If this "problematic" sentiment holds, then interesting questions are:
- is an upgrade from very old versions generally supposed to be "doable"?
  (i.e., should this use case be supported as best as can be?)
- if support ought to be best/improved, then how to analyze whether this
  holds water? (in this particular case)
  Would perhaps be good to go through such an upgrade for test purposes only,
  then "play a bit" with the resulting test-only data
  to possibly determine some issues.
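
For that test-only round trip, a minimal sketch (repository paths hypothetical)
would be the classic dump/load into a freshly created repository,
rather than upgrading in place:

# on the old box, with the old svn version:
svnadmin dump /srv/svn/oldrepo > oldrepo.dump

# on the new box, with the new svn version:
svnadmin create /srv/svn/newrepo
svnadmin load /srv/svn/newrepo < oldrepo.dump

# then poke at the result (verify, log, checkout) before switching over:
svnadmin verify /srv/svn/newrepo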

HTH,

Andreas Mohr


Re: Bug - svn hangs

2017-11-27 Thread Andreas Mohr
On Sun, Nov 26, 2017 at 11:03:11PM +0100, Luca Baraldi wrote:
>I am blocked by this problem and cannot go on working with my pc.

As an initial quick emergency hint by an outside person,
try using strace/ltrace to figure out more context info
(i.e., *why* it might be hanging).
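
A minimal sketch of that (assuming a GNU/Linux client; the PID lookup and the
failing command are hypothetical):

# attach to the already-hanging process, with timestamps:
strace -f -tt -p "$(pidof svn)"

# or run the failing command under strace from the start, logging to a file:
strace -f -tt -o /tmp/svn-trace.log svn update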

Good luck!

Andreas Mohr


Re: Seeing very slow performance with svnadmin verify

2016-11-01 Thread Andreas Mohr
On Tue, Nov 01, 2016 at 12:14:32AM -0700, Eric Johnson wrote:
> Since I have on the order of 230,000 additional commits to check, I'm
> looking at 39 days before my repository finishes verifying. That's not
> really acceptable.
> 
> Any suggestions? Am I doing something wrong?

You could try to
run strace and/or ltrace with sufficiently detailed command-line options,
to figure out
which system call exactly is involved in the delay/expensive handling.

A 30s pause might perhaps hint at network / DNS resolution issues.
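
A rough sketch of such a trace (repository path hypothetical); strace's -T flag
appends the time spent inside each system call, which should make a
multi-second stall stand out:

strace -f -T -tt -o /tmp/verify-trace.log svnadmin verify /srv/svn/repo
# syscalls that took 10 seconds or longer:
grep -E '<[0-9]{2,}\.' /tmp/verify-trace.log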



HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: which version control supports file locking and who has it locked

2016-06-08 Thread Andreas Mohr
On Wed, Jun 08, 2016 at 09:42:10AM -0400, Boris Epstein wrote:
>I believe ClearCase does that.
>If I may ask, why is it important? I believe CVS, SVN, Git and many others
>allow to get your edits in via merging mechanisms of various kinds, so I
>am just curious what the use case scenario would be where locking is
>absolutely essential.

Binary files, I guess.

Not that you would want to have all too many of these in an SCM, but...
(...if even Red Hat seems to think that it's fashionable to do project
collaboration via binary files... -
https://opensource.com/open-organization/16/6/introducing-open-decision-framework
)

Andreas Mohr


Re: VisualSVN server / Linux client

2016-03-23 Thread Andreas Mohr
Hello,

On Wed, Mar 23, 2016 at 09:42:10AM -0700, Cathy Mullican wrote:
> On Wed, Mar 23, 2016 at 3:35 AM, Pavel Lyalyakin
>  wrote:
> > As far as I can guess, the server has Integrated Windows
> > Authentication enabled and Basic disabled. In such case, it's the
> > client that does not support NTLM/Negotaite (NTLM or Kerberos), not
> > the server. It seems that the client supports or is configured to use
> > Basic authentication only.
> 
> This is what I think as well, because connecting with TortoiseSVN from
> Windows systems works fine.  I'm just hoping to figure out how to get
> the Linux systems to connect without having to re-enable Basic auth.

Rimshot idea:
on Linux, Konqueror (as opposed to, e.g., the rather infamous Firefox)
seems to provide NTLM auth support more reliably and completely
(AFAIK v2 in addition to v1),
thus if http-based URIs are involved (right?),
pointing Konqueror at them (and perhaps enabling additional tracing?)
might open up some investigation opportunities.

HTH,

Andreas Mohr


Re: version number

2016-02-21 Thread Andreas Mohr
On Sun, Feb 21, 2016 at 08:45:56AM -0500, Ren Wang wrote:
>Is there a way or API to set and get a file version number instead of
>revision number? For my case, when a file is created, the version by
>default should be 1, every change the version number will be incremented

Generally speaking, this is a pretty simple matter:

[number of log entries for this file] == "version number"

Since your requirement is compatible enough that
the number of log entries directly corresponds to the version number,
we now at least know
that there is no *extra* state data
which would need to be stored in the database.


As an svn semi-expert (informed only through SvnBridge devel, i.e. insufficiently),
I don't know which particular APIs might be available
to retrieve this information in the most direct/efficient manner, though.
You are likely looking for something
which directly returns a shallow "number of log entries"
rather than deep (per-entry) log information...
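
On the command-line side, a rough sketch of that shallow count
(file path hypothetical; it still goes through the full log machinery, though):

# -q prints only the revision header lines, each starting with "r<number>":
svn log -q path/to/file | grep -c '^r'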

HTH,

Andreas Mohr


Re: BUG - SVN tries to connect to "akamai" - 15 second timeout - CRL - ctldl.windowsupdate.com

2016-01-27 Thread Andreas Mohr
Hello Mr. Sours,

On Wed, Jan 27, 2016 at 08:32:45PM +, Cameron Sours wrote:
>**Additional Information:** Debugging this issue was particularly
>difficult. SVN 1.8 disabled support for the Neon HTTP RA (repository
>access) library in favor of the Serf library which removed client debug
>logging. [1] In addition, the SVN error code returned did not match the
>string given in svn_error_codes.h [2] Also, SVN Error codes cannot be
>mapped back to their ENUM label easily, this case SVN error code E170013
>maps to SVN_ERR_RA_CANNOT_CREATE_SESSION.
> 
> 
> 
>1.   
>
> stackoverflow.com/questions/8416989/is-it-possible-to-get-svn-client-debug-output

Ouch. I'm using Neon's logging of the SVN protocol stream all the time
(SvnBridge protocol compatibility analysis),
and while I knew that this logging is Neon-specific
(it's called neon-debug-mask after all...)
I had (stupidly?) expected Serf mode
to offer something comparable in functionality.
So, to restate Stackoverflow's question:
"So how does it work now?"

Don't tell me that one is expected to apply liberal use of packet analyzers 
now...
(well, not-so-"liberal" in that case, that is, for obvious reasons...)

OTOH serf-trunk (of http://serf.apache.org/contribute )
does seem to have logging serf:ed ;) into several areas,
so possibly svn already provides a way to enable such logging,
or could implement one relatively easily,
and that would then possibly also spew out the data of interest:

$ grep serf__log *
context.c:serf__log_init(ctx);
incoming.c:serf__log(LOGLVL_DEBUG, LOGCOMP_CONN, __FILE__, client->config,
logging.c:apr_status_t serf__log_init(serf_context_t *ctx)
logging.c:void serf__log_nopref(apr_uint32_t level, apr_uint32_t comp,
logging.c:void serf__log(apr_uint32_t level, apr_uint32_t comp, const char *prefix,
logging.c:int serf__log_enabled(apr_uint32_t level, apr_uint32_t comp, serf_config_t *config)
...



Congratulations on a very impressively detailed issue description!

Andreas Mohr


Re: Bug report: svnversion crashes during Subversion build/install

2016-01-08 Thread Andreas Mohr
Hi,

On Fri, Jan 08, 2016 at 11:44:49AM -0800, David Lowe wrote:
> On 2016Jan 7,, at 17:32, Ryan Schmidt  wrote:
> > 
> > During the build of Subversion 1.9.3, it calls the just-built svnversion 
> > program. On OS X at least, this crashes because the just-built Subversion 
> > libraries have not been installed yet so they are not in their expected 
> > place. The crash causes OS X to create a crash log file, which I've 
> > attached, but the relevant bit is:
> > 
> > 
> > Dyld Error Message:
> >  Library not loaded: /opt/local/lib/libsvn_wc-1.0.dylib
> >  Referenced from: /opt/local/var/macports/*/svnversion
> >  Reason: image not found
> > 
> > 
> > I do set DESTDIR; that may be necessary to reproduce the problem.
> > 
> > A solution on OS X is for the build system to set DYLD_LIBRARY_PATH to the 
> > directory where the libraries can be found in the build directory, anytime 
> > you want to run a just-built program that links with just-built libraries. 
> > I imagine the problem would affect other unix operating systems as well, 
> > and for them the solution may be to set LD_LIBRARY_PATH, but I am not 
> > familiar with non-OS X unix systems.
> 
>   We have been seeing this problem a lot with FOSS on El Crapitan, caused 
> by the new System Integrity Protection [SIP].  Unfortunately, the engineers 
> who came up with this feature must not have used any software that wants to 
> run tests prior to installation.

Hmm, wouldn't that perhaps happen to be a (albeit possibly not so?) clever way
to force people to produce fully prefix-relocatable binaries,
by way of generic rpath etc. mechanisms?
(via generic Linux $ORIGIN markers etc., see e.g.
https://cmake.org/Wiki/CMake_RPATH_handling#CMake_and_the_RPATH )

I.e., a special way of saying "your build distillery is B0RKEN, fix it"?
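
A minimal sketch of that $ORIGIN approach at link time
(binary/library names hypothetical, GNU toolchain assumed):

# embed an rpath relative to the binary's own location, so it finds its
# sibling libraries wherever the built/installed tree gets relocated to:
gcc -o svnversion svnversion.o -L../lib -lsvn_wc-1 -Wl,-rpath,'$ORIGIN/../lib'

(in a Makefile the $ORIGIN would of course need to be spelled $$ORIGIN)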


I went through the trouble
of making my poor (currently unsupportable :() proprietary app
fully relocation-capable (for purposes of rpm relocation, shar archives, etc.)
some 3 years ago,
which is why this thought came up rather naturally.

HTH,

Andreas Mohr


Re: svn: E125012: Invalid character in hex checksum (svn-1.9.2/osx-10.11.1)

2015-11-24 Thread Andreas Mohr
Hi,

On Wed, Nov 25, 2015 at 08:07:03AM +0100, Graham Miln wrote:
>Is this something I should reporting? Is there other information I can
>provide to help debug?

- don't know
- use:

~/.subversion/servers

[global]
neon-debug-mask = 511

(for the case where one is using neon as http layer - for other cases this 
likely is different)

That log output ought to reveal where exactly svn does have or believes it has 
a problem.

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Announce: SvnBridge (TFS <-> SVN protocol transversion): new version (strongly improved)

2015-10-30 Thread Andreas Mohr
Hello all,

I'd like to briefly mention that I implemented many updates
to the unfortunately relatively unstable SvnBridge project,
as published on http://svnbridge.codeplex.com/,
which provides access to TFS servers for many SVN clients.

My changes (around 900 commits) to the upstream project's history
can be found at:
https://github.com/andim2/SvnBridge

For a description of updates / changes, see:
"Announce: SvnBridge: new version (strongly improved)"
  https://svnbridge.codeplex.com/discussions/646832



The SvnBridge project (especially this fixed version)
might be sufficiently interesting for certain cases
of compatibility / interoperability testing / investigations
(SVN or WebDAV).

Of course it will need to be used in combination with an existing
TFS server setup
(which if you're lucky might also be publicly provided on the
Internet somewhere, though).

Thanks for listening & HAND,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Incomplete SVN dump files

2015-09-15 Thread Andreas Mohr
Hi,

On Tue, Sep 15, 2015 at 05:26:38PM -0700, Eric Johnson wrote:
>I just checked, and there aren't any open bugs about this.
>Interrupting svnrdump can result in a dump file with not all the files of
>the last commit in the dump record. Accidentally use that dump file to
>load into a new repository, and the resulting repository will not be a
>copy of the original.
>My particular use case, I was trying to suck down a large repository.
>Connection interrupted part way through. I resumed from part way through
>(using the --incremental option) into an additional dump file. Then did a
>load of those two dump files. Did not yield a copy of the original
>repository, though.
>This seems like a critical issue for possible data loss when copying
>repositories from machine to machine using svnrdump.

AFAICS (not an svnrdump expert here) very well described and to the point.
You just managed to pinpoint a rather important serialization format
that seemingly isn't properly transaction-safe in an atomic sense...
(good catch!)
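
For reference, the workflow described above corresponds roughly to
(URL and revision numbers hypothetical):

# initial dump, interrupted somewhere after r1234:
svnrdump dump http://host/repo > part1.dump
# resume from where it stopped, as an incremental dump:
svnrdump dump -r 1235:HEAD --incremental http://host/repo > part2.dump
# load both into a fresh repository; per the report above,
# the result is not a faithful copy of the original:
svnadmin create copy
svnadmin load copy < part1.dump
svnadmin load copy < part2.dump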

>I suspect the right solution to this is to put an "end of file" marker at
>the end of a dump stream. If it isn't there, then svnadmin load will see
>its absence, and must discard the last commit.

However a "file"-related "end of payload" marker does not necessarily cut it,
since "file" merely is a (rather unrelated) outer transport container
for (a flexible number of) inner sub elements of data.

Or, IOW, payload of each and every meaningful sub element
within the complete payload to be transmitted
best ought to (or rather: "MUST"?) be fully verifiable in itself.

To make this more evident,
inferring "discard this broken commit"
  from a completely unrelated/foreign event ("missing transmission end marker")
is a lot more indirect (completely unrelated mechanisms/reasons) than
inferring "discard this broken commit"
  from the commit's own (outer) payload sub-unit
  failing a cryptographic/checksum/length check *of that unit proper*.


(oh, and what about not only having to discard the last commit,
but also detecting/discarding other commits within the stream
which happen to contain breakage?
talk about fully provided transaction safety...)


And then there is also the question of
whether it's even the serialization format itself
which is to specially add markers
of what constitutes a "complete" sub unit,
or whether it's the "higher-layer"
which is to "inherently/implicitly realize"
whether those chunks of data it got
do constitute a "complete" sub unit
(think layering - e.g. ISO etc.).

OTOH since serialization (format)
*is* generated by just *that* higher-level layer
"on the other side" of the parser side
(probably also svnrdump, right?),
*that* layer does fully define/control
the entire serialization format
and thus probably should insert
payload sub unit boundary/validity markers
(perhaps via a chunked file format or some such).


But these thoughts of mine here about this topic
could possibly be relegated to "ramblings" area,
since after all it's a simple(?) matter
of thoroughly researching current "Best Practice"
of implementing transaction-safe serialization formats
and then simply achieving just such a correct implementation... ;)

Andreas Mohr


Re: Feature request: Save the old file when svn revert

2015-07-21 Thread Andreas Mohr
On Tue, Jul 21, 2015 at 11:06:06AM +0200, OBones wrote:
> Grierson, David wrote:
> >I completely understand that the action of sending to the Recycle Bin (in 
> >TortoiseSVN) is very system specific.
> >
> >To simply rename the item being reverted as $item.$backupSuffix before then 
> >restoring the pristine item is presumably not that system specific?
> >
> >Having this functionality in the base tool would then provide a benefit to 
> >all users and not just those using a specific IDE.
> I would very much prefer if this could be an option that is not enabled by
> default. I mean, this would clutter the filesystem with many files that one
> would have to delete manually, especially when considering that some of us
> are using less than optimal filesystems when it comes down to lots of small
> files.

This seems to hint that the revert-backup item
possibly should *not* be placed in the same directory as the item,
but rather in an "alternate tree base"
(creating random similarly-named files next to each other in unexpected ways
seems just asking for trouble,
and lots of it - think build system mechanisms, other automatic
handling, ...).


Knee-jerk sample (hard-coded, non-elegant, read: day-to-day occurrence ;):

unit_test1.c
unit_test2.c
unit_test1.c.revert_backup

"cp -a unit_test* some_dir/"
"some_dir/tool unit_test*"


One might even implement this as a config option ("revert tree base directory"),
and if left unspecified/empty
svn could fall back to keeping .reverted files locally,
or another mode might be to record this within the local .svn dirs.

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Subversion encountered a serious problem.

2015-07-10 Thread Andreas Mohr
Hi,

no input to the specific issue at hand (sorry), but:

On Fri, Jul 10, 2015 at 05:32:35PM +, Mark Sudakov wrote:
>I was trying to resolve conflict, didn’t find a solution in mailing
>archives, and got an error. This behavior was reproduced several times and
>conflict was not resolved.
> 
>---
> 
>Subversion Exception!
> 
>---
> 
>Subversion encountered a serious problem.
> 
>Please take the time to report this on the Subversion mailing list
> 
>with as much information as possible about what
> 
>you were trying to do.

WTH does this rather generic / central exception error message handler
not even log a version value? Unless this happens to be fully
intentional for some reason...

>But please first sear 
>
> 'D:\Development\SVN\Releases\TortoiseSVN-1.8.7\ext\subversion\subversion\libsvn_wc\wc_db_update_move.c'

(version indirectly included here)


HTH,

Andreas Mohr


Re: svn: Malformed network data

2015-06-18 Thread Andreas Mohr
On Wed, Jun 17, 2015 at 11:57:08AM -0400, Oren Cohen wrote:
>I am running subversion-1.6.11-12.el6_6.x86_64 on centos 6.4. When
>running “svn st -u” I get: svn: Malformed network data
>svnserve is running as a service with —daemon and is listening according
>to netstat -a.
>tcp        0      0 *:svn                       *:*                      
>  LISTEN   
>I set logging up but it doesn’t log anything to files. Any help is
>appreciated.
>Thanks.

~/.subversion/servers:
[global]
neon-debug-mask = 511

might help.

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: The XML response contains invalid XML

2015-04-24 Thread Andreas Mohr
On Fri, Apr 24, 2015 at 12:13:53AM -0700, OlePinto wrote:
> Bert Huijben-5 wrote
> > Just to be sure: you get this error during an update?
> > 
> > Not during a commit?
> 
> Yes, a merge exactly. I even get it if I try the merge with --record-only. I
> was surprised to know that the client downloads the whole files, when I
> thought it was a local operation (merge inside the local WC, not a remote
> url).

My strangely semi-related ("foreign universe") notes at
"Copy this Debugging HOWTO to Wiki" 
https://svnbridge.codeplex.com/workitem/15337
might help in nailing down this issue.

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Every Version of Every File in a Repository

2014-10-07 Thread Andreas Mohr
Hi,

On Tue, Oct 07, 2014 at 03:03:13PM -0500, jt.mil...@l-3com.com wrote:
>Is there a way to check out every version of a file in a repository? We
>just had a requirement levied to perform a scan of every file in a
>repository. The scan tool must have each file in a stand-alone format.
>Thus, I need a way to extract every version of every file within a
>repository.
> 
> 
> 
>Aside from the brute-force method of checking out the entire repository
>starting at revision 1 , performing a scan, updating to the next revision,
>and repeating until I reach the head, I don’t know of a way to do this.

That's certainly a somewhat tough one.


I will get tarred and feathered here for my way of trying to solve this,
and possibly even rightfully so, but... ;)

OK, here it goes:
you could do a git-svn on your repo,
then get all files ever existing via http://stackoverflow.com/a/12090812
, then for each such file do a git log --all --something --someveryshortformat
to get all its revisions,
then do a
file_content=$(git show :./path/to/file)
(alternatively do git show ... > $TMPDIR/mytmp since that ought to be more
reliable for largish files)
, then scan that
(but ideally you'd be able to directly pipe the git show stream into your scan 
tool).

That ought to give you a scan result for *all* revisions of *all* files
in *all* branches of your repo (you might want to decorate things with a
"uniq" applied at some place or another, to ensure that you're indeed
not doing wasteful duplicate processing of certain items).
OK possibly scratch the "*all* branches" part, since this may require
some extra effort in the case of git-svn...


However this high-level lookup solution
might be both rather crude and much less precise
compared to a parse-each-object kind of solution at git plumbing level,
if that is possible (and I'd very much guess it is).
Hmm, that could be a git rev-list, which would then list the changed files
for each commit, and AFAICS globally (i.e., on the global commit tree,
rather than on specific "human-tagged" branch names).
So that operation mode, once successfully scripted,
ought to be *a lot* better than the "list all files, then rev-log each file" algo.
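
A rough sketch of that rev-list variant (the repository URL, and the
"scan-tool" command plus its --label option, are purely hypothetical);
it enumerates every unique blob ever stored, across the whole commit tree,
and streams each one into the scan tool without ever touching a working copy:

git svn clone http://host/repo repo-git
cd repo-git

# every (blob, commit, path) combination across all refs, deduplicated by blob
# (note: paths containing whitespace would need extra care in a real script):
git rev-list --all |
while read commit; do
  git ls-tree -r "$commit" | awk -v c="$commit" '{ print $3, c, $4 }'
done | sort -u -k1,1 |
while read blob commit path; do
  # stream that file version's content straight into the scan tool:
  git cat-file -p "$blob" | scan-tool --label "$commit:$path"
done

The "sort -u -k1,1" is the "uniq" decoration mentioned earlier, keyed on the
blob hash so identical content gets scanned only once.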

And you could then safety check your algorithm
by having it spit out a full list of all commit hash / file combos
(this happens to be the same list which you would then feed into git show,
entry by entry),
and then try hard to figure out a way
to pick a repo-side file version which accidentally is NOT contained in that 
list
--> algo error!


Oh, and BTW: all this *without* having to do a filesystem-based checkout
(i.e., working copy modification)
of any repo item, even once.
(i.e., this is actually going *against* your initially stated "requirement" of
"Is there a way to check out every version of a file in a repository?",
and rightfully so ;)

HTH,

Andreas Mohr


Re: svnmucc

2013-11-16 Thread Andreas Mohr
On Sat, Nov 16, 2013 at 04:29:13AM -0500, Geoff Rowell wrote:
> 
> > On Nov 16, 2013, at 2:10 AM, Vladislav Javadov  wrote:
> > 
> > rm programs/develop/fasm/tags
> > rm programs/games/mine/tags
> > rm programs/games/snake/tags
> 
> Each command argument must be on a separate line:
> 
> rm
> programs/develop/fasm/tags
> rm
> programs/games/mine/tags
> rm
> programs/games/snake/tags

So, does that mean that svnmucc has single-arg support only?
Because given this example, on the syntax side there's nothing
that would indicate that a new command is following,
rather than further options to the existing command...
(unless "rm" is one of the commands
which are specially recognized as a known-supported command)

Sorry for my critique (and thank you for your help!) - just sayin'...

Andreas Mohr


Re: encoding issue with ruby binding

2013-09-26 Thread Andreas Mohr
Hi,

On Wed, Sep 25, 2013 at 11:20:58AM +0200, Stephane D'Alu wrote:
> Version:
> Subversion: 1.8.3
> Ruby: 2.0.0.195
> 
> Error message:
> /usr/local/lib/ruby/site_ruby/2.0/svn/info.rb:236:in `===': invalid byte
> sequence in US-ASCII (ArgumentError)
> 
> It occurs in the "parse_diff_unified" methods when trying to mach lines
> of "entry.body"
> 
> 
> How to repeat:
> Having an UTF-8 encoded character in a committed file


It may not be a complete solution or even overly helpful,
but for reference here's a code fragment that I created
to handle such issues in vcproj2cmake
(in this case in filenames, as opposed to file content,
but that does not matter):


# RUBY VERSION COMPAT STUFF

if (RUBY_VERSION < '1.9') # FIXME exact version where it got introduced?
  def rc_string_start_with(candidate, str_start)
nil != candidate.match(/^#{str_start}/)
  end
else
  def rc_string_start_with(candidate, str_start)
candidate.start_with?(str_start) # SYNTAX_CHECK_WHITELIST
  end
end

module V2C_Ruby_Compat
  alias string_start_with rc_string_start_with
  module_function :string_start_with
end



# Guards against exceptions due to encountering mismatching-encoding entries
# within the directory.
def dir_entries_grep_skip_broken(dir_entries, regex)
  dir_entries.grep(regex)
rescue ArgumentError => e
  if not V2C_Ruby_Compat::string_start_with(e.message, 'invalid byte sequence')
raise
  end
  # Hrmpf, *some* entry failed. Rescue operations,
  # by going through each entry manually and logging/skipping broken ones.
  array_collect_compact(dir_entries) do |entry|
result = nil
begin
  if not regex.match(entry).nil?
result = entry
  end
rescue ArgumentError => e
  if V2C_Ruby_Compat::string_start_with(e.message, 'invalid byte sequence')
log_error "Dir entry #{entry} has invalid (foreign?) encoding 
(#{e.message}), skipping!"
result = nil
  else
raise
  end
end
    result
  end
end


> Stephane D'Alu -- Ingenieur Recherche
> Laboratoire CITI / INSA-Lyon

Lyon is nice for vacations :-)

Andreas Mohr


Re: subversion load fails with “no such revision”

2013-09-26 Thread Andreas Mohr
Hi,

On Thu, Sep 26, 2013 at 12:18:30PM -0400, Harlan Harris wrote:
>An FYI to all,
>I'm back trying to deal with this migration, and it's still a giant mess.
>The consistent problem seem to be that revision numbering is just flat
>wrong when files were renamed in a revision. I'm fixing "svnadmin load"
>errors one-at-a-time by the following process:
>1. Find in the dump file the "add" that's breaking. Note that there is a
>Node-copyfrom-rev that refers to a revision that doesn't exist in the
>filtered file (and is irrelevant in the pre-filtered file); that is, the
>revision number is incorrect. Also note that the svnadmin load error
>message refers to a third revision number that has nothing to do with the
>problem either -- it's NOT the number in the Node-copyfrom-rev.

I'm not entirely convinced that manual corrections are the way to go.
From the sounds of it, this is a painfully huge effort for you,
with considerable complications. IOW: ouch.

So, AFAICS (you hinted at that) these dumps were split out via svndumpfilter.

What about tending towards trying to fix the suspected root cause
(problematic algorithms in svndumpfilter) rather than huge manual efforts
of trying to fix each such "broken rename" case?

I don't know how difficult/challenging it would be to try to fix
svndumpfilter algorithms to correctly handle/take into account such renames
(I don't have any experience with svndumpfilter),
but to me it sounds like this would be much more lucrative.

If one managed to do some investigations about the differences between
the 4 different svndumpfilter "offsprings" and codify this into a nice
well-manageable upstream project (SourceForge, github, ...)
while fixing the revisions-with-renames issues,
then this would be a huge win AFAICS.


I'm trying to teach the TFS Plain-Original-Software some new SVN tricks,
so I know that there's quite some effort required to achieve proper SCM
handling, but fixing one isolated issue (let's hope it's only one issue...)
hopefully wouldn't be too hard (in the TFS support case there were many
low-hanging fruits, too).

Or try to ask someone to have a look at what would be required
to get these svndumpfilter issues improved...

HTH,

Andreas Mohr


Re: corrupted svn repository with “serialized hash missing terminator” error

2013-09-07 Thread Andreas Mohr
Hi,

On Sat, Sep 07, 2013 at 05:50:29PM +0530, Nitin Bhide wrote:
>Philip/Stefan/Andreas,
> 
>Thanks for the help. I was able to write a small python script
>specific to
>my needs and recover the repository. It was corrupted revision
>property
>file. I have written the details in blog post.
>
> [1]http://nitinbhide.blogspot.in/2013/09/recovering-from-corrupted-subversion.html

Thanks for openly documenting/sharing your experience!
(and of course congrats for your smashing success)

Now, a question springing from this would be whether this is
something that could be automated more within Subversion project circles
(for this one user who was nicely able to draw the necessary conclusions
and then even write his own tool, there are probably a dozen more users
who gave up in despair).
That's probably not a feature that svnadmin ought to provide ("form
intermediate revisions out of thin air when encountering corruption in
certain revisions"), but rather an external recovery tool that's made
for that purpose.
But those are just random thoughts of someone who's quite external to all
this... ;)

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.



Re: corrupted svn repository with “serialized hash missing terminator” error

2013-09-05 Thread Andreas Mohr
Hi,

On Thu, Sep 05, 2013 at 07:54:51PM +0530, Nitin Bhide wrote:
>Hi Stefan,
> 
>Thanks for the hint. I tried the fsfsverify and fsfixer also. Both did not
>work. Looking back I think the problem started when I upgraded to from
>1.7.x to 1.8.1.
> 
>What exactly is 'serialized hash terminator' error ?

"Use The Source, Luke!"?

HTH (perhaps not),

Andreas Mohr


Re: Unsubscribe

2013-08-27 Thread Andreas Mohr
Hi,

On Wed, Aug 28, 2013 at 03:07:15PM +1000, Geoff Field wrote:
>Hi Venkat,
> 
>You need to send an email to [1]users-unsubscr...@subversion.apache.org to
>unsubscribe from this list.

...as can, for most mailing lists, be determined
via an email header line:

|| List-Unsubscribe: <mailto:users-unsubscr...@subversion.apache.org> ||

, provided the headers of a mail are viewable
(which may unfortunately not be the case with "less capable" - ahem - MUAs).

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Feature Req: sorthand urls for branches/tags in CLI

2013-08-23 Thread Andreas Mohr
On Fri, Aug 23, 2013 at 10:55:03PM +0200, Branko Čibej wrote:
> On 23.08.2013 21:34, Daniel Shahaf wrote:
> > 'svn list-branches' would work how?
> 
> About the same as "svn mergeinfo"? Or how about a variant on "svn
> status" that also looks for that prperty? Frankly, I find this problem
> infinitesimally small compared to the ones I mentioned in connection
> with the proposal given in this thread.
> 
> Indexing based on the presence and/or value of a property is a somewhat
> orthogonal feature, IMO, but there's been a lot of interest about that
> as well (e.g., what did user X commit? sure, you can filter 'svn log'
> but that's kinda slow). If/when it appears, "svn list-branches" could
> use it. Until it does, we already more than one command that crawls the
> whole working copy or repository tree. This one is no different.

Perhaps it would be worthwhile to have extended ops be implemented
by a helper binary for such extended purposes ("svnstat"? "svnanalyze"?).
That way one would not bloat the usual hotpath workhorse binary with such...
shenanigans :)
(both in performance terms and in usability terms - usage text length...)

But that decision probably depends on the total number of conceivable
"extended" ops. If there are only a few high-level op names
which do all the work and options internally,
then one could keep them provided by svn, else...

But keeping them in the main binary might still influence overall performance -
unless the implementation data of commands
(as possibly opposed to the registration data of commands!)
is provided on-demand only anyway, via shared library dlopen references.

OTOH you could argue that the total number of SCM sub-commands will ultimately
remain limited, thus it's better to keep them aggregated in the main binary
and do maximum optimization of that case instead (dlopen etc.).

E.g. git (Jehovah! ;) seems to be doing it that way, and with an
external git_load_dirs binary being analogous to the svn_load_dirs binary to boot.

Apologies for the long rambling - I had expected it to remain shorter,
but then with all the details added...

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Help with compiling

2013-04-08 Thread Andreas Mohr
Hi,

On Mon, Apr 08, 2013 at 11:04:17AM -0400, Maureen Barger wrote:
> This is very thorough and I thank you, but I am not finding anything
> like that in the subversion dir tree.
> Daniel, I am also not finding 'You could look for a '#define
> AP_HAVE_C99 1' line in files that
> http_log.h includes.'

The usual way to gain hard evidence about current settings of defines is to
#define AP_HAVE_C99 foo_force_conflict

somewhere prominent in a relevant compile unit (.cpp or some important
header that participates in the build),
to forcibly cause a define conflict which will make gcc barf about it
and thereby show where the actual define was originally defined
(*iff* it was defined).

You might need to relocate the temporary force-define to some other location
in order to have it be effective, though.

HTH,

Andreas Mohr


Re: Discrepancies in svn mirror created with svnsync

2013-02-12 Thread Andreas Mohr
Hi,

On Tue, Feb 12, 2013 at 05:18:01PM +0200, Marius Gedminas wrote:
> On Sat, Feb 09, 2013 at 11:31:07AM +0100, Andreas Mohr wrote:
> > Hi,
> > 
> > On Fri, Feb 08, 2013 at 03:45:29PM +0100, Stefan Sperling wrote:
> > > So you should definitely wrap svnsync in a tool like lockfile (part of
> > > procmail), or upgrade to 1.7.
> 
> I went with
> 
>   #!/bin/sh
>   lockfile -r500 /stuff/zope-mirror/locks/ADHOC.lock || exit 1
>   /usr/bin/sudo -u syncuser /usr/bin/svnsync sync file:///stuff/zope-mirror
>   rm -f /stuff/zope-mirror/locks/ADHOC.lock
> 
> > Be careful about "solutions" other than lockfile - some of these appear to 
> > be
> > terribly unsafe (some newer Ubuntu-introduced "atomic locking" package
> > comes to mind - which then executes anyway after a measly timeout!).
> 
> Do you remember the name of the package?

Possibly it was the lockfile-progs one, but not sure, couldn't nail
it down any more, sorry.

There seem to be recommendations for flock, since that has clean
behaviour on shell trap etc.

Andreas Mohr


Re: Discrepancies in svn mirror created with svnsync

2013-02-09 Thread Andreas Mohr
Hi,

On Fri, Feb 08, 2013 at 03:45:29PM +0100, Stefan Sperling wrote:
> I cannot tell you what happened here and why the revisions in the
> mirro are empty. That sure is concerning.
> 
> However there are known race conditions in svnsync in Subversion 1.6.
> See http://subversion.apache.org/docs/release-notes/1.7.html#atomic-revprops
> 
> So you should definitely wrap svnsync in a tool like lockfile (part of
> procmail), or upgrade to 1.7.

Note that directory creation/removal is an FS mechanism which is
guaranteed to be atomic, on UNIX (POSIX?) at least.
Thus if lockfile isn't available/installable, as a manual mechanism
you could pick a fixed lock directory name (obviously to be globally used by *all*
svnsync script users!), use that name to create a directory,
run svnsync on success and then remove it.
(or probably better use a static GUID value in the directory name)
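
A minimal sketch of that mkdir-based variant (paths and lock directory name
hypothetical); mkdir either atomically creates the directory or fails because
another instance already holds it:

#!/bin/sh
LOCKDIR=/stuff/zope-mirror/locks/svnsync-ADHOC.lockdir

if ! mkdir "$LOCKDIR" 2>/dev/null; then
  echo "another svnsync run appears to be active, skipping" >&2
  exit 1
fi
# make sure the lock disappears again even on abnormal exit:
trap 'rmdir "$LOCKDIR"' EXIT

/usr/bin/svnsync sync file:///stuff/zope-mirror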

Be careful about "solutions" other than lockfile - some of these appear to be
terribly unsafe (some newer Ubuntu-introduced "atomic locking" package
comes to mind - which then executes anyway after a measly timeout!).

Andreas Mohr


Re: svn:externals - svn: E200009: Targets must be working copy paths

2013-01-30 Thread Andreas Mohr
Hi,

On Wed, Jan 30, 2013 at 04:38:05PM -0600, Ryan Schmidt wrote:
> 
> On Jan 30, 2013, at 14:14, C M wrote:
> 
> > Thank you for the "teach a man to fish" approach. Though I didn't find the 
> > documentation to be very clear, I was able to set the property using the 
> > syntax below.
> > 
> >  
> > c:\Temp\800>svn propset svn:externals "ver_1.0 
> > svn://3.x.x.x/trunk/Customer1/system_800/ver_1.0" .
> > property 'svn:externals' set on '.'
> > 
> > This worked for the directory listed above. I then added several external 
> > definitions, but when I finally committed and checked out, it seems that 
> > only the last definition was applied. Evidently I overwrote the previous 
> > definitions.
> > 
> > Can I add multiple external definitions? If it's in the "svn help propset", 
> > I am not seeing it.
> 
> Yes you can have multiple externals definitions, one on each line of the 
> svn:externals property. Setting a multiline property on the command line via 
> "svn propset" is difficult, so instead try "svn propedit svn:externals ." 
> (and type the multiline externals into the interactive editor) or "svn 
> propset svn:externals --file X ." (and provide a file X containing the 
> multiline externals).
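
A minimal sketch of that --file variant (first URL taken from the example
above, the second line purely hypothetical):

cat > externals.txt <<'EOF'
ver_1.0  svn://3.x.x.x/trunk/Customer1/system_800/ver_1.0
ver_1.1  svn://3.x.x.x/trunk/Customer1/system_800/ver_1.1
EOF
svn propset svn:externals --file externals.txt .
svn commit -m "multiple externals definitions" .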

"didn't find the documentation to be very clear",
"I am not seeing it." - perhaps provide useful details of what went wrong in
docs, and then someone sufficiently authorized could ensure
to get that fixed soon?
Otherwise the next Clueless To Be Enlightened Person (tm) [nothing
against the OP, mind you!] comes along and experiences the very same
unspeakable trouble again...

So much for the "teach a man to fish" errmm "teach a document to educate"
approach...

HTH,

Andreas Mohr


Re: version can not be svnsync

2012-11-25 Thread Andreas Mohr
Hi,

On Fri, Nov 23, 2012 at 11:12:01AM +, Philip Martin wrote:
> net_robber  writes:
> 
> > XML: char-data (274) returns 0
> > XML: start-element (274, {svn:, close-file}) => 279
> > XML: end-element (279, {svn:, close-file})
> > XML: char-data (274) returns 0
> > XML: XML_Parse returned 0
> > XML: Parse error: XML parse error at line 384: not well-formed (invalid 
> > token)
> 
> Do a network trace to identify the invalid XML:
> 
> http://subversion.apache.org/docs/community-guide/debugging.html#net-trace

In this case I don't think that's necessary.
The first line of his log showed the last part of the XML data
(I assume he obviously didn't want to post his entire visible data content).

I happen to have written an explanation for such cases (not previously
published, however):

=
XML: Parse error: XML parse error at line 41: not well-formed (invalid token)
If you happen to experience XML parse errors in your Subversion debug output 
(e.g. "invalid token"), then do the following:
either you're lucky enough to have a sufficiently modern client which does 
properly log the token involved, or you have an older one which inexcusably 
doesn't. In that case don't even bother trying to match up the rather confusing 
XML line number as displayed by that Subversion error. Rather, put both the 
corresponding response data and the subsequent corresponding XML parser log 
(those should have a *matching* byte count, e.g. "7340 bytes received")
side by side in two editors and go through them line by line
(or rather: tag type by tag type!) until you should be able
to find the problematic line at the end of the XML parser log.
Some potential things to watch out for are:
- missing space (' ') between XML attributes (--> protocol error)
=

Missing spaces were what had happened in my case.

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: version can not be svnsync

2012-11-22 Thread Andreas Mohr
Hi,

On Thu, Nov 22, 2012 at 06:40:30PM +0200, Daniel Shahaf wrote:
> Andreas Mohr wrote on Thu, Nov 22, 2012 at 12:33:05 +0100:
> > Enabling
> > 
> > [global]
> > neon-debug-mask = 511
> > 
> > (or perhaps some other setting for non-Neon operation, dunno)
> > in ~/.subversion/servers
> > will probably enable a world of new info
> > which would hopefully spit out the whereabouts of the closing parts
> > of this transfer
> > ("200 OK" doesn't seem too detailed for now).
> 
> I don't recall off the top of my head --- would that also print
> usernames/passwords (in the clear or otherwise)? 

Good catch. (it possibly will - apologies!)

So one should obviously narrow down the debug mask to contain only those bits
that do NOT expose sensitive data.

Andreas Mohr


Re: version can not be svnsync

2012-11-22 Thread Andreas Mohr
Hi,

On Thu, Nov 22, 2012 at 06:28:07PM +0800, net_robber wrote:
> hi,
> 
> first of all, thanks for your reply.
> 
> and then,
> some version info:
> OS: RHEL 4.3 x86_64
> apache: 2.2.22
> subversion: 1.6.17
> 
> i am not sure does these enough to find out the fault.
> so, let me know if i should provide more

[...]

> >  when the backup is running, i got some error like this
> >
> >  svnsync: REPORT of 'https://svn.domain.com/my/repos': 200 OK (
> > https://svn.domain.com)

Enabling

[global]
neon-debug-mask = 511

(or perhaps some other setting for non-Neon operation, dunno)
in ~/.subversion/servers
will probably enable a world of new info
which would hopefully spit out the whereabouts of the closing parts
of this transfer
("200 OK" doesn't seem too detailed for now).

HTH,

Andreas Mohr


Re: Renaming UTF-8 file names is broken

2012-11-17 Thread Andreas Mohr
Hi,

On Sat, Nov 17, 2012 at 12:59:15AM +0400, Заболотный Андрей wrote:
> Okay, so thanks to Nico Kadel-Garcia I've upgraded subversion to version 
> 1.7.7 on my Centos server.
> 
> Unfortunately, this did not help. The error is still the same:
> 
> svn: E160013: File not found: transaction '18-16', path 
> '/%D0%A1%D0%A3%20%D0%90%D0%9A%D0%91/doc/forth-asm/%D0%9A%D0%B0%D1%80%D1%82%D0%B0%20%D0%BF%D0%B0%D0%BC%D1%8F%D1%82%D0%B8.txt'
> svn: E160013: Your commit message was left in a temporary file:
> svn: E160013:'svn-commit.2.tmp'
> 
> I'm out of ideas.

Since that discussion has been a bit longer now and you're at a bit of a
loss, I'll try to explain things a bit.


I've recently started having to fight with an overly unreliable
and non-supported (non-)interop(er)ability product "of a large so-called PC 
software company".

That product did not support international (read: non-ASCII range)
handling yet, thus I had to add support for that (I could send you the patch
to have a look at what I had to do to fix the problems in my case).


Encoding of URIs was originally governed by RFC 2396.
This RFC strictly covered ASCII-range content only.
It indicates which set of within-payload-content (i.e., between-delimiters!)
characters to encode (in order to avoid having a payload character end up
misinterpreted as a delimiter).

In your case, we *are* talking about international characters.
As far as RFC 2396 is concerned, it does not say much about the non-ASCII
range, thus AFAICS implementations usually choose to simply hex-encode all
non-ASCII chars as well.
Possibly for RFC2396 purposes internationalization is simply not allowed
in its scope at all. RFC3986 (http://www.ietf.org/rfc/rfc3986.txt)
says the following:
"
Percent-encoded
   octets (Section 2.1) may be used within a URI to represent characters
   outside the range of the US-ASCII coded character set if this



Berners-Lee, et al. Standards Track [Page 8]

RFC 3986   URI Generic Syntax   January 2005


   representation is allowed by the scheme or by the protocol element in
   which the URI is referenced.  Such a definition should specify the
   character encoding used to map those characters to octets prior to
   being percent-encoded for the URI.
"


In your case the problem could be that SVN knows to correctly apply hex encoding
of non-ASCII range chars in user-defined content
(i.e. specific directory names, file names, ...).
What might then happen is that the *other* party sends
back a request (a possibly rather unrelated one some time later!!)
with exactly this hex-encoded string data content
(the actual meaning of that content may be unknown to the sender,
i.e. it's just an opaque "token"/"handle"),
which SVN then fails to *decode* symmetrically prior to doing
actual filesystem item lookup.


General note: encoding handling should always be perfectly symmetric
(and thus reversible without any information loss!!),
and *layered* (i.e. URI en/decoding requirements should be handled by a
different transcoding layer than e.g. predefined entity escape
requirements for transport in XML protocol,
and further transport paths might need to add another transcoding layer!).
It's just a "do the required and correct transcoding per each transport 
mechanism" thing.


There might also be issues such as *duplicate* encoding involved
(in which case hex-encoding would be (in most cases improperly)
changed into hex-encoded hex-encoding,
and the receiving party would then of course only decode *once*
and thus end up with one hex-encoding remaining rather than
original data).
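
A tiny illustration of the symmetry point (bash-specific printf '%b' trick;
the sample string is taken from the error message above):

# decode exactly one round of percent-encoding:
enc='%D0%9A%D0%B0%D1%80%D1%82%D0%B0'
printf '%b\n' "$(printf '%s' "$enc" | sed 's/%/\\x/g')"
# -> Карта   (the original UTF-8 file name fragment)

# whereas a sender that encodes *twice* would emit this instead,
# and a single decode on the receiving side then leaves percent escapes behind:
printf '%s\n' "$enc" | sed 's/%/%25/g'
# -> %25D0%259A%25D0%25B0%25D1%2580%25D1%2582%25D0%25B0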


Or it might be server vs. client mismatching in their implementation
conformance (e.g. one conforms to RFC2396 vs. the other RFC3986,
as one example).



Please take my statements with a grain of salt since it was a very Q&D
report without much research. Anyway, these pointers should be quite
useful and to the point, especially since I had to go through pretty
much the same thing recently.

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: svnserve crash

2012-11-02 Thread Andreas Mohr
Hi,

On Fri, Nov 02, 2012 at 08:35:11PM +0200, Daniel Shahaf wrote:
> Please attach log files as text/* MIME type (maybe by renaming them to
> *.txt) so it's easier to read/reply to them.
> 
> Now, the first thing that jumps out is that some of the actual
> parameters are 0x or 0x1000; for example:
> 
> #14  0x13f639fd8 in serve(conn=(svn_ra_svn_conn_st *) 0x, 
> params=(serve_params_t *) 0x, pool=(apr_pool_t *) 0x) at
> 
> which might suggest a stack smash, or just that this is how windows
> stack traces normally work and I'm not aware of that convention.

Input parameter values as shown by the backtrace
deviating from their original caller values
may also have been caused by these input variables getting modified
*within* the function (some people in some cases tend to prefer
creating local variable copies to actively work on, for this reason).

However, three parameters in a row being NULL
might obviously point to a more systematic mem erase.

Andreas Mohr


[CONFIRMED] Re: Bug in help

2012-10-26 Thread Andreas Mohr
Hi,

On Thu, Oct 25, 2012 at 05:33:13PM -0300, Vinícius Muniz wrote:
> Hi,
> 
>  I am trying execute svn with command "-r", but when I see the help in
> terminal, show me:
> 
> r [--revision] ARG  : ARG (alguns comandos também usam faixa ARG1:ARG2)
>  Um número de revisão pode ser um entre:
> NÚMERO   número da revisão
> '{' DATA '}' revisão no início da data
> 'HEAD'   último no repositório
> 'BASE'   revisão base do item da cópia
> de trabalho
> 'COMMITED'
> 
> 
> the option COMMITED it's write wrong, correct is COMMITTED.
> 
> Operation System Ubuntu 12.04
> 
> svn, versão 1.6.12 (r955767)

Medieval...

>compilado Feb 17 2012, 10:36:45

OK, maybe not so much ;)


Current repository trunk:

$ find|xargs grep COMMITED
./subversion/po/zh_TW.po:#~ "'COMMITED'   位於或早於 BASE 的最後送交\n"
./subversion/po/pt_BR.po:#~ "'COMMITED'   último commit em ou antes de BASE\n"
./subversion/po/pt_BR.po:#~ "'PREV'   revisão exatamente antes de COMMITED"
./subversion/po/es.po:#~ "   'PREV'    revisión justo antes de COMMITED"


So, trunk still has that problem and somebody needs to fix this spelling issue
(not me, since I'm not actively involved in project activity).

Andreas Mohr


Re: VSS to SVN Migration.

2012-09-21 Thread Andreas Mohr
Hi,

On Fri, Sep 21, 2012 at 10:29:27AM +0100, Neil Bird wrote:
> Around about 21/09/12 09:12, Neil Bird typed ...
> >I may try to blog my process at some point.
> 
>   Quicker than I thought, the cut'n'paste from MediaWiki into
> Wordpress kept all the formatting.
> 
> http://fnxweb.com/blog/2012/09/21/migrating-from-visual-sourcesafe-to-subversion/

Very nice!
While that no longer applies to me (thank Heavens!),
it's very good to get more content/activity in that area.

Andreas Mohr


Re: RE : SVN 1.6: What is the maximum size for a commit?

2012-09-10 Thread Andreas Mohr
Hi,

On Mon, Sep 10, 2012 at 08:27:19AM +, CHAZAL Julien wrote:
> Thanks a lot for you help.
> 
> My server is in a 32-bit mode. Anyway, I'll take a look for 
> "LimitRequestBody" option in apache conf.

Just to state what hasn't been stated yet, to make sure that matters are clear
(many people may easily be unaware of this):

a server running a 32-bit OS should IN NO WAY result in the I/O transfers
of heavy-weight I/O software being limited to 32 bits, too (well, ideally...).
That's what the difference between "64-bit file I/O" and an actual 64-bit OS was
supposed to be about (the transition to "64-bit file I/O" was made
quite a bit *before* actual 64-bit OSes properly supporting 64-bit *CPUs*
were commonplace).

IOW, this likely indicates that a 32bit type
(i.e., a type limited/adhering to system-width) is erroneously being used
somewhere within the transfer chain, rather than an actual 64bit type
(see various stdint.h implementations for specifics)
that's suitable for 64bit data transfers on a 32bit system as well.
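
The classic mechanism behind "64-bit file I/O on a 32-bit system" is large-file
support selected at compile time, roughly like this (a sketch; whether the
SVN/Apache stack in question needs or already does this is not verified here):

# request a 64-bit off_t on a 32-bit platform when building:
CPPFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure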


So much for the theory - now as to whether SVN transfers
are supposed to qualify for "heavy-weight I/O"
(i.e. whether that is a valid sufficiently frequent use case on 32bit 
platforms),
that's obviously another matter ;)

Andreas Mohr

> 
> De : Andy Levy [andy.l...@gmail.com]
> Date d'envoi : vendredi 7 septembre 2012 17:52
> À : CHAZAL Julien
> Cc : users@subversion.apache.org
> Objet : Re: SVN 1.6: What is the maximum size for a commit?

[...]

> Is your server 32-bit or 64-bit? If 64-bit, are Apache & the
> Subversion modules compiled for 32-bit or 64-bit? If it's all 32-bit,
> I wonder if that's a factor as well.

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Suggestion required on best migration tool for VSS 6.0 to SVN 1.7.5 migration

2012-08-08 Thread Andreas Mohr
Hi,

On Wed, Aug 08, 2012 at 04:39:36PM +0100, Bhushan Jogi wrote:
> Hi,
> 
> I am getting following error when I am trying to execute svnimporter on
> cmd, It seems that the connection is not getting established, Does anyone
> know the correct connection string.
> 
> conf.xml details :
> 
>
>v:\vss\srcafe.ini


Maybe that one...



BTW, definitely think of whether the current directory layout of your repository
is ok (or if the actual current repository root should be moved into a
subdirectory of the new one, e.g. think of the usual svn trunk/branches/tags
stdlayout requirement/setup),
and check whether there's a suitable way to do such a directory hierarchy change
right *during* the migration step (i.e., without complicating your history due
to subsequent complicated directory moving actions).
Don't ask me why I'm mentioning this here...
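
Regarding doing the layout change right at migration time, a hedged sketch
(assuming the importer hands you a plain dump file; all paths hypothetical):

svnadmin create /srv/svn/newrepo
svn mkdir -m "create standard layout" \
    file:///srv/svn/newrepo/trunk \
    file:///srv/svn/newrepo/branches \
    file:///srv/svn/newrepo/tags
# drop the imported history below trunk/ while loading:
svnadmin load --parent-dir trunk /srv/svn/newrepo < full.dump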


BTW, congrats for tackling the switch to a more suitable/widely usable/modern
SCM!

Andreas Mohr


Re: Cleanup needed after failed update

2012-06-13 Thread Andreas Mohr
On Wed, Jun 13, 2012 at 02:12:47PM +0200, Johan Corveleyn wrote:
> On Wed, Jun 13, 2012 at 12:20 PM, Justin Case
>  wrote:
> > Why should I have to cleanup???
> > svn update (see point 2) KNEW the file is in use, so instead of leaving 
> > locks around it could just have skipped that file and print a message that 
> > not everything have been updated.
> 
> The problem is that, at the point where svn runs into this locked
> file, half of the work has already been done (the metadata in wc.db
> has already been updated). The remaining work (moving the file into
> place) is scheduled in a specific table called the work_queue. The
> work_queue *must* be run to completion to get back into a valid state.
> It cannot be rolled back (undoing the other part in wc.db), at least
> not with the current design.

So, the awkward "external svn cleanup" requirement
is indeed "simply" a matter of implementation weakness in
"atomic transaction processing" (i.e. application of a collected change
only after *all* interim steps have been successfully completed in a
*temporary* working area,
or alternatively [when unable to implement it in such a definitely desirable
way],
a properly working rollback mechanism for partially modified data).

> 'svn cleanup' removes any working copy locks, and runs whatever is
> left in the work_queue, thereby returning the working copy to a valid
> state.

Yeah, and ideally that currently running command itself would:
- either instantiate the modified dataset only after everything
  has been successfully completed
  (data modification could be carried out atomically
  by an atomic rename of old dir vs. new dir or some such)
- or execute a properly working rollback mechanism

But as said before, that's possibly quite hard to achieve
in light of an existing possibly multi-layered implementation
which does things differently
(and which possibly has some existing binding contracts to userspace
which might get broken by a rewrite,
perhaps special wc.db behaviour or so).

Andreas Mohr


Re: Cleanup needed after failed update

2012-06-12 Thread Andreas Mohr
Hi,

On Tue, Jun 12, 2012 at 06:43:52AM -0700, Justin Case wrote:
> > From: Ulrich Eckhardt 
> >
> > Only you (the user) knows "if it was interrupted" or is maybe still
> > running! I would say that this message could be improved[0], but
> 
> I beg to differ: the operation which interrupted itself because it found a 
> file in use knows very well that it was interrupted. So, it would be in the 
> best position to do something instead of just quitting graciously. You later 
> suggestion "leave the working copy in a state of mixed revisions" is exactly 
> what I as user would expect from it: "hey, error! I couldn't finish my job 
> bcoz file X is in use, just close the other app then try updating again". 

Ditto. That context knows that it encountered a problem and knows best what
it's been doing, thus it's obviously within its own scope and *responsibility*
to do (or at least attempt) proper cleanup.
Necessity of "svn cleanup" should definitely be relegated to exceptional use 
cases,
since it's a problematic *foreign* intrusion into the lock-shared processing.
Well, so much for idealistic speak - I don't know SVN implementation
specifics which might go against implementing it like that.

> And I could swear it was like this before, I never had to cleanup even though 
> I always forgot DLLs in use... I just can't test it because the only machine 
> with a 1.6 SVN I have is a server and doesn't have Word (and will never have) 
> - any idea how to mark a file "in use" on Windows Server 2003? 
> Notepad/Wordpad doesn't cut it.

Probably see exclusive sharing modes (flags of the Win32 CreateFile() API).
Either code up a quick app which locks a file, or do an internet search on
the terms encountered in CreateFile() docs and thus discover some app
which already does that, too.

HTH,

Andreas Mohr


Re: [BUG] 1.6.17 "svn: Bogus date" fatal abort - due to incomplete chunk!?

2012-05-11 Thread Andreas Mohr
Hi,

On Fri, May 11, 2012 at 09:49:36AM +, Greg Stein wrote:
> On Thu, May 10, 2012 at 08:57:33PM +0200, Andreas Mohr wrote:

[...]

> I ran into an SvnBridge bug a few years ago. AFAIK, they still have
> not fixed it, despite my pointer to the specific line in the bridge
> code causing the problem.
> 
> Codeplex is not exactly well-maintained :-(

Oh hell yeah indeed. And a pretty terrible community development interface
(e.g. patch files will get *removed* from public view once they're rejected,
e.g. for "outdated patch, please update" reasons - well then how to ever get
it updated??).

> >...
> > Anyway, more details:
> > That obviously seems to be a problem with parsing/transmission of
> > HTTP Chunked Transfers
> > (and BTW another abort also happened with a "SVN: Malformed XML" error
> > on another svn operation).
> 
> Well, if the connection somehow indicates EOF in the middle the XML
> response, then I'm not surprised by that error.

[...]

> The Subversion client has been dealing with chunked encoding since its
> inception back in 2000. The Apache httpd server (and mod_dav(_svn))
> have been sending responses that way forever. I really can't see that
> this is a problem on the client side.

Scratch that. It seems that this is a grave problem in yet another
(I already had to deactivate one for similar reasons)
dirty "wrapper" class of SvnBridge, ListenerResponse.cs.
Its Flush() _override_ handler (which does get invoked by system-side
handlers directly!) has problematic state handling, with an open-coded
bool "flushed" member.
AND THAT MEMBER NEVER GETS RESET, EVEN IN CASE OF CONTINUED WRITES!
which means that stale "flushed" member reliably prevents any object-lifetime
cleanup (via "using", i.e. Dispose() / destructor)
from properly executing these objects' Flush() on shutdown.

Wireshark analysis does show that the traffic ends right within the
timestamp part, BTW, which suggests a flushing problem on the sender
side (SvnBridge).


Unfortunately, early attempts at fixing that problematic flushing
initially resulted in transfers that work even less (I keep hitting timeouts).
Oh well, back to Wireshark analysis (previous semi-working handling vs.
now broken handling).
Possibly it's a matter of intermingled writers writing on the same
output, with now mixed-up flushing happening, or some such.

> Would you mind trying the same scenario using ra_serf? It doesn't have
> any problems with chunked responses either, but it might give a
> different error which would help direct some further inquiry.

Originally I thought that it's not needed any more,
but now that I'm hitting further issues it might turn out to be helpful.

Thank you for your very detailed reply!

Andreas Mohr


Re: [BUG] 1.6.17 "svn: Bogus date" fatal abort - due to incomplete chunk!?

2012-05-10 Thread Andreas Mohr
Hi,

On Thu, May 10, 2012 at 06:32:06PM +0200, Stephen Butler wrote:
> that's an excellent bug report.  Unfortunately, TFS SvnBridge isn't an
> Apache Subversion server.  It emulates the mod_dav_svn protocol,
> like GitHub does.
> 
>   http://svn.haxx.se/dev/archive-2012-02/0244.shtml

Hohumm. No nicer way to say "You're On Your Own, Get Lost." :-)

> You'll have to contact the TFS SvnBridge developers.

That would be... me (for all intents and purposes). Argh.
I'll keep a strong mental note (engraved in stone)
to firmly keep away from any kind of Microsoft "infrastructure" / "ecosystem"
in future.



Anyway, more details:
That obviously seems to be a problem with parsing/transmission of
HTTP Chunked Transfers
(and BTW another abort also happened with a "SVN: Malformed XML" error
on another svn operation).
[BTW Google shows 167000 results for "malformed XML subversion",
which might be a tad much to believe they are all
*actual* payload corruption incidents]

So far I cannot identify a problem on the SvnBridge side here
(which in itself is astonishing!):
it's configuring response objects to have them
implicitly (internally) do chunked encoding:
response.SendChunked = true;
and it's also making sure to have a "using" statement for the
StreamWriter object, to ensure proper Flush()ing,
even in case of exceptions.
For source, see e.g.
http://svnbridge.codeplex.com/SourceControl/changeset/view/57698#776175

So there are some possibilities:
- subversion server is configuring its "update-report" to *not* do
  chunked transfers (i.e., doing Content-Length:-based transfers instead),
  and this is why there's no problem on its side (will check immediately)
  [SvnBridge enables response.SendChunked almost everywhere]
- subsequent chunks fail to arrive for some reason
  (premature response shutdown on server side?),
  and *this* is why svn client shuts down hard
  (will revisit my Wireshark tracing to verify)
- svn client HTTP response parsing library (neon?) implementation is
  buggy, taking parsing results of *incomplete*, to-be-amended chunks
  at face value (I'm currently betting on this one,
  but might change my opinion at any time)


Any other thoughts on this?

Thanks,

Andreas Mohr


[BUG] 1.6.17 "svn: Bogus date" fatal abort - due to incomplete chunk!?

2012-05-10 Thread Andreas Mohr
Hi,

subversion-1.6.17-1 here, on Linux RHEL5, versus a TFS SvnBridge server (don't 
think that matters though).


~/.subversion/servers:
neon-debug-mask = 511


.
.
.
[chunk] < 400
Got chunk size: 1024
Reading 1024 bytes of response body.
Got 1024 bytes.
Read block (1024 bytes):
[
UUIDUUID-a1e8-4837-9fb9-UUIDUUIDUUID











/MY_TFS_SERVER:8080/!svn/ver/REVREV/PROJ/DIRNAME
REVREV
2012-05-10T09:29:05.41Z
unknown
UUIDUUID-a1e8-4837-9fb9-UUIDUUIDUUID

/MY_TFS_SERVER:8080/!svn/ver/61677/PROJ/DIRNAME/DIRNAME
61677
2012-05-1] 
<===
XML: Parsing 1024 bytes. 
<===
XML: start-element (224, {svn:, set-prop}) => 213
XML: end-element (213, {svn:, set-prop})
XML: char-data (224) returns 0
XML: start-element (224, {svn:, set-prop}) => 213
XML: char-data (213) returns 0
XML: end-element (213, {svn:, set-prop})
XML: char-data (224) returns 0
XML: start-element (224, {svn:, prop}) => 245
XML: end-element (245, {svn:, prop})
XML: char-data (224) returns 0
XML: end-element for 224 failed with -1.
XML: end-element (224, {svn:, add-directory})
XML: XML_Parse returned 1
sess: Closing connection.
sess: Connection closed.
Request ends, status 200 class 2xx, error line:
200 OK
Running destroy hooks.
Request ends.
svn: Bogus date   <===
sess: Destroying session.
sess: Destroying session.




To me it totally looks like svn proceeds with parsing the first chunk of the 
reply
(a full initial 1024 bytes, as much as could be gathered within the
statically-sized limited buffer),
then the committed-date property manages to hit the end of the chunk
(i.e., seemingly incomplete/corrupt content *within* this incomplete parsing 
context),
thus date parsing throws a SVN_ERR_BAD_DATE somewhere,
and this error state is actually taken at face value
(read: a very large svn up operation simply aborted directly and fatally, ugh!)
despite the chunk being an *INCOMPLETE* reply.

=> severe problem.


(now that I reported it on users@, should I file this thing as an actual
honest-to-god bug?)

Thanks,

Andreas Mohr