Re: Broken SVN revision paths encoding
Thanks Bert for your answer. I've used the latest TortoiseHg, which used to ship the Subversion bindings but stopped doing so. That is documented here: https://bitbucket.org/tortoisehg/thg/wiki/libsvn The server on the other end dates back to 2011. I believe it is something along the lines of 1.4, but I might be wrong and will correct myself tomorrow morning.

For completeness, let me outline what happened: As you might have guessed, I am using hg as my local svn client for obvious reasons (working offline, faster history queries and faster working copy queries, among others). I copied some random files that had to be versioned from a network share, without knowing that their filenames might be badly encoded. When I tried to check in the files in question, TortoiseHg complained that it could not find them. That should have got me thinking, but getting the job done was fairly easy using the command line, so I did not bother. I then pushed the changesets to svn, which worked flawlessly. But then the continuous integration system began to complain, and I started to comprehend that I had messed things up for good. It turned out I was not wrong in that assumption. We are now unable to run svn log, nor can the build/test environments, and we have to find a way to fix the issue and eventually a way to prevent future trouble. Ideas welcome. Cheers

What client (including version) did you use to commit… and against what kind of server? Subversion's clients properly encode characters to UTF-8 as far as we know, but perhaps you used some non-standard client for the commit.
(Newer servers should perform more verification; that is why that answer is also relevant) Bert

Sent from Surface

*From:* dpsen...@apache.org *Sent:* Tuesday, July 28, 2015 4:11 PM *To:* users@subversion.apache.org

Hi there, Somehow I was able to commit a file with a broken filename encoding and now the svn client can no longer process the log messages from the server! For example, I committed the file "fooä.bar" and when I then try to svn log I get this:

svn: E130003: The REPORT response contains invalid XML (200 OK)

However, in Wireshark I can see this coming in (stripped to the interesting lines):

<S:log-item>
<S:added-path>foo\344.file</S:added-path>
</S:log-item>

The client's svn is not the latest, but a newer version does not work either:

$ svn --version
svn, version 1.8.10 (r1615264) compiled Aug 10 2014, 15:48:46 on x86-microsoft-windows

Any good ideas how we can bring the repository back to a fully functional state? Cheers
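Since the bad name slipped in from a network share, a small pre-flight check before adding files could have caught it. A minimal sketch (the function name is mine, not part of any Subversion API): it flags raw byte filenames that are not valid UTF-8, such as the latin-1 encoded foo\344.bar above.

```python
def non_utf8_names(names):
    """Return the raw byte filenames that are not valid UTF-8.

    `names` is a list of bytes objects, e.g. what os.listdir(b".")
    returns on POSIX systems.
    """
    bad = []
    for name in names:
        try:
            name.decode("utf-8")
        except UnicodeDecodeError:
            bad.append(name)
    return bad
```

Running something like this over a directory before svn add would have flagged foo\344.bar (a lone latin-1 0xE4 byte) instead of letting it reach the repository.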
Re: Git to svn
http://lmgtfy.com/?q=git+to+svn 2013/6/3 EricD des...@gmail.com Can I migrate from Git to SVN? I didn't find any good procedure on the web. -- Dominik Psenner ## OpenPGP Key Signature # # Key ID: B469318C # # Fingerprint: 558641995F7EC2D251354C3A49C7E3D1B469318C # ##
RE: SVN Problems
1. It shows a locked status even though nobody else holds a lock. 2. Why do I have to run the CleanUp command?

Good morning, I ran into this problem recently, too.

First: As far as I can tell there might be some circumstances where the working copy is left locked by svn update/status/add/remove operations even though there was no error that caused a crash of svn. All locks *should be* released once svn terminates.

Second: I can confirm that it is not a problem with TortoiseSVN, since this happened to me while using a self-written svn frontend that interacts with svn on the command line. Since it did not happen very frequently I did not try to reproduce the error and - to be honest - it does not bother me much, because it happened only once or twice. But there is indeed some noise on the subversion-users mailing list regarding this problem. I'm going to quote one thread:

On 29.02.2012 12:40 'Adrian Smith' wrote: Even with all of the precautions above, on a single multi-threaded application we see the error below on average seven times in two thousand individual updates.

svn: E155004: Working copy '*' locked
svn: E200033: database is locked
svn: E200033: database is locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)

On 29.02.2012 15:21 'Markus Schaber' responded: Two ideas: - Some antivirus live scanner might lock the working copies. - Some other background process like the Windows search indexer, or TortoiseSVN's TSvnCache.exe, might access the working copies in parallel.

Could something like this be your problem? Meaning: are you accessing your working copy concurrently from different threads/processes? There might be some interesting race conditions treasured in 1.7.X that haven't been found yet, as Markus Schaber already mentioned on 14.02.2012 12:57: When SVN 1.7 working copies are accessed concurrently (different threads or processes), I often get SVN_ERR_WC_LOCKED.

Cheers, Dominik
RE: Recursive externals checkout
http://tortoisesvn.tigris.org/ds/viewMessage.do?dsForumId=4061&dsMessageId=2939615

Do you think svn checkout should be defensive against recursive externals?

At elego (where I work) we actually use this as a trick question during Subversion workshops. People who don't necessarily know about externals are asked to check out a working copy (which, unknown to them, contains recursive externals) and are asked to figure out if anything is going wrong and, if so, how to fix it. Once they've figured out and fixed the problem they understand what externals are :) That aside, I wouldn't mind if svn printed a warning or error message when it finds a recursive externals definition. But off-hand I don't know what a good method for detecting recursion would be. It's somewhat complicated by the fact that externals are currently separate working copies and that the recursion might be rooted not only at the immediate parent WC but at some parent of the parent. Cross-working-copy operations aren't trivial to implement correctly.

The problem may be addressed by recursively comparing the repository UUID and the relative path in the URI when the external is resolved. Proving that this check would be enough is left to the reader. :-) JMTC
RE: Recursive externals checkout
The problem may be addressed by recursively comparing the repository UUID and the relative path in the URI when the external is resolved. Proving that this check would be enough is left to the reader. :-)

This will catch the simple case when an external includes its own parent directory. But it will not catch mutually recursive externals (svn://path/to/a/ includes svn://path/to/b/ and vice-versa); there might even exist cycles over 3 or more repositories...

That's what I meant with recursive. Since externals resolve from the parent repository down to the child repository, before checking out the child repository all parent repositories would have to be checked to see whether the checkout could recurse, i.e. in Python-like pseudocode:

def checkout(repo_uri, parents=[]):
    for child_repo_uri in externals_of(repo_uri):
        if is_possibly_recursive(child_repo_uri, parents):
            warn()
            if confirmed_by_user():
                checkout(child_repo_uri, parents + [child_repo_uri])
        else:
            checkout(child_repo_uri, parents + [child_repo_uri])

def is_possibly_recursive(repo_uri, parents):
    for parent in parents:
        if uuid(repo_uri) == uuid(parent):
            # this might be a recursive checkout; other checks, like
            # comparing the relative path, could be put here
            return True
    return False

Cheers, Dominik
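For what it's worth, that pseudocode can be turned into a small runnable sketch. Everything here is hypothetical (there is no such API in Subversion): externals are modelled as a mapping from a (repository UUID, relative path) node to the nodes it pulls in, and a definition is skipped when it already appears among its own ancestors.

```python
def is_possibly_recursive(node, parents):
    """True if this (uuid, relative-path) node already appears in the
    chain of ancestor externals, i.e. checking it out would recurse."""
    return node in parents

def checkout(root, externals):
    """Walk an externals graph depth-first, skipping definitions that
    would recurse; returns the visited nodes in checkout order."""
    visited = []
    def walk(node, parents):
        visited.append(node)
        for child in externals.get(node, []):
            if is_possibly_recursive(child, parents + (node,)):
                continue  # a real client would warn and ask the user here
            walk(child, parents + (node,))
    walk(root, ())
    return visited
```

With mutually recursive externals a = ("uuid1", "a") and b = ("uuid1", "b"), checkout(a, {a: [b], b: [a]}) visits each node once instead of looping forever.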
RE: why does my SVN client process die an hour after completion of commit?
If I look in /var/log/messages on my client machine, I see this:

{{{
Oct 20 21:00:48 andLinux -- MARK --
Oct 20 21:20:48 andLinux -- MARK --
Oct 20 21:40:48 andLinux -- MARK --
Oct 20 22:00:49 andLinux -- MARK --
Oct 20 22:21:09 andLinux -- MARK --
Oct 20 22:41:37 andLinux -- MARK --
Oct 20 22:43:23 andLinux kernel: kded4 invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Oct 20 22:43:23 andLinux kernel: dd invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0
Oct 20 22:43:23 andLinux kernel: [c0103b7a] show_trace_log_lvl+0x1a/0x30
Oct 20 22:43:23 andLinux kernel: [c0103cb2] show_trace+0x12/0x20
Oct 20 22:43:23 andLinux kernel: [c0104ae5] dump_stack+0x15/0x20
Oct 20 22:43:23 andLinux kernel: [c014197d] out_of_memory+0x19d/0x200
Oct 20 22:43:23 andLinux kernel: [c014319a] __alloc_pages+0x2da/0x330
Oct 20 22:43:23 andLinux kernel: [c0152b11] read_swap_cache_async+0xa1/0xe0
Oct 20 22:43:23 andLinux kernel: [c01492f5] swapin_readahead+0x55/0x70
Oct 20 22:43:23 andLinux kernel: [c014b98e] __handle_mm_fault+0x82e/0xa00
Oct 20 22:43:23 andLinux kernel: [c010b541] do_page_fault+0x351/0x690
Oct 20 22:43:23 andLinux kernel: [c02e81fa] error_code+0x6a/0x70
Oct 20 22:43:23 andLinux kernel: [c019d315] kmsg_read+0x25/0x50
Oct 20 22:43:23 andLinux kernel: [c015e105] vfs_read+0xb5/0x140
Oct 20 22:43:23 andLinux kernel: [c015e4ed] sys_read+0x3d/0x70
Oct 20 22:43:23 andLinux kernel: [c01028f2] syscall_call+0x7/0xb
Oct 20 22:43:23 andLinux kernel: ===
Oct 20 22:43:23 andLinux kernel: Mem-info:
Oct 20 22:43:23 andLinux kernel: Normal per-cpu:
Oct 20 22:43:23 andLinux kernel: CPU 0: Hot: hi: 90, btch: 15 usd: 5 Cold: hi: 30, btch: 7 usd: 6
Oct 20 22:43:23 andLinux kernel: Active:30295 inactive:30529 dirty:0 writeback:0 unstable:0
Oct 20 22:43:23 andLinux kernel: free:509 slab:1850 mapped:42 pagetables:315 bounce:0
Oct 20 22:43:23 andLinux kernel: Normal free:2036kB min:2036kB low:2544kB high:3052kB active:121180kB inactive:122116kB present:260096kB pages_scanned:407510 all_unreclaimable? yes
}}}

Is this a virtual machine? 260096kB of RAM was probably not enough. Therefore it used a lot of swap memory, which explains why the process took so long.

{{{
Oct 20 22:43:23 andLinux kernel: lowmem_reserve[]: 0
Oct 20 22:43:23 andLinux kernel: Normal: 11*4kB 3*8kB 1*16kB 1*32kB 0*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 2036kB
Oct 20 22:43:23 andLinux kernel: Swap cache: add 1043236, delete 1043236, find 25790/146554, race 0+0
Oct 20 22:43:23 andLinux kernel: Free swap = 0kB
}}}

*ouch* Looks like you've had no swap left, either. Probably you ran out of memory during the commit and the process was killed by the kernel when it tried to allocate even more.

Cheers
Re: How to know if a path is a file in Subversion Log Records in the fastest way?
See:

$ svn list

Directories should all match the regular expression .*/$ Someone else feel free to correct me if I'm spreading bullsh*t. :-) -- Dominik Psenner
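To illustrate (the helper name is mine): svn list marks directory entries with a trailing slash, so a caller can separate directories from files like this:

```python
import re

# svn list prints directory entries with a trailing slash
DIR_RE = re.compile(r".*/$")

def split_listing(lines):
    """Partition the lines of `svn list` output into (dirs, files)."""
    dirs = [line for line in lines if DIR_RE.match(line)]
    files = [line for line in lines if not DIR_RE.match(line)]
    return dirs, files
```

Note this relies on svn list output specifically; paths in log records do not carry a trailing slash, which is presumably why the original question was asked.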
RE: Problem when commiting with svnmirrors
And the product name is ... If I remembered it, I would have posted it. It was _probably_ WANdisco... please use your favourite web search engine to find out more. Regards, D.
RE: Transaction author without requiring password
Yes I did, and this is my deduction also. The user names won't make it to the pre-commit hook unless they're required to authenticate by svnserve first. I'm not an svnserve expert, so please don't expect help there. All I can say is that you could try to set up Subversion within Apache. Then it should work, since we're using that setup daily.
RE: Problem when commiting with svnmirrors
I'm unsure if this really matters, but there exists a commercial product that enables svn mirroring with writeable mirrors by using the Paxos algorithm.

_ From: yasith tharindu [mailto:yasithu...@gmail.com] Sent: Thursday, October 06, 2011 1:03 PM To: Thorsten Schöning Cc: users@subversion.apache.org Subject: Re: Problem when commiting with svnmirrors

Actually the mirror is read-only. I'm sorry if I made you misunderstand. [1] is the configuration I made. The mirror uses the SVNMasterURI directive. [1] http://www.yasith.info/2011/08/comprehensive-svn-mirror.html Thanks..

2011/10/4 Thorsten Schöning tschoen...@am-soft.de

Good day yasith tharindu, on Tuesday, October 4, 2011 at 14:35 you wrote: We have configured svn mirrors, and when there are some big commits passing through secondary servers to the master, we sometimes get the following error. The mirror is configured as in the following diagram[1]. Is there a solution for this?

Are you really sure that you use svnsync that way? Normally svnsync mirrors must be read-only due to conflicting revision generations. In your case a big local commit would interfere with a public commit and the time it needs to get the local commit to the main repo and the newly public commit to the mirror.

Kind regards, Thorsten Schöning -- Thorsten Schöning AM-SoFT IT-Systeme - Hameln | Potsdam | Leipzig Telefon: Potsdam: 0331-743881-0 E-Mail: tschoen...@am-soft.de Web: http://www.am-soft.de AM-SoFT GmbH IT-Systeme, Konsumhof 1-5, 14482 Potsdam Amtsgericht Potsdam HRB 21278 P, Geschäftsführer: Andreas Muchow

-- Thanks.. Regards... Blog: http://www.yasith.info Twitter : http://twitter.com/yasithnd LinkedIn : http://www.linkedin.com/in/yasithnd
RE: Transaction author without requiring password
I'm trying to force commits to have an attached author, but I don't care about requiring passwords. In other words, commits should contain an author field, but there's no enforcing that the committer is who they claim to be. I've tried filtering for an author in the pre-commit hook, but the user name given in the commit is not passed unless anon-access is denied write privileges and auth-access is enabled. Furthermore, without a corresponding author name in the passwd file, I don't think svnserve makes it to the pre-commit stage at all. Are there any recommended solutions for ensuring that commits have an attached author, or similar field? Maybe something like this put in place as a pre-commit hook?

#!/bin/bash
SVNLOOK=/usr/bin/svnlook
REPOPATH="$1"
TRANSACTION="$2"

# get the author of the transaction
USER=`$SVNLOOK author "$REPOPATH" -t "$TRANSACTION"`

# reject the commit if no author is attached
if [ -z "$USER" ]; then
    exit 1
fi
exit 0
RE: Transaction author without requiring password
I tried this, but an author is not passed unless auth-access is in use AFAICT. So $USER in your example is always empty. This would mean that you would never see any usernames in commits. Did you try to commit using the --username parameter?
RE: how to compare an exported file (or set of files) against the repository?
This procedure could be easily automated by a stupid script that does something like:

for REV in $(seq 0 $NEWESTREV); do
    svn up -r $REV
    cp updated-file repos-file
    if [ -z "$(svn diff)" ]; then
        echo "found candidate rev $REV!"
    fi
    svn revert -R .
done

JMTC, D.

-Original Message- From: Ulrich Eckhardt [mailto:ulrich.eckha...@dominolaser.com] Sent: Wednesday, October 05, 2011 3:38 PM To: users@subversion.apache.org Subject: Re: how to compare an exported file (or set of files) against the repository?

On 05.10.2011, 14:49, Mertens, Bram wrote:

I have been unable to find an answer to this in the FAQ or the mailing list archives. I found one question that appears to be similar to what I'm trying to achieve, but it did not contain a reply that solves my problem.

I haven't found the need for that yet, even though I'm prepared (see below) for the situation.

I've got a set of files that were exported from a repository some time ago. The files have been moved around and some have been edited since. I would like to find out: a) what revision these files are from and

There are so-called keywords, which SVN can be made to replace in text files. You can for example tell it to fill in the URL and revision a file is checked out from. This can be used to attach some metadata to an exported source tree. Of course that doesn't help you now, unless someone already prepared for this case. Note that the revision of a file doesn't change if you change a different file, so it can't give you _the_ revision of the source tree. OTOH, there is no guarantee that you don't have an export from a mixed-revision working copy.

b) changes have been made to it that may not be in the repository?

Find out where this was exported from, and check out that revision. Copy the export on top of it and compare, or use a recursive tree comparison utility.

Is this possible without looping through all the revisions and calculating checksums?
The problem with that approach, besides the time it would take, is that it would obviously not catch files that are not 100% identical to the files in that revision.

If the source tree contains files from several different revisions, that will be the only (hard) way to go. However, I guess you can expect that the export was made from one revision. If you know the history of the project a bit, you might be able to find the approximate revision it was checked out at and from there search for the exact revision. Another hint might be hidden in modification timestamps.

BTW: The most efficient way is to check out an approximate revision and then use svn up -r ... to move to the next revision quickly. In particular you shouldn't use export instead of an incremental update. Good luck! Uli

Domino Laser GmbH, Fangdieckstraße 75a, 22547 Hamburg, Deutschland Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932 Visit our website at http://www.dominolaser.com
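If the property being tested happens to be monotonic over revisions (for instance "the file already exists at rev R"), Uli's hint about moving with svn up can be sharpened into a binary search. A sketch under that assumption; `matches` is a hypothetical callback that would wrap an `svn up -r REV` plus a file comparison:

```python
def first_matching_rev(lo, hi, matches):
    """Binary-search revisions lo..hi (inclusive) for the first revision
    where matches(rev) is True, assuming matches is monotonic: False up
    to some revision and True from there on.  Returns None if no
    revision in the range matches."""
    found = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if matches(mid):
            found = mid       # remember candidate, look for an earlier one
            hi = mid - 1
        else:
            lo = mid + 1
    return found
```

This needs only about log2(N) checkouts instead of N, but note it does not apply to non-monotonic tests like "the working copy exactly equals the export", where the sequential scan remains necessary.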
RE: Strange behavior on directory delete/commit
You could also delete the directory directly in the repository using svn delete URL -m "message". This way you would avoid the problem of committing partial changes of your working copy. ... which is just another workaround.
RE: Strange behavior on directory delete/commit
I think SVN is behaving correctly. When you do svn commit foo you're telling Subversion to commit changes made in foo. There are no changes in foo because it's been deleted. The changes, instead, are in its parent directory, the one from where you issued your commands. That's why svn commit works; it assumes . as the path.

I disagree. Providing a PATH argument should tell svn that one wants to commit the changes to foo. In this case it would be that foo was deleted. Since I want only the changes to foo to be versioned, it would not make sense to include all other changes within the parent directory.

I think svn commit foo would work fine, provided you do not rmdir foo first; that was your error. I also have a feeling Subversion 1.7's new working copy arrangement will fix or at least change this behavior.

So there's still a light at the far end of the tunnel! :o) From what I've read just now, 1.7.X's WC arrangement will become similar to hg's. I understand well that the current svn infrastructure is not suited for this use case. Would patching svn 1.6.X to fix the behaviour be feasible?
RE: Strange behavior on directory delete/commit
I doubt it. Or rather, the behavior is not broken. The user is broken. As I said: svn commit foo should have worked fine if you had not run rmdir foo beforehand. Don't run rmdir foo. Just run svn rm foo followed by svn commit foo and everything should work.

True - it's nothing totally new to me. I just doubt that the lusers who are going to use the frontend I'm writing will understand the difference. :-)
RE: Strange behavior on directory delete/commit
Then you must explain it to them. :) To move or delete items in a working copy, you must use svn commands. You must not use OS commands. That's just how it is. This is going to be a long journey. *jokingly* Thanks for the insights and incredibly fast answers! It's awesome that you're working on the .svn meta-data folders problem. If you manage to get it settled, I believe 1.7.X is going to be great!
RE: Strange behavior on directory delete/commit
I'm not sure you understand the kinds of problems the new working copy format is settling.

For me it settles the major problem of multiple .svn folders in a checkout.

You must still use svn commands instead of OS commands in 1.7. That won't change. I don't think it will ever change. The reason is that Subversion tracks operations explicitly, rather than implicitly. In other words, Subversion needs to modify meta-data in the .svn directory if you change something. If you run an OS-level command, the actual disk state and the meta-data get out of sync.

*sarcastic* The user does not care what levers Subversion needs to pull to show him what parts of a file were modified. */sarcastic*

Subversion is not like git, which goes Woaaahh... I just woke up and now... what??? What did the user DO??? Well, whatever, I'm just gonna take a wild guess to deal with this and go back to bed...

Please correct me if I'm wrong: Subversion is still an observer, and whatever a user does, he must tell Subversion what he did in cases where Subversion can't understand it by itself (i.e. file/folder renames/moves that preserve history across revisions). Every VCS I know works like this. Maybe someone invents a VCS filesystem that can hook into the filesystem operations, but that's something that will be written on other papers. ;-)

Back to topic: The current working copy layout is unable to handle the use case I described, since it needs the missing meta-data that was stored within the deleted folder itself. Thus Subversion 1.7 would need special logic to handle these cases. One could discuss whether this should be fixed so that 1.7 behaves on folder deletes just like it does on file deletes. I leave that decision up to the devs, since I'm unable to estimate the costs. The new working copy layout does not keep meta-data within subfolders and thus is able to commit just that change. HOORAY! This issue *can* be solved with WC-NG without special logic!
--quote Ryan Schmidt-- I also have a feeling Subversion 1.7's new working copy arrangement will fix or at least change this behavior. --eof quote Ryan Schmidt-- Greetings, D.
RE: Strange behavior on directory delete/commit
[snip] Subversion is still an observer, and whatever a user does, he must tell Subversion what he did in cases where Subversion can't understand it by itself (i.e. file/folder renames/moves that preserve history across revisions). Every VCS I know works like this. Maybe someone invents a VCS

Sort of. Mercurial is perfectly happy if you tell it afterwards (before a commit, of course) that you renamed that file and deleted the other one. This behavior comes in handy if you have to rename some files in the IDE, because only the IDE knows what (and in which file) has to be renamed... It smells different, but is basically the same. :-) But now I understand what Stefan Sperling wanted to point out.
Strange behavior on directory delete/commit
Hi, having a fresh Subversion repository, doing this as preparation:

$ mkdir foo/
$ svn add foo
$ svn commit -m test
Adding foo
Revision X sent.
$ rmdir foo
$ svn st
!       foo
$ svn delete foo
D       foo

And finally this command fails:

$ svn commit foo -m fail
svn: entry foo has no URL

This instead does work:

$ svn commit -m works
Delete foo
Revision X sent.

svn commit behaves inconsistently depending on whether a PATH argument is given or not. If it is a bug, it should get at least priority P2, because one is unable to commit partial changes to the WC, as in this scenario:

$ mkdir foo/
$ svn add foo
A       foo
$ svn commit -m test
Adding foo
Revision X sent.
$ rmdir foo
$ svn st
!       foo
$ svn delete foo
D       foo
$ touch bar
$ touch foobar
$ svn add bar foobar
A       bar
A       foobar
$ svn commit foo bar
svn: entry foo has no URL

To get things done, one would have to back up foobar somewhere, revert foobar, do the commit without PATH arguments and copy foobar back over to the WC. *eek* Let me know what you think about this! Greetings, D.