Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Ramon Ribó
 Sshfs didn't fix the problems that I was having with fossil+ssh, or at
least
 only did so partially.

Why not? In what way did sshfs fail to give you functionality equivalent to
remote access to a fossil database through ssh?


2012/11/11 Timothy Beyer bey...@fastmail.net

 At Sat, 10 Nov 2012 22:31:57 +0100,
 j. van den hoff wrote:
 
  thanks for responding.
  I managed to solve my problem in the meantime (see my previous mail in
  this thread), but I'll make a memo of sshfs and have a look at it.
 
  joerg
 

 Sshfs didn't fix the problems that I was having with fossil+ssh, or at
 least only did so partially.  Though, the problems that I was having with
 ssh were different.

 What I'd recommend doing is tunneling http or https through ssh, and host
 all of your fossil repositories on the host computer on your web server of
 choice via cgi.  I do that with lighttpd, and it works flawlessly.
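
A rough sketch of the setup Tim describes, with placeholder host names, paths, and port numbers (none of these appear in the original mail): repositories served by lighttpd through a standard fossil CGI script, reached from the client over an ssh port-forward.

```shell
# Server side (assumed layout): a fossil CGI script served by lighttpd.
#
#   /var/www/repos.cgi            -- executable, containing:
#       #!/usr/bin/fossil
#       directory: /home/fossil/repositories
#
#   lighttpd config fragment enabling it:
#       server.modules += ( "mod_cgi" )
#       cgi.assign = ( ".cgi" => "" )

# Client side: forward a local port to the server's web port over ssh,
# then clone/sync over plain http through the tunnel.
ssh -N -L 8080:localhost:80 user@server.example &
fossil clone http://localhost:8080/repos.cgi/project project.fossil
```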

 Tim
 ___
 fossil-users mailing list
 fossil-users@lists.fossil-scm.org
 http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users



Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
sshfs is cool but in a corporate environment it can't always be used. For
example fuse is not installed for end users on the servers I have access
to.

I would also be very wary of sshfs and multi-user access. Sqlite3 locking
on NFS doesn't always work well; I imagine that locking issues on sshfs
could well be worse.

sshfs is an excellent work-around for an expert user but not a replacement
for the feature of ssh transport.
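
Matt's caution can be illustrated with sqlite3's own locking behavior. This is a local sketch (file names invented), not from the thread; on a filesystem with unreliable lock support the second writer might *not* be refused, which is exactly where the corruption risk comes from.

```python
import os
import sqlite3
import tempfile

# One connection takes the write lock; a second one must be refused.
# Filesystems with unreliable locking (NFS, sshfs, ...) can fail to
# enforce this invariant.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN below
a.execute("CREATE TABLE t (x)")
a.execute("BEGIN EXCLUSIVE")             # a now holds the exclusive lock

b = sqlite3.connect(path, timeout=0.1)   # give up quickly instead of waiting
try:
    b.execute("INSERT INTO t VALUES (1)")
    outcome = "second writer got in"
except sqlite3.OperationalError as exc:
    outcome = str(exc)                   # "database is locked"

a.rollback()
print(outcome)
```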




On Sun, Nov 11, 2012 at 2:01 AM, Ramon Ribó ram...@compassis.com wrote:


  Sshfs didn't fix the problems that I was having with fossil+ssh, or at
 least
  only did so partially.

 Why not? In what way did sshfs fail to give you functionality equivalent
 to remote access to a fossil database through ssh?





Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread j. van den hoff
On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com  
wrote:



sshfs is cool but in a corporate environment it can't always be used. For
example fuse is not installed for end users on the servers I have access
to.

I would also be very wary of sshfs and multi-user access. Sqlite3 locking
on NFS doesn't always work well, I imagine that locking issues on sshfs


it doesn't? in which way? and are the mentioned problems restricted to NFS,
or do other file systems (zfs, qfs, ...) show them as well?
do you mean that a 'central' repository could be harmed if two users try
to push at the same time (and would corruption propagate to the users'
local repositories later on)? I do hope not...




could well be worse.

sshfs is an excellent work-around for an expert user but not a replacement
for the feature of ssh transport.


yes I would love to see a stable solution not suffering from interference
of terminal output (there are people out there who love the good old
`fortune' as part of their login script...).


btw: why could fossil not simply(?) filter a reasonable amount of terminal
output for the occurrence of a sufficiently strong magic pattern indicating
that the noise has passed by and fossil can go to work? right now putting
`echo " "' (sending a single blank) in the login script suffices to make
the transfer fail. my understanding is that fossil _does_ send something
like `echo test' (is this true?). all unexpected tty output from the login
scripts would come _before_ that, so why not test for receiving the
expected text ('test' just being not unique/strong enough) at the end of
whatever is sent (up to a reasonable length)? is this a stupid idea?
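
The idea above can be sketched as a small scanner that consumes and discards login noise until a sufficiently unique sentinel appears. The sentinel value and function name here are invented for illustration; they are not fossil's actual protocol.

```python
import io

MAGIC = b"FOSSIL-SYNC-BEGIN-7f3a"   # invented sentinel; anything unlikely to
                                    # appear in motd/login-script output works

def skip_login_noise(stream, magic=MAGIC, limit=64 * 1024):
    """Read and discard bytes until `magic` is seen (or `limit` is hit).

    Returns True when the sentinel was found; everything after it is the
    real protocol data and can be read from `stream` normally.
    """
    window = b""
    while len(window) < limit:
        chunk = stream.read(1)
        if not chunk:
            return False            # EOF before the sentinel arrived
        window += chunk
        if window.endswith(magic):
            return True
    return False

# Example: motd noise followed by the sentinel, then protocol data.
noisy = io.BytesIO(b"Welcome to Ubuntu!\nfortune: ...\n" + MAGIC + b"PAYLOAD")
assert skip_login_noise(noisy)
assert noisy.read() == b"PAYLOAD"
```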












--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread j. v. d. hoff
On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com  
wrote:



On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
veedeeh...@googlemail.com wrote:


On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
wrote:

sshfs is cool but in a corporate environment it can't always be used. For
example fuse is not installed for end users on the servers I have access
to.

I would also be very wary of sshfs and multi-user access. Sqlite3 locking
on NFS doesn't always work well, I imagine that locking issues on sshfs



it doesn't? in which way? and are the mentioned problems restricted to NFS
or other file systems (zfs, qfs, ...) as well?
do you mean that a 'central' repository could be harmed if two users try
to push at the same time (and would corruption propagate to the users'
local repositories later on)? I do hope not so...



I should have qualified that with the detail that historically NFS locking
has been reported as an issue by others but I myself have not seen it. What
I have seen in using sqlite3 and fossil very heavily on NFS is users using
kill -9 right off the bat rather than first trying with just kill. The lock
gets stuck set and only dumping the sqlite db to text and recreating it
seems to clear the lock (not sure but maybe sometimes copying to a new file
and moving back will clear the lock).
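
The dump-and-recreate recovery Matt describes is `sqlite3 old.db .dump | sqlite3 new.db` at the command line; here is a minimal Python sketch of the same idea using the stdlib's `Connection.iterdump` (file names and table contents invented for illustration).

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
old_path = os.path.join(tmp, "old.db")
new_path = os.path.join(tmp, "new.db")

# Build a small database standing in for a repository file.
old = sqlite3.connect(old_path)
old.execute("CREATE TABLE blob (rid INTEGER PRIMARY KEY, content TEXT)")
old.execute("INSERT INTO blob (content) VALUES ('hello')")
old.commit()

# Dump to SQL text and replay into a fresh file -- the fresh file carries
# none of the old file's lock state.
new = sqlite3.connect(new_path)
new.executescript("\n".join(old.iterdump()))

print(new.execute("SELECT content FROM blob").fetchone()[0])
```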

I've seen a corrupted db once or maybe twice but never been clear that it
was caused by concurrent access on NFS or not. Thankfully it is fossil and
recovery is a cp away.

Quite some time ago I did limited testing of concurrent access to an
sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was very
slow but that could well be due to my being clueless on how to correctly
tune AFS itself.

When you say zfs do you mean using the NFS export functionality of zfs?

yes

I've never tested that and it would be very interesting to know how well
it works.


not yet possible here, but we'll probably migrate to zfs in the not too  
far future.




My personal opinion is that fossil works great over NFS but would caution
anyone trying it to test thoroughly before trusting it.




 could well be worse.


sshfs is an excellent work-around for an expert user but not a replacement
for the feature of ssh transport.



yes I would love to see a stable solution not suffering from interference
of terminal output (there are people out there loving the good old
`fortune' as part of their login script...).

btw: why could fossil not simply(?) filter a reasonable amount of terminal
output for the occurrence of a sufficiently strong magic pattern indicating
that the noise has passed by and fossil can go to work? right now putting
`echo  ' (sending a single blank) suffices to let the transfer fail. my
understanding is that fossil _does_ send something like `echo test' (is
this true). all unexpected output to tty from the login scripts would come
_before_ that so why not test for receiving the expected text ('test' just
being not unique/strong enough) at the end of whatever is send (up to a
reasonable length)? is this a stupid idea?



I thought of trying that some time ago but never got around to it. Inspired
by your comment I gave a similar approach a quick try and for the first
time I saw ssh work on my home linux box!!!

All I did was read and discard any junk on the line before sending the echo
test:

http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0

===without==
rm: cannot remove `*': No such file or directory
make: Nothing to be done for `all'.
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
../fossil: ssh connection failed: [Welcome to Ubuntu 12.04.1 LTS (GNU/Linux
3.2.0-32-generic-pae i686)

 * Documentation:  https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

test]

==with===
fossil/junk$ rm *;(cd ..;make)  ../fossil clone
ssh://matt@xena//home/matt/fossils/fossil.fossil
fossil.fossil
make: Nothing to be done for `all'.
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
Bytes  Cards  Artifacts Deltas
Sent:  53  1  0  0
Received: 5004225  13950   1751   5238
Sent:  71  2  0  0
Received: 5032480   9827   1742   3132
Sent:  57 93  0  0
Received: 5012028   9872   1137   3806
Sent:  57  1  0  0
Received: 4388872   3053360   1168
Total network traffic: 1037 bytes sent, 19438477 bytes received
Rebuilding repository meta-data...
  100.0% complete...
project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
server-id:  3029a8494152737798f2768c7991921f2342a84b
admin-user: matt (password is 7db8e5)




great. that's 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
I'll top-post an answer to this one as this thread has wandered and gotten
very long, so who knows who is still following :)

I made a simple tweak to the ssh code that gets ssh working for me on
Ubuntu and may solve some of the login shell related problems that have
been reported with respect to ssh:

http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0

Joerg asked if this will make it into a future release. Can Richard or one
of the developers take a look at the change and comment?

Note that unfortunately this does not fix the issues I'm having with
fsecure ssh but I hope it gets us one step closer.

Thanks,

Matt
-=-



Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Richard Hipp
On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com wrote:

 I'll top-post an answer to this one as this thread has wandered and gotten
 very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


Not exactly the same patch, but something quite similar has been checked in
at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it out and
let me know if it clears any outstanding problems, or if I missed some
obvious benefit of Matt's patch in my refactoring.




 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-




Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com wrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
 out and let me know if it clears any outstanding problems, or if I missed
 some obvious benefit of Matt's patch in my refactoring.


It seems not to work in my situation; it fails right after the sending of
test1. I'm not sure why.

= I get the following 
fossil/junk$ ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
fossil.fossil
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
../fossil: ssh connection failed: [test1]









Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Richard Hipp
On Sun, Nov 11, 2012 at 7:10 PM, Matt Welland estifo...@gmail.com wrote:


 On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.comwrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
 out and let me know if it clears any outstanding problems, or if I missed
 some obvious benefit of Matt's patch in my refactoring.


 It seems not to work in my situation with the sending of test1. I'm not
 sure why.


The trunk change works here.  And I don't see how it is materially
different from your patch.  Am I overlooking something?



 = I get the following 
 fossil/junk$ ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil

 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [test1]








 Joerg iasked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
 veedeeh...@googlemail.com wrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com**wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
 
 wrote:

  sshfs is cool but in a corporate environment it can't always be
 used. For

 example fuse is not installed for end users on the servers I have
 access
 to.

 I would also be very wary of sshfs and multi-user access. Sqlite3
 locking
 on NFS doesn't always work well, I imagine that locking issues on
 sshfs


 it doesn't? in which way? and are the mentioned problems restricted
 to NFS
 or other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users
 try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I do hope not so...



 I should have qualified that with the detail that historically NFS
 locking
 has been reported as an issue by others but I myself have not seen it.
 What
 I have seen in using sqlite3 and fossil very heavily on NFS is users
 using
 kill -9 right off the bat rather than first trying with just kill. The
 lock
 gets stuck set and only dumping the sqlite db to text and recreating
 it
 seems to clear the lock (not sure but maybe sometimes copying to a new
 file
 and moving back will clear the lock).

 I've seen a corrupted db once or maybe twice but never been clear that
 it
 was caused by concurrent access on NFS or not. Thankfully it is fossil
 and
 recovery is a cp away.

 Quite some time ago I did limited testing of concurrent access to an
 sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was
 very
 slow but that could well be due to my being clueless on how to
 correctly
 tune AFS itself.

 When you say zfs do you mean using the NFS export functionality of zfs?

 yes

  I've never tested that and it would be very interesting to know how
 well it
 works.


 not yet possible here, but we'll probably migrate to zfs in the not too
 far future.



 My personal opinion is that fossil works great over NFS but would
 caution
 anyone trying it to test thoroughly before trusting it.



  could well be worse.


 sshfs is an excellent work-around for an expert user but not a
 replacement
 for the feature of ssh transport.


 yes I would love to see a stable solution not suffering from
 interference
 of terminal output (there are people out there loving the good old
 `fortune' as part of their login script...).

 btw: why could fossil not simply(?) filter a reasonable amount of
 terminal
 output for the occurrence of a sufficiently strong magic pattern
 indicating
 that the noise has passed by and fossil can go to work? right now
 putting
 `echo  ' (sending a single blank) suffices to let the transfer
 fail. my
 understanding is that fossil _does_ send something like `echo test'
 (is
 this true). all unexpected output to tty from the login scripts
  would come
 _before_ that so why not test for receiving the expected text ('test'
 just
 being not unique/strong enough) at the end of whatever is send (up to
 a
 reasonable length)? is this a stupid idea?



 I thought of trying that some time ago 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
Comparison of your fix vs. my hack below. I suspect that blindly clearing
out the buffer of any line noise before sending anything to the remote end
will work better, but I have no logic or solid arguments to back up that
assertion.

=
matt@xena:~/data/fossil/junk$ fsl info
project-name: Fossil
repository:   /home/matt/fossils/fossil.fossil
local-root:   /home/matt/data/fossil/
project-code: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
checkout: 4473a27f3b6e049e3c162e440e0e4c87daf9570c 2012-11-11 22:42:50 UTC
parent:   8c7faee6c5fac25b8456e96070ce068400d1d7e1 2012-11-11 17:59:42 UTC
tags: trunk
comment:  Further attempts to help the ssh sync protocol move past
noisy
  motd comments and other extraneous login text, synchronize
with
  the remote end, and start exchanging messages successfully.
  (user: drh)
matt@xena:~/data/fossil/junk$ rm -f fossil*;(cd ..;make) && ../fossil clone
ssh://matt@xena//home/matt/fossils/fossil.fossil fossil.fossil
make: Nothing to be done for `all'.
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
../fossil: ssh connection failed: [test1]
=

matt@xena:~/data/fossil/junk$ rm -f fossil*;(cd ..;make > make.log) &&
../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
fossil.fossil
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
              Bytes      Cards  Artifacts     Deltas
Sent:            53          1          0          0
Received:   5004225      13950       1751       5238
Sent:            71          2          0          0
Received:   5032480       9827       1742       3132
Sent:            57         93          0          0
Received:   5012028       9872       1137       3806
Sent:            57          1          0          0
Received:   4422156       3069        367       1169
Total network traffic: 1035 bytes sent, 19471761 bytes received
Rebuilding repository meta-data...
  100.0% complete...
project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
server-id:  3e5f8ed7b0eed8a144fa4b07b4b34cc6c374d20c
admin-user: matt (password is 40faae)





On Sun, Nov 11, 2012 at 6:09 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 7:10 PM, Matt Welland estifo...@gmail.com wrote:


 On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com wrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
 out and let me know if it clears any outstanding problems, or if I missed
 some obvious benefit of Matt's patch in my refactoring.


 It seems not to work in my situation with the sending of test1. I'm not
 sure why.


 The trunk changes work here.  And I don't see how they are materially
 different from your patch.  Am I overlooking something?



 = I get the following 
 fossil/junk$ ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil

 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [test1]








 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 F-Secure ssh, but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
 veedeeh...@googlemail.com wrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland 
 estifo...@gmail.com
 wrote:

  sshfs is cool but in a corporate environment it can't always be used. For
 example fuse is not installed for end users on the servers I have access to.

 I would also be very wary of sshfs and multi-user access. Sqlite3 locking
 on NFS doesn't always work well, I imagine that locking issues on sshfs

 it doesn't? in which way? and are the mentioned problems restricted to NFS,
 or do they affect other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I do hope not so...



 I should 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Richard Hipp
On Sun, Nov 11, 2012 at 8:25 PM, Matt Welland estifo...@gmail.com wrote:

 Comparison of your fix vs. my hack below. I suspect that blindly clearing
 out the buffer of any line noise before sending anything to the remote end
 will work better but I have no logic or solid arguments to back up that
 assertion.


Both versions send two echo commands to the remote side, ignore the
return from the first echo and check the return from the second.  The only
difference that I see between your patch and mine (unless I'm missing
something) is that I'm sending different echo text.  What do you see that
is different from this?
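
[Richard's description of what both patches do -- send two echo commands,
ignore the reply to the first, check the reply to the second -- can be
sketched roughly like this. A local `sh` stands in for the remote login
shell, and the probe strings are invented for illustration; they are not
the actual text either patch sends.]

```shell
# Two-probe handshake sketch: the reply to probe 1 locates the end of any
# login-script noise; the reply to probe 2 must then arrive on its own line.
probe1='probe-one'
probe2='probe-two'

# Simulated remote session: a motd prints before our commands execute.
remote_output=$(printf 'echo %s\necho %s\n' "$probe1" "$probe2" | sh -c '
  echo "Welcome to xena (motd noise)"
  exec sh            # the "login shell" now runs the piped-in commands
')

# Skip everything up to and including the first probe reply, then demand
# that the very next line is exactly the second probe reply.
second=$(printf '%s\n' "$remote_output" | sed -n "/^$probe1\$/,\$p" | sed -n '2p')

if [ "$second" = "$probe2" ]; then
  echo "handshake ok"
else
  echo "ssh connection failed: [$second]" >&2
fi
```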



