On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland <estifo...@gmail.com> wrote:

> I'll top-post an answer to this one as this thread has wandered and gotten
> very long, so who knows who is still following :)
>
> I made a simple tweak to the ssh code that gets ssh working for me on
> Ubuntu and may solve some of the login shell related problems that have
> been reported with respect to ssh:
>
>
> http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0
>

Not exactly the same patch, but something quite similar has been checked in
at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it out and
let me know if it clears any outstanding problems, or if I missed some
obvious benefit of Matt's patch in my refactoring.
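
For anyone who wants to check whether their login scripts would have
tripped up the old code, a quick manual probe (host name illustrative;
this mimics the handshake described in this thread, not fossil's exact
wire protocol) is:

    ssh -T user@host 'echo test'

If anything besides the single line "test" comes back (an MOTD, `fortune'
output, even the lone blank from an `echo " "' in a startup file), that
extra text is exactly what the new read-and-discard step has to absorb.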



>
> Joerg asked if this will make it into a future release. Can Richard or
> one of the developers take a look at the change and comment?
>
> Note that unfortunately this does not fix the issues I'm having with
> F-Secure ssh, but I hope it gets us one step closer.
>
> Thanks,
>
> Matt
> -=-
>
>
>
> On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
> <veedeeh...@googlemail.com> wrote:
>
>> On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland <estifo...@gmail.com>
>> wrote:
>>
>>> On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
>>> <veedeeh...@googlemail.com> wrote:
>>>
>>>> On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland <estifo...@gmail.com>
>>>> wrote:
>>>>
>>>>> sshfs is cool but in a corporate environment it can't always be used.
>>>>> For example, fuse is not installed for end users on the servers I
>>>>> have access to.
>>>>>
>>>>> I would also be very wary of sshfs and multi-user access. Sqlite3
>>>>> locking on NFS doesn't always work well; I imagine that locking
>>>>> issues on sshfs
>>>>>
>>>> it doesn't? in which way? and are these problems restricted to NFS, or
>>>> do they affect other file systems (zfs, qfs, ...) as well?
>>>> do you mean that a 'central' repository could be harmed if two users
>>>> try to push at the same time (and would corruption propagate to the
>>>> users' "local" repositories later on)? I do hope not...
>>>>
>>>
>>>
>>> I should have qualified that with the detail that historically NFS
>>> locking has been reported as an issue by others, but I myself have not
>>> seen it. What I have seen, using sqlite3 and fossil very heavily on
>>> NFS, is users reaching for kill -9 right off the bat rather than first
>>> trying plain kill. The lock gets stuck "set", and only dumping the
>>> sqlite db to text and recreating it seems to clear it (not sure, but
>>> sometimes copying to a new file and moving it back may also clear the
>>> lock).
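>>>
>>> (a minimal sketch of that recovery; the file names are illustrative:
>>>
>>>   sqlite3 stuck.fossil .dump | sqlite3 fresh.fossil
>>>   mv fresh.fossil stuck.fossil
>>>
>>> the dump/reload rebuilds the database in a brand-new file, so any stale
>>> lock state attached to the old file is left behind.)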
>>>
>>> I've seen a corrupted db once or maybe twice, but it was never clear
>>> whether concurrent access on NFS was the cause. Thankfully it is
>>> fossil, and recovery is a "cp" away.
>>>
>>> Quite some time ago I did limited testing of concurrent access to an
>>> sqlite3 db on AFS and GFS, and it seemed to work fine. The AFS test
>>> was very slow, but that could well be due to my being clueless about
>>> how to correctly tune AFS itself.
>>>
>>> When you say zfs do you mean using the NFS export functionality of zfs?
>>>
>> yes
>>
>>> I've never tested that and it would be very interesting to know how
>>> well it works.
>>>
>>
>> not yet possible here, but we'll probably migrate to zfs in the not too
>> distant future.
>>
>>
>>
>>> My personal opinion is that fossil works great over NFS, but I would
>>> caution anyone trying it to test thoroughly before trusting it.
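>>>
>>> (a trivial smoke test along those lines; paths are illustrative: hit
>>> the same repository file from two clients at once, then let sqlite
>>> check it over:
>>>
>>>   fossil clone /nfs/fossils/repo.fossil a.fossil &
>>>   fossil clone /nfs/fossils/repo.fossil b.fossil &
>>>   wait
>>>   sqlite3 /nfs/fossils/repo.fossil "PRAGMA integrity_check;"
>>>
>>> clones mostly read, so also try concurrent commits or pushes from two
>>> checkouts before trusting writes.)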
>>>
>>>
>>>
>>>>> could well be worse.
>>>>
>>>>>
>>>>> sshfs is an excellent work-around for an expert user but not a
>>>>> replacement for the feature of ssh transport.
>>>>>
>>>>>
>>>> yes, I would love to see a stable solution that doesn't suffer from
>>>> interference from terminal output (there are people out there who love
>>>> having the good old `fortune' in their login scripts...).
>>>>
>>>> btw: why couldn't fossil simply(?) scan a reasonable amount of terminal
>>>> output for the occurrence of a sufficiently strong magic pattern
>>>> indicating that the "noise" has passed and fossil can go to work? right
>>>> now even putting `echo " "' (sending a single blank) in a login script
>>>> suffices to make the transfer fail. my understanding is that fossil
>>>> _does_ send something like `echo test' (is this true?). all unexpected
>>>> tty output from the login scripts would come _before_ that, so why not
>>>> test for the expected text ('test' just not being unique/strong enough)
>>>> at the end of whatever is sent (up to a reasonable length)? is this a
>>>> stupid idea?
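>>>>
>>>> (roughly what I mean, as a sketch only; the marker and the remote
>>>> command are illustrative, not fossil's actual wire protocol:
>>>>
>>>>   token="FOSSIL-SYNC-$$-$(date +%s)"
>>>>   ssh -T user@host "echo $token; fossil http /path/to/repo.fossil" \
>>>>       | sed -e "1,/^$token\$/d"
>>>>
>>>> whatever the login scripts print arrives before the marker line, and
>>>> the sed range deletes everything through the first match, so only the
>>>> real payload gets through.)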
>>>>
>>>
>>>
>>> I thought of trying that some time ago but never got around to it.
>>> Inspired by your comment I gave a similar approach a quick try, and for
>>> the first time I saw ssh work on my home linux box!!!
>>>
>>> All I did was read and discard any junk on the line before sending the
>>> echo test:
>>>
>>> http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0
>>>
>>> ===========without==========
>>> rm: cannot remove `*': No such file or directory
>>> make: Nothing to be done for `all'.
>>> ssh matt@xena
>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>> ../fossil: ssh connection failed: [Welcome to Ubuntu 12.04.1 LTS
>>> (GNU/Linux 3.2.0-32-generic-pae i686)
>>>
>>>  * Documentation:  https://help.ubuntu.com/
>>>
>>> 0 packages can be updated.
>>> 0 updates are security updates.
>>>
>>> test]
>>>
>>> =============with=============
>>> fossil/junk$ rm *;(cd ..;make) && ../fossil clone
>>> ssh://matt@xena//home/matt/fossils/fossil.fossil fossil.fossil
>>> make: Nothing to be done for `all'.
>>> ssh matt@xena
>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>>                 Bytes      Cards  Artifacts     Deltas
>>> Sent:              53          1          0          0
>>> Received:     5004225      13950       1751       5238
>>> Sent:              71          2          0          0
>>> Received:     5032480       9827       1742       3132
>>> Sent:              57         93          0          0
>>> Received:     5012028       9872       1137       3806
>>> Sent:              57          1          0          0
>>> Received:     4388872       3053        360       1168
>>> Total network traffic: 1037 bytes sent, 19438477 bytes received
>>> Rebuilding repository meta-data...
>>>   100.0% complete...
>>> project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
>>> server-id:  3029a8494152737798f2768c7991921f2342a84b
>>> admin-user: matt (password is "7db8e5")
>>>
>>>
>>>
>> great. that's essentially what I had in mind (though your approach of
>> sending two commands while flushing the first response completely is
>> probably better, AFAICS). will something like this make it into a
>> future release?
>>
>> joerg
>>
>>
>>>
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Sun, Nov 11, 2012 at 2:01 AM, Ramon Ribó <ram...@compassis.com>
>>>>> wrote:
>>>>>
>>>>>
>>>>>> > Sshfs didn't fix the problems that I was having with fossil+ssh, or
>>>>>> > at least only did so partially.
>>>>>>
>>>>>> Why not? In what way did sshfs fail to give you functionality
>>>>>> equivalent to remote access to a fossil database through ssh?
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2012/11/11 Timothy Beyer <bey...@fastmail.net>
>>>>>>
>>>>>>> At Sat, 10 Nov 2012 22:31:57 +0100,
>>>>>>> j. van den hoff wrote:
>>>>>>> >
>>>>>>> > thanks for responding.
>>>>>>> > I managed to solve my problem in the meantime (see my previous
>>>>>>> > mail in this thread), but I'll make a memo of sshfs and have a
>>>>>>> > look at it.
>>>>>>> >
>>>>>>> > joerg
>>>>>>> >
>>>>>>>
>>>>>>> Sshfs didn't fix the problems that I was having with fossil+ssh, or
>>>>>>> at least only did so partially. The problems I was having with ssh
>>>>>>> were different, though.
>>>>>>>
>>>>>>> What I'd recommend is tunneling http or https through ssh, and
>>>>>>> hosting all of your fossil repositories on the host computer on
>>>>>>> your web server of choice via CGI. I do that with lighttpd, and it
>>>>>>> works flawlessly.
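>>>>>>>
>>>>>>> (a sketch of that setup; paths, port numbers, and file names are
>>>>>>> illustrative. the repository is published through a standard
>>>>>>> two-line fossil CGI script on the host:
>>>>>>>
>>>>>>>   #!/usr/bin/fossil
>>>>>>>   repository: /home/user/fossils/repo.fossil
>>>>>>>
>>>>>>> then, from the client, forward a local port and clone through it:
>>>>>>>
>>>>>>>   ssh -N -L 8080:localhost:80 user@host &
>>>>>>>   fossil clone http://localhost:8080/cgi-bin/repo.cgi repo.fossil
>>>>>>>
>>>>>>> ssh supplies the encryption and authentication; lighttpd just sees
>>>>>>> an ordinary local http request.)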
>>>>>>>
>>>>>>> Tim
>>>> --
>>>> Using Opera's revolutionary email client: http://www.opera.com/mail/
>>>>
>>
>> --
>> Using Opera's revolutionary email client: http://www.opera.com/mail/
>
>
>
>


-- 
D. Richard Hipp
d...@sqlite.org
_______________________________________________
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
