On Thu, Dec 8, 2011 at 8:18 AM, Heinrich Huss <
heinrich.h...@psh-consulting.de> wrote:

> Hello Richard,
> now I'm confused. Is it a valid option to use file access from several
> clients? I always assumed I had to set up a fossil server from which
> clients clone their local repositories via http.
>

If you have multiple clients on different machines, using HTTP is
definitely the preferred solution.  But sharing the repository over a
network filesystem is possible.
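
For example, a minimal sketch of the HTTP setup (the repository path, host
name, and port below are placeholders):

    fossil server /path/to/repo.fossil --port 8080   # on the machine that holds the repository
    fossil clone http://server:8080/ project.fossil  # on each client
    fossil open project.fossil

With this arrangement the repository file is only ever touched by the
server process, and the clients sync over HTTP.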


> Thanks.
>
> Heinrich
> --
> Sent from my Android mobile phone with K-9 Mail.
>
>
>
> Richard Hipp <d...@sqlite.org> schrieb:
>
>>
>>
>> On Thu, Dec 8, 2011 at 12:46 AM, Matt Welland <estifo...@gmail.com>wrote:
>>
>>>
>>> On Wed, Dec 7, 2011 at 10:38 PM, Nolan Darilek 
>>> <no...@thewordnerd.info>wrote:
>>>
>>>>  Maybe Fossil could recommend a WAL rebuild command in these instances?
>>>> Then at least the user has some direction to go in. At the very least it
>>>> could print the command so it can be relayed to the server administrator.
>>>>
>>>
>>> I wasn't clear in my message: these repositories are being accessed
>>> directly via the filesystem, not over http, and via NFS from multiple
>>> hosts. I don't think WAL is a safe option.
>>>
>>
>> I missed that part.
>>
>> The error then probably results from a broken POSIX advisory lock
>> implementation on your NFS server (a very common scenario).  The
>> work-around is to use dot-file locking instead:
>>
>>     export FOSSIL_VFS=unix-dotfile
>>     fossil update
>>
>> The danger here is that all users must be using the same VFS, or else
>> they won't agree on the locking protocol and they could collide with each
>> other.
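>>
>> One way to keep everyone on the same VFS is to export the variable from a
>> shared shell profile, or to wrap fossil in a small script that everyone
>> runs.  This is only a sketch; the wrapper name is made up:
>>
>>     #!/bin/sh
>>     # hypothetical wrapper (e.g. /usr/local/bin/fossil-nfs) so that every
>>     # user ends up with the same locking VFS on the shared repository
>>     export FOSSIL_VFS=unix-dotfile
>>     exec fossil "$@"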
>>
>> If you are absolutely certain that nobody else will be using the remote
>> repository at the same time, you can also do:
>>
>>     export FOSSIL_VFS=unix-none
>>
>> to disable locking entirely.
>>
>>
>>
>>>
>>>
>>>> On 12/07/2011 07:48 PM, Richard Hipp wrote:
>>>>
>>>>
>>>>
>>>> On Wed, Dec 7, 2011 at 7:15 PM, Matt Welland <estifo...@gmail.com>wrote:
>>>>
>>>>> This is on NFS and with a large check-in, so it is a worst-case
>>>>> scenario, but I'm still seeing this error when people simultaneously
>>>>> perform certain heavyweight actions.
>>>>>
>>>>>
>>>>>  Are there any settings that would help here? I've dug through the
>>>>> docs and not seen anything yet. I'll dig through the code tonight, but
>>>>> pointers from the experts would be appreciated.
>>>>>
>>>>
>>>> Setting WAL mode on the database will help a lot.  However, WAL might
>>>> not work on NFS.  Are all server instances running on the same machine?  If
>>>> so, then you might be able to get WAL to work.  I suppose you could try.
>>>>
>>>> Do this:
>>>>
>>>>    fossil rebuild -wal -pagesize 8192 REPO
>>>>
>>>> Then see if that helps.
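>>>>
>>>> As a quick check afterwards (just a sketch, assuming the sqlite3 shell is
>>>> installed; REPO is the same repository file as above):
>>>>
>>>>    sqlite3 REPO "PRAGMA journal_mode;"   # prints "wal" once the repository is in WAL mode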
>>>>
>>>> FWIW, the Fossil and SQLite repositories take a pretty heavy load
>>>> without problems, and they both run on the same 1/24th-slice VM.  They
>>>> both use WAL, but they also both use a local disk, not NFS.
>>>>
>>>>
>>>
>>>>
>>>>>
>>>>>  FYI, I think these are probably unnecessary failures; however, I
>>>>> grant that it may be tough to differentiate them from real issues, such
>>>>> as the db not being readable. I think fossil could possibly do a couple
>>>>> of things here:
>>>>>
>>>>>
>>>>>  1. Interleave sync actions
>>>>>
>>>>> 2. On a sync failure, tell the user that the db is probably busy and
>>>>> to try again in a few minutes (for instance with a retry loop like the
>>>>> sketch below).
>>>>>
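>>>>> A rough sketch of the kind of retry loop I mean (the retry count and
>>>>> delay are made-up numbers):
>>>>>
>>>>>     #!/bin/sh
>>>>>     # keep retrying "fossil update" while the shared repository is busy
>>>>>     for i in 1 2 3 4 5; do
>>>>>         fossil update && break
>>>>>         echo "repository busy? retrying in 60s (attempt $i of 5)" >&2
>>>>>         sleep 60
>>>>>     done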
>>>>>
>>>>>  [830] > fossil update
>>>>>
>>>>> Autosync:  file:///blah/blah.fossil
>>>>>
>>>>>                 Bytes      Cards  Artifacts     Deltas
>>>>>
>>>>> Sent:            6945        146          0          0
>>>>>
>>>>> Error: Database error: database is locked
>>>>>
>>>>> DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)
>>>>>
>>>>> Received:         118          1          0          0
>>>>>
>>>>> Total network traffic: 3842 bytes sent, 871 bytes received
>>>>>
>>>>> fossil: Autosync failed
>>>>>
>>>>> --------------
>>>>>
>>>>> updated-to:   9012cff7d15010018d2fdd73375d198b27116844 2011-10-18
>>>>> 22:33:49 UTC
>>>>>
>>>>> tags:         trunk
>>>>>
>>>>> comment:      initial empty check-in (user: blah)
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> D. Richard Hipp
>>>> d...@sqlite.org
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>> --
>> D. Richard Hipp
>> d...@sqlite.org
>>
>
>
>


-- 
D. Richard Hipp
d...@sqlite.org