Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Lluís Batlle i Rossell
On Thu, Dec 08, 2011 at 02:18:36PM +0100, Heinrich Huss wrote:
> Hello Richard,
> now I'm confused. Is it a valid option to use file access by several clients?
> I always assumed I had to set up a fossil server from which clients clone
> their local repositories via http.

It works without many special options, even on some cygwin systems: we (two or
three people) use the same username and share the same fossil repository on the
local filesystem for our independent checkouts.

We have never had trouble with that.
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Richard Hipp
On Thu, Dec 8, 2011 at 8:18 AM, Heinrich Huss <
heinrich.h...@psh-consulting.de> wrote:

> Hello Richard,
> now I'm confused. Is it a valid option to use file access by several
> clients? I always assumed I had to set up a fossil server from which
> clients clone their local repositories via http.
>

If you have multiple clients on different machines, using HTTP is
definitely the preferred solution.  But sharing the repository over a
network filesystem is possible.
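
A minimal sketch of that shared-filesystem arrangement (paths are
hypothetical): each user opens an independent checkout against the one
repository file, or keeps a private clone that autosyncs through a file://
URL, as in the transcript quoted below.

    # Each user opens a private checkout against the shared repository file:
    mkdir ~/work/project && cd ~/work/project
    fossil open /net/share/project.fossil

    # Or keep a private clone that syncs through the filesystem:
    fossil clone file:///net/share/project.fossil ~/project.fossil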


> Thanks.
>
> Heinrich
> --
> This message was sent from my Android mobile phone with K-9 Mail.
>
>
>
> Richard Hipp wrote:
>
>>
>>
>> On Thu, Dec 8, 2011 at 12:46 AM, Matt Welland wrote:
>>
>>>
>>> On Wed, Dec 7, 2011 at 10:38 PM, Nolan Darilek 
>>> wrote:
>>>
  Maybe Fossil could recommend a WAL rebuild command in these instances?
 Then at least the user has some direction in which to go. At the very least
 it could output that for relaying to the server administrator.

>>>
>>> I wasn't clear in my message: these are being served directly by file
>>> access (not http) and via NFS from multiple hosts. I don't think WAL is a
>>> safe option.
>>>
>>
>> I missed that part.
>>
>> Probably the error then results from having a broken POSIX advisory lock
>> implementation on your NFS server (a very common scenario).  The
>> work-around is to use dot-file locking instead.
>>
>> export FOSSIL_VFS=unix-dotfile
>> fossil update
>>
>> The danger here is that all users must be using the same VFS, or else
>> they won't agree on the locking protocol and they could collide with each
>> other.
>>
>> If you are absolutely certain that nobody else will be using the remote
>> repository at the same time, you can also do:
>>
>> export FOSSIL_VFS=unix-none
>>
>> to disable locking entirely.
>>
>>
>>
>>>
>>>
 On 12/07/2011 07:48 PM, Richard Hipp wrote:



 On Wed, Dec 7, 2011 at 7:15 PM, Matt Welland wrote:

> This is on NFS and with a large check-in, so it is a worst-case scenario,
> but I'm still seeing this error when people simultaneously do certain
> heavyweight actions.
>
>
>  Are there any settings that would help here? I've dug through the
> docs and not seen anything yet. I'll dig through the code tonight, but
> pointers from the experts would be appreciated.
>

 Setting WAL mode on the database will help a lot.  However, WAL might
 not work on NFS.  Are all server instances running on the same machine?  If
 so, then you might be able to get WAL to work.  I suppose you could try.

 Do this:

fossil rebuild -wal -pagesize 8192 REPO

 Then see if that helps.

 FWIW, the Fossil and SQLite repositories take a pretty heavy load
 without problems and they are both running on the same 1/24th slice VM.
 They do both use WAL.  But they also both use a local disk, not NFS.


>>>

>
>  FYI, I think these are probably unnecessary failures, however I grant
> that it may be tough to differentiate them from real issues such as the db
> not being readable. I think fossil could possibly do a couple of things here:
>
>
>  1. Interleave sync actions
>
> 2. On failure in sync tell the user that the db is probably busy and
> try again in a few minutes.
>
>
>  [830] > fossil update
>
> Autosync:  file:///blah/blah.fossil
>
> Bytes  Cards  Artifacts Deltas
>
> Sent:6945146  0  0
>
> Error: Database error: database is locked
>
> DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)
>
> Received: 118  1  0  0
>
> Total network traffic: 3842 bytes sent, 871 bytes received
>
> fossil: Autosync failed
>
> --
>
> updated-to:   9012cff7d15010018d2fdd73375d198b27116844 2011-10-18
> 22:33:49 UTC
>
> tags: trunk
>
> comment:  initial empty check-in (user: blah)
>


 --
 D. Richard Hipp
 d...@sqlite.org







>>>
>>

Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Heinrich Huss
Hello Richard,
now I'm confused. Is it a valid option to use file access by several clients? I
always assumed I had to set up a fossil server from which clients clone their
local repositories via http.
Thanks.
Heinrich
--
This message was sent from my Android mobile phone with K-9 Mail.




___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Richard Hipp
On Thu, Dec 8, 2011 at 12:46 AM, Matt Welland  wrote:

>
> On Wed, Dec 7, 2011 at 10:38 PM, Nolan Darilek wrote:
>
>>  Maybe Fossil could recommend a WAL rebuild command in these instances?
>> Then at least the user has some direction in which to go. At the very least
>> it could output that for relaying to the server administrator.
>>
>
> I wasn't clear in my message: these are being served directly by file
> access (not http) and via NFS from multiple hosts. I don't think WAL is a
> safe option.
>

I missed that part.

Probably the error then results from having a broken POSIX advisory lock
implementation on your NFS server (a very common scenario).  The
work-around is to use dot-file locking instead.

export FOSSIL_VFS=unix-dotfile
fossil update

The danger here is that all users must be using the same VFS, or else they
won't agree on the locking protocol and they could collide with each other.

If you are absolutely certain that nobody else will be using the remote
repository at the same time, you can also do:

export FOSSIL_VFS=unix-none

to disable locking entirely.
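
For a team, one way to hold everybody to the same locking protocol is a small
shared wrapper script (the name and path are hypothetical; a sketch only):

    #!/bin/sh
    # fossil-nfs: force dot-file locking so that every user of the shared
    # repository agrees on the same locking protocol.
    FOSSIL_VFS=unix-dotfile
    export FOSSIL_VFS
    exec fossil "$@"

If anyone bypasses the wrapper and runs plain fossil against the same
repository, the two locking schemes will not see each other and the
collisions described above come back.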



>
>
>> On 12/07/2011 07:48 PM, Richard Hipp wrote:
>>
>>
>>
>> On Wed, Dec 7, 2011 at 7:15 PM, Matt Welland  wrote:
>>
>>> This is on NFS and with a large check-in, so it is a worst-case scenario,
>>> but I'm still seeing this error when people simultaneously do certain
>>> heavyweight actions.
>>>
>>>
>>>  Are there any settings that would help here? I've dug through the docs
>>> and not seen anything yet. I'll dig through the code tonight, but pointers
>>> from the experts would be appreciated.
>>>
>>
>> Setting WAL mode on the database will help a lot.  However, WAL might not
>> work on NFS.  Are all server instances running on the same machine?  If so,
>> then you might be able to get WAL to work.  I suppose you could try.
>>
>> Do this:
>>
>>    fossil rebuild -wal -pagesize 8192 REPO
>>
>> Then see if that helps.
>>
>> FWIW, the Fossil and SQLite repositories take a pretty heavy load without
>> problems and they are both running on the same 1/24th slice VM.  They do
>> both use WAL.  But they also both use a local disk, not NFS.
>>
>>
>
>>
>>>
>>>  FYI, I think these are probably unnecessary failures, however I grant
>>> that it may be tough to differentiate them from real issues such as the db
>>> not being readable. I think fossil could possibly do a couple of things here:
>>>
>>>
>>>  1. Interleave sync actions
>>>
>>> 2. On failure in sync tell the user that the db is probably busy and try
>>> again in a few minutes.
>>>
>>>
>>>  [830] > fossil update
>>>
>>> Autosync:  file:///blah/blah.fossil
>>>
>>> Bytes  Cards  Artifacts Deltas
>>>
>>> Sent:6945146  0  0
>>>
>>> Error: Database error: database is locked
>>>
>>> DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)
>>>
>>> Received: 118  1  0  0
>>>
>>> Total network traffic: 3842 bytes sent, 871 bytes received
>>>
>>> fossil: Autosync failed
>>>
>>> --
>>>
>>> updated-to:   9012cff7d15010018d2fdd73375d198b27116844 2011-10-18
>>> 22:33:49 UTC
>>>
>>> tags: trunk
>>>
>>> comment:  initial empty check-in (user: blah)
>>>
>>>
>>>
>>
>>
>> --
>> D. Richard Hipp
>> d...@sqlite.org
>>
>>
>>
>>
>>
>>
>>
>
>
>


-- 
D. Richard Hipp
d...@sqlite.org
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Heinrich Huss
Matt,

why don't you set up a fossil server instance and let the clients communicate
via http? In that case you don't need NFS.
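
A minimal sketch (host name, port, and paths are hypothetical):

    # On the machine that owns the repository:
    fossil server /home/fossil/project.fossil --port 8080

    # On each client (no NFS involved):
    fossil clone http://server:8080/ project.fossil
    fossil open project.fossil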

Regards
Heinrich
--
This message was sent from my Android mobile phone with K-9 Mail.





___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Stephan Beal
2011/12/8 Lluís Batlle i Rossell 

> Why should that be so? NFS implements file locks. Is the situation very
> different from not-on-NFS?
>

NFS "technically" supports locking, but it has a long history of broken or
buggy locking (google "nfs file locking"). The sqlite3 mail archives
are full of comments about it as well.

http://www.sqlite.org/lockingv3.html

says: "One should note that POSIX advisory locking is known to be buggy or
even unimplemented on many NFS implementations (including recent versions
of Mac OS X) and that there are reports of locking problems for network
filesystems under Windows. Your best defense is to not use SQLite for files
on a network filesystem."

MS Access has similar problems on CIFS filesystems - this is not
sqlite/fossil-specific.
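
A crude smoke test of an NFS lock manager (paths are hypothetical; flock(1)
is from util-linux). Note that SQLite's unix VFS takes fcntl() byte-range
locks rather than flock() locks, so a pass here only rules out grossly
broken setups:

    # Host A: take and hold a lock on the shared mount for 30 seconds.
    flock /mnt/nfs/locktest -c 'sleep 30' &

    # Host B, meanwhile: a working lock manager must refuse this request.
    flock -n /mnt/nfs/locktest -c 'echo BROKEN: lock not honored' \
        || echo 'lock honored'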

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Lluís Batlle i Rossell
On Thu, Dec 08, 2011 at 09:53:20AM +0100, Stephan Beal wrote:
> On Thu, Dec 8, 2011 at 1:15 AM, Matt Welland  wrote:
> Now you've got conflicting requirements: "safe option" AND "concurrent
> usage over NFS" (which is never a good idea, regardless of the
> application). As soon as you're doing concurrent use over NFS, all bets are
> off.

Why should that be so? NFS implements file locks. Is the situation very
different from not-on-NFS?

I understand that the suggestion from Richard was about getting lower
lock contention, with WAL.
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-08 Thread Stephan Beal
On Thu, Dec 8, 2011 at 1:15 AM, Matt Welland  wrote:

> Error: Database error: database is locked
>
> DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)
>

If I'm not mistaken, this was fixed in late October or early November
(sometime shortly before 1.20 was released)? I reported it to the list and
Richard removed an "extraneous delete" which caused it.

On Thu, Dec 8, 2011 at 6:46 AM, Matt Welland  wrote:

> I wasn't clear in my message: these are being served directly by file
> access (not http) and via NFS from multiple hosts. I don't think WAL is a
> safe option.
>

Now you've got conflicting requirements: "safe option" AND "concurrent
usage over NFS" (which is never a good idea, regardless of the
application). As soon as you're doing concurrent use over NFS, all bets are
off.

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-07 Thread Matt Welland
On Wed, Dec 7, 2011 at 10:38 PM, Nolan Darilek wrote:

>  Maybe Fossil could recommend a WAL rebuild command in these instances?
> Then at least the user has some direction in which to go. At the very least
> it could output that for relaying to the server administrator.
>

I wasn't clear in my message: these are being served directly by file
access (not http) and via NFS from multiple hosts. I don't think WAL is a
safe option.


> On 12/07/2011 07:48 PM, Richard Hipp wrote:
>
>
>
> On Wed, Dec 7, 2011 at 7:15 PM, Matt Welland  wrote:
>
>> This is on NFS and with a large check-in, so it is a worst-case scenario,
>> but I'm still seeing this error when people simultaneously do certain
>> heavyweight actions.
>>
>>
>>  Are there any settings that would help here? I've dug through the docs
>> and not seen anything yet. I'll dig through the code tonight, but pointers
>> from the experts would be appreciated.
>>
>
> Setting WAL mode on the database will help a lot.  However, WAL might not
> work on NFS.  Are all server instances running on the same machine?  If so,
> then you might be able to get WAL to work.  I suppose you could try.
>
> Do this:
>
>    fossil rebuild -wal -pagesize 8192 REPO
>
> Then see if that helps.
>
> FWIW, the Fossil and SQLite repositories take a pretty heavy load without
> problems and they are both running on the same 1/24th slice VM.  They do
> both use WAL.  But they also both use a local disk, not NFS.
>
>

>
>>
>>  FYI, I think these are probably unnecessary failures, however I grant
>> that it may be tough to differentiate them from real issues such as the db
>> not being readable. I think fossil could possibly do a couple of things here:
>>
>>
>>  1. Interleave sync actions
>>
>> 2. On failure in sync tell the user that the db is probably busy and try
>> again in a few minutes.
>>
>>
>>  [830] > fossil update
>>
>> Autosync:  file:///blah/blah.fossil
>>
>> Bytes  Cards  Artifacts Deltas
>>
>> Sent:6945146  0  0
>>
>> Error: Database error: database is locked
>>
>> DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)
>>
>> Received: 118  1  0  0
>>
>> Total network traffic: 3842 bytes sent, 871 bytes received
>>
>> fossil: Autosync failed
>>
>> --
>>
>> updated-to:   9012cff7d15010018d2fdd73375d198b27116844 2011-10-18
>> 22:33:49 UTC
>>
>> tags: trunk
>>
>> comment:  initial empty check-in (user: blah)
>>
>>
>>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
>
>
>
>
>
>
>
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-07 Thread Nolan Darilek
Maybe Fossil could recommend a WAL rebuild command in these instances? 
Then at least the user has some direction in which to go. At the very 
least it could output that for relaying to the server administrator.





___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-07 Thread Richard Hipp
On Wed, Dec 7, 2011 at 7:15 PM, Matt Welland  wrote:

> This is on NFS and with a large check-in, so it is a worst-case scenario,
> but I'm still seeing this error when people simultaneously do certain
> heavyweight actions.
>
>
> Are there any settings that would help here? I've dug through the docs and
> not seen anything yet. I'll dig through the code tonight, but pointers from
> the experts would be appreciated.
>

Setting WAL mode on the database will help a lot.  However, WAL might not
work on NFS.  Are all server instances running on the same machine?  If so,
then you might be able to get WAL to work.  I suppose you could try.

Do this:

    fossil rebuild -wal -pagesize 8192 REPO

Then see if that helps.
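
To check that the rebuild took effect, you could inspect the repository
directly (a sketch; assumes the sqlite3 command-line shell is installed):

    sqlite3 REPO 'PRAGMA journal_mode; PRAGMA page_size;'
    # expected output:
    #   wal
    #   8192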

FWIW, the Fossil and SQLite repositories take a pretty heavy load without
problems and they are both running on the same 1/24th slice VM.  They do
both use WAL.  But they also both use a local disk, not NFS.



>
> FYI, I think these are probably unnecessary failures, however I grant that
> it may be tough to differentiate them from real issues such as the db not
> being readable. I think fossil could possibly do a couple of things here:
>
>
> 1. Interleave sync actions
>
> 2. On failure in sync tell the user that the db is probably busy and try
> again in a few minutes.
>
>
> [830] > fossil update
>
> Autosync:  file:///blah/blah.fossil
>
> Bytes  Cards  Artifacts Deltas
>
> Sent:6945146  0  0
>
> Error: Database error: database is locked
>
> DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)
>
> Received: 118  1  0  0
>
> Total network traffic: 3842 bytes sent, 871 bytes received
>
> fossil: Autosync failed
>
> --
>
> updated-to:   9012cff7d15010018d2fdd73375d198b27116844 2011-10-18 22:33:49
> UTC
>
> tags: trunk
>
> comment:  initial empty check-in (user: blah)
>
>
>


-- 
D. Richard Hipp
d...@sqlite.org
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


[fossil-users] It would be nice if sync collisions failed more gracefully

2011-12-07 Thread Matt Welland
This is on NFS and with a large check-in, so it is a worst-case scenario, but
I'm still seeing this error when people simultaneously do certain heavyweight
actions.


Are there any settings that would help here? I've dug through the docs and
not seen anything yet. I'll dig through the code tonight, but pointers from
the experts would be appreciated.


FYI, I think these are probably unnecessary failures, however I grant that
it may be tough to differentiate them from real issues such as the db not
being readable. I think fossil could possibly do a couple of things here:


1. Interleave sync actions

2. On failure in sync, tell the user that the db is probably busy and to try
again in a few minutes (see the sketch below).
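
For point 2, even a simple retry wrapper would soften the failure (a sketch
only; the retry count and interval are arbitrary):

    # Retry the update a few times while the repository db is busy.
    for i in 1 2 3; do
        fossil update && break
        echo "update failed (db probably busy); retrying in 60s" >&2
        sleep 60
    done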


[830] > fossil update

Autosync:  file:///blah/blah.fossil

Bytes  Cards  Artifacts Deltas

Sent:6945146  0  0

Error: Database error: database is locked

DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)

Received: 118  1  0  0

Total network traffic: 3842 bytes sent, 871 bytes received

fossil: Autosync failed

--

updated-to:   9012cff7d15010018d2fdd73375d198b27116844 2011-10-18 22:33:49
UTC

tags: trunk

comment:  initial empty check-in (user: blah)
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users