Not sure at all. I don't know how to check precisely if a live design doc
is pointing to a particular file. I was basing my statement off the fact
that I have my views declared and they were available pre-indexed before
compaction (but they were not physically opened as file handles by couch,
but t
On Jan 31, 2014, at 6:36 PM, Luca Morandini wrote:
> On 31/01/14 19:42, Robert Samuel Newson wrote:
>>
>> As Simon says, this is the normal and expected behavior of CouchDB after
>> database compaction (or replication). CouchDB is not a revision control
>> system,
it only keeps the latest versions (including conflicts) of every document
(including deleted ones).
On 31/01/14 19:42, Robert Samuel Newson wrote:
As Simon says, this is the normal and expected behavior of CouchDB after
database compaction (or replication). CouchDB is not a revision control system,
it only keeps the latest versions (including conflicts) of every document
(including deleted ones).
On Jan 31, 2014, at 12:07 PM, Boaz Citrin wrote:
> But if replication only copies the leaf then it makes sense that it is
> fatser, at least on the same machine. Instead of balancing a tree it just
> copies a single revision.
Um, no. The copied revision has to be inserted into the revision tree on the target.
Boaz, I encourage you to benchmark both techniques. I think you will
discover that compaction is faster. However if not, then you will at least
have peace of mind that your approach is the best.
On Sat, Feb 1, 2014 at 3:07 AM, Boaz Citrin wrote:
> But if replication only copies the leaf then it makes sense that it is
> faster, at least on the same machine.
On Sat, Feb 1, 2014 at 2:27 AM, Boaz Citrin wrote:
> @Jason - the concern is that compaction will not keep up with a heavy
> write rate and will affect performance.
>
If compaction will not catch up with your writes, then you do not have
sufficient hardware resources. You are operating in a very
But if replication only copies the leaf then it makes sense that it is
faster, at least on the same machine. Instead of balancing a tree it just
copies a single revision.
Also I understand from Jason that compaction tries to catch up, unlike a
non-continuous replication.
So it seems that if I don't n
On Jan 31, 2014, at 11:27 AM, Boaz Citrin wrote:
> @Jason - the concern is that compaction will not keep up with a heavy
> write rate and will affect performance.
The same will be true of replication.
> I wonder if I can omit the compaction if replication gives the same result.
> Will replicati
fyi
-- Forwarded message --
From: Vivek Mishra
Date: Sat, Feb 1, 2014 at 1:18 AM
Subject: {kundera-discuss} Kundera 2.10 released
To: "kundera-disc...@googlegroups.com"
Hi All,
We are happy to announce the Kundera 2.10 release.
Kundera is a JPA 2.0 compliant, object-datastore mapping library.
@Jason - the concern is that compaction will not keep up with a heavy
write rate and will affect performance.
So I replicate/copy to a side database, compact it, then replicate the
new changes from the original and switchover.
I wonder if I can omit the compaction if replication gives the same result.
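The side-database procedure described above can be sketched as an ordered plan of standard CouchDB HTTP calls (a minimal sketch: the database names and the plan-building helper are hypothetical, and nothing is actually sent here):

```python
import json

def switchover_plan(source: str, target: str) -> list:
    """Build the ordered HTTP calls for a replicate/compact/catch-up switchover.
    The endpoints (_replicate, /db/_compact) are standard CouchDB ones;
    the database names are illustrative."""
    repl_body = json.dumps({"source": source, "target": target, "create_target": True})
    return [
        ("POST", "/_replicate", repl_body),       # initial bulk copy to the side db
        ("POST", "/%s/_compact" % target, "{}"),  # compact the side copy
        ("POST", "/_replicate", repl_body),       # catch up on writes made meanwhile
        # final step: point the application (or a proxy) at `target`
    ]

plan = switchover_plan("mydb", "mydb_side")
```

Whether this actually beats in-place compaction is exactly the benchmarking question raised later in the thread.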
That was one iteration I missed. It worked. Thanks.
The extra backslash was my edit error in the email. Stripping out internal
function names.
"\{couch_httpd_external, handle_external_req, <<'my_function'>>\}"
"\"{couch_httpd_external, handle_external_req, <<'my_function'>>\}\""
"{couch_httpd
I think the additional backslash before the > is causing the problem, try:
"{couch_httpd_external, handle_external_req, <<\"my_function\">>}"
This works for me:
echo '"{couch_httpd_external, handle_external_req, <<\"my_function\">>}"' |
jsonlint
Whilst:
echo '"{couch_httpd_external, handle_ext
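The same check jsonlint performs can be reproduced with any JSON parser; a small Python sketch using the config value from the thread:

```python
import json

# Correct form: only the inner quotes around my_function are escaped.
good = '"{couch_httpd_external, handle_external_req, <<\\"my_function\\">>}"'
value = json.loads(good)
print(value)  # {couch_httpd_external, handle_external_req, <<"my_function">>}

# With a stray backslash before the closing brace, \} is not a valid
# JSON escape sequence, so parsing fails.
bad = '"{couch_httpd_external, handle_external_req, <<\\"my_function\\">>\\}"'
try:
    json.loads(bad)
except json.JSONDecodeError:
    print("invalid JSON")
```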
Anyway I think the broader point is, compaction is for compacting databases
(removing old document revisions), and replication is for making a copy of
a database (or subset). If compaction is causing downtime then that is a
different bug to talk about, but it should be totally transparent.
Jens (i
Would replication not solve that problem all by itself?
Robert Samuel Newson wrote:
>
>Good question. We are certainly obliged to ensure couchdb pre-merge
>data upgrades to couchdb post-merge but the bigcouch to couchdb path is
>not something we’ve yet committed to. That said, unless it turns o
How does one perform an API-based update to the "httpd_db_handlers" section of
the config?
Given something like this:
[httpd_db_handlers]
_my_function = {couch_httpd_external, handle_external_req, <<"my_function">>}
I try to update it with:
couchSend( "PUT /_config/httpd_db_handlers/_my_fu
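One way to sidestep hand-escaping altogether is to let a JSON encoder build the PUT body. A sketch in Python (the couchSend wrapper from the message is hypothetical, so only the body construction is shown):

```python
import json

# The raw Erlang term, written once with no manual escaping.
erlang_term = '{couch_httpd_external, handle_external_req, <<"my_function">>}'

# _config values are JSON strings, so the PUT body is simply the
# term run through a JSON encoder, which adds the \" escapes itself.
body = json.dumps(erlang_term)
print(body)
# "{couch_httpd_external, handle_external_req, <<\"my_function\">>}"
```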
On Jan 31, 2014, at 9:46 AM, Mark Hahn wrote:
> It wouldn't matter if it did. Within the same server linux short-circuits
> http to make it the same as unix sockets, i.e. very little overhead.
I think you mean it short-circuits TCP :)
There's extra work involved in HTTP generation & parsing no
> replication within the same CouchDB server does not use HTTP
It wouldn't matter if it did. Within the same server linux short-circuits
http to make it the same as unix sockets, i.e. very little overhead.
On Fri, Jan 31, 2014 at 9:43 AM, Jason Smith wrote:
> On Sat, Feb 1, 2014 at 12:30 AM,
On Sat, Feb 1, 2014 at 12:30 AM, Jens Alfke wrote:
>
> On Jan 31, 2014, at 8:39 AM, Boaz Citrin wrote:
>
> > Instead we could replicate the database into a new instance and
> > switchover, as replication is much faster.
>
> That seems sort of strange to me — I would expect replication to an empty db
> to be slower, as it's doing basically the same work as compaction.
On Jan 31, 2014, at 8:39 AM, Boaz Citrin wrote:
> Instead we could replicate the database into a new instance and
> switchover, as replication is much faster.
That seems sort of strange to me — I would expect replication to an empty db to
be slower, as it's doing basically the same work as compaction.
Good question. We are certainly obliged to ensure couchdb pre-merge data
upgrades to couchdb post-merge but the bigcouch to couchdb path is not
something we’ve yet committed to. That said, unless it turns out to be
intractable (which seems unlikely) we’ll certainly make an effort to support
t
Ownership is interesting. Would the bigcouch user have the right to delete the
file but not open it for reading?
There’s definitely an issue in bigcouch (fixed long since in couchdb) where any
failure to open a view file makes us delete it.
OS/fs all check out fine. You see the filename that s
Hello again,
Just wondering if bigcouch data files will be compatible with future
versions of couchdb now that the projects are merged. Will there be
separate ports for data and shards? Will the directory structure of
bigcouch be used or is it completely new? What would be the recommended
option t
Thanks a lot. The database was moved from older machines so some other file
system metadata might be scrambled. But I don't see what can cause a
problem like this.
Yes, the debug output "deleting unused view index files:" is seen, and it
deletes every view in every database; little doubt about it. I
Hello,
We periodically need to compact our database.
The problem is that this takes many hours and we cannot stay offline
for so long.
Instead we could replicate the database into a new instance and
switchover, as replication is much faster.
The question however is if both produce the same result.
and details of OS, filesystem, anything you think might be relevant.
B.
On 31 Jan 2014, at 16:20, Robert Samuel Newson wrote:
> First thing to note is that bigcouch development is over, but we can at least
> confirm this;
>
> This function fetches all the design docs of the database, grabs al
First thing to note is that bigcouch development is over, but we can at least
confirm this;
This function fetches all the design docs of the database, grabs all the
signatures from each (you’ll have noticed view filenames look uuid/randomy,
that’s a 'sig'), and then sweeps the dir where all view files live.
Hi guys,
bigcouch 0.4.2 has the following code that handles view cleanup:
cleanup_index_files(Db) ->
% load all ddocs
{ok, DesignDocs} = couch_db:get_design_docs(Db),
% make unique list of group sigs
Sigs = lists:map(fun(#doc{id = GroupId}) ->
{ok, Info} = get_group_
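The sweep that function performs can be sketched in Python (the filename layout and the sig derivation here are illustrative; the real code computes each sig from the view group definition):

```python
def cleanup_index_files(live_sigs, index_files):
    """Return the index files to delete: any file whose signature is not
    referenced by a current design document."""
    live = set(live_sigs)

    def sig_of(path):
        # assume view files are named <sig>.view inside the index dir
        name = path.rsplit("/", 1)[-1]
        return name[:-len(".view")] if name.endswith(".view") else name

    return [f for f in index_files if sig_of(f) not in live]

doomed = cleanup_index_files(
    ["3e8c9f"],  # sigs still referenced by design docs
    ["views/3e8c9f.view", "views/a1b2c3.view"],
)
print(doomed)  # ['views/a1b2c3.view']
```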
https://github.com/cloudant/bigcouch/tree/f0f5a107c0b895dd72187c10baedec24b85329a9
On 31 Jan 2014, at 10:13, Vladimir Ralev wrote:
> I tried to rule out a file system problem and I did these:
>
> chmod -R 777 /opt/bigcouch
> chown -R root /opt/bigcouch
>
> Then ran the bigcouch as root.
>
>
I tried to rule out a file system problem and I did these:
chmod -R 777 /opt/bigcouch
chown -R root /opt/bigcouch
Then ran the bigcouch as root.
I still have a leak, but it's for other files:
beam.smp  28679  root  *086u  REG  254,2  8282  32250112  /opt/bigcouch/var/lib/
As Simon says, this is the normal and expected behavior of CouchDB after
database compaction (or replication). CouchDB is not a revision control system,
it only keeps the latest versions (including conflicts) of every document
(including deleted ones).
B.
On 31 Jan 2014, at 08:16, Luca Morandini wrote:
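That "latest versions only" behavior can be shown with a toy model (plain linear histories per document; real CouchDB rev trees can branch, and compaction also keeps conflicting leaves):

```python
def compact(docs):
    """Keep only the newest revision of each linear history;
    older revision bodies are dropped, which is what compaction does."""
    return {doc_id: revs[-1:] for doc_id, revs in docs.items()}

db = {"doc1": ["1-abc", "2-def", "3-ghi"], "doc2": ["1-xyz"]}
print(compact(db))  # {'doc1': ['3-ghi'], 'doc2': ['1-xyz']}
```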
Did you run compaction?
On Friday, 31 January 2014 at 08:16, Luca Morandini wrote:
> Folks,
>
> something puzzling happened on the 11th of November last year to one of our
> CouchDB databases: all the non-current revisions were deleted (the rev info
> shows
> them as missing). From that moment on, all revisions were kept (we discovered
> the missing revisions only yesterday).
Folks,
something puzzling happened on the 11th of November last year to one of our
CouchDB databases: all the non-current revisions were deleted (the rev info shows
them as missing). From that moment on, all revisions were kept (we discovered the
missing revisions only yesterday).
The logs s