If I update a particular doc multiple times rapidly, is each update
guaranteed to show up in a continuous changes feed? I am worried that the
change feed will be optimized to just show the latest value of a doc with
multiple updates. This would break my logic.
I think a lot of the newer browsers automatically translate the URL
into
http://examples.cloudant.com/animaldb/_design/views101/_view/latin_name?key=%22Meles%20meles%22
when you enter in
http://examples.cloudant.com/animaldb/_design/views101/_view/latin_name?key="Meles
meles"
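The browser's translation is just percent-encoding of the quoted key before the request is sent ('"' becomes %22, space becomes %20). A quick sketch of the same transformation:

```python
from urllib.parse import quote

# Percent-encode the JSON-quoted key the way the browser does.
key = '"Meles meles"'
encoded = quote(key)
url = ("http://examples.cloudant.com/animaldb/_design/views101"
       "/_view/latin_name?key=" + encoded)
print(encoded)
```

Either form reaches the server as the same request; the encoded one is just explicit about it.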
In Debian I can run:
On 09/24/2012 04:33 AM, Robert Newson wrote:
Are you sure it's not your shell that's swallowing the double quotes?
?key="foo" should work.
B.
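One quick way to see the shell eating the quotes is Python's shlex, which follows POSIX quoting rules (the localhost URL below is just an example, not from the thread):

```python
import shlex

# Unquoted on the command line, the shell strips the double quotes before
# curl ever sees them, so the server receives key=foo (not a JSON string):
bare = shlex.split('curl http://localhost:5984/db/_design/d/_view/v?key="foo"')
print(bare[1])     # quotes swallowed by the shell

# Single-quoting the whole URL preserves the inner double quotes:
quoted = shlex.split("curl 'http://localhost:5984/db/_design/d/_view/v?key=\"foo\"'")
print(quoted[1])
```

So ?key="foo" does work, provided the quotes actually survive the shell.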
On 24 September 2012 10:45, Simon Metson wrote:
That sounds wrong :) Did your doc.username contain the single quotes? e.g. the
double quotes mean a json string and the single quotes should then be inside
that string.
On 22 September 2012 11:56, Rudi Benkovič wrote:
> Hi,
>
> I have a .couch file where compaction hasn't finished its job and
> we've lost the pre-compaction production DB file (an unfortunate
> sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
> in disk format version 6, with s
On 24 September 2012 06:00, svilen wrote:
> i've been reading
> http://wiki.apache.org/couchdb/Security_Features_Overview
> http://wiki.apache.org/couchdb/Replication
> for a while, but some things in between them are missing.
>
> so after some trial and error, here's what i understand/assume so
s/main/many/
On 24 September 2012 18:02, Robert Newson wrote:
> re: Tim, no, the database headers are written at the end of the file
> and a database will therefore contain main database headers over time
> (the compactor will not preserve old headers, though). This also
> implies (correctly) tha
re: Tim, no, the database headers are written at the end of the file
and a database will therefore contain main database headers over time
(the compactor will not preserve old headers, though). This also
implies (correctly) that truncating a couchdb .couch file will give
you the state of the databa
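Robert's append-only layout can be illustrated with a toy model (the real .couch on-disk format is considerably more involved; the HDR! marker and length field here are made-up stand-ins):

```python
import os
import tempfile

MAGIC = b"HDR!"  # hypothetical stand-in for a real couch header marker

def append_docs_and_header(f, payload):
    """Append data, then a fresh header after it -- never overwrite."""
    f.write(payload)
    f.write(MAGIC + len(payload).to_bytes(4, "big"))

def find_last_header(path):
    """The newest valid header is the one closest to EOF."""
    data = open(path, "rb").read()
    return data.rfind(MAGIC)

with tempfile.NamedTemporaryFile(delete=False) as f:
    append_docs_and_header(f, b"docs-batch-1")
    append_docs_and_header(f, b"docs-batch-2")
    path = f.name

pos = find_last_header(path)
# Truncating the file just before this header would leave the previous
# header as the newest valid one: an earlier but consistent database state.
os.remove(path)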
On Sep 24, 2012, at 9:12 AM, svilen wrote:
> hey, very interesting concept.
> multiplying on the other side of things -
> instead of a service/host-app managing many users, each user-app is
> managing many (similar) services/hosts.
I agree — I really like this approach and I’m glad to see some
On Mon, Sep 24, 2012 at 11:42 AM, Rudi Benkovič wrote:
> On Mon, Sep 24, 2012 at 6:00 PM, Paul Davis
> wrote:
>> The quickest way to fix this would probably be to go back and update
>> recover-couchdb to recognize the new disk format. Although that gets
>> harder now that snappy compression is i
The compactor is written to flush batches of docs every 5K bytes and
then write a header out every 5M bytes (assuming default batch sizes).
It's important to remember that this is judged against #doc_info{} records
which don't contain a full doc body. For documents with relatively few
revisions we're lo
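Paul's numbers can be sketched as a toy write loop (the 5K/5M thresholds are the defaults he mentions; the record sizes are made-up stand-ins, not the real #doc_info{} encoding):

```python
# Toy model of the compactor's write pattern: flush a batch once ~5KB of
# doc_info records accumulate, write a header for every ~5MB flushed.
BATCH_BYTES = 5 * 1024
HEADER_EVERY = 5 * 1024 * 1024

def compact(doc_info_sizes):
    flushes, headers = 0, 0
    batch = 0         # bytes pending in the current batch
    since_header = 0  # bytes flushed since the last header
    for size in doc_info_sizes:
        batch += size
        if batch >= BATCH_BYTES:
            flushes += 1
            since_header += batch
            batch = 0
            if since_header >= HEADER_EVERY:
                headers += 1
                since_header = 0
    return flushes, headers

# e.g. 200k records of ~100 bytes each: many flushes, very few headers,
# which is why the last valid header can sit a long way back up the file.
flushes, headers = compact([100] * 200_000)
```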
On Mon, Sep 24, 2012 at 6:00 PM, Paul Davis wrote:
> The quickest way to fix this would probably be to go back and update
> recover-couchdb to recognize the new disk format. Although that gets
> harder now that snappy compression is involved.
I've tried upgrading recover-couchdb to 1.2.0 couch co
Since this is the result of a compaction, shouldn't the header be at
the beginning of the file? (just testing my knowledge on how all this
works...)
-Tim
On Mon, Sep 24, 2012 at 12:09 PM, Robert Newson wrote:
> That does imply that the last valid header is a long way back up the
> file, though.
hey, very interesting concept.
multiplying on the other side of things -
instead of a service/host-app managing many users, each user-app is
managing many (similar) services/hosts.
hmmm very close to what i have been thinking, for
quite some time now but on a different projection..
svil
On Mon
That does imply that the last valid header is a long way back up the
file, though.
On 24 September 2012 17:00, Paul Davis wrote:
> I'd ignore the snappy error for now. There's no way this thing ran for
> an hour and then suddenly hit an error in that code. If this is like a
> bug I've seen before
I'd ignore the snappy error for now. There's no way this thing ran for
an hour and then suddenly hit an error in that code. If this is like a
bug I've seen before the reason that this runs out of RAM is due to
the code that's searching for a header not releasing binary ref counts
as it should be.
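The fix Paul hints at, in sketch form: scan backwards in fixed-size chunks and retain only a few carry-over bytes between iterations, instead of holding references into one large buffer (the Erlang analogue being sub-binaries that pin the whole parent binary in RAM). The marker is hypothetical, as above:

```python
CHUNK = 4096
MARKER = b"HDR!"  # hypothetical header marker

def find_header_backwards(path):
    """Backwards search with bounded memory: one chunk + a tiny tail."""
    with open(path, "rb") as f:
        f.seek(0, 2)
        pos = f.tell()
        tail = b""  # carry len(MARKER)-1 bytes across chunk boundaries
        while pos > 0:
            start = max(0, pos - CHUNK)
            f.seek(start)
            chunk = f.read(pos - start) + tail
            hit = chunk.rfind(MARKER)
            if hit != -1:
                return start + hit
            tail = chunk[: len(MARKER) - 1]  # only a tiny slice is retained
            pos = start
    return -1

# demo: a marker buried well before EOF is found one chunk at a time
import os
import tempfile
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"x" * 10_000 + MARKER + b"y" * 100)
found = find_header_backwards(tf.name)
os.remove(tf.name)
```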
Hi everyone,
I'm getting close to releasing a new app for iOS that leans heavily on CouchDB,
and thought it might be of interest to some folks here.
Joint is the name of the app. It lets you use any CouchDB server as a private
message board.
Other private sharing services maintain the data for
On 24 September 2012 15:02, Robert Newson wrote:
> {badmatch,{error,snappy_nif_not_loaded}} makes me wonder if this 1.2
> installation is even right.
>
> Can someone enlighten me? Is it possible to get this error spuriously?
No. I'd be keen to see a bit of the logfiles to understand what's not working
Hey,
If you organise a CouchDB related meet up list it on this wiki page
http://wiki.apache.org/couchdb/CouchDB_meetups so others can find it. Someone
asked about meet ups in Toronto - if you organise one there's someone who wants
to join you!
Cheers
Simon
{badmatch,{error,snappy_nif_not_loaded}} makes me wonder if this 1.2
installation is even right.
Can someone enlighten me? Is it possible to get this error spuriously?
Does running out of RAM cause erlang to unload NIFs?
B.
On 24 September 2012 13:32, Rudi Benkovic wrote:
> Hello Robert,
>
> Sa
Hello Robert,
Saturday, September 22, 2012, 2:49:16 PM, you wrote:
> Yup, CouchDB starts from the end of the file and looks backwards until
> it finds a valid footer, it can take some time if that's a long way
> from the end. It's not so much that CouchDB is skipping over "random
> binary data",
i've been reading
http://wiki.apache.org/couchdb/Security_Features_Overview
http://wiki.apache.org/couchdb/Replication
for a while, but some things in between them are missing.
so after some trial and error, here's what i understand/assume so far:
* couchdb users are per server-instance, in se
Are you sure it's not your shell that's swallowing the double quotes?
?key="foo" should work.
B.
On 24 September 2012 10:45, Simon Metson wrote:
> That sounds wrong :) Did your doc.username contain the single quotes? e.g
> the double quotes mean a json string and the single quotes should then
That sounds wrong :) Did your doc.username contain the single quotes? e.g the
double quotes mean a json string and the single quotes should then be inside
that string. How were you making the query? I can hit
http://examples.cloudant.com/animaldb/_design/views101/_view/latin_name?key="Meles
me