pick a particular shard range from Q, and then it would pick one of
the N copies (usually 3) to stream from.
Cheers,
> Markus
>
>
>
> On Wednesday, 12.06.2024 at 13:54 -0400, Nick Vatamaniuc wrote:
> > Another feature related to efficient view querying is partitioned
> > data
(Q) lower than the number of
> nodes?
> Wouldn't that result in unused nodes?
>
> Best wishes!
> Marcus
>
>
> On Wednesday, 12.06.2024 at 13:54 -0400, Nick Vatamaniuc wrote:
> > Another feature related to efficient view querying is partitioned
> > data
> Thanks again for the answer!
> Marcus
>
>
> On Wednesday, 12.06.2024 at 13:23 -0400, Nick Vatamaniuc wrote:
> > Hi Marcus,
> > The node handling the request only queries the nodes with shard
> > copies of that database. In a 100 node cluster the shards for that
Hi Marcus,
The node handling the request only queries the nodes with shard copies of
that database. In a 100 node cluster the shards for that particular
database might be present on only 6 nodes, depending on the Q and N
sharding factors, so it will query 6 out of 100 nodes. For instance, for N=3
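As a rough sketch of the arithmetic above: with Q shard ranges and N copies each, at most Q*N distinct nodes hold any data for a database, however large the cluster. The function names and round-robin placement below are illustrative, not CouchDB internals:

```python
def nodes_holding_copies(q, n, cluster_nodes):
    """Assign each of the q shard ranges to n distinct nodes
    (simple round-robin), mimicking how shard copies spread out."""
    return {
        shard: [cluster_nodes[(shard * n + i) % len(cluster_nodes)]
                for i in range(n)]
        for shard in range(q)
    }

def nodes_with_data(placement):
    # The coordinator only ever needs to contact nodes that hold a
    # shard copy; at most q*n distinct nodes qualify.
    return {node for copies in placement.values() for node in copies}

cluster = [f"node{i}" for i in range(100)]
placement = nodes_holding_copies(q=2, n=3, cluster_nodes=cluster)
print(len(nodes_with_data(placement)))  # 6 of the 100 nodes hold copies
```

With Q=2 and N=3 this reproduces the "6 out of 100 nodes" example from the email.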
Hi everyone,
We updated 3.3.3 deb and rpm packages to use the latest Erlang 24
patch version. There are no source changes. The new rpm package
version is 3.3.3.2 and deb version is 3.3.3-2.
Cheers,
-Nick
> correction, just realised it's a container doh... host is Debian, container
> (official image) is using 1.5GB
Ah no worries, that's common with containers. If you're still
concerned about memory, here is one recent issue discussing a few
strategies for how to introspect it and possibly reduce it
Thanks for your report Matthieu.
Yeah, it looks like JFrog Artifactory is down for ASF (a licensing
issue). ASF infra team is aware of it and will hopefully fix it soon.
Cheers,
-Nick
On Fri, Mar 22, 2024 at 11:40 AM Matthieu RAKOTOJAONA RAINIMANGAVELO
wrote:
>
> Hello there,
>
> It seems the
Hi Christopher,
It would depend on what bottleneck is causing the slowness. It could
be limited by the speed at the source, the replication job itself, or
the target write throughput. Some of the settings you can play with
are:
* Increase batch size if the documents are not too large
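For reference, the batch-size and concurrency knobs mentioned above live in the replicator config section. A hedged example (the values are illustrative starting points, not recommendations; defaults and safe ranges depend on document sizes and cluster capacity):

```ini
[replicator]
; Documents fetched per batch; raise when documents are small
worker_batch_size = 2000
; Parallel workers per replication job
worker_processes = 8
; Connection pool shared by the job's workers
http_connections = 40
```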
Hi everyone,
Debian and RPM 3.3.3 packages were rebuilt and updated with the Erlang
version 24.3.4.15. That version fixes a memory leak
https://github.com/erlang/otp/issues/7834
There were no other CouchDB source changes. The updated deb package
has version 3.3.3-1 and the rpm one has version
That looks like a DNS or other network problem where some nodes cannot
be reached by other nodes.
If I am not mistaken, you and Jan already debugged this issue
in Slack and it turned out to be that one of the nodes was
couchdb@127.0.0.1 instead of a proper hostname.
Regards,
-Nick
On
Hi Paul,
You can recreate your databases and then re-populate them from another
source. As of CouchDB version 3.3.0, if a database is recreated it
will reset the replication job id, so it will start a new replication
job between the new recreated source and the target.
> There are some duplicate
Severity: moderate
Affected versions:
- Apache CouchDB through 3.3.2
- IBM Cloudant before 8413
Description:
Design document functions which receive a user http request object may expose
authorization or session cookie headers of the user who accesses the document.
These design document
That's currently the expected behavior for _replicate (transient)
replication jobs. There is retries_per_request parameter
https://docs.couchdb.org/en/stable/config/replicator.html#replicator/retries_per_request
to help configure retries for individual http requests the replication
job makes, but
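The per-request retry setting referenced above can be raised in the config file; a minimal example (the value is illustrative):

```ini
[replicator]
; How many times the replicator retries an individual failed HTTP
; request before giving up on it
retries_per_request = 10
```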
Hi Aurélien,
Cloudant currently uses an ES5 JavaScript engine.
Cheers,
-Nick
On Thu, May 18, 2023 at 12:07 PM Aurélien Bénel wrote:
>
> Dear all,
>
> Lately I changed my habits on CouchDB and started to use ES6/ES7 features for
> map functions (arrow functions, spread operators...). I was
environment.
Credit:
Nick Vatamaniuc vatam...@apache.org (finder)
References:
https://couchdb.apache.org/
https://www.cve.org/CVERecord?id=CVE-2023-26268
Hi Paul,
Thanks for the update.
We do use gzip when building our packages:
https://github.com/apache/couchdb-pkg/blob/main/debian/rules#L45-L47
Trying to install couchdb on ubuntu jammy worked on my VM. I noticed
it pulled in libmozjs-78 and libicu70 (70.1-2) was already present on
the server:
Hi Willem,
The different versions of epmd could be an issue. You can try killing
(stopping) the older epmd and seeing if CouchDB will start the newer
epmd version from its release.
3.3.0 packages are built with Erlang 24, while 3.2.2 were using Erlang
23. Erlang 23 should be able to talk to
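The epmd check above could look like the following (illustrative command sequence; epmd is Erlang's port mapper daemon, and the service name assumes the standard deb/rpm packages):

```
# See which epmd is running and what nodes it has registered
epmd -names

# Stop the old epmd daemon (only safe if no other Erlang
# applications on this host depend on it)
epmd -kill

# Restart CouchDB so it starts its own bundled epmd
sudo systemctl restart couchdb
```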
> Note that I am using HAProxy for SSL.
> Could this have any adverse effect?
>
> Thank you very much.
>
> Alfred.
>
>
> On Thu, Sep 1, 2022 at 6:02 AM Nick Vatamaniuc wrote:
>
> > Hi Alfred,
> >
> > It could be you're seeing the effect of busy
Replications should work between 2.3.x and 3.x
The closing_on_request error seems to indicate something is forcibly
closing the connection. Double-check whether you have any proxies in
between, that they allow enough connections through, and that they
don't set connection timeouts too low. Could also try increasing
Hi Alfred,
It could be you're seeing the effect of busy waiting from Erlang VM
schedulers. Erlang VM would spawn a scheduler for each CPU core, and
then after any of those schedulers run out of work, they would still
spin in a busy loop for a bit waiting for more work. Sometimes that is
not
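If the busy-wait spinning described above turns out to be the cause, the Erlang VM's scheduler busy-wait thresholds can be lowered via vm.args. A hedged sketch (these are standard Erlang VM flags; whether they help depends on the workload):

```
## vm.args
## Reduce scheduler busy-wait spinning: trades a little wakeup
## latency for much lower idle CPU usage
+sbwt none
+sbwtdcpu none
+sbwtdio none
```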
> > The curl script quotes the parameters
> > with a single quote in:
> >
> > this shouldn't be a problem. I actually started running those commands
> > without the env variable, but
> > decided to update my notes when I was copying them over gist.
> >
> > >
turns out
> to be helpful.
>
> I haven't yet tried to replicate data among two instances running the same
> version. Reason is, during this migration,
> I believe it will be impossible to swap all my services to the new CouchDB
> version, so there should be a period of
Hi Alan,
Thanks for reaching out.
It looks like CouchDB had failed to parse the replication document,
and couldn't turn it into a proper replication job.
The 'undef' error could suggest running on an unsupported version of
Erlang. It's a generic "this function doesn't exist" error in Erlang.
Hi Paul,
I don't think there is a clustered CouchDB book out there, and you've
probably already seen
http://docs.couchdb.org/en/stable/maintenance/performance.html.
The often overlooked aspect of performance tuning is that it starts
with application design. Things like how indices are defined,
Without an example it's hard to tell why it doesn't work. The initial
guess is it's because you might have fetched attachments as stubs and
not their full body.
For example, by default, fetching a doc might return something like:
"_attachments": {
  "att1": {
    "content_type": "application/octet-stream",
    "stub": true,
    ...
  }
}
That's really neat! Thanks for sharing. It feels like this could cover
a larger number of cases where users don't really need the full JS
syntax for their views
Cheers,
-Nick
On Thu, Jul 22, 2021 at 3:28 PM Diana Thayer wrote:
>
> Howdy folks,
>
> Inspired by Mango, I've written a plugin for
Very nice tool. I like the replication feature, the contexts, and the
multiple output formats
Thanks for sharing it, Jonathan!
-Nick
On Tue, Apr 27, 2021 at 4:25 PM Jonathan Hall wrote:
>
> Good day everyone!
>
> I'd like to announce the "alpha" release of a new CLI tool for
> interacting with
Hi Vladimir,
What you're seeing is the list of previous checkpoints between that
pair of source and target endpoints. The list won't grow indefinitely,
it is capped at 50 entries. So after 51 entries, the oldest one will
be dropped, and you'd have only entries 2..51 and so on.
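A toy model of that capping behavior (the cap of 50 comes from the email above; the function name and structure are illustrative, not CouchDB's implementation):

```python
MAX_HISTORY = 50

def record_checkpoint(history, entry):
    """Prepend the newest checkpoint and drop anything past the cap."""
    history.insert(0, entry)
    del history[MAX_HISTORY:]
    return history

history = []
for session in range(1, 52):           # record 51 checkpoints
    record_checkpoint(history, {"session_id": session})

print(len(history))                    # 50
print(history[-1]["session_id"])       # oldest surviving entry: session 2
```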
Cheers,
-Nick
On
Hi Sharath,
You can use {"source": ..., "target": ..., "create_target": true,
"create_target_params": {"partitioned": true}, ...} to create
partitioned target dbs.
The option was there since version 2.2.0
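Put together as a full request, that might look like the following (host and credentials are placeholders):

```
curl -X POST http://adm:pass@127.0.0.1:5984/_replicate \
  -H 'Content-Type: application/json' \
  -d '{"source": "http://adm:pass@127.0.0.1:5984/db1",
       "target": "http://adm:pass@127.0.0.1:5984/db2",
       "create_target": true,
       "create_target_params": {"partitioned": true}}'
```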
Hi Gustavo,
"no match of right hand value {error,eio}" looks like a failure to write to
a block device. Perhaps disks are full, or there is some kind of throttling
(over-quota) issue?
Replication will restart on failures. There is an exponential backoff on
repeated errors so if your replications
Hi Compu Net,
Thanks for reaching out!
I couldn't find your particular mp3 file, but I created a random file
and was able to insert it with 2.3.0 (fresh, not upgraded) with 2 nodes
on Mac OS, built with Erlang 20.3.8.14.
I had started a 3 node dev cluster but used the --degrade-cluster 1
Hi Andrea,
When replicating, the content (source code) of the filter becomes part of
the replication identity. If the filter changes, the replication gets a
new replication ID: it effectively becomes a new replication and
reprocesses all the changes.
Cheers,
-Nick
On Tue, Aug 28,
exit
> > mochiweb_socket:close(Socket),
>
>
> but I'm not sure why it wouldn't pick up the socket_options which has recbuf
> set to 64k. I'll keep debugging (build again from source and see what value
> of recbuf it's taking and logging it), but if you have any other point
Hi Raja,
This sounds like this issue:
https://issues.apache.org/jira/browse/COUCHDB-3293
It stems from a bug in http parser
http://erlang.org/pipermail/erlang-questions/2011-June/059567.html and
perhaps mochiweb not knowing how to handle a "message too large" error.
One way to work around it is