in the view
> definition and skip include_docs altogether, but of course that has a whole
> set of associated tradeoffs ...
>
> Cheers, Adam
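A minimal sketch of the alternative Adam describes, assuming a database named `my_db` on a local node (host, port, and names are placeholders):

```shell
# Emit the doc body as the view value so reads can skip include_docs.
# The tradeoff: the document is copied into the view index, roughly
# doubling storage for the indexed docs.
curl -X PUT 'http://127.0.0.1:5984/my_db/_design/docs' \
  -H 'Content-Type: application/json' \
  -d '{"views": {"by_id": {"map": "function (doc) { emit(doc._id, doc); }"}}}'

# Each row now carries the full doc in "value", no ?include_docs=true needed:
curl 'http://127.0.0.1:5984/my_db/_design/docs/_view/by_id?limit=10'
```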
>
>
> > On Dec 19, 2017, at 6:32 AM, Carlos Alonso
> wrote:
> >
> > Hello everyone!!
> >
> > We're exp
out a race condition that could be the cause of this behaviour.
I was wondering if this still holds true and whether there's anything we can do
to fix or work around it.
Regards
--
[image: Cabify - Your private Driver] <http://www.cabify.com/>
*Carlos Alonso*
Data Engineer
Madrid, Spain
car
; Is there some more documentation on the sequence, the since parameter,
> >>> how to make sure to actually see the changes I'm interested in?
> >>> I like to read first before i ask specific questions - if any are left.
> >>>
> >>> regards,
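For the quoted question, a minimal `_changes` sketch (database name and host are placeholders; on 2.x, treat `since` values as opaque tokens rather than numbers):

```shell
# First read: grab a batch of changes plus the checkpoint to resume from.
curl -s 'http://127.0.0.1:5984/my_db/_changes?limit=10'

# The response ends with "last_seq"; pass it back to see only newer changes.
# include_docs=true returns each changed document inline.
curl -s 'http://127.0.0.1:5984/my_db/_changes?since=LAST_SEQ_HERE&include_docs=true'
```

Checkpointing `last_seq` after each batch is what guarantees you do not miss changes between reads.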
ected to node1, and showing the couch process tree.
>
> https://i.imgur.com/Vb3Jisa.png
>
> -Joan
>
> - Original Message -
> From: "Carlos Alonso"
> To: user@couchdb.apache.org, "Joan Touzet"
> Sent: Tuesday, 10 October, 2017 3:37:20 AM
> Su
ote:
> Hi Carlos, I wrote a post on monitoring CouchDB using Prometheus:
>
> https://hackernoon.com/monitoring-couchdb-with-prometheus-grafana-and-docker-4693bc8408f0
>
> I’m not sure if it will provide all the metrics you need, but I hope this
> helps
>
> Geoff
> On Mon
I'd like to connect a diagnostic tool such as etop, observer, etc. to see
which processes are open there, but I can't seem to get it working.
Could anyone please share how to run any of those tools against a remote server?
Regards
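One way to attach those tools, assuming Erlang distribution is reachable and you take the real node name and cookie from the node's `vm.args` (the names below are placeholders):

```shell
# Start a hidden local node sharing the cluster's cookie; -hidden keeps
# the debug node out of the cluster's view of connected nodes.
erl -name debug@127.0.0.1 -setcookie YOUR_COOKIE -hidden
```

From that Erlang shell you can then run `etop:start([{node, 'couchdb@couchdb-node-1'}, {interval, 5}]).`, or `observer:start().` and pick the remote node from observer's Nodes menu.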
On Sat, Oct 7, 2017 at 6:13 PM Carlos Alonso
wrote:
> So
017 at 11:18 PM Carlos Alonso
wrote:
> This is one of the complete errors sequences I can see:
>
> [error] 2017-10-03T21:13:16.716692Z couchdb@couchdb-node-1 emulator
> Error in process <0.24558.209> on node 'couchdb@couchdb-node-1'
> with exit value:
>
/1 L231">>,<<"mochiweb_http:headers/6
L91">>,<<"proc_lib:init_p_do_apply/3 L240">>]
[error] 2017-10-03T21:13:16.717859Z couchdb@couchdb-node-1 <0.20718.207>
Replicator, request PUT to "
http://127.0.0.1:5984/my_db/de45a832a1
I can find
the documents that generated the error (in the logs) in the target db...
Regards
On Tue, Oct 3, 2017 at 10:52 PM Carlos Alonso
wrote:
> So to give some more context this node is responsible for replicating a
> database that has quite many attachments and it raises the 'fam
All this shows us is that the replicator
> itself attempted a POST and had the connection closed on it. (Remember
> that the replicator is basically just a custom client that sits
> alongside CouchDB on the same machine.) There should be more to the
> error log that shows why CouchDB
Hello, this is happening every day, always on the same node. Any ideas?
Thanks!
On Sun, Oct 1, 2017 at 11:42 AM Carlos Alonso
wrote:
> Hello everyone!!
>
> I'm trying to understand an issue we're experiencing on CouchDB 2.1.0
> running on Ubuntu 14.04. The clust
9a38\",\"_rev\":\"1-ebb0119fbdcad604ad372fa6e05d06a2\",...\":{\"start\":1,\"ids\":[\"ebb0119fbdcad604ad372fa6e05d06a2\"]}}">>],605}}
The particular node is 'responsible' for a replication that has quite many
{mp_parser_di
gt; {"error":"not_found","reason":"Database does not exist."}
>
> Isn't it possible to configure a CouchDB cluster?
>
> Regards
> Dominic
>
very soon now) and see if the problem goes away first.
>
> -Joan
>
> ----- Original Message -----
> From: "Carlos Alonso"
> To: user@couchdb.apache.org, "Joan Touzet"
> Sent: Wednesday, August 2, 2017 8:40:39 AM
> Subject: Re: Understanding a couple re
replication features and try again.
>
> -Joan
>
> - Original Message -
> From: "Carlos Alonso"
> To: "user"
> Sent: Monday, July 31, 2017 9:58:09 AM
> Subject: Re: Understanding a couple replication errors
>
> On top of those, this one hap
f<0.0.12124.197441>},
identity},
{att,<<"FF01-0020941.xml">>,<<"application/xml">>,8526,
8526,
<<40,146,23,28,52,27,112,157,241,120,182,142,82,29,
117,115>>
,75,131,59,18,8,235,62,169,28,55,201,79>>}],[]}},nil,[{docs_read,140},{docs_written,18},{missing_checked,500},{missing_found,500}],nil,nil,{batch,[<<"{\"_id\":\"20ab5179602db3c31abb60507b40ebe9\",\"_rev\":\"2-d774bbcec2927a9c2de7f24a21486765\
<
, Jul 27, 2017 at 2:21 PM Robert Samuel Newson wrote:
> eacces means the file ownership or permissions were wrong, which indicates
> the wrong settings on your rsync command. that error will persist until you
> chown / chmod correctly.
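A sketch of the sequence described here, with placeholder paths and the `couchdb` system user assumed (check your install's data directory and owner):

```shell
# Copy the shard directory to the target node; -a alone does not
# guarantee the right owner on the receiving side.
rsync -av /opt/couchdb/data/shards/00000000-1fffffff/ \
  couchdb-node-2:/opt/couchdb/data/shards/00000000-1fffffff/

# On the target node, fix ownership BEFORE telling CouchDB about the
# shard, or opens will fail with eacces:
sudo chown -R couchdb:couchdb /opt/couchdb/data/shards
```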
>
> > On 27 Jul 2017, at 12:08,
ts new location _before_ telling couchdb about it, and that's
> important. CouchDB will keep an open file descriptor for performance
> reasons, so copying data to those files via external processes, or even
> deleting the files, can have unexpected consequences and data loss
se rebalancing is definitely a bit tricky and we
> could use better tools for it.
>
>
> > On 26 Jul 2017, at 18:10, Carlos Alonso
> wrote:
> >
> > Hi!
> >
> > I have had a few log errors when moving a shard under particular
> > circumstances and I
,{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,632}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
```
*In conclusion:*
To me it looks like two things are involved here:
1. The fact that I deleted the file from disk and something e
remotely. I also wrote an efficient command line tool to create and
> populate a bloom filter using https://github.com/axiak/pybloomfiltermmap.
>
> I'll open source my code in the near future.
>
'now'.
Any other suggestions that you may think of?
Many thanks for your help. I really appreciate it.
Regards
On Mon, Jul 3, 2017 at 10:25 AM Carlos Alonso
wrote:
> Hi Adam,
>
> Did you manage to find something about this possible bug in your Jira? We
> would really apprecia
Hi Adam,
Did you manage to find something about this possible bug in your Jira? We
would really appreciate it as we're quite heavily affected by it.
Regards
On Thu, Jun 29, 2017 at 10:45 AM Carlos Alonso
wrote:
> Hi Adam.
>
> That's nice to hear, I've been searching
> your Erlang distribution cluster that are not CouchDB nodes, such as
> operator consoles or other Erlang OTP applications.
>
> mem3:nodes() returns all nodes participating in the CouchDB cluster.
>
> -Joan
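A quick way to see the two lists Joan mentions (admin credentials and host are placeholders):

```shell
# all_nodes: every node visible over Erlang distribution, which may
#   include non-CouchDB nodes such as operator consoles.
# cluster_nodes: nodes participating in the CouchDB cluster, i.e. what
#   mem3:nodes() returns.
curl -s 'http://admin:password@127.0.0.1:5984/_membership'
```

Per the explanation above, a name in `all_nodes` but not in `cluster_nodes` is reachable over distribution but is not a CouchDB cluster member.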
>
> - Original Message -
> From: "Carlos Alonso"
>
i.e., if in all_nodes but not in cluster_nodes
> it means it cannot...).
>
> Thanks!
t.
>
> I know there have been a couple of bugs that caused exactly that behavior
> in the past. Will see what I can dig up from JIRA. Cheers,
>
> Adam
>
> > On Jun 28, 2017, at 12:03 PM, Carlos Alonso
> wrote:
> >
> > Hi guys, we're seeing CouchDB change
eases:
>
> [replicator]
> max_replication_retry_count = infinity
> retries_per_request = 6
>
> For current master:
>
> [replicator]
> max_history = 8
> retries_per_request = 6
>
> Cheers,
> -Nick
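These settings can also be changed at runtime without a restart. On the 2.x line the config API lives on the node-local port (5986 by default; host, port, and the value are placeholders):

```shell
# Set the replicator's per-request retry count; the change takes effect
# immediately and is persisted to the node's local ini file.
curl -X PUT 'http://127.0.0.1:5986/_config/replicator/retries_per_request' \
  -d '"6"'
```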
>
>
> > On Jun 21, 2017, at 1:40 AM, Carlos Alons
ite replication attempts with maximum
> retry interval of 5 minutes or so. Can I do this? Do I need to change
> default configuration to achieve this?
> >>
> >> thanks,
> >> --Vovan
> >
>
fail to delete.
> >
>
> _purge is the wrong tool for this job.
> From my understanding it's there as a last resort to get sensitive data out
> of a DB.
>
> Regards,
> Stefan
>
> [1] I think the main reason for this was actually the operating system, but
> it was faster
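To make the contrast concrete, this is the ordinary deletion path that `_purge` bypasses (doc id, rev, and db name are placeholders):

```shell
# Normal delete: writes a tombstone, so the deletion replicates to peers.
curl -X DELETE 'http://127.0.0.1:5984/my_db/some_doc?rev=1-abc123def456'

# Space is reclaimed later by compaction, not by the delete itself.
curl -X POST 'http://127.0.0.1:5984/my_db/_compact' \
  -H 'Content-Type: application/json'
```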
> The 500 is because the operation timed out waiting for a majority of nodes
> to report success, I am guessing.
>
> Once the other two nodes come back up, the _dbs database update will be
> replicated to them.
>
> B.
>
>
> > On 19 Jun 2017, at 13:17, Carlos Alonso
>
of what it means to appear in the
all_nodes and cluster_nodes lists, and each situation's implications, as I can't
find it documented anywhere (i.e., if in all_nodes but not in cluster_nodes
it means it cannot...).
Thanks!
ing on the configured
replicas, even the nodes that are down may get a replica. Should I file a GH
issue for further investigation on this?
Regards
ecurity
> synchronization process to “punch through” maintenance mode and retrieve
> the security objects from those shards for the purposes of establishing a
> majority and subsequently converging all the shards. I think that’s worth
> further discussion in a GitHub issue at least.
>
>
hen is, what exactly does this error mean? What are the so-called
security objects? Is it something one has to be careful to avoid?
Thank you.
On Tue, Jun 13, 2017 at 7:34 PM Carlos Alonso
wrote:
> Hi guys!
>
> I continue trying to understand how CouchDB clusters work and trying
>
> Preparing To Support A Lot More Users with CouchDB 2
> <
> https://medium.com/@quizsteredu/preparing-to-support-a-lot-more-users-with-couchdb-2-39d45f322f0b
> >
>
> Thanks.
>
> Geoff
>
very similar for docker
> swarm: https://github.com/redgeoff/couchdb-docker-service. I managed to
> name the nodes couchdb1, couchdb2, ... (without the dash) and it appears to
> work well for me. Does omitting the dash work for you?
>
> On Thu, Jun 1, 2017 at 7:45 AM Carlos Alons
s).
I've seen it happen both when the new replica node was completely empty
and when it had the data preloaded (via rsync or because it had
previously been a replica).
I hope all this text helps you out :)
Thanks!
ilds in active_tasks and lift the flag once
> those clear out. Cheers,
>
> Adam
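A sketch of the check Adam suggests, assuming `jq` is available (host and port are placeholders):

```shell
# Count running index builds; lift maintenance_mode once this reaches 0.
curl -s 'http://127.0.0.1:5984/_active_tasks' \
  | jq '[.[] | select(.type == "indexer")] | length'
```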
>
> > On Jun 6, 2017, at 10:39 AM, Carlos Alonso
> wrote:
> >
> > Hi guys.
> >
> > I've been experimenting with operating a CouchDB 2.0 cluster and I've
> seen
>
tribution but the couchdb development community as a whole could. I
> agree with you that extensive management tooling will be very valuable to
> couchdb. If you start something, you might find others will build from what
> you start.
>
> B.
>
> > On 29 May 2017, at 12:41, Carlos
quot;bringing nodes back alive if they went down for
> a while" -- just turn it back on and it'll automatically catch up on the
> data it "missed" while away.
>
> B.
>
> > On 27 May 2017, at 11:38, Carlos Alonso
> wrote:
> >
> > Hi Rober
back for shard moves at
> https://stackoverflow.com/questions/6676972/moving-a-shard-from-one-bigcouch-server-to-another-for-balancing
> and it still holds
>
> Future releases of couchdb should make this easier.
>
> Sent from a camp site with surprisingly good 4G signal
>
> > On
py on each node of those
databases to have them homogeneous (nodes added to the cluster when the
cluster was created all received their copy).
Thanks in advance!