Hi Robert and other couchdb fellows.

Let me share with you the very first draft of our couchdb_admin tools and
Kubernetes config. I hope you find it useful and an interesting starting
point for better tooling around CouchDB cluster operations. The URL of the
public repo is: https://github.com/cabify/couchdb-admin

Aside from that, and following up on the previous question (bringing up a new
node with the same name as the previous one), I have been trying to use
couch-n (where n is an integer) as node names, but I can't add them to the
cluster. Neither via Fauxton's cluster setup nor via the API (the _nodes db)
can I add a node whose name is not its IP address. These are the errors:

This is the error Fauxton shows when I try to add a node named, for example,
"couch-1": {invalid_ejson,{conn_failed,{error,nxdomain}}}
When trying to add a node to the cluster via the API, this is the error:

[notice] 2017-06-01T13:01:26.942862Z couchdb@couch-0 <0.9214.0> -------- 127.0.0.1 - - PUT /_nodes/couchdb@couch-1 201
[error] 2017-06-01T13:01:26.947005Z couchdb@couch-0 <0.9235.0> -------- ** System running to use fully qualified hostnames **
** Hostname couch-1 is illegal **
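
For reference, this is roughly the request I'm issuing via the API (the admin
credentials and the 5986 node-local port below are placeholders, adjust to
your setup):

  curl -X PUT http://admin:password@127.0.0.1:5986/_nodes/couchdb@couch-1 -d '{}'

The PUT itself returns 201, as in the log above, but the hostname error
follows right after.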

Any help with this, please?

Regards

On Mon, May 29, 2017 at 6:20 PM Robert Samuel Newson <rnew...@apache.org>
wrote:

> Hi,
>
> Yes, with the same name in vm.args. If you look in the _dbs database,
> you'll see what we call "shard maps", which is just what it says. It tells
> the nodes in the cluster which ranges of which databases are stored on
> which nodes (or should be). In normal operations these documents direct
> reads and writes when you make http requests, but in the background they
> organise internal replication processes to ensure every update is fully
> redundant. This replication is triggered whenever an update is made. So, in
> the special case that a node goes away and then comes back empty (because
> you've replaced its disks or the entire machine), it's just doing a
> replication from 0 rather than the last checkpoint.
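>
> As a rough illustration (the db name, node names and ranges below are
> invented), one of those shard map documents looks something like this:
>
>   {
>     "_id": "mydb",
>     "changelog": [
>       ["add", "00000000-7fffffff", "couchdb@node1.example.com"],
>       ["add", "80000000-ffffffff", "couchdb@node2.example.com"]
>     ],
>     "by_node": {
>       "couchdb@node1.example.com": ["00000000-7fffffff"],
>       "couchdb@node2.example.com": ["80000000-ffffffff"]
>     },
>     "by_range": {
>       "00000000-7fffffff": ["couchdb@node1.example.com"],
>       "80000000-ffffffff": ["couchdb@node2.example.com"]
>     }
>   }
>
> by_node says which ranges each node holds and by_range says which nodes
> hold each range; they are the same information indexed both ways.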
>
> Cloudant has a bunch of internal tools around some of this but,
> realistically, I don't think we'll be cleaning them up sufficiently for
> safe public consumption. I can't promise to review any specific, sizeable
> contribution but the couchdb development community as a whole could. I
> agree with you that extensive management tooling will be very valuable to
> couchdb; if you start something, you might find others will build from what
> you start.
>
> B.
>
> > On 29 May 2017, at 12:41, Carlos Alonso <carlos.alo...@cabify.com> wrote:
> >
> > Hi Robert, many thanks for your help. Really appreciate it.
> >
> > Just for clarification, when you say "just bring it back with the same
> > node and it will refill", did you mean "just bring it back with the same
> > **name** and it will refill"? Meaning the name as in "-name <thenodename>"
> > in the vm.args file?
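> >
> > That is, a line like the following (the hostname here is just an example):
> >
> >   -name couchdb@node1.example.com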
> >
> > I'm building an admin tool so that all these operations can be run
> > automatically, to try to make operations smoother and less error-prone.
> > Are you aware of anything already built along these lines? Otherwise, it
> > would be great if you could comment on it when it is ready; would you be
> > up for that? Such a tool would be really valuable for the community, I
> > think.
> >
> > Regards and many thanks again for your help!
> >
> > On Sun, May 28, 2017 at 5:28 PM Robert Samuel Newson <rnew...@apache.org> wrote:
> >
> >> Hi,
> >>
> >> You can rebalance those databases the same way.
> >>
> >> Note, you don't _have_ to rebalance these databases as you grow your
> >> cluster, it's not always the case that you need all nodes involved for a
> >> particular database.
> >>
> >> Removing a node permanently involves no work, but if you wanted to you
> >> could go through all the documents in the _dbs database and modify them
> >> so they no longer refer to the removed node. This will reduce log noise.
> >> More likely, you'd replace a failed node with another; in that case just
> >> bring it back with the same node and it will refill with the data it had
> >> before. This is also true for your "bringing nodes back alive if they went
> >> down for a while" -- just turn it back on and it'll automatically catch up
> >> on the data it "missed" while away.
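> >>
> >> To sketch what modifying those documents means (node names here are made
> >> up): in each shard map doc you'd delete the dead node's key from by_node,
> >> e.g. going from
> >>
> >>   "by_node": {
> >>     "couchdb@dead.example.com":  ["00000000-ffffffff"],
> >>     "couchdb@alive.example.com": ["00000000-ffffffff"]
> >>   }
> >>
> >> to just the couchdb@alive.example.com entry, remove the dead node from the
> >> matching by_range lists, and save the doc back.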
> >>
> >> B.
> >>
> >>> On 27 May 2017, at 11:38, Carlos Alonso <carlos.alo...@cabify.com> wrote:
> >>>
> >>> Hi Robert, many thanks for your response and for your SO post, really
> >>> detailed. Brilliant!! That's very clarifying.
> >>>
> >>> One last question: what about the _global_changes, _metadata, _users
> >>> and _replicator databases? Should I manually replicate them to the newly
> >>> added node? Are there any good practices or advice about that?
> >>>
> >>> And about operations procedures... Is there any detailed documentation
> >>> out there (like your SO post) for the other operations? (Removing a node,
> >>> bringing nodes back alive if they went down for a while, ...)
> >>>
> >>> That would be really valuable.
> >>>
> >>> Thanks!!
> >>>
> >>> On Fri, May 26, 2017 at 7:42 PM Robert Newson <rnew...@apache.org> wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> That's expected behaviour. Existing databases are not rebalanced as new
> >>>> nodes are added. New databases are distributed over them, of course, up
> >>>> to the limit of the q and n parameters.
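> >>>>
> >>>> For reference, q and n come from the [cluster] section of the server
> >>>> config; the values below are just an example:
> >>>>
> >>>>   [cluster]
> >>>>   q = 8
> >>>>   n = 3
> >>>>
> >>>> where q is the number of shards each database is split into and n is
> >>>> the number of copies kept of each shard.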
> >>>>
> >>>> I wrote a guide a while back for shard moves at
> >>>> https://stackoverflow.com/questions/6676972/moving-a-shard-from-one-bigcouch-server-to-another-for-balancing
> >>>> and it still holds.
> >>>>
> >>>> Future releases of couchdb should make this easier.
> >>>>
> >>>> Sent from a camp site with surprisingly good 4G signal
> >>>>
> >>>>> On 26 May 2017, at 16:24, Carlos Alonso <carlos.alo...@cabify.com> wrote:
> >>>>>
> >>>>> Hi guys!
> >>>>>
> >>>>> I'm very new to CouchDB administration and I was trying to set up a
> >>>>> cluster and do basic operations on it (adding nodes, moving shards, ...)
> >>>>> following these two resources:
> >>>>>
> >>>>> http://docs.couchdb.org/en/2.0.0/cluster/sharding.html
> >>>>>
> >>>>> https://medium.com/linagora-engineering/setting-up-a-couchdb-2-cluster-on-centos-7-8cbf32ae619f
> >>>>>
> >>>>> So far I've managed to set up a cluster and add nodes to it, but newly
> >>>>> added nodes (once the cluster is running) have not received any shards
> >>>>> of the _global_changes, _metadata, _replicator and _users databases.
> >>>>>
> >>>>> I was wondering whether it makes sense for each node to have a copy of
> >>>>> those databases, to keep them homogeneous (the nodes added when the
> >>>>> cluster was created all received their copy).
> >>>>>
> >>>>> Thanks in advance!