Hi Nick,

Thank you for your clarification.

Best wishes
Marcus
On Wednesday, 12.06.2024 at 16:47 -0400, Nick Vatamaniuc wrote:
> On Wed, Jun 12, 2024 at 3:54 PM Markus Doppelbauer
> <[email protected]> wrote:
> > Dear Nick,
> >
> > This means: if Q and N are large enough to distribute the data to all
> > nodes, e.g. 120 nodes with Q=40, N=3, then the view query
> > startkey=foobar&endkey=foobaz has to ask all 120 nodes?
>
> In the initial startup phase it could query all 120 nodes, then it will
> pick 40 shard workers and will stream from those only.
> You do have some control over where each of the 3 copies of shards end up:
> https://docs.couchdb.org/en/stable/cluster/databases.html#placing-a-database-on-specific-nodes
>
> > If I use a partition key, is it possible to distribute the partitioned
> > view among multiple nodes? The docs say it should stay under 10GB.
>
> A partition cannot be split across multiple shard ranges. A single shard
> range can contain multiple partitions. So, for a given partition key, it
> would pick a particular shard range from Q, and then it would pick one of
> the N copies (usually 3) to stream from.
>
> Cheers,
>
> > Markus
> >
> > On Wednesday, 12.06.2024 at 13:54 -0400, Nick Vatamaniuc wrote:
> > > Another feature related to efficient view querying are partitioned
> > > databases: https://docs.couchdb.org/en/stable/partitioned-dbs/index.html.
> > > It's a bit of a niche, as you'd need to have a good partition key, but
> > > aside from that, it can speed up your queries as responses would be
> > > coming from a single shard only instead of Q shards.
> > >
> > > On Wed, Jun 12, 2024 at 1:30 PM Markus Doppelbauer
> > > <[email protected]> wrote:
> > > > Hi Nick,
> > > >
> > > > Thank you very much for your reply. This is exactly what we are
> > > > looking for. There are so many DBs that store the secondary index
> > > > locally (Cassandra, Aerospike, ScyllaDB, ...).
> > > >
> > > > Thanks again for the answer
> > > > Marcus
> > > >
> > > > On Wednesday, 12.06.2024 at 13:23 -0400, Nick Vatamaniuc wrote:
> > > > > Hi Marcus,
> > > > >
> > > > > The node handling the request only queries the nodes with shard
> > > > > copies of that database. In a 100 node cluster the shards for that
> > > > > particular database might be present on only 6 nodes, depending on
> > > > > the Q and N sharding factors, so it will query 6 out of 100 nodes.
> > > > > For instance, for N=3 and Q=2 sharding factors, it will first send
> > > > > N*Q=6 requests, and wait until it gets at least one response for
> > > > > each of the Q=2 shard ranges. This happens very quickly. Then, for
> > > > > the duration of the response, it will only stream responses from
> > > > > those Q=2 workers. So, to summarize, for a Q=2 database, it will be
> > > > > a streaming response from 2 workers. For Q=4, from 4 workers, etc...
> > > > >
> > > > > Cheers,
> > > > > -Nick
> > > > >
> > > > > On Wed, Jun 12, 2024 at 1:00 PM Markus Doppelbauer
> > > > > <[email protected]> wrote:
> > > > > > Hello,
> > > > > >
> > > > > > Is the CouchDB view a "global" or "local" index? For example, if
> > > > > > a cluster has 100 nodes, would the query ask a single node - or
> > > > > > 100 nodes?
> > > > > >
> > > > > > /.../_view/posts?startkey="foobar"&endkey="foobaz"
> > > > > >
> > > > > > Best wishes
> > > > > > Marcus
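
P.S. For anyone following along, here is a minimal sketch (Python, using the
"requests" library) of the difference Nick describes: a global view query is
streamed from one worker per shard range (Q workers), while a partitioned view
query is served from the single shard range that owns the partition. The host,
credentials, database name, design document name and partition key below are
made-up placeholders, not anything from this thread; only the view name and
the startkey/endkey values come from the example URL above.

    import requests

    COUCH = "http://127.0.0.1:5984"   # placeholder node address
    AUTH = ("admin", "password")      # placeholder credentials

    # Create a partitioned database; q=40, n=3 are the example factors
    # from the thread (40 shard ranges, 3 copies each).
    requests.put(f"{COUCH}/posts_db",
                 params={"partitioned": "true", "q": 40, "n": 3},
                 auth=AUTH)

    # Global view query: the coordinator streams from one worker per
    # shard range, i.e. Q=40 workers for this database.
    # (Assumes a design doc "app" with a "posts" view already exists.)
    r = requests.get(f"{COUCH}/posts_db/_design/app/_view/posts",
                     params={"startkey": '"foobar"', "endkey": '"foobaz"'},
                     auth=AUTH)
    print(len(r.json().get("rows", [])))

    # Partitioned view query: scoped to one partition key ("user123"),
    # so only the single shard range owning that partition is read.
    r = requests.get(f"{COUCH}/posts_db/_partition/user123"
                     f"/_design/app/_view/posts",
                     params={"startkey": '"foobar"', "endkey": '"foobaz"'},
                     auth=AUTH)
    print(len(r.json().get("rows", [])))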
