That does sound cool! I'll check it out asap.
Thank you!
On 7/22/21 2:27 PM, Diana Thayer wrote:
Howdy folks,
Inspired by Mango, I've written a plugin for PouchDB that adds a JSON-based
map/reduce interface called *JsonViews*. This plugin, called
PouchDB-JsonViews, allows you to build templated JavaScript views using a
simple document property accessor syntax (ex: 'foo', 'foo.bar').
To: user@couchdb.apache.org, "Joan Touzet"
Sent: Friday, May 25, 2018 11:31:34 AM
Subject: Re: `allocation size overflow` when building map-reduce view on 2.1.1
cluster
Thank you for the insight!
I can’t share, unfortunately, but you did lead me to finding a document that
was much bigger than I realized.
To: user@couchdb.apache.org
Sent: Thursday, May 24, 2018 5:27:52 PM
Subject: `allocation size overflow` when building map-reduce view on 2.1.1
cluster
I’m getting an `allocation size overflow` error [1] when building a map-reduce
view on a 2.1.1 3-node cluster on CentOS [2]. The view code works for many
other databases on the same cluster, and works on an identical database on
CouchDB 1.6.1 [3]. The database isn’t particularly large, nor
Ying Bian wrote:
Hi All,
I know we introduced mango in couchdb 2.0. While I have not tried it out, I
want to ask this simple question:
Is mango a full replacement of the original map/reduce API? i.e.: Can it do
everything that the old map/reduce API does? If not, in which cases do I
still need to use map/reduce?
-Ying
Unit testing and debugging view functions without any code changes can
be done, as discussed here:
https://github.com/vivekpathak/casters
Since the "todo list" item about test cases - while trivial - is over
three years old, it is probably okay to share here.
Thanks!
Thanks. That makes sense.
Using map/reduce was as much a curiosity as a practical requirement.
Another way to monitor accuracy is to watch my progress indicator and see
how close it is to the real time.
On Thu, Nov 28, 2013 at 10:50 AM, Nitin Borwankar wrote:
> Hi Mark,
I have tons of data in my couchdb from previous conversions. I want to do
regression analysis of these past runs to calculate parameters for
estimation. I know the file-size, run-time, and conversion time for each.
I will use runLen * A + fileSize * B = convTime from the
samples. It would be nice to use a map-reduce to always have the latest
estimate of A and B, if possible.
My first thought would be to just find the average for each of the three
input vars and solve for A and B using these averages. However I'm pretty
sure that wouldn't be right.
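A sketch of how the latest A and B could be kept up to date with a view. The field names runLen/fileSize/convTime come from the message, but the emit shape, the reduce, and the client-side solver are assumptions; the reduce accumulates the least-squares sums so the 2x2 normal equations can be solved client-side, which avoids the two-unknowns problem with plain averages:

```javascript
// Map: one row per past conversion run (emit is a parameter only so
// the sketch runs outside CouchDB).
function map(doc, emit) {
  if (doc.runLen != null && doc.fileSize != null && doc.convTime != null) {
    emit(doc._id, { x1: doc.runLen, x2: doc.fileSize, y: doc.convTime });
  }
}

// Reduce: accumulate the sums needed for least squares on
// y = A*x1 + B*x2; the same output shape works for rereduce.
function reduce(keys, values, rereduce) {
  var acc = { s11: 0, s12: 0, s22: 0, s1y: 0, s2y: 0 };
  values.forEach(function (v) {
    if (rereduce) {
      acc.s11 += v.s11; acc.s12 += v.s12; acc.s22 += v.s22;
      acc.s1y += v.s1y; acc.s2y += v.s2y;
    } else {
      acc.s11 += v.x1 * v.x1; acc.s12 += v.x1 * v.x2; acc.s22 += v.x2 * v.x2;
      acc.s1y += v.x1 * v.y;  acc.s2y += v.x2 * v.y;
    }
  });
  return acc;
}

// Client side: solve the 2x2 normal equations for A and B.
function solve(acc) {
  var det = acc.s11 * acc.s22 - acc.s12 * acc.s12;
  return {
    A: (acc.s1y * acc.s22 - acc.s2y * acc.s12) / det,
    B: (acc.s2y * acc.s11 - acc.s1y * acc.s12) / det
  };
}
```

Querying with reduce=true would then return the five sums for the whole database, always reflecting the latest docs, and solve() gives the current A and B.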
On Aug 5, 2013, at 1:15 PM, Stanley Iriele wrote:
> I have to play with Bigcouch a little more. while I am here...what is the
> difference between Bigcouch's clustering techniques and couchbase's XDCR?
They’re not the same type of thing. BigCouch clustering is a lot like Couchbase
Server’s clustering.
The r + w > n thing... Either way that answers my questions, thank you!
Hi Stanley,
Let me provide a simplistic explanation, and others can help refine it
as necessary.
On Sat, Aug 03, 2013 at 09:34:48AM -0700, Stanley Iriele wrote:
> How then does distributed map/reduce work?
Each BigCouch node with a shard of the database also keeps that shard of
the view. When a request is made for a view, sufficient nodes are
queried to retrieve the view result, with the reduce step merging the
partial results.
hello,
let me preface my question with the fact that I saw that BigCouch uses
clustering techniques, like quorum, found in the dynamo white paper so I
read about half of it yesterday.
How then does distributed map/reduce work?
if not all nodes have replications of all things how does that work?
Is it possible to filter based on aggregation results in couchdb? For example,
say I have a map/reduce query that counts the number of reports produced
by an employee on a given date
Here's my map function
function(doc) {
  if (doc.employeeId && doc.type == 'Report') {
    emit([doc.employeeId, doc.date], 1); // assumed emit; original truncated
  }
}
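For the counting side, a sketch simulated in plain JS (the emit shape and the count-style reduce are assumptions): per-[employeeId, date] totals come from querying with group=true, but CouchDB cannot filter rows by the reduced value server-side, so any cutoff on the aggregate has to be applied client-side.

```javascript
// Map (emit is a parameter so the sketch runs outside CouchDB;
// the emit shape is an assumption).
function map(doc, emit) {
  if (doc.employeeId && doc.type == 'Report') {
    emit([doc.employeeId, doc.date], 1);
  }
}

// Equivalent of a _count reduce queried with group=true.
function countByKey(rows) {
  var out = {};
  rows.forEach(function (r) {
    var k = JSON.stringify(r.key);
    out[k] = (out[k] || 0) + r.value;
  });
  return out;
}
```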
On Wed, Dec 12, 2012 at 05:07:08PM -0500, nicholas a. evans wrote:
> On Wed, Dec 12, 2012 at 4:03 PM, James Marca
> wrote:
> > I feel your pain but cannot offer any help. I also use your option 5:
> > I use node.js to manually store view output into a separate db, with
> > the doc _ids equal to the key of the view output
> None of these are individually deal breakers.
> But all of them together made me want to see if I could write
> something myself that I could support in production, using your
> architecture as inspiration.
>
> At any rate, I need an incremental iterative map reduce. And althoug
You can listen to and filter the doc-changes feed and only re-apply the
"delete" ops to chainedDB... in parallel to the other stuff. The combination
might not be immediately consistent but will become so... eventually
have fun
On Wed, 12 Dec 2012 16:55:05 -0500
"nicholas a. evans" wrote:
Your basic approach (three databases) is the only way
to get an incremental iterative map reduce from CouchDB (without
poking around in the Erlang innards).
> Unfortunately, my work took me away from couch, and aside from a single
> issue [1] there was little interest in the project from others
Yes, our chained map-reduce is incremental.
On 12 December 2012 22:07, nicholas a. evans wrote:
On Wed, Dec 12, 2012 at 4:03 PM, James Marca
wrote:
> I feel your pain but cannot offer any help. I also use your option 5:
> I use node.js to manually store view output into a separate db, with
> the doc _ids equal to the key of the view output, so that I can limit
> updates to only those things
On Wed, Dec 12, 2012 at 4:22 PM, svilen wrote:
> i dont know if it can help, but i found that u can include
> local_seq=true in the view options, and that will expose
> doc._local_seq as the last change# of the doc. which eventualy
> can skip some steps below..
Thanks. I had completely forgotten about that.
https://github.com/afters/couch-incarnate/issues/1
i dont know if it can help, but i found that u can include
local_seq=true in the view options, and that will expose
doc._local_seq as the last change# of the doc. which eventualy
can skip some steps below..
> 1) GET changes to SourceDB.
> 2) query view using ["metadata", changed.id] k
On Wed, Dec 12, 2012 at 12:50:52PM -0500, nicholas a. evans wrote:
I've got some views that simply must use iterative map reduce. The
greatest need is simply to sort based on the value of the first reduction.
I'm looking over my options, and I'm going to list them here. I'm looking
for someone to tell me that I've missed an option, or
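The "store view output in a separate db, keyed by the view key" chaining discussed in this thread can be sketched in plain Node.js (in-memory objects stand in for the databases; all names are illustrative):

```javascript
// Pass 1: map source docs to first-reduction rows, stored in chainedDB
// under _id = the view key, so a later change only rewrites that key.
function chainPass1(sourceDocs, chainedDB) {
  sourceDocs.forEach(function (doc) {
    if (doc.tag) {
      var id = 'tag:' + doc.tag;                // _id = view key
      chainedDB[id] = (chainedDB[id] || 0) + 1; // first reduction: a count
    }
  });
}

// Pass 2: a second map/reduce over chainedDB, e.g. sorting by the value
// of the first reduction, which a single CouchDB view cannot do.
function chainPass2(chainedDB) {
  return Object.keys(chainedDB)
    .map(function (id) { return { tag: id.slice(4), count: chainedDB[id] }; })
    .sort(function (a, b) { return b.count - a.count; });
}
```

In a real setup pass 1 would be driven from the _changes feed of the source db, and pass 2 would be an ordinary view on the intermediate db.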
On Oct 29, 2012, at 11:46 PM, Pulkit Singhal wrote:
I was wondering if there are already utility methods present in CouchDB
1.2.0 that perform a case-insensitive comparison?
Not that I know of. You can do this by lowercasing each string and comparing
them, or by emitting lowercased keys from your map function.
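A sketch of the lowercased-key approach (the field name is illustrative, and emit is a parameter only so it runs outside CouchDB):

```javascript
// Emit a case-folded key so comparisons ignore case.
function map(doc, emit) {
  if (doc.name) {
    emit(doc.name.toLowerCase(), null);
  }
}
```

Then query with ?key="woodward" after lowercasing the search term client-side.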
I was wondering if there are already utility methods present in CouchDB
1.2.0 that perform a case-insensitive comparison?
I know it's really easy to write one, but if it's there and it's already part
of the core JS engine's exposed methods, I would rather not rewrite it.
> (...) so i should do a "Join" based on productID from Order and Product.
Please be aware that using Map/Reduce will force you to reduce drastically the
number of "joins" you'll make.
To do that, you
- Orders: {orderID, orderRowNumber, customerID, productID, price, quantity, date}
Now, i must get some statistics based on customerID, and i wrote this
map/reduce functions:
function(doc) {
  if (doc.type == "customer") {
    emit(doc._id, doc);
  } else if (doc.type == "order") {
    emit(doc.customerID, doc); // assumed branch; original truncated
  }
}
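For reference, the usual view-collation "join" can be sketched like this (the complex-key layout is an assumption; field names follow the schemas in the message, and emit is a parameter so the sketch runs outside CouchDB). The customer sorts immediately before its orders, so one key-range query returns both:

```javascript
// Emit [customerID, 0] for the customer and [customerID, 1] for each
// order; collation puts the customer row first, its orders after it.
function map(doc, emit) {
  if (doc.type == 'customer') {
    emit([doc._id, 0], doc);
  } else if (doc.type == 'order') {
    emit([doc.customerID, 1], doc);
  }
}

// Simulates startkey=[customerID] & endkey=[customerID, 2].
function rowsFor(rows, customerID) {
  return rows
    .filter(function (r) { return r.key[0] === customerID; })
    .sort(function (a, b) { return a.key[1] - b.key[1]; });
}
```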
Oops, this slipped by me earlier.
The engine isn't relevant, CouchDB supports pluggable view servers.
CouchDB currently builds a view group sequentially (though different view
groups build concurrently), but after the BigCouch merge this will change to be
parallel (up the number of shards of yo
On Aug 3, 2012, at 1:54 AM, Jan Fajerski
wrote:
> What Javascript engine is used,
SpiderMonkey.
> how is workload distributed (if at all)
I don't believe it is. Erlang itself is good at multiprocessing, but it runs
SpiderMonkey in a separate process. I don't know whether it spawns more than one.
JavaScript (or any other language).
>
> [1] http://guide.couchdb.org/draft/views.html
>
> - mathias
On Aug 2, 2012, at 12:12, Jan Fajerski wrote:
Hi,
I am researching map reduce implemenations for distributed database systems.
Is there a paper or documentation on how this is done in CouchDB? Or is the
source code the answer? If so would you be so kind to point me to a good start
in the source code?
Many thanks in advance,
Jan
Hi,
That's not currently possible in vanilla CouchDB. You can do it on cloudant.com
via a chained map reduce
(http://support.cloudant.com/customer/portal/articles/359310-chained-mapreduce-views).
I think you could also do the same thing using the rcouch distribution
(http://lists.refu
On Mon, Jul 2, 2012 at 11:12 PM, João Ramos wrote:
> Now my problem is that I also want to filter by date, for example
You actually want 2 unrelated ways of querying your data; the only
thing I can see is that you use 2 different views, one for each of
your queries.
--
Matthieu RAKOTOJAONA
I personally store dates by standard ms number and take the trouble to
figure out the startkey and endkey. Standard date functions can easily let
you pick a particular year, mon, day, or any other range.
On Mon, Jul 2, 2012 at 2:12 PM, João Ramos wrote:
Hi,
I have a map function that emits these keys:
[doc.type, 2012, 2, 14]
[doc.type, 2012, 2, 14]
[doc.type, 2012, 4, 22]
[doc.type, 2012, 5, 23]
This works great because I can get exactly what I want (adjusting the
group_level accordingly): for each doc type, how many exist each day.
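Simulated in plain JS, this is how those [type, year, month, day] keys behave under group_level (the doc fields and the count reduce are illustrative):

```javascript
// Emit a date-structured array key, value 1 (like a _count reduce).
function map(doc, emit) {
  emit([doc.type, doc.y, doc.m, doc.d], 1);
}

// group_level=N keeps the first N key elements and sums the rows.
function groupLevel(rows, n) {
  var out = {};
  rows.forEach(function (r) {
    var k = JSON.stringify(r.key.slice(0, n));
    out[k] = (out[k] || 0) + r.value;
  });
  return out;
}
```

Range-filtering by date on top of this works with e.g. startkey=[type, 2012, 2] and endkey=[type, 2012, 4, {}]; a date range across all types needs a second view with the date first, matching the "two views" advice above.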
Hi experts
I recently did my first map+reduce function.
I used CouchDb for this (obviously ;-)).
It is a graph traversal algorithm.
map() picks up raw objects from the database,
(re-)reduce creates a number of graphs out of these.
What I found when doing this, is that you cannot filter for any
aggregated value, given documents like:
{ ..., quality: 3.6 }
I'd like to be able to compute responses to views/queries that basically
ask:
how many documents do I have (broken down by 'fruit' and 'size')
that have the 'quality' greater than some cutoff?
And I'm having trouble identifying a way to do that.
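A common workaround, offered as a sketch rather than something stated in the thread: a reduce result cannot be filtered server-side, so the cutoff value has to live in the emitted key, e.g. as its last element; each (fruit, size) group is then range-queried with the cutoff in the startkey. Simulated in plain JS:

```javascript
// Emit quality as the last key element so it can be range-filtered.
function map(doc, emit) {
  emit([doc.fruit, doc.size, doc.quality], 1);
}

// Count rows for one (fruit, size) pair with quality > cutoff, i.e.
// startkey=[fruit, size, cutoff] & endkey=[fruit, size, {}] with reduce.
function countAbove(rows, fruit, size, cutoff) {
  return rows.filter(function (r) {
    return r.key[0] === fruit && r.key[1] === size && r.key[2] > cutoff;
  }).reduce(function (n, r) { return n + r.value; }, 0);
}
```

The cost is one query per (fruit, size) pair; getting all pairs in one request would need the quality bucketed into the key instead.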
Dave, if you need chaining, I really wouldn't mind anyone testing
CouchIncarnate and reporting bugs :)
On 14 December 2011 11:16, Dave Cottlehuber wrote:
On 14 December 2011 10:05, Robert Newson wrote:
> Chained map-reduce is not available in the open source BigCouch
> project, only on our hosted service at cloudant.com. Sorry!
>
> B.
A cryin' shame!!!
Chained map-reduce is not available in the open source BigCouch
project, only on our hosted service at cloudant.com. Sorry!
B.
I've been working on an open-source tool with a similar approach chaining
map-reduce:
https://github.com/afters/Couch-Incarnate
The need to pay the bills has kept me from putting more time into it
lately, so I can't say it's production quality.
Hopefully, this will change soon.
On 14 December 2011 05:46, Dominic Tarr wrote:
hi,
I need an iterative map reduce, preferably in couchdb.
I want to be able to do a map reduce to generate unique items, then another
map reduce to generate stats about those unique items.
from what I know about couchdb at the moment, it seems like I'd have to do
the first step at the document
Hi folks!
In the last few months, between projects, I've been tinkering with ideas and
implementations for making map-reduce chains work.
I think my current direction is solid enough to share - and I hope to get
from you guys either encouragement, insight, or a "hold it, I have a much
better idea".
I'm actually putting some effort in that direction at the moment. If it
bears fruit, I'll share the result.
a.
On 10 June 2011 13:25, Fabio Di Bernardini wrote:
Fabio, while chaining map/reduce views sounds useful, we found it easy to work
around it by updating our documents with the derived data. Subsequent views can
use that derived information in your documents. This also breaks down your
chain into well-defined steps, which you can apply to your
This feature does not exist in any release of Apache CouchDB to date.
B.
On 10 June 2011 11:25, Fabio Di Bernardini wrote:
I found an old mail from Chris Anderson about a patch enabling view chaining:
http://couchdb-development.1959287.n2.nabble.com/chaining-map-reduce-in-hovercraft-td3028752.html
I haven't found any other documentation since then.
Is there any news about view chaining in the latest 1.1 release?
Thank you.
Hi,
I would like to know about the CouchDB best practices related to data
modeling (schema design), writing map-reduce functions.
If you have any suggestions or references to some related resources, please
reply.
Thanks.
--
Best Regards,
Yogesh Khambia
Postgraduate Design Engineer
My understanding is the big benefit of Google doing it on multiple machines
is that they are concurrently processing huge amounts of data in batch.
CouchDB is incremental map/reduce, meaning that as documents are
updated/inserted/deleted the map function is run on them and the index for
the view is updated incrementally.
Can the map be performed in cluster mode? Is there an average number of
computers that run this operation?
For example, are the map reduces done by Google run on several machines in the
cluster?
http://labs.google.com/papers/mapreduce.html
best regards
Since you asked, I'll try to be more clear.
In my thesis I'm trying to verify in which cases it is best to use a
relational database, and in which a NoSQL one.
I'm now considering if and in what cases one can pass from a relational
database to a NoSQL one.
I took two columns in the TPC-H test and th
Dang hit the wrong key!
Usually the convention is you have some kind of 'type' attribute (so type
checking) on your documents, or you check that all the fields you are
interested in exist (more of the duck approach) in your map function. So
perhaps your map function is something like:
function(doc) {
  if (doc.type != 'lineitem') return; // 'lineitem' is a guess; original truncated
  // ... emit the fields you need here
}
Mauro, it is very difficult to help you with so little information.
There are no general rules to translate from SQL to couchdb map/reduce views.
You can find some hints at http://guide.couchdb.org/draft/cookbook.html
Try to explain what you are trying to do.
Marcello
2011/5/16 Mauro Fagnoni:
Stefan, I'm trying to convert this SQL query into an equivalent view over the
documents I created in couchdb. Unfortunately, it is the first time I'm using
this database and I would like to understand how to translate the query using
only the wiki, because I have not figured out how to do a lot.
2011/5/16 Stefan Math
Mauro,
i think a short example would be really helpful .. especially related
to your data-structure .. and the expected behaviour. do you already
have a reduce function, but it does not work like you'd want it to?
Regards
Stefan
On Mon, May 16, 2011 at 4:52 PM, Mauro Fagnoni wrote:
Hi all, I have to convert this SQL query into a couchdb view but I have some
problems with the reduce function. Can someone help me?
Many thanks and best regards
SQL QUERY:
SELECT
  L_RETURNFLAG,
  L_LINESTATUS,
  sum(L_QUANTITY) as sum_qty,
  sum(L_EXTENDEDPRICE) as sum
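A guess at the corresponding view, using only the columns visible in the fragment (the type name and emit shape are assumptions; emit is a parameter so the sketch runs outside CouchDB). Querying with ?group=true gives one row per (L_RETURNFLAG, L_LINESTATUS), like the GROUP BY:

```javascript
// Map: key = the GROUP BY columns, value = the quantities to sum.
function map(doc, emit) {
  if (doc.type === 'lineitem') {
    emit([doc.L_RETURNFLAG, doc.L_LINESTATUS],
         { sum_qty: doc.L_QUANTITY, sum_base_price: doc.L_EXTENDEDPRICE });
  }
}

// Reduce: pairwise sums; input and output share a shape, so the same
// function also works for rereduce.
function reduce(keys, values, rereduce) {
  return values.reduce(function (a, v) {
    return { sum_qty: a.sum_qty + v.sum_qty,
             sum_base_price: a.sum_base_price + v.sum_base_price };
  });
}
```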
Glad that did whatcha needed. (^_^)
-Zach
On Mon, Dec 6, 2010 at 2:32 PM, Zachary Zolton wrote:
> Then you'll want to use a range query:
>
> startkey=["Bush"]&endkey=["Bush",{}]
>
Just wanted to confirm this works perfectly and in combination with all the
other assistance in this thread gets me exactly what I need.
Not only does CouchDB
On Mon, Dec 6, 2010 at 2:27 PM, Aurélien Bénel wrote:
> Create a view with the following map function:
> function(o) {
> emit(o.lastName, o);
> }
>
> Then call the view with:
> ?key="Woodward"
>
> ... or maybe I don't understand your question?
>
Well, I've kind of come full circle here and
Mathew,
So, for instance, your view emits keys like:
["Bush", "George", "H", "W"]
["Bush", "George", "W"]
["Clinton", "William", "J"]
["Obama", "Barack", "H"]
["Reagan", "Ronald", "W"]
And you just want both rows with the last name of "Bush"?
Then you'll want to use a range query:
startkey=["Bush"]&endkey=["Bush",{}]
Hi Matthew,
> So what I'd like to be able to do is include key="Woodward" in the URL and
> have the value of the key in the URL be what's used in my match regex, or if
> there's a different way to do exact matches (because in this case I will
> always be pulling by exact key matches) and still
On Mon, Dec 6, 2010 at 1:32 PM, Matthew Woodward wrote:
> Is there a way to pass a specific last name as the key in the URL and have
> the results be the example I gave in my previous message, but only for the
> specific last name provided as the key?
>
Sorry, keep thinking of better ways to explain this.
On Mon, Dec 6, 2010 at 1:22 PM, Matthew Woodward wrote:
> I'm currently getting back things like that, which is fine. But, let's say
> I wanted to return only the records with "foo" as the first element of the
> array. How would I go about doing that?
>
Oh and I should have stated that I'm trying
On Fri, Dec 3, 2010 at 1:41 AM, Robert Newson wrote:
> You can't reduce your way out of that, I think. What you can do instead is;
>
Thanks--kind of where I wound up (key including everything, value being
null) but very helpful to get confirmation on that. Sorry if my generic
examples weren't acc
add group_level=1 to get the unique foo's.
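Both tricks simulated over sorted rows in plain JS: the ["Bush"]...["Bush",{}] range works because {} sorts after every string, and group_level=1 collapses rows down to the unique first key elements:

```javascript
// Simulates startkey=[first] & endkey=[first, {}]: every key whose
// first element equals `first` falls inside the range.
function rangeQuery(rows, first) {
  return rows.filter(function (r) { return r.key[0] === first; });
}

// Simulates group_level=1 used purely for dedupe: the unique first
// elements of all keys.
function uniqueFirst(rows) {
  var seen = {};
  rows.forEach(function (r) { seen[r.key[0]] = true; });
  return Object.keys(seen);
}
```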
On Fri, Dec 3, 2010 at 9:41 AM, Robert Newson wrote:
Matthew,
Your original message implied that you might have duplicates but I
didn't notice that you had documents with the same key but different
contents (and there isn't one in your example).
You can't reduce your way out of that, I think. What you can do instead is;
map:
function(doc) {
  // presumably: emit everything of interest as the key, with a null value
  emit([doc.foo, doc.bar], null);
}
On Thu, Dec 2, 2010 at 3:50 PM, Randall Leeds wrote:
> I think you may want to play w the ?group_level query parameter.
>
Thanks--messed with that but since my keys weren't unique across records it
didn't seem to make a difference. I'll look back into that though.
--
Matthew Woodward
I think you may want to play w the ?group_level query parameter.
-Randall
Sent from my unicorn-powered, heavy rainbow-calibre, surface-to-air
installation battery.
On Dec 2, 2010 6:18 PM, "Matthew Woodward" wrote:
On Thu, Dec 2, 2010 at 2:50 PM, Matthew Woodward wrote:
Starting to think--does rereduce do its thing based on unique *keys* as
opposed to anything to do with the value? That would certainly explain the
behavior I'm seeing, but of course means I may need to go back to the
drawing board to get where I want to go with this. ;-)
--
Matthew Woodward
On Thu, Dec 2, 2010 at 11:28 AM, Adam Kocoloski wrote:
> You need to perform the query with ?group=true. Best,
>
Thanks! Closer, but now I'm only getting one result per key which isn't
quite what I need.
When I added group=true I now get:
{"rows":[
{"key":"key1","value":["hi","there"]},
{"key"
On Dec 2, 2010, at 10:19 AM, Matthew Woodward wrote:
On Thu, Dec 2, 2010 at 3:46 AM, Robert Newson wrote:
> The simplest means to dedupe this is;
>
> function(keys, values, rereduce) {
> return values[0];
> }
>
>
Thanks! I'm sure I'm missing something, but I stuck this in as my reduce
function and I'm only getting one result back total. Do I do something wrong?
I'm catching on to the map bit of map/reduce decently, but now that I need
to reduce something I'm having some issues, so I'm hoping someone can steer
me in the right direction.
I have a view that outputs a key, and then an array as the value using the
following map function:
On Tue, Sep 21, 2010 at 9:19 AM, Christopher Bare
wrote:
> Hi Couch-istas,
>
>
Yes! Couchista FTW
--
http://www.readwriteweb.com/about#tyler
My website: http://list.pdxbrain.com
On Tue, Sep 21, 2010 at 18:19, Christopher Bare
wrote:
>
> I only vaguely understand the incremental indexing aspect of Couch,
> and welcome any comments about other differences between Couch's
> map-reduce and other forms; biting or not. There's lots of cool
> engineering
the native protocol of the web, after all. My problem boils down to counting
co-occurrence of sets of terms in documents, which can be expressed
nicely in terms of Map-reduce. I'd like to distribute the data purely
to parallelize and speed up these kinds of queries. The app will serve
only a handful of users.
> ...that, when I discuss CouchDB with them, say "why not use Hadoop?".
> Admittedly it's mostly because I'm trying to hold back a biting
> comment, since there's really no commonality besides the use of
> (distinct variants of) the Map/Reduce (family of) algorithm(s).
>
> B.
M/R := Map/Reduce
Generally, when I hear people comparing CouchDB M/R to Google M/R, I
remind them that Google M/R isn't real M/R.
CouchDB is a data store, whereas Hadoop is a data processing platform. While
they both have "MapReduce" functionality they aren't quite the same thing.
In CouchDB, when we use Map/Reduce, we create a single persistent
index of data using map and reduce operators. These indexes can then
be queried using single key or range lookups. Because of the
properties of Map/Reduce we're capable of updating these indexes
incrementally.
> several instances of CouchDB each running on their own nodes. Then, I
> want to run distributed map-reduce queries over the whole collection
> of documents. Do I understand correctly that Lounge is currently the
> way to do this?
Lounge is one way. BigCouch (just released) is another.
Hi Couch-potatoes,
I'm investigating using CouchDB for a data mining application and
could use some advice.
What I have in mind is sharding a collection of documents between
several instances of CouchDB each running on their own nodes. Then, I
want to run distributed map-reduce queries over the whole collection
of documents.