On Fri, Mar 4, 2016 at 12:05 PM, Jan Lehnardt wrote:
> It will not :)
Darn, now I have to write a wrapper ;)
To answer your question:
Since I'm using a shared VMware environment, I ran my benchmark several
times, to hopefully mitigate load spikes from other VMs. For each test run,
I pu…
> On 04 Mar 2016, at 16:52, Peyton Vaughn wrote:
>
> Thanks for the help! Turning delayed_commits back on did indeed make a big
> difference.
Sorry, hit send too quickly there. What numbers are you seeing now?
Best
Jan
--
> I had run across that setting already in the docs, but just
> assum…
> On 04 Mar 2016, at 16:52, Peyton Vaughn wrote:
>
> Thanks for the help! Turning delayed_commits back on did indeed make a big
> difference. I had run across that setting already in the docs, but just
> assumed it was set to false in 1.6 as well (I see in the docs sometimes
> there are annotati…
Thanks for the help! Turning delayed_commits back on did indeed make a big
difference. I had run across that setting already in the docs, but just
assumed it was set to false in 1.6 as well (I see in the docs sometimes
there are annotations like "Changed in version 1.2" - this might be a good
candi…
Hi,
The main reason is that 1.6 has couchdb/delayed_commits[1] = true by
default, while 2.0 has it set to false instead, to save you from the fun of
distributed issues. No delayed commits means that every write
operation hits the disk with an fsync() call, without any intermediate
buffering which delayed commits c…
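[Editor's note: for anyone who wants the 1.6 behaviour back, the flag lives in
the [couchdb] section of the ini configuration. A minimal local.ini fragment —
a sketch, assuming a stock 2.0 dev setup; note it trades durability for write
throughput:]

```ini
; local.ini — restore the 1.6 default: writes are buffered and
; flushed up to a second later instead of fsync'd on every request
[couchdb]
delayed_commits = true
```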
Hello,
I've been using 1.6 for several months now, and wanted to try out 2.0. But
from the start, I'm experiencing much slower performance with 2.0. In both
cases I'm using the docker images (klaemo/couchdb:1.6 and
klaemo/couchdb:2.0-dev), with a small program that pushes a few thousand
documents
Hi,
We have developed an iOS application with replication from:
{"couchdb":"Welcome","version":"1.1.1","bigcouch":"0.4.0"}.
We are facing a performance issue with parameterized filtered replication. As
required, we need to filte…
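[Editor's note: for readers unfamiliar with parameterized filtered
replication, the general shape is sketched below; the design-doc name, filter
name, and fields are illustrative, not taken from the poster's app. The filter
function lives in a design document on the source database:]

```js
// _design/app — hypothetical filter; it runs server-side against every
// candidate document, which is why filtered replication can be slow
{
  "_id": "_design/app",
  "filters": {
    "by_owner": "function(doc, req) { return doc.owner === req.query.owner; }"
  }
}
```

[and the replication request passes the parameter through query_params:]

```js
// Body POSTed to /_replicate (source/target names are placeholders)
{
  "source": "http://remote:5984/db",
  "target": "localdb",
  "filter": "app/by_owner",
  "query_params": { "owner": "alice" }
}
```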
On 12 October 2011 16:33, Robert Newson wrote:
> ooh, my math is way off, ignore ;)
No, I think you were right, assuming your 2000 rows/s calculation was
for 3.5M in 30mins.
>
> On 12 October 2011 16:32, Robert Newson wrote:
>> The 3.5M row response is not formed in memory. :) It's done line by line.
ooh, my math is way off, ignore ;)
On 12 October 2011 16:32, Robert Newson wrote:
> The 3.5M row response is not formed in memory. :) It's done line by line.
>
> that said, that's almost 2000 rows per second, which doesn't sound
> that bad to me.
>
> B.
>
> On 12 October 2011 16:26, Matt Goodall
The 3.5M row response is not formed in memory. :) It's done line by line.
that said, that's almost 2000 rows per second, which doesn't sound
that bad to me.
B.
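[Editor's note: a quick sanity check on that figure — the 3.5M rows and ~30
minutes come from the thread; the arithmetic below is just illustration:]

```shell
# 3,500,000 rows streamed in roughly 30 minutes: how many rows per second?
rows=3500000
secs=$((30 * 60))     # 1800 seconds
echo $((rows / secs)) # integer rows per second
```

[which is where the "almost 2000 rows per second" estimate comes from.]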
On 12 October 2011 16:26, Matt Goodall wrote:
> On 12 October 2011 14:22, Arnaud Bailly wrote:
>> Hello,
>> We have started experiment…
On 12 October 2011 14:22, Arnaud Bailly wrote:
> Hello,
> We have started experimenting with CouchDb as our backend, being especially
> interested with the changes API, and we ran into performances issues.
> We have a DB containing aournd 3.5M docs, each about 10K in size. Running
> the following
Hello,
We have started experimenting with CouchDB as our backend, being especially
interested in the changes API, and we ran into performance issues.
We have a DB containing around 3.5M docs, each about 10K in size. Running
the following query on the database:
http://192.168.1.166:5984/infowar
On May 25, 2010, at 5:53 PM, Adam Christian wrote:
> Hello everyone,
>
> I have a "conf" db with the following documents:
>
> (env)(db) [17:38] sauce $curl localhost:5984/conf/_all_docs
> {"total_rows":8,"offset":0,"rows":[
> {"id":"_design/amis","key":"_design/amis","value":{"rev":"1-d210474cd
Hello everyone,
I have a "conf" db with the following documents:
(env)(db) [17:38] sauce $curl localhost:5984/conf/_all_docs
{"total_rows":8,"offset":0,"rows":[
{"id":"_design/amis","key":"_design/amis","value":{"rev":"1-d210474cd70114aa12a9a0a45e5d0f20"}},
{"id":"_design/takos","key":"_design/ta…
-Original Message-
From: ko...@fillibach.de [mailto:ko...@fillibach.de]
Sent: Tuesday, November 03, 2009 12:28 PM
To: user@couchdb.apache.org; Sebastian Negomireanu
Subject: Re: Performance issue
Hello,
> I am encountering a big performance problem with CouchDB. I get
> response
+40-269-210008 | off...@justdesign.ro | www.justdesign.ro
-Original Message-
From: Brian Candler [mailto:b.cand...@pobox.com]
Sent: Wednesday, November 04, 2009 12:06 AM
To: Sebastian Negomireanu
Cc: user@couchdb.apache.org
Subject: Re: Performance issue
On Tue, Nov 03, 2009 at 11:07
> -Original Message-
> From: ko...@fillibach.de [mailto:ko...@fillibach.de]
> Sent: Tuesday, November 03, 2009 12:28 PM
> To: user@couchdb.apache.org; Sebastian Negomireanu
> Subject:
On Tue, Nov 03, 2009 at 11:07:53PM +0200, Sebastian Negomireanu wrote:
> Here is the pcap.
Thanks, nicely readable by
$ tcpdump -r couchdb.pcap -n -s0 -X tcp port 5984 | less
From this, it is very clear that the delays are at the client side. I
summarise this as:
20:22:18.126308 client HEAD req…
On 3 Nov 2009, at 23:06, Brian Candler wrote:
On Tue, Nov 03, 2009 at 11:07:53PM +0200, Sebastian Negomireanu wrote:
Here is the pcap.
Thanks, nicely readable by
$ tcpdump -r couchdb.pcap -n -s0 -X tcp port 5984 | less
From this, it is very clear that the delays are at the client side. I
su…
On Tue, Nov 03, 2009 at 10:37:43PM +0200, Sebastian Negomireanu wrote:
> I've attached the whole HTTP conversation.
It would be much more useful packet-by-packet and with timestamps. Could you
post the original pcap file somewhere? Or at least something similar
to tcpdump -X output?
-Original Message-
From: Robert Newson [mailto:robert.new...@gmail.com]
Sent: Tuesday, November 03, 2009 4:15 PM
To: user@couchdb.apache.org
Subject: Re: Performance issue
One thing that snagged me one time was a client that sent "Expect:
Continue"
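[Editor's note, general HTTP/1.1 background rather than anything from the
thread: a client sending Expect: 100-continue transmits the headers, then
waits for the server's interim "100 Continue" response before sending the
body; if either side handles this poorly, every request pays an extra round
trip or a fixed timeout. A request carrying the header looks like:]

```http
PUT /db/doc1 HTTP/1.1
Host: localhost:5984
Content-Type: application/json
Content-Length: 26
Expect: 100-continue

{"field": "small payload"}
```

[Most clients let you suppress it; with curl, for example, an empty header
override (curl -H 'Expect:') disables the 100-continue handshake.]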
Sebastian Negomireanu wrote:
> The issue is not sending documents in batches, but the extremely slow
> insertion of individual documents.
You still haven't included a packet capture that has timings. Without that
we can't tell how long the actual Cou…
-Original Message-
From: kol...@gmail.com [mailto:kol...@gmail.com] On Behalf Of Alex P
Sent: Tuesday, November 03, 2009 11:50 PM
To: user@couchdb.apache.org
Subject: Re: Performance issue
> -Original Message-
> From: Robert Newson [mailto:robert.new...@gmail.com]
> Sent: Tuesday, November 03, 2009 2:22 PM
> To: user@couchdb.apache.org
> Subject: Re: Performance issue
>
> What HTTP client are you using?
om: Robert Newson [mailto:robert.new...@gmail.com]
Sent: Tuesday, November 03, 2009 2:22 PM
To: user@couchdb.apache.org
Subject: Re: Performance issue
What HTTP client are you using?
On Tue, Nov 3, 2009 at 11:06 AM, Sebastian Negomireanu
wrote:
> Ok I will try that and come back with results.
>
009 11:59 AM
> To: user@couchdb.apache.org
> Subject: Re: Performance issue
>
> Sebastian Negomireanu wrote:
>> In both scenarios, I get response times around 500ms
>
> In these kinds of situations I am a big fan of usin…
-Original Message-
From: Roger Binns [mailto:rog...@rogerbinns.com]
Sent: Tuesday, November 03, 2009 11:59 AM
To: user@couchdb.apache.org
Subject: Re: Performance issue
Sebastian Negomireanu wrote:
> In b…
Hello,
I am encountering a big performance problem with CouchDB. I get
response times around 500ms (and sometimes more). I've tried simple
operations like adding a very minimal document (about 5 fields, a total
payload of max 0.5 KB / doc). I've also tried the operation while running it
in a l…
Sebastian Negomireanu wrote:
> In both scenarios, I get response times around 500ms
In these kinds of situations I am a big fan of using Wireshark to see exactly
what the response time is. There could be all sorts of funky stuff going on
such as proxy…
Hello,
I am encountering a big performance problem with CouchDB. I've tried it both
on Windows and on an Ubuntu server 9.10 virtual machine. In both scenarios,
I get response times around 500ms (and sometimes more). I've tried simple
operations like adding a very minimal document (about 5 fiel…