Thanks everyone! The final db is 1.85 GB. I altered my script
to use _bulk_docs; it was much faster, and the final file didn't take as much
space. Either way, the result after compacting is much better than I
expected!
On Mon, May 14, 2012 at 8:03 PM, Paul Davis wrote:
> On Mon, May 14, 2012 at 3:08 PM, James Marca wrote:
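The `_bulk_docs` switch mentioned above can be sketched roughly as follows. This is a minimal sketch, not the poster's actual script: `make_bulk_docs_payload`, the field names, and the `localhost:5984/mydb` URL are assumptions for illustration; the `{"docs": [...]}` body shape is CouchDB's documented format for `POST /{db}/_bulk_docs`.

```python
import json

def make_bulk_docs_payload(docs):
    """Build the JSON body CouchDB expects for POST /{db}/_bulk_docs."""
    return json.dumps({"docs": docs})

# Hypothetical detector readings, batched into one request instead of
# issuing one PUT per document.
batch = [{"_id": "00000001", "value": 42},
         {"_id": "00000002", "value": 17}]
payload = make_bulk_docs_payload(batch)
# POST this to http://localhost:5984/mydb/_bulk_docs with
# Content-Type: application/json (any HTTP client will do).
```

Batching many documents per request avoids the per-request overhead of one PUT per id, which is presumably why the rebuilt database loaded faster and ended up smaller.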
On Mon, May 14, 2012 at 3:08 PM, James Marca
wrote:
> On Mon, May 14, 2012 at 03:42:01PM -0400, Tim Tisdall wrote:
>> Yes, I did it with a PUT for each id. When you call for compaction, is
>> there a way to see the progress or a way to know if it's done?
>
> the "status" tool in Futon will show y
On Mon, May 14, 2012 at 01:42:00PM -0700, Jens Alfke wrote:
>
> On May 14, 2012, at 1:08 PM, James Marca wrote:
>
> For example, I have detector data with one record per 30 seconds. If
> I combine data into daily docs and save, after compaction the
> resulting database is much smaller than if I keep one document per
> observation.
On Mon, May 14, 2012 at 9:13 PM, Tim Tisdall wrote:
> Does anyone have some recommendations on how to reduce the size of the db?
You can do a lot by tweaking your _id. See this link:
http://wiki.apache.org/couchdb/Performance#File_size
Basically:
- use shorter _ids
- use sequential _ids. If…
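The two points above can be sketched together. This is an illustrative sketch, not code from the thread: the generator name and the 8-hex-digit width are assumptions; the underlying advice (short, monotonically increasing `_id`s keep CouchDB's b-tree appends localized and the file smaller than random UUIDs) is from the linked wiki page.

```python
from itertools import count

def sequential_ids(width=8):
    """Yield short, fixed-width, monotonically increasing doc _ids."""
    for n in count():
        yield format(n, "x").zfill(width)  # hex keeps them compact

gen = sequential_ids()
ids = [next(gen) for _ in range(3)]
# ids sort in insertion order: "00000000", "00000001", "00000002"
```

Because the ids are zero-padded to a fixed width, their lexicographic order matches their numeric order, which is what makes them "sequential" from the b-tree's point of view.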
On May 14, 2012, at 1:08 PM, James Marca wrote:
For example, I have detector data with one record per 30 seconds. If
I combine data into daily docs and save, after compaction the
resulting database is much smaller than if I keep one document per
observation.
Isn’t that just because there are a…
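The daily-doc rollup James describes can be sketched like this. A minimal sketch under assumed names: `rollup_daily`, the `(iso_timestamp, value)` input shape, and the `"t"`/`"v"`/`"readings"` field names are all illustrative, not from the thread; the idea is simply to amortize per-document overhead by storing ~2880 thirty-second readings in one document per day.

```python
from collections import defaultdict

def rollup_daily(observations):
    """Combine (iso_timestamp, value) pairs into one doc per day."""
    days = defaultdict(list)
    for ts, value in observations:
        days[ts[:10]].append({"t": ts, "v": value})  # "YYYY-MM-DD" key
    return [{"_id": day, "readings": rows}
            for day, rows in sorted(days.items())]

docs = rollup_daily([("2012-05-14T00:00:00", 1.0),
                     ("2012-05-14T00:00:30", 2.0)])
# One document with _id "2012-05-14" holding both readings
```

Using the date as the `_id` also gives the short, sequential keys recommended earlier in the thread.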
On Mon, May 14, 2012 at 04:01:25PM -0400, Tim Tisdall wrote:
> Okay, I see that you can tell that it's running by doing a GET on the
> database in question and looking for "compact_running": true. However,
> I don't seem to see any changes in the db's file size.
As the documentation of the co…
On Mon, May 14, 2012 at 03:42:01PM -0400, Tim Tisdall wrote:
> Yes, I did it with a PUT for each id. When you call for compaction, is
> there a way to see the progress or a way to know if it's done?
the "status" tool in Futon will show you compaction progress
Also, two other things. Insertions…
Okay, I see that you can tell that it's running by doing a GET on the
database in question and looking for "compact_running": true. However,
I don't seem to see any changes in the db's file size.
On Mon, May 14, 2012 at 3:42 PM, Tim Tisdall wrote:
> Yes, I did it with a PUT for each id. When you call for compaction, is
> there a way to see the progress or a way to know if it's done?
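The `compact_running` check described above can be sketched as follows. The function name and the sample body are assumptions for illustration; the `compact_running` field in the `GET /{db}` response is CouchDB's documented database-info flag.

```python
import json

def compaction_running(db_info_body):
    """Read the compact_running flag from a GET /{db} response body."""
    return json.loads(db_info_body).get("compact_running", False)

# Hypothetical response body from GET http://localhost:5984/mydb
sample = '{"db_name": "mydb", "compact_running": true}'
running = compaction_running(sample)
```

As I understand it, compaction writes the compacted data to a separate `.compact` file and only swaps it in at the end, so the original file's size not changing mid-compaction (as observed above) is expected.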
Yes, I did it with a PUT for each id. When you call for compaction, is
there a way to see the progress or a way to know if it's done?
On Mon, May 14, 2012 at 3:20 PM, Paul Davis wrote:
> How did you insert them? If you did a PUT per docid you'll still want
> to compact afterwards.
>
> On Mon, May 14, 2012 at 2:13 PM, Tim Tisdall wrote:
How did you insert them? If you did a PUT per docid you'll still want
to compact afterwards.
On Mon, May 14, 2012 at 2:13 PM, Tim Tisdall wrote:
> I've got several gigabytes of data that I'm trying to store in a couchdb on
> a single machine. I've placed a section of the data in an sqlite db and…
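The "compact afterwards" step Paul mentions is a single HTTP call. This sketch only describes the request rather than sending it; the helper name and `localhost` URL are assumptions, while `POST /{db}/_compact` with a JSON content type is CouchDB's documented compaction trigger. The call returns immediately and compaction runs in the background.

```python
def compact_request(base_url, db):
    """Describe the POST /{db}/_compact request that starts compaction."""
    return {
        "method": "POST",
        "url": f"{base_url.rstrip('/')}/{db}/_compact",
        "headers": {"Content-Type": "application/json"},
    }

req = compact_request("http://localhost:5984", "mydb")
# Equivalent shell command:
#   curl -X POST -H "Content-Type: application/json" \
#        http://localhost:5984/mydb/_compact
```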
I've got several gigabytes of data that I'm trying to store in a couchdb on
a single machine. I've placed a section of the data in an sqlite db and
the file is about 5.9gb. I'm currently placing the same data into couchdb
and while it hasn't finished yet, the file size is already 10gb and
continu…
Hello,
I am test driving spine.app ( https://github.com/maccman/spine.app ),
and before I invest too much time, I wonder whether it makes sense to
use it with CouchDB.
--
Regards,
Brian
Jason Smith writes:
> Hi, again.
>
> Yes, polling as a result of CouchDB continuous replication is
> discounted 99%. Polling a database all month long incurs a roughly
> 10-cent cost. Since there is a $5.00 monthly credit, continuous
> replication is free, to a first approximation.
>
> The discount…