What I described in the last mail is what I am trying to do.
But as I said earlier, I only get about 3-4 inserts per second because of my
problem.
So that works out to roughly one insert every 30 minutes for each table.
On Sat, Aug 23, 2008 at 7:31 PM, Loic Petit <[EMAIL PROTECTED]> wrote:
> One sensor (so one table) sends a packet each second (for 3000 sensors).
> => So we have: 1 insert per second for each of 3000 tables (and their indexes).
> Fortunately there are no updates or deletes on them...
Wait, I'm confused, I tho
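For scale, the aggregate write load implied by the quoted setup is easy to work out; a back-of-the-envelope sketch (the five-indexes-per-table figure is quoted elsewhere in this thread):

```python
# Back-of-the-envelope aggregate write load for the setup quoted above:
# 3000 sensor tables, one INSERT per second into each one.
n_tables = 3000
inserts_per_table_per_sec = 1
indexes_per_table = 5  # figure given elsewhere in this thread

total_inserts_per_sec = n_tables * inserts_per_table_per_sec
index_entries_per_sec = total_inserts_per_sec * indexes_per_table

print(total_inserts_per_sec)  # 3000 row inserts/s across the whole database
print(index_entries_per_sec)  # 15000 index-entry writes/s on top of that
```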
On Sat, Aug 23, 2008 at 6:59 PM, Loic Petit <[EMAIL PROTECTED]> wrote:
> One table contains about 5 indexes: the timestamp, one for each of the 3
> sensor value types, and one for packet counting (to measure packet loss).
> (I reckon that this is quite heavy, but at least the timestamp and the value
> indexes are really useful.)
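Those five indexes make every insert touch several pages. A rough worst-case figure for the resulting page traffic, assuming each touched page is cold and the PostgreSQL default 8 kB block size (this ignores WAL and caching, so it is an upper bound, not a prediction):

```python
# Worst-case dirty-page traffic if every page touched by an insert is cold:
# one heap page plus one leaf page per index, 8 kB each (PostgreSQL default).
page_size = 8192
pages_per_insert = 1 + 5   # heap page + 5 index leaf pages
inserts_per_sec = 3000     # one insert/s for each of the 3000 sensor tables

bytes_per_sec = inserts_per_sec * pages_per_insert * page_size
print(bytes_per_sec / 1e6)  # ~147 MB/s in the (pessimistic) worst case
```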
On Sat, Aug 23, 2008 at 6:47 PM, Loic Petit <[EMAIL PROTECTED]> wrote:
> I was a bit confused about reads versus writes, sorry! I understand what you
> mean...
> But do you think that the IO cost (of only one page) needed to handle the
> index write is greater than 300 ms? Because each insert into any of these
> tables is that slow.
> NB: between my "small" and my "big" tests the
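One way to sanity-check the 300 ms question: if every page an insert touches needs one random IO (roughly 8 ms each is an assumed figure for a 7200 rpm disk), the worst case is still far below 300 ms, which suggests the slowness comes from something beyond a single page write (per-commit fsync, checkpoints, or many cold pages spread across thousands of indexes are plausible candidates):

```python
# Sanity check: if each cold page costs one random IO at ~8 ms
# (a rough assumption for a 7200 rpm disk), what does one insert cost?
random_io_ms = 8
cold_pages = 1 + 5         # heap page + 5 index leaf pages

worst_case_ms = cold_pages * random_io_ms
print(worst_case_ms)       # 48 ms -- well under the observed 300 ms
```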
On Sat, Aug 23, 2008 at 6:09 PM, <[EMAIL PROTECTED]> wrote:
> On this smaller test, the indexes are over the allowed memory size (I've got
> over 400,000 readings per sensor) so they are mostly written to disk.
They're always written to disk. Just sometimes they're not read.
Note that the OS cac
And on the big test, I had small indexes (< page_size) because I only had
about 5-10 rows per table, thus it was 3000 * 8 kB = 24 MB, which is lower
than the allowed memory size.
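The arithmetic here checks out under the stated assumptions (8 kB pages, roughly one index page per table in the big test; the per-entry byte figures for the small test are rough assumptions, not measured values):

```python
page_size = 8192   # PostgreSQL default block size
n_tables = 3000

# Big test: 5-10 rows per table, so each table's index data is about one page.
small_index_total = n_tables * page_size
print(small_index_total / 1e6)  # ~24.6 MB, matching the "3000*8kb = 24mb" figure

# Smaller test: 400,000 readings per sensor; assuming very roughly 16-24
# bytes per btree entry, each single-column index is several MB on its own.
rows = 400_000
print(rows * 16 / 1e6)  # ~6.4 MB low estimate per index
print(rows * 24 / 1e6)  # ~9.6 MB high estimate per index
```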
On Sat, Aug 23, 2008 at 1:35 PM, <[EMAIL PROTECTED]> wrote:
Actually, I've got another test system with only a few sensors (thus few
tables) and it's working well (<10 ms per insert) with all the indexes.
I know they slow down my inserts, but I need them to query the big
tables (each one can reach millions of rows over time) really fast.
Regards
Loïc
Each INDEX creates a delay on INSERT. Try to measure performance without any
indexes.
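A minimal harness for that measurement might look like the sketch below; `do_insert` is a hypothetical placeholder for the real INSERT (with a live database it would be, e.g., a `cursor.execute()` call via psycopg2), so only the timing pattern is shown:

```python
import time

def time_inserts(do_insert, n=1000):
    """Return the mean latency in ms over n calls of do_insert(i)."""
    start = time.perf_counter()
    for i in range(n):
        do_insert(i)
    return (time.perf_counter() - start) / n * 1000.0

# Placeholder standing in for the real INSERT (hypothetical); with a live
# database this would execute the INSERT statement for row i.
rows = []
mean_ms = time_inserts(rows.append)
print(f"{mean_ms:.4f} ms per insert")
```

Running it twice, once against a table with all five indexes and once against a copy with none, isolates the index overhead.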
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance