On Thu, Dec 2, 2010 at 3:36 AM, Mario Splivalo
wrote:
> On 12/01/2010 09:43 AM, Pierre C wrote:
>>
>> Note that in both cases postgres reports that the FK checks take 92-120
>> milliseconds... which is a normal time for about 4000 rows.
>> Inserting 4000 lines with just a few fields like yours should take
>> much less than 1 s...
Mario Splivalo wrote:
> It is OT, but could you please shed some light on that?
> Part of my next project is to test the performance of pg9 on both
> Windows and Linux systems, so I'd appreciate any data/info you both
> may have.
I don't know how much of it was the filesystem, but with both tuned
On 12/01/2010 10:43 PM, Pierre C wrote:
On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner
wrote:
Mladen Gogala wrote:
There is an operating system which comes with a very decent extent-based
file system and a defragmentation tool, included in the OS.
The file system is called "NTFS".
Been there, done that. Not only was performance quite poor compared
to Linux, but reliability and staff time to manage things suffered
in comparison to Linux.
On 12/01/2010 09:43 AM, Pierre C wrote:
Note that in both cases postgres reports that the FK checks take 92-120
milliseconds... which is a normal time for about 4000 rows.
Inserting 4000 lines with just a few fields like yours should take
much less than 1 s...
Where the rest of the time
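The per-constraint FK timings mentioned above are the kind of numbers
EXPLAIN ANALYZE prints as trigger statistics for an INSERT. A minimal
sketch of capturing them without keeping the test rows (table, column,
and constraint names are the ones used in this thread):

BEGIN;
EXPLAIN ANALYZE
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
                            drone_temperature, drone_pressure)
SELECT * FROM tmp_drones_history;
-- the plan output ends with one line per FK trigger, e.g.:
--   Trigger for constraint drones_history_fk__drones: time=... calls=...
ROLLBACK;  -- discard the test insert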
On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner
wrote:
Mladen Gogala wrote:
There is an operating system which comes with a very decent extent-based
file system and a defragmentation tool, included in the OS.
The file system is called "NTFS".
Been there, done that. Not only was performance quite poor compared
to Linux, but reliability and staff time to manage things suffered
in comparison to Linux.
Kevin Grittner wrote:
Mladen Gogala wrote:
Been there, done that. Not only was performance quite poor compared
to Linux, but reliability and staff time to manage things suffered
in comparison to Linux.
I must say that I am quite impressed with Windows 7 servers, especially
64 bit
Kenneth Marshall wrote:
Red Hat 6 comes with ext4, which is an extent-based filesystem with
decent performance.
Ken
But e4defrag is still not available. And, of course, Red Hat 6 is still
not available, either. Maybe Red Hat 7 will do the trick? I assume it
will work beautifully with PostgreSQL.
Mladen Gogala wrote:
> There is an operating system which comes with a very decent extent-based
> file system and a defragmentation tool, included in the OS.
> The file system is called "NTFS"
Been there, done that. Not only was performance quite poor compared
to Linux, but reliability and staff time to manage things suffered
in comparison to Linux.
On Wed, Dec 01, 2010 at 12:15:19PM -0500, Mladen Gogala wrote:
> Mario Splivalo wrote:
>> I'll try what Pierre suggested, on a whole new filesystem. This one did
>> get quite filled with thousands of files that I deleted while the
>> database was working.
>>
>> Mario
>>
>
> Yes, that is a good idea. That's the reason why we need a
> defragmentation tool on Linux.
Mario Splivalo wrote:
I'll try what Pierre suggested, on a whole new filesystem. This one did
get quite filled with thousands of files that I deleted while the
database was working.
Mario
Yes, that is a good idea. That's the reason why we need a
defragmentation tool on Linux. Unfortunately
On 12/01/2010 05:34 PM, Mladen Gogala wrote:
Mario Splivalo wrote:
Yes, as Mladen Gogala had advised. No noticeable change in performance -
it's still slow :)
Declaring constraints as deferrable doesn't do anything as such; you
have to actually set the constraints deferred to have an effect. You
have to do it within a transaction.
Mario Splivalo wrote:
Yes, as Mladen Gogala had advised. No noticeable change in performance -
it's still slow :)
Declaring constraints as deferrable doesn't do anything as such; you
have to actually set the constraints deferred to have an effect. You
have to do it within a transaction.
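A minimal sketch of that pattern, assuming the two FK constraints named
in this thread were created with the DEFERRABLE attribute:

BEGIN;
SET CONSTRAINTS drones_history_fk__samples,
                drones_history_fk__drones DEFERRED;
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
                            drone_temperature, drone_pressure)
SELECT * FROM tmp_drones_history;
COMMIT;  -- the deferred FK checks run here, in one batch at commit time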
So, I did. I ran the whole script in psql, and here is the result for
the INSERT:
realm_51=# explain analyze INSERT INTO drones_history (sample_id, drone_id,
drone_log_notice, drone_temperature, drone_pressure) SELECT * FROM
tmp_drones_history;
Just once.
OK, another potential problem eliminated; it gets strange...
If I have 5000 lines in a CSV file (that I load into a 'temporary' table
using COPY) I can be sure that drone_id there is a PK. That is because the
CSV file contains measurements from all the drones, one measurement per
drone. I u
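For reference, a sketch of that load step; the file path and the column
types are assumptions, only the table and column names appear in this
thread:

CREATE TEMPORARY TABLE tmp_drones_history (
    sample_id         integer,
    drone_id          integer,
    drone_log_notice  character varying,
    drone_temperature integer,
    drone_pressure    integer
);
-- one measurement per drone, so drone_id is unique within the file
COPY tmp_drones_history FROM '/path/to/measurements.csv' WITH CSV;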
On 12/01/2010 02:47 AM, Joshua D. Drake wrote:
> On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:
>> The database for monitoring certain drone statuses is quite simple:
>>
>
>> This is the slow part:
>> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
>> drone_temperature, drone_pressure)
On 12/01/2010 01:51 AM, Pierre C wrote:
>
>> Now I tried removing the constraints from the history table (including
>> the PK) and the inserts were fast. After a few 'rounds' of inserts I
>> added the constraints back, and several rounds after that were fast
>> again. But then it was all the same: an insert of some 11k rows took
>> 4 seconds (with all constraints).
On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:
> The database for monitoring certain drone statuses is quite simple:
>
> This is the slow part:
> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
> drone_temperature, drone_pressure)
> SELECT * FROM tmpUpdate;
>
> For 100 rows this takes around 2 seconds. For 1000 rows this takes
> around 40 seconds.
Now I tried removing the constraints from the history table (including
the PK) and the inserts were fast. After a few 'rounds' of inserts I added
the constraints back, and several rounds after that were fast again. But
then it was all the same: an insert of some 11k rows took 4 seconds (with
all constraints).
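That drop-and-recreate cycle can also be done explicitly around a bulk
load; re-adding a FK validates all existing rows in one scan rather than
firing a trigger per row. A sketch using the table, column, and constraint
names from this thread:

ALTER TABLE drones_history DROP CONSTRAINT drones_history_fk__samples;
ALTER TABLE drones_history DROP CONSTRAINT drones_history_fk__drones;

-- bulk load while no FK triggers fire per row
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
                            drone_temperature, drone_pressure)
SELECT * FROM tmp_drones_history;

-- re-adding validates the whole table in a single pass
ALTER TABLE drones_history ADD CONSTRAINT drones_history_fk__samples
    FOREIGN KEY (sample_id) REFERENCES samples (sample_id);
ALTER TABLE drones_history ADD CONSTRAINT drones_history_fk__drones
    FOREIGN KEY (drone_id) REFERENCES drones (drone_id);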
On 11/30/2010 05:26 PM, Mladen Gogala wrote:
At the beginning of the load, you should defer all of the deferrable
constraints, setting constraints deferred and issuing the copy statement
within a transaction block, like this:
scott=# begin;
BEGIN
Time: 0.203 ms
scott=# set constraints all deferred;
On 30/11/10 05:53, Pierre C wrote:
Yes, since (sample_id, drone_id) is the primary key, postgres created a
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id, drone_id) covers sample_id, but if you make searches on
drone_id alone it is likely to be very slow.
On 11/29/2010 05:53 PM, Pierre C wrote:
Yes, since (sample_id, drone_id) is the primary key, postgres created a
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id, drone_id) covers sample_id, but if you make searches on
drone_id alone it is likely to be very slow.
On 11/29/2010 05:47 PM, Pierre C wrote:
realm_51=# vacuum analyze verbose drones;
INFO: vacuuming "public.drones"
INFO: scanned index "drones_pk" to remove 242235 row versions
DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.
INFO: "drones": removed 242235 row versions in 1952 pages
DETAIL: CPU 0.01s/0.02u sec elapsed 0.03 sec.
I'm just back from vacation, so I apologize in advance if I missed
anything of importance. Here is something to consider:
Instead of using the statement you used to create the table, try the
following:
CREATE TABLE drones_history (
    drone_id integer not null,
    sample_id integer
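The suggested statement is cut off above; given the advice elsewhere in
this thread about deferring constraints, presumably the point is to
declare the FKs DEFERRABLE. A sketch of what such DDL could look like
(the non-integer column types are assumptions):

CREATE TABLE drones_history (
    drone_id          integer not null,
    sample_id         integer not null,
    drone_log_notice  character varying,
    drone_temperature integer,
    drone_pressure    integer,
    PRIMARY KEY (sample_id, drone_id),
    CONSTRAINT drones_history_fk__samples
        FOREIGN KEY (sample_id) REFERENCES samples (sample_id)
        DEFERRABLE INITIALLY IMMEDIATE,
    CONSTRAINT drones_history_fk__drones
        FOREIGN KEY (drone_id) REFERENCES drones (drone_id)
        DEFERRABLE INITIALLY IMMEDIATE
);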
Yes, since (sample_id, drone_id) is the primary key, postgres created a
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id, drone_id) covers sample_id, but if you make searches on drone_id
alone it is likely to be very slow.
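If queries do filter on drone_id alone, the usual fix is a second,
single-column index, since the composite PK index can only be used
efficiently from its leading column. A minimal sketch (the index name
is hypothetical):

CREATE INDEX drones_history_drone_id_idx ON drones_history (drone_id);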
realm_51=# vacuum analyze verbose drones;
INFO: vacuuming "public.drones"
INFO: scanned index "drones_pk" to remove 242235 row versions
DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.
INFO: "drones": removed 242235 row versions in 1952 pages
DETAIL: CPU 0.01s/0.02u sec elapsed 0.03 sec.
INFO:
On 11/29/2010 08:11 AM, Mark Kirkwood wrote:
On 29/11/10 00:46, Mario Splivalo wrote:
This is the slow part:
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
drone_temperature, drone_pressure)
SELECT * FROM tmpUpdate;
For 100 rows this takes around 2 seconds. For 1000 rows this takes
around 40 seconds.
On 11/28/2010 10:50 PM, Pierre C wrote:
I pasted the DDL at the beginning of my post.
Ah, sorry, didn't see it ;)
The only indexes the tables have are the ones created because of PK
constraints. Table drones has around 100k rows. Table drones_history
has around 30M rows. I'm not sure what additional info you'd want, but
I'll be more than happy to provide it.
On 29/11/10 00:46, Mario Splivalo wrote:
This is the slow part:
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
drone_temperature, drone_pressure)
SELECT * FROM tmpUpdate;
For 100 rows this takes around 2 seconds. For 1000 rows this takes
around 40 seconds. For 5000 rows t
I pasted the DDL at the beginning of my post.
Ah, sorry, didn't see it ;)
The only indexes the tables have are the ones created because of PK
constraints. Table drones has around 100k rows. Table drones_history has
around 30M rows. I'm not sure what additional info you'd want, but I'll
be more than happy to provide it.
On 11/28/2010 07:56 PM, Pierre C wrote:
When I remove the foreign key constraints (drones_history_fk__samples and
drones_history_fk__drones) (I leave the primary key on drones_history)
then that INSERT, even for 50k rows, takes no more than a second.
So, my question is - is there anything I can do to make INSERTS with PK
faster?
When I remove the foreign key constraints (drones_history_fk__samples and
drones_history_fk__drones) (I leave the primary key on drones_history)
then that INSERT, even for 50k rows, takes no more than a second.
So, my question is - is there anything I can do to make INSERTS with PK
faster? Or, si
The database for monitoring certain drone statuses is quite simple:
CREATE TABLE samples (
    sample_id integer not null primary key,
    sample_timestamp timestamp not null default now()
);

CREATE TABLE drones (
    drone_id integer not null primary key,
    drone_log_notice ch