On 12/07/2011 09:23 AM, Havasvölgyi Ottó wrote:
> Thanks, Josh.
> The only reason I tried 8.4 first is that it was available for Debian as
> a compiled package, so it was simpler to install. Anyway, I am going to
> test 9.1 too and will post the results.
>
If you're using squeeze, you can
On 12/06/2011 09:00 PM, Tom Lane wrote:
> Mario Splivalo writes:
>> I have 8.4.8 on production and 8.4.9 on test, could that explain the
>> difference in plans chosen?
>
> I'd wonder first if you have the same statistics settings on both.
> The big problem here is th
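To compare statistics settings on the two servers, something along these
lines would do (a sketch; the table name is borrowed from the query later
in this thread, and per-column targets may simply be unset):

    -- Cluster-wide statistics target (compare on both servers):
    SHOW default_statistics_target;

    -- Any per-column overrides (attstattarget is -1 when unset):
    SELECT attname, attstattarget
    FROM pg_attribute
    WHERE attrelid = 'tubesite_object'::regclass
      AND attnum > 0
      AND attstattarget <> -1;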
On 12/06/2011 09:29 PM, Kevin Grittner wrote:
> "Kevin Grittner" wrote:
>
>> But both servers develop that estimate for the join size.
>
> [sigh] Those *were* both from the production server. Please show
> us the EXPLAIN ANALYZE from the other server.
Huh, right... missed that one. Here is
On 12/06/2011 09:17 PM, Kevin Grittner wrote:
>
> The hash join path must look more expensive on the first machine,
> for some reason.
>
> Mario, could you post the result of running this query from both
> servers?:
>
> http://wiki.postgresql.org/wiki/Server_Configuration
Sure. Here is from
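For reference, the query on that wiki page is essentially the following
(a sketch; the page may have been revised since):

    SELECT name, current_setting(name), source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override');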
I have a fairly simple query:
SELECT
FROM "tubesite_image"
INNER JOIN "tubesite_object"
ON ("tubesite_image"."object_ptr_id" = "tubesite_object"."id")
WHERE
"tubesite_object"."site_id" = 8
ORDER BY
"tubesite_object"."pub_date" ASC LIMIT 21;
That query is having a bad qu
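One standard remedy for a WHERE + ORDER BY + LIMIT query like the above is
a composite index matching both the filter and the sort; a minimal sketch
(the index name is invented, and whether the planner picks it depends on
the data distribution):

    CREATE INDEX tubesite_object_site_pubdate
        ON tubesite_object (site_id, pub_date);

    -- With this index the planner can return rows for site_id = 8 already
    -- ordered by pub_date and stop after 21, instead of relying on a row
    -- estimate for the whole join.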
On 07/13/2011 02:53 AM, Mario Splivalo wrote:
On 07/13/2011 12:39 AM, Tom Lane wrote:
Mario Splivalo writes:
On 07/12/2011 10:04 PM, Tom Lane wrote:
What you need to look into is why the estimated join size is 9400 rows
when the actual join size is zero. Are both tables ANALYZEd? Are you
intentionally selecting rows that have no join
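Checking the ANALYZE question directly is cheap (a sketch; table names as
used elsewhere in the thread):

    ANALYZE tubesite_object;
    ANALYZE tubesite_image;

    -- Verify when the tables were last analyzed, manually or by autovacuum:
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname IN ('tubesite_object', 'tubesite_image');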
On 07/12/2011 10:04 PM, Tom Lane wrote:
Mario Splivalo writes:
Limit (cost=0.00..415.91 rows=21 width=8) (actual time=11263.089..11263.089 rows=0 loops=1)
  -> Nested Loop (cost=0.00..186249.55 rows=9404 width=8) (actual time=11263.087..11263.087 rows=0 loops=1)
Why is plan
Hi, all.
I have a query, looking like this:
SELECT
pub_date
FROM
tubesite_object
INNER JOIN tubesite_image
ON tubesite_image.object_ptr_id = tubesite_object.id
WHERE
tubesite_object.site_id = 8
AND tubesite_object.pub_date < E'2011-07-12 13:
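A useful diagnostic for LIMIT-sensitive plans is to run the statement with
and without the LIMIT; a sketch (the pub_date predicate is truncated in
this digest, so it is omitted here):

    EXPLAIN ANALYZE
    SELECT pub_date
    FROM tubesite_object
    INNER JOIN tubesite_image
        ON tubesite_image.object_ptr_id = tubesite_object.id
    WHERE tubesite_object.site_id = 8
    ORDER BY tubesite_object.pub_date ASC LIMIT 21;

    -- Then the same statement without LIMIT 21: if that plan is fast, the
    -- misestimated row count is what makes the LIMIT plan look cheap.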
On 12/01/2010 10:43 PM, Pierre C wrote:
On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner
wrote:
Mladen Gogala wrote:
There is an operating system which comes with a very decent extent
based file system and a defragmentation tool, included in the OS.
The file system is called "NTFS"
Been t
On 12/01/2010 09:43 AM, Pierre C wrote:
Note that in both cases postgres reports that the FK checks take 92-120
milliseconds... which is a normal time for about 4000 rows.
Inserting 4000 lines with just a few fields like yours should take
much less than 1 s...
Where the rest of the time
On 12/01/2010 05:34 PM, Mladen Gogala wrote:
Mario Splivalo wrote:
Yes, as Mladen Gogala had advised. No noticeable change in performance -
it's still slow :)
Declaring constraints as deferrable doesn't do anything as such; you
have to actually set the constraints deferred
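That is, DEFERRABLE on the constraint only allows deferral; the session
still has to ask for it. A minimal sketch, assuming the FKs on
drones_history were created DEFERRABLE:

    BEGIN;
    SET CONSTRAINTS ALL DEFERRED;  -- FK checks now run once at COMMIT
    INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
                                drone_temperature, drone_pressure)
    SELECT * FROM tmpUpdate;
    COMMIT;                        -- deferred checks fire here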
On 12/01/2010 02:47 AM, Joshua D. Drake wrote:
> On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:
>> The database for monitoring certain drone statuses is quite simple:
>>
>
>> This is the slow part:
>> INSERT INTO drones_history (sample_
On 12/01/2010 01:51 AM, Pierre C wrote:
>
>> Now I tried removing the constraints from the history table (including
>> the PK) and the inserts were fast. After a few 'rounds' of inserts I
>> added constraints back, and several rounds after that were fast again.
>> But then all the same. Insert of som
On 11/30/2010 05:26 PM, Mladen Gogala wrote:
At the beginning of the load, you should defer all of the deferrable
constraints, setting constraints deferred and issuing the copy statement
within a transaction block, like this:
scott=# begin;
BEGIN
Time: 0.203 ms
scott=# set constraints all deferr
On 11/29/2010 05:53 PM, Pierre C wrote:
Yes, since (sample_id, drone_id) is the primary key, postgres created a
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id,drone_id) covers sample_id but if you make searches on
dron
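In other words, the composite primary key index serves lookups on its
leading column sample_id, but searches on drone_id alone would want their
own index; a sketch (the index name is invented):

    CREATE INDEX drones_history_ix_drone_id
        ON drones_history (drone_id);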
On 11/29/2010 05:47 PM, Pierre C wrote:
realm_51=# vacuum analyze verbose drones;
INFO: vacuuming "public.drones"
INFO: scanned index "drones_pk" to remove 242235 row versions
DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.
INFO: "drones": removed 242235 row versions in 1952 pages
DETAIL: CPU 0.01s
On 11/29/2010 08:11 AM, Mark Kirkwood wrote:
On 29/11/10 00:46, Mario Splivalo wrote:
This is the slow part:
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
drone_temperature, drone_pressure)
SELECT * FROM tmpUpdate;
For 100 rows this takes around 2 seconds. For 1000 rows
On 11/28/2010 10:50 PM, Pierre C wrote:
I pasted DDL at the beginning of my post.
Ah, sorry, didn't see it ;)
The only indexes the tables have are the ones created because of PK
constraints. Table drones has around 100k rows. Table drones_history
has around 30M rows. I'm not sure what additional
On 11/28/2010 07:56 PM, Pierre C wrote:
When I remove foreign constraints (drones_history_fk__samples and
drones_history_fk__drones) (I leave the primary key on drones_history)
then that INSERT, even for 50k rows, takes no more than a second.
So, my question is - is there anything I can do to
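Building on that observation, a common bulk-load pattern is to drop the
FKs, load, and re-create them, so each constraint is validated once over
the whole table instead of row by row. A hedged sketch (constraint names
from the message above; the referenced columns are assumed from the schema
below):

    ALTER TABLE drones_history DROP CONSTRAINT drones_history_fk__samples;
    ALTER TABLE drones_history DROP CONSTRAINT drones_history_fk__drones;

    -- ... bulk INSERT / COPY into drones_history here ...

    ALTER TABLE drones_history
        ADD CONSTRAINT drones_history_fk__samples
        FOREIGN KEY (sample_id) REFERENCES samples (sample_id);
    ALTER TABLE drones_history
        ADD CONSTRAINT drones_history_fk__drones
        FOREIGN KEY (drone_id) REFERENCES drones (drone_id);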
The database for monitoring certain drone statuses is quite simple:
CREATE TABLE samples (
sample_id integer not null primary key,
sample_timestamp timestamp not null default now()
);
CREATE TABLE drones (
drone_id integer not null primary key,
drone_log_notice ch
I have a simple database schema, containing just three tables:
samples, drones, drones_history.
Now, those tables hold data for the drones for a simulation. Each
simulation dataset will grow to around 10 GB in around 6 months.
Since the data is not related in any way I was thinking in separatin
Scott Marlowe wrote:
CREATE INDEX photo_info_data_ix_field_value
ON user_info_data USING btree (field_value);
So, there is an index on (user_id, field_name). Postgres is using the index
for user_id (...WHERE user_id = 12345) but not for field_name (...WHERE
field_name = 'f-spot'). When I add an extra inde
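A btree on (user_id, field_name) cannot efficiently serve a predicate on
field_name alone, since field_name is not the leading column; the extra
index would be along these lines (name invented):

    CREATE INDEX user_info_data_ix_field_name
        ON user_info_data (field_name);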
Scott Marlowe wrote:
It's not really solved, it's just a happy coincidence that the current
plan runs well. In order to keep the query planner making good
choices you need to increase stats target for the field in the index
above. The easiest way to do so is to do this:
alter database mydb se
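The command is cut off above; presumably it continues roughly as follows
(the target value 100 is illustrative, not from the original message):

    ALTER DATABASE mydb SET default_statistics_target = 100;
    -- Takes effect for new sessions; then re-gather statistics:
    ANALYZE;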
Tom Lane wrote:
Mario Splivalo writes:
-> Bitmap Heap Scan on photo_info_data u (cost=39134.84..63740.08
rows=109024 width=50) (actual time=270.464..270.469 rows=3 loops=2)
Recheck Cond: ((u.field_name)::text = (t.key)::text)
-> Bitmap Index S
I have two tables, like this:
Big table:
CREATE TABLE photo_info_data
(
photo_id integer NOT NULL,
field_name character varying NOT NULL,
field_value character varying,
CONSTRAINT photo_info_data_pk PRIMARY KEY (photo_id, field_name)
)
WITH (OIDS=FALSE);
CREATE INDEX user_info_data_ix_f
Tom Lane wrote:
> Mario Splivalo writes:
>> Is this difference normal?
>
> It's hard to tell, because you aren't comparing apples to apples.
> Try a prepared statement, like
[...cut...]
> which should produce results similar to the function. You could
> the
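The prepared-statement experiment Tom suggests (elided in this digest)
would look roughly like this, reusing the function's query shape from the
message below; the WHERE clause is assumed, since the function body is
truncated:

    PREPARE q(integer) AS
        SELECT COUNT(*)::integer
        FROM v_messages_memo
        LEFT JOIN messages_memo_displayed ON id = message_id
        WHERE service_id = $1;  -- assumed; the original predicate is cut off

    EXPLAIN ANALYZE EXECUTE q(1);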
Guillaume Cottenceau wrote:
>>> Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?
>> Usually the reason for this is that the planner chooses a different plan
>> when it has knowledge of the particular value you are searching for than
>> when it does not.
>
> Yes, and since
I have a function, looking like this:
CREATE OR REPLACE FUNCTION get_memo_display_queue_size(a_service_id integer)
RETURNS integer AS
$BODY$
SELECT
COUNT(*)::integer
FROM
v_messages_memo
LEFT JOIN messages_memo_displayed
ON id = message_id
WHERE
s
Mario Splivalo wrote:
Robert Haas wrote:
jura=# set enable_seqscan to false;
SET
jura=# explain analyze select * from transactions where
transaction_time_commit between '2008-01-01' and '2008-01-31 23:59:59';
Robert Haas wrote:
On Fri, Feb 6, 2009 at 12:41 PM, Kevin Grittner
wrote:
Robert Haas wrote:
What's weird about this example is that when he sets enable_seqscan to
off, the bitmap index scan plan is actually substantially faster, even
though it in fact does scan nearly the entire heap. I don
Tom Lane wrote:
Hardly surprising --- a search on the index's lowest-order column would
require scanning practically all of the index. (If you think about the
ordering of the index entries you'll see why.) If this is a typical
query then you need a separate index on transaction_time_commit.
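That is, something like (index name invented):

    CREATE INDEX transactions_ix_time_commit
        ON transactions (transaction_time_commit);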
I have a table, like this:
CREATE TABLE transactions
(
transaction_id integer NOT NULL DEFAULT
nextval('transactions_seq'::regclass),
transaction_type integer NOT NULL,
transaction_client_id integer NOT NULL,
transaction_destination_id integer NOT NULL,
transaction_operator_id integer
On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:
> On Sat, Jun 03, 2006 at 10:31:03AM +0100, [EMAIL PROTECTED] wrote:
> > I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)
> > One beast will be apache, and the other will be postgres.
> > I'm using httperf/autobench for m
ases the only thing I can suggest is to merge
> user_subscription_credits_given and user_subscription_credits_taken
> into one table so you don't need the UNION ALL.
See, that's an idea! :) Thnx, I'll try that.
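A sketch of that merge, assuming the two tables are mirror images of each
other (all names here are guesses; the original schemas are not shown in
this digest):

    -- One table with a signed amount instead of two tables and a UNION ALL:
    CREATE TABLE user_subscription_credits (
        user_id integer NOT NULL,
        amount  integer NOT NULL  -- positive = given, negative = taken
    );

    -- The former UNION ALL aggregate becomes a plain scan:
    SELECT user_id, SUM(amount) AS balance
    FROM user_subscription_credits
    GROUP BY user_id;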
Is it inappropriate to ask for a rough estimate on the availability of
8.2? :)
Mario
--
M
al time=0.032..12641.705 rows=747884 loops=1)
  -> Seq Scan on user_subscription_credits_taken (cost=1.00..100011145.43 rows=747843 width=8) (actual time=0.023..4386.769 rows=747884 loops=1)
Total runtime: 56536.774 ms
(13 rows)
Thank you all in advance,
Mario
--
Mario
On Wed, 2006-05-10 at 17:10 -0500, Jim C. Nasby wrote:
> On Thu, May 04, 2006 at 04:45:57PM +0200, Mario Splivalo wrote:
> Well, here's the problem...
>
> > -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)
> > (actual time=1074.984..
On Wed, 2006-05-03 at 13:58 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > I have a quite large query that takes over a minute to run on my laptop.
>
> The EXPLAIN output you provided doesn't seem to agree with the stated
> query. W
On Wed, 2006-05-03 at 10:20 -0500, Dave Dutcher wrote:
> > -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)
> > (actual time=68.322..529472.026 rows=57925 loops=1)
> >    -> Seq Scan on ticketing_codes_played
> > (cost=0.00..863.25 rows=57925 width=8) (actual time=0
outer".code_id)
Total runtime: 542000.093 ms
(27 rows)
I'll be more than happy to provide any additional information that I may
be able to gather. I'd be most happy if someone would scream something
like "four joins, smells like a poor design" because design is poor, but
lf with
me ears...
Thnx :)
Mike
--
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]
"I can do it quick, I can do it cheap, I can do it well. Pick any two."
The query is that very same query, just the values
1000 and 'C7ZP2U' are parameters for the function.
So, the second question would be why that query is much, much slower when
run from within the function. Is there a way to see an execution plan for
the query inside the function?
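There is no direct EXPLAIN for a query inside a SQL function, but a
prepared statement is planned the same way, with placeholders instead of
the literal values, so it approximates the function's plan; a generic
sketch (the table and columns are hypothetical stand-ins, since the
function's query is not shown here):

    -- Substitute the function's actual query, with the literals as $1/$2:
    PREPARE fn_body(integer, text) AS
        SELECT count(*) FROM codes
        WHERE game_id = $1 AND code = $2;  -- hypothetical names

    EXPLAIN ANALYZE EXECUTE fn_body(1000, 'C7ZP2U');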
abort the current transaction (although you could use a
> savepoint in the INSERT case to intercept the error).
>
>
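The savepoint variant Tom mentions would look roughly like this (table and
column names are hypothetical stand-ins):

    BEGIN;
    SAVEPOINT before_claim;
    -- Attempt the insert; a constraint violation aborts only to the savepoint:
    INSERT INTO claimed_codes (code_id, user_id) VALUES (42, 7);
    -- On error the client issues:
    --   ROLLBACK TO SAVEPOINT before_claim;
    -- and the surrounding transaction stays usable.
    COMMIT;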
This works perfectly, but sometimes the game has no codes, and I still
need to know exactly who came first, who was second, and so on... So a
locking table as Tom sug
On Tue, 2006-04-18 at 11:33 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> >> If there is concurrent locking,
> >> you're also running a big risk of deadlock because two processes might
> >> try to lock the same rows in different ord
WAL
configuration parameters, even put the log on separate disk spindles; it
did nothing.
Shall I reconsider the need for the exact lock I developed, or is there
something more I could do to speed things up?
Mario
--
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]
"I c
n($1, $2, $3) ON someTable.col =
someOtherFunction.col
WHERE
someCondition
$BODY$
LANGUAGE 'sql';
Thank you in advance,
Mario
--
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]
"I can do it quick, I can do it cheap, I can do it well. Pick any two."