Tom Lane wrote:
I wrote:
Since ExecProject operations within a nest of joins are going to be
dealing entirely with Vars, I wonder if we couldn't speed matters up
by having a short-circuit case for a projection that is only Vars.
Essentially it would be a lot like execJunk.c, except able to cope
John Arbash Meinel wrote:
In fact, on MySQL I didn't see any slow responses, so I didn't measure
or inspect it. But I can try it if I figure out how to copy the
database from PostgreSQL to MySQL.
I figured you still had a copy of the MySQL around to compare to. You
probably don't need to spend
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path for field access
Christopher Kings-Lynne wrote:
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL
I use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not
of the same type (smallint - integer (SERIAL)), in MySQL I use the
same types
Well both those things will make
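One way to remove the cross-type join condition mentioned in point 2 is to align the referencing column with the SERIAL's underlying integer type. A minimal sketch (the table and column names here are hypothetical, not taken from the actual schema):

```sql
-- The SERIAL primary key is an integer; if a referencing column was
-- declared smallint, the planner has to join across types. Aligning
-- the types removes that mismatch (PostgreSQL 8.0+ syntax):
ALTER TABLE SomeChildTable
    ALTER COLUMN ParentIDPK TYPE integer;
```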
PFC wrote:
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL
or an integer.
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of
changes in the code if we would change to the BOOL data
Tom Lane wrote:
Miroslav Šulc [EMAIL PROTECTED] writes:
As there are a lot of varchar(1) columns in the AdDevicesSites table,
wouldn't it be helpful to change them to char(1)? Would it solve the
variable-width problem at least for some fields and speed the query up?
No, because
Harald Fuchs wrote:
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of
changes in the code if we would change to the BOOL data type.
Since BOOL is exactly what you want to express and since MySQL also
Tom Lane wrote:
Just FYI, I did a quick search-and-replace on your dump to replace
varchar(1) by char, which makes the column fixed-width without any
change in the visible data. This made hardly any difference in the
join speed though :-(. So that is looking like a dead end.
I'll try to
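For reference, the same fixed-width change Tom made by editing the dump could also be done in place. A sketch, assuming a hypothetical flag column (the real column names aren't shown in this thread):

```sql
-- Convert a varchar(1) flag to char(1) without changing the visible
-- data; char(1) is fixed-width (PostgreSQL 8.0+ syntax):
ALTER TABLE AdDevicesSites
    ALTER COLUMN SomeFlag TYPE char(1);
```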
Kaloyan Iliev Iliev wrote:
Hi,
I have an idea about your problem. Would it be difficult to change
not the entire code but only the queries? You could change the type in
Postgres to bool. Then, when selecting data, you can use a CASE..WHEN to
return 'Y' or 'N', or even write a little function which
Tom Lane wrote:
So I have some results. I have tested the query on both PostgreSQL 8.0.1
and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. The
PostgreSQL result is 11,667.916 ms; the MySQL result is 448.4 ms.
That's a fairly impressive discrepancy :-(, and even the slot_getattr()
patch
Tom Lane wrote:
Miroslav Šulc [EMAIL PROTECTED] writes:
seriously, I am far below this level of knowledge. But I can contribute
a test that (maybe) can help. I have rewritten the query so it JOINs the
varchar() fields (in fact all fields except the IDPK) at the last INNER
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having a performance
problem with one of our most often used queries. On MySQL the speed was
sufficient but PostgreSQL chooses a time-expensive query plan. I would
like to
Hi Ragnar,
Ragnar Hafsta wrote:
[snip output of EXPLAIN ANALYZE]
for those of us who have not yet reached the level where one can
infer it from the query plan, how about showing us the actual
query too?
I thought it would be sufficient to show where the main bottleneck is.
And in fact, the
John Arbash Meinel wrote:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
I have tried to set shared_buffers to 48000 now but no speedup
(11,098.813 ms third try). The others are still default. I'll see
documentation and
John Arbash Meinel wrote:
It's actually more of a question as to why you are doing left outer
joins, rather than simple joins.
Are the tables not fully populated? If so, why not?
Some records do not contain full information (they are collected from
different sources which use different
John Arbash Meinel wrote:
Is there a reason to use varchar(1) instead of char(1). There probably
is 0 performance difference, I'm just curious.
No, not at all. I'm just not used to char().
Well, with cursors you can also do FETCH ABSOLUTE 1 FROM
cursor_name, which sets the cursor position, and
Tom Lane wrote:
Miroslav Šulc [EMAIL PROTECTED] writes:
shared_buffers = 48000          # min 16, at least max_connections*2, 8KB each
work_mem = 1024                 # min 64, size in KB
maintenance_work_mem = 16384    # min 1024, size in KB
max_stack_depth = 2048          # min