Kaloyan Iliev Iliev wrote:
Hi,
I have an idea about your problem. Will it be difficult not to change
the entire code but only the queries? You can change the type in
Postgres to bool. Then, when selecting data, you can use a CASE..WHEN
to return 'Y' or 'N', or even write a little function which accepts
bool and returns 'Y' or 'N'.
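A minimal sketch of this suggestion, assuming a hypothetical table
ad_devices_sites_example with a bool column named active (all names
here are made up for illustration):

    -- Option 1: translate in the query itself.
    SELECT CASE WHEN active THEN 'Y' ELSE 'N' END AS active
    FROM ad_devices_sites_example;

    -- Option 2: a tiny helper function, so only the SELECT lists change.
    CREATE FUNCTION bool_to_yn(bool) RETURNS char(1) AS '
        SELECT CASE WHEN $1 THEN ''Y''::char(1) ELSE ''N''::char(1) END;
    ' LANGUAGE sql IMMUTABLE;

    SELECT bool_to_yn(active) AS active
    FROM ad_devices_sites_example;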
Miroslav Šulc wrote:
PFC wrote:
Your query seems to be of the form:
SELECT ... FROM main_table LEFT JOIN a lot of tables ORDER BY
sort_key LIMIT N OFFSET M;
I would suggest rewriting it in a simpler way: instead of
generating the whole result set, sorting it, and then grabbing a
slice, gener
Tom Lane wrote:
Just FYI, I did a quick search-and-replace on your dump to replace
varchar(1) by "char", which makes the column fixed-width without any
change in the visible data. This made hardly any difference in the
join speed though :-(. So that is looking like a dead end.
I'll try to chan
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> PFC wrote:
>> Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL
>> or an integer.
> Sure I could. The problem is our project still supports both MySQL and
> PostgreSQL. We used enum('Y','N') in MySQL so there would
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> I think so. I'll change the varchar(1) fields to char(1) where possible
char isn't faster than varchar on postgres. If anything it may be slightly
slower because every comparison first needs to pad both sides with spaces.
--
greg
--
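A quick illustration of the comparison semantics being referred to,
assuming standard bpchar vs. varchar behaviour (the literals are
arbitrary):

    -- char(n) ignores trailing blanks: the shorter value is effectively
    -- blank-padded before the comparison.
    SELECT 'Y'::char(3) = 'Y  '::char(3);        -- true
    -- varchar keeps trailing blanks significant.
    SELECT 'Y'::varchar(3) = 'Y  '::varchar(3);  -- false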
Miroslav Šulc wrote:
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path
Harald Fuchs wrote:
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL, so there would be a lot of
changes in the code if we changed to the BOOL data type.
Since BOOL is exactly what you want to express and since MySQL also
sup
Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
As there are a lot of varchar(1) columns in the AdDevicesSites table,
wouldn't it be helpful to change them to char(1)? Would it solve the
variable-width problem at least for some fields and speed the query up?
No, becaus
In article <[EMAIL PROTECTED]>,
Miroslav Šulc <[EMAIL PROTECTED]> writes:
>> Instead of a varchar(1) containing 'y' or 'n' you could use a
>> BOOL or an integer.
> Sure I could. The problem is our project still supports both MySQL and
> PostgreSQL. We used enum('Y','N') in My
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Actually, we already had a pending patch (from Atsushi Ogawa) that
>> eliminates that particular O(N^2) behavior in another way. After
>> applying it, I get about a factor-of-4 reduction in the runtime for
>> Miros
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> As there are a lot of varchar(1) columns in the AdDevicesSites table,
> wouldn't it be helpful to change them to char(1)? Would it solve the
> variable-width problem at least for some fields and speed the query up?
No, because char(1) isn't p
PFC wrote:
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL
or an integer.
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL, so there would be a lot of
changes in the code if we changed to the BOOL data type.
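For what it's worth, MySQL accepts BOOL as an alias for TINYINT(1), so a
shared schema using BOOL is possible in principle. On the PostgreSQL side,
a hypothetical one-off conversion of a single 'Y'/'N' flag column might
look like this (made-up names; ALTER COLUMN ... TYPE ... USING needs
PostgreSQL 8.0 or later):

    ALTER TABLE ad_devices_sites_example
        ALTER COLUMN visible TYPE bool
        USING (visible = 'Y');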
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL I
use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not of
the same type (smallint <-> integer (SERIAL)), in MySQL I use the same
types
Well both those things will make PostgreSQL slower...
Chris
--
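The second point (joining smallint columns against integer SERIAL keys)
can be fixed with a one-off schema change so both sides of the join have
the same type; a hypothetical example with made-up names (again PostgreSQL
8.0 or later for ALTER COLUMN ... TYPE):

    -- The parent key is a SERIAL, i.e. integer, so make the referencing
    -- column integer as well instead of smallint.
    ALTER TABLE ad_devices_example
        ALTER COLUMN region_idfk TYPE integer;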
Christopher Kings-Lynne wrote:
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL
I use 'enum'
2) in PostgreSQL in some cases I use connection fields that are not
of the same type (smallint <-> integer (SERIAL)), in MySQL I use the
same types
Well both those things will make P
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL or an
integer.
Your query seems to be of the form:
SELECT ... FROM main_table LEFT JOIN a lot of tables ORDER BY sort_key
LIMIT N OFFSET M;
I would suggest rewriting it in a simpler way: instead of generating
the whole result set
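One common shape for this kind of rewrite is to apply the ORDER BY,
LIMIT and OFFSET to the main table first, and only then join in the
lookup tables; a rough sketch with made-up names:

    SELECT s.*, r.name AS region_name
    FROM (
        SELECT *
        FROM ad_devices_sites_example
        ORDER BY sort_key
        LIMIT 50 OFFSET 100
    ) AS s
    LEFT JOIN regions_example r ON r.idpk = s.region_idfk
    ORDER BY s.sort_key;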
John Arbash Meinel wrote:
In fact, on MySQL I didn't see any slow reactions so I didn't measure
and inspect it. But I can try it if I figure out how to copy the
database from PostgreSQL to MySQL.
I figured you still had a copy of the MySQL database around to compare
against. You probably don't need to spend too
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path for field access does
Tom Lane wrote:
I wrote:
Since ExecProject operations within a nest of joins are going to be
dealing entirely with Vars, I wonder if we couldn't speed matters up
by having a short-circuit case for a projection that is only Vars.
Essentially it would be a lot like execJunk.c, except able to cope
I wrote:
> Since ExecProject operations within a nest of joins are going to be
> dealing entirely with Vars, I wonder if we couldn't speed matters up
> by having a short-circuit case for a projection that is only Vars.
> Essentially it would be a lot like execJunk.c, except able to cope
> with two
Miroslav Šulc <[EMAIL PROTECTED]> writes:
>> Is the data confidential? If you'd be willing to send me a pg_dump
>> off-list, I'd like to replicate this test and try to see where the time
>> is going.
>>
> Thank you very much for your offer. The data are partially confidential
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> There are only JOINs number against number.
Hmph. There's no reason I can see that hash joins should be as slow as
they seem to be in your test.
Is the data confidential? If you'd be willing to send me a pg_dump
off-list, I'd like
Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
shared_buffers = 48000         # min 16, at least max_connections*2, 8KB each
work_mem = 1024                # min 64, size in KB
maintenance_work_mem = 16384   # min 1024, size in KB
max_stack_depth = 2048         # min
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> I've just tried to uncomment the settings for these parameters, with
> no impact on the query speed.
> shared_buffers = 48000         # min 16, at least max_connections*2, 8KB each
> work_mem = 1024                # min 64, size in
John Arbash Meinel wrote:
Is there a reason to use varchar(1) instead of char(1). There probably
is 0 performance difference, I'm just curious.
No, not at all. I'm just not used to char().
Well, with cursors you can also do "FETCH ABSOLUTE 1 FROM
<cursor_name>", which sets the cursor position, and then you can
Tom Lane wrote:
John Arbash Meinel <[EMAIL PROTECTED]> writes:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
Indeed. The hash joins seem unreasonably slow considering how little
data they are processing (unless t
Miroslav Šulc wrote:
John Arbash Meinel wrote:
...
Many of the columns are just varchar(1) (because of the migration from
the MySQL enum field type), so the record is not as long as it might seem.
These fields are just switches (Y(es) or N(o)). The problem is users
can define their own templates and in
John Arbash Meinel <[EMAIL PROTECTED]> writes:
> How about a quick side track.
> Have you played around with your shared_buffers, maintenance_work_mem,
> and work_mem settings?
Indeed. The hash joins seem unreasonably slow considering how little
data they are processing (unless this is being run
John Arbash Meinel wrote:
It's actually more of a question as to why you are doing left outer
joins, rather than simple joins.
Are the tables not fully populated? If so, why not?
Some records do not contain full information (they are collected from
different sources which use different approach
John Arbash Meinel wrote:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
I have tried setting shared_buffers to 48000 now, but there is no speedup
(11,098.813 ms on the third try). The others are still at their defaults.
I'll check the documentation and
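Given that work_mem is still at its 1024 kB default here, one experiment
worth trying (the value below is an arbitrary example; work_mem is in
kilobytes in 8.0) is giving the sorts and hash joins more room for the
session and re-running the query:

    SET work_mem = 16384;   -- 16 MB for this session's sorts and hash joins
    SHOW work_mem;          -- confirm the new setting
    -- then re-run the slow query under EXPLAIN ANALYZE and compare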
Miroslav Šulc wrote:
Hi John,
thank you for your response.
I will comment on things separately.
John Arbash Meinel wrote:
...
These external tables contain information that is a unique parameter
of the AdDevice (like Position, Region, County, City etc.), in some
cases containing localized descriptions of
Miroslav Šulc wrote:
Hi John,
thank you for your response.
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
What version of postgres are you using? The above names changed in 8.0,
and 8.0 also has some performance improvement
Hi Ragnar,
Ragnar Hafstað wrote:
[snip output of EXPLAIN ANALYZE]
for those of us who have not yet reached the level where one can
infer it from the query plan, how about showing us the actual
query too?
I thought it would be sufficient to show where the main bottleneck is.
And in fact, the
Hi John,
thank you for your response.
John Arbash Meinel wrote:
You really need to post the original query, so we can see *why* postgres
thinks it needs to run the plan this way.
Here it is:
SELECT AdDevicesSites.IDPK, AdDevicesSites.AdDevicesSiteSizeIDFK,
AdDevicesSites.AdDevicesSiteRegionIDFK,
On Sun, 2005-03-13 at 16:32 +0100, Miroslav Šulc wrote:
> Hi all,
>
> I am new to PostgreSQL and query optimizations. We have recently moved
> our project from MySQL to PostgreSQL and we are having a performance
> problem with one of our most frequently used queries. On MySQL the speed
> was sufficien
Miroslav Šulc wrote:
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having a performance
problem with one of our most frequently used queries. On MySQL the speed
was sufficient but PostgreSQL chooses a time-expensive query plan.
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having a performance
problem with one of our most frequently used queries. On MySQL the speed
was sufficient but PostgreSQL chooses a time-expensive query plan. I would
like to