On 19/01/2017 12:13, Tom Lane wrote:
Gustavo Rezende Montesino writes:
Being the client in question, I would like to make a little remark: What
we thought could be optimized here at first is on the row estimate of
the index scan; which could take null_frac into account. To put things
into pe
The picture is becoming clearer now. So, to recap, the issue is that plan
selection does not use the null_frac statistic properly to skip what seems to
be, in your case, 99% of the rows, which are NULL in the join column and
would be discarded anyway.
For completeness do you
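For reference, the null_frac statistic the planner has available can be inspected directly in pg_stats; a quick check against the example tables (names taken from the query shown later in the thread) would look like:

SELECT tablename, attname, null_frac
FROM pg_stats
WHERE tablename IN ('a', 'b');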
Gustavo Rezende Montesino writes:
> Being the client in question, I would like to make a little remark: What
> we thought could be optimized here at first is on the row estimate of
> the index scan; which could take null_frac into account. To put things
> into perspective, our similar case in p
I apologize; my statement about NULL being used in the join was incorrect, as
both Vitalii and Gustavo pointed out in their replies.
-
Phillip Couto
> On Jan 19, 2017, at 08:30, Vitalii Tymchyshyn wrote:
>
>
> Hi.
>
> In SQL "null == any value" resolves to fal
Hi.
In SQL "null == any value" resolves to false, so optimizer can safely skip
nulls from either side if any for the inner join.
Best regards, Vitalii Tymchyshyn
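A minimal sketch of that behaviour, using the example tables a and b from this thread (not taken from the original mail):

SELECT NULL = 1;   -- yields NULL, which a join condition treats as not matched
SELECT count(*)
FROM b JOIN a ON b.b = a.a
WHERE b.b IS NULL; -- always 0: rows with a NULL join key can never match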
NULL is still a value that may be paired with a NULL in a.a
>
> The only optimization I could see is if the a.a column has NOT NULL
>
Hello,
On 19/01/2017 11:04, Clailson wrote:
Hi Phillip.
Not sure if it is all that common. Curious: what if you put b.b IS NOT
NULL in the WHERE statement?
That is the question. At the company I work for, one of my clients asked
me: "Why doesn't PostgreSQL remove rows with null in colum
Ah, OK, that makes sense. I am curious whether there is actually a performance benefit
to doing that. In PostgreSQL, as per the execution plan you provided, the Merge
Join joins both sets after they have been sorted. If they are already sorted,
then the NULLs will all be grouped at the beginning or end. (C
Hi Phillip.
The only optimization I could see is if the a.a column has NOT NULL
defined while b.b does not have NOT NULL defined.
a.a is the primary key on table a and b.b is the foreign key on table b.
Table "public.a"
Column | Type | Modifiers
NULL is still a value that may be paired with a NULL in a.a
The only optimization I could see is if the a.a column has NOT NULL defined
while b.b does not have NOT NULL defined.
Not sure if it is all that common. Curious: what if you put b.b IS NOT NULL in
the WHERE statement?
-
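Phillip's suggestion, applied to the query from this thread, would read something like this (a sketch, not copied from the original mail):

EXPLAIN ANALYZE
SELECT *
FROM a
INNER JOIN b ON b.b = a.a
WHERE b.b IS NOT NULL;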
Hi,
Is there anything on the roadmap to optimize the inner join?
I have the situation above: table b has 400 rows with NULL in column b.
explain analyze select * from a inner join b on (b.b = a.a);
"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599
loops=
On 05/04/2015 12:23 AM, Anton Bushmelev wrote:
Hello PostgreSQL gurus, is it possible to tune a query with a join on a random
string?
I know that it is not a real-life example, but I need it for tests.
soe=# explain
soe-# SELECT ADDRESS_ID,
soe-# CUSTOMER_ID,
soe-# DATE_CREATED
Hello PostgreSQL gurus, is it possible to tune a query with a join on a random
string?
I know that it is not a real-life example, but I need it for tests.
soe=# explain
soe-# SELECT ADDRESS_ID,
soe-# CUSTOMER_ID,
soe-# DATE_CREATED,
soe-# HOUSE_NO_OR_NAME,
soe-#
On Sat, Aug 9, 2014 at 5:15 AM, Josh Berkus wrote:
> Folks,
>
> So one thing we tell users who have chronically long IN() lists is that
> they should create a temporary table and join against that instead.
> Other than not having the code, is there a reason why PostgreSQL
> shouldn't do something
Folks,
So one thing we tell users who have chronically long IN() lists is that
they should create a temporary table and join against that instead.
Other than not having the code, is there a reason why PostgreSQL
shouldn't do something like this behind the scenes, automatically?
--
Josh Berkus
Po
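A sketch of the manual workaround being described, with hypothetical table and column names:

-- instead of: SELECT * FROM orders WHERE customer_id IN (1, 2, 3, ... thousands more ...);
CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
INSERT INTO wanted_ids (id) VALUES (1), (2), (3);   -- ... the rest of the list ...
ANALYZE wanted_ids;                                  -- give the planner row estimates
SELECT o.*
FROM orders o
JOIN wanted_ids w ON w.id = o.customer_id;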
[Please keep the list copied.]
siva palanisamy wrote:
> Could you please guide me on how to minimize time consumption? I have
> PostgreSQL 8.1.4 on Linux.
Well, the first thing to do is to use a supported version of
PostgreSQL. More recent releases perform better, for starters.
http://wiki.pos
On 3 November 2011, 16:52, siva palanisamy wrote:
> I basically have 3 tables: one is the core table and the other 2 depend
> on the 1st. I have the requirement to add up to 7 records in the
> tables.
> I do have constraints (primary & foreign keys, index, unique etc) set for
> the tables. I
siva palanisamy wrote:
> I basically have 3 tables: one is the core table and the other
> 2 depend on the 1st. I have the requirement to add up to 7
> records in the tables. I do have constraints (primary & foreign
> keys, index, unique, etc.) set for the tables. I can't go for bulk
> import
I basically have 3 tables: one is the core table and the other 2 depend
on the 1st. I have the requirement to add up to 7 records in the tables.
I do have constraints (primary & foreign keys, index, unique, etc.) set for
the tables. I can't go for bulk import (using COPY command) as there is no
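One standard approach in this situation (not shown in the truncated replies above; table and column names are hypothetical) is to wrap the per-record INSERTs into a single transaction so that each record does not pay for a separate commit:

BEGIN;
INSERT INTO core_table (id, name)        VALUES (1, 'example');
INSERT INTO child_one  (core_id, detail) VALUES (1, 'detail 1');
INSERT INTO child_two  (core_id, detail) VALUES (1, 'detail 2');
-- ... repeat for the remaining records ...
COMMIT;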
croolyc wrote:
> can you help me with performance optimization
For overall tuning you could start here:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
If, after some general tuning, you are having problems with slow
queries, it is best if you pick one and show it with EXPLAI
Hi!
Can you help me with performance optimization?
On my machine I have 8 databases of about 1-2 GB each.
processor is:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E3110 @ 3.00GHz
stepping: 10
cpu MHz
2010/5/1 Cédric Villemain :
> 2010/4/28 Robert Haas :
>> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
>> wrote:
>>> In the first query, the planner doesn't use the information of the 2,3,4.
>>> It just does: I'll bet I'll have 2 rows in t1 (I think it should
>>> say 3, but it doesn't)
>>>
2010/4/28 Robert Haas :
> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
> wrote:
>> In the first query, the planner doesn't use the information of the 2,3,4.
>> It just does: I'll bet I'll have 2 rows in t1 (I think it should
>> say 3, but it doesn't)
>> So it divides the estimated number of
2010/4/29 Robert Haas :
> On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov wrote:
>> Even if that is done, it does not solve the original issue. If I
>> understood you right, there is currently no decent way of speeding up the query
>>
>> select *
>> from t2
>> join t1 on t1.t = t2.t
>> where t1.id =
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov wrote:
> Even if that is done, it does not solve the original issue. If I
> understood you right, there is currently no decent way of speeding up the query
>
> select *
> from t2
> join t1 on t1.t = t2.t
> where t1.id = X;
>
> except of the propagating
2010/4/28 Robert Haas :
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
wrote:
In the first query, the planner doesn't use the information of the 2,3,4.
It just does: I'll bet I'll have 2 rows in t1 (I think it should
say 3, but it doesn't)
So it divides the estimated number of ro
2010/4/28 Robert Haas :
> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
> wrote:
>> In the first query, the planner doesn't use the information of the 2,3,4.
>> It just does: I'll bet I'll have 2 rows in t1 (I think it should
>> say 3, but it doesn't)
>> So it divides the estimated number of
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
wrote:
> In the first query, the planner doesn't use the information of the 2,3,4.
> It just does: I'll bet I'll have 2 rows in t1 (I think it should
> say 3, but it doesn't)
> So it divides the estimated number of rows in the t2 table by 5
> (di
2010/4/26 Vlad Arkhipov :
>
>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov
>> wrote:
>>
>>>
>>> I don't think this is just an issue with statistics, because the same
>>> problem arises when I try executing a query like this:
>>>
>>
>> I'm not sure how you think this proves that it isn't a prob
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a query like this:
I'm not sure how you think this proves that it isn't a problem with
statistics, but I think what you sho
On Fri, Apr 23, 2010 at 6:53 PM, Tom Lane wrote:
> Robert Haas writes:
>> Hmm. We currently have a heuristic that we don't record a value as an
>> MCV unless it's more frequent than the average frequency. When the
>> number of MCVs is substantially smaller than the number of distinct
>> values
Robert Haas writes:
> Hmm. We currently have a heuristic that we don't record a value as an
> MCV unless it's more frequent than the average frequency. When the
> number of MCVs is substantially smaller than the number of distinct
> values in the table this is probably a good heuristic, since it
On Fri, Apr 23, 2010 at 3:22 PM, Cédric Villemain
wrote:
> 2010/4/23 Robert Haas :
>> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
>> wrote:
>>> 2010/4/23 Robert Haas :
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov
wrote:
> I don't think this is just an issue with statistic
2010/4/23 Robert Haas :
> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
> wrote:
>> 2010/4/23 Robert Haas :
>>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov
>>> wrote:
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a que
Cédric Villemain wrote:
> 2010/4/23 Robert Haas :
>> Since all your data is probably fully cached, at a first cut, I
>> might try setting random_page_cost and seq_page_cost to 0.005 or
>> so, and adjusting effective_cache_size to something appropriate.
>
> that will help correct the situation, b
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
wrote:
> 2010/4/23 Robert Haas :
>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
>>> I don't think this is just an issue with statistics, because the same
>>> problem arises when I try executing a query like this:
>>
>> I'm not sure how
2010/4/23 Robert Haas :
> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
>> I don't think this is just an issue with statistics, because the same
>> problem arises when I try executing a query like this:
>
> I'm not sure how you think this proves that it isn't a problem with
> statistics,
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
> I don't think this is just an issue with statistics, because the same
> problem arises when I try executing a query like this:
I'm not sure how you think this proves that it isn't a problem with
statistics, but I think what you should be fo
Greg Smith wrote:
I can't replicate your problem on the current development 9.0; all
three plans come back with results quickly when I just tried it:
Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual
time=0.049..0.959 rows=200 loops=1)
-> Seq Scan on t1 (cost=0.00..1.06 rows=1 widt
Greg Smith wrote:
Vlad Arkhipov wrote:
Please make this small optimization if it is possible. It seems that the
optimizer has all the information needed to create a fast plan, but it does
not do so.
This isn't strictly an optimization problem; it's an issue with
statistics the optimizer has to work
Vlad Arkhipov wrote:
Please make this small optimization if it is possible. It seems that the
optimizer has all the information needed to create a fast plan, but it does
not do so.
This isn't strictly an optimization problem; it's an issue with
statistics the optimizer has to work with, the ones ANAL
Please make this small optimization if it is possible. It seems that the
optimizer has all the information needed to create a fast plan, but it does not
do so.
create temp table t1 (id bigint, t bigint);
insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert i
I also think there have been changes in pgbench itself.
Make sure you run the same pgbench on both servers.
Dave
On 24-Mar-07, at 6:44 AM, Heikki Linnakangas wrote:
amrit angsusingh wrote:
I am trying to change my database server from the older one, i.e. 2-CPU
Xeon 2.4 32
bit 4Gb SDram Hdd SCSI RAID
amrit angsusingh wrote:
I am trying to change my database server from the older one, i.e. 2-CPU Xeon 2.4 32-bit,
4 GB SDRAM, SCSI RAID 5 HDD, FC 3 ix86, with PG 7.4.7, to the newer one
with 2-CPU Xeon 3.0 64-bit, 4 GB DDR RAM, SCSI RAID 5, FC6 x64, PG 8.14, and to try
to use roughly the same parameters from the pre
I am trying to change my database server from the older one, i.e. 2-CPU Xeon 2.4 32-bit,
4 GB SDRAM, SCSI RAID 5 HDD, FC 3 ix86, with PG 7.4.7, to the newer one
with 2-CPU Xeon 3.0 64-bit, 4 GB DDR RAM, SCSI RAID 5, FC6 x64, PG 8.14, and to try
to use roughly the same parameters from the previous postgresql.conf:
On 10/18/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote:
> so, imo alexander is correct:
> contacto varchar(255)
>
> ...is a false constraint. Why exactly 255? Is that where the dart landed?
BTW, if we get variable-length varlena headers
On Tue, Oct 17, 2006 at 12:25:39PM +0200, Ruben Rubio wrote:
> First of all, I have to say that I know the database is not OK. The
> people before me didn't do things right. I would like to
> normalize the database, but it takes too much time (there are hundreds
> of SQLs to change
On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote:
> so, imo alexander is correct:
> contacto varchar(255)
>
> ...is a false constraint. Why exactly 255? Is that where the dart landed?
BTW, if we get variable-length varlena headers at some point, then
setting certain limits might make
On Wed, Oct 18, 2006 at 11:31:44AM +0200, Mario Weilguni wrote:
> It's not a bad idea. Usually I use postal codes with 25 chars, and never had
> any problem. With text, the limit would be ~1 GB. No matter how much testing
> in the application happens, the varchar(25) as last resort is a good idea
Mario Weilguni wrote:
  contacto varchar(255),
  fuente varchar(512),
  prefijopais varchar(10)
Instead, use:
  contacto text,
  fuente text,
  prefijopais text
See the PostgreSQL manual for an explanation of varchar vs. text.
Enforcing length constraints with varchar(xyz
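If a hard length cap really is wanted, an equivalent sketch (table name hypothetical; column names taken from the thread) keeps text and makes the limit an explicit constraint:

CREATE TABLE contactos_ejemplo (
    contacto    text CHECK (char_length(contacto) <= 255),
    fuente      text,
    prefijopais text
);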
On Tuesday, 17 October 2006 at 17:50, Alexander Staubo wrote:
> On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
> >
> > Enforcing length constraints with varchar(xyz) is good database
> > design, not a
> > bad one. Using text everywhere might be tempting because it works,
> > but it's
> > not a g
>> so, imo alexander is correct:
>> contacto varchar(255)
Why do we have limits on this, for example?
contacto varchar(255)
1) First of all, this is a web application. People tend to enter really
strange things there, and a lot of rubbish. So, as s
Alexander Staubo wrote:
On Oct 17, 2006, at 17:10 , Craig A. James wrote:
These tables are particularly egregious examples of ignorant
database design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe
Mr. Staubo, and the Post
Christopher Browne wrote:
The world rejoiced as [EMAIL PROTECTED] (Shane Ambler) wrote:
Chris Browne wrote:
In the case of a zip code? Sure. US zip codes are integer values
either 5 or 9 characters long.
So your app will only work in the US?
And only for US companies that only have US client
The world rejoiced as [EMAIL PROTECTED] (Shane Ambler) wrote:
> Chris Browne wrote:
>> In the case of a zip code? Sure. US zip codes are integer values
>> either 5 or 9 characters long.
>
> So your app will only work in the US?
> And only for US companies that only have US clients?
>
>
> Sorry ha
Chris Browne wrote:
In the case of a zip code? Sure. US zip codes are integer values
either 5 or 9 characters long.
So your app will only work in the US?
And only for US companies that only have US clients?
Sorry had to dig at that ;-P
--
Shane Ambler
[EMAIL PROTECTED]
Get Sheeky @ http
[EMAIL PROTECTED] (Alexander Staubo) writes:
> On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
>
>> On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote:
>>> Lastly, note that in PostgreSQL these length declarations are not
>>> necessary:
>>>
>>>contacto varchar(255),
>>>fuente v
[EMAIL PROTECTED] ("Merlin Moncure") writes:
> On 10/17/06, Mario Weilguni <[EMAIL PROTECTED]> wrote:
>> On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote:
>> > Lastly, note that in PostgreSQL these length declarations are not
>> > necessary:
>> >
>> > contacto varchar(255),
>> > fuente
On 10/17/06, Mario Weilguni <[EMAIL PROTECTED]> wrote:
On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote:
> Lastly, note that in PostgreSQL these length declarations are not
> necessary:
>
> contacto varchar(255),
> fuente varchar(512),
> prefijopais varchar(10)
>
> Instead, use:
>
> c
On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote:
> Lastly, note that in PostgreSQL these length declarations are not
> necessary:
>
> contacto varchar(255),
> fuente varchar(512),
> prefijopais varchar(10)
>
> Instead, use:
>
> contacto text,
> fuente text,
> prefij
On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
prefijopais varchar(10)
Enforcing length constrain
On Oct 17, 2006, at 17:10 , Craig A. James wrote:
These tables are particularly egregious examples of ignorant
database design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You
owe Mr. Staubo, and the Postgress community, an apo
These tables are particularly egregious examples of ignorant database
design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe Mr.
Staubo, and the Postgres community, an apology.
There is absolutely no reason to insult peop
Hi to everyone,
First of all, I have to say that I know the database is not OK. The
people before me didn't do things right. I would like to
normalize the database, but it takes too much time (there are hundreds
of SQLs to change and t
You could try rewriting the query like this:
SELECT MAX(idcomment)
FROM comment c
WHERE idstatus=3 AND ctype=1
AND EXISTS (SELECT 1 FROM ficha vf WHERE idestado IN ('3', '4') AND
vf.idficha = c.idfile);
The planner can then try a backward scan on the comment_pkey index,
which should be quicke
On Oct 17, 2006, at 11:33 , Ruben Rubio wrote:
CREATE TABLE "comment"
(
idcomment int4 NOT NULL DEFAULT
nextval('comment_idcomment_seq'::regclass),
[snip 28 columns]
CONSTRAINT comment_pkey PRIMARY KEY (idcomment)
)
Ficha structure:
No indexes in ficha
Ficha rows: 17.850
CREATE TABLE fic
On Tue, 17.10.2006 at 11:33:18 +0200, Ruben Rubio wrote the following:
> >
> > SELECT max(idcomment)
> > FROM ficha vf
> > INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR
> > idestado=4))
> > WHERE idstatus=3
> > AND ctype=1
check for indexes on vf.idficha, c.idfile, idstatu
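As a sketch of that suggestion (index names made up; the original mail is truncated, so the exact column list is an assumption):

CREATE INDEX idx_ficha_idficha        ON ficha (idficha);
CREATE INDEX idx_comment_idfile       ON "comment" (idfile);
CREATE INDEX idx_comment_status_ctype ON "comment" (idstatus, ctype);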
-Original Message-
> From: [EMAIL PROTECTED] on behalf of Ruben Rubio
> Sent: Tue 10/17/2006 2:05 AM
> To: pgsql-performance@postgresql.org
> Cc:
> Subject: [PERFORM] Optimization of this SQL sentence
>
> This SQL sentence is very simple. I need to get better
> From: [EMAIL PROTECTED] on behalf of Ruben Rubio
> Sent: Tue 10/17/2006 2:05 AM
> To: pgsql-performance@postgresql.org
> Cc:
> Subject: [PERFORM] Optimization of this SQL sentence
>
> This SQL sentence is very simple. I need to get better results. I have
> tried s
-Original Message-
From: [EMAIL PROTECTED] on behalf of Ruben Rubio
Sent: Tue 10/17/2006 2:05 AM
To: pgsql-performance@postgresql.org
Cc:
Subject:[PERFORM] Optimization of this SQL sentence
This SQL sentence is very simple. I need to get better re
This SQL sentence is very simple. I need to get better results. I have
tried some possibilities and I didn't get good results.
SELECT max(idcomment)
FROM ficha vf
INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR
idestado=4))
WHERE ids
eVl <[EMAIL PROTECTED]> writes:
>> You tell us --- let's see EXPLAIN ANALYZE results for both cases.
> Here EXPLAIN ANALYZE results for both queries attached.
The problem seems to be that the is_uaix() function is really slow
(somewhere around 4 msec per call it looks like). Look at the
first sc
eVl <[EMAIL PROTECTED]> writes:
> When executing this SELECT (see SELECT.A above) it runs in about
> 700 ms, but when I want to wipe out all info about local traffic, with a query
> like this:
> SELECT * FROM ( SELECT.A ) a WHERE type = 'global';
> It executes about 1 ms - more the
Hello!
I have a DB with traffic statistics stored, and a SELECT statement which
shows traffic volume per day, also divided by region - local traffic and
global.
The SELECT statement returns some rows (about 10-20), paired
like this:
ttype (text) | volume (int) | tdate (date
Dear all,
What would be the best configure line for optimization?
As I understand it, by eliminating unwanted modules I would make the DB lighter
and faster.
Let's say the only module needed is the English module with LC_COLLATE C
module type.
How could we eliminate the unwanted modul
On Fri, 13 Feb 2004 16:21:29 +0100, I wrote:
>Populate this table with
> INSERT INTO idmap
> SELECT id, id, true
> FROM t;
This should be
INSERT INTO idmap
SELECT DISTINCT id, id, true
FROM t;
Servus
Manfred
"Marinos J. Yannikos" <[EMAIL PROTECTED]> writes:
> Jeff Trout wrote:
>> Remember that it is going to allocate 800MB per sort.
> I didn't know that it always allocates the full amount of memory
> specified in the configuration
It doesn't ... but it could use *up to* that much before starting to
Jeff Trout wrote:
Remember that it is going to allocate 800MB per sort. It is not "you
can allocate up to 800MB, so if you need 1 meg, use one meg". Some
queries may end up having a few sort steps.
I didn't know that it always allocates the full amount of memory
specified in the configuratio
On Feb 14, 2004, at 9:02 PM, Marinos J. Yannikos wrote:
Josh Berkus wrote:
800MB for sort mem? Are you sure you typed that correctly? You
must be counting on not having a lot of concurrent queries. It sure
will speed up index updating, though!
800MB is correct, yes... There are usually onl
800MB is correct, yes... There are usually only 10-30 postgres processes
active (imagine 5-10 people working on the web front-end while cron
jobs access the db occasionally). Very few queries can use such large
amounts of memory for sorting, but they do exist.
But remember that means that if yo
Josh Berkus wrote:
800MB for sort mem? Are you sure you typed that correctly? You must be
counting on not having a lot of concurrent queries. It sure will speed up
index updating, though!
800MB is correct, yes... There are usually only 10-30 postgres processes
active (imagine 5-10 people
Marinos, while you are busy answering my first set of questions :-),
here is an idea that might help even out resource consumption.
On Fri, 13 Feb 2004 01:58:34 +0100, "Marinos J. Yannikos"
<[EMAIL PROTECTED]> wrote:
>begin transaction;
> delete from t where id=5;
> insert into t (id,...) valu
On Fri, 13 Feb 2004 01:58:34 +0100, "Marinos J. Yannikos"
<[EMAIL PROTECTED]> wrote:
>I'm looking for ideas that might improve the interactive performance of
>the system, without slowing down the updates too much.
IOW, you could accept slower updates. Did you actually try and throttle
down the i
Marinos,
> shared_buffers=10
> (I tried many values, this seems to work well for us - 12GB RAM)
> wal_buffers=500
> sort_mem=80
> checkpoint_segments=16
> effective_cache_size=100
> etc.
800MB for sort mem? Are you sure you typed that correctly? You must be
counting on not having
Hi,
one of our tables has to be updated frequently, but concurrently running
SELECT-queries must also have low latency times (it's being accessed
through a web interface).
I'm looking for ideas that might improve the interactive performance of
the system, without slowing down the updates too m
Couple quick questions.
1. Turning autocommit=FALSE is most useful when using COPY to load
data to a table and could impact the performance of INSERT of
similar data on the same table.
2. What comparison operator is the most efficient when comparing
text or varchar? Today the SELECT I u
On 29 Jul 2003 at 8:14, Peter Childs wrote:
> On Tue, 29 Jul 2003, Shridhar Daithankar wrote:
>
> > On 28 Jul 2003 at 12:27, Josh Berkus wrote:
> > > Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
> > > 10-15 minutes, not every 2-3 hours. Regular VACUUM does not
On Tue, 29 Jul 2003, Shridhar Daithankar wrote:
> On 28 Jul 2003 at 12:27, Josh Berkus wrote:
> > Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
> > 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your
> > database. You will also want to increa
On 28 Jul 2003 at 12:27, Josh Berkus wrote:
> Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
> 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your
> database. You will also want to increase your FSM_relations so that VACUUM
> is more effective
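For reference, the free-space-map settings being referred to lived in postgresql.conf on those releases (they were removed in 8.4); the values below are placeholders, not recommendations from this thread:

max_fsm_relations = 1000      # roughly the number of tables and indexes tracked
max_fsm_pages = 200000        # total pages with free space to remember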
Justin,
> I am trying to understand the various factors used by Postgres to optimize.
I presently have a dual-866 Dell server with 1GB of memory. I've done the
following:
see: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
which has articles on .conf files.
(feel free to link thes
Justin,
> I am trying to understand the various factors used by Postgres to optimize.
I presently have a dual-866 Dell server with 1GB of memory. I've done the
following:
Please see the performance articles at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
--
-Josh Berkus
Agl
>Can someone tell me what effective_cache_size should be set to?
You may be able to intuit this from my last post, but if I understand
correctly, what you should be doing is estimating how much memory is likely
to be "left over" for the OS to do disk caching with after all of the basic
needs of
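A rough worked example of that estimate, assuming the 1 GB machine discussed in this thread and 8 kB pages (numbers are illustrative only):

memory likely left over for OS disk caching: ~700 MB
effective_cache_size ≈ 700 MB / 8 kB ≈ 90000 (in pages)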
Justin-
It sounds like you're on a system similar to ours, so I'll pass along the
changes that I made, which seem to have increased performance, and most
importantly, haven't hurt anything. The main difference in our environment
is that we are less Update/Insert intensive than you are- in our
appl
Greetings,
I am trying to understand the various factors used
by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of
memory. I've done the following:
set /proc/sys/kernel/shmmax to 51200
shared_buffers = 32000
sort_mem = 32000
max_connections = 64
fsync = false
Can som