On mán, 2007-08-06 at 00:10 -0700, Sven Clement wrote:
2007/8/5, Heikki Linnakangas [EMAIL PROTECTED]:
I don't remember a bug like that. Where did you read that
from?
--
Heikki Linnakangas
EnterpriseDB
On lau, 2007-07-28 at 17:12 -0300, Bruno Rodrigues Siqueira wrote:
where
to_char( data_encerramento ,'yyyy-mm')
between '2006-12' and '2007-01'
assuming data_encerramento is a date column, try:
WHERE data_encerramento between '2006-12-01' and '2007-01-31'
gnari
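A minimal sketch of the suggestion above (column name from the thread; the table name and bounds are assumptions): comparing the date column directly keeps the predicate indexable, whereas wrapping the column in to_char() defeats a plain index on it. A half-open upper bound also sidesteps month-end arithmetic and works for timestamp columns:

```sql
-- Function on the column: cannot use a plain index on data_encerramento
--   WHERE to_char(data_encerramento, 'yyyy-mm') BETWEEN '2006-12' AND '2007-01'

-- Direct comparison: indexable; the half-open upper bound covers
-- every instant of January 2007, including timestamps on the 31st
SELECT *
FROM   some_table                 -- hypothetical table name
WHERE  data_encerramento >= DATE '2006-12-01'
AND    data_encerramento <  DATE '2007-02-01';
```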
On lau, 2007-06-16 at 13:35 +0200, [EMAIL PROTECTED] wrote:
Thanks
if I understand well, that means that if I choose character varying(3) or
character varying(8) or character varying(32) or character varying with no max
length, the fields will take the same space on the disk (8kb) except for
On þri, 2007-03-27 at 16:13 +0530, Gauri Kanekar wrote:
SELECT rs.id AS sid, rs.name AS sname, rc.id AS campid, rc.name AS
campname, rc.rev_type AS revtype, rc.act_type AS actntype, ra.id AS
advid, ra.name AS advname, rpt_chn.id AS chanid, rpt_chn.name AS
channame, rpt_cre.dn AS dn,
On mán, 2007-03-26 at 20:33 +0530, Gauri Kanekar wrote:
you did not show your query, nor did you answer whether you had vacuumed
and analyzed.
enable_seqscan = off
why this? this is unlikely to help
QUERY PLAN
...
- Nested Loop
(cost=0.00..1104714.83
On þri, 2007-01-02 at 09:04 -0500, Geoffrey wrote:
Alvaro Herrera wrote:
Actually it has been suggested that a combination of ext2 (for WAL) and
ext3 (for data, with data journalling disabled) is a good performer.
AFAIK you don't want the overhead of journalling for the WAL partition.
On mið, 2006-12-20 at 05:31 +, ALVARO ARCILA wrote:
Hi,
I've been looking around the log files of my server and lately they
indicate that I should consider increasing the checkpoint_segments
because they're being recycled too often, and they also recommend increasing
the max_fsm_pages over
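Both log hints point at postgresql.conf settings; the values below are illustrative examples only, not recommendations for any particular workload:

```
# postgresql.conf (8.x-era parameters; example values, tune for your server)
checkpoint_segments = 16    # raise if the log warns checkpoints occur too often (default was 3)
max_fsm_pages = 200000      # should cover the dead-page count reported by VACUUM VERBOSE
```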
On mið, 2006-12-13 at 11:51 +0100, Arnaud Lesauvage wrote:
Hi list !
I am running a query to update the boolean field of a table based on
another table's fields.
The query is (changed names for readability):
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 =
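The snippet is cut off, but the usual shape of such an update in PostgreSQL is UPDATE ... FROM; the join condition below is an assumption, not shown in the thread:

```sql
UPDATE t1
SET    booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')
FROM   t2
WHERE  t1.id = t2.id;    -- join condition assumed
```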
On mið, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one ? Because it runs the
(field IN ('some','other') AND field2 = 'Y') once and then
On mið, 2006-12-13 at 13:42 -0500, Tim Jones wrote:
I have a query that uses an IN clause and it seems to perform great
when there are more than two values in it, but if there is only one it is
really slow. Also if I change the query to use an = instead of IN in the
case of only one value it
On mán, 2006-12-11 at 17:01 +1100, Chris wrote:
Mark Kirkwood wrote:
Chris wrote:
It's the same as doing a select count(*) type query using the same
clauses, but all in one query instead of two.
It doesn't return any extra rows on top of the limit query so it's
better than using
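One way to get the total row count alongside a LIMITed page in a single query is a window aggregate (added in PostgreSQL 8.4, so newer than this thread); table and columns here are hypothetical:

```sql
SELECT *,
       count(*) OVER () AS total_rows   -- count of all qualifying rows, before LIMIT
FROM   items
WHERE  active
ORDER  BY id
LIMIT  20;
```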
On þri, 2006-05-30 at 10:26 +1000, Anthony Ransley wrote:
Can any one explain why the following query
select f(q) from
(
select * from times
where '2006-03-01 00:00:00'<=q and q<'2006-03-08 00:00:00'
order by q
) v;
never completes, but splitting up the time span into single
On fös, 2006-05-26 at 11:56 +0200, James Neethling wrote:
SELECT branch_id, prod_cat_id, sum(prod_profit) as prod_cat_profit
FROM () as b1
WHERE x = y
GROUP BY branch_id, prod_cat_id
Now, I also need the branch total, effectively,
SELECT branch_id, sum(prod_profit) as branch_total
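The two aggregates above can be produced in one statement; a portable sketch with the thread's column names (the source table stands in for the elided derived table, and the x = y filter is a placeholder) is a UNION ALL of the two grouping levels:

```sql
SELECT branch_id, prod_cat_id, sum(prod_profit) AS prod_profit
FROM   sales                  -- placeholder for the derived table b1
WHERE  x = y
GROUP  BY branch_id, prod_cat_id
UNION ALL
SELECT branch_id, NULL, sum(prod_profit)   -- NULL prod_cat_id marks the branch total
FROM   sales
WHERE  x = y
GROUP  BY branch_id;
```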
On sun, 2006-05-21 at 02:21 -0600, Brendan Duddridge wrote:
Hi,
I have a query that performs WAY better when I have enable_seqscan =
off:
explain analyze select ac.attribute_id, la.name, ac.sort_order from
attribute_category ac, localized_attribute la where ac.category_id =
1001402
On mán, 2006-04-10 at 10:30 +0200, Doron Baranes wrote:
I attached here a file with details about the tables, the queries and
the explain analyze plans.
Hope this can be helpful for analyzing my problem
first query:
explain analyze SELECT date_trunc('hour'::text,
i.entry_time) AS datetime,
On sun, 2006-04-09 at 12:47 +0200, Doron Baranes wrote:
Hi
I am running postgres 7.4.6 on a PineApp appliance with 512MB RAM.
I did a database vacuum analyze and rebuilt my indexes.
If you have previously done a lot of deletes or updates
without regular vacuums, you may have to do a
VACUUM
On fim, 2006-04-06 at 12:35 +1000, Brian Herlihy wrote:
I have a problem with the choice of index made by the query planner.
My table looks like this:
CREATE TABLE t
(
p1 varchar not null,
p2 varchar not null,
p3 varchar not null,
i1 integer,
i2 integer,
i3 integer,
On fim, 2006-04-06 at 19:27 +1000, Brian Herlihy wrote:
--- Ragnar [EMAIL PROTECTED] wrote:
On fim, 2006-04-06 at 12:35 +1000, Brian Herlihy wrote:
...
PRIMARY KEY (p1, p2, p3)
...
I have also created an index on (p2, p3), as some of my lookups are on
these only.
...
db
On fös, 2006-04-07 at 00:01 +1000, Brian Herlihy wrote:
--- Ragnar [EMAIL PROTECTED] wrote:
On fim, 2006-04-06 at 19:27 +1000, Brian Herlihy wrote:
Yes, the primary key is far better. I gave it the ultimate test - I
dropped
the (p2, p3) index. It's blindingly fast when using
On fös, 2006-03-10 at 10:11 +0100, NbForYou wrote:
Hey Michael, you sure know your stuff!
Versions:
PostgreSQL 7.3.9-RH running on the webhost.
PostgreSQL 8.0.3 running on my homeserver.
So the only solution is to ask my webhost to upgrade its postgresql?
The question is will he do
On mán, 2006-03-06 at 13:46 -0500, Harry Hehl wrote:
Query: select * from ommemberrelation where srcobj='somevalue'
and dstobj in (select objectid from omfilesysentry where name='dir15_file80');
Columns srcobj, dstobj and name are all indexed.
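A classic rewrite for this shape of query, particularly on the older planners of that era where IN (subquery) was handled poorly, is EXISTS; this sketch uses the identifiers from the message:

```sql
SELECT m.*
FROM   ommemberrelation m
WHERE  m.srcobj = 'somevalue'
AND    EXISTS (
         SELECT 1
         FROM   omfilesysentry f
         WHERE  f.objectid = m.dstobj
         AND    f.name = 'dir15_file80'
       );
```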
On fim, 2006-02-23 at 13:35 +0100, Kjeld Peters wrote:
Select and update statements are quite slow on a large table with more
than 600,000 rows. The table consists of 11 columns (nothing special).
The column id (int8) is primary key and has a btree index on it.
The following select
On fös, 2006-02-17 at 08:01 -0500, Ron wrote:
At 04:24 AM 2/17/2006, Ragnar wrote:
On fös, 2006-02-17 at 01:20 -0500, Ron wrote:
OK, so here's _a_ way (there are others) to obtain a mapping such that
if a < b then f(a) < f(b) and
if a == b then f(a) == f(b)
By scanning
On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:
For lots of non-read-only database workloads, RAID5 is a performance
killer. Raid 1/0 might be better, or having two mirrors of two disks
each, the first mirror holding system, swap, and the PostgreSQL WAL
files, the second one holding the
On Thu, 2005-07-14 at 10:06 +1000, Marc McIntyre wrote:
I'm having a problem with a query that performs a sequential scan on a
table when it should be performing an index scan. The interesting thing
is, when we dumped the database on another server, it performed an index
scan on that
On Tue, 2005-03-22 at 14:36 -0500, Alex Turner wrote:
It will use an index for 220-300, but not for 200-300.
...
Seq Scan on propmain (cost=0.00..15517.56 rows=6842 width=4) (actual
time=0.039..239.760 rows=6847 loops=1)
...
Index Scan using propmain_listprice_i on propmain
(cost=0.00..22395.95
On Sun, 2005-03-13 at 16:32 +0100, Miroslav Šulc wrote:
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having performance
problem with one of our most often used queries. On MySQL the speed was
sufficient but
On Wed, 2005-03-02 at 01:51 -0500, Ken Egervari wrote:
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id =
On Wed, 2005-03-02 at 13:28 -0500, Ken Egervari wrote:
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id =
On Mon, 2005-01-24 at 15:45 -0800, Josh Berkus wrote:
[about keeping open DB connections between web-client connections]
[I wrote:]
no. you can only count on web-server-process==connection, but not
web-user==connection, unless you can guarantee that the same user
client always connects to
On Mon, 2005-01-24 at 09:52 -0800, Josh Berkus wrote:
[about keeping connections open in web context]
Ah, clarity problem here. I'm talking about connection pooling tools from
the client (webserver) side, such as Apache::DBI, PHP's pg_pconnect,
Jakarta's connection pools, etc. Not
On Wed, 2005-01-19 at 21:00 -0800, [EMAIL PROTECTED] wrote:
Let's see if I have been paying enough attention to the SQL gurus.
The planner is making a different estimate of how many 'deprecated' rows
match versus how many 'broken' rows.
I would try SET STATISTICS to a larger number on the ports table,
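The suggestion maps to ALTER TABLE ... SET STATISTICS; the column names below come from the thread, and the target value is just an example:

```sql
ALTER TABLE ports ALTER COLUMN broken     SET STATISTICS 100;
ALTER TABLE ports ALTER COLUMN deprecated SET STATISTICS 100;
ANALYZE ports;   -- the new target takes effect only after re-analyzing
```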
On Wed, 2005-01-19 at 20:37 -0500, Dan Langille wrote:
Hi folks,
Running on 7.4.2, recently vacuum analysed the three tables in
question.
The query plan in question changes dramatically when a WHERE clause
changes from ports.broken to ports.deprecated. I don't see why.
Well, I do
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
The best way to do paging is not to use offset or cursors but to use an
index. This only works if you can enumerate all the sort orders the
application might be using and can have an index on each of them.
To do this the query would
On Thu, 2005-01-20 at 19:12 +, Ragnar Hafstað wrote:
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
The best way to do paging is not to use offset or cursors but to use an
index. This only works if you can enumerate all the sort orders the
application might be using and can
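The index-based paging described above (now commonly called keyset pagination) looks like this in outline; the table and columns are hypothetical, and the row-value comparison wants an index on (created_at, id):

```sql
-- first page
SELECT * FROM items ORDER BY created_at, id LIMIT 20;

-- next page: resume just past the last row already shown, instead of
-- OFFSET, which scans and discards every earlier row on each request
SELECT *
FROM   items
WHERE  (created_at, id) > ('2005-01-19 20:37:00', 12345)  -- last row of prior page
ORDER  BY created_at, id
LIMIT  20;
```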
On Thu, 2005-01-13 at 12:14 +1300, Mark Kirkwood wrote:
[snip some explains]
I have random_page_cost = 0.8 in my postgresql.conf. Setting it back to
the default (4) results in a plan using test_id1.
it is not rational to have random_page_cost < 1.
if you see improvement with such a setting,
On Mon, 2004-12-27 at 22:31 +0700, Amrit Angsusingh wrote:
[ [EMAIL PROTECTED] ]
These are some settings that I am planning to start with for a 4GB RAM
dual
opteron system with a maximum of 100 connections:
shared_buffers 8192 (=67MB RAM)
sort_mem 4096 (=400MB RAM for 100
On Tue, 2004-12-21 at 16:31 +0700, Amrit Angsusingh wrote:
I used postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and
4 GB of RAM. For the 1 1/2 years since I started to use the database server after
optimizing the postgresql.conf, everything went fine until a couple of
weeks ago, my
On Wed, 2004-12-22 at 00:03 +0100, Thomas Wegner wrote:
Hello, I have a problem between V7.4.3 Cygwin and
V8.0RC2 W2K. I have 2 systems:
1. Production Machine
- Dual P4 3000MHz
- 2 GB RAM
- W2K
- PostgreSQL 7.4.3 under Cygwin
- i connect to it over a DSL Line
2.
On Fri, 2004-12-17 at 23:51 -0800, Ron Mayer wrote:
Any advice for settings for extremely IO constrained systems?
A demo I've set up for sales seems to be spending much of its time in
disk wait states.
The particular system I'm working with is:
Ext3 on Debian inside Microsoft
On Wed, 2004-12-15 at 11:50 -0500, Tom Lane wrote:
Geoffrey [EMAIL PROTECTED] writes:
sarlav kumar wrote:
I would like to write the output of the \d command on all tables in a
database to an output file.
What is the OS? On any UNIX variant you can do:
echo '\d' | psql > outputfile