On Fri, 8 Aug 2003, mixo wrote:
I have just installed redhat linux 9, which ships with Pg
7.3.2. Pg has to be set up so that data inserts (blobs) can
handle at least 8M at a time.
Nothing has to be done to tune postgresql to handle this; 8 Meg blobs are
no problem as far as I
On 11 Aug 2003 at 23:42, Ron Johnson wrote:
On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:
Well, yeah. But given the Linux propensity for introducing major
features in minor releases (and thereby introducing all the
attendant bugs), I'd think twice about using _any_ Linux
On Wed, 13 Aug 2003, ingrid martinez wrote:
The Flows table looks like this:
Table "flows"
  Column  |  Type  | Modifiers
----------+--------+-----------
 flidload | bigint | not null
Jeff,
Informix, etc. have spent a lot of time and money working on it.
They also have the advantage of having many paid fulltime
developers who are doing this for a job, not as a weekend hobby
(Compared to the what? 2-3 full time PG developers).
I think 4-6 full-time, actually, plus about
Martha Stewart called it a Good Thing [EMAIL PROTECTED] (Gregory S. Williamson) wrote:
FWIW, Informix can be run using a cooked (Unix) file for storing
data or it uses raw disk space and bypasses the ordinary (high
level) UNIX controllers and does its own reads/writes. About 10
times faster and
Hi, All!
Richard Huxton wrote:
On Thursday 07 August 2003 17:30, Yaroslav Mazurak wrote:
Richard Huxton wrote:
On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:
PG's memory use can be split into four areas (note - I'm not a developer so
this could be wrong).
1. Shared memory -
I have a table
permissions
with the fields (party_id integer, permission varchar, key_name varchar,
key_value integer)
for which I need a query to see if a person has permission to carry out a
particular action.
The query looks like:
SELECT 1
FROM permissions
WHERE party_id in (4, 7, 11, 26)
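For illustration, a minimal runnable sketch of this membership-style permission check (Python's sqlite3 stands in for PostgreSQL here; the sample rows and the index name are invented):

```python
import sqlite3

# Table layout from the post; the rows are invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE permissions (
    party_id   INTEGER,
    permission TEXT,
    key_name   TEXT,
    key_value  INTEGER)""")
conn.executemany("INSERT INTO permissions VALUES (?, ?, ?, ?)",
                 [(4, 'read', 'doc', 1), (9, 'write', 'doc', 1)])

# An index on party_id lets the IN-list lookup avoid a full table scan.
conn.execute("CREATE INDEX idx_permissions_party ON permissions (party_id)")

row = conn.execute(
    "SELECT 1 FROM permissions WHERE party_id IN (4, 7, 11, 26)").fetchone()
print("permitted" if row else "denied")  # prints "permitted"
```

Since the query only asks whether any matching row exists, `SELECT 1 ... fetchone()` is enough; no columns need to be read back.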
On 11/08/2003 09:59 Serge Dorofeev wrote:
Hi PostgreSQL team,
My PostgreSQL is installed as part of Cygwin (Windows XP).
I have compared the performance of PostgreSQL to MS SQL (I used a little Java
program with a number of inserts into a table).
MS SQL is 12 times faster :-(
These are very strange results.
Guys
I have just installed redhat linux 9, which ships with Pg
7.3.2. Pg has to be set up so that data inserts (blobs) can
handle at least 8M at a time. The machine has
two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and
a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.
What
hello.
my database size is 5GB. what block size is
recommended?
thanks
wilson
Matthew T. O'Connor [EMAIL PROTECTED] writes:
... My real question
was: can I make large changes to a contrib module in a point release,
meaning, 7.4.0 will have what is in beta, but 7.4.1 would have a much
improved version. Does that sound possible?
For core code, the answer would be a big
Ron,
If the databases are on different machines, maybe the postgres.conf
or pg_hba.conf files are different, and the buffer counts are
affecting the optimizer?
The databases are on the same machine, using the same postmaster.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
I've been trying to search through the archives, but I haven't been
successful.
We recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm
trying to fine tune things to get it running a bit better and I'm trying
to figure out how vacuum output correlates to tuning parameters.
Here's
Sean Chittenden [EMAIL PROTECTED] writes:
If you CLUSTER on an index and then ANALYSE, you get a correlation of
1.0 (== optimum) for the first column of the index.
Correlating of what to what? Of data to nearby data? Of data to
related data (ie, multi-column index?)? Of related data to
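One reading of the statistic being asked about: the planner's correlation measures how well a column's values track their physical row positions, which is why a freshly CLUSTERed first index column scores 1.0. A hedged, illustrative helper (this is plain Pearson correlation, not PostgreSQL's actual estimator):

```python
def correlation(values):
    """Pearson correlation between column values and their row positions."""
    n = len(values)
    mean_v = sum(values) / n
    mean_p = (n - 1) / 2                      # mean of positions 0..n-1
    cov = sum((v - mean_v) * (p - mean_p) for p, v in enumerate(values))
    sd_v = sum((v - mean_v) ** 2 for v in values) ** 0.5
    sd_p = sum((p - mean_p) ** 2 for p in range(n)) ** 0.5
    return cov / (sd_v * sd_p)

print(round(correlation([10, 20, 30, 40]), 6))   # stored in sorted order -> 1.0
print(round(correlation([40, 10, 30, 20]), 6))   # shuffled -> well below 1
```

A high correlation tells the planner that an index scan will read heap pages roughly in order, making it much cheaper than the random-access worst case.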
On Wed, Aug 06, 2003 at 03:03:41PM -0300, Wilson A. Galafassi Jr. wrote:
I'm installing PostgreSQL under Linux for better performance and I want to know
what the best configuration is.
1. What is the best Linux distribution for performance?
The Linux distribution itself isn't that
Hi Xin;
PostgreSQL is configured to run on virtually anything out of the box.
The reason for this is that, like Oracle, the database manager will not
start if it cannot allocate sufficient resources. We take the approach
of ensuring that it will start so you can tune it.
I would recommend trying
Hi!
Please send me the test db and the queries, with precise information
maybe the developers can help.
--
Tomka Gergely
And now, what will become of us without the barbarians?
They were, after all, a kind of solution...
---(end of broadcast)---
TIP 6: Have you
Hi,
I'm running on Redhat 7.2 with postgresql 7.3.2 and I have two schema in
the same database 'db' and 'db_dev'. Both contain a set of 20 tables for
a total of less than 50 Mb of data each (on the order of 50k rows in
total). Once in a while (often these days!), I need to synchronize the
dev
Matthew T. O'Connor wrote:
Fair point, my only concern is that a backend integrated
pg_autovacuum would be radically different from the current libpq
based client application.
Unfortunately, a configurable-via-tables pg_autovacuum is also going
to be quite different from the current
On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:
1. Nobody has gone through any formal proofs, and there are few
systems _anywhere_ that are 100% reliable.
I think the problem is that ext2 is known to be not perfectly crash
safe. That is, fsck on reboot after a crash can
On Wed, 6 Aug 2003, Tom Lane wrote:
scott.marlowe [EMAIL PROTECTED] writes:
On Wed, 6 Aug 2003, Tom Lane wrote:
One obvious question is whether you have your foreign keys set up
efficiently in the first place. As a rule, the referenced and
referencing columns should have identical
On Fri, 8 Aug 2003, Andrew Sullivan wrote:
On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:
Redhat puts ext3 on by default. Consider switching to a non-journaling FS
(ext2?) with the partition that holds your data and WAL.
I would give you exactly the opposite advice:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On Friday 08 August 2003 03:28, mixo wrote:
I have just installed redhat linux 9, which ships with Pg
7.3.2. Pg has to be set up so that data inserts (blobs) can
handle at least 8M at a time. The machine has
two P III 933MHz CPU's,
Well, yeah. But given the Linux propensity for introducing major
features in minor releases (and thereby introducing all the
attendant bugs), I'd think twice about using _any_ Linux feature
until it's been through a major version (e.g. things introduced in
2.4.x won't really be
Hi All!
I have installed PostgreSQL 7.3.2 on FreeBSD 4.7, running on PC with
CPU Pentium II 400MHz and 384Mb RAM.
Problem is that the SQL statement (see below) is running too long. With
the current WHERE clause, SUBSTR(2, 2) IN ('NL', 'NM') returns 25 records.
With 1 record, SELECT time is about
The only queries that execute are inserts; I use a batch of 300 and then
commit.
insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
and
postgresql.conf looks like this
#
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form
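The batching pattern quoted above (300 inserts per commit rather than one commit per row) can be sketched as follows; sqlite3 stands in for PostgreSQL, and the FLOWS table is reduced to two invented columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None              # manage transactions explicitly
conn.execute("CREATE TABLE flows (flidload INTEGER PRIMARY KEY, payload TEXT)")

BATCH = 300
rows = [(i, "data") for i in range(1000)]
cur = conn.cursor()
cur.execute("BEGIN")
for n, row in enumerate(rows, 1):
    cur.execute("INSERT INTO flows VALUES (?, ?)", row)
    if n % BATCH == 0:                   # commit once per batch, not per row
        cur.execute("COMMIT")
        cur.execute("BEGIN")
cur.execute("COMMIT")
print(conn.execute("SELECT count(*) FROM flows").fetchone()[0])  # prints 1000
```

Committing per batch amortizes the per-transaction overhead (on PostgreSQL, notably the WAL fsync) across many rows, which is the whole point of the pattern.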
Tom,
Partly. The numbers suggest that in ANALYZE's default sample of 3000
rows, it's only finding about a dozen non-null tgroup_ids (yielding the
0.996 null_frac value); and that in one case all dozen are different and
in the other case there are two duplicates. It would help if you
On Wed, 6 Aug 2003, Wilson A. Galafassi Jr. wrote:
hello!!!
what is the suggested partitioning scheme for postgresql??
the size of my db is 5GB and i have a 36GB scsi disk!
The first recommendation is to run Postgresql on a RAID set for
reliability.
I'm assuming you're building a machine and
On Tue, 2003-08-12 at 13:39, Bruce Momjian wrote:
OK, I got some hard evidence. Here is a discussion on the Linux kernel
mailing list with postings from Alan Cox (ac Linux kernels) and Stephen
Tweedie (ext3 author).
More than likely you are suffering from an affliction known as type
mismatch. This is listed as tip 9 here on the performance list (funny, it
was sent at the bottom of your reply :-)
What happens is that when you do:
select * from some_table where id=123;
where id is a bigint the query
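A hypothetical illustration of the standard workaround on 7.x (the table name and value are invented): the bare literal was parsed as int4, so the planner would not use an index on the bigint column; quoting the constant or casting it explicitly fixed this:

```python
# Bare integer literal: parsed as int4, bigint index skipped (seq scan).
slow = "SELECT * FROM some_table WHERE id = 123;"

# Quoted literal: its type is resolved against the column, index usable.
fast_quoted = "SELECT * FROM some_table WHERE id = '123';"

# Explicit cast: same effect, stated directly.
fast_cast = "SELECT * FROM some_table WHERE id = 123::bigint;"

for q in (slow, fast_quoted, fast_cast):
    print(q)
```

(Later PostgreSQL releases resolve such cross-integer-type comparisons automatically, so the cast is only needed on these old versions.)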
On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:
I don't use Linux and was just repeating what I had heard from others,
and read in postings. I don't have any first-hand experience with ext2
(except for a laptop I borrowed that wouldn't boot after being shut
off), but others on
Hi all,
I posted this problem on the sql list, and was referred to this list instead.
I have attached an sql statement that normally runs under 3 minutes.
That is, until I vacuum analyze the database (or just the tables in the query),
then the same query runs longer than 12 hours, and I have to
On Tue, Aug 12, 2003 at 02:39:19PM -0400, Bill Moran wrote:
Meaning ... just tell it a raw partition to keep the data on and
Postgre would create its own filesystem ... obviously, doing that
would allow Postgre to bypass all the failings of all filesystems
and rely entirely upon its own rules.
Shridhar Daithankar wrote:
On 11 Aug 2003 at 23:42, Ron Johnson wrote:
On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:
Well, yeah. But given the Linux propensity for introducing major
features in minor releases (and thereby introducing all the
attendant bugs), I'd think twice about
Bruce Momjian commented:
Uh, the ext2 developers say it isn't 100% reliable ... I mentioned
it while I was visiting Red Hat, and they didn't refute it.
1. Nobody has gone through any formal proofs, and there are few
systems _anywhere_ that are 100% reliable. NASA has occasionally lost
Josh Berkus [EMAIL PROTECTED] writes:
Also, there doesn't seem to be any way in 7.2 for me to find out what the
current statistics target for a column is. What am I missing?
There still isn't a handy command for it --- you have to look at
pg_attribute.attstattarget for the column.
On Thursday 07 August 2003 17:30, Yaroslav Mazurak wrote:
Hi All!
First, thanks for answers!
Richard Huxton wrote:
On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:
IIRC there is a limit on filesystem cache on freeBSD. 300MB by default.
If that is the case, you might have
Josh,
I'm sure that you've thought of this, but it sounds like you may not have
done an analyze in your new DB.
Thanks,
Peter Darley
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
Sent: Monday, August 11, 2003 3:48 PM
To: Ron Johnson;
Bruce Momjian [EMAIL PROTECTED] writes:
Rod Taylor wrote:
On Fri, 2003-08-08 at 14:53, Andrew Sullivan wrote:
I would give you exactly the opposite advice: _never_ use a
non-journalling fs for your data and WAL. I suppose if you can
afford to lose some transactions, you can do without
Well, yeah. But given the Linux propensity for introducing major
features in minor releases (and thereby introducing all the
attendant bugs), I'd think twice about using _any_ Linux feature
until it's been through a major version (e.g. things introduced in
2.4.x won't really be stable until
On Wed, Aug 13, 2003 at 10:17:45AM -0500, ingrid martinez wrote:
The only queries that execute are inserts; I use a batch of 300 and then
commit.
insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
Are there any foreign keys, c?
and
postgresql.conf looks like this
[...]
the primary key is flidload
----- Original Message -----
From: scott.marlowe [EMAIL PROTECTED]
To: ingrid martinez [EMAIL PROTECTED]
Cc: Andrew Sullivan [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Wednesday, August 13, 2003 11:47 AM
Subject: Re: [PERFORM] How can I Improve performance in
On Tue, 12 Aug 2003 13:09:42 -0700 Josh Berkus [EMAIL PROTECTED] wrote:
This idea has been discussed numerous times on the HACKERS list, and is a
(pretty much) closed issue. While Oracle and SQL Server use their own
filesystems, PostgreSQL will not because:
...
2) The filesystem projects
there are advantages to being able to split the database across a slew of
disk drives. if we accept the notion of using the native OS filesystem on
each, it would seem that being able to direct various tables and indices to
specific drives might be a valuable capability. i know that i could go
On Tue, 5 Aug 2003, Trevor Astrope wrote:
I was wondering if anyone found a sweet spot regarding how many inserts to
do in a single transaction to get the best performance? Is there an
approximate number where there isn't any more performance to be had or
performance may drop off?
It's
Greg,
FWIW, Informix can be run using a cooked (Unix) file for storing data or
it uses raw disk space and bypasses the ordinary (high level) UNIX
controllers and does its own reads/writes. About 10 times faster and safer.
Of course, it may have taken a lot of programmer time to make that
Scott Cain [EMAIL PROTECTED] writes:
A few days ago, I asked for advice on speeding up substring queries on
the GENERAL mailing list. Joe Conway helpfully pointed out the ALTER
TABLE STORAGE EXTERNAL documentation. After doing the alter,
the queries got slower! Here is the background:
Josh Berkus [EMAIL PROTECTED] writes:
Tom,
Okay, here's our problem:
live DB: tgroup_id n_distinct = -1
test DB: tgroup_id n_distinct = 11
The former estimate actually means that it thinks tgroup_id is a unique
column, whereas the latter says there are only 11 distinct values in the
Dear master:
I have been learning PostgreSQL for several days, and now I have met some problems. When I use a TPC-C (Transaction Processing Performance Council) test program to test the performance of PostgreSQL, it works very slowly; it needs almost 1 minute to finish a transaction, and the CPU percent
Tom,
Partly. The numbers suggest that in ANALYZE's default sample of 3000
rows, it's only finding about a dozen non-null tgroup_ids (yielding the
0.996 null_frac value); and that in one case all dozen are different and
in the other case there are two duplicates. It would help if you
On Tue, 12 Aug 2003, Neil Conway wrote:
On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:
I don't use Linux and was just repeating what I had heard from others,
and read in postings. I don't have any first-hand experience with ext2
(except for a laptop I borrowed that
Uh, the ext2 developers say it isn't 100% reliable --- at least that is
what I was told. I don't know any personally, but I mentioned it while I
was visiting Red Hat, and they didn't refute it.
Now, the failure window might be quite small, but I have seen it happen
myself, and have heard it from
As I remember, there were clear cases that ext2 would fail to recover,
and it was known to be a limitation of the file system implementation.
Some of the ext2 developers were in the room at Red Hat when I said
that, so if it was incorrect, they would hopefully have spoken up. I
addressed the
On Tue, 12 Aug 2003, Christopher Kings-Lynne wrote:
Well, yeah. But given the Linux propensity for introducing major
features in minor releases (and thereby introducing all the
attendant bugs), I'd think twice about using _any_ Linux feature
until it's been through a major version (e.g.
On Wed, Aug 06, 2003 at 12:45:34AM -0400, Tom Lane wrote:
For core code, the answer would be a big NYET. We do not do feature
additions in point releases, only bug fixes. While contrib code is more
under the author's control than the core committee's control, I'd still
say that you'd be
On Wed, 2003-08-13 at 10:46, Josh Berkus wrote:
Jeff,
[snip]
The other advantage (which I hinted to above) with raw disks is being able
to optimize queries to take advantage of it. Informix is multithreaded
and it will spawn off multiple readers to do say, a seq scan (and merge
the
On Wed, 13 Aug 2003 09:48:18 +0800 Christopher Kings-Lynne [EMAIL PROTECTED] wrote:
We've got a little bunch of us tinkering with a tablespace
implementation.
However, it's been stalled for a while now.
interesting. i'm involved in the very early stages of a startup that is
likely to do a
[EMAIL PROTECTED] (Jeff) writes:
On Tue, 12 Aug 2003, Christopher Browne wrote:
Are you _certain_ that's still true? Have you a metric that shows
Informix being 10x faster on a modern system? That would be quite
surprising...
We were forced (for budget reason) to switch from raw disk to
On Tue, 12 Aug 2003, Bruce Momjian wrote:
I think Gavin Sherry is working on this. I am CC'ing him.
---
Yes I am working on this. I am about 50% of the way through the patch but
have been held up with other work. For
Folks,
More followup on this:
The crucial difference between the two execution plans is this clause:
test db has:
- Seq Scan on case_clients (cost=0.00..3673.48 rows=11274 width=11) (actual
time=0.02..302.20 rows=8822 loops=855)
whereas live db has:
- Index Scan using idx_caseclients_case
On Mon, Aug 11, 2003 at 03:58:41PM +0200, Stef wrote:
I have attached an sql statement that normally runs under 3 minutes.
That is, until I vacuum analyze the database (or just the tables in the query),
then the same query runs longer than 12 hours, and I have to kill it.
Hmm, I have noticed
Tom,
Let's see the pg_stats rows for case_clients in both databases. The
entries for trial_groups might be relevant too.
My reading is that the case is borderline; that is, because the correlation
is about 10-20% higher on the test database (since it was restored clean
from backup) the
On Mon, 11 Aug 2003, Bruce Momjian wrote:
scott.marlowe wrote:
On Fri, 8 Aug 2003, Andrew Sullivan wrote:
On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:
Redhat puts ext3 on by default. Consider switching to a non-journaling FS
(ext2?) with the partition
On Mon, Aug 11, 2003 at 06:59:30PM -0400, Bruce Momjian wrote:
Uh, the ext2 developers say it isn't 100% reliable --- at least that is
that was told. I don't know any personally, but I mentioned it while I
was visiting Red Hat, and they didn't refute it.
IMHO, if we're going to say don't use
Stef [EMAIL PROTECTED] writes:
= Could we see the results of EXPLAIN ANALYZE, rather than just EXPLAIN,
= for the un-analyzed case?
Attached the output of this.
Hmm... not immediately obvious where it's going wrong. Could you try
this (after ANALYZE):
set enable_mergejoin to off;
Mem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free
Swap: 368M Total, 2852K Used, 366M Free
It's right that I can figure that I can use 384M (total RAM) - 72M
(wired) - 48M (buf) = 264M for PostgreSQL.
Hence, if I set effective_cache_size to 24M (3072 8K
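As a sanity check on the arithmetic above: effective_cache_size is expressed in 8 KB disk pages, so converting from megabytes is just MB * 1024 / 8 (the figures below are the poster's):

```python
total_mb, wired_mb, buf_mb = 384, 72, 48
available_mb = total_mb - wired_mb - buf_mb
print(available_mb)                  # 264 MB notionally free for caching

def mb_to_8k_pages(mb):
    # effective_cache_size is counted in 8 KB pages
    return mb * 1024 // 8

print(mb_to_8k_pages(24))            # the post's 24 MB -> 3072 pages
print(mb_to_8k_pages(available_mb))  # 264 MB -> 33792 pages
```

So a setting of 3072 corresponds to only 24 MB; using the full 264 MB estimate would mean setting it an order of magnitude higher.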
On 08/08/2003 11:28 mixo wrote:
I have just installed redhat linux 9, which ships with Pg
7.3.2. Pg has to be set up so that data inserts (blobs) can
handle at least 8M at a time. The machine has
two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and
a 36 Gig hd with 1 Gig swap and 3
On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:
IIRC there is a limit on filesystem cache on freeBSD. 300MB by default.
If that is the case, you might have to raise it to make
effective_cache_size really effective..
Trying various sysctls does nothing for me. I want to use *all
Yaroslav Mazurak [EMAIL PROTECTED] writes:
fsync = false
I'd turn fsync back on - unless you don't mind losing your data after a crash.
This is a temporary performance solution - I want to get the SELECT query
result first, but current performance is too low.
Disabling fsync will not help
On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:
Hi All!
I have installed PostgreSQL 7.3.2 on FreeBSD 4.7, running on PC with
CPU Pentium II 400MHz and 384Mb RAM.
Version 7.3.4 is just out - probably worth upgrading as soon as it's
convenient.
Problem
hello!!!
what is the suggested partitioning scheme for
postgresql??
the size of my db is 5GB and i have a 36GB scsi
disk!
thanks
wilson
On Tue, 5 Aug 2003, Medora Schauer wrote:
I hope this piques someone's curiosity. I'd really like to know
what is going on here...
I think you're getting caught by the typing of constants preventing
index scans.
UPDATE shot_record SET trace_count = %d \
WHERE
Scott Cain wrote:
Oh, and I forgot to mention: it's highly compressed (bzip2 -9) and is
109M.
Thanks. I'll grab a copy from home later today and see if I can find
some time to poke at it.
Joe