point release behind the bleeding edge.
--
Guy Rouillier
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
es 255 other possible values to describe the state of the
column. That seems preferable to adding an additional column for every
nullable column.
But as you say, that would have to be taken up with the SQL
standardization bodies, and not PostgreSQL.
--
Guy Rouillier
--
same as null. When I wrote
my own data access layer years ago, I expressly checked for empty
strings on input and changed them to null. I did this because empty
strings had a nasty way of creeping into our databases; writing queries
to produce predictable results got to be very messy.
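A minimal sketch of the kind of input scrubbing described above, assuming parameters arrive as a simple dict (the helper name and shape are hypothetical, not from the original data access layer):

```python
def normalize_params(params):
    """Coerce empty strings to None (SQL NULL) before they reach the
    database, so only one 'no value' state can creep into the tables."""
    return {key: (None if isinstance(value, str) and value == "" else value)
            for key, value in params.items()}

# The empty string becomes None; other values pass through unchanged.
row = normalize_params({"name": "", "city": "Columbia", "age": 42})
```

With this in place, downstream queries only ever need to test `IS NULL`, never `= ''` as well.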
view. The concept of null was
introduced into the SQL vernacular by Codd and Date expressly to
represent unknown values.
--
Guy Rouillier
--
provide you
any performance numbers.
Differences in data structures, etc., are fairly easy to determine.
Anyone can read the Oracle documentation.
--
Guy Rouillier
--
head
movement) -> faster seek time
More platters mean more tracks under the read heads at a time, so
generally *better* performance. All other things (like rotational
speed) being equal, of course.
--
Guy Rouillier
--
t longer,
but in non-archival applications, I imagine even that mark won't take
more than a year or two to surpass.
--
Guy Rouillier
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
supported, or a bug?
Thank you for your support.
This is really not a performance question, but a general SQL question.
select * from t1
f1
--
1
2
3
select * from t2
f1
--
1
2
select * from t1
where not exists
(
select 1
from t2
where t2.f1 = t1.f1
)
f1
--
3
--
Guy
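The t1/t2 anti-join above can be reproduced with an in-memory SQLite database (SQLite stands in for PostgreSQL here purely so the example is self-contained; the SQL itself is the same as in the post):

```python
import sqlite3

# Build the two tables from the example above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (f1 INTEGER);
    CREATE TABLE t2 (f1 INTEGER);
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (1), (2);
""")

# Rows of t1 with no matching row in t2 -- an anti-join via NOT EXISTS.
rows = conn.execute("""
    SELECT f1 FROM t1
    WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.f1 = t1.f1)
""").fetchall()
# rows == [(3,)]
```

The correlated subquery returns a row whenever t2 has a match, so NOT EXISTS keeps exactly the t1 rows with no counterpart in t2.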
it from some industry experience as well as production
maturity with Barcelona, and can judge then which better fits your needs.
--
Guy Rouillier
uting significantly to long run times.
Guy Rouillier wrote:
I don't want to violate any license agreement by discussing performance,
so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as
BigDBMS here.
I'm trying to convince my employer to replace BigDBMS with PostgreSQL
d expect diminishing returns from the Xeon northbridge
memory access. If you are willing to spend that kind of money on
memory, you'd be better off with Opteron or Sparc.
--
Guy Rouillier
Dave Cramer wrote:
Is it possible that providing 128 GB of RAM is too much? Will other
systems in the server bottleneck?
What CPU and OS are you considering?
--
Guy Rouillier
ll be regarding changing the implementation.
--
Guy Rouillier
Dave Cramer wrote:
On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be
hurt by this more than PG. Here are the settings I've modified in
postgresql.conf:
As I said you need to set shared_buffers to at
localhost'
shared_buffers = 128MB
work_mem = 16MB
maintenance_work_mem = 64MB
temp_buffers = 32MB
max_fsm_pages = 204800
checkpoint_segments = 30
redirect_stderr = on
log_line_prefix = '%t %d'
--
Guy Rouillier
empt
logic changes. The core logic is unchangeable; millions of rows of data
in a single table will be updated throughout the day. If PG can't
handle high-volume updates well, this may be a brick wall.
--
Guy Rouillier
tions beyond that, those advantages
are not being realized.
Hope that helps your tests...
Thanks to everyone for providing suggestions, and I apologize for my
delay in responding to each of them.
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
ns of the stored proc. The exact same code is running against
BigDBMS, so any penalty from this approach should be evenly felt.
--
Guy Rouillier
it is a software array.
I'm out of ideas how to improve this picture any further. I'd
appreciate some suggestions. Thanks.
--
Guy Rouillier
isabled. However, if you
use big-'I' to actually interrogate the drive, you get the correct setting.
I tested this a while ago by writing a program that did fsync() to test
write latency and random-reads to test read latency, and then comparing
them.
- Guy
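A rough sketch of the kind of fsync() latency probe described above (the original program is not available, so the function name and iteration count here are illustrative, not the author's code):

```python
import os
import tempfile
import time

def fsync_latency(iterations=100):
    """Estimate average fsync() latency by repeatedly writing one byte
    to a temporary file and forcing it to stable storage.

    If the drive's write cache is lying about durability, this number
    comes back implausibly small for a rotating disk."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, b"x")
            os.fsync(fd)
        elapsed = time.perf_counter() - start
        return elapsed / iterations
    finally:
        os.close(fd)
        os.unlink(path)
```

Comparing this figure against random-read latency on the same device is what exposes a drive that acknowledges writes from its volatile cache.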
* I did experience a too-close
he OS cache.
Of course you could argue the OS should be able to detect this, and prevent
it occurring anyway. I don't know anything about Linux's behaviour in this
area.
- Guy
ards the
connection state to a worker process and says 'deal with this'.
(Clearly the state object needs to contain all user and transaction state
the connection is involved in).
- Guy Thornley
You said:
"If your write size is smaller than chunk_size*N (N = number of data blocks
in a stripe), in order to calculate correct parity you have to read data
from the remaining drives."
Neil explained it in this message:
http://marc.theaimsgroup.com/?l=linux-raid&m=1086821907
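The arithmetic behind that quote can be sketched with byte-wise XOR parity (this is the standard RAID-5 parity scheme; the helper names are illustrative):

```python
def xor_parity(blocks):
    """Parity block for a stripe: byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def small_write(old_data, old_parity, new_data):
    """Read-modify-write for a write smaller than the full stripe:
    new_parity = old_parity XOR old_data XOR new_data.
    The old data and old parity must be READ from disk first,
    which is why partial-stripe writes cost extra I/O."""
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))
```

A full-stripe write avoids the reads entirely: the controller can compute parity from the new data alone, which is why aligning write size to chunk_size*N matters.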
iate data is
swapped to disk? At least since 7.4 you no longer need a VACUUM when
postgres starts, to learn about free space ;)
- Guy Thornley
not release quality... if you run them on a
metadata-journalling filesystem without an 'ordered write' mode, it's
possible to end up with corrupt heaps after a crash, because of garbage data
in the extended files.
If/when we move to postgres 8 I'll try to ensure th
uffer really, really bad fragmentation,
which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)
quite drastically. We have in-house patches that somewhat alleviate this,
but they are not release quality. Has anybody else suffered this?
Guy Thornley
and get
the same result every time.
For us, a global 'enable_seqscan = off' in postgresql.conf is the way to go.
You occasionally see an odd plan while developing a query (e.g., scanning an
index with no constraint simply to get ORDER BY). Usually that's a broken
query/index, and I simply fix it.
Guy Thornley