This patch adds a note to the documentation explaining why min() and max()
are slow when applied to an entire table, and suggesting the simple
workaround most experienced Pg users eventually learn about
(SELECT xyz ... ORDER BY xyz LIMIT 1).
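For illustration, a minimal sketch of both forms, assuming a table t with an
index on column xyz (names are illustrative):

    -- Slow: the aggregate forces a scan of the whole table.
    SELECT min(xyz) FROM t;

    -- Workaround: an index scan can satisfy these directly.
    SELECT xyz FROM t ORDER BY xyz ASC LIMIT 1;   -- like min(xyz)
    SELECT xyz FROM t ORDER BY xyz DESC LIMIT 1;  -- like max(xyz);
    -- with NULLs present, add WHERE xyz IS NOT NULL to match max() exactly.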
Any suggestions on improving the
Brian Bruns wrote:
Problem is, nobody builds packages on windows anyway. They just all
download the binary a guy (usually literally one guy) built. So, let's
just make sure that one guy has cygwin loaded on his machine and we'll be
all set. /tongue in cheek
Correct.
I wonder why we need
+ people measure PostgreSQL by the speed of bulk imports
This is a good point. I completely agree. What we might need is
something called SQL Loader or the like. This may sound funny and it doesn't
make much technical sense, but it is an OBVIOUS way of importing data. People
often forget to use
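For context, a sketch of the bulk-load path PostgreSQL already ships; the
table and file names here are illustrative:

    -- COPY loads a tab-delimited file in one statement, far faster
    -- than row-by-row INSERTs.
    COPY customers FROM '/tmp/customers.txt';

    -- From psql, \copy reads the file on the client side instead:
    -- \copy customers from '/tmp/customers.txt'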
Personally I think that configuring things like that is definitely
beyond the scope of an average administrator.
However, there is one thing which would be useful for many applications:
it would be nice if there were a way to renice a connection. When it
comes to reporting it would be nice to
Why are the features provided by PostGIS not added to the core of
PostgreSQL?
Hans
What I'd like to have in future versions of PostgreSQL:
- replication, replication, ... (you have seen that before). I guess
most people would like to see that.
- a dblink like system for connecting to remote database systems
(not just PostgreSQL???)
something like CREATE REMOTE
Oops, there is something I have forgotten:
- nicing backends: this would be nice for administration tasks
- CREATE DATABASE ... WITH MAXSIZE (many providers would like to see
that; quotas are painful in this case - especially when porting the
database to a different or a second server)
Is there going to be a way to use transactions inside transactions inside
transactions?
In other words:
BEGIN;
    BEGIN;
        BEGIN;
            BEGIN;
            COMMIT;
        COMMIT;
    COMMIT;
COMMIT;
Is there a way to have some sort of recursive solution with every
transaction but the first one being a child
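For reference, later PostgreSQL releases answered this with savepoints rather
than truly nested BEGINs; a minimal sketch (table name illustrative):

    BEGIN;
        INSERT INTO accounts VALUES (1, 100);
        SAVEPOINT s1;                -- behaves like a child transaction
            INSERT INTO accounts VALUES (2, 200);
        ROLLBACK TO SAVEPOINT s1;    -- undoes only the work since s1
    COMMIT;                          -- the first INSERT survives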
The standard approach to such a scenario would imho be to write stored procedures
for the complex queries (e.g. plpgsql) and call them from the client.
That might even eliminate a few ping-pongs between client and server.
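A minimal sketch of that approach, with the function and table names purely
illustrative:

    -- One server-side function instead of several client round trips.
    CREATE FUNCTION order_total(integer) RETURNS numeric AS '
    DECLARE
        total numeric;
    BEGIN
        SELECT sum(price * qty) INTO total
        FROM orders
        WHERE customer_id = $1;
        RETURN total;
    END;
    ' LANGUAGE plpgsql;

    SELECT order_total(42);  -- a single round trip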
Andreas
Does it reduce the time taken by the planner?
Are server side SQL
I came across a quite interesting issue I don't really understand but
maybe Tom will know.
This happened rather accidentally.
I have a rather complex query which executes efficiently.
There is one interesting thing - let's have a look at the query:
SELECT t_struktur.id, t_text.code,
First of all, PREPARE/EXECUTE is a wonderful way to speed things up
significantly.
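A minimal sketch of the pattern, with table and statement names illustrative:

    -- Parse, rewrite, and plan once ...
    PREPARE big_join (integer) AS
        SELECT t1.id, t2.code
        FROM t1 JOIN t2 ON t2.t1_id = t1.id
        WHERE t1.id = $1;

    -- ... then execute many times, skipping the planner each time.
    EXECUTE big_join(42);
    EXECUTE big_join(43);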
I wonder if there is a way to store a parsed/rewritten/planned query in
a table so that it can be loaded again.
This might be useful when it comes to VERY complex queries (> 10 tables).
In many applications the
thinking about prepared plans somewhere on disk.
Is there a way to transform an ASCII representation back into a plan?
Hans
Bruno Wolff III wrote:
On Wed, Oct 23, 2002 at 18:04:01 +0200,
Hans-Jürgen Schönig [EMAIL PROTECTED] wrote:
An example:
I have a join across 10 tables + 2 subselects across 4 tables
The idea is not to have it across multiple backends, keeping it in
sync with the tables in the database. That is not the point.
My problem is that I have seen many performance critical applications
sending just a few complex queries to the server. The problem is: If you
have many queries
Greg Copeland wrote:
Could you use some form of connection proxy where the proxy is actually
keeping persistent connections but your application is making transient
connections to the proxy? I believe this would result in the desired
performance boost and behavior.
Now, the next obvious
I guess we had this discussion before, but I have just gone through the
general list and I have encountered a problem I have run into VERY often
before.
Sometimes the planner does not find the best way through a query.
Looking at the problem of query optimization it is pretty obvious that
things
Jim Buttafuoco wrote:
Is this NOT what I have been after for many months now? I dropped the
tablespace/location idea before 7.2 because there
didn't seem to be any interest. Please see my past emails for the SQL commands and
on-disk directory layout I have
proposed. I have a working 7.2
Bingo - great :).
The I/O problem seems to be solved :).
A table space concept would be at the top of the wishlist :).
The symlink version is not very comfortable and I think it would be a
real hack.
Also: if we had a clean table space concept it would be a real advantage.
In the first place it would
Greg Copeland wrote:
I wouldn't hold your breath for any form of threading. Since PostgreSQL
is process based, you might consider having a pool of sort processes
which address this but I doubt you'll get anywhere talking about threads
here.
Greg
I came across the problem yesterday. We
Threads are not the best solution when it comes to portability. I
prefer a process model as well.
My concern was that a process model might be a bit too slow for that, but
if we had a pool of processes in memory this would be a wonderful thing.
Using it for small amounts of data is pretty useless - I
Threads are bad - I know ...
I like the idea of a pool of processes instead of threads - from my
point of view this would be useful.
I am planning to run some tests (GEQO, AIX, sorts) as soon as I have
time to do so (still too much work ahead before :( ...).
If I had time I'd love to do
Can anybody please tell me in detail? (Not just a pointer towards TODO items.)
1) What is a table space supposed to offer?
They allow you to define a maximum amount of storage for a certain set
of data.
They help you to define the location of data.
They help you to define how much data can be
2) What does a directory structure not offer that a table space does?
You need to use the command line in order to manage quotas - you might not
want that.
Mount a directory on a partition. If the data exceeds that partition, there
would be a disk error - like a tablespace getting
Quotas are handled differently on every platform (if available).
Yeah. But that's the sysadmin's responsibility, not the DBA's.
Maybe many people ARE the sysadmins of their PostgreSQL box ...
When developing a database with an open mind, people should try to see a
problem from more than
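For reference, the location part of this discussion eventually shipped as
tablespaces; a sketch of that later syntax (path and names illustrative;
quotas were not part of the feature):

    -- Put a table on a specific disk (PostgreSQL 8.0 and later).
    CREATE TABLESPACE fastdisk LOCATION '/mnt/ssd/pgdata';
    CREATE TABLE big_log (id integer, msg text) TABLESPACE fastdisk;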
CREATE INDEX could use many CPUs.
Maybe this is worth thinking about because it will speed up huge
databases and enterprise level computing.
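For reference, much later releases did exactly this; a sketch assuming
PostgreSQL 11 or newer (table and index names illustrative):

    -- Allow several parallel workers for the index build.
    SET max_parallel_maintenance_workers = 4;
    CREATE INDEX idx_big_t_x ON big_t (x);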
Best regards,
Hans-Jürgen Schönig
--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43
I have seen various benchmarks where XFS seems to perform best when it
comes to huge amounts of data and many files (due to its balanced internal
b+ trees).
Also, XFS seems to be VERY mature and very stable.
ext2/3 don't seem to be that fast in most of the benchmarks.
I did some testing with
AMD Athlon 500
512MB Ram
IBM 120GB IDE
Tested with:
BLCKSZ=8192
TESTCYCLES=50
Result:
Collecting sizing information ...
Running random access timing test ...
Running sequential access timing test ...
Running null loop timing test ...
random test: 2541
sequential test: 2455
null
Linux RedHat 7.3 (ext3, kernel 2.4.18-3)
512MB Ram
AMD Athlon 500
IBM 120GB IDE
[hs@backup hs]$ ./randcost.sh /data/db/
Collecting sizing information ...
Running random access timing test ...
Running sequential access timing test ...
random_page_cost = 0.901961
[hs@backup hs]$ ./randcost.sh
Christopher Kings-Lynne wrote:
Assuming it's giving out correct information, there seems to be a lot of
evidence for dropping the default random_page_cost to 1...
Chris
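One way to evaluate that before touching postgresql.conf is per-session; a
sketch, with table and column names illustrative:

    -- Lower the cost estimate for this session only and compare plans.
    SET random_page_cost = 1.0;
    EXPLAIN SELECT * FROM t WHERE indexed_col = 42;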
Some time ago Joe Conway suggested a tool based on a genetic algorithm
which tries to find the best parameter settings.
As
just don't look for information.
All in all I think that there are ways to find people contributing
financially to the project.
Regards,
Hans-Jürgen Schönig
Bruce Momjian wrote:
I think we are going to see more company-funded developers working on
PostgreSQL. There are a handful now
I have tried to compile PostgreSQL with the Intel C Compiler 6.0 for
Linux. During this process some errors occurred, which I have attached to
this email. I have compiled the sources using:
[hs@duron postgresql-7.2.1]$ cat compile.sh
#!/bin/sh
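# Hypothetical sketch only - the usual way such a script points autoconf
# at the Intel compiler (paths and flags are assumptions):
# CC=icc CFLAGS="-O2" ./configure --prefix=/usr/local/pgsql
# make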
I guess the website is really good. The only thing I'd do is to add a
section listing the core features of PostgreSQL - I think this could be
an important point.
In my opinion MySQL is not a competitor, and we should not benchmark
PostgreSQL against MySQL. Those features which are