Has anyone done benchmarking tests with Postgres running on Solaris and Linux
(Red Hat), assuming both environments have similar hardware, memory, processing
speed, etc.? From reading a few posts here, I can see Linux would outperform
Solaris because Linux is better at kernel caching than Solaris, which is
-- Sorry to repost; I just subscribed to the list. Hopefully it gets to the
list this time. --
Hi All,
We are evaluating the options of multiple databases vs. multiple schemas in a
single database cluster for a home-grown app that we developed. Each app
installation creates the same set of tables for each service.
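(To make the comparison concrete: with the schema approach, each service's
identical table set lives in its own namespace inside one database. A minimal
sketch, with hypothetical database, schema, and table names:)

    # one schema per service inside a single database;
    # each schema gets the same table set (names are hypothetical)
    psql appdb -c "CREATE SCHEMA service_a;
                   CREATE TABLE service_a.accounts (id serial PRIMARY KEY);"
    psql appdb -c "CREATE SCHEMA service_b;
                   CREATE TABLE service_b.accounts (id serial PRIMARY KEY);"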
On Mon, 22 Mar 2004, Tom Lane wrote:
> Ricardo Vaz Mannrich <[EMAIL PROTECTED]> writes:
> > Is FOREIGN KEY so slow like that?
>
> Not normally. Have you checked that the referencing and referenced
> columns are of the same datatype? Have you done ANALYZE on both tables
> recently?
Other question
Ricardo Vaz Mannrich <[EMAIL PROTECTED]> writes:
> Is FOREIGN KEY so slow like that?
Not normally. Have you checked that the referencing and referenced
columns are of the same datatype? Have you done ANALYZE on both tables
recently?
regards, tom lane
---
I'm trying to do a lot of inserts on a detail table, but with the foreign
key in place it's too slow.
I ran a few tests.
1) Master table with 290,000 rows and 4 columns (primary key is SERIAL)
2) Detail table, now with 1,300,000 rows and 3 columns (primary key is
SERIAL, and I have a master_id column here)
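(For reference, the two checks suggested in the reply upthread look roughly
like this; "master" and "detail" are reconstructions of the poster's table
names, so treat them as assumptions:)

    # compare the types of the referenced PK and the referencing column;
    # a datatype mismatch makes every FK check far more expensive
    psql mydb -c '\d master'
    psql mydb -c '\d detail'
    # refresh planner statistics on both sides of the FK
    psql mydb -c 'ANALYZE master; ANALYZE detail;'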
On Sat, Mar 20, 2004 at 07:52:08PM -0800, Ghazan Haider wrote:
> (1) Scaling in which direction will help PostgreSQL best, given that the
> queries are CPU-, memory-, I/O- and disk-intensive? I understand dual CPUs
> will help in certain circumstances, but say for large subqueries which
> are built in the
"Mark M. Huber" <[EMAIL PROTECTED]> writes:
> I did a normal vacuum and then I did a full, which hung on a table,
Sounds to me like some other transaction is holding a lock on that
table.
In recent releases you can look in the pg_locks system view to see
who's got the lock.
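(A minimal sketch of such a query; "mydb" is a placeholder, and the column
names assume a release new enough to have the pg_locks view:)

    # show which backend (pid) holds which lock on which table
    psql mydb -c "SELECT c.relname, l.pid, l.mode, l.granted
                  FROM pg_locks l JOIN pg_class c ON c.oid = l.relation;"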
Dohyung Kim <[EMAIL PROTECTED]> writes:
> 1. why 3 daemon processes start at startup
> 2. what the roles of each of the 3 daemon processes are
Postmaster, statistics collector, and statistics buffer process.
> 3. and how can I control the initial number of daemon processes?
You can't (other th
On Sat, Mar 20, 2004 at 08:12:02AM -0500, Al Cohen wrote:
> In our particular situation, being down for two hours or so is OK.
> What's really bad is losing data.
>
> The PostgreSQL replication solutions that we're seeing are very clever,
> but seem to require significant effort to set up and ke
That brings up a good point. It would be extremely helpful to add two
parameters to pg_dump: one, to set how many rows to insert before a
commit; and two, to live through X number of errors before dying (and
putting the "bad" rows in a file).
At 10:15 AM 3/19/2004, Mark M. Huber wrote:
>What
Marc Mitchell wrote:
| This is a follow-up to a problem first reported on 3/1/04. The problem
| has continued to occur intermittently and recently we experienced the
| first occurrence where the first column of a table was the column where
| the corrupte
What it was, I guess, is that pg_dump makes one large transaction, and our shell script
wizard wrote a Perl program to add a COMMIT every 500 rows, or whatever
you set. Also, I should have said that we were doing the recovery with the INSERT
statements created by pg_dump. So... my 5
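(For anyone wanting the same workaround without writing a Perl program, here
is a minimal awk sketch of the idea; the dump file name and the 500-row batch
size are assumptions, not the poster's actual script:)

    # wrap an INSERT-style pg_dump output in commits of 500 rows
    awk 'BEGIN { print "BEGIN;" }
         { print }
         /^INSERT INTO/ && ++n == 500 { print "COMMIT;"; print "BEGIN;"; n = 0 }
         END { print "COMMIT;" }' dump.sql > dump_batched.sql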
Hi All,
After a fresh installation, I used the pg_ctl script to start the postmaster process.
3 processes are then running as its initial daemons in Unix system memory, which
you can see with 'ps -ef | grep postmaster'.
I'd like to get information about the reasons below.
1. why 3 daemon processes start at startup
Hello pgsql-admin,
I have PostgreSQL 7.3.3 (build 2tr) installed on Trustix Secure Linux 2.0
and pgAdmin III v. 1.1.0 (Mar 17 2004), win32 binary package (running under Windows XP)
When I use special names for my tables or columns, like "user", "time", etc.
(PostgreSQL keywords), pgAdmin does
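(Whatever pgAdmin does with them, for context: those are reserved words, and
they have to be double-quoted when used as identifiers. A minimal illustration
with hypothetical database and column choices:)

    # reserved words like "user" and "time" must be double-quoted
    # when used as identifiers
    psql mydb -c 'CREATE TABLE "user" ("time" timestamp);'
    psql mydb -c 'SELECT "time" FROM "user";'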
Hi all, my question is related to two recent posts, which didn't quite
satisfy my curiosity.
(1) Scaling in which direction will help PostgreSQL best, given that the
queries are CPU-, memory-, I/O- and disk-intensive? I understand dual CPUs
will help in certain circumstances, but say for large subqueries which
Hi all,
postgresql.conf says:
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect, or use
# "pg_ctl reload".
Does that mean I can just use "pg_ctl reload"?
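(Yes, per the config comment quoted above. A minimal sketch, assuming a data
directory of /usr/local/pgsql/data; adjust the path to your installation:)

    # reload the config without restarting the postmaster
    pg_ctl reload -D /usr/local/pgsql/data
    # equivalent: SIGHUP the postmaster by hand
    # (the first line of postmaster.pid is its pid)
    kill -HUP `head -1 /usr/local/pgsql/data/postmaster.pid`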
Hi all
I have had this issue a couple of times now, on Solaris and on 7.3 of the db.
I did a normal vacuum and then I did a full, which hung on a table. I waited for
over an hour, but there was no activity going on, so I Ctrl-C'd the job in a terminal
window and killed off all associated processes
Hello
I was reading the documentation about backup and I couldn't find information
about differential backups. I need to know if it is possible to make a
differential backup in Postgres; the only tool that I know is pg_dump,
and this tool only makes a full backup.
If someone knows a method for making a di
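(For what it's worth, pg_dump really is full-dump only; a common workaround is
to dump fast-changing tables separately on a tighter schedule. A minimal
sketch, with hypothetical database, table, and path names:)

    # nightly full dump in compressed custom format
    pg_dump -Fc mydb > /backups/mydb_full_`date +%Y%m%d`.dump
    # hourly dump of just the busy table; a poor man's differential,
    # since pg_dump itself has no incremental mode
    pg_dump -t orders mydb > /backups/orders_`date +%Y%m%d%H`.sql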
We've been using PostgreSQL for some time, and it's been very, very
reliable. However, we're starting to think about preparing for
something bad happening - dead drives, fires, locusts, and whatnot.
In our particular situation, being down for two hours or so is OK.
What's really bad is losing data.
On Mar 18, 2004, at 3:08 PM, Mark M. Huber wrote:
It seems that any backup and recovery sucks. My main db backup takes
over two hours to complete and 13 hours to recover. What am I doing
wrong? Any hints? Ideas? Recommendations?
Increasing sort_mem dramatically (say, 128M) will greatly speed this up
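(The big win is that the index builds during a restore do large sorts. A
minimal sketch for raising sort_mem for the restore session only; the database
and file names are placeholders, and sort_mem is given in kB:)

    # 131072 kB = 128MB; applies only to this psql session
    PGOPTIONS="-c sort_mem=131072" psql mydb -f dump.sql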