[EMAIL PROTECTED] ("Bruno Almeida do Lago") wrote:
> Is there a real limit for max_connections? Here we have an Oracle server with
> up to 1200 simultaneous connections on it!
>
> "max_connections: exactly like previous versions, this needs to be set to
> the actual number of simultaneous connection
Neil Conway <[EMAIL PROTECTED]> writes:
> There is a TODO item about allowing the delaying of WAL writes. If we
> maintain the WAL invariant (that is, a WAL record describing a change
> must hit disk before the change itself does) but simply don't flush the
> WAL at transaction commit, we should
Magnus Hagander wrote:
Yes, fsync=false is very good for bulk loading *IFF* you can live with
data loss in case you get a crash during load.
It's not merely data loss -- you could encounter potentially
unrecoverable database corruption.
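To make the trade-off concrete, a bulk load with fsync disabled might look like this in postgresql.conf (a sketch only; the assumption is that the whole load can be re-run from source, since a crash with fsync off can corrupt the cluster):

```
# postgresql.conf -- bulk-load sketch (values/approach are an assumption,
# not a recommendation from this thread)
fsync = false     # no forced WAL flushes; fast, but a crash mid-load can
                  # leave the cluster unrecoverably corrupt
# ...reload the config, run the load, then restore:
fsync = true      # and reload again before normal operation
```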
John Allgood wrote:
Here is a summary of the cluster suite from Red Hat. All 9 databases
will be on the primary server; the secondary server I have is the
failover. They don't actually share the partitions at the same time.
When you have some type of failure the backup server takes over. Once
you set up the hardwa
Bruno,
> For example, 150 active connections on a medium-end
> 32-bit Linux server will consume significant system resources, and 600 is
> about the limit."
That is, "600 is about the limit for a medium-end 32-bit Linux server". Sorry
if the implication didn't translate well. If you use beefie
[EMAIL PROTECTED] wrote:
I got the answer: in the config module of postgresql-webmin there is a
check box for "Use DBI to connect if available?" (yes/no). The default is
yes, but if I chose no everything went fine.
I also tested it on the desktop machine and got the same er
John Allgood wrote:
This is some good info. The type of attached storage is a Kingston 14-bay
Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. From the
way it is being explained, I think I should build a mirror with two
disks for the pg_xlog, stripe and mirror the rest, and put all my
databases i
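For a 14-drive shelf, the layout being described could be sketched like this (device names and mount points are hypothetical):

```
/dev/md0   RAID1   2 x 36GB   -> $PGDATA/pg_xlog   (dedicated WAL mirror)
/dev/md1   RAID10  12 x 36GB  -> $PGDATA           (tables and indexes)
```

Separating the WAL mirror keeps the log's sequential writes from competing with random data-file I/O on the striped set.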
On Wed, Feb 23, 2005 at 02:15:52PM -0500, John Allgood wrote:
> using custom scripts. Maybe I have given a better explanation of the
> application. My biggest concern is how to partition the shared storage
> for maximum performance. Is there a real benefit to having more than one
> raid5 partiti
John Allgood wrote:
I think maybe I didn't explain myself well enough. At most we will
service 200-250 connections across all 9 databases mentioned. The
database we are building is for a trucking company. Each of the
databases represents a different division, with one master database that
everything is updated
Christopher Browne wrote:
Gaetano Mendola <[EMAIL PROTECTED]> writes:
I graph my disk usage, and it has been a ramp for a week now;
I'll continue to wait in order to see if it decreases.
I was expecting a steady state at something like 4 GB
(after a full vacuum and reindex) + 10% = 4
Rod Taylor <[EMAIL PROTECTED]> writes:
> The kernel also starts to play a significant role with a high number of
> connections. Some operating systems don't perform as well with a high
> number of processes (process handling, scheduling, file handles, etc.).
Right; the main problem with having lot
On Wed, 2005-02-23 at 15:26 -0300, Bruno Almeida do Lago wrote:
> Is there a real limit for max_connections? Here we have an Oracle server with
> up to 1200 simultaneous connections on it!
If you can reduce them by using something like pgpool between PostgreSQL
and the client, you'll save some head
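As an illustration of the pooling idea, a minimal pgpool configuration might look like this (parameter names are from pgpool-II and the values are made up; check your version's documentation):

```
# pgpool.conf -- hypothetical sketch: many clients share few backends
num_init_children = 200        # client connections pgpool will accept
max_pool = 4                   # cached PostgreSQL connections per child
backend_hostname0 = 'dbhost'   # the real PostgreSQL server
backend_port0 = 5432
```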
"Bruno Almeida do Lago" <[EMAIL PROTECTED]> writes:
> Is there a real limit for max_connections? Here we have an Oracle server with
> up to 1200 simultaneous connections on it!
[ shrug... ] If your machine has the beef to run 1200 simultaneous
queries, you can set max_connections to 1200.
The poin
Is there a real limit for max_connections? Here we have an Oracle server with
up to 1200 simultaneous connections on it!
"max_connections: exactly like previous versions, this needs to be set to
the actual number of simultaneous connections you expect to need. High
settings will require more shared
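The point about shared resources can be illustrated with a postgresql.conf fragment (values are illustrative, not tuned advice):

```
# postgresql.conf -- illustrative values only
max_connections = 600   # each slot costs shared memory and semaphores,
                        # and each active backend is an OS process
shared_buffers = 10000  # counted in 8 kB pages in 7.x/8.0; usually
                        # needs raising along with max_connections
```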
On Wed, Feb 23, 2005 at 11:39:27AM -0500, John Allgood wrote:
> Hello All
>
>I am setting up a hardware clustering solution. My hardware is Dual
> Opteron 550 with 8GB RAM. My external storage is a Kingston Fibre
> channel Infostation, with 14 15,000 RPM 36GB drives. The OS we are running
> is
Sorry, just a quick tip, because I haven't seen whether you have already
done the pg_ctl stop && pg_ctl start ...
(I mean, did you reload your conf settings?)
Regards,
Guido
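For reference, the reload Guido is hinting at could be done like this (the data directory path is an assumption):

```
# pick up changed postgresql.conf settings without dropping connections
pg_ctl -D /var/lib/pgsql/data reload
# or, for settings that require a full restart:
pg_ctl -D /var/lib/pgsql/data stop
pg_ctl -D /var/lib/pgsql/data start
```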
> > > I used your perl script and found the error =>
> > > [EMAIL PROTECTED] tmp]# perl relacl.pl
> > > DBI connect('dbname=template1
David Haas <[EMAIL PROTECTED]> writes:
> I'm comparing the speeds of the following two queries on 7.4.5. I was
> curious why query 1 was faster than query 2:
> query 1:
> Select layer_number
> FROM batch_report_index
> WHERE device_id = (SELECT device_id FROM device_index WHERE device_name
>
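The comparison presumably pitted an uncorrelated subquery against the equivalent join, something like the following sketch (the device name is a hypothetical placeholder; the original value was truncated):

```sql
-- query 1: uncorrelated subquery (the planner can run it once,
-- then index-scan batch_report_index on the resulting device_id)
SELECT layer_number
FROM batch_report_index
WHERE device_id = (SELECT device_id
                   FROM device_index
                   WHERE device_name = 'some_device');

-- query 2 (assumed form): the same result written as a join
SELECT b.layer_number
FROM batch_report_index b
JOIN device_index d ON d.device_id = b.device_id
WHERE d.device_name = 'some_device';
```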
"Luke Chambers" <[EMAIL PROTECTED]> writes:
> The following query plans both result from the very same query run on
> different servers. They obviously differ drastically, but I don't know why
> as one db is a slonied copy of the other with identical postgresql.conf
> files.
There's an order-of-magnitu
Hello All
I am setting up a hardware clustering solution. My hardware is Dual
Opteron 550 with 8GB RAM. My external storage is a Kingston Fibre
channel Infostation, with 14 15,000 RPM 36GB drives. The OS we are running
is Red Hat ES 3.0, clustering using Red Hat Cluster Suite. Postgres
Version i
Christopher Browne wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>
>>Tom Lane wrote:
>>
>>>Gaetano Mendola <[EMAIL PROTECTED]> writes:
>>>
>>>
I'm using only pg_autovacuum. I expected disk usage to reach
a steady state, but it is not. PG engine: 7.4.5
>>>
>>>
>>>One data point do
> Well, sure looks like you only have one running. Your data directory is
> /var/lib/pgsql/data, so let's see the files:
>
> /var/lib/pgsql/data/pg_hba.conf
> /var/lib/pgsql/data/pg_ident.conf
> /var/lib/pgsql/data/postmaster.opts
>
> Might also be useful to know any nondefault settings in postgresql
> > You can *never* get above 80 without using write cache,
> regardless of
> > your OS, if you have a single disk.
>
> Why? Even with, say, a 15K RPM disk? Or the ability to
> fsync() multiple concurrently-committing transactions at once?
Uh. What I meant was a single *IDE* disk. Sorry. Been
> Hi,
>
> I changed fsync to false. It took 8 minutes to restore the
> full database.
> That is 26 times faster than before. :-/ (approx. 200 tps)
> With background writer it took 12 minutes. :-(
That seems reasonable.
> The funny thing is, I had a VMWARE emulation on the same
> Windows machi
Hi, Asatryan,
Asatryan, Anahit schrieb:
> I am running PostgreSQL 8.0.1 under Windows 2000. I want to use COPY
> FROM STDIN from a Java application, but it doesn't work; it throws:
>
> "org.postgresql.util.PSQLException: Unknown Response Type G" error.
Currently, there is no COPY sup
Hi, Magnus & all,
Magnus Hagander schrieb:
> 20-30 transactions is about what you'll get on a single disk on Windows
> today.
> We have a patch in testing that will bring this up to about 80.
> You can *never* get above 80 without using write cache, regardless of
> your OS, if you have a single di
[EMAIL PROTECTED] wrote:
The cluster table only has 11 rows, so I'm not sure an index would
help. The sensorreport table has 15,000,000 rows so that's why I've
got the index there.
Ah - only 11?
on the foreign key from sensortable.
Again, is there any way to get the delete to use the
idx_sensorrep
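The index being asked about would be something like this (the column name is a guess, since sensorreport's foreign-key column isn't shown in the thread):

```sql
-- hypothetical: index the foreign-key column so a DELETE on "cluster"
-- can find matching sensorreport rows without a sequential scan
CREATE INDEX idx_sensorreport_cluster ON sensorreport (cluster_id);
```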
> -Original Message-
> From: Richard Huxton [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 23, 2005 3:40 AM
> To: [EMAIL PROTECTED]
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Joins, Deletes and Indexes
>
> [EMAIL PROTECTED] wrote:
> > I've got 2 tables defin
Hi,
I changed fsync to false. It took 8 minutes to restore the full database.
That is 26 times faster than before. :-/ (approx. 200 tps)
With background writer it took 12 minutes. :-(
The funny thing is, I had a VMware emulation on the same Windows machine,
running Red Hat, with fsync turned on. I
[EMAIL PROTECTED] wrote:
I've got 2 tables defined as follows:
CREATE TABLE "cluster"
(
id int8 NOT NULL DEFAULT nextval('serial'::text),
clusterid varchar(255) NOT NULL,
...
CONSTRAINT pk_cluster PRIMARY KEY (id)
)
CREATE TABLE sensorreport
(
id int8 NOT NULL DEFAULT nextval('serial'::
Asatryan, Anahit wrote:
I am running PostgreSQL 8.0.1 under Windows 2000. I want to use COPY
FROM STDIN from a Java application, but it doesn't work; it
throws:
"org.postgresql.util.PSQLException: Unknown Response Type G" error.
I don't think that there is a "STDIN" if you are executing