On 1/8/16, 12:51 PM, "Simon Riggs" <si...@2ndquadrant.com> wrote:
On 8 January 2016 at 18:56, Joshua D. Drake <j...@commandprompt.com> wrote:
On 01/08/2016 10:42 AM, Andrew Biggs (adb) wrote:
Installed 9.5 to CentOS7 via yum, and tried going through the
On 1/8/16, 10:53 AM, Rob Sargent wrote:
On 01/08/2016 10:39 AM, Andrew Biggs (adb) wrote:
Can anyone tell me if PostgreSQL 9.5 supports (either natively or by extension)
the BDR functionality?
I tried it out and ran into issues, but it could well have been I was doing
something wrong.
Thanks!
Andrew
it all depends on the number of drives, the type of drives,
and whether RAID is being done in hardware or software.
A three-drive RAID 5 array in software with older drives
is probably going to be slower than a single 7200 RPM ATA100 drive.
Not that this is what you have, just pointing out that it's po
When in doubt, try the EXPLAIN command.
Not exactly sure about postgres, but in general LIKE can
only use an index in the case of LIKE 'Something%'.
LIKE '%Something' or LIKE '%Something%'
won't use an index, since it would have to scan the entire
index to find all matches.
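To illustrate the point above — this is a sketch assuming a hypothetical table `users` with a btree index on `name` (table, column, and index names are made up, not from the original thread):

```sql
-- Anchored pattern: all matches share the prefix 'Something',
-- so the planner can walk just that range of the btree index.
EXPLAIN SELECT * FROM users WHERE name LIKE 'Something%';

-- Leading wildcard: no usable prefix, so this typically falls
-- back to a sequential scan of the whole table.
EXPLAIN SELECT * FROM users WHERE name LIKE '%Something';
```

Note that in PostgreSQL, whether the anchored form actually uses the index also depends on locale: in a non-C locale you generally need an index declared with `text_pattern_ops` for LIKE to use it.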
Alex.
On Wed, 14 Mar
Have you done any benchmarks with a prototype of your application?
Based on some of the numbers I've been seeing in my testing,
I would not be surprised if a single PIII 1GHz box with a decent disk
(ATA100 or SCSI 160) would handle the load you describe, and is way
cheaper than some big SMP
I think there's an option to log the pid in postgresql.conf in 7.1
log_pid = true
Alex.
On Wed, 7 Mar 2001, Jean-Christophe Boggio wrote:
>
> When you log pg's activity with statistics enabled you can get this
> kind of report :
>
> StartTransactionCommand
> query: select * from identity wher
As of version 7.1, postgres uses write-ahead logging, which is what
you are looking for.
As for backups, currently your only option is full dumps of the entire
db or full dumps of individual tables.
Some folks are working around this by using replication to
a secondary server with rserv (in contr
Hi, I'm trying to figure out what some reasonable settings would
be for kernel parameters like shared memory and max open files.
I've read the section of the manual but it doesn't seem to give
any rule of thumb based on number of users or frequency of queries.
With my load testing I definitely bu
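For reference, the SysV kernel limits the question is about are usually raised on Linux with entries like the following; the values shown are purely illustrative placeholders, not a recommendation for any particular workload:

```
# /etc/sysctl.conf (Linux) -- illustrative values only
kernel.shmmax = 134217728   # max size of a single shared memory segment, in bytes
kernel.shmall = 32768       # total shared memory pages allowed system-wide
fs.file-max = 65536         # system-wide open file descriptor limit

# apply without rebooting:  sysctl -p
```

The right numbers depend on shared_buffers and max_connections in postgresql.conf, which is presumably why the manual avoids a single rule of thumb.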
every story: your side, their side, the
> truth, and what really happened.
> - Original Message -
> From: "adb" <[EMAIL PROTECTED]>
> To: "Gregory Wood" <[EMAIL PROTECTED]>
> Cc: "PostgreSQL-General" <[EMAIL PROTECTED]>
> Sent: Friday, Ma
I personally would like to see 8-byte OIDs or at least int8 sequences; I'm
a little worried about the pain of managing a potential rollover when I'm
using sequences as a replication key between servers.
Alex.
On Fri, 2 Mar 2001, Peter Eisentraut wrote:
> Rod Taylor writes:
>
> > Someones boun
I agree that they are very handy. They become a major pain in
the butt when you start doing replication between servers.
For instance if you fail over to a standby server and you
forget to update its sequence first, merging data later
becomes a nightmare. I'd like to have int8 sequences and
ba
If there's no simple way to do this, I think I found an example of what I
need in function printQuery in psql/print.c
Alex.
On Fri, 23 Feb 2001, adb wrote:
>
> Is there an easy way in libpq to get the results from
> any query as strings.
>
> Imagine a cgi where you inpu
I've noticed that select max(the_primary_key) from some_table
does a table scan. Is there any plan to implement max/min calculations
using index lookups if the appropriate index exists?
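A common workaround for this is to rewrite the aggregate as an ORDER BY ... LIMIT query, which the planner can satisfy with an index scan; the table and column names here are just the ones from the question:

```sql
-- Instead of:
--   SELECT max(the_primary_key) FROM some_table;   -- full table scan
-- use:
SELECT the_primary_key
FROM some_table
ORDER BY the_primary_key DESC
LIMIT 1;   -- walks the index from the high end and touches one row
```

(Much later PostgreSQL releases, from 8.1 on, learned to perform this min/max rewrite automatically, but in the 7.1 era being discussed the manual rewrite was the standard advice.)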
Thanks,
Alex.
I'm willing to do some writing if someone will explain things to me.
I've pretty much dissected all the SQL used by Rserv.pm to see what it
does, so I've got a decent head start.
The two questions that are holding me up at this point are:
1. which column do you specify for setting up the replicatio
I've heard of other people running databases on a NetApp filer.
I think this is generally OK because the NetApp has a special
write cache in NVRAM, and that's what makes it so fast.
On Fri, 9 Feb 2001, tc lewis wrote:
>
> On Fri, 9 Feb 2001, Shaw Terwilliger wrote:
> > I've been using PostgreSQL
On Fri, 2 Feb 2001, Alex Pilosov wrote:
> On Fri, 2 Feb 2001, adb wrote:
>
> > Is there any way to have up to the minute recovery from a disk
> > failure in postgres?
> RAID1? :)
>
> > Is there a timeframe for the recover from WAL feature?
> If you are asking if WAL
Is there any way to have up to the minute recovery from a disk
failure in postgres?
Is there a timeframe for the recover from WAL feature?
Thanks,
Alex.
Sounds like you need to use a trigger set to
fire after the insert on table A.
Alex.
On Tue, 30 Jan 2001, Nelio Alves Pereira Filho wrote:
> I read at the docs that rules are executed before the query that
> generated them. Is there any way to change this?
>
> Here's my problem: I have two tabl
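The trigger suggestion above might look something like this sketch; the tables `a` and `b` and their columns are made up for illustration, since the original question is truncated:

```sql
-- Hypothetical example: copy each new row in table a into table b
-- *after* the insert has happened, rather than before (as a rule would fire).
CREATE FUNCTION copy_to_b() RETURNS trigger AS $$
BEGIN
    INSERT INTO b (id, val) VALUES (NEW.id, NEW.val);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_after_insert
    AFTER INSERT ON a
    FOR EACH ROW
    EXECUTE PROCEDURE copy_to_b();
```

(Modern syntax shown; in the 7.1 era of this thread the function would be declared `RETURNS opaque` with a single-quoted body, as `$$` dollar quoting only arrived in 8.0.)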
Hi again, I'm getting stuff like
ERROR: cannot write block 1056 of tradehistory [testdb] blind: Too many
open files in system
and
postmaster: StreamConnection: accept: Too many open files in system
I understand that I need to up the max number of open files in the linux
kernel but I'd like t
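On Linux kernels of that era, the system-wide limit behind these "Too many open files in system" errors lives in proc; something like the following, where the number is an illustrative value rather than a recommendation:

```
# check the current system-wide open file limit
cat /proc/sys/fs/file-max

# raise it (as root)
echo 65536 > /proc/sys/fs/file-max
```

Note this is the system-wide cap, distinct from the per-process limit set with `ulimit -n`; the postmaster and each backend are also bounded by the per-process limit.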
Hi, I've read the administrator guide section on backups and
I'm wondering is there an easy way to do backups of the
transaction log, similar to Sybase or Oracle?
I imagine I would use pg_dumpall nightly but I'm wondering if there's
something else to run every 10 minutes or so to dump the log. Or
Has anyone implemented a form of one way replication with the new
write ahead logging in 7.1?
I'm looking for a way to have a warm or hot standby server waiting to
take over in the event of a machine failure.
I know there's discussion about adding replication to future releases,
I'm just wonderin