Re: [GENERAL] PgPool or alternatives

2017-01-21 Thread Simon Windsor

Hi

Thanks for the reply. We were not planning to use pgPool's connection
pool mode, but its replication mode.


Our tests with pgPool allow us to install a backup DB via pgPool on each
node, and overnight test loads of 10+GB of inserts/updates/deletes all
work fine, with only a slight loss of performance versus a standalone DB.


I was wondering if there is another option that will allow me to spool
all ALTER|CREATE|DELETE|DROP|INSERT|UPDATE commands to all nodes, and
SELECTs to any one of the connected nodes. The application can actually
handle separate READ|WRITE nodes, given how it was written for Oracle.
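
(For illustration, a minimal sketch of the pgpool.conf settings that give this
behaviour; the two backend host names and the pgpool-II 3.x style parameter
names are assumptions:)

backend_hostname0 = 'db-node-1'     # hypothetical node names
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'db-node-2'
backend_port1 = 5432
backend_weight1 = 1

replication_mode = on               # writes (DDL/INSERT/UPDATE/DELETE) go to every backend
load_balance_mode = on              # SELECTs are spread across the attached backends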


Simon

On 21/01/2017 20:09, Stephen Frost wrote:

Simon,

* Simon Windsor (simon.wind...@cornfield.me.uk) wrote:

My employer wants to move from an in-house Oracle solution to a
cloud-based Postgres system. The system will involve a number of
data loaders running 24x7 feeding several Postgres databases that
will be used by internal applications and external customer
applications.

For the record, internal and external applications make heavy use of
temporary tables, which are session-related. This requirement means I
cannot consider normal replication methods.

Is PgPool the only viable option that will allow the data loaders to
feed [n] databases that will be functionally identical?

I'm not sure what you mean by 'functionally identical', but I wouldn't
generally consider that to be a property of pgpool (or pgbouncer, or any
other connection pooler, really).

That said, my general feeling is that pgbouncer tends to be simpler,
faster, and less likely to introduce oddities that you don't expect.
The 'session' mode might work for you, though it's debatable whether
that really helps you all that much.  'transaction' mode is what I
usually recommend as it allows idle connections to be handled by
pgbouncer (unlike 'session' mode), but there are caveats to using that
mode, of course.
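
(For illustration, the relevant pgbouncer.ini settings look roughly like this;
the database name, host and paths are placeholders:)

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction       ; or 'session', with the caveats above
max_client_conn = 1000
default_pool_size = 20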

I'm a bit curious where you're thinking of using the connection pooler
also though.  If you have data loaders running 24x7 feeding data
constantly to PG, do you really need a connection pooler for those?
Connection poolers make a lot of sense for environments where there's
lots of down-time on the connection, but the less down-time, the less
they make sense.

Thanks!

Stephen


--
Simon Windsor

Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 0755 197 9733


“There is nothing in the world that some man cannot make a little worse and 
sell a little cheaper, and he who considers price only is that man's lawful 
prey.”





[GENERAL] PgPool or alternatives

2017-01-21 Thread Simon Windsor

Hi

My employer wants to move from an in-house Oracle solution to a cloud-based
Postgres system. The system will involve a number of data loaders running
24x7 feeding several Postgres Databases that will be used by internal
applications and external customer applications.


For the record, internal and external applications make heavy use of
temporary tables, which are session-related. This requirement means I
cannot consider normal replication methods.


Is PgPool the only viable option that will allow the data loaders to feed
[n] databases that will be functionally identical?


Simon

--
Simon Windsor

Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 0755 197 9733


“There is nothing in the world that some man cannot make a little worse and 
sell a little cheaper, and he who considers price only is that man's lawful 
prey.”





Re: [GENERAL] Large Objects and and Vacuum

2012-01-02 Thread Simon Windsor
Hi

Thanks for the response.

I am new to a small IT company that has recently migrated an Oracle-based
system to Postgres. The system stores full XML responses, ranging in size
from a few KB to over 55MB, and a subset of key XML fields is stored on a
more permanent basis.

The database design was thus determined by the previous Oracle/Java system,
with empty LOBS being created and data being streamed in.

The data only has to be kept for a few days, and generally the system is
performing well, but as stated in the email, regular use of vacuumlo, vacuum
and autovacuum still leaves the free OS disc space slowly shrinking.

As a last resort this week, I'm going to get 500+GB of extra file store
added, add a tablespace and move pg_largeobject to this area, then use
CLUSTER to rebuild pg_largeobject back in the default tablespace. This
should fix things, I hope, and if needed I'll use CLUSTER regularly.
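
(For reference, the maintenance pass that goes with this plan, with mydb
standing in for the real database name; VACUUM FULL takes an exclusive lock
and needs room for a full copy of the table:)

vacuumlo -v mydb                                  # drop large objects that are no longer referenced
psql -d mydb -c 'VACUUM FULL pg_largeobject;'     # rewrite pg_largeobject and return the space to the OS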

Simon 


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: 02 January 2012 11:18
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Large Objects and and Vacuum

On 12/30/11 3:54 PM, Simon Windsor wrote:
 I am struggling with the volume and number of XML files a new 
 application is storing.

how big are these XML files?  large_object was meant for storing very 
large files, like videos, etc. multi-megabyte to gigabytes.   XML stuff 
is typically a lot smaller than that.

Me, I would be decomposing the XML in my application and storing the data in
proper relational tables, and only generate XML output if I absolutely had
to send it to another system beyond my control, as it's easily one of the most
inefficient methods of data representation out there.



-- 
john r pierceN 37, W 122
santa cruz ca mid-left coast




[GENERAL] Large Objects and and Vacuum

2011-12-30 Thread Simon Windsor
Hi

 

I am struggling with the volume and number of XML files a new application is
storing. The table pg_largeobject is growing fast, and despite the efforts
of vacuumlo, vacuum and autovacuum it keeps on growing in size.

 

The main tables that hold large objects are partitioned and every few days I
drop partition tables older than seven days, but despite all this, the
system is growing in size and not releasing space back to the OS.

 

Using either VACUUM FULL or CLUSTER to fix pg_largeobject will require a
large amount of work space, which I do not have on this server.

 

Is there another method of scanning Postgres tables, moving active blocks
and releasing storage back to the OS?
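
(For reference, the size in question can be checked with the standard
built-ins pg_total_relation_size and pg_size_pretty:)

SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));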

 

Failing this, I can see an NFS mount being required.

 

Simon

 

 

Simon Windsor

Eml: simon.wind...@cornfield.org.uk

Tel: 01454 617689

Mob: 07590 324560

 

There is nothing in the world that some man cannot make a little worse and
sell a little cheaper, and he who considers price only is that man's lawful
prey.

 



[GENERAL] Vacuum and Large Objects

2011-12-14 Thread Simon Windsor
Hi

 

I am having problems recovering storage from a Postgres 9.0.5 database that
is used to hold large XML blocks for a week, before they are archived
offline.

 

The main tables are partitioned in daily partitions, and these are easy to
manage; however, the DB keeps growing despite using VACUUM (daily at 07:00)
and autovacuum (which does not seem to run, although the process is running).
The system is insert-only, and partitions are dropped when over seven days
old.

 

I believe the issue lies with pg_largeobject, which is split across 88 files
of approx. 1GB each.

 

The Postgres settings are default, EXCEPT

 

grep ^[a-z] postgresql.conf

listen_addresses = '*'                   # what IP address(es) to listen on;
port = 5432                              # (change requires restart)
max_connections = 1000                   # (change requires restart)
shared_buffers = 256MB                   # min 128kB
work_mem = 4MB                           # min 64kB
maintenance_work_mem = 256MB             # min 1MB
vacuum_cost_delay = 20ms                 # 0-100 milliseconds
checkpoint_segments = 32                 # in logfile segments, min 1, 16MB each
checkpoint_completion_target = 0.9       # checkpoint target duration, 0.0 - 1.0
checkpoint_warning = 60s                 # 0 disables
archive_mode = off                       # allows archiving to be done
constraint_exclusion = partition         # on, off, or partition
log_destination = 'stderr'               # Valid values are combinations of
logging_collector = on                   # Enable capturing of stderr and csvlog
silent_mode = on                         # Run server silently.
log_checkpoints = on
log_line_prefix = '%t %d %u '            # special values:
log_statement = 'none'                   # none, ddl, mod, all
track_activities = on
track_counts = on
autovacuum = on                          # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 250        # -1 disables, 0 logs all actions and
autovacuum_max_workers = 3               # max number of autovacuum subprocesses
autovacuum_naptime = 3min                # time between autovacuum runs
autovacuum_vacuum_threshold = 500        # min number of row updates before
autovacuum_analyze_threshold = 100       # min number of row updates before
autovacuum_vacuum_scale_factor = 0.1     # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.05   # fraction of table size before analyze
autovacuum_vacuum_cost_delay = 5ms       # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 200       # default vacuum cost limit for
statement_timeout = 0                    # in milliseconds, 0 is disabled
datestyle = 'iso, dmy'
lc_messages = 'en_GB.UTF-8'              # locale for system error message
lc_monetary = 'en_GB.UTF-8'              # locale for monetary formatting
lc_numeric = 'en_GB.UTF-8'               # locale for number formatting
lc_time = 'en_GB.UTF-8'                  # locale for time formatting
default_text_search_config = 'pg_catalog.english'

 

Besides running VACUUM FULL  pg_largeobject;, is there a way I can get
autovacuum to start and clear this up?
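
(For reference, a quick way to check whether autovacuum has ever touched the
table; pg_stat_all_tables is standard, though the n_dead_tup column needs
8.3 or later:)

SELECT relname, last_vacuum, last_autovacuum, n_dead_tup
FROM pg_stat_all_tables
WHERE relname = 'pg_largeobject';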

 

All the best

 

Simon 

 

Simon Windsor

Eml: simon.wind...@cornfield.org.uk

Tel: 01454 617689

Mob: 07590 324560

 

There is nothing in the world that some man cannot make a little worse and
sell a little cheaper, and he who considers price only is that man's lawful
prey.

 



[GENERAL] Postgres performance and the Linux scheduler

2011-06-16 Thread Simon Windsor
Hi

 

Can the performance of Postgres be boosted, especially on busy systems,
by using the non-default DEADLINE I/O scheduler?
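
(For reference, the scheduler can be inspected and switched per block device
through sysfs; sda below is only an example device:)

cat /sys/block/sda/queue/scheduler                 # current scheduler shown in brackets
echo deadline > /sys/block/sda/queue/scheduler     # switch this device to deadline (as root)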

 

Simon  

 

 

Simon Windsor

Eml: simon.wind...@cornfield.org.uk

Tel: 01454 617689

Mob: 07590 324560

 

There is nothing in the world that some man cannot make a little worse and
sell a little cheaper, and he who considers price only is that man's lawful
prey.

 



[GENERAL] Postgres or Greenplum

2011-06-07 Thread Simon Windsor
Hi

 

I have been using Postgres for many years and have recently discovered
Greenplum, which appears to be a heavily modified, Postgres-based, multi-node
DB that is VERY fast.

 

All the tests that I have seen suggest that Greenplum, when implemented on a
single server like Postgres but with several separate installations, can
be many times faster than Postgres. This is achieved by using multiple
DBs to store the data and using multiple logger and writer processes to
make full use of all the resources of the server.

 

Has the Postgres development team ever considered using this technique of
splitting the data into separate sequential files that can be accessed by
multiple writer/reader processes? If so, what was the conclusion?

 

Finally,  thanks for all the good work over the years!

 

Simon 

 

Simon Windsor

Eml: simon.wind...@cornfield.org.uk

Tel: 01454 617689

Mob: 07590 324560

 

There is nothing in the world that some man cannot make a little worse and
sell a little cheaper, and he who considers price only is that man's lawful
prey.

 



[GENERAL] PIVOT tables and crosstab

2009-12-10 Thread Simon Windsor
Hi

Are there any plans to embed crosstab within the main release of
Postgres, rather than as a contrib component?

Also, are there plans to enhance crosstab along the lines of Oracle 11g's
PIVOT command?
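
(For reference, the contrib/tablefunc form being discussed; the sales table
and its columns are made up for the example:)

SELECT *
FROM crosstab(
    'SELECT region, quarter, total FROM sales ORDER BY 1, 2'
) AS ct(region text, q1 numeric, q2 numeric, q3 numeric, q4 numeric);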

All the best Simon

-- 
Simon Windsor
Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 07590 324560

“There is nothing in the world that some man cannot make a little worse
and sell a little cheaper, and he who considers price only is that man's
lawful prey.




[GENERAL] Performance of views

2008-11-02 Thread Simon Windsor

Hi

Generally, I have avoided using VIEWS within application code and only 
used them for client interfaces, the sole reason being the performance 
of views against tables.


Changes to the database over the past few years appear to have improved the
performance of views, but I am still not comfortable using VIEWS
within application code. The main reasons I have are:


  * Data within a view is not necessarily sequential, unlike a table
  * Higher overhead mapping to the original tables and indexes
  * Danger of linking views and tables and not utilising the underlying
tables properly

Am I right to avoid VIEWS within application code?
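
(For what it is worth, the overhead can be checked directly by comparing
plans; the orders table and view below are made up for the example:)

CREATE VIEW active_orders AS
    SELECT * FROM orders WHERE status = 'ACTIVE';

EXPLAIN SELECT * FROM active_orders WHERE customer_id = 42;
EXPLAIN SELECT * FROM orders WHERE status = 'ACTIVE' AND customer_id = 42;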

Simon

--
Simon

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




[GENERAL] pg_hba problem

2005-06-20 Thread Simon Windsor
Hi

I have just installed 8.0.3, and my standard pg_hba config no longer works.

I usually enable the postgres account using

# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# local is for Unix domain socket connections only
local   all postgres  ident sameuser


and use other lines for named databases.

However, with 8.0.3, the format appears to have changed to

#   host  DBNAME  IP_ADDRESS  ADDRESS_MASK  AUTHTYPE  [AUTH_ARGUMENT]
#   local  DBNAME  AUTHTYPE  [AUTH_ARGUMENT]

with no mention of USER. I have tried to use pg_ident to get round this, 

pg_ident
map1   postgres postgres

pg_hba
host   all   127.0.0.1   255.255.255.255   ident   map1

but I keep getting the error

psql: No pg_hba.conf entry for host localhost, user postgres, database 
template1

Any ideas how I can safely provide global access to the postgres account?
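
(For reference, a sketch of 8.0-style entries using the five columns with the
USER field and the ident map above; the loopback address is just an example:)

# TYPE  DATABASE  USER      CIDR-ADDRESS      METHOD
local   all       postgres                    ident sameuser
host    all       postgres  127.0.0.1/32      ident map1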

Simon


-- 
Simon Windsor
Email: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07720 447385



[GENERAL] Postgres 8.1

2005-06-12 Thread Simon Windsor

Hi

Has a roadmap and timescale been released for Postgres 8.1?

All the best

Simon

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599

[GENERAL] OT: phpPgAdmin

2005-04-28 Thread Simon Windsor
Hi

I apologise for this off-topic question.

I am setting up phpPgAdmin on a reporting server to access other, remote
postgres servers. I want to use a restricted, password user account and
use pg_hba.conf to restrict access to each database.

For the user account, what level of permissions will I need? I have
tried it out with 'SELECT' on all tables/views granted to the
user, but when I try to look at processes, I get a list of
'insufficient privilege' values when trying to look at the SQL.

Any ideas?

Thanks

Simon


-- 
Simon Windsor
Email: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




[GENERAL] Postgres 8 and Slony-1

2004-12-10 Thread Simon Windsor
Hi
Any news on when Postgres 8 will be released?
Also, are there any plans to include Slony with it, as part of the same
source download, or as part of a group of 'matched' source
files/rpms/dpkgs etc?

Thanks All
Simon
--
Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599


[GENERAL] coalesce and nvl question

2004-06-23 Thread Simon Windsor
Hi

Is there a standard Postgres method of replacing empty strings?

In Oracle, nvl handles nulls and empty strings, as does ifnull() in
MySQL, but in Postgres coalesce only handles nulls.

If not, is the best solution to create a plpgsql function, ie

CREATE FUNCTION isEmpty (character varying, character varying) RETURNS
character varying
AS '
declare
    fnRetTrue  alias for $1;
    fnRetFalse alias for $2;
begin
    -- return the first argument unless it is null or an empty string
    if fnRetTrue is not null and fnRetTrue <> '''' then
        return fnRetTrue;
    else
        return fnRetFalse;
    end if;
end;
'
LANGUAGE plpgsql;
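
(For comparison, a sketch of the same fallback using only built-ins; col,
some_table and the literal fallback are placeholders:)

SELECT COALESCE(NULLIF(col, ''), 'fallback') FROM some_table;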

Thanks

Simon

-- 
Simon Windsor
Email: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




Re: [GENERAL] coalesce and nvl question

2004-06-23 Thread Simon Windsor
Hi

I understand that null and '' are different, and the MySQL and Oracle
functions are confusing, but my question was not about replacing NULL
but about replacing empty strings. These are handled in MySQL/Oracle by
the same functions that do the NULL checks.

Is there a standard function in Postgres that replaces empty strings, as
opposed to a NULL value?

Simon



On Wed, 2004-06-23 at 18:05, Mike Mascari wrote:
 Simon Windsor wrote:
  Hi
  
  Is there a standard postgres method of replacing empty strings.
  
  In Oracle, nvl handles nulls and empty strings, as does ifnull() in
  MySQL, but with postgres coalesce only handles null strings.
  
  If, not is the best solution to create a plpgsql function, ie
  
  CREATE FUNCTION isEmpty (character varying, character varying) RETURNS
  character varying
 
 This all depends upon what you mean by handle. Do you want to treat 
 empty strings as NULL or NULL as empty strings? As you said, you can 
 treat NULL as empty strings using COALESCE:
 
 SELECT COALESCE(x, '');
 
 You can treat empty strings as NULL
 
 SELECT NULLIF(x, '');
 
 But I'd guess most on this list are wondering why you want to equate 
 an empty string with NULL, as they have two distinct meanings. 
 Oracle's treatment of empty strings as NULL is world-renowned for 
 being insane...
 
 HTH
 
 Mike Mascari
-- 
Simon Windsor
Email: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




[GENERAL] WAL details

2004-04-27 Thread Simon Windsor
Hi

I want to preserve a log of all SQL actions that change a database (ie
INSERT, UPDATE, DELETE, CREATE), to provide a full audit trail of the data
and to allow the recreation of the database at any time.

I can achieve this with Oracle (recover and others) and MySQL (mysqlbinlog)
quite easily. How do I achieve this with Postgres?
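
(For reference, a sketch of one Postgres-side option: statement logging via
postgresql.conf captures the data-modifying SQL, though the available
parameter values vary by release:)

log_statement = 'mod'            # log DDL plus INSERT/UPDATE/DELETE
log_destination = 'stderr'
log_line_prefix = '%t %d %u '    # timestamp, database, user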

Simon
-- 
Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599



[GENERAL] How do I change column details

2004-03-09 Thread Simon Windsor
Hi

I have a MySQL application which I am porting to Postgres. I have been
asked to increase the size of a varchar column from 200 chars to 2000.

Normally, in MySQL/Oracle this is done with the alter table modify
column command.

How do I do this in Postgres?

I have just renamed the column, created the new one, copied the data
into it and dropped the renamed column. Is this the only way?
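
(For reference, the workaround spelled out; t and details are placeholder
names, and newer releases can do this in one step with
ALTER TABLE ... ALTER COLUMN ... TYPE:)

BEGIN;
ALTER TABLE t RENAME COLUMN details TO details_old;
ALTER TABLE t ADD COLUMN details varchar(2000);
UPDATE t SET details = details_old;
ALTER TABLE t DROP COLUMN details_old;
COMMIT;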

All the best

Simon

-- 
Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




[GENERAL] md5 calls

2004-02-28 Thread Simon Windsor

Hi

I am using the standard Debian testing release of Postgres (7.3.4) and was
wondering how to produce an md5 string.

I had thought

  Select md5('joe');

would be sufficient?

Any ideas, or is the best option to create a perl function to do this for me?
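
(For reference, a sketch of the perl-function route; it needs the untrusted
plperlu language installed, since it pulls in Digest::MD5:)

CREATE FUNCTION md5_hex(text) RETURNS text AS '
    use Digest::MD5 qw(md5_hex);
    return md5_hex($_[0]);
' LANGUAGE plperlu;

SELECT md5_hex('joe');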



Simon

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599

[GENERAL] Case of strings

2004-02-27 Thread Simon Windsor

Hi

I have moved an application from MySQL to PgSQL, and after making the changes
I thought all was OK.

However, I have just realised that

  WHERE A = 'STRING'

is not the same as

  WHERE A = 'String'

Is there any way I can force PgSQL to accept case equivalence, or must I add
upper()/lower() to force the case and then make the string tests?

ie WHERE upper(A) = upper('String')
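
(For reference, a sketch of the upper()/lower() route, plus a matching
functional index so the comparison can still be indexed; t and a are
placeholder names:)

SELECT * FROM t WHERE lower(a) = lower('String');

CREATE INDEX t_a_lower_idx ON t (lower(a));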



Many thanx

Simon

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599









[GENERAL] Replication options

2004-02-20 Thread Simon Windsor

Hi

I am fairly new to Postgres, but have spent many years using Oracle (15) and
MySQL (5).

Whilst I find Postgres very easy to pick up, the number of replication
options is puzzling.

There are a number of replication options available to Postgres. Can anyone
on this list advise suitable methods (ie Replicator, dbexperts, GBORG etc)
for MASTER-MASTER and MASTER-SLAVE (multiple) replication?

Many thanx

Simon

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599


[GENERAL] Replication options

2004-02-19 Thread Simon Windsor

Hi

I am used to using Oracle (15 years) and MySQL (5 years), but I am planning
to move an existing application from MySQL to Postgres. The reasons are very
simple:

  * New requirements mean we need views, or a significant re-write
  * Better query/index performance is essential
  * Postgres and MySQL share a very common implementation of SQL-92, and
what is missing can easily be implemented in functions

The only negative issue is replication. I have checked several Postgres
replication options and am unsure which way to go. Can anyone recommend a
replication option that meets the following:

  * Does not use triggers. These are usually slow, and one action that
modifies several records can trigger many actions on slaves/peers.
  * Does use the WAL, or another log, so that the SQL DDL/DML is copied to
the slave/peer, rather than the result of the DDL/DML.
  * Must provide master-master and master-slave replication
  * Simple to configure and maintain

Many Thanx

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




[GENERAL] Moving from MySQL

2003-08-10 Thread Simon Windsor

Hi

I am sorry for mailing this list directly, but I am planning to migrate a web
application from MySQL to Postgres. I am aware of a number of applications to
help in this process, ie my2pg, etc. The biggest stumbling block I face is
replication.

What facilities exist within Postgres to replicate data from one instance to
another?
Which version of Postgres would people recommend?

Many thanx

Simon Windsor
Email: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07720 447385