Re: [GENERAL] Large Objects and Vacuum

2012-01-02 Thread Albe Laurenz
Please don't send HTML mail to this list.

Simon Windsor wrote:
 I am struggling with the volume and number of XML files a new
application is storing. The table
 pg_largeobjects is growing fast, and despite the efforts of vacuumlo,
vacuum and auto-vacuum it keeps
 on growing in size.

Have you checked if the number of large objects in the database
is growing as well?

Check the result of
SELECT count(DISTINCT loid) FROM pg_largeobject;
over time, or before and after vacuumlo.

 The main tables that hold large objects are partitioned and every few
days I drop partition tables
 older than seven days, but despite all this, the system is growing in
size and not releasing space
 back to the OS.

Do you also delete the large objects referenced in these dropped tables?
They won't vanish automatically.
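
For illustration, orphaned large objects can be unlinked by hand; this sketch
assumes (hypothetically) that the only referencing column is docs.doc_oid.
The contrib tool vacuumlo automates essentially the same check:

```sql
-- Unlink large objects that no row references any more (sketch only).
-- "docs" and "doc_oid" are invented names for your referencing table/column.
SELECT lo_unlink(l.loid)
FROM (SELECT DISTINCT loid FROM pg_largeobject) AS l
WHERE NOT EXISTS (SELECT 1 FROM docs WHERE docs.doc_oid = l.loid);
```

Run it inside a transaction first so you can roll back if the reference
list is incomplete.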

If you use large objects in a partitioned table, you probably have
a design problem. Having to clean up orphaned large objects will
offset the performance gain you get from dropping partitions instead
of deleting rows. You might be better off with bytea.

 Using either vacuum full or cluster to fix pg_largeobjects will
require a large amount of work space
 which I do not have on this server.

 Is there another method of scanning postgres tables, moving active
blocks and releasing store back to
 the OS?

If plain VACUUM does not keep pg_largeobject from growing, VACUUM FULL or
anything else will not help either.
You have to figure out why your large objects don't get deleted.
Only after they are deleted can VACUUM free the space.

 Failing this, I can see an NFS mount being required.

Beg your pardon?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Would whoever is at Hi-Tech Gears Ltd, Gurgaon, India fix their mailer?

2012-01-02 Thread Chetan Suttraway
On Sat, Dec 31, 2011 at 12:04 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Whoever you are, you are forging Devrim Gunduz's name to signed
 reposts of all his posts in pgsql-general.  This is at best impolite to
 Devrim, and it's annoying everybody else.  If it continues I will see
 to it that you get removed from this mailing list.

 [ latest example attached ]

regards, tom lane

 --- Forwarded Message

 Received: from mx1.hub.org (mx1.hub.org [200.46.208.106])
by sss.pgh.pa.us (8.14.2/8.14.2) with ESMTP id pBUHiWKs019441
for t...@sss.pgh.pa.us; Fri, 30 Dec 2011 12:44:32 -0500 (EST)
 Received: from postgresql.org (mail.postgresql.org [200.46.204.86])
by mx1.hub.org (Postfix) with ESMTP id 4B30E439FA8;
Fri, 30 Dec 2011 13:44:31 -0400 (AST)
 Received: from makus.postgresql.org (makus.postgresql.org[98.129.198.125])
by mail.postgresql.org (Postfix) with ESMTP id B0FA4C2469A
for pgsql-general@postgresql.org; Fri, 30 Dec 2011 13:43:52
 -0400 (AST)
 Received: from dr178.cyberspaceindia.com ([216.144.195.178])
by makus.postgresql.org with esmtp (Exim 4.72)
(envelope-from kb...@hitechesoft.com)
id 1RggUe-0005fL-4f
for pgsql-general@postgresql.org; Fri, 30 Dec 2011 17:43:52 +
 Received: from 59.160.98.132.static.vsnl.net.in [59.160.98.132] by
 dr178.cyberspaceindia.com with SMTP;
   Fri, 30 Dec 2011 23:13:21 +0530
 MIME-Version: 1.0
 X-DSM: true
 Subject: Re: [GENERAL] Dated Version of PostgreSQL
 Date: Fri, 30 Dec 2011 23:22:11 +0530 (IST)
 Message-ID: 11134559.931325267531671.JavaMail.Administrator@desktop166
 X-MailServer: PostMaster Enterprise v7.15 on [192.168.0.25] Windows XP
 (5.1)
 From: dev...@gunduz.org
 To: Carlos Mennens carlos.menn...@gmail.com
 Cc: PostgreSQL pgsql-general@postgresql.org
 content-type: multipart/mixed;
 boundary==_Part_0_25610178.1325267531640
 X-Declude-Sender: kb...@hitechesoft.com [59.160.98.132]
 X-Declude-Spoolname: 324775960742.eml
 X-Declude-RefID: str=0001.0A090201.4EFDF839.011B,ss=1,fgs=0
 X-Declude-Note: Scanned by Declude 4.10.79 
 http://www.declude.com/x-note.htm;
 X-Declude-Scan: Outgoing Score [0] at 23:13:26 on 30 Dec 2011
 X-Declude-Tests: Whitelisted
 X-Country-Chain:
 X-Declude-Code: 0
 X-HELO: hitechgears.com
 X-Identity: 59.160.98.132 |  | postgresql.org
 X-Pg-Spam-Score: -1.9 (-)
 X-Mailing-List: pgsql-general
 List-Archive: http://archives.postgresql.org/pgsql-general
 List-Help: mailto:majord...@postgresql.org?body=help
 List-ID: pgsql-general.postgresql.org
 List-Owner: mailto:pgsql-general-ow...@postgresql.org
 List-Post: mailto:pgsql-general@postgresql.org
 List-Subscribe: mailto:majord...@postgresql.org?body=sub%20pgsql-general
 List-Unsubscribe: mailto:majord...@postgresql.org
 ?body=unsub%20pgsql-general
 Precedence: bulk
 Sender: pgsql-general-ow...@postgresql.org


 --=_Part_0_25610178.1325267531640
 content-type: multipart/alternative;
 boundary==_Part_1_27565208.1325267531640


 --=_Part_1_27565208.1325267531640
 content-type: text/plain; charset=us-ascii
 content-transfer-encoding: 7bit

 This message has been digitally signed by the sender.
 --=_Part_1_27565208.1325267531640
 content-type: text/html; charset=us-ascii
 content-transfer-encoding: 7bit

 This message has been digitally signed by the sender.
 --=_Part_1_27565208.1325267531640--

 --=_Part_0_25610178.1325267531640
 content-type: application/octet-stream;
 name=Re___GENERAL__Dated_Version_of_PostgreSQL.eml
 content-transfer-encoding: 7bit
 content-disposition: attachment;
 filename=Re___GENERAL__Dated_Version_of_PostgreSQL.eml

 X-POP3-Server: Host ([mail.hitechroboticsystemz.com])   by 
 hitechgears.com(PostMaster POP3 Login [
 h...@hitechroboticsystemz.com] [192.168.0.25]);   Fri, 30 Dec 2011
 23:22:11 +0530
 Return-Path: pgsql-general-owner+m183...@postgresql.org
 Received: from mx2.hub.org [200.46.204.254] by dr178.cyberspaceindia.comwith 
 SMTP;
   Fri, 30 Dec 2011 23:10:36 +0530
 Received: from postgresql.org (mail.postgresql.org [200.46.204.86])
by mx2.hub.org (Postfix) with ESMTP id 6905D65F8B0;
Fri, 30 Dec 2011 13:40:31 -0400 (AST)
 Received: from makus.postgresql.org (makus.postgresql.org[98.129.198.125])
by mail.postgresql.org (Postfix) with ESMTP id A26A9C24695
for pgsql-general@postgresql.org; Fri, 30 Dec 2011 13:39:53
 -0400 (AST)
 Received: from ns1.gunduz.org ([77.79.103.58])
by makus.postgresql.org with esmtp (Exim 4.72)
(envelope-from dev...@gunduz.org)
id 1RggQl-0005Xw-Jg
for pgsql-general@postgresql.org; Fri, 30 Dec 2011 17:39:53 +
 Received: from [192.168.100.6] (unknown [78.189.47.167])
(using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
(No client certificate requested)
by ns1.gunduz.org (Postfix) with ESMTPSA id CF1DA603DA;
Fri, 30 Dec 2011 17:35:31 + (UTC)
 Message-ID: 

Re: [GENERAL] Large Objects and Vacuum

2012-01-02 Thread Alban Hertroys
On 31 December 2011 00:54, Simon Windsor simon.wind...@cornfield.me.uk wrote:
 I am struggling with the volume and number of XML files a new application is
 storing. The table pg_largeobjects is growing fast, and despite the efforts
 of vacuumlo, vacuum and auto-vacuum it keeps on growing in size

I can't help but wonder why you're using large objects for XML files.
Wouldn't a text field be sufficient? Text fields get TOASTed, which
would save you some space.

Another option would be to use xml-fields, but that depends on whether
you have valid XML and whether you have any desire to make use of any
xml-specific features such fields provide. There will probably be a
performance hit for this.
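
A minimal sketch of that xml-column alternative (table and column names are
invented for illustration):

```sql
-- Hypothetical table using an xml column instead of large objects.
CREATE TABLE responses (
    id  serial PRIMARY KEY,
    doc xml NOT NULL      -- input is checked for well-formedness on INSERT
);

INSERT INTO responses (doc)
VALUES ('<order><item>widget</item></order>');

-- xml-specific features become available, e.g. xpath():
SELECT xpath('/order/item/text()', doc) FROM responses;
```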

I do realise that you can stream large objects, and that's a typical
use-case for choosing them, but with XML files that doesn't seem
particularly useful to me; after all, they're not valid if not
complete. You have to read the whole file into memory _somewhere_
before you can interpret them meaningfully. The exception to that rule
is if you're using a SAX parser (which also explains why those parsers
usually have fairly limited features).

Of course there are valid reasons for choosing to use large objects
for XML files; I assume yours are among them. If they're not, however,
maybe you should have a thorough look at your problem again.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] Large Objects and Vacuum

2012-01-02 Thread John R Pierce

On 12/30/11 3:54 PM, Simon Windsor wrote:
I am struggling with the volume and number of XML files a new 
application is storing. 


how big are these XML files?  large_object was meant for storing very 
large files, like videos, etc. multi-megabyte to gigabytes.   XML stuff 
is typically a lot smaller than that.


me, I would be decomposing the XML in my application and storing the
data in proper relational tables, and only generating XML output if I
absolutely had to send it to another system beyond my control, as it's
easily one of the most inefficient methods of data representation out there.




--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




[GENERAL] Re: [partition table] python fetchall or fetchone function can not get the returning rows

2012-01-02 Thread Jasen Betts
On 2011-12-21, Xiaoning Xu x...@bcgsc.ca wrote:
 Hello,

 I have a problem concerning the partition table.
 When I store a record into one of the partition and use RETURNING table_id 
 or RETURNING *,
 I expect the same result when calling fetchall or fetchone function as not 
 using partition.
 However, it simply returns nothing.

If you've got a before insert trigger there that returns NULL (as the
usual partitioning triggers do), that's what happens.

If you need RETURNING to work, you could look at doing the
partitioning in an after insert trigger, deleting from the main table
and inserting into the partition.

Else perhaps you can use a rule instead: have the rule run a function
and let the function return the id.
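
The standard partitioning trigger shows why RETURNING comes back empty. In
this sketch (all names invented) the row is redirected to the child table and
the trigger returns NULL, so the parent table inserts nothing and RETURNING
has nothing to report:

```sql
CREATE TABLE measurements (logdate date, reading int);
CREATE TABLE measurements_2012 (CHECK (logdate >= '2012-01-01'))
    INHERITS (measurements);

CREATE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurements_2012 VALUES (NEW.*);
    RETURN NULL;  -- suppresses the insert on the parent table;
                  -- this is what makes INSERT ... RETURNING yield no row
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```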

-- 
⚂⚃ 100% natural




Re: [GENERAL] Large Objects and Vacuum

2012-01-02 Thread Simon Windsor
Hi

Thanks for the response.

I am new to a small IT company that has recently migrated an Oracle-based
system to Postgres. The system stores full XML responses, ranging in size from
a few KB to over 55MB, and a subset of key XML fields is stored on a more
permanent basis.

The database design was thus determined by the previous Oracle/Java system,
with empty LOBs being created and data being streamed in.

The data only has to be kept for a few days, and generally the system is
performing well, but as stated in the email, despite regular use of vacuumlo,
vacuum and autovacuum the free OS disc space keeps slowly shrinking.

As a last resort this week, I'm going to get 500+GB of extra file store
added, add a tablespace and move pg_largeobject to this area. Then I'll use
CLUSTER to rebuild pg_largeobject back in the default tablespace. I hope this
will fix things, and if needed I'll run CLUSTER regularly.

Simon 


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: 02 January 2012 11:18
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Large Objects and Vacuum

On 12/30/11 3:54 PM, Simon Windsor wrote:
 I am struggling with the volume and number of XML files a new 
 application is storing.

how big are these XML files?  large_object was meant for storing very 
large files, like videos, etc. multi-megabyte to gigabytes.   XML stuff 
is typically a lot smaller than that.

me, I would be decomposing the XML in my application and storing the data in
proper relational tables, and only generating XML output if I absolutely had
to send it to another system beyond my control, as it's easily one of the most
inefficient methods of data representation out there.



-- 
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast






[GENERAL] asynchronous api questions

2012-01-02 Thread Nulik Nol
Hi,

I have 2 questions regarding the asynchronous C API (I am using version 8.4):

1) To make a connection in a non-blocking manner, the API provides PGconn
*PQconnectStart(const char *conninfo). The parameters are passed in the
'conninfo' argument, which is a string, so I have to use sprintf() to build
it. Is there another function to connect asynchronously, similar to
PQsetdbLogin, that accepts the host/port/user/password parameters directly
as arguments? I want to avoid using sprintf().

2) Will this code produce a valid non-blocking connection ?


db_globals = PQsetdbLogin(DT_DBHOST, DT_DBPORT, NULL, NULL, "global", DT_DBUSER, DT_DBPASS);
if (PQstatus(db_globals) != CONNECTION_OK) {
    fprintf(stderr, "Connection to database failed: %s",
            PQerrorMessage(db_globals));
    PQfinish(db_globals);
    exit(1);
}
if (PQsetnonblocking(db_globals, int arg) != 1) {
    printf("can't set global connection to non blocking mode\n");
    exit(1);
}

Will appreciate any help

Nulik
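
For comparison, the documented non-blocking pattern pairs PQconnectStart with
a PQconnectPoll loop; PQsetdbLogin always blocks. A rough, untested sketch
(error handling trimmed, requires linking against libpq):

```c
/* Sketch: asynchronous connect with libpq.  PQsetdbLogin() always blocks;
 * only PQconnectStart()/PQconnectPoll() connect without blocking. */
#include <stdio.h>
#include <libpq-fe.h>

PGconn *connect_async(const char *conninfo)
{
    PGconn *conn = PQconnectStart(conninfo);

    if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
        return NULL;

    for (;;)
    {
        /* In a real program, wait on PQsocket(conn) with select()/poll()
         * before calling PQconnectPoll() again. */
        PostgresPollingStatusType st = PQconnectPoll(conn);

        if (st == PGRES_POLLING_OK)
            break;                              /* connected */
        if (st == PGRES_POLLING_FAILED)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return NULL;
        }
    }

    /* Note: PQsetnonblocking() returns 0 on success and -1 on failure,
     * not 1, so a "!= 1" test would treat success as an error. */
    if (PQsetnonblocking(conn, 1) != 0)
        fprintf(stderr, "could not set non-blocking mode: %s",
                PQerrorMessage(conn));

    return conn;
}
```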



Re: [GENERAL] 9.1.2: Preventing connections / syncing a database

2012-01-02 Thread Jay Levitt

Rob Sargent wrote:
 Not clear to me why an
 individual dev box needs to be that current data-wise.

It's significantly easier to debug a production problem when you can 
duplicate the problem on your local machine. "Hey, when I go to JennyC's 
activity page and scroll down three times, I see out-of-order postings." 
Our site is a dynamic Facebook-style feed, and you're usually seeing only 
the newest content, so it's a truism that any noticeable production bug will 
be with some user who registered yesterday and posted something this morning.


Likewise, new code has a much better chance of working on production if it's 
been tested against production data. I know big shops use fancy technology 
like test plans and QA people. But. (There's actually a good argument 
that we don't WANT that.)


Keeping the dev boxes up to date, daily, is the poor man's regression test.


Of course stopping
and starting your app should be easy, especially for the developers so maybe
that's a better place to start. Then dev can do it when and how often suits
dev best (even cronning "shutdown app; reload db" to happen 3am Sundays)


Ah ha! Clearly you don't develop on a laptop ☺. Cron jobs are no longer 
useful for things like that, because the laptop is in your backpack and 
asleep at 3am. Yeah, the job would run when you wake up, but what's more 
annoying than opening your laptop and having it freeze while it swaps 
everything back in, finds your network, runs your cron jobs, rearranges 
your windows, etc.?


And yes, shutting down the app isn't as hard as I claim - it's two or three 
commands - but developers are lazy and avoid friction.  If you have to stop 
the app for five minutes to update, you'll do it later. It's like waiting 
for a compile; it interrupts your rhythm.  As Rails developers, we're 
spoiled; there's no compile, there's no local deploy. You change a line of 
source code, you switch windows, your editor automatically saves when it 
loses focus, you refresh the browser, Rails automatically reloads the 
changed code, you see the change. (There are three different browser 
extensions that will automatically refresh your browser, too, in case that 
was too hard.)


TL;DR: Reduce friction -> more frequent database updates -> fewer bugs.


On 01/01/2012 11:51 AM, Jay Levitt wrote:


revoke connect on database rails_dev from public;
select pg_terminate_backend(procpid) from pg_stat_activity where
datname='rails_dev';

Still, the app can reconnect. (So can psql.)

So...

1. How can I prevent (or redirect, or timeout, or anything) new
connections? I think superuser roles might be exempt from connection
limits and privileges. I could repeatedly terminate backends until I'm
able to rename the database, but... ick.

2. What's a better way to slave to a master database without being a
read-only slave? In other words, we want to use the production database as
a starting point each morning, but we'll be creating test rows and tables
that we wouldn't want to propagate to production. Can I do this while the
database is live through some existing replication tool? The production
database is on Ubuntu but the workstations are Macs, FWIW.

Jay

More along the lines of what Greg has said. Not clear to me why an
individual dev box needs to be that current data-wise. Of course stopping
and starting your app should be easy, especially for the developers so maybe
that's a better place to start. Then dev can do it when and how often suits
dev best (even cronning "shutdown app; reload db" to happen 3am Sundays)






Re: [GENERAL] 9.1.2: Preventing connections / syncing a database

2012-01-02 Thread Jay Levitt

Greg Sabino Mullane wrote:

update pg_database set datallowconn = false where datname = 'foobar';


That's perfect - thanks.  Now I can (I think) do this:

pg_restore -d rails_dev_new
[wait]
psql template1
  update pg_database set datallowconn = false where datname = 'rails_dev';
  select pg_terminate_backend(procpid) from pg_stat_activity where \
datname='rails_dev';
  begin;
  alter database rails_dev rename to rails_dev_old;
  alter database rails_dev_new rename to rails_dev;
  commit;
  drop database rails_dev_old;
  \q


  2. What's a better way to slave to a master database without being a
  read-only slave?  In other words, we want to use the production database as
  a starting point each morning, but we'll be creating test rows and tables
  that we wouldn't want to propagate to production. Can I do this while the
  database is live through some existing replication tool?  The production
  database is on Ubuntu but the workstations are Macs, FWIW.


How incremental does it need to be? You could use Bucardo to create
slaves that can still be written to. Then in the morning you would simply
kick off a sync to bring the slave up to date with the master (and optionally
remove any test rows, etc.) Many caveats there, of course - it depends on
your exact needs. If you have the right hardware/software, using snapshots
or clones is an excellent way to make dev databases as well.


Bucardo looks great for replication, but it mentions that it won't do DDL. I 
think that means if someone added a new column to production yesterday, but 
I haven't run that migration yet locally, Bucardo will choke when it tries 
to sync.. ya? (Though the easy workaround is run the darn migration first.)


By snapshots, do you mean filesystem-level snapshots like XFS or LVM? OS X 
has no support for either, sadly.


If you mean Postgres snapshots (and can I mention that I don't yet 
understand where MVCC snapshots meet WAL/xlog, but let's say snapshots are a 
thing), I see in the Slony docs that:


 WAL-based replication duplicates absolutely everything, and nothing extra 
 that changes data can run on a WAL-based replica.


That sounds like I couldn't use production log-shipping to sync test 
databases.  Unless that doc is not *quite* true, and I could somehow:


- Sync from production
- Take a snapshot (whatever that means, exactly)
- Do whatever with the test database
- Tomorrow, just before I sync, roll back to that snapshot
- Repeat



If the pg_dump / restore is working for you, I'd keep that as long as you
can. Try fiddling with some of the compressions, etc. to maximize speed.
Quick ideas: try nc or tweak rsync, and on the slave: turn fsync off,
boost maintenance_work_mem and checkpoint_segments, look into parallel
restore.


Yeah, it's working for now, but we're at that hockey-stick point on the 
curve where what works now will be too slow three months from now, so I want 
to start thinking ahead.


Those are good ideas; I bet the pg_restore can be much faster with giant 
checkpoints, lots of memory, etc.  I also see Bucardo's split_pg_dump, which 
would probably help - no point creating indexes on-the-fly.
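
The tuning ideas above might look like this in practice (the values are
guesses for illustration, not recommendations):

```shell
# Hypothetical restore speed-up for a throwaway dev box.
# Temporary settings in postgresql.conf (unsafe for data you care about!):
#   fsync = off
#   maintenance_work_mem = 1GB
#   checkpoint_segments = 64
# Parallel restore of a custom-format dump (pg_restore -j needs 8.4+):
pg_restore --jobs=4 -d rails_dev_new production.dump
```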


Jay



[GENERAL] handling out of memory conditions when fetching row descriptions

2012-01-02 Thread 'Isidor Zeuner'

Hi there,

using the latest git source code, I found that libpq will let the
connection stall when getRowDescriptions breaks on an out of memory
condition. I think this should better be handled differently to allow
application code to handle such situations gracefully.

For now, I changed libpq to close the connection when this happens
(please see the attached patch). This approach works fine for me so
far.

Please inform me if there are better ways to handle this.

Thanks.

Best regards,

Isidor Zeuner

diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 5c06a40..70ea020 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -270,7 +270,14 @@ pqParseInput3(PGconn *conn)
 	{
 		/* First 'T' in a query sequence */
 		if (getRowDescriptions(conn))
+		{
+			conn->asyncStatus = PGASYNC_UNEXPECTED;
+			pqsecure_close(conn);
+			closesocket(conn->sock);
+			conn->sock = -1;
+			conn->status = CONNECTION_BAD;		/* No more connection to backend */
 			return;
+		}
 
 		/*
 		 * If we're doing a Describe, we're ready to pass the
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 04b9dd9..9130bdf 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -219,7 +219,8 @@ typedef enum
 	PGASYNC_READY,/* result ready for PQgetResult */
 	PGASYNC_COPY_IN,			/* Copy In data transfer in progress */
 	PGASYNC_COPY_OUT,			/* Copy Out data transfer in progress */
-	PGASYNC_COPY_BOTH			/* Copy In/Out data transfer in progress */
+	PGASYNC_COPY_BOTH,			/* Copy In/Out data transfer in progress */
+	PGASYNC_UNEXPECTED			/* unexpected state */
 } PGAsyncStatusType;
 
 /* PGQueryClass tracks which query protocol we are now executing */



[GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hagen Finley
Hi,

 

I am using psql (8.2.15) and I would like to input German characters (e.g.
ä,ß,ö) into char fields I have in a database. I am having trouble getting
the CentOS Linux OS I am using to input German characters via an (apparently
supported) German keyboard layout. However, that might be a separate matter.
When I typed the German into Notepad in Windows and attempted to cut and
paste the words into an INSERT statement, the characters do not persist:

Daß becomes DaDa and Heißt becomes HeiHeit, which falls short of what I was
hoping for.

 

I am wondering if I need to enable an international character set within
Postgres before the German characters will input properly? If so, it’s not
clear from the documentation I have found how one enables other character
sets within Postgres. Any suggestions will be much appreciated.
Thank you.

 

Hagen Finley

 

Boulder, CO



Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Adrian Klaver

On 01/02/2012 11:13 AM, Hagen Finley wrote:

Hi,

I am using psql (8.2.15) and I would like to input German characters


I am going to assume you are using a PostgreSQL 8.2.15 server (psql is 
the client program for Postgres; I am being pedantic because it reduces 
the confusion level :) )



(e.g. ä,ß,ö) into char fields I have in a database. I am having trouble
getting the CENTOS Linux OS I am using to input German characters via a
(apparently supported) German Keyboard Layout. However, that might be a
separate matter. When I typed the German into Notepad in Windows and
attempted to cut and paste the words into an INSERT statement, the
characters do not persist:


Not sure it would help, but if I had a choice I would use Wordpad. 
Notepad tends to be fairly brain-dead when handling text.





Daß becomes DaDa and Heißt becomes HeiHeit which falls short of what I
was hoping for.

I am wondering if I need to enable an international character set within
Postgres before the German characters will input properly? If so, it’s
not clear from the documentation I have attempted to find how one
enables other character sets within Postgres. Any suggestions will be
much appreciated. Thank you.


Using psql do a \l at the prompt. That will show what encoding the 
database was setup with.




Hagen Finley

Boulder, CO



Thanks,
--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] 9.1.2: Preventing connections / syncing a database

2012-01-02 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


Jay Levitt spoke:

Greg Sabino Mullane wrote:
 update pg_database set datallowconn = false where datname = 'foobar';

That's perfect - thanks.  Now I can (I think) do this:

 pg_restore -d rails_dev_new
 [wait]
 psql template1
update pg_database set datallowconn = false where datname = 'rails_dev';
select pg_terminate_backend(procpid) from pg_stat_activity where \
  datname='rails_dev';
   begin;
   alter database rails_dev rename to rails_dev_old;
   alter database rails_dev_new rename to rails_dev;
   commit;
   drop database rails_dev_old;
   \q

Yes, but if that's truly the process, you might as well save some steps 
and just drop the existing one and do a single rename:

select pg_terminate_backend(procpid) from pg_stat_activity where \
  datname='rails_dev';
drop database rails_dev;
alter database rails_dev_new rename to rails_dev;
\q

A developer will find they have something on there they need about 
two minutes after you drop it. :) Space permitting, of course.

 Bucardo looks great for replication, but it mentions that it won't do DDL. I 
 think that means if someone added a new column to production yesterday, but 
 I haven't run that migration yet locally, Bucardo will choke when it tries 
 to sync.. ya? (Though the easy workaround is run the darn migration first.)

Yes - the canonical way is to get the schema in sync first, then let Bucardo 
handle the data.

 By snapshots, do you mean filesystem-level snapshots like XFS or LVM? OS X 
 has no support for either, sadly.

Yes, that's exactly what I mean. We have clients using that with great success. 
Simply make a snapshot of the production database volumes, mount it on the 
dev box, and go.

 That sounds like I couldn't use production log-shipping to sync test 
 databases.  Unless that doc is not *quite* true, and I could somehow:
 ...

Well, you can use Postgres' PITR (point in time recovery) aka warm standby 
to create standby slaves identical to the master, and then at some point 
in time flip them to become live, independent databases that can be modified.
The downside is that you then have to create a new base backup, which means 
rsyncing the entire data directory to a new slave/standby box. However, this 
might be worth it as you can frontload the time spent doing so - once it is 
rsynced and the standby is up and running (and receiving data from the master), 
switching it from standby to normal mode (and thus creating a perfect clone of 
production at that moment) is pretty much instant.

So the process would be:
* Turn on archive_command on prod, point it to the dev box
* Create a base backup, ship the data dir to the dev box, start up the db
* In the AM, tell the dev box to go into recovery mode. Tell the 
prod box to stop trying to ship logs to it
* Rearrange some dirs on the dev box, and start over again by making 
a new base backup, rsyncing data dir, etc.
* In the AM, stop the old dev database. Bring the new one up (recover it)

You could even make less frequent base backups if you keep enough logs
around to play forward through more than a day's worth of logs.

 Those are good ideas; I bet the pg_restore can be much faster with giant 
 checkpoints, lots of memory, etc.  I also see Bucardo's split_pg_dump, which 
 would probably help - no point creating indexes on-the-fly.

Well, if they are needed on prod, you probably want them on dev :)

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201201021458
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAk8CDIIACgkQvJuQZxSWSsj1cQCfdJtmW/fmgPDRYk2esngyng7a
WZMAnjafyd+EDFGVzPA/dPUUqhks9Qkb
=HJak
-END PGP SIGNATURE-





Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hagen Finley
Thanks Adrian,

 

Looks like I am currently using UTF8:

 

gpdemo=# \encoding

UTF8

 

And it looks like UTF8 doesn’t include the German characters I seek. Can
someone explain how I can switch to -0FFF, which looks like the Basic
Multilingual Plane of Unicode, which does include the characters I want?

 

Hagen

 

 

 

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Monday, January 02, 2012 12:40 PM
To: Hagen Finley
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adding German Character Set to PostgresSQL

 

On 01/02/2012 11:13 AM, Hagen Finley wrote:

 Hi,

 

 I am using psql (8.2.15) and I would like to input German characters

 

I am going to assume you are using a PostgreSQL 8.2.15 server (psql is the
client program for Postgres; I am being pedantic because it reduces the
confusion level :) )

 

 (e.g. ä,ß,ö) into char fields I have in a database. I am having 

 trouble getting the CENTOS Linux OS I am using to input German 

 characters via a (apparently supported) German Keyboard Layout. 

 However, that might be a separate matter. When I typed the German into 

 Notepad in Windows and attempted to cut and paste the words into an 

 INSERT statement, the characters do not persist:

 

Not sure it would help, but if I had a choice I would use Wordpad. 

Notepad tends to be fairly brain-dead when handling text.

 

 

 

 Daß becomes DaDa and Heißt becomes HeiHeit which falls short of what I 

 was hoping for.

 

 I am wondering if I need to enable an international character set 

 within Postgres before the German characters will input properly? If 

 so, it’s not clear from the documentation I have attempted to find how 

 one enables other character sets within Postgres? Any suggestions 

 will be much appreciated. Thank you.

 

Using psql do a \l at the prompt. That will show what encoding the database
was setup with.

 

 

 Hagen Finley

 

 Boulder, CO

 

 

Thanks,

--

Adrian Klaver

 mailto:adrian.kla...@gmail.com adrian.kla...@gmail.com



Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Pavel Stehule
Hello

2012/1/2 Hagen Finley finha...@comcast.net:
 Thanks Adrian,

 Looks like I am currently using UTF8:

 gpdemo=# \encoding
 UTF8

 And it looks like UTF8 doesn’t include the German characters I seek. Can
 someone explain how I can switch to -0FFF which looks like the Basic
 Multilingual Plane Unicode which does include the characters I want?


I am sure it does:

postgres=# create table xx(a text);
ERROR:  relation xx already exists
postgres=# create table x(a text);
CREATE TABLE
postgres=# insert into x values('ä,ß,ö');
INSERT 0 1
postgres=# select * from x;
   a
---
 ä,ß,ö
(1 row)

postgres=# \encoding
UTF8
postgres=#

Windows console doesn't work well with UTF8 - use pgAdmin there

regards

Pavel Stehule




 Hagen


 -Original Message-
 From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
 Sent: Monday, January 02, 2012 12:40 PM
 To: Hagen Finley
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Adding German Character Set to PostgresSQL

 On 01/02/2012 11:13 AM, Hagen Finley wrote:

 Hi,

 I am using psql (8.2.15) and I would like to input German characters

 I am going to assume you are using a Postgresql 8.2.15 server(psql is the
 client program for Postgres, I am being pedantic because it reduces the
 confusion level:) )

 (e.g. ä,ß,ö) into char fields I have in a database. I am having
 trouble getting the CENTOS Linux OS I am using to input German
 characters via a (apparently supported) German Keyboard Layout.
 However, that might be a separate matter. When I typed the German into
 Notepad in Windows and attempted to cut and paste the words into an
 INSERT statement, the characters do not persist:

 Not sure it would help, but if I had a choice I would use Wordpad.
 Notepad tends to fairly brain-dead when handling text.

 Daß becomes DaDa and Heißt becomes HeiHeit which falls short of what I
 was hoping for.

 I am wondering if I need to enable an international character set
 within Postgres before the German characters will input properly? If
 so, it’s not clear from the documentation I have attempted to find how
 one enables other characters sets within Postgres? Any suggestions
 will be much appreciated. Thank you.

 Using psql do a \l at the prompt. That will show what encoding the database
 was setup with.

 Hagen Finley

 Boulder, CO

 Thanks,
 --
 Adrian Klaver
 adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hannes Erven
Hagen,


 gpdemo=# \encoding
 UTF8

UTF8 includes virtually all characters you will need for any purpose  on
the earth.


But: you showed the output of \encoding when you were asked to show \l .

There is a subtle difference:
\encoding shows the encoding of the connection between psql and the
server; while
\l shows the encoding the database server will actually store the data in.

While the first can easily be changed for a session, the latter can only
be set when creating a database (or a cluster).
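Put concretely, the two settings can be inspected and compared from a psql session like this (a minimal sketch; the database name is illustrative, not from this thread):

```sql
-- Client/connection encoding: what psql sends and expects on the wire.
-- This can be changed at any time during a session.
SHOW client_encoding;
SET client_encoding = 'UTF8';   -- equivalently, in psql: \encoding UTF8

-- Server/storage encoding: fixed when the database was created.
SHOW server_encoding;           -- or use \l to list all databases

-- A database with a different storage encoding has to be created anew:
CREATE DATABASE demo_utf8 ENCODING 'UTF8' TEMPLATE template0;
```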


-hannes

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Adrian Klaver
On Monday, January 02, 2012 12:25:26 pm Hagen Finley wrote:
 Thanks Adrian,
 
 
 
 Looks like I am currently using UTF8:
 
 
 
 gpdemo=# \encoding
 
 UTF8


Well that shows the client encoding not the server encoding.

For that either do 
\l
or
show server_encoding;

 
 
 
 And it looks like UTF8 doesn’t include the German characters I seek. Can
 someone explain how I can switch to -0FFF which looks like the Basic
 Multilingual Plane Unicode which does include the characters I want?

It should.

 
 
 
 Hagen
 
 
 


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] handling out of memory conditions when fetching row descriptions

2012-01-02 Thread Tom Lane
'Isidor Zeuner' postgre...@quidecco.de writes:
 using the latest git source code, I found that libpq will let the
 connection stall when getRowDescriptions breaks on an out of memory
 condition. I think this should better be handled differently to allow
 application code to handle such situations gracefully.

The basic assumption in there is that if we wait and retry, eventually
there will be enough memory.  I agree that that's not ideal, since the
application may not be releasing memory elsewhere.  But what you propose
doesn't seem like an improvement: you're converting a maybe-failure into
a guaranteed-failure, and one that's much more difficult to recover from
than an ordinary query error.

Also, this patch breaks async operation, in which a failure return from
getRowDescriptions normally means that we have to wait for more data
to arrive.  The test would really need to be inserted someplace else.

In any case, getRowDescriptions is really an improbable place for an
out-of-memory to occur: it would be much more likely to happen while
absorbing the body of a large query result.  There already is some logic
in getAnotherTuple for dealing with that case, which I suggest is a
better model for what to do than break the connection.  But probably
making things noticeably better here would require going through all
the code to check for other out-of-memory cases, and developing some
more uniform method of representing an already-known-failed query
result.  (For instance, it looks like getAnotherTuple might not work
very well if it fails to get memory for one tuple and then succeeds
on later ones.  We probably ought to have some explicit state that
says we are absorbing the remaining data traffic for a query result
that we already ran out of memory for.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Verifying a timestamp is null or in the past

2012-01-02 Thread Ondrej Ivanič
Hi,

On 2 January 2012 03:26, Raymond O'Donnell r...@iol.ie wrote:
 And also - does PERFORM works with FOUND?

 Not sure what you mean - can you elaborate?

No, perform (and execute) doesn't populate 'found' variable:
http://www.postgresql.org/docs/9.0/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS

You have to use something like this:
get diagnostics rr = row_count;


-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hagen Finley
Hannes,

The output of \l is:

gpdemo=# \l
                             List of databases
    Name    |  Owner  | Encoding |        Access privileges
------------+---------+----------+-----------------------------------
 acn        | gpadmin | UTF8     |
 gpdemo     | gpadmin | UTF8     |
 philosophy | gpadmin | UTF8     |
 postgres   | gpadmin | UTF8     |
 template0  | gpadmin | UTF8     | =c/gpadmin : gpadmin=CTc/gpadmin
 template1  | gpadmin | UTF8     | =c/gpadmin : gpadmin=CTc/gpadmin

It would be easy enough to create a new database with a different encoding -
only one record in the one in question. However, it sounded as though you
don't believe that is the issue - that UTF8 ought to support the German
characters I want. Am I understanding you correctly?

Hagen


-Original Message-
From: Hannes Erven [mailto:han...@erven.at] 
Sent: Monday, January 02, 2012 1:53 PM
To: pgsql-general@postgresql.org
Cc: finha...@comcast.net
Subject: Re: [GENERAL] Adding German Character Set to PostgresSQL

Hagen,


 gpdemo=# \encoding
 UTF8

UTF8 includes virtually all characters you will need for any purpose  on the
earth.


But: you showed the output of \encoding when you were asked to show \l .

There is a subtle difference:
\encoding shows the encoding of the connection between psql and the server;
while \l shows the encoding the database server will actually store the data
in.

While the first can easily be changed for a session, the latter can only be
set when creating a database (or a cluster).


-hannes



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Adrian Klaver
On Monday, January 02, 2012 1:47:13 pm Hagen Finley wrote:
 Hannes,
 
 The output of \l is:
 
 gpdemo=# \l
                              List of databases
     Name    |  Owner  | Encoding |        Access privileges
 ------------+---------+----------+-----------------------------------
  acn        | gpadmin | UTF8     |
  gpdemo     | gpadmin | UTF8     |
  philosophy | gpadmin | UTF8     |
  postgres   | gpadmin | UTF8     |
  template0  | gpadmin | UTF8     | =c/gpadmin : gpadmin=CTc/gpadmin
  template1  | gpadmin | UTF8     | =c/gpadmin : gpadmin=CTc/gpadmin
 
 It would be easy enough to create a new database with a different encoding
 - only one record in the one in question. However, it sounded as though
 you don't believe that is the issue - that UTF8 ought to support the
 German characters I want. Am I understanding you correctly?

So now we have established where it is going. Now to find out where it is
coming from. You said the server is running on CentOS and that it was
presumably set up for a German keyboard.
From a terminal on CentOS, what do the commands below show?
locale
locale -a


 
 Hagen
 

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Verifying a timestamp is null or in the past

2012-01-02 Thread David Johnston
On Jan 2, 2012, at 16:46, Ondrej Ivanič ondrej.iva...@gmail.com wrote:

 Hi,
 
 On 2 January 2012 03:26, Raymond O'Donnell r...@iol.ie wrote:
 And also - does PERFORM works with FOUND?
 
 Not sure what you mean - can you elaborate?
 
 No, perform (and execute) doesn't populate 'found' variable:
 http://www.postgresql.org/docs/9.0/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS
 
 You have to use something like this:
 get diagnostics rr = row_count;
 
 
 -- 
 Ondrej Ivanic
 (ondrej.iva...@gmail.com)
 
 

Yes, PERFORM does populate FOUND.

From the documentation you just linked to

A PERFORM statement sets FOUND true if it produces (and discards) one or more 
rows, false if no row is produced.
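Both mechanisms can be seen side by side in a small plpgsql sketch (illustrative only; the queried table is arbitrary, and the DO block requires PostgreSQL 9.0 or later):

```sql
DO $$
DECLARE
    rr integer;
BEGIN
    -- PERFORM runs the query, discards the rows, and sets FOUND
    PERFORM 1 FROM pg_class WHERE relname = 'pg_class';
    RAISE NOTICE 'PERFORM set FOUND = %', found;   -- true here

    -- EXECUTE does not set FOUND; use GET DIAGNOSTICS for the row count
    EXECUTE 'SELECT 1 FROM pg_class WHERE relname = $1' USING 'pg_class';
    GET DIAGNOSTICS rr = ROW_COUNT;
    RAISE NOTICE 'dynamic query processed % row(s)', rr;
END $$;
```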






Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hannes Erven
Hi Hagen,


  gpdemo  | gpadmin | UTF8 | 
 that UTF8 ought to support the German characters I want. 
 Am I understanding you correctly?

Yes, UTF-8 supports all the characters you'd want -- Wikipedia says it's
about 109,000 characters from 93 scripts, so that's pretty much everything
you might need ( http://en.wikipedia.org/wiki/Unicode ).


So as we have excluded the database storage and the psql connection,
there seems to be only the terminal left as a suspect. Could you try
using pgAdmin or any other non-console based tool to connect to the
database in question?

The plain text you first posted looked quite strange, repeating previous
parts of the strings before the Umlauts -- usually, unsupported Umlauts
show up rather as two characters, or cut off the rest of the word
completely.
My guess would be that the characters are correctly stored in the DB,
and are just not displayed correctly within your terminal.

But again, this is best verified when you connect directly to the DB. I
don't know anything about terminals in CentOS, but have you tried
setting the LANG variable?
http://www.linuxreport.org/content/view/53/31/



Good luck :-)

-hannes

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Verifying a timestamp is null or in the past

2012-01-02 Thread Ondrej Ivanič
Hi

2012/1/3 David Johnston pol...@yahoo.com:
 On Jan 2, 2012, at 16:46, Ondrej Ivanič ondrej.iva...@gmail.com wrote:
 Yes, PERFORM does populate FOUND.

 From the documentation you just linked to

 A PERFORM statement sets FOUND true if it produces (and discards) one or
 more rows, false if no row is produced.


Bummer! Thanks for the correction! I shouldn't (blindly) rely on my
own comments in the code :) My plpgsql code uses EXECUTE, which is the
reason for 'get diagnostics'...

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hagen Finley
As you indicated, UTF-8 has the whole kitchen sink in it. I did try using the
German keyboard layout with a CentOS text editor, and that works - I can produce
the characters I want. Now I can also get the German characters to work in the
CentOS terminal, but not in the psql command-line client. Progress, but still no
joy.

[gpadmin@gp-single-host ~]$ locale
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC=en_US.UTF-8
LC_TIME=en_US.UTF-8
LC_COLLATE=en_US.UTF-8
LC_MONETARY=en_US.UTF-8
LC_MESSAGES=en_US.UTF-8
LC_PAPER=en_US.UTF-8
LC_NAME=en_US.UTF-8
LC_ADDRESS=en_US.UTF-8
LC_TELEPHONE=en_US.UTF-8
LC_MEASUREMENT=en_US.UTF-8
LC_IDENTIFICATION=en_US.UTF-8
LC_ALL=

[gpadmin@gp-single-host ~]$ locale -a
aa_DJ
aa_DJ.iso88591
aa_DJ.utf8
aa_ER
aa_ER@saaho
aa_ER.utf8
aa_ER.utf8@saaho
aa_ET
aa_ET.utf8
af_ZA
af_ZA.iso88591
af_ZA.utf8
am_ET
am_ET.utf8
an_ES
an_ES.iso885915
an_ES.utf8
ar_AE
ar_AE.iso88596
ar_AE.utf8
ar_BH
ar_BH.iso88596
ar_BH.utf8
ar_DZ
ar_DZ.iso88596
ar_DZ.utf8
ar_EG
ar_EG.iso88596
ar_EG.utf8
ar_IN
ar_IN.utf8
ar_IQ
ar_IQ.iso88596
ar_IQ.utf8
ar_JO
ar_JO.iso88596
ar_JO.utf8
ar_KW
ar_KW.iso88596
ar_KW.utf8
ar_LB
ar_LB.iso88596
ar_LB.utf8
ar_LY
ar_LY.iso88596
ar_LY.utf8
ar_MA
ar_MA.iso88596
ar_MA.utf8
ar_OM
ar_OM.iso88596
ar_OM.utf8
ar_QA
ar_QA.iso88596
ar_QA.utf8
ar_SA
ar_SA.iso88596
ar_SA.utf8
ar_SD
ar_SD.iso88596
ar_SD.utf8
ar_SY
ar_SY.iso88596
ar_SY.utf8
ar_TN
ar_TN.iso88596
ar_TN.utf8
ar_YE
ar_YE.iso88596
ar_YE.utf8
as_IN.utf8
az_AZ.utf8
be_BY
be_BY.cp1251
be_BY@latin
be_BY.utf8
be_BY.utf8@latin
bg_BG
bg_BG.cp1251
bg_BG.utf8
bn_BD
bn_BD.utf8
bn_IN
bn_IN.utf8
bokmal
bokmål
br_FR
br_FR@euro
br_FR.iso88591
br_FR.iso885915@euro
br_FR.utf8
bs_BA
bs_BA.iso88592
bs_BA.utf8
byn_ER
byn_ER.utf8
C
ca_AD
ca_AD.iso885915
ca_AD.utf8
ca_ES
ca_ES@euro
ca_ES.iso88591
ca_ES.iso885915@euro
ca_ES.utf8
ca_FR
ca_FR.iso885915
ca_FR.utf8
ca_IT
ca_IT.iso885915
ca_IT.utf8
catalan
croatian
csb_PL
csb_PL.utf8
cs_CZ
cs_CZ.iso88592
cs_CZ.utf8
cy_GB
cy_GB.iso885914
cy_GB.utf8
czech
da_DK
da_DK.iso88591
da_DK.iso885915
da_DK.utf8
danish
dansk
de_AT
de_AT@euro
de_AT.iso88591
de_AT.iso885915@euro
de_AT.utf8
de_BE
de_BE@euro
de_BE.iso88591
de_BE.iso885915@euro
de_BE.utf8
de_CH
de_CH.iso88591
de_CH.utf8
de_DE
de_DE@euro
de_DE.iso88591
de_DE.iso885915@euro
de_DE.utf8
de_LU
de_LU@euro
de_LU.iso88591
de_LU.iso885915@euro
de_LU.utf8
deutsch
dutch
dz_BT
dz_BT.utf8
eesti
el_CY
el_CY.iso88597
el_CY.utf8
el_GR
el_GR.iso88597
el_GR.utf8
en_AU
en_AU.iso88591
en_AU.utf8
en_BW
en_BW.iso88591
en_BW.utf8
en_CA
en_CA.iso88591
en_CA.utf8
en_DK
en_DK.iso88591
en_DK.utf8
en_GB
en_GB.iso88591
en_GB.iso885915
en_GB.utf8
en_HK
en_HK.iso88591
en_HK.utf8
en_IE
en_IE@euro
en_IE.iso88591
en_IE.iso885915@euro
en_IE.utf8
en_IN
en_IN.utf8
en_NZ
en_NZ.iso88591
en_NZ.utf8
en_PH
en_PH.iso88591
en_PH.utf8
en_SG
en_SG.iso88591
en_SG.utf8
en_US
en_US.iso88591
en_US.iso885915
en_US.utf8
en_ZA
en_ZA.iso88591
en_ZA.utf8
en_ZW
en_ZW.iso88591
en_ZW.utf8
es_AR
es_AR.iso88591
es_AR.utf8
es_BO
es_BO.iso88591
es_BO.utf8
es_CL
es_CL.iso88591
es_CL.utf8
es_CO
es_CO.iso88591
es_CO.utf8
es_CR
es_CR.iso88591
es_CR.utf8
es_DO
es_DO.iso88591
es_DO.utf8
es_EC
es_EC.iso88591
es_EC.utf8
es_ES
es_ES@euro
es_ES.iso88591
es_ES.iso885915@euro
es_ES.utf8
es_GT
es_GT.iso88591
es_GT.utf8
es_HN
es_HN.iso88591
es_HN.utf8
es_MX
es_MX.iso88591
es_MX.utf8
es_NI
es_NI.iso88591
es_NI.utf8
es_PA
es_PA.iso88591
es_PA.utf8
es_PE
es_PE.iso88591
es_PE.utf8
es_PR
es_PR.iso88591
es_PR.utf8
es_PY
es_PY.iso88591
es_PY.utf8
es_SV
es_SV.iso88591
es_SV.utf8
estonian
es_US
es_US.iso88591
es_US.utf8
es_UY
es_UY.iso88591
es_UY.utf8
es_VE
es_VE.iso88591
es_VE.utf8
et_EE
et_EE.iso88591
et_EE.iso885915
et_EE.utf8
eu_ES
eu_ES@euro
eu_ES.iso88591
eu_ES.iso885915@euro
eu_ES.utf8
fa_IR
fa_IR.utf8
fi_FI
fi_FI@euro
fi_FI.iso88591
fi_FI.iso885915@euro
fi_FI.utf8
finnish
fo_FO
fo_FO.iso88591
fo_FO.utf8
français
fr_BE
fr_BE@euro
fr_BE.iso88591
fr_BE.iso885915@euro
fr_BE.utf8
fr_CA
fr_CA.iso88591
fr_CA.utf8
fr_CH
fr_CH.iso88591
fr_CH.utf8
french
fr_FR
fr_FR@euro
fr_FR.iso88591
fr_FR.iso885915@euro
fr_FR.utf8
fr_LU
fr_LU@euro
fr_LU.iso88591
fr_LU.iso885915@euro
fr_LU.utf8
fy_NL
fy_NL.utf8
ga_IE
ga_IE@euro
ga_IE.iso88591
ga_IE.iso885915@euro
ga_IE.utf8
galego
galician
gd_GB
gd_GB.iso885915
gd_GB.utf8
german
gez_ER
gez_ER@abegede
gez_ER.utf8
gez_ER.utf8@abegede
gez_ET
gez_ET@abegede
gez_ET.utf8
gez_ET.utf8@abegede
gl_ES
gl_ES@euro
gl_ES.iso88591
gl_ES.iso885915@euro
gl_ES.utf8
greek
gu_IN
gu_IN.utf8
gv_GB
gv_GB.iso88591
gv_GB.utf8
hebrew
he_IL
he_IL.iso88598
he_IL.utf8
hi_IN
hi_IN.utf8
hr_HR
hr_HR.iso88592
hr_HR.utf8
hrvatski
hsb_DE
hsb_DE.iso88592
hsb_DE.utf8
hu_HU
hu_HU.iso88592
hu_HU.utf8
hungarian
hy_AM
hy_AM.armscii8
hy_AM.utf8
icelandic
id_ID
id_ID.iso88591
id_ID.utf8
is_IS
is_IS.iso88591
is_IS.utf8
italian
it_CH
it_CH.iso88591
it_CH.utf8
it_IT
it_IT@euro
it_IT.iso88591
it_IT.iso885915@euro
it_IT.utf8
iw_IL
iw_IL.iso88598
iw_IL.utf8
ja_JP
ja_JP.eucjp
ja_JP.ujis
ja_JP.utf8
japanese
japanese.euc
ka_GE
ka_GE.georgianps
ka_GE.utf8

Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Adrian Klaver
On Monday, January 02, 2012 3:41:40 pm Hagen Finley wrote:
 As you indicated UTF-8 has the whole kitchen sink in it. I did trying using
 the German Keyboard Layout with a Centos text editor and that works - I
 can produce the characters I want. Now I can also get the German
 characters to work in the Centos terminal but not in the psql command line
 client. Progress but still no joy.

So you are using psql on the Centos machine?

I have the same locale as you on my Linux machine, and using Pavel's example I
get:

test(5432)aklaver=create table x(a text);
CREATE TABLE
test(5432)aklaver=insert into x values('ä,ß,ö');
INSERT 0 1
test(5432)aklaver=SELECT * from x ;
   a   
---
 ä,ß,ö
(1 row)

What happens when you do the above on your machine?

 

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hagen Finley
Yes I am running psql on Centos.

My psql client won't accept the German characters whether or not I attempt to 
type them or paste them.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Monday, January 02, 2012 5:05 PM
To: Hagen Finley
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adding German Character Set to PostgresSQL

On Monday, January 02, 2012 3:41:40 pm Hagen Finley wrote:
 As you indicated UTF-8 has the whole kitchen sink in it. I did trying 
 using the German Keyboard Layout with a Centos text editor and that 
 works - I can produce the characters I want. Now I can also get the 
 German characters to work in the Centos terminal but not in the psql 
 command line client. Progress but still no joy.

So you are using psql on the Centos machine?

I have the same locale as you, on my Linux machine,  and using Pavels example I
get:

test(5432)aklaver=create table x(a text);
CREATE TABLE
test(5432)aklaver=insert into x values('ä,ß,ö');
INSERT 0 1
test(5432)aklaver=SELECT * from x ;
   a   
---
 ä,ß,ö
(1 row)

What happens when you do the above on your machine?

 

--
Adrian Klaver
adrian.kla...@gmail.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Adrian Klaver
On Monday, January 02, 2012 4:37:18 pm Hagen Finley wrote:
 Yes I am running psql on Centos.
 
 My psql client won't accept the German characters whether or not I attempt
 to type them or paste them.
 

So to be clear did you try?:

create table x(a text);
insert into x values('ä,ß,ö');
SELECT * from x ;

If so can you show the result?
Also, maybe tail the log file to see if anything shows up there?

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Hagen Finley
Yes, but I couldn't input your second line - the ('ä,ß,ö') was not possible via
the psql client - I just got beeped when I tried to type or paste those
characters.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Monday, January 02, 2012 5:54 PM
To: Hagen Finley
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adding German Character Set to PostgresSQL

On Monday, January 02, 2012 4:37:18 pm Hagen Finley wrote:
 Yes I am running psql on Centos.
 
 My psql client won't accept the German characters whether or not I 
 attempt to type them or paste them.
 

So to be clear did you try?:

create table x(a text);
insert into x values('ä,ß,ö');
SELECT * from x ;

If so can you show the result?
Also maybe tail the log file to see if anything show up there?

--
Adrian Klaver
adrian.kla...@gmail.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Adrian Klaver
On Monday, January 02, 2012 6:21:53 pm Hagen Finley wrote:
 Yes but I couldn't input your second line - the  ('ä,ß,ö') was not possible
 via the psql client - just got beeped when I tried to type or paste those
 characters.

Hmmm.
Have you checked what client_encoding is set to in postgresql.conf and whether 
that setting is uncommented?
Is the PGCLIENTENCODING environment variable set?
So what happens when you get the beep: is the character not allowed at all, or
is it changed?
Is there anything in the server logs?

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Duplicated entries are not ignored even if a do instead nothing rule is added.

2012-01-02 Thread 邓尧
Hi,

I'm new to pgsql, and I need to do something like MySQL's INSERT IGNORE. After
some searching I found a solution, which is adding a do instead nothing
rule to the corresponding table, but it fails sometimes.
The table and the rule is created with the following sql statements:

create sequence ACCOUNT_ID_SEQ;
create table ACCOUNT (
ID bigint primary key default nextval('ACCOUNT_ID_SEQ'),
HOME char(255) not null,
NAME char(255)
);
create unique index on ACCOUNT(HOME);
create index on ACCOUNT(NAME);

create rule IGNORE_DUPLICATED_ACCOUNT
as on insert to ACCOUNT
where exists(select 1 from ACCOUNT where HOME = NEW.HOME)
do instead nothing;

There are about 20 clients doing the following insertions (no UPDATEs; some of
them might DELETE):
begin transaction:
insert into ACCOUNT(HOME) values (v1);
insert into ACCOUNT(HOME) values (v2);
...
commit;

Sometimes I get an error saying the unique constraint account_home_idx is
violated.

Any suggestions?

Thanks
-Yao


[GENERAL] stop server

2012-01-02 Thread roberto sanchez muñoz
How can I stop the server? I use this in the terminal (pg_ctl -D
/usr/local/var/postgres stop -s -m fast) and it shows this: pg_ctl: server does
not shut down

thanks for your help
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stop server

2012-01-02 Thread Raghavendra
Seems some processes are still running and looking for database access.
You can try an unclean IMMEDIATE shutdown with the option below, forcing all
processes to stop:

pg_ctl -D /usr/local/var/postgres stop -mi


---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 how can i stop the server i use this in terminal (pg_ctl -D
 /usr/local/var/postgres stop -s -m fast) and show this pg_ctl: server does
 not shut down

 thanks for your help
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] stop server

2012-01-02 Thread Raghavendra
Two things:

1. See the output of the postgres processes which are running, with utility
commands:
  ps -ef | grep postgres

2. Also, take a look in the logs for any information written on any process
which is running and failing to abort.

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 still failing it shows
 pg_ctl -D /usr/local/var/postgres stop -mi
 waiting for server to shut
 down... failed
 pg_ctl: server does not shut down

 El 02/01/2012, a las 23:09, Raghavendra escribió:

 Seems, some process are still running and looking for database access.
 You can try unclean shutdown IMMEDIATE with below option by forcing all
 process to stop

 pg_ctl -D /usr/local/var/postgres stop -mi


 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 how can i stop the server i use this in terminal (pg_ctl -D
 /usr/local/var/postgres stop -s -m fast) and show this pg_ctl: server does
 not shut down

 thanks for your help
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general






Re: [GENERAL] stop server

2012-01-02 Thread Roberto Sanchez
show this
ps -ef | grep postgres
  501  1402   100   0   0:00.10 ?? 0:00.15
/usr/local/Cellar/postgresql/9.1.2/bin/postgres -D /usr/local/var/postgres
-r /usr/local/var/postgres/server.log
  501  1404  1402   0   0:00.00 ?? 0:00.00 postgres: writer process

  501  1405  1402   0   0:00.00 ?? 0:00.00 postgres: wal writer
process
  501  1406  1402   0   0:00.00 ?? 0:00.00 postgres: autovacuum
launcher process
  501  1407  1402   0   0:00.00 ?? 0:00.00 postgres: stats
collector process
  501  1413  1329   0   0:00.00 ttys0000:00.00 grep postgres
  501  1400  1397   0   0:00.00 ttys0010:00.00 pg_ctl -D
/usr/local/var/postgres stop -mi


El 2 de enero de 2012 23:34, Raghavendra
raghavendra@enterprisedb.comescribió:

 Two things:

 1. See the output of the postgres process which are running with utility
 commands.
   ps -ef | grep postgres

 2. Also, take look in the logs for any information written on any process
 which is running and failing to abort.

 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 still failing it shows
 pg_ctl -D /usr/local/var/postgres stop -mi
 waiting for server to shut
 down... failed
 pg_ctl: server does not shut down

 El 02/01/2012, a las 23:09, Raghavendra escribió:

 Seems, some process are still running and looking for database access.
 You can try unclean shutdown IMMEDIATE with below option by forcing all
 process to stop

 pg_ctl -D /usr/local/var/postgres stop -mi


 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 how can i stop the server i use this in terminal (pg_ctl -D
 /usr/local/var/postgres stop -s -m fast) and show this pg_ctl: server does
 not shut down

 thanks for your help
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general







Re: [GENERAL] stop server

2012-01-02 Thread roberto sanchez muñoz
still failing it shows
pg_ctl -D /usr/local/var/postgres stop -mi
waiting for server to shut 
down... failed
pg_ctl: server does not shut down

El 02/01/2012, a las 23:09, Raghavendra escribió:

 Seems, some process are still running and looking for database access.  
 You can try unclean shutdown IMMEDIATE with below option by forcing all 
 process to stop
 
 pg_ctl -D /usr/local/var/postgres stop -mi
 
 
 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/
 
 
 
 2012/1/3 roberto sanchez muñoz trev2...@gmail.com
 how can i stop the server i use this in terminal (pg_ctl -D 
 /usr/local/var/postgres stop -s -m fast) and show this pg_ctl: server does 
 not shut down
 
 thanks for your help
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
 



Re: [GENERAL] stop server

2012-01-02 Thread Raghavendra
Also what are the last lines of logs showing.

Can you try stopping from the bin directory ...
$cd /usr/local/Cellar/postgresql/9.1.2/bin/
$./pg_ctl -D /usr/local/var/postgres stop -mi

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Tue, Jan 3, 2012 at 11:17 AM, Roberto Sanchez trev2...@gmail.com wrote:

 show this
 ps -ef | grep postgres
   501  1402   100   0   0:00.10 ?? 0:00.15
 /usr/local/Cellar/postgresql/9.1.2/bin/postgres -D /usr/local/var/postgres
 -r /usr/local/var/postgres/server.log
   501  1404  1402   0   0:00.00 ?? 0:00.00 postgres: writer
 process
   501  1405  1402   0   0:00.00 ?? 0:00.00 postgres: wal writer
 process
   501  1406  1402   0   0:00.00 ?? 0:00.00 postgres: autovacuum
 launcher process
   501  1407  1402   0   0:00.00 ?? 0:00.00 postgres: stats
 collector process
   501  1413  1329   0   0:00.00 ttys0000:00.00 grep postgres
   501  1400  1397   0   0:00.00 ttys0010:00.00 pg_ctl -D
 /usr/local/var/postgres stop -mi


 El 2 de enero de 2012 23:34, Raghavendra raghavendra@enterprisedb.com
  escribió:

 Two things:

 1. See the output of the postgres process which are running with utility
 commands.
   ps -ef | grep postgres

 2. Also, take look in the logs for any information written on any process
 which is running and failing to abort.

 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 still failing it shows
 pg_ctl -D /usr/local/var/postgres stop -mi
 waiting for server to shut
 down... failed
 pg_ctl: server does not shut down

 El 02/01/2012, a las 23:09, Raghavendra escribió:

 Seems, some process are still running and looking for database access.
 You can try unclean shutdown IMMEDIATE with below option by forcing all
 process to stop

 pg_ctl -D /usr/local/var/postgres stop -mi


 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 2012/1/3 roberto sanchez muñoz trev2...@gmail.com

 how can i stop the server i use this in terminal (pg_ctl -D
 /usr/local/var/postgres stop -s -m fast) and show this pg_ctl: server does
 not shut down

 thanks for your help
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general








Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-02 Thread Thomas Markus

Hi Hagen,

All German umlaut characters work fine in Postgres, in my experience.
It seems you have encoding issues between your Windows tools, console, and db client.
Use a UTF-8 capable client. Any Java tool, pgAdmin, or similar is fine.
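In psql you can inspect and override the client-side encoding directly; a minimal sketch (session-local, no server changes needed):

```sql
SHOW server_encoding;           -- encoding the database stores text in
SHOW client_encoding;           -- encoding the client is assumed to send/receive
SET client_encoding TO 'UTF8';  -- override for the current session
```

In psql, the `\encoding UTF8` meta-command has the same effect as the SET. A mismatch between the terminal's actual encoding and client_encoding is the usual cause of mangled umlauts.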


regards
Thomas


Am 02.01.2012 20:13, schrieb Hagen Finley:


Hi,

I am using psql (8.2.15) and I would like to input German characters 
(e.g. ä, ß, ö) into char fields I have in a database. I am having 
trouble getting the CentOS Linux OS I am using to input German 
characters via an (apparently supported) German keyboard layout. 
However, that might be a separate matter. When I typed the German into 
Notepad in Windows and attempted to cut and paste the words into an 
INSERT statement, the characters did not persist:


Daß becomes DaDa and Heißt becomes HeiHeit, which falls short of what I 
was hoping for.


I am wondering if I need to enable an international character set 
within Postgres before the German characters will input properly? If 
so, it's not clear from the documentation I have attempted to find how 
one enables other characters sets within Postgres? Any suggestions 
will be much appreciated. Thank you.


Hagen Finley

Boulder, CO





Re: [GENERAL] Duplicated entries are not ignored even if a do instead nothing rule is added.

2012-01-02 Thread Alban Hertroys
On 3 Jan 2012, at 5:20, 邓尧 wrote:

 Hi,
 
 I'm new to pgsql, and I need to do something like MySQL's INSERT IGNORE. 
 After some searching I found a solution, which is adding a "do instead 
 nothing" rule to the corresponding table, but it fails sometimes.

Yeah, if a concurrent transaction tries to create the same record, one of the 
transactions is going to find that it already exists on transaction commit. An 
INSERT-rule is not going to protect you against that.
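One concurrency-safe workaround in this era of PostgreSQL is to trap the unique_violation error per insert, similar to the merge example in the PL/pgSQL documentation. A sketch, assuming the account table defined below (the function name is made up):

```sql
-- hypothetical helper: insert a row, silently ignoring duplicates,
-- even when a concurrent transaction inserts the same home first
CREATE OR REPLACE FUNCTION insert_account_if_absent(p_home text)
RETURNS void AS $$
BEGIN
    INSERT INTO account(home) VALUES (p_home);
EXCEPTION WHEN unique_violation THEN
    -- another transaction won the race; there is nothing to do
END;
$$ LANGUAGE plpgsql;
```

Note that the exception block forces a subtransaction per call, so this is noticeably slower than a plain INSERT when there are few duplicates.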

 The table and the rule is created with the following sql statements:
 
 create sequence ACCOUNT_ID_SEQ;
 create table ACCOUNT (
   ID bigint primary key default nextval('ACCOUNT_ID_SEQ'),
   HOME char(255) not null,
   NAME char(255)
 );
 create unique index on ACCOUNT(HOME);
 create index on ACCOUNT(NAME);

It seems to me that account(home) is actually the PK - do you really need the 
artificial id column? That is a matter of personal preference; wars have been 
waged over artificial vs. natural keys. People here will usually tell you to 
use whatever fits the problem best; both sides have benefits and drawbacks ;)

Another problem you'll have is that char columns are padded up to their full 
size with spaces - you'll end up trimming every value in your client 
applications. You probably want varchar(255) or perhaps better, text. The 
latter also rids you of that 255 length limit from mysql.
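The padding is easy to see for yourself; a quick illustration (table and column names are made up):

```sql
-- char(n) is stored space-padded to its declared length;
-- varchar and text store only what you insert
CREATE TEMP TABLE pad_demo (c char(10), v varchar(10), t text);
INSERT INTO pad_demo VALUES ('abc', 'abc', 'abc');

SELECT octet_length(c) AS c_len,   -- 10: padded with 7 trailing spaces
       octet_length(v) AS v_len,   -- 3
       octet_length(t) AS t_len    -- 3
FROM pad_demo;
```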

 There are about 20 clients do the following insertion (no UPDATE, some of 
 them might DELETE):
 begin transaction:
 insert into ACCOUNT(HOME) values (v1);
 insert into ACCOUNT(HOME) values (v2);
 ...
 commit;
 
 Sometimes I got the error says the unique constraint account_home_idx is 
 violated. 
 
 Any suggestions?

I assume you're talking about parallel inserts from a multi-process tool for 
importing this data? If that's the case then there are a number of solutions 
commonly used.

First of all, especially if you're inserting a lot of data like this, see if 
you can use COPY FROM STDIN instead. That loads the whole transaction contents 
in one go, which is a lot more efficient than thousands of sequential inserts. 
As it's a single statement that way, you don't even need to wrap it in a 
transaction anymore - you'll get an implicit transaction per single statement, 
which is in this case exactly what you want for this single COPY statement.
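In psql that looks like this (the values are illustrative; data rows are terminated by a line containing only a backslash and a period):

```sql
COPY account(home) FROM STDIN;
/home/user1
/home/user2
/home/user3
\.
```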

The other thing people usually do is to insert the data into a staging table 
without UNIQUE constraints. After that they issue:

INSERT INTO account(home) SELECT DISTINCT home FROM staging_table WHERE NOT 
EXISTS (SELECT 1 FROM account WHERE account.home = staging_table.home);
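Put together, the staging approach might look like the following sketch; staging_table is deliberately created without the unique index, and being a temp table it disappears at the end of the session:

```sql
-- load raw, possibly-duplicated data into an unconstrained staging table
CREATE TEMP TABLE staging_table (home text);
COPY staging_table(home) FROM STDIN;

-- move only the rows that are not already present
INSERT INTO account(home)
SELECT DISTINCT home
FROM staging_table
WHERE NOT EXISTS
      (SELECT 1 FROM account WHERE account.home = staging_table.home);
```

With a single loader process this is race-free; with several concurrent loaders you would still need to serialize the final INSERT step.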

Other options are to use external tools written for batch inserting large 
amounts of data. I seem to recall pgfouine is such an application, but I've 
never used it.

Alban Hertroys

--
The scale of a problem often equals the size of an ego.


