Re: [GENERAL] Somewhat automated method of cleaning table of corrupt records for pg_dump

2012-10-25 Thread Heiko Wundram
On 22.10.2012 22:34, Martijn van Oosterhout wrote: Something that has worked for me in the past is: $ SELECT ctid FROM table WHERE length(field) 0; As the structure of the tables (about four were affected) isn't something I wanted to actually look at, I set off writing a small
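The ctid-based idea above can be taken further by bisecting the table one heap page at a time. A minimal sketch (the table name and page count are illustrative, and this helper is not from the thread itself): generate one SELECT per heap page, run them against a *copy* of the database, and any query that errors out points at a damaged page whose rows can then be excluded.

```python
def ctid_scan_queries(table, max_page):
    """Yield one SELECT per heap page of `table`.

    Each query restricts the scan to a single page via ctid ranges;
    running them one by one isolates the page(s) holding corrupt rows.
    """
    for page in range(max_page + 1):
        yield (
            f"SELECT ctid, * FROM {table} "
            f"WHERE ctid >= '({page},0)' AND ctid < '({page + 1},0)';"
        )

# Illustrative table name and page count -- a real run would take the page
# count from pg_class.relpages for the affected table.
queries = list(ctid_scan_queries("mytable", 2))
```

Feeding each generated statement to psql separately keeps one bad page from aborting the whole scan.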

Re: [GENERAL] Somewhat automated method of cleaning table of corrupt records for pg_dump

2012-10-22 Thread Heiko Wundram
On 22.10.2012 09:05, Craig Ringer wrote: Working strictly with a *copy*, does REINDEXing then CLUSTERing the tables help? VACUUM FULL on 8.3 won't rebuild indexes, so if index damage is the culprit a reindex may help. Then, if CLUSTER is able to rewrite the tables in index order you might be
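The repair sequence suggested above can be sketched as a small statement generator (a hypothetical helper, not code from the thread). Note that CLUSTER with no USING clause only works on tables that have been clustered before; on a fresh copy you would name an index explicitly.

```python
def rebuild_statements(tables):
    """Emit, per table, a REINDEX (to repair index damage) followed by a
    CLUSTER (to rewrite the heap in index order on the working copy)."""
    stmts = []
    for t in tables:
        stmts.append(f"REINDEX TABLE {t};")
        stmts.append(f"CLUSTER {t};")
    return stmts

stmts = rebuild_statements(["t1", "t2"])
```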

[GENERAL] Somewhat automated method of cleaning table of corrupt records for pg_dump

2012-10-19 Thread Heiko Wundram
Hey! I'm currently in the situation that, due to (probably) faulty memory in a server, I have a corrupted PostgreSQL database. Getting at the data that's in the DB is not time-critical (because backups have restored the largest part of it), but I'd still like to restore what can be restored

Re: [GENERAL] [SOLVED] Very high memory usage on restoring dump (with plain psql) on pg 9.1.2

2012-03-23 Thread Heiko Wundram
On 22.03.2012 18:21, Heiko Wundram wrote: On 22.03.2012 15:48, Tom Lane wrote: There was a memory leak in the last-but-one releases for index operations on inet and cidr datatypes, so I'm wondering if that explains your problem ... I'll be updating pgsql now and then recheck the import

[GENERAL] Very high memory usage on restoring dump (with plain psql) on pg 9.1.2

2012-03-22 Thread Heiko Wundram
Hey! On a host that I'm currently in the process of migrating, I'm experiencing massive memory usage when importing the dump (generated using a plain pg_dump without options) using psql. The massive memory usage occurs while the CREATE INDEX commands are executed, and for a table with about
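As the follow-ups show, the root cause here was a memory leak in index operations on inet/cidr, fixed by a minor-version update. Under normal conditions, though, the memory a CREATE INDEX build may use is bounded by maintenance_work_mem, which can be capped for the restoring session. A sketch (the value and file name are illustrative):

```sql
-- Cap per-index-build memory for this psql session, then run the restore.
SET maintenance_work_mem = '256MB';
\i dump.sql
```

This only helps when index builds honour the limit; a genuine leak, as in this thread, is fixed by upgrading.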

Re: [GENERAL] Very high memory usage on restoring dump (with plain psql) on pg 9.1.2

2012-03-22 Thread Heiko Wundram
On 22.03.2012 15:48, Tom Lane wrote: What PG version are we talking about, and what exactly is the problematic index? Index is on (inet, integer, smallint, timestamp w/o timezone), btree and a primary key. There was a memory leak in the last-but-one releases for index operations on inet

Re: [GENERAL] Regular expression character escape

2012-02-24 Thread Heiko Wundram
On 24.02.2012 17:04, Ronan Dunklau wrote: On 24/02/2012 16:38, David Johnston wrote: You could (should?) write the escaping routine on the server side in a user-defined function: WHERE some_col ~ ('^' || make_regexp_literal(user_submitted_stringliteral) || '\d*$') I totally agree, but I

Re: [GENERAL] Regular expression character escape

2012-02-24 Thread Heiko Wundram
On 24.02.2012 17:40, Ronan Dunklau wrote: On 24/02/2012 17:09, Heiko Wundram wrote: Use the corresponding function of your programming language/framework of choice. E.g. Python delivers this as re.escape(). Thank you, but as I wrote in the original post, I don't know how postgresql
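The re.escape() approach mentioned above looks like this in practice (the sample input is illustrative). Note that Python's escaping rules and PostgreSQL's ARE regex syntax are not identical, so escaped output should still be tested against the server's regex engine before relying on it.

```python
import re

# A user-supplied string that should match literally, not as a pattern.
user_input = "10.0.0.1 (host?)"

# re.escape() backslash-escapes every regex metacharacter in the literal,
# so '.', '(' and '?' lose their special meaning in the pattern.
pattern = "^" + re.escape(user_input) + r"\d*$"

# The literal prefix matches exactly, followed by optional digits.
matched = re.fullmatch(pattern, "10.0.0.1 (host?)42") is not None
```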

Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Heiko Wundram
On 29.11.2011 23:44, Filip RembiaƂkowski wrote: did you look at connlimit? http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit AFAIK, it applies only to ESTABLISHED state, so maybe it suits you. No, I didn't, and THANKS! That's exactly the hint I needed.
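A connlimit rule for this use case might look like the following sketch (the threshold of 10 connections is illustrative; connlimit has since moved from patch-o-matic into mainline netfilter as the xt_connlimit module):

```shell
# Reject new connections to PostgreSQL once a single source address
# already holds more than 10 of them; existing sessions are untouched.
iptables -A INPUT -p tcp --syn --dport 5432 \
    -m connlimit --connlimit-above 10 \
    -j REJECT --reject-with tcp-reset
```

Because the check happens at SYN time in the kernel, the limit applies per IP regardless of which database or user the client authenticates as.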

Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Heiko Wundram
On 29.11.2011 23:49, Tom Lane wrote: Another way that we've sometimes recommended people handle custom login restrictions is (1) use PAM for authentication (2) find or write a PAM plugin that makes the kind of check you want Very interesting - I'll first try the connlimit approach hinted at

Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Heiko Wundram
On 30.11.2011 09:26, Magnus Hagander wrote: I don't believe we do teardown using PAM, just session start. So you'd have to have your PAM module check the current state of postgresql every time - not keep some internal state. Okay, that's too bad - if connlimit doesn't do the trick, I'll try

[GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-29 Thread Heiko Wundram
Hello! Sorry for the subscribe post I've just sent; that was bad reading on my part (of the subscription info on the homepage). Anyway, the title says it all: is there any possibility to limit the number of connections that a client can have concurrently with a PostgreSQL-Server with

Re: [GENERAL] Sporadic query not returning anything..how to diagnose?

2011-11-29 Thread Heiko Wundram
On 29.11.2011 16:46, Phoenix Kiula wrote: About 5% of the time (in situations of high traffic), this is not returning a value in my PHP code. Because it's not found, the code tries to INSERT a new record and there's a duplicate key error, which is in the logs. The traffic to the site is much
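The pattern described above is a classic SELECT-then-INSERT race: two sessions both fail to find the row, both insert, and the loser hits the unique constraint. A minimal sketch of the usual fix, simulated here with a dict standing in for a table with a unique key (no real database involved; on PostgreSQL 9.5+ the same thing is expressed as INSERT ... ON CONFLICT):

```python
class DuplicateKeyError(Exception):
    """Stand-in for the database's unique-violation error."""


table = {}  # simulated table: unique key -> row value


def insert(key, value):
    """Simulated INSERT that enforces the unique constraint."""
    if key in table:
        raise DuplicateKeyError(key)
    table[key] = value


def get_or_create(key, value):
    """SELECT first; if a concurrent INSERT wins the race, re-read the row
    instead of surfacing the duplicate-key error to the application."""
    row = table.get(key)
    if row is not None:
        return row
    try:
        insert(key, value)
        return value
    except DuplicateKeyError:
        # Another session inserted between our SELECT and INSERT:
        # the row now exists, so fetch it rather than failing.
        return table[key]
```

In the PHP code from the thread, the equivalent change is to catch the duplicate-key error and retry the SELECT instead of treating it as fatal.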

Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-29 Thread Heiko Wundram
On 29.11.2011 20:44, Filip RembiaƂkowski wrote: no easy, standard way of doing this in postgres. before we go into workarounds - what's the underlying OS? Okay, that's too bad that there's no standard way for this. The underlying OS is Linux (Gentoo, to be exact), and I'd already thought