Re: [GENERAL] How to restore the table space tar files created by pg_basebackup?

2011-11-30 Thread Venkat Balaji
Do you have tablespace directories with a softlink to the data directory?

Thanks
VB

On Wed, Nov 30, 2011 at 7:42 PM, Samba  wrote:

> Hi all,
>
> I have taken a base backup of my master server using pg_basebackup command
> as below:
>  pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9 -l masterbackup -h
> localhost -U replication -w
>
> The above created 4 tar files, namely: 16394.tar.gz  16395.tar.gz
>  16396.tar.gz  base.tar.gz
>
> I do know that my database contains 3 table spaces in addition to
> pg_default and pg_global (I guess that is why it created those three
> numbered tar.gz files, plus one base.tar.gz file), and my master and
> standby servers are identical in every respect.
>
> Now, I'm not sure how I can restore these files on the standby server. I
> could restore the base.tar.gz into the data directory on the standby, and
> streaming replication started working properly. But I'm not sure what to
> do with these additional numbered gz files, which contain the same data
> that is already contained in the base.tar.gz file.
>
> Can someone explain to me what to do with these files? The documentation for
> pg_basebackup does not mention this information; it just says that a
> different variant of the command will fail if there are multiple table
> spaces.
>
> Another related query is whether we can specify the name of the backup file
> instead of leaving it as base.tar.gz.
>
> Thanks and Regards,
> Samba
>
>
>
>
>


[GENERAL] Problem with custom aggregates and record pseudo-type

2011-11-30 Thread Maxim Boguk
I created a special custom aggregate function to append arrays, defined as:
CREATE AGGREGATE array_accum (anyarray)
(
sfunc = array_cat,
stype = anyarray,
initcond = '{}'
);


On arrays of common types it works without any problems:
SELECT array_accum(i) from (values (ARRAY[1,2]), (ARRAY[3,4])) as t(i);
 array_accum
-
 {1,2,3,4}
(1 row)


However, once I try to use it with the record[] type I get an error:
SELECT array_accum(i) from (values (ARRAY[row(1,2),row(2,3)]),
(ARRAY[row(1,2),row(2,3)])) as t(i);
ERROR:  cannot concatenate incompatible arrays
DETAIL:  Arrays with element types record[] and record are not compatible
for concatenation.


The base function of the aggregate, array_cat, works with record[]
without any complaints:
SELECT array_cat(ARRAY[row(1,2),row(2,3)], ARRAY[row(1,2),row(2,3)]);
 array_cat
---
 {"(1,2)","(2,3)","(1,2)","(2,3)"}


What am I doing wrong? Or how do I create a correct version of such an
aggregate function?


-- 
Maxim Boguk
Senior Postgresql DBA.

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com

LinkedIn profile: http://nz.linkedin.com/in/maximboguk
If they can send one man to the moon... why can't they send them all?

MoiKrug: http://mboguk.moikrug.ru/
Strength breaks straw, but not everything in our life is straw, and strength
is far from everything.
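
A possible workaround, sketched under the assumption that the failure is
specific to arrays of the anonymous record type: declare a named composite
type and aggregate arrays of that type instead (the type name "pair" below
is made up for illustration):

CREATE TYPE pair AS (a int, b int);

SELECT array_accum(i)
FROM (VALUES (ARRAY[(1,2)::pair, (2,3)::pair]),
             (ARRAY[(1,2)::pair, (2,3)::pair])) AS t(i);
-- expected: {"(1,2)","(2,3)","(1,2)","(2,3)"}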


Re: [GENERAL] Using a domain

2011-11-30 Thread Tom Lane
Daniele Varrazzo  writes:
> I'm trying to use a domain to define a data type constraint, let's say
> a hypothetical uk_post_code with pattern LNNLL. I'd enforce no
> whitespace, all uppercase.

> I would also need a way to normalize before validating: given an input
> such as "w3 6bq", normalize it to W36BQ before trying to apply the
> check. It would be great if I could give this function the same name
> as the domain, so that uk_post_code('w3 6bq') would return W36BQ cast
> to the domain.

That particular case isn't going to work unless you choose a different
function name --- as you've found out, the parser prefers the
interpretation that this means the same as 'w3 6bq'::uk_post_code,
which is not a cast but just a literal of the named type.

If you were willing to write something like uk_post_code('w3 6bq'::text)
and define your function as taking text (or varchar if that turns you on),
it should work.  Likewise anytime the argument is a variable/expression
of known type text.  But with a bare untyped literal, no.

regards, tom lane
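
A minimal sketch of that approach, assuming the hypothetical LNNLL pattern
from the original post and the to_uk_post_code name floated later in the
thread:

CREATE DOMAIN uk_post_code AS text
    CHECK (VALUE ~ '^[A-Z][0-9][0-9][A-Z][A-Z]$');

-- normalize first, then cast; the domain check runs at the cast
CREATE FUNCTION to_uk_post_code(text) RETURNS uk_post_code AS $$
    SELECT upper(replace($1, ' ', ''))::uk_post_code
$$ LANGUAGE sql IMMUTABLE;

SELECT to_uk_post_code('w3 6bq');  -- W36BQ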



[GENERAL] Using a domain

2011-11-30 Thread Daniele Varrazzo
Hello,

I'm trying to use a domain to define a data type constraint, let's say
a hypothetical uk_post_code with pattern LNNLL. I'd enforce no
whitespace, all uppercase.

I would also need a way to normalize before validating: given an input
such as "w3 6bq", normalize it to W36BQ before trying to apply the
check. It would be great if I could give this function the same name
as the domain, so that uk_post_code('w3 6bq') would return W36BQ cast
to the domain.

Unfortunately it seems a domain implicitly defines a function, and
this function only performs the cast: the above is thus equivalent to
'w3 6bq'::uk_post_code, which would fail as the constraint doesn't
match. IIRC from when I've played with type definitions in C, for a
type there is no such automatic definition: a function converting text
to the type must be explicitly provided. \df doesn't show such a
function for the domain (nor does DROP FUNCTION seem to know it), and
if I create one, it is not invoked (the cast takes precedence).

Is there any way to define a conversion in a function call
uk_post_code(text), or is the only way to provide a normalization
function to give it a different name (such as to_uk_post_code - I'd
like to know if there is a convention for naming this function)?

Is there any documentation about domains apart from the
CREATE/ALTER/DROP commands? I haven't found any in the docs.

Thanks.

-- Daniele



Re: [GENERAL] Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?

2011-11-30 Thread David Johnston
On Nov 30, 2011, at 18:44, Craig Ringer  wrote:

> On 11/30/2011 10:32 PM, Sergey Konoplev wrote:
>> Would it be more compact from the point of view of streaming
>> replication if we make the application accumulate changes and do one
>> COPY instead of lots of INSERTs, say, once a minute? And if so, how
>> can we estimate the effect approximately?
> Streaming replication works on a rather lower level than that. It records 
> information about transaction starts, rollbacks and commits, and records disk 
> block changes. It does not record SQL statements. It's not using INSERT, so 
> you can't switch to COPY. Streaming replication basically just copies the WAL 
> data, and WAL data is not all that compact.

I think a better way to phrase the question is whether these three types of 
constructs produce different results on the replication side:

Insert into tbl values(...); [times 50]
insert into tbl values (...), (...), (...), ...; [ once with 50 values ]
Copy [ with 50 input rows provided ]

I would presume the first one performs badly, but I have no idea whether the 
multi-value version of INSERT would be outperformed by an equivalent COPY 
command (both on the main query and during replication).

Though, does auto-commit affect the results in the first case? I.e., without 
auto-commit, do the first two replicate equivalently?

> 
David J


Re: [GENERAL] Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?

2011-11-30 Thread Craig Ringer

On 11/30/2011 10:32 PM, Sergey Konoplev wrote:

Would it be more compact from the point of view of streaming
replication if we make the application accumulate changes and do one
COPY instead of lots of INSERTs, say, once a minute? And if so, how
can we estimate the effect approximately?
Streaming replication works on a rather lower level than that. It 
records information about transaction starts, rollbacks and commits, and 
records disk block changes. It does not record SQL statements. It's not 
using INSERT, so you can't switch to COPY. Streaming replication 
basically just copies the WAL data, and WAL data is not all that compact.


Try running streaming replication over a compressed channel. PostgreSQL 
might gain the ability to do this natively - if someone cares enough to 
implement it and if it doesn't already do it without my noticing - but 
in the meantime you can use a compressed SSH tunnel, compressed VPN, etc.
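
A minimal sketch of the tunnel idea (host names, user, and the local port
5433 are assumptions):

# on the standby: forward a local port to the master over a compressed tunnel
ssh -C -N -f -L 5433:localhost:5432 postgres@master-host

# recovery.conf on the standby then points at the local end of the tunnel
primary_conninfo = 'host=localhost port=5433 user=replication'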


Alternatively, investigate 3rd-party replication options like Slony and 
Bucardo that might be better suited to your use case.


--
Craig Ringer



Re: [GENERAL] Query Optimizer makes a poor choice

2011-11-30 Thread Tomas Vondra
On 30.11.2011 23:22, Tyler Hains wrote:
>>> I haven't had a chance to experiment with the SET STATISTICS, but that
>>> got me going on something interesting...
>>>
>>> Do these statistics look right?
>>>
>>> # SELECT attname, n_distinct, most_common_vals, histogram_bounds FROM
>>> pg_stats WHERE tablename = 'cards';
>>>
>> ...
>>> "card_set_id"   905
>>> "{5201,3203,3169,5679,5143,5204,5655,4322,5236,4513}"
>>> "{4,3080,3896,4349,4701,5179,5445,5706,6003,6361,6784}"
>>
>> This looks promising, because n_distinct is low enough that you can
>> cover almost all values with statistics.
>> raise the statistics and ANALYZE. should help.
>> (NOTE NOTE NOTE: assuming that the distribution is even)
>>
>>
>> ...
>> but one thing we see for sure is that you have not tuned your
>> PostgreSQL instance :-)
>> I would recommend pgtune, -> pgfoundry.org/projects/pgtune/
>> it covers most important stuff, *including* default_statistics_target.
>>
>>
>>
>> Filip
>>
> 
> I just tried the set statistics on our test system with essentially the 
> same end result. 

Can you describe the problem in a bit more detail? Maybe you just have
the same problem as the OP, because with this (very simple) test case
it works just fine.


create table test_tab (id int primary key, val int, txtval text);

insert into test_tab select i, mod(i, 10000), md5(i::text) from
generate_series(1,10000000) s(i);

create index test_tab_idx on test_tab (val);

analyze test_tab;


The table is about 730MB, the indexes are about 214MB each.


explain analyze select * from test_tab where val = 500 order by id;

1st execution (not cached): http://explain.depesz.com/s/1VQ (7786 ms)
2nd execution (cached): http://explain.depesz.com/s/cnt (1 ms)

explain analyze select * from test_tab where val = 500 order by id limit 1;

1st execution (not cached): http://explain.depesz.com/s/nlE (66 ms)
2nd execution (cached): http://explain.depesz.com/s/WNa (0.08 ms)


So in both cases the LIMIT (with index scan) is faster. Sure, there may
be cases when this does not work that well - maybe it's not well cached,
maybe there's some other issue.

But it clearly is not true that LIMIT is evil and should be avoided.

Tomas



Re: [GENERAL] Extending the volume size of the data directory volume

2011-11-30 Thread Scott Mead
On Wed, Nov 30, 2011 at 4:19 PM, panam  wrote:

> Had to restart the import. This time, I tried with a smaller initial disk
> size (1GB) and extended it dynamically. It did not cause any problems.
> A different reason might be that I remounted the volume in between during
> the last update to deactivate buffer flushing. Maybe a bad combination.
> Let's see how far it gets this time...
>
>
If you remounted without restarting postgres, I would say "that's your
problem"

--JT


>
>


Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Filip Rembiałkowski
There is no easy, "standard" way of doing this in Postgres.
Before we go into workarounds - what's the underlying OS?



2011/11/29 Heiko Wundram :
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the number
> of connections that a client can have concurrently with a PostgreSQL-Server
> with "on-board" means (where I can't influence which user/database the
> clients use, rather, the clients mostly all use the same user/database, and
> I want to make sure that a single client which runs amok doesn't kill
> connectivity for other clients)? I could surely implement this with a proxy
> sitting in front of the server, but I'd rather implement this with
> PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
> software in question.
>
> Thanks for any hints!
>
> --
> --- Heiko.
>



Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Tomas Vondra
On 29.11.2011 14:49, Heiko Wundram wrote:
> Hello!
> 
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
> 
> Anyway, the title says it all: is there any possibility to limit the
> number of connections that a client can have concurrently with a
> PostgreSQL-Server with "on-board" means (where I can't influence which
> user/database the clients use, rather, the clients mostly all use the
> same user/database, and I want to make sure that a single client which
> runs amok doesn't kill connectivity for other clients)? I could surely
> implement this with a proxy sitting in front of the server, but I'd
> rather implement this with PostgreSQL directly.
> 
> I'm using (and need to stick with) PostgreSQL 8.3 because of the
> frontend software in question.
> 
> Thanks for any hints!

Hi,

maybe you could use pgbouncer - it won't allow you to limit them by
source IP, but maybe you can group them by company or something.

For example like this

[databases]
conn_a = host=127.0.0.1 port=5432 user=user_a password='a' dbname=db_a
 pool_size=20
conn_b = host=127.0.0.1 port=5432 user=user_a password='a' dbname=db_a
 pool_size=10

The users will then connect just like today, but they'll connect to the
pgbouncer using dbnames conn_a and conn_b. Those using conn_a will be
able to use 20 connections, those using conn_b will be able to use 10
connections.

Each customer will get different credentials and their own db name (in
pgbouncer).
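
For example, a client in the first group would then connect through
pgbouncer with something like this (6432 is pgbouncer's usual listen port,
assumed here):

psql -h pgbouncer-host -p 6432 -U user_a conn_a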

Tomas



Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Merlin Moncure
On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram  wrote:
> Hello!
>
> Sorry for that subscribe post I've just sent, that was bad reading on my
> part (for the subscribe info on the homepage).
>
> Anyway, the title says it all: is there any possibility to limit the number
> of connections that a client can have concurrently with a PostgreSQL-Server
> with "on-board" means (where I can't influence which user/database the
> clients use, rather, the clients mostly all use the same user/database, and
> I want to make sure that a single client which runs amok doesn't kill
> connectivity for other clients)? I could surely implement this with a proxy
> sitting in front of the server, but I'd rather implement this with
> PostgreSQL directly.
>
> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
> software in question.
>
> Thanks for any hints!

I think the (hypothetical) general solution for these types of
problems is to have logon triggers.  It's one of the (very) few things
I envy from SQL Server -- see here:
http://msdn.microsoft.com/en-us/library/bb326598.aspx.

Barring the above, if you can trust the client to call a function upon
connection I'd just do that and handle the error on the client with a
connection drop. Barring *that*, I'd be putting my clients in front of
pgbouncer with some patches to the same to get what I needed
(pgbouncer is single threaded, making firewall-type features quite
easy to implement in an ad hoc fashion).

merlin



Re: [GENERAL] odbc_fdw

2011-11-30 Thread Florian Schwendener

Hi Laurenz

Thank you for your help. As I know little about Linux and only a bit
about make files, I really don't know if I'm doing the right thing.
I've typed this:

root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0# 
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 make

make: *** No rule to make target `make'.  Stop.

You see, I haven't come far... Am I doing something wrong?

Thanks again!
Flo

On 29.11.2011 10:34, Albe Laurenz wrote:

fschwend at hsr.ch wrote:

I built the current PostgreSQL 9.1.1 sources under Ubuntu 11.04 (in a
VMware under Win7).

I followed the steps in this guide:
www.thegeekstuff.com/2009/04/linux-postgresql-install-and-configure-from-source

It seems to work (I can run the server and connect to it with PgAdmin).

Now I'd like to integrate the ODBC_FDW extension in my installation.
However, I don't really understand the steps described on the download page:
pgxn.org/dist/odbc_fdw/0.1.0

Can anybody tell me how to build it? I'm a software developer myself
but a Linux newbie...

I haven't tried it, but looking at the instructions I would say:

1) Install UnixODBC. If you use packages, make sure you install the
headers too (on my RHEL system, the packages are called unixODBC and
unixODBC-devel).

2) Get odbc_fdw and unzip the source.

3) Change into the source directory.

4) Run "make" and "make install" as the web page describes it
(substituting the path to your configured PostgreSQL source tree).

If there still is something unclear, ask for details.

Yours,
Laurenz Albe
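
A consolidated sketch of those steps, assuming Ubuntu package names (the
unixODBC headers live in unixodbc-dev there) and the paths used earlier in
the thread:

# as root; package names are an assumption for Ubuntu
apt-get install unixodbc unixodbc-dev
unzip odbc_fdw-0.1.0.zip && cd odbc_fdw-0.1.0
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 install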





Re: [GENERAL] odbc_fdw

2011-11-30 Thread Florian Schwendener

Oh, didn't see that! Now it says:

root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0# 
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1

make: Nothing to be done for `all'.

I remember trying a few options with the make command. Is it maybe 
already built?


On 30.11.2011 09:46, Albe Laurenz wrote:

Florian Schwendener wrote:

Thank you for your help. As I know little about Linux and only a bit
about make files, I really don't know if
I'm doing the right thing. I've typed this:

root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 make
make: *** No rule to make target `make'.  Stop.

You see, I haven't come far... Am I doing something wrong?

Yes, there are two "make"s in your line. Try

PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1

Yours,
Laurenz Albe





Re: [GENERAL] Query Optimizer makes a poor choice

2011-11-30 Thread Tyler Hains
>> I haven't had a chance to experiment with the SET STATISTICS, but that
>> got me going on something interesting...
>>
>> Do these statistics look right?
>>
>> # SELECT attname, n_distinct, most_common_vals, histogram_bounds FROM
>> pg_stats WHERE tablename = 'cards';
>>
>...
>> "card_set_id"   905
>> "{5201,3203,3169,5679,5143,5204,5655,4322,5236,4513}"
>> "{4,3080,3896,4349,4701,5179,5445,5706,6003,6361,6784}"
>
>This looks promising, because n_distinct is low enough that you can
>cover almost all values with statistics.
>raise the statistics and ANALYZE. should help.
>(NOTE NOTE NOTE: assuming that the distribution is even)
>
>
>...
>but one thing we see for sure is that you have not tuned your
>PostgreSQL instance :-)
>I would recommend pgtune, -> pgfoundry.org/projects/pgtune/
>it covers most important stuff, *including* default_statistics_target.
>
>
>
>Filip
>

I just tried the set statistics on our test system with essentially the 
same end result. 

I'm beginning to think the answer is to just avoid LIMIT.

Tyler
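
A hedged sketch of the usual alternative to dropping LIMIT: give the planner
an index that matches both the filter and the ordering, so the LIMIT can be
satisfied by walking the index directly (the query shape and the id column
are assumptions based on the thread):

CREATE INDEX cards_set_id_id_idx ON cards (card_set_id, id);

-- with the leading column pinned by equality, the index also provides
-- the ORDER BY id, so LIMIT 1 stops after one index entry
SELECT * FROM cards WHERE card_set_id = 5201 ORDER BY id LIMIT 1;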





Re: [GENERAL] Extending the volume size of the data directory volume

2011-11-30 Thread panam
Had to restart the import. This time, I tried with a smaller initial disk
size (1GB) and extended it dynamically. It did not cause any problems.
A different reason might be that I remounted the volume in between during
the last update to deactivate buffer flushing. Maybe a bad combination.
Let's see how far it gets this time...





Re: [GENERAL] Is this safe to perform on PostgreSQL 8.3.7 -> Resize a column in a PostgreSQL table without changing data

2011-11-30 Thread Tom Lane
Reid Thompson  writes:
> My assumption is that converting varchar(n) to text would still force a
> re-write of the table?  i.e. currently there's no officially 'safe' way
> to convert the field type w/o incurring a table re-write.

If you do it through ALTER TABLE, yes.  Since text and varchar are the
same thing on disk, you could get away with just manually updating the
pg_attribute row for the column ... but I'd counsel practicing on a
scratch database ;-)

regards, tom lane
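
A sketch of what that manual catalog update might look like (table and
column names are placeholders; as above, practice on a scratch database
first):

UPDATE pg_attribute
   SET atttypid = 'text'::regtype, atttypmod = -1
 WHERE attrelid = 'my_table'::regclass
   AND attname = 'my_column';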



Re: [GENERAL] Is this safe to perform on PostgreSQL 8.3.7 -> Resize a column in a PostgreSQL table without changing data

2011-11-30 Thread Reid Thompson
On Tue, 2011-11-22 at 15:55 +, Gregg Jaskiewicz wrote:
> for the future it is better to just use text type, and: check
> length(field) < 35;

Thanks to all for the responses.

The above seems a prudent way to go in the future.

My assumption is that converting varchar(n) to text would still force a
re-write of the table?  i.e. currently there's no officially 'safe' way
to convert the field type w/o incurring a table re-write.





Re: [GENERAL] Strange problem with turning WAL archiving on

2011-11-30 Thread Rodrigo Gonzalez

On 11/30/2011 01:43 PM, Tomas Vondra wrote:

On 30 November 2011, 17:23, BK wrote:

Hi Albe,

On Nov 30, 2011, at 2:31 PM, Albe Laurenz wrote:

Verify the current setting with

SELECT setting, source, boot_val, reset_val,
   sourcefile, sourceline
FROM pg_settings WHERE name = 'wal_level';

If the setting is not right (which is likely the case), try to find out
the cause.

This query shows that the settings are still on minimal. Strange. As I can
see there is just one postgresql.conf file (in the data directory) of the
9.1 installation. Everything is changed inside it according to the specs.
Wal_level is on archive. I even tried renaming the file, to see if when I
reboot PostgreSQL I would get an error. I got an error and therefore it is
the .conf that the DBMS uses.

Silly idea - the wal_level option is commented out by default. Are you
sure you've removed the '#' at the beginning?

Or maybe you have an included file after that that is hiding it?

Check for include directives in your configuration


Tomas







Re: [GENERAL] PostgreSQL 9.0 and asynchronous replication through VPN

2011-11-30 Thread Edson Richter

On 30-11-2011 11:17, John DeSoi wrote:

On Nov 30, 2011, at 5:02 AM, Edson Richter wrote:


I assume that the OpenVPN got disconnected for a few seconds, and came back 
again.

My question is: assuming I have enough wal segments on the Master side, does 
the Slave get synchronized automatically after the connection is reestablished, 
or will I need to restart Slave PostgreSQL to put it in sync again?

If I restart Slave PostgreSQL, I get:


Yes, it automatically catches up when the connection is working again. You 
should not have to restart the slave.
Thanks! It would be a nice improvement if a log message were emitted when 
replication is reestablished.


Regards,

Edson.


John DeSoi, Ph.D.






Re: [GENERAL] Strange problem with turning WAL archiving on

2011-11-30 Thread Tomas Vondra
On 30 November 2011, 17:23, BK wrote:
> Hi Albe,
>
> On Nov 30, 2011, at 2:31 PM, Albe Laurenz wrote:
>> Verify the current setting with
>>
>> SELECT setting, source, boot_val, reset_val,
>>   sourcefile, sourceline
>> FROM pg_settings WHERE name = 'wal_level';
>>
>> If the setting is not right (which is likely the case), try to find out
>> the cause.
>
> This query shows that the settings are still on minimal. Strange. As I can
> see there is just one postgresql.conf file (in the data directory) of the
> 9.1 installation. Everything is changed inside it according to the specs.
> Wal_level is on archive. I even tried renaming the file, to see if when I
> reboot PostgreSQL I would get an error. I got an error and therefore it is
> the .conf that the DBMS uses.

Silly idea - the wal_level option is commented out by default. Are you
sure you've removed the '#' at the beginning?

Tomas





Re: [GENERAL] Strange problem with turning WAL archiving on

2011-11-30 Thread BK
Hi Albe,

On Nov 30, 2011, at 2:31 PM, Albe Laurenz wrote:
> Verify the current setting with
> 
> SELECT setting, source, boot_val, reset_val,
>   sourcefile, sourceline
> FROM pg_settings WHERE name = 'wal_level';
> 
> If the setting is not right (which is likely the case), try to find out
> the cause.

This query shows that the settings are still on minimal. Strange. As I can see 
there is just one postgresql.conf file (in the data directory) of the 9.1 
installation. Everything is changed inside it according to the specs. Wal_level 
is on archive. I even tried renaming the file, to see if when I reboot 
PostgreSQL I would get an error. I got an error and therefore it is the .conf 
that the DBMS uses. 

> Did you change the correct postgresql.conf?
> Are there more than one lines for wal_level in the file
> (try "grep wal_level postgresql.conf")?

I tried grepping; there is just one instance of it and it is set to archive.

Any other ideas what could have gone wrong in this strange situation?

Best regards,
BK

> 
> Yours,
> Laurenz Albe
> 




[GENERAL] Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?

2011-11-30 Thread Sergey Konoplev
Hi,

An application server does a lot of small inserts on several tables
(a kind of user activity logging) in my database. These inserts
create a lot of (if not most of) the traffic on the server. The
server has a hot standby replica working through a very slow channel
between them.

When the amount of these inserts increases, the master does not manage
to send such a big stream of changes (because of the low bandwidth) to
the replica in time, and the lag between them grows dramatically.

Would it be more compact from the point of view of streaming
replication if we make the application accumulate changes and do one
COPY instead of lots of INSERTs, say, once a minute? And if so, how
can we estimate the effect approximately?

PostgreSQL version is 9.0.4 on both servers. If you need more
specifics about the configuration just let me know what exactly.

Thank you very much in advance.

-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp



[GENERAL] How to restore the table space tar files created by pg_basebackup?

2011-11-30 Thread Samba
Hi all,

I have taken a base backup of my master server using pg_basebackup command
as below:
 pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9 -l masterbackup -h
localhost -U replication -w

The above created 4 tar files, namely: 16394.tar.gz  16395.tar.gz
 16396.tar.gz  base.tar.gz

I do know that my database contains 3 table spaces in addition to
pg_default and pg_global (I guess that is why it created those three
numbered tar.gz files, plus one base.tar.gz file), and my master and
standby servers are identical in every respect.

Now, I'm not sure how I can restore these files on the standby server. I
could restore the base.tar.gz into the data directory on the standby, and
streaming replication started working properly. But I'm not sure what to
do with these additional numbered gz files, which contain the same data
that is already contained in the base.tar.gz file.

Can someone explain to me what to do with these files? The documentation for
pg_basebackup does not mention this information; it just says that a
different variant of the command will fail if there are multiple table
spaces.

Another related query is whether we can specify the name of the backup file
instead of leaving it as base.tar.gz.

Thanks and Regards,
Samba
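
A sketch of one way to restore such a backup, assuming the standby's
tablespace directories exist at the same paths as on the master (all paths
below are hypothetical). base.tar.gz goes into the data directory; each
numbered tarball is named after a tablespace OID and is extracted into the
directory its pg_tblspc symlink points to:

# extract the base backup into the standby's data directory
tar -xzf base.tar.gz -C /var/lib/postgresql/data

# see where each tablespace OID points
ls -l /var/lib/postgresql/data/pg_tblspc

# extract each numbered tarball into the directory its symlink targets
tar -xzf 16394.tar.gz -C /path/to/tablespace_16394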


Re: [GENERAL] Strange problem with turning WAL archiving on

2011-11-30 Thread Albe Laurenz
BK wrote:
> I've spent a couple of hours trying some WAL archiving functionality
> on PostgreSQL 9.1 (running on Mac OS X). I turned on all the needed
> options as specified in the documentation:
> 
> wal_level = archive
> archive_mode = on
> archive_command='test ! -f /Volumes/baza/%f && cp %p /Volumes/baza/%f'
> 
> I also tried different archive commands, just to see if this is the
> case, but every time I try starting a backup (as postgres user) I get
> the following error:
> 
> tester=# select pg_start_backup('h');
> ERROR:  WAL level not sufficient for making an online backup
> HINT:  wal_level must be set to "archive" or "hot_standby" at server
> start.
> 
> The postgresql.conf file has all the changes I mentioned above and it
> was of course rebooted after the changes. I cannot find anything
> online regarding this issue. It seems a bit strange, as all the
> configuration looks all right, but the error is still here.

Verify the current setting with

SELECT setting, source, boot_val, reset_val,
   sourcefile, sourceline
FROM pg_settings WHERE name = 'wal_level';

If the setting is not right (which is likely the case), try to find out
the cause.

Did you change the correct postgresql.conf?
Are there more than one lines for wal_level in the file
(try "grep wal_level postgresql.conf")?

Yours,
Laurenz Albe



Re: [GENERAL] DDL & DML Logging doesn't work for calling functions

2011-11-30 Thread Albe Laurenz
MURAT KOÇ wrote:
> If we set log_statement='all', all SQL statements will be logged and the
> log file will grow rapidly (also including unnecessary SQL statements).
> 
> We don't want all SQL statements to be logged, so we keep the logging
> settings from my previous message (log_statement = 'mod').

Unfortunately there is no way to log function executions with this setting.

Yours,
Laurenz Albe



Re: [GENERAL] PostgreSQL 9.0 and asynchronous replication through VPN

2011-11-30 Thread John DeSoi

On Nov 30, 2011, at 5:02 AM, Edson Richter wrote:

> I assume that the OpenVPN got disconnected for a few seconds, and came back 
> again.
> 
> My question is: assuming I have enough wal segments on the Master side, does 
> the Slave get synchronized automatically after the connection is 
> reestablished, or will I need to restart Slave PostgreSQL to put it in sync 
> again?
> 
> If I restart Slave PostgreSQL, I get:


Yes, it automatically catches up when the connection is working again. You 
should not have to restart the slave.

John DeSoi, Ph.D.
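
On 9.0 the master-side knob that controls how much WAL is kept around for a
standby to catch up on is wal_keep_segments; a sketch, with an arbitrary
value:

# postgresql.conf on the master
wal_keep_segments = 256   # retain ~256 * 16MB of WAL for lagging standbys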




Re: [GENERAL] odbc_fdw

2011-11-30 Thread Eduardo Morras

At 16:27 28/11/2011, you wrote:

Hi there!

I built the current PostgreSQL 9.1.1 sources under Ubuntu 11.04 (in 
a VMware under Win7).

I followed the steps in this guide:
www.thegeekstuff.com/2009/04/linux-postgresql-install-and-configure-from-source

It seems to work (I can run the server and connect to it with PgAdmin).

Now I'd like to integrate the ODBC_FDW extension in my installation.
However, I don't really understand the steps described on the download page:
pgxn.org/dist/odbc_fdw/0.1.0

Can anybody tell me how to build it? I'm a software developer myself 
but a Linux newbie...


Thank you for your help!


As a final note, be careful to install only one ODBC driver manager. Some 
Linux distros use iODBC and others use unixODBC, but some apps install the 
one they use, so you can end up with a broken system with both libraries 
"punching for being the king".






Re: [GENERAL] DDL & DML Logging doesn't work for calling functions

2011-11-30 Thread MURAT KOÇ
Thanks for reply.

If we set log_statement='all', all SQL statements will be logged and the
log file will grow rapidly (also including unnecessary SQL statements).

We don't want all SQL statements to be logged, so we keep the logging
settings from my previous message (log_statement = 'mod').

Regards,
Murat KOC

2011/11/30 Albe Laurenz 

> MURAT KOÇ wrote:
> > Version is PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC
> gcc (GCC) 4.1.2 20080704 (Red
> > Hat 4.1.2-51), 64-bit.
> >
> > We set logging parameters as below for DDL & DML Logging:
> > logging_collector = on
> > log_statement = mod
> > log_line_prefix = '%t--%d--%u--%h--%a--%i--%e'
>
> > Logging works successfully while we run direct DML commands like
> "insert, update, delete".
> >
> > But, when I call a function that does DML, logging doesn't work and
> server log file has no information
> > about calling function.
> >
> > I call function like this: SELECT p_dummy_insert();
> >
> > ###This is sample insert function###
> > CREATE OR REPLACE FUNCTION p_dummy_insert ()
> >   RETURNS void AS
> > $BODY$
> > BEGIN
> >  INSERT INTO employee values ('dummy', 'test');
> > END$BODY$
> >   LANGUAGE plpgsql VOLATILE
> >   COST 100;
> >
> > "Not logging of function calls" is expected behavior or a bug? We have
> no information on server logs
> > about who called function or when was it called or what did called
> function do?
>
> The function call itself is logged, but SQL statements inside
> the function are not.
>
> The function call does not show up in your log because it is
> in a SELECT statement.
>
> Set log_statement = 'all' to log the function call.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] odbc_fdw

2011-11-30 Thread Albe Laurenz
Florian Schwendener wrote:
[has problems building odbc_fdw]
> Oh, didn't see that! Now it says:
> 
> root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
> PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1
> make: Nothing to be done for `all'.
> 
> I remember trying a few options with the make command. Is it maybe
> already built?

Looks like it.

You can
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 clean
to delete already built files.

Then you can rebuild.

After building, use
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 install
to install.

Yours,
Laurenz Albe



[GENERAL] PostgreSQL 9.0 and asynchronous replication through VPN

2011-11-30 Thread Edson Richter
Dear friends,

I have a somewhat unstable link between two different locations
with OpenVPN established and working.

Now, I've configured PostgreSQL 9.0.5 for asynchronous replication.
This morning I got the following message on Slave PostgreSQL log:

--
2011-11-30 07:52:23 BRST % - FATAL:  could not connect to the
primary server: could not connect to server: Connection timed out
        Is the server running on host "10.68.73.5" and accepting
        TCP/IP connections on port 5432?
    
2011-11-30 07:55:33 BRST % - FATAL:  could not connect to the
primary server: could not connect to server: Connection timed out
        Is the server running on host "10.68.73.5" and accepting
        TCP/IP connections on port 5432?
---

Detailed network configuration: PostgreSQL [Master = 10.68.73.5,
Slave = 10.68.73.1]; OpenVPN [Server = 10.68.73.1;
Client=10.68.73.5; both static IP]

I assume that the OpenVPN got disconnected for a few seconds, and
came back again.

My question is: assuming I have enough wal segments on the Master side,
does the Slave get synchronized automatically after the connection
is reestablished, or will I need to restart Slave PostgreSQL to put it
in sync again?

If I restart Slave PostgreSQL, I get:

--
2011-11-30 08:01:09 BRST % - LOG:  received fast shutdown request
2011-11-30 08:01:09 BRST % - FATAL:  terminating walreceiver process
due to administrator command
2011-11-30 08:01:09 BRST % - LOG:  shutting down
2011-11-30 08:01:09 BRST % - LOG:  database system is shut down
2011-11-30 08:01:18 BRST % - LOG:  database system was shut down in
recovery at 2011-11-30 08:01:09 BRST
2011-11-30 08:01:18 BRST % - LOG:  entering standby mode
2011-11-30 08:01:18 BRST % - LOG:  redo starts at A/420
2011-11-30 08:01:18 BRST % - LOG:  record with zero length at
A/4B0
--

Thanks for your help,
-- 
Edson Carlos Ericksson Richter
SimKorp Informática Ltda
Fone: (51) 3366-7964
Celular: (51) 9318-9766 / (51) 8585-0796



[GENERAL] Strange problem with turning WAL archiving on

2011-11-30 Thread BK
Hello,

I've spent a couple of hours trying some WAL archiving functionality on 
PostgreSQL 9.1 (running on Mac OS X). I turned on all the needed options as 
specified in the documentation:

wal_level = archive
archive_mode = on 
archive_command='test ! -f /Volumes/baza/%f && cp %p /Volumes/baza/%f' 

I also tried different archive commands, just to see if this is the case, but 
every time I try starting a backup (as postgres user) I get the following error:

tester=# select pg_start_backup('h');
ERROR:  WAL level not sufficient for making an online backup
HINT:  wal_level must be set to "archive" or "hot_standby" at server start.

The postgresql.conf file has all the changes I mentioned above and it was of 
course rebooted after the changes. I cannot find anything online regarding this 
issue; it seems a bit strange, as all the configuration looks all right, but 
the error is still here.

Thanks in advance,
BK


[GENERAL] : pg_compresslog (pglesslog)

2011-11-30 Thread Venkat Balaji
Hello Everyone,

Can someone please let me know if there exists a "pglesslog" version for
PG 9.0.

I only see a beta version (1.4.2) for PG 9.0, released some time ago.

Is anyone using "pg_lesslog_1.4.2_pg90_beta.tar" with PG 9.0 in production
successfully?

Can we use the above-mentioned version in production?

Please help!

Thanks
VB


Re: [GENERAL] DDL & DML Logging doesn't work for calling functions

2011-11-30 Thread Albe Laurenz
MURAT KOÇ wrote:
> Version is PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc 
> (GCC) 4.1.2 20080704 (Red
> Hat 4.1.2-51), 64-bit.
> 
> We set logging parameters as below for DDL & DML Logging:
> logging_collector = on
> log_statement = mod
> log_line_prefix = '%t--%d--%u--%h--%a--%i--%e'

> Logging works successfully while we run direct DML commands like "insert, 
> update, delete".
> 
> But, when I call a function that does DML, logging doesn't work and server 
> log file has no information
> about calling function.
> 
> I call function like this: SELECT p_dummy_insert();
> 
> ###This is sample insert function###
> CREATE OR REPLACE FUNCTION p_dummy_insert ()
>   RETURNS void AS
> $BODY$
> BEGIN
>  INSERT INTO employee values ('dummy', 'test');
> END$BODY$
>   LANGUAGE plpgsql VOLATILE
>   COST 100;
> 
> "Not logging of function calls" is expected behavior or a bug? We have no 
> information on server logs
> about who called function or when was it called or what did called function 
> do?

The function call itself is logged, but SQL statements inside
the function are not.

The function call does not show up in your log because it is
in a SELECT statement.

Set log_statement = 'all' to log the function call.

Yours,
Laurenz Albe
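
If log_statement = 'all' is not an option, a hedged workaround sketch is to
log from inside the function itself, which is recorded regardless of the
log_statement setting:

CREATE OR REPLACE FUNCTION p_dummy_insert ()
  RETURNS void AS
$BODY$
BEGIN
 -- RAISE LOG always reaches the server log with default settings
 RAISE LOG 'p_dummy_insert called by %', current_user;
 INSERT INTO employee values ('dummy', 'test');
END$BODY$
  LANGUAGE plpgsql VOLATILE;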



Re: [GENERAL] odbc_fdw

2011-11-30 Thread Albe Laurenz
Florian Schwendener wrote:
> Thank you for your help. As I know little about Linux and only a bit
> about make files, I really don't know if
> I'm doing the right thing. I've typed this:
> 
> root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
> PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 make
> make: *** No rule to make target `make'.  Stop.
> 
> You see, I haven't come far... Am I doing something wrong?

Yes, there are two "make"s in your line. Try

PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1

Yours,
Laurenz Albe



Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Heiko Wundram

On 30.11.2011 09:26, Magnus Hagander wrote:

I don't believe we do teardown using PAM, just session start. So you'd
have to have your PAM module check the current state of postgresql
every time - not keep some internal state.


Okay, that's too bad - if connlimit doesn't do the trick, I'll try and 
see how PAM is used, and possibly patch the respective session 
teardown functionality into the server (which shouldn't be too hard, I 
guess).



FWIW, another option for writing your authentication module is to
write a simple RADIUS server running on the same box. It's pretty
trivial to do, especially in a high level language. The end result is
the same as if you use PAM - you get custom authentication that can
apply specific checks.


I'm much more used to writing PAM modules (which I've already done for 
authentication used by an FTP-server), so that'd be my first route to 
go, but keeping this in mind is handy, too. Thanks!


--
--- Heiko.



Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Magnus Hagander
On Wed, Nov 30, 2011 at 09:23, Heiko Wundram  wrote:
> Am 29.11.2011 23:49, schrieb Tom Lane:
>>
>> Another way that we've sometimes recommended people handle custom login
>> restrictions is
>> (1) use PAM for authentication
>> (2) find or write a PAM plugin that makes the kind of check you want
>
> Very interesting - I'll first try the connlimit approach hinted at by Filip,
> but if PostgreSQL does normal session setup/teardown using PAM (I've never
> used PAM authentication for PostgreSQL before), this should be a workable
> solution in case using iptables doesn't turn out to properly handle
> disconnected sessions quickly enough.

I don't believe we do teardown using PAM, just session start. So you'd
have to have your PAM module check the current state of postgresql
every time - not keep some internal state.

FWIW, another option for writing your authentication module is to
write a simple RADIUS server running on the same box. It's pretty
trivial to do, especially in a high level language. The end result is
the same as if you use PAM - you get custom authentication that can
apply specific checks.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Heiko Wundram

On 29.11.2011 23:49, Tom Lane wrote:

Another way that we've sometimes recommended people handle custom login
restrictions is
(1) use PAM for authentication
(2) find or write a PAM plugin that makes the kind of check you want


Very interesting - I'll first try the connlimit approach hinted at by 
Filip, but if PostgreSQL does normal session setup/teardown using PAM 
(I've never used PAM authentication for PostgreSQL before), this should 
be a workable solution in case using iptables doesn't turn out to 
properly handle disconnected sessions quickly enough.


Thanks for pointing me at the PAM!

--
--- Heiko.



Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-11-30 Thread Heiko Wundram

On 29.11.2011 23:44, Filip Rembiałkowski wrote:

did you look at connlimit?
http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit
AFAIK, it applies only to ESTABLISHED state, so maybe it suits you.


No, I didn't, and THANKS! That's exactly the hint I needed. I tried to 
use the hashlimit and/or recent matches with high burst rates and low 
limits, but that didn't work, and that's what I was hinting at.



I'm not sure how you want to allow "many connections being created
almost at once" and "limit number of connections from same IP" at the
same time?


The intention being that I'm trying to limit the total number of 
connections per client to something around 20; the behaviour of the 
clients is such that they create 10-15 connections in a very short burst 
(due to threaded accesses to the database), and (should) disconnect all 
of these in an interval of 10 minutes. When a client runs amok (which 
I've had twice this week), the batch of connections is not disconnected, 
and a single client gradually starts eating up all connections to the 
database, and thus hinders other clients from functioning. But: using 
connlimit should do the trick. I'll try that out immediately.
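
A sketch of such a rule, using the numbers from above (the mask and the
REJECT action are choices to adjust):

# cap each source IP at 20 concurrent connections to port 5432
iptables -A INPUT -p tcp --syn --dport 5432 \
         -m connlimit --connlimit-above 20 --connlimit-mask 32 \
         -j REJECT --reject-with tcp-reset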


Thanks again!

--
--- Heiko.
